torch.backends¶
torch.backends controls the behavior of various backends that PyTorch supports.
These backends include:
torch.backends.cuda
torch.backends.cudnn
torch.backends.mkl
torch.backends.mkldnn
torch.backends.openmp
torch.backends.cuda¶
torch.backends.cuda.is_built()[source]¶
Returns whether PyTorch is built with CUDA support. Note that this doesn’t necessarily mean CUDA is available; just that if this PyTorch binary were run on a machine with working CUDA drivers and devices, we would be able to use it.
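A short sketch of the distinction between a CUDA-enabled build and an actually usable CUDA runtime (`torch.cuda.is_available()` is from the broader PyTorch API, not this section):

```python
import torch

# True if this PyTorch binary was compiled with CUDA support;
# it does NOT imply a working GPU/driver is present right now.
built = torch.backends.cuda.is_built()

# By contrast, torch.cuda.is_available() also checks for a
# working driver and at least one visible CUDA device.
usable = torch.cuda.is_available()

print(f"compiled with CUDA: {built}, CUDA usable here: {usable}")
```

On a CPU-only wheel both values are False; on a CUDA wheel running without a GPU, `built` is True while `usable` is False.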
torch.backends.cuda.matmul.allow_tf32¶
A bool that controls whether TensorFloat-32 tensor cores may be used in matrix multiplications on Ampere or newer GPUs. See TensorFloat-32 (TF32) on Ampere devices.
torch.backends.cudnn¶
torch.backends.cudnn.is_available()[source]¶
Returns a bool indicating whether cuDNN is currently available.
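A quick availability check; `torch.backends.cudnn.version()` (part of the same module, though not listed above) reports the linked cuDNN version when present:

```python
import torch

# is_available() returns False on CPU-only builds or when the
# cuDNN library cannot be loaded.
if torch.backends.cudnn.is_available():
    print("cuDNN version:", torch.backends.cudnn.version())
else:
    print("cuDNN is not available in this environment")
```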
torch.backends.cudnn.allow_tf32¶
A bool that controls whether TensorFloat-32 tensor cores may be used in cuDNN convolutions on Ampere or newer GPUs. See TensorFloat-32 (TF32) on Ampere devices.
torch.backends.cudnn.deterministic¶
A bool that, if True, causes cuDNN to only use deterministic convolution algorithms. See also torch.is_deterministic() and torch.set_deterministic().
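A common reproducibility recipe built on this flag (the `benchmark` flag is another attribute of `torch.backends.cudnn`, not documented in the excerpt above; disabling it prevents cuDNN's autotuner from picking a potentially nondeterministic algorithm):

```python
import torch

# Force cuDNN to select only deterministic convolution algorithms,
# trading some speed for run-to-run reproducibility.
torch.backends.cudnn.deterministic = True

# Autotuning benchmarks several algorithms and may choose a
# nondeterministic one, so it is usually disabled together with
# the flag above.
torch.backends.cudnn.benchmark = False
```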
torch.backends.mkl¶
torch.backends.mkldnn¶