torch.random
torch.random.fork_rng(devices=None, enabled=True, _caller='fork_rng', _devices_kw='devices')

    Forks the RNG, so that when you return, the RNG is reset to the state that it was previously in. See the example below.

    Parameters:
        devices (iterable of CUDA IDs) – CUDA devices for which to fork the RNG. CPU RNG state is always forked. By default, fork_rng() operates on all devices, but will emit a warning if your machine has a lot of devices, since this function will run very slowly in that case. If you explicitly specify devices, this warning will be suppressed.
        enabled (bool) – if False, the RNG is not forked. This is a convenience argument for easily disabling the context manager without having to delete it and unindent your Python code under it.
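    Example (an illustrative sketch, not part of the original reference; output shown assumes the default CPU generator):

        >>> import torch
        >>> _ = torch.random.manual_seed(0)
        >>> expected = torch.rand(3)               # first draw after seeding
        >>> _ = torch.random.manual_seed(0)
        >>> with torch.random.fork_rng():
        ...     _ = torch.rand(1000)               # RNG consumption confined to the fork
        >>> torch.equal(torch.rand(3), expected)   # global state was restored on exit
        True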
torch.random.get_rng_state() → torch.Tensor

    Returns the random number generator state as a torch.ByteTensor.
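    Example (a minimal sketch, not from the original reference; it also uses torch.random.set_rng_state(), the companion setter not documented in this section, to restore the snapshot):

        >>> import torch
        >>> state = torch.random.get_rng_state()   # snapshot of the CPU RNG as a ByteTensor
        >>> a = torch.rand(2)
        >>> torch.random.set_rng_state(state)      # restore the snapshot
        >>> b = torch.rand(2)
        >>> torch.equal(a, b)                      # same state, same draw
        True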
torch.random.initial_seed() → int

    Returns the initial seed for generating random numbers as a Python long.
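    Example (a minimal sketch, not from the original reference):

        >>> import torch
        >>> _ = torch.random.manual_seed(42)
        >>> torch.random.initial_seed()            # the seed the default generator was last seeded with
        42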
torch.random.manual_seed(seed) → torch._C.Generator

    Sets the seed for generating random numbers. Returns a torch.Generator object. See the example below.

    Parameters:
        seed (int) – The desired seed. Value must be within the inclusive range [-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised. Negative inputs are remapped to positive values with the formula 0xffff_ffff_ffff_ffff + seed.
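    Example (a minimal sketch, not from the original reference): re-seeding with the same value reproduces the same sequence of draws, and a negative seed within the documented range is accepted.

        >>> import torch
        >>> _ = torch.random.manual_seed(2023)
        >>> x = torch.rand(2)
        >>> _ = torch.random.manual_seed(2023)     # same seed again
        >>> y = torch.rand(2)
        >>> torch.equal(x, y)                      # identical sequence
        True
        >>> _ = torch.random.manual_seed(-1)       # negative seeds are remapped to positive values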