Random-number generator.
tf.random.Generator(
copy_from=None, state=None, alg=None
)
Example:
Creating a generator from a seed:
g = tf.random.Generator.from_seed(1234)
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.9356609 , 1.0854305 , -0.93788373],
[-0.5061547 , 1.3169702 , 0.7137579 ]], dtype=float32)>
Creating a generator from a non-deterministic state:
g = tf.random.Generator.from_non_deterministic_state()
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=...>
All the constructors allow explicitly choosing a Random-Number-Generation (RNG) algorithm. Supported algorithms are "philox" and "threefry". For example:
g = tf.random.Generator.from_seed(123, alg="philox")
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.8673864 , -0.29899067, -0.9310337 ],
[-1.5828488 , 1.2481191 , -0.6770643 ]], dtype=float32)>
CPU, GPU and TPU with the same algorithm and seed will generate the same integer random numbers. Floating-point results (such as the output of normal) may have small numerical discrepancies between different devices.
This class uses a tf.Variable to manage its internal state. Every time random numbers are generated, the state of the generator will change. For example:
g = tf.random.Generator.from_seed(1234)
g.state
<tf.Variable ... numpy=array([1234, 0, 0])>
g.normal(shape=(2, 3))
<...>
g.state
<tf.Variable ... numpy=array([2770, 0, 0])>
The shape of the state is algorithm-specific.
There is also a global generator:
g = tf.random.get_global_generator()
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=...>
| Args | |
|---|---|
| copy_from | a generator to be copied from. |
| state | a vector of dtype STATE_TYPE representing the initial state of the RNG, whose length and semantics are algorithm-specific. If it's a variable, the generator will reuse it instead of creating a new variable. |
| alg | the RNG algorithm. Possible values are tf.random.Algorithm.PHILOX for the Philox algorithm and tf.random.Algorithm.THREEFRY for the ThreeFry algorithm (see the paper 'Parallel Random Numbers: As Easy as 1, 2, 3', https://www.thesalmons.org/john/random123/papers/random123sc11.pdf). The string names "philox" and "threefry" can also be used. Note PHILOX guarantees the same numbers are produced (given the same random state) across all architectures (CPU, GPU, XLA etc.). |
| Attributes | |
|---|---|
| algorithm | The RNG algorithm id (a Python integer or scalar integer Tensor). |
| key | The 'key' part of the state of a counter-based RNG. For a counter-based RNG algorithm such as Philox or ThreeFry (as described in the paper 'Parallel Random Numbers: As Easy as 1, 2, 3', https://www.thesalmons.org/john/random123/papers/random123sc11.pdf), the RNG state consists of two parts: counter and key. The output is generated via the formula output = hash(key, counter), i.e. a hashing of the counter parametrized by the key. Two RNGs with two different keys can be thought of as generating two independent random-number streams (a stream is formed by increasing the counter). |
| state | The internal state of the RNG. |
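For example, these attributes can be inspected directly on a generator (a minimal sketch; the exact values depend on the algorithm and seed):
g = tf.random.Generator.from_seed(1234)
g.algorithm   # the algorithm id
g.key         # the 'key' part of the state (a scalar Tensor)
g.state       # the full internal state variable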
Methods
binomial
binomial(
shape, counts, probs, dtype=tf.dtypes.int32, name=None
)
Outputs random values from a binomial distribution.
The generated values follow a binomial distribution with specified count and probability of success parameters.
Example:
counts = [10., 20.]
# Probability of success.
probs = [0.8]
rng = tf.random.Generator.from_seed(seed=234)
binomial_samples = rng.binomial(shape=[2], counts=counts, probs=probs)
counts = ... # Shape [3, 1, 2]
probs = ... # Shape [1, 4, 2]
shape = [3, 4, 3, 4, 2]
rng = tf.random.Generator.from_seed(seed=1717)
# Sample shape will be [3, 4, 3, 4, 2]
binomial_samples = rng.binomial(shape=shape, counts=counts, probs=probs)
| Args | |
|---|---|
| shape | A 1-D integer Tensor or Python array. The shape of the output tensor. |
| counts | Tensor. The counts of the binomial distribution. Must be broadcastable with probs, and broadcastable with the rightmost dimensions of shape. |
| probs | Tensor. The probability of success for the binomial distribution. Must be broadcastable with counts and broadcastable with the rightmost dimensions of shape. |
| dtype | The type of the output. Default: tf.int32 |
| name | A name for the operation (optional). |
| Returns | |
|---|---|
| samples | A Tensor of the specified shape filled with random binomial values. For each i, each samples[i, ...] is an independent draw from the binomial distribution on counts[i] trials with probability of success probs[i]. |
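As a minimal sketch of the broadcasting behavior described above (the counts and probs values here are arbitrary illustrative choices, not from the original documentation):
counts = [10., 20.]   # shape [2]
probs = [0.3, 0.8]    # shape [2], arbitrary example values
rng = tf.random.Generator.from_seed(seed=42)
# shape's rightmost dimension (2) broadcasts against counts and probs, so
# samples has shape [3, 2]: column 0 ~ Binomial(10, 0.3), column 1 ~ Binomial(20, 0.8).
samples = rng.binomial(shape=[3, 2], counts=counts, probs=probs)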
from_key_counter
@classmethod
from_key_counter( key, counter, alg )
Creates a generator from a key and a counter.
This constructor only applies if the algorithm is a counter-based algorithm.
See the key attribute for the meaning of "key" and "counter".
| Args | |
|---|---|
| key | the key for the RNG, a scalar of type STATE_TYPE. |
| counter | a vector of dtype STATE_TYPE representing the initial counter for the RNG, whose length is algorithm-specific. |
| alg | the RNG algorithm. If None, it will be auto-selected. See __init__ for its possible values. |
| Returns |
|---|
| The new generator. |
Throws:
ValueError: if the generator is created inside a synchronous tf.distribute strategy such as MirroredStrategy or TPUStrategy, because there is ambiguity on how to replicate a generator (e.g. should it be copied so that each replica gets the same random numbers, or should it be "split" into different generators that generate different random numbers).
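A minimal sketch, assuming the Philox algorithm (whose counter part is a length-2 int64 vector); the key and counter values here are arbitrary:
g = tf.random.Generator.from_key_counter(
    key=123, counter=[0, 0], alg="philox")  # arbitrary key; counter length assumes Philox
g.normal(shape=(2, 3))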
from_non_deterministic_state
@classmethod
from_non_deterministic_state( alg=None )
Creates a generator by non-deterministically initializing its state.
The source of the non-determinism will be platform- and time-dependent.
| Args | |
|---|---|
| alg | (optional) the RNG algorithm. If None, it will be auto-selected. See __init__ for its possible values. |
| Returns |
|---|
| The new generator. |
Throws:
ValueError: if the generator is created inside a synchronous tf.distribute strategy such as MirroredStrategy or TPUStrategy, because there is ambiguity on how to replicate a generator (e.g. should it be copied so that each replica gets the same random numbers, or should it be "split" into different generators that generate different random numbers).
from_seed
@classmethod
from_seed( seed, alg=None )
Creates a generator from a seed.
A seed is a 1024-bit unsigned integer represented either as a Python integer or a vector of integers. Seeds shorter than 1024 bits will be padded. The padding, the internal structure of a seed and the way a seed is converted to a state are all opaque (unspecified). The only semantic specification of seeds is that two different seeds are likely to produce two independent generators (but there is no guarantee).
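For example, the seed may be given as a single Python integer or as a vector of integers (the values below are arbitrary):
g1 = tf.random.Generator.from_seed(1234)          # seed as a Python integer
g2 = tf.random.Generator.from_seed([1, 2, 3, 4])  # seed as a vector of integers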
| Args | |
|---|---|
| seed | the seed for the RNG. |
| alg | (optional) the RNG algorithm. If None, it will be auto-selected. See __init__ for its possible values. |
| Returns |
|---|
| The new generator. |
Throws:
ValueError: if the generator is created inside a synchronous tf.distribute strategy such as MirroredStrategy or TPUStrategy, because there is ambiguity on how to replicate a generator (e.g. should it be copied so that each replica gets the same random numbers, or should it be "split" into different generators that generate different random numbers).
from_state
@classmethod
from_state( state, alg )
Creates a generator from a state.
See __init__ for the description of state and alg.
| Args | |
|---|---|
| state | the new state. |
| alg | the RNG algorithm. |
| Returns |
|---|
| The new generator. |
Throws:
ValueError: if the generator is created inside a synchronous tf.distribute strategy such as MirroredStrategy or TPUStrategy, because there is ambiguity on how to replicate a generator (e.g. should it be copied so that each replica gets the same random numbers, or should it be "split" into different generators that generate different random numbers).
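A minimal sketch, assuming the Philox algorithm, whose state is a length-3 vector of int64 (the state values below are arbitrary):
state = tf.constant([1, 2, 3], dtype=tf.int64)  # arbitrary Philox-style state
g = tf.random.Generator.from_state(state, alg="philox")
g.normal(shape=(2, 3))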
make_seeds
make_seeds(
count=1
)
Generates seeds for stateless random ops.
For example:
seeds = tf.random.get_global_generator().make_seeds(count=10)
for i in range(10):
  seed = seeds[:, i]
  numbers = tf.random.stateless_normal(shape=[2, 3], seed=seed)
  ...
| Args | |
|---|---|
| count | the number of seed pairs (note that stateless random ops need a pair of seeds to invoke). |
| Returns |
|---|
| A tensor of shape [2, count] and dtype int64. |
normal
normal(
shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, name=None
)
Outputs random values from a normal distribution.
| Args | |
|---|---|
| shape | A 1-D integer Tensor or Python array. The shape of the output tensor. |
| mean | A 0-D Tensor or Python value of type dtype. The mean of the normal distribution. |
| stddev | A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution. |
| dtype | The type of the output. |
| name | A name for the operation (optional). |
| Returns |
|---|
| A tensor of the specified shape filled with random normal values. |
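For example (a minimal sketch; the sampled values depend on the seed):
g = tf.random.Generator.from_seed(1)
g.normal(shape=[2, 3], mean=10.0, stddev=2.0)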
reset
reset(
state
)
Resets the generator with a new state.
See __init__ for the meaning of "state".
| Args | |
|---|---|
| state | the new state. |
reset_from_key_counter
reset_from_key_counter(
key, counter
)
Resets the generator with a new key-counter pair.
See from_key_counter for the meaning of "key" and "counter".
| Args | |
|---|---|
| key | the new key. |
| counter | the new counter. |
reset_from_seed
reset_from_seed(
seed
)
Resets the generator with a new seed.
See from_seed for the meaning of "seed".
| Args | |
|---|---|
| seed | the new seed. |
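For example, resetting with the original seed replays the same sequence (a minimal sketch):
g = tf.random.Generator.from_seed(1234)
a = g.normal(shape=[3])
g.reset_from_seed(1234)   # rewind to the initial state for seed 1234
b = g.normal(shape=[3])
# a and b contain the same values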
skip
skip(
delta
)
Advances the counter of a counter-based RNG.
| Args | |
|---|---|
| delta | the amount of advancement. The state of the RNG after skip(n) will be the same as that after normal([n]) (or any other distribution). The actual increment added to the counter is an unspecified implementation detail. |
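For example, skipping by n leaves the generator in the same state as drawing n numbers would (a minimal sketch):
g1 = tf.random.Generator.from_seed(1)
g2 = tf.random.Generator.from_seed(1)
g1.normal(shape=[5])   # draw 5 numbers
g2.skip(5)             # advance the counter as if 5 numbers had been drawn
# g1.state and g2.state are now equal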
split
split(
count=1
)
Returns a list of independent Generator objects.
Two generators are independent of each other in the sense that the random-number streams they generate don't have statistically detectable correlations. The new generators are also independent of the old one. The old generator's state will be changed (like other random-number generating methods), so two calls of split will return different new generators.
For example:
gens = tf.random.get_global_generator().split(count=10)
for gen in gens:
  numbers = gen.normal(shape=[2, 3])
  # ...
gens2 = tf.random.get_global_generator().split(count=10)
# gens2 will be different from gens
The new generators will be put on the current device (possibly different from the old generator's device), for example:
with tf.device("/device:CPU:0"):
  gen = tf.random.Generator.from_seed(1234)  # gen is on CPU
with tf.device("/device:GPU:0"):
  gens = gen.split(count=10)  # gens are on GPU
| Args | |
|---|---|
| count | the number of generators to return. |
| Returns |
|---|
| A list (length count) of Generator objects independent of each other. The new generators have the same RNG algorithm as the old one. |
truncated_normal
truncated_normal(
shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, name=None
)
Outputs random values from a truncated normal distribution.
The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
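For example (a minimal sketch; every sampled value lies within two standard deviations of the mean because out-of-range values are re-drawn):
g = tf.random.Generator.from_seed(42)
samples = g.truncated_normal(shape=[4], mean=0.0, stddev=1.0)
# all values in samples lie in (-2, 2)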
| Args | |
|---|---|
| shape | A 1-D integer Tensor or Python array. The shape of the output tensor. |
| mean | A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution. |
| stddev | A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution, before truncation. |
| dtype | The type of the output. |
| name | A name for the operation (optional). |
| Returns |
|---|
| A tensor of the specified shape filled with random truncated normal values. |
uniform
uniform(
shape, minval=0, maxval=None, dtype=tf.dtypes.float32, name=None
)
Outputs random values from a uniform distribution.
The generated values follow a uniform distribution in the range
[minval, maxval)
. The lower bound minval
is included in the range, while
the upper bound maxval
is excluded. (For float numbers especially
low-precision types like bfloat16, because of
rounding, the result may sometimes include maxval
.)
For floats, the default range is [0, 1). For ints, at least maxval must be specified explicitly.
In the integer case, the random integers are slightly biased unless maxval - minval is an exact power of two. The bias is small for values of maxval - minval significantly smaller than the range of the output (either 2**32 or 2**64).
For full-range random integers, pass minval=None and maxval=None with an integer dtype (for integer dtypes, minval and maxval must be both None or both not None).
| Args | |
|---|---|
| shape | A 1-D integer Tensor or Python array. The shape of the output tensor. |
| minval | A Tensor or Python value of type dtype, broadcastable with shape (for integer types, broadcasting is not supported, so it needs to be a scalar). The lower bound (included) on the range of random values to generate. Pass None for full-range integers. Defaults to 0. |
| maxval | A Tensor or Python value of type dtype, broadcastable with shape (for integer types, broadcasting is not supported, so it needs to be a scalar). The upper bound (excluded) on the range of random values to generate. Pass None for full-range integers. Defaults to 1 if dtype is floating point. |
| dtype | The type of the output. |
| name | A name for the operation (optional). |
| Returns |
|---|
| A tensor of the specified shape filled with random uniform values. |
Raises | |
---|---|
ValueError
|
If dtype is integral and maxval is not specified.
|
uniform_full_int
uniform_full_int(
shape, dtype=tf.dtypes.uint64, name=None
)
Uniform distribution on an integer type's entire range.
This method is the same as setting minval and maxval to None in the uniform method.
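For example (a minimal sketch; the uint32 dtype is assumed here to be among the supported integer types):
g = tf.random.Generator.from_seed(3)
g.uniform_full_int(shape=[2])                   # uniform over the full uint64 range (default dtype)
g.uniform_full_int(shape=[2], dtype=tf.uint32)  # uniform over the full uint32 range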
| Args | |
|---|---|
| shape | the shape of the output. |
| dtype | (optional) the integer type, default to uint64. |
| name | (optional) the name of the node. |
| Returns |
|---|
| A tensor of random numbers of the required shape. |