Compute the GeneralizedExtremeValue CDF.
Inherits From: AutoCompositeTensorBijector, Bijector
tfp.substrates.numpy.bijectors.GeneralizedExtremeValueCDF(
    loc=0.0,
    scale=1.0,
    concentration=0,
    validate_args=False,
    name='generalizedextremevalue_cdf'
)
Compute Y = g(X) = exp(-t(X)), where t(x) is defined to be:

- (1 + conc * (x - loc) / scale) ** (-1 / conc) when conc != 0;
- exp(-(x - loc) / scale) when conc = 0.
This bijector maps inputs from the domain to [0, 1], where the domain is

- [loc - scale/conc, inf) when conc > 0;
- (-inf, loc - scale/conc] when conc < 0;
- (-inf, inf) when conc = 0.
When concentration -> +-inf, the probability mass concentrates near loc.

The inverse of the bijector applied to a uniform random variable X ~ U(0, 1) gives back a random variable with the Generalized extreme value distribution:

Y ~ GeneralizedExtremeValueCDF(loc, scale, conc)
pdf(y; loc, scale, conc) = t(y; loc, scale, conc) ** (1 + conc) * exp(
  -t(y; loc, scale, conc)) / scale

where t(y) is defined to be:

- (1 + conc * (y - loc) / scale) ** (-1 / conc) when conc != 0;
- exp(-(y - loc) / scale) when conc = 0.
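As a concrete illustration, here is a minimal sketch of constructing the bijector and mapping values through the CDF and back; the import path and parameter values are illustrative, assuming the NumPy substrate of TFP is installed:

```python
import numpy as np
from tensorflow_probability.substrates import numpy as tfp

tfb = tfp.bijectors

# conc > 0 (Frechet-type tail): domain is [loc - scale/conc, inf) = [-2, inf).
gev_cdf = tfb.GeneralizedExtremeValueCDF(loc=0., scale=1., concentration=0.5)

x = np.array([0., 1., 5.], dtype=np.float32)
u = gev_cdf.forward(x)       # CDF values in [0, 1]
x_back = gev_cdf.inverse(u)  # should recover x up to floating-point error
```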
Methods
copy
copy(
**override_parameters_kwargs
)
Creates a copy of the bijector.
Args | |
---|---|
**override_parameters_kwargs | String/value dictionary of initialization arguments to override with new values. |
Returns | |
---|---|
bijector | A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs) . |
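A minimal sketch of copy, overriding a single constructor argument (the values are arbitrary):

```python
from tensorflow_probability.substrates import numpy as tfp

tfb = tfp.bijectors

base = tfb.GeneralizedExtremeValueCDF(loc=0., scale=1., concentration=0.5)

# Copy with `loc` overridden; `scale` and `concentration` carry over from `base`.
shifted = base.copy(loc=2.)
print(shifted.parameters['loc'])  # expected: 2.0
```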
experimental_batch_shape
experimental_batch_shape(
x_event_ndims=None, y_event_ndims=None
)
Returns the batch shape of this bijector for inputs of the given rank.
The batch shape of a bijector describes the set of distinct transformations it represents on events of a given size. For example: the bijector tfb.Scale([1., 2.])
has batch shape [2]
for scalar events (event_ndims = 0
), because applying it to a scalar event produces two scalar outputs, the result of two different scaling transformations. The same bijector has batch shape []
for vector events, because applying it to a vector produces (via elementwise multiplication) a single vector output.
Bijectors that operate independently on multiple state parts, such as tfb.JointMap
, must broadcast to a coherent batch shape. Some events may not be valid: for example, the bijector tfb.JointMap([tfb.Scale([1., 2.]), tfb.Scale([1., 2., 3.])])
does not produce a valid batch shape when event_ndims = [0, 0]
, since the batch shapes of the two parts are inconsistent. The same bijector does define valid batch shapes of []
, [2]
, and [3]
if event_ndims
is [1, 1]
, [0, 1]
, or [1, 0]
, respectively.
Since transforming a single event produces a scalar log-det-Jacobian, the batch shape of a bijector with non-constant Jacobian is expected to equal the shape of forward_log_det_jacobian(x, event_ndims=x_event_ndims)
or inverse_log_det_jacobian(y, event_ndims=y_event_ndims)
, for x
or y
of the specified ndims
.
Args | |
---|---|
x_event_ndims | Optional Python int (structure) number of dimensions in a probabilistic event passed to forward ; this must be greater than or equal to self.forward_min_event_ndims . If None , defaults to self.forward_min_event_ndims . Mutually exclusive with y_event_ndims . Default value: None . |
y_event_ndims | Optional Python int (structure) number of dimensions in a probabilistic event passed to inverse ; this must be greater than or equal to self.inverse_min_event_ndims . Mutually exclusive with x_event_ndims . Default value: None . |
Returns | |
---|---|
batch_shape | TensorShape batch shape of this bijector for a value with the given event rank. May be unknown or partially defined. |
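For example, a sketch with a batch of concentrations, assuming a recent TFP release in which this experimental method is available (the expected shapes are stated in comments, not asserted by the library):

```python
from tensorflow_probability.substrates import numpy as tfp

tfb = tfp.bijectors

# Three concentration values => three distinct scalar transformations.
bij = tfb.GeneralizedExtremeValueCDF(
    loc=0., scale=1., concentration=[0.1, 0.5, 1.0])

print(bij.experimental_batch_shape(x_event_ndims=0))  # expected: [3]
print(bij.experimental_batch_shape(x_event_ndims=1))  # expected: []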
experimental_batch_shape_tensor
experimental_batch_shape_tensor(
x_event_ndims=None, y_event_ndims=None
)
Returns the batch shape of this bijector for inputs of the given rank.
The batch shape of a bijector describes the set of distinct transformations it represents on events of a given size. For example: the bijector tfb.Scale([1., 2.])
has batch shape [2]
for scalar events (event_ndims = 0
), because applying it to a scalar event produces two scalar outputs, the result of two different scaling transformations. The same bijector has batch shape []
for vector events, because applying it to a vector produces (via elementwise multiplication) a single vector output.
Bijectors that operate independently on multiple state parts, such as tfb.JointMap
, must broadcast to a coherent batch shape. Some events may not be valid: for example, the bijector tfb.JointMap([tfb.Scale([1., 2.]), tfb.Scale([1., 2., 3.])])
does not produce a valid batch shape when event_ndims = [0, 0]
, since the batch shapes of the two parts are inconsistent. The same bijector does define valid batch shapes of []
, [2]
, and [3]
if event_ndims
is [1, 1]
, [0, 1]
, or [1, 0]
, respectively.
Since transforming a single event produces a scalar log-det-Jacobian, the batch shape of a bijector with non-constant Jacobian is expected to equal the shape of forward_log_det_jacobian(x, event_ndims=x_event_ndims)
or inverse_log_det_jacobian(y, event_ndims=y_event_ndims)
, for x
or y
of the specified ndims
.
Args | |
---|---|
x_event_ndims | Optional Python int (structure) number of dimensions in a probabilistic event passed to forward ; this must be greater than or equal to self.forward_min_event_ndims . If None , defaults to self.forward_min_event_ndims . Mutually exclusive with y_event_ndims . Default value: None . |
y_event_ndims | Optional Python int (structure) number of dimensions in a probabilistic event passed to inverse ; this must be greater than or equal to self.inverse_min_event_ndims . Mutually exclusive with x_event_ndims . Default value: None . |
Returns | |
---|---|
batch_shape_tensor | integer Tensor batch shape of this bijector for a value with the given event rank. |
experimental_compute_density_correction
experimental_compute_density_correction(
x, tangent_space, backward_compat=False, **kwargs
)
Density correction for this transformation wrt the tangent space, at x.
Subclasses of Bijector may call the most specific applicable method of TangentSpace
, based on whether the transformation is dimension-preserving, coordinate-wise, a projection, or something more general. The backward-compatible assumption is that the transformation is dimension-preserving (goes from R^n to R^n).
Args | |
---|---|
x | Tensor (structure). The point at which to calculate the density. |
tangent_space | TangentSpace or one of its subclasses. The tangent to the support manifold at x . |
backward_compat | bool specifying whether to assume that the Bijector is dimension-preserving. |
**kwargs | Optional keyword arguments forwarded to tangent space methods. |
Returns | |
---|---|
density_correction | Tensor representing the density correction---in log space---under the transformation that this Bijector denotes. |
Raises | |
---|---|
TypeError | if backward_compat is False but no method of TangentSpace has been called explicitly. |
forward
forward(
x, name='forward', **kwargs
)
Returns the forward Bijector evaluation, i.e., Y = g(X).
Args | |
---|---|
x | Tensor (structure). The input to the 'forward' evaluation. |
name | The name to give this op. |
**kwargs | Named arguments forwarded to subclass implementation. |
Returns | |
---|---|
Tensor (structure). |
Raises | |
---|---|
TypeError | if self.dtype is specified and x.dtype is not self.dtype . |
NotImplementedError | if _forward is not implemented. |
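A sketch checking forward against the closed-form CDF given above; the tolerance check and parameter values are illustrative:

```python
import numpy as np
from tensorflow_probability.substrates import numpy as tfp

tfb = tfp.bijectors

loc, scale, conc = 0., 1., 0.3
bij = tfb.GeneralizedExtremeValueCDF(loc=loc, scale=scale, concentration=conc)

x = np.array([0.25, 1.5], dtype=np.float32)
t = (1. + conc * (x - loc) / scale) ** (-1. / conc)

# forward(x) is exp(-t(x)), the GEV CDF.
print(np.allclose(bij.forward(x), np.exp(-t)))  # expected: True
```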
forward_dtype
forward_dtype(
dtype=UNSPECIFIED, name='forward_dtype', **kwargs
)
Returns the dtype returned by forward
for the provided input.
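A short sketch, assuming float32 parameters (the default for Python float literals); the printed dtypes are expectations, not guarantees of the exact repr:

```python
import numpy as np
from tensorflow_probability.substrates import numpy as tfp

tfb = tfp.bijectors

bij = tfb.GeneralizedExtremeValueCDF(loc=0., scale=1., concentration=0.5)

# With no argument, reports the dtype implied by the bijector's parameters.
print(bij.forward_dtype())            # expected: float32
# With an input dtype, reports the dtype forward would return for that input.
print(bij.forward_dtype(np.float32))  # expected: float32
```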
forward_event_ndims
forward_event_ndims(
event_ndims, **kwargs
)
Returns the number of event dimensions produced by forward
.
Args | |
---|---|
event_ndims | Structure of Python and/or Tensor int s, and/or None values. The structure should match that of self.forward_min_event_ndims , and all non-None values must be greater than or equal to the corresponding value in self.forward_min_event_ndims . |
**kwargs | Optional keyword arguments forwarded to nested bijectors. |
Returns | |
---|---|
forward_event_ndims | Structure of integers and/or None values matching self.inverse_min_event_ndims . These are computed using 'prefer static' semantics: if any inputs are None , some or all of the outputs may be None , indicating that the output dimension could not be inferred (conversely, if all inputs are non-None , all outputs will be non-None ). If all input event_ndims are Python int s, all of the (non-None ) outputs will be Python int s; otherwise, some or all of the outputs may be Tensor int s. |
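For this scalar (elementwise) bijector the event rank passes through unchanged; a minimal sketch:

```python
from tensorflow_probability.substrates import numpy as tfp

tfb = tfp.bijectors

bij = tfb.GeneralizedExtremeValueCDF(loc=0., scale=1., concentration=0.5)

print(bij.forward_event_ndims(0))  # expected: 0 (forward_min_event_ndims is 0)
print(bij.forward_event_ndims(2))  # expected: 2 (extra dimensions pass through)
```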
forward_event_shape
forward_event_shape(
input_shape
)
Shape of a single sample from a single batch as a TensorShape
.
Same meaning as forward_event_shape_tensor
. May be only partially defined.
Args | |
---|---|
input_shape | TensorShape (structure) indicating event-portion shape passed into forward function. |
Returns | |
---|---|
forward_event_shape_tensor | TensorShape (structure) indicating event-portion shape after applying forward . Possibly unknown. |
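A minimal sketch; since the bijector acts elementwise, the event-portion shape is unchanged (expected values shown in comments):

```python
from tensorflow_probability.substrates import numpy as tfp

tfb = tfp.bijectors

bij = tfb.GeneralizedExtremeValueCDF(loc=0., scale=1., concentration=0.5)

print(bij.forward_event_shape([5]))     # expected: [5]
print(bij.forward_event_shape([2, 3]))  # expected: [2, 3]
```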
forward_event_shape_tensor
forward_event_shape_tensor(
input_shape, name='forward_event_shape_tensor'
)
Shape of a single sample from a single batch as an int32
1D Tensor
.
Args | |
---|---|
input_shape | Tensor , int32 vector (structure) indicating event-portion shape passed into forward function. |
name | name to give to the op |
Returns | |
---|---|
forward_event_shape_tensor | Tensor , int32 vector (structure) indicating event-portion shape after applying forward . |
forward_log_det_jacobian
forward_log_det_jacobian(
x, event_ndims=None, name='forward_log_det_jacobian', **kwargs
)
Returns the forward log det Jacobian, i.e., log(det(dY/dX))(X). (Recall that: Y=g(X).)
Args | |
---|---|
x | Tensor (structure). The input to the 'forward' Jacobian determinant evaluation. |
event_ndims | Optional number of dimensions in the probabilistic events being transformed; this must be greater than or equal to self.forward_min_event_ndims . If event_ndims is specified, the log Jacobian determinant is summed to produce a scalar log-determinant for each event. Otherwise (if event_ndims is None ), no reduction is performed. Multipart bijectors require structured event_ndims, such that the batch rank rank(y[i]) - event_ndims[i] is the same for all elements i of the structured input. In most cases (with the exception of tfb.JointMap ) they further require that event_ndims[i] - self.inverse_min_event_ndims[i] is the same for all elements i of the structured input. Default value: None (equivalent to self.forward_min_event_ndims ). |
name | The name to give this op. |
**kwargs | Named arguments forwarded to subclass implementation. |
Returns | |
---|---|
Tensor (structure), if this bijector is injective. If not injective this is not implemented. |
Raises | |
---|---|
TypeError | if y 's dtype is incompatible with the expected output dtype. |
NotImplementedError | if neither _forward_log_det_jacobian nor {_inverse , _inverse_log_det_jacobian } are implemented, or this is a non-injective bijector. |
ValueError | if the value of event_ndims is not valid for this bijector. |
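Since forward is the GEV CDF, its derivative is the GEV density, so the forward log-det-Jacobian at a point should match the GeneralizedExtremeValue log-pdf. A hedged sketch of that check, assuming matching parameters for the distribution:

```python
import numpy as np
from tensorflow_probability.substrates import numpy as tfp

tfb = tfp.bijectors
tfd = tfp.distributions

bij = tfb.GeneralizedExtremeValueCDF(loc=0., scale=1., concentration=0.5)
dist = tfd.GeneralizedExtremeValue(loc=0., scale=1., concentration=0.5)

x = np.array([0.5, 2.0], dtype=np.float32)

fldj = bij.forward_log_det_jacobian(x, event_ndims=0)  # per-element log|dY/dX|
print(np.allclose(fldj, dist.log_prob(x)))             # expected: True

# With event_ndims=1, the per-element terms are summed over the last axis.
print(bij.forward_log_det_jacobian(x, event_ndims=1).shape)  # expected: ()
```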
inverse
inverse(
y, name='inverse', **kwargs
)
Returns the inverse Bijector
evaluation, i.e., X = g^{-1}(Y).
Args | |
---|---|
y | Tensor (structure). The input to the 'inverse' evaluation. |
name | The name to give this op. |
**kwargs | Named arguments forwarded to subclass implementation. |
Returns | |
---|---|
Tensor (structure), if this bijector is injective. If not injective, returns the k-tuple containing the unique k points (x1, ..., xk) such that g(xi) = y . |
Raises | |
---|---|
TypeError | if y 's structured dtype is incompatible with the expected output dtype. |
NotImplementedError | if _inverse is not implemented. |
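A sketch of the quantile-function view of inverse: feeding uniform draws through it yields (approximately) GEV-distributed samples. Sample size and seed are arbitrary:

```python
import numpy as np
from tensorflow_probability.substrates import numpy as tfp

tfb = tfp.bijectors

bij = tfb.GeneralizedExtremeValueCDF(loc=0., scale=1., concentration=0.5)

rng = np.random.default_rng(seed=0)
u = rng.uniform(size=10_000).astype(np.float32)

samples = bij.inverse(u)      # GEV(0, 1, 0.5)-distributed draws
print(samples.min() >= -2.0)  # expected: True; support is [loc - scale/conc, inf)
```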
inverse_dtype
inverse_dtype(
dtype=UNSPECIFIED, name='inverse_dtype', **kwargs
)
Returns the dtype returned by inverse
for the provided input.
inverse_event_ndims
inverse_event_ndims(
event_ndims, **kwargs
)
Returns the number of event dimensions produced by inverse
.
Args | |
---|---|
event_ndims | Structure of Python and/or Tensor int s, and/or None values. The structure should match that of self.inverse_min_event_ndims , and all non-None values must be greater than or equal to the corresponding value in self.inverse_min_event_ndims . |
**kwargs | Optional keyword arguments forwarded to nested bijectors. |
Returns | |
---|---|
inverse_event_ndims | Structure of integers and/or None values matching self.forward_min_event_ndims . These are computed using 'prefer static' semantics: if any inputs are None , some or all of the outputs may be None , indicating that the output dimension could not be inferred (conversely, if all inputs are non-None , all outputs will be non-None ). If all input event_ndims are Python int s, all of the (non-None ) outputs will be Python int s; otherwise, some or all of the outputs may be Tensor int s. |
inverse_event_shape
inverse_event_shape(
output_shape
)
Shape of a single sample from a single batch as a TensorShape
.
Same meaning as inverse_event_shape_tensor
. May be only partially defined.
Args | |
---|---|
output_shape | TensorShape (structure) indicating event-portion shape passed into inverse function. |
Returns | |
---|---|
inverse_event_shape_tensor | TensorShape (structure) indicating event-portion shape after applying inverse . Possibly unknown. |
inverse_event_shape_tensor
inverse_event_shape_tensor(
output_shape, name='inverse_event_shape_tensor'
)
Shape of a single sample from a single batch as an int32
1D Tensor
.
Args | |
---|---|
output_shape | Tensor , int32 vector (structure) indicating event-portion shape passed into inverse function. |
name | name to give to the op |
Returns | |
---|---|
inverse_event_shape_tensor | Tensor , int32 vector (structure) indicating event-portion shape after applying inverse . |
inverse_log_det_jacobian
inverse_log_det_jacobian(
y, event_ndims=None, name='inverse_log_det_jacobian', **kwargs
)
Returns the (log o det o Jacobian o inverse)(y).
Mathematically, returns: log(det(dX/dY))(Y)
. (Recall that: X=g^{-1}(Y)
.)
Note that forward_log_det_jacobian
is the negative of this function, evaluated at g^{-1}(y)
.
Args | |
---|---|
y | Tensor (structure). The input to the 'inverse' Jacobian determinant evaluation. |
event_ndims | Optional number of dimensions in the probabilistic events being transformed; this must be greater than or equal to self.inverse_min_event_ndims . If event_ndims is specified, the log Jacobian determinant is summed to produce a scalar log-determinant for each event. Otherwise (if event_ndims is None ), no reduction is performed. Multipart bijectors require structured event_ndims, such that the batch rank rank(y[i]) - event_ndims[i] is the same for all elements i of the structured input. In most cases (with the exception of tfb.JointMap ) they further require that event_ndims[i] - self.inverse_min_event_ndims[i] is the same for all elements i of the structured input. Default value: None (equivalent to self.inverse_min_event_ndims ). |
name | The name to give this op. |
**kwargs | Named arguments forwarded to subclass implementation. |
Returns | |
---|---|
ildj | Tensor , if this bijector is injective. If not injective, returns the tuple of local log det Jacobians, log(det(Dg_i^{-1}(y))) , where g_i is the restriction of g to the ith partition Di . |
Raises | |
---|---|
TypeError | if x 's dtype is incompatible with the expected inverse-dtype. |
NotImplementedError | if _inverse_log_det_jacobian is not implemented. |
ValueError | if the value of event_ndims is not valid for this bijector. |
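A sketch of the identity stated above, ildj(y) = -fldj(g^{-1}(y)), at arbitrary points in (0, 1):

```python
import numpy as np
from tensorflow_probability.substrates import numpy as tfp

tfb = tfp.bijectors

bij = tfb.GeneralizedExtremeValueCDF(loc=0., scale=1., concentration=0.5)

y = np.array([0.3, 0.7], dtype=np.float32)
ildj = bij.inverse_log_det_jacobian(y, event_ndims=0)
fldj_at_x = bij.forward_log_det_jacobian(bij.inverse(y), event_ndims=0)

print(np.allclose(ildj, -fldj_at_x))  # expected: True
```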
parameter_properties
@classmethod
parameter_properties( dtype=tf.float32 )
Returns a dict mapping constructor arg names to property annotations.
This dict should include an entry for each of the bijector's Tensor
-valued constructor arguments.
Args | |
---|---|
dtype | Optional float dtype to assume for continuous-valued parameters. Some constraining bijectors require advance knowledge of the dtype because certain constants (e.g., tfb.Softplus.low ) must be instantiated with the same dtype as the values to be transformed. |
Returns | |
---|---|
parameter_properties | A str -> tfp.python.internal.parameter_properties.ParameterProperties dict mapping constructor argument names to ParameterProperties instances. |
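A short sketch; the exact set of annotated parameters is whatever the class registers, but it is expected to cover the Tensor-valued constructor arguments:

```python
from tensorflow_probability.substrates import numpy as tfp

tfb = tfp.bijectors

props = tfb.GeneralizedExtremeValueCDF.parameter_properties()
print(sorted(props))  # expected to include 'concentration', 'loc', 'scale'
```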
__call__
__call__(
value, name=None, **kwargs
)
Applies or composes the Bijector
, depending on input type.
This is a convenience function which applies the Bijector
instance in three different ways, depending on the input:
- If the input is a tfd.Distribution instance, return tfd.TransformedDistribution(distribution=input, bijector=self).
- If the input is a tfb.Bijector instance, return tfb.Chain([self, input]).
- Otherwise, return self.forward(input).
Args | |
---|---|
value | A tfd.Distribution , tfb.Bijector , or a (structure of) Tensor . |
name | Python str name given to ops created by this function. |
**kwargs | Additional keyword arguments passed into the created tfd.TransformedDistribution , tfb.Bijector , or self.forward . |
Returns | |
---|---|
composition | A tfd.TransformedDistribution if the input was a tfd.Distribution , a tfb.Chain if the input was a tfb.Bijector , or a (structure of) Tensor computed by self.forward . |
Examples
sigmoid = tfb.Reciprocal()(
    tfb.Shift(shift=1.)(
        tfb.Exp()(
            tfb.Scale(scale=-1.))))
# ==> `tfb.Chain([
#        tfb.Reciprocal(),
#        tfb.Shift(shift=1.),
#        tfb.Exp(),
#        tfb.Scale(scale=-1.),
#      ])`  # ie, `tfb.Sigmoid()`

log_normal = tfb.Exp()(tfd.Normal(0, 1))
# ==> `tfd.TransformedDistribution(tfd.Normal(0, 1), tfb.Exp())`

tfb.Exp()([-1., 0., 1.])
# ==> tf.exp([-1., 0., 1.])
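Applied to this class, a natural hedged sketch is sampling from the GEV distribution by composing with tfb.Invert and calling the result on a Uniform distribution; the seed and sample size are arbitrary:

```python
from tensorflow_probability.substrates import numpy as tfp

tfb = tfp.bijectors
tfd = tfp.distributions

gev_cdf = tfb.GeneralizedExtremeValueCDF(loc=0., scale=1., concentration=0.5)

# Calling a bijector on a distribution wraps it in a TransformedDistribution;
# Invert(CDF) maps U(0, 1) draws to GEV draws.
gev_dist = tfb.Invert(gev_cdf)(tfd.Uniform(low=0., high=1.))
print(gev_dist.sample(3, seed=42))

# Calling it on a (structure of) Tensor just applies `forward`.
print(gev_cdf([0., 1., 2.]))
```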
__eq__
__eq__(
other
)
Return self==value.
__getitem__
__getitem__(
slices
)
__iter__
__iter__()