TensorFlow Probability 0.12.1
Release notes
This is the 0.12.1 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.4.0.
Change notes
NOTE: Links point to examples in the TFP 0.12.1 release Colab.
Bijectors:
- Add implementation of GLOW at `tfp.bijectors.Glow`.
- Add `RayleighCDF` bijector.
- Add `Ascending` bijector and deprecate `Ordered`.
- Add optional `low` parameter to the `Softplus` bijector.
- Enable the `ScaleMatvecLinearOperator` bijector to wrap blockwise `LinearOperator`s to form multipart bijectors.
- Allow passing kwargs to `Blockwise`.
- Bijectors now share a global cache, keyed by the bijector parameters and the value being transformed.
Distributions:
- BREAKING: Remove deprecated `HiddenMarkovModel.num_states` property.
- BREAKING: Change the naming scheme of unnamed variables in `JointDistribution`s.
- BREAKING: Remove deprecated `batch_shape` and `event_shape` arguments of `TransformedDistribution`.
- Add `Skellam` distribution.
- `JointDistributionCoroutine{AutoBatched}` now uses namedtuples as the sample dtype.
- The von Mises-Fisher distribution now works for dimensions > 5 and implements `VonMisesFisher.entropy`.
- Add `ExpGamma` and `ExpInverseGamma` distributions.
- `JointDistribution*AutoBatched` now support (reproducible) tensor seeds.
- Add KL(VonMisesFisher || SphericalUniform).
- Add `Distribution.parameter_properties` method.
- `experimental_default_event_space_bijector` now accepts additional arguments to pin some distribution parts.
- Add `JointDistribution.experimental_pin` and `JointDistributionPinned`.
- Add `NegativeBinomial.experimental_from_mean_dispersion` method.
- Add `tfp.experimental.distribute`, with `DistributionStrategy`-aware distributions that support cross-device likelihood computations.
- `HiddenMarkovModel` can now accept time-varying observation distributions if `time_varying_observation_distribution` is set.
- The `Beta`, `Binomial`, and `NegativeBinomial` CDFs no longer return NaN outside the support.
- Remove the "dynamic graph" code path from the `Mixture` sampler. (`Mixture` now ignores the `use_static_graph` parameter.)
- `Mixture` now computes standard deviations more accurately and robustly.
- Fix incorrect NaN samples generated by several distributions.
- Fix KL divergence between `Categorical` distributions when logits contain -inf.
- Implement `Bernoulli.cdf`.
- Add a `log_rate` parameter to `tfd.Gamma`.
- Add option for parallel filtering and sampling to `LinearGaussianStateSpaceModel`.
MCMC:
- Add `tfp.experimental.mcmc.ProgressBarReducer`.
- Update `experimental.mcmc.sample_sequential_monte_carlo` to use the new MCMC stateless kernel API.
- Add an experimental streaming MCMC framework that supports computing statistics over a (batch of) Markov chain(s) without materializing the samples. Supported statistics (mostly on arbitrary functions of the model variables) include the mean, (co)variance, central moments of arbitrary rank, and the potential scale reduction factor (R-hat). Also support selectively tracing the history of some, but not all, statistics or model variables. Add algorithms for running mean, variance, covariance, arbitrary higher central moments, and R-hat to `tfp.experimental.stats`.
- Add `untempered_log_prob_fn` as an init kwarg to the `ReplicaExchangeMC` kernel.
- Add experimental support for mass matrix preconditioning in Hamiltonian Monte Carlo.
- Add the ability to temper part of the log prob in `ReplicaExchangeMC`.
- `tfp.experimental.mcmc.{sample_fold,sample_chain}` support warm restart.
- Add `even_odd_swap` exchange function to `replica_exchange_mc`.
- Samples from `ReplicaExchangeMC` can now have a per-replica initial state.
- Add omitted n/(n-1) term to `tfp.mcmc.potential_scale_reduction`.
- Add `KernelBuilder` and `KernelOutputs` to experimental.
- Allow `tfp.mcmc.SimpleStepSizeAdaptation` and `DualAveragingStepSizeAdaptation` to take a custom reduction function.
- Replace `make_innermost_getter` et al. with `tfp.experimental.unnest` utilities.
VI:
Math + Stats:
- Add `tfp.math.bessel_ive`, `tfp.math.bessel_kve`, `tfp.math.log_bessel_ive`.
- Add optional `weights` to `tfp.stats.histogram`.
- Add `tfp.math.erfcinv`.
- Add `tfp.math.reduce_log_harmonic_mean_exp`.
Other:
- Add `tfp.math.psd_kernels.GeneralizedMaternKernel` (generalizes `MaternOneHalf`, `MaternThreeHalves`, and `MaternFiveHalves`).
- Add `tfp.math.psd_kernels.Parabolic`.
- Add `tfp.experimental.unnest` utilities for accessing nested attributes.
- Enable pytree flattening for TFP distributions in JAX.
- More careful handling of NaN and +/-inf in {L-,}BFGS.
- Remove Edward2 from TFP. Edward2 is now in its own repo at https://github.com/google/edward2.
- Support vector-valued offsets in `sts.Sum`.
- Make `DeferredTensor` actually defer computation under the JAX/NumPy backends.
Huge thanks to all the contributors to this release!
- Adrian Buzea
- Alexey Radul
- Ben Lee
- Ben Poole
- Brian Patton
- Christopher Suter
- Colin Carroll
- Cyril Chimisov
- Dave Moore
- Du Phan
- Emily Fertig
- Eugene Brevdo
- Federico Tomasi
- François Chollet
- George Karpenkov
- Giovanni Palla
- Ian Langmore
- Jacob Burnim
- Jacob Valdez
- Jake VanderPlas
- Jason Zavaglia
- Jean-Baptiste Lespiau
- Jeff Pollock
- Joan Puigcerver
- Jonas Eschle
- Josh Darrieulat
- Joshua V. Dillon
- Junpeng Lao
- Kapil Sachdeva
- Kate Lin
- Kibeom Kim
- Luke Metz
- Mark Daoust
- Matteo Hessel
- Michal Brys
- Oren Bochman
- Padarn Wilson
- Pavel Sountsov
- Peter Hawkins
- Rif A. Saurous
- Ru Pei
- ST John
- Sharad Vikram
- Simeon Carstens
- Srinivas Vasudevan
- Tom O'Malley
- Tomer Kaftan
- Urs Köster
- Yash Katariya
- Yilei Yang