Generated code description¶
Code description generated automatically from docstrings.
Model¶
Model module. Defines the model class and some test models.
- class ensnest.model.Model(*args, **kwargs)¶
Class to describe models
- space_bounds¶
the coordinates of the two vertices of the hyperrectangle defining the bounds of the parameters
- Type
2-tuple of np.ndarray
- volume¶
the hypervolume of the sampling space
- Type
float
- names¶
the names of the variables
- Type
list of str
- livepoint_t¶
the dtype used to describe live points
- Type
numpy.dtype
- position_t¶
the dtype used to describe points in sampling space
- Type
numpy.dtype
Note
The log_prior and log_likelihood functions are user-defined and must take exactly one argument. They must also be able to handle (*, *, ..., space_dimension)-shaped arrays, so make sure every operation acts on the -1 axis of the input, or use varenv(). If the input is a single point of shape (space_dimension,), both functions must return a float (not a (1,)-shaped array).
- __init__(*args, **kwargs)¶
Initialises and checks the model
- classmethod auto_bound(log_func)¶
Decorator to bound functions.
- Parameters
log_func (function) – a function for which self.log_func(var) is valid
- Returns
the bounded function log_func(var) + log_chi(var)
- Return type
function
Example
>>> class MyModel(model.Model):
>>>
>>>     @model.Model.auto_bound
>>>     def log_prior(var):
>>>         return var
- is_inside_bounds(points)¶
Checks if a point is inside the space bounds.
- Parameters
points (np.ndarray) – the points to be checked. Must have shape (*, space_dim).
- Returns
True if all the coordinates lie between bounds
False if at least one is outside.
- Return type
np.ndarray
- log_chi(points)¶
Logarithm of the characteristic function of the domain. It is equivalent to
>>> np.log(model.is_inside_bounds(point).astype(float))
- Parameters
points (np.ndarray) – the points to be checked. Must have shape (*, space_dim).
- Returns
0 if all the coordinates lie between bounds, -np.inf if at least one is outside
- Return type
np.ndarray
- log_likelihood(var)¶
the log_likelihood function
- log_prior(var)¶
the log_prior function
- set_parameters()¶
Model parameters such as bounds, names, and additional data should be defined here
- classmethod varenv(func)¶
Helper function to index the variables by name inside user-defined functions.
Uses the names defined in the constructor of the model, plus var0, var1, … for the ones left unspecified.
Warning
When used together with @auto_bound, @auto_bound must come first:
>>> @auto_bound
>>> @varenv
>>> def f(self, var):
>>>     u = var['A'] + var['mu']
>>>     # ... do stuff
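As a usage illustration, here is a minimal sketch of a user-defined model following the interface described above (set_parameters defining bounds and names, vectorised functions acting on the -1 axis, and the auto_bound decorator). The attribute names set in set_parameters (self.bounds, self.names), the use of self in the signatures, and the Gaussian likelihood are assumptions made for this example, not prescriptions from the library:
>>> import numpy as np
>>> from ensnest import model
>>>
>>> class MyGaussianModel(model.Model):
>>>
>>>     def set_parameters(self):
>>>         # bounds, names and any additional data go here (values are illustrative)
>>>         self.bounds = ([-5.0, -5.0], [5.0, 5.0])
>>>         self.names = ['A', 'mu']
>>>
>>>     @model.Model.auto_bound          # adds log_chi(var): -inf outside the bounds
>>>     def log_prior(self, var):
>>>         # flat prior inside the bounds; (*,)-shaped output for (*, space_dim) input
>>>         return np.zeros(var.shape[:-1])
>>>
>>>     def log_likelihood(self, var):
>>>         # every operation acts on the -1 (parameter) axis, as required by the Note
>>>         return -0.5 * np.sum(var**2, axis=-1)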
- class ensnest.model.PhaseTransition(*args, **kwargs)¶
The phase transition model from Skilling (2006)
- log_likelihood(var)¶
the log_likelihood function
- log_prior(*args)¶
the log_prior function
- set_parameters()¶
Model parameters such as bounds, names, and additional data should be defined here
- class ensnest.model.Rosenbrock(*args, **kwargs)¶
Rosenbrock likelihood, uniform prior
- log_likelihood(var, *args, **kwargs)¶
the log_likelihood function
- log_prior(*args)¶
the log_prior function
- set_parameters()¶
Model parameters such as bounds, names, and additional data should be defined here
- class ensnest.model.UniformJeffreys(*args, **kwargs)¶
Gaussian likelihood, 1/y prior
- log_likelihood(var)¶
the log_likelihood function
- log_prior(*args)¶
the log_prior function
- set_parameters()¶
Model parameters such as bounds, names, and additional data should be defined here
- class ensnest.model.UniformSphere(R=10.0, dim=1.0)¶
An empty model with uniform prior over a sphere.
It is a bit of a workaround: since the self.bounds parameter is the hypercube contained inside the sphere, some plots could have wrong limits.
- Parameters
R (float) – the radius of the sphere
dim (int) – the space dimension
- __init__(R=10.0, dim=1.0)¶
Initialises and checks the model
- log_prior(var, *args, **kwargs)¶
the log_prior function
- set_parameters()¶
Model parameters such as bounds, names, and additional data should be defined here
- class ensnest.model.nGaussian(dim=1)¶
MVN likelihood, uniform prior.
- __init__(dim=1)¶
Initialises and checks the model
- log_likelihood(var)¶
the log_likelihood function
- log_prior(*args)¶
the log_prior function
- set_parameters()¶
Model parameters such as bounds, names, and additional data should be defined here
Samplers¶
Module containing the samplers used in main calculations.
Since almost every sampler is defined by a Markov chain, the basic attributes are the model and the length of the chain.
Since the module is intended to be used in nested sampling, each sampler should support likelihood-constrained prior sampling (LCPS).
- class ensnest.samplers.AIESampler(model, mcmc_length, nwalkers=10, a=None)¶
The Affine-Invariant Ensemble sampler (Goodman, Weare, 2010).
After a uniform initialisation step, for each particle k it selects a pivot particle j and then proposes
\[\begin{aligned}j &= k + \mathrm{random}(0 \rightarrow n)\\z &\sim g(z)\\y &= x_j + z\,(x_k - x_j)\end{aligned}\]
and then executes a Metropolis-Hastings acceptance step over y.
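For illustration, a minimal sketch of this stretch move (function and variable names are assumptions for the example, not the library's internals; the stretch value z is drawn from g(z), see get_stretch below):
>>> import numpy as np
>>> rng = np.random.default_rng()
>>>
>>> def stretch_move(walkers, k, z):
>>>     # pivot walker j chosen at random among the others (j != k)
>>>     n = len(walkers)
>>>     j = (k + rng.integers(1, n)) % n
>>>     # affine-invariant proposal y = x_j + z (x_k - x_j)
>>>     y = walkers[j] + z * (walkers[k] - walkers[j])
>>>     # y is then accepted with MH probability min(1, z**(dim-1) * p(y) / p(walkers[k]))
>>>     return y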
- AIEStep(Lthreshold=None, continuous=False)¶
Single step of AIESampler.
- Parameters
Lthreshold (float, optional) – The threshold of likelihood below which a point is set as impossible to reach
continuous (bool, optional) – if True, use modular index assignment, overwriting past values when self.elapsed_time_index > self.length
- __init__(model, mcmc_length, nwalkers=10, a=None)¶
Initialise the chain uniformly over the space bounds.
- get_stretch(size=1)¶
Generates the stretch values given the scale parameter a.
The output is distributed as \(\frac{1}{\sqrt{z}}\) in \([1/a, a]\). Uses inverse transform sampling.
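A sketch of that inverse transform, assuming the usual normalisation of \(g(z) \propto \frac{1}{\sqrt{z}}\) on \([1/a, a]\) (illustrative, not necessarily the exact code of get_stretch):
>>> import numpy as np
>>> rng = np.random.default_rng()
>>>
>>> def sample_stretch(a=2.0, size=1):
>>>     # CDF: F(z) = (sqrt(z) - 1/sqrt(a)) / (sqrt(a) - 1/sqrt(a))
>>>     # inverse: z = (1 + (a - 1) * u)**2 / a, with u uniform in [0, 1]
>>>     u = rng.uniform(size=size)
>>>     return (1.0 + (a - 1.0) * u)**2 / a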
- join_chains(burn_in=0.02)¶
Joins the chains for the ensemble after removing the first burn_in fraction of each single-particle chain.
- Parameters
burn_in (float, optional) – the burn-in fraction. Must satisfy 0 < burn_in < 1.
- sample_prior(Lthreshold=None, progress=False)¶
Fills the chain by sampling the prior.
- class ensnest.samplers.AIEevolver(model, steps, length=None, nwalkers=10, a=None)¶
Class to override some functionalities of the sampler in case only the final state is of interest.
The main difference from AIESampler is that length and steps can be different.
- __init__(model, steps, length=None, nwalkers=10, a=None)¶
Initialise the chain uniformly over the space bounds.
- bring_over_threshold(logLthreshold)¶
Brings the sampler over threshold.
It is necessary to initialise the sampler before sampling over threshold.
- Parameters
logLthreshold (float) – the logarithm of the likelihood threshold.
- get_new(Lmin, start_ensemble=None, progress=False, allow_resize=True)¶
Returns nwalkers different points from the prior, given a likelihood threshold.
As for AIEStep, it requires every point to be in a valid region (the border is included).
If the length of the sampler is not enough to ensure that all points are different, it is stretched by doubling self.steps each time. The stretch is permanent.
- Parameters
Lmin (float) – the threshold likelihood that a point must exceed to be accepted
- Returns
new generated points
- Return type
np.ndarray
- class ensnest.samplers.Sampler(model, mcmc_length, nwalkers)¶
Produces samples from model.
It is intended as a base class that has to be further defined. For generality the attribute nwalkers is present, but it can be one for non-ensemble-based samplers.
- model¶
Model defined as the set of (log_prior, log_likelihood, bounds)
- Type
Model
- mcmc_lenght¶
the length of the single Markov chain
- Type
int
- nwalkers¶
the number of walkers the ensemble is made of
- Type
int
- __init__(model, mcmc_length, nwalkers)¶
Initialise the chain uniformly over the space bounds.
NestedSampling.py¶
The nested sampling module. In the first (and probably only) version of ensnest it is mainly tailored to the AIESampler class, so there is no choice of sampler.
- class ensnest.NestedSampling.NestedSampler(model, nlive=1000, npoints=numpy.inf, evosteps=150, relative_precision=0.0001, load_old=None, filename=None, evo_progress=True, a=None)¶
Class performing nested sampling
- get_ew_samples()¶
Generates equally weighted samples by an accept/reject strategy.
- initialise(init_positions=None)¶
Initialises the evolver and Z value
- Parameters
init_positions – the initial position of each walker of the sample
- param_stats()¶
Estimates the mean and standard deviation of the parameters
- prepare_save_load()¶
Creates (if not already present) the save directory for the run
- run()¶
Performs nested sampling.
- simulate_shrink_outcomes()¶
Computes samples of logZ and p_i simulating shrinking outcomes.
Each time a new point is harvested the prior mass is multiplied by alpha(N) < 1.
The logL(t) and N(t) arrays are the ones obtained from the run, logX is generated N_Z_SAMPLES times and logZ is evaluated for each outcome of logX.
- update()¶
Updates the value of Z given the current state.
The number of live points is of the form: nlive, (jump) 2*nlive - 1, 2*nlive - 2, …, nlive, (jump) 2*nlive - 1, etc.
Integration is performed between the two successive times at which N = nlive (extrema included), then one extremum is excluded when saving to self.N.
- varenv_points()¶
Gives usable fields to self.points['position'] based on model.names
- ensnest.NestedSampling.log_worst_t_among(N)¶
Helper function to generate shrink factors.
Since max({t}), with t in [0, 1] and len({t}) = N, is distributed as N*t**(N-1), the cumulative distribution is y = t**N, and sampling uniformly over y gives the desired sample.
Therefore max({t}) is equivalent to unif**(1/N), and log(unif**(1/N)) = (1/N)*log(unif).
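A minimal sketch of that rule (illustrative; the library's implementation may differ):
>>> import numpy as np
>>> rng = np.random.default_rng()
>>>
>>> def log_worst_t_among(N):
>>>     # log of the largest of N uniform shrink factors: (1/N) * log(unif)
>>>     return np.log(rng.uniform()) / N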
- class ensnest.NestedSampling.mpNestedSampler(*args, **kwargs)¶
Multiprocess version of nested sampling.
Runs multiprocess.cpu_count() instances of NestedSampler and joins them.
- logX¶
- Type
np.ndarray(dtype=np.float64)
- logL¶
- Type
np.ndarray(dtype=np.float64)
- N¶
- Type
np.ndarray(dtype=np.int)
- logZ¶
- Type
np.float64
- Z¶
- Type
np.float64
- logZ_error¶
- Type
np.float64
- Z_error¶
- Type
np.float64
- logZ_samples¶
- Type
np.ndarray(dtype=np.float64)
- nested_samplers¶
The individual runs. Each nested sampler has completely defined attributes.
- Type
list of NestedSampler
- run_time¶
The time required to perform the runs and merge them.
- Type
np.float64
- error_estimate_time¶
The time required to perform the error estimate on logZ
- Type
np.float64
- how_many_at_given_logL(N, logLs, givenlogL)¶
Helper function that returns the number of live points at the given logL value.
- merge_all()¶
Merges all the runs
- run()¶
Performs nested sampling.
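A hedged usage sketch, using the model subclass sketched in the Model section and only the attributes documented above; that mpNestedSampler forwards its keyword arguments to the underlying NestedSampler instances is an assumption:
>>> from ensnest import NestedSampling
>>>
>>> my_model = MyGaussianModel()            # illustrative model subclass from above
>>> ns = NestedSampling.mpNestedSampler(my_model, nlive=1000, evosteps=150)
>>> ns.run()
>>>
>>> print(ns.logZ, '+/-', ns.logZ_error)    # documented evidence attributes
>>> print('run time:', ns.run_time)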
DiffusiveNestedSampling.py¶
- class ensnest.DiffusiveNestedSampling.DiffusiveNestedSampler(model, max_n_levels=100, nlive=100, chain_length=1000, clean_samples=True, filename=None, load_old=None)¶
- G()¶
Theoretical cumulative distribution function for X, given that X is smaller than the last level. It is likely that the major source of error in assessing the X values originates from this assumption, so for generality G has its own function in case of future theoretical refinements.
- continue_exploration()¶
Continues the exploration using uniform weighting.
- revise_X(points_logL, current_level_index)¶
Revises the X values of the levels.
- Parameters
points_logL (np.ndarray) – the log-likelihood values of the points generated while sampling the mixture
- run()¶
Runs the diffusive nested sampling algorithm.
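A hedged usage sketch based only on the constructor signature and run() above (my_model is the illustrative model subclass from the Model section; which result attributes the sampler exposes afterwards is not documented here):
>>> from ensnest import DiffusiveNestedSampling
>>>
>>> dns = DiffusiveNestedSampling.DiffusiveNestedSampler(my_model, max_n_levels=100,
>>>                                                      nlive=100, chain_length=1000)
>>> dns.run()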
- class ensnest.DiffusiveNestedSampling.mixtureAIESampler(model, mcmc_length, nwalkers=10, space_scale=None, verbosity=0)¶
An AIE sampler for mixtures of pdfs.
It is really problem-specific, forked from the AIESampler class because of the great conceptual differences.
Allows only exponential and uniform weighting of the levels.
- chain_j¶
the chain of j values for each walker for each time
- Type
np.ndarray
- level_logL¶
likelihood levels
- Type
np.ndarray
- level_logX¶
prior mass levels
- Type
np.ndarray
- Lambda¶
exploration factor
- Type
float
- chain¶
the array of position, likelihood and prior of each point
- Type
np.ndarray
- AIEStep(continuous=False, uniform_weights=False)¶
Single step of mixtureAIESampler. Overrides the parent class method.
- Parameters
continuous (bool, optional) – if True, use modular index assignment, overwriting past values when self.elapsed_time_index > self.length
uniform_weights (bool, optional) – if True, sets uniform weighting between levels instead of exponential
Note
The difference from the AIEStep function of AIESampler is that every single-particle distribution is chosen randomly among the ones in the mixture. This is equivalent to choosing a logL_threshold and rejecting a proposal point if its logL < logL_threshold.
- join_chains(burn_in=0.2, clean=True)¶
Joins the walkers, joins chain_j and also cleans the samples
- sample_prior(progress=False, **kwargs)¶
Fills the chain by sampling the mixture.
- Parameters
progress (bool) – displays a progress bar. Default: False
uniform_weights (bool, optional) – if True, sets uniform weighting between levels instead of exponential
stdplots.py¶
standard plots module
- ensnest.stdplots.XLplot(NS, fig_ax=None)¶
Does the (logX, logL) plot.
- Parameters
NS (NS/mpNS/DNS sampler) –
- ensnest.stdplots.contour(NS, density_of_points=200, **contour_kwargs)¶
After a kernel density estimation plots the contour of the points’ density (2D data only)
- ensnest.stdplots.contourf(NS, density_of_points=500, **contour_kwargs)¶
After a kernel density estimation plots the filled contour of the points’ density (2D data only)
- ensnest.stdplots.hist_points(NS)¶
Plots the histogram of the equally weighted points
- Parameters
NS (NS/mpNS sampler) –
- ensnest.stdplots.scat(NS, fig_ax=None)¶
Does a 2D scatter plot.
- Parameters
NS (NS/mpNS sampler) –
- ensnest.stdplots.scat3D(NS)¶
Does a 3D scatter plot (x,y,L).
- Parameters
NS (NS/mpNS sampler) –
- ensnest.stdplots.weightscat(NS, fig_ax=None)¶
Does a 2D scatter plot using a colormap to display weighting.
- Parameters
NS (NS/mpNS sampler) –
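A hedged usage sketch of the plotting helpers above, assuming a finished NS/mpNS run (ns from the sketch in the NestedSampling section); the explicit matplotlib import and plt.show() call are assumptions, as the module may handle figure display itself:
>>> import matplotlib.pyplot as plt
>>> from ensnest import stdplots
>>>
>>> stdplots.XLplot(ns)        # (logX, logL) plot
>>> stdplots.hist_points(ns)   # histogram of the equally weighted points
>>> stdplots.scat(ns)          # 2D scatter plot
>>> plt.show()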
Utility routines¶
- ensnest.utils.hms(secs)¶
Returns time in hour-minute-second format, given a time in seconds.
- ensnest.utils.logsubexp(x1, x2)¶
Helper function to compute \(\log{(e^{x_1} - e^{x_2})}\)
- Parameters
x1 (float) –
x2 (float) –
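A numerically stable sketch of this identity (illustrative; the library's own implementation may differ), using \(\log{(e^{x_1} - e^{x_2})} = x_1 + \log{(1 - e^{x_2 - x_1})}\):
>>> import numpy as np
>>>
>>> def logsubexp(x1, x2):
>>>     # assumes x1 >= x2, so the argument of log1p stays in (-1, 0]
>>>     return x1 + np.log1p(-np.exp(x2 - x1))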
- ensnest.utils.logsumexp(arg)¶
Utility to sum over log-values. Given a vector [a1, a2, a3, …], returns \(\log{(e^{a_1} + e^{a_2} + \ldots)}\)
- Parameters
arg (np.ndarray) – the array of values to be log-sum-exponentiated
- Returns
\(\log{(e^{a1} + e^{a2} + ...)}\)
- Return type
float
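A numerically stable sketch of the same operation (illustrative), factoring out the maximum before exponentiating:
>>> import numpy as np
>>>
>>> def logsumexp(arg):
>>>     # factor out max(arg) so the exponentials cannot overflow
>>>     m = np.max(arg)
>>>     return m + np.log(np.sum(np.exp(arg - m)))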