
Product Prior

Product of independent marginal distributions with latent space transformations.

Overview

Product defines a prior as a product of independent 1D marginals, each with a bijective map to a chosen latent space. It extends the abstract base class TransformedPrior, which defines the forward/inverse interface.

Product supports two latent-space modes:

  • "hypercube": Maps to/from a bounded hypercube (used with Flow)
  • "standard_normal": Maps to/from standard normal space (used with Gaussian)

Supported Distributions

Type         Parameters   Description
uniform      low, high    Uniform distribution
normal       mean, std    Gaussian distribution
cosine       low, high    Cosine-weighted distribution
sine         low, high    Sine-weighted distribution
uvol         low, high    Uniform-in-volume distribution
triangular   a, c, b      Triangular distribution (min, mode, max)
lognormal    mean, std    Log-normal distribution
fixed        value        Fixed (non-inferred) parameter
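
Each prior is given as a tuple whose first element is the type name, followed by that type's parameters in the order listed above. A brief sketch mixing several types (parameter values are illustrative):

from falcon.priors import Product

prior = Product(
    priors=[
        ('triangular', 0.0, 1.0, 4.0),  # a (min), c (mode), b (max)
        ('lognormal', 0.0, 0.5),        # mean, std
        ('cosine', 0.0, 3.14159),       # low, high
        ('fixed', 42.0),                # held constant, not inferred
    ]
)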

Fixed Parameters

Use the fixed distribution type to hold a parameter constant. Fixed parameters are excluded from the inferred parameter space but still appear in the full parameter vector passed to the simulator:

simulator:
  _target_: falcon.priors.Product
  priors:
    - ['normal', 0.0, 1.0]     # Inferred
    - ['fixed', 3.14]           # Held constant
    - ['uniform', -1.0, 1.0]   # Inferred
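
The fixed entry shrinks the inferred (latent) dimension but not the sampled vector. A small sketch of this bookkeeping, assuming param_dim counts only the free parameters (as the class reference below indicates):

from falcon.priors import Product

prior = Product(priors=[
    ('normal', 0.0, 1.0),    # inferred
    ('fixed', 3.14),         # held constant
    ('uniform', -1.0, 1.0),  # inferred
])

print(prior.param_dim)                 # 2: free parameters only
print(prior.simulate_batch(8).shape)   # (8, 3): fixed value included in column 1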

Usage

import torch

from falcon.priors import Product

prior = Product(
    priors=[
        ('normal', 0.0, 1.0),
        ('uniform', -10.0, 10.0),
    ]
)

# Sample from prior (returns a numpy array)
samples = prior.simulate_batch(1000)

# Transform to/from latent space (forward/inverse expect tensors)
theta = torch.from_numpy(samples)
z = prior.inverse(theta, mode="standard_normal")
x = prior.forward(z, mode="standard_normal")
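
The same calls work in hypercube mode, which pairs with the Flow estimator:

# Hypercube mode (default range [-2, 2])
u = prior.inverse(theta, mode="hypercube")
x = prior.forward(u, mode="hypercube")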

YAML Configuration

With Flow estimator

simulator:
  _target_: falcon.priors.Product
  priors:
    - ['uniform', -100.0, 100.0]
    - ['uniform', -100.0, 100.0]

With Gaussian estimator

simulator:
  _target_: falcon.priors.Product
  priors:
    - ['normal', 0.0, 1.0]
    - ['normal', 0.0, 1.0]
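
The _target_ key is Hydra's instantiation convention. Assuming the project loads its configs with Hydra/OmegaConf, the prior can be built directly from such a block, e.g.:

from hydra.utils import instantiate
from omegaconf import OmegaConf

cfg = OmegaConf.load("config.yaml")  # file name is illustrative
prior = instantiate(cfg.simulator)   # -> falcon.priors.Product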

Class Reference

TransformedPrior

Bases: ABC

Base class for priors that support latent space transformations.

Subclasses must implement forward() and inverse() with a mode parameter:
  • forward(z, mode): latent space -> parameter space
  • inverse(x, mode): parameter space -> latent space

Modes
  • "hypercube": Maps to/from bounded hypercube. Use with Flow estimator.
  • "standard_normal": Maps to/from N(0, I). Use with Gaussian estimator.

This base class is used for type checking in estimators like Gaussian that require the transformation interface.
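
For illustration, a minimal subclass sketch with a single uniform parameter on [low, high]. This assumes TransformedPrior is importable from falcon.priors alongside Product, and hard-codes the hypercube range to [-2, 2] for brevity:

import torch

from falcon.priors import TransformedPrior  # assumed export path


class UniformPrior(TransformedPrior):
    def __init__(self, low, high):
        self.low, self.high = low, high

    @property
    def param_dim(self):
        return 1

    def forward(self, z, mode="hypercube"):
        if mode == "hypercube":
            u = (z + 2.0) / 4.0            # [-2, 2] -> [0, 1]
        elif mode == "standard_normal":
            u = torch.special.ndtr(z)      # standard normal CDF -> [0, 1]
        else:
            raise ValueError(f"Unknown mode: {mode}")
        return self.low + u * (self.high - self.low)

    def inverse(self, x, mode="hypercube"):
        u = (x - self.low) / (self.high - self.low)
        u = torch.clamp(u, 1e-10, 1.0 - 1e-10)
        if mode == "hypercube":
            return 4.0 * u - 2.0
        elif mode == "standard_normal":
            return torch.special.ndtri(u)  # inverse standard normal CDF
        raise ValueError(f"Unknown mode: {mode}")

    def simulate_batch(self, batch_size):
        u = torch.rand(batch_size, 1, dtype=torch.float64)
        return (self.low + u * (self.high - self.low)).numpy()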

param_dim abstractmethod property

param_dim

Dimension of the inferred (free) parameter space, i.e. the latent dimension.

forward abstractmethod

forward(z, mode='hypercube')

Transform from latent space to parameter space.

Source code in falcon/priors/product.py
@abstractmethod
def forward(self, z, mode: str = "hypercube"):
    """Transform from latent space to parameter space."""
    pass

inverse abstractmethod

inverse(x, mode='hypercube')

Transform from parameter space to latent space.

Source code in falcon/priors/product.py
@abstractmethod
def inverse(self, x, mode: str = "hypercube"):
    """Transform from parameter space to latent space."""
    pass

simulate_batch abstractmethod

simulate_batch(batch_size)

Sample from the prior distribution.

Source code in falcon/priors/product.py
@abstractmethod
def simulate_batch(self, batch_size: int):
    """Sample from the prior distribution."""
    pass

Product

Product(priors=[], hypercube_range=[-2, 2])

Bases: TransformedPrior

Maps between target distributions and a latent space (hypercube or standard normal).

Supports bidirectional transformation with mode selection at call time:
  • forward(z, mode): latent space -> target distribution
  • inverse(x, mode): target distribution -> latent space
Modes
  • "hypercube": Maps to/from hypercube domain (default [-2, 2]). Use with Flow estimator.
  • "standard_normal": Maps to/from N(0, I). Use with Gaussian estimator.
Supported distribution types and their required parameters:
  • "uniform": Linear mapping. Parameters: low, high.
  • "cosine": Uses acos transform for pdf ∝ sin(angle). Parameters: low, high.
  • "sine": Uses asin transform. Parameters: low, high.
  • "uvol": Uniform-in-volume. Parameters: low, high.
  • "normal": Normal distribution. Parameters: mean, std.
  • "lognormal": Log-normal distribution. Parameters: mean, std.
  • "triangular": Triangular distribution. Parameters: a (min), c (mode), b (max).
  • "fixed": Fixed value (excluded from latent space). Parameters: value.
Example

prior = Product([ ("uniform", -100.0, 100.0), ("fixed", 5.0), # Fixed parameter, not in latent space ("normal", 0.0, 1.0), ])

Latent space has dim=2 (only free params)

Output space has dim=3 (includes fixed params)

For Gaussian estimator (standard normal latent space)

z = prior.inverse(theta, mode="standard_normal") # theta: (..., 3) -> z: (..., 2) theta = prior.forward(z, mode="standard_normal") # z: (..., 2) -> theta: (..., 3)

For Flow estimator (hypercube latent space)

u = prior.inverse(theta, mode="hypercube") theta = prior.forward(u, mode="hypercube")

Initialize Product.

Parameters:

Name              Description                                         Default
priors            List of tuples (dist_type, param1, param2, ...).   []
hypercube_range   Range for hypercube mode.                           [-2, 2]
Source code in falcon/priors/product.py
def __init__(self, priors=[], hypercube_range=[-2, 2]):
    """
    Initialize Product.

    Args:
        priors: List of tuples (dist_type, param1, param2, ...).
        hypercube_range: Range for hypercube mode (default: [-2, 2]).
    """
    self.priors = priors
    self.hypercube_range = hypercube_range

    # Separate fixed and free parameters
    self._free_indices = []
    self._fixed_indices = []
    self._fixed_values = {}
    for i, prior in enumerate(priors):
        dist_type = prior[0]
        if dist_type == "fixed":
            self._fixed_indices.append(i)
            self._fixed_values[i] = prior[1]
        else:
            self._free_indices.append(i)

    self._param_dim = len(self._free_indices)  # Latent space dimension
    self._full_param_dim = len(priors)  # Full output dimension

forward

forward(z, mode='hypercube')

Map from latent space to target distribution.

Parameters:

Name   Description                                                      Default
z      Tensor of shape (..., param_dim) in latent space (free params    required
       only).
mode   "hypercube" or "standard_normal".                                'hypercube'

Returns:

Tensor of shape (..., full_param_dim) in target distribution space.

Source code in falcon/priors/product.py
def forward(self, z, mode="hypercube"):
    """
    Map from latent space to target distribution.

    Args:
        z: Tensor of shape (..., param_dim) in latent space (free params only).
        mode: "hypercube" or "standard_normal".

    Returns:
        Tensor of shape (..., full_param_dim) in target distribution space.
    """
    # Handle case with no free parameters
    if self._param_dim == 0:
        batch_shape = z.shape[:-1] if z.dim() > 1 else (1,)
        result = torch.zeros(*batch_shape, self._full_param_dim, dtype=z.dtype, device=z.device)
        for idx, val in self._fixed_values.items():
            result[..., idx] = val
        return result

    if mode == "standard_normal":
        # Try direct transforms first (avoids CDF precision issues at tails)
        transformed = [None] * self._full_param_dim
        use_direct = True
        z_idx = 0
        for i, prior in enumerate(self.priors):
            dist_type, *params = prior
            if dist_type == "fixed":
                transformed[i] = torch.full(z.shape[:-1], params[0], dtype=z.dtype, device=z.device)
            else:
                x_i = self._from_standard_normal(z[..., z_idx], dist_type, *params)
                if x_i is None:
                    use_direct = False
                    break
                transformed[i] = x_i
                z_idx += 1
        if use_direct:
            return torch.stack(transformed, dim=-1)
        # Fall through to CDF approach if any distribution lacks direct transform
        u = self._normal_to_uniform(z)
    elif mode == "hypercube":
        u = self._hypercube_to_uniform(z)
    else:
        raise ValueError(f"Unknown mode: {mode}. Use 'hypercube' or 'standard_normal'.")

    # Map [0, 1] to target distributions (CDF approach)
    epsilon = 1e-10  # Supports ~6 sigma tails in float64
    u = torch.clamp(u, epsilon, 1.0 - epsilon)

    transformed = []
    u_idx = 0
    for i, prior in enumerate(self.priors):
        dist_type, *params = prior
        if dist_type == "fixed":
            x_i = torch.full(u.shape[:-1], params[0], dtype=u.dtype, device=u.device)
        else:
            x_i = self._forward_transform(u[..., u_idx], dist_type, *params)
            u_idx += 1
        transformed.append(x_i)

    return torch.stack(transformed, dim=-1)
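
For a normal marginal the standard-normal path reduces to an affine map, x = mean + std * z, whether taken via the direct transform or the CDF route. A quick check (parameter values are illustrative):

import torch

from falcon.priors import Product

prior = Product(priors=[('normal', 2.0, 0.5)])
z = torch.randn(4, 1, dtype=torch.float64)
x = prior.forward(z, mode="standard_normal")
assert torch.allclose(x, 2.0 + 0.5 * z)  # affine in z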

inverse

inverse(x, mode='hypercube')

Map from target distribution to latent space.

Parameters:

Name   Description                                                      Default
x      Tensor of shape (..., full_param_dim) in target distribution     required
       space.
mode   "hypercube" or "standard_normal".                                'hypercube'

Returns:

Tensor of shape (..., param_dim) in latent space (free params only).

Source code in falcon/priors/product.py
def inverse(self, x, mode="hypercube"):
    """
    Map from target distribution to latent space.

    Args:
        x: Tensor of shape (..., full_param_dim) in target distribution space.
        mode: "hypercube" or "standard_normal".

    Returns:
        Tensor of shape (..., param_dim) in latent space (free params only).
    """
    # Handle case with no free parameters
    if self._param_dim == 0:
        batch_shape = x.shape[:-1] if x.dim() > 1 else (1,)
        return torch.zeros(*batch_shape, 0, dtype=x.dtype, device=x.device)

    if mode == "standard_normal":
        # Try direct transforms first (avoids CDF precision issues at tails)
        transformed = []
        use_direct = True
        for i, prior in enumerate(self.priors):
            dist_type, *params = prior
            if dist_type == "fixed":
                continue  # Skip fixed parameters
            z_i = self._to_standard_normal(x[..., i], dist_type, *params)
            if z_i is None:
                use_direct = False
                break
            transformed.append(z_i)
        if use_direct:
            return torch.stack(transformed, dim=-1)
        # Fall through to CDF approach if any distribution lacks direct transform

    # Map target distributions to [0, 1] (CDF approach, free params only)
    uniform = []
    for i, prior in enumerate(self.priors):
        dist_type, *params = prior
        if dist_type == "fixed":
            continue  # Skip fixed parameters
        u_i = self._inverse_transform(x[..., i], dist_type, *params)
        uniform.append(u_i)

    u = torch.stack(uniform, dim=-1)

    # Clamp to avoid numerical issues at boundaries
    epsilon = 1e-10  # Supports ~6 sigma tails in float64
    u = torch.clamp(u, epsilon, 1.0 - epsilon)

    # Convert [0, 1] to latent space
    if mode == "hypercube":
        return self._uniform_to_hypercube(u)
    elif mode == "standard_normal":
        return self._uniform_to_normal(u)
    else:
        raise ValueError(f"Unknown mode: {mode}. Use 'hypercube' or 'standard_normal'.")
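
Since inverse() drops fixed entries and forward() restores them, a round trip through the latent space should recover the full parameter vector, up to the boundary clamping above. A sketch:

import torch

from falcon.priors import Product

prior = Product(priors=[
    ('uniform', -10.0, 10.0),
    ('fixed', 5.0),
    ('normal', 0.0, 1.0),
])

theta = torch.from_numpy(prior.simulate_batch(16))  # (16, 3)
z = prior.inverse(theta, mode="standard_normal")    # (16, 2): fixed column dropped
theta2 = prior.forward(z, mode="standard_normal")   # (16, 3): fixed value restored
assert torch.allclose(theta, theta2)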

simulate_batch

simulate_batch(batch_size)

Generate samples from the target distributions.

Parameters:

Name         Description          Default
batch_size   Number of samples.   required

Returns:

numpy array of shape (batch_size, full_param_dim) in target distribution space.

Source code in falcon/priors/product.py
def simulate_batch(self, batch_size):
    """
    Generate samples from the target distributions.

    Args:
        batch_size: Number of samples.

    Returns:
        numpy array of shape (batch_size, full_param_dim) in target distribution space.
    """
    # Sample uniform for free parameters only
    u = torch.rand(batch_size, self._param_dim, dtype=torch.float64)

    transformed = []
    u_idx = 0
    for i, prior in enumerate(self.priors):
        dist_type, *params = prior
        if dist_type == "fixed":
            x_i = torch.full((batch_size,), params[0], dtype=torch.float64)
        else:
            x_i = self._forward_transform(u[..., u_idx], dist_type, *params)
            u_idx += 1
        transformed.append(x_i)

    return torch.stack(transformed, dim=-1).numpy()
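
As a sanity check on the sampler, the empirical moments of a normal marginal should match its parameters (values illustrative):

from falcon.priors import Product

prior = Product(priors=[('normal', 2.0, 0.5)])
samples = prior.simulate_batch(100_000)
print(samples.mean(), samples.std())  # approximately 2.0 and 0.5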