# Gaussian Estimator

Full covariance Gaussian posterior estimation.
## Overview

`Gaussian` provides a simpler alternative to `Flow` for posterior estimation. Instead of normalizing flows, it models the posterior as a multivariate Gaussian with full covariance, using eigenvalue-based operations.

Key features:

- Full covariance matrix showing parameter correlations directly
- Eigenvalue-based tempered sampling for exploration
- Simpler and more interpretable than flow-based methods
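Because the full covariance matrix is estimated explicitly, pairwise parameter correlations can be read off directly. A minimal NumPy sketch of that idea (illustration only, not the falcon API):

```python
import numpy as np

# Toy posterior samples with correlated parameters
rng = np.random.default_rng(0)
cov_true = np.array([[1.0, 0.8],
                     [0.8, 2.0]])
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov_true, size=50_000)

# Estimate the full covariance, then normalize to a correlation matrix
cov = np.cov(samples, rowvar=False)
std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)

print(corr[0, 1])  # close to 0.8 / sqrt(1.0 * 2.0) ~ 0.57
```

A diagonal-covariance (or mean-field) model would discard the off-diagonal entries and hence this correlation structure.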
> **Note:** `Gaussian` requires a `Product` prior (with `"standard_normal"` mode) as the simulator, not `Hypercube`.
## Configuration

`Gaussian` is configured through the same four groups as `Flow`: `loop`, `network`, `optimizer`, and `inference`.

```yaml
estimator:
  _target_: falcon.estimators.Gaussian
  loop:
    num_epochs: 1000
    batch_size: 128
    early_stop_patience: 32
  network:
    hidden_dim: 128
    num_layers: 3
    momentum: 0.10
    min_var: 1.0e-20
    eig_update_freq: 1
  embedding:
    _target_: model.E_identity
    _input_: [x]
  optimizer:
    lr: 0.01
    lr_decay_factor: 0.5
    scheduler_patience: 16
  inference:
    gamma: 1.0
    discard_samples: false
    log_ratio_threshold: -20.0
```
## Configuration Reference

### Network (`network`)

| Parameter | Type | Default | Description |
|---|---|---|---|
| `hidden_dim` | int | 128 | MLP hidden layer dimension |
| `num_layers` | int | 3 | Number of hidden layers |
| `momentum` | float | 0.10 | EMA momentum for running statistics |
| `min_var` | float | 1e-20 | Minimum variance for numerical stability |
| `eig_update_freq` | int | 1 | Eigendecomposition update frequency |
The `loop`, `optimizer`, and `inference` groups share the same parameters as `Flow`.
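The `momentum` parameter controls an exponential moving average (EMA) over running statistics. As a hedged sketch of the standard EMA update rule (the exact update inside falcon may differ):

```python
def ema_update(running, batch_value, momentum=0.10):
    """Standard EMA: blend the new batch statistic into the running value.

    A small momentum gives a slowly-moving, stable estimate; a large
    momentum tracks recent batches more aggressively.
    """
    return (1.0 - momentum) * running + momentum * batch_value

# The running statistic converges toward the batch statistics it sees
running_mean = 0.0
for _ in range(200):
    running_mean = ema_update(running_mean, 1.0, momentum=0.10)

print(running_mean)  # approaches 1.0
```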
## Complete Example

```yaml
graph:
  z:
    evidence: [x]
    simulator:
      _target_: falcon.priors.Product
      priors:
        - ['normal', 0.0, 1.0]
        - ['normal', 0.0, 1.0]
        - ['normal', 0.0, 1.0]
    estimator:
      _target_: falcon.estimators.Gaussian
      loop:
        num_epochs: 1000
        batch_size: 128
        early_stop_patience: 32
      network:
        hidden_dim: 128
        num_layers: 3
        momentum: 0.10
        min_var: 1.0e-20
        eig_update_freq: 1
      embedding:
        _target_: model.E_identity
        _input_: [x]
      optimizer:
        lr: 0.01
        lr_decay_factor: 0.5
        scheduler_patience: 16
      inference:
        gamma: 1.0
        discard_samples: false
        log_ratio_threshold: -20.0
    ray:
      num_gpus: 1
  x:
    parents: [z]
    simulator:
      _target_: model.ExpPlusNoise
      sigma: 1.0e-6
    observed: "./data/mock_data.npz['x']"
sample:
  posterior:
    n: 1000
```
## Class Reference

### `Gaussian`

Create a `LossBasedEstimator` with `GaussianPosterior`.

This is the main entry point for using Gaussian posterior estimation. It provides sensible defaults while allowing full customization.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `simulator_instance` | | Prior/simulator instance | *required* |
| `theta_key` | `Optional[str]` | Key for theta in batch data | `None` |
| `condition_keys` | `Optional[List[str]]` | Keys for condition data in batch | `None` |
| `config` | `Optional[dict]` | Configuration dict with sections: `loop` (TrainingLoopConfig options), `network` (GaussianPosteriorConfig options), `optimizer` (OptimizerConfig options), `inference` (InferenceConfig options), `embedding` (embedding configuration with `_target_`), `device` (device string, optional) | `None` |

Returns:

| Type | Description |
|---|---|
| `LossBasedEstimator` | Configured `LossBasedEstimator` ready for training |

Example YAML:

```yaml
estimator:
  _target_: falcon.estimators.Gaussian
  network:
    hidden_dim: 128
    num_layers: 3
  embedding:
    _target_: model.E
    _input_: [x]
```

Source code in `falcon/estimators/gaussian.py`
### `GaussianPosterior`

```python
GaussianPosterior(param_dim, condition_dim, hidden_dim=128, num_layers=3, momentum=0.01, min_var=1e-06, eig_update_freq=1)
```

Bases: `Module`

Full covariance Gaussian posterior with eigenvalue-based operations.

Implements the `Posterior` contract:

- `loss(theta, conditions) -> Tensor`
- `sample(conditions, gamma=None) -> Tensor`
- `log_prob(theta, conditions) -> Tensor`

The posterior is parameterized as

```
p(theta | conditions) = N(mu(conditions), Sigma)
```

where `mu(conditions)` is predicted by an MLP with whitening, and `Sigma` is the residual covariance matrix estimated from training data.

Source code in `falcon/estimators/gaussian.py`
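An eigenvalue-based Gaussian log-density can be sketched as follows: eigendecompose `Sigma` once, clamp the eigenvalues at a minimum variance, and reuse the factors for both the log-determinant and the quadratic form. This is a simplified NumPy illustration of the standard technique, not the falcon source:

```python
import numpy as np

def gaussian_log_prob(theta, mu, sigma, min_var=1e-20):
    """Log N(theta | mu, sigma) via eigendecomposition of sigma."""
    eigvals, eigvecs = np.linalg.eigh(sigma)   # sigma = V diag(lam) V^T
    eigvals = np.clip(eigvals, min_var, None)  # numerical floor on variances
    diff = theta - mu
    z = eigvecs.T @ diff                       # rotate residual into eigenbasis
    quad = np.sum(z**2 / eigvals)              # (theta-mu)^T sigma^-1 (theta-mu)
    log_det = np.sum(np.log(eigvals))
    d = len(mu)
    return -0.5 * (quad + log_det + d * np.log(2.0 * np.pi))

mu = np.zeros(2)
sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
print(gaussian_log_prob(np.array([0.1, -0.2]), mu, sigma))
```

Clamping the eigenvalues (the role `min_var` plays in the configuration) keeps the quadratic form and log-determinant finite when the residual covariance is near-singular.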
#### `loss`

Compute negative log likelihood loss, updating statistics only during training.

Source code in `falcon/estimators/gaussian.py`

#### `log_prob`

Compute Gaussian log probability using eigendecomposition.

Source code in `falcon/estimators/gaussian.py`

#### `sample`

Sample from posterior, optionally tempered.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `conditions` | `Tensor` | Condition tensor of shape `(batch, condition_dim)` | *required* |
| `gamma` | `Optional[float]` | Tempering parameter. `None` = untempered, `<1` = widened. | `None` |

Source code in `falcon/estimators/gaussian.py`
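Tempering a Gaussian `N(mu, Sigma)` with exponent `gamma` yields `N(mu, Sigma / gamma)`, so `gamma < 1` inflates the covariance and widens the samples, which matches the parameter description above. A hedged NumPy sketch of eigenvalue-based tempered sampling (an illustration under that assumption, not the falcon implementation):

```python
import numpy as np

def tempered_sample(mu, sigma, gamma=None, n=1, rng=None):
    """Draw n samples from N(mu, sigma / gamma) via eigendecomposition.

    gamma=None means untempered; gamma < 1 rescales the covariance
    to sigma / gamma, widening the distribution for exploration.
    """
    rng = rng or np.random.default_rng()
    g = 1.0 if gamma is None else gamma
    eigvals, eigvecs = np.linalg.eigh(sigma)           # sigma = V diag(lam) V^T
    scale = np.sqrt(np.clip(eigvals, 0.0, None) / g)   # per-axis std in eigenbasis
    eps = rng.standard_normal((n, len(mu)))
    return mu + (eps * scale) @ eigvecs.T              # rotate back to parameter space

mu = np.zeros(2)
sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
wide = tempered_sample(mu, sigma, gamma=0.25, n=100_000,
                       rng=np.random.default_rng(1))
print(np.cov(wide, rowvar=False))  # close to sigma / 0.25
```

Working in the eigenbasis means tempering is just a per-eigenvalue rescaling, which is why the same eigendecomposition can serve `sample` and `log_prob`.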
## Configuration Classes

### `GaussianConfig` (dataclass)

```python
GaussianConfig(loop=TrainingLoopConfig(), network=GaussianPosteriorConfig(), optimizer=_default_optimizer_config(), inference=InferenceConfig(), embedding=None, device=None)
```

Top-level Gaussian estimator configuration.

### `GaussianPosteriorConfig` (dataclass)

```python
GaussianPosteriorConfig(hidden_dim=128, num_layers=3, momentum=0.01, min_var=1e-20, eig_update_freq=1)
```

Configuration for the `GaussianPosterior` network.