Neural Causal Regularization
NCR is a light‑weight Python library that implements the risk‑gap penalty for training deep models that generalise across distribution shifts. It ships with:
- 📦 `train_ncr` — a 30‑line training loop for any `torch.nn.Module` (usage sketch below)
- 🔌 `risk_gap` & `risk_variance` penalties (NCR & REx)
- 🧰 helpers: `accuracy`, `tune_lambda`, and a toy `ColoredMNIST` generator
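For orientation, a typical workflow might look like the sketch below. The import path, return values, and argument names are assumptions made for illustration only; the real signatures are listed in the API Reference.

```python
import torch.nn as nn

# Hypothetical sketch: the import path, return values, and argument names
# below are assumptions for illustration, not the documented API.
from ncr import train_ncr, ColoredMNIST

env1, env2 = ColoredMNIST()                      # assumed: two training environments
model = nn.Sequential(nn.Flatten(), nn.LazyLinear(10))

# assumed signature: model, list of environment datasets, penalty weight lambda
train_ncr(model, envs=[env1, env2], lam=1.0, epochs=5)
```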
Why NCR?
Standard ERM optimises average performance and often memorises spurious shortcuts. NCR adds an ℓ1 penalty on the risk difference |R1 − R2| between environments, nudging the network toward features that are stable across environments, typically the causal ones.
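Concretely, the objective adds the absolute gap between per‑environment risks to the pooled ERM loss. Below is a minimal sketch of that computation in plain PyTorch; the function and variable names are illustrative, not the library's internals.

```python
import torch
import torch.nn.functional as F

def ncr_objective(model, batch_env1, batch_env2, lam=1.0):
    """Pooled ERM loss over both environments plus the lambda-weighted risk gap."""
    x1, y1 = batch_env1
    x2, y2 = batch_env2
    r1 = F.cross_entropy(model(x1), y1)   # risk R1 on environment 1
    r2 = F.cross_entropy(model(x2), y2)   # risk R2 on environment 2
    erm = r1 + r2                         # standard pooled ERM term
    gap = torch.abs(r1 - r2)              # |R1 - R2|, the risk-gap penalty
    return erm + lam * gap
```

Setting `lam = 0` recovers pooled ERM; larger values penalise any imbalance in per‑environment risk, favouring features whose predictive power holds in both environments.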
Getting around
Use the sidebar (left) to jump to:
- Installation: pip / conda commands.
- Getting Started: 20‑line Python example on ColoredMNIST.
- Theory: formal objective & link to the 2025 paper draft.
- API Reference: every public function with signatures.