A framework that extends causal invariance principles to deep neural networks, enabling improved out-of-distribution generalization and causal feature identification.
Neural Causal Regularization provides a comprehensive framework for causal machine learning with deep models.
Extends invariant learning principles to nonlinear deep neural networks, enabling stable prediction across environments.
Improves model performance on out-of-distribution data by focusing on invariant causal relationships.
Identifies causal features that have stable relationships with the target variable across environments.
Seamlessly integrates with existing deep learning workflows in both R and Python.
Comprehensive tools for generating data from various structural equation models for benchmarking.
Backed by theoretical results on out-of-distribution generalization and causal feature identification.
Neural Causal Regularization (NCR) is a framework that extends causal invariance principles to deep neural networks. Traditional machine learning models often struggle with distribution shifts because they may learn spurious correlations that don't hold in new environments.
NCR addresses this challenge by training on data from multiple environments and penalizing the variance of prediction risks across these environments. This encourages the model to learn features that lead to stable performance, which typically correspond to causal relationships.
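As a rough illustration (a minimal sketch in plain R, not the package's actual implementation), the regularized objective combines the average risk across environments with a penalty on the variance of the per-environment risks; the names ncr_objective, preds_list, and y_list below are hypothetical:

# Minimal sketch of an NCR-style objective with squared-error risk.
# preds_list and y_list are hypothetical lists of per-environment
# predictions and targets; lambda_reg weights the variance penalty.
ncr_objective <- function(preds_list, y_list, lambda_reg = 1.0) {
  risks <- mapply(function(pred, y) mean((pred - y)^2), preds_list, y_list)
  mean(risks) + lambda_reg * var(risks)
}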
The framework provides theoretical guarantees on out-of-distribution generalization and causal feature identification, making it a powerful tool for applications where robustness to distribution shifts is critical.
Getting started with Neural Causal Regularization is easy. The package provides a simple API for training models with causal regularization.
Install the package from GitHub:
# install.packages("devtools")
devtools::install_github("username/ncausalreg")
Train a model with Neural Causal Regularization:
library(ncausalreg)

# Create a neural network model (p is the number of input features)
model <- ncr_model(input_dim = p, hidden_dims = c(10, 5))

# Train with causal regularization: x_train_list and y_train_list hold one
# design matrix and one response vector per training environment, and
# lambda_reg controls the strength of the cross-environment variance penalty
result <- train_ncr(
  model = model,
  x_train_list = x_train_list,
  y_train_list = y_train_list,
  lambda_reg = 1.0
)
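For context, x_train_list and y_train_list are expected to contain one design matrix and one response vector per training environment. A hypothetical two-environment example (plain R, not the package's own data-generation tools), in which X1 has an invariant causal effect on Y while X2 is only spuriously correlated with Y, could look like this:

# Hypothetical two-environment data: X1 causes Y with a stable coefficient,
# while the association between X2 and Y changes across environments.
set.seed(1)
p <- 2  # number of input features, matching input_dim above
make_env <- function(n, spurious_strength) {
  x1 <- rnorm(n)
  y  <- 1.5 * x1 + rnorm(n)                # invariant causal mechanism
  x2 <- spurious_strength * y + rnorm(n)   # environment-specific correlation
  list(x = cbind(x1, x2), y = y)
}
env1 <- make_env(500, spurious_strength = 2.0)
env2 <- make_env(500, spurious_strength = 0.5)
x_train_list <- list(env1$x, env2$x)
y_train_list <- list(env1$y, env2$y)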
Neural Causal Regularization can be applied to a wide range of domains where distribution shifts are common.
Develop robust predictive models that generalize across different patient populations and clinical settings.
Build models that remain valid under policy changes and economic shifts.
Create predictive models that generalize across different climate regimes and time periods.
Develop control policies that transfer across different environments and physical conditions.
If you use Neural Causal Regularization in your research, please cite our paper:
@article{richter2025neural,
  title={Neural Causal Regularization: Extending Causal Invariance to Deep Models},
  author={Richter, F. and Rigana, K. and Wit, E.},
  journal={},
  year={2025}
}