Practical examples and tutorials to help you get the most out of Neural Causal Regularization
This page provides a collection of examples and tutorials that demonstrate how to use the Neural Causal Regularization package for various tasks. Each example includes detailed explanations and code snippets to help you understand the concepts and apply them to your own projects.
Learn the fundamentals of Neural Causal Regularization with a simple linear structural equation model.
Explore how Neural Causal Regularization works with nonlinear structural equation models.
Learn how Neural Causal Regularization handles scenarios with hidden confounding variables.
Discover how Neural Causal Regularization performs under challenging adversarial environment shifts.
Learn how to extract and interpret feature importance from Neural Causal Regularization models.
Apply Neural Causal Regularization to real-world datasets and compare with baseline methods.
In addition to the examples above, we provide step-by-step tutorials to help you master Neural Causal Regularization:
A comprehensive introduction to Neural Causal Regularization for beginners.
Advanced techniques and best practices for Neural Causal Regularization.
Learn how to create custom neural network architectures with Neural Causal Regularization.
View TutorialHere are some useful code snippets to help you get started with Neural Causal Regularization:
# Generate data from a linear structural equation model
p <- 5                      # Number of features
gamma <- runif(p, -1, 1)    # Effect of shift variable on covariates
beta <- c(1, 0.5, 0, 0, 0)  # Only the first two variables are causal
data <- generate_linear_sem(
  n_samples = 1000,
  p = p,
  gamma = gamma,
  beta = beta,
  env_shift = 1.5
)
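Before training, the data must be arranged as one torch tensor per environment, since `train_ncr()` takes per-environment lists. A minimal sketch of that conversion is below; it assumes `generate_linear_sem()` returns a list with a feature matrix `x`, a response vector `y`, and an environment label `env` (these field names are assumptions; check the function's actual return value).

```r
library(torch)

# Assumed fields: data$x (n x p matrix), data$y (length-n vector),
# data$env (environment label per observation)
x_train_list_torch <- lapply(split(seq_len(nrow(data$x)), data$env), function(idx) {
  torch_tensor(data$x[idx, , drop = FALSE], dtype = torch_float())
})
y_train_list_torch <- lapply(split(data$y, data$env), function(y_e) {
  torch_tensor(matrix(y_e, ncol = 1), dtype = torch_float())
})
```

Each list element then corresponds to one training environment, which is what the invariance penalty in Neural Causal Regularization is computed across.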
# Create a neural network model
model <- ncr_model(input_dim = p, hidden_dims = c(10, 5))
# Train with Neural Causal Regularization.
# x_train_list_torch and y_train_list_torch are lists of torch tensors,
# one element per training environment.
result <- train_ncr(
  model = model,
  x_train_list = x_train_list_torch,
  y_train_list = y_train_list_torch,
  lambda_reg = 1.0,  # Regularization strength
  lr = 0.01,         # Learning rate
  n_epochs = 100,    # Number of epochs
  batch_size = 32    # Batch size
)
# Evaluate model
mse <- evaluate_model(result$model, x_test, y_test)
# Extract feature importance
importance <- extract_feature_importance(result$model, x_test, y_test)
# Plot feature importance
plot_feature_importance(importance)
# Plot training history
plot_training_history(result)
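To put the test MSE in context, as in the real-world data example above, you can compare against a simple pooled ordinary-least-squares baseline fit on the same training data. The base-R sketch below assumes `x_train_list` and `y_train_list` hold the raw (non-tensor) per-environment design matrices and response vectors; adapt as needed.

```r
# Pool all training environments and fit OLS as a baseline
x_pooled <- do.call(rbind, x_train_list)
y_pooled <- unlist(y_train_list)
ols_fit <- lm.fit(cbind(1, x_pooled), y_pooled)  # column of 1s = intercept

# Baseline test-set MSE, for comparison with the NCR model's MSE
ols_pred <- cbind(1, as.matrix(x_test)) %*% ols_fit$coefficients
ols_mse <- mean((y_test - ols_pred)^2)
```

Under strong environment shifts, the regularized model is expected to trade some in-distribution accuracy for stability across environments relative to this pooled baseline.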
For more examples and tutorials, check out the following resources: