ElasticNet

#include <Skigen/LinearModel>

template <typename Scalar = double>
class Skigen::ElasticNet

ElasticNet(Scalar alpha = 1, Scalar l1_ratio = 0.5, bool fit_intercept = true, int max_iter = 1000, Scalar tol = 1e-4)

Linear regression with combined L1 and L2 priors as regularizer.

Minimizes the objective function:

$$\frac{1}{2 n_{\mathrm{samples}}} \|y - Xw\|_2^2 + \alpha \cdot \texttt{l1\_ratio} \cdot \|w\|_1 + \frac{\alpha \cdot (1 - \texttt{l1\_ratio})}{2} \|w\|_2^2$$

If you are interested in controlling the L1 and L2 penalty separately, keep in mind that this is equivalent to:

$$a \|w\|_1 + \tfrac{b}{2} \|w\|_2^2 \quad\text{where}\quad \alpha = a + b,\; \texttt{l1\_ratio} = \frac{a}{a + b}$$

The parameter l1_ratio corresponds to alpha in the glmnet R package while alpha corresponds to the lambda parameter in glmnet. Specifically, l1_ratio = 1 is the Lasso penalty.

Mirrors sklearn.linear_model.ElasticNet.



Parameters:

  • alpha : Scalar, default=1 Constant that multiplies the penalty terms. alpha = 0 is equivalent to ordinary least squares, solved by LinearRegression. For numerical reasons, using alpha = 0 with ElasticNet is not advised.

  • l1_ratio : Scalar, default=0.5 The ElasticNet mixing parameter, with 0 <= l1_ratio <= 1: l1_ratio = 0 → pure L2 penalty (Ridge); l1_ratio = 1 → pure L1 penalty (Lasso); 0 < l1_ratio < 1 → combination of L1 and L2.

  • fit_intercept : bool, default=true Whether the intercept should be estimated. If false, the data is assumed to be already centered.

  • max_iter : int, default=1000 The maximum number of iterations.

  • tol : Scalar, default=1e-4 The tolerance for the optimization: if the maximum coordinate update is smaller than tol, the solver stops.


Attributes:

  • coef : RowVectorType Parameter vector $w$ (1 × n_features).

  • intercept : Scalar Independent term in the decision function.


Methods

fit(X, y)

Fit model with coordinate descent.

Centers the data when fit_intercept is true, then runs coordinate descent with soft-thresholding until convergence or max_iter iterations.

Parameters:

  • X : MatrixType Design matrix of shape (n_samples, n_features).

  • y : VectorType Target vector of shape (n_samples,). Will be cast to Scalar if necessary.

Returns:

  • result : ElasticNet Reference to the fitted estimator (*this).
note

sklearn parity gap: sample_weight and check_input parameters are not yet supported.


predict(X)

Predict using the linear model.

Computes $\hat{y} = Xw + b$ where $w$ and $b$ are the fitted coefficients and intercept.

Parameters:

  • X : MatrixType Sample matrix of shape (n_samples, n_features).

Returns:

  • result : VectorType Predicted values of shape (n_samples,).

Throws:

  • std::runtime_error — if the model has not been fitted.

score(X, y)

Return the $R^2$ coefficient of determination on test data.

The coefficient of determination is defined as $R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$. The best possible score is 1.0; it can be negative, since the model can be arbitrarily worse than predicting the mean.

Parameters:

  • X : MatrixType Test samples of shape (n_samples, n_features).

  • y : VectorType True values of shape (n_samples,).

Returns:

  • result : Scalar $R^2$ score.

Throws:

  • std::runtime_error — if the model has not been fitted.
note

sklearn parity gap: sample_weight parameter is not yet supported.


Example

// Compare L1/L2 ratio effects.
// X_tr, X_te, and split are assumed to come from an earlier train/test split.
std::cout << "=== ElasticNet: l1_ratio sweep (alpha=0.1) ===\n";
for (double ratio : {0.1, 0.3, 0.5, 0.7, 0.9}) {
    Skigen::ElasticNet<double> model(0.1, ratio);
    model.fit(X_tr, split.y_train);

    // Count coefficients the L1 penalty did not shrink to zero.
    int nonzero = (model.coef().array().abs() > 1e-10).count();
    std::cout << "  l1_ratio=" << ratio
              << "  non-zero=" << nonzero
              << "  R²=" << model.score(X_te, split.y_test) << "\n";
}