Update on 2025-03-10

Shrinkage methods: Lasso and Ridge

\[ \begin{aligned} \hat {\boldsymbol \beta}^{\rm ridge} &= \underset{\boldsymbol \beta} {\rm argmin} \bigg\{ \sum_{i = 1}^n (y_i - \sum_{j = 1}^{p}\mathbf x_{ij} \beta_j)^2 \bigg\} \\ &\mbox{subject to } \sum_{j = 1}^p \beta_j^2 \leq s. \end{aligned} \] Clearly \(0 \leq s \leq \sum_j \hat{\beta}_j^2\), where \(\hat{\beta}_j\) are the OLS estimates. A more common, equivalent form is: \[ \hat {\boldsymbol \beta}^{\rm ridge} = \underset{\boldsymbol \beta} {\rm argmin} \bigg\{\frac{1}{2}\sum_{i = 1}^n (y_i - \sum_{j = 1}^{p}\mathbf x_{ij} \beta_j)^2 \color{red}{+ \lambda\sum_{j = 1}^p \beta_j^2} \bigg\}. \] For every \(s > 0\) there is a \(\lambda > 0\) in one-to-one correspondence with it: the smaller \(s\), the larger the corresponding \(\lambda\), and the more strongly the coefficients \(\boldsymbol \beta\) are shrunk.
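To see the \(\lambda\)–\(s\) correspondence numerically, here is a minimal numpy sketch (toy data; all names are illustrative, and it uses the closed-form ridge solution derived below): as \(\lambda\) grows, \(\sum_j \beta_j^2\) of the fitted coefficients shrinks, i.e. the effective budget \(s\) decreases.

```python
import numpy as np

# Toy data; names and sizes are made up purely for illustration.
rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
y = X @ np.arange(1.0, p + 1) + rng.standard_normal(n)

# Larger lambda -> smaller sum of squared coefficients (smaller effective s),
# using the closed-form ridge solution derived in the next section.
for lam in (0.0, 1.0, 10.0, 100.0):
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    print(f"lambda = {lam:6.1f}   sum_j beta_j^2 = {np.sum(beta**2):.3f}")
```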

Solving Ridge

\[ RSS(\lambda) = (\mathbf y - \mathbf X \boldsymbol \beta)^T(\mathbf y - \mathbf X \boldsymbol \beta) \color{red}{ + \lambda \boldsymbol \beta^T \boldsymbol \beta}. \] Setting the derivative with respect to \(\boldsymbol \beta\) to zero gives:

\[ \begin{aligned} \hat{\boldsymbol \beta}^{\rm ridge} &= (\mathbf X^T \mathbf X \color{red}{ + \lambda \mathbf I})^{-1} \mathbf X^T \mathbf y \\ &= \mathbf U (\mathbf \Sigma \color{red}{ + \lambda \mathbf I})^{-1} \mathbf U^T \mathbf X^T \mathbf y, \end{aligned} \] where \(\mathbf X^T \mathbf X = \mathbf U \mathbf \Sigma \mathbf U^T\) is the eigendecomposition, with the eigenvalues \(d_j\) on the diagonal of \(\mathbf \Sigma\). Let \(R = (\mathbf \Sigma + \lambda \mathbf I)^{-1}\), whose diagonal entries are \[ r_{jj} = \frac{1}{d_j \color{red}{ + \lambda}}. \] For the fitted values: \[ \begin{aligned} \hat{\mathbf y}^{\rm ridge} = \mathbf X \hat{\boldsymbol \beta}^{\rm ridge} &= \mathbf X(\mathbf X^T \mathbf X + \lambda \mathbf I)^{-1} \mathbf X^T \mathbf y \\ &= \mathbf X\mathbf U(\mathbf \Sigma + \lambda \mathbf I)^{-1} \mathbf U^T \mathbf X^T \mathbf y. \end{aligned} \]
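A minimal numpy check of the identity above (toy data; the names are illustrative): the direct closed form and the eigendecomposition route give the same \(\hat{\boldsymbol \beta}^{\rm ridge}\).

```python
import numpy as np

# Toy data (illustrative); check that both expressions for beta_ridge agree.
rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
lam = 2.0

# Direct closed form: (X^T X + lambda I)^{-1} X^T y
beta_direct = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Via the eigendecomposition X^T X = U diag(d) U^T
d, U = np.linalg.eigh(X.T @ X)                  # d holds the eigenvalues d_j
beta_eig = U @ ((U.T @ (X.T @ y)) / (d + lam))  # applies r_jj = 1/(d_j + lam)

assert np.allclose(beta_direct, beta_eig)
```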

How should we understand shrinkage?

For \(\hat{y}_i^{\rm ridge}\): \[ \begin{aligned} \hat{y}_i^{\rm ridge} = \mathbf x_i^T \hat{\boldsymbol \beta}^{\rm ridge} &= \underbrace{\mathbf x_i^T \mathbf U}_{\mathbf z_1^T}(\mathbf \Sigma + \lambda \mathbf I)^{-1} \underbrace{\mathbf U^T \mathbf X^T \mathbf y}_{\mathbf z_2}\\ &= \sum_{j = 1}^p\frac{1}{d_j \color{red}{ + \lambda} } z_{1j}z_{2j}. \end{aligned} \] For OLS: \[ \begin{aligned} \hat{y}_i^{\rm OLS} = \mathbf x_i^T \hat{\boldsymbol \beta}^{\rm OLS} &= \underbrace{\mathbf x_i^T \mathbf U}_{\mathbf z_1^T}\mathbf \Sigma^{-1} \underbrace{\mathbf U^T \mathbf X^T \mathbf y}_{\mathbf z_2}\\ &= \sum_{j = 1}^p\frac{1}{d_j}z_{1j}z_{2j}. \end{aligned} \] Compared with OLS, ridge multiplies the contribution of each eigen-direction by \(d_j/(d_j + \lambda) < 1\): every component is shrunk, and directions with small eigenvalues \(d_j\) are shrunk the most.
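This component-wise picture can be verified directly; a short numpy sketch under the same eigendecomposition convention (toy data, illustrative names):

```python
import numpy as np

# Toy data (illustrative): ridge shrinks each eigen-direction of X^T X
# by the factor d_j / (d_j + lambda) relative to OLS.
rng = np.random.default_rng(2)
n, p = 200, 4
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
lam = 3.0

d, U = np.linalg.eigh(X.T @ X)     # X^T X = U diag(d) U^T
z2 = U.T @ (X.T @ y)

beta_ols = U @ (z2 / d)            # components weighted by 1/d_j
beta_ridge = U @ (z2 / (d + lam))  # components weighted by 1/(d_j + lam)

assert np.allclose(U.T @ beta_ridge, (d / (d + lam)) * (U.T @ beta_ols))
```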

Lasso: \(L_1\)-penalized regression

\[ \begin{aligned} \hat {\boldsymbol \beta}^{\rm lasso} &= \underset{\boldsymbol \beta} {\rm argmin} \bigg\{ \sum_{i = 1}^n (y_i - \sum_{j = 1}^{p}\mathbf x_{ij} \beta_j)^2 \bigg\} \\ &\mbox{subject to } \sum_{j = 1}^p |\beta_j| \leq s. \end{aligned} \] Clearly \(0 \leq s \leq \sum_j |\hat{\beta}_j|\), where \(\hat{\beta}_j\) are the OLS estimates. A more common form: \[ \hat {\boldsymbol \beta}^{\rm lasso} = \underset{\boldsymbol \beta} {\rm argmin} \bigg\{\frac{1}{2} \sum_{i = 1}^n (y_i - \sum_{j = 1}^{p}\mathbf x_{ij} \beta_j)^2 \color{red}{+ \lambda\sum_{j = 1}^p |\beta_j|} \bigg\}. \]

For every \(s > 0\) there is a \(\lambda > 0\) in one-to-one correspondence with it: the smaller \(s\), the larger the corresponding \(\lambda\), and the more strongly the coefficients \(\boldsymbol \beta\) are shrunk.
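As a quick illustration of the qualitative difference between the two penalties (a sketch assuming scikit-learn is available; note its penalty scaling differs slightly from the formulas above), the \(L_1\) penalty drives many coefficients exactly to zero while the \(L_2\) penalty only shrinks them:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Toy data with only two true signals among ten covariates (illustrative).
rng = np.random.default_rng(3)
n, p = 100, 10
X = rng.standard_normal((n, p))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.standard_normal(n)

# scikit-learn's Lasso minimizes (1/(2n))*RSS + alpha*||beta||_1,
# so its alpha corresponds to lambda up to a 1/n scaling.
lasso = Lasso(alpha=0.2).fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

print("lasso nonzero coefficients:", np.flatnonzero(lasso.coef_))  # typically {0, 1}
print("ridge nonzero coefficients:", np.flatnonzero(ridge.coef_))  # all p of them
```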

Solving the Lasso

The simplest case: there is exactly one covariate \(\mathbf x\), standardized so that \(\sum_i x_i = 0\), \(\sum_i x_i^2 = 1\), and \(\sum_i y_i = 0\). The OLS solution is then \(\hat{\beta} = \color{red}{\mathbf x^T \mathbf y}\), and we have:

\[ \begin{aligned} f(\beta) &= \frac{1}{2} (\mathbf y - \mathbf x \beta)^T (\mathbf y - \mathbf x \beta) + \lambda|\beta|\\ &= \frac{1}{2} \big(\mathbf y^T \mathbf y - 2 \beta \color{red}{\mathbf x^T \mathbf y} + \beta^2 \mathbf x^T \mathbf x\big) + \lambda|\beta| \\ &= \frac{1}{2} RSS(\hat{\beta}) + \frac{1}{2}\big(\beta^2 - 2 \beta \hat{\beta} + \hat{\beta}^2\big) + \lambda|\beta|, \end{aligned} \] using \(\mathbf x^T \mathbf x = 1\), \(\mathbf x^T \mathbf y = \hat{\beta}\), and \(\mathbf y^T \mathbf y = RSS(\hat{\beta}) + \hat{\beta}^2\). Since \(RSS(\hat{\beta})\) does not depend on \(\beta\), minimizing \(f\) is equivalent to: \[ \hat{\beta}^{\rm lasso} = \underset{\beta} {\rm argmin}\ f(\beta) = \underset{\beta} {\rm argmin}\ \frac{1}{2}(\beta - \hat{\beta})^2 + \lambda|\beta|. \]
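A numeric sanity check of this equivalence (toy data; the names are illustrative): the two objectives differ by the constant \(\frac{1}{2}RSS(\hat{\beta})\), so they share the same minimizer.

```python
import numpy as np

# Check that f(beta) and (1/2)(beta - beta_hat)^2 + lam*|beta|
# differ only by a constant on a grid of beta values.
rng = np.random.default_rng(4)
n = 50
x = rng.standard_normal(n)
x = x - x.mean()
x = x / np.sqrt(np.sum(x**2))      # now sum x_i = 0 and sum x_i^2 = 1
y = rng.standard_normal(n)
y = y - y.mean()                   # sum y_i = 0
beta_hat = x @ y                   # OLS solution
lam = 0.1

betas = np.linspace(-2, 2, 2001)
f = 0.5 * np.sum((y[:, None] - np.outer(x, betas))**2, axis=0) + lam * np.abs(betas)
g = 0.5 * (betas - beta_hat)**2 + lam * np.abs(betas)

assert np.allclose(f - g, (f - g)[0])   # constant gap => same minimizer
```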

Solving the Lasso (continued)

When \(\beta \geq 0\), \[ \frac{df}{d\beta} = \beta - \hat{\beta} + \lambda = 0 \implies \beta = \hat{\beta} - \lambda. \] Hence if \(\hat{\beta} > \lambda\), then \(\hat{\beta}^{\rm lasso} = \hat{\beta} - \lambda\); if \(\hat{\beta} \leq \lambda\), the derivative satisfies \(\beta - \hat{\beta} + \lambda \geq 0\) throughout the region, so \(\hat{\beta}^{\rm lasso} = 0\).

Similarly, when \(\beta \leq 0\), \[ \frac{df}{d\beta} = \beta - \hat{\beta} - \lambda = 0 \implies \beta = \hat{\beta} + \lambda, \] so if \(\hat{\beta} < -\lambda\), then \(\hat{\beta}^{\rm lasso} = \hat{\beta} + \lambda\); otherwise \(\hat{\beta}^{\rm lasso} = 0\).

In summary, the Lasso solution is a soft-thresholding of \(\hat{\beta}\) at level \(\lambda\): \[ \hat{\beta}^{\rm lasso} = {\rm sign}(\hat{\beta})(|\hat{\beta}| - \lambda)_+ = \left\{ \begin{array}{rcl} \hat{\beta} - \lambda, & \mbox{for} & \lambda < |\hat{\beta}|\ {\rm and}\ \hat{\beta} > 0, \\ \hat{\beta} + \lambda, & \mbox{for} & \lambda < |\hat{\beta}|\ {\rm and}\ \hat{\beta} < 0, \\ 0, & \mbox{for} & \lambda \geq |\hat{\beta}|. \end{array}\right. \]
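A direct implementation of the soft-thresholding operator, checked by brute force against the one-dimensional objective (a sketch; the function name is my own):

```python
import numpy as np

def soft_threshold(b_hat, lam):
    """Soft-thresholding: sign(b_hat) * (|b_hat| - lam)_+ ."""
    return np.sign(b_hat) * np.maximum(np.abs(b_hat) - lam, 0.0)

# Brute-force check against the one-dimensional lasso objective.
betas = np.linspace(-5, 5, 100001)
lam = 1.0
for b_hat in (-2.0, -0.3, 0.0, 0.7, 3.1):
    obj = 0.5 * (betas - b_hat)**2 + lam * np.abs(betas)
    assert abs(betas[np.argmin(obj)] - soft_threshold(b_hat, lam)) < 1e-3
```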

Soft-thresholding

Comparing Ridge and Lasso

Comparison of the Lasso (left) and Ridge (right) regression coefficients. The red lines are the contours of the OLS residual sum of squares.

Ridge coefficients: plot

Lasso coefficients: plot
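The two coefficient-path plots referenced above can be reproduced with a short sketch (assuming numpy, scikit-learn, and matplotlib; the data are made up): ridge coefficients shrink smoothly toward zero, while lasso coefficients hit exactly zero one by one.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge, lasso_path

# Toy data (illustrative names throughout).
rng = np.random.default_rng(5)
n, p = 100, 8
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

# Ridge path: refit over a grid of penalties.
lams = np.logspace(-2, 3, 50)
ridge_coefs = np.array([Ridge(alpha=lam).fit(X, y).coef_ for lam in lams])

# Lasso path: computed over scikit-learn's default penalty grid.
alphas, lasso_coefs, _ = lasso_path(X, y)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(np.log10(lams), ridge_coefs)
ax1.set_title("Ridge coefficient paths")
ax2.plot(np.log10(alphas), lasso_coefs.T)
ax2.set_title("Lasso coefficient paths")
plt.show()
```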

Application: finding the missing heritability

Norm

\[ L_q = \sum_j |\beta_j|^q. \] Lasso is an \(L_1\)-norm penalty (\(q = 1\)) and Ridge is an \(L_2\)-norm penalty (\(q = 2\)).

The ultimate weapon: Elastic-net

\[ \hat {\boldsymbol \beta}^{\rm e-net} = \underset{\boldsymbol \beta} {\rm argmin} \bigg\{\frac{1}{2} \sum_{i = 1}^n (y_i - \sum_{j = 1}^{p}\mathbf x_{ij} \beta_j)^2 \color{red}{+ \lambda \sum_{j = 1}^p \big(\alpha |\beta_j| + (1 - \alpha) \beta_j^2\big )} \bigg\}. \]
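A minimal usage sketch with scikit-learn's `ElasticNet` (toy data; note that scikit-learn's `l1_ratio` plays the role of \(\alpha\) above, and its exact penalty scaling differs slightly from this formula):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Toy data with two true signals among many noise covariates (illustrative).
rng = np.random.default_rng(6)
n, p = 100, 20
X = rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 1] + rng.standard_normal(n)

# l1_ratio = 1 recovers the lasso, l1_ratio = 0 recovers ridge.
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print("selected covariates:", np.flatnonzero(model.coef_))
```

With \(\alpha\) strictly between 0 and 1, the elastic net keeps the lasso's variable selection while the ridge part stabilizes the solution when covariates are correlated.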