Derivation of Linear Regression
There are \(m\) samples \((x_{i}, y_{i})\):
\[\{x_{1},x_{2},x_{3},\dots,x_{m}\}\]
\[\{y_{1},y_{2},y_{3},\dots,y_{m}\}\]
where each \(x_{i}\) is the original \((n-1)\)-dimensional feature vector with a 1 appended at the end, so its dimension matches that of \(w\) for the inner product:
\[x_{i}=\{x_{i1},x_{i2},x_{i3},\dots,x_{i(n-1)},1\}\]
where \(w\) is an \(n\)-dimensional vector:
\[w=\{w_{1},w_{2},w_{3},\dots,w_{n}\}\]
Regression function:
\[h_{w}(x_{i})=wx_{i}\]
Loss function:
\[J(w)=\frac{1}{2}\sum_{i=1}^{m}(h_{w}(x_{i})-y_{i})^2\]
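The loss \(J(w)\) can be evaluated directly in code. A minimal sketch with NumPy, on a small hypothetical dataset (the values of `X` and `y` are made up for illustration); the column of ones appended to `X` plays the role of the trailing 1 in each \(x_{i}\) above:

```python
import numpy as np

# Hypothetical data: m = 3 samples, n-1 = 2 features each.
X = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [3.0, 1.5]])
y = np.array([5.0, 3.5, 7.0])

# Append a column of ones so each x_i matches the dimension of w.
X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # shape (m, n)

def loss(w, X1, y):
    """J(w) = 1/2 * sum_i (w . x_i - y_i)^2"""
    residuals = X1 @ w - y
    return 0.5 * np.sum(residuals ** 2)

# At w = 0, every prediction is 0, so J(0) = 1/2 * sum_i y_i^2.
w0 = np.zeros(X1.shape[1])
print(loss(w0, X1, y))  # 43.125
```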
The goal is to find the \(w\) that minimizes \(J(w)\):
\[w^{*}=\arg\min_{w}J(w)\]
Take the partial derivative of the loss function with respect to each \(w_{j}\) in \(w\):
\[\frac{\partial J(w)}{\partial w_{j}}=\frac{\partial}{\partial w_{j}}\frac{1}{2}\sum_{i=1}^{m}(h_{w}(x_{i})-y_{i})^2\]
\[=\frac{1}{2}*2*\sum_{i=1}^{m}(h_{w}(x_{i})-y_{i})*\frac{\partial (h_{w}(x_{i})-y_{i})}{\partial w_{j}}\]
\[=\sum_{i=1}^{m}(h_{w}(x_{i})-y_{i})*\frac{\partial (wx_{i}-y_{i})}{\partial w_{j}}\]
\[\frac{\partial J(w)}{\partial w_{j}}=\sum_{i=1}^{m}(h_{w}(x_{i})-y_{i})*x_{ij}\]
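In vector form the gradient above is \(X^{T}(Xw-y)\). A quick sketch that checks this closed-form gradient against a central finite difference on one coordinate (random data, purely illustrative):

```python
import numpy as np

def gradient(w, X1, y):
    """dJ/dw_j = sum_i (w . x_i - y_i) * x_ij, i.e. X1^T (X1 w - y)."""
    return X1.T @ (X1 @ w - y)

# Hypothetical random problem: 5 samples, 2 features plus the appended 1.
rng = np.random.default_rng(0)
X1 = np.hstack([rng.normal(size=(5, 2)), np.ones((5, 1))])
y = rng.normal(size=5)
w = rng.normal(size=3)

def loss(w):
    r = X1 @ w - y
    return 0.5 * np.sum(r ** 2)

# Central difference in coordinate j should match gradient(w)[j].
eps, j = 1e-6, 0
e_j = np.eye(3)[j]
num = (loss(w + eps * e_j) - loss(w - eps * e_j)) / (2 * eps)
print(abs(num - gradient(w, X1, y)[j]))  # close to 0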
Update each \(w_{j}\) in \(w\), where \(\alpha\) is the learning rate:
\[w_{j}:=w_{j}-\alpha*\frac{\partial J(w)}{\partial w_{j}}\]
Batch gradient descent: use all \(m\) samples to update each \(w_{j}\) in \(w\):
\[w_{j}:=w_{j}-\alpha*\sum_{i=1}^{m}(h_{w}(x_{i})-y_{i})*x_{ij}\]
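Putting the pieces together, a minimal batch-gradient-descent sketch on a hypothetical noise-free synthetic problem (the true weights `w_true`, the learning rate, and the iteration count are all assumptions chosen for illustration, not values from the original post):

```python
import numpy as np

# Synthetic data generated from known weights; the last entry of w_true
# acts as the intercept via the appended column of ones.
rng = np.random.default_rng(42)
m = 200
X = rng.normal(size=(m, 2))
X1 = np.hstack([X, np.ones((m, 1))])
w_true = np.array([2.0, -1.0, 0.5])
y = X1 @ w_true

alpha = 1e-3          # learning rate (must be small: the update uses a sum over all m samples)
w = np.zeros(3)
for _ in range(1000):
    # w_j := w_j - alpha * sum_i (h_w(x_i) - y_i) * x_ij, for all j at once
    w -= alpha * (X1.T @ (X1 @ w - y))

print(w)  # approaches w_true
```

Since the update sums over all \(m\) samples, the effective step grows with \(m\); in practice one often divides the gradient by \(m\) so the learning rate does not need retuning when the dataset size changes.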
Reposted from: https://www.cnblogs.com/smallredness/p/11027873.html