

Abalone Dataset Case Study (Ridge Regression and LASSO)

Published: 2023/12/3

Abalone Dataset Hands-On Case Study

  • Exploratory analysis of the dataset
  • Preprocessing the abalone data
  • One-hot encoding the sex feature so the model can include dummy variables
  • Feature selection
  • Splitting the abalone data into training and test sets
  • Implementing linear regression and ridge regression
  • Linear regression with NumPy
  • Linear regression with scikit-learn
  • Ridge regression with NumPy
  • Ridge regression with scikit-learn
  • Ridge trace analysis
  • Building an abalone age prediction model with LASSO
  • The LASSO regularization path
  • Residual plot

Exploratory Data Analysis

import pandas as pd
import warnings
warnings.filterwarnings('ignore')

data = pd.read_csv(r"E:\大二下\機器學習實踐\abalone_dataset.csv")
data.head()

   sex  length  diameter  height  whole weight  shucked weight  viscera weight  shell weight  rings
0    M   0.455     0.365   0.095        0.5140          0.2245          0.1010         0.150     15
1    M   0.350     0.265   0.090        0.2255          0.0995          0.0485         0.070      7
2    F   0.530     0.420   0.135        0.6770          0.2565          0.1415         0.210      9
3    M   0.440     0.365   0.125        0.5160          0.2155          0.1140         0.155     10
4    I   0.330     0.255   0.080        0.2050          0.0895          0.0395         0.055      7

# Number of samples and features
data.shape
(4177, 9)

# Inspect column types and check for missing values
data.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4177 entries, 0 to 4176
Data columns (total 9 columns):
sex               4177 non-null object
length            4177 non-null float64
diameter          4177 non-null float64
height            4177 non-null float64
whole weight      4177 non-null float64
shucked weight    4177 non-null float64
viscera weight    4177 non-null float64
shell weight      4177 non-null float64
rings             4177 non-null int64
dtypes: float64(7), int64(1), object(1)
memory usage: 293.8+ KB

data.describe()

        length   diameter    height  whole weight  shucked weight  viscera weight  shell weight     rings
count  4177.000   4177.000  4177.000      4177.000        4177.000        4177.000      4177.000  4177.000
mean   0.523992   0.407881  0.139516      0.828742        0.359367        0.180594      0.238831  9.933684
std    0.120093   0.099240  0.041827      0.490389        0.221963        0.109614      0.139203  3.224169
min    0.075000   0.055000  0.000000      0.002000        0.001000        0.000500      0.001500  1.000000
25%    0.450000   0.350000  0.115000      0.441500        0.186000        0.093500      0.130000  8.000000
50%    0.545000   0.425000  0.140000      0.799500        0.336000        0.171000      0.234000  9.000000
75%    0.615000   0.480000  0.165000      1.153000        0.502000        0.253000      0.329000  11.000000
max    0.815000   0.650000  1.130000      2.825500        1.488000        0.760000      1.005000  29.000000

# Look at the distribution of values in the sex column
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

sns.countplot(x="sex", data=data)

data['sex'].value_counts()
M    1528
I    1342
F    1307
Name: sex, dtype: int64

# Plot the distribution of each numeric feature
i = 1  # subplot counter
plt.figure(figsize=(16, 8))
for col in data.columns[1:]:
    plt.subplot(4, 2, i)
    i = i + 1
    sns.distplot(data[col])
plt.tight_layout()

sns.pairplot(data, hue="sex")

corr_df = data.corr()
corr_df

                  length  diameter    height  whole weight  shucked weight  viscera weight  shell weight     rings
length          1.000000  0.986812  0.827554      0.925261        0.897914        0.903018      0.897706  0.556720
diameter        0.986812  1.000000  0.833684      0.925452        0.893162        0.899724      0.905330  0.574660
height          0.827554  0.833684  1.000000      0.819221        0.774972        0.798319      0.817338  0.557467
whole weight    0.925261  0.925452  0.819221      1.000000        0.969405        0.966375      0.955355  0.540390
shucked weight  0.897914  0.893162  0.774972      0.969405        1.000000        0.931961      0.882617  0.420884
viscera weight  0.903018  0.899724  0.798319      0.966375        0.931961        1.000000      0.907656  0.503819
shell weight    0.897706  0.905330  0.817338      0.955355        0.882617        0.907656      1.000000  0.627574
rings           0.556720  0.574660  0.557467      0.540390        0.420884        0.503819      0.627574  1.000000
fig, ax = plt.subplots(figsize=(12, 12))
# Draw the correlation heatmap
ax = sns.heatmap(corr_df, linewidths=.5, cmap="Greens", annot=True,
                 xticklabels=corr_df.columns, yticklabels=corr_df.index)
ax.xaxis.set_label_position('top')
ax.xaxis.tick_top()

Preprocessing the Abalone Data

One-hot encode the sex feature so the model can include dummy variables

# Use pandas get_dummies to one-hot encode the sex feature
sex_onehot = pd.get_dummies(data["sex"], prefix="sex")
data[sex_onehot.columns] = sex_onehot
data.head()

   sex  length  diameter  height  whole weight  shucked weight  viscera weight  shell weight  rings  sex_F  sex_I  sex_M
0    M   0.455     0.365   0.095        0.5140          0.2245          0.1010         0.150     15      0      0      1
1    M   0.350     0.265   0.090        0.2255          0.0995          0.0485         0.070      7      0      0      1
2    F   0.530     0.420   0.135        0.6770          0.2565          0.1415         0.210      9      1      0      0
3    M   0.440     0.365   0.125        0.5160          0.2155          0.1140         0.155     10      0      0      1
4    I   0.330     0.255   0.080        0.2050          0.0895          0.0395         0.055      7      0      1      0
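A quick sketch of why the feature list later keeps only sex_F and sex_M: the three dummy columns always sum to 1, so together with a constant intercept column they are linearly dependent (the "dummy variable trap"). The mini-series below is hypothetical, standing in for the abalone sex column.

```python
import pandas as pd

# Hypothetical stand-in for the abalone sex column
sex = pd.Series(["M", "F", "I", "M", "I"], name="sex")
dummies = pd.get_dummies(sex, prefix="sex")

# Each row has exactly one 1, so the dummies sum to a constant column --
# redundant with the "ones" intercept column added below
row_sums = dummies.sum(axis=1)
```

Dropping one dummy (here sex_I, the infant category, which becomes the baseline) removes the redundancy.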
data["ones"] = 1
data.head()

(head() now also shows the added constant column ones, equal to 1 in every row.)
# Age in years is conventionally rings + 1.5
data["age"] = data["rings"] + 1.5
data.head()

(head() now also shows the age column: 16.5, 8.5, 10.5, 11.5, 8.5 for the first five rows.)

Feature Selection

data.columns
Index(['sex', 'length', 'diameter', 'height', 'whole weight', 'shucked weight',
       'viscera weight', 'shell weight', 'rings', 'sex_F', 'sex_I', 'sex_M',
       'ones', 'age'],
      dtype='object')

y = data["age"]  # response variable
features_with_ones = ["length", "diameter", "height", "whole weight", "shucked weight",
                      "viscera weight", "shell weight", "sex_F", "sex_M", "ones"]
features_without_ones = ["length", "diameter", "height", "whole weight", "shucked weight",
                         "viscera weight", "shell weight", "sex_F", "sex_M"]
X = data[features_with_ones]

Splitting the Abalone Data into Training and Test Sets

# Split into training and test sets
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=111)
X

(The notebook then displays the full design matrix X: 4177 rows × 10 columns.)

Implementing Linear Regression and Ridge Regression

Linear regression with NumPy

import numpy as np

def linear_regression(X, y):
    # Fall back to a zero vector if X'X is singular
    w = np.zeros(X.shape[1])
    if np.linalg.det(X.T.dot(X)) != 0:
        # Closed-form normal-equation solution: w = (X'X)^-1 X'y
        w = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
    return w

# Train the linear regression model on the abalone training set
w1 = linear_regression(X_train, y_train)
w1 = pd.DataFrame(data=w1, index=X.columns, columns=["numpy_w"])
w1.round(decimals=2)

                numpy_w
length            -1.12
diameter          10.00
height            20.74
whole weight       9.61
shucked weight   -20.05
viscera weight   -12.07
shell weight       6.55
sex_F              0.88
sex_M              0.87
ones               4.32
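The normal equation can be sanity-checked against NumPy's least-squares solver on synthetic data. This is a minimal sketch with hypothetical data (the names X_demo, y_demo, true_w are illustrative only); np.linalg.lstsq uses an SVD-based routine that stays stable even when X'X is poorly conditioned.

```python
import numpy as np

# Hypothetical synthetic data; last column plays the role of "ones"
rng = np.random.default_rng(0)
X_demo = np.hstack([rng.normal(size=(100, 3)), np.ones((100, 1))])
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y_demo = X_demo @ true_w + 0.01 * rng.normal(size=100)

# Normal equation, as in linear_regression above
w_normal = np.linalg.inv(X_demo.T @ X_demo) @ X_demo.T @ y_demo
# SVD-based solver, numerically preferable in practice
w_lstsq, *_ = np.linalg.lstsq(X_demo, y_demo, rcond=None)
```

Both estimates should agree to numerical precision and sit close to true_w.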

Linear regression with scikit-learn

from sklearn.linear_model import LinearRegression

lr = LinearRegression()
lr.fit(X_train[features_without_ones], y_train)
print(lr.coef_)
[ -1.118146    10.00094599  20.73712616   9.61484657 -20.05079291
 -12.06849193   6.54529076   0.87855188   0.87283083]

w1
                  numpy_w
length          -1.118146
diameter        10.000946
height          20.737126
whole weight     9.614847
shucked weight -20.050793
viscera weight -12.068492
shell weight     6.545291
sex_F            0.878552
sex_M            0.872831
ones             4.324477

w_lr = []
w_lr.extend(lr.coef_)
w_lr.append(lr.intercept_)
w1["lr_sklearn_w"] = w_lr
w1.round(decimals=2)

                numpy_w  lr_sklearn_w
length            -1.12         -1.12
diameter          10.00         10.00
height            20.74         20.74
whole weight       9.61          9.61
shucked weight   -20.05        -20.05
viscera weight   -12.07        -12.07
shell weight       6.55          6.55
sex_F              0.88          0.88
sex_M              0.87          0.87
ones               4.32          4.32
# Note: when regularizing, the intercept b is excluded from the lambda*I penalty,
# since it is not a slope coefficient.

Ridge regression with NumPy

def ridge_regression(X, y, ridge_lambda):
    penalty_matrix = np.eye(X.shape[1])
    # Leave the intercept (the last column, "ones") unpenalized
    penalty_matrix[X.shape[1] - 1][X.shape[1] - 1] = 0
    # Closed form: w = (X'X + lambda*P)^-1 X'y
    w = np.linalg.inv(X.T.dot(X) + ridge_lambda * penalty_matrix).dot(X.T).dot(y)
    return w
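Two properties of this closed form are easy to verify on synthetic data: with lambda = 0 it reduces to ordinary least squares, and a large lambda shrinks the slope coefficients toward zero while leaving the intercept free. The sketch below uses hypothetical data (ridge_w, X_demo, y_demo are illustrative names, not part of the original notebook).

```python
import numpy as np

def ridge_w(X, y, lam):
    # Same scheme as ridge_regression above: identity penalty,
    # intercept column (last) left unpenalized
    P = np.eye(X.shape[1])
    P[-1, -1] = 0
    return np.linalg.inv(X.T @ X + lam * P) @ X.T @ y

# Hypothetical synthetic data with intercept 4.0
rng = np.random.default_rng(1)
X_demo = np.hstack([rng.normal(size=(200, 3)), np.ones((200, 1))])
y_demo = X_demo @ np.array([2.0, -1.0, 0.5, 4.0]) + 0.1 * rng.normal(size=200)

w_ols = ridge_w(X_demo, y_demo, 0.0)     # lambda = 0 recovers ordinary least squares
w_shrunk = ridge_w(X_demo, y_demo, 1e4)  # heavy penalty shrinks the slopes toward 0
```

The unpenalized intercept stays near the mean of y even under heavy shrinkage, which is exactly why the penalty matrix zeroes out its diagonal entry.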

Train a ridge model on the abalone training set with ridge_regression, setting the regularization coefficient to 1:

w2 = ridge_regression(X_train, y_train, 1.0)
print(w2)
[  2.30976528   6.72038628  10.23298909   7.05879189 -17.16249532
  -7.2343118    9.3936994    0.96869974   0.9422174    4.80583032]

w1["numpy_ridge_w"] = w2
w1.round(decimals=2)

                numpy_w  lr_sklearn_w  numpy_ridge_w
length            -1.12         -1.12           2.31
diameter          10.00         10.00           6.72
height            20.74         20.74          10.23
whole weight       9.61          9.61           7.06
shucked weight   -20.05        -20.05         -17.16
viscera weight   -12.07        -12.07          -7.23
shell weight       6.55          6.55           9.39
sex_F              0.88          0.88           0.97
sex_M              0.87          0.87           0.94
ones               4.32          4.32           4.81

Ridge regression with scikit-learn

For comparison, fit sklearn's ridge regression with the regularization coefficient likewise set to 1:

from sklearn.linear_model import Ridge

ridge = Ridge(alpha=1.0)
ridge.fit(X_train[features_without_ones], y_train)
w_ridge = []
w_ridge.extend(ridge.coef_)
w_ridge.append(ridge.intercept_)
w1["ridge_sklearn_w"] = w_ridge
w1.round(decimals=2)

                numpy_w  lr_sklearn_w  numpy_ridge_w  ridge_sklearn_w
length            -1.12         -1.12           2.31             2.31
diameter          10.00         10.00           6.72             6.72
height            20.74         20.74          10.23            10.23
whole weight       9.61          9.61           7.06             7.06
shucked weight   -20.05        -20.05         -17.16           -17.16
viscera weight   -12.07        -12.07          -7.23            -7.23
shell weight       6.55          6.55           9.39             9.39
sex_F              0.88          0.88           0.97             0.97
sex_M              0.87          0.87           0.94             0.94
ones               4.32          4.32           4.81             4.81

Ridge Trace Analysis

alphas = np.logspace(-10, 10, 20)
coef = pd.DataFrame()
for alpha in alphas:
    ridge_clf = Ridge(alpha=alpha)
    ridge_clf.fit(X_train[features_without_ones], y_train)
    df = pd.DataFrame([ridge_clf.coef_], columns=X_train[features_without_ones].columns)
    df['alpha'] = alpha
    # DataFrame.append was removed in pandas 2.0; concat is the replacement
    coef = pd.concat([coef, df], ignore_index=True)
coef.round(decimals=2)

    length  diameter  height  whole weight  shucked weight  viscera weight  shell weight  sex_F  sex_M     alpha
0    -1.12     10.00   20.74          9.61          -20.05          -12.07          6.55   0.88   0.87  1.00e-10
...rows 1-7 stay at essentially the OLS solution while alpha is tiny...
8    -0.88      9.79   20.13          9.50          -19.94          -11.86          6.71   0.88   0.88  3.00e-02
9     0.73      8.33   15.60          8.55          -18.97          -10.05          7.98   0.92   0.90  3.00e-01
10    3.20      5.02    5.40          5.11          -13.71           -3.67          9.61   1.07   1.00  3.36e+00
11    1.66      1.76    1.12          2.53           -3.54           -0.09          3.67   1.33   1.11  3.79e+01
12    0.51      0.47    0.22          1.63            0.18            0.30          0.79   0.89   0.69  4.28e+02
13    0.12      0.10    0.04          0.46            0.15            0.09          0.16   0.21   0.16  4.83e+03
14    0.01      0.01    0.00          0.05            0.02            0.01          0.02   0.02   0.02  5.46e+04
...rows 15-19 (alpha >= 6.16e+05) are all 0.00: the coefficients are fully shrunk away...
plt.rcParams['figure.dpi'] = 300  # resolution
plt.figure(figsize=(9, 6))
for feature in X_train.columns[:-1]:
    plt.plot('alpha', feature, data=coef)
ax = plt.gca()
ax.set_xscale('log')
plt.legend(loc='upper right')
plt.xlabel(r'$\alpha$', fontsize=15)
plt.ylabel('coefficient', fontsize=15)

Building an Abalone Age Prediction Model with LASSO

from sklearn.linear_model import Lasso

lasso = Lasso(alpha=0.01)
lasso.fit(X_train[features_without_ones], y_train)
print(lasso.coef_)
print(lasso.intercept_)
[  0.           6.37435514   0.           4.46703234 -13.44947667
  -0.          11.85934842   0.98908791   0.93313403]
6.500338023591298
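The exact zeros in lasso.coef_ come from the L1 penalty's soft-thresholding behavior: sklearn's coordinate-descent solver repeatedly applies an operator that shrinks each coefficient and clips small ones to zero. A minimal sketch of that operator (the function name soft_threshold is illustrative, not sklearn's internal API):

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 penalty: shrink |z| by t, clipping to 0
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

vals = np.array([-3.0, -0.5, 0.0, 0.4, 2.0])
shrunk = soft_threshold(vals, 1.0)  # -> [-2.,  0.,  0.,  0.,  1.]
```

Values whose magnitude falls below the threshold are set exactly to zero, which is why LASSO performs feature selection while ridge only shrinks.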

The LASSO Regularization Path

coef = pd.DataFrame()
for alpha in np.linspace(0.0001, 0.2, 20):
    lasso_clf = Lasso(alpha=alpha)
    lasso_clf.fit(X_train[features_without_ones], y_train)
    df = pd.DataFrame([lasso_clf.coef_], columns=X_train[features_without_ones].columns)
    df['alpha'] = alpha
    # DataFrame.append was removed in pandas 2.0; concat is the replacement
    coef = pd.concat([coef, df], ignore_index=True)
coef.head()

# Plot the path
plt.figure(figsize=(9, 6), dpi=600)
for feature in X_train.columns[:-1]:
    plt.plot('alpha', feature, data=coef)
plt.legend(loc='upper right')
plt.xlabel(r'$\alpha$', fontsize=15)
plt.ylabel('coefficient', fontsize=15)
plt.show()

coef

      length  diameter  height     whole   shucked    viscera      shell     sex_F     sex_M     alpha
0  -0.568043  9.392752  0.390041  9.542038  -19.995972  -11.900326   6.635352  0.881496  0.875132  0.000100
1   0.000000  6.025730  0.000000  4.375754  -13.127223   -0.000000  11.897189  0.995137  0.934129  0.010621
2   0.384927  0.000000  0.000000  2.797815   -7.702209   -0.000000  12.478541  1.093479  0.948281  0.021142
3   0.000000  0.000000  0.000000  0.884778   -2.749504    0.000000  11.705974  1.098990  0.897673  0.031663
4   0.000000  0.000000  0.000000  0.322742   -0.000000    0.000000   9.225919  1.072991  0.834021  0.042184
5   0.000000  0.000000  0.000000  1.555502   -0.000000    0.000000   4.610425  1.013824  0.757891  0.052705
6   0.000000  0.000000  0.000000  2.786784   -0.000000    0.000000   0.000000  0.954710  0.681821  0.063226
7   0.000000  0.000000  0.000000  2.797514   -0.000000    0.000000   0.000000  0.848412  0.581613  0.073747
8   0.000000  0.000000  0.000000  2.807843   -0.000000    0.000000   0.000000  0.742529  0.481711  0.084268
9   0.000000  0.000000  0.000000  2.818184   -0.000000    0.000000   0.000000  0.636632  0.381799  0.094789
10  0.000000  0.000000  0.000000  2.828630   -0.000000    0.000000   0.000000  0.530615  0.281801  0.105311
11  0.000000  0.000000  0.000000  2.838944   -0.000000    0.000000   0.000000  0.424750  0.181912  0.115832
12  0.000000  0.000000  0.000000  2.849325   -0.000000    0.000000   0.000000  0.318807  0.081967  0.126353
13  0.000000  0.000000  0.000000  2.851851   -0.000000    0.000000   0.000000  0.225024  0.000000  0.136874
14  0.000000  0.000000  0.000000  2.819079   -0.000000    0.000000   0.000000  0.186157  0.000000  0.147395
15  0.000000  0.000000  0.000000  2.786307   -0.000000    0.000000   0.000000  0.147290  0.000000  0.157916
16  0.000000  0.000000  0.000000  2.753535    0.000000    0.000000   0.000000  0.108422  0.000000  0.168437
17  0.000000  0.000000  0.000000  2.720762    0.000000    0.000000   0.000000  0.069555  0.000000  0.178958
18  0.000000  0.000000  0.000000  2.687990    0.000000    0.000000   0.000000  0.030688  0.000000  0.189479
19  0.000000  0.000000  0.000000  2.652940    0.000000    0.000000   0.000000  0.000000  0.000000  0.200000

from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score

# MAE
y_test_pred_lr = lr.predict(X_test.iloc[:, :-1])
print(round(mean_absolute_error(y_test, y_test_pred_lr), 4))     # 1.6016
y_test_pred_ridge = ridge.predict(X_test[features_without_ones])
print(round(mean_absolute_error(y_test, y_test_pred_ridge), 4))  # 1.5984
y_test_pred_lasso = lasso.predict(X_test[features_without_ones])
print(round(mean_absolute_error(y_test, y_test_pred_lasso), 4))  # 1.6402

# MSE
print(round(mean_squared_error(y_test, y_test_pred_lr), 4))      # 5.3009
print(round(mean_squared_error(y_test, y_test_pred_ridge), 4))   # 4.959
print(round(mean_squared_error(y_test, y_test_pred_lasso), 4))   # 5.1

# R^2
print(round(r2_score(y_test, y_test_pred_lr), 4))     # 0.5257
print(round(r2_score(y_test, y_test_pred_ridge), 4))  # 0.5563
print(round(r2_score(y_test, y_test_pred_lasso), 4))  # 0.5437
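The three metrics above have simple closed forms: MAE is the mean absolute error, MSE the mean squared error, and R² is 1 minus the ratio of residual to total sum of squares. A minimal NumPy sketch on hypothetical toy vectors (y_true and y_pred here are illustrative, not the abalone predictions):

```python
import numpy as np

# Hypothetical toy age values and predictions
y_true = np.array([10.5, 8.5, 16.5, 11.5])
y_pred = np.array([11.0, 8.0, 15.0, 12.0])

mae = np.mean(np.abs(y_true - y_pred))                 # mean absolute error
mse = np.mean((y_true - y_pred) ** 2)                  # mean squared error
ss_res = np.sum((y_true - y_pred) ** 2)                # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)         # total sum of squares
r2 = 1.0 - ss_res / ss_tot                             # coefficient of determination
```

These match sklearn's mean_absolute_error, mean_squared_error, and r2_score definitions; R² can be negative when a model fits worse than predicting the mean.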

Residual Plot

plt.figure(figsize=(9, 6), dpi=600)
y_train_pred_ridge = ridge.predict(X_train[features_without_ones])
plt.scatter(y_train_pred_ridge, y_train_pred_ridge - y_train, c="g", alpha=0.6)
plt.scatter(y_test_pred_ridge, y_test_pred_ridge - y_test, c="r", alpha=0.6)
plt.hlines(y=0, xmin=0, xmax=30, color="b", alpha=0.6)
plt.ylabel("Residuals")
plt.xlabel("Predict")
