

[Machine Learning] Ridge Regression

Published: 2023/12/20 · 编程问答 · 豆豆
This article, [Machine Learning] Ridge Regression, was collected and organized by 生活随笔 and is shared here for reference.

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from sklearn import datasets
# CV = cross validation
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet, ElasticNetCV, LassoCV

diabetes = datasets.load_diabetes()
X = diabetes['data']
y = diabetes['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15)

lr = LinearRegression()
lr.fit(X_train, y_train)
# for a regressor, score() returns R^2, not accuracy
lr.score(X_test, y_test)

0.508409427784998

'''
The coefficient R^2 is defined as (1 - u/v), where u is the residual
sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum
of squares ((y_true - y_true.mean()) ** 2).sum().
'''
y_ = lr.predict(X_test)  # predictions are needed before computing R^2 by hand
u = ((y_test - y_)**2).sum()
v = ((y_test - y_test.mean())**2).sum()
r2 = 1 - u/v
r2

0.508409427784998

y_ = lr.predict(X_test)
display(y_.round(0), y_test)

array([192., 85., 134., 138., 264., 191., 142., 141., 291., 91., 253.,
174., 164., 153., 167., 83., 229., 169., 92., 206., 174., 78.,
197., 53., 163., 157., 104., 139., 211., 106., 77., 125., 117.,
170., 82., 183., 162., 164., 218., 228., 181., 126., 169., 100.,
120., 69., 211., 168., 111., 169., 187., 204., 163., 133., 154.,
157., 165., 76., 153., 82., 114., 115., 97., 148., 71., 186.,
165.])

array([164., 181., 124., 142., 308., 122., 185., 168., 270., 74., 281.,
52., 109., 246., 181., 92., 99., 122., 91., 265., 143., 59.,
131., 48., 216., 55., 65., 93., 288., 118., 77., 97., 61.,
258., 51., 163., 144., 185., 296., 281., 141., 135., 171., 69.,
177., 83., 220., 235., 109., 138., 257., 297., 151., 170., 210.,
259., 110., 55., 185., 42., 87., 96., 84., 97., 134., 129.,
131.])

r2_score(y_test,y_)

0.508409427784998

mean_squared_error(y_test,y_)

2684.848466337077
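The two metrics above are directly linked: since u = n·MSE and v = n·Var(y_test), R² can be recovered as 1 - MSE/Var(y_test). A minimal sketch with small made-up arrays (illustrative values, not the diabetes split above):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical targets and predictions, chosen only to illustrate the identity
y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.5, 5.5, 7.0, 8.0])

mse = mean_squared_error(y_true, y_pred)
# R^2 = 1 - u/v with u = n*MSE and v = n*Var(y_true), so the n cancels:
r2_from_mse = 1 - mse / y_true.var()

print(np.isclose(r2_from_mse, r2_score(y_true, y_pred)))  # True
```

The same identity explains why, in the cells above, an R² of 0.508 and an MSE of 2684.8 describe the same fit from two angles.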

Using ridge regression

lr = LinearRegression()
lr.fit(X_train, y_train)
print(lr.score(X_test, y_test))
y_ = lr.predict(X_test)
mean_squared_error(y_test, y_)

0.508409427784998

2684.848466337077

ridge = Ridge(alpha=0.001)
ridge.fit(X_train, y_train)
print(ridge.score(X_test, y_test))
y_ = ridge.predict(X_test)
mean_squared_error(y_test, y_)

0.5077536734066447

2688.429904298921
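With alpha as small as 0.001, ridge is barely distinguishable from plain least squares, which is why the scores above are nearly identical. A minimal sketch (refitting on the full diabetes data rather than the split above) showing that larger alpha shrinks the coefficient vector, which is the whole point of the penalty:

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True)

# L2 norm of the ridge coefficients for increasing regularization strength
norms = []
for alpha in [0.001, 1.0, 100.0]:
    ridge = Ridge(alpha=alpha)
    ridge.fit(X, y)
    norms.append(np.linalg.norm(ridge.coef_))

# The norm decreases monotonically as alpha grows
print(norms[0] > norms[1] > norms[2])  # True
```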

When searching over small alpha values, np.logspace(-5, 1, 50) gives a more precise and efficient grid than np.linspace(0.01, 5, 50).
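The reason is easy to see by counting how many candidates each grid places in the small-alpha region:

```python
import numpy as np

# logspace spreads points evenly on a log scale, so most of the grid
# lands below 0.1; linspace spreads them evenly on a linear scale,
# so only its first point does.
log_grid = np.logspace(-5, 1, 50)
lin_grid = np.linspace(0.01, 5, 50)

print((log_grid < 0.1).sum())  # 33 of 50 candidates below 0.1
print((lin_grid < 0.1).sum())  # 1 of 50 candidates below 0.1
```

If the best alpha is tiny (as it is for this dataset), the logarithmic grid resolves that region far more finely.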

from sklearn.linear_model import RidgeCV

ridgeCV = RidgeCV(alphas=np.logspace(-5, 1, 50), scoring='r2', cv=6)
ridgeCV.fit(X_train, y_train)
y_ = ridgeCV.predict(X_test)
r2_score(y_test, y_)

0.5021580806301859

ridgeCV = RidgeCV(alphas=np.linspace(0.01, 5, 50), scoring='r2', cv=6)
ridgeCV.fit(X_train, y_train)
y_ = ridgeCV.predict(X_test)
r2_score(y_test, y_)

0.5006336933433428
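After fitting, RidgeCV exposes the winning regularization strength as the alpha_ attribute, which makes the two grids easy to compare directly. A minimal sketch (fit on the full diabetes data rather than the train split above, so the chosen alpha may differ):

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import RidgeCV

X, y = load_diabetes(return_X_y=True)

alphas = np.logspace(-5, 1, 50)
ridge_cv = RidgeCV(alphas=alphas, scoring='r2', cv=6)
ridge_cv.fit(X, y)

print(ridge_cv.alpha_)          # the alpha that scored best in cross-validation
print(ridge_cv.alpha_ in alphas)  # True: it is always drawn from the candidate grid
```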

Summary
