

Revenue and forecasting: building a linear regression model to predict income levels

Published 2023/12/10.

This article walks through building a linear regression model to predict income levels.

1. Loading the data


## Load the data
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
import statsmodels.api as sm  # statsmodels: a powerful statistics package covering regression, time series, hypothesis testing, etc.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_table('C:/Users/lb/Desktop/test/earndata3.txt', sep='\t',
                     engine='python', encoding='utf-8')
data.columns.values
data.head()

# # Rename the columns
# data.rename(columns={'類型': 'type', '盈利率': 'profit', '付費率': 'pay', '活躍率': 'active',
#                      '收入': 'income', '觸達比例': 'touch', '轉化比例': 'conves',
#                      '新增比例': 'new', '運營費用占比': 'operate', '服務費用占比': 'servicce'},
#             inplace=True)
# data

# plt.rcParams controls figure details such as size, line style and width
plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # display the minus sign correctly

# Scatter plot of one feature against income
plt.scatter(data['feature35'], data['收入'])
plt.xlabel('feature35')
plt.ylabel('收入')
plt.show()

2. Checking for missing values and fill strategies

# Check for missing values
na_num = pd.isna(data).sum()
print(na_num)

# Fill strategies with fillna()
# df['taixin'] = df['taixin'].fillna(df['taixin'].mean())     # mean
# df['taixin'] = df['taixin'].fillna(df['taixin'].mode()[0])  # mode (mode() returns a Series, so take the first value)
# df['taixin'] = df['taixin'].interpolate()                   # interpolation
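A toy illustration of the three fill strategies on a made-up Series (not the real earnings data):

```python
import numpy as np
import pandas as pd

# Hypothetical Series with two missing values
s = pd.Series([1.0, np.nan, 3.0, 3.0, np.nan, 5.0])

filled_mean = s.fillna(s.mean())      # mean of the non-missing values (3.0)
filled_mode = s.fillna(s.mode()[0])   # most frequent value (3.0); mode() returns a Series
filled_interp = s.interpolate()       # linear interpolation between neighbouring values

print(filled_mean.tolist())    # [1.0, 3.0, 3.0, 3.0, 3.0, 5.0]
print(filled_interp.tolist())  # [1.0, 2.0, 3.0, 3.0, 4.0, 5.0]
```

Interpolation keeps the local trend of the series, while the mean and mode fill every gap with the same constant; which one fits depends on whether the feature varies smoothly.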

Outliers are usually inspected with a box plot.

# Check for outliers
plt.boxplot(data['feature1'])
plt.show()
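The whiskers of a box plot flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]. The same rule in code, on a made-up array rather than the real feature1 column:

```python
import numpy as np

# Toy data with one obvious outlier
x = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 100], dtype=float)

q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = x[(x < lower) | (x > upper)]
print(outliers)  # only the value 100 falls outside the whiskers
```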


Seaborn extends matplotlib: it is a data-visualization library that provides a higher-level API and is more convenient and flexible in practice. It covers: 1. histograms and density plots; 2. bar charts and heat maps; 3. styling of figure appearance; 4. palette control.
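A minimal sketch of one of those seaborn features, a correlation heat map, on a hypothetical DataFrame (the real call would use the `data` frame loaded above):

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt
import seaborn as sns

# Hypothetical data standing in for the real feature frame
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(100, 4)), columns=['a', 'b', 'c', 'd'])

corr = df.corr()  # Pearson correlation matrix

sns.heatmap(corr, annot=True, cmap='coolwarm', vmin=-1, vmax=1)
plt.title('Pearson correlation')
plt.savefig('corr_heatmap.png')
```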

A Pearson correlation coefficient below 0.4 indicates a weak relationship between the variables; between 0.4 and 0.6, a moderate relationship; above 0.6, a strong relationship.
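Those thresholds can be captured in a small helper (hypothetical, not part of the original post), applied to the absolute value of the coefficient:

```python
def correlation_strength(r):
    """Classify a Pearson coefficient by absolute value."""
    r = abs(r)
    if r >= 0.6:
        return 'strong'
    if r >= 0.4:
        return 'moderate'
    return 'weak'

print(correlation_strength(0.75))  # strong
print(correlation_strength(-0.5))  # moderate
print(correlation_strength(0.1))   # weak
```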


Exporting the correlation matrix

# Export to a file
import seaborn as sns

data_cor = data.corr()  # Pearson correlation matrix (needed for the export below)
outputpath = 'C:/Users/dell/Desktop/cor2.csv'
data_cor.to_csv(outputpath, index=True, header=True)

# Pairwise scatter plots of all variables
sns.pairplot(data)
plt.show()

# Train/test split
from sklearn.model_selection import train_test_split

data_x = data.drop(['收入'], axis=1)
data_y = data['收入']
train_x, test_x, train_y, test_y = train_test_split(data_x, data_y,
                                                    test_size=0.3, random_state=6)
train_x.head()


3. Reset the indexes so they start from 0

# Reset the indexes
for i in [train_x, test_x]:
    i.index = range(i.shape[0])
train_x.head()


Import the linear_model module and create a linear model with linear_model.LinearRegression. The constructor takes a few parameters (view them with help(linear_model.LinearRegression)):

LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)

1. fit_intercept: bool, whether to compute the intercept. Defaults to True; if the data are already centred you can set it to False.

2. normalize: bool, whether to standardize (centre) the inputs. Defaults to False; rarely used, apply StandardScaler yourself instead (this parameter was removed in recent scikit-learn releases).

3. copy_X: bool, whether to copy the X data. Defaults to True; if False, X may be overwritten by the centring step.

4. n_jobs: int, number of cores to use. Defaults to 1; -1 uses all cores.
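A quick sketch (synthetic data, illustrative only) of the fit_intercept point above: after centring both X and y, the intercept is zero, so fit_intercept=False recovers the same coefficients.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data with a known linear relationship and a nonzero intercept
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 4.0

# Centre X and y, then fit without an intercept
Xc, yc = X - X.mean(axis=0), y - y.mean()
model = LinearRegression(fit_intercept=False).fit(Xc, yc)

print(model.coef_)  # recovers ~[1.5, -2.0, 0.5]; the intercept is fixed at 0
```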

# Fit a linear regression model
from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit(train_x, train_y)
display(model.intercept_)  # the model intercept
display(model.coef_)       # the model coefficients

[*zip(train_x.columns,model.coef_)]

[('feature1', -1.306876920532666),
('feature2', -2.135359164945528),
('feature3', 0.3922366623031011),
('feature4', -0.4006529864411556),
('feature5', -0.800071692310333),
('feature6', -0.04600005945256574),
('feature7', -0.11265174384663018),
('feature8', 0.10045483109131019),
('feature9', -0.32704175764925286),
('feature10', 1.2434839400292559),
('feature11', 1.5832422299388336),
('feature12', -0.09226719574100156),
('feature13', -2.459151124321635),
('feature14', 1.9779923876600112),
('feature15', 0.024525225949885543),
('feature16', 0.00918442799938769),
('feature17', 0.006352020349345027),
('feature18', 1.6985207539315572),
('feature19', -4.9467995989630396e-05),
('feature20', -0.022589825867085835),
('feature21', 0.09947572093574145),
('feature22', -1.0843768519460424),
('feature23', -0.000538610562770904),
('feature24', 0.007229716791249371),
('feature25', 0.0011119599539866497),
('feature26', 0.23842187221124864),
('feature27', 0.026069170729882317),
('feature28', 0.00691494578802669),
('feature29', -0.0449676591248237),
('feature30', 0.0011027808324655089),
('feature31', -1.151930096574248),
('feature32', 0.001446787798073345),
('feature33', 0.012505109738047488),
('feature34', 0.3162910511343061),
('feature35', -0.4609002081574919),
('feature36', -0.03493518878291976),
('feature37', 0.000816129764761191),
('feature38', 1.4467629087041338),
('feature39', 0.038077869864662946),
('feature40', 2.4660343230998505e-05)]

4. Predicted vs. actual values

# Predicted vs. actual values
pre_train = model.predict(train_x)
plt.plot(range(len(pre_train)), pre_train, label='predicted')
plt.plot(range(len(train_y)), train_y, label='actual')
plt.legend()
plt.show()

5. The plot above is hard to read; sorting the predicted and actual values before plotting makes the comparison clearer.

# Predicted vs. actual values, sorted
pre_train = model.predict(train_x)
plt.plot(range(len(pre_train)), sorted(pre_train), label='predicted')
plt.plot(range(len(train_y)), sorted(train_y), label='actual')
plt.legend()
plt.show()

6. Model evaluation

from sklearn.metrics import mean_squared_error as MSE

MSE(train_y, pre_train)

from sklearn.model_selection import cross_val_score

# A negative score here indicates a loss metric
cross_val_score(model, train_x, train_y, cv=10, scoring='r2').mean()

Cross-validation

# Compute the MSE with sklearn
from sklearn.metrics import mean_squared_error
mean_squared_error(train_y, pre_train)

# Compute R^2 on the test set
pre_y = model.predict(test_x)
from sklearn.metrics import r2_score
import statsmodels.api as sm
score = r2_score(test_y, pre_y)  # first argument: true values, second: predictions
score


Cross-validated MSE

cross_val_score(model, train_x, train_y, cv=10, scoring='neg_mean_squared_error').mean()

# Predicted vs. actual values on the test set
pre_test = model.predict(test_x)
plt.plot(range(len(pre_test)), sorted(pre_test), label='predicted')
plt.plot(range(len(test_y)), sorted(test_y), label='actual')
plt.legend()
plt.show()

7. Multicollinearity

Multicollinearity means the independent variables are linearly related to one another; it invalidates the significance tests of the coefficients and distorts the OLS estimates. It is usually detected with the variance inflation factor: a VIF above 10 indicates collinearity. If multicollinearity is present, drop variables or switch to a different model (e.g. LASSO).
dmatrices builds the design matrices from a formula that combines the features.
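The VIF for feature i is 1 / (1 - R_i^2), where R_i^2 comes from regressing feature i on the remaining features. A minimal sketch on synthetic data (not the earnings data) showing how near-linear dependence inflates the VIF:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Three features where x3 is almost exactly x1 + x2
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = rng.normal(size=500)
x3 = x1 + x2 + rng.normal(scale=0.1, size=500)
X = np.column_stack([x1, x2, x3])

def vif(X, i):
    """VIF of column i: regress it on the other columns, then 1 / (1 - R^2)."""
    others = np.delete(X, i, axis=1)
    r2 = LinearRegression().fit(others, X[:, i]).score(others, X[:, i])
    return 1.0 / (1.0 - r2)

# All three columns show large VIFs because of the near-linear dependence
print([round(vif(X, i), 1) for i in range(X.shape[1])])
```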

# Multicollinearity: linear relationships among the independent variables
from patsy.highlevel import dmatrices
from statsmodels.stats.outliers_influence import variance_inflation_factor

Y, X = dmatrices(
    '收入~ feature1+feature2+feature3+feature4+feature5+feature6+feature7+feature8+feature9+feature10+feature11+feature12+feature13+feature14+feature15+feature16+feature17+feature18+feature19+feature20+feature21+feature22+feature23+feature24+feature25+feature26+feature27+feature28+feature29+feature30+feature31+feature32+feature33+feature34+feature35+feature36+feature37+feature38+feature39+feature40',
    data=data, return_type='dataframe')

vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
vif["features"] = X.columns
vif

VIF Factor features
0 8.244820 Intercept
1 1.159042 feature1
2 1.833402 feature2
3 1.062635 feature3
4 1.171333 feature4
5 1.024516 feature5
6 4.241079 feature6
7 9.112658 feature7
8 4.629342 feature8
9 4.295007 feature9
10 3.085991 feature10
11 1.106177 feature11
12 1.083720 feature12
13 1.073559 feature13
14 1.013818 feature14
15 1.070490 feature15
16 1.498616 feature16
17 1.248611 feature17
18 1.007976 feature18
19 1.095409 feature19
20 3.652704 feature20
21 3.985798 feature21
22 6.668252 feature22
23 3.120996 feature23
24 1.020012 feature24
25 2.170860 feature25
26 2.018375 feature26
27 1.926179 feature27
28 1.982204 feature28
29 1.492437 feature29
30 1.275029 feature30
31 5.298781 feature31
32 2.267253 feature32
33 2.655571 feature33
34 3.786994 feature34
35 6.317910 feature35
36 1.039711 feature36
37 3.315587 feature37
38 3.098461 feature38
39 1.313444 feature39
40 1.276520 feature40

# Influential points: |studentized residual| > 3 marks a point with a relatively
# large influence on the fit; removing such points changes the model noticeably
from statsmodels.stats.outliers_influence import OLSInfluence

model1 = sm.OLS(data_y, data_x).fit()
OLSInfluence(model1).summary_frame().head()

# Outlier detection: points far from the bulk of the data,
# found via the largest standardized residuals
model2 = sm.OLS(data_y, data_x).fit()
outliers = model2.get_influence()
outliers

# High-leverage points via the hat matrix: flag values > 2*(p+1)/n,
# where p is the number of predictors and n the number of observations
leverage = outliers.hat_matrix_diag

# DFFITS: a point is suspect when |DFFITS| > 2*sqrt((p+1)/n)
dffits = outliers.dffits[0]

# Externally studentized residuals
resid_stu = outliers.resid_studentized_external

# Cook's distance: the larger the value, the more likely the point is an outlier
cook = outliers.cooks_distance[0]

# COVRATIO: the further a sample's value is from 1, the more likely it is an outlier
covratio = outliers.cov_ratio

# Combine the diagnostics into one frame
contat1 = pd.concat([pd.Series(leverage, name='leverage'),
                     pd.Series(dffits, name='dffits'),
                     pd.Series(resid_stu, name='resid_stu'),
                     pd.Series(cook, name='cook'),
                     pd.Series(covratio, name='covratio')], axis=1)
data_outliers = pd.concat([data, contat1], axis=1)
data_outliers.head()


Using a studentized residual of 2 as the cutoff, compute the proportion of outliers.

outliers_ratio = sum(np.where(np.abs(data_outliers.resid_stu) > 2, 1, 0)) / data_outliers.shape[0]
outliers_ratio


Remove the outliers

data_outliers1 = data_outliers.loc[np.abs(data_outliers.resid_stu) <= 2, ]
data_outliers1

# Refit after removing outliers and collinear variables
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

data_x1 = data_outliers1.drop(['收入'], axis=1)
data_y1 = data_outliers1['收入']
train_x1, test_x1, train_y1, test_y1 = train_test_split(data_x1, data_y1)

# Fit a new linear regression model
model4 = LinearRegression()
model4.fit(train_x1, train_y1)
pre_y1 = model4.predict(test_x1)
score = r2_score(test_y1, pre_y1)
score


sklearn.preprocessing.PolynomialFeatures constructs polynomial features; the degree parameter controls the order of the polynomial.
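As a toy illustration before applying it to the training data: with degree=2 on two columns a and b, the transform generates the terms 1, a, b, a^2, ab, b^2.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# One sample with a=2, b=3
X = np.array([[2.0, 3.0]])

poly = PolynomialFeatures(degree=2).fit(X)
print(poly.transform(X))  # [[1., 2., 3., 4., 6., 9.]]
```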

# Model interpretability via polynomial features
from sklearn.preprocessing import PolynomialFeatures

poly = PolynomialFeatures(degree=2).fit(train_x)
poly.get_feature_names()  # renamed get_feature_names_out in scikit-learn >= 1.0

poly.get_feature_names(train_x.columns)

Pass the column names in to get readable feature names.

X_ = poly.transform(train_x)
reg = LinearRegression().fit(X_, train_y)
reg.score(X_, train_y)

[*(zip(poly.get_feature_names(train_x.columns),reg.coef_))]

The coefficients come out row-wise, so transpose the frame first.

coeframe = pd.DataFrame([poly.get_feature_names(train_x.columns), reg.coef_]).T
coeframe.columns = ['feature', 'coef']
coeframe.sort_values(by='coef').head(10)

8. Using LASSO

The most important Lasso parameter is alpha (float, optional, default 1.0). With alpha = 0 the algorithm reduces to ordinary least squares, which LinearRegression already implements, so setting alpha to 0 is not recommended.
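A small sketch on synthetic data (illustrative, not the earnings data) of what alpha controls: larger values drive more Lasso coefficients exactly to zero.

```python
import numpy as np
from sklearn.linear_model import Lasso

# 10 features, but only the first two actually drive y
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

nonzero = {}
for alpha in [0.01, 0.1, 1.0]:
    model = Lasso(alpha=alpha).fit(X, y)
    nonzero[alpha] = int(np.sum(model.coef_ != 0))

print(nonzero)  # the count of non-zero coefficients shrinks as alpha grows
```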

from sklearn.linear_model import Lasso

model5 = Lasso(alpha=1.0)
model5.fit(train_x, train_y)
pre_y5 = model5.predict(test_x)
score = r2_score(test_y, pre_y5)
score


class sklearn.linear_model.Ridge(alpha=1.0, fit_intercept=True, normalize=False, copy_X=True, max_iter=None, tol=0.001, solver='auto', random_state=None)
- alpha: regularization strength; larger values mean stronger regularization. Defaults to 1.0; the main parameter to tune.
- fit_intercept: whether to compute the model intercept. Defaults to True.
- normalize: when an intercept is fitted and this is True, x is normalized before the regression; leave it False if you standardize separately.
- copy_X: defaults to True, which copies X; otherwise X may be overwritten during fitting.
- max_iter: maximum number of iterations for the conjugate-gradient solver.
- tol: float, the precision of the solution.
- solver: the solver used in the computation, e.g. 'svd' (singular value decomposition) or 'lsqr' (least squares); for ridge regression it rarely needs tuning.
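To see what the alpha parameter does, a small sketch on synthetic data (illustrative only): as alpha grows, ridge shrinks the coefficient vector toward zero.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic data with known coefficients
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.3, size=100)

norms = []
for alpha in [0.01, 1.0, 100.0]:
    model = Ridge(alpha=alpha).fit(X, y)
    norms.append(np.linalg.norm(model.coef_))

print(norms)  # the L2 norm of the coefficients decreases as alpha increases
```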

9. Using ridge regression

# Ridge regression
from sklearn.linear_model import Ridge

model6 = Ridge(alpha=1.0)
model6.fit(train_x, train_y)
pre_y6 = model6.predict(test_x)
score = r2_score(test_y, pre_y6)
score

10. LassoCV

# LassoCV: cross-validated choice of alpha
from sklearn.linear_model import LassoCV

alpha = np.logspace(-10, -2, 200, base=10)
lasso_ = LassoCV(alphas=alpha, cv=10).fit(train_x, train_y)
lasso_.mse_path_  # MSE along the regularization path


Optimal alpha

lasso_.alpha_  # the best alpha found by cross-validation

Linear regression on the Boston housing dataset bundled with scikit-learn

The Boston housing dataset comes from a 1978 American economics journal. It contains prices and attributes for a number of Boston houses; each record has 14 fields, including the crime rate, whether the house borders the river, and the average number of rooms, with the last field being the median house price.

import numpy as np
import pandas as pd
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
%matplotlib inline
import sklearn.datasets as datasets

# Load the data (note: load_boston was removed in scikit-learn 1.2)
boston_dataset = datasets.load_boston()
X_full = boston_dataset.data
Y_full = boston_dataset.target
boston = pd.DataFrame(X_full)
boston.columns = boston_dataset.feature_names
boston['PRICE'] = Y_full
print(boston.head())  # first few rows

# Data distribution
plt.scatter(boston.CHAS, boston.PRICE)
plt.xlabel('CHAS')
plt.ylabel('PRICE')
plt.show()

import seaborn as sns
sns.set()
sns.pairplot(boston)

# Train/test split
from sklearn.model_selection import train_test_split
X_train, x_test, y_train, y_true = train_test_split(X_full, Y_full, test_size=0.2)

## Build the models
from sklearn.linear_model import LinearRegression  # linear regression
from sklearn.linear_model import Ridge             # ridge regression
from sklearn.linear_model import Lasso             # LASSO regression
from sklearn.linear_model import ElasticNet

linear = LinearRegression()
ridge = Ridge()
lasso = Lasso()
elasticnet = ElasticNet()

# Train the models
linear.fit(X_train, y_train)
ridge.fit(X_train, y_train)
lasso.fit(X_train, y_train)
elasticnet.fit(X_train, y_train)

## Predict
y_pre_linear = linear.predict(x_test)
y_pre_ridge = ridge.predict(x_test)
y_pre_lasso = lasso.predict(x_test)
y_pre_elasticnet = elasticnet.predict(x_test)

## Score
from sklearn.metrics import r2_score
linear_score = r2_score(y_true, y_pre_linear)
ridge_score = r2_score(y_true, y_pre_ridge)
lasso_score = r2_score(y_true, y_pre_lasso)
elasticnet_score = r2_score(y_true, y_pre_elasticnet)
display(linear_score, ridge_score, lasso_score, elasticnet_score)

## Compare predictions to the true values
# Linear
plt.plot(y_true, label='true')
plt.plot(y_pre_linear, label='linear')
plt.legend()

# Ridge
plt.plot(y_true, label='true')
plt.plot(y_pre_ridge, label='ridge')
plt.legend()

# Lasso
plt.plot(y_true, label='true')
plt.plot(y_pre_lasso, label='lasso')
plt.legend()

# ElasticNet
plt.plot(y_true, label='true')
plt.plot(y_pre_elasticnet, label='elasticnet')
plt.legend()
