
Kaggle Graduate Admissions (Part 2)

Published: 2024/10/8

上次將數(shù)據(jù)訓練了模型

由于數(shù)據(jù)中的大多數(shù)候選人都有70%以上的機會,許多不成功的候選人都沒有很好的預測。

df["Chance of Admit"].plot(kind='hist', bins=200, figsize=(6, 6))
plt.title("Chance of Admit")
plt.xlabel("Chance of Admit")
plt.ylabel("Frequency")
plt.show()


為分類準備數(shù)據(jù)

If a candidate's chance of admission is greater than 80%, the candidate receives label 1.
If a candidate's chance of admission is less than or equal to 80%, the candidate receives label 0.

# reading the dataset
df = pd.read_csv("../input/Admission_Predict.csv", sep=",")

# it may be needed in the future.
serialNo = df["Serial No."].values
df.drop(["Serial No."], axis=1, inplace=True)

y = df["Chance of Admit"].values
x = df.drop(["Chance of Admit"], axis=1)

# separating train (80%) and test (20%) sets
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=42)

# normalization
from sklearn.preprocessing import MinMaxScaler
scalerX = MinMaxScaler(feature_range=(0, 1))
x_train[x_train.columns] = scalerX.fit_transform(x_train[x_train.columns])
x_test[x_test.columns] = scalerX.transform(x_test[x_test.columns])

y_train_01 = [1 if each > 0.8 else 0 for each in y_train]
y_test_01 = [1 if each > 0.8 else 0 for each in y_test]

# list to array
y_train_01 = np.array(y_train_01)
y_test_01 = np.array(y_test_01)
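Before training, it is worth checking how balanced the resulting 0/1 labels are, since accuracy can look good on skewed classes even for a weak model. A minimal sketch, with hypothetical chance values standing in for the real column:

```python
# Hypothetical chance-of-admit values for illustration; the real notebook
# would apply the same threshold to y_train and y_test.
import numpy as np

chances = np.array([0.92, 0.76, 0.85, 0.65, 0.90, 0.72, 0.81, 0.45])
labels = np.array([1 if c > 0.8 else 0 for c in chances])

# count how many candidates fall into each class
counts = np.bincount(labels, minlength=2)
print("label 0 (<= 0.8):", counts[0])  # 4
print("label 1 (>  0.8):", counts[1])  # 4
```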

Logistic Regression

from sklearn.linear_model import LogisticRegression
lrc = LogisticRegression()
lrc.fit(x_train, y_train_01)
print("score: ", lrc.score(x_test, y_test_01))
print("real value of y_test_01[1]: " + str(y_test_01[1]) + " -> the predict: " + str(lrc.predict(x_test.iloc[[1], :])))
print("real value of y_test_01[2]: " + str(y_test_01[2]) + " -> the predict: " + str(lrc.predict(x_test.iloc[[2], :])))

# confusion matrix
from sklearn.metrics import confusion_matrix
cm_lrc = confusion_matrix(y_test_01, lrc.predict(x_test))
# print("y_test_01 == 1 :" + str(len(y_test_01[y_test_01==1])))  # 29

# cm visualization
import seaborn as sns
import matplotlib.pyplot as plt
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm_lrc, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.title("Test for Test Dataset")
plt.xlabel("predicted y values")
plt.ylabel("real y values")
plt.show()

from sklearn.metrics import precision_score, recall_score
print("precision_score: ", precision_score(y_test_01, lrc.predict(x_test)))
print("recall_score: ", recall_score(y_test_01, lrc.predict(x_test)))

from sklearn.metrics import f1_score
print("f1_score: ", f1_score(y_test_01, lrc.predict(x_test)))

score: 0.9
real value of y_test_01[1]: 0 -> the predict: [0]
real value of y_test_01[2]: 1 -> the predict: [1]


precision_score: 0.9565217391304348
recall_score: 0.7586206896551724
f1_score: 0.8461538461538461
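As a sanity check, the F1 score printed above is the harmonic mean of the precision and recall. The counts below (TP = 22, FP = 1, FN = 7) are inferred from the printed scores and the commented positive count of 29, not taken from the notebook:

```python
# Precision and recall reconstructed from inferred confusion-matrix counts.
p = 22 / 23   # precision ~= 0.9565  (TP / (TP + FP))
r = 22 / 29   # recall    ~= 0.7586  (TP / (TP + FN), 29 positives in the test set)

# F1 is the harmonic mean of precision and recall.
f1 = 2 * p * r / (p + r)
print(round(f1, 4))  # 0.8462
```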

Test for Train Dataset:

cm_lrc_train = confusion_matrix(y_train_01, lrc.predict(x_train))
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm_lrc_train, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("predicted y values")
plt.ylabel("real y values")
plt.title("Test for Train Dataset")
plt.show()
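A single 80/20 split can flatter or penalize a model by chance; k-fold cross-validation averages the score over several splits. A hedged sketch using synthetic data in place of the scaled admissions features:

```python
# Synthetic stand-in for the admissions data: 7 features, labels derived
# from the feature sum so the classes are learnable and roughly balanced.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(42)
X = rng.rand(200, 7)
y = (X.sum(axis=1) > 3.5).astype(int)

# 5-fold cross-validation: five train/validate splits, one score each.
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print("fold accuracies:", scores.round(3))
print("mean CV accuracy:", scores.mean().round(3))
```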

SVC

from sklearn.svm import SVC
svm = SVC(random_state=1)
svm.fit(x_train, y_train_01)
print("score: ", svm.score(x_test, y_test_01))
print("real value of y_test_01[1]: " + str(y_test_01[1]) + " -> the predict: " + str(svm.predict(x_test.iloc[[1], :])))
print("real value of y_test_01[2]: " + str(y_test_01[2]) + " -> the predict: " + str(svm.predict(x_test.iloc[[2], :])))

# confusion matrix
from sklearn.metrics import confusion_matrix
cm_svm = confusion_matrix(y_test_01, svm.predict(x_test))
# print("y_test_01 == 1 :" + str(len(y_test_01[y_test_01==1])))  # 29

# cm visualization
import seaborn as sns
import matplotlib.pyplot as plt
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm_svm, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.title("Test for Test Dataset")
plt.xlabel("predicted y values")
plt.ylabel("real y values")
plt.show()

from sklearn.metrics import precision_score, recall_score
print("precision_score: ", precision_score(y_test_01, svm.predict(x_test)))
print("recall_score: ", recall_score(y_test_01, svm.predict(x_test)))

from sklearn.metrics import f1_score
print("f1_score: ", f1_score(y_test_01, svm.predict(x_test)))

score: 0.9
real value of y_test_01[1]: 0 -> the predict: [0]
real value of y_test_01[2]: 1 -> the predict: [1]

precision_score: 0.9565217391304348
recall_score: 0.7586206896551724
f1_score: 0.8461538461538461

Test for Train Dataset

cm_svm_train = confusion_matrix(y_train_01, svm.predict(x_train))
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm_svm_train, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("predicted y values")
plt.ylabel("real y values")
plt.title("Test for Train Dataset")
plt.show()
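The default SVC ties logistic regression here; its C and gamma parameters can often be tuned for a better score. A minimal GridSearchCV sketch on synthetic data (the grid values are illustrative, not taken from the notebook):

```python
# Synthetic stand-in data, as in the cross-validation sketch above.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(1)
X = rng.rand(150, 7)
y = (X.sum(axis=1) > 3.5).astype(int)

# Try every (C, gamma) combination with 3-fold cross-validation.
grid = GridSearchCV(SVC(random_state=1),
                    {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1]},
                    cv=3)
grid.fit(X, y)
print("best params:", grid.best_params_)
print("best CV score:", round(grid.best_score_, 3))
```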

Naive Bayes

from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
nb.fit(x_train, y_train_01)
print("score: ", nb.score(x_test, y_test_01))
print("real value of y_test_01[1]: " + str(y_test_01[1]) + " -> the predict: " + str(nb.predict(x_test.iloc[[1], :])))
print("real value of y_test_01[2]: " + str(y_test_01[2]) + " -> the predict: " + str(nb.predict(x_test.iloc[[2], :])))

# confusion matrix
from sklearn.metrics import confusion_matrix
cm_nb = confusion_matrix(y_test_01, nb.predict(x_test))
# print("y_test_01 == 1 :" + str(len(y_test_01[y_test_01==1])))  # 29

# cm visualization
import seaborn as sns
import matplotlib.pyplot as plt
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm_nb, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.title("Test for Test Dataset")
plt.xlabel("predicted y values")
plt.ylabel("real y values")
plt.show()

from sklearn.metrics import precision_score, recall_score
print("precision_score: ", precision_score(y_test_01, nb.predict(x_test)))
print("recall_score: ", recall_score(y_test_01, nb.predict(x_test)))

from sklearn.metrics import f1_score
print("f1_score: ", f1_score(y_test_01, nb.predict(x_test)))

score: 0.9625
real value of y_test_01[1]: 0 -> the predict: [0]
real value of y_test_01[2]: 1 -> the predict: [1]


precision_score: 0.9333333333333333
recall_score: 0.9655172413793104
f1_score: 0.9491525423728815

Test for Train Dataset:

cm_nb_train = confusion_matrix(y_train_01, nb.predict(x_train))
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm_nb_train, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("predicted y values")
plt.ylabel("real y values")
plt.title("Test for Train Dataset")
plt.show()
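Beyond hard 0/1 predictions, GaussianNB also exposes class probabilities via predict_proba, which maps naturally back to a chance-of-admit-style number. A sketch on synthetic data:

```python
# Synthetic stand-in data; the real notebook would call predict_proba on x_test.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(0)
X = rng.rand(100, 7)
y = (X.sum(axis=1) > 3.5).astype(int)

nb = GaussianNB().fit(X, y)

# One row per sample: [P(class 0), P(class 1)], rows sum to 1.
proba = nb.predict_proba(X[:3])
print(proba.round(3))
```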

Decision Tree

from sklearn.tree import DecisionTreeClassifier
dtc = DecisionTreeClassifier()
dtc.fit(x_train, y_train_01)
print("score: ", dtc.score(x_test, y_test_01))
print("real value of y_test_01[1]: " + str(y_test_01[1]) + " -> the predict: " + str(dtc.predict(x_test.iloc[[1], :])))
print("real value of y_test_01[2]: " + str(y_test_01[2]) + " -> the predict: " + str(dtc.predict(x_test.iloc[[2], :])))

# confusion matrix
from sklearn.metrics import confusion_matrix
cm_dtc = confusion_matrix(y_test_01, dtc.predict(x_test))
# print("y_test_01 == 1 :" + str(len(y_test_01[y_test_01==1])))  # 29

# cm visualization
import seaborn as sns
import matplotlib.pyplot as plt
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm_dtc, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.title("Test for Test Dataset")
plt.xlabel("predicted y values")
plt.ylabel("real y values")
plt.show()

from sklearn.metrics import precision_score, recall_score
print("precision_score: ", precision_score(y_test_01, dtc.predict(x_test)))
print("recall_score: ", recall_score(y_test_01, dtc.predict(x_test)))

from sklearn.metrics import f1_score
print("f1_score: ", f1_score(y_test_01, dtc.predict(x_test)))

score: 0.9375
real value of y_test_01[1]: 0 -> the predict: [0]
real value of y_test_01[2]: 1 -> the predict: [1]

precision_score: 0.9615384615384616
recall_score: 0.8620689655172413
f1_score: 0.9090909090909091

Test for Train Dataset

cm_dtc_train = confusion_matrix(y_train_01, dtc.predict(x_train))
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm_dtc_train, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("predicted y values")
plt.ylabel("real y values")
plt.title("Test for Train Dataset")
plt.show()
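The train-set confusion matrix above typically shows an unconstrained decision tree fitting the training data perfectly, a sign of overfitting. Limiting the depth is one common remedy; a sketch on synthetic data (the max_depth value is illustrative):

```python
# Synthetic stand-in data with continuous features, so a fully grown tree
# can memorize every training sample.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.rand(200, 7)
y = (X.sum(axis=1) > 3.5).astype(int)

full = DecisionTreeClassifier(random_state=0).fit(X, y)
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

print("full-tree train accuracy:   ", full.score(X, y))  # 1.0 (memorized)
print("shallow-tree train accuracy:", round(shallow.score(X, y), 3))
```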

Random Forest

from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=100, random_state=1)
rfc.fit(x_train, y_train_01)
print("score: ", rfc.score(x_test, y_test_01))
print("real value of y_test_01[1]: " + str(y_test_01[1]) + " -> the predict: " + str(rfc.predict(x_test.iloc[[1], :])))
print("real value of y_test_01[2]: " + str(y_test_01[2]) + " -> the predict: " + str(rfc.predict(x_test.iloc[[2], :])))

# confusion matrix
from sklearn.metrics import confusion_matrix
cm_rfc = confusion_matrix(y_test_01, rfc.predict(x_test))
# print("y_test_01 == 1 :" + str(len(y_test_01[y_test_01==1])))  # 29

# cm visualization
import seaborn as sns
import matplotlib.pyplot as plt
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm_rfc, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.title("Test for Test Dataset")
plt.xlabel("predicted y values")
plt.ylabel("real y values")
plt.show()

from sklearn.metrics import precision_score, recall_score
print("precision_score: ", precision_score(y_test_01, rfc.predict(x_test)))
print("recall_score: ", recall_score(y_test_01, rfc.predict(x_test)))

from sklearn.metrics import f1_score
print("f1_score: ", f1_score(y_test_01, rfc.predict(x_test)))

score: 0.9375
real value of y_test_01[1]: 0 -> the predict: [0]
real value of y_test_01[2]: 1 -> the predict: [1]

precision_score: 0.9615384615384616
recall_score: 0.8620689655172413
f1_score: 0.9090909090909091

Test for Train Dataset

cm_rfc_train = confusion_matrix(y_train_01, rfc.predict(x_train))
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm_rfc_train, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("predicted y values")
plt.ylabel("real y values")
plt.title("Test for Train Dataset")
plt.show()
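A random forest also reports feature importances, which suggest which admission factors drive the classification. A sketch using synthetic features labeled with the dataset's column names (the names are assumed from the CSV header, and the data here is random, so the importances are illustrative only):

```python
# Synthetic data where, by construction, feature index 5 ("CGPA" in the
# assumed column order) dominates the label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

cols = ["GRE Score", "TOEFL Score", "University Rating",
        "SOP", "LOR", "CGPA", "Research"]
rng = np.random.RandomState(1)
X = rng.rand(200, len(cols))
y = (X[:, 5] + 0.3 * X[:, 0] > 0.8).astype(int)

rfc = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Importances are normalized to sum to 1; print them largest first.
for name, imp in sorted(zip(cols, rfc.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```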

kNN

from sklearn.neighbors import KNeighborsClassifier

# finding k value
scores = []
for each in range(1, 50):
    knn_n = KNeighborsClassifier(n_neighbors=each)
    knn_n.fit(x_train, y_train_01)
    scores.append(knn_n.score(x_test, y_test_01))

plt.plot(range(1, 50), scores)
plt.xlabel("k")
plt.ylabel("accuracy")
plt.show()

knn = KNeighborsClassifier(n_neighbors=3)  # n_neighbors = k
knn.fit(x_train, y_train_01)
print("score of 3 :", knn.score(x_test, y_test_01))
print("real value of y_test_01[1]: " + str(y_test_01[1]) + " -> the predict: " + str(knn.predict(x_test.iloc[[1], :])))
print("real value of y_test_01[2]: " + str(y_test_01[2]) + " -> the predict: " + str(knn.predict(x_test.iloc[[2], :])))

# confusion matrix
from sklearn.metrics import confusion_matrix
cm_knn = confusion_matrix(y_test_01, knn.predict(x_test))
# print("y_test_01 == 1 :" + str(len(y_test_01[y_test_01==1])))  # 29

# cm visualization
import seaborn as sns
import matplotlib.pyplot as plt
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm_knn, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.title("Test for Test Dataset")
plt.xlabel("predicted y values")
plt.ylabel("real y values")
plt.show()

from sklearn.metrics import precision_score, recall_score
print("precision_score: ", precision_score(y_test_01, knn.predict(x_test)))
print("recall_score: ", recall_score(y_test_01, knn.predict(x_test)))

from sklearn.metrics import f1_score
print("f1_score: ", f1_score(y_test_01, knn.predict(x_test)))


score of 3 : 0.9375
real value of y_test_01[1]: 0 -> the predict: [0]
real value of y_test_01[2]: 1 -> the predict: [1]


precision_score: 0.9285714285714286
recall_score: 0.896551724137931
f1_score: 0.912280701754386

Test for Train Dataset:

cm_knn_train = confusion_matrix(y_train_01, knn.predict(x_train))
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm_knn_train, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("predicted y values")
plt.ylabel("real y values")
plt.title("Test for Train Dataset")
plt.show()
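The k = 3 above is read off the accuracy plot by eye; np.argmax can select it programmatically from the same scores list. A minimal sketch with hypothetical accuracies:

```python
# Hypothetical accuracies for k = 1..5; the real notebook would use the
# scores list filled by the loop over range(1, 50).
import numpy as np

scores = [0.89, 0.91, 0.9375, 0.93, 0.92]
best_k = int(np.argmax(scores)) + 1   # +1 because k starts at 1
print("best k:", best_k, "accuracy:", scores[best_k - 1])  # best k: 3 accuracy: 0.9375
```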


All of the classification algorithms achieved roughly 90% accuracy. The most successful was Gaussian Naive Bayes, with a score of about 96%.

y = np.array([lrc.score(x_test, y_test_01), svm.score(x_test, y_test_01),
              nb.score(x_test, y_test_01), dtc.score(x_test, y_test_01),
              rfc.score(x_test, y_test_01), knn.score(x_test, y_test_01)])
# x = ["LogisticRegression", "SVM", "GaussianNB", "DecisionTreeClassifier", "RandomForestClassifier", "KNeighborsClassifier"]
x = ["LogisticReg.", "SVM", "GNB", "Dec.Tree", "Ran.Forest", "KNN"]

plt.bar(x, y)
plt.title("Comparison of Classification Algorithms")
plt.xlabel("Classification")
plt.ylabel("Score")
plt.show()


The previous part covered regression algorithms; this part covered classification.
