

Data mining with Python: an example of plotting an ROC curve with scikit-learn

發(fā)布時(shí)間:2023/12/19 python 29 豆豆
生活随笔 收集整理的這篇文章主要介紹了 数据挖掘 python roc曲线_利用scikitlearn画ROC曲线实例 小編覺得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

一個(gè)完整的數(shù)據(jù)挖掘模型,最后都要進(jìn)行模型評(píng)估,對(duì)于二分類來(lái)說(shuō),AUC,ROC這兩個(gè)指標(biāo)用到最多,所以 利用sklearn里面相應(yīng)的函數(shù)進(jìn)行模塊搭建。

具體實(shí)現(xiàn)的代碼可以參照下面博友的代碼,評(píng)估svm的分類指標(biāo)。注意里面的一些細(xì)節(jié)需要注意,一個(gè)是調(diào)用roc_curve 方法時(shí),指明目標(biāo)標(biāo)簽,否則會(huì)報(bào)錯(cuò)。

具體是這個(gè)參數(shù)的設(shè)置pos_label ,以前在unionbigdata實(shí)習(xí)時(shí)學(xué)到的。

重點(diǎn)是以下的代碼需要根據(jù)實(shí)際改寫:

mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []
y_target = np.r_[train_y, test_y]
cv = StratifiedKFold(y_target, n_folds=6)

# Plot the ROC curve and compute the AUC
fpr, tpr, thresholds = roc_curve(test_y, predict, pos_label=2)  # specify the positive-class label with pos_label (learned this at unionbigdata)
mean_tpr += interp(mean_fpr, fpr, tpr)  # interpolate tpr at the mean_fpr grid points, using interp() from scipy
mean_tpr[0] = 0.0                       # force the curve to start at 0
roc_auc = auc(fpr, tpr)
# For plotting, plt.plot(fpr, tpr) is enough; roc_auc just stores the AUC value computed by auc()
plt.plot(fpr, tpr, lw=1, label='ROC %s (area = %0.3f)' % (classifier, roc_auc))
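
The fragment above refers to variables such as train_y, test_y and predict defined elsewhere in the script. As a self-contained sketch of the pos_label point (toy data invented here for illustration), assume the labels are coded as 1/2 with 2 being the positive class:

from sklearn.metrics import roc_curve, auc

y_true = [1, 1, 2, 2]            # labels are 1/2 rather than 0/1, so roc_curve cannot guess the positive class
y_score = [0.1, 0.4, 0.35, 0.8]  # scores for the positive class
fpr, tpr, thresholds = roc_curve(y_true, y_score, pos_label=2)  # declare label 2 as the positive class
print(auc(fpr, tpr))             # 0.75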

然后是博友的參考代碼:

# -*- coding: utf-8 -*-
"""
Created on Sun Apr 19 08:57:13 2015

@author: shifeng
"""
print(__doc__)

import numpy as np
from scipy import interp
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.cross_validation import StratifiedKFold

###############################################################################
# Data IO and generation: load the iris data and prepare it
# import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
X, y = X[y != 2], y[y != 2]  # drop the samples with label 2; ROC analysis here needs a binary problem
n_samples, n_features = X.shape

# Add noisy features
random_state = np.random.RandomState(0)
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]

###############################################################################
# Classification and ROC analysis
# Run the classifier with 6-fold cross-validation and plot the ROC curves
cv = StratifiedKFold(y, n_folds=6)
classifier = svm.SVC(kernel='linear', probability=True,
                     random_state=random_state)  # probability=True is required, otherwise predict_proba raises an exception; an rbf kernel tends to work better

mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []

for i, (train, test) in enumerate(cv):
    # Fit a linear-kernel SVM on the training split and score the test split
    probas_ = classifier.fit(X[train], y[train]).predict_proba(X[test])
    # print set(y[train])                # set([0, 1]): the label has two classes
    # print len(X[train]), len(X[test])  # 84 training samples, 16 test samples
    # print "++", probas_                # predict_proba() outputs, per test sample, the confidence for each class;
    #                                    # the sample is assigned to the class with the higher confidence
    # Compute ROC curve and area under the curve
    # roc_curve() returns fpr, tpr and the thresholds
    fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])
    mean_tpr += interp(mean_fpr, fpr, tpr)  # interpolate tpr at the mean_fpr grid points with scipy's interp()
    mean_tpr[0] = 0.0                       # force the curve to start at 0
    roc_auc = auc(fpr, tpr)
    # For plotting, plt.plot(fpr, tpr) is enough; roc_auc just stores the AUC value computed by auc()
    plt.plot(fpr, tpr, lw=1, label='ROC fold %d (area = %0.2f)' % (i, roc_auc))

# Plot the diagonal (chance) line
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Luck')

mean_tpr /= len(cv)      # average the interpolated tpr values over the folds at each of the 100 mean_fpr points
mean_tpr[-1] = 1.0       # the last point of the mean curve is (1, 1)
mean_auc = auc(mean_fpr, mean_tpr)  # AUC of the mean ROC curve
# Plot the mean ROC curve
# print mean_fpr, len(mean_fpr)
# print mean_tpr
plt.plot(mean_fpr, mean_tpr, 'k--',
         label='Mean ROC (area = %0.2f)' % mean_auc, lw=2)

plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
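
Note that the script above targets an old scikit-learn release: sklearn.cross_validation was removed in scikit-learn 0.20 (its classes now live in sklearn.model_selection), scipy's top-level interp has been removed from recent SciPy (use numpy.interp), and StratifiedKFold no longer takes y in its constructor. A minimal sketch of the same cross-validated ROC plot, assuming a current scikit-learn version:

import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import StratifiedKFold

iris = datasets.load_iris()
X, y = iris.data, iris.target
X, y = X[y != 2], y[y != 2]                             # keep only two classes
rng = np.random.RandomState(0)
X = np.c_[X, rng.randn(X.shape[0], 200 * X.shape[1])]   # add noisy features

cv = StratifiedKFold(n_splits=6)                        # folds are produced by cv.split(X, y)
classifier = svm.SVC(kernel='linear', probability=True, random_state=rng)

mean_fpr = np.linspace(0, 1, 100)
tprs = []
for i, (train, test) in enumerate(cv.split(X, y)):
    probas_ = classifier.fit(X[train], y[train]).predict_proba(X[test])
    fpr, tpr, _ = roc_curve(y[test], probas_[:, 1])
    interp_tpr = np.interp(mean_fpr, fpr, tpr)          # np.interp replaces scipy's interp
    interp_tpr[0] = 0.0
    tprs.append(interp_tpr)
    plt.plot(fpr, tpr, lw=1, label='ROC fold %d (area = %0.2f)' % (i, auc(fpr, tpr)))

plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Chance')
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
plt.plot(mean_fpr, mean_tpr, 'k--', lw=2,
         label='Mean ROC (area = %0.2f)' % auc(mean_fpr, mean_tpr))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc='lower right')
plt.show()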

補(bǔ)充知識(shí):批量進(jìn)行One-hot-encoder且進(jìn)行特征字段拼接,并完成模型訓(xùn)練demo

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{StringIndexer, OneHotEncoder}
import org.apache.spark.ml.feature.VectorAssembler
import ml.dmlc.xgboost4j.scala.spark.{XGBoostEstimator, XGBoostClassificationModel}
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{ParamGridBuilder, CrossValidator}
import org.apache.spark.ml.PipelineModel

val data = (spark.read.format("csv")
  .option("sep", ",")
  .option("inferSchema", "true")
  .option("header", "true")
  .load("/Affairs.csv"))

data.createOrReplaceTempView("res1")

// Binarize the label: any positive affairs count becomes 1
val affairs = "case when affairs>0 then 1 else 0 end as affairs,"
val df = (spark.sql("select " + affairs +
  "gender,age,yearsmarried,children,religiousness,education,occupation,rating" +
  " from res1 "))

// Every string column gets a StringIndexer followed by a OneHotEncoder, built in batch
val categoricals = df.dtypes.filter(_._2 == "StringType") map (_._1)
val indexers = categoricals.map(
  c => new StringIndexer().setInputCol(c).setOutputCol(s"${c}_idx")
)
val encoders = categoricals.map(
  c => new OneHotEncoder().setInputCol(s"${c}_idx").setOutputCol(s"${c}_enc").setDropLast(false)
)

// Assemble the encoded and numeric columns (excluding the label) into a single feature vector
val colArray_enc = categoricals.map(x => x + "_enc")
val colArray_numeric = df.dtypes.filter(_._2 != "StringType") map (_._1)
val final_colArray = (colArray_numeric ++ colArray_enc).filter(!_.contains("affairs"))
val vectorAssembler = new VectorAssembler().setInputCols(final_colArray).setOutputCol("features")

/*
val pipeline = new Pipeline().setStages(indexers ++ encoders ++ Array(vectorAssembler))
pipeline.fit(df).transform(df)
*/

///
// Create an XGBoost Classifier
val xgb = new XGBoostEstimator(Map("num_class" -> 2, "num_rounds" -> 5, "objective" -> "binary:logistic", "booster" -> "gbtree")).setLabelCol("affairs").setFeaturesCol("features")

// XGBoost parameter grid
val xgbParamGrid = (new ParamGridBuilder()
  .addGrid(xgb.round, Array(10))
  .addGrid(xgb.maxDepth, Array(10, 20))
  .addGrid(xgb.minChildWeight, Array(0.1))
  .addGrid(xgb.gamma, Array(0.1))
  .addGrid(xgb.subSample, Array(0.8))
  .addGrid(xgb.colSampleByTree, Array(0.90))
  .addGrid(xgb.alpha, Array(0.0))
  .addGrid(xgb.lambda, Array(0.6))
  .addGrid(xgb.scalePosWeight, Array(0.1))
  .addGrid(xgb.eta, Array(0.4))
  .addGrid(xgb.boosterType, Array("gbtree"))
  .addGrid(xgb.objective, Array("binary:logistic"))
  .build())

// Create the XGBoost pipeline
val pipeline = new Pipeline().setStages(indexers ++ encoders ++ Array(vectorAssembler, xgb))

// Setup the binary classifier evaluator
val evaluator = (new BinaryClassificationEvaluator()
  .setLabelCol("affairs")
  .setRawPredictionCol("prediction")
  .setMetricName("areaUnderROC"))

// Create the Cross Validation pipeline, using XGBoost as the estimator, the
// Binary Classification evaluator, and xgbParamGrid for hyperparameters
val cv = (new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(evaluator)
  .setEstimatorParamMaps(xgbParamGrid)
  .setNumFolds(3)
  .setSeed(0))

// Create the model by fitting the training data
val xgbModel = cv.fit(df)

// Test the data by scoring the model
val results = xgbModel.transform(df)

// Print out a copy of the parameters used by XGBoost (note the pipeline stage index)
(xgbModel.bestModel.asInstanceOf[PipelineModel]
  .stages(5).asInstanceOf[XGBoostClassificationModel]
  .extractParamMap().toSeq.foreach(println))

results.select("affairs", "prediction").show

println("---Confusion Matrix------")
results.stat.crosstab("affairs", "prediction").show()

// What was the overall accuracy of the model, using AUC
val auc = evaluator.evaluate(results)
println("----AUC--------")
println("auc=" + auc)

That is all of this example of plotting an ROC curve with scikit-learn; hopefully it serves as a useful reference.

本文標(biāo)題: 利用scikitlearn畫ROC曲線實(shí)例

本文地址: http://www.cppcns.com/jiaoben/python/324428.html

創(chuàng)作挑戰(zhàn)賽新人創(chuàng)作獎(jiǎng)勵(lì)來(lái)咯,堅(jiān)持創(chuàng)作打卡瓜分現(xiàn)金大獎(jiǎng)

總結(jié)

以上是生活随笔為你收集整理的数据挖掘 python roc曲线_利用scikitlearn画ROC曲线实例的全部?jī)?nèi)容,希望文章能夠幫你解決所遇到的問題。

如果覺得生活随笔網(wǎng)站內(nèi)容還不錯(cuò),歡迎將生活随笔推薦給好友。