python spark pyspark: Naive Bayes exercise notes
A note on the Bayesian classifier: multinomial Naive Bayes cannot take negative feature values. When I ran the algorithm I simply stripped the minus signs from the original data, which corrupted it and made the predictions fail. The safer fix is to rescale the features instead; see the sketch after the notes below.
優(yōu)點(diǎn):在數(shù)據(jù)較少的情況下仍然有效,可以處理多類(lèi)別問(wèn)題。
缺點(diǎn):對(duì)于輸入數(shù)據(jù)的準(zhǔn)備方式較為敏感。 適用數(shù)據(jù)類(lèi)型:標(biāo)稱(chēng)型數(shù)據(jù)。
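A minimal sketch (not in the original post) of that safer fix: rescale every feature into [0, 1] with MinMaxScaler before fitting, so multinomial Naive Bayes accepts the values. The column names match the exercise data below, and df stands for the DataFrame built in step 2.
from pyspark.ml.feature import MinMaxScaler, VectorAssembler
# assemble the raw (possibly negative) sensor columns into one vector
assembler = VectorAssembler(inputCols=["acceleration_x","acceleration_y","acceleration_z","gyro_x","gyro_y","gyro_z"], outputCol="raw_features")
# MinMaxScaler maps each feature linearly into [0, 1], preserving its ordering
scaler = MinMaxScaler(inputCol="raw_features", outputCol="features")
assembled = assembler.transform(df)
scaled = scaler.fit(assembled).transform(assembled)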
#1. Imports
from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.sql.types import FloatType
from pyspark.ml.classification import NaiveBayes
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.feature import VectorAssembler
# create the SparkSession explicitly instead of importing it from the pyspark shell module
spark = SparkSession.builder.getOrCreate()
#2. Read the file from HDFS
sc=SparkContext.getOrCreate()
train_data=sc.textFile("hdfs://master:9000/qw.csv")
def GetParts(line):
    parts = line.split(',')
    return parts[0], parts[1], parts[2], parts[3], parts[4], parts[5], parts[6]
header = train_data.first()  # grab the header row
train_data = train_data.filter(lambda row: row != header)  # drop the header row
train = train_data.map(lambda line: GetParts(line))
df = spark.createDataFrame(train, ["acceleration_x","acceleration_y","acceleration_z","gyro_x","gyro_y","gyro_z","activity"])  # convert the RDD to a DataFrame
df.show()
# cast the string columns to float
df = df.withColumn("acceleration_x", df["acceleration_x"].cast(FloatType()))
df = df.withColumn("acceleration_y", df["acceleration_y"].cast(FloatType()))
df = df.withColumn("acceleration_z", df["acceleration_z"].cast(FloatType()))
df = df.withColumn("gyro_x", df["gyro_x"].cast(FloatType()))
df = df.withColumn("gyro_y", df["gyro_y"].cast(FloatType()))
df = df.withColumn("gyro_z", df["gyro_z"].cast(FloatType()))
df = df.withColumn("activity", df["activity"].cast(FloatType()))
# split the data into features and a label
assembler = VectorAssembler(inputCols=["acceleration_x","acceleration_y","acceleration_z","gyro_x","gyro_y","gyro_z"],outputCol="features")
output = assembler.transform(df)
label_features = output.select("features", "activity").toDF('features','label')
label_features.show(truncate=False)
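The post trains on the full dataset and then tests on a single hand-built row further down. For a proper held-out test set, a sketch (not in the original post) using randomSplit:
# 80/20 split; the seed is arbitrary but fixed for reproducibility
train_df, test_df = label_features.randomSplit([0.8, 0.2], seed=42)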
# Naive Bayes
# smoothing=1.0 is Laplace (add-one) smoothing; the multinomial model requires non-negative
# features, so with the raw sensor data this fit fails unless the features are rescaled first
# (see the note at the top)
nb = NaiveBayes(smoothing=1.0, modelType="multinomial")
# train the model
model = nb.fit(label_features)
# build a one-row test DataFrame with the same schema as the training data
df1 = spark.createDataFrame([(-1.0602, -0.282, -0.0618, 0.8069, -0.9107, 1.6153, 1)], ["acceleration_x","acceleration_y","acceleration_z","gyro_x","gyro_y","gyro_z","activity"])
df1.show()
test_assembler = VectorAssembler(inputCols=["acceleration_x","acceleration_y","acceleration_z","gyro_x","gyro_y","gyro_z"],outputCol="features")
test_output = test_assembler.transform(df1)
test_label_features = test_output.select("features", "activity").toDF('features','label')
test_label_features.show(truncate=False)
# alternative quick check: reuse the first few training rows as the test set
# df1 = label_features.head(5)
# df1 = spark.createDataFrame(df1)
# df1.show()
# compute accuracy on the test set
result = model.transform(test_label_features)
print(result.collect())
predictionAndLabels = result.select("prediction", "label").collect()
print(predictionAndLabels)
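The "# compute accuracy on the test set" comment above never gets its accuracy computation; a minimal sketch (not in the original post) using the already-imported MulticlassClassificationEvaluator. On a one-row test set the score can only be 0.0 or 1.0, so this is purely illustrative.
evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy")
print("test accuracy = " + str(evaluator.evaluate(result)))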