Naive Bayes Classifier and a Python Implementation
Bayes' Theorem
Bayes' theorem tells us how to revise a subjective probability judgment about an event (the prior) in light of observed evidence; it occupies a central place in probability theory.
A prior distribution (marginal probability) is a distribution based on prior judgment rather than on the observed sample; a posterior (conditional probability) is the conditional distribution obtained from the sample together with the prior distribution of the unknown parameters.
Bayes' formula:
P(A∩B) = P(A) * P(B|A) = P(B) * P(A|B)
Rearranging gives:
P(A|B) = P(B|A) * P(A) / P(B)
where
P(A) is the prior (marginal) probability of A; it is called the "prior" because it takes no account of B.
P(A|B) is the conditional probability of A given that B has occurred, also called the posterior probability of A.
P(B|A) is the conditional probability of B given that A has occurred, also called the posterior probability of B; here it plays the role of the likelihood.
P(B) is the prior (marginal) probability of B; here it acts as a normalizing constant.
P(B|A) / P(B) is called the standardized likelihood.
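As a quick numerical check of the formula (the numbers below are made up purely for illustration, they are not from this article): let A be "has some condition" and B be "a test comes back positive".

```python
# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)
# Illustrative numbers only (not from the article).
p_a = 0.01            # prior P(A)
p_b_given_a = 0.9     # likelihood P(B|A)
p_b_given_not_a = 0.1 # P(B | not A)

# Normalizing constant P(B), by the law of total probability
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior P(A|B): the prior 0.01 is revised upward by the evidence B
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))  # → 0.0833
```

Note how small the posterior stays despite the strong likelihood: the prior P(A) dominates when A is rare.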
Naive Bayes Classification (Naive Bayes)
When estimating class-conditional probabilities, a naive Bayes classifier assumes that the attributes are conditionally independent given the class.
First define:
x = {a1, a2, ...} — a sample vector, where each a is a feature attribute
div = {d1 = [l1, u1], ...} — a partition of a feature attribute into intervals
class = {y1, y2, ...} — the classes a sample may belong to
Algorithm flow:
(1) From the class distribution in the training set, compute the prior probability p(y[i]) of each class.
(2) For each class, compute the frequency of each feature-attribute interval: p(a[j] in d[k] | y[i]).
(3) For each sample, compute p(x | y[i]):
p(x | y[i]) = p(a[1] in d | y[i]) * p(a[2] in d | y[i]) * ...
All feature attributes of the sample are known, so the interval d that each attribute falls into is known; the values p(a[k] in d | y[i]) can therefore be looked up from step (2), giving p(x | y[i]).
(4) By Bayes' theorem:
p(y[i] | x) = ( p(x | y[i]) * p(y[i]) ) / p(x)
Since the denominator p(x) is the same for every class, only the numerator needs to be computed.
p(y[i] | x) is the probability that the observed sample belongs to class y[i]; the class with the largest value is taken as the classification result.
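Steps (1)–(4) can be sketched directly for discrete-valued features (a minimal pure-Python illustration; the function and variable names here are mine, not the article's):

```python
from collections import Counter, defaultdict

def train_nb(samples, labels):
    """Step (1): class priors; step (2): per-class feature-value counts."""
    n = len(samples)
    priors = {c: cnt / n for c, cnt in Counter(labels).items()}
    cond = defaultdict(Counter)  # (class, feature index) -> value counts
    for x, c in zip(samples, labels):
        for j, v in enumerate(x):
            cond[(c, j)][v] += 1
    return priors, cond

def classify_nb(x, priors, cond):
    """Steps (3)-(4): score each class by p(x|y) * p(y); the shared
    denominator p(x) is dropped since it does not affect the argmax."""
    scores = {}
    for c, prior in priors.items():
        class_count = sum(cond[(c, 0)].values())  # samples in class c
        p = prior
        for j, v in enumerate(x):
            p *= cond[(c, j)][v] / class_count
        scores[c] = p
    return max(scores, key=scores.get)

# Toy usage: two binary features, two classes
priors, cond = train_nb([(1, 0), (1, 1), (0, 1), (0, 0)], [1, 1, 0, 0])
print(classify_nb((1, 1), priors, cond))  # → 1
```

A real implementation would also smooth the counts (e.g. Laplace smoothing) so that an unseen feature value does not zero out the whole product.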
Example:
Training set (left column: C = 0; right column: C = 1):
{a1 = 0, a2 = 0, C = 0} {a1 = 0, a2 = 0, C = 1}
{a1 = 0, a2 = 0, C = 0} {a1 = 0, a2 = 0, C = 1}
{a1 = 0, a2 = 0, C = 0} {a1 = 0, a2 = 0, C = 1}
{a1 = 1, a2 = 0, C = 0} {a1 = 0, a2 = 0, C = 1}
{a1 = 1, a2 = 0, C = 0} {a1 = 0, a2 = 0, C = 1}
{a1 = 1, a2 = 0, C = 0} {a1 = 1, a2 = 0, C = 1}
{a1 = 1, a2 = 1, C = 0} {a1 = 1, a2 = 0, C = 1}
{a1 = 1, a2 = 1, C = 0} {a1 = 1, a2 = 1, C = 1}
{a1 = 1, a2 = 1, C = 0} {a1 = 1, a2 = 1, C = 1}
{a1 = 1, a2 = 1, C = 0} {a1 = 1, a2 = 1, C = 1}
Compute the class priors:
P(C = 0) = 0.5
P(C = 1) = 0.5
Compute the conditional probability of each feature attribute:
P(a1 = 0 | C = 0) = 0.3
P(a1 = 1 | C = 0) = 0.7
P(a2 = 0 | C = 0) = 0.6
P(a2 = 1 | C = 0) = 0.4
P(a1 = 0 | C = 1) = 0.5
P(a1 = 1 | C = 1) = 0.5
P(a2 = 0 | C = 1) = 0.7
P(a2 = 1 | C = 1) = 0.3
Test sample:
x = { a1 = 1, a2 = 1 }
p(x | C = 0) = p(a1 = 1 | C = 0) * p(a2 = 1 | C = 0) = 0.7 * 0.4 = 0.28
p(x | C = 1) = p(a1 = 1 | C = 1) * p(a2 = 1 | C = 1) = 0.5 * 0.3 = 0.15
Compute P(C) * p(x | C), which is proportional to P(C | x):
P(C = 0) * p(x | C = 0) = 0.5 * 0.28 = 0.14
P(C = 1) * p(x | C = 1) = 0.5 * 0.15 = 0.075
Since 0.14 > 0.075, the test sample is assigned to class C = 0.
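The hand computation can be checked mechanically against the 20 training samples (pure Python; `cond_prob` is a helper I introduce here just for the check):

```python
# The 20 samples above as (a1, a2, C) triples.
data = (
    [(0, 0, 0)] * 3 + [(1, 0, 0)] * 3 + [(1, 1, 0)] * 4 +  # left column,  C = 0
    [(0, 0, 1)] * 5 + [(1, 0, 1)] * 2 + [(1, 1, 1)] * 3    # right column, C = 1
)

def cond_prob(feat_idx, value, cls):
    """P(a[feat_idx] = value | C = cls), estimated by counting."""
    in_class = [s for s in data if s[2] == cls]
    return sum(1 for s in in_class if s[feat_idx] == value) / len(in_class)

# Test sample x = {a1 = 1, a2 = 1}; priors are 0.5 each.
score0 = 0.5 * cond_prob(0, 1, 0) * cond_prob(1, 1, 0)  # 0.5 * 0.7 * 0.4
score1 = 0.5 * cond_prob(0, 1, 1) * cond_prob(1, 1, 1)  # 0.5 * 0.5 * 0.3
print(round(score0, 3), round(score1, 3))  # → 0.14 0.075
```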
Python Implementation
Training the naive Bayes classifier amounts to computing the probability tables of steps (1) and (2); classification applies steps (3) and (4) and picks the class with the largest score.
As before, we wrap it in a class with the usual interface:
from numpy import *
from functools import reduce  # reduce lives in functools under Python 3

class NaiveBayesClassifier(object):
    def __init__(self):
        self.dataMat = list()
        self.labelMat = list()
        self.pLabel1 = 0     # prior P(C = 1)
        self.p0Vec = list()  # per-feature frequencies for class 0
        self.p1Vec = list()  # per-feature frequencies for class 1

    def loadDataSet(self, filename):
        fr = open(filename)
        for line in fr.readlines():
            lineArr = line.strip().split()
            dataLine = list()
            for i in lineArr:
                dataLine.append(float(i))
            label = dataLine.pop()  # pop the last column, which holds the label
            self.dataMat.append(dataLine)
            self.labelMat.append(int(label))
        fr.close()

    def train(self):
        dataNum = len(self.dataMat)
        featureNum = len(self.dataMat[0])
        self.pLabel1 = sum(self.labelMat) / float(dataNum)
        p0Num = zeros(featureNum)
        p1Num = zeros(featureNum)
        p0Denom = 1.0  # start the denominators at 1.0 as a crude smoothing term
        p1Denom = 1.0
        for i in range(dataNum):
            if self.labelMat[i] == 1:
                p1Num += self.dataMat[i]
                p1Denom += sum(self.dataMat[i])
            else:
                p0Num += self.dataMat[i]
                p0Denom += sum(self.dataMat[i])
        self.p0Vec = p0Num / p0Denom
        self.p1Vec = p1Num / p1Denom

    def classify(self, data):
        # score each class by prod(data * pVec) * prior and compare
        p1 = reduce(lambda x, y: x * y, data * self.p1Vec) * self.pLabel1
        p0 = reduce(lambda x, y: x * y, data * self.p0Vec) * (1.0 - self.pLabel1)
        if p1 > p0:
            return 1
        else:
            return 0

    def test(self):
        self.loadDataSet('testNB.txt')
        self.train()
        print(self.classify([1, 2]))

if __name__ == '__main__':
    NB = NaiveBayesClassifier()
    NB.test()
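One practical caveat about the classify method above: multiplying many small probabilities underflows to 0.0 for longer feature vectors. The usual remedy is to compare log-scores instead; a sketch of that variant (my modification, not part of the original class) follows. Since log is monotone, the decision is identical as long as every factor is positive:

```python
from numpy import asarray, log

def classify_log(data, p0Vec, p1Vec, pLabel1):
    """Log-space version of classify(): take the log of each factor of the
    product prod(data * pVec) * prior and sum instead of multiplying.
    Same decision as the product form, but no underflow; it does require
    every factor to be strictly positive."""
    logp1 = log(asarray(data, dtype=float) * p1Vec).sum() + log(pLabel1)
    logp0 = log(asarray(data, dtype=float) * p0Vec).sum() + log(1.0 - pLabel1)
    return 1 if logp1 > logp0 else 0
```

With the vectors from a trained classifier this would be called as `classify_log(data, NB.p0Vec, NB.p1Vec, NB.pLabel1)`.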
Matlab
Matlab's standard toolbox (the Statistics and Machine Learning Toolbox) supports naive Bayes classifiers directly:
trainData = [0 1; -1 0; 2 2; 3 3; -2 -1;-4.5 -4; 2 -1; -1 -3];
group = [1 1 -1 -1 1 1 -1 -1]';
model = fitcnb(trainData, group)
testData = [5 2;3 1;-4 -3];
predict(model, testData)
fitcnb trains the model; predict makes predictions.
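By default fitcnb fits a Gaussian naive Bayes model: one normal distribution per feature per class. For comparison, the same example can be reproduced with plain numpy (a sketch under that Gaussian assumption, not an exact re-implementation of fitcnb):

```python
import numpy as np

trainData = np.array([[0, 1], [-1, 0], [2, 2], [3, 3],
                      [-2, -1], [-4.5, -4], [2, -1], [-1, -3]])
group = np.array([1, 1, -1, -1, 1, 1, -1, -1])
testData = np.array([[5, 2], [3, 1], [-4, -3]])

classes = np.unique(group)
preds = []
for x in testData:
    scores = []
    for c in classes:
        Xc = trainData[group == c]
        mu = Xc.mean(axis=0)               # per-feature Gaussian mean
        var = Xc.var(axis=0, ddof=1)       # per-feature sample variance
        # log prior + sum of per-feature Gaussian log-densities
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        scores.append(np.log(len(Xc) / len(group)) + log_lik)
    preds.append(int(classes[np.argmax(scores)]))
print(preds)  # → [-1, -1, 1]
```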
Summary
Naive Bayes combines Bayes' theorem with a strong conditional-independence assumption between features. Training reduces to counting frequencies in the training set, which makes the classifier simple to implement and fast to train.