Statistics, Chapter 2: The Perceptron
感知機(jī)是一種較為簡單的二分類模型,感知機(jī)旨在學(xué)習(xí)能夠?qū)⑤斎霐?shù)據(jù)劃分為+1/-1的線性分離超平面,所以說整體而言感知機(jī)是一種線性模型。?
查看數(shù)據(jù)集
```python
import pandas as pd
import numpy as np
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt

# load data
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['label'] = iris.target
df.columns = ['sepal length', 'sepal width', 'petal length', 'petal width', 'label']
print(df.label.value_counts())

# scatter the first two classes (50 samples each) in the sepal plane
plt.scatter(df[:50]['sepal length'], df[:50]['sepal width'], label='one')
plt.scatter(df[50:100]['sepal length'], df[50:100]['sepal width'], label='two')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.legend()
plt.show()
```
發(fā)現(xiàn)四個維度的數(shù)據(jù),有兩個維度就可以線性可分.
```python
import pandas as pd
import numpy as np
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt

# load data
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['label'] = iris.target

# take the first 100 samples (two classes): sepal length, sepal width, label
data = np.array(df.iloc[:100, [0, 1, -1]])
X, y = data[:, :-1], data[:, -1]
y = np.array([1 if i == 1 else -1 for i in y])  # relabel the classes as +1 / -1


# the data are linearly separable, so this is a binary classification task;
# in two dimensions the separating hyperplane is a straight line
class Model:
    def __init__(self):
        self.w = np.ones(len(data[0]) - 1, dtype=np.float32)
        self.b = 0
        self.l_rate = 0.1

    def sign(self, x, w, b):
        return np.dot(x, w) + b

    # stochastic gradient descent: loop until no sample is misclassified
    def fit(self, X_train, y_train):
        is_wrong = False
        while not is_wrong:
            wrong_count = 0
            for d in range(len(X_train)):
                X_i = X_train[d]
                y_i = y_train[d]
                if y_i * self.sign(X_i, self.w, self.b) <= 0:
                    self.w = self.w + self.l_rate * np.dot(y_i, X_i)
                    self.b = self.b + self.l_rate * y_i
                    wrong_count += 1
            if wrong_count == 0:
                is_wrong = True
        return 'Perceptron Model!'


perceptron = Model()
perceptron.fit(X, y)

# plot the learned separating line: w0*x + w1*y + b = 0
x_points = np.linspace(4, 7, 10)
y_ = -(perceptron.w[0] * x_points + perceptron.b) / perceptron.w[1]
plt.plot(x_points, y_)

plt.plot(data[:50, 0], data[:50, 1], 'o', color='blue', label='one')
plt.plot(data[50:100, 0], data[50:100, 1], 'o', color='orange', label='two')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.legend()
plt.show()
```
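As a cross-check (not part of the original post), scikit-learn ships its own `linear_model.Perceptron`, which can be fit on the same two features. Since these two iris classes are linearly separable in the sepal plane, the perceptron convergence theorem says training should reach 100% accuracy; `tol=None` disables early stopping so it keeps iterating until it does. The names `X` and `y` mirror the arrays built above.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron

# rebuild the same two-feature, two-class subset used above
iris = load_iris()
X = iris.data[:100, :2]                      # sepal length, sepal width
y = np.where(iris.target[:100] == 1, 1, -1)  # relabel the classes as +1 / -1

# tol=None disables early stopping; training runs full epochs until convergence
clf = Perceptron(fit_intercept=True, max_iter=1000, tol=None, shuffle=True)
clf.fit(X, y)

print('w =', clf.coef_, 'b =', clf.intercept_)
print('training accuracy:', clf.score(X, y))
```

The learned `clf.coef_` and `clf.intercept_` play the same roles as `perceptron.w` and `perceptron.b` in the hand-written model, though the exact line differs because the update order differs.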
Summary
The above covers the whole of the perceptron material from Chapter 2; hopefully it helps you resolve any problems you run into.