Machine Learning: Multilayer Perceptron Principles and Implementation
Contents
- MLP
- Vector form of the MLP
- Loss function of the MLP
- Computing ∂L/∂w for the perceptron
- Implementation
- Verification
- Helper functions
MLP
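An MLP chains affine maps with element-wise nonlinear activations. For a single hidden layer with activation $\sigma$, the forward pass is

$$h = \sigma(W_1 x + b_1), \qquad \hat{y} = W_2 h + b_2.$$

The implementation below keeps only a single linear layer with no hidden activation, so it is effectively a multi-class linear perceptron trained with a squared loss.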
Vector form of the MLP:
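Consistent with the `fit` method below, stack the $N$ training samples as the rows of $X \in \mathbb{R}^{N \times d}$ and keep one weight column per class in $W \in \mathbb{R}^{d \times K}$, so all class scores come out of a single matrix product:

$$\hat{Y} = XW, \qquad \hat{Y} \in \mathbb{R}^{N \times K}.$$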
Loss function of the MLP:
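A squared-error loss consistent with the gradient step used in `fit` below is the sum of squared differences between the scores $XW$ and the one-hot targets $Y$:

$$L(W) = \lVert XW - Y \rVert_F^2 = \sum_{i=1}^{N} \sum_{k=1}^{K} \bigl( (XW)_{ik} - Y_{ik} \bigr)^2.$$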
感知機(jī)求?L/?w:
Implementation
```python
import numpy as np
import matplotlib.pyplot as plt


class MultiPerceptron:
    def __init__(self):
        # weight matrix, shape (n_features, n_classes)
        self._w = None

    def fit(self, x, y, lr=1e-3, epoch=1000):
        x = np.asarray(x, np.float32)
        y = np.asarray(y, np.float32)
        # (n_features, n_classes)
        self._w = np.zeros([x.shape[1], y.shape[1]])
        for _ in range(epoch):
            # note: the prediction is the dot product x . w
            y_pred = x.dot(self._w)
            # this is dL/dw, the gradient of the squared loss
            dw = 2 * x.T.dot(y_pred - y)
            self._w -= lr * dw

    def predict(self, x):
        y_pred = np.asarray(x, np.float32).dot(self._w)
        return np.argmax(y_pred, axis=1).astype(np.float32)
```
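As a sanity check on the update rule, the analytic gradient $2X^{\top}(XW - Y)$ can be compared against a finite-difference estimate of the squared loss. This is a minimal sketch, not part of the original post; `num_grad` and the tiny random problem are made up here for illustration:

```python
import numpy as np


def num_grad(loss, w, eps=1e-4):
    """Central-difference estimate of dL/dW for a scalar loss(w)."""
    g = np.zeros_like(w)
    it = np.nditer(w, flags=["multi_index"], op_flags=["readwrite"])
    while not it.finished:
        idx = it.multi_index
        old = w[idx]
        w[idx] = old + eps
        l_plus = loss(w)
        w[idx] = old - eps
        l_minus = loss(w)
        w[idx] = old
        g[idx] = (l_plus - l_minus) / (2 * eps)
        it.iternext()
    return g


# tiny random problem: 10 samples, 2 features, 5 classes
rng = np.random.RandomState(0)
x = rng.randn(10, 2)
y = np.eye(5)[rng.randint(0, 5, size=10)]
w = rng.randn(2, 5)

loss = lambda w: np.sum((x.dot(w) - y) ** 2)
analytic = 2 * x.T.dot(x.dot(w) - y)       # the gradient used in MultiPerceptron.fit
numeric = num_grad(loss, w.copy())
print(np.max(np.abs(analytic - numeric)))  # should be tiny (numerical noise only)
```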
Verification

```python
# x has shape (200, 2), y has shape (200, 5); y is a one-hot matrix
x, y = gen_five_clusters()
# convert the one-hot targets into integer class labels
label = np.argmax(y, axis=1)
perceptron = MultiPerceptron()
perceptron.fit(x, y)
visualize2d(perceptron, x, label, draw_background=True)
print("Acc: {:8.6} %".format((perceptron.predict(x) == label).mean() * 100))
```
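The model is fit on the one-hot targets, evaluated by comparing its argmax predictions against the integer labels, and visualize2d shades the learned decision regions behind the scattered training points.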
Helper functions

```python
import numpy as np
import matplotlib.pyplot as plt


def gen_five_clusters(size=200):
    # sample 2-D points and assign one of five labels based on the lines x+y=±1 and x-y=±1
    x = np.random.randn(size) * 2
    y = np.random.randn(size) * 2
    z = np.full(size, -1)
    mask1, mask2 = x + y >= 1, x + y >= -1
    mask3, mask4 = x - y >= 1, x - y >= -1
    z[mask1 & ~mask4] = 0
    z[mask1 & mask3] = 1
    z[~mask2 & mask3] = 2
    z[~mask2 & ~mask4] = 3
    z[z == -1] = 4
    one_hot = np.zeros([size, 5])
    one_hot[range(size), z] = 1
    return np.c_[x, y].astype(np.float32), one_hot


def visualize2d(clf, x, y, padding=0.2, draw_background=False):
    # scatter the data and (optionally) shade the classifier's decision regions on a 400x400 grid
    axis, labels = np.array(x).T, np.array(y)
    nx, ny = 400, 400
    x_min, x_max = np.min(axis[0]), np.max(axis[0])
    y_min, y_max = np.min(axis[1]), np.max(axis[1])
    x_padding = max(abs(x_min), abs(x_max)) * padding
    y_padding = max(abs(y_min), abs(y_max)) * padding
    x_min -= x_padding
    x_max += x_padding
    y_min -= y_padding
    y_max += y_padding

    def get_base(nx, ny):
        xf = np.linspace(x_min, x_max, nx)
        yf = np.linspace(y_min, y_max, ny)
        n_xf, n_yf = np.meshgrid(xf, yf)
        return xf, yf, np.c_[n_xf.ravel(), n_yf.ravel()]

    xf, yf, base_matrix = get_base(nx, ny)
    z = clf.predict(base_matrix).reshape([ny, nx])
    n_label = len(np.unique(labels))
    xy_xf, xy_yf = np.meshgrid(xf, yf, sparse=True)
    colors = plt.cm.rainbow([i / n_label for i in range(n_label)])[labels.astype(int)]

    plt.figure()
    if draw_background:
        plt.pcolormesh(xy_xf, xy_yf, z, cmap=plt.cm.Paired)
    else:
        plt.contour(xf, yf, z, colors='k', levels=[0])
    plt.scatter(axis[0], axis[1], c=colors)
    plt.xlim(x_min, x_max)
    plt.ylim(y_min, y_max)
    plt.show()
```
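gen_five_clusters draws 200 points from a 2-D Gaussian and labels them by which side of the lines x + y = ±1 and x - y = ±1 they fall on: the four outer regions become classes 0 to 3 and the remaining band around those lines becomes class 4, so the class boundaries are piecewise linear.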