Code for Simple Algorithms
Table of Contents
- 1. Gradient Descent
- 2. KNN
1. Gradient Descent
```python
# BGD: batch gradient descent
# SGD: stochastic gradient descent
import numpy as np
import random


def batchGradientDescent(x, y, theta, alpha, m, maxIteration):
    x_train = x.transpose()
    for i in range(0, maxIteration):
        hypothesis = np.dot(x, theta)
        # loss (residual) vector
        loss = hypothesis - y
        # gradient of the squared error, averaged over all m samples
        gradient = np.dot(x_train, loss) / m
        # step theta against the gradient
        theta = theta - alpha * gradient
    return theta


def stochasticGradientDescent(x, y, theta, alpha, m, maxIteration):
    # indices of the training samples
    data = list(range(m))
    for i in range(0, maxIteration):
        hypothesis = np.dot(x, theta)
        # loss (residual) vector
        loss = hypothesis - y
        # pick one random sample index
        index1 = random.sample(data, 1)[0]
        # gradient contributed by that single sample
        gradient = loss[index1] * x[index1]
        # step theta against the gradient
        theta = theta - alpha * gradient
    return theta


def main():
    trainData = np.array([[1, 4, 2], [2, 5, 3], [5, 1, 6], [4, 2, 8]])
    trainLabel = np.array([19, 26, 19, 20])
    print(trainData)
    print(trainLabel)
    m, n = np.shape(trainData)
    theta = np.ones(n)
    print(theta.shape)
    maxIteration = 500
    alpha = 0.01
    theta1 = batchGradientDescent(trainData, trainLabel, theta, alpha, m, maxIteration)
    print(theta1)
    theta2 = stochasticGradientDescent(trainData, trainLabel, theta, alpha, m, maxIteration)
    print(theta2)


if __name__ == "__main__":
    main()
```

2. KNN
Key point: compute the distance between the query point and each sample: $d=\sqrt{\sum_i (x_i-x_{\text{test}})^2}$
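The distance above can be computed for every training row at once with NumPy broadcasting, rather than one point at a time. A minimal sketch, with made-up sample points for illustration:

```python
import numpy as np

# Hypothetical data: four 2-D training points and one query point
data_set = np.array([[1.0, 1.0], [1.0, 2.0], [4.0, 4.0], [5.0, 4.0]])
x_test = np.array([1.0, 1.5])

# d = sqrt(sum_i (x_i - x_test)^2), computed row-wise via broadcasting
distances = np.sqrt(((data_set - x_test) ** 2).sum(axis=1))
print(distances)  # one Euclidean distance per training point
```

Here the first two rows are both at distance 0.5 from the query point, so either would be its nearest neighbour.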
Pseudocode:
- Compute the distance from every training sample to the query point
- Select the k points with the smallest distances
- Return the class that appears most frequently among those k points as the predicted class of the query point
Code:
```python
import operator
import numpy as np


def knn(in_x, data_set, labels, k):
    """
    in_x: the input vector to classify
    data_set: the sample set; distances are computed from in_x to every row
    labels: class labels for the rows of data_set
    k: number of nearest neighbours that vote
    """
    data_size = data_set.shape[0]
    # Tile the input row into a matrix the same shape as data_set,
    # then take the per-row difference
    diff_mat = np.tile(in_x, (data_size, 1)) - data_set
    sq_diff_mat = diff_mat ** 2
    # Euclidean distance: row-wise sum of squares, then square root
    distances = sq_diff_mat.sum(axis=1) ** 0.5
    # Indices that sort the distances in ascending order
    sorted_dist_indices = distances.argsort()
    # Tally the class labels of the k nearest samples
    class_count = {}
    for i in range(k):
        vote_label = labels[sorted_dist_indices[i]]
        # get: fetch the current count for the key, defaulting to 0
        class_count[vote_label] = class_count.get(vote_label, 0) + 1
    sorted_class_count = sorted(class_count.items(),
                                key=operator.itemgetter(1), reverse=True)
    return sorted_class_count[0][0]
```

Summary
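A usage sketch on a tiny made-up dataset (two clusters labelled 'A' and 'B'); the voting logic from the function above is repeated here in condensed form so the snippet runs on its own:

```python
import operator
import numpy as np


def knn(in_x, data_set, labels, k):
    # Condensed version of the knn function above
    diff = np.tile(in_x, (data_set.shape[0], 1)) - data_set
    distances = (diff ** 2).sum(axis=1) ** 0.5
    order = distances.argsort()
    counts = {}
    for i in range(k):
        label = labels[order[i]]
        counts[label] = counts.get(label, 0) + 1
    return sorted(counts.items(), key=operator.itemgetter(1), reverse=True)[0][0]


# Made-up data: two points near (1, 1) labelled 'A', two near (0, 0) labelled 'B'
data_set = np.array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
labels = ['A', 'A', 'B', 'B']
print(knn([0.0, 0.0], data_set, labels, 3))  # prints 'B'
```

With k=3 the three nearest neighbours of the origin are the two 'B' points and one 'A' point, so 'B' wins the vote.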