

Implementing the Logistic Regression Algorithm in Python

Published: 2025/3/12



1. Algorithm Overview

Model description: a linear function f(x) = w·x + b is passed through the sigmoid function to produce a class probability.

sigmoid function: sigmoid(z) = 1 / (1 + e^(-z))

Principle: the optimization objective is to minimize the difference between sigmoid(f(x)) and the true labels (different cost functions can be used). An optimization algorithm (such as gradient ascent) is then applied to update the parameters w and b.
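As a minimal sketch of the update rule described above (using illustrative toy data of my own, not the article's dataset), one batch gradient-ascent step on the log-likelihood looks like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1 + np.exp(-z))

# Toy data: 4 samples, each with a constant-1 column so the bias b
# is folded into the weight vector as w[0]
X = np.array([[1.0, 0.5, 1.2],
              [1.0, -0.3, 0.8],
              [1.0, 2.0, -1.1],
              [1.0, 1.5, -0.6]])
y = np.array([1, 1, 0, 0])

w = np.ones(3)
learning_rate = 0.1

# One gradient-ascent step: w <- w + alpha * X^T (y - sigmoid(Xw)) / m
a = sigmoid(X @ w)
w = w + learning_rate * X.T @ (y - a) / len(y)
print(w)
```

Repeating this step drives sigmoid(Xw) toward the true labels; the full loop appears in the article's code below.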


2. Python Implementation with Comments

Decision boundary (plot shown when the code below runs):

Code:

import numpy as np
from matplotlib import pyplot as plt

# Load the dataset
def loadDataset(filename):
    dataList = []    # feature vectors
    labelsList = []  # labels
    with open(filename) as fr:
        for line in fr.readlines():
            lineArr = line.strip().split()
            # Prepend a constant feature of 1.0, corresponding to the bias b
            dataList.append([1.0, float(lineArr[0]), float(lineArr[1])])
            labelsList.append(int(lineArr[-1]))
    return dataList, labelsList

# Compute the sigmoid function
def sigmoid(z):
    return 1.0 / (1 + np.exp(-z))

# Update the parameters with standard (batch) gradient ascent
def gradAscent(dataList, labelsList):
    dataMat = np.mat(dataList)
    labelsMat = np.mat(labelsList).T
    m, n = np.shape(dataMat)
    learningRate = 0.1
    maxCycles = 1000
    weights = np.ones((n, 1))
    for i in range(maxCycles):
        a = sigmoid(dataMat * weights)
        # Gradient of the log-likelihood
        error = labelsMat - a
        # Update the parameters
        weights = weights + learningRate * dataMat.T * error / m
    return weights

# Update the parameters with stochastic gradient ascent
def stocGradAscent(dataList, labelsList):
    m, n = np.shape(dataList)
    maxCycles = 200
    weights = np.ones(n)
    for j in range(maxCycles):
        for i in range(m):
            # Decaying learning rate
            learningRate = 4 / (1.0 + j + i) + 0.01
            # Compute the sigmoid for one sample at a time
            a = sigmoid(np.sum(dataList[i] * weights))
            error = labelsList[i] - a
            weights = weights + learningRate * error * np.array(dataList[i])
    return weights

# Plot the dataset and the decision boundary
def plotLine(dataSet, labels, weights):
    # Plot the samples, with marker size and color varying by class
    plt.scatter(np.array(dataSet)[:, 1], np.array(dataSet)[:, 2],
                30 * (np.array(labels) + 1), 15 * np.array(labels))
    # Set the linear expression to 0 to plot the decision boundary
    x = np.expand_dims(np.arange(-3, 3, 1), 1)
    y = (-weights[0] - x * weights[1]) / weights[2]
    plt.plot(x, y)
    plt.show()

# Test the logistic regression classifier
def predict(sample, weights):
    # Rebuild the feature vector with the constant 1.0 feature
    sample = np.array([1.0, sample[0], sample[1]])
    prob = sigmoid(np.sum(sample * weights))
    if prob > 0.5:
        print("this is a positive")
    else:
        print("this is a negative")

if __name__ == '__main__':
    # Load the dataset
    dataList, labelsList = loadDataset('testSet.txt')
    # Train the model parameters
    # weights = gradAscent(dataList, labelsList)
    weights = stocGradAscent(dataList, labelsList)
    print(weights)
    # Plot the decision boundary
    plotLine(dataList, labelsList, weights)
    # Test the classifier
    sample = np.array([1, 9])
    predict(sample, weights)
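Because testSet.txt does not ship with the article, here is a hedged usage sketch: it trains the same stochastic gradient-ascent loop on a synthetic, linearly separable dataset (the data generation and the accuracy check are my additions, not part of the original):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1 + np.exp(-z))

# Same stochastic gradient-ascent loop as in the article's code
def stocGradAscent(dataList, labelsList):
    m, n = np.shape(dataList)
    weights = np.ones(n)
    for j in range(200):
        for i in range(m):
            learningRate = 4 / (1.0 + j + i) + 0.01
            a = sigmoid(np.sum(dataList[i] * weights))
            error = labelsList[i] - a
            weights = weights + learningRate * error * np.array(dataList[i])
    return weights

# Synthetic linearly separable data: label is 1 when x1 + x2 > 0
rng = np.random.default_rng(0)
points = rng.uniform(-3, 3, size=(100, 2))
labels = (points[:, 0] + points[:, 1] > 0).astype(int)
dataList = [[1.0, p[0], p[1]] for p in points]

weights = stocGradAscent(dataList, list(labels))
preds = [int(sigmoid(np.sum(np.array(s) * weights)) > 0.5) for s in dataList]
accuracy = np.mean(np.array(preds) == labels)
print(accuracy)
```

On separable data like this, the learned boundary should closely match x1 + x2 = 0, so training accuracy should be high.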

Summary

That concludes this article on implementing the logistic regression algorithm in Python. We hope it helps you solve the problems you encounter.
