Non-linear Regression
非線性回歸應(yīng)用(Logistic Regression Application)
```python
import numpy as np
import random

# Gradient descent
def GradientDescent(x, y, theta, alpha, m, numIterations):
    '''
    x: instances; y: labels
    theta: the parameters θ to learn
    alpha: learning rate
    m: number of examples in the update rule (the row count of x),
       not the number of features
    numIterations: number of update iterations
    '''
    xTrans = x.transpose()  # transpose x once, for the gradient computation below
    for i in range(0, numIterations):
        # hypothesis must stay inside the loop: theta changes every
        # iteration, so y_hat has to be recomputed each time
        hypothesis = np.dot(x, theta)
        loss = hypothesis - y  # hypothesis is y_hat, so loss = y_hat - y
        # Average cost per example. (The 2 in 2*m doesn't really matter here,
        # but it keeps the cost consistent with the gradient.)
        # This cost is simpler than the one in the text, for ease of computation;
        # it measures fit quality and decreases with every gradient step.
        cost = np.sum(loss**2) / (2*m)
        print('Iteration:%d | cost:%f' % (i, cost))
        # Average gradient per example (divide by m)
        gradient = np.dot(xTrans, loss) / m
        # Update rule: θ = θ - (α/m)·∑(h(x)-y)·x
        theta = theta - alpha * gradient
    return theta

# Generate data to test the fit
def genData(numPoints, bias, variance):
    '''
    numPoints: number of instance rows (each row of the matrix is one example)
    bias: offset added when generating y
    variance: noise scale for y
    '''
    x = np.zeros(shape=(numPoints, 2))  # numPoints rows, 2 columns
    y = np.zeros(shape=(numPoints))
    # basically a straight line
    for i in range(0, numPoints):
        # bias feature
        x[i][0] = 1
        x[i][1] = i
        # target variable; random.uniform(0, 1) gives a 0~1 random number,
        # like random.random()
        y[i] = (i + bias) + random.uniform(0, 1) * variance
    return x, y

# Generate 100 rows with a bias of 25 and variance 10 as a bit of noise.
# genData returns two values, assigned here to x and y.
x, y = genData(100, 25, 10)
# print(x)
# print(y)
m, n = np.shape(x)  # number of rows -> m, number of columns -> n
a = np.shape(y)     # y has a single column, so only the row count is returned: (100,)
# print(m, n)  # (100, 2)
# print(a)     # (100,)
numIterations = 100000
alpha = 0.0005  # in (0, 1); better schemes start alpha larger and decay it over time
theta = np.ones(n)  # initialize θ to [1. 1.]; since this cost is convex,
                    # any starting point converges, and 1 is a simple choice
theta = GradientDescent(x, y, theta, alpha, m, numIterations)
print(theta)  # roughly [30 1]
# The learned theta can now be used to compute predictions for new instances.
# This gradient descent method is used in both regression and neural networks.
```
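To make the "predict new instances" step concrete, here is a minimal sketch. The `predict` helper and the rounded theta values are illustrative assumptions, not output from a specific run; they follow the `[bias, feature]` row layout that `genData` produces.

```python
import numpy as np

# Illustrative parameters in the form learned above: theta ≈ [intercept, slope].
theta = np.array([30.0, 1.0])

def predict(x_new, theta):
    """Predict y for a raw input value using the [bias, feature] layout from genData."""
    features = np.array([1.0, x_new])  # prepend the constant bias feature
    return np.dot(features, theta)

print(predict(50, theta))  # 1*30 + 50*1 -> 80.0
```

The same dot product the training loop uses for `hypothesis` does the prediction; only the theta is now fixed.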