
Machine Learning Practice 1: Logistic Regression and Regularization


Logistic regression

Data: two features x1 and x2, and a label y that is either 0 or 1.

Plotting

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def read_file(file):
    # load the data as a NumPy array: columns are exam1, exam2, admitted
    data = pd.read_csv(file, names=['exam1', 'exam2', 'admitted'])
    data = np.array(data)
    return data

def plot_data(X, y):
    plt.figure(figsize=(6, 4), dpi=150)
    X1 = X[y == 1, :]   # admitted examples
    X2 = X[y == 0, :]   # not admitted examples
    plt.plot(X1[:, 0], X1[:, 1], 'yo')
    plt.plot(X2[:, 0], X2[:, 1], 'k+')
    plt.xlabel('Exam1 score')
    plt.ylabel('Exam2 score')
    plt.legend(['Admitted', 'Not admitted'], loc='upper right')
    plt.show()

# assumes the data has been loaded beforehand, e.g.:
# data = read_file('ex2data1.txt')
# X, y = data[:, :2], data[:, 2]
print('Plotting data with + indicating (y = 1) examples and o indicating (y = 0) examples.')
plot_data(X, y)
```


The plot shows a clear boundary between the admitted and not-admitted examples, so we now fit a logistic regression model.

The hypothesis function of logistic regression is

$$h_\theta(x) = g(\theta^T x)$$

where g(z) is the logistic (sigmoid) function:

$$g(z) = \frac{1}{1 + e^{-z}}$$

```python
def sigmoid(x):
    return 1 / (np.exp(-x) + 1)
```

Let's take a look at the plot of the logistic function.
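The original figure is not reproduced here; a minimal matplotlib sketch that generates the S-shaped curve, using the sigmoid defined above, could look like this:

```python
z = np.linspace(-10, 10, 200)
plt.figure(figsize=(6, 4), dpi=150)
plt.plot(z, sigmoid(z))                  # S-shaped curve rising from 0 to 1
plt.axhline(0.5, color='gray', ls='--')  # the 0.5 decision threshold
plt.xlabel('z')
plt.ylabel('g(z)')
plt.title('Logistic (sigmoid) function')
plt.show()
```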

Cost function

The cost function of logistic regression is

$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right]$$

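The post calls cost_function below without showing its body; a minimal sketch consistent with those calls (X with a bias column already added, y as an m×1 column vector) could be:

```python
def cost_function(theta, X, y):
    m, n = X.shape
    theta = theta.reshape((n, 1))
    h = sigmoid(X.dot(theta))
    # cross-entropy loss averaged over the m training examples
    J = -(y * np.log(h) + (1 - y) * np.log(1 - h)).sum() / m
    return J
```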
Gradient descent

  • Batch gradient descent
  • Vectorized gradient formula:
    $$\frac{\partial J(\theta)}{\partial \theta} = \frac{1}{m} X^{T}\left(\mathrm{sigmoid}(X\theta) - y\right)$$
```python
def gradient(initial_theta, X, y):
    m, n = X.shape
    initial_theta = initial_theta.reshape((n, 1))
    grad = X.T.dot(sigmoid(X.dot(initial_theta)) - y) / m
    return grad.flatten()
```

Compute cost and gradient

```python
m, n = X.shape
X = np.c_[np.ones(m), X]          # add the intercept column
initial_theta = np.zeros((n + 1, 1))
y = y.reshape((m, 1))

# cost, grad = costFunction(initial_theta, X, y)
cost, grad = cost_function(initial_theta, X, y), gradient(initial_theta, X, y)
print('Cost at initial theta (zeros): %f' % cost)
print('Expected cost (approx): 0.693')
print('Gradient at initial theta (zeros): ')
print('%f %f %f' % (grad[0], grad[1], grad[2]))
print('Expected gradients (approx): -0.1000 -12.0092 -11.2628')

# check with a non-zero test theta
theta1 = np.array([[-24], [0.2], [0.2]], dtype='float64')
cost, grad = cost_function(theta1, X, y), gradient(theta1, X, y)
# cost, grad = costFunction(theta1, X, y)
print('Cost at test theta: %f' % cost)
print('Expected cost (approx): 0.218')
print('Gradient at test theta: ')
print('%f %f %f' % (grad[0], grad[1], grad[2]))
print('Expected gradients (approx): 0.043 2.566 2.647')
```

Learning the parameters θ
We train with scipy's optimize module (the Python counterpart of MATLAB's fminunc) to obtain the final theta.
Optimizing using fminunc

```python
import scipy.optimize as opt

initial_theta = np.zeros(n + 1)
result = opt.minimize(fun=cost_function, x0=initial_theta, args=(X, y),
                      method='SLSQP', jac=gradient)

print('Cost at theta found by fminunc: %f' % result['fun'])
print('Expected cost (approx): 0.203')
print('theta:')
print('%f %f %f' % (result['x'][0], result['x'][1], result['x'][2]))
print('Expected theta (approx):')
print(' -25.161 0.206 0.201')
```

Predict and accuracies
With the learned parameters θ, we predict y = 1 when hθ(x) ≥ 0.5
and y = 0 when hθ(x) < 0.5.

```python
def predict(theta, X):
    m = np.size(theta, 0)
    rst = sigmoid(X.dot(theta.reshape(m, 1)))
    rst = rst > 0.5
    return rst

# predict and accuracies
prob = sigmoid(np.array([1, 45, 85], dtype='float64').dot(result['x']))
print('For a student with scores 45 and 85, we predict an admission '
      'probability of %.3f' % prob)
print('Expected value: 0.775 +/- 0.002\n')

p = predict(result['x'], X)
print('Train Accuracy: %.1f%%' % (np.mean(p == y) * 100))
print('Expected accuracy (approx): 89.0%\n')
```

We can also check the result with sklearn:

```python
from sklearn.metrics import classification_report

# classification_report expects (y_true, y_pred)
print(classification_report(y, p))
```

Decision boundary

The decision boundary is the set of points where $X\theta = 0$, i.e.

$$\theta_0 + \theta_1 x_1 + \theta_2 x_2 = 0$$

so for a given $x_1$ we can solve $x_2 = -(\theta_0 + \theta_1 x_1)/\theta_2$.

```python
final_theta = result['x']   # theta learned above

x1 = np.arange(70, step=0.1)
x2 = -(final_theta[0] + x1 * final_theta[1]) / final_theta[2]

fig, ax = plt.subplots(figsize=(8, 5))
# skip the bias column added earlier and flatten y for boolean indexing
positive = X[y.ravel() == 1, 1:]
negative = X[y.ravel() == 0, 1:]
ax.scatter(positive[:, 0], positive[:, 1], c='b', label='Admitted')
ax.scatter(negative[:, 0], negative[:, 1], s=50, c='r', marker='x', label='Not Admitted')
ax.plot(x1, x2)
ax.set_xlim(30, 100)
ax.set_ylim(30, 100)
ax.set_xlabel('x1')
ax.set_ylabel('x2')
ax.set_title('Decision Boundary')
ax.legend()
plt.show()
```

Regularized logistic regression

Regularization reduces overfitting, i.e. high variance. Intuitively, when the hyperparameter λ is large, the parameters θ are driven toward smaller values, so the fitted curve becomes simpler and the variance drops.
Visualizing the data

```python
data2 = pd.read_csv('ex2data2.txt', names=['column1', 'column2', 'Accepted'])
data2.head()

def plot_data():
    positive = data2[data2['Accepted'].isin([1])]
    negative = data2[data2['Accepted'].isin([0])]
    fig, ax = plt.subplots(figsize=(8, 5))
    ax.scatter(positive['column1'], positive['column2'], s=50, c='b', marker='o', label='Accepted')
    ax.scatter(negative['column1'], negative['column2'], s=50, c='r', marker='x', label='Rejected')
    ax.legend()
    ax.set_xlabel('Test 1 Score')
    ax.set_ylabel('Test 2 Score')

plot_data()
```

Feature mapping

To get a richer hypothesis, we combine the two features x1 and x2 by mapping them onto all polynomial terms of x1 and x2, up to the sixth power:

```
for i in 0..power
    for p in 0..i:
        output x1^(i-p) * x2^p
```

```python
def feature_mapping(x1, x2, power):
    data = {}
    for i in np.arange(power + 1):
        for p in np.arange(i + 1):
            data["f{}{}".format(i - p, p)] = np.power(x1, i - p) * np.power(x2, p)
    return pd.DataFrame(data)
```
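For reference, a minimal usage sketch that builds the mapped design matrix and the label vector used by the regularized steps below (the names X_reg and y_reg are introduced here, not from the original post):

```python
# map the two raw features up to the 6th power; the first mapped column f00 is all ones,
# so it already plays the role of the intercept term
X_reg = feature_mapping(data2['column1'], data2['column2'], 6).values
y_reg = data2['Accepted'].values.reshape(-1, 1)
print(X_reg.shape)   # 28 columns for power=6; 118 rows for the ex2data2.txt dataset
```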


Regularized Cost function
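The original post shows the regularized cost only as an image. The standard form (the intercept θ0 is not regularized) is

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h_\theta(x^{(i)}) + \left(1-y^{(i)}\right)\log\left(1-h_\theta(x^{(i)})\right)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

A minimal sketch, reusing the cost_function sketch above (the helper name cost_function_reg is an assumption, not the post's original code):

```python
def cost_function_reg(theta, X, y, lam=1.0):
    m, n = X.shape
    theta = theta.reshape((n, 1))
    # penalize every parameter except theta_0 (the intercept)
    penalty = lam / (2 * m) * np.sum(theta[1:] ** 2)
    return cost_function(theta, X, y) + penalty
```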

Regularized gradient descent
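The regularized gradient is the plain gradient plus $\frac{\lambda}{m}\theta_j$ for every $j \ge 1$ (θ0 is left unpenalized). A sketch of the gradient and of a training step that produces the final_theta used by the decision-boundary code below; gradient_reg, X_reg and y_reg are names introduced in these sketches, and the optimizer and λ = 1.0 are assumptions:

```python
def gradient_reg(theta, X, y, lam=1.0):
    m, n = X.shape
    theta = theta.reshape((n, 1))
    grad = X.T.dot(sigmoid(X.dot(theta)) - y) / m
    reg = (lam / m) * theta
    reg[0] = 0                      # do not regularize the intercept
    return (grad + reg).flatten()

init = np.zeros(X_reg.shape[1])
res = opt.minimize(fun=cost_function_reg, x0=init, args=(X_reg, y_reg, 1.0),
                   method='SLSQP', jac=gradient_reg)
final_theta = res['x']
```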

Decision boundary

The boundary is again the set of points where $X\theta = 0$; since the features are now a polynomial mapping, we evaluate $X\theta$ on a grid and draw its zero-level contour:

```python
x = np.linspace(-1, 1.5, 250)
xx, yy = np.meshgrid(x, x)

# .values replaces the deprecated DataFrame.as_matrix()
z = feature_mapping(xx.ravel(), yy.ravel(), 6).values
z = z.dot(final_theta)
z = z.reshape(xx.shape)

plot_data()
plt.contour(xx, yy, z, 0)   # the zero-level contour is the decision boundary
plt.ylim(-.8, 1.2)
```
