

[Machine Learning Basics] Math Derivation + Pure Python Implementation of ML Algorithms 8-9: Linearly Separable SVM and Linear SVM...

Published: 2025/3/8


Python Machine Learning Algorithm Implementations

Author: louwill


In the previous two posts we introduced the perceptron and neural networks. As a linear classification model, the perceptron struggles with nonlinear problems. To handle nonlinearity, two lines of work grew out of the perceptron. One is the neural network covered last time; as everyone has seen, deep learning is now flourishing, with powerful networks of every kind. But even before the rise of neural networks, another perceptron-based model, the support vector machine, could solve nonlinear problems as well.

Support vector machines generally address three types of situation: the linearly separable case, the approximately linearly separable case, and the linearly non-separable case. These correspond to three models: the linearly separable support vector machine, the linear support vector machine, and the nonlinear support vector machine. The writer will introduce them over three installments.

Linearly Separable Support Vector Machine

Like the perceptron, the linearly separable SVM is trained to find a separating hyperplane that divides the data into classes. From studying the perceptron we know that when the training data are linearly separable, more than one separating hyperplane generally exists, possibly infinitely many. The linearly separable SVM instead obtains a single optimal separating hyperplane by maximizing the margin.

The writer will not dwell on the related concepts of functional margin, geometric margin, and support vectors; for the details, see the book Statistical Learning Methods. In short, the linearly separable SVM can be formalized as a convex quadratic programming problem:
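The original equation image is missing here; in standard notation (following Statistical Learning Methods), the hard-margin primal the text refers to is:

```latex
\min_{w,b}\quad \frac{1}{2}\lVert w\rVert^{2}
\qquad \text{s.t.}\quad y_i\,(w\cdot x_i + b) \ge 1,\quad i = 1,2,\dots,N
```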

One more remark: the optimization algorithms behind the perceptron, maximum entropy, and support vector machine models are all classic optimization problems. For the concepts involved, such as convex optimization, Lagrangian duality, KKT conditions, and quadratic programming, the writer suggests finding the relevant materials and textbooks and studying them carefully; they are not expanded on here.

In general we could solve the above convex QP directly, but sometimes the primal problem is not easy to solve. In that case we introduce Lagrangian duality and convert the primal problem into a dual problem. The general form of the primal problem is:
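The equation image did not survive; the standard general form of a constrained primal problem reads:

```latex
\min_{x}\quad f(x)
\qquad \text{s.t.}\quad c_i(x) \le 0,\; i = 1,\dots,k;
\qquad h_j(x) = 0,\; j = 1,\dots,l
```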

Introduce the Lagrangian:
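The missing equation, in the standard form matching the general primal above:

```latex
L(x,\alpha,\beta) = f(x) + \sum_{i=1}^{k}\alpha_i\,c_i(x) + \sum_{j=1}^{l}\beta_j\,h_j(x),
\qquad \alpha_i \ge 0
```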

Define the function obtained by maximizing this Lagrangian over the multipliers:
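The missing equation: maximizing the Lagrangian over the multipliers defines

```latex
\theta_P(x) = \max_{\alpha,\beta:\ \alpha_i \ge 0} L(x,\alpha,\beta)
```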

It can be proved that the primal problem is equivalent to the min-max problem of this Lagrangian:
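In standard notation, the min-max problem referred to is:

```latex
\min_{x}\;\max_{\alpha,\beta:\ \alpha_i \ge 0} L(x,\alpha,\beta)
```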

The primal is thus a min-max problem; by Lagrangian duality, the dual problem is the corresponding max-min problem:
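The missing equation: the dual swaps the order of optimization,

```latex
\max_{\alpha,\beta:\ \alpha_i \ge 0}\;\min_{x} L(x,\alpha,\beta)
```

Under convexity and a constraint qualification such as Slater's condition, the optimal values of the two problems coincide.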

To compute the solution of this dual problem, we first minimize L(w, b, α) with respect to w and b, then maximize over α. The derivation proceeds as follows. Step 1:
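The step-1 equations are missing; for the SVM Lagrangian L(w,b,α) = ½‖w‖² − Σᵢ αᵢ[yᵢ(w·xᵢ+b) − 1], setting the gradients to zero gives the standard stationarity conditions:

```latex
\nabla_w L = w - \sum_{i=1}^{N}\alpha_i y_i x_i = 0
\;\Longrightarrow\; w = \sum_{i=1}^{N}\alpha_i y_i x_i,
\qquad
\frac{\partial L}{\partial b} = -\sum_{i=1}^{N}\alpha_i y_i = 0
```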

Step 2:
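The step-2 equation is missing; substituting the stationarity conditions back into the Lagrangian yields the standard dual objective, which is then maximized over α subject to Σᵢ αᵢyᵢ = 0 and αᵢ ≥ 0:

```latex
\min_{w,b} L(w,b,\alpha)
= -\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_i y_j (x_i\cdot x_j)
+ \sum_{i=1}^{N}\alpha_i
```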

Step 3: from the KKT conditions we obtain:
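The missing KKT conditions, in standard form for the hard-margin problem:

```latex
\alpha_i^{*} \ge 0,
\qquad
y_i(w^{*}\cdot x_i + b^{*}) - 1 \ge 0,
\qquad
\alpha_i^{*}\,\big(y_i(w^{*}\cdot x_i + b^{*}) - 1\big) = 0,
\quad i = 1,\dots,N
```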

Finally, the solution of the primal problem can be recovered from the dual solution:
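The missing solution formulas, in standard form:

```latex
w^{*} = \sum_{i=1}^{N}\alpha_i^{*} y_i x_i,
\qquad
b^{*} = y_j - \sum_{i=1}^{N}\alpha_i^{*} y_i (x_i\cdot x_j)
\quad \text{for any } j \text{ with } \alpha_j^{*} > 0
```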

That completes the derivation of the dual problem for the linearly separable SVM. For the detailed steps, see the relevant textbooks; the writer will not expand further (mainly because typesetting the formulas takes too long).

A Simple Implementation of the Linearly Separable SVM

Here we solve the primal problem of the linearly separable SVM with the same optimization idea used for the perceptron model earlier.

First prepare some example training data:

data_dict = {-1: np.array([[1, 7], [2, 8], [3, 8]]),
              1: np.array([[5, 1], [6, -1], [7, 3]])}
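As a quick sanity check (not part of the original post), one can confirm that this toy set really is linearly separable by testing a hand-picked hyperplane against the hard-margin constraint y_i(w·x_i + b) ≥ 1; the candidate w = (0.5, −0.5), b = −1 below is an assumption chosen by inspecting the points, not an output of the training routine:

```python
import numpy as np

data_dict = {-1: np.array([[1, 7], [2, 8], [3, 8]]),
              1: np.array([[5, 1], [6, -1], [7, 3]])}

# hypothetical hyperplane chosen by inspection, purely for illustration
w, b = np.array([0.5, -0.5]), -1.0

# every point must satisfy the hard-margin constraint y_i (w . x_i + b) >= 1
margins = [yi * (np.dot(w, xi) + b) for yi in data_dict for xi in data_dict[yi]]
print(all(m >= 1 for m in margins))  # True: the set is linearly separable
```

Any feasible (w, b) like this certifies separability; the training code below searches for the feasible pair with the smallest ‖w‖.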

Import the relevant packages and plot the data:

import numpy as np
import matplotlib.pyplot as plt

colors = {1: 'r', -1: 'g'}
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
[[ax.scatter(x[0], x[1], s=100, color=colors[i]) for x in data_dict[i]] for i in data_dict]
plt.show()

Next, define the model body and training routine of the linearly separable SVM:

def train(data):
    # parameter dict { ||w||: [w, b] }
    opt_dict = {}
    # sign transforms to apply to w
    transforms = [[1, 1], [-1, 1], [-1, -1], [1, -1]]
    # collect all feature values from the dict
    all_data = []
    for yi in data:
        for featureset in data[yi]:
            for feature in featureset:
                all_data.append(feature)
    # max and min feature values
    max_feature_value = max(all_data)
    min_feature_value = min(all_data)
    all_data = None
    # list of step sizes
    step_sizes = [max_feature_value * 0.1,
                  max_feature_value * 0.01,
                  max_feature_value * 0.001]
    # search-range settings for b
    b_range_multiple = 2
    b_multiple = 5
    latest_optimum = max_feature_value * 10
    # optimize with progressively smaller steps
    for step in step_sizes:
        w = np.array([latest_optimum, latest_optimum])
        # convex optimization
        optimized = False
        while not optimized:
            for b in np.arange(-1 * (max_feature_value * b_range_multiple),
                               max_feature_value * b_range_multiple,
                               step * b_multiple):
                for transformation in transforms:
                    w_t = w * transformation
                    found_option = True
                    for i in data:
                        for xi in data[i]:
                            yi = i
                            if not yi * (np.dot(w_t, xi) + b) >= 1:
                                found_option = False
                    if found_option:
                        opt_dict[np.linalg.norm(w_t)] = [w_t, b]
            if w[0] < 0:
                optimized = True
                print('Optimized a step!')
            else:
                w = w - step
        norms = sorted([n for n in opt_dict])
        # ||w|| : [w, b]
        opt_choice = opt_dict[norms[0]]
        w = opt_choice[0]
        b = opt_choice[1]
        latest_optimum = opt_choice[0][0] + step * 2
    for i in data:
        for xi in data[i]:
            yi = i
            print(xi, ':', yi * (np.dot(w, xi) + b))
    return w, b

基于示例數(shù)據(jù)的訓練結果如下:

Then define a prediction function:

# prediction function
def predict(features):
    # sign(x . w + b)
    classification = np.sign(np.dot(np.array(features), w) + b)
    if classification != 0:
        ax.scatter(features[0], features[1], s=200, marker='^', c=colors[classification])
    print(classification)
    return classification

基于示例數(shù)據(jù)的預測結果:

Assembling a Linearly Separable SVM Class

Organize the code above into a class and add result visualization:

class Hard_Margin_SVM:
    def __init__(self, visualization=True):
        self.visualization = visualization
        self.colors = {1: 'r', -1: 'g'}
        if self.visualization:
            self.fig = plt.figure()
            self.ax = self.fig.add_subplot(1, 1, 1)

    # training function
    def train(self, data):
        self.data = data
        # parameter dict { ||w||: [w, b] }
        opt_dict = {}
        # sign transforms to apply to w
        transforms = [[1, 1], [-1, 1], [-1, -1], [1, -1]]
        # collect all feature values from the dict
        all_data = []
        for yi in self.data:
            for featureset in self.data[yi]:
                for feature in featureset:
                    all_data.append(feature)
        # max and min feature values
        self.max_feature_value = max(all_data)
        self.min_feature_value = min(all_data)
        all_data = None
        # list of step sizes (learning rates)
        step_sizes = [self.max_feature_value * 0.1,
                      self.max_feature_value * 0.01,
                      self.max_feature_value * 0.001]
        # search-range settings for b
        b_range_multiple = 2
        b_multiple = 5
        latest_optimum = self.max_feature_value * 10
        # optimize with progressively smaller steps
        for step in step_sizes:
            w = np.array([latest_optimum, latest_optimum])
            # convex optimization
            optimized = False
            while not optimized:
                for b in np.arange(-1 * (self.max_feature_value * b_range_multiple),
                                   self.max_feature_value * b_range_multiple,
                                   step * b_multiple):
                    for transformation in transforms:
                        w_t = w * transformation
                        found_option = True
                        for i in self.data:
                            for xi in self.data[i]:
                                yi = i
                                if not yi * (np.dot(w_t, xi) + b) >= 1:
                                    found_option = False
                                    # print(xi, ':', yi * (np.dot(w_t, xi) + b))
                        if found_option:
                            opt_dict[np.linalg.norm(w_t)] = [w_t, b]
                if w[0] < 0:
                    optimized = True
                    print('Optimized a step!')
                else:
                    w = w - step
            norms = sorted([n for n in opt_dict])
            # ||w|| : [w, b]
            opt_choice = opt_dict[norms[0]]
            self.w = opt_choice[0]
            self.b = opt_choice[1]
            latest_optimum = opt_choice[0][0] + step * 2
        for i in self.data:
            for xi in self.data[i]:
                yi = i
                print(xi, ':', yi * (np.dot(self.w, xi) + self.b))

    # prediction function
    def predict(self, features):
        # sign(x . w + b)
        classification = np.sign(np.dot(np.array(features), self.w) + self.b)
        if classification != 0 and self.visualization:
            self.ax.scatter(features[0], features[1], s=200, marker='^',
                            c=self.colors[classification])
        return classification

    # result-plotting function
    def visualize(self):
        [[self.ax.scatter(x[0], x[1], s=100, color=self.colors[i])
          for x in data_dict[i]] for i in data_dict]

        # hyperplane: v = x.w + b
        # psv = 1, nsv = -1, decision boundary = 0
        def hyperplane(x, w, b, v):
            return (-w[0] * x - b + v) / w[1]

        datarange = (self.min_feature_value * 0.9, self.max_feature_value * 1.1)
        hyp_x_min = datarange[0]
        hyp_x_max = datarange[1]

        # (w.x + b) = 1, positive support vector line
        psv1 = hyperplane(hyp_x_min, self.w, self.b, 1)
        psv2 = hyperplane(hyp_x_max, self.w, self.b, 1)
        self.ax.plot([hyp_x_min, hyp_x_max], [psv1, psv2], 'k')

        # (w.x + b) = -1, negative support vector line
        nsv1 = hyperplane(hyp_x_min, self.w, self.b, -1)
        nsv2 = hyperplane(hyp_x_max, self.w, self.b, -1)
        self.ax.plot([hyp_x_min, hyp_x_max], [nsv1, nsv2], 'k')

        # (w.x + b) = 0, separating hyperplane
        db1 = hyperplane(hyp_x_min, self.w, self.b, 0)
        db2 = hyperplane(hyp_x_max, self.w, self.b, 0)
        self.ax.plot([hyp_x_min, hyp_x_max], [db1, db2], 'y--')

        plt.show()

Testing it looks like this:

data_dict = {-1: np.array([[1, 7], [2, 8], [3, 8]]),
              1: np.array([[5, 1], [6, -1], [7, 3]])}

svm = Hard_Margin_SVM()
svm.train(data=data_dict)

predict_us = [[0, 10], [1, 3], [3, 4], [3, 5], [5, 5],
              [5, 6], [6, -5], [5, 8], [2, 5], [8, -3]]
for p in predict_us:
    svm.predict(p)

svm.visualize()

That is all for this part. The writer will cover the approximately linearly separable case and soft-margin maximization next. The full code and data are available at the writer's GitHub:

https://github.com/luwill/machine-learning-code-writing

Linear Support Vector Machine

In the previous section we examined the support vector machine for the linearly separable case. This section continues with the second case of the SVM, the linear support vector machine. What does "linear" mean here? It means that the bulk of the training instances form a linearly separable set, but a few outliers make the data as a whole linearly non-separable; once these outliers are removed, the remaining data are again linearly separable.

The Primal Problem

We can therefore build on the linearly separable SVM to derive the basic principle of the linear SVM. Suppose the training data are not linearly separable; this means some sample points violate the earlier constraint that the functional margin be at least 1. The linear SVM handles this by introducing a slack variable for each instance, so that the functional margin plus the slack variable is greater than or equal to 1. In contrast to hard-margin maximization in the linearly separable case (hard-margin SVM), the linear SVM solves the soft-margin maximization problem (soft-margin SVM).

The linear SVM can thus be formalized as a convex quadratic programming problem:
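The equation image is missing; in standard notation, the soft-margin primal being described is:

```latex
\min_{w,b,\xi}\quad \frac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{N}\xi_i
\qquad \text{s.t.}\quad y_i(w\cdot x_i + b) \ge 1 - \xi_i,\quad \xi_i \ge 0,\quad i = 1,\dots,N
```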

Here C > 0 is the penalty parameter, expressing how heavily misclassification is punished. Minimizing this objective carries two meanings: make the margin as large as possible while keeping the number of misclassified points as small as possible, with C as the coefficient that balances the two. For solving convex quadratic programming (QP) problems, consult operations research or convex optimization courses and textbooks; we will not dwell on it here.

Now consider the dual problem of the linear SVM. First define the Lagrangian as follows:
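The missing Lagrangian, in the standard form matching the soft-margin primal (with multipliers αᵢ for the margin constraints and μᵢ for ξᵢ ≥ 0):

```latex
L(w,b,\xi,\alpha,\mu) = \frac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{N}\xi_i
- \sum_{i=1}^{N}\alpha_i\big(y_i(w\cdot x_i + b) - 1 + \xi_i\big)
- \sum_{i=1}^{N}\mu_i\xi_i,
\qquad \alpha_i \ge 0,\ \mu_i \ge 0
```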

From the derivation in the previous section, the dual problem is the max-min problem of the Lagrangian. Take the partial derivatives of this Lagrangian with respect to w, b, and ξ:
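The missing equations: setting the three partial derivatives to zero gives the standard stationarity conditions

```latex
\nabla_w L = w - \sum_{i=1}^{N}\alpha_i y_i x_i = 0,
\qquad
\frac{\partial L}{\partial b} = -\sum_{i=1}^{N}\alpha_i y_i = 0,
\qquad
\frac{\partial L}{\partial \xi_i} = C - \alpha_i - \mu_i = 0
```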

From these three equations we obtain:

Substituting the three expressions back into the Lagrangian:

This yields the dual problem of the linear SVM:
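The missing dual problem, in standard form (the condition C − αᵢ − μᵢ = 0 with μᵢ ≥ 0 collapses into the box constraint on αᵢ):

```latex
\max_{\alpha}\quad -\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}
\alpha_i\alpha_j y_i y_j (x_i\cdot x_j) + \sum_{i=1}^{N}\alpha_i
\qquad \text{s.t.}\quad \sum_{i=1}^{N}\alpha_i y_i = 0,\quad 0 \le \alpha_i \le C
```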

By the KKT conditions:

we can compute:
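The missing solution formulas, in standard form:

```latex
w^{*} = \sum_{i=1}^{N}\alpha_i^{*} y_i x_i,
\qquad
b^{*} = y_j - \sum_{i=1}^{N}\alpha_i^{*} y_i (x_i\cdot x_j)
\quad \text{for any } j \text{ with } 0 < \alpha_j^{*} < C
```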

That completes the derivation of the dual problem for the linear SVM, that is, for soft-margin maximization.

cvxopt

? ? 本節(jié)將使用Python的凸優(yōu)化求解的第三方庫cvxopt實現(xiàn)線性支持向量機。先對該庫進行了一個簡單介紹。經(jīng)典的二次規(guī)劃問題可表示為如下形式:

Suppose we want to solve the following quadratic program:
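The problem statement did not survive extraction; reconstructed from the numerical values passed to the solver in the code below, it is:

```latex
\min_{x_1,x_2}\quad \frac{1}{2}x_1^{2} + 3x_1 + 4x_2
\qquad \text{s.t.}\quad
x_1 \ge 0,\; x_2 \ge 0,\;
x_1 + 3x_2 \ge 15,\;
2x_1 + 5x_2 \le 100,\;
3x_1 + 4x_2 \le 80
```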

Write the objective function and constraints in matrix form:
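The matrix-form image is missing; reconstructed from the P, q, G, h values in the code (note that cvxopt's `matrix()` takes columns, so each nested list in the code is one column of G):

```latex
P = \begin{bmatrix}1 & 0\\ 0 & 0\end{bmatrix},\quad
q = \begin{bmatrix}3\\ 4\end{bmatrix},\quad
G = \begin{bmatrix}-1 & 0\\ 0 & -1\\ -1 & -3\\ 2 & 5\\ 3 & 4\end{bmatrix},\quad
h = \begin{bmatrix}0\\ 0\\ -15\\ 100\\ 80\end{bmatrix}
```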

Solving this problem with the cvxopt package:

import numpy
from cvxopt import matrix
from cvxopt import solvers

# define the QP parameters (each nested list is one column)
P = matrix([[1.0, 0.0], [0.0, 0.0]])
q = matrix([3.0, 4.0])
G = matrix([[-1.0, 0.0, -1.0, 2.0, 3.0], [0.0, -1.0, -3.0, 5.0, 4.0]])
h = matrix([0.0, 0.0, -15.0, 100.0, 80.0])

# build and solve
sol = solvers.qp(P, q, G, h)

# retrieve the optimum
print(sol['x'], sol['primal objective'])

Implementing the Linear SVM with cvxopt

Import the relevant packages:

import numpy as np
from numpy import linalg
import cvxopt
import cvxopt.solvers
import pylab as pl

Define a linear kernel function:

def linear_kernel(x1, x2):
    return np.dot(x1, x2)

生成示例數(shù)據(jù):

def gen_non_lin_separable_data():
    mean1 = [-1, 2]
    mean2 = [1, -1]
    mean3 = [4, -4]
    mean4 = [-4, 4]
    cov = [[1.0, 0.8], [0.8, 1.0]]
    X1 = np.random.multivariate_normal(mean1, cov, 50)
    X1 = np.vstack((X1, np.random.multivariate_normal(mean3, cov, 50)))
    y1 = np.ones(len(X1))
    X2 = np.random.multivariate_normal(mean2, cov, 50)
    X2 = np.vstack((X2, np.random.multivariate_normal(mean4, cov, 50)))
    y2 = np.ones(len(X2)) * -1
    return X1, y1, X2, y2

X1, y1, X2, y2 = gen_non_lin_separable_data()

基于示例數(shù)據(jù)生成訓練集和測試集:

def split_train(X1, y1, X2, y2):
    X1_train = X1[:90]
    y1_train = y1[:90]
    X2_train = X2[:90]
    y2_train = y2[:90]
    X_train = np.vstack((X1_train, X2_train))
    y_train = np.hstack((y1_train, y2_train))
    return X_train, y_train

def split_test(X1, y1, X2, y2):
    X1_test = X1[90:]
    y1_test = y1[90:]
    X2_test = X2[90:]
    y2_test = y2[90:]
    X_test = np.vstack((X1_test, X2_test))
    y_test = np.hstack((y1_test, y2_test))
    return X_test, y_test

X_train, y_train = split_train(X1, y1, X2, y2)
X_test, y_test = split_test(X1, y1, X2, y2)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)

Define the linear SVM training procedure on top of cvxopt:

def fit(X, y, C):
    n_samples, n_features = X.shape
    # Gram matrix
    K = np.zeros((n_samples, n_samples))
    for i in range(n_samples):
        for j in range(n_samples):
            K[i, j] = linear_kernel(X[i], X[j])
    P = cvxopt.matrix(np.outer(y, y) * K)
    q = cvxopt.matrix(np.ones(n_samples) * -1)
    A = cvxopt.matrix(y, (1, n_samples))
    b = cvxopt.matrix(0.0)
    if C is None:
        G = cvxopt.matrix(np.diag(np.ones(n_samples) * -1))
        h = cvxopt.matrix(np.zeros(n_samples))
    else:
        tmp1 = np.diag(np.ones(n_samples) * -1)
        tmp2 = np.identity(n_samples)
        G = cvxopt.matrix(np.vstack((tmp1, tmp2)))
        tmp1 = np.zeros(n_samples)
        tmp2 = np.ones(n_samples) * C
        h = cvxopt.matrix(np.hstack((tmp1, tmp2)))
    # solve QP problem
    solution = cvxopt.solvers.qp(P, q, G, h, A, b)
    # Lagrange multipliers
    a = np.ravel(solution['x'])
    # support vectors have non-zero Lagrange multipliers
    sv = a > 1e-5
    ind = np.arange(len(a))[sv]
    a = a[sv]
    sv_x = X[sv]
    sv_y = y[sv]
    print("%d support vectors out of %d points" % (len(a), n_samples))
    # intercept
    b = 0
    for n in range(len(a)):
        b += sv_y[n]
        b -= np.sum(a * sv_y * K[ind[n], sv])
    b /= len(a)
    # weight vector (linear kernel)
    w = np.zeros(n_features)
    for n in range(len(a)):
        w += a[n] * sv_y[n] * sv_x[n]
    return w, b

Wrapping the Soft-Margin SVM into a Class

import numpy as np
from numpy import linalg
import cvxopt
import cvxopt.solvers
import pylab as pl


def linear_kernel(x1, x2):
    return np.dot(x1, x2)


class soft_margin_svm(object):

    def __init__(self, kernel=linear_kernel, C=None):
        self.kernel = kernel
        self.C = C
        if self.C is not None:
            self.C = float(self.C)

    def fit(self, X, y):
        n_samples, n_features = X.shape
        # Gram matrix
        K = np.zeros((n_samples, n_samples))
        for i in range(n_samples):
            for j in range(n_samples):
                K[i, j] = self.kernel(X[i], X[j])
        P = cvxopt.matrix(np.outer(y, y) * K)
        q = cvxopt.matrix(np.ones(n_samples) * -1)
        A = cvxopt.matrix(y, (1, n_samples))
        b = cvxopt.matrix(0.0)
        if self.C is None:
            G = cvxopt.matrix(np.diag(np.ones(n_samples) * -1))
            h = cvxopt.matrix(np.zeros(n_samples))
        else:
            tmp1 = np.diag(np.ones(n_samples) * -1)
            tmp2 = np.identity(n_samples)
            G = cvxopt.matrix(np.vstack((tmp1, tmp2)))
            tmp1 = np.zeros(n_samples)
            tmp2 = np.ones(n_samples) * self.C
            h = cvxopt.matrix(np.hstack((tmp1, tmp2)))
        # solve QP problem
        solution = cvxopt.solvers.qp(P, q, G, h, A, b)
        # Lagrange multipliers
        a = np.ravel(solution['x'])
        # support vectors have non-zero Lagrange multipliers
        sv = a > 1e-5
        ind = np.arange(len(a))[sv]
        self.a = a[sv]
        self.sv = X[sv]
        self.sv_y = y[sv]
        print("%d support vectors out of %d points" % (len(self.a), n_samples))
        # intercept
        self.b = 0
        for n in range(len(self.a)):
            self.b += self.sv_y[n]
            self.b -= np.sum(self.a * self.sv_y * K[ind[n], sv])
        self.b /= len(self.a)
        # weight vector
        if self.kernel == linear_kernel:
            self.w = np.zeros(n_features)
            for n in range(len(self.a)):
                self.w += self.a[n] * self.sv_y[n] * self.sv[n]
        else:
            self.w = None

    def project(self, X):
        if self.w is not None:
            return np.dot(X, self.w) + self.b
        else:
            y_predict = np.zeros(len(X))
            for i in range(len(X)):
                s = 0
                for a, sv_y, sv in zip(self.a, self.sv_y, self.sv):
                    s += a * sv_y * self.kernel(X[i], sv)
                y_predict[i] = s
            return y_predict + self.b

    def predict(self, X):
        return np.sign(self.project(X))


if __name__ == "__main__":

    def gen_non_lin_separable_data():
        mean1 = [-1, 2]
        mean2 = [1, -1]
        mean3 = [4, -4]
        mean4 = [-4, 4]
        cov = [[1.0, 0.8], [0.8, 1.0]]
        X1 = np.random.multivariate_normal(mean1, cov, 50)
        X1 = np.vstack((X1, np.random.multivariate_normal(mean3, cov, 50)))
        y1 = np.ones(len(X1))
        X2 = np.random.multivariate_normal(mean2, cov, 50)
        X2 = np.vstack((X2, np.random.multivariate_normal(mean4, cov, 50)))
        y2 = np.ones(len(X2)) * -1
        return X1, y1, X2, y2

    def gen_lin_separable_overlap_data():
        # generate training data in the 2-d case
        mean1 = np.array([0, 2])
        mean2 = np.array([2, 0])
        cov = np.array([[1.5, 1.0], [1.0, 1.5]])
        X1 = np.random.multivariate_normal(mean1, cov, 100)
        y1 = np.ones(len(X1))
        X2 = np.random.multivariate_normal(mean2, cov, 100)
        y2 = np.ones(len(X2)) * -1
        return X1, y1, X2, y2

    def split_train(X1, y1, X2, y2):
        X1_train = X1[:90]
        y1_train = y1[:90]
        X2_train = X2[:90]
        y2_train = y2[:90]
        X_train = np.vstack((X1_train, X2_train))
        y_train = np.hstack((y1_train, y2_train))
        return X_train, y_train

    def split_test(X1, y1, X2, y2):
        X1_test = X1[90:]
        y1_test = y1[90:]
        X2_test = X2[90:]
        y2_test = y2[90:]
        X_test = np.vstack((X1_test, X2_test))
        y_test = np.hstack((y1_test, y2_test))
        return X_test, y_test

    def plot_margin(X1_train, X2_train, clf):
        def f(x, w, b, c=0):
            # given x, return y such that [x, y] is on the line w.x + b = c
            return (-w[0] * x - b + c) / w[1]
        pl.plot(X1_train[:, 0], X1_train[:, 1], "ro")
        pl.plot(X2_train[:, 0], X2_train[:, 1], "bo")
        pl.scatter(clf.sv[:, 0], clf.sv[:, 1], s=100, c="g")
        # w.x + b = 0
        a0 = -4
        a1 = f(a0, clf.w, clf.b)
        b0 = 4
        b1 = f(b0, clf.w, clf.b)
        pl.plot([a0, b0], [a1, b1], "k")
        # w.x + b = 1
        a0 = -4
        a1 = f(a0, clf.w, clf.b, 1)
        b0 = 4
        b1 = f(b0, clf.w, clf.b, 1)
        pl.plot([a0, b0], [a1, b1], "k--")
        # w.x + b = -1
        a0 = -4
        a1 = f(a0, clf.w, clf.b, -1)
        b0 = 4
        b1 = f(b0, clf.w, clf.b, -1)
        pl.plot([a0, b0], [a1, b1], "k--")
        pl.axis("tight")
        pl.show()

    def plot_contour(X1_train, X2_train, clf):
        pl.plot(X1_train[:, 0], X1_train[:, 1], "ro")
        pl.plot(X2_train[:, 0], X2_train[:, 1], "bo")
        pl.scatter(clf.sv[:, 0], clf.sv[:, 1], s=100, c="g")
        X1, X2 = np.meshgrid(np.linspace(-6, 6, 50), np.linspace(-6, 6, 50))
        X = np.array([[x1, x2] for x1, x2 in zip(np.ravel(X1), np.ravel(X2))])
        Z = clf.project(X).reshape(X1.shape)
        pl.contour(X1, X2, Z, [0.0], colors='k', linewidths=1, origin='lower')
        pl.contour(X1, X2, Z + 1, [0.0], colors='grey', linewidths=1, origin='lower')
        pl.contour(X1, X2, Z - 1, [0.0], colors='grey', linewidths=1, origin='lower')
        pl.axis("tight")
        pl.show()

    def test_soft():
        X1, y1, X2, y2 = gen_lin_separable_overlap_data()
        X_train, y_train = split_train(X1, y1, X2, y2)
        X_test, y_test = split_test(X1, y1, X2, y2)
        clf = soft_margin_svm(C=1000.1)
        clf.fit(X_train, y_train)
        y_predict = clf.predict(X_test)
        correct = np.sum(y_predict == y_test)
        print("%d out of %d predictions correct" % (correct, len(y_predict)))
        plot_contour(X_train[y_train == 1], X_train[y_train == -1], clf)

    test_soft()

That is all for this section. The writer will cover the linearly non-separable case in the next post. The full code and data are available at the writer's GitHub:

https://github.com/luwill/machine-learning-code-writing

References:

https://pythonprogramming.net/

https://github.com/SmirkCao/Lihang/tree/master/CH07

Previous posts:

數(shù)學推導+純Python實現(xiàn)機器學習算法6:感知機

數(shù)學推導+純Python實現(xiàn)機器學習算法5:決策樹之CART算法

數(shù)學推導+純Python實現(xiàn)機器學習算法4:決策樹之ID3算法

數(shù)學推導+純Python實現(xiàn)機器學習算法3:k近鄰

數(shù)學推導+純Python實現(xiàn)機器學習算法2:邏輯回歸

數(shù)學推導+純Python實現(xiàn)機器學習算法1:線性回歸

