Simplified Linear Regression in Python
In the area of Machine Learning, one of the first algorithms someone comes across is Linear Regression. In general, Linear Regression lies in the category of supervised learning algorithms, where we consider a number of X observations, accompanied by the same number of corresponding Y target values, and try to model the relationship between input and output features.
Related problems can be categorised as Regression or Classification. In this post I will cover, in a few simple steps, how we can approach and implement Linear Regression in Python, where we will try to predict samples with continuous output.
Basic Function
Linear regression is called linear because the simplest model involves a linear combination of the input variables, which can be described by a polynomial function.
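For a single input variable, this simplest model is just a straight line. A sketch of the standard formulation, using the symbols w (weight/slope) and b (bias/intercept) that appear throughout this post:

```latex
y(x) = w \cdot x + b
```

Here w and b are the parameters that the training process described below will learn.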
That said, Linear Regression comes with a strong assumption about the dependent variables, which pushes us towards more complex models, such as Neural Networks, for other kinds of problems.
Implementation
In this example, a simple model learns to draw a straight line that fits the distributed data. Learning the data is a repetitive process where, in every learning cycle, the model is assessed with respect to the parameter optimisation used for training and the minimisation of the residual error of each prediction.
Dataset
Construct a toy dataset with the numpy library, and plot a random line.
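A minimal sketch of how such a toy dataset might be built (the slope, intercept, noise level, and seed here are assumptions, not the article's exact values):

```python
import numpy as np

# Toy dataset: points scattered around a straight line
rng = np.random.default_rng(42)
x = np.linspace(0, 50, 50)
y = 2 * x + 5 + rng.normal(0, 10, size=x.shape)  # ground-truth line plus Gaussian noise

# A random initial line to plot against the data
w, b = rng.random(), rng.random()
y_line = w * x + b
```

Plotting `x` against `y` (scatter) and `x` against `y_line` gives the kind of figure described below.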
Create data + Plot a line

In order to fit the line to the data, we need to measure the residual error between the line and the dataset: in simpler terms, the distance from the purple line to the actual data.
Loss Function
To quantify the distance, we need a performance metric, in other words a Loss/Cost function. This metric will also measure how well or poorly our model is learning the data. As our problem is linear regression, meaning that the values we are trying to predict are continuous, we are going to use Mean Squared Error (MSE). Of course, there are other loss functions that can be used in a linear regression problem, such as Mean Absolute Error (MAE) or Huber Loss, but for a toy example we can keep it simple.
Mean Squared Error (MSE)

MSE: 685.9313

Calculating the loss function is the first step in keeping track of the model's performance. Now our goal is to minimise it, somehow.
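As a sketch, MSE can be computed in a couple of lines (the function name is my own, not the article's exact code):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: the average of the squared residuals."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)
```

For example, `mse([1, 2, 3], [1, 2, 5])` averages the squared residuals (0, 0, 4) to 4/3.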
Looking closer at the Loss function, we can see that it depends on w and b. In order to observe their impact on the training process, we will plot the MSE while keeping one of the two values constant, interchangeably for w and b.
w performance (keeping b constant)

W and b Loss: initial values vs best

The above figures depict the change in loss between the initial values and the minimum (best) values of b and w.
The minima were found by simply calculating the loss over a hardcoded range of values. However, we need to find a smarter way to navigate from the initial position to the best position (the lowest minimum), optimising b and w simultaneously.
Gradient Descent
We can tackle this problem using the Gradient Descent algorithm. Gradient descent computes the gradient with respect to each of the coefficients b and w, which is actually the slope at the current position. Given the slope, we know which direction to follow in order to reduce (minimise) the cost function.
Partial derivatives

At each step the weight vector is moved in the direction of the greatest rate of decrease of the error function [1]. In other words, we update the previous values of w and b with new ones, following a defined strategy.
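For the MSE loss L = (1/N) Σᵢ (yᵢ − ŷᵢ)² with ŷᵢ = w·xᵢ + b, the partial derivatives work out to (a reconstruction consistent with the compute_derivatives code later in the post):

```latex
\frac{\partial L}{\partial w} = -\frac{2}{N}\sum_{i=1}^{N} x_i \,(y_i - \hat{y}_i)
\qquad
\frac{\partial L}{\partial b} = -\frac{2}{N}\sum_{i=1}^{N} (y_i - \hat{y}_i)
```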
Update

Here, a hyperparameter a, the learning rate, is introduced: a factor that defines how big or small the update towards minimising the error will be [2]. A very small learning rate might need a huge number of epochs to converge, while a very large one might keep overshooting and never reach the best possible minimum at all.
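The update strategy itself, reconstructed from the description (notation as in the rest of the post):

```latex
w_{\text{new}} = w - a \,\frac{\partial L}{\partial w}
\qquad
b_{\text{new}} = b - a \,\frac{\partial L}{\partial b}
```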
After updating with the new values, we calculate the MSE for the new prediction. These steps are part of an iterative process that continues until the loss function stops decreasing, hopefully finding a local minimum as it converges. Each learning cycle is called an epoch and the process is referred to as training.
# Make a new prediction
def predict(x):
    return w * x + b

Intuitive plot for Learning Rate
Gradient Descent is usually visualised as a landscape of plateaus and valleys in which we always seek the lowest minimum.
http://www.bdhammel.com/learning-rates/

Now that we have drilled down into the problem, we can develop the algorithm that optimises over the partial derivatives of w and b.
Compute partial derivatives

The optimisation/training process ends when the MSE eventually stops decreasing.
Learning Process:
Initialise w, b with random values
For a range of epochs:
- Predict a new line
- Compute partial derivatives (slope)
- Update w_new, b_new
Evaluate with Loss Function (MSE)
Initialise with random values
import numpy as np

# Random initialisation
w = np.random.random()
b = np.random.random()

# Compute derivatives (uses predict(x) defined above)
def compute_derivatives(x, y):
    dw = 0
    db = 0
    N = len(x)
    for i in range(N):
        x_i = x[i]
        y_i = y[i]
        y_hat = predict(x_i)
        dw += -(2/N) * x_i * (y_i - y_hat)
        db += -(2/N) * (y_i - y_hat)
    return dw, db

# Update with new values
def update(x, y, a=0.0002):
    dw, db = compute_derivatives(x, y)
    # Update previous w, b
    new_w = w - (a*dw)
    new_b = b - (a*db)
    return new_w, new_b
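Tying the pieces together, a self-contained sketch of the whole training loop might look like this (the data, epoch count, and seed are assumptions; the vectorised gradients are equivalent to the element-wise loop above):

```python
import numpy as np

# Self-contained sketch of the full training process
rng = np.random.default_rng(0)
x = np.linspace(0, 50, 50)
y = 2 * x + 5 + rng.normal(0, 2, size=x.shape)  # toy data around a known line

w, b = rng.random(), rng.random()  # random initialisation
a = 0.0002                         # learning rate (hyperparameter)

for epoch in range(2000):
    y_hat = w * x + b                             # predict a new line
    dw = -(2 / len(x)) * np.sum(x * (y - y_hat))  # partial derivative w.r.t. w
    db = -(2 / len(x)) * np.sum(y - y_hat)        # partial derivative w.r.t. b
    w, b = w - a * dw, b - a * db                 # update w, b
    loss = np.mean((y - y_hat) ** 2)              # evaluate with MSE
```

With this learning rate the loss shrinks steadily and the fitted slope ends up close to the true value of 2.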
Now that we have formed the training algorithm, let’s put them all in a Linear_Regression class to fully automate the process.
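One possible shape for such a class (this interface is my own sketch, not the article's exact code):

```python
import numpy as np

class Linear_Regression:
    """Toy linear regression trained with batch gradient descent."""

    def __init__(self, a=0.0002):
        self.a = a                   # learning rate
        self.w = np.random.random()  # random initialisation
        self.b = np.random.random()

    def predict(self, x):
        return self.w * x + self.b

    def mse(self, x, y):
        return np.mean((y - self.predict(x)) ** 2)

    def fit(self, x, y, epochs=1000):
        N = len(x)
        for _ in range(epochs):
            y_hat = self.predict(x)
            dw = -(2 / N) * np.sum(x * (y - y_hat))
            db = -(2 / N) * np.sum(y - y_hat)
            self.w -= self.a * dw
            self.b -= self.a * db
        return self
```

Usage then collapses to something like `model = Linear_Regression().fit(x, y)` followed by `model.predict(x_new)`.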
And that’s it. The full implementation of this article can be found in my github repository. In later posts the problem will be approached with Tensorflow’s API, along with a Classification implementation. Feel free to comment on any oversights.
Many thanks to Thanos Tagaris[3] for the amazing repository and work.
[1] Christopher Bishop, Pattern Recognition and Machine Learning, Springer 2007
[2] https://machinelearningmastery.com/linear-regression-for-machine-learning/
[3] https://github.com/djib2011
Translated from: https://medium.com/@sniafas/simplified-linear-regression-in-python-3a3696a92d09