
Extreme Learning Machines and Support Vector Machines: Extreme Learning Machines, Part I

Published: 2023/12/15

Extreme Learning Machines and Support Vector Machines

Around 2005, a novel machine learning approach was introduced by Guang-Bin Huang and a team of researchers at Nanyang Technological University, Singapore.

This newly proposed learning algorithm tends to reach the smallest training error, obtain the smallest norm of weights, achieve the best generalization performance, and run extremely fast. To differentiate it from other popular SLFN (single-hidden-layer feedforward network) learning algorithms, it is called the Extreme Learning Machine (ELM).

This method mainly addresses the issue of neural network training being far slower than required, the main reason for which is that all the parameters of the network are tuned iteratively by such learning algorithms. These slow gradient-based learning algorithms are extensively used to train neural networks.
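To make the contrast concrete, here is a minimal sketch of the ELM idea on a toy regression problem (the full algorithm is covered in Part II). All names and sizes here are illustrative assumptions: the hidden-layer weights are drawn at random and never tuned, and only the output weights are solved for in closed form, so there is no iterative training at all.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.standard_normal((200, 4))            # 200 samples, 4 features
y = (X[:, 0] + X[:, 1] ** 2).reshape(-1, 1)  # toy regression target

n_hidden = 50
W = rng.standard_normal((4, n_hidden))       # random input weights (fixed, never tuned)
b = rng.standard_normal(n_hidden)            # random biases (fixed, never tuned)

H = np.tanh(X @ W + b)                       # hidden-layer output matrix
beta = np.linalg.pinv(H) @ y                 # output weights via least squares (pseudoinverse)

y_hat = H @ beta                             # predictions, obtained without any gradient steps
```

The single pseudoinverse solve replaces the entire iterative weight-tuning loop, which is where ELM's speed comes from.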

Before going into how ELM works and why it performs so well, let's first see how gradient-based neural networks are trained.

Demonstration of Gradient-based Neural Networks

Briefly, these are the steps followed in a single-layered feedforward neural network:

Step 1: Evaluate Wx + B

Step 2: Apply the activation function g(Wx + B) and compute the output

Step 3: Calculate the loss

Step 4: Compute the gradients (using the delta rule)

Step 5: Repeat
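The steps above can be sketched as a toy training loop (a hypothetical minimal setup with made-up sizes, not any specific library's API):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))     # 8 samples, 4 features
y = rng.standard_normal((8, 1))     # toy targets

W = rng.standard_normal((4, 1)) * 0.1   # weights
B = np.zeros((1,))                      # bias
lr = 0.1
losses = []

for step in range(100):
    z = X @ W + B                            # Step 1: evaluate Wx + B
    out = np.tanh(z)                         # Step 2: apply activation g
    loss = np.mean((out - y) ** 2)           # Step 3: calculate loss (MSE)
    losses.append(loss)
    # Step 4: compute gradients (delta rule for MSE + tanh)
    delta = 2 * (out - y) * (1 - out ** 2) / len(X)
    W -= lr * (X.T @ delta)
    B -= lr * delta.sum(axis=0)
    # Step 5: repeat
```

Every pass through the loop touches every weight, which is exactly the per-step cost that grows with network size.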

This method of propagating forward and back involves a hefty number of calculations. Also, if the input size is large, or if there are more layers/nodes, training takes up a significant amount of time.

Fig. 1. 3-layered neural network

In the above example, we can see that for a 4-node input we require W1 (20 parameters), W2 (53 parameters), and W3 (21 parameters), i.e. 94 parameters in total. The parameter count grows rapidly as the number of input nodes increases.

Let's take a real-life example: image classification of digits with the MNIST dataset.

MNIST example

This has a 28x28 input size, i.e. 784 input nodes. For its architecture, let's consider two hidden layers with 128 nodes and 64 nodes, followed by classification into 10 classes. The parameters will then be:

  • First layer (784, 128) = 100352 parameters
  • Second layer (128, 64) = 8192 parameters
  • Output layer (64, 10) = 640 parameters

This gives us a total of 109184 parameters, and the repeated adjustment of all these weights by backpropagation increases the training time enormously.
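The parameter counts above (weight matrices only, biases omitted) are easy to verify:

```python
# Weight counts for the 784 -> 128 -> 64 -> 10 architecture (biases omitted)
layers = [784, 128, 64, 10]
params = [n_in * n_out for n_in, n_out in zip(layers, layers[1:])]
print(params)        # [100352, 8192, 640]
print(sum(params))   # 109184
```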

And this is just for a 28x28 image; consider training on larger inputs with tens of thousands of features. The training time simply gets out of hand.

Conclusion:

In almost all practical learning algorithms for feedforward neural networks, the conventional backpropagation method requires all of these weights to be adjusted at every back-propagation step.

For the most part, gradient-descent-based strategies have been employed in the various learning algorithms for feedforward neural networks. However, gradient-descent-based learning strategies are usually very slow due to improper learning steps, or may easily converge to local minima. Moreover, such algorithms require many iterative learning steps in order to obtain good learning performance.

This makes training far slower than required, which has been a major bottleneck for various applications.

Next article in this series: Part II: Algorithm https://medium.com/@prasad.kumkar/extreme-learning-machines-9c8be01f6f77

Translated from: https://medium.com/datadriveninvestor/extreme-learning-machines-82095ee198ce
