DL | InceptionV4/ResNet: An Introduction to InceptionV4/Inception-ResNet (Paper Overview), Architecture Details, and Example Applications
Contents
InceptionV4/Inception-ResNet: Introduction (Paper Overview)
1. Experimental Results
Inception-v4: Architecture Details
Inception-ResNet: Architecture Details
InceptionV4/Inception-ResNet: Example Applications
Related Articles
DL | InceptionV2/V3: An Introduction to InceptionV2 & InceptionV3 (Paper Overview), Architecture Details, and Example Applications
DL | BN-Inception: An Introduction to BN-Inception (Paper Overview), Architecture Details, and Example Applications
DL | InceptionV4/ResNet: An Introduction to InceptionV4/Inception-ResNet (Paper Overview), Architecture Details, and Example Applications
DL | InceptionV4/ResNet: InceptionV4/Inception-ResNet Architecture Details
InceptionV4/Inception-ResNet: Introduction (Paper Overview)
InceptionV4 and Inception-ResNet are two newer variants introduced by Google researchers in 2016 as continued improvements on the Inception architecture.
Abstract
Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08% top-5 error on the test set of the ImageNet classification (CLS) challenge.
Conclusions
We have presented three new network architectures in detail:
- Inception-ResNet-v1: a hybrid Inception version that has a similar computational cost to Inception-v3 from [15].
- Inception-ResNet-v2: a costlier hybrid Inception version with significantly improved recognition performance.
- Inception-v4: a pure Inception variant without residual connections, with roughly the same recognition performance as Inception-ResNet-v2.
We studied how the introduction of residual connections leads to dramatically improved training speed for the Inception architecture. Also, our latest models (with and without residual connections) outperform all our previous networks, just by virtue of the increased model size.
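The paper's trick for stabilizing very wide residual Inception networks is to scale down the residual branch's activations (by a factor roughly between 0.1 and 0.3) before adding them to the shortcut. Below is a minimal PyTorch sketch of that idea; the `ScaledResidual` wrapper and the single-conv branch are illustrative stand-ins, not the paper's exact modules:

```python
import torch
import torch.nn as nn


class ScaledResidual(nn.Module):
    """Computes x + scale * branch(x).

    Scaling the residual activations before the addition is the
    stabilization technique described in the paper for wide
    residual Inception variants.
    """

    def __init__(self, branch: nn.Module, scale: float = 0.1):
        super().__init__()
        self.branch = branch
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.scale * self.branch(x)


# Hypothetical branch: any shape-preserving transform works here.
branch = nn.Conv2d(32, 32, kernel_size=3, padding=1)
block = ScaledResidual(branch, scale=0.1)
x = torch.randn(1, 32, 8, 8)
y = block(x)
print(y.shape)  # torch.Size([1, 32, 8, 8])
```

With an identity branch and `scale=0.5`, the block simply returns `1.5 * x`, which makes the scaling behavior easy to verify.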
1. Experimental Results
1. Single-crop, single-model results, reported on the non-blacklisted subset of the ILSVRC 2012 validation set.
2. 144-crop, single-model results, reported on all 50,000 images of the ILSVRC 2012 validation set.
3. Ensemble results with 144-crop/dense evaluation; the ensemble performs best. For Inception-v4(+Residual), the ensemble consists of one pure Inception-v4 model and three Inception-ResNet-v2 models, evaluated on both the validation set and the test set.
4. Training-speed comparison, where Inception-ResNet-v2 (red curve) performs best:
   (1) Top-5 error evolution of all four models (single model, single crop).
   (2) Top-1 error evolution of all four models (single model, single crop); this paints a similar picture as the top-5 evaluation.
Paper
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi.
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016
https://arxiv.org/abs/1602.07261
Inception-v4: Architecture Details
DL | InceptionV4/ResNet: InceptionV4/Inception-ResNet Architecture Details
Inception-ResNet: Architecture Details
The Inception-ResNet network combines improved Inception modules with residual connections: a residual shortcut is added around each Inception block, letting the network grow both wider and deeper.
DL | InceptionV4/ResNet: InceptionV4/Inception-ResNet Architecture Details
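To illustrate how an Inception module and a residual shortcut fit together, here is a simplified, hypothetical Inception-ResNet-style block in PyTorch. The branch widths, the linear 1x1 projection before the addition, and the 0.17 scale are illustrative choices; the paper's actual blocks use more branches and specific filter counts:

```python
import torch
import torch.nn as nn


class InceptionResNetBlock(nn.Module):
    """Simplified Inception-ResNet-style block (illustrative sketch).

    Parallel conv branches are concatenated, projected back to the
    input width with a 1x1 conv (kept linear, i.e. no activation),
    scaled down, added to the shortcut, then passed through ReLU.
    """

    def __init__(self, channels: int, scale: float = 0.17):
        super().__init__()
        b = channels // 2
        # Branch 1: a single 1x1 conv.
        self.branch1 = nn.Sequential(
            nn.Conv2d(channels, b, kernel_size=1), nn.ReLU())
        # Branch 2: 1x1 conv followed by a shape-preserving 3x3 conv.
        self.branch2 = nn.Sequential(
            nn.Conv2d(channels, b, kernel_size=1), nn.ReLU(),
            nn.Conv2d(b, b, kernel_size=3, padding=1), nn.ReLU())
        # Linear 1x1 projection back to the input channel count.
        self.project = nn.Conv2d(2 * b, channels, kernel_size=1)
        self.scale = scale
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mixed = torch.cat([self.branch1(x), self.branch2(x)], dim=1)
        return self.relu(x + self.scale * self.project(mixed))


x = torch.randn(2, 64, 16, 16)
block = InceptionResNetBlock(64)
out = block(x)
print(out.shape)  # torch.Size([2, 64, 16, 16])
```

Because every branch preserves spatial size and the projection restores the channel count, the block can be stacked repeatedly, which is what makes the "wide and deep" combination practical.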
InceptionV4/Inception-ResNet: Example Applications
To be updated…