Simple and effective! In CV/NLP/DL, what algorithms improve performance with just one line (or a few lines) of code?
圈圈
1. relu: implements nonlinear activation in the simplest possible way, and also mitigates vanishing gradients
x = max(x, 0)

2. normalization: improves the stability of network training
x = (x - x.mean()) / x.std()

3. gradient clipping: goes straight for the target, preventing exploding gradients haha
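As a runnable aside, here is a sketch of value clipping in NumPy (the threshold is an arbitrary illustrative choice). Note that np.clip caps both signs, while a one-sided assignment form caps only the positive entries:

```python
import numpy as np

THRESHOLD = 1.0  # hypothetical maximum gradient magnitude, for illustration

def clip_grad_value(grad, threshold=THRESHOLD):
    # Cap every gradient entry into [-threshold, threshold];
    # np.clip handles both signs, unlike a one-sided assignment.
    return np.clip(grad, -threshold, threshold)

grad = np.array([-3.0, -0.5, 0.2, 5.0])
clipped = clip_grad_value(grad)
```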
grad[grad > THRESHOLD] = THRESHOLD  # THRESHOLD is the maximum allowed gradient value

4. dropout: randomly drops units, suppressing overfitting and improving model robustness
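As a hedged sketch, here is the "inverted dropout" variant most frameworks actually use (the function name is mine): survivors are rescaled by 1/keep at training time, so nothing needs to change at test time:

```python
import numpy as np

def inverted_dropout(x, p_drop, rng, training=True):
    # Train time: zero each unit with probability p_drop, then rescale
    # survivors by 1 / (1 - p_drop) so the expected activation is unchanged.
    # Test time: identity, no rescaling needed.
    if not training:
        return x
    keep = 1.0 - p_drop
    mask = rng.binomial(n=1, p=keep, size=x.shape)
    return x * mask / keep

rng = np.random.default_rng(0)
x = np.ones((4, 4))
y = inverted_dropout(x, p_drop=0.5, rng=rng)
```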
x = torch.nn.functional.dropout(x, p=p, training=training)
# Just kidding, a real dropout implementation involves a few more steps,
# but the dropping step itself really does fit in one line:
x = x * np.random.binomial(n=1, p=p, size=x.shape)  # here p is the probability of keeping a unit; above, it is the probability of dropping one

5. skip connection (residual learning): provides an identity mapping, guaranteeing the model does not degrade as the network gets deeper
H(x) = F(x) + x

6. focal loss: weights each example's loss by the predicted probability, alleviating class imbalance
loss = -np.log(p)            # standard cross-entropy; p is the predicted probability of the true class
loss = (1 - p)**GAMMA * loss # GAMMA is the modulating factor

7. attention mechanism: weights the original features by the similarity between a query and those features, focusing on the information you want
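A self-contained NumPy sketch of scaled dot-product attention (shapes are illustrative; the 1/sqrt(d_k) scaling follows the Transformer formulation):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    # attn[i, j] = how much query i attends to key j
    d_k = q.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d_k))
    return attn @ v, attn

rng = np.random.default_rng(0)
q = rng.normal(size=(2, 8))   # 2 queries, dim 8
k = rng.normal(size=(5, 8))   # 5 keys
v = rng.normal(size=(5, 3))   # 5 values, dim 3
out, attn = scaled_dot_product_attention(q, k, v)
```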
# Using the Q/K/V formulation from the Transformer as an example
attn = torch.softmax(torch.matmul(q, k.transpose(-2, -1)), dim=-1)
v = torch.matmul(attn, v)

8. subword embedding (char or char n-gram): largely solves the OOV (out-of-vocabulary) and word-segmentation problems. This works well for encoding, but is less friendly to decoding
x = [char for char in sentence]  # char-level

Smarter
The two top-voted answers above already cover things well, so I'll just add what I know, steering clear of optimizer, activation-function, and data-augmentation tricks.
Deep Learning: Cyclic LR, Flooding
Image classification: ResNet, GN, Label Smoothing, ShuffleNet
Object Detection: Soft-NMS, Focal Loss, GIOU, OHEM
Instance Segmentation: PointRend
Domain Adaptation: BNM
GAN: Wasserstein GAN
Deep Learning
Standard LR -> Cyclic LR
SNAPSHOT ENSEMBLES: TRAIN 1, GET M FOR FREE
每隔一段時(shí)間重啟學(xué)習(xí)率,這樣在單位時(shí)間內(nèi)能收斂到多個(gè)局部最小值,可以得到很多個(gè)模型做集成。
# CYCLE=8000, LR_INIT=0.1, LR_MIN=0.001
scheduler = lambda x: ((LR_INIT - LR_MIN) / 2) * (np.cos(np.pi * (np.mod(x - 1, CYCLE) / CYCLE)) + 1) + LR_MIN

Without Flooding -> With Flooding
Do We Need Zero Training Loss After Achieving Zero Training Error?
Flooding: when the training loss is above a threshold b, perform ordinary gradient descent; when it drops below b, perform gradient ascent instead. This keeps the training loss hovering around b, letting the model keep doing a "random walk" in the hope of being pushed into a flat region of the loss landscape, and the test loss then turns out to exhibit a double descent!
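A minimal sketch of the flooding objective; around the flood level b the gradient of the flooded loss flips sign, which is what turns descent into ascent below b (b is an illustrative value):

```python
def flooded(loss, b):
    # |loss - b| + b equals loss when loss > b, but is mirrored below b,
    # so its gradient w.r.t. loss flips from +1 to -1 there.
    return abs(loss - b) + b

b = 0.1  # illustrative flood level
eps = 1e-6
slope_above = (flooded(0.5 + eps, b) - flooded(0.5, b)) / eps    # ~ +1
slope_below = (flooded(0.02 + eps, b) - flooded(0.02, b)) / eps  # ~ -1
```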
flood = (loss - b).abs() + b

Image classification
VGGNet -> ResNet
Deep Residual Learning for Image Recognition
Compared with VGGNet, ResNet adds skip connections, which make the network much easier to optimize.
H(x) = F(x) + x

BN -> GN
Group Normalization
With small batch sizes BN degrades badly, while GN is more robust and its performance stays stable.
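A pure-NumPy sketch of the normalization step (no learnable gamma/beta; shapes are illustrative):

```python
import numpy as np

def group_norm(x, G, eps=1e-5):
    # x: (N, C, H, W). Split C into G groups and normalize each
    # (sample, group) slice by its own mean and variance.
    N, C, H, W = x.shape
    g = x.reshape(N, G, -1)
    mean = g.mean(-1, keepdims=True)
    var = g.var(-1, keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    return g.reshape(N, C, H, W)

x = np.random.default_rng(0).normal(loc=3.0, scale=2.0, size=(2, 8, 4, 4))
y = group_norm(x, G=2)
```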
x = x.view(N, G, -1)
mean, var = x.mean(-1, keepdim=True), x.var(-1, keepdim=True)
x = (x - mean) / (var + self.eps).sqrt()
x = x.view(N, C, H, W)

Hard Label -> Label Smoothing
Bag of Tricks for Image Classification with Convolutional Neural Networks
Label smoothing turns hard labels into soft labels, which makes network optimization smoother.
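Runnable on one-hot targets (values here are illustrative):

```python
import numpy as np

def smooth_labels(targets, label_smooth, num_classes):
    # Take label_smooth of the probability mass away from the hot class
    # and spread it uniformly over all num_classes classes.
    return (1 - label_smooth) * targets + label_smooth / num_classes

hard = np.eye(4)[[0, 2]]           # one-hot targets for classes 0 and 2
soft = smooth_labels(hard, 0.1, 4)
```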
targets = (1 - label_smooth) * targets + label_smooth / num_classes

MobileNet -> ShuffleNet
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
Shuffle the channel order of the group convolution's output feature maps, increasing information exchange between feature maps of different groups.
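The same shuffle in NumPy on a toy tensor, to make the interleaving visible:

```python
import numpy as np

def channel_shuffle(x, groups):
    # (N, C, H, W) -> view C as (groups, C // groups), swap the two
    # axes, and flatten back, interleaving channels across groups.
    n, c, h, w = x.shape
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)

x = np.arange(8).reshape(1, 8, 1, 1)  # channels labeled 0..7
y = channel_shuffle(x, groups=2)
```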
channels_per_group = num_channels // groups
x = x.view(batch_size, groups, channels_per_group, height, width)
x = torch.transpose(x, 1, 2).contiguous()
x = x.view(batch_size, -1, height, width)

Object Detection
NMS -> Soft-NMS
Improving Object Detection With One Line of Code
Soft-NMS lowers the classification confidence of boxes whose overlap exceeds the threshold instead of setting it straight to 0, which improves recall.
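A sketch of the linear decay rule (the threshold value is illustrative):

```python
def soft_nms_weight(iou, threshold=0.3):
    # Linear Soft-NMS: overlapping boxes keep a decayed score (scaled
    # by 1 - iou) instead of being suppressed to zero.
    return 1.0 - iou if iou > threshold else 1.0

score = 0.9
decayed = score * soft_nms_weight(0.6)    # heavy overlap -> decayed score
untouched = score * soft_nms_weight(0.1)  # light overlap -> unchanged
```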
# Taking linear confidence decay as an example
if iou > threshold:
    weight = 1 - iou

CE Loss -> Focal Loss
Focal Loss for Dense Object Detection
Focal loss adds a modulating factor to the CE loss that down-weights easy examples, so training focuses more on hard ones.
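A runnable one-example sketch, showing how the factor suppresses easy examples:

```python
import numpy as np

def focal_loss(p, gamma=2.0):
    # p: predicted probability of the true class. Easy examples (p near 1)
    # get (1 - p)**gamma close to 0, so they barely contribute.
    return (1.0 - p) ** gamma * -np.log(p)

hard_example = focal_loss(0.1)   # confidently wrong -> large loss
easy_example = focal_loss(0.99)  # confidently right -> tiny loss
```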
loss = -np.log(p)            # standard cross-entropy; p is the predicted probability of the true class
loss = (1 - p)**GAMMA * loss # GAMMA is the modulating factor

IOU -> GIOU
Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression
GIoU loss avoids the situation in IoU loss where the loss is 0 whenever two bboxes do not overlap at all, and addresses IoU loss's sensitivity to object size.
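A sketch for two axis-aligned boxes; note that for disjoint boxes the GIoU goes negative instead of collapsing to 0:

```python
def giou(box_a, box_b):
    # Boxes given as (x1, y1, x2, y2).
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Smallest enclosing box C.
    area_c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (area_c - union) / area_c

disjoint = giou((0, 0, 1, 1), (2, 0, 3, 1))   # IoU is 0, GIoU is negative
identical = giou((0, 0, 1, 1), (0, 0, 1, 1))  # perfect overlap -> 1
```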
# area_C: area of the smallest enclosing box; add_area: area of the union
end_area = (area_C - add_area) / area_C  # fraction of the enclosing box covered by neither box
giou = iou - end_area

Hard Negative Mining -> OHEM
Training Region-based Object Detectors with Online Hard Example Mining
OHEM addresses class imbalance by selecting the candidate ROIs with the largest losses for the gradient update.
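A sketch of the selection step in NumPy. Here ohem_rate is taken to be the fraction of hardest examples kept, which is one common convention; the exact indexing depends on how the rate is defined:

```python
import numpy as np

def ohem_select(losses, ohem_rate):
    # Keep the indices of the top ohem_rate fraction by loss (the "hard" ones).
    num = len(losses)
    order = np.argsort(losses)  # ascending
    return order[int(num * (1 - ohem_rate)):]

losses = np.array([0.1, 2.0, 0.3, 5.0])
hard_idx = ohem_select(losses, ohem_rate=0.5)  # indices of the two largest losses
```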
#只對(duì)難樣本產(chǎn)生的loss更新 index = torch.argsort(loss.sum(1))[int(num * ohem_rate):] loss = loss[index, :]Instance Segmentation
Mask R-CNN -> PointRend
PointRend: Image Segmentation as Rendering
Each time, select the Top-N most uncertain locations in the coarse-grained mask prediction and re-predict them at fine granularity, gaining a large performance boost at very little extra compute.
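The point-selection idea can be sketched with a toy uncertainty measure (distance of the foreground probability from 0.5); the real sampling_points routine is more involved:

```python
import numpy as np

def most_uncertain_points(prob_mask, k):
    # Uncertainty peaks where the foreground probability is closest to 0.5;
    # return the k flattened positions nearest the decision boundary.
    uncertainty = -np.abs(prob_mask.reshape(-1) - 0.5)
    return np.argsort(uncertainty)[-k:]

probs = np.array([[0.05, 0.55],
                  [0.95, 0.40]])
points = most_uncertain_points(probs, k=2)  # the 0.55 and 0.40 positions
```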
points = sampling_points(out, x.shape[-1] // 16, self.k, self.beta)
coarse = point_sample(out, points, align_corners=False)
fine = point_sample(res2, points, align_corners=False)
feature_representation = torch.cat([coarse, fine], dim=1)

Domain Adaptation
EntMin -> BNM
Towards Discriminability and Diversity: Batch Nuclear-norm Maximization under Label Insufficient Situations
類別預(yù)測的判別性與多樣性同時(shí)指向矩陣的核范數(shù),可以通過最大化矩陣核范數(shù)(BNM)來提升預(yù)測的性能。
L_BNM = -torch.norm(X, 'nuc')

GAN
GAN -> Wasserstein GAN
Wasserstein GAN
WGAN introduces the Wasserstein distance, which both fixes GAN's unstable training and provides a reliable training-progress metric, one that indeed correlates strongly with the quality of generated samples.
Compared with the original GAN, Wasserstein GAN changes only four things:
1. remove the sigmoid from the discriminator's last layer
2. do not take the log in the generator's and discriminator's losses
3. after every update, clip the discriminator's weights to a fixed absolute value
4. use RMSProp or SGD as the optimizer
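The weight-clipping step among the four changes can be sketched as follows; c = 0.01 is the paper's default, while the parameter list here is hypothetical:

```python
import numpy as np

def clip_critic_weights(params, c=0.01):
    # After each optimizer step, force every critic parameter into [-c, c]
    # to (crudely) enforce the Lipschitz constraint.
    return [np.clip(w, -c, c) for w in params]

params = [np.array([0.5, -0.002]), np.array([[0.03, -0.8]])]  # toy critic weights
clipped = clip_critic_weights(params)
```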