PyTorch Framework Learning 17: Batch Normalization

  • I. The Concept of BN
  • II. Internal Covariate Shift (ICS)
  • III. An Application Example of BN
  • IV. The Implementation of BN in PyTorch
    • 1. The _BatchNorm Class
    • 2. nn.BatchNorm1d/2d/3d
      • (1) nn.BatchNorm1d
      • (2) nn.BatchNorm2d
      • (3) nn.BatchNorm3d

These notes focus on a standardization method used throughout deep learning: BN.

BN was first proposed in the paper "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift". Its original goal was to counter the change in the scale/distribution of the data as a network grows deeper during training, but it turned out to bring many other useful benefits as well, and it has been widely used ever since. Building on the idea of BN, LN, IN, and GN were later proposed, each suited to its own scenario.

I. The Concept of BN

Batch Normalization: "batch" refers to a small batch of data, usually a mini-batch; "normalization" means standardizing the data to zero mean and unit variance.

The BN paper is well worth reading in full; here is a distilled list of the benefits of using BN:

  • A larger learning rate can be used, speeding up convergence.
  • No need for carefully designed weight initialization, which is very practical (it saves a lot of effort).
  • Dropout can be removed, or a smaller dropout rate used.
  • L2 regularization can be removed, or a smaller weight decay used.
  • LRN (local response normalization) is no longer needed.
  • And the original motivation: it resolves the change in data scale/distribution caused by increasing network depth.

The BN algorithm is described next.

The inputs are a mini-batch of data and the learnable parameters γ and β. First the mean and variance of the mini-batch are computed, and the data are standardized with them, so the result has zero mean and unit standard deviation. BN then adds one more step, an affine transform: the standardized data are scaled and shifted. This step is learnable, since γ and β are updated during training, so the model can apply the transform whenever it helps; it increases the model's capacity and makes it more flexible, giving it more options.
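In symbols (reproducing the algorithm from the original paper, since the algorithm figure is not included here), for a mini-batch \mathcal{B} = \{x_1, \dots, x_m\}:

\mu_{\mathcal{B}} = \frac{1}{m} \sum_{i=1}^{m} x_i \quad \text{(mini-batch mean)}

\sigma_{\mathcal{B}}^{2} = \frac{1}{m} \sum_{i=1}^{m} \left(x_i - \mu_{\mathcal{B}}\right)^{2} \quad \text{(mini-batch variance)}

\hat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^{2} + \epsilon}} \quad \text{(normalize)}

y_i = \gamma\,\hat{x}_i + \beta \quad \text{(scale and shift, the affine transform)}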

II. Internal Covariate Shift (ICS)

This is the problem the BN paper originally set out to solve: during training, as the number of layers grows, the distribution of the data drifts from layer to layer. The following fully connected network gives a concrete example.

The first layer is the input layer X, and the first fully connected layer has weights W1, so its output H1 is the product of X and W1. Assume the input X has zero mean and unit standard deviation, i.e. it has already been standardized.

If W is also initialized with unit standard deviation, the variance of H1 works out to n, the number of inputs: after a single fully connected layer the spread of the data has grown n-fold in variance. After many layers the spread keeps growing, the gradients in the backward pass become enormous as well, and we get gradient explosion.
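Spelling out the variance calculation that the omitted figure performs (a standard argument, assuming the n inputs X_i and weights W_i are independent, zero-mean, and of unit variance):

\operatorname{Var}(H_1) = \operatorname{Var}\Big(\sum_{i=1}^{n} X_i W_i\Big) = \sum_{i=1}^{n} \operatorname{Var}(X_i)\,\operatorname{Var}(W_i) = n \cdot 1 \cdot 1 = n

so each such layer multiplies the standard deviation by roughly \sqrt{n} (here \sqrt{256} = 16), which matches the growth of the printed std values in the first experiment below.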

Conversely, if W is initialized with a very small variance, below 1/n, the variance of H1 falls below 1; over many layers the spread of the data keeps shrinking, the gradients in the backward pass become tiny, and we get vanishing gradients.

This drift is ICS. What BN does is re-standardize the drifting data back to zero mean and unit standard deviation after each fully connected layer, removing the effect on the subsequent layers and thereby eliminating ICS.

Below we build a 100-layer fully connected network with 256 neurons per layer and watch how the data distribution changes with depth:

import torch
import numpy as np
import torch.nn as nn
import sys, os
from tools.common_tools import set_seed

set_seed(1)  # set the random seed


class MLP(nn.Module):
    def __init__(self, neural_num, layers=100):
        super(MLP, self).__init__()
        self.linears = nn.ModuleList([nn.Linear(neural_num, neural_num, bias=False) for i in range(layers)])
        self.bns = nn.ModuleList([nn.BatchNorm1d(neural_num) for i in range(layers)])
        self.neural_num = neural_num

    def forward(self, x):
        for (i, linear), bn in zip(enumerate(self.linears), self.bns):
            x = linear(x)
            # method 3: uncomment to insert a BN layer after every linear layer
            # x = bn(x)
            x = torch.relu(x)

            if torch.isnan(x.std()):
                print("output is nan in {} layers".format(i))
                break

            print("layers:{}, std:{}".format(i, x.std().item()))

        return x

    def initialize(self):
        for m in self.modules():
            if isinstance(m, nn.Linear):
                # method 1: plain normal initialization
                nn.init.normal_(m.weight.data, std=1)  # normal: mean=0, std=1

                # method 2: kaiming initialization
                # nn.init.kaiming_normal_(m.weight.data)


neural_nums = 256
layer_nums = 100
batch_size = 16

net = MLP(neural_nums, layer_nums)
net.initialize()

inputs = torch.randn((batch_size, neural_nums))  # normal: mean=0, std=1

output = net(inputs)
print(output)

The code covers three scenarios (see the "method" comments):

• The first scenario initializes the weights from a normal distribution with zero mean and unit standard deviation (method 1) and uses no BN layers; the standard deviation of each layer's output is printed below:
layers:0, std:9.352246284484863
layers:1, std:112.47123718261719
layers:2, std:1322.8056640625
layers:3, std:14569.42578125
layers:4, std:154672.765625
layers:5, std:1834038.125
layers:6, std:18807982.0
layers:7, std:209553056.0
layers:8, std:2637502976.0
layers:9, std:32415457280.0
layers:10, std:374825549824.0
layers:11, std:3912853094400.0
layers:12, std:41235926482944.0
layers:13, std:479620541448192.0
layers:14, std:5320927071961088.0
layers:15, std:5.781225696395264e+16
layers:16, std:7.022147146707108e+17
layers:17, std:6.994718592201654e+18
layers:18, std:8.473501588274335e+19
layers:19, std:9.339794309346954e+20
layers:20, std:9.56936220412742e+21
layers:21, std:1.176274650258599e+23
layers:22, std:1.482641634599281e+24
layers:23, std:1.6921343606923352e+25
layers:24, std:1.9741450942615745e+26
layers:25, std:2.1257213262324592e+27
layers:26, std:2.191710730990783e+28
layers:27, std:2.5254503817521246e+29
layers:28, std:3.221308876879874e+30
layers:29, std:3.530952437322462e+31
layers:30, std:4.525353644890983e+32
layers:31, std:4.715011552268428e+33
layers:32, std:5.369590669553154e+34
layers:33, std:6.712318470791119e+35
layers:34, std:7.451114589527308e+36
output is nan in 35 layers
tensor([[3.2626e+36, 0.0000e+00, 7.2932e+37,  ..., 0.0000e+00, 0.0000e+00, 2.5465e+38],
        [3.9237e+36, 0.0000e+00, 7.5033e+37,  ..., 0.0000e+00, 0.0000e+00, 2.1274e+38],
        [0.0000e+00, 0.0000e+00, 4.4932e+37,  ..., 0.0000e+00, 0.0000e+00, 1.7016e+38],
        ...,
        [0.0000e+00, 0.0000e+00, 2.4222e+37,  ..., 0.0000e+00, 0.0000e+00, 2.5295e+38],
        [4.7380e+37, 0.0000e+00, 2.1580e+37,  ..., 0.0000e+00, 0.0000e+00, 2.6028e+38],
        [0.0000e+00, 0.0000e+00, 6.0878e+37,  ..., 0.0000e+00, 0.0000e+00, 2.1695e+38]],
       grad_fn=<ReluBackward0>)

Exactly as analyzed above, the data grow larger and larger with depth.

• The second scenario looks for an initialization method that avoids ICS on its own; here kaiming_normal_ initialization is used (method 2):
layers:0, std:0.8266295790672302
layers:1, std:0.8786815404891968
layers:2, std:0.9134421944618225
layers:3, std:0.8892470598220825
layers:4, std:0.8344280123710632
layers:5, std:0.874537467956543
layers:6, std:0.7926970720291138
layers:7, std:0.7806458473205566
layers:8, std:0.8684563636779785
layers:9, std:0.9434137344360352
layers:10, std:0.964215874671936
layers:11, std:0.8896796107292175
layers:12, std:0.8287257552146912
layers:13, std:0.8519770503044128
layers:14, std:0.83543461561203
layers:15, std:0.802306056022644
layers:16, std:0.8613607287406921
layers:17, std:0.7583686709403992
layers:18, std:0.8120225071907043
layers:19, std:0.791111171245575
layers:20, std:0.7164373397827148
layers:21, std:0.778393030166626
layers:22, std:0.8672043085098267
layers:23, std:0.8748127222061157
layers:24, std:0.9020991921424866
layers:25, std:0.8585717082023621
layers:26, std:0.7824354767799377
layers:27, std:0.7968913912773132
layers:28, std:0.8984370231628418
layers:29, std:0.8704466819763184
layers:30, std:0.9860475063323975
layers:31, std:0.9080778360366821
layers:32, std:0.9140638113021851
layers:33, std:1.0099570751190186
layers:34, std:0.9909381866455078
layers:35, std:1.0253210067749023
layers:36, std:0.8490436673164368
layers:37, std:0.703953742980957
layers:38, std:0.7186156511306763
layers:39, std:0.7250635623931885
layers:40, std:0.7030817866325378
layers:41, std:0.6325559616088867
layers:42, std:0.6623691916465759
layers:43, std:0.6960877180099487
layers:44, std:0.7140734195709229
layers:45, std:0.6329052448272705
layers:46, std:0.645889937877655
layers:47, std:0.7354376912117004
layers:48, std:0.6710689067840576
layers:49, std:0.6939154863357544
layers:50, std:0.6889259219169617
layers:51, std:0.6331775188446045
layers:52, std:0.6029314398765564
layers:53, std:0.6145529747009277
layers:54, std:0.6636687517166138
layers:55, std:0.7440096139907837
layers:56, std:0.7972176671028137
layers:57, std:0.7606151103973389
layers:58, std:0.6968684196472168
layers:59, std:0.7306802868843079
layers:60, std:0.6875628232955933
layers:61, std:0.7171440720558167
layers:62, std:0.7646605968475342
layers:63, std:0.7965087294578552
layers:64, std:0.8833741545677185
layers:65, std:0.8592953681945801
layers:66, std:0.8092937469482422
layers:67, std:0.8064812421798706
layers:68, std:0.6792411208152771
layers:69, std:0.6583347320556641
layers:70, std:0.5702279210090637
layers:71, std:0.5084437727928162
layers:72, std:0.4869327247142792
layers:73, std:0.4635041356086731
layers:74, std:0.4796812832355499
layers:75, std:0.4737212061882019
layers:76, std:0.4541455805301666
layers:77, std:0.4971913695335388
layers:78, std:0.49279505014419556
layers:79, std:0.44223514199256897
layers:80, std:0.4802999496459961
layers:81, std:0.5579249858856201
layers:82, std:0.5283756852149963
layers:83, std:0.5451982617378235
layers:84, std:0.6203728318214417
layers:85, std:0.6571894884109497
layers:86, std:0.7036821842193604
layers:87, std:0.7321069836616516
layers:88, std:0.6924358606338501
layers:89, std:0.6652534604072571
layers:90, std:0.6728310585021973
layers:91, std:0.6606624126434326
layers:92, std:0.6094606518745422
layers:93, std:0.6019104719161987
layers:94, std:0.5954217314720154
layers:95, std:0.6624558568000793
layers:96, std:0.6377887725830078
layers:97, std:0.6079288125038147
layers:98, std:0.6579317450523376
layers:99, std:0.6668478846549988
tensor([[0.0000, 1.3437, 0.0000,  ..., 0.0000, 0.6444, 1.1867],
        [0.0000, 0.9757, 0.0000,  ..., 0.0000, 0.4645, 0.8594],
        [0.0000, 1.0023, 0.0000,  ..., 0.0000, 0.5148, 0.9196],
        ...,
        [0.0000, 1.2873, 0.0000,  ..., 0.0000, 0.6454, 1.1411],
        [0.0000, 1.3589, 0.0000,  ..., 0.0000, 0.6749, 1.2438],
        [0.0000, 1.1807, 0.0000,  ..., 0.0000, 0.5668, 1.0600]],
       grad_fn=<ReluBackward0>)

The data indeed no longer explode or vanish as the network deepens, but finding a suitable initialization method often takes considerable time.

• The third scenario adds the BN layers (method 3) and uses no weight initialization at all:
layers:0, std:0.5751240849494934
layers:1, std:0.5803307890892029
layers:2, std:0.5825020670890808
layers:3, std:0.5823132395744324
layers:4, std:0.5860626101493835
layers:5, std:0.579832911491394
layers:6, std:0.5815905332565308
layers:7, std:0.5734466910362244
layers:8, std:0.5853903293609619
layers:9, std:0.5811620950698853
layers:10, std:0.5818504095077515
layers:11, std:0.5775734186172485
layers:12, std:0.5788553357124329
layers:13, std:0.5831498503684998
layers:14, std:0.5726235508918762
layers:15, std:0.5717664957046509
layers:16, std:0.576700747013092
layers:17, std:0.5848639607429504
layers:18, std:0.5718148350715637
layers:19, std:0.5775086879730225
layers:20, std:0.5790560841560364
layers:21, std:0.5815289616584778
layers:22, std:0.5845211744308472
layers:23, std:0.5830678343772888
layers:24, std:0.5817515850067139
layers:25, std:0.5793628096580505
layers:26, std:0.5744576454162598
layers:27, std:0.581753134727478
layers:28, std:0.5858433246612549
layers:29, std:0.5895737409591675
layers:30, std:0.5806193351745605
layers:31, std:0.5742025971412659
layers:32, std:0.5814924240112305
layers:33, std:0.5800969004631042
layers:34, std:0.5751299858093262
layers:35, std:0.5819362998008728
layers:36, std:0.57569420337677
layers:37, std:0.5824175477027893
layers:38, std:0.5741908550262451
layers:39, std:0.5768386721611023
layers:40, std:0.578640341758728
layers:41, std:0.5833579301834106
layers:42, std:0.5873513221740723
layers:43, std:0.5807022452354431
layers:44, std:0.5743744373321533
layers:45, std:0.5791332721710205
layers:46, std:0.5789337158203125
layers:47, std:0.5805914402008057
layers:48, std:0.5796007513999939
layers:49, std:0.5833531022071838
layers:50, std:0.5896912813186646
layers:51, std:0.5851364731788635
layers:52, std:0.5816906094551086
layers:53, std:0.5805508494377136
layers:54, std:0.5876169204711914
layers:55, std:0.576688826084137
layers:56, std:0.5784814357757568
layers:57, std:0.5820549726486206
layers:58, std:0.5837342739105225
layers:59, std:0.5691872835159302
layers:60, std:0.5777156949043274
layers:61, std:0.5763663649559021
layers:62, std:0.5843147039413452
layers:63, std:0.5852570533752441
layers:64, std:0.5836994051933289
layers:65, std:0.5794276595115662
layers:66, std:0.590632438659668
layers:67, std:0.5765355825424194
layers:68, std:0.5794717073440552
layers:69, std:0.5696660876274109
layers:70, std:0.5910594463348389
layers:71, std:0.5822493433952332
layers:72, std:0.5893915295600891
layers:73, std:0.5875967741012573
layers:74, std:0.5845006108283997
layers:75, std:0.573967695236206
layers:76, std:0.5823272466659546
layers:77, std:0.5769740343093872
layers:78, std:0.5787169933319092
layers:79, std:0.5757712721824646
layers:80, std:0.5799717307090759
layers:81, std:0.577584981918335
layers:82, std:0.581005334854126
layers:83, std:0.5819255113601685
layers:84, std:0.577966570854187
layers:85, std:0.5941665172576904
layers:86, std:0.5822250247001648
layers:87, std:0.5828983187675476
layers:88, std:0.5758668184280396
layers:89, std:0.5786070823669434
layers:90, std:0.5724494457244873
layers:91, std:0.5775058269500732
layers:92, std:0.5749661326408386
layers:93, std:0.5795350670814514
layers:94, std:0.5690663456916809
layers:95, std:0.5838885307312012
layers:96, std:0.578350305557251
layers:97, std:0.5750819444656372
layers:98, std:0.5843801498413086
layers:99, std:0.5825926065444946
tensor([[1.0858, 0.0000, 0.0000,  ..., 0.0000, 0.0000, 0.7552],
        [0.0000, 0.0000, 0.3216,  ..., 0.0000, 0.0000, 0.1931],
        [0.0000, 0.5979, 0.1423,  ..., 1.2776, 0.9048, 0.0000],
        ...,
        [0.8705, 0.4248, 0.0000,  ..., 0.0000, 0.8963, 0.3446],
        [0.0000, 0.0000, 0.0000,  ..., 0.5631, 0.0000, 0.4281],
        [1.1301, 0.0000, 0.0000,  ..., 2.2642, 0.3234, 0.0000]],
       grad_fn=<ReluBackward0>)

The result is even more stable than with kaiming_normal_ initialization. This example shows that with BN layers no careful initialization is needed, and the ICS problem is avoided.

III. An Application Example of BN

Using LeNet on a two-class RMB (banknote) classification problem:

import os
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision.transforms as transforms
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data import DataLoader
from matplotlib import pyplot as plt

import sys
hello_pytorch_DIR = os.path.abspath(os.path.dirname(__file__)+os.path.sep+".."+os.path.sep+"..")
sys.path.append(hello_pytorch_DIR)

from model.lenet import LeNet, LeNet_bn
from tools.my_dataset import RMBDataset
from tools.common_tools import set_seed


class LeNet_bn(nn.Module):  # redefined here for illustration; it shadows the imported LeNet_bn
    def __init__(self, classes):
        super(LeNet_bn, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.bn1 = nn.BatchNorm2d(num_features=6)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.bn2 = nn.BatchNorm2d(num_features=16)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.bn3 = nn.BatchNorm1d(num_features=120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, classes)

    def forward(self, x):
        out = self.conv1(x)
        out = self.bn1(out)
        out = F.relu(out)
        out = F.max_pool2d(out, 2)

        out = self.conv2(out)
        out = self.bn2(out)
        out = F.relu(out)
        out = F.max_pool2d(out, 2)

        out = out.view(out.size(0), -1)
        out = self.fc1(out)
        out = self.bn3(out)
        out = F.relu(out)
        out = F.relu(self.fc2(out))
        out = self.fc3(out)
        return out

    def initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.xavier_normal_(m.weight.data)
                if m.bias is not None:
                    m.bias.data.zero_()
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight.data, 0, 1)
                m.bias.data.zero_()


set_seed(1)  # set the random seed
rmb_label = {"1": 0, "100": 1}

# hyperparameters
MAX_EPOCH = 10
BATCH_SIZE = 16
LR = 0.01
log_interval = 10
val_interval = 1

# ============================ step 1/5: data ============================
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
split_dir = os.path.abspath(os.path.join(BASE_DIR, "..", "..", "data", "rmb_split"))
train_dir = os.path.join(split_dir, "train")
valid_dir = os.path.join(split_dir, "valid")

if not os.path.exists(split_dir):
    raise Exception(r"Data {} does not exist; go back to lesson-06\1_split_dataset.py to generate it".format(split_dir))

norm_mean = [0.485, 0.456, 0.406]
norm_std = [0.229, 0.224, 0.225]

train_transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.RandomCrop(32, padding=4),
    transforms.RandomGrayscale(p=0.8),
    transforms.ToTensor(),
    transforms.Normalize(norm_mean, norm_std),
])

valid_transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
    transforms.Normalize(norm_mean, norm_std),
])

# build the dataset instances
train_data = RMBDataset(data_dir=train_dir, transform=train_transform)
valid_data = RMBDataset(data_dir=valid_dir, transform=valid_transform)

# build the DataLoaders
train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
valid_loader = DataLoader(dataset=valid_data, batch_size=BATCH_SIZE)

# ============================ step 2/5: model ============================
# net = LeNet_bn(classes=2)
net = LeNet(classes=2)
# net.initialize_weights()

# ============================ step 3/5: loss function ============================
criterion = nn.CrossEntropyLoss()

# ============================ step 4/5: optimizer ============================
optimizer = optim.SGD(net.parameters(), lr=LR, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)  # learning-rate decay schedule

# ============================ step 5/5: training ============================
train_curve = list()
valid_curve = list()

iter_count = 0

# build the SummaryWriter
writer = SummaryWriter(comment='test_your_comment', filename_suffix="_test_your_filename_suffix")

for epoch in range(MAX_EPOCH):

    loss_mean = 0.
    correct = 0.
    total = 0.

    net.train()
    for i, data in enumerate(train_loader):
        iter_count += 1

        # forward
        inputs, labels = data
        outputs = net(inputs)

        # backward
        optimizer.zero_grad()
        loss = criterion(outputs, labels)
        loss.backward()

        # update weights
        optimizer.step()

        # track classification accuracy
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).squeeze().sum().numpy()

        # print training info
        loss_mean += loss.item()
        train_curve.append(loss.item())
        if (i+1) % log_interval == 0:
            loss_mean = loss_mean / log_interval
            print("Training:Epoch[{:0>3}/{:0>3}] Iteration[{:0>3}/{:0>3}] Loss: {:.4f} Acc:{:.2%}".format(
                epoch, MAX_EPOCH, i+1, len(train_loader), loss_mean, correct / total))
            loss_mean = 0.

        # log the data to the event file
        writer.add_scalars("Loss", {"Train": loss.item()}, iter_count)
        writer.add_scalars("Accuracy", {"Train": correct / total}, iter_count)

    scheduler.step()  # update the learning rate

    # validate the model
    if (epoch+1) % val_interval == 0:

        correct_val = 0.
        total_val = 0.
        loss_val = 0.
        net.eval()
        with torch.no_grad():
            for j, data in enumerate(valid_loader):
                inputs, labels = data
                outputs = net(inputs)
                loss = criterion(outputs, labels)

                _, predicted = torch.max(outputs.data, 1)
                total_val += labels.size(0)
                correct_val += (predicted == labels).squeeze().sum().numpy()

                loss_val += loss.item()

            valid_curve.append(loss.item())
            print("Valid:\t Epoch[{:0>3}/{:0>3}] Iteration[{:0>3}/{:0>3}] Loss: {:.4f} Acc:{:.2%}".format(
                epoch, MAX_EPOCH, j+1, len(valid_loader), loss_val, correct_val / total_val))  # validation accuracy

            # log the data to the event file
            writer.add_scalars("Loss", {"Valid": loss.item()}, iter_count)
            writer.add_scalars("Accuracy", {"Valid": correct_val / total_val}, iter_count)

train_x = range(len(train_curve))
train_y = train_curve

train_iters = len(train_loader)
valid_x = np.arange(1, len(valid_curve)+1) * train_iters * val_interval  # valid records epoch loss; convert the record points to iterations
valid_y = valid_curve

plt.plot(train_x, train_y, label='Train')
plt.plot(valid_x, valid_y, label='Valid')

plt.legend(loc='upper right')
plt.ylabel('loss value')
plt.xlabel('Iteration')
plt.show()

First, the result of plain LeNet with neither BN layers nor weight initialization:

    后期數據出現不理想的情況會使得訓練損失發生較大震蕩。

Next, the result after adding weight initialization:

    后期不再震蕩,但是前期也不是很理想。

    最后看使用加了BN層的LeNet的結果:

This is the most satisfactory loss curve of the three: there is still some oscillation, but its amplitude is small.

IV. The Implementation of BN in PyTorch

1. The _BatchNorm Class

class _BatchNorm(_NormBase):

    def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True,
                 track_running_stats=True):
        super(_BatchNorm, self).__init__(
            num_features, eps, momentum, affine, track_running_stats)

    def forward(self, input):
        self._check_input_dim(input)

        # exponential_average_factor is set to self.momentum
        # (when it is available) only so that it gets updated
        # in ONNX graph when this node is exported to ONNX.
        if self.momentum is None:
            exponential_average_factor = 0.0
        else:
            exponential_average_factor = self.momentum

        if self.training and self.track_running_stats:
            # TODO: if statement only here to tell the jit to skip emitting this when it is None
            if self.num_batches_tracked is not None:
                self.num_batches_tracked = self.num_batches_tracked + 1
                if self.momentum is None:  # use cumulative moving average
                    exponential_average_factor = 1.0 / float(self.num_batches_tracked)
                else:  # use exponential moving average
                    exponential_average_factor = self.momentum

        return F.batch_norm(
            input, self.running_mean, self.running_var, self.weight, self.bias,
            self.training or not self.track_running_stats,
            exponential_average_factor, self.eps)

This is the base class that the concrete BN layers inherit from. From its __init__, the main parameters are:

  • num_features: the number of features of a sample.
  • eps: a correction term added to the denominator, so that standardization never divides by a variance of zero.
  • momentum: the coefficient of the exponential weighted average used to estimate the current mean/var.
  • affine: whether to apply the learnable affine transform.
  • track_running_stats: whether to track running statistics, which separates training from evaluation behavior: in training mode the mean/var are estimated from the incoming batches by an exponential weighted average and change from batch to batch, while in evaluation mode the statistics gathered during training are used as fixed values that do not depend on the batch.

In addition, the forward method shows that the standardization and the affine transform are carried out by the final return statement, which calls PyTorch's functional API (F.batch_norm). The main buffers and parameters passed to it are:

  • running_mean: the running mean.
  • running_var: the running variance.
  • weight: gamma (γ) in the affine transform.
  • bias: beta (β) in the affine transform.

These correspond to the four quantities in the BN formula (reconstructed here, since the formula image did not survive; it matches the PyTorch documentation):

y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \cdot \gamma + \beta

During training, the mean and variance are computed by an exponential weighted average. The update rule (also reconstructed here; it matches the manual verification code below) is

running\_mean = (1 - momentum) \times pre\_running\_mean + momentum \times mean_t

where pre_running_mean is the running mean obtained at the previous batch and mean_t is the mean of the current batch, so the final mean is an exponentially weighted combination of the two; the variance is updated in exactly the same way.
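To make the training/evaluation distinction concrete, here is a minimal sketch (my own addition, not from the original lesson code; the shapes, momentum, and values are arbitrary) showing that the running statistics are updated only in train() mode and are reused unchanged in eval() mode:

import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(num_features=4, momentum=0.1)
x = torch.randn(8, 4) * 3 + 5                  # a batch with mean around 5, std around 3

bn.train()
bn(x)                                          # training-mode forward updates the running stats
print(bn.running_mean)                         # moved 10% (= momentum) of the way toward the batch mean

bn.eval()
before = bn.running_mean.clone()
bn(x)                                          # eval-mode forward uses the stats but does not modify them
print(torch.equal(before, bn.running_mean))    # True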

2. nn.BatchNorm1d/2d/3d

(1) nn.BatchNorm1d

Like other layers, BN layers come in three dimensionalities. Taking the one-dimensional version as an example:

torch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

Its parameters are exactly those of the base class _BatchNorm.

The standardization formula was given above, but BN does not lump the whole batch together into a single mean and variance; it works feature-wise, computing them along the feature dimension. For 1D data the setup is as follows: each column is one data sample, three in total, and each sample has 5 features (think of channels, as in an RGB image); each feature itself is one-dimensional with shape (1), so the input has shape 3×5×1. BN standardizes and affine-transforms the first feature across the three samples (that row of three 1s gets its own mean and variance), and likewise for each of the other four rows. Because the mean and variance are computed separately for every feature, the computation is called feature-wise; a small sketch verifying this follows.
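As a quick sanity check of this feature-wise behavior, here is a small sketch (my own addition, not from the original post; affine is disabled so only the standardization is compared). For input of shape B×C×L, nn.BatchNorm1d in training mode computes one mean and one (biased) variance per feature channel C, i.e. over dimensions (0, 2):

import torch
import torch.nn as nn

x = torch.randn(3, 5, 1)                         # B=3 samples, C=5 features, L=1
mean = x.mean(dim=(0, 2))                        # shape (5,): one mean per feature
var = x.var(dim=(0, 2), unbiased=False)          # biased variance, as BN's forward pass uses

x_hat = (x - mean[None, :, None]) / torch.sqrt(var[None, :, None] + 1e-5)

bn = nn.BatchNorm1d(num_features=5, affine=False)
bn.train()
print(torch.allclose(bn(x), x_hat, atol=1e-5))   # True: matches the manual feature-wise result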

Below is the PyTorch demonstration code:

import torch
import numpy as np
import torch.nn as nn
import sys, os
from tools.common_tools import set_seed

set_seed(1)  # set the random seed

# ======================================== nn.BatchNorm1d
flag = 1
# flag = 0
if flag:
    batch_size = 3
    num_features = 5
    momentum = 0.3
    features_shape = (1)

    # the next three lines manually build the B×C×features_shape (3×5×1) data described above
    feature_map = torch.ones(features_shape)                                              # 1D
    feature_maps = torch.stack([feature_map*(i+1) for i in range(num_features)], dim=0)   # 2D
    feature_maps_bs = torch.stack([feature_maps for i in range(batch_size)], dim=0)       # 3D

    print("input data:\n{} shape is {}".format(feature_maps_bs, feature_maps_bs.shape))

    bn = nn.BatchNorm1d(num_features=num_features, momentum=momentum)

    running_mean, running_var = 0, 1

    for i in range(2):
        outputs = bn(feature_maps_bs)

        print("\niteration:{}, running mean: {} ".format(i, bn.running_mean))
        print("iteration:{}, running var:{} ".format(i, bn.running_var))

        # manual verification of the running statistics below
        mean_t, var_t = 2, 0

        running_mean = (1 - momentum) * running_mean + momentum * mean_t
        running_var = (1 - momentum) * running_var + momentum * var_t

        print("iteration:{}, running mean of the second feature: {} ".format(i, running_mean))
        print("iteration:{}, running var of the second feature: {}".format(i, running_var))

The result:

input data:
tensor([[[1.],
         [2.],
         [3.],
         [4.],
         [5.]],

        [[1.],
         [2.],
         [3.],
         [4.],
         [5.]],

        [[1.],
         [2.],
         [3.],
         [4.],
         [5.]]]) shape is torch.Size([3, 5, 1])

iteration:0, running mean: tensor([0.3000, 0.6000, 0.9000, 1.2000, 1.5000])
iteration:0, running var:tensor([0.7000, 0.7000, 0.7000, 0.7000, 0.7000])
iteration:0, running mean of the second feature: 0.6
iteration:0, running var of the second feature: 0.7

iteration:1, running mean: tensor([0.5100, 1.0200, 1.5300, 2.0400, 2.5500])
iteration:1, running var:tensor([0.4900, 0.4900, 0.4900, 0.4900, 0.4900])
iteration:1, running mean of the second feature: 1.02
iteration:1, running var of the second feature: 0.48999999999999994

Initially the running mean and variance are 0 and 1, and momentum is 0.3. For the first feature the current batch mean is 1 and the variance is 0, so you can verify by hand that running_mean = 0.7 × 0 + 0.3 × 1 = 0.3 and running_var = 0.7 × 1 + 0.3 × 0 = 0.7. The remaining features follow the same calculation; working a few out by hand helps the idea sink in.

(2) nn.BatchNorm2d

The 2D case is much like the 1D case; only the feature itself becomes two-dimensional:

The horizontal axis is still the samples and the vertical axis the features: the first feature of the three samples is standardized with its own mean and variance, then the second, then the third, and so on, row by row. The input therefore becomes four-dimensional, B×C×W×H, where B is the number of samples in a batch, C the number of features (channels), and W and H the feature dimensions; for the example just described this would be 3×3×2×2. To tell B and C apart, the code below uses B = 3 (three samples) and C = 6 (six features):

flag = 1
# flag = 0
if flag:
    batch_size = 3
    num_features = 6
    momentum = 0.3
    features_shape = (2, 2)

    feature_map = torch.ones(features_shape)                                              # 2D
    feature_maps = torch.stack([feature_map*(i+1) for i in range(num_features)], dim=0)   # 3D
    feature_maps_bs = torch.stack([feature_maps for i in range(batch_size)], dim=0)       # 4D

    print("input data:\n{} shape is {}".format(feature_maps_bs, feature_maps_bs.shape))

    bn = nn.BatchNorm2d(num_features=num_features, momentum=momentum)

    running_mean, running_var = 0, 1

    for i in range(2):
        outputs = bn(feature_maps_bs)

        print("\niter:{}, running_mean.shape: {}".format(i, bn.running_mean.shape))
        print("iter:{}, running_var.shape: {}".format(i, bn.running_var.shape))
        print("iter:{}, weight.shape: {}".format(i, bn.weight.shape))
        print("iter:{}, bias.shape: {}".format(i, bn.bias.shape))

The result:

input data:
tensor([[[[1., 1.], [1., 1.]],
         [[2., 2.], [2., 2.]],
         [[3., 3.], [3., 3.]],
         [[4., 4.], [4., 4.]],
         [[5., 5.], [5., 5.]],
         [[6., 6.], [6., 6.]]],

        [[[1., 1.], [1., 1.]],
         [[2., 2.], [2., 2.]],
         [[3., 3.], [3., 3.]],
         [[4., 4.], [4., 4.]],
         [[5., 5.], [5., 5.]],
         [[6., 6.], [6., 6.]]],

        [[[1., 1.], [1., 1.]],
         [[2., 2.], [2., 2.]],
         [[3., 3.], [3., 3.]],
         [[4., 4.], [4., 4.]],
         [[5., 5.], [5., 5.]],
         [[6., 6.], [6., 6.]]]]) shape is torch.Size([3, 6, 2, 2])

iter:0, running_mean.shape: torch.Size([6])
iter:0, running_var.shape: torch.Size([6])
iter:0, weight.shape: torch.Size([6])
iter:0, bias.shape: torch.Size([6])

iter:1, running_mean.shape: torch.Size([6])
iter:1, running_var.shape: torch.Size([6])
iter:1, weight.shape: torch.Size([6])
iter:1, bias.shape: torch.Size([6])

Note that all four parameters have size 6, one value per feature channel, because the computation is feature-wise.

(3) nn.BatchNorm3d

Only the feature itself becomes three-dimensional; the computation is still feature-wise. In the code below, note that the number of features is now 4:

flag = 1
# flag = 0
if flag:
    batch_size = 3
    num_features = 4
    momentum = 0.3
    features_shape = (2, 2, 3)

    feature = torch.ones(features_shape)                                                  # 3D
    feature_map = torch.stack([feature * (i + 1) for i in range(num_features)], dim=0)    # 4D
    feature_maps = torch.stack([feature_map for i in range(batch_size)], dim=0)           # 5D

    print("input data:\n{} shape is {}".format(feature_maps, feature_maps.shape))

    bn = nn.BatchNorm3d(num_features=num_features, momentum=momentum)

    running_mean, running_var = 0, 1

    for i in range(2):
        outputs = bn(feature_maps)

        print("\niter:{}, running_mean.shape: {}".format(i, bn.running_mean.shape))
        print("iter:{}, running_var.shape: {}".format(i, bn.running_var.shape))
        print("iter:{}, weight.shape: {}".format(i, bn.weight.shape))
        print("iter:{}, bias.shape: {}".format(i, bn.bias.shape))

The result:

input data:
tensor([[[[[1., 1., 1.], [1., 1., 1.]],
          [[1., 1., 1.], [1., 1., 1.]]],
         [[[2., 2., 2.], [2., 2., 2.]],
          [[2., 2., 2.], [2., 2., 2.]]],
         [[[3., 3., 3.], [3., 3., 3.]],
          [[3., 3., 3.], [3., 3., 3.]]],
         [[[4., 4., 4.], [4., 4., 4.]],
          [[4., 4., 4.], [4., 4., 4.]]]],

        [[[[1., 1., 1.], [1., 1., 1.]],
          [[1., 1., 1.], [1., 1., 1.]]],
         [[[2., 2., 2.], [2., 2., 2.]],
          [[2., 2., 2.], [2., 2., 2.]]],
         [[[3., 3., 3.], [3., 3., 3.]],
          [[3., 3., 3.], [3., 3., 3.]]],
         [[[4., 4., 4.], [4., 4., 4.]],
          [[4., 4., 4.], [4., 4., 4.]]]],

        [[[[1., 1., 1.], [1., 1., 1.]],
          [[1., 1., 1.], [1., 1., 1.]]],
         [[[2., 2., 2.], [2., 2., 2.]],
          [[2., 2., 2.], [2., 2., 2.]]],
         [[[3., 3., 3.], [3., 3., 3.]],
          [[3., 3., 3.], [3., 3., 3.]]],
         [[[4., 4., 4.], [4., 4., 4.]],
          [[4., 4., 4.], [4., 4., 4.]]]]]) shape is torch.Size([3, 4, 2, 2, 3])

iter:0, running_mean.shape: torch.Size([4])
iter:0, running_var.shape: torch.Size([4])
iter:0, weight.shape: torch.Size([4])
iter:0, bias.shape: torch.Size([4])

iter:1, running_mean.shape: torch.Size([4])
iter:1, running_var.shape: torch.Size([4])
iter:1, weight.shape: torch.Size([4])
iter:1, bias.shape: torch.Size([4])
