PyTorch Advanced Learning (2): STN (Spatial Transformer Network)
Contents
- Loading the Dataset
- Introduction to the Spatial Transformer Network
- Defining the Network
- Training and Testing the Model
- Visualizing the STN Results
Official tutorial: https://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html
In this tutorial, you will learn how to augment your network using the visual attention mechanism of the spatial transformer network (STN). If you want to learn more about spatial transformer networks, you can read the DeepMind paper in detail.
Spatial transformer networks allow a neural network to learn how to perform spatial transformations on the input image in order to enhance the geometric invariance of the model. For example, it can crop a region of interest, scale it, and correct the orientation of an image. This can be a useful mechanism because **CNNs are not invariant to rotation, scaling, or more general affine transformations**.
對(duì)于 STN 的最大優(yōu)點(diǎn)之一就是:能夠?qū)⑵浜?jiǎn)單地插入到任何現(xiàn)有的 CNN 中,而無(wú)需進(jìn)行任何修改。
```python
# License: BSD
# Author: Ghassen Hamrouni

from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
import numpy as np

plt.ion()   # interactive mode
```

Loading the Dataset
This post uses the classic MNIST dataset as an example, classifying it with a standard convolutional network augmented with a spatial transformer network.
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Training dataset
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST(root='.', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])), batch_size=64, shuffle=True, num_workers=4)

# Test dataset
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST(root='.', train=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])), batch_size=64, shuffle=True, num_workers=4)
```
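As a quick sanity check (not part of the tutorial code), you can pull one batch from the loader and inspect its shape; with batch_size=64 and 1x28x28 MNIST images it should look like this:

```python
images, labels = next(iter(train_loader))
print(images.shape)   # torch.Size([64, 1, 28, 28])
print(labels.shape)   # torch.Size([64])
```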
Introduction to the Spatial Transformer Network

A spatial transformer network consists of three main components:
- Localisation network (Localisation Network): a regular CNN that regresses the transformation parameters θ.
- Grid generator (Grid Generator): generates, over the input image, a grid of coordinates corresponding to each pixel of the output image.
- Sampler (Sampler): takes the sampling grid together with the input feature map as its inputs and produces the transformed feature map (see the sketch after this list).
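To illustrate the last two components in isolation (this snippet is not part of the tutorial code), the sketch below builds an identity sampling grid with F.affine_grid and resamples a dummy batch with F.grid_sample; with an identity θ the output should reproduce the input up to floating-point precision:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 1, 28, 28)   # dummy batch of MNIST-sized images

# One 2x3 identity affine matrix theta per image in the batch
theta = torch.tensor([[1., 0., 0.],
                      [0., 1., 0.]]).unsqueeze(0).repeat(4, 1, 1)

grid = F.affine_grid(theta, x.size())   # grid generator: (N, H, W, 2) sampling coordinates
warped = F.grid_sample(x, grid)         # sampler: bilinear resampling of x at those coordinates

print(torch.allclose(warped, x, atol=1e-5))   # True: the identity transform leaves the batch unchanged
```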
Defining the Network
```python
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

        # Spatial transformer localization-network
        self.localization = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7),
            nn.MaxPool2d(2, stride=2),
            nn.ReLU(True),
            nn.Conv2d(8, 10, kernel_size=5),
            nn.MaxPool2d(2, stride=2),
            nn.ReLU(True)
        )

        # Regressor for the 3 * 2 affine matrix
        self.fc_loc = nn.Sequential(
            nn.Linear(10 * 3 * 3, 32),
            nn.ReLU(True),
            nn.Linear(32, 3 * 2)
        )

        # Initialize the weights/bias with identity transformation
        self.fc_loc[2].weight.data.zero_()
        self.fc_loc[2].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    # Spatial transformer network forward function
    def stn(self, x):
        xs = self.localization(x)
        xs = xs.view(-1, 10 * 3 * 3)
        theta = self.fc_loc(xs)
        theta = theta.view(-1, 2, 3)

        grid = F.affine_grid(theta, x.size())
        x = F.grid_sample(x, grid)
        return x

    def forward(self, x):
        # transform the input
        x = self.stn(x)

        # Perform the usual forward pass
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)


model = Net().to(device)
```
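As a quick sanity check (not in the tutorial), you can push a dummy batch through the freshly constructed model and confirm the output shape; the commented shape assumes 1x28x28 inputs as produced by the loaders above:

```python
with torch.no_grad():
    dummy = torch.randn(8, 1, 28, 28).to(device)   # hypothetical batch of 8 MNIST-sized images
    print(model(dummy).shape)   # torch.Size([8, 10]) -- log-probabilities over the 10 digit classes
```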
Training and Testing the Model

We train the model with SGD. The network learns the classification task in a supervised manner, and at the same time the model learns the STN automatically, end to end.
```python
optimizer = optim.SGD(model.parameters(), lr=0.01)


def train(epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)

        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % 500 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))


#
# A simple test procedure to measure the STN performance on MNIST.
#
def test():
    with torch.no_grad():
        model.eval()
        test_loss = 0
        correct = 0
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)

            # sum up batch loss
            test_loss += F.nll_loss(output, target, reduction='sum').item()
            # get the index of the max log-probability
            pred = output.max(1, keepdim=True)[1]
            correct += pred.eq(target.view_as(pred)).sum().item()

        test_loss /= len(test_loader.dataset)
        print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'
              .format(test_loss, correct, len(test_loader.dataset),
                      100. * correct / len(test_loader.dataset)))
```

Visualizing the STN Results
Now we will inspect the results of the learned visual attention mechanism.
We define a small helper function in order to visualize the transformations while training.
```python
def convert_image_np(inp):
    """Convert a Tensor to numpy image."""
    inp = inp.numpy().transpose((1, 2, 0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    return inp


# We want to visualize the output of the spatial transformers layer
# after the training. We visualize a batch of input images and
# the corresponding transformed batch using STN.
def visualize_stn():
    with torch.no_grad():
        # Get a batch of training data
        data = next(iter(test_loader))[0].to(device)

        input_tensor = data.cpu()
        transformed_input_tensor = model.stn(data).cpu()

        in_grid = convert_image_np(
            torchvision.utils.make_grid(input_tensor))

        out_grid = convert_image_np(
            torchvision.utils.make_grid(transformed_input_tensor))

        # Plot the results side-by-side
        f, axarr = plt.subplots(1, 2)
        axarr[0].imshow(in_grid)
        axarr[0].set_title('Dataset Images')
        axarr[1].imshow(out_grid)
        axarr[1].set_title('Transformed Images')


for epoch in range(1, 20 + 1):
    train(epoch)
    test()

# Visualize the STN transformation on some input batch
visualize_stn()

plt.ioff()
plt.show()
```
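If you are running without a display (for example on a headless server; this is an assumption, not part of the tutorial), you can save the comparison figure to disk instead of relying on the interactive window; the file name below is hypothetical:

```python
# Place this right after the visualize_stn() call, before plt.show():
plt.savefig('stn_results.png', dpi=150)   # saves the current figure (the side-by-side comparison)
```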
Output: per-epoch training and test logs, followed by a figure comparing a batch of dataset images with the corresponding STN-transformed images.