[VGG16]: Network Structure and Implementation (PyTorch)
I. The VGG16 Layer Structure
VGG16 has 16 weight layers in total: 13 convolutional layers and 3 fully connected layers. The input first passes through two convolutions with 64 kernels each, followed by one pooling step; then two convolutions with 128 kernels, followed by pooling; then three convolutions with 256 kernels, followed by pooling; then two more blocks of three convolutions with 512 kernels each, each block followed by pooling; and finally three fully connected layers.
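If torchvision is available, you can cross-check this layer count against its reference implementation (a quick sketch, not part of the original post):

```python
from torchvision.models import vgg16

# Print torchvision's reference VGG16: the `features` block lists the
# 13 conv layers and 5 max-pool stages, `classifier` the 3 FC layers.
print(vgg16())
```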
1. The official VGG16 network architecture diagram:
- conv3-64 is short for convolution with kernel_size=3 and 64 kernels: the layer is a convolutional layer whose kernels are 3x3xn (n is the number of channels of the input to this layer), and 64 such kernels are applied.
- FC4096 is short for Fully Connected 4096: a fully connected layer with 4096 output neurons.
- maxpool is the max-pooling operation, with a 2x2 window and stride=2 (the output-size arithmetic behind both layer types is sketched below).
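A small helper (hypothetical, added here for illustration) implements the standard output-size formula floor((W - K + 2P) / S) + 1 that the shape comments in the code of Section II rely on:

```python
def conv_out_size(w, k, s=1, p=0):
    """Spatial output size of a conv/pool layer: floor((w - k + 2p) / s) + 1."""
    return (w - k + 2 * p) // s + 1

print(conv_out_size(224, 3, s=1, p=1))  # 224: a padded 3x3 conv keeps the spatial size
print(conv_out_size(224, 2, s=2))       # 112: 2x2, stride-2 max pooling halves it
```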
2. Memory footprint of the VGG model:
II. Building the Model
VGG16實(shí)現(xiàn),基于CIFAR-10數(shù)據(jù)集
- CIFAR-10:該數(shù)據(jù)集共有60000張彩色圖像,這些圖像是32*32,分為10個(gè)類,每類6000張圖,50000 training images and 10000 test images.
- CIFAR-10: CIFAR-10 and CIFAR-100 datasets (toronto.edu)
1、數(shù)據(jù)加載
#加載數(shù)據(jù) transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) #訓(xùn)練集 trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=True) #測(cè)試集 testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')2、VGG16網(wǎng)絡(luò)實(shí)現(xiàn)
2. VGG16 Network Implementation

```python
class Vgg16_net(nn.Module):
    def __init__(self):
        super(Vgg16_net, self).__init__()

        self.layer1 = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1),   # (32-3+2)/1+1=32  32*32*64
            nn.BatchNorm2d(64),
            # inplace=True overwrites the input tensor with the result instead
            # of allocating a new one: the tensor handed down from the Conv2d
            # above is modified directly, which saves memory.
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1),  # (32-3+2)/1+1=32  32*32*64
            # Batch normalization forces the activations back toward a normal
            # distribution with mean 0 and variance 1, keeping the data
            # distribution consistent and helping to avoid vanishing gradients.
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2)   # (32-2)/2+1=16  16*16*64
        )

        self.layer2 = nn.Sequential(
            nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1),   # (16-3+2)/1+1=16  16*16*128
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1),  # (16-3+2)/1+1=16  16*16*128
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2, 2)   # (16-2)/2+1=8  8*8*128
        )

        self.layer3 = nn.Sequential(
            nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1),  # (8-3+2)/1+1=8  8*8*256
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1),  # (8-3+2)/1+1=8  8*8*256
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1),  # (8-3+2)/1+1=8  8*8*256
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2, 2)   # (8-2)/2+1=4  4*4*256
        )

        self.layer4 = nn.Sequential(
            nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1),  # (4-3+2)/1+1=4  4*4*512
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),  # (4-3+2)/1+1=4  4*4*512
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),  # (4-3+2)/1+1=4  4*4*512
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2, 2)   # (4-2)/2+1=2  2*2*512
        )

        self.layer5 = nn.Sequential(
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),  # (2-3+2)/1+1=2  2*2*512
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),  # (2-3+2)/1+1=2  2*2*512
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),  # (2-3+2)/1+1=2  2*2*512
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2, 2)   # (2-2)/2+1=1  1*1*512
        )

        self.conv = nn.Sequential(
            self.layer1,
            self.layer2,
            self.layer3,
            self.layer4,
            self.layer5
        )

        self.fc = nn.Sequential(
            # y = xA^T + b, where x is the input, A the weights, b the bias, y the output.
            # nn.Linear(in_features, out_features, bias)
            #   in_features:  number of columns of x; input shape [batch_size, in_features]
            #   out_features: number of columns of y; output shape [batch_size, out_features]
            #   bias: bool, defaults to True
            # A linear layer keeps the number of rows of x and only changes the columns.
            nn.Linear(512, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),

            nn.Linear(512, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),

            nn.Linear(256, 10)
        )

    def forward(self, x):
        x = self.conv(x)
        # -1 marks a dimension to be inferred: here the result must have 512
        # columns, so the row count (the batch size) is left for PyTorch to
        # work out. x.size(0) would give the batch size explicitly:
        # x = x.view(x.size(0), -1)
        x = x.view(-1, 512)
        x = self.fc(x)
        return x
```
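Before training, it is worth verifying the forward pass on a dummy CIFAR-10-sized batch (a sketch; the shapes follow from the layer comments above):

```python
net = Vgg16_net()
x = torch.randn(2, 3, 32, 32)  # two random 32x32 RGB images
print(net(x).shape)            # torch.Size([2, 10]): one logit per class
```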
mini-batch設(shè)置為100,每加載50個(gè)mini-batch,統(tǒng)計(jì)一次數(shù)據(jù)(5000張圖)
''' 模型訓(xùn)練 ''' def net_train():epoch = 10 # 訓(xùn)練次數(shù)learning_rate = 1e-4 # 學(xué)習(xí)率net = Vgg16_net()criterion = nn.CrossEntropyLoss()optimizer = optim.Adam(net.parameters(), lr=learning_rate)print('start Training.......')print("print msg : 50 mini-batches per time")for epoch in range(epoch): # 迭代running_loss = 0.0running_acc = 0.0print('*' * 25, 'epoch {}'.format(epoch + 1), '*' * 25, "——> ")for i, data in enumerate(trainloader, 0):inputs, labels = data#print("i: {}, inputs: {}, labels: {}".format(i, len(inputs), len(labels)))# 向前傳播out = net(inputs)loss = criterion(out, labels)running_loss += loss.item() * labels.size(0)_, pred = torch.max(out, 1) # 預(yù)測(cè)最大值所在的位置標(biāo)簽num_correct = (pred == labels).sum()# 初始化梯度optimizer.zero_grad()outputs = net(inputs)loss = criterion(outputs, labels)loss.backward()optimizer.step()# 打印loss 和 accrunning_acc += num_correct.item()running_loss += loss.item()if i % 50 == 49: # print every 5000 mini-batchesprint('[%d, %5d] loss: %.5f Acc:%.5f' %(epoch + 1, i + 1, running_loss / 5000, running_acc / 5000))running_loss = 0.0running_acc = 0.0print('Finished Training')return net4、模型測(cè)試
''' 模型測(cè)試 ''' def net_test(model):model.eval() # 模型評(píng)估criterion = nn.CrossEntropyLoss()eval_loss = 0eval_acc = 0for data in testloader: # 測(cè)試模型img, label = dataout = model(img)loss = criterion(out, label)eval_loss += loss.item() * label.size(0)_, pred = torch.max(out, 1)num_correct = (pred == label).sum()eval_acc += num_correct.item()print('Test Loss: {:.6f}, Acc: {:.6f}'.format(eval_loss / (len(testset)), eval_acc / (len(testset))))總結(jié)