

[pytorch] Custom training of VGG16, testing on a dataset, and fine-tuning the ResNet18 fully connected layer

Published: 2024/9/30

Defining a model from scratch

Testing:

```python
correct = 0
total = 0
for data in test_loader:
    img, label = data
    outputs = net(Variable(img))
    _, predict = torch.max(outputs.data, 1)
    total += label.size(0)
    correct += (predict == label).sum()
    print(str(predict) + ',' + str(label))
print(100 * correct / total)
```
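The loop above still wraps inputs in the long-deprecated `Variable`. On recent PyTorch versions the same evaluation can be sketched with `torch.no_grad()` and `net.eval()` instead (a minimal rewrite of the loop, not code from the original post):

```python
import torch

def evaluate(net, loader):
    """Accuracy (%) over a DataLoader, without building autograd graphs."""
    net.eval()  # switch dropout/batchnorm to inference behaviour
    correct = total = 0
    with torch.no_grad():  # no gradients are needed for evaluation
        for img, label in loader:
            outputs = net(img)
            _, predict = torch.max(outputs, 1)
            total += label.size(0)
            correct += (predict == label).sum().item()
    return 100.0 * correct / total
```

Skipping gradient tracking also noticeably reduces memory use during testing.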

Output:

The predictions are quite poor: the model outputs class 1 for every sample.
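A model that predicts a single class for everything is often a sign of class imbalance or a label-reading bug rather than a modeling problem. A quick diagnostic (my own sketch, using the `imgs` list of `(path, label)` pairs that `MyDataset` builds) is to count samples per class:

```python
from collections import Counter

def label_distribution(dataset):
    """Count samples per class from a MyDataset-style object exposing .imgs as (path, label) pairs."""
    return dict(Counter(label for _, label in dataset.imgs))
```

If one class dominates the training set, always predicting that class can look like reasonable accuracy while the model has learned nothing.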
Full code:

```python
import torch.nn.functional as F
import torch
import torch.nn as nn
from torch.autograd import Variable
from torchvision import transforms
from torch.utils.data.dataset import Dataset
from torch.utils.data.dataloader import DataLoader
from PIL import Image
import torch.optim as optim
import os

# *************************** initialization ***************************
# torch.cuda.set_device(gpu_id)  # use GPU
learning_rate = 0.0001  # learning rate

# *************************** dataset setup ****************************
root = os.getcwd() + '\\data\\'  # dataset path

# default image loader
def default_loader(path):
    return Image.open(path).convert('RGB')

class MyDataset(Dataset):
    # custom dataset class inheriting torch.utils.data.Dataset
    def __init__(self, txt, transform=None, target_transform=None, test=False, loader=default_loader):
        super(MyDataset, self).__init__()
        imgs = []
        fh = open(txt, 'r')  # open the txt file listing the samples
        for line in fh:  # one sample per line
            line = line.strip('\n')  # drop the trailing newline
            words = line.split()  # split on whitespace: words[0] is the image file name, words[1] is the label
            imgs.append((words[0], int(words[1])))
        self.test = test
        self.imgs = imgs
        self.transform = transform
        self.target_transform = target_transform

    def __getitem__(self, index):  # required: return one sample by index
        fn, label = self.imgs[index]
        if self.test is False:
            img_path = os.path.join("C:\\Users\\pic\\train", fn)
        else:
            img_path = os.path.join("C:\\Users\\pic\\test", fn)
        img = Image.open(img_path).convert('RGB')  # load the image from disk
        if self.transform is not None:
            img = self.transform(img)  # convert to a tensor
        # whatever is returned here is what each batch yields during training
        return img, label

    def __len__(self):  # required: number of images (distinct from the loader's length)
        return len(self.imgs)

class Net(nn.Module):  # LeNet-style network, inheriting torch.nn.Module
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)        # conv layer
        self.pool = nn.MaxPool2d(2, 2)         # pooling layer
        self.conv2 = nn.Conv2d(6, 16, 5)       # conv layer
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # fully connected layers
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 2)            # 2 outputs

    def forward(self, x):  # forward pass
        x = self.pool(F.relu(self.conv1(x)))  # F is torch.nn.functional
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)  # flatten conv features for the FC layers; element count is unchanged
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

IMG_MEAN = [0.485, 0.456, 0.406]
IMG_STD = [0.229, 0.224, 0.225]

net = Net()  # instantiate the LeNet-style network

train_data = MyDataset(txt=root + 'num.txt', transform=transforms.Compose([
    transforms.RandomHorizontalFlip(),  # horizontal flip
    transforms.Resize((32, 32)),        # resize to (h, w)
    transforms.CenterCrop(32),
    transforms.ToTensor()]))
test_data = MyDataset(txt=root + 'test.txt', transform=transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.CenterCrop(32),
    transforms.ToTensor()]), test=True)
train_loader = DataLoader(dataset=train_data, batch_size=227, shuffle=True, drop_last=True)
print('num_of_trainData:', len(train_data))
test_loader = DataLoader(dataset=test_data, batch_size=19, shuffle=False)

def trainandsave():
    net = Net()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)  # learning rate 0.001
    criterion = nn.CrossEntropyLoss()  # cross-entropy loss (a custom loss could be used too)
    for epoch in range(10):  # train for 10 epochs
        running_loss = 0.0  # accumulator for periodic loss printing
        for i, data in enumerate(train_loader, 0):  # enumerate yields both the index and the batch
            inputs, labels = data
            inputs, labels = Variable(inputs), Variable(labels)
            optimizer.zero_grad()  # gradients accumulate across iterations unless zeroed
            outputs = net(inputs)  # forward
            loss = criterion(outputs, labels)
            loss.backward()        # backward
            optimizer.step()       # parameter update
            running_loss += loss.item()
            if i % 9 == 1:
                print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 10))  # average loss
                running_loss = 0.0
    print('Finished Training')
    # save the whole network, then just its parameters
    torch.save(net, 'net.pkl')
    torch.save(net.state_dict(), 'net_params.pkl')
```

Trying to run VGG16:

After a long search, the problem turned out to be the input size: VGG16 expects 224x224 images. Changing the Resize transform alone still raised an error, so the crop also had to be changed to
transforms.CenterCrop((224, 224))
In addition, `data[0]` had to be replaced with `item()`.
After that it ran successfully (very slowly...).
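The `data[0]` change is needed because `criterion(...)` returns a 0-dim tensor in PyTorch >= 0.4, and indexing a 0-dim tensor was removed; `.item()` extracts the plain Python number. A minimal illustration:

```python
import torch

loss = torch.tensor(0.5)  # stand-in for what criterion(outputs, labels) returns: a 0-dim tensor
value = loss.item()       # extract a plain Python float
print(value)              # 0.5
# loss.data[0] raises IndexError on modern PyTorch: 0-dim tensors cannot be indexed
```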

```python
class VGG16(nn.Module):
    def __init__(self, nums=2):
        super(VGG16, self).__init__()
        self.nums = nums
        vgg = []

        # block 1 -> 112 x 112 x 64
        vgg.append(nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.MaxPool2d(kernel_size=2, stride=2))

        # block 2 -> 56 x 56 x 128
        vgg.append(nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.MaxPool2d(kernel_size=2, stride=2))

        # block 3 -> 28 x 28 x 256
        vgg.append(nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.MaxPool2d(kernel_size=2, stride=2))

        # block 4 -> 14 x 14 x 512
        vgg.append(nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.MaxPool2d(kernel_size=2, stride=2))

        # block 5 -> 7 x 7 x 512
        vgg.append(nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.MaxPool2d(kernel_size=2, stride=2))

        # nn.Sequential accepts an OrderedDict or a series of modules;
        # a plain Python list must be unpacked with *
        self.main = nn.Sequential(*vgg)

        # classifier: the 4-D tensor [batch, channels, height, width] is
        # flattened to [batch, channels * height * width] before these layers
        classfication = []
        classfication.append(nn.Linear(in_features=512 * 7 * 7, out_features=4096))  # 512*7*7*4096 weights + 4096 biases
        classfication.append(nn.ReLU())
        classfication.append(nn.Dropout(p=0.5))
        classfication.append(nn.Linear(in_features=4096, out_features=4096))
        classfication.append(nn.ReLU())
        classfication.append(nn.Dropout(p=0.5))
        classfication.append(nn.Linear(in_features=4096, out_features=self.nums))
        self.classfication = nn.Sequential(*classfication)

    def forward(self, x):
        feature = self.main(x)                 # convolutional features
        feature = feature.view(x.size(0), -1)  # reshape to [batch, 512*7*7]
        result = self.classfication(feature)
        return result


train_data = MyDataset(txt=root + 'num.txt', transform=transforms.Compose([
    transforms.RandomHorizontalFlip(),  # horizontal flip
    transforms.Resize((224, 224)),      # VGG16 expects 224 x 224 input
    transforms.CenterCrop((224, 224)),
    transforms.ToTensor()]))
test_data = MyDataset(txt=root + 'test.txt', transform=transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.CenterCrop((224, 224)),
    transforms.ToTensor()]), test=True)
train_loader = DataLoader(dataset=train_data, batch_size=16, shuffle=True, drop_last=True)
print('num_of_trainData:', len(train_data))
test_loader = DataLoader(dataset=test_data, batch_size=19, shuffle=False)

if __name__ == '__main__':
    vgg = VGG16()  # equivalently VGG16(2)
    optimizer = optim.SGD(vgg.parameters(), lr=0.001, momentum=0.9)  # learning rate 0.001
    criterion = nn.CrossEntropyLoss()  # cross-entropy loss
    for epoch in range(10):  # train for 10 epochs
        running_loss = 0.0
        train_loss = 0.
        train_acc = 0.
        for i, data in enumerate(train_loader, 0):
            inputs, labels = data
            inputs, labels = Variable(inputs), Variable(labels)
            optimizer.zero_grad()  # gradients accumulate across iterations unless zeroed
            outputs = vgg(inputs)
            loss = criterion(outputs, labels)
            train_loss += loss.item()
            pred = torch.max(outputs, 1)[1]
            train_correct = (pred == labels).sum()
            train_acc += train_correct.item()
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        print('Train Loss: {:.6f}, Acc: {:.6f}'.format(
            train_loss / len(train_data), train_acc / len(train_data)))
    print('Finished Training')
    # save the trained VGG16: the whole network, then just its parameters
    torch.save(vgg, 'net.pkl')
    torch.save(vgg.state_dict(), 'net_params.pkl')
```

Tried switching the optimizer:

optimizer = optim.Adam(vgg.parameters(), lr=1e-6)  # learning rate 1e-6

The results were still unsatisfactory. I suspect the dataset itself contains too much noise.
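Before blaming the dataset, a common sanity check (my own sketch, not from the original post) is to verify the model can overfit a single small batch. If training accuracy on a handful of fixed samples never approaches 100%, the problem is in the model or training loop rather than in the amount or quality of data:

```python
import torch
import torch.nn as nn
import torch.optim as optim

def overfit_one_batch(model, inputs, labels, steps=300, lr=0.1):
    """Train on one fixed batch and return the final accuracy on that batch."""
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
    preds = model(inputs).argmax(dim=1)
    return (preds == labels).float().mean().item()
```

An accuracy well below 1.0 here usually points to a bug (wrong labels, a learning rate that is far too small, or gradients not flowing) rather than to the dataset.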

Loading an existing model and fine-tuning its parameters

Two main transfer learning scenarios:
Finetuning the convnet: instead of random initialization, we initialize the network with a pretrained one, such as a network trained on the ImageNet 1000 dataset. The rest of training looks as usual. (This fine-tuning corresponds to the "initialization" mentioned in the quote.)
ConvNet as fixed feature extractor: here we freeze the weights of the entire network except the final fully connected layer. That last layer is replaced with a new one with random weights, and only this layer is trained. (This corresponds to the "fixed feature extractor" in the quote.)
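The second scenario, freezing everything except the head, can be sketched as below. A toy backbone stands in for resnet18 so the snippet is self-contained; with torchvision available, `models.resnet18` and its `fc` attribute behave the same way:

```python
import torch.nn as nn

class TinyBackboneNet(nn.Module):
    """Stand-in for a pretrained convnet that ends in a fully connected layer `fc`."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fc = nn.Linear(8, 1000)

model = TinyBackboneNet()
for param in model.parameters():
    param.requires_grad = False  # freeze the whole pretrained network
model.fc = nn.Linear(8, 2)       # fresh head: its parameters require grad by default
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)                 # ['fc.weight', 'fc.bias']
```

Only `model.fc.parameters()` then need to be passed to the optimizer, which makes each training step much cheaper than full fine-tuning.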

Train by loading the pretrained model and resetting the final fully connected layer.
Each epoch runs both training and validation. I used resnet18 (download the weight file from the web yourself first; downloading it inside PyCharm is too slow).

Full code so far:

```python
from __future__ import print_function, division

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy

data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]),
    'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]),
}

data_dir = os.getcwd() + '\\data\\'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x])
                  for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True)
               for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated

def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0
    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)
        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                scheduler.step()
                model.train()  # set model to training mode
            else:
                model.eval()   # set model to evaluation mode for validation
            running_loss = 0.0
            running_corrects = 0
            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                # zero the parameter gradients
                optimizer.zero_grad()
                # track history only in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)
                    # backward + optimize only in the training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]
            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
            # deep copy the model if it is the best so far
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
        print()
    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))
    # load best model weights
    model.load_state_dict(best_model_wts)
    return model

class_names = image_datasets['train'].classes

def visualize_model(model, num_images=6):
    was_training = model.training
    model.eval()
    images_so_far = 0
    fig = plt.figure()
    with torch.no_grad():
        for i, (inputs, labels) in enumerate(dataloaders['val']):
            outputs = model(inputs)
            _, preds = torch.max(outputs, 1)
            for j in range(inputs.size()[0]):
                images_so_far += 1
                ax = plt.subplot(num_images // 2, 2, images_so_far)
                ax.axis('off')
                ax.set_title('predicted: {}'.format(class_names[preds[j]]))
                imshow(inputs.cpu().data[j])
                if images_so_far == num_images:
                    model.train(mode=was_training)
                    return
        model.train(mode=was_training)

model_ft = models.resnet18(pretrained=False)
pthfile = r'C:\Users\14172\PycharmProjects\pythonProject4\resnet18-5c106cde.pth'
model_ft.load_state_dict(torch.load(pthfile))
#model_ft = models.vgg16(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)  # replace the final FC layer with a 2-class head
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)
# save the whole network and its parameters separately
torch.save(model_ft, 'modefresnet.pkl')
torch.save(model_ft.state_dict(), 'modelresnet_params.pkl')
visualize_model(model_ft)
```


The gap between the two models is still fairly large; I'll tune things further later. Just recording progress for now.


Summary

That is the complete walkthrough of custom VGG16 training, dataset testing, and ResNet18 fully connected layer fine-tuning. I hope it helps with similar problems.
