
CV Algorithm Reproduction (Classification 2/6): AlexNet (2012, Hinton's group)

Published: 2023/11/27

Acknowledgement: 霹靂吧啦Wz: https://space.bilibili.com/18161609

Contents

1 Key Points

1.1 Deep Learning Theory

1.2 PyTorch Syntax

2 Network Overview

2.1 Historical Significance

2.2 Network Highlights

2.3 Network Architecture

3 Code Structure

3.1 model.py

3.2 train.py

3.3 predict.py

3.4 split_data.py


1 Key Points

1.1 Deep Learning Theory

  1. After one convolution, the new image size is N = (W − F + 2P) / S + 1, where W is the input size, F the kernel size, P the padding, and S the stride. (If padding [p1, p2] has p1 ≠ p2, the 2P term in the formula becomes P1 + P2.) (If the result is not an integer, PyTorch automatically ignores the last rows and columns so that N stays an integer.)
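The formula above can be traced through the layers of model.py with a small helper (my own sketch, not part of the original code):

```python
import math

def conv_out(w, f, p, s):
    # N = (W - F + 2P) / S + 1; PyTorch floors a non-integer result,
    # i.e. the last rows/columns are ignored.
    return math.floor((w - f + 2 * p) / s) + 1

n = conv_out(224, 11, 2, 4)   # AlexNet's first conv: (224 - 11 + 4) / 4 + 1
print(n)                      # 55
n = conv_out(n, 3, 0, 2)      # 3x3 max pool, stride 2
print(n)                      # 27
```

Applying the same helper repeatedly reproduces the feature-map sizes annotated in the model.py comments: 224 → 55 → 27 → 13 → 6.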

1.2 PyTorch Syntax

  1. PyTorch lets you define a custom initialization for network weights (see model.py).
  2. pata = list(net.parameters())  # inspect model parameters
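As a quick illustration of the second point (a standalone snippet of my own, using AlexNet's first conv layer), parameters() yields each layer's weight and bias tensors, which is handy for counting parameters while debugging:

```python
import torch.nn as nn

net = nn.Conv2d(3, 48, kernel_size=11, stride=4, padding=2)  # AlexNet's first conv layer
pata = list(net.parameters())          # [weight, bias]
n_params = sum(p.numel() for p in pata)
print(n_params)                        # 48 * 3 * 11 * 11 + 48 = 17472
```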

2 Network Overview

2.1 Historical Significance

  • Winner of the 2012 ImageNet image classification challenge, lifting classification accuracy from the traditional 70%+ straight to 80%+. Deep learning took off rapidly after that year.

2.2 Network Highlights

  1. First network to use GPUs to accelerate training.
  2. Used the ReLU activation function instead of the traditional Sigmoid and Tanh activations.
  3. Applied Dropout in the first two fully connected layers, randomly deactivating neurons to reduce overfitting.
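Point 3 can be seen directly in a toy example (my own sketch): in training mode, Dropout zeroes each activation with probability p and scales the survivors by 1/(1−p); in eval mode it is a no-op.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()        # training mode: each element is either 0 or 2 (= 1 / (1 - 0.5))
print(drop(x))

drop.eval()         # eval mode: dropout does nothing
print(drop(x))      # all ones
```

This is also why train.py calls net.train() and net.eval() around the training and validation phases.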

2.3 Network Architecture

Note: padding: [1, 2] means one column of zeros on the far left edge of the image and two columns on the far right edge; one row of zeros along the top edge and two rows along the bottom edge.
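Note that nn.Conv2d's padding argument pads both sides of each dimension equally; asymmetric padding like [1, 2] above is done with nn.ZeroPad2d, which takes (left, right, top, bottom). A minimal sketch (not from the original post):

```python
import torch
import torch.nn as nn

pad = nn.ZeroPad2d((1, 2, 1, 2))   # 1 column left, 2 right; 1 row top, 2 bottom
x = torch.ones(1, 1, 4, 4)
y = pad(x)
print(y.shape)                     # [1, 1, 7, 7]: each spatial dim grows by 1 + 2
print(y[0, 0, 0])                  # top row is all zeros
```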

3 Code Structure

  • model.py
  • train.py
  • predict.py
  • split_data.py (dataset splitting)

3.1 model.py

import torch.nn as nn
import torch

"""
Compared with the original paper, this AlexNet reproduction halves the
number of convolution kernels in every layer.
"""

class AlexNet(nn.Module):
    def __init__(self, num_classes=1000, init_weights=False):
        super(AlexNet, self).__init__()
        # nn.Sequential(): packs a series of layers together, saving a
        # separate variable for each layer.
        self.features = nn.Sequential(
            nn.Conv2d(3, 48, kernel_size=11, stride=4, padding=2),  # input[3, 224, 224]  output[48, 55, 55]
            nn.ReLU(inplace=True),  # inplace: trades extra computation for lower memory use, allowing larger models (default False)
            nn.MaxPool2d(kernel_size=3, stride=2),                  # output[48, 27, 27]
            nn.Conv2d(48, 128, kernel_size=5, padding=2),           # output[128, 27, 27]
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # output[128, 13, 13]
            nn.Conv2d(128, 192, kernel_size=3, padding=1),          # output[192, 13, 13]
            nn.ReLU(inplace=True),
            nn.Conv2d(192, 192, kernel_size=3, padding=1),          # output[192, 13, 13]
            nn.ReLU(inplace=True),
            nn.Conv2d(192, 128, kernel_size=3, padding=1),          # output[128, 13, 13]
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # output[128, 6, 6]
        )
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(128 * 6 * 6, 2048),  # input: 128 channels * 6 * 6 feature map (flattened to 1D before this)
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(2048, 2048),
            nn.ReLU(inplace=True),
            nn.Linear(2048, num_classes),
        )
        if init_weights:
            self._initialize_weights()

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)  # torch layout is [B, C, H, W]; start_dim=1 flattens from C onward
        x = self.classifier(x)
        return x

    # Custom weight initialization (the framework has defaults; override them like this if needed)
    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0)

3.2 train.py

import torch
import torch.nn as nn
from torchvision import transforms, datasets, utils
import matplotlib.pyplot as plt
import numpy as np
import torch.optim as optim
from model import AlexNet
import os
import json
import time

"""
Dataset: flower classification (5 classes)
"""

def main():
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print("using {} device.".format(device))

    data_transform = {
        "train": transforms.Compose([transforms.RandomResizedCrop(224),
                                     transforms.RandomHorizontalFlip(),  # random horizontal flip
                                     transforms.ToTensor(),
                                     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]),
        "val": transforms.Compose([transforms.Resize((224, 224)),  # cannot be 224, must be (224, 224)
                                   transforms.ToTensor(),
                                   transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])}

    data_root = os.path.abspath(os.path.join(os.getcwd(), "../.."))  # os.getcwd(): current absolute path; "../.." goes two levels up
    image_path = os.path.join(data_root, "data_set", "flower_data")  # flower data set path
    assert os.path.exists(image_path), "{} path does not exist.".format(image_path)
    train_dataset = datasets.ImageFolder(root=os.path.join(image_path, "train"),
                                         transform=data_transform["train"])
    train_num = len(train_dataset)

    # {'daisy':0, 'dandelion':1, 'roses':2, 'sunflower':3, 'tulips':4}
    flower_list = train_dataset.class_to_idx
    cla_dict = dict((val, key) for key, val in flower_list.items())  # invert keys and values so a predicted index maps straight to its class name
    # write dict into json file
    json_str = json.dumps(cla_dict, indent=4)  # encode as JSON
    with open('class_indices.json', 'w') as json_file:  # create the JSON file and write it
        json_file.write(json_str)

    batch_size = 32
    nw = min([os.cpu_count(), batch_size if batch_size > 1 else 0, 8])  # number of workers
    print('Using {} dataloader workers every process'.format(nw))

    train_loader = torch.utils.data.DataLoader(train_dataset,
                                               batch_size=batch_size, shuffle=True,
                                               num_workers=nw)

    validate_dataset = datasets.ImageFolder(root=os.path.join(image_path, "val"),
                                            transform=data_transform["val"])
    val_num = len(validate_dataset)
    validate_loader = torch.utils.data.DataLoader(validate_dataset,
                                                  batch_size=4, shuffle=False,
                                                  num_workers=nw)

    print("using {} images for training, {} images for validation.".format(train_num, val_num))

    # Code to inspect the dataset (debugging only):
    # test_data_iter = iter(validate_loader)
    # test_image, test_label = test_data_iter.next()
    #
    # def imshow(img):
    #     img = img / 2 + 0.5  # unnormalize
    #     npimg = img.numpy()
    #     plt.imshow(np.transpose(npimg, (1, 2, 0)))
    #     plt.show()
    #
    # print(' '.join('%5s' % cla_dict[test_label[j].item()] for j in range(4)))
    # imshow(utils.make_grid(test_image))

    net = AlexNet(num_classes=5, init_weights=True)
    net.to(device)
    loss_function = nn.CrossEntropyLoss()
    # pata = list(net.parameters())  # inspect model parameters (debugging)
    optimizer = optim.Adam(net.parameters(), lr=0.0002)

    save_path = './AlexNet.pth'
    best_acc = 0.0
    for epoch in range(10):
        # training phase
        net.train()  # automatically enables training behaviour for dropout/BN layers
        running_loss = 0.0
        t1 = time.perf_counter()
        for step, data in enumerate(train_loader, start=0):
            images, labels = data
            optimizer.zero_grad()
            outputs = net(images.to(device))
            loss = loss_function(outputs, labels.to(device))
            loss.backward()   # backpropagation
            optimizer.step()  # update the parameters

            # print statistics
            running_loss += loss.item()
            # print training progress
            rate = (step + 1) / len(train_loader)
            a = "*" * int(rate * 50)
            b = "." * int((1 - rate) * 50)
            print("\rtrain loss: {:^3.0f}%[{}->{}]{:.3f}".format(int(rate * 100), a, b, loss), end="")
        print()
        print(time.perf_counter() - t1)

        # validation phase
        net.eval()  # automatically disables training behaviour for dropout/BN layers
        acc = 0.0   # accumulate the number of correct predictions per epoch
        with torch.no_grad():  # do not compute gradients
            for val_data in validate_loader:
                val_images, val_labels = val_data
                outputs = net(val_images.to(device))
                predict_y = torch.max(outputs, dim=1)[1]
                acc += (predict_y == val_labels.to(device)).sum().item()
            val_accurate = acc / val_num
            if val_accurate > best_acc:
                best_acc = val_accurate
                torch.save(net.state_dict(), save_path)
            print('[epoch %d] train_loss: %.3f  val_accuracy: %.3f' %
                  (epoch + 1, running_loss / (step + 1), val_accurate))

    print('Finished Training')

if __name__ == '__main__':
    main()
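The validation loop's accuracy bookkeeping relies on torch.max(outputs, dim=1)[1], which picks the index of the largest logit per sample. A standalone illustration with toy tensors of my own (not real model outputs):

```python
import torch

outputs = torch.tensor([[0.1, 2.0, 0.3],   # predicted class 1
                        [1.5, 0.2, 0.1]])  # predicted class 0
labels = torch.tensor([1, 2])
predict_y = torch.max(outputs, dim=1)[1]   # indices of the max logits
acc = (predict_y == labels).sum().item()   # number of correct predictions
print(acc)  # 1
```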

Training results:

3.3 predict.py

import torch
from model import AlexNet
from PIL import Image
from torchvision import transforms
import matplotlib.pyplot as plt
import json

data_transform = transforms.Compose(
    [transforms.Resize((224, 224)),
     transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

# load image
img = Image.open("../tulip.jpg")
plt.imshow(img)
# [N, C, H, W]
img = data_transform(img)
# expand batch dimension
img = torch.unsqueeze(img, dim=0)

# read class_indict
try:
    with open('./class_indices.json', 'r') as json_file:
        class_indict = json.load(json_file)
except Exception as e:
    print(e)
    exit(-1)

# create model
model = AlexNet(num_classes=5)
# load model weights
model_weight_path = "./AlexNet.pth"
model.load_state_dict(torch.load(model_weight_path))
model.eval()
with torch.no_grad():  # do not compute gradients
    # predict class
    output = torch.squeeze(model(img))       # torch.squeeze(): drop dimensions of size 1
    predict = torch.softmax(output, dim=0)   # turn the raw outputs into a probability distribution
    predict_cla = torch.argmax(predict).numpy()
print(class_indict[str(predict_cla)], predict[predict_cla].item())
plt.show()
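The post-processing at the end of the script squeezes away the batch dimension, converts logits to probabilities with softmax, and takes the argmax. The same calls on hypothetical logits of my own:

```python
import torch

output = torch.squeeze(torch.tensor([[1.0, 2.0, 4.0, 0.5, 0.2]]))  # drop the batch dim of size 1
predict = torch.softmax(output, dim=0)      # probabilities summing to 1
predict_cla = torch.argmax(predict).item()
print(predict_cla)  # 2
```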

Output:

3.4 split_data.py

import os
from shutil import copy, rmtree
import random

"""
Usage:
(1) Create a new folder "flower_data" under the data_set folder
(2) Download the flower classification dataset: http://download.tensorflow.org/example_images/flower_photos.tgz
(3) Extract the dataset into the flower_data folder
(4) Run the "split_data.py" script to automatically split the dataset into a training set (train) and a validation set (val)

├── flower_data
    ├── flower_photos (extracted dataset folder, 3670 samples)
    ├── train (generated training set, 3306 samples)
    └── val (generated validation set, 364 samples)
"""

def mk_file(file_path: str):
    if os.path.exists(file_path):
        # if the folder already exists, delete it first and recreate it
        rmtree(file_path)
    os.makedirs(file_path)

def main():
    # make the random split reproducible
    random.seed(0)

    # put 10% of the dataset into the validation set
    split_rate = 0.1

    # point to your extracted flower_photos folder
    cwd = os.getcwd()
    data_root = os.path.join(cwd, "flower_data")
    origin_flower_path = os.path.join(data_root, "flower_photos")
    assert os.path.exists(origin_flower_path)
    flower_class = [cla for cla in os.listdir(origin_flower_path)
                    if os.path.isdir(os.path.join(origin_flower_path, cla))]

    # create the folder for the training set
    train_root = os.path.join(data_root, "train")
    mk_file(train_root)
    for cla in flower_class:
        # create a folder for each class
        mk_file(os.path.join(train_root, cla))

    # create the folder for the validation set
    val_root = os.path.join(data_root, "val")
    mk_file(val_root)
    for cla in flower_class:
        # create a folder for each class
        mk_file(os.path.join(val_root, cla))

    for cla in flower_class:
        cla_path = os.path.join(origin_flower_path, cla)
        images = os.listdir(cla_path)
        num = len(images)
        # randomly sample the validation-set images
        eval_index = random.sample(images, k=int(num * split_rate))
        for index, image in enumerate(images):
            if image in eval_index:
                # copy files assigned to the validation set to the corresponding directory
                image_path = os.path.join(cla_path, image)
                new_path = os.path.join(val_root, cla)
                copy(image_path, new_path)
            else:
                # copy files assigned to the training set to the corresponding directory
                image_path = os.path.join(cla_path, image)
                new_path = os.path.join(train_root, cla)
                copy(image_path, new_path)
            print("\r[{}] processing [{}/{}]".format(cla, index + 1, num), end="")  # processing bar
        print()

    print("processing done!")

if __name__ == '__main__':
    main()
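The core of the split is random.sample with a fixed seed: sampling without replacement guarantees the validation and training sets are disjoint, and the seed makes the split identical on every run. A reduced illustration with made-up file names:

```python
import random

random.seed(0)                  # fixed seed -> the same split every run
images = ["img_{}.jpg".format(i) for i in range(30)]
eval_index = random.sample(images, k=int(len(images) * 0.1))  # 10% for validation, without replacement
train = [img for img in images if img not in eval_index]
print(len(eval_index), len(train))  # 3 27
```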

Output:
