

Using a ResNet to Classify Face Images by Gender (Dataset Preparation + Training + Testing)

Published 2023/12/20 · pytorch · 豆豆

Table of Contents

  • Object detection + classification dataset collection: [https://blog.csdn.net/DeepLearning_/article/details/127276492?spm=1001.2014.3001.5502](https://blog.csdn.net/DeepLearning_/article/details/127276492?spm=1001.2014.3001.5502)
  • Preface
  • I. Data preprocessing
    • 1. Class-by-folder data layout
    • 2. Generating train.txt and val.txt
  • II. Editing the config file
    • 1. Custom settings
  • III. Defining the resnet network
  • IV. Training with train.py
  • V. Prediction with predict.py
  • VI. Prediction results
  • VII. Full project code + dataset (over 1500 images)
  • Summary



Preface

I had planned to write this post yesterday but pushed it back to tonight. The model was actually finished training this morning: 100 epochs, with a final accuracy of about 95%. Since the desktop I used has no GPU installed, the dataset contains only 340 images in total, distributed as follows.
[Training set] Female: 150 images; Male: 150 images
[Validation set] Female: 20 images; Male: 20 images
Dataset preview

Female samples

Male samples


Note: the main content of the post follows; the examples below are for reference.

I. Data preprocessing

1. Class-by-folder data layout

Unlike object detection data, classification data needs no per-image annotation; the only thing required is to put images of the same class into the same folder. For example, create a folder named "0" holding all 150 female training images and a folder named "1" holding all 150 male training images, and lay out the validation set the same way. My data is arranged this way, with the whole dataset stored in the gender_data folder.
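The original post showed the folder layout as a screenshot. As a stand-in, the following short sketch builds the same skeleton (`gender_data/{train,val}/{0,1}`) in a temporary directory; only the folder names come from the text, the use of a temp directory is illustrative:

```python
import os
import tempfile

# Build the class-per-folder layout described above:
# gender_data/{train,val}/{0,1}, where "0" holds female
# images and "1" holds male images.
root = os.path.join(tempfile.mkdtemp(), "gender_data")
for split in ("train", "val"):
    for cls in ("0", "1"):
        os.makedirs(os.path.join(root, split, cls))

# Verify the skeleton exists
for split in ("train", "val"):
    for cls in ("0", "1"):
        assert os.path.isdir(os.path.join(root, split, cls))
print("layout ok")
```

In the real project the images themselves are then copied into the "0" and "1" folders by hand.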

2. Generating train.txt and val.txt

Once the images are in place, a small script generates, for the training set and the validation set separately, a list of image paths together with their labels (0 or 1). This step is essential: at training time, the images are located by reading the paths in these two txt files, and the matching class labels are fed to the network alongside them.
The script is named Build_all_classes_path_to_txt.py.
**Note:** it must be run twice, once to create train.txt and once to create val.txt; remember to change the paths between runs.

```python
import os


def listfiles(rootDir, txtfile, foldnam=''):
    """Append every file under rootDir to txtfile as 'path label' lines."""
    ftxtfile = open(txtfile, 'a')
    for root, dirs, files in os.walk(rootDir):
        for f in files:
            ftxtfile.write(os.path.join(root, f) + ' ' + foldnam + '\n')
    ftxtfile.close()


def GetFileFromThisRootDir(dir):
    """Collect every class folder under dir and write its files, labelled
    with the folder name ('0' or '1'), into the txt file."""
    allfolder = []
    for root, dirs, files in os.walk(dir):
        allfolder.append(root)
    # skip allfolder[0] (the root itself); each remaining entry is a class folder
    for folder in allfolder[1:]:
        folder_name = folder.split('/')
        print(folder_name)
        listfiles(folder, txtfile_path, folder_name[-1])


if __name__ == '__main__':
    # val and train folder -- run once for val, once for train
    folder_path = 'F:/Study_code/classification-pytorch/Classification-MaleFemale-pytorch/gender_data/val/'
    txtfile_path = 'F:/Study_code/classification-pytorch/Classification-MaleFemale-pytorch/gender_data/val.txt'
    GetFileFromThisRootDir(folder_path)
```

Each line of the generated .txt file holds an image path followed by its class label.
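The project's actual loading lives in its `data/dataset.py`, which is not shown in this post; as an illustration, a minimal sketch of how such `path label` lines can be parsed back into (path, label) pairs:

```python
def parse_list_file(lines):
    """Split 'path label' lines into (path, int_label) pairs.

    Paths must not contain spaces, since the label is taken from the
    last space-separated token -- the same convention the generator
    script above uses.
    """
    samples = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        path, label = line.rsplit(' ', 1)
        samples.append((path, int(label)))
    return samples


lines = [
    "gender_data/train/0/img_001.jpg 0",
    "gender_data/train/1/img_042.jpg 1",
]
print(parse_list_file(lines))
# [('gender_data/train/0/img_001.jpg', 0), ('gender_data/train/1/img_042.jpg', 1)]
```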

II. Editing the config file

1. Custom settings

Quite a lot can be customized here: the choice of loss, the gradient-descent method, the learning rate, the decay rate, and so on.

The code is as follows (example):

```python
class Config(object):
    num_classes = 2
    loss = 'softmax'  # or 'focal_loss'
    test_root = 'gender_data/'
    test_list = 'gender_data/val.txt'
    train_batch_size = 16  # batch size
    train_root = 'gender_data/'
    train_list = 'gender_data/train.txt'
    finetune = False
    load_model_path = 'checkpoints/model-epoch-1.pth'
    save_interval = 1
    input_shape = (3, 112, 112)
    optimizer = 'sgd'      # optimizer should be sgd or adam
    num_workers = 4        # how many workers for loading data
    print_freq = 10        # print info every N batches
    milestones = [60, 100] # adjust lr
    lr = 0.1               # initial learning rate
    max_epoch = 100        # max epoch
    lr_decay = 0.95        # when val_loss increases, lr = lr * lr_decay
    weight_decay = 5e-4
```

III. Defining the resnet network

PyTorch's model zoo already ships classic ResNet implementations, but I still built my own resnet.py by following open-source reference code; it can be used directly. This training run uses resnet18.
The code is as follows (example):

"""resnet in pytorch[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.Deep Residual Learning for Image Recognitionhttps://arxiv.org/abs/1512.03385v1 """import torch import torch.nn as nnclass Flatten(nn.Module):def forward(self, input):#print(input.view(input.size(0), -1).shape)return input.view(input.size(0), -1)class BasicBlock(nn.Module):"""Basic Block for resnet 18 and resnet 34"""expansion = 1def __init__(self, in_channels, out_channels, stride=1):super().__init__()#residual functionself.residual_function = nn.Sequential(nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),nn.BatchNorm2d(out_channels),nn.ReLU(inplace=True),nn.Conv2d(out_channels, out_channels * BasicBlock.expansion, kernel_size=3, padding=1, bias=False),nn.BatchNorm2d(out_channels * BasicBlock.expansion))#shortcutself.shortcut = nn.Sequential()#the shortcut output dimension is not the same with residual function#use 1*1 convolution to match the dimensionif stride != 1 or in_channels != BasicBlock.expansion * out_channels:self.shortcut = nn.Sequential(nn.Conv2d(in_channels, out_channels * BasicBlock.expansion, kernel_size=1, stride=stride, bias=False),nn.BatchNorm2d(out_channels * BasicBlock.expansion))def forward(self, x):return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))class BottleNeck(nn.Module):"""Residual block for resnet over 50 layers"""expansion = 4def __init__(self, in_channels, out_channels, stride=1):super().__init__()self.residual_function = nn.Sequential(nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),nn.BatchNorm2d(out_channels),nn.ReLU(inplace=True),nn.Conv2d(out_channels, out_channels, stride=stride, kernel_size=3, padding=1, bias=False),nn.BatchNorm2d(out_channels),nn.ReLU(inplace=True),nn.Conv2d(out_channels, out_channels * BottleNeck.expansion, kernel_size=1, bias=False),nn.BatchNorm2d(out_channels * BottleNeck.expansion),)self.shortcut = nn.Sequential()if stride != 1 or in_channels != 
out_channels * BottleNeck.expansion:self.shortcut = nn.Sequential(nn.Conv2d(in_channels, out_channels * BottleNeck.expansion, stride=stride, kernel_size=1, bias=False),nn.BatchNorm2d(out_channels * BottleNeck.expansion))def forward(self, x):return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))class ResNet(nn.Module):def __init__(self, block, num_block, scale=0.25, num_classes=2):super().__init__()self.in_channels = int(64 * scale)self.conv1 = nn.Sequential(nn.Conv2d(3, int(64 * scale), kernel_size=3, padding=1, bias=False),nn.BatchNorm2d(int(64 * scale)),nn.ReLU(inplace=True))#we use a different inputsize than the original paper#so conv2_x's stride is 1self.conv2_x = self._make_layer(block, int( 64 * scale), num_block[0], 2)self.conv3_x = self._make_layer(block, int(128 * scale), num_block[1], 2)self.conv4_x = self._make_layer(block, int(256 * scale), num_block[2], 2)self.conv5_x = self._make_layer(block, int(512 * scale), num_block[3], 2)self.output = nn.Sequential(nn.Conv2d(int(512*scale), int(512*scale), kernel_size=(7, 7), stride=1, groups=int(512*scale), bias=False),nn.BatchNorm2d(int(512*scale)),Flatten(),#nn.Linear(int(32768 * scale), num_classes)nn.Linear(int(512 * scale), num_classes))def _make_layer(self, block, out_channels, num_blocks, stride):"""make resnet layers(by layer i didnt mean this 'layer' was the same as a neuron netowork layer, ex. 
conv layer), one layer may contain more than one residual block Args:block: block type, basic block or bottle neck blockout_channels: output depth channel number of this layernum_blocks: how many blocks per layerstride: the stride of the first block of this layerReturn:return a resnet layer"""# we have num_block blocks per layer, the first block # could be 1 or 2, other blocks would always be 1strides = [stride] + [1] * (num_blocks - 1)layers = []for stride in strides:layers.append(block(self.in_channels, out_channels, stride))self.in_channels = out_channels * block.expansionreturn nn.Sequential(*layers)def forward(self, x):output = self.conv1(x)output = self.conv2_x(output)output = self.conv3_x(output)output = self.conv4_x(output)output = self.conv5_x(output)output = self.output(output)return output def resnet18():""" return a ResNet 18 object"""return ResNet(BasicBlock, [2, 2, 2, 2])def resnet34():""" return a ResNet 34 object"""return ResNet(BasicBlock, [3, 4, 6, 3])def resnet50():""" return a ResNet 50 object"""return ResNet(BottleNeck, [3, 4, 6, 3])def resnet101():""" return a ResNet 101 object"""return ResNet(BottleNeck, [3, 4, 23, 3])def resnet152():""" return a ResNet 152 object"""return ResNet(BottleNeck, [3, 8, 36, 3])from thop import profile from thop import clever_format if __name__=='__main__':input = torch.Tensor(1, 3, 112, 112)model = resnet18()#print(model)flops, params = profile(model, inputs=(input, ))flops, params = clever_format([flops, params], "%.3f")#print(model)print('VoVNet Flops:', flops, ',Params:' ,params)
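Two details of this network are worth a quick sanity check. With `scale=0.25` the channel widths shrink to a quarter of a standard ResNet, and the four stride-2 stages reduce a 112x112 input to 7x7, which is exactly the spatial size the 7x7 depthwise convolution in the output head expects. Plain Python, no torch required:

```python
# channel widths of conv2_x .. conv5_x at scale=0.25
scale = 0.25
widths = [int(c * scale) for c in (64, 128, 256, 512)]
print(widths)  # [16, 32, 64, 128]

# conv2_x .. conv5_x each halve the spatial size (first block has stride 2)
size = 112
for _ in range(4):
    size = size // 2
print(size)  # 7
```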

IV. Training with train.py

The training code and its structure follow a standard pattern and are easy to understand; the key points are how the data is loaded and which preprocessing transforms are applied.
The code is as follows (example), for reference only:

```python
import time

import numpy as np
import torch
from torch.utils import data

from models.resnet import *
from data.dataset import Dataset
from config.config import Config
from loss.focal_loss import FocalLoss
from utils.cosine_lr_scheduler import CosineDecayLR


def train(model, criterion, optimizer, scheduler, trainloader, epoch):
    model.train()
    for ii, batch in enumerate(trainloader):
        start = time.time()
        iters = epoch * len(trainloader) + ii
        scheduler.step(iters + 1)
        data_input, label = batch
        data_input = data_input.to(device)
        label = label.to(device).long()
        output = model(data_input)
        loss = criterion(output, label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if iters % opt.print_freq == 0:
            output = output.data.cpu().numpy()
            output = np.argmax(output, axis=1)
            label = label.data.cpu().numpy()
            acc = np.mean((output == label).astype(int))
            speed = opt.print_freq / (time.time() - start)
            time_str = time.asctime(time.localtime(time.time()))
            print(time_str, 'epoch', epoch, 'iters', iters, 'speed', speed,
                  'lr', optimizer.param_groups[0]['lr'],
                  'loss', loss.cpu().detach().numpy(), 'acc', acc)


def eval_train(model, criterion, testloader):
    model.eval()
    test_loss = 0.0  # cost function error
    correct = 0.0
    with torch.no_grad():
        for datas, labels in testloader:
            datas = datas.to(device)
            labels = labels.to(device).long()
            outputs = model(datas)
            loss = criterion(outputs, labels)
            test_loss += loss.item()
            _, preds = outputs.max(1)
            correct += preds.eq(labels).sum()
    print('Test set: Average loss: {:.4f}, Accuracy: {:.4f}'.format(
        test_loss / len(testloader), correct.float() / len(testloader)))


if __name__ == '__main__':
    opt = Config()
    # no GPU on this machine; switch to "cuda" if one is available
    device = torch.device("cpu")

    test_dataset = Dataset(opt.test_root, opt.test_list, phase='test', input_shape=opt.input_shape)
    # batch_size defaults to 1, so len(testloader) equals the sample count
    testloader = data.DataLoader(test_dataset, shuffle=False, pin_memory=True,
                                 num_workers=opt.num_workers)
    train_dataset = Dataset(opt.train_root, opt.train_list, phase='train', input_shape=opt.input_shape)
    trainloader = data.DataLoader(train_dataset, batch_size=opt.train_batch_size,
                                  shuffle=True, pin_memory=True, num_workers=opt.num_workers)

    if opt.loss == 'focal_loss':
        criterion = FocalLoss(gamma=2)
    else:
        criterion = torch.nn.CrossEntropyLoss()

    model = resnet18()
    if opt.finetune:
        model.load_state_dict(torch.load(opt.load_model_path))
    model = torch.nn.DataParallel(model)
    model.to(device)

    total_batch = len(trainloader)
    NUM_BATCH_WARM_UP = total_batch * 5
    optimizer = torch.optim.SGD(model.parameters(), lr=opt.lr, weight_decay=opt.weight_decay)
    scheduler = CosineDecayLR(optimizer, opt.max_epoch * total_batch, opt.lr, 1e-6, NUM_BATCH_WARM_UP)

    print('{} train iters per epoch in dataset'.format(len(trainloader)))
    for epoch in range(opt.max_epoch):
        train(model, criterion, optimizer, scheduler, trainloader, epoch)
        if epoch % opt.save_interval == 0 or epoch == (opt.max_epoch - 1):
            torch.save(model.module.state_dict(), 'checkpoints/model-epoch-' + str(epoch) + '.pth')
        eval_train(model, criterion, testloader)
```


The training log is printed as follows; the final validation accuracy is quite good:

V. Prediction with predict.py

The code is as follows (example), for reference only:

```python
import torch
from torchvision import transforms
import matplotlib.pyplot as plt  # used to display images
from PIL import Image, ImageDraw, ImageFont
import cv2
import numpy as np

from models.resnet import *
from config.config import Config


def show_infer_result(result):
    font = ImageFont.truetype('data/font/HuaWenXinWei-1.ttf', 50)
    plt.rcParams['font.sans-serif'] = ['SimHei']  # avoid garbled CJK text
    # subplot 1: the test image
    plt.subplot(121)
    plt.imshow(image)
    plt.title('Test image')
    plt.axis('off')
    # subplot 2: the image with the predicted label drawn on it
    plt.subplot(122)
    img2_2 = cv2.imread('./test2.jpg')
    cv2img = cv2.cvtColor(img2_2, cv2.COLOR_BGR2RGB)
    img_PIL = Image.fromarray(cv2img)
    draw = ImageDraw.Draw(img_PIL)
    label = 'Female' if result == 0 else 'Male'
    draw.text((170, 150), label, fill=(255, 0, 255), font=font, align='center')
    cheng = cv2.cvtColor(np.array(img_PIL), cv2.COLOR_RGB2BGR)
    plt.imshow(cheng)
    plt.title('Prediction')
    plt.axis('off')
    # default spacing between subplots
    plt.tight_layout()
    plt.show()


def model_infer(img, model_path):
    data_transforms = transforms.Compose([
        transforms.Resize([112, 112]),
        transforms.ToTensor(),
        transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
    ])
    net = resnet18().eval()  # use resnet18().cuda().eval() if running on GPU
    net.load_state_dict(torch.load(model_path), strict=False)
    imgblob = data_transforms(img).unsqueeze(0).type(torch.FloatTensor).cpu()
    with torch.no_grad():
        output = net(imgblob)
    _, pred = output.max(1)
    predict_result = pred.numpy()
    show_infer_result(predict_result)
    return predict_result


if __name__ == "__main__":
    imagepath = './gender_data/val/1/14901.png'
    image = Image.open(imagepath)
    model_path = "./checkpoints/model-epoch-99.pth"
    model_infer(image, model_path)
    print("====infer over!")
```
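The prediction above only takes the argmax of the two logits; if a confidence score is wanted as well, a softmax over the logits provides one. A minimal sketch in plain Python, with made-up logit values for illustration:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# hypothetical two-class output: [female_logit, male_logit]
logits = [0.3, 2.1]
probs = softmax(logits)
pred = max(range(len(probs)), key=probs.__getitem__)
print(pred)  # 1 (male), since the second logit is larger
```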

VI. Prediction results

Female image test

Male image test

VII. Full project code + dataset (over 1500 images)

Source code (training and prediction) + model + dataset download: https://download.csdn.net/download/DeepLearning_/87190601
If you found this useful, a like, bookmark, and follow would be appreciated.
For a quick guide to building and training a neural network, see my other post: Five steps to building and training a neural network with PyTorch


Summary

This post used a ResNet with the PyTorch deep learning framework to train and run a male/female gender classification model, including how the classification dataset was prepared. Part of the project code is published here for reference and study; follow-up posts will add comparison experiments along with their code and models. Stay tuned!
