

BelgiumTS Traffic Dataset Classification (PyTorch)

Published: 2025/3/20

Dataset download: https://www.lanzouw.com/b01i4vc4b (password: 6y8b)

Note: the archive is about 494 MB. Baidu Cloud is too slow to be worth using, and Lanzou Cloud only accepts files under 100 MB, so the training set was split into two archives. After downloading, merge Training(0~30) and Training(31~61) into a single folder.
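If you prefer to merge the two extracted archives programmatically rather than by hand, a small stdlib sketch like the following should work. The folder names in the commented call mirror the archive names above and are assumptions; adjust the paths to your own layout:

```python
import shutil
from pathlib import Path

def merge_class_folders(src_dirs, dst_dir):
    """Copy every class sub-folder from each split archive into one
    combined Training directory, preserving the class-folder names."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for src in map(Path, src_dirs):
        for class_dir in src.iterdir():
            if class_dir.is_dir():
                target = dst / class_dir.name
                target.mkdir(exist_ok=True)
                for img in class_dir.iterdir():
                    shutil.copy2(img, target / img.name)

# Hypothetical paths -- point these at wherever you extracted the archives:
# merge_class_folders(['Training(0~30)', 'Training(31~61)'], './data/TSC/Training')
```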

The dataset has 62 classes and is well suited to a classification task. The amount of data is modest, so I augment the training set with rotations and flips, yielding 18,300 images.

Classes: 62; both the training and test sets are labeled.

Training set: 4,575 images; the code below expands this to 18,300 images through rotation and flipping augmentation.

Test set: 2,520 images.


The complete code is at the bottom of the post.

First, the data-processing and training code.

Define the augmentation transforms. If you do not want to augment the data, use only data_transforms:

```python
# Transforms used for image augmentation
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch import nn
import torch
import torch.nn.functional as F

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Data augmentation for training and test time.
# Resize all images to 32 x 32 and normalize them with the per-channel
# mean and standard deviation collected from the training set.
data_transforms = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
    transforms.Normalize((0.3337, 0.3064, 0.3171), (0.2672, 0.2564, 0.2629))
])

# Resize, normalize and rotate the image
data_rotate = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize((0.3337, 0.3064, 0.3171), (0.2672, 0.2564, 0.2629))
])

# Resize, normalize and flip the image horizontally and vertically
data_hvflip = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.RandomHorizontalFlip(1),
    transforms.RandomVerticalFlip(1),
    transforms.ToTensor(),
    transforms.Normalize((0.3337, 0.3064, 0.3171), (0.2672, 0.2564, 0.2629))
])

# Resize, normalize and shear the image
data_shear = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.RandomAffine(degrees=15, shear=2),
    transforms.ToTensor(),
    transforms.Normalize((0.3337, 0.3064, 0.3171), (0.2672, 0.2564, 0.2629))
])
```
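The normalization constants above are the per-channel mean and std of the training images. If you want to recompute them for your own copy of the data, a sketch like this works on any tensor of stacked images (the `compute_mean_std` helper is mine, not part of the original post; a random batch stands in for real images):

```python
import torch

def compute_mean_std(images):
    """Per-channel mean and std of a float tensor shaped (N, C, H, W)."""
    mean = images.mean(dim=(0, 2, 3))
    std = images.std(dim=(0, 2, 3))
    return mean, std

# In practice you would stack the ToTensor() outputs of the training images;
# here a random batch stands in for them:
batch = torch.rand(8, 3, 32, 32)
mean, std = compute_mean_std(batch)
print(mean.shape, std.shape)  # torch.Size([3]) torch.Size([3])
```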

Load the datasets. When building train_dataset I concatenate the training folder under the base transform plus the three augmentation transforms above; you can also use only the first, or any subset of them.

```python
train_dir = './data/TSC/Training/'
test_dir = './data/TSC/Testing/'

# Data augmentation: concatenate the base, rotated, flipped and sheared views
train_dataset = torch.utils.data.ConcatDataset([
    datasets.ImageFolder(train_dir, transform=data_transforms),
    datasets.ImageFolder(train_dir, transform=data_rotate),
    datasets.ImageFolder(train_dir, transform=data_hvflip),
    datasets.ImageFolder(train_dir, transform=data_shear)
])
test_dataset = datasets.ImageFolder(test_dir, transform=data_transforms)

train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64,
                                           shuffle=True, num_workers=1)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=64,
                                          shuffle=True, num_workers=1)
```
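ConcatDataset simply chains the four views end to end, which is where 4 x 4,575 = 18,300 comes from. A tiny stand-in (TensorDatasets instead of the real ImageFolders) shows the arithmetic:

```python
import torch
from torch.utils.data import ConcatDataset, TensorDataset

# Five dummy 32x32 RGB "images" with dummy labels, used four times over
base = TensorDataset(torch.zeros(5, 3, 32, 32), torch.zeros(5, dtype=torch.long))
combined = ConcatDataset([base, base, base, base])
print(len(combined))  # 20, i.e. 4 copies of the 5-sample stand-in
```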

Define the model:

```python
class CNN_TSC(nn.Module):
    def __init__(self):
        super(CNN_TSC, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 62)  # 62 traffic-sign classes

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # 32x32 -> 28x28 -> 14x14
        x = self.pool(F.relu(self.conv2(x)))   # 14x14 -> 10x10 -> 5x5
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        output = self.fc3(x)
        return output

net = CNN_TSC().to(device)
# Define the loss function and optimizer
loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
```
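A quick way to check the 16 * 5 * 5 flattening arithmetic (32 shrinks to 28 after the 5x5 conv, 14 after pooling, then 10, then 5) is to push a dummy batch through the same conv/pool stack on its own:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

conv1 = nn.Conv2d(3, 6, 5)
conv2 = nn.Conv2d(6, 16, 5)
pool = nn.MaxPool2d(2, 2)

x = torch.randn(1, 3, 32, 32)        # one dummy 32x32 RGB image
x = pool(F.relu(conv1(x)))           # -> (1, 6, 14, 14)
x = pool(F.relu(conv2(x)))           # -> (1, 16, 5, 5)
print(x.shape)                       # torch.Size([1, 16, 5, 5])
print(x.view(-1, 16 * 5 * 5).shape)  # torch.Size([1, 400])
```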

Training and evaluation:

```python
# Training
def train():
    net.train()
    for data, label in train_loader:
        data, label = data.to(device), label.to(device)
        optimizer.zero_grad()
        output = net(data)
        l = loss(output, label)
        l.backward()
        optimizer.step()

# Testing
def test(epoch):
    net.eval()
    batch_loss, correct, total = 0, 0, 0
    with torch.no_grad():  # no gradients needed during evaluation
        for data, label in test_loader:
            data, label = data.to(device), label.to(device)
            output = net(data)
            batch_loss += loss(output, label).item()
            predict_label = torch.argmax(output, dim=1)
            correct += torch.sum(predict_label == label).item()
            total += len(label)
    print('epoch:%d loss: %.4f accuracy:%.2f%%'
          % (epoch, batch_loss / len(test_loader), 100 * correct / total))

print('training on %s' % device)
for epoch in range(10):
    train()
    test(epoch)
```
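The post does not save the trained weights; if you want to keep them between runs, the usual state_dict pattern applies (the filename here is my choice, and a tiny Linear module stands in for CNN_TSC so the sketch runs on its own):

```python
import os
import tempfile
import torch
from torch import nn

model = nn.Linear(4, 2)                      # stand-in for the trained CNN_TSC
path = os.path.join(tempfile.gettempdir(), 'cnn_tsc.pth')
torch.save(model.state_dict(), path)         # save the weights

restored = nn.Linear(4, 2)                   # same architecture, fresh weights
restored.load_state_dict(torch.load(path))   # load the saved weights back
print(torch.equal(model.weight, restored.weight))  # True
```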

Experimental results:

The complete code:

```python
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch import nn
import torch
import torch.nn.functional as F

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Data augmentation for training and test time.
# Resize all images to 32 x 32 and normalize them with the per-channel
# mean and standard deviation collected from the training set.
data_transforms = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
    transforms.Normalize((0.3337, 0.3064, 0.3171), (0.2672, 0.2564, 0.2629))
])

# Resize, normalize and rotate the image
data_rotate = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize((0.3337, 0.3064, 0.3171), (0.2672, 0.2564, 0.2629))
])

# Resize, normalize and flip the image horizontally and vertically
data_hvflip = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.RandomHorizontalFlip(1),
    transforms.RandomVerticalFlip(1),
    transforms.ToTensor(),
    transforms.Normalize((0.3337, 0.3064, 0.3171), (0.2672, 0.2564, 0.2629))
])

# Resize, normalize and shear the image
data_shear = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.RandomAffine(degrees=15, shear=2),
    transforms.ToTensor(),
    transforms.Normalize((0.3337, 0.3064, 0.3171), (0.2672, 0.2564, 0.2629))
])

# Change these directories to point at your copy of the dataset
train_dir = './data/TSC/Training/'
test_dir = './data/TSC/Testing/'

# Data augmentation: concatenate the base, rotated, flipped and sheared views
train_dataset = torch.utils.data.ConcatDataset([
    datasets.ImageFolder(train_dir, transform=data_transforms),
    datasets.ImageFolder(train_dir, transform=data_rotate),
    datasets.ImageFolder(train_dir, transform=data_hvflip),
    datasets.ImageFolder(train_dir, transform=data_shear)
])
test_dataset = datasets.ImageFolder(test_dir, transform=data_transforms)

train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64,
                                           shuffle=True, num_workers=1)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=64,
                                          shuffle=True, num_workers=1)

class CNN_TSC(nn.Module):
    def __init__(self):
        super(CNN_TSC, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 62)  # 62 traffic-sign classes

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        output = self.fc3(x)
        return output

net = CNN_TSC().to(device)
# Define the loss function and optimizer
loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)

# Training
def train():
    net.train()
    for data, label in train_loader:
        data, label = data.to(device), label.to(device)
        optimizer.zero_grad()
        output = net(data)
        l = loss(output, label)
        l.backward()
        optimizer.step()

# Testing
def test(epoch):
    net.eval()
    batch_loss, correct, total = 0, 0, 0
    with torch.no_grad():
        for data, label in test_loader:
            data, label = data.to(device), label.to(device)
            output = net(data)
            batch_loss += loss(output, label).item()
            predict_label = torch.argmax(output, dim=1)
            correct += torch.sum(predict_label == label).item()
            total += len(label)
    print('epoch:%d loss: %.4f accuracy:%.2f%%'
          % (epoch, batch_loss / len(test_loader), 100 * correct / total))

print('training on %s' % device)
for epoch in range(10):
    train()
    test(epoch)
```
