[Tianchi Competition] Getting Started with Semantic Segmentation: Building Detection in Aerial Images, Task 1: Understanding the Problem and Baseline
[Tianchi Competition] Getting Started with Semantic Segmentation: Building Detection in Aerial Images

- Task 1: Understanding the Problem and Baseline (3 days)
  – Topic: understand the problem and the solution workflow
  – Content: problem analysis, data loading, building a competition baseline
  – Outcome: a submitted competition baseline
- Task 2: Data Augmentation (3 days)
  – Topic: data augmentation methods for semantic segmentation
  – Content: the details and usage of augmentation methods in segmentation tasks
  – Outcome: hands-on practice with data augmentation
- Task 3: Evolution of Network Architectures (3 days)
  – Topic: the development of semantic segmentation models
  – Content: FCN, Unet, DeepLab, SegNet, PSPNet
  – Outcome: building several network architectures
- Task 4: Evaluation Metrics and Loss Functions (3 days)
  – Topic: evaluation metrics and loss functions for semantic segmentation
  – Content: Dice, IoU, BCE, Focal Loss, Lovász-Softmax
  – Outcome: hands-on practice with metrics and losses
- Task 5: Model Training and Validation (3 days)
  – Topic: data splitting strategies
  – Content: three data splitting strategies; the model tuning process
  – Outcome: performing data splits in practice
- Task 6: Model Ensembling for Segmentation (3 days)
  – Topic: ensembling methods for semantic segmentation models
  – Content: Lookahead, Snapshot, SWA, TTA
  – Outcome: approaches to model ensembling
Task 1: Understanding the Problem and Baseline
- 1 Learning Objectives
- 2 Competition Data
- 3 Data Labels
- 4 Evaluation Metric
- 5 Reading the Data
- 6 Solution Approach
- 7 Chapter Summary
- 8 Homework
- 9 Baseline
This chapter explains the background of the competition, describes how to read the competition data, and outlines a solution approach.
- Competition name: Getting Started with Semantic Segmentation: Building Detection in Aerial Images
- Goal: through this competition, participants gain a working command of the semantic segmentation task: its definition, the concrete solution workflow, the relevant models, and how the field has developed.
- Task: a computer vision challenge in which participants train a model on the provided aerial images to detect buildings on the ground.
1 Learning Objectives
- Understand the competition background and data
- Register for the competition, download the data, and understand the solution approach
2 Competition Data
Remote sensing has become one of the most effective ways to obtain land cover information, and it has been applied successfully to land cover detection, vegetation monitoring, and building detection. This competition uses aerial imagery: participants must detect buildings by classifying every pixel of an aerial image into two classes, building and non-building.
In the figure below, the left image is the original aerial photo and the right image is the corresponding building mask.
The data comes from the Inria Aerial Image Labeling dataset, split and preprocessed for this competition, and becomes visible and downloadable after registration. Given an aerial image, participants must identify the exact pixels occupied by buildings.
3 Data Labels
Since this is a semantic segmentation task, the labels are per-pixel classes. Every pixel belongs to one of 2 classes (building or non-building), and the label marks the building pixels. The original images are JPEG files; the labels are RLE-encoded strings.
RLE (run-length encoding) is a simple lossless compression scheme that encodes runs of consecutive identical (black or white) pixels as codewords; it is widely used to encode label masks in segmentation competitions. Here each run is stored as a (start, length) pair: for example, the flattened binary mask 0 0 1 1 1 0 is encoded as the string "3 3", a run of ones starting at pixel 3 (1-indexed) with length 3.
The conversion between RLE strings and images works as follows:
```python
# Code for converting between RLE strings and binary masks:
import numpy as np
import pandas as pd
import cv2

# Encode a binary mask image as an RLE string
def rle_encode(im):
    '''
    im: numpy array, 1 - mask, 0 - background
    Returns run length as string formatted
    '''
    pixels = im.flatten(order='F')
    pixels = np.concatenate([[0], pixels, [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
    runs[1::2] -= runs[::2]
    return ' '.join(str(x) for x in runs)

# Decode an RLE string back into a binary mask image
def rle_decode(mask_rle, shape=(512, 512)):
    '''
    mask_rle: run-length as string formatted (start length)
    shape: (height, width) of array to return
    Returns numpy array, 1 - mask, 0 - background
    '''
    s = mask_rle.split()
    starts, lengths = [np.asarray(x, dtype=int) for x in (s[0:][::2], s[1:][::2])]
    starts -= 1
    ends = starts + lengths
    img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    for lo, hi in zip(starts, ends):
        img[lo:hi] = 1
    return img.reshape(shape, order='F')
```
4 Evaluation Metric
The competition uses the Dice coefficient to measure how well a submission matches the ground-truth labels; it compares predictions against labels pixel by pixel. The Dice coefficient is defined as:

$$\frac{2\,|X \cap Y|}{|X| + |Y|}$$

where X is the set of predicted building pixels and Y is the set of ground-truth building pixels. When X and Y are identical, the Dice coefficient equals 1. The leaderboard reports the mean Dice coefficient over all test images; higher is better.
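As a quick sanity check of the metric, here is a minimal NumPy sketch of the per-image Dice computation; the helper name `dice_coefficient` and the `eps` guard are our additions, not part of the competition kit:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks (0/1 numpy arrays)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2*|X ∩ Y| / (|X| + |Y|); eps avoids division by zero for two empty masks
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# A mask compared with itself scores (almost exactly) 1.0
mask = np.array([[0, 1], [1, 1]], dtype=np.uint8)
print(dice_coefficient(mask, mask))
```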
5 Reading the Data
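This section has no code of its own in the original write-up; the sketch below, distilled from the data-loading steps in the Baseline of section 9, shows how to read the label file and decode one training mask. It assumes the data has been unpacked to `data/train_mask.csv` and `data/train/`, and reuses the `rle_decode` helper from section 3:

```python
import cv2
import pandas as pd

# train_mask.csv is tab-separated: image file name, RLE-encoded mask
train_mask = pd.read_csv('data/train_mask.csv', sep='\t', names=['name', 'mask'])
train_mask['name'] = train_mask['name'].apply(lambda x: 'data/train/' + x)

# Read the first image and decode its mask (an empty RLE string means no buildings)
img = cv2.imread(train_mask['name'].iloc[0])
mask = rle_decode(train_mask['mask'].fillna('').iloc[0])
print(img.shape, mask.shape, mask.sum())
```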
6 Solution Approach
Since this is a typical semantic segmentation task, segmentation models can be applied directly:
- Step 1: get a complete training run working with an FCN model, then predict on the test set and submit (see the model-setup sketch after this list);
- Step 2: add data augmentation, and split off a validation set to monitor model accuracy;
- Step 3: train a stronger architecture (such as Unet or PSPNet) or use a larger input size;
- Step 4: train several models and ensemble them.
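For step 1, here is a minimal model-setup sketch that mirrors the commented-out FCN variant in the Baseline below: a pretrained torchvision FCN-ResNet50 whose classification head is swapped for a single-channel 1x1 convolution, since the task is a binary building/background mask:

```python
import torch.nn as nn
import torchvision

def get_fcn_model():
    # FCN with a ResNet-50 backbone, with pretrained weights
    model = torchvision.models.segmentation.fcn_resnet50(pretrained=True)
    # Replace the 21-class head with a single-channel head
    # for the binary building/background mask
    model.classifier[4] = nn.Conv2d(512, 1, kernel_size=(1, 1), stride=(1, 1))
    return model
```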
7 Chapter Summary
This chapter explained the competition background and task, described how to read the competition data and labels, and listed a solution approach.
8 Homework
9 Baseline
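The baseline below is a Colab notebook. It mounts Google Drive and unpacks the data, wraps the images and RLE masks in a PyTorch `Dataset` with albumentations augmentation, fine-tunes a torchvision `deeplabv3_resnet50` with a combined BCE + soft-Dice loss, and finally writes an RLE-encoded submission file.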
```python
# -*- coding: utf-8 -*-
from google.colab import drive
drive.mount('/content/drive')

# !unzip -n /content/drive/MyDrive/SemanticSegmentation/train.zip -d /content/data
# !unzip -n /content/drive/MyDrive/SemanticSegmentation/test_a.zip -d /content/data
# !unzip -n /content/drive/MyDrive/SemanticSegmentation/train_mask.csv.zip -d /content/data
# !cp /content/drive/MyDrive/SemanticSegmentation/test_a_samplesubmit.csv /content/data
# !pip install rasterio

# Commented out IPython magic to ensure Python compatibility.
import numpy as np
import pandas as pd
import pathlib, sys, os, random, time
import numba, cv2, gc
from tqdm import tqdm_notebook

import matplotlib.pyplot as plt
# %matplotlib inline

import warnings
warnings.filterwarnings('ignore')

from tqdm.notebook import tqdm

import albumentations as A

import rasterio
from rasterio.windows import Window

def rle_encode(im):
    '''
    im: numpy array, 1 - mask, 0 - background
    Returns run length as string formatted
    '''
    pixels = im.flatten(order='F')
    pixels = np.concatenate([[0], pixels, [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
    runs[1::2] -= runs[::2]
    return ' '.join(str(x) for x in runs)

def rle_decode(mask_rle, shape=(512, 512)):
    '''
    mask_rle: run-length as string formatted (start length)
    shape: (height, width) of array to return
    Returns numpy array, 1 - mask, 0 - background
    '''
    s = mask_rle.split()
    starts, lengths = [np.asarray(x, dtype=int) for x in (s[0:][::2], s[1:][::2])]
    starts -= 1
    ends = starts + lengths
    img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    for lo, hi in zip(starts, ends):
        img[lo:hi] = 1
    return img.reshape(shape, order='F')

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as D

import torchvision
from torchvision import transforms as T

EPOCHES = 20
BATCH_SIZE = 32
IMAGE_SIZE = 256
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'

# Spatial augmentations applied jointly to image and mask
trfm = A.Compose([
    A.Resize(IMAGE_SIZE, IMAGE_SIZE),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomRotate90(),
])

class TianChiDataset(D.Dataset):
    def __init__(self, paths, rles, transform, test_mode=False):
        self.paths = paths
        self.rles = rles
        self.transform = transform
        self.test_mode = test_mode
        self.len = len(paths)
        self.as_tensor = T.Compose([
            T.ToPILImage(),
            T.Resize(IMAGE_SIZE),
            T.ToTensor(),
            T.Normalize([0.625, 0.448, 0.688],
                        [0.131, 0.177, 0.101]),
        ])

    # get data operation
    def __getitem__(self, index):
        img = cv2.imread(self.paths[index])
        if not self.test_mode:
            mask = rle_decode(self.rles[index])
            augments = self.transform(image=img, mask=mask)
            return self.as_tensor(augments['image']), augments['mask'][None]
        else:
            return self.as_tensor(img), ''

    def __len__(self):
        """Total number of samples in the dataset"""
        return self.len

train_mask = pd.read_csv('data/train_mask.csv', sep='\t', names=['name', 'mask'])
train_mask['name'] = train_mask['name'].apply(lambda x: 'data/train/' + x)

img = cv2.imread(train_mask['name'].iloc[0])
mask = rle_decode(train_mask['mask'].iloc[0])

# Sanity check: re-encoding the decoded mask should reproduce the original RLE string
print(rle_encode(mask) == train_mask['mask'].iloc[0])

dataset = TianChiDataset(
    train_mask['name'].values,
    train_mask['mask'].fillna('').values,
    trfm, False
)

image, mask = dataset[0]
plt.figure(figsize=(16, 8))
plt.subplot(121)
plt.imshow(mask[0], cmap='gray')
plt.subplot(122)
plt.imshow(image[0]);

valid_idx, train_idx = [], []
for i in range(len(dataset)):
    if i % 7 == 0:
        valid_idx.append(i)
    # else:
    elif i % 7 == 1:  # only 1/7 of the images are used for training here (the commented else would use 6/7)
        train_idx.append(i)

train_ds = D.Subset(dataset, train_idx)
valid_ds = D.Subset(dataset, valid_idx)

# define training and validation data loaders
loader = D.DataLoader(
    train_ds, batch_size=BATCH_SIZE, shuffle=True, num_workers=0)

vloader = D.DataLoader(
    valid_ds, batch_size=BATCH_SIZE, shuffle=False, num_workers=0)

def get_model():
    # model = torchvision.models.segmentation.fcn_resnet50(True)
    # pth = torch.load("../input/pretrain-coco-weights-pytorch/fcn_resnet50_coco-1167a1af.pth")
    # for key in ["aux_classifier.0.weight", "aux_classifier.1.weight", "aux_classifier.1.bias", "aux_classifier.1.running_mean", "aux_classifier.1.running_var", "aux_classifier.1.num_batches_tracked", "aux_classifier.4.weight", "aux_classifier.4.bias"]:
    #     del pth[key]
    # model.classifier[4] = nn.Conv2d(512, 1, kernel_size=(1, 1), stride=(1, 1))
    model = torchvision.models.segmentation.deeplabv3_resnet50(True)
    model.classifier[4] = nn.Conv2d(256, 1, kernel_size=(1, 1), stride=(1, 1))
    return model

@torch.no_grad()
def validation(model, loader, loss_fn):
    losses = []
    model.eval()
    for image, target in loader:
        image, target = image.to(DEVICE), target.float().to(DEVICE)
        output = model(image)['out']
        loss = loss_fn(output, target)
        losses.append(loss.item())
    return np.array(losses).mean()

model = get_model()
model.to(DEVICE);

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-3)

class SoftDiceLoss(nn.Module):
    def __init__(self, smooth=1., dims=(-2, -1)):
        super(SoftDiceLoss, self).__init__()
        self.smooth = smooth
        self.dims = dims

    def forward(self, x, y):
        tp = (x * y).sum(self.dims)
        fp = (x * (1 - y)).sum(self.dims)
        fn = ((1 - x) * y).sum(self.dims)
        dc = (2 * tp + self.smooth) / (2 * tp + fp + fn + self.smooth)
        dc = dc.mean()
        return 1 - dc

bce_fn = nn.BCEWithLogitsLoss()
dice_fn = SoftDiceLoss()

def loss_fn(y_pred, y_true):
    bce = bce_fn(y_pred, y_true)
    dice = dice_fn(y_pred.sigmoid(), y_true)
    return 0.8 * bce + 0.2 * dice

header = r'''
        Train | Valid
Epoch |  Loss |  Loss | Time, m
'''
#          Epoch         metrics            time
raw_line = '{:6d}' + '\u2502{:7.3f}' * 2 + '\u2502{:6.2f}'
print(header)

EPOCHES = 60
best_loss = 10
for epoch in range(1, EPOCHES + 1):
    losses = []
    start_time = time.time()
    model.train()
    for image, target in tqdm_notebook(loader):
        image, target = image.to(DEVICE), target.float().to(DEVICE)
        optimizer.zero_grad()
        output = model(image)['out']
        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
        losses.append(loss.item())
        # print(loss.item())

    vloss = validation(model, vloader, loss_fn)
    print(raw_line.format(epoch, np.array(losses).mean(), vloss,
                          (time.time() - start_time) / 60**1))
    losses = []

    # Keep the checkpoint with the lowest validation loss
    if vloss < best_loss:
        best_loss = vloss
        torch.save(model.state_dict(), 'model_best.pth')
        torch.save(model.state_dict(), '/content/drive/MyDrive/SemanticSegmentation/model_best.pth')

# Test-time transform: same resize and normalization as training, no augmentation
trfm = T.Compose([
    T.ToPILImage(),
    T.Resize(IMAGE_SIZE),
    T.ToTensor(),
    T.Normalize([0.625, 0.448, 0.688],
                [0.131, 0.177, 0.101]),
])

subm = []

# model.load_state_dict(torch.load("/content/drive/MyDrive/SemanticSegmentation/model_best.pth"))
model.load_state_dict(torch.load("./model_best.pth"))
model.eval()

test_mask = pd.read_csv('data/test_a_samplesubmit.csv', sep='\t', names=['name', 'mask'])
test_mask['name'] = test_mask['name'].apply(lambda x: 'data/test_a/' + x)

for idx, name in enumerate(tqdm_notebook(test_mask['name'].iloc[:])):
    image = cv2.imread(name)
    image = trfm(image)
    with torch.no_grad():
        image = image.to(DEVICE)[None]
        score = model(image)['out'][0][0]
        score_sigmoid = score.sigmoid().cpu().numpy()
        score_sigmoid = (score_sigmoid > 0.5).astype(np.uint8)
        score_sigmoid = cv2.resize(score_sigmoid, (512, 512))
        # break
    subm.append([name.split('/')[-1], rle_encode(score_sigmoid)])

subm = pd.DataFrame(subm)
subm.to_csv('./tmp.csv', index=None, header=None, sep='\t')
subm.to_csv('/content/drive/MyDrive/SemanticSegmentation/tmp.csv',
            index=None, header=None, sep='\t')

plt.figure(figsize=(16, 8))
plt.subplot(121)
plt.imshow(rle_decode(subm[1].fillna('').iloc[0]), cmap='gray')
plt.subplot(122)
plt.imshow(cv2.imread('data/test_a/' + subm[0].iloc[0]));
```
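Two design choices in the baseline are worth calling out. First, the loss mixes `BCEWithLogitsLoss` (stable gradients) with a soft Dice term (closer to the leaderboard metric) at 0.8/0.2 weights. Second, at inference time the sigmoid scores are thresholded at 0.5 and resized from the 256-pixel model input back to the original 512x512 resolution before RLE encoding, and the submitted model is the checkpoint with the lowest validation loss.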