

nnUNet Code Analysis: Training

Published: 2024/1/18 · 豆豆

nnUNet is a complete code base for segmentation, used widely in medical image analysis, and its results are quite good.

Let's start with the training entry point, run_training.py.

Typical usage: nnUNet_train 2d nnUNetTrainerV2 TaskXXX_MYTASK FOLD --npz

Here 2d selects the 2D U-Net configuration, nnUNetTrainerV2 is the trainer class, and Task is the task ID.

There are further options; see the code for details.

plans_file, output_folder_name, dataset_directory, batch_dice, stage, \
    trainer_class = get_default_configuration(network, task, network_trainer, plans_identifier)

Given the network, the task, and the trainer name, get_default_configuration resolves the plans file and the trainer class to use.
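Before that call, run_training.py reads its positional arguments from the command line. A minimal sketch of that argument handling (the option names match the command line above, but this code is my own reconstruction, not the nnUNet source):

```python
import argparse

def parse_args(argv=None):
    # Positional arguments mirror: nnUNet_train NETWORK TRAINER TASK FOLD [--npz]
    parser = argparse.ArgumentParser()
    parser.add_argument("network")          # e.g. "2d", "3d_fullres"
    parser.add_argument("network_trainer")  # e.g. "nnUNetTrainerV2"
    parser.add_argument("task")             # e.g. "Task004_Hippocampus" or "4"
    parser.add_argument("fold")             # "0".."4" or "all"
    parser.add_argument("--npz", action="store_true",
                        help="also save softmax outputs of the validation set")
    return parser.parse_args(argv)
```

With these parsed values in hand, the script can hand network, task and network_trainer straight to get_default_configuration.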

Command: nnUNet_train 2d nnUNetTrainerV2 Task004_Hippocampus 1 --npz

The output looks like this:

###############################################
I am running the following nnUNet: 2d
My trainer class is:  <class 'nnunet.training.network_training.nnUNetTrainerV2.nnUNetTrainerV2'>
For that I will be using the following configuration:
num_classes:  2
modalities:  {0: 'MRI'}
use_mask_for_norm OrderedDict([(0, False)])
keep_only_largest_region None
min_region_size_per_class None
min_size_per_class None
normalization_schemes OrderedDict([(0, 'nonCT')])
stages...

stage:  0
{'batch_size': 366, 'num_pool_per_axis': [3, 3], 'patch_size': array([56, 40]), 'median_patient_size_in_voxels': array([36, 50, 35]), 'current_spacing': array([1., 1., 1.]), 'original_spacing': array([1., 1., 1.]), 'pool_op_kernel_sizes': [[2, 2], [2, 2], [2, 2]], 'conv_kernel_sizes': [[3, 3], [3, 3], [3, 3], [3, 3]], 'do_dummy_2D_data_aug': False}

I am using stage 0 from these plans
I am using batch dice + CE loss

I am using data from this folder:  /mnt/nnUNet_preprocessed/Task004_Hippocampus/nnUNetData_plans_v2.1_2D
###############################################

Two concepts here are not entirely clear to me yet: batch dice vs. sample dice, and the notion of a stage.
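My rough understanding of the batch-dice distinction, as a sketch (my own code, not nnUNet's): with sample dice, soft Dice is computed per sample and then averaged; with batch dice, the whole batch is pooled into one Dice computation, which is more forgiving toward samples where the foreground is nearly absent:

```python
def soft_dice(pred, target, eps=1e-5):
    # pred: flat list of soft foreground probabilities; target: flat 0/1 labels
    inter = sum(p * t for p, t in zip(pred, target))
    return (2 * inter + eps) / (sum(pred) + sum(target) + eps)

def sample_dice(preds, targets):
    # Dice per sample, then averaged over the batch
    scores = [soft_dice(p, t) for p, t in zip(preds, targets)]
    return sum(scores) / len(scores)

def batch_dice(preds, targets):
    # pool the whole batch into a single Dice computation
    flat_p = [x for p in preds for x in p]
    flat_t = [x for t in targets for x in t]
    return soft_dice(flat_p, flat_t)
```

For a batch where one sample is predicted perfectly and the other entirely missed, sample dice averages out to about 0.5, while batch dice lands around 0.67 because the pooled intersection dominates.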

As the log shows, the trainer in use is nnunet.training.network_training.nnUNetTrainerV2.nnUNetTrainerV2,

which inherits from class nnUNetTrainer(NetworkTrainer).

So start by reading NetworkTrainer.py and nnUNetTrainer.py.

do_split: 5-fold cross-validation.
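A simplified stand-in for do_split (nnUNet itself uses sklearn's KFold with shuffling and a fixed seed, and caches the result on disk; this pure-Python version only illustrates the idea):

```python
def five_fold_split(cases, n_splits=5):
    # Deal cases round-robin into n_splits folds; each fold serves once as
    # the validation set while the rest form the training set.
    folds = [cases[i::n_splits] for i in range(n_splits)]
    splits = []
    for k in range(n_splits):
        val = folds[k]
        train = [c for c in cases if c not in val]
        splits.append({"train": train, "val": val})
    return splits
```

Training fold FOLD (the fourth positional argument of nnUNet_train) then simply picks splits[FOLD].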

The training loop itself is unremarkable: SGD with a poly learning-rate schedule (poly_lr).
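The poly schedule can be checked against the training log below: with initial_lr = 0.01, max_epochs = 1000 and exponent 0.9 (the values I believe nnUNet v1 defaults to), the logged lr of 0.009991 after epoch 0 falls out of the formula for the next epoch:

```python
def poly_lr(epoch, max_epochs=1000, initial_lr=0.01, exponent=0.9):
    # Polynomial decay: lr shrinks smoothly from initial_lr to 0 at max_epochs.
    return initial_lr * (1 - epoch / max_epochs) ** exponent
```

poly_lr(1) gives 0.009991 and poly_lr(34) gives 0.009693, matching the "lr:" lines in the logs further down.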

The loss is Dice + CE.

Data augmentation is also quite simple: mainly scaling and rotation.

    def setup_DA_params(self):
        """
        - we increase rotation angle from [-15, 15] to [-30, 30]
        - scale range is now (0.7, 1.4), was (0.85, 1.25)
        - we don't do elastic deformation anymore

There is early stopping, with patience=50.
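A minimal early-stopping sketch with patience = 50 (illustrative only; nnUNet's actual criterion is more involved and works on smoothed validation metrics):

```python
class EarlyStopping:
    def __init__(self, patience=50):
        self.patience = patience
        self.best = float("inf")
        self.counter = 0  # epochs since the last improvement

    def step(self, val_loss):
        """Call once per epoch; returns True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.counter = 0
        else:
            self.counter += 1
        return self.counter >= self.patience
```

With patience=50, training halts once 50 consecutive epochs pass without the validation loss improving on its best value.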

Now let's run the hippocampus segmentation example.

nnUNet_convert_decathlon_task -i /xxx/Task04_Hippocampus converts a dataset in MSD format into the nnUNet format.

nnUNet_plan_and_preprocess -t 4 sets up the plan and runs preprocessing, a key step in nnUNet; I'll dig into it in a later post.

Next comes training: nnUNet_train 3d_fullres nnUNetTrainerV2 4 0

The MSD hippocampus dataset is under 30 MB; preprocessing generates two directories, nnUNetData_plans_v2.1_2D_stage0 and nnUNetData_plans_v2.1_stage0, each about 166 MB, plus a gt_segmentation directory, presumably the labels.

The images are only about 30×50×30 voxels, yet training is quite slow.

On a K80, one epoch takes over 6 minutes:

epoch:  0
2022-09-03 00:48:13.676541: train loss : -0.3330
2022-09-03 00:48:33.931392: validation loss: -0.7511
2022-09-03 00:48:33.935500: Average global foreground Dice: [0.8332, 0.8122]
2022-09-03 00:48:33.936098: (interpret this as an estimate for the Dice of the different classes. This is not exact.)
2022-09-03 00:48:35.234157: lr: 0.009991
2022-09-03 00:48:35.235549: This epoch took 360.665858 s

2022-09-03 00:48:35.236174:
epoch:  1

On a 2080 Ti, one epoch takes about 30 s, roughly 12x faster:

epoch:  0
2022-09-03 00:57:25.939667: train loss : -0.3076
2022-09-03 00:57:28.172749: validation loss: -0.7510
2022-09-03 00:57:28.174898: Average global foreground Dice: [0.8301, 0.8239]
2022-09-03 00:57:28.175579: (interpret this as an estimate for the Dice of the different classes. This is not exact.)
2022-09-03 00:57:29.656255: lr: 0.009991
2022-09-03 00:57:29.657619: This epoch took 30.441361 s

2022-09-03 00:57:29.658239:
epoch:  1

Why is the loss negative? Because the Dice part of the Dice + CE loss is the negated soft Dice: as the Dice score approaches 1, that term approaches -1, so the total loss can drop below zero.
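A minimal sketch of a Dice + CE loss of this shape (my own simplification in plain Python, not the nnUNet implementation) shows the sign behaviour: for a confident correct prediction the negated Dice term dominates and the total goes below zero, while a wrong prediction keeps it positive:

```python
import math

def dice_ce_loss(logits, labels, eps=1e-5):
    """logits: per-pixel [class0, class1] scores; labels: per-pixel 0/1."""
    ce_total, inter, p_sum, t_sum = 0.0, 0.0, 0.0, 0.0
    for scores, lab in zip(logits, labels):
        m = max(scores)                                # stable softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        probs = [e / z for e in exps]
        ce_total += -math.log(probs[lab] + eps)        # cross-entropy term
        for c, p in enumerate(probs):                  # soft-Dice accumulators
            t = 1.0 if c == lab else 0.0
            inter += p * t
            p_sum += p
            t_sum += t
    ce = ce_total / len(labels)
    dice = (2 * inter + eps) / (p_sum + t_sum + eps)   # soft Dice in [0, 1]
    return ce - dice                                    # Dice enters negated
```

As the network improves, ce falls toward 0 and dice rises toward 1, so the logged loss heads toward -1, exactly the trend seen in the logs above.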

Hippocampus segmentation has two foreground regions; nnUNet's 2018 results were 0.90 and 0.89.

10 epochs reach 0.89 / 0.877,

and at epoch 33: [0.9017, 0.8877]

2022-09-03 01:13:15.730579:
epoch:  33
2022-09-03 01:13:40.102154: train loss : -0.8661
2022-09-03 01:13:41.597394: validation loss: -0.8594
2022-09-03 01:13:41.599368: Average global foreground Dice: [0.9017, 0.8877]
2022-09-03 01:13:41.599944: (interpret this as an estimate for the Dice of the different classes. This is not exact.)
2022-09-03 01:13:43.128609: lr: 0.009693
2022-09-03 01:13:43.135475: saving checkpoint...
2022-09-03 01:13:44.503204: done, saving took 1.37 seconds
2022-09-03 01:13:44.544393: This epoch took 28.813107 s

2022-09-03 01:13:44.545232:

After switching to the local loss:

10 epochs: [0.8562, 0.8408]

33 epochs: [0.8769, 0.8605]; epoch 34: [0.8808, 0.8643]

Summary

That's all for this note on nnUNet training; I hope it helps.