
YunYang1994/tensorflow-yolov3 README Translation


TensorFlow2.0-Examples/4-Object_Detection/YOLOV3

Table of Contents

  • TensorFlow2.0-Examples/4-Object_Detection/YOLOV3
    • Please install tensorflow-gpu 1.11.0 ! Since Tensorflow is fucking ridiculous !
    • part 1. Introduction [[Code walkthrough]](https://github.com/YunYang1994/CodeFun/blob/master/002-deep_learning/YOLOv3.md)
    • part 2. Quick start
    • part 3. Train on your own dataset
      • 3.1 Train VOC dataset
        • how to train it?
          • (1) train from scratch
          • (2) train from COCO weights (recommended)
        • how to test and evaluate it?
      • 3.2 Train other datasets
    • part 4. Why is it so magical?
      • 4.1 Anchors clustering
      • 4.2 Architecture details
      • 4.3 Neural network io
      • 4.4 Filtering with score threshold
      • 4.5 Do non-maximum suppression
    • part 5. Other Implementations

Please install tensorflow-gpu 1.11.0 ! Since Tensorflow is fucking ridiculous !

part 1. Introduction [Code walkthrough]

Implementation of the YOLO v3 object detector in Tensorflow. The full details are in this paper. In this project we cover several segments as follows:

  • YOLO v3 architecture
  • Training tensorflow-yolov3 with the GIOU loss function (a minimal GIoU sketch follows this list)
  • Basic working demo
  • Training pipeline
  • Multi-scale training method
  • Compute VOC mAP (mean average precision on the VOC dataset)
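
The GIOU loss itself is not explained in this README. As a rough reference, here is a minimal pure-Python sketch of how the GIoU of two axis-aligned boxes can be computed (an illustration of the general formula, not code taken from this repo), with boxes given as (x_min, y_min, x_max, y_max); the GIoU loss used for training is then typically 1 - GIoU:

def giou(a, b):
    # a, b = (x_min, y_min, x_max, y_max)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest axis-aligned box C enclosing both a and b.
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)
    # GIoU = IoU - |C \ (A ∪ B)| / |C|
    return iou - (area_c - union) / area_c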

The YOLO paper is quite hard to understand; alongside the paper, this repo enables you to gain a quick understanding of the YOLO algorithm.

part 2. Quick start

  • Clone this repository
    $ git clone https://github.com/YunYang1994/tensorflow-yolov3.git
  • You are supposed to install some dependencies before getting your hands on these codes.
    $ cd tensorflow-yolov3
    $ pip install -r ./docs/requirements.txt
  • Export the loaded COCO weights as a TF checkpoint (yolov3_coco.ckpt):
    $ cd checkpoint
    $ wget https://github.com/YunYang1994/tensorflow-yolov3/releases/download/v1.0/yolov3_coco.tar.gz
    $ tar -xvf yolov3_coco.tar.gz
    $ cd ..
    $ python convert_weight.py
    $ python freeze_graph.py

    (Translator's note: the generated .pb files are roughly what the final detection step actually needs. Generating them correctly requires the right .names file and the right number of classes, and you also have to modify the __C.YOLO.CLASSES and __C.YOLO.ORIGINAL_WEIGHT parameters in config.py; the steps must be followed strictly, and later on this is all worth checking against the TensorFlow tutorials. I am not sure whether the __C.YOLO.DEMO_WEIGHT parameter needs to be changed; running the commands above may update it automatically.)
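
    For orientation, here is a hedged sketch of what the relevant lines in ./core/config.py look like; the exact default paths below are illustrative assumptions, so verify them against the config.py in your checkout:

    from easydict import EasyDict as edict

    __C = edict()
    __C.YOLO = edict()

    # Illustrative values only -- check your own ./core/config.py.
    __C.YOLO.CLASSES         = "./data/classes/coco.names"            # must match your .names file and class count
    __C.YOLO.ORIGINAL_WEIGHT = "./checkpoint/yolov3_coco.ckpt"        # assumed: checkpoint written by convert_weight.py
    __C.YOLO.DEMO_WEIGHT     = "./checkpoint/yolov3_coco_demo.ckpt"   # assumed: checkpoint used by the demo/freeze step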

  • Then you will get some .pb files in the root path, and can run the demo scripts.
    $ python image_demo.py
    $ python video_demo.py # if using a camera, set video_path = 0
    (If you use a camera, set video_path in video_demo.py to 0; this opens the computer's or laptop's default built-in camera.)
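
    To illustrate what video_path = 0 means, here is a generic OpenCV capture-loop sketch (not the repo's actual video_demo.py); index 0 simply selects the default camera:

    import cv2

    video_path = 0                      # 0 = default camera; a file path would play a video file instead
    cap = cv2.VideoCapture(video_path)

    while True:
        ok, frame = cap.read()          # grab one BGR frame
        if not ok:
            break
        # ... run the detector on `frame` and draw the predicted boxes here ...
        cv2.imshow("result", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
            break

    cap.release()
    cv2.destroyAllWindows()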

part 3. Train on your own dataset

Two files are required, as follows:

  • dataset.txt:

    xxx/xxx.jpg 18.19,6.32,424.13,421.83,20 323.86,2.65,640.0,421.94,20
    xxx/xxx.jpg 48,240,195,371,11 8,12,352,498,14
    # image_path x_min,y_min,x_max,y_max,class_id x_min,y_min,...,class_id

  • class.names:

    person
    bicycle
    car
    ...
    toothbrush
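
To make the dataset.txt format concrete, here is a small hypothetical parser (not part of the repo) that turns one annotation line into an image path plus an array of boxes:

import numpy as np

def parse_annotation_line(line):
    # "path x1,y1,x2,y2,cls x1,y1,x2,y2,cls ..." -> (path, boxes of shape [N, 5])
    parts = line.strip().split()
    image_path = parts[0]
    boxes = np.array([list(map(float, box.split(","))) for box in parts[1:]])
    return image_path, boxes

path, boxes = parse_annotation_line(
    "xxx/xxx.jpg 18.19,6.32,424.13,421.83,20 323.86,2.65,640.0,421.94,20")
# boxes[:, :4] holds x_min, y_min, x_max, y_max; boxes[:, 4] is the class id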

3.1 Train VOC dataset

To help you understand my training process, I made this demo of training on the PASCAL VOC (Visual Object Classes) dataset.

how to train it?

Download the PASCAL VOC trainval and test data:

$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar

Extract all of these tars into one directory and rename them; it should have the following basic structure.

VOC           # path: /home/yang/test/VOC/
├── test
|    └── VOCdevkit
|         └── VOC2007 (from VOCtest_06-Nov-2007.tar)
└── train
     └── VOCdevkit
          ├── VOC2007 (from VOCtrainval_06-Nov-2007.tar)
          └── VOC2012 (from VOCtrainval_11-May-2012.tar)

$ python scripts/voc_annotation.py --data_path /home/yang/test/VOC
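
If you are curious what a script like scripts/voc_annotation.py has to do, the following hypothetical sketch (not the repo's actual script) shows the core idea: read one PASCAL VOC XML annotation and emit one dataset.txt-style line:

import os
import xml.etree.ElementTree as ET

def voc_xml_to_line(xml_path, image_dir, class_names):
    # Convert one PASCAL VOC XML file into a "path x1,y1,x2,y2,cls ..." line.
    root = ET.parse(xml_path).getroot()
    image_path = os.path.join(image_dir, root.find("filename").text)
    boxes = []
    for obj in root.iter("object"):
        cls_id = class_names.index(obj.find("name").text)
        bb = obj.find("bndbox")
        coords = [bb.find(k).text for k in ("xmin", "ymin", "xmax", "ymax")]
        boxes.append(",".join(coords + [str(cls_id)]))
    return " ".join([image_path] + boxes)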

Then edit your ./core/config.py to make some necessary configurations:

__C.YOLO.CLASSES      = "./data/classes/voc.names"
__C.TRAIN.ANNOT_PATH  = "./data/dataset/voc_train.txt"
__C.TEST.ANNOT_PATH   = "./data/dataset/voc_test.txt"

Here are two kinds of training methods:

(1) train from scratch (i.e., without a pretrained model):

$ python train.py
$ tensorboard --logdir ./data

(2) train from COCO weights (recommended), i.e., use the COCO weight file as the pretrained model:

$ cd checkpoint
$ wget https://github.com/YunYang1994/tensorflow-yolov3/releases/download/v1.0/yolov3_coco.tar.gz
$ tar -xvf yolov3_coco.tar.gz

After extraction, three files are generated in the checkpoint folder:

$ cd ..
$ python convert_weight.py --train_from_coco

After running this, four new files are generated:

$ python train.py

After issuing the training command, it just keeps running for a very long time. On a 1080Ti GPU it ran for several days on my machine and stopped at around the 45th epoch (if I remember correctly), and the pile of weight files generated in the checkpoint folder took up several tens of gigabytes. The loss value is part of each weight file's name, so I kept the weight files with smaller losses and deleted the rest (as shown in the figure, I only kept yolov3_test_loss=8.4732.ckpt-5 and yolov3_test_loss=7.8837.ckpt-12). The checkpoint file is updated automatically during training; I am not entirely sure what it is used for. (Note: even when training for the same number of epochs, the loss values of the generated files may differ between runs.) The weight files produced by training are used for detection in the later steps.

how to test and evaluate it?

Edit your ./core/config.py to make some necessary configurations; the weight file path is the one you want to test, chosen from the files generated in the previous step.

__C.TEST.WEIGHT_FILE = "./checkpoint/yolov3_test_loss=8.4732.ckpt-5"

$ python evaluate.py
$ cd mAP
$ python main.py -na

Result of the run:

If you are still unfamiliar with the training pipeline, you can join here to discuss it with us.

3.2 Train other datasets

Download the COCO trainval and test data:

$ wget http://images.cocodataset.org/zips/train2017.zip
$ wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
$ wget http://images.cocodataset.org/zips/test2017.zip
$ wget http://images.cocodataset.org/annotations/image_info_test2017.zip

part 4. Why is it so magical?

YOLO stands for You Only Look Once (in other words, it detects quickly). It is an object detector that uses features learned by a deep convolutional neural network to detect objects. Although we have successfully run these codes, we must understand how YOLO works.

4.1 Anchors clustering

The paper suggests using clustering on bounding box shapes to find good anchor box specializations suited to the data; more details can be found here. A simplified clustering sketch follows.
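
As a rough illustration of the idea (a simplified sketch, not the repo's actual k-means code), anchor shapes can be obtained by running k-means over the (width, height) of all ground-truth boxes, using 1 - IoU as the distance as proposed in the YOLOv2/v3 papers:

import numpy as np

def iou_wh(wh, centroids):
    # IoU between one (w, h) box and each centroid box, all anchored at the origin.
    inter = np.minimum(wh[0], centroids[:, 0]) * np.minimum(wh[1], centroids[:, 1])
    union = wh[0] * wh[1] + centroids[:, 0] * centroids[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes_wh, k=9, iters=100):
    # boxes_wh: array of shape [N, 2] with ground-truth (width, height) pairs.
    centroids = boxes_wh[np.random.choice(len(boxes_wh), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the centroid with the highest IoU (distance = 1 - IoU).
        assign = np.array([np.argmax(iou_wh(wh, centroids)) for wh in boxes_wh])
        # Move each centroid to the mean (w, h) of its cluster (keep it if the cluster is empty).
        centroids = np.array([boxes_wh[assign == i].mean(axis=0) if np.any(assign == i) else centroids[i]
                              for i in range(k)])
    return centroids   # k anchor shapes, e.g. the 9 anchors used by YOLOv3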

4.2 Architecture details

In this project, I use the pretrained weights, where we have 80 trained yolo classes (COCO dataset), for recognition. The class label is represented as c and is an integer from 1 to 80; each number represents the corresponding class label. If c = 3, the classified object is a car. The image features learned by the deep convolutional layers are passed to a classifier and regressor, which makes the detection prediction (coordinates of the bounding boxes, the class label, etc.). Details can also be seen in the picture below. (Thanks Levio for your great image!)

4.3 Neural network io:

  • input: [None, 416, 416, 3]
  • output: the confidence of an object being present in each rectangle, together with a list of rectangle positions and sizes and the classes of the objects being detected. Each bounding box is represented by 4 + 1 + n numbers (bx, by, bw, bh, Pc, C1..Cn). In this case n = 80, so c is an 80-dimensional vector and the final size of the vector representing one bounding box is 85. The four numbers bx, by, bw, bh describe the bounding box itself, the number Pc is the confidence that an object is present, and the last 80 numbers are the output probabilities of the corresponding classes (so Pc is separate from, not one of, those 80 class scores).
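
To make these shapes concrete, here is a small sketch (following the description above; not the repo's decode code) of how one 85-dimensional prediction vector splits up and how many boxes the three output scales of a 416x416 input contribute:

import numpy as np

num_classes = 80
strides = [8, 16, 32]                        # the three YOLOv3 output strides
grid_sizes = [416 // s for s in strides]     # 52, 26, 13

# Every grid cell predicts 3 anchor boxes, each described by 4 + 1 + 80 = 85 numbers.
total_boxes = sum(g * g * 3 for g in grid_sizes)
print(total_boxes)                           # 10647

prediction = np.zeros(4 + 1 + num_classes)   # one raw prediction vector
box_xywh   = prediction[0:4]                 # bx, by, bw, bh
objectness = prediction[4]                   # Pc
class_prob = prediction[5:]                  # C1..C80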

4.4 Filtering with score threshold

The output result may contain several rectangles that are false positives or that overlap. If your input image size is [416, 416, 3], you will get (52x52 + 26x26 + 13x13) x 3 = 10647 boxes, since YOLO v3 uses 9 anchor boxes in total (three for each scale). So it is time to find a way to reduce them. The first attempt to reduce these rectangles is to filter them by a score threshold.

Input arguments:

  • boxes: tensor of shape [10647, 4]
  • scores: tensor of shape [10647, 80] containing the detection scores for 80 classes
  • score_thresh: float value used to get rid of boxes with a low score

# Step 1: Create a filtering mask based on "box_class_scores" by using "threshold".
score_thresh = 0.4
mask = tf.greater_equal(scores, tf.constant(score_thresh))
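
As a follow-up, here is a minimal TF 1.x sketch (not the repo's exact utils code) of how such a mask can be applied: reduce the [10647, 80] score matrix to one best class and score per box, threshold it, and keep only the surviving boxes with tf.boolean_mask:

import tensorflow as tf

boxes  = tf.placeholder(tf.float32, [10647, 4])    # (x_min, y_min, x_max, y_max) per box
scores = tf.placeholder(tf.float32, [10647, 80])   # per-class detection scores

score_thresh = 0.4
box_classes      = tf.argmax(scores, axis=-1)       # best class index per box
box_class_scores = tf.reduce_max(scores, axis=-1)   # best class score per box
mask = tf.greater_equal(box_class_scores, score_thresh)

filtered_boxes   = tf.boolean_mask(boxes, mask)             # keep only confident boxes
filtered_scores  = tf.boolean_mask(box_class_scores, mask)
filtered_classes = tf.boolean_mask(box_classes, mask)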

4.5 Do non-maximum suppression

Even after filtering by score threshold, we still have a lot of overlapping boxes. The second filtering approach is the non-maximum suppression (NMS) algorithm.

  • Discard all boxes with Pc <= 0.4
  • While there are any remaining boxes:
    • Pick the box with the largest Pc
    • Output it as a prediction
    • Discard any remaining box that has IoU >= 0.5 with the box output in the previous step

In TensorFlow, we can simply implement the non-maximum suppression algorithm like this; more details can be found here.

max_boxes = 20  # example value; non_max_suppression also requires max_output_size
for i in range(num_classes):
    selected_indices = tf.image.non_max_suppression(
        boxes, score[:, i], max_output_size=max_boxes, iou_threshold=0.5)

Non-max suppression uses the very important function called "Intersection over Union", or IoU. Here is an example of the non-maximum suppression algorithm: the algorithm receives 4 overlapping bounding boxes as input, and the output returns only one.
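
For reference, here is a minimal NumPy sketch (not the repo's implementation) of the IoU function and the greedy NMS procedure described above, with boxes given as (x_min, y_min, x_max, y_max):

import numpy as np

def iou(box, boxes):
    # IoU between one box and an array of boxes, all in (x1, y1, x2, y2) form.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: repeatedly keep the highest-scoring box and drop heavy overlaps.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) < iou_thresh]
    return keep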

If you want more details, read the fucking source code and original paper, or contact me!

part 5. Other Implementations

- YOLOv3目標檢測有了TensorFlow實現,可用自己的數據來訓練 (YOLOv3 object detection now has a TensorFlow implementation that you can train on your own data)
- Implementing YOLO v3 in Tensorflow (TF-Slim)
- YOLOv3_TensorFlow
- Object Detection using YOLOv2 on Pascal VOC2012
- Understanding YOLO
