

Training an OpenCV Feature-based Cascade Classifier

Published: 2024/3/12 · 豆豆 · collected and edited by 生活随笔

I. Background

First, note that OpenCV currently supports training and detection with only three feature types: HAAR, LBP and HOG; read up on whichever one you choose. OpenCV's training algorithm is based on AdaBoost, so brush up on AdaBoost basics first. There is plenty of material online (some in my downloads as well), but none of it is directly usable on its own, so below I will walk through the training procedure step by step and point out the details to watch.


II. Preparing the positive samples

1. Collecting positive images

Because the positives must eventually be normalized to one size, I cropped each object out of the original image at collection time, which makes the later rescaling easy, rather than only recording the box count and box coordinates (see the next step for what those mean). While cropping, try to keep the aspect ratio consistent. For example, with a target size of 20x20 I cropped at 20x20, 21x21, 22x22 and so on, never beyond 30x30 (that cap is specific to my own use case; for something like face detection, where scale invariance matters, larger crops are perfectly fine). A ready-to-use cropping tool is in my downloads as well.

(Correction: according to createsamples.cpp, we do not need to scale in advance. The conversion to a vec file in step 3 already includes the scaling. If you mark samples with objectMaker, the samplesInfo data it writes out for each image can be fed straight to step 3. Pre-scaling does no harm either; just follow step 2.)

2. Building the positive path list

In your image folder, write a small .bat script (get route.bat; a bat saves you from retyping everything in a DOS box, where you can neither copy nor paste!), as follows:
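The screenshot of the .bat has not survived. As a sketch of what it does (on Windows the one-liner is typically `dir /b *.bmp > pos.dat` run inside the image folder; below is a POSIX equivalent with stand-in filenames so the output is concrete):

```shell
# Stand-in positive crops; in practice these are your cropped .bmp samples.
mkdir -p pos
touch pos/1.bmp pos/2.bmp pos/3.bmp

# List every .bmp into a dat file, one filename per line
# (the .bat equivalent:  dir /b *.bmp > pos.dat).
(cd pos && ls *.bmp | sort > pos.dat)
cat pos/pos.dat
```

With the `/s` switch, `dir` also prints header lines and absolute paths, which is why the article next deletes the non-image lines from the dat.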








Running the bat produces a dat file like this:


Delete every non-image line from the dat file (for example the first two lines in the screenshot above), then replace `bmp` with `bmp 1 0 0 20 20`, like so:


(The 1 is the object count, and the next four numbers are left top width height. If you did not pre-crop your samples, a line of this dat might instead look like `1.bmp 3 1 3 24 24 26 28 25 25 60 80 26 26`, where 1.bmp is the full original image from which your samples were cut.)
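The hand edit just described (replacing `bmp` with `bmp 1 0 0 20 20`) can also be scripted instead of done in an editor; a minimal sketch, assuming pre-cropped 20x20 samples and an illustrative two-line listing:

```shell
# Stand-in listing as produced by the previous step.
printf '1.bmp\n2.bmp\n' > pos.dat

# Append "1 0 0 20 20" (count, then left top width height) to each line.
sed -i 's/\.bmp$/.bmp 1 0 0 20 20/' pos.dat
cat pos.dat
```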

3. Creating the vec file for training

Here we use an OpenCV tool called opencv_createsamples.exe (you can copy it out of the OpenCV install). Its command line also goes into a bat file, since cascade training consumes a vec. As follows:
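The bat in the screenshot is not recoverable; a representative opencv_createsamples invocation looks like the following (the file names, the sample count and the 20x20 size are illustrative assumptions, and `-num` must not exceed the number of boxes described in the dat):

```shell
opencv_createsamples.exe -info pos.dat -vec pos.vec -num 50 -w 20 -h 20
```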


Run the bat and a vec file appears in our pos folder:

That completes the positive-sample preparation.

(The vec simply stores every sample image, already normalized to a common w and h. If you want to view all the samples, opencv_createsamples.exe can do that too; see the appendix.)


III. Preparing the negative samples

This part is easy: use the original images directly. There is no cropping (which also preserves sample diversity) and no boxes to record (the usual advice online is just to keep them larger than the positive sample size); only the paths need saving. The steps mirror the positive case:
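As with the positives, the negative listing is a single redirect; a sketch with stand-in names (real negatives should be larger than the positive sample size, and the paths must be resolvable from wherever training runs):

```shell
# Stand-in negatives; in practice these are full-size background images.
mkdir -p neg
touch neg/bg_001.jpg neg/bg_002.jpg

# One image path per line -- this is the whole background description file.
ls neg/*.jpg | sort > neg.dat
cat neg.dat
```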



That completes the negative-sample preparation.


IV. Training

Here we use opencv_traincascade.exe (opencv_haartraining.exe is used very similarly; check the OpenCV source, or the many online writeups, for its exact parameters. The main difference is that opencv_traincascade.exe supports more feature types and is the more complete tool). Screenshot:


Again the command goes into a bat file. Mind the letter case of every parameter name, or it will not be recognized. Off we go, little rabbit~~~
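The screenshotted bat is lost; a representative opencv_traincascade command might look like this (every value here is an illustrative assumption: `-w`/`-h` must match the vec, and `-numPos` is usually kept somewhat below the total positives so later stages can draw replacements; `^` is the cmd line continuation):

```shell
opencv_traincascade.exe -data xml -vec pos.vec -bg neg.dat ^
  -numPos 45 -numNeg 100 -numStages 15 ^
  -featureType HAAR -w 20 -h 20 ^
  -minHitRate 0.995 -maxFalseAlarmRate 0.5
```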


These are the parameters the program parsed. If you mistyped a letter somewhere, you will see one of these values differ from what you set, so check them carefully~~~~

And it runs, and runs, and runs:


Once a stage's strong classifier reaches your preset rates, training moves on to the next stage. Don't set the hit rate (HR) too high, or you will need a huge number of samples; and don't set stageNum too small, or detection will be slow later.

When the bat finishes, my xml files are all there. As follows:


You can actually interrupt the training midway: on the next run it reads these stage xml files and resumes the unfinished training. Ha~~~~how considerate!

Training done. I have my cascade.xml, and now I'm off to run detection with it! Whee~~~~


V. Detection

OpenCV ships opencv_performance.exe for evaluating a detector, but it only works with classifiers produced by opencv_haartraining.exe, so here I run detection over a series of images myself. The detection code:

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <direct.h>
    #include <string.h>
    #include <list>
    #include <iostream>

    #include "opencv2/objdetect/objdetect.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/imgproc/imgproc.hpp"

    using namespace std;
    using namespace cv;

    String cascadeName = "./cascade.xml"; // the trained cascade

    struct PathElem {
        char SrcImgPath[MAX_PATH * 2];
        char RstImgPath[MAX_PATH * 2];
    };
    int FindImgs(const char* pSrcImgPath, const char* pRstImgPath, std::list<PathElem>& ImgList);

    int main()
    {
        CascadeClassifier cascade;        // the cascade classifier object
        std::list<PathElem> ImgList;
        std::list<PathElem>::iterator pImg;
        vector<Rect> rects;
        vector<Rect>::const_iterator pRect;

        double scale = 1.;
        Mat image;
        double t;

        if (!cascade.load(cascadeName))   // load the trained cascade from file
        {
            cerr << "ERROR: Could not load classifier cascade" << endl;
            return 0;
        }

        if (FindImgs("H:/SrcPic/", "H:/RstPic/", ImgList) != 0)
        {
            cout << "Read Image error! Input 0 to exit\n";
            exit(0);
        }

        for (pImg = ImgList.begin(); pImg != ImgList.end(); ++pImg)
        {
            image = imread(pImg->SrcImgPath);
            if (image.empty())            // skip unreadable images
                continue;

            // Shrink the image to speed up detection (scale == 1 keeps full size).
            Mat gray, smallImg(cvRound(image.rows / scale), cvRound(image.cols / scale), CV_8UC1);
            cvtColor(image, gray, CV_BGR2GRAY); // Haar-like features work on grayscale
            resize(gray, smallImg, smallImg.size(), 0, 0, INTER_LINEAR); // downscale by 1/scale, linear interpolation
            equalizeHist(smallImg, smallImg);   // histogram equalization

            // detectMultiScale: smallImg is the input image, rects receives the detected
            // objects, 1.1 is the per-step scale reduction, 2 means a candidate needs at
            // least 2 neighboring detections to be accepted (nearby pixels and different
            // window sizes all fire on a real object), and Size(20,20)/Size(30,30) bound
            // the object size searched for.
            rects.clear();
            printf("begin...\n");
            t = (double)cvGetTickCount();       // time the detection
            cascade.detectMultiScale(smallImg, rects, 1.1, 2, 0, Size(20, 20), Size(30, 30));
            // optional flags: CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_DO_ROUGH_SEARCH | CV_HAAR_SCALE_IMAGE
            t = (double)cvGetTickCount() - t;
            printf("detection time = %g ms\n\n", t / ((double)cvGetTickFrequency() * 1000.));

            for (pRect = rects.begin(); pRect != rects.end(); ++pRect)
            {
                rectangle(image, cvPoint(pRect->x, pRect->y),
                          cvPoint(pRect->x + pRect->width, pRect->y + pRect->height),
                          cvScalar(0, 255, 0));
            }
            imwrite(pImg->RstImgPath, image);
        }
        return 0;
    }

    int FindImgs(const char* pSrcImgPath, const char* pRstImgPath, std::list<PathElem>& ImgList)
    {
        // directory holding the source images
        char szFileT1[MAX_PATH * 2];
        lstrcpyA(szFileT1, pSrcImgPath);
        lstrcatA(szFileT1, "*.*");

        // directory for the result images
        char RstAddr[MAX_PATH * 2];
        lstrcpyA(RstAddr, pRstImgPath);
        _mkdir(RstAddr);                  // create it if missing

        WIN32_FIND_DATAA wfd;
        HANDLE hFind = FindFirstFileA(szFileT1, &wfd);
        if (hFind == INVALID_HANDLE_VALUE)
            return -1;

        PathElem elem;
        do
        {
            if (wfd.cFileName[0] == '.')
                continue;
            if ((wfd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) ||
                strcmp("Thumbs.db", wfd.cFileName) == 0)
                continue;

            lstrcpyA(elem.SrcImgPath, pSrcImgPath);
            lstrcatA(elem.SrcImgPath, wfd.cFileName);

            lstrcpyA(elem.RstImgPath, pRstImgPath);
            lstrcatA(elem.RstImgPath, wfd.cFileName);

            ImgList.push_back(elem);
        } while (FindNextFileA(hFind, &wfd));
        FindClose(hFind);
        return 0;
    }

    Go look at your own detection results. If they're not great, improve the samples and tune the training parameters~~~heh


    I think I've written this plainly enough for anyone to pick up and use directly. The finer details are yours to explore~88



    Appendix:

    1. opencv_createsamples.exe parameters

    (createsamples.cpp)

      [-info <collection_file_name>]
      [-img <image_file_name>]
      [-vec <vec_file_name>]
      [-bg <background_file_name>]
      [-num <number_of_samples = %d>]
      [-bgcolor <background_color = %d>]
      [-inv] [-randinv] [-bgthresh <background_color_threshold = %d>]
      [-maxidev <max_intensity_deviation = %d>]
      [-maxxangle <max_x_rotation_angle = %f>]
      [-maxyangle <max_y_rotation_angle = %f>]
      [-maxzangle <max_z_rotation_angle = %f>]
      [-show [<scale = %f>]]
      [-w <sample_width = %d>]
      [-h <sample_height = %d>]   // default 24x24

    Cases 1)–4) below are checked in order, and exactly one of them runs.

    1) When imagename and vecname are provided, this is invoked:

    /*
     * cvCreateTrainingSamples
     *
     * Create training samples applying random distortions to sample image and
     * store them in .vec file
     *
     * filename        - .vec file name
     * imgfilename     - sample image file name
     * bgcolor         - background color for sample image
     * bgthreshold     - background color threshold. Pixels whose colors are in range
     *   [bgcolor-bgthreshold, bgcolor+bgthreshold] are considered as transparent
     * bgfilename      - background description file name. If not NULL samples
     *   will be put on arbitrary background
     * count           - desired number of samples
     * invert          - if not 0 sample foreground pixels will be inverted
     *   if invert == CV_RANDOM_INVERT then samples will be inverted randomly
     * maxintensitydev - desired max intensity deviation of foreground sample pixels
     * maxxangle       - max rotation angles
     * maxyangle
     * maxzangle
     * showsamples     - if not 0 samples will be shown
     * winwidth        - desired sample width
     * winheight       - desired sample height
     */

    2) When imagename, bgfilename and infoname are provided: similar to 1).
    3) When infoname and vecname are provided, this is invoked (this is the case our training needs):

    /*
     * cvCreateTrainingSamplesFromInfo
     *
     * Create training samples from a set of marked up images and store them into .vec file
     * infoname    - file in which marked up image descriptions are stored
     * num         - desired number of samples
     * showsamples - if not 0 samples will be shown
     * winwidth    - sample width
     * winheight   - sample height
     *
     * Return number of successfully created samples
     */
    int cvCreateTrainingSamplesFromInfo( const char* infoname, const char* vecfilename,
                                         int num,
                                         int showsamples,
                                         int winwidth, int winheight )

    What it does: read every marked sample (x, y, w, h) in each listed image and scale it to winwidth x winheight, which is why the manual scaling beforehand is unnecessary.

    (Note that only the num, w and h parameters are needed.)

    4) With vecname alone, all the (scaled) samples stored in the vec can be displayed:

    /*
     * cvShowVecSamples
     *
     * Shows samples stored in .vec file
     *
     * filename
     *   .vec file name
     * winwidth
     *   sample width
     * winheight
     *   sample height
     * scale
     *   the scale each sample is adjusted to (this scale is unrelated to the
     *   scaling in 3); it is applied again purely for display)
     */
    void cvShowVecSamples( const char* filename, int winwidth, int winheight, double scale );
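Case 4) can be driven from the command line as well; a hedged example (the vec name and the 20x20 size are assumptions carried over from the steps above):

```shell
opencv_createsamples.exe -vec pos.vec -show -w 20 -h 20
```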

    2. opencv_haartraining.exe parameters

    (haartraining.cpp)

      -data <dir_name>
      -vec <vec_file_name>
      -bg <background_file_name>
      [-bg-vecfile]
      [-npos <number_of_positive_samples = %d>]
      [-nneg <number_of_negative_samples = %d>]
      [-nstages <number_of_stages = %d>]
      [-nsplits <number_of_splits = %d>]
      [-mem <memory_in_MB = %d>]
      [-sym (default)] [-nonsym]
      [-minhitrate <min_hit_rate = %f>]
      [-maxfalsealarm <max_false_alarm_rate = %f>]
      [-weighttrimming <weight_trimming = %f>]
      [-eqw]
      [-mode <BASIC (default) | CORE | ALL>]
      [-w <sample_width = %d>]
      [-h <sample_height = %d>]
      [-bt <DAB | RAB | LB | GAB (default)>]
      [-err <misclass (default) | gini | entropy>]
      [-maxtreesplits <max_number_of_splits_in_tree_cascade = %d>]
      [-minpos <min_number_of_positive_samples_per_cluster = %d>]

    3. opencv_performance.exe parameters

    (performance.cpp)

      -data <classifier_directory_name>
      -info <collection_file_name>
      [-maxSizeDiff <max_size_difference = %f>]
      [-maxPosDiff <max_position_difference = %f>]
      [-sf <scale_factor = %f>]
      [-ni <saveDetected = 0>]
      [-nos <number_of_stages = %d>]
      [-rs <roc_size = %d>]
      [-w <sample_width = %d>]
      [-h <sample_height = %d>]


    4. opencv_traincascade.exe parameters

    (traincascade.cpp)

    cout << "Usage: " << argv[0] << endl;
    cout << "  -data <cascade_dir_name>" << endl;
    cout << "  -vec <vec_file_name>" << endl;
    cout << "  -bg <background_file_name>" << endl;
    cout << "  [-numPos <number_of_positive_samples = " << numPos << ">]" << endl;   // default 2000
    cout << "  [-numNeg <number_of_negative_samples = " << numNeg << ">]" << endl;   // default 1000
    cout << "  [-numStages <number_of_stages = " << numStages << ">]" << endl;       // default 20
    cout << "  [-precalcValBufSize <precalculated_vals_buffer_size_in_Mb = " << precalcValBufSize << ">]" << endl; // default 256
    cout << "  [-precalcIdxBufSize <precalculated_idxs_buffer_size_in_Mb = " << precalcIdxBufSize << ">]" << endl; // default 256
    cout << "  [-baseFormatSave]" << endl;  // save the xml in the old format; default false
    // cout << "  [-numThreads <max_number_of_threads = " << numThreads << ">]" << endl;  // OpenCV 3.0+ only; default getNumThreads()
    // cout << "  [-acceptanceRatioBreakValue <value> = " << acceptanceRatioBreakValue << ">]" << endl;  // OpenCV 3.0+ only; default -1.0
    cascadeParams.printDefaults();
    stageParams.printDefaults();
    for( int fi = 0; fi < fc; fi++ )
        featureParams[fi]->printDefaults();

    where cascadeParams.printDefaults(), in cascadeclassifier.cpp, is:

    cout << "  [-stageType <";   // default BOOST
    for( int i = 0; i < (int)(sizeof(stageTypes)/sizeof(stageTypes[0])); i++ )
    {
        cout << (i ? " | " : "") << stageTypes[i];
        if ( i == defaultStageType )
            cout << "(default)";
    }
    cout << ">]" << endl;

    cout << "  [-featureType <{";   // default HAAR
    for( int i = 0; i < (int)(sizeof(featureTypes)/sizeof(featureTypes[0])); i++ )
    {
        cout << (i ? ", " : "") << featureTypes[i];
        if ( i == defaultStageType )
            cout << "(default)";
    }
    cout << "}>]" << endl;
    cout << "  [-w <sampleWidth = " << winSize.width << ">]" << endl;   // default 24x24
    cout << "  [-h <sampleHeight = " << winSize.height << ">]" << endl;

    and stageParams.printDefaults(), in boost.cpp, is:

    cout << "--boostParams--" << endl;
    cout << "  [-bt <{" << CC_DISCRETE_BOOST << ", "
                        << CC_REAL_BOOST << ", "
                        << CC_LOGIT_BOOST ", "
                        << CC_GENTLE_BOOST << "(default)}>]" << endl;                      // default CC_GENTLE_BOOST
    cout << "  [-minHitRate <min_hit_rate> = " << minHitRate << ">]" << endl;              // default 0.995
    cout << "  [-maxFalseAlarmRate <max_false_alarm_rate = " << maxFalseAlarm << ">]" << endl; // default 0.5
    cout << "  [-weightTrimRate <weight_trim_rate = " << weight_trim_rate << ">]" << endl;     // default 0.95
    cout << "  [-maxDepth <max_depth_of_weak_tree = " << max_depth << ">]" << endl;            // default 1
    cout << "  [-maxWeakCount <max_weak_tree_count = " << weak_count << ">]" << endl;          // default 100

    and featureParams[fi]->printDefaults(), in haarfeatures.cpp, is:

    cout << "  [-mode <" CC_MODE_BASIC << "(default)| "   // default CC_MODE_BASIC
         << CC_MODE_CORE << " | " << CC_MODE_ALL << endl;

    General parameters:

    -data <cascade_dir_name>

    Directory for the trained classifier; the training program creates it if it does not exist.


    -vec <vec_file_name>

    The vec file of positive samples (produced by the opencv_createsamples tool).


    -bg <background_file_name>

    The background description file, i.e. the file listing the negative sample image paths.


    -numPos <number_of_positive_samples>

    Number of positive samples used to train each stage.


    -numNeg <number_of_negative_samples>

    Number of negative samples used to train each stage; may exceed the number of images given by -bg.


    -numStages <number_of_stages>

    Number of cascade stages to train.


    -precalcValBufSize <precalculated_vals_buffer_size_in_Mb>

    Buffer size, in MB, for storing precalculated feature values.


    -precalcIdxBufSize <precalculated_idxs_buffer_size_in_Mb>

    Buffer size, in MB, for storing precalculated feature indices. The more memory, the shorter the training.


    -baseFormatSave

    Only meaningful with Haar features. If given, the cascade is saved in the old format.


    Cascade parameters:

    -stageType <BOOST(default)>

    Stage type. Only BOOST classifiers are currently supported as stages.


    -featureType <{HAAR(default), LBP}>

    Feature type: HAAR - Haar-like features; LBP - local binary patterns.


    -w <sampleWidth>

    -h <sampleHeight>

    Size of the training samples, in pixels. Must match the size used when the samples were created with opencv_createsamples.


    Boosted classifier parameters:

    -bt <{DAB, RAB, LB, GAB(default)}>

    Type of boosted classifier: DAB - Discrete AdaBoost, RAB - Real AdaBoost, LB - LogitBoost, GAB - Gentle AdaBoost.


    -minHitRate <min_hit_rate>

    Minimum desired detection rate per stage (fraction of positives classified as positive). The overall detection rate is roughly min_hit_rate^number_of_stages, so it can be set high, e.g. 0.999 (with the defaults, 0.995^20 ≈ 0.90).


    -maxFalseAlarmRate <max_false_alarm_rate>

    Maximum desired false-alarm rate per stage (fraction of negatives classified as positive). The overall false-alarm rate is roughly max_false_alarm_rate^number_of_stages, so it can be set fairly loose, e.g. 0.5 (0.5^20 ≈ 1e-6).


    -weightTrimRate <weight_trim_rate>

    Specifies whether trimming should be used and its weight. A decent value is 0.95.


    -maxDepth <max_depth_of_weak_tree>

    Maximum depth of a weak tree. A decent value is 1, i.e. stumps (binary trees).


    -maxWeakCount <max_weak_tree_count>

    Maximum number of weak classifiers per stage. The boosted classifier (stage) will have as many weak trees (<= maxWeakCount) as needed to achieve the given -maxFalseAlarmRate.


    Haar-like feature parameters:

    -mode <BASIC(default) | CORE | ALL>

    Which Haar feature set to use during training. BASIC uses only upright features, while ALL uses the full set of upright and 45-degree rotated features.

    5. detectMultiScale parameters

    The function detects objects at multiple scales of the input image:

    image - the input grayscale image;

    objects - the output vector of detected object rectangles;

    scaleFactor - how much the search scale shrinks at each image-pyramid step; default 1.1;

    minNeighbors - how many neighboring candidate rectangles each detection must retain; default 3, i.e. a target must be detected at least 3 times to be accepted;

    flags - CV_HAAR_DO_CANNY_PRUNING: use a Canny edge detector to reject regions with too few or too many edges;
            CV_HAAR_SCALE_IMAGE: scale the image normally for detection;
            CV_HAAR_FIND_BIGGEST_OBJECT: detect only the largest object;
            CV_HAAR_DO_ROUGH_SEARCH: do only a rough search. Default is 0;

    minSize and maxSize - bound the size of the returned object regions (the search starts near maxSize and shrinks by the 1.1 factor until it drops below minSize).


    6. OpenCV's introduction to Haar

    (haarfeatures.cpp, OpenCV 3.0)

    Detailed Description

    Haar Feature-based Cascade Classifier for Object Detection

    The object detector described below has been initially proposed by Paul Viola [pdf] and improved by Rainer Lienhart [pdf].

    First, a classifier (namely a cascade of boosted classifiers working with haar-like features) is trained with a few hundred sample views of a particular object (i.e., a face or a car), called positive examples, that are scaled to the same size (say, 20x20), and negative examples - arbitrary images of the same size.

    After a classifier is trained, it can be applied to a region of interest (of the same size as used during the training) in an input image. The classifier outputs a "1" if the region is likely to show the object (i.e., face/car), and "0" otherwise. To search for the object in the whole image one can move the search window across the image and check every location using the classifier. The classifier is designed so that it can be easily "resized" in order to be able to find the objects of interest at different sizes, which is more efficient than resizing the image itself. So, to find an object of an unknown size in the image the scan procedure should be done several times at different scales.

    The word "cascade" in the classifier name means that the resultant classifier consists of several simpler classifiers (stages) that are applied subsequently to a region of interest until at some stage the candidate is rejected or all the stages are passed. The word "boosted" means that the classifiers at every stage of the cascade are complex themselves and they are built out of basic classifiers using one of four different boosting techniques (weighted voting). Currently Discrete Adaboost, Real Adaboost, Gentle Adaboost and Logitboost are supported. The basic classifiers are decision-tree classifiers with at least 2 leaves. Haar-like features are the input to the basic classifiers, and are calculated as described below. The current algorithm uses the following Haar-like features:

    (figure of the Haar-like feature prototypes omitted)

    The feature used in a particular classifier is specified by its shape (1a, 2b etc.), position within the region of interest and the scale (this scale is not the same as the scale used at the detection stage, though these two scales are multiplied). For example, in the case of the third line feature (2c) the response is calculated as the difference between the sum of image pixels under the rectangle covering the whole feature (including the two white stripes and the black stripe in the middle) and the sum of the image pixels under the black stripe multiplied by 3 in order to compensate for the differences in the size of areas. The sums of pixel values over rectangular regions are calculated rapidly using integral images (see below and the integral description).

    To see the object detector at work, have a look at the facedetect demo: https://github.com/Itseez/opencv/tree/master/samples/cpp/dbt_face_detection.cpp

    The following reference is for the detection part only. There is a separate application called opencv_traincascade that can train a cascade of boosted classifiers from a set of samples.

    Note

    In the new C++ interface it is also possible to use LBP (local binary pattern) features in addition to Haar-like features. [Viola01] Paul Viola and Michael J. Jones. Rapid Object Detection using a Boosted Cascade of Simple Features. IEEE CVPR, 2001. The paper is available online at https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf (mentioned above).


    7. Boosting in OpenCV

    (boost.cpp, OpenCV 3.0)

    Boosting

    A common machine learning task is supervised learning. In supervised learning, the goal is to learn the functional relationship F: y = F(x) between the input x and the output y. Predicting the qualitative output is called classification, while predicting the quantitative output is called regression.

    Boosting is a powerful learning concept that provides a solution to the supervised classification learning task. It combines the performance of many “weak” classifiers to produce a powerful committee [125]. A weak classifier is only required to be better than chance, and thus can be very simple and computationally inexpensive. However, many of them smartly combined yield a strong classifier that often outperforms most “monolithic” strong classifiers such as SVMs and Neural Networks.

    Decision trees are the most popular weak classifiers used in boosting schemes. Often the simplest decision trees with only a single split node per tree (called stumps ) are sufficient.

    The boosted model is based on N training examples (x_i, y_i), i = 1, ..., N, with x_i ∈ R^K and y_i ∈ {−1, +1}. x_i is a K-component vector. Each component encodes a feature relevant to the learning task at hand. The desired two-class output is encoded as −1 and +1.

    Different variants of boosting are known as Discrete AdaBoost, Real AdaBoost, LogitBoost, and Gentle AdaBoost [49]. All of them are very similar in their overall structure. Therefore, this chapter focuses only on the standard two-class Discrete AdaBoost algorithm, outlined below. Initially the same weight is assigned to each sample (step 2). Then, a weak classifier f_m(x) is trained on the weighted training data (step 3a). Its weighted training error and scaling factor c_m are computed (step 3b). The weights are increased for training samples that have been misclassified (step 3c). All weights are then normalized, and the process of finding the next weak classifier continues for another M − 1 times. The final classifier F(x) is the sign of the weighted sum over the individual weak classifiers (step 4).

    Two-class Discrete AdaBoost Algorithm

    • Set N examples (x_i, y_i), i = 1, ..., N, with x_i ∈ R^K, y_i ∈ {−1, +1}.
    • Assign weights w_i = 1/N, i = 1, ..., N.
    • Repeat for m = 1, 2, ..., M:
      • Fit the classifier f_m(x) ∈ {−1, 1} using weights w_i on the training data.
      • Compute err_m = E_w[1_(y ≠ f_m(x))] and c_m = log((1 − err_m)/err_m).
      • Set w_i ← w_i · exp[c_m · 1_(y_i ≠ f_m(x_i))], i = 1, 2, ..., N, and renormalize so that Σ_i w_i = 1.

    • Classify new samples x using the formula: sign(Σ_{m=1}^{M} c_m f_m(x)).
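The four steps above can be sketched as a toy implementation, using 1-D decision stumps as the weak classifiers f_m. All names here (Stump, fitStump, adaboostTrain) are illustrative, not OpenCV's:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Stump { double thresh; int polarity; double cm; };

// Step 3a: fit the stump (threshold + polarity) with minimum weighted error,
// then compute its scaling factor c_m (step 3b).
static Stump fitStump(const std::vector<double>& x, const std::vector<int>& y,
                      const std::vector<double>& w) {
    Stump best{0.0, 1, 0.0};
    double bestErr = 1e9;
    for (size_t i = 0; i < x.size(); ++i) {
        for (int pol : {1, -1}) {
            double err = 0.0;
            for (size_t j = 0; j < x.size(); ++j) {
                int pred = (x[j] < x[i] ? -pol : pol);
                if (pred != y[j]) err += w[j];
            }
            if (err < bestErr) { bestErr = err; best = {x[i], pol, 0.0}; }
        }
    }
    best.cm = std::log((1.0 - bestErr) / std::max(bestErr, 1e-10));
    return best;
}

// Steps 2-3: equal initial weights, then M rounds of fit / reweight / renormalize.
std::vector<Stump> adaboostTrain(const std::vector<double>& x,
                                 const std::vector<int>& y, int M) {
    size_t N = x.size();
    std::vector<double> w(N, 1.0 / N);
    std::vector<Stump> ensemble;
    for (int m = 0; m < M; ++m) {
        Stump s = fitStump(x, y, w);
        double sum = 0.0;
        for (size_t i = 0; i < N; ++i) {
            int pred = (x[i] < s.thresh ? -s.polarity : s.polarity);
            if (pred != y[i]) w[i] *= std::exp(s.cm);   // step 3c
            sum += w[i];
        }
        for (double& wi : w) wi /= sum;                 // renormalize
        ensemble.push_back(s);
    }
    return ensemble;
}

// Step 4: the sign of the weighted sum over the weak classifiers.
int adaboostPredict(const std::vector<Stump>& ens, double x) {
    double F = 0.0;
    for (const Stump& s : ens)
        F += s.cm * (x < s.thresh ? -s.polarity : s.polarity);
    return F >= 0 ? 1 : -1;
}
```

On a linearly separable 1-D set the first stump already separates the classes and later rounds just reinforce it; on harder data the reweighting forces later stumps to focus on the misclassified samples.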
    Note
    Similar to the classical boosting methods, the current implementation supports two-class classifiers only. For M > 2 classes, there is the AdaBoost.MH algorithm (described in [49]) that reduces the problem to the two-class problem, yet with a much larger training set.

    To reduce computation time for boosted models without substantially losing accuracy, the influence trimming technique can be employed. As the training algorithm proceeds and the number of trees in the ensemble is increased, a larger number of the training samples are classified correctly and with increasing confidence, thereby those samples receive smaller weights on the subsequent iterations. Examples with a very low relative weight have a small impact on the weak classifier training. Thus, such examples may be excluded during the weak classifier training without having much effect on the induced classifier. This process is controlled with the weight_trim_rate parameter. Only examples with the summary fraction weight_trim_rate of the total weight mass are used in the weak classifier training. Note that the weights for all training examples are recomputed at each training iteration. Examples deleted at a particular iteration may be used again for learning some of the weak classifiers further [49].
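As a rough sketch of influence trimming (not OpenCV's exact code): keep only the highest-weight samples whose cumulative weight reaches weight_trim_rate of the total mass, and skip the rest for this iteration:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <numeric>
#include <vector>

// Returns the indices of the samples used for this round of weak-classifier
// training: samples are taken in order of decreasing weight until their
// cumulative weight reaches trimRate of the total weight mass.
std::vector<size_t> trimSamples(const std::vector<double>& w, double trimRate) {
    std::vector<size_t> idx(w.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),
              [&](size_t a, size_t b) { return w[a] > w[b]; });
    double total = std::accumulate(w.begin(), w.end(), 0.0);
    std::vector<size_t> kept;
    double acc = 0.0;
    for (size_t i : idx) {
        kept.push_back(i);
        acc += w[i];
        if (acc >= trimRate * total) break;
    }
    return kept;
}
```

Since the full weight vector is recomputed every iteration, a sample trimmed in one round can re-enter later once its relative weight grows again.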

    See also
    cv::ml::Boost

    Prediction with Boost

    StatModel::predict(samples, results, flags) should be used. Pass flags=StatModel::RAW_OUTPUT to get the raw sum from the Boost classifier.

    8、關(guān)于訓(xùn)練過程打印信息的解釋

    1) POS count : consumed   n1 : n2

    每次都調(diào)用updateTrainingSet( requiredLeafFARate, tempLeafFARate );函數(shù)

```cpp
bool CvCascadeClassifier::updateTrainingSet( double minimumAcceptanceRatio, double& acceptanceRatio)
{
    int64 posConsumed = 0, negConsumed = 0;
    imgReader.restart();
    int posCount = fillPassedSamples( 0, numPos, true, 0, posConsumed );
    if( !posCount )
        return false;
    // This is the printed line: as I understand it, the number of positives
    // accepted by the current cascade versus the number of positives consumed
    // to obtain them.
    cout << "POS count : consumed   " << posCount << " : " << (int)posConsumed << endl;

    // apply only a fraction of negative samples; double is required since overflow is possible
    int proNumNeg = cvRound( ( ((double)numNeg) * ((double)posCount) ) / numPos );
    int negCount = fillPassedSamples( posCount, proNumNeg, false, minimumAcceptanceRatio, negConsumed );
    if ( !negCount )
        return false;

    curNumSamples = posCount + negCount;
    acceptanceRatio = negConsumed == 0 ? 0 : ( (double)negCount/(double)(int64)negConsumed );
    // Printed line: the number of negatives accepted and the fraction of
    // consumed negatives that passed the current cascade.
    cout << "NEG count : acceptanceRatio    " << negCount << " : " << acceptanceRatio << endl;
    return true;
}
```
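As a small numeric sketch of the two expressions above (function names are mine, mirroring the formulas, not OpenCV's API): proNumNeg scales the requested negatives by the fraction of positives actually obtained, and acceptanceRatio is negatives accepted over negatives consumed:

```cpp
#include <cassert>
#include <cmath>

// Mirror of: cvRound( ((double)numNeg * (double)posCount) / numPos )
int proNumNeg(int numNeg, int posCount, int numPos) {
    return (int)std::lround(((double)numNeg * (double)posCount) / numPos);
}

// Mirror of: negConsumed == 0 ? 0 : (double)negCount / (double)negConsumed
double acceptanceRatio(int negCount, long long negConsumed) {
    return negConsumed == 0 ? 0.0 : (double)negCount / (double)negConsumed;
}
```

For example, with numPos = 1000, numNeg = 500 and posCount = 980, only 490 negatives are requested; if 9800 negatives had to be consumed to find 490 that pass, the acceptance ratio is 0.05, i.e. the cascade so far rejects 95% of negative windows.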
```cpp
int CvCascadeClassifier::fillPassedSamples( int first, int count, bool isPositive, double minimumAcceptanceRatio, int64& consumed )
{
    int getcount = 0;
    Mat img(cascadeParams.winSize, CV_8UC1);
    for( int i = first; i < first + count; i++ )
    {
        for( ; ; )
        {
            // Stop early once the acceptance ratio has dropped to the required level.
            if( consumed != 0 && ((double)getcount+1)/(double)(int64)consumed <= minimumAcceptanceRatio )
                return getcount;

            bool isGetImg = isPositive ? imgReader.getPos( img ) :
                                         imgReader.getNeg( img );
            if( !isGetImg )
                return getcount;
            consumed++;   // every sample read counts as consumed

            featureEvaluator->setImage( img, isPositive ? 1 : 0, i );
            // Keep only samples that the current cascade still classifies as positive.
            if( predict( i ) == 1.0F )
            {
                getcount++;
                printf("%s current samples: %d\r", isPositive ? "POS":"NEG", getcount);
                break;
            }
        }
    }
    return getcount;
}
```
```cpp
int CvCascadeClassifier::predict( int sampleIdx )
{
    CV_DbgAssert( sampleIdx < numPos + numNeg );
    // A sample is positive only if it passes every stage trained so far.
    for (vector< Ptr<CvCascadeBoost> >::iterator it = stageClassifiers.begin();
        it != stageClassifiers.end(); it++ )
    {
        if ( (*it)->predict( sampleIdx ) == 0.f )
            return 0;
    }
    return 1;
}
```
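The early-rejection structure of predict() can be sketched independently of the OpenCV types. In this hypothetical Stage type, each stage compares a weighted sum of weak-classifier outputs against its threshold, and a window is accepted only if every stage passes; evaluation stops at the first rejection:

```cpp
#include <cassert>
#include <functional>
#include <vector>

struct Stage {
    std::vector<std::function<double(double)>> weak;  // weak-classifier outputs
    double threshold;
    // Mirrors: sum < threshold - eps ? reject : accept
    bool pass(double x) const {
        double sum = 0.0;
        for (const auto& f : weak) sum += f(x);
        return sum >= threshold;
    }
};

int cascadePredict(const std::vector<Stage>& stages, double x) {
    for (const Stage& s : stages)
        if (!s.pass(x)) return 0;  // rejected at some stage, stop here
    return 1;                      // passed every stage
}
```

This early exit is what makes cascades fast at detection time: most windows are rejected by the first cheap stages and never reach the expensive later ones.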
```cpp
float CvCascadeBoost::predict( int sampleIdx, bool returnSum ) const
{
    CV_Assert( weak );
    double sum = 0;
    CvSeqReader reader;
    cvStartReadSeq( weak, &reader );
    cvSetSeqReaderPos( &reader, 0 );
    // Accumulate the outputs of all weak trees in this stage.
    for( int i = 0; i < weak->total; i++ )
    {
        CvBoostTree* wtree;
        CV_READ_SEQ_ELEM( wtree, reader );
        sum += ((CvCascadeBoostTree*)wtree)->predict(sampleIdx)->value;
    }
    // Unless the raw sum is requested, threshold it into a 0/1 stage decision.
    if( !returnSum )
        sum = sum < threshold - CV_THRESHOLD_EPS ? 0.0 : 1.0;
    return (float)sum;
}
```






    總結(jié)

    以上是生活随笔為你收集整理的Opencv 特征训练分类器的全部內(nèi)容,希望文章能夠幫你解決所遇到的問題。

    如果覺得生活随笔網(wǎng)站內(nèi)容還不錯,歡迎將生活随笔推薦給好友。