
The Decision Tree (cv::ml::DTrees) Interface in OpenCV 3.3: Overview and Usage


OpenCV 3.3 provides an implementation of the decision tree (Decision Trees) algorithm in the cv::ml::DTrees class. The class is declared in include/opencv2/ml.hpp and implemented in modules/ml/src/tree.cpp. Its main members are:

(1) cv::ml::DTrees inherits from cv::ml::StatModel, which in turn inherits from cv::Algorithm;

(2) create: a static function that allocates a DTreesImpl object to create a DTrees instance;

(3) setMaxCategories/getMaxCategories: set/get the maximum number of categories; the default is 10;

(4) setMaxDepth/getMaxDepth: set/get the maximum possible tree depth; the default is INT_MAX;

(5) setMinSampleCount/getMinSampleCount: set/get the minimum number of samples required to split a node; the default is 10;

(6) setCVFolds/getCVFolds: set/get the number of cross-validation folds; the default is 10; if the value is greater than 1, cross-validation is used to prune the built tree;

(7) setUseSurrogates/getUseSurrogates: set/get whether surrogate splits are used; the default is false;

(8) setUse1SERule/getUse1SERule: set/get whether the 1-SE rule is applied for harsher pruning; the default is true;

(9) setTruncatePrunedTree/getTruncatePrunedTree: set/get whether pruned branches are physically removed from the tree; the default is true;

(10) setRegressionAccuracy/getRegressionAccuracy: set/get the termination criterion for regression trees; the default is 0.01;

(11) setPriors/getPriors: set/get the prior class probabilities, used to bias the tree toward particular classes; the default is an empty Mat;

(12) getRoots: get the indices of the root nodes;

(13) getNodes: get all the nodes of the trees;

(14) getSplits: get all the splits;

(15) getSubsets: get all the bitsets for categorical splits;

(16) load: load a serialized model from a file.

關(guān)于決策樹算法的簡(jiǎn)介可以參考:http://blog.csdn.net/fengbingchun/article/details/78880934

The images below were extracted from the MNIST dataset, 20 for each of the four classes 0, 1, 2, and 3. For each class, the first 10 come from the training set and are used for training; the last 10 come from the test set and are used for testing:


關(guān)于MNIST的介紹可以參考:http://blog.csdn.net/fengbingchun/article/details/49611549

The test code is as follows:

#include "opencv.hpp"
#include <string>
#include <vector>
#include <memory>
#include <algorithm>
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
#include "common.hpp"

// Decision Tree
int test_opencv_decision_tree_train()
{
	const std::string image_path{ "E:/GitCode/NN_Test/data/images/digit/handwriting_0_and_1/" };
	cv::Mat tmp = cv::imread(image_path + "0_1.jpg", 0);
	CHECK(tmp.data != nullptr);

	const int train_samples_number{ 40 };
	const int every_class_number{ 10 };

	cv::Mat train_data(train_samples_number, tmp.rows * tmp.cols, CV_32FC1);
	cv::Mat train_labels(train_samples_number, 1, CV_32FC1);
	float* p = (float*)train_labels.data;
	for (int i = 0; i < 4; ++i) {
		std::for_each(p + i * every_class_number, p + (i + 1) * every_class_number, [i](float& v){ v = (float)i; });
	}

	// train data
	for (int i = 0; i < 4; ++i) {
		static const std::vector<std::string> digit{ "0_", "1_", "2_", "3_" };
		static const std::string suffix{ ".jpg" };

		for (int j = 1; j <= every_class_number; ++j) {
			std::string image_name = image_path + digit[i] + std::to_string(j) + suffix;
			cv::Mat image = cv::imread(image_name, 0);
			CHECK(!image.empty() && image.isContinuous());
			image.convertTo(image, CV_32FC1);

			image = image.reshape(0, 1);
			tmp = train_data.rowRange(i * every_class_number + j - 1, i * every_class_number + j);
			image.copyTo(tmp);
		}
	}

	cv::Ptr<cv::ml::DTrees> dtree = cv::ml::DTrees::create();
	dtree->setMaxCategories(4);
	dtree->setMaxDepth(10);
	dtree->setMinSampleCount(10);
	dtree->setCVFolds(0);
	dtree->setUseSurrogates(false);
	dtree->setUse1SERule(false);
	dtree->setTruncatePrunedTree(false);
	dtree->setRegressionAccuracy(0);
	dtree->setPriors(cv::Mat());

	dtree->train(train_data, cv::ml::ROW_SAMPLE, train_labels);

	const std::string save_file{ "E:/GitCode/NN_Test/data/decision_tree_model.xml" }; // .xml, .yaml, .json
	dtree->save(save_file);

	return 0;
}

int test_opencv_decision_tree_predict()
{
	const std::string image_path{ "E:/GitCode/NN_Test/data/images/digit/handwriting_0_and_1/" };
	const std::string load_file{ "E:/GitCode/NN_Test/data/decision_tree_model.xml" }; // .xml, .yaml, .json

	const int predict_samples_number{ 40 };
	const int every_class_number{ 10 };

	cv::Mat tmp = cv::imread(image_path + "0_1.jpg", 0);
	CHECK(tmp.data != nullptr);

	// predict data
	cv::Mat predict_data(predict_samples_number, tmp.rows * tmp.cols, CV_32FC1);
	for (int i = 0; i < 4; ++i) {
		static const std::vector<std::string> digit{ "0_", "1_", "2_", "3_" };
		static const std::string suffix{ ".jpg" };

		for (int j = 11; j <= every_class_number + 10; ++j) {
			std::string image_name = image_path + digit[i] + std::to_string(j) + suffix;
			cv::Mat image = cv::imread(image_name, 0);
			CHECK(!image.empty() && image.isContinuous());
			image.convertTo(image, CV_32FC1);

			image = image.reshape(0, 1);
			tmp = predict_data.rowRange(i * every_class_number + j - 10 - 1, i * every_class_number + j - 10);
			image.copyTo(tmp);
		}
	}

	cv::Mat result;
	cv::Ptr<cv::ml::DTrees> dtrees = cv::ml::DTrees::load(load_file);
	dtrees->predict(predict_data, result);
	CHECK(result.rows == predict_samples_number);

	cv::Mat predict_labels(predict_samples_number, 1, CV_32FC1);
	float* p = (float*)predict_labels.data;
	for (int i = 0; i < 4; ++i) {
		std::for_each(p + i * every_class_number, p + (i + 1) * every_class_number, [i](float& v){ v = (float)i; });
	}

	int count{ 0 };
	for (int i = 0; i < predict_samples_number; ++i) {
		float value1 = ((float*)predict_labels.data)[i];
		float value2 = ((float*)result.data)[i];
		fprintf(stdout, "expected value: %f, actual value: %f\n", value1, value2);

		if (int(value1) == int(value2)) ++count;
	}
	fprintf(stdout, "accuracy: %f\n", count * 1.f / predict_samples_number);

	return 0;
}
The results are shown below. Because the number of training samples is small, the recognition accuracy is only 72.5%; it can be improved by increasing the number of training samples.


GitHub: https://github.com/fengbingchun/NN_Test
