

Published: 2023/12/20

hdl_localization Code Walkthrough

Introduction

hdl_localization performs laser relocalization on a prebuilt point cloud map by fusing NDT scan-matching results within a UKF filtering framework. When localizing a robot with a 16-beam LiDAR, it achieves good results even without an IMU.

Installing Dependencies

Libraries

1. ROS Melodic
2. PCL 1.8
3. OpenMP
4. Eigen 3.3 (or later)

ROS Packages

1.ndt_omp
2.fast-gicp
3.hdl_global_localization

Running

```shell
rosparam set use_sim_time true
roslaunch hdl_localization hdl_localization.launch
roscd hdl_localization/rviz
rviz -d hdl_localization.rviz
```
Notes

1. If the initial pose for pure localization is near the origin of the map frame, you can set "specify_init_pose" to "true" in the launch file. The default position (0, 0, 0) and default rotation quaternion (0, 0, 0, 1) then give the point cloud a good initial state for subsequent matching and relocalization.
2. To relocalize from an arbitrary position in the map, start rviz -d hdl_localization.rviz, click 2D Pose Estimate at the top of the rviz window, then left-click in the map and drag to choose a position and heading close to the true ones.

Results

UKF Background (Unscented Kalman Filter)

Reference link
After a nonlinear transformation, the original Gaussian distribution is really an irregular, non-Gaussian distribution. In the EKF we approximate the nonlinear function with a Taylor expansion around the Gaussian mean, obtaining an approximately Gaussian result. The problem is that this approximation is sometimes not accurate enough: a single mean point often cannot capture such an irregular distribution. This is where the Unscented Kalman Filter (UKF) comes in: the unscented transform approximates the result of the nonlinear transformation as a Gaussian.
The unscented transform proceeds in these steps:

1. Compute the set of sigma points.

2. Assign a weight to each sigma point.

3. Pass every individual sigma point through the nonlinear function f.

4. Approximate the transformed, weighted points with a new Gaussian distribution.

5. Compute the mean and covariance of the new Gaussian.
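As a sanity check, the five steps above can be run on a 1-D toy problem. The function f(x) = x², the mean, variance, and λ = 1 below are illustrative choices for this sketch, not values taken from hdl_localization:

```python
import math

# 1-D Gaussian to be pushed through a nonlinear function
mean, var = 1.0, 0.25
lam = 1.0            # UKF scaling parameter (illustrative choice)
n = 1                # state dimension

f = lambda x: x * x  # the nonlinear transform

# 1. sigma points: the mean, plus/minus sqrt((n + lam) * var)
spread = math.sqrt((n + lam) * var)
sigma = [mean, mean + spread, mean - spread]

# 2. weights: lam/(n+lam) for the center, 1/(2(n+lam)) for the rest
w = [lam / (n + lam)] + [1.0 / (2 * (n + lam))] * 2

# 3. propagate every sigma point through f
y = [f(s) for s in sigma]

# 4-5. approximate the transformed distribution by its weighted mean/variance
y_mean = sum(wi * yi for wi, yi in zip(w, y))
y_var = sum(wi * (yi - y_mean) ** 2 for wi, yi in zip(w, y))
```

For f(x) = x² the sigma-point estimate of the transformed mean, 1.25, happens to equal the exact value E[x²] = μ² + σ².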

Code Reading

Overview

The project is managed with nodelets. apps contains the two node classes, i.e. the actual implementations. include holds the pose estimator and the unscented Kalman filter. launch contains the launch files, rviz the rviz configuration, and data a sample point cloud map for localization.

launch

It defines a few parameters and uses nodelet to run three nodes: velodyne_nodelet_manager, globalmap_server_nodelet, and hdl_localization_nodelet. For pure simulation, you can add the following before the arguments:

```xml
<param name="use_sim_time" value="true"/>
```

apps (node implementations)

This folder contains just two cpp files, each directly subclassing nodelet::Nodelet.

globalmap_server_nodelet

The class GlobalmapServerNodelet inherits from nodelet::Nodelet.
On the ROS side it declares three node handles, one publisher, one timer, and one variable for the global map.

```cpp
ros::NodeHandle nh;
ros::NodeHandle mt_nh;
ros::NodeHandle private_nh;

ros::Publisher globalmap_pub;
ros::WallTimer globalmap_pub_timer;
pcl::PointCloud<PointT>::Ptr globalmap;
```
globalmap_server_nodelet::onInit()

This overrides the nodelet's initialization function and uses a timer to trigger the publish callback.

```cpp
void onInit() override {
  // the three node handles
  nh = getNodeHandle();
  mt_nh = getMTNodeHandle();
  private_nh = getPrivateNodeHandle();

  initialize_params();

  // publish globalmap with "latched" publisher
  globalmap_pub = nh.advertise<sensor_msgs::PointCloud2>("/globalmap", 5, true);
  // one-shot timer: fires once, 0.05 s after startup
  globalmap_pub_timer = nh.createWallTimer(ros::WallDuration(0.05), &GlobalmapServerNodelet::pub_once_cb, this, true, true);
}
```
globalmap_server_nodelet::initialize_params()

initialize_params() reads the map pcd file and downsamples the cloud; the final globalmap is the downsampled map.

```cpp
void initialize_params() {
  // read globalmap from a pcd file
  std::string globalmap_pcd = private_nh.param<std::string>("globalmap_pcd", "");
  globalmap.reset(new pcl::PointCloud<PointT>());
  pcl::io::loadPCDFile(globalmap_pcd, *globalmap);
  globalmap->header.frame_id = "map";

  // optional: shift the map by a UTM reference (latitude/longitude-like
  // coordinates) stored in a ".utm" sidecar file; in practice this branch is
  // never reached when no such file exists
  std::ifstream utm_file(globalmap_pcd + ".utm");
  if (utm_file.is_open() && private_nh.param<bool>("convert_utm_to_local", true)) {
    std::cout << "now utf_file is open" << std::endl;
    double utm_easting;
    double utm_northing;
    double altitude;
    utm_file >> utm_easting >> utm_northing >> altitude;
    for(auto& pt : globalmap->points) {
      pt.getVector3fMap() -= Eigen::Vector3f(utm_easting, utm_northing, altitude);
    }
    ROS_INFO_STREAM("Global map offset by UTM reference coordinates (x = " << utm_easting << ", y = " << utm_northing << ") and altitude (z = " << altitude << ")");
  }

  // downsample globalmap
  double downsample_resolution = private_nh.param<double>("downsample_resolution", 0.1);
  boost::shared_ptr<pcl::VoxelGrid<PointT>> voxelgrid(new pcl::VoxelGrid<PointT>());
  voxelgrid->setLeafSize(downsample_resolution, downsample_resolution, downsample_resolution);
  voxelgrid->setInputCloud(globalmap);

  pcl::PointCloud<PointT>::Ptr filtered(new pcl::PointCloud<PointT>());
  voxelgrid->filter(*filtered);
  globalmap = filtered;
}
```
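To make the voxel-grid idea concrete, here is a minimal Python stand-in for what pcl::VoxelGrid does conceptually (not its actual implementation): every point is binned into a cube of the leaf size, and each occupied cube is replaced by the centroid of its points. The toy cloud and leaf size below are made up:

```python
from collections import defaultdict

def voxel_downsample(points, leaf=0.1):
    """Average all points that fall into the same leaf-sized cube."""
    bins = defaultdict(list)
    for p in points:
        key = tuple(int(c // leaf) for c in p)  # voxel index of the point
        bins[key].append(p)
    # one centroid per occupied voxel
    return [tuple(sum(c) / len(ps) for c in zip(*ps)) for ps in bins.values()]

cloud = [(0.01, 0.02, 0.0), (0.03, 0.04, 0.0), (0.51, 0.0, 0.0)]
filtered = voxel_downsample(cloud, leaf=0.1)
# the first two points share a voxel, so three points collapse to two
```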

The map is then published by pub_once_cb (set up in onInit). Since the timer is created as one-shot and the publisher is latched, the map is published once, 0.05 s after startup, and retained for late subscribers.

```cpp
void pub_once_cb(const ros::WallTimerEvent& event) {
  globalmap_pub.publish(globalmap);
}
```

hdl_localization_nodelet

The class HdlLocalizationNodelet inherits from nodelet::Nodelet; start with its initialization.

hdl_localization_nodelet::onInit()
```cpp
void onInit() override {
  // the usual three node handles
  nh = getNodeHandle();
  mt_nh = getMTNodeHandle();
  private_nh = getPrivateNodeHandle();

  // processing-time statistics use boost::circular_buffer<double>
  processing_time.resize(16);
  initialize_params();

  // default "base_link" is overridden in the launch file (to "velodyne");
  // several other parameters are also rewritten there, compare the two to see which
  odom_child_frame_id = private_nh.param<std::string>("odom_child_frame_id", "base_link");

  // whether to use an IMU, and whether it is mounted upside down
  use_imu = private_nh.param<bool>("use_imu", true);
  invert_imu = private_nh.param<bool>("invert_imu", false);
  if(use_imu) {
    NODELET_INFO("enable imu-based prediction");
    imu_sub = mt_nh.subscribe("/gpsimu_driver/imu_data", 256, &HdlLocalizationNodelet::imu_callback, this);
  }

  // subscriptions: point cloud, global map, and the initial pose
  // (initialpose_sub only serves the pose set from rviz)
  points_sub = mt_nh.subscribe("/velodyne_points", 5, &HdlLocalizationNodelet::points_callback, this);
  globalmap_sub = nh.subscribe("/globalmap", 1, &HdlLocalizationNodelet::globalmap_callback, this);
  initialpose_sub = nh.subscribe("/initialpose", 8, &HdlLocalizationNodelet::initialpose_callback, this);

  // publishers: odometry and the aligned point cloud
  pose_pub = nh.advertise<nav_msgs::Odometry>("/odom", 5, false);
  aligned_pub = nh.advertise<sensor_msgs::PointCloud2>("/aligned_points", 5, false);
}
```
hdl_localization_nodelet::initialize_params()

Initializes the parameters.

```cpp
void initialize_params() {
  // initialize scan matching method
  double downsample_resolution = private_nh.param<double>("downsample_resolution", 0.1);
  std::string ndt_neighbor_search_method = private_nh.param<std::string>("ndt_neighbor_search_method", "DIRECT7");
  double ndt_resolution = private_nh.param<double>("ndt_resolution", 1.0);

  boost::shared_ptr<pcl::VoxelGrid<PointT>> voxelgrid(new pcl::VoxelGrid<PointT>());
  voxelgrid->setLeafSize(downsample_resolution, downsample_resolution, downsample_resolution);
  downsample_filter = voxelgrid;

  // NDT and GICP registration objects
  pclomp::NormalDistributionsTransform<PointT, PointT>::Ptr ndt(new pclomp::NormalDistributionsTransform<PointT, PointT>());
  pclomp::GeneralizedIterativeClosestPoint<PointT, PointT>::Ptr gicp(new pclomp::GeneralizedIterativeClosestPoint<PointT, PointT>());

  // NDT parameters and neighborhood search method; DIRECT7 by default,
  // the author suggests trying DIRECT1 if results are poor
  ndt->setTransformationEpsilon(0.01);
  ndt->setResolution(ndt_resolution);
  if(ndt_neighbor_search_method == "DIRECT1") {
    NODELET_INFO("search_method DIRECT1 is selected");
    ndt->setNeighborhoodSearchMethod(pclomp::DIRECT1);
    registration = ndt;
  } else if(ndt_neighbor_search_method == "DIRECT7") {
    NODELET_INFO("search_method DIRECT7 is selected");
    ndt->setNeighborhoodSearchMethod(pclomp::DIRECT7);
    registration = ndt;
  } else if(ndt_neighbor_search_method == "GICP_OMP"){
    NODELET_INFO("search_method GICP_OMP is selected");
    registration = gicp;
  } else {
    if(ndt_neighbor_search_method == "KDTREE") {
      NODELET_INFO("search_method KDTREE is selected");
    } else {
      NODELET_WARN("invalid search method was given");
      NODELET_WARN("default method is selected (KDTREE)");
    }
    ndt->setNeighborhoodSearchMethod(pclomp::KDTREE);
    registration = ndt;
  }

  // initialize pose estimator with the configured starting pose
  if(private_nh.param<bool>("specify_init_pose", true)) {
    NODELET_INFO("initialize pose estimator with specified parameters!!");
    pose_estimator.reset(new hdl_localization::PoseEstimator(
      registration,
      ros::Time::now(),
      Eigen::Vector3f(private_nh.param<double>("init_pos_x", 0.0), private_nh.param<double>("init_pos_y", 0.0), private_nh.param<double>("init_pos_z", 0.0)),
      Eigen::Quaternionf(private_nh.param<double>("init_ori_w", 1.0), private_nh.param<double>("init_ori_x", 0.0), private_nh.param<double>("init_ori_y", 0.0), private_nh.param<double>("init_ori_z", 0.0)),
      private_nh.param<double>("cool_time_duration", 0.5)));
  }
}
```

---------- Callback processing ----------
The callbacks actually used are HdlLocalizationNodelet::imu_callback, HdlLocalizationNodelet::points_callback, and HdlLocalizationNodelet::globalmap_callback, subscribing to the "/gpsimu_driver/imu_data", "/velodyne_points", and "/globalmap" topics respectively.
First, HdlLocalizationNodelet::globalmap_callback: it receives the global map and converts the ROS message into a point cloud.

HdlLocalizationNodelet::globalmap_callback
```cpp
void globalmap_callback(const sensor_msgs::PointCloud2ConstPtr& points_msg) {
  NODELET_INFO("globalmap received!");
  pcl::PointCloud<PointT>::Ptr cloud(new pcl::PointCloud<PointT>());
  pcl::fromROSMsg(*points_msg, *cloud);
  globalmap = cloud;

  // the global map (published by globalmap_server_nodelet) becomes the
  // target cloud for registration
  registration->setInputTarget(globalmap);
}
```
HdlLocalizationNodelet::imu_callback

Receives IMU messages and pushes them into imu_data; they are consumed in HdlLocalizationNodelet::points_callback.

```cpp
void imu_callback(const sensor_msgs::ImuConstPtr& imu_msg) {
  std::lock_guard<std::mutex> lock(imu_data_mutex);
  imu_data.push_back(imu_msg);
}
```
HdlLocalizationNodelet::points_callback

Takes a point cloud in and produces a pose.

```cpp
void points_callback(const sensor_msgs::PointCloud2ConstPtr& points_msg) {
  std::lock_guard<std::mutex> estimator_lock(pose_estimator_mutex);
  if(!pose_estimator) {
    // the pose estimator has not been initialized yet
    NODELET_ERROR("waiting for initial pose input!!");
    return;
  }
  if(!globalmap) {
    // the global map has not arrived yet
    NODELET_ERROR("globalmap has not been received!!");
    return;
  }

  // convert the ROS message into a point cloud
  const auto& stamp = points_msg->header.stamp;
  pcl::PointCloud<PointT>::Ptr pcl_cloud(new pcl::PointCloud<PointT>());
  pcl::fromROSMsg(*points_msg, *pcl_cloud);

  if(pcl_cloud->empty()) {
    NODELET_ERROR("cloud is empty!!");
    return;
  }

  // transform pointcloud into odom_child_frame_id
  // (note: this tf is published by this node itself, a chicken-and-egg
  // situation the original author flagged as a TODO)
  pcl::PointCloud<PointT>::Ptr cloud(new pcl::PointCloud<PointT>());
  if(!pcl_ros::transformPointCloud(odom_child_frame_id, *pcl_cloud, *cloud, this->tf_listener)) {
    NODELET_ERROR("point cloud cannot be transformed into target frame!!");
    return;
  }

  // downsample the scan (helper defined later in this file)
  auto filtered = downsample(cloud);

  // predict
  if(!use_imu) {
    // without an IMU, predict with zero control input
    pose_estimator->predict(stamp, Eigen::Vector3f::Zero(), Eigen::Vector3f::Zero());
  } else {
    std::lock_guard<std::mutex> lock(imu_data_mutex);
    auto imu_iter = imu_data.begin();
    // iterate over the buffered IMU messages
    for(imu_iter; imu_iter != imu_data.end(); imu_iter++) {
      // stop once the IMU message is newer than the scan:
      // the IMU drives prediction, the scan provides the observation
      if(stamp < (*imu_iter)->header.stamp) {
        break;
      }
      // read linear acceleration and angular velocity;
      // flip the gyro sign if the IMU is mounted upside down
      const auto& acc = (*imu_iter)->linear_acceleration;
      const auto& gyro = (*imu_iter)->angular_velocity;
      double gyro_sign = invert_imu ? -1.0 : 1.0;
      // predict the pose from the IMU data; pose_estimator->predict
      // forwards this to the UKF
      pose_estimator->predict((*imu_iter)->header.stamp, Eigen::Vector3f(acc.x, acc.y, acc.z), gyro_sign * Eigen::Vector3f(gyro.x, gyro.y, gyro.z));
    }
    // drop the consumed IMU messages
    imu_data.erase(imu_data.begin(), imu_iter);
  }

  // correct
  auto t1 = ros::WallTime::now();
  // align the scan against the map with PCL registration, then correct the
  // UKF state with the result
  auto aligned = pose_estimator->correct(filtered);
  auto t2 = ros::WallTime::now();

  processing_time.push_back((t2 - t1).toSec());
  double avg_processing_time = std::accumulate(processing_time.begin(), processing_time.end(), 0.0) / processing_time.size();
  // NODELET_INFO_STREAM("processing_time: " << avg_processing_time * 1000.0 << "[msec]");

  // publish the aligned cloud only if someone subscribes
  if(aligned_pub.getNumSubscribers()) {
    aligned->header.frame_id = "map";
    aligned->header.stamp = cloud->header.stamp;
    aligned_pub.publish(aligned);
  }

  // publish odometry stamped with the scan time, pose = UKF-corrected pose;
  // this also broadcasts the tf from map to odom_child_frame_id
  publish_odometry(points_msg->header.stamp, pose_estimator->matrix());
}
```
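The IMU-buffer bookkeeping in this callback (predict with every buffered sample not newer than the scan, then erase the consumed samples) can be sketched independently of ROS. Timestamps are plain floats and the predict step is a stub here; the names are illustrative:

```python
def consume_imu(imu_buffer, scan_stamp, predict):
    """Feed every buffered IMU sample with stamp <= scan_stamp into predict(),
    then drop the consumed samples. Returns the remaining buffer."""
    i = 0
    # mirrors the C++ loop: break as soon as the scan stamp is older than the sample
    while i < len(imu_buffer) and not (scan_stamp < imu_buffer[i][0]):
        stamp, acc, gyro = imu_buffer[i]
        predict(stamp, acc, gyro)
        i += 1
    return imu_buffer[i:]  # equivalent of imu_data.erase(begin, imu_iter)

used = []
buf = [(0.1, "a1", "g1"), (0.2, "a2", "g2"), (0.35, "a3", "g3")]
remaining = consume_imu(buf, 0.25, lambda s, a, g: used.append(s))
# samples at 0.1 and 0.2 drive prediction; the 0.35 sample waits for the next scan
```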

---------- That concludes the main flow; the remaining functions are utilities ----------

downsample(const pcl::PointCloud::ConstPtr& cloud)

Downsamples the current scan.

```cpp
pcl::PointCloud<PointT>::ConstPtr downsample(const pcl::PointCloud<PointT>::ConstPtr& cloud) const {
  // downsample_filter is the 0.1 x 0.1 x 0.1 voxel grid created in initialize_params()
  if(!downsample_filter) {
    return cloud;
  }
  pcl::PointCloud<PointT>::Ptr filtered(new pcl::PointCloud<PointT>());
  downsample_filter->setInputCloud(cloud);
  downsample_filter->filter(*filtered);
  filtered->header = cloud->header;
  return filtered;
}
```
publish_odometry

Publishes the odometry tf and message. Inputs are the current scan's timestamp and the result matrix from pose_estimator. It uses matrix2transform to convert the Eigen matrix into a tf (extracting translation and rotation).

```cpp
void publish_odometry(const ros::Time& stamp, const Eigen::Matrix4f& pose) {
  // broadcast the transform over tf
  geometry_msgs::TransformStamped odom_trans = matrix2transform(stamp, pose, "map", odom_child_frame_id);
  pose_broadcaster.sendTransform(odom_trans);

  // publish the odometry message
  nav_msgs::Odometry odom;
  odom.header.stamp = stamp;
  odom.header.frame_id = "map";

  odom.pose.pose.position.x = pose(0, 3);
  odom.pose.pose.position.y = pose(1, 3);
  odom.pose.pose.position.z = pose(2, 3);
  odom.pose.pose.orientation = odom_trans.transform.rotation;

  odom.child_frame_id = odom_child_frame_id;
  odom.twist.twist.linear.x = 0.0;
  odom.twist.twist.linear.y = 0.0;
  odom.twist.twist.angular.z = 0.0;

  pose_pub.publish(odom);
}
```
matrix2transform

matrix2transform(const ros::Time& stamp, const Eigen::Matrix4f& pose, const std::string& frame_id, const std::string& child_frame_id) converts a matrix into a geometry_msgs::TransformStamped.

```cpp
geometry_msgs::TransformStamped matrix2transform(const ros::Time& stamp, const Eigen::Matrix4f& pose, const std::string& frame_id, const std::string& child_frame_id) {
  Eigen::Quaternionf quat(pose.block<3, 3>(0, 0));
  quat.normalize();
  geometry_msgs::Quaternion odom_quat;
  odom_quat.w = quat.w();
  odom_quat.x = quat.x();
  odom_quat.y = quat.y();
  odom_quat.z = quat.z();

  geometry_msgs::TransformStamped odom_trans;
  odom_trans.header.stamp = stamp;
  odom_trans.header.frame_id = frame_id;
  odom_trans.child_frame_id = child_frame_id;
  odom_trans.transform.translation.x = pose(0, 3);
  odom_trans.transform.translation.y = pose(1, 3);
  odom_trans.transform.translation.z = pose(2, 3);
  odom_trans.transform.rotation = odom_quat;
  return odom_trans;
}
```
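The Eigen::Quaternionf(pose.block<3,3>(0,0)) call above converts the rotation block into a quaternion. A minimal sketch of that conversion using the standard trace-based formula (only the branch assuming 1 + trace > 0; Eigen's implementation also handles the other branches):

```python
import math

def rot_to_quat(R):
    """Rotation matrix (3x3 nested list) -> quaternion (w, x, y, z).
    Trace-based branch only; assumes 1 + trace(R) > 0."""
    t = R[0][0] + R[1][1] + R[2][2]
    w = math.sqrt(1.0 + t) / 2.0
    x = (R[2][1] - R[1][2]) / (4.0 * w)
    y = (R[0][2] - R[2][0]) / (4.0 * w)
    z = (R[1][0] - R[0][1]) / (4.0 * w)
    return (w, x, y, z)

# 90 degree rotation about the z axis
c, s = 0.0, 1.0
Rz = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
q = rot_to_quat(Rz)  # expect (cos 45deg, 0, 0, sin 45deg)
```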

include (pose estimator and UKF)

The two cpp files in apps are covered above. As seen there, points_callback uses pose_estimator for pose estimation, which in turn uses a UKF for the actual computation. Both are implemented in the include folder.

hdl_localization/pose_estimator.hpp

This file defines the class PoseEstimator.

PoseEstimator (constructor)

First, the constructor. After initialization, the important part is handing everything over to the UKF.

```cpp
PoseEstimator(pcl::Registration<PointT, PointT>::Ptr& registration, const ros::Time& stamp, const Eigen::Vector3f& pos, const Eigen::Quaternionf& quat, double cool_time_duration = 1.0)
  : init_stamp(stamp), registration(registration), cool_time_duration(cool_time_duration)
{
  // process noise: start from identity, then scale each state block
  process_noise = Eigen::MatrixXf::Identity(16, 16);
  process_noise.middleRows(0, 3) *= 1.0;    // position
  process_noise.middleRows(3, 3) *= 1.0;    // velocity
  process_noise.middleRows(6, 4) *= 0.5;    // quaternion
  process_noise.middleRows(10, 3) *= 1e-6;  // acceleration bias
  process_noise.middleRows(13, 3) *= 1e-6;  // gyro bias

  // measurement noise, from identity
  Eigen::MatrixXf measurement_noise = Eigen::MatrixXf::Identity(7, 7);
  measurement_noise.middleRows(0, 3) *= 0.01;   // position
  measurement_noise.middleRows(3, 4) *= 0.001;  // quaternion

  // initial mean state
  Eigen::VectorXf mean(16);
  mean.middleRows(0, 3) = pos;
  mean.middleRows(3, 3).setZero();
  mean.middleRows(6, 4) = Eigen::Vector4f(quat.w(), quat.x(), quat.y(), quat.z());
  mean.middleRows(10, 3).setZero();
  mean.middleRows(13, 3).setZero();

  // initial covariance
  Eigen::MatrixXf cov = Eigen::MatrixXf::Identity(16, 16) * 0.01;

  // declare the pose system and initialize the UKF
  PoseSystem system;
  ukf.reset(new kkl::alg::UnscentedKalmanFilterX<float, PoseSystem>(system, 16, 6, 7, process_noise, measurement_noise, mean, cov));
}
```
pose_estimator->predict (prediction)

pose_estimator->predict, used in hdl_localization.cpp, is also implemented in this file.

```cpp
void predict(const ros::Time& stamp, const Eigen::Vector3f& acc, const Eigen::Vector3f& gyro) {
  // skip the prediction if we are still inside the cool-down period after
  // initialization, if prev_stamp (time of the last update) was never set,
  // or if no time has passed since the last update
  if((stamp - init_stamp).toSec() < cool_time_duration || prev_stamp.is_zero() || prev_stamp == stamp) {
    prev_stamp = stamp;
    return;
  }

  // normal path: compute dt, then update prev_stamp
  double dt = (stamp - prev_stamp).toSec();
  prev_stamp = stamp;

  // scale the process noise by dt and set the integration step
  ukf->setProcessNoiseCov(process_noise * dt);
  ukf->system.dt = dt;

  // the IMU data forms the control input
  Eigen::VectorXf control(6);
  control.head<3>() = acc;
  control.tail<3>() = gyro;

  // predict with the UKF
  ukf->predict(control);
}
```
pose_estimator->correct (observation)
```cpp
pcl::PointCloud<PointT>::Ptr correct(const pcl::PointCloud<PointT>::ConstPtr& cloud) {
  // initial guess from the current filter state
  Eigen::Matrix4f init_guess = Eigen::Matrix4f::Identity();
  init_guess.block<3, 3>(0, 0) = quat().toRotationMatrix();
  init_guess.block<3, 1>(0, 3) = pos();

  // scan-to-map registration (NDT)
  pcl::PointCloud<PointT>::Ptr aligned(new pcl::PointCloud<PointT>());
  registration->setInputSource(cloud);
  registration->align(*aligned, init_guess);

  // read back the registration result
  Eigen::Matrix4f trans = registration->getFinalTransformation();
  Eigen::Vector3f p = trans.block<3, 1>(0, 3);
  Eigen::Quaternionf q(trans.block<3, 3>(0, 0));

  // keep the quaternion on the same hemisphere as the current estimate
  if(quat().coeffs().dot(q.coeffs()) < 0.0f) {
    q.coeffs() *= -1.0f;
  }

  // fill the observation vector and update the UKF
  Eigen::VectorXf observation(7);
  observation.middleRows(0, 3) = p;
  observation.middleRows(3, 4) = Eigen::Vector4f(q.w(), q.x(), q.y(), q.z());

  ukf->correct(observation);
  return aligned;
}
```

---------- A few simple getters remain; they are listed below and are self-explanatory ----------

```cpp
/* getters */
Eigen::Vector3f pos() const {
  return Eigen::Vector3f(ukf->mean[0], ukf->mean[1], ukf->mean[2]);
}

Eigen::Vector3f vel() const {
  return Eigen::Vector3f(ukf->mean[3], ukf->mean[4], ukf->mean[5]);
}

Eigen::Quaternionf quat() const {
  return Eigen::Quaternionf(ukf->mean[6], ukf->mean[7], ukf->mean[8], ukf->mean[9]).normalized();
}

Eigen::Matrix4f matrix() const {
  Eigen::Matrix4f m = Eigen::Matrix4f::Identity();
  m.block<3, 3>(0, 0) = quat().toRotationMatrix();
  m.block<3, 1>(0, 3) = pos();
  return m;
}
```

hdl_localization/pose_system.hpp

This file implements the class PoseSystem: the system model f and the observation model h that are handed to the UKF.
The system state has 16 dimensions: position (3), velocity (3), quaternion (4), acceleration bias (3), and gyro bias (3). There are also 6 control dimensions: acceleration (3) and angular velocity (3).

State variables:
- position: p_t = [p_x, p_y, p_z]^T
- velocity: v_t = [v_x, v_y, v_z]^T
- quaternion: q_t = [q_w, q_x, q_y, q_z]^T
- acceleration bias: acc_bias = [acc_bias_x, acc_bias_y, acc_bias_z]^T
- gyro bias: gyro_bias = [gyro_bias_x, gyro_bias_y, gyro_bias_z]^T

Control variables:
- acceleration: raw_acc = [raw_acc_x, raw_acc_y, raw_acc_z]^T
- angular velocity: raw_gyro = [raw_gyro_x, raw_gyro_y, raw_gyro_z]^T

f (system state equation)

```cpp
VectorXt f(const VectorXt& state, const VectorXt& control) const {
  VectorXt next_state(16);

  Vector3t pt = state.middleRows(0, 3);  // position
  Vector3t vt = state.middleRows(3, 3);  // velocity
  Quaterniont qt(state[6], state[7], state[8], state[9]);
  qt.normalize();                        // normalize the quaternion
  Vector3t acc_bias = state.middleRows(10, 3);   // acceleration bias
  Vector3t gyro_bias = state.middleRows(13, 3);  // gyro bias

  Vector3t raw_acc = control.middleRows(0, 3);   // acceleration input
  Vector3t raw_gyro = control.middleRows(3, 3);  // gyro input

  // position: updated first
  next_state.middleRows(0, 3) = pt + vt * dt;

  // velocity: the acceleration term is intentionally disabled; its noise is
  // large and it did not improve the final accuracy
  Vector3t g(0.0f, 0.0f, -9.80665f);
  Vector3t acc_ = raw_acc - acc_bias;
  Vector3t acc = qt * acc_;
  next_state.middleRows(3, 3) = vt; // + (acc - g) * dt; // acceleration didn't contribute to accuracy due to large noise

  // orientation: integrate the bias-corrected angular rate as a small
  // incremental quaternion, normalize, and compose onto the current orientation
  Vector3t gyro = raw_gyro - gyro_bias;
  Quaterniont dq(1, gyro[0] * dt / 2, gyro[1] * dt / 2, gyro[2] * dt / 2);
  dq.normalize();
  Quaterniont qt_ = (qt * dq).normalized();
  next_state.middleRows(6, 4) << qt_.w(), qt_.x(), qt_.y(), qt_.z();

  // biases are modeled as constant
  next_state.middleRows(10, 3) = state.middleRows(10, 3); // constant bias on acceleration
  next_state.middleRows(13, 3) = state.middleRows(13, 3); // constant bias on angular velocity

  return next_state;
}
```
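The orientation update in f (a small-angle increment quaternion composed onto the current orientation via the Hamilton product) can be exercised on its own. This is a standalone sketch with made-up rate and dt values:

```python
def quat_mul(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def normalize(q):
    n = sum(c * c for c in q) ** 0.5
    return tuple(c / n for c in q)

def integrate_gyro(q, gyro, dt):
    """One step of the f() orientation update: build dq from the angular
    rate, then q <- q * dq (both normalized)."""
    dq = normalize((1.0, gyro[0] * dt / 2, gyro[1] * dt / 2, gyro[2] * dt / 2))
    return normalize(quat_mul(q, dq))

q = (1.0, 0.0, 0.0, 0.0)                      # identity orientation
q = integrate_gyro(q, (0.0, 0.0, 0.5), 0.01)  # 0.5 rad/s about z for 10 ms
```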

h (observation equation)

The observation equation takes the input state directly as the observation. The input here is generated during the correction step (correct) in the extended state space that includes the error variances, i.e. the ext_sigma_points.

```cpp
// observation equation
VectorXt h(const VectorXt& state) const {
  VectorXt observation(7);
  observation.middleRows(0, 3) = state.middleRows(0, 3);
  observation.middleRows(3, 4) = state.middleRows(6, 4).normalized();
  return observation;
}
```

kkl/unscented_kalman_filter.hpp

The main functions in this file are five: the constructor, prediction, correction, sigma point computation, and a routine intended to keep the covariance matrix positive definite.

UnscentedKalmanFilterX (constructor)

First, the constructor. It takes the system to estimate, the state, input, and measurement dimensions, the two noise matrices, and the initial mean and covariance, and performs the initialization.

```cpp
UnscentedKalmanFilterX(const System& system, int state_dim, int input_dim, int measurement_dim,
                       const MatrixXt& process_noise, const MatrixXt& measurement_noise,
                       const VectorXt& mean, const MatrixXt& cov)
  : state_dim(state_dim),
    input_dim(input_dim),
    measurement_dim(measurement_dim),
    N(state_dim),
    M(input_dim),
    K(measurement_dim),
    S(2 * state_dim + 1),
    mean(mean),
    cov(cov),
    system(system),
    process_noise(process_noise),
    measurement_noise(measurement_noise),
    lambda(1),
    normal_dist(0.0, 1.0)
{
  // allocate buffers
  weights.resize(S, 1);
  sigma_points.resize(S, N);
  ext_weights.resize(2 * (N + K) + 1, 1);
  ext_sigma_points.resize(2 * (N + K) + 1, N + K);
  expected_measurements.resize(2 * (N + K) + 1, K);

  // initialize weights for unscented filter
  weights[0] = lambda / (N + lambda);
  for (int i = 1; i < 2 * N + 1; i++) {
    weights[i] = 1 / (2 * (N + lambda));
  }

  // weights for extended state space which includes error variances
  ext_weights[0] = lambda / (N + K + lambda);
  for (int i = 1; i < 2 * (N + K) + 1; i++) {
    ext_weights[i] = 1 / (2 * (N + K + lambda));
  }
}
```
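The weight layout can be checked in isolation: with λ = 1 both the plain and the extended weight vectors must sum to one. N = 16 and K = 7 below match the dimensions that PoseEstimator passes to the filter:

```python
# dimensions used by PoseEstimator; lambda = 1 as in the constructor
N, K, lam = 16, 7, 1.0

# weights for the plain unscented transform (2N + 1 sigma points)
weights = [lam / (N + lam)] + [1.0 / (2 * (N + lam))] * (2 * N)

# weights for the extended state space which includes error variances
ext_weights = [lam / (N + K + lam)] + [1.0 / (2 * (N + K + lam))] * (2 * (N + K))
```

Each set forms a partition of unity, so the weighted sums in predict and correct are proper averages.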

ukf->predict

Called via pose_estimator->predict.

```cpp
void predict(const VectorXt& control) {
  // calculate sigma points from the current mean and covariance
  ensurePositiveFinite(cov);
  computeSigmaPoints(mean, cov, sigma_points);

  // propagate every sigma point through the system model f from PoseSystem
  for (int i = 0; i < S; i++) {
    sigma_points.row(i) = system.f(sigma_points.row(i), control);
  }
  /*---- sigma_points now holds the predicted system states ----*/

  // process noise, i.e. the matrix R in the UKF equations
  const auto& R = process_noise;

  // unscented transform: recover the predicted mean and covariance,
  // starting from zeroed accumulators
  VectorXt mean_pred(mean.size());
  MatrixXt cov_pred(cov.rows(), cov.cols());
  mean_pred.setZero();
  cov_pred.setZero();

  // weighted mean of the propagated sigma points gives the predicted state
  for (int i = 0; i < S; i++) {
    mean_pred += weights[i] * sigma_points.row(i);
  }
  // predicted covariance from the spread around that mean
  for (int i = 0; i < S; i++) {
    VectorXt diff = sigma_points.row(i).transpose() - mean_pred;
    cov_pred += weights[i] * diff * diff.transpose();
  }
  // add the process noise R (its initial value is set in the pose estimator)
  cov_pred += R;

  // update mean and cov
  mean = mean_pred;
  cov = cov_pred;
}
```

ukf->correct

Called via pose_estimator->correct.

```cpp
void correct(const VectorXt& measurement) {
  // N = state dimension, K = measurement dimension
  // create extended state space which includes error variances
  VectorXt ext_mean_pred = VectorXt::Zero(N + K, 1);
  MatrixXt ext_cov_pred = MatrixXt::Zero(N + K, N + K);
  // top-left N x 1 block: the state mean
  ext_mean_pred.topLeftCorner(N, 1) = VectorXt(mean);
  // top-left N x N block: the state covariance
  ext_cov_pred.topLeftCorner(N, N) = MatrixXt(cov);
  // bottom-right K x K block: the measurement noise given by the pose
  // estimator (0.01 for position, 0.001 for the quaternion)
  ext_cov_pred.bottomRightCorner(K, K) = measurement_noise;
  /*---- the extended mean now holds mean in its first N entries; the extended
         covariance holds cov top-left and the measurement noise bottom-right ----*/

  ensurePositiveFinite(ext_cov_pred);
  // compute sigma points in the extended state space
  computeSigmaPoints(ext_mean_pred, ext_cov_pred, ext_sigma_points);

  // unscented transform through the observation function h:
  // the state part (first N entries) of each extended sigma point is mapped
  // through h, and the noise part (last K entries) is added on top
  expected_measurements.setZero();
  for (int i = 0; i < ext_sigma_points.rows(); i++) {
    expected_measurements.row(i) = system.h(ext_sigma_points.row(i).transpose().topLeftCorner(N, 1));
    expected_measurements.row(i) += VectorXt(ext_sigma_points.row(i).transpose().bottomRightCorner(K, 1));
  }

  // weighted mean and covariance of the expected measurements,
  // analogous to the predict step
  VectorXt expected_measurement_mean = VectorXt::Zero(K);
  for (int i = 0; i < ext_sigma_points.rows(); i++) {
    expected_measurement_mean += ext_weights[i] * expected_measurements.row(i);
  }
  MatrixXt expected_measurement_cov = MatrixXt::Zero(K, K);
  for (int i = 0; i < ext_sigma_points.rows(); i++) {
    VectorXt diff = expected_measurements.row(i).transpose() - expected_measurement_mean;
    expected_measurement_cov += ext_weights[i] * diff * diff.transpose();
  }

  // calculated transformed covariance: the cross-covariance sigma between
  // state and measurement, used for the Kalman gain
  MatrixXt sigma = MatrixXt::Zero(N + K, K);
  for (int i = 0; i < ext_sigma_points.rows(); i++) {
    auto diffA = (ext_sigma_points.row(i).transpose() - ext_mean_pred);
    auto diffB = (expected_measurements.row(i).transpose() - expected_measurement_mean);
    sigma += ext_weights[i] * (diffA * diffB.transpose());
  }
  kalman_gain = sigma * expected_measurement_cov.inverse();
  const auto& K = kalman_gain;

  // final UKF state update
  VectorXt ext_mean = ext_mean_pred + K * (measurement - expected_measurement_mean);
  MatrixXt ext_cov = ext_cov_pred - K * expected_measurement_cov * K.transpose();

  mean = ext_mean.topLeftCorner(N, 1);
  cov = ext_cov.topLeftCorner(N, N);
}
```

computeSigmaPoints

Computes the sigma points from mean and cov. The idea is to take the Cholesky factorization of cov and offset mean by the columns of the lower-triangular factor L, yielding the set of sigma_points.

```cpp
void computeSigmaPoints(const VectorXt& mean, const MatrixXt& cov, MatrixXt& sigma_points) {
  const int n = mean.size();
  assert(cov.rows() == n && cov.cols() == n);

  // Cholesky decomposition (n + lambda) * cov = L L^T, L lower triangular
  Eigen::LLT<MatrixXt> llt;
  llt.compute((n + lambda) * cov);
  MatrixXt l = llt.matrixL();

  // mean is a column vector; Eigen transposes it automatically when
  // assigning to a row
  sigma_points.row(0) = mean;
  for (int i = 0; i < n; i++) {
    sigma_points.row(1 + i * 2) = mean + l.col(i);      // odd rows: 1, 3, 5, 7
    sigma_points.row(1 + i * 2 + 1) = mean - l.col(i);  // even rows: 2, 4, 6, 8
  }
}
```
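Here is a toy 2-D version of computeSigmaPoints, with a hand-rolled Cholesky factorization standing in for Eigen's LLT; the mean and covariance values are illustrative:

```python
def cholesky(a):
    """Lower-triangular L with a = L L^T (a must be symmetric positive definite)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (a[i][i] - s) ** 0.5
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def compute_sigma_points(mean, cov, lam=1.0):
    n = len(mean)
    # factor (n + lam) * cov, as in the C++ code
    scaled = [[(n + lam) * cov[i][j] for j in range(n)] for i in range(n)]
    L = cholesky(scaled)
    pts = [list(mean)]  # sigma point 0 is the mean itself
    for i in range(n):  # then mean +/- each column of L
        col = [L[r][i] for r in range(n)]
        pts.append([m + c for m, c in zip(mean, col)])
        pts.append([m - c for m, c in zip(mean, col)])
    return pts

pts = compute_sigma_points([1.0, 2.0], [[0.04, 0.0], [0.0, 0.09]])
# 2*2 + 1 = 5 sigma points, placed symmetrically about the mean
```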
ensurePositiveFinite

Intended to keep the covariance positive definite. In practice it is disabled: it returns immediately.

```cpp
void ensurePositiveFinite(MatrixXt& cov) {
  return; // effectively disabled: the function returns right here

  const double eps = 1e-9;
  Eigen::EigenSolver<MatrixXt> solver(cov);
  MatrixXt D = solver.pseudoEigenvalueMatrix();  // eigenvalues
  MatrixXt V = solver.pseudoEigenvectors();      // eigenvectors
  for (int i = 0; i < D.rows(); i++) {
    if (D(i, i) < eps) {
      D(i, i) = eps;
    }
  }
  cov = V * D * V.inverse();
}
```
