GTSAM Tutorial Study Notes
- GTSAM Tutorial Study Notes
- 1. Basic Principles
- 2. Demo Code
- 3. Analysis of Selected Code in LIO-SAM
- 3.1 Preintegration Factor Graph
- 3.2 Keyframe Factor Graph
To learn LIO-SAM, I quickly went through "Factor Graphs for Robot Perception" as well as the GTSAM Tutorial that Jing Dong (董靖) shared on 泡泡机器人 (PaoPao Robotics). This post is mainly my study notes on that GTSAM Tutorial, together with a brief analysis of how GTSAM is actually used in LIO-SAM. Readers who already know the basics of GTSAM can jump straight to Part 3.
1. Basic Principles
The figure below shows a typical SLAM scenario.
The robot's observations of landmarks can be modeled as a Bayesian network, as shown in the left figure above, where $x_i$ denotes a robot state, $z_k$ a measurement, and $l_j$ a landmark (feature point). The joint probability distribution of this Bayesian network can be written as:

$$P(X, L, Z)=P(x_{0}) \prod_{i=1}^{M} P(x_{i} \mid x_{i-1}, u_{i}) \prod_{k=1}^{K} P(z_{k} \mid x_{i_{k}}, l_{j_{k}})$$

where:

- $P(x_{0})$ is the prior distribution over the initial state;
- $P(x_{i} \mid x_{i-1}, u_{i})$ is the distribution of $x_i$ given the previous state $x_{i-1}$ and control input $u_i$:

$$x_{i}=f_{i}(x_{i-1}, u_{i})+w_{i} \quad \Leftrightarrow \quad P(x_{i} \mid x_{i-1}, u_{i}) \propto \exp\left(-\frac{1}{2}\left\|f_{i}(x_{i-1}, u_{i})-x_{i}\right\|_{\Lambda_{i}}^{2}\right)$$

- $P(z_{k} \mid x_{i_{k}}, l_{j_{k}})$ is the distribution of the measurement $z_k$ given the state $x_{i_k}$ and landmark $l_{j_k}$:

$$z_{k}=h_{k}(x_{i_{k}}, l_{j_{k}})+v_{k} \quad \Leftrightarrow \quad P(z_{k} \mid x_{i_{k}}, l_{j_{k}}) \propto \exp\left(-\frac{1}{2}\left\|h_{k}(x_{i_{k}}, l_{j_{k}})-z_{k}\right\|_{\Sigma_{k}}^{2}\right)$$

If all of these distributions are assumed to be Gaussian, the following factor graph can be constructed:
The factor graph above can be written as:

$$P(\Theta) \propto \prod_{i} \phi_{i}(\theta_{i}) \prod_{\{i, j\},\, i<j} \psi_{ij}(\theta_{i}, \theta_{j}), \qquad \Theta \triangleq (X, L)$$

where

$$\phi_{0}(x_{0}) \propto P(x_{0}), \qquad \psi_{(i-1)i}(x_{i-1}, x_{i}) \propto P(x_{i} \mid x_{i-1}, u_{i}), \qquad \psi_{i_{k} j_{k}}(x_{i_{k}}, l_{j_{k}}) \propto P(z_{k} \mid x_{i_{k}}, l_{j_{k}})$$

In effect, every robot state and every map point becomes a variable node in the factor graph, and the posterior of the whole graph is modeled as the product of all factors:

$$f(\Theta)=\prod_{i} f_{i}(\Theta_{i}), \qquad \Theta \triangleq(X, L), \qquad f_{i}(\Theta_{i}) \propto \exp\left(-\frac{1}{2}\left\|h_{i}(\Theta_{i})-z_{i}\right\|_{\Sigma_{i}}^{2}\right)$$

Our goal is to maximize this posterior:

$$\Theta^{*}=\underset{\Theta}{\arg \max }\, f(\Theta)$$

Intuitively, we are asking which robot states and map points make all of the current observations most probable. Because the distributions are Gaussian, we usually work in the negative log domain:

$$\underset{\Theta}{\arg \min }\,(-\log f(\Theta))=\underset{\Theta}{\arg \min }\, \frac{1}{2} \sum_{i}\left\|h_{i}(\Theta_{i})-z_{i}\right\|_{\Sigma_{i}}^{2}$$

This nonlinear least-squares problem is solved with algorithms such as Gauss-Newton or Levenberg-Marquardt, and the overall solving process proceeds as follows.
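The figure illustrating the full pipeline is not reproduced here. As a rough sketch of the standard Gauss-Newton iteration (with the Jacobian $A_i$ and prediction error $b_i$ introduced here for illustration rather than taken from the tutorial): each iteration linearizes every factor around the current estimate $\Theta^{(t)}$,

$$A_{i}=\left.\frac{\partial h_{i}}{\partial \Theta_{i}}\right|_{\Theta^{(t)}}, \qquad b_{i}=z_{i}-h_{i}\big(\Theta_{i}^{(t)}\big),$$

solves the resulting linear least-squares problem

$$\delta^{*}=\underset{\delta}{\arg \min }\, \frac{1}{2} \sum_{i}\left\|A_{i}\, \delta-b_{i}\right\|_{\Sigma_{i}}^{2}$$

(in GTSAM this is typically done by eliminating the linearized factor graph), and then updates the estimate with $\Theta^{(t+1)}=\Theta^{(t)} \oplus \delta^{*}$ until convergence.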
There are a few additional points from the tutorial worth keeping in mind here.
2. Demo Code
The demo code provided with the GTSAM Tutorial shows how to construct a factor graph with a loop closure, as in the figure below.
First, a factor graph container is constructed.
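The corresponding line is not quoted in the post; in GTSAM this is just an empty `NonlinearFactorGraph` (a minimal sketch, with the include and `using` directive assumed from the demo context):

```cpp
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
using namespace gtsam;

// Empty container that will hold the prior, odometry, and loop-closure factors
NonlinearFactorGraph graph;
```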
Next, construct and add the prior factor, i.e. the blue factor in the figure; note that it is a unary edge.
```cpp
// prior measurement noise model
noiseModel::Diagonal::shared_ptr priorModel =
    noiseModel::Diagonal::Sigmas(Vector3(1.0, 1.0, 0.1));

// Add a prior on the first pose, setting it to the origin
// The prior is needed to fix/align the whole trajectory at world frame
// A prior factor consists of a mean value and a noise model (covariance matrix)
graph.add(PriorFactor<Pose2>(Symbol('x', 1), Pose2(0, 0, 0), priorModel));
```

Then construct and add the odometry factors, i.e. the black factors in the figure; these factors are binary edges.
```cpp
// odometry measurement noise model (covariance matrix)
noiseModel::Diagonal::shared_ptr odomModel =
    noiseModel::Diagonal::Sigmas(Vector3(0.5, 0.5, 0.1));

// Add odometry factors
// Create odometry (Between) factors between consecutive poses
// robot makes 90 deg right turns at x3 - x5
graph.add(BetweenFactor<Pose2>(Symbol('x', 1), Symbol('x', 2), Pose2(5, 0, 0), odomModel));
graph.add(BetweenFactor<Pose2>(Symbol('x', 2), Symbol('x', 3), Pose2(5, 0, -M_PI_2), odomModel));
graph.add(BetweenFactor<Pose2>(Symbol('x', 3), Symbol('x', 4), Pose2(5, 0, -M_PI_2), odomModel));
graph.add(BetweenFactor<Pose2>(Symbol('x', 4), Symbol('x', 5), Pose2(5, 0, -M_PI_2), odomModel));
```

Next is the loop-closure factor, i.e. the red factor in the figure, which is also a binary edge.
```cpp
// loop closure measurement noise model
noiseModel::Diagonal::shared_ptr loopModel =
    noiseModel::Diagonal::Sigmas(Vector3(0.5, 0.5, 0.1));

// Add the loop closure constraint
graph.add(BetweenFactor<Pose2>(Symbol('x', 5), Symbol('x', 2), Pose2(5, 0, -M_PI_2), loopModel));
```

As the snippets above show, adding a factor amounts to constructing a noise model and then passing the factor type, the variable nodes it connects, and the measurement into the factor-graph container. The next step is to give each variable node an initial value.
```cpp
// initial variable values for the optimization
// add random noise from ground truth values
Values initials;
initials.insert(Symbol('x', 1), Pose2(0.2, -0.3, 0.2));
initials.insert(Symbol('x', 2), Pose2(5.1, 0.3, -0.1));
initials.insert(Symbol('x', 3), Pose2(9.9, -0.1, -M_PI_2 - 0.2));
initials.insert(Symbol('x', 4), Pose2(10.2, -5.0, -M_PI + 0.1));
initials.insert(Symbol('x', 5), Pose2(5.1, -5.1, M_PI_2 - 0.1));
```

Finally, construct the optimizer and run the optimization.
```cpp
// Use Gauss-Newton method to optimize the initial values
GaussNewtonParams parameters;
// print per iteration
parameters.setVerbosity("ERROR");
// optimize!
GaussNewtonOptimizer optimizer(graph, initials, parameters);
Values results = optimizer.optimize();
```

The Marginals class can then be used to obtain the marginal covariance of each variable after optimization.
```cpp
// Calculate marginal covariances for all poses
Marginals marginals(graph, results);
// print marginal covariances
cout << "x1 covariance:\n" << marginals.marginalCovariance(Symbol('x', 1)) << endl;
cout << "x2 covariance:\n" << marginals.marginalCovariance(Symbol('x', 2)) << endl;
cout << "x3 covariance:\n" << marginals.marginalCovariance(Symbol('x', 3)) << endl;
cout << "x4 covariance:\n" << marginals.marginalCovariance(Symbol('x', 4)) << endl;
cout << "x5 covariance:\n" << marginals.marginalCovariance(Symbol('x', 5)) << endl;
```

That is the core of the demo code. As you can see, building a factor graph with GTSAM is quite straightforward; next, let's look at how GTSAM is used in a real project.
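The demo excerpt prints only the marginal covariances; if you also want the optimized poses themselves, they can be read back from `results` (a small addition of mine, not part of the original demo):

```cpp
// Print every optimized variable, or query a single pose by its key
results.print("Optimized results:\n");
Pose2 x1 = results.at<Pose2>(Symbol('x', 1));
cout << "optimized x1: " << x1.x() << " " << x1.y() << " " << x1.theta() << endl;
```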
3. Analysis of Selected Code in LIO-SAM
The factor graph built in LIO-SAM is shown below:
The picture above looks like a single factor graph, but LIO-SAM actually maintains two kinds of factor graphs. The first lives in imuPreintegration.cpp: it builds a factor graph from the laser odometry and the IMU preintegration between two laser-odometry frames, and optimizes the current frame's state (pose, velocity, and bias); we will call this the preintegration factor graph. The second lives in mapOptimization.cpp: keyframes are added to the graph together with laser-odometry factors, GPS factors, and loop-closure factors, the factor-graph optimization is run, and all keyframe poses are updated; we will call this the keyframe factor graph. Let's look at each in turn (some of the code comments below are adapted from the LIO-SAM源碼解析 walkthrough).
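Before diving into the LIO-SAM code, it helps to see the incremental ISAM2 usage pattern that both graphs follow, in isolation. The sketch below is a generic minimal example of mine (the names are illustrative, not taken from LIO-SAM):

```cpp
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>

// Generic incremental-update step: feed ISAM2 only the factors and initial
// values that were added since the last call.
void incrementalStep(gtsam::ISAM2& isam,
                     gtsam::NonlinearFactorGraph& newFactors,
                     gtsam::Values& newValues)
{
    // ISAM2 re-linearizes only the part of the Bayes tree affected by the new factors
    isam.update(newFactors, newValues);

    // Current best estimate of all variables seen so far
    gtsam::Values estimate = isam.calculateEstimate();
    (void)estimate;  // LIO-SAM extracts poses/velocities/biases from this

    // Clear the temporaries; the full history lives inside ISAM2
    newFactors.resize(0);
    newValues.clear();
}
```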
3.1 Preintegration Factor Graph
The code for the preintegration factor graph lives mainly in the IMUPreintegration class in imuPreintegration.cpp, which is defined as follows:
```cpp
class IMUPreintegration : public ParamServer
{
public:
    std::mutex mtx;

    // subscribers and publishers
    ros::Subscriber subImu;
    ros::Subscriber subOdometry;
    ros::Publisher  pubImuOdometry;

    bool systemInitialized = false;

    // noise covariances
    gtsam::noiseModel::Diagonal::shared_ptr priorPoseNoise;
    gtsam::noiseModel::Diagonal::shared_ptr priorVelNoise;
    gtsam::noiseModel::Diagonal::shared_ptr priorBiasNoise;
    gtsam::noiseModel::Diagonal::shared_ptr correctionNoise;
    gtsam::noiseModel::Diagonal::shared_ptr correctionNoise2;
    gtsam::Vector noiseModelBetweenBias;

    // IMU preintegrators
    gtsam::PreintegratedImuMeasurements *imuIntegratorOpt_;
    gtsam::PreintegratedImuMeasurements *imuIntegratorImu_;

    // IMU data queues
    std::deque<sensor_msgs::Imu> imuQueOpt;
    std::deque<sensor_msgs::Imu> imuQueImu;

    // state variables used in the IMU factor-graph optimization
    gtsam::Pose3 prevPose_;
    gtsam::Vector3 prevVel_;
    gtsam::NavState prevState_;
    gtsam::imuBias::ConstantBias prevBias_;

    // IMU state used for odometry propagation
    gtsam::NavState prevStateOdom;
    gtsam::imuBias::ConstantBias prevBiasOdom;

    bool doneFirstOpt = false;
    double lastImuT_imu = -1;
    double lastImuT_opt = -1;

    // ISAM2 optimizer
    gtsam::ISAM2 optimizer;
    gtsam::NonlinearFactorGraph graphFactors;
    gtsam::Values graphValues;

    const double delta_t = 0;

    // index of the current frame in the graph
    int key = 1;

    // IMU-lidar extrinsic transforms
    gtsam::Pose3 imu2Lidar = gtsam::Pose3(gtsam::Rot3(1, 0, 0, 0), gtsam::Point3(-extTrans.x(), -extTrans.y(), -extTrans.z()));
    gtsam::Pose3 lidar2Imu = gtsam::Pose3(gtsam::Rot3(1, 0, 0, 0), gtsam::Point3(extTrans.x(), extTrans.y(), extTrans.z()));

    // constructor
    IMUPreintegration();

    // reset the ISAM2 optimizer
    void resetOptimization();

    // reset parameters
    void resetParams();

    // laser-odometry callback
    void odometryHandler(const nav_msgs::Odometry::ConstPtr& odomMsg);

    // sanity check on the IMU factor-graph optimization result
    bool failureDetection(const gtsam::Vector3& velCur, const gtsam::imuBias::ConstantBias& biasCur);

    // raw-IMU callback
    void imuHandler(const sensor_msgs::Imu::ConstPtr& imu_raw);
};
```

The member variables include the states to be optimized and the noise covariances of the various prior factors; these are all initialized in the constructor, which we look at next:
```cpp
IMUPreintegration()
{
    // Subscribe to raw IMU data; using the factor-graph optimization result below, apply the IMU
    // preintegration between two frames to predict an IMU-rate odometry at every IMU timestamp
    subImu      = nh.subscribe<sensor_msgs::Imu>  (imuTopic, 2000, &IMUPreintegration::imuHandler, this, ros::TransportHints().tcpNoDelay());
    // Subscribe to the laser odometry from mapOptimization; use the IMU preintegration between two
    // frames to build the factor graph and optimize the current frame pose (this pose is only used
    // to update the IMU-rate odometry and the next factor-graph optimization)
    subOdometry = nh.subscribe<nav_msgs::Odometry>("lio_sam/mapping/odometry_incremental", 5, &IMUPreintegration::odometryHandler, this, ros::TransportHints().tcpNoDelay());
    // Publish the IMU odometry
    pubImuOdometry = nh.advertise<nav_msgs::Odometry> (odomTopic+"_incremental", 2000);

    // noise covariances of the IMU preintegration
    boost::shared_ptr<gtsam::PreintegrationParams> p = gtsam::PreintegrationParams::MakeSharedU(imuGravity);
    p->accelerometerCovariance = gtsam::Matrix33::Identity(3,3) * pow(imuAccNoise, 2); // acc white noise in continuous
    p->gyroscopeCovariance     = gtsam::Matrix33::Identity(3,3) * pow(imuGyrNoise, 2); // gyro white noise in continuous
    p->integrationCovariance   = gtsam::Matrix33::Identity(3,3) * pow(1e-4, 2);        // error committed in integrating position from velocities
    gtsam::imuBias::ConstantBias prior_imu_bias((gtsam::Vector(6) << 0, 0, 0, 0, 0, 0).finished()); // assume zero initial bias

    // prior noise models
    priorPoseNoise = gtsam::noiseModel::Diagonal::Sigmas((gtsam::Vector(6) << 1e-2, 1e-2, 1e-2, 1e-2, 1e-2, 1e-2).finished()); // rad,rad,rad,m, m, m
    priorVelNoise  = gtsam::noiseModel::Isotropic::Sigma(3, 1e4); // m/s
    priorBiasNoise = gtsam::noiseModel::Isotropic::Sigma(6, 1e-3); // 1e-2 ~ 1e-3 seems to be good
    // if the laser-odometry scan-to-map optimization degenerates, use a larger covariance
    correctionNoise  = gtsam::noiseModel::Diagonal::Sigmas((gtsam::Vector(6) << 0.05, 0.05, 0.05, 0.1, 0.1, 0.1).finished()); // rad,rad,rad,m, m, m
    correctionNoise2 = gtsam::noiseModel::Diagonal::Sigmas((gtsam::Vector(6) << 1, 1, 1, 1, 1, 1).finished());                // rad,rad,rad,m, m, m
    noiseModelBetweenBias = (gtsam::Vector(6) << imuAccBiasN, imuAccBiasN, imuAccBiasN, imuGyrBiasN, imuGyrBiasN, imuGyrBiasN).finished();

    // IMU preintegrator used to predict the IMU-rate odometry (transformed into the lidar frame, the same frame as the laser odometry)
    imuIntegratorImu_ = new gtsam::PreintegratedImuMeasurements(p, prior_imu_bias); // setting up the IMU integration for IMU message thread
    // IMU preintegrator used for the factor-graph optimization
    imuIntegratorOpt_ = new gtsam::PreintegratedImuMeasurements(p, prior_imu_bias); // setting up the IMU integration for optimization
}
```

The core of the factor-graph construction is in the laser-odometry callback, shown below:
```cpp
/**
 * Laser-odometry callback (odometry comes from mapOptimization)
 * 1. Every 100 laser-odometry frames, reset the ISAM2 optimizer, add pose/velocity/bias prior factors, and optimize
 * 2. Compute the IMU preintegration between the previous and current laser-odometry frames, propagate the previous
 *    state with it to get an initial estimate of the current state, add the current pose from mapOptimization,
 *    run the factor-graph optimization, and update the current state
 * 3. After the optimization, re-propagate: the optimization updated the IMU bias, so recompute the IMU preintegration
 *    after the current laser-odometry time with the latest bias; this preintegration is used to compute the pose at every IMU timestamp
 */
void odometryHandler(const nav_msgs::Odometry::ConstPtr& odomMsg)
{
    std::lock_guard<std::mutex> lock(mtx);

    // timestamp of the current laser-odometry frame
    double currentCorrectionTime = ROS_TIME(odomMsg);

    // make sure there is IMU data in the optimization queue to preintegrate
    if (imuQueOpt.empty())
        return;

    // current laser pose, from scan-to-map matching and factor-graph optimization
    float p_x = odomMsg->pose.pose.position.x;
    float p_y = odomMsg->pose.pose.position.y;
    float p_z = odomMsg->pose.pose.position.z;
    float r_x = odomMsg->pose.pose.orientation.x;
    float r_y = odomMsg->pose.pose.orientation.y;
    float r_z = odomMsg->pose.pose.orientation.z;
    float r_w = odomMsg->pose.pose.orientation.w;
    bool degenerate = (int)odomMsg->pose.covariance[0] == 1 ? true : false;
    gtsam::Pose3 lidarPose = gtsam::Pose3(gtsam::Rot3::Quaternion(r_w, r_x, r_y, r_z), gtsam::Point3(p_x, p_y, p_z));

    // 0. system initialization, first frame
    if (systemInitialized == false)
    {
        // reset the ISAM2 optimizer
        resetOptimization();

        // drop IMU data older than the current laser-odometry time from the optimization queue
        while (!imuQueOpt.empty())
        {
            if (ROS_TIME(&imuQueOpt.front()) < currentCorrectionTime - delta_t)
            {
                lastImuT_opt = ROS_TIME(&imuQueOpt.front());
                imuQueOpt.pop_front();
            }
            else
                break;
        }
        // add the odometry pose prior factor
        prevPose_ = lidarPose.compose(lidar2Imu);
        gtsam::PriorFactor<gtsam::Pose3> priorPose(X(0), prevPose_, priorPoseNoise);
        graphFactors.add(priorPose);
        // add the velocity prior factor
        prevVel_ = gtsam::Vector3(0, 0, 0);
        gtsam::PriorFactor<gtsam::Vector3> priorVel(V(0), prevVel_, priorVelNoise);
        graphFactors.add(priorVel);
        // add the IMU bias prior factor
        prevBias_ = gtsam::imuBias::ConstantBias();
        gtsam::PriorFactor<gtsam::imuBias::ConstantBias> priorBias(B(0), prevBias_, priorBiasNoise);
        graphFactors.add(priorBias);
        // set initial values for the variable nodes
        graphValues.insert(X(0), prevPose_);
        graphValues.insert(V(0), prevVel_);
        graphValues.insert(B(0), prevBias_);
        // run one optimization
        optimizer.update(graphFactors, graphValues);
        graphFactors.resize(0);
        graphValues.clear();
        // reset the preintegrators with the optimized bias
        imuIntegratorImu_->resetIntegrationAndSetBias(prevBias_);
        imuIntegratorOpt_->resetIntegrationAndSetBias(prevBias_);

        key = 1;
        systemInitialized = true;
        return;
    }

    // every 100 laser-odometry frames, reset the ISAM2 optimizer to keep the optimization efficient
    if (key == 100)
    {
        // noise models taken from the previous frame's pose, velocity, and bias marginals
        gtsam::noiseModel::Gaussian::shared_ptr updatedPoseNoise = gtsam::noiseModel::Gaussian::Covariance(optimizer.marginalCovariance(X(key-1)));
        gtsam::noiseModel::Gaussian::shared_ptr updatedVelNoise  = gtsam::noiseModel::Gaussian::Covariance(optimizer.marginalCovariance(V(key-1)));
        gtsam::noiseModel::Gaussian::shared_ptr updatedBiasNoise = gtsam::noiseModel::Gaussian::Covariance(optimizer.marginalCovariance(B(key-1)));
        // reset the ISAM2 optimizer
        resetOptimization();
        // add the pose prior factor, initialized with the previous frame's value
        gtsam::PriorFactor<gtsam::Pose3> priorPose(X(0), prevPose_, updatedPoseNoise);
        graphFactors.add(priorPose);
        // add the velocity prior factor, initialized with the previous frame's value
        gtsam::PriorFactor<gtsam::Vector3> priorVel(V(0), prevVel_, updatedVelNoise);
        graphFactors.add(priorVel);
        // add the bias prior factor, initialized with the previous frame's value
        gtsam::PriorFactor<gtsam::imuBias::ConstantBias> priorBias(B(0), prevBias_, updatedBiasNoise);
        graphFactors.add(priorBias);
        // set initial values for the variable nodes, using the previous frame's values
        graphValues.insert(X(0), prevPose_);
        graphValues.insert(V(0), prevVel_);
        graphValues.insert(B(0), prevBias_);
        // run one optimization
        optimizer.update(graphFactors, graphValues);
        graphFactors.resize(0);
        graphValues.clear();

        key = 1;
    }

    // 1. Compute the IMU preintegration between the previous and current frames, propagate the previous state
    //    with it to get an initial estimate of the current state, add the current pose from mapOptimization,
    //    run the factor-graph optimization, and update the current state
    while (!imuQueOpt.empty())
    {
        // take the IMU data between the previous and current frames and preintegrate it
        sensor_msgs::Imu *thisImu = &imuQueOpt.front();
        double imuTime = ROS_TIME(thisImu);
        if (imuTime < currentCorrectionTime - delta_t)
        {
            // time interval between two IMU samples
            double dt = (lastImuT_opt < 0) ? (1.0 / 500.0) : (imuTime - lastImuT_opt);
            // preintegration inputs: acceleration, angular velocity, dt
            imuIntegratorOpt_->integrateMeasurement(
                    gtsam::Vector3(thisImu->linear_acceleration.x, thisImu->linear_acceleration.y, thisImu->linear_acceleration.z),
                    gtsam::Vector3(thisImu->angular_velocity.x,    thisImu->angular_velocity.y,    thisImu->angular_velocity.z), dt);
            lastImuT_opt = imuTime;
            // drop the processed IMU data from the queue
            imuQueOpt.pop_front();
        }
        else
            break;
    }
    // add the IMU preintegration factor
    const gtsam::PreintegratedImuMeasurements& preint_imu = dynamic_cast<const gtsam::PreintegratedImuMeasurements&>(*imuIntegratorOpt_);
    // arguments: previous pose, previous velocity, current pose, current velocity, previous bias, preintegrated measurement
    gtsam::ImuFactor imu_factor(X(key - 1), V(key - 1), X(key), V(key), B(key - 1), preint_imu);
    graphFactors.add(imu_factor);
    // add the IMU bias factor: previous bias, current bias, measurement, noise covariance; deltaTij() is the length of the integration interval
    graphFactors.add(gtsam::BetweenFactor<gtsam::imuBias::ConstantBias>(B(key - 1), B(key), gtsam::imuBias::ConstantBias(),
                     gtsam::noiseModel::Diagonal::Sigmas(sqrt(imuIntegratorOpt_->deltaTij()) * noiseModelBetweenBias)));
    // add the pose factor
    gtsam::Pose3 curPose = lidarPose.compose(lidar2Imu);
    gtsam::PriorFactor<gtsam::Pose3> pose_factor(X(key), curPose, degenerate ? correctionNoise2 : correctionNoise);
    graphFactors.add(pose_factor);
    // propagate the previous state and bias with the IMU preintegration to get the current state
    gtsam::NavState propState_ = imuIntegratorOpt_->predict(prevState_, prevBias_);
    // set initial values for the variable nodes
    graphValues.insert(X(key), propState_.pose());
    graphValues.insert(V(key), propState_.v());
    graphValues.insert(B(key), prevBias_);
    // optimize
    optimizer.update(graphFactors, graphValues);
    optimizer.update();
    graphFactors.resize(0);
    graphValues.clear();
    // optimization result
    gtsam::Values result = optimizer.calculateEstimate();
    // update the current pose and velocity
    prevPose_ = result.at<gtsam::Pose3>(X(key));
    prevVel_  = result.at<gtsam::Vector3>(V(key));
    // update the current state
    prevState_ = gtsam::NavState(prevPose_, prevVel_);
    // update the current IMU bias
    prevBias_ = result.at<gtsam::imuBias::ConstantBias>(B(key));
    // reset the preintegrator and set the new bias, so that when the next laser-odometry frame arrives
    // the preintegration is exactly the increment between the two frames
    imuIntegratorOpt_->resetIntegrationAndSetBias(prevBias_);
    // if the optimized velocity or bias is too large, treat the optimization as failed
    if (failureDetection(prevVel_, prevBias_))
    {
        // reset parameters
        resetParams();
        return;
    }

    // 2. After the optimization, re-propagate: the optimization updated the IMU bias, so recompute the IMU
    //    preintegration after the current laser-odometry time with the latest bias; this preintegration is
    //    used to compute the pose at every IMU timestamp
    prevStateOdom = prevState_;
    prevBiasOdom  = prevBias_;
    // drop IMU data older than the current laser-odometry time from the IMU queue
    double lastImuQT = -1;
    while (!imuQueImu.empty() && ROS_TIME(&imuQueImu.front()) < currentCorrectionTime - delta_t)
    {
        lastImuQT = ROS_TIME(&imuQueImu.front());
        imuQueImu.pop_front();
    }
    // preintegrate the remaining IMU data
    if (!imuQueImu.empty())
    {
        // reset the preintegrator with the latest bias
        imuIntegratorImu_->resetIntegrationAndSetBias(prevBiasOdom);
        // compute the preintegration
        for (int i = 0; i < (int)imuQueImu.size(); ++i)
        {
            sensor_msgs::Imu *thisImu = &imuQueImu[i];
            double imuTime = ROS_TIME(thisImu);
            double dt = (lastImuQT < 0) ? (1.0 / 500.0) : (imuTime - lastImuQT);

            imuIntegratorImu_->integrateMeasurement(
                    gtsam::Vector3(thisImu->linear_acceleration.x, thisImu->linear_acceleration.y, thisImu->linear_acceleration.z),
                    gtsam::Vector3(thisImu->angular_velocity.x,    thisImu->angular_velocity.y,    thisImu->angular_velocity.z), dt);
            lastImuQT = imuTime;
        }
    }

    ++key;
    doneFirstOpt = true;
}
```

One thing worth highlighting here is the gtsam::PreintegratedImuMeasurements class, which encapsulates the preintegration formulas and their Jacobian derivations, making them very convenient to use. The code also shows that the slightly tricky part of building the factor graph is choosing the noise model for each factor.
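A small note on the X(·), V(·), B(·) keys used above: they are assumed to be GTSAM's symbol_shorthand aliases, which LIO-SAM brings in near the top of the file (not shown in the excerpt), along the lines of:

```cpp
#include <gtsam/inference/Symbol.h>

// Shorthand key generators: X(0) is the key Symbol('x', 0), and so on
using gtsam::symbol_shorthand::X;  // Pose3   (x, y, z, roll, pitch, yaw)
using gtsam::symbol_shorthand::V;  // Vector3 (vx, vy, vz)
using gtsam::symbol_shorthand::B;  // imuBias::ConstantBias (ax, ay, az, gx, gy, gz)
```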
3.2 Keyframe Factor Graph
The code for the keyframe factor graph is mainly in the saveKeyFramesAndFactor function in mapOptimization.cpp. Besides the keyframe factor graph, mapOptimization.cpp also contains the scan-to-map matching and keyframe selection logic, which we will not cover here. The saveKeyFramesAndFactor function is as follows:
```cpp
/**
 * Set the current frame as a keyframe and run the factor-graph optimization
 * 1. Compute the pose change relative to the previous frame; if it is too small, do not create a keyframe
 * 2. Add the laser-odometry factor, GPS factor, and loop-closure factor
 * 3. Run the factor-graph optimization
 * 4. Retrieve the optimized pose of the current frame and its covariance
 * 5. Append to cloudKeyPoses3D and cloudKeyPoses6D, update transformTobeMapped, and store the current keyframe's corner and surface point sets
 */
void saveKeyFramesAndFactor()
{
    // compute the pose change relative to the previous frame; if it is too small, do not create a keyframe
    if (saveFrame() == false)
        return;

    // laser-odometry factor
    addOdomFactor();
    // GPS factor
    addGPSFactor();
    // loop-closure factor
    addLoopFactor();

    // cout << "****************************************************" << endl;
    // gtSAMgraph.print("GTSAM Graph:\n");

    // run the optimization
    isam->update(gtSAMgraph, initialEstimate);
    isam->update();
    if (aLoopIsClosed == true)
    {
        isam->update();
        isam->update();
        isam->update();
        isam->update();
        isam->update();
    }
    // clear the stored factor graph after the update; note: the history is not lost, ISAM2 keeps it internally
    gtSAMgraph.resize(0);
    initialEstimate.clear();

    PointType thisPose3D;
    PointTypePose thisPose6D;
    Pose3 latestEstimate;

    // optimization result
    isamCurrentEstimate = isam->calculateEstimate();
    // optimized pose of the current frame
    latestEstimate = isamCurrentEstimate.at<Pose3>(isamCurrentEstimate.size()-1);
    // cout << "****************************************************" << endl;
    // isamCurrentEstimate.print("Current estimate: ");

    // append the current frame pose to cloudKeyPoses3D
    thisPose3D.x = latestEstimate.translation().x();
    thisPose3D.y = latestEstimate.translation().y();
    thisPose3D.z = latestEstimate.translation().z();
    // index
    thisPose3D.intensity = cloudKeyPoses3D->size();
    cloudKeyPoses3D->push_back(thisPose3D);

    // append the current frame pose to cloudKeyPoses6D
    thisPose6D.x = thisPose3D.x;
    thisPose6D.y = thisPose3D.y;
    thisPose6D.z = thisPose3D.z;
    thisPose6D.intensity = thisPose3D.intensity;
    thisPose6D.roll  = latestEstimate.rotation().roll();
    thisPose6D.pitch = latestEstimate.rotation().pitch();
    thisPose6D.yaw   = latestEstimate.rotation().yaw();
    thisPose6D.time = timeLaserInfoCur;
    cloudKeyPoses6D->push_back(thisPose6D);

    // cout << "****************************************************" << endl;
    // cout << "Pose covariance:" << endl;
    // cout << isam->marginalCovariance(isamCurrentEstimate.size()-1) << endl << endl;

    // pose covariance
    poseCovariance = isam->marginalCovariance(isamCurrentEstimate.size()-1);

    // update transformTobeMapped with the current frame pose
    transformTobeMapped[0] = latestEstimate.rotation().roll();
    transformTobeMapped[1] = latestEstimate.rotation().pitch();
    transformTobeMapped[2] = latestEstimate.rotation().yaw();
    transformTobeMapped[3] = latestEstimate.translation().x();
    transformTobeMapped[4] = latestEstimate.translation().y();
    transformTobeMapped[5] = latestEstimate.translation().z();

    // downsampled corner and surface point sets of the current frame
    pcl::PointCloud<PointType>::Ptr thisCornerKeyFrame(new pcl::PointCloud<PointType>());
    pcl::PointCloud<PointType>::Ptr thisSurfKeyFrame(new pcl::PointCloud<PointType>());
    pcl::copyPointCloud(*laserCloudCornerLastDS, *thisCornerKeyFrame);
    pcl::copyPointCloud(*laserCloudSurfLastDS,   *thisSurfKeyFrame);

    // store the downsampled feature point sets
    cornerCloudKeyFrames.push_back(thisCornerKeyFrame);
    surfCloudKeyFrames.push_back(thisSurfKeyFrame);

    // update the odometry trajectory
    updatePath(thisPose6D);
}
```

The optimized results of all historical state variables are stored in isamCurrentEstimate, while the result for the latest frame is in latestEstimate. The function that adds the laser-odometry factor is:
```cpp
void addOdomFactor()
{
    if (cloudKeyPoses3D->points.empty())
    {
        noiseModel::Diagonal::shared_ptr priorNoise = noiseModel::Diagonal::Variances((Vector(6) << 1e-2, 1e-2, M_PI*M_PI, 1e8, 1e8, 1e8).finished()); // rad*rad, meter*meter
        gtSAMgraph.add(PriorFactor<Pose3>(0, trans2gtsamPose(transformTobeMapped), priorNoise));
        initialEstimate.insert(0, trans2gtsamPose(transformTobeMapped));
    }
    else
    {
        noiseModel::Diagonal::shared_ptr odometryNoise = noiseModel::Diagonal::Variances((Vector(6) << 1e-6, 1e-6, 1e-6, 1e-4, 1e-4, 1e-4).finished());
        gtsam::Pose3 poseFrom = pclPointTogtsamPose3(cloudKeyPoses6D->points.back());
        gtsam::Pose3 poseTo   = trans2gtsamPose(transformTobeMapped);
        gtSAMgraph.add(BetweenFactor<Pose3>(cloudKeyPoses3D->size()-1, cloudKeyPoses3D->size(), poseFrom.between(poseTo), odometryNoise));
        initialEstimate.insert(cloudKeyPoses3D->size(), poseTo);
    }
}
```

The function that adds GPS factors is shown next; it is a bit more involved, since it has to decide whether and when each GPS measurement should be added (with checks on timing, covariance, and distance):
```cpp
void addGPSFactor()
{
    if (gpsQueue.empty())
        return;

    // wait for system initialized and settles down
    if (cloudKeyPoses3D->points.empty())
        return;
    else
    {
        if (pointDistance(cloudKeyPoses3D->front(), cloudKeyPoses3D->back()) < 5.0)
            return;
    }

    // pose covariance small, no need to correct
    if (poseCovariance(3,3) < poseCovThreshold && poseCovariance(4,4) < poseCovThreshold)
        return;

    // last gps position
    static PointType lastGPSPoint;

    while (!gpsQueue.empty())
    {
        if (gpsQueue.front().header.stamp.toSec() < timeLaserInfoCur - 0.2)
        {
            // message too old
            gpsQueue.pop_front();
        }
        else if (gpsQueue.front().header.stamp.toSec() > timeLaserInfoCur + 0.2)
        {
            // message too new
            break;
        }
        else
        {
            nav_msgs::Odometry thisGPS = gpsQueue.front();
            gpsQueue.pop_front();

            // GPS too noisy, skip
            float noise_x = thisGPS.pose.covariance[0];
            float noise_y = thisGPS.pose.covariance[7];
            float noise_z = thisGPS.pose.covariance[14];
            if (noise_x > gpsCovThreshold || noise_y > gpsCovThreshold)
                continue;

            float gps_x = thisGPS.pose.pose.position.x;
            float gps_y = thisGPS.pose.pose.position.y;
            float gps_z = thisGPS.pose.pose.position.z;
            if (!useGpsElevation)
            {
                gps_z = transformTobeMapped[5];
                noise_z = 0.01;
            }

            // GPS not properly initialized (0,0,0)
            if (abs(gps_x) < 1e-6 && abs(gps_y) < 1e-6)
                continue;

            // Add GPS every a few meters
            PointType curGPSPoint;
            curGPSPoint.x = gps_x;
            curGPSPoint.y = gps_y;
            curGPSPoint.z = gps_z;
            if (pointDistance(curGPSPoint, lastGPSPoint) < 5.0)
                continue;
            else
                lastGPSPoint = curGPSPoint;

            gtsam::Vector Vector3(3);
            Vector3 << max(noise_x, 1.0f), max(noise_y, 1.0f), max(noise_z, 1.0f);
            noiseModel::Diagonal::shared_ptr gps_noise = noiseModel::Diagonal::Variances(Vector3);
            gtsam::GPSFactor gps_factor(cloudKeyPoses3D->size(), gtsam::Point3(gps_x, gps_y, gps_z), gps_noise);
            gtSAMgraph.add(gps_factor);

            aLoopIsClosed = true;
            break;
        }
    }
}
```

The function that adds loop-closure factors is shown below:
```cpp
void addLoopFactor()
{
    if (loopIndexQueue.empty())
        return;

    for (int i = 0; i < (int)loopIndexQueue.size(); ++i)
    {
        int indexFrom = loopIndexQueue[i].first;
        int indexTo   = loopIndexQueue[i].second;
        gtsam::Pose3 poseBetween = loopPoseQueue[i];
        gtsam::noiseModel::Diagonal::shared_ptr noiseBetween = loopNoiseQueue[i];
        gtSAMgraph.add(BetweenFactor<Pose3>(indexFrom, indexTo, poseBetween, noiseBetween));
    }

    loopIndexQueue.clear();
    loopPoseQueue.clear();
    loopNoiseQueue.clear();
    aLoopIsClosed = true;
}
```

That concludes this brief look at how GTSAM is used in the LIO-SAM project. The coverage is still fairly rough and I will keep filling it in; I have only just started working with GTSAM myself, so feel free to reach out if you spot any problems!
In addition, readers interested in SLAM algorithms can refer to my post SLAM算法總結——經典SLAM算法框架總結 (a summary of classic SLAM algorithm frameworks).