
LIO-SAM: Replacing Euler Angles with SO3 in the Gauss-Newton Solver

  • LIO-SAM was published at IROS 2020 and is a tightly coupled LiDAR-inertial odometry system with very good performance.
  • I want to put a LiDAR odometry on our robot, so I decided to adapt LIO-SAM for it.
  • While modifying it I ran into a problem: in the odometry solve (the LMOptimization function of mapOptimization), the author represents the rotation part of the pose with Euler angles.
    • For ground mobile robots this is workable, since their pitch normally never reaches 90°, so gimbal lock does not occur.
    • Our robot, however, is a climbing robot that often flips over, so Euler angles cannot be used to represent its rotation; this part has to be modified.
    • The idea: represent the pose as "SO3 plus translation".
  • First, the overall structure of the original code:
void laserCloudInfoHandler(const lio_sam::cloud_infoConstPtr& msgIn)
{
    // extract time stamp
    timeLaserInfoStamp = msgIn->header.stamp;
    timeLaserInfoCur = msgIn->header.stamp.toSec();

    // extract info and feature cloud
    cloudInfo = *msgIn;
    pcl::fromROSMsg(msgIn->cloud_corner,  *laserCloudCornerLast);
    pcl::fromROSMsg(msgIn->cloud_surface, *laserCloudSurfLast);

    std::lock_guard<std::mutex> lock(mtx);

    static double timeLastProcessing = -1;
    if (timeLaserInfoCur - timeLastProcessing >= mappingProcessInterval)
    {
        timeLastProcessing = timeLaserInfoCur;

        updateInitialGuess();
        extractSurroundingKeyFrames();
        downsampleCurrentScan();
        scan2MapOptimization();
        saveKeyFramesAndFactor();
        correctPoses();
        publishOdometry();
        publishFrames();
    }
}
  • In plain terms, the overall flow of mapOptimization is:
    • updateInitialGuess: use the IMU prediction as the initial guess for the optimization
    • extractSurroundingKeyFrames: extract the point clouds of keyframes near the current frame to form a local map
    • downsampleCurrentScan: downsample the corner and surface feature sets separately
    • scan2MapOptimization: the key part! Solve the current frame's pose with Gauss-Newton
    • saveKeyFramesAndFactor: decide whether a new keyframe should be inserted; add new factors to iSAM for further optimization (this mainly suppresses error over long distances, long durations, and when loop closures are present)
    • correctPoses: correct the saved keyframe poses after the iSAM update (e.g. when a loop closure adjusts the trajectory); the fusion with the IMU's roll/pitch happens in transformUpdate, discussed at the end of this post
  • So the part I mainly need to change is scan2MapOptimization. Let's look at its main flow:
void scan2MapOptimization()
{
    if (cloudKeyPoses3D->points.empty())
        return;

    if (laserCloudCornerLastDSNum > edgeFeatureMinValidNum && laserCloudSurfLastDSNum > surfFeatureMinValidNum)
    {
        kdtreeCornerFromMap->setInputCloud(laserCloudCornerFromMapDS);
        kdtreeSurfFromMap->setInputCloud(laserCloudSurfFromMapDS);

        for (int iterCount = 0; iterCount < 30; iterCount++)
        {
            laserCloudOri->clear();
            coeffSel->clear();

            cornerOptimization();
            surfOptimization();

            combineOptimizationCoeffs();

            if (LMOptimization(iterCount) == true)
                break;
        }

        transformUpdate();
    } else {
        ROS_WARN("Not enough features! Only %d edge and %d planar features available.", laserCloudCornerLastDSNum, laserCloudSurfLastDSNum);
    }
}
  • In plain terms:
    • cornerOptimization and surfOptimization do the feature matching, residual computation, and the Jacobian with respect to translation (the rotation Jacobian can then be derived from the translation Jacobian)
    • combineOptimizationCoeffs merges the residuals and partial Jacobian terms computed by those two functions, so that LMOptimization can compute the iteration step in a single for loop
    • LMOptimization computes the Jacobian with respect to rotation, then uses Gauss-Newton to compute one iteration step and update the pose
    • scan2MapOptimization as a whole runs up to 30 such iterations, re-matching features on every iteration
  • So which functions exactly do I need to change?
    • First, to be clear: what I am changing is how the state variable is represented during the optimization
    • Each iteration solves J^T * J * δx = -J^T * f(x); the state representation determines how the Jacobian J is computed. Since this post changes the rotation representation, every function involved in computing the rotation Jacobian has to be modified, as does the place where the state is updated (a short derivation sketch follows this list)
    • So three functions need changing: cornerOptimization, surfOptimization, and LMOptimization
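  • As a reference for the changes below, here is a minimal derivation sketch of the point-to-plane case in the "SO3 plus translation" parameterization. It assumes a left perturbation on the rotation, which is my reading of the convention the modified code uses; LIO-SAM itself does not spell this out:

        Gauss-Newton step:        J^T * J * δx = -J^T * f(x),   with δx = [δθ, δt] ∈ R^6
        Point-to-plane residual:  r(R, t) = n^T * (R*p + t) + D   (n: unit plane normal, D: plane offset, p: point in the lidar frame)
        Left perturbation R ← exp(δθ^) * R gives:
            ∂r/∂δθ = -n^T * (R*p)^   (i.e. n^T right-multiplied by the skew-symmetric matrix of -R*p)
            ∂r/∂t  =  n^T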
  • Let's start with surfOptimization:
void surfOptimization()
{
    // update the transform used to associate points to the map (current pose estimate)
    updatePointAssociateToMap();

    // iterate over the current frame's surface points
    #pragma omp parallel for num_threads(numberOfCores)
    for (int i = 0; i < laserCloudSurfLastDSNum; i++)
    {
        PointType pointOri, pointSel, coeff;
        std::vector<int> pointSearchInd;
        std::vector<float> pointSearchSqDis;

        // surface point (still in the lidar frame)
        pointOri = laserCloudSurfLastDS->points[i];
        // transform it into the world (map) frame using the current pose estimate
        pointAssociateToMap(&pointOri, &pointSel);
        // find the 5 nearest surface points in the local map
        kdtreeSurfFromMap->nearestKSearch(pointSel, 5, pointSearchInd, pointSearchSqDis);

        Eigen::Matrix<float, 5, 3> matA0;
        Eigen::Matrix<float, 5, 1> matB0;
        Eigen::Vector3f matX0;

        matA0.setZero();
        matB0.fill(-1);
        matX0.setZero();

        // all 5 neighbors must be within 1 m
        if (pointSearchSqDis[4] < 1.0) {
            for (int j = 0; j < 5; j++) {
                matA0(j, 0) = laserCloudSurfFromMapDS->points[pointSearchInd[j]].x;
                matA0(j, 1) = laserCloudSurfFromMapDS->points[pointSearchInd[j]].y;
                matA0(j, 2) = laserCloudSurfFromMapDS->points[pointSearchInd[j]].z;
            }

            // assume the plane equation ax + by + cz + 1 = 0 and solve for the coefficients a, b, c (d = 1)
            matX0 = matA0.colPivHouseholderQr().solve(matB0);

            // plane coefficients, i.e. the components of the normal
            float pa = matX0(0, 0);
            float pb = matX0(1, 0);
            float pc = matX0(2, 0);
            float pd = 1;

            // normalize to a unit normal
            float ps = sqrt(pa * pa + pb * pb + pc * pc);
            pa /= ps; pb /= ps; pc /= ps; pd /= ps;

            // check plane quality: if any of the 5 points is more than 0.2 m from the plane,
            // the points are too scattered to form a valid plane
            bool planeValid = true;
            for (int j = 0; j < 5; j++) {
                if (fabs(pa * laserCloudSurfFromMapDS->points[pointSearchInd[j]].x +
                         pb * laserCloudSurfFromMapDS->points[pointSearchInd[j]].y +
                         pc * laserCloudSurfFromMapDS->points[pointSearchInd[j]].z + pd) > 0.2) {
                    planeValid = false;
                    break;
                }
            }

            // the plane is valid
            if (planeValid) {
                // distance from the current point to the plane
                float pd2 = pa * pointSel.x + pb * pointSel.y + pc * pointSel.z + pd;

                float s = 1 - 0.9 * fabs(pd2) / sqrt(sqrt(pointSel.x * pointSel.x
                        + pointSel.y * pointSel.y + pointSel.z * pointSel.z));

                // unit normal of the point-to-plane perpendicular (equivalent to the plane normal), scaled by s
                coeff.x = s * pa;
                coeff.y = s * pb;
                coeff.z = s * pc;
                // point-to-plane distance, scaled by s
                coeff.intensity = s * pd2;

                if (s > 0.1) {
                    // add this surface point to the matched set
                    laserCloudOriSurfVec[i] = pointOri;
                    // and its coefficients
                    coeffSelSurfVec[i] = coeff;
                    laserCloudOriSurfFlag[i] = true;
                }
            }
        }
    }
}
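  • Written out, the plane fit in the code above (this only restates what the colPivHouseholderQr call solves):

        Stack the 5 neighbors q_1 … q_5 as the rows of A (5×3) and solve A * m = -1 in the least-squares sense, m = [a, b, c]^T
        Normalize: n = m / |m|,  D = 1 / |m|,  so the plane is n^T * x + D = 0
        Signed distance of the query point: pd2 = n^T * p_sel + D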
  • To summarize:
    • First transform the current feature point into the map frame and use the kd-tree to find its 5 nearest surface points in the map
    • Fit a plane to those five points and check whether the plane is of good enough quality
    • If it is, compute the point-to-plane residual, then assign the residual a confidence factor based on the point-to-plane distance (points far from their matched plane are less trustworthy); if the confidence factor is large enough, put the point into the matched set
    • Note the coeff variable: its xyz fields store the plane's unit normal (scaled by the confidence factor), and its intensity field stores the point-to-plane distance (scaled by the same factor)
    • Also note: by the point-to-plane Jacobian formula (in the "SO3 plus translation" version), the Jacobian of the point-to-plane residual with respect to translation is simply the transposed unit normal of the plane, and the rotation Jacobian is just that row vector right-multiplied by the skew-symmetric matrix of -Rp (the current rotation estimate times the point expressed in the lidar frame)
    • The rotation Jacobian itself is assembled in LMOptimization, so the -Rp term has to be computed before LMOptimization runs
    • The modified code:
// R * p: the lidar-frame point rotated by the current rotation estimate
// (the rotation block of transPointAssociateToMap), no translation applied
Eigen::Vector3d CulRp(PointType const * const pi)
{
    return Eigen::Vector3d(transPointAssociateToMap(0,0) * pi->x + transPointAssociateToMap(0,1) * pi->y + transPointAssociateToMap(0,2) * pi->z,
                           transPointAssociateToMap(1,0) * pi->x + transPointAssociateToMap(1,1) * pi->y + transPointAssociateToMap(1,2) * pi->z,
                           transPointAssociateToMap(2,0) * pi->x + transPointAssociateToMap(2,1) * pi->y + transPointAssociateToMap(2,2) * pi->z);
}

void surfOptimization()
{
    updatePointAssociateToMap();

    #pragma omp parallel for num_threads(numberOfCores)
    for (int i = 0; i < laserCloudSurfLastDSNum; i++)
    {
        PointType pointOri, pointSel, coeff;
        std::vector<int> pointSearchInd;
        std::vector<float> pointSearchSqDis;

        pointOri = laserCloudSurfLastDS->points[i];
        pointAssociateToMap(&pointOri, &pointSel);
        kdtreeSurfFromMap->nearestKSearch(pointSel, 5, pointSearchInd, pointSearchSqDis);

        Eigen::Matrix<float, 5, 3> matA0;
        Eigen::Matrix<float, 5, 1> matB0;
        Eigen::Vector3f matX0;

        matA0.setZero();
        matB0.fill(-1);
        matX0.setZero();

        if (pointSearchSqDis[4] < 1.0) {
            for (int j = 0; j < 5; j++) {
                matA0(j, 0) = laserCloudSurfFromMapDS->points[pointSearchInd[j]].x;
                matA0(j, 1) = laserCloudSurfFromMapDS->points[pointSearchInd[j]].y;
                matA0(j, 2) = laserCloudSurfFromMapDS->points[pointSearchInd[j]].z;
            }

            matX0 = matA0.colPivHouseholderQr().solve(matB0);

            float pa = matX0(0, 0);
            float pb = matX0(1, 0);
            float pc = matX0(2, 0);
            float pd = 1;

            float ps = sqrt(pa * pa + pb * pb + pc * pc);
            pa /= ps; pb /= ps; pc /= ps; pd /= ps;

            bool planeValid = true;
            for (int j = 0; j < 5; j++) {
                if (fabs(pa * laserCloudSurfFromMapDS->points[pointSearchInd[j]].x +
                         pb * laserCloudSurfFromMapDS->points[pointSearchInd[j]].y +
                         pc * laserCloudSurfFromMapDS->points[pointSearchInd[j]].z + pd) > 0.2) {
                    planeValid = false;
                    break;
                }
            }

            if (planeValid) {
                float pd2 = pa * pointSel.x + pb * pointSel.y + pc * pointSel.z + pd;

                float s = 1 - 0.9 * fabs(pd2) / sqrt(sqrt(pointSel.x * pointSel.x
                        + pointSel.y * pointSel.y + pointSel.z * pointSel.z));

                coeff.x = s * pa;
                coeff.y = s * pb;
                coeff.z = s * pc;
                coeff.intensity = s * pd2;

                if (s > 0.1) {
                    laserCloudOriSurfVec[i] = pointOri;
                    coeffSelSurfVec[i] = coeff;
                    laserCloudOriSurfFlag[i] = true;
                    surfRp[i] = (CulRp(&pointOri)); // new class member: std::vector<Eigen::Vector3d> surfRp; // for jacobian orientation
                }
            }
        }
    }
}
  • In fact all that was added is a single line, surfRp[i] = (CulRp(&pointOri));, together with a new class member std::vector<Eigen::Vector3d> surfRp; used for the rotation Jacobian (a sketch of declaring, allocating, and clearing the new members follows below).
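  • For completeness, the new containers have to be declared, sized, and cleared somewhere; the post only mentions them in comments. A minimal sketch of what I assume this looks like (member names follow the comments above; the allocation spot mirrors how LIO-SAM sizes laserCloudOriSurfVec in allocateMemory, and the Rp cloud should be cleared together with laserCloudOri/coeffSel in every scan2MapOptimization iteration so the indices stay aligned):

// Assumed class members (only mentioned in comments in the post):
std::vector<Eigen::Vector3d> surfRp;      // R*p per surf point, for the rotation Jacobian
std::vector<Eigen::Vector3d> cornerRp;    // R*p per corner point
pcl::PointCloud<pcl::PointXYZ>::Ptr Rp;   // merged, index-aligned with laserCloudOri / coeffSel

// In allocateMemory() (assumed placement, mirroring laserCloudOriSurfVec):
surfRp.resize(N_SCAN * Horizon_SCAN);
cornerRp.resize(N_SCAN * Horizon_SCAN);
Rp.reset(new pcl::PointCloud<pcl::PointXYZ>());

// In the scan2MapOptimization iteration loop, next to the existing clears:
laserCloudOri->clear();
coeffSel->clear();
Rp->clear();   // otherwise Rp grows across iterations and misaligns with coeffSel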
  • The modification to cornerOptimization follows the same idea, but the Jacobian for line features is computed differently:

    • The code (note that it already includes the added cornerRp[i] = CulRp(&pointOri); line at the end):
void cornerOptimization()
{
    updatePointAssociateToMap();

    #pragma omp parallel for num_threads(numberOfCores)
    for (int i = 0; i < laserCloudCornerLastDSNum; i++)
    {
        PointType pointOri, pointSel, coeff;
        std::vector<int> pointSearchInd;
        std::vector<float> pointSearchSqDis;

        pointOri = laserCloudCornerLastDS->points[i];
        pointAssociateToMap(&pointOri, &pointSel);
        kdtreeCornerFromMap->nearestKSearch(pointSel, 5, pointSearchInd, pointSearchSqDis);

        cv::Mat matA1(3, 3, CV_32F, cv::Scalar::all(0));
        cv::Mat matD1(1, 3, CV_32F, cv::Scalar::all(0));
        cv::Mat matV1(3, 3, CV_32F, cv::Scalar::all(0));

        if (pointSearchSqDis[4] < 1.0) {
            // centroid of the 5 nearest corner points
            float cx = 0, cy = 0, cz = 0;
            for (int j = 0; j < 5; j++) {
                cx += laserCloudCornerFromMapDS->points[pointSearchInd[j]].x;
                cy += laserCloudCornerFromMapDS->points[pointSearchInd[j]].y;
                cz += laserCloudCornerFromMapDS->points[pointSearchInd[j]].z;
            }
            cx /= 5; cy /= 5; cz /= 5;

            // covariance of the 5 neighbors
            float a11 = 0, a12 = 0, a13 = 0, a22 = 0, a23 = 0, a33 = 0;
            for (int j = 0; j < 5; j++) {
                float ax = laserCloudCornerFromMapDS->points[pointSearchInd[j]].x - cx;
                float ay = laserCloudCornerFromMapDS->points[pointSearchInd[j]].y - cy;
                float az = laserCloudCornerFromMapDS->points[pointSearchInd[j]].z - cz;

                a11 += ax * ax; a12 += ax * ay; a13 += ax * az;
                a22 += ay * ay; a23 += ay * az;
                a33 += az * az;
            }
            a11 /= 5; a12 /= 5; a13 /= 5; a22 /= 5; a23 /= 5; a33 /= 5;

            matA1.at<float>(0, 0) = a11; matA1.at<float>(0, 1) = a12; matA1.at<float>(0, 2) = a13;
            matA1.at<float>(1, 0) = a12; matA1.at<float>(1, 1) = a22; matA1.at<float>(1, 2) = a23;
            matA1.at<float>(2, 0) = a13; matA1.at<float>(2, 1) = a23; matA1.at<float>(2, 2) = a33;

            // eigen-decomposition: a valid line feature has one dominant eigenvalue
            cv::eigen(matA1, matD1, matV1);

            if (matD1.at<float>(0, 0) > 3 * matD1.at<float>(0, 1)) {
                float x0 = pointSel.x;
                float y0 = pointSel.y;
                float z0 = pointSel.z;
                // two points taken 0.1 m on either side of the centroid along the principal direction
                float x1 = cx + 0.1 * matV1.at<float>(0, 0);
                float y1 = cy + 0.1 * matV1.at<float>(0, 1);
                float z1 = cz + 0.1 * matV1.at<float>(0, 2);
                float x2 = cx - 0.1 * matV1.at<float>(0, 0);
                float y2 = cy - 0.1 * matV1.at<float>(0, 1);
                float z2 = cz - 0.1 * matV1.at<float>(0, 2);

                // a012 = ((p_i - p_a) x (p_i - p_b)).norm
                float a012 = sqrt(((x0 - x1)*(y0 - y2) - (x0 - x2)*(y0 - y1)) * ((x0 - x1)*(y0 - y2) - (x0 - x2)*(y0 - y1))
                                + ((x0 - x1)*(z0 - z2) - (x0 - x2)*(z0 - z1)) * ((x0 - x1)*(z0 - z2) - (x0 - x2)*(z0 - z1))
                                + ((y0 - y1)*(z0 - z2) - (y0 - y2)*(z0 - z1)) * ((y0 - y1)*(z0 - z2) - (y0 - y2)*(z0 - z1)));

                // l12 = (pa - pb).norm
                float l12 = sqrt((x1 - x2)*(x1 - x2) + (y1 - y2)*(y1 - y2) + (z1 - z2)*(z1 - z2));

                float la = ((y1 - y2)*((x0 - x1)*(y0 - y2) - (x0 - x2)*(y0 - y1))
                          + (z1 - z2)*((x0 - x1)*(z0 - z2) - (x0 - x2)*(z0 - z1))) / a012 / l12;

                float lb = -((x1 - x2)*((x0 - x1)*(y0 - y2) - (x0 - x2)*(y0 - y1))
                           - (z1 - z2)*((y0 - y1)*(z0 - z2) - (y0 - y2)*(z0 - z1))) / a012 / l12;

                float lc = -((x1 - x2)*((x0 - x1)*(z0 - z2) - (x0 - x2)*(z0 - z1))
                           + (y1 - y2)*((y0 - y1)*(z0 - z2) - (y0 - y2)*(z0 - z1))) / a012 / l12;

                // point-to-line distance
                float ld2 = a012 / l12;

                float s = 1 - 0.9 * fabs(ld2);

                coeff.x = s * la;
                coeff.y = s * lb;
                coeff.z = s * lc;
                coeff.intensity = s * ld2;

                if (s > 0.1) {
                    laserCloudOriCornerVec[i] = pointOri;
                    coeffSelCornerVec[i] = coeff;
                    laserCloudOriCornerFlag[i] = true;
                    cornerRp[i] = CulRp(&pointOri);
                }
            }
        }
    }
}
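  • For reference, the distance the code above computes (ld2 = a012 / l12) is the standard point-to-line distance:

        d = |(p_w - p_a) × (p_w - p_b)| / |p_a - p_b|

    where p_w is the query point in the map frame and p_a, p_b are the two points taken ±0.1 m along the line's principal direction from the neighbors' centroid.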
  • For this function's coeff variable, the xyz fields directly store the Jacobian with respect to translation (scaled by a confidence factor), and the intensity field stores the point-to-line distance (scaled by the same factor)
  • Note: in the rotation Jacobian for line features, Rp carries no negative sign
  • After modifying surfOptimization and cornerOptimization there are now extra Rp values, so they also have to be merged together, which means combineOptimizationCoeffs has to be changed as well:
void combineOptimizationCoeffs()
{
    // combine corner coeffs
    for (int i = 0; i < laserCloudCornerLastDSNum; ++i){
        if (laserCloudOriCornerFlag[i] == true){
            laserCloudOri->push_back(laserCloudOriCornerVec[i]);
            coeffSel->push_back(coeffSelCornerVec[i]);
            pcl::PointXYZ p;
            p.x = cornerRp[i].x(); p.y = cornerRp[i].y(); p.z = cornerRp[i].z();
            Rp->push_back(p); // pcl::PointCloud<pcl::PointXYZ>::Ptr Rp; // for jacobian orientation
        }
    }
    // combine surf coeffs
    for (int i = 0; i < laserCloudSurfLastDSNum; ++i){
        if (laserCloudOriSurfFlag[i] == true){
            laserCloudOri->push_back(laserCloudOriSurfVec[i]);
            coeffSel->push_back(coeffSelSurfVec[i]);
            pcl::PointXYZ p;
            p.x = -1 * surfRp[i].x(); p.y = -1 * surfRp[i].y(); p.z = -1 * surfRp[i].z();
            Rp->push_back(p);
        }
    }
    // reset flag for next iteration
    std::fill(laserCloudOriCornerFlag.begin(), laserCloudOriCornerFlag.end(), false);
    std::fill(laserCloudOriSurfFlag.begin(), laserCloudOriSurfFlag.end(), false);
}
  • Note that it is the surface features whose Rp is pushed with a negative sign here, while line features keep their sign, consistent with the Jacobian notes above
  • Finally, LMOptimization has to be modified. First, the original code:
bool LMOptimization(int iterCount)
{
    // This optimization is from the original loam_velodyne by Ji Zhang, need to cope with coordinate transformation
    // lidar <- camera      ---     camera <- lidar
    // x = z                ---     x = y
    // y = x                ---     y = z
    // z = y                ---     z = x
    // roll = yaw           ---     roll = pitch
    // pitch = roll         ---     pitch = yaw
    // yaw = pitch          ---     yaw = roll

    // lidar -> camera
    float srx = sin(transformTobeMapped[1]);
    float crx = cos(transformTobeMapped[1]);
    float sry = sin(transformTobeMapped[2]);
    float cry = cos(transformTobeMapped[2]);
    float srz = sin(transformTobeMapped[0]);
    float crz = cos(transformTobeMapped[0]);

    // too few matched feature points in the current frame
    int laserCloudSelNum = laserCloudOri->size();
    if (laserCloudSelNum < 50) {
        return false;
    }

    cv::Mat matA(laserCloudSelNum, 6, CV_32F, cv::Scalar::all(0));
    cv::Mat matAt(6, laserCloudSelNum, CV_32F, cv::Scalar::all(0));
    cv::Mat matAtA(6, 6, CV_32F, cv::Scalar::all(0));
    cv::Mat matB(laserCloudSelNum, 1, CV_32F, cv::Scalar::all(0));
    cv::Mat matAtB(6, 1, CV_32F, cv::Scalar::all(0));
    cv::Mat matX(6, 1, CV_32F, cv::Scalar::all(0));

    PointType pointOri, coeff;

    // iterate over the matched feature points and build the Jacobian
    for (int i = 0; i < laserCloudSelNum; i++) {
        // lidar -> camera
        pointOri.x = laserCloudOri->points[i].y;
        pointOri.y = laserCloudOri->points[i].z;
        pointOri.z = laserCloudOri->points[i].x;
        // lidar -> camera
        coeff.x = coeffSel->points[i].y;
        coeff.y = coeffSel->points[i].z;
        coeff.z = coeffSel->points[i].x;
        coeff.intensity = coeffSel->points[i].intensity;
        // in camera
        float arx = (crx*sry*srz*pointOri.x + crx*crz*sry*pointOri.y - srx*sry*pointOri.z) * coeff.x
                  + (-srx*srz*pointOri.x - crz*srx*pointOri.y - crx*pointOri.z) * coeff.y
                  + (crx*cry*srz*pointOri.x + crx*cry*crz*pointOri.y - cry*srx*pointOri.z) * coeff.z;

        float ary = ((cry*srx*srz - crz*sry)*pointOri.x
                  + (sry*srz + cry*crz*srx)*pointOri.y + crx*cry*pointOri.z) * coeff.x
                  + ((-cry*crz - srx*sry*srz)*pointOri.x
                  + (cry*srz - crz*srx*sry)*pointOri.y - crx*sry*pointOri.z) * coeff.z;

        float arz = ((crz*srx*sry - cry*srz)*pointOri.x + (-cry*crz-srx*sry*srz)*pointOri.y)*coeff.x
                  + (crx*crz*pointOri.x - crx*srz*pointOri.y) * coeff.y
                  + ((sry*srz + cry*crz*srx)*pointOri.x + (crz*sry-cry*srx*srz)*pointOri.y)*coeff.z;

        // lidar -> camera
        matA.at<float>(i, 0) = arz;
        matA.at<float>(i, 1) = arx;
        matA.at<float>(i, 2) = ary;
        matA.at<float>(i, 3) = coeff.z;
        matA.at<float>(i, 4) = coeff.x;
        matA.at<float>(i, 5) = coeff.y;
        // point-to-line / point-to-plane distance as the observation
        matB.at<float>(i, 0) = -coeff.intensity;
    }

    cv::transpose(matA, matAt);
    matAtA = matAt * matA;
    matAtB = matAt * matB;
    // Gauss-Newton: J^T * J * delta_x = -J^T * f
    cv::solve(matAtA, matAtB, matX, cv::DECOMP_QR);

    // on the first iteration, check whether the approximate Hessian (J^T * J) is degenerate (near-singular)
    if (iterCount == 0) {

        cv::Mat matE(1, 6, CV_32F, cv::Scalar::all(0));
        cv::Mat matV(6, 6, CV_32F, cv::Scalar::all(0));
        cv::Mat matV2(6, 6, CV_32F, cv::Scalar::all(0));

        cv::eigen(matAtA, matE, matV);
        matV.copyTo(matV2);

        isDegenerate = false;
        float eignThre[6] = {100, 100, 100, 100, 100, 100};
        for (int i = 5; i >= 0; i--) {
            if (matE.at<float>(0, i) < eignThre[i]) {
                for (int j = 0; j < 6; j++) {
                    matV2.at<float>(i, j) = 0;
                }
                isDegenerate = true;
            } else {
                break;
            }
        }
        matP = matV.inv() * matV2;
    }

    if (isDegenerate)
    {
        cv::Mat matX2(6, 1, CV_32F, cv::Scalar::all(0));
        matX.copyTo(matX2);
        matX = matP * matX2;
    }

    // update the current pose: x = x + delta_x
    transformTobeMapped[0] += matX.at<float>(0, 0);
    transformTobeMapped[1] += matX.at<float>(1, 0);
    transformTobeMapped[2] += matX.at<float>(2, 0);
    transformTobeMapped[3] += matX.at<float>(3, 0);
    transformTobeMapped[4] += matX.at<float>(4, 0);
    transformTobeMapped[5] += matX.at<float>(5, 0);

    float deltaR = sqrt(
                        pow(pcl::rad2deg(matX.at<float>(0, 0)), 2) +
                        pow(pcl::rad2deg(matX.at<float>(1, 0)), 2) +
                        pow(pcl::rad2deg(matX.at<float>(2, 0)), 2));
    float deltaT = sqrt(
                        pow(matX.at<float>(3, 0) * 100, 2) +
                        pow(matX.at<float>(4, 0) * 100, 2) +
                        pow(matX.at<float>(5, 0) * 100, 2));

    // if delta_x is small enough, consider the optimization converged
    if (deltaR < 0.05 && deltaT < 0.05) {
        return true;
    }
    return false;
}
  • What needs changing:
    • the state variable (defined as [roll, pitch, yaw, x, y, z]; the optimization temporarily reorders it)
    • the Jacobian (matA is the Jacobian, matAt its transpose, matB the residual vector); matX is the iteration step
  • State variable:
    • change from [roll, pitch, yaw, x, y, z] to [so3, x, y, z]
    • only the optimization itself is changed this way; the rest of the system keeps the original representation, so an Euler-angles-to-SO3 conversion is needed here
    • when updating the rotation, the rotation part of the iteration step has to be mapped from the Lie algebra to the Lie group (exponential map) and then left-multiplied onto the current rotation
    • after the iteration, the SO3 state has to be converted back to Euler angles
  • Jacobian:
    • the translation part does not need to change; only the rotation part does (the skewSymmetric helper used by the modified code is sketched right after this list)
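  • One note before the code: the modified LMOptimization below calls mapOptimization::skewSymmetric, which is not part of stock LIO-SAM. A minimal sketch of such a helper (my assumption of what it looks like) is:

// Assumed helper (not in stock LIO-SAM): returns v^, the skew-symmetric matrix
// satisfying v^ * w == v.cross(w) for any vector w.
static Eigen::Matrix3f skewSymmetric(const Eigen::Vector3f& v)
{
    Eigen::Matrix3f m;
    m <<    0.f, -v.z(),  v.y(),
         v.z(),    0.f, -v.x(),
        -v.y(),  v.x(),    0.f;
    return m;
}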
  • I'm too lazy to spell out the rest… just read the code:
bool LMOptimization(int iterCount)
{
    // This optimization is from the original loam_velodyne by Ji Zhang, need to cope with coordinate transformation
    // lidar <- camera      ---     camera <- lidar
    // x = z                ---     x = y
    // y = x                ---     y = z
    // z = y                ---     z = x
    // roll = yaw           ---     roll = pitch
    // pitch = roll         ---     pitch = yaw
    // yaw = pitch          ---     yaw = roll

    // the Euler-angle sin/cos terms are no longer needed
    // float srx = sin(transformTobeMapped[1]);
    // float crx = cos(transformTobeMapped[1]);
    // float sry = sin(transformTobeMapped[2]);
    // float cry = cos(transformTobeMapped[2]);
    // float srz = sin(transformTobeMapped[0]);
    // float crz = cos(transformTobeMapped[0]);

    int laserCloudSelNum = laserCloudOri->size();
    if (laserCloudSelNum < 50) {
        return false;
    }

    cv::Mat matA(laserCloudSelNum, 6, CV_32F, cv::Scalar::all(0));
    cv::Mat matAt(6, laserCloudSelNum, CV_32F, cv::Scalar::all(0));
    cv::Mat matAtA(6, 6, CV_32F, cv::Scalar::all(0));
    cv::Mat matB(laserCloudSelNum, 1, CV_32F, cv::Scalar::all(0));
    cv::Mat matAtB(6, 1, CV_32F, cv::Scalar::all(0));
    cv::Mat matX(6, 1, CV_32F, cv::Scalar::all(0));

    PointType pointOri, coeff;

    for (int i = 0; i < laserCloudSelNum; i++) {
        // // lidar -> camera (no longer needed)
        // pointOri.x = laserCloudOri->points[i].y;
        // pointOri.y = laserCloudOri->points[i].z;
        // pointOri.z = laserCloudOri->points[i].x;
        // coeff.x = coeffSel->points[i].y;
        // coeff.y = coeffSel->points[i].z;
        // coeff.z = coeffSel->points[i].x;
        // coeff.intensity = coeffSel->points[i].intensity;

        // keep points and coefficients in the lidar frame, no axis remapping
        pointOri.x = laserCloudOri->points[i].x;
        pointOri.y = laserCloudOri->points[i].y;
        pointOri.z = laserCloudOri->points[i].z;
        coeff.x = coeffSel->points[i].x;
        coeff.y = coeffSel->points[i].y;
        coeff.z = coeffSel->points[i].z;
        coeff.intensity = coeffSel->points[i].intensity;

        // skew of Rp
        auto p_pcl = Rp->points[i];
        Eigen::Vector3f p(p_pcl.x, p_pcl.y, p_pcl.z);
        Eigen::Matrix3f RpSkew = mapOptimization::skewSymmetric(p);
        Eigen::Vector3f tmp(coeff.x, coeff.y, coeff.z);
        // rotation Jacobian: coeff^T * (±Rp)^; the sign was already baked into Rp by combineOptimizationCoeffs
        Eigen::Matrix<float, 1, 3> jacobian_theta = tmp.transpose() * RpSkew;

        // the Euler-angle rotation Jacobian (arx/ary/arz) from the original function above is removed

        // matA.at<float>(i, 0) = arz;
        // matA.at<float>(i, 1) = arx;
        // matA.at<float>(i, 2) = ary;
        matA.at<float>(i, 0) = jacobian_theta(0, 0);
        matA.at<float>(i, 1) = jacobian_theta(0, 1);
        matA.at<float>(i, 2) = jacobian_theta(0, 2);
        // matA.at<float>(i, 3) = coeff.z;
        // matA.at<float>(i, 4) = coeff.x;
        // matA.at<float>(i, 5) = coeff.y;
        matA.at<float>(i, 3) = coeff.x;
        matA.at<float>(i, 4) = coeff.y;
        matA.at<float>(i, 5) = coeff.z;

        matB.at<float>(i, 0) = -coeff.intensity;
    }

    cv::transpose(matA, matAt);
    matAtA = matAt * matA;
    matAtB = matAt * matB;
    cv::solve(matAtA, matAtB, matX, cv::DECOMP_QR);

    if (iterCount == 0) {

        cv::Mat matE(1, 6, CV_32F, cv::Scalar::all(0));
        cv::Mat matV(6, 6, CV_32F, cv::Scalar::all(0));
        cv::Mat matV2(6, 6, CV_32F, cv::Scalar::all(0));

        cv::eigen(matAtA, matE, matV);
        matV.copyTo(matV2);

        isDegenerate = false;
        float eignThre[6] = {100, 100, 100, 100, 100, 100};
        for (int i = 5; i >= 0; i--) {
            if (matE.at<float>(0, i) < eignThre[i]) {
                for (int j = 0; j < 6; j++) {
                    matV2.at<float>(i, j) = 0;
                }
                isDegenerate = true;
            } else {
                break;
            }
        }
        matP = matV.inv() * matV2;
    }

    if (isDegenerate)
    {
        cv::Mat matX2(6, 1, CV_32F, cv::Scalar::all(0));
        matX.copyTo(matX2);
        matX = matP * matX2;
    }

    // current rotation estimate as SO3 (from the rotation block of transPointAssociateToMap)
    Eigen::Quaternionf quat(transPointAssociateToMap.matrix().block<3, 3>(0, 0));
    Sophus::SO3f so3_orientation;
    so3_orientation.setQuaternion(quat.normalized());

    // transformTobeMapped[0] += matX.at<float>(0, 0);
    // transformTobeMapped[1] += matX.at<float>(1, 0);
    // transformTobeMapped[2] += matX.at<float>(2, 0);
    // transformTobeMapped[3] += matX.at<float>(3, 0);
    // transformTobeMapped[4] += matX.at<float>(4, 0);
    // transformTobeMapped[5] += matX.at<float>(5, 0);

    // map the rotation step from the Lie algebra to the Lie group and left-multiply it onto the current rotation
    Eigen::Vector3f update_so3(matX.at<float>(0, 0), matX.at<float>(1, 0), matX.at<float>(2, 0));
    so3_orientation = Sophus::SO3f::exp(update_so3) * so3_orientation;

    // convert back to Euler angles for the rest of the system
    Eigen::Vector3f euler_angles = so3_orientation.matrix().eulerAngles(2, 1, 0);
    transformTobeMapped[0] = euler_angles[2];
    transformTobeMapped[1] = euler_angles[1];
    transformTobeMapped[2] = euler_angles[0];
    transformTobeMapped[3] += matX.at<float>(3, 0);
    transformTobeMapped[4] += matX.at<float>(4, 0);
    transformTobeMapped[5] += matX.at<float>(5, 0);

    // float deltaR = sqrt(
    //                 pow(pcl::rad2deg(matX.at<float>(0, 0)), 2) +
    //                 pow(pcl::rad2deg(matX.at<float>(1, 0)), 2) +
    //                 pow(pcl::rad2deg(matX.at<float>(2, 0)), 2));
    auto tmp = Sophus::SO3f::exp(update_so3);
    float deltaR = sqrt(
                        pow(pcl::rad2deg(tmp.angleX()), 2) +
                        pow(pcl::rad2deg(tmp.angleY()), 2) +
                        pow(pcl::rad2deg(tmp.angleZ()), 2));
    float deltaT = sqrt(
                        pow(matX.at<float>(3, 0) * 100, 2) +
                        pow(matX.at<float>(4, 0) * 100, 2) +
                        pow(matX.at<float>(5, 0) * 100, 2));

    if (deltaR < 0.05 && deltaT < 0.05) {
        return true; // converged
    }
    return false; // keep optimizing
}
  • After converting the SO3 state back to Euler angles here, fusing with the IMU again caused problems (probably related to how Euler angles are interpolated), so I replaced transformUpdate in scan2MapOptimization with incrementalOdometryAffineBack = trans2Affine3f(transformTobeMapped); — this way the result is no longer fused with the IMU's pitch and roll.
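  • Concretely, the tail of scan2MapOptimization then looks something like the sketch below. trans2Affine3f and incrementalOdometryAffineBack already exist in LIO-SAM, and, if I recall correctly, assigning incrementalOdometryAffineBack is also the last thing the original transformUpdate does, so the incremental-odometry bookkeeping is preserved:

        // ... iteration loop as before ...

        // transformUpdate();   // original: slerps roll/pitch with the IMU before the bookkeeping
        incrementalOdometryAffineBack = trans2Affine3f(transformTobeMapped);  // skip the IMU roll/pitch fusion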
