

LiDAR Obstacle Detection and Tracking in Practice: Euclidean-Clustering-Based LiDAR Obstacle Detection

Published: 2023/12/20

Contents

  • Euclidean clustering
  • Point cloud segmentation
  • Bounding box
  • Results

LiDAR is a key sensor for environment perception in autonomous driving. It is widely adopted because it is stable, reliable, and accurate, and can serve both localization and environment perception at the same time.

The traditional LiDAR obstacle detection and tracking pipeline:

  • Extract the region of interest (ROI)
  • Filtering: usually a voxel filter (the space is divided into cubic cells of a fixed size and only one point is kept per cell), which lowers the point cloud density and reduces the cost of the later stages
  • Ground segmentation: common methods include RANSAC, Ground Plane Fitting, and linefit_ground_segmentation
  • Point cloud clustering: Euclidean clustering with different radius thresholds at different ranges yields point cloud clusters outlining the targets; because different radius thresholds are used, a single object may be split into several clusters, so the clusters need to be merged
  • Once clustering is done, the obstacle contours are available. Next, compute the bounding box: fit a convex polygon, compute a rotated rectangle from the polygon vertices to get the bounding box center, and then refine the box's length, width, height, center, and heading
  • Target tracking: track the detected obstacles, classically with a Kalman filter such as an EKF. Another family of trackers is CNN-KCF, where online kernel learning can be integrated with the detection network

Today, deep learning on point clouds is increasingly popular, and many end-to-end point cloud detection networks have emerged. These methods rely on dense point clouds and usually require a high-beam-count LiDAR. In low-speed, simple scenarios, a clustering approach with a low-beam-count LiDAR can still achieve good obstacle perception.

Euclidean Clustering

Euclidean clustering is a clustering algorithm based on the Euclidean distance metric. A KD-Tree-based nearest-neighbor search is used to accelerate it.

KD-Tree nearest-neighbor search

A KD-Tree is a data structure that partitions data points in k-dimensional space; it is a balanced binary tree.

A KD-Tree follows the divide-and-conquer idea: the whole space is split into smaller regions. KD-Tree algorithms fall into two parts: one concerns building the tree, the other performs nearest-neighbor searches on top of it.

Every node in a KD-Tree is a k-dimensional point, and each node represents a hyperplane perpendicular to the axis of the current splitting dimension, dividing the space along that dimension into two parts: one in the left subtree, the other in the right subtree. That is, if the current node's splitting dimension is d, every point in its left subtree has a d-coordinate less than the current value, and every point in its right subtree has a d-coordinate greater than or equal to it; the subtrees are handled recursively until every data point has been attached.
Below is a 2D example:

  • Building the KD-Tree: inserting points
    First fix a root point (-6.2, 7); level 0 splits on the x axis. The points to insert are (-6.3, 8.4) and (-5.2, 7.1): since -6.3 < -6.2, (-6.3, 8.4) becomes the left child, and since -5.2 > -6.2, (-5.2, 7.1) becomes the right child. For (-5.7, 6.3), its x value -5.7 > -6.2, so it goes under (-5.2, 7.1); level 1 splits on the y axis, and 6.3 < 7.1, so it becomes the left child of (-5.2, 7.1). Likewise, to insert a fifth point (7.2, 6.1), compare x and y alternately: (7.2 > -6.2) -> (6.1 < 7.1) -> (7.2 > -5.7), so the fifth point is inserted as the right child of (-5.7, 6.3).

Point Cloud Segmentation

To achieve better clustering quality, we use different clustering radius thresholds for regions at different distances.

  • The scanned point cloud is first split into five blocks by distance, and each block is clustered in its own thread to speed up clustering
    void EuclideanCluster::segmentByDistance(const pcl::PointCloud<pcl::PointXYZI>::Ptr in,
                                             pcl::PointCloud<pcl::PointXYZI>::Ptr &outCloudPtr,
                                             std::vector<pcl::PointCloud<pcl::PointXYZI>::Ptr> &points_vector)
    {
      // Cluster the point cloud with a different distance threshold per range
      // band (not a single one for the entire cloud), so that the sparser,
      // farther points get clustered as well.
      std::vector<pcl::PointIndices> clusterIndices;
      if (!use_multiple_thres_)
      {
        cluster_vector(in, clusterIndices);
        for (auto it = clusterIndices.begin(); it != clusterIndices.end(); ++it)
        {
          pcl::PointCloud<pcl::PointXYZI>::Ptr temp_cluster(new pcl::PointCloud<pcl::PointXYZI>);
          pcl::copyPointCloud(*in, it->indices, *temp_cluster);
          *outCloudPtr += *temp_cluster;
          points_vector.push_back(temp_cluster);
        }
      }
      else
      {
        // Five range bands, one segment cloud per band.
        std::vector<pcl::PointCloud<pcl::PointXYZI>::Ptr> cloud_segments_array(5);
        for (unsigned int i = 0; i < cloud_segments_array.size(); i++)
        {
          pcl::PointCloud<pcl::PointXYZI>::Ptr tmp_cloud(new pcl::PointCloud<pcl::PointXYZI>);
          cloud_segments_array[i] = tmp_cloud;
        }
        for (unsigned int i = 0; i < in->points.size(); i++)
        {
          pcl::PointXYZI current_point;
          current_point.x = in->points[i].x;
          current_point.y = in->points[i].y;
          current_point.z = in->points[i].z;
          current_point.intensity = in->points[i].intensity;
          float origin_distance = sqrt(pow(current_point.x, 2) + pow(current_point.y, 2));
          if (origin_distance < clustering_ranges_[0])
            cloud_segments_array[0]->points.push_back(current_point);
          else if (origin_distance < clustering_ranges_[1])
            cloud_segments_array[1]->points.push_back(current_point);
          else if (origin_distance < clustering_ranges_[2])
            cloud_segments_array[2]->points.push_back(current_point);
          else if (origin_distance < clustering_ranges_[3])
            cloud_segments_array[3]->points.push_back(current_point);
          else
            cloud_segments_array[4]->points.push_back(current_point);
        }
        std::vector<std::thread> thread_vec(cloud_segments_array.size());
        for (unsigned int i = 0; i < cloud_segments_array.size(); i++)
        {
          std::promise<std::vector<pcl::PointIndices>> promiseObj;
          std::shared_future<std::vector<pcl::PointIndices>> futureObj = promiseObj.get_future();
          thread_vec[i] = std::thread(&EuclideanCluster::clusterIndicesMultiThread, this,
                                      cloud_segments_array[i], std::ref(clustering_distances_[i]),
                                      std::ref(promiseObj));
          // Note: calling get() right after launching the thread blocks until
          // that thread finishes, which effectively serializes the segments.
          clusterIndices = futureObj.get();
          for (size_t j = 0; j < clusterIndices.size(); j++)
          {
            // The indices returned by each clustering run refer to the segment cloud.
            pcl::PointCloud<pcl::PointXYZI>::Ptr temp_cluster(new pcl::PointCloud<pcl::PointXYZI>);
            pcl::copyPointCloud(*cloud_segments_array[i], clusterIndices[j], *temp_cluster);
            *outCloudPtr += *temp_cluster;
            points_vector.push_back(temp_cluster);
          }
        }
        for (size_t i = 0; i < thread_vec.size(); i++)
          thread_vec[i].join();
      }
    }

Bounding box

  • Cluster and compute each obstacle's center and bounding box
    The five point clouds are clustered with Euclidean clustering using different radius thresholds. For each resulting cluster, its centroid is taken as the obstacle center, and the cluster's length, width, and height are computed to obtain a 3D bounding box that encloses it.

  • Fit a convex polygon and refine the bounding box center, length, and width

  • Compute the bounding box heading
    The minimum-area enclosing rectangle of the cluster contour is computed from the fitted convex polygon's vertices. The result is a Box2D structure containing the rectangle's center, width and height, and rotation angle (the angle between the horizontal x axis, rotated counterclockwise, and the first edge of the rectangle it meets).

    void BoundingBox::SetCloud(std_msgs::Header header, const pcl::PointCloud<pcl::PointXYZI>::Ptr in, bool in_estimate_pose)
    {
      // Extract the cluster and compute its min and max points.
      pcl::PointCloud<pcl::PointXYZRGB>::Ptr currentCluster(new pcl::PointCloud<pcl::PointXYZRGB>);
      float min_x = std::numeric_limits<float>::max();
      float max_x = -std::numeric_limits<float>::max();
      float min_y = std::numeric_limits<float>::max();
      float max_y = -std::numeric_limits<float>::max();
      float min_z = std::numeric_limits<float>::max();
      float max_z = -std::numeric_limits<float>::max();
      float average_x = 0, average_y = 0, average_z = 0;
      for (size_t i = 0; i < in->points.size(); i++)
      {
        // Fill the new colored cluster point by point.
        pcl::PointXYZRGB p;
        p.x = in->points[i].x;
        p.y = in->points[i].y;
        p.z = in->points[i].z;
        average_x += p.x;
        average_y += p.y;
        average_z += p.z;
        centroid_.x += p.x;
        centroid_.y += p.y;
        centroid_.z += p.z;
        currentCluster->points.push_back(p);
        if (p.x < min_x) min_x = p.x;
        if (p.y < min_y) min_y = p.y;
        if (p.z < min_z) min_z = p.z;
        if (p.x > max_x) max_x = p.x;
        if (p.y > max_y) max_y = p.y;
        if (p.z > max_z) max_z = p.z;
      }
      // Min and max points.
      minPoint_.x = min_x; minPoint_.y = min_y; minPoint_.z = min_z;
      maxPoint_.x = max_x; maxPoint_.y = max_y; maxPoint_.z = max_z;
      // Centroid and average.
      if (in->points.size() > 0)
      {
        centroid_.x /= in->points.size();
        centroid_.y /= in->points.size();
        centroid_.z /= in->points.size();
        average_x /= in->points.size();
        average_y /= in->points.size();
        average_z /= in->points.size();
      }
      averagePoint_.x = average_x;
      averagePoint_.y = average_y;
      averagePoint_.z = average_z;
      // Axis-aligned bounding box.
      float length_ = maxPoint_.x - minPoint_.x;
      float width_ = maxPoint_.y - minPoint_.y;
      float height_ = maxPoint_.z - minPoint_.z;
      boundingBox_.header = header;
      boundingBox_.pose.position.x = minPoint_.x + length_ / 2;
      boundingBox_.pose.position.y = minPoint_.y + width_ / 2;
      boundingBox_.pose.position.z = minPoint_.z + height_ / 2;
      boundingBox_.dimensions.x = ((length_ < 0) ? -1 * length_ : length_);
      boundingBox_.dimensions.y = ((width_ < 0) ? -1 * width_ : width_);
      boundingBox_.dimensions.z = ((height_ < 0) ? -1 * height_ : height_);
      // Pose estimation.
      double rz = 0;
      std::vector<cv::Point2f> points;
      for (unsigned int i = 0; i < currentCluster->points.size(); i++)
      {
        cv::Point2f pt;
        pt.x = currentCluster->points[i].x;
        pt.y = currentCluster->points[i].y;
        points.push_back(pt);
      }
      std::vector<cv::Point2f> hull;
      cv::convexHull(points, hull);
      polygon_.header = header;
      for (size_t i = 0; i < hull.size() + 1; i++)
      {
        // i % hull.size() closes the polygon by repeating the first vertex.
        geometry_msgs::Point32 point;
        point.x = hull[i % hull.size()].x;
        point.y = hull[i % hull.size()].y;
        point.z = minPoint_.z;
        polygon_.polygon.points.push_back(point);
      }
      if (in_estimate_pose)
      {
        cv::RotatedRect box = cv::minAreaRect(hull);
        rz = box.angle * M_PI / 180;  // degrees -> radians
        boundingBox_.pose.position.x = box.center.x;
        boundingBox_.pose.position.y = box.center.y;
        boundingBox_.dimensions.x = box.size.width;
        boundingBox_.dimensions.y = box.size.height;
      }
      // Set the bounding box direction (yaw about z).
      tf::Quaternion quat = tf::createQuaternionFromRPY(0.0, 0.0, rz);
      tf::quaternionTFToMsg(quat, boundingBox_.pose.orientation);
      currentCluster->width = currentCluster->points.size();
      currentCluster->height = 1;
      currentCluster->is_dense = true;
      // Eigenvalues and eigenvectors via PCA.
      if (currentCluster->points.size() > 3)
      {
        pcl::PCA<pcl::PointXYZ> currentClusterPca;
        pcl::PointCloud<pcl::PointXYZ>::Ptr current_cluster_mono(new pcl::PointCloud<pcl::PointXYZ>);
        pcl::copyPointCloud(*currentCluster, *current_cluster_mono);
        currentClusterPca.setInputCloud(current_cluster_mono);
        eigenVectors_ = currentClusterPca.getEigenVectors();
        eigenValues_ = currentClusterPca.getEigenValues();
      }
      validCluster_ = true;
      pointCloud_ = currentCluster;
    }

    4. Merge candidate bounding boxes that may belong to the same object

    void BoundingBox::getBoundingBox(std_msgs::Header header,
                                     std::vector<pcl::PointCloud<pcl::PointXYZI>::Ptr> &points_vector,
                                     autoware_msgs::CloudClusterArray &inOutClusters)
    {
      std::vector<BoundingBoxPtr> Clusters;
      for (size_t i = 0; i < points_vector.size(); i++)
      {
        BoundingBoxPtr cluster(new BoundingBox());
        cluster->SetCloud(header, points_vector[i], inEstimatePose_);
        Clusters.push_back(cluster);
      }
      // Check for mergeable clusters (two merge passes).
      std::vector<BoundingBoxPtr> midClusters;
      std::vector<BoundingBoxPtr> finalClusters;
      if (Clusters.size() > 0)
        checkAllForMerge(header, Clusters, midClusters, clusterMergeThreshold_, inEstimatePose_);
      else
        midClusters = Clusters;
      if (midClusters.size() > 0)
        checkAllForMerge(header, midClusters, finalClusters, clusterMergeThreshold_, inEstimatePose_);
      else
        finalClusters = midClusters;
      // Build the final cluster array to be published.
      // (Use finalClusters here; publishing the unmerged Clusters would
      // discard the result of the merge passes.)
      for (size_t i = 0; i < finalClusters.size(); i++)
      {
        if (finalClusters[i]->validCluster_)
        {
          autoware_msgs::CloudCluster cloudCluster;
          finalClusters[i]->ToROSMessage(header, cloudCluster);
          inOutClusters.clusters.push_back(cloudCluster);
        }
      }
      inOutClusters.header = header;
    }

Results

    Here I used ground segmentation based on concentric zones with ground likelihood estimation; its ground filtering worked better than the RANSAC, Ground Plane Fitting, and linefit_ground_segmentation methods listed above.
    1. Install the required ROS dependency packages

    sudo apt-get install ros-melodic-jsk-rviz-plugins
    sudo apt-get install ros-melodic-jsk-recognition-msgs
    sudo apt-get install ros-melodic-autoware-msgs
    sudo apt-get install ros-melodic-visualization-msgs

    2. Launch

    rosbag play -l kitti_2011_09_30_drive_0016_synced.bag /kitti/velo/pointcloud:=/velodyne_points
    roslaunch lidar_obstacle_detection lidar_obstacle_detection.launch

    The launch file starts two nodes; running it opens the RViz visualization directly.

    <launch>
      <node pkg="lidar_obstacle_detection" type="lidar_obstacle_detection_node" name="lidar_obstacle_detection_node" output="screen" />
      <rosparam file="$(find lidar_obstacle_detection)/config/lidar_obstacle_detection.yaml" command="load" />
      <!-- Start lidar_obstacle_detection.rviz -->
      <node pkg="rviz" type="rviz" name="rviz" output="screen" args="-d $(find lidar_obstacle_detection)/rviz/lidar_obstacle_detection.rviz" required="true" />
    </launch>

    With the KITTI dataset the result looks as follows; you can tune the parameters in the files under config for debugging.


    The visualized MarkerArray output contains five pieces of information:

    • label: shows the distance
    • hull: the fitted convex polygon, drawn as green lines
    • cube: the solid blue box
    • box: the box outlined in red
    • centroid: the bounding box center

    The figure above shows the bounding boxes without orientation. With orientation enabled the result is not great: the pose estimates for the regions along the roadside are clearly inaccurate. You can set the isEstimatePose field to true to see the effect with pose estimation.

    This is a rough demo I wrote in my spare time, and there is room for improvement; for example, fetching return values from multiple threads is slow. Feel free to modify it yourselves.

    Feel free to follow the author; your support is the biggest motivation to keep this blog going.

