<th id="5nh9l"></th><strike id="5nh9l"></strike><th id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"></th><strike id="5nh9l"></strike>
<progress id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"><noframes id="5nh9l">
<th id="5nh9l"></th> <strike id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"></span>
<progress id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"></span><strike id="5nh9l"><noframes id="5nh9l"><strike id="5nh9l"></strike>
<span id="5nh9l"><noframes id="5nh9l">
<span id="5nh9l"><noframes id="5nh9l">
<span id="5nh9l"></span><span id="5nh9l"><video id="5nh9l"></video></span>
<th id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"></th>
<progress id="5nh9l"><noframes id="5nh9l">

Graph-feature-enhanced point cloud sampling for object detection

  • Abstract: In point clouds acquired by LiDAR, foreground object points account for only a small fraction of all points, and traditional unsupervised sampling methods struggle to selectively retain enough of them; part of the object information is therefore lost, degrading the performance of point-cloud-based object-detection networks. This paper proposes a graph-feature-enhanced parallel point-cloud sampling method that uses foreground/background classification labels for supervision and markedly raises the proportion of foreground points among the sampled points. Compared with methods supervised directly on point features, the proposed graph-feature-based approach better captures the local geometric information of the point cloud and suits the shallow sampling stages of object-detection networks. Experiments on the KITTI and nuScenes autonomous-driving datasets show that the proportion of foreground points sampled by the method reaches 99%, and that it effectively extracts feature information from sparse regions of the point cloud, such as occluded and distant objects, thereby improving detection performance. With this method, the average precision for detecting cars, pedestrians, and cyclists at the hard difficulty level improves by 8.58%, 2.27%, and 3.12%, respectively. Moreover, the method is flexible by design and can easily be integrated into various 3D point-cloud tasks that rely on a point-cloud sampling step.
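As an illustration of the parallel, label-supervised sampling step summarized above, the following PyTorch sketch predicts a foreground score for every point and keeps the k highest-scoring points in a single top-k operation, in contrast to the point-by-point selection of FPS. This is a minimal sketch under assumed interfaces, not the authors' implementation; ScoreBasedSampler, the hidden width of 64, and the tensor shapes are illustrative choices.

    import torch
    import torch.nn as nn

    class ScoreBasedSampler(nn.Module):
        """Keep the k points with the highest predicted foreground scores."""

        def __init__(self, feat_dim: int):
            super().__init__()
            # Small per-point classifier head; its logits are trained against
            # foreground/background labels (the supervision signal above).
            self.score_head = nn.Sequential(
                nn.Linear(feat_dim, 64),
                nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, xyz: torch.Tensor, feats: torch.Tensor, k: int):
            # xyz: (B, N, 3) coordinates; feats: (B, N, C) per-point features.
            scores = self.score_head(feats).squeeze(-1)       # (B, N) logits
            idx = torch.topk(scores, k, dim=1).indices        # (B, k), one parallel op
            sampled_xyz = torch.gather(
                xyz, 1, idx.unsqueeze(-1).expand(-1, -1, 3))
            sampled_feats = torch.gather(
                feats, 1, idx.unsqueeze(-1).expand(-1, -1, feats.shape[-1]))
            return sampled_xyz, sampled_feats, scores         # logits reused for the loss

During training, the returned logits can be supervised with a binary cross-entropy loss, e.g. nn.BCEWithLogitsLoss()(scores, fg_labels.float()), where fg_labels (a hypothetical tensor here) marks points falling inside ground-truth boxes; this mirrors the foreground/background supervision the abstract describes.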

     

    Abstract: Point clouds acquired by light detection and ranging (LiDAR) sensors are large and non-uniform in density, with points typically denser near the sensor and sparser at greater distances. Efficient sampling of point clouds is therefore crucial for reducing computational complexity while preserving critical environmental information. However, classical sampling methods, such as farthest point sampling (FPS) and random sampling, fail to adequately address the imbalanced distribution of foreground and background points. Oversampling background points or under-covering foreground regions can result in the loss of essential target information, particularly for small or distant objects, ultimately degrading the performance of three-dimensional (3D) object-detection networks. Although FPS has been widely adopted in point-based detection frameworks, its sequential nature limits its efficiency and effectiveness in complex scenarios. Hence, we propose a novel graph feature augmentation sampling (GFAS) method, which leverages graph convolutional networks and supervised learning to enhance sampling efficiency and detection performance. The proposed method introduces a graph-feature-generation module that aggregates local and global features of point clouds using multilayer graph convolutions, enabling the extraction of rich geometric and spatial information. It also incorporates a parallel sampling mechanism that selects foreground points based on their feature scores, significantly improving sampling efficiency. By using foreground–background classification labels as supervision signals, GFAS ensures a higher proportion of foreground points in the sampling process, which directly benefits detection. Extensive experiments on two large-scale autonomous-driving datasets, KITTI and nuScenes, validate the effectiveness of GFAS. On the KITTI dataset, GFAS achieves significant improvements in average precision for car detection, with gains of 6.2%, 6.89%, and 8.58% under the easy, moderate, and hard levels, respectively. Similar improvements are observed for pedestrian and cyclist detection, demonstrating the robustness of the proposed method across object categories. On the nuScenes dataset, the proposed method improves car- and pedestrian-detection precision by 4.2% and 8.3%, respectively, over the baseline model. These results highlight the strong generalizability of GFAS in large-scale, complex driving scenarios. Ablation studies reveal that GFAS substantially increases the proportion of foreground points in the sampling process, with the ratio approaching 99% in the final layers. Visualization results show that GFAS concentrates sampling points on foreground objects, avoiding the uniform-distribution issue of classical FPS. Additional experiments on other 3D object-detection models, such as the 3D single-stage object detector (3DSSD) and PointVoxel-RCNN (PV-RCNN), further validate the flexibility and scalability of the proposed method. In conclusion, this paper proposes an efficient, parallel point-cloud-sampling method. By integrating graph-feature extraction and supervised learning, GFAS improves both sampling efficiency and detection performance, particularly in challenging scenarios. The proposed method can be easily integrated into existing point-cloud-based detection frameworks, and its ability to retain a high proportion of foreground points while maintaining computational efficiency underlines its practicality.
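To make the graph-feature-generation idea concrete, the sketch below shows one common way such a module can be built: an EdgeConv-style layer that forms a k-nearest-neighbor graph, encodes each edge as [x_i, x_j − x_i], max-pools over neighbors to capture local geometry, and concatenates a globally max-pooled vector for scene-level context. It is a minimal PyTorch sketch of the general technique, not the paper's actual architecture; GraphFeatureModule, the choice k = 16, and the O(N²) pairwise-distance computation are illustrative assumptions.

    import torch
    import torch.nn as nn

    class GraphFeatureModule(nn.Module):
        """One EdgeConv-style layer: local neighbor aggregation + global context."""

        def __init__(self, in_dim: int, out_dim: int, k: int = 16):
            super().__init__()
            self.k = k
            # Edge MLP over [x_i, x_j - x_i], the standard EdgeConv pairing.
            self.edge_mlp = nn.Sequential(
                nn.Linear(2 * in_dim, out_dim),
                nn.ReLU(),
            )

        def forward(self, feats: torch.Tensor) -> torch.Tensor:
            # feats: (B, N, C) per-point features (coordinates for the first layer).
            B, N, _ = feats.shape
            # Pairwise distances define the k-NN graph (fine for a sketch,
            # O(N^2) memory; real systems use spatial indexing instead).
            dists = torch.cdist(feats, feats)                               # (B, N, N)
            knn = dists.topk(self.k + 1, largest=False).indices[..., 1:]    # drop self: (B, N, k)
            # Gather neighbor features via advanced indexing.
            batch = torch.arange(B, device=feats.device).view(B, 1, 1)
            neigh = feats[batch, knn]                                       # (B, N, k, C)
            center = feats.unsqueeze(2).expand(-1, -1, self.k, -1)          # (B, N, k, C)
            edges = torch.cat([center, neigh - center], dim=-1)             # (B, N, k, 2C)
            local = self.edge_mlp(edges).max(dim=2).values                  # (B, N, out_dim)
            # Scene-level context: max-pool over all points, broadcast back.
            glob = local.max(dim=1, keepdim=True).values.expand(-1, N, -1)
            return torch.cat([local, glob], dim=-1)                         # (B, N, 2 * out_dim)

Stacking several such layers would give the multilayer graph convolutions mentioned in the abstract, and the resulting per-point features are what a score-based sampler, such as the earlier sketch, would consume.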

     
