
Helmet detection method based on improved YOLOv5

  • Abstract: To solve the problem that existing safety helmet detection algorithms suffer low accuracy on small targets, dense targets, and complex environments in construction scenes such as building sites, tunnels, and coal mines, we design and implement an improved object detection algorithm based on YOLOv5, denoted YOLOv5-GBCW. First, Ghost convolution is used to rebuild the backbone network, markedly reducing model complexity. Second, a bidirectional feature pyramid network (BiFPN) strengthens feature fusion, improving accuracy on small targets (minimal sketches of the Ghost convolution block and the BiFPN fusion step follow this abstract). A coordinate attention module is introduced to allocate attention resources to key regions, suppressing background interference in complex environments. Finally, Beta-WIoU is proposed as the bounding-box loss function: it adopts a dynamic non-monotonic focusing mechanism and incorporates anchor-box features into the loss computation, improving the accuracy of predicted boxes while accelerating convergence. To verify feasibility, we compare the algorithm against several classic detectors on a safety helmet dataset collected by our research group and run ablation experiments to isolate the contribution of each improvement module. The results show that, compared with YOLOv5s, YOLOv5-GBCW improves average precision (IoU = 0.5) by 5.8 percentage points to 94.5% and runs at 124.6 FPS (frames per second) with a lighter model. Detection improves markedly in complex, dense, and small-target scenes while meeting both the accuracy and real-time requirements of helmet detection, providing a new method for helmet detection in complex construction environments.
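The Ghost convolution used to slim the backbone follows the GhostNet idea: generate half of the output channels with an ordinary convolution and derive the rest from them with a cheap depthwise convolution. A minimal PyTorch sketch under that assumption (the kernel sizes, SiLU activation, and even split are illustrative choices, not the paper's configuration):

import torch
import torch.nn as nn

class GhostConv(nn.Module):
    # Ghost convolution: half of the output channels come from a standard
    # convolution; the other half are "ghost" maps produced from those
    # intrinsic maps by an inexpensive depthwise convolution, roughly
    # halving the FLOPs of an equivalent full convolution.
    def __init__(self, c_in, c_out, k=1, s=1):  # c_out must be even
        super().__init__()
        c_hidden = c_out // 2
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_hidden, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_hidden),
            nn.SiLU(),
        )
        self.cheap = nn.Sequential(  # depthwise 5x5: the "cheap" operation
            nn.Conv2d(c_hidden, c_hidden, 5, 1, 2, groups=c_hidden, bias=False),
            nn.BatchNorm2d(c_hidden),
            nn.SiLU(),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

# e.g. GhostConv(64, 128, k=3, s=2)(torch.randn(1, 64, 80, 80)) -> (1, 128, 40, 40)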

     
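What distinguishes BiFPN from a plain FPN/PANet neck is its fast normalized fusion: every incoming feature map gets a learnable non-negative weight, so different scales contribute unequally to the fused map. A minimal sketch of that fusion step (the module name and epsilon are assumptions; inputs must already share one resolution and channel count):

import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    # Fast normalized fusion from BiFPN: learnable per-input weights,
    # kept non-negative with ReLU and normalized to sum to ~1.
    def __init__(self, n_inputs, eps=1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, xs):  # xs: list of tensors with identical shapes
        w = torch.relu(self.w)
        w = w / (w.sum() + self.eps)
        return sum(wi * xi for wi, xi in zip(w, xs))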

    Abstract: To address the low detection accuracy of existing safety helmet detection algorithms in scenarios with small targets, dense targets, and complex surroundings such as construction sites, tunnels, and coal mines, we introduce an enhanced object detection approach, denoted YOLOv5-GBCW. Our method includes several key innovations. First, we apply Ghost convolution to rebuild the backbone network, considerably reducing model complexity: computational cost falls by 48.73% and model size by 45.84%, at the cost of only 1.6 percentage points of accuracy. Second, we employ a bidirectional feature pyramid network (BiFPN) to enhance feature fusion, assigning distinct weights to objects of varying scales, which strengthens small-target detection; a skip-layer connection strategy adds cross-scale weight suppression and richer feature expression. Third, we introduce a coordinate attention module that allocates attention resources to key regions, minimizing background interference in complex environments (a sketch of this module follows this abstract). Finally, we propose the Beta-WIoU bounding-box loss function, which employs a dynamic non-monotonic focusing mechanism to reduce the influence of easy examples on the loss, letting the model prioritize hard examples such as occlusions and improving generalization; it also incorporates anchor-box feature calculations to improve prediction accuracy and accelerate convergence. To validate the algorithm, we use a dataset of 7000 images collected by our research group, covering safety helmets on construction sites, in tunnels, in mines, and in other scenarios, and we preprocess training images with an adaptive gamma transformation to facilitate subsequent detection (an illustrative preprocessing sketch also follows). We compare against classic algorithms, including Faster R-CNN, SSD, YOLOv3, YOLOv4, and YOLOv5s, as well as algorithms from the related literature, and ablation experiments systematically quantify the contribution of each improvement module. Compared with YOLOv5s, the improved YOLOv5-GBCW raises average precision (IoU = 0.5) by 5.8 percentage points, reaching 94.5%, while sustaining a detection speed of 124.6 FPS (frames per second). The resulting model is lighter and converges faster, with considerably better detection in complex, dense, and small-target environments, and it meets the stringent accuracy and real-time requirements of helmet detection. This work offers a new approach for detecting safety helmets in complex construction settings.
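The coordinate attention module factorizes global pooling into two one-dimensional pools, one along height and one along width, so the resulting attention map keeps positional information instead of collapsing it the way squeeze-and-excitation does. A PyTorch sketch following the published coordinate-attention design (the reduction ratio and Hardswish activation are assumptions, not necessarily the paper's settings):

import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    # Coordinate attention: pool along W and along H separately, encode the
    # two direction-aware descriptors jointly, then re-weight the input with
    # per-row and per-column attention maps.
    def __init__(self, c, reduction=32):
        super().__init__()
        c_mid = max(8, c // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool over width  -> (n, c, h, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool over height -> (n, c, 1, w)
        self.conv1 = nn.Conv2d(c, c_mid, 1)
        self.bn = nn.BatchNorm2d(c_mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(c_mid, c, 1)
        self.conv_w = nn.Conv2d(c_mid, c, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        xh = self.pool_h(x)                      # (n, c, h, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)  # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                      # (n, c, h, 1)
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # (n, c, 1, w)
        return x * ah * aw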

     
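The English abstract mentions adaptive gamma transformation as preprocessing but does not state the adaptation rule. Purely as an illustration, a common heuristic derives gamma from the image's mean brightness so that dark tunnel or mine footage is lifted toward mid-gray; adaptive_gamma below is a hypothetical helper written under that assumption, not the paper's method:

import numpy as np
import cv2

def adaptive_gamma(img_bgr):
    # Choose gamma so the mean V-channel brightness maps toward 0.5
    # (assumed heuristic: solve mean_v ** gamma == 0.5 for gamma).
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    mean_v = float(np.clip(hsv[..., 2].mean() / 255.0, 1e-3, 0.999))
    gamma = np.log(0.5) / np.log(mean_v)
    lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    return cv2.LUT(img_bgr, lut)  # same LUT applied to all three channels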
