
Depth-YOLO-based defect detection algorithm for semiconductor bonding leads

  • Abstract: Wire bonding is a key step in integrated circuit packaging: it interconnects components and chips to ensure that the circuit operates correctly, and its quality inspection bears directly on product yield. To address the low detection accuracy and efficiency of existing bond wire defect detection methods, this paper proposes a new defect detection model, Depth-YOLO. First, the model rebuilds the input stage of YOLOv8 so that it can process the depth information of the input image. Second, an input feature enhancement module is proposed to strengthen the model's extraction of wire depth information and texture features. Next, the C2f module in the original YOLOv8 backbone is replaced with a C2f_Faster module, reducing the parameter count and computational redundancy. Then, a fused attention mechanism (MDFA) is proposed to improve feature extraction for dense, complex, and irregular defects and to raise detection accuracy. Finally, WIoU replaces YOLOv8's original CIoU loss function, improving the accuracy of bounding-box judgments and accelerating convergence. Because no public bond wire dataset exists in this research area, a bond wire depth-image dataset, DepthBondingWire, was built in-house. Experiments on this dataset show that Depth-YOLO improves mAP@0.5 by 7.2 percentage points over the original YOLOv8, reaching 98.6%, a higher detection accuracy than other mainstream object detection models. The proposed method enables high-precision automated inspection of semiconductor bond wires and can extend to defect detection in other key integrated circuit processes.
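The abstract's final modification replaces the CIoU loss with WIoU to stabilize bounding-box regression. A minimal NumPy sketch of the WIoU v1 idea, an IoU loss scaled by a center-distance focusing factor over the smallest enclosing box; the function name and box format are illustrative, not the paper's code:

```python
import numpy as np

def wiou_v1(pred, target):
    """Illustrative WIoU (v1) bounding-box loss. Boxes are (x1, y1, x2, y2).

    The IoU loss is scaled by a distance-based factor computed against the
    smallest enclosing box; in training frameworks the enclosing-box term
    is detached (treated as a constant) during back-propagation.
    """
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target

    # Intersection over union of the two boxes.
    iw = max(0.0, min(px2, tx2) - max(px1, tx1))
    ih = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
    iou = inter / union

    # Smallest enclosing box and squared center distance.
    wg = max(px2, tx2) - min(px1, tx1)
    hg = max(py2, ty2) - min(py1, ty1)
    d2 = ((px1 + px2) / 2 - (tx1 + tx2) / 2) ** 2 \
       + ((py1 + py2) / 2 - (ty1 + ty2) / 2) ** 2

    # Focusing factor: far-off predictions get a larger loss weight.
    r = np.exp(d2 / (wg ** 2 + hg ** 2))
    return r * (1.0 - iou)
```

Perfectly overlapping boxes give a loss of 0, and the factor `r` grows with center distance, which is how WIoU damps the gradient dominance of low-quality samples relative to CIoU's aspect-ratio penalty.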

     

    Abstract: Wire bonding, a critical step in integrated circuit packaging, interconnects components and chips to ensure correct circuit operation, and its quality inspection directly affects product yield. To address the low detection accuracy and efficiency of existing bond wire defect detection methods, particularly for dense, microscale, and geometrically irregular defects, this study proposes a novel defect detection model, Depth-YOLO. The proposed framework integrates multi-modal depth features and hierarchical attention mechanisms to overcome the limitations of conventional RGB-based approaches in complex industrial environments. First, the model reconstructs the input stage of the YOLOv8 architecture to process 4-channel pseudo-RGBD data, combining single-channel depth maps with three-channel normal maps derived from gradient-based geometric mapping. This enables the model to capture the texture and 3D (three-dimensional) spatial features that are critical for detecting defects such as wire curvature anomalies and bridge faults. Second, an input feature enhancement module (Enhance) is designed to hierarchically extract depth and geometric information. The Enhance module employs multi-scale convolution (3×3, 5×5, and 7×7 kernels) for depth feature amplification, Sobel operators for surface gradient extraction, and dual-attention fusion (channel-spatial attention) to weight critical regions, improving depth-aware feature representation by 2.8% compared with the baseline. To optimize computational efficiency, the original C2f module in YOLOv8's backbone is replaced with a lightweight C2f_Faster module. This modification introduces partial convolution (Partial_conv3), which processes only 25% of the input channels, coupled with DropPath regularization to mitigate overfitting. Experimental results show a 10% reduction in GFLOPs while maintaining 89.8% of the baseline accuracy.
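The 4-channel pseudo-RGBD construction described above, a normalized depth channel stacked with a gradient-derived normal map, can be sketched as follows. The finite-difference normals and the function name are illustrative assumptions, not the paper's exact geometric mapping:

```python
import numpy as np

def depth_to_pseudo_rgbd(depth: np.ndarray) -> np.ndarray:
    """Build a 4-channel pseudo-RGBD array (H x W x 4) from a depth map.

    A surface-normal map is derived from depth gradients (simple finite
    differences here) and stacked with the normalized depth channel, so a
    detector's input stage can consume geometry alongside texture.
    """
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # normalize depth to [0, 1]

    # np.gradient on a 2-D array returns (d/dy, d/dx) in axis order.
    dzdy, dzdx = np.gradient(d)

    # Unnormalized surface normals (-dz/dx, -dz/dy, 1), then unit-normalize.
    normals = np.stack([-dzdx, -dzdy, np.ones_like(d)], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

    # Map normal components from [-1, 1] to [0, 1] so they read as an image.
    normal_map = (normals + 1.0) / 2.0

    # Channels: 3 normal-map channels + 1 depth channel.
    return np.concatenate([normal_map, d[..., None]], axis=-1)
```

Flat regions map to the "straight up" normal while bends and bridges produce strong gradients, which is what makes curvature anomalies visible to the detector.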
Furthermore, a multidimensional feature attention (MDFA) mechanism is proposed to address diverse defect morphologies. By synergistically integrating channel-aware feature mixing (CAFM) for global dependency modeling, multi-level context attention (MLCA) for dynamic receptive-field adjustment, and cross-phase context aggregation (CPCA) with asymmetric convolutions (e.g., 1×7 and 7×1 kernels), MDFA achieves a 4% recall improvement on irregular defects compared with single-attention baselines. The original CIoU loss function is replaced with Wise-IoU (WIoU) to enhance bounding-box regression stability; WIoU dynamically weights training samples based on annotation quality and reduces the gradient dominance of low-quality examples. On the self-built DepthBondingWire dataset, Depth-YOLO reaches 98.6% mAP@0.5, a 7.2-percentage-point improvement over the original YOLOv8, with accuracy comparable to heavier detectors such as RT-DETR-L (98.9% mAP@0.5 with 1.1× higher FLOPs). Ablation studies confirm the necessity of multimodal fusion: using RGB-only inputs degrades mAP@0.5 by 14.7%, whereas disabling MDFA reduces recall on irregular defects by 18.4%. Practical deployment tests on an NVIDIA Jetson AGX Xavier show real-time inference at 18 FPS with 1.2 GB of memory usage, meeting industrial throughput requirements. This methodology not only enables high-precision automated inspection of semiconductor bond wires but also provides a scalable framework for defect detection in other integrated circuit manufacturing stages, such as solder joint inspection and wafer surface analysis.
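The partial convolution behind C2f_Faster, convolving only a quarter of the channels and passing the rest through untouched, can be sketched as follows. This is a naive NumPy loop for clarity; the function and argument names are illustrative, not FasterNet's API:

```python
import numpy as np

def partial_conv3(x: np.ndarray, weight: np.ndarray,
                  split_ratio: float = 0.25) -> np.ndarray:
    """Illustrative partial convolution over a (C, H, W) feature map.

    Only the first `split_ratio` fraction of channels goes through a
    same-padded 3x3 convolution; the remaining channels are forwarded
    unchanged, which is what cuts FLOPs versus a full convolution.
    `weight` has shape (cp, cp, 3, 3) where cp = int(C * split_ratio).
    """
    c, h, w = x.shape
    cp = int(c * split_ratio)            # channels actually convolved
    head, tail = x[:cp], x[cp:]

    # Naive same-padded 3x3 convolution over the selected channels.
    padded = np.pad(head, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(head)
    for o in range(cp):                  # output channel
        for i in range(cp):              # input channel
            for dy in range(3):
                for dx in range(3):
                    out[o] += weight[o, i, dy, dx] * padded[i, dy:dy + h, dx:dx + w]

    # Untouched channels are concatenated back, preserving the shape.
    return np.concatenate([out, tail], axis=0)
```

With `split_ratio=0.25`, only 1/4 of the channels incur convolution cost, consistent with the abstract's description of Partial_conv3 processing 25% of the input channels.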

     
