<th id="5nh9l"></th><strike id="5nh9l"></strike><th id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"></th><strike id="5nh9l"></strike>
<progress id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"><noframes id="5nh9l">
<th id="5nh9l"></th> <strike id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"></span>
<progress id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"></span><strike id="5nh9l"><noframes id="5nh9l"><strike id="5nh9l"></strike>
<span id="5nh9l"><noframes id="5nh9l">
<span id="5nh9l"><noframes id="5nh9l">
<span id="5nh9l"></span><span id="5nh9l"><video id="5nh9l"></video></span>
<th id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"></th>
<progress id="5nh9l"><noframes id="5nh9l">

Novel image registration algorithm for scene-matching navigation

Abstract: High-precision positioning and navigation technology is crucial for the autonomous operation of unmanned aerial vehicles (UAVs), enabling them to determine their location and navigate to predetermined destinations without human intervention. In scenarios where satellite navigation is unavailable, scene matching-based visual navigation becomes essential owing to its simple device structure and high passive-positioning accuracy. When combined with an inertial system, this technology yields a highly autonomous and precise navigation system. Compared with traditional visual simultaneous localization and mapping, which requires extensive computation for continuous point cloud mapping, scene matching achieves real-time performance without such demands. At the core of a scene-matching system is the registration of real-time captured images with preloaded reference images, a task complicated by the high-speed flight of UAVs and the diverse sources of reference imagery. This necessitates a registration process that is fast and robust while maintaining high precision. To address these challenges, we developed a novel descriptor, the dimensionality reduction second-order oriented gradient histogram (DSOG), characterized by high precision and robustness. It extracts image features by describing the pixel-wise characteristics of oriented gradients and adopts a regional feature extraction strategy. Regional features are advantageous over point and line features, particularly for handling nonlinear intensity differences among heterogeneous images, enabling precise matching of image data collected by different sensors and satisfying the high-precision, all-weather navigation needs of aerial vehicles. Building on this descriptor, we designed an optimized similarity measurement matching template that improves the traditional fast similarity measurement algorithm based on the fast Fourier transform (FFT) in the frequency domain, thereby reducing the redundant computation inherent in the matching process. Our framework was rigorously evaluated on diverse multimodal image pairs, including optical-optical, optical-SAR, and optical-hyperspectral datasets, and compared with current state-of-the-art registration methods, including traditional feature-based approaches such as the histogram of oriented phase congruency (HOPC) and the radiation-variation insensitive feature transform (RIFT), as well as deep learning-based techniques such as LoFTR and SuperPoint. The results demonstrate that our method considerably improves computational efficiency while maintaining matching precision. Moreover, unlike deep learning algorithms, which require extensive training data to generalize, our algorithm achieves the necessary generalization without such training. In particular, it achieves an average matching time of only 1.015 s on multimodal images, meeting the real-time performance and robustness requirements of UAV scene-matching navigation. This study offers an innovative solution for enhancing the precision and reliability of UAV navigation systems and has broad application potential in the military, civil, and commercial sectors.
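
For readers who want the gist in code, the sketch below illustrates the two ingredients the abstract combines: dense oriented-gradient channel features (in the general spirit of HOG/CFOG-style descriptors; DSOG's dimensionality reduction and second-order histogram steps are omitted) and a fast FFT-based similarity search in the frequency domain. This is a minimal sketch, not the authors' implementation, and all function names are illustrative assumptions.

# Minimal sketch (NOT the authors' DSOG implementation): dense
# oriented-gradient channels plus FFT-based similarity search.
# All names here are illustrative assumptions.
import numpy as np
from numpy.fft import fft2, ifft2

def oriented_gradient_channels(img, n_bins=8):
    """Split per-pixel gradient magnitude into n_bins orientation
    channels, yielding an (n_bins, H, W) dense feature map."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
    bin_width = np.pi / n_bins
    channels = np.zeros((n_bins,) + img.shape)
    for b in range(n_bins):
        center = (b + 0.5) * bin_width
        diff = np.abs(ang - center)
        diff = np.minimum(diff, np.pi - diff)        # wrap around at pi
        weight = np.clip(1.0 - diff / bin_width, 0.0, None)  # linear soft vote
        channels[b] = mag * weight
    return channels

def fft_similarity_search(ref, tpl, n_bins=8):
    """Locate the template patch inside the reference image by summing
    per-channel cross-correlations, each computed once in the frequency
    domain via the correlation theorem:
        corr = IFFT(FFT(ref_channel) * conj(FFT(padded_tpl_channel))).
    This replaces an exhaustive sliding-window search with
    O(HW log HW) work per channel."""
    ref_feat = oriented_gradient_channels(ref, n_bins)
    tpl_feat = oriented_gradient_channels(tpl, n_bins)
    H, W = ref.shape
    th, tw = tpl.shape
    score = np.zeros((H, W))
    for rc, tc in zip(ref_feat, tpl_feat):
        padded = np.zeros((H, W))
        padded[:th, :tw] = tc                        # zero-pad template
        score += np.real(ifft2(fft2(rc) * np.conj(fft2(padded))))
    dy, dx = np.unravel_index(np.argmax(score), score.shape)
    return int(dy), int(dx)                          # top-left offset of match

# Usage: match a 64x64 real-time patch against a 512x512 reference map.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.random((512, 512))
    patch = reference[100:164, 200:264]              # ground-truth offset (100, 200)
    print(fft_similarity_search(reference, patch))   # expect (100, 200)

The frequency-domain formulation is what enables the sub-second matching times reported in the abstract: every candidate shift of the template is scored by a single pair of FFTs per channel rather than by an explicit sliding-window comparison.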

     
