

Image and event fusion method based on wavelet and dynamic complementary filtering


     

    Abstract: This study investigates the fusion of data from event cameras and traditional frame cameras, introducing a novel fusion approach designed to enhance image quality under complex lighting conditions. Event cameras are an innovative class of vision sensors known for their high temporal resolution and minimal power consumption; however, their output is often plagued by noise and feature loss. Conversely, traditional frame cameras boast commendable spatial resolution but struggle to capture fast-moving scenes or scenes with a vast dynamic range. To address these challenges, the study proposes an innovative method that combines the discrete wavelet transform with dynamic gain complementary filtering to fuse image and event data. The process begins by evaluating the exposure level of incoming image frames using image entropy as the metric. Following this assessment, the discrete wavelet transform segregates the high- and low-frequency components of the event stream and frame image data. A dynamic gain complementary filter is then applied to seamlessly integrate the image and event data. The proposed method capitalizes on its ability to balance the contribution of each data source adaptively, thereby ensuring optimal reconstruction quality under varying conditions. By leveraging the high-frequency temporal information from event cameras and the high-resolution spatial information from frame cameras, the proposed method attempts to overcome the limitations inherent in each type of sensor. This fusion not only mitigates the noise and feature loss in event camera data but also improves the capture of high-speed movements and scenes with significant brightness variations. The efficacy of this fusion approach was rigorously tested on the HDR Hybrid Event-Frame Dataset, which includes high dynamic range and complex lighting environments in real-world scenarios.
The experimental results underscored a notable improvement in image quality, outperforming traditional image reconstruction methods. Our proposed approach has demonstrated superior performance, as evidenced by its scores on several key metrics: a mean squared error of 0.0199, a structural similarity index measure of 0.90, and a Q-score of 6.07. These results not only validate the effectiveness of the proposed fusion method in enhancing imaging quality under challenging conditions but also highlight the potential of integrating disparate types of visual data to achieve superior reconstruction outcomes.
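The pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes a Haar wavelet for the single-level DWT, and the entropy-driven gain schedule (low frame entropy implies poor exposure, so the event data's high-frequency bands receive more weight) is a hypothetical stand-in for the paper's dynamic gain design. Inputs are assumed to be grayscale arrays with even dimensions, with the event stream already accumulated into an image-like reconstruction.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits) of an 8-bit grayscale image; used as an exposure proxy."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def haar_dwt2(img):
    """One-level 2-D Haar DWT. Returns (LL, (LH, HL, HH)); img needs even dims."""
    a = img[0::2, 0::2].astype(float)  # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    LL = (a + b + c + d) / 4           # low-frequency approximation
    LH = (a + b - c - d) / 4           # horizontal detail
    HL = (a - b + c - d) / 4           # vertical detail
    HH = (a - b - c + d) / 4           # diagonal detail
    return LL, (LH, HL, HH)

def haar_idwt2(LL, bands):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    LH, HL, HH = bands
    h, w = LL.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def fuse(frame, event_recon, g=None):
    """Complementary-filter fusion in the wavelet domain.

    g is the gain on the event high-frequency bands. When g is None it is
    derived from frame entropy (an assumed schedule: entropy near the 8-bit
    maximum of 8 bits => well exposed => trust the frame more).
    """
    if g is None:
        g = 1.0 - min(image_entropy(frame), 8.0) / 8.0
    LL_f, (LH_f, HL_f, HH_f) = haar_dwt2(frame)
    _,    (LH_e, HL_e, HH_e) = haar_dwt2(event_recon)
    # Keep the frame's low-frequency spatial content; blend the
    # high-frequency detail bands between the two sensors.
    return haar_idwt2(
        LL_f,
        (g * LH_e + (1 - g) * LH_f,
         g * HL_e + (1 - g) * HL_f,
         g * HH_e + (1 - g) * HH_f))
```

With g = 0 the output reduces to the frame alone, and with g = 1 all detail bands come from the event data while the frame still supplies the low-frequency base, which is the complementary-filter behavior the abstract describes.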

     

