
DS-TransFusion: Automatic retinal vessel segmentation based on an improved Swin Transformer

Abstract: Accurate retinal vessel segmentation is of great value in medical research, playing an indispensable role in the auxiliary screening of diseases such as diabetes, hypertension, and glaucoma. However, most current retinal vessel segmentation methods rely mainly on convolutional neural networks, which have difficulty modeling long-range dependencies and global context. As a result, they often segment small vessels poorly and struggle with the low contrast between the ends of fundus vessel branches and the background. To tackle these challenges, this paper proposes a new retinal vessel segmentation model, Dual Swin Transformer Fusion (DS-TransFusion). The model uses a dual-scale encoder subnetwork based on the Swin Transformer, which can find correspondences and align features from heterogeneous inputs. Given an input retinal image, the model first splits it into non-overlapping patches of two different sizes, which are fed into the two encoder branches to extract coarse-grained and fine-grained features of the retinal vessels. At the skip connections, DS-TransFusion introduces a Transformer interactive fusion attention (TIFA) module. Its core is a multiscale attention (MA) mechanism that enables efficient interaction between multiscale features: it integrates features from the two branches at different scales, achieves effective feature fusion, enriches cross-view context modeling and semantic dependency, and captures long-range correlations between data from different image views, thereby improving segmentation performance. In addition, to integrate multiscale representations in the hierarchical backbone, DS-TransFusion inserts an MA module between the encoder and decoder. This module learns feature dependencies across scales, collects global correspondences among multiscale feature representations, and further refines the model's segmentation. DS-TransFusion performs strongly on the public datasets STARE, CHASEDB1, and DRIVE, reaching accuracies of 96.50%, 97.22%, and 97.80% and sensitivities of 84.10%, 84.55%, and 83.17%, respectively. Experiments show that DS-TransFusion effectively improves the accuracy, sensitivity, and specificity of retinal vessel segmentation, accurately segments small vessels, and achieves better segmentation performance than existing state-of-the-art methods.
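The fusion described above, in which one branch's tokens attend to the other branch's tokens so that coarse- and fine-grained features interact, can be illustrated with a minimal scaled dot-product cross-attention sketch. This is not the authors' TIFA/MA implementation; the `cross_attention` helper, the token counts, and the feature dimension are all hypothetical, chosen only to show the shape of the computation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    """Let fine-branch tokens (queries) attend to coarse-branch
    tokens (keys/values). q_feats: (Nq, d), kv_feats: (Nk, d)."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)  # (Nq, Nk) similarity
    attn = softmax(scores, axis=-1)             # each row sums to 1
    return attn @ kv_feats                      # (Nq, d) fused features

rng = np.random.default_rng(0)
fine = rng.standard_normal((16, 32))    # fine-grained tokens (small patches)
coarse = rng.standard_normal((4, 32))   # coarse-grained tokens (large patches)
fused = cross_attention(fine, coarse)
print(fused.shape)  # (16, 32)
```

In the actual model this interaction is applied at multiple scales and in both directions, with learned query/key/value projections; the sketch keeps only the attention core to show how information flows between the two branches.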

     

