
Research on synthetic-CT generation from CBCT using Reg-GAN

  • Abstract: In tumor radiotherapy, image guidance based on cone-beam computed tomography (CBCT) can effectively correct patient setup errors and monitor changes in lesion volume, but the inherent scatter noise and reconstruction artifacts of CBCT images limit their clinical application; fast calibration of CBCT CT numbers is therefore important for improving treatment efficiency. This study proposes an improved generative adversarial network incorporating a registration mechanism (registration generative adversarial network, Reg-GAN), which efficiently maps unpaired medical image data to correct the CT numbers of CBCT images into synthetic CT (sCT). Planning CT (pCT) and CBCT images (acquisition interval < 24 h) from 46 patients with head and neck tumors were studied; 38 patients were used for model training and 8 for validation and testing. In preprocessing, the pCT was registered to the CBCT image, and the registered pCT served as the reference image for evaluating sCT image quality. Results showed that the CT-number difference between CBCT and pCT ranged from 0 to 250 HU (Hounsfield units), while the difference between sCT and pCT ranged from −50 to 50 HU; for soft tissue and brain tissue, the difference was 0 HU. Compared with the original CBCT, the mean absolute error (MAE) of the sCT images decreased from (52.5±26.6) HU to (36.6±11.6) HU (P=0.041<0.05), the peak signal-to-noise ratio (PSNR) increased from (25.1±3.1) dB to (27.1±2.4) dB (P=0.006<0.05), and the structural similarity index (SSIM) improved from 0.82±0.03 to 0.84±0.02 (P=0.022<0.05). Between sCT and pCT, the P values of the key dosimetric parameters were all greater than 0.05, indicating no statistically significant differences. The sCT images generated by the registration adversarial network model show significantly improved quality and high dosimetric consistency with pCT, providing reliable technical support for the clinical implementation of online adaptive radiotherapy.

     

    Abstract: In radiotherapy, although image guidance based on cone-beam computed tomography (CBCT) can effectively correct patient setup errors and monitor lesion volume changes, its inherent scatter noise and reconstruction artifacts distort image grayscale values, which limits its clinical application. To achieve fast calibration of CBCT HU (Hounsfield unit) values for intra-fraction adaptive radiotherapy, we propose a registration-enhanced generative adversarial network (Reg-GAN) built on a deformable-registration mechanism, which efficiently maps unpaired medical image data to achieve fast grayscale calibration of CBCT images into synthetic CT (sCT). The study included paired planning CT (pCT) and CBCT image data from 46 patients with head and neck tumors (acquisition interval < 24 h). Stratified random sampling was used to divide the dataset into training (38 cases) and validation (eight cases) groups. In the preprocessing stage, a rigid registration algorithm spatially aligned the pCT with the CBCT coordinate system, and voxel resampling standardized the spatial resolution. The Reg-GAN architecture is based on a cycle-consistent adversarial network (Cycle-GAN) and innovatively integrates a deep-learning-based multimodal registration module; by jointly optimizing the image-generation loss and spatial-deformation-field constraints, it significantly improves the robustness of the model to noise and artifacts.
Quantitative evaluation, comparing the HU values of corresponding voxels in the common spatial coordinate system, showed that the difference in HU values between CBCT and pCT was between 0 and 250 HU within anatomical structures, the difference between sCT and pCT was between −50 and 50 HU, and the difference for soft and brain tissues was 0 HU. Compared with the original CBCT, the sCT generated by Reg-GAN showed significant improvement in image quality metrics: Mean absolute error (MAE) decreased from (52.5±26.6) HU to (36.6±11.6) HU (P=0.041<0.05), peak signal-to-noise ratio (PSNR) increased from (25.1±3.1) dB to (27.1±2.4) dB (P=0.006<0.05), and the structural similarity index (SSIM) improved from 0.82±0.03 to 0.84±0.02 (P=0.022<0.05). Dosimetric validation used a multimodal image-fusion strategy, in which pCT served as the baseline image and the sCT was rigidly aligned so that the target volume and organs at risk could be mapped through deformable contouring. Dose calculations in the treatment planning system (TPS) showed that the dose distributions and dose–volume histograms (DVH) generated from sCT and pCT were highly consistent, and the P-values of the key dosimetric parameters were all >0.05, with no statistically significant differences, validating the dosimetric accuracy of sCT in adaptive radiotherapy. In this study, the limitation that CBCT grayscale distortion imposes on dose calculation was effectively overcome by the synergistic optimization of deep registration and the generative adversarial network. The proposed Reg-GAN model not only enhances the workflow efficiency of image-guided radiotherapy but also yields sCT images with excellent image quality and dosimetric properties, providing reliable technical support for the clinical implementation of online adaptive radiotherapy.
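The three image-quality metrics reported above (MAE, PSNR, SSIM) can be sketched in a few lines of NumPy, assuming the sCT and reference pCT volumes are already rigidly aligned and resampled to the same grid. The `data_range` of 2000 HU and the single-window (global) SSIM are simplifying assumptions for illustration; published evaluations typically use a sliding-window SSIM such as the one in scikit-image.

```python
import numpy as np

def mae_hu(sct, pct):
    """Mean absolute error in HU between two aligned CT volumes."""
    return float(np.mean(np.abs(sct.astype(np.float64) - pct.astype(np.float64))))

def psnr_db(sct, pct, data_range=2000.0):
    """Peak signal-to-noise ratio in dB; data_range (HU span) is an assumption."""
    mse = np.mean((sct.astype(np.float64) - pct.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(sct, pct, data_range=2000.0):
    """Simplified SSIM computed over one global window (not sliding windows)."""
    x = sct.astype(np.float64)
    y = pct.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # standard SSIM stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

A uniform offset of 10 HU between otherwise identical volumes gives an MAE of exactly 10 HU, which is a quick sanity check on the sign conventions before running the metrics on real data.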

     
