Document Type
Article
Publication Date
12-1-2025
Abstract
BACKGROUND: Four-dimensional computed tomography (4DCT) imaging is a crucial component of lung cancer radiotherapy planning and enables CT-ventilation-based functional avoidance planning to mitigate radiation toxicity. However, 4DCT scans are frequently impaired by acquisition artifacts that corrupt downstream analyses dependent on lung segmentation and deformable image registration, such as CT-ventilation and dose accumulation.
PURPOSE: This study develops 3D deep learning models to identify phase-binning artifacts at the voxel level and a heuristic, rule-based method to identify interpolation slices within 4DCT images.
METHODS: We introduce a generator that systematically inserts synthetic phase-binning and interpolation artifacts into any artifact-free breathing phase obtained from nine different clinical 4DCT datasets to produce ground-truth data for (1) training modified nnUNet and SwinUNETR models to detect phase-binning artifacts, and (2) determining thresholds for the rule-based detection method for interpolation artifacts. The use of multiple datasets promotes robustness across artifact severities, lung geometries, and stages of cancer progression.
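As a rough illustration of the kind of synthetic artifact the generator described above might insert, the sketch below duplicates a slab of axial slices at a random superior-inferior position, mimicking a duplication-type phase-binning (sorting) error and returning a ground-truth voxel mask for the inserted slices. The function name, the single artifact type, and the slab-thickness parameter are illustrative assumptions, not the paper's actual generator.

```python
import numpy as np

def insert_duplication_artifact(volume, slab_thickness=3, rng=None):
    """Insert a synthetic duplication-type phase-binning artifact.

    A slab of `slab_thickness` axial slices (axis 0) is duplicated at a
    random z location; everything inferior to it shifts down, and slices
    falling off the end of the volume are discarded. Returns the corrupted
    volume and a boolean mask marking the duplicated (artifact) slices.
    Assumes volume.shape[0] > 3 * slab_thickness.
    """
    rng = np.random.default_rng(rng)
    nz = volume.shape[0]
    z = int(rng.integers(slab_thickness, nz - 2 * slab_thickness))
    corrupted = volume.copy()
    # Shift slices below z down by slab_thickness, repeating the slab at z.
    corrupted[z + slab_thickness:] = volume[z:nz - slab_thickness]
    # Ground-truth mask marks the inserted duplicate slices.
    mask = np.zeros(volume.shape, dtype=bool)
    mask[z + slab_thickness:z + 2 * slab_thickness] = True
    return corrupted, mask
```

In a training pipeline, the corrupted volume would serve as the network input and the mask as the voxel-level target; the paper's generator additionally covers interpolation artifacts and varying severities.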
RESULTS: After training on generated synthetic data under several configurations (region-based learning masks, left–right lung separation), the nnUNet and SwinUNETR models demonstrated state-of-the-art artifact detection accuracy, averaging 0.957 ± 0.024 (95% CI: [0.956, 0.958]), with an nnUNet configuration achieving the highest averaged accuracy of 0.965, sensitivity of 0.805, and specificity of 0.998 when inferring artifact-affected axial slices from voxel-level predictions. By interpolating and comparing groups of slices, the proposed interpolation detection method achieves an accuracy, sensitivity, and specificity of 0.97, 0.97, and 0.97 on manually labeled true artifact cases. We propose a localized artifact correction method that simply replaces the predicted artifact-affected voxels with the average surrounding lung intensity value; within a tight artifact-bound region, 65% of lung segmentation masks produced by an automatic segmentation tool achieved Dice scores greater than 0.95 after correction (compared with 11% of cases before correction). When applied to 1989 cases with true artifacts, SwinUNETR configurations tend to be more generalizable despite marginally lower performance on synthetic artifacts. We quantify this performance without ground-truth artifact masks by statistically comparing artifact properties of detected synthetic and true cases.
CONCLUSIONS: We demonstrate state-of-the-art artifact detection accuracy using 3D deep learning models trained on synthetic data and a rule-based approach configured on true data, providing interpretability by highlighting which voxel locations indicate that a slice is artifact-affected. The SwinUNETR model's accuracy and fast run-time have the potential to enable more targeted artifact correction methods or to signal an imaging technologist in real time when a patient should be re-scanned.
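The localized correction described in the abstract, replacing predicted artifact-affected voxels with the average surrounding lung intensity, can be sketched in a few lines of NumPy. This is a simplified illustration under stated assumptions (the function name, the bounding-box neighborhood with a `margin` parameter, and the use of a precomputed lung mask are all hypothetical choices, not the authors' exact implementation):

```python
import numpy as np

def correct_artifact_voxels(volume, artifact_mask, lung_mask, margin=2):
    """Replace artifact-affected voxels with the mean intensity of nearby
    artifact-free lung voxels.

    The neighborhood is taken as the bounding box of the artifact mask,
    expanded by `margin` voxels per axis (an illustrative simplification of
    "surrounding lung intensity").
    """
    corrected = volume.copy()
    idx = np.argwhere(artifact_mask)
    if idx.size == 0:
        return corrected  # nothing to correct
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    box = tuple(slice(l, h) for l, h in zip(lo, hi))
    # Artifact-free lung voxels inside the expanded bounding box.
    surround = lung_mask[box] & ~artifact_mask[box]
    if surround.any():
        corrected[artifact_mask] = volume[box][surround].mean()
    return corrected
```

Replacing corrupted voxels with a constant regional mean is crude by design; as the abstract notes, its purpose is to restore enough local plausibility for downstream tools such as automatic lung segmentation, not to recover true anatomy.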
Recommended Citation
Cisneros, Jorge; Feldt, Nathan H.; Vinogradskiy, Yevgeniy; Castillo, Richard; and Castillo, Edward, "Detection of Phase-Binning and Interpolation Artifacts in 4-Dimensional Computed Tomography Imaging Using Deep Learning and Rule-Based Approaches" (2025). Department of Radiation Oncology Faculty Papers. Paper 224.
https://jdc.jefferson.edu/radoncfp/224
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
PubMed ID
41389065
Language
English
Included in
Investigative Techniques Commons, Oncology Commons, Radiation Medicine Commons, Theory and Algorithms Commons


Comments
This article is the author’s final published version in Medical Physics, Volume 52, Issue 12, 2025, Article number e70191.
The published version is available at https://doi.org/10.1002/mp.70191. Copyright © 2025 The Author(s).