Externally Validated Deep Learning Model for Multi-Disease Classification of Chest X-Rays

Authors

  • Weny Indah Kusumawati, Universitas Dinamika
  • Zendi Zakaria Raga Permana, Institut Teknologi Bandung
  • Ira Puspasari, Universitas Dinamika

DOI:

https://doi.org/10.15294/jte.v17i2.29892

Keywords:

chest X-ray, deep learning, external validation, medical imaging, multi-disease classification

Abstract

Accurate classification of chest X-ray (CXR) images is vital for the early detection of thoracic diseases such as COVID-19, tuberculosis, and pneumonia, particularly in regions with limited radiological expertise. While deep learning has shown promise in CXR interpretation, many existing models are validated only on internal datasets, risking overfitting and poor generalizability, and inadequate tuning of network architectures may further limit robustness across varied imaging conditions. This study presents an externally validated deep learning framework based on convolutional neural networks (CNNs) for multi-disease CXR classification, comparing a baseline CNN with two convolutional layers against a tuned architecture with three layers across multiple image resolutions (64×64, 112×112, and 224×224). The proposed model employs transfer learning with a pre-trained CNN, fine-tuned for four-class classification using a softmax output layer. Training used the Adam optimizer (learning rate 0.0001, batch size 32) and categorical cross-entropy loss for up to 50 epochs with early stopping. In internal validation, the tuned model outperformed the baseline, achieving 0.97 accuracy and an F1-score of 0.89. External validation confirmed its superior generalizability: at 112×112 resolution the tuned model attained an F1-score of 0.83 and an AUC of 0.97, against the baseline's F1-score of 0.79 and AUC of 0.94. These results highlight the potential of optimized CNN architectures as reliable, scalable tools for radiological decision support in resource-limited healthcare systems. Future work will incorporate explainable AI methods and real-world clinical validation to ensure safe, interpretable deployment.
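
For concreteness, the sketch below (Keras/TensorFlow) mirrors the training configuration stated in the abstract: a three-convolutional-layer network with a four-class softmax head, trained with Adam (learning rate 0.0001), batch size 32, categorical cross-entropy loss, and early stopping within 50 epochs, at the 112×112 resolution that performed best under external validation. The filter counts, kernel sizes, pooling, dense head width, class ordering, and early-stopping patience are illustrative assumptions, and the paper's transfer-learning backbone is not reproduced here.

# Minimal sketch of the tuned three-convolutional-layer configuration.
# Only the layer count, input resolution, optimizer settings, loss,
# epoch budget, and four-class softmax output come from the abstract;
# everything else is an assumption for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4        # e.g., COVID-19, tuberculosis, pneumonia, normal (assumed label set)
IMG_SIZE = (112, 112)  # best-performing resolution under external validation

model = models.Sequential([
    layers.Input(shape=(*IMG_SIZE, 3)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),   # conv block 1 (assumed filters)
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),   # conv block 2
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),  # conv block 3
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),                      # assumed head width
    layers.Dense(NUM_CLASSES, activation="softmax"),           # four-class output
])

# Training configuration as stated in the abstract.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True  # patience is assumed
)
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels),
#           epochs=50, batch_size=32, callbacks=[early_stop])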

Published

2025-12-30

Article ID

29892

Issue

Vol. 17 No. 2 (2025)

Section

Articles