Accurate classification of periodontal disease from panoramic X-ray images is of immense clinical importance for effective diagnosis and treatment. Recent methods attempt to classify periodontal diseases from X-ray images by estimating bone loss within these images, supervised by manual radiographic annotations for segmentation or keypoint detection. However, these annotations are often inconsistent with the clinical gold standard of probing measurements, potentially causing measurement inaccuracy and unstable classifications. Moreover, the diagnosis of periodontal disease demands exceptionally high sensitivity. To address these challenges, we introduce HC-Net, a novel hybrid classification framework designed to accurately classify periodontal disease from X-ray images. The framework comprises three components: tooth-level classification, patient-level classification, and a learnable adaptive noisy-OR gate. For tooth-level classification, we first employ instance segmentation to identify each tooth individually, and then classify periodontal disease at the tooth level. For patient-level classification, we adopt a multi-task strategy to jointly learn the patient-level classification and a Class Activation Map (CAM) that reflects the confidence of local lesion areas within the panoramic X-ray image. Finally, the learnable adaptive noisy-OR gate produces a hybrid classification by combining the predictions from both levels. In particular, we incorporate clinical knowledge from the diagnostic workflows of professional dentists, specifically to better handle the high sensitivity required in periodontal disease diagnosis. Extensive experiments on a dataset collected from real-world clinics demonstrate that the proposed HC-Net achieves state-of-the-art performance in periodontal disease classification, exhibiting substantial potential for practical application.
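The abstract does not give implementation details, but the fusion step lends itself to a short illustration. The sketch below shows one plausible reading of a learnable adaptive noisy-OR gate: the patient is predicted positive unless every weighted tooth-level "cause" and the patient-level prediction are all negative, with per-tooth gate weights learned from tooth features. All module names, shapes, and the weighting scheme are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class AdaptiveNoisyORGate(nn.Module):
    """Hypothetical sketch of a learnable adaptive noisy-OR gate.

    Fuses per-tooth disease probabilities with a patient-level
    probability; a small learned head adapts each tooth's contribution.
    This is an assumption-based reading of the abstract.
    """

    def __init__(self, feat_dim: int):
        super().__init__()
        # Adaptive head: maps each tooth's feature vector to a gate weight in [0, 1]
        self.weight_head = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())

    def forward(self, tooth_probs, tooth_feats, patient_prob):
        # tooth_probs:  (B, T)    per-tooth disease probabilities
        # tooth_feats:  (B, T, D) per-tooth features driving the adaptive weights
        # patient_prob: (B,)      patient-level disease probability
        w = self.weight_head(tooth_feats).squeeze(-1)            # (B, T)
        # Noisy-OR: patient is negative only if every weighted cause is absent
        p_neg_teeth = torch.prod(1.0 - w * tooth_probs, dim=1)   # (B,)
        p_neg = p_neg_teeth * (1.0 - patient_prob)
        return 1.0 - p_neg  # hybrid probability of periodontal disease
```

A noisy-OR is a natural fit here: clinically, a patient is diseased if any tooth exhibits lesions, and the learnable weights let the gate down-weight unreliable tooth-level predictions, which is consistent with the paper's emphasis on diagnostic sensitivity.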
Keywords: Clinical knowledge; Hybrid classification; Panoramic X-ray image; Periodontal disease.