Activity ID
14650
Expires
November 13, 2028
Format Type
Journal-based
CME Credit
1
Fee
$30
CME Provider: JAMA Otolaryngology – Head & Neck Surgery
Description of CME Course
Importance Deep learning (DL), a subset of artificial intelligence, uses multilayered neural networks to uncover complex patterns in large datasets without manual feature engineering. Unlike traditional machine learning, DL autonomously learns hierarchical representations from raw data, offering distinct advantages for analyzing images (eg, stroboscopy) and physiologic signals (eg, cochlear implant optimization). Despite these advances, DL remains conceptually difficult for many clinicians to integrate into routine clinical practice. This narrative review sought to synthesize recent DL applications and propose a framework for their integration in otolaryngology.
Observations A total of 1422 articles (2020-2025) were screened, and 327 original research studies on DL in otolaryngology were included in the analysis. The included articles were categorized into 4 domains: detection and diagnosis (179 [55%]), prediction and prognostics (16 [5%]), image segmentation (93 [28%]), and emerging applications (39 [12%]). Proof-of-concept studies have demonstrated that DL systems can achieve diagnostic performance comparable to that of experts, with models accurately identifying nasopharyngeal carcinoma (92%), laryngeal malignant neoplasms (86%), and otologic pathology (>95%). Prognostic applications included survival stratification in oropharyngeal cancer and recurrence prediction in chronic rhinosinusitis. Segmentation models reliably delineated anatomical regions. Emerging uses encompassed hearing aid optimization, surgical instrument tracking, and intraoperative landmark identification. Further progress requires multi-institutional datasets, standardized acquisition protocols, and transparent, interpretable models to improve trust and clinical adoption.
Conclusions and Relevance This narrative review found that DL applications in otolaryngology show potential for improving diagnostic performance, predicting outcomes, and providing intraoperative guidance. Widespread and equitable adoption will require harmonized, high-quality, representative datasets, mitigation of algorithmic bias, and robust model interpretability. Federated learning and explainability are emerging frameworks that help preserve privacy and increase clinician trust. Standardized reporting, prospective validation, human-in-the-loop models, and interdisciplinary partnerships can help balance the promise of algorithmic approaches with their clinical utility, ensuring that DL tools contribute meaningfully to patient care.
Disclaimers
1. This activity is accredited by the American Medical Association.
2. This activity is free to AMA members.
ABMS Member Board Approvals by Type
ABMS Lifelong Learning CME Activity
Allergy and Immunology
Anesthesiology
Colon and Rectal Surgery
Family Medicine
Medical Genetics and Genomics
Nuclear Medicine
Ophthalmology
Pathology
Physical Medicine and Rehabilitation
Plastic Surgery
Preventive Medicine
Psychiatry and Neurology
Radiology
Thoracic Surgery
Urology
Commercial Support?
No
NOTE: If a Member Board has not approved this activity for MOC as an accredited CME activity, this activity may count toward an ABMS Member Board’s general CME requirement. Please refer directly to your Member Board’s MOC Part II Lifelong Learning and Self-Assessment Program Requirements.
Educational Objectives
To identify the key insights or developments described in this article
Keywords
Artificial Intelligence, Digital Health
Competencies
Medical Knowledge
CME Credit Type
AMA PRA Category 1 Credit
DOI
10.1001/jamaoto.2025.3911