Written by Hannah Harp
Spoon Feed
An untrained large language model (LLM) provided accurate and largely complete answers when asked about antibiotic choice in an outpatient setting.
Alexa, can you add bubblegum flavor?
The possible applications of artificial intelligence (AI) in medicine are endless, but we’ve been a bit slower to test and adopt AI in pediatric medicine. Since antibiotic recommendations for children depend on weight, age, additional symptoms, previous antibiotic exposure, vaccination status, and the local antibiogram, can we expect an LLM to help us choose the appropriate antibiotic and dosing for kids?
This study evaluated whether an untrained LLM (ChatGPT-3.5) could assist in antibiotic prescribing for 13 common pediatric infections. Using standardized clinical scenarios, expert pediatric infectious disease specialists assessed AI responses for accuracy, completeness, and clinical translatability on a scale from 1 (bad) to 5 (good). Mean scores were 4.07, 3.89, and 3.96, respectively. Notably, 73.1% of expert ratings were ≥4, with the highest performance in scenarios governed by clear guidelines (e.g., otitis media, pharyngitis). As the authors point out, LLMs are only as accurate as the information you feed into them. Outpatient antibiotic choice and dosing is (in general) pretty straightforward, so I would expect exceptional accuracy for the answers to these questions.
How does this change my practice?
Overall, I found the computer’s answers to be correct and easy to apply. For a next study, I’d like to see the LLM respond to more nuanced clinical case scenarios to push the model’s limits.
Source
Use of an Untrained Large Language Model for Antibiotic Prescription in Pediatric Infectious Diseases at Primary Care Settings: A Study From the Italian Society for Pediatric Infectious Diseases. Pediatr Infect Dis J. 2025 Jun 1;44(6):e199-e202. doi: 10.1097/INF.0000000000004748. Epub 2025 May 8. PMID: 40359241
