FDA's New Drug Approval AI Is Reportedly Generating Fake Studies
The Food and Drug Administration (FDA) has introduced a new AI tool named Elsa (Electronic Language System Assistant) to aid its staff with various administrative tasks, such as summarizing adverse event reports, performing label comparisons, and supporting clinical protocol reviews [1][3][4]. Early pilot results suggest that Elsa can cut tasks that previously took days down to mere minutes [2].
However, concerns have been raised about the tool's potential for inaccuracies, a common issue with large language models [2][3]. The FDA has not released detailed data on Elsa's accuracy in summarizing scientific studies or adverse events, and the tool's real-world performance remains opaque given the limited transparency about its training and validation processes [1].
Industry experts and some FDA personnel have expressed worries about the possibility of oversimplification or misrepresentation of scientific findings [2][3]. A recent study by the Royal Society highlighted that summaries generated by similar models may lack nuance or misrepresent data, which could have downstream implications for regulatory decisions if not carefully reviewed [3].
The FDA emphasizes that Elsa is intended to reduce "busywork" for human reviewers, not to replace scientific judgment, and stresses the importance of human oversight in the process [2].
Elsa's rollout is part of a broader FDA modernization strategy, which includes efforts to dramatically shorten drug approval timelines, potentially from 10–12 months to 1–2 months [2]. If Elsa produces accurate, consistent, and nuanced summaries, it could improve both the speed and quality of reviews. If its summaries are incomplete or misleading, however, it risks introducing errors or oversights into the approval process [2][3].
Robert F. Kennedy Jr., the Secretary of Health and Human Services, has advocated for the use of generative artificial intelligence tools in agencies like the FDA [5]. Kennedy's Make America Healthy Again (MAHA) commission issued a report in May that was found to be filled with citations for fake studies [6]. CNN reported that the FDA's AI tool, Elsa, is generating fake studies, a phenomenon known in AI as "hallucinating" [7].
The FDA has not directly addressed these concerns. William Maloney of the FDA's "rapid response" office said that the information provided to CNN about the agency's use of AI was mischaracterized and taken out of context [8], but he did not answer Gizmodo's questions about what, specifically, was mischaracterized or inaccurate in CNN's report [9].
In conclusion, Elsa represents a significant step toward integrating AI into FDA workflows, with the potential to streamline drug reviews and approvals. However, its accuracy in generating study summaries is not yet fully validated, and the agency's reliance on human oversight remains essential to ensure the integrity and safety of the regulatory process [1][2][3].
- The FDA has adopted an AI tool called Elsa (Electronic Language System Assistant) to support its staff in various tasks, aiming to cut tasks that once took days down to minutes.
- Concerns have been voiced about Elsa's potential for inaccuracies, a common issue with large language models, and the FDA has yet to release detailed data on the tool's accuracy.
- Industry experts and FDA personnel worry that Elsa could oversimplify or misrepresent scientific findings, which could affect regulatory decisions.
- Elsa is intended not to replace scientific judgment but to reduce "busywork" for human reviewers, with the FDA emphasizing the importance of human oversight.
- The FDA's adoption of Elsa is part of a broader modernization strategy, aiming to shorten drug approval timelines from 10–12 months down to 1–2 months, with potential implications for both speed and quality of reviews.
- Following a CNN report claiming that Elsa is generating fake studies (a phenomenon known in AI as "hallucinating"), concerns have grown, and the FDA has not yet directly addressed them.
- The future of AI in health regulation, including its role in policy, legislation, and the review of new therapies and treatments, will depend on validating the accuracy and integrity of such tools to ensure the safety of the regulatory process.