AI in medicine: More studies needed
Dr. Thomas Kaiser, Director of IQWiG, spoke about the opportunities and risks of AI for evidence-based medicine. / © Avoxa/Matthias Merz
AI applications are becoming increasingly adept at searching vast amounts of data for very specific information and then presenting this information to the user in the form of self-generated text or images. "The enormous advances in AI have been made possible by an explosion in computing power," Kaiser said. This explosion is not yet over, so further improvements can be expected in the near future.
AI has also long since found its way into medical research and applications. In recent years, the number of AI-based medical devices and procedures approved by the US FDA has increased dramatically. The vast majority of these devices (77 percent) are in the field of radiology, for example, diagnostic software for evaluating medical images.
"The problem is that it's relatively unclear why these approvals were granted," Kaiser said. The FDA website, otherwise known for its high level of transparency, provides very little information about which specific data were decisive for the approval of these devices.
The argument often made is that these devices already existed without the AI component, meaning they are not entirely new. "But that's not always plausible," said the IQWiG director, because AI often influences treatment decisions. This potentially poses a high risk.
While publications on AI in medicine have increased significantly in recent years, very few of them are high-quality studies. This is only partly explained by the fact that CONSORT (Consolidated Standards of Reporting Trials) criteria, a recognized quality standard for studies, were not defined for AI studies until 2022. "The opportunity to test AI in well-conducted studies exists not only in the USA and China, which are global pioneers in AI, but also here in Germany," Kaiser said. Unfortunately, this research is currently being neglected.
In his view, the problem is not AI in medicine, but the hype surrounding it. "People are so keen to have AI in routine care that they forget it must first be properly investigated." The same rules should apply as for any other medical procedure: at first it is an experimental method, then it is tested, and only then, if warranted, is it established as the state of the art. Only in this way can one ensure that the use of AI does not end up doing more harm than good.

pharmazeutische-zeitung