The Biden administration hasn’t decided how to handle emerging tools like chatbots that interact with patients and answer doctors’ questions – even though some are already in use. Biden is taking a step toward addressing that today with an executive order that calls on the Department of Health and Human Services to create a task force to develop a strategic plan within a year on the responsible use of AI, POLITICO’s Mohar Chatterjee and Rebecca Kern reported.

State of play: The Food and Drug Administration has approved about 520 AI-enabled devices — mostly for radiology, where the technology has shown promise in reading X-rays. FDA Commissioner Robert Califf said in an August meeting that he believed the agency has done well with predictive AI systems, which analyze data and forecast an outcome. But many products in development use newer, more advanced technology capable of responding to human queries — something Califf has called a “sort of scary area” of regulation. Those present even more challenges to regulators, experts said. Troy Tazbaz, the director of the FDA’s Digital Health Center of Excellence, told Daniel his agency recognizes it needs to do more.

Emerging regulation: AI products made for health care and similar to ChatGPT, the bot that can pass medical exams, require “a vastly different paradigm” to regulate, Tazbaz explained. But the agency is still working on that. There are no regulations specifically addressing the technology, so the FDA is planning a novel system. Tazbaz believes the FDA will create a process of ongoing audits and certifications of AI products, hoping to ensure continuing safety as the systems change.

Even so: Rules that are too onerous risk quashing innovation that could make care better, cheaper and more equitable.
The FDA is taking care not to stunt the new tech’s growth, Tazbaz said, talking with industry leaders, hearing their concerns and sharing the agency’s thinking. “How do we actually regulate something like that without necessarily losing the pace of innovation?” Tazbaz asked.

Safety first: In the absence of new rules, doctors are rapidly deploying AI — to interpret tests, diagnose diseases and provide behavioral therapy. Products that use AI are going to market without the kind of data the government requires for new medical devices or medicines. That worries Tazbaz. “The medical community needs to effectively look at the liabilities,” he said of AI used to diagnose patients. “Would I personally feel safe? I think it depends on the use case.”