Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.
IBM boasted that its AI could "outthink cancer." Others say computer systems that read X-rays will make radiologists obsolete.
"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative" as AI, said Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, California. AI can help doctors interpret heart MRIs, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.
Even the US Food and Drug Administration, which has approved more than 40 AI products over the past five years, says "the potential of digital health is nothing short of revolutionary."
Yet many health industry experts fear that AI-based products won't be able to match the hype. Many physicians and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk, and that regulators are not doing enough to keep consumers safe.
Early experiments in AI provide a reason for caution, said Mildred Cho, a pediatrician at Stanford's Center for Biomedical Ethics.
Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.
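To make the asthma error concrete, here is a minimal, hypothetical sketch (all data and variable names are synthetic and invented for illustration, not taken from the actual study): when asthma patients in the historical records survived pneumonia only because hospital policy routed them to intensive care, a model that never sees the treatment learns that asthma itself lowers risk.

```python
# Toy illustration with synthetic data: a model "learns" that asthma
# protects against pneumonia death because, in the training records,
# asthma patients were routed to the ICU and so rarely died.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
asthma = rng.random(n) < 0.15            # 15% of patients have asthma
severity = rng.normal(0, 1, n)           # unobserved illness severity

# Hospital policy: asthma patients (and the sickest others) go to the
# ICU, which sharply lowers their observed mortality.
icu = asthma | (severity > 1.5)
p_death = 1 / (1 + np.exp(-(severity - 2.0 * icu)))
died = rng.random(n) < p_death

# The model sees only the asthma flag, never the ICU treatment.
model = LogisticRegression().fit(asthma.reshape(-1, 1), died)
print(f"learned asthma coefficient: {model.coef_[0, 0]:.2f}")
# The coefficient comes out negative: asthma appears protective, so a
# system built on it would deprioritize the patients who most need care.
```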
"It's only a matter of time before something like this leads to a serious health problem," said Steven Nissen, chairman of cardiology at the Cleveland Clinic.
Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is "nearly at the peak of inflated expectations," according to a July report by the research firm Gartner.
That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of "Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again," acknowledges that many AI products are little more than hot air. "It's a mixed bag," he said.
Experts such as Bob Kocher, a partner at the venture capital firm Venrock, are even blunter: most AI products have little evidence to support them, he said, and some risks will not become apparent until an AI system has been used by large numbers of patients.
Few tech startups publish their research in peer-reviewed journals, which would allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.
And although developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software "may make patients into unwitting guinea pigs," said Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.
AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers do not know how they have reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.
Yet most AI devices do not require FDA approval.
Legislation passed by Congress in 2016, and championed by the tech industry, exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help physicians make medical decisions.
There has been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published December 17 by the National Academy of Medicine.
"None of the [AI] content provided to patients is truly effective," said Ezekiel Emanuel, a professor of medical ethics and health policy at the Perelman School of Medicine at the University of Pennsylvania. "
The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices, such as ones that help people count their daily steps, need less scrutiny than ones that diagnose or treat disease.
Some software developers do not bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in the Annals of Internal Medicine.
Industry analysts say that AI developers have little interest in conducting costly and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. "That's not how the US economy works."
But Oren Etzioni, chief executive officer of the Allen Institute for AI in Seattle, says AI developers have a financial incentive to make sure their medical products are safe.
Relaxing Standards at the FDA
The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which the International Consortium of Investigative Journalists has linked to 80,000 deaths and 1.7 million injuries over the past decade.
Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market "moderate-risk" products with no clinical testing as long as they are deemed similar to existing devices. A committee of the National Academy of Medicine concluded that the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.
Instead, the FDA is using that same process to greenlight AI devices.
Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have undergone new clinical testing, the study found. In 2018, the FDA cleared an AI device designed to help diagnose liver and lung cancer based on its similarity to imaging software approved 20 years earlier. That earlier software had itself been cleared because it was deemed "substantially equivalent" to products marketed before 1976.
AI products cleared by the FDA today are largely "locked," so that their calculations and results do not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.
To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, one focused on evaluating companies, not products.
The FDA's pilot "pre-certification" program, launched in 2017, is designed to "reduce market access time and cost for software developers", pushing the "minimum possible" system. FDA officials say they want to keep up with AI software developers, who update their products more frequently than traditional carriers like X-ray machines.
Scott Gottlieb said in 2017, while he was FDA commissioner, that regulators need to make sure their approach to innovative products "fosters, not impedes, innovation."
Under the plan, the FDA would pre-certify companies that "demonstrate a culture of quality and organizational excellence," allowing them to provide less upfront data about their devices.
Pre-certified companies could then release devices with a "streamlined" review, or with no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, Fitbit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.
High-risk products, such as software used in pacemakers, will still receive a comprehensive FDA evaluation. "We definitely don't want patients to be hurt," said Patel, who noted that devices cleared through pre-certification can be recalled if needed. "There are a lot of guardrails still in place."
But research shows that even low- and moderate-risk devices have been recalled because of serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research.
Johnson & Johnson, for example, has recalled hip implants and surgical mesh.
"The honor system is not a regulatory system," said Jesse Ehrenfeld, chair of the physician group's board of trustees. Sense in an October letter to the FDA.
When Good Algorithms Go Bad
Some AI devices are tested more carefully than others.
An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Michael Abramoff, the company's founder and executive chairman.
The test, marketed as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.
IDx-DR is the first "autonomous" AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.
Yet some AI-based innovations intended to improve care have had the opposite effect.
A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's disease based on their speech. Its predictions were more accurate for some patients than for others. "Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment," said Frank Rudzicz, an associate professor of computer science at the University of Toronto.
Physicians at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, the researchers realized the computer had merely learned to tell the difference between portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it is not surprising that these patients had a greater risk of lung infection.
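A rough sketch of that failure mode, with synthetic data and invented feature names (an illustration of the general pattern, not the Mount Sinai model): a classifier trained where portable X-rays almost always mean a bedridden patient looks accurate at its home hospital, then collapses at a hospital that uses portable machines routinely.

```python
# Toy illustration with synthetic data: a classifier that learns the
# X-ray machine, not the disease, fails at a second hospital where
# portable machines are also used for routine scans.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def make_hospital(n, p_portable_if_sick, p_portable_if_well):
    sick = rng.random(n) < 0.3
    portable = np.where(sick,
                        rng.random(n) < p_portable_if_sick,
                        rng.random(n) < p_portable_if_well)
    signal = sick + rng.normal(0, 2.0, n)   # weak true disease signal
    X = np.column_stack([portable, signal])
    return X, sick

# Hospital A: a portable X-ray almost always means a bedridden patient.
X_a, y_a = make_hospital(5_000, 0.9, 0.05)
# Hospital B: portable machines are used for routine scans as well.
X_b, y_b = make_hospital(5_000, 0.5, 0.5)

model = RandomForestClassifier(n_estimators=100).fit(X_a, y_a)
print("accuracy at hospital A:", model.score(X_a, y_a))  # looks great
print("accuracy at hospital B:", model.score(X_b, y_b))  # flops
```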
DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function did not improve, said Saurabh Jha, an associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of "overdiagnosis," in which the AI system flagged borderline kidney problems that did not need treatment, Jha said. Google had no comment in response to Jha's conclusions.
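The arithmetic behind that concern is worth spelling out; a small sketch (only the reported two-to-one ratio comes from the study, the rest is illustrative):

```python
# With two false alarms for every correct result, the chance that any
# given alert is real (the positive predictive value) is one in three.
def ppv(true_positives: float, false_positives: float) -> float:
    return true_positives / (true_positives + false_positives)

print(f"PPV at 2 false alarms per hit: {ppv(1, 2):.0%}")   # 33%

# Each additional false alarm per correct hit erodes trust further,
# feeding either unnecessary workups or alert fatigue.
for fp_per_hit in (1, 2, 5, 10):
    print(f"{fp_per_hit} false alarms per hit -> PPV {ppv(1, fp_per_hit):.0%}")
```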
False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said.
As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That is because diseases are more complex, and the health care system far more dysfunctional, than many computer scientists anticipate.
Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often are not aware that they are building on top of a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes and missing data.
A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.
Given the risks involved, doctors need to step in to protect their patients' interests, said Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.
"Although it is the job of entrepreneurs to think big and take risks," says Sioni, "it is the doctors' job to protect their patients."
Kaiser Health News (KHN) is a nonprofit news service that covers health topics. It is an editorially independent program of the Kaiser Family Foundation and is not affiliated with Kaiser Permanente.