You can already find it in some emergency rooms — and soon we’ll see it in every aspect of health care.
Artificial intelligence in health care carries huge potential, according to experts in computer science and medicine, but it also raises serious questions around bias, accountability and security.
“I think we’re just seeing the tip of the iceberg right now,” said Yoshua Bengio, a computer scientist and professor at the University of Montreal, who was recently awarded the Turing Award, often called the “Nobel Prize” of computing.
Bengio is one of the pioneers of deep learning, an advanced form of AI, which he believes will advance health care. In deep learning, a computer is fed data, which it uses to make assumptions and learn as it goes — much like our brain does.
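The feed-data-and-adjust loop Bengio describes can be seen in miniature with a single artificial "neuron." This is a toy sketch, not any system mentioned in this story: one adjustable parameter learns the rule y = 2x by repeatedly nudging itself toward smaller error, the same basic idea deep learning scales up to millions of units.

```python
# A single "neuron" learning y = 2x from examples -- a toy version of
# the feed-data-and-adjust loop that deep learning scales up.
data = [(x, 2 * x) for x in range(1, 6)]  # inputs paired with true outputs

weight = 0.0          # the neuron's one adjustable parameter
learning_rate = 0.01

for _ in range(200):                      # show it the data many times
    for x, target in data:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x  # nudge toward less error

print(round(weight, 2))  # close to 2.0 after training
```

No real network is this simple, but the mechanism is the same: prediction, error, adjustment, repeat.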
Scientists are already using AI to develop medical devices. At the University of Alberta, researchers are testing an experimental bionic arm that can “learn” and anticipate the movements of an amputee.
Last year, the U.S. Food and Drug Administration (FDA) approved a tool that can look at your retina and automatically detect signs of diabetic blindness.
And it is expected that AI will soon affect all aspects of health care.
The ability to process huge amounts of information quickly will have a big impact on medical diagnoses, and medical experts believe pathology, dermatology and radiology will likely be the first to see these changes.

“All these images right now are processed by people who painstakingly have to look at all the details and check for problems. Machines will do that in a very systematic way and they can be trained to be as good or better than doctors or technicians at these tasks,” said Bengio.
Faster ER service
But he believes machine learning can go beyond that.
“Designing new drugs can take 15 years and cost billions and billions of dollars,” he said. “There will soon be ways to streamline that process.”
At Humber River Hospital in northwest Toronto, AI is speeding up perhaps the most frustrating part of a patient’s experience: the emergency room.
In the hospital’s control centre, powerful computers are now accurately predicting how many patients will arrive in the emergency department — two days in advance.
The software processes real-time data from all over the hospital — admissions, wait times, transfers and discharges — and analyzes it, going back over a year’s worth of information. From that, it can find patterns and pinpoint bottlenecks in the system.
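A pattern-finding forecast like that can be sketched in a few lines. The numbers and method below are illustrative only, not Humber River Hospital's software: a seasonal-naive baseline that predicts a given day's ER arrivals from the same weekday in past weeks.

```python
from statistics import mean

# Hypothetical daily ER arrival counts (synthetic, not real hospital
# data), ordered oldest to newest across three weeks (Mon..Sun).
daily_arrivals = [
    210, 195, 188, 192, 205, 240, 255,   # week 1
    215, 198, 190, 196, 210, 245, 260,   # week 2
    220, 200, 193, 199, 212, 250, 262,   # week 3
]

def forecast_arrivals(history, days_ahead, week_len=7):
    """Predict arrivals N days ahead by averaging the same weekday
    across past weeks -- far simpler than a production system, but
    enough to show how historical patterns drive the prediction."""
    target_weekday = (len(history) + days_ahead - 1) % week_len
    same_day_counts = [
        count for i, count in enumerate(history)
        if i % week_len == target_weekday
    ]
    return mean(same_day_counts)

# Predict the ER load two days in advance, as the article describes.
print(round(forecast_arrivals(daily_arrivals, days_ahead=2)))  # → 198
```

A real system would fold in admissions, transfers and discharges in real time; the weekday average stands in for the "patterns" the article mentions.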
Fix those bottlenecks and you can end up with more satisfied patients, and a healthier bottom line.
“If you add up all those tiny delays — how long it takes to see your doctor, how long you’re waiting for your bed to be cleaned, how long you’re waiting to get up to your room — if you measure all of those things and can shorten each one of them, you can start saving a lot of money,” said Dr. Michael Gardam, chief of staff at Humber River Hospital.
According to Gardam, it’s working: patients are now moving through the system faster, allowing the hospital to see an average of 29 more patients a day.
But many big questions still remain about the use of AI in health care.
For machines to learn, they need vast amounts of information. Since that initial data comes from humans, some of that information can be tainted by personal bias — especially if the algorithm isn’t fed a diverse dataset.
“In dermatology, you take a look at a number of different photographs or slides of moles. If you happen to be pale-skinned, some of the machine learning associated with that imagery is great. If you’re darker-skinned, it’s not,” said Dr. Jennifer Gibson, a bioethicist at the University of Toronto.
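Gibson's dermatology example can be demonstrated with a small simulation. This is synthetic data and a deliberately crude one-parameter "model," not any real dermatology tool: when 95 per cent of training images come from one group whose measurements differ from another's, the learned decision rule works for the majority group and roughly coin-flips for the minority.

```python
import random

random.seed(0)

# Hypothetical mole-image dataset: one brightness-like feature per
# sample, label 1 = malignant. The feature distribution shifts with
# skin tone ("shift"), and the training set is 95% pale-skinned.
def make_samples(n, shift):
    samples = []
    for _ in range(n):
        label = random.randint(0, 1)
        feature = random.gauss(shift + label * 2.0, 1.0)
        samples.append((feature, label))
    return samples

train = make_samples(950, shift=0.0) + make_samples(50, shift=3.0)

# "Train" by picking the threshold that best splits the training data,
# which is dominated by the pale-skinned group.
best_threshold = max(
    (sum((f > t) == bool(l) for f, l in train), t)
    for t in [x / 10 for x in range(-20, 60)]
)[1]

def accuracy(samples, threshold):
    return sum((f > threshold) == bool(l) for f, l in samples) / len(samples)

pale_test = make_samples(500, shift=0.0)
dark_test = make_samples(500, shift=3.0)
print(f"pale-skinned accuracy: {accuracy(pale_test, best_threshold):.2f}")
print(f"dark-skinned accuracy: {accuracy(dark_test, best_threshold):.2f}")
```

The threshold lands where it separates the majority group well, so accuracy for the under-represented group collapses toward chance, which is the failure mode Gibson is warning about.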
She’s not against the integration of AI in health care, but warns that anything involving big data, profit-driven companies and health care should be heavily regulated.
“In our hunger for more data, in order to power these tools, we may be introducing a form of surveillance within our society — which is not really the intended goal, but might happen accidentally,” Gibson said.
Gardam doesn’t share those concerns; he believes humans — not machines — will remain in control.
“It’ll still be a long time before we fully accept information coming from a computer system, telling us what the diagnosis is,” he said. “Humans are still going to be reviewing it until we’re very comfortable we’re not missing something.”
Some governments aren’t waiting for that to happen. In the U.S., the FDA recently announced that it is developing a framework for regulating self-learning AI products used in medicine.
In a statement to CBC News, Health Canada said it is also engaging with national, international, industry, academic and government stakeholders “to discuss the challenges and opportunities in regulating current and emerging AI technologies in health care.”