Does artificial intelligence have a place in your doctor’s office?

Is artificial intelligence (AI) intelligent enough to help with a multiple sclerosis (MS) diagnosis? Apparently so, and more.

In the U.K., a project named AssistMS is studying whether AI can be used to detect and highlight changes on brain MRIs. An algorithm called icobrain ms is said to be able to detect lesions in the brain, measure brain volume, and report on how each changes over time. “Neurologists will be able to get a much more accurate idea of how each patient’s disease course is progressing and, in turn, to recommend the best possible treatment for that person,” according to Rachel Horne, an MS patient and the patient and public involvement lead for AssistMS.

A study review published in March in the journal Multiple Sclerosis and Related Disorders predicts that “with advances made in AI, the way we monitor and diagnose our MS patients can change drastically.” These researchers looked at 38 studies covering 5,433 people. Of these, 2,924 were people with MS and the rest were healthy controls. Diagnostic tools that included magnetic resonance imaging, optical coherence tomography, serum and cerebrospinal fluid examinations, and observations of movement functions were compared with AI. In many of the studies reviewed, AI was 100% correct in detecting signs of MS, and many others reported at least 75% accuracy.

Proceed with caution with artificial intelligence

It seems we can use artificial intelligence in the process of diagnosing, and possibly treating, MS, but does that mean we should use it? In the March 27, 2023 issue of JAMA Network, David A. Dorr, MD, Laura Adams, MS, and Peter Embí, MD, suggest we look carefully before we leap.

These doctors caution that AI has “great potential, but early implementations have demonstrated the potential for harm, failure to perform, and furtherance of inequity.” They worry about reviews that, they say, show nearly all algorithms still fail to achieve substantial gains over what a human can do. They worry, too, that in complex AI systems it takes a lot of investigation to determine how the AI model arrived at its results.

A headline on an April 15, 2023 article on Bloomberg.com puts it more bluntly: “We’re Not Ready to Be Diagnosed by ChatGPT.” Opinion columnist Faye Flam writes that some doctors are already experimenting to see whether artificial intelligence can diagnose patients and choose treatments. She worries that “whether this is good or bad hinges on how doctors use it.”

“It may act like it cares about you, but it probably doesn’t. ChatGPT and its ilk are tools that will take great skill to use well — but exactly which skills aren’t yet well understood,” Flam writes.

AI ethics and standards are needed

Dorr, Adams and Embí caution that healthcare ethics, oaths and standards need to be followed. To accomplish this, they propose a Code of Conduct for AI in Health Care. It would outline rules, norms, and expectations of behavior for people who develop, test, validate, implement, and use algorithms. They also suggest that it be comprehensive, easy to understand, and include a way of putting core values into practice at every stage of the algorithmic life cycle.

That’s a pretty tall order to fill. I wonder if anyone has asked ChatGPT to write one.

By the way, have you checked out my book “The Multiple Sclerosis Toolbox”? I promise it was written by a human.

(This post was first published as my column on the MS News Today website.)

(Image by Gerd Altmann from Pixabay)