For all the reasons others are sharing in comments, as well as what I consider to be the most important reason, NO to AI for health decisions!! No! No! No!
All that would do is lock in allopathic medicine, aka "evidence-based medicine" (EBM), as the one and only officially and legally recognized system of health, to the exclusion of other systems that are FAR superior and carry less risk for their patients. Like Ayurveda, a system practiced in India for thousands of years, whose practitioners look down their noses (rightfully) at allopathy as dangerous and ineffective. Or Traditional Chinese Medicine. Or homeopathy - the dominant health system in the US until Rockefeller and his Bernays/Flexner associates made petrochemical-based "medicine" the only one permitted here, even while Rockefeller himself kept his own personal homeopathic doctor at his bedside until his last breath in his mansion.
Naturopathy, herbalism, and all of the holistic health systems are much gentler, safer, and more effective than allopathy at healing pretty much any condition a person experiences, because they focus on healing the person, not treating the condition. Iatrocide, death by doctor, is the second leading cause of death in this nation - allopathic doctors. A medical AI that sorts through only allopathic research, studies, reports, and academic literature, because that is the only system already reduced to the ones and zeros an AI can draw from, will be fundamentally flawed from inception. Granting it unimpeachable status that law and authorities would defer to would be a catastrophe for humanity.
AI cannot capture art. Healing is an art. And healing is by its very nature driven by nature, not man. Energies and systems of the earth, frequencies, elements of our Creator and creation are beyond our ability to comprehend, even though we imagine we have the power to dominate and control nature. Nature defies man's control; allopathy and medical "science" are the product of the dangerous hubris of a man who imagines otherwise. AI is a tool of man's invention that serves to validate man's systems and ideas of power over nature, when reality is the other way around.
Our collapsing and repugnant state of medicine is the inevitable result of the *system* of medicine itself. Allopathy fails at its most fundamental point, the premise it is built on: that man controls nature, with poisons and butchery. The only area of health where it is superior is that butchery itself - cutting and slicing in order to sew up the rips, breaks, and tears our bodies suffer when car accidents, stabbings, and bullets tear our flesh and organs and break our bones. That's it. That's all allopathy does well. If AI can be used to study the most effective ways to heal those injuries, then have at it. For the rest of our health, keep those damn death machines away from us!
Justin, as someone who has spent extensive time in medical AI, and has since Shortliffe, I think this paper and your conversation miss the point. For pre-canned situations like these, any generative AI will look at 1,000,000 priors and come up with an average answer that will likely be close to the truth 88% of the time, which is consonant with what this study shows.
This has been going on for a long time. From AI round one (Shortliffe/Feigenbaum, in the late 1970s) to round two (Weed) to later rounds (I was part of the initial Watson testing which was round six), the "AI will save medicine" cohort has been trying and failing. Now we are in round nine. We will have the same end result.
The generative/LLM/deep-learning engines of today are essentially the same as the first engines -- there are larger training sets and far more iterations, but exactly the same limitations. Generative AI uses neural networks to compute CORRELATIONS -- NOT INTELLIGENCE. There is no intelligence in these tools whatsoever. Because there is no way for these engines to ascertain "truth," every output is a probabilistic stab based on word and data association statistics and on how others wrote about similar situations. There is no ability to trace the logic or to back-trace it; this is foundational to the technology. The results may be impressive and are designed to LOOK like intelligence, but they are not. Examples are legion. Black swans, hallucinations, and the other known issues with this technology will persist. RAG and other tools may limit some of the edge conditions -- but the issues cannot be eliminated.
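To make the "probabilistic stab" concrete, here is a deliberately toy sketch - invented probabilities, nothing like a production LLM - of what sampling from word-association statistics looks like. The point is only that the engine ranks continuations by how often similar text appeared, with no notion of truth anywhere in the loop:

```python
import random

# Toy sketch, not any real model: a "language model" reduced to next-word
# probabilities learned purely from how often words co-occurred in text.
# Nothing below checks whether a continuation is TRUE, only whether it is
# statistically typical of what others have written. (All numbers invented.)
next_word_probs = {
    ("crushing", "chest", "pain"): {"suggests": 0.6, "mimics": 0.3, "excludes": 0.1},
}

def sample_next(context):
    """Pick the next word with probability proportional to co-occurrence weight."""
    words = list(next_word_probs[context])
    weights = list(next_word_probs[context].values())
    return random.choices(words, weights=weights, k=1)[0]

# Two runs of the same prompt can yield different, equally confident-sounding
# continuations; there is no internal notion of which one is correct.
print(sample_next(("crushing", "chest", "pain")))
print(sample_next(("crushing", "chest", "pain")))
```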
This leads to the most important observation about any generative AI -- IT IS UNSAFE TO ASK ANY QUESTION TO WHICH YOU DO NOT ALREADY KNOW THE ANSWER. Anyone who has used these tools has discovered that they can ask a question, recognize that the answer they get is wrong, re-ask the question, and sometimes then get the right answer. But if you are not smart or informed enough to recognize that an answer is wrong, you may proceed with the wrong information -- with, in the case of health care, potentially fatal results. As Weed discovered, this is foundationally why these tools are of little use to practitioners who actually know things. Either you already know the answer, and dealing with the AI's mix of right and wrong is just a waste of time, or you know you do not know enough to discern the right answer, so you refer the patient. Nothing has changed in these measurements for 30 years.
But these pivotal points about generative AI still miss the two key points about medical AI, which are the reasons it has never succeeded and will not succeed as most envision it. First, every patient is their own science experiment (as I always tell my patients). The fact that a group of people with similar symptoms reacted in a particular way to a particular diagnosis or treatment contributes to one's analysis, but it may or may not have anything to do with the patient in front of you. This is a truism in health care, and it is why there are doctors and nurses. (Early in my career I published one of the earliest computer-simulation pieces on open-heart surgery, showing that an "average patient" approach killed 10% of all open-heart patients. Only a per-patient approach could address this, and that approach is now used uniformly worldwide.)
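A rough sketch of what I mean by the average-patient failure mode, with invented numbers purely for illustration (the distribution, threshold, and harm criterion here are hypothetical, not the ones from the original simulation):

```python
import random

# Toy simulation with invented numbers: each patient has an individual
# tolerance for a treatment. A fixed "average patient" protocol harms the
# low-tolerance tail of the population; a per-patient protocol that measures
# tolerance first does not.
random.seed(0)
tolerances = [random.gauss(100, 20) for _ in range(10_000)]  # per-patient tolerance

average_dose = sum(tolerances) / len(tolerances)  # one-size-fits-all protocol
HARM_MARGIN = 0.8  # "harmed" if the fixed dose exceeds tolerance by a wide margin

harmed_by_average = sum(t < HARM_MARGIN * average_dose for t in tolerances)
harmed_by_individual = 0  # per-patient dosing never exceeds measured tolerance

print(f"average-protocol harm rate: {harmed_by_average / len(tolerances):.1%}")
print(f"per-patient harm rate:      {harmed_by_individual / len(tolerances):.1%}")
```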
Second, something that succeeds at the 90% level (harking back to the aforementioned open-heart situation) is permanently unacceptable in health care. Repeated studies have shown that this is about the maximum performance of any generative AI, even setting aside the individual-variability issues. Recently an AI company that claimed it could do better with medical records had to sign a consent agreement with the Texas attorney general over those claims -- it cannot do better.
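To see why a 90% ceiling is so damning in practice, here is a back-of-the-envelope sketch. It assumes, purely for illustration, that each decision in an episode of care is independent and equally likely to be right:

```python
# Back-of-the-envelope arithmetic under a simplifying independence assumption
# (my illustration, not a figure from any study): a 90%-accurate tool degrades
# fast when one episode of care requires several consecutive decisions that
# all have to be right.
PER_DECISION_ACCURACY = 0.90

for decisions in (1, 3, 5, 10):
    p_all_correct = PER_DECISION_ACCURACY ** decisions
    print(f"{decisions:>2} decisions -> {p_all_correct:.0%} chance of zero errors")
```

Even before individual variability enters the picture, a handful of chained decisions pushes the chance of an error-free encounter far below anything a patient would accept.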
So combining the fact that the answer must be known to you before you ask the question (which means the asking is generally a waste of time), the 90% top-out on correctness, and the fact that the individualization of care is the foundation of all medicine, this is just not a tool to be used this way. It might be great for appointment scheduling, or for further frustrating patients looking for help -- but "AI doctors" are not coming via this mechanism.
P.S. There are other variations of things called AI (like Cognitive AI) that might have a chance of making headway since they have a "truth" anchor -- but these are not part of the generative AI conversation. If you are interested in the AI space and its limitations, the following article is illuminating: https://towardsdatascience.com/the-rise-of-cognitive-ai-a29d2b724ccc
This will be the end of proper medical care. No AI will ever violate protocol: they can program the idiot box, and it will give remdesivir regardless of any human intervention, and you too will die. The AI will be protected because it followed protocol - sort of like the Nazi guards who were just following orders, and yet we hunted them down into their 90s to punish them for following protocol. So for me it's a big NO. And I am old enough that I won't have to see it. But you're right, it's coming, and you thought healthcare was bad now? Just wait.
The insurance companies will program the AI. Enough said. You are correct.
One word for why the 92% number won't pan out.
Liability.
Who pays when a patient suffers because of bad advice? Doctors have to buy malpractice insurance. Depending on who owns the premises, liability can also fall on the hospital where the care was delivered.
This is the problem with pretty much all the tech developments we have interfaced with over the last decade. Another example is the self-driving car.
AI theoretically could be useful IF AND ONLY IF it is trained on EVERY piece of data in existence, not on cherry-picked data that the decision-makers declare true by consensus. That applies to medical AI, or any other type. Why? Because of human biases. Just look at some of the early AI in search engines: their answers demonstrate a ridiculous political bias that, not coincidentally, mirrors the political bias of the leading individuals at the organization that programs and "teaches" the AI. For example, who would pay to develop a medical AI? Big Pharma and others like them. Right from the start the AI would be massively biased toward profitable treatments, i.e. drugs and procedures, and would rationalize away non-profitable, free preventative lifestyle changes and off-patent yet effective medications. Look at the Covid fiasco. The "experts" designed hydroxychloroquine trials with known toxic doses, ivermectin was banned, and masks and mRNA injections were mandated, all by people who benefited immensely from such decisions. Should the writings of these people be "teaching" the AI what is true?
The other day I was telling someone about a time I was monitoring an online Christie's auction and was quite taken, for some reason, when Hicks' Peaceable Kingdom came up on the block - a painting I was familiar with. So I googled "Hicks Christie's auction," or something like that, and the first thing on the page was labeled as an AI result. Three lines. First: the auction I had observed, where the painting went for over six million dollars, setting a record for paintings of that nature. Second: I was told the painting was auctioned a few years later and went for $1.75 million or so. Third: another auction, where it went for $4 million plus, and I was told this was a record for the artist. Line one said the painting went for over six million, so how could that be? Same painting, Peaceable Kingdom. And I am to think AI should be trusted with my health? Nope.
Justin, I hear your frustration with my profession. Can't argue with that aspect, or with why you would search for something better. I'm a solo-practice primary care house-call doc in the Chicago area. I spend way too much time with each of my patients, and other than maybe whittling down mile-long medication lists, I can't imagine how I could add AI in the context of my visits. I think my EMR has offered an option. I have reflected on the "medicine" I practice at a given visit, and it seems the greatest value of my visits is the visit itself. I like to think AI can't replicate that.
And for the record, the whole “pandemic” episode has sorely shaken my faith in my profession. Even my own little brother at one point opined that he trusts the mayor of his far-left suburb more than me on those medical issues. Harrumph.
And if we had a medical AI, it would have been programmed to avoid Ivermectin and anything else the government/insurance companies/Pharma did not want us to be aware of.
AI is just a bunch of data and a bunch of lines of code, written and enhanced by folks with an agenda. Maybe a good agenda, maybe a bad agenda, but there is, or will be, an agenda. Having AI take the place of doctors is a cop-out and an attempt to escape liability, IMO. Nothing we say here will affect whatever it is that the big money in health care wishes to do, but seeing AI as anything but a biased research tool is, IMO, foolish indeed. But then, I think talking to chatbots is ludicrous, so AI is not something I look up to - it should be what it is, a tool, not the decision-maker.
AI is not so great in the legal profession either. AI has been tasked with writing legal memoranda for the court, and it made up cases with citations that did not exist. Maybe it will improve with time, but my advice to young lawyers is to do your own research and writing. No one likes being called out in court for incompetence - and that is the least bad outcome.
The AI "won" because the whole procedure was... artificial. "Participants had 60 minutes to review up to six clinical vignettes adapted from established diagnostic reasoning exams." That's not how good Medicine works in real life. A good doctor will not only have adequate knowledge, they will have experience and empathy dealing with people, they will know how to connect emotionally with the patient, inquire about things that seem unrelated, and that will give them insights to make better diagnosis and to actually HEAL people. Unfortunately, I know doctors that actually do seem like your "AI doctors"!
From my experience with doctors, the problem isn't insufficient computing; it's too much time futzing with the keyboard and not enough time engaging with the patient. If doctors actually took the time to listen to the patient and think about what they're being told, it would be different.
Were I a doctor, I would be humiliated to be outperformed by a computer program at a skill I had spent much money and many years trying to learn. I would ask myself what I was doing wrong. But few will.