
The AI Healthpocalypse: We are all Henrietta Lacks now...


Many radiologists fear AI will replace them. Dr. Geoffrey Hinton thought so, and he certainly knows more about AI and computers than the average radiologist. Yet few seem to think about the larger dangers to the patients sitting right in front of them today.

Predictably, everyone has an opinion. There are doctors completely against deploying AI who clearly have not thought about the dangers of doing nothing: if we stay stuck in the status quo, where AI is barely used, health systems will crumble as populations age. But even if AI tools are the potential salvation of medicine, it's hard not to see how the same tools could fail the most vulnerable patients. Which is why I'm in total shock that we, as patients and citizens, are handing the raw material for AI over to private companies whose interest is profit.

The average patient has no idea how the algorithms now encoded into computers influence their health. Worse, they are quite likely to believe that the rapid deployment of AI will only improve everyone's health. The public's overvalued ideas about AI may start with 'automation bias' writ large, a term originally meant to encapsulate “the tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct” (Parasuraman & Riley, 1997). But the rosy picture of technology-as-saviour that companies hire marketers to paint for them is a carefully crafted facade.

I've looked at the real picture from all sides, and I AM SCARED. I've sat in on discussions at a particularly swank law firm over whether caste in India could be written into health insurance algorithms. After all, it doesn't take a genius to see that untouchables live shorter, more miserable, less healthy lives...but it takes an algorithm to let you avoid responsibility for insuring them. The woman who dragged me into that painful meeting presented a public face of wanting to help humanity and change the world for the better, like every other start-up leader. Interestingly, not one single start-up executive I've ever met has talked about wanting to make some cold hard cash and put it in a bank account, presumably a Swiss one.

Making a profit while serving humanity is possible, but AI actually makes it hard to serve ALL of humanity. I think back to medical school, when a classmate with darker skin than mine (I'm black, but he was darker) complained of a dermatologist dismissing his skin problems. I'll never know whether the dermatologist dismissed his problem because of the documented lack of examples of skin disease on dark skin in dermatology textbooks, or just the plain old garden-variety racism of some Euro-American doctors who presume any black patient is a problem. I do know that the statistical pattern with skin cancer, in which black patients on average face delayed diagnosis and worse treatment outcomes, is typical of many diseases. AI might only make such health disparities in racial and ethnic minorities worse, because 'smart AI' programs create a feedback loop that can make them even less likely to diagnose problems in populations where doctors already miss them. The technicalities of how this happens begin with 'deep learning.' We all know deep learning learns from a data set that humans somehow pick and sort, even if only through algorithms written by humans. So if doctors under-diagnose a condition in one group, the training data under-counts it there, the model learns to under-flag it, and its predictions then shape the next round of records.
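A minimal, synthetic sketch of that loop is below. Everything in it is invented for illustration: the group labels, the "clinical" features, and the miss rates come from no real health system, data set, or product; the point is only to show the mechanism the paragraph describes.

```python
# Sketch: when one group's true cases go unrecorded more often, a model trained
# on those records learns to flag that group less. Purely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                    # 0 = majority, 1 = minority (illustrative)
x = rng.normal(size=(n, 3))                      # made-up "clinical" features
true_disease = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 1, n)) > 1.0

# Assumed disparity: clinicians record 90% of true cases in group 0
# but only 50% in group 1, so the training labels under-count group 1.
miss_rate = np.where(group == 1, 0.5, 0.1)
recorded = true_disease & (rng.random(n) > miss_rate)

features = np.c_[x, group]                       # group membership ends up as a model input
model = LogisticRegression().fit(features, recorded)
pred = model.predict(features)

for g in (0, 1):
    ill = (group == g) & true_disease
    print(f"group {g}: model flags {pred[ill].mean():.0%} of truly ill patients")
# If these predictions then decide who gets tested, the next data set
# under-counts group 1 even more: the feedback loop described above.
```

Run it and the model detects a visibly smaller share of truly ill patients in the under-diagnosed group, even though the underlying disease process is identical for both.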

So as I've watched more and more public-private partnerships creep into the healthcare space, I've become rather irate. I seem to read or hear about a new one every day now, and they remind me of watching a naïve, good-looking twenty-something marry a richer, manipulative older person, and who hasn't been to such a wedding a few times? Except in this weird wedding, I, as the patient, never wanted an invitation, yet I'm stuck there in the middle of it all. I'm convinced that, without my consent, my health data has been anonymized, lumped with that of millions of my fellow citizens, and sold to the highest bidder. As of today I've contacted more than one computer programmer who writes code for AI in healthcare based specifically on results from my tax-funded health coverage organization. The code is not open source (I can't see the programming behind the programs). The data sets are not available to me, the mere taxpayer and patient...and I only know about some of the algorithms already running because I'm extremely curious. In some sense we are ALL Henrietta Lacks now...

