Art vs Science: Will future medical students kowtow to Artificial Intelligence Doctors?
In an era where technology reigns supreme, Elli Izrailov considers how medical students come to terms with the presence of Artificial Intelligence.
Artificial Intelligence (AI) has, for the past 80 years, been left mostly in the realms of Science Fiction. Since the early 2010s, this impossible Sci-Fi concept has slowly crept into non-fictitious reality. For better or for worse, AI exists and plays an important role in the year 2020. In Medicine alone, DeepMind (a Google subsidiary) has teamed with numerous organisations to diagnose eye diseases1, predict the complex three-dimensional structures of proteins2, anticipate when a patient is about to have an acute kidney injury3, and soon potentially even detect breast cancer4. IBM’s AI, Watson, has been used to determine whether a diabetic patient is about to have a hypoglycaemic episode, with 90% accuracy in a two-hour window5. Even more impressively, IBM Watson for Oncology has been used to assist oncologists by scouring thousands of research papers to find quality personalised treatments for cancer patients6.
Last year, in an ethics class, we were presented with the following hypothetical: if Watson was able to construct a treatment regimen which expanded on, and in some cases was even better than, what the treating clinicians proposed, why have a doctor at all? That hypothetical precipitated a three-hour debate which shaped many of the ideas expressed in this piece, and had me thinking for weeks.
Will the evolution of AI lead to a revolution in medicine? Will we see AI doctors in a hundred, or even fifty, years from now? Concerningly, will medical students even be needed in that future?
While a large portion of Medicine is the study of pathology, physiology, anatomy, and pharmacology, it takes more than the sum of these to be a good doctor.
There is an Art to being a good doctor. Universities across the globe make it abundantly clear that to be a good doctor, one must be able to empathise, establish rapport, and trust that ‘gut instinct’ to diagnose. Therefore, the answer to the question should be simple: as AI cannot establish rapport, empathise with patients, or embody every other positive aspect that makes a ‘good doctor’, the medical practitioners of the future can rest assured. Even though AI can learn the Science of Medicine, the Art of Medicine should be well beyond its reach. Haha, jks, unless?
Breaking everything down into its most fundamental units, anything which is ‘art’, complex and unexplainable, can be disassembled into ‘science’, simple and explainable.
Take riding a bike: for the newcomer it seems a daunting task. How the hell are you supposed to juggle so many tasks at once while not falling over? You, reading this, would think it’s not too hard; in fact it’s very easy now that you know how. But to the ten-year-old learning to ride without training wheels for the first time, it seems near impossible. For me, my dad got me to simply coast on the bike, trying to hold my balance for ten seconds. Once I could achieve that, riding a bike wasn’t a problem. Every time I got back on my bike, well, it was like riding a bike.
And we as people develop similar skills growing up. We learn how to talk to others, how to behave nicely, how to persuade, coerce, or manipulate, how to deal with people behaving negatively towards us, how to sympathise and empathise, how to support and care. We learn these skills as children until (ideally) we become adults and (ideally) become functioning and (ideally) liked members of society.
So why should it be any different for AI?
AI can learn – there are many different types of learning algorithms for it to utilise. And with millions of interfaces and billions of interactions, why should it sound so unrealistic for AI to one day be able to empathise, sympathise, build rapport and so on? Why should it be so unrealistic to see working AI treat people? Unless it is the people who are the problem?
Another hypothetical situation: you call your GP and try to book an appointment for THAT day. It’s urgent. The receptionist says the GP you’ve been seeing for years is booked out, but there are two alternatives: you can see another GP at the clinic at the end of the day, or you can see an AI doctor right now.
While an adventurous few may think ‘Why the hell not?’ when presented with such a choice, most would prefer to see a real-life doctor. A study by the Union Bank of Switzerland Evidence Group found that of 8,000 people surveyed, only 17% would be comfortable flying on a fully automated plane, with that number rising to 30% when told the trip would be far cheaper. Now, flying a plane thirty thousand feet above the ground and getting an urgent check-up aren’t necessarily comparable; it’s apples and oranges. But just as apples and oranges are both fruit, both cases ultimately come down to how much you would trust AI with your safety.
Some people today just want to see a doctor quickly, get their medical certificate, and present it to work so they get their sick leave. Others may want a speedy prescription or referral, and in those cases, maybe AI doctors would be perfect for them. But when you’re looking for a GP, one whom you see on a regular basis and build a connection with over time, AI just can’t beat the real deal. At university we learn that people want comfort from a person they see regularly. Even if AI could replicate all the steps of reassurance and support to a T, we would know it’s just a replication, a copy. A very fancy copy, but a copy all the same. That knowing would lead to rejection, and any attempt to simulate authenticity could never be accepted as bona fide compassion. So even if AI doctors were a reality, with specific artificial intelligences for treating patients, their success would not be guaranteed. Which means that, perhaps, there is still a place for medical students after all?
Perhaps in the future there will be room for AI in Medicine. Perhaps artificial intelligence can find its place serving those who are in need and otherwise cannot access care under the flaws of the present system. In Australia at the very least, AI may help close the wide disparities in health between rural and remote parts of the country and their larger metropolitan counterparts.
And with regard to AI being rejected by the system as a whole, it must be noted that progressive ideas have continually seeped into society over the centuries. While we may initially see a denunciation of AI, it will eventually, inevitably, find a place in the world. For now, medical students can rest assured they can still wake up at the crack of dawn to head to ward rounds – the near future will still have a place for them.
Playing devil’s advocate, I’ve thought of a number of ways that AI could be used in the future – because let’s face it, with coronavirus taking me off placement, I’ve had a lot of spare time to think:
1) AI can provide medical certificates for sick leave for mild cases of cold, flu, or stomach bug.
2) AI can be run as a bulk-billing service, acting as a cheaper alternative.
3) AI, with its spare capacity, can act as an urgent point of call when human doctors are occupied, and triage accordingly.
4) AI would be more likely to fully abide by confidentiality, and less likely to unintentionally disclose private information in a consultation.
5) AI would be less affected by personal problems, fatigue, and emotional taxation, and therefore less likely to make a mistake.
6) AI could be better at diagnosing and coming up with treatment plans?
Over time, the normalisation of AI in Medicine will become just that: normal. It surely has its potential uses, and it may even do a better job than humans in the diagnosis and treatment of people. It may take some time before AI is accepted by the community, but why shouldn’t it be? Will doctors be COMPLETELY replaced by AI? Unlikely. Will there be a place for AI doctors … today? Certainly not. Tomorrow? A slim chance. But the potential is clear.
Elli (he/him/his) is currently a fourth year postgraduate medical student at Monash University, who likes to stay in and read, and is feeling very grateful that now he is being encouraged to do so #thankscoronacrisis
Support Et Cetera
Et Cetera is maintained by unpaid student editors and volunteers. Despite their hard work, there are ongoing costs for critical website maintenance and communications. Et Cetera is not linked to any specific university, and as such is unable to access funding in the way most campus publications are.
Given our primary audience is university students, we appreciate not all of our readers are in a position to contribute financially.
This is why Et Cetera's survival relies on readers like you, who have enjoyed, or been challenged by, our work. We appreciate every dollar that is donated.
Please consider supporting us via our PayPal, by clicking the button below: