Written by Clay Smith
Why Read This?
Why feature artificial intelligence? My goal with this post is to inspire you, and my hope is that it will be a catalyst for changing our patients’ lives for the better. With that goal in mind, let’s dig in. We will cover the basics of how AI works, current medical uses, our role as the human, ethical pitfalls, and how AI will help us and affect our patients in the ED.
The Cognitive Revolution
You’ve heard of the Industrial Revolution. Through machines, we now have superhuman strength. Artificial intelligence is considered by some to mark the Cognitive Revolution. This week we covered two articles that dealt with artificial intelligence; one used machine learning, the other deep learning with an artificial neural network. The Cognitive Revolution has reached the ED. As I thought more about this, I started to wonder how much AI will change our practice in the near future and ran across this excellent article in the New Yorker that got me thinking – A.I. vs M.D. First of all, I throw around terms like AI, machine learning, deep learning, and artificial neural networks like I understand them. But do I?
What is A.I.?
If you’re like me, you need a simple explanation of artificial intelligence, machine learning, and deep learning.
Artificial intelligence – a computer that can mimic human thinking and learning. Example: Autonomous cars use visual input to brake and steer and use GPS input to navigate.
Machine learning – a computer program that can take raw data, analyze it, and use various algorithms to produce a prediction or an answer. The computer can chew through the data, supervised or unsupervised, until it spits out the best answer. Example: Netflix has an algorithm; when I watch Napoleon Dynamite, it predicts I will like Nacho Libre, which is true!
Deep learning – a subset of machine learning that uses layers of algorithms in an artificial neural network to reach a conclusion or prediction, which allows computers to tackle even more complex problems. The gist is that a mass of raw data is input and analyzed, and the program refines itself to optimize a specific output. The “learning” happens when the computer gets the wrong output from the raw data and uses that error to refine the process until it gets it right. What sets it apart from simpler machine learning is that it uses its mistakes to improve. Example: a computer can take an image of a nevus and determine whether it is a malignant melanoma after being “trained” on >100,000 prior images of various skin lesions.
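To make “machine learning” concrete, here is a minimal sketch of a supervised learner: a 1-nearest-neighbor classifier labeling fruit from two measurements. This is the simplest possible supervised method, not what Netflix or any real product uses, and every number here is invented for illustration.

```python
# Toy supervised machine learning: classify a fruit as "apple" or
# "orange" from two made-up features (weight in grams, surface
# roughness on a 0-1 scale). All numbers are invented.
import numpy as np

# Labeled training data: each row is (weight_g, roughness).
X_train = np.array([
    [150, 0.10], [160, 0.15], [140, 0.12],   # apples: smoother skin
    [170, 0.80], [180, 0.85], [165, 0.75],   # oranges: pitted skin
])
y_train = ["apple", "apple", "apple", "orange", "orange", "orange"]

def predict(x):
    """1-nearest-neighbor: label a new fruit by its closest labeled example."""
    distances = np.linalg.norm(X_train - np.array(x), axis=1)
    return y_train[int(np.argmin(distances))]

print(predict([155, 0.11]))  # smooth, mid-weight -> "apple"
print(predict([175, 0.82]))  # pitted, heavier    -> "orange"
```

The “learning” here is trivial (memorize the labeled examples), but the workflow is the same one the definitions above describe: raw data in, algorithm applied, prediction out.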
How Does It Work?
There are different kinds of “thinking” computers. IBM’s Deep Blue was programmed with chess knowledge and evaluated hundreds of millions of candidate positions per second, always picking the best move it could find. This worked well enough to beat world champion Garry Kasparov. But this is intelligence by brute force; it’s “if this, then that” to the max.
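Here is a toy version of that brute-force style of “intelligence.” Deep Blue’s actual engine was vastly more sophisticated, so this is only a sketch of the idea, using a tiny game of Nim: players alternately take 1–3 stones, and whoever takes the last stone wins. The program simply tries every possible line of play.

```python
# Brute-force game search in the spirit of "if this, then that" to the
# max: exhaustively evaluate every line of play in a tiny game of Nim.
# Players alternately take 1-3 stones; whoever takes the last one wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # Try every legal move; if any leaves the opponent in a losing
    # position, the current player can force a win.
    return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick a move that leaves the opponent in a losing position, if any."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # no winning move exists; take one stone and hope

print(can_win(4))    # False: every move hands the opponent a win
print(best_move(7))  # 3: leaving 4 stones puts the opponent in a lost position
```

There is no “learning” here at all: the program plays perfectly because the game is small enough to search completely, which is exactly why this approach does not scale to messy real-world problems.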
An artificial neural network (ANN) is different. It is literally wired with interconnections, like a brain, but much simpler.
As an example, let’s say we want a machine to classify apple vs. orange. At first, the raw image pixels would be input (lots of data): orange vs. red, round edges, smooth vs. pitted, small stem vs. little green nub, etc. The ANN would then determine the most important differences, pool the best data, and — voila! — render an output: apple or orange. In actual deep learning terms, these steps (layers) are called convolution, activation, pooling, and fully connected. The computer would first be trained with labeled images. The power of an ANN is that wrong outputs (saying an apple was an orange) are back-propagated to the earlier nodes in the ANN to refine them by changing the weights and biases at each node. For example, let’s say an early ANN node weighted “round” heavily in determining that the image was an orange, but in fact it was an apple. Via back-propagation, the ANN would train that node to weight “round” as less important the next time around. This process is repeated over and over until the ANN can predict apple vs. orange reliably, even with unlabeled images.
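The training loop above can be sketched in a few lines of NumPy. The real melanoma work used deep convolutional networks on raw images; this is only a minimal back-propagation demo on the toy apple-vs-orange task, with two invented input features (redness and smoothness, each 0–1) standing in for a whole image.

```python
# A minimal artificial neural network trained by back-propagation.
# Output near 1 means "apple", near 0 means "orange". The features
# and all numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled training set: apples here are red and smooth, oranges are not.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.85, 0.7],   # apples
              [0.2, 0.1], [0.3, 0.2], [0.1, 0.3]])   # oranges
y = np.array([[1.0], [1.0], [1.0], [0.0], [0.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 3 units; weights and biases live at each "node".
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

for _ in range(10000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the error back through the layers and nudge
    # every weight and bias to shrink it (back-propagation).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.2 * h.T @ d_out;  b2 -= 0.2 * d_out.sum(axis=0)
    W1 -= 0.2 * X.T @ d_h;    b1 -= 0.2 * d_h.sum(axis=0)

# After training, classify an unlabeled fruit.
new_fruit = np.array([0.95, 0.85])   # very red, very smooth
score = sigmoid(sigmoid(new_fruit @ W1 + b1) @ W2 + b2)[0]
label = "apple" if score > 0.5 else "orange"
print(label)
```

Notice that the “nodes that weighted ‘round’ too heavily” story from the paragraph above is exactly what the two gradient lines do: each wrong output shifts the weights that contributed most to the mistake.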
The Google ANN for the melanoma study is open source, by the way, if you want to train your own image classifier.
Current Medical Uses
There are literally thousands of current medical applications for AI. Here are some highlights.
- In the Human Dx project, a clinician can upload information about a case and get back a range of diagnostic possibilities and treatment options. See this article in AAMC.
- AI is used for cardiac risk prediction, microscopic leukemia diagnosis, radiology interpretation, genomics, mental health, surgery, and more. See this article in Futurism.
- Skin cancer diagnosis was studied, as mentioned above, and the computer beat dermatologists at diagnosing melanoma.
Role of the Human Doctor
All of this raises the question: if an AI image processor can make a melanoma diagnosis or other dermatologic diagnosis better than a doctor, what use is the doctor? The best way I found to describe our role in the era of AI was in the New Yorker article mentioned in the introduction, in an interview with the senior author of the AI melanoma study, Dr. Thrun:
“‘Did the phone replace the human voice? No, the phone is an augmentation device. The cognitive revolution will allow computers to amplify the capacity of the human mind in the same manner. Just as machines made human muscles a thousand times stronger, machines will make the human brain a thousand times more powerful.’ Thrun insists that these deep-learning devices will not replace dermatologists and radiologists. They will augment the professionals, offering them expertise and assistance.”
- The other thing an AI machine can’t do is answer the “why” questions. You have that rash – but why? Was there a new soap? Were you under stress? Were you swimming in chlorine? As the author watched a dermatologist at work, “The most powerful element… was…not mastering the facts of the case, or perceiving the patterns they formed. It lay in yet a third realm of knowledge: knowing why… The algorithm can solve a case. It cannot build a case.”
- It takes a human doctor to make sense of the data from AI and decide how to act on it. It also takes a human touch to assuage fear and give courage and hope. AI can’t cry with a woman whose husband of 55 years just died. It can’t talk nervous parents into a full sepsis workup on their ill, febrile newborn. It can’t talk an anxious patient out of a CT that is clearly not indicated. A computer can’t roll with a wriggling 3-year-old while sewing a topically anesthetized facial laceration, distracting them with Paw Patrol and blowing bubbles. BAM!! Take that, AI!
Ethical Pitfalls
One of the biggest concerns about AI is that the machines must be trained. The problem is, much of the training is done with pre-existing data, and sometimes the bulk of that data has been collected on white men. Could we be training machines to help white men, while women and other minorities benefit less? Could racial bias creep into the Human Dx project? Could large genomic databases help minorities less because they are not represented in the DNA bank? Could machine learning algorithms be programmed to cheat (think of the VW emissions scandal) and make quality metrics look better than reality? Could AI clinical decision support be skewed to favor one drug manufacturer over another? These ethical issues are one reason I chose to feature this topic. Ignorance is not an option here. This is happening.
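A tiny simulation makes the training-data concern concrete. Suppose a “diagnostic” cutoff for some lab value is learned entirely from one group, and a second, underrepresented group has a different healthy baseline. All numbers below are invented purely for illustration; real bias in medical AI is subtler, but the mechanism is the same.

```python
# Toy illustration of training-data bias: a diagnostic cutoff learned
# from one group transfers poorly to a group with a different baseline.
# All numbers are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)

def make_group(baseline, n=500):
    """Simulate a lab value; disease shifts it up by 2 units."""
    healthy = rng.normal(baseline, 1.0, n)
    diseased = rng.normal(baseline + 2.0, 1.0, n)
    values = np.concatenate([healthy, diseased])
    labels = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = diseased
    return values, labels

# Group A dominates the training data.
train_values, train_labels = make_group(baseline=5.0)

# "Train" the simplest possible model: one decision threshold halfway
# between the healthy and diseased means seen in training.
cutoff = (train_values[train_labels == 0].mean()
          + train_values[train_labels == 1].mean()) / 2

def accuracy(values, labels):
    return ((values > cutoff) == labels).mean()

vals_a, labs_a = make_group(baseline=5.0)  # looks like the training data
vals_b, labs_b = make_group(baseline=7.0)  # underrepresented group

print(f"group A accuracy: {accuracy(vals_a, labs_a):.2f}")  # high
print(f"group B accuracy: {accuracy(vals_b, labs_b):.2f}")  # much worse
```

The model isn’t malicious; it simply never saw group B, so nearly every healthy member of group B lands above the learned cutoff and gets flagged as diseased.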
A NEJM Perspective piece notes, “Remaining ignorant about the construction of machine-learning systems or allowing them to be constructed as black boxes could lead to ethically problematic outcomes.” These important issues are outlined further in that piece and in a Forbes article.
AI – Effects on the ED
How might artificial intelligence affect our experience in the ED?
Already, a British app is reducing ED visits by using AI to screen patients before they come in, directing them to the appropriate venue for care based on the information they enter.
Vanderbilt made a program for Alexa to help patients determine if they have the flu.
And here, I let my imagination run wild. Some of these are already happening, are in production, or are possible based on my reading. Others came from my own brainstorming, though I doubt any of them are truly new. Anyway, here is my list of ways AI and other smart technology will inevitably change the ED.
- Predicting ICU admits and preventing need for rapid response
- Predicting stroke/TIA risk
- Predicting cardiac outcomes/ chest pain risk stratification
- Seeing rare ECG patterns (e.g., de Winter, Brugada)
- PE risk prediction
- Reading CTs for us
- Alexa-like listening and writing our note for us (Yay! Someone please develop this…seriously)
- Listening for “red flags” in the history we may have missed
- Monitoring hand washing
- Visually scanning a patient for subtle injury
- “Watching” and “listening” at the front desk and learning to triage
- Infrared cameras to find bleeding or prevent pressure sores
- Remotely monitoring elderly and preventing falls
- Predicting trauma, sepsis, and other mortality
- Determining “futility” or need for palliative care
- Burn TBSA assessment
- Robotic IV starts
- Robotic ETT
- Early warning of deterioration on monitors
- Transporting patients with smart beds that drive themselves
- Medication safety, drug interactions, cross checking genomics with new prescriptions
- Febrile infant risk stratification
- ED metrics, staffing, wait time and queueing predictions
- Video/audio waiting room monitoring and surveillance for decompensating patients
- Patient facial recognition upon arrival
- Virtual personal AI-assisted apps to aid in diagnosis
- Refining prehospital STEMI for EMS
- Prehospital “Google glass” stroke assessment by the computer vs teleneurology
- Disaster triage and sorting
- Online EMS redirection by acuity/mechanism/injury/vitals/census/specialty
- Self check-out for flu/strep similar to a grocery store kiosk
Becoming More “Intelligent”
There is virtually no aspect of Emergency Department care in which AI could not play a pivotal role in the future. We have all been using AI for a while. Examples include: Google Maps, ride-sharing apps, commercial flights (the plane’s autopilot), email spam filters, mobile deposits, credit card fraud prevention, Facebook, Snapchat, Instagram, online shopping, voice-to-text, Alexa, and on it goes. These came from this article. The great thing is, the best AI works in the background, and you don’t realize you’re using it. It just makes your life better. As with all good things, there is a dark underbelly. Elon Musk has been sounding the alarm over AI for years. And the same holds true for AI in medicine. Ethical issues abound.
But would we go back to the days before penicillin, MRI, or videolaryngoscopes? I don’t think so. The tsunami of AI in medicine has already hit. We need to understand it if we are to use it for the good of people. And we need to get ahead of it to safeguard against the ethical pitfalls. My sincere hope is that this article will help one reader to think of something new — to look back on this weekend’s feature article as a catalyst that propelled them forward for the benefit of patients all over the world. Probably that’s too lofty, but maybe… My brainstorm in Nashville came up with dozens of applications for AI in about 30 minutes of thinking. What might you come up with?