
Will the real Slim Shady please stand up: When AI in medicine goes seriously wrong, who is responsible?

Technology, some might argue, has advanced to the point where it may have become reckless. A robot performing a doctor’s job may seem to be the safer choice. Assistive devices do make surgery easier, more precise, and safer, that is, when they work the way we want them to. According to the BBC, surgical robots were linked to 144 patient deaths and more than 1,000 injuries over a 14-year period in the US. That may not seem like a big number, but I believe that every life in this world matters. The way that A.I. is gradually clawing its way through healthcare raises a grave question: who should be responsible when these devices malfunction?



There’s no easy answer, says Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University. Errors are possible at any point in the process of applying AI in healthcare, from planning to data collection to implementation. "This is a big mess," says Lin. "It’s not clear who would be responsible."

AI applications in medicine often involve devices working alongside humans, which makes it even harder to determine who would be liable if the device were to mess up, because the details of why an error or accident happens really matter.

An important example of an A.I.-powered system currently being used in this field is IBM’s Watson. Watson is being used to carry out various tasks in healthcare that were previously performed only by doctors. Let’s explore to what extent robots can replace doctors.

What can these devices accomplish and what can they destroy?

A.I. is standing in for doctors by reading X-rays and scans and diagnosing patients’ symptoms. A.I. is also being utilised in psychology, where robots are programmed to counsel patients. Devices have also been designed to perform sensitive surgical procedures. We can therefore safely say that the role played by A.I. in medicine is only going to grow in the years ahead.

Because medicine is not an exact science, it is possible for Watson or any other device to make errors that eventually result in injury to patients. Another question arises here: is it ethical to let a robot do a human’s job when you know there’s a possibility that the robot could mess it up? You’ll find the answer in the next few paragraphs.

According to the Open Access Text article titled "Some ethical and legal consequences of the application of artificial intelligence in the field of medicine,"

"A specific example of A.I. Driven Diagnosis saving life was reported from Japan, where doctors saved a woman’s life using A.I. to diagnose a rare form of cancer that they had not detected after many tests on their patient. The 60-year-old woman had not responded to treatment for a cancer diagnosis by her doctors. In desperation, they supplied an A.I. system with huge volumes of clinical cancer case data. The system took 10 minutes to diagnose a rare form of leukaemia that had eluded her doctors, who had relied on standard tests. "

But, of course, not everything is well in the wonderland that is the intersection of robots and medicine. P.S. I would suggest listening to Wonderland by Taylor Swift. I love that song, especially the bridge!

A surgical robot called the Da Vinci Surgical System has been the centre of attention, and not for the right reasons. It has been the subject of many lawsuits. The injuries reported to date include sepsis, vascular injuries, burns, and excessive bleeding.

These are the two sides of the coin that is A.I. in healthcare. A human cannot mimic the stability of a robotic hand, however hard we try. Whenever you introduce technology into a field that relies partly on human emotion, you’re bound to see changes you may not like. You’re also bound to see changes that make you happy, like the efficiency of a robot.

In a normal doctor-patient scenario, if something happens to the patient, the doctor is fully responsible. But when we introduce a machine into the mix, there are now as many as four potentially liable parties:

  • The patient who consented to having the surgery performed by the robot in the first place (if the patient went along with a robotic device performing the surgery, it’s arguable that he or she should be held at least partly responsible)

  • The human surgeon who oversees the robot

  • The company that manufactures the robot

  • The specific engineer who designed it

In the case of an injury, it becomes really hard to pin all the blame on one party, as everyone involved influenced the end result in some way. Reducing the mistakes made by robotic devices should be a priority for the healthcare institutions that use them. Robots are certainly a viable option, but there should always be protocols in place for when something goes wrong.

A dilemma indeed

It’s very unclear what the future holds for A.I. in healthcare. Who knows, maybe in a few years, building medical robots will be a career choice in its own right. Legal systems around the world will soon face radical decisions relating to A.I. systems. Everything has its pros and cons (you should know that by now).

We’ll never get rid of all the problems surrounding the use of A.I. in different fields, but we can try to make them negligible. A.I., like all technology, works very differently in the lab than in the real world.

But people’s continued wariness about the use of A.I. in healthcare is justified when there is no specific entity that takes accountability for its mistakes. "Medicine is still evolving. It’s part art and part science," says Lin. "You need both technology and humans to respond effectively."

But would you ever trust a robot to perform surgery on you?

