Saturday, September 23, 2023

How AI can be used to make life and death decisions


Shreya Christina

By the 2000s, an algorithm had been developed in the US to identify recipients of donated kidneys. But some people weren’t happy with the way the algorithm was designed. In 2007, Clive Grawe, a Los Angeles kidney transplant candidate, told a room full of medical experts that their algorithm was biased against older people like him. The algorithm was designed to allocate kidneys in a way that maximized the number of life years saved. This benefited younger, wealthier and whiter patients, Grawe and other patients argued.

Such bias in algorithms is common. What is less common is for the designers of those algorithms to agree that there is a problem. After years of consulting laypeople like Grawe, the designers found a less biased way to maximize the number of life-years saved, in part by considering general health alongside age. One important change was that donors, who are often people who died young, would no longer be matched only with recipients in the same age category. Some of those kidneys could now go to older people if they were otherwise healthy. As with Scribner’s committee, the algorithm still wouldn’t make decisions that everyone would agree with. But the process by which it was developed is harder to criticize.
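The allocation logic described above can be sketched in code. This is a toy illustration only: the field names, the health score, and the scoring formula are all assumptions made for the example, not the actual US allocation algorithm. It shows how adding a general-health term lets an older but otherwise healthy candidate outrank a younger, sicker one.

```python
# Toy sketch of a "maximize expected life-years saved" allocation score.
# All fields, weights, and the formula are illustrative assumptions,
# NOT the real US kidney-allocation algorithm.

from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    age: int
    health_score: float  # hypothetical measure, 0.0 (poor) to 1.0 (excellent)


def expected_life_years(candidate: Candidate, life_expectancy: int = 85) -> float:
    """Estimate post-transplant life-years, scaled by general health."""
    remaining = max(life_expectancy - candidate.age, 0)
    return remaining * candidate.health_score


def rank_candidates(candidates: list[Candidate]) -> list[Candidate]:
    """Order candidates by health-adjusted expected life-years, best first."""
    return sorted(candidates, key=expected_life_years, reverse=True)


candidates = [
    Candidate("A", age=30, health_score=0.3),  # younger, in poor health
    Candidate("B", age=65, health_score=0.9),  # older, otherwise healthy
]
for c in rank_candidates(candidates):
    print(c.name, round(expected_life_years(c), 1))
```

Under a purely age-based rule, candidate A would always win; once general health enters the score, candidate B ranks first here, which is the kind of shift the revised algorithm made possible.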

“I didn’t want to sit there and give the injection. If you want it, press the button.”

Philip Nitschke

Nitschke also asks difficult questions.

Nitschke, a former physician who burned his medical license after a years-long legal dispute with the Australian Medical Board, stands out as the first person to legally administer a voluntary lethal injection to another human being. In the nine months between July 1996, when the Northern Territory of Australia passed a law legalizing euthanasia, and March 1997, when Australia’s federal government overturned it, Nitschke helped four of his patients end their lives.

The first, a 66-year-old carpenter named Bob Dent, who had suffered from prostate cancer for five years, explained his decision in an open letter: “If I kept a companion animal in the same condition I am in, I would be prosecuted.”

Nitschke wanted to support his patients’ decisions. Still, he was uncomfortable with the role they were asking him to play. So he made a machine to take his place. “I didn’t want to sit there and give the injection,” he says. “If you want it, press the button.”

The machine wasn’t much to look at: it was essentially a laptop connected to a syringe. But it served its purpose. The Sarco is a descendant of that original device, which was later acquired by the Science Museum in London. Nitschke hopes an algorithm that can perform a psychiatric assessment will be the next step.

But there’s a good chance that hope will be dashed. Creating a program that can assess a person’s mental health is an unsolved and controversial problem. As Nitschke himself points out, doctors disagree on what it means for someone of sound mind to choose to die. “You can get a dozen different answers from a dozen different psychiatrists,” he says. In other words, there is no common foundation on which an algorithm could be built.
