For more than a decade, researchers have wondered whether artificial intelligence could help predict what incapacitated patients might want when doctors must make life-or-death decisions on their behalf.
It remains one of the most high-stakes questions in health care AI today. But as AI improves, some experts increasingly see it as inevitable that digital “clones” of patients could one day aid family members, doctors, and ethics boards in making end-of-life decisions that are aligned with a patient’s values and goals.
Ars spoke with experts conducting or closely monitoring this research who confirmed that no hospital has yet deployed so-called “AI surrogates.” But AI researcher Muhammad Aurangzeb Ahmad is aiming to change that, taking the first steps toward piloting AI surrogates at a US medical facility.
Sure, if it has power of attorney, which is a dumb thing to grant AI.
Before even considering that, I’d need to know it has my best interests at heart. I don’t trust AI to serve me over the corporations that made it, but I’m a bit older. Younger generations trust AI more. I don’t think that’s great, but it’s their decision. It probably couldn’t represent me, though.
My other question is, if there’s a chance it would go against my wishes, is it really a copy of me? Shouldn’t a copy of me do what I want? That’s the real test, I think.
I’d kind of want to meet this AI. But even given all the advancements in AI, how can it be a perfect copy if it doesn’t have all my memories and know all my secrets?
It's not possible to grant power of attorney to anyone or anything that doesn't meet the standard to sign a contract, e.g. they must be of sound mind. An LLM doesn't have a mind to be sound of in the first place, so it can't meet that standard.