‘Fredbot’ is one example of a technology known as chatbots of the dead: chatbots designed to speak in the voice of specific deceased people. Other examples are plentiful: in 2016, Eugenia Kuyda built a chatbot from the text messages of her friend Roman Mazurenko, who was killed in a traffic accident. The first Roman Bot, like Fredbot, was selective, replying only with things Mazurenko had actually written, but later versions were generative, meaning they generated novel responses that reflected Mazurenko’s voice. In 2020, the musician and artist Laurie Anderson used a corpus of writing and lyrics from her late husband, the Velvet Underground’s co-founder Lou Reed, to create a generative program she interacted with as a creative collaborator. And in 2021, the journalist James Vlahos launched HereAfter AI, an app anyone can use to create interactive chatbots, called ‘life story avatars’, that are based on loved ones’ memories. Today, enterprises in the business of ‘reinventing remembrance’ abound: Life Story AI, Project Infinite Life, Project December – the list goes on.
These apps and algorithms are part of a growing class of technologies that marry artificial intelligence (AI) with the data that people leave behind. These technologies will become more sophisticated and accessible as large language models grow in scale and popularity and as personal data accumulates in the seeming permanence of the cloud. To some, chatbots of the dead are useful tools that can help us grieve, remember and reflect on those we’ve lost. To others, they are dehumanising technologies that conjure a dystopian world. They raise ethical questions about consent, ownership, memory and historical accuracy: who should be allowed to create, control or profit from these representations? How do we understand chatbots that seem to misrepresent the past? But for us, the deepest concerns relate to how these bots might affect our relationship to the dead. Are they artificial replacements that merely paper over our grief? Or is there something distinctively valuable about chatting with a simulation of the dead?
Although chatbots have been around for a long time, chatbots of the dead are a relatively new innovation made possible by recent advances in programming techniques and the proliferation of personal data. On a basic level, these chatbots are created by combining machine learning with personal writing – text messages, emails, letters, journals – that reflects a person’s distinctive diction, syntax, attitudes and quirks. There are various ways to achieve this combination. One resource-intensive method involves creating a new chatbot by training a language model on someone’s personal writing. A technically simpler method involves instructing a pretrained chatbot, like ChatGPT, to utilise personal data that is inserted into the context window of a conversation, as in the sketch below. Both methods enable a chatbot to speak in ways that resemble a dead person: by ‘selectively’ outputting statements the person actually wrote, by ‘generatively’ producing novel statements that bear some resemblance to statements the person actually wrote, or by some combination of the two.
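The context-window method can be made concrete with a minimal sketch. The following Python snippet assumes the OpenAI Python client (v1.x) and an API key set in the environment; the model name, prompt wording and writing samples are illustrative placeholders, not data from any of the projects described here.

```python
# A minimal sketch of the context-window method: rather than retraining
# a model, samples of the deceased person's writing are placed directly
# in the prompt so a pretrained chatbot can imitate their voice.
# Assumes the OpenAI Python client (v1.x); all samples below are
# hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative stand-ins for a real corpus of texts, emails or journals
writing_samples = [
    "Running late again -- blame the bridge traffic, not me!",
    "Honestly, the garden is the only meeting I never cancel.",
]

system_prompt = (
    "You are simulating the voice of a specific person. Imitate the "
    "diction, syntax, attitudes and quirks of these writing samples:\n\n"
    + "\n---\n".join(writing_samples)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How have you been?"},
    ],
)
print(response.choices[0].message.content)
```

The resource-intensive alternative would instead fine-tune a model on the full corpus; the prompt-based approach is cheaper and simpler, but the imitation runs only as deep as the samples that fit in the context window.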
Chatbots can be used on their own or combined with other forms of AI, such as voice cloning and deepfakes, to create interactive representations of the dead. When provided with the right data, many companies and platforms now have the technical capacity to generate a conversational AI version of your deceased loved one. In the future, these chatbots will likely become more common and sophisticated, involving much more than just text. Like human mediums and Ouija boards, these bots appear to meet one of our deepest desires: to speak with the dead once again.
Many critics view this technological endeavour as an especially abject form of death denial. A common objection to these bots is that emotionally vulnerable users may become so invested in their interactions that they will conflate their chatbot with the deceased person, or lose sight of the fact that the person is gone. As the philosopher Patrick Stokes puts it in Digital Souls (2021), we may ‘become so used to avatars of the dead that we accept and treat them as if they’re the dead themselves.’ This sort of worry suggests, as Weizenbaum feared, that the salutary potential of chatbots rests on delusional thinking.
Another worry relates to chatbots’ lack of inner lives. Critics like the philosopher Shannon Vallor, in The AI Mirror (2024), argue that there is something defective about emotional bonds with entities that cannot reciprocate affection or interest, about love that is kept alive by ‘the economic rationality of exchange’ rather than a more precarious ‘union of loving feeling and action’. And these emotionally one-sided relationships create a risk of over-reliance, social isolation and exploitation. This risk is especially stark given that chatbots of the dead may be produced by companies with a financial incentive to manipulate users and maximise engagement. A chatbot of the dead that is highly monetised (unlock your chatbot’s ‘caring’ traits for a small fee!) or gamified (chat every day to level up!) could be a dangerous tool in the hands of an unscrupulous corporation.
A discerning user should view chatbots much as a theatregoer views an actor’s performance. Depending on how they are designed, chatbots can represent a person in many ways. The assumption that a chatbot delivers – or seeks to deliver – an authoritative replication of a deceased person makes as much sense as the assumption that an actor’s portrayal of a historical figure in a drama is the sole faithful depiction of that figure. Just as the inspiration for a historical character may come from various sources, our personal data flows from different aspects of our identities. People do not really speak in a single voice: most people sound different on social media than they do in text messages, for example. There is no one way to design a chatbot ‘actor’ because there is no best or definitive perspective on a person, no best or definitive fictional world in which to encounter someone’s legacy. There are countless useful, informative, intriguing, funny, strange, beautiful perspectives that a chatbot ‘actor’ might stage, just as there are countless ways a human actor can play a role, a writer compose a memoir, or a portraitist paint a picture.