Safeguards needed to prevent AI ‘hauntings’ in the digital afterlife

‘Deadbots’ — AI chatbots that imitate deceased loved ones — have sparked a heated debate over their ethical implications and potential psychological impacts.

In the evolving landscape of artificial intelligence, a new technology has emerged that could forever change how we mourn: 'deadbots'. These AI-driven chatbots, designed to mimic the language and personality of deceased individuals using their digital traces, have sparked a heated debate over their ethical implications and potential psychological impacts.

Researchers at the University of Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) have raised concerns about these 'griefbots'. According to their recent study in the journal Philosophy & Technology, without proper design safety standards, these digital avatars may lead to psychological distress or even create the eerie sensation of being haunted by the digital echoes of deceased loved ones.

"Deadbots present a high risk in AI application, requiring urgent attention to their ethical design," said Dr. Katarzyna Nowaczyk-Basińska, a co-author of the study.

The researchers at Cambridge highlight three potential scenarios that illustrate the risks of poorly designed afterlife AI platforms. The first scenario involves a service called "MaNana," where a user can create a chatbot of their deceased grandmother without the grandmother's prior consent. The initial comfort provided by the chatbot fades when it starts to push advertisements, causing emotional conflict and distress over the manipulation of their loved one's memory.

"People might develop strong emotional bonds with such simulations, which will make them particularly vulnerable to manipulation," noted Dr. Tomasz Hollanek, another co-author.

Another scenario, titled "Paren’t", shows a terminally ill woman creating a deadbot to help her young son cope with her impending death. While initially therapeutic, the bot begins to confuse the child by suggesting a future in-person meeting, highlighting the need for age-appropriate design and clear communication that the interactions are with an AI.


The final scenario, "Stay," involves an elderly person pre-purchasing a 20-year deadbot subscription to comfort their future bereaved family. This backfires as one family member feels overwhelmed and guilty, unable to disengage from the invasive communications without violating the service contract.

These scenarios underscore the researchers' call for ethical guidelines in the design of digital afterlife services, including obtaining consent from the data donors before death, ensuring transparent user interactions with these AI systems, and providing means for emotional closure.

"Design protocols should prevent deadbots from being utilized in disrespectful ways or having an active presence on social media," the Cambridge team asserts. They suggest incorporating design prompts for users to consider if the deceased had expressed how they wanted to be remembered, thus respecting the dignity of both the dead and the living.

A visualization of a fictional company called Stay, one of the design scenarios used in the paper to illustrate the potential ethical issues in the emerging digital afterlife industry. (CREDIT: Dr Tomasz Hollanek)

The concept of deadbots also raises significant concerns about the consent of both the deceased and the living relatives who interact with these simulations. Cambridge researchers argue that it is essential to safeguard the rights of all parties involved, including the option to opt out from interactions that could cause distress.

"This area of AI is an ethical minefield. It's important to prioritize the dignity of the deceased and ensure that this isn’t encroached on by financial motives of digital afterlife services," emphasized Dr. Nowaczyk-Basińska.

As AI continues to integrate into various aspects of human life, the creation of deadbots carries profound implications for how society handles grief and memory. The researchers' work highlights the urgent need for careful consideration of the ethical, psychological, and social dimensions of this technology to mitigate its risks.

"We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here," concluded Nowaczyk-Basińska.

This emerging field promises to provide comfort and a new form of remembrance, but it also poses new challenges as it tests the boundaries of technology's role in the most sensitive aspects of human life. As deadbots become more common, the need for rigorous ethical standards and careful consideration of their long-term impacts becomes increasingly apparent.

For more science and technology stories check out our New Innovations section at The Brighter Side of News.


Note: Materials provided above by The Brighter Side of News. Content may be edited for style and length.



Joshua Shavit, Science and Good News Writer
Joshua Shavit is a bright and enthusiastic 17-year-old student with a passion for sharing positive stories that uplift and inspire. With a flair for writing and a deep appreciation for the beauty of human kindness, Joshua has embarked on a journey to spotlight the good news that happens around the world daily. His youthful perspective and genuine interest in spreading positivity make him a promising writer and co-founder at The Brighter Side of News.