A South Korean mother got to see her dead daughter again thanks to virtual reality. A chatbot designed to simulate a recently deceased person may seem like a helpful way to mourn, but new research shows that so-called deadbots can have devastating effects. Photo: MBC

Back from the dead online: how AI chatbots that simulate lost loved ones might cause ‘unwanted digital hauntings’

  • If you could speak to a dead loved one again, would you? AI chatbots offer people that opportunity – but new research shows this can have devastating effects
  • The chatbots – known as deadbots – risk causing psychological harm to the bereaved, a researcher warns

Artificial intelligence (AI) chatbots that simulate the language and personalities of dead people risk distressing loved ones left behind through “unwanted digital hauntings”, a researcher has warned.

A study from Cambridge University in the UK suggested that the AI chatbots – known as “deadbots” – need design safety protocols to prevent them from causing psychological harm.

Some companies are already offering services that allow a chatbot to simulate the language patterns and personality traits of a dead person using the digital footprint that they have left behind, according to the research.

AI ethicists from Cambridge’s Leverhulme Centre for the Future of Intelligence describe the area as “high risk”.

A man uses AI technology to “resurrect” his late grandmother, triggering controversy online. Photo: Baidu

Co-author Dr Tomasz Hollanek, from the Leverhulme Centre, said: “It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulations.

“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI re-creations of those they have lost.

“The potential psychological effect, particularly at an already difficult time, could be devastating.”

The study, published in the journal Philosophy & Technology, highlights the potential for companies to use deadbots to advertise products to users in the manner of a departed loved one, or to distress children by insisting a dead parent is still “with you”.

The researchers say that when the living sign up to be virtually re-created after they die, the resulting chatbots could be used by companies to spam surviving family and friends with unsolicited notifications, reminders and updates about the services they provide – akin to being digitally “stalked by the dead”.


Even those who take initial comfort from a deadbot may become drained by daily interactions that turn into an “overwhelming emotional weight”, the study’s authors argue. Yet they may also be powerless to have an AI simulation suspended if their now-dead loved one signed a lengthy contract with a digital afterlife service.

Study co-author Dr Katarzyna Nowaczyk-Basinska said: “Rapid advancements in generative AI mean that nearly anyone with internet access and some basic know-how can revive a dead loved one.

“This area of AI is an ethical minefield. It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.

“At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner.

“The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”

The researchers say that platforms offering to re-create the dead with AI for a fee already exist, such as Project December, which started out harnessing GPT models before developing its own systems, and apps including HereAfter.

Similar services have begun to emerge in China, according to the study.

Hollanek said people “might develop strong emotional bonds with such simulations, which will make them particularly vulnerable to manipulation”.

He said that ways of “retiring deadbots in a dignified way should be considered”, which “may mean a form of digital funeral”.

“We recommend design protocols that prevent deadbots being utilised in disrespectful ways, such as for advertising or having an active presence on social media,” he added.

The researchers recommend age restrictions for deadbots and call for “meaningful transparency” to ensure users are consistently aware that they are interacting with an AI.

They also call for design teams to prioritise opt-out protocols that let users terminate their relationships with deadbots.

Nowaczyk-Basinska said: “We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.”
