Humans Could End Up Being ‘Haunted’ By AI ‘Ghosts’ Of Dead Loved Ones, Researchers Warn

Concept image of an AI wearing a digitized human face (iStockphoto)

At what point will we collectively decide that we need to slow down on all of the advancements we are making in the field of artificial intelligence (AI)?

Numerous extremely smart people have already issued some pretty terrifying warnings about AI, but those warnings don’t seem to have had any effect at all.

The latest warning about artificial intelligence comes to us via a recent study conducted by AI ethicists at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence.

Their research, published in the journal Philosophy & Technology, warns that without proper safety protocols, humans could end up being “haunted” by AI “ghosts” of dead loved ones.

Despite the concept seeming like it came straight out of an episode of Black Mirror, the researchers’ concerns about these Deadbots or Griefbots are very real.

“Artificial intelligence that allows users to hold text and voice conversations with lost loved ones runs the risk of causing psychological harm and even digitally ‘haunting’ those left behind without design safety standards,” the AI ethicists stated in a press release about their study.

In their paper, they outline three design scenarios they classify as “high risk.”

1. Companies could use Deadbots to “surreptitiously advertise products to users in the manner of a departed loved one.”

“When the living sign up to be virtually re-created after they die, resulting chatbots could be used by companies to spam surviving family and friends with unsolicited notifications, reminders and updates about the services they provide – akin to being digitally ‘stalked by the dead,'” the researchers explained.

2. Deadbots could “distress children by insisting a dead parent is still ‘with you.'”

For example, a terminally ill woman could leave behind a Deadbot to help her young child with the grieving process.

“While the Deadbot initially helps as a therapeutic aid, the AI starts to generate confusing responses as it adapts to the needs of the child, such as depicting an impending in-person encounter,” the researchers wrote.

3. People who take initial comfort from a Deadbot may get “drained” by daily interactions with their deceased relative.

The final scenario explored by the study – a fictional company called “Stay” – shows an older person secretly committing to a Deadbot of themselves and paying for a twenty-year subscription, in the hopes it will comfort their adult children and allow their grandchildren to know them.

After death, the service kicks in. One adult child does not engage, and receives a barrage of emails in the voice of their dead parent. Another does, but ends up emotionally exhausted and wracked with guilt over the fate of the Deadbot. Yet suspending the Deadbot would violate the terms of the contract their parent signed with the service company.

God forbid your Deadbot ends up dating another Deadbot and they have relationship issues.

“Rapid advancements in generative AI mean that nearly anyone with internet access and some basic know-how can revive a deceased loved one,” said study co-author Dr. Katarzyna Nowaczyk-Basińska.

“This area of AI is an ethical minefield. It’s important to prioritize the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.

“At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”
