The rise of artificial intelligence has prompted questions about the ethics and regulation of using technology to artificially ‘resurrect’ the dead.

Katarzyna Nowaczyk-Basińska, a researcher at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, told Science News that companies such as Project December already exist that use data collection and AI to mimic loved ones who are no longer alive. She co-authored a research article to “analyze potential negative consequences of adopting generative AI solutions in the digital afterlife industry.”

The article considered three scenarios to describe problems that could arise. One scenario examined consent. In it, an older person secretly signs up for a 20-year subscription to a re-creation service, intending to help his adult children through the grieving process.


“And now just imagine that after the funeral, the children receive a bunch of emails, notifications, or updates from the re-creation service, along with the invitation to interact with the bot of their deceased father,” Nowaczyk-Basińska said, per Science News. “[The children] should have a right to decide whether they want to go through the grieving process in this way or not. For some people, it might be comforting, and it might be helpful, but for others not.”

The researchers also presented a scenario in which the technology could be exploited for product placement. In that scenario, a “deadbot” — an AI-enabled digital representation of a deceased individual — is asked for a recipe but, instead of providing it, recommends a food delivery service.

“Our concern is that griefbots might become a new space for a very sneaky product placement, encroaching upon the dignity of the deceased and disrespecting their memory,” Nowaczyk-Basińska said, per Science News.

The final scenario examined the impact the technology could have on minor children. It involved a terminally ill mother who created a “griefbot” — a virtual presence that enables “communication” with the deceased based on the digital footprint the person left behind — to help her 8-year-old son through the grieving process. In the scenario, the bot could, over time, provide confusing responses as it learned how to reply to the boy.

“I think we could go even further and use this … in the near future to even conceal the fact of the death of a parent or the other significant relative from a child,” Nowaczyk-Basińska said, according to Science News. “And at the moment, we know very, very little about how these technologies would influence children.”

“We argue that if we can’t prove that this technology won’t be harmful, we should take all possible measures to protect the most vulnerable. And in this case, that would mean age-restricted access to these technologies.”

“Re-creation service providers today are making totally arbitrary decisions of what is acceptable or not,” Nowaczyk-Basińska concluded. “And it’s a bit risky to let commercial entities decide how our digital death and digital immortality should be shaped. People who decide to use digital technologies in end-of-life situations are already at a very, very difficult point in their lives. We shouldn’t make it harder for them through irresponsible technology design.”