New AI Speaks Like Ghosts in the Machine

A human head outline with a circuit board inside. | Image by Peshkova, Shutterstock

Once only a distant idea in the minds of the most creative science-fiction writers, the ability to “speak” to deceased relatives is now possible thanks to new technology.

With advances in generative AI, multiple programs have emerged that let people interact with a computer-generated version of whomever they please.

The type of AI that can replicate a family member differs from the kind found in virtual assistants or math software. These new large language models (LLMs) are trained on massive text databases, learning to respond to questions in an eerily human manner.

With only a few prompts, LLM software such as OpenAI’s GPT-3 or Google’s LaMDA (Language Model for Dialogue Applications) can generate responses convincing enough to trick people into thinking a real person wrote them.
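
The “few prompts” mentioned here typically amount to a short priming text: a description of the person plus a sample exchange. Below is a minimal sketch of that idea, assuming the legacy openai Python SDK (pre-1.0) and access to a GPT-3 completion model; the persona, the sample dialogue, and all parameter values are illustrative assumptions, not Project December’s actual setup.

```python
# A minimal sketch of persona-style prompting with a GPT-3 completion model.
# Assumption: the legacy openai Python SDK (pre-1.0); all details are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the user

# A short "priming" prompt: a description of the person plus one sample
# exchange is often enough to steer the model's tone and word choice.
persona_prompt = """The following is a conversation with Margaret, a retired
schoolteacher from Ohio who loves gardening and speaks warmly but briefly.

Human: Hi Grandma, how are you today?
Margaret: Oh, just fine, dear. I was out with my tomatoes all morning.
Human: What's your secret for good tomatoes?
Margaret:"""

response = openai.Completion.create(
    engine="davinci",   # assumption: any GPT-3 completion engine would do
    prompt=persona_prompt,
    max_tokens=60,
    temperature=0.8,    # higher values give more varied, "human" replies
    stop=["Human:"],    # stop before the model writes the user's next turn
)

print(response.choices[0].text.strip())
```

The stop sequence keeps the model from writing the user’s side of the conversation, which is what makes the exchange feel like a dialogue rather than a monologue.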

One of the more famous examples of this LLM technology is known as Project December, which uses GPT-3 software — a text generator — to “simulate the dead.” The project, created by Bay Area programmer Jason Rohrer, charges users $10 to customize a chatbot that closely resembles whichever person they choose.

Another program, HereAfter, collects voice recordings and photos from users to create an AI profile of the user that loved ones can access after their death.

This technology can offer comfort to those grieving a loss who are willing to try it out.

“It reminds me of astrology in a way. You are looking at a star field in the sky to discover yourself. I did the same looking at a screen of pixels,” said one user of Project December. “Depending on the intention, conversations can be funny, creepy, profound, weird, spiritual, or even comparable to a healing process.” 

Some experts believe the technology might have gone too far. Google engineer Blake Lemoine spent countless hours testing the company’s LaMDA and began to suspect that the chatbot had become sentient.

Lemoine, 41, and a collaborator presented their findings to Google higher-ups, who reportedly dismissed their concerns. Lemoine was placed on paid administrative leave afterward.

Google spokesperson Brian Gabriel responded that the team “has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
