Google placed an employee on paid administrative leave on Monday, June 6, after the man publicly claimed that the tech giant had accidentally created a sentient chatbot, a disclosure the company said violated his confidentiality agreement.

Blake Lemoine, a senior software engineer in Google’s Responsible AI organization, claims LaMDA, Google’s artificially intelligent chatbot generator, has achieved sentience.

“I know a person when I talk to it,” said Lemoine in an interview with The Washington Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Google executives deny that LaMDA is sentient, arguing that systems like it draw on a wide breadth of online content, which gives them only the appearance of semantic intelligence.

“Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Google spokesperson Brian Gabriel told The Washington Post. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

Lemoine published an edited transcript of his conversations with LaMDA on June 11, linking to it from his Twitter account with the message, “An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.”

He had presented some of the conversations to Google executives in April, but they dismissed his concerns.

Lemoine was ultimately placed on paid administrative leave after he took his concerns outside the company, including an attempt to retain a lawyer to represent LaMDA, according to The Washington Post.