Artificial Intelligence: the name itself appears to be an oxymoron.

Intelligence is prized as a genuine gem rather than a piece of costume jewelry meant to fool the casual observer. Yet we live in a society where even a novice can appear “intelligent,” using AI to write essays, take tests, and even alter our appearance.

Is society becoming more artificial in its quest to become more intelligent? What are some of the risks and rewards of AI?

Host Sarah Zubiate Bennett of The Dallas Express Podcast discussed AI’s impact with guest Steve Kinard Jr.

Kinard is the president of the AI Innovation Association and asserts that the risks involved with AI are rooted in a lack of transparency.

He noted that people’s imaginations have always been active, but concerns over AI can be mitigated through clarity and accountability.

“No one group of people — government, company, or billionaire — should exert centralized control over AI in a way that is not transparent,” said Kinard.

“I consider myself a technology optimist, but the threat is real of centralized control — that is the greatest risk,” he added.

Prior to appearing on the podcast, Kinard attended a field hearing on a new bipartisan bill, the TAKE IT DOWN Act. The hearing was hosted by the Senate Commerce Committee, which includes Sen. Ted Cruz (R-TX).

Kinard explained that this piece of legislation “is focused on the disturbing rise of deepfake attacks.”

The TAKE IT DOWN Act, as previously reported by DX, would establish a federal offense punishable by imprisonment for disseminating non-consensual intimate imagery (NCII), such as deepfake pornography. The penalties would be even more severe if the victim is a minor.

The proposed legislation would encompass images generated by artificial intelligence and would apply to situations where the original image was produced with consent but the depicted individual did not consent to its distribution. Additionally, the bill aims to criminalize the act of threatening to publish such images online.

“Due to advances in technology, now anyone can become a victim,” Cruz said in June during the field hearing in Dallas, DX reported.

Not only can anyone become a victim, but almost anyone can learn how to use AI to become an offender. AI’s potential is non-discriminatory; it can be put to beneficial and nefarious purposes alike.

“AI isn’t something that’s going to be optional,” explained Kinard. “It’s going to be on every phone, it’s going to be in every car, it’s going to be part of every media company.”

Kinard described testimony he heard at the hearing from a young girl who had posted pictures of herself and her friends at a social gathering on social media. A boy in her class took those pictures and used an AI tool found on Google to create nude images of the girl that went viral throughout the school.

“The advancement of these tools is remarkable,” said Kinard. “It’s not that they took a picture of her face and pasted it on some other pornographic image. It looked disturbingly and shockingly real.”

“What made it even worse was the reality that [the victim and her family] had to face, which was twofold,” explained Kinard. “First, there was no regulation in place where the school could expel the perpetrator of this crime, which is just shocking. They had no recourse through the school itself. … Nothing they could do.”

“Secondarily, [the victim and her family] had identified [that], most egregiously, Snapchat had these pictures going viral. They were contacting Snapchat, saying these were pictures of a 14-year-old girl, and they could not get them to take it down.”

“Over a series of months, it escalated and escalated and finally got to Sen. Cruz’s office,” said Kinard. Snapchat took the images down once Cruz got involved.

Later in the podcast, Kinard expounded on the scope of AI’s potential to address issues in government and police departments, stressing the need to couple intention with risk avoidance.

“In the last [Texas] legislative session, a law was passed to create an AI advisory council at the state level,” said Kinard. “The focus of that council is to invite representatives across government entities to start to provide testimony of how they view AI, are they using it, in what ways, and how is that developing.”

Based on what he has witnessed firsthand, Kinard said that most government entities are not rushing to allow AI into decision-making processes, as human bias is “a real risk.”

“If human bias has been part of the training of the model, then you’re basically codifying bias,” explained Kinard. “So anytime we start to get into a discussion around policing … it is important to move slow and cautiously, and always from a privacy-first standpoint because law enforcement and the government are dealing with a lot of very, very sensitive information.”

“We are living in a fascinating time,” said Kinard.

“It is a fascinating time,” added Bennett. “It’s also terrifying.”

To watch The Dallas Express Podcast, Episode #27, “Embracing Tomorrow: AI’s Impact on Our Present,” in its entirety, please click HERE.