U.S. financial regulators have set their sights on the growing proliferation of artificial intelligence platforms, like OpenAI’s ChatGPT. The nascent technology is already being leveraged to produce powerful algorithms, reported The Atlanta Journal-Constitution.

“There’s this narrative that AI is entirely unregulated, which is not really true,” said Ben Winters, senior counsel for the Electronic Privacy Information Center, per the AJC. “They’re saying, ‘Just because you use AI to make a decision, that doesn’t mean you’re exempt from responsibility regarding the impacts of that decision. This is our opinion on this. We’re watching,’” he said.

Financial institutions have already faced fines related to automated systems. The Consumer Financial Protection Bureau (CFPB) said it had levied fines against banks whose use of such systems resulted in wrongful home foreclosures, for example.

One challenge for regulators is the technological know-how needed to navigate the rapidly evolving industry. According to Rohit Chopra, director of the CFPB, his agency has “already started some work to continue to muscle up internally when it comes to bringing on board data scientists, technologists and others to make sure we can confront these challenges,” per the AJC.

The Equal Employment Opportunity Commission, the Department of Justice, the CFPB, and the Federal Trade Commission are all allocating resources to manage AI, say representatives from each agency.


“One of the things we’re trying to make crystal clear is that if companies don’t even understand how their AI is making decisions, they can’t really use it,” Chopra said, reported the AJC. “In other cases, we’re looking at how our fair lending laws are being adhered to when it comes to the use of all of this data,” he said.

When financial institutions deliver an adverse credit decision, they must be able to explain it, according to the Fair Credit Reporting Act and Equal Credit Opportunity Act. With AI, however, the decision-making can become opaque. In those instances, according to regulators, algorithms should not be used.

“I think there was a sense that, ‘Oh, let’s just give it to the robots and there will be no more discrimination,’” Chopra said, per the AJC. “I think the learning is that that actually isn’t true at all. In some ways the bias is built into the data.”

EEOC Chair Charlotte Burrows has similar concerns over employment, stressing there will be enforcement against AI use that filters out applicants with disabilities, for example.

At a conference held earlier in May, a top lawyer at OpenAI suggested that the industry help spearhead the regulatory framework.

“I think it first starts with trying to get to some kind of standards,” Jason Kwon, OpenAI’s general counsel, said during a tech summit in Washington, D.C., per the AJC. “Those could start with industry standards and some sort of coalescing around that. And decisions about whether or not to make those compulsory, and also then what’s the process for updating them, those things are probably fertile ground for more conversation.”

The head of OpenAI, Sam Altman, even suggested that a government-led AI regulatory and licensing body be formed. Government oversight “will be critical to mitigate the risks of increasingly powerful” artificial intelligence, warned Altman.
