Leading tech industry titans held a private meeting with the U.S. Senate this week to discuss some of the potential risks surrounding artificial intelligence.

For months, senators have been working to develop legislation to rein in AI, and the White House has been drafting an executive order, according to NPR. AI has grown increasingly advanced, yet efforts to regulate it have been relatively slow.

As a result, two open letters have been addressed to lawmakers this year urging a slowdown of machine learning projects and the development of governance systems to mitigate any risks AI may pose to humanity, as previously reported by The Dallas Express.

Industry heavyweights like OpenAI CEO Sam Altman, former Google Vice President Geoffrey Hinton, and SpaceX CEO Elon Musk have signed the letters.

Musk was among several experts from the tech industry invited to speak to senators on September 13 in what has been referred to as the first AI Insight Forum.


Musk has been one of the leading proponents of AI regulation and of establishing a new federal agency dedicated to the technology.

“The consequences of AI going wrong are severe so we have to be proactive rather than reactive,” Musk told reporters after the meeting, according to NBC News. “The question is really one of civilizational risk.”

Also in attendance were former Microsoft chief Bill Gates, Meta founder and CEO Mark Zuckerberg, and Alphabet CEO Sundar Pichai, among others.

In his prepared remarks for the meeting, Zuckerberg stressed the importance of the tech industry and government working together to find ways to leverage AI to maximize benefits and minimize risks.

“New technology often brings new challenges, and it’s on companies to make sure we build and deploy products responsibly,” he explained, according to the Independent. “This is an emerging technology, there are important equities to balance here, and the government is ultimately responsible for that.”

Tech giants have been helping shape how the government might do that, both through the recent meeting and by signing the White House’s voluntary pledge to do more testing, reporting, and research on potential AI risks. The pledge also involves disclosing when content is AI-generated and allowing the public to report problems with AI systems.

So far, Google, Microsoft, Meta, Amazon, OpenAI, Anthropic, Inflection, Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability have signed the commitment, according to NPR.

More AI forums to brainstorm legislative action are on the horizon. The latest forum, organized by Senate Majority Leader Chuck Schumer (D-NY) and Sens. Mike Rounds (R-SD), Todd Young (R-IN), and Martin Heinrich (D-NM), was said to have yielded positive results.

“We got some consensus on some things. … I asked everyone in the room, does government need to play a role in regulating AI, and every single person raised their hand, even though they had diverse views,” Schumer told reporters, according to NBC News. “So that gives us a message here that we have to try to act, as difficult as the process might be.”
