
AI Poses Extinction Risk, Experts Claim

[Image: Artificial intelligence processor | Bounyong Koulavong/Shutterstock]

An open letter sounding the alarm about the potentially extinction-level threat posed by advanced forms of artificial intelligence (AI) has garnered hundreds of signatures from prominent individuals in the industry.

Released by the nonprofit Center for AI Safety on May 30, the letter urges policymakers to treat the risk of extinction from AI as a priority on par with other societal-scale threats, such as nuclear war and pandemics.

Among the 350 signatories, several top tech executives, computer science academics, and industry pioneers stand out:

  • Geoffrey Hinton, former Google vice president and AI pioneer, recently left his position to speak more freely about the risks that rapidly advancing technology presents to society, as The Dallas Express reported.
  • Sam Altman, CEO of OpenAI, has been vocal about disinformation, cyberattacks, and the supposed existential threat of AI-powered technology.
  • Demis Hassabis, CEO of DeepMind Technologies, told Time Magazine that AI was on the brink of creating instruments capable of causing great and widespread destruction to humankind.
  • Yoshua Bengio, an award-winning Canadian researcher, is one of several so-called “Godfathers of AI” to express regret for his work, comparing himself to the researchers behind the atomic bomb.

In March, an open letter signed by Elon Musk and others called for a six-month pause on all large-scale machine learning projects. By slowing progress, the signatories hoped to buy time to develop governance systems that could mitigate AI's possible risks to humanity.

Speaking before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16, Altman said, “My worst fears are that we [the tech industry] cause significant harm to the world … If this technology goes wrong, it can go quite wrong.”

“We want to work with the government to prevent that from happening,” Altman added.

Lawmakers in Congress are currently working to develop legislation to address some of the concerns surrounding AI.

Senate Majority Leader Chuck Schumer (D-NY) has been working on a framework for regulating AI, and Sen. Michael Bennet (D-CO) is trying to create a task force to examine AI policies and their impact on people’s civil liberties, as well as to flag any oversight deficiencies.

Still, Bengio has called lawmakers’ attempts to regulate AI and prevent its adoption by bad actors “sluggish,” according to the Financial Post.

For instance, AI-driven bots can be used to impersonate humans online.

“The users need to know that they’re talking to a machine or a human. Accounts on social media and so on need to be regulated so we know who’s behind the account — and it has to be human beings most of the time,” said Bengio, the Financial Post reported.

While most industry leaders agree that AI needs some kind of legal guardrail, not everyone expects humankind to be eclipsed by AI.

Speaking with USA Today, Michael Hamilton, co-founder of the risk management firm Critical Insight, claimed that AI would not become self-aware anytime soon and would only develop as much as humans allow.
