Experts warn that AI could lead to human extinction. Are we taking it seriously enough?

Editor’s note: A version of this article first appeared in the Reliable Sources newsletter. Sign up here for the daily overview of the evolving media landscape.

CNN –

The extinction of humanity.

Think about it for a second. Really think about it. The annihilation of humanity from planet Earth.

This is what leading industry figures are desperately sounding the alarm about. These technologists and academics keep hitting the red panic button, doing everything they can to warn of the potential dangers artificial intelligence poses to the very existence of civilization.

On Tuesday, hundreds of leading AI scientists, researchers and others – including OpenAI chief Sam Altman and Google DeepMind chief Demis Hassabis – once again voiced deep concern for the future of humanity, signing a single-sentence open letter to the public that unequivocally spells out the risks the rapidly advancing technology brings with it.

“Reducing the risk of extinction from AI should be a global priority alongside other societal risks such as pandemics and nuclear war,” read the letter, which was signed by many of the industry’s most respected figures.

The message could not be clearer or more urgent. These industry leaders are literally warning that the coming AI revolution should be taken as seriously as the threat of nuclear war. They are urging policymakers to erect guardrails and establish basic regulations to rein in the nascent technology before it is too late.

Dan Hendrycks, the executive director of the Center for AI Safety, likened the moment to atomic scientists warning about the very technology they had created, invoking Robert Oppenheimer’s observation: “We knew the world would never be the same.”

“There are many ‘important and urgent risks from AI’, not just the risk of extinction; for example, systemic bias, misinformation, malicious use, cyberattacks and weapon use,” continued Hendrycks. “These are all important risks that need to be addressed.”

And yet the grim message these experts are desperately trying to convey to the public does not seem to be cutting through the noise of everyday life. AI experts may be sounding the alarm, but the level of concern – and, in some cases, outright terror – they harbor about the technology is not being conveyed to the masses by the news media with the same urgency.

Instead, news organizations broadly treated Tuesday’s letter – like every other warning issued in recent months – as just another headline, mixed into the day’s churn of stories. Some major news organizations did not even feature an article about the stark warning on their websites’ home pages.

In a way, it’s eerily reminiscent of the early days of the pandemic, before the widespread panic, the shutdowns, and the overwhelmed emergency rooms. Newsrooms kept one eye on the growing threat, publishing reports as the virus slowly spread around the world. But by the time the public fully grasped how serious the virus was, it had already effectively turned the world upside down.

With AI, there is a risk that history will repeat itself, with even more at stake. Yes, news organizations are covering the evolving technology. But given that experts say it could pose a threat to the planet, there is a serious lack of urgency.

Perhaps that’s because it can be difficult to come to terms with the idea that a Hollywood-style sci-fi apocalypse could become reality, that advancing computer technology could reach escape velocity and wipe humanity out of existence. Yet that is precisely what the world’s leading experts are warning about.

It’s much easier to avoid uncomfortable realities, to put them on the back burner and hope the problems will simply sort themselves out over time. But often that’s not the case, and it seems unlikely that growing concerns about AI will resolve themselves. In fact, given the rapid pace at which the technology is evolving, it is far more likely that those concerns will only become more pronounced over time.

As Cynthia Rudin, computer science professor and AI researcher at Duke University, told CNN on Tuesday, “Do we really need more evidence that the negative effects of AI could be as large as nuclear war?”