News & Analysis

Advanced AI and Extinction Risk

While influencers of every hue are talking up the apocalypse, others believe it is all an elaborate bluff

Doomsayers warned long ago that imparting human-level intelligence to machines would end in the extinction of our species. The warning was dismissed as science fiction, not least because cinema took up the theme so ardently. Now, global scientists, academics, CEOs and public figures have come together to announce that fiction could soon turn into fact.

And grabbing the eyeballs at the helm are none other than OpenAI boss Sam Altman and DeepMind boss Demis Hassabis. They have been joined by a virtual who’s who of the scientific community, the tech world and many other fields in signing a rather dramatic statement urging global attention to an existential crisis involving artificial intelligence.

Doomsday predictions that serve their purpose

The statement is hosted on the website of a non-profit called the Center for AI Safety (CAIS) and is terse to the point of being foreboding:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” 

CAIS, whose stated mission is to “reduce societal-scale risks from artificial intelligence”, links the risk posed by advanced AI to the risks of a nuclear apocalypse or of nature’s depredations destroying the planet. It asks policymakers across countries to focus attention on mitigating what it perceives as doomsday-level risks.

The signatories, who also include veteran AI scientist Geoffrey Hinton, MIT’s Max Tegmark, Skype co-founder Jaan Tallinn, musician Grimes and podcaster Sam Harris, say that mitigating the risk of extinction caused by AI should stand alongside global priorities such as preventing nuclear war, climate change and pandemics of the Covid-19 variety.

The website adds that the statement was kept “succinct” because the signatories wanted their concerns about the severest risks of advanced AI not to be drowned out in the debate over other AI risks already under discussion, such as privacy violations and misuse in legal and policy matters.

Speaking of which, we have already had an instance of generative AI being used to draft court filings, prompting Texas Judge Brantley Starr to make it mandatory for attorneys appearing in his court to attest that no portion of a filing was drafted by generative artificial intelligence or, if it was, that it has been vetted by a human being.

What’s the flip side of the story?

American media theorist Douglas Rushkoff adds an interesting twist to this entire narrative. In a recent commentary, he writes: “It took me a while to put my finger on what bothered me so much about the recent open letter from various AI titans and experts calling for a six-month moratorium on AI development (basically, anything beyond current industry leader ChatGPT-4’s state of learning).

“Then I got it: they are essentially saying ‘hold me back!’ As if what they have is so powerful, so dangerous, that they need us to restrain them for all of our own good. Like a would-be street fighter depending on his friends to ‘restrain’ him lest he take the other guy apart, it’s just a form of bluffing,” Rushkoff says.

The hysterical headlines have obviously distracted us from taking a long, hard look at the challenges that already exist: the free use of copyrighted data to train AI systems without consent, the scraping of personal data in violation of privacy, and the lack of transparency from AI giants about the data used to train these tools. The list is already long.

Rushkoff tells it like it is: “Sure, there’s a possibility that these language-model-based query systems could one day be developed into something like artificial intelligence. But that’s not what we’re looking at here. So far, all we’ve got are programs that string together words into the most likely sensical combinations, based on all the strings of words they’ve been fed previously. They are not thinking, or even using basic logic. They’re a user-friendly web interface.”

What’s more, they’re even more inaccurate than Google, he notes, pointing out that when ChatGPT is asked whether a pound of feathers or five pounds of lead weighs more, it gets the answer wrong: the tool merely pattern-matches on sentences about the old riddle that a pound of feathers weighs the same as a pound of lead. “It’s not using any sort of math or logic to answer a question; it is just pulling up the most probable string of words. It’s not even as smart as Wikipedia on a bad day,” he says.
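To make Rushkoff’s point concrete, here is a deliberately crude sketch of answering-by-frequency. Everything in it is invented for illustration (the corpus, the function name, the query format), and real language models are vastly more sophisticated, working on tokens and learned probabilities rather than whole-sentence lookups. Still, it captures the failure mode he describes: the “answer” is whichever matching sentence is most common, and no weight is ever compared.

```python
# A toy illustration of "pulling up the most probable string of words".
# Everything here is invented for the example; real LLMs work on tokens
# and learned probabilities, not whole-sentence lookups.
from collections import Counter

# Hypothetical training corpus: the classic riddle appears far more
# often than the literal five-pound comparison.
corpus = [
    "a pound of feathers weighs the same as a pound of lead",
    "a pound of feathers weighs the same as a pound of lead",
    "a pound of feathers weighs the same as a pound of lead",
    "five pounds of lead weighs more than a pound of feathers",
]

def most_probable_answer(*terms):
    """Return the most frequent corpus sentence mentioning all terms.

    No weights are compared anywhere; the 'answer' is whichever
    matching sentence the corpus repeats most often.
    """
    matches = Counter(s for s in corpus if all(t in s for t in terms))
    return matches.most_common(1)[0][0]

# Frequency lookup echoes the riddle -- the wrong answer to
# "does a pound of feathers or five pounds of lead weigh more?"
print(most_probable_answer("feathers", "lead"))

# The actual question needs only one comparison: 5 lb > 1 lb.
print(5 > 1)  # True: five pounds of lead weighs more
```

The contrast at the bottom is the whole point: the correct answer requires only a trivial numeric comparison, which a frequency lookup never performs.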

Which brings us to the reality of the situation. The AI giants have commercial motivations for steering regulatory attention toward a future theatre of action depicted as doomsday. It draws lawmakers away from the fundamental competition and antitrust challenges of the present towards a hypothetical future in which generative AI destroys humanity.

But more on this at a later date. Meanwhile, fasten your seatbelts and watch the show as the moneybags attempt to subvert yet another innovation to fill their coffers.
