A Third Of AI Researchers Think AI Could Cause "Catastrophic" Outcomes On Par With Nuclear War This Century

Is it a good sign when 36 percent of a field think it might end in catastrophe?

James Felton

Senior Staff Writer

Sep 22 2022, 09:23 UTC
The survey shows splits in the field over AGI. Image credit: metamorworks/

A survey of scientists and researchers working in artificial intelligence (AI) has found that around a third of them believe it could cause a catastrophe on par with all-out nuclear war. 

The survey was given to researchers who had co-authored at least two computational linguistics publications between 2019 and 2022. It aimed to gauge the field's views on controversial topics surrounding AI and artificial general intelligence (AGI) – the ability of an AI to think like a human – as well as the impact researchers believe AI will have on society at large. The results are published in a preprint paper that has not yet undergone peer review.


AGI, as the paper notes, is a controversial topic in the field. There are big differences in opinion on whether we are advancing towards it, whether it is something we should be aiming towards at all, and what would happen when humanity gets there. 

"The community in aggregate knows that it’s a controversial issue, and now (courtesy of this survey) we can know that we know that it’s controversial," the team wrote in their research. Among the (pretty split) findings was that 58 percent of respondents agreed that AGI should be an important concern for natural language processing at all, while 57 percent agreed that recent research had driven us towards AGI.

Where it gets interesting is how AI researchers believe that AGI will affect the world at large.


"73 percent of respondents agree that labor automation from AI could plausibly lead to revolutionary societal change in this century, on at least the scale of the Industrial Revolution," the researchers wrote of their survey.

Meanwhile, a non-trivial 36 percent of respondents agreed that it is plausible that AI could produce catastrophic outcomes in this century, "on the level of all-out nuclear war". 

It's not the most reassuring thing when a significant proportion of a field believes its own technology could lead to humanity's destruction. However, in the feedback section, some respondents objected to the phrasing of "all-out nuclear war", writing that they "would agree with less extreme phrasings of the question".


"This suggests that our result of 36% is an underestimate of respondents who are seriously concerned about negative impacts of AI systems," the team wrote.

Though (perhaps with good reason) wary of the potential catastrophic consequences of AGI, researchers overwhelmingly agreed that natural language processing has "a positive overall impact on the world, both up to the present day (89 percent) and going into the future (87 percent)."

"While the views are anticorrelated, a substantial minority of 23 percent of respondents agreed with both Q6-2 [that AGI could be catastrophic on par with an all-out nuclear war] and Q3-4 [that NLP has an overall positive impact on the world]," the researchers wrote, "suggesting that they may believe NLP’s potential for positive impact is so great that it even outweighs plausible threats to civilization."


Among other findings were that 74 percent of AI researchers believe that the private sector is too heavily influencing the field, and that 60 percent believe the carbon footprint of training large models should be a major concern for NLP researchers.

The paper is published on the preprint server arXiv.
