
Survey Reveals Doubts Over AI Achieving Human-Level Intelligence | Image Source: www.nature.com
PHILADELPHIA, Pennsylvania, March 9, 2025 – Artificial intelligence (AI) systems have made significant progress in recent years, from chatbots to image generators. However, a new survey of AI researchers reveals deep scepticism about whether scaling up current AI models will lead to artificial general intelligence (AGI), the hypothetical point at which AI matches or exceeds human cognition. The findings, presented at the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI), highlight growing concerns about the limitations of neural networks and the ethical implications of deploying AGI.
Can current AI systems be scaled to AGI?
For years, progress in AI has largely depended on increasing the size and complexity of machine learning models. This approach has driven advances in natural language processing and generative AI. However, according to the AAAI survey, more than 75% of respondents believe that merely scaling up existing models will not be sufficient to achieve AGI. The scepticism is even stronger with regard to neural networks, the backbone of current AI: 84% of the experts surveyed said that neural networks alone are unlikely to reach human-level intelligence.
Francesca Rossi, an AI researcher at IBM and President of the AAAI, questions whether AGI should be the goal at all. “I don’t know if human intelligence is the right goal. AI must support human growth, learning and improvement, not replace us.”
This sentiment reflects a broader debate about AI’s role in society—should researchers strive for machines that think like humans, or should AI development focus on augmenting human intelligence?
What other approaches could lead to AGI?
The AAAI report suggests that alternative AI paradigms beyond neural networks deserve more attention. One such approach is symbolic AI, sometimes called “good old-fashioned AI”, which encodes logic and reasoning directly into AI systems rather than relying solely on statistical learning. More than 60% of the researchers surveyed believe that integrating symbolic AI with neural networks is necessary to achieve human-level reasoning.
“The neural-network approach is here to stay,” says Rossi, “but to evolve in the right way, it must be combined with other techniques.” The report urges governments, universities and private companies to diversify AI research rather than focusing primarily on large neural networks.
Should AGI even be a priority?
Although AGI is often described as the “holy grail” of AI research, the survey reveals that most experts do not consider it a top priority. Only 23% believe that achieving AGI should be the main focus of AI development. Instead, over 75% argue that AI research should prioritize building systems with an acceptable balance of benefits and risks.
Some experts even advocate pausing AGI development until society can ensure the safety of such systems. Approximately 30% of respondents agree that AGI research should be halted until clear mechanisms exist to keep these systems under human control. However, a halt would be difficult to enforce, because AI research is driven by a combination of commercial interests and international competition.
Anthony Cohn, an AI researcher at the University of Leeds and a member of the AAAI, argues that stopping AGI research is unrealistic. “I don’t think it’s practical to do this – companies will do it even if the research organizations have stopped funding it. Besides, I don’t think AGI is as imminent as many people think.”
His perspective suggests that AI safety efforts should focus on responsible development rather than outright bans.
AI and national security: a geopolitical race?
Beyond technical concerns, the global race to dominate AI development has intensified. According to the New York Times, former White House AI advisor Ben Buchanan pointed out that AGI could have profound economic, military and intelligence implications. He compared the geopolitical importance of AI to past technological revolutions funded by the United States Department of Defense, including the Internet and GPS.
The US government is particularly focused on maintaining AI supremacy over China. Export controls that prevent advanced AI chips from reaching Chinese companies are a key element of this strategy. However, some argue that these restrictions could increase tensions and accelerate China’s push for AI self-sufficiency.
Is AGI even feasible with current technology?
A separate study by researchers at Tsinghua University and Renmin University of China, reported by Ars Technica, suggests that AGI could be much further away than expected. The study proposes a new benchmark called the “survival game”, which tests AI systems on trial-and-error learning. Current AI models struggle significantly on these tests, failing to adapt and find solutions independently.
The researchers estimate that achieving AGI would require neural networks with 10^26 parameters – five orders of magnitude more than the total number of neurons in all human brains combined. Such a system would be so costly that, even if Moore’s law continued for another 70 years, the necessary hardware would remain financially and logistically out of reach.
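The study's scale comparison can be sanity-checked with back-of-the-envelope arithmetic. The sketch below uses commonly cited estimates for the neuron count of a human brain and world population (these figures are assumptions for illustration, not values taken from the study):

```python
import math

# Assumed figures (commonly cited estimates, not from the study itself):
NEURONS_PER_BRAIN = 8.6e10   # neurons in one human brain
WORLD_POPULATION = 8e9       # roughly 8 billion people
AGI_PARAMS = 1e26            # parameter count the study estimates AGI needs

# Compare the estimated AGI parameter count with all human neurons combined.
total_human_neurons = NEURONS_PER_BRAIN * WORLD_POPULATION  # ~6.9e20
gap = math.log10(AGI_PARAMS / total_human_neurons)
print(f"orders of magnitude above all human neurons: {gap:.1f}")  # ~5

# Moore's law: hardware capability doubling roughly every two years.
# Even 70 years of doubling gives only about a 3.4e10-fold improvement,
# illustrating why the required hardware would remain out of reach.
growth_70_years = 2 ** (70 / 2)
print(f"hardware growth over 70 years: {growth_70_years:.1e}")
```

Under these assumed figures, the gap comes out to roughly five orders of magnitude, consistent with the study's framing.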
“While current AI systems can perform predefined tasks well, they struggle significantly when faced with problems that require continuous trial and error,” said Jingtao Zhan, a PhD student at Tsinghua University. This suggests that the current AI trajectory cannot lead directly to AGI, at least not without major technological breakthroughs.
These results are consistent with the scepticism expressed by AI experts in the AAAI survey. Despite the rapid progress of generative AI, the path to human-level intelligence remains uncertain.
As the debate over the feasibility and desirability of AGI continues, one thing is clear: AI research must reconcile innovation with ethical considerations. Whether AGI arrives in decades or never, AI will continue to shape the future of work, security and daily life.