
Why AI Experts Trust AI—and The Public Doesn’t | Image Source: arstechnica.com
WASHINGTON, D.C., 3 April 2025 – As artificial intelligence (AI) weaves itself deeper into the fabric of daily life, from the way we search for information to how we apply for jobs or consume media, a new report from the Pew Research Center illuminates the gulf between how AI experts and the general public perceive this transformation. Conducted through two parallel surveys, one of more than 5,400 American adults and another drawing on a group of 1,013 AI professionals, the research reveals contrasting levels of optimism, trust and understanding surrounding the technology shaping our future.
According to Pew, while almost three-quarters (73%) of AI experts believe that artificial intelligence will have a positive impact on how people do their jobs over the next 20 years, only 23% of the general public share this view. More broadly, 56% of experts believe AI will have a somewhat or very positive effect on the United States as a whole; among the public, that figure is just 17%.
Why compare public and expert opinions on AI?
Brian Kennedy, a senior research scientist at Pew focused on science and society, explains the logic behind the dual approach: comparing public opinion with that of experts helps inform the conversation about AI's benefits and risks. According to Kennedy, Pew's previous studies had documented Americans' growing awareness of, and concern about, AI. What was missing was a sense of how those evolving sentiments aligned with, or contradicted, the views of the people who design, study or regulate AI systems.
To answer that question, Pew assembled a carefully selected group of AI experts, drawing names from 21 major AI conferences held in 2023 and 2024. The list spans a range of topics, including AI ethics, social impact, technical development and commercial applications. Smaller conferences and affinity-group meetings were included in an effort to bring under-represented voices, including women and people of colour, into a male-dominated field.
Who are the AI experts in this study?
Defining an expert in artificial intelligence is no small feat. In the Pew study, the term covered people actively involved in the technical development of AI, such as machine learning or natural language processing, as well as those focused on its implications and applications in areas such as health, finance, ethics and policy. Pew's researchers reviewed lists of conference authors and presenters, tracked down contact information, and included only those based in the United States to ensure alignment with the public panel.
The result was a sample diverse in professional background, though not statistically representative of all AI experts, as Kennedy openly acknowledges. That contrasts with the public panel, which uses rigorous random sampling and statistical weighting to reflect U.S. demographics.
How often do Americans interact with AI?
This question exposes a fundamental disconnect in AI literacy. A striking 79% of AI experts believe that Americans interact with AI “almost constantly or several times a day.” Only 27% of American adults describe their own interactions with AI that way. This perception gap suggests that much of AI’s presence – embedded in search engines, recommendation systems, fraud-detection tools and voice assistants – goes largely unnoticed by the average user.
“People encounter AI more often than they realize,” says Kennedy. From targeted ads that follow us online to predictive text in email apps, AI is already here, often operating invisibly in the background. Yet many Americans still regard it as a futuristic or abstract force rather than something already shaping their choices and opportunities.
Chatbots: Help or Hype?
Chatbots like ChatGPT have become the most visible face of AI for many Americans. Yet even here, the divide between expert and public sentiment is stark. The Pew survey found that nearly all AI experts (98%) have used a chatbot, and 61% found the experience “very” or “extremely” useful. Among the public, only about a third have used a chatbot, and just 33% of those users found it similarly useful.
Interestingly, public skepticism is not rooted only in lack of exposure. Despite growing awareness (72% say they have at least heard of chatbots), a sizeable share of users described these tools as “not too useful” or “not at all useful,” a dissatisfaction rate more than double that reported by experts (21% versus 9%).
Who feels in control of the influence of AI?
Concerns about autonomy and agency dominate the responses of both groups. According to Pew, 59% of U.S. adults say they have little or no control over how AI is used in their lives. Among experts, that number drops to 46%, still a notably high level of concern for people at the cutting edge of this technological change.
Perhaps most revealing, majorities in both groups (57% of experts and 55% of U.S. adults) expressed a desire for more control over how AI is integrated into their personal and professional lives. Uncertainty is greater among the public – 26% say they don’t know how much control they want – compared with only 4% of experts.
This demand for greater oversight highlights growing concern about the “black box” nature of AI. Even those who build these systems admit there are limits to their predictability and explainability, especially with large language models and neural networks.
Why are experts so much more optimistic than the public?
Part of the answer is exposure and understanding. Experts, immersed in AI’s development and capabilities, tend to see its benefits: efficiency, scalability, innovation. According to the report, 76% of AI experts believe the technology will benefit them personally over time, compared with only 24% of the public. Just 15% of experts expect AI to harm them personally, while 43% of American adults expect to be negatively affected.
Experts’ optimism is not unbridled, however. More detailed survey responses revealed nuanced concerns about ethics, bias and oversight. But experts tend to see these problems as solvable challenges rather than existential threats. The public, by contrast, increasingly views AI through the lens of job loss, misinformation and surveillance.
Q: What do Americans fear most about AI?
A: Concerns include job displacement, deepfakes, political misinformation and algorithmic bias. According to Pew, 64% of American adults believe that AI will reduce employment opportunities over the next two decades.
Q: Do AI experts share these concerns?
A: Yes, but to a lesser extent. Only 39% of experts believe AI will lead to fewer jobs overall. They recognize that certain roles, such as cashiers or journalists, may decline, but they anticipate new ones emerging.
Q: Is there a gender gap in how AI is viewed?
A: Absolutely. Women are more sceptical than men, both among experts and among the general public. Among AI professionals, 67% of women want more control over how AI is used in their lives, compared with 54% of men. Among the public, only 12% of women believe AI will have a positive impact on the United States, compared with 22% of men.
Representation and Equity in AI Design
Another dimension underlying public apprehension is perceived bias in AI development. Three-quarters of experts say that men’s perspectives are adequately represented in the design of AI; only 44% say the same of women’s. The general public echoes this concern, with many doubting that AI systems reflect a diversity of viewpoints.
This has serious implications, especially as AI becomes involved in hiring, law enforcement, credit scoring and health care decisions. An algorithm trained on biased data can perpetuate or even amplify existing inequalities. As previous Pew studies have noted, under-represented communities remain wary of who builds these tools and the assumptions baked into their design.
What’s next for AI governance?
If one theme unites AI experts and the American public, it is the shared concern that regulation will not keep pace with technological change. Both groups worry more that government oversight of AI will be too lax than that it will be too aggressive. The complex, global and rapidly evolving nature of artificial intelligence systems makes effective policymaking a daunting challenge.
Still, the surveys point to a consensus: most people, regardless of their expertise, want more say in how AI affects their lives. The message to policymakers is clear. As AI redefines everything from media to medicine, public trust will depend not only on innovation but also on transparency, accountability and inclusion.
And perhaps most importantly, ongoing dialogue among experts, everyday users, ethicists and legislators is essential to chart a path that embraces AI’s potential without sacrificing human values along the way.