Only 40% of firms globally are investing to make AI systems trustworthy through governance, explainability and ethical safeguards, even though organizations that prioritize trustworthy AI are 60% more likely to double the ROI of their AI projects.
Among respondents reporting the least investment in trustworthy AI systems, generative AI was viewed as 200% more trustworthy than traditional AI (such as machine learning), even though the latter is the most established, reliable and explainable form of AI.
These findings come from new research conducted by IDC and commissioned by SAS. The study surveyed 2,375 respondents, most of whom represent firms with more than 1,000 employees, based in North America, Latin America, Europe, the Middle East and Africa, and the Asia-Pacific region.
“Our research shows a contradiction: that forms of AI with humanlike interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy,” said Kathy Lange, research director of the AI and Automation Practice at IDC.
“As AI providers, professionals and personal users, we must ask: Generative AI is trusted, but is it always trustworthy? And are leaders applying the necessary guardrails and AI governance practices to this emerging technology?”
Overall, the study found that respondents placed the most trust in emerging technologies, like generative AI and agentic AI, over more established forms of AI.
Almost half of respondents (48%) reported “complete trust” in generative AI, while a third said the same for agentic AI (33%). The least trusted form was traditional AI – fewer than one in five (18%) indicated complete trust in it.
Even as they reported high trust in generative AI and agentic AI, survey respondents expressed concerns, including data privacy (62%), transparency and explainability (57%), and ethical use (56%).
Meanwhile, confidence in quantum AI is growing quickly, even though the technology needed to execute most use cases has yet to be fully realized. Almost a third of global decision makers say they are familiar with quantum AI, and 26% report complete trust in it, despite real-world applications still being in their early stages.
The study showed a rapid rise in AI usage – particularly of generative AI, which has quickly eclipsed traditional AI in both visibility and adoption (81% vs. 66%). This rise has brought new risks and ethical concerns.
Across all regions, IDC researchers identified a misalignment between how much organizations trust AI and how trustworthy the technology actually is. While 78% of organizations claim to fully trust AI, only 40% have invested in making their systems demonstrably trustworthy through AI governance, explainability and ethical safeguards.
The research also showed that organizations place a low priority on trustworthy AI measures when operationalizing AI projects. When naming their top three organizational priorities, only 2% of respondents selected developing an AI governance framework, and fewer than 10% reported developing a responsible AI policy.
However, deprioritizing trustworthy AI measures may prevent these organizations from fully realizing the value of their AI investments down the road.
Researchers divided survey respondents into trustworthy AI leaders and trustworthy AI followers. Leaders invested the most in practices, technologies and governance frameworks to make their AI systems trustworthy – and appear to be reaping rewards.
Those same trustworthy AI leaders were 1.6 times more likely than followers to report double or greater ROI on their AI projects.
The study identified three major hurdles preventing success with AI implementations: weak data infrastructure, poor governance and a lack of AI skills.
Nearly half of organizations (49%) cite noncentralized data foundations or nonoptimized cloud data environments as a major barrier. This top concern was followed by a lack of sufficient data governance processes (44%) and a shortage of skilled specialists within their organizations (41%).