Asana: 64% of employees believe AI agents are unreliable, calling for more ‘training, clarity, and guardrails’
Two-thirds of employees don’t have faith in AI agents, but 76% view them as “fundamental” to the future of work, according to Asana’s new Global State of AI at Work 2025 report.
News in Brief
Asana’s new Global State of AI at Work 2025 report provides new insights into how employees view AI agents.
Although there is a distinct lack of trust, employees still view them as an integral part of the workforce.
In an exclusive conversation with Mark Hoffman, Work Innovation Lead at Asana Work Innovation Lab, UNLEASH explores the report in greater depth.
Is your workforce stuck between ambition and reality?
According to the Global State of AI at Work 2025 report from San Francisco-based software company Asana’s Work Innovation Lab, employees expect AI to complete 32% of their workload in the next 12 months, and 41% within three years.
However, only 25% say they are ready to hand that work over today.
This shows a disparity between what employees hope to use AI for and what is currently possible.
Mark Hoffman, Work Innovation Lead at Asana Work Innovation Lab, gives his exclusive take on the research.
AI agents in action
64% believe AI agents are unreliable, according to Asana’s latest research.
Compounding this issue, the data found that the average organization has no accountability in place for AI agents’ mistakes.
When mistakes do arise, there is little agreement about who should take responsibility for an agent’s failings. Over a third (39%) believe that no one is responsible, while 20% name the end user, 18% blame IT teams, and 9% point the finger at the agent’s creator.
Yet almost three-quarters of employees polled reported using AI agents, with 76% viewing them as “fundamental” to how work will be completed in the future.
“This year’s most surprising finding is the gap between adoption and trust,” Hoffman comments.
“While 74% of UK workers are already using AI agents, nearly two-thirds worry about them being unreliable – and over half are concerned they’ll share incorrect information.
“This distrust highlights how quickly AI has entered the workplace – and why building confidence is now the critical next step to unlocking its full potential.”
However, the report also highlights that there is a lack of clarity from organizations, as few have clear guardrails in place. In fact, only 10% have clear ethical frameworks for agents, 10% have deployment processes, and 9% review employee-created agents.
“Access to AI tools isn’t enough. Employees are clear about what they need: training, clarity, and guardrails,” Hoffman adds.
“Yet most organizations haven’t delivered – stalling adoption at a surface level, compounding ‘AI debt,’ and leaving teams frustrated instead of empowered.”
Despite 63% of employees believing that accuracy should be a top metric for AI agents, only 18% of businesses measure errors, causing issues with trust, reliability, and quality of work.
Additionally, these mistakes are likely to see 79% of organizations rack up “AI debt,” leaving businesses with unreliable systems, poor data quality, and weak oversight.
To mitigate this, Asana highlights the importance of training employees properly on AI systems, with 82% saying such training is “essential”. In contrast, only 32% of organizations offer it.
Concluding, Hoffman says: “HR leaders are uniquely positioned to close this trust gap – by redesigning workflows, putting clear governance in place, and helping employees build the skills to work effectively with AI agents.
“Organizations that make these investments are already moving beyond pilots and starting to see real impact.”
Senior Journalist, UNLEASH
Lucy Buchholz is an experienced business reporter. She can be reached at lucy.buchholz@unleash.ai.