Responsible AI: When Should AI Lead, Support, or Step Aside in Hiring?
HR leaders reflect on responsible AI at scale at a recent UNLEASH Roundtable, convened under the Chatham House Rule.
Roundtable Insights
Around half of organizations now use AI for recruiting activities like job-description writing, resume screening, and candidate matching.
But AI brings real risks that need human oversight: about four in 10 HR professionals cite algorithmic bias as a top concern with AI systems, and many organizations are still struggling with AI fairness, transparency, and ethical governance.
As AI capabilities rapidly expand in HR, the most successful organizations aren’t asking “Can AI do this?” but rather “Should AI do this?”
At a recent closed-door roundtable, held under the Chatham House Rule, senior HR leaders came together to confront one of the most pressing questions shaping the modern talent landscape: What does ‘responsible AI’ actually look like in hiring, and when should AI lead, support, or step aside?
Across industries, leaders agreed: the experimentation era is over.
The playground phase of piloting tools in isolated pockets has given way to a new moment: one where organizations must step back, evaluate their foundations, and define a clear AI strategy before scaling.
You Can’t Build AI on Broken Foundations
Many of the leaders around the table described the same dilemma: intense excitement paired with operational reality.
Some organizations are already testing AI at scale (screening hundreds of thousands of applications with automation, mapping interview questions to competency frameworks, and using LLMs to infer skills from unstructured data).
Yet even the most advanced teams issued a clear warning: AI can’t sit atop outdated systems, inconsistent processes, or legacy organizational structures.
Before embedding intelligent technology across the talent lifecycle, HR needs a stable foundation of data infrastructure, job architecture, role clarity, and governance.
Treating AI as core infrastructure requires more than buying tools. It demands ownership, guardrails, and a shared understanding of what “good” looks like.
Ownership, Accountability and the ‘Culture Gap’
Throughout the discussion, leaders emphasized the growing disconnect between technical capability and cultural readiness.
HR is clearly facing:
- Fear and territorial behavior (“What does this mean for my job?”)
- Uncertainty about the real value AI brings
- Low levels of AI literacy among HR teams and hiring managers
- Blurred lines of accountability (“Who is responsible for outcomes?”)
The group agreed: AI transformation is as much cultural as it is technical.
The unknown triggers anxiety, especially when people worry that the work they’ve “always been known for” could now be automated.
But they were clear. AI isn’t here to replace people. It’s here to amplify human capability, especially in the moments where judgment matters.
Rethinking Roles Through Skills and Tasks
AI is accelerating the shift from job-centric models to a task- and skills-based approach.
Several organizations are already breaking down roles to determine:
- What should AI automate?
- Where should AI act as a co-pilot?
- Where must humans interpret, contextualize, or decide?
Emerging ideas around agentic AI are enabling HR teams to infer skills, model future workforce needs, and advise the business faster than ever.
But complexity is growing, with leaders juggling competing priorities, limited resources, and technical challenges created by “machines talking to machines.”
As one participant commented, in this new reality “HR must become both the poets and the plumbers”, a take on Brené Brown’s new book.
HR needs to set a vision for AI while building the operational plumbing required to make that vision work.
Guardrails, Regulation, and the Trust Equation
A central theme was trust.
With increasing pressure, much of it playing out in the media, around DEI, employment law, transparency, and model validation, HR leaders stressed the need for:
- Clarity on when, how, and why AI is used
- Transparent communication with candidates and employees
- Acknowledgment that AI is a thinking partner, not a hidden substitute
- Proof that humans remain the final decision-makers
As one leader put it: “AI should make you more human.”
When HR over-trusts technology, it risks major failures. When it over-controls, innovation stalls.
The balance comes from embracing AI’s contribution without giving up accountability.
Candidates Are Already Using AI, So HR Has to Adapt (If It Hasn’t Already)
Another increasing pressure point: candidates are using AI to write resumes, prepare for interviews, and even generate answers during assessments.
Some leaders’ initial reaction has been to “shut it down”.
But the group agreed: the goal isn’t to block candidates from using AI. It’s to upskill interviewers so they can better assess real capability.
That means stronger interviewer training, more consistent frameworks, and clarity about what AI-augmented candidate behavior means for fairness.
Scaling AI Responsibly Starts With Clear Intent
The roundtable closed with a reminder: AI is only valuable when it solves a real business problem.
Without clear intent (better speed, stronger insights, fairer decisions, or higher-quality hiring), AI becomes noise, complexity, or wasted investment.
Leaders emphasized the importance of:
- Using low-code use cases to demonstrate early value
- Prioritizing modular, reusable AI solutions
- Aligning AI investments to specific workforce outcomes
- Preserving human oversight, empathy, and connection
“We’re all on the same ocean, but in different boats,” one participant noted.
Every organization is moving toward AI, but at different levels of maturity, readiness, and risk appetite.
AI Should Support, Not Replace, Human Judgment
AI will change workflows, roles, and expectations. It will automate many tasks HR has historically owned.
But it cannot replace the human elements that matter most, such as empathy, context, accountability, and nuanced decision-making.
Forward-thinking HR leaders see AI as a powerful thinking partner: a tool that surfaces insights faster, removes administrative noise, and frees HR to focus on the interactions that truly count.
Responsible AI isn’t about letting the machine take the lead.
It’s about knowing when AI should support, when it should step aside, and when humans must stay firmly in control.