AI in hiring: The growing trust gap between employers and job candidates
“AI has already transformed recruitment in profound ways, but this transformation comes with new responsibilities,” writes Gartner’s Jamie Kohn in this exclusive UNLEASH OpEd. Here’s how to close the trust gap and reap the recruitment rewards.
Expert Insight
AI is the core of modern hiring, but it is also eroding trust between employers and candidates.
In this exclusive UNLEASH OpEd, Gartner HR Senior Director of Research Jamie Kohn shares her top tips on how to close the trust gap around AI in hiring.
Hint: transparency is key!
Artificial Intelligence (AI) has quickly become a cornerstone of the modern recruitment process.
Its capacity to handle vast amounts of data, identify patterns, and streamline decisions has made it an invaluable tool for organizations looking to fill roles faster and more efficiently.
Simultaneously, job seekers are using AI to enhance their applications and gain a competitive edge.
However, this widespread adoption is not without consequence.
The growing role of AI in hiring has exposed a deep and widening trust gap between employers and job candidates, one that threatens to upend the delicate balance of the recruitment ecosystem.
Gartner research has revealed the complexity of this tension.
On the one hand, candidates are wary of how AI evaluates their applications, questioning its fairness, accuracy, and impartiality.
On the other, employers are grappling with the rise of AI-driven candidate fraud – where job seekers leverage technology not just to enhance applications, but in some cases to misrepresent their qualifications or identities altogether.
This dual challenge has forced recruiters to ask a fundamental question: How can they harness the benefits of AI while maintaining trust, fairness, and transparency in the hiring process?
The candidate perspective: Mistrust of AI in the recruitment process
For job candidates, the rapid integration of AI into recruitment has sparked significant unease.
A March 2025 Gartner survey revealed that only 26% of candidates trust AI to evaluate them fairly, even though more than half believe AI is used to screen their application materials.
This disparity highlights a growing mistrust of AI, rooted in concerns about bias, dehumanization, and a lack of transparency.
Many candidates worry that AI systems treat them as mere data points rather than as individuals with unique skills and experiences.
Unlike human recruiters, who can account for nuances and context, AI is often perceived as inflexible. The fear that an algorithm might unfairly disqualify them – whether due to errors, incomplete data, or biased programming – is a persistent concern for job seekers.
In addition, there is a sense that AI-driven processes strip away the personal connections that are central to building trust during recruitment.
Beyond these worries about fairness and personalization, candidates are also skeptical of the legitimacy of the hiring process itself.
The March survey revealed that only half of candidates trust that the jobs they are applying for are real – an anxiety fueled in part by the prevalence of ‘ghost jobs’, where employers post roles with no real intention to hire.
This uncertainty, layered on top of widespread concerns about economic instability and the risk of layoffs, has already made candidates more cautious about committing to offers.
Acceptance rates have already dropped sharply – from 74% in 2023 to just 51% in 2025, according to June 2025 Gartner research – and AI’s growing role in hiring risks eroding trust even further.
Candidates, increasingly discerning in their job searches, hesitate to commit to organizations that fail to meet their expectations for fairness, clarity, and transparency.
Yet, candidates are not merely passive players in this evolving landscape.
AI has become a significant tool for job seekers as well.
39% of candidates admit to using AI during the application process, leveraging it to craft tailored resumes, personalized cover letters, and polished writing samples, according to a December 2024 Gartner survey.
While these tools can help candidates stand out in competitive job markets, they also complicate matters for recruiters, who face significant challenges in discerning authentic skills from AI-generated embellishments.
As candidates adopt AI to represent themselves, the boundary between truth and augmentation becomes increasingly blurred, further deepening the trust divide.
The employer perspective: Managing fraud without breaking trust
For employers, the rise of candidate fraud has become an urgent concern.
Gartner predicts that by 2028, one in four candidate profiles could be fake.
Candidates are increasingly deploying AI not just to refine their applications but to intentionally misrepresent themselves.
The risks of candidate fraud extend far beyond making a bad hire.
Fraudulent candidates can pose cybersecurity threats, jeopardize compliance, and harm overall business operations.
These risks have prompted many organizations to adopt stronger anti-fraud measures, such as advanced identity verification, anomaly detection, and background checks.
However, these efforts come with their own challenges: overly invasive measures or opaque safeguards can inadvertently erode trust further and alienate legitimate candidates.
Bridging the trust gap
As AI becomes increasingly intertwined with the hiring process, earning and maintaining candidates’ trust requires transparency.
Organizations must clearly explain how AI is being used in recruitment and provide assurances that the process is fair, accurate, and free of bias.
Candidates also need clarity around acceptable and unacceptable uses of AI during the application process, such as generating resume content or interview responses – clear boundaries that help foster a more respectful and balanced dynamic between employers and job seekers.
For candidates, trust depends on fairness, transparency, and a sense of being seen as individuals rather than just another data set.
For employers, it lies in their ability to mitigate fraud risks without alienating the very talent they wish to attract.
Building this trust is no small feat, but it is essential to creating a hiring ecosystem that benefits both sides.
The presence of human oversight remains critical to addressing the trust gap.
Candidates are far more comfortable applying for jobs when employers integrate human interaction into key stages of the hiring process.
In-person interviews, for example, provide candidates with reassurance that their applications will be evaluated holistically rather than reduced to data points.
While AI may assist in narrowing down pools of applicants, human judgment lends credibility to final hiring decisions and fosters a sense of fairness.
At the same time, fraud prevention efforts must become more sophisticated to account for the rising complexity of candidate misrepresentation.
Employers can embed fraud detection technologies, such as identity verification and anomaly alerts, across recruitment systems to identify risks early and mitigate potential harm.
By focusing on system-wide validation rather than invasive tactics, organizations can ensure that fraud detection remains robust without undermining trust.
By prioritizing transparency, maintaining human oversight, and designing fraud prevention tools that respect candidates’ dignity, employers can position AI as a trusted tool rather than a source of friction.
Ultimately, those organizations that prioritize both fairness and innovation in recruitment will gain a competitive edge in an increasingly complex hiring landscape.
AI has already transformed recruitment in profound ways, but this transformation comes with new responsibilities.
The employers who succeed in bridging the gap between innovation and trust will emerge as leaders – not only in adopting new technologies but in shaping the future of talent acquisition itself.
Senior Director of Research, Gartner HR
Jamie Kohn is senior research director for Gartner HR, leading research strategy for heads of talent acquisition.