Lenovo: Four ways HR leaders should safeguard their organizations from AI-powered cyber attacks
Cyber attacks are a top concern for 61% of leaders, meaning better usage policies and AI education are needed, according to a new report. Lenovo’s Rakshit Ghura shares more exclusively with UNLEASH.
News in Brief
Lenovo, a $69 billion-revenue multinational tech company, recently released its Work Reborn report, which looked into how businesses can mitigate AI threats as the workplace transforms digitally.
The report found that although AI agents are creating a new class of threat, 60% of leaders do not feel prepared to manage these attacks.
In an interview, Rakshit Ghura, Vice President & General Manager of Lenovo Digital Workplace Solutions, gives UNLEASH his exclusive insights.
AI is not only changing the way the workplace operates, it’s also changing how businesses must approach cybersecurity.
As more organizations are embracing the advantages of AI, they’re also being met with a fast-evolving threat that can be easily overlooked: cyberattacks.
Lenovo’s global study of CIOs, Work Reborn, found that although leaders are optimistic about AI’s potential, more than half are not yet prepared for the risks it brings.
To gain a deeper understanding of how cyberattacks may impact businesses, as well as what HR leaders can do to prepare for these threats, UNLEASH spoke exclusively to Rakshit Ghura, Vice President & General Manager of Lenovo Digital Workplace Solutions.
Identifying cyberthreats to the workplace
As we embrace a new AI-driven world, Ghura insists that “new risks call for new approaches” to security, as traditional defenses, such as static training and antivirus software, are no longer enough.
Ghura explains that AI-generated attacks can “mutate, mimic legitimate behavior, and bypass legacy safeguards,” meaning leaders must rethink their security architecture from the ground up.
Lenovo’s report found that almost half (48%) of IT leaders report feeling ‘very’ or ‘somewhat’ confident in their ability to manage AI risks, with 70% recognizing that employee misuse of AI is a major risk.
What’s more, AI agents are believed to be creating an entirely new class of insider threat, which 60% of leaders aren’t prepared to manage, and fewer than 4 in 10 leaders are confident in their ability to manage either of these internal risks.
“To close this gap, organizations need clear AI use policies and employee education on AI security,” Ghura explains.
While AI promises significant productivity gains, it also introduces new security risks. IT leaders are concerned about cybercriminals using AI to amplify attacks – threats that evolve in real time and are harder to detect.
“Therefore, organizations must implement continuous monitoring capable of identifying abnormal behavior as it emerges,” Ghura adds.
The report also found that there is a significant confidence gap in defenses.
Almost two thirds (61%) of IT leaders cite AI-powered cyberattacks as a top concern, yet only 31% say they are equipped to respond. To keep pace, AI-powered threats must be countered with AI-powered defenses.
This shows that while IT leaders recognize that data protection and vulnerability management are critical, over half admit that their current systems are insufficient to address AI-driven threats.
Without a clear understanding of these risks, organizations cannot accurately assess their readiness or effectively mitigate vulnerabilities.
Ghura highlights: “This lack of confidence is compounded by skills shortages, legacy systems, and limited budgets, all of which leaders cite as barriers to effective AI-powered cybersecurity.
“While no leader has ever been dismissed for overinvesting in security, many have lost their roles following major breaches.
“A trusted partner can help close this gap, empowering organizations to build stronger, more resilient security.”
How do these security changes impact HR leaders?
To help HR leaders better safeguard their organization, Ghura pinpoints four ways in which leaders can act: education, policies, audits, and rewards.
Firstly, HR leaders must ensure that employees are educated on the new security risks, particularly as Gen AI fuels sophisticated phishing, voice, and video impersonation.
“HR teams must go beyond standard awareness training,” he says. “Employees should be equipped to recognize deepfakes, AI-enabled social engineering, and the risks of inputting sensitive data into public AI tools.”
To support and guide this, HR leaders should also establish clear AI usage policies, as they play a pivotal role in shaping organizational norms. Ghura explains that establishing “explicit, enforceable AI usage guidelines” covering everything from employee access to data-sharing boundaries is “key” to fostering trust and compliance.
Thirdly, Ghura touches on the importance of auditing access rights, describing it as “critical.”
He adds: “If AI agents are not given controlled access rights, they can quickly undermine internal data protection measures. To minimize risk, HR leaders should ensure that both AI systems and employees can only access the data necessary for their roles.”
Likewise, they need to ensure that the rewards of AI outweigh the risks. He concludes: “When implemented properly, the benefits AI brings to organizations far exceed the risks.”
With this in mind, is your organization well equipped to navigate this new era of AI security?
Senior Journalist, UNLEASH
Lucy Buchholz is an experienced business reporter. She can be reached at lucy.buchholz@unleash.ai.