The New EU AI Act: A game changer for HR?
Harvard University’s Paola Cecchi Dimeglio has been advising the EU on AI and big data – in her expert opinion, here’s everything HR leaders need to know about the newly introduced EU AI Act.
Expert Insight
The EU's AI Act is here - and it is not something that organizations, and particularly HR teams, can overlook.
The Act brings real risks, but also significant opportunities, for HR leaders, not just in the EU but further afield.
In this exclusive UNLEASH OpEd, Harvard Faculty Chair and AI expert Paola Cecchi Dimeglio shares her top tips for grappling with this new AI regulation.
The European Union’s adoption of the Artificial Intelligence Act (AI Act) on March 13, 2024, is a landmark moment for regulating AI.
This legislation sets stringent rules on data quality, transparency, human oversight, and accountability; it will impact how businesses deploy AI.
For HR leaders, who are often responsible for implementing and overseeing AI systems within their organizations, understanding the implications of this regulation is crucial.
Having worked in the field of big data and AI for over two decades, advising private companies and public entities such as governments and the EU on AI policy and governance in the US and Europe, I can assure you that the significance of this regulatory milestone should not be underestimated.
Getting it right from the start is crucial.
Let’s dig in.
Understanding the EU AI Act
The EU’s AI Act includes specific provisions for generative and general-purpose AI, reflecting the evolving landscape of AI applications.
The Act defines AI systems by two key characteristics: AI systems operate with varying levels of autonomy and infer from the input they receive to generate outputs such as predictions, content, recommendations, or decisions.
This differentiates AI systems from traditional software, where outputs are pre-determined by explicitly programmed rules.
This broad, technology-neutral definition aims to ensure the AI Act remains adaptable and relevant.
The Act categorizes AI systems into four tiers: unacceptable risk, high risk, limited risk, and minimal risk.
This ensures that AI systems deployed in the EU are safe, transparent, and respectful of fundamental rights.
Here’s a quick overview of the four categories of AI risk:
1. Unacceptable risk:
AI systems that threaten safety, livelihoods, and rights are prohibited. Examples include AI-driven toys that encourage dangerous behavior in children or systems used by employers to exploit worker vulnerabilities.
2. High risk:
AI systems with significant impact on people’s lives are subject to stringent requirements. This includes AI used in critical infrastructure, healthcare, and employment processes.
High-risk systems must meet strict criteria for data quality, transparency, human oversight, and robustness.
For instance, an AI system used to screen job candidates must ensure it does not discriminate against any group and that its decision-making process is transparent and understandable.
3. Limited Risk:
AI systems in this category require transparency but are subject to less stringent controls.
Examples include AI chatbots interacting with customers, which must inform users they are engaging with AI and provide a way to contact a human operator if needed.
4. Minimal Risk:
AI systems posing minimal or no risk are not subject to specific regulatory requirements.
Examples include AI-powered video games or spam filters. Providers are encouraged to adopt voluntary codes of conduct to ensure ethical use.
The EU AI Act has significant extraterritorial reach, with fines of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious breaches.
This means it isn’t just relevant for organizations based in the EU, it also impacts any companies worldwide that operate within the EU market.
The Act’s provisions apply to AI developers, importers, distributors, and deployers, making it a critical concern for HR departments worldwide.
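To make the scale of the penalty concrete, the cap on the top tier of fines can be sketched as a simple calculation. The figures below are illustrative only; the Act applies lower caps to lesser infringements, and the function name is mine, not the Act's.

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound of the top-tier AI Act penalty: EUR 35 million
    or 7% of global annual revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# A company with EUR 2 billion in annual revenue faces a cap of
# EUR 140 million, since 7% of revenue exceeds the EUR 35m floor.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```

For smaller companies the flat €35 million figure dominates, which is why the Act is a board-level concern regardless of company size.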
The impact of the EU AI Act on HR
The AI Act’s potential to transform HR initiatives within organizations is profound.
Here are key areas where the Act intersects with HR objectives:
1. Bias detection and mitigation:
The AI Act emphasizes minimizing bias in AI systems.
For HR leaders, this means ensuring AI tools used within their organizations comply with the Act’s requirements for data quality and bias mitigation.
AI can detect and address biases in hiring practices, compensation structures, and performance evaluations, prompting human oversight to correct disparities.
Tools like Syndio or Beqom for pay analysis and IDEA (Intelligent Data-Driven Evaluation Analytics) or Culture Amp for continuous performance management can help identify and correct disparities, ensuring fair treatment across different demographics.
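As a minimal, vendor-neutral sketch of what a pay-disparity check does under the hood (synthetic data and a deliberately simplified method; real tools such as those named above control for role, level, and location, and this code reflects none of their actual methodology):

```python
from statistics import mean

def pay_gap_by_group(records):
    """Mean salary per demographic group and each group's gap
    versus the overall mean. `records` is a list of
    (group, salary) pairs - synthetic data for illustration."""
    overall = mean(salary for _, salary in records)
    groups = {}
    for group, salary in records:
        groups.setdefault(group, []).append(salary)
    return {g: (mean(s), mean(s) - overall) for g, s in groups.items()}

sample = [("A", 52000), ("A", 54000), ("B", 48000), ("B", 50000)]
# Flag groups whose mean pay falls below the overall mean,
# surfacing them for human review rather than automated correction.
for group, (avg, gap) in pay_gap_by_group(sample).items():
    if gap < 0:
        print(f"Group {group}: mean {avg:.0f}, {abs(gap):.0f} below overall")
```

The key design point, in line with the Act's human-oversight requirement, is that the code only surfaces disparities; the decision about how to correct them stays with people.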
2. Diverse candidate sourcing:
AI-powered recruitment tools can significantly enhance the diversity of candidate pools.
The AI Act’s transparency requirements mean AI systems must clearly communicate their functioning, helping remove biases from the hiring process.
Tools like HireVue or Manatal for recruitment anonymize resumes by stripping out personal information, ensuring hiring decisions are based on skills and experience rather than demographics.
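The idea behind anonymization can be sketched in a few lines, assuming candidate data arrives as structured records. (Real products work on free-text resumes and are far more sophisticated; the field names below are hypothetical.)

```python
# Fields that could reveal demographic information and should be
# withheld from screeners during a blind-review step.
PII_FIELDS = {"name", "email", "phone", "photo_url", "date_of_birth", "address"}

def anonymize(candidate: dict) -> dict:
    """Return a copy of the record with identifying fields removed,
    keeping only job-relevant attributes such as skills and experience."""
    return {k: v for k, v in candidate.items() if k not in PII_FIELDS}

applicant = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "skills": ["Python", "SQL"],
    "years_experience": 6,
}
print(anonymize(applicant))  # {'skills': ['Python', 'SQL'], 'years_experience': 6}
```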
3. Inclusive communication and accessibility:
The AI Act promotes the use of AI to foster inclusive communication.
AI-driven tools for automatic translation, transcription, and summarization can break down language barriers and make communication more accessible to employees with disabilities.
HR leaders should ensure that any such tools used in their organizations comply with the AI Act and are deployed ethically.
4. Personalized learning and development:
AI can provide personalized learning experiences that cater to diverse employee needs.
AI-driven learning platforms can tailor training programs to individual learning styles and career development paths, creating a more inclusive environment where all employees have equal growth opportunities.
HR leaders must ensure these platforms adhere to the AI Act’s transparency and data quality regulations.
Strategic planning for HR around the EU AI Act
For HR leaders, navigating the compliance landscape of the AI Act may be tricky, but it is paramount.
Here are some steps I recommend you consider:
1. Conduct a comprehensive audit: Assess current AI systems for compliance with the AI Act’s requirements. Identify high-risk systems and implement necessary changes to mitigate risks.
2. Develop transparent policies: Ensure that all AI-driven decision-making processes are transparent and that employees understand how these systems function and their impacts.
3. Engage in continuous monitoring: Regularly audit AI systems for biases and discriminatory outcomes. Consider third-party audits to ensure unbiased assessments.
4. Stay informed: Keep abreast of emerging AI regulations and adapt compliance strategies accordingly. The AI Act is expected to be supplemented by additional EU legislation, particularly in areas such as employment and copyright.
To conclude, the AI Act represents a significant regulatory milestone with far-reaching implications for diversity, equity, inclusion, and belonging (DEIB) and for the responsibilities of HR leaders.
By fostering transparency, minimizing bias, and promoting inclusive practices, the Act offers a framework that supports the ethical deployment of AI.
For HR leaders, proactive compliance with the AI Act is not just a legal obligation; it is also an opportunity to meet the standards set by the EU while promoting a fair and inclusive workplace.
Faculty Chair
Cecchi Dimeglio is the Faculty Chair ELRIWMA at Harvard University & Founder of People Culture Data Consulting Group