As artificial intelligence continues to reshape the workplace, employers must navigate a shifting legal and regulatory landscape. Our Employment Law & Benefits team explores the rise of AI in human resources, covering applications in recruitment, performance management, and employee monitoring. The team also offers practical insights on managing legal risks, such as AI-driven discrimination and excessive monitoring, and highlights upcoming EU regulations that will impact HR practices from 2025 onwards.
The rise of AI in HR: current applications and legal considerations
AI has been quietly transforming HR processes for years, but recent advancements in generative AI have thrust these technologies into the spotlight. From recruitment to performance management, AI is reshaping how organisations approach human capital management.
Recruitment and hiring
AI-powered tools are increasingly used in various stages of the recruitment process, including:
- Creating job advertisements
- Screening CVs and ranking candidates
- Conducting initial interviews
- Scheduling and organising the hiring process
While these tools offer significant efficiency gains, they also present legal risks. Amazon's machine-learning recruitment tool, developed from 2014 and ultimately abandoned, serves as a cautionary tale. The AI system, trained on historical hiring data, systematically discriminated against women applying for technical positions. This highlights the potential for AI to perpetuate and amplify existing biases if not carefully designed and monitored.
Performance management and training
AI systems are being deployed to:
- Analyse employee tasks and create performance targets
- Identify skills gaps and recommend training programmes
- Automate aspects of performance reviews
While AI can provide data-driven insights, over-reliance on automated systems for performance management may erode the relationship of trust and confidence between employer and employee. Employers should maintain human oversight and ensure that employees can challenge AI-generated assessments.
Monitoring and surveillance
The use of AI for employee monitoring has become increasingly sophisticated, with capabilities including:
- Tracking user location and computer activity
- Analysing productivity metrics
- Using facial recognition for time and attendance
The 2024 case against Amazon France Logistique (AFL) illustrates the legal risks associated with excessive monitoring. The French data protection authority (CNIL) imposed a €32 million fine on AFL for disproportionate and invasive monitoring of warehouse employees using handheld scanners. The decision underscores the need for employers to balance legitimate business interests with employee privacy rights.
The EU AI Act: a new regulatory landscape
The EU AI Act, which came into force on 1 August 2024, introduces a risk-based approach to regulating AI systems. There are four risk levels:
- Unacceptable risk (Prohibited)
- High risk
- Limited risk
- Minimal risk
Key dates for employers
- 2 February 2025: Provisions on AI literacy and prohibited AI practices take effect
- 2 August 2026: Regulations on high-risk AI systems become applicable. This is the key date for HR professionals and employers.
Implications for HR and employment practices
Prohibited practices
The Act bans certain AI practices in the workplace, including:
- Use of emotional inference AI systems during interviews or task performance
- Biometric categorisation systems that infer protected characteristics including race, religious belief, or sexual orientation
High-risk AI systems
Crucially, most HR-related AI applications fall under the high-risk category, including systems used for:
- Recruitment and selection
- Promotion and termination decisions
- Task allocation
- Performance monitoring and evaluation
High-risk AI systems are subject to stringent requirements, including:
- Pre-market conformity assessments
- Post-launch monitoring obligations
- Human oversight requirements
- Transparency and explainability standards
Provider vs deployer obligations
The Act distinguishes between AI system providers (developers) and deployers (users). Most employers will be classified as deployers, with less onerous obligations than providers. However, deployer responsibilities still include:
- Using the system according to provider instructions
- Assigning competent personnel to oversee the AI system
- Ensuring input data quality and relevance
- Monitoring system performance and reporting incidents
- Maintaining system logs where possible (see the illustrative sketch after this list)
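To make the log-keeping duty more concrete, the following is a minimal sketch, in Python, of how a deployer might record AI-assisted HR decisions for later audit. The schema and field names are our own illustration; the AI Act does not prescribe any particular format, so treat this as a starting point rather than a compliance template.

```python
import json
import logging
from datetime import datetime, timezone

# Write structured, append-only records of AI-assisted HR decisions.
# The file name and fields below are hypothetical examples.
logging.basicConfig(filename="ai_hr_decisions.log", level=logging.INFO)

def log_ai_decision(system_name: str, decision_type: str,
                    model_version: str, human_reviewer: str,
                    outcome: str) -> None:
    """Append a structured record of an AI-assisted decision for later audit."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,            # e.g. the CV-screening tool in use
        "decision_type": decision_type,   # e.g. "shortlisting", "promotion"
        "model_version": model_version,   # supports traceability across updates
        "human_reviewer": human_reviewer, # evidences human oversight
        "outcome": outcome,
    }
    logging.info(json.dumps(record))

# Example usage with placeholder values:
log_ai_decision("cv-screener", "shortlisting", "v2.3",
                "hr.manager@example.com", "candidate advanced")
```

Recording the model version and the human reviewer alongside each outcome helps a deployer evidence both traceability and human oversight if a decision is later challenged.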
Employers should be mindful that if an HR team significantly modifies an existing AI system, or repurposes a non-high-risk system for a high-risk application, the organisation may be reclassified as a provider and incur more substantial obligations.
Existing legal framework: continued relevance
While the Act introduces new regulations, existing employment laws remain crucial for ensuring fair and lawful use of AI in the workplace.
AI-driven decisions in recruitment, promotion, and termination must comply with the Employment Equality Acts 1998-2015. Employers are responsible for demonstrating that AI-driven decisions do not result in discrimination against individuals based on protected characteristics outlined in the legislation. These protected characteristics include gender, civil status, family status, sexual orientation, age, religious belief, membership of the Traveller community, race, and disability. Employers should regularly audit AI systems for potential bias and maintain human oversight so that employment decisions can be explained and justified.
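By way of illustration only, here is a minimal Python sketch of one widely used statistical screen for bias: comparing selection rates between groups against the "four-fifths" heuristic. The figures and group labels are invented, and a real audit would need proper data, statistical testing, and legal advice; this simply shows the kind of check an audit might include.

```python
# Illustrative bias screen using the "four-fifths rule" heuristic:
# a group's selection rate below 80% of the highest group's rate
# is a common trigger for closer review. Numbers are placeholders.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical screening outcomes per group:
groups = {
    "group_a": {"applicants": 200, "selected": 60},
    "group_b": {"applicants": 180, "selected": 30},
}

rates = {name: selection_rate(g["selected"], g["applicants"])
         for name, g in groups.items()}
highest = max(rates.values())

for name, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{name}: selection rate {rate:.2%}, "
          f"impact ratio {impact_ratio:.2f} -> {flag}")
```

A flagged ratio does not itself establish discrimination; it indicates that the system's outputs warrant investigation and, where necessary, human intervention.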
Aside from obligations under equality legislation, the use of AI in employment decisions must not breach the implied common law duty of mutual trust and confidence. Employers should be able to explain how AI-generated decisions are reached to maintain transparency and trust.
Top tips for employers
AI offers tremendous potential to enhance HR practices and drive organisational efficiency. However, the legal and ethical implications of these technologies cannot be overlooked. By staying informed of regulatory requirements, maintaining robust governance structures, and prioritising fairness and transparency, organisations can harness the benefits of AI while mitigating legal risks. With that in mind, in advance of August 2026, we recommend that organisations:
- Conduct an AI inventory: Identify all AI systems used in HR processes (a minimal inventory sketch follows this list).
- Assess risk levels: Determine which systems fall under high-risk categories in the AI Act.
- Review vendor agreements: Ensure AI providers comply with relevant regulations and can support your obligations as a deployer.
- Implement governance structures: Assign responsibility for AI oversight and compliance within your organisation.
- Develop AI policies: Create clear guidelines for the acceptable use of AI in HR processes.
- Train staff: Educate personnel on AI capabilities, limitations, and legal implications.
- Establish audit procedures: Regularly test AI systems for bias and effectiveness.
- Plan for human oversight: Ensure mechanisms are in place for human review of significant AI-generated decisions and develop processes to explain AI-driven decisions to employees and candidates.
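As a starting point for the first two steps above, the sketch below shows one simple way an inventory might be structured in Python. The fields and risk categories reflect the Act's four risk levels, but the record format itself is our own suggestion, not a format prescribed by the legislation.

```python
from dataclasses import dataclass

# Illustrative structure for an AI inventory. Field names and example
# entries are hypothetical; adapt them to your own systems and processes.

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    hr_use: str           # what the system does in the HR process
    risk_category: str    # "prohibited", "high", "limited", "minimal"
    human_oversight: str  # who reviews the system's outputs

inventory = [
    AISystemRecord("cv-screener", "ExampleVendor",
                   "ranks applicants", "high", "Recruitment lead"),
    AISystemRecord("policy-chatbot", "ExampleVendor",
                   "answers HR policy queries", "limited", "HR operations"),
]

# Flag everything that will attract high-risk obligations from 2 August 2026:
for record in inventory:
    if record.risk_category == "high":
        print(f"{record.name}: high-risk - confirm conformity assessment, "
              f"oversight arrangements, and vendor compliance")
```

Even a simple register like this makes it easier to scope vendor reviews, governance assignments, and audit procedures before the high-risk provisions apply.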
If you would like to discuss any related issue impacting your organisation, contact a member of our Employment Law & Benefits or Artificial Intelligence teams.
People also ask
Are employers considered deployers or providers?
In most cases, an employer using an AI system will be classified as a deployer, which carries less onerous compliance obligations. Exceptions apply where an employer develops or significantly modifies an AI system.
What risk category applies to an HR AI system?
In most cases, HR-related AI systems will be classified as high-risk, and their use will be regulated accordingly.
When do the regulations governing high-risk AI systems come into force?
2 August 2026.
The content of this article is provided for information purposes only and does not constitute legal or other advice.