The EU Artificial Intelligence (AI) Act has finally come into force, which means new rules and regulations for many organisations that use AI. HR is one field that’s enthusiastically adopting the technology, so your company may already be using AI technologies that fall under the Act’s scope.
This is where privacy and data protection professionals can step up to help their colleagues in HR to manage the burden of a new regulation. Many organisations may think the AI Act doesn’t apply to them, but there could be hidden risks in common organisational processes like HR.
A survey of 1,000 UK HR professionals found that 94 per cent think the technology can be “instrumental” in helping them with challenges in their work. That can include increased productivity, efficiency and cost reductions.
The risks emerge because HR data usually involves a lot of personal – and often highly sensitive – information. What’s more, use of AI could pose risks to employees’ fundamental rights where AI has made or influenced an unfair, biased, or discriminatory decision affecting their employment. This would be enough for the AI to be classified as ‘high risk’ under the Act.
In the data protection world, we’re already seeing the need for AI assessments within organisations. In this blog, I’ll look at the privacy risks from HR processes that may use AI, and explore ways to manage those risks.
Understanding the EU AI Act
The EU AI Act, the world’s first comprehensive AI law, regulates the development, deployment, and use of artificial intelligence in the EU. It aims to ensure that AI technologies are safe, transparent, and respect fundamental rights.
The AI Act predominantly imposes obligations on ‘providers’ – the AI developers – rather than on organisations using the AI system itself. However, the Act recognises that significant risks can stem from how AI systems are used, which means providers cannot comprehensively assess the full potential impact of a high-risk AI system during the conformity assessment. Organisations deploying those systems therefore have obligations of their own to uphold fundamental rights.
Although many AI systems pose minimal risk, they still need to be assessed. If you are deploying AI tech on the EU market, you may be regulated under the Act.
AI: a hidden risk in HR?
One example of the potential for bias is discrimination in the recruitment process. Employers using AI systems for this may be subject to the regulations for ‘high-risk’ AI. This applies when AI is used in activities like placing targeted job ads, analysing and filtering applications, and assessing candidates.
High-risk AI systems need to adhere to certain requirements, including risk management, data quality, transparency, human oversight and accuracy. Meanwhile, organisations that deploy or use those systems face obligations around quality management, monitoring, record-keeping, and incident reporting. High-risk AI systems are subject to stringent regulations, and non-compliance could result in significant penalties.
Mitigating actions to consider
To guard against the potential risks of AI models, privacy professionals can follow these six steps to help HR teams avoid potential pitfalls under the AI Act.
- Implement an AI governance plan
This sounds very formal, but I liken it to a cookie governance plan you would have for your website. An AI governance plan should make clear who will be the AI champion in the organisation – that could be the privacy professional, or it could be the head of HR. It’s about making sure there is a human contact who people can report to and who has the authority to intervene if there is an issue. The AI champion should be an expert in the relevant organisational processes, workflows, and objectives. The plan should also include regular checks on a defined schedule, with updates to HR processes accordingly.
- Identify and monitor AI within HR processes
Does your HR team know whether their software tools use AI or machine learning? Find out which tools already have some AI, or are planning to implement it. Dig deep to understand whether the AI is just automating simple administrative tasks, or if it’s contributing to actual decisions on hiring or promotion.
- Risk assess and audit AI systems and software
Carry out comprehensive checks on what AI tools the business is using, and make sure there are adequate controls in place so people can express their rights and freedoms.
Make sure any AI systems comply with the AI Act, and check them against the high-risk category to make sure they don’t negatively impact people’s fundamental rights. I would suggest this assessment is very similar to a GDPR gap analysis, checking to see where the activity crosses into the realm of what the regulation specifies. This assessment should note what controls you have in place, and who’s responsible for them.
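The inventory these two steps describe could live in something as simple as a spreadsheet. As a minimal sketch of what each entry might capture, here is one possible record structure in Python – the field names and the rough high-risk flag are illustrative assumptions on my part, not terms defined in the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in a hypothetical AI-tool inventory for HR software."""
    name: str                    # the tool or module in use
    purpose: str                 # what it does in the HR workflow
    influences_decisions: bool   # does it affect hiring/promotion outcomes?
    processes_personal_data: bool
    owner: str                   # the nominated human contact / AI champion
    controls: list[str] = field(default_factory=list)

    @property
    def likely_high_risk(self) -> bool:
        # Rough first-pass indicator only: AI that influences employment
        # decisions and touches personal data deserves a full assessment.
        return self.influences_decisions and self.processes_personal_data

# Example entry for a hypothetical recruitment tool
tool = AIToolRecord(
    name="CV screening module",
    purpose="Filters and ranks incoming applications",
    influences_decisions=True,
    processes_personal_data=True,
    owner="Head of HR",
    controls=["human review of rejections", "quarterly bias audit"],
)
print(tool.likely_high_risk)  # True
```

A record like this makes it easy to see at a glance which tools need the deeper, GDPR-gap-analysis-style assessment, and who is accountable for each control.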
- Ensure there’s human oversight and the option for human intervention
As with data subject rights under GDPR, there needs to be scope for human intervention: decisions on employment terms or HR matters shouldn’t be completely automated. You need a human to check that there are no biases in the hiring process, and a nominated person to act as a contact for possible issues. If people who are applying for roles, or who may have been overlooked for a promotion, feel they’ve been exposed to bias, they need a point of contact, and they are entitled to know that the decision was overseen by a human. AI deployments must also protect people’s personal data, which means providing mechanisms for individuals to control, and be informed about, how their data is collected and used.
- Implement training and awareness on AI systems
Although much of the AI Act is aimed at organisations that create AI systems and write the algorithms, organisations that use those systems still need to abide by the rules. So, the training should familiarise HR professionals with the key privacy issues and make them aware of potential risks like bias. The training should call attention to privacy considerations that HR teams might not have thought about. It should make them aware of when AI is involved and where it intersects with personal data. The training should guide them towards knowing what processes they need to have in place to mitigate the risks.
- Ensure AI systems comply with existing workplace policies
Pay particular attention to data protection policies and privacy notices, and stay informed about compliance requirements. As AI’s rapid adoption across many parts of business and operations continues, it’s important to stay up to date with what the regulations require. Just like our understanding of data protection under the GDPR evolved over time, it’s likely that our interpretation of AI rules will need to do the same.
AI in HR? Yes you can
Privacy professionals can reassure their colleagues in HR: yes, it’s possible to deploy AI, but it’s important to do so while being mindful of the potential risks. It’s still early days for the AI Act, and as with GDPR, there are likely to be some faltering steps to begin with as everyone finds their feet with the new rules.
Taking the actions I’ve outlined here is a good place to start to mitigate those risks. When it comes to personal data, transparency always pays off. That way, HR and the business can aim to strike the right balance between the productivity benefits of AI and doing right by people’s privacy.
