
NormCyber data protection bulletin: 14th June 2024

The responsible use of AI in HR and recruitment

The Department for Science, Innovation & Technology (DSIT) has released guidance to help organisations that use artificial intelligence (AI) in recruitment adhere to the UK government's high-level principles for the regulation of AI. The guidance seeks to clarify how those principles apply to AI used in recruitment. Specifically, it outlines how organisations should adopt AI assurance mechanisms to support the responsible procurement and deployment of AI systems in HR and recruitment.


Comment: Although compliance with this guidance is not mandatory, it will be taken into consideration by regulators, including the ICO. While the guidance primarily focuses on AI in recruitment, many of its recommendations are applicable to the use and procurement of AI throughout the entire employment lifecycle.

As well as being aware of and implementing (as appropriate) this guidance, employers using AI at any stage in their employment lifecycles should, in particular, consider:

  • Completing due diligence before implementing any AI tools;
  • Encouraging HR to be open with candidates about how such technologies are used;
  • Implementing some form of regular human review of the results produced;
  • Putting in place policies that clearly set out expectations around AI use in connection with work;
  • Listening to employee feedback and addressing any concerns or issues that arise;
  • Reviewing their internal AI strategy and deciding on the steps required to align their use of AI tools with emerging regulatory frameworks.

The TUC’s AI Bill

The TUC’s Artificial Intelligence (Employment and Regulation) Bill (the Bill) aims to establish protections and rights for workers, employees, jobseekers and trade unions, as well as obligations for employers and prospective employers, in relation to decision-making at work that is based on artificial intelligence systems.

It aims to provide for the fair and safe operation of AI systems where ‘high-risk’ decision-making is involved, an approach similar to the one taken by the EU in its AI Act. The Bill defines high-risk decision-making as that which has “legal effects or other similarly significant effects”.

Key provisions of the Bill include:

  • employers must ensure that only safe AI systems are introduced into the workplace, carry out detailed risk assessments of AI decision-making, and publish a register of the AI decision-making systems in operation;
  • employees, workers and unions must be fully consulted, involved and informed before high-risk AI decision-making systems relating to employees are introduced;
  • all parties would have access to information about how an AI system operates and would have a right to human review of AI decision-making; and
  • emotion recognition technology would be banned.

Legal rights in the Bill which would go beyond current UK employment law include:

  • a right for unions to be given data about union members that is being used in relation to workplace AI decision-making;
  • a requirement for employers to show that there has been no AI-based discrimination, with a defence available where they can show they have properly audited the AI system;
  • a right for employees not to be unfairly dismissed by an AI system; and
  • a potential right for employees to disconnect outside agreed working hours, based on precedents from Europe and Australia.

Comment: Whether this stands any chance of becoming law probably depends on the outcome of the general election on 4 July and, in particular, on whether Labour wins.

Worker Compensated for AI Facial Recognition Discrimination

Mr M works as a delivery driver for Uber Eats. He was required to use the company’s app when he was available for work, and the app occasionally asked him to send “selfies” to register for jobs. However, after the company switched to the Microsoft-powered Uber Eats app, Mr M received repeated requests to verify his identity. If he did not pass these checks, he could not access the platform or obtain any work.

Mr M was removed from the platform due to “continued mismatches” in the photos he submitted to the facial recognition system. Every image Mr M submitted was of himself and there were no obvious changes to his appearance. Despite this, the facial recognition AI consistently failed to recognise him.

Mr M believed that he had not been recognised because he was black, and that the system discriminated against him. He was not given any opportunity to challenge his suspension and brought legal proceedings against the company.

Mr M settled his claim before the final hearing. He was reinstated and continues to work for the company.

Comment: AI systems can develop unintended biases as a result of the data used to train them. This kind of bias is not limited to facial recognition and can be found in a wide range of AI systems.

Action you should take: Carry out a data protection impact assessment (DPIA) before rolling out any AI system.

Proposal to Amend GDPR in UK at an End?

With the announcement that a General Election will be held on 4 July, the Data Protection and Digital Information Bill now seems very unlikely to become law. When the Election was called, the Bill was still making its way through Parliament. However, Parliament is due to be dissolved no later than 31 May, and there is no indication that the Bill will be fast-tracked before then.

Comment: Everything will now depend on what the new government does. If a Labour government assumes power, it is unlikely that the Bill will be re-introduced.
