The European Union Artificial Intelligence Act: an AI Game Changer


In recent months, the European Union adopted the EU AI Act, a set of regulations designed to ensure transparency and accountability in the use of artificial intelligence (AI) across its member states.

The Act classifies uses of AI by the level of risk they pose in various industries. One area it designates as high-risk is employment: specifically, systems used in evaluation, hiring, and recruiting, as well as in decisions about promotions, termination, and task allocation.

What does this mean for the HR industry?  


According to the EU AI Act, high-risk AI systems, such as resume-sorting software, must meet strict obligations before they can be introduced to the market. These obligations include:

  • Conduct comprehensive risk assessments and implement effective mitigation systems.
  • Train systems on datasets that meet high-quality standards to minimize risks and prevent discriminatory outcomes.
  • Log all activity to ensure the traceability of results.
  • Provide authorities with detailed documentation offering comprehensive information about the system for compliance assessments.
  • Provide clear and sufficient information to users.
  • Implement appropriate human oversight measures to minimize risks.
  • Ensure a high level of robustness, security, and accuracy.
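The logging and traceability obligation above can be illustrated with a short sketch. This is a minimal example only, not a reference implementation: the `log_decision` helper, the record schema, and every field name are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log, applicant_id, model_version, features, outcome):
    """Append one traceable record per automated decision (hypothetical schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "model_version": model_version,
        # Hash the inputs so the exact feature set behind a decision can be
        # verified later without storing sensitive applicant data in the log.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "outcome": outcome,
    }
    log.append(record)
    return record

# Example: record a single screening decision.
audit_log = []
log_decision(audit_log, "A-1001", "screener-v2.3",
             {"years_experience": 5, "degree": "BS"}, "advance")
```

In practice such records would be written to append-only, access-controlled storage so that how any individual decision was reached can be reconstructed on request.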

The New Era of AI Regulation Coming to the United States 
 

The growing influence of ChatGPT and other generative AI (GenAI) tools such as Copilot and Bard is transforming the HR landscape. These technologies have enormous potential for refining workplace procedures, managing routine tasks, and addressing complex projects. However, lawmakers have voiced concerns about potential privacy violations and biased decision-making linked to these tools. As a result, while HR professionals adopt them to enhance their productivity, they must also anticipate upcoming regulations.

In an unprecedented move, New York City enacted the nation’s first workplace artificial intelligence law, regulating automated employment decision tools. The law requires a bias audit before such tools are used and mandates that candidates or employees be notified when the tools factor into hiring or promotion decisions. States, along with the federal government, are currently considering AI legislation, signaling an increasing urgency to keep up with this rapidly evolving technology.

In the meantime, the U.S. Equal Employment Opportunity Commission (EEOC) and other federal agencies have offered guidance on how existing laws pertain to AI. Recent innovations have empowered HR departments to automate a greater number of functions, making their work more efficient. These tools are used to screen candidate resumes and automate benefit choices during open enrollment periods. Additionally, chatbots are being deployed to address inquiries about benefits plan features and options. 

However, it is important to note that GenAI is still undergoing testing and refinement. The results it produces should be carefully verified for accuracy before being fully relied upon. 


Ethical Considerations and Accountability in AI 
 

AI holds immense power and potential, but it also comes with ethical considerations and the need for accountability. The EU AI Act addresses these concerns head-on, requiring that AI be developed and used in a way that respects fundamental rights, promotes human dignity, and complies with the rule of law.

Best Practices for HR When Using AI Systems
 

To ensure compliance with the EU AI Act, existing U.S. regulations, and upcoming laws, consider the following steps, recommended by Benjamin Ebbink, an attorney with Fisher Phillips in Sacramento, California:

  1. Start with an AI inventory to identify how the company is currently using AI. 
  2. Be aware of any potential employment discrimination issues. 
  3. Consider performing impact assessments or bias audits. 
  4. Ensure compliance with applicable data privacy regulations. 
  5. Make sure a human is reviewing AI-generated content for accuracy and making final decisions.  
  6. Train employees on how to use AI appropriately and recognize its limitations. 
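Step 3 above, impact assessments or bias audits, often begins with a selection-rate comparison. One widely used screening metric is the four-fifths (80%) rule from the EEOC’s Uniform Guidelines: a group whose selection rate falls below 80% of the highest group’s rate is generally flagged for adverse impact. A minimal sketch (the group labels and counts are made up):

```python
def four_fifths_check(outcomes):
    """outcomes maps group -> (selected, total).

    Returns group -> (impact ratio vs. highest-rate group, passes 80% rule).
    """
    rates = {group: selected / total
             for group, (selected, total) in outcomes.items()}
    top = max(rates.values())
    return {group: (rate / top, rate / top >= 0.8)
            for group, rate in rates.items()}

# Example with hypothetical hiring counts: group_b's selection rate (24%)
# is only 60% of group_a's (40%), so it fails the four-fifths screen.
results = four_fifths_check({"group_a": (40, 100), "group_b": (24, 100)})
```

A failed screen is a signal for deeper review, not proof of discrimination; formal audits, including those required under the New York City law, involve more detailed statistical analysis.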

Beyond the Horizon: Overcoming Challenges 
 

One of the main challenges in regulating AI is finding a balance between promoting innovation and implementing necessary regulations.  

It is crucial for employers to proactively prepare for forthcoming regulations. One significant goal for many employers is eliminating human bias by using AI to make fair, data-driven decisions. However, AI can itself produce biased outcomes depending on the data it receives. By implementing clear processes focused on training, documentation, transparency, and human oversight, employers can reduce the risks of using AI while building trust with their employees and applicants.