
AI Regulations and the Workplace: An Overview for HR

As the use of AI continues to evolve, it's critical to comply with AI regulations like the EU AI Act and the U.S. AI Bill of Rights. But where do you start? Read on to learn more.


Generative AI (GAI) is quickly transforming how work gets done. HR organizations are using GAI to create personalized learning materials, design virtual training simulations, generate job descriptions, and even deliver fast, intuitive workforce insights to people leaders. Generative AI’s role goes beyond generating a few words, though; it’s becoming an integral part of many businesses.

AI plays a huge part in the datafication of HR, helping automate tedious, repetitive tasks like creating job descriptions and streamlining employee onboarding and training. As more business sectors adopt AI, ethical and privacy concerns are rising and the need for regulation is more pressing than ever. And countries are starting to take action.

In 2023, Italy banned ChatGPT for a few weeks until OpenAI addressed privacy concerns and ensured GDPR compliance. Privacy regulations established before ChatGPT are no longer enough to address new and evolving privacy concerns, so AI-specific regulations have started to appear.

Understanding and complying with these regulations is critical for businesses everywhere. HR, a department that constantly works with sensitive employee data, must pay special attention to them, ensuring that neither discrimination nor privacy breaches occur. AI regulations are currently being discussed worldwide, with the EU and the U.S. setting the tone. Their proposed solutions have many similarities—and differences—that businesses need to be aware of.

The EU AI Act

The European Union (EU) set the tone for privacy regulations with the GDPR in 2016, and it is doing the same with AI regulations. The AI Act, proposed by the European Commission on April 21, 2021, and approved by the European Parliament in March 2024, encompasses all types of AI and all sectors except for military and defense.

The AI Act classifies AI systems into three risk categories:

  1. Unacceptable risk apps. These include any usage that is a threat to human safety or human rights. Examples include government-run social scoring systems, cognitive behavioral manipulation, and remote biometric identification systems. These types of apps are banned under the AI Act, though some exceptions may be allowed for law enforcement purposes.

  2. High-risk apps. This category covers AI systems that pose a high risk to human safety or fundamental rights. High-risk systems include threats to physical safety where a malfunction could cause harm, like systems used in cars, medical devices, and infrastructure. It also includes CV-screening tools and exam scoring systems, where there’s a risk of processing data and profiling without consent. These apps are not banned, but their use is highly regulated. Generative AI isn’t considered high-risk, but its use must comply with EU copyright law and transparency requirements—we'll touch on those below.

  3. Low or minimal risk apps. This category covers AI systems subject only to specific transparency obligations, such as customer service chatbots that must disclose users are interacting with AI, along with minimal-risk tools like email spam filters.
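The tiered structure above can be sketched as a simple lookup-based decision rule. This is a hypothetical illustration, not legal guidance; the tier names and example use cases simply mirror the categories described above, and any real classification requires legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"          # e.g., government social scoring
    HIGH = "strictly regulated"      # e.g., CV screening, exam scoring
    LIMITED = "transparency duties"  # e.g., chatbots, spam filters

# Hypothetical mapping of the example use cases above to their tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "behavioral_manipulation": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "exam_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.LIMITED,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case's tier; unknown cases default to caution."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("cv_screening").value)  # strictly regulated
```

Defaulting unknown use cases to the high-risk tier reflects a cautious posture: it is safer to over-scrutinize a new AI use than to under-regulate it.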

The AI Act includes certain provisions that will impact various business units, especially HR:

  • Transparency. If you use AI, you’ll need to be transparent about it. This includes abiding by EU copyright law and complying with transparency requirements, including disclosing when content is generated by AI and publishing summaries of copyrighted data used for training.

  • Data governance. Managing data and ensuring its quality, security, availability, and integrity is always essential, but even more so when using AI. Ensure data is always up to date and available, and that you’re using it ethically, without compromising anyone’s privacy and security.

  • Risk assessment. Conducting a risk assessment when working with personal data, especially while using AI, is a must. Identify the risks, assess their impact, prioritize them, and then define your mitigation strategies.
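The assess-and-prioritize step above can be sketched as a simple likelihood-times-impact ranking. This is an illustration only; the risk names and scores here are hypothetical, and a real assessment would use your organization's own risk register and scoring scheme.

```python
# Hypothetical risk register: (risk, likelihood 1-5, impact 1-5)
risks = [
    ("biased screening output", 3, 5),
    ("training data leak", 2, 5),
    ("stale employee records", 4, 2),
]

# Prioritize by severity = likelihood * impact, highest first,
# so mitigation effort goes to the most serious risks.
prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in prioritized:
    print(f"{name}: severity {likelihood * impact}")
```

Once ranked, each risk can be assigned an owner and a mitigation strategy, starting from the top of the list.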

The U.S. AI Bill of Rights

In the United States, AI risk management is distributed across federal agencies. There are various AI-related data governance policies, each specific to certain sectors. In 2022, the Biden Administration released the Blueprint for an AI Bill of Rights, a nonbinding framework that outlines five principles to guide the “design, use, and deployment of automated systems to protect the American public.”

The Five AI Principles

  1. Safe and effective systems. AI systems should always be safe and effective for all users. This includes protections against inappropriate and irrelevant data use.

  2. Algorithmic discrimination protections. AI systems must be designed in an equitable way to protect users from algorithmic discrimination.

  3. Data privacy. AI systems need to protect individual privacy at all times, and users should be able to control how their data is used.

  4. Notice and explanation. AI systems require clear documentation that details how they work and make decisions, as well as their capabilities and limitations.

  5. Human alternatives, consideration, and fallback. Users should have the ability to opt out of automated systems and fall back on a human if the system has an error, fails, or if they want to challenge the decision.

Strategic compliance and best practices

Ensuring compliance with AI regulations can be daunting, especially if you operate in multiple jurisdictions. Here are a few best practices organizations can follow.

Developing an AI governance framework

Data governance is key in both the EU and U.S. privacy regulations, so developing a framework specifically with AI in mind will be the first step to compliance. 

Start by defining clear goals and principles that align with the company’s values. Next, consider what data your AI systems will use and conduct a risk assessment to help you identify potential compliance and ethical issues. Implement policies and procedures to ensure risks are mitigated as best as possible.

Don’t forget about transparency. Be open about the use of AI. Explain why and how you’ll use it, the implications, and the steps you’re taking to ensure privacy and security. Constantly monitor your systems and the use of AI to ensure everything is running as expected.

In HR, where you’re constantly processing the personal data of candidates and employees, ethics needs to take a central role. Ensure there is no bias and no discrimination. Provide a clear explanation about how you use AI, what decisions it can make, and the steps you’ve taken to protect data. 

Risk management strategies

Risk management is another key element of regulatory compliance when using AI. Before you implement AI in your company, conduct a risk assessment to identify all potential security and compliance issues. Once you have the risks, you can create a plan to mitigate them. 

Sometimes this could mean more privacy or security measures. Other times, you may simply need to limit the scope of AI or ensure sufficient human intervention to prevent issues like discrimination during candidate screening.

AI systems and regulations are constantly evolving, so you’ll need to conduct regular audits and assessments to ensure ongoing compliance. 

Training and awareness

Unless your team understands the implications and risks of using AI and its regulations, compliance will be challenging. Educate teams and employees on the ethical aspects to consider when using AI. Help teams understand the risks and the measures they must take to mitigate them.

Build a culture of transparency and accountability. Encourage communication, help employees understand how you’ll be using AI, and listen to their concerns when needed.

Preparing for the future of AI regulation

AI is shaping HR practices worldwide, taking tedious, repetitive work off HR teams’ plates and giving employees more time to focus on complex tasks. For example, HR teams can use AI to automate writing job descriptions and creating onboarding materials.

AI can also be applied to distribute information to end users more quickly. For example, GAI tools like Visier's Vee can deliver workforce insights to end users in seconds, enabling people analytics teams, HR leaders, people managers, and executives to make informed decisions that impact the business. Vee is designed for enterprise-level operations and strengthened by Visier's robust security model.

The increased focus on AI regulation worldwide underscores its significance. There are huge ethical implications on top of privacy and security concerns. Does this mean organizations will need to go back to doing everything manually? No, but keeping current with AI regulations and adhering to compliance requirements is essential.

HR can take a leading role by:

  • Delivering training to employees so everyone is on the same page about AI use.

  • Helping develop policies that comply with AI regulations.

  • Assisting with risk assessments.

  • Collaborating with IT teams to ensure employee data is protected.

  • Communicating with employees and fostering a culture of transparency and accountability.

  • Ensuring that decisions made with AI are free of discrimination and other unethical outcomes that could negatively impact employees or customers.


AI is here to stay, but to use it effectively, you need to pay close attention to AI regulations. The EU’s AI Act is legally binding, and noncompliance can come with severe consequences, including fines of up to €35 million or 7% of your annual global turnover.

HR leaders need to play a critical role in navigating these regulations and ensuring the company’s use of AI is ethical and doesn’t pose any risks to the privacy and security of its employees.

That’s not a task to undertake alone, though. AI regulations are complex, so engage with legal, ethical, and technological experts to ensure compliance. Together, you’ll be able to create a robust system that is compliant with regulations and allows you to stay competitive and reach your goals.

On the Outsmart blog, we write about workforce-related topics like what makes a good manager, how to reduce employee turnover, and reskilling employees. We also report on trending topics like ESG and EU CSRD requirements and preparing for a recession, and advise on HR best practices like how to create a strategic compensation strategy, metrics every CHRO should track, and connecting people data to business data. But if you really want to know the bread and butter of Visier, read our post about the benefits of people analytics.

