Trustworthy AI? An Overview of the EU AI Act

March 10, 2022

In 2021, the European Commission presented a new regulatory framework for AI – the Artificial Intelligence Act. The “EU AI Act” proposes rules for the development, commercialization, and use of AI-driven products, services, and systems within the EU.

These rules, coupled with a plan to invest €1 billion per year in AI, are part of a broader effort to make the EU a global hub for “trustworthy AI.”

The draft regulation divides AI technology into four categories based on the risk it may pose to citizens:

  • Minimal Risk - AI technologies that pose minimal or no risk for citizens, such as AI-enabled video games or spam filters, will be free to use and the new rules won't apply to them. The EU Commission believes that a majority of AI applications will fall in this category.

  • Limited Risk - AI technologies in this category will have transparency requirements. Organizations must inform citizens that they are interacting with a machine and provide the option to opt out. Examples include chatbots and emotion recognition systems.

  • High Risk - This category covers the use of AI in employment processes, legal procedures, and other areas considered controversial or potentially harmful to a person’s interests. The draft suggests that AI systems in this category be “carefully assessed before being put on the market and throughout their lifecycle.”

  • Unacceptable Risk - The last category is for AI technologies that pose “a clear threat to the safety, livelihoods and rights of people.” Examples include social scoring and facial recognition by governments, dark-pattern AI, and other manipulative uses of AI.

“It takes a proportional and risk-based approach grounded in one simple logic: the higher the risk that a specific use of AI may cause to our lives, the stricter the rule.”

Margrethe Vestager, executive vice-president of the European Commission

The European Commission has also proposed establishing a European AI Board to help national authorities implement and monitor the new regulations. Similar to the General Data Protection Regulation (GDPR), the new AI rules will apply to every public and private organization operating in Europe, as well as to any foreign organization whose AI products or services serve the European market.

It’s still unclear exactly how the EU would enforce the new AI rules or, if they are adopted, how long it would take for them to go into effect. As with GDPR, the process could take years: many companies didn’t start receiving fines for GDPR violations until two to four years after the regulation took effect, and it may be a similar wait before we see significant enforcement of the AI Act.

Can the EU set a global standard for AI?

Regulating AI is one of the most challenging issues of our time. The EU AI Act is one of the world’s most significant attempts at drafting regulations explicitly aimed at AI. The proposed rules have far-reaching implications for AI’s use in everything from law enforcement and education to employment and entertainment. 

There is always some risk that the initiative could fall short of its goals, resulting in an impractical and ineffective approach to AI regulation. But there is also hope that it could serve as a “global standard,” inspiring other regions around the world to pursue more trustworthy AI applications. 

(Figure: countries with AI policy initiatives)


At least 60 countries have adopted some form of AI policy since 2017. Canada’s federal government, for instance, now requires algorithmic impact assessments for automated decision-making systems used by federal institutions.

In the UK, the Central Digital and Data Office (CDDO) has launched an algorithmic transparency standard for government departments and public sector bodies.

Australia’s government has also launched the National Artificial Intelligence Centre to help unlock the potential of AI for business, with plans to invest AU$124.1 million under its AI Action Plan.

In the US, some of the largest financial institutions have been asked to disclose how they use AI. The US Federal Trade Commission has also made it clear that it plans to take legal action against organizations that fail to mitigate AI bias or otherwise use AI in ways that harm consumers.

While some regions, such as the EU and US, are starting to align on AI regulation, the rapid growth of AI governance initiatives creates new challenges for international cooperation. As AI becomes ubiquitous in online services and accessible from virtually anywhere with an internet connection, a unified approach to governance may be the best option for effective oversight and for promoting AI best practices.

“By drafting the Artificial Intelligence Act and embedding humanist norms and values into the architecture and infrastructure of our technology, the EU provides direction and leads the world towards a meaningful destination,” states Mauritz Kop, author of the paper “EU Artificial Intelligence Act: The European Approach to AI.”

“While enforcing the proposed rules will be a whole new adventure, the novel legal-ethical framework for AI enriches the way of thinking about regulating the Fourth Industrial Revolution,” he adds.

Navigating AI regulations

As AI becomes increasingly embedded into products, processes, and decision-making, more regulation is undoubtedly on the horizon. Companies will need new processes and tools to monitor their AI systems and flag outputs that raise regulatory concerns.
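One simple form such tooling could take is an audit trail that records every automated decision alongside the model version that produced it. The sketch below is illustrative only: the model name, input fields, and file path are hypothetical, not drawn from any particular regulation or product.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class DecisionRecord:
    """One audit-trail entry per automated decision."""
    model_name: str
    model_version: str
    inputs: dict
    output: float
    timestamp: str


def log_decision(model_name, model_version, inputs, output,
                 path="decision_audit.jsonl"):
    """Append one decision to an append-only JSON Lines audit log."""
    record = DecisionRecord(
        model_name=model_name,
        model_version=model_version,
        inputs=inputs,
        output=output,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Hypothetical example: record a single credit-scoring decision for later review
log_decision(
    model_name="credit_risk",
    model_version="1.4.2",
    inputs={"income": 52000, "tenure_months": 18},
    output=0.82,
)
```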

According to McKinsey research, only 48% of organizations reported that they recognized regulatory-compliance risks in 2020, and even fewer (38%) reported that they were actively working to address them. 

The EU has identified explainability as a critical factor in increasing trust in AI. Balancing regulatory compliance with stakeholder interests will take work, but companies that build explanatory capabilities into their AI will be better positioned to win the trust of regulators when the time comes.
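To make that concrete, here is a minimal sketch of per-decision explainability using the open-source SHAP library. The dataset and model are public stand-ins, not a depiction of any regulated system or of our platform.

```python
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor

# Train a simple tabular model on a public dataset (a stand-in for a real system)
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer attributes each individual prediction to the input features
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # explain a single decision

# Rank the features by how strongly they influenced this one prediction
contributions = sorted(
    zip(X.columns, shap_values[0]),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)
for feature, value in contributions:
    print(f"{feature}: {value:+.3f}")
```

An explanation like this, attached to each decision, is the kind of artifact a regulator or internal reviewer could inspect after the fact.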

Rather than cutting back on AI development, the most successful organizations are establishing frameworks for risk management and compliance that enable them to continue innovating and deploying AI safely. According to McKinsey, establishing an “AI governance system” can help organizations reach this goal.

Put your organization on the right side of AI

The EU AI Act is just the beginning of a wave of regulatory moves that are likely to transform everything we know about AI today. Though it will likely take several years before a global baseline standard for AI emerges, organizations that build transparency into their systems and processes now stand a better chance of being compliance-ready.

At Apres, we are helping organizations move in the right direction. 

Going beyond first-generation AI explainability, we have developed a platform that helps anyone in the organization understand and improve model decisions. We not only explain decisions to a standard beyond current and future regulation, but also use natural-language interpretations to bridge the gap between technical and non-technical teams.

For the first time, anyone within the organization can incorporate feedback into model decisions to help the model continuously improve. 

If you’d like to learn more about how we can help your team deliver more transparent, truly governable, and compliance-ready AI, contact us to discuss your goals and see a quick demonstration.