Many multinational companies today use artificial intelligence (AI) in some capacity and, thus, likely will have to meet the stringent legal and compliance obligations of the European Union’s AI Act, referred to by the European Parliament as the “world’s first comprehensive AI law.”

At a high level, the AI Act sets out a comprehensive set of legal and compliance obligations to promote “human-centric and trustworthy” AI while ensuring a high level of protection for health, safety, and fundamental rights.

Beyond its compliance requirements, the AI Act is also significant for its potential to result in the so-called “Brussels Effect,” said Vishnu Shankar, former head of legal at the Information Commissioner’s Office and now a partner in the London and Brussels offices of Morgan Lewis.

Coined by Columbia Law professor Anu Bradford, the Brussels Effect refers to the EU’s prominence in the global market and how its regulations generally shape those of other countries and the international business community. “While not guaranteed, there is a good chance that the AI Act will end up having a global legislative impact,” Shankar said.

This article explores the AI Act’s key provisions, what impact it will have on compliance programs, and what compliance steps companies should be taking now to prepare for the forthcoming, tiered implementation dates.

Scope of the Act

The AI Act is quite complex, but generally regulates the following two categories of AI:

  • AI systems: A machine-based system designed to operate with “varying levels of autonomy and that may exhibit adaptiveness after deployment, and that…infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” Common use cases include customer service chatbots, machine-learning translation, facial recognition, medical imaging, autonomous driving systems, and more.
  • General-purpose AI models (GPAI models): An AI model “trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks … and that can be integrated into a variety of downstream systems or applications.”

Many types of stakeholders make up the AI supply chain, including providers, deployers, authorized representatives, importers, and distributors. This article focuses mostly on the obligations of providers.

Under the AI Act, a provider is a party that develops an AI system or GPAI model and “places it on the EU market,” or that puts an AI system into service under its own name or trademark. Deployers are users of AI systems.

Importantly, the AI Act applies extraterritorially: it covers providers both inside and outside the EU, as well as providers and deployers outside the EU where the “output” of AI systems is “used in the EU.”

Four risk tiers

The AI Act’s central regulatory framework focuses on four risk tiers: prohibited, high-risk, transparency risk, and minimal risk.

From a legal and compliance standpoint, accurately assessing the risk tier under which the AI system or GPAI model falls is a critical step, because that will dictate the company’s compliance obligations that follow. “I cannot overemphasize how important this step is,” Shankar said.

If the risk tier of the AI system or GPAI model is mischaracterized, the company risks either being “over-compliant or under-compliant,” Shankar added. “You’re either unnecessarily handicapping your business or you’re unnecessarily exposing yourself to enforcement risk.”

Prohibited AI practices. Effective Feb. 2, 2025, certain AI practices will be prohibited altogether, including:

  • “biometric categorization and identification” that infer “sensitive attributes” (e.g., race, political opinions, religious beliefs, sexual orientation);
  • “subliminal, manipulative, or deceptive techniques” that distort behavior or exploit vulnerabilities due to age, disability, or a specific social or economic situation;
  • “social scoring” systems that evaluate or classify individuals or groups based on social behavior or personal traits;
  • AI that infers emotions in workplaces or educational institutions; or
  • real-time remote biometric identification, such as facial recognition, with limited exceptions for law enforcement purposes.

High-risk AI systems. The AI Act sets out a complex methodology for classifying high-risk AI systems. One category covers AI systems embedded as safety components in products that fall under the EU’s product safety legislation. This could include AI used in civil aviation, vehicles, industrial machinery, toys, and more.

A second category of high-risk AI systems applies to specific other areas, including:

  • Operation of critical infrastructures (e.g., in the fields of road traffic and the supply of water, gas, heating and electricity);
  • Education and vocational training;
  • “Employment, worker management and access to self-employment” (e.g., for purposes of recruitment, candidate selection, or evaluation of performance); and
  • Access to essential private and public services and benefits (e.g., assessing eligibility for healthcare and insurance services, and credit scoring).

Limited exemptions apply, such as for high-risk AI systems placed on the market or put into service before Aug. 2, 2026 (absent any significant changes to their design), or for systems that meet other risk qualifiers set out in the Act.

Transparency risk. This risk category covers AI systems subject to specific transparency obligations. For example, providers of generative AI, like ChatGPT, must mark AI outputs in a “machine-readable format.” Deployers of generative AI systems that artificially generate or manipulate text, image, audio, or video content constituting deep fakes “must visibly disclose that the content has been artificially generated or manipulated.”
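
To illustrate what a “machine-readable” marking might look like in practice, consider the following minimal sketch, which attaches a simple provenance record to generated text and serializes it as JSON. It is purely illustrative: the Act does not prescribe this format, and the field names (ai_generated, generator, generated_at) are hypothetical.

    import json
    from datetime import datetime, timezone

    def mark_ai_output(text: str, model_name: str) -> dict:
        """Wrap generated text in a provenance record flagging it as AI-generated."""
        return {
            "content": text,
            "provenance": {
                "ai_generated": True,        # machine-readable disclosure flag
                "generator": model_name,     # hypothetical identifier of the generating system
                "generated_at": datetime.now(timezone.utc).isoformat(),
            },
        }

    record = mark_ai_output("Draft product description…", "example-genai-model")
    print(json.dumps(record, indent=2))  # serialized, machine-readable form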

Minimal risk. According to the European Commission, the “majority of AI systems” can be developed and used under existing legislation without additional legal obligations. However, providers of those systems may voluntarily choose to apply the Act’s requirements.

Compliance obligations

Most of the AI Act’s compliance requirements fall on providers of high-risk AI systems. As described by the Act, those compliance requirements include, in summary:

  • Establish, implement, document, and maintain a risk management system throughout the high-risk AI system’s lifecycle (Article 9).
  • Conduct data governance based on training, validation, and testing datasets that meet an extensive list of criteria set out in the Act (Article 10).
  • Draw up technical documentation before placing a high-risk AI system on the market or putting it into service (Article 11) that provides national competent authorities and notified bodies with the necessary information to assess the AI system’s compliance with the Act’s requirements, including the elements in Annex IV.
  • Design and develop high-risk AI systems capable of logging events (Article 12). The Act describes what types of events the logging capabilities should record (see the illustrative sketch after this list).
  • Design and develop high-risk AI systems that enable deployers to interpret a system’s output and use it appropriately (Article 13). Instructions for use must be provided and contain specific information described by the Act.
  • Design and develop high-risk AI systems that can be overseen by humans to prevent or minimize health and safety risks, or violations of fundamental rights (Article 14).
  • Design and develop high-risk AI systems in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity (Article 15).

Providers of GPAI models have compliance obligations as well, albeit to a lesser extent. Such obligations include providing technical documentation and instructions for use, complying with the Copyright Directive, and publishing a summary of the content used for training. Providers of GPAI models that present a systemic risk must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.

Penalties for violations are potentially significant, ranging from the greater of €7.5 million or 1% of global group annual revenue to the greater of €35 million or 7% of global group annual revenue, depending on the nature of the violation and the size of the company.
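
As a rough illustration of how these ceilings work (the revenue figure below is hypothetical), the maximum fine at each tier is simply the higher of the fixed amount and the percentage of global annual revenue:

    def max_fine(global_revenue_eur: float, fixed_cap_eur: float, pct_of_revenue: float) -> float:
        """Return the higher of the fixed cap and the revenue-based cap."""
        return max(fixed_cap_eur, pct_of_revenue * global_revenue_eur)

    revenue = 2_000_000_000  # hypothetical €2 billion in global annual revenue

    # Top tier (e.g., prohibited AI practices): greater of €35 million or 7% of revenue.
    print(max_fine(revenue, 35_000_000, 0.07))  # 140000000.0, i.e., €140 million

    # Lowest tier: greater of €7.5 million or 1% of revenue.
    print(max_fine(revenue, 7_500_000, 0.01))   # 20000000.0, i.e., €20 million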

Additional compliance measures

As a company establishes controls around the design, development, and operation of AI, it needs governance processes that ensure IT, product engineering, data science, and research and development teams work hand-in-hand with legal and compliance, Shankar stressed.

He offered a step-by-step compliance checklist of questions to consider:

  • Are any of the business’s AI-enabled technologies, applications, or products characterized as an AI system or GPAI model?
  • Does the business perform any of the roles regulated by the AI Act, such as that of a provider or deployer?
  • For non-EU companies, does the AI Act apply extraterritorially?
  • Under which risk tier does the system fall? Are there any prohibitions?
  • What exemptions, if any, apply?

Staying abreast of regulatory updates and guidance will also be important. Many U.K. regulators have issued strategic approaches to AI regulation, and companies should refer to these resources for further guidance in complying with the Act.
