In 2024, the European Union passed a comprehensive law on artificial intelligence — officially known as the EU AI Act — to establish a unified legal framework for the development and use of AI systems. As the first legislation of its kind globally, the regulation aims to promote the safe adoption of AI while minimizing its risks.

In comparison, the United States has yet to adopt a comprehensive federal law on artificial intelligence. Instead, AI development is governed by a patchwork of sectoral and state-level rules, supported by the NIST AI Risk Management Framework and recent executive orders.

IONOS AI Model Hub
Your gateway to a secure multimodal AI platform
  • One platform for the most powerful AI models
  • Fair and transparent token-based pricing
  • No vendor lock-in with open source

Why did the EU introduce a law on artificial intelligence?

The EU’s law on artificial intelligence was introduced to establish a clear and unified legal framework for the use of artificial intelligence across Europe. The European Commission presented the first draft in April 2021; political agreement was reached in December 2023, and the regulation was formally adopted in 2024, entering into force on August 1, 2024. The regulation was prompted by rapid advancements in AI technology, which bring both opportunities and serious risks. Societal and ethical challenges — such as algorithmic bias, lack of transparency in automated decisions, and the potential misuse of AI for mass surveillance — made it clear that legal regulation was urgently needed.

The aim of the law, officially known as the EU Artificial Intelligence Act (EU AI Act), is to encourage innovation without compromising fundamental European values such as data protection, security, and human rights. The EU has taken a risk-based approach, imposing strict regulations or outright bans on high-risk AI applications. At the same time, the law aims to strengthen European companies in the global market by fostering trust and legal certainty.

Note

The EU AI Act is part of a broader legal ecosystem. Businesses engaging with the EU market, including US companies, should be aware of other applicable rules and regulations, such as the Geo-blocking Regulation and the ePrivacy Directive (often referred to as the Cookie Directive).

How the law classifies AI systems by risk category

The EU’s law on artificial intelligence categorizes AI systems into four risk levels based on their potential impact:

  1. Unacceptable risk: This category includes AI systems considered a threat to safety, livelihoods, or individual rights. These systems are prohibited. Examples include social scoring systems, where government bodies assess the behavior or personality of individuals, and AI systems used for facial recognition in public spaces without consent.
  2. High risk: These systems are allowed but subject to strict requirements, which impose significant obligations on system providers and operators. This category includes AI used in critical infrastructure (e.g., for safety in transport) as well as AI used in HR, where decisions on hiring or firing must meet specific standards to protect both employees and applicants.
  3. Limited risk (transparency risk): This category covers AI systems designed for direct interaction with users. Such systems carry specific transparency requirements, meaning users must be informed when they are interacting with AI. Most generative AI falls into this category.
  4. Minimal risk: Most AI systems fall into this category and are not subject to specific obligations under the EU law on artificial intelligence. Examples include spam filters and AI-driven characters in video games.
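The four tiers above can be pictured as a simple lookup from example use cases to their regulatory consequence. The sketch below is purely illustrative — the tier names and examples come from this article, while the function and dictionary are our own assumptions, not part of any official tooling:

```python
# Illustrative sketch only: a toy mapping of the EU AI Act's four risk tiers
# to example use cases named in this article. Not legal advice or official tooling.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed, subject to strict requirements"
    LIMITED = "allowed, with transparency duties"
    MINIMAL = "no specific obligations"

# Hypothetical classifications, drawn from the article's examples
EXAMPLE_USE_CASES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "safety component in transport infrastructure": RiskTier.HIGH,
    "AI-assisted hiring decisions": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return the (simplified) regulatory consequence for a known example."""
    tier = EXAMPLE_USE_CASES[use_case]
    return f"{tier.name.lower()} risk: {tier.value}"

print(obligations_for("spam filter"))  # minimal risk: no specific obligations
```

In practice, of course, classification depends on the system's concrete context of use, not on a static label — the same underlying model can fall into different tiers depending on how it is deployed.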

What should AI developers and providers keep in mind?

The EU’s law on artificial intelligence establishes a set of requirements for developers and providers of AI systems, particularly high-risk ones, to ensure these technologies are used responsibly. The requirements cover various aspects, including transparency, security, accuracy, and the quality of the underlying data. They are designed to ensure the safety and trustworthiness of AI technologies without unduly hindering innovation.

Risk management

Companies must implement a continuous risk management system to identify, assess, and minimize potential risks. This includes regularly reviewing their AI system’s impact on individuals as well as on society as a whole. Focus areas include the prevention of discrimination, unintended biases in decision-making, and risks to public safety.

Data quality and bias prevention

The training data used to develop an AI system must meet high quality standards. This means the data must be representative, error-free, and sufficiently diverse to avoid discrimination and biases. Companies are required to establish mechanisms to detect and correct these biases, especially when AI is used in sensitive areas such as personnel decisions or law enforcement.

Documentation and logging

Developers must create and maintain comprehensive technical documentation for their AI systems. These documents should not only describe the structure and functionality of the system but also make the AI’s decision-making processes understandable. Additionally, companies must keep records of their AI systems’ operations to allow for future analysis or potential troubleshooting.
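The record-keeping obligation can be pictured as an append-only log of each automated decision. The following is a minimal sketch under our own assumptions — the field names are illustrative, not terminology defined by the AI Act — showing the kind of trace that would support later analysis or troubleshooting:

```python
# Minimal sketch of decision logging for later audit and troubleshooting.
# Field names are illustrative assumptions, not terms defined by the AI Act.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    timestamp: str          # when the decision was made (UTC, ISO 8601)
    model_version: str      # which system/version produced it
    input_summary: str      # what the decision was based on
    output: str             # the decision itself
    human_reviewed: bool    # whether a human checked the result

def log_decision(record: DecisionRecord, sink: list) -> None:
    """Append one decision as a JSON line to an in-memory sink (a file in practice)."""
    sink.append(json.dumps(asdict(record)))

audit_log: list[str] = []
log_decision(DecisionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="screening-model-v2",   # hypothetical system name
    input_summary="applicant CV features",
    output="invite to interview",
    human_reviewed=True,
), audit_log)
```

Writing one self-describing JSON line per decision keeps the log append-only and machine-readable, which is what makes reconstructing a decision path feasible months later.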

Transparency and user information

The EU AI Act requires that users be clearly informed when they are interacting with an AI system. For instance, chatbots or virtual assistants must disclose that they are not human. In cases where AI systems make decisions with significant impact on individuals (e.g., regarding loan or job applications), affected persons have the right to an explanation of how the decision was made.

Human oversight and intervention

High-risk AI systems must not operate fully autonomously. Companies must ensure that human control mechanisms are integrated so that humans can intervene and make corrections if the system behaves erroneously or unexpectedly. This is particularly important in areas such as medical diagnostics or autonomous mobility, where wrong decisions can have severe consequences.

Accuracy, robustness, and cybersecurity

The EU AI Act mandates that AI systems be reliable and robust to minimize the risk of erroneous decisions and security threats. Developers must demonstrate that their systems function stably under various conditions and cannot easily be affected by external attacks or manipulation. This includes cybersecurity measures, such as protection against data leaks or unauthorized manipulation of algorithms.

Conformity assessments and certification

Before a high-risk AI system can be brought to market, it must undergo a conformity assessment to verify that it meets all regulatory requirements. In some cases, an external audit by a notified body is required. The regulation also provides for continuous monitoring and regular re-evaluations of the systems to ensure that they continue to meet the standards.

What are the implications for businesses?

The EU’s AI Act provides businesses with a clear legal framework, aiming to promote innovation and trust in AI technologies. However, it also brings greater compliance burdens and demands technical adjustments and strategic planning. Companies that develop or use AI technologies must carefully study the new requirements, both to avoid legal risks and to remain competitive in the long term.

Increased compliance burdens and costs

One of the biggest challenges for companies is the additional cost of complying with the new regulations. Providers and users of high-risk AI systems must take extensive measures, which may involve investments in new technologies, skilled personnel, and potentially external consultants or auditing bodies. Small and medium-sized enterprises (SMEs), in particular, could struggle to raise the financial and personnel resources needed to meet all regulatory requirements.

Companies that fail to comply with the regulations risk heavy fines of up to €35 million or 7% of global annual turnover for the most serious violations, comparable in scale to those already faced under the EU’s General Data Protection Regulation (GDPR).

Opportunities for innovation

Despite the additional regulations, the law could help strengthen trust in AI systems and promote innovation in the long term. Companies that adapt early to the new requirements and develop transparent, safe, and ethical AI solutions could gain a competitive advantage.

By introducing clear rules, a unified legal framework has been established within the EU, reducing uncertainty around AI development and use. This makes it easier for companies to market their technologies throughout the EU without dealing with different national regulations.

The EU AI Act is also one of the first laws of its kind worldwide and sets high standards. Companies that meet these standards can position themselves as trusted providers in the marketplace, giving them an advantage over competitors adhering to less stringent rules.

Global reach and impact on US companies

The EU’s law on artificial intelligence does not only apply to companies based in the EU. It also applies to international firms that offer AI systems in the European Union or use EU-collected data for AI applications. For example, a US-based company offering AI-powered recruitment software in the EU must comply with European regulations.

This global reach forces many companies outside the EU, including US ones, to adjust their products and services to meet the new standards if they want to access the European market. While this could lead to a more globally uniform approach to AI regulation, it could also pose a barrier for non-European companies seeking to enter the EU market.

However, there are concerns that European companies could fall behind internationally because of these regulations. While countries like the United States and China push AI innovation forward with fewer restrictions, the strict EU regulation could slow down the development and implementation of new technologies in Europe. This could be particularly challenging for European startups and SMEs, which compete with tech giants that have significantly larger resources.

AI Tools at IONOS
Empower your digital journey with AI
  • Get online faster with AI tools
  • Fast-track growth with AI marketing
  • Save time, maximize results