The EU AI Act: A comprehensive guide to the new AI regulation

Introduction to the EU AI Act

The EU AI Act is a European Union regulation dedicated to governing artificial intelligence (AI). The legislation aims to steer the use of AI technologies in the EU by promoting ethical standards and responsible practices, laying down provisions that minimize potential risks and strengthen consumer confidence.

Objective and significance of the EU AI Act

The EU regulation aims to create a safe and transparent framework for the use of AI systems. This is to ensure that AI technologies are reliable and ethically justifiable. The AI Act is designed to promote innovation while protecting the rights of citizens. Companies and organizations must adapt their AI systems and take compliance measures to meet the new requirements.

Risk-based categorization of AI systems

The EU AI Act divides AI systems into different risk classes, each of which requires different levels of regulation:

Illustration: the risk pyramid of the EU AI Act. Its four levels, from top to bottom, are 1. prohibited AI applications, 2. high-risk AI systems, 3. limited risk, and 4. minimal risk; each level represents the risk posed and the corresponding regulatory requirements.

  1. Prohibited applications: Systems that manipulate behavior, exploit vulnerabilities or engage in social scoring. These applications are banned outright, with only narrow, strictly regulated exceptions.
  2. High-risk systems: In critical sectors such as healthcare, finance and employment, with a major impact on security or rights. Most strongly regulated.
  3. Limited risk: Applications such as chatbots or content creation. Main requirement: transparency. AI-generated content must be labeled for the user.
  4. Minimal risk: Most AI applications, e.g. games, spam filters, recommendation systems.
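The four tiers above can be sketched as a simple lookup table. This is purely illustrative: the example use cases and their assignments below are assumptions for demonstration, and actual classification depends on the Act's legal definitions, not on labels like these.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act, from most to least regulated."""
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Illustrative mapping of example use cases to tiers (assumed, not legal advice).
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.PROHIBITED,
    "behavioral manipulation": RiskTier.PROHIBITED,
    "medical diagnosis support": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "CV screening for hiring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "AI image generation": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "recommendation system": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up an example use case; unknown cases default to minimal risk here."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
```

Note that the defaulting to minimal risk is a simplification for the sketch; under the Act, the burden is on providers to assess where their system falls.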

A critical look at EU regulation

While strict regulation by the EU sets high standards for data protection and security, there are concerns that the EU could fall behind in the global innovation race, since many of the pioneering AI developments come from the USA and China. The question remains whether EU regulation promotes or hinders innovation: it is still unclear whether the AI Act will strengthen the EU’s competitiveness in the AI sector or slow down technological development.

Important provisions of the EU AI Act

  • Transparency requirements: Companies must disclose how their AI systems work, which data sources are used and how decisions are made.
  • Data protection and data security: Strict data protection regulations are designed to protect the privacy of users.
  • Cybersecurity: Measures to defend against cyberattacks on AI systems are mandatory.

Timetable for the introduction

The EU AI Act follows a detailed timetable for its introduction and implementation. Here are the most important milestones:

  • April 21, 2021: EU Commission proposes the AI Act
  • December 6, 2022: EU Council unanimously adopts the general approach of the law
  • December 9, 2023: Negotiators of the European Parliament and the Council Presidency agree on the final version
  • March 13, 2024: EU Parliament approves the draft law
  • 20 days after its publication in the Official Journal of the European Union: Entry into force of the law
  • 6 months after entry into force: Ban on AI systems with unacceptable risk
  • 9 months after entry into force: Codes of practice apply
  • 12 months after entry into force: Governance rules and obligations for General Purpose AI (GPAI) become applicable
  • 36 months after entry into force, with specific exceptions: Application of the entire EU AI Act for all risk categories (including Annex II)

This timetable gives companies and organizations clear guidelines and sufficient time to make the necessary adjustments and comply with the new regulations.
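The relative milestones above can be turned into concrete calendar dates once the entry-into-force date is known. The sketch below assumes entry into force on August 1, 2024 (20 days after publication in the Official Journal); the legally binding applicability dates are those specified in the Act itself, so treat the computed dates as approximations.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping the day to the target month's length."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Assumed entry-into-force date for illustration.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Offsets in months, taken from the milestone list above.
MILESTONES = {
    "ban on AI systems with unacceptable risk": 6,
    "codes of practice apply": 9,
    "GPAI governance rules and obligations": 12,
    "full application, with specific exceptions": 36,
}

for label, months in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months).isoformat()}: {label}")
```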

>> Timeline of the EU Commission

Promoting innovation and ethical standards

The EU AI Act strives for a balance between strict regulation and the promotion of innovation. Compliance with ethical guidelines is intended to ensure that AI systems are used responsibly and contribute to the well-being of society.

Monitoring and enforcement

Compliance with the EU AI Act is ensured by monitoring and enforcement mechanisms. Supervisory authorities are authorized to conduct audits and impose sanctions in the event of violations, ensuring that the provisions of the AI Act are strictly adhered to.


The EU AI Act represents a significant step towards the regulation of AI in the EU. It creates strict requirements and transparent guidelines for the use of AI systems. Categorization according to risk potential ensures that technologies that pose a higher risk are regulated more strictly. This promotes the ethical and safe use of AI and strengthens consumer confidence.

However, there are also critical voices pointing out that the EU may be falling behind in the global competition for AI innovations. While strict regulations increase data protection and security, they could also slow down the development of new technologies and impair the EU’s innovative strength. It remains to be seen whether the AI Act can strike a balance between security and innovation.

Overall, the AI Act offers both opportunities and challenges. It has the potential to set a world-leading example for responsible AI regulation. At the same time, it must be ensured that European innovative strength does not suffer as a result of the strict regulations.

Do you have any questions about implementing AI in your company? We will be happy to provide you with advice and assistance. >> Get in touch now!

More exciting internal articles on the subject:

>> The path to digital self-determination – a startup spotlight

>> A guide through the AI jungle

>> RAFT – How language models become smarter with new knowledge

Further external sources and links:

>> Read the entire plenary session document of the European Parliament here