EU AI Act 2026

What providers and deployers need to know now
Using AI in a legally compliant and future-proof way is certainly one of the major business challenges of 2026. With this overview, you can keep track of what matters.
The introduction of the EU AI Act marks a milestone for the responsible use of artificial intelligence in Europe. Its aim is to clearly regulate the use of AI systems while at the same time promoting innovation. AI systems are divided into four risk classes according to the risk they pose to people's fundamental rights, safety and health: unacceptable risk, high risk, limited risk and minimal risk. Systems with unacceptable risk, such as emotion recognition in the workplace or social scoring, are prohibited. High-risk systems, for example in medicine, transport or other safety-critical areas, are subject to comprehensive obligations. These range from technical documentation and emergency procedures to registration in the EU database.
The role of a company also matters: as a provider, you develop an AI system and place it on the market under your own name or brand. Providers must fulfill comprehensive obligations, including supplying instructions for use and classifying the system into a risk class. Deployers, on the other hand, use AI systems made available by a provider. They have fewer obligations, but must ensure the systems are used within the framework of the requirements. In complex cases, a company can be both provider and deployer at the same time, in which case it must meet both sets of obligations.
For AI systems with limited risk, such as customer service chatbots, the focus is on transparency and disclosure. Users must be informed that they are interacting with AI and which data is being processed. In some cases, GDPR rules also apply here, for example for obtaining consent. Minimal risk systems do not require specific regulatory measures, but it is recommended that governance structures, logging, technical documentation, and ethical guidelines be introduced voluntarily.
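To illustrate the transparency duty for a limited-risk chatbot, here is a minimal sketch of a session that shows an AI disclosure up front and logs that it did so. All names, the wording of the notice, and the log format are illustrative assumptions, not requirements taken from the Act:

```python
from datetime import datetime, timezone

# Hypothetical disclosure text; the actual wording and the data to be
# named depend on the concrete system and applicable GDPR requirements.
AI_DISCLOSURE = (
    "Please note: you are chatting with an AI assistant. "
    "Your messages are processed to generate responses."
)

# Simple in-memory log; a real deployment would use durable storage.
interaction_log: list[dict] = []

def open_chat_session(user_id: str) -> str:
    """Start a chat session: record that the disclosure was shown, then return it."""
    interaction_log.append({
        "user": user_id,
        "event": "disclosure_shown",
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return AI_DISCLOSURE

greeting = open_chat_session("user-42")
print(greeting)
```

Logging the disclosure event, rather than only showing it, is what later lets a company demonstrate in a traceable way that users were informed.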
A central point of the EU AI Act is the promotion of AI literacy in companies. Providers and deployers must take steps to ensure that their personnel have sufficient knowledge of how to use AI. This includes training to minimize risks such as IP leaks or data loss, but also to identify potential for process optimization. Even though certificates and other evidence are currently not mandatory, they help demonstrate responsibility in the event of liability claims and document compliance in a traceable manner.
And in practice?
Companies should first carry out an inventory of all AI systems, analyze existing and planned systems and classify them by risk class. Workshops and digital tools such as compliance checkers can help identify roles, obligations, and risks. Systems with limited or minimal risk form the majority, meaning that many companies only have to implement a few of the high-risk obligations initially.
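As a sketch of such an inventory with a first-pass risk triage, the following shows one way to record systems and sort them into the four classes. The keyword sets and example systems are illustrative assumptions only; a real assessment must follow the use-case definitions of the Act itself:

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative keyword sets, not the Act's legal definitions.
PROHIBITED_USES = {"social scoring", "workplace emotion recognition"}
HIGH_RISK_DOMAINS = {"medicine", "transport", "critical infrastructure"}
TRANSPARENCY_USES = {"chatbot", "content generation"}

@dataclass
class AISystem:
    name: str
    use_case: str
    domain: str

def classify(system: AISystem) -> RiskClass:
    """First-pass triage of an AI system into a risk class."""
    if system.use_case in PROHIBITED_USES:
        return RiskClass.UNACCEPTABLE
    if system.domain in HIGH_RISK_DOMAINS:
        return RiskClass.HIGH
    if system.use_case in TRANSPARENCY_USES:
        return RiskClass.LIMITED
    return RiskClass.MINIMAL

# A toy inventory, as it might come out of an internal survey.
inventory = [
    AISystem("Support bot", "chatbot", "customer service"),
    AISystem("Triage model", "diagnosis support", "medicine"),
    AISystem("Mail filter", "spam filtering", "office IT"),
]

for system in inventory:
    print(f"{system.name}: {classify(system).value}")
```

A triage like this only flags candidates; each flagged system still needs a proper legal assessment, which is where workshops and compliance checkers come in.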
It is also recommended to introduce governance processes that facilitate the evaluation of new AI systems and clearly define responsibilities. This not only supports compliance, but also the strategic use of AI in all areas of business. The EU AI Act thus provides clarity about permitted applications, obligations and risks — a prerequisite for promoting innovations in a targeted manner without jeopardizing fundamental rights.
The clear classification according to risk classes and the training of employees create transparency, minimize risks and promote the potential of AI. Companies can thus ensure the safe and responsible use of AI without slowing down their innovative strength.
EU AI Act 2026 — Risk Classes and Obligations

Unacceptable risk (e.g. social scoring, emotion recognition in the workplace): prohibited
High risk (e.g. medicine, transport): technical documentation, emergency procedures, registration in the EU database
Limited risk (e.g. customer service chatbots): transparency, disclosure
Minimal risk (e.g. spam filters): no specific measures







