The EU AI Act

What companies need to know now
The AI Act after it came into force — why standardization, deadlines and SME participation are now decisive, according to the AI Federal Association.
The AI Act has made history in several ways. On the one hand, it is the world's first comprehensive AI regulation, adopted after what was arguably one of the longest negotiation marathons in EU history to date, lasting a full 36 hours. On the other hand, the question arose from the outset whether the AI Act would prove to be an instrument for promoting innovation or rather an economic impediment. This is because the horizontal approach of regulating AI across the board, instead of adapting existing sectoral regulations, was controversial from the start. The AI Act has now been in force for a few months, and even though important implementation steps are still pending, initial developments can already be identified.
AI Act: Implementation obligations are getting closer
The AI Act has officially been in force since August 1, 2024. This date is particularly important because it started the clock on the phased implementation process. The developments so far are manageable, but not insignificant: by last November, the member states had to announce which authorities and institutions would be responsible for protecting fundamental rights in connection with AI applications. Since February 2025, the prohibitions have also applied. The banned practices include social scoring, emotion recognition in the workplace or in educational institutions, manipulative AI systems, and AI applications that indiscriminately scrape biometric data, in particular facial images, from open sources to build databases.
In addition to the rules that have already taken effect, companies must now work intensively to prepare for the next, upcoming deadlines. All actors who develop or use AI systems must assess them against the risk-level system introduced in the AI Act and, where necessary, take measures to meet the requirements. The requirements for so-called high-risk AI systems are particularly strict. Providers and deployers must also ensure that everyone involved in developing, applying, or monitoring high-risk AI has a sufficient level of AI literacy.
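The risk-level logic described above can be sketched as a simple first-pass triage helper. The four tier names follow the AI Act's risk taxonomy, but the criteria, field names, and function below are illustrative assumptions for this article, not legal advice; a real assessment requires legal review against the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four-level risk taxonomy."""
    PROHIBITED = "prohibited"      # e.g. social scoring, workplace emotion recognition
    HIGH_RISK = "high-risk"        # Annex III use cases, strict obligations
    LIMITED_RISK = "limited-risk"  # transparency duties (e.g. chatbots)
    MINIMAL_RISK = "minimal-risk"  # no additional obligations

# Illustrative set of prohibited practices mentioned in the article
PROHIBITED_PRACTICES = {
    "social_scoring",
    "workplace_emotion_recognition",
    "untargeted_biometric_scraping",
}

def triage(system: dict) -> RiskTier:
    """Hypothetical first-pass classification of a system description."""
    if system.get("practice") in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if system.get("annex_iii_use_case"):
        return RiskTier.HIGH_RISK
    if system.get("interacts_with_humans"):
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK

print(triage({"practice": "social_scoring"}).value)  # prohibited
print(triage({"annex_iii_use_case": True}).value)    # high-risk
```

The point of such a sketch is only to make the tiered structure tangible: every system falls into exactly one tier, and the obligations scale with the tier.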
However, both the definition of high-risk AI systems (HRAIS) and the mandatory training requirements are still vague. Although the legal text contains extensive passages outlining various HRAIS, there are no comprehensive guidelines against which each system could be clearly classified as high-risk or not. In addition, the EU Commission reserves the right to add further systems to its existing list of HRAIS at a later date.
Lack of Standards Hampers Implementation of the AI Act
Just over half a year after it came into force, voices from practice are mostly critical, because many companies currently face considerable uncertainty. This is primarily due to the fact that the AI Act refers in many places to future technical standards and norms that are still under development. Binding requirements are still missing in particular for the specific risk assessment, the implementation of safety requirements, and the design of training programs. At the same time, initial obligations are already in force, so companies are sometimes unclear about the exact interpretation of the regulations.
“Companies are sometimes uncertain about the exact interpretation of the regulations.”
The existing uncertainties become particularly clear when looking at the current state of the standardization processes. A comprehensive analysis by the AI Federal Association, conducted in cooperation with partners, shows that the practical implementation of the key obligations of the AI Act, such as risk and quality management, transparency and documentation requirements, or human oversight, depends decisively on harmonized technical standards that are currently still in an early phase of development. What is likely to be particularly challenging in practice is not only the expected very short implementation period of less than one year, but also the limited participation of small and medium-sized companies in standardization committees. These players in particular, who strongly shape the European AI market, face a double problem: on the one hand, they often lack the resources to contribute actively to the development of the standards; on the other hand, they depend on the precise design of those standards in order to adapt their systems in a legally secure and efficient manner.
Recommendations for implementing the AI Act
Against this background, we formulate concrete recommendations to ensure a practical and innovation-friendly implementation of the AI Act. These include in particular the extension of implementation deadlines, low-threshold participation opportunities for SMEs and start-ups in the standardization process, and accompanying measures such as financial support and practice-oriented guidelines. In addition, the timely clarification of the issue of free access to the harmonized standards is considered crucial in order to provide smaller providers in particular with the necessary legal certainty. Whether and to what extent these recommendations are implemented in the coming months is likely to play a central role in whether the AI Act achieves its goal of promoting innovation and security in equal measure.
“It is important for companies to keep themselves constantly informed about relevant developments.”
Precisely because so many aspects of the AI Act are still being worked out in detail, it is important for companies to keep themselves informed about relevant developments so that they can take compliance measures in good time as deadlines approach or open questions are clarified. In addition, companies should proactively analyze their own systems and classify them into the appropriate risk categories. Early and comprehensive documentation of all relevant processes and decisions is also advisable.
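The documentation advice above can be made concrete with a minimal record structure for logging each assessment decision. The field names and example values below are illustrative assumptions, not a format prescribed by the AI Act; the point is that each decision captures who classified which system, into which tier, why, and when.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ComplianceRecord:
    """One documented risk-classification decision (illustrative schema)."""
    system_name: str
    risk_tier: str     # result of the in-house assessment, e.g. "high-risk"
    rationale: str     # why the system was classified this way
    assessed_by: str   # person or team accountable for the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example entry
record = ComplianceRecord(
    system_name="cv-screening-tool",
    risk_tier="high-risk",
    rationale="Employment-related use case of the kind listed in Annex III",
    assessed_by="compliance-team",
)
print(record.system_name, "->", record.risk_tier)
```

Kept as an append-only log, such records later serve as evidence that classifications were made deliberately and in good time.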
The question remains whether the AI Act can live up to its claim of promoting innovation and effectively limiting risks at the same time. The coming months will be decisive, particularly with regard to the standardization processes and their results. It is already clear, however, that the AI Act is not only a regulatory milestone but also a touchstone for the European AI strategy.
AI Act Timetable
Expected additions
Delegated or Implementing Acts
- 9 months: Codes of Conduct for GPAI providers
- 12 months: First annual review of the list of prohibited AI systems
- 18 months: Template for post-market monitoring
- 24 months: Review of the list of high-risk AI systems

The Commission must also prepare
- Procedure for setting up a scientific advisory board
- Methods for developing regulatory sandboxes under real-world conditions
Harmonized standards
(Publication planned for the second half of 2025)
- Risk management
- Data quality and management
- Logging
- Transparency
- Human oversight
- Accuracy
- Robustness
- Cybersecurity
- Quality management
- Conformity assessment
For Member States
- 3 months: Appointment of the authorities protecting fundamental rights
- 12 months: Identification and naming of the competent national authorities
- 24 months: Communication to the Commission on the system of penalties
- 24 months: Set up at least one regulatory sandbox
- List of applicable obligations
This article was first published in our magazine data!, Issue 5.







