
Attention: The EU Artificial Intelligence Act comes into force

Published on
Jan 11, 2022

What does the AI Act mean for companies?

The rapid development of artificial intelligence presents companies with new opportunities, but also with challenges. With the EU Artificial Intelligence Act, the world's first comprehensive AI law, the use of AI in many areas is now regulated. The Act came into force in August 2024, and from 2025 the first obligations for the use of AI systems become binding. It classifies AI applications into four risk categories and requires companies to rethink their AI strategies and governance structures. What does this new regulation mean for companies, and how can they best prepare for it?

An overview of the new AI law

The new AI Act aims to ensure the responsible use of artificial intelligence while promoting innovation. It divides the application of AI into four risk categories, which place different demands on companies:

Unacceptable risk applications

AI applications that are considered unacceptable, such as social scoring, which evaluates people based on behavioral patterns, are prohibited. These types of AI systems violate fundamental ethical principles and people's privacy. In addition to social scoring, this also includes applications such as AI-based biometric categorization using sensitive characteristics or the untargeted scraping of facial images to build facial recognition databases.

High-risk applications

The use of AI systems is considered particularly risky in certain areas. These include recruitment processes, credit checks, medical devices, educational contexts, critical infrastructure management, and legal and democratic processes.

These applications are subject to special requirements that oblige companies to ensure transparency, traceability and security, for example. They must be regularly audited and tested to avoid discrimination and unfair practices.

Medium-risk applications

For AI systems that interact with natural persons, companies must inform users that AI is being used. This applies, for example, to chatbots or personalized recommendations on e-commerce platforms. The goal is transparency: users should know that they are interacting with an AI system and not with a human.

Low-risk applications

Applications that are considered low-risk remain largely unregulated. This category covers AI that has no significant impact on individuals' rights or freedoms, such as some forms of automated data analysis or image processing.
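The four risk tiers described above can be sketched as a simple classification lookup. This is an illustrative sketch only: the tier names follow the article, while the example use cases, the `obligations` function, and the obligation labels are assumptions for demonstration, not legal definitions from the Act.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers from the article (illustrative, not a legal taxonomy)."""
    UNACCEPTABLE = "prohibited"       # e.g. social scoring
    HIGH = "strict requirements"      # e.g. recruitment, credit checks
    MEDIUM = "transparency duties"    # e.g. chatbots
    LOW = "largely unregulated"       # e.g. simple data analysis


# Hypothetical mapping of example use cases to tiers, based on the article's examples.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "credit check": RiskTier.HIGH,
    "customer chatbot": RiskTier.MEDIUM,
    "image preprocessing": RiskTier.LOW,
}


def obligations(use_case: str) -> str:
    """Return the (simplified) obligation level for a use case; default to low risk."""
    return USE_CASE_TIERS.get(use_case, RiskTier.LOW).value


print(obligations("social scoring"))    # -> prohibited
print(obligations("customer chatbot"))  # -> transparency duties
```

In practice, classification depends on the concrete deployment context, not just the use-case label, so a real assessment would involve legal review rather than a static lookup table.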

What does this mean for companies?

The introduction of the AI Act obliges companies to rethink their existing processes and develop a clear AI strategy. This is not only a question of legal compliance, but also an opportunity to gain customer trust and use innovative AI applications responsibly.

Development of an AI Strategy

Companies must be clear about how they want to use AI and how they can meet the new regulatory requirements. An AI strategy should therefore not only answer technological questions, but also take ethical, legal, and social aspects into account. A well-thought-out strategy helps companies implement AI both responsibly and effectively.

The need for AI governance

AI Governance describes the organizational structures and processes that ensure that AI applications are used ethically and securely. These governance structures are essential for companies to ensure that AI usage complies with new regulations while maintaining customer and user trust.

Compliance with new regulations

The new AI Act requires companies to review not only the technology, but also the processes and use of AI. In concrete terms, this means:

  • Ensuring that AI applications meet the requirements of their respective risk classification
  • Training employees on the legal and ethical aspects of AI
  • Documenting all AI use in the company so that compliance with regulatory requirements can be demonstrated in the event of an audit by supervisory authorities
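One way to support the documentation point above is an internal AI register. The following is a minimal sketch of how such a register might be structured; all field names, thresholds, and the `audit_gaps` check are illustrative assumptions, not requirements taken from the Act itself.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI register (illustrative only)."""
    name: str
    risk_category: str        # the company's own classification, e.g. "high"
    purpose: str
    responsible_owner: str
    last_audit: date
    training_completed: bool  # staff trained on legal and ethical aspects


def audit_gaps(register, today, max_age_days=365):
    """Flag records whose documentation looks stale or incomplete.

    The 365-day audit interval is an assumed internal policy, not a legal deadline.
    """
    gaps = []
    for rec in register:
        if (today - rec.last_audit).days > max_age_days:
            gaps.append(f"{rec.name}: audit older than {max_age_days} days")
        if not rec.training_completed:
            gaps.append(f"{rec.name}: staff training missing")
    return gaps


register = [
    AISystemRecord("CV screener", "high", "recruitment", "HR lead",
                   last_audit=date(2023, 1, 15), training_completed=False),
]
print(audit_gaps(register, today=date(2025, 1, 1)))
```

A register like this makes it straightforward to answer a supervisory authority's questions about which systems are in use, who owns them, and when they were last reviewed.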

The path to the future of AI

The new AI Act presents companies with the challenge of making the use of artificial intelligence responsible and sustainable. By developing a clear AI strategy and implementing AI governance structures, companies can not only comply with legal requirements, but also gain the trust of their customers and take on a leadership role in the responsible use of AI.

Companies that proactively address the new regulatory requirements can not only avoid legal risks, but also strengthen their competitiveness in an increasingly AI-driven world.

