Many companies have long been using AI as a matter of course - but what if this very use soon becomes legally problematic? The EU AI Act suddenly turns an innovation topic into a compliance issue with real implications for HR, IT and management. Which applications are still uncritical, and where does the liability and reputational risk begin?

 

KI EU AI Act: Why the regulation is relevant for companies now

With the European AI Act (EU Regulation 2024/1689), the EU is establishing a comprehensive, risk-based set of rules for artificial intelligence for the first time. The aim is to enable innovation while limiting risks to safety, health and fundamental rights. For companies in D-A-CH, this is not an abstract regulation: as soon as AI systems are developed, purchased, integrated or used in everyday working life, specific obligations arise - depending on whether your company is a provider, deployer or importer/distributor and how the AI use case is categorised. The EU Commission provides a compact overview of the EU position and the regulatory framework.

In practice, AI is used in almost all areas of business today - from HR tools (screening, talent analytics) to customer service chatbots, marketing automation, fraud detection and security analytics. It is precisely this breadth that makes the AI Act relevant, because it not only addresses "high-end AI", but also everyday systems, provided they fall within its scope. It is therefore crucial that AI compliance is not treated as a one-off legal review, but as a management and governance issue, similar to data protection or information security.

An additional driver is the staggered implementation: the AI Act does not apply "all at once", but comes into force in phases. The EU Commission summarises key milestones, including the fact that prohibitions (Article 5) have been applicable since February 2025 and further obligations will follow in stages. For organisations, this means that anyone using AI productively must now know which use cases are permitted, which transparency obligations apply and which governance must be established.

Risk-based approach: from prohibitions to high-risk compliance

The AI Act does not rely on "one size fits all", but on a risk logic. In simplified terms, AI applications can be categorised into four groups: prohibited practices, high-risk systems, AI with transparency obligations and low risk (with mainly voluntary measures/best practices). This is helpful for companies because it creates a clear implementation mode: First exclude prohibited use cases, then identify and safeguard high-risk cases, then operationalise transparency obligations.

How companies classify AI use cases cleanly

In practice, compliance often fails not because of a lack of will, but because of the scope: "Is this even an AI system within the meaning of the law?" This is precisely why the EU Commission published Guidelines on the definition of an AI system at the beginning of 2025. This is relevant because many products market "AI", while other AI components are "hidden" in software (e.g. recommendation systems, ranking, classification).

Robust classification in companies typically starts with an AI inventory:

  • Which tools/models are developed, fine-tuned or operated internally?
  • Which AI functions are added via SaaS/platforms (e.g. copilots, CRM automation, helpdesk bots)
  • Which AI-supported decisions affect people (e.g. HR, credit, insurance, access control)?
  • What data flows into training/prompting/monitoring (incl. personal data)?

The EU can now also provide practical tools for this inventory: The AI Act Service Desk offers, among other things, an "AI Act Explorer" and a "Compliance Checker", which support companies in navigating through chapters/requirements.

Which obligations are derived per risk level

1) Prohibited practices (Article 5): These are not permitted "with conditions", but are generally prohibited. In order to standardise their application, the EU Commission has published Guidelines on prohibited AI practices. German authorities also refer to these guidelines and emphasise the need for a case-by-case assessment.

2) High-risk AI: Extensive requirements apply to certain use cases (and in some cases as a safety component of regulated products) - including risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness/cybersecurity. The AI Act contains classification logic.

3) Transparency obligations: Transparency can also apply outside of high-risk situations (e.g. when humans interact with AI or content is generated synthetically). Companies should therefore standardise when they have to display labelling, notices or usage information. The EU Service Desk Timeline lists transparency rules as part of the broadly applicable obligations from August 2026.

4) Low risk: Many internal assistance systems will fall into this category - nevertheless, compliance is not optional, as other areas of law (GDPR, copyright, IT security law, labour law) apply in parallel. In addition, the risk increases if systems "grow" into more sensitive contexts later on.


Current developments 2024-2026: What guidelines, debates and tools are changing

Between 2024 and 2026, a pattern has emerged around the AI Act: Rules have been adopted, but the reality of implementation depends heavily on guidelines, codes of practice and official practice. This is precisely why the five developments you have selected are so important for companies.

1) Guidelines on prohibited practices (04/02/2025):
The Commission has published guidelines to clarify the interpretation and examples of prohibited AI practices. This is particularly relevant for companies because "prohibited" in practice not only refers to a product feature, but also the context of use and purpose (e.g. profiling, manipulation, certain forms of biometric identification depending on the design). The leverage here lies in the use case review before rollout.

2) Guidelines on the definition of "AI system" (6 February 2025):
These guidelines address the key question of whether a system falls under the AI Act at all - and thus reduce legal uncertainty for hybrid software products, statistics/analytics, rule-based systems and modern ML approaches. In terms of implementation, this means that the AI inventory should not only list "tools", but also functions (ranking, scoring, classification, generation) in order to classify properly at a later date.

3) Debate on "pause"/"stop the clock" (Reuters, 25 June 2025):
Industry associations and individual political players called for implementation to be delayed. Reuters reports, for example, on the push by a tech lobby association, which referred to the lack of implementation elements and uncertainty. The practical lesson for companies: Postponement is not a plan. Even if individual documents are delayed, bans and pending obligations remain a governance risk - especially for internationally active organisations that need standardisation anyway.

4) AI Act Service Desk & Compliance tools (ongoing):
The EU is building a compliance infrastructure: Explorer, Compliance Checker, FAQ and a service desk for enquiries to the AI Office. For companies, these tools are less "nice to have" than a means of ensuring internal policy interpretation and increasing verifiability ("Why did we classify this way?").

5) EDPB/EDPS Joint Opinion on the "Digital Omnibus" (10.02.2026):
Although this document does not "only" address AI, it shows how strongly the regulatory framework in the digital sector is evolving and how closely issues such as data protection, data governance and digital compliance are interlinked. This is a signal for companies: AI compliance should be compatible with existing systems (GDPR, ISMS, supplier management) - otherwise parallel structures will emerge that are difficult to manage on the audit side.

To summarise: The last few months have brought fewer new "paragraphs", but more practical tools and interpretation aids. Those who use these can set up compliance in a comprehensible way from the outset - and reduce rework costs later on.


AI Act company obligations: Governance, processes, evidence

When the AI Act fails in companies, it is usually due to three gaps: a lack of overview (inventory), a lack of decision-making bodies (governance) and a lack of evidence (documentation/training). From this, a practical set of obligations can be derived that is compatible for IT management, compliance and HR.

1) Establish AI governance (roles, responsibilities, escalation):

  • Designate AI officers (e.g. AI Compliance Owner / AI Risk Officer) and define interfaces to data protection, information security, HR and procurement.
  • Set up use case approval process: "no-go" screening (prohibited practices), risk classification, data protection/security check, contract/supplier check.
  • Create a policy set: permitted tools, data categories, logging/monitoring, prompting rules, human oversight.

2) Supplier and contract management ("AI in the supply chain"):
Many organisations procure AI as SaaS. This shifts some of the obligations to providers - but companies remain responsible for selection, purpose, data flows and governance. In practical terms, this means:

  • AI addenda in contracts (transparency, support, audit information, security, update policy, incident handling)
  • Clear regulations on training data, prompts, output usage and IP/copyright
  • Proof of classification and, if applicable, compliance measures
  • . for conformity measures

For a structured introduction to legal and contractual guidelines, you can build on what training courses teach internally: e.g. via "KI & Rechtliche Grundlagen", as well as "KI & Verträge- each as a building block for consistently rolling out legal/compliance standards within the company.

3) Security by design for AI (also beyond the AI Act):
AI systems increase the attack surface: prompt injection, data exfiltration, model misuse, manipulation of training/feedback data. The AI Act establishes robustness and cybersecurity as relevant quality dimensions, especially for more demanding categories; classic security frameworks (e.g. ISMS) work in parallel. A focussed learning path can be useful for security teams, for example via in-depth training on "KI & Cyber Security", in order to integrate threat models and controls into existing security processes.

4) Awareness and verifiability:
In addition to technology and law, the human factor is crucial: employees use AI tools on a daily basis, often without clear guidelines. This is precisely where a compliance risk arises (e.g. incorrect choice of tool, unauthorised data, non-transparent use). A scalable approach is a short awareness format that can be rolled out company-wide, which categorises the AI Act and conveys rules for action.

In terms of content, this can be mapped as micro-learning: "KI & the European AI Act" is described as a 10-minute basic training course for all employees, with a focus on the classification of the regulation, risk-based logic and a compact overview. In addition, rollout features such as interactivity/quizzes, certificate of participation (for evidence) and LMS integration (SCORM 1.2 | xAPI | HTML5) or alternative provision via an Academy are emphasised.

Implementation roadmap: Achieving audit-ready AI compliance in 90 days

A realistic implementation approach is to build AI Act compliance in waves: immediate "stop" controls for prohibitions, parallel governance and inventory, followed by process hardening and evidence. A 90-day plan is practicable in many organisations because it delivers measurable results quickly while leaving room for maturity development.

Phase 1 (day 1-30): Create clarity and stop risks

  1. Start AI inventory (top 20 tools/use cases, incl. Shadow AI)
  2. No-go screening against prohibited practices + documented decision
  3. Operationalise definition/scope based on the Commission guidelines "AI system"
  4. Interims rules: permitted tools, data categories, prompting principles, approval process "light"

Phase 2 (day 31-60): Consolidate governance and processes

  • KI governance board (IT, compliance, data protection, HR, purchasing, security)
  • Risk classification as a standard process (incl. catalogue of criteria, documentation, owner)
  • . criteria catalogue, documentation, owner)
  • Supplier/Contract Checks (AI Addenda, security, data flows, support, update/change)
  • Minimum technical controls (logging, access control, data minimisation, monitoring)

Phase 3 (day 61-90): Evidence, training, scaling

  • Define evidence structure: Inventory, decisions, risk analyses, controls, training evidence
  • Awareness rollout to broad workforce, supplemented by role training (legal/IT/security)
  • Use tooling: EU AI Act Explorer/Compliance Checker for documentation consistency
  • Align roadmap to next milestones (e.g. further obligations, transparency rules, high-risk requirements from widespread adoption)

The strategic categorisation is important here: the public debate around a delay has shown that uncertainty does not mean that risks will disappear. In 2025, Reuters reported both on calls for a "pause" approach and on the political debate surrounding its practical feasibility. The most robust way forward for companies is therefore to establish compliance-ready basic structures that can be further refined with guidelines and regulatory practice.


Conclusion

AI compliance as a management system instead of an individual project

The AI EU AI Act does not force companies to avoid AI - but it does force them to manage AI systematically. Establishing an AI inventory, clear governance, risk-based classification, supplier controls and verifiable awareness at an early stage reduces legal and reputational risks while creating a stable basis for scalable AI use. The EU Commission's current guidelines (definition of "AI system", prohibited practices) and the emerging compliance tools show that the practice is becoming more concrete - and that companies benefit most when they link AI compliance to existing structures such as data protection and information security.


Note: This blog was supported in its research with AI.