See how easily you can break the law with AI.
The EU AI Act is not a policy exercise—it’s a liability framework. Certain AI practices are prohibited, others require documented controls, logging, human oversight, and incident processes. This article breaks down your obligations as a deployer or provider and includes a one-page readiness scorecard to identify your biggest legal gaps in 10 minutes.
You don’t need a complex AI system to create legal risk. In most companies, a simple prompt, a copied spreadsheet, or a “quick check” with an AI tool is more than enough. No intent. No malicious motive. Just everyday work.
Let’s look at a few examples of common AI use cases that can break the law—without your company even noticing.
This article highlights simple, typical AI use cases in HR, finance, and procurement—and shows how seemingly harmless actions can violate data protection rules, employment law, or internal controls. AI compliance isn’t about stopping innovation; it’s about understanding how easily boundaries can be crossed without anyone realizing it.
1. Procurement
1.1 Supplier evaluation using generative AI
A buyer uploads large amounts of data, service descriptions, or email correspondence into an AI tool to compare vendors or create rankings.
Risk:
- Confidential contract and pricing data leaves the company.
- No traceability of how the evaluations were produced (bias, misinterpretation).
- Potential breaches of NDAs and competition law.
- Decisions are not audit-proof and are not properly documented (a minimal logging sketch follows below).
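What would "properly documented" look like in practice? Here is a deliberately small Python sketch of one possible control. `call_model`, the log file name, and the record fields are invented placeholders, not the API of any specific tool:

```python
import hashlib
import json
import time

def call_model(prompt: str) -> str:
    # Placeholder for whatever AI tool the buyer actually uses.
    return "ranked supplier list ..."

def audited_call(user: str, purpose: str, prompt: str) -> str:
    """Write an audit record before calling the model, so every
    evaluation can later be traced to a person, a purpose, and the
    exact input that produced it."""
    record = {
        "timestamp": time.time(),
        "user": user,
        "purpose": purpose,
        # Hash the prompt rather than storing it verbatim if the
        # input itself is confidential.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    with open("ai_audit_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return call_model(prompt)

print(audited_call("buyer-042", "supplier comparison Q3", "Compare offers from A and B ..."))
```

Even this much turns "someone ran a prompt" into a record an auditor can actually work with.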
1.2 Contract drafting or clause review using AI
AI is used to “quickly review” procurement contracts or propose alternative clauses.
Risk:
- Legally incorrect or incomplete clauses are adopted.
- Responsibility for the legal assessment becomes unclear.
- False sense of security: "the AI checked it" replaces a proper legal review.
1.3 Preparing price negotiations with AI
Think of prompts like "How can I get this supplier to lower their prices?", submitted together with specific supplier information.
Risk:
- Sensitive business relationships are disclosed externally.
- AI may generate overly aggressive or unethical negotiation strategies.
- Reputational damage if such use becomes known.
2. Human Resources
2.1 Pre-screening candidates with AI
CVs and application documents are uploaded into AI tools to generate rankings or shortlists.
Risk:
- Processing of highly sensitive personal data (a minimal redaction safeguard is sketched below).
- Discrimination due to training bias (age, gender, origin).
- Violations of GDPR and anti-discrimination law (e.g., the German General Equal Treatment Act).
- Lack of transparency toward applicants.
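For illustration, here is a minimal redaction pass that strips obvious identifiers before anything leaves the company. The patterns are deliberately naive placeholders; real CV screening needs a proper PII-detection tool, a legal basis under the GDPR, and transparency toward applicants:

```python
import re

# Naive illustrative patterns only. Names, addresses, or photos would
# need a real PII/NER tool; this sketch catches just the obvious cases.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
    "DATE_OF_BIRTH": re.compile(r"\b\d{2}\.\d{2}\.\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before
    the text is sent to any external tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

cv_line = "Jane Doe, born 04.07.1991, jane.doe@example.com, +49 170 1234567"
print(redact(cv_line))
# Jane Doe, born [DATE_OF_BIRTH], [EMAIL], [PHONE]
```

Note what the sketch does not catch: the name is still there. That gap is exactly why "we redacted it quickly" is not a compliance strategy.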
2.2 Performance reviews using AI-generated summaries
Managers use AI to summarize or assess employee feedback, goal achievement, or meeting notes.
Risk:
- Subjective or biased judgments are “objectified.”
- Employee data is processed without a clear, documented purpose limitation.
- Escalation and liability risks in termination or promotion decisions.
2.3 Drafting warnings or termination letters
HR uses AI to “efficiently create” employment-law documents.
Risk:
- Legally flawed wording.
- Insufficient consideration of the specifics of the individual case.
- High legal risk in labor court proceedings.
3. Finance (Accounting and Controlling)
3.1 Financial data analysis using AI
Profit-and-loss statements, cash-flow figures, or forecasts are uploaded into AI tools to analyze deviations or identify cost-saving potential.
Risk:
- Highly sensitive financial data leaves the company (see the sketch after this list).
- No control over storage or further use.
- Violations of internal control systems (ICS).
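Worth noting: a large share of routine deviation analysis is plain arithmetic that never needs to leave the company. The figures below are invented; the point is only that the data stays local:

```python
# Invented month-by-month figures; in practice these come from the
# ERP system, not from a chat window.
actuals = {"Jan": 412_000, "Feb": 398_000, "Mar": 451_000}
budget  = {"Jan": 400_000, "Feb": 400_000, "Mar": 420_000}

# Flag deviations above a 5% threshold, entirely on-premise.
for month in actuals:
    delta = actuals[month] - budget[month]
    pct = delta / budget[month] * 100
    flag = "  <-- review" if abs(pct) > 5 else ""
    print(f"{month}: {delta:+,} EUR ({pct:+.1f}%){flag}")
```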
3.2 Automated forecasts and predictions
AI is used to generate revenue or liquidity forecasts that directly influence management decisions.
Risk:
- Incorrect assumptions or hallucinations go unnoticed.
- Management relies on unvalidated outputs (a simple plausibility gate is sketched below).
- Liability risks due to wrong decisions.
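One cheap safeguard is a plausibility gate between the AI output and the management deck. The 20% tolerance and the figures below are arbitrary illustrations, not a recommendation:

```python
def needs_review(forecast: float, history: list[float], tolerance: float = 0.20) -> bool:
    """Flag a forecast that deviates from the trailing average by more
    than the tolerance, so a human validates it before anyone acts."""
    baseline = sum(history) / len(history)
    return abs(forecast - baseline) / baseline > tolerance

history = [1.02e6, 0.98e6, 1.05e6, 1.01e6]  # invented quarterly revenues
ai_forecast = 1.60e6                        # suspiciously optimistic

if needs_review(ai_forecast, history):
    print("Deviates >20% from trailing average: route to a controller first.")
```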
3.3 Support for annual financial statements or valuations
AI assists in preparing provisions, valuations, or commentary for the annual financial statements.
Risk:
- Blurred lines between decision preparation and decision-making.
- Insufficient traceability for auditors.
- Compliance and governance risks.
A clear conclusion
None of these use cases is "wrong" in the sense that it must be explicitly forbidden. They are risky because:
- Responsibilities become unclear.
- Data is used in an uncontrolled way.
- Decisions are no longer traceable.
- Business units use AI as a shortcut without appropriate governance.
This is exactly where meaningful AI governance comes in: not as a brake on innovation, but as clarity on who is allowed to do what, with which data, and for what purpose—and who ultimately remains accountable.
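Encoded at its simplest, such a governance rule is just a tuple: who, which data class, which tool. Everything below is an invented example of how a policy like this could be written down and checked, not a finished framework:

```python
# Invented example policy: role, data classification, tool.
ALLOWED = {
    ("procurement", "public",   "external_llm"),
    ("procurement", "internal", "internal_llm"),
    ("hr",          "internal", "internal_llm"),
    ("finance",     "internal", "internal_llm"),
}

def is_permitted(role: str, data_class: str, tool: str) -> bool:
    """A use is allowed only if it is explicitly on the list."""
    return (role, data_class, tool) in ALLOWED

print(is_permitted("hr", "personal", "external_llm"))         # False: CVs stay out of external tools
print(is_permitted("procurement", "public", "external_llm"))  # True: public questions are fine
```

The deny-by-default shape is the point: anything not explicitly allowed triggers a conversation, which is exactly the moment governance is supposed to create.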