Most Swedish business leaders know the EU AI Act exists. Few know what it actually requires of them.
The most common mistake: assuming the law applies only to the tech companies building AI. It does apply to them. But the regulation defines three roles relevant to your organisation: providers, who build AI systems and place them on the market; distributors, who make them available on the market; and deployers, those who use AI systems in their operations. That last role is you.1
On 2 August 2026, the obligations start applying in earnest.
Who is a “deployer”?
Article 3(4) of the regulation defines the term2:
a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
— Regulation (EU) 2024/1689, Art. 3(4)
In plain terms: any organisation that uses AI systems in its operations, except for purely private use. Does your company purchase an AI service to analyse job applications? You are a deployer. Do you use an AI tool to assess customers? Deployer. Do you run customer service through an AI-driven case management system? Deployer.
It does not matter whether management knows the system exists. It does not matter whether your employees built it or bought it. If an AI system makes or supports decisions that concern people, and you operate the business it runs in, your company is a deployer.
Which AI systems are high-risk?
Annex III of the regulation lists the areas where AI systems are automatically classified as high-risk.3 Five of them are directly relevant to many Swedish organisations:
Employment (Annex III, point 4): AI systems used to recruit, filter job applications, assess candidates, or make decisions about promotion, task assignment, and termination. Do you run an AI that sorts CVs? That is likely a high-risk system.4
Credit assessment (Annex III, point 5b): AI systems that assess the creditworthiness of individuals or set credit scores. The point does not cover AI used to detect financial fraud.3
Access to essential services via the public sector (Annex III, point 5a): AI used by or on behalf of public authorities to assess entitlement to essential public services, including healthcare and social benefits. This point applies specifically to public-sector actors. Private healthcare providers or insurers do not automatically fall under 5a.3
Insurance (Annex III, point 5c): AI for risk assessment and pricing in life and health insurance for individuals.3
Emergency services (Annex III, point 5d): AI that classifies emergency calls or prioritises dispatch for police, fire services, and ambulances.3
Exceptions exist. Article 6(3) clarifies that a system listed in Annex III is not high-risk if it does not pose a significant risk of harm, for instance because it does not materially affect decision-making or performs only a narrow procedural task.5 The exception never applies, however, if the system performs profiling of natural persons. Such systems are always classified as high-risk, regardless of other circumstances.5
Assessing whether the exception applies is the provider’s responsibility. It is the provider (the company placing the system on the market) that must make and document that assessment before deployment.5 As a deployer (user), you should ensure the provider has carried out this assessment and can demonstrate it.6
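If you want to encode that decision logic for your own inventory work, it fits in a few lines. The sketch below is a simplification for illustration, not legal analysis: the class, its fields, and the function are our own invention, and Article 6(3) lists more exception grounds than the one shown here.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Illustrative fields only, not a legal taxonomy."""
    annex_iii_area: str | None        # e.g. "employment", "credit"; None if unlisted
    performs_profiling: bool          # profiling of natural persons
    narrow_procedural_task: bool      # one of the Art. 6(3) exception grounds
    materially_influences_decisions: bool

def is_presumed_high_risk(system: AISystemProfile) -> bool:
    """Simplified reading: an Annex III listing creates a presumption of high
    risk; Art. 6(3) can rebut it, but never when the system profiles people."""
    if system.annex_iii_area is None:
        return False  # not listed in Annex III (the Art. 6(1) product route is ignored here)
    if system.performs_profiling:
        return True   # profiling carve-out: always high-risk
    if system.narrow_procedural_task and not system.materially_influences_decisions:
        return False  # candidate for the Art. 6(3) exception; the provider must document it
    return True

# A CV-ranking tool that profiles candidates: high-risk, no exception available.
cv_ranker = AISystemProfile("employment", performs_profiling=True,
                            narrow_procedural_task=False,
                            materially_influences_decisions=True)
assert is_presumed_high_risk(cv_ranker)
```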
What does Article 26 require of you?
Article 26 is the deployer article. It lists concrete obligations for all organisations running high-risk AI systems.7
You must ensure the system is used in accordance with the provider's instructions for use (art. 26.1).7 That sounds straightforward, but it requires that you actually have a process for it.
You must assign human oversight to persons with appropriate competence, training and authority (art. 26.2).7 The AI system must not make decisions without human supervision, and the people supervising it must actually be capable of doing so.
You must retain the logs automatically generated by the system for at least six months (art. 26.6).7
Before deploying a high-risk AI system in the workplace, you must inform the representatives of the affected employees, and the employees themselves, that they will be subject to the system (art. 26.7).7
You must also use the information the provider supplies with the system to fulfil, where applicable, your obligation to carry out a data protection impact assessment (DPIA) under GDPR Article 35 (art. 26.9).7
None of these obligations require you to be a tech company. They require you to be an organisation that takes AI use seriously.
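One way to take it seriously in practice is to keep a per-system record of who owns which Article 26 duty and where the evidence lives. A minimal sketch; the class and field names below are our own, not terms from the regulation:

```python
from dataclasses import dataclass
from datetime import timedelta

MIN_LOG_RETENTION = timedelta(days=183)  # Art. 26(6): at least six months

@dataclass
class Article26Record:
    """Per-system compliance record with illustrative field names."""
    system_name: str
    follows_instructions_for_use: bool   # Art. 26(1): a process exists
    oversight_owner: str | None          # Art. 26(2): named, trained, authorised
    log_retention: timedelta             # Art. 26(6)
    workers_informed: bool               # Art. 26(7)
    dpia_reference: str | None           # Art. 26(9): link to the GDPR Art. 35 DPIA

    def gaps(self) -> list[str]:
        """List the obligations this record cannot yet evidence."""
        issues = []
        if not self.follows_instructions_for_use:
            issues.append("no process for the provider's instructions for use")
        if self.oversight_owner is None:
            issues.append("no named human-oversight owner")
        if self.log_retention < MIN_LOG_RETENTION:
            issues.append("log retention below six months")
        if not self.workers_informed:
            issues.append("workers and their representatives not informed")
        return issues

record = Article26Record("CV screening tool", follows_instructions_for_use=True,
                         oversight_owner="HR operations lead",
                         log_retention=timedelta(days=90),
                         workers_informed=False, dpia_reference=None)
print(record.gaps())
# ['log retention below six months', 'workers and their representatives not informed']
```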
The repurposing trap: when your employee makes you a provider
This is the part most organisations miss.
Article 25(1)(c) states: if a distributor, importer, deployer, or other third party modifies the intended purpose of an AI system (including a general-purpose AI system) that has not been classified as high-risk, in such a way that the system becomes high-risk, that person shall be considered a provider within the meaning of the regulation.8
What this means in practice: your finance assistant takes ChatGPT and builds it into an internal tool that ranks job applications. ChatGPT is a general-purpose AI system, not high-risk. But a tool that ranks job applications falls under Annex III, point 4.3 It is now high-risk. And the person who turned it into a high-risk system is the provider.8
That is your company.
Provider obligations are heavier than deployer obligations: technical documentation, a functioning quality management system, CE marking, conformity assessment.6 These are requirements designed for companies that build and sell AI systems. They apply to you if an employee vibe-codes a general-purpose tool into something that makes decisions about people.
This is not a hypothetical risk. It is what happens when the HR department takes one of the eleven Claude Cowork plugins Anthropic published in January 2026, adapts it to filter candidates, and puts it into operation without IT knowing.
Transparency for AI-generated text
Article 50(4) applies to deployers, not just providers: if your company uses an AI system to generate text published for the purpose of informing the public on matters of public interest, you must disclose that the text is AI-generated.9
Exception: if the text has gone through a process of human review in which a natural or legal person holds editorial responsibility for the publication.9
The obligation applies to text published for the purpose of informing the public on matters of public interest: journalistic and political content, civic information, and the like. Ordinary business communications (customer emails, product newsletters, internal announcements) generally do not fall under this requirement. But if you publish content addressed to a broader public on a matter of public interest and cannot show that a human reviewed it and took editorial responsibility for it, the disclosure obligation applies.9
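Where AI-drafted text flows through a publishing pipeline, that test can be enforced at the final step. A minimal sketch under our own assumptions about how content is flagged; the function name and label wording are hypothetical:

```python
def prepare_for_publication(text: str, *, ai_generated: bool,
                            human_editorial_review: bool,
                            public_interest: bool) -> str:
    """Append a disclosure when Art. 50(4) applies: AI-generated text meant
    to inform the public on a matter of public interest, with no human
    taking editorial responsibility (the exception to the rule)."""
    if ai_generated and public_interest and not human_editorial_review:
        return text + "\n\nThis text was generated by an AI system."
    return text

# A routine customer email needs no label even if AI wrote it.
assert prepare_for_publication("Hello!", ai_generated=True,
                               human_editorial_review=False,
                               public_interest=False) == "Hello!"
```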
Timeline
| Date | What applies |
|---|---|
| 2 February 2025 | Article 4 (AI literacy) and the ban on AI systems with unacceptable risk (Art. 5) apply.10 |
| 2 August 2025 | Rules for general-purpose AI models (GPAI, Art. 51–56) and the sanctions framework (Art. 99) begin to apply.10 |
| 2 August 2026 | General application: high-risk requirements, deployer obligations (Art. 26), transparency (Art. 50), repurposing rules (Art. 25). (See note below.)10 |
| 2 August 2027 | High-risk classification via EU harmonised product legislation (Art. 6(1)).10 |
Digital Omnibus (November 2025): On 19 November 2025 the Commission proposed (COM(2025) 868) that the high-risk rules be delayed by up to 16 months if the required standards and support tools are not in place by 2 August 2026. The latest possible date would then be December 2027. The proposal has not yet been adopted. The dates for the deployer obligations (Art. 26) and transparency (Art. 50) could shift with it.11
Article 4 is already in effect. This means your staff working with AI systems must now have adequate AI literacy. That requirement is live.10
Fines
Breaches of deployer obligations (Article 26) and transparency requirements (Article 50) fall under Article 99(4): fines of up to EUR 15 million or 3 percent of global annual turnover, whichever is higher.12
For SMEs, Article 99(6) applies: fines shall be the lower of the amounts or percentages specified. This is an explicit protection for SMEs, but it assumes fines are actually set proportionately. It is not an exemption.12
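As a worked example of how the two ceilings interact (illustrative arithmetic only; actual fines are set case by case by the authorities):

```python
def fine_ceiling_eur(annual_turnover_eur: float, is_sme: bool) -> float:
    """Art. 99(4): EUR 15 million or 3% of worldwide annual turnover,
    whichever is higher; Art. 99(6) takes the lower of the two for SMEs."""
    fixed, percentage = 15_000_000.0, 0.03 * annual_turnover_eur
    return min(fixed, percentage) if is_sme else max(fixed, percentage)

print(fine_ceiling_eur(80_000_000, is_sme=False))  # 15000000.0 (the fixed amount is higher)
print(fine_ceiling_eur(80_000_000, is_sme=True))   # 2400000.0 (3% of turnover is lower)
```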
What you can do now
The law provides no step-by-step guide, but the logic is clear.
Start with an inventory. Which AI systems are used in your operations? Include purchased systems, subscriptions, and anything employees have built with general-purpose tools. You cannot assess risk for systems you do not know about.6
Then classify each system: does it affect decisions that concern individuals? Does it fall within any of the Annex III areas?3 Is any employee about to repurpose a general-purpose AI system for one of these purposes?8
Assign human oversight to high-risk systems. That requires a named person with the right competence and authority to stop the system if it behaves incorrectly.7
And document. The Article 26 obligations require you to demonstrate compliance, not just achieve it.7
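A shared register is enough to get all four steps moving. Below is a sketch of what one entry might capture; every field name is our own invention, not taken from the regulation:

```python
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    """One row in an AI-system inventory; illustrative fields."""
    name: str
    source: str                   # "purchased", "subscription", or "built in-house"
    built_on_gpai: bool           # repurposing risk under Art. 25(1)(c)
    affects_individuals: bool
    annex_iii_area: str | None    # e.g. "employment", "credit", or None
    oversight_owner: str | None   # Art. 26(2): a named person, or None if unassigned
    evidence_location: str        # where the documentation lives

register = [
    RegisterEntry("CV screening plugin", "built in-house", built_on_gpai=True,
                  affects_individuals=True, annex_iii_area="employment",
                  oversight_owner=None, evidence_location="wiki/ai/cv-screening"),
]

# Surface the two failure modes this article warns about.
for entry in register:
    if entry.annex_iii_area and entry.oversight_owner is None:
        print(f"{entry.name}: likely high-risk, but no oversight owner assigned")
    if entry.built_on_gpai and entry.affects_individuals:
        print(f"{entry.name}: check the Art. 25(1)(c) repurposing rule")
```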
ISO 42001 as a structured path forward
The obligations in the EU AI Act (inventory, risk assessment, oversight, documentation, impact assessment) are exactly what ISO/IEC 42001 (management systems for AI) structures. The standard is not a legal requirement, but it gives you a demonstrable framework showing that you work systematically.
If you already have ISO 9001 or ISO 27001, the underlying logic is the same: document what you do, why, and how you verify it works. ISO 42001 takes that structure and applies it to AI systems.
We have written more about the connection between ISO 42001 and the EU AI Act in this article.
If you would like to see how AmpliFlow supports this work: book a walkthrough.
If you want a broader perspective on what the AI shift means for organisations (including shadow AI, vibe coding, and why your CFO may be asking the wrong question), read our related article on building or buying software with AI.
Footnotes
1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024. Full text: https://eur-lex.europa.eu/eli/reg/2024/1689/oj
2. Regulation (EU) 2024/1689, Article 3(4), definition of deployer.
3. Regulation (EU) 2024/1689, Annex III, list of high-risk AI systems: points 4 (employment), 5a (essential public services), 5b (credit assessment), 5c (insurance), 5d (emergency services).
4. Lindahl Advokatbyrå, "AI i rekryteringsprocessen — vad är tillåtet?" (AI in the recruitment process: what is permitted?), practical analysis of Annex III point 4 in HR contexts. https://www.lindahl.se/aktuellt/nyheter/ai-i-rekryteringsprocessen/
5. Regulation (EU) 2024/1689, Article 6(3)–(4), exceptions to high-risk classification and the profiling carve-out.
6. CMS Law, "EU AI Act — Questions and Answers", overview of deployer obligations and scope. https://cms.law/en/int/publication/eu-ai-act-questions-and-answers
7. Regulation (EU) 2024/1689, Article 26, deployer obligations, paragraphs 1, 2, 6, 7, and 9.
8. Regulation (EU) 2024/1689, Article 25(1)(c), the repurposing rule: when a deployer becomes a provider.
9. Regulation (EU) 2024/1689, Article 50(4), transparency for AI-generated text on matters of public interest.
10. Regulation (EU) 2024/1689, Article 113, entry into force and application dates.
11. European Commission, COM(2025) 868, Digital Omnibus, 19 November 2025. https://digital-strategy.ec.europa.eu/en/library/digital-omnibus
12. Regulation (EU) 2024/1689, Article 99(4) and 99(6), penalties and SME protection.