Your employees are already using AI Do you have governance?
According to Eurostat's latest figures, 35 percent of Swedish companies with 10 or more employees use AI, third most in the EU. ChatGPT, Copilot, image generation: AI tools spread quickly in organizations. Without governance you risk data leaks, ethical missteps and regulatory violations. ISO 42001 provides the framework. AmpliFlow makes it practical.
AmpliFlow participated in ISO's working group for ISO/IEC 42001.



AI chaos or AI governance?
Without control over AI usage, risks grow quickly.
Shadow AI in the workplace
According to Eurostat's latest figures, 35 percent of Swedish enterprises use AI, but most lack policies for how. Employees paste customer data, contracts and source code into ChatGPT without the employer knowing. Without governance, every employee is a potential data risk.
The Copilot attack
Security researchers showed how an attacker could trick Microsoft 365 Copilot into reading confidential documents and forwarding the contents by hiding secret instructions in a shared document. No user interaction required.
The Air Canada ruling
Air Canada's chatbot made up a refund policy that didn't exist. The airline was held liable by court. AI-generated information is treated as the company's promises.
GDPR fines for AI
OpenAI received a 15.6 million euro fine from Italy's data protection authority, December 2024. Reason: deficiencies in legal basis and transparency regarding personal data processing in ChatGPT. AI without documented governance is a GDPR risk.
EU AI Act: fines up to €15M for deployers
The EU AI Regulation enters into force gradually: prohibited systems since February 2025, high-risk systems from August 2026. Violations of deployer obligations (Art. 26) and transparency requirements (Art. 50) carry fines up to EUR 15 million or 3% of global turnover. Violations of the prohibited AI ban (Art. 5) can reach EUR 35 million or 7%.
The lawyer who trusted ChatGPT
A New York lawyer used ChatGPT to find case law and submitted six entirely fabricated court cases. The judge called them 'bogus with bogus quotes and bogus citations'. The lawyer and firm were fined $5,000.
DPD's chatbot went rogue
After a system update, parcel carrier DPD's AI chatbot started swearing at customers, writing poems about how terrible the company was, and recommending competitors. The post went viral with 800,000 views in 24 hours. DPD was forced to shut down its AI chatbot.
Extensions harvesting AI conversations
Eight browser extensions with over 8 million users silently collected complete conversations from ChatGPT, Claude, Copilot and five other AI services, including prompts, responses and timestamps. The data was sold for marketing analytics. Seven of the extensions had Google's and Microsoft's 'Featured' badges.
DoNotPay: fined for AI fraud
The US FTC fined DoNotPay $193,000 for marketing its AI chatbot as 'the world's first robot lawyer'. The company had never tested whether the AI's output met legal quality standards and employed no lawyers.
What you say vs what the auditor looks for
There's a difference between having an AI management system and living it. Here's what the auditor actually checks:
AI governance with built-in ISO 42001 controls
AmpliFlow has all ISO 42001 Annex controls built in. AI helps you generate content per control, the SoA is created automatically, and you can assign tasks and track progress per control. We use ISO 42001 ourselves to govern our own AI usage.
AI-driven risk analysis
AmpliFlow's AI generates risk scenarios and consequence analyses automatically. Combine it with ORA to assess AI risks according to ISO 42001 (6.1.2).
Impact assessment for AI systems
Document how your AI systems affect individuals and society. A unique requirement in ISO 42001 (6.1.4, A.5).
Statement of Applicability (SoA)
The SoA is generated automatically based on your applicability decisions per control. Justify included and excluded controls directly in the control register (6.1.3e).
Legislation monitoring for EU AI Act
Add EU AI Act to the legislation registry. Set target dates for when you need adaptations complete.
Process maps for AI-affected processes
Document which processes use AI and how. Link AI tools to process steps.
AI policies via Pages
Create AI policies in the built-in text editor (wiki). Organize in folders and share with employees.
Stakeholder analysis with ISO 42001 linkage
Map stakeholders affected by your AI usage and their requirements.
Action management for AI initiatives
AmpliFlow's AI suggests actions and improvements. Follow up with owners and deadlines.
Responsible AI in practice
ISO 42001 is built on principles for responsible AI. AmpliFlow helps you translate principles into concrete actions.
Transparency
Explain how AI systems work and make decisions
Fairness
Avoid bias and ensure equal treatment
Accountability
Clear ownership and responsibility for AI decisions
Safety
Protect against misuse and unintended harm
Prepare for the EU AI Regulation
The EU AI Act is the world's first comprehensive AI legislation. ISO 42001 helps you meet the requirements.
Risk classification
Classify your AI systems according to EU risk levels: unacceptable, high, limited or minimal risk.
Documentation requirements
Meet requirements for technical documentation, user instructions and quality management.
Compliance evidence
Show supervisory authorities that you're in control. ISO 42001 certification provides credible evidence.
Timeline
The EU AI Act enters into force gradually: prohibited AI systems since February 2025, requirements for GPAI models from August 2025, and high-risk systems from August 2026. Fines up to 35 million euros or 7 percent of global turnover. Note: ISO 42001 is not a harmonized standard under the AI Act. It does not provide a presumption of conformity, but it provides a proven structure that supports compliance.
Classify your AI system
Answer three questions about your AI system and see which risk level it falls under according to the EU AI Act.
Question 1 of 3
What is the AI system used for?
From AI system to safe use
ISO 42001 requires risk-based thinking for AI. AmpliFlow makes it concrete.
What changes in daily work?
ISO 42001 is not about writing documents nobody reads. It's about concrete changes in how you work with AI:
From "everyone does what they want" to approved tools
A clear list of which AI tools may be used, with what data, and who is responsible. Employees know what applies without having to ask.
AI training from day one
New employees get an introduction to the organization's AI policy and tools. Not a PowerPoint, a practical walkthrough of what's allowed and why.
AI incidents are reported and investigated
When something goes wrong with an AI system there is a process: report, investigate, act, learn. Just like quality deviations, but adapted for AI-specific risks.
Regular follow-up
Quarterly review of AI usage, incidents and risks. Management makes decisions based on facts, not assumptions.
From start to AI governance
A realistic timeline for ISO 42001 implementation. Duration varies based on organization size and AI maturity.
AI inventory
1-2 wMap all AI systems and use cases in the organization
Gap analysis
1-2 wCompare current governance against ISO 42001 requirements
Risk assessment
2-3 wAssess risks for each AI system according to EU AI Act risk levels
Impact assessment
1-2 wAssess how your AI systems affect individuals and society (6.1.4). Document according to A.5.
Policy & processes
4-8 wDevelop AI policy, guidelines and governance documents
Statement of Applicability (SoA)
1-2 wDocument which Annex A controls you apply and justify inclusions and exclusions (6.1.3e)
Implementation
4-12 wDeploy controls, train staff and begin applying
Internal audit
1-2 wReview AIMS effectiveness and address findings
Why ISO 42001?
Advantage in the AI era.
Build customer trust
Customers and partners want to know you handle AI responsibly. Certification proves it.
Responsible AI use
Minimize risks of bias, privacy violations and incorrect decisions.
Reduced risk
Structured AI governance reduces the risk of incidents and regulatory violations.
First-mover advantage
Be early. ISO 42001 is new. Get certified before competitors.
Read more about ISO 42001
What is ISO/IEC 42001?
A deep dive into the standard for AI management systems: what it requires, how it relates to other standards, and why it exists.
Read article →AmpliFlow in the ISO 42001 working group
How AmpliFlow contributes to the development of the standard and what it means for our customers.
Read article →ISO 42001 is not just for AI companies
All organizations that use AI tools need AI governance. It's not about developing AI. It's about using AI responsibly.
Developing your own AI? Then ISO 42001 is even more important, but the standard is designed for everyone who uses AI, not just those who build it.
Questions about ISO 42001
Answers without AI jargon.
What is an AI Management System (AIMS)?
AIMS is a management system specifically for AI use. It defines how the organization governs, monitors and improves its AI use. Just like an ISMS for information security or QMS for quality.
Do we need ISO 42001 if we don't develop our own AI?
Yes, if you use AI tools like ChatGPT, Copilot or AI-based services. ISO 42001 is about responsible use of AI, not just development. All organizations where employees use AI tools benefit from the standard.
How does ISO 42001 relate to the EU AI Act?
The EU AI Act is legislation with requirements and sanctions. ISO 42001 is a voluntary standard that helps you meet legal requirements. Implementing ISO 42001 is one way to demonstrate EU AI Act compliance.
Can we use the same tools as for ISO 27001?
Yes. AmpliFlow has built-in controls for both ISO 27001 and ISO 42001 with AI assistance, automatic SoA, and task management per control. Beyond the control registers, both standards share the same tools: risk analysis, documentation via Pages, action management, and checklists.
How long does implementation take?
It depends on how many AI systems you have, how mature your current governance is and whether you already have other ISO certifications. The time goes toward embedding new ways of working, not writing documents.
Do we need to get certified?
No, certification is voluntary. You can implement ISO 42001 without external audit. But certification provides credible evidence for customers, partners and supervisory authorities.
Can AI models deliberately avoid controls?
Yes. Anthropic's research (December 2024) showed that AI models can exhibit so-called "alignment faking": they behave differently when they know they're being monitored. In 12 percent of cases, the model acted against its instructions when it believed it was unmonitored. That's why ISO 42001 requires ongoing monitoring, not just initial testing.
What is prompt injection and why does it matter?
Prompt injection means an attacker manipulates an AI system's instructions through hidden text in documents or web pages. The Microsoft 365 Copilot demonstration showed how an attacker could extract sensitive data by embedding hidden instructions in a shared document. ISO 42001 does not prescribe specific technical controls. Instead, it addresses this through risk assessment (6.1.2) that identifies prompt injection as a risk, risk treatment (6.1.3) that selects appropriate controls from Annex A, and requirements for operation and monitoring (A.6.2.6) of AI systems. The standard takes a risk-based approach rather than requiring specific technical solutions.
What do we do about shadow AI?
Shadow AI (when employees use AI tools without the organization's knowledge) is one of the most common risks. Step one: map actual AI usage (not just approved usage). Step two: create clear guidelines for which tools may be used and with what data. Step three: make it easier to use approved tools than to go around the system. ISO 42001 formalizes this through requirements for AI policy and awareness raising.
Ready for responsible AI?
Book a demo and we'll show you how AmpliFlow can help with AI governance. No sales pitch, just practical answers.