AI in AmpliFlow

AI That Knows Your Business

AI in AmpliFlow uses your data to give you results that actually matter - not generic suggestions anyone could get from ChatGPT. Here's exactly how it works, what data is used, and what guardrails are in place.

Last updated: 2026-02-14

How Your Data Powers Better Results

AI in AmpliFlow uses your data to give you contextual, grounded results - that's what makes it valuable.

We don't give AI free access to everything. Each AI function sends targeted, relevant context for the specific task. When you activate AI features, you choose to send data to AI providers. This is always opt-in, per feature.

Our AI Principles

Principles that govern how we build and use AI in AmpliFlow - in line with the EU AI Act.

Humans Decide

AI creates drafts; you review and publish. AI can create and modify records in your system, but these are marked as AI-generated. No AI content is intended to count as approved until a human has reviewed it.

Transparent

You initiate AI suggestions yourself and actively choose to save them. We design our systems to mark AI-generated content, but large language models are probabilistic and we cannot guarantee marking works in 100% of cases. We monitor and improve continuously.¹

Opt-in with Targeted Context

AI features are opt-in. When you activate an AI feature, AmpliFlow sends relevant data from your account to the AI provider for contextual results.

Better AI Through Your Industry Data

We do not currently train our own models on customer data. We plan to train our own models on de-identified data in the future to give you better, industry-specific AI suggestions. De-identified means the data cannot be traced back to individual organizations.

Secure

Data is sent encrypted to AI providers. AmpliFlow sends only the data relevant to the specific task - not your entire database.

AI Features in AmpliFlow

Each AI feature is designed with human oversight as a requirement. Here's how each one works:

Operational Risk Analysis

AI assists through three fields in the risk assessment, where each step builds on the previous one. First it suggests a risk scenario based on your industry, organization, and existing risks. Then it suggests potential consequences based on the scenario. Finally it suggests risk reduction measures based on the scenario, consequences, and your existing controls. You can edit, rephrase, or regenerate at every step.

Your role: Edits and approves each field
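The chained generation described above can be sketched as follows. This is an illustrative sketch, not AmpliFlow's actual implementation: `generate` stands in for a call to an AI provider, and the function and field names are assumptions.

```python
# Illustrative sketch of chained risk-assessment generation. Each step's
# prompt carries the outputs of the previous steps, so every suggestion is
# grounded in the same risk context. `generate` is a placeholder for an
# AI provider call; here it just returns a canned draft.

def generate(prompt: str) -> str:
    # Placeholder for an LLM call to an AI provider.
    return f"[AI draft for: {prompt}]"

def suggest_risk_assessment(industry: str, existing_risks: list[str]) -> dict:
    # Step 1: scenario from industry, organization, and existing risks.
    scenario = generate(
        f"Risk scenario for a {industry} company; known risks: {existing_risks}"
    )
    # Step 2: consequences build on the scenario.
    consequences = generate(f"Potential consequences of: {scenario}")
    # Step 3: measures build on scenario and consequences.
    measures = generate(
        f"Risk-reduction measures given {scenario} and {consequences}"
    )
    # Every field is a draft: the user edits, rephrases, or regenerates each one.
    return {"scenario": scenario, "consequences": consequences, "measures": measures}
```

The point of the chaining is that a regenerated consequence stays consistent with the scenario the user already approved, because the approved text is part of the next prompt.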

Competencies

AI generates descriptions in English and Swedish based on the competency name and your organization context. If you already have text, AI can rephrase it. Both language fields are filled simultaneously.

Your role: Reviews and adjusts the description

Positions

AI generates a structured position description with responsibilities, authorities, and expected behavior, in English and Swedish. The description is based on the position title and your organization context.

Your role: Reviews and adjusts the description

ISO Controls (27001 and 42001)

AI generates content per control: requirement explanation, internal implementation description, Statement of Applicability (SoA) text, in-depth information, and tool recommendations. Each field builds on the previous one. Batch generation runs across all controls in a standard and can be paused and resumed.

Your role: Reviews each field, adjusts, and approves
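The pause-and-resume behavior can be sketched like this. It is a minimal illustration under assumed names, not AmpliFlow's API: progress is tracked per control, so a resumed run skips controls that already have drafts instead of regenerating them.

```python
# Hedged sketch of batch generation with pause/resume (names illustrative).
# Progress is kept per control in a dict; resuming simply calls the function
# again with the saved progress.

def batch_generate(controls: list[str], progress: dict[str, str],
                   should_pause=lambda: False) -> dict[str, str]:
    for control in controls:
        if control in progress:
            continue          # already generated in an earlier run
        if should_pause():
            break             # user paused: return partial progress
        progress[control] = f"[draft for {control}]"  # placeholder LLM call
    return progress
```

A resumed run is just a second call with the same progress dict, which makes pausing safe at any point in the batch.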

New from Labs: af-cli

We are exploring how AI agents can work directly inside your management system. af-cli gives agents access to projects, risks, deviations, goals, and checklists from the terminal. This is a research preview, not a finished product.

Learn more about af-cli →

Missing AI assistance in another part of AmpliFlow? See all Labs projects →

EU AI Act

The EU AI Act came into force in 2024 and sets requirements for how AI systems may be used. The regulation classifies AI systems by risk and requires specific measures for high-risk systems.

AmpliFlow's AI features are classified as limited risk because they:

  • Are used as support tools, not for automated decisions
  • Do not affect fundamental rights
  • Require human review for all outputs
  • Are transparent about AI being used

We follow the regulation's transparency requirements (Article 50) and work to inform users when they interact with AI-generated content.³

Learn more about the EU AI Act and what it means for your organization.

AI Risk Assessment

We assess the risks of our own AI features using the same principles we help our customers apply. This means we systematically identify and evaluate risks associated with how AI is used in AmpliFlow.

The risk criteria we focus on include:

  • Incorrect suggestions that could lead customers in the wrong direction
  • Hallucinations - AI presenting fabricated information as fact
  • Data integrity - ensuring the right data is sent to the right AI function and nothing more
  • Availability and dependency on third-party providers

AmpliFlow builds risk management tools and uses them internally too. Learn more about how we manage risks in our security practices.

AI System Impact Assessment

We conduct impact assessments to understand how our AI features affect both individual users (employees working in the system) and organizations (customers relying on AI-generated suggestions).

Our key safeguards:

  • AI can create drafts and suggestions that are marked as AI-generated. Human review is required before they count as approved
  • AI features are opt-in. Nothing is activated without your conscious choice
  • AI is a support tool. We design the system so AI does not make decisions for you, but we cannot guarantee a language model never behaves unexpectedly²
  • We have processes to detect and handle cases where AI does not follow our rules, and we improve these processes continuously

These measures are designed to keep AI as a support tool that does not replace human judgment in your management system.

Incident Handling

If you discover unexpected or incorrect AI behavior, contact us at info@ampliflow.com. We treat AI-related incidents with the same seriousness as security incidents.

Internally, we have an established process for reporting and handling AI incidents. All employees can report anomalies in AI behavior, and these are investigated as part of our regular deviation management system.

For serious incidents, we follow the same notification process as for security incidents, including notifying affected customers within 24 hours. Learn more about our incident handling and security practices.

Supplier Governance

We currently use OpenAI and Anthropic as AI providers, exclusively via API calls. Under the providers' API terms (as of February 2026), data sent via API is excluded from training their models.

Our supplier governance includes:

  • API-only access - no direct access to models outside our controlled calls
  • Data Processing Agreement (DPA) with OpenAI. Anthropic's DPA is incorporated into their commercial terms of service
  • Encrypted transmission of all data

We regularly evaluate our AI providers based on security, performance, and regulatory compliance. Learn more about how AmpliFlow can help you with ISO 42001 certification.

AI Lifecycle and Monitoring

We manage the lifecycle of our AI features through configuration and testing:

  • Each AI feature has a defined model configuration - we control which model is used for each task
  • Automated error monitoring catches issues in production

When AI providers update their models, we test the new versions before activating them in production. This includes prompt testing and output quality validation. Retirement of older models happens in a controlled manner with adequate transition time.
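Pinning a model per feature can be sketched as a simple configuration map. The feature keys and model names below are placeholders, not AmpliFlow's actual configuration; the sketch only shows the principle that a provider releasing a new model changes nothing until the pin is explicitly updated.

```python
# Illustrative per-feature model configuration (all names are placeholders).
# Each AI feature is pinned to an explicit provider and model version.

MODEL_CONFIG = {
    "risk_analysis": {"provider": "openai", "model": "model-a"},
    "competencies":  {"provider": "anthropic", "model": "model-b"},
}

def model_for(feature: str) -> dict:
    # Unknown features fail loudly rather than silently using a default model.
    if feature not in MODEL_CONFIG:
        raise KeyError(f"No model configured for feature: {feature}")
    return MODEL_CONFIG[feature]
```

Updating to a new model version then becomes a deliberate config change that can be tested (prompt testing, output quality validation) before it reaches production.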

Learn more about our AI tools in AmpliFlow.

Management Review

AI usage is part of our regular management review. This means that management regularly evaluates how AI features perform, what risks exist, and whether our AI principles are being followed.

This transparency page is version-controlled - the latest update was made 2026-02-14. Our AI policy is reviewed and updated as part of the normal management system cycle, in line with how we work with all other parts of our management system.

Frequently Asked Questions

Are AI models trained on our data?

We do not currently train our own models on customer data, but we plan to in the future. The goal is to build models that give better, more relevant suggestions for your industry and type of organization. All training data is de-identified, meaning it cannot be traced back to your organization. We do not give identifiable customer data to third-party providers for their model training.

What data does AI have access to?

When you use an AI feature, AmpliFlow sends relevant context for that specific task to the AI provider. For example, when generating risk scenarios, the AI receives information about your industry, existing risks, and organization context. It does not get access to your entire database. AI agents that build our website never access customer data.
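The data-minimization idea can be sketched like this (field names are illustrative assumptions, not AmpliFlow's schema): the request payload is built from only the fields the task needs, so everything else in the account record never leaves the system.

```python
# Minimal sketch of targeted context for risk-scenario generation.
# Only task-relevant fields are copied into the payload; the rest of the
# account record is deliberately left out of the request.

def build_risk_context(account: dict) -> dict:
    return {
        "industry": account["industry"],
        "existing_risks": account["risks"],
        "org_context": account["description"],
    }
```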

Where is data processed?

Data is processed by AI providers (such as OpenAI and Anthropic) via encrypted APIs. When you activate an AI feature, you are choosing to send that data to these providers. AmpliFlow sends only the data relevant to the specific task.

How is this different from using ChatGPT?

Context. A CEO asking ChatGPT gets generic answers based on public knowledge. AmpliFlow's AI is grounded in your actual data - your risks, your processes, your organization - which means you get relevant, actionable suggestions instead of generic advice you still need to adapt.

Can AI make decisions for us?

AI in AmpliFlow is designed as a support tool. AI can create and modify records, but these are marked as AI-generated and intended to be reviewed by a human before counting as approved. We design our systems with that goal, but large language models are probabilistic and we cannot guarantee they follow every rule in every situation. That is why we have monitoring processes and improve continuously.

How do you comply with the EU AI Act?

AmpliFlow's AI features are classified as low or limited risk under the EU AI Act (Article 50). We work toward human oversight and transparency about AI use, and document our AI systems according to the regulation's requirements. We actively monitor how the law is applied in practice and adapt our processes accordingly.

Can we turn off AI features?

Yes. AI assistance is entirely optional and opt-in. You can use AmpliFlow without AI features, and each individual AI feature can be enabled or disabled separately.

Notes and legal background

  1. On AI marking and probabilistic systems. Large language models (LLMs) are probabilistic: they generate responses based on probabilities, not deterministic rules. We can design systems and write instructions that say "always mark AI-generated content," but we cannot guarantee the model follows that instruction in 100% of cases. The volume of AI-generated content also makes it impractical to manually review every individual output. We have processes and workflows aimed at catching deviations, and we improve these continuously. The EU AI Act (Article 50.2) recognizes this reality with the phrase "as far as this is technically feasible."
  2. On AI as a support tool vs. guarantees. We design AmpliFlow's AI features as support tools where humans review and approve. But claiming AI "never" does something specific would be dishonest given how language models work. We choose to be transparent about this limitation instead of making promises we cannot keep. Our safeguards (opt-in per feature, AI marking, review workflows) are designed to minimize risk, not eliminate it.
  3. On EU AI Act transparency requirements. The EU AI Act (Article 50.4) requires that AI-generated text published to inform the public on matters of public interest be disclosed as AI-generated. An exemption applies when the content has undergone human review or editorial control and a person or organization holds editorial responsibility. AmpliFlow's model (AI creates drafts, humans review and publish) falls within this exemption. The regulation's transparency requirements (Chapter IV) entered into force on 2 August 2025 with full application from 2 August 2026. We actively monitor how the law is applied and interpreted in practice, including guidance from the EU AI Office and national supervisory authorities, and update our processes accordingly.

Questions About AI?

Do you have questions about how AmpliFlow uses AI, or want to discuss AI usage for your organization? Contact us.

Email: info@ampliflow.com