Swedish Guidelines for Generative AI: What DIGG and IMY Expect

Sweden's DIGG and IMY have published joint guidelines for generative AI. Here is what they mean for private companies — policy, GDPR, the AI Act, and information classification.

Generative AI is a daily tool now. Employees use ChatGPT, Copilot, and similar services, often without organisational guidelines. That creates risks: personal data can leak, confidential information can end up with external providers, and no one knows who is responsible for what the AI produces.

Sweden’s Agency for Digital Government (DIGG) and the Swedish Authority for Privacy Protection (IMY) have published joint guidelines for generative AI. The guidelines target public administration, but the advice applies equally to private companies. They are built on the same laws: GDPR and the EU AI Act.

This article explains what the guidelines mean for private companies and how to embed AI governance into your existing management system.

Start with an AI Policy

The first step according to DIGG and IMY: create an AI policy. Not because the law requires it, but because employees already use AI. Without clear boundaries, each person decides for themselves what is acceptable.

An AI policy should be part of your management system’s documented information. It does not need to be called “AI policy.” Your existing internal structure for policies determines where it fits best.

DIGG publishes its own AI policy openly for anyone to read and draw inspiration from. It covers concrete questions organisations struggle with (a minimal enforcement sketch follows the list):

  • Who it covers: all staff, consultants, interns
  • Which AI tools are approved (only authorised ones, no personal licenses)
  • Prohibition against automated decision-making without human oversight
  • Warning about AI agents acting independently and risking loss of control
  • Transparency requirements: document when AI has been used
  • Environmental sustainability: resource-intensive AI (image, video) only when justified
  • Openness: AI models and code shared openly by default
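
The list reads as governance prose, but parts of it can be enforced technically. As a rough sketch of the approved-tools point, with invented tool names and no claim that DIGG implements anything like this:

```python
# Minimal sketch of an approved-tools check. The tool names and the flat
# allow-list are invented for illustration; DIGG's policy is prose, not code.

APPROVED_TOOLS = {
    "copilot-enterprise",  # hypothetical: licensed through the organisation
    "internal-llm",        # hypothetical: self-hosted model
}

def is_use_allowed(tool: str, personal_license: bool) -> bool:
    """Allow only organisation-approved tools on organisation licenses."""
    if personal_license:
        return False  # the policy bans personal licenses outright
    return tool in APPROVED_TOOLS

assert is_use_allowed("copilot-enterprise", personal_license=False)
assert not is_use_allowed("chatgpt-free", personal_license=True)
```

Even if the check only lives in an onboarding document rather than code, writing the rule this precisely forces you to maintain an explicit list of approved tools.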

DIGG emphasises that responsibility never rests with the AI: employees are responsible for what the AI generates, just as they are for everything else they produce in their work.

GDPR Always Applies, Even Without Personal Data in the Prompt

This is perhaps the most important point in the guidelines, and the one most people miss.

Many think: “We never feed personal data into ChatGPT, so GDPR does not apply.” Wrong. Language models are trained on material that contains personal data, and the mere fact that a model can generate information about individuals can trigger GDPR requirements.

Consider this: an employee pastes a customer list into ChatGPT to draft an email. Who is the data controller for that processing? What does your data processing agreement with OpenAI say? Do you even have one? These are the questions a GDPR risk assessment for AI actually has to answer.

IMY’s guidance on GDPR and AI goes deeper into the issue. They highlight what they call the “black box problem”: AI models function as closed systems where it can be difficult to explain how a particular result arose. This conflicts with GDPR’s requirements for transparency and the individual’s right to information.

Every organisation needs to answer the following questions; a sketch of how to record the answers follows the list:

  • What legal basis do you have for processing? Consent, legitimate interest, contract?
  • Who is the data controller and who is the processor? The relationship with the AI provider must be clear.
  • Does processing involve transfer to a third country? Most major AI models run in the US.
  • Do you need to conduct a data protection impact assessment? DIGG recommends always doing one, even when not formally required.
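
One way to make the answers auditable is to record them per tool in a structured form. The record below is a sketch under our own assumptions; the field names are illustrative, not a format DIGG or IMY prescribes.

```python
from dataclasses import dataclass
from enum import Enum

class LegalBasis(Enum):
    CONSENT = "consent"
    LEGITIMATE_INTEREST = "legitimate interest"
    CONTRACT = "contract"

@dataclass
class AIGdprAssessment:
    """Hypothetical per-tool GDPR record; all field names are illustrative."""
    tool: str
    legal_basis: LegalBasis
    controller: str               # who decides purposes and means
    processor: str                # typically the AI provider
    third_country_transfer: bool  # e.g. a model hosted in the US
    dpia_done: bool               # DIGG recommends one in all cases

# Example entry for a hypothetical deployment:
record = AIGdprAssessment(
    tool="chat-assistant",
    legal_basis=LegalBasis.LEGITIMATE_INTEREST,
    controller="Your Company AB",
    processor="AI Provider Inc.",
    third_country_transfer=True,
    dpia_done=True,
)
```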

It is not enough to insert a clause in the supplier agreement. You need to actively understand what happens to the data and be able to demonstrate it.

Prepare for the EU AI Act Now

The EU AI Act classifies generative AI as “general purpose AI” (GPAI). The rules for GPAI took effect on August 2, 2025. Most other AI Act rules apply from August 2, 2026.

DIGG points out that organisations can have different roles under the regulation: provider, deployer, or downstream provider. Which role you have determines your obligations.

Even if you only use AI rather than develop it, you can be a deployer with obligations, for example informing users that they are interacting with AI and having procedures for human oversight.

Do not wait. Start by mapping which AI tools are used in the organisation, what roles you have under the regulation, and what gaps exist.
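
The mapping does not need special tooling to start with. A minimal sketch of such an inventory, with invented entries:

```python
# Hypothetical first-pass inventory: which AI tools are in use, which AI Act
# role the organisation has for each, and what gaps remain. Entries invented.
ai_inventory = [
    {"tool": "ChatGPT", "role": "deployer",
     "gaps": ["no data processing agreement", "no user-facing AI disclosure"]},
    {"tool": "Copilot", "role": "deployer",
     "gaps": ["no documented human-oversight procedure"]},
]

for entry in ai_inventory:
    print(f"{entry['tool']}: role={entry['role']}, open gaps={len(entry['gaps'])}")
```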

Classify Information Before It Reaches an AI Tool

The guidelines are clear: classify information before feeding it into an AI tool. This applies to all information, not just what is obviously confidential.

DIGG’s own guideline states directly: “Do not use personal data, classified data, security-sensitive data, or data that could be sensitive for the authority if it were disclosed.” That rule is equally sensible for private companies. Replace “authority” with your company name.

In the public sector, AI-generated content can become public records. Prompts and responses stored by an authority are likely subject to the principle of public access, and anyone can request them.

But the principle is the same regardless of sector: assess how sensitive the information is before sending it to an AI tool.
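
In practice that assessment can sit as a gate in front of every external AI call. A minimal sketch, assuming a four-level classification scheme of our own choosing:

```python
from enum import IntEnum

class Classification(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    SECRET = 3

# Hypothetical rule: nothing above INTERNAL may leave the organisation.
MAX_LEVEL_FOR_EXTERNAL_AI = Classification.INTERNAL

def may_send_to_external_ai(level: Classification) -> bool:
    """Gate applied before any prompt is sent to an external AI service."""
    return level <= MAX_LEVEL_FOR_EXTERNAL_AI

assert may_send_to_external_ai(Classification.PUBLIC)
assert not may_send_to_external_ai(Classification.CONFIDENTIAL)
```

The point is not the code but the order of operations: classification happens before the prompt leaves the organisation, not after.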

Ethics and Sustainability as Guiding Principles

The guidelines cover more than law. DIGG and IMY highlight ethical principles that should guide AI use: transparency, accountability, fairness, and environmental sustainability.

The last one is surprisingly concrete. DIGG’s policy states that employees should use image and video generation, which consume significant computing power, only “when it is justified.” Every employee is expected to be aware of AI’s climate impact.

Transparency is another thread. DIGG requires employees to state when AI has been used as a source. When you use AI in services directed at the public, it must be clearly indicated. It is about trust: people should be able to know whether and how AI has influenced what they read or decisions that affect them.

Four Steps to Take Now

Based on DIGG and IMY’s guidelines, most organisations should take these four steps:

Write an AI policy. Use DIGG’s example policy as a starting point. It addresses who is covered, which tools are approved, and requirements for transparency and sustainability. Adapt it to your business, anchor it with management, and make it a controlled document in your management system. It does not need to be complicated. Five to ten pages is often enough.

Assess GDPR risks. Review which AI tools employees actually use, not just those you have approved. Map where personal data can be processed. Review your data processing agreements with AI providers. Do they cover the processing that actually takes place? Conduct a data protection impact assessment.

Map your roles under the AI Act. Are you a deployer, provider, or both? What obligations follow from each role? Most private companies using ChatGPT or Copilot are deployers, which carries requirements to inform users and ensure human oversight.

Implement information classification. Ensure employees know what type of information may and may not be entered into AI tools. Give concrete examples: customer data, salary information, and trade secrets do not belong in an external chat service. Make the rules easy to follow.
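
Easy-to-follow rules can be backed by a technical check. As one naive illustration, not a substitute for real classification: a pre-flight screen for Swedish personal identity numbers (personnummer) in outgoing prompts. The regex and the idea of blocking at send time are our assumptions.

```python
import re

# Naive illustration only: the common personnummer shapes (YYMMDD-XXXX or
# YYYYMMDD-XXXX). A real screen needs date and checksum validation, plus
# patterns for other data the policy names (salary figures, customer IDs).
PERSONNUMMER = re.compile(r"\b(?:\d{2})?\d{6}[-+]?\d{4}\b")

def screen_prompt(prompt: str) -> None:
    """Refuse to send a prompt that appears to contain a personnummer."""
    if PERSONNUMMER.search(prompt):
        raise ValueError("Prompt appears to contain a personal identity number")

screen_prompt("Draft a polite reminder about the Q3 report")  # passes
# screen_prompt("Customer 19850712-1234 has not paid")        # would raise
```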

None of these steps require you to stop using AI. Quite the opposite. The point is to use AI deliberately, with boundaries that protect your organisation and the individuals whose data is involved.

Build It Into Your Management System

An AI policy sitting as a PDF on the intranet changes nothing. It needs to be connected to processes that already work: risk assessments, document control, training, and follow-up.

Companies that already have a management system for ISO 9001 or ISO 27001 have a head start. The AI policy does not need to be a new document floating in isolation. It fits into the document control, risk management, and management review processes you already have. What we see is that organisations treating AI governance as a separate project, disconnected from their management system, never quite make it stick.

If you want to connect your AI policy directly to risk assessments and document control in a system that already handles your ISO certification, book a demo to see how it works.
