GDPR and AI tools: it's easier than you think to break the rules

AI tools don't breach GDPR by themselves. But paste one email with a name into a personal account, and you have a problem. Here is how it happens, and what to do about it.

ChatGPT is not a GDPR problem by itself. It becomes one the moment you paste in a person’s name, an email address, a case record, or an HR document. That happens constantly, without anyone thinking about it.

The issue is not that employees use AI. It is that the threshold for a breach is low, and most organisations do not know where that threshold is.

Why personal accounts are risky

GDPR Article 28 requires a binding contract when a third party processes personal data on your organisation’s behalf. Article 28(3) states:

“Processing by a processor shall be governed by a contract or other legal act under Union or Member State law, that is binding on the processor with regard to the controller and that sets out the subject-matter and duration of the processing, the nature and purpose of the processing, the type of personal data and categories of data subjects and the obligations and rights of the controller.”

With a personal AI account, that contract does not exist. Your employee chooses the provider themselves. You have no agreement with that provider. No control. No visibility.

If personal data is sent into that session, you have a problem. And it is easy to do without noticing: the name of a contact person at a client in an email draft, a case with a national identity number, an HR document you want summarised. It does not have to be intentional to be a breach.

It is the account, not the tool

ChatGPT, Claude, Gemini: the tools themselves are not the problem. The account is.

With a personal account, consumer terms apply. Anthropic’s privacy policy is explicit: it does not apply when Anthropic processes data on behalf of business customers with Claude corporate accounts. For personal accounts, Anthropic is the data controller. Not you.

In practice, that means:

  1. You cannot request access to what has been stored
  2. You cannot demand deletion
  3. You do not know whether the data is shared further, or how
  4. You do not know which country the server is physically located in

Nobody in your organisation can audit that session. You are the data controller for that data, but you have zero control over how it is handled.

Personal accounts: a simple decision

This should not be a policy with grey areas and case-by-case assessments. It should be a rule:

Work data is not processed in personal AI accounts. Ever.

It does not matter if it is “just” an email draft. If personal data is in it (names, email addresses, case records, contract details), Article 28 applies. The rule needs to be simple enough to follow without thinking.

Employees using personal accounts today are not acting in bad faith. They lack an alternative. Give them one.

Corporate accounts solve half the problem

With a corporate account (Claude Team, Claude Enterprise, ChatGPT Team, or equivalent), a data processing agreement exists. You decide what may be stored, you have the right to request deletion, and you know in which country the data is processed.

That is necessary. But it is not sufficient.

Article 5(1)(c) of GDPR (data minimisation) applies regardless of what agreement you have:

“adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’)”

You need to ask: do we need to include the personal data at all?

In many cases the answer is no. You can anonymise, pseudonymise, or refer to a case without pasting in names, national identity numbers, and contact details. That reduces both risk and exposure, and it is good practice regardless of what agreement you have with the provider.
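
As an illustration, here is a minimal pseudonymisation sketch in Python. The patterns (a generic email format and a Swedish-style national identity number) are assumptions for the sake of the example, not a complete personal-data detector. Note that the name in the output slips through untouched, which is exactly why serious redaction needs a proper data loss prevention or named-entity tool.

    import re

    # Illustrative patterns only: a generic email address and a
    # Swedish-style national identity number (e.g. 850101-1234).
    # These are assumptions for the example, not a full PII detector.
    PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "NATIONAL_ID": re.compile(r"\b\d{6,8}[-+]?\d{4}\b"),
    }

    def pseudonymise(text: str) -> str:
        """Replace matched personal data with labelled placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    draft = "Contact Anna Svensson (anna.svensson@example.com), ID 850101-1234."
    print(pseudonymise(draft))
    # -> Contact Anna Svensson ([EMAIL]), ID [NATIONAL_ID].
    # The name passes through: regexes alone are not enough.

Even a rough filter like this changes the habit that matters: the question “what am I about to paste?” gets asked before the data leaves your environment, not after.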

What you actually need to do

Step 1: Prohibit personal accounts, no exceptions. Put it in a policy. Communicate it. Give clear examples of what counts as work data. Not “sensitive data”: work data. Emails, case records, contracts, HR information, customer data.

Step 2: Give people a corporate account. It does not need to be everyone. Identify the roles that regularly handle personal data in AI tools and start there. A policy without an alternative is a declaration, not governance.

Step 3: Train for data minimisation. Even with a corporate account: train people not to paste in more than necessary. “Describe the customer’s situation” is not the same as “paste the case record with name and national identity number.”

Step 4: Inventory what is being used. You cannot address a problem you cannot see. Ask managers in each department. A short inventory is enough.
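
A sketch of what that inventory could look like, again in Python. The field names and example rows are hypothetical; a spreadsheet works just as well, as long as it captures the tool, the account type, and whether personal data is involved.

    from dataclasses import dataclass

    @dataclass
    class AIToolUsage:
        department: str
        tool: str                  # e.g. "ChatGPT", "Claude", "Gemini"
        account_type: str          # "personal" or "corporate"
        data_pasted: str           # free text: what goes into the tool
        handles_personal_data: bool

    # Hypothetical answers collected from department managers.
    inventory = [
        AIToolUsage("Sales", "ChatGPT", "personal", "email drafts to clients", True),
        AIToolUsage("HR", "Claude", "corporate", "policy summaries", False),
    ]

    # Personal accounts that see personal data are the rows to act on
    # first (steps 1 and 2 above).
    for row in inventory:
        if row.account_type == "personal" and row.handles_personal_data:
            print(f"{row.department}: {row.tool} ({row.data_pasted})")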

Policies without alternatives solve nothing

It is tempting to send out a policy email and consider the matter closed. That does not work if the personal account is the only tool available.

Give people an alternative. Explain why the rule exists. Make it easy to do the right thing.

GDPR Articles 28 and 5 are not administrative requirements to interpret away. They exist to protect the people whose data you handle. Personal accounts breach Article 28. Always. And even with an agreement in place, the minimisation requirement remains.


Would you like to see how AmpliFlow helps structure an AI policy and manage which tools are used in your organisation? Book a walkthrough.
