Shadow AI: managers build tools IT doesn't know about


What used to be called Shadow IT was straightforward. A few employees installed Dropbox without permission; IT found it, shut it down, and everyone moved on. The problem was easy to spot and easy to stop.

Shadow AI in 2026 works differently.

A finance assistant with a ChatGPT or Claude account can build a functioning tool in an afternoon. No IT ticket. No approval. No credentials to save. Six months later the tool is in production, connected to real systems, handling customer data, and nobody in IT knows it exists.

And the person who built it is often not the intern. According to a survey by vendor Retool, which asked hundreds of developers and business builders about their AI habits, 60 percent had built something outside IT oversight in the past year.[1] In the same survey, 64 percent of respondents were senior managers or above.[1]

The technology is not the problem. The problem is that it has become so easy to use that governance cannot keep up.

Why does it happen?

The answer is not that employees are careless or acting in bad faith. They are making rational choices.

The reasons are clear: 31 percent bypassed IT because it was faster, 25 percent because the existing tools were inadequate, 18 percent because the IT process felt too slow, and 10 percent because IT lacked capacity altogether.[1]

This is not sabotage. It is someone in marketing who needs to analyse customer data by Friday and knows an IT request takes three weeks. So they open claude.ai, upload their Excel file, and run the analysis.

Rational. And that is precisely the problem.

What makes Shadow AI more dangerous than old Shadow IT?

Old Shadow IT was passive. An employee installed a program and used it. It was a tool with a fixed purpose, created by a company with a privacy policy.

Shadow AI is active. The tool can generate code, connect to APIs, process data autonomously, and produce new outputs. The employee is not just building a tool for themselves. They are building a system that others in the company may start using, that connects to customer data, and that nobody planned for.

Samsung has security experts that most Swedish SMEs will never hire. In April 2023, employees pasted confidential source code into ChatGPT during debugging work.[2] TechCrunch reported that Samsung then temporarily restricted generative AI tools on company-owned devices and internal networks while it put security measures in place.[2]

That is the point: competence does not protect you. Governance does.

The GDPR consequence nobody talks about

When an employee uses a personal AI account to process company data, a legal problem arises.

With an organisation-managed business workspace at Claude or ChatGPT, the provider’s business terms and DPA can apply to that workspace. Anthropic’s Commercial Terms say its DPA forms part of the contract for commercial services.[3][4] OpenAI says the same in its Services Agreement and DPA for ChatGPT Business, ChatGPT Enterprise, and other business services.[5][6]

With a personal consumer account, the situation is different. Anthropic’s privacy policy applies when someone uses Claude as a consumer, and it does not apply when an employer has provisioned a Claude for Work account.[7] Your organisation is then usually not the customer in the contract, cannot give documented instructions under Article 28, and has weaker visibility into subprocessors, deletion, and incident handling.

Your organisation is likely still responsible for the customer data that passed through that session. But you have no agreement with the party that processed it.

Note that the absence of a data processing agreement does not mean your organisation escapes liability. It means you have lost control. GDPR’s requirements for appropriate technical and organisational measures (Articles 5 and 24) apply to you as data controller regardless of whether your employees use approved or unapproved tools. The difference is that with a corporate account you can demonstrate compliance. With a personal account, you cannot.

We have written a deeper look at exactly this in GDPR and AI tools: what happens when employees use personal accounts.

The person who built it may have left

There is one more risk that is invisible until it is too late.

The tool built in an afternoon may now be used daily by six people in the company. They know it works. They do not know how.

The person who built it may have left.

The next time it breaks (when an external integration is updated, when the data format changes, when someone asks how customer data is handled) nobody can fix it. IT cannot help; they did not know it existed. The original builder is gone.

This is not a hypothetical scenario. It is what happens when infrastructure is built by one person without anyone else documenting, participating in, or approving it.

How to build an AI policy and regain control

Banning AI is the wrong answer. It does not work and it creates competitive disadvantages. Those who use AI tools effectively gain an edge.

Three things are reasonable to start with:

  1. Take an inventory of what exists. Systematically ask which AI tools are being used in the organisation and for what purpose. Not as an interrogation but as a factual basis. You cannot govern what you do not know about.

  2. Give the IT process a fast track. If the normal route takes three weeks, people will route around it. A simple approval process for low-risk AI tools, where the answer comes back in a day or two, removes the strongest reason for bypassing the system.

  3. Decide who owns the issue. When something goes wrong with a Shadow AI system, who is responsible? That must be decided in advance, not during the crisis.
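The three steps above can start as something as simple as a shared register with a clear rule for the fast track. A minimal sketch in Python, where all field names and the eligibility rule are illustrative assumptions, not part of any standard or product:

```python
from dataclasses import dataclass

# Hypothetical inventory record for step 1: what exists, who owns it,
# and what data it touches. Field names are illustrative only.
@dataclass
class AIToolEntry:
    name: str
    owner: str              # step 3: who is accountable if something goes wrong
    purpose: str
    data_categories: list   # e.g. ["customer data", "internal metrics"]
    account_type: str       # "corporate" or "personal"
    approved: bool

def eligible_for_fast_track(entry: AIToolEntry) -> bool:
    """Step 2: an example rule for the quick (one-to-two-day) approval route.
    Here: no customer data, a corporate account, and not yet approved."""
    handles_customer_data = "customer data" in entry.data_categories
    return (not handles_customer_data
            and entry.account_type == "corporate"
            and not entry.approved)

inventory = [
    AIToolEntry("invoice-classifier", "finance", "sort supplier invoices",
                ["internal metrics"], "corporate", approved=False),
    AIToolEntry("churn-analysis", "marketing", "analyse customer churn",
                ["customer data"], "personal", approved=False),
]

for tool in inventory:
    if eligible_for_fast_track(tool):
        print(f"{tool.name}: eligible for fast-track review")
    else:
        print(f"{tool.name}: needs full review")
```

The point is not the code but the discipline: the moment a tool touches customer data or runs on a personal account, it falls out of the fast track and into the full review, with a named owner either way.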

ISO 42001 is a standard for AI governance that addresses precisely these questions: what is being used, by whom, with which data, and who is accountable if something goes wrong. It gives you a framework for asking the right questions systematically.

If you want to see how AmpliFlow handles these questions in practice: book a walkthrough.


This article is a spinoff from Why are we paying for software that AI can build for free?. The parent article goes deeper into the build-versus-buy calculation and the financial chaos AI demos have triggered on the stock market.

Footnotes

  1. Retool, “The build vs. buy shift: how vibe coding and shadow IT have reshaped enterprise software”, February 2026. Survey of 817 respondents, Retool customers and builders; sample bias towards those already building their own systems. Source.

  2. Kate Park, “Samsung bans use of generative AI tools like ChatGPT after April internal data leak”, TechCrunch, 2 May 2023. Source.

  3. Anthropic, Commercial Terms of Service (effective 17 June 2025). Anthropic’s commercial services sit under separate business terms. Source.

  4. Anthropic, Data Processing Addendum (effective 24 February 2025). The DPA forms part of Anthropic’s commercial terms. Source.

  5. OpenAI, OpenAI Services Agreement (effective 1 January 2026). The agreement covers ChatGPT Business, ChatGPT Enterprise, and other business services. Source.

  6. OpenAI, OpenAI Data Processing Addendum (effective 1 January 2026). The DPA is incorporated into the OpenAI Services Agreement. Source.

  7. Anthropic, Privacy Policy (effective 12 January 2026). The policy applies where Anthropic acts as controller, for example in consumer use, and says it does not apply where an employer has provisioned a Claude for Work account. Source.

Related articles

Saved passwords in the browser, a bigger risk than many think

AI hiring bias: your hiring AI may be screening for itself

The more afraid of AI people are, the more they use it