Shadow AI: managers build tools IT doesn't know about

What used to be called Shadow IT was straightforward. A few employees installed Dropbox without permission; IT found it, shut it down, and everyone moved on. The problem was easy to spot and easy to stop.

Shadow AI in 2026 works differently.

A finance assistant with a ChatGPT or Claude account can build a functioning tool in an afternoon. No IT ticket. No approval. No credentials to save. Six months later the tool is in production, connected to real systems, handling customer data, and nobody in IT knows it exists.

And the person who built it is often not the intern. According to a survey by vendor Retool, which asked hundreds of developers and business builders about their AI habits, 60 percent of those who had built something with AI had done so outside IT’s control.1 Of those, 64 percent were managers or above.

The technology is not the problem. The problem is that it has become so easy to use that governance cannot keep up.

Why does it happen?

The answer is not that employees are careless or acting in bad faith. They are making rational choices.

The reasons are clear: 31 percent bypassed IT because it was faster, 25 percent because the existing tools were inadequate, 18 percent because the IT process felt too slow, and 10 percent because IT lacked capacity altogether.1

This is not sabotage. It is someone in marketing who needs to analyse customer data by Friday and knows an IT request takes three weeks. So they open Claude.com, upload their Excel file, and run the analysis.

Rational. And that is precisely the problem.

What makes Shadow AI more dangerous than old Shadow IT?

Old Shadow IT was passive. An employee installed a program and used it. It was a tool with a fixed purpose, created by a company with a privacy policy.

Shadow AI is active. The tool can generate code, connect to APIs, process data autonomously, and produce new outputs. The employee is not just building a tool for themselves. They are building a system that others in the company may start using, that connects to customer data, and that nobody planned for.

Samsung has security experts that most Swedish SMEs will never hire. In April 2023, some of them pasted confidential source code into ChatGPT during debugging work.2 No malicious intent. The data went to OpenAI’s servers regardless. Samsung’s response was not training. It was to block AI tools on all devices.

That is the point: competence does not protect you. Governance does.

The GDPR consequence nobody talks about

When an employee uses a personal AI account to process company data, a legal problem arises.

With a corporate account at Claude or ChatGPT, a data processing agreement exists. You are the data controller. The provider processes data on your terms, and you can audit and demand deletion.

With a personal $20 account, the provider decides. Anthropic’s privacy policy is explicit: for personal consumer accounts, Anthropic is the data controller, not your organisation.3 You are not a party to the agreement. You cannot audit. You do not know which country the server is in.

Your organisation is likely still responsible for the customer data that passed through that session. But you have no agreement with the party that processed it.

Note that the absence of a data processing agreement does not mean your organisation escapes liability. It means you have lost control. GDPR’s requirements for appropriate technical and organisational measures (Articles 5 and 24) apply to you as data controller regardless of whether your employees use approved or unapproved tools. The difference is that with a corporate account you can demonstrate compliance. With a personal account, you cannot.

We have written a deeper look at exactly this in GDPR and AI tools: what happens when employees use personal accounts.

The person who built it may have left

There is one more risk that is invisible until it is too late.

The tool built in an afternoon may now be used daily by six people in the company. They know it works. They do not know how.

The person who built it may have left.

The next time it breaks (when an external integration is updated, when the data format changes, when someone asks how customer data is handled), nobody can fix it. IT cannot help; they did not know it existed. The original builder is gone.

This is not a hypothetical scenario. It is what happens when infrastructure is built by one person without anyone else documenting, participating in, or approving it.

How to build an AI policy and regain control

Banning AI is the wrong answer. It does not work and it creates competitive disadvantages. Those who use AI tools effectively gain an edge.

Three things are reasonable to start with:

  1. Take an inventory of what exists. Systematically ask which AI tools are being used in the organisation and for what purpose. Not as an interrogation but as a factual basis. You cannot govern what you do not know about. (A sketch of what such a register might record follows this list.)

  2. Give the IT process a fast track. If the normal route takes three weeks, people will route around it. A simple approval process for low-risk AI tools, where the answer comes back in a day or two, removes the strongest reason for bypassing the system.

  3. Decide who owns the issue. When something goes wrong with a Shadow AI system, who is responsible? That must be decided in advance, not during the crisis.
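
To make the steps concrete, here is a minimal sketch in Python of what a register entry (step 1) and a fast-track triage rule (step 2) could look like. Every field name and threshold here is an illustrative assumption, not taken from ISO 42001 or from any particular tool.

```python
# A minimal sketch of an AI tool register. All fields and thresholds are
# illustrative assumptions -- adapt them to your own risk appetite.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolEntry:
    """One row in the AI tool inventory (step 1)."""
    name: str                      # e.g. "customer churn analysis script"
    owner: str                     # person accountable today (step 3)
    built_by: str                  # original builder, who may have left
    purpose: str                   # what the tool is actually used for
    account_type: str              # "corporate" or "personal"
    data_categories: list[str] = field(default_factory=list)  # e.g. ["customer"]
    connects_to: list[str] = field(default_factory=list)      # APIs, internal systems
    last_reviewed: date | None = None                         # None = never reviewed

def fast_track_eligible(entry: AIToolEntry) -> bool:
    """Step 2: a crude triage rule. Low-risk requests get an answer in a
    day or two; everything else goes through the normal process."""
    handles_personal_data = any(
        c in entry.data_categories for c in ("customer", "employee", "health")
    )
    return (
        entry.account_type == "corporate"  # a data processing agreement exists
        and not handles_personal_data      # no GDPR-sensitive categories
        and not entry.connects_to          # no live system integrations
    )
```

Even a spreadsheet with these columns would do. The point is that every tool has a named owner and a recorded data footprint before the day something breaks.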

ISO 42001 is a standard for AI governance that addresses precisely these questions: what is being used, by whom, with which data, and who is accountable if something goes wrong. It gives you a framework for asking the right questions systematically.

If you want to see how AmpliFlow handles these questions in practice: book a walkthrough.


This article is a spinoff from Why are we paying for software that AI can build for free?. The parent article goes deeper into the build-versus-buy calculation and the financial chaos AI demos have triggered on the stock market.

Footnotes

  1. Retool, “The build vs. buy shift: how vibe coding and shadow IT have reshaped enterprise software”, February 2026. Survey of 817 respondents, Retool customers and builders; sample bias towards those already building their own systems. Source.

  2. Kate Park, “Samsung bans use of generative AI tools like ChatGPT after April internal data leak”, TechCrunch, 2 May 2023. Source.

  3. Anthropic, Privacy Policy (effective 12 January 2026). For personal consumer accounts, Anthropic acts as data controller. Source.

Related articles

AI agents and management systems: hype, reality, and what we actually built

What Research Actually Says About AI Hallucinations - And What It Means for HSEQ

GDPR and AI tools: it's easier than you think to break the rules