Why do we pay for software AI can build for free?

Your CFO has seen the AI demos. The real question is not 'should we build our own?' but 'what have people already built without us knowing?'


Your CFO has seen the demos. Microsoft’s Copilot creating financial dashboards in Excel without anyone writing a formula. Anthropic’s Claude Cowork asking “Let’s knock something off your list” and then sorting files, building spreadsheets from receipts, or writing reports while you do something else. Someone on LinkedIn who built a CRM in an afternoon (see glossary: vibe coding). It looks like every piece of software you’re paying for can be replaced by AI, for free, in an afternoon.

Image: Anthropic’s Claude Cowork start page (claude.com/product/cowork): “Let’s knock something off your list”, with buttons for creating files, analysing data, building prototypes and organising folders. This is what your employees see when they open the tool.

Image: Microsoft’s Copilot Agent Mode in Excel (microsoft.com/microsoft-365-copilot): the AI automatically builds a complete financial dashboard with insight analysis, trends and charts.

The market took it literally. In February 2026, global tech stocks crashed and traders on Wall Street called it the “SaaSpocalypse”.1

The trigger was Anthropic releasing eleven plugins for Claude Cowork. Not vague categories, but ready-made workflows.2

The legal plugin reviews contracts clause by clause and flags deviations. The finance plugin reconciles bank accounts and generates SOX audit documentation. Sales summarises calls and builds pipeline forecasts. Support triages tickets and turns resolved issues into knowledge base articles.

All eleven free, open source, plugging straight into Salesforce, HubSpot, Jira, Slack, Snowflake, Excel. Not a chatbot. An AI that produces finished deliverables.

And that’s what Anthropic wants you to believe it is. Read their own help pages a bit further and the message is different: avoid giving Cowork access to sensitive files.3 In their security documentation they’re even more direct: the tool is not suitable for regulated industries.4 Not HIPAA. Not FedRAMP. Not for medical data, financial data, or sensitive business information. That is the manufacturer writing it, in the official documentation, about the tool that just triggered a stock market crash.

Investors saw demos and did the maths: if an AI can do what Thomson Reuters’ legal tools do, why pay for Thomson Reuters? Thomson Reuters’ stock dropped 16 per cent in a day. LegalZoom fell 20 per cent. Salesforce, the world’s largest CRM vendor, was down 29 per cent. Atlassian, which makes Jira and Confluence, had fallen 50 per cent since the start of the year.5 In total, roughly 285 billion dollars evaporated from software stocks in a few days.

Two weeks later Thomson Reuters rose 7 per cent. They had beaten analyst expectations, raised the dividend by 10 per cent, and announced investments in their own AI solutions.6 The market sold first and asked questions later.

And then came the next wave. On 20 February 2026 Anthropic launched Claude Code Security, a tool that automatically scans source code for security vulnerabilities.7 Not with fixed rules like traditional scanners, but with a language model that reads code roughly the way a security researcher does. CrowdStrike and Cloudflare each fell 8 to 10 per cent. Global X Cybersecurity ETF, which tracks the companies that sell security auditing, dropped close to 9 per cent. Dennis Dick at Triple D Trading called it a “mini-flash-crash” in Bloomberg.

The market is not afraid of AI in general. It is pricing in something specific: that companies will build their own tools with AI instead of paying license fees. Why pay Atlassian for Jira if a team can vibe-code its own project tracker in a week? Why pay Salesforce if an AI builds a CRM in an afternoon? That is the bet Wall Street is making. Not whether AI works, but whether the customers leave.

The question landing on your desk now: why are we paying tens of thousands per year for software when AI can do it for free and the market has already priced it in?

The short answer is that for some software, the premise holds: building with AI is the right call. For other software, the equation doesn’t add up. And there’s a third question most people miss entirely.

When building is actually the right answer

AI can build working software fast. That’s not hype.

Gergely Orosz, editor of one of the most-read newsletters in tech, replaced a tool he’d been paying $120 a year for. It took 20 minutes.8 It worked. An Australian tech company, with plenty of developers of its own, built a CRM-like system in ten days with AI, saving tens of thousands of dollars a month.9

For simple, standalone tools without sensitive data and without compliance requirements, building can be the right decision. An internal report that otherwise runs manually. A calculation that lives in Excel and nobody owns. Here the cost to build is lower than the cost to buy, and complexity is low enough to manage without a dedicated engineering team. This is giving individual employees or teams superpowers.

When the equation looks better than it is

The problem is that every demo looks the same regardless of what’s actually being built.

A founder who documented seven months of AI-assisted software development describes it:

“Pure AI coding gets you maybe 60% there.”10

The remaining 40 per cent is database problems that surface with more users, security vulnerabilities that don’t show until they’re exploited, and integrations that break when an external API changes (see glossary: API).

Early research on AI-generated code points to a clear pattern: it carries exactly these hidden flaws (see glossary: security vulnerabilities). A review of seven AI-built prototypes found 970 such issues, of which 801 were classified as serious.11 Veracode, which has analysed AI-generated code at scale, found that nearly half contained vulnerabilities.12 The researchers behind the vibe coding study recommend treating AI-generated code as a first draft that must be reviewed and tested by competent people before running in production.13
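To make the abstract concrete: here is an illustrative sketch (our own, not taken from the cited studies) of one of the most common classes of flaw such scans flag. The vulnerable function builds an SQL query by pasting user input into a string, which works fine in every demo; the safe version uses a parameterised query. The table and function names are invented for the example.

```python
import sqlite3

# A tiny in-memory database standing in for a real customer system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_vulnerable(name: str):
    # Looks correct, and works for every normal input the builder tries.
    # But the input is spliced straight into the SQL text.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # The '?' placeholder lets the database driver handle the input
    # as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# Normal input: both functions behave identically.
print(find_user_vulnerable("alice"))  # [('alice',)]
print(find_user_safe("alice"))        # [('alice',)]

# Crafted input: the vulnerable version leaks every row.
payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # [('alice',), ('bob',)]
print(find_user_safe(payload))        # []
```

The point is not this particular bug but its shape: the flaw is invisible in normal use, so the person who built the tool has no reason to suspect anything until someone hostile finds it.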

That’s advice for programmers. But the person building is not always a programmer.

The problem is not the cost of one system. It’s what happens when you have five, or 50.

Every AI-built tool that reaches production needs someone who understands it, debugs it when it breaks, updates it when an integration changes, and knows what happens with customer data. That’s not a full-time job for one tool. But for five tools it’s starting to look like one. And that person is probably the same person who built everything, now carrying responsibility for holding together an internal collection of tools nobody planned for.

According to Unionen’s salary statistics for 2024, a junior systems developer costs roughly SEK 750,000 per year when employer contributions and overheads are included.14 But cost is not the worst part. The worst part is that when three things break at the same time, and they will, that person quickly finds a new job.

Anish Acharya at venture capital firm Andreessen Horowitz writes: “Software value compounds; content value decays.”15 A neat summary of why an established system is hard to copy.

The question you didn’t ask

Here is what most CFOs don’t think about when they see the demo.

An industry survey from a leading no-code vendor (read the numbers with its sample bias in mind) asked hundreds of developers and business developers about their AI habits in early 2026. 60 per cent of those who had built something with AI had done so outside IT’s control.16 35 per cent of organisations had already replaced at least one purchased application with a custom build.

These are not interns experimenting. 64 per cent of those who built outside official channels were managers or above. They went around IT because it was faster (31 per cent), because existing software didn’t do enough (25 per cent), because the IT process was too slow (18 per cent), because tools didn’t integrate with each other (10 per cent), or because IT lacked capacity (10 per cent).

The question your CFO is asking is “should we build instead of buy?” That question assumes you control what gets built. You might not. Someone in finance may have already built something with customer data that nobody else knows about. That’s called shadow IT (see glossary). It’s not new. It’s just much easier to create now.

Six months after the tool was built, it’s in production, connected to real systems, with customer data, and without anyone in IT knowing about it. The person who built it may have left. IT can’t stop software they don’t know exists.

And then there’s the security problem

There are more problems than IT not knowing what’s running.

Exfiltration - how easily AI is hacked to leak data

Security researchers showed in January 2026 that Claude Cowork could be tricked into sending all files in a folder to an attacker without the user noticing.17 This happened with the exact tool whose manufacturer, as we’ve seen, already advised against using it with sensitive files.

Personal accounts vs business accounts

An employee with a personal $20 subscription is not connecting their private Claude session under your data processing agreement. They’re connecting your data to Anthropic’s servers on consumer terms. That’s not just wording. It’s a legal shift.

It gets worse. With a personal account, Anthropic trains new versions of their models on what the employee writes (or gives access to), unless the user turns it off in account settings.18 Data is stored on servers in the US. The transfer happens via EU standard contractual clauses, not via the EU-US Data Privacy Framework, which Anthropic is not certified under.19 Anthropic has SOC 2 Type II, ISO 27001, and annual independent security audits, but that doesn’t help you: with a personal account you have no contractual right to see the audit reports or demand anything. Those certifications protect enterprise customers who have a DPA. Not the employee who paid with their personal card.

GDPR is still a law

This is a GDPR problem. Not theoretical, but concrete. Your organisation is probably still the data controller for the customer data or business data passing through the session. It’s your data, your records, your customers. But you have no data processing agreement with the party processing it.20 You can’t audit. You can’t stop onward sharing. And the data trains the model unless someone turned it off.

ISO 42001 - management system for AI

ISO 42001 is an international standard for AI management systems, published in 2023. It exists for exactly this scenario. Not to stop employees from using AI, but to give the organisation control: what is being used, by whom, with what data, and who is responsible if something goes wrong?

The standard’s controls hit shadow AI directly. Control A.2.2 requires a documented policy for how AI systems may be used. Control A.3.2 requires that roles and responsibilities are defined, that there is a person who owns the issue when something goes wrong. Control A.5.2 requires an impact assessment: who is affected if this tool processes customer data and something fails? Control A.9.2 requires approval processes for responsible use. Control A.10.3 requires you to evaluate AI vendors the same way you evaluate other vendors, because OpenAI and Anthropic are your vendors when your employees use their tools with your data.21

The same logic as ISO 9001 for quality or ISO 27001 for information security. Not bureaucracy for the sake of bureaucracy.

The EU AI Act already applies

The legal reason is bigger than most people think.

The EU AI Act, Article 4, requires all organisations that use AI systems to ensure that relevant personnel have sufficient AI competence. This has been in effect since 2 February 2025. But Article 4 is only the first provision. On 2 August 2026 the rest takes effect.22

If an employee vibe-codes a tool that prioritises job applications, scores customers for creditworthiness, or automates decisions about which tickets to escalate, your organisation has (depending on industry) deployed a high-risk AI system. This is not a theoretical classification. Annex III of the regulation lists the areas where AI systems count as high-risk: employment, credit assessment, access to public services. It doesn’t matter that the tool was built in an afternoon by someone without a technical background.

With a high-risk system come obligations. Article 26 requires human oversight by competent persons. It requires that employees are informed they are subject to AI. It requires a data protection impact assessment under GDPR. Article 25 goes further: if you take a general-purpose AI system like ChatGPT and change its intended purpose so that it becomes a high-risk system, you become a provider under the regulation, with requirements for technical documentation, CE marking, and quality management.

And if an employee publishes AI-generated text externally without disclosing that it’s AI-generated? Article 50 requires transparency for AI-generated text published in the public interest, unless a human has reviewed it and taken editorial responsibility. Shadow IT makes that review impossible.

Fines follow the same scale as GDPR: up to EUR 15 million or 3 per cent of global turnover for violations of provider and deployer obligations. For SMEs, the lower of the amount or percentage applies.

ISO 42001, like other ISO standards, is voluntary. The EU AI Act is binding legislation. They are not the same thing. But ISO 42001 covers every obligation just listed. Control A.5.2 (impact assessment) maps to Article 26’s DPIA requirement. Control A.9.4 (intended use) maps to the requirement to use AI systems according to instructions. Control A.8.2 (information to users) maps to the transparency requirements in Article 50. The standard gives you not just AI competence, but a structured way to meet the rules that take effect in six months.

Should you build your own with AI or buy? It depends on what it is. But start by finding out what people have already built without you knowing. That’s the question that should keep your CFO up at night.

If you want to see how AmpliFlow handles this in a management system: book a walkthrough.


Glossary

Vibe coding - Describing what you want in plain text to an AI tool, which then writes the code. No programming knowledge is needed to get started, but it’s not enough to keep what you build running.

API (Application Programming Interface) - What lets two programs talk to each other. When your accounting software fetches payments from your bank, it does it via an API. If the API changes and nobody updates the code, it stops working.

Shadow IT - Software or tools used within a company without IT knowing about or approving it. The problem is old. What’s new is how little effort it takes to create such tools today.

Security vulnerabilities - Weaknesses in code that make it possible for outsiders to access data they shouldn’t have access to. Often arise without the builder knowing about them, and don’t show until they’re exploited.

ISO 42001 - An international standard for how companies should govern and control their use of AI. Published by ISO in 2023. Provides a framework for who is responsible for what when AI is used in the business.


This article was written in February 2026, in the middle of one of the fastest technology shifts in recent memory. Much of what we write about here may look different a year from now. We’re curious to see how it holds up.

Footnotes

  1. Jonathan Barrett, “Is the share market headed toward a ‘SaaS-pocalypse’?”, The Guardian, 20 February 2026. The term “SaaSpocalypse” was coined by analysts at investment bank Jefferies. Source.

  2. Anthropic, “Cowork plugins”, 30 January 2026. Eleven plugins published as open source. Source. Source code: GitHub.

  3. Anthropic, “Use Cowork safely”, Claude Help Center (2026). Verbatim: “Avoid granting access to local files with sensitive information, like financial documents.” Source.

  4. Anthropic, “Use Cowork safely”, Claude Help Center (2026). Verbatim: “Cowork activity is not captured in audit logs, Compliance API, or data exports. Do not use Cowork for regulated workloads.” Source. See also: Using Agents According to Our Usage Policy.

  5. John Furrier, “The SaaSpocalypse mispricing: Why markets are getting the AI-software shakeout wrong”, SiliconAngle, 10 February 2026. Thomson Reuters -16%, RELX -13%, LegalZoom -20% on the plugin day. Source.

  6. Tim Bohen, “TRI Stock Surges Amid AI Growth and Strategic Upgrades”, StocksToTrade, 24 February 2026. TRI +6.97% after quarterly report beating expectations (EPS $1.07, revenue $2.01 billion), $1 billion share buyback, 10% dividend increase, RBC Capital upgraded to “Outperform”. Source.

  7. Jakob Steinschaden, “Anthropic’s Claude Code Security Triggers Flash Crash in Cybersecurity Stocks”, Trending Topics, 23 February 2026. CrowdStrike -8 to 10%, Cloudflare -8 to 10%, Global X Cybersecurity ETF -9%. Dennis Dick (Triple D Trading) quoted in Bloomberg: “mini-flash-crash.” The tool launched 20 February 2026 and is in limited preview for Enterprise and Team customers and open source maintainers. Source.

  8. Gergely Orosz, “I replaced a $120/year micro-SaaS in 20 minutes with LLM-generated code”, The Pragmatic Engineer, 2026. Source.

  9. Hypergen, “SaaS: The Build vs Buy Equation Just Changed”, 2025. Source.

  10. Reddit, r/EntrepreneurRideAlong, “7 months of vibe coding a SaaS and here’s what I learned”, 2025. Source.

  11. M Waseem, A Ahmad, KK Kemell, J Rasku, “Vibe Coding in Practice: Flow, Technical Debt, and Guidelines for Sustainable Use”, arXiv:2512.11922 (preprint, submitted to IEEE Software), December 2025. Security figures based on scanning 7 of their own MVPs, an illustrative sample. Source.

  12. Veracode, cited in Waseem et al. arXiv:2512.11922. Original report: Veracode State of Software Security Report.

  13. M Waseem et al., arXiv:2512.11922. Verbatim: “Based on our experience, we recommend treating AI-generated code as a first draft that must pass strict automated gates such as linting, type checks, and tests before merging.” Experience-based recommendation, not a measured result.

  14. Unionen market salaries 2024, systems developer, entry salary SEK 36,000-45,000/month. Source. Total cost SEK 750,000/year: Unionen’s salary figure plus employer contributions (31.42%) plus estimated overheads (holiday, pension, equipment), calculation based on Unionen’s published data.

  15. Anish Acharya, “Software’s YouTube moment is happening now”, Andreessen Horowitz, 2026. Source.

  16. Retool, “The build vs. buy shift: how vibe coding and shadow IT have reshaped enterprise software”, February 2026. Survey of 817 respondents, Retool customers and builders, sample bias towards those who already build their own systems. Source.

  17. Prompt Armor, “Claude Cowork Exfiltrates Files”, 14 January 2026. Source. See also: Simon Willison, simonwillison.net/2026/Jan/14/claude-cowork-exfiltrates-files/.

  18. Anthropic, Privacy Policy (effective 12 January 2026), section 2: consumer accounts (Free/Pro) are used for model training by default. Users can turn this off in account settings. Business accounts (API, Team, Enterprise) are never trained on customer data per Commercial Terms section B. Source.

  19. Anthropic, Data Processing Addendum (effective 24 February 2025). SOC 2 Type II, ISO/IEC 27001, ISO/IEC 27017, ISO/IEC 27018, CSA STAR. Annual independent security audit (DPA Schedule 2, section D.2). Annual penetration test by external assessors (DPA Schedule 2, section G.2). Audit rights for customers with DPA (DPA section F). Source. Certifications listed at claude.com/regional-compliance.

  20. Regulation (EU) 2016/679 of the European Parliament and of the Council (GDPR), Article 28: processing by a processor shall be governed by a contract or other legal act. Without such an agreement, the legal basis for a third party to process personal data on behalf of the controller is missing.

  21. ISO/IEC 42001:2023, Annex A (normative), controls A.2.2 (AI policy), A.3.2 (roles and responsibilities), A.5.2 (impact assessment), A.8.2 (information to users), A.9.2 (responsible use), A.9.4 (intended use), and A.10.3 (suppliers).

  22. Regulation (EU) 2024/1689 of the European Parliament and of the Council on artificial intelligence (EU AI Act). Articles cited: 4 (AI literacy, in effect since 2 Feb 2025), 25(1)(c) (changing intended use makes you a provider), 26 (deployer obligations for high-risk: human oversight, inform employees, DPIA), 50(4) (transparency for AI-generated text), 99(4) and 99(6) (fines up to EUR 15M / 3% of turnover, lower for SMEs). Annex III lists high-risk areas: employment, credit scoring, public services. Articles 25, 26, 50 apply from 2 Aug 2026. Full text, OJ L 2024/1689.

Related articles

Most companies have a management system. The problem is it doesn't manage anything.

EU AI Act and ISO 42001: how they connect

Management Review in ISO Standards: How to Do It Right