I have been waiting for this story. Not the specific story — I could not have told you the platforms or the numbers — but the shape of it, which has been visible from a long way off. On 7 May, WIRED published research from RedAccess showing more than 5,000 vibe-coded applications sitting on the open web with no authentication. Close to 2,000 of them were actively leaking sensitive data — hospital staffing rosters with doctor names and phone numbers, marketing strategy decks, customer chat histories, financial records, shipping manifests. Some allowed an anonymous visitor to elevate themselves to administrator and remove the legitimate owners. The platforms named are Lovable, Replit, Base44 and Netlify, and the discovery method was straightforward search-engine dorks.
This is the 2017 Amazon S3 bucket era with a new coat of paint. The technology has changed; the failure mode has not.
AI is an amplifier of intelligence
I have written about this before in different contexts, and I will keep writing about it because the framing matters. Artificial intelligence is an amplifier of intelligence. It does not generate it. If a thoughtful, careful engineer prompts an AI coding tool, the output is amplified thoughtful, careful engineering — faster, more verbose, occasionally more elegant. If a hurried marketing executive prompts it under time pressure to "build me a customer feedback form by Friday", the output is amplified haste. The AI has nothing to amplify beyond what it was given.
Crap in, crap out. The phrase pre-dates this technology by decades, and it has never been more applicable. The platforms are not stupid; their engineering is in many places genuinely impressive. The problem is that the people prompting them are being asked to perform a role — application designer, security architect, data steward — that they have not been trained for and have not been told they are now performing. Then, with predictable consequences, they do not perform it.
What the boardroom needs to hear
If you sit on a board, particularly as a non-executive director with risk or audit responsibilities, here is the conversation to have with your executive team this quarter. I would put it on the agenda before the half-year close.
- Do we know what our staff are building with AI coding tools? Not "do we have a policy". Do we have evidence — a list of applications, owners, data classifications and access controls — that reflects what is actually deployed? If the answer is "we have a policy", the real answer is no.
- Where does that evidence come from? If it is self-reporting only, it is incomplete. The whole point of the WIRED finding is that the people doing the building do not know they have built anything risky. They have no incentive to declare it and frequently no awareness that it needs declaring.
- Who is accountable for shadow development as a category? Not just shadow IT in the traditional sense of unsanctioned SaaS. Shadow development is what we are looking at — production applications built by people whose job description does not include "application developer".
- What is our regulator exposure? Under UK GDPR Articles 5(1)(f), 25 and 32, the controller is responsible for security of processing regardless of who built the system. The ICO's interest will not waver because the application was generated by AI.
- What is the incident response plan if this happens to us? If a journalist contacts the company on a Friday evening to say they found a Lovable app on your subdomain leaking five years of customer enquiries, who picks up the phone, what do they say, and how quickly do you contain it?
If the executive team cannot answer those five questions cleanly, the gap is the answer. It is not a technical gap; it is a governance gap. Governance gaps are board-level findings.
The platforms' response is structurally wrong
The vendors named have all responded along the same axis: this is the user's choice, the user's responsibility, our defaults are documented, our terms are clear. They are not lying. They are also not being honest about the design decision that produced this outcome.
When a platform's default publishing mode is open-to-the-internet — and when the path to a private, authenticated deployment requires the user to know that such a path exists, to know that they want it, and to know how to ask for it — then the platform has chosen the defaults that produce the headlines. The platform is then surprised, publicly, by the predictable consequence of its own choices. We have lived through this exact film before with cloud storage, with default SNMP community strings, with anonymous FTP, with open Elasticsearch. The lesson each time has been the same: insecure defaults produce insecure deployments, at scale, and the burden of correction cannot reasonably be placed on the least technical user in the chain.
I would expect, over the next eighteen months, regulator pressure or class-action litigation to push the platforms toward authenticated-by-default deployments with an opt-in for public publishing. That would be the right outcome. It will not arrive in time to help anyone currently exposed.
What I would do in the chair
If I were the executive responsible for this in a mid-sized organisation, my next ninety days would look like this.
Weeks one to two. Send a short, plainly worded all-staff communication. Explain what vibe coding is, why the WIRED article matters, and that anyone who has built something with an AI coding tool in the last twelve months should email a named person — not their line manager, a named person — with the URL and a one-line description. Make clear that nobody is in trouble. The objective is visibility, not enforcement.
Weeks three to four. Run an external attack-surface discovery against your own brand and known subdomains. The same dorks RedAccess used will find your exposures too. Hand the output to a security partner you trust — disclosure: my firm Hedgehog Security does this work, and so do others — and triage the findings within the week.
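If you want a feel for how little effort the discovery side takes, here is a minimal sketch in Python of one common technique: enumerating your own subdomains from certificate-transparency logs via crt.sh and noting which answer over HTTPS without challenge. This is illustrative only, not the dorks RedAccess used; the domain is a placeholder, and a real engagement goes considerably further than a front-page probe.

```python
# Minimal sketch: pull hostnames seen in certificate-transparency
# logs for your own domain (via crt.sh), then check which respond.
# Illustrative only -- run it against a domain you own, and expect
# both false positives and hosts that need a proper manual look.
import json
import urllib.request

DOMAIN = "example.com"  # placeholder: replace with your own domain


def ct_subdomains(domain: str) -> set[str]:
    """Collect hostnames recorded in CT logs for the domain."""
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.load(resp)
    names = set()
    for entry in entries:
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.")
            if name.endswith(domain):
                names.add(name)
    return names


def probe(host: str) -> str:
    """Fetch the front page; anything answering with no auth is a lead."""
    try:
        with urllib.request.urlopen(f"https://{host}/", timeout=10) as resp:
            return f"{host}: HTTP {resp.status}"
    except Exception as exc:
        return f"{host}: unreachable ({exc.__class__.__name__})"


if __name__ == "__main__":
    for host in sorted(ct_subdomains(DOMAIN)):
        print(probe(host))
```

Anything that script surfaces which nobody in IT recognises is exactly the shadow development the board questions above are asking about.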
Weeks five to eight. Publish a short, useful policy. Not a fifty-page document; a one-page policy that covers three things: declare what you are building, do not publish internal data to the open web, and if you must publish to the web, use an authenticated deployment with multi-factor authentication. Pair the policy with a five-minute internal video explaining why.
Weeks nine to twelve. Add vibe-coded applications to the asset register. Make sure they appear in the next penetration test scope. Update the data protection impact assessment process to include AI-generated applications as a recognised category. Brief the board.
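To make the asset-register step concrete, here is a minimal sketch of what one entry might record. The field names and example values are illustrative assumptions, not a standard; the point is that the register captures exactly the evidence the boardroom questions earlier in this piece ask for — application, owner, data classification and access control.

```python
# A minimal sketch of a shadow-development asset register entry.
# Field names and example values are illustrative, not a standard.
from dataclasses import dataclass
from datetime import date


@dataclass
class VibeCodedApp:
    url: str                   # where it is actually deployed
    owner: str                 # a named person, not a team alias
    platform: str              # e.g. Lovable, Replit, Base44, Netlify
    data_classification: str   # e.g. "public", "internal", "personal data"
    authentication: str        # e.g. "none", "password", "SSO + MFA"
    declared_on: date          # when it entered the register
    in_pentest_scope: bool = False
    dpia_completed: bool = False


# A hypothetical entry -- the kind of finding the discovery exercise surfaces:
feedback_form = VibeCodedApp(
    url="https://feedback.example.com",  # placeholder
    owner="J. Doe, Marketing",
    platform="Lovable",
    data_classification="personal data",
    authentication="none",               # the exposure, not the target state
    declared_on=date(2025, 5, 12),
)
```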
None of this is expensive. All of it is governance work. The organisations that do it now will look prudent in twelve months; the organisations that do not will look like the next case study.
Further reading
I have written about this from several angles for the audiences I serve. If you are responsible for a small or medium business, the plain-English version is over on SOC in a Box. If you run a security or technology function and want the controls and regulatory mapping, the enterprise CISO view is on Cyber Defence, with a Spanish translation. And if you are interested in how an offensive testing team would actually find these in your estate, Hedgehog Security has the pen-tester's walkthrough.
For the regulatory anchor, the ICO's security guidance under UK GDPR is the right starting point, and the NCSC's secure development collection remains the best free resource on what good looks like.
One closing thought
The Gartner figure that gets quoted in coverage of this story is that 80 per cent of business users will be building their own applications by the end of 2026. Take that figure with whatever pinch of salt you like, but the direction of travel is real. The technology is not going to retreat. The honest question for boards is not "how do we stop this" — you will not — but "how do we govern this". The organisations that answer that question deliberately and unhurriedly will be fine. The ones that wait for the regulator to ask it for them will not enjoy the experience.
AI amplifies what is there. Put governance in front of it, and you amplify governance. Put haste in front of it, and you amplify haste. The choice has always been ours, and it still is.