The democratisation of custom software
Something remarkable is happening across every industry. Franchise owners are asking AI chatbots to build inventory dashboards. Sole traders are generating their own invoicing portals. HR teams at mid-sized firms are spinning up custom onboarding tools in a weekend. Employees at every level — with no formal training in software engineering — are shipping functional applications faster than ever before.
This is not a niche trend. It is a structural shift in how organisations operate. Thanks to conversational AI coding assistants, the barrier to creating bespoke software has collapsed. The result is a wave of internally built tools that would have taken months and a significant budget just a few years ago.
The business case is compelling: faster workflows, lower vendor costs, tools that fit your exact process rather than requiring your process to fit the tool. For small and mid-sized businesses in particular, this feels like a levelling of the playing field.
But there is a serious problem that most of these businesses do not discover until it is too late.
AI can write code. It cannot guarantee secure code.
This is not a theoretical concern or a speculative future risk. It is a documented, well-understood limitation that is acknowledged across the cybersecurity industry. AI code assistants are trained to produce functional output — code that does what you asked for. Security, however, is not just about functionality. It is about what your application does when someone tries to break it.
Research published by academic institutions and security firms consistently shows that AI-generated code contains security vulnerabilities at a meaningful rate. The most common gaps are listed below, followed by two short code sketches showing what the fixes look like in practice.
Common AI-generated security gaps
- Client-side authentication guards — protecting pages with JavaScript instead of server-side logic, meaning a user with JavaScript disabled or a basic HTTP client can bypass your login entirely.
- Missing rate limiting — login endpoints that accept unlimited password guesses, making automated brute-force attacks trivial.
- Outdated or vulnerable dependencies — AI assistants draw on training data that may be months or years old, and will confidently recommend library versions with known, publicly documented exploits.
- Exposed session tokens — cookies and tokens stored without appropriate flags (such as HttpOnly), making them readable by any injected JavaScript on the page.
- Sensitive data in client-side code — business logic, pricing formulas, employee data, or API keys embedded in JavaScript files that anyone can read.
- Missing security headers — headers like Content-Security-Policy and X-Frame-Options are almost never included in AI-generated deployments, leaving applications open to cross-site scripting and clickjacking.
- Overly permissive CORS policies — allowing any third-party website to make authenticated requests on behalf of your users.
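To make the first, second and fourth items concrete, here is a minimal sketch of the server-side patterns in a Node/Express application. The stack, the express-rate-limit and cookie-parser middleware, and the verifyAndCreateSession / lookupSession helpers are illustrative assumptions, not a prescription for your project:

```typescript
import express from "express";
import cookieParser from "cookie-parser";
import rateLimit from "express-rate-limit";

// Hypothetical session helpers: swap in your real user/session store.
async function verifyAndCreateSession(user: string, pass: string): Promise<string | null> {
  return null; // placeholder
}
async function lookupSession(id: string | undefined): Promise<object | null> {
  return null; // placeholder
}

const app = express();
app.use(express.json());
app.use(cookieParser());

// Rate limiting: cap login attempts so brute-forcing passwords is no longer free.
const loginLimiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 10 });

app.post("/login", loginLimiter, async (req, res) => {
  const sessionId = await verifyAndCreateSession(req.body.username, req.body.password);
  if (!sessionId) return res.status(401).send("Invalid credentials");

  // HttpOnly + Secure + SameSite: the token never becomes readable
  // by JavaScript injected into the page.
  res.cookie("session", sessionId, { httpOnly: true, secure: true, sameSite: "strict" });
  return res.sendStatus(204);
});

// Server-side guard: the check happens here, on the server, so disabling
// JavaScript or calling the API directly cannot bypass it.
app.get("/dashboard-data", async (req, res) => {
  const session = await lookupSession(req.cookies.session);
  if (!session) return res.status(401).send("Not authenticated");
  return res.json({ ok: true });
});

app.listen(3000);
```

The specifics matter less than the placement: every one of these checks runs on the server, where a visitor cannot switch it off.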
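The security-header and CORS items are similarly cheap to address once you know to ask. A second sketch, again assuming Express and using the widely adopted helmet and cors packages; the allowed origin is a placeholder for your own front-end domain:

```typescript
import express from "express";
import helmet from "helmet";
import cors from "cors";

const app = express();

// helmet sets Content-Security-Policy, X-Frame-Options and related
// security headers to sensible defaults in a single call.
app.use(helmet());

// CORS allow-list: only your own front-end may make credentialed
// requests, instead of the wide-open "*" that AI output often ships.
app.use(cors({ origin: "https://app.example.com", credentials: true }));

app.listen(3000);
```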
What makes these risks particularly dangerous is how they compound. A single vulnerability may be low severity in isolation, but two or three layered together can create a complete account-takeover chain: an attacker uploads a crafted file, a user opens it, and within seconds the attacker has full access to your business data. This is not theoretical. It has happened.
Who is most exposed?
The irony is that the businesses most likely to benefit from AI-built tools are also the most likely to be harmed by their security shortcomings. Consider:
Franchisees and multi-location operators
These operators often build lightweight CRMs or reporting tools to bridge gaps left by franchisor-provided software. The tools frequently handle customer PII, payment data, or operational data across multiple sites, and are almost never security-reviewed.
Self-employed professionals and sole traders
Lawyers, accountants, tradespeople and consultants are increasingly building their own client portals, quoting tools and scheduling systems. These often contain confidential client information with no security review and no incident response plan.
Internal teams at growing companies
Operational, marketing and finance teams are building internal dashboards and automation tools that access company databases, third-party APIs, and employee data. These shadow-IT applications bypass IT governance and security review processes entirely.
Startups moving fast
Early-stage companies using AI to accelerate their MVP often inherit insecure patterns that persist as the product scales, and discover the extent of the problem only when a security-conscious investor, enterprise customer or regulator starts asking questions.
Why AI does not warn you
AI coding assistants are optimised to answer your question. If you ask "build me a login page", you will get a login page. You will not get an unsolicited lecture on the OWASP Top 10, rate-limiting best practice, or the importance of HttpOnly cookies unless you specifically ask for it.
Most users are not asking. They do not know what they do not know. This is not a character flaw — it is simply the reality of non-technical builders working outside their domain of expertise. The AI fills in the gaps, but it fills them with plausible-looking code rather than secure code.
There is also a confidence problem. AI-generated code looks polished and professional. It often follows good structural conventions. It may even pass a surface-level review. But security vulnerabilities are frequently invisible on inspection: they live in what the code does not do, not in obvious errors.
A word on the broader shift
The rise of AI-assisted development is part of a larger transformation in business operations: an era defined by automation, personalisation, and the blurring of the line between technology users and technology builders. This is genuinely positive. Empowering non-technical people to build the tools they need is one of the most significant productivity advances in a generation.
But every wave of democratised technology brings its own risk surface. The democratisation of web publishing brought spam and phishing. The democratisation of mobile apps brought privacy nightmares. The democratisation of AI-built software is bringing a wave of insecure applications into production environments — deployed by people who have no reason to suspect they are operating a security liability.
What responsible AI adoption looks like
None of this means you should stop using AI to build tools for your business. It means you should use it with appropriate caution — and supplement it with the expertise it cannot provide itself.
1. Treat every AI-built app as untested software
Regardless of how professional it looks, any application built with AI assistance should be treated as needing a security review before it handles real customer data, employee data, or financial information.
2. Know what data your tool touches
Map out what information your application stores, processes, or transmits. The higher the sensitivity of that data, the higher the bar for security assurance needs to be.
3. Ask specific security questions of your AI assistant
Prompt your AI to explicitly consider rate limiting, authentication guards, cookie security flags, security headers, and dependency versions. Then verify the answers independently — AI can confidently produce wrong security advice.
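For example, a follow-up prompt along these lines (illustrative wording, not a guaranteed fix) forces the assistant to address each gap explicitly:

```text
Review the application you just generated for security:
1. Is every authentication check enforced server-side, not in client JavaScript?
2. Is the login endpoint rate-limited?
3. Are session cookies set with HttpOnly, Secure, and SameSite flags?
4. Which security headers (Content-Security-Policy, X-Frame-Options) are sent?
5. Is the CORS policy restricted to my own domain?
6. Do any of the dependencies you chose have known vulnerabilities?
```

For the last question in particular, do not take the assistant's word for it: in the Node ecosystem, for instance, running `npm audit` checks your installed dependency versions against a live advisory database.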
4. Get an independent review before going live
A lightweight security assessment of a small application does not need to be expensive or time-consuming. Even a focused review of authentication flows, data exposure, and dependency hygiene can catch the most critical issues before they become incidents.
5. Build an incident response plan
Know what you will do if a breach occurs. Who do you notify? How do you contain it? Do you have obligations under GDPR, CCPA, or sector-specific regulations? Having a plan before an incident occurs is far cheaper than improvising during one.
The bottom line
AI-assisted development is a genuine competitive advantage for businesses that use it thoughtfully. The businesses that will regret it are the ones that move fast without asking the right questions, and that discover too late that the tool they built to save money and time has become their biggest operational liability.
Security is not a feature you add at the end. It is a property you design in from the start. When your developer is an AI, that responsibility does not disappear; it shifts to you.
