Vibe Coding Security: The Complete Guide to Shipping AI-Built Apps Without Getting Breached
By Mr. Ballaz

- Focus: Vibe Coding
- Risk: High
- Stack: Supabase/Next.js
- Detection: Ubserve Runtime Simulation
Vibe coding moves fast. Security incidents move faster. This is the complete guide to vibe coding security best practices for founders shipping with Cursor, Lovable, Bolt, and Supabase in 2026.
Most vibe-coded apps share the same five vulnerabilities. They are introduced by the same AI tools, in the same order, for the same reasons. Here is how to stop them before they cost you.

Vibe coding security is the gap between the app your AI built and the app that is safe to ship. That gap is real, it is consistent, and it follows predictable patterns regardless of which AI tool you use.
In 2026, the majority of security incidents in AI-built apps come from five sources: API keys exposed in frontend bundles, missing or broken Supabase RLS policies, service role credentials in client code, unprotected API routes, and public endpoints with no rate limiting. These are not exotic vulnerabilities. They are the predictable output of AI tools optimizing for working code over secure code.
This guide covers how to close every one of them.
Why Vibe-Coded Apps Have a Security Problem
AI coding tools generate code that runs. That is their job and they do it well. But running code and secure code are not the same thing.
When Cursor, Lovable, Bolt, or Windsurf scaffolds your auth flow, it produces code that authenticates users. What it does not produce:
- Token expiry validation
- Horizontal access control (user A cannot read user B's data)
- Rate limiting on auth endpoints
- Input validation on write endpoints
- Least-privilege database access
These are not bugs in the AI tools. They are the expected output of tools optimizing for fast, working code. The security layer is always a manual responsibility.
The founders who get breached are the ones who assumed the AI handled it.
The 5 Most Common Vibe Coding Security Vulnerabilities
1. API Keys Exposed in Frontend Code
This is the fastest path to a financial incident in any AI-built app.
When you ask Cursor or Bolt to "add OpenAI to the app," the fastest implementation calls the API directly from client code. The key goes into the JavaScript bundle. The bundle is public. Scrapers find key patterns in public bundles within hours of deployment.
```typescript
// What AI tools often generate (in a client component)
import OpenAI from 'openai';

const openai = new OpenAI({
  // API key read from process.env.OPENAI_KEY, which ends up in your client bundle
  apiKey: process.env.OPENAI_KEY,
});

const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: prompt }],
});
```
Fix: All paid API calls go through a server function. The key lives in server environment variables only. The client calls your function, your function calls the provider.
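That fix can be sketched as a Next.js route handler. File path and helper are illustrative: `buildOpenAIRequest` is a hypothetical helper, and the sketch calls OpenAI's public REST endpoint directly so the key never leaves the server:

```typescript
// Prepares the upstream call; the Authorization header is built server-side only.
function buildOpenAIRequest(prompt: string, apiKey: string): { url: string; init: RequestInit } {
  return {
    url: 'https://api.openai.com/v1/chat/completions',
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`, // key stays in server env vars
      },
      body: JSON.stringify({
        model: 'gpt-4',
        messages: [{ role: 'user', content: prompt }],
      }),
    },
  };
}

// app/api/chat/route.ts — the client calls this; this calls the provider.
export async function POST(req: Request) {
  const { prompt } = await req.json();
  const { url, init } = buildOpenAIRequest(prompt, process.env.OPENAI_KEY!);
  const upstream = await fetch(url, init);
  return new Response(upstream.body, { status: upstream.status });
}
```

The browser only ever sees your route's URL; the provider key exists exclusively in the server environment.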
2. Missing or Broken Supabase RLS
Row Level Security is the most misunderstood security control in the Supabase ecosystem.
Founders enable RLS and assume the work is done. But an enabled RLS table with an incorrectly scoped policy is still a data leak. The most common pattern:
```sql
-- This policy compiles and passes your happy-path tests
create policy "users can read their team's data"
on team_documents for select
using (team_id in (select team_id from memberships));
-- Problem: missing auth.uid() means any authenticated user reads all teams
```
The policy works. It just works for every user instead of the right user.
Fix: Every policy must reference auth.uid(). Every policy must be tested with a denied-access case — not just a successful read.
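A corrected version of the policy above, as a sketch (table and column names follow the example; adapt them to your schema):

```sql
create policy "users can read their team's data"
on team_documents for select
using (
  team_id in (
    select team_id from memberships
    where user_id = auth.uid()  -- scope the membership lookup to the caller
  )
);
```

The deny case to test: sign in as a user who belongs to no team and confirm the select returns zero rows.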
3. Service Role Key in Client Code
The Supabase service role key bypasses every RLS policy. That is what it is designed to do — for server-side admin operations.
AI tools scaffold Supabase clients quickly, and the fastest implementation often uses the service role key in a shared utility file. When that file is imported by client components, the key reaches the browser.
```typescript
// lib/supabase.ts, imported everywhere, including client components
export const supabase = createClient(url, process.env.SUPABASE_SERVICE_ROLE_KEY!);
// Every visitor now has admin database access
```
Fix: Separate your Supabase clients. Client components use the anon key. Server-only files use the service role. Add `import 'server-only'` to the admin client file to enforce the boundary at build time.
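The separation can be sketched as two files (file names are illustrative; `createClient` comes from `@supabase/supabase-js`, and `server-only` is the package Next.js uses to fail the build on client-side imports):

```typescript
// lib/supabase-client.ts, safe to import from client components
import { createClient } from '@supabase/supabase-js';

export const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!, // anon key: RLS still applies
);

// lib/supabase-admin.ts, server-only
import 'server-only'; // any client-side import now fails at build time
import { createClient } from '@supabase/supabase-js';

export const supabaseAdmin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!, // bypasses RLS, never ship to browser
);
```

With this split, a refactor that accidentally imports the admin client into a component is caught at build time rather than in production.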
4. Unprotected API Routes and Server Actions
v0, Cursor, and Lovable generate API routes and server actions. They do not add authentication by default.
An unprotected Next.js API route is publicly callable by anyone who knows the URL. In most AI-generated codebases, the URLs are predictable (/api/user, /api/data, /api/admin).
```typescript
// What AI tools often generate
export async function POST(req: Request) {
  const { userId, data } = await req.json();
  await db.insert('records', { userId, data }); // No auth check
}
```
Fix: Every API route and server action that handles user data starts with an auth check. Middleware at the Next.js level enforces this at the routing layer before requests reach handler code.
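The pattern looks like this as a sketch; `verifySession` is a hypothetical stand-in for whatever your auth library provides (Supabase's `getUser`, a JWT verify, etc.):

```typescript
type Session = { userId: string } | null;

// Hypothetical helper: validate the bearer token / session cookie here.
async function verifySession(req: Request): Promise<Session> {
  const token = req.headers.get('authorization')?.replace('Bearer ', '');
  return token ? { userId: 'user-id-from-token' } : null;
}

export async function POST(req: Request) {
  const session = await verifySession(req);
  if (!session) {
    // Fail closed before touching the database
    return new Response('Unauthorized', { status: 401 });
  }
  const { data } = await req.json();
  // Scope the write to session.userId, never to a client-supplied userId:
  // await db.insert('records', { userId: session.userId, data });
  return new Response(JSON.stringify({ ok: true }), {
    status: 200,
    headers: { 'content-type': 'application/json' },
  });
}
```

Note the second fix hiding in the sketch: the user ID comes from the verified session, not from the request body, which closes the horizontal access hole as well.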
5. No Rate Limiting on Any Endpoint
AI-generated apps ship with zero rate limiting by default. Every endpoint — login, AI proxy, data export, password reset — is publicly callable at unlimited speed.
This makes brute-force attacks trivial, enables API key abuse via your own proxy endpoints, and allows scrapers to extract your entire user dataset through pagination.
Fix: Add rate limiting to login, auth, and AI proxy routes at minimum. Use Upstash Redis for distributed rate limiting that works across Vercel Edge, serverless, and any Node.js environment.
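The core mechanism is simple enough to sketch. This is a minimal in-memory fixed-window limiter for illustration only; in-memory state does not survive serverless cold starts or scale across instances, which is exactly why a shared store like Upstash Redis is the production answer:

```typescript
// Returns a function that answers "is this request allowed?" per key
// (key is typically an IP address or user ID).
function createRateLimiter(limit: number, windowMs: number) {
  const hits = new Map<string, { count: number; resetAt: number }>();
  return (key: string, now = Date.now()): boolean => {
    const entry = hits.get(key);
    if (!entry || now >= entry.resetAt) {
      // First request in a fresh window
      hits.set(key, { count: 1, resetAt: now + windowMs });
      return true;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}

// Example: at most 5 login attempts per minute per IP
const allowLogin = createRateLimiter(5, 60_000);
// if (!allowLogin(clientIp)) return new Response('Too Many Requests', { status: 429 });
```

Swapping the `Map` for Redis operations gives you the distributed version; the window-and-count logic stays the same.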
The Vibe Coding Security Checklist (Short Version)
Before every production deploy of an AI-built app:
Secrets:
- No API keys in client-side code or `NEXT_PUBLIC_` vars
- Service role key server-only and not in shared utility files
- Scan built JS bundle for key patterns before deploy
Database:
- RLS enabled on every user-facing table
- Every policy references `auth.uid()`, tested with deny cases
- Private storage buckets verified with unsigned URL 403 test
Auth:
- Auth middleware on every API route and server action
- Token expiry check present — not just session existence
- Horizontal access test: user A cannot read user B's data
Infrastructure:
- Rate limiting on login, auth, and AI proxy routes
- Input validation with Zod on all write endpoints
- CORS explicit allowlist, no wildcard in production
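The "scan built JS bundle" item above can be sketched as a small check run in CI. The patterns here are illustrative, covering a few common provider key shapes; extend the list for the providers you actually use:

```typescript
// Regexes for common secret shapes that should never appear in a client bundle.
const KEY_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9]{20,}/, // OpenAI-style secret keys
  /service_role/,        // Supabase service role marker inside JWTs
  /AKIA[0-9A-Z]{16}/,    // AWS access key IDs
];

// Returns the source of every pattern found in the given bundle text.
function findKeyPatterns(bundleText: string): string[] {
  return KEY_PATTERNS.filter((re) => re.test(bundleText)).map((re) => re.source);
}
```

Run it over each file in your client build output (for Next.js, something like `.next/static/`) and fail the deploy when it returns anything.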
The Tools I Use to Catch What Manual Review Misses
Manual review catches obvious problems. Automated scanning catches the systematic ones — the auth check removed from one route in a refactor, the RLS policy that looks right but uses the wrong comparison, the API key that made it into the client bundle through a transitive import.
Ubserve was built specifically for vibe-coded app security. It scans for every pattern in this guide: key exposure, RLS coverage, service role misuse, unprotected routes, and CORS misconfiguration. Every finding comes with an explanation in plain English and a fix prompt you paste into your AI tool to resolve it.
The free scan covers frontend key exposure and your security score. The full audit covers your entire stack.
Run your free scan before your next deploy.
Vibe coding is not inherently insecure. AI tools build fast, and fast apps can be secure apps. The founders who ship safely are not the ones who code slower — they are the ones who close the security gap between what the AI built and what is safe to run in production.
That gap is predictable. It is fixable. And it takes less time to close than you think.
— Mr. Ballaz, Founder of Ubserve