Architecture
How ContentAI delivers a full SaaS experience with zero custom backend.
ContentAI is intentionally designed to have no separate backend. Everything runs on Next.js — a single app you can deploy to Vercel, Netlify, or any Node.js host.
High-level flow
```
┌─────────────────┐       ┌──────────────────────────┐       ┌─────────────────┐
│    Browser      │       │       Next.js App        │       │   AI Provider   │
│  (React + UI)   │──────▶│   /api/generate route    │──────▶│  Groq / Google  │
│                 │       │                          │       │  OpenAI / etc.  │
│  Zustand store  │◀──────│   generateText() via     │◀──────│                 │
│ (localStorage)  │       │   Vercel AI SDK v6       │       │                 │
└─────────────────┘       └──────────────────────────┘       └─────────────────┘
```

- The user fills out a template form in the browser.
- The React code builds a prompt by replacing `{field}` placeholders with form values.
- The client sends `POST /api/generate` with `{ prompt, provider, model, apiKey }`.
- The Next.js serverless route instantiates the AI SDK client for the chosen provider, calls `generateText()`, and returns the result.
- The client shows the content and optionally saves it to the Zustand store (which persists to `localStorage`).
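The client side of this flow can be sketched in a few lines. `interpolate` and `generate` are illustrative helpers, not ContentAI's actual exports; the request shape matches the `{ prompt, provider, model, apiKey }` body described above.

```typescript
// Hypothetical sketch of the browser side of the flow above.

interface GenerateRequest {
  prompt: string;
  provider: string;
  model: string;
  apiKey: string;
}

// Replace {field} placeholders in a template with form values.
// Unknown placeholders are left intact rather than dropped.
function interpolate(template: string, values: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match: string, field: string) => values[field] ?? match);
}

// POST the assembled request to the stateless API route.
async function generate(req: GenerateRequest): Promise<string> {
  const res = await fetch("/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  const data = await res.json();
  if (!res.ok) throw new Error(data.error);
  return data.text;
}
```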
Why no backend?
Traditional SaaS requires:
- A database for users, content, billing
- An auth system
- A queue / background worker for AI calls
- DevOps to run all of it
ContentAI skips all of that by:
- Storing user data (keys, generations, documents) in the browser
- Using Next.js API routes as a stateless proxy to AI providers
- Letting users bring their own API keys (BYOK), which also removes your AI cost burden
This makes the app:
- Cheaper to host — fits comfortably in Vercel's free tier
- Easier to deploy — one command, one service
- Safer — you never custody user secrets on your server
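The browser-side storage can be sketched without the Zustand dependency ContentAI actually uses. The `PersistedState` shape and helper names below are illustrative, not the app's real store.

```typescript
// Minimal sketch of keeping all user data in the browser. Assumes a single
// JSON blob under one key, as Zustand's persist middleware does.

interface KeyValueStorage {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

interface PersistedState {
  apiKeys: Record<string, string>; // provider -> user-supplied key (BYOK)
  generations: { prompt: string; text: string; createdAt: number }[];
}

const STORAGE_KEY = "contentai-store";

// Serialize the whole state under one key.
function save(state: PersistedState, storage: KeyValueStorage): void {
  storage.setItem(STORAGE_KEY, JSON.stringify(state));
}

// Hydrate on load, falling back to an empty state for first-time visitors.
function load(storage: KeyValueStorage): PersistedState {
  const raw = storage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as PersistedState) : { apiKeys: {}, generations: [] };
}
```

In the browser you would pass `window.localStorage` as the storage. The trade-off follows directly: clearing site data deletes the user's keys and history, which is exactly why the local-only model suits single-user setups.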
Adding a real backend
If you want accounts, shared workspaces, or server-side billing, you can add a backend without rewriting anything:
Option 1 — NextAuth + Supabase Postgres (reference build)
- Sign-in with Auth.js (NextAuth v5) — GitHub and Google OAuth
- Store `generations`, `documents`, and encrypted settings in Supabase, using the service role from API routes only
- Protect dashboard routes with middleware and `auth()` in `/api/*`
See Authentication & database (NextAuth + Supabase). For an alternate Supabase Auth + @supabase/ssr style integration, see Add Supabase (optional).
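The route-protection step in Option 1 boils down to a small guard. This is a simplified, framework-agnostic sketch: in the real pattern the session comes from Auth.js's `auth()`, and the `Session` shape and `requireSession` helper here are illustrative.

```typescript
// Sketch of protecting an API route: bail out with 401 before doing any
// work when there is no signed-in user. Session shape is an assumption.

interface Session {
  user?: { id: string; email?: string };
}

function requireSession(session: Session | null): Response | null {
  if (!session?.user) {
    return Response.json({ error: "Unauthorized" }, { status: 401 });
  }
  return null; // caller proceeds with the authenticated request
}
```

Inside a route handler this would look roughly like `const denied = requireSession(await auth()); if (denied) return denied;`.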
Option 2 — Clerk + Neon/Postgres
Drop-in auth with Clerk; use any Postgres (Neon, Railway, Supabase).
Option 3 — Keep it local
Don't add a backend at all. Perfect for:
- Internal company tools
- Personal content studios
- MVPs before you know which features matter
Request lifecycle (generate)
Here is what happens when you click Generate:
- `app/(dashboard)/generate/page.tsx` collects form values.
- It interpolates them into the template's `prompt` from `lib/templates.ts`.
- It reads `activeProvider`, `selectedModels[activeProvider]`, and `apiKeys[activeProvider]` from the Zustand store.
- It `fetch()`es `/api/generate` with a JSON body.
- `app/api/generate/route.ts` calls `createModel(provider, apiKey, model)`, which picks the right `@ai-sdk/*` factory.
- It calls `generateText({ model, prompt, maxOutputTokens })`.
- On success it returns `{ text, wordCount }`; on error it returns `{ error }` with a parsed, friendly message and a meaningful HTTP status.
- The client pushes the result into the `generations` array.
Streaming vs. non-streaming
ContentAI v1.0 uses non-streaming (`generateText`) by default because it gives clean error handling: providers like Google return 429 rate-limit errors inside the stream rather than on the HTTP response, which makes streaming UX surprisingly tricky.
If you want streaming with a typewriter effect, see Enabling Streaming.
Next: API Keys →