
Architecture

How ContentAI delivers a full SaaS experience with zero custom backend.

ContentAI is intentionally designed to have no separate backend. Everything runs on Next.js — a single app you can deploy to Vercel, Netlify, or any Node.js host.

High-level flow

┌─────────────────┐       ┌──────────────────────────┐       ┌─────────────────┐
│   Browser       │       │   Next.js App            │       │   AI Provider   │
│  (React + UI)   │──────▶│   /api/generate route    │──────▶│  Groq / Google  │
│                 │       │                          │       │  OpenAI / etc.  │
│  Zustand store  │◀──────│   generateText() via     │◀──────│                 │
│  (localStorage) │       │   Vercel AI SDK v6       │       │                 │
└─────────────────┘       └──────────────────────────┘       └─────────────────┘
  1. The user fills out a template form in the browser.
  2. The React code builds a prompt by replacing {field} placeholders with form values.
  3. The client sends POST /api/generate with { prompt, provider, model, apiKey }.
  4. The Next.js serverless route instantiates the AI SDK client for the chosen provider, calls generateText(), and returns the result.
  5. The client shows the content and optionally saves it to the Zustand store (which persists to localStorage).
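Step 2 above can be sketched as a small helper. This is a hypothetical illustration, not the actual code in lib/templates.ts; the function name and regex are assumptions:

```typescript
// Hypothetical sketch of step 2: replace {field} placeholders in a
// template prompt with the user's form values.
type FormValues = Record<string, string>;

function interpolatePrompt(template: string, values: FormValues): string {
  // Unknown placeholders are left intact so missing fields stay visible.
  return template.replace(/\{(\w+)\}/g, (match, field: string) =>
    field in values ? values[field] : match
  );
}

const prompt = interpolatePrompt(
  "Write a {tone} blog post about {topic}.",
  { tone: "friendly", topic: "serverless AI" }
);
// prompt === "Write a friendly blog post about serverless AI."
```

The resulting string is what the client sends as `prompt` in the POST /api/generate body.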

Why no backend?

Traditional SaaS requires:

  • A database for users, content, and billing
  • An auth system
  • A queue / background worker for AI calls
  • DevOps to run all of it

ContentAI skips all of that by:

  • Storing user data (keys, generations, documents) in the browser
  • Using Next.js API routes as a stateless proxy to AI providers
  • Letting users bring their own API keys (BYOK), which also removes your AI cost burden

This makes the app:

  • Cheaper to host — fits comfortably in Vercel's free tier
  • Easier to deploy — one command, one service
  • Safer — you never take custody of user secrets on your server

Adding a real backend

If you want accounts, shared workspaces, or server-side billing, you can add a backend without rewriting anything:

Option 1 — NextAuth + Supabase Postgres (reference build)

  1. Sign-in with Auth.js (NextAuth v5) — GitHub and Google OAuth
  2. Store generations, documents, and encrypted settings in Supabase using the service role from API routes only
  3. Protect dashboard routes with middleware and auth() in /api/*
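Step 3's middleware can be as small as the following sketch, assuming the Auth.js v5 convention where an auth.ts file exports `auth` (the file layout and matcher are assumptions, not code from this repo):

```typescript
// middleware.ts — hedged sketch of step 3, assuming an auth.ts that
// exports `auth` per the Auth.js (NextAuth v5) convention.
export { auth as middleware } from "@/auth";

// Only run the middleware on dashboard routes; API routes call auth()
// directly inside their handlers instead.
export const config = {
  matcher: ["/dashboard/:path*"],
};
```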

See Authentication & database (NextAuth + Supabase). For an alternate Supabase Auth + @supabase/ssr style integration, see Add Supabase (optional).

Option 2 — Clerk + Neon/Postgres

Drop-in auth with Clerk; use any Postgres (Neon, Railway, Supabase).

Option 3 — Keep it local

Don't add a backend at all. Perfect for:

  • Internal company tools
  • Personal content studios
  • MVPs before you know which features matter

Request lifecycle (generate)

Here is what happens when you click Generate:

  1. app/(dashboard)/generate/page.tsx collects form values.
  2. It interpolates them into the template's prompt from lib/templates.ts.
  3. It reads activeProvider, selectedModels[activeProvider], and apiKeys[activeProvider] from the Zustand store.
  4. It fetch()es /api/generate with a JSON body.
  5. app/api/generate/route.ts calls createModel(provider, apiKey, model) which picks the right @ai-sdk/* factory.
  6. It calls generateText({ model, prompt, maxOutputTokens }).
  7. On success → returns { text, wordCount }. On error → returns { error } with a parsed, friendly message and a meaningful HTTP status.
  8. The client pushes the result into the generations array.
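Step 7's error path might look like this sketch; `mapProviderError` and the message patterns are illustrative assumptions, not the actual parsing in app/api/generate/route.ts:

```typescript
// Hypothetical sketch of step 7: map a raw provider failure to a
// friendly message and a meaningful HTTP status for the JSON response.
interface ApiError {
  error: string;
  status: number;
}

function mapProviderError(err: unknown): ApiError {
  const message = err instanceof Error ? err.message : String(err);
  if (/api key|unauthorized|401/i.test(message)) {
    return { error: "Invalid API key for the selected provider.", status: 401 };
  }
  if (/rate limit|429|quota/i.test(message)) {
    return { error: "Rate limit hit. Wait a moment and retry.", status: 429 };
  }
  // Anything else is treated as an upstream failure.
  return { error: "Generation failed. Check the model and try again.", status: 502 };
}
```

The route would return this object via Response.json with the mapped status, so the client can show the friendly message directly.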

Streaming vs. non-streaming

ContentAI v1.0 uses non-streaming (generateText) by default because it gives clean error handling — providers like Google return 429 rate-limit errors inside the stream rather than on the HTTP response, which makes streaming UX surprisingly tricky.

If you want streaming with a typewriter effect, see Enabling Streaming.

Next: API Keys →
