Security & Privacy
How ContentAI handles user data, API keys, and AI content.
ContentAI is designed around one simple principle: the user owns their data and keys. Here's exactly how that works.
Where API keys are stored
Default / local-first build: API keys live in the browser's `localStorage`, under the store key `contentai-store`. They are:
- Never written to disk on your server
- Never stored in environment variables
- Never logged
- Sent to `/api/generate` with each request over HTTPS, then discarded
If a user clears their browser data, their keys are gone. That is by design.
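To make the storage model concrete, here is a minimal sketch of how a persisted store like `contentai-store` can be read and written. The exact shape (`state.apiKeys`, the `version` field) is an assumption modeled on Zustand's persist format, not ContentAI's actual schema; a `Map` stands in for `window.localStorage` so the snippet runs outside a browser.

```typescript
// Assumed persisted shape -- check your build's store definition.
type PersistedStore = {
  state: { apiKeys: Record<string, string> };
  version: number;
};

// In the browser this would be window.localStorage; a Map stands in here.
const storage = new Map<string, string>();

function saveKey(provider: string, key: string): void {
  const raw = storage.get("contentai-store");
  const store: PersistedStore = raw
    ? JSON.parse(raw)
    : { state: { apiKeys: {} }, version: 0 };
  store.state.apiKeys[provider] = key;
  storage.set("contentai-store", JSON.stringify(store));
}

function loadKey(provider: string): string | undefined {
  const raw = storage.get("contentai-store");
  if (!raw) return undefined;
  const store: PersistedStore = JSON.parse(raw);
  return store.state.apiKeys[provider];
}

saveKey("openai", "sk-example");
console.log(loadKey("openai")); // "sk-example"
```

Because the store is plain JSON in `localStorage`, anything with script access to the page could read it — which is why the XSS posture described below matters.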
Production build with NextAuth + Supabase: keys are not sent from the client on each generate call. They are encrypted server-side and stored in Postgres (`user_settings`); `/api/generate` loads the key for the active provider after validating the session. See Authentication & database (NextAuth + Supabase).
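As a hedged sketch of what "encrypted server-side" can look like, the snippet below uses Node's built-in `crypto` with AES-256-GCM. The storage layout (iv + auth tag + ciphertext in one base64 column) and the in-memory `KEY` are illustrative assumptions; in a real deployment the key comes from a server-only secret or a KMS, never from client code.

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Illustrative only: a real deployment loads this from an env var or KMS.
const KEY = randomBytes(32); // 256-bit key for AES-256-GCM

function encryptApiKey(plaintext: string): string {
  const iv = randomBytes(12); // GCM's standard 96-bit nonce
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Pack iv + tag + ciphertext into one base64 string for a single column.
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

function decryptApiKey(stored: string): string {
  const buf = Buffer.from(stored, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28); // GCM auth tag is 16 bytes
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", KEY, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

const stored = encryptApiKey("sk-live-example");
console.log(decryptApiKey(stored)); // "sk-live-example"
```

GCM's auth tag means a tampered ciphertext fails to decrypt rather than silently producing garbage, which is the behavior you want for stored credentials.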
Where generations and documents are stored
Same place: `localStorage`, in the same store. This means:
- Content is device-specific and browser-specific
- Users can't access their content from a different device
- Clearing browser data deletes everything
If you need cross-device sync, add a database. The reference deployment uses Supabase — see Authentication & database (NextAuth + Supabase), or the alternate notes in Add Supabase.
The /api/generate route
This is the main server-side entry point for AI calls. It:
- Accepts `POST` with JSON `{ prompt, provider, model }` (and optional extra fields depending on your fork)
- In the authenticated build, requires a signed-in user and loads the user's API key server-side from encrypted storage (not from the JSON body)
- Constructs an AI SDK client in memory using that key
- Calls `generateText()` and returns plain text or JSON errors
In the localStorage-only build, the key is included in the request body instead. The route does not persist prompts or keys to disk.
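The request contract above can be sketched as a small validation function — the kind of check the route runs before constructing a provider client. This is an assumption about the shape, not ContentAI's actual source; the provider list and error strings are illustrative.

```typescript
// Assumed provider list and error shape -- adjust to your fork.
const PROVIDERS = ["groq", "google", "openai", "anthropic"] as const;
type Provider = (typeof PROVIDERS)[number];

type GenerateRequest = { prompt: string; provider: Provider; model: string };

// Returns the validated request, or an error object for a 400 response.
function parseGenerateBody(body: unknown): GenerateRequest | { error: string } {
  if (typeof body !== "object" || body === null) return { error: "Invalid JSON body" };
  const { prompt, provider, model } = body as Record<string, unknown>;
  if (typeof prompt !== "string" || prompt.length === 0) return { error: "Missing prompt" };
  if (!PROVIDERS.includes(provider as Provider)) return { error: "Unknown provider" };
  if (typeof model !== "string" || model.length === 0) return { error: "Missing model" };
  return { prompt, provider: provider as Provider, model };
}

console.log(parseGenerateBody({ prompt: "Write a tagline", provider: "groq", model: "llama-3.1-8b-instant" }));
```

Validating before touching the AI SDK keeps malformed requests from ever reaching a provider with someone's key attached.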
Rate limiting
There is no rate limiting out of the box, because:
- The user's own provider already enforces rate limits on their account
- ContentAI is typically deployed for a small team or as a single-user tool
If you're running ContentAI as a public service with your own shared keys, add:
- Upstash Redis + `@upstash/ratelimit`, or
- A Vercel Edge Middleware check, or
- An auth gate (Clerk, Supabase, NextAuth)
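For a rough sense of what a limiter adds, here is a minimal in-memory fixed-window sketch. It only works within a single long-lived Node process — on serverless platforms each instance has its own memory, which is exactly why the Upstash Redis option above exists. The limits and key format are illustrative.

```typescript
// Fixed-window limiter: N requests per key per window.
type Window = { count: number; resetAt: number };
const windows = new Map<string, Window>();

function allowRequest(key: string, limit = 10, windowMs = 60_000, now = Date.now()): boolean {
  const w = windows.get(key);
  if (!w || now >= w.resetAt) {
    // New window: first request always passes.
    windows.set(key, { count: 1, resetAt: now + windowMs });
    return true;
  }
  if (w.count >= limit) return false; // over the limit, reject
  w.count += 1;
  return true;
}

// Three requests against a limit of 2: the third is rejected.
console.log(allowRequest("ip:1.2.3.4", 2), allowRequest("ip:1.2.3.4", 2), allowRequest("ip:1.2.3.4", 2));
// true true false
```

Keying on the authenticated user ID (rather than IP) gives fairer limits once an auth gate is in place.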
CSRF, XSS, and CORS
- CSRF — lowest risk with same-site session cookies (NextAuth). In the local-only build without sessions, requests carry the API key in the JSON body; use HTTPS and avoid cross-site embedding.
- XSS — React escapes all user input by default. The only place content is rendered as HTML is in `ReactMarkdown`, which uses `remark-gfm` and does not allow raw HTML by default.
- CORS — the `/api/generate` route only needs to be called from the same origin. Next.js API routes send no CORS headers by default, so browsers block cross-origin calls to them.
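If you want defense in depth beyond the browser's same-origin behavior, the route can also reject cross-site `POST`s explicitly by checking the `Origin` header. This is a sketch, not ContentAI's code; `ALLOWED_ORIGIN` is a placeholder for your deployment URL.

```typescript
// Placeholder: set to your actual deployment origin.
const ALLOWED_ORIGIN = "https://contentai.example.com";

function isSameOrigin(originHeader: string | null): boolean {
  // Browsers send Origin on POST requests; a missing header typically
  // means a non-browser client (curl, server-to-server), which CSRF
  // does not apply to.
  if (originHeader === null) return true;
  return originHeader === ALLOWED_ORIGIN;
}

console.log(isSameOrigin("https://contentai.example.com")); // true
console.log(isSameOrigin("https://evil.example")); // false
```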
What the provider sees
When a user generates content, the AI provider (Groq, Google, OpenAI, Anthropic) sees:
- The prompt (including any text the user pasted into fields)
- The user's API key
- Standard HTTP metadata (IP, user-agent)
Each provider has its own data retention and training policies:
| Provider | Training on inputs? | Retention |
|---|---|---|
| Groq | No (as of 2025) | 30 days for abuse |
| Google (Gemini) | Free tier may train | Varies by tier |
| OpenAI API | No by default | 30 days then deleted |
| Anthropic | No by default | 30 days then deleted |
Always check each provider's latest policy; this table is a snapshot, not legal advice.
Hardening a production deployment
If you're offering ContentAI as a paid product, consider:
- Add authentication — gate `/dashboard/*` and `/api/generate` with Clerk, Supabase, or NextAuth.
- Encrypt keys at rest — if you move keys to a database, use `pgcrypto` or a KMS.
- Add a Content Security Policy — restrict where scripts and frames can come from.
- Add monitoring — Sentry for errors, Logtail/Axiom for request logs.
- Add Terms and Privacy pages — especially if users can paste content from their own businesses.
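As a starting point for the CSP item above, here is one way to assemble a policy string. The directives are assumptions for a stock Next.js app and will need tuning for your fonts, analytics, and image hosts; in Next.js the resulting string would be attached via `headers()` in `next.config` or via middleware.

```typescript
// Illustrative directives -- tighten or extend for your deployment.
const directives: Record<string, string[]> = {
  "default-src": ["'self'"],
  "script-src": ["'self'"],
  "style-src": ["'self'", "'unsafe-inline'"], // inline styles from CSS-in-JS
  "img-src": ["'self'", "data:"],
  "frame-ancestors": ["'none'"], // disallow embedding (clickjacking)
};

// Serialize to the header value format: "name value value; name value".
const csp = Object.entries(directives)
  .map(([name, values]) => `${name} ${values.join(" ")}`)
  .join("; ");

console.log(csp);
```

`frame-ancestors 'none'` also covers the "avoid cross-site embedding" advice from the CSRF section above.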
Questions?
If you have a specific security question about your deployment, check FAQ or reach out through your CodeCanyon support channel.
Next: Features: Dashboard →