Why You Should Never Call OpenAI Directly from the Frontend
The first mistake most no-code builders make: calling the OpenAI API directly from WeWeb or FlutterFlow, with the API key exposed in the client.
Never do this. The API key will be visible in browser dev tools, and anyone who finds it can generate thousands of dollars in API calls at your expense. OpenAI's built-in rate limits throttle traffic per organisation, but they will not stop a leaked key from burning through your budget before you notice and revoke it.
The correct pattern: all OpenAI calls go through your backend (Supabase Edge Function, Xano endpoint, or any server). The frontend calls your backend, which calls OpenAI with the key stored as a server-side environment variable. This adds one layer of indirection and protects your key completely.
Setting Up Supabase Edge Functions for OpenAI
Supabase Edge Functions are Deno-based serverless functions that run on the edge. They're the easiest way to proxy OpenAI calls.
```
// supabase/functions/ai-chat/index.ts
import { serve } from "https://deno.land/std@0.168.0/http/server.ts"

serve(async (req) => {
  const { messages, systemPrompt } = await req.json()

  // The key never leaves the server: it is read from an environment
  // variable set with `supabase secrets set`.
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${Deno.env.get("OPENAI_API_KEY")}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [{ role: "system", content: systemPrompt }, ...messages],
      max_tokens: 1000,
    }),
  })

  // Surface OpenAI errors instead of forwarding them as 200s.
  if (!response.ok) {
    return new Response(await response.text(), { status: response.status })
  }

  return new Response(response.body, {
    headers: { "Content-Type": "application/json" },
  })
})
```
Deploy with `supabase functions deploy ai-chat`. Set the key: `supabase secrets set OPENAI_API_KEY=sk-...`.
Calling the Edge Function from WeWeb
In WeWeb, create a REST API data source pointing to your Edge Function URL (found in Supabase dashboard → Edge Functions). Configure it as a POST request with a JSON body.
Create two variables: `chatMessages` (array) and `aiResponse` (string). On button click:
1. Append the user's message to `chatMessages`
2. Call the Edge Function action with `messages: chatMessages`
3. Bind `aiResponse` to the response body
4. Display `aiResponse` in a text element
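Under the hood, the first two steps amount to appending a message and building a JSON body to POST. A minimal sketch (the helper name `buildChatRequest` is illustrative, not a WeWeb API):

```typescript
// Illustrative helper mirroring what the WeWeb action sends to the
// Edge Function: append the user's message, then POST the full history.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildChatRequest(chatMessages: ChatMessage[], userInput: string) {
  const messages: ChatMessage[] = [
    ...chatMessages,
    { role: "user", content: userInput },
  ];
  // This object is the JSON body of the POST to the Edge Function.
  return { messages, systemPrompt: "You are a helpful assistant." };
}
```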
For streaming responses (text appearing word by word), use the `stream: true` parameter in the OpenAI call and handle the SSE stream in the Edge Function. This is more complex but creates a much better UX.
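With `stream: true`, OpenAI returns Server-Sent Events where each `data:` line carries a JSON chunk containing a `choices[0].delta.content` fragment. A sketch of the parsing step, assuming the standard chat.completions streaming payload:

```typescript
// Extract the text fragments from one SSE chunk of an OpenAI stream.
// Each "data: " line holds a JSON payload; "data: [DONE]" ends the stream.
function extractDeltas(sseChunk: string): string[] {
  const deltas: string[] = [];
  for (const line of sseChunk.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") continue;
    const text = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (typeof text === "string") deltas.push(text);
  }
  return deltas;
}
```

The frontend appends each fragment to the displayed text as it arrives, which is what produces the word-by-word effect.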
Prompt Engineering for Product Features
The quality of your AI feature depends entirely on your system prompt. Generic prompts produce generic output.
**For a content generator**: "You are a copywriter for [Company Name], a [description] SaaS. Generate [output type] that matches this brand voice: [examples]. Always output in this format: [structure]. Never include [exclusions]."
**For a data analyst**: "You are an expert data analyst. The user will provide you with structured data. Analyse it and return insights as: 1) A one-sentence summary, 2) Three key findings as bullet points, 3) One recommended action. Always respond in valid JSON."
**Practical rules**: Be specific about output format (JSON when the frontend needs to parse it), include negative constraints ("never mention competitors"), and test with 20+ inputs before shipping.
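When the frontend must parse the output, it helps to pair the prompt with OpenAI's JSON mode (`response_format: { type: "json_object" }`, which requires the word "JSON" to appear in your messages). A sketch of the request body for the data-analyst prompt above:

```typescript
// Sketch: request body for the data-analyst prompt. JSON mode guarantees
// syntactically valid JSON; the system prompt still defines the shape.
function buildAnalystRequest(systemPrompt: string, userData: string) {
  return {
    model: "gpt-4o",
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userData },
    ],
    response_format: { type: "json_object" },
    max_tokens: 500,
  };
}
```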
Cost Control and Rate Limiting
OpenAI API costs add up fast in production. Three controls to implement before launch:
1. **Token limits**: Set `max_tokens` on every request. For most features, 500 tokens is enough. GPT-4o charges $2.50/M input tokens + $10/M output tokens, so a 500-token limit caps the output cost at $0.005 per request.
2. **User rate limiting**: In your Edge Function, check how many calls the user has made in the last hour (store in Supabase). Return 429 if over the limit.
3. **Caching**: For deterministic outputs (same input → same output), cache responses in Supabase. An `ai_cache` table with a hash of the prompt as the key eliminates redundant API calls.
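Controls 2 and 3 can be sketched as two small helpers. This is an in-memory sketch: in production the call timestamps and cache rows would live in Supabase tables, and the names `isOverLimit` and `cacheKey` are illustrative:

```typescript
import { createHash } from "node:crypto";

// Control 2: sliding-window rate limit. In production the timestamps
// would come from a Supabase query, not an in-memory array.
const WINDOW_MS = 60 * 60 * 1000; // one hour

function isOverLimit(callTimestamps: number[], now: number, maxPerHour: number): boolean {
  const recentCalls = callTimestamps.filter((t) => now - t < WINDOW_MS);
  return recentCalls.length >= maxPerHour; // Edge Function returns 429 if true
}

// Control 3: deterministic cache key for the ai_cache table. The same
// model + prompt + input always hashes to the same key, so repeated
// requests can be served from the cache instead of hitting OpenAI.
function cacheKey(model: string, systemPrompt: string, userInput: string): string {
  return createHash("sha256")
    .update(JSON.stringify({ model, systemPrompt, userInput }))
    .digest("hex");
}
```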
For a SaaS app with 500 MAU each making 20 AI requests/day: budget €150–300/month for GPT-4o.
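A quick way to sanity-check your own numbers, using the GPT-4o prices quoted above. Note that the €150–300 figure implies an average cost well below the 500-token maximum per request, which short prompts and caching make realistic:

```typescript
// Per-request cost in USD at the GPT-4o list prices quoted above
// ($2.50 per million input tokens, $10 per million output tokens).
function requestCostUSD(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1_000_000) * 2.5 + (outputTokens / 1_000_000) * 10;
}

// Monthly total for a given user base and an assumed average cost/request.
function monthlyCostUSD(users: number, requestsPerUserPerDay: number, avgCostPerRequest: number): number {
  return users * requestsPerUserPerDay * 30 * avgCostPerRequest;
}
```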
Common No-Code AI Features We Build
**AI content generation**: Blog posts, product descriptions, email subject lines. WeWeb form → Edge Function → GPT-4o → display output.
**Intelligent search**: Embed user content with OpenAI text-embedding-3-small, store vectors in pgvector (Supabase), and run similarity search. Returns semantically relevant results instead of keyword matches.
**Document summarisation**: Upload PDF → extract text via Edge Function → summarise with GPT-4o → store summary in Supabase.
**AI-powered onboarding**: Ask 5 questions during signup, generate a personalised setup checklist with GPT-4o, store in user profile.
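The intelligent-search feature rests on vector similarity: pgvector's `<=>` operator returns cosine distance, which is 1 minus the cosine similarity sketched here:

```typescript
// Cosine similarity between two embedding vectors: the dot product
// divided by the product of their magnitudes. 1 = same direction,
// 0 = unrelated. pgvector computes this in SQL via its <=> operator.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```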
All of these run on the WeWeb + Supabase + OpenAI stack with zero custom code in the frontend.