WeWeb + OpenAI Integration Guide
Embedding AI capabilities into a WeWeb app means connecting to OpenAI's API — but never directly from the browser. A thin backend layer (Supabase edge function, Xano, or a serverless function) acts as a proxy, keeping your API key safe while enabling chat, content generation, embeddings, and image generation features in your WeWeb interface.
Why WeWeb + OpenAI?
OpenAI's API gives WeWeb apps access to GPT-4o for text generation, embeddings for semantic search, DALL-E for image creation, and Whisper for speech-to-text. Combined with WeWeb's dynamic UI bindings, you can build AI-powered features — chatbots, content assistants, smart search, document analysis — with little to no custom frontend code. The key is a secure backend proxy that holds the API key and streams responses back to WeWeb.
Setting up the integration
Create an OpenAI account and generate an API key. Build a backend endpoint (Supabase edge function recommended) that accepts a prompt or messages array, calls OpenAI's API, and returns the response. In WeWeb, use a REST API action to call this endpoint from a button or input event. Bind the response to a WeWeb variable and render it in a text element. For streaming, your backend can use Server-Sent Events and WeWeb can consume them via a custom JavaScript action.
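The proxy described above can be sketched in TypeScript. This is a minimal illustration, not a production implementation: the endpoint shape, the `OPENAI_API_KEY` environment variable name, and the `{ reply }` response format are assumptions you would adapt to your own backend. The OpenAI URL is the real chat completions endpoint.

```typescript
// Shape of the messages WeWeb sends to the proxy.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Build the OpenAI chat-completions request body from WeWeb's messages.
// Model name and max_tokens are illustrative defaults, not requirements.
function buildPayload(messages: ChatMessage[], model = "gpt-4o-mini") {
  return { model, messages, max_tokens: 500 };
}

// Handler sketch for a Supabase edge function (Deno runtime).
// The API key is read server-side, so it never reaches the browser.
async function handleChat(req: Request): Promise<Response> {
  const { messages } = await req.json();
  const apiKey = (globalThis as any).Deno?.env.get("OPENAI_API_KEY");
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildPayload(messages)),
  });
  const data = await res.json();
  // Return only the assistant text so WeWeb can bind it to a variable directly.
  return new Response(
    JSON.stringify({ reply: data.choices[0].message.content }),
    { headers: { "Content-Type": "application/json" } },
  );
}
```

In WeWeb, a REST API action posts `{ messages }` to this endpoint and binds the `reply` field of the response to a variable rendered in a text element.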
Chat and conversation flow
For a multi-turn chat, maintain a messages array variable in WeWeb (role: "user" / "assistant" objects). On each user input, push the new message to the array, send the full array to your backend endpoint, receive the assistant's reply, push it to the array, and bind the array to a list component. This gives GPT the conversation context it needs for coherent replies. Store conversations in Supabase for history and to enable cross-device continuity.
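The turn-by-turn flow above can be expressed as a small helper you might place in a WeWeb custom JavaScript action. The `endpoint` URL and the `{ reply }` response shape are assumptions matching whatever your proxy returns; the key idea is that the full history array is sent on every turn so GPT keeps the conversation context.

```typescript
// One entry in the WeWeb messages array variable.
type Msg = { role: "user" | "assistant"; content: string };

// Return a new array rather than mutating in place, so WeWeb's
// variable binding detects the change and re-renders the list.
function pushMessage(history: Msg[], role: Msg["role"], content: string): Msg[] {
  return [...history, { role, content }];
}

// One chat turn: append the user message, send the whole history to the
// proxy, append the assistant reply, and return the updated array.
async function sendTurn(history: Msg[], userText: string, endpoint: string): Promise<Msg[]> {
  const withUser = pushMessage(history, "user", userText);
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: withUser }), // full history = context
  });
  const { reply } = await res.json();
  return pushMessage(withUser, "assistant", reply);
}
```

Binding the returned array back to the WeWeb messages variable updates the list component on each turn; persisting the same array to Supabase gives you history and cross-device continuity.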
Real-world use cases
App Studio has built WeWeb + OpenAI products including: AI document reviewers that let users upload PDFs and ask questions, customer support chatbots trained on product documentation, content generation dashboards for marketing teams, and internal knowledge-base search tools using embeddings. The no-code frontend dramatically reduces the time to a usable AI product.
Common pitfalls
Never call OpenAI directly from WeWeb — your API key will be exposed in the browser network tab. Always route through a backend proxy. Implement rate limiting in your proxy to prevent cost spikes from abuse. Use GPT-4o mini for high-volume, low-complexity tasks and GPT-4o only where quality justifies the cost. Cache common responses in your database to reduce API calls. Set a token limit on inputs to avoid runaway costs from large pastes.
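Two of these cost guards can be sketched in a few lines for the proxy. The 4-characters-per-token estimate and the 2000-token cap are rough assumptions for illustration, not OpenAI constants — use a real tokenizer if you need accuracy.

```typescript
// Crude token estimate: ~4 characters per token for English text.
// Good enough to enforce a cap, not for billing-grade accuracy.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Reject oversized inputs before they ever reach OpenAI.
function checkInput(text: string, maxTokens = 2000): { ok: boolean; reason?: string } {
  const tokens = estimateTokens(text);
  if (tokens > maxTokens) {
    return { ok: false, reason: `input ~${tokens} tokens exceeds cap of ${maxTokens}` };
  }
  return { ok: true };
}

// Route to the cheaper model unless the caller flags a high-complexity task.
function pickModel(highComplexity: boolean): string {
  return highComplexity ? "gpt-4o" : "gpt-4o-mini";
}
```

The proxy runs `checkInput` on every request and returns an error to WeWeb when the cap is hit, so a large paste fails fast instead of generating a large bill.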
What you can build
- AI chatbots
- Content generation
- Document analysis
- Smart search
- Internal knowledge bases
Ready to build with WeWeb + OpenAI?
App Studio has built production apps on this exact stack. We can ship your project in 4–8 weeks and handle the full integration — architecture, setup, and launch.
Want expert help with this integration?
Book a free consultation →