The worlds of no-code development and artificial intelligence are merging, making it easier than ever to build remarkably smart applications. With a visual development platform like WeWeb, you can combine the power of AI with a no-code workflow to build production-grade apps faster than you ever thought possible.
The ChatGPT API from OpenAI allows you to integrate its powerful language models directly into your own applications, unlocking dynamic content generation, intelligent conversational interfaces, and much more. When combined with a visual platform like WeWeb, you can use the API to build production-grade apps with sophisticated AI features, all within a no-code workflow. With the conversational AI market projected to hit an incredible $32.6 billion by 2030, learning to use these tools is a massive advantage. This guide will walk you through everything you need to know about setting up and using the OpenAI API in WeWeb, from the initial plugin setup to advanced architecture for security and scale.
Getting Started with the WeWeb OpenAI Plugin
Your journey into building AI-powered apps on WeWeb begins with a simple setup process. The platform provides a dedicated plugin that handles the difficult parts of the integration for you.
OpenAI Plugin Setup in WeWeb
WeWeb released its official OpenAI plugin on March 23, 2023, creating a streamlined way to make secure API calls directly from your apps. This plugin is your all-in-one solution for accessing different OpenAI features, including:
- Chat Completions: The core of any ChatGPT-like experience.
- Text Completions: For straightforward text generation tasks.
- Image Generation: Using DALL·E to create images from prompts.
Setting it up is easy. You simply add the plugin from the WeWeb marketplace to your project. Once added, it provides a new set of actions you can use within your application’s workflows, removing the need to manually configure API calls. To see it in action, you can start building for free on WeWeb and follow along.
OpenAI API Key Configuration
Your API key is the secret credential that connects your WeWeb app to your OpenAI account. Think of it like a password; it should be treated with extreme care. You can get your key from your OpenAI account dashboard.
WeWeb’s OpenAI plugin provides a secure field to enter your API key during setup. This is crucial for security. When you use the plugin, your API key is kept private and is not exposed in the client-side code that runs in a user’s browser. This secure handling prevents malicious users from stealing your key and using your OpenAI credits. As one developer noted, the only truly safe way to use OpenAI from a web app is to have your front end talk to a backend you control, which is exactly what the WeWeb plugin facilitates.
Understanding Core API Concepts
Once the plugin is ready, it helps to understand the different ways you can interact with the models. OpenAI offers a couple of different APIs for text generation, each suited for different purposes.
Text Completion API Usage
The Text Completion API was the original way to interact with models like GPT-3. It works like a powerful autocomplete: you provide a text prompt, and the model generates a continuation of that text. For instance, you could give it the beginning of a sentence, and it would finish it. While effective for single-turn tasks like generating a slogan from a product description, this API is less common for new projects. The primary reason is cost. The older text-davinci-003 model was about ten times more expensive than the newer gpt-3.5-turbo model accessed via the chat API.
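To make the "powerful autocomplete" idea concrete, here is a minimal sketch of what a legacy-style completion request body looks like. The field names (model, prompt, max_tokens, temperature) follow OpenAI's Completions API; the slogan prompt and the helper function name are just illustrative.

```python
def build_completion_request(product_description: str) -> dict:
    """Build a request body for a single-turn, prompt-based text completion."""
    return {
        "model": "gpt-3.5-turbo-instruct",  # successor to text-davinci-003
        "prompt": f"Write a short slogan for this product: {product_description}",
        "max_tokens": 30,    # keep the continuation short
        "temperature": 0.7,  # some creativity, but not wild
    }

body = build_completion_request("a solar-powered phone charger")
print(body["prompt"])
```

Note that there is no conversation structure here at all: one prompt string goes in, one continuation comes back, which is exactly why this style suits single-turn tasks.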
Chat Completion API Usage
The Chat Completion API is the modern interface designed for conversational interactions and powers experiences like ChatGPT. Instead of a single text string, it uses a sequence of messages, each with an assigned role (like user, assistant, or system). This structure is ideal for building back-and-forth dialogue.
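The shift from a single string to role-tagged messages is easiest to see in code. Below is a hedged sketch of a Chat Completions request body; the message structure (role/content pairs) matches OpenAI's API, while the helper function and its wording are illustrative.

```python
def build_chat_request(user_message: str) -> dict:
    """Build a Chat Completions request: a list of role-tagged messages."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            # The system message sets behavior for the whole conversation.
            {"role": "system", "content": "You are a helpful assistant."},
            # The user message is the actual input for this turn.
            {"role": "user", "content": user_message},
        ],
    }

request = build_chat_request("What should I pack for a trip to Norway?")
```

Each turn of a conversation just becomes another entry in that `messages` list, which is what makes multi-turn dialogue natural with this API.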
Its launch led to widespread adoption in major applications. For example, Snap Inc. used this API to create its “My AI” feature for its 750 million monthly users, and Quizlet built its “Q Chat” AI tutor for 60 million students. These integrations became feasible thanks to a 90% cost reduction OpenAI achieved for its ChatGPT models, making the ChatGPT API both powerful and economical.
System Prompt Configuration
A key feature of the ChatGPT API is the system prompt. This is an initial instruction you provide to the model to set its behavior, persona, or rules for the entire conversation. For example, a system prompt could be, “You are a helpful travel assistant. Be concise and use emojis in your answers.” This message guides the AI’s tone and response style without the user ever seeing it. Crafting a good system prompt is one of the most effective ways to get consistently high-quality, relevant responses from the model.
Secured Prompt Management
Because system prompts can contain proprietary logic or important instructions, you don’t want users to be able to see or modify them. This is where secured prompt management comes in. WeWeb’s OpenAI plugin allows you to define these system prompts in a secure way, so they are never exposed on the front end. This prevents prompt injection attacks, where a user might try to trick the AI into ignoring its original instructions. By keeping your prompts private, you ensure the AI behaves as intended.
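To illustrate the principle (not the plugin's actual internals), here is one way a backend could enforce secured prompts: discard any system messages the client sends, then prepend the server-held system prompt. The function and variable names are assumptions for this sketch.

```python
# Kept server-side only; never shipped to the browser.
SYSTEM_PROMPT = "You are a helpful travel assistant. Be concise."

def secure_messages(client_messages: list[dict]) -> list[dict]:
    """Drop any client-supplied 'system' messages, then prepend our own.

    This keeps the real system prompt private and blunts prompt-injection
    attempts that try to smuggle new instructions in via the system role.
    """
    filtered = [m for m in client_messages if m.get("role") != "system"]
    return [{"role": "system", "content": SYSTEM_PROMPT}] + filtered
```

With this in place, even a user who crafts their own request payload cannot override or read the instructions you defined.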
Chat Message Array Structure
To maintain a conversation’s context, the Chat Completion API requires you to send the history of the conversation with each new request. This is done using a chat message array: a list of message objects, where each object has two key properties:
- role: Can be user, assistant, or system.
- content: The text of the message itself.
A conversation starts with a system message (your configuration), followed by alternating user and assistant messages. When a user sends a new message, you add it to this array and send the entire array to the ChatGPT API. The model then generates the next assistant message based on that full context. As one WeWeb expert noted, each API call is a new conversation unless you re-feed the old messages.
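The append-then-send loop described above can be sketched as follows. This is a minimal illustration, with a stub standing in for the real API call (in WeWeb, that step would be the plugin's chat completion action); the function names are assumptions.

```python
def send_message(history: list[dict], user_text: str, call_api) -> list[dict]:
    """Append the user's message, call the model with the FULL history,
    and append the assistant's reply so the next turn has context."""
    history = history + [{"role": "user", "content": user_text}]
    reply = call_api(history)  # the actual chat completion call goes here
    return history + [{"role": "assistant", "content": reply}]

# Stub in place of a real API call, so the flow can be run end to end.
fake_api = lambda msgs: f"(echo) {msgs[-1]['content']}"

history = [{"role": "system", "content": "You are a concise assistant."}]
history = send_message(history, "Hello!", fake_api)
history = send_message(history, "What did I just say?", fake_api)
print(len(history))  # 5 messages: 1 system + 2 user + 2 assistant
```

Because the whole array is re-sent every turn, the model "remembers" earlier messages only as long as you keep them in the array.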
Building Your Application Logic in WeWeb
With the foundational concepts covered, let’s look at how to implement the logic within the WeWeb visual editor.
WeWeb Workflow Action to Call OpenAI
Workflows are at the heart of building applications in WeWeb. They are sequences of actions that run in response to events, like a user clicking a button. After setting up the OpenAI plugin, you will find new actions available, such as “Create Chat Completion”.
To build a chat feature, you would add this action to a workflow. The action’s settings will have fields for you to provide the message array. You can dynamically build this array using variables from your application, including the user’s latest input and the stored conversation history. A WeWeb user confirmed they were able to quickly get a response from the API and use it in their app with this simple workflow approach.
Display API Response in Chat UI
Once the workflow action receives a response from the ChatGPT API, the next step is to display it to the user. A common pattern is to store the conversation history in a variable or collection that is bound to your chat interface. When a new response arrives, you simply add the assistant’s message to that collection. WeWeb’s visual editor automatically updates the UI to show the new message bubble. For a better user experience, you can also display a loading indicator while waiting for the API to respond.
Error Handling for API Calls
Things can and do go wrong. The user might have a poor internet connection, or the OpenAI API itself might be temporarily unavailable. A robust application needs to handle these errors gracefully. One WeWeb user reported that API calls would sometimes fail with a “Failed to fetch” error due to a spotty connection. Your workflow should include logic to catch these errors. Instead of letting the app hang, you can display a friendly message in the chat UI, like, “Sorry, something went wrong. Please try again.”
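One common pattern for handling these failures is retry-with-backoff plus a friendly fallback message. The sketch below shows the idea in plain Python; in WeWeb you would express the same logic with workflow actions and error branches, and the function names here are illustrative.

```python
import time

def call_with_retry(make_request, max_attempts: int = 3) -> str:
    """Retry transient failures with a short backoff; return a friendly
    fallback message instead of letting the chat UI hang."""
    for attempt in range(max_attempts):
        try:
            return make_request()
        except ConnectionError:            # e.g. a "Failed to fetch" situation
            if attempt < max_attempts - 1:
                time.sleep(2 ** attempt)   # wait 1s, then 2s, between attempts
    return "Sorry, something went wrong. Please try again."
```

The key point is that every failure path ends in a message the user can act on, never a silent hang.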
Advanced Architecture and Security
For production applications, you’ll want to think more deeply about security, state management, and scalability. This often involves moving some logic to a backend.
API Endpoint Design for ChatGPT Query
The most secure way to interact with the ChatGPT API is to avoid calling it directly from the user’s browser. The recommended architecture is to have your front-end application call a backend endpoint that you control. This backend then securely calls the OpenAI API with your secret key. This design prevents your API key from ever being exposed and allows you to add extra logic like rate limiting or input validation on the server.
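A framework-agnostic sketch of such an endpoint's handler is shown below. The actual OpenAI call is injected as a function so the validation logic stands alone; the handler name, field names, and the 2000-character limit are all assumptions for illustration.

```python
def handle_chat_request(body: dict, call_openai) -> tuple[int, dict]:
    """Server-side handler: validate the incoming request, then call
    OpenAI with the secret key held on the server (never in the browser)."""
    message = (body.get("message") or "").strip()
    if not message:
        return 400, {"error": "message is required"}
    if len(message) > 2000:                # crude input-size limit
        return 400, {"error": "message too long"}
    reply = call_openai(message)           # the real API call happens here
    return 200, {"reply": reply}
```

Rate limiting and per-user quotas would slot into the same place as the validation checks, before the (billable) call to OpenAI is ever made.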
Xano Proxy Integration for the ChatGPT API
You don’t need to be a traditional backend developer to create this secure endpoint. No-code backend platforms like Xano are a perfect match for WeWeb. You can create a simple API endpoint in Xano that takes a user’s prompt, calls the ChatGPT API from the server, and returns the response. This approach is not only more secure but also helps bypass front-end limitations. For example, one user faced a timeout issue with a long-running request from WeWeb, which they solved by running the call on a server where a longer timeout could be set. A team member even noted that Xano is “super easy” to use as a playground for testing new OpenAI features.
Authentication for Chat Session
If your app requires users to log in, you should implement authentication for your chat sessions (e.g., with Auth0). This ensures that only authorized users can interact with the AI, preventing abuse and allowing you to track usage per user. You can associate each conversation with a specific user account. This is also how you can implement premium features; for example, Snapchat made its “My AI” feature available exclusively to Snapchat+ subscribers.
Chat History Storage in Backend
To create a seamless user experience where conversations can be resumed across different sessions or devices, you need to store the chat history in a backend database. When a user sends a message, your backend can retrieve their past conversation, add it to the prompt for context, and save the new exchange. Without backend storage, the conversation is lost as soon as the user reloads the page. This persistence is key to making your chatbot feel truly intelligent and personal.
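The retrieve-append-save cycle can be sketched with an in-memory store standing in for a real backend table (e.g. a Xano collection). Capping the history that gets re-sent keeps prompts within the model's context limits; all names and the 20-message cap here are illustrative.

```python
from collections import defaultdict

# In-memory stand-in for a backend database table of chat messages.
_store: dict[str, list[dict]] = defaultdict(list)

def save_turn(user_id: str, role: str, content: str) -> None:
    """Persist one message of a user's conversation."""
    _store[user_id].append({"role": role, "content": content})

def load_history(user_id: str, max_messages: int = 20) -> list[dict]:
    """Return the most recent messages so the prompt stays within limits."""
    return _store[user_id][-max_messages:]
```

On each new user message, a backend would call `load_history`, build the full message array for the API, then `save_turn` for both the user message and the assistant's reply.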
Frequently Asked Questions
Is my OpenAI API key safe in WeWeb?
Yes. When you use WeWeb’s official OpenAI plugin and enter your key in the designated secure field, it is not exposed to the client side. The plugin manages the key securely to protect it.
Can I build a chatbot that remembers past conversations?
Absolutely. To do this effectively, you should store the chat history in a backend database (like Xano or Airtable). Your app can then load this history when a user starts a new session and include it in requests to the ChatGPT API to provide context.
What is the difference between the Text and Chat Completion API?
The Text Completion API is designed for single-turn tasks where the AI completes a given piece of text. The Chat Completion API is designed for multi-turn, conversational interactions, using a structured array of messages with roles like “user” and “assistant”. For most new applications, especially chatbots, the Chat Completion API is the better and more cost-effective choice.
Do I need a backend like Xano to use the ChatGPT API?
While you can make calls directly from WeWeb using the plugin, using a backend proxy like Xano is the recommended best practice for production apps. It enhances security by hiding your API key, allows for more complex server-side logic, and helps you manage long-running requests without front-end timeouts.
How do I handle slow responses from the OpenAI API?
First, show a loading indicator in your UI so the user knows the app is working. For potentially long requests, consider using a backend proxy (like Xano) where you can set a longer timeout. You can also add instructions in your system prompt to ask the AI for more concise answers to speed up generation time.
By mastering these concepts, you are well equipped to build sophisticated, AI-powered applications. The combination of WeWeb’s powerful visual editor and the intelligence of the ChatGPT API provides a modern stack for turning your ideas into reality. Ready to see what you can build?


