AI Features

AI in Your Apps

Your Webase apps aren't just static pages — they can think. With built-in AI capabilities, your generated apps can complete text, hold conversations, and run custom AI workflows, all without you writing a single line of backend code.

No API keys needed to get started. Webase provides a built-in AI endpoint that your apps can call directly. AI usage is tracked per app and billed through your Webase plan or your own API keys (BYOK).

What Can AI Do in Your App?

Once your app is generated, it can use AI at runtime for a wide variety of tasks:

  • Text completion — Generate product descriptions, summarize long content, draft emails, or create any text on demand.
  • Chat conversations — Build chatbots, customer support assistants, or interactive tutors with multi-turn conversation support.
  • Custom AI services — Define structured workflows like sentiment analysis, content classification, or data extraction that run against your app's data.

The AI Completion Endpoint

Your app can call the AI endpoint to get completions. The endpoint is:

POST /api/v1/apps/:app_slug/ai/complete

Send a prompt and get back an AI-generated response. You can include a system message to guide the AI's behavior, choose which model to use, and control the response length.
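To make the request shape concrete, here is a small sketch of a helper that assembles a call to the completion endpoint. The field names (`prompt`, `system`, `model`, `max_tokens`) are illustrative assumptions, not confirmed parameter names:

```javascript
// Sketch: assemble a request for the AI completion endpoint.
// NOTE: the body field names below are assumptions for illustration.
function buildCompletionRequest(appSlug, prompt, options = {}) {
  return {
    url: `/api/v1/apps/${appSlug}/ai/complete`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      prompt,                        // the text to complete
      system: options.system,        // optional system message to guide behavior
      model: options.model,          // optional model choice
      max_tokens: options.maxTokens, // optional cap on response length
    }),
  };
}

const req = buildCompletionRequest("my-shop", "Describe this product.", {
  system: "You are a concise copywriter.",
  model: "gpt-4o-mini",
  maxTokens: 200,
});
```

In a generated app you would normally let the built-in helper handle this for you; the sketch only shows what a raw call might look like.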

Chat Sessions

For multi-turn conversations, Webase manages chat sessions automatically. When your app starts a conversation, a session ID is created. Send that session ID with subsequent messages, and the AI remembers the full conversation history.

  • Starting a session — Make your first AI call without a session ID. The response includes a new session ID you can store.
  • Continuing a session — Include the session ID in follow-up requests. The AI sees all previous messages in the conversation.
  • Managing sessions — Sessions persist across page reloads. You can clear a session to start a fresh conversation.
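The lifecycle above can be sketched as a small client-side wrapper. This assumes the response carries a `session_id` field (an assumption) and uses browser-style storage so the session survives page reloads:

```javascript
// Sketch of the session lifecycle described above.
// ASSUMPTION: the AI response includes a session_id field; the exact
// field name is illustrative.
class ChatSession {
  constructor(storage) {
    this.storage = storage; // e.g. window.localStorage in a browser
  }

  // Attach the stored session ID (if any) to an outgoing message payload.
  attach(payload) {
    const id = this.storage.getItem("ai_session_id");
    return id ? { ...payload, session_id: id } : payload;
  }

  // After the first call, remember the session ID the response returned.
  remember(response) {
    if (response.session_id) {
      this.storage.setItem("ai_session_id", response.session_id);
    }
  }

  // Clear the session to start a fresh conversation.
  reset() {
    this.storage.removeItem("ai_session_id");
  }
}
```

The first `attach` sends no session ID (starting a session); once `remember` has stored the returned ID, every later `attach` includes it (continuing the session); `reset` starts over.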

Available Models and Pricing

Webase supports multiple AI models from OpenAI and Anthropic. Each model has different capabilities and pricing:

  • GPT-4o — OpenAI's flagship model. Best for complex reasoning and detailed responses. Higher cost per request.
  • GPT-4o Mini — A smaller, faster, and more affordable OpenAI model. Great for simpler tasks like text formatting or quick answers.
  • Claude Sonnet — Anthropic's balanced model. Strong at analysis and creative writing. Mid-range pricing.
  • Claude Haiku — Anthropic's fastest and most affordable model. Ideal for high-volume, straightforward tasks.

Tip: Start with a smaller model like GPT-4o Mini or Claude Haiku for simple tasks. Use the larger models only when you need more sophisticated reasoning. This keeps your costs down.
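One way to apply this tip in code is to route requests to a model by task type. The model identifiers and task names below are illustrative assumptions, not confirmed API values:

```javascript
// Sketch: choose a cheaper model for simple tasks, a larger one otherwise.
// ASSUMPTION: model identifiers ("gpt-4o-mini", "gpt-4o") are illustrative.
function pickModel(task) {
  const simpleTasks = ["format", "classify", "quick-answer"];
  // Simple, high-volume work goes to the smaller, cheaper model.
  return simpleTasks.includes(task) ? "gpt-4o-mini" : "gpt-4o";
}
```

A few lines like this up front can meaningfully cut AI spend without touching the rest of your app.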

Budget Tracking and Limits

Every AI call your app makes is metered against your app's AI budget. You can monitor spending in the editor toolbar, which shows token usage and costs in real time. If your app exceeds its budget, AI requests return a 402 Payment Required error until the budget is increased or reset.
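Your app should handle the 402 case gracefully rather than surfacing a raw failure. A minimal sketch, assuming a plain `{ status, body }` response object (the message text is illustrative):

```javascript
// Sketch: handle the 402 Payment Required budget error described above.
function handleAIResponse(res) {
  if (res.status === 402) {
    // Budget exhausted: show a friendly message instead of failing silently.
    return { ok: false, message: "This app's AI budget has been reached." };
  }
  return { ok: true, body: res.body };
}
```

Checking for 402 explicitly lets your UI distinguish "out of budget" from a genuine error.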

See AI Budget Management for details on setting limits and alerts.

Quick Example

Here is how a generated app might call the AI endpoint to summarize a piece of text. Inside your app's code, it looks like this:

const response = await DataService.completeAI({ prompt: "Summarize this article in 3 bullet points: ..." })

The DataService helper is automatically available in every generated app. It handles authentication, session management, and error handling for you. Just provide your prompt and use the response.

Next Steps