Overview
The AI Inference Node lets you use large language models such as OpenAI's GPT models and Google's Gemini directly inside your workflows. Instead of sending data to external tools or building custom integrations, you can prompt an AI model as a workflow step and immediately use its output in downstream logic.
This article covers how to set up AI provider integrations in gaiia, how to add and configure the AI Inference Node, and how to use AI output in real-world automation scenarios.
Setting up AI integrations in gaiia
Before you can use the AI Inference Node, you need to connect an AI provider to your gaiia tenant. gaiia does not provide AI models directly, so you must bring your own API key.
- Navigate to Settings > Integrations
- Select either OpenAI or Google AI
- Enter your provider API key
- Save the integration
Once saved, the integration becomes available to select inside the AI Inference Node.
You are responsible for any usage costs billed by your AI provider.
Finding the AI Inference Node
You can add the AI Inference Node from the Workflow Builder’s node library.
- Open a workflow in the Builder
- Click Add Node
- Open the AI category
- Select AI Inference
Once added, the node behaves like any other workflow step and can be connected to upstream and downstream nodes.
Configuring the AI provider and model
Each AI Inference Node is configured independently, allowing you to choose which provider and model to use per workflow.
- Select your AI provider (OpenAI or Google Gemini)
- Choose a model to run the inference
- Confirm the integration is selected from your configured providers
During early usage, we recommend starting with smaller, faster models to reduce latency and control costs.
Writing prompts for AI inference
The AI Inference Node supports two prompt inputs that control how the model responds.
- User prompt — the instruction or question you want the model to respond to
- System prompt — optional context that sets the role or perspective of the model
Prompt fields can reference outputs from earlier workflow steps, allowing the AI to reason over real operational data.
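Conceptually, referencing upstream outputs works like template substitution: placeholders in the prompt are replaced with data produced by earlier steps before the prompt is sent to the model. The sketch below is illustrative only; the `{{step.field}}` placeholder syntax and the `get_notes` step name are assumptions, not gaiia's actual Builder syntax.

```python
import re

def render_prompt(template: str, step_outputs: dict) -> str:
    """Substitute {{step.field}} references with upstream step outputs.

    The {{...}} syntax is hypothetical, shown only to illustrate
    how a prompt can reason over real operational data.
    """
    def repl(match: re.Match) -> str:
        step, field = match.group(1), match.group(2)
        return str(step_outputs.get(step, {}).get(field, ""))

    return re.sub(r"\{\{(\w+)\.(\w+)\}\}", repl, template)

# Example: a hypothetical "get_notes" step produced customer notes
outputs = {"get_notes": {"text": "Customer reported slow speeds twice this month."}}
prompt = render_prompt(
    "Summarize these customer notes in one sentence: {{get_notes.text}}",
    outputs,
)
```

Missing references resolve to an empty string here; in practice you would likely want to fail loudly instead, so a broken reference does not silently produce a misleading prompt.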
Using AI output in a workflow
The AI Inference Node is most effective when used to add context and intelligence to workflows, not replace deterministic logic.
A common example workflow looks like this:
- Trigger the workflow on an account event
- Use a gaiia node to retrieve customer notes or interaction history
- Prompt the AI to summarize the notes or suggest next actions
- Use the AI output to create tickets, update records, or drive routing logic
Because the AI response is treated as structured workflow data, it can be transformed, evaluated, or passed into additional nodes as needed.
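If you prompt the model to respond in JSON, downstream nodes can branch on specific fields rather than raw text. Models sometimes wrap JSON in extra prose, however, so defensive parsing helps. This is a minimal sketch of that pattern outside gaiia; the field names (`action`, `needs_review`) are assumptions for illustration.

```python
import json

def parse_ai_output(raw: str) -> dict:
    """Parse a model response that was prompted to return JSON.

    Extracts the first {...} span in case the model added
    surrounding text, and falls back to a review flag when
    no valid JSON can be recovered.
    """
    start, end = raw.find("{"), raw.rfind("}")
    if start != -1 and end > start:
        try:
            return json.loads(raw[start:end + 1])
        except json.JSONDecodeError:
            pass
    return {"needs_review": True, "raw": raw}

clean = parse_ai_output('{"summary": "Repeated speed complaints", "action": "create_ticket"}')
messy = parse_ai_output('Sure! Here is the JSON: {"action": "escalate"}')
broken = parse_ai_output("I could not produce valid JSON.")
```

The fallback dictionary keeps the workflow moving: a routing node can send `needs_review` results to a human instead of failing the entire run.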
Understanding requirements and limitations
Before using the AI Inference Node in production workflows, review the following constraints.
- You must supply your own OpenAI or Google AI API key
- Usage costs are billed directly by your AI provider
- There are no built-in guardrails for output correctness
- Usage analytics and cost tracking are not yet available inside gaiia
AI-generated content can be unpredictable. We recommend human review for customer-facing or compliance-critical workflows.
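One way to operationalize human review is a simple guard step between the AI Inference Node and any customer-facing action. The sketch below is illustrative, not a gaiia feature; the policy checks and keyword list are placeholder assumptions you would replace with your own rules.

```python
def needs_human_review(ai_text: str, customer_facing: bool) -> bool:
    """Decide whether AI output should be routed to a review queue.

    Policy shown here is purely illustrative: always review
    customer-facing output, and flag internal output containing
    sensitive terms.
    """
    if customer_facing:
        return True  # customer-facing content always gets a human pass
    sensitive = ["guarantee", "refund", "legal", "lawsuit"]
    return any(word in ai_text.lower() for word in sensitive)
```

A workflow branch can then send flagged output to a ticket or approval step and let everything else proceed automatically.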