What is Frosty AI?
Frosty AI is an AI model routing and observability platform that helps you optimize AI model usage across multiple providers. With Frosty, you can:
- Route prompts across OpenAI, Anthropic, Mistral, and more
- Automatically select models based on cost, performance, or fallback
- Track usage, costs, latency, and success rates
- Reduce vendor lock-in and improve reliability with failover logic
This guide walks you through how to use the Frosty AI community node in n8n.
Getting Started with Frosty AI in n8n
To use the Frosty AI n8n integration, you need:
- A Frosty AI account: sign up at console.gofrosty.ai
- A router set up in the Frosty console: use the Quick Start Wizard to create your first router
- A self-hosted (locally running) instance of n8n: follow the n8n self-hosting docs to get started
Installing Frosty AI in n8n
Note: The Frosty AI node works with self-hosted n8n instances only. Community nodes are not currently supported in n8n Cloud.
Install the Community Node
In your self-hosted n8n environment, run:
```
n8n install n8n-nodes-frosty-ai
```
Then restart n8n and open the editor.
Using the Frosty AI Node in a Workflow
Example: Route a prompt and return an AI response
This basic workflow sends a user prompt through your Frosty router and returns the response.
Workflow steps:
- Webhook Trigger: accepts a POST request with a prompt
- Frosty AI Node: sends the prompt to your router
- Respond to Webhook: returns the AI-generated reply

[Webhook Trigger] → [Frosty AI Node] → [Respond to Webhook]
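A client triggers this workflow by POSTing a prompt to the workflow's webhook URL. A minimal sketch of building that request in Python; the URL and payload shape are illustrative assumptions, not Frosty-specific values:

```python
import json
import urllib.request

# Hypothetical n8n webhook URL -- replace with your workflow's actual URL.
WEBHOOK_URL = "http://localhost:5678/webhook/frosty-chat"

def build_request(prompt: str) -> urllib.request.Request:
    """Build the POST request the Webhook Trigger node would receive."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Summarize today's support tickets.")
```

Sending `req` with `urllib.request.urlopen(req)` against a running workflow would return whatever the Respond to Webhook node emits.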
Setting Up Credentials
When prompted by the Frosty AI node:
- Click Add New to create a credential
- Paste in your Router ID and Router Key
- Click Save
n8n will securely store your credentials for future use.
Inputs (Frosty AI Chat Operation)
| Parameter | Description | Required |
|---|---|---|
| | The AI prompt to send | Yes |
| | Routing logic (cost, performance, or fallback) | Yes |
Outputs (Frosty AI Node)
| Field | Description |
|---|---|
| | Unique request identifier |
| | AI model used |
| | AI provider (e.g., OpenAI, Anthropic) |
| | AI-generated response |
| | Estimated cost for the request |
| | Total tokens used (input + output) |
| | True if the request was successful |
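Downstream nodes (for example, a Code node or a logger) can read these fields from each output item. A minimal sketch that builds a one-line log entry; the lowercase field names (`model`, `cost`, `total_tokens`, and so on) are assumptions for illustration, not confirmed Frosty output keys:

```python
def summarize_response(item: dict) -> str:
    """Build a one-line log entry from a Frosty AI node output item.
    The field names below are illustrative assumptions."""
    return (
        f"model={item.get('model', '?')} "
        f"provider={item.get('provider', '?')} "
        f"cost=${item.get('cost', 0):.4f} "
        f"tokens={item.get('total_tokens', 0)} "
        f"success={item.get('success', False)}"
    )

example = {
    "model": "gpt-4o",
    "provider": "OpenAI",
    "cost": 0.0021,
    "total_tokens": 184,
    "success": True,
}
line = summarize_response(example)
```

Using `.get()` with defaults keeps the logger from crashing if a field is missing from a particular response.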
Error Handling
Frosty AI returns structured errors to help with workflow reliability:
| Status Code | Meaning |
|---|---|
| | Unauthorized: check Router ID/Key |
| | Access denied: trial expired or access removed |
| | Rate limit: too many requests |
| | No provider available: all models failed |
| | Internal error: unknown server issue |
Use n8n's built-in error handling tools to retry, fall back, or log failed executions.
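A retry policy typically branches on the status code: transient failures (rate limits, provider outages) are worth retrying with backoff, while auth errors are not. A minimal sketch, assuming conventional HTTP status semantics (e.g. 429 for rate limiting, 5xx for server-side failures) rather than documented Frosty codes:

```python
def is_retryable(status_code: int) -> bool:
    """Transient errors are worth retrying; auth/access errors are not.
    Assumes conventional HTTP status semantics."""
    if status_code == 429:           # rate limited: back off and retry
        return True
    if 500 <= status_code < 600:     # server-side or provider failure
        return True
    return False                     # e.g. 401/403: fix credentials instead

def backoff_seconds(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff with a cap: 1s, 2s, 4s, ... up to `cap`."""
    return min(cap, base * (2 ** attempt))
```

The same decision can be expressed in an n8n IF node on the error output, with a Wait node implementing the backoff.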
Example Use Cases
- Smart auto-replies in forms and webhooks
- Cost-based routing for model selection
- Failover across multiple LLM providers
- Log and analyze prompt behavior over time
- A/B test different AI providers
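The failover idea is simple to reason about: try providers in priority order and return the first success. Frosty's router does this server-side; the sketch below just illustrates the logic with hypothetical stand-in provider functions:

```python
def route_with_failover(prompt, providers):
    """Try each (name, call_fn) pair in order; return the first success.
    `call_fn` is any callable that returns a response string or raises."""
    errors = {}
    for name, call_fn in providers:
        try:
            return name, call_fn(prompt)
        except Exception as exc:  # real code would catch narrower errors
            errors[name] = str(exc)
    raise RuntimeError(f"All providers failed: {errors}")

# Toy stand-ins for real provider calls:
def flaky(prompt):
    raise TimeoutError("upstream timeout")

def healthy(prompt):
    return f"echo: {prompt}"

winner, reply = route_with_failover("hi", [("openai", flaky), ("anthropic", healthy)])
```

Collecting the per-provider errors before raising makes the all-failed case easy to log and debug.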
Need Help?
For questions or support, reach out to us:
- Email: support@gofrosty.ai
- Console: https://console.gofrosty.ai
- Docs: https://docs.gofrosty.ai
Start building smarter AI workflows with Frosty AI + n8n today!