Frosty AI n8n Integration Guide πŸš€

Complete guide for using Frosty AI inside your self-hosted n8n workflows.


πŸ’‘ What is Frosty AI?

Frosty AI is an AI model routing and observability platform that helps you optimize AI model usage across multiple providers. With Frosty, you can:

  • Route prompts across OpenAI, Anthropic, Mistral, and more

  • Automatically select models based on cost or performance, or fall back when a provider fails

  • Track usage, costs, latency, and success rates

  • Reduce vendor lock-in and improve reliability with failover logic

This guide walks you through how to use the Frosty AI community node in n8n.


πŸ”— Getting Started with Frosty AI in n8n

To use the Frosty AI n8n integration, you need:

βœ… A Frosty AI account β†’ Sign up at console.gofrosty.ai
βœ… A router set up in the Frosty console β†’ Use the Quick Start Wizard to create your first router
βœ… A self-hosted (locally running) instance of n8n β†’ Follow the n8n self-hosting docs to get started


πŸ› οΈ Installing Frosty AI in n8n

⚠️ The Frosty AI node works with self-hosted n8n instances only. Community nodes are not currently supported in n8n Cloud.

Install the Community Node

In your self-hosted n8n instance, open Settings β†’ Community Nodes, select Install, and enter the package name n8n-nodes-frosty-ai. Alternatively, install the package manually from your n8n user folder (~/.n8n/nodes by default):

npm install n8n-nodes-frosty-ai

Then restart n8n and open the editor; the Frosty AI node should now appear in the node search panel.


πŸ€– Using the Frosty AI Node in a Workflow

Example: Route a prompt and return an AI response

This basic workflow sends a user prompt through your Frosty router and returns the response.

Workflow steps:

  1. Webhook Trigger – Accepts a POST request with a prompt

  2. Frosty AI Node – Sends the prompt to your router

  3. Respond to Webhook – Returns the AI-generated reply

[Webhook Trigger] β†’ [Frosty AI Node] β†’ [Respond to Webhook]
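
To try it, send a POST request to the Webhook node's URL. Here is a minimal curl example, assuming n8n's default port 5678 and a hypothetical webhook path of frosty-chat (copy the real URL from your Webhook node):

curl -X POST http://localhost:5678/webhook/frosty-chat \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Write a friendly one-line greeting."}'

In the Frosty AI node, map the incoming field into the prompt parameter with an expression such as {{ $json.body.prompt }} (the exact path depends on how your Webhook node is configured).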

πŸ” Setting Up Credentials

When prompted by the Frosty AI node:

  1. Click Add New to create a credential

  2. Paste in your Router ID and Router Key

  3. Click Save

n8n will securely store your credentials for future use.


πŸ“ Inputs (Frosty AI Chat Operation)

| Parameter | Description                               | Required |
| --------- | ----------------------------------------- | -------- |
| prompt    | The AI prompt to send                     | βœ… Yes   |
| rule      | Routing logic: cost, performance, or none | βœ… Yes   |
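
For example, a Chat operation configured with these values (placeholder data, not a real request) would send:

{
  "prompt": "Summarize this support ticket in two sentences.",
  "rule": "cost"
}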


πŸ“€ Outputs (Frosty AI Node)

| Field        | Description                           |
| ------------ | ------------------------------------- |
| trace_id     | Unique request identifier             |
| model        | AI model used                         |
| provider     | AI provider (e.g., OpenAI, Anthropic) |
| response     | AI-generated response                 |
| cost         | Estimated cost for the request        |
| total_tokens | Total tokens used (input + output)    |
| success      | True if the request was successful    |
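
A successful execution produces one item per request with these fields. An illustrative (not real) output item:

{
  "trace_id": "abc123-example",
  "model": "gpt-4o",
  "provider": "OpenAI",
  "response": "Hello! How can I help today?",
  "cost": 0.00042,
  "total_tokens": 128,
  "success": true
}

Downstream nodes can reference these values with expressions like {{ $json.response }} or {{ $json.cost }}.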


πŸ›‘ Error Handling

Frosty AI returns structured errors to help with workflow reliability:

| Status Code | Meaning                                          |
| ----------- | ------------------------------------------------ |
| 401         | Unauthorized – check Router ID/Key               |
| 403         | Access Denied – trial expired or access removed  |
| 429         | Rate Limit – too many requests                   |
| 500         | Internal Error – unknown server issue            |
| 502         | No Provider Available – all models failed        |

Use n8n’s built-in error handling to retry failed requests, route failures to a fallback branch, or log failed executions; one pattern is sketched below.
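
A simple pattern (a sketch, not the only option): enable Retry On Fail in the Frosty AI node's settings, then add an IF node after it with a boolean condition on the success output field:

{{ $json.success }}

Items where success is false can then be routed to a logging or fallback branch, while successful responses continue on to Respond to Webhook.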


πŸ’‘ Example Use Cases

  • πŸ€– Smart auto-replies in forms and webhooks

  • πŸ’° Cost-based routing for model selection

  • πŸ›‘οΈ Failover across multiple LLM providers

  • πŸ“Š Log and analyze prompt behavior over time

  • 🧠 A/B test different AI providers


πŸ™Œ Need Help?

For questions or support, reach out to us:

πŸ“© Email: support@gofrosty.ai
πŸ”— Console: https://console.gofrosty.ai
πŸ“š Docs: https://docs.gofrosty.ai


✨ Start building smarter AI workflows with Frosty AI + n8n today!
