What is Frosty AI?
Frosty AI is an AI model routing and observability platform that helps you optimize AI model usage across multiple providers. With Frosty AI, you can seamlessly switch between AI models, track costs, and improve performance, all while maintaining reliability with built-in fallbacks and failover handling.
This guide walks you through how to use the Frosty AI Make App to integrate AI-powered responses into your workflows.
Getting Started with Frosty AI in Make
Step 1: Sign Up & Set Up a Router
To use the Frosty AI Make Integration, you first need to create an account and set up a router.
1. Sign up for Frosty AI: console.gofrosty.ai
2. Create a router using the Quick Start Wizard
Once your router is created, copy your Router ID and Router Key from the Router Details page. These credentials will be required to connect Make to Frosty AI.
Setting Up Frosty AI in Make
Step 2: Add the Frosty AI App in Make
1. Open Make and create a new Scenario.
2. Click Add Module and search for Frosty AI.
3. Choose one of the available modules:
   - Frosty AI Chat (for sending AI prompts and getting responses)
   - Universal API Call (for advanced API access to Frosty AI)
Step 3: Connect Frosty AI to Make
When using any Frosty AI module, you must first create a connection:
1. Click Add Connection when prompted.
2. Enter your Router ID and Router Key (found in Frosty AI).
3. Click Save to establish the connection.
Once connected, you won't need to enter these credentials again; Make will securely store them.
Using the Frosty AI Chat Module
The Frosty AI Chat module allows you to send prompts to your AI router and receive intelligent responses.
How to Use:
1. Add the Frosty AI Chat module to your Scenario.
2. Select your Connection (this will automatically use your Router ID & Router Key).
3. Enter the Prompt (e.g., "Tell me a joke").
4. Run the Scenario to get an AI-generated response.
Inputs (Chat Module)

| Parameter | Description | Required? |
|---|---|---|
| prompt | The AI prompt to send through Frosty AI | Yes |
| rule | Routing rule to apply (e.g., "cost", "perf") | Yes |
Outputs (Chat Module)

| Field | Description |
|---|---|
| trace_id | Unique request identifier |
| total_tokens | Total tokens used (prompt + response) |
| model | AI model selected by Frosty AI |
| provider | AI provider (e.g., OpenAI, Anthropic) |
| cost | Estimated cost for the request |
| rule | Routing rule applied (e.g., "cost", "perf") |
| response | AI-generated response |
| success | Whether the request completed successfully |
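When you map these output fields into later modules, it helps to gate on the success flag before using the response. In code form, the handling might look like this (the field names follow the table above; the sample values are made up for illustration):

```python
# A sample output bundle from the Frosty AI Chat module, shaped like the
# fields documented above. All values here are illustrative.
output = {
    "trace_id": "abc-123",
    "total_tokens": 42,
    "model": "gpt-4o-mini",
    "provider": "OpenAI",
    "cost": 0.00021,
    "rule": "cost",
    "response": "Here is your joke...",
    "success": True,
}

def extract_response(bundle):
    # Only trust the response text when the router reports success; surface
    # the trace_id so the failure can be looked up in Frosty AI observability.
    if not bundle.get("success"):
        raise RuntimeError(f"Request failed (trace_id={bundle.get('trace_id')})")
    return bundle["response"]

print(extract_response(output))
```

In Make itself you would express the same check with a filter or router on the `success` field rather than code.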
Frosty AI supports both rule-based and auto-routing. You can manually route based on cost or performance, or enable Auto Router to dynamically select the best model using weighted scoring across success rate, cost, and latency.
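The idea behind weighted scoring can be pictured roughly as follows. This is an illustrative sketch only: the weights, field names, and normalization are assumptions, not Frosty AI's actual formula.

```python
# Illustrative sketch of weighted scoring across success rate, cost, and
# latency. Weights and normalization are assumptions for illustration.

def score(model, w_success=0.5, w_cost=0.3, w_latency=0.2):
    # Higher success rate is better; lower (normalized) cost and latency
    # are better, so invert those into "goodness" terms in the 0..1 range.
    return (
        w_success * model["success_rate"]
        + w_cost * (1.0 - model["norm_cost"])
        + w_latency * (1.0 - model["norm_latency"])
    )

candidates = [
    {"name": "model-a", "success_rate": 0.99, "norm_cost": 0.8, "norm_latency": 0.6},
    {"name": "model-b", "success_rate": 0.97, "norm_cost": 0.1, "norm_latency": 0.2},
]

best = max(candidates, key=score)
print(best["name"])  # model-b wins on cost and latency despite slightly lower success rate
```

Tuning the weights shifts the trade-off: raising `w_success` favors the most reliable model even when it is pricier or slower.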
Using the Universal API Call Module (Advanced Users)
If you need more flexibility, the Universal API Call module lets you send any request to the Frosty AI API.
How to Use:
1. Add the Universal API Call module to your Scenario.
2. Select your Connection (Router ID & Key are handled automatically).
3. Enter:
   - Request URL (e.g., `/chat`)
   - HTTP Method (GET, POST, PUT, DELETE, PATCH)
   - Optional Headers
   - Optional Query Parameters (e.g., `prompt=Hello`, `rule=performance`)
   - Optional Body (for POST, PUT, PATCH)
4. Run the Scenario to send the API request to Frosty AI.
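Under the hood, the module assembles these inputs into an HTTP request. A minimal sketch of that assembly, assuming a placeholder base URL (Make supplies the real Frosty AI base URL and your Router ID/Key from the saved connection):

```python
from urllib.parse import urlencode

# BASE_URL is a placeholder for illustration, not the real Frosty AI
# base URL; Make resolves the actual host from your connection.
BASE_URL = "https://frosty.example"

def build_request(path, method="GET", params=None, headers=None, body=None):
    # Combine the relative path with any query parameters, mirroring the
    # URL / Method / Headers / Query Params / Body inputs listed above.
    query = f"?{urlencode(params)}" if params else ""
    return {
        "url": f"{BASE_URL}{path}{query}",
        "method": method,
        "headers": headers or {},
        "body": body,
    }

req = build_request("/chat", params={"prompt": "Hello", "rule": "performance"})
print(req["url"])  # https://frosty.example/chat?prompt=Hello&rule=performance
```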
Inputs (Universal API Call Module)
| Parameter | Description | Required? |
|---|---|---|
| URL | Path relative to the Frosty AI API base URL (e.g., `/chat`) | Yes |
| Method | HTTP method (GET, POST, PUT, DELETE, PATCH) | Yes |
| Headers | Custom headers (optional) | No |
| Query Params | Query string parameters (optional) | No |
| Body | Request body for applicable methods (optional) | No |
Outputs (Universal API Call Module)

| Field | Description |
|---|---|
| body | API response body |
| headers | Response headers |
| statusCode | HTTP response status code |
This module lets you access any Frosty AI feature beyond the default chat module.
Error Handling in Frosty AI
Frosty AI returns structured errors so you can handle failures gracefully.
| Status Code | Meaning | Possible Cause |
|---|---|---|
| 401 | Unauthorized | Invalid Router ID or Key |
| 403 | Forbidden | Trial expired or access denied |
| 429 | Rate Limited | Too many requests in a short period |
| 502 | No Provider Available | All AI models failed or keys are invalid |
| 500 | Internal Error | Unknown server issue |
How to handle errors in Make:

- 401/403/429: Notify the user or retry later.
- 502: Set up an alternative action if no AI provider is available.
- 500: Use Make's error handling tools to retry or log the issue.
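The recommendations above amount to a simple dispatch on the status code. A sketch of that dispatch (the action names are illustrative; in Make you would model these as error handler routes rather than code):

```python
# Map Frosty AI error status codes to the recommended handling actions.
# Action names here are illustrative labels, not Make module names.

def handle_status(code):
    if code in (401, 403, 429):
        return "notify_or_retry_later"   # auth problems or rate limiting
    if code == 502:
        return "use_fallback_action"     # no AI provider available
    if code >= 500:
        return "retry_or_log"            # unknown server issue
    return "ok"

print(handle_status(429))  # notify_or_retry_later
```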
Example Use Cases

- Chatbots: Power AI-based chat responses in workflows.
- Cost Optimization: Route requests to the cheapest available AI model.
- Failover Handling: Automatically switch providers when a model fails.
- Automated Content Creation: Generate AI-powered content inside Make.
- A/B Testing for AI Models: Compare outputs from different AI providers.
Need Help?

For questions or support, reach out to us:

Email: support@gofrosty.ai
Console: https://console.gofrosty.ai
Docs: https://docs.gofrosty.ai