Frosty AI Make Integration Guide 🚀

Complete guide for using Frosty AI in Make

What is Frosty AI?

Frosty AI is an AI model routing and observability platform that helps you optimize AI model usage across multiple providers. With Frosty AI you can switch seamlessly between AI models, track costs, and improve performance, all while maintaining reliability through built-in fallbacks and failover handling.

This guide walks you through how to use the Frosty AI Make App to integrate AI-powered responses into your workflows.


🔗 Getting Started with Frosty AI in Make

Step 1: Sign Up & Set Up a Router

To use the Frosty AI Make Integration, you first need to create an account and set up a router.

✅ Sign up for Frosty AI: console.gofrosty.ai
✅ Create a router using the Quick Start Wizard

Once your router is created, copy your Router ID and Router Key from the Router Details page. These credentials will be required to connect Make to Frosty AI.


πŸ› οΈ Setting Up Frosty AI in Make

Step 2: Add the Frosty AI App in Make

  1. Open Make and create a new Scenario.

  2. Click Add Module and search for Frosty AI.

  3. Choose one of the available modules:

    • Frosty AI Chat (for sending AI prompts and getting responses)

    • Universal API Call (for advanced API access to Frosty AI)


Step 3: Connect Frosty AI to Make

When using any Frosty AI module, you must first create a connection:

  1. Click Add Connection when prompted.

  2. Enter your Router ID and Router Key (found in Frosty AI).

  3. Click Save to establish the connection.

Once connected, you won't need to enter these credentials again; Make stores them securely.


🤖 Using the Frosty AI Chat Module

The Frosty AI Chat module allows you to send prompts to your AI router and receive intelligent responses.

How to Use:

  1. Add the Frosty AI Chat module to your Scenario.

  2. Select your Connection (this will automatically use your Router ID & Router Key).

  3. Enter the Prompt (e.g., "Tell me a joke").

  4. Run the Scenario to get an AI-generated response.

πŸ“ Inputs (Chat Module)

Parameter

Description

Required?

prompt

The AI prompt to send through Frosty AI

βœ… Yes

rule

Route based on specific logic: cost , performance , or none by default

βœ… Yes

📤 Outputs (Chat Module)

| Field | Description |
| --- | --- |
| trace_id | Unique request identifier |
| total_tokens | Total tokens used (prompt + response) |
| model | AI model selected by Frosty AI |
| provider | AI provider (e.g., OpenAI, Anthropic) |
| cost | Estimated cost for the request |
| rule | Routing rule applied (e.g., "cost", "perf") |
| response | AI-generated response |
| success | True if successful, False otherwise |
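Downstream modules map these fields directly. As a rough sketch (every value below is invented sample data, not a real Frosty AI response), the output bundle can be treated as a plain dictionary:

```python
# Illustrative only: field names follow the Outputs table above, but the
# values are invented sample data, not a real API response.
sample_output = {
    "trace_id": "example-trace-id",
    "total_tokens": 42,
    "model": "gpt-4o",        # hypothetical model choice by the router
    "provider": "OpenAI",
    "cost": 0.0004,
    "rule": "cost",
    "response": "Why did the snowman smile? He saw the sun coming.",
    "success": True,
}

def summarize(output: dict) -> str:
    """One-line summary of a Chat module result using the documented fields."""
    if not output["success"]:
        return "request failed"
    return (f"{output['provider']}/{output['model']}: "
            f"{output['total_tokens']} tokens, rule={output['rule']}")
```

In Make you would reference the same fields (e.g., response, cost, success) when mapping the module's output into later steps.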

πŸ” Frosty AI supports both rule-based and auto-routing. You can manually route based on cost or performance, or enable Auto Router to dynamically select the best model using weighted scoring across success rate, cost, and latency.


🌎 Using the Universal API Call Module (Advanced Users)

If you need more flexibility, the Universal API Call module lets you send any request to the Frosty AI API.

How to Use:

  1. Add the Universal API Call module to your Scenario.

  2. Select your Connection (Router ID & Key are handled automatically).

  3. Enter:

    • Request URL (e.g., /chat)

    • HTTP Method (GET, POST, PUT, DELETE, PATCH)

    • Optional Headers

    • Optional Query Parameters (e.g., prompt=Hello, rule=performance)

    • Optional Body (for POST, PUT, PATCH)

  4. Run the Scenario to send the API request to Frosty AI.

πŸ“ Inputs (Universal API Call Module)

Parameter

Description

Required?

URL

Path relative to https://api.gofrosty.ai (e.g., /chat)

βœ… Yes

Method

HTTP method (GET, POST, PUT, DELETE, PATCH)

βœ… Yes

Headers

Custom headers (optional)

❌ No

Query Params

Query string parameters

βœ… Yes

Body

Request body for applicable methods (optional)

❌ No

📤 Outputs (Universal API Call Module)

| Field | Description |
| --- | --- |
| body | API response body |
| headers | Response headers |
| statusCode | HTTP response status code |

🚀 This module lets you access any Frosty AI feature beyond the default chat module.
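For a concrete picture, here is a minimal sketch of how the module's inputs map onto a request to the /chat endpoint. The base URL and path come from the Inputs table above; authentication is omitted because in Make the stored connection attaches the Router ID and Router Key for you:

```python
def build_chat_request(prompt: str, rule: str = "none",
                       method: str = "POST") -> dict:
    """Assemble the parts of a Universal API Call to the /chat endpoint.

    Mirrors the module's inputs: URL relative to the base, HTTP method,
    and query parameters. Auth headers are omitted here because the Make
    connection supplies the Router ID and Router Key.
    """
    base_url = "https://api.gofrosty.ai"  # from the Inputs table above
    return {
        "method": method,
        "url": f"{base_url}/chat",
        "params": {"prompt": prompt, "rule": rule},
    }

# Matches the example inputs from step 3: prompt=Hello, rule=performance
request = build_chat_request("Hello", rule="performance")
```

In Make itself the module sends the request for you; this sketch only shows how the fields fit together.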


🛑 Error Handling in Frosty AI

Frosty AI returns structured errors so you can handle failures gracefully.

| Status Code | Meaning | Possible Cause |
| --- | --- | --- |
| 401 | Unauthorized | Invalid Router ID or Key |
| 403 | Forbidden | Trial expired or access denied |
| 429 | Rate Limited | Too many requests in a short period |
| 502 | No Provider Available | All AI models failed or keys are invalid |
| 500 | Internal Error | Unknown server issue |

📌 How to handle errors in Make:

  • 401/403/429 → Notify the user or retry later.

  • 502 → Set up an alternative action if no AI provider is available.

  • 500 → Use Make's error handling tools to retry or log the issue.
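Condensed into code, the branching above looks like this; the returned labels are placeholders for whatever routes or actions your Scenario defines:

```python
def handle_status(status_code: int) -> str:
    """Map a Frosty AI status code (see the table above) to a follow-up
    action. The returned labels are placeholders for Scenario branches."""
    if status_code in (401, 403, 429):
        return "notify-or-retry-later"
    if status_code == 502:
        return "use-fallback-action"   # no AI provider available
    if status_code == 500:
        return "retry-or-log"          # unknown server issue
    return "ok"                        # 2xx and anything unlisted
```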


💡 Example Use Cases

  • Chatbots: Power AI-based chat responses in workflows.

  • Cost Optimization: Route requests to the cheapest available AI model.

  • Failover Handling: Automatically switch providers when a model fails.

  • Automated Content Creation: Generate AI-powered content inside Make.

  • A/B Testing for AI Models: Compare outputs from different AI providers.


🙌 Need Help?

For questions or support, reach out to us:

📩 Email: support@gofrosty.ai
🔗 Console: https://console.gofrosty.ai
📚 Docs: https://docs.gofrosty.ai


✨ Start Automating AI Workflows with Frosty AI in Make Today!
