
Vercel

Deploy inference.sh apps to Vercel with automatic API key management.


Quick Setup with Vercel Integration

The recommended way to connect your Vercel app to inference.sh:

  1. Go to Vercel Integrations and search for inference.sh
  2. Click Add Integration
  3. Select the projects to connect
  4. Follow the authorization flow

Once connected, your INFERENCE_API_KEY environment variable is automatically configured.
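To confirm the key was injected, you can run a quick check locally or in a build step. The helper below is illustrative, not part of the SDK; it reports whether the variable is present and prints only a masked prefix so the full key never lands in logs.

```typescript
// Illustrative check: report whether INFERENCE_API_KEY is present,
// masking the value so it never appears in logs or build output.
export function describeKey(env: Record<string, string | undefined>): string {
  const key = env.INFERENCE_API_KEY;
  if (!key) return "INFERENCE_API_KEY is not set";
  return `INFERENCE_API_KEY is set: ${key.slice(0, 4)}… (${key.length} chars)`;
}

console.log(describeKey(process.env));
```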


Manual Setup

If you prefer manual configuration:

1. Create an API Key

  1. Go to Settings → API Keys
  2. Click Create API Key
  3. Copy your key (starts with inf_)

2. Add to Vercel Environment

  1. Go to your project in the Vercel dashboard
  2. Navigate to Settings → Environment Variables
  3. Add a new variable:
    • Name: INFERENCE_API_KEY
    • Value: Your API key
    • Environments: Production, Preview, Development
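The same variable can also be added from the command line with the Vercel CLI (assuming it is installed and linked to your project):

```shell
# Add the key to each environment; every command prompts for the value.
vercel env add INFERENCE_API_KEY production
vercel env add INFERENCE_API_KEY preview
vercel env add INFERENCE_API_KEY development
```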

3. Redeploy

Redeploy your application to pick up the new environment variable.


Project Setup

Next.js App Router

```bash
# Install the SDK
npm install @inferencesh/sdk
```

Create the proxy route at app/api/inference/proxy/route.ts:

```typescript
import { route } from "@inferencesh/sdk/proxy/nextjs";

export const { GET, POST, PUT } = route;
```

Configure the client in your frontend:

```typescript
"use client";

import { inference } from "@inferencesh/sdk";

const client = inference({
  proxyUrl: "/api/inference/proxy"
});

export async function generateImage(prompt: string) {
  const result = await client.run({
    app: "infsh/flux",
    input: { prompt }
  });

  return result.output;
}
```

Next.js Page Router

Create the proxy at pages/api/inference/proxy.ts:

```typescript
export { handler as default } from "@inferencesh/sdk/proxy/nextjs";
```

Edge Runtime

The proxy works with Vercel Edge Functions:

```typescript
// app/api/inference/proxy/route.ts
import { route } from "@inferencesh/sdk/proxy/nextjs";

export const runtime = "edge";
export const { GET, POST, PUT } = route;
```

Environment Variables

| Variable | Description | Required |
| --- | --- | --- |
| `INFERENCE_API_KEY` | Your inference.sh API key | Yes |

The proxy automatically reads INFERENCE_API_KEY from your environment.
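If you prefer the route to fail fast with a clear message rather than surface a downstream auth error, a small guard can sit at the top of the route file. This helper is a sketch, not SDK behavior; the thrown message mirrors the error described under Troubleshooting.

```typescript
// Sketch: fail fast when the key is absent. Placing this in the proxy
// route file is a suggestion, not a requirement of the SDK.
export function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.INFERENCE_API_KEY;
  if (!key) {
    throw new Error("Missing INFERENCE_API_KEY environment variable");
  }
  return key;
}
```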


Preview Deployments

Each Vercel preview deployment can use the same API key. Requests are authenticated on your server, so the key is never exposed to the browser.

For different keys per environment:

  1. Go to Settings → Environment Variables
  2. Set different values for Production vs Preview
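Vercel also exposes the current environment as the built-in VERCEL_ENV variable ("production", "preview", or "development"), so your code can branch on it if needed. The helper below is a sketch, not something the SDK provides:

```typescript
// Sketch: normalize Vercel's built-in VERCEL_ENV to a known label,
// e.g. to tag requests or pick a logging level per environment.
type VercelEnv = "production" | "preview" | "development";

export function environmentLabel(env: string | undefined): VercelEnv {
  if (env === "production" || env === "preview") return env;
  return "development"; // local `next dev` runs without VERCEL_ENV
}
```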

Troubleshooting

"Missing INFERENCE_API_KEY environment variable"

The proxy couldn't find your API key. Check that:

  • The variable is named exactly INFERENCE_API_KEY
  • You've redeployed after adding the variable
  • The environment (Production/Preview) matches your deployment

CORS Errors

If calling the proxy from a different domain, you may need to add CORS headers:

```typescript
// app/api/inference/proxy/route.ts
import { route } from "@inferencesh/sdk/proxy/nextjs";
import { NextResponse } from "next/server";

export async function OPTIONS() {
  return new NextResponse(null, {
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, POST, PUT, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type, x-inf-target-url",
    },
  });
}

export const { GET, POST, PUT } = route;
```

Timeout Errors

For long-running tasks, consider:

  • Using streaming for real-time updates
  • Increasing function timeout in vercel.json:
```json
{
  "functions": {
    "app/api/inference/proxy/route.ts": {
      "maxDuration": 60
    }
  }
}
```

Example Project

See a complete Next.js + Vercel example:

```bash
npx create-next-app my-inference-app
cd my-inference-app
npm install @inferencesh/sdk
```

Then follow the Server Proxy guide to set up the proxy route.


