inference.sh

Hono

Set up the inference.sh proxy with Hono for edge and serverless deployments.


Installation

```bash
npm install @inferencesh/sdk hono
```

Basic Setup

```typescript
import { Hono } from "hono";
import { createRouteHandler } from "@inferencesh/sdk/proxy/hono";

const app = new Hono();
const proxyHandler = createRouteHandler();

app.all("/api/inference/proxy", proxyHandler);

export default app;
```

Environment Variable

Set your API key in your runtime environment:

```toml
# Cloudflare Workers (wrangler.toml)
[vars]
INFERENCE_API_KEY = "inf_your_key_here"
```

```bash
# Deno Deploy
export INFERENCE_API_KEY="inf_your_key_here"
```
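If the key is missing or mistyped, every proxied request will fail at runtime with an auth error. A small startup check can surface this earlier. The helper below is an illustrative sketch, not part of the SDK; the `inf_` prefix assumption comes from the example values above:

```typescript
// Hypothetical fail-fast check for the API key read from the environment.
// Throws at startup instead of letting requests fail later.
function assertApiKey(key: string | undefined): string {
  // The "inf_" prefix matches the example keys in this guide; adjust
  // the check if your keys use a different format.
  if (!key || !key.startsWith("inf_")) {
    throw new Error("INFERENCE_API_KEY is missing or malformed");
  }
  return key;
}

// Usage (Workers would read from env bindings instead):
// const apiKey = assertApiKey(process.env.INFERENCE_API_KEY);
```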

Cloudflare Workers

```typescript
// src/index.ts
import { Hono } from "hono";
import { createRouteHandler } from "@inferencesh/sdk/proxy/hono";

const app = new Hono();

const proxyHandler = createRouteHandler();
app.all("/api/inference/proxy", proxyHandler);

export default app;
```

Deploy:

```bash
npx wrangler deploy
```

Deno Deploy

```typescript
// main.ts
import { Hono } from "https://deno.land/x/hono/mod.ts";
import { createRouteHandler } from "@inferencesh/sdk/proxy/hono";

const app = new Hono();
const proxyHandler = createRouteHandler();

app.all("/api/inference/proxy", proxyHandler);

Deno.serve(app.fetch);
```

Client Configuration

```typescript
import { inference } from "@inferencesh/sdk";

const client = inference({
  proxyUrl: "https://my-worker.workers.dev/api/inference/proxy"
});

const result = await client.run({
  app: "infsh/flux",
  input: { prompt: "A sunset" }
});
```
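Calls that go through an edge proxy can occasionally fail transiently (cold starts, network blips). A small retry wrapper with exponential backoff around `client.run` can smooth this over. The helper below is a hedged sketch, not part of the SDK; `runOnce` stands in for any `() => client.run(...)` call:

```typescript
// Illustrative retry helper with exponential backoff.
// Retries `runOnce` up to `attempts` times, doubling the delay each time.
async function withRetry<T>(
  runOnce: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await runOnce();
    } catch (err) {
      lastError = err;
      // Wait baseDelayMs, 2x, 4x, ... between attempts.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Usage with the client from above:
// const result = await withRetry(() =>
//   client.run({ app: "infsh/flux", input: { prompt: "A sunset" } })
// );
```

Note that retries are only safe for requests you are comfortable running more than once.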

Custom API Key Resolution

Load from KV, secrets manager, or other sources:

```typescript
import { createRouteHandler } from "@inferencesh/sdk/proxy/hono";

const proxyHandler = createRouteHandler({
  resolveApiKey: async () => {
    // Load from Cloudflare KV, D1, or an external secrets service.
    // `env` here refers to the environment bindings in scope for your
    // runtime (e.g. Workers bindings).
    const key = await env.SECRETS.get("INFERENCE_API_KEY");
    return key;
  }
});
```
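Since one edge isolate can serve many requests, it is worth avoiding a secrets lookup on every call. One way to do that is to memoize the async lookup, so the store is hit at most once per isolate lifetime. This is an illustrative sketch; the `memoizeKey` helper is hypothetical and not part of the SDK:

```typescript
// Hypothetical caching wrapper around an async key lookup.
// `loadKey` stands in for the body of resolveApiKey above (e.g. a KV read).
function memoizeKey(loadKey: () => Promise<string>): () => Promise<string> {
  let cached: Promise<string> | undefined;
  return () => {
    // Cache the promise itself so concurrent callers share one lookup.
    if (!cached) cached = loadKey();
    return cached;
  };
}

// Usage: pass the memoized function as resolveApiKey.
// const resolveApiKey = memoizeKey(() => env.SECRETS.get("INFERENCE_API_KEY"));
```

Caching the promise rather than the value also prevents a thundering herd of parallel lookups during a burst of cold-start traffic.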

With Middleware

```typescript
import { Hono } from "hono";
import { cors } from "hono/cors";
import { rateLimiter } from "hono-rate-limiter";
import { createRouteHandler } from "@inferencesh/sdk/proxy/hono";

const app = new Hono();

// CORS
app.use("/api/*", cors({
  origin: "https://myapp.com",
  allowHeaders: ["Content-Type", "x-inf-target-url"],
}));

// Rate limiting
app.use("/api/*", rateLimiter({
  windowMs: 15 * 60 * 1000,
  limit: 100,
}));

// Proxy
const proxyHandler = createRouteHandler();
app.all("/api/inference/proxy", proxyHandler);

export default app;
```
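For intuition, the rate limiter above enforces a per-client request budget within a time window. A minimal fixed-window version of that logic looks roughly like this (illustrative only; `hono-rate-limiter` is more featureful, and the `makeLimiter` helper here is hypothetical):

```typescript
// Minimal fixed-window rate limiter: allow `limit` requests per `windowMs`
// for each key (e.g. a client IP), then reject until the window resets.
function makeLimiter(limit: number, windowMs: number) {
  const windows = new Map<string, { start: number; count: number }>();
  return (key: string, now: number): boolean => {
    const w = windows.get(key);
    if (!w || now - w.start >= windowMs) {
      // New client or expired window: start fresh and allow the request.
      windows.set(key, { start: now, count: 1 });
      return true;
    }
    w.count++;
    return w.count <= limit;
  };
}
```

On multi-isolate edge runtimes, in-memory state like this is per-isolate, which is one reason production limiters back their counters with KV or a similar shared store.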

Why Hono?

Hono is ideal for:

  • Edge deployments (Cloudflare Workers, Vercel Edge)
  • Minimal bundle size (~12kb)
  • Fast cold starts for serverless
  • Multi-runtime support (Node, Deno, Bun, Workers)
