The official inference.sh SDKs for Python and JavaScript/TypeScript.
Installation
```bash
# Installation
pip install inferencesh

# With async support (quoted so zsh does not expand the brackets)
pip install "inferencesh[async]"
```
Quick Start
```python
from inferencesh import inference

client = inference(api_key="inf_your_key")

# Run an app and wait for result
result = client.run({
    "app": "infsh/flux",
    "input": {"prompt": "A sunset over mountains"}
})

print(result["output"])
```
What's in the SDK?
Running Apps
Execute AI apps on inference.sh infrastructure.
- Running Apps — Basic task execution
- Streaming — Real-time progress updates
- Files — File uploads and downloads
Agent SDK
Build and interact with AI agents programmatically.
- Agent SDK Overview — Introduction
- Template Agents — Use existing agents
- Ad-hoc Agents — Create agents on-the-fly
- Building Tools — Define custom tools
Server Proxy
Protect API keys in frontend applications.
- Server Proxy — Proxy setup for all frameworks
- Vercel Deployment — Deploy with Vercel integration
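As a rough illustration of the proxy idea, here is a minimal sketch in Python using only the standard library: the browser calls your server, and your server attaches the API key before forwarding the request. The upstream base URL (`https://api.inference.sh`) and the `Bearer` authorization scheme are assumptions for illustration; see the Server Proxy guide for the exact, framework-specific setup.

```python
# Minimal server-side proxy sketch: the API key never reaches the browser.
# UPSTREAM and the Authorization header format are assumptions, not the
# documented inference.sh values.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request


UPSTREAM = "https://api.inference.sh"  # assumed base URL


def inject_auth(headers, api_key):
    """Return a copy of headers with the server-held key attached."""
    out = dict(headers)
    out["Authorization"] = f"Bearer {api_key}"  # assumed auth scheme
    return out


class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the browser's request body and forward it upstream,
        # adding the secret key from the server's environment.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        upstream_req = request.Request(
            UPSTREAM + self.path,
            data=body,
            headers=inject_auth(
                {"Content-Type": "application/json"},
                os.environ["INFERENCE_API_KEY"],
            ),
        )
        with request.urlopen(upstream_req) as resp:
            payload = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


# To run locally:
# HTTPServer(("127.0.0.1", 8787), ProxyHandler).serve_forever()
```

The key design point is the same across frameworks: the secret lives only in a server-side environment variable, and the client talks to your proxy route instead of the API directly.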
Building Apps
Create your own apps to run on inference.sh. → See Extending Apps for the app development guide.
Environment Variables
```python
import os
from inferencesh import inference

client = inference(api_key=os.environ["INFERENCE_API_KEY"])
```
TypeScript Support
The JavaScript SDK includes full TypeScript definitions:
```typescript
import type { Task, ApiTaskRequest, InferenceConfig } from '@inferencesh/sdk';
```
Browser & Node.js
The JavaScript SDK works in both environments:
```typescript
// Node.js (CommonJS)
const { inference } = require('@inferencesh/sdk');

// ES Modules / TypeScript
import { inference } from '@inferencesh/sdk';

// Browser (ESM CDN)
import { inference } from 'https://esm.sh/@inferencesh/sdk';
```
Note: Never expose API keys in client-side code. Use the Server Proxy in production.
Requirements
- Python 3.8+
- requests for the sync client
- aiohttp for the async client (optional)
- Node.js 18.0.0+
- Modern browsers with fetch support