Execute AI apps on inference.sh.
## Basic Usage

```python
from inferencesh import inference

client = inference(api_key="inf_your_key")

result = client.run({
    "app": "infsh/flux",
    "input": {"prompt": "A sunset over mountains"}
})

print(f"Task ID: {result['id']}")
print(f"Output: {result['output']}")
```

## Parameters
| Parameter | Type | Description |
|---|---|---|
| `app` | string | App identifier (`namespace/name` or `namespace/name@version`) |
| `input` | object | Input matching the app's schema |
| `setup` | object | Setup parameters (affects worker warmth) |
| `infra` | `'cloud' \| 'private'` | Infrastructure type |
| `variant` | string | App variant |
| `workers` | string[] | Specific worker IDs (private infrastructure only) |
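The parameters above combine into a single request payload. A minimal sketch (the version tag, setup values, and prompt are illustrative examples, not real releases):

```python
# Illustrative payload combining the parameters documented above.
# The app version, setup values, and prompt are made-up examples.
payload = {
    "app": "infsh/flux@1.0.0",        # pin a version with the @ suffix
    "setup": {"model": "schnell"},    # affects worker warmth
    "input": {"prompt": "A sunset"},  # must match the app's input schema
    "infra": "cloud",                 # or "private" with a "workers" list
}
# result = client.run(payload)  # requires an inference.sh API key
```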
## Setup Parameters
Setup parameters configure the app instance before it runs. Workers whose setup matches the request are "warm" and start faster:
```python
result = client.run({
    "app": "infsh/flux",
    "setup": {"model": "schnell"},
    "input": {"prompt": "A sunset"}
})
```

## Private Workers
Run on your own infrastructure:
```python
result = client.run({
    "app": "my-team/my-app",
    "input": {...},
    "infra": "private",
    "workers": ["worker-id-1"]  # Optional: target specific workers
})
```

## Task Status
```python
from inferencesh import TaskStatus

TaskStatus.QUEUED      # 2  - Waiting
TaskStatus.RUNNING     # 7  - Executing
TaskStatus.CANCELLING  # 8  - Cancelling
TaskStatus.COMPLETED   # 10 - Done
TaskStatus.FAILED      # 11 - Error
TaskStatus.CANCELLED   # 12 - Cancelled
```
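These status codes can drive a simple client-side polling loop. A sketch, assuming you have some callable that returns the current numeric status of a task (`fetch_status` is a stand-in; the SDK's actual method for fetching task status is not shown here):

```python
import time

# Terminal status codes from the table above: COMPLETED, FAILED, CANCELLED.
TERMINAL = {10, 11, 12}

def wait_for(fetch_status, interval=1.0, timeout=60.0):
    """Poll fetch_status() until it returns a terminal status code.

    fetch_status is any zero-argument callable returning the task's
    current numeric status (hypothetical stand-in for the SDK call).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL:
            return status
        time.sleep(interval)
    raise TimeoutError("task did not finish in time")
```

For example, a task that moves through QUEUED (2) and RUNNING (7) before COMPLETED (10) makes `wait_for` return 10.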