
wan2-2-i2i-a14b

Upscales and enhances images with the Wan2.2 A14B image-to-image pipeline, guided by an optional text prompt.

run with your agent
# install belt
$ curl -fsSL https://cli.inference.sh | sh
# view schema & details
$ belt app get infsh/wan2-2-i2i-a14b
# run
$ belt app run infsh/wan2-2-i2i-a14b

api reference


1. calling the api

install the client

the client provides a convenient way to interact with the api.

bash
pip install inferencesh

setup your api key

set INFERENCE_API_KEY as an environment variable. get your key from settings → api keys.

bash
export INFERENCE_API_KEY="inf_your_key"

run and get result

submit a request and wait for the final result. best for batch processing or when you don't need progress updates.

python
from inferencesh import inference

client = inference()

result = client.run({
    "app": "infsh/wan2-2-i2i-a14b",
    "input": {}
})

print(result["output"])

stream live updates

get real-time progress updates as the task runs. ideal for showing progress bars, partial results, or long-running tasks.

python
from inferencesh import inference

client = inference()

# stream=True yields updates as they arrive
for update in client.run({
        "app": "infsh/wan2-2-i2i-a14b",
        "input": {}
    }, stream=True):
    if update.get("progress"):
        print(f"progress: {update['progress']}%")
    if update.get("output"):
        print(f"output: {update['output']}")

2. authentication

the api uses api keys for authentication. see the authentication docs for detailed setup instructions.

3. files

file inputs are automatically handled by the sdk. you can pass local paths, urls, or base64 data.

automatic upload

the python sdk automatically detects local file paths and uploads them. urls are passed through as-is.

python
# local file paths are automatically uploaded
result = client.run({
    "app": "infsh/wan2-2-i2i-a14b",
    "input": {
        "image": "/path/to/local/image.png",  # detected & uploaded
        "audio": "https://example.com/audio.mp3",  # url passed through
    }
})

manual upload

you can also upload files manually and use the returned url.

python
# upload and get a hosted URL
file = client.files.upload("/path/to/file.png")
print(file.uri)  # https://cloud.inference.sh/...
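
the sdk also accepts base64 data for file fields, per the note above. a minimal sketch, assuming a data-uri string is what the api expects (the exact encoding is an assumption, not confirmed by these docs):

python
import base64

# encode a local image as a data uri (accepted format is an assumption; see lead-in)
with open("/path/to/image.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode()

result = client.run({
    "app": "infsh/wan2-2-i2i-a14b",
    "input": {"image": f"data:image/png;base64,{encoded}"},
})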

4. webhooks

get notified when a task completes by providing a webhook url. when the task reaches a terminal state (completed, failed, or cancelled), a POST request is sent to your url with the task result.

python
result = client.run({
    "app": "infsh/wan2-2-i2i-a14b",
    "input": {},
    "webhook": "https://your-server.com/webhook"
}, wait=False)

webhook payload

your endpoint receives a JSON POST with the task result:

json
{
  "id": "task_abc123",
  "status": 9,
  "output": { ... },
  "error": "",
  "session_id": null,
  "created_at": "2024-01-15T10:30:00Z",
  "updated_at": "2024-01-15T10:30:05Z"
}
id (string): task id
status (number): terminal status (9=completed, 10=failed, 11=cancelled)
output (object): task output (when completed)
error (string): error message (when failed)
session_id (string): session id (if using sessions)
created_at (string): iso timestamp
updated_at (string): iso timestamp
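
for illustration, a minimal receiver sketch using flask (flask, the route, and the handler name are assumptions, not part of the sdk); it maps the terminal status codes listed above:

python
from flask import Flask, request

app = Flask(__name__)

# terminal status codes from the payload reference above
STATUS = {9: "completed", 10: "failed", 11: "cancelled"}

@app.route("/webhook", methods=["POST"])
def handle_webhook():
    task = request.get_json()
    state = STATUS.get(task["status"], "unknown")
    if state == "completed":
        print(f"task {task['id']} finished: {task['output']}")
    else:
        print(f"task {task['id']} {state}: {task.get('error', '')}")
    return "", 200  # acknowledge receipt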

5. schema

input

cache_threshold (number)
cache threshold for transformer (0 to disable caching)
default: 0

desaturate_input (number)
desaturate input image (0.0-1.0)
default: 0

guidance_scale (number)
classifier-free guidance scale
default: 2

height (integer)
target height (overrides scale)

image (string, file, required)
image

loras (array)
list of lora configs to apply

negative_prompt (string)
negative prompt to guide what to avoid
default: "oversaturated, overexposed, static, blurry details, subtitles, stylized, artwork, painting, still image, overall gray, worst quality, low quality, JPEG artifacts, ugly, deformed, extra fingers, poorly drawn hands, poorly drawn face, malformed, disfigured, deformed limbs, fused fingers, static motionless frame, cluttered background, three legs, crowded background, walking backwards"

num_inference_steps (integer)
number of denoising steps
default: 40

pre_downscale_factor (number)
pre-downscale factor (0.1-1.0). values < 1.0 downscale the image first, then upscale to target size with latent noise generation. lower values add more details/creativity by creating more missing information for the transformer to fill in.
default: 1, min: 0.1, max: 1

prompt (string)
text prompt to guide upscaling
default: ""

scale (number)
upscale factor (if not specifying width/height)
default: 1

seed (integer)
random seed for reproducibility

sharpen_input (number)
sharpen input image (0.0-1.0+)
default: 0

strength (number)
amount of noise to add (0.0-1.0)
default: 0.3

width (integer)
target width (overrides scale)

output

image (string, file, required)
image
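
putting the schema together, a sketch of a 2x upscaling request (values are illustrative, and reading the result via result["output"]["image"] assumes the output object mirrors the schema above):

python
from inferencesh import inference

client = inference()

result = client.run({
    "app": "infsh/wan2-2-i2i-a14b",
    "input": {
        "image": "/path/to/photo.png",  # required; local paths are auto-uploaded
        "prompt": "sharp, detailed photograph",
        "scale": 2,  # upscale factor (or set width/height instead)
        "strength": 0.3,  # amount of noise to add
        "pre_downscale_factor": 0.8,  # < 1.0 leaves more detail for the model to fill in
        "num_inference_steps": 40,
        "seed": 42,  # fix the seed for reproducible results
    }
})

print(result["output"]["image"])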

