
wan2-1-i2v

Generates high-quality, dynamic videos from a single static image.

run with your agent
# install belt
$ curl -fsSL https://cli.inference.sh | sh
# view schema & details
$ belt app get infsh/wan2-1-i2v
# run
$ belt app run infsh/wan2-1-i2v

api reference

about

generates high-quality, dynamic videos from a single static image.

1. calling the api

install the client

the client provides a convenient way to interact with the api.

bash
pip install inferencesh

set up your api key

set INFERENCE_API_KEY as an environment variable. get your key from settings → api keys.

bash
export INFERENCE_API_KEY="inf_your_key"

run and get result

submit a request and wait for the final result. best for batch processing or when you don't need progress updates.

python
from inferencesh import inference

client = inference()

result = client.run({
    "app": "infsh/wan2-1-i2v",
    "input": {}
})

print(result["output"])

stream live updates

get real-time progress updates as the task runs. ideal for showing progress bars, partial results, or long-running tasks.

python
from inferencesh import inference

client = inference()

# stream=True yields updates as they arrive
for update in client.run({
    "app": "infsh/wan2-1-i2v",
    "input": {}
}, stream=True):
    if update.get("progress"):
        print(f"progress: {update['progress']}%")
    if update.get("output"):
        print(f"output: {update['output']}")

2. authentication

the api uses api keys for authentication. see the authentication docs for detailed setup instructions.
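if you prefer configuring the key in code rather than in your shell, one option is to populate the environment variable before constructing the client. a minimal sketch, assuming the client reads INFERENCE_API_KEY from the environment as described above (hardcoding a key is for illustration only; prefer the shell export in practice):

python
import os

# set the key before constructing the client; in real code,
# read it from a secrets manager or export it in your shell
os.environ["INFERENCE_API_KEY"] = "inf_your_key"

from inferencesh import inference

client = inference()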

3. files

file inputs are automatically handled by the sdk. you can pass local paths, urls, or base64 data.

automatic upload

the python sdk automatically detects local file paths and uploads them. urls are passed through as-is.

python
# local file paths are automatically uploaded
result = client.run({
    "app": "infsh/wan2-1-i2v",
    "input": {
        "image": "/path/to/local/image.png",  # detected & uploaded
        "audio": "https://example.com/audio.mp3",  # url passed through
    }
})

manual upload

you can also upload files manually and use the returned url.

python
# upload and get a hosted URL
file = client.files.upload("/path/to/file.png")
print(file.uri)  # https://cloud.inference.sh/...
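the hosted url can then be reused across runs without re-uploading. a sketch combining the upload call with the run call from section 1 (the prompt value is illustrative; input_image is this app's image field per the schema below):

python
# upload once, then reference the hosted url in one or more runs
file = client.files.upload("/path/to/image.png")

result = client.run({
    "app": "infsh/wan2-1-i2v",
    "input": {
        "prompt": "a gentle camera pan across the scene",
        "input_image": file.uri,  # hosted url from the manual upload
    }
})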

4. webhooks

get notified when a task completes by providing a webhook url. when the task reaches a terminal state (completed, failed, or cancelled), a POST request is sent to your url with the task result.

python
result = client.run({
    "app": "infsh/wan2-1-i2v",
    "input": {},
    "webhook": "https://your-server.com/webhook"
}, wait=False)

webhook payload

your endpoint receives a JSON POST with the task result:

json
{
  "id": "task_abc123",
  "status": 9,
  "output": { ... },
  "error": "",
  "session_id": null,
  "created_at": "2024-01-15T10:30:00Z",
  "updated_at": "2024-01-15T10:30:05Z"
}
id (string): task id
status (number): terminal status (9=completed, 10=failed, 11=cancelled)
output (object): task output (when completed)
error (string): error message (when failed)
session_id (string): session id (if using sessions)
created_at (string): iso timestamp
updated_at (string): iso timestamp
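
on the receiving side, any http server that accepts a JSON POST works. a minimal sketch using Flask (an assumption, not part of the sdk; the /webhook route matches the example url above) that dispatches on the fields listed:

python
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    task = request.get_json()
    if task["status"] == 9:      # completed
        print("output:", task["output"])
    elif task["status"] == 10:   # failed
        print("error:", task["error"])
    else:                        # 11 = cancelled
        print("task cancelled:", task["id"])
    return "", 200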

5. schema

input

prompt (string, required)
text prompt for video generation

input_image (object, required)
input image for image-to-video generation

end_frame (object)
optional end frame image for video generation

size (string)
size of the generated video (width*height)
default: "832x480"

num_frames (integer)
number of frames to generate (should be 4n+1)
default: 81

fps (integer)
frames per second for the output video
default: 16

guidance_scale (number)
classifier-free guidance scale
default: 5

num_inference_steps (integer)
number of denoising steps
default: 30

seed (integer)
random seed for reproducibility (-1 for random)
default: -1

negative_prompt (string)
negative prompt to guide generation
default: ""

sample_solver (string)
solver to use for sampling (unipc or dpm++)
default: "unipc"

shift (number)
noise schedule shift parameter
default: 5

tea_cache (number)
teacache multiplier (0 to disable, 1.5-2.5 recommended for speed)
default: 2

tea_cache_start_step_perc (integer)
teacache starting step percentage
default: 0

lora_file (string)
url to lora file in safetensors format

lora_multiplier (number)
multiplier for the lora effect
default: 1

vae_tile_size (integer)
vae tile size for lower vram usage (0, 128, or 256)
default: 128

enable_RIFLEx (boolean)
enable riflex positional embedding for longer videos
default: true

joint_pass (boolean)
enable joint pass for 10% speed boost
default: true

attention (string)
attention mechanism to use for generation
default: "sage"
options: "sage", "sdpa"

cfg_star_switch (boolean)
enable cfg* guidance
default: true

cfg_zero_step (integer)
step at which to switch to cfg* guidance
default: 5

add_frames_for_end_image (boolean)
add frames for end image in image-to-video
default: true

temporal_upsampling (string)
temporal upsampling method
default: ""
options: "", "rife2", "rife4"

spatial_upsampling (string)
spatial upsampling method
default: ""
options: "", "lanczos1.5", "lanczos2"

output

video (object, required)

generated video file
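
putting the schema together: a run call that overrides a few of the defaults above might look like the sketch below (prompt, path, and values are illustrative, not recommendations; client is the one constructed in section 1):

python
result = client.run({
    "app": "infsh/wan2-1-i2v",
    "input": {
        "prompt": "a sailboat drifting across a calm bay at sunset",
        "input_image": "/path/to/start_frame.png",  # local path, uploaded automatically
        "size": "832x480",
        "num_frames": 81,   # satisfies the 4n+1 constraint (81 = 4*20 + 1)
        "fps": 16,
        "seed": 42,         # fixed seed for reproducibility
    }
})

print(result["output"])  # contains the generated video file object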
