
seedance-2-0

Professional multimodal video generation from text, images, video, and audio references using ByteDance's Seedance 2.0 model via BytePlus ARK API. Supports up to 1080p, text-to-video, image-to-video, and multimodal reference-to-video with synchronized audio.

run with your agent
# install belt
$ curl -fsSL https://cli.inference.sh | sh
# view schema & details
$ belt app get bytedance/seedance-2-0
# run
$ belt app run bytedance/seedance-2-0

api reference


1. calling the api

install the client

the client provides a convenient way to interact with the api.

bash
pip install inferencesh

setup your api key

set INFERENCE_API_KEY as an environment variable. get your key from settings → api keys.

bash
export INFERENCE_API_KEY="inf_your_key"

run and get result

submit a request and wait for the final result. best for batch processing or when you don't need progress updates.

python
from inferencesh import inference

client = inference()

result = client.run({
    "app": "bytedance/seedance-2-0",
    "input": {}
})

print(result["output"])

stream live updates

get real-time progress updates as the task runs. ideal for showing progress bars, partial results, or long-running tasks.

python
from inferencesh import inference

client = inference()

# stream=True yields updates as they arrive
for update in client.run({
    "app": "bytedance/seedance-2-0",
    "input": {}
}, stream=True):
    if update.get("progress"):
        print(f"progress: {update['progress']}%")
    if update.get("output"):
        print(f"output: {update['output']}")

2. authentication

the api uses api keys for authentication. see the authentication docs for detailed setup instructions.
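
as a quick sanity check, you can confirm the key is present before constructing the client. a minimal sketch; the variable name matches the setup step above.

python
import os
from inferencesh import inference

# fail fast if the key was never exported (see step 1 above)
assert os.environ.get("INFERENCE_API_KEY"), "set INFERENCE_API_KEY first"
client = inference()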

3. files

file inputs are automatically handled by the sdk. you can pass local paths, urls, or base64 data.

automatic upload

the python sdk automatically detects local file paths and uploads them. urls are passed through as-is.

python
# local file paths are automatically uploaded
result = client.run({
    "app": "bytedance/seedance-2-0",
    "input": {
        "image": "/path/to/local/image.png",  # detected & uploaded
        "audio": "https://example.com/audio.mp3",  # url passed through
    }
})

manual upload

you can also upload files manually and use the returned url.

python
# upload and get a hosted URL
file = client.files.upload("/path/to/file.png")
print(file.uri)  # https://cloud.inference.sh/...

4. webhooks

get notified when a task completes by providing a webhook url. when the task reaches a terminal state (completed, failed, or cancelled), a POST request is sent to your url with the task result.

python
result = client.run({
    "app": "bytedance/seedance-2-0",
    "input": {},
    "webhook": "https://your-server.com/webhook"
}, wait=False)

webhook payload

your endpoint receives a JSON POST with the task result:

json
{
  "id": "task_abc123",
  "status": 9,
  "output": { ... },
  "error": "",
  "session_id": null,
  "created_at": "2024-01-15T10:30:00Z",
  "updated_at": "2024-01-15T10:30:05Z"
}
id (string): task id
status (number): terminal status (9 = completed, 10 = failed, 11 = cancelled)
output (object): task output (when completed)
error (string): error message (when failed)
session_id (string): session id (if using sessions)
created_at (string): iso timestamp
updated_at (string): iso timestamp
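
to consume these payloads, here is a minimal receiver sketch using flask (flask and the handler name are assumptions, not part of the sdk); the field names and status codes follow the payload documented above.

python
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def handle_webhook():
    # parse the JSON payload sent on task completion
    task = request.get_json()
    if task["status"] == 9:        # completed
        print("output:", task["output"])
    elif task["status"] == 10:     # failed
        print("error:", task["error"])
    else:                          # 11 = cancelled
        print("cancelled:", task["id"])
    return "", 200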

5. schema

input

prompt (string, required)

text prompt describing the video content. supports english, japanese, indonesian, spanish, and portuguese.

example: "A cat stretches lazily on a sunlit windowsill, yawning as golden afternoon light filters through sheer curtains."
image (string, file)

first-frame image for image-to-video generation. mutually exclusive with reference_image/reference_video/reference_audio.

end_image (string, file)

last-frame image for first+last frame video generation. requires image to be set as the first frame.

reference_image (string, file)

reference image for multimodal reference-to-video. use prompt to describe how to use it.

reference_image_2 (string, file)

second reference image for multimodal reference-to-video.

reference_image_3 (string, file)

third reference image for multimodal reference-to-video.

reference_video (string, file)

reference video for multimodal generation. max 15s, formats: mp4/mov.

reference_audio (string, file)

reference audio for multimodal generation. max 15s, formats: wav/mp3. requires at least one image or video.

resolution (string)

video resolution. 1080p for highest quality, 720p for balanced, 480p for fastest.

default: "720p"
options: "480p", "720p", "1080p"
ratio (string)

aspect ratio. 'adaptive' auto-selects based on input content.

default: "adaptive"
options: "adaptive", "21:9", "16:9", "4:3", "1:1", "3:4", "9:16"
duration (integer)

duration in seconds (4-15), or -1 for auto-select.

default: 5
generate_audio (boolean)

whether to generate synchronized audio with the video.

default: true
seed (integer)

seed for reproducibility (-1 for random).

default: -1
watermark (boolean)

whether to add watermark to the output video.

default: false

output

video (string, file, required)

the generated video file.

output_meta (object)

structured metadata about inputs/outputs for pricing calculation.
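
putting the schema together, a hedged end-to-end sketch (field values are illustrative; the input fields and the output's video field come from the schema above).

python
from inferencesh import inference

client = inference()

# image-to-video at 1080p with synchronized audio
result = client.run({
    "app": "bytedance/seedance-2-0",
    "input": {
        "prompt": "A cat stretches lazily on a sunlit windowsill.",
        "image": "/path/to/first_frame.png",  # local path, uploaded automatically
        "resolution": "1080p",
        "ratio": "16:9",
        "duration": 5,
        "generate_audio": True,
        "seed": 42,
    },
})

print(result["output"]["video"])  # the generated video file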
