Generate Dynamics
POST /v1/dynamics/generate
Generate dynamic movements and animations for an agent. The call returns immediately with a "processing" status; use the GET endpoint to check completion.
Request Body
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| agent_id | string | Yes | — | Agent ID to generate dynamics for |
| image_url | string | No | (from agent) | Agent image URL (fetched from agent data if not provided) |
| duration | number | No | 5 | Duration of each motion in seconds |
| model | string | No | seedance | Model to use (seedance, kling) |
Response
```json
{
  "success": true,
  "message": "Dynamics generation started",
  "agent_id": "A91XMB7113",
  "status": "processing"
}
```
```python
import requests

response = requests.post(
    "https://api.bithuman.ai/v1/dynamics/generate",
    headers={
        "Content-Type": "application/json",
        "api-secret": "YOUR_API_SECRET"
    },
    json={
        "agent_id": "A91XMB7113",
        "duration": 5,
        "model": "seedance"
    }
)
print(response.json())
```
Get Dynamics
GET /v1/dynamics/{agent_id}
Retrieve the current dynamics configuration and available gestures for an agent.
Response (dynamics generated)
```json
{
  "success": true,
  "data": {
    "url": "https://storage.supabase.co/dynamics-model.imx",
    "status": "ready",
    "agent_id": "A91XMB7113",
    "gestures": {
      "mini_wave_hello": "https://storage.supabase.co/mini_wave_hello.mp4",
      "talk_head_nod_subtle": "https://storage.supabase.co/talk_head_nod_subtle.mp4",
      "blow_kiss_heart": "https://storage.supabase.co/blow_kiss_heart.mp4"
    }
  }
}
```
Response (not yet generated)
```json
{
  "success": true,
  "data": {
    "url": null,
    "status": "ready",
    "agent_id": "A91XMB7113",
    "gestures": {}
  }
}
```
Response Fields
| Field | Type | Description |
|---|---|---|
| url | string or null | URL to the dynamics model file, or null if not generated |
| status | string | generating while in progress, ready when complete |
| agent_id | string | The agent ID |
| gestures | object | Map of gesture action names to video URLs (e.g. mini_wave_hello, talk_head_nod_subtle) |
Gesture names like mini_wave_hello and talk_head_nod_subtle are the action identifiers you pass to VideoControl(action=...) or the RPC trigger_dynamics method. See Avatar Sessions for integration examples.
```python
import requests

agent_id = "A91XMB7113"
response = requests.get(
    f"https://api.bithuman.ai/v1/dynamics/{agent_id}",
    headers={"api-secret": "YOUR_API_SECRET"}
)
print(response.json())
```
Update Dynamics
PUT /v1/dynamics/{agent_id}
Update the dynamics configuration for an agent. After a successful update, regeneration of movements is triggered automatically in the background.
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| dynamics | object | Yes | Dynamics configuration to merge with existing data |
| dynamics.enabled | boolean | No | Enable or disable dynamics for this agent |
| dynamics.batch_results | object | No | Map of gesture names to video generation results |
| dynamics.result | object | No | Result model path and hash (set when dynamics generation completes) |
| dynamics.talking | object | No | Default talking model path and hash (used when dynamics are disabled) |
| toggle_enabled | boolean | No | true to switch to dynamics model, false to restore default talking model |
Example: Enable dynamics after generation
```json
{
  "dynamics": {
    "enabled": true
  },
  "toggle_enabled": true
}
```
Response (with regeneration)
```json
{
  "success": true,
  "message": "Dynamics updated successfully and movements regeneration started",
  "agent_id": "A91XMB7113",
  "regeneration_status": "started"
}
```
Response (regeneration failed to start)
```json
{
  "success": true,
  "message": "Dynamics updated successfully, but movements regeneration failed to start",
  "agent_id": "A91XMB7113",
  "regeneration_status": "failed",
  "regeneration_error": "Connection refused"
}
```
Gesture Names
When dynamics are generated, the available gestures use descriptive action names:
| Gesture Action | Category | Typical Use |
|---|---|---|
| mini_wave_hello | wave | Greeting |
| talk_head_nod_subtle | nod | Agreement, acknowledgment |
| blow_kiss_heart | expression | Playful reaction |
| laugh_react | expression | Humor response |
| idle_subtle | idle | Background movement |
The exact gesture names depend on what was generated. Use GET /v1/dynamics/{agent_id} to discover available gestures for each agent.
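Because gesture names are descriptive, a small helper can filter a discovered gestures map by keyword when choosing what to trigger. This is a sketch over the example response above; matching on substrings of the action name is an assumption based on the naming pattern, not a guaranteed convention:

```python
def gestures_by_keyword(gestures: dict, keyword: str) -> dict:
    """Filter a gestures map (action name -> video URL) by a substring of the name."""
    return {name: url for name, url in gestures.items() if keyword in name}

# Example gestures map, as returned in the "data.gestures" field above
gestures = {
    "mini_wave_hello": "https://storage.supabase.co/mini_wave_hello.mp4",
    "talk_head_nod_subtle": "https://storage.supabase.co/talk_head_nod_subtle.mp4",
    "blow_kiss_heart": "https://storage.supabase.co/blow_kiss_heart.mp4",
}

print(list(gestures_by_keyword(gestures, "wave")))  # ['mini_wave_hello']
print(list(gestures_by_keyword(gestures, "nod")))   # ['talk_head_nod_subtle']
```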
Configuration Options
Duration Settings:
- 1-3 seconds: Quick gestures (waves, nods)
- 3-5 seconds: Standard motions (default)
- 5-10 seconds: Extended animations

Model Options:
- seedance: High-quality motion generation (default)
- kling: Alternative motion model
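The duration guidance above can be captured as presets when building the generate request. The preset names and chosen values are assumptions for illustration; only agent_id, duration, and model are API parameters:

```python
# Convenience mapping from use case to duration, based on the guidance above;
# the preset names are not part of the API.
DURATION_PRESETS = {
    "quick": 2,      # waves, nods
    "standard": 5,   # default
    "extended": 8,   # longer animations
}

def build_generate_body(agent_id: str, preset: str = "standard",
                        model: str = "seedance") -> dict:
    """Request body for POST /v1/dynamics/generate using a duration preset."""
    return {
        "agent_id": agent_id,
        "duration": DURATION_PRESETS[preset],
        "model": model,
    }

print(build_generate_body("A91XMB7113", "quick"))
```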
Integration Example
```python
import requests
import time

headers = {"Content-Type": "application/json", "api-secret": "YOUR_API_SECRET"}

# Step 1: Create an agent
resp = requests.post(
    "https://api.bithuman.ai/v1/agent/generate",
    headers=headers,
    json={"prompt": "You are a friendly customer service representative."}
)
agent_id = resp.json()["agent_id"]

# Step 2: Wait for agent to be ready
while True:
    status = requests.get(
        f"https://api.bithuman.ai/v1/agent/status/{agent_id}",
        headers={"api-secret": "YOUR_API_SECRET"}
    ).json()
    if status["data"]["status"] in ("ready", "failed"):
        break
    time.sleep(5)

# Step 3: Generate dynamics
resp = requests.post(
    "https://api.bithuman.ai/v1/dynamics/generate",
    headers=headers,
    json={"agent_id": agent_id, "duration": 5, "model": "seedance"}
)
print("Dynamics generation started:", resp.json())

# Step 4: Check available gestures
time.sleep(30)  # Wait for generation
resp = requests.get(
    f"https://api.bithuman.ai/v1/dynamics/{agent_id}",
    headers={"api-secret": "YOUR_API_SECRET"}
)
gestures = resp.json()["data"].get("gestures", {})
print(f"Available gestures: {list(gestures.keys())}")
```
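Instead of the fixed 30-second wait in step 4, generation can be polled until the status field reported by GET /v1/dynamics/{agent_id} leaves "generating". The loop below is a sketch; the interval and timeout values are arbitrary choices, not API requirements:

```python
import time
import requests

def wait_for_dynamics(agent_id: str, api_secret: str,
                      interval: float = 5.0, timeout: float = 300.0) -> dict:
    """Poll the dynamics endpoint until status is no longer 'generating'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = requests.get(
            f"https://api.bithuman.ai/v1/dynamics/{agent_id}",
            headers={"api-secret": api_secret},
        )
        data = resp.json()["data"]
        if data["status"] != "generating":
            return data  # "ready", or any other terminal state
        time.sleep(interval)
    raise TimeoutError(f"dynamics for {agent_id} still generating after {timeout}s")
```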
Error Codes
| HTTP Status | Meaning |
|---|---|
| 200 | Success |
| 400 | Invalid parameters |
| 401 | Unauthorized |
| 402 | Insufficient credits |
| 404 | Agent not found |
| 500 | Internal server error |
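One way to surface these codes in application code is a small checker that raises on non-success responses. The exception class and messages below are a sketch derived from the table above, not part of the API:

```python
class DynamicsAPIError(Exception):
    """Raised for non-success responses; carries the HTTP status code."""
    def __init__(self, status_code: int, message: str):
        super().__init__(f"{status_code}: {message}")
        self.status_code = status_code

# Messages mirror the error-code table above
ERROR_MESSAGES = {
    400: "Invalid parameters",
    401: "Unauthorized",
    402: "Insufficient credits",
    404: "Agent not found",
    500: "Internal server error",
}

def check_status(status_code: int) -> None:
    """No-op on 200; raise DynamicsAPIError for any other status."""
    if status_code == 200:
        return
    raise DynamicsAPIError(status_code, ERROR_MESSAGES.get(status_code, "Unknown error"))
```

In practice you would call check_status(response.status_code) after each request before reading the JSON body.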