Wan 2.1 I2V 480P
wan-2.1-i2v-480p
Wan 2.1 14B is an image-to-video model from Wan 2.1, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.
Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "wan-2-1-i2v-480p",
            "version": "0.0.1",
            "input": {
                "seed": 0,
                "image": "your image here",
                "prompt": "your prompt here",
                "max_area": "832x480",
                "fast_mode": "Off",
                "num_frames": 81,
                "sample_shift": 3,
                "sample_steps": 30,
                "frames_per_second": 16,
                "sample_guide_scale": 5
            }
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. The API processes predictions asynchronously, so you'll need to repeatedly check the status until you receive a success (or error) response.
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
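If you embed this in a long-running service, you may also want a hard upper bound so a stalled prediction doesn't poll forever. Here is a minimal sketch of the same loop with a deadline; the 300-second default is an arbitrary choice, not an API limit:

def get_prediction_with_timeout(prediction_id, timeout_s=300):
    # Same polling loop as above, but give up after timeout_s seconds
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)
    raise TimeoutError(f"Prediction {prediction_id} did not finish within {timeout_s}s")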
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
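The output field contains a URL to the generated MP4 (see Output Format below). Assuming it is a directly fetchable file URL, a small helper can stream it to disk:

def download_video(url, path="output.mp4"):
    # Stream the generated MP4 to disk in chunks
    with requests.get(url, stream=True) as response:
        response.raise_for_status()
        with open(path, "wb") as f:
            for chunk in response.iter_content(chunk_size=8192):
                f.write(chunk)
    return path

# e.g. download_video(result["output"])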
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~50 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Poll the prediction status repeatedly until completion
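If a batch job approaches the 60 requests/minute limit, a simple retry-with-backoff wrapper keeps it from failing outright. This is a hypothetical helper, not part of an official SDK, and it assumes the API signals rate limiting with HTTP 429:

def post_with_backoff(url, max_retries=5, **kwargs):
    # Retry on HTTP 429 with exponential backoff: 1s, 2s, 4s, ...
    for attempt in range(max_retries):
        response = requests.post(url, **kwargs)
        if response.status_code != 429:
            return response
        time.sleep(2 ** attempt)
    raise Exception("Still rate-limited after retries")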
Overview
Wan 2.1 I2V 480P is a model designed for generating high-quality videos from images and text prompts. It operates at a resolution of 480P and supports multiple configurations for frame rate, sampling steps, and generation parameters. The model is optimized for smooth frame interpolation and detailed motion rendering, making it suitable for creative video generation tasks.
Technical Specifications
Resolution: The model generates video outputs at 480P resolution, providing a balance between video quality and processing time.
Frame Rate: Adjustable frames_per_second (FPS), ranging from low frame rates for slower, more cinematic videos to higher FPS for smoother, faster videos.
Video Length: Users can specify the number of frames to generate with num_frames, allowing flexibility in creating shorter or longer videos.
Processing Speed: Offers three performance modes: Fast, Balanced, and Off, which control the rendering speed and output quality. Off provides the best quality but takes longer, while Fast renders quickly but may reduce visual detail.
Video Transition Smoothness: The model’s ability to produce smooth transitions between frames depends on settings such as num_frames and sample_steps, affecting the fluidity and quality of the video output.
Sample Quality: Adjustments to sample_guide_scale and sample_shift allow fine-tuning of the video’s visual quality and the consistency of transitions between frames.
Seed Control: The seed parameter controls the randomness of generation. Varying the seed produces diverse video outputs from the same input image, while reusing a seed reproduces a prior result.
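To see how these parameters combine in practice, here is a sketch of two input payloads using the request schema from the integration example above. The specific values are illustrative assumptions, not official recommendations:

# Slower, more cinematic clip: lower FPS, more steps, best quality
cinematic_input = {
    "seed": 42,                   # fixed seed for reproducibility
    "image": "your image here",
    "prompt": "your prompt here",
    "max_area": "832x480",        # landscape orientation
    "fast_mode": "Off",           # best quality, longest render
    "num_frames": 81,
    "frames_per_second": 12,      # 81 frames / 12 FPS ~ 6.8s of video
    "sample_steps": 40,           # more steps for detail and coherence
    "sample_shift": 3,
    "sample_guide_scale": 6,
}

# Quick draft: Fast mode and fewer steps trade detail for turnaround
draft_input = {
    **cinematic_input,
    "fast_mode": "Fast",
    "sample_steps": 15,
    "frames_per_second": 16,
}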
Key Considerations
- The model works best with high-quality input images and well-structured prompts.
- Lower sample steps result in faster outputs but may reduce smoothness and detail.
- Fast mode sacrifices some quality for speed, whereas Balanced and Off modes provide higher fidelity.
- Choosing an appropriate frame rate (e.g., 16 FPS) ensures fluid motion without unnecessary processing overhead.
- Using a fixed seed can help maintain consistency across multiple generations.
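One quick sanity check behind the frame-rate advice above: clip duration is simply num_frames divided by frames_per_second.

def clip_duration(num_frames: int, fps: int) -> float:
    # Duration in seconds = total frames / frames per second
    return num_frames / fps

print(clip_duration(81, 16))  # 5.0625 -> roughly a 5-second clip at the defaults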
Tips & Tricks
- Max Area (832x480 or 480x832): Selecting the appropriate resolution maintains aspect ratio integrity.
- Fast Mode (Off, Balanced, Fast): Off provides the best quality, Balanced is a good trade-off, and Fast prioritizes speed.
- Sample Steps (1-40, higher for detail): More steps improve coherence and reduce flickering but take longer.
- Sample Guide Scale (0-10, higher for guidance): Controls how closely the output follows the prompt. Higher values ensure better adherence to input descriptions.
- Sample Shift (1-10, for subtle variations): Small values provide slight adjustments without major visual distortions.
- Seed (Fixed or Random): Use a fixed seed for consistent results when iterating on a specific effect.
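To act on the Max Area tip without eyeballing each image, a small helper (assuming Pillow is installed) can pick the orientation that matches the source image:

from PIL import Image

def pick_max_area(image_path):
    # Landscape sources map to 832x480, portrait to 480x832
    with Image.open(image_path) as img:
        width, height = img.size
    return "832x480" if width >= height else "480x832"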
Capabilities
- Converts static images into animated sequences with smooth transitions.
- Generates videos based on descriptive text prompts.
- Allows customization of frame rate, quality settings, and motion interpolation.
- Supports multiple quality and speed configurations for different needs.
What can I use it for?
- Artistic Video Creation: Turn static images into animated video clips for storytelling.
- Concept Visualization: Generate short motion sequences from text descriptions.
- Experimental Animation: Create dynamic visual effects using different prompt styles.
Things to be aware of
- Experiment with different prompts and image styles to see how they influence motion generation.
- Compare outputs at various sample steps and guide scales to balance quality and speed.
- Use different fast mode settings to optimize for either speed or detail.
- Try using a fixed seed for generating consistent variations of an animation.
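The seed advice cuts both ways: reuse one seed to reproduce a result, or hold every other input fixed and sweep only the seed to explore variations. A minimal sketch, reusing HEADERS from the integration example above:

base_input = {
    "image": "your image here",
    "prompt": "your prompt here",
    "max_area": "832x480",
    "fast_mode": "Balanced",
    "num_frames": 81,
    "sample_shift": 3,
    "sample_steps": 30,
    "frames_per_second": 16,
    "sample_guide_scale": 5,
}

# Launch three generations that differ only in seed
for seed in (1, 2, 3):
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "wan-2-1-i2v-480p",
            "version": "0.0.1",
            "input": {**base_input, "seed": seed},
        },
    )
    print(seed, response.json()["predictionID"])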
Limitations
Fast Mode Quality Trade-Off: Selecting "Fast" for fast_mode speeds up video creation but can compromise visual quality. Opt for Balanced or Off when quality is paramount.
Limited Resolution Support: The model outputs at 480P resolution, so videos may not meet high-definition standards. For HD quality, upscale the output in post-processing or use a higher-resolution variant of the model if one is available.
Seed Variability: Using a random seed with otherwise identical inputs will produce varied outputs. If you need reproducible results, document and reuse specific seed values.
Input Image Quality: Poor-quality input images (low resolution, blurry, or noisy images) will lead to lower-quality video outputs. Always use clear, high-resolution images for best results.
Output Format: MP4