LCM Animation Time Lapse Generator
lcm-animation
Fast animation using a latent consistency model.
L40S (45 GB) · Fast Inference · REST API
Model Information
- Response Time: ~25 sec
- Status: Active
- Version: 0
- Updated: 23 days ago
Cost is calculated based on execution time. The model is charged at $0.0011 per second. With a $1 budget, you can run this model approximately 36 times, assuming an average execution time of 25 seconds per run.
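As a quick sanity check on that estimate, here is a minimal sketch of the arithmetic; the per-second rate and 25-second average come from the note above, and the $1 budget is just an example value:

PRICE_PER_SECOND = 0.0011   # USD per second of execution time (from the pricing note)
AVG_RUNTIME_S = 25          # average execution time in seconds
BUDGET_USD = 1.00           # example budget

cost_per_run = PRICE_PER_SECOND * AVG_RUNTIME_S   # = $0.0275 per run
runs = int(BUDGET_USD // cost_per_run)            # = 36 runs
print(f"~${cost_per_run:.4f} per run, about {runs} runs on a ${BUDGET_USD:.2f} budget")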
API Reference
Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "lcm-animation",
            "version": "0",
            "input": {
                "seed": None,
                "image": "your_file.image/jpeg",
                "width": "512",
                "height": "512",
                "end_prompt": "Self-portrait watercolour, a beautiful cyborg with purple hair, 8k",
                "iterations": "12",
                "start_prompt": "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k",
                "return_frames": False,
                "guidance_scale": "8",
                "zoom_increment": "0",
                "prompt_strength": "0.2",
                "canny_low_threshold": "100",
                "num_inference_steps": "8",
                "canny_high_threshold": "200",
                "control_guidance_end": "1",
                "use_canny_control_net": "True",
                "control_guidance_start": "0",
                "controlnet_conditioning_scale": "2"
            }
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. The API uses long-polling, so you'll need to repeatedly check until you receive a success status.
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
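If you want to save the generated animation locally, the following is a small sketch, assuming result['output'] is a direct download URL and using output.mp4 as a placeholder filename (the actual extension depends on the model's output format):

import requests

def download_output(output_url, filename="output.mp4"):
    # Stream the generated file to disk in chunks to avoid loading it all into memory.
    with requests.get(output_url, stream=True) as resp:
        resp.raise_for_status()
        with open(filename, "wb") as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)
    return filename

# Usage, after get_prediction() returns successfully:
# download_output(result["output"])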
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~25 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum (see the throttling sketch after this list)
- Use long-polling to check prediction status until completion
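When submitting many predictions, one way to respect those limits is to cap in-flight work with a thread pool; this is an illustrative pattern built on the helpers above, not part of the API:

from concurrent.futures import ThreadPoolExecutor, as_completed

def run_many(n_jobs):
    # Each job creates a prediction and polls for its result, reusing the
    # create_prediction() and get_prediction() helpers defined earlier.
    def one_job(_):
        return get_prediction(create_prediction())

    # max_workers=10 mirrors the documented ceiling of 10 concurrent requests.
    # Note: with many jobs in flight, the 1-second polling interval in
    # get_prediction() may need to be increased to stay under 60 requests/minute.
    with ThreadPoolExecutor(max_workers=10) as pool:
        futures = [pool.submit(one_job, i) for i in range(n_jobs)]
        return [f.result() for f in as_completed(futures)]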