Transform live video with AI, at scale
Livepeer Pipelines enable developers to build, deploy, and scale real-time AI video processing workflows. By combining multiple AI models into a single workflow, you can create sophisticated video experiences—from style transfer to object detection to live translation—without managing complex infrastructure. Pipelines abstract away the complexities of video processing, inference management, and scaling, letting you focus on creating unique video experiences.
A Pipeline is a composable workflow that applies AI processing to live video in real time. Think of it as an assembly line where each station performs a specific AI operation on your video stream, like style transfer, object detection, or live translation.
Unlike traditional video AI that works with pre-recorded files, Pipelines operate on live video streams with minimal added latency. This enables interactive experiences like:
Live AI-powered video filters
Real-time content moderation
Dynamic visual effects
Instant language translation
Pipelines are built from smaller, reusable units that can be chained together. This modular approach means you can:
Combine multiple AI capabilities into a single workflow
Reuse common processing patterns
Modify individual components without rebuilding the entire Pipeline
Right now, these units are represented as ComfyUI nodes.
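To make the modular idea concrete, here is a minimal Python sketch of reusable units chained into a Pipeline. The `Unit` and `Pipeline` classes are illustrative stand-ins, not the actual ComfyUI node or Livepeer API:

```python
# A minimal sketch: each unit transforms a frame, and a Pipeline runs
# them in order like stations on an assembly line. Illustrative only.
from dataclasses import dataclass
from typing import Callable, List

Frame = bytes  # stand-in for a decoded video frame

@dataclass
class Unit:
    name: str
    apply: Callable[[Frame], Frame]

class Pipeline:
    def __init__(self, units: List[Unit]):
        self.units = units

    def process(self, frame: Frame) -> Frame:
        for unit in self.units:
            frame = unit.apply(frame)  # each station does one AI operation
        return frame

# Placeholder transforms; real units would run model inference.
style_transfer = Unit("style_transfer", lambda f: f)
overlay = Unit("overlay", lambda f: f)

pipeline = Pipeline([style_transfer, overlay])
result = pipeline.process(b"\x00" * 16)
```

Because the units are independent, swapping `overlay` for another unit changes one line rather than the whole workflow.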
Pipelines generally fall into three categories:
Transformation Pipelines
Change how video looks (style transfer, filters)
Modify video properties (resolution, framerate)
Add visual elements (overlays, effects)
Analysis Pipelines
Typically output JSON or text rather than video, often at high volume
Detect objects or activities
Track movement
Generate metadata
Generation Pipelines
Create new video content
Add synthetic elements
Produce alternative views
Pipelines can be composed, so the output of one Pipeline can inform another.
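As an illustration of composition, the sketch below feeds the JSON output of a toy analysis stage into a transformation stage. Both functions are hypothetical placeholders for real model inference:

```python
# Illustrative composition: an analysis Pipeline emits metadata, and a
# downstream transformation Pipeline uses it to decide what to change.
import json

def analysis_pipeline(frame: bytes) -> str:
    # Pretend detector: a real Pipeline would run object detection
    # and emit JSON or text alongside the video, often at high volume.
    return json.dumps({"objects": [{"label": "person", "box": [10, 20, 50, 80]}]})

def transformation_pipeline(frame: bytes, metadata: str) -> bytes:
    # Use the analysis output to inform the transformation, e.g. blur
    # or overlay each detected region.
    boxes = [obj["box"] for obj in json.loads(metadata)["objects"]]
    _ = boxes  # a real implementation would modify the frame here
    return frame

frame = b"\x00" * 16
out = transformation_pipeline(frame, analysis_pipeline(frame))
```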
A Pipeline moves through several stages:
Creation
Built using ComfyUI (available now) or custom Python code (coming Q1 2025)
Processing steps and parameters defined (see the sketch after this list)
Resource requirements set
Publication
Made available for other developers to discover and use
Usage
Integrated into applications
Applied to live video streams
Monitored for performance
Remixing
Behavior can be adjusted through configuration
Supports runtime parameter updates
Enables dynamic control during streaming
Throughout its life, a Pipeline also:
Reports health and performance metrics
Provides detailed error information
Enables monitoring and debugging
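The sketch below illustrates the Creation and configuration steps above. Every name in it (`pipeline_definition`, `RunningPipeline`, `update_param`) is a hypothetical stand-in, not the actual Livepeer SDK; it shows one plausible shape for a declarative definition plus a runtime parameter update.

```python
# Hypothetical shapes, not the real Livepeer SDK: a declarative
# definition covering Creation (steps, parameters, resources), plus
# the kind of runtime parameter update described above.
pipeline_definition = {
    "name": "stylized-stream",
    "steps": [
        {"unit": "style_transfer", "params": {"style": "watercolor", "strength": 0.7}},
        {"unit": "overlay", "params": {"position": "top-right"}},
    ],
    "resources": {"gpu": 1, "max_input_fps": 30},
}

class RunningPipeline:
    """Toy stand-in for a deployed Pipeline that accepts live updates."""

    def __init__(self, definition: dict):
        # Index each unit's parameters so they can be updated mid-stream.
        self.params = {s["unit"]: dict(s["params"]) for s in definition["steps"]}

    def update_param(self, unit: str, key: str, value) -> None:
        # A runtime parameter update: takes effect on subsequent frames
        # without restarting the stream.
        self.params[unit][key] = value

running = RunningPipeline(pipeline_definition)
running.update_param("style_transfer", "strength", 0.4)  # dynamic control while live
```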
There are two ways to work with Pipelines:
Use existing Pipelines in your applications (see the integration sketch below)
Focus on integration and user experience
Minimal AI/ML expertise required
Create new Pipelines for others to use
Build with ComfyUI or custom Python
Deeper technical involvement
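For the first path, integration might look roughly like the sketch below: attach an existing Pipeline to a stream and read back the stream details. The endpoint, request fields, and pipeline id are placeholders, not Livepeer's actual API:

```python
# Purely illustrative consumer path: pick an existing Pipeline, attach
# it to a new stream, and wire the returned details into your app.
import requests

API_BASE = "https://api.example.com"  # placeholder base URL

resp = requests.post(
    f"{API_BASE}/streams",
    json={
        "name": "my-filtered-stream",
        "pipeline_id": "style-transfer-v1",   # an existing Pipeline
        "params": {"style": "watercolor"},    # its exposed parameters
    },
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},
    timeout=10,
)
resp.raise_for_status()
stream = resp.json()  # would include ingest and playback URLs
```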
Join our Discord
Browse the Showcase
Contribute to open source