Local Development Environment for Real-Time AI Video Pipelines with ComfyUI and ComfyStream
This guide walks through setting up a local real-time AI video pipeline using ComfyUI and ComfyStream. You’ll learn how to process live video with AI models, using depth map generation as an example.
Prerequisites
- A RunPod account with access to GPU instances
- Basic familiarity with terminal commands
- An SSH key pair for secure connections
- Node.js and npm installed locally
Technical Architecture
Step 1: Set Up ComfyUI Environment
- Launch a RunPod instance with ComfyUI-Launcher
- Configure workspace directories
Note: Install at least one model through the ComfyUI manager GUI to enable the “Load Checkpoint” node.
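If you prefer to set up the workspace by hand rather than through ComfyUI-Launcher, the layout looks roughly like the sketch below. The workspace path is an assumption; adjust it to match your RunPod volume.

```shell
# Sketch: manual ComfyUI workspace setup (paths are assumptions).
export COMFY_HOME=/workspace/ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI.git "$COMFY_HOME"
cd "$COMFY_HOME"
pip install -r requirements.txt
# Directories ComfyUI expects for models and custom nodes:
mkdir -p models/checkpoints custom_nodes
```

After this, place (or download via the ComfyUI manager GUI) at least one checkpoint under `models/checkpoints` so the “Load Checkpoint” node has something to load.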
Step 2: Install ComfyStream
- Set up Conda environment
- Install ComfyStream
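The two sub-steps above look roughly like this; the environment name, Python version, and repository URL are assumptions, so check the ComfyStream README for the current instructions.

```shell
# Sketch: isolated Conda environment for ComfyStream (names assumed).
conda create -n comfystream python=3.11 -y
conda activate comfystream

# Clone and install ComfyStream (URL is an assumption).
git clone https://github.com/yondonfu/comfystream.git
cd comfystream
pip install -r requirements.txt
pip install .
```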
Step 3: Install DepthAnything TensorRT Node
- Clone and install the node
- Build TensorRT engine
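A sketch of the two sub-steps above, assuming a community DepthAnything TensorRT node; the repository URL and build-script name are assumptions, so follow the node's own README for the exact commands and model downloads.

```shell
# Sketch: install a DepthAnything TensorRT custom node (URL assumed).
cd /workspace/ComfyUI/custom_nodes
git clone https://github.com/yuvraj108c/ComfyUI-Depth-Anything-Tensorrt.git
cd ComfyUI-Depth-Anything-Tensorrt
pip install -r requirements.txt

# Build the TensorRT engine for your GPU (script name assumed; the
# engine must be built on the same GPU architecture it will run on).
python export_trt.py
```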
Step 4: Configure Network Tunneling
- Set up UDP forwarding on the remote server
- Create SSH tunnel from local machine
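The local side of the tunnel can be sketched as below. The ports, username, host, and SSH port are placeholders for your own RunPod instance; the UDP forwarding on the remote server is a separate step, since plain `ssh -L` forwards TCP only.

```shell
# Sketch: forward the ComfyStream server (8888, assumed) and the
# ComfyUI web UI (8188, assumed) to your local machine.
ssh -i ~/.ssh/id_ed25519 -N \
    -L 8888:localhost:8888 \
    -L 8188:localhost:8188 \
    -p <ssh-port> user@<runpod-host>
```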
Step 5: Launch ComfyStream Server
- Install dependencies and initialize workspace
- Start the server
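A sketch of starting the server, assuming the workspace path and flag names from the install above; run the server's `--help` to confirm the actual options.

```shell
# Sketch: launch the ComfyStream server against the ComfyUI workspace
# (entry point, flags, and port are assumptions).
conda activate comfystream
cd comfystream
python server/app.py --workspace /workspace/ComfyUI --port 8888
```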
Step 6: Run the Frontend
- Set up the local development environment
- Access the interface
  - Open `localhost:3000` in your browser
  - Set the stream URL to `http://127.0.0.1:8888`
  - Select the “depth anything” workflow
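The frontend setup above can be sketched as follows; the `ui` directory name is an assumption based on a typical Node.js project layout in the ComfyStream repository.

```shell
# Sketch: run the ComfyStream frontend locally (directory name assumed).
cd comfystream/ui
npm install
npm run dev   # typically serves on localhost:3000
```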
Testing Your Setup
- Open the ComfyUI interface and verify the DepthAnything node appears
- Test with a static image first to ensure model loading works
- Switch to live video input through the ComfyStream interface
Troubleshooting
- If custom nodes don’t appear, restart ComfyUI and refresh the browser
- Check RunPod container logs for node installation issues
- Verify SSH tunnels are active using `netstat` or similar tools
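For the tunnel check, a few quick commands (ports assumed to match the tunnel setup above):

```shell
# Is anything listening on the forwarded ports locally?
netstat -an | grep 8888
ss -ltn | grep :3000          # modern alternative on Linux

# Does the ComfyStream server answer through the tunnel?
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8888
```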
Next Steps
- Experiment with different ComfyUI workflows
- Explore other real-time AI models
- Join the Livepeer Real-time AI Video Showcase waitlist for updates