Video in -> images -> video out #213

Draft: wants to merge 7 commits into base: realtime-ai-experimental
Conversation

@j0sh commented Sep 25, 2024

Includes:

  • Multi-stage ffmpeg build with CUDA support
  • MediaMTX for video in and out, plus its config
  • Python listener to handle ffmpeg / video (frames2video.py)
  • Runtime Dockerfile using multiprocess (cannot be shut down cleanly yet)
  • README updates explaining how to set up and run the containers
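
The listener's internals aren't spelled out in this PR, so here is a minimal sketch of the frame-handling idea behind a listener like frames2video.py: carving a raw byte stream into fixed-size RGB frames. The frame dimensions and function names are hypothetical, not the actual frames2video.py interface.

```python
# Hypothetical sketch: split a raw rgb24 byte stream into frames.
# Dimensions are placeholders; the real pipeline's sizes may differ.

FRAME_W, FRAME_H, CHANNELS = 512, 512, 3
FRAME_BYTES = FRAME_W * FRAME_H * CHANNELS


def iter_frames(stream, frame_bytes=FRAME_BYTES):
    """Yield complete frames from a file-like byte stream.

    A trailing partial frame (e.g. from a truncated pipe) is dropped.
    """
    while True:
        buf = stream.read(frame_bytes)
        if len(buf) < frame_bytes:
            return
        yield buf
```

In practice the stream would be the stdout of an ffmpeg subprocess decoding the ingest into raw video, along the lines of `ffmpeg -i <input> -f rawvideo -pix_fmt rgb24 -`.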

This should probably be merged with the other WIP PR, #212.

There is not much of an interface yet for the Python parts.

Given an input <stream-name>, transcoded outputs will be available under <stream-name>/out.
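
The naming convention above can be stated as a tiny helper (hypothetical code that mirrors the convention, not the actual implementation):

```python
def output_path(stream_name: str) -> str:
    """Map an input stream name to its transcoded output location,
    per the <stream-name> -> <stream-name>/out convention."""
    return f"{stream_name}/out"
```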

Also see #209 for more details on how to use / test this

This command builds and configures ffmpeg and [MediaMTX](https://github.com/bluenviron/mediamtx) for ingest. Until we have docker-compose or a similar orchestrator, it will need to be run manually in order to listen for incoming connections:

```bash
docker run --gpus all --runtime=nvidia -e NVIDIA_DRIVER_CAPABILITIES=all --rm -it -p 8189:8189/udp -p 8889:8889 -p 1935:1935 -p 9998:9998 livepeer/ai-runner:realtime
```