Conversation

grzegorz-roboflow
Contributor

Description

Proof of concept enabling hosted WebRTC in the inference server:
Let aiortc handle the pace (a rough sketch of this pattern follows below)
Remove all intermediate queues to reduce latency
When multiple WebRTC connections are started against the same server, the server remains able to handle requests
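
Below is a minimal sketch (not the code in this PR) of the "let aiortc handle the pace" idea: frames are pulled on demand inside recv(), so no intermediate queue sits between the peer connection and the model. The InferenceTrack name, the model object, and the draw_predictions() helper are hypothetical stand-ins.

```python
from aiortc import MediaStreamTrack
from av import VideoFrame


class InferenceTrack(MediaStreamTrack):
    kind = "video"

    def __init__(self, source_track, model):
        super().__init__()
        self.source = source_track  # remote track received over WebRTC
        self.model = model

    async def recv(self) -> VideoFrame:
        # aiortc calls recv() only when it is ready to send the next frame,
        # so the connection itself paces inference and nothing queues up.
        frame = await self.source.recv()
        image = frame.to_ndarray(format="bgr24")
        predictions = self.model.infer(image)             # hypothetical model API
        annotated = draw_predictions(image, predictions)  # hypothetical helper
        out = VideoFrame.from_ndarray(annotated, format="bgr24")
        out.pts = frame.pts
        out.time_base = frame.time_base
        return out
```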

Type of change

  • New feature (non-breaking change which adds functionality)

How has this change been tested? Please provide a test case or example of how you tested the change.

Tested locally

Any specific deployment considerations

Each WebRTC connection is handled in a separate process; these processes are automatically disposed of when the peer disconnects (see the sketch below).
Proper load balancing is desirable to evenly distribute load (RAM, GPU, etc.).
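
A rough sketch of that per-connection lifecycle, assuming the worker process owns the RTCPeerConnection and exits once the peer disconnects; the names webrtc_worker and start_connection_process are hypothetical, and the PR's start_worker may be structured differently.

```python
import asyncio
from multiprocessing import Process, Queue

from aiortc import RTCPeerConnection, RTCSessionDescription


def webrtc_worker(offer_sdp: str, offer_type: str, answers: Queue) -> None:
    # One peer connection per process; the process ends when the peer goes away.
    asyncio.run(_serve(offer_sdp, offer_type, answers))


async def _serve(offer_sdp: str, offer_type: str, answers: Queue) -> None:
    pc = RTCPeerConnection()
    closed = asyncio.Event()

    @pc.on("connectionstatechange")
    async def on_state_change() -> None:
        if pc.connectionState in ("failed", "closed", "disconnected"):
            await pc.close()
            closed.set()  # lets the worker process exit and be reaped

    await pc.setRemoteDescription(RTCSessionDescription(sdp=offer_sdp, type=offer_type))
    await pc.setLocalDescription(await pc.createAnswer())
    answers.put({"sdp": pc.localDescription.sdp, "type": pc.localDescription.type})
    await closed.wait()


def start_connection_process(offer_sdp: str, offer_type: str):
    # Spawns one process per WebRTC connection; distributing these workers so
    # RAM/GPU load stays even is the load-balancing concern noted above.
    answers: Queue = Queue()
    process = Process(
        target=webrtc_worker, args=(offer_sdp, offer_type, answers), daemon=True
    )
    process.start()
    return process, answers.get()  # SDP answer to hand back to the client
```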

Docs

N/A

    request: InitialiseWebRTCPipelinePayload,
) -> InitializeWebRTCResponse:
    logger.debug("Received initialise webrtc inference pipeline request")
    *_, answer = await start_worker(
Contributor

Maybe a lightweight process like this is better for the STREAM API in general? Should we use this even when we're not streaming video to the inference server via a WebRTC video track? A video stream process could still be tied to a WebRTC connection, and if the user is processing an RTSP or USB device, they could use just the data channel to manage the process and stream the results back (a rough sketch of that idea follows).
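
If the data-channel-only variant suggested here were pursued, it might look roughly like the sketch below: the server consumes the RTSP (or USB) source itself and only pushes results back over the channel. The stream_rtsp_results name, the model object, and the assumption that predictions are JSON-serializable are hypothetical, and the channel is assumed to have been negotiated during the offer/answer exchange.

```python
import json

from aiortc import RTCPeerConnection
from aiortc.contrib.media import MediaPlayer


async def stream_rtsp_results(pc: RTCPeerConnection, rtsp_url: str, model) -> None:
    # Video never crosses the WebRTC connection: the server reads the source
    # directly and the data channel only carries inference results.
    channel = pc.createDataChannel("inference-results")
    player = MediaPlayer(rtsp_url)

    while channel.readyState not in ("closing", "closed"):
        frame = await player.video.recv()
        image = frame.to_ndarray(format="bgr24")
        predictions = model.infer(image)  # hypothetical model API
        if channel.readyState == "open":
            channel.send(json.dumps(predictions))
```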
