Implementing Video Receiver Pipelines
For general information about receiver pipelines, refer to the Receiver Pipelines section.
This document describes a video processing pipeline designed to handle network-streamed video data efficiently by applying various transformations that optimize visual quality and reduce network and computational load. The `ssdk::video::MonoscopicVideoInput` and `ssdk::video::VideoReceiverPipeline` components, combined with the processing slots, provide a flexible infrastructure to handle format changes, enhance quality, and manage resolution.
Video data is obtained from the `ssdk::transport_common::ClientTransport` interface and relayed to the `ssdk::video::VideoDispatcher` object through the `transport_common::ClientTransport::VideoReceiverCallback` interface. The `ssdk::video::VideoDispatcher` object is responsible for routing the various video streams to their respective video inputs, represented by the `ssdk::video::MonoscopicVideoInput` class. Upon receiving video buffers, `ssdk::video::MonoscopicVideoInput` decodes the compressed frames and forwards the decoded frames to the `ssdk::video::VideoReceiverPipeline` object.
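The exact interfaces are declared in the SDK headers; the following is only a simplified sketch of this relay path, with hypothetical method names (`OnVideoBuffer`, `SubmitBuffer`) and placeholder types standing in for the real signatures:

```cpp
// Hypothetical sketch of the transport-to-dispatcher relay. Method names and
// parameter types are illustrative, not the SDK's actual declarations.
class VideoRelay : public ssdk::transport_common::ClientTransport::VideoReceiverCallback
{
public:
    explicit VideoRelay(ssdk::video::VideoDispatcher& dispatcher) :
        m_Dispatcher(dispatcher) {}

    // Called by ClientTransport for every received compressed video buffer.
    void OnVideoBuffer(StreamID stream, const CompressedBuffer& buffer)
    {
        // The dispatcher looks up the MonoscopicVideoInput registered for this
        // stream; the input decodes the buffer and feeds the decoded frame
        // into its VideoReceiverPipeline.
        m_Dispatcher.SubmitBuffer(stream, buffer);
    }

private:
    ssdk::video::VideoDispatcher& m_Dispatcher;
};
```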
Once decoded (or passed through), video data is submitted to the processing pipeline using the `VideoReceiverPipeline::SubmitInput()` method. The pipeline consists of a series of slots, each containing an `amf::AMFComponent` object.
The slots can be implemented as either synchronous or asynchronous, based on the component's processing characteristics (a sketch of both execution patterns follows the list):
- Synchronous Slots (`ssdk::util::SynchronousSlot`): Execute `SubmitInput` and `QueryOutput` on a single thread. This approach is compatible with components that produce a matching output frame for every input frame.
- Asynchronous Slots (`ssdk::util::AsynchronousSlot`): Execute `SubmitInput` and `QueryOutput` on separate threads, which is ideal for high-complexity components with varied input-to-output ratios.
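The `ssdk::util` slot classes encapsulate these two execution models. Independent of their actual implementation, a minimal sketch of the two patterns against the generic `amf::AMFComponent` interface might look like this (the `deliver` callback is a hypothetical stand-in for whatever consumes the output):

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>

#include "public/include/components/Component.h"

// Synchronous pattern: SubmitInput() and QueryOutput() run on the same
// thread. Suitable when the component emits one output frame per input frame.
amf::AMFDataPtr ProcessSynchronously(amf::AMFComponent* component, amf::AMFSurface* input)
{
    component->SubmitInput(input);
    amf::AMFDataPtr output;
    while (component->QueryOutput(&output) == AMF_REPEAT)
    {
        std::this_thread::yield(); // illustration only; real code would wait properly
    }
    return output;
}

// Asynchronous pattern: a dedicated thread drains QueryOutput() while the
// caller keeps calling SubmitInput() elsewhere, so components with uneven
// input-to-output ratios never stall the submission side.
void PollOutputs(amf::AMFComponent* component, std::atomic<bool>& running,
                 const std::function<void(amf::AMFData*)>& deliver)
{
    while (running)
    {
        amf::AMFDataPtr output;
        if (component->QueryOutput(&output) == AMF_OK && output != nullptr)
        {
            deliver(output); // e.g., forward to the next slot or the presenter
        }
        else
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    }
}
```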
The pipeline contains the following key components, each serving a distinct purpose in video enhancement, resolution handling, and color space conversion:
The video denoiser, implemented using the `AMFVQEnhancer` component of the AMF runtime, improves visual clarity by removing noise and compression artifacts from the video. It enables rendering and streaming at lower resolutions and bitrates, reducing the load on the network and on encoding/decoding resources while enhancing quality.
- Performance Benefits: By reducing the required bitrate and resolution, this component improves streaming efficiency and lowers bandwidth consumption.
- Hardware Requirements: Available only on Windows systems with AMD GPUs/APUs.
- Performance Considerations: This process is computationally intensive, especially on lower-end GPUs, and can be a bottleneck at high resolutions or frame rates.
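As a rough illustration, creating this component through the public AMF API could look like the sketch below (assuming `g_AMFFactory` has already been initialized and the decoder outputs NV12; error handling is trimmed):

```cpp
#include "public/common/AMFFactory.h"
#include "public/include/components/VQEnhancer.h"

// Minimal sketch: create and initialize the AMF Video Quality Enhancer.
// Assumes a Windows host with an AMD GPU/APU and an initialized AMF factory.
amf::AMFComponentPtr CreateDenoiser(amf::AMFContext* context,
                                    amf_int32 width, amf_int32 height)
{
    amf::AMFComponentPtr enhancer;
    AMF_RESULT res = g_AMFFactory.GetFactory()->CreateComponent(context, AMFVQEnhancer, &enhancer);
    if (res != AMF_OK)
    {
        return nullptr; // the enhancer is unavailable on this system
    }
    // Initialize for the decoder's output format and the incoming stream resolution.
    res = enhancer->Init(amf::AMF_SURFACE_NV12, width, height);
    return (res == AMF_OK) ? enhancer : nullptr;
}
```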
The `AMFHQScaler` component provides advanced upscaling for scenarios where the stream resolution is lower than the display resolution. This upscaling is applied only when necessary.
- Performance Benefits: Allows video streaming at a reduced resolution, with high-quality upscaling on the client side, optimizing both network and GPU resource usage.
- Hardware Requirements: Available only on Windows systems with AMD GPUs/APUs.
- Performance Considerations: Like the denoiser, this component is computationally heavy, potentially causing bottlenecks on lower-end GPUs with high resolutions or frame rates.
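A comparable sketch for the scaler, again using the public AMF API (the `AMF_HQ_SCALER_OUTPUT_SIZE` property comes from the AMF SDK's `HQScaler.h`; treat the exact configuration as an assumption to verify against your AMF version):

```cpp
#include "public/common/AMFFactory.h"
#include "public/include/components/HQScaler.h"

// Minimal sketch: create the AMF high-quality scaler and configure it to
// upscale from the stream resolution to the display resolution.
amf::AMFComponentPtr CreateUpscaler(amf::AMFContext* context,
                                    amf_int32 streamW, amf_int32 streamH,
                                    amf_int32 displayW, amf_int32 displayH)
{
    amf::AMFComponentPtr scaler;
    if (g_AMFFactory.GetFactory()->CreateComponent(context, AMFHQScaler, &scaler) != AMF_OK)
    {
        return nullptr;
    }
    // Target resolution; the input resolution is passed to Init() below.
    scaler->SetProperty(AMF_HQ_SCALER_OUTPUT_SIZE, AMFConstructSize(displayW, displayH));
    return (scaler->Init(amf::AMF_SURFACE_NV12, streamW, streamH) == AMF_OK) ? scaler : nullptr;
}
```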
The Video Converter performs essential color space conversion and can also perform resolution scaling (either upscaling or downscaling) when required. This component operates as follows:
- Color Space Conversion: Converts the YUV format output by the decoder to the RGB format required by the presenter.
- Scaling: Uses bilinear or bicubic scalers, but defers scaling to the presenter itself when the high-quality upscaler is not in use. This approach maintains performance by applying only essential conversions within the pipeline.
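A sketch of the corresponding AMF component setup (property names are taken from the AMF SDK's `VideoConverter.h`; the BT.709 color profile is an assumed example, and real code would pick the profile matching the stream):

```cpp
#include "public/common/AMFFactory.h"
#include "public/include/components/VideoConverter.h"

// Minimal sketch: convert the decoder's NV12 (YUV) output to RGBA for the
// presenter, leaving scaling to the presenter or the HQ scaler.
amf::AMFComponentPtr CreateConverter(amf::AMFContext* context,
                                     amf_int32 width, amf_int32 height)
{
    amf::AMFComponentPtr converter;
    if (g_AMFFactory.GetFactory()->CreateComponent(context, AMFVideoConverter, &converter) != AMF_OK)
    {
        return nullptr;
    }
    converter->SetProperty(AMF_VIDEO_CONVERTER_OUTPUT_FORMAT, amf::AMF_SURFACE_RGBA);
    converter->SetProperty(AMF_VIDEO_CONVERTER_COLOR_PROFILE,
                           AMF_VIDEO_CONVERTER_COLOR_PROFILE_709); // assumed example
    return (converter->Init(amf::AMF_SURFACE_NV12, width, height) == AMF_OK) ? converter : nullptr;
}
```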
When a format change occurs in the video stream (e.g., codec, resolution, frame rate, or color space), `MonoscopicVideoInput` invokes `VideoReceiverPipeline::OnInputChanged()` to reinitialize pipeline components for compatibility with the new format. This flexibility enables seamless playback across a variety of network conditions and device capabilities.
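How the pipeline reinitializes its slots is internal to the SDK, but at the AMF level a pure resolution change can often be handled with `AMFComponent::ReInit()`, as in this sketch (a codec or color-space change typically requires a full `Terminate()`/`Init()` cycle instead):

```cpp
#include "public/include/components/Component.h"

// Sketch: react to a resolution change on a single pipeline component.
AMF_RESULT OnResolutionChanged(amf::AMFComponent* component,
                               amf_int32 newWidth, amf_int32 newHeight)
{
    component->Drain();                       // flush frames already in flight
    return component->ReInit(newWidth, newHeight);
}
```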
The video pipeline, which sits between `MonoscopicVideoInput` and the presenter, processes video as follows (a simplified data-flow sketch appears after the list):
- Input Handling: `MonoscopicVideoInput` submits video frames to `VideoReceiverPipeline`.
- Sequential Processing: The pipeline applies denoising, upscaling, and conversion, depending on the stream's resolution, display requirements, and available hardware.
- Synchronous and Asynchronous Execution: Each component is evaluated to determine whether synchronous or asynchronous execution is optimal.
- Final Output: The presenter receives frames for display, with resolution and color matching as required.
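Put together, the flow can be pictured as a frame passing through a chain of slots. This is an illustrative, fully synchronous rendering of that data flow, not the SDK's actual implementation (which manages threading and format changes per slot):

```cpp
#include <thread>
#include <vector>

#include "public/include/components/Component.h"

// Illustrative pass of one decoded frame through a chain of slots
// (e.g., denoiser -> upscaler -> converter) and on to the presenter.
amf::AMFDataPtr RunPipeline(const std::vector<amf::AMFComponentPtr>& slots,
                            amf::AMFSurface* decodedFrame)
{
    amf::AMFDataPtr data(decodedFrame);
    for (const amf::AMFComponentPtr& slot : slots)
    {
        slot->SubmitInput(data);
        amf::AMFDataPtr output;
        while (slot->QueryOutput(&output) == AMF_REPEAT)
        {
            std::this_thread::yield(); // illustration only
        }
        data = output;
    }
    return data; // resolution- and color-matched frame, ready for the presenter
}
```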
To optimize performance, consider the following:
- Denoiser and Upscaler on Capable Hardware: Ensure these components are executed asynchronously if they require heavy processing, especially at high resolutions.
- Experiment with Slot Type: For components with stable input-output behavior, synchronous slots may perform best; asynchronous slots provide flexibility and responsiveness for components with dynamic output rates.
This structure allows the video pipeline to adapt to different hardware and network conditions, providing a high-quality streaming experience on AMD GPUs while efficiently managing computational and network resources.