Guest post - Reliable Video Streaming over Next Generation Networks

Foreword

Ever wondered how our communication networks are going to scale bandwidth to match the needs of next generation multimedia applications such as VR/AR and ultra-high-quality video? In this post, we invited Michele Polese (web: personal, research group) to present the work he co-authored with Matteo Drago, Tommy Azzino, Cedomir Stefanovic and Michele Zorzi as part of the manuscript “Reliable Video Streaming over mmWave with Multi Connectivity and Network Coding”, accepted for publication at IEEE ICNC 2018 and available online at arXiv.

Drago, Azzino, Polese, Stefanovic and Zorzi’s research shows how next generation 5G networks can improve network performance by combining multiple radio technologies such as millimeter wave and LTE. As a key component of their system, they use Random Linear Network Coding (RLNC), provided by Steinwurf’s Kodo library, to enable the utilization of multiple communication links.


Millimeter wave (mmWave) links offer a very high but intermittent data rate, since they are susceptible to blockage. In this project, we studied an efficient way to offer high-quality, low-latency video streaming in next generation 5G cellular networks that also leverage mmWaves. We designed an application-layer solution, which can be deployed without modifying the 3GPP standards and which is based on multi connectivity across LTE (below 6 GHz) and 5G at mmWaves. Random Linear Network Coding (RLNC), provided by the Kodo library, is used to easily manage the transmission process over the different links and to provide error correction. In the next paragraphs we give some details on the architecture we propose, and show the gain it achieves in terms of QoE (Quality of Experience) for the end user.

Video architecture

In our design, the main component is the Video Streaming Server (VSS), which interacts with the video streaming app in the devices (e.g., smartphones, VR/AR headsets). The VSS encodes the video (possibly using advanced features such as spatial and temporal scalability) and splits the bitstream into packets. Then, an additional middle layer uses RLNC to generate encoded packets and distributes them over the LTE and mmWave interfaces.
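To give an intuition of what this RLNC middle layer does, here is a minimal sketch over GF(2), i.e., XOR-based coding. Note that this is not the Kodo API (Kodo typically operates over larger finite fields such as GF(2^8)); the function name `rlnc_encode` and its parameters are hypothetical, chosen for this illustration only:

```python
import random

def rlnc_encode(packets, num_coded, seed=None):
    """Generate coded packets as random GF(2) linear combinations (XOR)
    of equally sized source packets. Each coded packet carries its
    coefficient vector so the receiver can decode by Gaussian elimination."""
    rng = random.Random(seed)
    size = len(packets[0])
    coded = []
    for _ in range(num_coded):
        # Draw a random coefficient vector over GF(2); avoid the all-zero vector.
        coeffs = [rng.randint(0, 1) for _ in packets]
        if not any(coeffs):
            coeffs[rng.randrange(len(coeffs))] = 1
        # XOR together the source packets selected by the coefficients.
        payload = bytes(size)
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded
```

The coefficient vector adds a small per-packet overhead (one bit per source packet in this GF(2) sketch), in exchange for the flexibility of sending any coded packet over any interface.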

The joint usage of LTE and mmWave combines the reliability of the LTE channel (as compared to mmWave) with the high data rate of the mmWave channel. Moreover, the management of the multiple interfaces is simplified by network coding: since all RLNC encoded packets are equivalent when it comes to decoding the original video sequence, it is possible to transmit (or retransmit) an encoded packet using any of the available interfaces.
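The interchangeability of coded packets can be seen in a toy GF(2) decoder (again an illustrative sketch, not the Kodo implementation; `rlnc_decode` is a hypothetical name): decoding only requires any set of linearly independent coefficient vectors, regardless of which interface delivered each packet:

```python
def rlnc_decode(coded, num_source):
    """Recover the source packets from coded (coeffs, payload) pairs via
    Gaussian elimination over GF(2). Returns None if the received packets
    do not yet have full rank, i.e., more coded packets are needed."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    size = len(rows[0][1])
    for col in range(num_source):
        # Find a pivot row with a 1 in this column.
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None  # rank deficient
        rows[col], rows[pivot] = rows[pivot], rows[col]
        # Eliminate this column from all other rows (XOR of coeffs and payloads).
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                for k in range(num_source):
                    rows[r][0][k] ^= rows[col][0][k]
                for k in range(size):
                    rows[r][1][k] ^= rows[col][1][k]
    return [bytes(rows[i][1]) for i in range(num_source)]
```

In the test below the first coded packet could have arrived over LTE and the others over mmWave; the decoder does not care, which is what makes (re)transmission scheduling across interfaces so simple.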

The results obtained by our framework clearly show the benefit of combining multi connectivity and network coding. In particular, the performance evaluation has been carried out using the ns-3 mmWave module and the Kodo APIs for ns-3. We used on-the-fly encoder/decoder pairs to reduce latency and, if needed, provide additional redundancy. Moreover, we injected real video traces in the simulator, and reconstructed the output in order to evaluate the PSNR (a video quality metric that decreases as the distortion of the received stream with respect to the original one increases).
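For reference, PSNR is computed from the mean squared error (MSE) between the original and reconstructed frames as PSNR = 10 log10(MAX^2 / MSE). A minimal sketch, assuming 8-bit pixel values given as flat sequences:

```python
import math

def psnr(original, received, max_value=255):
    """Peak signal-to-noise ratio in dB between two frames of equal size.
    Higher PSNR means lower distortion; identical frames give infinity."""
    mse = sum((o - r) ** 2 for o, r in zip(original, received)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_value ** 2 / mse)
```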

[Figure: PSNR of the received video]

As shown in this figure, the network-coding-based forward error correction manages to achieve a high PSNR, at the price of additional latency. When multi connectivity is also used, it is possible to reduce latency without compromising video quality.

References

More details can be found in the paper where we describe this project, namely: Matteo Drago, Tommy Azzino, Michele Polese, Cedomir Stefanovic, Michele Zorzi, “Reliable Video Streaming over mmWave with Multi Connectivity and Network Coding”, IEEE ICNC 2018 (available on arXiv: https://arxiv.org/abs/1711.06154). The ns-3 mmWave module can be found at this link: https://github.com/nyuwireless-unipd/ns3-mmwave.
