Deploying Content Aware FEC Coding to deliver ultra-low latency video streaming


There are plenty of challenges when sending latency-sensitive video content over a network. The Internet really doesn’t care about your content. In general, it will pat itself on the back if data gets from one end to the other, regardless of the delay or the order in which the data is received. But performance really matters to the end user, as it seriously affects their Quality of Experience (QoE).

In this post we will consider what can happen when we want to protect the traffic coming from a latency-sensitive application such as real-time video streaming, where the structure of the content is important. Services delivering this type of content need to be optimised to avoid interruptions.

It won’t matter if the service was great 99.9% of the time, because that means 0.1% of the time there was something missing. If we’re streaming video, that means for every 1,000 seconds, we’ll have roughly 1 second where the video drops out, or lags, or pauses, or has to dial down the quality of the stream. That means every 16 minutes or so there’s going to be a problem, and many users will think that’s just not good enough, so customers will flock to another platform for their video fix. This is especially true in a B2B or professional scenario, for example if the user wants to record an interview for later broadcast.
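The availability arithmetic above can be checked in a couple of lines of Python:

```python
# Sanity check: 99.9% availability means 0.1% of the time is bad.
availability = 0.999
outage_fraction = 1 - availability            # 0.1%
seconds_per_bad_second = 1 / outage_fraction  # one bad second per ~1000 s
minutes_between_glitches = seconds_per_bad_second / 60
print(round(seconds_per_bad_second))          # 1000
print(round(minutes_between_glitches, 1))     # 16.7
```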

In practice, no network is perfect, and there will regularly be issues to fix and packet losses to overcome. The trick is to package a solution into the service that can correct these issues on the fly, without the user being aware. That’s only possible if the solution can adapt to whatever problems occur on the network, whilst being aware of how the content needs to be delivered and consumed, and optimising for that.

The economics of a service are also at stake, from both a revenue generation and a cost savings standpoint. There are literally hundreds of competing services out there, so if the QoE is poor, even intermittently, users will shy away from the service and take their subscriptions and custom to a competitor without the connectivity issues, directly affecting your revenue stream.

If the video streaming service includes the latest technology to make it more efficient, it will be less costly to actually deliver the service at scale, meaning more customers can be reached with the same cost base and infrastructure.

Content-aware coding (example: video streaming)

Let’s stick with video streaming as an example, but in general this should hold true for most other types of content delivery where boundaries in the content and in-order delivery are important.

The problem:

Video content, whether live or recorded, is designed to be consumed in order, frame by frame. Let us consider a live video streaming service using H264 video compression. The typical frame rate for such an application is 30 fps (frames per second). This means that a frame is generated every 1/30 s ≈ 33 ms, and since you can’t make sense of video if you receive the frames all jumbled up, 33 ms becomes the latency budget for receiving each frame. The frames will have different sizes and therefore translate into a variable number of network packets (in ECC/FEC terminology we refer to these as source symbols). The reason H264 produces different-sized frames is that the compression depends on the amount of activity in the video. If lots of stuff is happening on screen, the compression will be less efficient and the frames will be larger; if very little is changing from frame to frame, the compression can be very efficient and the frames very small. In addition, H264 can be configured to regularly output what is called an I-frame, which contains all the information a receiver needs to start decoding the video and is required for joining a stream on the fly or catching up after a loss of data. These frames tend to be quite large.
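As a rough sketch (not tied to any particular encoder API), here is how variable-sized compressed frames turn into a variable number of source symbols, with the 33 ms budget derived from the frame rate. The payload size and frame sizes below are invented values for illustration:

```python
# Illustrative only: how variable-sized frames map onto network
# packets ("source symbols" in ECC/FEC terminology).

FPS = 30
FRAME_INTERVAL_MS = 1000 / FPS          # ~33.3 ms latency budget per frame
PAYLOAD = 1200                          # assumed payload bytes per packet

def packetize(frame: bytes, payload: int = PAYLOAD) -> list:
    """Split one compressed frame into fixed-size source symbols."""
    return [frame[i:i + payload] for i in range(0, len(frame), payload)]

# Hypothetical encoder output: one large I-frame, then smaller P-frames.
frame_sizes = [6000, 1800, 2400]        # bytes, invented for the example
symbols_per_frame = [len(packetize(bytes(s))) for s in frame_sizes]
print(symbols_per_frame)                # [5, 2, 2]
```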

Let’s look at an example output from an H264 video encoder:

H264 video encoder

As can be seen, each source symbol belongs to a specific frame (indicated by its colour).

Let’s look at how this maps to an ECC/FEC algorithm. In this specific example we’ve configured the encoder to produce one repair packet for every five input packets. This would translate into the following network flow:

ECC/FEC algorithm - Network Flow

The problem you can see here is that the repair for frame 0 is not scheduled until after frame 1 has been produced by the H264 video encoder. This means that if one of the packets from frame 0 is lost, we have to wait 33 ms for its repair. In the meantime, if there’s an inadequate buffer of frames on the device, the user will experience jittery and laggy video, or suffer a quality reduction until all the packets needed to view the next video frame at full resolution are delivered.
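The scheduling problem can be reproduced with a small simulation. The per-frame symbol counts below are assumptions chosen to match the illustration (frame 0 ends at s2, frame 1 at s4, and so on); the point is that a fixed five-symbol repair interval ignores those boundaries:

```python
# Sketch of a block code that emits one repair packet per five source
# symbols, regardless of where frames begin and end.

BLOCK = 5
frames = [3, 2, 3, 3]   # symbols per frame: s0-s2, s3-s4, s5-s7, s8-s10

schedule = []           # resulting network flow
sent = 0
for n in frames:
    for _ in range(n):
        schedule.append(f"s{sent}")
        sent += 1
        if sent % BLOCK == 0:
            schedule.append("repair")

print(schedule)
# The first "repair" lands only after s4, i.e. after all of frame 1
# has been produced, so a loss in frame 0 waits a full frame
# interval (~33 ms) for its repair.
```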

Of course this problem gets even worse if the repair interval is large and the frame sizes are small. An example of this problem is illustrated here:

99ms delay

In this more realistic case we have accumulated a 99 ms delay before we are able to generate any repair for the video frames. This means a user will be even more exposed to a jittery and unsynchronised video stream.
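With assumed numbers (two symbols per frame, repair every eight symbols) the same style of back-of-the-envelope calculation shows where the 99 ms comes from:

```python
# Small frames plus a long repair interval: frame 0 is finished in the
# first frame interval, but its first repair only appears after the
# eighth symbol, three frame intervals later.

BLOCK = 8              # assumed repair interval in symbols
SYMBOLS_PER_FRAME = 2  # assumed (small) frame size in symbols
FRAME_MS = 33          # rounded 1/30 s frame interval

frames_before_repair = BLOCK // SYMBOLS_PER_FRAME     # 4 frames sent
extra_wait_ms = (frames_before_repair - 1) * FRAME_MS
print(extra_wait_ms)   # 99
```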

The solution: 

Adapt the FEC to the content: what we call content-aware coding.

 
Steinwurf’s Rely

In order to avoid this accumulated latency in the block code ECC/FEC algorithm, we need the ability to adapt the coding to the structure of the content. One possible solution is to use an RLNC based sliding window code such as Steinwurf’s Rely.

Using Rely we can trigger repair after each frame, ensuring that a specific frame is protected immediately, without delay. So, in the above example Rely would insert a repair packet at least after s2, s4, s7 and s10. This adaptable ‘content-aware’ approach helps ensure that the end user never notices packet loss on the network, and instead experiences completely smooth video playback. In fact, the repair created in the above example covers two frames, so if packets are lost from either of the most recent two frames, the injected repair will fix it. Data from video frames before that is no longer relevant, so it doesn’t need to be covered by the repair packet.
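The content-aware alternative can be sketched the same way. This is not Rely’s actual API, just the scheduling idea: a repair packet is triggered at every frame boundary instead of at a fixed symbol count:

```python
# Content-aware scheduling: emit repair immediately after each frame,
# so every frame is protected without waiting for a block to fill.

frames = [3, 2, 3, 3]   # symbols per frame, as in the earlier example

schedule = []
sent = 0
for n in frames:
    for _ in range(n):
        schedule.append(f"s{sent}")
        sent += 1
    schedule.append("repair")   # repair right after the frame boundary

print(schedule)
# Repair now follows s2, s4, s7 and s10 (the frame boundaries), so a
# loss inside frame 0 can be repaired before frame 1 even arrives.
```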

The video is protected on a per-frame basis. Even if packet losses occur that would normally affect a video frame, Rely codes in a way that is optimised for delivering the video content, so each frame has protection built into it. The end user can enjoy an uninterrupted video and, having experienced none of the usual connection issues, continue to recommend the service to their colleagues, friends and family.

A reminder of why this matters

By including Rely and its content-aware coding features in a video streaming solution, you can be sure that the deployed service provides super-low-latency streaming, and is flexible and adaptable enough to overcome connection issues on the way to the customer. Since the end customer won’t notice any connection issues with Rely working in the background, losing customers over technical issues won’t be a problem. Furthermore, the cost and network traffic savings from avoiding retransmissions to each client can stack up, allowing you to serve more customers with the same network utilisation, but with a much higher quality of experience.

So even though the Internet doesn’t care about the quality of your video stream, you can make sure your customers know that you care about their User Experience by using the latest and most flexible FEC solution to protect the stream and ensure video can be enjoyed the way it was meant to be.

Serve your customers better with Rely

You can read more about Rely here. To try out Rely or to discuss other ways Rely could improve your low latency video streaming application, please email us at contact@steinwurf.com.

 