In some of our previous posts we have discussed the challenges of implementing a reliability scheme for low-latency applications. In Coding for low latency (block codes) we discussed how large block codes such as RaptorQ or LDPC can be problematic in low-latency communication, and in What about the latency, stupid we discussed the latency penalty of using a retransmission-based reliability scheme. In this post we will consider what can happen when we want to protect traffic from a latency-sensitive application where the structure of the content is important.
Content-aware coding (e.g. video streaming)
Here we will use video streaming as an example, but in general this should hold true for most other types of content delivery where boundaries in the content are important.
Let us consider a live video streaming service using H264 video compression. A typical frame rate for such an application is 30 fps (frames per second), which means a frame is generated every 1/30 s ≈ 33 ms. The frames have different sizes and therefore translate into a variable number of network packets (in ECC/FEC terminology we refer to these as source symbols). H264 produces frames of different sizes because the compression depends on the amount of activity in the video: if a lot is happening, the compression is less efficient and the frames are larger; if very little changes from frame to frame, the compression can be very efficient and the frames very small. In addition, H264 can be configured to regularly output what is called an I-frame, which contains all the information a receiver needs to start decoding the video. I-frames are required for joining a stream on the fly or for catching up after a loss of data, and they tend to be quite large.
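To make the frame-to-packet mapping concrete, here is a minimal sketch of how variable-size frames turn into a variable number of source symbols. The frame sizes and the 1400-byte payload are illustrative assumptions, not measurements from a real encoder.

```python
# Sketch: how variable-size H264 frames map to network packets
# (source symbols). Sizes below are hypothetical.

PAYLOAD_SIZE = 1400  # bytes per packet, a typical MTU-safe payload


def symbols_for_frame(frame_bytes: int) -> int:
    """Number of source symbols needed to carry one frame."""
    return -(-frame_bytes // PAYLOAD_SIZE)  # ceiling division


# A large I-frame followed by smaller P-frames (hypothetical sizes)
frames = {"I-frame": 12000, "P-frame 1": 2500, "P-frame 2": 900}
for name, size in frames.items():
    print(f"{name}: {size} bytes -> {symbols_for_frame(size)} symbols")
```

The point is simply that the number of source symbols per frame is not constant, which is what causes trouble for a fixed repair schedule below.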
Let's look at an example output from an H264 video encoder:
As can be seen, each source symbol belongs to a specific frame (indicated by its color).
Let's look at how this maps to an ECC/FEC algorithm. In this example we have configured the encoder to produce one repair packet for every five input packets, which translates into the following network flow:
The problem we can observe here is that the repair for frame 0 is not scheduled until after frame 1 is produced by the H264 video encoder. This means that if one of the packets from frame 0 were lost, we would have to wait 33 ms for its repair.
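We can quantify this wait with a small simulation: frames arrive one frame interval apart, a repair packet is emitted after every fifth source packet, and we measure how many extra frame intervals each frame must wait before a repair packet covering it can be sent. The per-frame packet counts are hypothetical.

```python
# Sketch: worst-case wait for repair under a fixed "1 repair per 5
# source packets" schedule. Frame sizes (in packets) are hypothetical.

FRAME_INTERVAL_MS = 33   # 30 fps
REPAIR_INTERVAL = 5      # one repair packet per 5 source packets


def repair_delay_ms(packets_per_frame):
    """For each frame, return how long after the frame is generated
    the first repair packet covering it can be sent."""
    delays = []
    sent = 0  # source packets sent so far
    for i, n in enumerate(packets_per_frame):
        sent += n
        # next multiple of REPAIR_INTERVAL at or beyond 'sent'
        needed = -(-sent // REPAIR_INTERVAL) * REPAIR_INTERVAL
        # count how many later frames must arrive to reach 'needed'
        extra, total = 0, sent
        for m in packets_per_frame[i + 1:]:
            if total >= needed:
                break
            total += m
            extra += 1
        delays.append(extra * FRAME_INTERVAL_MS)
    return delays


# Frame 0 has 3 packets: its repair only comes after frame 1 arrives
print(repair_delay_ms([3, 2, 5]))
# Tiny one-packet frames: the delay accumulates over several frames
print(repair_delay_ms([1, 1, 1, 1, 1]))
```

With one-packet frames, the first frame waits several frame intervals before any repair covering it exists, which is exactly the accumulated-delay problem discussed next.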
Of course this problem gets even worse if the repair interval is large and the frame sizes are small. An example of this problem is illustrated here:
In this extreme case we have accumulated a 99 ms delay before we are able to generate any repair for the video frames.
In order to avoid this accumulated latency in the ECC/FEC algorithm we need the ability to adapt the coding to the structure of the content. One possible solution is to use an RLNC based sliding window code such as Rely.
Using Rely we can trigger repair after each frame, ensuring that a specific frame is protected immediately, without waiting for packets from subsequent frames.
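The scheduling idea can be sketched as follows. Note this is a stand-in encoder, not Rely's actual API, and the XOR over the window is the simplest possible "repair" symbol (it can only recover a single loss); it is only meant to illustrate flushing repair at frame boundaries rather than at fixed packet counts.

```python
# Sketch of content-aware repair scheduling: emit a repair packet at
# every frame boundary instead of every N source packets.
# Hypothetical encoder, not Rely's actual API.


def xor_repair(window):
    """XOR all symbols in the window into one repair symbol."""
    size = max(len(sym) for sym in window)
    out = bytearray(size)
    for sym in window:
        for i, b in enumerate(sym):
            out[i] ^= b
    return bytes(out)


def encode_stream(frames):
    """Yield ('source', pkt) and ('repair', pkt) in send order,
    flushing a repair packet at each frame boundary."""
    window = []
    for frame_symbols in frames:
        for sym in frame_symbols:
            window.append(sym)
            yield ("source", sym)
        # frame boundary: protect this frame immediately
        yield ("repair", xor_repair(window))


frames = [[b"f0p0", b"f0p1"], [b"f1p0"]]
for kind, pkt in encode_stream(frames):
    print(kind, pkt)
```

Because the repair for frame 0 is sent right after frame 0's last source packet, a single loss within the frame can be recovered without waiting 33 ms for the next frame to fill the repair interval.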
You can read more about Rely at https://rely.steinwurf.com. If you have any questions, or would like to discuss how Rely could work for your application, please feel free to contact us at email@example.com.