These benchmarks were run by Steinwurf ApS to measure the performance of the Kodo erasure coding library, which implements Random Linear Network Coding (RLNC) codecs. Our results show that, for low-delay applications, Kodo significantly outperforms Codornices RaptorQ.
In our previous post "What about the latency, stupid" we showed how an erasure code is a valuable tool for any latency-sensitive application (video conferencing, online gaming, etc.). In this post we will look at a popular family of erasure coding algorithms and show how choosing the wrong algorithm might increase latency instead of decreasing it.
Reliable and low-delay communication is a fundamental property required by many applications on the internet today. When it works you never notice it - but when it fails, the user experience is terrible. If you've ever tried video conferencing, you know the feeling of distress when the picture and audio constantly break up.
Today, low-latency communication is an important feature - and for good reason! Many applications require low-latency networks to function properly. But latency, i.e. the delay between when a data packet is sent and when it is received, is not an easy metric to improve.
Millimeter wave (mmWave) links offer a very high but intermittent data rate due to blockage. In this project, we studied an efficient way to offer high-quality, low-latency video streaming in next-generation 5G cellular networks that leverage mmWaves. We designed an application-layer solution that can be deployed without modifying the 3GPP standards and that is based on multi-connectivity across LTE (below 6 GHz) and 5G at mmWaves. Random Linear Network Coding (RLNC), provided by the Kodo library, is used to easily manage the transmission process over the different links and to provide error correction. In the next paragraphs we will give some details on the architecture we propose, and show the gain it achieves in terms of QoE for the end user.
Data sent over the Internet will traverse a large number of smaller links. The combination of these links forms a path from the sender to the receiver. Sometimes data is lost when it traverses this path. For video and audio streams such losses can lead to playback glitches if not corrected.
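The simplest way to see how such losses can be corrected is with a single XOR repair packet. The sketch below is purely illustrative (it is not the Kodo API): one parity packet protects a group of packets, so any single loss on the path can be repaired without a retransmission.

```python
# Hypothetical illustration (not the Kodo API): protect a group of
# packets with one XOR parity packet so a single loss can be repaired.

def xor_parity(packets):
    """Return a repair packet that is the byte-wise XOR of all packets."""
    parity = bytearray(len(packets[0]))
    for packet in packets:
        for i, byte in enumerate(packet):
            parity[i] ^= byte
    return bytes(parity)

original = [b"aaaa", b"bbbb", b"cccc"]
repair = xor_parity(original)

# Suppose the second packet is lost on the path; XORing the survivors
# with the repair packet recovers it.
recovered = xor_parity([original[0], original[2], repair])
assert recovered == original[1]
```

This is the weakest possible erasure code (it tolerates only one loss per group), but it shows the principle: redundancy computed across packets lets the receiver repair losses locally instead of stalling playback.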
Imagine the following problem: you have to communicate a long and important message to a group of friends in a noisy environment, but your friends have no way of communicating back to you. How can this be solved? Because your friends cannot communicate back, you cannot know which parts of the message have been heard and which are missing. One simple way to solve this problem would be to repeat the message again and again - eventually the entire message will get through. For humans this would probably be a reasonable solution, but if the communicating entities were computers, we could solve this problem in a much more efficient way.
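The more efficient way is the idea behind RLNC: instead of repeating the k message parts verbatim, the sender keeps emitting random linear combinations of them, and a listener who catches any k combinations with linearly independent coefficients can solve for the original parts - no feedback channel required. The following is a toy sketch over GF(2) (a hypothetical illustration, not Kodo's API; Kodo supports larger fields as well):

```python
# Toy RLNC sketch over GF(2): random XOR combinations plus Gaussian
# elimination at the receiver. Illustrative only, not Kodo's API.
import random

def encode(parts, rng):
    """One coded packet: a random GF(2) combination of the message parts."""
    coeffs = [rng.randint(0, 1) for _ in parts]
    if not any(coeffs):                        # avoid the useless all-zero packet
        coeffs[rng.randrange(len(parts))] = 1
    payload = bytearray(len(parts[0]))
    for coeff, part in zip(coeffs, parts):
        if coeff:
            for i, byte in enumerate(part):
                payload[i] ^= byte
    return coeffs, bytes(payload)

def decode(coded, k):
    """Gauss-Jordan elimination over GF(2); coded is a list of (coeffs, payload)."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(k):
        # Raises StopIteration if we do not yet have k independent packets.
        pivot = next(r for r in range(col, len(rows)) if rows[r][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r][0][:] = [a ^ b for a, b in zip(rows[r][0], rows[col][0])]
                for i in range(len(rows[r][1])):
                    rows[r][1][i] ^= rows[col][1][i]
    return [bytes(rows[i][1]) for i in range(k)]

rng = random.Random(1)
message = [b"HELLO-", b"NOISY-", b"WORLD!"]
coded, decoded = [], None
while decoded is None:                         # keep listening, no feedback needed
    coded.append(encode(message, rng))
    if len(coded) >= len(message):
        try:
            decoded = decode(coded, len(message))
        except StopIteration:                  # not yet full rank, keep listening
            pass
assert decoded == message
```

The point of the sketch: the sender never needs to know which packets were lost, and every received combination is (with high probability) useful - which is exactly what repetition fails to guarantee.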
An important goal at Steinwurf is to provide software libraries that enable others to accelerate time-to-market or build proof-of-concept demos. One of our key software libraries is Kodo, which implements erasure correcting codes (ECC), also known as forward erasure correction (FEC) algorithms. You can read more about the features of Kodo here.
An increasing number of services and products are made viable by what are essentially slow computers connected through slow wireless networks, e.g. IoT systems. Typically, in such systems deployment costs and battery life are important, and communicating with the devices is often expensive, e.g. due to high bandwidth costs or the drain on the device's battery. Fleet management, equipment monitoring, metering, etc. are areas where some of these costs can be substantial. One specific problem in such a system is file (or object) delivery to multiple deployed units, e.g. firmware/software updates, which occur more and more frequently, or distribution of data that needs to be available locally on the unit.
Where Are We?
Often it is critical for network operators to be able to answer the question "How good is good enough?", or in other words, "What is the absolute minimum level of service reliability that can be provided?". This issue may appear trivial, but it is not - especially when it comes to broadcast or multicast communications.
Today many wireless networks are still used as if they were wired, i.e. even though the same data is sent to all receivers, it is done so sequentially - to one receiver at a time. This method of communication is called unicast. However, in most wireless networks an alternative and more bandwidth-efficient method for transmitting the same data to many receivers simultaneously is to use broadcast or multicast. Using multicast we can parallelize the transmission and send to all receivers at the same time.
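A back-of-the-envelope comparison makes the bandwidth difference concrete. The model below is an assumption for illustration (not a Kodo benchmark): sequential unicast with retransmissions costs roughly n * k / (1 - p) transmissions to deliver k packets to n receivers over a channel with loss probability p, while a coded broadcast needs only about k / (1 - p) transmissions, independent of the number of receivers.

```python
# Assumed simplified model, for illustration only: i.i.d. packet loss
# with probability p, ideal retransmission for unicast, and ideal
# rateless coding for broadcast (field-size overhead ignored).

def unicast_cost(k, n, p):
    """Expected transmissions when each receiver is served one by one."""
    return n * k / (1 - p)

def coded_broadcast_cost(k, p):
    """Expected transmissions when all receivers listen to one coded stream."""
    return k / (1 - p)

k, n, p = 100, 20, 0.1
print(unicast_cost(k, n, p))        # ~2222 transmissions
print(coded_broadcast_cost(k, p))   # ~111 transmissions, regardless of n
```

Under these (idealized) assumptions the coded broadcast is cheaper by a factor of n - which is why multicast plus erasure coding is so attractive in wireless networks.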
What does a packet loss mean? Packet losses observed at higher layers of the IEEE 802.11 protocol stack are produced in part by data packets dropped by the network interfaces at lower layers. One of the reasons is errors during wireless transmission that could not be recovered by the standard channel codes, i.e., the channel code attempted a correction but decoded to an erroneous message. Unsuccessful error corrections are usually detected at the data link layer, and as a consequence these packets are dropped - and there we have a loss! However, a question arises: are these packets completely useless, as the data link layer would have us believe? Actually, no.
Watching football at the stadium is a captivating experience that draws thousands of people, despite the fact that the viewing angle and replays you get from your TV at home are often superior. But it does not need to be this way. It is quite easy to imagine a drone spotting the right angle and feeding the video directly to every spectator's smartphone. This requires multicasting, which in its standard form is inherently unreliable and therefore unfit for applications such as live video streaming.
Two of my coworkers, Sreekrishna Pandi and Robert-Steve Schmoll (TU Dresden) from 5G Lab Germany, and I, Patrik J. Braun (BME-AUT), were demonstrating three examples of our work at CES'17 and CCNC'17 in Las Vegas. The three demonstrations showed different aspects of next-generation networking. For the first demonstration, I used Steinwurf's Kodo JS library to build a browser-based application for peer-to-peer assisted Video on Demand. It was this demonstration that won us the award at CCNC'17:
As a Linux C/C++ developer, did you ever encounter the "error while loading shared libraries" error when launching an executable, even though the apparently missing library is located in the same folder as the executable? In this blog we investigate WHY that happens, and how to solve it in a way other than the linker's rpath option.
We are pleased to announce the first release of our new open source, lightweight C++ library Bitter, a library for writing bit/byte fields into a data container. Here I will present our motivation for creating Bitter and go through a simple example of writing and reading bit/byte fields using Bitter.
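To illustrate what "writing and reading bit/byte fields" means, here is a minimal Python sketch of the idea. This is purely conceptual - Bitter itself is a C++ library with its own API - but the mechanics are the same: several small fields are packed into one value and later unpacked in the same order.

```python
# Conceptual sketch of bit-field packing (NOT Bitter's API): pack
# several small fields into one integer, least-significant field first,
# then read them back out.

def write_fields(widths, values):
    """Pack each value into a field of the given bit width."""
    packed, shift = 0, 0
    for width, value in zip(widths, values):
        assert value < (1 << width), "value does not fit in its field"
        packed |= value << shift
        shift += width
    return packed

def read_fields(widths, packed):
    """Unpack the fields again, in the same order they were written."""
    values = []
    for width in widths:
        values.append(packed & ((1 << width) - 1))
        packed >>= width
    return values

widths = [1, 3, 4]                    # e.g. a flag, a type, a small counter
packed = write_fields(widths, [1, 5, 9])
assert read_fields(widths, packed) == [1, 5, 9]
```

The appeal of a dedicated library is that the widths, bounds checking, and bit offsets shown here by hand are handled for you, with compile-time checks in the C++ case.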
The video below illustrates systematic decoding of data - in this case a bitmap image. The left side is a view of the "received" data during the decoding, and the right side is a view of the decoding matrix. Here we are using erasure correcting codes (ECC).