Kodo Throughput Benchmarks Comparison (RaptorQ)

These benchmarks were run by Steinwurf ApS to measure the performance of the Kodo erasure coding library, which implements Random Linear Network Coding (RLNC) codecs. Our results show that, for low-delay applications, Kodo significantly outperforms Codornices RaptorQ.

Benchmark Description

All tests are performed over a block of K x 1280 bytes, i.e. K symbols of 1280 bytes each. From each block the encoder generates a set of coded symbols; this process is called encoding. Decoding is the reverse process, in which the decoder processes the coded symbols and reconstructs the original block of data. Performance is measured in Gbps (gigabits per second) as the size of the original block divided by the average time needed for encoding and decoding, respectively. The reported number is the average of 100 test runs. All benchmarks are single-threaded, i.e. they only utilize a single CPU core.
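
To make the throughput figure concrete, the short Python sketch below shows how a single timed run converts into Gbps. The encode/decode times used here are hypothetical placeholders (chosen so the output roughly matches the K = 100 row below); they are not calls into the Kodo API, which times the encoder and decoder directly and averages 100 runs.

    # Minimal sketch: converting one timed run into the reported Gbps figure.
    # The timing values are hypothetical placeholders, not measurements.

    K = 100                       # number of symbols in the block
    SYMBOL_SIZE_BYTES = 1280      # size of each symbol
    block_bits = K * SYMBOL_SIZE_BYTES * 8

    encode_time_s = 25.6e-6       # hypothetical average encoding time (seconds)
    decode_time_s = 38.8e-6       # hypothetical average decoding time (seconds)

    encode_gbps = block_bits / encode_time_s / 1e9
    decode_gbps = block_bits / decode_time_s / 1e9

    print(f"Encoding: {encode_gbps:.1f} Gbps, Decoding: {decode_gbps:.1f} Gbps")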

Results

Test system: Debian 10, Dell Optiplex / Intel i7-4770 @ 3.4 GHz, systematic coding (10% packet loss):

Number of symbols (K)    Encoding throughput (Gbps)    Decoding throughput (Gbps)
16                       88                            56
50                       64                            44
100                      40                            26.4
500                      11.2                          7.2
1000                     5.68                          3.68

For comparison, we have included the performance numbers from the Codornices RaptorQ (release 2) implementation, available here: https://www.codornices.info/performance. Compared with the reported RaptorQ numbers, RLNC performance is equal when the number of symbols (K) is 1000, and RLNC is faster for K ≤ 500. Note that for low-latency applications a smaller K is desirable to avoid latency building up in the erasure coding (e.g. roughly 2 seconds of latency in the case of 1000 symbols). These latency numbers are calculated for a symbol size of 1280 bytes and a 5 Mbit/s stream; find more information about latency and erasure coding here: Coding for low latency (block codes).
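
To show where the roughly 2-second figure comes from, the sketch below computes the buffering latency of one block as block size divided by stream rate, using the 1280-byte symbol size and 5 Mbit/s rate mentioned above. The assumption that a full block must be buffered before it can be encoded and decoded is ours, made for illustration.

    # Latency added by buffering one block of K symbols before coding.
    # Symbol size and stream rate are taken from the text above.

    SYMBOL_SIZE_BYTES = 1280
    STREAM_RATE_BPS = 5_000_000   # 5 Mbit/s

    for k in (16, 50, 100, 500, 1000):
        block_bits = k * SYMBOL_SIZE_BYTES * 8
        latency_s = block_bits / STREAM_RATE_BPS
        print(f"K = {k:4d}: ~{latency_s:.2f} s of buffering latency")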

This blog post is also available as a leaflet here (PDF).

If you want to experiment with erasure codes for your use case, or have a chat about how they could improve your users’ quality of experience, get in touch at contact@steinwurf.com.
