Scalable Content Distribution in Wireless Networks

Today many wireless networks are still used as if they were wired, i.e. even though the same data is sent to all receivers, it is sent sequentially - to one receiver at a time. This method of communication is called unicast. However, most wireless networks support an alternative and more bandwidth-efficient method for transmitting the same data to many receivers simultaneously: broadcast or multicast. Using multicast we can parallelize the transmission and send to all receivers at the same time.

The basic principle behind unicast and multicast is illustrated in the figures below. First, unicast, where packets are sent to each receiver one at a time.

[Figure: Unicast example]

Second, multicast, where each packet sent is received by all receivers at the same time.

[Figure: Multicast example]

As can be seen in the figures, multicast offers a significantly more efficient way of sending data to many receivers simultaneously. So why isn't multicast commonly used to deliver data? The answer typically comes down to reliability, namely whether it can be guaranteed that all data will reliably reach all the intended receivers. When unicasting, most communication systems provide some form of reliability mechanism. This means that application writers can rely on the communication system to reliably deliver the sent data to the intended application - most applications used over the Internet today, such as web browsing, email and video streaming, rely on this fact. However, when multicasting, such guarantees often do not exist. This makes it hard for applications to take advantage of the improved efficiency of multicast in networks such as WiFi.

To efficiently implement reliability in multicast systems, one needs an efficient way to retransmit lost data to multiple receivers at the same time - even if the receivers lost different parts of the data. Luckily such algorithms exist; they are called erasure correcting codes. Erasure correcting codes represent the data as mathematical equations and make it possible for multiple receivers to recover different lost data segments from the same equations. In the following video we describe the basic principle of an erasure correcting code.
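To make the idea concrete, here is a minimal sketch in Python of the simplest erasure correcting code: a single XOR parity packet. The packet contents and helper function are our own illustration, not any Steinwurf API; the point is that one repair packet can fix a different loss at each receiver.

```python
# Minimal XOR-parity erasure code sketch (illustrative only).
# One repair packet, p1 XOR p2 XOR p3, can fix a *different*
# single loss at every receiver.

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Sender: three data packets plus one repair packet.
p1, p2, p3 = b"AAAA", b"BBBB", b"CCCC"
repair = xor(xor(p1, p2), p3)

# Receiver 1 lost p2: recover it from the repair packet.
assert xor(xor(p1, p3), repair) == p2

# Receiver 2 lost p3: the *same* repair packet fixes that loss too.
assert xor(xor(p1, p2), repair) == p3
```

With unicast retransmissions, the sender would have had to resend p2 to one receiver and p3 to the other; here a single multicast repair packet serves both.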

At Steinwurf we use a next-generation erasure correcting code called Random Linear Network Coding (RLNC). RLNC allows us to efficiently implement reliability in multicast networks and thereby build highly scalable content distribution systems (with a practically unlimited number of receivers). To illustrate the scalability difference between unicast and multicast we ran a small experiment, demonstrating what happens when more receivers are added than the WiFi network can handle.
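As a rough sketch of the principle behind RLNC - simplified here to the binary field GF(2), whereas production codes typically operate over larger fields such as GF(2^8), so this is not our actual implementation - the sender transmits random linear combinations of the original packets, and any receiver that collects enough linearly independent combinations can decode, regardless of which specific transmissions it missed.

```python
import random

# Toy RLNC over the binary field GF(2) - an illustrative sketch,
# not Steinwurf's implementation. A coded packet is the XOR of a
# random subset of the originals, described by a coefficient bitmask.

PACKETS = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
N = len(PACKETS)

def xor(a, b):
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode():
    """One coded packet: a random nonzero coefficient mask and its payload."""
    mask = random.randrange(1, 1 << N)
    payload = bytes(len(PACKETS[0]))
    for i in range(N):
        if mask & (1 << i):
            payload = xor(payload, PACKETS[i])
    return mask, payload

def add_row(pivots, mask, payload):
    """Gaussian elimination step: returns True if the packet was innovative
    (linearly independent of everything received so far)."""
    while mask:
        bit = mask.bit_length() - 1
        if bit not in pivots:
            pivots[bit] = (mask, payload)
            return True
        pmask, ppayload = pivots[bit]
        mask ^= pmask
        payload = xor(payload, ppayload)
    return False

# Receiver: keep collecting coded packets until N independent ones arrive.
# Which specific transmissions were lost on the way is irrelevant.
pivots, rank = {}, 0
while rank < N:
    mask, payload = encode()          # "receive" one coded packet
    rank += add_row(pivots, mask, payload)

# Back-substitution (lowest pivot first) reduces every row to one packet.
for bit in sorted(pivots):
    mask, payload = pivots[bit]       # by now mask == 1 << bit
    for other in pivots:
        omask, opayload = pivots[other]
        if other != bit and omask & (1 << bit):
            pivots[other] = (omask ^ mask, xor(opayload, payload))

assert [pivots[i][1] for i in range(N)] == PACKETS
```

Because every coded packet is useful to every receiver, a retransmission no longer has to target one receiver's specific loss - this is what makes reliable multicast scale.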

In the initial setup we compare video streaming using the reliable unicast-based protocol TCP with our reliable multicast protocol Score. With TCP, devices start to fall behind as the WiFi capacity is exhausted; the more receivers we add, the further their playback drifts. If this were a file transfer application, the time needed to transmit the data to all receivers would increase linearly with the number of receivers. With Score, on the other hand, we use multicast and RLNC for reliability. Since we rely on multicast, the time it takes to transmit the data is roughly independent of the number of receivers.
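A back-of-the-envelope model makes the scaling difference clear. The numbers below (file size, channel capacity, coding overhead) are illustrative assumptions, not measurements from the experiment.

```python
# Rough transfer-time model (illustrative numbers, not measured).
FILE_MB = 100          # file size in megabytes
CAPACITY_MBIT = 50     # shared WiFi capacity in Mbit/s
OVERHEAD = 1.05        # assumed 5% coding/repair overhead for multicast

one_transfer_s = FILE_MB * 8 / CAPACITY_MBIT  # seconds for one full copy

for receivers in (1, 5, 10, 50):
    unicast_s = receivers * one_transfer_s     # one copy per receiver
    multicast_s = one_transfer_s * OVERHEAD    # one coded copy serves all
    print(f"{receivers:>3} receivers: "
          f"unicast {unicast_s:7.1f} s, multicast {multicast_s:5.1f} s")
```

Under these assumptions, unicast grows from 16 s with one receiver to 800 s with fifty, while multicast stays at roughly 17 s throughout.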

Finally, the experiment also compares Score with the unreliable UDP protocol running over multicast. Here it can be seen that the lack of reliability severely degrades the video quality. This is not surprising, since many video codecs do not handle data loss well, but the same holds for most applications - so reliability is, more often than not, a real necessity.
