Simula Research Laboratory / Center for Resilient Networks and Applications / NorNet


Welcome to Thomas Dreibholz's
Flow Routing Project Page
The Flow Routing Project


📰 News


💡 What is our Flow Routing project?

Introduction

The Internet was originally designed as a best effort network to support simple applications like electronic mail and file transfer via low-speed links. Since high-speed gigabit backbones have become widespread within the last few years, and high-speed links are available even to home users in the form of DSL (digital subscriber line), new multimedia applications like audio on demand (AoD), video on demand (VoD), audio/video conferencing and Internet telephony have become possible. But while best effort delivery is sufficient for classical TCP/IP-based applications like web browsing, the new applications require strict delivery guarantees. For example, the perceptual quality of a video conference may be poor if it does not receive a constant bitrate of 2 Mbit/s.

Flow-State Aware Routing

Figure: A Network with Flow Routing

In the early days of the Internet, memory was scarce and extremely expensive. Routing was therefore realized on a per-packet basis. While this scheme once achieved a cost advantage by reducing the amount of memory required in routers, it has become a severe problem in today's routers. Nowadays, memory is quite inexpensive, so per-packet routing no longer achieves a significant cost advantage, but networks are very fast. That is, a tremendous amount of CPU power is needed to route packets efficiently at speeds of multiple gigabits per second. Flow-state aware routing can significantly reduce this effort: instead of making a routing decision for every packet, a router memorizes flow identities and makes a routing decision only once per flow. Maintaining flow tables containing even millions of flows is significantly cheaper than routing each packet individually (routing tables of BGP routers can grow very large!), so flow-state aware routing becomes even less expensive than classical per-packet routing. Flow routing may also be deployed incrementally, i.e. flow-based routers can co-exist with classical routers in the same network, as shown in the figure above.
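The core idea – make the expensive routing decision once per flow and cache it – can be sketched as follows. This is a minimal illustration; the class and field names are our own and do not describe any real router implementation:

```python
def lookup_route(routing_table, dst):
    """Simplified route lookup: first matching prefix wins.
    In a real BGP router this is an expensive longest-prefix match."""
    for prefix, next_hop in routing_table:
        if dst.startswith(prefix):
            return next_hop
    raise KeyError(dst)

class FlowRouter:
    """Flow-state aware forwarding: route once per flow, then use the flow table."""
    def __init__(self, routing_table):
        self.routing_table = routing_table
        self.flow_table = {}  # flow identity -> cached next hop

    def forward(self, packet):
        flow_id = (packet["src"], packet["dst"], packet["proto"])
        if flow_id not in self.flow_table:
            # Expensive decision only for the first packet of a flow ...
            self.flow_table[flow_id] = lookup_route(self.routing_table,
                                                    packet["dst"])
        # ... all later packets need only a cheap table lookup.
        return self.flow_table[flow_id]

router = FlowRouter([("10.", "if0"), ("", "default")])
packet = {"src": "192.0.2.1", "dst": "10.1.2.3", "proto": "udp"}
hop = router.forward(packet)  # first packet: full lookup, then cached
```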

Our QoS Concept

Flow-state aware routing not only offers the advantage of inexpensive routing, it also makes it possible to implement Quality of Service (QoS) mechanisms. The goal of our project is a simple and efficient QoS protocol to provide QoS assurances to certain flows, which can be implemented for flow routers (and also for classical, packet-based routers with some additional effort). A key assumption for our QoS concept – based on observations by IP broadband providers – is that service providers (e.g. audio and video media libraries) are connected via a high-speed internetwork to the access providers of the customers as shown here:

Figure: An Example Scenario

The customers may be connected via an edge node to access providers using e.g. DSL links, TV cable, Wireless LAN (IEEE 802.11), UMTS or ATM. Different service providers may simultaneously deliver content to a single customer. Core bandwidth is usually over-provisioned; it is therefore further assumed that the link to the customer is the main bottleneck of the system and that the edge node is the place where congestion occurs.

As long as the user does not request more media flows than the available link bandwidth can carry, there is no problem. But let us consider the following scenario, as shown in the figure here:

Figure: An Overload Example

One family member requests a sports video at 5 Mbit/s, another one requests a soap opera at 3 Mbit/s and yet another requests an action video at 5 Mbit/s, all via a single 10 Mbit/s link. This is a situation where packet loss is likely to occur. Since all flows have equal priority, the packet loss is likely to affect all flows. It is possible that the quality reduction is so severe that all flows are of unacceptable quality. Clearly, an edge node able to apply an intelligent discard policy, focusing loss on a single flow (or on as few flows as possible), would minimise disruption to the set of flows as a whole (see [Smith91]).

The basic principle may be illustrated using the simplest scenario: all content is constant bitrate. Assume that the customer adds new flows one at a time, with some short interval (e.g. a few seconds) before the next flow is added. Then congestion happens through the "last straw" principle (i.e. "It's the last straw which breaks the camel's back" – Proverb). In other words, there is no congestion until a flow is added which takes the combined bitrate above the capacity available for that customer. It is easy to see that by simply deleting the packets of this latest flow, congestion would be removed. This suggests that the QoS protocol needs to memorize the identity of the latest flow and be prepared to delete the packets of this flow. Then, it would be possible to protect the remaining flows from congestion.
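The "last straw" principle can be illustrated with a few lines of Python (the function name and the return convention are our own, purely for illustration):

```python
def add_flows(capacity, bitrates):
    """Add constant-bitrate flows one at a time to a link of the given capacity.
    Returns (accepted flows, index of the 'last straw' flow, or None if all fit)."""
    total = 0
    for i, rate in enumerate(bitrates):
        total += rate
        if total > capacity:
            # This flow took the combined bitrate above the link capacity:
            # deleting exactly its packets would remove the congestion.
            return bitrates[:i], i
    return list(bitrates), None

# The overload example from above: 5 + 3 + 5 Mbit/s over a 10 Mbit/s link.
accepted, straw = add_flows(10, [5, 3, 5])
# The third flow (index 2, the action video) is the last straw;
# dropping it protects the sports video and the soap opera.
```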

Our QoS Protocol

We can now extend this scenario to include variable bitrate flows; here, the QoS protocol needs to memorize the last r flows. When congestion occurs, the device still tries to delete the packets of the latest flow but is prepared to drop packets of the other r-1 flows if congestion persists. Further complexity is added as we move from a strictly "last flow suffers" world into a policy-based world where the latest flow is not necessarily the one that will be targeted.

Having set out the principle of the device, we can now describe the functionality.

Figure: The Main Functions of Our QoS Protocol

As shown here, the device contains the following functions for each of its customer links:

We propose that a new flow begins at start-up with the sending of a "Start Packet", which contains:

- all information necessary to identify the packets of the flow (e.g. source and destination addresses, flow label or port numbers),
- a Rate Advisory, i.e. the (estimated) peak rate of the flow,
- a policy field, which contains a policy for the flow (e.g. to move it directly to the Guaranteed Area, see below),
- a priority field, which contains the priority of the flow, and
- a sub-components field, which contains information about the different layers of the flow (so-called layered transmission, see [Dre01]).

Notice also that this device fits the connection-less paradigm, in that sources are only required to transmit a Start Packet and then, without waiting further, start to transmit their data. There is no negotiation, unlike with classical RSVP.
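The Start Packet fields could be modelled as follows. This is a hypothetical in-memory representation only; the field names are our own, and the protocol description above does not prescribe any wire format:

```python
from dataclasses import dataclass, field

@dataclass
class StartPacket:
    """Hypothetical model of the Start Packet fields described above."""
    flow_id: tuple            # e.g. (source addr, destination addr, flow label or ports)
    rate_advisory: float      # (estimated) peak rate of the flow, e.g. in Mbit/s
    policy: str = "default"   # e.g. "guaranteed" to move directly to the Guaranteed Area
    priority: int = 0         # priority of the flow
    sub_components: list = field(default_factory=list)  # layers (layered transmission)

# The sender transmits the Start Packet and immediately begins sending data;
# there is no negotiation round-trip, unlike with classical RSVP.
sports_video = StartPacket(flow_id=("server", "home", 5001), rate_advisory=5.0)
```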

When the edge node receives a Start Packet for a new flow, the Control Logic adds the flow to the Flow Register. (Policies allow a refinement of this behaviour.) This register contains the identities of all flows that are in the Drop Window and are a potential focus of packet discard in the case of congestion. The Drop Window has a limited number of entries (configured by the administrator), and the sum of the Rate Advisories of all flows within the window should be >= N% of the link bandwidth.
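A minimal Flow Register sketch, assuming the default policy described in the following paragraphs (promote the oldest flow to the Guaranteed Area once the remaining flows alone still cover >= N% of the link). The class layout and names are our own:

```python
from collections import OrderedDict

class FlowRegister:
    """Keeps the Drop Window: the latest flows, oldest first."""
    def __init__(self, link_bandwidth, n_percent, max_entries):
        self.target = link_bandwidth * n_percent / 100.0  # N% of the link
        self.max_entries = max_entries                    # configured by the admin
        self.window = OrderedDict()                       # flow id -> Rate Advisory

    def add(self, flow_id, rate_advisory):
        self.window[flow_id] = rate_advisory
        self._promote()

    def _promote(self):
        # Default policy: move the oldest flow into the (implicit) Guaranteed
        # Area whenever the remaining flows still sum to >= N% of the link,
        # or when the window exceeds its configured size.
        while len(self.window) > 1:
            oldest = next(iter(self.window))
            rest = sum(self.window.values()) - self.window[oldest]
            if rest >= self.target or len(self.window) > self.max_entries:
                self.window.popitem(last=False)  # promoted; identity forgotten
            else:
                break

reg = FlowRegister(link_bandwidth=10, n_percent=50, max_entries=4)
for flow, rate in [("sports", 5.0), ("soap", 3.0), ("action", 5.0)]:
    reg.add(flow, rate)
```

Note that promoted flows are simply forgotten: no state is kept for the Guaranteed Area, matching the constant-space property described further below.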

Over time, flows gradually move through the Drop Window into the Guaranteed Area. This movement within the Drop Window actually consists of a re-classification of flows, so that they become less likely to be targeted. The default policy is that the latest flow is the subject of immediate discards in the event of congestion; only when discards on this flow fail to reduce congestion will additional flows be targeted.

After arriving in the Guaranteed Area, flows are no longer the focus of packet discard, except in rare cases of very extreme congestion where the amount of packets that needs to be dropped exceeds the amount of packets currently sent by flows within the Drop Window. (The fact that up to N% of the link bandwidth is available for discard should make such an action very rare.) Even in these circumstances, the Guaranteed Area is not subjected to random losses. Rather, the device randomly selects one new flow at a time from the Guaranteed Area (by choosing a packet from the Output Buffer and obtaining its flow identity) and temporarily adds it to the Drop Window, thereby increasing the packets available for discard to > N%.

Returning to the description of the normal movement of flows through the Drop Window into the Guaranteed Area: the default policy is to move the flow that has been within the Drop Window for the longest time into the Guaranteed Area when the remaining flows themselves constitute >= N% of the link bandwidth. Other useful policies are possible, e.g. moving video conferences first.

Note that it is a special property of our device that it is not necessary to save the identities of flows within the Guaranteed Area: all packets not belonging to flows of the Drop Window are implicitly assumed to belong to flows of the Guaranteed Area. Therefore, only constant space is required for each customer link's Drop Window, which significantly improves the device's scalability. Normally, flows within the Drop Window are simply overwritten by new flows; however, the device also has a mechanism, based on time and/or packet count, to remove flows from the window. This property ensures efficiency and scalability.

As stated, the flows within the Drop Window are the focus of packet discard in case of congestion. The Control Logic detects congestion by monitoring the buffer fill level. If the fill level reaches a first configured threshold, it triggers packet discard by the Packet Handling unit. This is applied to all packets of (usually) the latest flow added to the Drop Window. Furthermore, the source and receiver of the flow are notified about the congestion by a Congestion Notification message in the form of an Alarm Packet. If congestion continues after this action and the buffer fill level reaches a second threshold, the Control Logic instructs the Packet Handling unit to begin discarding on all the other flows in the Drop Window. Again, alarm messages are sent to the sources and receivers. Since the Rate Advisory sum of all flows in the Drop Window is >= N% of the link bandwidth, there should be sufficient droppable packets. Only in some rare cases of extreme congestion (for example, caused by a malicious user) may it be necessary to drop packets from flows within the Guaranteed Area.
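The two-threshold escalation can be sketched as follows. The thresholds and the buffer model are illustrative assumptions, not taken from a real implementation:

```python
def discard_targets(buffer_fill, threshold1, threshold2, drop_window):
    """Return which Drop Window flows may be discarded, latest flow first.

    buffer_fill        -- current output buffer fill level (0.0 .. 1.0)
    threshold1/2       -- configured congestion thresholds (threshold1 < threshold2)
    drop_window        -- flow identities in the Drop Window, oldest first
    """
    if buffer_fill >= threshold2:
        # Second threshold reached: discard on all Drop Window flows.
        return list(reversed(drop_window))
    if buffer_fill >= threshold1:
        # First threshold reached: discard only on the latest flow.
        return drop_window[-1:]
    return []  # no congestion: nothing is dropped

window = ["sports", "soap", "action"]  # oldest first
targets = discard_targets(0.8, 0.7, 0.9, window)  # only the latest flow
```

In a full implementation, each returned flow would additionally trigger an Alarm Packet to its source and receiver.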

Besides simplicity, efficiency and scalability, the described edge node provides the following advantages compared to RSVP:

- A guarantee is provided for the admitted flows, except under extreme and very rare traffic conditions.
- Only selected flows are targeted for packet loss; all other flows can continue without any loss or undesirable packet delays.
- Flows can be admitted without knowing the remaining capacity of the link, and variable bitrate flows can be admitted without being constrained to accepting only flows whose peak rates sum to less than the available capacity. Nevertheless, it is not necessary to keep active/ceased state information on admitted flows.
- The admission of flows does not require a suspension of higher-level session control protocols; a sender is only required to send a Start Packet.


🖋️ Publications

The complete BibTeX references in a single file can be found here!

2024

2009

2008

2005

2004

2003

2001

2000


📖 Standardisation

The complete BibTeX references in a single file can be found here!

2024

2004

2003

2002

2001


💾 Implementation

Our QoS protocol has not yet been implemented. The plan is to implement the device's functionality as a Queuing Discipline (QDisc) for Linux Traffic Control. This implementation is offered as a student project.