On Path Metrics and Routing Protocols for Wireless Networks

De Couto et al.'s work on the expected transmission count (ETX) metric starts from the premise that the minimum hop-count metric does not necessarily provide the best performance. One direct contribution of the paper is an examination of the effects (or side effects) of relying solely on minimum hop count. The authors also contribute the design and implementation of ETX in the DSDV and DSR routing protocols and show that it yields better performance on multi-hop wireless networks than a scheme based solely on minimum hop count. Using bidirectional per-link packet loss measurements, ETX finds high-quality paths that minimize the expected total number of transmissions. Its formulation incorporates the wide range of link loss ratios, the asymmetry of these loss ratios, and the interference between successive hops. One thing that stands out is that the overhead of computing ETX could prevent it from keeping up when the network changes, whether through mobility or through changes in the nodes' signal strength.
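To make the metric concrete, here is a minimal sketch (my own illustration, not code from the paper) of how the link and route metrics could be computed once the forward and reverse delivery ratios have been measured with the periodic broadcast probes the authors describe:

    def link_etx(d_forward, d_reverse):
        """Expected transmissions on one link, given forward/reverse delivery ratios."""
        if d_forward == 0 or d_reverse == 0:
            return float("inf")  # the link is unusable in one direction
        return 1.0 / (d_forward * d_reverse)

    def route_etx(links):
        """Route metric: the sum of per-link ETX values (links = [(d_f, d_r), ...])."""
        return sum(link_etx(d_f, d_r) for d_f, d_r in links)

For example, route_etx([(0.9, 0.9), (0.9, 0.9)]) is about 2.47 while route_etx([(0.5, 0.5)]) is 4.0, which is exactly the kind of case where a moderately lossy two-hop route beats a very lossy one-hop route.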
Broch et al. compare the performance of four ad-hoc network routing protocols, namely DSDV, TORA, DSR, and AODV. They provide extensions to the ns-2 simulator to model the IEEE 802.11 MAC and physical layers, node mobility, and radio network interfaces. The study is very detailed, with the assumptions clearly explained. I also appreciate the discussion of the simulation results, which shows that the nature of the application and the problem, together with the associated cost (overhead), must be taken into account when selecting an appropriate protocol.

On Modeling, Wireless Links, Transport Protocols and Roofnet

The paper by Gurtov and Floyd argues that the design and implementation of transport protocols can be improved by carefully modeling the specific characteristics and behaviors of wireless links. It seems an appropriate next paper to cover, since the “tussle” and interplay between wireless links and transport protocols have been introduced in previous discussions. They discuss how to model specific wireless link characteristics, such as error losses, delay and bandwidth variation, packet reordering, on-demand resource allocation, and asymmetry, and the effects of each on transport protocols. Queue management and mobility models and their effects are also covered. A nice takeaway from this paper is that designers of both transport protocols and wireless link layers must consider each other's concerns in order to provide a better and more efficient end-user experience.

Bicket et al.'s paper examines how wireless mesh architectures with minimal planning and management can be effective in providing quality end-to-end Internet access, using the Roofnet mesh network as a case study. The results are generally positive: DSL-like inter-node throughput and delay characteristics, and usable multi-hop throughput and coverage. I was at first disappointed by some of the things the study did not include; in particular, it does not cover the evolution of the network (users entering and leaving), which I think is very important. While my initial reaction was perhaps “clouded” (influenced) by Gurtov and Floyd's discussion of assumptions and modeling, I do realize that the contributions of this work are substantial and within the scope of what the authors set out to do. I also recognize that Gurtov and Floyd's take on modeling must be tempered in certain contexts, such as when examining and evaluating behavior in test-beds.

On TCP and Link-Layer Wireless Networking

Bharghavan et al. first investigate media access protocols for the “new” (circa 1994) mobile computing devices built around Xerox PARC's radio technology. They evaluate these algorithms on throughput and fairness, deferring to the latter when necessary. Based on the results of this evaluation, they introduce MACAW, which adds an ACK to the RTS-CTS-DATA exchange (an enhancement of Karn's and Biba's work) and uses a different backoff algorithm. In MACAW, the backoff algorithm includes a field in the packet header for the backoff counter; this enables every station in a cell, upon each successful transmission, to hold the same counter value as everyone else. They show that this slight modification results in fairness. To prevent the oscillations seen in the original MACA design, they also adopt a less aggressive adjustment algorithm called multiplicative increase and linear decrease (MILD). Another enhancement introduced in MACAW is the use of ACKs, which provide link-layer recovery rather than relying solely on the transport layer; this enables faster recovery at the link layer with minimal overhead. In the context of wireless networking, an interesting discussion point is service differentiation in such environments: while MACAW improves the link layer, how and where QoS should be provided in this more constrained environment remains open for discussion.
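As a rough sketch (my own, with assumed counter bounds rather than the paper's exact values), the MILD adjustment and the backoff-copying idea look roughly like this: failures multiply the counter by 1.5, successes decrement it by one, and the counter carried in each successfully transmitted packet's header is copied by the other stations in the cell so they converge to the same value.

    BO_MIN, BO_MAX = 2, 64  # assumed bounds, in slot units

    class Station:
        def __init__(self):
            self.backoff = BO_MIN

        def on_failure(self):
            # multiplicative increase (factor 1.5), capped at the maximum
            self.backoff = min(int(self.backoff * 1.5), BO_MAX)

        def on_success(self):
            # linear decrease by one, floored at the minimum
            self.backoff = max(self.backoff - 1, BO_MIN)

        def on_overheard(self, advertised_backoff):
            # backoff copying: adopt the counter carried in an overheard packet header
            self.backoff = advertised_backoff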

The next paper looks at wireless networking from the transport point of view. The authors first classify transport-layer (TCP) improvement schemes into three categories, namely end-to-end, link-layer, and split-connection protocols, and then evaluate them based on throughput and goodput. Interestingly, a link-layer protocol that is TCP-aware (as opposed to a TCP that is aware of the link layer) is reported to have very good performance. Perhaps as a throwback to the first few papers in class: for this particular scenario, it is much better to push some intelligence to the lower layers to improve efficiency and utilization.

On Real-time Scheduling, Utilities and Frameworks

The premise of Shenker's paper is that while the Internet of the mid-90s had scaled to support a growing number of users, a new set of applications with radically different traffic requirements was beginning to appear at the time. Shenker claims that with such applications on the horizon, the design and architecture of the Internet needed to be reexamined, possibly by introducing a new service model, its associated invocation mechanisms, and admission policies similar to those used in circuit-switched networks. A number of future design considerations are presented, but I will concentrate on the utility function and how Shenker describes the abstract framework around this economic model. It is important to point out that while the mechanisms developed at the time could be fair and have nice network properties such as efficiency and stability, they were unable to differentiate one type of traffic from another. In short, the existing mechanisms lacked a user-centric criterion for determining which users require more resources and, more importantly, whether a user's expectations were met. Shenker proposes utility functions to capture this expectation: with users specifying their expectations through such a function, the goal of the network can simply be to maximize overall utility (of course, there could be variations on this optimization). Using the utility function and a simple example, Shenker motivates the use of more than one service class. He then introduces the notion of incentives and how someone who requires fewer resources could be compensated by paying less; this allows the network to be both efficient and economically viable while meeting the resource demands of different classes of users. In addition, he proposes that some form of admission control be in place to avoid overloading. Since this paper was written in 1995, we have the benefit of looking at what has happened over the last ten years. While the idea of using utility functions sounds reasonable, it seems not to have taken off. Personally, when using VoIP or streaming video applications, I do not specify my utility functions, though I do give feedback in some cases (Skype, YM). From what I understand, these functions are subjective and differ from one user to another, so I will assume that either the network guessed my preferences correctly or the Internet is still the same, except that we now have better links and higher capacities.
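As a toy illustration of this framing (entirely my own, not from the paper), the network's problem becomes choosing an allocation that maximizes the sum of the users' declared utilities subject to capacity, and even a crude search makes the multi-class intuition visible:

    from itertools import product

    def best_allocation(utilities, capacity, step=1):
        """Brute-force search over discrete allocations; utilities is a list of U_i(x)."""
        grid = range(0, capacity + 1, step)
        best, best_val = None, float("-inf")
        for alloc in product(grid, repeat=len(utilities)):
            if sum(alloc) <= capacity:
                val = sum(u(x) for u, x in zip(utilities, alloc))
                if val > best_val:
                    best, best_val = alloc, val
        return best, best_val

    # An elastic user (diminishing returns) versus a "hard" real-time user that
    # gets no value below a threshold rate:
    elastic = lambda x: x ** 0.5
    realtime = lambda x: 5.0 if x >= 6 else 0.0
    print(best_allocation([elastic, realtime], capacity=10))  # -> ((4, 6), 7.0)

Splitting the capacity equally would leave the real-time user with zero utility, which is essentially the example Shenker uses to argue for more than one service class.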

The next reading, by Clark et al. (interestingly, Shenker is also an author here, and I will come back to this in a moment), describes an architecture that defines two types of service commitments for real-time applications and the mechanisms to implement them. The first type is the guaranteed service class, which may be considered as having hard requirements: the delay estimation is based on a static, worst-case bound. The second type is the predictive, or tolerant, class. Starting from an initial estimate of source behavior, this class introduces the idea of measurement-based admission control, which allows the flexibility to admit new connections and therefore improve utilization. The authors then describe the mechanisms that implement these service classes and detail the results of experiments in which they offer differentiated service to a number of defined user classes.
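A highly simplified sketch (mine, not the paper's actual algorithm) of the idea behind measurement-based admission for the predictive class: a new flow is admitted when the measured aggregate load, rather than the sum of worst-case declarations, plus the newcomer's declared rate still fits within capacity.

    class PredictiveAdmission:
        def __init__(self, capacity, headroom=0.9):
            self.capacity = capacity * headroom  # keep some slack (assumed factor)
            self.measured_load = 0.0             # updated from observed traffic

        def update_measurement(self, observed_rate):
            # in practice this would be a smoothed estimate of recent usage
            self.measured_load = observed_rate

        def admit(self, declared_rate):
            return self.measured_load + declared_rate <= self.capacity

Because sources rarely send at their declared worst case, admitting against the measured load packs in more connections, at the cost of occasional (tolerable) violations for the predictive class.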

From the authors' point of view, it is important both to put admission control mechanisms in place to limit load and to introduce pricing to provide incentives. Going back to my earlier note, I believe the experience of one of the authors (Shenker) on this work allowed him to generalize its ideas and develop the framework in the first paper (the second paper if we are strict about chronology), which incorporates the service classes, admission control, user-centric criteria, and incentives.

On RED and XCP

The next two papers we will be discussing are strategies that use explicit congestion signals: routers along the path implement marking that enables responsive flows to discern the level of congestion in the network. The first paper, on the Random Early Detection (RED) algorithm, makes changes in the routers while retaining the behavior of the transport protocol. The eXplicit Control Protocol (XCP) assumes changes both in the core nodes and at the end points.

The RED gateway implements a marking strategy based on the average queue length observed at the gateway. There are two parts to RED's algorithm. The first is the computation of the average queue size, which determines the degree of burstiness the gateway will tolerate. The second is the marking algorithm: when the average queue size exceeds a minimum threshold, packets are marked with a probability roughly proportional to each connection's share of the bandwidth, and once the maximum threshold is breached, every subsequent packet is marked. This explicit signal allows senders to throttle back their transmissions, which lets RED avoid congestion by controlling the average queue size. Results show that the algorithm avoids global synchronization and is not biased against bursty traffic. There are several discussion points here: first, the selection of the optimal operating point (average queue size and thresholds); next, the effect of chaining several RED gateways with different configurations; and lastly, the use of several RED queues in a single gateway.
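A condensed sketch of the two parts (the parameter values below are assumed examples, and the count-based adjustment of the marking probability described in the paper is omitted):

    import random

    W_Q, MIN_TH, MAX_TH, MAX_P = 0.002, 5, 15, 0.02  # assumed example parameters

    avg = 0.0

    def on_packet_arrival(queue_len):
        """Return True if the arriving packet should be marked (or dropped)."""
        global avg
        avg = (1 - W_Q) * avg + W_Q * queue_len          # part 1: EWMA of queue size
        if avg < MIN_TH:
            return False
        if avg >= MAX_TH:
            return True                                  # beyond max threshold: mark everything
        p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)   # part 2: linear marking probability
        return random.random() < p

Because the average rather than the instantaneous queue drives marking, short bursts pass through unmarked while persistent congestion steadily raises the marking probability.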

The XCP paper uses the idea that routers in the network provide, as the protocol's name suggests, explicit feedback to the end points. This is in contrast to the TCP practice of using dropped packets as indicators of congestion. One of the motivations for XCP is the instability of TCP, especially when the path includes both high-bandwidth and large-delay links. In XCP, the sender places its congestion window and round-trip-time estimate in an XCP congestion header. Depending on the input traffic, routers along the path use this information to explicitly signal adjustments (increase or decrease) to the sender by modifying the feedback field. Since this field can be seen and changed by any router on the path, it ends up carrying the feedback of the bottleneck router. The receiver, upon receipt of the header, copies it and sends it back to the sender for appropriate action. Because the sender receives this information explicitly, convergence has been shown to occur more quickly than with TCP. In addition, XCP has been shown to work well under several different scenarios. However, I would have wanted to see how XCP performs when mixed with other types of transport traffic such as TCP and even UDP. This would probably have given insight into how XCP and the network in general would behave, and would have been a nice step toward incremental deployment rather than a purely academic result.

One of the key points of XCP is that it minimizes packet drops by preventing queues from building up. XCP also aims to decouple utilization (efficiency) control from fairness control: the efficiency controller computes how much aggregate traffic should grow or shrink to use the spare bandwidth, while the fairness controller apportions that feedback among flows so they converge toward equivalent bandwidth shares. Potentially, with these additional capabilities, one could also put policies in place to allow some sort of QoS differentiation in an XCP network.
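A skeletal sketch (my own) of these pieces: the congestion header the sender fills in, and the efficiency controller's aggregate feedback computed from spare bandwidth and the persistent queue. The gain constants are the stability values suggested in the paper; the per-packet apportioning done by the fairness controller is omitted here.

    ALPHA, BETA = 0.4, 0.226  # gains suggested in the XCP paper

    class XCPHeader:
        def __init__(self, cwnd, rtt, feedback=0.0):
            self.cwnd = cwnd          # sender's current congestion window
            self.rtt = rtt            # sender's round-trip-time estimate
            self.feedback = feedback  # desired window change, overwritten by routers

    def aggregate_feedback(capacity, input_rate, queue, avg_rtt):
        """Efficiency controller: total traffic change the router wants per control interval."""
        return ALPHA * avg_rtt * (capacity - input_rate) - BETA * queue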

On Fair Queueing Algorithms and Core-Stateless Routers

Previously, I commented on the possible unfairness in the congestion avoidance papers, specifically when users are uncooperative. Interestingly, today's set of readings talks about putting intelligence inside the network for traffic control. Both papers recognize that while end-host control can contribute to congestion avoidance, it is largely ineffective when some users misbehave. These misbehaving users may be simply misconfigured or intentionally opportunistic, and the result is an inequitable share of bandwidth: they grab more because TCP-friendly users graciously throttle back during congestion. Both DKS90 (Demers, Keshav, Shenker, 1990) and SSZ98 (Stoica, Shenker, Zhang, 1998) carefully show the interplay between flow-controlled sources, uncooperative sources, and basic FIFO scheduling. Their results clearly indicate that bandwidth protection is impossible in an environment of mixed users with such simple scheduling. With this premise, DKS90 and SSZ98 present fair queueing algorithms and describe the conditions under which they work best.

In the DKS90 paper, the authors emulate a bit-by-bit round-robin algorithm and use the ordering it generates to schedule packets at a gateway. In this way, the algorithm is fair and also addresses the packet-size sensitivity of Nagle's round-robin solution, which is biased toward flows with large packets. DKS90 also notes that this work is very similar to Zhang's, in which the Virtual Clock (similar to the algorithm developed in DKS90), together with an appropriate admission control policy, can be used to establish bounds on delay. This is an important result in terms of assuring end-user service commitments.
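A small sketch (my own simplification, which takes the round number R(t) as a given even though computing it is a subtle part of the real algorithm) of how the bit-by-bit emulation turns into a packet schedule: each arriving packet gets a virtual finish time from its flow's previous finish time, the current round number, and its length, and the gateway always transmits the queued packet with the smallest finish time.

    import heapq

    class FairQueueingGateway:
        def __init__(self):
            self.last_finish = {}  # per-flow virtual finish time of the previous packet
            self.queue = []        # heap of (finish_time, flow_id, packet_len)

        def enqueue(self, flow_id, packet_len, round_number):
            start = max(self.last_finish.get(flow_id, 0.0), round_number)
            finish = start + packet_len
            self.last_finish[flow_id] = finish
            heapq.heappush(self.queue, (finish, flow_id, packet_len))

        def dequeue(self):
            # transmit in increasing order of virtual finish time
            return heapq.heappop(self.queue) if self.queue else None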

While DKS90 shows a significant improvement in fairness, the drawback is that the algorithm may be difficult to implement: it requires per-flow state to be kept at every router, which is quite unrealistic. The SSZ98 paper suggests a solution to this problem. The authors argue that state information is important but should not be kept at all nodes, and they differentiate the roles of edge (exterior) routers and core (interior) routers. In the core, no per-flow state is kept; routers rely only on labels carried in packet headers. Maintenance of per-flow state is placed as close to the source as possible, in this case at the edge routers. This is a classic trade-off scenario in which complexity is balanced against cost (overhead).
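A stripped-down sketch (mine) of that division of labor: the edge router keeps per-flow state and stamps each packet with its flow's estimated arrival rate, while the core router keeps no per-flow state and drops probabilistically based only on the label and its current fair-share estimate.

    import random

    def edge_label(flow_rate_estimate):
        # edge router: per-flow rate estimation (e.g. exponential averaging) lives here,
        # and the result is written into the packet header as a label
        return flow_rate_estimate

    def core_forward(label, fair_share):
        """Core router: drop with probability max(0, 1 - fair_share / label)."""
        drop_prob = max(0.0, 1.0 - fair_share / label)
        return random.random() >= drop_prob  # True = forward, False = drop

Flows sending at or below the fair share pass untouched, while a flow sending at twice the fair share loses roughly half of its packets, which is how the core approximates fair queueing without per-flow state.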

Clearly, from these materials and previous readings, it appears that no single algorithm or mechanism can answer all of the network's problems. What we may see instead is different mechanisms working together toward a common goal; in the case of these papers, congestion avoidance and bandwidth protection go hand in hand.

On Adding, Decreasing and Congestion

The paper by Chiu and Jain looks at the problem of congestion avoidance. It begins by illustrating the relationship of network load to both response time and throughput. In this illustration, Chiu and Jain clearly identify the operating regions of concern, namely the cliff and the knee, which are important to understanding, preventing, and managing congestion. The cliff is the point where any further increase in load has a negative impact on throughput, accompanied by a sharp increase in response time. The knee, on the other hand, is the point where gains in throughput start to taper off relative to the increase in delay; it can be viewed as the point where performance gradually starts to degrade, making it the ideal region in which the control function should operate. With this as backdrop, several increase/decrease algorithms are analyzed using efficiency, fairness, distributedness, and convergence as criteria. Their analysis shows that the increase policy should be additive and the decrease policy multiplicative.
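As a tiny sketch of the rule the analysis converges on (the constants below are illustrative, not taken from the paper):

    A_INC, M_DEC = 1.0, 0.5  # assumed additive step and multiplicative factor

    def next_window(window, congestion_signal):
        """One control step: back off multiplicatively on congestion, otherwise probe additively."""
        return window * M_DEC if congestion_signal else window + A_INC

The asymmetry is the point of the paper: additive increase with multiplicative decrease is what drives competing users toward both efficiency and fairness.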

The paper mentions four items for future consideration which I would like to comment on, but I will limit myself to the first two. The first involves the effects of delayed feedback. For some systems this might be a problem, especially in networks that carry both data and control packets on the same channel. Interestingly, for TCP, a delayed or even lost ACK is feedback in itself: depending on its current state, TCP adjusts its send/congestion window in response to this implicit signal, so the absence of feedback fits nicely into TCP's control function. The second item is the question of whether additional bits carrying more information would be more useful than the binary feedback model. I agree that richer feedback could reduce oscillations and speed up convergence, though, as with TCP, more feedback information does not necessarily require more “bits”. As a final comment, I would like to look at the assumption that users are cooperative and can react to the feedback. We know this is not the case for UDP. I would have been curious how the authors would have simulated (in the related work) the dynamic in which some sources or users ignore the feedback and continue to send data. In terms of fairness, I would assume that UDP traffic would take advantage of the fact that TCP sessions are throttling back, and the service would therefore become unfair to the cooperating users.
