EE Times-Asia > Networks

Service level agreements breed traffic management schemes

Posted: 01 Jul 2003

Keywords: traffic management, service level agreement, QoS, telecom, service provider

In networks at the core and edge, service level agreements (SLAs) that offer premium services open new avenues for revenue generation. Delivering such differentiated services requires sophisticated traffic management schemes, which can only be realized by a combination of hardware and software.

Quality of service (QoS) is a set of specifications that allow telecom service providers, business enterprises and individual consumers to receive predictable service levels, measured in bandwidth (throughput), delay (latency) and delay variation (jitter). Traffic management is a critical component of both QoS and the efficiency of communications network equipment.
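A minimal sketch of how these three metrics might be computed from a per-flow packet trace (illustrative Python; the mean-absolute-difference jitter estimate is one common convention, not something the article specifies):

```python
# Illustrative sketch: throughput, latency and jitter for one flow.
# Inputs are hypothetical packet traces: arrival/departure timestamps
# in seconds and packet sizes in bytes.
def qos_metrics(arrivals, departures, sizes):
    """Return (throughput_bps, mean_latency_s, jitter_s)."""
    latencies = [d - a for a, d in zip(arrivals, departures)]
    duration = departures[-1] - arrivals[0]
    throughput = 8 * sum(sizes) / duration          # bits per second
    mean_latency = sum(latencies) / len(latencies)  # average one-way delay
    # Jitter as the mean absolute difference of consecutive delays
    # (an assumed convention, similar in spirit to RFC 3550).
    jitter = sum(abs(latencies[i] - latencies[i - 1])
                 for i in range(1, len(latencies))) / (len(latencies) - 1)
    return throughput, mean_latency, jitter
```

Voice traffic cares mostly about the last two numbers; bulk transfers care mostly about the first, which is the point the next paragraph develops.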

But not all types of traffic are created equal. Voice traffic is less bandwidth intensive and tolerant of occasional random drops, yet highly sensitive to delay and jitter. File Transfer Protocol (FTP) traffic, on the other hand, is bandwidth intensive and sensitive to random drops, but tolerant of both jitter and delay. To maximize network efficiency, packets are classified, grouped into flows and delivered with differentiated services according to SLAs.
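Classification into flows is typically keyed on a packet's 5-tuple. A minimal sketch, with illustrative field names and a toy policy (VoIP signaling on UDP 5060, FTP control on TCP 21 - both assumptions for illustration, not rules from the article):

```python
# Illustrative sketch: group packets into flows by 5-tuple and map
# each packet to a service class. Field names are hypothetical.
def flow_key(pkt):
    """Identify a flow by its classic 5-tuple."""
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])

def classify(pkt):
    """Toy classification policy: delay-sensitive voice vs. bulk transfer."""
    if pkt["proto"] == "udp" and pkt["dport"] == 5060:
        return "voice"        # latency/jitter sensitive
    if pkt["proto"] == "tcp" and pkt["dport"] == 21:
        return "bulk"         # bandwidth hungry, delay tolerant
    return "best_effort"
```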

SLAs offer a mechanism to assign policies to classes or flows. For example, a service provider may offer three classes of service: platinum, gold and silver. Platinum service offers guaranteed bandwidth and latency, gold guarantees delivery, and silver offers best-effort service with no bandwidth guarantee. During congestion, packets marked silver are dropped while platinum packets receive the highest-priority service. Implementing such SLAs requires efficient traffic management functions such as queue scheduling and link fragmentation and interleaving (LFI) - all at wire speed.
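The platinum/gold/silver behavior described above can be sketched as a strict-priority scheduler over a shared buffer, where silver packets are the first to be dropped or evicted under congestion. This is an illustrative model only, not a description of any actual device:

```python
from collections import deque

class SlaScheduler:
    """Toy SLA scheduler: strict priority with silver dropped first."""
    ORDER = ("platinum", "gold", "silver")

    def __init__(self, capacity=64):
        self.queues = {c: deque() for c in self.ORDER}
        self.capacity = capacity              # shared buffer limit, in packets

    def _total(self):
        return sum(len(q) for q in self.queues.values())

    def enqueue(self, cls, pkt):
        if self._total() >= self.capacity:    # congestion
            if cls == "silver" or not self.queues["silver"]:
                return False                  # silver is best effort: dropped
            self.queues["silver"].popleft()   # evict silver to admit premium
        self.queues[cls].append(pkt)
        return True

    def dequeue(self):
        for cls in self.ORDER:                # platinum always served first
            if self.queues[cls]:
                return cls, self.queues[cls].popleft()
        return None
```

A real implementation would use a drop policy such as weighted random early detection rather than simple tail drop, but the buffer-sharing logic above captures the SLA intent.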

Although most markets impose common requirements on traffic management, networks at the edge have unique requirements of their own. In particular, edge equipment faces stringent size and power constraints because it must fit into crowded wiring closets.

This is forcing network processor (NPU) vendors to integrate more traffic management functions with traditional network processing functions such as classification.

Integrating traffic management into network processors also reduces total system cost by cutting board space and power. Lower power consumption means lower heat dissipation, which reduces cooling costs and increases system reliability. This allows edge switches and routers to be cost effective without compromising performance.

There are basically three approaches to implementing the traffic management component of traffic processors: completely in software, completely in hardware, or a hybrid of the two. Each approach trades off performance (speed), ease of programming, scalability and cost.

In a software-only approach, all of the algorithms are implemented in software. The processor provides only basic primitives such as lock/unlock and mutexes, and the network equipment designer implements the algorithms and all the associated resource blocks, such as queues and tables, in software. Although this architecture appears flexible, in reality queuing disciplines implemented purely in software are suitable only for the lowest-speed routers, where low traffic volumes do not stress the implementation.
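A minimal sketch of this software-only style: each queue is an ordinary data structure guarded by the processor's lock primitive, so every packet touch pays a synchronization cost, and a scheduler must still scan such queues one by one.

```python
import threading
from collections import deque

class SoftQueue:
    """Illustrative software-only packet queue built on a mutex primitive."""
    def __init__(self):
        self._q = deque()
        self._lock = threading.Lock()   # the processor's lock/unlock primitive

    def enqueue(self, pkt):
        with self._lock:                # every operation pays for the lock
            self._q.append(pkt)

    def dequeue(self):
        with self._lock:
            return self._q.popleft() if self._q else None
```

Multiply this per-packet locking cost by thousands of queues that a scheduler must poll in turn, and the performance ceiling of the pure-software approach becomes clear.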

Because traffic management involves simultaneously examining the status of thousands of queues and their related parameters, it can be implemented effectively only in hardware. A software implementation dictates successively inspecting thousands of individual queues, resulting in poor performance, complex programming and inaccurate provisioning of QoS.

At the other end of the spectrum, pure hardware solutions implement the queuing algorithms in hardware. Typically, such solutions are configurable but not programmable. Although a hardware-only solution suits a segment of applications, it is not suitable for a multiservice network carrying both internet protocol (IP) and ATM traffic. In large IP networks, service providers constantly innovate queuing disciplines to differentiate themselves from the competition and seek new revenue by creatively allocating and managing bandwidth.

Meanwhile, hybrid solutions allow software programmability while maintaining the speed advantage of ASIC solutions: key functional blocks, such as queue management and scheduling, are implemented in hardware, while policy remains under software control.
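One way to picture the hybrid split (illustrative only; the article does not describe Agere's actual partitioning): software installs and updates per-class weights, while a fixed weighted-round-robin loop stands in for the hardware fast path.

```python
from collections import deque

class HybridWrr:
    """Toy hybrid model: programmable weights, fixed scheduling loop."""
    def __init__(self, weights):
        self.weights = dict(weights)          # slow path: software-set policy
        self.queues = {c: deque() for c in weights}

    def set_weight(self, cls, w):
        self.weights[cls] = w                 # software reprograms policy

    def enqueue(self, cls, pkt):
        self.queues[cls].append(pkt)

    def schedule_round(self):
        # Fast path: a fixed-function weighted-round-robin pass, as a
        # hardware block would implement it. Each class may send up to
        # its weight in packets per round.
        served = []
        for cls, w in self.weights.items():
            for _ in range(w):
                if self.queues[cls]:
                    served.append(self.queues[cls].popleft())
        return served
```

The discipline itself is frozen, as it would be in silicon, but a provider can still tune the bandwidth split between classes at run time, which is the flexibility the article argues multiservice networks need.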

- Vinoj Kumar

Product Architect

Agere Systems Inc.
