
Network programming can be tamed

Posted: 17 Nov 2003

Keywords: network programming, NISC, NPU, kernel, ASIC

Software-programmable NPUs, until recently the domain of ASICs, now provide designers an alternative that offers lower development costs, faster time-to-market and future-proofing programmable flexibility. However, ever-increasing "wire-speed" performance and scalability requirements in these environments usually call for the use of multiple-processor architectures, which can significantly increase the complexity of programming.

Encapsulated networking protocols typically demand a sequential programming approach, and sequential programming within a multiple-processor environment can involve challenges such as complex algorithm partitioning and load balancing.

An ideal solution would place these challenges on the programming environment, not the programmer. Achieving this goal requires the blending of a number of key factors.

First, such a solution should allow designers to leverage multiple processor cores for performance, but with the more-straightforward programming model of a single CPU - that is, a single-stage, single-image programming model. In essence, the programmer sees this kind of multiple-processor device as if it were a single, sequentially programmed entity, even though many packets or cells are simultaneously "in flight" across multiple tasks and multiple cores. Next, the underlying OS or kernel software itself should handle all of the required interprocessor coordination.
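
To make the single-image idea concrete, a forwarding routine in such a model might be written as ordinary sequential C. The types and function names below are purely illustrative placeholders, not any particular vendor's API; the point is that the programmer writes one linear flow while the kernel runs many packets through it concurrently across cores.

/* Hypothetical single-image packet handler, written as if for one CPU.
 * The underlying kernel dispatches many packets through this same code
 * across multiple cores; no partitioning appears in the source. */
void handle_packet(packet_t *pkt)
{
    flow_key_t key;
    route_t    rt;

    parse_headers(pkt, &key);              /* sequential header parsing     */
    if (lookup_route(&key, &rt) != 0) {    /* appears as one blocking call  */
        drop_packet(pkt);
        return;
    }
    rewrite_headers(pkt, &rt);             /* packet transformation         */
    enqueue_for_tx(pkt, rt.egress_port);   /* hand off to egress scheduling */
}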

Finally, common networking functions that nevertheless require a large number of instructions to implement - such as traffic management, queuing/scheduling, packet transformations, classification, search/lookup, statistics collection and so on - should be offloaded to specialized, on-chip co-processor engines.

At the same time, the software architecture should provide simple, single-instruction access to each of these functions via an application programming interface. Taken together, these characteristics can enable designers to embed high-performance network-processing capabilities while simplifying and minimizing the number of lines of code to be created.
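
One plausible shape for such an API is sketched below. The names are hypothetical, chosen only to illustrate single-instruction access to each on-chip engine; an actual NPU's interface would differ.

/* Hypothetical co-processor API: each call corresponds to one posted
 * instruction executed by a dedicated on-chip engine. */
int  np_classify(packet_t *pkt, class_result_t *out);  /* classification engine */
int  np_lookup(const flow_key_t *key, route_t *out);   /* search/lookup engine  */
int  np_meter(uint32_t flow_id, size_t pkt_len);       /* traffic management    */
int  np_enqueue(packet_t *pkt, uint16_t queue_id);     /* queuing/scheduling    */
void np_stats_add(uint32_t counter_id, size_t bytes);  /* statistics collection */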

Reducing lines of code

To deliver maximum performance while reducing program size and complexity, an underlying hardware architecture can divide packet processing between RISC-based NPU cores and special-function, on-chip co-processors. From the programmer's perspective, each co-processor implements a single-instruction operation as program tasks post requests to, and receive data from, the appropriate co-processing elements.

The result is a network-optimized instruction-set computing (NISC) device. The combination of RISC cores and co-processing engines gives programmers the ability to offload common yet complex tasks while preserving software-programmable flexibility in the structure and flow of packet processing.
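
Behind a single call such as the hypothetical np_lookup() sketched earlier, the kernel might post a request descriptor to the lookup engine and suspend the task until the result comes back, roughly as follows. Again, this is an illustrative sketch of the post/receive pattern, not a specific device's interface.

/* Illustrative view of what the kernel could do behind one API call. */
int np_lookup(const flow_key_t *key, route_t *out)
{
    coproc_req_t req = {
        .engine  = ENGINE_LOOKUP,      /* target the on-chip search engine */
        .payload = key,
        .len     = sizeof(*key),
    };

    post_request(&req);                /* queue the request to the engine  */
    yield_until_done(&req);            /* other packets run on this core   */

    if (req.status == 0)
        *out = *(const route_t *)req.result;
    return req.status;
}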

An NISC architecture yields several benefits. It balances performance and flexibility, delivers comparable application performance with smaller, lower-power hardware, and most importantly, offers simpler software with fewer lines of code.

In contrast to NISC, less-flexible NPU approaches achieve performance by partially hardwiring the flow order of certain functions, leaving them potentially unable to support unforeseen algorithm designs or future protocols. NISC also contrasts with architectures that achieve performance by ganging together a larger number of more general-purpose RISC cores; these offer flexibility but typically increase complexity by requiring more functions to be implemented purely in software.

Comparisons have shown that such architectures require an order of magnitude more lines of code to be written, debugged and performance-tuned to implement similar applications.

Embedding network processor functionality for high-performance systems is not trivial. It requires a carefully structured blend of programming model, facilities and a unifying software infrastructure that enables performance and flexibility. It must support fast and easy code development, with a high degree of code reuse from one generation to the next.

By combining the application-creation and debug facilities of a NISC architecture with a development-simplifying programming model for parallel multiprocessors and an API-based embedded kernel to speed up application development, industry-leading NPU architectures give system designers the best of both worlds: high-performance, specialized embedded functionality and the ability to efficiently access it.

- Robin Melnick & Eric Cowden

Applied Micro Circuits Corp.




