
Technical computing trends drive distributed, parallel computing apps

Posted: 12 Oct 2007

Keywords: technical trends, distributed apps, parallel computing

Several trends in the technical computing market are driving the rapid creation and use of distributed and parallel computing applications. These trends include commercial off-the-shelf (COTS) computer clusters that provide affordable high-performance distributed environments; operating systems that add features to simplify cluster management, with many schedulers now supporting multiple cluster configurations; and the availability of software libraries and tools for distributed and parallel computing. Such new and expanded capabilities enable engineers and scientists to interactively develop distributed or parallel applications and to adapt existing applications to run in distributed environments. This article examines significant trends in the technical computing market and their impact on software vendors and on developers of technical and high-performance computing (HPC) applications.

Technical computing applications
Technical computing applications can generally be classified as serial, distributed or parallel. Serial applications, in which one instance of an application runs on one computer, account for the vast majority of software that people use on their PCs. Distributed applications leverage multiple computational engines, which can run on one or many computers, and contain independent tasks that do not interact with each other. Distributed applications, such as Monte Carlo simulations, typically execute the same algorithm over and over with different input parameters. Parallel applications also leverage multiple computational engines, which can run on one or many computers, but contain interdependent tasks that exchange data as the application executes; these tasks often operate on large data sets. Each of the major types of technical computing applications is undergoing significant change.
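To make the distinction concrete, below is a minimal sketch (in MATLAB, with illustrative names and parameters) of the kind of independent unit of work a distributed application repeats: each call depends only on its own inputs, so many calls can run concurrently without interacting.

  % Hypothetical example: one independent Monte Carlo task that
  % estimates pi from N random points in the unit square.
  function p = estimatePi(N)
      xy = rand(N, 2);                   % N random points
      p  = 4 * mean(sum(xy.^2, 2) <= 1); % fraction inside the quarter circle
  end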

Rather than using technical computing solely for algorithm development, organizations increasingly convert ideas quickly into production-ready algorithms that can be included in commercial products. Researchers who previously focused on problem analysis are moving into data analysis to design applications, applying tools to gain theoretical insight from their data. Application development and delivery, in which enabling tools and methods are created, deployed and used throughout an organization, is also becoming standard. All of these trends contribute to an increase in the size and complexity of technical computing applications. As a result, researchers find that applications often either exceed the memory capacity of their computers or exhibit extremely long run times. Such limitations are driving serial applications to become distributed or parallel applications.

Low-cost hardware
High-performance computing was in the past limited primarily to large organizations with the resources to purchase supercomputers. Over the past decade, the performance of standard COTS computing platforms has grown to approach that of supercomputers. Computing power that cost millions of dollars in HPC platforms ten years ago is now available in COTS computer clusters for a few thousand dollars. The fact that the lower end of the HPC market has grown significantly while demand for larger enterprise clusters has declined indicates how rapidly clusters are being adopted for interactive personal use. The interactivity of personal clusters is significant because it enables interactive prototyping; in the past, prototyping had to be done in a batch manner that involved submitting and retrieving work.

Today's affordable clusters come in different forms. Standard computers have moved from having a single processor to having multiple cores on one processor, or even multiple processors. This shift toward multiple cores alone is rapidly making it a requirement that distributed and parallel applications be supported even on a single machine. Because today's complex problems demand that an application scale beyond a single processor, engineers and scientists must decide whether applications should use:

  • Multiple threads on one machine

  • Multiple processes, which may each contain multiple threads, on one machine

  • A cluster of many machines

At the same time that questions are being raised about applications running on a single machine or a single cluster, many clusters are being linked across organizations into large multi-enterprise grids, such as TeraGrid in the United States and EGEE in Europe. These grid solutions represent a rapidly growing trend, especially in the academic community.

Schedulers, OS for foundation services
A variety of additional resource management solutions are coming to the aid of engineers and scientists who want to take advantage of distributed computing. OS vendors are working to simplify cluster management and reduce the reliance on IT resources. For example, Microsoft Compute Cluster Server (CCS) targets departmental and workgroup-level users who typically rely on dedicated IT support groups. Dozens of commercial and many freeware schedulers now support multiple cluster configurations. Schedulers offer advanced scheduling capabilities, batch workflow support, gains in utilization and performance, and improvements in scalability, reliability and security. Platform Computing, which develops and commercializes the LSF scheduler, now serves 1,700 Fortune 2000 companies.

Distributed, parallel engineering software tools
Taking advantage of clusters has historically required advanced programming skills that many engineers and scientists lack. Engineers and scientists developing HPC applications have had to either develop applications using the Message Passing Interface (MPI) in low-level languages such as C and FORTRAN, or program in higher-level languages and then recode their applications to use MPI in C or FORTRAN. These approaches were costly, time-consuming and error-prone. Even traditional HPC users are looking for better programming tools.

The MathWorks has responded by providing tools that make it possible to interactively develop distributed and parallel applications with MATLAB and to distribute Simulink models for execution on a cluster or on a multicore or multiprocessor computer. Distributed Computing Toolbox enables users to prototype distributed and parallel applications on their desktop computers. MATLAB Distributed Computing Engine lets users scale their applications to a cluster without changing them. Both tools support interactive use of computational resources as well as traditional batch use. They also take advantage of industry-standard interfaces such as MPICH2 and ScaLAPACK. Additionally, MATLAB now supports multithreading of some of its core algorithms, so serial applications can also benefit from hardware changes.
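As a rough illustration of that multithreading support, the following sketch assumes a MATLAB release of this era (R2007a or later) in which the maxNumCompThreads function reports and adjusts the number of computational threads; exact behavior varies by release.

  n = maxNumCompThreads;   % query how many threads built-in functions may use
  maxNumCompThreads(2);    % e.g. limit multithreaded built-ins to two threads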

With Distributed Computing Toolbox, distributed applications can be segmented into independent tasks that are distributed to cluster nodes. In the simplest case, where the problem can be divided into tasks consisting of the same function with the same number of input and output variables, a single function call parallelizes the problem. In more complex cases, only a few lines of code are required.
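As a sketch of this task-based workflow, the following assumes the Distributed Computing Toolbox job and task functions of this era (findResource, createJob, createTask, submit, waitForState, getAllOutputArguments); the job manager name is a placeholder, estimatePi is the hypothetical function sketched earlier, and exact signatures vary by toolbox version.

  % Locate a job manager; 'MyJobManager' is a site-specific placeholder.
  jm  = findResource('scheduler', 'type', 'jobmanager', 'Name', 'MyJobManager');
  job = createJob(jm);

  % Create 100 independent tasks, each running the same function
  % (one output argument) with its own input.
  for k = 1:100
      createTask(job, @estimatePi, 1, {1e6});
  end

  submit(job);                          % distribute the tasks to cluster nodes
  waitForState(job, 'finished');        % block until all tasks complete
  results = getAllOutputArguments(job); % collect one output per task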

For programming parallel applications, Distributed Computing Toolbox provides support for parallel for loops and global array semantics via distributed arrays. Distributed arrays store segments of an array on the participating labs (the toolbox's term for the workers in a parallel job) yet appear as regular arrays on every lab. Distributed arrays enable users to develop parallel applications without having to manage the low-level details of message passing.
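For the parallel for loops mentioned above, here is a minimal sketch, assuming a pool of workers has already been started (for example with the matlabpool command found in toolbox releases of this period); simulateOnce is a hypothetical user function standing in for any independent computation.

  results = zeros(1, 100);
  parfor k = 1:100
      % Iterations are independent, so the toolbox distributes them
      % across the available workers automatically.
      results(k) = simulateOnce(k);
  end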

Interactive programming
This new generation of programming tools has made it possible for the first time to interactively program both new and existing applications for distributed and parallel computers. Often, with little or no change to existing code, users can interactively prototype their applications on a cluster of machines. This capability was historically reserved for serial applications, but it is now available for distributed and parallel applications. For example, in a parallel application, programmers may simply apply the transpose function to an array, D, that is distributed across the processors in the cluster. The transposed matrix, E, is also distributed across the processors:

>> E = D'

This is the same code that users would write to transpose a matrix in a serial application and, in both cases, it can be typed and executed interactively.

A real-world example
The International Linear Collider (ILC) project, whose accelerator simulations are currently run out of the University of London, provides an example of how distributed computing can benefit large-scale computations. The ILC consists of two linear accelerators, each 20km long, that accelerate beams of electrons and positrons toward each other to produce collision energies of up to 1,000 gigaelectronvolts (GeV).

Because the ILC particle beams are less than 5nm thick, they can be knocked out of alignment by such tiny disturbances as small seismic events, the tidal pull of the moon, and ground motion caused by trains and motor vehicles. To ensure that the particle beams collide head-on, researchers are developing a real-time beam alignment control system. This development effort relies on accurate simulations requiring a comprehensive model of the entire ILC. Each simulation tracks millions of individual particles through the accelerator and incorporates the effects of ground motion.

Running a single simulation on a high-end PC used to take up to three days of processing time. The ILC team now uses MathWorks distributed computing tools to run more than 100 simulations in parallel on a computer cluster managed by the Maui scheduler with a Portable Batch System (PBS) queue. This approach can run a hundred simulations in the time previously required to run one, and it will reduce the total simulation time for the project by hundreds of days.

Conclusion
Several trends have converged to enable engineers and scientists to develop applications that access many computing resources simultaneously. As a result, they can solve larger and more computationally intensive problems more quickly. Application design must also be considered, because applications developed on a single machine should be able to scale to many machines. Using MathWorks distributed computing tools, engineers and scientists can now much more easily develop distributed and parallel applications that scale from one machine to many.

- The MathWorks



