EE Times-Asia

Implement virtualization in mobile devices

Posted: 10 Sep 2008

Keywords: virtualization design, mobile devices, multicore processors

Despite virtualization's obvious appeal to embedded software developers and OEMs, adoption of the technology may stall due to inherent limitations in virtualization platform architecture. Here is a look at the limitations and how they can be overcome by a different approach to building embedded virtualization software.

Over the last five years, virtualization has evolved from an obscure technology into a key enabler of enterprise server and desktop applications. More recently, virtualization has begun to play a comparably pivotal role in embedded development and deployment. Segments leading this wave of adoption are mobile telephony, telecommunications and network infrastructure, and secure embedded computing. In all these areas, developers and integrators look to virtualization to address needs for increased reliability and security, to ease maintenance and forward-migration of legacy code, and to optimize hardware utilization, both for multi-OS partitioning on a single CPU and for managing execution across multiple CPUs and multicore processors.


Challenges to embedded virtualization
Let's examine the areas in which virtualization can fall short in embedded design, including those areas targeted by advocates of virtualization as ripest for adoption.

Managing software complexity: The most strident clarion call for adoption of virtualization in embedded systems arises from dramatic increases in the size and complexity of embedded software. For the past decade, embedded software content has been doubling annually, such that today's embedded systems boast source code bases of tens of millions of lines, equaling and sometimes surpassing the volume of enterprise program source. The challenge of managing and maintaining this volume of code is compounded by the inherently complex, multi-threaded and latency-sensitive nature of embedded software.

Advocates of embedded virtualization cite this present and growing complexity as the prime motivator for adopting virtualization platforms. Unfortunately, virtualization falls short in addressing this primary challenge to embedded software development. While segmenting and isolating software components into distinct virtual machine (VM) containers can enhance reliability, VM-level granularity is too coarse to make a serious dent in addressing creeping complexity. Guest OSs and hosted applications running in separate VMs can actually increase overall complexity, especially when virtualization platform software lacks insight and integration into embedded systems architecture and is not harmonized with embedded software engineering practices.

Isolation vs. integration: The clearest and most immediate benefit that embedded applications realize from virtualization is improved reliability and security from strict, hardware-enforced separation among guest operating systems (Linux, Windows CE, RTOSes, etc.) and other execution contexts (lightweight in-house kernels, device drivers, etc.). This isolation helps prevent unintended corruption of code and data across independent functional areas in intelligent devices (e.g., baseband radio stacks and user-interface code in mobile phones), and also erects barriers against malicious access by code downloaded by end users.

The robustness afforded by virtualization to embedded applications, however, runs counter to traditional embedded design practices. Such practices emphasize efficient data sharing among embedded software components, but are obstructed or even disabled by strict partitioning of code into virtual machines. Moreover, without streamlined communication among code running in different virtual machines (VMs), virtualization can degrade embedded systems performance to unacceptable levels and impact existing integration between OEM and third-party software.

Scheduling opacity: Embedded systems software routinely involves complex mixes of multi-process and multi-threaded programs. Both mobile and stationary systems can require scheduling and synchronization of hundreds of tasks; even quiescent or suspended equipment can still boast dozens of running threads. Moreover, unlike much enterprise and desktop software, embedded applications can involve rich prioritization schemes and disciplines (e.g., Rate Monotonic Analysis). Designing, debugging and tuning scheduling priority, execution policy and real-time event response are system-wide activities that demand fine-grained visibility and control.

Imposition of virtualization runs counter to this detailed, system-wide perspective. Segregation of software components places each in a virtual "black box," with its own OS-specific scheduling priority and policy characteristics. Without the ability to synchronize and normalize scheduling priority and policy across VMs, embedded software devolves into a collection of "wheels within wheels." Each OS in each VM runs according to its own scheduling scheme, and prioritization across virtual machines occurs at the level of the entire VM or guest OS, not at the required global level of individual tasks or other schedulable entities. This VM-level opacity not only runs counter to common embedded design practices, but it can also completely impair the development, debugging and deployment of even nominally complex embedded systems.
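To make the "wheels within wheels" problem concrete, consider how a virtualization layer might map each guest's local priorities onto a single global scale, so that individual tasks rather than whole VMs can be ranked against each other. The C sketch below is purely illustrative: the priority ranges, the 0-255 global scale and the function names are assumptions, not any particular hypervisor's API.

```c
#include <assert.h>

/* Illustrative sketch: normalize guest-local priorities onto one
 * global 0..255 scale, so a virtualization layer could schedule
 * individual tasks rather than opaque whole-VM containers. */

typedef struct {
    int lo, hi;   /* guest-local priority range (lo = lowest) */
} guest_prio_range;

/* Map a guest-local priority linearly onto the global 0..255 scale,
 * clamping out-of-range values to the guest's declared range. */
static int global_priority(guest_prio_range r, int local)
{
    if (local < r.lo) local = r.lo;
    if (local > r.hi) local = r.hi;
    return (local - r.lo) * 255 / (r.hi - r.lo);
}
```

With such a mapping, a task at the top of an RTOS guest's 0-31 range and a task at the top of a Linux guest's 0-139 range land at the same global priority, which is exactly the cross-VM normalization that VM-level scheduling cannot express.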

Energy management: Optimizing energy utilization in embedded systems stems from requirements specific to design domains: in mobile devices like phones and media players, energy management yields longer battery life and helps OEMs differentiate their products in crowded marketplaces. In stationary equipment, like networking equipment (routers, gateways, security appliances) and consumer electronics devices (television sets, DVRs, IP phones, and durable appliances), energy management helps lower electric bills and meet emerging needs for energy conservation.

Energy management in intelligent devices involves a mix of hardware and software techniques. Most often it involves the OS kernel recognizing both reduced user interaction (fall-off in keyboard/keypad and other input device events) and quiescent program states (waiting for external events or long pauses in execution profile). When systems enter such idle states, energy management software can selectively shut down power-hungry devices like LCD displays, scale back CPU and bus clocks, and lower operational voltages. Conversely, that same software must be able to ramp performance back to full-throttle levels to service new events and user input.

Effective energy management requires extensive cooperation among OS, device drivers and even application software: it is a global discipline. Unfortunately, as with scheduling, opacity across VM contexts prevents effective energy management: one quiescent guest OS lacks the visibility into other VMs needed to make energy management policy decisions. In complementary fashion, hypervisors lack sufficient understanding of the internal states of the guest OSs they manage, preventing energy management at the virtualization platform level.
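As a concrete illustration of why this opacity defeats global power policy, consider a platform that may only enter a deep sleep state when every guest has reported itself idle. The sketch below is hypothetical (the data structures and names are invented for illustration), but it captures the essential point: a single busy or opaque VM vetoes system-wide power-down.

```c
#include <stdbool.h>

/* Illustrative sketch: a virtualization layer can only enter a deep
 * power state when every guest reports itself idle.  A guest that
 * cannot or does not report its state blocks the transition. */

#define MAX_VMS 4
static bool vm_idle[MAX_VMS];   /* idle votes, one per guest VM */

static bool can_enter_deep_sleep(int nvms)
{
    for (int i = 0; i < nvms; i++)
        if (!vm_idle[i])
            return false;   /* one busy guest vetoes global power-down */
    return true;
}
```

In a real design the "idle vote" would have to come from inside each guest OS, which is precisely the cross-VM visibility that a stateless hypervisor lacks.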

Information flow control: The theme of scope mismatch between embedded design and virtualization recurs throughout this article, and it applies as much to the flow of information and execution control among VMs as to the areas described above. Embedded systems typically need to share data and synchronize activities at a global level, yet virtualization emphasizes compartmentalization at a local VM level. Moreover, traditional embedded inter-process and inter-processor communications mechanisms (IPCs) tend to eschew formal safeguards in favor of low latency and high throughput.

Emerging requirements for embedded systems to accommodate fine-grained access and security policies for types of users, roles, and content owners seem to beg imposition of virtualization and strict partitioning. However, the model of cross-VM communication provided by hypervisors is based on virtual network devices. This scheme fails to meet the performance requirements of embedded designs, as well as the need for specific restrictions on communication (a subsystem may be allowed to communicate with some of its peers but not others).
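A minimal sketch of the kind of selective communication policy described above: each subsystem holds an explicit mask of the peers it may send to, rather than inheriting all-to-all reachability from a shared virtual network. The subsystem names and the bitmask encoding are invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of explicit cross-VM communication rights:
 * a subsystem may send only to peers listed in its send mask. */

enum { UI = 0, BASEBAND = 1, MEDIA = 2, DOWNLOADED_APP = 3 };

static const uint8_t send_mask[4] = {
    [UI]             = (1u << BASEBAND) | (1u << MEDIA),
    [BASEBAND]       = (1u << UI),
    [MEDIA]          = (1u << UI),
    [DOWNLOADED_APP] = 0,   /* untrusted downloaded code: no IPC rights */
};

static bool may_send(int from, int to)
{
    return (send_mask[from] >> to) & 1u;
}
```

Under this policy the UI may talk to the baseband, but downloaded code may talk to no one, and the baseband cannot reach the media subsystem; a virtual-network model, by contrast, makes every endpoint reachable and pushes such restrictions into each guest.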

Security- and safety-oriented design: With embedded systems being increasingly deployed in mission- and life-critical functions, concerns about their correct operation become paramount. Large software systems are inherently faulty, and the time-honored approach to reducing the likelihood of failure is to keep the safety- or security-critical software to a minimum. This critical part of the system is called its trusted computing base (TCB). Minimizing the amount of TCB code is crucial to ensuring security and safety.

Traditional hypervisors fail to support this imperative. The hypervisor itself is inherently part of the TCB, and when it contains drivers for all physical devices, those drivers enter the TCB with it. Alternatively, drivers for real devices are often hosted by a driver OS in a special, trusted VM (frequently called "Dom0"), which enlarges the TCB by the complete scope of that driver OS. Compared to a design without virtualization, either approach increases the size of the TCB, and thus reduces security or safety.

Embedded virtualization shopping list
The above litany of limitations is not intended to scare readers away from the benefits of embedded virtualization, but instead to ground reader expectations firmly in reality. Taking these limitations into consideration, then, what should we seek in a more ideal embedded virtualization platform? Strong encapsulation that:

- Preserves embedded stack attributes, including real-time responsiveness

- Allows cross-VM scheduling and harmonizes execution policies

- Accounts for real-world embedded software engineering practices

- Accommodates energy management at a global level

System-wide security policy and a small TCB that:

- Supports low-latency, high-bandwidth IPCs among VMs and guest applications

- Is sufficiently granular to support complex embedded design needs

Building on microkernel technology
To fulfill this tall order for embedded virtualization, we need look no further than the humble and ubiquitous microkernel. A microkernel is a minimal OS kernel that provides only the most essential "bare bones" mechanisms needed to implement core services: address space management, threading/scheduling, and inter-process communication. On systems and CPUs that distinguish between user and kernel modes, the microkernel is the only part of the system executing in kernel mode. OS-level services are instead provided by user-mode daemons or server programs (picoservers) that implement device drivers, protocol stacks, file systems, user interfaces, etc.

Microkernels emerged in the 1970s as an alternative to large, monolithic operating system kernels. Instead of vertically layered OS architectures with system-call interfaces, microkernels typically feature a horizontal structure, with IPCs connecting applications and picoservers. In the three decades since their introduction, microkernels have enjoyed some commercial success, acting as the underpinnings for high-level OSs, with deployment in both enterprise and embedded settings. Microkernels have also been the focus of much academic research, yielding formal theory and proofs of their operation that are absent from commercial, mass-market kernels and OSs. Given their minimalist design and their emphasis on efficient IPC mechanisms and lightweight user-space device drivers, microkernels provide an ideal base technology for virtualization. Indeed, microkernels enjoy a solid track record at the core of widely deployed hypervisors.
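The horizontal structure described above can be caricatured in a few lines of C: an application reaches a user-mode picoserver only through a synchronous IPC call. The message format, operation code and server below are invented for illustration; in a real microkernel such as L4, the IPC path is implemented inside the kernel with the server running in its own address space.

```c
#include <string.h>

/* Illustrative sketch of the horizontal microkernel structure:
 * applications talk to user-mode picoservers via synchronous IPC. */

typedef struct {
    int  op;            /* requested operation (codes are invented) */
    char payload[32];   /* request/reply data, transferred in place */
} ipc_msg;

/* A user-mode file-system "picoserver": services one request per call. */
static void fs_server(ipc_msg *m)
{
    if (m->op == 1)                    /* op 1: read a fixed greeting */
        strcpy(m->payload, "hello");
}

/* Stand-in for the kernel's synchronous IPC path: deliver the message
 * to the server and return with the reply written in place. */
static void ipc_call(void (*server)(ipc_msg *), ipc_msg *m)
{
    server(m);
}
```

The point of the caricature is the shape, not the detail: the "system call" is just a message exchange, so the same mechanism that serves applications also connects guest OSs, drivers and other components hosted by a microkernel-based hypervisor.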

Building on microkernel technology helps embedded virtualization overcome the limitations listed earlier.

Additional benefits of virtualization with microkernel
Microkernels bring additional benefits to hypervisor design, especially for security and quality assurance. Their small TCB is more amenable to formal certification regimes for security (e.g., Common Criteria) and also for mission-critical and life-critical scenarios requiring rigorous certification.

The same attributes make it possible to subject microkernels and microkernel-based hypervisors to formal verification, i.e., mathematical proof of functional correctness. The ability to verify correctness of implementation and execution of hypervisors places this software technology on a par with underlying hardware, opening up virtualization to an ever-greater array of application possibilities.

This article has demonstrated how stateless, passive hypervisors fail to meet the everyday challenges faced by embedded designers. Attempts to accommodate real-world embedded needs can end up "punching holes" in the essential isolation offered by virtualization, or introducing added complexity and instability by force-fitting a legacy RTOS into partitioning schemes that don't pass true virtualization muster. Such ad hoc approaches violate the principles of separation essential to virtualization and address only a subset of the limitations listed earlier in this article. By contrast, microkernel-based virtualization platforms are small and flexible enough to meet embedded design requirements as well as emerging needs for rock-solid, verifiable security and safety.

About the author
Gernot Heiser
is CTO and cofounder of OK Labs, which he founded in August 2006. As CTO, he sets the strategic direction of the company's R&D.
