Embedding security into servers
Keywords: embedded system, internet security, server security, Unix server, Windows server
Embedded systems control much of the world's critical infrastructure, making them a prime target for attack by everyone from hackers to terrorists. However, embedded systems have at their disposal an impressive set of defenses: mechanisms and procedures commonly used for purposes other than security that, in some cases, prove stronger than the security mechanisms of traditional enterprise systems such as Windows or Linux.
In the early days of my career as an embedded systems developer, I worked on critical communications systems. Every aspect of the software and hardware had to be perfect, because any failure could prove disastrous. Updating software sometimes involved climbing hurricane-proof towers in the Everglades, brushing aside various lizards and insects, and manually plugging in PROMs, so the team was highly motivated to get it right the first time.
When I left embedded development briefly to develop Unix-based Internet systems for enterprise environments, I discovered an entirely different world. In the enterprise environment, things go wrong, and users accept that; they even expect it. In this world, developers are not held responsible for errors, and they can even charge money for the upgrade to a version that fixes a bug.
Sensing the gap between the stability of an embedded system and the functionality of the Unix Web servers, a few colleagues and I decided to close it. We developed an Internet server based on hard, real-time methodologies, with the hope of showing the enterprise world what we embedded developers knew existed: simple, small, reliable systems with all the power of a modern Unix server.
A little more than a year ago, the U.S. government helped us realize that all the features used to keep the device running were actually security features. For example, memory scans meant to detect voles chewing on traces also prevented malicious Web content modification.
Hydra, the product with which I am currently involved, offers dozens of features that we developers took for granted as being necessary for reliability but which became security features. A number of these are simple enough to integrate into embedded systems immediately.
Many systems feature their own memory manager, whereby processes allocate and de-allocate fixed-size blocks of memory. One simple mechanism to make these memory managers more secure is to use protection bits. These bits surround the memory and are filled with a distinct pattern.
CPU-constrained systems could use a fixed pattern. However, a simple hash of the block's address is more difficult for a malicious task to guess. Even if you use the memory-allocation calls provided by your RTOS, you can simply wrap each call with code that allocates a few extra bytes, and keep a linked list of the allocated addresses to monitor.
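The wrapper described above can be sketched roughly as follows. This is a minimal illustration, not a specific RTOS API: `guarded_alloc` stands in for a wrapper around whatever allocator the RTOS provides (plain `malloc` here), and `guard_pattern` is one hypothetical address hash; a real system might mix in a boot-time seed.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical guard-word allocator: wraps malloc() (a stand-in for an
 * RTOS allocator) and brackets each block with one guard word on each
 * side. The guard pattern is a simple hash of the block's address, so
 * a malicious task cannot defeat it by guessing one fixed constant. */

static uint32_t guard_pattern(const void *addr)
{
    /* Simple multiplicative address hash; illustrative only. */
    uintptr_t a = (uintptr_t)addr;
    return (uint32_t)(a * 2654435761u) ^ 0xA5A5A5A5u;
}

void *guarded_alloc(size_t size)
{
    uint32_t *raw = malloc(size + 2 * sizeof(uint32_t));
    if (!raw)
        return NULL;
    uint8_t *user = (uint8_t *)(raw + 1);
    uint32_t tail = guard_pattern(raw);
    raw[0] = guard_pattern(raw);                  /* front guard word */
    memcpy(user + size, &tail, sizeof tail);      /* rear guard word  */
    return user;
}

/* Returns 1 if both guard words are intact, 0 if something clobbered one. */
int guard_intact(const void *user, size_t size)
{
    const uint32_t *raw = (const uint32_t *)user - 1;
    uint32_t tail;
    memcpy(&tail, (const uint8_t *)user + size, sizeof tail);
    return raw[0] == guard_pattern(raw) && tail == guard_pattern(raw);
}
```

A diagnostic task would keep each `(pointer, size)` pair on the linked list mentioned above and periodically call `guard_intact` on every entry.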
Systems such as Linux and Windows effectively isolate the memory of each process, and a typical RTOS's flat memory model would seem to increase the likelihood of a buffer overrun. Closer examination, however, reveals a few advantages of the flat memory model. In a flat memory model, a single diagnostic task (or other process) can examine the protection bits at any time to see if a task has run off the end of a buffer.
Protecting buffers
In a segmented memory model, each process must watch out for itself in the event that it clobbers one of its own buffers. Similar protection-word mechanisms are sometimes used by debuggers and memory managers in any kind of memory model, but it is possible to use the bits for more. For example, rather than using just a pattern in those protection bits, why not employ a simple checksum? Some RTOSes will detect when you try to write outside memory that you allocated, but I know of none that can detect whether the contents of a buffer have been modified by anything other than the code that should be modifying that buffer.
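One way to realize the checksum idea above is to let the guard word store a checksum of the buffer's contents: the owning task reseals the buffer after each legitimate write, and any other writer is caught on the next verification pass. The structure, function names, and the Adler-style checksum below are all illustrative assumptions, not a real RTOS interface.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical content-checksum guard: the guard word doubles as a
 * checksum of the buffer's contents. The owning task calls
 * buffer_seal() after every legitimate write; a low-priority watchdog
 * task calls buffer_verify() to catch writes made by anyone else. */

struct guarded_buf {
    uint32_t checksum;   /* guard word holding the content checksum */
    size_t   len;        /* bytes of data currently in use          */
    uint8_t  data[64];   /* illustrative fixed-size block           */
};

static uint32_t content_checksum(const uint8_t *p, size_t n)
{
    /* Simple Adler-style sum; real code might use CRC-32 instead. */
    uint32_t a = 1, b = 0;
    for (size_t i = 0; i < n; i++) {
        a = (a + p[i]) % 65521u;
        b = (b + a) % 65521u;
    }
    return (b << 16) | a;
}

void buffer_seal(struct guarded_buf *buf)         /* owner: after writing */
{
    buf->checksum = content_checksum(buf->data, buf->len);
}

int buffer_verify(const struct guarded_buf *buf)  /* watchdog: any time  */
{
    return buf->checksum == content_checksum(buf->data, buf->len);
}
```

The cost is that the owner must remember to reseal after every write; the payoff is that a stray or malicious write from any other task in the flat address space becomes detectable.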
Checksums are also useful to verify that no one has modified your code image. Say, for example, your image is stored in a PROM or Flash memory, and you move it into RAM to run. During the copying to RAM, why not perform a checksum? Then a low-priority task can periodically recompute the checksum to verify that no one has found a way to modify your image in RAM. While this mechanism is not foolproof, it provides a level of protection far beyond that of traditional operating systems.
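The copy-and-recheck scheme above can be sketched as below. The function names and the rotate-XOR checksum are illustrative assumptions; `load_image` stands in for whatever boot code copies the image out of PROM or Flash, and `image_intact` is the body of the low-priority task.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical image-integrity check: compute a checksum while copying
 * the code image from ROM/Flash into RAM, then have a low-priority
 * task recompute it periodically to detect tampering. */

static uint32_t image_checksum(const uint8_t *image, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = ((sum << 5) | (sum >> 27)) ^ image[i];  /* rotate-XOR */
    return sum;
}

/* Copy the ROM image to RAM; return the checksum recorded at copy time. */
uint32_t load_image(uint8_t *ram, const uint8_t *rom, size_t len)
{
    memcpy(ram, rom, len);
    return image_checksum(ram, len);
}

/* Body of the low-priority integrity task: 1 = image still matches. */
int image_intact(const uint8_t *ram, size_t len, uint32_t recorded)
{
    return image_checksum(ram, len) == recorded;
}
```

As the article notes, this is not foolproof: an attacker who can also patch the checksum or the checking task defeats it. But it raises the bar well above an unchecked RAM image.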
- Eric Uner, Co-Founder, Bodacion Technologies Inc.