EE Times-Asia > Optoelectronics/Displays

Developing video image stabilisation IP for FPGAs

Posted: 07 Mar 2013

Keywords: image stabilisation, electro-optic sensors, digital video

Image stabilisation is a crucial capability for many electro-optic sensors, where an operator or user is required to view the output imagery. The technique can therefore enhance many practical viewing systems, spanning a very broad range of applications including those found in defence and security sectors.

Stabilisation provides a means for reducing both image blur and unwanted frame-to-frame image shifts and rotations, thereby aiding image interpretation and reducing the operator's workload. For systems that require the operator to locate or classify features within the video stream (typically recognition and identification tasks), a stabilised image stream helps improve the accuracy of these tasks.
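Removing a frame-to-frame shift amounts to resampling each frame by the inverse of its estimated offset. As a minimal illustrative sketch (not RFEL's implementation), assuming frames held as plain Python lists of grey levels and a translation-only model:

```python
def compensate(frame, dx, dy, fill=0):
    """Re-align a frame that has drifted by (dx, dy) relative to the
    reference, by sampling it at the shifted coordinates.  Border pixels
    uncovered by the shift are padded with a fill value."""
    h, w = len(frame), len(frame[0])
    return [[frame[y + dy][x + dx] if 0 <= y + dy < h and 0 <= x + dx < w else fill
             for x in range(w)]
            for y in range(h)]
```

A production core would also handle rotation and sub-pixel offsets via interpolation, which this sketch omits.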

There are a number of techniques for stabilising an image, based either on mechanical correction or on image processing. Mechanical stabilisation techniques include those that gyroscopically stabilise the whole camera system or use elements within the camera to effectively move the lens or detector array.

Mechanical stabilisation techniques are well-established, although they can have a limited rate of response. Furthermore, they tend to be more expensive, consume more power and are physically larger and heavier. Mechanical stabilisation techniques used within the camera housing are generally less expensive and are physically more compact. However, they can have performance limitations such as an inability to correct for roll, and may operate over a restricted range of unwanted camera movements. In addition, such integrated camera techniques are less well-established for infrared cameras and those cameras that use interchangeable lenses. Finally, it should be noted that mechanical stabilisation corrects for movement associated with the camera, but does not correct for other effects such as atmospheric scintillation.

Figure 1: Unstabilised image set. The 5 frames have been false-coloured and superimposed to illustrate the movement effect.

Digital video stabilisation techniques provide image correction by using information from within the video-stream and this includes movements of the camera, any atmospheric effects, and movement within the scene itself. The approach offers a potentially significant performance gain with minimal impact on power, weight, and size. However, to realise these benefits, the stabilisation algorithm complexity can be high, which translates into a high computational load. Although electronic stabilisation can be achieved using a low-cost CPU architecture, the limited processing bandwidth restricts the maximum input image size and frame rate. Consequently, the capability of the stabilisation algorithms has to be compromised to facilitate real-time operation. GPU architectures can be used to reduce the limitations associated with CPU-only devices and provide higher processing bandwidths that enable more complex processing. However, GPU implementations consume more power and often still need an additional system host for designs based on commercial-off-the-shelf products.
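The computational load mentioned above comes largely from the motion-estimation step. A toy example of the idea, assuming a translation-only model and an exhaustive block-matching search over sum-of-absolute-differences (real IP cores use far more sophisticated, parallelised estimators):

```python
def sad(ref, cur, dx, dy):
    """Sum of absolute differences between ref and cur displaced by (dx, dy),
    evaluated over the overlapping region only."""
    h, w = len(ref), len(ref[0])
    total = 0
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            total += abs(ref[y][x] - cur[y + dy][x + dx])
    return total

def estimate_shift(ref, cur, search=2):
    """Exhaustively test every candidate offset within +/-search pixels
    and return the one that best aligns cur to ref."""
    candidates = [(dx, dy) for dy in range(-search, search + 1)
                           for dx in range(-search, search + 1)]
    return min(candidates, key=lambda s: sad(ref, cur, s[0], s[1]))
```

The cost of this brute-force search grows with both frame size and search range, which is precisely why a CPU-only implementation forces compromises on resolution and frame rate, and why FPGA or GPU parallelism pays off.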

The approach taken by RFEL has been to specify a stabilisation system that can readily support high input resolutions and frame rates, while maintaining low latency and power consumption. The solution was also required to be compatible with cameras operating over different spectral bands, with support for multiple camera interfaces.

Physically, a flexible and compact hardware implementation was required that supports both stand-alone and networked applications. Furthermore, the stabilisation solution should allow rapid integration into third-party hardware, including retro-fitting into in-service equipment.

To meet these challenging requirements, RFEL elected to base the implementation on the latest FPGA architectures with embedded ARM processors. The primary drawback of this choice is the engineering development time, which is significantly higher than for a CPU/GPU software module implementation.

Fortunately, RFEL has been developing signal and video processing modules for many years, which allowed substantial re-use of pre-existing functions and development tools. Initially, functional requirements were captured by liaising with major customers in the military and security markets.

The system was then designed and developed using RFEL's methodology of floating- and fixed-point modelling in MATLAB, which allows performance testing and debugging, and substantially de-risks all aspects of system implementation.
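A core step in that kind of fixed-point modelling is checking how much precision is lost when floating-point coefficients are quantised to the bit widths the FPGA logic will actually use. A hedged sketch of that check in Python (the Q1.15 format is an illustrative assumption, not RFEL's choice):

```python
def to_fixed(value, frac_bits=15):
    """Quantise a float to a signed fixed-point integer (round to nearest)."""
    return round(value * (1 << frac_bits))

def to_float(fixed, frac_bits=15):
    """Convert a fixed-point integer back to a float for comparison."""
    return fixed / (1 << frac_bits)

def max_quantisation_error(coeffs, frac_bits=15):
    """Worst-case error introduced over a coefficient set by the chosen
    fixed-point format; with round-to-nearest this is at most half an LSB."""
    return max(abs(c - to_float(to_fixed(c, frac_bits), frac_bits))
               for c in coeffs)
```

Running the fixed-point model over representative video and comparing its output against the floating-point reference in the same way is what allows the bit widths to be fixed before any FPGA logic is committed.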

