EE Times-Asia > Sensors/MEMS

What you need to know about sensor fusion

Posted: 12 Oct 2015

Keywords: sensor fusion, gyroscope, micro-electro-mechanical systems (MEMS), Kalman filter

Imagine these were angles in radians. The difference between those two methods of comparison is 0.036666667 radians, which is 2.1 degrees, a major issue for our little drone, which would by now have been drifting helplessly sideways had we not corrected such a large compensation error.
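As a quick sanity check on that conversion, using only the numbers quoted above (the variable names here are just for illustration):

```python
import math

# The drift error quoted above, in radians
error_rad = 0.036666667

# degrees = radians * 180 / pi
error_deg = math.degrees(error_rad)
print(round(error_deg, 1))  # → 2.1
```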

To gather all the pieces so far: the drone fuses angular data from three gyros to keep itself stabilised in the air, and it fuses additional acceleration and magnetic-field data to compensate for gyro instability. To achieve this fusion and keep it precise, the drone maintains a mathematical model of the different sensors' behaviours. This shows how even a simple model of inertial sensor fusion can give interesting and delicate results.
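The article does not specify which filter the drone runs, but the idea of correcting gyro drift with accelerometer data is often sketched as a complementary filter. Everything below — the function name, the alpha value, and the simulated gyro bias — is an illustrative assumption, not the drone's actual model:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyro rate (rad/s) with an accelerometer-derived angle (rad).

    The integrated gyro term tracks fast motion; the accelerometer term
    slowly pulls the estimate back, cancelling gyro drift over time.
    alpha close to 1 trusts the gyro more on short timescales.
    """
    gyro_angle = angle_prev + gyro_rate * dt           # integrate angular rate
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Simulated hover: the true tilt is 0 rad, but the gyro reports a
# constant bias of 0.01 rad/s (an illustrative value).
angle = 0.0
for _ in range(1000):                                  # 10 s at 100 Hz
    angle = complementary_filter(angle, gyro_rate=0.01,
                                 accel_angle=0.0, dt=0.01)

# Pure gyro integration would have drifted 0.1 rad (~5.7 degrees);
# the fused estimate settles near 0.0049 rad instead.
print(round(angle, 4))  # → 0.0049
```

A full Kalman filter, mentioned in the keywords, goes further by weighting each sensor according to its modelled noise rather than a fixed alpha.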

Another form of modern sensor fusion deals with vision, or rather with different ways in which the same object can be seen. This is called image fusion.

Image fusion can happen in various ways, the most familiar of which can be found on many modern smartphones and cameras. It is usually abbreviated as "HDR" (short for "High Dynamic Range") and refers to a method for creating an optimal image despite extreme differences in lighting. For instance, let's pretend we are taking a photograph. If part of the scene has deep shadows and part of the scene is brightly lit, it is very difficult to find the optimal exposure, and due to the limited dynamic range of most consumer cameras available today, we will get one of three equally frustrating results:

Shadow: By pushing the exposure up, we will be able to see the details in shadow, but the highlights will be overexposed. This usually brings the pixels in this area to saturation, or (even worse) bleeds the overexposure over the edge of the object and creates a halo around it.

Strong light: By pulling the exposure time down, we will optimise the highlights but underexpose the shadows, making them pitch black.

Flat or weighted average: Unless we are making an artistic choice, this is usually the preferred method for classical photography because it allows us to make the most of the situation. By keeping some level of detail in the shadows and some in the highlights, we arrive at a compromise, but if the scene has enough contrast we will get a greyish picture and usually lose some detail at both extremes of the curve.

This is where HDR comes into the picture. HDR can be a) a more advanced sensor that can "see" a higher range of contrast values (thus having a higher dynamic range), or b) an algorithm that leverages a standard sensor by fusing several exposures. The latter is a more interesting case, because it pertains more to our subject and lets us get more from less specialised equipment.

So what does HDR do? To get an HDR picture, we take different exposure options similar to the ones discussed earlier and combine them in a "smart" way. In other words, assuming we took only three exposures, one will have interesting details in the shadowed areas, one will have interesting details in the highlighted areas, and one will be generic, with details in the areas that are neither highlighted nor shadowed. Taking the best part of each picture and combining them gives us a picture which does the impossible: a single photo with rich details and colours in all areas of the picture, closer to what our eyes are used to seeing in real life. (This is because the eye itself performs a continuous fusion of images, frequently scanning a scene while the brain reconstructs the picture.)
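One common way to "take the best part of each picture" is to weight every pixel by how well exposed it is. The sketch below is an assumption about how such fusion can work, not the algorithm any particular camera uses; the function name, the mid-grey weighting, and the sample values are all illustrative:

```python
import numpy as np

def fuse_exposures(exposures, sigma=0.2):
    """Toy exposure fusion: weight each pixel by its 'well-exposedness'.

    Pixels near mid-grey (0.5) get high weight; crushed shadows and
    blown highlights get low weight, so each region of the result is
    dominated by whichever exposure captured it best.
    """
    stack = np.stack([np.asarray(e, dtype=float) for e in exposures])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0)        # normalise weights per pixel
    return (weights * stack).sum(axis=0)  # per-pixel convex combination

# Three grey-scale exposures of the same 2x2 scene, values in [0, 1]:
under = np.array([[0.02, 0.05], [0.45, 0.10]])  # shadows crushed
mid   = np.array([[0.10, 0.30], [0.95, 0.50]])  # some highlights clipping
over  = np.array([[0.40, 0.80], [1.00, 0.98]])  # highlights blown
hdr = fuse_exposures([under, mid, over])
# Each fused pixel leans toward whichever exposure rendered it closest
# to mid-grey, keeping detail in both shadows and highlights.
```

Because the fused value is a convex combination of the inputs, no pixel can be pushed outside the range its exposures actually recorded.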

Another technique, not unlike HDR imaging, is the fusion of images from different imaging sources and (usually) different technologies. Think of the way various night-vision imaging techniques combine to produce one clear image, or of hyperspectral imaging, which lets us examine features that are normally invisible to standard optical sensors.



