EE Times-Asia > Sensors/MEMS

What you need to know about sensor fusion

Posted: 12 Oct 2015

Keywords: sensor fusion, gyroscope, micro-electro-mechanical systems, MEMS, Kalman filter

Let's say we have several cameras, each sensitive to a different band of the infrared spectrum. We want to expose the same scene with those cameras and then combine the images to obtain the maximum detail, or features, in a final composite image. There are, of course, different fusion considerations that depend on the use case. Image fusion intended for machine processing is not the same as image fusion intended for human viewing; it may sound obvious, but it is worth remembering that human brains process images differently than machines do. Our discussion is directed towards machine-processing-oriented fusion, so our emphasis is on maximising the number of features present in the image rather than the perceived quality. To do that, we need to accomplish three basic tasks:

1. Align the different cameras to each other so the images can be fused. That can be done by taking precise mechanical measurements and calculating a four-by-four transformation matrix that can be applied to each of the source images to align and transform them as needed. We will want to calculate a similar matrix for each camera or sensor so they can all be aligned to the same point. These matrices can also take care of perspective correction and, given additional resources, can be obtained dynamically by performing image correlation.
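As an illustrative sketch of this first step (not taken from any particular product), a four-by-four homogeneous matrix can be applied to sensor coordinates as follows; the translation values and helper names here are invented for the example, and NumPy is assumed:

```python
import numpy as np

def make_translation(dx, dy, dz):
    """Build a 4x4 homogeneous matrix that shifts points by (dx, dy, dz).
    A real alignment matrix would also encode rotation and perspective."""
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

def transform_points(T, points):
    """Apply a 4x4 matrix to an (N, 3) array of points; returns (N, 3)."""
    n = points.shape[0]
    homogeneous = np.hstack([points, np.ones((n, 1))])  # append w = 1
    mapped = homogeneous @ T.T                          # (N, 4) result
    return mapped[:, :3] / mapped[:, 3:4]               # back to Cartesian

T = make_translation(5.0, -2.0, 0.0)
pts = np.array([[0.0, 0.0, 1.0], [10.0, 10.0, 1.0]])
print(transform_points(T, pts))  # each point shifted by (5, -2, 0)
```

The division by the homogeneous coordinate at the end is what lets the same machinery handle perspective correction when the matrix's bottom row is not simply (0, 0, 0, 1).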

2. Calculate the pixel weights that will determine the individual contribution of each source pixel to the combined target pixel. The weights are there to ensure the preservation of high-frequency signals from each of the images (e.g. lines, corners, etc.) and can be obtained by subtracting a smoothed copy of the image from the original, so that details are accentuated while low-frequency signals are suppressed. This can be achieved in several ways; two common methods are anisotropic diffusion and bilateral filtering, both of which smooth the image without distorting edges or blurring features (unlike, for example, simple Gaussian smoothing).
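A minimal sketch of the subtract-the-smoothed-image idea, assuming NumPy and using a plain box filter as a stand-in for the edge-preserving filters named above (a real pipeline would substitute bilateral filtering or anisotropic diffusion here):

```python
import numpy as np

def box_smooth(img, k=3):
    """Crude k-by-k box-filter smoothing via edge padding and summation.
    Stand-in for an edge-preserving filter; fine for illustration only."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def detail_weights(img, eps=1e-6):
    """Weight map: absolute difference between the image and its smoothed
    copy, so high-frequency detail (edges, corners) gets high weight."""
    detail = np.abs(img.astype(float) - box_smooth(img))
    return detail + eps  # eps keeps the later normalisation well-defined
```

Flat regions smooth to themselves and get near-zero weight; pixels near an edge differ sharply from their smoothed neighbourhood and get high weight.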

3. Combine the weighted source pixels into a target image. This way, the destination pixels will contain the maximal amount of features from all the source images.
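This combination step can be sketched as a per-pixel weighted average; the function below is an illustrative assumption about the procedure, not a production fusion implementation:

```python
import numpy as np

def fuse(images, weights):
    """Per-pixel weighted average of same-sized images: each target pixel
    takes most of its value from whichever source has the strongest
    weight (i.e. the most detail) at that location."""
    images = [img.astype(float) for img in images]
    w_sum = sum(weights)                       # element-wise sum of weight maps
    return sum(w * img for w, img in zip(weights, images)) / w_sum
```

In practice, the weight maps from step 2 would be fed straight into this function, one per aligned source image.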

We will end up with an image that combines the most "interesting" information from all the source images. In the case of infrared sensing, it might include objects that would have been missed had we used only one camera. On a foggy day, a normal camera will see close objects well (as it always does), but as it tries to see into the distance, it will only register fog, because visible light is diffused by water vapour. A thermal (infrared) camera, on the other hand, is not normally the best way to see things up close, because nearby objects may be overexposed or poorly detailed; but on that same foggy day, the thermal camera will see through the fog, providing an image with good detail of distant objects.

Together, these two source images give us a readable picture that would otherwise be impossible to get. Better still, the same approach can be applied frame by frame, making image fusion possible for motion video as well!

Overall, the idea of sensor fusion is foundational to many modern technologies, and the more advanced it becomes, the more interesting technologies can emerge in the consumer market. In consumer electronics, sensor fusion is getting a serious boost from machine learning, enabled by improvements in processor technology and the omnipresence of cloud services that allow deep learning to fuse data at higher levels of abstraction. We already see some of the results of these technologies and the effects they have: better cameras, smarter devices, and more areas of technological engagement with our daily lives.

Whenever we capture a pretty picture with a phone or see the screen rotating to react to a hand gesture, it should make us think about the different sensor fusion technologies that worked to help us so casually enjoy these things. In the words of Edgar Allan Poe, "It is by no means an irrational fancy that, in a future existence, we shall look upon what we think our present existence, as a dream." Given that these technologies were barely imaginable just a decade ago, think about all the possibilities that are yet to come when we are able to do even more with the data available to us using different, perhaps still unthought-of sensors.




