EE Times-Asia > Sensors/MEMS
What you need to know about sensor fusion

Posted: 12 Oct 2015

Keywords: sensor fusion, gyroscope, micro-electro-mechanical systems, MEMS, Kalman filter

Now, assuming that, for a stable horizontal position, the x and y axes would read zero acceleration and the z axis would read 9.8 m/s² downwards [Earth's gravitational acceleration, or (0, 0, -9.8) in vector form], we can calculate the angular deviation from that value whenever the accelerometer assumes a different orientation. If we were to use the magnetic compass to supplement our measurement system, we would read the angle at which our drone is oriented towards magnetic north and find the angular deviation from that initial orientation whenever the drone starts to drift.
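As a minimal sketch of the accelerometer part of this idea, the tilt angles can be recovered from a static gravity reading with basic trigonometry. The function name and readings below are illustrative, not from any particular flight stack:

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate pitch and roll (in radians) from a static accelerometer
    sample, assuming the only acceleration present is gravity."""
    # Pitch is the rotation about the y axis, roll about the x axis.
    pitch = math.atan2(ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, math.sqrt(ax * ax + az * az))
    return pitch, roll

# A level drone reads (0, 0, -9.8): zero pitch and zero roll.
print(tilt_from_accel(0.0, 0.0, -9.8))  # -> (0.0, 0.0)
```

Note that this only works while the drone is not accelerating; any manoeuvre adds a non-gravitational component to the reading, which is exactly why the gyro is needed as well.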

Together, these three inputs can be fused in several different ways: catching moments where the angular velocity measured by the gyro (the one reading that is unaffected by summation error) is approximately zero, solving orientation equations based on the accelerometer and compass readings, and finally resetting the nominal orientation of the drone to the freshly calculated angles (the new Φ0, if you will). Of course, this is all a very simplistic approach, but it is enough for our demonstration purposes. (A much more complicated and robust fusion system would be based on the famous Kalman filter, introducing all the sensor readings together with different weights and performing an optimisation based on the covariance matrix.)
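The reset-when-quiet scheme described above can be sketched in a few lines. Everything here (function name, threshold, sample values) is illustrative, assuming a single pitch axis for simplicity:

```python
def fuse_orientation(phi, gyro_rate, accel_phi, dt, rate_eps=0.01):
    """One step of the simple fusion described above: integrate the gyro
    rate, and whenever the measured angular velocity is approximately
    zero, reset the integrated angle to the accelerometer-derived angle
    (the new "phi_0"), discarding any accumulated drift."""
    if abs(gyro_rate) < rate_eps:
        return accel_phi          # trust the drift-free absolute reading
    return phi + gyro_rate * dt   # otherwise integrate angular velocity

phi = 0.0
# Drone rotates at 0.5 rad/s for two 0.1 s steps, then holds still.
phi = fuse_orientation(phi, 0.5, 0.02, 0.1)   # integrate: 0.05
phi = fuse_orientation(phi, 0.5, 0.05, 0.1)   # integrate: 0.10
phi = fuse_orientation(phi, 0.0, 0.098, 0.1)  # quiet: reset to accel angle
print(phi)  # -> 0.098
```

A Kalman filter would instead blend both sources at every step, weighted by their estimated noise, rather than switching between them.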

One last aspect I would like to address is the importance of precise time measurement for the fusion. Consider the different readings we are getting from our sensors: we take the acceleration and gyro samples (assuming simultaneous readings) and try to calculate pitch angles based on each one of them. In a real-world application, each and every one of them is sampled at a slightly different time, so we take an acceleration sample at time ti and a gyro sample at time tj. Then, we calculate the pitch angles Φi and Φj based on these readings. Now, comparing Φi and Φj would introduce an error directly proportional to the angular velocity and the time difference: ΔΦ = Φi − Φj ≈ ω(ti − tj).

Two solutions come to mind. One calls for precise synchronisation, such that the samples are truly acquired simultaneously; this is usually achievable via a hardware solution. The other suggests disconnecting the information from "the metal" by running an interpolation window on the measurements, so instead of the raw real-world value we have a continually adjusted mathematical model that represents the sensor data. Then, whenever the time comes to use the different sensor readings together, we can extrapolate each sensor's data to the exact same point in time. If the mathematical behaviour model of each sensor is precise enough, the level of time synchronisation we can achieve is very high.

To illustrate the interpolation technique with a very basic model:

- Sensor A has readings 0.4 and 0.5, acquired at times 1 and 2 seconds.
- Sensor B has readings 0.41 and 0.52, acquired at times 1.2 and 1.8 seconds.
If we were to compare the raw samples according to their order of appearance, we would have compared A at time t=2 with B at time t=1.8, getting the difference of 0.02.

Now, suppose we decide to use interpolation, and the chosen behavioural model for the sensors is the linear equation y = ax + b.

To find the value of the sensor B reading at time t=2 seconds, we would use its two samples to find the coefficients: 0.41 = 1.2a + b and 0.52 = 1.8a + b. Solving this gives us y = 0.183333333x + 0.19.

Now, plugging in the desired extrapolation time t=2, we get y=0.556666667, so comparing sensor A and sensor B at time t=2 gives us a more accurate difference of 0.056666667.
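The whole worked example above can be reproduced in a few lines; the helper name is illustrative:

```python
def linear_fit(t0, y0, t1, y1):
    """Fit the behavioural model y = a*t + b through two samples."""
    a = (y1 - y0) / (t1 - t0)
    b = y0 - a * t0
    return a, b

# Sensor B's two samples from the example above.
a, b = linear_fit(1.2, 0.41, 1.8, 0.52)
y_b_at_2 = a * 2 + b     # extrapolate sensor B to t = 2 s
diff = y_b_at_2 - 0.5    # compare with sensor A's reading at t = 2 s

print(round(a, 9), round(b, 2))  # -> 0.183333333 0.19
print(round(y_b_at_2, 9))        # -> 0.556666667
print(round(diff, 9))            # -> 0.056666667
```

With only two samples the fit is exact, so "interpolation" and "extrapolation" coincide with the straight line through the points; a real system would fit the model over a sliding window of many samples.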