
Sensor Fusion


Hello dear reader! Continuing our effort to demystify areas of knowledge related to embedded systems, today we talk about the concept of sensor fusion. This article is divided into two parts: here we discuss the concept behind sensor fusion and why you should consider using it; in the next article, we will introduce an application that uses sensor fusion, comparing the before and after so you can use it as a reference in your own projects. So, first of all, let's get into the problem.

An example of a problem where sensor fusion can help

Presenting a problem and relating it to a new concept always helps us understand why a new piece of knowledge deserves a place in the developer's "toolkit". Sensor fusion is no different, so let's set up the following scenario. Imagine, reader, that you have on the bench:

Hardware with a set of inertial sensors (accelerometer, gyroscope and magnetometer);
This hardware acquires data from these sensors, pre-processes it and sends it to a host responsible for performing an action based on the results;
The host needs the data to arrive in known, properly scaled units (e.g., speed in m/s);
The hardware contains a processor connected to these sensors to perform this pre-processing.

Now that we have our device acquiring data and sending it periodically to a host application, everything seems to work. It reads the accelerometer, determines the velocity by discretely integrating the acceleration samples, and obtains the current position by integrating the velocity just computed. In an ideal world, if we performed a change of position, the accelerometer data alone would be enough to determine it.
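As a rough sketch of what the firmware on our hypothetical device might do (the names and the fixed sample period are assumptions for illustration, not taken from any specific part), the naive dead-reckoning loop looks like this:

```c
#include <stdint.h>

/* Hypothetical dead-reckoning state: a sketch, not code for a
 * specific sensor or board. */
typedef struct {
    float velocity;  /* m/s */
    float position;  /* m   */
} dead_reckoning_t;

/* Integrate one accelerometer sample (already converted to m/s^2)
 * over a fixed sample period dt, in seconds. */
static void dead_reckoning_update(dead_reckoning_t *state,
                                  float accel_mps2, float dt)
{
    /* v[k] = v[k-1] + a[k] * dt  (rectangular integration) */
    state->velocity += accel_mps2 * dt;
    /* p[k] = p[k-1] + v[k] * dt */
    state->position += state->velocity * dt;
}
```

Note that any constant error in accel_mps2 is integrated twice, so even a tiny bias grows quadratically in the position estimate; this is exactly the drift discussed next.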

However, in the real world, when we read the calculated position it does not match what the accelerometer motion suggests, and worse: if we return the device to its initial position, we see that the measurement does not correspond to reality; it keeps growing and growing as we move the device. Why does this happen? To begin with, let's look at the figure from [3] and imagine that our hardware is built around the well-known Bosch BMI160 inertial sensor.

Figure 1 – Bosch BMI160 Inertial Sensor Parameters

The sensitivity parameter relates the scale of the signal to bits per g, where g is the acceleration of gravity. So if I leave the device at rest, it reports a value in bits that corresponds to that acceleration and does not change until some external force acts on it, correct?

Wrong! Notice that, besides the sensitivity parameter, several other parameters cause variations in the value reported at the sensor output, changing the final value read even when the device is at rest. The problem is that this value is taken as valid data in the calculations, and after scale factors and multiplications it produces several measurement errors. This kind of error is what we call measurement uncertainty.
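Just to make the effect concrete, here is a minimal sketch, assuming the ±2 g range of the BMI160 (roughly 16384 LSB/g per the datasheet) and a made-up residual offset, of how a raw count turns into an acceleration and how any uncorrected offset rides along into every later integration:

```c
#include <stdint.h>

#define LSB_PER_G   16384.0f   /* assumed BMI160 sensitivity at +/-2 g range */
#define GRAVITY     9.80665f   /* m/s^2 */

/* Convert a raw accelerometer count to m/s^2.
 * zero_g_offset_lsb is the residual zero-g offset of this particular
 * part; any portion we fail to remove is treated as real acceleration
 * and ends up integrated into velocity and position. */
static float accel_counts_to_mps2(int16_t raw, float zero_g_offset_lsb)
{
    return ((float)raw - zero_g_offset_lsb) / LSB_PER_G * GRAVITY;
}
```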

That is, however good the sensor may be, it always has one or more sources of measurement error. We call these combined errors the process error, and the extent to which this error can influence the real measurement we call the degree of uncertainty of the measurement. Adding a little mathematics (nothing heavy, only for formal purposes), we can relate the degree of certainty of a measurement to the expected value and the value reported by a sensor (or acquisition infrastructure) as:

Eq. 1 – Uncertainty degree

The small function p is a probability density function, comically abbreviated in Portuguese as FDP. Here, in particular, it tells us the probability that a given expected value xt corresponds to the range of values zt reported by the sensor; that is, the higher this probability, the lower the degree of uncertainty of the measurement reported by the sensing infrastructure.
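Since the original equation image may not survive here, a plausible form of Eq. 1, assuming the usual conditional-density notation described above, would be:

```latex
% Plausible reconstruction of Eq. 1 (assumed notation): the degree of
% certainty of the measurement is the conditional probability density
% of the expected value x_t given the value z_t reported by the sensor.
p\left(x_t \mid z_t\right)
```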

In the case of reading a sensor, this probability tends to be large, but not to the point of ensuring that the measurement alone is reliable. This causes our hypothetical device to present some problems during its operation:

Information obtained from the sensor is unreliable and has an error even at steady state;
Information estimated from the sensor data is not reliable and carries a cumulative error;
Both the sensor and the estimated information vary at steady state, causing drift (progressive offset) problems in other derived information.
 

Graphically, what happens even with a stationary sensor can be observed in the graph below, extracted from the MATLAB simulation described in [1]. See how much the measured value drifts as time goes by:

Figure 2 – Simulated position calculation for a stationary sensor

Notice that this article takes inertial sensors as its use case, but in practice every sensor has the same problem, at a magnitude relative to its own measurement. To minimise or eliminate these errors and increase confidence in the measurement obtained, enter the concept of sensor fusion, which we see below.

Fusing sensors? Literally?

Sounds strange, doesn't it? The concept of sensor fusion assumes that each sensor has its own advantages and disadvantages, and that sensors with different scale factors can present the same measurement, each with its particular sources of error. Sensor fusion then literally takes the data from more than one type of sensor, applies a model (a set of scale factors, estimates of the next states and correction methods applied at runtime) and delivers at its output:

Information measured directly from the sensors, already corrected and cleaned;
Information calculated from those corrected measurements, with greater accuracy.

So it is the process of taking the best of each sensor's world and combining them into a single set of measurements, making the FDP shown in Equation 1 report a higher degree of reliability.

By analogy with the sensor fusion criterion, we can illustrate this process with the following case: suppose we want the degree of reliability of the statement that a person P lives in the state of São Paulo E to be as high as possible, thus:


Eq. 1.1 – Uncertainty degree

As an initial source of data, we can take the hypothesis that the person lives in São Paulo. Based on their nationality, if Brazilian, there is some probability of this being true, but Brazilians live all over the world, so nationality and place of birth alone do not guarantee a high degree of reliability. Now, if we get their current mobile number, we know the owner of that number probably lives in Brazil, so the probability of being from São Paulo increases, as does the degree of reliability; but we also know that a mobile phone with area code 11 works in every Brazilian state, each of which has its own area code, so we still cannot guarantee that this person lives in São Paulo.

Let's get the address from a document, say a water bill: surely that person lives, or should live, in São Paulo, right? However, the house may be rented, so our degree of reliability gets close but still does not give absolute certainty. So let's add more documents: a credit card statement, an electricity bill and vehicle fines. Notice that we now have three channels of similar measurements (like our accelerometer + gyroscope) providing the same information, and the person pays for water, credit card and vehicle fines (which must be registered to the city of residence).

In this scenario, the possibility of person P residing in the state of São Paulo is close to absolute certainty, and only in particular cases (corner cases) is the information false; even then, the distance from the expected information is small. In this example, P may live in São Paulo but be travelling or out of the state for work.

The concept of sensor fusion is to obtain as much information as possible from the environment in which the object resides and combine it so that the certainty of the measurement tends toward the truth. To do this, several sensor channels, commonly of different natures, are combined through the most diverse mathematical models, and their outputs correspond to corrected measurements with a high degree of assertiveness, safe for use by the data acquisition and processing application.

Awesome! How does sensor fusion work?

In an embedded systems environment, sensor fusion works as a module implemented according to the application, but it has a common core: correcting and estimating a measurement based on knowledge of the specific mathematical model of its behaviour. For example, specific sources of accelerometer error and specific sources of gyroscope error are corrected first. Then an (application-specific) merging operation combines the results into what can literally be called "the best of each world".

If, in both cases, the quantity whose confidence we want to raise is the tilt angle, estimated by one source and measured by an extra source (sensor), we can calculate it from both sensors, apply the specific corrections of each source, and then merge the measurements. In the case exemplified, a simple, properly scaled sum merges the corrected results. The process described here depicts one of the simpler forms of sensor fusion, known as complementary filtering. See the figure below, which illustrates the process:

Figure 3 – Sensor fusion by complementary filtering

The example in the figure above solves a common problem: obtaining the angles of the local coordinate system practically error-free. The current angle, together with the angular velocity, can then feed a rotation matrix into a global coordinate system (referenced to the Earth) and create a stable spatial orientation system. Without this mechanism, the errors generated by integrating the gyroscope data and by the inverse-tangent tilt calculation would enter the subsequent calculations, causing dangerous errors for the navigation system that consumes that data.
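To make the idea tangible, here is a minimal sketch of a complementary filter for tilt estimation. It assumes an accelerometer tilt angle computed with atan2 and a gyroscope rate already corrected for bias; the blend factor of 0.98 is a typical illustrative choice, not a value taken from the figure:

```c
#include <math.h>

/* Complementary filter sketch: the gyroscope dominates at high
 * frequencies (smooth but drifts), the accelerometer dominates at
 * low frequencies (noisy but drift-free). */
#define ALPHA 0.98f   /* illustrative blend factor */

static float fused_angle;  /* degrees */

void complementary_update(float gyro_rate_dps,    /* deg/s, bias-corrected */
                          float accel_x, float accel_z,
                          float dt)                /* sample period, s */
{
    /* Tilt angle measured directly from the gravity components. */
    float accel_angle = atan2f(accel_x, accel_z) * 180.0f / 3.1415927f;

    /* Tilt angle estimated by integrating the angular rate. */
    float gyro_angle = fused_angle + gyro_rate_dps * dt;

    /* Properly scaled sum of the two corrected sources (the fusion). */
    fused_angle = ALPHA * gyro_angle + (1.0f - ALPHA) * accel_angle;
}
```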

This simple fusion system illustrates well what we want to explain, but in more complex navigation structures (and with higher processing capacity) the complementary architecture ends up being limited when the mathematical model of the sensor being acquired is poorly known. In that case, rather than just correcting the measurements (or states) of the system, the model must adapt constantly until all sources of error in the specific channel are eliminated. Then nothing beats a…

… Kalman filter, an adaptive filter for sensor fusion
 

Explaining all the mathematics and the derivation of this nice filter is outside (at least for now) the scope of this article; however, this type of filter has to be mentioned, since it is present in most sensor fusion architectures. The use of a Kalman filter changes the sensor fusion framework we presented, see:

Figure 4 – Kalman filter sensor fusion

You must be wondering where the sum block shown in the first example of sensor fusion went. It is implicit in the Kalman filter itself. In a nutshell, the Kalman filter will:

Obtain an estimate of the next state of the system (understand a state as one of the variables of interest);
Obtain the same state through the sensor measurement;
Using the predicted and the measured state, apply the so-called Kalman gain (a matrix continuously updated based on the knowledge of the model that the filter acquires) to these variables;
Present the corrected state at the system output;
Feed the corrected state back into the predictor, updating the next state estimate as well as the Kalman gain;
Obtain a new sample from the sensor and start the cycle again.
 

Yes, in educational materials there is so much mathematics involved that we forget to understand how this tool actually works. Realise that the fusion occurs when we apply the so-called correction step to the estimated state and the measured state. In this way, we obtain a corrected state whose FDP with respect to the real state of the system has a low, and slowly varying, degree of uncertainty. This fusion architecture is mostly employed in navigation systems, since the Kalman filter is naturally prepared to deal with multiple states.
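As a minimal illustration of that predict/correct cycle, here is a sketch of a scalar (one-state) Kalman filter in C. The noise values q and r are illustrative placeholders that would normally come from characterising the sensor, not values taken from the references:

```c
/* One-dimensional Kalman filter sketch: a single state x with
 * variance p, process noise q and measurement noise r. */
typedef struct {
    float x;  /* state estimate             */
    float p;  /* estimate variance          */
    float q;  /* process noise variance     */
    float r;  /* measurement noise variance */
} kalman1d_t;

float kalman1d_update(kalman1d_t *kf, float z /* sensor measurement */)
{
    /* 1. Prediction: project the state and its uncertainty ahead.
     *    Here the model is "the state stays the same", so only the
     *    uncertainty grows. */
    kf->p += kf->q;

    /* 2. Correction: compute the Kalman gain and blend prediction
     *    and measurement in proportion to their confidence. */
    float k = kf->p / (kf->p + kf->r);  /* Kalman gain */
    kf->x += k * (z - kf->x);           /* corrected state */
    kf->p *= (1.0f - k);                /* corrected uncertainty */

    return kf->x;
}
```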

Let’s take a look at what the Kalman filter does graphically:

Figure 5 – Kalman filter data flow

Consider the vectors xk and Pk as the model to be estimated. The suffix k denotes that the system is discretised (samples separated by equal, known time intervals); thus xk-1 and Pk-1 denote the last corrected state of the system. These variables are fed back to one of the filter inputs and go through the first execution step, the prediction, which estimates from previous states (Bayes' rule) what a possible future state may be.

These intermediate values then feed the second block, which combines the current estimated state with the variable zk, carrying the state given by the sensor measurement. Together with the other variables we obtain the new corrected state at the output, along with the currently corrected model Pk. That is, the Kalman filter looks ahead for knowledge of the future state and memorises a portion of the previous state to help calculate the current one; fantastic, no? Its implementation complexity varies with the model and the number of variables involved, but there are several open implementations, and for simple fusions one can resort to the complementary filtering presented previously.
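For reference, the standard discrete Kalman filter equations that this data flow represents are sketched below, in the usual textbook notation (F as the state-transition model, H as the measurement model, Q and R as the process and measurement noise covariances; these symbols are the conventional ones, not taken from the figure):

```latex
% Prediction step
\hat{x}_{k|k-1} = F_k\,\hat{x}_{k-1|k-1}, \qquad
P_{k|k-1} = F_k P_{k-1|k-1} F_k^{T} + Q_k

% Correction (fusion) step
K_k = P_{k|k-1} H_k^{T}\left(H_k P_{k|k-1} H_k^{T} + R_k\right)^{-1}
\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - H_k\,\hat{x}_{k|k-1}\right)
P_{k|k} = \left(I - K_k H_k\right) P_{k|k-1}
```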

Are there other architectures?
 

Yes, there are. The sensor fusion we detail here belongs to the class called complementary fusion (already implicit in our first example); that is, its function is to obtain a complete view of a particular state by combining measurements from sensors that are not directly related but can provide the same type of data (gyroscope and accelerometer). Beyond it, we can quickly cite two other ways to merge sensor data, depending on the project requirements:

Competitive: this type of fusion is applied when the requirement is robustness and precision, typical of life-support systems. In this case we have redundant sensors of the same type (same kind of data), each followed by an uncertainty block. The merge occurs through a weighted sum of the corrected measurements, where the greatest weight always goes to the sensor with the lowest degree of uncertainty at that instant of time (see the sketch right after this list);
Cooperative: the coolest thing about this type of fusion is that the desired system state has no direct relationship with what is being measured; that is, it needs a network of sensors, and only with the readings of several sensors combined is it possible to obtain the relevant data. Cooperative fusion treats the measurements and error removal of each sensor individually, and only then evaluates which "piece" of the data of interest a given measurement corresponds to.
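As a rough sketch of the competitive case (an illustration under my own assumptions, not a method taken from the references), an inverse-variance weighted average of redundant sensors could look like this:

```c
/* Competitive fusion sketch: n redundant sensors measuring the same
 * quantity, each with its own current uncertainty (variance).
 * Each reading is weighted by the inverse of its variance, so the
 * least uncertain sensor dominates at each instant. */
float competitive_fuse(const float *readings, const float *variances, int n)
{
    float weighted_sum = 0.0f;
    float weight_total = 0.0f;

    for (int i = 0; i < n; i++) {
        float w = 1.0f / variances[i];  /* lower uncertainty -> larger weight */
        weighted_sum += w * readings[i];
        weight_total += w;
    }
    return weighted_sum / weight_total;
}
```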
 
Thus, the topics presented here can be visualised graphically in the figure below:

Figure 6 – Sensor fusion methods


Conclusion

The purpose of this article was to give the reader a simplified view of sensor fusion and why it is so important. I believe that with this material the reader will gather the courage to explore more academic texts containing the analytical treatment of a specific form of data fusion.

However, the goal here is also to bring practice, the part no one shows. In the next article we will build our first sensor fusion application using an IMU (Inertial Measurement Unit), exploring the complementary filtering technique, which will prepare the ground for a third article fusing sensors with the Kalman filter approach. So stay tuned, reader! Leave a comment below on what you would like to see related to sensor fusion, let's discuss, and see you next time.

References

[1] – NXP Sensor Fusion Guide

[2] – Sensor Data Fusion using Kalman Filters – Antonio Moran

[3] – Bosch BMI160 Inertial Measurement Unit datasheet

[4] – Sensor Fusion for Automotive Applications – Christian Lundquist

