UnscentedKalman¶
- class UnscentedKalman(self, Q: visp._visp.core.Matrix, R: visp._visp.core.Matrix, drawer: visp._visp.core.UKSigmaDrawerAbstract, f: Callable[[visp._visp.core.ColVector, float], visp._visp.core.ColVector], h: Callable[[visp._visp.core.ColVector], visp._visp.core.ColVector])¶
Bases:
pybind11_object
This class allows using an Unscented Kalman Filter (UKF) to tackle non-linear problems. Non-linearity can arise in the process function \(f: {R}^n \rightarrow {R}^n\) , which makes the internal state \(\textbf{x} \in {R}^n\) of the UKF evolve over time, or in the measurement function \(h: {R}^n \rightarrow {R}^m\) , which expresses the internal state of the UKF in the measurement space of dimension \(m\) .
We briefly explain the principles of the UKF and the maths behind it. We refer interested readers to the web-book by R. Labbe, chapter 10, for more details.
The UKF is performed in two steps. First, the prediction step, during which we draw the sigma points \(\chi\) and compute their corresponding weights \(\textbf{w}^m \in {R}^{2n + 1}\) and \(\textbf{w}^c \in {R}^{2n + 1}\) . Let \(\textbf{x} \in {R}^n\) be the internal state of the UKF and \(\textbf{P} \in {R}^{n\text{ x }n}\) the process covariance matrix. We have:
\[\begin{split}\begin{array}{lcl} \chi &=& sigma-function(\textbf{x}, \textbf{P}) \\\textbf{w}^m, \textbf{w}^c &=& weight-function(n, parameters) \end{array} \end{split}\]
There are different ways of drawing the sigma points and their associated weights in the literature, such as the one proposed by Julier or the one proposed by E. A. Wan and R. van der Merwe.
Let \(\textbf{u}\) be the vector containing the known commands sent to the system, if any. Then, we pass each sigma point through the process function \(f(\chi, \Delta t)\) , the command function \(b( \textbf{u}, \Delta t )\) and the state-dependent command function \(bx( \textbf{u}, \chi, \Delta t )\) to project them forward in time, forming the new prior:
\({Y} = f( \chi , \Delta t ) + b( \textbf{u}, \Delta t ) + bx( \textbf{u}, \chi, \Delta t )\)
Then, we apply the Unscented Transform to compute the mean \(\boldsymbol{\mu}\) and covariance \(\overline{\textbf{P}}\) of the prior:
\[\begin{split}\begin{array}{lcl} \boldsymbol{\mu}, \overline{\textbf{P}} &=& UT({Y}, \textbf{w}^m, \textbf{w}^c, \textbf{Q}) \\\boldsymbol{\mu} &=& \sum_{i=0}^{2n} w_i^m {Y}_i \\\overline{\textbf{P}} &=& \sum_{i=0}^{2n} ( w_i^c ({Y}_i - \boldsymbol{\mu}) ({Y}_i - \boldsymbol{\mu})^T ) + \textbf{Q} \end{array} \end{split}\]
where \(\textbf{Q}\) is the covariance of the error introduced by the process function.
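For intuition, the Unscented Transform above can be written in a few lines of NumPy. This is an illustrative sketch only; it is not part of the vpUnscentedKalman API and the variable names are hypothetical:

import numpy as np

def unscented_transform(Y, wm, wc, Q):
    # Y: (2n+1, n) array whose rows are the sigma points passed through the process function
    # wm, wc: (2n+1,) arrays of mean and covariance weights
    # Q: (n, n) covariance of the process noise
    mu = np.sum(wm[:, None] * Y, axis=0)              # weighted mean of the sigma points
    diff = Y - mu                                     # residuals Y_i - mu
    P = diff.T @ (wc[:, None] * diff) + Q             # weighted covariance plus process noise
    return mu, P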
The second step is the update step. It is performed in the measurement space, so we must convert the sigma points of the prior into measurements using the measurement function \(h: {R}^n \rightarrow {R}^m\) :
\({Z} = h({Y})\)
Then, we once again use the Unscented Transform to compute the mean \(\boldsymbol{\mu}_z \in {R}^m\) and the covariance \(\textbf{P}_z \in {R}^{m\text{ x }m}\) of these points:
\[\begin{split}\begin{array}{lcl} \boldsymbol{\mu}_z, \textbf{P}_z &=& UT({Z}, \textbf{w}^m, \textbf{w}^c, \textbf{R}) \\\boldsymbol{\mu}_z &=& \sum_{i=0}^{2n} w_i^m {Z}_i \\\textbf{P}_z &=& \sum_{i=0}^{2n} ( w_i^c ({Z}_i - \boldsymbol{\mu}_z) ({Z}_i - \boldsymbol{\mu}_z)^T ) + \textbf{R} \end{array} \end{split}\]
where \(\textbf{R}\) is the measurement covariance matrix.
Then, we compute the residual \(\textbf{y}\) of the measurement \(\textbf{z}\) :
\(\textbf{y} = \textbf{z} - \boldsymbol{\mu}_z\)
To compute the Kalman gain, we first need to compute the cross covariance of the state and the measurements:
\(\textbf{P}_{xz} = \sum_{i=0}^{2n} w_i^c ({Y}_i - \boldsymbol{\mu})({Z}_i - \boldsymbol{\mu}_z)^T\)
The Kalman gain is then defined as:
\(\textbf{K} = \textbf{P}_{xz} \textbf{P}_z^{-1}\)
Finally, we can compute the new state estimate \(\textbf{x}\) and the new covariance \(\textbf{P}\) :
\[\begin{split}\begin{array}{lcl} \textbf{x} &=& \boldsymbol{\mu} + \textbf{K} \textbf{y} \\\textbf{P} &=& \overline{\textbf{P}} - \textbf{K} \textbf{P}_z \textbf{K}^T \end{array} \end{split}\]
Methods
Perform first the prediction step and then the filtering step.
Get the estimated (i.e. filtered) covariance of the state.
Get the predicted covariance of the state, i.e. the covariance of the prior.
Get the estimated (i.e. filtered) state.
Get the predicted state (i.e. the prior).
Set the guess of the initial state and covariance.
Predict the new state based on the last state and how far in time we want to predict.
Permits changing the measurement covariance introduced at each update step.
Permits changing the process covariance introduced at each prediction step.
Simple function to compute an addition, which just does \(\textbf{res} = \textbf{a} + \textbf{toAdd}\) .
Simple function to compute a mean, which just does \(\boldsymbol{\mu} = \sum_{i} wm_i \textbf{vals}_i\) .
Simple function to compute a residual, which just does \(\textbf{res} = \textbf{a} - \textbf{toSubtract}\) .
Update the estimate of the state based on a new measurement.
- __init__(self, Q: visp._visp.core.Matrix, R: visp._visp.core.Matrix, drawer: visp._visp.core.UKSigmaDrawerAbstract, f: Callable[[visp._visp.core.ColVector, float], visp._visp.core.ColVector], h: Callable[[visp._visp.core.ColVector], visp._visp.core.ColVector])¶
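As an illustration, a minimal construction for a constant-velocity state observed through its position only could look as follows. This is a sketch, not a verbatim API example: it assumes the bindings expose a concrete sigma-point drawer such as UKSigmaDrawerMerwe (mirroring the C++ vpUKSigmaDrawerMerwe) and that Matrix and ColVector can be built from and converted to NumPy arrays; check your installed version.

import numpy as np
from visp.core import ColVector, Matrix, UnscentedKalman, UKSigmaDrawerMerwe

def f(chi: ColVector, dt: float) -> ColVector:
    # Process function: constant-velocity propagation of the state [position, velocity]
    x = np.asarray(chi)                               # assumes ColVector exposes NumPy conversion
    return ColVector(np.array([x[0] + dt * x[1], x[1]]))

def h(chi: ColVector) -> ColVector:
    # Measurement function: only the position is observed
    x = np.asarray(chi)
    return ColVector(np.array([x[0]]))

Q = Matrix(np.diag([1e-4, 1e-4]))                     # process noise covariance (2 x 2)
R = Matrix(np.diag([1e-2]))                           # measurement noise covariance (1 x 1)
drawer = UKSigmaDrawerMerwe(2, 0.001, 2.0, 0.0)       # hypothetical (n, alpha, beta, kappa) signature
ukf = UnscentedKalman(Q, R, drawer, f, h)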
- filter(self, z: visp._visp.core.ColVector, dt: float, u: visp._visp.core.ColVector = vpColVector()) → None ¶
Perform first the prediction step and then the filtering step.
Warning
To use the commands, the method vpUnscentedKalman::setCommandOnlyFunction or vpUnscentedKalman::setCommandStateFunction must be called beforehand.
- Parameters:
- z: visp._visp.core.ColVector¶
The new measurement.
- dt: float¶
How far in the future we must predict the new state.
- u: visp._visp.core.ColVector = vpColVector()¶
The command(s) given to the system, if their impact on the system is known.
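Continuing the construction sketch shown with __init__ above, a typical filtering loop could look as follows. The iterable measurements is hypothetical and stands for scalar position measurements coming from a sensor:

# Initial guess of the state and of its covariance
ukf.init(ColVector(np.array([0.0, 0.0])), Matrix(np.eye(2)))

dt = 0.1                                              # time step between two measurements
for z_k in measurements:                              # measurements: hypothetical iterable of floats
    ukf.filter(ColVector(np.array([z_k])), dt)        # prediction step followed by update step
    x_est = ukf.getXest()                             # filtered state [position, velocity]
    P_est = ukf.getPest()                             # filtered covariance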
- getPest(self) → visp._visp.core.Matrix ¶
Get the estimated (i.e. filtered) covariance of the state.
- Returns:
vpMatrix The filtered covariance matrix.
- getPpred(self) → visp._visp.core.Matrix ¶
Get the predicted covariance of the state, i.e. the covariance of the prior.
- Returns:
vpMatrix The predicted covariance matrix.
- getXest(self) → visp._visp.core.ColVector ¶
Get the estimated (i.e. filtered) state.
- Returns:
vpColVector The estimated state.
- getXpred(self) → visp._visp.core.ColVector ¶
Get the predicted state (i.e. the prior).
- Returns:
vpColVector The predicted state.
- init(self, mu0: visp._visp.core.ColVector, P0: visp._visp.core.Matrix) → None ¶
Set the guess of the initial state and covariance.
- Parameters:
- mu0: visp._visp.core.ColVector¶
Guess of the initial state.
- P0: visp._visp.core.Matrix¶
Guess of the initial covariance.
- predict(self, dt: float, u: visp._visp.core.ColVector = vpColVector()) → None ¶
Predict the new state based on the last state and how far in time we want to predict.
Warning
To use the commands, the method vpUnscentedKalman::setCommandOnlyFunction or vpUnscentedKalman::setCommandStateFunction must be called beforehand.
- Parameters:
- dt: float¶
How far in the future we must predict the new state.
- u: visp._visp.core.ColVector = vpColVector()¶
The command(s) given to the system, if their impact on the system is known.
- setCommandOnlyFunction(self, b: Callable[[visp._visp.core.ColVector, float], visp._visp.core.ColVector]) → None ¶
Set the command function \(b( \textbf{u}, \Delta t )\) that computes the effect of the commands on the prior, independently of the state.
- setCommandStateFunction(self, bx: Callable[[visp._visp.core.ColVector, visp._visp.core.ColVector, float], visp._visp.core.ColVector]) → None ¶
Set the command function \(bx( \textbf{u}, \chi, \Delta t )\) that computes the effect of the commands on the prior, depending on the state.
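Both setters expect plain Python callables matching the signatures above. As a hedged illustration continuing the construction sketch shown with __init__, a command carrying an acceleration u[0] applied to a [position, velocity] state could be registered like this (the model and names are hypothetical):

def b(u: ColVector, dt: float) -> ColVector:
    # Effect of the command on the prior, independent of the state: an acceleration u[0]
    # shifts the position by 0.5 * a * dt^2 and the velocity by a * dt
    a = np.asarray(u)[0]
    return ColVector(np.array([0.5 * a * dt * dt, a * dt]))

def bx(u: ColVector, chi: ColVector, dt: float) -> ColVector:
    # State-dependent effect of the command; none in this simple model
    return ColVector(np.zeros(2))

ukf.setCommandOnlyFunction(b)
ukf.setCommandStateFunction(bx)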
- setMeasurementCovariance(self, R: visp._visp.core.Matrix) → None ¶
Permits changing the measurement covariance introduced at each update step.
- Parameters:
- R: visp._visp.core.Matrix¶
The measurement covariance matrix.
- setMeasurementMeanFunction(self, meanFunc: Callable[[list[visp._visp.core.ColVector], list[float]], visp._visp.core.ColVector]) → None ¶
Set the function that computes the weighted mean of the sigma points in the measurement space, to use instead of simpleMean .
- setMeasurementResidualFunction(self, measResFunc: Callable[[visp._visp.core.ColVector, visp._visp.core.ColVector], visp._visp.core.ColVector]) → None ¶
Set the function that computes a residual in the measurement space, to use instead of simpleResidual .
- setProcessCovariance(self, Q: visp._visp.core.Matrix) → None ¶
Permits changing the process covariance introduced at each prediction step.
- Parameters:
- Q: visp._visp.core.Matrix¶
The process covariance matrix.
- setStateAddFunction(self, stateAddFunc: Callable[[visp._visp.core.ColVector, visp._visp.core.ColVector], visp._visp.core.ColVector]) → None ¶
Set the function that performs an addition in the state space, to use instead of simpleAdd .
- setStateMeanFunction(self, meanFunc: Callable[[list[visp._visp.core.ColVector], list[float]], visp._visp.core.ColVector]) → None ¶
Set the function that computes the weighted mean of the sigma points in the state space, to use instead of simpleMean .
- setStateResidualFunction(self, stateResFunc: Callable[[visp._visp.core.ColVector, visp._visp.core.ColVector], visp._visp.core.ColVector]) → None ¶
Set the function that computes a residual in the state space, to use instead of simpleResidual .
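The simpleAdd , simpleMean and simpleResidual helpers below treat vectors as plain Euclidean quantities. When a state component does not live in a Euclidean space, e.g. a heading angle that must be averaged on the unit circle and whose residual must be wrapped to \([-\pi, \pi]\) , custom functions can be supplied through the setters above. The sketch below is hypothetical and assumes a filter whose state is [x, y, heading]; the measurement-space counterparts setMeasurementMeanFunction and setMeasurementResidualFunction work the same way:

import numpy as np
from visp.core import ColVector

def state_mean(sigma_points: list, wm: list) -> ColVector:
    # Weighted mean where the third component (a heading angle) is averaged on the unit circle
    pts = np.array([np.asarray(p) for p in sigma_points])
    w = np.asarray(wm)
    mean = np.sum(w[:, None] * pts, axis=0)
    mean[2] = np.arctan2(np.sum(w * np.sin(pts[:, 2])), np.sum(w * np.cos(pts[:, 2])))
    return ColVector(mean)

def state_residual(a: ColVector, b: ColVector) -> ColVector:
    # Subtraction where the angular component is wrapped to [-pi, pi]
    res = np.asarray(a) - np.asarray(b)
    res[2] = (res[2] + np.pi) % (2.0 * np.pi) - np.pi
    return ColVector(res)

ukf.setStateMeanFunction(state_mean)          # ukf assumed to track a [x, y, heading] state here
ukf.setStateResidualFunction(state_residual)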
- static simpleAdd(a: visp._visp.core.ColVector, toAdd: visp._visp.core.ColVector) → visp._visp.core.ColVector ¶
Simple function to compute an addition, which just does \(\textbf{res} = \textbf{a} + \textbf{toAdd}\) .
- Parameters:
- a: visp._visp.core.ColVector¶
Vector to which we must add something.
- toAdd: visp._visp.core.ColVector¶
What we must add to a .
- Returns:
vpColVector \(\textbf{res} = \textbf{a} + \textbf{toAdd}\)
- static simpleMean(vals: list[visp._visp.core.ColVector], wm: list[float]) → visp._visp.core.ColVector ¶
Simple function to compute a mean, which just does \(\boldsymbol{\mu} = \sum_{i} wm_i \textbf{vals}_i\) .
- static simpleResidual(a: visp._visp.core.ColVector, toSubtract: visp._visp.core.ColVector) → visp._visp.core.ColVector ¶
Simple function to compute a residual, which just does \(\textbf{res} = \textbf{a} - \textbf{toSubtract}\) .
- Parameters:
- a: visp._visp.core.ColVector¶
Vector from which we must subtract something.
- toSubtract: visp._visp.core.ColVector¶
What we must subtract from a .
- Returns:
vpColVector \(\textbf{res} = \textbf{a} - \textbf{toSubtract}\)
- update(self, z: visp._visp.core.ColVector) → None ¶
Update the estimate of the state based on a new measurement.
- Parameters:
- z: visp._visp.core.ColVector¶
The measurements at the current timestep.
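The prediction and the update can also be performed explicitly instead of through filter() , which gives access to the prior through getXpred() before the correction. Continuing the earlier sketches (the measurement value below is hypothetical):

# One explicit prediction / update cycle, equivalent to a single call to filter(z, dt)
ukf.predict(dt)                                       # compute the prior for the next time step
x_pred = ukf.getXpred()                               # predicted state, available before the update
z = ColVector(np.array([0.42]))                       # hypothetical 1D position measurement
ukf.update(z)                                         # correct the prior with the measurement
x_est = ukf.getXest()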