
Update Equations

Feature Lifecycle

Feature paths:

  1. Lost early → MSCKF (nullspace)

  2. At marg time, short track → MSCKF (nullspace)

  3. At marg time, max track, SLAM full → MSCKF (nullspace)

  4. At marg time, max track, SLAM available → SLAM (added to state)

  5. Already SLAM → SLAM (update existing)

                      Feature at time t
                             │
                ┌────────────┴────────────┐
                │                         │
          Lost tracking?           Still tracked?
                │                         │
                ▼                         ▼
     ┌──────────────────┐      ┌──────────────────┐
     │  feats_lost      │      │ At oldest clone? │
     │  → MSCKF         │      └──────────────────┘
     └──────────────────┘               │
                                ┌───────┴───────┐
                               YES              NO
                                │               │
                                ▼               ▼
                     ┌──────────────────┐  Continue tracking
                     │  feats_marg      │  (not used yet)
                     └──────────────────┘
                                │
                     Track length > max_clone_size?
                                │
                     ┌──────────┴──────────┐
                    YES                    NO
                     │                     │
                     ▼                     ▼
          ┌──────────────────┐   ┌──────────────────┐
          │ feats_maxtracks  │   │  feats_marg      │
          └──────────────────┘   │  → MSCKF         │
                     │           └──────────────────┘
          SLAM budget available?
                     │
          ┌──────────┴──────────┐
         YES                    NO
          │                     │
          ▼                     ▼
┌──────────────────┐  ┌──────────────────┐
│  → SLAM          │  │  → MSCKF         │
│  (delayed_init)  │  │  (nullspace)     │
└──────────────────┘  └──────────────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│                          FEATURE TRACKING PHASE                             │
│                     (Features tracked across frames)                        │
└─────────────────────────────────────────────────────────────────────────────┘
                                      │
                                      ▼
                    ┌─────────────────────────────────┐
                    │   New Image at time t arrives   │
                    └─────────────────────────────────┘
                                      │
                                      ▼
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃                         FEATURE CATEGORIZATION                            ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
                                      │
                ┌─────────────────────┼─────────────────────┐
                ▼                     ▼                     ▼
    ┌───────────────────┐ ┌───────────────────┐ ┌──────────────────────┐
    │  Lost Features    │ │ Features at Oldest│ │ Already SLAM Features│
    │   (feats_lost)    │ │Clone (feats_marg) │ │   (continue update)  │
    │                   │ │                   │ │                      │
    │ NOT in newest     │ │ Present at time   │ │ Pull from trackFEATS │
    │ frame anymore     │ │ margtimestep()    │ │ or trackARUCO        │
    └───────────────────┘ └───────────────────┘ └──────────────────────┘
            │                       │                        │
            │                       ▼                        │
            │          ┌──────────────────────┐             │
            │          │  Check Track Length  │             │
            │          │ > max_clone_size?    │             │
            │          └──────────────────────┘             │
            │                 │         │                    │
            │                 ▼         ▼                    │
            │          ┌─────YES────┐  NO                   │
            │          │            │  │                     │
            │          ▼            │  │                     │
            │  ┌──────────────────┐│  │                     │
            │  │ feats_maxtracks  ││  │                     │
            │  │                  ││  │                     │
            │  │ Long-lived       ││  │                     │
            │  │ features eligible││  │                     │
            │  │ for SLAM         ││  │                     │
            │  └──────────────────┘│  │                     │
            │          │            │  │                     │
            │          ▼            ▼  ▼                     │
            │  ┌──────────────────────────────┐             │
            │  │  SLAM Budget Available?      │             │
            │  │  - max_slam_features > 0?    │             │
            │  │  - dt_slam_delay passed?     │             │
            │  │  - slots available?          │             │
            │  └──────────────────────────────┘             │
            │          │                │                    │
            │    ┌─────YES───────┐   NO│                    │
            │    ▼                │     │                    │
            │ ┌──────────────┐   │     │                    │
            │ │ feats_slam   │   │     │                    │
            │ │ (DELAYED)    │   │     │                    │
            │ └──────────────┘   │     │                    │
            │    │                │     │                    │
            └────┼────────────────┼─────┼────────────────────┘
                 │                │     │
                 │                └─────┴───────────┐
                 │                                  │
                 │                                  ▼
                 │                    ┌──────────────────────────────┐
                 │                    │    featsup_MSCKF             │
                 │                    │                              │
                 │                    │ = feats_lost                 │
                 │                    │ + feats_marg (not maxtrack)  │
                 │                    │ + feats_maxtracks (rejected) │
                 │                    └──────────────────────────────┘
                 │                                  │
┏━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━┓
┃                ▼                                 ▼                   ┃
┃   ┌─────────────────────────┐      ┌─────────────────────────────┐ ┃
┃   │  UpdaterSLAM Pipeline   │      │   UpdaterMSCKF Pipeline     │ ┃
┃   └─────────────────────────┘      └─────────────────────────────┘ ┃
┃                │                                 │                   ┃
┃                ▼                                 ▼                   ┃
┃   ┌─────────────────────────┐      ┌─────────────────────────────┐ ┃
┃   │ 1. delayed_init()       │      │ 1. Triangulate features     │ ┃
┃   │    - Triangulate        │      │                             │ ┃
┃   │    - Chi² test          │      │ 2. Compute Jacobians:       │ ┃
┃   │    - Initialize to      │      │    • H_f (w.r.t. feature)   │ ┃
┃   │      state vector       │      │    • H_x (w.r.t. state)     │ ┃
┃   │    - Expand covariance  │      │                             │ ┃
┃   │                         │      │ 3. Nullspace Projection     │ ┃
┃   │ 2. update()             │      │                             │ ┃
┃   │    - Update existing    │      │                             │ ┃
┃   │      SLAM features      │      │                             │ ┃
┃   │    - Chi² test          │      │                             │ ┃
┃   │    - H_xf includes      │      │ 4. Chi² test                │ ┃
┃   │      feature Jacobian   │      │                             │ ┃
┃   │    - EKF update         │      │ 5. EKF update (state only)  │ ┃
┃   └─────────────────────────┘      │                             │ ┃
┃                │                    └─────────────────────────────┘ ┃
┃                │                                 │                   ┃
┗━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━┛
                 │                                 │
                 ▼                                 ▼
    ┌─────────────────────────┐      ┌─────────────────────────────┐
    │  Feature PERSISTS in    │      │  Feature DELETED            │
    │  state vector           │      │  (to_delete = true)         │
    │                         │      │                             │
    │  • In _features_SLAM    │      │  NOT in state vector        │
    │  • Part of covariance   │      │                             │
    │  • Updated each frame   │      │  Information extracted,     │
    │  • Marginalized if lost │      │  then discarded             │
    │    or fails χ² test     │      │                             │
    └─────────────────────────┘      └─────────────────────────────┘
                 │
                 │ (Continue updating until...)
                 │
                 ▼
    ┌─────────────────────────────────┐
    │  SLAM Feature Marginalization   │
    │                                 │
    │  Triggered when:                │
    │  • Lost tracking                │
    │  • update_fail_count > 1        │
    │  • Anchor clone marginalized    │
    │    (perform anchor change)      │
    └─────────────────────────────────┘
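The routing logic in the diagrams above can be sketched as a small helper (a hypothetical function, not OpenVINS code; `max_clone_size` and the SLAM-budget flag mirror the options named above, and already-initialized SLAM features are handled separately):

```python
def categorize_feature(lost, at_oldest_clone, track_length,
                       max_clone_size, slam_slots_free):
    """Return which pipeline a newly considered feature is routed to."""
    if lost:
        return "MSCKF"      # feats_lost -> nullspace update, then deleted
    if not at_oldest_clone:
        return "CONTINUE"   # still tracked, not used yet
    # At marginalization time: long tracks may become SLAM features
    if track_length > max_clone_size and slam_slots_free:
        return "SLAM"       # feats_maxtracks -> delayed initialization
    return "MSCKF"          # short track, or no SLAM budget available
```
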

The state vector of our visual-inertial system consists of the current inertial navigation state, a set of c historical IMU pose clones, a set of m environmental landmarks, and a set of w cameras' extrinsic and intrinsic parameters. MSCKF features/landmarks never become part of the state; only features that are successfully initialized by SLAM are added to it. (Not all MSCKF or max-track features become SLAM features, since parameters such as max_slam_features bound the number of SLAM features.)

$$\mathbf{x}_k = \begin{bmatrix} \mathbf{x}_I^\mathsf{T} & \mathbf{x}_C^\mathsf{T} & \mathbf{x}_M^\mathsf{T} & \mathbf{x}_W^\mathsf{T} & {}^\mathcal{C}t_I \end{bmatrix}^\mathsf{T}$$

$$\mathbf{x}_I = \begin{bmatrix} {}^{I_k}_{G}\bar{\mathbf{q}}^\mathsf{T} & {}^{G}\mathbf{p}_{I_k}^\mathsf{T} & {}^{G}\mathbf{v}_{I_k}^\mathsf{T} & \mathbf{b}_{\omega_k}^\mathsf{T} & \mathbf{b}_{a_k}^\mathsf{T} \end{bmatrix}^\mathsf{T}$$

$$\mathbf{x}_C = \begin{bmatrix} {}^{I_{k-1}}_{G}\bar{\mathbf{q}}^\mathsf{T} & {}^{G}\mathbf{p}_{I_{k-1}}^\mathsf{T} & \cdots & {}^{I_{k-c}}_{G}\bar{\mathbf{q}}^\mathsf{T} & {}^{G}\mathbf{p}_{I_{k-c}}^\mathsf{T} \end{bmatrix}^\mathsf{T}$$

$$\mathbf{x}_M = \begin{bmatrix} {}^{G}\mathbf{p}_{f_1}^\mathsf{T} & \cdots & {}^{G}\mathbf{p}_{f_m}^\mathsf{T} \end{bmatrix}^\mathsf{T}$$

$$\mathbf{x}_W = \begin{bmatrix} {}^{I}_{C_1}\bar{\mathbf{q}}^\mathsf{T} & {}^{C_1}\mathbf{p}_I^\mathsf{T} & \boldsymbol{\zeta}_0^\mathsf{T} & \cdots & {}^{I}_{C_w}\bar{\mathbf{q}}^\mathsf{T} & {}^{C_w}\mathbf{p}_I^\mathsf{T} & \boldsymbol{\zeta}_w^\mathsf{T} \end{bmatrix}^\mathsf{T}$$

Nullspace projection happens only when the position and covariance of a feature are unknown: when MSCKF short tracks are marginalized out (MSCKF update), and when SLAM long tracks are added to the state (delayed initialization). During a SLAM update, when a feature is already in the state vector, no nullspace projection is performed (apart from handling for certain feature representations, which differ only in how the state is stored).

MSCKF Update

In the MSCKF case, features are not in the state. The MSCKF update does not initialize SLAM features, although SLAM feature initialization follows similar steps. The steps are:

  1. Triangulate features from multiple views

  2. Compute H_f (Jacobian w.r.t. feature) and H_x (Jacobian w.r.t. state)

  3. Nullspace projection: Projects out the feature dependency using QR decomposition (see docs)

  4. Update state with only H_x (feature is eliminated from the equations)

  5. Delete features immediately after use

The classic error-state Kalman filter uses the following measurement model for updates:

$$\mathbf{r} = \mathbf{H}\tilde{\mathbf{x}} + \mathbf{n}$$

where $\mathbf{H}$ is the measurement Jacobian and the noise $\mathbf{n}$ is zero mean, white, and uncorrelated with the state error.

Here the measurement model applies to a single feature, $f_j$, that has been observed from a set of $M_j$ camera poses $({}^{C_i}_{G}\bar{\mathbf{q}}, {}^{G}\mathbf{p}_{C_i})$, $i \in S_j$. Each of the $M_j$ observations of the feature is described by the model:

$$\mathbf{z}_i^{(j)} = \frac{1}{{}^{C_i} Z_j} \begin{bmatrix} {}^{C_i} X_j \\ {}^{C_i} Y_j \end{bmatrix} + \mathbf{n}_i^{(j)}, \quad i \in S_j$$

where $\mathbf{n}_i^{(j)}$ is the $2 \times 1$ image noise vector, with covariance matrix $\mathbf{R}_i^{(j)} = \sigma_{\text{im}}^2 \mathbf{I}_2$. The feature position expressed in the camera frame, ${}^{C_i}\mathbf{p}_{f_j}$, is given by:

$${}^{C_i}\mathbf{p}_{f_j} = \begin{bmatrix} {}^{C_i} X_j \\ {}^{C_i} Y_j \\ {}^{C_i} Z_j \end{bmatrix} = \mathbf{C}({}^{C_i}_{G}\bar{\mathbf{q}})({}^{G}\mathbf{p}_{f_j} - {}^{G}\mathbf{p}_{C_i})$$

where ${}^{G}\mathbf{p}_{f_j}$ is the 3D feature position in the global frame. Since this is unknown, the first step of the algorithm triangulates an estimate ${}^{G}\hat{\mathbf{p}}_{f_j}$ of the feature position from the pixel measurements $\mathbf{z}_i^{(j)}$, $i \in S_j$, and the filter estimates of the camera poses at the corresponding time instants (assumed to be known exactly). Once the feature estimate is known, it is projected to pixel coordinates and the residual for the measurement update is computed:

$$\mathbf{r}_i^{(j)} = \mathbf{z}_i^{(j)} - \hat{\mathbf{z}}_i^{(j)}$$

where

$$\hat{\mathbf{z}}_i^{(j)} = \frac{1}{{}^{C_i} \hat{Z}_j} \begin{bmatrix} {}^{C_i} \hat{X}_j \\ {}^{C_i} \hat{Y}_j \end{bmatrix}, \quad \begin{bmatrix} {}^{C_i} \hat{X}_j \\ {}^{C_i} \hat{Y}_j \\ {}^{C_i} \hat{Z}_j \end{bmatrix} = \mathbf{C}({}^{C_i}_{G}\hat{\bar{\mathbf{q}}})({}^{G}\hat{\mathbf{p}}_{f_j} - {}^{G}\hat{\mathbf{p}}_{C_i})$$

Linearizing about the estimates for the camera pose and for the feature position, the residual above can be approximated as:

$$\mathbf{r}_i^{(j)} \approx \mathbf{H}_{\mathbf{X}_i}^{(j)} \tilde{\mathbf{X}} + \mathbf{H}_{f_i}^{(j)} {}^{G}\tilde{\mathbf{p}}_{f_j} + \mathbf{n}_i^{(j)}$$

In the preceding expression $\mathbf{H}_{\mathbf{X}_i}^{(j)}$ and $\mathbf{H}_{f_i}^{(j)}$ are the Jacobians of the measurement $\mathbf{z}_i^{(j)}$ with respect to the state and the feature position, respectively, and ${}^{G}\tilde{\mathbf{p}}_{f_j}$ is the error in the position estimate of $f_j$. By stacking the residuals of all $M_j$ measurements of this feature, we obtain:

$$\mathbf{r}^{(j)} \approx \mathbf{H}_{\mathbf{X}}^{(j)} \tilde{\mathbf{X}} + \mathbf{H}_{f}^{(j)} {}^{G}\tilde{\mathbf{p}}_{f_j} + \mathbf{n}^{(j)}$$

where $\mathbf{r}^{(j)}$, $\mathbf{H}_{\mathbf{X}}^{(j)}$, $\mathbf{H}_{f}^{(j)}$, and $\mathbf{n}^{(j)}$ are block vectors or matrices with elements $\mathbf{r}_i^{(j)}$, $\mathbf{H}_{\mathbf{X}_i}^{(j)}$, $\mathbf{H}_{f_i}^{(j)}$, and $\mathbf{n}_i^{(j)}$, for $i \in S_j$. Since the feature observations in different images are independent, the covariance matrix of $\mathbf{n}^{(j)}$ is $\mathbf{R}^{(j)} = \sigma_{\text{im}}^2 \mathbf{I}_{2M_j}$.

Since the state estimate $\mathbf{X}$ is used to compute ${}^{G}\hat{\mathbf{p}}_{f_j}$, the error ${}^{G}\tilde{\mathbf{p}}_{f_j}$ is correlated with the state errors $\tilde{\mathbf{X}}$. Thus the residual is not linear in $\tilde{\mathbf{X}}$ alone, as required by the standard Kalman filter update equations.

This is the reason for nullspace projection. We define the residual $\mathbf{r}_o^{(j)}$ by projecting $\mathbf{r}^{(j)}$ onto the left nullspace of the matrix $\mathbf{H}_f^{(j)}$.

To this end, for feature $j$ (the index is dropped below for clarity), we start with the measurement residual function and remove its "sensitivity" to the feature error by computing and applying the left nullspace of the Jacobian $\mathbf{H}_f$. However, since many measurements would add many rows to $\mathbf{H}$, the measurements are subsequently compressed.

The Jacobians of the original residual have the following sizes:

  1. $2n \times 3$ for the feature Jacobian: two rows per $(u,v)$ measurement (one measurement per clone in which the feature was seen) and three columns for the feature position ${}^{G}\mathbf{p}_f$ $(x, y, z)$.

  2. $2n \times (15 + 6c)$ for the state Jacobian, where the state size is 15 calibration parameters (extrinsics, intrinsics, distortion) plus 6 per clone. The feature's $(u,v)$ values do not depend on the current IMU state or the IMU intrinsics (those enter only through the propagation equations), so they are not counted in the state size here. Once the Jacobians are computed, nullspace projection removes $\mathbf{H}_f$ from the equations, since the feature error is correlated with the noise and the state. This removes 3 rows: if the original $\mathbf{H}_x$ was $2n \times (15+6c)$, then after nullspace projection the residual is $(2n-3) \times 1$ and $\mathbf{H}_x$ is $(2n-3) \times (15+6c)$. Measurement compression then converts this into $(15+6c) \times (15+6c)$, which is the useful size for the subsequent matrix multiplications.
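The bookkeeping above can be expressed as a tiny helper (illustrative only; `msckf_shapes` is not a library function):

```python
def msckf_shapes(n_meas, n_clones):
    """Matrix sizes for one MSCKF update, assuming the 15-parameter
    calibration block described above."""
    rows = 2 * n_meas                  # two rows (u, v) per measurement
    state = 15 + 6 * n_clones          # calibration + 6 DOF per clone
    H_f = (rows, 3)                    # Jacobian w.r.t. 3D feature position
    H_x = (rows, state)                # Jacobian w.r.t. the state
    H_x_proj = (rows - 3, state)       # after nullspace projection (rank(H_f) = 3)
    H_x_comp = (min(rows - 3, state), state)  # after thin-QR compression
    return H_f, H_x, H_x_proj, H_x_comp
```

For example, with 50 measurements and 11 clones the compressed Jacobian is square at the state size, $81 \times 81$, regardless of how many measurements were stacked.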

Think of this as operating on a feature whose predicted $(u, v)$ is computed as:

$${}^{C_k}\mathbf{p}_f = {}^{C}_{I}\mathbf{R}\, {}^{I_k}_{G}\mathbf{R} ({}^{G}\mathbf{p}_f - {}^{G}\mathbf{p}_{I_k}) + {}^{C}\mathbf{p}_I$$

$$\mathbf{z}_{n,k} = \mathbf{h}_p({}^{C_k}\mathbf{p}_f) = \begin{bmatrix} {}^C x / {}^C z \\ {}^C y / {}^C z \end{bmatrix}$$

$$\begin{bmatrix} u \\ v \end{bmatrix} := \mathbf{z}_k = \mathbf{h}_d(\mathbf{z}_{n,k}, \boldsymbol{\zeta}) = \begin{bmatrix} f_x\, x + c_x \\ f_y\, y + c_y \end{bmatrix}$$

where

$$x = \frac{x_n}{r}\,\theta_d, \quad y = \frac{y_n}{r}\,\theta_d$$

$$\theta_d = \theta(1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8), \quad r^2 = x_n^2 + y_n^2, \quad \theta = \arctan(r)$$
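The distortion model above (an equidistant/fisheye model) can be sketched directly; the intrinsic and distortion values used here are made up for illustration:

```python
import math

def distort_equidistant(xn, yn, fx, fy, cx, cy, k):
    """Map a normalized image point (xn, yn) to pixel coordinates (u, v)
    using the equidistant distortion model with coefficients k = [k1..k4]."""
    r = math.sqrt(xn * xn + yn * yn)
    theta = math.atan(r)
    theta_d = theta * (1 + k[0] * theta**2 + k[1] * theta**4
                         + k[2] * theta**6 + k[3] * theta**8)
    scale = theta_d / r if r > 1e-12 else 1.0  # avoid divide-by-zero at center
    x, y = xn * scale, yn * scale
    return fx * x + cx, fy * y + cy
```

At the principal point the model returns $(c_x, c_y)$, and with all $k_i = 0$ it reduces to the undistorted pinhole projection of $\theta$-scaled coordinates.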

Hence the residuals depend on the clone poses, intrinsics, and extrinsics. $\mathbf{H}_f$ has rank at most 3, since it has only 3 columns (the 3D feature position), which means after nullspace projection:

$$\begin{bmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \end{bmatrix} = \begin{bmatrix} \mathbf{H}_{x1} \\ \mathbf{H}_{x2} \end{bmatrix} \tilde{\mathbf{x}} + \begin{bmatrix} \mathbf{H}_{f1} \\ \mathbf{0} \end{bmatrix} \tilde{\mathbf{f}} + \begin{bmatrix} \mathbf{n}_1 \\ \mathbf{n}_2 \end{bmatrix}$$

$\mathbf{H}_{f1}$ has only three rows, which can be used to initialize the feature into the state vector. Nullspace projection is now used in the following ways:

  1. In the case of an MSCKF update, the feature does not need to be initialized into the state vector, so after nullspace projection we extract only $(\mathbf{H}_{x2}, \mathbf{r}_2, \mathbf{n}_2)$ to update the state.

    $$\mathbf{H}_f = [\mathbf{Q}_1 \quad \mathbf{Q}_2] \begin{bmatrix} \mathbf{R}_1 \\ \mathbf{0} \end{bmatrix} = \mathbf{Q}_1 \mathbf{R}_1$$

Multiplying the linearized measurement equation on the left by the left nullspace of the feature Jacobian yields:

$$\begin{aligned} \tilde{\mathbf{z}}_{m,k} &\simeq \mathbf{H}_x \tilde{\mathbf{x}}_k + \mathbf{Q}_1 \mathbf{R}_1 {}^G\tilde{\mathbf{p}}_f + \mathbf{n}_k \\ \Rightarrow \mathbf{Q}_2^\top \tilde{\mathbf{z}}_{m,k} &\simeq \mathbf{Q}_2^\top \mathbf{H}_x \tilde{\mathbf{x}}_k + {\color{red}\mathbf{Q}_2^\top \mathbf{Q}_1 \mathbf{R}_1 {}^G\tilde{\mathbf{p}}_f} + \mathbf{Q}_2^\top \mathbf{n}_k \\ \Rightarrow \mathbf{Q}_2^\top \tilde{\mathbf{z}}_{m,k} &\simeq \mathbf{Q}_2^\top \mathbf{H}_x \tilde{\mathbf{x}}_k + \mathbf{Q}_2^\top \mathbf{n}_k \\ \Rightarrow \tilde{\mathbf{z}}_{o,k} &\simeq \mathbf{H}_{o,k} \tilde{\mathbf{x}}_k + \mathbf{n}_{o,k} \end{aligned}$$
  2. In the case of a SLAM update, the feature does need to be initialized into the state vector, so we use the $(\mathbf{H}_{x2}, \mathbf{r}_2, \mathbf{n}_2)$ portion to update the state and the $(\mathbf{H}_{f1}, \mathbf{H}_{x1}, \mathbf{n}_1, \mathbf{r}_1)$ portion to initialize the feature's covariance into the state vector (the mean is taken to equal the Gauss-Newton triangulation result):

    $$\tilde{\mathbf{f}} = \mathbf{H}_{f1}^{-1}(\mathbf{r}_1 - \mathbf{n}_1 - \mathbf{H}_{x1}\tilde{\mathbf{x}})$$

    $$\Rightarrow \mathbb{E}[\tilde{\mathbf{f}}] = \mathbf{H}_{f1}^{-1}\mathbf{r}_1$$

where we assumed the noise and state error are zero mean. We can update $\hat{\mathbf{f}}$ with this correction: $\hat{\mathbf{f}} + \mathbb{E}[\tilde{\mathbf{f}}]$. Note that this is equivalent to a Gauss-Newton step for solving the corresponding maximum likelihood estimation (MLE) problem formed by fixing the estimate of $\mathbf{x}$ and optimizing over $\tilde{\mathbf{f}}$, and should therefore be zero if such an optimization was used to obtain the initial estimate of the new variable.

We now can compute the covariance of the new feature as follows:

$$\begin{aligned} \mathbf{P}_{ff} &= \mathbb{E}[(\tilde{\mathbf{f}} - \mathbb{E}[\tilde{\mathbf{f}}])(\tilde{\mathbf{f}} - \mathbb{E}[\tilde{\mathbf{f}}])^\top] \\ &= \mathbb{E}[(\mathbf{H}_{f1}^{-1}(-\mathbf{n}_1 - \mathbf{H}_{x1}\tilde{\mathbf{x}}))(\mathbf{H}_{f1}^{-1}(-\mathbf{n}_1 - \mathbf{H}_{x1}\tilde{\mathbf{x}}))^\top] \\ &= \mathbf{H}_{f1}^{-1}(\mathbf{H}_{x1}\mathbf{P}_{xx}\mathbf{H}_{x1}^\top + \mathbf{R}_1)\mathbf{H}_{f1}^{-\top} \end{aligned}$$

and the cross correlation can be computed as:

$$\begin{aligned} \mathbf{P}_{xf} &= \mathbb{E}[\tilde{\mathbf{x}}(\tilde{\mathbf{f}} - \mathbb{E}[\tilde{\mathbf{f}}])^\top] \\ &= \mathbb{E}[\tilde{\mathbf{x}}(\mathbf{H}_{f1}^{-1}(-\mathbf{n}_1 - \mathbf{H}_{x1}\tilde{\mathbf{x}}))^\top] \\ &= -\mathbf{P}_{xx}\mathbf{H}_{x1}^\top \mathbf{H}_{f1}^{-\top} \end{aligned}$$

These entries can then be placed in the correct location for the covariance. For example when initializing a new feature to the end of the state, the augmented covariance would be:

$$\mathbf{P}_{\text{aug}} = \begin{bmatrix} \mathbf{P}_{xx} & \mathbf{P}_{xf} \\ \mathbf{P}_{xf}^\top & \mathbf{P}_{ff} \end{bmatrix}$$
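The covariance bookkeeping for delayed initialization can be sketched numerically with random, well-conditioned inputs (toy quantities, not real VIO state; this mirrors the $\mathbf{P}_{ff}$, $\mathbf{P}_{xf}$, and augmentation formulas above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
P_xx = A @ A.T + n * np.eye(n)                       # SPD state covariance
H_x1 = rng.standard_normal((3, n))                   # top block of state Jacobian
H_f1 = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # invertible feature block
R1 = 0.01 * np.eye(3)                                # pixel-noise block

Hf1_inv = np.linalg.inv(H_f1)
P_ff = Hf1_inv @ (H_x1 @ P_xx @ H_x1.T + R1) @ Hf1_inv.T   # new feature covariance
P_xf = -P_xx @ H_x1.T @ Hf1_inv.T                          # cross-correlation

# Augment the feature onto the end of the state covariance
P_aug = np.block([[P_xx, P_xf], [P_xf.T, P_ff]])
```

Since $\tilde{\mathbf{f}}$ is a linear function of $(\tilde{\mathbf{x}}, \mathbf{n}_1)$, the augmented matrix is a valid (symmetric, positive-definite) covariance by construction.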

After nullspace projection, the system has the following sizes:

$$\tilde{\mathbf{z}}_{o,k} \simeq \mathbf{H}_{o,k} \tilde{\mathbf{x}}_k + \mathbf{n}_o$$

$$\underbrace{(2n-3) \times 1}_{\tilde{\mathbf{z}}_{o,k}} = \underbrace{(2n-3) \times (15+6c)}_{\mathbf{H}_{o,k}} \; \underbrace{(15+6c) \times 1}_{\tilde{\mathbf{x}}_k} + \underbrace{(2n-3) \times 1}_{\mathbf{n}_o}$$

where $n$ is the number of $(u,v)$ measurements (summed over all features in the update), $c$ is the number of clones, and 15 counts the camera intrinsic and extrinsic parameters.
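The projection and the resulting shapes can be sketched with a toy full-QR example (random Jacobians and small sizes; illustrative only, not the OpenVINS implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_state = 12, 9                    # e.g. 6 (u, v) measurements
H_f = rng.standard_normal((n_rows, 3))     # feature Jacobian, rank 3
H_x = rng.standard_normal((n_rows, n_state))
r = rng.standard_normal(n_rows)

# Full QR of H_f: Q = [Q1 | Q2], where Q2 spans the left nullspace of H_f
Q, _ = np.linalg.qr(H_f, mode="complete")
Q2 = Q[:, 3:]

H_o = Q2.T @ H_x                           # (2n-3) x state_size
r_o = Q2.T @ r                             # projected residual

print(H_o.shape)                           # (9, 9): three rows removed
```

Multiplying by $\mathbf{Q}_2^\top$ annihilates the feature term exactly, which is what makes the projected system a valid EKF residual in the state error alone.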

Measurement Compression

One of the most costly operations in the EKF update is the matrix multiplication with the large stacked Jacobian. To mitigate this, we perform a thin QR decomposition of the measurement Jacobian after nullspace projection. Measurement compression converts $\mathbf{H}_{o,k}$ from $(2n-3) \times (15+6c)$ into $(15+6c) \times (15+6c)$, since that is usually the smaller dimension (the rank):

$$\mathbf{H}_{o,k} = [\mathbf{Q}_1 \quad \mathbf{Q}_2] \begin{bmatrix} \mathbf{R}_1 \\ \mathbf{0} \end{bmatrix} = \mathbf{Q}_1\mathbf{R}_1$$

This QR decomposition can be performed again using Givens rotations (note that this operation in general is not cheap though). We apply this QR to the linearized measurement residuals to compress measurements:

$$\begin{aligned} \tilde{\mathbf{r}}_{o,k} &\approx \mathbf{H}_{o,k}\tilde{\mathbf{x}}_k + \mathbf{n}_o \\ \tilde{\mathbf{r}}_{o,k} &\approx \mathbf{Q}_1\mathbf{R}_1\tilde{\mathbf{x}}_k + \mathbf{n}_o \\ \mathbf{Q}_1^\mathsf{T} \tilde{\mathbf{r}}_{o,k} &\approx \mathbf{Q}_1^\mathsf{T}\mathbf{Q}_1\mathbf{R}_1\tilde{\mathbf{x}}_k + \mathbf{Q}_1^\mathsf{T}\mathbf{n}_o \\ \mathbf{Q}_1^\mathsf{T} \tilde{\mathbf{r}}_{o,k} &\approx \mathbf{R}_1\tilde{\mathbf{x}}_k + \mathbf{Q}_1^\mathsf{T}\mathbf{n}_o \\ \Rightarrow \tilde{\mathbf{r}}_{n,k} &\approx \mathbf{H}_{n,k}\tilde{\mathbf{x}}_k + \mathbf{n}_n \end{aligned}$$
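A toy thin-QR compression (random system, shapes only) shows why no information is lost: the compressed system has the same normal equations as the original.

```python
import numpy as np

rng = np.random.default_rng(2)
rows, state = 40, 9                         # tall stacked system
H_o = rng.standard_normal((rows, state))
r_o = rng.standard_normal(rows)

Q1, R1 = np.linalg.qr(H_o, mode="reduced")  # thin QR: Q1 is rows x state
H_n = R1                                    # compressed Jacobian: state x state
r_n = Q1.T @ r_o                            # compressed residual

# Same information content: H_n^T H_n == H_o^T H_o and H_n^T r_n == H_o^T r_o
assert np.allclose(H_n.T @ H_n, H_o.T @ H_o)
assert np.allclose(H_n.T @ r_n, H_o.T @ r_o)
```

Note this equivalence relies on the noise being isotropic, which is why the noise matrix is formed only after compression in the code below.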

As a result, the compressed measurement Jacobian will be of the size of the state, which will significantly reduce the EKF update cost:

$$\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{P}_{k|k-1}\mathbf{H}_{n,k}^\mathsf{T}(\mathbf{H}_{n,k}\mathbf{P}_{k|k-1}\mathbf{H}_{n,k}^\mathsf{T} + \mathbf{R}_n)^{-1}\tilde{\mathbf{r}}_{n,k}$$

$$\mathbf{P}_{k|k} = \mathbf{P}_{k|k-1} - \mathbf{P}_{k|k-1}\mathbf{H}_{n,k}^\mathsf{T}(\mathbf{H}_{n,k}\mathbf{P}_{k|k-1}\mathbf{H}_{n,k}^\mathsf{T} + \mathbf{R}_n)^{-1}\mathbf{H}_{n,k}\mathbf{P}_{k|k-1}$$
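The update above can be sketched as a minimal EKF measurement step with toy numbers (random SPD prior, square compressed Jacobian; not real filter state):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)       # prior covariance P_{k|k-1} (SPD)
H = rng.standard_normal((n, n))   # compressed Jacobian H_{n,k}
R = 0.25 * np.eye(n)              # isotropic pixel noise R_n
z = rng.standard_normal(n)        # compressed residual
x = np.zeros(n)                   # prior error-state mean

S = H @ P @ H.T + R               # innovation covariance
K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
x_new = x + K @ z                 # state correction
P_new = P - K @ H @ P             # posterior covariance
```

Because the correction only adds information, the posterior covariance stays symmetric positive-definite with a strictly smaller trace than the prior.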
void UpdaterMSCKF::update(std::shared_ptr<State> state, std::vector<std::shared_ptr<Feature>> &feature_vec) {

  // Return if no features
  if (feature_vec.empty())
    return;

  // Start timing
  boost::posix_time::ptime rT0, rT1, rT2, rT3, rT4, rT5;
  rT0 = boost::posix_time::microsec_clock::local_time();

  // 0. Get all timestamps our clones are at (and thus valid measurement times)
  std::vector<double> clonetimes;
  for (const auto &clone_imu : state->_clones_IMU) {
    clonetimes.emplace_back(clone_imu.first);
  }

  // 1. Clean all feature measurements and make sure they all have valid clone times
  auto it0 = feature_vec.begin();
  while (it0 != feature_vec.end()) {

    // Clean the feature
    (*it0)->clean_old_measurements(clonetimes);

    // Count how many measurements
    int ct_meas = 0;
    for (const auto &pair : (*it0)->timestamps) {
      ct_meas += (*it0)->timestamps[pair.first].size();
    }

    // Remove if we don't have enough
    if (ct_meas < 2) {
      (*it0)->to_delete = true;
      it0 = feature_vec.erase(it0);
    } else {
      it0++;
    }
  }
  rT1 = boost::posix_time::microsec_clock::local_time();

  // 2. Create vector of cloned *CAMERA* poses at each of our clone timesteps
  std::unordered_map<size_t, std::unordered_map<double, FeatureInitializer::ClonePose>> clones_cam;
  for (const auto &clone_calib : state->_calib_IMUtoCAM) {

    // For this camera, create the vector of camera poses
    std::unordered_map<double, FeatureInitializer::ClonePose> clones_cami;
    for (const auto &clone_imu : state->_clones_IMU) {

      // Get current camera pose
      Eigen::Matrix<double, 3, 3> R_GtoCi = clone_calib.second->Rot() * clone_imu.second->Rot();
      Eigen::Matrix<double, 3, 1> p_CioinG = clone_imu.second->pos() - R_GtoCi.transpose() * clone_calib.second->pos();

      // Append to our map
      clones_cami.insert({clone_imu.first, FeatureInitializer::ClonePose(R_GtoCi, p_CioinG)});
    }

    // Append to our map
    clones_cam.insert({clone_calib.first, clones_cami});
  }

  // 3. Try to triangulate all MSCKF or new SLAM features that have measurements
  auto it1 = feature_vec.begin();
  while (it1 != feature_vec.end()) {

    // Triangulate the feature and remove if it fails
    bool success_tri = true;
    if (initializer_feat->config().triangulate_1d) {
      success_tri = initializer_feat->single_triangulation_1d(*it1, clones_cam);
    } else {
      success_tri = initializer_feat->single_triangulation(*it1, clones_cam);
    }

    // Gauss-newton refine the feature
    bool success_refine = true;
    if (initializer_feat->config().refine_features) {
      success_refine = initializer_feat->single_gaussnewton(*it1, clones_cam);
    }

    // Remove the feature if not a success
    if (!success_tri || !success_refine) {
      (*it1)->to_delete = true;
      it1 = feature_vec.erase(it1);
      continue;
    }
    it1++;
  }
  rT2 = boost::posix_time::microsec_clock::local_time();

  // Calculate the max possible measurement size
  size_t max_meas_size = 0;
  for (size_t i = 0; i < feature_vec.size(); i++) {
    for (const auto &pair : feature_vec.at(i)->timestamps) {
      max_meas_size += 2 * feature_vec.at(i)->timestamps[pair.first].size();
    }
  }

  // Calculate max possible state size (i.e. the size of our covariance)
  // NOTE: that when we have the single inverse depth representations, those are only 1dof in size
  size_t max_hx_size = state->max_covariance_size();
  for (auto &landmark : state->_features_SLAM) {
    max_hx_size -= landmark.second->size();
  }

  // Large Jacobian and residual of *all* features for this update
  Eigen::VectorXd res_big = Eigen::VectorXd::Zero(max_meas_size);
  Eigen::MatrixXd Hx_big = Eigen::MatrixXd::Zero(max_meas_size, max_hx_size);
  std::unordered_map<std::shared_ptr<Type>, size_t> Hx_mapping;
  std::vector<std::shared_ptr<Type>> Hx_order_big;
  size_t ct_jacob = 0;
  size_t ct_meas = 0;

  // 4. Compute linear system for each feature, nullspace project, and reject
  auto it2 = feature_vec.begin();
  while (it2 != feature_vec.end()) {

    // Convert our feature into our current format
    UpdaterHelper::UpdaterHelperFeature feat;
    feat.featid = (*it2)->featid;
    feat.uvs = (*it2)->uvs;
    feat.uvs_norm = (*it2)->uvs_norm;
    feat.timestamps = (*it2)->timestamps;

    // If we are using single inverse depth, then it is equivalent to using the msckf inverse depth
    feat.feat_representation = state->_options.feat_rep_msckf;
    if (state->_options.feat_rep_msckf == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) {
      feat.feat_representation = LandmarkRepresentation::Representation::ANCHORED_MSCKF_INVERSE_DEPTH;
    }

    // Save the position and its fej value
    if (LandmarkRepresentation::is_relative_representation(feat.feat_representation)) {
      feat.anchor_cam_id = (*it2)->anchor_cam_id;
      feat.anchor_clone_timestamp = (*it2)->anchor_clone_timestamp;
      feat.p_FinA = (*it2)->p_FinA;
      feat.p_FinA_fej = (*it2)->p_FinA;
    } else {
      feat.p_FinG = (*it2)->p_FinG;
      feat.p_FinG_fej = (*it2)->p_FinG;
    }

    // Our return values (feature jacobian, state jacobian, residual, and order of state jacobian)
    Eigen::MatrixXd H_f;
    Eigen::MatrixXd H_x;
    Eigen::VectorXd res;
    std::vector<std::shared_ptr<Type>> Hx_order;

    // Get the Jacobian for this feature
    UpdaterHelper::get_feature_jacobian_full(state, feat, H_f, H_x, res, Hx_order);

    // Nullspace project
    UpdaterHelper::nullspace_project_inplace(H_f, H_x, res);

    /// Chi2 distance check
    Eigen::MatrixXd P_marg = StateHelper::get_marginal_covariance(state, Hx_order);
    Eigen::MatrixXd S = H_x * P_marg * H_x.transpose();
    S.diagonal() += _options.sigma_pix_sq * Eigen::VectorXd::Ones(S.rows());
    double chi2 = res.dot(S.llt().solve(res));

    // Get our threshold (we precompute up to 500 but handle the case that it is more)
    double chi2_check;
    if (res.rows() < 500) {
      chi2_check = chi_squared_table[res.rows()];
    } else {
      boost::math::chi_squared chi_squared_dist(res.rows());
      chi2_check = boost::math::quantile(chi_squared_dist, 0.95);
      PRINT_WARNING(YELLOW "chi2_check over the residual limit - %d\n" RESET, (int)res.rows());
    }

    // Check if we should delete or not
    if (chi2 > _options.chi2_multipler * chi2_check) {
      (*it2)->to_delete = true;
      it2 = feature_vec.erase(it2);
      // PRINT_DEBUG("featid = %d\n", feat.featid);
      // PRINT_DEBUG("chi2 = %f > %f\n", chi2, _options.chi2_multipler*chi2_check);
      // std::stringstream ss;
      // ss << "res = " << std::endl << res.transpose() << std::endl;
      // PRINT_DEBUG(ss.str().c_str());
      continue;
    }

    // We are good!!! Append to our large H vector
    size_t ct_hx = 0;
    for (const auto &var : Hx_order) {

      // Ensure that this variable is in our Jacobian
      if (Hx_mapping.find(var) == Hx_mapping.end()) {
        Hx_mapping.insert({var, ct_jacob});
        Hx_order_big.push_back(var);
        ct_jacob += var->size();
      }

      // Append to our large Jacobian
      Hx_big.block(ct_meas, Hx_mapping[var], H_x.rows(), var->size()) = H_x.block(0, ct_hx, H_x.rows(), var->size());
      ct_hx += var->size();
    }

    // Append our residual and move forward
    res_big.block(ct_meas, 0, res.rows(), 1) = res;
    ct_meas += res.rows();
    it2++;
  }
  rT3 = boost::posix_time::microsec_clock::local_time();

  // We have appended all features to our Hx_big, res_big
  // Delete it so we do not reuse information
  for (size_t f = 0; f < feature_vec.size(); f++) {
    feature_vec[f]->to_delete = true;
  }

  // Return if we don't have anything and resize our matrices
  if (ct_meas < 1) {
    return;
  }
  assert(ct_meas <= max_meas_size);
  assert(ct_jacob <= max_hx_size);
  res_big.conservativeResize(ct_meas, 1);
  Hx_big.conservativeResize(ct_meas, ct_jacob);

  // 5. Perform measurement compression
  UpdaterHelper::measurement_compress_inplace(Hx_big, res_big);
  if (Hx_big.rows() < 1) {
    return;
  }
  rT4 = boost::posix_time::microsec_clock::local_time();

  // Our noise is isotropic, so make it here after our compression
  Eigen::MatrixXd R_big = _options.sigma_pix_sq * Eigen::MatrixXd::Identity(res_big.rows(), res_big.rows());

  // 6. With all good features update the state
  StateHelper::EKFUpdate(state, Hx_order_big, Hx_big, res_big, R_big);
  rT5 = boost::posix_time::microsec_clock::local_time();

  // Debug print timing information
  PRINT_ALL("[MSCKF-UP]: %.4f seconds to clean\n", (rT1 - rT0).total_microseconds() * 1e-6);
  PRINT_ALL("[MSCKF-UP]: %.4f seconds to triangulate\n", (rT2 - rT1).total_microseconds() * 1e-6);
  PRINT_ALL("[MSCKF-UP]: %.4f seconds create system (%d features)\n", (rT3 - rT2).total_microseconds() * 1e-6, (int)feature_vec.size());
  PRINT_ALL("[MSCKF-UP]: %.4f seconds compress system\n", (rT4 - rT3).total_microseconds() * 1e-6);
  PRINT_ALL("[MSCKF-UP]: %.4f seconds update state (%d size)\n", (rT5 - rT4).total_microseconds() * 1e-6, (int)res_big.rows());
  PRINT_ALL("[MSCKF-UP]: %.4f seconds total\n", (rT5 - rT1).total_microseconds() * 1e-6);
}
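As a toy illustration of the chi-squared gate in the code above: the squared Mahalanobis distance of the residual is compared against a scaled chi-squared quantile. This pure-Python sketch uses made-up numbers and a hardcoded 95% quantile for 2 degrees of freedom (the real code uses a precomputed table or `boost::math`):

```python
# Toy chi-squared gate mirroring the check above: chi2 = r^T S^{-1} r,
# accepted iff chi2 <= multiplier * chi2_check.  All values are made up;
# 5.991 is the 95% quantile of a chi-squared distribution with 2 dof.
CHI2_95_DOF2 = 5.991

def chi2_gate(res, S, chi2_check=CHI2_95_DOF2, multiplier=1.0):
    a, b = S[0]
    c, d = S[1]
    det = a * d - b * c                      # invert the 2x2 S directly
    x0 = (d * res[0] - b * res[1]) / det     # x = S^{-1} res
    x1 = (-c * res[0] + a * res[1]) / det
    chi2 = res[0] * x0 + res[1] * x1
    return chi2, chi2 <= multiplier * chi2_check

# Small residual w.r.t. its covariance -> accepted
chi2_ok, accept = chi2_gate([0.5, -0.3], [[1.0, 0.0], [0.0, 1.0]])
# Large residual -> rejected (the feature would be deleted)
chi2_bad, reject = chi2_gate([5.0, 4.0], [[1.0, 0.0], [0.0, 1.0]])
print(accept, reject)  # True False
```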

SLAM updates

Delayed init

  • Triangulate

  • Chi² test

  • Initialize to state vector

  • Expand covariance

Delayed init happens when an MSCKF feature needs to be added to the state vector, normally once the feature has survived the maximum number of cloned camera poses (see the feature lifecycle for details). The procedure is similar to the MSCKF update: after triangulation gives the mean position of the feature in the world frame, a nullspace (Givens) operation splits the stacked linear system into two parts. The part that still involves the feature position is solved to initialize the feature, while the nullspace-projected part (which no longer involves the feature) is used for a standard EKF update. The final feature estimate is then added to the state vector and the covariance is augmented accordingly.

Specifically, given a set of measurements involving the state x and a new feature f, we want to optimally and efficiently initialize the feature.

$$z_i = h_i(\mathbf{x}, \mathbf{f}) + \mathbf{n}$$

In general, we collect more than the minimum number of measurements needed for initialization, at different times (hence "delayed"). For example, although in principle two monocular images suffice to initialize a 3D point feature, we often collect more images in order to obtain a better initialization. In the OpenVINS code, initialization happens only when a feature has been seen in the maximum number of cloned camera poses and is ready to be converted to a SLAM point and added to the state vector. To process all collected measurements, we stack them and linearize around some linearization points (estimates) denoted by $\mathbf{\hat{x}}$ and $\mathbf{\hat{f}}$:

$$\mathbf{z} = \begin{bmatrix} \mathbf{z}_1 \\ \mathbf{z}_2 \\ \vdots \\ \mathbf{z}_m \end{bmatrix} = \mathbf{h}(\mathbf{x}, \mathbf{f}) + \mathbf{n}$$

$$\implies \mathbf{r} = \mathbf{z} - \mathbf{h}(\mathbf{\hat{x}}, \mathbf{\hat{f}}) = \mathbf{H}_\mathbf{x}\mathbf{\tilde{x}} + \mathbf{H}_\mathbf{f}\mathbf{\tilde{f}} + \mathbf{n}$$

To efficiently compute the resulting augmented covariance matrix, we perform Givens rotations to zero out the rows of $\mathbf{H}_\mathbf{f}$ with indices larger than the dimension of $\mathbf{f}$, and apply the same Givens rotations to $\mathbf{H}_\mathbf{x}$ and $\mathbf{r}$. As a result of this operation, we have the following linear system:

$$\begin{bmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \end{bmatrix} = \begin{bmatrix} \mathbf{H}_{1\mathbf{x}} \\ \mathbf{H}_{2\mathbf{x}} \end{bmatrix}\mathbf{\tilde{x}} + \begin{bmatrix} \mathbf{H}_{1\mathbf{f}} \\ \mathbf{0} \end{bmatrix}\mathbf{\tilde{f}} + \begin{bmatrix} \mathbf{n}_1 \\ \mathbf{n}_2 \end{bmatrix}$$

Note that the bottom subsystem corresponds exactly to the nullspace projection of the MSCKF update, and that $\mathbf{H}_{1\mathbf{f}}$ is generally invertible. Note also that we assume the measurement noise is isotropic; otherwise we would first whiten the measurements to make it so, since isotropic noise is preserved by the orthonormal Givens rotations and this saves significant computation. So if the original measurement noise covariance is $R = \sigma^2 I_m$ and the dimension of $\mathbf{f}$ is $n$, then the noise covariances of the two subsystems are $R_1 = \sigma^2 I_n$ and $R_2 = \sigma^2 I_{m-n}$.
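The Givens step can be sketched in a few lines of pure Python (hand-rolled list-of-lists matrices and made-up numbers; this is an illustration, not the OpenVINS implementation): triangularize H_f column by column, applying each rotation to H_x and r as well, so the bottom rows of H_f become zero.

```python
import math

def givens_zero(H_f, H_x, r):
    """Triangularize H_f with Givens rotations, applying the same rotations
    to H_x and r.  Afterwards the rows of H_f beyond its column count are
    zero, so the bottom subsystem no longer involves the feature."""
    m, n = len(H_f), len(H_f[0])
    for col in range(n):
        for row in range(m - 1, col, -1):
            a, b = H_f[row - 1][col], H_f[row][col]
            if b == 0.0:
                continue
            rad = math.hypot(a, b)
            c, s = a / rad, b / rad
            # Rotate the row pair (row-1, row) in both Jacobians ...
            for M in (H_f, H_x):
                for j in range(len(M[0])):
                    t1, t2 = M[row - 1][j], M[row][j]
                    M[row - 1][j] = c * t1 + s * t2
                    M[row][j] = -s * t1 + c * t2
            # ... and in the residual
            t1, t2 = r[row - 1], r[row]
            r[row - 1] = c * t1 + s * t2
            r[row] = -s * t1 + c * t2

# 4 scalar measurements of a 3-dof feature (made-up numbers)
H_f = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.2], [0.3, 0.1, 1.0], [0.2, 0.4, 0.6]]
H_x = [[1.0, 2.0], [0.0, 1.0], [1.0, 0.0], [2.0, 1.0]]
r = [0.1, -0.2, 0.05, 0.3]
givens_zero(H_f, H_x, r)
print(all(abs(v) < 1e-12 for v in H_f[3]))  # True
```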

Now we can directly solve for the error of the new feature based on the first subsystem:

$$\mathbf{\tilde{f}} = \mathbf{H}_{1\mathbf{f}}^{-1}(\mathbf{r}_1 - \mathbf{n}_1 - \mathbf{H}_{1\mathbf{x}}\mathbf{\tilde{x}})$$

$$\implies E[\mathbf{\tilde{f}}] = \mathbf{H}_{1\mathbf{f}}^{-1}\mathbf{r}_1$$

where we assume the noise and state error are zero-mean. We can then update $\mathbf{\hat{f}}$ with this correction as $\mathbf{\hat{f}} + E[\mathbf{\tilde{f}}]$.
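Since the Givens step leaves $\mathbf{H}_{1\mathbf{f}}$ upper triangular, the correction $E[\mathbf{\tilde{f}}]$ can be obtained by simple back-substitution rather than a full inverse. A toy sketch with made-up numbers:

```python
def solve_upper(H1f, r1):
    """Back-substitute E[f~] = H1f^{-1} r1 for an upper-triangular H1f
    (the form the Givens rotations leave it in)."""
    n = len(r1)
    f = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = r1[i] - sum(H1f[i][j] * f[j] for j in range(i + 1, n))
        f[i] = s / H1f[i][i]
    return f

H1f = [[2.0, 1.0, 0.0], [0.0, 1.0, 1.0], [0.0, 0.0, 4.0]]
r1 = [3.0, 3.0, 8.0]
f_tilde = solve_upper(H1f, r1)
print(f_tilde)  # [1.0, 1.0, 2.0]
```

The feature estimate would then be refined as `f_hat + f_tilde` before being inserted into the state.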

We now can compute the covariance of the new feature as follows:

$$P_{\mathbf{f}\mathbf{f}} = E[(\mathbf{\tilde{f}} - E[\mathbf{\tilde{f}}])(\mathbf{\tilde{f}} - E[\mathbf{\tilde{f}}])^T] = E[(\mathbf{H}_{1\mathbf{f}}^{-1}(-\mathbf{n}_1 - \mathbf{H}_{1\mathbf{x}}\mathbf{\tilde{x}}))(\mathbf{H}_{1\mathbf{f}}^{-1}(-\mathbf{n}_1 - \mathbf{H}_{1\mathbf{x}}\mathbf{\tilde{x}}))^T]$$

$$= \mathbf{H}_{1\mathbf{f}}^{-1}(\mathbf{H}_{1\mathbf{x}}P_{\mathbf{x}\mathbf{x}}\mathbf{H}_{1\mathbf{x}}^T + R_1)\mathbf{H}_{1\mathbf{f}}^{-T}$$

and the cross correlation can be computed as:

$$P_{\mathbf{x}\mathbf{f}} = E[\mathbf{\tilde{x}}(\mathbf{\tilde{f}} - E[\mathbf{\tilde{f}}])^T] = E[\mathbf{\tilde{x}}(\mathbf{H}_{1\mathbf{f}}^{-1}(-\mathbf{n}_1 - \mathbf{H}_{1\mathbf{x}}\mathbf{\tilde{x}}))^T]$$

$$= -P_{\mathbf{x}\mathbf{x}}\mathbf{H}_{1\mathbf{x}}^T\mathbf{H}_{1\mathbf{f}}^{-T}$$

These entries can then be placed in the correct locations in the covariance. For example, when initializing a new feature at the end of the state, the augmented covariance is:

$$P_{\text{aug}} = \begin{bmatrix} P_{\mathbf{x}\mathbf{x}} & P_{\mathbf{x}\mathbf{f}} \\ P_{\mathbf{x}\mathbf{f}}^T & P_{\mathbf{f}\mathbf{f}} \end{bmatrix}$$
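These two formulas and the block layout can be checked numerically with a small pure-Python sketch (made-up numbers: a 2-dof state, a 3-dof feature, and a diagonal $\mathbf{H}_{1\mathbf{f}}$ so its inverse is trivial; in general it is upper triangular after the Givens step):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def T(A):
    return [list(row) for row in zip(*A)]

Pxx = [[0.04, 0.01], [0.01, 0.09]]
H1x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
sigma2 = 0.25
H1f_inv = [[0.5, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.25]]

# Pff = H1f^{-1} (H1x Pxx H1x^T + R1) H1f^{-T}
inner = matmul(matmul(H1x, Pxx), T(H1x))
for i in range(3):
    inner[i][i] += sigma2                     # R1 = sigma^2 * I
Pff = matmul(matmul(H1f_inv, inner), T(H1f_inv))

# Pxf = -Pxx H1x^T H1f^{-T}
Pxf = [[-v for v in row] for row in matmul(matmul(Pxx, T(H1x)), T(H1f_inv))]

# Paug = [[Pxx, Pxf], [Pxf^T, Pff]] stays symmetric
Paug = [Pxx[i] + Pxf[i] for i in range(2)] + \
       [T(Pxf)[i] + Pff[i] for i in range(3)]
print(all(abs(Paug[i][j] - Paug[j][i]) < 1e-12
          for i in range(5) for j in range(5)))  # True
```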

After initialization, $\mathbf{r}_2$, $\mathbf{H}_{2\mathbf{x}}$, $\mathbf{n}_2$ can be used to update our new state through a standard EKF update.

Implementation details

For each feature individually: create the helper data structure with `p_FinG` from triangulation, then compute the Jacobians `H_f` and `H_x` and the residual `res` (the $\mathbf{r}$ above) needed for the nullspace projection. For one feature, `res` is [2×total_meas, 1], `H_f` is [2×total_meas, 3], and `H_x` is [2×total_meas, total_hx], where total_hx is the total size of the involved clones plus the camera calibration and distortion states (included only if the config enables their calibration).
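This dimension bookkeeping can be sanity-checked with a tiny sketch, assuming a 6-dof clone, a 6-dof extrinsic calibration, and 8 intrinsic/distortion parameters per camera (the counts below are hypothetical):

```python
# Dimension bookkeeping for one feature (hypothetical config values)
num_meas = 7           # total measurements across cameras
num_clones = 11        # clones involved in this feature's track
do_calib_pose = True   # adds 6 for R_ItoC, p_IinC
do_calib_intr = True   # adds 8 for intrinsics + distortion

res_rows = 2 * num_meas
H_f_shape = (2 * num_meas, 3)   # 3 columns for a full 3D representation
total_hx = (6 * num_clones
            + (6 if do_calib_pose else 0)
            + (8 if do_calib_intr else 0))
H_x_shape = (2 * num_meas, total_hx)
print(res_rows, H_f_shape, H_x_shape)  # 14 (14, 3) (14, 80)
```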

Residual computation (res/r)

res for a measurement is defined as:

$$res = uv_{m} - uv_{dist}$$

$$uv_{dist} = \operatorname{distort}(uv_{norm})$$

$$uv_{norm} = \big(p_{FinCi}(0)/p_{FinCi}(2),\; p_{FinCi}(1)/p_{FinCi}(2)\big)$$

where $p_{FinCi}$ is formed for every measurement m of each feature:

std::shared_ptr<PoseJPL> clone_Ii = state->_clones_IMU.at(feature.timestamps[pair.first].at(m));
Eigen::Matrix3d R_GtoIi = clone_Ii->Rot();
Eigen::Vector3d p_IiinG = clone_Ii->pos();

// Get current feature in the IMU
Eigen::Vector3d p_FinIi = R_GtoIi * (p_FinG - p_IiinG);

// Project the current feature into the current frame of reference
Eigen::Vector3d p_FinCi = R_ItoC * p_FinIi + p_IinC;
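The residual chain above can be sketched end to end in pure Python. This toy uses identity extrinsics and a one-parameter radial distortion standing in for the camera model (the real code calls the camera object's distortion function); all numbers are made up:

```python
def project_residual(uv_meas, p_FinG, R_GtoIi, p_IiinG, k1=0.01):
    """res = uv_m - distort(uv_norm), with extrinsics assumed identity."""
    # p_FinIi = R_GtoIi * (p_FinG - p_IiinG)
    d = [p_FinG[i] - p_IiinG[i] for i in range(3)]
    p_FinCi = [sum(R_GtoIi[r][c] * d[c] for c in range(3)) for r in range(3)]
    # Normalized projection
    uv_norm = (p_FinCi[0] / p_FinCi[2], p_FinCi[1] / p_FinCi[2])
    # Toy radial distortion: uv * (1 + k1 * r^2)
    r2 = uv_norm[0] ** 2 + uv_norm[1] ** 2
    uv_dist = (uv_norm[0] * (1 + k1 * r2), uv_norm[1] * (1 + k1 * r2))
    return (uv_meas[0] - uv_dist[0], uv_meas[1] - uv_dist[1])

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
res = project_residual((0.251, 0.502), [0.5, 1.0, 2.0], I3, [0.0, 0.0, 0.0])
print(abs(res[0]) < 0.01 and abs(res[1]) < 0.01)  # True
```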

Jacobian matrix H_f ($\partial r / \partial p_{FinG}$)

If first-estimate Jacobians (FEJ) are used, p_FinCi is recomputed from the FEJ clone estimates rather than the updated ones:

// If we are doing first estimate Jacobians, then overwrite with the first estimates
if (state->_options.do_fej) {
  R_GtoIi = clone_Ii->Rot_fej();
  p_IiinG = clone_Ii->pos_fej();
  // R_ItoC = calibration->Rot_fej();
  // p_IinC = calibration->pos_fej();
  p_FinIi = R_GtoIi * (p_FinG_fej - p_IiinG);
  p_FinCi = R_ItoC * p_FinIi + p_IinC;
  // uv_norm << p_FinCi(0)/p_FinCi(2),p_FinCi(1)/p_FinCi(2);
  // cam_d = state->get_intrinsics_CAM(pair.first)->fej();
}

The Jacobians here are the same as in the dynamic initialization notebook's Jacobian computation section (fisheye model and perspective projection function). The code first computes the Jacobians with respect to the normalized image coordinates and, optionally, the camera intrinsics. It then computes the Jacobian of the normalized coordinates with respect to the projection function (dzn_dpfc), followed by the derivative of p_FinCi with respect to the global feature position (dpfc_dpfg) and with respect to the camera clone state (dpfc_dclone). Finally dz_dpfg is assembled via the chain rule.

Here:

  • dz_dzn — Jacobian of uv_dist with respect to uv_norm

  • dz_dzeta — Jacobian of uv_dist with respect to the intrinsics and distortion parameters

  • dzn_dpfc — Jacobian of uv_norm with respect to p_FinCi

  • dpfc_dpfg — Jacobian of p_FinCi with respect to p_FinG

  • dpfc_dclone — Jacobian of p_FinCi with respect to the clone rotation and position

  • dz_dpfg — Jacobian of z with respect to p_FinG

Eigen::MatrixXd dz_dzn, dz_dzeta;
state->_cam_intrinsics_cameras.at(pair.first)->compute_distort_jacobian(uv_norm, dz_dzn, dz_dzeta);

// Normalized coordinates in respect to projection function
Eigen::MatrixXd dzn_dpfc = Eigen::MatrixXd::Zero(2, 3);
dzn_dpfc << 1 / p_FinCi(2), 0, -p_FinCi(0) / (p_FinCi(2) * p_FinCi(2)), 0, 1 / p_FinCi(2), -p_FinCi(1) / (p_FinCi(2) * p_FinCi(2));

// Derivative of p_FinCi in respect to p_FinIi
Eigen::MatrixXd dpfc_dpfg = R_ItoC * R_GtoIi;

// Derivative of p_FinCi in respect to camera clone state
Eigen::MatrixXd dpfc_dclone = Eigen::MatrixXd::Zero(3, 6);
dpfc_dclone.block(0, 0, 3, 3).noalias() = R_ItoC * skew_x(p_FinIi);
dpfc_dclone.block(0, 3, 3, 3) = -dpfc_dpfg;

// Precompute some matrices
Eigen::MatrixXd dz_dpfc = dz_dzn * dzn_dpfc;
Eigen::MatrixXd dz_dpfg = dz_dpfc * dpfc_dpfg;
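The analytic dzn_dpfc above can be verified numerically: perturb each component of p_FinCi and compare central finite differences of the projection against the closed-form entries. A pure-Python check with made-up point coordinates:

```python
# Numerical check of the 2x3 projection Jacobian dzn_dpfc
def proj(p):
    return (p[0] / p[2], p[1] / p[2])

p = [0.4, -0.3, 2.0]
z = p[2]
dzn_dpfc = [[1 / z, 0.0, -p[0] / (z * z)],
            [0.0, 1 / z, -p[1] / (z * z)]]

eps = 1e-6
for j in range(3):
    pp, pm = list(p), list(p)
    pp[j] += eps
    pm[j] -= eps
    fp, fm = proj(pp), proj(pm)
    for i in range(2):
        num = (fp[i] - fm[i]) / (2 * eps)   # central difference
        assert abs(num - dzn_dpfc[i][j]) < 1e-6
print("dzn_dpfc matches finite differences")
```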

H_f is built for each measurement c. This block is [2×3] (dpfg_dlambda is the Jacobian of p_FinG with respect to the chosen feature parameterization).

// CHAINRULE: get the total feature Jacobian
H_f.block(2 * c, 0, 2, H_f.cols()).noalias() = dz_dpfg * dpfg_dlambda;

H_x is built for each measurement c. The block for measurement c is placed at row 2·c and at the column of the clone_Ii/calibration/distortion variable, which is looked up from the map. The size of this block is [2×(clone/calibration/distortion size)].

// CHAINRULE: get state clone Jacobian
H_x.block(2 * c, map_hx[clone_Ii], 2, clone_Ii->size()).noalias() = dz_dpfc * dpfc_dclone;

// CHAINRULE: loop through all extra states and add their
// NOTE: we add the Jacobian here as we might be in the anchoring pose for this measurement
for (size_t i = 0; i < dpfg_dx_order.size(); i++) {
    H_x.block(2 * c, map_hx[dpfg_dx_order.at(i)], 2, dpfg_dx_order.at(i)->size()).noalias() += dz_dpfg * dpfg_dx.at(i);
}

//=========================================================================
//=========================================================================

// Derivative of p_FinCi in respect to camera calibration (R_ItoC, p_IinC)
if (state->_options.do_calib_camera_pose) {
    // Calculate the Jacobian
    Eigen::MatrixXd dpfc_dcalib = Eigen::MatrixXd::Zero(3, 6);
    dpfc_dcalib.block(0, 0, 3, 3) = skew_x(p_FinCi - p_IinC);
    dpfc_dcalib.block(0, 3, 3, 3) = Eigen::Matrix<double, 3, 3>::Identity();

    // Chainrule it and add it to the big jacobian
    H_x.block(2 * c, map_hx[calibration], 2, calibration->size()).noalias() += dz_dpfc * dpfc_dcalib;
}

// Derivative of measurement in respect to distortion parameters
if (state->_options.do_calib_camera_intrinsics) {
    H_x.block(2 * c, map_hx[distortion], 2, distortion->size()) = dz_dzeta;
}

Different H matrices

H_f — Feature Jacobian. Jacobian with respect to the feature parameters (the landmark position). Its size depends on the landmark representation: 3 columns for full 3D representations (ANCHORED_3D, GLOBAL_3D, etc.), 1 column for the single-depth representation (ANCHORED_INVERSE_DEPTH_SINGLE).

H_x — State Jacobian. Jacobian with respect to the state variables (IMU poses, camera calibration, etc.) that affect the feature measurement: clone poses (historical IMU states), camera extrinsics (if calibrating), and the anchor pose (for relative representations).

H_xf — Combined Jacobian. A concatenated matrix combining both the state and feature Jacobians: H_xf = [H_x | H_f].

Hx_big — Stacked Multi-Feature Jacobian. A large matrix that vertically stacks the state Jacobians of multiple features for batch processing.

Summary Table:

| Matrix | Dimensions | Meaning |
| --- | --- | --- |
| H_f | (2×num_meas) × (1 or 3) | Single feature → feature params |
| H_x | (2×num_meas) × (state_size) | Single feature → state params |
| H_xf | (2×num_meas) × (state_size + feat_size) | Single feature → [state, feature] |
| Hx_big | (all_meas) × (state_size) | All features → state params |

void UpdaterSLAM::delayed_init(std::shared_ptr<State> state, std::vector<std::shared_ptr<Feature>> &feature_vec) {

  // Return if no features
  if (feature_vec.empty())
    return;

  // Start timing
  boost::posix_time::ptime rT0, rT1, rT2, rT3;
  rT0 = boost::posix_time::microsec_clock::local_time();

  // 0. Get all timestamps our clones are at (and thus valid measurement times)
  std::vector<double> clonetimes;
  for (const auto &clone_imu : state->_clones_IMU) {
    clonetimes.emplace_back(clone_imu.first);
  }

  // 1. Clean all feature measurements and make sure they all have valid clone times
  auto it0 = feature_vec.begin();
  while (it0 != feature_vec.end()) {

    // Clean the feature
    (*it0)->clean_old_measurements(clonetimes);

    // Count how many measurements
    int ct_meas = 0;
    for (const auto &pair : (*it0)->timestamps) {
      ct_meas += (*it0)->timestamps[pair.first].size();
    }

    // Remove if we don't have enough
    if (ct_meas < 2) {
      (*it0)->to_delete = true;
      it0 = feature_vec.erase(it0);
    } else {
      it0++;
    }
  }
  rT1 = boost::posix_time::microsec_clock::local_time();

  // 2. Create vector of cloned *CAMERA* poses at each of our clone timesteps
  std::unordered_map<size_t, std::unordered_map<double, FeatureInitializer::ClonePose>> clones_cam;
  for (const auto &clone_calib : state->_calib_IMUtoCAM) {

    // For this camera, create the vector of camera poses
    std::unordered_map<double, FeatureInitializer::ClonePose> clones_cami;
    for (const auto &clone_imu : state->_clones_IMU) {

      // Get current camera pose
      Eigen::Matrix<double, 3, 3> R_GtoCi = clone_calib.second->Rot() * clone_imu.second->Rot();
      Eigen::Matrix<double, 3, 1> p_CioinG = clone_imu.second->pos() - R_GtoCi.transpose() * clone_calib.second->pos();

      // Append to our map
      clones_cami.insert({clone_imu.first, FeatureInitializer::ClonePose(R_GtoCi, p_CioinG)});
    }

    // Append to our map
    clones_cam.insert({clone_calib.first, clones_cami});
  }

  // 3. Try to triangulate all MSCKF or new SLAM features that have measurements
  auto it1 = feature_vec.begin();
  while (it1 != feature_vec.end()) {

    // Triangulate the feature and remove if it fails
    bool success_tri = true;
    if (initializer_feat->config().triangulate_1d) {
      success_tri = initializer_feat->single_triangulation_1d(*it1, clones_cam);
    } else {
      success_tri = initializer_feat->single_triangulation(*it1, clones_cam);
    }

    // Gauss-newton refine the feature
    bool success_refine = true;
    if (initializer_feat->config().refine_features) {
      success_refine = initializer_feat->single_gaussnewton(*it1, clones_cam);
    }

    // Remove the feature if not a success
    if (!success_tri || !success_refine) {
      (*it1)->to_delete = true;
      it1 = feature_vec.erase(it1);
      continue;
    }
    it1++;
  }
  rT2 = boost::posix_time::microsec_clock::local_time();

  // 4. Compute linear system for each feature, nullspace project, and reject
  auto it2 = feature_vec.begin();
  while (it2 != feature_vec.end()) {

    // Convert our feature into our current format
    UpdaterHelper::UpdaterHelperFeature feat;
    feat.featid = (*it2)->featid;
    feat.uvs = (*it2)->uvs;
    feat.uvs_norm = (*it2)->uvs_norm;
    feat.timestamps = (*it2)->timestamps;

    // If we are using single inverse depth, then it is equivalent to using the msckf inverse depth
    auto feat_rep =
        ((int)feat.featid < state->_options.max_aruco_features) ? state->_options.feat_rep_aruco : state->_options.feat_rep_slam;
    feat.feat_representation = feat_rep;
    if (feat_rep == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) {
      feat.feat_representation = LandmarkRepresentation::Representation::ANCHORED_MSCKF_INVERSE_DEPTH;
    }

    // Save the position and its fej value
    if (LandmarkRepresentation::is_relative_representation(feat.feat_representation)) {
      feat.anchor_cam_id = (*it2)->anchor_cam_id;
      feat.anchor_clone_timestamp = (*it2)->anchor_clone_timestamp;
      feat.p_FinA = (*it2)->p_FinA;
      feat.p_FinA_fej = (*it2)->p_FinA;
    } else {
      feat.p_FinG = (*it2)->p_FinG;
      feat.p_FinG_fej = (*it2)->p_FinG;
    }

    // Our return values (feature jacobian, state jacobian, residual, and order of state jacobian)
    Eigen::MatrixXd H_f;
    Eigen::MatrixXd H_x;
    Eigen::VectorXd res;
    std::vector<std::shared_ptr<Type>> Hx_order;

    // Get the Jacobian for this feature
    UpdaterHelper::get_feature_jacobian_full(state, feat, H_f, H_x, res, Hx_order);

    // If we are doing the single feature representation, then we need to remove the bearing portion
    // To do so, we project the bearing portion onto the state and depth Jacobians and the residual.
    // This allows us to directly initialize the feature as a depth-old feature
    if (feat_rep == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) {

      // Append the Jacobian in respect to the depth of the feature
      Eigen::MatrixXd H_xf = H_x;
      H_xf.conservativeResize(H_x.rows(), H_x.cols() + 1);
      H_xf.block(0, H_x.cols(), H_x.rows(), 1) = H_f.block(0, H_f.cols() - 1, H_f.rows(), 1);
      H_f.conservativeResize(H_f.rows(), H_f.cols() - 1);

      // Nullspace project the bearing portion
      // This takes into account that we have marginalized the bearing already
      // Thus this is crucial to ensuring estimator consistency as we are not taking the bearing to be true
      UpdaterHelper::nullspace_project_inplace(H_f, H_xf, res);

      // Split out the state portion and feature portion
      H_x = H_xf.block(0, 0, H_xf.rows(), H_xf.cols() - 1);
      H_f = H_xf.block(0, H_xf.cols() - 1, H_xf.rows(), 1);
    }

    // Create feature pointer (we will always create it of size three since we initialize the single inverse depth as a msckf anchored
    // representation)
    int landmark_size = (feat_rep == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) ? 1 : 3;
    auto landmark = std::make_shared<Landmark>(landmark_size);
    landmark->_featid = feat.featid;
    landmark->_feat_representation = feat_rep;
    landmark->_unique_camera_id = (*it2)->anchor_cam_id;
    if (LandmarkRepresentation::is_relative_representation(feat.feat_representation)) {
      landmark->_anchor_cam_id = feat.anchor_cam_id;
      landmark->_anchor_clone_timestamp = feat.anchor_clone_timestamp;
      landmark->set_from_xyz(feat.p_FinA, false);
      landmark->set_from_xyz(feat.p_FinA_fej, true);
    } else {
      landmark->set_from_xyz(feat.p_FinG, false);
      landmark->set_from_xyz(feat.p_FinG_fej, true);
    }

    // Measurement noise matrix
    double sigma_pix_sq =
        ((int)feat.featid < state->_options.max_aruco_features) ? _options_aruco.sigma_pix_sq : _options_slam.sigma_pix_sq;
    Eigen::MatrixXd R = sigma_pix_sq * Eigen::MatrixXd::Identity(res.rows(), res.rows());

    // Try to initialize, delete new pointer if we failed
    double chi2_multipler =
        ((int)feat.featid < state->_options.max_aruco_features) ? _options_aruco.chi2_multipler : _options_slam.chi2_multipler;
    if (StateHelper::initialize(state, landmark, Hx_order, H_x, H_f, R, res, chi2_multipler)) {
      state->_features_SLAM.insert({(*it2)->featid, landmark});
      (*it2)->to_delete = true;
      it2++;
    } else {
      (*it2)->to_delete = true;
      it2 = feature_vec.erase(it2);
    }
  }
  rT3 = boost::posix_time::microsec_clock::local_time();

  // Debug print timing information
  if (!feature_vec.empty()) {
    PRINT_ALL("[SLAM-DELAY]: %.4f seconds to clean\n", (rT1 - rT0).total_microseconds() * 1e-6);
    PRINT_ALL("[SLAM-DELAY]: %.4f seconds to triangulate\n", (rT2 - rT1).total_microseconds() * 1e-6);
    PRINT_ALL("[SLAM-DELAY]: %.4f seconds initialize (%d features)\n", (rT3 - rT2).total_microseconds() * 1e-6, (int)feature_vec.size());
    PRINT_ALL("[SLAM-DELAY]: %.4f seconds total\n", (rT3 - rT1).total_microseconds() * 1e-6);
  }
}

Update

  • Update existing SLAM features

  • Chi² test

  • H_xf includes feature Jacobian

  • EKF update

Once a feature has been initialized into the state vector via delayed init, its position is part of the state. The flow here therefore differs from the MSCKF update and from delayed init: rather than computing H_f and H_x and nullspace-projecting, H_x and H_f are concatenated as [H_x | H_f]. Since the feature is now part of the state vector, the Jacobian with respect to the feature position p_FinG enters the measurement Jacobian directly and no nullspace projection is needed (except for the single-inverse-depth representation, where the bearing portion is still projected out). The update thus only computes the residual for each timestamp, the corresponding H_x and H_f, appends them into one big H, and performs a standard EKF update.
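The [H_x | H_f] concatenation is just a horizontal stack; a toy pure-Python sketch with made-up shapes (in the real code the landmark is then appended to the Jacobian order via Hxf_order.push_back(landmark)):

```python
# For a SLAM feature already in the state, the per-feature system is
# H_xf = [H_x | H_f] with no nullspace projection needed.
H_x = [[1.0, 0.0], [0.0, 1.0], [2.0, 1.0], [1.0, 3.0]]          # rows x state_size
H_f = [[0.5, 0.0, 0.1], [0.0, 0.5, 0.2],
       [0.1, 0.0, 0.5], [0.0, 0.2, 0.5]]                        # rows x 3

H_xf = [hx + hf for hx, hf in zip(H_x, H_f)]                    # horizontal stack
print(len(H_xf), len(H_xf[0]))  # 4 5
```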

void UpdaterSLAM::update(std::shared_ptr<State> state, std::vector<std::shared_ptr<Feature>> &feature_vec) {

  // Return if no features
  if (feature_vec.empty())
    return;

  // Start timing
  boost::posix_time::ptime rT0, rT1, rT2, rT3;
  rT0 = boost::posix_time::microsec_clock::local_time();

  // 0. Get all timestamps our clones are at (and thus valid measurement times)
  std::vector<double> clonetimes;
  for (const auto &clone_imu : state->_clones_IMU) {
    clonetimes.emplace_back(clone_imu.first);
  }

  // 1. Clean all feature measurements and make sure they all have valid clone times
  auto it0 = feature_vec.begin();
  while (it0 != feature_vec.end()) {

    // Clean the feature
    (*it0)->clean_old_measurements(clonetimes);

    // Count how many measurements
    int ct_meas = 0;
    for (const auto &pair : (*it0)->timestamps) {
      ct_meas += (*it0)->timestamps[pair.first].size();
    }

    // Get the landmark and its representation
    // For single depth representation we need at least two measurement
    // This is because we do nullspace projection
    std::shared_ptr<Landmark> landmark = state->_features_SLAM.at((*it0)->featid);
    int required_meas = (landmark->_feat_representation == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) ? 2 : 1;

    // Remove if we don't have enough
    if (ct_meas < 1) {
      (*it0)->to_delete = true;
      it0 = feature_vec.erase(it0);
    } else if (ct_meas < required_meas) {
      it0 = feature_vec.erase(it0);
    } else {
      it0++;
    }
  }
  rT1 = boost::posix_time::microsec_clock::local_time();

  // Calculate the max possible measurement size
  size_t max_meas_size = 0;
  for (size_t i = 0; i < feature_vec.size(); i++) {
    for (const auto &pair : feature_vec.at(i)->timestamps) {
      max_meas_size += 2 * feature_vec.at(i)->timestamps[pair.first].size();
    }
  }

  // Calculate max possible state size (i.e. the size of our covariance)
  size_t max_hx_size = state->max_covariance_size();

  // Large Jacobian, residual, and measurement noise of *all* features for this update
  Eigen::VectorXd res_big = Eigen::VectorXd::Zero(max_meas_size);
  Eigen::MatrixXd Hx_big = Eigen::MatrixXd::Zero(max_meas_size, max_hx_size);
  Eigen::MatrixXd R_big = Eigen::MatrixXd::Identity(max_meas_size, max_meas_size);
  std::unordered_map<std::shared_ptr<Type>, size_t> Hx_mapping;
  std::vector<std::shared_ptr<Type>> Hx_order_big;
  size_t ct_jacob = 0;
  size_t ct_meas = 0;

  // 4. Compute linear system for each feature, nullspace project, and reject
  auto it2 = feature_vec.begin();
  while (it2 != feature_vec.end()) {

    // Ensure we have the landmark and it is the same
    assert(state->_features_SLAM.find((*it2)->featid) != state->_features_SLAM.end());
    assert(state->_features_SLAM.at((*it2)->featid)->_featid == (*it2)->featid);

    // Get our landmark from the state
    std::shared_ptr<Landmark> landmark = state->_features_SLAM.at((*it2)->featid);

    // Convert the state landmark into our current format
    UpdaterHelper::UpdaterHelperFeature feat;
    feat.featid = (*it2)->featid;
    feat.uvs = (*it2)->uvs;
    feat.uvs_norm = (*it2)->uvs_norm;
    feat.timestamps = (*it2)->timestamps;

    // If we are using single inverse depth, then it is equivalent to using the msckf inverse depth
    feat.feat_representation = landmark->_feat_representation;
    if (landmark->_feat_representation == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) {
      feat.feat_representation = LandmarkRepresentation::Representation::ANCHORED_MSCKF_INVERSE_DEPTH;
    }

    // Save the position and its fej value
    if (LandmarkRepresentation::is_relative_representation(feat.feat_representation)) {
      feat.anchor_cam_id = landmark->_anchor_cam_id;
      feat.anchor_clone_timestamp = landmark->_anchor_clone_timestamp;
      feat.p_FinA = landmark->get_xyz(false);
      feat.p_FinA_fej = landmark->get_xyz(true);
    } else {
      feat.p_FinG = landmark->get_xyz(false);
      feat.p_FinG_fej = landmark->get_xyz(true);
    }

    // Our return values (feature jacobian, state jacobian, residual, and order of state jacobian)
    Eigen::MatrixXd H_f;
    Eigen::MatrixXd H_x;
    Eigen::VectorXd res;
    std::vector<std::shared_ptr<Type>> Hx_order;

    // Get the Jacobian for this feature
    UpdaterHelper::get_feature_jacobian_full(state, feat, H_f, H_x, res, Hx_order);

    // Place Jacobians in one big Jacobian, since the landmark is already in our state vector
    Eigen::MatrixXd H_xf = H_x;
    if (landmark->_feat_representation == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) {

      // Append the Jacobian in respect to the depth of the feature
      H_xf.conservativeResize(H_x.rows(), H_x.cols() + 1);
      H_xf.block(0, H_x.cols(), H_x.rows(), 1) = H_f.block(0, H_f.cols() - 1, H_f.rows(), 1);
      H_f.conservativeResize(H_f.rows(), H_f.cols() - 1);

      // Nullspace project the bearing portion
      // This takes into account that we have marginalized the bearing already
      // Thus this is crucial to ensuring estimator consistency as we are not taking the bearing to be true
      UpdaterHelper::nullspace_project_inplace(H_f, H_xf, res);

    } else {

      // Else we have the full feature in our state, so just append it
      H_xf.conservativeResize(H_x.rows(), H_x.cols() + H_f.cols());
      H_xf.block(0, H_x.cols(), H_x.rows(), H_f.cols()) = H_f;
    }

    // Append to our Jacobian order vector
    std::vector<std::shared_ptr<Type>> Hxf_order = Hx_order;
    Hxf_order.push_back(landmark);

    // Chi2 distance check
    Eigen::MatrixXd P_marg = StateHelper::get_marginal_covariance(state, Hxf_order);
    Eigen::MatrixXd S = H_xf * P_marg * H_xf.transpose();
    double sigma_pix_sq =
        ((int)feat.featid < state->_options.max_aruco_features) ? _options_aruco.sigma_pix_sq : _options_slam.sigma_pix_sq;
    S.diagonal() += sigma_pix_sq * Eigen::VectorXd::Ones(S.rows());
    double chi2 = res.dot(S.llt().solve(res));

    // Get our threshold (we precompute up to 500 but handle the case that it is more)
    double chi2_check;
    if (res.rows() < 500) {
      chi2_check = chi_squared_table[res.rows()];
    } else {
      boost::math::chi_squared chi_squared_dist(res.rows());
      chi2_check = boost::math::quantile(chi_squared_dist, 0.95);
      PRINT_WARNING(YELLOW "chi2_check over the residual limit - %d\n" RESET, (int)res.rows());
    }

    // Check if we should delete or not
    double chi2_multipler =
        ((int)feat.featid < state->_options.max_aruco_features) ? _options_aruco.chi2_multipler : _options_slam.chi2_multipler;
    if (chi2 > chi2_multipler * chi2_check) {
      if ((int)feat.featid < state->_options.max_aruco_features) {
        PRINT_WARNING(YELLOW "[SLAM-UP]: rejecting aruco tag %d for chi2 thresh (%.3f > %.3f)\n" RESET, (int)feat.featid, chi2,
                      chi2_multipler * chi2_check);
      } else {
        landmark->update_fail_count++;
      }
      (*it2)->to_delete = true;
      it2 = feature_vec.erase(it2);
      continue;
    }

    // Debug print when we are going to update the aruco tags
    if ((int)feat.featid < state->_options.max_aruco_features) {
      PRINT_DEBUG("[SLAM-UP]: accepted aruco tag %d for chi2 thresh (%.3f < %.3f)\n", (int)feat.featid, chi2, chi2_multipler * chi2_check);
    }

    // We are good!!! Append to our large H vector
    size_t ct_hx = 0;
    for (const auto &var : Hxf_order) {

      // Ensure that this variable is in our Jacobian
      if (Hx_mapping.find(var) == Hx_mapping.end()) {
        Hx_mapping.insert({var, ct_jacob});
        Hx_order_big.push_back(var);
        ct_jacob += var->size();
      }

      // Append to our large Jacobian
      Hx_big.block(ct_meas, Hx_mapping[var], H_xf.rows(), var->size()) = H_xf.block(0, ct_hx, H_xf.rows(), var->size());
      ct_hx += var->size();
    }

    // Our isotropic measurement noise
    R_big.block(ct_meas, ct_meas, res.rows(), res.rows()) *= sigma_pix_sq;

    // Append our residual and move forward
    res_big.block(ct_meas, 0, res.rows(), 1) = res;
    ct_meas += res.rows();
    it2++;
  }
  rT2 = boost::posix_time::microsec_clock::local_time();

  // We have appended all features to our Hx_big, res_big
  // Delete it so we do not reuse information
  for (size_t f = 0; f < feature_vec.size(); f++) {
    feature_vec[f]->to_delete = true;
  }

  // Return if we don't have anything and resize our matrices
  if (ct_meas < 1) {
    return;
  }
  assert(ct_meas <= max_meas_size);
  assert(ct_jacob <= max_hx_size);
  res_big.conservativeResize(ct_meas, 1);
  Hx_big.conservativeResize(ct_meas, ct_jacob);
  R_big.conservativeResize(ct_meas, ct_meas);

  // 5. With all good SLAM features update the state
  StateHelper::EKFUpdate(state, Hx_order_big, Hx_big, res_big, R_big);
  rT3 = boost::posix_time::microsec_clock::local_time();

  // Debug print timing information
  PRINT_ALL("[SLAM-UP]: %.4f seconds to clean\n", (rT1 - rT0).total_microseconds() * 1e-6);
  PRINT_ALL("[SLAM-UP]: %.4f seconds creating linear system\n", (rT2 - rT1).total_microseconds() * 1e-6);
  PRINT_ALL("[SLAM-UP]: %.4f seconds to update (%d feats of %d size)\n", (rT3 - rT2).total_microseconds() * 1e-6, (int)feature_vec.size(),
            (int)Hx_big.rows());
  PRINT_ALL("[SLAM-UP]: %.4f seconds total\n", (rT3 - rT1).total_microseconds() * 1e-6);
}

Marginalization

Marginalization is straightforward: the variable is removed from the state vector, its rows and columns are removed from the covariance, and the ordering of the remaining variables is updated; as far as I can tell, this is all that is done. For Gaussian distributions, marginalizing out variables simply means removing the corresponding rows and columns from the covariance matrix; the remaining blocks (P(x₁,x₁), P(x₁,x₂), P(x₂,x₂)) are unchanged.
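A minimal illustration of Gaussian marginalization by row/column deletion (toy 3-variable covariance, marginalizing out the middle variable):

```python
# Marginalizing a Gaussian: drop the marginalized variable's rows/columns;
# the remaining blocks are unchanged.
P = [[4.0, 1.0, 0.5],
     [1.0, 3.0, 0.2],
     [0.5, 0.2, 2.0]]
keep = [0, 2]                                   # marginalize out index 1
P_marg = [[P[i][j] for j in keep] for i in keep]
print(P_marg)  # [[4.0, 0.5], [0.5, 2.0]]
```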

Questions:

  • How is the state updated when the residual only involves clones and calibration states?

  • How are SLAM features updated?

  • How is the covariance created?

  • Maybe look into the chi² test.