Update Equations¶
Feature Lifecycle¶
Feature paths:
Lost early → MSCKF (nullspace)
At marg time, short track → MSCKF (nullspace)
At marg time, max track, SLAM full → MSCKF (nullspace)
At marg time, max track, SLAM available → SLAM (added to state)
Already SLAM → SLAM (update existing)
Feature at time t
│
┌────────────┴────────────┐
│ │
Lost tracking? Still tracked?
│ │
▼ ▼
┌──────────────────┐ ┌──────────────────┐
│ feats_lost │ │ At oldest clone? │
│ → MSCKF │ └──────────────────┘
└──────────────────┘ │
┌───────┴───────┐
YES NO
│ │
▼ ▼
┌──────────────────┐ Continue tracking
│ feats_marg │ (not used yet)
└──────────────────┘
│
Track length > max_clone_size?
│
┌──────────┴──────────┐
YES NO
│ │
▼ ▼
┌──────────────────┐ ┌──────────────────┐
│ feats_maxtracks │ │ feats_marg │
└──────────────────┘ │ → MSCKF │
│ └──────────────────┘
SLAM budget available?
│
┌──────────┴──────────┐
YES NO
│ │
▼ ▼
┌──────────────────┐ ┌──────────────────┐
│ → SLAM │ │ → MSCKF │
│ (delayed_init) │ │ (nullspace) │
└──────────────────┘  └──────────────────┘

┌─────────────────────────────────────────────────────────────────────────────┐
│ FEATURE TRACKING PHASE │
│ (Features tracked across frames) │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────┐
│ New Image at time t arrives │
└─────────────────────────────────┘
│
▼
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ FEATURE CATEGORIZATION ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
│
┌─────────────────────┼─────────────────────┐
▼ ▼ ▼
┌───────────────────┐ ┌───────────────────┐ ┌──────────────────────┐
│ Lost Features │ │ Features at Oldest│ │ Already SLAM Features│
│ (feats_lost) │ │Clone (feats_marg) │ │ (continue update) │
│ │ │ │ │ │
│ NOT in newest │ │ Present at time │ │ Pull from trackFEATS │
│ frame anymore │ │ margtimestep() │ │ or trackARUCO │
└───────────────────┘ └───────────────────┘ └──────────────────────┘
│ │ │
│ ▼ │
│ ┌──────────────────────┐ │
│ │ Check Track Length │ │
│ │ > max_clone_size? │ │
│ └──────────────────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────YES────┐ NO │
│ │ │ │ │
│ ▼ │ │ │
│ ┌──────────────────┐│ │ │
│ │ feats_maxtracks ││ │ │
│ │ ││ │ │
│ │ Long-lived ││ │ │
│ │ features eligible││ │ │
│ │ for SLAM ││ │ │
│ └──────────────────┘│ │ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌──────────────────────────────┐ │
│ │ SLAM Budget Available? │ │
│ │ - max_slam_features > 0? │ │
│ │ - dt_slam_delay passed? │ │
│ │ - slots available? │ │
│ └──────────────────────────────┘ │
│ │ │ │
│ ┌─────YES───────┐ NO│ │
│ ▼ │ │ │
│ ┌──────────────┐ │ │ │
│ │ feats_slam │ │ │ │
│ │ (DELAYED) │ │ │ │
│ └──────────────┘ │ │ │
│ │ │ │ │
└────┼────────────────┼─────┼────────────────────┘
│ │ │
│ └─────┴───────────┐
│ │
│ ▼
│ ┌──────────────────────────────┐
│ │ featsup_MSCKF │
│ │ │
│ │ = feats_lost │
│ │ + feats_marg (not maxtrack) │
│ │ + feats_maxtracks (rejected) │
│ └──────────────────────────────┘
│ │
┏━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━┓
┃ ▼ ▼ ┃
┃ ┌─────────────────────────┐ ┌─────────────────────────────┐ ┃
┃ │ UpdaterSLAM Pipeline │ │ UpdaterMSCKF Pipeline │ ┃
┃ └─────────────────────────┘ └─────────────────────────────┘ ┃
┃ │ │ ┃
┃ ▼ ▼ ┃
┃ ┌─────────────────────────┐ ┌─────────────────────────────┐ ┃
┃ │ 1. delayed_init() │ │ 1. Triangulate features │ ┃
┃ │ - Triangulate │ │ │ ┃
┃ │ - Chi² test │ │ 2. Compute Jacobians: │ ┃
┃ │ - Initialize to │ │ • H_f (w.r.t. feature) │ ┃
┃ │ state vector │ │ • H_x (w.r.t. state) │ ┃
┃ │ - Expand covariance │ │ │ ┃
┃ │ │ │ 3. Nullspace Projection │ ┃
┃ │ 2. update() │ │ │ ┃
┃ │ - Update existing │ │ | ┃
┃ │ SLAM features │ │ │ ┃
┃ │ - Chi² test │ │ │ ┃
┃ │ - H_xf includes │ │ 4. Chi² test │ ┃
┃ │ feature Jacobian │ │ │ ┃
┃ │ - EKF update │ │ 5. EKF update (state only) │ ┃
┃ └─────────────────────────┘ │ │ ┃
┃ │ └─────────────────────────────┘ ┃
┃ │ │ ┃
┗━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━┛
│ │
▼ ▼
┌─────────────────────────┐ ┌─────────────────────────────┐
│ Feature PERSISTS in │ │ Feature DELETED │
│ state vector │ │ (to_delete = true) │
│ │ │ │
│ • In _features_SLAM │ │ NOT in state vector │
│ • Part of covariance │ │ │
│ • Updated each frame │ │ Information extracted, │
│ • Marginalized if lost │ │ then discarded │
│ or fails χ² test │ │ │
└─────────────────────────┘ └─────────────────────────────┘
│
│ (Continue updating until...)
│
▼
┌─────────────────────────────────┐
│ SLAM Feature Marginalization │
│ │
│ Triggered when: │
│ • Lost tracking │
│ • update_fail_count > 1 │
│ • Anchor clone marginalized │
│ (perform anchor change) │
└─────────────────────────────────┘

The state vector of our visual-inertial system consists of the current inertial navigation state, a set of c historical IMU pose clones, a set of m environmental landmarks, and a set of w cameras’ extrinsic and intrinsic parameters. MSCKF features/landmarks never become part of the state; only features that are successfully initialized as SLAM landmarks are added to it. (Not every MSCKF or max-track feature becomes a SLAM landmark, since parameters such as max_slam_features bound the number of SLAM features.)
Nullspace projection is needed only when the position and covariance of the feature are unknown: when short MSCKF tracks are marginalized out (MSCKF update), and when long SLAM tracks are added to the state (delayed initialization). During a SLAM update, the features are already in the state vector, so no nullspace projection is performed (apart from the single-inverse-depth representation, which differs only in how the feature is stored in the state).
MSCKF Update¶
MSCKF features are never part of the state, and the MSCKF update does not initialize SLAM features (although SLAM initialization follows a similar procedure). The steps are:
Triangulate features from multiple views
Compute H_f (Jacobian w.r.t. feature) and H_x (Jacobian w.r.t. state)
Nullspace projection: Projects out the feature dependency using QR decomposition (see docs)
Update state with only H_x (feature is eliminated from the equations)
Delete features immediately after use
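The nullspace projection in step 3 can be sketched with plain Givens rotations. This is a minimal illustration in plain C++ without Eigen (OpenVINS itself does this in UpdaterHelper::nullspace_project_inplace): we stack [H_f | H_x | r] into one dense matrix and zero the H_f block; the rows below the first three then depend only on the state.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// Apply Givens rotations so that the first nf columns (the H_f block) of the
// stacked system [H_f | H_x | r] are zeroed below row nf.  The rows from nf
// downward then form the nullspace-projected system (H_x2, r2) used by the
// MSCKF update.  Minimal sketch only: dense storage, no pivoting.
void nullspace_project(Mat &A, int nf) {
  const int rows = (int)A.size();
  const int cols = (int)A[0].size();
  for (int k = 0; k < nf; ++k) {
    // Rotate pairs of rows bottom-up so the nonzero entry of column k
    // accumulates in row k and everything below it becomes zero.
    for (int i = rows - 1; i > k; --i) {
      double a = A[i - 1][k], b = A[i][k];
      double rad = std::hypot(a, b);
      if (rad < 1e-12) continue;  // both entries already (numerically) zero
      double c = a / rad, s = b / rad;
      for (int j = 0; j < cols; ++j) {
        double t1 = A[i - 1][j], t2 = A[i][j];
        A[i - 1][j] = c * t1 + s * t2;  // new "top" row
        A[i][j] = -s * t1 + c * t2;     // new "bottom" row: column k entry -> 0
      }
    }
  }
}
```

With 2n measurement rows and nf = 3 feature columns, rows 3 through 2n−1 end up with zeros in the H_f block, which is exactly the 2n−3 projected rows discussed below.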
The classic error-state Kalman filter uses the following linearized measurement model for updates:

$$\mathbf{r} = \mathbf{H}\,\tilde{\mathbf{x}} + \mathbf{n}$$

where H is the measurement Jacobian matrix and the noise n is zero-mean, white, and uncorrelated with the state error.
Here the measurement model applies to a single feature, $f_j$, that has been observed from a set of camera poses $i \in \mathcal{S}_j$. Each observation of the feature is described by the model:

$$\mathbf{z}_i^{(j)} = \frac{1}{{}^{C_i}Z_j}\begin{bmatrix}{}^{C_i}X_j \\ {}^{C_i}Y_j\end{bmatrix} + \mathbf{n}_i^{(j)}$$

where $\mathbf{n}_i^{(j)}$ is the image noise vector, with covariance matrix $\sigma_{im}^2\mathbf{I}_2$. The feature position expressed in the camera frame, ${}^{C_i}\mathbf{p}_{f_j} = \begin{bmatrix}{}^{C_i}X_j & {}^{C_i}Y_j & {}^{C_i}Z_j\end{bmatrix}^\top$, is given by:

$$ {}^{C_i}\mathbf{p}_{f_j} = {}^{C_i}_{G}\mathbf{R}\,\bigl({}^{G}\mathbf{p}_{f_j} - {}^{G}\mathbf{p}_{C_i}\bigr) $$

where ${}^{G}\mathbf{p}_{f_j}$ is the 3D feature position in the global frame. Since this is unknown, in the first step of the algorithm triangulation is used to obtain an estimate, ${}^{G}\hat{\mathbf{p}}_{f_j}$, of the feature position. This is achieved using the pixel measurements $\mathbf{z}_i^{(j)}$, $i \in \mathcal{S}_j$, and the filter estimates of the camera poses at the corresponding time instants (assumed to be known exactly). Once the feature estimate is known, it is converted to pixel coordinates and the residual is computed for the measurement update:

$$\mathbf{r}_i^{(j)} = \mathbf{z}_i^{(j)} - \hat{\mathbf{z}}_i^{(j)}$$

where $\hat{\mathbf{z}}_i^{(j)}$ is the measurement predicted from the pose and feature estimates. Linearizing about the estimates for the camera pose and for the feature position, the residual can be approximated as:

$$\mathbf{r}_i^{(j)} \simeq \mathbf{H}_{X_i}^{(j)}\tilde{\mathbf{x}} + \mathbf{H}_{f_i}^{(j)}\,{}^{G}\tilde{\mathbf{p}}_{f_j} + \mathbf{n}_i^{(j)}$$

In the preceding expression $\mathbf{H}_{X_i}^{(j)}$ and $\mathbf{H}_{f_i}^{(j)}$ are the Jacobians of the measurement with respect to the state and the feature position, respectively, and ${}^{G}\tilde{\mathbf{p}}_{f_j}$ is the error in the position estimate of $f_j$. By stacking the residuals of all measurements of this feature, we obtain:

$$\mathbf{r}^{(j)} \simeq \mathbf{H}_{X}^{(j)}\tilde{\mathbf{x}} + \mathbf{H}_{f}^{(j)}\,{}^{G}\tilde{\mathbf{p}}_{f_j} + \mathbf{n}^{(j)}$$

where $\mathbf{r}^{(j)}$, $\mathbf{H}_{X}^{(j)}$, $\mathbf{H}_{f}^{(j)}$, and $\mathbf{n}^{(j)}$ are block vectors or matrices with elements $\mathbf{r}_i^{(j)}$, $\mathbf{H}_{X_i}^{(j)}$, $\mathbf{H}_{f_i}^{(j)}$, and $\mathbf{n}_i^{(j)}$, for $i \in \mathcal{S}_j$. Since the feature observations in different images are independent, the covariance matrix of $\mathbf{n}^{(j)}$ is $\sigma_{im}^2\mathbf{I}$.
Since the state estimate is used to compute ${}^{G}\hat{\mathbf{p}}_{f_j}$, the error ${}^{G}\tilde{\mathbf{p}}_{f_j}$ is correlated with the errors in $\tilde{\mathbf{x}}$. Thus the residual is not in the standard form $\mathbf{r} = \mathbf{H}\tilde{\mathbf{x}} + \mathbf{n}$ required by the Kalman filter update equation. This is the reason for nullspace projection: we define a new residual $\mathbf{r}_o^{(j)}$ by projecting $\mathbf{r}^{(j)}$ onto the left nullspace of the matrix $\mathbf{H}_{f}^{(j)}$. If $\mathbf{A}$ is a unitary matrix whose columns span that left nullspace, then:

$$\mathbf{r}_o^{(j)} = \mathbf{A}^\top\mathbf{r}^{(j)} \simeq \mathbf{A}^\top\mathbf{H}_{X}^{(j)}\tilde{\mathbf{x}} + \mathbf{A}^\top\mathbf{n}^{(j)} = \mathbf{H}_{o}^{(j)}\tilde{\mathbf{x}} + \mathbf{n}_o^{(j)}$$
To this end, for feature j (index omitted below for clarity), we remove the residual's "sensitivity" to the feature error by computing and applying the left nullspace of the feature Jacobian $\mathbf{H}_f$. However, since many measurements would add many rows to H, the measurements are afterwards compressed.
The Jacobians of the original residual have the following sizes:

Feature Jacobian H_f: 2n × 3, i.e. 2 × number_of_measurements rows (one u,v pair per clone in which the feature was seen) by 3 columns (the x, y, z of p_FinG).

State Jacobian H_x: 2n × (15 + 6c), i.e. 2 × number_of_measurements rows by the involved state size (15 calibration parameters plus 6 per clone: the clones where the measurements were taken, extrinsic calibration, distortion, intrinsics). Since the feature's u,v values do not depend on the current IMU state or the IMU intrinsics (those appear only in the propagation equations), they are not included in this state size.

Once the Jacobians are found, nullspace projection removes H_f from the equations, since the feature error is correlated with the noise and the state. Because H_f has rank 3, the projected system has 3 fewer rows: if the original H_x was 2n × (15 + 6c), then after nullspace projection the residual is (2n − 3) × 1 and H_x is (2n − 3) × (15 + 6c). Measurement compression then reduces this to (15 + 6c) × (15 + 6c), which keeps the subsequent matrix multiplications cheap.
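The bookkeeping above can be captured in a few lines. This is a sketch with names of our own choosing; the "15" follows this document's accounting of the calibration block and is configuration-dependent.

```cpp
#include <algorithm>
#include <cassert>

// Sizes of the per-feature linear system, following the accounting in the text.
// n_meas   = number of u,v measurements of the feature
// n_clones = number of clones involved
// calib    = calibration/intrinsics block size (15 in the text, config-dependent)
struct LinearSystemDims {
  int rows;     // residual rows
  int hf_cols;  // feature Jacobian columns (3D position)
  int hx_cols;  // state Jacobian columns
};

LinearSystemDims raw_dims(int n_meas, int n_clones, int calib) {
  return {2 * n_meas, 3, calib + 6 * n_clones};
}

// Nullspace projection removes rank(H_f) = 3 rows.
int rows_after_nullspace(int n_meas) { return 2 * n_meas - 3; }

// Measurement compression keeps at most hx_cols rows (the rank of H_x).
int rows_after_compression(int proj_rows, int hx_cols) {
  return std::min(proj_rows, hx_cols);
}
```

For a single feature seen in 11 clones (n = 11 monocular measurements, c = 11), the raw system is 22 × (15 + 66), the projected system has 19 rows, and compression leaves it at 19 rows (already below the state size); compression pays off once many features are stacked.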
Think of this as operating on a feature whose predicted u,v follows the full measurement chain:

$$\mathbf{z} = h_d\bigl(h_p({}^{C}\mathbf{p}_{f}),\,\boldsymbol{\zeta}\bigr), \qquad {}^{C}\mathbf{p}_{f} = {}^{C}_{I}\mathbf{R}\,{}^{I}_{G}\mathbf{R}\,\bigl({}^{G}\mathbf{p}_{f} - {}^{G}\mathbf{p}_{I}\bigr) + {}^{C}\mathbf{p}_{I}$$

where $h_p$ is the perspective projection, $h_d$ the distortion function, and $\boldsymbol{\zeta}$ the intrinsic/distortion parameters. Hence the residuals depend on the clones, intrinsics, and extrinsics. H_f has rank at most 3, since it has at most 3 columns (the 3D position of the feature), which means that after nullspace projection the retained block H_f1 has only three rows; these can be used to initialize the feature into the state vector. Nullspace projection is then used in the following ways:
In the MSCKF update, the feature does not need to be initialized into the state vector, so we extract only the (H_x2, r2, n2) portion after nullspace projection to update the state. Multiplying the linearized measurement equation on the left by the nullspace of the feature Jacobian yields:

$$\mathbf{r}_2 = \mathbf{H}_{x2}\,\tilde{\mathbf{x}} + \mathbf{n}_2$$
In the SLAM update path, the feature does need to be initialized into the state vector, so we use the (H_x2, r2, n2) portion to update the state and the (H_f1, H_x1, r1, n1) portion to initialize the feature and its covariance into the state vector (the mean is left unchanged, since the triangulated estimate is already a Gauss-Newton solution):

$$\tilde{\mathbf{f}} = \mathbf{H}_{f1}^{-1}\bigl(\mathbf{r}_1 - \mathbf{H}_{x1}\tilde{\mathbf{x}} - \mathbf{n}_1\bigr)$$

where we assumed the noise and state error are zero mean. We can update with this correction by $\hat{\mathbf{f}} \leftarrow \hat{\mathbf{f}} + \mathbf{H}_{f1}^{-1}\mathbf{r}_1$. Note that this is equivalent to a Gauss-Newton step for solving the corresponding maximum likelihood estimation (MLE) formed by fixing the estimate of x and optimizing over the value of f, and should therefore be zero if such an optimization was used to come up with the initial estimate of the new variable.

We can now compute the covariance of the new feature as follows:

$$\mathbf{P}_{ff} = \mathbf{H}_{f1}^{-1}\bigl(\mathbf{H}_{x1}\mathbf{P}_{xx}\mathbf{H}_{x1}^\top + \mathbf{R}_1\bigr)\mathbf{H}_{f1}^{-\top}$$

and the cross-correlation can be computed as:

$$\mathbf{P}_{xf} = -\mathbf{P}_{xx}\mathbf{H}_{x1}^\top\mathbf{H}_{f1}^{-\top}$$

These entries are then placed in the correct location in the covariance. For example, when initializing a new feature at the end of the state, the augmented covariance would be:

$$\mathbf{P}_{aug} = \begin{bmatrix}\mathbf{P}_{xx} & \mathbf{P}_{xf}\\ \mathbf{P}_{xf}^\top & \mathbf{P}_{ff}\end{bmatrix}$$
After nullspace projection the H_x matrix is of size:

$$(2n - 3) \times (15 + 6c)$$

where n is the number of u,v measurements (the sum over all u,v measurements; with multiple stacked features, 3 rows are removed per feature), c is the number of clones involved, and 15 accounts for the intrinsics and extrinsics.
Measurement Compression¶
One of the most costly operations in the EKF update is the matrix multiplication involving the tall stacked measurement Jacobian. To mitigate this, we perform a thin QR decomposition of the measurement Jacobian after nullspace projection. Measurement compression converts $\mathbf{H}_{o,k}$ from (2n − 3) × (15 + 6c) to at most (15 + 6c) × (15 + 6c), since that is usually the smaller dimension (the rank):

$$\mathbf{H}_{o,k} = \begin{bmatrix}\mathbf{Q}_1 & \mathbf{Q}_2\end{bmatrix}\begin{bmatrix}\mathbf{R}_1\\ \mathbf{0}\end{bmatrix}$$

This QR decomposition can again be performed using Givens rotations (note that this operation in general is not cheap). We apply the same rotations to the linearized measurement residuals to compress the measurements:

$$\mathbf{r}_n = \mathbf{Q}_1^\top\mathbf{r}_o, \qquad \mathbf{H}_n = \mathbf{R}_1$$

As a result, the compressed measurement Jacobian is at most the size of the state, which significantly reduces the EKF update cost:

$$\mathbf{r}_n = \mathbf{H}_n\tilde{\mathbf{x}} + \mathbf{n}_n$$
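Measurement compression is the same Givens machinery applied to the stacked system itself. Below is a plain-C++ sketch (OpenVINS uses UpdaterHelper::measurement_compress_inplace with Eigen): triangularize [H | r]; rows below the H block's column count then have a zero Jacobian and can be dropped, since their residual carries only noise.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// Givens-based measurement compression on the stacked [H | r] system:
// zero everything below the main diagonal of the H block (the first hcols
// columns).  Afterwards only the top std::min(rows, hcols) rows carry state
// information; the discarded rows have a zero Jacobian.  Sketch only.
int measurement_compress(Mat &A, int hcols) {
  const int rows = (int)A.size();
  const int cols = (int)A[0].size();
  for (int k = 0; k < hcols && k < rows; ++k) {
    for (int i = rows - 1; i > k; --i) {
      double a = A[i - 1][k], b = A[i][k];
      double rad = std::hypot(a, b);
      if (rad < 1e-12) continue;
      double c = a / rad, s = b / rad;
      for (int j = 0; j < cols; ++j) {
        double t1 = A[i - 1][j], t2 = A[i][j];
        A[i - 1][j] = c * t1 + s * t2;  // accumulate into the upper row
        A[i][j] = -s * t1 + c * t2;     // zero the entry in column k
      }
    }
  }
  return std::min(rows, hcols);  // number of surviving (compressed) rows
}
```

The returned row count matches the document's claim: a (2n − 3)-row system is compressed to at most the state dimension.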
void UpdaterMSCKF::update(std::shared_ptr<State> state, std::vector<std::shared_ptr<Feature>> &feature_vec) {
// Return if no features
if (feature_vec.empty())
return;
// Start timing
boost::posix_time::ptime rT0, rT1, rT2, rT3, rT4, rT5;
rT0 = boost::posix_time::microsec_clock::local_time();
// 0. Get all timestamps our clones are at (and thus valid measurement times)
std::vector<double> clonetimes;
for (const auto &clone_imu : state->_clones_IMU) {
clonetimes.emplace_back(clone_imu.first);
}
// 1. Clean all feature measurements and make sure they all have valid clone times
auto it0 = feature_vec.begin();
while (it0 != feature_vec.end()) {
// Clean the feature
(*it0)->clean_old_measurements(clonetimes);
// Count how many measurements
int ct_meas = 0;
for (const auto &pair : (*it0)->timestamps) {
ct_meas += (*it0)->timestamps[pair.first].size();
}
// Remove if we don't have enough
if (ct_meas < 2) {
(*it0)->to_delete = true;
it0 = feature_vec.erase(it0);
} else {
it0++;
}
}
rT1 = boost::posix_time::microsec_clock::local_time();
// 2. Create vector of cloned *CAMERA* poses at each of our clone timesteps
std::unordered_map<size_t, std::unordered_map<double, FeatureInitializer::ClonePose>> clones_cam;
for (const auto &clone_calib : state->_calib_IMUtoCAM) {
// For this camera, create the vector of camera poses
std::unordered_map<double, FeatureInitializer::ClonePose> clones_cami;
for (const auto &clone_imu : state->_clones_IMU) {
// Get current camera pose
Eigen::Matrix<double, 3, 3> R_GtoCi = clone_calib.second->Rot() * clone_imu.second->Rot();
Eigen::Matrix<double, 3, 1> p_CioinG = clone_imu.second->pos() - R_GtoCi.transpose() * clone_calib.second->pos();
// Append to our map
clones_cami.insert({clone_imu.first, FeatureInitializer::ClonePose(R_GtoCi, p_CioinG)});
}
// Append to our map
clones_cam.insert({clone_calib.first, clones_cami});
}
// 3. Try to triangulate all MSCKF or new SLAM features that have measurements
auto it1 = feature_vec.begin();
while (it1 != feature_vec.end()) {
// Triangulate the feature and remove if it fails
bool success_tri = true;
if (initializer_feat->config().triangulate_1d) {
success_tri = initializer_feat->single_triangulation_1d(*it1, clones_cam);
} else {
success_tri = initializer_feat->single_triangulation(*it1, clones_cam);
}
// Gauss-newton refine the feature
bool success_refine = true;
if (initializer_feat->config().refine_features) {
success_refine = initializer_feat->single_gaussnewton(*it1, clones_cam);
}
// Remove the feature if not a success
if (!success_tri || !success_refine) {
(*it1)->to_delete = true;
it1 = feature_vec.erase(it1);
continue;
}
it1++;
}
rT2 = boost::posix_time::microsec_clock::local_time();
// Calculate the max possible measurement size
size_t max_meas_size = 0;
for (size_t i = 0; i < feature_vec.size(); i++) {
for (const auto &pair : feature_vec.at(i)->timestamps) {
max_meas_size += 2 * feature_vec.at(i)->timestamps[pair.first].size();
}
}
// Calculate max possible state size (i.e. the size of our covariance)
// NOTE: that when we have the single inverse depth representations, those are only 1dof in size
size_t max_hx_size = state->max_covariance_size();
for (auto &landmark : state->_features_SLAM) {
max_hx_size -= landmark.second->size();
}
// Large Jacobian and residual of *all* features for this update
Eigen::VectorXd res_big = Eigen::VectorXd::Zero(max_meas_size);
Eigen::MatrixXd Hx_big = Eigen::MatrixXd::Zero(max_meas_size, max_hx_size);
std::unordered_map<std::shared_ptr<Type>, size_t> Hx_mapping;
std::vector<std::shared_ptr<Type>> Hx_order_big;
size_t ct_jacob = 0;
size_t ct_meas = 0;
// 4. Compute linear system for each feature, nullspace project, and reject
auto it2 = feature_vec.begin();
while (it2 != feature_vec.end()) {
// Convert our feature into our current format
UpdaterHelper::UpdaterHelperFeature feat;
feat.featid = (*it2)->featid;
feat.uvs = (*it2)->uvs;
feat.uvs_norm = (*it2)->uvs_norm;
feat.timestamps = (*it2)->timestamps;
// If we are using single inverse depth, then it is equivalent to using the msckf inverse depth
feat.feat_representation = state->_options.feat_rep_msckf;
if (state->_options.feat_rep_msckf == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) {
feat.feat_representation = LandmarkRepresentation::Representation::ANCHORED_MSCKF_INVERSE_DEPTH;
}
// Save the position and its fej value
if (LandmarkRepresentation::is_relative_representation(feat.feat_representation)) {
feat.anchor_cam_id = (*it2)->anchor_cam_id;
feat.anchor_clone_timestamp = (*it2)->anchor_clone_timestamp;
feat.p_FinA = (*it2)->p_FinA;
feat.p_FinA_fej = (*it2)->p_FinA;
} else {
feat.p_FinG = (*it2)->p_FinG;
feat.p_FinG_fej = (*it2)->p_FinG;
}
// Our return values (feature jacobian, state jacobian, residual, and order of state jacobian)
Eigen::MatrixXd H_f;
Eigen::MatrixXd H_x;
Eigen::VectorXd res;
std::vector<std::shared_ptr<Type>> Hx_order;
// Get the Jacobian for this feature
UpdaterHelper::get_feature_jacobian_full(state, feat, H_f, H_x, res, Hx_order);
// Nullspace project
UpdaterHelper::nullspace_project_inplace(H_f, H_x, res);
/// Chi2 distance check
Eigen::MatrixXd P_marg = StateHelper::get_marginal_covariance(state, Hx_order);
Eigen::MatrixXd S = H_x * P_marg * H_x.transpose();
S.diagonal() += _options.sigma_pix_sq * Eigen::VectorXd::Ones(S.rows());
double chi2 = res.dot(S.llt().solve(res));
// Get our threshold (we precompute up to 500 but handle the case that it is more)
double chi2_check;
if (res.rows() < 500) {
chi2_check = chi_squared_table[res.rows()];
} else {
boost::math::chi_squared chi_squared_dist(res.rows());
chi2_check = boost::math::quantile(chi_squared_dist, 0.95);
PRINT_WARNING(YELLOW "chi2_check over the residual limit - %d\n" RESET, (int)res.rows());
}
// Check if we should delete or not
if (chi2 > _options.chi2_multipler * chi2_check) {
(*it2)->to_delete = true;
it2 = feature_vec.erase(it2);
// PRINT_DEBUG("featid = %d\n", feat.featid);
// PRINT_DEBUG("chi2 = %f > %f\n", chi2, _options.chi2_multipler*chi2_check);
// std::stringstream ss;
// ss << "res = " << std::endl << res.transpose() << std::endl;
// PRINT_DEBUG(ss.str().c_str());
continue;
}
// We are good!!! Append to our large H vector
size_t ct_hx = 0;
for (const auto &var : Hx_order) {
// Ensure that this variable is in our Jacobian
if (Hx_mapping.find(var) == Hx_mapping.end()) {
Hx_mapping.insert({var, ct_jacob});
Hx_order_big.push_back(var);
ct_jacob += var->size();
}
// Append to our large Jacobian
Hx_big.block(ct_meas, Hx_mapping[var], H_x.rows(), var->size()) = H_x.block(0, ct_hx, H_x.rows(), var->size());
ct_hx += var->size();
}
// Append our residual and move forward
res_big.block(ct_meas, 0, res.rows(), 1) = res;
ct_meas += res.rows();
it2++;
}
rT3 = boost::posix_time::microsec_clock::local_time();
// We have appended all features to our Hx_big, res_big
// Delete it so we do not reuse information
for (size_t f = 0; f < feature_vec.size(); f++) {
feature_vec[f]->to_delete = true;
}
// Return if we don't have anything and resize our matrices
if (ct_meas < 1) {
return;
}
assert(ct_meas <= max_meas_size);
assert(ct_jacob <= max_hx_size);
res_big.conservativeResize(ct_meas, 1);
Hx_big.conservativeResize(ct_meas, ct_jacob);
// 5. Perform measurement compression
UpdaterHelper::measurement_compress_inplace(Hx_big, res_big);
if (Hx_big.rows() < 1) {
return;
}
rT4 = boost::posix_time::microsec_clock::local_time();
// Our noise is isotropic, so make it here after our compression
Eigen::MatrixXd R_big = _options.sigma_pix_sq * Eigen::MatrixXd::Identity(res_big.rows(), res_big.rows());
// 6. With all good features update the state
StateHelper::EKFUpdate(state, Hx_order_big, Hx_big, res_big, R_big);
rT5 = boost::posix_time::microsec_clock::local_time();
// Debug print timing information
PRINT_ALL("[MSCKF-UP]: %.4f seconds to clean\n", (rT1 - rT0).total_microseconds() * 1e-6);
PRINT_ALL("[MSCKF-UP]: %.4f seconds to triangulate\n", (rT2 - rT1).total_microseconds() * 1e-6);
PRINT_ALL("[MSCKF-UP]: %.4f seconds create system (%d features)\n", (rT3 - rT2).total_microseconds() * 1e-6, (int)feature_vec.size());
PRINT_ALL("[MSCKF-UP]: %.4f seconds compress system\n", (rT4 - rT3).total_microseconds() * 1e-6);
PRINT_ALL("[MSCKF-UP]: %.4f seconds update state (%d size)\n", (rT5 - rT4).total_microseconds() * 1e-6, (int)res_big.rows());
PRINT_ALL("[MSCKF-UP]: %.4f seconds total\n", (rT5 - rT1).total_microseconds() * 1e-6);
}

SLAM Updates¶
Delayed init¶
Triangulate
Chi² test
Initialize to state vector
Expand covariance
Delayed initialization happens when an MSCKF feature needs to be added to the state vector, normally once the feature has survived max_clones camera poses (see the feature lifecycle above). The procedure for initializing the feature is similar to the MSCKF update: after triangulation gives the mean position of the feature in the world frame, the nullspace projection is performed. The non-projected part provides linear equations for the error/residual in the feature position, from which the feature covariance is derived, while the nullspace-projected part is used to update the EKF. The final feature estimate is then added to the state vector and the covariance augmented accordingly.
Specifically, given a set of measurements involving the state x and a new feature f, we want to optimally and efficiently initialize the feature.
In general, we collect more than the minimum number of measurements needed for initialization, at different times (hence "delayed"). For example, although in principle two monocular images suffice to initialize a 3D point feature, we often collect more than two images in order to obtain a better initialization. In the OpenVINS code, initialization happens only when a feature has been seen in max_clones camera poses and is ready to be converted to a SLAM point and added to the state vector. To process all collected measurements, we stack them and linearize around some linearization points (estimates), denoted $\hat{\mathbf{x}}$ and $\hat{\mathbf{f}}$:

$$\mathbf{r} = \mathbf{z} - h(\hat{\mathbf{x}}, \hat{\mathbf{f}}) \simeq \mathbf{H}_x\tilde{\mathbf{x}} + \mathbf{H}_f\tilde{\mathbf{f}} + \mathbf{n}$$
To efficiently compute the resulting augmented covariance matrix, we perform Givens rotations to zero out the rows of $\mathbf{H}_f$ with indices larger than the dimension of $\tilde{\mathbf{f}}$, and apply the same Givens rotations to $\mathbf{H}_x$ and $\mathbf{r}$. As a result of this operation, we have the following linear system:

$$\begin{bmatrix}\mathbf{r}_1\\ \mathbf{r}_2\end{bmatrix} = \begin{bmatrix}\mathbf{H}_{x1}\\ \mathbf{H}_{x2}\end{bmatrix}\tilde{\mathbf{x}} + \begin{bmatrix}\mathbf{H}_{f1}\\ \mathbf{0}\end{bmatrix}\tilde{\mathbf{f}} + \begin{bmatrix}\mathbf{n}_1\\ \mathbf{n}_2\end{bmatrix}$$

Note that the bottom subsystem essentially corresponds to the nullspace projection used in the MSCKF update, and that $\mathbf{H}_{f1}$ is generally invertible. Note also that we assume the measurement noise is isotropic; otherwise, we should first perform whitening to make it isotropic, which saves significant computation. So, if the original measurement noise covariance is $\sigma^2\mathbf{I}_m$ and the dimension of $\tilde{\mathbf{f}}$ is 3, then the inferred measurement noise covariances will be $\mathbf{R}_1 = \sigma^2\mathbf{I}_3$ and $\mathbf{R}_2 = \sigma^2\mathbf{I}_{m-3}$.
Now we can directly solve for the error of the new feature based on the first subsystem:

$$\tilde{\mathbf{f}} = \mathbf{H}_{f1}^{-1}\bigl(\mathbf{r}_1 - \mathbf{H}_{x1}\tilde{\mathbf{x}} - \mathbf{n}_1\bigr)$$

where we assumed the noise and state error are zero mean. We can update with this correction by $\hat{\mathbf{f}} \leftarrow \hat{\mathbf{f}} + \mathbf{H}_{f1}^{-1}\mathbf{r}_1$.

We now can compute the covariance of the new feature as follows:

$$\mathbf{P}_{ff} = \mathbf{H}_{f1}^{-1}\bigl(\mathbf{H}_{x1}\mathbf{P}_{xx}\mathbf{H}_{x1}^\top + \mathbf{R}_1\bigr)\mathbf{H}_{f1}^{-\top}$$

and the cross-correlation can be computed as:

$$\mathbf{P}_{xf} = -\mathbf{P}_{xx}\mathbf{H}_{x1}^\top\mathbf{H}_{f1}^{-\top}$$

These entries can then be placed in the correct location in the covariance. For example, when initializing a new feature at the end of the state, the augmented covariance would be:

$$\mathbf{P}_{aug} = \begin{bmatrix}\mathbf{P}_{xx} & \mathbf{P}_{xf}\\ \mathbf{P}_{xf}^\top & \mathbf{P}_{ff}\end{bmatrix}$$
After initialization, $(\mathbf{H}_{x2}, \mathbf{r}_2, \mathbf{n}_2)$ can be used to update our new state through a standard EKF update.
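In the scalar case (1-D state, 1-D feature) the augmentation formulas reduce to two lines, which makes their structure easy to check. This is an illustrative sketch only; the function names are ours, not OpenVINS's.

```cpp
#include <cassert>
#include <cmath>

// Scalar (1-D state, 1-D feature) version of the delayed-init covariance
// augmentation:
//   P_ff = H_f1^{-1} (H_x1 P_xx H_x1^T + R_1) H_f1^{-T}
//   P_xf = -P_xx H_x1^T H_f1^{-T}
double feature_var(double hf1, double hx1, double pxx, double r1) {
  return (hx1 * pxx * hx1 + r1) / (hf1 * hf1);
}

double cross_cov(double hf1, double hx1, double pxx) {
  return -pxx * hx1 / hf1;
}
```

Note how P_ff grows with the state uncertainty P_xx (the new feature inherits pose uncertainty) and shrinks as |H_f1| grows (better-conditioned observation geometry).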
Implementation details¶
For each feature individually:

Create the feature data structure with p_FinG from triangulation.

For the nullspace projection, compute H_f, H_x (the Jacobians) and res (the residual r above). res is [2×total_meas, 1], H_f is [2×total_meas, 3], and H_x is [2×total_meas, total_hx], where total_hx is the total size of the calibration, distortion (these are only added if the config asks for them), and involved clone states.
Residual computation (res/r)¶
res for a measurement is defined as:

$$\mathbf{r} = \mathbf{z} - \hat{\mathbf{z}}$$

where $\hat{\mathbf{z}}$ is formed for every measurement m of each feature:
std::shared_ptr<PoseJPL> clone_Ii = state->_clones_IMU.at(feature.timestamps[pair.first].at(m));
Eigen::Matrix3d R_GtoIi = clone_Ii->Rot();
Eigen::Vector3d p_IiinG = clone_Ii->pos();
// Get current feature in the IMU
Eigen::Vector3d p_FinIi = R_GtoIi * (p_FinG - p_IiinG);
// Project the current feature into the current frame of reference
Eigen::Vector3d p_FinCi = R_ItoC * p_FinIi + p_IinC;

Jacobian matrix H_f¶
If first-estimate Jacobians (FEJ) are used, p_FinCi is recomputed from the FEJ clone estimates rather than the updated clones:
// If we are doing first estimate Jacobians, then overwrite with the first estimates
if (state->_options.do_fej) {
R_GtoIi = clone_Ii->Rot_fej();
p_IiinG = clone_Ii->pos_fej();
// R_ItoC = calibration->Rot_fej();
// p_IinC = calibration->pos_fej();
p_FinIi = R_GtoIi * (p_FinG_fej - p_IiinG);
p_FinCi = R_ItoC * p_FinIi + p_IinC;
// uv_norm << p_FinCi(0)/p_FinCi(2),p_FinCi(1)/p_FinCi(2);
// cam_d = state->get_intrinsics_CAM(pair.first)->fej();
}

The Jacobians here are the same as in the dynamic_initialization notebook section (Jacobian computation, fisheye model, and perspective projection function). First the code computes Jacobians with respect to the normalized image coordinates and, optionally, the camera intrinsics. Then it computes the Jacobian of the normalized coordinates with respect to the projection function, dzn_dpfc. Next, the derivative of p_FinCi with respect to p_FinIi is computed, followed by the derivative of p_FinCi with respect to the camera clone state. Finally dz_dpfg is computed.
Here: dz_dzn is the Jacobian of uv_dist with respect to uv_norm; dz_dzeta is the Jacobian of uv_dist with respect to the intrinsics and distortion parameters; dzn_dpfc is the derivative of uv_norm with respect to p_FinCi; dpfc_dpfg is p_FinCi with respect to p_FinG; dpfc_dclone is p_FinCi with respect to the clone rotation and position; dz_dpfg is z with respect to p_FinG.
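The perspective-projection block dzn_dpfc can be sanity-checked against central finite differences. This is a standalone sketch, not OpenVINS code:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Normalized perspective projection zn = (x/z, y/z) and its analytic 2x3
// Jacobian, matching the dzn_dpfc entries used in the snippet below.
std::array<double, 2> project(const std::array<double, 3> &p) {
  return {p[0] / p[2], p[1] / p[2]};
}

std::array<std::array<double, 3>, 2> dzn_dpfc(const std::array<double, 3> &p) {
  const double x = p[0], y = p[1], z = p[2];
  return {{{1 / z, 0, -x / (z * z)},
           {0, 1 / z, -y / (z * z)}}};
}
```

Perturbing each component of p_FinCi and differencing the projection should reproduce the analytic Jacobian column by column.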
Eigen::MatrixXd dz_dzn, dz_dzeta;
state->_cam_intrinsics_cameras.at(pair.first)->compute_distort_jacobian(uv_norm, dz_dzn, dz_dzeta);
// Normalized coordinates in respect to projection function
Eigen::MatrixXd dzn_dpfc = Eigen::MatrixXd::Zero(2, 3);
dzn_dpfc << 1 / p_FinCi(2), 0, -p_FinCi(0) / (p_FinCi(2) * p_FinCi(2)), 0, 1 / p_FinCi(2), -p_FinCi(1) / (p_FinCi(2) * p_FinCi(2));
// Derivative of p_FinCi in respect to p_FinIi
Eigen::MatrixXd dpfc_dpfg = R_ItoC * R_GtoIi;
// Derivative of p_FinCi in respect to camera clone state
Eigen::MatrixXd dpfc_dclone = Eigen::MatrixXd::Zero(3, 6);
dpfc_dclone.block(0, 0, 3, 3).noalias() = R_ItoC * skew_x(p_FinIi);
dpfc_dclone.block(0, 3, 3, 3) = -dpfc_dpfg;
// Precompute some matrices
Eigen::MatrixXd dz_dpfc = dz_dzn * dzn_dpfc;
Eigen::MatrixXd dz_dpfg = dz_dpfc * dpfc_dpfg;

H_f is built for each measurement c. This portion is [2×3].
// CHAINRULE: get the total feature Jacobian
H_f.block(2 * c, 0, 2, H_f.cols()).noalias() = dz_dpfg * dpfg_dlambda;

H_x is built for each measurement c. The Jacobian for measurement c is placed at row 2*c and at the column of clone_Ii/calibration/distortion, which is found from the map. The size of this portion is [2×(clone/calibration/distortion size)].
// CHAINRULE: get state clone Jacobian
H_x.block(2 * c, map_hx[clone_Ii], 2, clone_Ii->size()).noalias() = dz_dpfc * dpfc_dclone;
// CHAINRULE: loop through all extra states and add their
// NOTE: we add the Jacobian here as we might be in the anchoring pose for this measurement
for (size_t i = 0; i < dpfg_dx_order.size(); i++) {
H_x.block(2 * c, map_hx[dpfg_dx_order.at(i)], 2, dpfg_dx_order.at(i)->size()).noalias() += dz_dpfg * dpfg_dx.at(i);
}
//=========================================================================
//=========================================================================
// Derivative of p_FinCi in respect to camera calibration (R_ItoC, p_IinC)
if (state->_options.do_calib_camera_pose) {
// Calculate the Jacobian
Eigen::MatrixXd dpfc_dcalib = Eigen::MatrixXd::Zero(3, 6);
dpfc_dcalib.block(0, 0, 3, 3) = skew_x(p_FinCi - p_IinC);
dpfc_dcalib.block(0, 3, 3, 3) = Eigen::Matrix<double, 3, 3>::Identity();
// Chainrule it and add it to the big jacobian
H_x.block(2 * c, map_hx[calibration], 2, calibration->size()).noalias() += dz_dpfc * dpfc_dcalib;
}
// Derivative of measurement in respect to distortion parameters
if (state->_options.do_calib_camera_intrinsics) {
H_x.block(2 * c, map_hx[distortion], 2, distortion->size()) = dz_dzeta;
}

Different H matrices¶
H_f (Feature Jacobian): Jacobian with respect to the feature parameters (the landmark position). Its size depends on the landmark representation: 3 columns for full 3D representations (ANCHORED_3D, GLOBAL_3D, etc.), 1 column for the single-depth representation (ANCHORED_INVERSE_DEPTH_SINGLE).

H_x (State Jacobian): Jacobian with respect to the state variables that affect the feature measurement (IMU poses, camera calibration, etc.). This includes clone poses (historical IMU states), camera extrinsics (if calibrating), and the anchor pose (for relative representations).

H_xf (Combined Jacobian): a concatenated matrix combining both state and feature Jacobians: H_xf = [H_x | H_f].

Hx_big (Stacked multi-feature Jacobian): a large matrix that vertically stacks the state Jacobians of multiple features for batch processing.
Summary Table:
| Matrix | Dimensions | Meaning |
|---|---|---|
| H_f | (2×num_meas) × (1 or 3) | Single feature → feature params |
| H_x | (2×num_meas) × (state_size) | Single feature → state params |
| H_xf | (2×num_meas) × (state_size + feat_size) | Single feature → [state, feature] |
| Hx_big | (all_meas) × (state_size) | All features → state params |
void UpdaterSLAM::delayed_init(std::shared_ptr<State> state, std::vector<std::shared_ptr<Feature>> &feature_vec) {
// Return if no features
if (feature_vec.empty())
return;
// Start timing
boost::posix_time::ptime rT0, rT1, rT2, rT3;
rT0 = boost::posix_time::microsec_clock::local_time();
// 0. Get all timestamps our clones are at (and thus valid measurement times)
std::vector<double> clonetimes;
for (const auto &clone_imu : state->_clones_IMU) {
clonetimes.emplace_back(clone_imu.first);
}
// 1. Clean all feature measurements and make sure they all have valid clone times
auto it0 = feature_vec.begin();
while (it0 != feature_vec.end()) {
// Clean the feature
(*it0)->clean_old_measurements(clonetimes);
// Count how many measurements
int ct_meas = 0;
for (const auto &pair : (*it0)->timestamps) {
ct_meas += (*it0)->timestamps[pair.first].size();
}
// Remove if we don't have enough
if (ct_meas < 2) {
(*it0)->to_delete = true;
it0 = feature_vec.erase(it0);
} else {
it0++;
}
}
rT1 = boost::posix_time::microsec_clock::local_time();
// 2. Create vector of cloned *CAMERA* poses at each of our clone timesteps
std::unordered_map<size_t, std::unordered_map<double, FeatureInitializer::ClonePose>> clones_cam;
for (const auto &clone_calib : state->_calib_IMUtoCAM) {
// For this camera, create the vector of camera poses
std::unordered_map<double, FeatureInitializer::ClonePose> clones_cami;
for (const auto &clone_imu : state->_clones_IMU) {
// Get current camera pose
Eigen::Matrix<double, 3, 3> R_GtoCi = clone_calib.second->Rot() * clone_imu.second->Rot();
Eigen::Matrix<double, 3, 1> p_CioinG = clone_imu.second->pos() - R_GtoCi.transpose() * clone_calib.second->pos();
// Append to our map
clones_cami.insert({clone_imu.first, FeatureInitializer::ClonePose(R_GtoCi, p_CioinG)});
}
// Append to our map
clones_cam.insert({clone_calib.first, clones_cami});
}
// 3. Try to triangulate all MSCKF or new SLAM features that have measurements
auto it1 = feature_vec.begin();
while (it1 != feature_vec.end()) {
// Triangulate the feature and remove if it fails
bool success_tri = true;
if (initializer_feat->config().triangulate_1d) {
success_tri = initializer_feat->single_triangulation_1d(*it1, clones_cam);
} else {
success_tri = initializer_feat->single_triangulation(*it1, clones_cam);
}
// Gauss-newton refine the feature
bool success_refine = true;
if (initializer_feat->config().refine_features) {
success_refine = initializer_feat->single_gaussnewton(*it1, clones_cam);
}
// Remove the feature if not a success
if (!success_tri || !success_refine) {
(*it1)->to_delete = true;
it1 = feature_vec.erase(it1);
continue;
}
it1++;
}
rT2 = boost::posix_time::microsec_clock::local_time();
// 4. Compute linear system for each feature, nullspace project, and reject
auto it2 = feature_vec.begin();
while (it2 != feature_vec.end()) {
// Convert our feature into our current format
UpdaterHelper::UpdaterHelperFeature feat;
feat.featid = (*it2)->featid;
feat.uvs = (*it2)->uvs;
feat.uvs_norm = (*it2)->uvs_norm;
feat.timestamps = (*it2)->timestamps;
// If we are using single inverse depth, then it is equivalent to using the msckf inverse depth
auto feat_rep =
((int)feat.featid < state->_options.max_aruco_features) ? state->_options.feat_rep_aruco : state->_options.feat_rep_slam;
feat.feat_representation = feat_rep;
if (feat_rep == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) {
feat.feat_representation = LandmarkRepresentation::Representation::ANCHORED_MSCKF_INVERSE_DEPTH;
}
// Save the position and its fej value
if (LandmarkRepresentation::is_relative_representation(feat.feat_representation)) {
feat.anchor_cam_id = (*it2)->anchor_cam_id;
feat.anchor_clone_timestamp = (*it2)->anchor_clone_timestamp;
feat.p_FinA = (*it2)->p_FinA;
feat.p_FinA_fej = (*it2)->p_FinA;
} else {
feat.p_FinG = (*it2)->p_FinG;
feat.p_FinG_fej = (*it2)->p_FinG;
}
// Our return values (feature jacobian, state jacobian, residual, and order of state jacobian)
Eigen::MatrixXd H_f;
Eigen::MatrixXd H_x;
Eigen::VectorXd res;
std::vector<std::shared_ptr<Type>> Hx_order;
// Get the Jacobian for this feature
UpdaterHelper::get_feature_jacobian_full(state, feat, H_f, H_x, res, Hx_order);
// If we are doing the single feature representation, then we need to remove the bearing portion
// To do so, we project the bearing portion onto the state and depth Jacobians and the residual.
// This allows us to directly initialize the feature as a depth-only feature
if (feat_rep == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) {
// Append the Jacobian with respect to the depth of the feature
Eigen::MatrixXd H_xf = H_x;
H_xf.conservativeResize(H_x.rows(), H_x.cols() + 1);
H_xf.block(0, H_x.cols(), H_x.rows(), 1) = H_f.block(0, H_f.cols() - 1, H_f.rows(), 1);
H_f.conservativeResize(H_f.rows(), H_f.cols() - 1);
// Nullspace project the bearing portion
// This takes into account that we have marginalized the bearing already
// Thus this is crucial to ensuring estimator consistency as we are not taking the bearing to be true
UpdaterHelper::nullspace_project_inplace(H_f, H_xf, res);
// Split out the state portion and feature portion
H_x = H_xf.block(0, 0, H_xf.rows(), H_xf.cols() - 1);
H_f = H_xf.block(0, H_xf.cols() - 1, H_xf.rows(), 1);
}
// Create feature pointer (we will always create it of size three since we initialize the single inverse depth as a msckf anchored
// representation)
int landmark_size = (feat_rep == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) ? 1 : 3;
auto landmark = std::make_shared<Landmark>(landmark_size);
landmark->_featid = feat.featid;
landmark->_feat_representation = feat_rep;
landmark->_unique_camera_id = (*it2)->anchor_cam_id;
if (LandmarkRepresentation::is_relative_representation(feat.feat_representation)) {
landmark->_anchor_cam_id = feat.anchor_cam_id;
landmark->_anchor_clone_timestamp = feat.anchor_clone_timestamp;
landmark->set_from_xyz(feat.p_FinA, false);
landmark->set_from_xyz(feat.p_FinA_fej, true);
} else {
landmark->set_from_xyz(feat.p_FinG, false);
landmark->set_from_xyz(feat.p_FinG_fej, true);
}
// Measurement noise matrix
double sigma_pix_sq =
((int)feat.featid < state->_options.max_aruco_features) ? _options_aruco.sigma_pix_sq : _options_slam.sigma_pix_sq;
Eigen::MatrixXd R = sigma_pix_sq * Eigen::MatrixXd::Identity(res.rows(), res.rows());
// Try to initialize, delete new pointer if we failed
double chi2_multipler =
((int)feat.featid < state->_options.max_aruco_features) ? _options_aruco.chi2_multipler : _options_slam.chi2_multipler;
if (StateHelper::initialize(state, landmark, Hx_order, H_x, H_f, R, res, chi2_multipler)) {
state->_features_SLAM.insert({(*it2)->featid, landmark});
(*it2)->to_delete = true;
it2++;
} else {
(*it2)->to_delete = true;
it2 = feature_vec.erase(it2);
}
}
rT3 = boost::posix_time::microsec_clock::local_time();
// Debug print timing information
if (!feature_vec.empty()) {
PRINT_ALL("[SLAM-DELAY]: %.4f seconds to clean\n", (rT1 - rT0).total_microseconds() * 1e-6);
PRINT_ALL("[SLAM-DELAY]: %.4f seconds to triangulate\n", (rT2 - rT1).total_microseconds() * 1e-6);
PRINT_ALL("[SLAM-DELAY]: %.4f seconds initialize (%d features)\n", (rT3 - rT2).total_microseconds() * 1e-6, (int)feature_vec.size());
PRINT_ALL("[SLAM-DELAY]: %.4f seconds total\n", (rT3 - rT1).total_microseconds() * 1e-6);
}
}

Update¶
Update existing SLAM features:

Chi² test
H_xf includes the feature Jacobian
EKF update

Once a feature has been initialized into the state vector via delayed init, its position is part of the state. The flow here is therefore different from the MSCKF update and from the delayed-init path above: rather than computing H_f and H_x and doing a nullspace projection, H_x and H_f are concatenated as [H_x | H_f]. Since the feature is now in the state vector, the Jacobian with respect to the feature position p_FinG belongs directly in the measurement Jacobian, so no nullspace projection is needed (the one exception is the single inverse-depth representation, whose bearing portion is still projected out). The update therefore just computes the residual for each timestamp, the corresponding H_x and H_f for each residual, appends everything into one big H, and performs a standard EKF update.
void UpdaterSLAM::update(std::shared_ptr<State> state, std::vector<std::shared_ptr<Feature>> &feature_vec) {
// Return if no features
if (feature_vec.empty())
return;
// Start timing
boost::posix_time::ptime rT0, rT1, rT2, rT3;
rT0 = boost::posix_time::microsec_clock::local_time();
// 0. Get all timestamps our clones are at (and thus valid measurement times)
std::vector<double> clonetimes;
for (const auto &clone_imu : state->_clones_IMU) {
clonetimes.emplace_back(clone_imu.first);
}
// 1. Clean all feature measurements and make sure they all have valid clone times
auto it0 = feature_vec.begin();
while (it0 != feature_vec.end()) {
// Clean the feature
(*it0)->clean_old_measurements(clonetimes);
// Count how many measurements
int ct_meas = 0;
for (const auto &pair : (*it0)->timestamps) {
ct_meas += (*it0)->timestamps[pair.first].size();
}
// Get the landmark and its representation
// For the single depth representation we need at least two measurements
// This is because we do nullspace projection
std::shared_ptr<Landmark> landmark = state->_features_SLAM.at((*it0)->featid);
int required_meas = (landmark->_feat_representation == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) ? 2 : 1;
// Remove if we don't have enough
if (ct_meas < 1) {
(*it0)->to_delete = true;
it0 = feature_vec.erase(it0);
} else if (ct_meas < required_meas) {
it0 = feature_vec.erase(it0);
} else {
it0++;
}
}
rT1 = boost::posix_time::microsec_clock::local_time();
// Calculate the max possible measurement size
size_t max_meas_size = 0;
for (size_t i = 0; i < feature_vec.size(); i++) {
for (const auto &pair : feature_vec.at(i)->timestamps) {
max_meas_size += 2 * feature_vec.at(i)->timestamps[pair.first].size();
}
}
// Calculate max possible state size (i.e. the size of our covariance)
size_t max_hx_size = state->max_covariance_size();
// Large Jacobian, residual, and measurement noise of *all* features for this update
Eigen::VectorXd res_big = Eigen::VectorXd::Zero(max_meas_size);
Eigen::MatrixXd Hx_big = Eigen::MatrixXd::Zero(max_meas_size, max_hx_size);
Eigen::MatrixXd R_big = Eigen::MatrixXd::Identity(max_meas_size, max_meas_size);
std::unordered_map<std::shared_ptr<Type>, size_t> Hx_mapping;
std::vector<std::shared_ptr<Type>> Hx_order_big;
size_t ct_jacob = 0;
size_t ct_meas = 0;
// 4. Compute linear system for each feature, nullspace project, and reject
auto it2 = feature_vec.begin();
while (it2 != feature_vec.end()) {
// Ensure we have the landmark and it is the same
assert(state->_features_SLAM.find((*it2)->featid) != state->_features_SLAM.end());
assert(state->_features_SLAM.at((*it2)->featid)->_featid == (*it2)->featid);
// Get our landmark from the state
std::shared_ptr<Landmark> landmark = state->_features_SLAM.at((*it2)->featid);
// Convert the state landmark into our current format
UpdaterHelper::UpdaterHelperFeature feat;
feat.featid = (*it2)->featid;
feat.uvs = (*it2)->uvs;
feat.uvs_norm = (*it2)->uvs_norm;
feat.timestamps = (*it2)->timestamps;
// If we are using single inverse depth, then it is equivalent to using the msckf inverse depth
feat.feat_representation = landmark->_feat_representation;
if (landmark->_feat_representation == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) {
feat.feat_representation = LandmarkRepresentation::Representation::ANCHORED_MSCKF_INVERSE_DEPTH;
}
// Save the position and its fej value
if (LandmarkRepresentation::is_relative_representation(feat.feat_representation)) {
feat.anchor_cam_id = landmark->_anchor_cam_id;
feat.anchor_clone_timestamp = landmark->_anchor_clone_timestamp;
feat.p_FinA = landmark->get_xyz(false);
feat.p_FinA_fej = landmark->get_xyz(true);
} else {
feat.p_FinG = landmark->get_xyz(false);
feat.p_FinG_fej = landmark->get_xyz(true);
}
// Our return values (feature jacobian, state jacobian, residual, and order of state jacobian)
Eigen::MatrixXd H_f;
Eigen::MatrixXd H_x;
Eigen::VectorXd res;
std::vector<std::shared_ptr<Type>> Hx_order;
// Get the Jacobian for this feature
UpdaterHelper::get_feature_jacobian_full(state, feat, H_f, H_x, res, Hx_order);
// Place Jacobians in one big Jacobian, since the landmark is already in our state vector
Eigen::MatrixXd H_xf = H_x;
if (landmark->_feat_representation == LandmarkRepresentation::Representation::ANCHORED_INVERSE_DEPTH_SINGLE) {
// Append the Jacobian with respect to the depth of the feature
H_xf.conservativeResize(H_x.rows(), H_x.cols() + 1);
H_xf.block(0, H_x.cols(), H_x.rows(), 1) = H_f.block(0, H_f.cols() - 1, H_f.rows(), 1);
H_f.conservativeResize(H_f.rows(), H_f.cols() - 1);
// Nullspace project the bearing portion
// This takes into account that we have marginalized the bearing already
// Thus this is crucial to ensuring estimator consistency as we are not taking the bearing to be true
UpdaterHelper::nullspace_project_inplace(H_f, H_xf, res);
} else {
// Else we have the full feature in our state, so just append it
H_xf.conservativeResize(H_x.rows(), H_x.cols() + H_f.cols());
H_xf.block(0, H_x.cols(), H_x.rows(), H_f.cols()) = H_f;
}
// Append to our Jacobian order vector
std::vector<std::shared_ptr<Type>> Hxf_order = Hx_order;
Hxf_order.push_back(landmark);
// Chi2 distance check
Eigen::MatrixXd P_marg = StateHelper::get_marginal_covariance(state, Hxf_order);
Eigen::MatrixXd S = H_xf * P_marg * H_xf.transpose();
double sigma_pix_sq =
((int)feat.featid < state->_options.max_aruco_features) ? _options_aruco.sigma_pix_sq : _options_slam.sigma_pix_sq;
S.diagonal() += sigma_pix_sq * Eigen::VectorXd::Ones(S.rows());
double chi2 = res.dot(S.llt().solve(res));
// Get our threshold (we precompute up to 500 but handle the case that it is more)
double chi2_check;
if (res.rows() < 500) {
chi2_check = chi_squared_table[res.rows()];
} else {
boost::math::chi_squared chi_squared_dist(res.rows());
chi2_check = boost::math::quantile(chi_squared_dist, 0.95);
PRINT_WARNING(YELLOW "chi2_check over the residual limit - %d\n" RESET, (int)res.rows());
}
// Check if we should delete or not
double chi2_multipler =
((int)feat.featid < state->_options.max_aruco_features) ? _options_aruco.chi2_multipler : _options_slam.chi2_multipler;
if (chi2 > chi2_multipler * chi2_check) {
if ((int)feat.featid < state->_options.max_aruco_features) {
PRINT_WARNING(YELLOW "[SLAM-UP]: rejecting aruco tag %d for chi2 thresh (%.3f > %.3f)\n" RESET, (int)feat.featid, chi2,
chi2_multipler * chi2_check);
} else {
landmark->update_fail_count++;
}
(*it2)->to_delete = true;
it2 = feature_vec.erase(it2);
continue;
}
// Debug print when we are going to update the aruco tags
if ((int)feat.featid < state->_options.max_aruco_features) {
PRINT_DEBUG("[SLAM-UP]: accepted aruco tag %d for chi2 thresh (%.3f < %.3f)\n", (int)feat.featid, chi2, chi2_multipler * chi2_check);
}
// We are good!!! Append to our large H vector
size_t ct_hx = 0;
for (const auto &var : Hxf_order) {
// Ensure that this variable is in our Jacobian
if (Hx_mapping.find(var) == Hx_mapping.end()) {
Hx_mapping.insert({var, ct_jacob});
Hx_order_big.push_back(var);
ct_jacob += var->size();
}
// Append to our large Jacobian
Hx_big.block(ct_meas, Hx_mapping[var], H_xf.rows(), var->size()) = H_xf.block(0, ct_hx, H_xf.rows(), var->size());
ct_hx += var->size();
}
// Our isotropic measurement noise
R_big.block(ct_meas, ct_meas, res.rows(), res.rows()) *= sigma_pix_sq;
// Append our residual and move forward
res_big.block(ct_meas, 0, res.rows(), 1) = res;
ct_meas += res.rows();
it2++;
}
rT2 = boost::posix_time::microsec_clock::local_time();
// We have appended all features to our Hx_big, res_big
// Delete it so we do not reuse information
for (size_t f = 0; f < feature_vec.size(); f++) {
feature_vec[f]->to_delete = true;
}
// Return if we don't have anything and resize our matrices
if (ct_meas < 1) {
return;
}
assert(ct_meas <= max_meas_size);
assert(ct_jacob <= max_hx_size);
res_big.conservativeResize(ct_meas, 1);
Hx_big.conservativeResize(ct_meas, ct_jacob);
R_big.conservativeResize(ct_meas, ct_meas);
// 5. With all good SLAM features update the state
StateHelper::EKFUpdate(state, Hx_order_big, Hx_big, res_big, R_big);
rT3 = boost::posix_time::microsec_clock::local_time();
// Debug print timing information
PRINT_ALL("[SLAM-UP]: %.4f seconds to clean\n", (rT1 - rT0).total_microseconds() * 1e-6);
PRINT_ALL("[SLAM-UP]: %.4f seconds creating linear system\n", (rT2 - rT1).total_microseconds() * 1e-6);
PRINT_ALL("[SLAM-UP]: %.4f seconds to update (%d feats of %d size)\n", (rT3 - rT2).total_microseconds() * 1e-6, (int)feature_vec.size(),
(int)Hx_big.rows());
PRINT_ALL("[SLAM-UP]: %.4f seconds total\n", (rT3 - rT1).total_microseconds() * 1e-6);
}

Marginalization¶
Marginalization is straightforward: it removes the marginalized variables from the state vector, deletes the corresponding rows and columns of the covariance, and updates the ordering of the remaining variables. As far as I can tell, that is all that is done. For Gaussian distributions, marginalizing out a variable simply means removing its rows and columns from the covariance matrix; the blocks over the retained variables (P(x₁,x₁), P(x₁,x₂), P(x₂,x₂)) remain unchanged.
Questions:

How is the state updated when the residual only involves clones and calibration states?
How are SLAM features updated?
How is the covariance created?
Maybe the chi² test?