Reviewed all remaining notebooks
parent d3895d6ebb
commit 942750b127
@@ -9,30 +9,11 @@
"\n",
"## Overview\n",
"\n",
"The `BatchFixedLagSmoother` class in GTSAM is designed for fixed-lag smoothing in nonlinear factor graphs. It maintains a sliding window of the most recent variables and marginalizes out older variables. This is particularly useful in real-time applications where memory and computational efficiency are critical.\n",
"The `BatchFixedLagSmoother` is a [FixedLagSmoother](FixedLagSmoother.ipynb) that uses [LevenbergMarquardtOptimizer](LevenbergMarquardtOptimizer.ipynb) for batch optimization.\n",
"\n",
"This fixed lag smoother will **batch-optimize** at every iteration, but is warm-started from the last estimate."
]
},
{
"cell_type": "markdown",
"id": "42c80522",
"metadata": {},
"source": [
"## Mathematical Formulation\n",
"\n",
"The `BatchFixedLagSmoother` operates on the principle of fixed-lag smoothing, where the objective is to estimate the state $\mathbf{x}_t$ given all measurements up to time $t$, but only retaining a fixed window of recent states. The optimization problem can be expressed as:\n",
"$$\n",
"\\min_{\\mathbf{x}_{t-L:t}} \\sum_{i=1}^{N} \\| \\mathbf{h}_i(\\mathbf{x}_{t-L:t}) - \\mathbf{z}_i \\|^2\n",
"$$\n",
"where $L$ is the fixed lag, $\\mathbf{h}_i$ are the measurement functions, and $\\mathbf{z}_i$ are the measurements.\n",
"In practice, the functions $\\mathbf{h}_i$ depend only on a subset of the state variables $\\mathbf{X}_i$, and the optimization is performed over a set of $N$ *factors* $\\phi_i$ instead:\n",
"$$\n",
"\\min_{\\mathbf{x}_{t-L:t}} \\sum_{i=1}^{N} \\| \\phi_i(\\mathbf{X}_i; \\mathbf{z}_i) \\|^2\n",
"$$\n",
"The API below allows the user to add new factors at every iteration, which will be automatically pruned after they no longer depend on any variables in the lag."
]
},
{
"cell_type": "markdown",
"id": "92b4f851",
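The factor-pruning rule described above (a factor survives only while it still touches a variable inside the lag window) can be sketched in plain Python. This is an illustrative sketch with a hypothetical factor representation, not the GTSAM API:

```python
# Illustrative sketch (not the GTSAM API): a factor is kept only while it
# still involves at least one variable inside the fixed-lag window.

def prune_factors(factors, window):
    """Keep factors that depend on at least one variable in the window."""
    return [f for f in factors if any(k in window for k in f["keys"])]

# Hypothetical factors over timestamped keys x0..x4.
factors = [
    {"keys": ["x0"]},         # prior on x0
    {"keys": ["x0", "x1"]},   # odometry x0 -> x1
    {"keys": ["x1", "x2"]},
    {"keys": ["x2", "x3"]},
    {"keys": ["x3", "x4"]},
]

# With lag L = 2 and current time t = 4, the window is {x2, x3, x4}.
window = {"x2", "x3", "x4"}
active = prune_factors(factors, window)

print(len(active))  # 3: the prior and the x0->x1 odometry are pruned
```

Note the odometry factor between x1 and x2 survives, because it still constrains x2 inside the window.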
@@ -5,59 +5,40 @@
"id": "cdd2fdc5",
"metadata": {},
"source": [
"# FixedLagSmoother Class Documentation\n",
"\n",
"*Disclaimer: This documentation was generated by AI and may require human revision for accuracy and completeness.*\n",
"# FixedLagSmoother\n",
"\n",
"## Overview\n",
"\n",
"The `FixedLagSmoother` class in GTSAM is designed for incremental smoothing and mapping in robotics and computer vision applications. It maintains a fixed-size window of the most recent states, allowing for efficient updates and marginalization of older states. This is particularly useful in scenarios where real-time performance is crucial, and memory usage needs to be controlled.\n",
"\n",
"## Key Features\n",
"\n",
"- **Incremental Updates**: The `FixedLagSmoother` allows for efficient updates as new measurements are received, making it suitable for real-time applications.\n",
"- **Fixed-Lag Smoothing**: It maintains a fixed window of recent states, which helps in managing computational resources by marginalizing out older states.\n",
"- **Nonlinear Optimization**: Utilizes nonlinear optimization techniques to refine the estimates of the states within the fixed lag window.\n",
"\n",
"## Main Methods\n",
"\n",
"### Update\n",
"\n",
"The `update` method is central to the `FixedLagSmoother` class. It incorporates new measurements and updates the state estimates within the fixed lag window. The method ensures that the estimates are consistent with the new information while maintaining computational efficiency.\n",
"\n",
"### Marginalization\n",
"\n",
"Marginalization is a key process in fixed-lag smoothing, where older states are removed from the optimization problem to keep the problem size manageable. This is done while preserving the essential information about the past states in the form of a prior.\n",
"\n",
"### Optimization\n",
"\n",
"The class employs nonlinear optimization techniques to solve the smoothing problem. The optimization process aims to minimize the error between the predicted and observed measurements, leading to refined state estimates.\n",
"The `FixedLagSmoother` class is the base class for [BatchFixedLagSmoother](BatchFixedLagSmoother.ipynb) and [IncrementalFixedLagSmoother](IncrementalFixedLagSmoother.ipynb).\n",
"\n",
"It provides an API for fixed-lag smoothing in nonlinear factor graphs. It maintains a sliding window of the most recent variables and marginalizes out older variables. This is particularly useful in real-time applications where memory and computational efficiency are critical."
]
},
{
"cell_type": "markdown",
"id": "8d372784",
"metadata": {},
"source": [
"## Mathematical Formulation\n",
"\n",
"The `FixedLagSmoother` operates on the principle of minimizing a cost function that represents the sum of squared errors between the predicted and observed measurements. Mathematically, this can be expressed as:\n",
"\n",
"In fixed-lag smoothing the objective is to estimate the state $\\mathbf{x}_t$ given all measurements up to time $t$, but only retaining a fixed window of recent states. The optimization problem can be expressed as:\n",
"$$\n",
"\\min_x \\sum_i \\| h(x_i) - z_i \\|^2\n",
"\\min_{\\mathbf{x}_{t-L:t}} \\sum_{i=1}^{N} \\| \\mathbf{h}_i(\\mathbf{x}_{t-L:t}) - \\mathbf{z}_i \\|^2\n",
"$$\n",
"\n",
"where $h(x_i)$ is the predicted measurement, $z_i$ is the observed measurement, and $x_i$ represents the state variables within the fixed lag window.\n",
"\n",
"## Applications\n",
"\n",
"The `FixedLagSmoother` is widely used in applications such as:\n",
"\n",
"- **Simultaneous Localization and Mapping (SLAM)**: Helps in maintaining a consistent map and robot trajectory in real-time.\n",
"- **Visual-Inertial Odometry (VIO)**: Used for estimating the motion of a camera-equipped device by fusing visual and inertial data.\n",
"- **Sensor Fusion**: Combines data from multiple sensors to improve the accuracy of state estimates.\n",
"\n",
"## Conclusion\n",
"\n",
"The `FixedLagSmoother` class is a powerful tool for real-time state estimation in dynamic environments. Its ability to handle incremental updates and maintain a fixed-size problem makes it ideal for applications where computational resources are limited. By leveraging nonlinear optimization, it provides accurate and consistent state estimates within the fixed lag window."
"where $L$ is the fixed lag, $\\mathbf{h}_i$ are the measurement functions, and $\\mathbf{z}_i$ are the measurements.\n",
"In practice, the functions $\\mathbf{h}_i$ depend only on a subset of the state variables $\\mathbf{X}_i$, and the optimization is performed over a set of $N$ *factors* $\\phi_i$ instead:\n",
"$$\n",
"\\min_{\\mathbf{x}_{t-L:t}} \\sum_{i=1}^{N} \\| \\phi_i(\\mathbf{X}_i; \\mathbf{z}_i) \\|^2\n",
"$$\n",
"The API below allows the user to add new factors at every iteration, which will be automatically pruned after they no longer depend on any variables in the lag."
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
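The marginalization step described in this notebook — removing an old state while "preserving the essential information about the past states in the form of a prior" — reduces, for Gaussian densities, to a Schur complement on the information matrix. A minimal scalar sketch (hypothetical numbers, not the GTSAM internals):

```python
# Illustrative sketch: marginalizing x1 out of a Gaussian over (x1, x2) kept
# in information form.  Joint information matrix [[a, b], [b, c]]; the prior
# left on x2 has information c - b^2 / a (the Schur complement of a).

def marginalize_first(a, b, c):
    """Return the information remaining on x2 after marginalizing x1."""
    return c - b * b / a

# Hypothetical numbers: x1 well constrained (a = 4), coupled to x2 (b = 2),
# x2 has direct information c = 3.
info_x2 = marginalize_first(4.0, 2.0, 3.0)
print(info_x2)  # 2.0: part of the joint information survives as a prior on x2
```

The coupling term b transfers some of x1's information to x2, which is exactly the "prior" the text refers to.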
@@ -5,21 +5,26 @@
"id": "867a20bc",
"metadata": {},
"source": [
"# ISAM2 Class Documentation\n",
"\n",
"*Disclaimer: This documentation was generated by AI and may require human revision for accuracy and completeness.*\n",
"# ISAM2\n",
"\n",
"## Overview\n",
"\n",
"The `ISAM2` class in GTSAM is an incremental smoothing and mapping algorithm that efficiently updates the solution to a nonlinear optimization problem as new measurements are added. This class is particularly useful in applications such as SLAM (Simultaneous Localization and Mapping) where real-time performance is crucial.\n",
"The `ISAM2` class in GTSAM is an incremental smoothing and mapping algorithm that efficiently updates the solution to a nonlinear optimization problem as new measurements are added. This class is particularly useful in applications such as SLAM (Simultaneous Localization and Mapping) where real-time performance is crucial. \n",
"\n",
"The algorithm is described in the 2012 IJRR paper by {cite:t}`http://dx.doi.org/10.1177/0278364911430419`. For background, also see the more recent booklet by {cite:t}`https://doi.org/10.1561/2300000043`.\n",
"\n",
"## Key Features\n",
"\n",
"- **Incremental Updates**: `ISAM2` allows for incremental updates to the factor graph, avoiding the need to solve the entire problem from scratch with each new measurement.\n",
"- **Bayesian Inference**: Utilizes Bayes' rule to update beliefs about the state of the system as new information becomes available.\n",
"- **Nonlinear Optimization**: Capable of handling nonlinear systems, leveraging iterative optimization techniques to refine estimates.\n",
"- **Efficient Variable Reordering**: Dynamically reorders variables to maintain sparsity and improve computational efficiency.\n",
"\n",
"- **Efficient Variable Reordering**: Dynamically reorders variables to maintain sparsity and improve computational efficiency."
]
},
{
"cell_type": "markdown",
"id": "9ce0ec12",
"metadata": {},
"source": [
"## Main Methods\n",
"\n",
"### Initialization and Configuration\n",
@@ -37,30 +42,15 @@
"\n",
"### Advanced Features\n",
"\n",
"- **relinearize**: Forces relinearization of the entire factor graph, which can be useful in scenarios where significant nonlinearities are introduced.\n",
"- **getFactorsUnsafe**: Provides access to the internal factor graph, allowing for advanced manipulations and custom analyses.\n",
"\n",
"## Mathematical Formulation\n",
"\n",
"The `ISAM2` algorithm is based on the factor graph representation of the problem, where the joint probability distribution is expressed as a product of factors:\n",
"\n",
"$$ P(X|Z) \\propto \\prod_{i} \\phi_i(X_i, Z_i) $$\n",
"\n",
"Here, $X$ represents the set of variables, $Z$ the measurements, and $\\phi_i$ the individual factors.\n",
"\n",
"The update process involves solving a nonlinear optimization problem, typically using the Gauss-Newton or Levenberg-Marquardt algorithms, to minimize the error:\n",
"\n",
"$$ \\min_{X} \\sum_{i} \\| h_i(X_i) - Z_i \\|^2 $$\n",
"\n",
"where $h_i(X_i)$ are the measurement functions.\n",
"\n",
"## Conclusion\n",
"\n",
"The `ISAM2` class is a powerful tool for real-time estimation in dynamic environments. Its ability to efficiently update solutions with new data makes it ideal for applications requiring continuous adaptation and refinement of estimates. Users can leverage its advanced features to customize the behavior and performance of the algorithm to suit specific needs."
"- **getFactorsUnsafe**: Provides access to the internal factor graph, allowing for advanced manipulations and custom analysis."
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
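The incremental-update idea above — fold each new measurement into the running solution instead of re-solving from scratch — can be illustrated for a scalar linear state in information form. This is a sketch of the principle only; the real ISAM2 maintains a Bayes tree over many variables:

```python
# Illustrative sketch of the incremental idea behind iSAM2, reduced to a
# scalar linear state (not the Bayes-tree machinery): each measurement
# z = x + noise contributes A^T A to the information matrix and A^T z to the
# information vector, and the estimate can be refreshed after every update.

class ScalarIncrementalEstimator:
    def __init__(self):
        self.information = 0.0  # Lambda = sum of A^T A (A = 1 here)
        self.info_vector = 0.0  # eta = sum of A^T z

    def update(self, z):
        self.information += 1.0
        self.info_vector += z

    def estimate(self):
        return self.info_vector / self.information

est = ScalarIncrementalEstimator()
for z in [1.0, 2.0, 3.0]:
    est.update(z)

print(est.estimate())  # 2.0, identical to the batch least-squares solution
```

Processing measurements one at a time yields exactly the batch answer here; ISAM2's contribution is doing the analogous update efficiently for large, sparse nonlinear problems.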
@@ -0,0 +1,23 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "cdd2fdc5",
"metadata": {},
"source": [
"# IncrementalFixedLagSmoother\n",
"\n",
"## Overview\n",
"\n",
"The `IncrementalFixedLagSmoother` is a [FixedLagSmoother](FixedLagSmoother.ipynb) that uses [iSAM2](iSAM2.ipynb) for incremental inference.\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -5,13 +5,11 @@
"id": "f4c73cc1",
"metadata": {},
"source": [
"# LinearContainerFactor Class Documentation\n",
"\n",
"*Disclaimer: This documentation was generated by AI and may require human revision for accuracy and completeness.*\n",
"# LinearContainerFactor\n",
"\n",
"## Overview\n",
"\n",
"The `LinearContainerFactor` class in GTSAM is a specialized factor that encapsulates a linear factor within a nonlinear factor graph. This class allows for the seamless integration of linear factors into a nonlinear optimization problem, providing flexibility in problem modeling and solution.\n",
"The `LinearContainerFactor` class in GTSAM is a specialized factor that encapsulates a linear factor within a nonlinear factor graph. This is used extensively when marginalizing out variables.\n",
"\n",
"## Key Features\n",
"\n",
@@ -21,45 +19,15 @@
"\n",
"## Key Methods\n",
"\n",
"### Constructor\n",
"\n",
"- **LinearContainerFactor**: This constructor initializes the `LinearContainerFactor` with a linear factor and optionally with values. It serves as the entry point for creating an instance of this class.\n",
"\n",
"### Error Evaluation\n",
"\n",
"- **error**: This method calculates the error of the factor given a set of values. The error is typically defined as the difference between the predicted and observed measurements, and it plays a crucial role in optimization.\n",
"\n",
"### Jacobian Computation\n",
"\n",
"- **linearize**: This method computes the Jacobian matrix of the factor. The Jacobian is a matrix of partial derivatives that describes how the error changes with respect to changes in the variables. It is a critical component in gradient-based optimization algorithms.\n",
"\n",
"### Accessors\n",
"\n",
"- **keys**: This method returns the keys associated with the factor. Keys are identifiers for the variables involved in the factor, and they are essential for understanding the structure of the factor graph.\n",
"\n",
"### Utility Methods\n",
"\n",
"- **equals**: This method checks for equality between two `LinearContainerFactor` instances. It is useful for testing and validation purposes.\n",
"\n",
"## Mathematical Background\n",
"\n",
"The `LinearContainerFactor` operates within the context of factor graphs, where the goal is to minimize the total error across all factors. The error for a linear factor can be expressed as:\n",
"\n",
"$$ e(x) = A \\cdot x - b $$\n",
"\n",
"where $A$ is the coefficient matrix, $x$ is the vector of variables, and $b$ is the measurement vector. The optimization process aims to find the values of $x$ that minimize the sum of squared errors:\n",
"\n",
"$$ \\text{minimize} \\quad \\sum e(x)^T \\cdot e(x) $$\n",
"\n",
"The Jacobian matrix, which is derived from the linearization of the error function, is crucial for iterative optimization techniques such as Gauss-Newton or Levenberg-Marquardt.\n",
"\n",
"## Conclusion\n",
"\n",
"The `LinearContainerFactor` class is a powerful tool in GTSAM for integrating linear factors into nonlinear optimization problems. By providing mechanisms for error evaluation and Jacobian computation, it facilitates the efficient solution of complex estimation problems in robotics and computer vision."
"- **LinearContainerFactor**: This constructor initializes the `LinearContainerFactor` with a linear factor and optionally with values. It serves as the entry point for creating an instance of this class."
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
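The linear-factor error $e(x) = A \cdot x - b$ from this notebook can be evaluated directly. A minimal sketch with hypothetical numbers; the least-squares cost convention `0.5 * ||e||^2` is an assumption here, mirroring the text rather than the exact GTSAM code:

```python
# Illustrative sketch: evaluating a "contained" linear factor A x = b inside
# a nonlinear problem.  Error e(x) = A x - b; assumed cost 0.5 * ||e||^2.

def linear_factor_error(A, x, b):
    e = sum(a * xi for a, xi in zip(A, x)) - b
    return 0.5 * e * e

# Hypothetical factor 2*x0 + 1*x1 = 5, evaluated at x = (1, 2).
cost = linear_factor_error([2.0, 1.0], [1.0, 2.0], 5.0)
print(cost)  # e = 4 - 5 = -1, so the cost is 0.5
```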
@@ -0,0 +1,124 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# NonlinearEquality\n",
"\n",
"The `NonlinearEquality` family of factors in GTSAM provides constraints to enforce equality between variables or between a variable and a constant value. These factors are useful in optimization problems where strict equality constraints are required. Below is an overview of the API, grouped by functionality."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## NonlinearEquality\n",
"\n",
"The `NonlinearEquality` factor enforces equality between a variable and a feasible value. It supports both exact and inexact evaluation modes.\n",
"\n",
"### Constructors\n",
"- `NonlinearEquality(Key j, const T& feasible, const CompareFunction& compare)` \n",
" Creates a factor that enforces exact equality between the variable at key `j` and the feasible value `feasible`. \n",
" - `j`: Key of the variable to constrain. \n",
" - `feasible`: The feasible value to enforce equality with. \n",
" - `compare`: Optional comparison function (default uses `traits<T>::Equals`).\n",
"\n",
"- `NonlinearEquality(Key j, const T& feasible, double error_gain, const CompareFunction& compare)` \n",
" Creates a factor that allows inexact evaluation with a specified error gain. \n",
" - `error_gain`: Gain applied to the error when the constraint is violated.\n",
"\n",
"### Methods\n",
"- `double error(const Values& c) const` \n",
" Computes the error for the given values. Returns `0.0` if the constraint is satisfied, or a scaled error if `allow_error_` is enabled.\n",
"\n",
"- `Vector evaluateError(const T& xj, OptionalMatrixType H = nullptr) const` \n",
" Evaluates the error vector for the given variable value `xj`. Optionally computes the Jacobian matrix `H`.\n",
"\n",
"- `GaussianFactor::shared_ptr linearize(const Values& x) const` \n",
" Linearizes the factor at the given values `x` to create a Gaussian factor."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## NonlinearEquality1\n",
"\n",
"The `NonlinearEquality1` factor is a unary equality constraint that fixes a variable to a specific value.\n",
"\n",
"### Constructors\n",
"- `NonlinearEquality1(const X& value, Key key, double mu = 1000.0)` \n",
" Creates a factor that fixes the variable at `key` to the value `value`. \n",
" - `value`: The fixed value for the variable. \n",
" - `key`: Key of the variable to constrain. \n",
" - `mu`: Strength of the constraint (default: `1000.0`).\n",
"\n",
"### Methods\n",
"- `Vector evaluateError(const X& x1, OptionalMatrixType H = nullptr) const` \n",
" Evaluates the error vector for the given variable value `x1`. Optionally computes the Jacobian matrix `H`.\n",
"\n",
"- `void print(const std::string& s, const KeyFormatter& keyFormatter) const` \n",
" Prints the factor details, including the fixed value and noise model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## NonlinearEquality2\n",
"\n",
"The `NonlinearEquality2` factor is a binary equality constraint that enforces equality between two variables.\n",
"\n",
"### Constructors\n",
"- `NonlinearEquality2(Key key1, Key key2, double mu = 1e4)` \n",
" Creates a factor that enforces equality between the variables at `key1` and `key2`. \n",
" - `key1`: Key of the first variable. \n",
" - `key2`: Key of the second variable. \n",
" - `mu`: Strength of the constraint (default: `1e4`).\n",
"\n",
"### Methods\n",
"- `Vector evaluateError(const T& x1, const T& x2, OptionalMatrixType H1 = nullptr, OptionalMatrixType H2 = nullptr) const` \n",
" Evaluates the error vector for the given variable values `x1` and `x2`. Optionally computes the Jacobian matrices `H1` and `H2`.\n",
"\n",
"- `void print(const std::string& s, const KeyFormatter& keyFormatter) const` \n",
" Prints the factor details, including the keys and noise model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Common Features\n",
"\n",
"### Error Handling Modes\n",
"- Exact Evaluation: Throws an error during linearization if the constraint is violated. \n",
"- Inexact Evaluation: Allows nonzero error and scales it using the `error_gain_` parameter.\n",
"\n",
"### Serialization\n",
"All factors support serialization for saving and loading models.\n",
"\n",
"### Testable Interface\n",
"All factors implement the `Testable` interface, providing methods like:\n",
"- `void print(const std::string& s, const KeyFormatter& keyFormatter) const` \n",
" Prints the factor details.\n",
"- `bool equals(const NonlinearFactor& f, double tol) const` \n",
" Checks if two factors are equal within a specified tolerance."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"These factors provide a flexible way to enforce equality constraints in nonlinear optimization problems, making them useful for applications like SLAM, robotics, and control systems."
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
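The behavior of a binary equality constraint like `NonlinearEquality2` can be sketched for scalar variables: the error is the difference between the two variables, so it vanishes exactly when they agree, and the constraint strength `mu` scales the cost of any violation. Names and numbers below are hypothetical, not the GTSAM implementation:

```python
# Illustrative sketch of a binary equality constraint in the spirit of
# NonlinearEquality2, for scalar variables (not the GTSAM code).

def equality_error(x1, x2):
    """Constraint residual: zero exactly when x1 == x2."""
    return x1 - x2

def penalty_cost(x1, x2, mu=1e4):
    """Cost of violating the constraint, scaled by the strength mu."""
    e = equality_error(x1, x2)
    return 0.5 * mu * e * e

print(equality_error(3.0, 3.0))  # 0.0: constraint satisfied
print(penalty_cost(3.0, 3.5))    # large cost for even a small violation
```

With a large `mu`, the optimizer is driven to make the two variables agree almost exactly, which is how a soft penalty approximates a hard equality.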
@@ -5,14 +5,45 @@
"id": "381ccaaa",
"metadata": {},
"source": [
"# NonlinearFactor Class Documentation\n",
"\n",
"*Disclaimer: This documentation was generated by AI and may require human revision for accuracy and completeness.*\n",
"# NonlinearFactor\n",
"\n",
"## Overview\n",
"\n",
"The `NonlinearFactor` class in GTSAM is a fundamental component used in nonlinear optimization problems. It represents a factor in a factor graph, which is a key concept in probabilistic graphical models. The class is designed to work with nonlinear functions, making it suitable for a wide range of applications in robotics and computer vision, such as SLAM (Simultaneous Localization and Mapping) and structure from motion.\n",
"The `NonlinearFactor` class in GTSAM is a fundamental component used in nonlinear optimization. It represents a factor in a factor graph. The class is designed to work with nonlinear, continuous functions."
]
},
{
"cell_type": "markdown",
"id": "94ffa16d",
"metadata": {},
"source": [
"## Mathematical Formulation\n",
"\n",
"The `NonlinearFactor` is generally represented by a function $f(x)$, where $x$ is a vector of variables. The error is given by:\n",
"$$\n",
"e(x) = f(x) - z\n",
"$$\n",
"where $z$ is the observed measurement. The optimization process aims to minimize the sum of squared errors:\n",
"$$\n",
"\\min_x \\sum_i \\| e_i(x) \\|^2 \n",
"$$\n",
"\n",
"Linearization involves approximating $f(x)$ around a point $x_0$:\n",
"$$\n",
"f(x) \\approx f(x_0) + A\\delta x\n",
"$$\n",
"where $A$ is the Jacobian matrix of $f$ at $x_0$, and $\\delta x \\doteq x - x_0$. This leads to a linearized error:\n",
"$$\n",
"e(x) \\approx (f(x_0) + A\\delta x) - z = A\\delta x - b\n",
"$$\n",
"where $b\\doteq z - f(x_0)$ is the prediction error."
]
},
{
"cell_type": "markdown",
"id": "e3842ba3",
"metadata": {},
"source": [
"## Key Functionalities\n",
"\n",
"### Error Calculation\n",
@@ -35,35 +66,19 @@
"\n",
"- **keys**: Provides access to the keys (or variable indices) involved in the factor. This is essential for understanding which variables the factor is connected to in the factor graph.\n",
"\n",
"## Mathematical Formulation\n",
"\n",
"The `NonlinearFactor` is generally represented by a function $f(x)$, where $x$ is a vector of variables. The error is given by:\n",
"\n",
"$$ e(x) = z - f(x) $$\n",
"\n",
"where $z$ is the observed measurement. The optimization process aims to minimize the sum of squared errors:\n",
"\n",
"$$ \\min_x \\sum_i \\| e_i(x) \\|^2 $$\n",
"\n",
"Linearization involves approximating $f(x)$ around a point $x_0$:\n",
"\n",
"$$ f(x) \\approx f(x_0) + J(x - x_0) $$\n",
"\n",
"where $J$ is the Jacobian matrix of $f$ at $x_0$. This leads to a linearized error:\n",
"\n",
"$$ e(x) \\approx z - (f(x_0) + J(x - x_0)) $$\n",
"\n",
"## Usage Notes\n",
"\n",
"- The `NonlinearFactor` class is typically used in conjunction with a `NonlinearFactorGraph`, which is a collection of such factors.\n",
"- Users need to implement the `evaluateError` method in derived classes to define the specific measurement model.\n",
"- The class is designed to be flexible and extensible, allowing for custom factors to be created for specific applications.\n",
"\n",
"In summary, the `NonlinearFactor` class is a versatile and essential component for building and solving nonlinear optimization problems in GTSAM. Its ability to handle nonlinear relationships and provide linear approximations makes it suitable for a wide range of applications in robotics and beyond."
"- The class is designed to be flexible and extensible, allowing for custom factors to be created for specific applications."
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
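The linearization in the formulation above, $e(x) \approx A\delta x - b$ with $A$ the Jacobian at $x_0$ and $b = z - f(x_0)$, can be checked numerically on a toy measurement function. The function $f(x) = x^2$ and the numbers are hypothetical, chosen only to make the first-order approximation visible:

```python
# Illustrative numeric check of the linearization in the text: for
# f(x) = x^2 with measurement z, e(x) = f(x) - z, and around x0 the
# linearized error is A*dx - b, with A = f'(x0) and b = z - f(x0).

def f(x):
    return x * x

x0, z = 2.0, 5.0
A = 2.0 * x0      # Jacobian of f at x0
b = z - f(x0)     # prediction error

dx = 0.01
true_error = f(x0 + dx) - z
linear_error = A * dx - b

# The discrepancy is exactly the second-order term dx^2 for this quadratic f.
print(abs(true_error - linear_error))  # ~1e-4, i.e. dx^2
```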
@@ -5,13 +5,11 @@
"id": "a58d890a",
"metadata": {},
"source": [
"# NonlinearFactorGraph Class Documentation\n",
"\n",
"*Disclaimer: This documentation was generated by AI and may require human revision for accuracy and completeness.*\n",
"# NonlinearFactorGraph\n",
"\n",
"## Overview\n",
"\n",
"The `NonlinearFactorGraph` class in GTSAM is a key component for representing and solving nonlinear factor graphs. A factor graph is a bipartite graph that represents the factorization of a function, commonly used in probabilistic graphical models. In the context of GTSAM, it is used to represent the structure of optimization problems, particularly in the domain of simultaneous localization and mapping (SLAM) and structure from motion (SfM).\n",
"The `NonlinearFactorGraph` class in GTSAM is a key component for representing and solving nonlinear factor graphs. A factor graph is a bipartite graph that represents the factorization of a function, commonly used in probabilistic graphical models. In the context of GTSAM, it is used to represent the structure of optimization problems, e.g., in the domain of simultaneous localization and mapping (SLAM) or structure from motion (SfM).\n",
"\n",
"## Key Functionalities\n",
"\n",
@@ -35,32 +33,31 @@
"- **empty**: Checks if the graph contains any factors.\n",
"- **at**: Accesses a specific factor by its index.\n",
"- **back**: Retrieves the last factor in the graph.\n",
"- **front**: Retrieves the first factor in the graph.\n",
"\n",
"- **front**: Retrieves the first factor in the graph."
]
},
{
"cell_type": "markdown",
"id": "71b15f4c",
"metadata": {},
"source": [
"### Optimization and Linearization\n",
"\n",
"- **linearize**: Converts the nonlinear factor graph into a linear factor graph at a given linearization point. This is a crucial step in iterative optimization algorithms like Gauss-Newton or Levenberg-Marquardt.\n",
"- **linearize**: Converts the nonlinear factor graph into a linear factor graph at a given linearization point. This is a crucial step in iterative optimization algorithms like [Gauss-Newton](./GaussNewtonOptimizer.ipynb) or [Levenberg-Marquardt](./LevenbergMarquardtOptimizer.ipynb).\n",
" \n",
" The linearization process involves computing the Jacobian matrices of the nonlinear functions, resulting in a linear approximation:\n",
" \n",
" $$ f(x) \\approx f(x_0) + J(x - x_0) $$\n",
" $$ f(x) \\approx f(x_0) + A(x - x_0) $$\n",
" \n",
" where $J$ is the Jacobian matrix evaluated at the point $x_0$.\n",
"\n",
"### Utilities\n",
"\n",
"- **equals**: Compares two nonlinear factor graphs for equality, considering both the structure and the factors themselves.\n",
"- **clone**: Creates a deep copy of the factor graph, including all its factors.\n",
"\n",
"## Usage Notes\n",
"\n",
"The `NonlinearFactorGraph` class is designed to be flexible and efficient, allowing users to construct complex optimization problems by adding and managing factors. It integrates seamlessly with GTSAM's optimization algorithms, enabling robust solutions to large-scale nonlinear problems.\n",
"\n",
"For effective use, it is important to understand the nature of the factors being added and the implications of linearization on the optimization process. The class provides a robust interface for managing the lifecycle of a factor graph, from construction through to optimization and solution extraction."
" where $A$ is the Jacobian matrix evaluated at the point $x_0$."
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
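The linearize-and-solve loop that `linearize` enables can be sketched end to end for a one-variable graph. This is a pure-Python sketch of the Gauss-Newton idea, with a hypothetical measurement function $f(x) = x^2$ and measurement $z = 2$, not the GTSAM optimizer classes:

```python
# Illustrative sketch of the iterative loop the text describes: linearize the
# (single-factor) graph at the current estimate, solve the linear system,
# update, and repeat.

def gauss_newton(x, z, iterations=10):
    for _ in range(iterations):
        fx = x * x
        A = 2.0 * x      # Jacobian of f(x) = x^2 at the linearization point
        b = z - fx       # prediction error
        dx = b / A       # solve the (scalar) linearized system A dx = b
        x = x + dx       # update the linearization point
    return x

x_opt = gauss_newton(1.0, 2.0)
print(round(x_opt, 6))  # converges to sqrt(2) ~ 1.414214
```

Each pass is exactly the pattern in the notebook: `linearize` produces the linear approximation at $x_0$, the linear problem is solved, and the result becomes the next linearization point.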
@ -5,69 +5,19 @@
"id": "2b6fc012",
"metadata": {},
"source": [
"# NonlinearISAM Class Documentation\n",
"\n",
"*Disclaimer: This documentation was generated by AI and may require human revision for accuracy and completeness.*\n",
"# NonlinearISAM\n",
"\n",
"## Overview\n",
"\n",
"The `NonlinearISAM` class in GTSAM is a powerful tool for incrementally solving nonlinear factor graphs. It is particularly useful in applications where the problem is continuously evolving, such as in SLAM (Simultaneous Localization and Mapping) or incremental structure-from-motion. The class leverages the iSAM (incremental Smoothing and Mapping) algorithm to efficiently update solutions as new measurements are added.\n",
"\n",
"## Key Features\n",
"\n",
"- **Incremental Updates**: `NonlinearISAM` allows for the efficient update of the solution when new factors are added to the graph. This is crucial for real-time applications where the problem is continuously changing.\n",
" \n",
"- **Batch Initialization**: The class can perform a full batch optimization to initialize the solution, which can then be refined incrementally.\n",
"\n",
"- **Marginalization**: It supports marginalizing out variables that are no longer needed, which helps in maintaining computational efficiency.\n",
"\n",
"## Main Methods\n",
"\n",
"### Initialization and Update\n",
"\n",
"- **`update`**: This method is central to the `NonlinearISAM` class. It allows for the addition of new factors and variables to the existing factor graph. The update is performed incrementally, leveraging previous computations to enhance efficiency.\n",
"\n",
"- **`estimate`**: After performing updates, this method retrieves the current best estimate of the variable values.\n",
"\n",
"### Batch Operations\n",
"\n",
"- **`batchStep`**: This method performs a full batch optimization, which can be useful for reinitializing the solution or when a significant change in the problem structure occurs.\n",
"\n",
"### Marginalization\n",
"\n",
"- **`marginalize`**: This method allows for the removal of variables from the factor graph. Marginalization is useful for reducing the problem size and maintaining efficiency.\n",
"\n",
"## Mathematical Background\n",
"\n",
"The `NonlinearISAM` class operates on factor graphs, which are bipartite graphs consisting of variable nodes and factor nodes. The goal is to find the configuration of variables that maximizes the product of all factors, often expressed as:\n",
"\n",
"$$\n",
"\\max_{\\mathbf{x}} \\prod_{i} \\phi_i(\\mathbf{x}_i)\n",
"$$\n",
"\n",
"where $\\phi_i(\\mathbf{x}_i)$ are the factors depending on subsets of variables $\\mathbf{x}_i$.\n",
"\n",
"The iSAM algorithm updates the solution by incrementally solving the linearized system of equations derived from the factor graph:\n",
"\n",
"$$\n",
"\\mathbf{A} \\Delta \\mathbf{x} = \\mathbf{b}\n",
"$$\n",
"\n",
"where $\\mathbf{A}$ is the Jacobian matrix of the factors, $\\Delta \\mathbf{x}$ is the update to the variable estimates, and $\\mathbf{b}$ is the residual vector.\n",
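The linearized update above can be made concrete with a tiny standalone sketch (plain Python, not the GTSAM API): a single scalar measurement $z = h(x)$ with $h(x) = x^2$, repeatedly linearized and solved.

```python
# Toy illustration of solving A * dx = b: one scalar measurement
# z = h(x) with h(x) = x**2, linearized around the current estimate.
def h(x):
    return x * x

def jacobian(x):
    return 2.0 * x  # dh/dx

z = 4.0  # observed measurement
x = 1.0  # initial estimate

for _ in range(10):
    A = jacobian(x)  # Jacobian at the current estimate
    b = z - h(x)     # residual
    x += b / A       # solve the (here scalar) linear system and update

# x converges toward sqrt(4) = 2
```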
"\n",
"## Usage Notes\n",
"\n",
"- **Efficiency**: The incremental nature of `NonlinearISAM` makes it highly efficient for large-scale problems where new data is continuously being integrated.\n",
"\n",
"- **Robustness**: The ability to perform batch optimizations and marginalize variables provides robustness against changes in the problem structure.\n",
"\n",
"- **Applications**: This class is particularly suited for robotics and computer vision applications where real-time performance is critical.\n",
"\n",
"In summary, the `NonlinearISAM` class is a sophisticated tool for handling dynamic nonlinear optimization problems, offering both incremental and batch processing capabilities to efficiently manage evolving factor graphs."
"The `NonlinearISAM` class wraps the incremental factorization version of iSAM {cite:p}`http://dx.doi.org/10.1109/TRO.2008.2006706`. It is largely superseded by [iSAM2](./ISAM2.ipynb) but it is a simpler class, with fewer bells and whistles, that might be easier to debug. For background, also see the more recent booklet by {cite:t}`https://doi.org/10.1561/2300000043`.\n"
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@ -5,13 +5,11 @@
"id": "ec35011c",
"metadata": {},
"source": [
"# GTSAM PriorFactor Class Documentation\n",
"\n",
"*Disclaimer: This documentation was generated by AI and may require human revision for accuracy and completeness.*\n",
"# PriorFactor\n",
"\n",
"## Overview\n",
"\n",
"The `PriorFactor` class in GTSAM is a specialized factor used in probabilistic graphical models, particularly within the context of nonlinear optimization and estimation problems. It represents a prior belief about a variable in the form of a Gaussian distribution. This class is crucial for incorporating prior knowledge into the optimization process, which can significantly enhance the accuracy and robustness of the solutions.\n",
"The `PriorFactor` represents a prior belief about a variable in the form of a Gaussian distribution. This class is crucial for incorporating prior knowledge into the optimization process, which can significantly enhance the accuracy and robustness of the solutions.\n",
"\n",
"## Key Functionalities\n",
"\n",
@ -29,17 +27,13 @@
"\n",
"where $x$ is the estimated value, and $\\mu$ is the prior mean. The error is then weighted by the noise model to form the contribution of this factor to the overall objective function.\n",
"\n",
"### Jacobian Computation\n",
"### Adding to a Factor Graph\n",
"\n",
"In the context of optimization, the `PriorFactor` provides methods to compute the Jacobian of the error function. This is essential for gradient-based optimization algorithms, which rely on derivatives to iteratively improve the solution.\n",
"\n",
"### Contribution to Factor Graph\n",
"\n",
"The `PriorFactor` contributes to the factor graph by adding a term to the objective function that penalizes deviations from the prior. This term is integrated into the overall optimization problem, ensuring that the solution respects the prior knowledge encoded by the factor.\n",
"[NonlinearFactorGraph](./NonlinearFactorGraph.ipynb) has a templated method `addPrior<T>` that provides a convenient way to add priors.\n",
"\n",
"## Usage Considerations\n",
"\n",
"- **Noise Model**: The choice of noise model is critical as it determines how strongly the prior is enforced. A tighter noise model implies a stronger belief in the prior.\n",
"- **Noise Model**: The choice of noise model is critical as it determines how strongly the prior is enforced. A tighter noise model implies a stronger belief in the prior. *Note that very strong priors can make the condition number of the resulting linear systems very high; in that case, consider a [NonlinearEquality](./NonlinearEquality.ipynb) factor instead.*\n",
"- **Integration with Other Factors**: The `PriorFactor` is typically used in conjunction with other factors that model the system dynamics and measurements. It helps anchor the solution, especially in scenarios with limited or noisy measurements.\n",
"- **Applications**: Common applications include SLAM (Simultaneous Localization and Mapping), where priors on initial poses or landmarks can significantly improve map accuracy and convergence speed.\n",
"\n",
@ -49,7 +43,11 @@
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@ -5,62 +5,63 @@
"id": "5a0c879e",
"metadata": {},
"source": [
"# WhiteNoiseFactor Class Documentation\n",
"# WhiteNoiseFactor\n",
"\n",
"*Disclaimer: This documentation was generated by AI and may require human revision for accuracy and completeness.*\n",
"*Below is partly generated with ChatGPT 4o, needs to be verified.*\n",
"\n",
"## Overview\n",
"\n",
"The `WhiteNoiseFactor` class in GTSAM is a specialized factor used in nonlinear optimization problems, particularly in the context of probabilistic graphical models. This class models the effect of white noise on a measurement, which is a common assumption in many estimation problems. The primary purpose of this class is to incorporate the uncertainty due to white noise into the optimization process.\n",
"\n",
"## Key Functionalities\n",
"\n",
"### Noise Modeling\n",
"\n",
"- **White Noise Assumption**: The class assumes that the noise affecting the measurements is Gaussian and uncorrelated, which is often referred to as \"white noise\". This assumption simplifies the mathematical treatment of noise in the optimization problem.\n",
"\n",
"### Factor Operations\n",
"\n",
"- **Error Calculation**: The `WhiteNoiseFactor` computes the error between the predicted and observed measurements, incorporating the noise model. This error is crucial for the optimization process as it influences the adjustment of variables to minimize the overall error in the system.\n",
"\n",
"- **Jacobian Computation**: The class provides methods to compute the Jacobian of the error function with respect to the variables involved. The Jacobian is essential for gradient-based optimization techniques, as it provides the necessary derivatives to guide the optimization algorithm.\n",
"\n",
"### Mathematical Formulation\n",
"\n",
"The error function for a `WhiteNoiseFactor` can be represented as:\n",
"\n",
"$$ e(x) = h(x) - z $$\n",
"The `WhiteNoiseFactor` in GTSAM is a binary nonlinear factor designed to estimate the parameters of zero-mean Gaussian white noise. It uses a **mean-precision parameterization**, where the mean $ \\mu $ and precision $ \\tau = 1/\\sigma^2 $ are treated as variables to be optimized."
]
},
{
"cell_type": "markdown",
"id": "b40b3242",
"metadata": {},
"source": [
"## Parameterization\n",
"\n",
"The factor models the negative log-likelihood of a zero-mean Gaussian distribution as follows,\n",
"$$\n",
"f(z, \\mu, \\tau) = \\log(\\sqrt{2\\pi}) - 0.5 \\log(\\tau) + 0.5 \\tau (z - \\mu)^2\n",
"$$\n",
"where:\n",
"- $e(x)$ is the error function.\n",
"- $h(x)$ is the predicted measurement based on the current estimate of the variables.\n",
"- $z$ is the observed measurement.\n",
"- $ z $: Measurement value (observed data).\n",
"- $ \\mu $: Mean of the Gaussian distribution (to be estimated).\n",
"- $ \\tau $: Precision of the Gaussian distribution ($ \\tau = 1/\\sigma^2 $), also to be estimated.\n",
"\n",
"The noise is assumed to be Gaussian with zero mean and a certain covariance, which is often represented as:\n",
"This formulation allows the factor to optimize both the mean and precision of the noise model simultaneously."
]
},
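As a quick standalone check (plain Python, not GTSAM code), the expression for $f(z, \mu, \tau)$ above is exactly the negative log of a Gaussian density with mean $\mu$ and standard deviation $\sigma = 1/\sqrt{\tau}$:

```python
import math

def white_noise_nll(z, mu, tau):
    # f(z, mu, tau) in the mean-precision parameterization
    return math.log(math.sqrt(2 * math.pi)) - 0.5 * math.log(tau) \
        + 0.5 * tau * (z - mu) ** 2

def gaussian_nll(z, mu, sigma):
    # negative log of the Gaussian pdf N(z; mu, sigma^2)
    pdf = math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return -math.log(pdf)

z, mu, sigma = 1.3, 1.0, 0.5
tau = 1.0 / sigma ** 2
# the two expressions agree to machine precision
```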
{
"cell_type": "markdown",
"id": "2f36abdb",
"metadata": {},
"source": [
"## Use Case: Estimating IMU Noise Characteristics\n",
"\n",
"$$ \\text{Cov}(e) = \\sigma^2 I $$\n",
"The `WhiteNoiseFactor` can be used in system identification tasks, such as estimating the noise characteristics of an IMU. Here's how it would work:\n",
"\n",
"where $\\sigma^2$ is the variance of the noise and $I$ is the identity matrix.\n",
"1. **Define the Measurement**:\n",
" - Collect a series of IMU measurements (e.g., accelerometer or gyroscope readings) under known conditions (e.g., stationary or constant velocity).\n",
"\n",
"### Optimization Integration\n",
"2. **Set Up the Factor Graph**:\n",
" - Add `WhiteNoiseFactor` instances to the factor graph for each measurement, linking the observed value $ z $ to the mean and precision variables.\n",
"\n",
"- **Factor Graphs**: The `WhiteNoiseFactor` is integrated into factor graphs, which are a key structure in GTSAM for representing and solving large-scale estimation problems. Each factor in the graph contributes to the overall error that the optimization process seeks to minimize.\n",
"3. **Optimize the Graph**:\n",
" - Use GTSAM's nonlinear optimization tools to solve for the mean $ \\mu $ and precision $ \\tau $ that best explain the observed measurements.\n",
"\n",
"- **Nonlinear Optimization**: The class is designed to work seamlessly with GTSAM's nonlinear optimization framework, allowing it to handle complex, real-world estimation problems that involve non-linear relationships between variables.\n",
"\n",
"## Usage Notes\n",
"\n",
"- **Assumptions**: Users should ensure that the white noise assumption is valid for their specific application, as deviations from this assumption can lead to suboptimal estimation results.\n",
"\n",
"- **Integration**: The `WhiteNoiseFactor` should be used in conjunction with other factors and variables in a factor graph to effectively model the entire system being estimated.\n",
"\n",
"- **Performance**: The efficiency of the optimization process can be influenced by the choice of noise model and the structure of the factor graph. Proper tuning and validation are recommended to achieve optimal performance.\n",
"\n",
"In summary, the `WhiteNoiseFactor` class is a powerful tool in GTSAM for modeling and mitigating the effects of white noise in nonlinear estimation problems. Its integration into factor graphs and compatibility with GTSAM's optimization algorithms make it a versatile component for a wide range of applications."
"4. **Extract Noise Characteristics**:\n",
" - The optimized mean $ \\mu $ represents the bias in the sensor measurements.\n",
" - The optimized precision $ \\tau $ can be inverted to compute the standard deviation $ \\sigma = 1/\\sqrt{\\tau} $, which represents the noise level."
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
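Minimizing the summed negative log-likelihood $\sum_i f(z_i, \mu, \tau)$ has a closed form: $\hat\mu$ is the sample mean and $\hat\tau$ the inverse sample variance. A plain-Python sketch of steps 1-4 with simulated readings (illustrative bias and noise values, no GTSAM required):

```python
import math
import random

# Step 1: simulated stationary accelerometer readings with a small bias.
random.seed(7)
true_bias, true_sigma = 0.05, 0.2  # illustrative values
samples = [random.gauss(true_bias, true_sigma) for _ in range(20000)]

# Steps 2-3: the minimizer of sum_i f(z_i, mu, tau) in closed form
# (sample mean and inverse sample variance).
mu_hat = sum(samples) / len(samples)
tau_hat = len(samples) / sum((z - mu_hat) ** 2 for z in samples)

# Step 4: recover the bias and the noise level.
sigma_hat = 1.0 / math.sqrt(tau_hat)
```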
@ -4,48 +4,50 @@ The `nonlinear` module in GTSAM includes a comprehensive set of tools for nonlin

## Core Classes

- **[NonlinearFactorGraph](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/NonlinearFactorGraph.h)**: Represents the optimization problem as a graph of factors.
- **[NonlinearFactor](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/NonlinearFactor.h)**: Base class for all nonlinear factors.
- **[NoiseModelFactor](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/NoiseModelFactor.h)**: Base class for factors with noise models.
- **[Values](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/Values.h)**: Container for variable assignments used in optimization.
- [NonlinearFactorGraph](doc/NonlinearFactorGraph.ipynb): Represents the optimization problem as a graph of factors.
- [NonlinearFactor](doc/NonlinearFactor.ipynb): Base class for all nonlinear factors.
- [NoiseModelFactor](doc/NonlinearFactor.ipynb): Base class for factors with noise models.
- [Values](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/Values.h): Container for variable assignments used in optimization.

## Batch Optimizers

- **[NonlinearOptimizer](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/NonlinearOptimizer.h)**: Base class for all batch optimizers.
- **[NonlinearOptimizerParams](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/NonlinearOptimizerParams.h)**: Base parameters class for all optimizers.
- [NonlinearOptimizer](doc/NonlinearOptimizer.ipynb): Base class for all batch optimizers.
- [NonlinearOptimizerParams](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/NonlinearOptimizerParams.h): Base parameters class for all optimizers.

- **[GaussNewtonOptimizer](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GaussNewtonOptimizer.h)**: Implements Gauss-Newton optimization.
- **[GaussNewtonParams](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GaussNewtonParams.h)**: Parameters for Gauss-Newton optimization.
- [GaussNewtonOptimizer](doc/GaussNewtonOptimizer.ipynb): Implements Gauss-Newton optimization.
- [GaussNewtonParams](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GaussNewtonParams.h): Parameters for Gauss-Newton optimization.

- **[LevenbergMarquardtOptimizer](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/LevenbergMarquardtOptimizer.h)**: Implements Levenberg-Marquardt optimization.
- **[LevenbergMarquardtParams](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/LevenbergMarquardtParams.h)**: Parameters for Levenberg-Marquardt optimization.
- [LevenbergMarquardtOptimizer](doc/LevenbergMarquardtOptimizer.ipynb): Implements Levenberg-Marquardt optimization.
- [LevenbergMarquardtParams](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/LevenbergMarquardtParams.h): Parameters for Levenberg-Marquardt optimization.

- **[DoglegOptimizer](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/DoglegOptimizer.h)**: Implements Powell's Dogleg optimization.
- **[DoglegParams](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/DoglegParams.h)**: Parameters for Dogleg optimization.
- [DoglegOptimizer](doc/DoglegOptimizer.ipynb): Implements Powell's Dogleg optimization.
- [DoglegParams](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/DoglegParams.h): Parameters for Dogleg optimization.

- **[GncOptimizer](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GncOptimizer.h)**: Implements robust optimization using Graduated Non-Convexity.
- **[GncParams](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GncParams.h)**: Parameters for Graduated Non-Convexity optimization.
- [GncOptimizer](doc/GncOptimizer.ipynb): Implements robust optimization using Graduated Non-Convexity.
- [GncParams](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GncParams.h): Parameters for Graduated Non-Convexity optimization.

## Incremental Optimizers

- **[ISAM2](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/ISAM2.h)**: Incremental Smoothing and Mapping 2, with fluid relinearization.
- **[ISAM2Params](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/ISAM2Params.h)**: Parameters controlling the ISAM2 algorithm.
- **[ISAM2Result](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/ISAM2Result.h)**: Results from ISAM2 update operations.
- **[NonlinearISAM](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/NonlinearISAM.h)**: Original iSAM implementation (mostly superseded by ISAM2).
- [ISAM2](doc/ISAM2.ipynb): Incremental Smoothing and Mapping 2, with fluid relinearization.
- [ISAM2Params](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/ISAM2Params.h): Parameters controlling the ISAM2 algorithm.
- [ISAM2Result](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/ISAM2Result.h): Results from ISAM2 update operations.
- [NonlinearISAM](doc/NonlinearISAM.ipynb): Original iSAM implementation (mostly superseded by ISAM2).

## Specialized Factors

- **[PriorFactor](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/PriorFactor.h)**: Imposes a prior constraint on a variable.
- **[NonlinearEquality](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/NonlinearEquality.h)**: Enforces equality constraints between variables.
- **[LinearContainerFactor](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/LinearContainerFactor.h)**: Wraps linear factors for inclusion in nonlinear factor graphs.
- [PriorFactor](doc/PriorFactor.ipynb): Imposes a prior constraint on a variable.
- [NonlinearEquality](doc/NonlinearEquality.ipynb): Enforces equality constraints between variables.
- [LinearContainerFactor](doc/LinearContainerFactor.ipynb): Wraps linear factors for inclusion in nonlinear factor graphs.
- [WhiteNoiseFactor](doc/WhiteNoiseFactor.ipynb): Binary factor to estimate parameters of zero-mean Gaussian white noise.

## Filtering and Smoothing

- **[ExtendedKalmanFilter](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/ExtendedKalmanFilter.h)**: Nonlinear Kalman filter implementation.
- **[FixedLagSmoother](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/FixedLagSmoother.h)**: Base class for fixed-lag smoothers.
- **[BatchFixedLagSmoother](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/BatchFixedLagSmoother.h)**: Implementation of a fixed-lag smoother using batch optimization.
- [ExtendedKalmanFilter](doc/ExtendedKalmanFilter.ipynb): Nonlinear Kalman filter implementation.
- [FixedLagSmoother](doc/FixedLagSmoother.ipynb): Base class for fixed-lag smoothers.
- [BatchFixedLagSmoother](doc/BatchFixedLagSmoother.ipynb): Implementation of a fixed-lag smoother using batch optimization.
- [IncrementalFixedLagSmoother](doc/IncrementalFixedLagSmoother.ipynb): Implementation of a fixed-lag smoother using iSAM2.

## Analysis and Visualization

- **[Marginals](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/Marginals.h)**: Computes marginal covariances from optimization results.
- **[GraphvizFormatting](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GraphvizFormatting.h)**: Provides customization for factor graph visualization.
- [Marginals](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/Marginals.h): Computes marginal covariances from optimization results.
- [GraphvizFormatting](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GraphvizFormatting.h): Provides customization for factor graph visualization.