{
"cells": [
{
"cell_type": "markdown",
"id": "6463d580",
"metadata": {},
"source": [
"# GaussNewtonOptimizer\n",
"\n",
"## Overview\n",
"\n",
"The `GaussNewtonOptimizer` class in GTSAM optimizes nonlinear factor graphs using the Gauss-Newton algorithm. It is best suited to problems whose cost function is well approximated by a quadratic near the minimum. Gauss-Newton is an iterative method that relinearizes the nonlinear system around the current estimate at each iteration.\n",
"\n",
"The Gauss-Newton algorithm linearizes the nonlinear residuals $r(x)$ around the current estimate $x_k$. The update step is obtained by solving the normal equations:\n",
"\n",
"$$ J(x_k)^T J(x_k) \\Delta x = -J(x_k)^T r(x_k) $$\n",
"\n",
"where $J(x_k)$ is the Jacobian of the residuals with respect to the variables. The solution $\\Delta x$ updates the estimate:\n",
"\n",
"$$ x_{k+1} = x_k + \\Delta x $$\n",
"\n",
"This process is repeated iteratively until convergence.\n",
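"\n",
"As a concrete illustration, one Gauss-Newton iteration loop can be sketched in plain NumPy. This is a generic sketch of the algorithm above, not GTSAM's implementation; the exponential model, data, and tolerance below are invented for the example:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Synthetic data from y = exp(0.3 * t); the parameter a = 0.3 is to be recovered.\n",
"t = np.linspace(0.0, 4.0, 20)\n",
"y = np.exp(0.3 * t)\n",
"\n",
"a = np.array([1.0])  # initial guess x_0\n",
"for _ in range(50):\n",
"    r = np.exp(a * t) - y                    # residuals r(x_k)\n",
"    J = (t * np.exp(a * t)).reshape(-1, 1)   # Jacobian dr/da at x_k\n",
"    # Solve the normal equations J^T J dx = -J^T r\n",
"    dx = np.linalg.solve(J.T @ J, -J.T @ r)\n",
"    a = a + dx                               # x_{k+1} = x_k + dx\n",
"    if np.linalg.norm(dx) < 1e-12:           # converged\n",
"        break\n",
"\n",
"print(a)  # approximately [0.3]\n",
"```\n",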
"\n",
"Key features:\n",
"\n",
"- **Iterative Optimization**: The optimizer refines the solution iteratively by relinearizing the nonlinear system around the current estimate.\n",
"- **Convergence Control**: It provides mechanisms to control convergence through parameters such as maximum iterations and relative error tolerance.\n",
"- **Integration with GTSAM**: It integrates with GTSAM's factor graph framework and can be used with the library's various factor and variable types.\n",
"\n",
"## Key Methods\n",
"\n",
"This class inherits its interface from its base class; see [NonlinearOptimizer.ipynb](NonlinearOptimizer.ipynb).\n",
"\n",
"## Parameters\n",
"\n",
"The Gauss-Newton optimizer uses the standard optimization parameters inherited from `NonlinearOptimizerParams`, which include:\n",
"\n",
"- Maximum iterations\n",
"- Relative and absolute error thresholds\n",
"- Error function verbosity\n",
"- Linear solver type\n",
"\n",
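"A minimal end-to-end sketch with GTSAM's Python bindings, showing how these parameters are set (the two-pose graph, noise models, and tolerance values here are invented for illustration):\n",
"\n",
"```python\n",
"import numpy as np\n",
"import gtsam\n",
"\n",
"# A tiny graph: a prior on pose 1 and one odometry factor between poses 1 and 2.\n",
"graph = gtsam.NonlinearFactorGraph()\n",
"prior_noise = gtsam.noiseModel.Isotropic.Sigma(3, 0.1)\n",
"graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0, 0, 0), prior_noise))\n",
"odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))\n",
"graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2, 0, 0), odom_noise))\n",
"\n",
"# A deliberately perturbed initial estimate.\n",
"initial = gtsam.Values()\n",
"initial.insert(1, gtsam.Pose2(0.5, 0.1, 0.2))\n",
"initial.insert(2, gtsam.Pose2(2.3, 0.1, -0.2))\n",
"\n",
"# Convergence control through the inherited parameters.\n",
"params = gtsam.GaussNewtonParams()\n",
"params.setMaxIterations(100)\n",
"params.setRelativeErrorTol(1e-5)\n",
"\n",
"optimizer = gtsam.GaussNewtonOptimizer(graph, initial, params)\n",
"result = optimizer.optimize()\n",
"```\n",
"\n",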
"## Usage Considerations\n",
"\n",
"- **Initial Guess**: The quality of the initial guess strongly affects both the convergence and the performance of the Gauss-Newton optimizer.\n",
"- **Non-convexity**: Because the method relies on local linear approximations, it may diverge on highly non-convex problems or from poor initial estimates.\n",
"- **Performance**: For problems that are well approximated by a quadratic model near the solution, Gauss-Newton is typically faster than damped methods such as Levenberg-Marquardt, at the cost of robustness.\n",
"\n",
"## Files\n",
"\n",
"- [GaussNewtonOptimizer.h](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GaussNewtonOptimizer.h)\n",
"- [GaussNewtonOptimizer.cpp](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GaussNewtonOptimizer.cpp)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}