NLO and Dogleg docs
parent f2745c47ef
commit 8a0521bde9
@@ -7,41 +7,9 @@
"source": [
"# DoglegOptimizer Class Documentation\n",
"\n",
"*Disclaimer: This documentation was generated by AI and may require human revision for accuracy and completeness.*\n",
"\n",
"## Overview\n",
"\n",
"The `DoglegOptimizer` class in GTSAM is a specialized optimization algorithm designed for solving nonlinear least squares problems. It implements the Dogleg method, which is a hybrid approach combining the steepest descent and Gauss-Newton methods. This optimizer is particularly effective for problems where the Hessian is difficult to compute or when the initial guess is far from the solution.\n",
"\n",
"## Key Features\n",
"\n",
"- **Hybrid Approach**: Combines the strengths of both the steepest descent and Gauss-Newton methods.\n",
"- **Trust Region Method**: Utilizes a trust region to determine the step size, balancing between the accuracy of Gauss-Newton and the robustness of steepest descent.\n",
"- **Efficient for Nonlinear Problems**: Designed to handle complex nonlinear least squares problems effectively.\n",
"\n",
"## Key Methods\n",
"\n",
"### Initialization and Setup\n",
"\n",
"- **Constructor**: Initializes the optimizer with default or specified parameters.\n",
"- **setDeltaInitial**: Sets the initial trust region radius, $\\Delta_0$, which influences the step size in the optimization process.\n",
"\n",
"### Optimization Process\n",
"\n",
"- **optimize**: Executes the optimization process, iteratively refining the solution to minimize the error in the nonlinear least squares problem.\n",
"- **iterate**: Performs a single iteration of the Dogleg optimization, updating the current estimate based on the trust region and the computed step.\n",
"\n",
"### Result Evaluation\n",
"\n",
"- **error**: Computes the error of the current estimate, providing a measure of how well the current solution fits the problem constraints.\n",
"- **values**: Returns the optimized values after the optimization process is complete.\n",
"\n",
"### Trust Region Management\n",
"\n",
"- **getDelta**: Retrieves the current trust region radius, $\\Delta$, which is crucial for understanding the optimizer's step size decisions.\n",
"- **setDelta**: Manually sets the trust region radius, allowing for fine-tuned control over the optimization process.\n",
"\n",
"## Mathematical Formulation\n",
"\n",
"The Dogleg method is characterized by its use of two distinct steps:\n",
"\n",
@@ -55,16 +23,58 @@
"\n",
"The Dogleg step, $p_{dl}$, is a combination of these two steps, determined by the trust region radius $\\Delta$.\n",
"\n",
"Its key features are:\n",
"\n",
"- **Hybrid Approach**: Combines the strengths of both the steepest descent and Gauss-Newton methods.\n",
"- **Trust Region Method**: Utilizes a trust region to determine the step size, balancing between the accuracy of Gauss-Newton and the robustness of steepest descent.\n",
"- **Efficient for Nonlinear Problems**: Designed to handle complex nonlinear least squares problems effectively."
]
},
{
"cell_type": "markdown",
"id": "758e347b",
"metadata": {},
"source": [
"## Key Methods\n",
"\n",
"Please see the base class [NonlinearOptimizer.ipynb](NonlinearOptimizer.ipynb).\n",
"\n",
"## Parameters\n",
"\n",
"The `DoglegParams` class defines parameters specific to Powell's Dogleg optimization algorithm:\n",
"\n",
"| Parameter | Description |\n",
"|-----------|-------------|\n",
"| `deltaInitial` | Initial trust region radius that controls step size (default: 1.0) |\n",
"| `verbosityDL` | Controls algorithm-specific diagnostic output (options: SILENT, VERBOSE) |\n",
"\n",
"These parameters complement the standard optimization parameters inherited from `NonlinearOptimizerParams`, which include:\n",
"\n",
"- Maximum iterations\n",
"- Relative and absolute error thresholds\n",
"- Error function verbosity\n",
"- Linear solver type\n",
"\n",
"Powell's Dogleg algorithm combines Gauss-Newton and gradient descent approaches within a trust region framework. The `deltaInitial` parameter defines the initial size of this trust region, which adaptively changes during optimization based on how well the linear approximation matches the nonlinear function.\n",
"\n",
"## Usage Considerations\n",
"\n",
"- **Initial Guess**: The performance of the Dogleg optimizer can be sensitive to the initial guess. A good initial estimate can significantly speed up convergence.\n",
"- **Parameter Tuning**: The choice of the initial trust region radius and other parameters can affect the convergence rate and stability of the optimization.\n",
|
||||
"\n",
"The `DoglegOptimizer` is a powerful tool for solving nonlinear optimization problems, particularly when dealing with large-scale systems where computational efficiency is crucial. By leveraging the hybrid approach of the Dogleg method, it provides a robust solution capable of handling a wide range of problem complexities."
"## Files\n",
"\n",
"- [DoglegOptimizer.h](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/DoglegOptimizer.h)\n",
"- [DoglegOptimizer.cpp](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/DoglegOptimizer.cpp)\n",
"- [DoglegParams.h](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/DoglegParams.h)"
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -7,60 +7,47 @@
"source": [
"# NonlinearOptimizer Class Documentation\n",
"\n",
"*Disclaimer: This documentation was generated by AI and may require human revision for accuracy and completeness.*\n",
"\n",
"## Overview\n",
"\n",
"The `NonlinearOptimizer` class in GTSAM is a foundational component for solving nonlinear optimization problems. It provides a flexible interface for optimizing nonlinear factor graphs, which are commonly used in robotics and computer vision applications.\n",
"The `NonlinearOptimizer` class in GTSAM is a the base class for (batch) nonlinear optimization solvers. It provides the basic API for optimizing nonlinear factor graphs, commonly used in robotics and computer vision applications.\n",
|
||||
"\n",
"The primary purpose of the `NonlinearOptimizer` is to iteratively refine an initial estimate of a solution to minimize a nonlinear cost function. This class serves as a base class for specific optimization algorithms like Gauss-Newton, Levenberg-Marquardt, and Dogleg.\n",
"The primary purpose of the `NonlinearOptimizer` is to iteratively refine an initial estimate of a solution to minimize a nonlinear cost function. Specific optimization algorithms like Gauss-Newton, Levenberg-Marquardt, and Dogleg and implemented in derived class.\n",
|
||||
"\n",
"## Key Methods\n",
"\n",
"### `optimize()`\n",
"The `optimize()` method is the core function of the `NonlinearOptimizer` class. It performs the optimization process, iteratively updating the estimate to converge to a local minimum of the cost function.\n",
"\n",
"### `error()`\n",
"The `error()` method computes the total error of the current estimate. This is typically the sum of squared errors for all factors in the graph. Mathematically, the error can be expressed as:\n",
"\n",
"$$\n",
"E(x) = \\sum_{i} \\| f_i(x) \\|^2\n",
"$$\n",
"\n",
"where $f_i(x)$ represents the residual error of the $i$-th factor.\n",
"\n",
"### `values()`\n",
"The `values()` method returns the current set of variable estimates. These estimates are updated during the optimization process.\n",
"\n",
"### `iterations()`\n",
"The `iterations()` method provides the number of iterations performed during the optimization process. This can be useful for analyzing the convergence behavior of the optimizer.\n",
"\n",
"### `params()`\n",
"The `params()` method returns the parameters used by the optimizer. These parameters can include settings like convergence thresholds, maximum iterations, and other algorithm-specific options.\n",
"\n",
"## Usage\n",
"\n",
"The `NonlinearOptimizer` class is typically not used directly. Instead, one of its derived classes, such as `GaussNewtonOptimizer`, `LevenbergMarquardtOptimizer`, or `DoglegOptimizer`, is used to perform specific types of optimization. These derived classes implement the `optimize()` method according to their respective algorithms.\n",
"\n",
"## Mathematical Foundations\n",
"## Mathematical Foundation\n",
"\n",
"The optimization process in `NonlinearOptimizer` is based on iterative methods that solve for the minimum of a nonlinear cost function. The general approach involves linearizing the nonlinear problem at the current estimate and solving the resulting linear system to update the estimate. This process is repeated until convergence criteria are met.\n",
"\n",
"The optimization problem can be formally defined as:\n",
"\n",
"$$\n",
"\\min_{x} \\sum_{i} \\| f_i(x) \\|^2\n",
"\\min_{x} \\sum_{i} \\| \\phi_i(x) \\|^2\n",
"$$\n",
"\n",
"where $x$ is the vector of variables to be optimized, and $f_i(x)$ are the residuals of the factors in the graph.\n",
"where $x$ is the vector of variables to be optimized, and $\\phi_i(x)$ are the residuals of the factors in the graph.\n",
"\n",
"## Conclusion\n",
"## Key Methods\n",
"\n",
"The `NonlinearOptimizer` class is a crucial component in GTSAM for solving nonlinear optimization problems. By providing a common interface and shared functionality, it enables the implementation of various optimization algorithms tailored to specific problem requirements. Understanding the key methods and their roles is essential for effectively utilizing this class in practical applications."
"- The `optimize()` method is the core function of the `NonlinearOptimizer` class. It performs the optimization process, iteratively updating the estimate to converge to a local minimum of the cost function.\n",
"- The `error()` method computes the total error of the current estimate. This is typically the sum of squared errors for all factors in the graph. Mathematically, the error can be expressed as:\n",
" $$\n",
" E(x) = \\sum_{i} \\| \\phi_i(x) \\|^2\n",
" $$\n",
" where $\\phi_i(x)$ represents the residual error of the $i$-th factor.\n",
"- The `values()` method returns the current set of variable estimates. These estimates are updated during the optimization process.\n",
"- The `iterations()` method provides the number of iterations performed during the optimization process. This can be useful for analyzing the convergence behavior of the optimizer.\n",
"- The `params()` method returns the parameters used by the optimizer. These parameters can include settings like convergence thresholds, maximum iterations, and other algorithm-specific options.\n",
"\n",
"## Usage\n",
"\n",
"The `NonlinearOptimizer` class is typically not used directly. Instead, one of its derived classes, such as `GaussNewtonOptimizer`, `LevenbergMarquardtOptimizer`, or `DoglegOptimizer`, is used to perform specific types of optimization. These derived classes implement the `optimize()` method according to their respective algorithms."
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}