Refine other optimizers' docs

release/4.3a0
p-zach 2025-04-05 16:00:52 -04:00
parent 7c1a1e0765
commit 159e185764
4 changed files with 139 additions and 118 deletions

View File

@@ -5,41 +5,12 @@
"id": "6463d580",
"metadata": {},
"source": [
"# GaussNewtonOptimizer Class Documentation\n",
"\n",
"*Disclaimer: This documentation was generated by AI and may require human revision for accuracy and completeness.*\n",
"# GaussNewtonOptimizer\n",
"\n",
"## Overview\n",
"\n",
"The `GaussNewtonOptimizer` class in GTSAM is designed to optimize nonlinear factor graphs using the Gauss-Newton algorithm. This class is particularly suited for problems where the cost function can be approximated well by a quadratic function near the minimum. The Gauss-Newton method is an iterative optimization technique that updates the solution by linearizing the nonlinear system at each iteration.\n",
"\n",
"## Key Features\n",
"\n",
"- **Iterative Optimization**: The optimizer refines the solution iteratively by linearizing the nonlinear system around the current estimate.\n",
"- **Convergence Control**: It provides mechanisms to control the convergence through parameters such as maximum iterations and relative error tolerance.\n",
"- **Integration with GTSAM**: Seamlessly integrates with GTSAM's factor graph framework, allowing it to be used with various types of factors and variables.\n",
"\n",
"## Key Methods\n",
"\n",
"### Constructor\n",
"\n",
"- **GaussNewtonOptimizer**: Initializes the optimizer with a given factor graph and initial values. The constructor sets up the optimization problem and prepares it for iteration.\n",
"\n",
"### Optimization\n",
"\n",
"- **optimize**: Executes the optimization process. This method runs the Gauss-Newton iterations until convergence criteria are met, such as reaching the maximum number of iterations or achieving a relative error below a specified threshold.\n",
"\n",
"### Convergence Criteria\n",
"\n",
"- **checkConvergence**: Evaluates whether the optimization process has converged based on the change in error and the specified tolerance levels.\n",
"\n",
"### Accessors\n",
"\n",
"- **error**: Returns the current error of the factor graph with respect to the current estimate. This is useful for monitoring the progress of the optimization.\n",
"- **values**: Retrieves the current estimate of the variable values after optimization.\n",
"\n",
"## Mathematical Background\n",
"\n",
"The Gauss-Newton algorithm is based on the idea of linearizing the nonlinear residuals $r(x)$ around the current estimate $x_k$. The update step is derived from solving the normal equations:\n",
"\n",
"$$ J(x_k)^T J(x_k) \\Delta x = -J(x_k)^T r(x_k) $$\n",
@@ -50,17 +21,43 @@
"\n",
"This process is repeated iteratively until convergence.\n",
"\n",
"Key features:\n",
"\n",
"- **Iterative Optimization**: The optimizer refines the solution iteratively by linearizing the nonlinear system around the current estimate.\n",
"- **Convergence Control**: It provides mechanisms to control the convergence through parameters such as maximum iterations and relative error tolerance.\n",
"- **Integration with GTSAM**: Seamlessly integrates with GTSAM's factor graph framework, allowing it to be used with various types of factors and variables.\n",
"\n",
"## Key Methods\n",
"\n",
"Please see the base class [NonlinearOptimizer.ipynb](NonlinearOptimizer.ipynb).\n",
"\n",
"## Parameters\n",
"\n",
"The Gauss-Newton optimizer uses the standard optimization parameters inherited from `NonlinearOptimizerParams`, which include:\n",
"\n",
"- Maximum iterations\n",
"- Relative and absolute error thresholds\n",
"- Error function verbosity\n",
"- Linear solver type\n",
"\n",
"## Usage Considerations\n",
"\n",
"- **Initial Guess**: The quality of the initial guess can significantly affect the convergence and performance of the Gauss-Newton optimizer.\n",
"- **Non-convexity**: Since the method relies on linear approximations, it may struggle with highly non-convex problems or those with poor initial estimates.\n",
"- **Performance**: The Gauss-Newton method is generally faster than other nonlinear optimization methods like Levenberg-Marquardt for problems that are well-approximated by a quadratic model near the solution.\n",
"\n",
"In summary, the `GaussNewtonOptimizer` is a powerful tool for solving nonlinear optimization problems in factor graphs, particularly when the problem is well-suited to quadratic approximation. Its integration with GTSAM makes it a versatile choice for various applications in robotics and computer vision."
"## Files\n",
"\n",
"- [GaussNewtonOptimizer.h](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GaussNewtonOptimizer.h)\n",
"- [GaussNewtonOptimizer.cpp](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GaussNewtonOptimizer.cpp)"
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

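A minimal usage sketch of `GaussNewtonOptimizer` through the GTSAM Python wrapper, tying the normal-equations update above to the parameters listed in the notebook. The toy Pose2 chain, keys, and noise sigmas are illustrative assumptions, not taken from the notebook.

```python
# Minimal sketch: Gauss-Newton on a small Pose2 odometry chain (illustrative values).
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Anchor the first pose and add two odometry constraints.
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0, 0, 0), prior_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2, 0, 0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(2, 0, 0), odom_noise))

# Deliberately perturbed initial guesses; each iteration re-linearizes here.
initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.5, 0.0, 0.2))
initial.insert(2, gtsam.Pose2(2.3, 0.1, -0.2))
initial.insert(3, gtsam.Pose2(4.1, 0.1, 0.1))

# GaussNewtonParams inherits the shared NonlinearOptimizerParams options.
params = gtsam.GaussNewtonParams()
params.setMaxIterations(20)
params.setRelativeErrorTol(1e-5)

optimizer = gtsam.GaussNewtonOptimizer(graph, initial, params)
result = optimizer.optimize()
print("final error:", graph.error(result))
```
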
View File

@@ -5,42 +5,12 @@
"id": "c950beef",
"metadata": {},
"source": [
"# GTSAM GncOptimizer Class Documentation\n",
"\n",
"*Disclaimer: This documentation was generated by AI and requires human revision to ensure accuracy and completeness.*\n",
"# GncOptimizer\n",
"\n",
"## Overview\n",
"\n",
"The `GncOptimizer` class in GTSAM is designed to perform robust optimization using Graduated Non-Convexity (GNC). This method is particularly useful in scenarios where the optimization problem is affected by outliers. The GNC approach gradually transitions from a convex approximation of the problem to the original non-convex problem, thereby improving robustness and convergence.\n",
"\n",
"## Key Features\n",
"\n",
"- **Robust Optimization**: The `GncOptimizer` is specifically tailored to handle optimization problems with outliers, using a robust cost function that can mitigate their effects.\n",
"- **Graduated Non-Convexity**: This technique allows the optimizer to start with a convex problem and gradually transform it into the original non-convex problem, which helps in avoiding local minima.\n",
"- **Customizable Parameters**: Users can adjust various parameters to control the behavior of the optimizer, such as the type of robust loss function and the parameters governing the GNC process.\n",
"\n",
"## Key Methods\n",
"\n",
"### Initialization and Setup\n",
"\n",
"- **Constructor**: The class constructor initializes the optimizer with a given nonlinear factor graph and initial estimate. It also accepts parameters specific to the GNC process.\n",
"\n",
"### Optimization Process\n",
"\n",
"- **optimize()**: This method performs the optimization process. It iteratively refines the solution by adjusting the influence of the robust cost function, following the principles of graduated non-convexity.\n",
"\n",
"### Configuration and Parameters\n",
"\n",
"- **setParams()**: Allows users to set the parameters for the GNC optimization process, including the type of robust loss function and other algorithm-specific settings.\n",
"- **getParams()**: Retrieves the current parameters used by the optimizer, providing insight into the configuration of the optimization process.\n",
"\n",
"### Utility Functions\n",
"\n",
"- **cost()**: Computes the cost of the current estimate, which is useful for evaluating the progress of the optimization.\n",
"- **error()**: Returns the error associated with the current estimate, offering a measure of how well the optimization is performing.\n",
"\n",
"## Mathematical Formulation\n",
"\n",
"The `GncOptimizer` leverages a robust cost function $\\rho(e)$, where $e$ is the error term. The goal is to minimize the sum of these robust costs over all measurements:\n",
"\n",
"$$\n",
@@ -55,16 +25,55 @@
"\n",
"As $\\mu$ increases, the function $\\rho_\\mu(e)$ transitions from a convex to a non-convex shape, allowing the optimizer to handle outliers effectively.\n",
"\n",
"Key features:\n",
"\n",
"- **Robust Optimization**: The GncOptimizer is specifically tailored to handle optimization problems with outliers, using a robust cost function that can mitigate their effects.\n",
"- **Graduated Non-Convexity**: This technique allows the optimizer to start with a convex problem and gradually transform it into the original non-convex problem, which helps in avoiding local minima.\n",
"- **Customizable Parameters**: Users can adjust various parameters to control the behavior of the optimizer, such as the type of robust loss function and the parameters governing the GNC process.\n",
"\n",
"## Key Methods\n",
"\n",
"Please see the base class [NonlinearOptimizer.ipynb](NonlinearOptimizer.ipynb).\n",
"\n",
"## Parameters\n",
"\n",
"The `GncParams` class defines parameters specific to the GNC optimization algorithm:\n",
"\n",
"| Parameter | Type | Default Value | Description |\n",
"|-----------|------|---------------|-------------|\n",
"| lossType | GncLossType | TLS | Type of robust loss function (GM = Geman McClure or TLS = Truncated least squares) |\n",
"| maxIterations | size_t | 100 | Maximum number of iterations |\n",
"| muStep | double | 1.4 | Multiplicative factor to reduce/increase mu in GNC |\n",
"| relativeCostTol | double | 1e-5 | Threshold for relative cost change to stop iterating |\n",
"| weightsTol | double | 1e-4 | Threshold for weights being close to binary to stop iterating (TLS only) |\n",
"| verbosity | Verbosity enum | SILENT | Verbosity level (options: SILENT, SUMMARY, MU, WEIGHTS, VALUES) |\n",
"| knownInliers | IndexVector | Empty | Slots in factor graph for measurements known to be inliers |\n",
"| knownOutliers | IndexVector | Empty | Slots in factor graph for measurements known to be outliers |\n",
"\n",
"These parameters complement the standard optimization parameters inherited from `NonlinearOptimizerParams`, which include:\n",
"\n",
"- Maximum iterations\n",
"- Relative and absolute error thresholds\n",
"- Error function verbosity\n",
"- Linear solver type\n",
"\n",
"## Usage Considerations\n",
"\n",
"- **Outlier Rejection**: The `GncOptimizer` is particularly effective in scenarios with significant outlier presence, such as SLAM or bundle adjustment problems.\n",
"- **Parameter Tuning**: Proper tuning of the GNC parameters is crucial for achieving optimal performance. Users should experiment with different settings to find the best configuration for their specific problem.\n",
"\n",
"This high-level overview provides a starting point for understanding and utilizing the `GncOptimizer` class in GTSAM. For detailed implementation and advanced usage, users should refer to the source code and additional GTSAM documentation."
"## Files\n",
"\n",
"- [GncOptimizer.h](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GncOptimizer.h)\n",
"- [GncParams.h](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/GncParams.h)"
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

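A hedged sketch of driving GNC from Python, assuming the wrapper exposes the Levenberg-Marquardt instantiation as `GncLMParams` / `GncLMOptimizer` with the setter and getter spellings used below (`setLossType`, `setMuStep`, `setKnownInliers`, `getWeights`); the toy graph with one planted outlier is illustrative only.

```python
# Illustrative sketch: GNC wrapped around Levenberg-Marquardt.
# Class and method names assume the Python wrapper's GncLMParams / GncLMOptimizer.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), noise))
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(1, 0, 0), noise))
# A grossly inconsistent measurement that GNC should down-weight.
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(5, 5, 1.5), noise))

initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0, 0, 0))
initial.insert(1, gtsam.Pose2(0.9, 0.1, 0.0))

params = gtsam.GncLMParams()                # GncParams<LevenbergMarquardtParams>
params.setLossType(gtsam.GncLossType.TLS)   # Truncated Least Squares surrogate
params.setMuStep(1.4)                       # schedule for the GNC parameter mu
params.setKnownInliers([0])                 # slot 0 (the prior) is trusted
                                            # (list-to-IndexVector conversion assumed)

optimizer = gtsam.GncLMOptimizer(graph, initial, params)
result = optimizer.optimize()
# Weights near 0 mark measurements treated as outliers (assumes getWeights is wrapped).
print("inlier weights:", optimizer.getWeights())
```
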
View File

@@ -5,9 +5,7 @@
"id": "29642bb2",
"metadata": {},
"source": [
"# LevenbergMarquardtOptimizer Class Documentation\n",
"\n",
"*Disclaimer: This documentation was generated by AI and may require human revision for accuracy and completeness.*\n",
"# LevenbergMarquardtOptimizer\n",
"\n",
"## Overview\n",
"\n",
@@ -15,14 +13,6 @@
"\n",
"The Levenberg-Marquardt algorithm is an iterative technique that interpolates between the Gauss-Newton algorithm and the method of gradient descent. It is particularly useful for optimizing problems where the solution is expected to be near the initial guess.\n",
"\n",
"## Key Features\n",
"\n",
"- **Non-linear Optimization**: The class is designed to handle non-linear optimization problems efficiently.\n",
"- **Damping Mechanism**: It incorporates a damping parameter to control the step size, balancing between the Gauss-Newton and gradient descent methods.\n",
"- **Iterative Improvement**: The optimizer iteratively refines the solution, reducing the error at each step.\n",
"\n",
"## Mathematical Formulation\n",
"\n",
"The Levenberg-Marquardt algorithm seeks to minimize a cost function $F(x)$ of the form:\n",
"\n",
"$$\n",
@@ -37,25 +27,40 @@
"\n",
"Here, $J$ is the Jacobian matrix of the residuals, $\\lambda$ is the damping parameter, and $I$ is the identity matrix.\n",
"\n",
"Key features:\n",
"\n",
"- **Non-linear Optimization**: The class is designed to handle non-linear optimization problems efficiently.\n",
"- **Damping Mechanism**: It incorporates a damping parameter to control the step size, balancing between the Gauss-Newton and gradient descent methods.\n",
"- **Iterative Improvement**: The optimizer iteratively refines the solution, reducing the error at each step.\n",
"\n",
"## Key Methods\n",
"\n",
"### Initialization\n",
"Please see the base class [NonlinearOptimizer.ipynb](NonlinearOptimizer.ipynb).\n",
"\n",
"- **Constructor**: Initializes the optimizer with the given parameters and initial values.\n",
"## Parameters\n",
"\n",
"### Optimization\n",
"The `LevenbergMarquardtParams` class defines parameters specific to this optimization algorithm:\n",
"\n",
"- **optimize**: Executes the optimization process, iteratively updating the solution to minimize the cost function.\n",
"| Parameter | Type | Default Value | Description |\n",
"|-----------|------|---------------|-------------|\n",
"| lambdaInitial | double | 1e-5 | The initial Levenberg-Marquardt damping term |\n",
"| lambdaFactor | double | 10.0 | The amount by which to multiply or divide lambda when adjusting lambda |\n",
"| lambdaUpperBound | double | 1e5 | The maximum lambda to try before assuming the optimization has failed |\n",
"| lambdaLowerBound | double | 0.0 | The minimum lambda used in LM |\n",
"| verbosityLM | VerbosityLM | SILENT | The verbosity level for Levenberg-Marquardt |\n",
"| minModelFidelity | double | 1e-3 | Lower bound for the modelFidelity to accept the result of an LM iteration |\n",
"| logFile | std::string | \"\" | An optional CSV log file, with [iteration, time, error, lambda] |\n",
"| diagonalDamping | bool | false | If true, use diagonal of Hessian |\n",
"| useFixedLambdaFactor | bool | true | If true applies constant increase (or decrease) to lambda according to lambdaFactor |\n",
"| minDiagonal | double | 1e-6 | When using diagonal damping saturates the minimum diagonal entries |\n",
"| maxDiagonal | double | 1e32 | When using diagonal damping saturates the maximum diagonal entries |\n",
"\n",
"### Parameter Control\n",
"These parameters complement the standard optimization parameters inherited from `NonlinearOptimizerParams`, which include:\n",
"\n",
"- **setLambda**: Sets the damping parameter $\\lambda$, which influences the convergence behavior.\n",
"- **getLambda**: Retrieves the current value of the damping parameter.\n",
"\n",
"### Convergence and Termination\n",
"\n",
"- **checkConvergence**: Evaluates whether the optimization process has converged based on predefined criteria.\n",
"- **terminate**: Stops the optimization process when certain conditions are met.\n",
"- Maximum iterations\n",
"- Relative and absolute error thresholds\n",
"- Error function verbosity\n",
"- Linear solver type\n",
"\n",
"## Usage Notes\n",
"\n",
@@ -63,11 +68,20 @@
"- Proper tuning of the damping parameter $\\lambda$ is crucial for balancing the convergence rate and stability.\n",
"- The optimizer is most effective when the residuals are approximately linear near the solution.\n",
"\n",
"This class is a powerful tool for tackling complex optimization problems where traditional linear methods fall short. By leveraging the strengths of both Gauss-Newton and gradient descent, the `LevenbergMarquardtOptimizer` provides a robust framework for achieving accurate solutions in non-linear least squares problems."
"## Files\n",
"\n",
"- [LevenbergMarquardtOptimizer.h](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/LevenbergMarquardtOptimizer.h)\n",
"- [LevenbergMarquardtOptimizer.cpp](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/LevenbergMarquardtOptimizer.cpp)\n",
"- [LevenbergMarquardtParams.h](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/LevenbergMarquardtParams.h)\n",
"- [LevenbergMarquardtParams.cpp](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/LevenbergMarquardtParams.cpp)"
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

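A hedged sketch showing how the LM-specific parameters from the table might be set through the Python wrapper; the setter spellings (`setlambdaInitial`, `setlambdaFactor`, `setlambdaUpperBound`, `setDiagonalDamping`, `setVerbosityLM`) follow the wrapper's usual convention for the underlying fields and are assumptions, as are the toy graph and values.

```python
# Sketch: tuning LevenbergMarquardtParams (setter names assumed to follow the
# Python wrapper's convention, e.g. setlambdaInitial for the lambdaInitial field).
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0, 0, 0), noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1, 0, 0), noise))

initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.1, -0.1, 0.05))
initial.insert(2, gtsam.Pose2(1.2, 0.1, -0.05))

params = gtsam.LevenbergMarquardtParams()
params.setlambdaInitial(1e-5)       # initial damping term
params.setlambdaFactor(10.0)        # multiply/divide lambda by this factor
params.setlambdaUpperBound(1e5)     # give up once damping exceeds this bound
params.setDiagonalDamping(True)     # damp with the Hessian diagonal instead of I
params.setMaxIterations(50)         # inherited from NonlinearOptimizerParams
params.setVerbosityLM("SUMMARY")    # one line per iteration with error and lambda

optimizer = gtsam.LevenbergMarquardtOptimizer(graph, initial, params)
result = optimizer.optimize()
print("iterations:", optimizer.iterations(), "final error:", optimizer.error())
```
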
View File

@@ -5,37 +5,12 @@
"id": "48970ca0",
"metadata": {},
"source": [
"# NonlinearConjugateGradientOptimizer Class Documentation\n",
"\n",
"*Disclaimer: This documentation was generated by AI and may require human revision for accuracy and completeness.*\n",
"# NonlinearConjugateGradientOptimizer\n",
"\n",
"## Overview\n",
"\n",
"The `NonlinearConjugateGradientOptimizer` class in GTSAM is an implementation of the nonlinear conjugate gradient method for optimizing nonlinear functions. This optimizer is particularly useful for solving large-scale optimization problems where the Hessian matrix is not easily computed or stored. The conjugate gradient method is an iterative algorithm that seeks to find the minimum of a function by following a series of conjugate directions.\n",
"\n",
"## Key Features\n",
"\n",
"- **Optimization Method**: Implements the nonlinear conjugate gradient method, which is an extension of the linear conjugate gradient method to nonlinear optimization problems.\n",
"- **Efficiency**: Suitable for large-scale problems due to its iterative nature and reduced memory requirements compared to methods that require the Hessian matrix.\n",
"- **Flexibility**: Can be used with various line search strategies and conjugate gradient update formulas.\n",
"\n",
"## Main Methods\n",
"\n",
"### Constructor\n",
"\n",
"- **NonlinearConjugateGradientOptimizer**: Initializes the optimizer with a given nonlinear factor graph and initial values. The user can specify optimization parameters, including the choice of line search method and conjugate gradient update formula.\n",
"\n",
"### Optimization\n",
"\n",
"- **optimize**: Executes the optimization process. This method iteratively updates the solution by computing search directions and performing line searches to minimize the objective function along these directions.\n",
"\n",
"### Accessors\n",
"\n",
"- **error**: Returns the current error value of the objective function. This is useful for monitoring the convergence of the optimization process.\n",
"- **values**: Retrieves the current estimate of the optimized variables. This allows users to access the solution at any point during the optimization.\n",
"\n",
"## Mathematical Background\n",
"\n",
"The nonlinear conjugate gradient method seeks to minimize a nonlinear function $f(x)$ by iteratively updating the solution $x_k$ according to:\n",
"\n",
"$$ x_{k+1} = x_k + \\alpha_k p_k $$\n",
@@ -50,17 +25,43 @@
"\n",
"The choice of $\\beta_k$ affects the convergence properties of the algorithm.\n",
"\n",
"Key features:\n",
"\n",
"- **Optimization Method**: Implements the nonlinear conjugate gradient method, which is an extension of the linear conjugate gradient method to nonlinear optimization problems.\n",
"- **Efficiency**: Suitable for large-scale problems due to its iterative nature and reduced memory requirements compared to methods that require the Hessian matrix.\n",
"- **Flexibility**: Can be used with various line search strategies and conjugate gradient update formulas.\n",
"\n",
"## Key Methods\n",
"\n",
"Please see the base class [NonlinearOptimizer.ipynb](NonlinearOptimizer.ipynb).\n",
"\n",
"## Parameters\n",
"\n",
"The nonlinear conjugate gradient optimizer uses the standard optimization parameters inherited from `NonlinearOptimizerParams`, which include:\n",
"\n",
"- Maximum iterations\n",
"- Relative and absolute error thresholds\n",
"- Error function verbosity\n",
"- Linear solver type\n",
"\n",
"## Usage Notes\n",
"\n",
"- The `NonlinearConjugateGradientOptimizer` is most effective when the problem size is large and the computation of the Hessian is impractical.\n",
"- Users should choose an appropriate line search method and conjugate gradient update formula based on the specific characteristics of their optimization problem.\n",
"- Monitoring the error and values during optimization can provide insights into the convergence behavior and help diagnose potential issues.\n",
"\n",
"This class provides a robust framework for solving complex nonlinear optimization problems efficiently, leveraging the power of the conjugate gradient method."
"## Files\n",
"\n",
"- [NonlinearConjugateGradientOptimizer.h](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/NonlinearConjugateGradientOptimizer.h)\n",
"- [NonlinearConjugateGradientOptimizer.cpp](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/NonlinearConjugateGradientOptimizer.cpp)"
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
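
A minimal sketch of the conjugate-gradient optimizer on the same kind of small factor graph, assuming the Python wrapper exposes `NonlinearConjugateGradientOptimizer` and a directly constructible `NonlinearOptimizerParams`; since the method only uses the shared parameters, nothing algorithm-specific is set. The graph and values are illustrative assumptions.

```python
# Sketch: nonlinear conjugate gradient on a small Pose2 graph
# (assumes NonlinearConjugateGradientOptimizer is exposed by the Python wrapper).
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0, 0, 0), noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1, 0, 0), noise))

initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.2, 0.1, 0.1))
initial.insert(2, gtsam.Pose2(0.8, -0.1, 0.0))

# Only the shared NonlinearOptimizerParams options apply to this optimizer.
params = gtsam.NonlinearOptimizerParams()
params.setMaxIterations(200)        # CG typically needs more, cheaper iterations
params.setRelativeErrorTol(1e-6)

optimizer = gtsam.NonlinearConjugateGradientOptimizer(graph, initial, params)
result = optimizer.optimize()
print("final error:", graph.error(result))
```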