Better docs

release/4.3a0
Frank Dellaert 2025-04-27 13:01:03 -04:00
parent cbc45c9f4e
commit be267031ce
1 changed file with 385 additions and 140 deletions


@@ -24,12 +24,19 @@
"- **Multiple Observations per Pose:** Allows multiple measurements associated with the *same* body pose key, but originating from different cameras within the rig.\n", "- **Multiple Observations per Pose:** Allows multiple measurements associated with the *same* body pose key, but originating from different cameras within the rig.\n",
"- **Camera Indexing:** Each measurement must be associated with both a body pose key and a `cameraId` (index) specifying which camera in the rig took the measurement.\n", "- **Camera Indexing:** Each measurement must be associated with both a body pose key and a `cameraId` (index) specifying which camera in the rig took the measurement.\n",
"- **Fixed Calibration/Extrinsics:** The intrinsics and relative extrinsics of the cameras within the rig are assumed fixed.\n", "- **Fixed Calibration/Extrinsics:** The intrinsics and relative extrinsics of the cameras within the rig are assumed fixed.\n",
"- **`CAMERA` Template:** Can be templated on various camera models (e.g., `PinholeCamera`, `SphericalCamera`), provided they adhere to the expected GTSAM camera concepts.\n", "- **`CAMERA` Template:** Can be templated on various camera models (e.g., `PinholePose`, `SphericalCamera`), provided they adhere to the expected GTSAM camera concepts. **Important Note:** See the **Note on Template Parameter `CAMERA`** below.\n",
"- **`Values` Requirement:** Requires `Pose3` objects (representing the body frame) in the `Values` container.\n", "- **`Values` Requirement:** Requires `Pose3` objects (representing the body frame) in the `Values` container.\n",
"- **Configuration:** Behavior controlled by `SmartProjectionParams`. **Note:** Currently (as of header comment), only supports `HESSIAN` linearization mode and `ZERO_ON_DEGENERACY` mode.\n", "- **Configuration:** Behavior controlled by `SmartProjectionParams`. **Note:** Currently (as of C++ header comment), only supports `HESSIAN` linearization mode and `ZERO_ON_DEGENERACY` mode.\n",
"\n", "\n",
"**Use Case:** Ideal for visual SLAM with a calibrated multi-camera rig (e.g., stereo rig, multi-fisheye system) where only the rig's pose is optimized.\n", "**Use Case:** Ideal for visual SLAM with a calibrated multi-camera rig (e.g., stereo rig, multi-fisheye system) where only the rig's pose is optimized.\n",
"\n", "\n",
"**Note on Template Parameter `CAMERA`:**\n",
"While this factor is templated on `CAMERA` for generality, the current internal implementation for linearization has limitations. It implicitly assumes that `traits<CAMERA>::dimension` matches the optimized variable dimension (`Pose3::dimension`, which is 6).\n",
"Consequently:\n",
"- It works correctly with `CAMERA` types where `dimension == 6`, such as `PinholePose<CALIBRATION>` or `SphericalCamera`.\n",
"- Using `CAMERA` types where `dimension != 6`, such as `PinholeCamera<CALIBRATION>` (dim = 6 + CalDim), **will cause compilation errors**.\n",
"- **Recommendation:** For standard pinhole cameras in a fixed rig, **use `PinholePose<CALIBRATION>`** as the `CAMERA` type when defining the `CameraSet` for this factor.\n",
"\n",
"If you are using the factor, please cite:\n", "If you are using the factor, please cite:\n",
"> **L. Carlone, Z. Kira, C. Beall, V. Indelman, F. Dellaert**, \"Eliminating conditionally independent sets in factor graphs: a unifying perspective based on smart factors\", Int. Conf. on Robotics and Automation (ICRA), 2014.\n" "> **L. Carlone, Z. Kira, C. Beall, V. Indelman, F. Dellaert**, \"Eliminating conditionally independent sets in factor graphs: a unifying perspective based on smart factors\", Int. Conf. on Robotics and Automation (ICRA), 2014.\n"
] ]
@@ -63,25 +70,25 @@
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "imports_code"
},
"outputs": [],
"source": [
"import gtsam\n",
"import numpy as np\n",
"from gtsam import (\n", "from gtsam import (\n",
" Values,\n", " Values,\n",
" Point2,\n", " Point2,\n",
" Point3,\n", " Point3,\n",
" Pose3,\n", " Pose3,\n",
" Rot3,\n", " Rot3,\n",
" NonlinearFactorGraph,\n",
" SmartProjectionParams,\n", " SmartProjectionParams,\n",
" LinearizationMode,\n",
" DegeneracyMode,\n",
" # Use PinholePose variant for wrapping\n",
" SmartProjectionRigFactorPinholePoseCal3_S2,\n", " SmartProjectionRigFactorPinholePoseCal3_S2,\n",
" PinholePoseCal3_S2,\n", " PinholePoseCal3_S2,\n",
" CameraSetPinholePoseCal3_S2,\n",
" Cal3_S2,\n", " Cal3_S2,\n",
")\n", ")\n",
"from gtsam.symbol_shorthand import X # Key for Pose3 variable (Body Pose)" "from gtsam.symbol_shorthand import X # Key for Pose3 variable (Body Pose)"
@@ -93,98 +100,34 @@
"id": "create_header_md" "id": "create_header_md"
}, },
"source": [ "source": [
"## Creating the Rig and Factor" "## Constructor"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": { "metadata": {
"id": "create_desc_md" "id": "constructor_desc_md"
}, },
"source": [ "source": [
"1. Define the camera rig configuration: Create a `CameraSet` containing the `CAMERA` objects (with fixed intrinsics and rig-relative extrinsics).\n", "You create a `SmartProjectionRigFactor` by providing:\n",
"2. Create the `SmartProjectionRigFactor` with noise, the rig, and parameters.\n", "1. A noise model for the 2D pixel measurements (typically `noiseModel.Isotropic`).\n",
"3. Add measurements, specifying the 2D point, the corresponding **body pose key**, and the **camera ID** within the rig." "2. A `CameraSet<CAMERA>` object defining the *fixed* rig configuration. Each `CAMERA` in the set contains its fixed intrinsics and its fixed pose relative to the rig's body frame (`body_P_camera`).\n",
"3. Optionally, `SmartProjectionParams` to configure linearization and degeneracy handling. Remember the current restrictions (HESSIAN, ZERO_ON_DEGENERACY)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "constructor_example_code"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"SmartFactorRig: SmartProjectionRigFactor: \n",
" SmartProjectionFactor\n",
"linearizationMode: 0\n", "linearizationMode: 0\n",
"triangulationParameters:\n", "triangulationParameters:\n",
"rankTolerance = 1\n", "rankTolerance = 1\n",
@@ -198,122 +141,424 @@
"no point, status = 1\n", "no point, status = 1\n",
"\n", "\n",
"SmartFactorBase, z = \n", "SmartFactorBase, z = \n",
"measurement 0, px = \n", " keys = { }\n"
"300\n",
"200\n",
"noise model = unit (2) \n",
"measurement 1, px = \n",
"250\n",
"201\n",
"noise model = unit (2) \n",
"measurement 2, px = \n",
"310\n",
"210\n",
"noise model = unit (2) \n",
"measurement 3, px = \n",
"260\n",
"211\n",
"noise model = unit (2) \n",
" keys = { x0 x1 }\n"
]
}
],
"source": [
"# Define the Camera Rig (using PinholePose)\n",
"K = Cal3_S2(500, 500, 0, 320, 240)\n",
"body_P_cam0 = Pose3(Rot3.Ypr(0, 0, 0), Point3(0.1, 0, 0))\n",
"cam0 = PinholePoseCal3_S2(body_P_cam0, K)\n",
"body_P_cam1 = Pose3(Rot3.Ypr(0, 0, 0), Point3(0.1, -0.1, 0)) # Baseline 0.1m\n",
"cam1 = PinholePoseCal3_S2(body_P_cam1, K)\n",
"\n",
"rig_cameras = gtsam.CameraSetPinholePoseCal3_S2()\n",
"rig_cameras.push_back(cam0)\n",
"rig_cameras.push_back(cam1)\n",
"\n",
"# Noise model and parameters\n",
"noise_model = gtsam.noiseModel.Isotropic.Sigma(2, 1.0)\n",
"smart_params = SmartProjectionParams(\n",
" linMode=LinearizationMode.HESSIAN, degMode=DegeneracyMode.ZERO_ON_DEGENERACY\n",
")\n",
"\n",
"# Factor type includes the Camera type\n", "# Create the Factor\n",
"smart_factor = SmartProjectionRigFactorPinholePoseCal3_S2(noise_model, rig_cameras, smart_params)\n", "smart_factor = SmartProjectionRigFactorPinholePoseCal3_S2(\n",
"\n", " noise_model, rig_cameras, smart_params\n",
"# 3. Add measurements\n", ")\n",
"# Observation from Body Pose X(0), Camera 0\n",
"smart_factor.add(Point2(300, 200), X(0), 0)\n",
"# Observation from Body Pose X(0), Camera 1 (stereo pair?)\n",
"smart_factor.add(Point2(250, 201), X(0), 1)\n",
"# Observation from Body Pose X(1), Camera 0\n",
"smart_factor.add(Point2(310, 210), X(1), 0)\n",
"# Observation from Body Pose X(1), Camera 1\n",
"smart_factor.add(Point2(260, 211), X(1), 1)\n",
"\n",
"print(f\"Smart factor involves {smart_factor.size()} measurements from {len(smart_factor.keys())} unique poses.\")\n",
"smart_factor.print(\"SmartFactorRig: \")" "smart_factor.print(\"SmartFactorRig: \")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "add_header_md"
},
"source": [
"## `add(measurement, poseKey, cameraId)`"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "add_desc_md"
},
"source": [
"This method adds a single 2D measurement (`Point2`) associated with a specific camera in the rig and a specific body pose.\n",
"- `measurement`: The observed pixel coordinates.\n",
"- `poseKey`: The key (`Symbol` or `Key`) of the **body's `Pose3`** variable at the time of observation.\n",
"- `cameraId`: The integer index of the camera within the `CameraSet` (provided during construction) that captured this measurement."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "add_example_code"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Measurements: 4, unique body pose keys: 2\n"
]
]
}
],
"source": [
"# --- Use Pre-calculated Valid Measurements ---\n",
"# These measurements were calculated offline using:\n",
"# gt_landmark = Point3(1.0, 0.5, 5.0)\n",
"# gt_pose0 = Pose3()\n",
"# gt_pose1 = Pose3(Rot3.Ypr(0.1, 0.0, 0.0), Point3(0.5, 0, 0))\n",
"# And the rig defined above.\n",
"\n",
"z00 = Point2(400.0, 290.0) # Measurement from Body Pose X(0), Camera 0\n",
"z01 = Point2(350.0, 290.0) # Measurement from Body Pose X(0), Camera 1\n",
"z10 = Point2(372.787, 297.553) # Measurement from Body Pose X(1), Camera 0\n",
"z11 = Point2(323.308, 297.674) # Measurement from Body Pose X(1), Camera 1\n",
"# --------------------------------------------\n",
"\n",
"# 3. Add pre-calculated measurements\n",
"smart_factor.add(z00, X(0), 0)\n",
"smart_factor.add(z01, X(0), 1)\n",
"smart_factor.add(z10, X(1), 0)\n",
"smart_factor.add(z11, X(1), 1)\n",
"\n",
"print(f\"Smart factor involves {smart_factor.size()} measurements from {len(smart_factor.keys())} unique poses.\")\n",
"# smart_factor.print(\"SmartFactorRig (with pre-calculated measurements): \") # Optional\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "inherited_header_md"
},
"source": [
"## Inherited and Core Methods"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "error_header_md"
},
"source": [
"### `error(Values values)`"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "error_desc_md"
},
"source": [
"Inherited from `SmartFactorBase`. Calculates the total reprojection error for the landmark.\n",
"**Internal Process:**\n",
"1. Retrieves the body `Pose3` estimates for all relevant keys from the `values` object.\n",
"2. For each measurement, calculates the corresponding camera's world pose using the body pose and the fixed rig extrinsics (`world_P_sensor = world_P_body * body_P_camera`).\n",
"3. Triangulates the 3D landmark position using these calculated camera poses and the stored 2D measurements.\n",
"4. Reprojects the triangulated point back into each calculated camera view.\n",
"5. Computes the sum of squared differences between the reprojections and the original measurements, weighted by the noise model.\n",
"6. Handles degenerate cases (e.g., failed triangulation) based on the `degeneracyMode` set in `SmartProjectionParams`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"id": "error_example_code"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Triangulation Result Status: Status.VALID\n",
"Total reprojection error (0.5 * sum(err^2/sigma^2)): 1316.4717\n"
]
}
],
"source": [
"# Assuming smart_factor created and measurements added above\n",
"\n",
"values = Values()\n", "values = Values()\n",
"pose0 = Pose3(Rot3.Ypr(0.0, 0.0, 0.0), Point3(0, 0, 0))\n", "pose0 = Pose3() # Body at origin\n",
"pose1 = Pose3(Rot3.Ypr(0.1, 0.0, 0.0), Point3(0.5, 0, 0))\n", "pose1 = Pose3(Rot3.Ypr(0.1, 0.0, 0.0), Point3(0.5, 0, 0)) # Body moved\n",
"values.insert(X(0), pose0)\n", "values.insert(X(0), pose0)\n",
"values.insert(X(1), pose1)\n", "values.insert(X(1), pose1)\n",
"\n", "\n",
"# Triangulate first to see the implicit point\n", "# Need to check triangulation first, as error calculation depends on it\n",
"# The 'cameras' method internally combines body poses with rig extrinsics\n",
"point_result = smart_factor.point(values)\n", "point_result = smart_factor.point(values)\n",
"print(f\"Triangulated point result:\\n{point_result}\")\n", "print(f\"Triangulation Result Status: {point_result.status}\")\n",
"\n",
"total_error = smart_factor.error(values)\n",
"print(f\"Total reprojection error (0.5 * sum(err^2/sigma^2)): {total_error:.4f}\")\n",
"# Note: If degenerate and DegeneracyMode is ZERO_ON_DEGENERACY, error will be 0."
]
},
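{
"cell_type": "markdown",
"metadata": {
"id": "error_manual_check_md"
},
"source": [
"As a sanity check, the quantity described above can be re-derived by hand. The sketch below (which assumes the rig, measurements, `values`, and `smart_factor` from the previous cells, and the isotropic 1-pixel noise model) composes each body pose with the fixed rig extrinsics, reprojects the triangulated point, and accumulates 0.5 * sum(err^2/sigma^2); it should closely match `smart_factor.error(values)`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "error_manual_check_code"
},
"outputs": [],
"source": [
"# Manual re-computation of the reprojection error (sketch; assumes the cells above were run\n",
"# and that triangulation is VALID for these values, as shown in the previous cell)\n",
"point = smart_factor.point(values).get() # triangulated landmark\n",
"rig = [cam0, cam1] # fixed rig cameras defined earlier\n",
"manual_error = 0.0\n",
"for z, key, cid in [(z00, X(0), 0), (z01, X(0), 1), (z10, X(1), 0), (z11, X(1), 1)]:\n",
"    world_P_cam = values.atPose3(key).compose(rig[cid].pose()) # world_P_body * body_P_camera\n",
"    predicted = PinholePoseCal3_S2(world_P_cam, K).project(point)\n",
"    residual = predicted - z\n",
"    manual_error += 0.5 * float(residual @ residual) # sigma = 1 pixel\n",
"print(f\"Manual error: {manual_error:.4f} vs factor error: {smart_factor.error(values):.4f}\")"
]
},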
{
"cell_type": "markdown",
"metadata": {
"id": "point_header_md"
},
"source": [
"### `point(Values values)`"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "point_desc_md"
},
"source": [
"Inherited from `SmartProjectionFactor`. Performs the internal triangulation based on the body poses in `values` and the fixed rig geometry.\n",
"Returns a `TriangulationResult` object which contains:\n",
"- The triangulated `Point3` (if successful).\n",
"- A status indicating whether the triangulation was `VALID`, `DEGENERATE`, `BEHIND_CAMERA`, `OUTLIER`, or `FAR_POINT`."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "point_example_code"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Triangulation Result Status: Status.VALID\n",
"Triangulated Point: [0.94370846 0.79793704 7.63497051]\n"
]
}
],
"source": [
"# Assuming smart_factor and values from the previous cell\n",
"point_result = smart_factor.point(values)\n",
"print(f\"Triangulation Result Status: {point_result.status}\")\n",
"\n", "\n",
"if point_result.valid():\n", "if point_result.valid():\n",
" # Calculate error\n", " triangulated_point = point_result.get()\n",
" total_error = smart_factor.error(values)\n", " print(f\"Triangulated Point: {triangulated_point}\")\n",
" print(f\"\\nTotal reprojection error (0.5 * sum(err^2/sigma^2)): {total_error:.4f}\")\n",
"else:\n", "else:\n",
" print(\"\\nTriangulation failed, error calculation depends on degeneracyMode.\")\n", " print(\"Triangulation did not produce a valid point.\")"
" # Since mode is ZERO_ON_DEGENERACY, error should be 0\n", ]
" total_error = smart_factor.error(values)\n", },
" print(f\"Error when degenerate: {total_error}\")" {
"cell_type": "markdown",
"metadata": {
"id": "cameras_header_md"
},
"source": [
"### `cameras(Values values)`"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cameras_desc_md"
},
"source": [
"Inherited from `SmartFactorBase`. Computes and returns a `CameraSet<CAMERA>` containing the effective cameras corresponding to *each measurement*.\n",
"For each measurement `i` associated with body pose key `k` and camera ID `cid`:\n",
"1. Retrieves the body pose `world_P_body = values.atPose3(k)`.\n",
"2. Retrieves the fixed relative pose `body_P_camera = rig_cameras.at(cid).pose()`.\n",
"3. Computes the camera's world pose `world_P_camera = world_P_body * body_P_camera`.\n",
"4. Creates a `CAMERA` object using this `world_P_camera` and the fixed intrinsics `rig_cameras.at(cid).calibration()`.\n",
"The returned `CameraSet` contains these calculated cameras, one for each measurement added via `add()`."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "cameras_example_code"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Pose of camera for measurement 0 (Body X(0), Cam 0):\n",
"R: [\n",
"\t1, 0, 0;\n",
"\t0, 1, 0;\n",
"\t0, 0, 1\n",
"]\n",
"t: 0.1 0 0\n",
"\n",
"\n",
"Pose of camera for measurement 1 (Body X(0), Cam 1):\n",
"R: [\n",
"\t1, 0, 0;\n",
"\t0, 1, 0;\n",
"\t0, 0, 1\n",
"]\n",
"t: 0.1 -0.1 0\n",
"\n",
"\n",
"Pose of camera for measurement 2 (Body X(1), Cam 0):\n",
"R: [\n",
"\t0.995004, -0.0998334, 0;\n",
"\t0.0998334, 0.995004, 0;\n",
"\t0, 0, 1\n",
"]\n",
"t: 0.5995 0.00998334 0\n",
"\n",
"\n",
"Pose of camera for measurement 3 (Body X(1), Cam 1):\n",
"R: [\n",
"\t0.995004, -0.0998334, 0;\n",
"\t0.0998334, 0.995004, 0;\n",
"\t0, 0, 1\n",
"]\n",
"t: 0.609484 -0.0895171 0\n",
"\n",
"\n"
]
}
],
"source": [
"# Assuming smart_factor and values from previous cells\n",
"\n",
"calculated_cameras = smart_factor.cameras(values)\n",
"\n",
"# Print the world pose of each calculated camera\n",
"print(f\"Pose of camera for measurement 0 (Body X(0), Cam 0):\\n{calculated_cameras.at(0).pose()}\\n\")\n",
"print(f\"Pose of camera for measurement 1 (Body X(0), Cam 1):\\n{calculated_cameras.at(1).pose()}\\n\")\n",
"print(f\"Pose of camera for measurement 2 (Body X(1), Cam 0):\\n{calculated_cameras.at(2).pose()}\\n\")\n",
"print(f\"Pose of camera for measurement 3 (Body X(1), Cam 1):\\n{calculated_cameras.at(3).pose()}\\n\")"
]
},
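{
"cell_type": "markdown",
"metadata": {
"id": "cameras_check_md"
},
"source": [
"To confirm the composition described above, the world pose returned for one measurement can be checked against `world_P_body * body_P_camera` computed by hand (a small sketch reusing `values`, `body_P_cam1`, and `calculated_cameras` from the cells above)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cameras_check_code"
},
"outputs": [],
"source": [
"# Measurement 3 came from body pose X(1) and rig camera 1\n",
"expected = values.atPose3(X(1)).compose(body_P_cam1) # world_P_body * body_P_camera\n",
"actual = calculated_cameras.at(3).pose()\n",
"print(f\"Composition matches: {expected.equals(actual, 1e-9)}\")"
]
},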
{
"cell_type": "markdown",
"metadata": {
"id": "linearize_header_md"
},
"source": [
"### `linearize(Values values)`"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "linearize_desc_md"
},
"source": [
"Inherited from `SmartProjectionFactor`. Computes the linear approximation (GaussianFactor) of the factor at the linearization point defined by `values`.\n",
"For `SmartProjectionRigFactor`, due to current implementation limitations, this **must** be configured via `SmartProjectionParams` to use `LinearizationMode.HESSIAN`.\n",
"The resulting `RegularHessianFactor` connects the **unique body pose keys** involved in the measurements."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"id": "linearize_example_code"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Linearized Factor (HessianFactor structure):\n",
"Linear Factor: \n",
" keys: x0(6) x1(6) \n",
"Augmented information matrix: [\n",
"\t255621, 1454.13, -31747.6, 636.066, -33103.6, 3605.16, -254669, 22279.1, 15195.9, 2671.95, 33001.7, -3605.16, -5437.65;\n",
"\t1454.13, 9642.56, -1187.49, 1253.63, -198.336, -75.3949, -2405.75, -9411.71, 1088.32, -1227.56, 322.499, 75.3949, -653.552;\n",
"\t-31747.6, -1187.49, 4048.22, -209.638, 4112.44, -437.73, 31729.4, -1770.15, -1992, -201.969, -4112.82, 437.73, 740.416;\n",
"\t636.066, 1253.63, -209.638, 163.769, -83.6702, -3.45048, -757.87, -1182.15, 167.803, -154.598, 99.6018, 3.45048, -94.317;\n",
"\t-33103.6, -198.336, 4112.44, -83.6702, 4287, -466.758, 32981.3, -2875.28, -1968.94, -344.734, -4273.93, 466.758, 704.833;\n",
"\t3605.16, -75.3949, -437.73, -3.45048, -466.758, 51.9764, -3582.21, 409.075, 204.351, 50.0313, 464.082, -51.9764, -70.5256;\n",
"\t-254669, -2405.75, 31729.4, -757.87, 32981.3, -3582.21, 253816, -21248.6, -15238.8, -2538.55, -32892.2, 3582.21, 5479.25;\n",
"\t22279.1, -9411.71, -1770.15, -1182.15, -2875.28, 409.075, -21248.6, 11385.4, 332.508, 1463.29, "
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"2742.9, -409.075, 142.514;\n",
"\t15195.9, 1088.32, -1992, 167.803, -1968.94, 204.351, -15238.8, 332.508, 1007.53, 29.6019, 1975.86, -204.351, -387.999;\n",
"\t2671.95, -1227.56, -201.969, -154.598, -344.734, 50.0313, -2538.55, 1463.29, 29.6019, 188.241, 327.577, -50.0313, 23.48;\n",
"\t33001.7, 322.499, -4112.82, 99.6018, -4273.93, 464.082, -32892.2, 2742.9, 1975.86, 327.577, 4262.53, -464.082, -710.727;\n",
"\t-3605.16, 75.3949, 437.73, 3.45048, 466.758, -51.9764, 3582.21, -409.075, -204.351, -50.0313, -464.082, 51.9764, 70.5256;\n",
"\t-5437.65, -653.552, 740.416, -94.317, 704.833, -70.5256, 5479.25, 142.514, -387.999, 23.48, -710.727, 70.5256, 2632.94\n",
"]\n"
]
}
],
"source": [
"# Assuming smart_factor and values from previous cells\n",
"\n",
"# Check if triangulation succeeded before linearizing\n",
"if not smart_factor.point(values).valid():\n",
" print(\"Cannot linearize: triangulation failed or degenerate.\")\n",
"else:\n",
" linear_factor = smart_factor.linearize(values)\n",
"\n",
" if linear_factor:\n",
" print(\"\\nLinearized Factor (HessianFactor structure):\")\n",
" linear_factor.print(\"Linear Factor: \")\n",
" else:\n",
" print(\"\\nLinearization failed (potentially due to triangulation degeneracy and params setting).\")\n",
"\n",
"# Note: The printed Hessian is often zero when ZERO_ON_DEGENERACY is used and triangulation fails."
]
},
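{
"cell_type": "markdown",
"metadata": {
"id": "usage_graph_md"
},
"source": [
"In practice you rarely call `linearize` yourself: the factor is simply added to a `NonlinearFactorGraph` and the optimizer linearizes it internally. The sketch below (reusing `smart_factor`, `pose0`, and `pose1` from the cells above, with priors on both body poses to keep this tiny problem well-constrained) perturbs `X(1)` and optimizes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "usage_graph_code"
},
"outputs": [],
"source": [
"# Sketch: using the smart rig factor inside a factor graph (assumes earlier cells were run)\n",
"graph = NonlinearFactorGraph()\n",
"graph.push_back(smart_factor)\n",
"prior_noise = gtsam.noiseModel.Isotropic.Sigma(6, 0.1)\n",
"graph.push_back(gtsam.PriorFactorPose3(X(0), pose0, prior_noise))\n",
"graph.push_back(gtsam.PriorFactorPose3(X(1), pose1, prior_noise))\n",
"\n",
"initial = Values()\n",
"initial.insert(X(0), pose0)\n",
"initial.insert(X(1), pose1.compose(Pose3(Rot3(), Point3(0.05, 0, 0)))) # perturb X(1)\n",
"\n",
"result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()\n",
"print(f\"Initial error: {graph.error(initial):.4f}\")\n",
"print(f\"Final error: {graph.error(result):.4f}\")"
]
},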
{
"cell_type": "markdown",
"metadata": {
"id": "other_methods_header_md"
},
"source": [
"### Other Inherited Methods"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "other_methods_desc_md"
},
"source": [
"The factor also inherits standard methods from `NonlinearFactor` and `SmartFactorBase`:\n",
"- **`keys()`**: Returns a `KeyVector` containing the **unique body pose keys** involved.\n",
"- **`measured()`**: Returns a `Point2Vector` containing all the added 2D measurements.\n",
"- **`dim()`**: Returns the dimension of the error vector (2 * number of measurements).\n",
"- **`size()`**: Returns the number of measurements added.\n",
"- **`print(s, keyFormatter)`**: Prints details about the factor.\n",
"- **`equals(other, tol)`**: Compares two factors for equality."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"id": "other_methods_example_code"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Keys: ['x0', 'x1']\n",
"Number of Measurements (size): 2\n",
"Dimension (dim): 8\n",
"Measurements: [array([400., 290.]), array([350., 290.]), array([372.787, 297.553]), array([323.308, 297.674])]\n"
]
}
],
"source": [
"# Assuming smart_factor from previous cells\n",
"print(f\"Keys: {[gtsam.DefaultKeyFormatter(k) for k in smart_factor.keys()]}\")\n",
"print(f\"Number of Measurements (size): {smart_factor.size()}\")\n",
"print(f\"Dimension (dim): {smart_factor.dim()}\")\n",
"print(f\"Measurements: {smart_factor.measured()}\")"
] ]
}
],