Merge branch 'develop' into imu-examples
commit d79ddb6858
@@ -63,7 +63,7 @@ function configure()
-DGTSAM_BUILD_EXAMPLES_ALWAYS=${GTSAM_BUILD_EXAMPLES_ALWAYS:-ON} \
-DGTSAM_ALLOW_DEPRECATED_SINCE_V4=${GTSAM_ALLOW_DEPRECATED_SINCE_V4:-OFF} \
-DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF \
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_VERBOSE_MAKEFILE=OFF
}
@@ -14,7 +14,8 @@ addons:
- clang-9
- build-essential pkg-config
- cmake
- libpython-dev python-numpy
- python3-dev libpython-dev
- python3-numpy
- libboost-all-dev

# before_install:
@@ -1,3 +1,3 @@
Cython>=0.25.2
backports_abc>=0.5
numpy>=1.12.0
numpy>=1.11.0
@@ -1,12 +0,0 @@
# How to build a GTSAM debian package

To use the ``debuild`` command, install the ``devscripts`` package

sudo apt install devscripts

Change into the gtsam directory, then run:

debuild -us -uc -j4

Adjust the ``-j4`` depending on how many CPUs you want to build on in
parallel.
@@ -1,5 +0,0 @@
gtsam (4.0.0-1berndpfrommer) bionic; urgency=medium

* initial release

-- Bernd Pfrommer <bernd.pfrommer@gmail.com> Wed, 18 Jul 2018 20:36:44 -0400

@@ -1 +0,0 @@
9
@@ -1,15 +0,0 @@
Source: gtsam
Section: libs
Priority: optional
Maintainer: Frank Dellaert <frank@cc.gatech.edu>
Uploaders: Jose Luis Blanco Claraco <joseluisblancoc@gmail.com>, Bernd Pfrommer <bernd.pfrommer@gmail.com>
Build-Depends: cmake, libboost-all-dev (>= 1.58), libeigen3-dev, libtbb-dev, debhelper (>=9)
Standards-Version: 3.9.7
Homepage: https://github.com/borglab/gtsam
Vcs-Browser: https://github.com/borglab/gtsam

Package: libgtsam-dev
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, libboost-serialization-dev, libboost-system-dev, libboost-filesystem-dev, libboost-thread-dev, libboost-program-options-dev, libboost-date-time-dev, libboost-timer-dev, libboost-chrono-dev, libboost-regex-dev
Description: Georgia Tech Smoothing and Mapping Library
gtsam: Georgia Tech Smoothing and Mapping library for SLAM type applications
@@ -1,15 +0,0 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: gtsam
Source: https://bitbucket.org/gtborg/gtsam.git

Files: *
Copyright: 2017, Frank Dellaert
License: BSD

Files: gtsam/3rdparty/CCOLAMD/*
Copyright: 2005-2011, Univ. of Florida. Authors: Timothy A. Davis, Sivasankaran Rajamanickam, and Stefan Larimore. Closely based on COLAMD by Davis, Stefan Larimore, in collaboration with Esmond Ng, and John Gilbert. http://www.cise.ufl.edu/research/sparse
License: GNU LESSER GENERAL PUBLIC LICENSE

Files: gtsam/3rdparty/Eigen/*
Copyright: 2017, Multiple Authors
License: MPL2
@@ -1,29 +0,0 @@
#!/usr/bin/make -f
# See debhelper(7) (uncomment to enable)
# output every command that modifies files on the build system.
export DH_VERBOSE = 1

# Makefile target name for running unit tests:
GTSAM_TEST_TARGET = check

# see FEATURE AREAS in dpkg-buildflags(1)
#export DEB_BUILD_MAINT_OPTIONS = hardening=+all

# see ENVIRONMENT in dpkg-buildflags(1)
# package maintainers to append CFLAGS
#export DEB_CFLAGS_MAINT_APPEND = -Wall -pedantic
# package maintainers to append LDFLAGS
#export DEB_LDFLAGS_MAINT_APPEND = -Wl,--as-needed

%:
dh $@ --parallel

# dh_make generated override targets
# This is example for Cmake (See https://bugs.debian.org/641051 )
override_dh_auto_configure:
dh_auto_configure -- -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_INSTALL_PREFIX=/usr -DGTSAM_BUILD_EXAMPLES_ALWAYS=OFF -DGTSAM_BUILD_TESTS=ON -DGTSAM_BUILD_WRAP=OFF -DGTSAM_BUILD_DOCS=OFF -DGTSAM_INSTALL_CPPUNITLITE=OFF -DGTSAM_INSTALL_GEOGRAPHICLIB=OFF -DGTSAM_BUILD_TYPE_POSTFIXES=OFF -DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF

override_dh_auto_test-arch:
# Tests for arch-dependent :
echo "[override_dh_auto_test-arch]"
dh_auto_build -O--buildsystem=cmake -- $(GTSAM_TEST_TARGET)

@@ -1 +0,0 @@
3.0 (quilt)
@@ -2291,15 +2291,11 @@ uncalibration
used in the residual).
\end_layout

\begin_layout Standard
\begin_inset Note Note
status collapsed

\begin_layout Section
Noise models of prior factors
\end_layout

\begin_layout Plain Layout
\begin_layout Standard
The simplest way to describe noise models is by an example.
Let's take a prior factor on a 3D pose
\begin_inset Formula $x\in\SE 3$

@@ -2353,7 +2349,7 @@ e\left(x\right)=\norm{h\left(x\right)}_{\Sigma}^{2}=h\left(x\right)^{\t}\Sigma^{
useful answer out quickly ]
\end_layout

\begin_layout Plain Layout
\begin_layout Standard
The density induced by a noise model on the prior factor is Gaussian in
the tangent space about the linearization point.
Suppose that the pose is linearized at

@@ -2431,7 +2427,7 @@ Here we see that the update
.
\end_layout

\begin_layout Plain Layout
\begin_layout Standard
This means that to draw random pose samples, we actually draw random samples
of
\begin_inset Formula $\delta x$

@@ -2456,7 +2452,7 @@ This means that to draw random pose samples, we actually draw random samples
Noise models of between factors
\end_layout

\begin_layout Plain Layout
\begin_layout Standard
The noise model of a BetweenFactor is a bit more complicated.
The unwhitened error is
\begin_inset Formula

@@ -2516,11 +2512,6 @@ e\left(\delta x_{1}\right) & \approx\norm{\log\left(z^{-1}\left(x_{1}\exp\delta
\end_inset


\end_layout

\end_inset


\end_layout

\end_body
Binary file not shown.

gtsam.h
@@ -281,7 +281,7 @@ virtual class Value {
};

#include <gtsam/base/GenericValue.h>
template<T = {Vector, gtsam::Point2, gtsam::Point3, gtsam::Rot2, gtsam::Rot3, gtsam::Pose2, gtsam::Pose3, gtsam::StereoPoint2, gtsam::Cal3_S2, gtsam::CalibratedCamera, gtsam::SimpleCamera, gtsam::imuBias::ConstantBias}>
template<T = {Vector, Matrix, gtsam::Point2, gtsam::Point3, gtsam::Rot2, gtsam::Rot3, gtsam::Pose2, gtsam::Pose3, gtsam::StereoPoint2, gtsam::Cal3_S2, gtsam::Cal3DS2, gtsam::Cal3Bundler, gtsam::EssentialMatrix, gtsam::CalibratedCamera, gtsam::SimpleCamera, gtsam::imuBias::ConstantBias}>
virtual class GenericValue : gtsam::Value {
void serializable() const;
};

@@ -2955,6 +2955,7 @@ class PreintegratedImuMeasurements {
gtsam::Rot3 deltaRij() const;
Vector deltaPij() const;
Vector deltaVij() const;
gtsam::imuBias::ConstantBias biasHat() const;
Vector biasHatVector() const;
gtsam::NavState predict(const gtsam::NavState& state_i,
const gtsam::imuBias::ConstantBias& bias) const;

@@ -3016,6 +3017,7 @@ class PreintegratedCombinedMeasurements {
gtsam::Rot3 deltaRij() const;
Vector deltaPij() const;
Vector deltaVij() const;
gtsam::imuBias::ConstantBias biasHat() const;
Vector biasHatVector() const;
gtsam::NavState predict(const gtsam::NavState& state_i,
const gtsam::imuBias::ConstantBias& bias) const;
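The two gtsam.h hunks above extend the wrapped interface of the preintegrated IMU measurement classes (delta accessors plus predict). A minimal C++ sketch of that workflow follows; the covariance values, zero bias, and stationary measurements are illustrative assumptions, not taken from this diff.

// Illustrative usage sketch (not part of the diff).
#include <gtsam/navigation/ImuFactor.h>
#include <gtsam/navigation/NavState.h>

int main() {
  // Gravity along -Z; 9.81 is an assumed magnitude for the example.
  auto params = gtsam::PreintegrationParams::MakeSharedU(9.81);
  params->accelerometerCovariance = gtsam::I_3x3 * 1e-4;
  params->gyroscopeCovariance = gtsam::I_3x3 * 1e-6;
  params->integrationCovariance = gtsam::I_3x3 * 1e-8;

  gtsam::imuBias::ConstantBias biasHat;  // assume zero bias for the sketch
  gtsam::PreintegratedImuMeasurements pim(params, biasHat);

  // Integrate 100 samples of a stationary IMU at 100 Hz.
  for (int i = 0; i < 100; ++i)
    pim.integrateMeasurement(gtsam::Vector3(0, 0, 9.81),  // measured acceleration
                             gtsam::Vector3(0, 0, 0),     // measured angular rate
                             0.01);                       // dt

  // Accessors exposed by the wrapper interface above.
  gtsam::Vector3 dP = pim.deltaPij(), dV = pim.deltaVij();
  gtsam::NavState predicted = pim.predict(gtsam::NavState(), biasHat);
  (void)dP; (void)dV; (void)predicted;
  return 0;
}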
@@ -17,7 +17,7 @@ if(NOT GTSAM_USE_SYSTEM_EIGEN)
endforeach(eigen_dir)

if(GTSAM_WITH_EIGEN_UNSUPPORTED)
message("-- Installing Eigen Unsupported modules")
message(STATUS "Installing Eigen Unsupported modules")
# do the same for the unsupported eigen folder
file(GLOB_RECURSE unsupported_eigen_headers "${CMAKE_CURRENT_SOURCE_DIR}/Eigen/unsupported/Eigen/*.h")
@@ -181,7 +181,7 @@ public:
// Alignment, see https://eigen.tuxfamily.org/dox/group__TopicStructHavingEigenMembers.html
enum { NeedsToAlign = (sizeof(T) % 16) == 0 };
public:
EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)
GTSAM_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)
};

/// use this macro instead of BOOST_CLASS_EXPORT for GenericValues
@@ -214,7 +214,7 @@ public:
enum { NeedsToAlign = (sizeof(M1) % 16) == 0 || (sizeof(M2) % 16) == 0
};
public:
EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)
GTSAM_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)
};

// Define any direct product group to be a model of the multiplicative Group concept
@@ -76,7 +76,7 @@ namespace gtsam {
blockStart_(0)
{
fillOffsets(dimensions.begin(), dimensions.end(), appendOneDimension);
matrix_.setZero(variableColOffsets_.back(), variableColOffsets_.back());
matrix_.resize(variableColOffsets_.back(), variableColOffsets_.back());
assertInvariants();
}

@@ -86,7 +86,7 @@ namespace gtsam {
blockStart_(0)
{
fillOffsets(firstBlockDim, lastBlockDim, appendOneDimension);
matrix_.setZero(variableColOffsets_.back(), variableColOffsets_.back());
matrix_.resize(variableColOffsets_.back(), variableColOffsets_.back());
assertInvariants();
}

@@ -95,7 +95,7 @@ namespace gtsam {
SymmetricBlockMatrix(const CONTAINER& dimensions, const Matrix& matrix, bool appendOneDimension = false) :
blockStart_(0)
{
matrix_.setZero(matrix.rows(), matrix.cols());
matrix_.resize(matrix.rows(), matrix.cols());
matrix_.triangularView<Eigen::Upper>() = matrix.triangularView<Eigen::Upper>();
fillOffsets(dimensions.begin(), dimensions.end(), appendOneDimension);
if(matrix_.rows() != matrix_.cols())

@@ -416,4 +416,3 @@ namespace gtsam {
class CholeskyFailed;

}
@@ -0,0 +1,67 @@
/* ----------------------------------------------------------------------------

 * GTSAM Copyright 2020, Georgia Tech Research Corporation,
 * Atlanta, Georgia 30332-0415
 * All Rights Reserved
 * Authors: Frank Dellaert, et al. (see THANKS for the full author list)

 * See LICENSE for the license information

 * -------------------------------------------------------------------------- */

/**
 * @file make_shared.h
 * @brief make_shared trampoline function to ensure proper alignment
 * @author Fan Jiang
 */

#pragma once

#include <gtsam/base/types.h>

#include <Eigen/Core>

#include <boost/make_shared.hpp>

#include <type_traits>

namespace gtsam {
/// A shorthand alias for accessing the ::type inside std::enable_if that can be used in a template directly
template<bool B, class T = void>
using enable_if_t = typename std::enable_if<B, T>::type;
}

namespace gtsam {

/**
 * Add our own `make_shared` as a layer of wrapping on `boost::make_shared`.
 * This solves the problem with the stock `make_shared` that custom alignment is not respected, causing SEGFAULTs
 * at runtime, which is notoriously hard to debug.
 *
 * Explanation
 * ===============
 * The template `needs_eigen_aligned_allocator<T>::value` will evaluate to `std::true_type` if the type alias
 * `_eigen_aligned_allocator_trait = void` is present in a class, which is automatically added by the
 * `GTSAM_MAKE_ALIGNED_OPERATOR_NEW` macro.
 *
 * This function declaration will only be taken when the above condition is true, so if some object does not need to
 * be aligned, `gtsam::make_shared` will fall back to the next definition, which is a simple wrapper for
 * `boost::make_shared`.
 *
 * @tparam T The type of object being constructed
 * @tparam Args Type of the arguments of the constructor
 * @param args Arguments of the constructor
 * @return The object created as a boost::shared_ptr<T>
 */
template<typename T, typename ... Args>
gtsam::enable_if_t<needs_eigen_aligned_allocator<T>::value, boost::shared_ptr<T>> make_shared(Args &&... args) {
  return boost::allocate_shared<T>(Eigen::aligned_allocator<T>(), std::forward<Args>(args)...);
}

/// Fall back to the boost version if no need for alignment
template<typename T, typename ... Args>
gtsam::enable_if_t<!needs_eigen_aligned_allocator<T>::value, boost::shared_ptr<T>> make_shared(Args &&... args) {
  return boost::make_shared<T>(std::forward<Args>(args)...);
}

}
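A usage sketch for the trampoline above; the MyFactor type is hypothetical, and any class carrying the trait alias behaves the same way.

// Illustrative usage sketch (not part of the diff).
#include <gtsam/base/make_shared.h>
#include <gtsam/base/types.h>

#include <Eigen/Core>

#include <string>

// Hypothetical type with a fixed-size Eigen member, opting into aligned
// allocation via the macro added in gtsam/base/types.h below.
struct MyFactor {
  Eigen::Matrix4d A;
  explicit MyFactor(double v) { A.setConstant(v); }
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
};

int main() {
  // Trait present: dispatches to boost::allocate_shared with Eigen::aligned_allocator.
  auto aligned = gtsam::make_shared<MyFactor>(1.0);
  // No trait: falls back to plain boost::make_shared.
  auto plain = gtsam::make_shared<std::string>("hello");
  return (aligned->A(0, 0) == 1.0 && !plain->empty()) ? 0 : 1;
}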
@ -42,123 +42,218 @@
|
|||
|
||||
namespace gtsam {
|
||||
|
||||
// Serialization directly to strings in compressed format
|
||||
template<class T>
|
||||
std::string serialize(const T& input) {
|
||||
std::ostringstream out_archive_stream;
|
||||
/** @name Standard serialization
|
||||
* Serialization in default compressed format
|
||||
*/
|
||||
///@{
|
||||
/// serializes to a stream
|
||||
template <class T>
|
||||
void serializeToStream(const T& input, std::ostream& out_archive_stream) {
|
||||
boost::archive::text_oarchive out_archive(out_archive_stream);
|
||||
out_archive << input;
|
||||
return out_archive_stream.str();
|
||||
}
|
||||
|
||||
template<class T>
|
||||
void deserialize(const std::string& serialized, T& output) {
|
||||
std::istringstream in_archive_stream(serialized);
|
||||
/// deserializes from a stream
|
||||
template <class T>
|
||||
void deserializeFromStream(std::istream& in_archive_stream, T& output) {
|
||||
boost::archive::text_iarchive in_archive(in_archive_stream);
|
||||
in_archive >> output;
|
||||
}
|
||||
|
||||
template<class T>
|
||||
/// serializes to a string
|
||||
template <class T>
|
||||
std::string serializeToString(const T& input) {
|
||||
std::ostringstream out_archive_stream;
|
||||
serializeToStream(input, out_archive_stream);
|
||||
return out_archive_stream.str();
|
||||
}
|
||||
|
||||
/// deserializes from a string
|
||||
template <class T>
|
||||
void deserializeFromString(const std::string& serialized, T& output) {
|
||||
std::istringstream in_archive_stream(serialized);
|
||||
deserializeFromStream(in_archive_stream, output);
|
||||
}
|
||||
|
||||
/// serializes to a file
|
||||
template <class T>
|
||||
bool serializeToFile(const T& input, const std::string& filename) {
|
||||
std::ofstream out_archive_stream(filename.c_str());
|
||||
if (!out_archive_stream.is_open())
|
||||
return false;
|
||||
boost::archive::text_oarchive out_archive(out_archive_stream);
|
||||
out_archive << input;
|
||||
if (!out_archive_stream.is_open()) return false;
|
||||
serializeToStream(input, out_archive_stream);
|
||||
out_archive_stream.close();
|
||||
return true;
|
||||
}
|
||||
|
||||
template<class T>
|
||||
/// deserializes from a file
|
||||
template <class T>
|
||||
bool deserializeFromFile(const std::string& filename, T& output) {
|
||||
std::ifstream in_archive_stream(filename.c_str());
|
||||
if (!in_archive_stream.is_open())
|
||||
return false;
|
||||
boost::archive::text_iarchive in_archive(in_archive_stream);
|
||||
in_archive >> output;
|
||||
if (!in_archive_stream.is_open()) return false;
|
||||
deserializeFromStream(in_archive_stream, output);
|
||||
in_archive_stream.close();
|
||||
return true;
|
||||
}
|
||||
|
||||
// Serialization to XML format with named structures
|
||||
template<class T>
|
||||
std::string serializeXML(const T& input, const std::string& name="data") {
|
||||
std::ostringstream out_archive_stream;
|
||||
// braces to flush out_archive as it goes out of scope before taking str()
|
||||
// fixes crash with boost 1.66-1.68
|
||||
// see https://github.com/boostorg/serialization/issues/82
|
||||
{
|
||||
boost::archive::xml_oarchive out_archive(out_archive_stream);
|
||||
out_archive << boost::serialization::make_nvp(name.c_str(), input);
|
||||
}
|
||||
return out_archive_stream.str();
|
||||
/// serializes to a string
|
||||
template <class T>
|
||||
std::string serialize(const T& input) {
|
||||
return serializeToString(input);
|
||||
}
|
||||
|
||||
template<class T>
|
||||
void deserializeXML(const std::string& serialized, T& output, const std::string& name="data") {
|
||||
std::istringstream in_archive_stream(serialized);
|
||||
boost::archive::xml_iarchive in_archive(in_archive_stream);
|
||||
in_archive >> boost::serialization::make_nvp(name.c_str(), output);
|
||||
/// deserializes from a string
|
||||
template <class T>
|
||||
void deserialize(const std::string& serialized, T& output) {
|
||||
deserializeFromString(serialized, output);
|
||||
}
|
||||
///@}
|
||||
|
||||
template<class T>
|
||||
bool serializeToXMLFile(const T& input, const std::string& filename, const std::string& name="data") {
|
||||
std::ofstream out_archive_stream(filename.c_str());
|
||||
if (!out_archive_stream.is_open())
|
||||
return false;
|
||||
/** @name XML Serialization
|
||||
* Serialization to XML format with named structures
|
||||
*/
|
||||
///@{
|
||||
/// serializes to a stream in XML
|
||||
template <class T>
|
||||
void serializeToXMLStream(const T& input, std::ostream& out_archive_stream,
|
||||
const std::string& name = "data") {
|
||||
boost::archive::xml_oarchive out_archive(out_archive_stream);
|
||||
out_archive << boost::serialization::make_nvp(name.c_str(), input);;
|
||||
out_archive_stream.close();
|
||||
return true;
|
||||
out_archive << boost::serialization::make_nvp(name.c_str(), input);
|
||||
}
|
||||
|
||||
template<class T>
|
||||
bool deserializeFromXMLFile(const std::string& filename, T& output, const std::string& name="data") {
|
||||
std::ifstream in_archive_stream(filename.c_str());
|
||||
if (!in_archive_stream.is_open())
|
||||
return false;
|
||||
/// deserializes from a stream in XML
|
||||
template <class T>
|
||||
void deserializeFromXMLStream(std::istream& in_archive_stream, T& output,
|
||||
const std::string& name = "data") {
|
||||
boost::archive::xml_iarchive in_archive(in_archive_stream);
|
||||
in_archive >> boost::serialization::make_nvp(name.c_str(), output);
|
||||
in_archive_stream.close();
|
||||
return true;
|
||||
}
|
||||
|
||||
// Serialization to binary format with named structures
|
||||
template<class T>
|
||||
std::string serializeBinary(const T& input, const std::string& name="data") {
|
||||
/// serializes to a string in XML
|
||||
template <class T>
|
||||
std::string serializeToXMLString(const T& input,
|
||||
const std::string& name = "data") {
|
||||
std::ostringstream out_archive_stream;
|
||||
boost::archive::binary_oarchive out_archive(out_archive_stream);
|
||||
out_archive << boost::serialization::make_nvp(name.c_str(), input);
|
||||
serializeToXMLStream(input, out_archive_stream, name);
|
||||
return out_archive_stream.str();
|
||||
}
|
||||
|
||||
template<class T>
|
||||
void deserializeBinary(const std::string& serialized, T& output, const std::string& name="data") {
|
||||
/// deserializes from a string in XML
|
||||
template <class T>
|
||||
void deserializeFromXMLString(const std::string& serialized, T& output,
|
||||
const std::string& name = "data") {
|
||||
std::istringstream in_archive_stream(serialized);
|
||||
boost::archive::binary_iarchive in_archive(in_archive_stream);
|
||||
in_archive >> boost::serialization::make_nvp(name.c_str(), output);
|
||||
deserializeFromXMLStream(in_archive_stream, output, name);
|
||||
}
|
||||
|
||||
template<class T>
|
||||
bool serializeToBinaryFile(const T& input, const std::string& filename, const std::string& name="data") {
|
||||
/// serializes to an XML file
|
||||
template <class T>
|
||||
bool serializeToXMLFile(const T& input, const std::string& filename,
|
||||
const std::string& name = "data") {
|
||||
std::ofstream out_archive_stream(filename.c_str());
|
||||
if (!out_archive_stream.is_open())
|
||||
return false;
|
||||
boost::archive::binary_oarchive out_archive(out_archive_stream);
|
||||
out_archive << boost::serialization::make_nvp(name.c_str(), input);
|
||||
if (!out_archive_stream.is_open()) return false;
|
||||
serializeToXMLStream(input, out_archive_stream, name);
|
||||
out_archive_stream.close();
|
||||
return true;
|
||||
}
|
||||
|
||||
template<class T>
|
||||
bool deserializeFromBinaryFile(const std::string& filename, T& output, const std::string& name="data") {
|
||||
/// deserializes from an XML file
|
||||
template <class T>
|
||||
bool deserializeFromXMLFile(const std::string& filename, T& output,
|
||||
const std::string& name = "data") {
|
||||
std::ifstream in_archive_stream(filename.c_str());
|
||||
if (!in_archive_stream.is_open())
|
||||
return false;
|
||||
boost::archive::binary_iarchive in_archive(in_archive_stream);
|
||||
in_archive >> boost::serialization::make_nvp(name.c_str(), output);
|
||||
if (!in_archive_stream.is_open()) return false;
|
||||
deserializeFromXMLStream(in_archive_stream, output, name);
|
||||
in_archive_stream.close();
|
||||
return true;
|
||||
}
|
||||
|
||||
} // \namespace gtsam
|
||||
/// serializes to a string in XML
|
||||
template <class T>
|
||||
std::string serializeXML(const T& input,
|
||||
const std::string& name = "data") {
|
||||
return serializeToXMLString(input, name);
|
||||
}
|
||||
|
||||
/// deserializes from a string in XML
|
||||
template <class T>
|
||||
void deserializeXML(const std::string& serialized, T& output,
|
||||
const std::string& name = "data") {
|
||||
deserializeFromXMLString(serialized, output, name);
|
||||
}
|
||||
///@}
|
||||
|
||||
/** @name Binary Serialization
|
||||
* Serialization to binary format with named structures
|
||||
*/
|
||||
///@{
|
||||
/// serializes to a stream in binary
|
||||
template <class T>
|
||||
void serializeToBinaryStream(const T& input, std::ostream& out_archive_stream,
|
||||
const std::string& name = "data") {
|
||||
boost::archive::binary_oarchive out_archive(out_archive_stream);
|
||||
out_archive << boost::serialization::make_nvp(name.c_str(), input);
|
||||
}
|
||||
|
||||
/// deserializes from a stream in binary
|
||||
template <class T>
|
||||
void deserializeFromBinaryStream(std::istream& in_archive_stream, T& output,
|
||||
const std::string& name = "data") {
|
||||
boost::archive::binary_iarchive in_archive(in_archive_stream);
|
||||
in_archive >> boost::serialization::make_nvp(name.c_str(), output);
|
||||
}
|
||||
|
||||
/// serializes to a string in binary
|
||||
template <class T>
|
||||
std::string serializeToBinaryString(const T& input,
|
||||
const std::string& name = "data") {
|
||||
std::ostringstream out_archive_stream;
|
||||
serializeToBinaryStream(input, out_archive_stream, name);
|
||||
return out_archive_stream.str();
|
||||
}
|
||||
|
||||
/// deserializes from a string in binary
|
||||
template <class T>
|
||||
void deserializeFromBinaryString(const std::string& serialized, T& output,
|
||||
const std::string& name = "data") {
|
||||
std::istringstream in_archive_stream(serialized);
|
||||
deserializeFromBinaryStream(in_archive_stream, output, name);
|
||||
}
|
||||
|
||||
/// serializes to a binary file
|
||||
template <class T>
|
||||
bool serializeToBinaryFile(const T& input, const std::string& filename,
|
||||
const std::string& name = "data") {
|
||||
std::ofstream out_archive_stream(filename.c_str());
|
||||
if (!out_archive_stream.is_open()) return false;
|
||||
serializeToBinaryStream(input, out_archive_stream, name);
|
||||
out_archive_stream.close();
|
||||
return true;
|
||||
}
|
||||
|
||||
/// deserializes from a binary file
|
||||
template <class T>
|
||||
bool deserializeFromBinaryFile(const std::string& filename, T& output,
|
||||
const std::string& name = "data") {
|
||||
std::ifstream in_archive_stream(filename.c_str());
|
||||
if (!in_archive_stream.is_open()) return false;
|
||||
deserializeFromBinaryStream(in_archive_stream, output, name);
|
||||
in_archive_stream.close();
|
||||
return true;
|
||||
}
|
||||
|
||||
/// serializes to a string in binary
|
||||
template <class T>
|
||||
std::string serializeBinary(const T& input,
|
||||
const std::string& name = "data") {
|
||||
return serializeToBinaryString(input, name);
|
||||
}
|
||||
|
||||
/// deserializes from a string in binary
|
||||
template <class T>
|
||||
void deserializeBinary(const std::string& serialized, T& output,
|
||||
const std::string& name = "data") {
|
||||
deserializeFromBinaryString(serialized, output, name);
|
||||
}
|
||||
///@}
|
||||
|
||||
} // namespace gtsam
|
||||
|
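The renamed serialization helpers above keep serialize()/deserialize() (and the XML/binary variants) as thin wrappers, so existing callers compile unchanged. A round-trip sketch through the string helpers; Pose3 is just a stand-in for any serializable GTSAM type.

// Illustrative usage sketch (not part of the diff).
#include <gtsam/base/serialization.h>
#include <gtsam/geometry/Pose3.h>

#include <cassert>
#include <string>

int main() {
  gtsam::Pose3 input(gtsam::Rot3::Yaw(0.1), gtsam::Point3(1, 2, 3));
  gtsam::Pose3 output;

  // New explicit names...
  std::string text = gtsam::serializeToString(input);
  gtsam::deserializeFromString(text, output);
  assert(input.equals(output, 1e-9));

  // ...and the original names still work as wrappers.
  gtsam::deserialize(gtsam::serialize(input), output);
  assert(input.equals(output, 1e-9));
  return 0;
}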
|
|
|||
|
|
@ -26,6 +26,7 @@
|
|||
#include <gtsam/base/serialization.h>
|
||||
|
||||
#include <boost/serialization/serialization.hpp>
|
||||
#include <boost/filesystem.hpp>
|
||||
|
||||
|
||||
// whether to print the serialized text to stdout
|
||||
|
|
@ -40,22 +41,37 @@ T create() {
|
|||
return T();
|
||||
}
|
||||
|
||||
// Creates or empties a folder in the build folder and returns the relative path
|
||||
boost::filesystem::path resetFilesystem(
|
||||
boost::filesystem::path folder = "actual") {
|
||||
boost::filesystem::remove_all(folder);
|
||||
boost::filesystem::create_directory(folder);
|
||||
return folder;
|
||||
}
|
||||
|
||||
// Templated round-trip serialization
|
||||
template<class T>
|
||||
void roundtrip(const T& input, T& output) {
|
||||
// Serialize
|
||||
std::string serialized = serialize(input);
|
||||
if (verbose) std::cout << serialized << std::endl << std::endl;
|
||||
|
||||
deserialize(serialized, output);
|
||||
}
|
||||
|
||||
// This version requires equality operator
|
||||
// Templated round-trip serialization using a file
|
||||
template<class T>
|
||||
void roundtripFile(const T& input, T& output) {
|
||||
boost::filesystem::path path = resetFilesystem()/"graph.dat";
|
||||
serializeToFile(input, path.string());
|
||||
deserializeFromFile(path.string(), output);
|
||||
}
|
||||
|
||||
// This version requires equality operator and uses string and file round-trips
|
||||
template<class T>
|
||||
bool equality(const T& input = T()) {
|
||||
T output = create<T>();
|
||||
T output = create<T>(), outputf = create<T>();
|
||||
roundtrip<T>(input,output);
|
||||
return input==output;
|
||||
roundtripFile<T>(input,outputf);
|
||||
return (input==output) && (input==outputf);
|
||||
}
|
||||
|
||||
// This version requires Testable
|
||||
|
|
@ -77,20 +93,26 @@ bool equalsDereferenced(const T& input) {
|
|||
// Templated round-trip serialization using XML
|
||||
template<class T>
|
||||
void roundtripXML(const T& input, T& output) {
|
||||
// Serialize
|
||||
std::string serialized = serializeXML<T>(input);
|
||||
if (verbose) std::cout << serialized << std::endl << std::endl;
|
||||
|
||||
// De-serialize
|
||||
deserializeXML(serialized, output);
|
||||
}
|
||||
|
||||
// Templated round-trip serialization using XML File
|
||||
template<class T>
|
||||
void roundtripXMLFile(const T& input, T& output) {
|
||||
boost::filesystem::path path = resetFilesystem()/"graph.xml";
|
||||
serializeToXMLFile(input, path.string());
|
||||
deserializeFromXMLFile(path.string(), output);
|
||||
}
|
||||
|
||||
// This version requires equality operator
|
||||
template<class T>
|
||||
bool equalityXML(const T& input = T()) {
|
||||
T output = create<T>();
|
||||
T output = create<T>(), outputf = create<T>();
|
||||
roundtripXML<T>(input,output);
|
||||
return input==output;
|
||||
roundtripXMLFile<T>(input,outputf);
|
||||
return (input==output) && (input==outputf);
|
||||
}
|
||||
|
||||
// This version requires Testable
|
||||
|
|
@ -112,20 +134,26 @@ bool equalsDereferencedXML(const T& input = T()) {
|
|||
// Templated round-trip serialization using XML
|
||||
template<class T>
|
||||
void roundtripBinary(const T& input, T& output) {
|
||||
// Serialize
|
||||
std::string serialized = serializeBinary<T>(input);
|
||||
if (verbose) std::cout << serialized << std::endl << std::endl;
|
||||
|
||||
// De-serialize
|
||||
deserializeBinary(serialized, output);
|
||||
}
|
||||
|
||||
// Templated round-trip serialization using Binary file
|
||||
template<class T>
|
||||
void roundtripBinaryFile(const T& input, T& output) {
|
||||
boost::filesystem::path path = resetFilesystem()/"graph.bin";
|
||||
serializeToBinaryFile(input, path.string());
|
||||
deserializeFromBinaryFile(path.string(), output);
|
||||
}
|
||||
|
||||
// This version requires equality operator
|
||||
template<class T>
|
||||
bool equalityBinary(const T& input = T()) {
|
||||
T output = create<T>();
|
||||
T output = create<T>(), outputf = create<T>();
|
||||
roundtripBinary<T>(input,output);
|
||||
return input==output;
|
||||
roundtripBinaryFile<T>(input,outputf);
|
||||
return (input==output) && (input==outputf);
|
||||
}
|
||||
|
||||
// This version requires Testable
|
||||
|
|
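The file-based helpers exercised by the round-trip tests above can also be used directly; a small sketch follows (the filename is arbitrary and Pose3 is again a stand-in type).

// Illustrative usage sketch (not part of the diff).
#include <gtsam/base/serialization.h>
#include <gtsam/geometry/Pose3.h>

#include <cassert>

int main() {
  gtsam::Pose3 input(gtsam::Rot3::Rodrigues(0.1, 0.2, 0.3), gtsam::Point3(4, 5, 6));
  gtsam::Pose3 output;

  // XML with a named top-level structure; the binary and plain-text file
  // helpers follow the same pattern.
  if (gtsam::serializeToXMLFile(input, "pose.xml", "pose"))
    gtsam::deserializeFromXMLFile("pose.xml", output, "pose");
  assert(input.equals(output, 1e-9));
  return 0;
}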
@@ -230,3 +230,50 @@ namespace std {
#ifdef ERROR
#undef ERROR
#endif

namespace gtsam {

/// Convenience void_t as we assume C++11, it will not conflict the std one in C++17 as this is in `gtsam::`
template<typename ...> using void_t = void;

/**
 * A SFINAE trait to mark classes that need special alignment.
 *
 * This is required to make boost::make_shared etc. respect alignment, which is essential for the Python
 * wrappers to work properly.
 *
 * Explanation
 * =============
 * When a GTSAM type is not declared with the type alias `_eigen_aligned_allocator_trait = void`, the first template
 * will be taken so `needs_eigen_aligned_allocator` will be resolved to `std::false_type`.
 *
 * Otherwise, it will resolve to the second template, which will be resolved to `std::true_type`.
 *
 * Please refer to `gtsam/base/make_shared.h` for an example.
 */
template<typename, typename = void_t<>>
struct needs_eigen_aligned_allocator : std::false_type {
};
template<typename T>
struct needs_eigen_aligned_allocator<T, void_t<typename T::_eigen_aligned_allocator_trait>> : std::true_type {
};

}

/**
 * This marks a GTSAM object to require alignment. With this macro an object will automatically be allocated in aligned
 * memory when one uses `gtsam::make_shared`. It reduces future misalignment problems that are hard to debug.
 * See https://eigen.tuxfamily.org/dox/group__DenseMatrixManipulation__Alignement.html for detailed explanation.
 */
#define GTSAM_MAKE_ALIGNED_OPERATOR_NEW \
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW \
  using _eigen_aligned_allocator_trait = void;

/**
 * This marks a GTSAM object to require alignment. With this macro an object will automatically be allocated in aligned
 * memory when one uses `gtsam::make_shared`. It reduces future misalignment problems that are hard to debug.
 * See https://eigen.tuxfamily.org/dox/group__DenseMatrixManipulation__Alignement.html for detailed explanation.
 */
#define GTSAM_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign) \
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign) \
  using _eigen_aligned_allocator_trait = void;
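A compile-time sketch of how the trait above reacts to the macro; both example types are hypothetical.

// Illustrative usage sketch (not part of the diff).
#include <gtsam/base/types.h>

#include <Eigen/Core>

struct Plain {};                 // no trait alias

struct Aligned {                 // macro adds aligned operator new and the trait alias
  Eigen::Matrix4d M;
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
};

static_assert(!gtsam::needs_eigen_aligned_allocator<Plain>::value,
              "falls through to the std::false_type primary template");
static_assert(gtsam::needs_eigen_aligned_allocator<Aligned>::value,
              "partial specialization matches via the trait alias");

int main() { return 0; }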
@@ -162,7 +162,7 @@ private:
NeedsToAlign = (sizeof(B) % 16) == 0 || (sizeof(R) % 16) == 0
};
public:
EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)
GTSAM_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)
};

// Declare this to be both Testable and a Manifold
@ -24,7 +24,7 @@
|
|||
namespace gtsam {
|
||||
|
||||
/* ************************************************************************* */
|
||||
Cal3Fisheye::Cal3Fisheye(const Vector& v)
|
||||
Cal3Fisheye::Cal3Fisheye(const Vector9& v)
|
||||
: fx_(v[0]),
|
||||
fy_(v[1]),
|
||||
s_(v[2]),
|
||||
|
|
@ -50,76 +50,73 @@ Matrix3 Cal3Fisheye::K() const {
|
|||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
static Matrix29 D2dcalibration(const double xd, const double yd,
|
||||
const double xi, const double yi,
|
||||
const double t3, const double t5,
|
||||
const double t7, const double t9, const double r,
|
||||
Matrix2& DK) {
|
||||
// order: fx, fy, s, u0, v0
|
||||
Matrix25 DR1;
|
||||
DR1 << xd, 0.0, yd, 1.0, 0.0, 0.0, yd, 0.0, 0.0, 1.0;
|
||||
|
||||
// order: k1, k2, k3, k4
|
||||
Matrix24 DR2;
|
||||
DR2 << t3 * xi, t5 * xi, t7 * xi, t9 * xi, t3 * yi, t5 * yi, t7 * yi, t9 * yi;
|
||||
DR2 /= r;
|
||||
Matrix29 D;
|
||||
D << DR1, DK * DR2;
|
||||
return D;
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
static Matrix2 D2dintrinsic(const double xi, const double yi, const double r,
|
||||
const double td, const double t, const double tt,
|
||||
const double t4, const double t6, const double t8,
|
||||
const double k1, const double k2, const double k3,
|
||||
const double k4, const Matrix2& DK) {
|
||||
const double dr_dxi = xi / sqrt(xi * xi + yi * yi);
|
||||
const double dr_dyi = yi / sqrt(xi * xi + yi * yi);
|
||||
const double dt_dr = 1 / (1 + r * r);
|
||||
const double dtd_dt =
|
||||
1 + 3 * k1 * tt + 5 * k2 * t4 + 7 * k3 * t6 + 9 * k4 * t8;
|
||||
const double dtd_dxi = dtd_dt * dt_dr * dr_dxi;
|
||||
const double dtd_dyi = dtd_dt * dt_dr * dr_dyi;
|
||||
|
||||
const double rinv = 1 / r;
|
||||
const double rrinv = 1 / (r * r);
|
||||
const double dxd_dxi =
|
||||
dtd_dxi * xi * rinv + td * rinv + td * xi * (-rrinv) * dr_dxi;
|
||||
const double dxd_dyi = dtd_dyi * xi * rinv - td * xi * rrinv * dr_dyi;
|
||||
const double dyd_dxi = dtd_dxi * yi * rinv - td * yi * rrinv * dr_dxi;
|
||||
const double dyd_dyi =
|
||||
dtd_dyi * yi * rinv + td * rinv + td * yi * (-rrinv) * dr_dyi;
|
||||
|
||||
Matrix2 DR;
|
||||
DR << dxd_dxi, dxd_dyi, dyd_dxi, dyd_dyi;
|
||||
|
||||
return DK * DR;
|
||||
double Cal3Fisheye::Scaling(double r) {
|
||||
static constexpr double threshold = 1e-8;
|
||||
if (r > threshold || r < -threshold) {
|
||||
return atan(r) / r;
|
||||
} else {
|
||||
// Taylor expansion close to 0
|
||||
double r2 = r * r, r4 = r2 * r2;
|
||||
return 1.0 - r2 / 3 + r4 / 5;
|
||||
}
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
Point2 Cal3Fisheye::uncalibrate(const Point2& p, OptionalJacobian<2, 9> H1,
|
||||
OptionalJacobian<2, 2> H2) const {
|
||||
const double xi = p.x(), yi = p.y();
|
||||
const double r = sqrt(xi * xi + yi * yi);
|
||||
const double r2 = xi * xi + yi * yi, r = sqrt(r2);
|
||||
const double t = atan(r);
|
||||
const double tt = t * t, t4 = tt * tt, t6 = tt * t4, t8 = t4 * t4;
|
||||
const double td = t * (1 + k1_ * tt + k2_ * t4 + k3_ * t6 + k4_ * t8);
|
||||
const double td_o_r = r > 1e-8 ? td / r : 1;
|
||||
const double xd = td_o_r * xi, yd = td_o_r * yi;
|
||||
const double t2 = t * t, t4 = t2 * t2, t6 = t2 * t4, t8 = t4 * t4;
|
||||
Vector5 K, T;
|
||||
K << 1, k1_, k2_, k3_, k4_;
|
||||
T << 1, t2, t4, t6, t8;
|
||||
const double scaling = Scaling(r);
|
||||
const double s = scaling * K.dot(T);
|
||||
const double xd = s * xi, yd = s * yi;
|
||||
Point2 uv(fx_ * xd + s_ * yd + u0_, fy_ * yd + v0_);
|
||||
|
||||
Matrix2 DK;
|
||||
if (H1 || H2) DK << fx_, s_, 0.0, fy_;
|
||||
|
||||
// Derivative for calibration parameters (2 by 9)
|
||||
if (H1)
|
||||
*H1 = D2dcalibration(xd, yd, xi, yi, t * tt, t * t4, t * t6, t * t8, r, DK);
|
||||
if (H1) {
|
||||
Matrix25 DR1;
|
||||
// order: fx, fy, s, u0, v0
|
||||
DR1 << xd, 0.0, yd, 1.0, 0.0, 0.0, yd, 0.0, 0.0, 1.0;
|
||||
|
||||
// order: k1, k2, k3, k4
|
||||
Matrix24 DR2;
|
||||
auto T4 = T.tail<4>().transpose();
|
||||
DR2 << xi * T4, yi * T4;
|
||||
*H1 << DR1, DK * scaling * DR2;
|
||||
}
|
||||
|
||||
// Derivative for points in intrinsic coords (2 by 2)
|
||||
if (H2)
|
||||
*H2 =
|
||||
D2dintrinsic(xi, yi, r, td, t, tt, t4, t6, t8, k1_, k2_, k3_, k4_, DK);
|
||||
if (H2) {
|
||||
const double dtd_dt =
|
||||
1 + 3 * k1_ * t2 + 5 * k2_ * t4 + 7 * k3_ * t6 + 9 * k4_ * t8;
|
||||
const double dt_dr = 1 / (1 + r2);
|
||||
const double rinv = 1 / r;
|
||||
const double dr_dxi = xi * rinv;
|
||||
const double dr_dyi = yi * rinv;
|
||||
const double dtd_dxi = dtd_dt * dt_dr * dr_dxi;
|
||||
const double dtd_dyi = dtd_dt * dt_dr * dr_dyi;
|
||||
|
||||
const double td = t * K.dot(T);
|
||||
const double rrinv = 1 / r2;
|
||||
const double dxd_dxi =
|
||||
dtd_dxi * dr_dxi + td * rinv - td * xi * rrinv * dr_dxi;
|
||||
const double dxd_dyi = dtd_dyi * dr_dxi - td * xi * rrinv * dr_dyi;
|
||||
const double dyd_dxi = dtd_dxi * dr_dyi - td * yi * rrinv * dr_dxi;
|
||||
const double dyd_dyi =
|
||||
dtd_dyi * dr_dyi + td * rinv - td * yi * rrinv * dr_dyi;
|
||||
|
||||
Matrix2 DR;
|
||||
DR << dxd_dxi, dxd_dyi, dyd_dxi, dyd_dyi;
|
||||
|
||||
*H2 = DK * DR;
|
||||
}
|
||||
|
||||
return uv;
|
||||
}
|
||||
|
|
@ -157,39 +154,10 @@ Point2 Cal3Fisheye::calibrate(const Point2& uv, const double tol) const {
|
|||
return pi;
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
Matrix2 Cal3Fisheye::D2d_intrinsic(const Point2& p) const {
|
||||
const double xi = p.x(), yi = p.y();
|
||||
const double r = sqrt(xi * xi + yi * yi);
|
||||
const double t = atan(r);
|
||||
const double tt = t * t, t4 = tt * tt, t6 = t4 * tt, t8 = t4 * t4;
|
||||
const double td = t * (1 + k1_ * tt + k2_ * t4 + k3_ * t6 + k4_ * t8);
|
||||
|
||||
Matrix2 DK;
|
||||
DK << fx_, s_, 0.0, fy_;
|
||||
|
||||
return D2dintrinsic(xi, yi, r, td, t, tt, t4, t6, t8, k1_, k2_, k3_, k4_, DK);
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
Matrix29 Cal3Fisheye::D2d_calibration(const Point2& p) const {
|
||||
const double xi = p.x(), yi = p.y();
|
||||
const double r = sqrt(xi * xi + yi * yi);
|
||||
const double t = atan(r);
|
||||
const double tt = t * t, t4 = tt * tt, t6 = tt * t4, t8 = t4 * t4;
|
||||
const double td = t * (1 + k1_ * tt + k2_ * t4 + k3_ * t6 + k4_ * t8);
|
||||
const double xd = td / r * xi, yd = td / r * yi;
|
||||
|
||||
Matrix2 DK;
|
||||
DK << fx_, s_, 0.0, fy_;
|
||||
return D2dcalibration(xd, yd, xi, yi, t * tt, t * t4, t * t6, t * t8, r, DK);
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
void Cal3Fisheye::print(const std::string& s_) const {
|
||||
gtsam::print((Matrix)K(), s_ + ".K");
|
||||
gtsam::print(Vector(k()), s_ + ".k");
|
||||
;
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
|
|
|
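The Cal3Fisheye refactor above centralizes the atan(r)/r factor in the new Scaling helper, with a Taylor fallback near r = 0. A standalone numeric sketch of that idea (not the library code itself):

// Illustrative sketch (not part of the diff).
#include <cassert>
#include <cmath>

// Mirrors the Scaling helper introduced above, for illustration only.
static double scaling(double r) {
  constexpr double threshold = 1e-8;
  if (std::fabs(r) > threshold) return std::atan(r) / r;
  const double r2 = r * r;
  return 1.0 - r2 / 3 + r2 * r2 / 5;  // Taylor expansion of atan(r)/r about 0
}

int main() {
  // Away from zero the helper matches atan(r)/r exactly ...
  assert(std::fabs(scaling(0.5) - std::atan(0.5) / 0.5) < 1e-12);
  // ... and at zero the expansion returns the finite limit 1.
  assert(std::fabs(scaling(0.0) - 1.0) < 1e-12);
  return 0;
}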
|||
|
|
@ -20,6 +20,8 @@
|
|||
|
||||
#include <gtsam/geometry/Point2.h>
|
||||
|
||||
#include <string>
|
||||
|
||||
namespace gtsam {
|
||||
|
||||
/**
|
||||
|
|
@ -43,7 +45,7 @@ namespace gtsam {
|
|||
* [u; v; 1] = K*[x_d; y_d; 1]
|
||||
*/
|
||||
class GTSAM_EXPORT Cal3Fisheye {
|
||||
protected:
|
||||
private:
|
||||
double fx_, fy_, s_, u0_, v0_; // focal length, skew and principal point
|
||||
double k1_, k2_, k3_, k4_; // fisheye distortion coefficients
|
||||
|
||||
|
|
@ -78,7 +80,7 @@ class GTSAM_EXPORT Cal3Fisheye {
|
|||
/// @name Advanced Constructors
|
||||
/// @{
|
||||
|
||||
Cal3Fisheye(const Vector& v);
|
||||
explicit Cal3Fisheye(const Vector9& v);
|
||||
|
||||
/// @}
|
||||
/// @name Standard Interface
|
||||
|
|
@ -120,6 +122,9 @@ class GTSAM_EXPORT Cal3Fisheye {
|
|||
/// Return all parameters as a vector
|
||||
Vector9 vector() const;
|
||||
|
||||
/// Helper function that calculates atan(r)/r
|
||||
static double Scaling(double r);
|
||||
|
||||
/**
|
||||
* @brief convert intrinsic coordinates [x_i; y_i] to (distorted) image
|
||||
* coordinates [u; v]
|
||||
|
|
@ -136,13 +141,6 @@ class GTSAM_EXPORT Cal3Fisheye {
|
|||
/// y_i]
|
||||
Point2 calibrate(const Point2& p, const double tol = 1e-5) const;
|
||||
|
||||
/// Derivative of uncalibrate wrpt intrinsic coordinates [x_i; y_i]
|
||||
Matrix2 D2d_intrinsic(const Point2& p) const;
|
||||
|
||||
/// Derivative of uncalibrate wrpt the calibration parameters
|
||||
/// [fx, fy, s, u0, v0, k1, k2, k3, k4]
|
||||
Matrix29 D2d_calibration(const Point2& p) const;
|
||||
|
||||
/// @}
|
||||
/// @name Testable
|
||||
/// @{
|
||||
|
|
|
|||
|
|
@ -319,7 +319,7 @@ private:
|
|||
}
|
||||
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
};
|
||||
|
||||
template<class CAMERA>
|
||||
|
|
|
|||
|
|
@ -212,7 +212,7 @@ class EssentialMatrix {
|
|||
/// @}
|
||||
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
};
|
||||
|
||||
template<>
|
||||
|
|
|
|||
|
|
@ -325,7 +325,7 @@ private:
|
|||
}
|
||||
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
};
|
||||
|
||||
// manifold traits
|
||||
|
|
|
|||
|
|
@ -222,7 +222,7 @@ private:
|
|||
}
|
||||
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
};
|
||||
// end of class PinholeBaseK
|
||||
|
||||
|
|
@ -425,7 +425,7 @@ private:
|
|||
}
|
||||
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
};
|
||||
// end of class PinholePose
|
||||
|
||||
|
|
|
|||
|
|
@ -317,7 +317,7 @@ public:
|
|||
|
||||
public:
|
||||
// Align for Point2, which is either derived from, or is typedef, of Vector2
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
}; // Pose2
|
||||
|
||||
/** specialization for pose2 wedge function (generic template in Lie.h) */
|
||||
|
|
|
|||
|
|
@ -355,7 +355,7 @@ public:
|
|||
#ifdef GTSAM_USE_QUATERNIONS
|
||||
// Align if we are using Quaternions
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
#endif
|
||||
};
|
||||
// Pose3 class
|
||||
|
|
|
|||
|
|
@ -544,7 +544,7 @@ namespace gtsam {
|
|||
#ifdef GTSAM_USE_QUATERNIONS
|
||||
// only align if quaternion, Matrix3 has no alignment requirements
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
#endif
|
||||
};
|
||||
|
||||
|
|
|
|||
|
|
@ -20,8 +20,8 @@
|
|||
|
||||
#include <gtsam/base/Lie.h>
|
||||
#include <gtsam/base/Manifold.h>
|
||||
#include <gtsam/base/make_shared.h>
|
||||
#include <gtsam/dllexport.h>
|
||||
|
||||
#include <Eigen/Core>
|
||||
|
||||
#include <iostream> // TODO(frank): how to avoid?
|
||||
|
|
@ -54,7 +54,7 @@ class SO : public LieGroup<SO<N>, internal::DimensionSO(N)> {
|
|||
using VectorN2 = Eigen::Matrix<double, internal::NSquaredSO(N), 1>;
|
||||
using MatrixDD = Eigen::Matrix<double, dimension, dimension>;
|
||||
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW_IF(true)
|
||||
|
||||
protected:
|
||||
MatrixNN matrix_; ///< Rotation matrix
|
||||
|
|
|
|||
|
|
@ -214,7 +214,7 @@ private:
|
|||
/// @}
|
||||
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
};
|
||||
|
||||
// Define GTSAM traits
|
||||
|
|
|
|||
|
|
@ -10,17 +10,18 @@
|
|||
* -------------------------------------------------------------------------- */
|
||||
|
||||
/**
|
||||
* @file testCal3Fisheye.cpp
|
||||
* @file testCal3DFisheye.cpp
|
||||
* @brief Unit tests for fisheye calibration class
|
||||
* @author ghaggin
|
||||
*/
|
||||
|
||||
#include <CppUnitLite/TestHarness.h>
|
||||
#include <gtsam/base/Testable.h>
|
||||
#include <gtsam/base/numericalDerivative.h>
|
||||
#include <gtsam/geometry/Cal3Fisheye.h>
|
||||
#include <gtsam/geometry/Point3.h>
|
||||
|
||||
#include <CppUnitLite/TestHarness.h>
|
||||
|
||||
using namespace gtsam;
|
||||
|
||||
GTSAM_CONCEPT_TESTABLE_INST(Cal3Fisheye)
|
||||
|
|
@ -30,12 +31,27 @@ static const double fx = 250, fy = 260, s = 0.1, u0 = 320, v0 = 240;
|
|||
static Cal3Fisheye K(fx, fy, s, u0, v0, -0.013721808247486035,
|
||||
0.020727425669427896, -0.012786476702685545,
|
||||
0.0025242267320687625);
|
||||
static Point2 p(2, 3);
|
||||
static Point2 kTestPoint2(2, 3);
|
||||
|
||||
/* ************************************************************************* */
|
||||
TEST(Cal3Fisheye, assert_equal) { CHECK(assert_equal(K, K, 1e-5)); }
|
||||
|
||||
/* ************************************************************************* */
|
||||
TEST(Cal3Fisheye, retract) {
|
||||
Cal3Fisheye expected(K.fx() + 1, K.fy() + 2, K.skew() + 3, K.px() + 4,
|
||||
K.py() + 5, K.k1() + 6, K.k2() + 7, K.k3() + 8,
|
||||
K.k4() + 9);
|
||||
Vector d(9);
|
||||
d << 1, 2, 3, 4, 5, 6, 7, 8, 9;
|
||||
Cal3Fisheye actual = K.retract(d);
|
||||
CHECK(assert_equal(expected, actual, 1e-7));
|
||||
CHECK(assert_equal(d, K.localCoordinates(actual), 1e-7));
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
TEST(Cal3Fisheye, uncalibrate1) {
|
||||
// Calculate the solution
|
||||
const double xi = p.x(), yi = p.y();
|
||||
const double xi = kTestPoint2.x(), yi = kTestPoint2.y();
|
||||
const double r = sqrt(xi * xi + yi * yi);
|
||||
const double t = atan(r);
|
||||
const double tt = t * t, t4 = tt * tt, t6 = tt * t4, t8 = t4 * t4;
|
||||
|
|
@ -46,32 +62,42 @@ TEST(Cal3Fisheye, uncalibrate1) {
|
|||
|
||||
Point2 uv_sol(v[0] / v[2], v[1] / v[2]);
|
||||
|
||||
Point2 uv = K.uncalibrate(p);
|
||||
Point2 uv = K.uncalibrate(kTestPoint2);
|
||||
CHECK(assert_equal(uv, uv_sol));
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
/**
|
||||
* Check that a point at (0,0) projects to the
|
||||
* image center.
|
||||
*/
|
||||
TEST(Cal3Fisheye, uncalibrate2) {
|
||||
Point2 pz(0, 0);
|
||||
auto uv = K.uncalibrate(pz);
|
||||
CHECK(assert_equal(uv, Point2(u0, v0)));
|
||||
// For numerical derivatives
|
||||
Point2 f(const Cal3Fisheye& k, const Point2& pt) { return k.uncalibrate(pt); }
|
||||
|
||||
/* ************************************************************************* */
|
||||
TEST(Cal3Fisheye, Derivatives) {
|
||||
Matrix H1, H2;
|
||||
K.uncalibrate(kTestPoint2, H1, H2);
|
||||
CHECK(assert_equal(numericalDerivative21(f, K, kTestPoint2, 1e-7), H1, 1e-5));
|
||||
CHECK(assert_equal(numericalDerivative22(f, K, kTestPoint2, 1e-7), H2, 1e-5));
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
/**
|
||||
* This test uses cv2::fisheye::projectPoints to test that uncalibrate
|
||||
* properly projects a point into the image plane. One notable difference
|
||||
* between opencv and the Cal3Fisheye::uncalibrate function is the skew
|
||||
* parameter. The equivalence is alpha = s/fx.
|
||||
*
|
||||
*
|
||||
* Python script to project points with fisheye model in OpenCv
|
||||
* (script run with OpenCv version 4.2.0 and Numpy version 1.18.2)
|
||||
*/
|
||||
// Check that a point at (0,0) projects to the image center.
|
||||
TEST(Cal3Fisheye, uncalibrate2) {
|
||||
Point2 pz(0, 0);
|
||||
Matrix H1, H2;
|
||||
auto uv = K.uncalibrate(pz, H1, H2);
|
||||
CHECK(assert_equal(uv, Point2(u0, v0)));
|
||||
CHECK(assert_equal(numericalDerivative21(f, K, pz, 1e-7), H1, 1e-5));
|
||||
// TODO(frank): the second jacobian is all NaN for the image center!
|
||||
// CHECK(assert_equal(numericalDerivative22(f, K, pz, 1e-7), H2, 1e-5));
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
// This test uses cv2::fisheye::projectPoints to test that uncalibrate
|
||||
// properly projects a point into the image plane. One notable difference
|
||||
// between opencv and the Cal3Fisheye::uncalibrate function is the skew
|
||||
// parameter. The equivalence is alpha = s/fx.
|
||||
//
|
||||
// Python script to project points with fisheye model in OpenCv
|
||||
// (script run with OpenCv version 4.2.0 and Numpy version 1.18.2)
|
||||
// clang-format off
|
||||
/*
|
||||
===========================================================
|
||||
|
|
@ -94,6 +120,7 @@ tvec = np.float64([[0.,0.,0.]]);
|
|||
imagePoints, jacobian = cv2.fisheye.projectPoints(objpts, rvec, tvec, cameraMatrix, distCoeffs, alpha=alpha)
|
||||
np.set_printoptions(precision=14)
|
||||
print(imagePoints)
|
||||
|
||||
===========================================================
|
||||
* Script output: [[[457.82638130304935 408.18905848512986]]]
|
||||
*/
|
||||
|
|
@ -134,21 +161,18 @@ TEST(Cal3Fisheye, calibrate1) {
|
|||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
/**
|
||||
* Check that calibrate returns (0,0) for the image center
|
||||
*/
|
||||
// Check that calibrate returns (0,0) for the image center
|
||||
TEST(Cal3Fisheye, calibrate2) {
|
||||
Point2 uv(u0, v0);
|
||||
auto xi_hat = K.calibrate(uv);
|
||||
CHECK(assert_equal(xi_hat, Point2(0, 0)))
|
||||
}
|
||||
|
||||
/**
|
||||
* Run calibrate on OpenCv test from uncalibrate3
|
||||
* (script shown above)
|
||||
* 3d point: (23, 27, 31)
|
||||
* 2d point in image plane: (457.82638130304935, 408.18905848512986)
|
||||
*/
|
||||
/* ************************************************************************* */
|
||||
// Run calibrate on OpenCv test from uncalibrate3
|
||||
// (script shown above)
|
||||
// 3d point: (23, 27, 31)
|
||||
// 2d point in image plane: (457.82638130304935, 408.18905848512986)
|
||||
TEST(Cal3Fisheye, calibrate3) {
|
||||
Point3 p3(23, 27, 31);
|
||||
Point2 xi(p3.x() / p3.z(), p3.y() / p3.z());
|
||||
|
|
@ -157,47 +181,6 @@ TEST(Cal3Fisheye, calibrate3) {
|
|||
CHECK(assert_equal(xi_hat, xi));
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
// For numerical derivatives
|
||||
Point2 uncalibrate_(const Cal3Fisheye& k, const Point2& pt) {
|
||||
return k.uncalibrate(pt);
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
TEST(Cal3Fisheye, Duncalibrate1) {
|
||||
Matrix computed;
|
||||
K.uncalibrate(p, computed, boost::none);
|
||||
Matrix numerical = numericalDerivative21(uncalibrate_, K, p, 1e-7);
|
||||
CHECK(assert_equal(numerical, computed, 1e-5));
|
||||
Matrix separate = K.D2d_calibration(p);
|
||||
CHECK(assert_equal(numerical, separate, 1e-5));
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
TEST(Cal3Fisheye, Duncalibrate2) {
|
||||
Matrix computed;
|
||||
K.uncalibrate(p, boost::none, computed);
|
||||
Matrix numerical = numericalDerivative22(uncalibrate_, K, p, 1e-7);
|
||||
CHECK(assert_equal(numerical, computed, 1e-5));
|
||||
Matrix separate = K.D2d_intrinsic(p);
|
||||
CHECK(assert_equal(numerical, separate, 1e-5));
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
TEST(Cal3Fisheye, assert_equal) { CHECK(assert_equal(K, K, 1e-5)); }
|
||||
|
||||
/* ************************************************************************* */
|
||||
TEST(Cal3Fisheye, retract) {
|
||||
Cal3Fisheye expected(K.fx() + 1, K.fy() + 2, K.skew() + 3, K.px() + 4,
|
||||
K.py() + 5, K.k1() + 6, K.k2() + 7, K.k3() + 8,
|
||||
K.k4() + 9);
|
||||
Vector d(9);
|
||||
d << 1, 2, 3, 4, 5, 6, 7, 8, 9;
|
||||
Cal3Fisheye actual = K.retract(d);
|
||||
CHECK(assert_equal(expected, actual, 1e-7));
|
||||
CHECK(assert_equal(d, K.localCoordinates(actual), 1e-7));
|
||||
}
|
||||
|
||||
/* ************************************************************************* */
|
||||
int main() {
|
||||
TestResult tr;
|
||||
|
|
|
|||
|
|
@ -215,7 +215,7 @@ struct CameraProjectionMatrix {
|
|||
private:
|
||||
const Matrix3 K_;
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
};
|
||||
|
||||
/**
|
||||
|
|
|
|||
|
|
@ -139,7 +139,7 @@ private:
|
|||
}
|
||||
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
};
|
||||
|
||||
/// traits
|
||||
|
|
@ -219,7 +219,7 @@ private:
|
|||
}
|
||||
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
};
|
||||
|
||||
/// traits
|
||||
|
|
|
|||
|
|
@ -100,7 +100,7 @@ private:
|
|||
}
|
||||
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
};
|
||||
|
||||
|
||||
|
|
@ -210,7 +210,7 @@ public:
|
|||
}
|
||||
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
GTSAM_MAKE_ALIGNED_OPERATOR_NEW
|
||||
};
|
||||
|
||||
/**
|
||||
|
|
@ -332,7 +332,7 @@ private:
|
|||
}
|
||||
|
||||
public:
|
||||
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
|
||||
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
};
// class CombinedImuFactor

@@ -171,7 +171,7 @@ private:
public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
  /// @}

}; // ConstantBias class

@@ -69,7 +69,7 @@ struct GTSAM_EXPORT PreintegratedRotationParams {
#ifdef GTSAM_USE_QUATERNIONS
  // Align if we are using Quaternions
public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
#endif
};

@@ -182,7 +182,7 @@ class GTSAM_EXPORT PreintegratedRotation {
#ifdef GTSAM_USE_QUATERNIONS
  // Align if we are using Quaternions
public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
#endif
};

@@ -214,7 +214,7 @@ class GTSAM_EXPORT PreintegrationBase {
#endif

public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
};

} /// namespace gtsam

@@ -84,7 +84,7 @@ protected:
#ifdef GTSAM_USE_QUATERNIONS
  // Align if we are using Quaternions
public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
#endif
};

@@ -141,7 +141,7 @@ private:
  }

public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
};

} /// namespace gtsam

@@ -209,7 +209,7 @@ private:
  // Alignment, see https://eigen.tuxfamily.org/dox/group__TopicStructHavingEigenMembers.html
  enum { NeedsToAlign = (sizeof(T) % 16) == 0 };
public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)
};
// ExpressionFactor

@@ -175,7 +175,7 @@ public:
  /// @}

  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW

private:

@@ -265,7 +265,7 @@ public:
    traits<X>::Print(value_, "Value");
  }

  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW

private:

@@ -331,7 +331,7 @@ public:
    return traits<X>::Local(x1,x2);
  }

  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW

private:

@@ -114,7 +114,7 @@ namespace gtsam {
  // Alignment, see https://eigen.tuxfamily.org/dox/group__TopicStructHavingEigenMembers.html
  enum { NeedsToAlign = (sizeof(T) % 16) == 0 };
public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)
};

} /// namespace gtsam

@@ -150,7 +150,7 @@ public:
    return constant_;
  }

  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
};

//-----------------------------------------------------------------------------

@@ -126,7 +126,7 @@ namespace gtsam {
  // Alignment, see https://eigen.tuxfamily.org/dox/group__TopicStructHavingEigenMembers.html
  enum { NeedsToAlign = (sizeof(VALUE) % 16) == 0 };
public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)
}; // \class BetweenFactor

/// traits

@@ -105,7 +105,7 @@ private:
  }

public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
};
// \class EssentialMatrixConstraint

@@ -81,7 +81,7 @@ public:
  }

public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
};

/**

@@ -201,7 +201,7 @@ public:
  }

public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
};
// EssentialMatrixFactor2

@@ -286,7 +286,7 @@ public:
  }

public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
};
// EssentialMatrixFactor3

@@ -189,7 +189,7 @@ namespace gtsam {
  }

public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
};

/// traits
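The hunks above swap Eigen's aligned-new macro for a GTSAM wrapper in every factor class that stores fixed-size Eigen members. The exact expansion of GTSAM_MAKE_ALIGNED_OPERATOR_NEW lives in the GTSAM base headers and is not shown in this diff, so the sketch below only illustrates the alignment requirement the macro exists to satisfy, using Eigen's own macro as a stand-in.

```cpp
// Minimal sketch (assumption: the GTSAM macro wraps Eigen's and adds
// GTSAM-specific helpers; its real body is defined in gtsam/base, not here).
#include <Eigen/Core>

class ExampleFactor {
  // A fixed-size Eigen member over-aligns the enclosing class on pre-C++17
  // compilers, so heap allocations need an aligned operator new.
  Eigen::Matrix4d hessian_;

 public:
  // Before this change the factor classes used Eigen's macro directly;
  // after it they use GTSAM_MAKE_ALIGNED_OPERATOR_NEW in the same position.
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
};
```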
@@ -10,7 +10,11 @@
#include <gtsam/geometry/CameraSet.h>
#include <gtsam/linear/JacobianFactor.h>
#include <gtsam/linear/VectorValues.h>

#include <iosfwd>
#include <map>
#include <string>
#include <vector>

namespace gtsam {

@@ -76,7 +80,7 @@ public:

  /// print
  void print(const std::string& s = "", const KeyFormatter& keyFormatter =
      DefaultKeyFormatter) const {
      DefaultKeyFormatter) const override {
    std::cout << " RegularImplicitSchurFactor " << std::endl;
    Factor::print(s);
    for (size_t pos = 0; pos < size(); ++pos) {

@@ -88,7 +92,7 @@ public:
  }

  /// equals
  bool equals(const GaussianFactor& lf, double tol) const {
  bool equals(const GaussianFactor& lf, double tol) const override {
    const This* f = dynamic_cast<const This*>(&lf);
    if (!f)
      return false;

@@ -104,37 +108,36 @@ public:
  }

  /// Degrees of freedom of camera
  virtual DenseIndex getDim(const_iterator variable) const {
  DenseIndex getDim(const_iterator variable) const override {
    return D;
  }

  virtual void updateHessian(const KeyVector& keys,
      SymmetricBlockMatrix* info) const {
  void updateHessian(const KeyVector& keys,
      SymmetricBlockMatrix* info) const override {
    throw std::runtime_error(
        "RegularImplicitSchurFactor::updateHessian non implemented");
  }
  virtual Matrix augmentedJacobian() const {
  Matrix augmentedJacobian() const override {
    throw std::runtime_error(
        "RegularImplicitSchurFactor::augmentedJacobian non implemented");
    return Matrix();
  }
  virtual std::pair<Matrix, Vector> jacobian() const {
  std::pair<Matrix, Vector> jacobian() const override {
    throw std::runtime_error(
        "RegularImplicitSchurFactor::jacobian non implemented");
    return std::make_pair(Matrix(), Vector());
  }

  /// *Compute* full augmented information matrix
  virtual Matrix augmentedInformation() const {

  Matrix augmentedInformation() const override {
    // Do the Schur complement
    SymmetricBlockMatrix augmentedHessian = //
    SymmetricBlockMatrix augmentedHessian =
        Set::SchurComplement(FBlocks_, E_, b_);
    return augmentedHessian.selfadjointView();
  }

  /// *Compute* full information matrix
  virtual Matrix information() const {
  Matrix information() const override {
    Matrix augmented = augmentedInformation();
    int m = this->keys_.size();
    size_t M = D * m;

@@ -145,7 +148,7 @@ public:
  using GaussianFactor::hessianDiagonal;

  /// Add the diagonal of the Hessian for this factor to existing VectorValues
  virtual void hessianDiagonalAdd(VectorValues &d) const override {
  void hessianDiagonalAdd(VectorValues &d) const override {
    // diag(Hessian) = diag(F' * (I - E * PointCov * E') * F);
    for (size_t k = 0; k < size(); ++k) { // for each camera
      Key j = keys_[k];

@@ -176,7 +179,7 @@ public:
   * @brief add the contribution of this factor to the diagonal of the hessian
   * d(output) = d(input) + deltaHessianFactor
   */
  virtual void hessianDiagonal(double* d) const {
  void hessianDiagonal(double* d) const override {
    // diag(Hessian) = diag(F' * (I - E * PointCov * E') * F);
    // Use eigen magic to access raw memory
    typedef Eigen::Matrix<double, D, 1> DVector;

@@ -202,7 +205,7 @@ public:
  }

  /// Return the block diagonal of the Hessian for this factor
  virtual std::map<Key, Matrix> hessianBlockDiagonal() const {
  std::map<Key, Matrix> hessianBlockDiagonal() const override {
    std::map<Key, Matrix> blocks;
    // F'*(I - E*P*E')*F
    for (size_t pos = 0; pos < size(); ++pos) {

@@ -227,17 +230,18 @@ public:
    return blocks;
  }

  virtual GaussianFactor::shared_ptr clone() const {
  GaussianFactor::shared_ptr clone() const override {
    return boost::make_shared<RegularImplicitSchurFactor<CAMERA> >(keys_,
        FBlocks_, PointCovariance_, E_, b_);
    throw std::runtime_error(
        "RegularImplicitSchurFactor::clone non implemented");
  }
  virtual bool empty() const {

  bool empty() const override {
    return false;
  }

  virtual GaussianFactor::shared_ptr negate() const {
  GaussianFactor::shared_ptr negate() const override {
    return boost::make_shared<RegularImplicitSchurFactor<CAMERA> >(keys_,
        FBlocks_, PointCovariance_, E_, b_);
    throw std::runtime_error(

@@ -288,7 +292,7 @@ public:
   * f = nonlinear error
   * (x'*H*x - 2*x'*eta + f) = x'*F'*Q*F*x - 2*x'*F'*Q *b + f = x'*F'*Q*(F*x - 2*b) + f
   */
  virtual double error(const VectorValues& x) const {
  double error(const VectorValues& x) const override {

    // resize does not do malloc if correct size
    e1.resize(size());

@@ -383,13 +387,12 @@ public:
  void multiplyHessianAdd(double alpha, const double* x, double* y,
      std::vector<size_t> keys) const {
  }
  ;

  /**
   * @brief Hessian-vector multiply, i.e. y += F'*alpha*(I - E*P*E')*F*x
   */
  void multiplyHessianAdd(double alpha, const VectorValues& x,
      VectorValues& y) const {
      VectorValues& y) const override {

    // resize does not do malloc if correct size
    e1.resize(size());

@@ -432,7 +435,7 @@ public:
  /**
   * Calculate gradient, which is -F'Q*b, see paper
   */
  VectorValues gradientAtZero() const {
  VectorValues gradientAtZero() const override {
    // calculate Q*b
    e1.resize(size());
    e2.resize(size());

@@ -454,7 +457,7 @@ public:
  /**
   * Calculate gradient, which is -F'Q*b, see paper - RAW MEMORY ACCESS
   */
  virtual void gradientAtZero(double* d) const {
  void gradientAtZero(double* d) const override {

    // Use eigen magic to access raw memory
    typedef Eigen::Matrix<double, D, 1> DVector;

@@ -474,7 +477,7 @@ public:
  }

  /// Gradient wrt a key at any values
  Vector gradient(Key key, const VectorValues& x) const {
  Vector gradient(Key key, const VectorValues& x) const override {
    throw std::runtime_error(
        "gradient for RegularImplicitSchurFactor is not implemented yet");
  }
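The hunks above consistently drop the redundant `virtual` keyword and add `override` to methods that reimplement the GaussianFactor interface. A small generic sketch (not GTSAM code) of what that buys at compile time:

```cpp
// Generic illustration only: with `override`, a signature that does not match
// the base class fails to compile instead of silently creating a new virtual.
#include <map>

struct GaussianFactorLike {
  virtual ~GaussianFactorLike() = default;
  virtual double error(const std::map<int, double>& x) const { return 0.0; }
};

struct DerivedFactor : GaussianFactorLike {
  // OK: matches the base signature exactly.
  double error(const std::map<int, double>& x) const override { return 1.0; }

  // Would not compile if uncommented, which is exactly the protection the
  // diff adds to the Schur-factor overrides:
  // double error(std::map<int, double>& x) const override { return 2.0; }
};
```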
@@ -113,7 +113,7 @@ public:
    return error;
  }

  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW
};
} // namespace gtsam

@@ -81,7 +81,7 @@ protected:
  mutable FBlocks Fs;

public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  GTSAM_MAKE_ALIGNED_OPERATOR_NEW

  /// shorthand for a smart pointer to a factor
  typedef boost::shared_ptr<This> shared_ptr;
@@ -37,6 +37,7 @@
#include <boost/assign/list_inserter.hpp>
#include <boost/filesystem/operations.hpp>
#include <boost/filesystem/path.hpp>
#include <boost/optional.hpp>

#include <cmath>
#include <fstream>

@@ -541,10 +542,16 @@ std::map<Key, Pose3> parse3DPoses(const string& filename) {
}

/* ************************************************************************* */
BetweenFactorPose3s parse3DFactors(const string& filename) {
BetweenFactorPose3s parse3DFactors(const string& filename,
    const noiseModel::Diagonal::shared_ptr& corruptingNoise) {
  ifstream is(filename.c_str());
  if (!is) throw invalid_argument("parse3DFactors: can not find file " + filename);

  boost::optional<Sampler> sampler;
  if (corruptingNoise) {
    sampler = Sampler(corruptingNoise);
  }

  std::vector<BetweenFactor<Pose3>::shared_ptr> factors;
  while (!is.eof()) {
    char buf[LINESIZE];

@@ -585,8 +592,13 @@ BetweenFactorPose3s parse3DFactors(const string& filename) {
      mgtsam.block<3, 3>(3, 0) = m.block<3, 3>(3, 0); // off diagonal

      SharedNoiseModel model = noiseModel::Gaussian::Information(mgtsam);
      auto R12 = Rot3::Quaternion(qw, qx, qy, qz);
      if (sampler) {
        R12 = R12.retract(sampler->sample());
      }

      factors.emplace_back(new BetweenFactor<Pose3>(
          id1, id2, Pose3(Rot3::Quaternion(qw, qx, qy, qz), {x, y, z}), model));
          id1, id2, Pose3(R12, {x, y, z}), model));
    }
  }
  return factors;

@@ -159,7 +159,8 @@ GTSAM_EXPORT void writeG2o(const NonlinearFactorGraph& graph,

/// Parse edges in 3D TORO graph file into a set of BetweenFactors.
using BetweenFactorPose3s = std::vector<gtsam::BetweenFactor<Pose3>::shared_ptr>;
GTSAM_EXPORT BetweenFactorPose3s parse3DFactors(const std::string& filename);
GTSAM_EXPORT BetweenFactorPose3s parse3DFactors(const std::string& filename,
    const noiseModel::Diagonal::shared_ptr& corruptingNoise=nullptr);

/// Parse vertices in 3D TORO graph file into a map of Pose3s.
GTSAM_EXPORT std::map<Key, Pose3> parse3DPoses(const std::string& filename);
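A hedged usage sketch for the extended parser follows. The file name is hypothetical, and the 3-sigma Diagonal model is an assumption based on the hunk above, which only retracts the relative rotation by the sampled vector.

```cpp
#include <gtsam/slam/dataset.h>

using namespace gtsam;

int main() {
  // Hypothetical input file; any g2o/TORO-style 3D pose graph would do.
  const std::string g2oFile = "pose3graph.txt";

  // Default behaviour is unchanged: no corrupting noise is applied.
  BetweenFactorPose3s clean = parse3DFactors(g2oFile);

  // With a Diagonal model the parser builds a Sampler and perturbs each
  // relative rotation; a 3-dof model matches the rotation-only retract
  // shown in the hunk above (assumption).
  auto corrupt = noiseModel::Diagonal::Sigmas(Vector3(0.01, 0.01, 0.01));
  BetweenFactorPose3s noisy = parse3DFactors(g2oFile, corrupt);

  return clean.size() == noisy.size() ? 0 : 1;
}
```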
@@ -9,5 +9,6 @@ set (GTSAM_USE_TBB @GTSAM_USE_TBB@)
set (GTSAM_DEFAULT_ALLOCATOR @GTSAM_DEFAULT_ALLOCATOR@)

if("@GTSAM_INSTALL_CYTHON_TOOLBOX@")
    list(APPEND GTSAM_CYTHON_INSTALL_PATH "@GTSAM_CYTHON_INSTALL_PATH@")
    list(APPEND GTSAM_EIGENCY_INSTALL_PATH "@GTSAM_EIGENCY_INSTALL_PATH@")
endif()
@@ -0,0 +1,90 @@
/**
 * @file PoseToPointFactor.hpp
 * @brief This factor can be used to track a 3D landmark over time by
 * providing local measurements of its location.
 * @author David Wisth
 **/
#pragma once

#include <gtsam/geometry/Point3.h>
#include <gtsam/geometry/Pose3.h>
#include <gtsam/nonlinear/NonlinearFactor.h>
#include <ostream>

namespace gtsam {

/**
 * A class for a measurement between a pose and a point.
 * @addtogroup SLAM
 */
class PoseToPointFactor : public NoiseModelFactor2<Pose3, Point3> {
 private:
  typedef PoseToPointFactor This;
  typedef NoiseModelFactor2<Pose3, Point3> Base;

  Point3 measured_; /** the point measurement in local coordinates */

 public:
  // shorthand for a smart pointer to a factor
  typedef boost::shared_ptr<PoseToPointFactor> shared_ptr;

  /** default constructor - only use for serialization */
  PoseToPointFactor() {}

  /** Constructor */
  PoseToPointFactor(Key key1, Key key2, const Point3& measured,
                    const SharedNoiseModel& model)
      : Base(model, key1, key2), measured_(measured) {}

  virtual ~PoseToPointFactor() {}

  /** implement functions needed for Testable */

  /** print */
  virtual void print(const std::string& s, const KeyFormatter& keyFormatter =
                                               DefaultKeyFormatter) const {
    std::cout << s << "PoseToPointFactor(" << keyFormatter(this->key1()) << ","
              << keyFormatter(this->key2()) << ")\n"
              << " measured: " << measured_.transpose() << std::endl;
    this->noiseModel_->print(" noise model: ");
  }

  /** equals */
  virtual bool equals(const NonlinearFactor& expected,
                      double tol = 1e-9) const {
    const This* e = dynamic_cast<const This*>(&expected);
    return e != nullptr && Base::equals(*e, tol) &&
           traits<Point3>::Equals(this->measured_, e->measured_, tol);
  }

  /** implement functions needed to derive from Factor */

  /** vector of errors
   * @brief Error = wTwi.inverse()*wPwp - measured_
   * @param wTwi The pose of the sensor in world coordinates
   * @param wPwp The estimated point location in world coordinates
   *
   * Note: measured_ and the error are in local coordinates.
   */
  Vector evaluateError(const Pose3& wTwi, const Point3& wPwp,
                       boost::optional<Matrix&> H1 = boost::none,
                       boost::optional<Matrix&> H2 = boost::none) const {
    return wTwi.transformTo(wPwp, H1, H2) - measured_;
  }

  /** return the measured */
  const Point3& measured() const { return measured_; }

 private:
  /** Serialization function */
  friend class boost::serialization::access;
  template <class ARCHIVE>
  void serialize(ARCHIVE& ar, const unsigned int /*version*/) {
    ar& boost::serialization::make_nvp(
        "NoiseModelFactor2", boost::serialization::base_object<Base>(*this));
    ar& BOOST_SERIALIZATION_NVP(measured_);
  }

};  // \class PoseToPointFactor

}  // namespace gtsam
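A short usage sketch for the new factor, based only on the constructor and noise dimension shown above; the keys and measurement values are illustrative.

```cpp
#include <gtsam/inference/Symbol.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam_unstable/slam/PoseToPointFactor.h>

using namespace gtsam;

int main() {
  NonlinearFactorGraph graph;

  // Illustrative keys: pose x0 observes landmark l0 at (1, 2, 3), expressed
  // in the local (sensor) frame, as documented in evaluateError above.
  const Key poseKey = Symbol('x', 0);
  const Key pointKey = Symbol('l', 0);
  const Point3 localMeasurement(1.0, 2.0, 3.0);

  auto noise = noiseModel::Isotropic::Sigma(3, 0.05);
  graph.emplace_shared<PoseToPointFactor>(poseKey, pointKey, localMeasurement,
                                          noise);

  graph.print("graph with one PoseToPointFactor:\n");
  return 0;
}
```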
@@ -0,0 +1,86 @@
/**
 * @file testPoseToPointFactor.cpp
 * @brief
 * @author David Wisth
 * @date June 20, 2020
 */

#include <CppUnitLite/TestHarness.h>
#include <gtsam/base/numericalDerivative.h>
#include <gtsam_unstable/slam/PoseToPointFactor.h>

using namespace gtsam;
using namespace gtsam::noiseModel;

/// Verify zero error when there is no noise
TEST(PoseToPointFactor, errorNoiseless) {
  Pose3 pose = Pose3::identity();
  Point3 point(1.0, 2.0, 3.0);
  Point3 noise(0.0, 0.0, 0.0);
  Point3 measured = point + noise;

  Key pose_key(1);
  Key point_key(2);
  PoseToPointFactor factor(pose_key, point_key, measured,
                           Isotropic::Sigma(3, 0.05));
  Vector expectedError = Vector3(0.0, 0.0, 0.0);
  Vector actualError = factor.evaluateError(pose, point);
  EXPECT(assert_equal(expectedError, actualError, 1E-5));
}

/// Verify expected error in test scenario
TEST(PoseToPointFactor, errorNoise) {
  Pose3 pose = Pose3::identity();
  Point3 point(1.0, 2.0, 3.0);
  Point3 noise(-1.0, 0.5, 0.3);
  Point3 measured = point + noise;

  Key pose_key(1);
  Key point_key(2);
  PoseToPointFactor factor(pose_key, point_key, measured,
                           Isotropic::Sigma(3, 0.05));
  Vector expectedError = noise;
  Vector actualError = factor.evaluateError(pose, point);
  EXPECT(assert_equal(expectedError, actualError, 1E-5));
}

/// Check Jacobians are correct
TEST(PoseToPointFactor, jacobian) {
  // Measurement
  gtsam::Point3 l_meas = gtsam::Point3(1, 2, 3);

  // Linearisation point
  gtsam::Point3 p_t = gtsam::Point3(-5, 12, 2);
  gtsam::Rot3 p_R = gtsam::Rot3::RzRyRx(1.5 * M_PI, -0.3 * M_PI, 0.4 * M_PI);
  Pose3 p(p_R, p_t);

  gtsam::Point3 l = gtsam::Point3(3, 0, 5);

  // Factor
  Key pose_key(1);
  Key point_key(2);
  SharedGaussian noise = noiseModel::Diagonal::Sigmas(Vector3(0.1, 0.1, 0.1));
  PoseToPointFactor factor(pose_key, point_key, l_meas, noise);

  // Calculate numerical derivatives
  auto f = boost::bind(&PoseToPointFactor::evaluateError, factor, _1, _2,
                       boost::none, boost::none);
  Matrix numerical_H1 = numericalDerivative21<Vector, Pose3, Point3>(f, p, l);
  Matrix numerical_H2 = numericalDerivative22<Vector, Pose3, Point3>(f, p, l);

  // Use the factor to calculate the derivative
  Matrix actual_H1;
  Matrix actual_H2;
  factor.evaluateError(p, l, actual_H1, actual_H2);

  // Verify we get the expected error
  EXPECT(assert_equal(numerical_H1, actual_H1, 1e-8));
  EXPECT(assert_equal(numerical_H2, actual_H2, 1e-8));
}

/* ************************************************************************* */
int main() {
  TestResult tr;
  return TestRegistry::runAllTests(tr);
}
/* ************************************************************************* */
@@ -1,44 +0,0 @@
# How to build Debian and Ubuntu Packages

## Preparations

Packages must be signed with a GPG key. First have a look at the keys
you have available:

    gpg --list-secret-keys

If you don't have one, create one, then list again.

Pick a secret key you like from the listed keys, for instance
"Your Name <your.email@yourprovider.com>". Then unlock that key by
signing a dummy file. The following line should pop up a window to
enter the passphrase:

    echo | gpg --local-user "Your Name <your.email@yourprovider.com>" -s >/dev/null

Now you can run the below scripts. Without this step they will fail
with "No secret key" or similar messages.

## How to generate a Debian package

Run the package script, providing a name/email that matches your PGP key.

    cd [GTSAM_SOURCE_ROOT]
    bash package_scripts/prepare_debian.sh -e "Your Name <your.email@yourprovider.com>"

## How to generate Ubuntu packages for a PPA

Run the packaging script, passing the name of the gpg key
(see above) with the "-e" option:

    cd [GTSAM_SOURCE_ROOT]
    bash package_scripts/prepare_ubuntu_pkgs_for_ppa.sh -e "Your Name <your.email@yourprovider.com>"

Check that you have uploaded this key to the ubuntu key server, and
have added the key to your account.

Upload the package to your ppa:

    cd ~/gtsam_ubuntu
    bash [GTSAM_SOURCE_ROOT]/package_scripts/upload_all_gtsam_ppa.sh -p "ppa:your-name/ppa-name"
@@ -1,8 +0,0 @@
#!/bin/sh

# Compile boost statically, with -fPIC to allow linking it into the mex
# module (which is a dynamic library). --disable-icu prevents depending
# on libicu, which is unneeded and would require then linking the mex
# module with it as well. We just stage instead of install, then the
# toolbox_package_unix.sh script uses the staged boost.
./b2 link=static threading=multi cxxflags=-fPIC cflags=-fPIC --disable-icu -a -j4 stage
@ -1,187 +0,0 @@
|
|||
#!/bin/bash
|
||||
# Prepare to build a Debian package.
|
||||
# Jose Luis Blanco Claraco, 2019 (for GTSAM)
|
||||
# Jose Luis Blanco Claraco, 2008-2018 (for MRPT)
|
||||
|
||||
set -e # end on error
|
||||
#set -x # for debugging
|
||||
|
||||
APPEND_SNAPSHOT_NUM=0
|
||||
IS_FOR_UBUNTU=0
|
||||
APPEND_LINUX_DISTRO=""
|
||||
VALUE_EXTRA_CMAKE_PARAMS=""
|
||||
while getopts "sud:c:e:" OPTION
|
||||
do
|
||||
case $OPTION in
|
||||
s)
|
||||
APPEND_SNAPSHOT_NUM=1
|
||||
;;
|
||||
u)
|
||||
IS_FOR_UBUNTU=1
|
||||
;;
|
||||
d)
|
||||
APPEND_LINUX_DISTRO=$OPTARG
|
||||
;;
|
||||
c)
|
||||
VALUE_EXTRA_CMAKE_PARAMS=$OPTARG
|
||||
;;
|
||||
e)
|
||||
PACKAGER_EMAIL=$OPTARG
|
||||
;;
|
||||
?)
|
||||
echo "Unknown command line argument!"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
if [ -z ${PACKAGER_EMAIL+x} ]; then
|
||||
echo "must specify packager email via -e option!"
|
||||
exit -1
|
||||
fi
|
||||
|
||||
if [ -f CMakeLists.txt ];
|
||||
then
|
||||
source package_scripts/prepare_debian_gen_snapshot_version.sh
|
||||
else
|
||||
echo "Error: cannot find CMakeList.txt. This script is intended to be run from the root of the source tree."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Append snapshot?
|
||||
if [ $APPEND_SNAPSHOT_NUM == "1" ];
|
||||
then
|
||||
CUR_SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
|
||||
source $CUR_SCRIPT_DIR/prepare_debian_gen_snapshot_version.sh # populate GTSAM_SNAPSHOT_VERSION
|
||||
|
||||
GTSAM_VERSION_STR="${GTSAM_VERSION_STR}~snapshot${GTSAM_SNAPSHOT_VERSION}${APPEND_LINUX_DISTRO}"
|
||||
else
|
||||
GTSAM_VERSION_STR="${GTSAM_VERSION_STR}${APPEND_LINUX_DISTRO}"
|
||||
fi
|
||||
|
||||
# Call prepare_release
|
||||
GTSAMSRC=`pwd`
|
||||
|
||||
if [ -f $HOME/gtsam_release/gtsam*.tar.gz ];
|
||||
then
|
||||
echo "## release file already exists. Reusing it."
|
||||
else
|
||||
source package_scripts/prepare_release.sh
|
||||
echo
|
||||
echo "## Done prepare_release.sh"
|
||||
fi
|
||||
|
||||
echo "=========== Generating GTSAM ${GTSAM_VER_MMP} Debian package =============="
|
||||
cd $GTSAMSRC
|
||||
|
||||
set -x
|
||||
if [ -z "$GTSAM_DEB_DIR" ]; then
|
||||
GTSAM_DEB_DIR="$HOME/gtsam_debian"
|
||||
fi
|
||||
GTSAM_EXTERN_DEBIAN_DIR="$GTSAMSRC/debian/"
|
||||
GTSAM_EXTERN_UBUNTU_PPA_DIR="$GTSAMSRC/debian/"
|
||||
|
||||
if [ -f ${GTSAM_EXTERN_DEBIAN_DIR}/control ];
|
||||
then
|
||||
echo "Using debian dir: ${GTSAM_EXTERN_DEBIAN_DIR}"
|
||||
else
|
||||
echo "ERROR: Cannot find ${GTSAM_EXTERN_DEBIAN_DIR}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
GTSAM_DEBSRC_DIR=$GTSAM_DEB_DIR/gtsam-${GTSAM_VERSION_STR}
|
||||
|
||||
echo "GTSAM_VERSION_STR: ${GTSAM_VERSION_STR}"
|
||||
echo "GTSAM_DEBSRC_DIR: ${GTSAM_DEBSRC_DIR}"
|
||||
|
||||
# Prepare a directory for building the debian package:
|
||||
#
|
||||
rm -fR $GTSAM_DEB_DIR || true
|
||||
mkdir -p $GTSAM_DEB_DIR || true
|
||||
|
||||
# Orig tarball:
|
||||
echo "Copying orig tarball: gtsam_${GTSAM_VERSION_STR}.orig.tar.gz"
|
||||
cp $HOME/gtsam_release/gtsam*.tar.gz $GTSAM_DEB_DIR/gtsam_${GTSAM_VERSION_STR}.orig.tar.gz
|
||||
cd ${GTSAM_DEB_DIR}
|
||||
tar -xf gtsam_${GTSAM_VERSION_STR}.orig.tar.gz
|
||||
|
||||
if [ ! -d "${GTSAM_DEBSRC_DIR}" ];
|
||||
then
|
||||
mv gtsam-* ${GTSAM_DEBSRC_DIR} # fix different dir names for Ubuntu PPA packages
|
||||
fi
|
||||
|
||||
if [ ! -f "${GTSAM_DEBSRC_DIR}/CMakeLists.txt" ];
|
||||
then
|
||||
echo "*ERROR*: Seems there was a problem copying sources to ${GTSAM_DEBSRC_DIR}... aborting script."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
cd ${GTSAM_DEBSRC_DIR}
|
||||
|
||||
# Copy debian directory:
|
||||
#mkdir debian
|
||||
cp -r ${GTSAM_EXTERN_DEBIAN_DIR}/* debian
|
||||
|
||||
# Use modified control & rules files for Ubuntu PPA packages:
|
||||
#if [ $IS_FOR_UBUNTU == "1" ];
|
||||
#then
|
||||
# already done: cp ${GTSAM_EXTERN_UBUNTU_PPA_DIR}/control.in debian/
|
||||
# Ubuntu: force use of gcc-7:
|
||||
#sed -i '9i\export CXX=/usr/bin/g++-7\' debian/rules
|
||||
#sed -i '9i\export CC=/usr/bin/gcc-7\' debian/rules7
|
||||
#fi
|
||||
|
||||
# Export signing pub key:
|
||||
mkdir debian/upstream/
|
||||
gpg --export --export-options export-minimal --armor > debian/upstream/signing-key.asc
|
||||
|
||||
# Parse debian/ control.in --> control
|
||||
#mv debian/control.in debian/control
|
||||
#sed -i "s/@GTSAM_VER_MM@/${GTSAM_VER_MM}/g" debian/control
|
||||
|
||||
# Replace the text "REPLACE_HERE_EXTRA_CMAKE_PARAMS" in the "debian/rules" file
|
||||
# with: ${${VALUE_EXTRA_CMAKE_PARAMS}}
|
||||
RULES_FILE=debian/rules
|
||||
sed -i -e "s/REPLACE_HERE_EXTRA_CMAKE_PARAMS/${VALUE_EXTRA_CMAKE_PARAMS}/g" $RULES_FILE
|
||||
echo "Using these extra parameters for CMake: '${VALUE_EXTRA_CMAKE_PARAMS}'"
|
||||
|
||||
# Strip my custom files...
|
||||
rm debian/*.new || true
|
||||
|
||||
|
||||
# Figure out the next Debian version number:
|
||||
echo "Detecting next Debian version number..."
|
||||
|
||||
CHANGELOG_UPSTREAM_VER=$( dpkg-parsechangelog | sed -n 's/Version:.*\([0-9]\.[0-9]*\.[0-9]*.*snapshot.*\)-.*/\1/p' )
|
||||
CHANGELOG_LAST_DEBIAN_VER=$( dpkg-parsechangelog | sed -n 's/Version:.*\([0-9]\.[0-9]*\.[0-9]*\).*-\([0-9]*\).*/\2/p' )
|
||||
|
||||
echo " -> PREVIOUS UPSTREAM: $CHANGELOG_UPSTREAM_VER -> New: ${GTSAM_VERSION_STR}"
|
||||
echo " -> PREVIOUS DEBIAN VERSION: $CHANGELOG_LAST_DEBIAN_VER"
|
||||
|
||||
# If we have the same upstream versions, increase the Debian version, otherwise create a new entry:
|
||||
if [ "$CHANGELOG_UPSTREAM_VER" = "$GTSAM_VERSION_STR" ];
|
||||
then
|
||||
NEW_DEBIAN_VER=$[$CHANGELOG_LAST_DEBIAN_VER + 1]
|
||||
echo "Changing to a new Debian version: ${GTSAM_VERSION_STR}-${NEW_DEBIAN_VER}"
|
||||
DEBCHANGE_CMD="--newversion ${GTSAM_VERSION_STR}-${NEW_DEBIAN_VER}"
|
||||
else
|
||||
DEBCHANGE_CMD="--newversion ${GTSAM_VERSION_STR}-1"
|
||||
fi
|
||||
|
||||
echo "Adding a new entry to debian/changelog..."
|
||||
|
||||
DEBEMAIL=${PACKAGER_EMAIL} debchange $DEBCHANGE_CMD -b --distribution unstable --force-distribution New version of upstream sources.
|
||||
|
||||
echo "Copying back the new changelog to a temporary file in: ${GTSAM_EXTERN_DEBIAN_DIR}changelog.new"
|
||||
cp debian/changelog ${GTSAM_EXTERN_DEBIAN_DIR}changelog.new
|
||||
|
||||
set +x
|
||||
|
||||
echo "=============================================================="
|
||||
echo "Now, you can build the source Deb package with 'debuild -S -sa'"
|
||||
echo "=============================================================="
|
||||
|
||||
cd ..
|
||||
ls -lh
|
||||
|
||||
exit 0
|
||||
|
|
@ -1,25 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# See https://reproducible-builds.org/specs/source-date-epoch/
|
||||
# get SOURCE_DATE_EPOCH with UNIX time_t
|
||||
if [ -d ".git" ];
|
||||
then
|
||||
SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
|
||||
else
|
||||
echo "Error: intended for use from within a git repository"
|
||||
exit 1
|
||||
fi
|
||||
GTSAM_SNAPSHOT_VERSION=$(date -d @$SOURCE_DATE_EPOCH +%Y%m%d-%H%M)
|
||||
|
||||
GTSAM_SNAPSHOT_VERSION+="-git-"
|
||||
GTSAM_SNAPSHOT_VERSION+=`git rev-parse --short=8 HEAD`
|
||||
GTSAM_SNAPSHOT_VERSION+="-"
|
||||
|
||||
# x.y.z version components:
|
||||
GTSAM_VERSION_MAJOR=$(grep "(GTSAM_VERSION_MAJOR" CMakeLists.txt | sed -r 's/^.*GTSAM_VERSION_MAJOR\s*([0-9])*.*$/\1/g')
|
||||
GTSAM_VERSION_MINOR=$(grep "(GTSAM_VERSION_MINOR" CMakeLists.txt | sed -r 's/^.*GTSAM_VERSION_MINOR\s*([0-9])*.*$/\1/g')
|
||||
GTSAM_VERSION_PATCH=$(grep "(GTSAM_VERSION_PATCH" CMakeLists.txt | sed -r 's/^.*GTSAM_VERSION_PATCH\s*([0-9])*.*$/\1/g')
|
||||
|
||||
GTSAM_VER_MM="${GTSAM_VERSION_MAJOR}.${GTSAM_VERSION_MINOR}"
|
||||
GTSAM_VER_MMP="${GTSAM_VERSION_MAJOR}.${GTSAM_VERSION_MINOR}.${GTSAM_VERSION_PATCH}"
|
||||
GTSAM_VERSION_STR=$GTSAM_VER_MMP
|
||||
|
|
@ -1,71 +0,0 @@
|
|||
#!/bin/bash
|
||||
# Export sources from a git tree and prepare it for a public release.
|
||||
# Jose Luis Blanco Claraco, 2019 (for GTSAM)
|
||||
# Jose Luis Blanco Claraco, 2008-2018 (for MRPT)
|
||||
|
||||
set -e # exit on error
|
||||
#set -x # for debugging
|
||||
|
||||
# Checks
|
||||
# --------------------------------
|
||||
if [ -f version_prefix.txt ];
|
||||
then
|
||||
if [ -z ${GTSAM_VERSION_STR+x} ];
|
||||
then
|
||||
source package_scripts/prepare_debian_gen_snapshot_version.sh
|
||||
fi
|
||||
echo "ERROR: Run this script from the GTSAM source tree root directory."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
GTSAM_SRC=`pwd`
|
||||
OUT_RELEASES_DIR="$HOME/gtsam_release"
|
||||
|
||||
OUT_DIR=$OUT_RELEASES_DIR/gtsam-${GTSAM_VERSION_STR}
|
||||
|
||||
echo "=========== Generating GTSAM release ${GTSAM_VER_MMP} =================="
|
||||
echo "GTSAM_VERSION_STR : ${GTSAM_VERSION_STR}"
|
||||
echo "OUT_DIR : ${OUT_DIR}"
|
||||
echo "============================================================"
|
||||
echo
|
||||
|
||||
# Prepare output directory:
|
||||
rm -fR $OUT_RELEASES_DIR || true
|
||||
mkdir -p ${OUT_DIR}
|
||||
|
||||
# Export / copy sources to target dir:
|
||||
if [ -d "$GTSAM_SRC/.git" ];
|
||||
then
|
||||
echo "# Exporting git source tree to ${OUT_DIR}"
|
||||
git archive --format=tar HEAD | tar -x -C ${OUT_DIR}
|
||||
|
||||
# Remove VCS control files:
|
||||
find ${OUT_DIR} -name '.gitignore' | xargs rm
|
||||
|
||||
# Generate ./SOURCE_DATE_EPOCH with UNIX time_t
|
||||
SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
|
||||
else
|
||||
echo "# Copying sources to ${OUT_DIR}"
|
||||
cp -R . ${OUT_DIR}
|
||||
|
||||
# Generate ./SOURCE_DATE_EPOCH with UNIX time_t
|
||||
SOURCE_DATE_EPOCH=$(date +%s)
|
||||
fi
|
||||
|
||||
# See https://reproducible-builds.org/specs/source-date-epoch/
|
||||
echo $SOURCE_DATE_EPOCH > ${OUT_DIR}/SOURCE_DATE_EPOCH
|
||||
|
||||
cd ${OUT_DIR}
|
||||
|
||||
# Dont include Debian files in releases:
|
||||
rm -fR package_scripts
|
||||
|
||||
# Orig tarball:
|
||||
cd ..
|
||||
echo "# Creating orig tarball: gtsam-${GTSAM_VERSION_STR}.tar.gz"
|
||||
tar czf gtsam-${GTSAM_VERSION_STR}.tar.gz gtsam-${GTSAM_VERSION_STR}
|
||||
|
||||
rm -fr gtsam-${GTSAM_VERSION_STR}
|
||||
|
||||
# GPG signature:
|
||||
gpg --armor --detach-sign gtsam-${GTSAM_VERSION_STR}.tar.gz
|
||||
|
|
@ -1,123 +0,0 @@
|
|||
#!/bin/bash
|
||||
# Creates a set of packages for each different Ubuntu distribution, with the
|
||||
# intention of uploading them to a PPA on launchpad
|
||||
#
|
||||
# JLBC, 2010
|
||||
# [Addition 2012:]
|
||||
#
|
||||
# You can declare a variable (in the caller shell) with extra flags for the
|
||||
# CMake in the final ./configure like:
|
||||
#
|
||||
# GTSAM_PKG_CUSTOM_CMAKE_PARAMS="\"-DDISABLE_SSE3=ON\""
|
||||
#
|
||||
|
||||
|
||||
function show_help {
|
||||
echo "USAGE:"
|
||||
echo ""
|
||||
echo "- to display this help: "
|
||||
echo "prepare_ubuntu_packages_for_ppa.sh -h or -?"
|
||||
echo ""
|
||||
echo "- to package to your PPA: "
|
||||
echo "prepare_ubuntu_packages_for_ppa.sh -e email_of_your_gpg_key"
|
||||
echo ""
|
||||
echo "to pass custom config for GTSAM, set the following"
|
||||
echo "environment variable beforehand: "
|
||||
echo ""
|
||||
echo "GTSAM_PKG_CUSTOM_CMAKE_PARAMS=\"\"-DDISABLE_SSE3=ON\"\""
|
||||
echo ""
|
||||
}
|
||||
|
||||
while getopts "h?e:" opt; do
|
||||
case "$opt" in
|
||||
h|\?)
|
||||
show_help
|
||||
exit 0
|
||||
;;
|
||||
e) PACKAGER_EMAIL=$OPTARG
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
if [ -z ${PACKAGER_EMAIL+x} ]; then
|
||||
show_help
|
||||
exit -1
|
||||
fi
|
||||
|
||||
|
||||
set -e
|
||||
|
||||
# List of distributions to create PPA packages for:
|
||||
LST_DISTROS=(xenial bionic eoan focal)
|
||||
|
||||
# Checks
|
||||
# --------------------------------
|
||||
if [ -f CMakeLists.txt ];
|
||||
then
|
||||
source package_scripts/prepare_debian_gen_snapshot_version.sh
|
||||
echo "GTSAM version: ${GTSAM_VER_MMP}"
|
||||
else
|
||||
echo "ERROR: Run this script from the GTSAM root directory."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ -z "${gtsam_ubuntu_OUT_DIR}" ]; then
|
||||
export gtsam_ubuntu_OUT_DIR="$HOME/gtsam_ubuntu"
|
||||
fi
|
||||
GTSAMSRC=`pwd`
|
||||
if [ -z "${GTSAM_DEB_DIR}" ]; then
|
||||
export GTSAM_DEB_DIR="$HOME/gtsam_debian"
|
||||
fi
|
||||
GTSAM_EXTERN_DEBIAN_DIR="$GTSAMSRC/debian/"
|
||||
|
||||
# Clean out dirs:
|
||||
rm -fr $gtsam_ubuntu_OUT_DIR/
|
||||
|
||||
# -------------------------------------------------------------------
|
||||
# And now create the custom packages for each Ubuntu distribution:
|
||||
# -------------------------------------------------------------------
|
||||
count=${#LST_DISTROS[@]}
|
||||
IDXS=$(seq 0 $(expr $count - 1))
|
||||
|
||||
cp ${GTSAM_EXTERN_DEBIAN_DIR}/changelog /tmp/my_changelog
|
||||
|
||||
for IDX in ${IDXS};
|
||||
do
|
||||
DEBIAN_DIST=${LST_DISTROS[$IDX]}
|
||||
|
||||
# -------------------------------------------------------------------
|
||||
# Call the standard "prepare_debian.sh" script:
|
||||
# -------------------------------------------------------------------
|
||||
cd ${GTSAMSRC}
|
||||
bash package_scripts/prepare_debian.sh -e "$PACKAGER_EMAIL" -s -u -d ${DEBIAN_DIST} -c "${GTSAM_PKG_CUSTOM_CMAKE_PARAMS}"
|
||||
|
||||
CUR_SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
|
||||
source $CUR_SCRIPT_DIR/prepare_debian_gen_snapshot_version.sh # populate GTSAM_SNAPSHOT_VERSION
|
||||
|
||||
echo "===== Distribution: ${DEBIAN_DIST} ========="
|
||||
cd ${GTSAM_DEB_DIR}/gtsam-${GTSAM_VER_MMP}~snapshot${GTSAM_SNAPSHOT_VERSION}${DEBIAN_DIST}/debian
|
||||
#cp ${GTSAM_EXTERN_DEBIAN_DIR}/changelog changelog
|
||||
cp /tmp/my_changelog changelog
|
||||
DEBCHANGE_CMD="--newversion ${GTSAM_VERSION_STR}~snapshot${GTSAM_SNAPSHOT_VERSION}${DEBIAN_DIST}-1"
|
||||
echo "Changing to a new Debian version: ${DEBCHANGE_CMD}"
|
||||
echo "Adding a new entry to debian/changelog for distribution ${DEBIAN_DIST}"
|
||||
DEBEMAIL="${PACKAGER_EMAIL}" debchange $DEBCHANGE_CMD -b --distribution ${DEBIAN_DIST} --force-distribution New version of upstream sources.
|
||||
|
||||
cp changelog /tmp/my_changelog
|
||||
|
||||
echo "Now, let's build the source Deb package with 'debuild -S -sa':"
|
||||
cd ..
|
||||
# -S: source package
|
||||
# -sa: force inclusion of sources
|
||||
# -d: don't check dependencies in this system
|
||||
debuild -S -sa -d
|
||||
|
||||
# Make a copy of all these packages:
|
||||
cd ..
|
||||
mkdir -p $gtsam_ubuntu_OUT_DIR/$DEBIAN_DIST
|
||||
cp gtsam_* $gtsam_ubuntu_OUT_DIR/${DEBIAN_DIST}/
|
||||
echo ">>>>>> Saving packages to: $gtsam_ubuntu_OUT_DIR/$DEBIAN_DIST/"
|
||||
done
|
||||
|
||||
|
||||
exit 0
|
||||
|
|
@ -1,64 +0,0 @@
|
|||
#!/bin/sh
|
||||
|
||||
# Script to build a tarball with the matlab toolbox
|
||||
|
||||
# Detect platform
|
||||
os=`uname -s`
|
||||
arch=`uname -m`
|
||||
if [ "$os" = "Linux" -a "$arch" = "x86_64" ]; then
|
||||
platform=lin64
|
||||
elif [ "$os" = "Linux" -a "$arch" = "i686" ]; then
|
||||
platform=lin32
|
||||
elif [ "$os" = "Darwin" -a "$arch" = "x86_64" ]; then
|
||||
platform=mac64
|
||||
else
|
||||
echo "Unrecognized platform"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Platform is ${platform}"
|
||||
|
||||
# Check for empty diectory
|
||||
if [ ! -z "`ls`" ]; then
|
||||
echo "Please run this script from an empty build directory"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check for boost
|
||||
if [ -z "$1" ]; then
|
||||
echo "Usage: $0 BOOSTTREE"
|
||||
echo "BOOSTTREE should be a boost source tree compiled with toolbox_build_boost."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Run cmake
|
||||
cmake -DCMAKE_BUILD_TYPE=Release \
|
||||
-DGTSAM_INSTALL_MATLAB_TOOLBOX:BOOL=ON \
|
||||
-DCMAKE_INSTALL_PREFIX="$PWD/stage" \
|
||||
-DBoost_NO_SYSTEM_PATHS:BOOL=ON \
|
||||
-DBoost_USE_STATIC_LIBS:BOOL=ON \
|
||||
-DBOOST_ROOT="$1" \
|
||||
-DGTSAM_BUILD_TESTS:BOOL=OFF \
|
||||
-DGTSAM_BUILD_TIMING:BOOL=OFF \
|
||||
-DGTSAM_BUILD_EXAMPLES_ALWAYS:BOOL=OFF \
|
||||
-DGTSAM_WITH_TBB:BOOL=OFF \
|
||||
-DGTSAM_SUPPORT_NESTED_DISSECTION:BOOL=OFF \
|
||||
-DGTSAM_INSTALL_GEOGRAPHICLIB:BOOL=OFF \
|
||||
-DGTSAM_BUILD_UNSTABLE:BOOL=OFF \
|
||||
-DGTSAM_MEX_BUILD_STATIC_MODULE:BOOL=ON ..
|
||||
|
||||
if [ $? -ne 0 ]; then
|
||||
echo "CMake failed"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Compile
|
||||
make -j8 install
|
||||
|
||||
if [ $? -ne 0 ]; then
|
||||
echo "Compile failed"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Create package
|
||||
tar czf gtsam-toolbox-3.2.0-$platform.tgz -C stage/gtsam_toolbox toolbox
|
||||
|
|
@ -1,31 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
function show_help {
|
||||
echo "USAGE:"
|
||||
echo ""
|
||||
echo "- to display this help: "
|
||||
echo "upload_all_gtsam_ppa.sh -h or -?"
|
||||
echo ""
|
||||
echo "- to upload to your PPA: "
|
||||
echo "upload_all_gtsam_ppa.sh -p ppa:your_name/name_of_ppa"
|
||||
echo ""
|
||||
}
|
||||
|
||||
while getopts "h?p:" opt; do
|
||||
case "$opt" in
|
||||
h|\?)
|
||||
show_help
|
||||
exit 0
|
||||
;;
|
||||
p) ppa_name=$OPTARG
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
if [ -z ${ppa_name+x} ]; then
|
||||
show_help
|
||||
exit -1
|
||||
fi
|
||||
|
||||
|
||||
find . -name '*.changes' | xargs -I FIL dput ${ppa_name} FIL
|
||||
|
|
@@ -350,7 +350,10 @@ void Module::emit_cython_pxd(FileWriter& pxdFile) const {
      "        T* get()\n"
      "        long use_count() const\n"
      "        T& operator*()\n\n"
      "    cdef shared_ptr[T] dynamic_pointer_cast[T,U](const shared_ptr[U]& r)\n"
      "    cdef shared_ptr[T] dynamic_pointer_cast[T,U](const shared_ptr[U]& r)\n\n";

  // gtsam alignment-friendly shared_ptr
  pxdFile.oss << "cdef extern from \"gtsam/base/make_shared.h\" namespace \"gtsam\":\n"
      "    cdef shared_ptr[T] make_shared[T](const T& r)\n\n";

  for(const TypedefPair& types: typedefs)

@@ -1 +0,0 @@
Subproject commit b3bf248eec9cad8260753c982e1ae6cb72fff470
@ -0,0 +1,70 @@
|
|||
version: 1.0.{build}
|
||||
image:
|
||||
- Visual Studio 2017
|
||||
- Visual Studio 2015
|
||||
test: off
|
||||
skip_branch_with_pr: true
|
||||
build:
|
||||
parallel: true
|
||||
platform:
|
||||
- x64
|
||||
- x86
|
||||
environment:
|
||||
matrix:
|
||||
- PYTHON: 36
|
||||
CPP: 14
|
||||
CONFIG: Debug
|
||||
- PYTHON: 27
|
||||
CPP: 14
|
||||
CONFIG: Debug
|
||||
- CONDA: 36
|
||||
CPP: latest
|
||||
CONFIG: Release
|
||||
matrix:
|
||||
exclude:
|
||||
- image: Visual Studio 2015
|
||||
platform: x86
|
||||
- image: Visual Studio 2015
|
||||
CPP: latest
|
||||
- image: Visual Studio 2017
|
||||
CPP: latest
|
||||
platform: x86
|
||||
install:
|
||||
- ps: |
|
||||
if ($env:PLATFORM -eq "x64") { $env:CMAKE_ARCH = "x64" }
|
||||
if ($env:APPVEYOR_JOB_NAME -like "*Visual Studio 2017*") {
|
||||
$env:CMAKE_GENERATOR = "Visual Studio 15 2017"
|
||||
$env:CMAKE_INCLUDE_PATH = "C:\Libraries\boost_1_64_0"
|
||||
$env:CXXFLAGS = "-permissive-"
|
||||
} else {
|
||||
$env:CMAKE_GENERATOR = "Visual Studio 14 2015"
|
||||
}
|
||||
if ($env:PYTHON) {
|
||||
if ($env:PLATFORM -eq "x64") { $env:PYTHON = "$env:PYTHON-x64" }
|
||||
$env:PATH = "C:\Python$env:PYTHON\;C:\Python$env:PYTHON\Scripts\;$env:PATH"
|
||||
python -W ignore -m pip install --upgrade pip wheel
|
||||
python -W ignore -m pip install pytest numpy --no-warn-script-location
|
||||
} elseif ($env:CONDA) {
|
||||
if ($env:CONDA -eq "27") { $env:CONDA = "" }
|
||||
if ($env:PLATFORM -eq "x64") { $env:CONDA = "$env:CONDA-x64" }
|
||||
$env:PATH = "C:\Miniconda$env:CONDA\;C:\Miniconda$env:CONDA\Scripts\;$env:PATH"
|
||||
$env:PYTHONHOME = "C:\Miniconda$env:CONDA"
|
||||
conda --version
|
||||
conda install -y -q pytest numpy scipy
|
||||
}
|
||||
- ps: |
|
||||
Start-FileDownload 'http://bitbucket.org/eigen/eigen/get/3.3.3.zip'
|
||||
7z x 3.3.3.zip -y > $null
|
||||
$env:CMAKE_INCLUDE_PATH = "eigen-eigen-67e894c6cd8f;$env:CMAKE_INCLUDE_PATH"
|
||||
build_script:
|
||||
- cmake -G "%CMAKE_GENERATOR%" -A "%CMAKE_ARCH%"
|
||||
-DPYBIND11_CPP_STANDARD=/std:c++%CPP%
|
||||
-DPYBIND11_WERROR=ON
|
||||
-DDOWNLOAD_CATCH=ON
|
||||
-DCMAKE_SUPPRESS_REGENERATION=1
|
||||
.
|
||||
- set MSBuildLogger="C:\Program Files\AppVeyor\BuildAgent\Appveyor.MSBuildLogger.dll"
|
||||
- cmake --build . --config %CONFIG% --target pytest -- /m /v:m /logger:%MSBuildLogger%
|
||||
- cmake --build . --config %CONFIG% --target cpptest -- /m /v:m /logger:%MSBuildLogger%
|
||||
- if "%CPP%"=="latest" (cmake --build . --config %CONFIG% --target test_cmake_build -- /m /v:m /logger:%MSBuildLogger%)
|
||||
on_failure: if exist "tests\test_cmake_build" type tests\test_cmake_build\*.log*
|
||||
|
|
@ -0,0 +1,38 @@
|
|||
CMakeCache.txt
|
||||
CMakeFiles
|
||||
Makefile
|
||||
cmake_install.cmake
|
||||
.DS_Store
|
||||
*.so
|
||||
*.pyd
|
||||
*.dll
|
||||
*.sln
|
||||
*.sdf
|
||||
*.opensdf
|
||||
*.vcxproj
|
||||
*.filters
|
||||
example.dir
|
||||
Win32
|
||||
x64
|
||||
Release
|
||||
Debug
|
||||
.vs
|
||||
CTestTestfile.cmake
|
||||
Testing
|
||||
autogen
|
||||
MANIFEST
|
||||
/.ninja_*
|
||||
/*.ninja
|
||||
/docs/.build
|
||||
*.py[co]
|
||||
*.egg-info
|
||||
*~
|
||||
.*.swp
|
||||
.DS_Store
|
||||
/dist
|
||||
/build
|
||||
/cmake/
|
||||
.cache/
|
||||
sosize-*.txt
|
||||
pybind11Config*.cmake
|
||||
pybind11Targets.cmake
|
||||
|
|
@@ -0,0 +1,3 @@
[submodule "tools/clang"]
    path = tools/clang
    url = ../../wjakob/clang-cindex-python3

@@ -0,0 +1,3 @@
python:
    version: 3
    requirements_file: docs/requirements.txt
@ -0,0 +1,280 @@
|
|||
language: cpp
|
||||
matrix:
|
||||
include:
|
||||
# This config does a few things:
|
||||
# - Checks C++ and Python code styles (check-style.sh and flake8).
|
||||
# - Makes sure sphinx can build the docs without any errors or warnings.
|
||||
# - Tests setup.py sdist and install (all header files should be present).
|
||||
# - Makes sure that everything still works without optional deps (numpy/scipy/eigen) and
|
||||
# also tests the automatic discovery functions in CMake (Python version, C++ standard).
|
||||
- os: linux
|
||||
dist: xenial # Necessary to run doxygen 1.8.15
|
||||
name: Style, docs, and pip
|
||||
cache: false
|
||||
before_install:
|
||||
- pyenv global $(pyenv whence 2to3) # activate all python versions
|
||||
- PY_CMD=python3
|
||||
- $PY_CMD -m pip install --user --upgrade pip wheel setuptools
|
||||
install:
|
||||
- $PY_CMD -m pip install --user --upgrade sphinx sphinx_rtd_theme breathe flake8 pep8-naming pytest
|
||||
- curl -fsSL https://sourceforge.net/projects/doxygen/files/rel-1.8.15/doxygen-1.8.15.linux.bin.tar.gz/download | tar xz
|
||||
- export PATH="$PWD/doxygen-1.8.15/bin:$PATH"
|
||||
script:
|
||||
- tools/check-style.sh
|
||||
- flake8
|
||||
- $PY_CMD -m sphinx -W -b html docs docs/.build
|
||||
- |
|
||||
# Make sure setup.py distributes and installs all the headers
|
||||
$PY_CMD setup.py sdist
|
||||
$PY_CMD -m pip install --user -U ./dist/*
|
||||
installed=$($PY_CMD -c "import pybind11; print(pybind11.get_include(True) + '/pybind11')")
|
||||
diff -rq $installed ./include/pybind11
|
||||
- |
|
||||
# Barebones build
|
||||
cmake -DCMAKE_BUILD_TYPE=Debug -DPYBIND11_WERROR=ON -DDOWNLOAD_CATCH=ON -DPYTHON_EXECUTABLE=$(which $PY_CMD) .
|
||||
make pytest -j 2
|
||||
make cpptest -j 2
|
||||
# The following are regular test configurations, including optional dependencies.
|
||||
# With regard to each other they differ in Python version, C++ standard and compiler.
|
||||
- os: linux
|
||||
dist: trusty
|
||||
name: Python 2.7, c++11, gcc 4.8
|
||||
env: PYTHON=2.7 CPP=11 GCC=4.8
|
||||
addons:
|
||||
apt:
|
||||
packages:
|
||||
- cmake=2.\*
|
||||
- cmake-data=2.\*
|
||||
- os: linux
|
||||
dist: trusty
|
||||
name: Python 3.6, c++11, gcc 4.8
|
||||
env: PYTHON=3.6 CPP=11 GCC=4.8
|
||||
addons:
|
||||
apt:
|
||||
sources:
|
||||
- deadsnakes
|
||||
packages:
|
||||
- python3.6-dev
|
||||
- python3.6-venv
|
||||
- cmake=2.\*
|
||||
- cmake-data=2.\*
|
||||
- os: linux
|
||||
dist: trusty
|
||||
env: PYTHON=2.7 CPP=14 GCC=6 CMAKE=1
|
||||
name: Python 2.7, c++14, gcc 4.8, CMake test
|
||||
addons:
|
||||
apt:
|
||||
sources:
|
||||
- ubuntu-toolchain-r-test
|
||||
packages:
|
||||
- g++-6
|
||||
- os: linux
|
||||
dist: trusty
|
||||
name: Python 3.5, c++14, gcc 6, Debug build
|
||||
# N.B. `ensurepip` could be installed transitively by `python3.5-venv`, but
|
||||
# seems to have apt conflicts (at least for Trusty). Use Docker instead.
|
||||
services: docker
|
||||
env: DOCKER=debian:stretch PYTHON=3.5 CPP=14 GCC=6 DEBUG=1
|
||||
- os: linux
|
||||
dist: xenial
|
||||
env: PYTHON=3.6 CPP=17 GCC=7
|
||||
name: Python 3.6, c++17, gcc 7
|
||||
addons:
|
||||
apt:
|
||||
sources:
|
||||
- deadsnakes
|
||||
- ubuntu-toolchain-r-test
|
||||
packages:
|
||||
- g++-7
|
||||
- python3.6-dev
|
||||
- python3.6-venv
|
||||
- os: linux
|
||||
dist: xenial
|
||||
env: PYTHON=3.6 CPP=17 CLANG=7
|
||||
name: Python 3.6, c++17, Clang 7
|
||||
addons:
|
||||
apt:
|
||||
sources:
|
||||
- deadsnakes
|
||||
- llvm-toolchain-xenial-7
|
||||
packages:
|
||||
- python3.6-dev
|
||||
- python3.6-venv
|
||||
- clang-7
|
||||
- libclang-7-dev
|
||||
- llvm-7-dev
|
||||
- lld-7
|
||||
- libc++-7-dev
|
||||
- libc++abi-7-dev # Why is this necessary???
|
||||
- os: osx
|
||||
name: Python 2.7, c++14, AppleClang 7.3, CMake test
|
||||
osx_image: xcode7.3
|
||||
env: PYTHON=2.7 CPP=14 CLANG CMAKE=1
|
||||
- os: osx
|
||||
name: Python 3.7, c++14, AppleClang 9, Debug build
|
||||
osx_image: xcode9
|
||||
env: PYTHON=3.7 CPP=14 CLANG DEBUG=1
|
||||
# Test a PyPy 2.7 build
|
||||
- os: linux
|
||||
dist: trusty
|
||||
env: PYPY=5.8 PYTHON=2.7 CPP=11 GCC=4.8
|
||||
name: PyPy 5.8, Python 2.7, c++11, gcc 4.8
|
||||
addons:
|
||||
apt:
|
||||
packages:
|
||||
- libblas-dev
|
||||
- liblapack-dev
|
||||
- gfortran
|
||||
# Build in 32-bit mode and tests against the CMake-installed version
|
||||
- os: linux
|
||||
dist: trusty
|
||||
services: docker
|
||||
env: DOCKER=i386/debian:stretch PYTHON=3.5 CPP=14 GCC=6 INSTALL=1
|
||||
name: Python 3.4, c++14, gcc 6, 32-bit
|
||||
script:
|
||||
- |
|
||||
# Consolidated 32-bit Docker Build + Install
|
||||
set -ex
|
||||
$SCRIPT_RUN_PREFIX sh -c "
|
||||
set -ex
|
||||
cmake ${CMAKE_EXTRA_ARGS} -DPYBIND11_INSTALL=1 -DPYBIND11_TEST=0 .
|
||||
make install
|
||||
cp -a tests /pybind11-tests
|
||||
mkdir /build-tests && cd /build-tests
|
||||
cmake ../pybind11-tests ${CMAKE_EXTRA_ARGS} -DPYBIND11_WERROR=ON
|
||||
make pytest -j 2"
|
||||
set +ex
|
||||
cache:
|
||||
directories:
|
||||
- $HOME/.local/bin
|
||||
- $HOME/.local/lib
|
||||
- $HOME/.local/include
|
||||
- $HOME/Library/Python
|
||||
before_install:
|
||||
- |
|
||||
# Configure build variables
|
||||
set -ex
|
||||
if [ "$TRAVIS_OS_NAME" = "linux" ]; then
|
||||
if [ -n "$CLANG" ]; then
|
||||
export CXX=clang++-$CLANG CC=clang-$CLANG
|
||||
EXTRA_PACKAGES+=" clang-$CLANG llvm-$CLANG-dev"
|
||||
else
|
||||
if [ -z "$GCC" ]; then GCC=4.8
|
||||
else EXTRA_PACKAGES+=" g++-$GCC"
|
||||
fi
|
||||
export CXX=g++-$GCC CC=gcc-$GCC
|
||||
fi
|
||||
elif [ "$TRAVIS_OS_NAME" = "osx" ]; then
|
||||
export CXX=clang++ CC=clang;
|
||||
fi
|
||||
if [ -n "$CPP" ]; then CPP=-std=c++$CPP; fi
|
||||
if [ "${PYTHON:0:1}" = "3" ]; then PY=3; fi
|
||||
if [ -n "$DEBUG" ]; then CMAKE_EXTRA_ARGS+=" -DCMAKE_BUILD_TYPE=Debug"; fi
|
||||
set +ex
|
||||
- |
|
||||
# Initialize environment
|
||||
set -ex
|
||||
if [ -n "$DOCKER" ]; then
|
||||
docker pull $DOCKER
|
||||
|
||||
containerid=$(docker run --detach --tty \
|
||||
--volume="$PWD":/pybind11 --workdir=/pybind11 \
|
||||
--env="CC=$CC" --env="CXX=$CXX" --env="DEBIAN_FRONTEND=$DEBIAN_FRONTEND" \
|
||||
--env=GCC_COLORS=\ \
|
||||
$DOCKER)
|
||||
SCRIPT_RUN_PREFIX="docker exec --tty $containerid"
|
||||
$SCRIPT_RUN_PREFIX sh -c 'for s in 0 15; do sleep $s; apt-get update && apt-get -qy dist-upgrade && break; done'
|
||||
else
|
||||
if [ "$PYPY" = "5.8" ]; then
|
||||
curl -fSL https://bitbucket.org/pypy/pypy/downloads/pypy2-v5.8.0-linux64.tar.bz2 | tar xj
|
||||
PY_CMD=$(echo `pwd`/pypy2-v5.8.0-linux64/bin/pypy)
|
||||
CMAKE_EXTRA_ARGS+=" -DPYTHON_EXECUTABLE:FILEPATH=$PY_CMD"
|
||||
else
|
||||
PY_CMD=python$PYTHON
|
||||
if [ "$TRAVIS_OS_NAME" = "osx" ]; then
|
||||
if [ "$PY" = "3" ]; then
|
||||
brew update && brew upgrade python
|
||||
else
|
||||
curl -fsSL https://bootstrap.pypa.io/get-pip.py | $PY_CMD - --user
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
if [ "$PY" = 3 ] || [ -n "$PYPY" ]; then
|
||||
$PY_CMD -m ensurepip --user
|
||||
fi
|
||||
$PY_CMD --version
|
||||
$PY_CMD -m pip install --user --upgrade pip wheel
|
||||
fi
|
||||
set +ex
|
||||
install:
|
||||
- |
|
||||
# Install dependencies
|
||||
set -ex
|
||||
cmake --version
|
||||
if [ -n "$DOCKER" ]; then
|
||||
if [ -n "$DEBUG" ]; then
|
||||
PY_DEBUG="python$PYTHON-dbg python$PY-scipy-dbg"
|
||||
CMAKE_EXTRA_ARGS+=" -DPYTHON_EXECUTABLE=/usr/bin/python${PYTHON}dm"
|
||||
fi
|
||||
$SCRIPT_RUN_PREFIX sh -c "for s in 0 15; do sleep \$s; \
|
||||
apt-get -qy --no-install-recommends install \
|
||||
$PY_DEBUG python$PYTHON-dev python$PY-pytest python$PY-scipy \
|
||||
libeigen3-dev libboost-dev cmake make ${EXTRA_PACKAGES} && break; done"
|
||||
else
|
||||
|
||||
if [ "$CLANG" = "7" ]; then
|
||||
export CXXFLAGS="-stdlib=libc++"
|
||||
fi
|
||||
|
||||
export NPY_NUM_BUILD_JOBS=2
|
||||
echo "Installing pytest, numpy, scipy..."
|
||||
local PIP_CMD=""
|
||||
if [ -n $PYPY ]; then
|
||||
# For expediency, install only versions that are available on the extra index.
|
||||
travis_wait 30 \
|
||||
$PY_CMD -m pip install --user --upgrade --extra-index-url https://imaginary.ca/trusty-pypi \
|
||||
pytest numpy==1.15.4 scipy==1.2.0
|
||||
else
|
||||
$PY_CMD -m pip install --user --upgrade pytest numpy scipy
|
||||
fi
|
||||
echo "done."
|
||||
|
||||
mkdir eigen
|
||||
curl -fsSL https://bitbucket.org/eigen/eigen/get/3.3.4.tar.bz2 | \
|
||||
tar --extract -j --directory=eigen --strip-components=1
|
||||
export CMAKE_INCLUDE_PATH="${CMAKE_INCLUDE_PATH:+$CMAKE_INCLUDE_PATH:}$PWD/eigen"
|
||||
fi
|
||||
set +ex
|
||||
script:
|
||||
- |
|
||||
# CMake Configuration
|
||||
set -ex
|
||||
$SCRIPT_RUN_PREFIX cmake ${CMAKE_EXTRA_ARGS} \
|
||||
-DPYBIND11_PYTHON_VERSION=$PYTHON \
|
||||
-DPYBIND11_CPP_STANDARD=$CPP \
|
||||
-DPYBIND11_WERROR=${WERROR:-ON} \
|
||||
-DDOWNLOAD_CATCH=${DOWNLOAD_CATCH:-ON} \
|
||||
.
|
||||
set +ex
|
||||
- |
|
||||
# pytest
|
||||
set -ex
|
||||
$SCRIPT_RUN_PREFIX make pytest -j 2 VERBOSE=1
|
||||
set +ex
|
||||
- |
|
||||
# cpptest
|
||||
set -ex
|
||||
$SCRIPT_RUN_PREFIX make cpptest -j 2
|
||||
set +ex
|
||||
- |
|
||||
# CMake Build Interface
|
||||
set -ex
|
||||
if [ -n "$CMAKE" ]; then $SCRIPT_RUN_PREFIX make test_cmake_build; fi
|
||||
set +ex
|
||||
after_failure: cat tests/test_cmake_build/*.log*
|
||||
after_script:
|
||||
- |
|
||||
# Cleanup (Docker)
|
||||
set -ex
|
||||
if [ -n "$DOCKER" ]; then docker stop "$containerid"; docker rm "$containerid"; fi
|
||||
set +ex
|
||||
|
|
@ -0,0 +1,157 @@
|
|||
# CMakeLists.txt -- Build system for the pybind11 modules
|
||||
#
|
||||
# Copyright (c) 2015 Wenzel Jakob <wenzel@inf.ethz.ch>
|
||||
#
|
||||
# All rights reserved. Use of this source code is governed by a
|
||||
# BSD-style license that can be found in the LICENSE file.
|
||||
|
||||
cmake_minimum_required(VERSION 2.8.12)
|
||||
|
||||
if (POLICY CMP0048)
|
||||
# cmake warns if loaded from a min-3.0-required parent dir, so silence the warning:
|
||||
cmake_policy(SET CMP0048 NEW)
|
||||
endif()
|
||||
|
||||
# CMake versions < 3.4.0 do not support try_compile/pthread checks without C as active language.
|
||||
if(CMAKE_VERSION VERSION_LESS 3.4.0)
|
||||
project(pybind11)
|
||||
else()
|
||||
project(pybind11 CXX)
|
||||
endif()
|
||||
|
||||
# Check if pybind11 is being used directly or via add_subdirectory
|
||||
set(PYBIND11_MASTER_PROJECT OFF)
|
||||
if (CMAKE_CURRENT_SOURCE_DIR STREQUAL CMAKE_SOURCE_DIR)
|
||||
set(PYBIND11_MASTER_PROJECT ON)
|
||||
endif()
|
||||
|
||||
option(PYBIND11_INSTALL "Install pybind11 header files?" ${PYBIND11_MASTER_PROJECT})
|
||||
option(PYBIND11_TEST "Build pybind11 test suite?" ${PYBIND11_MASTER_PROJECT})
|
||||
|
||||
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}/tools")
|
||||
|
||||
include(pybind11Tools)
|
||||
|
||||
# Cache variables so pybind11_add_module can be used in parent projects
|
||||
set(PYBIND11_INCLUDE_DIR "${CMAKE_CURRENT_LIST_DIR}/include" CACHE INTERNAL "")
|
||||
set(PYTHON_INCLUDE_DIRS ${PYTHON_INCLUDE_DIRS} CACHE INTERNAL "")
|
||||
set(PYTHON_LIBRARIES ${PYTHON_LIBRARIES} CACHE INTERNAL "")
|
||||
set(PYTHON_MODULE_PREFIX ${PYTHON_MODULE_PREFIX} CACHE INTERNAL "")
|
||||
set(PYTHON_MODULE_EXTENSION ${PYTHON_MODULE_EXTENSION} CACHE INTERNAL "")
|
||||
set(PYTHON_VERSION_MAJOR ${PYTHON_VERSION_MAJOR} CACHE INTERNAL "")
|
||||
set(PYTHON_VERSION_MINOR ${PYTHON_VERSION_MINOR} CACHE INTERNAL "")
|
||||
|
||||
# NB: when adding a header don't forget to also add it to setup.py
|
||||
set(PYBIND11_HEADERS
|
||||
include/pybind11/detail/class.h
|
||||
include/pybind11/detail/common.h
|
||||
include/pybind11/detail/descr.h
|
||||
include/pybind11/detail/init.h
|
||||
include/pybind11/detail/internals.h
|
||||
include/pybind11/detail/typeid.h
|
||||
include/pybind11/attr.h
|
||||
include/pybind11/buffer_info.h
|
||||
include/pybind11/cast.h
|
||||
include/pybind11/chrono.h
|
||||
include/pybind11/common.h
|
||||
include/pybind11/complex.h
|
||||
include/pybind11/options.h
|
||||
include/pybind11/eigen.h
|
||||
include/pybind11/embed.h
|
||||
include/pybind11/eval.h
|
||||
include/pybind11/functional.h
|
||||
include/pybind11/numpy.h
|
||||
include/pybind11/operators.h
|
||||
include/pybind11/pybind11.h
|
||||
include/pybind11/pytypes.h
|
||||
include/pybind11/stl.h
|
||||
include/pybind11/stl_bind.h
|
||||
)
|
||||
string(REPLACE "include/" "${CMAKE_CURRENT_SOURCE_DIR}/include/"
|
||||
PYBIND11_HEADERS "${PYBIND11_HEADERS}")
|
||||
|
||||
if (PYBIND11_TEST)
|
||||
add_subdirectory(tests)
|
||||
endif()
|
||||
|
||||
include(GNUInstallDirs)
|
||||
include(CMakePackageConfigHelpers)
|
||||
|
||||
# extract project version from source
|
||||
file(STRINGS "${PYBIND11_INCLUDE_DIR}/pybind11/detail/common.h" pybind11_version_defines
|
||||
REGEX "#define PYBIND11_VERSION_(MAJOR|MINOR|PATCH) ")
|
||||
foreach(ver ${pybind11_version_defines})
|
||||
if (ver MATCHES "#define PYBIND11_VERSION_(MAJOR|MINOR|PATCH) +([^ ]+)$")
|
||||
set(PYBIND11_VERSION_${CMAKE_MATCH_1} "${CMAKE_MATCH_2}" CACHE INTERNAL "")
|
||||
endif()
|
||||
endforeach()
|
||||
set(${PROJECT_NAME}_VERSION ${PYBIND11_VERSION_MAJOR}.${PYBIND11_VERSION_MINOR}.${PYBIND11_VERSION_PATCH})
|
||||
message(STATUS "pybind11 v${${PROJECT_NAME}_VERSION}")
|
||||
|
||||
option (USE_PYTHON_INCLUDE_DIR "Install pybind11 headers in Python include directory instead of default installation prefix" OFF)
|
||||
if (USE_PYTHON_INCLUDE_DIR)
|
||||
file(RELATIVE_PATH CMAKE_INSTALL_INCLUDEDIR ${CMAKE_INSTALL_PREFIX} ${PYTHON_INCLUDE_DIRS})
|
||||
endif()
|
||||
|
||||
if(NOT (CMAKE_VERSION VERSION_LESS 3.0)) # CMake >= 3.0
|
||||
# Build an interface library target:
|
||||
add_library(pybind11 INTERFACE)
|
||||
add_library(pybind11::pybind11 ALIAS pybind11) # to match exported target
|
||||
target_include_directories(pybind11 INTERFACE $<BUILD_INTERFACE:${PYBIND11_INCLUDE_DIR}>
|
||||
$<BUILD_INTERFACE:${PYTHON_INCLUDE_DIRS}>
|
||||
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>)
|
||||
target_compile_options(pybind11 INTERFACE $<BUILD_INTERFACE:${PYBIND11_CPP_STANDARD}>)
|
||||
|
||||
add_library(module INTERFACE)
|
||||
add_library(pybind11::module ALIAS module)
|
||||
if(NOT MSVC)
|
||||
target_compile_options(module INTERFACE -fvisibility=hidden)
|
||||
endif()
|
||||
target_link_libraries(module INTERFACE pybind11::pybind11)
|
||||
if(WIN32 OR CYGWIN)
|
||||
target_link_libraries(module INTERFACE $<BUILD_INTERFACE:${PYTHON_LIBRARIES}>)
|
||||
elseif(APPLE)
|
||||
target_link_libraries(module INTERFACE "-undefined dynamic_lookup")
|
||||
endif()
|
||||
|
||||
add_library(embed INTERFACE)
|
||||
add_library(pybind11::embed ALIAS embed)
|
||||
target_link_libraries(embed INTERFACE pybind11::pybind11 $<BUILD_INTERFACE:${PYTHON_LIBRARIES}>)
|
||||
endif()
|
||||
|
||||
if (PYBIND11_INSTALL)
|
||||
install(DIRECTORY ${PYBIND11_INCLUDE_DIR}/pybind11 DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})
|
||||
# GNUInstallDirs "DATADIR" wrong here; CMake search path wants "share".
|
||||
set(PYBIND11_CMAKECONFIG_INSTALL_DIR "share/cmake/${PROJECT_NAME}" CACHE STRING "install path for pybind11Config.cmake")
|
||||
|
||||
configure_package_config_file(tools/${PROJECT_NAME}Config.cmake.in
|
||||
"${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}Config.cmake"
|
||||
INSTALL_DESTINATION ${PYBIND11_CMAKECONFIG_INSTALL_DIR})
|
||||
# Remove CMAKE_SIZEOF_VOID_P from ConfigVersion.cmake since the library does
|
||||
# not depend on architecture specific settings or libraries.
|
||||
set(_PYBIND11_CMAKE_SIZEOF_VOID_P ${CMAKE_SIZEOF_VOID_P})
|
||||
unset(CMAKE_SIZEOF_VOID_P)
|
||||
write_basic_package_version_file(${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}ConfigVersion.cmake
|
||||
VERSION ${${PROJECT_NAME}_VERSION}
|
||||
COMPATIBILITY AnyNewerVersion)
|
||||
set(CMAKE_SIZEOF_VOID_P ${_PYBIND11_CMAKE_SIZEOF_VOID_P})
|
||||
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}Config.cmake
|
||||
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}ConfigVersion.cmake
|
||||
tools/FindPythonLibsNew.cmake
|
||||
tools/pybind11Tools.cmake
|
||||
DESTINATION ${PYBIND11_CMAKECONFIG_INSTALL_DIR})
|
||||
|
||||
if(NOT (CMAKE_VERSION VERSION_LESS 3.0))
|
||||
if(NOT PYBIND11_EXPORT_NAME)
|
||||
set(PYBIND11_EXPORT_NAME "${PROJECT_NAME}Targets")
|
||||
endif()
|
||||
|
||||
install(TARGETS pybind11 module embed
|
||||
EXPORT "${PYBIND11_EXPORT_NAME}")
|
||||
if(PYBIND11_MASTER_PROJECT)
|
||||
install(EXPORT "${PYBIND11_EXPORT_NAME}"
|
||||
NAMESPACE "${PROJECT_NAME}::"
|
||||
DESTINATION ${PYBIND11_CMAKECONFIG_INSTALL_DIR})
|
||||
endif()
|
||||
endif()
|
||||
endif()
|
||||
|
|
@ -0,0 +1,49 @@
|
|||
Thank you for your interest in this project! Please refer to the following
|
||||
sections on how to contribute code and bug reports.
|
||||
|
||||
### Reporting bugs
|
||||
|
||||
At the moment, this project is run in the spare time of a single person
|
||||
([Wenzel Jakob](http://rgl.epfl.ch/people/wjakob)) with very limited resources
|
||||
for issue tracker tickets. Thus, before submitting a question or bug report,
|
||||
please take a moment of your time and ensure that your issue isn't already
|
||||
discussed in the project documentation provided at
|
||||
[http://pybind11.readthedocs.org/en/latest](http://pybind11.readthedocs.org/en/latest).
|
||||
|
||||
Assuming that you have identified a previously unknown problem or an important
|
||||
question, it's essential that you submit a self-contained and minimal piece of
|
||||
code that reproduces the problem. In other words: no external dependencies,
|
||||
isolate the function(s) that cause breakage, submit matched and complete C++
|
||||
and Python snippets that can be easily compiled and run on my end.
|
||||
|
||||
## Pull requests
|
||||
Contributions are submitted, reviewed, and accepted using GitHub pull requests.
|
||||
Please refer to [this
|
||||
article](https://help.github.com/articles/using-pull-requests) for details and
|
||||
adhere to the following rules to make the process as smooth as possible:
|
||||
|
||||
* Make a new branch for every feature you're working on.
|
||||
* Make small and clean pull requests that are easy to review but make sure they
|
||||
do add value by themselves.
|
||||
* Add tests for any new functionality and run the test suite (``make pytest``)
|
||||
to ensure that no existing features break.
|
||||
* Please run ``flake8`` and ``tools/check-style.sh`` to check your code matches
|
||||
the project style. (Note that ``check-style.sh`` requires ``gawk``.)
|
||||
* This project has a strong focus on providing general solutions using a
|
||||
minimal amount of code, thus small pull requests are greatly preferred.
|
||||
|
||||
### Licensing of contributions
|
||||
|
||||
pybind11 is provided under a BSD-style license that can be found in the
|
||||
``LICENSE`` file. By using, distributing, or contributing to this project, you
|
||||
agree to the terms and conditions of this license.
|
||||
|
||||
You are under no obligation whatsoever to provide any bug fixes, patches, or
|
||||
upgrades to the features, functionality or performance of the source code
|
||||
("Enhancements") to anyone; however, if you choose to make your Enhancements
|
||||
available either publicly, or directly to the author of this software, without
|
||||
imposing a separate written license agreement for such Enhancements, then you
|
||||
hereby grant the following license: a non-exclusive, royalty-free perpetual
|
||||
license to install, use, modify, prepare derivative works, incorporate into
|
||||
other computer software, distribute, and sublicense such enhancements or
|
||||
derivative works thereof, in binary and source code form.
|
||||
|
|
@ -0,0 +1,17 @@
|
|||
Make sure you've completed the following steps before submitting your issue -- thank you!
|
||||
|
||||
1. Check if your question has already been answered in the [FAQ](http://pybind11.readthedocs.io/en/latest/faq.html) section.
|
||||
2. Make sure you've read the [documentation](http://pybind11.readthedocs.io/en/latest/). Your issue may be addressed there.
|
||||
3. If those resources didn't help and you only have a short question (not a bug report), consider asking in the [Gitter chat room](https://gitter.im/pybind/Lobby).
|
||||
4. If you have a genuine bug report or a more complex question which is not answered in the previous items (or not suitable for chat), please fill in the details below.
|
||||
5. Include a self-contained and minimal piece of code that reproduces the problem. If that's not possible, try to make the description as clear as possible.
|
||||
|
||||
*After reading, remove this checklist and the template text in parentheses below.*
|
||||
|
||||
## Issue description
|
||||
|
||||
(Provide a short description, state the expected behavior and what actually happens.)
|
||||
|
||||
## Reproducible example code
|
||||
|
||||
(The code should be minimal, have no external dependencies, isolate the function(s) that cause breakage. Submit matched and complete C++ and Python snippets that can be easily compiled and run to diagnose the issue.)
|
||||
|
|
@ -0,0 +1,29 @@
|
|||
Copyright (c) 2016 Wenzel Jakob <wenzel.jakob@epfl.ch>, All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are met:
|
||||
|
||||
1. Redistributions of source code must retain the above copyright notice, this
|
||||
list of conditions and the following disclaimer.
|
||||
|
||||
2. Redistributions in binary form must reproduce the above copyright notice,
|
||||
this list of conditions and the following disclaimer in the documentation
|
||||
and/or other materials provided with the distribution.
|
||||
|
||||
3. Neither the name of the copyright holder nor the names of its contributors
|
||||
may be used to endorse or promote products derived from this software
|
||||
without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
|
||||
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
|
||||
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
|
||||
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
|
||||
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
|
||||
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
|
||||
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
|
||||
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
|
||||
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
Please also refer to the file CONTRIBUTING.md, which clarifies licensing of
|
||||
external contributions to this project including patches, pull requests, etc.
|
||||
|
|
@ -0,0 +1,2 @@
|
|||
recursive-include include/pybind11 *.h
|
||||
include LICENSE README.md CONTRIBUTING.md
|
||||
|
|
@ -0,0 +1,129 @@
|
|||

|
||||
|
||||
# pybind11 — Seamless operability between C++11 and Python
|
||||
|
||||
[](http://pybind11.readthedocs.org/en/master/?badge=master)
|
||||
[](http://pybind11.readthedocs.org/en/stable/?badge=stable)
|
||||
[](https://gitter.im/pybind/Lobby)
|
||||
[](https://travis-ci.org/pybind/pybind11)
|
||||
[](https://ci.appveyor.com/project/wjakob/pybind11)
|
||||
|
||||
**pybind11** is a lightweight header-only library that exposes C++ types in Python
|
||||
and vice versa, mainly to create Python bindings of existing C++ code. Its
|
||||
goals and syntax are similar to the excellent
|
||||
[Boost.Python](http://www.boost.org/doc/libs/1_58_0/libs/python/doc/) library
|
||||
by David Abrahams: to minimize boilerplate code in traditional extension
|
||||
modules by inferring type information using compile-time introspection.
|
||||
|
||||
The main issue with Boost.Python—and the reason for creating such a similar
|
||||
project—is Boost. Boost is an enormously large and complex suite of utility
|
||||
libraries that works with almost every C++ compiler in existence. This
|
||||
compatibility has its cost: arcane template tricks and workarounds are
|
||||
necessary to support the oldest and buggiest of compiler specimens. Now that
|
||||
C++11-compatible compilers are widely available, this heavy machinery has
|
||||
become an excessively large and unnecessary dependency.
|
||||
|
||||
Think of this library as a tiny self-contained version of Boost.Python with
|
||||
everything stripped away that isn't relevant for binding generation. Without
|
||||
comments, the core header files only require ~4K lines of code and depend on
|
||||
Python (2.7 or 3.x, or PyPy2.7 >= 5.7) and the C++ standard library. This
|
||||
compact implementation was possible thanks to some of the new C++11 language
|
||||
features (specifically: tuples, lambda functions and variadic templates). Since
|
||||
its creation, this library has grown beyond Boost.Python in many ways, leading
|
||||
to dramatically simpler binding code in many common situations.
|
||||
|
||||
Tutorial and reference documentation is provided at
|
||||
[http://pybind11.readthedocs.org/en/master](http://pybind11.readthedocs.org/en/master).
|
||||
A PDF version of the manual is available
|
||||
[here](https://media.readthedocs.org/pdf/pybind11/master/pybind11.pdf).
|
||||
|
||||
## Core features
|
||||
pybind11 can map the following core C++ features to Python
|
||||
|
||||
- Functions accepting and returning custom data structures per value, reference, or pointer
|
||||
- Instance methods and static methods
|
||||
- Overloaded functions
|
||||
- Instance attributes and static attributes
|
||||
- Arbitrary exception types
|
||||
- Enumerations
|
||||
- Callbacks
|
||||
- Iterators and ranges
|
||||
- Custom operators
|
||||
- Single and multiple inheritance
|
||||
- STL data structures
|
||||
- Smart pointers with reference counting like ``std::shared_ptr``
|
||||
- Internal references with correct reference counting
|
||||
- C++ classes with virtual (and pure virtual) methods can be extended in Python
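A minimal module exercising the simplest of these features, a plain function binding, can look roughly like the sketch below (the module and function names are illustrative, not part of this repository):

```cpp
#include <pybind11/pybind11.h>

// An ordinary C++ function to expose to Python.
int add(int i, int j) { return i + j; }

// Defines an importable Python extension module named `example`.
PYBIND11_MODULE(example, m) {
    m.def("add", &add, "Add two integers");
}
```

Once compiled into a shared library, the module is used from Python as `import example; example.add(1, 2)`.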
|
||||
|
||||
## Goodies
|
||||
In addition to the core functionality, pybind11 provides some extra goodies:
|
||||
|
||||
- Python 2.7, 3.x, and PyPy (PyPy2.7 >= 5.7) are supported with an
|
||||
implementation-agnostic interface.
|
||||
|
||||
- It is possible to bind C++11 lambda functions with captured variables. The
|
||||
lambda capture data is stored inside the resulting Python function object.
|
||||
|
||||
- pybind11 uses C++11 move constructors and move assignment operators whenever
|
||||
possible to efficiently transfer custom data types.
|
||||
|
||||
- It's easy to expose the internal storage of custom data types through
|
||||
Python's buffer protocol. This is handy e.g. for fast conversion between
|
||||
C++ matrix classes like Eigen and NumPy without expensive copy operations.
|
||||
|
||||
- pybind11 can automatically vectorize functions so that they are transparently
|
||||
applied to all entries of one or more NumPy array arguments.
|
||||
|
||||
- Python's slice-based access and assignment operations can be supported with
|
||||
just a few lines of code.
|
||||
|
||||
- Everything is contained in just a few header files; there is no need to link
|
||||
against any additional libraries.
|
||||
|
||||
- Binaries are generally smaller by a factor of at least 2 compared to
|
||||
equivalent bindings generated by Boost.Python. A recent pybind11 conversion
|
||||
of PyRosetta, an enormous Boost.Python binding project,
|
||||
[reported](http://graylab.jhu.edu/RosettaCon2016/PyRosetta-4.pdf) a binary
|
||||
size reduction of **5.4x** and compile time reduction by **5.8x**.
|
||||
|
||||
- Function signatures are precomputed at compile time (using ``constexpr``),
|
||||
leading to smaller binaries.
|
||||
|
||||
- With little extra effort, C++ types can be pickled and unpickled similar to
|
||||
regular Python objects.
|
||||
|
||||
## Supported compilers
|
||||
|
||||
1. Clang/LLVM 3.3 or newer (for Apple Xcode's clang, this is 5.0.0 or newer)
|
||||
2. GCC 4.8 or newer
|
||||
3. Microsoft Visual Studio 2015 Update 3 or newer
|
||||
4. Intel C++ compiler 17 or newer (16 with pybind11 v2.0 and 15 with pybind11 v2.0 and a [workaround](https://github.com/pybind/pybind11/issues/276))
|
||||
5. Cygwin/GCC (tested on 2.5.1)
|
||||
|
||||
## About
|
||||
|
||||
This project was created by [Wenzel Jakob](http://rgl.epfl.ch/people/wjakob).
|
||||
Significant features and/or improvements to the code were contributed by
|
||||
Jonas Adler,
|
||||
Lori A. Burns,
|
||||
Sylvain Corlay,
|
||||
Trent Houliston,
|
||||
Axel Huebl,
|
||||
@hulucc,
|
||||
Sergey Lyskov,
|
||||
Johan Mabille,
|
||||
Tomasz Miąsko,
|
||||
Dean Moldovan,
|
||||
Ben Pritchard,
|
||||
Jason Rhinelander,
|
||||
Boris Schäling,
|
||||
Pim Schellart,
|
||||
Henry Schreiner,
|
||||
Ivan Smirnov, and
|
||||
Patrick Stewart.
|
||||
|
||||
### License
|
||||
|
||||
pybind11 is provided under a BSD-style license that can be found in the
|
||||
``LICENSE`` file. By using, distributing, or contributing to this project,
|
||||
you agree to the terms and conditions of this license.
|
||||
|
|
@ -0,0 +1,20 @@
|
|||
PROJECT_NAME = pybind11
|
||||
INPUT = ../include/pybind11/
|
||||
RECURSIVE = YES
|
||||
|
||||
GENERATE_HTML = NO
|
||||
GENERATE_LATEX = NO
|
||||
GENERATE_XML = YES
|
||||
XML_OUTPUT = .build/doxygenxml
|
||||
XML_PROGRAMLISTING = YES
|
||||
|
||||
MACRO_EXPANSION = YES
|
||||
EXPAND_ONLY_PREDEF = YES
|
||||
EXPAND_AS_DEFINED = PYBIND11_RUNTIME_EXCEPTION
|
||||
|
||||
ALIASES = "rst=\verbatim embed:rst"
|
||||
ALIASES += "endrst=\endverbatim"
|
||||
|
||||
QUIET = YES
|
||||
WARNINGS = YES
|
||||
WARN_IF_UNDOCUMENTED = NO
|
||||
|
|
@ -0,0 +1,11 @@
|
|||
.wy-table-responsive table td,
|
||||
.wy-table-responsive table th {
|
||||
white-space: initial !important;
|
||||
}
|
||||
.rst-content table.docutils td {
|
||||
vertical-align: top !important;
|
||||
}
|
||||
div[class^='highlight'] pre {
|
||||
white-space: pre;
|
||||
white-space: pre-wrap;
|
||||
}
|
||||
|
|
@ -0,0 +1,81 @@
|
|||
Chrono
|
||||
======
|
||||
|
||||
When including the additional header file :file:`pybind11/chrono.h`, conversions
from C++11 chrono datatypes to Python datetime objects are automatically enabled.
This header also enables conversions of Python floats (often from sources such
|
||||
as ``time.monotonic()``, ``time.perf_counter()`` and ``time.process_time()``)
|
||||
into durations.
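For example, once the header is included, functions taking chrono types can be bound directly; the following is a minimal sketch (the module and function names are illustrative):

.. code-block:: cpp

    #include <pybind11/pybind11.h>
    #include <pybind11/chrono.h>

    PYBIND11_MODULE(example, m) {
        // `deadline` accepts a datetime.datetime; `poll_interval` accepts a
        // datetime.timedelta or a plain float number of seconds.
        m.def("wait_until", [](std::chrono::system_clock::time_point deadline,
                               std::chrono::duration<double> poll_interval) {
            /* ... */
        });
    }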
|
||||
|
||||
An overview of clocks in C++11
|
||||
------------------------------
|
||||
|
||||
A point of confusion when using these conversions is the differences between
|
||||
clocks provided in C++11. There are three clock types defined by the C++11
|
||||
standard and users can define their own if needed. Each of these clocks have
|
||||
different properties and when converting to and from python will give different
|
||||
results.
|
||||
|
||||
The first clock defined by the standard is ``std::chrono::system_clock``. This
|
||||
clock measures the current date and time. However, this clock changes with
|
||||
updates to the operating system time. For example, if your time is synchronised
|
||||
with a time server this clock will change. This makes this clock a poor choice
|
||||
for timing purposes but good for measuring the wall time.
|
||||
|
||||
The second clock defined in the standard is ``std::chrono::steady_clock``.
|
||||
This clock ticks at a steady rate and is never adjusted. This makes it excellent
|
||||
for timing purposes, however the value in this clock does not correspond to the
|
||||
current date and time. Often this clock will be the amount of time your system
|
||||
has been on, although it does not have to be. This clock will never be the same
|
||||
clock as the system clock as the system clock can change but steady clocks
|
||||
cannot.
|
||||
|
||||
The third clock defined in the standard is ``std::chrono::high_resolution_clock``.
|
||||
This clock is the clock that has the highest resolution out of the clocks in the
|
||||
system. It is normally a typedef to either the system clock or the steady clock
|
||||
but can be its own independent clock. This is important when using these
conversions, as the types you get in Python for this clock might differ
depending on the system.
|
||||
If it is a typedef of the system clock, Python will get datetime objects, but if
|
||||
it is a different clock they will be timedelta objects.
|
||||
|
||||
Provided conversions
|
||||
--------------------
|
||||
|
||||
.. rubric:: C++ to Python
|
||||
|
||||
- ``std::chrono::system_clock::time_point`` → ``datetime.datetime``
|
||||
System clock times are converted to Python datetime instances. They are
|
||||
in the local timezone, but do not have any timezone information attached
|
||||
to them (they are naive datetime objects).
|
||||
|
||||
- ``std::chrono::duration`` → ``datetime.timedelta``
|
||||
Durations are converted to timedeltas; any precision in the duration
finer than microseconds is lost by rounding towards zero.
|
||||
|
||||
- ``std::chrono::[other_clocks]::time_point`` → ``datetime.timedelta``
|
||||
Any clock time that is not the system clock is converted to a time delta.
|
||||
This timedelta measures the time from the clock's epoch to now.
|
||||
|
||||
.. rubric:: Python to C++
|
||||
|
||||
- ``datetime.datetime`` → ``std::chrono::system_clock::time_point``
|
||||
Date/time objects are converted into system clock timepoints. Any
|
||||
timezone information is ignored and the type is treated as a naive
|
||||
object.
|
||||
|
||||
- ``datetime.timedelta`` → ``std::chrono::duration``
|
||||
Timedeltas are converted into durations with microsecond precision.
|
||||
|
||||
- ``datetime.timedelta`` → ``std::chrono::[other_clocks]::time_point``
|
||||
Time deltas that are converted into clock timepoints are treated as
|
||||
the amount of time from the start of the clock's epoch.
|
||||
|
||||
- ``float`` → ``std::chrono::duration``
|
||||
Floats that are passed to C++ as durations will be interpreted as a number of
|
||||
seconds. These will be converted to the duration using ``duration_cast``
|
||||
from the float.
|
||||
|
||||
- ``float`` → ``std::chrono::[other_clocks]::time_point``
|
||||
Floats that are passed to C++ as time points will be interpreted as the
|
||||
number of seconds from the start of the clock's epoch.
|
||||
|
|
@ -0,0 +1,91 @@
|
|||
Custom type casters
|
||||
===================
|
||||
|
||||
In very rare cases, applications may require custom type casters that cannot be
|
||||
expressed using the abstractions provided by pybind11, thus requiring raw
|
||||
Python C API calls. This is fairly advanced usage and should only be pursued by
|
||||
experts who are familiar with the intricacies of Python reference counting.
|
||||
|
||||
The following snippets demonstrate how this works for a very simple ``inty``
|
||||
type that should be convertible from Python types that provide a
|
||||
``__int__(self)`` method.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
struct inty { long long_value; };
|
||||
|
||||
void print(inty s) {
|
||||
std::cout << s.long_value << std::endl;
|
||||
}
|
||||
|
||||
The following Python snippet demonstrates the intended usage from the Python side:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
class A:
|
||||
def __int__(self):
|
||||
return 123
|
||||
|
||||
from example import print
|
||||
print(A())
|
||||
|
||||
To register the necessary conversion routines, add a specialization of the
``pybind11::detail::type_caster<T>`` template. Although this is an
implementation detail, adding specializations of this type is explicitly allowed.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
namespace pybind11 { namespace detail {
|
||||
template <> struct type_caster<inty> {
|
||||
public:
|
||||
/**
|
||||
* This macro establishes the name 'inty' in
|
||||
* function signatures and declares a local variable
|
||||
* 'value' of type inty
|
||||
*/
|
||||
PYBIND11_TYPE_CASTER(inty, _("inty"));
|
||||
|
||||
/**
|
||||
* Conversion part 1 (Python->C++): convert a PyObject into a inty
|
||||
* instance or return false upon failure. The second argument
|
||||
* indicates whether implicit conversions should be applied.
|
||||
*/
|
||||
bool load(handle src, bool) {
|
||||
/* Extract PyObject from handle */
|
||||
PyObject *source = src.ptr();
|
||||
/* Try converting into a Python integer value */
|
||||
PyObject *tmp = PyNumber_Long(source);
|
||||
if (!tmp)
|
||||
return false;
|
||||
/* Now try to convert into a C++ int */
|
||||
value.long_value = PyLong_AsLong(tmp);
|
||||
Py_DECREF(tmp);
|
||||
/* Ensure the conversion succeeded: PyLong_AsLong returns -1 and sets a
   Python error on failure (e.g. for out-of-range values) */
return !(value.long_value == -1 && PyErr_Occurred());
|
||||
}
|
||||
|
||||
/**
|
||||
* Conversion part 2 (C++ -> Python): convert an inty instance into
|
||||
* a Python object. The second and third arguments are used to
|
||||
* indicate the return value policy and parent object (for
|
||||
* ``return_value_policy::reference_internal``) and are generally
|
||||
* ignored by implicit casters.
|
||||
*/
|
||||
static handle cast(inty src, return_value_policy /* policy */, handle /* parent */) {
|
||||
return PyLong_FromLong(src.long_value);
|
||||
}
|
||||
};
|
||||
}} // namespace pybind11::detail
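With the caster in place, the ``print`` function shown at the top of this section can be bound in the usual way; the module name ``example`` matches the earlier Python snippet:

.. code-block:: cpp

    PYBIND11_MODULE(example, m) {
        // The inty argument is produced by the custom type_caster defined above.
        m.def("print", &print);
    }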
|
||||
|
||||
.. note::
|
||||
|
||||
A ``type_caster<T>`` defined with ``PYBIND11_TYPE_CASTER(T, ...)`` requires
|
||||
that ``T`` is default-constructible (``value`` is first default constructed
|
||||
and then ``load()`` assigns to it).
|
||||
|
||||
.. warning::
|
||||
|
||||
When using custom type casters, it's important to declare them consistently
|
||||
in every compilation unit of the Python extension module. Otherwise,
|
||||
undefined behavior can ensue.
|
||||
|
|
@ -0,0 +1,310 @@
|
|||
Eigen
|
||||
#####
|
||||
|
||||
`Eigen <http://eigen.tuxfamily.org>`_ is a C++ header-based library for dense and
|
||||
sparse linear algebra. Due to its popularity and widespread adoption, pybind11
|
||||
provides transparent conversion and limited mapping support between Eigen and
|
||||
Scientific Python linear algebra data types.
|
||||
|
||||
To enable the built-in Eigen support you must include the optional header file
|
||||
:file:`pybind11/eigen.h`.
|
||||
|
||||
Pass-by-value
|
||||
=============
|
||||
|
||||
When binding a function with ordinary Eigen dense object arguments (for
|
||||
example, ``Eigen::MatrixXd``), pybind11 will accept any input value that is
|
||||
already (or convertible to) a ``numpy.ndarray`` with dimensions compatible with
|
||||
the Eigen type, copy its values into a temporary Eigen variable of the
|
||||
appropriate type, then call the function with this temporary variable.
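For instance, a by-value binding requires nothing beyond including the header; a minimal sketch (assuming this appears inside a ``PYBIND11_MODULE`` body with module handle ``m``, and the function name is illustrative):

.. code-block:: cpp

    #include <pybind11/eigen.h>

    // Any numpy array convertible to a double matrix is copied into `mat`.
    m.def("matrix_sum", [](const Eigen::MatrixXd &mat) { return mat.sum(); });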
|
||||
|
||||
Sparse matrices are similarly copied to or from
|
||||
``scipy.sparse.csr_matrix``/``scipy.sparse.csc_matrix`` objects.
|
||||
|
||||
Pass-by-reference
|
||||
=================
|
||||
|
||||
One major limitation of the above is that every data conversion implicitly
|
||||
involves a copy, which can be both expensive (for large matrices) and disallows
|
||||
binding functions that change their (Matrix) arguments. Pybind11 allows you to
|
||||
work around this by using Eigen's ``Eigen::Ref<MatrixType>`` class much as you
|
||||
would when writing a function taking a generic type in Eigen itself (subject to
|
||||
some limitations discussed below).
|
||||
|
||||
When calling a bound function accepting a ``Eigen::Ref<const MatrixType>``
|
||||
type, pybind11 will attempt to avoid copying by using an ``Eigen::Map`` object
|
||||
that maps into the source ``numpy.ndarray`` data: this requires both that the
|
||||
data types are the same (e.g. ``dtype='float64'`` and ``MatrixType::Scalar`` is
|
||||
``double``); and that the storage is layout compatible. The latter limitation
|
||||
is discussed in detail in the section below, and requires careful
|
||||
consideration: by default, numpy matrices and Eigen matrices are *not* storage
|
||||
compatible.
|
||||
|
||||
If the numpy matrix cannot be used as is (either because its types differ, e.g.
|
||||
passing an array of integers to an Eigen parameter requiring doubles, or
|
||||
because the storage is incompatible), pybind11 makes a temporary copy and
|
||||
passes the copy instead.
|
||||
|
||||
When a bound function parameter is instead ``Eigen::Ref<MatrixType>`` (note the
|
||||
lack of ``const``), pybind11 will only allow the function to be called if it
|
||||
can be mapped *and* if the numpy array is writeable (that is
|
||||
``a.flags.writeable`` is true). Any access (including modification) made to
|
||||
the passed variable will be transparently carried out directly on the
|
||||
``numpy.ndarray``.
|
||||
|
||||
This means you can write code such as the following and have it work as
|
||||
expected:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
void scale_by_2(Eigen::Ref<Eigen::VectorXd> v) {
|
||||
v *= 2;
|
||||
}
|
||||
|
||||
Note, however, that you will likely run into limitations due to numpy and
|
||||
Eigen's different default storage orders for data; see the below section on
|
||||
:ref:`storage_orders` for details on how to bind code that won't run into such
|
||||
limitations.
|
||||
|
||||
.. note::
|
||||
|
||||
Passing by reference is not supported for sparse types.
|
||||
|
||||
Returning values to Python
|
||||
==========================
|
||||
|
||||
When returning an ordinary dense Eigen matrix type to numpy (e.g.
|
||||
``Eigen::MatrixXd`` or ``Eigen::RowVectorXf``) pybind11 keeps the matrix and
|
||||
returns a numpy array that directly references the Eigen matrix: no copy of the
|
||||
data is performed. The numpy array will have ``array.flags.owndata`` set to
|
||||
``False`` to indicate that it does not own the data, and the lifetime of the
|
||||
stored Eigen matrix will be tied to the returned ``array``.
|
||||
|
||||
If you bind a function with a non-reference, ``const`` return type (e.g.
|
||||
``const Eigen::MatrixXd``), the same thing happens except that pybind11 also
|
||||
sets the numpy array's ``writeable`` flag to false.
|
||||
|
||||
If you return an lvalue reference or pointer, the usual pybind11 rules apply,
|
||||
as dictated by the binding function's return value policy (see the
|
||||
documentation on :ref:`return_value_policies` for full details). That means,
|
||||
without an explicit return value policy, lvalue references will be copied and
|
||||
pointers will be managed by pybind11. In order to avoid copying, you should
|
||||
explicitly specify an appropriate return value policy, as in the following
|
||||
example:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
class MyClass {
|
||||
Eigen::MatrixXd big_mat = Eigen::MatrixXd::Zero(10000, 10000);
|
||||
public:
|
||||
Eigen::MatrixXd &getMatrix() { return big_mat; }
|
||||
const Eigen::MatrixXd &viewMatrix() { return big_mat; }
|
||||
};
|
||||
|
||||
// Later, in binding code:
|
||||
py::class_<MyClass>(m, "MyClass")
|
||||
.def(py::init<>())
|
||||
.def("copy_matrix", &MyClass::getMatrix) // Makes a copy!
|
||||
.def("get_matrix", &MyClass::getMatrix, py::return_value_policy::reference_internal)
|
||||
.def("view_matrix", &MyClass::viewMatrix, py::return_value_policy::reference_internal)
|
||||
;
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
a = MyClass()
|
||||
m = a.get_matrix() # flags.writeable = True, flags.owndata = False
|
||||
v = a.view_matrix() # flags.writeable = False, flags.owndata = False
|
||||
c = a.copy_matrix() # flags.writeable = True, flags.owndata = True
|
||||
# m[5,6] and v[5,6] refer to the same element, c[5,6] does not.
|
||||
|
||||
Note in this example that ``py::return_value_policy::reference_internal`` is
|
||||
used to tie the life of the MyClass object to the life of the returned arrays.
|
||||
|
||||
You may also return an ``Eigen::Ref``, ``Eigen::Map`` or other map-like Eigen
|
||||
object (for example, the return value of ``matrix.block()`` and related
|
||||
methods) that map into a dense Eigen type. When doing so, the default
|
||||
behaviour of pybind11 is to simply reference the returned data: you must take
|
||||
care to ensure that this data remains valid! You may ask pybind11 to
|
||||
explicitly *copy* such a return value by using the
|
||||
``py::return_value_policy::copy`` policy when binding the function. You may
|
||||
also use ``py::return_value_policy::reference_internal`` or a
|
||||
``py::keep_alive`` to ensure the data stays valid as long as the returned numpy
|
||||
array does.
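As a sketch of that last point, reusing the ``MyClass`` example from above (the ``top_left`` method name is illustrative), a block view into a member matrix can be exposed without copying while keeping the owning object alive:

.. code-block:: cpp

    py::class_<MyClass>(m, "MyClass")
        .def(py::init<>())
        // Returns a view into big_mat (no copy); reference_internal ties the
        // MyClass instance's lifetime to the returned numpy array.
        .def("top_left", [](MyClass &self) {
            return self.getMatrix().topLeftCorner(100, 100);
        }, py::return_value_policy::reference_internal);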
|
||||
|
||||
When returning such a reference or map, pybind11 additionally respects the
|
||||
readonly-status of the returned value, marking the numpy array as non-writeable
|
||||
if the reference or map was itself read-only.
|
||||
|
||||
.. note::
|
||||
|
||||
Sparse types are always copied when returned.
|
||||
|
||||
.. _storage_orders:
|
||||
|
||||
Storage orders
|
||||
==============
|
||||
|
||||
Passing arguments via ``Eigen::Ref`` has some limitations that you must be
|
||||
aware of in order to effectively pass matrices by reference. First and
|
||||
foremost is that the default ``Eigen::Ref<MatrixType>`` class requires
|
||||
contiguous storage along columns (for column-major types, the default in Eigen)
|
||||
or rows if ``MatrixType`` is specifically an ``Eigen::RowMajor`` storage type.
|
||||
The former, Eigen's default, is incompatible with ``numpy``'s default row-major
|
||||
storage, and so you will not be able to pass numpy arrays to Eigen by reference
|
||||
without making one of two changes.
|
||||
|
||||
(Note that this does not apply to vectors (or column or row matrices): for such
|
||||
types the "row-major" and "column-major" distinction is meaningless).
|
||||
|
||||
The first approach is to change the use of ``Eigen::Ref<MatrixType>`` to the
|
||||
more general ``Eigen::Ref<MatrixType, 0, Eigen::Stride<Eigen::Dynamic,
|
||||
Eigen::Dynamic>>`` (or similar type with a fully dynamic stride type in the
|
||||
third template argument). Since this is a rather cumbersome type, pybind11
|
||||
provides a ``py::EigenDRef<MatrixType>`` type alias for your convenience (along
|
||||
with EigenDMap for the equivalent Map, and EigenDStride for just the stride
|
||||
type).
|
||||
|
||||
This type allows Eigen to map into any arbitrary storage order. This is not
|
||||
the default in Eigen for performance reasons: contiguous storage allows
|
||||
vectorization that cannot be done when storage is not known to be contiguous at
|
||||
compile time. The default ``Eigen::Ref`` stride type allows non-contiguous
|
||||
storage along the outer dimension (that is, the rows of a column-major matrix
|
||||
or columns of a row-major matrix), but not along the inner dimension.
|
||||
|
||||
This type, however, has the added benefit of also being able to map numpy array
|
||||
slices. For example, the following (contrived) example uses Eigen with a numpy
|
||||
slice to multiply by 2 all coefficients that are both on even rows (0, 2, 4,
|
||||
...) and in columns 2, 5, or 8:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
m.def("scale", [](py::EigenDRef<Eigen::MatrixXd> m, double c) { m *= c; });
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
# myarray = np.array(...)
scale(myarray[0::2, 2:9:3], 2.0)
|
||||
|
||||
The second approach to avoid copying is more intrusive: rearranging the
|
||||
underlying data types to not run into the non-contiguous storage problem in the
|
||||
first place. In particular, that means using matrices with ``Eigen::RowMajor``
|
||||
storage, where appropriate, such as:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
using RowMatrixXd = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;
|
||||
// Use RowMatrixXd instead of MatrixXd
|
||||
|
||||
Now bound functions accepting ``Eigen::Ref<RowMatrixXd>`` arguments will be
|
||||
callable with numpy's (default) arrays without involving a copy.
|
||||
|
||||
You can, alternatively, change the storage order that numpy arrays use by
|
||||
adding the ``order='F'`` option when creating an array:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
myarray = np.array(source, order='F')
|
||||
|
||||
Such an object will be passable to a bound function accepting an
|
||||
``Eigen::Ref<MatrixXd>`` (or similar column-major Eigen type).
|
||||
|
||||
One major caveat with this approach, however, is that it is not entirely as
|
||||
easy as simply flipping all Eigen or numpy usage from one to the other: some
|
||||
operations may alter the storage order of a numpy array. For example, ``a2 =
|
||||
array.transpose()`` results in ``a2`` being a view of ``array`` that references
|
||||
the same data, but in the opposite storage order!
|
||||
|
||||
While this approach allows fully optimized vectorized calculations in Eigen, it
|
||||
cannot be used with array slices, unlike the first approach.
|
||||
|
||||
When *returning* a matrix to Python (either a regular matrix, a reference via
|
||||
``Eigen::Ref<>``, or a map/block into a matrix), no special storage
|
||||
consideration is required: the created numpy array will have the required
|
||||
stride that allows numpy to properly interpret the array, whatever its storage
|
||||
order.
|
||||
|
||||
Failing rather than copying
|
||||
===========================
|
||||
|
||||
The default behaviour when binding ``Eigen::Ref<const MatrixType>`` Eigen
|
||||
references is to copy matrix values when passed a numpy array that does not
|
||||
conform to the element type of ``MatrixType`` or does not have a compatible
|
||||
stride layout. If you want to explicitly avoid copying in such a case, you
|
||||
should bind arguments using the ``py::arg().noconvert()`` annotation (as
|
||||
described in the :ref:`nonconverting_arguments` documentation).
|
||||
|
||||
The following example shows an example of arguments that don't allow data
|
||||
copying to take place:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// The method and function to be bound:
|
||||
class MyClass {
|
||||
// ...
|
||||
double some_method(const Eigen::Ref<const MatrixXd> &matrix) { /* ... */ }
|
||||
};
|
||||
float some_function(const Eigen::Ref<const MatrixXf> &big,
|
||||
const Eigen::Ref<const MatrixXf> &small) {
|
||||
// ...
|
||||
}
|
||||
|
||||
// The associated binding code:
|
||||
using namespace pybind11::literals; // for "arg"_a
|
||||
py::class_<MyClass>(m, "MyClass")
|
||||
// ... other class definitions
|
||||
.def("some_method", &MyClass::some_method, py::arg().noconvert());
|
||||
|
||||
m.def("some_function", &some_function,
|
||||
"big"_a.noconvert(), // <- Don't allow copying for this arg
|
||||
"small"_a // <- This one can be copied if needed
|
||||
);
|
||||
|
||||
With the above binding code, attempting to call the ``some_method(m)``
|
||||
method on a ``MyClass`` object, or attempting to call ``some_function(m, m2)``
|
||||
will raise a ``RuntimeError`` rather than making a temporary copy of the array.
|
||||
It will, however, allow the ``m2`` argument to be copied into a temporary if
|
||||
necessary.
|
||||
|
||||
Note that explicitly specifying ``.noconvert()`` is not required for *mutable*
|
||||
Eigen references (e.g. ``Eigen::Ref<MatrixXd>`` without ``const`` on the
|
||||
``MatrixXd``): mutable references will never be called with a temporary copy.
|
||||
|
||||
Vectors versus column/row matrices
|
||||
==================================
|
||||
|
||||
Eigen and numpy have fundamentally different notions of a vector. In Eigen, a
|
||||
vector is simply a matrix with the number of columns or rows set to 1 at
|
||||
compile time (for a column vector or row vector, respectively). Numpy, in
|
||||
contrast, has comparable 2-dimensional 1xN and Nx1 arrays, but *also* has
|
||||
1-dimensional arrays of size N.
|
||||
|
||||
When passing a 2-dimensional 1xN or Nx1 array to Eigen, the Eigen type must
|
||||
have matching dimensions: That is, you cannot pass a 2-dimensional Nx1 numpy
|
||||
array to an Eigen value expecting a row vector, or a 1xN numpy array as a
|
||||
column vector argument.
|
||||
|
||||
On the other hand, pybind11 allows you to pass 1-dimensional arrays of length N
|
||||
as Eigen parameters. If the Eigen type can hold a column vector of length N it
|
||||
will be passed as such a column vector. If not, but the Eigen type constraints
|
||||
will accept a row vector, it will be passed as a row vector. (The column
|
||||
vector takes precedence when both are supported, for example, when passing a
|
||||
1D numpy array to a MatrixXd argument). Note that the type need not be
|
||||
explicitly a vector: it is permitted to pass a 1D numpy array of size 5 to an
|
||||
Eigen ``Matrix<double, Dynamic, 5>``: you would end up with a 1x5 Eigen matrix.
|
||||
Passing the same to an ``Eigen::MatrixXd`` would result in a 5x1 Eigen matrix.
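As a concrete sketch of the column-vector case (the function name is illustrative, and a module handle ``m`` is assumed):

.. code-block:: cpp

    // Accepts a 1D numpy array of any length, or an Nx1 2D array;
    // the data is copied into an Eigen column vector.
    m.def("vector_norm", [](const Eigen::VectorXd &v) { return v.norm(); });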
|
||||
|
||||
When returning an Eigen vector to numpy, the conversion is ambiguous: a row
|
||||
vector of length 4 could be returned as either a 1D array of length 4, or as a
|
||||
2D array of size 1x4. When encountering such a situation, pybind11 compromises
|
||||
by considering the returned Eigen type: if it is a compile-time vector--that
|
||||
is, the type has either the number of rows or columns set to 1 at compile
|
||||
time--pybind11 converts to a 1D numpy array when returning the value. For
|
||||
instances that are a vector only at run-time (e.g. ``MatrixXd``,
|
||||
``Matrix<float, Dynamic, 4>``), pybind11 returns the vector as a 2D array to
|
||||
numpy. If this isn't what you want, you can use ``array.reshape(...)`` to get
|
||||
a view of the same data in the desired dimensions.
|
||||
|
||||
.. seealso::
|
||||
|
||||
The file :file:`tests/test_eigen.cpp` contains a complete example that
|
||||
shows how to pass Eigen sparse and dense data types in more detail.
|
||||
|
|
@ -0,0 +1,109 @@
|
|||
Functional
|
||||
##########
|
||||
|
||||
The following features must be enabled by including :file:`pybind11/functional.h`.
|
||||
|
||||
|
||||
Callbacks and passing anonymous functions
|
||||
=========================================
|
||||
|
||||
The C++11 standard brought lambda functions and the generic polymorphic
|
||||
function wrapper ``std::function<>`` to the C++ programming language, which
|
||||
enable powerful new ways of working with functions. Lambda functions come in
|
||||
two flavors: stateless lambda functions resemble classic function pointers that
|
||||
link to an anonymous piece of code, while stateful lambda functions
|
||||
additionally depend on captured variables that are stored in an anonymous
|
||||
*lambda closure object*.
|
||||
|
||||
Here is a simple example of a C++ function that takes an arbitrary function
|
||||
(stateful or stateless) with signature ``int -> int`` as an argument and runs
|
||||
it with the value 10.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
int func_arg(const std::function<int(int)> &f) {
|
||||
return f(10);
|
||||
}
|
||||
|
||||
The example below is more involved: it takes a function of signature ``int -> int``
|
||||
and returns another function of the same kind. The return value is a stateful
|
||||
lambda function, which stores the value ``f`` in the capture object and adds 1 to
|
||||
its return value upon execution.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
std::function<int(int)> func_ret(const std::function<int(int)> &f) {
|
||||
return [f](int i) {
|
||||
return f(i) + 1;
|
||||
};
|
||||
}
|
||||
|
||||
This example demonstrates using Python named parameters in C++ callbacks, which
|
||||
requires using ``py::cpp_function`` as a wrapper. Usage is similar to defining
|
||||
methods of classes:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::cpp_function func_cpp() {
|
||||
return py::cpp_function([](int i) { return i+1; },
|
||||
py::arg("number"));
|
||||
}
|
||||
|
||||
After including the extra header file :file:`pybind11/functional.h`, it is almost
|
||||
trivial to generate binding code for all of these functions.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
#include <pybind11/functional.h>
|
||||
|
||||
PYBIND11_MODULE(example, m) {
|
||||
m.def("func_arg", &func_arg);
|
||||
m.def("func_ret", &func_ret);
|
||||
m.def("func_cpp", &func_cpp);
|
||||
}
|
||||
|
||||
The following interactive session shows how to call them from Python.
|
||||
|
||||
.. code-block:: pycon
|
||||
|
||||
$ python
|
||||
>>> import example
|
||||
>>> def square(i):
|
||||
... return i * i
|
||||
...
|
||||
>>> example.func_arg(square)
|
||||
100L
|
||||
>>> square_plus_1 = example.func_ret(square)
|
||||
>>> square_plus_1(4)
|
||||
17L
|
||||
>>> plus_1 = example.func_cpp()
|
||||
>>> plus_1(number=43)
|
||||
44L
|
||||
|
||||
.. warning::
|
||||
|
||||
Keep in mind that passing a function from C++ to Python (or vice versa)
|
||||
will instantiate a piece of wrapper code that translates function
|
||||
invocations between the two languages. Naturally, this translation
|
||||
increases the computational cost of each function call somewhat. A
|
||||
problematic situation can arise when a function is copied back and forth
|
||||
between Python and C++ many times in a row, in which case the underlying
|
||||
wrappers will accumulate correspondingly. The resulting long sequence of
|
||||
C++ -> Python -> C++ -> ... roundtrips can significantly decrease
|
||||
performance.
|
||||
|
||||
There is one exception: pybind11 detects cases where a stateless function
|
||||
(i.e. a function pointer or a lambda function without captured variables)
|
||||
is passed as an argument to another C++ function exposed in Python. In this
|
||||
case, there is no overhead. Pybind11 will extract the underlying C++
|
||||
function pointer from the wrapped function to sidestep a potential C++ ->
|
||||
Python -> C++ roundtrip. This is demonstrated in :file:`tests/test_callbacks.cpp`.
|
||||
|
||||
.. note::
|
||||
|
||||
This functionality is very useful when generating bindings for callbacks in
|
||||
C++ libraries (e.g. GUI libraries, asynchronous networking libraries, etc.).
|
||||
|
||||
The file :file:`tests/test_callbacks.cpp` contains a complete example
|
||||
that demonstrates how to work with callbacks and anonymous functions in
|
||||
more detail.
|
||||
|
|
@ -0,0 +1,42 @@
|
|||
Type conversions
|
||||
################
|
||||
|
||||
Apart from enabling cross-language function calls, a fundamental problem
|
||||
that a binding tool like pybind11 must address is to provide access to
|
||||
native Python types in C++ and vice versa. There are three fundamentally
|
||||
different ways to do this—which approach is preferable for a particular type
|
||||
depends on the situation at hand.
|
||||
|
||||
1. Use a native C++ type everywhere. In this case, the type must be wrapped
|
||||
using pybind11-generated bindings so that Python can interact with it.
|
||||
|
||||
2. Use a native Python type everywhere. It will need to be wrapped so that
|
||||
C++ functions can interact with it.
|
||||
|
||||
3. Use a native C++ type on the C++ side and a native Python type on the
|
||||
Python side. pybind11 refers to this as a *type conversion*.
|
||||
|
||||
Type conversions are the most "natural" option in the sense that native
|
||||
(non-wrapped) types are used everywhere. The main downside is that a copy
|
||||
of the data must be made on every Python ↔ C++ transition: this is
|
||||
needed since the C++ and Python versions of the same type generally won't
|
||||
have the same memory layout.
|
||||
|
||||
pybind11 can perform many kinds of conversions automatically. An overview
|
||||
is provided in the table ":ref:`conversion_table`".
|
||||
|
||||
The following subsections discuss the differences between these options in more
|
||||
detail. The main focus in this section is on type conversions, which represent
|
||||
the last case of the above list.
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
overview
|
||||
strings
|
||||
stl
|
||||
functional
|
||||
chrono
|
||||
eigen
|
||||
custom
|
||||
|
||||
|
|
@ -0,0 +1,165 @@
|
|||
Overview
|
||||
########
|
||||
|
||||
.. rubric:: 1. Native type in C++, wrapper in Python
|
||||
|
||||
Exposing a custom C++ type using :class:`py::class_` was covered in detail
|
||||
in the :doc:`/classes` section. There, the underlying data structure is
|
||||
always the original C++ class while the :class:`py::class_` wrapper provides
|
||||
a Python interface. Internally, when an object like this is sent from C++ to
|
||||
Python, pybind11 will just add the outer wrapper layer over the native C++
|
||||
object. Getting it back from Python is just a matter of peeling off the
|
||||
wrapper.
|
||||
|
||||
.. rubric:: 2. Wrapper in C++, native type in Python
|
||||
|
||||
This is the exact opposite situation. Now, we have a type which is native to
|
||||
Python, like a ``tuple`` or a ``list``. One way to get this data into C++ is
|
||||
with the :class:`py::object` family of wrappers. These are explained in more
|
||||
detail in the :doc:`/advanced/pycpp/object` section. We'll just give a quick
|
||||
example here:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
void print_list(py::list my_list) {
|
||||
for (auto item : my_list)
|
||||
std::cout << item << " ";
|
||||
}
|
||||
|
||||
.. code-block:: pycon
|
||||
|
||||
>>> print_list([1, 2, 3])
|
||||
1 2 3
|
||||
|
||||
The Python ``list`` is not converted in any way -- it's just wrapped in a C++
|
||||
:class:`py::list` class. At its core it's still a Python object. Copying a
|
||||
:class:`py::list` will do the usual reference-counting like in Python.
|
||||
Returning the object to Python will just remove the thin wrapper.
|
||||
|
||||
.. rubric:: 3. Converting between native C++ and Python types
|
||||
|
||||
In the previous two cases we had a native type in one language and a wrapper in
|
||||
the other. Now, we have native types on both sides and we convert between them.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
void print_vector(const std::vector<int> &v) {
|
||||
for (auto item : v)
|
||||
std::cout << item << " ";
|
||||
}
|
||||
|
||||
.. code-block:: pycon
|
||||
|
||||
>>> print_vector([1, 2, 3])
|
||||
1 2 3
|
||||
|
||||
In this case, pybind11 will construct a new ``std::vector<int>`` and copy each
|
||||
element from the Python ``list``. The newly constructed object will be passed
|
||||
to ``print_vector``. The same thing happens in the other direction: a new
|
||||
``list`` is made to match the value returned from C++.
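The binding code that enables this conversion is a one-liner plus an extra include; a minimal sketch (the module name is illustrative):

.. code-block:: cpp

    #include <pybind11/pybind11.h>
    #include <pybind11/stl.h>  // enables list <-> std::vector conversions

    PYBIND11_MODULE(example, m) {
        m.def("print_vector", &print_vector);
    }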
|
||||
|
||||
Lots of these conversions are supported out of the box, as shown in the table
|
||||
below. They are very convenient, but keep in mind that these conversions are
|
||||
fundamentally based on copying data. This is perfectly fine for small immutable
|
||||
types but it may become quite expensive for large data structures. This can be
|
||||
avoided by overriding the automatic conversion with a custom wrapper (i.e. the
|
||||
above-mentioned approach 1). This requires some manual effort and more details
|
||||
are available in the :ref:`opaque` section.
|
||||
|
||||
.. _conversion_table:
|
||||
|
||||
List of all builtin conversions
|
||||
-------------------------------
|
||||
|
||||
The following basic data types are supported out of the box (some may require
|
||||
an additional extension header to be included). To pass other data structures
|
||||
as arguments and return values, refer to the section on binding :ref:`classes`.
|
||||
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| Data type | Description | Header file |
|
||||
+====================================+===========================+===============================+
|
||||
| ``int8_t``, ``uint8_t`` | 8-bit integers | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``int16_t``, ``uint16_t`` | 16-bit integers | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``int32_t``, ``uint32_t`` | 32-bit integers | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``int64_t``, ``uint64_t`` | 64-bit integers | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``ssize_t``, ``size_t`` | Platform-dependent size | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``float``, ``double`` | Floating point types | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``bool`` | Two-state Boolean type | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``char`` | Character literal | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``char16_t`` | UTF-16 character literal | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``char32_t`` | UTF-32 character literal | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``wchar_t`` | Wide character literal | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``const char *`` | UTF-8 string literal | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``const char16_t *`` | UTF-16 string literal | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``const char32_t *`` | UTF-32 string literal | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``const wchar_t *`` | Wide string literal | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::string`` | STL dynamic UTF-8 string | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::u16string`` | STL dynamic UTF-16 string | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::u32string`` | STL dynamic UTF-32 string | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::wstring`` | STL dynamic wide string | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::string_view``, | STL C++17 string views | :file:`pybind11/pybind11.h` |
|
||||
| ``std::u16string_view``, etc. | | |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::pair<T1, T2>`` | Pair of two custom types | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::tuple<...>`` | Arbitrary tuple of types | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::reference_wrapper<...>`` | Reference type wrapper | :file:`pybind11/pybind11.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::complex<T>`` | Complex numbers | :file:`pybind11/complex.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::array<T, Size>`` | STL static array | :file:`pybind11/stl.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::vector<T>`` | STL dynamic array | :file:`pybind11/stl.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::deque<T>`` | STL double-ended queue | :file:`pybind11/stl.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::valarray<T>`` | STL value array | :file:`pybind11/stl.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::list<T>`` | STL linked list | :file:`pybind11/stl.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::map<T1, T2>`` | STL ordered map | :file:`pybind11/stl.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::unordered_map<T1, T2>`` | STL unordered map | :file:`pybind11/stl.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::set<T>`` | STL ordered set | :file:`pybind11/stl.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::unordered_set<T>`` | STL unordered set | :file:`pybind11/stl.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::optional<T>`` | STL optional type (C++17) | :file:`pybind11/stl.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::experimental::optional<T>`` | STL optional type (exp.) | :file:`pybind11/stl.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::variant<...>`` | Type-safe union (C++17) | :file:`pybind11/stl.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::function<...>`` | STL polymorphic function | :file:`pybind11/functional.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::chrono::duration<...>`` | STL time duration | :file:`pybind11/chrono.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``std::chrono::time_point<...>`` | STL date/time | :file:`pybind11/chrono.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``Eigen::Matrix<...>`` | Eigen: dense matrix | :file:`pybind11/eigen.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``Eigen::Map<...>`` | Eigen: mapped memory | :file:`pybind11/eigen.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
| ``Eigen::SparseMatrix<...>`` | Eigen: sparse matrix | :file:`pybind11/eigen.h` |
|
||||
+------------------------------------+---------------------------+-------------------------------+
|
||||
|
|
@ -0,0 +1,240 @@
|
|||
STL containers
|
||||
##############
|
||||
|
||||
Automatic conversion
|
||||
====================
|
||||
|
||||
When including the additional header file :file:`pybind11/stl.h`, conversions
|
||||
between ``std::vector<>``/``std::deque<>``/``std::list<>``/``std::array<>``,
|
||||
``std::set<>``/``std::unordered_set<>``, and
|
||||
``std::map<>``/``std::unordered_map<>`` and the Python ``list``, ``set`` and
|
||||
``dict`` data structures are automatically enabled. The types ``std::pair<>``
|
||||
and ``std::tuple<>`` are already supported out of the box with just the core
|
||||
:file:`pybind11/pybind11.h` header.
|
||||
|
||||
The major downside of these implicit conversions is that containers must be
|
||||
converted (i.e. copied) on every Python->C++ and C++->Python transition, which
|
||||
can have implications on the program semantics and performance. Please read the
|
||||
next sections for more details and alternative approaches that avoid this.
|
||||
|
||||
.. note::
|
||||
|
||||
Arbitrary nesting of any of these types is possible.
|
||||
|
||||
.. seealso::
|
||||
|
||||
The file :file:`tests/test_stl.cpp` contains a complete
|
||||
example that demonstrates how to pass STL data types in more detail.
|
||||
|
||||
.. _cpp17_container_casters:
|
||||
|
||||
C++17 library containers
|
||||
========================
|
||||
|
||||
The :file:`pybind11/stl.h` header also includes support for ``std::optional<>``
|
||||
and ``std::variant<>``. These require a C++17 compiler and standard library.
|
||||
In C++14 mode, ``std::experimental::optional<>`` is supported if available.
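For example, a bound function can return ``std::optional<T>``; the Python caller then
receives either the value or ``None``. This is a minimal sketch with illustrative names,
assuming C++17 and :file:`pybind11/stl.h`:

.. code-block:: cpp

    #include <pybind11/stl.h>
    #include <algorithm>
    #include <cstddef>
    #include <optional>
    #include <vector>

    // Returns the position of `needle` in `haystack`, or None if it is absent.
    m.def("find_index", [](const std::vector<int> &haystack, int needle)
                            -> std::optional<std::size_t> {
        auto it = std::find(haystack.begin(), haystack.end(), needle);
        if (it == haystack.end())
            return std::nullopt;
        return static_cast<std::size_t>(it - haystack.begin());
    });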
|
||||
|
||||
Various versions of these containers also exist for C++11 (e.g. in Boost).
|
||||
pybind11 provides an easy way to specialize the ``type_caster`` for such
|
||||
types:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// `boost::optional` as an example -- can be any `std::optional`-like container
|
||||
namespace pybind11 { namespace detail {
|
||||
template <typename T>
|
||||
struct type_caster<boost::optional<T>> : optional_caster<boost::optional<T>> {};
|
||||
}}
|
||||
|
||||
The above should be placed in a header file and included in all translation units
|
||||
where automatic conversion is needed. Similarly, a specialization can be provided
|
||||
for custom variant types:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// `boost::variant` as an example -- can be any `std::variant`-like container
|
||||
namespace pybind11 { namespace detail {
|
||||
template <typename... Ts>
|
||||
struct type_caster<boost::variant<Ts...>> : variant_caster<boost::variant<Ts...>> {};
|
||||
|
||||
// Specifies the function used to visit the variant -- `apply_visitor` instead of `visit`
|
||||
template <>
|
||||
struct visit_helper<boost::variant> {
|
||||
template <typename... Args>
|
||||
static auto call(Args &&...args) -> decltype(boost::apply_visitor(args...)) {
|
||||
return boost::apply_visitor(args...);
|
||||
}
|
||||
};
|
||||
}} // namespace pybind11::detail
|
||||
|
||||
The ``visit_helper`` specialization is not required if your ``name::variant`` provides
|
||||
a ``name::visit()`` function. For any other function name, the specialization must be
|
||||
included to tell pybind11 how to visit the variant.
|
||||
|
||||
.. note::
|
||||
|
||||
pybind11 only supports the modern implementation of ``boost::variant``
|
||||
which makes use of variadic templates. This requires Boost 1.56 or newer.
|
||||
Additionally, on Windows, MSVC 2017 is required because ``boost::variant``
|
||||
falls back to the old non-variadic implementation on MSVC 2015.
|
||||
|
||||
.. _opaque:
|
||||
|
||||
Making opaque types
|
||||
===================
|
||||
|
||||
pybind11 heavily relies on a template matching mechanism to convert parameters
|
||||
and return values that are constructed from STL data types such as vectors,
|
||||
linked lists, hash tables, etc. This even works in a recursive manner, for
|
||||
instance to deal with lists of hash maps of pairs of elementary and custom
|
||||
types, etc.
|
||||
|
||||
However, a fundamental limitation of this approach is that internal conversions
|
||||
between Python and C++ types involve a copy operation that prevents
|
||||
pass-by-reference semantics. What does this mean?
|
||||
|
||||
Suppose we bind the following function
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
void append_1(std::vector<int> &v) {
|
||||
v.push_back(1);
|
||||
}
|
||||
|
||||
and call it from Python, the following happens:
|
||||
|
||||
.. code-block:: pycon
|
||||
|
||||
>>> v = [5, 6]
|
||||
>>> append_1(v)
|
||||
>>> print(v)
|
||||
[5, 6]
|
||||
|
||||
As you can see, when passing STL data structures by reference, modifications
|
||||
are not propagated back to the Python side. A similar situation arises when
|
||||
exposing STL data structures using the ``def_readwrite`` or ``def_readonly``
|
||||
functions:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
/* ... definition ... */
|
||||
|
||||
class MyClass {
|
||||
std::vector<int> contents;
|
||||
};
|
||||
|
||||
/* ... binding code ... */
|
||||
|
||||
py::class_<MyClass>(m, "MyClass")
|
||||
.def(py::init<>())
|
||||
.def_readwrite("contents", &MyClass::contents);
|
||||
|
||||
In this case, properties can be read and written in their entirety. However, an
|
||||
``append`` operation involving such a list type has no effect:
|
||||
|
||||
.. code-block:: pycon
|
||||
|
||||
>>> m = MyClass()
|
||||
>>> m.contents = [5, 6]
|
||||
>>> print(m.contents)
|
||||
[5, 6]
|
||||
>>> m.contents.append(7)
|
||||
>>> print(m.contents)
|
||||
[5, 6]
|
||||
|
||||
Finally, the involved copy operations can be costly when dealing with very
|
||||
large lists. To deal with all of the above situations, pybind11 provides a
|
||||
macro named ``PYBIND11_MAKE_OPAQUE(T)`` that disables the template-based
|
||||
conversion machinery of types, thus rendering them *opaque*. The contents of
|
||||
opaque objects are never inspected or extracted, hence they *can* be passed by
|
||||
reference. For instance, to turn ``std::vector<int>`` into an opaque type, add
|
||||
the declaration
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
PYBIND11_MAKE_OPAQUE(std::vector<int>);
|
||||
|
||||
before any binding code (e.g. invocations to ``class_::def()``, etc.). This
|
||||
macro must be specified at the top level (and outside of any namespaces), since
|
||||
it defines a template specialization. If your binding code consists of
|
||||
multiple compilation units, it must be present in every file (typically via a
|
||||
common header) preceding any usage of ``std::vector<int>``. Opaque types must
|
||||
also have a corresponding ``class_`` declaration to associate them with a name
|
||||
in Python, and to define a set of available operations, e.g.:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::class_<std::vector<int>>(m, "IntVector")
|
||||
.def(py::init<>())
|
||||
.def("clear", &std::vector<int>::clear)
|
||||
.def("pop_back", &std::vector<int>::pop_back)
|
||||
.def("__len__", [](const std::vector<int> &v) { return v.size(); })
|
||||
.def("__iter__", [](std::vector<int> &v) {
|
||||
return py::make_iterator(v.begin(), v.end());
|
||||
}, py::keep_alive<0, 1>()) /* Keep vector alive while iterator is used */
|
||||
// ....
|
||||
|
||||
.. seealso::
|
||||
|
||||
The file :file:`tests/test_opaque_types.cpp` contains a complete
|
||||
example that demonstrates how to create and expose opaque types using
|
||||
pybind11 in more detail.
|
||||
|
||||
.. _stl_bind:
|
||||
|
||||
Binding STL containers
|
||||
======================
|
||||
|
||||
The ability to expose STL containers as native Python objects is a fairly
|
||||
common request, hence pybind11 also provides an optional header file named
|
||||
:file:`pybind11/stl_bind.h` that does exactly this. The mapped containers try
|
||||
to match the behavior of their native Python counterparts as much as possible.
|
||||
|
||||
The following example showcases usage of :file:`pybind11/stl_bind.h`:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// Don't forget this
|
||||
#include <pybind11/stl_bind.h>
|
||||
|
||||
PYBIND11_MAKE_OPAQUE(std::vector<int>);
|
||||
PYBIND11_MAKE_OPAQUE(std::map<std::string, double>);
|
||||
|
||||
// ...
|
||||
|
||||
// later in binding code:
|
||||
py::bind_vector<std::vector<int>>(m, "VectorInt");
|
||||
py::bind_map<std::map<std::string, double>>(m, "MapStringDouble");
|
||||
|
||||
When binding STL containers pybind11 considers the types of the container's
|
||||
elements to decide whether the container should be confined to the local module
|
||||
(via the :ref:`module_local` feature). If the container element types are
|
||||
anything other than already-bound custom types bound without
|
||||
``py::module_local()``, the container binding will have ``py::module_local()``
|
||||
applied. This includes converting types such as numeric types, strings, Eigen
|
||||
types; and types that have not yet been bound at the time of the STL container
|
||||
binding. This module-local binding is designed to avoid potential conflicts
|
||||
between module bindings (for example, from two separate modules each attempting
|
||||
to bind ``std::vector<int>`` as a Python type).
|
||||
|
||||
It is possible to override this behavior to force a definition to be either
|
||||
module-local or global. To do so, you can pass the attributes
|
||||
``py::module_local()`` (to make the binding module-local) or
|
||||
``py::module_local(false)`` (to make the binding global) into the
|
||||
``py::bind_vector`` or ``py::bind_map`` arguments:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::bind_vector<std::vector<int>>(m, "VectorInt", py::module_local(false));
|
||||
|
||||
Note, however, that such a global binding would make it impossible to load this
|
||||
module at the same time as any other pybind module that also attempts to bind
|
||||
the same container type (``std::vector<int>`` in the above example).
|
||||
|
||||
See :ref:`module_local` for more details on module-local bindings.
|
||||
|
||||
.. seealso::
|
||||
|
||||
The file :file:`tests/test_stl_binders.cpp` shows how to use the
|
||||
convenience STL container wrappers.
|
||||
|
|
@ -0,0 +1,305 @@
|
|||
Strings, bytes and Unicode conversions
|
||||
######################################
|
||||
|
||||
.. note::
|
||||
|
||||
This section discusses string handling in terms of Python 3 strings. For
|
||||
Python 2.7, replace all occurrences of ``str`` with ``unicode`` and
|
||||
``bytes`` with ``str``. Python 2.7 users may find it best to use ``from
|
||||
__future__ import unicode_literals`` to avoid unintentionally using ``str``
|
||||
instead of ``unicode``.
|
||||
|
||||
Passing Python strings to C++
|
||||
=============================
|
||||
|
||||
When a Python ``str`` is passed from Python to a C++ function that accepts
|
||||
``std::string`` or ``char *`` as arguments, pybind11 will encode the Python
|
||||
string to UTF-8. All Python ``str`` can be encoded in UTF-8, so this operation
|
||||
does not fail.
|
||||
|
||||
The C++ language is encoding agnostic. It is the responsibility of the
|
||||
programmer to track encodings. It's often easiest to simply `use UTF-8
|
||||
everywhere <http://utf8everywhere.org/>`_.
|
||||
|
||||
.. code-block:: c++
|
||||
|
||||
m.def("utf8_test",
|
||||
[](const std::string &s) {
|
||||
cout << "utf-8 is icing on the cake.\n";
|
||||
cout << s;
|
||||
}
|
||||
);
|
||||
m.def("utf8_charptr",
|
||||
[](const char *s) {
|
||||
cout << "My favorite food is\n";
|
||||
cout << s;
|
||||
}
|
||||
);
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
>>> utf8_test('🎂')
|
||||
utf-8 is icing on the cake.
|
||||
🎂
|
||||
|
||||
>>> utf8_charptr('🍕')
|
||||
My favorite food is
|
||||
🍕
|
||||
|
||||
.. note::
|
||||
|
||||
Some terminal emulators do not support UTF-8 or emoji fonts and may not
|
||||
display the example above correctly.
|
||||
|
||||
The results are the same whether the C++ function accepts arguments by value or
|
||||
reference, and whether or not ``const`` is used.
|
||||
|
||||
Passing bytes to C++
|
||||
--------------------
|
||||
|
||||
A Python ``bytes`` object will be passed to C++ functions that accept
|
||||
``std::string`` or ``char*`` *without* conversion. On Python 3, in order to
|
||||
make a function *only* accept ``bytes`` (and not ``str``), declare it as taking
|
||||
a ``py::bytes`` argument.
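A small sketch (the function name is illustrative):

.. code-block:: cpp

    // Accepts only Python `bytes`; calling it with a `str` raises a TypeError.
    m.def("count_bytes", [](const py::bytes &b) {
        return py::len(b);  // length of the raw byte string, no decoding involved
    });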
|
||||
|
||||
|
||||
Returning C++ strings to Python
|
||||
===============================
|
||||
|
||||
When a C++ function returns a ``std::string`` or ``char*`` to a Python caller,
|
||||
**pybind11 will assume that the string is valid UTF-8** and will decode it to a
|
||||
native Python ``str``, using the same API as Python uses to perform
|
||||
``bytes.decode('utf-8')``. If this implicit conversion fails, pybind11 will
|
||||
raise a ``UnicodeDecodeError``.
|
||||
|
||||
.. code-block:: c++
|
||||
|
||||
m.def("std_string_return",
|
||||
[]() {
|
||||
return std::string("This string needs to be UTF-8 encoded");
|
||||
}
|
||||
);
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
>>> isinstance(example.std_string_return(), str)
|
||||
True
|
||||
|
||||
|
||||
Because UTF-8 is inclusive of pure ASCII, there is never any issue with
|
||||
returning a pure ASCII string to Python. If there is any possibility that the
|
||||
string is not pure ASCII, it is necessary to ensure the encoding is valid
|
||||
UTF-8.
|
||||
|
||||
.. warning::
|
||||
|
||||
Implicit conversion assumes that a returned ``char *`` is null-terminated.
|
||||
If there is no null terminator a buffer overrun will occur.
|
||||
|
||||
Explicit conversions
|
||||
--------------------
|
||||
|
||||
If some C++ code constructs a ``std::string`` that is not a UTF-8 string, one
|
||||
can perform an explicit conversion and return a ``py::str`` object. Explicit
|
||||
conversion has the same overhead as implicit conversion.
|
||||
|
||||
.. code-block:: c++
|
||||
|
||||
// This uses the Python C API to convert Latin-1 to Unicode
|
||||
m.def("str_output",
|
||||
[]() {
|
||||
std::string s = "Send your r\xe9sum\xe9 to Alice in HR"; // Latin-1
|
||||
py::str py_s = PyUnicode_DecodeLatin1(s.data(), s.length());
|
||||
return py_s;
|
||||
}
|
||||
);
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
>>> str_output()
|
||||
'Send your résumé to Alice in HR'
|
||||
|
||||
The `Python C API
|
||||
<https://docs.python.org/3/c-api/unicode.html#built-in-codecs>`_ provides
|
||||
several built-in codecs.
|
||||
|
||||
|
||||
One could also use a third party encoding library such as libiconv to transcode
|
||||
to UTF-8.
|
||||
|
||||
Return C++ strings without conversion
|
||||
-------------------------------------
|
||||
|
||||
If the data in a C++ ``std::string`` does not represent text and should be
|
||||
returned to Python as ``bytes``, then one can return the data as a
|
||||
``py::bytes`` object.
|
||||
|
||||
.. code-block:: c++
|
||||
|
||||
m.def("return_bytes",
|
||||
[]() {
|
||||
std::string s("\xba\xd0\xba\xd0"); // Not valid UTF-8
|
||||
return py::bytes(s); // Return the data without transcoding
|
||||
}
|
||||
);
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
>>> example.return_bytes()
|
||||
b'\xba\xd0\xba\xd0'
|
||||
|
||||
|
||||
Note the asymmetry: pybind11 will convert ``bytes`` to ``std::string`` without
|
||||
encoding, but cannot convert ``std::string`` back to ``bytes`` implicitly.
|
||||
|
||||
.. code-block:: c++
|
||||
|
||||
m.def("asymmetry",
|
||||
[](std::string s) { // Accepts str or bytes from Python
|
||||
return s; // Looks harmless, but implicitly converts to str
|
||||
}
|
||||
);
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
>>> isinstance(example.asymmetry(b"have some bytes"), str)
|
||||
True
|
||||
|
||||
>>> example.asymmetry(b"\xba\xd0\xba\xd0") # invalid utf-8 as bytes
|
||||
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xba in position 0: invalid start byte
|
||||
|
||||
|
||||
Wide character strings
|
||||
======================
|
||||
|
||||
When a Python ``str`` is passed to a C++ function expecting ``std::wstring``,
|
||||
``wchar_t*``, ``std::u16string`` or ``std::u32string``, the ``str`` will be
|
||||
encoded to UTF-16 or UTF-32 depending on how the C++ compiler implements each
|
||||
type, in the platform's native endianness. When strings of these types are
|
||||
returned, they are assumed to contain valid UTF-16 or UTF-32, and will be
|
||||
decoded to Python ``str``.
|
||||
|
||||
.. code-block:: c++
|
||||
|
||||
#define UNICODE
|
||||
#include <windows.h>
|
||||
|
||||
m.def("set_window_text",
|
||||
[](HWND hwnd, std::wstring s) {
|
||||
// Call SetWindowText with null-terminated UTF-16 string
|
||||
::SetWindowText(hwnd, s.c_str());
|
||||
}
|
||||
);
|
||||
m.def("get_window_text",
|
||||
[](HWND hwnd) {
|
||||
const int buffer_size = ::GetWindowTextLength(hwnd) + 1;
|
||||
auto buffer = std::make_unique< wchar_t[] >(buffer_size);
|
||||
|
||||
::GetWindowText(hwnd, buffer.get(), buffer_size);
|
||||
|
||||
std::wstring text(buffer.get());
|
||||
|
||||
// wstring will be converted to Python str
|
||||
return text;
|
||||
}
|
||||
);
|
||||
|
||||
.. warning::
|
||||
|
||||
Wide character strings may not work as described on Python 2.7 or Python
|
||||
3.3 compiled with ``--enable-unicode=ucs2``.
|
||||
|
||||
Strings in multibyte encodings such as Shift-JIS must be transcoded to
|
||||
UTF-8/16/32 before being returned to Python.
|
||||
|
||||
|
||||
Character literals
|
||||
==================
|
||||
|
||||
C++ functions that accept character literals as input will receive the first
|
||||
character of a Python ``str`` as their input. If the string is longer than one
|
||||
Unicode character, trailing characters will be ignored.
|
||||
|
||||
When a character literal is returned from C++ (such as a ``char`` or a
|
||||
``wchar_t``), it will be converted to a ``str`` that represents the single
|
||||
character.
|
||||
|
||||
.. code-block:: c++
|
||||
|
||||
m.def("pass_char", [](char c) { return c; });
|
||||
m.def("pass_wchar", [](wchar_t w) { return w; });
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
>>> example.pass_char('A')
|
||||
'A'
|
||||
|
||||
While C++ will cast integers to character types (``char c = 0x65;``), pybind11
|
||||
does not convert Python integers to characters implicitly. The Python function
|
||||
``chr()`` can be used to convert integers to characters.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
>>> example.pass_char(0x65)
|
||||
TypeError
|
||||
|
||||
>>> example.pass_char(chr(0x65))
|
||||
'A'
|
||||
|
||||
If the desire is to work with an 8-bit integer, use ``int8_t`` or ``uint8_t``
|
||||
as the argument type.
|
||||
|
||||
Grapheme clusters
|
||||
-----------------
|
||||
|
||||
A single grapheme may be represented by two or more Unicode characters. For
|
||||
example 'é' is usually represented as U+00E9 but can also be expressed as the
|
||||
combining character sequence U+0065 U+0301 (that is, the letter 'e' followed by
|
||||
a combining acute accent). The combining character will be lost if the
|
||||
two-character sequence is passed as an argument, even though it renders as a
|
||||
single grapheme.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
>>> example.pass_wchar('é')
|
||||
'é'
|
||||
|
||||
>>> combining_e_acute = 'e' + '\u0301'
|
||||
|
||||
>>> combining_e_acute
|
||||
'é'
|
||||
|
||||
>>> combining_e_acute == 'é'
|
||||
False
|
||||
|
||||
>>> example.pass_wchar(combining_e_acute)
|
||||
'e'
|
||||
|
||||
Normalizing combining characters before passing the character literal to C++
|
||||
may resolve *some* of these issues:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
>>> example.pass_wchar(unicodedata.normalize('NFC', combining_e_acute))
|
||||
'é'
|
||||
|
||||
In some languages (Thai for example), there are `graphemes that cannot be
|
||||
expressed as a single Unicode code point
|
||||
<http://unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries>`_, so there is
|
||||
no way to capture them in a C++ character type.
|
||||
|
||||
|
||||
C++17 string views
|
||||
==================
|
||||
|
||||
C++17 string views are automatically supported when compiling in C++17 mode.
|
||||
They follow the same rules for encoding and decoding as the corresponding STL
|
||||
string type (for example, a ``std::u16string_view`` argument will be passed
|
||||
UTF-16-encoded data, and a returned ``std::string_view`` will be decoded as
|
||||
UTF-8).
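A minimal sketch, assuming a C++17 build and :file:`pybind11/stl.h` (the function name is
illustrative):

.. code-block:: cpp

    #include <pybind11/stl.h>  // contains the string_view caster
    #include <string>
    #include <string_view>

    // The view refers to a temporary UTF-8 buffer that only lives for the duration
    // of the call, so copy the data if it needs to outlive the function.
    m.def("first_word", [](std::string_view s) {
        return std::string(s.substr(0, s.find(' ')));
    });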
|
||||
|
||||
References
|
||||
==========
|
||||
|
||||
* `The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) <https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/>`_
|
||||
* `C++ - Using STL Strings at Win32 API Boundaries <https://msdn.microsoft.com/en-ca/magazine/mt238407.aspx>`_
|
||||
File diff suppressed because it is too large
|
|
@ -0,0 +1,261 @@
|
|||
.. _embedding:
|
||||
|
||||
Embedding the interpreter
|
||||
#########################
|
||||
|
||||
While pybind11 is mainly focused on extending Python using C++, it's also
|
||||
possible to do the reverse: embed the Python interpreter into a C++ program.
|
||||
All of the other documentation pages still apply here, so refer to them for
|
||||
general pybind11 usage. This section will cover a few extra things required
|
||||
for embedding.
|
||||
|
||||
Getting started
|
||||
===============
|
||||
|
||||
A basic executable with an embedded interpreter can be created with just a few
|
||||
lines of CMake and the ``pybind11::embed`` target, as shown below. For more
|
||||
information, see :doc:`/compiling`.
|
||||
|
||||
.. code-block:: cmake
|
||||
|
||||
cmake_minimum_required(VERSION 3.0)
|
||||
project(example)
|
||||
|
||||
find_package(pybind11 REQUIRED) # or `add_subdirectory(pybind11)`
|
||||
|
||||
add_executable(example main.cpp)
|
||||
target_link_libraries(example PRIVATE pybind11::embed)
|
||||
|
||||
The essential structure of the ``main.cpp`` file looks like this:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
#include <pybind11/embed.h> // everything needed for embedding
|
||||
namespace py = pybind11;
|
||||
|
||||
int main() {
|
||||
py::scoped_interpreter guard{}; // start the interpreter and keep it alive
|
||||
|
||||
py::print("Hello, World!"); // use the Python API
|
||||
}
|
||||
|
||||
The interpreter must be initialized before using any Python API, which includes
|
||||
all the functions and classes in pybind11. The RAII guard class `scoped_interpreter`
|
||||
takes care of the interpreter lifetime. After the guard is destroyed, the interpreter
|
||||
shuts down and clears its memory. No Python functions can be called after this.
|
||||
|
||||
Executing Python code
|
||||
=====================
|
||||
|
||||
There are a few different ways to run Python code. One option is to use `eval`,
|
||||
`exec` or `eval_file`, as explained in :ref:`eval`. Here is a quick example in
|
||||
the context of an executable with an embedded interpreter:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
#include <pybind11/embed.h>
|
||||
namespace py = pybind11;
|
||||
|
||||
int main() {
|
||||
py::scoped_interpreter guard{};
|
||||
|
||||
py::exec(R"(
|
||||
kwargs = dict(name="World", number=42)
|
||||
message = "Hello, {name}! The answer is {number}".format(**kwargs)
|
||||
print(message)
|
||||
)");
|
||||
}
|
||||
|
||||
Alternatively, similar results can be achieved using pybind11's API (see
|
||||
:doc:`/advanced/pycpp/index` for more details).
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
#include <pybind11/embed.h>
|
||||
namespace py = pybind11;
|
||||
using namespace py::literals;
|
||||
|
||||
int main() {
|
||||
py::scoped_interpreter guard{};
|
||||
|
||||
auto kwargs = py::dict("name"_a="World", "number"_a=42);
|
||||
auto message = "Hello, {name}! The answer is {number}"_s.format(**kwargs);
|
||||
py::print(message);
|
||||
}
|
||||
|
||||
The two approaches can also be combined:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
#include <pybind11/embed.h>
|
||||
#include <iostream>
|
||||
|
||||
namespace py = pybind11;
|
||||
using namespace py::literals;
|
||||
|
||||
int main() {
|
||||
py::scoped_interpreter guard{};
|
||||
|
||||
auto locals = py::dict("name"_a="World", "number"_a=42);
|
||||
py::exec(R"(
|
||||
message = "Hello, {name}! The answer is {number}".format(**locals())
|
||||
)", py::globals(), locals);
|
||||
|
||||
auto message = locals["message"].cast<std::string>();
|
||||
std::cout << message;
|
||||
}
|
||||
|
||||
Importing modules
|
||||
=================
|
||||
|
||||
Python modules can be imported using `module::import()`:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::module sys = py::module::import("sys");
|
||||
py::print(sys.attr("path"));
|
||||
|
||||
For convenience, the current working directory is included in ``sys.path`` when
|
||||
embedding the interpreter. This makes it easy to import local Python files:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
"""calc.py located in the working directory"""
|
||||
|
||||
def add(i, j):
|
||||
return i + j
|
||||
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::module calc = py::module::import("calc");
|
||||
py::object result = calc.attr("add")(1, 2);
|
||||
int n = result.cast<int>();
|
||||
assert(n == 3);
|
||||
|
||||
Modules can be reloaded using `module::reload()` if the source is modified e.g.
|
||||
by an external process. This can be useful in scenarios where the application
|
||||
imports a user defined data processing script which needs to be updated after
|
||||
changes by the user. Note that this function does not reload modules recursively.
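For example, building on the hypothetical ``calc.py`` shown above:

.. code-block:: cpp

    py::module calc = py::module::import("calc");
    int before = calc.attr("add")(1, 2).cast<int>();

    // ... the user edits calc.py on disk ...

    calc.reload();                                   // re-executes calc.py in place
    int after = calc.attr("add")(1, 2).cast<int>();  // now uses the updated definition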
|
||||
|
||||
.. _embedding_modules:
|
||||
|
||||
Adding embedded modules
|
||||
=======================
|
||||
|
||||
Embedded binary modules can be added using the `PYBIND11_EMBEDDED_MODULE` macro.
|
||||
Note that the definition must be placed at global scope. They can be imported
|
||||
like any other module.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
#include <pybind11/embed.h>
|
||||
namespace py = pybind11;
|
||||
|
||||
PYBIND11_EMBEDDED_MODULE(fast_calc, m) {
|
||||
// `m` is a `py::module` which is used to bind functions and classes
|
||||
m.def("add", [](int i, int j) {
|
||||
return i + j;
|
||||
});
|
||||
}
|
||||
|
||||
int main() {
|
||||
py::scoped_interpreter guard{};
|
||||
|
||||
auto fast_calc = py::module::import("fast_calc");
|
||||
auto result = fast_calc.attr("add")(1, 2).cast<int>();
|
||||
assert(result == 3);
|
||||
}
|
||||
|
||||
Unlike extension modules where only a single binary module can be created, on
|
||||
the embedded side an unlimited number of modules can be added using multiple
|
||||
`PYBIND11_EMBEDDED_MODULE` definitions (as long as they have unique names).
|
||||
|
||||
These modules are added to Python's list of builtins, so they can also be
|
||||
imported in pure Python files loaded by the interpreter. Everything interacts
|
||||
naturally:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
"""py_module.py located in the working directory"""
|
||||
import cpp_module
|
||||
|
||||
a = cpp_module.a
|
||||
b = a + 1
|
||||
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
#include <pybind11/embed.h>
|
||||
namespace py = pybind11;
|
||||
|
||||
PYBIND11_EMBEDDED_MODULE(cpp_module, m) {
|
||||
m.attr("a") = 1;
|
||||
}
|
||||
|
||||
int main() {
|
||||
py::scoped_interpreter guard{};
|
||||
|
||||
auto py_module = py::module::import("py_module");
|
||||
|
||||
auto locals = py::dict("fmt"_a="{} + {} = {}", **py_module.attr("__dict__"));
|
||||
assert(locals["a"].cast<int>() == 1);
|
||||
assert(locals["b"].cast<int>() == 2);
|
||||
|
||||
py::exec(R"(
|
||||
c = a + b
|
||||
message = fmt.format(a, b, c)
|
||||
)", py::globals(), locals);
|
||||
|
||||
assert(locals["c"].cast<int>() == 3);
|
||||
assert(locals["message"].cast<std::string>() == "1 + 2 = 3");
|
||||
}
|
||||
|
||||
|
||||
Interpreter lifetime
|
||||
====================
|
||||
|
||||
The Python interpreter shuts down when `scoped_interpreter` is destroyed. After
|
||||
this, creating a new instance will restart the interpreter. Alternatively, the
|
||||
`initialize_interpreter` / `finalize_interpreter` pair of functions can be used
|
||||
to directly set the state at any time.
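A minimal sketch of the function-pair form:

.. code-block:: cpp

    #include <pybind11/embed.h>
    namespace py = pybind11;

    int main() {
        py::initialize_interpreter();  // equivalent to constructing scoped_interpreter

        py::print("Hello from an explicitly managed interpreter");

        py::finalize_interpreter();    // shut down; no Python calls are allowed afterwards
    }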
|
||||
|
||||
Modules created with pybind11 can be safely re-initialized after the interpreter
|
||||
has been restarted. However, this may not apply to third-party extension modules.
|
||||
The issue is that Python itself cannot completely unload extension modules and
|
||||
there are several caveats with regard to interpreter restarting. In short, not
|
||||
all memory may be freed, either due to Python reference cycles or user-created
|
||||
global data. All the details can be found in the CPython documentation.
|
||||
|
||||
.. warning::
|
||||
|
||||
Creating two concurrent `scoped_interpreter` guards is a fatal error. So is
|
||||
calling `initialize_interpreter` for a second time after the interpreter
|
||||
has already been initialized.
|
||||
|
||||
Do not use the raw CPython API functions ``Py_Initialize`` and
|
||||
``Py_Finalize`` as these do not properly handle the lifetime of
|
||||
pybind11's internal data.
|
||||
|
||||
|
||||
Sub-interpreter support
|
||||
=======================
|
||||
|
||||
Creating multiple copies of `scoped_interpreter` is not possible because it
|
||||
represents the main Python interpreter. Sub-interpreters are something different
|
||||
and they do permit the existence of multiple interpreters. This is an advanced
|
||||
feature of the CPython API and should be handled with care. pybind11 does not
|
||||
currently offer a C++ interface for sub-interpreters, so refer to the CPython
|
||||
documentation for all the details regarding this feature.
|
||||
|
||||
We'll just mention a couple of caveats of the sub-interpreter support in pybind11:
|
||||
|
||||
1. Sub-interpreters will not receive independent copies of embedded modules.
|
||||
Instead, these are shared and modifications in one interpreter may be
|
||||
reflected in another.
|
||||
|
||||
2. Managing multiple threads, multiple interpreters and the GIL can be
|
||||
challenging and there are several caveats here, even within the pure
|
||||
CPython API (please refer to the Python docs for details). As for
|
||||
pybind11, keep in mind that `gil_scoped_release` and `gil_scoped_acquire`
|
||||
do not take sub-interpreters into account.
|
||||
|
|
@ -0,0 +1,142 @@
|
|||
Exceptions
|
||||
##########
|
||||
|
||||
Built-in exception translation
|
||||
==============================
|
||||
|
||||
When C++ code invoked from Python throws an ``std::exception``, it is
|
||||
automatically converted into a Python ``Exception``. pybind11 defines multiple
|
||||
special exception classes that will map to different types of Python
|
||||
exceptions:
|
||||
|
||||
.. tabularcolumns:: |p{0.5\textwidth}|p{0.45\textwidth}|
|
||||
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| C++ exception type | Python exception type |
|
||||
+======================================+======================================+
|
||||
| :class:`std::exception` | ``RuntimeError`` |
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| :class:`std::bad_alloc` | ``MemoryError`` |
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| :class:`std::domain_error` | ``ValueError`` |
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| :class:`std::invalid_argument` | ``ValueError`` |
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| :class:`std::length_error` | ``ValueError`` |
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| :class:`std::out_of_range` | ``IndexError`` |
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| :class:`std::range_error` | ``ValueError`` |
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| :class:`pybind11::stop_iteration` | ``StopIteration`` (used to implement |
|
||||
| | custom iterators) |
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| :class:`pybind11::index_error` | ``IndexError`` (used to indicate out |
|
||||
| | of bounds access in ``__getitem__``, |
|
||||
| | ``__setitem__``, etc.) |
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| :class:`pybind11::value_error` | ``ValueError`` (used to indicate |
|
||||
| | wrong value passed in |
|
||||
| | ``container.remove(...)``) |
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| :class:`pybind11::key_error` | ``KeyError`` (used to indicate out |
|
||||
| | of bounds access in ``__getitem__``, |
|
||||
| | ``__setitem__`` in dict-like |
|
||||
| | objects, etc.) |
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| :class:`pybind11::error_already_set` | Indicates that the Python exception |
|
||||
| | flag has already been set via Python |
|
||||
| | API calls from C++ code; this C++ |
|
||||
| | exception is used to propagate such |
|
||||
| | a Python exception back to Python. |
|
||||
+--------------------------------------+--------------------------------------+
|
||||
|
||||
When a Python function invoked from C++ throws an exception, it is converted
|
||||
into a C++ exception of type :class:`error_already_set` whose string payload
|
||||
contains a textual summary.
|
||||
|
||||
There is also a special exception :class:`cast_error` that is thrown by
|
||||
:func:`handle::call` when the input arguments cannot be converted to Python
|
||||
objects.
|
||||
|
||||
Registering custom translators
|
||||
==============================
|
||||
|
||||
If the default exception conversion policy described above is insufficient,
|
||||
pybind11 also provides support for registering custom exception translators.
|
||||
To register a simple exception conversion that translates a C++ exception into
|
||||
a new Python exception using the C++ exception's ``what()`` method, a helper
|
||||
function is available:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::register_exception<CppExp>(module, "PyExp");
|
||||
|
||||
This call creates a Python exception class with the name ``PyExp`` in the given
|
||||
module and automatically converts any encountered exceptions of type ``CppExp``
|
||||
into Python exceptions of type ``PyExp``.
|
||||
|
||||
When more advanced exception translation is needed, the function
|
||||
``py::register_exception_translator(translator)`` can be used to register
|
||||
functions that can translate arbitrary exception types (and which may include
|
||||
additional logic to do so). The function takes a stateless callable (e.g. a
|
||||
function pointer or a lambda function without captured variables) with the call
|
||||
signature ``void(std::exception_ptr)``.
|
||||
|
||||
When a C++ exception is thrown, the registered exception translators are tried
|
||||
in reverse order of registration (i.e. the last registered translator gets the
|
||||
first shot at handling the exception).
|
||||
|
||||
Inside the translator, ``std::rethrow_exception`` should be used within
|
||||
a try block to re-throw the exception. One or more catch clauses to catch
|
||||
the appropriate exceptions should then be used with each clause using
|
||||
``PyErr_SetString`` to set a Python exception or ``ex(string)`` to set
|
||||
the python exception to a custom exception type (see below).
|
||||
|
||||
To declare a custom Python exception type, declare a ``py::exception`` variable
|
||||
and use this in the associated exception translator (note: it is often useful
|
||||
to make this a static declaration when using it inside a lambda expression
|
||||
without requiring capturing).
|
||||
|
||||
|
||||
The following example demonstrates this for two hypothetical exception classes
|
||||
``MyCustomException`` and ``OtherException``: the first is translated to a
|
||||
custom python exception ``MyCustomError``, while the second is translated to a
|
||||
standard Python ``RuntimeError``:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
static py::exception<MyCustomException> exc(m, "MyCustomError");
|
||||
py::register_exception_translator([](std::exception_ptr p) {
|
||||
try {
|
||||
if (p) std::rethrow_exception(p);
|
||||
} catch (const MyCustomException &e) {
|
||||
exc(e.what());
|
||||
} catch (const OtherException &e) {
|
||||
PyErr_SetString(PyExc_RuntimeError, e.what());
|
||||
}
|
||||
});
|
||||
|
||||
Multiple exceptions can be handled by a single translator, as shown in the
|
||||
example above. If the exception is not caught by the current translator, the
|
||||
previously registered one gets a chance.
|
||||
|
||||
If none of the registered exception translators is able to handle the
|
||||
exception, it is handled by the default converter as described in the previous
|
||||
section.
|
||||
|
||||
.. seealso::
|
||||
|
||||
The file :file:`tests/test_exceptions.cpp` contains examples
|
||||
of various custom exception translators and custom exception types.
|
||||
|
||||
.. note::
|
||||
|
||||
You must call either ``PyErr_SetString`` or a custom exception's call
|
||||
operator (``exc(string)``) for every exception caught in a custom exception
|
||||
translator. Failure to do so will cause Python to crash with ``SystemError:
|
||||
error return without exception set``.
|
||||
|
||||
Exceptions that you do not plan to handle should simply not be caught, or
|
||||
may be explicitly (re-)thrown to delegate them to the other,
|
||||
previously-declared existing exception translators.
|
||||
|
|
@ -0,0 +1,507 @@
|
|||
Functions
|
||||
#########
|
||||
|
||||
Before proceeding with this section, make sure that you are already familiar
|
||||
with the basics of binding functions and classes, as explained in :doc:`/basics`
|
||||
and :doc:`/classes`. The following guide is applicable to both free and member
|
||||
functions, i.e. *methods* in Python.
|
||||
|
||||
.. _return_value_policies:
|
||||
|
||||
Return value policies
|
||||
=====================
|
||||
|
||||
Python and C++ use fundamentally different ways of managing the memory and
|
||||
lifetime of objects managed by them. This can lead to issues when creating
|
||||
bindings for functions that return a non-trivial type. Just by looking at the
|
||||
type information, it is not clear whether Python should take charge of the
|
||||
returned value and eventually free its resources, or if this is handled on the
|
||||
C++ side. For this reason, pybind11 provides several *return value policy*
|
||||
annotations that can be passed to the :func:`module::def` and
|
||||
:func:`class_::def` functions. The default policy is
|
||||
:enum:`return_value_policy::automatic`.
|
||||
|
||||
Return value policies are tricky, and it's very important to get them right.
|
||||
Just to illustrate what can go wrong, consider the following simple example:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
/* Function declaration */
|
||||
Data *get_data() { return _data; /* (pointer to a static data structure) */ }
|
||||
...
|
||||
|
||||
/* Binding code */
|
||||
m.def("get_data", &get_data); // <-- KABOOM, will cause crash when called from Python
|
||||
|
||||
What's going on here? When ``get_data()`` is called from Python, the return
|
||||
value (a native C++ type) must be wrapped to turn it into a usable Python type.
|
||||
In this case, the default return value policy (:enum:`return_value_policy::automatic`)
|
||||
causes pybind11 to assume ownership of the static ``_data`` instance.
|
||||
|
||||
When Python's garbage collector eventually deletes the Python
|
||||
wrapper, pybind11 will also attempt to delete the C++ instance (via ``operator
|
||||
delete()``) due to the implied ownership. At this point, the entire application
|
||||
will come crashing down, though errors could also be more subtle and involve
|
||||
silent data corruption.
|
||||
|
||||
In the above example, the policy :enum:`return_value_policy::reference` should have
|
||||
been specified so that the global data instance is only *referenced* without any
|
||||
implied transfer of ownership, i.e.:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
m.def("get_data", &get_data, return_value_policy::reference);
|
||||
|
||||
On the other hand, this is not the right policy for many other situations,
|
||||
where ignoring ownership could lead to resource leaks.
|
||||
As a developer using pybind11, it's important to be familiar with the different
|
||||
return value policies, including which situation calls for which one of them.
|
||||
The following table provides an overview of available policies:
|
||||
|
||||
.. tabularcolumns:: |p{0.5\textwidth}|p{0.45\textwidth}|
|
||||
|
||||
+--------------------------------------------------+----------------------------------------------------------------------------+
|
||||
| Return value policy | Description |
|
||||
+==================================================+============================================================================+
|
||||
| :enum:`return_value_policy::take_ownership` | Reference an existing object (i.e. do not create a new copy) and take |
|
||||
| | ownership. Python will call the destructor and delete operator when the |
|
||||
| | object's reference count reaches zero. Undefined behavior ensues when the |
|
||||
| | C++ side does the same, or when the data was not dynamically allocated. |
|
||||
+--------------------------------------------------+----------------------------------------------------------------------------+
|
||||
| :enum:`return_value_policy::copy` | Create a new copy of the returned object, which will be owned by Python. |
|
||||
| | This policy is comparably safe because the lifetimes of the two instances |
|
||||
| | are decoupled. |
|
||||
+--------------------------------------------------+----------------------------------------------------------------------------+
|
||||
| :enum:`return_value_policy::move` | Use ``std::move`` to move the return value contents into a new instance |
|
||||
| | that will be owned by Python. This policy is comparably safe because the |
|
||||
| | lifetimes of the two instances (move source and destination) are decoupled.|
|
||||
+--------------------------------------------------+----------------------------------------------------------------------------+
|
||||
| :enum:`return_value_policy::reference` | Reference an existing object, but do not take ownership. The C++ side is |
|
||||
| | responsible for managing the object's lifetime and deallocating it when |
|
||||
| | it is no longer used. Warning: undefined behavior will ensue when the C++ |
|
||||
| | side deletes an object that is still referenced and used by Python. |
|
||||
+--------------------------------------------------+----------------------------------------------------------------------------+
|
||||
| :enum:`return_value_policy::reference_internal` | Indicates that the lifetime of the return value is tied to the lifetime |
|
||||
| | of a parent object, namely the implicit ``this``, or ``self`` argument of |
|
||||
| | the called method or property. Internally, this policy works just like |
|
||||
| | :enum:`return_value_policy::reference` but additionally applies a |
|
||||
| | ``keep_alive<0, 1>`` *call policy* (described in the next section) that |
|
||||
| | prevents the parent object from being garbage collected as long as the |
|
||||
| | return value is referenced by Python. This is the default policy for |
|
||||
| | property getters created via ``def_property``, ``def_readwrite``, etc. |
|
||||
+--------------------------------------------------+----------------------------------------------------------------------------+
|
||||
| :enum:`return_value_policy::automatic` | **Default policy.** This policy falls back to the policy |
|
||||
| | :enum:`return_value_policy::take_ownership` when the return value is a |
|
||||
| | pointer. Otherwise, it uses :enum:`return_value_policy::move` or |
|
||||
| | :enum:`return_value_policy::copy` for rvalue and lvalue references, |
|
||||
| | respectively. See above for a description of what all of these different |
|
||||
| | policies do. |
|
||||
+--------------------------------------------------+----------------------------------------------------------------------------+
|
||||
| :enum:`return_value_policy::automatic_reference` | As above, but use policy :enum:`return_value_policy::reference` when the |
|
||||
| | return value is a pointer. This is the default conversion policy for |
|
||||
| | function arguments when calling Python functions manually from C++ code |
|
||||
| | (i.e. via handle::operator()). You probably won't need to use this. |
|
||||
+--------------------------------------------------+----------------------------------------------------------------------------+
|
||||
|
||||
Return value policies can also be applied to properties:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
class_<MyClass>(m, "MyClass")
|
||||
.def_property("data", &MyClass::getData, &MyClass::setData,
|
||||
py::return_value_policy::copy);
|
||||
|
||||
Technically, the code above applies the policy to both the getter and the
|
||||
setter function, however, the setter doesn't really care about *return*
|
||||
value policies which makes this a convenient terse syntax. Alternatively,
|
||||
targeted arguments can be passed through the :class:`cpp_function` constructor:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
class_<MyClass>(m, "MyClass")
|
||||
.def_property("data"
|
||||
py::cpp_function(&MyClass::getData, py::return_value_policy::copy),
|
||||
py::cpp_function(&MyClass::setData)
|
||||
);
|
||||
|
||||
.. warning::
|
||||
|
||||
Code with invalid return value policies might access uninitialized memory or
|
||||
free data structures multiple times, which can lead to hard-to-debug
|
||||
non-determinism and segmentation faults, hence it is worth spending the
|
||||
time to understand all the different options in the table above.
|
||||
|
||||
.. note::
|
||||
|
||||
One important aspect of the above policies is that they only apply to
|
||||
instances which pybind11 has *not* seen before, in which case the policy
|
||||
clarifies essential questions about the return value's lifetime and
|
||||
ownership. When pybind11 knows the instance already (as identified by its
|
||||
type and address in memory), it will return the existing Python object
|
||||
wrapper rather than creating a new copy.
|
||||
|
||||
.. note::
|
||||
|
||||
The next section on :ref:`call_policies` discusses *call policies* that can be
|
||||
specified *in addition* to a return value policy from the list above. Call
|
||||
policies indicate reference relationships that can involve both return values
|
||||
and parameters of functions.
|
||||
|
||||
.. note::
|
||||
|
||||
As an alternative to elaborate call policies and lifetime management logic,
|
||||
consider using smart pointers (see the section on :ref:`smart_pointers` for
|
||||
details). Smart pointers can tell whether an object is still referenced from
|
||||
C++ or Python, which generally eliminates the kinds of inconsistencies that
|
||||
can lead to crashes or undefined behavior. For functions returning smart
|
||||
pointers, it is not necessary to specify a return value policy.
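For instance, with a ``std::shared_ptr`` holder the ownership question disappears entirely
(the ``Widget`` class and the function name below are illustrative):

.. code-block:: cpp

    struct Widget { };

    // Declare std::shared_ptr<Widget> as the holder type once ...
    py::class_<Widget, std::shared_ptr<Widget>>(m, "Widget");

    // ... then functions may return shared pointers without any policy annotation;
    // Python and C++ simply share ownership of the instance.
    m.def("make_widget", []() { return std::make_shared<Widget>(); });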
|
||||
|
||||
.. _call_policies:
|
||||
|
||||
Additional call policies
|
||||
========================
|
||||
|
||||
In addition to the above return value policies, further *call policies* can be
|
||||
specified to indicate dependencies between parameters or ensure a certain state
|
||||
for the function call.
|
||||
|
||||
Keep alive
|
||||
----------
|
||||
|
||||
In general, this policy is required when the C++ object is any kind of container
|
||||
and another object is being added to the container. ``keep_alive<Nurse, Patient>``
|
||||
indicates that the argument with index ``Patient`` should be kept alive at least
|
||||
until the argument with index ``Nurse`` is freed by the garbage collector. Argument
|
||||
indices start at one, while zero refers to the return value. For methods, index
|
||||
``1`` refers to the implicit ``this`` pointer, while regular arguments begin at
|
||||
index ``2``. Arbitrarily many call policies can be specified. When a ``Nurse``
|
||||
with value ``None`` is detected at runtime, the call policy does nothing.
|
||||
|
||||
When the nurse is not a pybind11-registered type, the implementation internally
|
||||
relies on the ability to create a *weak reference* to the nurse object. When
|
||||
the nurse object is not a pybind11-registered type and does not support weak
|
||||
references, an exception will be thrown.
|
||||
|
||||
Consider the following example: here, the binding code for a list append
|
||||
operation ties the lifetime of the newly added element to the underlying
|
||||
container:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::class_<List>(m, "List")
|
||||
.def("append", &List::append, py::keep_alive<1, 2>());
|
||||
|
||||
For consistency, the argument indexing is identical for constructors. Index
|
||||
``1`` still refers to the implicit ``this`` pointer, i.e. the object which is
|
||||
being constructed. Index ``0`` refers to the return type which is presumed to
|
||||
be ``void`` when a constructor is viewed like a function. The following example
|
||||
ties the lifetime of the constructor element to the constructed object:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::class_<Nurse>(m, "Nurse")
|
||||
.def(py::init<Patient &>(), py::keep_alive<1, 2>());
|
||||
|
||||
.. note::
|
||||
|
||||
``keep_alive`` is analogous to the ``with_custodian_and_ward`` (if Nurse,
|
||||
Patient != 0) and ``with_custodian_and_ward_postcall`` (if Nurse/Patient ==
|
||||
0) policies from Boost.Python.
|
||||
|
||||
Call guard
|
||||
----------
|
||||
|
||||
The ``call_guard<T>`` policy allows any scope guard type ``T`` to be placed
|
||||
around the function call. For example, this definition:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
m.def("foo", foo, py::call_guard<T>());
|
||||
|
||||
is equivalent to the following pseudocode:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
m.def("foo", [](args...) {
|
||||
T scope_guard;
|
||||
return foo(args...); // forwarded arguments
|
||||
});
|
||||
|
||||
The only requirement is that ``T`` is default-constructible, but otherwise any
|
||||
scope guard will work. This is very useful in combination with `gil_scoped_release`.
|
||||
See :ref:`gil`.
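For example, a long-running C++ function can release the GIL for the duration of every
call (a sketch; the function body is illustrative):

.. code-block:: cpp

    // The GIL is released after the arguments are converted and re-acquired before
    // the return value is converted back to Python.
    m.def("compute", [](int n) {
        long long total = 0;  // pure C++ work, no Python API calls allowed here
        for (int i = 0; i < n; ++i)
            total += i;
        return total;
    }, py::call_guard<py::gil_scoped_release>());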
|
||||
|
||||
Multiple guards can also be specified as ``py::call_guard<T1, T2, T3...>``. The
|
||||
constructor order is left to right and destruction happens in reverse.
|
||||
|
||||
.. seealso::
|
||||
|
||||
The file :file:`tests/test_call_policies.cpp` contains a complete example
|
||||
that demonstrates using `keep_alive` and `call_guard` in more detail.
|
||||
|
||||
.. _python_objects_as_args:
|
||||
|
||||
Python objects as arguments
|
||||
===========================
|
||||
|
||||
pybind11 exposes all major Python types using thin C++ wrapper classes. These
|
||||
wrapper classes can also be used as parameters of functions in bindings, which
|
||||
makes it possible to directly work with native Python types on the C++ side.
|
||||
For instance, the following statement iterates over a Python ``dict``:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
void print_dict(py::dict dict) {
|
||||
/* Easily interact with Python types */
|
||||
for (auto item : dict)
|
||||
std::cout << "key=" << std::string(py::str(item.first)) << ", "
|
||||
<< "value=" << std::string(py::str(item.second)) << std::endl;
|
||||
}
|
||||
|
||||
It can be exported:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
m.def("print_dict", &print_dict);
|
||||
|
||||
And used in Python as usual:
|
||||
|
||||
.. code-block:: pycon
|
||||
|
||||
>>> print_dict({'foo': 123, 'bar': 'hello'})
|
||||
key=foo, value=123
|
||||
key=bar, value=hello
|
||||
|
||||
For more information on using Python objects in C++, see :doc:`/advanced/pycpp/index`.
|
||||
|
||||
Accepting \*args and \*\*kwargs
|
||||
===============================
|
||||
|
||||
Python provides a useful mechanism to define functions that accept arbitrary
|
||||
numbers of arguments and keyword arguments:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
def generic(*args, **kwargs):
|
||||
... # do something with args and kwargs
|
||||
|
||||
Such functions can also be created using pybind11:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
void generic(py::args args, py::kwargs kwargs) {
|
||||
/// .. do something with args
|
||||
if (kwargs)
|
||||
/// .. do something with kwargs
|
||||
}
|
||||
|
||||
/// Binding code
|
||||
m.def("generic", &generic);
|
||||
|
||||
The class ``py::args`` derives from ``py::tuple`` and ``py::kwargs`` derives
|
||||
from ``py::dict``.
|
||||
|
||||
You may also use just one or the other, and may combine these with other
|
||||
arguments as long as the ``py::args`` and ``py::kwargs`` arguments are the last
|
||||
arguments accepted by the function.
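For instance, a trailing ``py::kwargs`` can follow regular arguments (the names below are
illustrative):

.. code-block:: cpp

    // `scale` is an ordinary positional argument; any keyword arguments land in `kwargs`.
    m.def("scale_all", [](double scale, py::kwargs kwargs) {
        for (auto item : kwargs)
            py::print("option", item.first, "=", item.second);
        return scale;
    });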
|
||||
|
||||
Please refer to the other examples for details on how to iterate over these,
|
||||
and on how to cast their entries into C++ objects. A demonstration is also
|
||||
available in ``tests/test_kwargs_and_defaults.cpp``.
|
||||
|
||||
.. note::
|
||||
|
||||
When combining \*args or \*\*kwargs with :ref:`keyword_args` you should
|
||||
*not* include ``py::arg`` tags for the ``py::args`` and ``py::kwargs``
|
||||
arguments.
|
||||
|
||||
Default arguments revisited
|
||||
===========================
|
||||
|
||||
The section on :ref:`default_args` previously discussed basic usage of default
|
||||
arguments using pybind11. One noteworthy aspect of their implementation is that
|
||||
default arguments are converted to Python objects right at declaration time.
|
||||
Consider the following example:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::class_<MyClass>("MyClass")
|
||||
.def("myFunction", py::arg("arg") = SomeType(123));
|
||||
|
||||
In this case, pybind11 must already be set up to deal with values of the type
|
||||
``SomeType`` (via a prior instantiation of ``py::class_<SomeType>``), or an
|
||||
exception will be thrown.
|
||||
|
||||
Another aspect worth highlighting is that the "preview" of the default argument
|
||||
in the function signature is generated using the object's ``__repr__`` method.
|
||||
If not available, the signature may not be very helpful, e.g.:
|
||||
|
||||
.. code-block:: pycon
|
||||
|
||||
FUNCTIONS
|
||||
...
|
||||
| myFunction(...)
|
||||
| Signature : (MyClass, arg : SomeType = <SomeType object at 0x101b7b080>) -> NoneType
|
||||
...
|
||||
|
||||
The first way of addressing this is by defining ``SomeType.__repr__``.
|
||||
Alternatively, it is possible to specify the human-readable preview of the
|
||||
default argument manually using the ``arg_v`` notation:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::class_<MyClass>("MyClass")
|
||||
.def("myFunction", py::arg_v("arg", SomeType(123), "SomeType(123)"));
|
||||
|
||||
Sometimes it may be necessary to pass a null pointer value as a default
|
||||
argument. In this case, remember to cast it to the underlying type in question,
|
||||
like so:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::class_<MyClass>("MyClass")
|
||||
.def("myFunction", py::arg("arg") = (SomeType *) nullptr);
|
||||
|
||||
.. _nonconverting_arguments:
|
||||
|
||||
Non-converting arguments
|
||||
========================
|
||||
|
||||
Certain argument types may support conversion from one type to another. Some
|
||||
examples of conversions are:
|
||||
|
||||
* :ref:`implicit_conversions` declared using ``py::implicitly_convertible<A,B>()``
|
||||
* Calling a method accepting a double with an integer argument
|
||||
* Calling a ``std::complex<float>`` argument with a non-complex python type
|
||||
(for example, with a float). (Requires the optional ``pybind11/complex.h``
|
||||
header).
|
||||
* Calling a function taking an Eigen matrix reference with a numpy array of the
|
||||
wrong type or of an incompatible data layout. (Requires the optional
|
||||
``pybind11/eigen.h`` header).
|
||||
|
||||
This behaviour is sometimes undesirable: the binding code may prefer to raise
|
||||
an error rather than convert the argument. This behaviour can be obtained
|
||||
through ``py::arg`` by calling the ``.noconvert()`` method of the ``py::arg``
|
||||
object, such as:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
m.def("floats_only", [](double f) { return 0.5 * f; }, py::arg("f").noconvert());
|
||||
m.def("floats_preferred", [](double f) { return 0.5 * f; }, py::arg("f"));
|
||||
|
||||
Attempting to call the second function (the one without ``.noconvert()``) with
|
||||
an integer will succeed, but attempting to call the ``.noconvert()`` version
|
||||
will fail with a ``TypeError``:
|
||||
|
||||
.. code-block:: pycon
|
||||
|
||||
>>> floats_preferred(4)
|
||||
2.0
|
||||
>>> floats_only(4)
|
||||
Traceback (most recent call last):
|
||||
File "<stdin>", line 1, in <module>
|
||||
TypeError: floats_only(): incompatible function arguments. The following argument types are supported:
|
||||
1. (f: float) -> float
|
||||
|
||||
Invoked with: 4
|
||||
|
||||
You may, of course, combine this with the :var:`_a` shorthand notation (see
|
||||
:ref:`keyword_args`) and/or :ref:`default_args`. It is also permitted to omit
|
||||
the argument name by using the ``py::arg()`` constructor without an argument
|
||||
name, i.e. by specifying ``py::arg().noconvert()``.
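
As a small illustrative sketch (the function and argument names are made up),
the ``_a`` shorthand and ``noconvert()`` can be combined:

.. code-block:: cpp

    using namespace pybind11::literals;

    // "x" keeps its keyword name but rejects implicit conversion to double
    m.def("half", [](double x) { return 0.5 * x; }, "x"_a.noconvert());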
|
||||
|
||||
.. note::
|
||||
|
||||
When specifying ``py::arg`` options it is necessary to provide the same
|
||||
number of options as the bound function has arguments. Thus if you want to
|
||||
enable no-convert behaviour for just one of several arguments, you will
|
||||
need to specify a ``py::arg()`` annotation for each argument with the
|
||||
no-convert argument modified to ``py::arg().noconvert()``.
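
For example, to disable conversion for only the first of two arguments (a
sketch with hypothetical names):

.. code-block:: cpp

    // Two arguments, therefore two py::arg annotations; only "x" rejects conversion
    m.def("weighted", [](double x, double w) { return x * w; },
          py::arg("x").noconvert(), py::arg("w"));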
|
||||
|
||||
.. _none_arguments:
|
||||
|
||||
Allowing/Prohibiting None arguments
===================================
|
||||
|
||||
When a C++ type registered with :class:`py::class_` is passed as an argument to
|
||||
a function taking the instance as pointer or shared holder (e.g. ``shared_ptr``
|
||||
or a custom, copyable holder as described in :ref:`smart_pointers`), pybind11
allows ``None`` to be passed from Python, which results in calling the C++
|
||||
function with ``nullptr`` (or an empty holder) for the argument.
|
||||
|
||||
To explicitly enable or disable this behaviour, use the
``.none`` method of the :class:`py::arg` object:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::class_<Dog>(m, "Dog").def(py::init<>());
|
||||
py::class_<Cat>(m, "Cat").def(py::init<>());
|
||||
m.def("bark", [](Dog *dog) -> std::string {
|
||||
if (dog) return "woof!"; /* Called with a Dog instance */
|
||||
else return "(no dog)"; /* Called with None, dog == nullptr */
|
||||
}, py::arg("dog").none(true));
|
||||
m.def("meow", [](Cat *cat) -> std::string {
|
||||
// Can't be called with None argument
|
||||
return "meow";
|
||||
}, py::arg("cat").none(false));
|
||||
|
||||
With the above, the Python call ``bark(None)`` will return the string ``"(no
|
||||
dog)"``, while attempting to call ``meow(None)`` will raise a ``TypeError``:
|
||||
|
||||
.. code-block:: pycon
|
||||
|
||||
>>> from animals import Dog, Cat, bark, meow
|
||||
>>> bark(Dog())
|
||||
'woof!'
|
||||
>>> meow(Cat())
|
||||
'meow'
|
||||
>>> bark(None)
|
||||
'(no dog)'
|
||||
>>> meow(None)
|
||||
Traceback (most recent call last):
|
||||
File "<stdin>", line 1, in <module>
|
||||
TypeError: meow(): incompatible function arguments. The following argument types are supported:
|
||||
1. (cat: animals.Cat) -> str
|
||||
|
||||
Invoked with: None
|
||||
|
||||
The default behaviour when the tag is unspecified is to allow ``None``.
|
||||
|
||||
.. note::
|
||||
|
||||
Even when ``.none(true)`` is specified for an argument, ``None`` will be converted to a
|
||||
``nullptr`` *only* for custom and :ref:`opaque <opaque>` types. Pointers to built-in types
|
||||
(``double *``, ``int *``, ...) and STL types (``std::vector<T> *``, ...; if ``pybind11/stl.h``
|
||||
is included) are copied when converted to C++ (see :doc:`/advanced/cast/overview`) and will
|
||||
not allow ``None`` as argument. To pass an optional argument of these copied
types, consider using ``std::optional<T>``.
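
A minimal sketch of that workaround (assuming C++17 and the optional
``pybind11/stl.h`` header, which provides the ``std::optional`` caster):

.. code-block:: cpp

    #include <optional>
    #include <pybind11/stl.h>  // enables the std::optional<T> caster

    m.def("maybe_double", [](std::optional<double> x) {
        return x ? *x * 2.0 : 0.0;  // called with None, x is empty
    });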
|
||||
|
||||
Overload resolution order
|
||||
=========================
|
||||
|
||||
When a function or method with multiple overloads is called from Python,
|
||||
pybind11 determines which overload to call in two passes. The first pass
|
||||
attempts to call each overload without allowing argument conversion (as if
|
||||
every argument had been specified as ``py::arg().noconvert()`` as described
|
||||
above).
|
||||
|
||||
If no overload succeeds in the no-conversion first pass, a second pass is
|
||||
attempted in which argument conversion is allowed (except where prohibited via
|
||||
an explicit ``py::arg().noconvert()`` attribute in the function definition).
|
||||
|
||||
If the second pass also fails a ``TypeError`` is raised.
|
||||
|
||||
Within each pass, overloads are tried in the order they were registered with
|
||||
pybind11.
|
||||
|
||||
What this means in practice is that pybind11 will prefer any overload that does
|
||||
not require conversion of arguments to an overload that does, but otherwise prefers
|
||||
earlier-defined overloads to later-defined ones.
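
The following sketch (two hypothetical overloads, not taken from the text
above) illustrates the effect:

.. code-block:: cpp

    m.def("describe", [](double) { return "double overload"; });
    m.def("describe", [](int)    { return "int overload"; });

    // describe(1)   -> "int overload": an exact match in the first pass wins,
    //                  even though the double overload was registered first
    // describe(1.5) -> "double overload": exact match in the first pass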
|
||||
|
||||
.. note::
|
||||
|
||||
pybind11 does *not* further prioritize based on the number/pattern of
|
||||
overloaded arguments. That is, pybind11 does not prioritize a function
|
||||
requiring one conversion over one requiring three, but only prioritizes
|
||||
overloads requiring no conversion at all over overloads that require
|
||||
conversion of at least one argument.
|
||||
|
|
@ -0,0 +1,306 @@
|
|||
Miscellaneous
|
||||
#############
|
||||
|
||||
.. _macro_notes:
|
||||
|
||||
General notes regarding convenience macros
|
||||
==========================================
|
||||
|
||||
pybind11 provides a few convenience macros such as
|
||||
:func:`PYBIND11_DECLARE_HOLDER_TYPE` and ``PYBIND11_OVERLOAD_*``. Since these
|
||||
are "just" macros that are evaluated in the preprocessor (which has no concept
|
||||
of types), they *will* get confused by commas in a template argument; for
|
||||
example, consider:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
PYBIND11_OVERLOAD(MyReturnType<T1, T2>, Class<T3, T4>, func)
|
||||
|
||||
The C preprocessor interprets this as five arguments (with new arguments
beginning after each comma) rather than three. To get around this,
|
||||
there are two alternatives: you can use a type alias, or you can wrap the type
|
||||
using the ``PYBIND11_TYPE`` macro:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// Version 1: using a type alias
|
||||
using ReturnType = MyReturnType<T1, T2>;
|
||||
using ClassType = Class<T3, T4>;
|
||||
PYBIND11_OVERLOAD(ReturnType, ClassType, func);
|
||||
|
||||
// Version 2: using the PYBIND11_TYPE macro:
|
||||
PYBIND11_OVERLOAD(PYBIND11_TYPE(MyReturnType<T1, T2>),
|
||||
PYBIND11_TYPE(Class<T3, T4>), func)
|
||||
|
||||
The ``PYBIND11_MAKE_OPAQUE`` macro does *not* require the above workarounds.
|
||||
|
||||
.. _gil:
|
||||
|
||||
Global Interpreter Lock (GIL)
|
||||
=============================
|
||||
|
||||
When calling a C++ function from Python, the GIL is always held.
|
||||
The classes :class:`gil_scoped_release` and :class:`gil_scoped_acquire` can be
|
||||
used to acquire and release the global interpreter lock in the body of a C++
|
||||
function call. In this way, long-running C++ code can be parallelized using
|
||||
multiple Python threads. Taking :ref:`overriding_virtuals` as an example, this
|
||||
could be realized as follows (important changes highlighted):
|
||||
|
||||
.. code-block:: cpp
|
||||
:emphasize-lines: 8,9,31,32
|
||||
|
||||
class PyAnimal : public Animal {
|
||||
public:
|
||||
/* Inherit the constructors */
|
||||
using Animal::Animal;
|
||||
|
||||
/* Trampoline (need one for each virtual function) */
|
||||
std::string go(int n_times) {
|
||||
/* Acquire GIL before calling Python code */
|
||||
py::gil_scoped_acquire acquire;
|
||||
|
||||
PYBIND11_OVERLOAD_PURE(
|
||||
std::string, /* Return type */
|
||||
Animal, /* Parent class */
|
||||
go, /* Name of function */
|
||||
n_times /* Argument(s) */
|
||||
);
|
||||
}
|
||||
};
|
||||
|
||||
PYBIND11_MODULE(example, m) {
|
||||
py::class_<Animal, PyAnimal> animal(m, "Animal");
|
||||
animal
|
||||
.def(py::init<>())
|
||||
.def("go", &Animal::go);
|
||||
|
||||
py::class_<Dog>(m, "Dog", animal)
|
||||
.def(py::init<>());
|
||||
|
||||
m.def("call_go", [](Animal *animal) -> std::string {
|
||||
/* Release GIL before calling into (potentially long-running) C++ code */
|
||||
py::gil_scoped_release release;
|
||||
return call_go(animal);
|
||||
});
|
||||
}
|
||||
|
||||
The ``call_go`` wrapper can also be simplified using the `call_guard` policy
|
||||
(see :ref:`call_policies`) which yields the same result:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
m.def("call_go", &call_go, py::call_guard<py::gil_scoped_release>());
|
||||
|
||||
|
||||
Binding sequence data types, iterators, the slicing protocol, etc.
|
||||
==================================================================
|
||||
|
||||
Please refer to the supplemental example for details.
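
As a very small sketch of the idea, consider a hypothetical ``Sequence`` class
holding a ``std::vector<float>`` member named ``data``; length queries and
indexing could be bound as follows:

.. code-block:: cpp

    py::class_<Sequence>(m, "Sequence")
        .def(py::init<>())
        .def("__len__", [](const Sequence &s) { return s.data.size(); })
        .def("__getitem__", [](const Sequence &s, size_t i) {
            if (i >= s.data.size())
                throw py::index_error();  // also terminates Python iteration
            return s.data[i];
        });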
|
||||
|
||||
.. seealso::
|
||||
|
||||
The file :file:`tests/test_sequences_and_iterators.cpp` contains a
|
||||
complete example that shows how to bind a sequence data type, including
|
||||
length queries (``__len__``), iterators (``__iter__``), the slicing
|
||||
protocol and other kinds of useful operations.
|
||||
|
||||
|
||||
Partitioning code over multiple extension modules
|
||||
=================================================
|
||||
|
||||
It's straightforward to split binding code over multiple extension modules,
|
||||
while referencing types that are declared elsewhere. Everything "just" works
|
||||
without any special precautions. One exception to this rule occurs when
|
||||
extending a type declared in another extension module. Recall the basic example
|
||||
from Section :ref:`inheritance`.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::class_<Pet> pet(m, "Pet");
|
||||
pet.def(py::init<const std::string &>())
|
||||
.def_readwrite("name", &Pet::name);
|
||||
|
||||
py::class_<Dog>(m, "Dog", pet /* <- specify parent */)
|
||||
.def(py::init<const std::string &>())
|
||||
.def("bark", &Dog::bark);
|
||||
|
||||
Suppose now that ``Pet`` bindings are defined in a module named ``basic``,
|
||||
whereas the ``Dog`` bindings are defined somewhere else. The challenge is of
|
||||
course that the variable ``pet`` is not available anymore, though it is needed
|
||||
to indicate the inheritance relationship to the constructor of ``class_<Dog>``.
|
||||
However, it can be acquired as follows:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::object pet = (py::object) py::module::import("basic").attr("Pet");
|
||||
|
||||
py::class_<Dog>(m, "Dog", pet)
|
||||
.def(py::init<const std::string &>())
|
||||
.def("bark", &Dog::bark);
|
||||
|
||||
Alternatively, you can specify the base class as a template parameter option to
|
||||
``class_``, which performs an automated lookup of the corresponding Python
|
||||
type. Like the above code, however, this also requires invoking the ``import``
|
||||
function once to ensure that the pybind11 binding code of the module ``basic``
|
||||
has been executed:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::module::import("basic");
|
||||
|
||||
py::class_<Dog, Pet>(m, "Dog")
|
||||
.def(py::init<const std::string &>())
|
||||
.def("bark", &Dog::bark);
|
||||
|
||||
Naturally, both methods will fail when there are cyclic dependencies.
|
||||
|
||||
Note that pybind11 code compiled with hidden-by-default symbol visibility (e.g.
|
||||
via the command line flag ``-fvisibility=hidden`` on GCC/Clang), which is
|
||||
required for proper pybind11 functionality, can interfere with the ability to
|
||||
access types defined in another extension module. Working around this requires
|
||||
manually exporting types that are accessed by multiple extension modules;
|
||||
pybind11 provides a macro to do just this:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
class PYBIND11_EXPORT Dog : public Animal {
|
||||
...
|
||||
};
|
||||
|
||||
Note also that it is possible (although it would rarely be required) to share arbitrary
|
||||
C++ objects between extension modules at runtime. Internal library data is shared
|
||||
between modules using capsule machinery [#f6]_ which can also be utilized for
|
||||
storing, modifying and accessing user-defined data. Note that an extension module
|
||||
will "see" other extensions' data if and only if they were built with the same
|
||||
pybind11 version. Consider the following example:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
auto data = (MyData *) py::get_shared_data("mydata");
|
||||
if (!data)
|
||||
data = (MyData *) py::set_shared_data("mydata", new MyData(42));
|
||||
|
||||
If the above snippet was used in several separately compiled extension modules,
|
||||
the first one to be imported would create a ``MyData`` instance and associate
|
||||
a ``"mydata"`` key with a pointer to it. Extensions that are imported later
|
||||
would be then able to access the data behind the same pointer.
|
||||
|
||||
.. [#f6] https://docs.python.org/3/extending/extending.html#using-capsules
|
||||
|
||||
Module Destructors
|
||||
==================
|
||||
|
||||
pybind11 does not provide an explicit mechanism to invoke cleanup code at
|
||||
module destruction time. In rare cases where such functionality is required, it
|
||||
is possible to emulate it using Python capsules or weak references with a
|
||||
destruction callback.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
auto cleanup_callback = []() {
|
||||
// perform cleanup here -- this function is called with the GIL held
|
||||
};
|
||||
|
||||
m.add_object("_cleanup", py::capsule(cleanup_callback));
|
||||
|
||||
This approach has the potential downside that instances of classes exposed
|
||||
within the module may still be alive when the cleanup callback is invoked
|
||||
(whether this is acceptable will generally depend on the application).
|
||||
|
||||
Alternatively, the capsule may also be stashed within a type object, which
|
||||
ensures that it is not called before all instances of that type have been
|
||||
collected:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
auto cleanup_callback = []() { /* ... */ };
|
||||
m.attr("BaseClass").attr("_cleanup") = py::capsule(cleanup_callback);
|
||||
|
||||
Both approaches also expose a potentially dangerous ``_cleanup`` attribute in
|
||||
Python, which may be undesirable from an API standpoint (a premature explicit
|
||||
call from Python might lead to undefined behavior). Yet another approach that
|
||||
avoids this issue involves a weak reference with a cleanup callback:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// Register a callback function that is invoked when the BaseClass object is collected
|
||||
py::cpp_function cleanup_callback(
|
||||
[](py::handle weakref) {
|
||||
// perform cleanup here -- this function is called with the GIL held
|
||||
|
||||
weakref.dec_ref(); // release weak reference
|
||||
}
|
||||
);
|
||||
|
||||
// Create a weak reference with a cleanup callback and initially leak it
|
||||
(void) py::weakref(m.attr("BaseClass"), cleanup_callback).release();
|
||||
|
||||
.. note::
|
||||
|
||||
PyPy (at least version 5.9) does not garbage collect objects when the
|
||||
interpreter exits. An alternative approach (which also works on CPython) is to use
|
||||
the :py:mod:`atexit` module [#f7]_, for example:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
auto atexit = py::module::import("atexit");
|
||||
atexit.attr("register")(py::cpp_function([]() {
|
||||
// perform cleanup here -- this function is called with the GIL held
|
||||
}));
|
||||
|
||||
.. [#f7] https://docs.python.org/3/library/atexit.html
|
||||
|
||||
|
||||
Generating documentation using Sphinx
|
||||
=====================================
|
||||
|
||||
Sphinx [#f4]_ has the ability to inspect the signatures and documentation
|
||||
strings in pybind11-based extension modules to automatically generate beautiful
|
||||
documentation in a variety of formats. The python_example repository [#f5]_
contains a simple example which uses this approach.
|
||||
|
||||
There are two potential gotchas when using this approach: first, make sure that
|
||||
the resulting strings do not contain any :kbd:`TAB` characters, which break the
|
||||
docstring parsing routines. You may want to use C++11 raw string literals,
|
||||
which are convenient for multi-line comments. Conveniently, any excess
|
||||
indentation will automatically be removed by Sphinx. However, for this to
|
||||
work, it is important that all lines are indented consistently, i.e.:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// ok
|
||||
m.def("foo", &foo, R"mydelimiter(
|
||||
The foo function
|
||||
|
||||
Parameters
|
||||
----------
|
||||
)mydelimiter");
|
||||
|
||||
// *not ok*
|
||||
m.def("foo", &foo, R"mydelimiter(The foo function
|
||||
|
||||
Parameters
|
||||
----------
|
||||
)mydelimiter");
|
||||
|
||||
By default, pybind11 automatically generates and prepends a signature to the docstring of a function
|
||||
registered with ``module::def()`` and ``class_::def()``. Sometimes this
|
||||
behavior is not desirable, because you want to provide your own signature or remove
|
||||
the docstring completely to exclude the function from the Sphinx documentation.
|
||||
The class ``options`` allows you to selectively suppress auto-generated signatures:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
PYBIND11_MODULE(example, m) {
|
||||
py::options options;
|
||||
options.disable_function_signatures();
|
||||
|
||||
m.def("add", [](int a, int b) { return a + b; }, "A function which adds two numbers");
|
||||
}
|
||||
|
||||
Note that changes to the settings affect only function bindings created during the
|
||||
lifetime of the ``options`` instance. When it goes out of scope at the end of the module's init function,
|
||||
the default settings are restored to prevent unwanted side effects.
|
||||
|
||||
.. [#f4] http://www.sphinx-doc.org
|
||||
.. [#f5] http://github.com/pybind/python_example
|
||||
|
|
@ -0,0 +1,13 @@
|
|||
Python C++ interface
|
||||
####################
|
||||
|
||||
pybind11 exposes Python types and functions using thin C++ wrappers, which
|
||||
makes it possible to conveniently call Python code from C++ without resorting
|
||||
to Python's C API.
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
object
|
||||
numpy
|
||||
utilities
|
||||
|
|
@ -0,0 +1,386 @@
|
|||
.. _numpy:
|
||||
|
||||
NumPy
|
||||
#####
|
||||
|
||||
Buffer protocol
|
||||
===============
|
||||
|
||||
Python supports an extremely general and convenient approach for exchanging
|
||||
data between plugin libraries. Types can expose a buffer view [#f2]_, which
|
||||
provides fast direct access to the raw internal data representation. Suppose we
|
||||
want to bind the following simplistic Matrix class:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
class Matrix {
|
||||
public:
|
||||
Matrix(size_t rows, size_t cols) : m_rows(rows), m_cols(cols) {
|
||||
m_data = new float[rows*cols];
|
||||
}
|
||||
float *data() { return m_data; }
|
||||
size_t rows() const { return m_rows; }
|
||||
size_t cols() const { return m_cols; }
|
||||
private:
|
||||
size_t m_rows, m_cols;
|
||||
float *m_data;
|
||||
};
|
||||
|
||||
The following binding code exposes the ``Matrix`` contents as a buffer object,
|
||||
making it possible to cast Matrices into NumPy arrays. It is even possible to
|
||||
completely avoid copy operations with Python expressions like
|
||||
``np.array(matrix_instance, copy = False)``.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::class_<Matrix>(m, "Matrix", py::buffer_protocol())
|
||||
.def_buffer([](Matrix &m) -> py::buffer_info {
|
||||
return py::buffer_info(
|
||||
m.data(), /* Pointer to buffer */
|
||||
sizeof(float), /* Size of one scalar */
|
||||
py::format_descriptor<float>::format(), /* Python struct-style format descriptor */
|
||||
2, /* Number of dimensions */
|
||||
{ m.rows(), m.cols() }, /* Buffer dimensions */
|
||||
{ sizeof(float) * m.cols(), /* Strides (in bytes) for each index */
|
||||
sizeof(float) }
|
||||
);
|
||||
});
|
||||
|
||||
Supporting the buffer protocol in a new type involves specifying the special
|
||||
``py::buffer_protocol()`` tag in the ``py::class_`` constructor and calling the
|
||||
``def_buffer()`` method with a lambda function that creates a
|
||||
``py::buffer_info`` description record on demand describing a given matrix
|
||||
instance. The contents of ``py::buffer_info`` mirror the Python buffer protocol
|
||||
specification.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
struct buffer_info {
|
||||
void *ptr;
|
||||
ssize_t itemsize;
|
||||
std::string format;
|
||||
ssize_t ndim;
|
||||
std::vector<ssize_t> shape;
|
||||
std::vector<ssize_t> strides;
|
||||
};
|
||||
|
||||
To create a C++ function that can take a Python buffer object as an argument,
|
||||
simply use the type ``py::buffer`` as one of its arguments. Buffers can exist
|
||||
in a great variety of configurations, hence some safety checks are usually
|
||||
necessary in the function body. Below, you can see a basic example of how to
|
||||
define a custom constructor for the Eigen double precision matrix
|
||||
(``Eigen::MatrixXd``) type, which supports initialization from compatible
|
||||
buffer objects (e.g. a NumPy matrix).
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
/* Bind MatrixXd (or some other Eigen type) to Python */
|
||||
typedef Eigen::MatrixXd Matrix;
|
||||
|
||||
typedef Matrix::Scalar Scalar;
|
||||
constexpr bool rowMajor = Matrix::Flags & Eigen::RowMajorBit;
|
||||
|
||||
py::class_<Matrix>(m, "Matrix", py::buffer_protocol())
|
||||
.def("__init__", [](Matrix &m, py::buffer b) {
|
||||
typedef Eigen::Stride<Eigen::Dynamic, Eigen::Dynamic> Strides;
|
||||
|
||||
/* Request a buffer descriptor from Python */
|
||||
py::buffer_info info = b.request();
|
||||
|
||||
/* Some sanity checks ... */
|
||||
if (info.format != py::format_descriptor<Scalar>::format())
|
||||
throw std::runtime_error("Incompatible format: expected a double array!");
|
||||
|
||||
if (info.ndim != 2)
|
||||
throw std::runtime_error("Incompatible buffer dimension!");
|
||||
|
||||
auto strides = Strides(
|
||||
info.strides[rowMajor ? 0 : 1] / (py::ssize_t)sizeof(Scalar),
|
||||
info.strides[rowMajor ? 1 : 0] / (py::ssize_t)sizeof(Scalar));
|
||||
|
||||
auto map = Eigen::Map<Matrix, 0, Strides>(
|
||||
static_cast<Scalar *>(info.ptr), info.shape[0], info.shape[1], strides);
|
||||
|
||||
new (&m) Matrix(map);
|
||||
});
|
||||
|
||||
For reference, the ``def_buffer()`` call for this Eigen data type should look
|
||||
as follows:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
.def_buffer([](Matrix &m) -> py::buffer_info {
|
||||
return py::buffer_info(
|
||||
m.data(), /* Pointer to buffer */
|
||||
sizeof(Scalar), /* Size of one scalar */
|
||||
py::format_descriptor<Scalar>::format(), /* Python struct-style format descriptor */
|
||||
2, /* Number of dimensions */
|
||||
{ m.rows(), m.cols() }, /* Buffer dimensions */
|
||||
{ sizeof(Scalar) * (rowMajor ? m.cols() : 1),
|
||||
sizeof(Scalar) * (rowMajor ? 1 : m.rows()) }
|
||||
/* Strides (in bytes) for each index */
|
||||
);
|
||||
})
|
||||
|
||||
For a much easier approach to binding Eigen types (although with some
|
||||
limitations), refer to the section on :doc:`/advanced/cast/eigen`.
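
For comparison, a minimal sketch of that easier route (assuming the optional
``pybind11/eigen.h`` header is included; the function name is made up):

.. code-block:: cpp

    #include <pybind11/eigen.h>

    // Eigen::MatrixXd is converted to/from NumPy arrays automatically
    m.def("scale_matrix", [](const Eigen::MatrixXd &mat, double s) -> Eigen::MatrixXd {
        return mat * s;
    });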
|
||||
|
||||
.. seealso::
|
||||
|
||||
The file :file:`tests/test_buffers.cpp` contains a complete example
|
||||
that demonstrates using the buffer protocol with pybind11 in more detail.
|
||||
|
||||
.. [#f2] http://docs.python.org/3/c-api/buffer.html
|
||||
|
||||
Arrays
|
||||
======
|
||||
|
||||
By exchanging ``py::buffer`` with ``py::array`` in the above snippet, we can
|
||||
restrict the function so that it only accepts NumPy arrays (rather than any
|
||||
type of Python object satisfying the buffer protocol).
|
||||
|
||||
In many situations, we want to define a function which only accepts a NumPy
|
||||
array of a certain data type. This is possible via the ``py::array_t<T>``
|
||||
template. For instance, the following function requires the argument to be a
|
||||
NumPy array containing double precision values.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
void f(py::array_t<double> array);
|
||||
|
||||
When it is invoked with a different type (e.g. an integer or a list of
|
||||
integers), the binding code will attempt to cast the input into a NumPy array
|
||||
of the requested type. Note that this feature requires the
|
||||
:file:`pybind11/numpy.h` header to be included.
|
||||
|
||||
Data in NumPy arrays is not guaranteed to be packed in a dense manner;
|
||||
furthermore, entries can be separated by arbitrary column and row strides.
|
||||
Sometimes, it can be useful to require a function to only accept dense arrays
|
||||
using either the C (row-major) or Fortran (column-major) ordering. This can be
|
||||
accomplished via a second template argument with values ``py::array::c_style``
|
||||
or ``py::array::f_style``.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
void f(py::array_t<double, py::array::c_style | py::array::forcecast> array);
|
||||
|
||||
The ``py::array::forcecast`` argument is the default value of the second
|
||||
template parameter, and it ensures that non-conforming arguments are converted
|
||||
into an array satisfying the specified requirements instead of trying the next
|
||||
function overload.
|
||||
|
||||
Structured types
|
||||
================
|
||||
|
||||
In order for ``py::array_t`` to work with structured (record) types, we first
|
||||
need to register the memory layout of the type. This can be done via the
|
||||
``PYBIND11_NUMPY_DTYPE`` macro, called in the plugin definition code, which
|
||||
expects the type followed by field names:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
struct A {
|
||||
int x;
|
||||
double y;
|
||||
};
|
||||
|
||||
struct B {
|
||||
int z;
|
||||
A a;
|
||||
};
|
||||
|
||||
// ...
|
||||
PYBIND11_MODULE(test, m) {
|
||||
// ...
|
||||
|
||||
PYBIND11_NUMPY_DTYPE(A, x, y);
|
||||
PYBIND11_NUMPY_DTYPE(B, z, a);
|
||||
/* now both A and B can be used as template arguments to py::array_t */
|
||||
}
|
||||
|
||||
The structure should consist of fundamental arithmetic types, ``std::complex``,
|
||||
previously registered substructures, and arrays of any of the above. Both C++
|
||||
arrays and ``std::array`` are supported. While there is a static assertion to
|
||||
prevent many types of unsupported structures, it is still the user's
|
||||
responsibility to use only "plain" structures that can be safely manipulated as
|
||||
raw memory without violating invariants.
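
Once registered, such a dtype can be used like any other element type. A small
sketch reusing ``A`` from above (the function name is made up, and the code
assumes a densely packed 1-D input):

.. code-block:: cpp

    m.def("sum_x", [](py::array_t<A> arr) {
        py::buffer_info buf = arr.request();
        if (buf.ndim != 1)
            throw std::runtime_error("Expected a 1-D array of A records");
        auto *ptr = static_cast<A *>(buf.ptr);
        int total = 0;
        for (ssize_t i = 0; i < buf.shape[0]; i++)
            total += ptr[i].x;
        return total;
    });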
|
||||
|
||||
Vectorizing functions
|
||||
=====================
|
||||
|
||||
Suppose we want to bind a function with the following signature to Python so
|
||||
that it can process arbitrary NumPy array arguments (vectors, matrices, general
|
||||
N-D arrays) in addition to its normal arguments:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
double my_func(int x, float y, double z);
|
||||
|
||||
After including the ``pybind11/numpy.h`` header, this is extremely simple:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
m.def("vectorized_func", py::vectorize(my_func));
|
||||
|
||||
Invoking the function as shown below causes 4 calls to be made to ``my_func``, one for
|
||||
each of the array elements. The significant advantage of this compared to
|
||||
solutions like ``numpy.vectorize()`` is that the loop over the elements runs
|
||||
entirely on the C++ side and can be crunched down into a tight, optimized loop
|
||||
by the compiler. The result is returned as a NumPy array of type
|
||||
``numpy.dtype.float64``.
|
||||
|
||||
.. code-block:: pycon
|
||||
|
||||
>>> x = np.array([[1, 3],[5, 7]])
|
||||
>>> y = np.array([[2, 4],[6, 8]])
|
||||
>>> z = 3
|
||||
>>> result = vectorized_func(x, y, z)
|
||||
|
||||
The scalar argument ``z`` is transparently replicated 4 times. The input
|
||||
arrays ``x`` and ``y`` are automatically converted into the right types (they
|
||||
are of type ``numpy.dtype.int64`` but need to be ``numpy.dtype.int32`` and
|
||||
``numpy.dtype.float32``, respectively).
|
||||
|
||||
.. note::
|
||||
|
||||
Only arithmetic, complex, and POD types passed by value or by ``const &``
|
||||
reference are vectorized; all other arguments are passed through as-is.
|
||||
Functions taking rvalue reference arguments cannot be vectorized.
|
||||
|
||||
In cases where the computation is too complicated to be reduced to
|
||||
``vectorize``, it will be necessary to create and access the buffer contents
|
||||
manually. The following snippet contains a complete example that shows how this
|
||||
works (the code is somewhat contrived, since it could have been done more
|
||||
simply using ``vectorize``).
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
#include <pybind11/pybind11.h>
|
||||
#include <pybind11/numpy.h>
|
||||
|
||||
namespace py = pybind11;
|
||||
|
||||
py::array_t<double> add_arrays(py::array_t<double> input1, py::array_t<double> input2) {
|
||||
py::buffer_info buf1 = input1.request(), buf2 = input2.request();
|
||||
|
||||
if (buf1.ndim != 1 || buf2.ndim != 1)
|
||||
throw std::runtime_error("Number of dimensions must be one");
|
||||
|
||||
if (buf1.size != buf2.size)
|
||||
throw std::runtime_error("Input shapes must match");
|
||||
|
||||
/* No pointer is passed, so NumPy will allocate the buffer */
|
||||
auto result = py::array_t<double>(buf1.size);
|
||||
|
||||
py::buffer_info buf3 = result.request();
|
||||
|
||||
double *ptr1 = (double *) buf1.ptr,
|
||||
*ptr2 = (double *) buf2.ptr,
|
||||
*ptr3 = (double *) buf3.ptr;
|
||||
|
||||
for (size_t idx = 0; idx < buf1.shape[0]; idx++)
|
||||
ptr3[idx] = ptr1[idx] + ptr2[idx];
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
PYBIND11_MODULE(test, m) {
|
||||
m.def("add_arrays", &add_arrays, "Add two NumPy arrays");
|
||||
}
|
||||
|
||||
.. seealso::
|
||||
|
||||
The file :file:`tests/test_numpy_vectorize.cpp` contains a complete
|
||||
example that demonstrates using :func:`vectorize` in more detail.
|
||||
|
||||
Direct access
|
||||
=============
|
||||
|
||||
For performance reasons, particularly when dealing with very large arrays, it
|
||||
is often desirable to directly access array elements without internal checking
|
||||
of dimensions and bounds on every access when indices are known to be already
|
||||
valid. To avoid such checks, the ``array`` class and ``array_t<T>`` template
|
||||
class offer an unchecked proxy object that can be used for this unchecked
|
||||
access through the ``unchecked<N>`` and ``mutable_unchecked<N>`` methods,
|
||||
where ``N`` gives the required dimensionality of the array:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
m.def("sum_3d", [](py::array_t<double> x) {
|
||||
auto r = x.unchecked<3>(); // x must have ndim = 3; can be non-writeable
|
||||
double sum = 0;
|
||||
for (ssize_t i = 0; i < r.shape(0); i++)
|
||||
for (ssize_t j = 0; j < r.shape(1); j++)
|
||||
for (ssize_t k = 0; k < r.shape(2); k++)
|
||||
sum += r(i, j, k);
|
||||
return sum;
|
||||
});
|
||||
m.def("increment_3d", [](py::array_t<double> x) {
|
||||
auto r = x.mutable_unchecked<3>(); // Will throw if ndim != 3 or flags.writeable is false
|
||||
for (ssize_t i = 0; i < r.shape(0); i++)
|
||||
for (ssize_t j = 0; j < r.shape(1); j++)
|
||||
for (ssize_t k = 0; k < r.shape(2); k++)
|
||||
r(i, j, k) += 1.0;
|
||||
}, py::arg().noconvert());
|
||||
|
||||
To obtain the proxy from an ``array`` object, you must specify both the data
|
||||
type and number of dimensions as template arguments, such as ``auto r =
|
||||
myarray.mutable_unchecked<float, 2>()``.
|
||||
|
||||
If the number of dimensions is not known at compile time, you can omit the
|
||||
dimensions template parameter (i.e. calling ``arr_t.unchecked()`` or
|
||||
``arr.unchecked<T>()``). This will give you a proxy object that works in the
|
||||
same way, but results in less optimizable code and thus a small efficiency
|
||||
loss in tight loops.
|
||||
|
||||
Note that the returned proxy object directly references the array's data, and
|
||||
only reads its shape, strides, and writeable flag when constructed. You must
|
||||
take care to ensure that the referenced array is not destroyed or reshaped for
|
||||
the duration of the returned object, typically by limiting the scope of the
|
||||
returned instance.
|
||||
|
||||
The returned proxy object supports some of the same methods as ``py::array`` so
|
||||
that it can be used as a drop-in replacement for some existing, index-checked
|
||||
uses of ``py::array``:
|
||||
|
||||
- ``r.ndim()`` returns the number of dimensions
|
||||
|
||||
- ``r.data(1, 2, ...)`` and ``r.mutable_data(1, 2, ...)`` return a pointer to
|
||||
the ``const T`` or ``T`` data, respectively, at the given indices. The
|
||||
latter is only available to proxies obtained via ``a.mutable_unchecked()``.
|
||||
|
||||
- ``itemsize()`` returns the size of an item in bytes, i.e. ``sizeof(T)``.
|
||||
|
||||
- ``shape(n)`` returns the size of dimension ``n``
|
||||
|
||||
- ``size()`` returns the total number of elements (i.e. the product of the shapes).
|
||||
|
||||
- ``nbytes()`` returns the number of bytes used by the referenced elements
|
||||
(i.e. ``itemsize()`` times ``size()``).
|
||||
|
||||
.. seealso::
|
||||
|
||||
The file :file:`tests/test_numpy_array.cpp` contains additional examples
|
||||
demonstrating the use of this feature.
|
||||
|
||||
Ellipsis
|
||||
========
|
||||
|
||||
Python 3 provides a convenient ``...`` ellipsis notation that is often used to
|
||||
slice multidimensional arrays. For instance, the following snippet extracts the
|
||||
middle dimensions of a tensor with the first and last index set to zero.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
a = ...  # a NumPy array
|
||||
b = a[0, ..., 0]
|
||||
|
||||
The ``py::ellipsis()`` function can be used to perform the same
|
||||
operation on the C++ side:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::array a = /* A NumPy array */;
|
||||
py::array b = a[py::make_tuple(0, py::ellipsis(), 0)];
|
||||
|
|
@ -0,0 +1,170 @@
|
|||
Python types
|
||||
############
|
||||
|
||||
Available wrappers
|
||||
==================
|
||||
|
||||
All major Python types are available as thin C++ wrapper classes. These
|
||||
can also be used as function parameters -- see :ref:`python_objects_as_args`.
|
||||
|
||||
Available types include :class:`handle`, :class:`object`, :class:`bool_`,
|
||||
:class:`int_`, :class:`float_`, :class:`str`, :class:`bytes`, :class:`tuple`,
|
||||
:class:`list`, :class:`dict`, :class:`slice`, :class:`none`, :class:`capsule`,
|
||||
:class:`iterable`, :class:`iterator`, :class:`function`, :class:`buffer`,
|
||||
:class:`array`, and :class:`array_t`.
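
These wrappers can be constructed and manipulated directly from C++; a small
sketch:

.. code-block:: cpp

    using namespace pybind11::literals;

    py::list names;
    names.append("Alice");
    names.append("Bob");

    py::dict ages("Alice"_a = 30, "Bob"_a = 25);
    ages["Carol"] = 35;

    py::print(names, ages);  // ['Alice', 'Bob'] {'Alice': 30, 'Bob': 25, 'Carol': 35}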
|
||||
|
||||
Casting back and forth
|
||||
======================
|
||||
|
||||
In this kind of mixed code, it is often necessary to convert arbitrary C++
|
||||
types to Python, which can be done using :func:`py::cast`:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
MyClass *cls = ..;
|
||||
py::object obj = py::cast(cls);
|
||||
|
||||
The reverse direction uses the following syntax:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::object obj = ...;
|
||||
MyClass *cls = obj.cast<MyClass *>();
|
||||
|
||||
When conversion fails, both directions throw the exception :class:`cast_error`.
|
||||
|
||||
.. _python_libs:
|
||||
|
||||
Accessing Python libraries from C++
|
||||
===================================
|
||||
|
||||
It is also possible to import objects defined in the Python standard
|
||||
library or available in the current Python environment (``sys.path``) and work
|
||||
with these in C++.
|
||||
|
||||
This example obtains a reference to the Python ``Decimal`` class.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// Equivalent to "from decimal import Decimal"
|
||||
py::object Decimal = py::module::import("decimal").attr("Decimal");
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// Try to import scipy
|
||||
py::object scipy = py::module::import("scipy");
|
||||
return scipy.attr("__version__");
|
||||
|
||||
.. _calling_python_functions:
|
||||
|
||||
Calling Python functions
|
||||
========================
|
||||
|
||||
It is also possible to call Python classes, functions and methods
|
||||
via ``operator()``.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// Construct a Python object of class Decimal
|
||||
py::object pi = Decimal("3.14159");
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// Use Python to make our directories
|
||||
py::object os = py::module::import("os");
|
||||
py::object makedirs = os.attr("makedirs");
|
||||
makedirs("/tmp/path/to/somewhere");
|
||||
|
||||
One can convert the result obtained from Python to a pure C++ version
|
||||
if a ``py::class_`` or type conversion is defined.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::function f = <...>;
|
||||
py::object result_py = f(1234, "hello", some_instance);
|
||||
MyClass &result = result_py.cast<MyClass>();
|
||||
|
||||
.. _calling_python_methods:
|
||||
|
||||
Calling Python methods
|
||||
========================
|
||||
|
||||
To call an object's method, one can again use ``.attr`` to obtain access to the
|
||||
Python method.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// Calculate e^π in decimal
|
||||
py::object exp_pi = pi.attr("exp")();
|
||||
py::print(py::str(exp_pi));
|
||||
|
||||
In the example above ``pi.attr("exp")`` is a *bound method*: it will always call
|
||||
the method for that same instance of the class. Alternatively, one can create an
|
||||
*unbound method* via the Python class (instead of instance) and pass the ``self``
|
||||
object explicitly, followed by other arguments.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::object decimal_exp = Decimal.attr("exp");
|
||||
|
||||
// Compute the e^n for n=0..4
|
||||
for (int n = 0; n < 5; n++) {
|
||||
py::print(decimal_exp(Decimal(n)));
|
||||
}
|
||||
|
||||
Keyword arguments
|
||||
=================
|
||||
|
||||
Keyword arguments are also supported. In Python, there is the usual call syntax:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
def f(number, say, to):
|
||||
... # function code
|
||||
|
||||
f(1234, say="hello", to=some_instance) # keyword call in Python
|
||||
|
||||
In C++, the same call can be made using:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
using namespace pybind11::literals; // to bring in the `_a` literal
|
||||
f(1234, "say"_a="hello", "to"_a=some_instance); // keyword call in C++
|
||||
|
||||
Unpacking arguments
|
||||
===================
|
||||
|
||||
Unpacking of ``*args`` and ``**kwargs`` is also possible and can be mixed with
|
||||
other arguments:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// * unpacking
|
||||
py::tuple args = py::make_tuple(1234, "hello", some_instance);
|
||||
f(*args);
|
||||
|
||||
// ** unpacking
|
||||
py::dict kwargs = py::dict("number"_a=1234, "say"_a="hello", "to"_a=some_instance);
|
||||
f(**kwargs);
|
||||
|
||||
// mixed keywords, * and ** unpacking
|
||||
py::tuple args = py::make_tuple(1234);
|
||||
py::dict kwargs = py::dict("to"_a=some_instance);
|
||||
f(*args, "say"_a="hello", **kwargs);
|
||||
|
||||
Generalized unpacking according to PEP448_ is also supported:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::dict kwargs1 = py::dict("number"_a=1234);
|
||||
py::dict kwargs2 = py::dict("to"_a=some_instance);
|
||||
f(**kwargs1, "say"_a="hello", **kwargs2);
|
||||
|
||||
.. seealso::
|
||||
|
||||
The file :file:`tests/test_pytypes.cpp` contains a complete
|
||||
example that demonstrates passing native Python types in more detail. The
|
||||
file :file:`tests/test_callbacks.cpp` presents a few examples of calling
|
||||
Python functions from C++, including keyword arguments and unpacking.
|
||||
|
||||
.. _PEP448: https://www.python.org/dev/peps/pep-0448/
|
||||
|
|
@ -0,0 +1,144 @@
|
|||
Utilities
|
||||
#########
|
||||
|
||||
Using Python's print function in C++
|
||||
====================================
|
||||
|
||||
The usual way to write output in C++ is using ``std::cout`` while in Python one
|
||||
would use ``print``. Since these methods use different buffers, mixing them can
|
||||
lead to output order issues. To resolve this, pybind11 modules can use the
|
||||
:func:`py::print` function which writes to Python's ``sys.stdout`` for consistency.
|
||||
|
||||
Python's ``print`` function is replicated in the C++ API including optional
|
||||
keyword arguments ``sep``, ``end``, ``file``, ``flush``. Everything works as
|
||||
expected in Python:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::print(1, 2.0, "three"); // 1 2.0 three
|
||||
py::print(1, 2.0, "three", "sep"_a="-"); // 1-2.0-three
|
||||
|
||||
auto args = py::make_tuple("unpacked", true);
|
||||
py::print("->", *args, "end"_a="<-"); // -> unpacked True <-
|
||||
|
||||
.. _ostream_redirect:
|
||||
|
||||
Capturing standard output from ostream
|
||||
======================================
|
||||
|
||||
Often, a library will use the streams ``std::cout`` and ``std::cerr`` to print,
|
||||
but this does not play well with Python's standard ``sys.stdout`` and ``sys.stderr``
|
||||
redirection. Replacing a library's printing with `py::print <print>` may not
|
||||
be feasible. This can be fixed using a guard around the library function that
|
||||
redirects output to the corresponding Python streams:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
#include <pybind11/iostream.h>
|
||||
|
||||
...
|
||||
|
||||
// Add a scoped redirect for your noisy code
|
||||
m.def("noisy_func", []() {
|
||||
py::scoped_ostream_redirect stream(
|
||||
std::cout, // std::ostream&
|
||||
py::module::import("sys").attr("stdout") // Python output
|
||||
);
|
||||
call_noisy_func();
|
||||
});
|
||||
|
||||
This method respects flushes on the output streams and will flush if needed
|
||||
when the scoped guard is destroyed. This allows the output to be redirected in
|
||||
real time, such as to a Jupyter notebook. The two arguments, the C++ stream and
|
||||
the Python output, are optional, and default to standard output if not given. An
|
||||
extra type, `py::scoped_estream_redirect <scoped_estream_redirect>`, is identical
|
||||
except for defaulting to ``std::cerr`` and ``sys.stderr``; this can be useful with
|
||||
`py::call_guard`, which allows multiple items, but uses the default constructor:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// Alternative: Call single function using call guard
|
||||
m.def("noisy_func", &call_noisy_function,
|
||||
py::call_guard<py::scoped_ostream_redirect,
|
||||
py::scoped_estream_redirect>());
|
||||
|
||||
The redirection can also be done in Python with the addition of a context
|
||||
manager, using the `py::add_ostream_redirect() <add_ostream_redirect>` function:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::add_ostream_redirect(m, "ostream_redirect");
|
||||
|
||||
The name in Python defaults to ``ostream_redirect`` if no name is passed. This
|
||||
creates the following context manager in Python:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
with ostream_redirect(stdout=True, stderr=True):
|
||||
noisy_function()
|
||||
|
||||
It defaults to redirecting both streams, though you can use the keyword
|
||||
arguments to disable one of the streams if needed.
|
||||
|
||||
.. note::
|
||||
|
||||
The above methods will not redirect C-level output to file descriptors, such
|
||||
as ``fprintf``. For those cases, you'll need to redirect the file
|
||||
descriptors either directly in C or with Python's ``os.dup2`` function
|
||||
in an operating-system dependent way.
|
||||
|
||||
.. _eval:
|
||||
|
||||
Evaluating Python expressions from strings and files
|
||||
====================================================
|
||||
|
||||
pybind11 provides the `eval`, `exec` and `eval_file` functions to evaluate
|
||||
Python expressions and statements. The following example illustrates how they
|
||||
can be used.
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
// At beginning of file
|
||||
#include <pybind11/eval.h>
|
||||
|
||||
...
|
||||
|
||||
// Evaluate in scope of main module
|
||||
py::object scope = py::module::import("__main__").attr("__dict__");
|
||||
|
||||
// Evaluate an isolated expression
|
||||
int result = py::eval("my_variable + 10", scope).cast<int>();
|
||||
|
||||
// Evaluate a sequence of statements
|
||||
py::exec(
|
||||
"print('Hello')\n"
|
||||
"print('world!');",
|
||||
scope);
|
||||
|
||||
// Evaluate the statements in a separate Python file on disk
|
||||
py::eval_file("script.py", scope);
|
||||
|
||||
C++11 raw string literals are also supported and quite handy for this purpose.
|
||||
The only requirement is that the first statement must be on a new line following
|
||||
the raw string delimiter ``R"(``, ensuring all lines have common leading indent:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
py::exec(R"(
|
||||
x = get_answer()
|
||||
if x == 42:
|
||||
print('Hello World!')
|
||||
else:
|
||||
print('Bye!')
|
||||
)", scope
|
||||
);
|
||||
|
||||
.. note::
|
||||
|
||||
`eval` and `eval_file` accept a template parameter that describes how the
|
||||
string/file should be interpreted. Possible choices include ``eval_expr``
|
||||
(isolated expression), ``eval_single_statement`` (a single statement, return
|
||||
value is always ``none``), and ``eval_statements`` (sequence of statements,
|
||||
return value is always ``none``). `eval` defaults to ``eval_expr``,
|
||||
`eval_file` defaults to ``eval_statements`` and `exec` is just a shortcut
|
||||
for ``eval<eval_statements>``.
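
For instance, a single statement can be run as follows (a small sketch that
reuses the ``scope`` object from the example above):

.. code-block:: cpp

    // Runs exactly one statement; the return value is always none
    py::eval<py::eval_single_statement>("x = 42", scope);
    int x = scope["x"].cast<int>();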
|
||||