Pose (computer vision)
In the fields of computing and computer vision, pose (or spatial pose) represents the position and orientation of an object, usually in three dimensions.[1] Poses are often stored internally as transformation matrices.[2][3] The term “pose” is largely synonymous with the term “transform”, but a transform may often include scale, whereas pose does not.[4][5]
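As a concrete illustration of the matrix representation, the sketch below (assuming Python with NumPy; `make_pose` is a hypothetical helper, not part of any cited library) stores a rigid pose as a 4 × 4 homogeneous transformation matrix, i.e. a rotation and a translation with no scale:

```python
import numpy as np

def make_pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transformation matrix from a 3x3 rotation
    matrix and a 3-vector translation (no scale, so it is a rigid pose)."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

# A pose rotated 90 degrees about the z-axis and shifted 1 m along x.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
T = make_pose(Rz, np.array([1.0, 0.0, 0.0]))

# Applying the pose maps a point from the object frame to the world frame.
p_object = np.array([0.5, 0.0, 0.0, 1.0])   # homogeneous coordinates
p_world = T @ p_object                      # -> (1.0, 0.5, 0.0, 1.0)
```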
In computer vision, the pose of an object is often estimated from camera input by the process of pose estimation. This information can then be used, for example, to allow a robot to manipulate an object, or to avoid colliding with it, based on the object's perceived position and orientation in the environment. Other applications include skeletal action recognition.
Pose estimation
The specific task of determining the pose of an object in an image (or in stereo images or an image sequence) is referred to as pose estimation. Pose estimation problems can be solved in different ways depending on the image sensor configuration and the choice of methodology. Three classes of methodologies can be distinguished:
- Analytic or geometric methods: These assume that the image sensor (camera) is calibrated, so that the mapping from 3D points in the scene to 2D points in the image is known. If the geometry of the object is also known, the projected image of the object is a well-defined function of the object's pose. Once a set of control points on the object, typically corners or other feature points, has been identified, the pose can be solved from a set of equations relating the 3D coordinates of the points to their 2D image coordinates (see the PnP sketch after this list). When the correspondences between points are not known in advance, algorithms that determine the pose of one point cloud with respect to another are known as point set registration algorithms.
- Genetic algorithm methods: If the pose of an object does not have to be computed in real time, a genetic algorithm may be used. This approach is robust especially when the images are not perfectly calibrated. Here a candidate pose is the genetic representation, and the reprojection error between the projected object control points and the observed image points is the fitness function (see the toy sketch after this list).
- Learning-based methods: These methods use an artificial learning-based system which learns the mapping from 2D image features to the pose transformation. In short, a sufficiently large set of images of the object, in different poses, must be presented to the system during a learning phase. Once the learning phase is completed, the system should be able to estimate the object's pose from an image of the object (a schematic sketch follows the list).
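For the analytic/geometric class, the textbook formulation is the Perspective-n-Point (PnP) problem. Below is a minimal sketch, assuming Python with OpenCV; the control points, image points, and intrinsics are made-up illustrative values:

```python
import cv2
import numpy as np

# 3D control points on the object, expressed in the object's own frame.
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0],
                          [0.0, 0.1, 0.0]])

# Their detected 2D projections in the image, in pixels.
image_points = np.array([[320.0, 240.0],
                         [400.0, 238.0],
                         [402.0, 320.0],
                         [318.0, 322.0]])

# Camera intrinsics from a prior calibration; distortion assumed negligible.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

# Solve the PnP equations for the object's rotation and translation.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)   # axis-angle vector to 3x3 rotation matrix
```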
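The genetic-algorithm idea can be sketched as a toy evolutionary loop (again Python with OpenCV; a practical implementation would add crossover and a tuned mutation schedule, and none of this comes from a cited source):

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

def fitness(pose, object_points, image_points, K, dist):
    """Negative reprojection error of a candidate pose (the 'chromosome'),
    encoded as a 6-vector: axis-angle rotation followed by translation."""
    projected, _ = cv2.projectPoints(object_points, pose[:3], pose[3:], K, dist)
    return -np.linalg.norm(projected.reshape(-1, 2) - image_points)

def step(population, *args):
    """One generation: Gaussian mutation, then keep the fittest half."""
    mutated = population + rng.normal(scale=0.01, size=population.shape)
    pool = np.vstack([population, mutated])
    scores = np.array([fitness(p, *args) for p in pool])
    return pool[np.argsort(scores)[-len(population):]]
```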
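A learning-based method can likewise be sketched schematically (assuming Python with scikit-learn; the features, poses, and network size are placeholders, and a real system would train on image-derived features rather than random data):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder training set: each row of X is a feature vector extracted
# from one image of the object (e.g. flattened 2D keypoint positions);
# each row of y is the known 6-DoF pose (axis-angle + translation) under
# which that image was captured.
X = rng.random((1000, 8))
y = rng.random((1000, 6))

# Learning phase: fit the mapping from image features to pose parameters.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
model.fit(X, y)

# After the learning phase, features from a new image yield a pose estimate.
pose_estimate = model.predict(rng.random((1, 8)))
```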
Camera pose
Camera resectioning is the process of estimating the parameters of a pinhole camera model that approximates the camera which produced a given photograph or video; it determines which incoming light ray is associated with each pixel of the resulting image. In essence, the process determines the pose of the pinhole camera.
Usually, the camera parameters are represented in a 3 × 4 projection matrix called the camera matrix. The extrinsic parameters define the camera pose (position and orientation) while the intrinsic parameters specify the camera image format (focal length, pixel size, and image origin).
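The sketch below shows how the two parameter sets combine into the camera matrix (assuming Python with NumPy; all numeric values are illustrative):

```python
import numpy as np

# Intrinsic parameters: focal lengths in pixels and the principal point.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsic parameters: the camera pose as rotation R and translation t.
R = np.eye(3)                         # camera axes aligned with the world
t = np.array([[0.0], [0.0], [2.0]])   # world origin 2 m in front of the camera

# The 3x4 camera matrix P = K [R | t] maps homogeneous 3D points to pixels.
P = K @ np.hstack([R, t])

X = np.array([0.0, 0.0, 0.0, 1.0])    # the world origin, homogeneous
x = P @ X
pixel = x[:2] / x[2]                  # perspective division -> (320, 240)
```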
This process is often called geometric camera calibration or simply camera calibration, although that term may also refer to photometric camera calibration or be restricted to the estimation of the intrinsic parameters only. Exterior orientation and interior orientation refer to the determination of only the extrinsic and intrinsic parameters, respectively.
Classic camera calibration requires special calibration objects in the scene, whereas camera auto-calibration does not.
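A minimal sketch of classic calibration with a chessboard target, assuming Python with OpenCV (the filenames and pattern size are hypothetical):

```python
import cv2
import numpy as np

# Inner-corner grid of the chessboard and its 3D coordinates (z = 0 plane).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

object_points, image_points = [], []
for fname in ["calib_01.png", "calib_02.png", "calib_03.png"]:
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        object_points.append(objp)
        image_points.append(corners)

# Recover the intrinsics (camera matrix, distortion coefficients) and the
# extrinsic pose of the chessboard in every view.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, gray.shape[::-1], None, None)
```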
Camera resectioning is often used in the application of stereo vision, where the camera projection matrices of two cameras are used to calculate the 3D world coordinates of a point viewed by both cameras.
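A minimal sketch of that triangulation step, assuming Python with OpenCV and normalized image coordinates (K = I) with an illustrative 1 m baseline:

```python
import cv2
import numpy as np

# Projection matrices of two calibrated cameras (P = K [R | t], K = I here).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # 1 m to the right

# Matching observations of the same point in both images (2x1 arrays).
x1 = np.array([[0.0], [0.0]])
x2 = np.array([[-0.5], [0.0]])

# Linear triangulation returns the point in homogeneous coordinates.
X_h = cv2.triangulatePoints(P1, P2, x1, x2)
X = (X_h[:3] / X_h[3]).ravel()   # Euclidean world point, here (0, 0, 2)
```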
See also
- Gesture recognition
- Homography (computer vision)
- Camera calibration
- Structure from motion
- Essential matrix and Trifocal tensor (relative pose)
References
1. Hoff, William A.; Nguyen, Khoi; Lyon, Torsten (1996-10-29). "Computer-vision-based registration techniques for augmented reality". Intelligent Robots and Computer Vision XV: Algorithms, Techniques, Active Vision, and Materials Handling. Proc. SPIE 2904: 538–548. doi:10.1117/12.256311. Bibcode:1996SPIE.2904..538H. https://www.spiedigitallibrary.org/conference-proceedings-of-spie/2904/0000/Computer-vision-based-registration-techniques-for-augmented-reality/10.1117/12.256311.full.
2. "Pose (Position and Orientation)". DigitalRune Documentation. https://digitalrune.github.io/DigitalRune-Documentation/html/d995ee69-0650-4993-babd-1cdb1fd8fd7a.htm.
3. "Transformation matrices to geometry_msgs/Pose". ROS Answers: Open Source Q&A Forum. https://answers.ros.org/question/379109/transformation-matrices-to-geometry_msgspose/.
4. "Drake: Spatial Pose and Transform". https://drake.mit.edu/doxygen_cxx/group__multibody__spatial__pose.html.
5. "Transform". Apple Developer Documentation. https://developer.apple.com/documentation/realitykit/transform.
Original source: https://en.wikipedia.org/wiki/Pose_(computer_vision).