# A Prototype Optical Tracking System Investigation and Development


[Figure 7.5: block diagram. Inputs to the Quaternion Pose Estimation block: a camera model; a collection of centroid data and corresponding 3D marker co-ordinates; an initial quaternion estimate. Outputs: the estimated rotation between world coordinates and the camera module; the estimated position in world coordinates.]

Figure 7.5 The inputs and outputs of the quaternion based pose estimation algorithm are shown in this figure.

list is small. This can occur if the beacon is far away from the camera or if it is viewed by the camera at a large angle off its principal axis. The second reason for an incorrect correspondence is occlusion: the modeled centroids may not match the measured centroid set because markers are occluded by something. The final possibility is that there may be unknown centroids in the measured list which do not correspond to centroids in the model. These can be caused by reflections and by bright lights that are not markers. Given these problems it may seem that the algorithm has little chance of working, but by controlling the above factors the system can be set up so that a correspondence can be found without problems.

## 7.4 Pose estimation using quaternions

An iterative pose estimation algorithm was designed that seeks to estimate the value of a unit quaternion using the output from the correspondence algorithm. The quaternion describes the rotation between the world coordinate frame and the hub enclosure’s frame. The algorithm seeks to minimise a function F(q) using the method of steepest descent [91]. This corresponds to minimising the variance of a cloud of points. The quaternion that results from the minimisation can also be used to calculate the hub enclosure’s position. Figure 7.5 shows the inputs and outputs of the algorithm. The algorithm requires as input an initial quaternion rotation, a collection of 3D points and the coordinates of the centroids that correspond to them, and a camera model.
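The text derives the analytic gradient later in the chapter; as a rough illustration only, the steepest-descent loop over a unit quaternion can be sketched with a central-difference numerical gradient. Everything here is a hypothetical sketch, not the thesis implementation: the function names `quat_rotate` and `steepest_descent`, the fixed step size, and the re-normalisation after every update are all assumptions.

```python
import numpy as np

def quat_rotate(q, pts):
    """Rotate an (N, 3) array of points by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return pts @ R.T

def steepest_descent(F, q0, step=1e-2, iters=200, eps=1e-6):
    """Minimise F over unit quaternions by steepest descent.

    The gradient is approximated by central differences, and the quaternion
    is projected back onto the unit sphere after every update.
    """
    q = q0 / np.linalg.norm(q0)
    for _ in range(iters):
        g = np.zeros(4)
        for i in range(4):
            dq = np.zeros(4)
            dq[i] = eps
            g[i] = (F(q + dq) - F(q - dq)) / (2 * eps)
        q = q - step * g
        q /= np.linalg.norm(q)   # stay on the unit quaternion sphere
    return q
```

Because the numerical gradient evaluates `F` slightly off the unit sphere, the cost function passed in should normalise its argument before using it as a rotation; the re-normalisation after each step then keeps the iterate a valid rotation.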

### 7.4.1 Definition of function to minimise

The following description applies to one camera module; an extension to multiple cameras is given in Section 7.4.4. The goal is to calculate the variance of a cloud of points in 3D, described by the function

$$
F(q) = \frac{1}{N} \sum_{n=1}^{N} \left| p_n(q) - \bar{p}(q) \right|^2 . \tag{7.1}
$$
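Equation (7.1) is the mean squared distance of the $N$ points $p_n(q)$ from their mean $\bar{p}(q)$. As a minimal sketch (the function name is hypothetical, and forming the points $p_n(q)$ from a candidate quaternion is assumed to happen upstream):

```python
import numpy as np

def point_cloud_variance(points):
    """F in Eq. (7.1): mean squared distance of N points from their mean.

    `points` is an (N, 3) array holding p_n(q) for a candidate quaternion q.
    """
    p_bar = points.mean(axis=0)                          # \bar{p}(q)
    return np.mean(np.sum((points - p_bar) ** 2, axis=1))
```

Driving this value to zero collapses the cloud onto a single point, which is the sense in which the minimisation "shrinks the variance" of the cloud.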
