4 results for Camera placement at Duke University
Abstract:
We present a quantitative phase microscopy method that uses a Bayer mosaic color camera to simultaneously acquire off-axis interferograms in transmission mode at two distinct wavelengths. Wrapped phase information is processed using a two-wavelength algorithm to extend the range of the optical path delay measurements that can be detected using a single temporal acquisition. We experimentally demonstrate this technique by acquiring the phase profiles of optically clear microstructures without 2π ambiguities. In addition, the phase noise contribution arising from spectral channel crosstalk on the color camera is quantified.
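To make the two-wavelength idea concrete, here is a minimal sketch (not the paper's exact processing chain) of how a pair of wrapped phase maps yields an optical path delay that is unambiguous up to the synthetic wavelength Λ = λ₁λ₂/|λ₁ − λ₂|; the wavelengths mentioned in the comment are illustrative only.

```python
import numpy as np

def two_wavelength_opd(phi1, phi2, lam1, lam2):
    """Estimate optical path delay (OPD) from two wrapped phase maps.

    phi1, phi2 : wrapped phase maps (radians) measured at wavelengths
                 lam1, lam2 (same length units as the returned OPD).
    Returns a coarse OPD map, unambiguous up to the synthetic
    wavelength lam1 * lam2 / |lam1 - lam2|.
    """
    lam_synth = lam1 * lam2 / abs(lam1 - lam2)   # synthetic (beat) wavelength
    dphi = np.mod(phi1 - phi2, 2 * np.pi)        # difference phase, re-wrapped to [0, 2*pi)
    return dphi / (2 * np.pi) * lam_synth

# Illustration: wrapped phases at 633 nm and 532 nm give a synthetic
# wavelength of roughly 3.3 um, greatly extending the unambiguous OPD range.
```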
Abstract:
Background. Thoracic epidural catheters provide the best-quality postoperative pain relief for major abdominal and thoracic surgical procedures, but placement is one of the most challenging procedures in the repertoire of an anesthesiologist. Most patients presenting for a procedure that would benefit from a thoracic epidural catheter have already had high-resolution imaging that may assist catheter placement. Methods. This retrospective study used data from 168 patients to examine how well the epidural-skin distance (ESD) measured on computed tomography (CT) predicts the loss-of-resistance depth recorded during epidural placement. Additionally, anesthesiologists' measurements of this distance were compared with those of a radiologist who specializes in spine imaging. Results. There was a strong association between the CT measurement and loss-of-resistance depth (P < 0.0001); the presence of morbid obesity (BMI > 35) changed this relationship (P = 0.007). Anesthesiologists' CT measurements agreed closely with those of the gold-standard radiologist (all individual ICCs > 0.9). Conclusions. Overall, this study supports examining a recent CT scan to aid the placement of a thoracic epidural catheter. Making use of these scans may lead to faster epidural placements, fewer accidental dural punctures, and better epidural blockade.
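As a rough, hedged illustration of the kind of analysis described (the data below are synthetic stand-ins, not the study's 168 patients, and the column names are hypothetical), a linear model with an interaction term can test whether morbid obesity changes the relationship between CT-measured ESD and loss-of-resistance depth:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in data: CT-measured epidural-skin distance (cm),
# a morbid-obesity flag (BMI > 35), and a loss-of-resistance depth
# that loosely tracks the CT measurement.
n = 168
esd_ct = rng.uniform(3.0, 9.0, n)
bmi_gt_35 = rng.integers(0, 2, n)
lor_depth = (0.5 + 0.95 * esd_ct
             + 0.6 * bmi_gt_35 * (esd_ct - 5.0)
             + rng.normal(0.0, 0.4, n))

df = pd.DataFrame({"lor_depth": lor_depth,
                   "esd_ct": esd_ct,
                   "bmi_gt_35": bmi_gt_35})

# Linear model with an interaction term, analogous to asking whether
# morbid obesity modifies the ESD / loss-of-resistance relationship.
model = smf.ols("lor_depth ~ esd_ct * bmi_gt_35", data=df).fit()
print(model.summary())
```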
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has increased the need for algorithms that infer, understand, and utilize information about the position and orientation of sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations, called the mirrored normal-Bingham distribution, is described. This distribution generalizes both the normal distribution in Euclidean space and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving an analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
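For intuition only, the two building blocks that the mirrored normal-Bingham distribution generalizes can be sketched as unnormalized log-densities. The factored position/orientation form below is a simplifying assumption for illustration, not the thesis's actual coupled definition.

```python
import numpy as np

def gaussian_logpdf_unnorm(x, mu, cov):
    """Unnormalized Gaussian log-density over a Euclidean position."""
    d = x - mu
    return -0.5 * d @ np.linalg.solve(cov, d)

def bingham_logpdf_unnorm(q, A):
    """Unnormalized Bingham log-density over a unit quaternion q.

    A is a symmetric 4x4 concentration matrix. Note the antipodal
    symmetry p(q) = p(-q), which matches quaternion orientations,
    where q and -q represent the same rotation.
    """
    q = q / np.linalg.norm(q)
    return q @ A @ q

def pose_logpdf_unnorm(x, q, mu, cov, A):
    """Factored position/orientation log-density (simplifying assumption:
    no coupling between the Gaussian and Bingham parts)."""
    return gaussian_logpdf_unnorm(x, mu, cov) + bingham_logpdf_unnorm(q, A)
```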
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
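A minimal sketch, under strong simplifying assumptions (known intrinsics, flat ground plane, only camera height and tilt unknown, a Gaussian prior on pedestrian height of about 1.7 m; the focal length, principal point, and parameterization are hypothetical, not the thesis's formulation), shows how an off-the-shelf least-squares solver can handle such an autocalibration problem:

```python
import numpy as np
from scipy.optimize import least_squares

F = 1000.0               # assumed focal length (pixels)
CX, CY = 960.0, 540.0    # assumed principal point

def project(p_world, cam_height, tilt):
    """Project a world point (X, Y up, Z forward) with a camera raised by
    cam_height above the ground and pitched down by `tilt` radians."""
    c, s = np.cos(tilt), np.sin(tilt)
    X, Y, Z = p_world
    Yc = Y - cam_height           # point height relative to the camera centre
    y = c * Yc + s * Z            # pitch the camera down by `tilt`
    z = -s * Yc + c * Z           # depth along the optical axis
    return np.array([CX + F * X / z, CY - F * y / z])

def residuals(params, foot_px, head_px, prior_h=1.7, prior_sd=0.1):
    """Stack reprojection errors for foot/head detections plus a Gaussian
    prior residual on each pedestrian's height."""
    cam_height, tilt = params[0], params[1]
    peds = params[2:].reshape(-1, 3)   # per-pedestrian (X, Z, H)
    res = []
    for (X, Z, H), f_obs, h_obs in zip(peds, foot_px, head_px):
        res.append(project((X, 0.0, Z), cam_height, tilt) - f_obs)  # foot on ground
        res.append(project((X, H, Z), cam_height, tilt) - h_obs)    # head at height H
        res.append([(H - prior_h) / prior_sd])                      # height prior
    return np.concatenate([np.ravel(r) for r in res])
```

A call such as least_squares(residuals, x0, args=(foot_px, head_px)), with x0 holding an initial guess for camera height, tilt, and each pedestrian's (X, Z, H), then recovers the camera parameters jointly with the pedestrians' positions; the height prior plays the role the abstract assigns to prior distributions over the 3D points.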
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
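A rough sketch of the feature-space idea, assuming OpenCV (cv2) and a crude per-pixel orientation-channel stand-in for HOG: each feature channel is warped by the inverse-perspective homography instead of warping the image first. This naive per-channel warp ignores the orientation remixing that a faithful feature-space mapping (and the thesis's method) would account for.

```python
import cv2
import numpy as np

def orientation_channels(gray, n_bins=9):
    """Per-pixel gradient-orientation channels (a crude HOG-like stand-in)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    ang = np.mod(ang, 180.0)                                 # unsigned orientations
    bins = np.minimum((ang / (180.0 / n_bins)).astype(np.int32), n_bins - 1)
    chans = np.zeros((n_bins,) + gray.shape, np.float32)
    for b in range(n_bins):
        chans[b] = np.where(bins == b, mag, 0.0)             # magnitude-weighted votes
    return chans

def warp_feature_channels(chans, H, out_size):
    """Apply an inverse-perspective homography H to each feature channel
    instead of to the image (naive per-channel warp)."""
    return np.stack([cv2.warpPerspective(c, H, out_size) for c in chans])
```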
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.