4 results for Range Expansion
at Massachusetts Institute of Technology
Abstract:
Structure from motion often refers to the computation of 3D structure from a matched sequence of images. However, a depth map of a surface is difficult to compute and may not be a good representation for storage and recognition. Given matched images, I will first show that the sign of the normal curvature in a given direction at a given point in the image can be computed from a simple difference of slopes of line segments in one image. Using this result, local surface patches can be classified as convex, concave, parabolic (cylindrical), hyperbolic (saddle point), or planar. At the same time, the translational component of the optical flow is obtained, from which the focus of expansion can be computed.
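For orientation, the convex/concave/parabolic/hyperbolic/planar labels above follow from the signs of the two principal curvatures. The sketch below only illustrates that classification step, assuming the two curvature signs have already been recovered; it is not the paper's slope-difference computation, and the sign convention (positive curvature bulging toward the viewer) is an assumption.

```python
def classify_patch(k1: float, k2: float, eps: float = 1e-6) -> str:
    """Classify a local surface patch from the signs of its two principal
    curvatures k1, k2 (values with |k| < eps are treated as zero).
    Sign convention assumed: positive curvature bulges toward the viewer."""
    s1 = 0 if abs(k1) < eps else (1 if k1 > 0 else -1)
    s2 = 0 if abs(k2) < eps else (1 if k2 > 0 else -1)
    if s1 == 0 and s2 == 0:
        return "planar"
    if s1 == 0 or s2 == 0:
        return "parabolic (cylindrical)"
    if s1 == s2:
        return "convex" if s1 > 0 else "concave"
    return "hyperbolic (saddle point)"
```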
Abstract:
Visibility constraints can aid the segmentation of foreground objects observed with multiple range images. In our approach, points are defined as foreground if they can be determined to occlude some empty space in the scene. We present an efficient algorithm to estimate foreground points in each range view using explicit epipolar search. In cases where the background pattern is stationary, we show how visibility constraints from other views can generate virtual background values at points with no valid depth in the primary view. We demonstrate the performance of both algorithms for detecting people in indoor office environments.
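A minimal sketch of the occlusion test described above, simplified to a single probe point rather than the paper's explicit epipolar search, with hypothetical pinhole-camera helpers and world-frame inputs: a primary-view point is flagged as foreground when some auxiliary view measures depth beyond a sample lying just behind it, i.e. that sample is known empty space.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of world point X (K intrinsics, world-to-camera [R|t]).
    Returns pixel coordinates (u, v) and camera-frame depth z."""
    Xc = R @ X + t
    z = Xc[2]
    uv = (K @ Xc)[:2] / z
    return uv, z

def is_foreground(X, primary_center, aux_views, step=0.05, margin=0.02):
    """Flag the 3-D point X (from the primary range image) as foreground if a
    sample just behind it along the primary viewing ray is seen as empty by
    some auxiliary view (K, R, t, depth_map), i.e. that view measures depth
    beyond the sample at the corresponding pixel."""
    ray = X - primary_center
    X_behind = X + step * ray / np.linalg.norm(ray)        # space occluded by X
    for K, R, t, depth_map in aux_views:
        uv, z = project(K, R, t, X_behind)
        if z <= 0:
            continue                                       # behind the auxiliary camera
        u, v = int(round(uv[0])), int(round(uv[1]))
        if not (0 <= v < depth_map.shape[0] and 0 <= u < depth_map.shape[1]):
            continue                                       # outside the auxiliary image
        measured = depth_map[v, u]
        if np.isfinite(measured) and measured > z + margin:
            return True                                    # auxiliary view sees past X_behind
    return False
```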
Abstract:
We report a 75dB, 2.8mW, 100Hz-10kHz envelope detector in a 1.5μm, 2.8V CMOS technology. The envelope detector performs input-dc-insensitive voltage-to-current-converting rectification followed by novel nanopower current-mode peak detection. The use of a subthreshold wide-linear-range transconductor (WLR OTA) allows greater than 1.7Vpp input voltage swings. We show theoretically that this optimal performance is technology-independent for the given topology and may be improved only by spending more power. A novel circuit topology is used to perform 140nW peak detection with controllable attack and release time constants. The lower limits of envelope detection are determined by the more dominant of two effects: the first is caused by the inability of amplified high-frequency signals to exceed the deadzone created by exponential nonlinearities in the rectifier; the second is due to an output current caused by thermal noise rectification. We demonstrate good agreement of experimentally measured results with theory. The envelope detector is useful in low-power bionic implants for the deaf, hearing aids, and speech-recognition front ends. Extension of the envelope detector to higher-frequency applications is straightforward if power consumption is increased.
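The attack and release behaviour above refers to the analog current-mode circuit; purely as a software analogy of a peak detector with controllable attack and release time constants (not the chip's implementation), one might write:

```python
import numpy as np

def envelope(x, fs, attack_ms=1.0, release_ms=50.0):
    """First-order peak detector with separate attack and release time constants:
    a digital analogy of the controllable attack/release behaviour described.
    x: input samples, fs: sample rate in Hz."""
    a_att = np.exp(-1.0 / (fs * attack_ms * 1e-3))
    a_rel = np.exp(-1.0 / (fs * release_ms * 1e-3))
    env = np.empty(len(x))
    y = 0.0
    for i, s in enumerate(np.abs(x)):          # full-wave rectification
        a = a_att if s > y else a_rel          # fast attack, slow release
        y = a * y + (1.0 - a) * s
        env[i] = y
    return env
```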
Abstract:
For applications involving the control of moving vehicles, the recovery of relative motion between a camera and its environment is of high utility. This thesis describes the design and testing of a real-time analog VLSI chip which estimates the focus of expansion (FOE) from measured time-varying images. Our approach assumes a camera moving through a fixed world with translational velocity; the FOE is the projection of the translation vector onto the image plane. This location is the point towards which the camera is moving and from which all other image points appear to expand outward. By way of the camera imaging parameters, the location of the FOE gives the direction of 3-D translation. The algorithm we use for estimating the FOE minimizes the sum of squares of the differences at every pixel between the observed time variation of brightness and the predicted variation given the assumed position of the FOE. This minimization is not straightforward, because the relationship between the brightness derivatives depends on the unknown distance to the surface being imaged. However, image points where brightness is instantaneously constant play a critical role. Ideally, the FOE would be at the intersection of the tangents to the iso-brightness contours at these "stationary" points. In practice, brightness derivatives are hard to estimate accurately because the image is quite noisy. Reliable results can nevertheless be obtained if the image contains many stationary points and the FOE is taken to be the point that minimizes the sum of squares of the perpendicular distances from the tangents at the stationary points. The FOE chip calculates the gradient of this least-squares minimization sum, and the estimation is performed by closing a feedback loop around it. The chip has been implemented using an embedded CCD imager for image acquisition and a row-parallel processing scheme. A 64 x 64 version was fabricated in a 2μm CCD/BiCMOS process through MOSIS with a design goal of 200 mW of on-chip power, a top frame rate of 1000 frames/second, and a basic accuracy of 5%. A complete experimental system which estimates the FOE in real time using real motion and image scenes is demonstrated.
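As a rough numerical sketch of the stationary-point step described above (not the chip's analog feedback loop): the tangent to the iso-brightness contour at a stationary point p_i is perpendicular to the image gradient g_i, so the perpendicular distance from a candidate FOE x to that tangent is |ĝ_i · (x − p_i)|, and minimizing the sum of squares reduces to a 2×2 linear system.

```python
import numpy as np

def estimate_foe(points, gradients):
    """Least-squares focus-of-expansion estimate from stationary points.
    points:    (N, 2) image locations p_i where brightness is momentarily constant
    gradients: (N, 2) brightness gradients g_i at those points.
    Minimizing sum_i (g_hat_i . (x - p_i))^2 gives the normal equations
    (sum_i g_hat_i g_hat_i^T) x = sum_i (g_hat_i g_hat_i^T) p_i."""
    g = gradients / np.linalg.norm(gradients, axis=1, keepdims=True)
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p_i, g_i in zip(points, g):
        M = np.outer(g_i, g_i)                 # rank-1 term for this tangent line
        A += M
        b += M @ p_i
    return np.linalg.solve(A, b)               # estimated FOE (u0, v0)
```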