Abstract:
Bio-inspired designs can provide an answer to engineering problems such as swimming strategies at the micron or nano scale. Scientists are now designing artificial micro-swimmers that mimic the flagella-powered swimming of micro-organisms. In applications such as lab-on-a-chip, where micro-object manipulation in small flow geometries could be achieved by micro-swimmers, control of the swimming direction becomes an important aspect for retrieval and control of the micro-swimmer. A bio-inspired approach to swimming direction reversal (a flagellum bearing mastigonemes) can be used to design such a system and is explored in the present work. We analyze the system using a computational framework in which the equations of solid mechanics and fluid dynamics are solved simultaneously. The fluid dynamics of Stokes flow is represented by a 2D Stokeslets approach, while the solid mechanics behavior is captured using Euler-Bernoulli beam elements. The working principle of a flagellum bearing mastigonemes can be broken into two parts: (1) the contribution of the base flagellum and (2) the contribution of the mastigonemes, which act like cilia. These contributions are counteractive, and the net motion (velocity and direction) is a superposition of the two. In the present work, we also perform a dimensional analysis to understand the underlying physics associated with system parameters such as the height of the mastigonemes, the number of mastigonemes, the flagellar wavelength and amplitude, the flagellum length, and the mastigoneme rigidity. Our results provide fundamental physical insight into the swimming of a flagellum with mastigonemes and offer guidelines for the design of artificial flagellar systems.
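For illustration, a minimal Python sketch of the superposition idea behind a 2D Stokeslets approach, using the classical 2D Oseen-tensor Stokeslet rather than the paper's specific regularized, fluid-structure-coupled formulation; the function name and the example forces are hypothetical.

```python
import numpy as np

def stokeslet_velocity_2d(eval_pts, force_pts, forces, mu=1.0):
    """Velocity induced at eval_pts by point forces at force_pts.

    Uses the classical 2D Oseen-tensor Stokeslet
        G_ij(r) = (1/(4*pi*mu)) * (-delta_ij * ln|r| + r_i r_j / |r|^2),
    summed over all point forces (superposition in Stokes flow).
    """
    u = np.zeros_like(eval_pts, dtype=float)
    for xf, f in zip(force_pts, forces):
        r = eval_pts - xf                      # (M, 2) separation vectors
        r2 = np.sum(r * r, axis=1)             # squared distances
        r2 = np.maximum(r2, 1e-12)             # avoid log(0) at the singularity
        log_term = -0.5 * np.log(r2)           # -ln|r|
        rf = r @ f                             # r . f for each evaluation point
        u += (log_term[:, None] * f + (rf / r2)[:, None] * r) / (4.0 * np.pi * mu)
    return u

# Example: velocity induced at two points by a pair of opposing point forces
u = stokeslet_velocity_2d(eval_pts=np.array([[0.0, 1.0], [2.0, 0.0]]),
                          force_pts=np.array([[0.5, 0.2], [1.0, -0.3]]),
                          forces=np.array([[0.0, 1.0], [0.0, -1.0]]))
print(u)
```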
Abstract:
Current state-of-the-art techniques for determining the change in volume of the human chest, used in lung-function measurement, calculate the volume bounded by a reconstructed chest surface and its projection onto an approximately parallel static plane over a series of time instants. This method works well so long as the subject does not move globally relative to the reconstructed surface's co-ordinate system. In practice this means the subject has to be braced, which restricts the technique's use. We present here a method to compensate for global motion of the subject, allowing accurate measurement while the subject is free-standing, and also while undergoing intentional motion. © 2012 Springer-Verlag.
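The abstract does not spell out the compensation algorithm; one common way to remove global rigid motion before the volume computation is a least-squares rigid (Kabsch) alignment of each frame's reconstructed surface to a reference. A sketch under that assumption, with a hypothetical helper name:

```python
import numpy as np

def rigid_align(points, reference):
    """Least-squares rigid transform (Kabsch) mapping `points` onto `reference`.

    Both arrays are (N, 3) with corresponding rows. Returns the aligned points.
    This removes the subject's global translation/rotation before the
    chest-volume computation; it is an illustrative stand-in, not the
    paper's specific compensation scheme.
    """
    pc, rc = points.mean(axis=0), reference.mean(axis=0)
    P, R_ = points - pc, reference - rc
    U, _, Vt = np.linalg.svd(P.T @ R_)           # 3x3 covariance, SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    Rot = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (Rot @ P.T).T + rc
```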
Abstract:
Vision-based object detection has been introduced in construction for recognizing and locating construction entities in on-site camera views. It can provide the spatial locations of a large number of entities, which is beneficial on large-scale, congested construction sites. However, even a few false detections prevent its practical application. To resolve this issue, this paper presents a novel hybrid method for locating construction equipment that fuses detection and tracking algorithms. The method detects construction equipment in the video view by taking advantage of entities' motion, shape, and color distribution. Background subtraction, Haar-like features, and eigen-images are used for the motion, shape, and color information, respectively. A tracking algorithm then steps into the process to make up for false detections, which are identified by catching drastic changes in object size and appearance and are replaced with tracking results. Preliminary experiments show that the combination with tracking has the potential to enhance detection performance.
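A minimal sketch of the motion cue and the detection/tracking hand-off, assuming OpenCV (4.x API); it covers only the background-subtraction part and a simple size-jump test, not the Haar-like shape or eigen-image color cues, and the file name and thresholds are hypothetical.

```python
import cv2
import numpy as np

# Motion-based candidate detection with a "drastic change" check that falls
# back to the previous estimate, mimicking the detection/tracking hand-off.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)
prev_box = None

cap = cv2.VideoCapture("site_video.mp4")        # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    box = None
    if contours:
        box = cv2.boundingRect(max(contours, key=cv2.contourArea))  # (x, y, w, h)
    # Flag a likely false detection if the box area jumps drastically,
    # and keep the previous (tracked) box instead.
    if box is not None and prev_box is not None:
        if abs(box[2] * box[3] - prev_box[2] * prev_box[3]) > 0.5 * prev_box[2] * prev_box[3]:
            box = prev_box
    prev_box = box if box is not None else prev_box
cap.release()
```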
Abstract:
Videogrammetry is an inexpensive and easy-to-use technology for spatial 3D scene recovery. When applied to large-scale civil infrastructure scenes, only a small percentage of the collected video frames are required to achieve robust results. However, choosing the right frames requires careful consideration. Videotaping a built infrastructure scene results in large video files filled with blurry, noisy, or redundant frames. This is due to frame-rate-to-camera-speed ratios that are often higher than necessary; camera and lens imperfections and limitations that result in imaging noise; and occasional jerky motions of the camera that result in motion blur; all of which can significantly affect the performance of the videogrammetric pipeline. To tackle these issues, this paper proposes a novel method for automating the selection of an optimized number of informative, high-quality frames. As a first step, blurred frames are removed using thresholds determined from the minimum level of frame quality required to obtain robust results. Then, an optimum number of key frames is selected from the remaining frames using selection criteria devised by the authors. Experimental results show that the proposed method outperforms existing methods in terms of improved 3D reconstruction results, while maintaining the optimum number of extracted frames needed to generate high-quality 3D point clouds. © 2012 Elsevier Ltd. All rights reserved.
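A hedged sketch of the two-step idea, using the variance of the Laplacian as a stand-in blur metric and uniform thinning as a stand-in selection rule, since the abstract does not give the authors' exact thresholds or criteria; names and defaults are hypothetical.

```python
import cv2
import numpy as np

def select_key_frames(video_path, blur_threshold=100.0, max_key_frames=50):
    """Drop blurry frames, then thin the rest to a fixed budget of key frames.

    Blur is scored with the variance of the Laplacian, a common sharpness
    proxy; the threshold, budget, and uniform thinning rule are illustrative
    stand-ins for the paper's quality threshold and selection criteria.
    """
    cap = cv2.VideoCapture(video_path)
    sharp, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if cv2.Laplacian(gray, cv2.CV_64F).var() >= blur_threshold:
            sharp.append(idx)                     # keep only sufficiently sharp frames
        idx += 1
    cap.release()
    if len(sharp) <= max_key_frames:
        return sharp
    keep = np.linspace(0, len(sharp) - 1, max_key_frames).astype(int)
    return [sharp[i] for i in keep]
```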
Abstract:
Vision trackers have been proposed as a promising alternative for tracking at large-scale, congested construction sites. They provide the location of a large number of entities in a camera view across frames. However, vision trackers provide only two-dimensional (2D) pixel coordinates, which are not adequate for construction applications. This paper proposes and validates a method that overcomes this limitation by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates. The proposed method consists of four steps: camera calibration, camera pose estimation, 2D tracking, and triangulation. Given that the method employs fixed, calibrated stereo cameras with a long baseline, appropriate algorithms are selected for each step. Once the first two steps reveal camera system parameters, the third step determines 2D pixel coordinates of entities in subsequent frames. The 2D coordinates are triangulated on the basis of the camera system parameters to obtain 3D coordinates. The methodology presented in this paper has been implemented and tested with data collected from a construction site. The results demonstrate the suitability of this method for on-site tracking purposes.
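A minimal sketch of the final triangulation step, assuming the calibration and pose-estimation steps have already produced the two projection matrices; it uses OpenCV's triangulatePoints, and the argument names are hypothetical.

```python
import cv2
import numpy as np

def pixel_to_3d(P_left, P_right, uv_left, uv_right):
    """Triangulate 3D metric coordinates from matched 2D pixel tracks.

    P_left, P_right are the 3x4 projection matrices obtained from the
    calibration and pose-estimation steps; uv_left, uv_right are (2, N)
    arrays of corresponding pixel coordinates from the 2D tracking step.
    """
    X_h = cv2.triangulatePoints(P_left, P_right,
                                uv_left.astype(float), uv_right.astype(float))
    return (X_h[:3] / X_h[3]).T                   # homogeneous -> (N, 3) metric points
```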
Abstract:
Real-time cardiac ultrasound allows monitoring of heart motion during intracardiac beating-heart procedures. Our application assists atrial septal defect (ASD) closure techniques using real-time 3D ultrasound guidance. One major image-processing challenge is processing the information at a high frame rate. We present an optimized block flow technique, which combines probability-based velocity computation for an entire block with template matching. We propose adapted similarity constraints, both from frame to frame, to conserve energy, and globally, to minimize errors. We show tracking results on eight in-vivo 4D datasets acquired from porcine beating-heart procedures. Computing velocity at the block level with an optimized scheme, our technique tracks ASD motion at 41 frames/s. We analyze the errors of motion estimation and retrieve the cardiac cycle in ungated images. © 2007 IEEE.
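A sketch of only the template-matching ingredient, estimating one block's frame-to-frame displacement; the probability-based velocity computation and the frame-to-frame and global similarity constraints are not reproduced, and the parameter names and defaults are hypothetical.

```python
import cv2
import numpy as np

def block_displacement(prev_frame, next_frame, top_left, block=32, search=12):
    """Estimate a block's frame-to-frame displacement by template matching.

    Frames are single-channel 8-bit or float32 images. The block at `top_left`
    in prev_frame is matched within a small search region of next_frame; the
    correlation peak gives the displacement in pixels.
    """
    y, x = top_left
    tmpl = prev_frame[y:y + block, x:x + block]
    y0, x0 = max(y - search, 0), max(x - search, 0)
    region = next_frame[y0:y0 + block + 2 * search, x0:x0 + block + 2 * search]
    score = cv2.matchTemplate(region, tmpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(score)          # best = (x, y) of the peak
    return (best[0] + x0 - x, best[1] + y0 - y)   # displacement in pixels (dx, dy)
```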
Abstract:
We present a system for augmenting depth camera output using multispectral photometric stereo. The technique is demonstrated using a Kinect sensor and is able to produce geometry independently for each frame. Improved reconstruction is demonstrated using the Kinect's inbuilt RGB camera, and further improvements are achieved by introducing an additional high-resolution camera. As well as qualitative improvements in reconstruction, a quantitative reduction in temporal noise is shown. As part of the system, an approach is presented for relaxing the assumption of multispectral photometric stereo that scenes are of constant chromaticity to the assumption that scenes contain multiple piecewise-constant chromaticities.
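A minimal sketch of the core multispectral photometric stereo solve under the constant-chromaticity assumption: with three known colored light directions, each pixel's RGB value yields three shading equations and hence a per-pixel normal from a single frame. Array and parameter names are hypothetical; the paper's relaxation to piecewise-constant chromaticities is not handled.

```python
import numpy as np

def normals_from_rgb(image, light_dirs, albedo_rgb):
    """Per-pixel surface normals from one RGB frame (multispectral photometric stereo).

    image:      (H, W, 3) linear RGB intensities
    light_dirs: (3, 3) rows are the unit directions of the R, G, B light sources
    albedo_rgb: (3,) per-channel albedo of the (assumed constant) chromaticity

    Each channel c obeys I_c = albedo_c * (l_c . n), so the three channels give
    a 3x3 linear system per pixel.
    """
    H, W, _ = image.shape
    A = albedo_rgb[:, None] * light_dirs              # (3, 3) system matrix
    b = image.reshape(-1, 3).T                        # (3, H*W) right-hand sides
    n = np.linalg.solve(A, b)                         # unnormalized normals
    n /= np.maximum(np.linalg.norm(n, axis=0), 1e-9)  # unit length
    return n.T.reshape(H, W, 3)
```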
Abstract:
Time-resolved particle image velocimetry (PIV) has been performed inside the nozzle of a commercially available inkjet printhead to obtain the time-dependent velocity waveform. A printhead with a single transparent nozzle 80 μm in orifice diameter was used to eject single droplets at a speed of 5 m/s. An optical microscope was used with an ultra-high-speed camera to capture the motion of particles suspended in a transparent liquid at the center of the nozzle and above the fluid meniscus at a rate of half a million frames per second. Time-resolved velocity fields were obtained from a fluid layer approximately 200 μm thick within the nozzle for a complete jetting cycle. A Lagrangian finite-element numerical model with experimental measurements as inputs was used to predict the meniscus movement. The model predictions showed good agreement with the experimental results. This work provides the first experimental verification of physical models and numerical simulations of flows within a drop-on-demand nozzle. © 2012 Society for Imaging Science and Technology.
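For context, the standard PIV building block is the cross-correlation of interrogation windows between consecutive frames; the sketch below shows that step only and is not the authors' specific processing chain. The window size and names are hypothetical.

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Displacement between two interrogation windows via FFT cross-correlation.

    The peak of the cross-correlation of two small, mean-subtracted windows
    gives the average particle displacement in pixels; multiplying by the
    pixel size and frame rate converts it to a velocity.
    """
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    dy, dx = np.array(peak) - center
    return dx, dy                                  # displacement in pixels
```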
Abstract:
The present paper proposes a unified geometric framework for coordinated motion on Lie groups. It first gives a general problem formulation and analyzes the ensuing conditions for coordinated motion. It then introduces a precise method to design control laws in fully actuated and underactuated settings with simple integrator dynamics. It thereby shows that coordination can be studied in a systematic way once the Lie group geometry of the configuration space is well characterized. Applying the proposed general methodology to particular examples makes it possible to recover control laws that have been proposed in the literature on intuitive grounds. A link with Brockett's double bracket flows is also made. The concepts are illustrated on SO(3), SE(2) and SE(3). © 2010 IEEE.
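An illustrative formulation, not taken verbatim from the paper: left-invariant kinematics on a matrix Lie group, a natural coordination condition (relative configurations remain constant), and Brockett's double bracket flow to which the paper draws a link.

```latex
% Left-invariant kinematics of agent k on a matrix Lie group G
\dot{g}_k = g_k \,\hat{\xi}_k, \qquad g_k \in G,\ \hat{\xi}_k \in \mathfrak{g}

% Coordination: relative configurations are preserved along the motion
\frac{d}{dt}\left( g_i^{-1} g_j \right) = 0 \quad \text{for all pairs } (i,j)

% Brockett's double bracket flow
\dot{X} = \left[ X, \left[ X, N \right] \right]
```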
Abstract:
This paper proposes a design methodology to stabilize relative equilibria in a model of identical, steered particles moving in the plane at unit speed. Relative equilibria either correspond to parallel motion of all particles with fixed relative spacing or to circular motion of all particles around the same circle. Particles exchange relative information according to a communication graph that can be undirected or directed and time-invariant or time-varying. The emphasis of this paper is to show how previous results assuming all-to-all communication can be extended to a general communication framework. © 2008 IEEE.
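The model described matches the standard planar unit-speed steered-particle model; a hedged sketch of it (identifying the plane with the complex numbers) and of the two types of relative equilibria:

```latex
% Unit-speed steered particles in the plane (positions r_k in C, headings theta_k)
\dot{r}_k = e^{i\theta_k}, \qquad \dot{\theta}_k = u_k, \qquad k = 1,\dots,N

% Relative equilibria of the closed loop:
%   parallel motion:  u_k \equiv 0,                 all headings equal, fixed relative spacing
%   circular motion:  u_k \equiv \omega_0 \neq 0,   all particles on a common circle
%                                                    of radius 1/|\omega_0|
```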