979 results for camera motion parameters


Relevance: 30.00%

Abstract:

The paper reports an interactive tool for calibrating a camera, suitable for use in outdoor scenes. The motivation for the tool was the need to obtain an approximate calibration for images taken with no explicit calibration data. Such images are frequently presented to research laboratories, especially in surveillance applications, with a request to demonstrate algorithms. The method decomposes the calibration parameters into intuitively simple components, and relies on the operator interactively adjusting the parameter settings to achieve a visually acceptable agreement between a rectilinear calibration model and his own perception of the scene. Using the tool, we have been able to calibrate images of unknown scenes, taken with unknown cameras, in a matter of minutes. The standard of calibration has proved to be sufficient for model-based pose recovery and tracking of vehicles.
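The abstract does not spell out the decomposition; a minimal sketch of one plausible set of intuitively simple components (focal length, principal point, pan, tilt, roll and camera height — names and conventions assumed here, not taken from the paper) composed into a pinhole projection matrix:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Base orientation: camera looks along world +Y, with world Z (up)
# mapping to camera -y
R0 = np.array([[1.0, 0.0, 0.0],
               [0.0, 0.0, -1.0],
               [0.0, 1.0, 0.0]])

def camera_matrix(f, cx, cy, pan, tilt, roll, height):
    """Compose a 3x4 projection matrix P = K [R | -R C] from
    operator-adjustable components."""
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
    R = rot_z(roll) @ rot_x(tilt) @ R0 @ rot_z(pan)  # camera-from-world
    C = np.array([0.0, 0.0, height])                 # camera centre in world
    return K @ np.hstack([R, (-R @ C)[:, None]])

def project(P, Xw):
    x = P @ np.append(Xw, 1.0)
    return x[:2] / x[2]

# Sanity check: a point straight ahead at camera height hits the
# principal point
P = camera_matrix(f=800, cx=320, cy=240, pan=0, tilt=0, roll=0, height=5.0)
print(project(P, np.array([0.0, 10.0, 5.0])))  # → [320. 240.]
```

An operator would nudge each of these parameters while watching the reprojected model overlaid on the image, which is what makes the decomposition "intuitively simple".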

Relevance: 30.00%

Abstract:

Within the context of active vision, scant attention has been paid to the execution of motion saccades—rapid re-adjustments of the direction of gaze to attend to moving objects. In this paper we first develop a methodology for, and give real-time demonstrations of, the use of motion detection and segmentation processes to initiate capture saccades towards a moving object. The saccade is driven by both position and velocity of the moving target under the assumption of constant target velocity, using prediction to overcome the delay introduced by visual processing. We next demonstrate the use of a first order approximation to the segmented motion field to compute bounds on the time-to-contact in the presence of looming motion. If the bound falls below a safe limit, a panic saccade is fired, moving the camera away from the approaching object. We then describe the use of image motion to realize smooth pursuit, tracking using velocity information alone, where the camera is moved so as to null a single constant image motion fitted within a central image region. Finally, we glue together capture saccades with smooth pursuit, thus effecting changes in both what is being attended to and how it is being attended to. To couple the different visual activities of waiting, saccading, pursuing and panicking, we use a finite state machine which provides inherent robustness outside of visual processing and provides a means of making repeated exploration. We demonstrate in repeated trials that the transition from saccadic motion to tracking is more likely to succeed using position and velocity control, than when using position alone.
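A minimal sketch of a finite state machine coupling the four visual activities; the state names, events and transition table below are illustrative assumptions, not the paper's actual conditions:

```python
# Transition table for the gaze controller. Pairs of (state, event)
# map to the next state; these events are assumed for illustration.
TRANSITIONS = {
    ("waiting",   "motion_detected"): "saccading",
    ("saccading", "target_foveated"): "pursuing",
    ("saccading", "target_lost"):     "waiting",
    ("pursuing",  "target_lost"):     "waiting",
    ("waiting",   "looming"):         "panicking",
    ("saccading", "looming"):         "panicking",
    ("pursuing",  "looming"):         "panicking",
    ("panicking", "clear"):           "waiting",
}

class GazeController:
    def __init__(self):
        self.state = "waiting"

    def step(self, event):
        # Unrecognised (state, event) pairs leave the state unchanged,
        # which is one source of robustness outside visual processing.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = GazeController()
trace = [fsm.step(e) for e in
         ["motion_detected", "target_foveated", "looming", "clear"]]
print(trace)  # → ['saccading', 'pursuing', 'panicking', 'waiting']
```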

Relevance: 30.00%

Abstract:

Analysis of human behaviour through visual information has been a highly active research topic in the computer vision community. This was previously achieved via images from a conventional camera, but recently depth sensors have made a new type of data available. This survey starts by explaining the advantages of depth imagery, then describes the new sensors that are available to obtain it. In particular, the Microsoft Kinect has made high-resolution real-time depth cheaply available. The main published research on the use of depth imagery for analysing human activity is reviewed. Much of the existing work focuses on body part detection and pose estimation. A growing research area addresses the recognition of human actions. The publicly available datasets that include depth imagery are listed, as are the software libraries that can acquire it from a sensor. This survey concludes by summarising the current state of work on this topic, and pointing out promising future research directions.

Relevance: 30.00%

Abstract:

This work presents a method of information fusion involving data captured by both a standard charge-coupled device (CCD) camera and a time-of-flight (ToF) camera, to be used in the detection of proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to determine the 3D localisation of objects with respect to a world coordinate system, while also providing their colour information. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject the 3D ToF points, expressed in a coordinate system common to both cameras and a robot arm, into the 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the previously detected foreground objects. This combination of information results in a matrix that links colour and 3D information, making it possible to characterise an object by its colour in addition to its 3D localisation. Further development of these methods will make it possible to identify objects and their position in the real world, and to use this information to prevent possible collisions between the robot and such objects.
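The reprojection step can be sketched as follows; the calibration matrix and camera pose below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def reproject(points_3d, R, t, K):
    """Map 3D points (common world/robot frame) to pixel coordinates in
    the colour image: x ~ K (R X + t)."""
    cam = points_3d @ R.T + t        # transform into the colour-camera frame
    px = cam @ K.T                   # apply the intrinsic parameters
    return px[:, :2] / px[:, 2:3]    # perspective division

# Illustrative calibration: colour camera 2 m above the work area,
# looking straight down (camera z axis = world -Z)
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
R = np.array([[1.0, 0, 0], [0, -1.0, 0], [0, 0, -1.0]])
t = np.array([0.0, 0.0, 2.0])

# A point on the table directly below the camera projects to the
# image centre; the second point is an arbitrary foreground object
pts = np.array([[0.0, 0.0, 0.0], [0.3, 0.2, 0.5]])
uv = reproject(pts, R, t, K)
print(uv[0])  # → [320. 240.]
```

Each reprojected pixel can then be paired with its source 3D point, which is exactly the colour-plus-3D matrix the abstract describes.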


Relevance: 30.00%

Abstract:

Imitation is an important form of social behavior, and research has aimed to discover and explain the neural and kinematic aspects of imitation. However, much of this research has featured single participants imitating in response to pre-recorded video stimuli. This is in spite of findings that show reduced neural activation to video vs. real-life movement stimuli, particularly in the motor cortex. We investigated the degree to which video stimuli may affect the imitation process using a novel motion tracking paradigm with high spatial and temporal resolution. We recorded 14 positions on the hands, arms, and heads of two individuals in an imitation experiment. One individual moved freely within given parameters (moving balls across a series of pegs) and a second participant imitated. This task was performed with either simple (one ball) or complex (three balls) movement difficulty, and either face-to-face or via a live video projection. After an exploratory analysis, three dependent variables were chosen for examination: 3D grip position, joint angles in the arm, and grip aperture. A cross-correlation and multivariate analysis revealed that object-directed imitation task accuracy (as represented by grip position) was reduced with video compared to face-to-face feedback, and with complex compared to simple difficulty. This was most prevalent in the left-right and forward-back motions, relative to the imitator sitting face-to-face with the actor or with a live projected video of the same actor. The results suggest that for tasks which require object-directed imitation, video stimuli may not be an ecologically valid way to present task materials. However, no similar effects were found in the joint angle and grip aperture variables, suggesting that there are limits to the influence of video stimuli on imitation. The implications of these results are discussed with regard to previous findings, and with suggestions for future experimentation.

Relevance: 30.00%

Abstract:

SANTANA, André M.; SANTIAGO, Gutemberg S.; MEDEIROS, Adelardo A. D. Real-Time Visual SLAM Using Pre-Existing Floor Lines as Landmarks and a Single Camera. In: CONGRESSO BRASILEIRO DE AUTOMÁTICA, 2008, Juiz de Fora, MG. Anais... Juiz de Fora: CBA, 2008.

Relevance: 30.00%

Abstract:

Visual odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advance of computer vision algorithms and computer processing power, the subarea known as Structure from Motion (SFM) began to supply mathematical tools for composing localization systems for robotics and augmented reality applications, in contrast to its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a pipeline for obtaining relative position, featuring a previously calibrated camera as the positional sensor and based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information such as probabilistic models for camera state transition unnecessary. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform.
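One standard SFM building block such a pipeline relies on — recovering the epipolar geometry of a calibrated image pair from point correspondences alone — can be sketched with the eight-point algorithm on synthetic data (the work's actual pipeline is not specified at this level of detail):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: random 3D points in front of both cameras
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(12, 3))

# Ground-truth relative motion: small rotation about y plus translation
theta = 0.1
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([0.5, 0.0, 0.0])

# Normalized (calibrated) image coordinates, homogeneous with z = 1
x1 = X / X[:, 2:3]
X2 = (R @ X.T).T + t
x2 = X2 / X2[:, 2:3]

# Eight-point algorithm: each correspondence gives one linear
# constraint on the flattened essential matrix E
A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
_, _, Vt = np.linalg.svd(A)
E = Vt[-1].reshape(3, 3)

# Enforce the essential-matrix structure: two equal singular values,
# one zero
U, s, Vt = np.linalg.svd(E)
E = U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# Epipolar constraint x2^T E x1 = 0 should hold for every pair
residuals = np.abs(np.einsum('ij,jk,ik->i', x2, E, x1))
print(residuals.max())  # effectively zero for noise-free data
```

Decomposing this E then yields the relative rotation and (up-to-scale) translation between the two views.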

Relevance: 30.00%

Abstract:

In Simultaneous Localization and Mapping (SLAM), a robot placed at an unknown location in an arbitrary environment must be able to build a representation of that environment (a map) and localize itself within it simultaneously, using only information captured by the robot's sensors and known control signals. Recently, driven by advances in computing power, work in this area has proposed using a video camera as the sensor, giving rise to Visual SLAM. This field has several approaches, and the vast majority of them work by extracting features of the environment, computing the necessary correspondences, and from these estimating the required parameters. This work presents a monocular Visual SLAM system that uses direct image registration to compute the image reprojection error, together with optimization methods that minimize this error, thereby obtaining the parameters of the robot pose and the map of the environment directly from the pixels of the images. Thus the steps of extracting and matching features are not needed, enabling our system to work well in environments where traditional approaches have difficulty. Moreover, by addressing the SLAM problem as proposed in this work, we avoid a very common problem in traditional approaches, known as error propagation. Because of the high computational cost of this approach, several types of optimization methods were tested in order to find a good balance between good estimates and processing time. The results presented in this work show the success of this system in different environments.
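Direct image registration — minimising a photometric error over the pixels themselves rather than matching features — can be illustrated in one dimension with a simple Gauss-Newton alignment. This is a toy sketch of the idea, not the thesis's actual formulation:

```python
import numpy as np

def gauss_newton_align(ref, cur, p0=0.0, iters=25):
    """Estimate a 1D translation p so that cur sampled at x + p matches
    ref, by minimising the photometric error directly (no features)."""
    x = np.arange(len(ref), dtype=float)
    p = p0
    for _ in range(iters):
        warped = np.interp(x + p, x, cur)   # cur(x + p)
        r = ref - warped                    # photometric residual
        J = np.gradient(warped)             # d(warped)/dp: image gradient
        p += (J @ r) / (J @ J)              # Gauss-Newton step
    return p

# Synthetic 1D "images": a smooth bump shifted by 3.2 samples
x = np.arange(200, dtype=float)
cur = np.exp(-((x - 100.0) / 15.0) ** 2)
ref = np.exp(-((x - 100.0 + 3.2) / 15.0) ** 2)   # ref(x) = cur(x + 3.2)
print(round(gauss_newton_align(ref, cur), 1))     # → 3.2
```

In the full 2D problem the scalar p becomes the camera pose and inverse-depth map, but the structure — warp, residual, gradient, normal-equation step — is the same.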

Relevance: 30.00%

Abstract:

The rotational motion of an artificial satellite is studied by considering torques produced by the gravity gradient and by direct solar radiation pressure. A satellite of circular cylinder shape is considered here, and Andoyer's variables are used to describe the rotational motion. Expressions for the direct solar radiation torque are derived. When the Earth's shadow is not considered, an analytical solution is obtained using Lagrange's method of variation of parameters. A semi-analytical procedure is proposed to predict the satellite's attitude under the influence of the Earth's shadow. The analytical solution shows that the angular variables are linear and periodic functions of time, while their conjugates undergo only periodic variations. Numerical and analytical solutions show good agreement over the time range considered.

Relevance: 30.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 30.00%

Abstract:

This paper investigates the feasibility of using an energy harvesting device tuned such that its natural frequency coincides with higher harmonics of the input to capture energy from walking or running human motion more efficiently. The paper starts by reviewing the concept of a linear resonant generator for a tonal frequency input and then derives an expression for the power harvested for an input with several harmonics. The amount of power harvested is estimated numerically using measured data from human subjects. Assuming that the input is periodic, the signal is reconstructed using a Fourier series before being used in the simulation. It is found that although the power output depends on the input frequency, the choice of tuning the natural frequency of the device to coincide with a particular higher harmonic is restricted by the amount of damping that is needed to maximize the amount of power harvested, as well as to comply with the size limit of the device. It is also found that it is not feasible to tune the device to match the first few harmonics when the size of the device is small, because a large amount of damping is required to limit the motion of the mass.
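The per-harmonic power estimate can be sketched with the classical Williams–Yates model of a linear seismic harvester, summed over the harmonics of the periodic input. Using this particular model, and the walking-input amplitudes below, are assumptions for illustration; the paper derives its own expression:

```python
import numpy as np

def harvested_power(m, wn, zeta, harmonics):
    """Mean power from a linear seismic harvester (mass m, natural
    frequency wn, damping ratio zeta) for a periodic base input given
    as (displacement amplitude, frequency) pairs. Williams-Yates form:
    P = m*zeta*Y^2*r^3*w^3 / ((1 - r^2)^2 + (2*zeta*r)^2), r = w/wn."""
    p = 0.0
    for Y, w in harmonics:
        r = w / wn
        p += m * zeta * Y**2 * r**3 * w**3 / ((1 - r**2)**2 + (2*zeta*r)**2)
    return p

# Illustrative walking input: 2 Hz fundamental plus two smaller harmonics
w0 = 2 * np.pi * 2.0
harmonics = [(5e-3, w0), (2e-3, 2 * w0), (1e-3, 3 * w0)]

# Tuning the device to the 2nd harmonic vs. halfway between harmonics
p_tuned = harvested_power(m=0.01, wn=2 * w0, zeta=0.05, harmonics=harmonics)
p_detuned = harvested_power(m=0.01, wn=2.5 * w0, zeta=0.05,
                            harmonics=harmonics)
print(p_tuned > p_detuned)  # → True
```

The same formula also shows the trade-off the paper identifies: lowering zeta raises the resonant power but increases the mass displacement, which is what the device's size limit constrains.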

Relevance: 30.00%

Abstract:

The dynamics of a pair of satellites similar to Enceladus–Dione is investigated with a two-degrees-of-freedom model written in the domain of the planar general three-body problem. Using surfaces of section and spectral analysis methods, we study the phase space of the system in terms of several parameters, including the most recent data. A detailed study of the main possible regimes of motion is presented; in particular, we show that, besides the two separated resonances, the phase space is replete with secondary resonances.

Relevance: 30.00%

Abstract:

A semi-analytical approach is proposed to study the rotational motion of an artificial satellite under the influence of the torque due to the solar radiation pressure and taking into account the influence of Earth's shadow. The Earth's shadow is introduced in the equations for the rotational motion as a function depending on the longitude of the Sun, on the ecliptic's obliquity and on the orbital parameters of the satellite. By mapping and computing this function, we can get the periods in which the satellite is not illuminated and the torque due to the solar radiation pressure is zero. When the satellite is illuminated, a known analytical solution is used to predict the satellite's attitude. This analytical solution is expressed in terms of Andoyer's variables and depends on the physical and geometrical properties of the satellite and on the direction of the Sun radiation flux. By simulating a hypothetical circular cylindrical type satellite, an example is exhibited and the results agree quite well when compared with a numerical integration. © 1997 COSPAR. Published by Elsevier Science Ltd.

Relevance: 30.00%

Abstract:

Purpose. Isokinetic tests are often applied to assess muscular strength and EMG activity; however, the specific ranges of motion used in testing (fully flexed or extended positions) might be constrictive and/or painful for patients with injuries or undergoing rehabilitation. The aim of this study was to examine the effects of different ranges of motion (RoM) when determining maximal EMG during isokinetic knee flexion and extension with different types of contractions and velocities. Methods. Eighteen males had EMG activity recorded on the vastus lateralis, vastus medialis, semitendinosus and biceps femoris muscles during five maximal isokinetic concentric and eccentric contractions of the knee flexors and extensors at 60°·s⁻¹ and 180°·s⁻¹. The root mean square of the EMG was calculated over three different ranges of motion: (1) the full range of motion (90°–20° [0° = full knee extension]); (2) a range of motion of 20° (between 60°–80° and 40°–60° for knee extension and flexion, respectively); and (3) a 10° interval around the angle where peak torque is produced. EMG measurements were statistically analyzed (ANOVA) to test for range of motion, contraction type and contraction velocity effects. Coefficients of variation and Pearson's correlation coefficients were also calculated among the ranges of motion. Results. Predominantly similar (p > 0.05) and well-correlated EMG results (r > 0.7, p ≤ 0.001) were found among the ranges of motion. However, a lower coefficient of variation was found for the full range of motion, while the 10° interval around peak torque at 180°·s⁻¹ had the highest coefficient, regardless of the type of contraction. Conclusions. Shorter ranges of motion around the peak torque angle provide a reliable indicator when recording maximal EMG activity during isokinetic testing. They may provide a safer alternative when testing patients with injuries or undergoing rehabilitation.
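Computing the RMS of the EMG over the three range-of-motion definitions can be sketched as follows; the signal is synthetic and the peak-torque angle (taken as 60°) is an assumption for illustration:

```python
import numpy as np

def rms_in_rom(emg, angle, lo, hi):
    """RMS of the EMG samples whose joint angle lies within [lo, hi]
    degrees (0 deg = full knee extension, as in the protocol above)."""
    mask = (angle >= lo) & (angle <= hi)
    return np.sqrt(np.mean(emg[mask] ** 2))

# Synthetic trial: the knee sweeps 90 -> 20 deg while the EMG shows a
# noisy burst of activity centred on the assumed peak-torque angle
rng = np.random.default_rng(1)
angle = np.linspace(90, 20, 1000)
emg = rng.normal(0.0, 0.2, 1000) + np.exp(-((angle - 60.0) / 15.0) ** 2)

full = rms_in_rom(emg, angle, 20, 90)     # (1) full range of motion
window = rms_in_rom(emg, angle, 60, 80)   # (2) 20-deg extension window
peak10 = rms_in_rom(emg, angle, 55, 65)   # (3) 10 deg around peak torque
```

With activity concentrated near the peak-torque angle, the 10° window captures a higher RMS than the full sweep, consistent with using short windows as a practical indicator.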