974 results for motion picture cameras


Relevance:

40.00%

Publisher:

Abstract:

Via an operator continued-fraction scheme, we expand the Kramers equation in the high-friction limit. All distribution moments are then expressed in terms of the first moment (the particle density), which satisfies a generalized Smoluchowski equation. As an application, we present the nonequilibrium thermodynamics and the hydrodynamical picture of one-dimensional Brownian motion. © 2000 Elsevier B.V. All rights reserved.
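For reference, the leading-order (zeroth order in the 1/γ expansion) one-dimensional Smoluchowski equation that such a scheme generalizes can be sketched as below; the symbols (mass m, friction γ, potential V, temperature T) are standard notation and not taken from the paper, whose generalized equation adds higher-order friction corrections:

```latex
% Smoluchowski equation for the particle density \rho(x,t)
% in the high-friction limit:
\frac{\partial \rho(x,t)}{\partial t}
  = \frac{1}{m\gamma}\,\frac{\partial}{\partial x}
    \left[ V'(x)\,\rho(x,t)
         + k_{B}T\,\frac{\partial \rho(x,t)}{\partial x} \right]
```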

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an implementation of an aircraft pose and motion estimator using visual systems as the principal sensor for controlling an Unmanned Aerial Vehicle (UAV), or as a redundant system for an Inertial Measurement Unit (IMU) and gyro sensors. First, we explore applications of the unified theory for central catadioptric cameras to attitude and heading estimation, explaining how the skyline is projected onto the catadioptric image and how it is segmented and used to calculate the UAV's attitude. Then we use appearance images to obtain a visual compass, from which we calculate the relative rotation and heading of the aerial vehicle. Additionally, we show the use of a stereo system to calculate the aircraft height and to measure the UAV's motion. Finally, we present a visual tracking system based on fuzzy controllers, running both on the UAV and on a camera pan-and-tilt platform. Every part is tested on the UAV COLIBRI platform to validate the different approaches, including comparison of the estimated data with inertial values measured onboard the helicopter platform and validation of the tracking schemes on real flights.
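The appearance-based visual compass mentioned above can be sketched roughly as follows: on a panoramic appearance image, a yaw rotation shows up as a horizontal column shift, so relative heading can be recovered by finding the shift that best aligns the current image with a reference. This is a minimal illustration under that assumption, not the paper's implementation; the function name and the sum-of-squared-differences criterion are choices made here.

```python
import numpy as np

def compass_shift(ref, cur):
    # Try every horizontal (column) shift of the current panoramic
    # image and keep the one minimising the sum of squared
    # differences against the reference image.
    best_shift, best_err = 0, np.inf
    for s in range(ref.shape[1]):
        err = float(np.sum((np.roll(cur, s, axis=1) - ref) ** 2))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# Synthetic check: a pure yaw of 5 columns should be recovered exactly.
rng = np.random.default_rng(0)
ref = rng.random((8, 36))          # toy 8 x 36 panoramic image
cur = np.roll(ref, -5, axis=1)     # camera yawed by 5 columns
```

With 36 columns spanning a full panorama, one column corresponds to a 10° heading resolution; a real system would interpolate for finer estimates.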

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a generic decoupled image-based control scheme for calibrated cameras obeying the unified projection model is proposed. The decoupled scheme is based on the surface of object projections onto the unit sphere. Such features are invariant to rotational motion, which allows translational motion to be controlled independently of rotational motion. Finally, the proposed results are validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robot platform.

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a generic decoupled image-based control scheme for cameras obeying the unified projection model. The scheme is based on the spherical projection model: invariants to rotational motion are computed from this projection and used to control the translational degrees of freedom. Importantly, we form invariants that decrease the sensitivity of the interaction matrix to variations in object depth. Finally, the proposed results are validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robotic platform.
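Both this and the preceding abstract rely on lifting image points onto the unit sphere. Under the unified projection model the standard lift (with mirror parameter ξ; ξ = 0 reduces to a perspective camera) can be sketched as below. This is the textbook formula, not code from either paper:

```python
import numpy as np

def lift_to_sphere(x, y, xi):
    # Lift a point (x, y) on the normalised image plane to the unit
    # sphere under the unified projection model with mirror parameter
    # xi (xi = 0: perspective camera; xi near 1: fisheye-like).
    r2 = x * x + y * y
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])

p = lift_to_sphere(0.3, -0.2, 0.8)   # lies on the unit sphere
```

Rotation-invariant features, such as the surface of an object's projection, are then computed from these spherical points; a camera rotation only rotates the sphere, leaving such quantities unchanged.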

Relevance:

30.00%

Publisher:

Abstract:

Visual recording devices such as video cameras, CCTV, and webcams have been widely used to facilitate work-progress and safety monitoring on construction sites. Without human intervention, however, both real-time reasoning about captured scenes and interpretation of recorded images are challenging tasks. This article presents an exploratory method for automated object identification using standard video cameras on construction sites. The proposed method supports real-time detection and classification of mobile heavy equipment and workers. A background subtraction algorithm extracts motion pixels from the image sequence, the pixels are then grouped into regions representing moving objects, and finally the regions are identified as particular objects using classifiers. To evaluate the method, the resulting computer-aided process was applied on actual construction sites, with promising results. This article is expected to contribute to future applications of automated monitoring of work-zone safety and productivity.
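A minimal sketch of the background-subtraction step described above (a running-average background model plus thresholding); the region-grouping and classifier stages are omitted, and all parameter values here are illustrative assumptions rather than the article's settings:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    # Exponential running average keeps the background model
    # adapting slowly to lighting changes.
    return (1.0 - alpha) * bg + alpha * frame

def motion_mask(bg, frame, thresh=25.0):
    # Pixels differing from the background model by more than the
    # threshold are flagged as motion pixels; connected regions of
    # this mask would then be grouped and classified.
    return np.abs(frame.astype(float) - bg) > thresh

# Toy scene: empty background, then a bright 10 x 10 "object" enters.
bg = np.zeros((40, 40))
frame = bg.copy()
frame[10:20, 10:20] = 255.0
mask = motion_mask(bg, frame)
```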

Relevance:

30.00%

Publisher:

Abstract:

In 1999, Richards compared the accuracy of commercially available motion capture systems commonly used in biomechanics. Richards found that in static tests the optical motion capture systems generally produced RMS errors of less than 1.0 mm; during dynamic tests, the RMS error increased to up to 4.2 mm in some systems. In the 12 years since, motion capture systems have continued to evolve and now include high-resolution CCD or CMOS image sensors, wireless communication, and high full-frame sampling frequencies. Alongside these hardware advances there have also been a number of advances in software, including improved calibration and tracking algorithms, real-time data streaming, and the introduction of the C3D standard. These advances have allowed system manufacturers to maintain high retail prices in the name of advancement. Yet in areas such as gait analysis and ergonomics, many of the advanced features, such as high-resolution image sensors and high sampling frequencies, are not required, given the nature of the tasks typically investigated. Recently, Natural Point introduced low-cost cameras which, on face value, appear suitable as, at the very least, a high-quality teaching tool in biomechanics, and possibly even a research tool when coupled with the correct calibration and tracking software. The aim of this study was therefore to compare both the linear accuracy and the quality of angular kinematics from a typical high-end motion capture system and a low-cost system during a simple task.
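RMS errors like those quoted above (sub-millimetre static, up to 4.2 mm dynamic) are typically computed over marker-position residuals. A minimal sketch, with made-up numbers, of how such a figure might be obtained; this is not the study's analysis code:

```python
import numpy as np

def rms_error(measured, reference):
    # Root-mean-square of the Euclidean distances between measured
    # and reference marker positions (N x 3 arrays, e.g. in mm).
    diff = np.asarray(measured, float) - np.asarray(reference, float)
    d = np.linalg.norm(diff, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Toy static trial: a marker nominally at the origin, four samples.
ref = np.zeros((4, 3))
meas = np.array([[0.3, 0.0, 0.0],
                 [0.0, 0.4, 0.0],
                 [0.0, 0.0, 0.5],
                 [0.3, 0.4, 0.0]])
```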

Relevance:

30.00%

Publisher:

Abstract:

In this paper we demonstrate passive vision-based localization in environments more than two orders of magnitude darker than the current benchmark, using a $100 webcam and a $500 camera. Our approach uses the camera's maximum exposure duration and sensor gain to achieve appropriately exposed images even in unlit night-time environments, albeit with extreme levels of motion blur. Using the SeqSLAM algorithm, we first evaluate the effect of variable motion blur, caused by simulated exposures of 132 ms to 10000 ms duration, on localization performance. We then use actual long-exposure camera datasets to demonstrate day-night localization in two different environments. Finally, we perform a statistical analysis comparing the baseline performance of matching unprocessed greyscale images to using patch normalization and local neighbourhood normalization, the two key SeqSLAM components. Our results and analysis show for the first time why the SeqSLAM algorithm is effective, and demonstrate the potential for cheap camera-based localization systems that function across extreme perceptual change.
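Of the two SeqSLAM components compared above, patch normalization is the simpler: each small patch of the downsampled greyscale image is shifted to zero mean and scaled to unit variance, which removes local brightness and contrast differences between day and night images. A minimal sketch under those assumptions (patch size and image dimensions are illustrative):

```python
import numpy as np

def patch_normalise(img, p=4):
    # Normalise each p x p patch to zero mean and unit variance.
    # Image dimensions are assumed to be multiples of p.
    out = img.astype(float).copy()
    for i in range(0, out.shape[0], p):
        for j in range(0, out.shape[1], p):
            patch = out[i:i + p, j:j + p]   # view into `out`
            std = patch.std()
            patch -= patch.mean()
            if std > 0:
                patch /= std
    return out

# A global brightness/contrast change leaves the result untouched,
# which is why normalised day and night images become comparable.
img = np.arange(64, dtype=float).reshape(8, 8)
```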

Relevance:

30.00%

Publisher:

Abstract:

This work aims to contribute to the reliability and integrity of perceptual systems for unmanned ground vehicles (UGVs). A method is proposed to evaluate the quality of sensor data prior to its use in a perception system, by applying a quality metric to heterogeneous sensor data such as visual and infrared camera images. The concept is illustrated with sensor data that is evaluated prior to its use in a standard SIFT feature extraction and matching technique. The method is then evaluated on various experimental data sets collected from a UGV in challenging environmental conditions, represented by the presence of airborne dust and smoke. In the first series of experiments a motionless vehicle observes a 'reference' scene; the method is then extended to the case of a moving vehicle by compensating for its motion. This paper shows that it is possible to anticipate the degradation of a perception algorithm by evaluating the input data prior to any actual execution of the algorithm.

Relevance:

30.00%

Publisher:

Abstract:

Disjoint top-view networked cameras are among the most commonly utilized camera networks in many applications. One of the open questions in the study of these cameras is the computation of their extrinsic parameters (positions and orientations), known as extrinsic calibration or camera localization. Current approaches either rely on strict assumptions about the object motion to achieve accurate results, or fail to provide highly accurate results without requiring such motion. To address these shortcomings, we present a location-constrained maximum a posteriori (LMAP) approach that exploits known locations in the surveillance area, some of which the object passes opportunistically. The LMAP approach formulates the problem as a joint inference of the extrinsic parameters and the object trajectory from the cameras' observations and the known locations. In addition, a new task-oriented evaluation metric, named MABR (the Maximum value of All image points' Back-projected localization errors' L2 norms Relative to the area of the field of view), is presented to assess the quality of the calibration results in an indoor object-tracking context. Finally, results demonstrate the superior performance of the proposed method over a state-of-the-art algorithm, in terms of both the presented MABR and a classical evaluation metric, in simulations and real experiments.

Relevance:

30.00%

Publisher:

Abstract:

Introduction

Markerless motion capture systems are relatively new devices that can significantly speed up capturing full-body motion. The precision of finger-position assessment with this type of equipment was evaluated at 17.30 ± 9.56 mm when compared to an active-marker system [1]. The Microsoft Kinect has been proposed to standardize and enhance the clinical evaluation of patients with hemiplegic cerebral palsy [2]. Markerless motion capture systems have the potential to be used in a clinical setting for movement analysis, as well as for large-cohort research. However, the precision of such systems needs to be characterized.

Global objectives
• To assess the precision within the recording field of the markerless motion capture system OpenStage 2 (Organic Motion, NY).
• To compare the markerless motion capture system with an optoelectric motion capture system with active markers.

Specific objectives
• To assess the noise of a static body at 13 different locations within the recording field of the markerless motion capture system.
• To assess the smallest oscillation detected by the markerless motion capture system.
• To assess the difference between both systems regarding body joint angle measurement.

Methods

Equipment
• OpenStage® 2 (Organic Motion, NY)
  o Markerless motion capture system
  o 16 video cameras (acquisition rate: 60 Hz)
  o Recording zone: 4 m × 5 m × 2.4 m (depth × width × height)
  o Provides position and angle of 23 different body segments
• Visualeyez™ VZ4000 (PhoeniX Technologies Incorporated, BC)
  o Optoelectric motion capture system with active markers
  o 4-tracker system (12 cameras in total)
  o Accuracy: 0.5–0.7 mm

Protocol & Analysis
• Static noise:
  o Motion of a humanoid mannequin was recorded at 13 different locations.
  o RMSE was calculated for each segment at each location.
• Smallest oscillation detected:
  o Small oscillations were induced in the humanoid mannequin and motion was recorded until it stopped.
  o The correlation between the head displacement recorded by both systems was measured, along with the corresponding magnitude.
• Body joint angles:
  o Body motion was recorded simultaneously with both systems (left side only).
  o 6 participants (3 females; 32.7 ± 9.4 years old).
  o Tasks: walk, squat, shoulder flexion & abduction, elbow flexion, wrist extension, pronation/supination (not in results), head flexion & rotation (not in results), leg rotation (not in results), trunk rotation (not in results).
  o Several body joint angles were measured with both systems.
  o RMSE was calculated between the signals of both systems.

Results

Conclusion

Results show that the Organic Motion markerless system has the potential to be used for the assessment of clinical motor symptoms or motor performance. However, the following points should be considered:
• Precision of the OpenStage system varied within the recording field.
• Precision is not constant between limb segments.
• The error seems to be higher close to the extremities of the range of motion.

Relevance:

30.00%

Publisher:

Abstract:

Model-based optical motion capture systems require knowledge of the position of the markers relative to the underlying skeleton, the lengths of the skeleton's limbs, and which limb each marker is attached to. These model parameters are typically assumed and entered into the system manually, although techniques exist for calculating some of them, such as the position of the markers relative to the skeleton's joints. We present a fully automatic procedure for determining these model parameters. It tracks the 2D positions of the markers on the cameras' image planes and determines which markers lie on each limb before calculating the position of the underlying skeleton. The only assumption is that the skeleton consists of rigid limbs connected with ball joints. The proposed system is demonstrated on a number of real data examples and is shown to calculate good estimates of the model parameters in each. © 2004 Elsevier B.V. All rights reserved.
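The rigid-limb assumption above gives a simple cue for deciding which markers share a limb: the distance between two markers on the same rigid segment stays constant over time. A minimal sketch of that cue follows; the paper's full procedure also recovers limb lengths and joint positions, and the function name and tolerance here are assumptions:

```python
import numpy as np

def rigid_pairs(trajs, tol=1e-3):
    # trajs: (T, N, 3) array of N marker positions over T frames.
    # Return marker pairs whose mutual distance is (near-)constant
    # over time, i.e. candidates for lying on the same rigid limb.
    T, N, _ = trajs.shape
    pairs = []
    for i in range(N):
        for j in range(i + 1, N):
            d = np.linalg.norm(trajs[:, i] - trajs[:, j], axis=1)
            if d.std() < tol:
                pairs.append((i, j))
    return pairs

# Markers 0 and 1 sit on one rotating rigid segment; marker 2
# translates independently of them.
t = np.linspace(0.0, np.pi, 50)
u = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
free = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
trajs = np.stack([u, 2.0 * u, free], axis=1)
```

Markers whose pairwise distances are mutually constant can then be clustered into limbs before the skeleton itself is estimated.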

Relevance:

30.00%

Publisher:

Abstract:

An approach for estimating 3D body pose from multiple, uncalibrated views is proposed. First, a mapping from image features to 2D body joint locations is computed using a statistical framework that yields a set of several body pose hypotheses. The concept of a "virtual camera" is introduced that makes this mapping invariant to translation, image-plane rotation, and scaling of the input. As a consequence, the calibration matrices (intrinsics) of the virtual cameras can be considered completely known, and their poses are known up to a single angular displacement parameter. Given pose hypotheses obtained in the multiple virtual camera views, the recovery of 3D body pose and camera relative orientations is formulated as a stochastic optimization problem. An Expectation-Maximization algorithm is derived that can obtain the locally most likely (self-consistent) combination of body pose hypotheses. Performance of the approach is evaluated with synthetic sequences as well as real video sequences of human motion.

Relevance:

30.00%

Publisher:

Abstract:

Markerless systems are becoming more ubiquitous due to their increased use in video game consoles. Cheap cameras and software suites are making motion capture technologies more freely available to the digitally inclined choreographer. In this workshop we will demonstrate the opportunities and limitations of easily accessible and relatively inexpensive markerless motion capture systems. In particular, we will explore the capacity of these systems to provide useful data in a live performance scenario, where the latency, size, and format of the captured data are crucial in allowing real-time processing and visualisation of the captured scene.

Relevance:

30.00%

Publisher:

Abstract:

With recent advances in technology and the miniaturization of devices such as GPS and IMU units, Unmanned Aerial Vehicles have become a feasible platform for Remote Sensing applications. Compared to conventional aerial platforms, UAVs provide a set of advantages, such as higher spatial resolution of the derived products. UAV-based imagery obtained with consumer-grade cameras introduces a set of problems that have to be solved, e.g. rotational or angular differences and unknown or insufficiently precise IO and EO camera parameters. In this work, UAV-based imagery of RGB and CIR type was processed using two different workflows based on the PhotoScan and VisualSfM software solutions, resulting in DSM and orthophoto products. The influence of feature detection and matching parameters on result quality and processing time was examined, and an optimal parameter setup is presented. The products of the two workflows were compared in terms of quality, spatial accuracy, and processing time. Finally, the obtained products were used to demonstrate vegetation classification, and the contribution of IHS transformations to classification accuracy was examined.

Relevance:

30.00%

Publisher:

Abstract:

Do we view the world differently if it is described to us in figurative rather than literal terms? An answer to this question would reveal something about both the conceptual representation of figurative language and the scope of top-down influences on scene perception. Previous work has shown that participants will look longer at a path region of a picture when it is described with a type of figurative language called fictive motion (The road goes through the desert) rather than without (The road is in the desert). The current experiment provided evidence that such fictive motion descriptions affect eye movements by evoking mental representations of motion. If participants heard contextual information that would hinder actual motion, it influenced how they viewed a picture when it was described with fictive motion. Inspection times and eye movements scanning along the path increased during fictive motion descriptions when the terrain was first described as difficult (The desert is hilly) as compared to easy (The desert is flat); there were no such effects for descriptions without fictive motion. It is argued that fictive motion evokes a mental simulation of motion that is immediately integrated with visual processing, and hence figurative language can have a distinct effect on perception. © 2005 Elsevier B.V. All rights reserved.