68 results for Motion Tracking System
in CentAUR: Central Archive, University of Reading - UK
Abstract:
An increasing number of neuroscience experiments are using virtual reality to provide a more immersive and less artificial experimental environment. This is particularly useful for navigation and three-dimensional scene perception experiments. Such experiments require accurate real-time tracking of the observer's head in order to render the virtual scene. Here, we present data on the accuracy of a commonly used six-degrees-of-freedom tracker (Intersense IS900) when it is moved in ways typical of virtual reality applications. We compared the reported location of the tracker with its location computed by an optical tracking method. When the tracker was stationary, the root mean square error in spatial accuracy was 0.64 mm. However, we found that errors increased over ten-fold (up to 17 mm) when the tracker moved at speeds common in virtual reality applications. We demonstrate that the errors we report here are predominantly due to inaccuracies of the IS900 system rather than the optical tracking against which it was compared.
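The headline figure is a root-mean-square (RMS) positional error between two time-aligned trajectories. As a minimal sketch of how such a metric is computed (the function name, data layout, and the jitter example below are my own, not the paper's):

```python
import numpy as np

def rms_position_error(reported, reference):
    """Root-mean-square Euclidean distance between two (N, 3) trajectories.

    Both arrays are assumed to be time-aligned samples of 3D position
    in the same coordinate frame (e.g. millimetres).
    """
    reported = np.asarray(reported, dtype=float)
    reference = np.asarray(reference, dtype=float)
    errors = np.linalg.norm(reported - reference, axis=1)  # per-sample error
    return np.sqrt(np.mean(errors ** 2))

# Example: a stationary tracker with small Gaussian jitter around the truth.
rng = np.random.default_rng(0)
truth = np.zeros((1000, 3))
measured = truth + rng.normal(scale=0.4, size=truth.shape)
print(f"RMS error: {rms_position_error(measured, truth):.2f} mm")
```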
Video stimuli reduce object-directed imitation accuracy: a novel two-person motion-tracking approach
Abstract:
Imitation is an important form of social behavior, and research has aimed to discover and explain the neural and kinematic aspects of imitation. However, much of this research has featured single participants imitating in response to pre-recorded video stimuli. This is in spite of findings that show reduced neural activation to video vs. real-life movement stimuli, particularly in the motor cortex. We investigated the degree to which video stimuli may affect the imitation process using a novel motion tracking paradigm with high spatial and temporal resolution. We recorded 14 positions on the hands, arms, and heads of two individuals in an imitation experiment. One individual moved freely within given parameters (moving balls across a series of pegs) and a second participant imitated. This task was performed with either simple (one ball) or complex (three balls) movement difficulty, and either face-to-face or via a live video projection. After an exploratory analysis, three dependent variables were chosen for examination: 3D grip position, joint angles in the arm, and grip aperture. A cross-correlation and multivariate analysis revealed that object-directed imitation task accuracy (as represented by grip position) was reduced in the video condition compared to face-to-face feedback, and in complex compared to simple difficulty. This was most pronounced in the left-right and forward-back motions, as seen by an imitator sitting face-to-face with the actor or with a live projected video of the same actor. The results suggest that for tasks which require object-directed imitation, video stimuli may not be an ecologically valid way to present task materials. However, no similar effects were found in the joint angle and grip aperture variables, suggesting that there are limits to the influence of video stimuli on imitation. The implications of these results are discussed with regard to previous findings, and with suggestions for future experimentation.
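The accuracy measure rests on cross-correlating the actor's and imitator's grip-position signals. A minimal sketch of a normalised cross-correlation with lag recovery, assuming equal-length, time-aligned 1D signals (names and conventions are illustrative, not the paper's pipeline):

```python
import numpy as np

def peak_crosscorr(actor, imitator):
    """Normalised cross-correlation of two equal-length 1D signals.

    Returns (peak correlation, lag in samples); a positive lag means
    the imitator trails the actor.
    """
    a = (actor - actor.mean()) / actor.std()
    b = (imitator - imitator.mean()) / imitator.std()
    c = np.correlate(b, a, mode="full") / len(a)
    lags = np.arange(-len(a) + 1, len(a))
    i = np.argmax(c)
    return c[i], lags[i]

# Example: the imitator reproduces the actor's motion 5 samples late.
t = np.linspace(0, 10, 200)
actor = np.sin(t)
imitator = np.roll(actor, 5)
print(peak_crosscorr(actor, imitator))    # peak near 1.0 at lag ~5
```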
Abstract:
Under the framework of the European Union funded SAFEE project, this paper gives an overview of a novel monitoring and scene analysis system developed for use onboard aircraft in spatially constrained environments. The techniques discussed herein aim to warn on-board crew about pre-determined indicators of threat intent (such as running or shouting in the cabin), as elicited from industry and security experts. The subject matter experts believe that activities such as these are strong indicators of the beginnings of undesirable chains of events or scenarios, which should not be allowed to develop aboard aircraft. This project aims to detect these scenarios and provide advice to the crew. These events may involve unruly passengers or be indicative of the precursors to terrorist threats. With a state-of-the-art tracking system using homography intersections of motion images, and probability-based Petri nets for scene understanding, the SAFEE behavioural analysis system automatically assesses the output from multiple intelligent sensors, and creates recommendations that are presented to the crew using an integrated airborne user interface. Evaluation of the system is conducted within a full-size aircraft mock-up, and experimental results are presented, showing that the SAFEE system is well suited to monitoring people in confined environments, and that meaningful and instructive output regarding human actions can be derived from the sensor network within the cabin.
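As a rough illustration of the probability-based Petri net idea, here is a minimal net whose enabled transitions fire with probability proportional to a weight; the structure, names, and weights are invented for illustration and are not the SAFEE implementation:

```python
import random

class PetriNet:
    """A minimal probability-weighted Petri net sketch."""

    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = []                 # (name, inputs, outputs, weight)

    def add_transition(self, name, inputs, outputs, weight=1.0):
        self.transitions.append((name, inputs, outputs, weight))

    def enabled(self):
        return [t for t in self.transitions
                if all(self.marking.get(p, 0) >= 1 for p in t[1])]

    def step(self):
        """Fire one enabled transition, chosen with probability
        proportional to its weight; return its name, or None."""
        cands = self.enabled()
        if not cands:
            return None
        name, inputs, outputs, _ = random.choices(
            cands, weights=[t[3] for t in cands])[0]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return name

# Toy scenario: a tracked passenger either stays seated or stands up.
net = PetriNet({"seated": 1})
net.add_transition("stand_up", ["seated"], ["standing"], weight=0.3)
net.add_transition("stay", ["seated"], ["seated"], weight=0.7)
print(net.step(), net.marking)
```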
Abstract:
The paper describes a novel integrated vision system in which two autonomous visual modules are combined to interpret a dynamic scene. The first module employs a 3D model-based scheme to track rigid objects such as vehicles. The second module uses a 2D deformable model to track non-rigid objects such as people. The principal contribution is a novel method for handling occlusion between objects within the context of this hybrid tracking system. The practical aim of the work is to derive a scene description that is sufficiently rich to be used in a range of surveillance tasks. The paper describes each of the modules in outline before detailing the method of integration and the handling of occlusion in particular. Experimental results are presented to illustrate the performance of the system in a dynamic outdoor scene involving cars and people.
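One simple way to reason about inter-object occlusion, given per-object image masks and depth estimates, is to remove the regions hidden by nearer objects. This sketch illustrates the general idea only; it is not the paper's method, and the scalar-depth comparison is a simplification of any full per-pixel test:

```python
import numpy as np

def visible_mask(own_mask, occluder_masks, own_depth, occluder_depths):
    """Return the part of `own_mask` not hidden by any nearer object.

    Masks are boolean image arrays; depths are scalar distances to the
    camera, so a smaller depth means closer.
    """
    visible = own_mask.copy()
    for mask, depth in zip(occluder_masks, occluder_depths):
        if depth < own_depth:              # occluder is in front
            visible &= ~mask
    return visible
```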
Abstract:
It has been proposed that autism spectrum conditions (ASC) involve a core impairment of the mirror neuron system (MNS): if observed actions cannot be mapped onto the motor commands required for performance, higher-order sociocognitive functions that involve understanding another person's perspective, such as theory of mind, may be impaired. However, evidence of MNS impairment in ASC is mixed. The present study used an 'automatic imitation' paradigm to assess MNS functioning in adults with ASC and matched controls when observing emotional facial actions. Participants performed a pre-specified angry or surprised facial action in response to observed angry or surprised facial actions, and the speed of their action was measured with motion tracking equipment. Both the ASC and control groups demonstrated automatic imitation of the facial actions, such that responding was faster when they acted with the same emotional expression that they had observed. There was no difference between the two groups in the magnitude of the effect. These findings suggest that previous apparent demonstrations of impairments to the MNS in ASC may be driven by a lack of visual attention to the stimuli or motor sequencing impairments, and therefore that there is, in fact, no MNS impairment in ASC. We discuss these findings with reference to the literature on MNS functioning and imitation in ASC, as well as theories of the role of the MNS in sociocognitive functioning in typical development.
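Response speed in such a paradigm is typically derived from a movement-onset time in the tracking data. A hedged sketch, assuming a simple speed-threshold onset criterion (the threshold value and data layout are illustrative, not the paper's):

```python
import numpy as np

def reaction_time(positions, t, threshold=50.0):
    """Movement onset from motion-tracking samples.

    positions: (N, 3) marker positions (mm); t: (N,) timestamps (s).
    Onset is the first sample whose speed exceeds `threshold` (mm/s);
    assumes the threshold is actually crossed during the trial.
    """
    velocity = np.gradient(positions, t, axis=0)   # (N, 3) mm/s
    speed = np.linalg.norm(velocity, axis=1)
    onset = np.argmax(speed > threshold)           # first True index
    return t[onset] - t[0]
```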
Abstract:
Recent evidence suggests that the mirror neuron system responds to the goals of actions, even when the end of the movement is hidden from view. To investigate whether this predictive ability might be based on the detection of early differences between actions with different outcomes, we used electromyography (EMG) and motion tracking to assess whether two actions with different goals (grasp to eat and grasp to place) differed from each other in their initial reaching phases. In a second experiment, we then tested whether observers could detect early differences and predict the outcome of these movements, based on seeing only part of the actions. Experiment 1 revealed early kinematic differences between the two movements, with grasp-to-eat movements characterised by an earlier peak acceleration and a different grasp position compared to grasp-to-place movements. There were also significant differences in forearm muscle activity in the reaching phase of the two actions. The behavioural data arising from Experiments 2a and 2b indicated that observers are not able to predict whether an object is going to be brought to the mouth or placed until after the grasp has been completed. This suggests that the early kinematic differences are either not visible to observers, or that they are not used to predict the end-goals of actions. These data are discussed in the context of the mirror neuron system.
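Peak acceleration can be located by double-differentiating the sampled positions. A minimal sketch under assumed sampling conventions (real kinematic data would be low-pass filtered first, since double differentiation amplifies noise):

```python
import numpy as np

def time_of_peak_acceleration(positions, fs):
    """Time (s) of peak acceleration magnitude in a reach.

    positions: (N, 3) wrist positions in metres, sampled at fs Hz.
    Filtering is omitted here for brevity.
    """
    dt = 1.0 / fs
    vel = np.gradient(positions, dt, axis=0)   # finite-difference velocity
    acc = np.gradient(vel, dt, axis=0)         # finite-difference acceleration
    mag = np.linalg.norm(acc, axis=1)
    return np.argmax(mag) * dt
```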
Abstract:
This paper presents a quantitative evaluation of a tracking system on the PETS 2015 Challenge datasets using well-established performance measures. Using existing tools, the tracking system implements an end-to-end pipeline that includes object detection, tracking and post-processing stages. The evaluation results are presented on the provided sequences of both the ARENA and P5 datasets of the PETS 2015 Challenge. The results show an encouraging performance of the tracker in terms of accuracy, but a greater susceptibility to cardinality errors and ID changes on both datasets. Moreover, the analysis shows a better performance of the tracker on visible imagery than on thermal imagery.
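As an illustration of the two reported failure modes, here are simple per-frame cardinality-error and ID-switch counts; the definitions below are common simplifications, not necessarily the exact PETS measures:

```python
import numpy as np

def cardinality_error(gt_counts, tracker_counts):
    """Mean absolute difference between the number of ground-truth
    objects and the number of tracker outputs, per frame."""
    gt = np.asarray(gt_counts)
    tr = np.asarray(tracker_counts)
    return np.mean(np.abs(gt - tr))

def id_switches(assignments):
    """Count frames where a ground-truth object's matched track ID
    changes. `assignments` maps gt_id -> list of per-frame track IDs
    (None when unmatched)."""
    switches = 0
    for ids in assignments.values():
        prev = None
        for tid in ids:
            if tid is not None and prev is not None and tid != prev:
                switches += 1
            if tid is not None:
                prev = tid
    return switches

print(cardinality_error([3, 3, 2], [3, 4, 2]))       # 0.33...
print(id_switches({"person1": [7, 7, None, 9, 9]}))  # 1 switch (7 -> 9)
```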
Abstract:
Navigating cluttered indoor environments is a difficult problem in indoor service robotics. The Acroboter concept, a novel approach to indoor locomotion, represents a unique opportunity to avoid obstacles in indoor environments by navigating the ceiling plane. This mode of locomotion requires the ability to accurately detect obstacles and plan 3D trajectories through the environment. This paper presents the development of a resilient object tracking system, as well as a novel approach to generating 3D paths suitable for such robot configurations. Distributed human-machine interfacing, allowing actions to be previewed in simulation, is also considered in the developed system architecture.
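A 3D path through a cluttered volume can be sketched with a standard grid-based planner. The A* below is a deliberately simple stand-in for the paper's path generator, with all conventions (6-connected moves, boolean occupancy grid, Manhattan heuristic) assumed rather than taken from the paper:

```python
import heapq
import numpy as np

def astar_3d(occupied, start, goal):
    """A* over a boolean 3D occupancy grid (True = obstacle).

    start and goal are (x, y, z) integer tuples; returns the cell
    path from start to goal, or None if unreachable.
    """
    def h(p):
        return sum(abs(a - b) for a, b in zip(p, goal))

    moves = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    frontier = [(h(start), 0, start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx, dy, dz in moves:
            nxt = (cur[0] + dx, cur[1] + dy, cur[2] + dz)
            if any(c < 0 or c >= s for c, s in zip(nxt, occupied.shape)):
                continue                    # outside the grid
            if occupied[nxt]:
                continue                    # obstacle cell
            if g + 1 < cost.get(nxt, float("inf")):
                cost[nxt] = g + 1
                came_from[nxt] = cur
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None

grid = np.zeros((4, 4, 3), dtype=bool)
grid[1:3, 1:3, :] = True                    # a block of obstacles
print(astar_3d(grid, (0, 0, 0), (3, 3, 2)))
```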
Marker placement to describe the wrist movements during activities of daily living in cyclical tasks
Abstract:
Objective. To describe the wrist kinematics during movement through the free range of motion and activities of daily living using a cyclical task. Design. The wrist angles were initially calculated in a calibration trial and then in two selected activities of daily living (jar opening and carton pouring). Background. Existing studies which describe wrist movement do not address the specific application of daily activities. Moreover, the data presented from subject to subject may differ simply because of the non-cyclical nature of upper limb movements. Methods. The coordinates of external markers attached to bone references on the forearm and dorsal side of the hand were obtained using an optical motion capture system. The wrist angles were derived from free motion trials and successively calculated in four healthy subjects for two specific cyclical daily activities (opening a jar and pouring from a carton). Results. The free motion trial highlighted the interaction between the wrist angles. Both the jar-opening and the carton-pouring activity showed a repetitive pattern for the three angles within the cycle length. In the jar-opening task, the standard deviation for the whole population was 10.8° for flexion-extension, 5.3° for radial-ulnar deviation and 10.4° for pronation-supination. In the carton-pouring task, the standard deviation for the whole population was 16.0° for flexion-extension, 3.4° for radial-ulnar deviation and 10.7° for pronation-supination. Conclusion. Wrist kinematics in healthy subjects can be successfully described by the rotations about the axes of marker-defined coordinate systems during free range of motion and daily activities using cyclical tasks.
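A wrist angle of this kind can be sketched as the angle between marker-defined segment axes; a full protocol would build complete 3D coordinate systems for the forearm and hand and decompose all three rotations. The marker names and the reduction to a single unsigned angle below are my simplifications:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def wrist_angle(forearm_prox, forearm_dist, hand_prox, hand_dist):
    """Unsigned angle (degrees) between the forearm and hand long axes,
    each defined by a proximal and a distal marker position (3-vectors)."""
    f = unit(np.asarray(forearm_dist, float) - np.asarray(forearm_prox, float))
    h = unit(np.asarray(hand_dist, float) - np.asarray(hand_prox, float))
    return np.degrees(np.arccos(np.clip(f @ h, -1.0, 1.0)))

# Example: a hand flexed 30 degrees relative to a straight forearm.
print(wrist_angle([0, 0, 0], [0, 0, 1],
                  [0, 0, 1], [0, np.sin(np.pi/6), 1 + np.cos(np.pi/6)]))
```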
Abstract:
In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the access grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work that looks towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact that this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar. The eye-gaze behavior of that avatar had previously been recorded from a user with the use of a head-mounted eye tracker. The first experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head gaze being viewed, and with both eye- and head-gaze data being displayed. The results from the experiment show that eye gaze is of vital importance in enabling subjects to correctly identify what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye-tracker would be required. This was examined by testing subjects' ability to identify where an avatar was looking from their eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on the subjects' ability to identify where the avatar was looking. The final experiment looked at the effects of stereo and mono viewing of the scene, with the subjects being asked to identify where the avatar was looking. This experiment showed that there was no difference in the subjects' ability to detect where the avatar was gazing. This is followed by a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
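The convergence cue amounts to intersecting the two eyes' gaze rays. A minimal sketch that returns the midpoint of their closest approach, using the standard closest-point-of-two-lines construction (names and conventions are illustrative, not the paper's):

```python
import numpy as np

def gaze_convergence_point(origin_l, dir_l, origin_r, dir_r):
    """Midpoint of closest approach between the two gaze rays.

    Origins are 3D eye positions; dirs are (not necessarily unit) gaze
    direction vectors. Solves for the scalars s, t minimising
    |(origin_l + s*dir_l) - (origin_r + t*dir_r)|.
    """
    o1, d1 = np.asarray(origin_l, float), np.asarray(dir_l, float)
    o2, d2 = np.asarray(origin_r, float), np.asarray(dir_r, float)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:                # (nearly) parallel gaze directions
        return None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return ((o1 + s * d1) + (o2 + t * d2)) / 2.0

# Example: eyes 6 cm apart, both fixating a point 1 m straight ahead.
print(gaze_convergence_point([-0.03, 0, 0], [0.03, 0, 1.0],
                             [0.03, 0, 0], [-0.03, 0, 1.0]))
```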
Abstract:
Industrial robotic manipulators can be found in most factories today. Their tasks are accomplished through actively moving, placing and assembling parts. This movement is facilitated by actuators that apply a torque in response to a command signal. The presence of friction, and possibly backlash, has instigated the development of sophisticated compensation and control methods in order to achieve the desired performance, whether that be accurate motion tracking, fast movement or indeed contact with the environment. This thesis presents a dual-drive actuator design that is capable of physically linearising friction and hence eliminating the need for complex compensation algorithms. A number of mathematical models are derived that allow for the simulation of the actuator dynamics. The actuator may be constructed using geared DC motors, in which case the benefit of torque magnification is retained whilst the increased non-linear friction effects are also linearised. An additional benefit of the actuator is the high-quality, low-latency output position signal provided by the differencing of the two drive positions. Due to this and the linearised nature of friction, the actuator is well suited to low-velocity, stop-start applications, micro-manipulation and even hard-contact tasks. There are, however, disadvantages to its design. When idle, the device uses power whilst many other, single-drive actuators do not. Also, the complexity of the models means that parameterisation is difficult. Management of start-up conditions still poses a challenge.
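The friction-linearising idea can be illustrated with Coulomb friction and two drives biased to spin in fixed, opposite directions, so that neither friction torque reverses sign as the output velocity crosses zero. All numbers below are illustrative and this is a conceptual sketch, not the thesis design:

```python
import numpy as np

def coulomb(v, f_c=0.2):
    """Coulomb friction torque opposing velocity v."""
    return -f_c * np.sign(v)

# Drive A is biased forward, drive B backward. While the output velocity
# stays within the bias, each drive keeps its direction, so each Coulomb
# term is constant: the friction seen at the output has no discontinuity.
v_out = np.linspace(-1.0, 1.0, 5)          # output velocity sweep
bias = 2.0                                 # per-drive bias velocity
v_a, v_b = bias + v_out, -bias + v_out
net_single = coulomb(v_out)                # single drive: jumps at zero
net_dual = coulomb(v_a) + coulomb(v_b)     # dual drive: constant sum
print(net_single)
print(net_dual)
```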
Abstract:
A prediction mechanism is necessary in human visuo-motor control to compensate for the delay of the sensory-motor system. In a previous study, "proactive control" was discussed as one example of the predictive function of human beings, in which the motion of the hands preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit is segmented into target-visible regions and target-invisible regions. The main results of this research were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli is shortened by more than 10%. The shortening of the period of the rhythm in the brain accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by the environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.
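The experimental manipulation, a circular orbit split into visible and invisible arcs, can be sketched as a stimulus generator; the segment count and duty fraction below are illustrative choices, not the paper's values:

```python
import numpy as np

def target_on_circle(t, period=2.0, radius=0.1, n_segments=4, duty=0.5):
    """Target position on a circular orbit, plus a visibility flag.

    The orbit is split into n_segments equal arcs; the target is
    visible for the first `duty` fraction of each arc.
    """
    phase = 2 * np.pi * t / period
    x, y = radius * np.cos(phase), radius * np.sin(phase)
    frac = (phase / (2 * np.pi / n_segments)) % 1.0
    return x, y, frac < duty

for t in np.linspace(0.0, 1.0, 5):
    print(target_on_circle(t))
```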
Abstract:
The motion of a car is described using a stochastic model in which the driving processes are the steering angle and the tangential acceleration. The model incorporates exactly the kinematic constraint that the wheels do not slip sideways. Two filters based on this model have been implemented, namely the standard extended Kalman filter (EKF) and a new filter (the CUF), in which the expectation and the covariance of the system state are propagated accurately. Experiments show that i) the CUF is better than the EKF at predicting future positions of the car; and ii) the filter outputs can be used to control the measurement process, leading to an improved ability to recover from errors in predictive tracking.
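The no-side-slip constraint with steering angle and tangential acceleration as inputs gives the standard kinematic bicycle model, a plausible reading of the filters' process model; the wheelbase, step size, and Euler integration below are my assumptions:

```python
import numpy as np

def predict(state, steer, accel, dt, wheelbase=2.5):
    """One Euler step of the no-side-slip (bicycle) car model.

    state is (x, y, heading, speed); steer is the steering angle (rad)
    and accel the tangential acceleration (m/s^2).
    """
    x, y, theta, v = state
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += (v / wheelbase) * np.tan(steer) * dt   # no sideways wheel slip
    v += accel * dt
    return np.array([x, y, theta, v])

# Predict one second ahead at 50 Hz under a constant gentle left turn.
state = np.array([0.0, 0.0, 0.0, 10.0])
for _ in range(50):
    state = predict(state, steer=0.05, accel=0.0, dt=0.02)
print(state)
```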