825 results for Tracking and trailing.
Abstract:
We present a novel way of interacting with an immersive virtual environment that involves inexpensive motion capture using the Wii Remote®. A software framework is also presented to visualize and share this information across two remote CAVE™-like environments. The resulting applications can be used to assist rehabilitation by sending motion information across remote sites. The application's software and hardware components are scalable enough to be used on a desktop computer when home-based rehabilitation is preferred.
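Purely as an illustration of the idea of sending motion information across remote sites, here is a minimal sketch that streams timestamped accelerometer samples over UDP; the address, port, message format, and 60 Hz framing are assumptions, not the paper's framework.

```python
import json
import socket

# A minimal sketch, purely illustrative: streaming timestamped motion
# samples (e.g. Wii Remote accelerometer readings) to a remote site
# over UDP. Host, port, and message format are assumptions.

REMOTE = ("127.0.0.1", 9000)   # hypothetical peer address

def send_sample(sock, t, ax, ay, az):
    """Serialise one accelerometer sample as JSON and send it."""
    msg = json.dumps({"t": t, "acc": [ax, ay, az]}).encode("utf-8")
    sock.sendto(msg, REMOTE)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sample(sock, 0.016, 0.01, -0.98, 0.05)   # one frame at ~60 Hz
```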
Abstract:
This paper presents a novel intelligent multiple-controller framework incorporating a fuzzy-logic-based switching and tuning supervisor along with a generalised learning model (GLM) for an autonomous cruise control application. The proposed methodology combines the benefits of a conventional proportional-integral-derivative (PID) controller, and a PID structure-based (simultaneous) zero and pole placement controller. The switching decision between the two nonlinear fixed structure controllers is made on the basis of the required performance measure using a fuzzy-logic-based supervisor, operating at the highest level of the system. The supervisor is also employed to adaptively tune the parameters of the multiple controllers in order to achieve the desired closed-loop system performance. The intelligent multiple-controller framework is applied to the autonomous cruise control problem in order to maintain a desired vehicle speed by controlling the throttle plate angle in an electronic throttle control (ETC) system. Sample simulation results using a validated nonlinear vehicle model demonstrate the effectiveness of the multiple-controller framework in adaptively tracking desired vehicle speed changes and achieving the desired speed of response, whilst penalising excessive control action. Crown Copyright © 2008. Published by Elsevier B.V. All rights reserved.
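To make the switching-and-blending idea concrete, below is a minimal sketch (not the authors' implementation) in which a fuzzy weight on the tracking error blends a conventional PID signal with a second controller's signal; the class names, membership function, and parameter values are all illustrative assumptions.

```python
# A minimal sketch, not the authors' implementation: a supervisor that
# blends a conventional PID signal with a second (pole-placement style)
# controller's signal using a fuzzy weight on the tracking error.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def fuzzy_weight(error, band=2.0):
    """Triangular membership: near 1 for large errors (favour the faster
    controller), falling to 0 near the setpoint (favour the smooth PID)."""
    return min(1.0, abs(error) / band)


def supervised_throttle(pid_u, pp_u, error):
    """Blend the two control signals rather than switching abruptly,
    avoiding discontinuities in the commanded throttle angle."""
    w = fuzzy_weight(error)
    return w * pp_u + (1.0 - w) * pid_u


# Usage with invented gains and a stand-in pole-placement output pp_u:
pid = PID(kp=1.2, ki=0.4, kd=0.05, dt=0.01)
u = supervised_throttle(pid.update(1.5), pp_u=2.0, error=1.5)
```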
Abstract:
For efficient collaboration between participants, eye gaze is seen as critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. AccessGrid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced, rather than approximated. This paper presents the results of initial work that tested whether the focus of gaze could be gauged more accurately if tracked eye movement was added to the head movement of an avatar observed in an immersive VE. An experiment was conducted to assess the difference in users' ability to judge which objects an avatar is looking at when only head movements are displayed, with the eyes remaining static, versus when both eye gaze and head movement are displayed. The results show that eye gaze is of vital importance for subjects to correctly identify what a person is looking at in an immersive virtual environment. This is followed by a description of the work now being undertaken following the positive results of the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use, and the software and techniques that were developed to map the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and its ongoing experiments. Copyright © 2009 John Wiley & Sons, Ltd.
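As a hedged illustration of why tracked eye rotation matters, the sketch below composes a head pose with an eye-in-head rotation to obtain a world-space gaze ray; the rotation matrices, angles, and axis convention are hypothetical and not taken from the paper.

```python
import numpy as np

# A minimal sketch: avatar gaze = head pose composed with the tracked
# eye-in-head rotation. Axis convention and angles are assumptions.

def rotation_y(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def gaze_direction(head_rot, eye_rot):
    """World-space gaze ray: the eye rotation is applied in head
    coordinates, then mapped to world coordinates by the head pose."""
    forward = np.array([0.0, 0.0, 1.0])   # avatar's local 'look' axis
    return head_rot @ (eye_rot @ forward)

# Head turned 30 deg left, eyes a further 10 deg left: gaze at 40 deg.
print(gaze_direction(rotation_y(30), rotation_y(10)))
```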
Abstract:
This paper describes a real-time multi-camera surveillance system that can be applied to a range of application domains. This integrated system is designed to observe crowded scenes and has mechanisms to improve tracking of objects that are in close proximity. The four component modules described in this paper are (i) motion detection using a layered background model, (ii) object tracking based on local appearance, (iii) hierarchical object recognition, and (iv) fused multisensor object tracking using multiple features and geometric constraints. This integrated approach to complex scene tracking is validated against a number of representative real-world scenarios to show that robust, real-time analysis can be performed. Copyright © 2007 Hindawi Publishing Corporation. All rights reserved.
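As an illustration of component (i), the sketch below implements a simple two-layer background model in which a fast layer absorbs transient changes and a slow layer holds the long-term scene; the update rates and threshold are invented for the example, and this is not the system's actual model.

```python
import numpy as np

# A minimal sketch, not the paper's implementation: a two-layer
# background model. A pixel counts as foreground only if it disagrees
# with both the fast (short-term) and slow (long-term) layers.

class LayeredBackground:
    def __init__(self, first_frame, fast_rate=0.10, slow_rate=0.01, thresh=25.0):
        self.fast = first_frame.astype(np.float64)
        self.slow = first_frame.astype(np.float64)
        self.fast_rate, self.slow_rate, self.thresh = fast_rate, slow_rate, thresh

    def apply(self, frame):
        f = frame.astype(np.float64)
        fg = (np.abs(f - self.fast) > self.thresh) & \
             (np.abs(f - self.slow) > self.thresh)
        # Update only background pixels so foreground objects do not
        # bleed into the model.
        bg = ~fg
        self.fast[bg] += self.fast_rate * (f - self.fast)[bg]
        self.slow[bg] += self.slow_rate * (f - self.slow)[bg]
        return fg

# Usage on synthetic grayscale frames:
model = LayeredBackground(np.zeros((4, 4)))
mask = model.apply(np.full((4, 4), 80.0))   # large change -> all foreground
```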
Abstract:
Listeners can attend to one of several simultaneous messages by tracking one speaker’s voice characteristics. Using differences in the location of sounds in a room, we ask how well cues arising from spatial position compete with these characteristics. Listeners decided which of two simultaneous target words belonged in an attended “context” phrase when it was played simultaneously with a different “distracter” context. Talker difference was in competition with position difference, so the response indicates which cue‐type the listener was tracking. Spatial position was found to override talker difference in dichotic conditions when the talkers are similar (male). The salience of cues associated with differences in the sounds’ bearings decreased with distance between listener and sources. These cues are more effective binaurally. However, there appear to be other cues that increase in salience with distance between sounds. This increase is more prominent in diotic conditions, indicating that these cues are largely monaural. Distances between spectra of the room’s impulse responses at different locations, calculated using a gammatone filterbank (with ERB‐spaced CFs), were computed; comparison with listeners’ responses suggested some slight monaural loudness cues, but also monaural “timbre” cues arising from the temporal‐ and spectral‐envelope differences in the speech from different locations.
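The ERB spacing mentioned above follows the Glasberg and Moore (1990) scale; the sketch below computes ERB-spaced centre frequencies and a crude band-energy distance between two impulse responses, with rectangular bands standing in for true gammatone filters (the frequency range, channel count, and that approximation are assumptions).

```python
import numpy as np

# ERB-rate scale of Glasberg & Moore (1990) and a crude spectral
# distance between impulse responses; rectangular bands are a
# simplifying stand-in for a true gammatone filterbank.

def erb_rate(f):
    return 21.4 * np.log10(1.0 + 0.00437 * f)

def erb_rate_inv(e):
    return (10.0 ** (e / 21.4) - 1.0) / 0.00437

def erb_space(f_low=100.0, f_high=8000.0, n=32):
    """n centre frequencies equally spaced on the ERB-rate scale."""
    return erb_rate_inv(np.linspace(erb_rate(f_low), erb_rate(f_high), n))

def erb_bandwidth(f):
    return 24.7 * (1.0 + 0.00437 * f)

def band_energies(ir, fs, cfs):
    """Integrate the power spectrum over one ERB around each CF."""
    spec = np.abs(np.fft.rfft(ir)) ** 2
    freqs = np.fft.rfftfreq(len(ir), 1.0 / fs)
    return np.array([spec[(freqs >= cf - erb_bandwidth(cf) / 2) &
                          (freqs <= cf + erb_bandwidth(cf) / 2)].sum()
                     for cf in cfs])

def spectral_distance(ir_a, ir_b, fs):
    cfs = erb_space()
    ea = 10 * np.log10(band_energies(ir_a, fs, cfs) + 1e-12)
    eb = 10 * np.log10(band_energies(ir_b, fs, cfs) + 1e-12)
    return np.linalg.norm(ea - eb)

# Demo on synthetic impulse responses:
rng = np.random.default_rng(0)
print(spectral_distance(rng.normal(size=2048), rng.normal(size=2048), 16000))
```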
Abstract:
In a “busy” auditory environment listeners can selectively attend to one of several simultaneous messages by tracking one talker's voice characteristics. Here we ask how well other cues compete for attention with such characteristics, using variations in the spatial position of sound sources in a (virtual) seminar room. Listeners decided which of two simultaneous target words belonged in an attended “context” phrase when it was played with a simultaneous “distracter” context that had a different wording. Talker difference was in competition with a position difference, so that the target‐word chosen indicates which cue‐type the listener was tracking. The main findings are that room‐acoustic factors provide some tracking cues, whose salience increases with distance separation. This increase is more prominent in diotic conditions, indicating that these cues are largely monaural. The room‐acoustic factors might therefore be the spectral‐ and temporal‐envelope effects of reverberation on the timbre of speech. By contrast, the salience of cues associated with differences in sounds' bearings tends to decrease with distance, and these cues are more effective in dichotic conditions. In other conditions, where a distance and a bearing difference cooperate, they can completely override a talker difference at various distances.
Abstract:
A new man-made target tracking algorithm based on a particle filter is presented, integrating features from FLIR (Forward-Looking InfraRed) image sequences. First, a multi-scale fractal feature (MFF) is used to enhance targets in FLIR images. Second, the gray-space feature is defined by the Bhattacharyya distance between the intensity histograms of the reference target and a candidate target in the MFF image. Third, the motion feature is obtained by differencing two MFF images. Fourth, a fusion coefficient is obtained automatically by an online, fuzzy-logic-based feature selection method for integrating the features. Finally, a particle filtering framework is developed to perform the tracking. Experimental results show that the proposed algorithm can accurately track weak or small man-made targets in FLIR images with complicated backgrounds. The algorithm is effective, robust, and fast enough for real-time tracking.
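As a minimal sketch of the gray-space feature, the code below computes the Bhattacharyya distance between two normalised intensity histograms and turns it into particle weights via a Gaussian likelihood; the likelihood form and sigma value are assumptions rather than the paper's exact formulation.

```python
import numpy as np

# A minimal sketch, not the paper's code: Bhattacharyya-based
# likelihood for weighting particles against a reference histogram.

def bhattacharyya_distance(p, q):
    """p, q: normalised intensity histograms of equal length."""
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))   # distance in [0, 1]

def particle_weights(ref_hist, candidate_hists, sigma=0.2):
    """Gaussian likelihood of each candidate region's histogram given
    the reference target; higher weight means a better match."""
    d = np.array([bhattacharyya_distance(ref_hist, h) for h in candidate_hists])
    w = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    return w / w.sum()

# Demo with two toy 3-bin histograms:
ref = np.array([0.5, 0.3, 0.2])
print(particle_weights(ref, [np.array([0.4, 0.4, 0.2]), ref]))
```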
Abstract:
This paper presents the results of the crowd image analysis challenge of the PETS 2010 workshop. The evaluation was carried out using a selection of the metrics developed in the Video Analysis and Content Extraction (VACE) program and the CLassification of Events, Activities, and Relationships (CLEAR) consortium. The PETS 2010 evaluation was performed using new ground truth created from each independent two-dimensional view. In addition, the submissions to the PETS 2009 and Winter-PETS 2009 workshops were evaluated and included in the results. The evaluation highlights the detection and tracking performance of the authors' systems in areas such as precision, accuracy, and robustness.
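The CLEAR family includes the multiple-object tracking accuracy (MOTA) score; the sketch below computes it from per-frame error counts, assuming the miss, false-positive, and identity-switch counts have already been produced by a matching step that is not shown.

```python
# A minimal sketch of the CLEAR MOT accuracy score (MOTA); the
# per-frame error counts are assumed inputs from a prior matching step.

def mota(misses, false_positives, id_switches, gt_counts):
    """MOTA = 1 - (FN + FP + IDSW) / GT, summed over all frames."""
    errors = sum(misses) + sum(false_positives) + sum(id_switches)
    return 1.0 - errors / sum(gt_counts)

# Example: 3 frames, 10 ground-truth objects each, 5 errors in total.
print(mota(misses=[1, 0, 2], false_positives=[0, 1, 0],
           id_switches=[0, 0, 1], gt_counts=[10, 10, 10]))  # 0.8333...
```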