934 results for Head-On Collisions.
Abstract:
Four experiments conducted over three seasons (2002-05) at the Crops Research Unit, University of Reading, investigated the effects of canopy management of autumn-sown oilseed rape (Brassica napus L. ssp. oleifera var. biennis (DC.) Metzg.) on competition with grass weeds. Emphasis was placed on the effect of the crop on the weeds. Rape canopy size was manipulated using sowing date, seed rate and the application of autumn fertilizer. Lolium multiflorum Lam., L. x boucheanum Kunth and Alopecurus myosuroides Huds. were sown as indicative grass weeds. The effects of sowing date, seed rate and autumn nitrogen on crop competitive ability were correlated with rape biomass and fractional interception of photosynthetically active radiation (PAR) by the rape floral layer, to the extent that by spring there was good evidence of crop:weed replacement. An increase in seed rate up to the highest plant densities tested increased both rape biomass and competitiveness; e.g. in 2002/3, L. multiflorum head density was reduced from 539 to 245 heads/m² and spikelet density from 13 170 to 5960 spikelets/m² when rape plant density was increased from 16 to 81 plants/m². Spikelets/head of Lolium spp. were little affected by rape seed rate, but the length of heads of A. myosuroides was reduced by 9% when plant density was increased from 29 to 51 plants/m². Autumn nitrogen increased rape biomass and reduced L. multiflorum head density (415 and 336 heads/m² without and with autumn nitrogen, respectively) and spikelet density (9990 and 8220 spikelets/m² without and with autumn nitrogen, respectively). The number of spikelets/head was not significantly affected by autumn nitrogen. Early sowing could increase biomass and competitiveness, but poor crop establishment sometimes overrode the effect. Where crop and weed establishment was similar for both sowing dates, a 2-week delay (i.e. early September to mid-September) increased L. multiflorum head density from 226 to 633 heads/m² and spikelet density from 5780 to 15 060 spikelets/m².
Abstract:
Accurate calibration of a head-mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet existing calibration methods are time consuming and depend on human judgements, making them error prone. The methods are also limited to optical see-through HMDs. Building on our existing HMD calibration method [1], we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in various positions. The locations of image features on the calibration object are then re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner in both see-through and non-see-through modes and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors and involves no error-prone human measurements.
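The reprojection-error measure used to assess such a calibration can be sketched with a minimal pinhole model. This is an illustration only, not the authors' implementation: the intrinsic matrix, pose, and 3D points below are invented for the example.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection of Nx3 world points to pixel coordinates."""
    cam = points_3d @ R.T + t            # world frame -> display/camera frame
    uvw = cam @ K.T                      # apply intrinsic parameters
    return uvw[:, :2] / uvw[:, 2:3]      # perspective divide

def rms_reprojection_error(observed_px, points_3d, K, R, t):
    """RMS pixel distance between observed and reprojected features."""
    err = observed_px - project(points_3d, K, R, t)
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

# Invented intrinsics for a 1280x1024 display: focal length in pixels and
# the principal point (the optic centre); identity pose for the extrinsics.
K = np.array([[1100.0, 0.0, 640.0],
              [0.0, 1100.0, 512.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.1, 0.2, 1.5], [-0.3, 0.1, 2.0], [0.0, -0.2, 1.8]])
obs = project(pts, K, R, t)              # perfect synthetic "observations"
print(rms_reprojection_error(obs, pts, K, R, t))  # -> 0.0
```

With real captured features in `obs`, the same error measure quantifies how well the recovered intrinsics and extrinsics explain the data.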
Abstract:
For efficient collaboration between participants, eye gaze is seen as critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. AccessGrid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced rather than approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged if tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' ability to judge which objects an avatar is looking at when only head movements were displayed, with the eyes remaining static, and when both eye gaze and head movement information were displayed. The results show that eye gaze is of vital importance to subjects correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work now being undertaken following these positive results. We discuss the integration of an eye tracker more suitable for immersive mobile use, and the software and techniques developed to integrate the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and in its ongoing experiments. Copyright (C) 2009 John Wiley & Sons, Ltd.
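Composing tracked head orientation with tracked eye direction to obtain the world-space line of gaze can be sketched as below. The rotation convention and the example vectors are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaze_direction(head_rotation, eye_dir_in_head):
    """World-space line of gaze: rotate the eye's viewing direction
    (expressed in head coordinates) by the tracked head orientation.
    Returns a unit vector."""
    d = head_rotation @ np.asarray(eye_dir_in_head, dtype=float)
    return d / np.linalg.norm(d)

# Head turned 90 degrees about the vertical (y) axis, eyes looking
# straight ahead in the head frame (+z): gaze ends up along world +x.
head_R = np.array([[0.0, 0.0, 1.0],
                   [0.0, 1.0, 0.0],
                   [-1.0, 0.0, 0.0]])
print(gaze_direction(head_R, [0.0, 0.0, 1.0]))  # -> [1. 0. 0.]
```

Displaying this composed direction on the avatar, rather than the head orientation alone, is what lets observers judge the true focus of gaze.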
Abstract:
In this paper, a forward-looking infrared (FLIR) video surveillance system is presented for avoiding collisions between moving ships and bridge piers. An image preprocessing algorithm is proposed to reduce background clutter by multi-scale fractal analysis, in which the blanket method is used to compute the fractal feature. A moving-ship detection algorithm is then developed from differences in the fractal feature within the region of surveillance between frames sampled at regular intervals. When a moving ship is detected in the region of surveillance, a safety-alert device is triggered. Experimental results show that the approach is feasible and effective, achieving real-time, reliable alerts to avoid collisions between moving ships and bridge piers.
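The frame-differencing alert stage can be sketched as below, assuming the fractal-feature maps have already been computed by the blanket method; the thresholds and the `detect_motion` interface are invented for illustration.

```python
import numpy as np

def detect_motion(feature_prev, feature_curr, roi, thresh=0.2, min_frac=0.01):
    """Alert when the fractal-feature difference inside the surveillance
    region exceeds `thresh` over at least `min_frac` of its pixels."""
    diff = np.abs(feature_curr - feature_prev)
    changed = np.mean(diff[roi] > thresh)   # fraction of ROI pixels that moved
    return bool(changed >= min_frac)        # True -> trigger the safety alert

prev = np.zeros((64, 64))
curr = prev.copy()
curr[20:28, 30:38] = 1.0                    # a "ship" entering the scene
roi = np.ones_like(prev, dtype=bool)        # surveil the whole frame here
print(detect_motion(prev, curr, roi))       # -> True
```

Restricting `roi` to the water around the piers keeps clutter outside the surveillance region from raising false alerts.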
Abstract:
The main objective is to generate kinematic models for head and neck movements. The motivation comes from our study of individuals with quadriplegia and the need to design rehabilitation aids, such as robots and teletheses, that can be controlled by head-neck movements. It is therefore necessary to develop mathematical models of head and neck movements. Two identification methods have been applied to study the kinematics of head-neck movements of able-bodied as well as neck-injured subjects. In particular, sagittal-plane movements are well modeled by a planar two-revolute-joint linkage. In fact, the motion in joint space seems to indicate that sagittal-plane movements may be classified as single-DOF motion. Finally, a spatial three-revolute-joint system has been employed to model 3D head-neck movements.
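Forward kinematics of a planar two-revolute-joint linkage of the kind used for the sagittal-plane model can be sketched as follows; the link lengths are illustrative stand-ins, not the identified model parameters.

```python
from math import cos, sin, pi

def head_position(theta1, theta2, l1=0.12, l2=0.10):
    """Planar 2R forward kinematics: joint angles (rad) -> head (x, y) in m.
    l1, l2 are illustrative neck/head segment lengths."""
    x = l1 * cos(theta1) + l2 * cos(theta1 + theta2)
    y = l1 * sin(theta1) + l2 * sin(theta1 + theta2)
    return x, y

print(head_position(0.0, 0.0))  # fully extended along x -> (0.22, 0.0)
```

A single-DOF classification of sagittal movement corresponds to `theta2` being a fixed function of `theta1` along the observed joint-space trajectory.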
Abstract:
Researchers in the rehabilitation engineering community have been designing and developing a variety of passive/active devices to help persons with limited upper-extremity function to perform essential daily manipulations. Devices range from low-end tools such as head/mouth sticks to sophisticated robots using vision and speech input. While almost all of the high-end equipment developed to date relies on visual feedback alone to guide the user, providing no tactile or proprioceptive cues, the “low-tech” head/mouth sticks deliver better “feel” because of the inherent force feedback through physical contact with the user's body. However, the disadvantage of a conventional head/mouth stick is that it can only function in a limited workspace and its performance is limited by the user's strength. It therefore seems reasonable to attempt to develop a system that exploits the advantages of the two approaches: the power and flexibility of robotic systems with the sensory feedback of a headstick. The system presented in this paper reflects the design philosophy stated above. It contains a pair of master-slave robots, with the master being operated by the user's head and the slave acting as a telestick. Described in this paper are the design, control strategies, implementation and performance evaluation of the head-controlled force-reflecting telestick system.
Abstract:
For individuals with upper-extremity motor disabilities, the head-stick is a simple and intuitive means of performing manipulations because it provides direct proprioceptive information to the user. Through practice and use of inherent proprioceptive cues, users may become quite adept at using the head-stick for a number of different tasks. The traditional head-stick is limited, however, to the user's achievable range of head motion and force generation, which may be insufficient for many tasks. The authors describe an interface to a robot system which emulates the proprioceptive qualities of a traditional head-stick while also allowing for augmented end-effector ranges of force and motion. The design and implementation of the system (coordinate transforms, bilateral telemanipulator architecture, safety systems, and system identification of the master) are described, along with preliminary evaluation results.
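One control step of a position-forward/force-back bilateral loop, of the kind such a telestick system might use, can be sketched as below; the gains and scale factors are invented for illustration, not the authors' identified values.

```python
def bilateral_step(x_master, x_slave, f_contact,
                   kp=200.0, pos_scale=2.0, force_scale=0.5):
    """One step of a position-forward / force-back telemanipulator loop.
    The slave servos toward a scaled copy of the master (head) motion,
    while a scaled contact force is reflected back to the headstick."""
    x_target = pos_scale * x_master          # augmented end-effector range
    f_slave = kp * (x_target - x_slave)      # slave position-servo command
    f_reflect = force_scale * f_contact      # force felt at the head
    return f_slave, f_reflect

print(bilateral_step(0.25, 0.0, 0.0))  # free space -> (100.0, 0.0)
```

The position scale augments the limited head workspace, while the force scale keeps reflected contact forces within what the user's neck can comfortably resist.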
Abstract:
This paper describes the design, implementation and testing of a high-speed controlled stereo “head/eye” platform which facilitates the rapid redirection of gaze in response to visual input. It details the mechanical device, which is based around geared DC motors, and describes hardware aspects of the controller and vision system, which are implemented on a reconfigurable network of general-purpose parallel processors. The servo-controller is described in detail and higher-level gaze and vision constructs are outlined. The paper gives performance figures gained both from mechanical tests on the platform alone and from closed-loop tests on the entire system using visual feedback from a feature detector.
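One sample of a discrete PID axis servo, of the kind such a servo-controller might run per joint, can be sketched as below; the gains and sample time are illustrative assumptions, not the paper's values.

```python
def pid_step(error, state, kp=8.0, ki=2.0, kd=0.5, dt=0.002):
    """One sample of a discrete PID axis servo for a geared DC motor joint.
    `state` holds the per-axis controller memory between samples."""
    state["i"] += error * dt                 # accumulate integral of error
    deriv = (error - state["e"]) / dt        # backward-difference derivative
    state["e"] = error
    return kp * error + ki * state["i"] + kd * deriv

state = {"i": 0.0, "e": 0.0}                 # fresh controller memory
cmd = pid_step(0.1, state)                   # 0.1 rad gaze error this sample
```

Running such a loop at a fixed sample rate per axis is what allows the platform to redirect gaze quickly while the vision system supplies new targets.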
Abstract:
The elderly tutor La Sale's didactic treatise for his charges (dated 1451) includes an eyewitness account of the siege of Anjou-held Naples by the Aragonese in 1438. It narrates the accidental death (or miracle, depending on the perspective of the chroniclers) of the infante Pedro of Castile, brother of King Alfonso the Magnanimous of Aragon. This article explores how 'La Sale', an adapted version of the Middle French translation of Valerius Maximus's 'Facta et dicta memorabilia', frames and skews the anecdote towards an exploration of the reliability and authority of the tutor-narrator.
Abstract:
Accurate calibration of a head-mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet existing calibration methods are time consuming, depend on human judgements, making them error prone, and are often limited to optical see-through HMDs. Building on our existing approach to HMD calibration (Gilson et al., 2008), we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in multiple positions. The centroids of the markers on the calibration object are recovered and their locations re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the HMD display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors without the need for error-prone human judgements.
Abstract:
This article presents three ethnographic tales of interactions with living room media to help recreate the experience of significant moments in time, of affective encounters at the interface in which there is a collision or confusion of situated and virtual worlds. It draws on a year-long video ethnography of the practice and performance of everyday interactions with living room media. By studying situated activity and the lived practice of (new) media, rather than taking an exclusive focus on the virtual as a detached space, this ethnographic work demonstrates how the situated and mediated clash, or are crafted into complex emotional encounters during everyday living room life.
Abstract:
The authors demonstrate four real-time reactive responses to movement in everyday scenes using an active head/eye platform. They first describe the design and realization of a high-bandwidth four-degree-of-freedom head/eye platform and visual feedback loop for the exploration of motion processing within active vision. The vision system divides processing into two scales and two broad functions. At a coarse, quasi-peripheral scale, detection and segmentation of new motion occur across the whole image; at a fine scale, tracking of already detected motion takes place within a foveal region. Several simple coarse-scale motion sensors, which run concurrently at 25 Hz with latencies of around 100 ms, are detailed. The use of these sensors is discussed to drive the following real-time responses: (1) head/eye saccades to moving regions of interest; (2) a panic response to looming motion; (3) an opto-kinetic response to continuous motion across the image; and (4) smooth pursuit of a moving target using motion alone.
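The coarse-scale motion sensing that drives a saccade toward a moving region can be sketched as simple frame differencing; the threshold and the `saccade_target` interface are invented for illustration, not the authors' sensor design.

```python
import numpy as np

def saccade_target(frame_prev, frame_curr, thresh=25):
    """Coarse-scale motion sensor: threshold the inter-frame difference and
    return the centroid of the moving pixels as the next saccade target,
    or None if nothing in the scene moved."""
    moved = np.abs(frame_curr.astype(int) - frame_prev.astype(int)) > thresh
    if not moved.any():
        return None                          # no new motion: hold fixation
    ys, xs = np.nonzero(moved)
    return float(xs.mean()), float(ys.mean())  # (column, row) fixation point

prev = np.zeros((48, 64), dtype=np.uint8)
curr = prev.copy()
curr[10:14, 50:54] = 255                     # new motion in the periphery
print(saccade_target(prev, curr))            # -> (51.5, 11.5)
```

Once the saccade lands, a fine-scale tracker confined to the foveal region would take over pursuit of the detected motion.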