937 results for Lipreading, AVASR, Front-End Effect, Pose-Estimator, Visual Front-End


Relevance:

40.00%

Publisher:

Abstract:

Federal Railroad Administration, Office of Research, Development and Demonstrations, Washington, D.C.

Relevance:

40.00%

Publisher:

Abstract:

National Highway Traffic Safety Administration, Washington, D.C.

Relevance:

40.00%

Publisher:

Abstract:

There is a growing body of evidence that the processes mediating the allocation of spatial attention within objects may be separable from those governing attentional distribution between objects. In the neglect literature, a related proposal has been made regarding the perception of (within-object) sizes and (between-object) distances. This proposal follows observations that, in size-matching and bisection tasks, neglect is more strongly expressed when patients are required to attend to the sizes of discrete objects than to the (unfilled) distances between objects. These findings are consistent with a partial dissociation between size and distance processing, but a simpler alternative must also be considered. Whilst a neglect patient may fail to explore the full extent of a solid stimulus, the estimation of an unfilled distance requires that both endpoints be inspected before the task can be attempted at all. The attentional cueing implicit in distance estimation tasks might thus account for neglect patients' superior performance on them. We report two bisection studies that address this issue. The first confirmed, amongst patients with left visual neglect, a reliable reduction of rightward error for unfilled gap stimuli as compared with solid lines. The second study assessed the cause of this reduction, deconfounding the effects of stimulus type (lines vs. gaps) and attentional cueing, by applying an explicit cueing manipulation to line and gap bisection tasks. Under these matched cueing conditions, all patients performed similarly on line and gap bisection tasks, suggesting that the reduction of neglect typically observed for gap stimuli may be attributable entirely to cueing effects. We found no evidence that a spatial extent, once fully attended, is judged any differently according to whether it is filled or unfilled.
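A minimal sketch of how bisection performance of this kind is commonly scored, assuming the usual convention of expressing the marked point's signed deviation from the true midpoint as a percentage of the stimulus extent; the function and values below are illustrative, not taken from the study.

# Hypothetical scoring sketch; a positive score indicates a rightward bisection error.
def bisection_error(left_end: float, right_end: float, marked_point: float) -> float:
    """Signed bisection error as a percentage of stimulus extent (positive = rightward)."""
    true_mid = (left_end + right_end) / 2.0
    extent = right_end - left_end
    return 100.0 * (marked_point - true_mid) / extent

# A 200 mm solid line bisected 14 mm to the right of true centre:
print(bisection_error(0.0, 200.0, 114.0))   # 7.0 (% rightward error)

# The unfilled gap between two endpoint markers, bisected 4 mm right of centre:
print(bisection_error(0.0, 200.0, 104.0))   # 2.0 (% rightward error)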

Relevance:

40.00%

Publisher:

Abstract:

The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as [da] or [ða], was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4½-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba], visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in "then"). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control-group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. (C) 2004 Wiley Periodicals, Inc.
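A minimal sketch of an infant-controlled habituation criterion of the kind used in such habituation-test paradigms; the 50%-of-baseline rule and the looking times below are illustrative assumptions, not details reported in the abstract.

# Hypothetical habituation rule: habituation is declared once recent looking times
# fall below half of the initial baseline looking times.
def habituated(fixation_durations: list[float], window: int = 3, criterion: float = 0.5) -> bool:
    """True once the mean of the last `window` looking times drops below
    `criterion` times the mean of the first `window` looking times."""
    if len(fixation_durations) < 2 * window:
        return False
    baseline = sum(fixation_durations[:window]) / window
    recent = sum(fixation_durations[-window:]) / window
    return recent < criterion * baseline

looking_times = [18.2, 16.5, 17.1, 12.0, 9.4, 7.8, 6.9]  # seconds of fixation per trial
print(habituated(looking_times))  # True: recent looking has roughly halved relative to baseline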

Relevance:

40.00%

Publisher:

Abstract:

Including positive end-expiratory pressure (PEEP) in the manual resuscitation bag (MRB) may render manual hyperinflation (MHI) ineffective as a secretion maneuver technique in mechanically ventilated patients. In this study we aimed to determine the effect of increased PEEP or decreased compliance on peak expiratory flow rate (PEF) during MHI. A blinded, randomized study was performed on a lung simulator by 10 physiotherapists experienced in MHI and intensive care practice. PEEP levels of 0-15 cm H2O, compliance levels of 0.05 and 0.02 L/cm H2O, and MRB type were randomized. The Mapleson-C MRB generated significantly higher PEF (P < 0.01, d = 2.72) when compared with the Laerdal MRB for all levels of PEEP. In normal compliance (0.05 L/cm H2O) there was a significant decrease in PEF (P < 0.01, d = 1.45) for a PEEP more than 10 cm H2O in the Mapleson-C circuit. The Laerdal MRB at PEEP levels of more than 10 cm H2O did not generate a PEF that is theoretically capable of producing two-phase gas-liquid flow and, consequently, mobilizing pulmonary secretions. If MHI is indicated as a result of mucous plugging, the Mapleson-C MRB may be the most effective method of secretion mobilization.
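A minimal sketch of the expiratory flow bias check implied by the two-phase gas-liquid flow rationale, assuming a simple peak expiratory to peak inspiratory flow ratio criterion; the 1.1 threshold and the flow values are illustrative assumptions, not figures from the study.

# Hypothetical check: secretions are expected to move towards the airway opening
# only when expiratory flow sufficiently exceeds inspiratory flow.
def favours_secretion_clearance(pef_l_min: float, pif_l_min: float, ratio_threshold: float = 1.1) -> bool:
    """Return True if peak expiratory flow exceeds peak inspiratory flow by the chosen margin."""
    return pef_l_min / pif_l_min >= ratio_threshold

# Illustrative (made-up) breaths delivered by manual hyperinflation:
print(favours_secretion_clearance(pef_l_min=80.0, pif_l_min=60.0))   # True
print(favours_secretion_clearance(pef_l_min=55.0, pif_l_min=60.0))   # False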

Relevance:

40.00%

Publisher:

Abstract:

PURPOSE. The driving environment is becoming increasingly complex, including both visual and auditory distractions within the in-vehicle and external driving environments. This study was designed to investigate the effect of visual and auditory distractions on a performance measure that has been shown to be related to driving safety, the useful field of view. METHODS. A laboratory study recorded the useful field of view in 28 young visually normal adults (mean 22.6 +/- 2.2 years). The useful field of view was measured in the presence and absence of visual distracters (of the same angular subtense as the target) and with three levels of auditory distraction (none, listening only, listening and responding). RESULTS. Central errors increased significantly (P < 0.05) in the presence of auditory but not visual distracters, while peripheral errors increased in the presence of both visual and auditory distracters. Peripheral errors increased with eccentricity and were greatest in the inferior region in the presence of distracters. CONCLUSIONS. Visual and auditory distracters reduce the extent of the useful field of view, and these effects are exacerbated in inferior and peripheral locations. This result has significant ramifications for road safety in an increasingly complex in-vehicle and driving environment.

Relevance:

40.00%

Publisher:

Abstract:

Emmetropization is dependent on visual feedback and presumably some measure of the optical and image quality of the eye. We investigated the effect of simple alterations to image contrast on eye growth and refractive development. A 1.6 cyc/deg square-wave-grating target was located at the end of a 3.3 cm cone, imaged by a +30 D lens and applied monocularly to the eyes of 8-day-old chicks. Eleven different contrast targets were tested: 95, 67, 47.5, 33.5, 24, 17, 12, 8.5, 4.2, 2.1, and 0%. Refractive error (RE), vitreous chamber depth (VC) and axial length (AL) varied with the contrast of the image (RE diff: F(10,86) = 12.420, p < 0.0005; VC diff: F(10,86) = 8.756, p < 0.0005; AL diff: F(10,86) = 9.240, p < 0.0005). Target contrasts of 4.2% and lower produced relative myopia (4.2%: RE diff = -7.48 +/- 2.26 D, p = 0.987; 2.1%: RE diff = -7.22 +/- 2.77 D, p = 0.951) of similar amount to that observed in response to a featureless 0% contrast target (RE diff = -9.11 +/- 4.68 D). For target contrast levels of 47.5% and greater, isometropia was maintained (95%: RE diff = 1.83 +/- 2.78 D; 67%: RE diff = 0.14 +/- 1.84 D; 47.5%: RE diff = 0.25 +/- 1.82 D). Contrasts in between produced an intermediate amount of myopia (33.5%: RE diff = -2.81 +/- 1.80 D; 24%: RE diff = -3.45 +/- 1.64 D; 17%: RE diff = -3.19 +/- 1.54 D; 12%: RE diff = -4.08 +/- 3.56 D; 8.5%: RE diff = -4.09 +/- 3.60 D). We conclude that image contrast provides important visual information for the eye growth control system or that contrast must reach a threshold value for some other emmetropization signal to function. (c) 2005 Elsevier Ltd. All rights reserved.
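A short sketch of the Michelson contrast definition that such square-wave-grating contrast percentages conventionally refer to; the luminance values below are illustrative, not measurements from the study.

# Michelson contrast for a periodic pattern: (Lmax - Lmin) / (Lmax + Lmin).
def michelson_contrast(l_max: float, l_min: float) -> float:
    """Michelson contrast (0-1) for a grating with bright and dark bar luminances."""
    return (l_max - l_min) / (l_max + l_min)

# A grating with bright bars of 195 cd/m^2 and dark bars of 5 cd/m^2:
print(f"{100 * michelson_contrast(195.0, 5.0):.1f}%")   # 95.0%

# A near-uniform target corresponding to the 2.1% contrast condition:
print(f"{100 * michelson_contrast(102.1, 97.9):.1f}%")  # 2.1%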

Relevance:

40.00%

Publisher:

Abstract:

Recovering position from sensor information is an important problem in mobile robotics, known as localisation. Localisation requires a map or some other description of the environment to provide the robot with a context to interpret sensor data. The mobile robot system under discussion uses an artificial neural representation of position. Building a geometrical map of the environment with a single camera and artificial neural networks is difficult. Instead, it would be simpler to learn position as a function of the visual input. Usually, when learning images, an intermediate representation is employed. An appropriate starting point for a biologically plausible image representation is the complex cells of the visual cortex, which have invariance properties that appear useful for localisation. The effectiveness for localisation of two different complex cell models is evaluated. Finally, the ability of a simple neural network with single-shot learning to recognise these representations and localise a robot is examined.
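A minimal sketch, assuming the standard "energy model" formulation, of one complex cell response of the kind such representations are typically built from; the Gabor parameters and the random image patch are illustrative, not the specific models evaluated in the work.

# Energy-model complex cell: sum of squared responses of a quadrature (even/odd) Gabor pair,
# giving a response that is largely invariant to small shifts of the pattern.
import numpy as np

def gabor(size: int, wavelength: float, theta: float, phase: float, sigma: float) -> np.ndarray:
    """2-D Gabor filter with the given orientation, spatial wavelength and phase."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength + phase)

def complex_cell_response(patch: np.ndarray, wavelength: float = 8.0,
                          theta: float = 0.0, sigma: float = 4.0) -> float:
    """Phase-invariant energy response: squared even plus squared odd filter outputs."""
    even = np.sum(patch * gabor(patch.shape[0], wavelength, theta, 0.0, sigma))
    odd = np.sum(patch * gabor(patch.shape[0], wavelength, theta, np.pi / 2, sigma))
    return even**2 + odd**2

# Illustrative use on a random 33x33 image patch:
patch = np.random.rand(33, 33)
print(complex_cell_response(patch))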