907 results for Visual Odometry, Transformer, Deep learning
Abstract:
Three dives of the Mir manned submersibles with plankton counts and two vertical plankton hauls with a BR net were carried out above the Lost City (Atlantis underwater massif) and Broken Spur hydrothermal fields during cruise 50 of R/V Akademik Mstislav Keldysh. Above the Atlantis seamount no significant increase in plankton concentration was found. Above the Lost City field, horizontal heterogeneity of plankton distribution was shown in the near-bottom layer and in overlying water layers. Near-bottom aggregations of euphausiids and amphipods previously reported by other scientists seem to be related to the attraction of these animals to the submersible's headlights rather than to represent a natural phenomenon.
Abstract:
The upper Albian to Coniacian section (Cores 105 to 89) at Site 530 contains rare and poorly preserved coccoliths at a few levels and fine-fraction carbonate ("micarb") at all the levels studied. Dissolution ranking of the most resistant coccolith species is possible. Changes in dissolution intensity resulting from variations in the organic carbon and carbonate input seem to be a likely explanation for changes in the relative abundance of fine-fraction carbonate types.
Abstract:
Human Engineering Project 20-TV-1, Project designation NR-781.
Abstract:
"AIR-C48-1/63-TR."
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as [da] or [ða], was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4½-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. © 2004 Wiley Periodicals, Inc.
Abstract:
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems, which have been hypothesised to be at the core of the condition. In the present study, a computer program utilizing speech-synthesizer software and a 'virtual' head (Baldi) delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy, and this in turn improved the children's ability to use visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate at recognizing stimuli in the unimodal conditions, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training. © 2004 Elsevier Ltd. All rights reserved.
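The "integration versus non-integration" model comparison mentioned in this abstract can be made concrete with a small sketch. The abstract does not name its models; in the Baldi literature the standard integration account is Massaro's fuzzy logical model of perception (FLMP), which multiplies per-alternative auditory and visual support and normalizes, whereas a single-channel non-integration model mixes the two modalities' predictions additively. The model choice and all support values below are illustrative assumptions, not data from the study.

```python
# Hedged sketch of integration vs. non-integration predictions.
# Model choice (FLMP vs. single-channel) and all support values are
# assumptions for illustration; they are not taken from the study.

def flmp(auditory, visual):
    """Integration (FLMP-style): multiply per-alternative auditory and
    visual support, then normalize across alternatives."""
    joint = {k: auditory[k] * visual[k] for k in auditory}
    total = sum(joint.values())
    return {k: s / total for k, s in joint.items()}

def single_channel(auditory, visual, p_auditory=0.5):
    """Non-integration: each response is driven by one modality alone,
    so predictions are an additive mixture of the two channels."""
    return {k: p_auditory * auditory[k] + (1 - p_auditory) * visual[k]
            for k in auditory}

# Illustrative support values for two response alternatives:
a = {"ba": 0.8, "da": 0.2}   # auditory channel favours /ba/ (assumed)
v = {"ba": 0.3, "da": 0.7}   # visual channel favours /da/ (assumed)

print(flmp(a, v))            # {'ba': 0.63..., 'da': 0.36...}
print(single_channel(a, v))  # {'ba': 0.55,   'da': 0.45}
```

The diagnostic difference is that the multiplicative rule can produce bimodal response patterns that no additive mixture of the unimodal channels can reproduce, which is what a model comparison of this kind exploits.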
Abstract:
Previous research in visual search indicates that animal fear-relevant deviants (snakes/spiders) are found faster among non-fear-relevant backgrounds (flowers/mushrooms) than vice versa. Moreover, deviant absence was indicated faster among snakes/spiders, and detection time for flower/mushroom deviants, but not for snake/spider deviants, increased in larger arrays. The current research indicates that the latter two results do not reflect fear-relevance but are found only with flower/mushroom controls. These findings may instead reflect factors such as background homogeneity, deviant homogeneity, or background-deviant similarity. The current research resolves contradictions between previous studies that used animal and social fear-relevant stimuli, and indicates that apparent search advantages for fear-relevant deviants are likely to reflect delayed attentional disengagement from fear-relevant stimuli on control trials.
Abstract:
This paper illustrates a method for finding useful visual landmarks for performing simultaneous localization and mapping (SLAM). The method is based loosely on biological principles, using layers of filtering and pooling to create learned templates that correspond to different views of the environment. Rather than using a set of landmarks and reporting range and bearing to each landmark, this system maps views to poses. The challenge is to build a system that reports the same view for small changes in robot pose but different views for larger changes in pose. The method has been developed to interface with the RatSLAM system, a biologically inspired method of SLAM. The paper describes the learning and recall of visual landmarks in detail, and shows the performance of the visual system in real robot tests.
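The filter-pool-and-template idea described in this abstract can be illustrated compactly. The following is a minimal sketch under stated assumptions, not the authors' implementation: the class and method names, the choice of a simple gradient filter, the pooling size, and the match threshold are all illustrative; the real system would hand the recalled view ID to RatSLAM's pose representation.

```python
# Minimal sketch of a filter-and-pool view-template bank (assumptions
# throughout; NOT the paper's implementation).
import numpy as np

class ViewTemplateBank:
    def __init__(self, match_threshold=0.1):
        self.templates = []                  # learned view templates
        self.match_threshold = match_threshold

    @staticmethod
    def _filter_and_pool(image, pool=8):
        """Edge-filter a grayscale image, then average-pool it into a
        coarse, normalized template."""
        # "Filtering" layer: horizontal gradient magnitude (assumption:
        # any edge-like filter would do for this sketch).
        edges = np.abs(np.diff(image.astype(float), axis=1))
        # "Pooling" layer: average over pool x pool blocks.
        h, w = edges.shape
        h, w = h - h % pool, w - w % pool
        pooled = (edges[:h, :w]
                  .reshape(h // pool, pool, w // pool, pool)
                  .mean(axis=(1, 3)))
        norm = np.linalg.norm(pooled)
        return pooled / norm if norm > 0 else pooled

    def recall_or_learn(self, image):
        """Return the ID of the best-matching stored view, or learn the
        current view as a new template if nothing matches."""
        template = self._filter_and_pool(image)
        best_id, best_dist = None, np.inf
        for i, stored in enumerate(self.templates):
            dist = np.linalg.norm(stored - template)
            if dist < best_dist:
                best_id, best_dist = i, dist
        if best_id is not None and best_dist < self.match_threshold:
            return best_id                   # recalled an existing view
        self.templates.append(template)      # novel view: learn it
        return len(self.templates) - 1

# Usage (camera_frame is assumed to be a 2-D grayscale array):
#   bank = ViewTemplateBank()
#   view_id = bank.recall_or_learn(camera_frame)
```

Pooling is what buys the pose tolerance the abstract asks for: small camera motions barely change the coarse template, while a genuinely new viewpoint fails the threshold test and is learned as a new view.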
Abstract:
There is still a great deal of opportunity for research on contextual interactive immersion in virtual heritage environments. The general failure of virtual environment technology to create engaging and educational experiences may be attributable not just to deficiencies in technology or in visual fidelity, but also to a lack of contextual, performative interaction, such as that found in games. However, little has been written so far on exactly how game-style interaction can help improve virtual learning environments.