968 results for Visual cues
Abstract:
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.
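The segmentation problem described above is classically solved with statistical cues such as transitional probabilities between adjacent syllables. The sketch below illustrates that baseline computation on an invented syllable stream; the lexicon, ordering and boundary threshold are assumptions for illustration, not the authors' materials:

```python
from collections import defaultdict

def transitional_probabilities(syllables):
    """Forward transitional probability P(next | current) over the stream."""
    pair_counts, first_counts = defaultdict(int), defaultdict(int)
    for a, b in zip(syllables, syllables[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, tps, threshold=0.8):
    """Posit a word boundary wherever the transitional probability dips."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Invented three-word lexicon, concatenated in a fixed pseudo-random order.
lexicon = {"1": ["ba", "bi", "bu"], "2": ["go", "la", "tu"], "3": ["da", "pi", "ko"]}
order = "1 2 3 1 3 2 1 2 3 2 1 3".split()
stream = [s for w in order for s in lexicon[w]]
seg = segment(stream, transitional_probabilities(stream))
print(seg)
```

Within-word transitions are deterministic (TP = 1.0) while between-word transitions are not, so the TP troughs recover the word boundaries.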
Abstract:
Background: Project archives are becoming increasingly large and complex. On construction projects in particular, the growing amount of information and the increasing complexity of its structure make searching and exploring information in the project archive challenging and time-consuming.
Methods: This research investigates a query-driven approach that represents new forms of contextual information to help users understand the set of documents returned by queries of construction project archives. Specifically, it extends query-driven interface research by representing three types of contextual information: (1) the temporal context is represented as a timeline showing when each document was created; (2) the search-relevance context shows exactly which of the entered keywords matched each document; and (3) the usage context shows which project participants have accessed or modified a file.
Results: We implemented and tested these ideas in a prototype query-driven interface we call VisArchive. VisArchive employs a combination of multi-scale and multi-dimensional timelines, color-coded stacked bar charts, additional supporting visual cues and filters to support searching and exploring historical project archives. The timeline-based interface integrates three interactive timelines as focus + context visualizations.
Conclusions: The feasibility of these visual design principles is tested on two types of project archives: searching the construction archive of an educational building project and tracking software defects in the Mozilla Thunderbird project. These case studies demonstrate the applicability, usefulness and generality of the design principles implemented.
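The two document-level contexts in (1) and (2), creation time and per-keyword matches, can be sketched in a few lines. This is an illustrative reconstruction, not VisArchive's actual code; the document fields, file names and helper names are assumptions:

```python
from dataclasses import dataclass
from datetime import date
from collections import Counter

@dataclass
class Document:
    name: str
    created: date
    text: str

def query_with_context(docs, keywords):
    """For each matching document, report when it was created (temporal
    context) and exactly which keywords matched (search-relevance context)."""
    results = []
    for doc in docs:
        matched = [k for k in keywords if k.lower() in doc.text.lower()]
        if matched:
            results.append({"name": doc.name, "created": doc.created,
                            "matched": matched})
    return sorted(results, key=lambda r: r["created"])

def timeline_histogram(results):
    """Bucket matches by (year, month) for a coarse timeline bar chart."""
    return Counter((r["created"].year, r["created"].month) for r in results)

# Invented archive contents for illustration.
docs = [
    Document("rfi-012.txt", date(2024, 1, 9), "Concrete pour delayed; RFI on slab rebar."),
    Document("minutes-03.txt", date(2024, 2, 2), "Meeting minutes: rebar spacing approved."),
    Document("memo-07.txt", date(2024, 2, 20), "HVAC memo, no structural content."),
]
hits = query_with_context(docs, ["rebar", "slab"])
buckets = timeline_histogram(hits)
```

A real interface would render `buckets` as the timeline and `matched` as the color-coded keyword indicators per document.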
Abstract:
In this study we showed that a freshwater fish, the climbing perch (Anabas testudineus), is incapable of using chemical communication but employs visual cues to acquire familiarity and to distinguish a familiar group of conspecifics from an unfamiliar one. Moreover, isolating olfactory signals from visual cues did not affect the recognition of, and preference for, a familiar shoal in this species.
Abstract:
Whether mice perceive the depth of space based on the visual size of object targets was explored when visual cues such as perspective and partial occlusion were excluded. A mouse was placed on a height-adjustable platform inside a box whose walls were all dark except the bottom, through which light was projected as the sole visual cue. The visual object cue was composed of 4x4 grids, allowing a mouse to estimate the distance of the platform relative to the grids. Three grid sizes, each reduced in a proportion of 2/3, and seven equally spaced distances between the platform and the grids at the bottom were used in the experiments. The time a mouse stayed on the platform at each height was recorded while the different grid sizes were presented randomly, to test whether the mouse's judgment of the platform's depth from the bottom was affected by the size information of the visual target. Across all three object sizes, the time mice stayed on the platform increased with height. At distances of 20 to 30 cm, the mice did not use the size information of the target to judge depth, relying mainly on binocular disparity. At distances below 20 cm or above 30 cm, however, and especially at the larger distances of 50, 60 and 70 cm, the mice were able to use size information to compensate for the lack of binocular disparity information, since only about 1/3 of the mouse's visual field is binocular. The behavioral paradigm established in the current study is a useful model and can be applied in experiments using transgenic mice to investigate the relationships between behavior and gene function.
Abstract:
Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
Abstract:
Both commercial and scientific applications often need to transform color images into gray-scale images, e.g., to reduce the cost of printing color images or to help color-blind people perceive the visual cues of color images. However, conventional color-to-gray algorithms are not ready for practical applications because they encounter the following problems: 1) visual cues are not well defined, so it is unclear how to preserve important cues in the transformed gray-scale images; 2) some algorithms have extremely high computational cost; and 3) some require human-computer interaction to achieve a reasonable transformation. To solve, or at least reduce, these problems, we propose a new algorithm based on a probabilistic graphical model, with the assumption that the image is defined over a Markov random field. The color-to-gray procedure can thus be regarded as a labeling process that preserves newly well-defined visual cues of a color image in the transformed gray-scale image. Visual cues are measurements that a perceiver can extract from a color image; they indicate the state of image properties that the perceiver is interested in perceiving. Different people may perceive different cues from the same color image, and three cues are defined in this paper: color spatial consistency, image structure information, and color channel perception priority. We cast color-to-gray conversion as a visual cue preservation procedure based on a probabilistic graphical model and optimize the model as an integral minimization problem. We apply the new algorithm to both natural color images and artificial pictures, and demonstrate that the proposed approach outperforms representative conventional algorithms in terms of effectiveness and efficiency. In addition, it requires no human-computer interaction.
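The labeling view of color-to-gray conversion can be illustrated with a toy energy over a 1-D "image": a unary term keeps each gray level near its pixel's luminance, and a pairwise term makes neighboring gray differences mirror the color differences (contrast preservation). This is a much-simplified sketch of the MRF idea, not the paper's model; the pixel values, weights and brute-force solver are assumptions:

```python
import itertools

# Toy 1-D "image" of three RGB pixels (0-1 floats): red, green, blue.
pixels = [(0.9, 0.1, 0.1), (0.1, 0.9, 0.1), (0.1, 0.1, 0.9)]

def luminance(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b  # ITU-R BT.601 weights

def color_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / 3

def energy(grays):
    # Unary term: each gray level should stay close to its pixel's luminance.
    unary = sum((g - luminance(p)) ** 2 for g, p in zip(grays, pixels))
    # Pairwise term: neighboring gray differences should mirror
    # the neighboring color differences.
    pairwise = sum((abs(g1 - g2) - color_distance(p1, p2)) ** 2
                   for (g1, p1), (g2, p2) in zip(zip(grays, pixels),
                                                 zip(grays[1:], pixels[1:])))
    return unary + 0.5 * pairwise

# Brute-force "labeling": pick the gray assignment minimizing the energy
# over a coarse grid of 11 gray levels per pixel.
levels = [i / 10 for i in range(11)]
best = min(itertools.product(levels, repeat=len(pixels)), key=energy)
print(best)
```

Real images need an efficient optimizer (e.g., graph cuts or belief propagation) instead of enumeration, but the energy structure is the same.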
Abstract:
This research project is a study of the role of fixation and visual attention in object recognition. In this project, we build an active vision system which can recognize a target object in a cluttered scene efficiently and reliably. Our system integrates visual cues like color and stereo to perform figure/ground separation, yielding candidate regions on which to focus attention. Within each image region, we use stereo to extract features that lie within a narrow disparity range about the fixation position. These selected features are then used as input to an alignment-style recognition system. We show that visual attention and fixation significantly reduce the complexity and the false identifications in model-based recognition using Alignment methods. We also demonstrate that stereo can be used effectively as a figure/ground separator without the need for accurate camera calibration.
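The disparity-based focusing step described above, keeping only features in a narrow disparity band about the fixation point, can be sketched as follows; the feature records and band width are invented for illustration, and the system's real feature extraction is more involved:

```python
def select_features_near_fixation(features, fixation_disparity, band=2.0):
    """Keep only features whose stereo disparity lies within a narrow band
    around the fixation disparity, a simple figure/ground separation cue."""
    return [f for f in features
            if abs(f["disparity"] - fixation_disparity) <= band]

# Invented feature list: disparity roughly encodes depth relative to the cameras.
features = [
    {"id": 1, "disparity": 0.5},   # near the fixated depth -> figure
    {"id": 2, "disparity": 1.8},   # near the fixated depth -> figure
    {"id": 3, "disparity": 9.0},   # far from fixation -> background, discarded
]
figure = select_features_near_fixation(features, fixation_disparity=1.0)
```

Only the surviving `figure` features would then be passed to the alignment-style recognizer, which is what reduces complexity and false identifications.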
Abstract:
The aim of the present study was to assess the influence of local environmental olfactory cues on place learning in rats. We developed a new experimental design allowing the comparison of the use of local olfactory and visual cues in spatial and discrimination learning. We compared the effect of both types of cues on the discrimination of a single food source in an open-field arena. The goal was either in a fixed or in a variable location, and could be indicated by local olfactory and/or visual cues. The local cues enhanced the discrimination of the goal dish, whether it was in a fixed or in a variable location. However, we did not observe any overshadowing of the spatial information by the local olfactory or visual cue. Rats relied primarily on distant visuospatial information to locate the goal, neglecting local information when it was in conflict with the spatial information.
Abstract:
View-based and Cartesian representations provide rival accounts of visual navigation in humans, and here we explore possible models for the view-based case. A visual “homing” experiment was undertaken by human participants in immersive virtual reality. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on visual landmark configuration and relative goal location. A model based on simple visual cues captures important characteristics of these distributions. Augmenting the visual features to include 3D elements such as stereo and motion parallax results in a set of models that describe the data accurately, demonstrating the effectiveness of a view-based approach.
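A minimal view-based homing model of the kind compared here can be sketched as gradient descent on the mismatch between the current view and a stored goal snapshot. Below, a "view" is crudely reduced to the distances to a few landmarks; the landmark layout, step size and view encoding are assumptions for illustration, not the study's models:

```python
import math

# Invented landmark layout and goal location on the ground plane.
landmarks = [(0.0, 5.0), (4.0, 0.0), (-3.0, -2.0)]
goal = (1.0, 1.0)

def view(pos):
    """A crude 'view': the distance to each visible landmark."""
    return [math.dist(pos, lm) for lm in landmarks]

goal_view = view(goal)  # the stored snapshot taken at the goal

def view_difference(pos):
    """Squared mismatch between the current view and the goal snapshot."""
    return sum((a - b) ** 2 for a, b in zip(view(pos), goal_view))

def home(start, step=0.04, iters=3000):
    """Homing by numerical gradient descent on the view difference."""
    x, y = start
    eps = 1e-4
    for _ in range(iters):
        gx = (view_difference((x + eps, y)) - view_difference((x - eps, y))) / (2 * eps)
        gy = (view_difference((x, y + eps)) - view_difference((x, y - eps))) / (2 * eps)
        x, y = x - step * gx, y - step * gy
    return x, y

end = home((-2.0, 3.0))
print(end)
```

With three non-collinear landmarks the snapshot determines the goal uniquely, so the agent released at (-2, 3) descends the mismatch surface back to the goal; richer view encodings (stereo, motion parallax) play the role of the augmented features in the abstract.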
Abstract:
This study investigated the orienting of visual attention in rats using a 3-hole nose-poke task analogous to Posner's covert attention task for humans (Information Processing in Cognition: The Loyola Symposium, Erlbaum, Hillsdale, 1980). The effects of non-predictive (50% valid and 50% invalid) and predictive (80% valid and 20% invalid) peripheral visual cues on reaction times and response accuracy to a target stimulus were investigated, using stimulus-onset asynchronies (SOAs) varying between 200 and 1,200 ms. The results showed shorter reaction times in valid trials relative to invalid trials for subjects trained in both the non-predictive and predictive conditions, particularly when the SOAs were 200 and 400 ms. However, the magnitude of this validity effect was significantly greater for subjects exposed to predictive cues when the SOA was 800 ms. Subjects exposed to invalid predictive cues exhibited an increase in omission errors relative to subjects exposed to invalid non-predictive cues. In contrast, valid cues reduced the proportion of omission errors for subjects trained in the predictive condition relative to subjects trained in the non-predictive condition. These results are congruent with those usually reported for humans and indicate that, in addition to the exogenous capture of attention promoted by both predictive and non-predictive peripheral cues, rats exposed to predictive cues engaged an additional, slower process equivalent to humans' endogenous orienting of attention. To our knowledge, this is the first demonstration of an endogenous-like process of covert orienting of visual attention in rats.
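The validity effect reported above is simply the mean reaction-time cost of an invalid cue relative to a valid one; a sketch with invented trial data (the numbers are not the study's):

```python
def validity_effect(trials):
    """Mean RT(invalid) - mean RT(valid): the classic Posner cueing benefit."""
    valid = [t["rt"] for t in trials if t["valid"]]
    invalid = [t["rt"] for t in trials if not t["valid"]]
    return sum(invalid) / len(invalid) - sum(valid) / len(valid)

# Invented trials: reaction times in ms.
trials = [
    {"valid": True, "rt": 310}, {"valid": True, "rt": 290},
    {"valid": False, "rt": 360}, {"valid": False, "rt": 380},
]
print(validity_effect(trials))  # 370 - 300 = 70 ms
```

In the study, this difference is computed per SOA and per cue-predictiveness condition and compared across groups.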
Abstract:
Ornamental fish may be severely affected by a stressful environment. Stressors impair the immune response, reproduction and growth rate; thus, the identification of possible stressors will help improve the overall quality of ornamental fish. The aim of this study was to determine whole-body cortisol of adult zebrafish, Danio rerio, following visual or direct contact with a predator species. Zebrafish were distributed in three groups: the first group, which consisted of zebrafish reared completely isolated from the predator, was considered the negative control; the second group, in which the predator, Parachromis managuensis, was stocked together with the zebrafish, was considered the positive control; the third group consisted of zebrafish stocked in a glass aquarium with direct visual contact with the predator. The mean whole-body cortisol concentration in zebrafish from the negative control was 6.78 +/- 1.12 ng g(-1), statistically lower than that found in zebrafish having visual contact with the predator (9.26 +/- 0.88 ng g(-1)), which, in turn, was statistically lower than the mean whole-body cortisol of the positive control group (12.35 +/- 1.59 ng g(-1)). The higher whole-body cortisol concentration found in fish from the positive control can be attributed to the detection, by the zebrafish, of relevant risk situations that may involve a combination of chemical, olfactory and visual cues. One of the functions of elevated cortisol is to mobilize energy from body resources to cope with stress. The elevation of whole-body cortisol in fish subjected to visual contact with the predator involves only the visual cue in the recognition of predation risk. We hypothesized that the zebrafish could recognize predator characteristics in P. managuensis, such as length, shape, color and behavior. Nonetheless, the elevation of whole-body cortisol in zebrafish suggests that visual contact with a predator may elicit a stress response in prey fish.
This finding has a strong practical application for species distribution in ornamental fish markets: prey species should not be allowed to see predator species. Minimizing visual contact between prey and predator fish may improve the quality, viability and welfare of small fish in ornamental fish markets.
Abstract:
I studied the effect of disturbance chemical cues on fish that make trade-offs between foraging in an open area and remaining in a safe refuge. I used convict cichlids Archocentrus nigrofasciatus that were either visually exposed to a predator (n = 8) or exposed to water conditioned by chemical cues from disturbed conspecifics (n = 8). Fish visually exposed to a predator decreased their ingestion rate and spent more time in the refuge than in the foraging area, while fish receiving water from frightened conspecifics did not alter their ingestion rate or the time spent in the refuge and foraging site, but increased their spatial occupation (i.e., motion). These results suggest that convict cichlids recognized the predator by visual cues. Moreover, disturbance cues are a form of threatening public information that may increase fish spatial occupation due to increased exploring behaviour, but they are not sufficiently alarming to alter feeding or increase refuge use.
Abstract:
Lesions to the primary geniculo-striate visual pathway cause blindness in the contralesional visual field. Nevertheless, previous studies have suggested that patients with visual field defects may still be able to implicitly process the affective valence of unseen emotional stimuli (affective blindsight) through alternative visual pathways bypassing the striate cortex. These alternative pathways may also allow exploitation of multisensory (audio-visual) integration mechanisms, such that auditory stimulation can enhance visual detection of stimuli which would otherwise be undetected when presented alone (crossmodal blindsight). The present dissertation investigated implicit emotional processing and multisensory integration when conscious visual processing is prevented by real or virtual lesions to the geniculo-striate pathway, in order to further clarify both the nature of these residual processes and the functional aspects of the underlying neural pathways. The present experimental evidence demonstrates that alternative subcortical visual pathways allow implicit processing of the emotional content of facial expressions in the absence of cortical processing. However, this residual ability is limited to fearful expressions. This finding suggests the existence of a subcortical system specialised in detecting danger signals based on coarse visual cues, therefore allowing the early recruitment of flight-or-fight behavioural responses even before conscious and detailed recognition of potential threats can take place. Moreover, the present dissertation extends the knowledge about crossmodal blindsight phenomena by showing that, unlike with visual detection, sound cannot crossmodally enhance visual orientation discrimination in the absence of functional striate cortex. 
This finding demonstrates, on the one hand, that the striate cortex plays a causative role in crossmodally enhancing visual orientation sensitivity and, on the other hand, that subcortical visual pathways bypassing the striate cortex, despite affording audio-visual integration processes leading to the improvement of simple visual abilities such as detection, cannot mediate multisensory enhancement of more complex visual functions, such as orientation discrimination.