978 results for Visual identification tasks
Abstract:
There is evidence that automatic visual attention favors the right side. This study investigated whether this lateral asymmetry interacts with the right-hemisphere dominance for visual location processing and the left-hemisphere dominance for visual shape processing. Volunteers were tested in a location discrimination task and a shape discrimination task. The target stimuli (S2) could occur in the left or right hemifield and were preceded by an ipsilateral, contralateral, or bilateral prime stimulus (S1). The attentional effect produced by the right S1 was larger than that produced by the left S1. This lateral asymmetry was similar between the two tasks, suggesting that the hemispheric asymmetries of visual mechanisms do not contribute to it. The finding that it was mainly due to a longer reaction time to the left S2 than to the right S2 in the contralateral S1 condition suggests that the inhibitory component of attention is laterally asymmetric.
Abstract:
Norms for three visual memory tasks, including the Corsi block-tapping test and the BEM 144 complex figures and visual recognition tasks, were developed for neuropsychological assessment in Brazilian children. The tasks were administered to 127 children aged 7 to 10 years from rural and urban areas of the states of São Paulo and Minas Gerais. Analysis indicated age-related but not sex-related differences. A cross-cultural effect was observed in the copying and recall of the complex figures, and differences in performance between rural and urban children were noted. © Perceptual and Motor Skills 2005.
Abstract:
This study investigated the influence of top-down and bottom-up information on speech perception in complex listening environments. Specifically, the effects of listening to different types of processed speech on intelligibility and on simultaneous visual-motor performance were examined. The goal was to extend the generalizability of results in speech perception to environments outside of the laboratory. The effect of bottom-up information was evaluated with natural, cell phone, and synthetic speech. The effect of simultaneous tasks was evaluated with concurrent visual-motor and memory tasks. Earlier work on the perception of speech during simultaneous visual-motor tasks has shown inconsistent results (Choi, 2004; Strayer & Johnston, 2001). In the present experiments, two dual-task paradigms were constructed in order to mimic non-laboratory listening environments. In the first two experiments, an auditory word repetition task was the primary task and a visual-motor task was the secondary task. Participants were presented with different kinds of speech in a background of multi-speaker babble and were asked to repeat the last word of every sentence while doing the simultaneous tracking task. Word accuracy and visual-motor task performance were measured. Taken together, the results of Experiments 1 and 2 showed that the intelligibility of natural speech was higher than that of synthetic speech, and that synthetic speech was better perceived than cell phone speech. The visual-motor methodology provided independent, supplementary information and a better understanding of the entire speech perception process. Experiment 3 was conducted to determine whether the automaticity of the tasks (Schneider & Shiffrin, 1977) helped to explain the results of the first two experiments. It was found that cell phone speech allowed better simultaneous pursuit rotor performance only at low intelligibility levels, when participants ignored the listening task. Also, simultaneous task performance improved dramatically for natural speech when intelligibility was good. Overall, it could be concluded that knowledge of intelligibility alone is insufficient to characterize the processing of different speech sources. Additional measures such as attentional demands and performance of simultaneous tasks were also important in characterizing the perception of different kinds of speech in complex listening environments.
Abstract:
Difficulties in visual attention are increasingly being linked to dyslexia. To date, the majority of studies have inferred the functionality of attention from response times to stimuli presented for an indefinite duration. However, in paradigms that use reaction times to investigate the ability to orient attention, a delayed reaction time could also indicate difficulties in signal enhancement or noise exclusion once attention is oriented. Thus, in order to investigate attention modulation and visual crowding effects in dyslexia, this study measured stimulus discrimination accuracy for rapidly presented displays. Adults with dyslexia (AwD) and controls discriminated the orientation of a target in arrays of vertically oriented distractors that varied in number and spacing. Results showed that AwD were disproportionately impacted by (i) close spacing and (ii) increased numbers of stimuli, (iii) did use pre-cues to modulate attention, but (iv) used cues less successfully to counter the effects of increasing numbers of distractors. A greater dependence on pre-cues, larger crowding effects, and the impact of increased numbers of distractors all correlated significantly with measures of literacy. These findings extend previous studies of visual crowding of letters in dyslexia to non-complex stimuli. Overall, AwD do not use cues less, but they do use cues less successfully. We conclude that visual attention is an important factor to consider in the aetiology of dyslexia. The results challenge existing theoretical accounts of visual attention deficits, which alone are unable to comprehensively explain the pattern of findings demonstrated here.
Abstract:
Because of attentional limitations, the human visual system can process for awareness and response only a fraction of the input received. Lesion and functional imaging studies have identified frontal, temporal, and parietal areas as playing a major role in the attentional control of visual processing, but very little is known about how these areas interact to form a dynamic attentional network. We hypothesized that the network communicates by means of neural phase synchronization, and we used magnetoencephalography to study transient long-range interarea phase coupling in a well studied attentionally taxing dual-target task (attentional blink). Our results reveal that communication within the fronto-parieto-temporal attentional network proceeds via transient long-range phase synchronization in the beta band. Changes in synchronization reflect changes in the attentional demands of the task and are directly related to behavioral performance. Thus, we show how attentional limitations arise from the way in which the subsystems of the attentional network interact.

The human brain faces an inestimable task of reducing a potentially overloading amount of input into a manageable flow of information that reflects both the current needs of the organism and the external demands placed on it. This task is accomplished via a ubiquitous construct known as “attention,” whose mechanism, although well characterized behaviorally, is far from understood at the neurophysiological level. Whereas attempts to identify particular neural structures involved in the operation of attention have met with considerable success (1-5) and have resulted in the identification of frontal, parietal, and temporal regions, far less is known about the interaction among these structures in a way that can account for the task-dependent successes and failures of attention. The goal of the present research was, thus, to unravel the means by which the subsystems making up the human attentional network communicate and to relate the temporal dynamics of their communication to observed attentional limitations in humans. A prime candidate for communication among distributed systems in the human brain is neural synchronization (for review, see ref. 6). Indeed, a number of studies provide converging evidence that long-range interarea communication is related to synchronized oscillatory activity (refs. 7-14; for review, see ref. 15). To determine whether neural synchronization plays a role in attentional control, we placed humans in an attentionally demanding task and used magnetoencephalography (MEG) to track interarea communication by means of neural synchronization. In particular, we presented 10 healthy subjects with two visual target letters embedded in streams of 13 distractor letters, appearing at a rate of seven per second. The targets were separated in time by a single distractor. This condition leads to the “attentional blink” (AB), a well studied dual-task phenomenon showing the reduced ability to report the second of two targets when an interval <500 ms separates them (16-18). Importantly, the AB does not prevent perceptual processing of missed target stimuli but only their conscious report (19), demonstrating the attentional nature of this effect and making it a good candidate for the purpose of our investigation.
Although numerous studies have investigated factors, e.g., stimulus and timing parameters, that manipulate the magnitude of a particular AB outcome, few have sought to characterize the neural state under which “standard” AB parameters produce an inability to report the second target on some trials but not others. We hypothesized that the different attentional states leading to different behavioral outcomes (second target reported correctly or not) are characterized by specific patterns of transient long-range synchronization between brain areas involved in target processing. Showing the hypothesized correspondence between states of neural synchronization and human behavior in an attentional task entails two demonstrations. First, it needs to be demonstrated that cortical areas that are suspected to be involved in visual-attention tasks, and the AB in particular, interact by means of neural synchronization. This demonstration is particularly important because previous brain-imaging studies (e.g., ref. 5) only showed that the respective areas are active within a rather large time window in the same task and not that they are concurrently active and actually create an interactive network. Second, it needs to be demonstrated that the pattern of neural synchronization is sensitive to the behavioral outcome, specifically to the ability to correctly identify the second of two rapidly succeeding visual targets.
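To make the synchronization measure concrete, the following is a minimal Python sketch, not taken from the study, of how beta-band phase coupling between two MEG sensors can be quantified with the phase-locking value (PLV); the sensor labels, sampling rate, and band limits are illustrative assumptions.

```python
# Illustrative sketch (not the authors' analysis pipeline): estimating
# beta-band phase synchronization between two MEG sensors with the
# phase-locking value. Sensor labels, sampling rate, and band edges are
# assumptions made for this example.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(15.0, 30.0), order=4):
    """Phase-locking value between two equal-length 1-D signals."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Toy data: two noisy signals sharing a 20 Hz component (hypothetical).
fs = 600.0
t = np.arange(0, 2.0, 1.0 / fs)
common = np.sin(2 * np.pi * 20 * t)
frontal = common + 0.5 * np.random.randn(t.size)
parietal = common + 0.5 * np.random.randn(t.size)
print(f"beta-band PLV: {plv(frontal, parietal, fs):.2f}")
```

A PLV near 1 indicates a stable phase relationship across time (or, in a trial-based analysis, across trials), while values near 0 indicate no consistent coupling.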
Abstract:
Convolutional Neural Networks (CNN) have become the state-of-the-art method for many large-scale visual recognition tasks. For many practical applications, CNN architectures have a restrictive requirement: a huge amount of labeled data is needed for training. The idea of generative pretraining is to obtain initial weights for the network by training it in a completely unsupervised way and then fine-tuning the weights for the task at hand using supervised learning. In this thesis, a general introduction to Deep Neural Networks and their training algorithms is given, and these methods are applied to classification tasks on handwritten digits and natural images to develop unsupervised feature learning. The goal of this thesis is to find out whether the effect of pretraining is damped by recent practical advances in the optimization and regularization of CNNs. The experimental results show that pretraining is still a substantial regularizer, but no longer a necessary step, in training Convolutional Neural Networks with rectified activations. On handwritten digits, the proposed pretraining model achieved a classification accuracy comparable to state-of-the-art methods.
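As an illustration of the pretrain-then-fine-tune idea described above, here is a minimal PyTorch sketch; the autoencoder architecture, the synthetic stand-in data, and the hyperparameters are assumptions for the example and are not the thesis's actual setup.

```python
# Minimal sketch of unsupervised pretraining followed by supervised
# fine-tuning. Architecture, data, and hyperparameters are illustrative.
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
        )
    def forward(self, x):
        return self.net(x)

class ConvDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),    # 7 -> 14
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 14 -> 28
        )
    def forward(self, z):
        return self.net(z)

encoder, decoder = ConvEncoder(), ConvDecoder()

# Phase 1: unsupervised pretraining as an autoencoder (reconstruction loss).
pretrain_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
unlabeled = torch.rand(64, 1, 28, 28)  # stand-in for a batch of unlabeled digit images
recon = decoder(encoder(unlabeled))
loss = nn.functional.mse_loss(recon, unlabeled)
pretrain_opt.zero_grad(); loss.backward(); pretrain_opt.step()

# Phase 2: supervised fine-tuning of the pretrained encoder plus a classifier head.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 7 * 7, 10))
finetune_opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)
images, labels = torch.rand(64, 1, 28, 28), torch.randint(0, 10, (64,))
logits = classifier(encoder(images))
loss = nn.functional.cross_entropy(logits, labels)
finetune_opt.zero_grad(); loss.backward(); finetune_opt.step()
```

The thesis's question can be read off this structure: if the fine-tuning phase alone (with modern optimization, regularization, and rectified activations) reaches the same accuracy, the pretraining phase acts only as an additional regularizer rather than a necessity.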
Abstract:
The crowding effect, which prevents us from correctly identifying a visual stimulus when it is surrounded by flankers, is ubiquitous across a wide variety of stimulus classes. The eccentricity of the target stimulus and the target-flanker distance are fundamental factors that modulate the crowding effect. Target-flanker similarity also appears to contribute to the magnitude of crowding, according to data obtained with non-linguistic stimuli. The present study examined these three factors in conjunction with the spatial frequency content of the stimuli in a letter identification task. We presented filtered images of letters to non-dyslexic participants free of neurological disorders, while manipulating target eccentricity and target-flanker similarity (based on pre-established confusion matrices). Four types of spatial frequency filtering were used: low-pass, high-pass, broadband, and mixed (i.e., removal of the medium frequencies known to be optimal for letter identification). These conditions were matched in terms of contrast energy. Participants had to identify the target letter as quickly as possible while avoiding errors. The results show that target-flanker similarity amplifies the crowding effect, i.e., the joint effect of distance and eccentricity. This extends what is known about the impact of similarity on crowding to the visual identification of linguistic stimuli. Moreover, the magnitude of the crowding effect was largest with the low-pass filter, followed by the mixed, high-pass, and broadband filters, with significant differences between consecutive conditions. We conclude that: (1) medium spatial frequencies offer optimal protection against crowding in letter identification; (2) when medium spatial frequencies are absent from the stimulus, high frequencies protect against crowding whereas low frequencies amplify it, probably through their opposite impact on the availability of information about the distinctive features of the stimuli.
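For readers who want a concrete picture of the filtering conditions, the following NumPy sketch shows one way low-pass, high-pass, and "mixed" (mid-frequency-removed) versions of a letter image could be produced in the Fourier domain; the cutoff values are illustrative assumptions, and the contrast-energy matching used in the study is not implemented here.

```python
# Illustrative sketch (not the study's stimulus-generation code): filtering
# a letter image in the Fourier domain to keep low, high, or only
# non-medium spatial frequencies. Cutoffs are assumed values.
import numpy as np

def spatial_frequency_filter(img, low_cut=None, high_cut=None, notch=None):
    """Keep frequencies below high_cut, above low_cut, and/or outside `notch`.
    Frequencies are in cycles per image; `notch` is a (lo, hi) band to remove."""
    f = np.fft.fftshift(np.fft.fft2(img))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0])) * img.shape[0]
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1])) * img.shape[1]
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = np.ones_like(radius, dtype=bool)
    if high_cut is not None:
        mask &= radius <= high_cut                          # low-pass
    if low_cut is not None:
        mask &= radius >= low_cut                           # high-pass
    if notch is not None:
        mask &= (radius < notch[0]) | (radius > notch[1])   # remove mid band
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

letter = np.random.rand(256, 256)  # stand-in for a rendered letter image
low_pass  = spatial_frequency_filter(letter, high_cut=8)
high_pass = spatial_frequency_filter(letter, low_cut=24)
mixed     = spatial_frequency_filter(letter, notch=(8, 24))  # mid frequencies removed
```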
Abstract:
Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias.
Abstract:
Visual attention is an important task in autonomous robotics but, because of its complexity, the required processing time is significant. We propose an architecture for feature selection using foveated images that is guided by visual attention tasks and reduces the processing time required to perform these tasks. Our system can be applied to bottom-up or top-down visual attention. The foveated model determines which scales are to be used by the feature extraction algorithm. The system is able to discard features that are not strictly necessary for the tasks, thus reducing the processing time. If the fovea is correctly placed, it is possible to reduce the processing time without compromising the quality of the task outputs. The distance of the fovea from the object is also analyzed. If the visual system loses tracking in top-down attention, basic strategies of fovea placement can be applied. Experiments have shown that this approach can reduce processing time by up to 60%. To validate the method, we tested it with the feature algorithm known as Speeded Up Robust Features (SURF), one of the most efficient approaches to feature extraction. With the proposed architecture, we can meet the real-time requirements of robot vision, particularly for application in autonomous robotics.
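A rough Python/OpenCV sketch of the underlying idea, not the authors' implementation, is given below: feature extraction is restricted to coarser pyramid levels as distance from the fovea grows, so fewer pixels are processed far from the point of attention. The fovea position, ring radii, and the SURF/ORB fallback are assumptions made for the example.

```python
# Illustrative sketch: scale selection driven by distance from the fovea.
# SURF requires opencv-contrib and a patent-enabled build, so the sketch
# falls back to ORB when it is unavailable.
import cv2
import numpy as np

def foveated_keypoints(gray, fovea, radii=(40, 120, 360)):
    try:
        detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    except AttributeError:
        detector = cv2.ORB_create(nfeatures=500)  # freely available fallback

    keypoints = []
    for level, radius in enumerate(radii):
        scale = 2 ** level
        small = cv2.resize(gray, None, fx=1.0 / scale, fy=1.0 / scale)
        # Only a disc/annulus around the fovea is processed at this level.
        mask = np.zeros(small.shape, dtype=np.uint8)
        center = (int(fovea[0] / scale), int(fovea[1] / scale))
        cv2.circle(mask, center, int(radius / scale), 255, -1)
        if level > 0:
            cv2.circle(mask, center, int(radii[level - 1] / scale), 0, -1)
        kps = detector.detect(small, mask)
        for kp in kps:  # map keypoints back to full-resolution coordinates
            kp.pt = (kp.pt[0] * scale, kp.pt[1] * scale)
            kp.size *= scale
        keypoints.extend(kps)
    return keypoints

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if img is not None:
    kps = foveated_keypoints(img, fovea=(img.shape[1] // 2, img.shape[0] // 2))
    print(f"{len(kps)} keypoints extracted")
```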
Abstract:
This work investigates the effect of gender on drivers' visual demand for dynamic maps at different cartographic scales presented by an In-Vehicle Route Guidance and Navigation System (RGNS). A group of 52 subjects (26 males and 26 females) took part in an experiment performed in a low-cost driving simulator. The driver's task consisted of navigating an unknown route using an RGNS prototype that presents maps at two different cartographic scales. This paper replicates the known phenomenon of significant relationships between gender and performance on visual-spatial tasks. Our results show that drivers of different genders present distinct levels of visual demand due to both cartographic scale and maneuver complexity variation. These results are discussed in terms of individual differences in spatial ability and spatial anxiety.
Abstract:
Crowding is defined as the negative effect of adding visual distractors around a central target that has to be identified. Some studies have suggested the presence of a marked crowding effect in developmental dyslexia (e.g., Atkinson, 1991; Spinelli et al., 2002). Inspired by Spinelli's (2002) experimental design, we explored the hypothesis that the crowding effect may affect dyslexics' response times (RTs) and accuracy in identification tasks involving words, pseudowords, illegal non-words, and symbol strings. Moreover, our study aimed to clarify the relationship between the crowding phenomenon and the word-reading process from an inter-language comparison perspective. For this purpose we studied twenty-two French dyslexics and twenty-two Italian dyslexics (forty-four dyslexics in total), compared with forty-four subjects matched for reading level (22 French and 22 Italian) and forty-four chronological-age-matched subjects (22 French and 22 Italian). All children were tested on reading and cognitive abilities. Results showed no differences between French and Italian participants, suggesting that performances were homogeneous. Dyslexic children were all significantly impaired in word and pseudoword reading compared to their normal-reading controls. In the identification task used to assess the crowding effect, both accuracy and RTs showed a lexicality effect: words were recognized more accurately and faster than pseudowords, non-words, and symbol strings. Moreover, compared to normal readers, dyslexics' RTs and accuracy were impaired only for verbal material and not for non-verbal material; these results are in line with the phonological hypothesis (Griffiths & Snowling, 2002; Snowling, 2000; 2006). RTs revealed a general crowding effect (RTs in the crowding condition were slower than those recorded in the isolated condition) affecting all subjects' performance. This effect, however, was not specific to dyslexics. The data did not reveal a significant effect of language, allowing the generalization of the obtained results. We also analyzed the performance of two subgroups of dyslexics, categorized according to their reading abilities. The two subgroups produced different results regarding the crowding effect and type of material, suggesting that it is meaningful to take into account the heterogeneity of the dyslexia disorder. Finally, we also analyzed the relationship of the identification task with both reading and cognitive abilities. In conclusion, this study points out the importance of comparing dyslexic participants' performance on visual tasks with that of their reading-level-matched controls. This approach may improve our understanding of the potential causal link between crowding and reading (Goswami, 2003).
Abstract:
Mental imagery and visual memory have been considered distinct components in the encoding of information, associated with different working memory processes. Experimental evidence shows, for example, that performance on memory tasks based on the generation of mental images (visual imagery) suffers interference from dynamic visual noise (DVN), whereas the same effect is not observed in visual memory tasks based on visual perception (visual memory). Although several findings indicate that imagery and visual memory tasks rely on different cognitive processes, this does not rule out the possibility that they also share common processes, and some experimental results pointing to differences between the two tasks may stem from methodological differences between the paradigms used to study them. Our goal was to equate the visual mental imagery and visual memory tasks by means of recognition tasks using a spatial retro-cue paradigm. Sequences of Roman letters in visual form (visual memory task) or acoustic form (visual mental imagery task) were presented at four different spatial locations. The first and second experiments analyzed the time course of retrieval for both the imagery and the memory processes. The third experiment compared the structure of the representations of the two components by presenting DVN during the generation and retrieval stages. Our results show no differences in the storage of visual information over the period studied, but DVN affects the efficiency of the retrieval process, that is, response time, with the visual mental image representation being more susceptible to noise. However, the temporal course of retrieval differs between the two components, especially for imagery, which requires more time to retrieve information than memory. The data corroborate the relevance of the retro-cue paradigm, indicating that spatial attention is recruited for spatially organized representations regardless of whether they are visualized or imagined.