926 results for visual process
Abstract:
Working memory is the process of actively maintaining a representation of information for a brief period of time so that it is available for use. In monkeys, visual working memory involves the concerted activity of a distributed neural system, including posterior areas in visual cortex and anterior areas in prefrontal cortex. Within visual cortex, ventral stream areas are selectively involved in object vision, whereas dorsal stream areas are selectively involved in spatial vision. This domain specificity appears to extend forward into prefrontal cortex, with ventrolateral areas involved mainly in working memory for objects and dorsolateral areas involved mainly in working memory for spatial locations. The organization of this distributed neural system for working memory in monkeys appears to be conserved in humans, though some differences between the two species exist. In humans, as compared with monkeys, areas specialized for object vision in the ventral stream have a more inferior location in temporal cortex, whereas areas specialized for spatial vision in the dorsal stream have a more superior location in parietal cortex. Displacement of both sets of visual areas away from the posterior perisylvian cortex may be related to the emergence of language over the course of brain evolution. Whereas areas specialized for object working memory in humans and monkeys are similarly located in ventrolateral prefrontal cortex, those specialized for spatial working memory occupy a more superior and posterior location within dorsal prefrontal cortex in humans than in monkeys. As in posterior cortex, this displacement in frontal cortex also may be related to the emergence of new areas to serve distinctively human cognitive abilities.
Abstract:
Neural connections in the adult central nervous system are highly precise. In the visual system, retinal ganglion cells send their axons to target neurons in the lateral geniculate nucleus (LGN) in such a way that axons originating from the two eyes terminate in adjacent but nonoverlapping eye-specific layers. During development, however, inputs from the two eyes are intermixed, and the adult pattern emerges gradually as axons from the two eyes sort out to form the layers. Experiments indicate that the sorting-out process, even though it occurs in utero in higher mammals and always before vision, requires retinal ganglion cell signaling; blocking retinal ganglion cell action potentials with tetrodotoxin prevents the formation of the layers. These action potentials are endogenously generated by the ganglion cells, which fire spontaneously and synchronously with each other, generating "waves" of activity that travel across the retina. Calcium imaging of the retina shows that the ganglion cells undergo correlated calcium bursting to generate the waves and that amacrine cells also participate in the correlated activity patterns. Physiological recordings from LGN neurons in vitro indicate that the quasiperiodic activity generated by the retinal ganglion cells is transmitted across the synapse between ganglion cells to drive target LGN neurons. These observations suggest that (i) a neural circuit within the immature retina is responsible for generating specific spatiotemporal patterns of neural activity; (ii) spontaneous activity generated in the retina is propagated across central synapses; and (iii) even before the photoreceptors are present, nerve cell function is essential for correct wiring of the visual system during early development. Since spontaneously generated activity is known to be present elsewhere in the developing CNS, this process of activity-dependent wiring could be used throughout the nervous system to help refine early sets of neural connections into their highly precise adult patterns.
Abstract:
This study sought to understand the influence of using the celebrity Gisele Bundchen in advertisements on consumer behavior by means of one of the techniques of Neuromarketing: eye tracking. Accordingly, the research aimed to analyze whether the presence of a celebrity in print advertisements is really important, examined from the standpoint of Neuromarketing through the analysis of visual attention to the 'celebrity' stimulus. To verify the objectives, the hypotheses, and the proposition derived from them, a methodology was employed to evaluate consumers' visual attention to the 'celebrity' stimulus relative to the other stimuli present in the print advertisements: the logo (the name or symbol that represents the brand), the product, and other, non-famous people. This evaluation was carried out with the Neuromarketing technique that uses eye-tracking equipment. Participants were divided into three groups: one evaluated the advertisements of the six brands featuring the celebrity; another evaluated the advertisements of the same brands featuring non-famous people; and a final group evaluated the advertisements of the brands with no people present. At the end, a questionnaire was administered to confirm some of the data and to assess brand recall. Overall, the results showed that participants did pay attention to the celebrity considered in the study, as evidenced mainly by the heat maps obtained. When the celebrity was compared with non-famous people, in some cases (confirming some of the hypotheses) the importance of the celebrity's presence was evident; in other cases, however, the non-famous person attracted more attention. The study also made it clear that the presence of people (whether celebrities or not) can hinder attention to the brand and the product, and that when no people were shown, participants paid more attention to these other stimuli.
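The abstract above does not detail how the eye-tracking data were aggregated, so the following is only a minimal sketch of a common analysis: summing fixation durations into dwell time per area of interest (AOI) such as celebrity, logo, and product. The file name and column names are hypothetical assumptions, not the study's actual pipeline.

```python
# Minimal sketch (assumed data layout, not the study's pipeline): dwell time per AOI.
import pandas as pd

# Hypothetical fixation table with columns: participant, group, aoi, duration_ms
fixations = pd.read_csv("fixations.csv")

# Total dwell time per participant and AOI, then averaged within each group.
dwell = (fixations
         .groupby(["group", "participant", "aoi"])["duration_ms"].sum()
         .groupby(["group", "aoi"]).mean()
         .unstack("aoi"))
print(dwell)  # mean dwell time on celebrity vs. logo vs. product for each group
```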
Abstract:
Mental imagery and visual memory have been regarded as distinct components in the encoding of information, associated with different working memory processes. Experimental evidence shows, for example, that performance in memory tasks based on the generation of mental images (visual imagery) is disrupted by dynamic visual noise (DVN), whereas the same effect is not observed in visual memory tasks based on visual perception (visual memory). Although considerable evidence indicates that imagery and visual memory tasks rely on different cognitive processes, this does not rule out the possibility that they also share common processes, or that some experimental results pointing to differences between the two tasks stem from methodological differences between the paradigms used to study them. Our aim was to equate the visual mental imagery and visual memory tasks by using recognition tasks with the spatial retro-cue paradigm. Sequences of Roman letters were presented visually (visual memory task) or acoustically (visual mental imagery task) at four different spatial locations. The first and second experiments analyzed the time course of retrieval for both the imagery and the memory processes. The third experiment compared the structure of the representations of the two components by presenting DVN during the generation and retrieval stages. Our results show no differences in the storage of visual information over the period studied, but DVN affects the efficiency of the retrieval process, that is, response time, with the visual mental image representation being more susceptible to noise. The time course of retrieval nevertheless differs between the two components, especially for imagery, which requires more time to retrieve information than memory does. The data corroborate the relevance of the retro-cue paradigm, indicating that spatial attention is recruited for spatially organized representations, regardless of whether they are perceived or imagined.
Abstract:
In this paper, we present a novel coarse-to-fine visual localization approach: contextual visual localization. This approach relies on three elements: (i) a minimal-complexity classifier for performing fast coarse localization (submap classification); (ii) an optimized saliency detector which exploits the visual statistics of the submap; and (iii) a fast view-matching algorithm which filters initial matchings with a structural criterion. The latter algorithm yields fine localization. Our experiments show that these elements have been successfully integrated for solving the global localization problem. Context, that is, the awareness of being in a particular submap, is defined by a supervised classifier tuned for a minimal set of features. Visual context is exploited both for tuning (optimizing) the saliency detection process and for selecting potential matching views in the visual database that are close enough to the query view.
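As a schematic illustration of the coarse-to-fine pipeline summarized above (not the authors' code), the sketch below wires the three elements together; every class and function name is an assumed placeholder.

```python
# Illustrative skeleton of a contextual coarse-to-fine localization pipeline.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LocalizationResult:
    submap_id: int
    matched_view: Optional[int]

def localize(query_image, submap_classifier, saliency_detector_for, view_db, match_views):
    # (i) Coarse localization: a minimal-complexity classifier assigns the query to a submap.
    submap_id = submap_classifier(query_image)
    # (ii) Saliency detection tuned to the visual statistics of that submap.
    keypoints = saliency_detector_for[submap_id](query_image)
    # (iii) Fine localization: match against the submap's candidate views; match_views is
    # assumed to filter initial matchings with a structural (geometric-consistency) criterion.
    best_view = match_views(keypoints, view_db[submap_id])
    return LocalizationResult(submap_id, best_view)
```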
Abstract:
This paper presents a method for the fast calculation of a robot's egomotion using visual features. The method is part of a complete system for automatic map building and Simultaneous Localization and Mapping (SLAM). The method uses optical flow to determine whether the robot has moved; if so, visual features that fail several filtering criteria are discarded, and egomotion is then calculated. The proposed method thus improves the efficiency of the whole process because not all of the data is processed. We use a state-of-the-art algorithm (TORO) to rectify the map and solve the SLAM problem. Additionally, a study of different visual detectors and descriptors has been conducted to identify which of them are most suitable for the SLAM problem. Finally, a navigation method based on the map obtained from the SLAM solution is described.
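The movement-gating step described above can be illustrated with a short sketch using dense optical flow; the threshold and helper name are assumptions rather than the paper's implementation.

```python
# Minimal sketch of optical-flow gating: decide whether the robot has moved
# before running any feature filtering or egomotion update.
import cv2
import numpy as np

def robot_has_moved(prev_gray, curr_gray, threshold=0.5):
    # Dense Farneback optical flow between consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel flow magnitude
    return magnitude.mean() > threshold        # if False, skip the egomotion update
```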
Abstract:
During grasping and intelligent robotic manipulation tasks, the camera position relative to the scene changes dramatically because the robot moves to adapt its path and correctly grasp objects; this is because the camera is mounted on the robot effector. For this reason, in this type of environment, a visual recognition system must be implemented to recognize and “automatically and autonomously” obtain the positions of objects in the scene. Furthermore, in industrial environments, all objects manipulated by robots are made of the same material and cannot be differentiated by features such as texture or color. In this work, first, a study and analysis of 3D recognition descriptors has been completed for application in these environments. Second, a visual recognition system built on a specific distributed client-server architecture is proposed for recognizing industrial objects that lack these appearance features. Our system has been implemented to overcome recognition problems when objects can only be recognized by their geometric shape and the simplicity of those shapes can create ambiguity. Finally, some real tests are performed and illustrated to verify the satisfactory performance of the proposed system.
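Since the objects can only be told apart by geometric shape, a hedged sketch of shape-only recognition by nearest-neighbour matching of 3D descriptors is given below; descriptor extraction is assumed to happen elsewhere, and the names and rejection threshold are illustrative, not the paper's API.

```python
# Illustrative shape-only recognition: nearest-neighbour match of a query object's
# geometric descriptor against a database of model descriptors.
from typing import Dict, Optional
import numpy as np

def recognize(query_desc: np.ndarray,
              model_descs: Dict[str, np.ndarray],
              max_distance: float = 0.3) -> Optional[str]:
    best_name, best_dist = None, np.inf
    for name, desc in model_descs.items():
        dist = float(np.linalg.norm(query_desc - desc))
        if dist < best_dist:
            best_name, best_dist = name, dist
    # Reject weak matches: simple geometric shapes can yield very similar
    # descriptors, so an unreliable nearest neighbour is treated as "unknown".
    return best_name if best_dist <= max_distance else None
```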
Abstract:
Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting a robot controller when other sensor systems, such as tactile and force sensing, cannot obtain data useful to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when neither force nor pressure data are available. This approach is also used to measure changes in the shape of an object's surfaces, allowing us to detect deformations caused by inappropriate pressure applied by the hand's fingers. Tests were carried out on grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the results show that our visual pipeline does not require deformation models of objects and materials, and that the approach works in real time with both planar and 3D household objects. In addition, our method does not depend on the pose of the robot hand, because the location of the reference frame is computed by recognizing a pattern placed on the robot forearm. The experiments presented demonstrate that the proposed method provides good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
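A minimal sketch of the deformation-monitoring idea, assuming registered reference and current depth maps plus an object mask, is shown below; the tolerance values and event format are invented for illustration and are not the authors' implementation.

```python
# Sketch: compare the object's depth at grasp time against the current depth and
# raise an event message when the surface has changed beyond a tolerance.
import numpy as np

def check_deformation(depth_ref, depth_now, object_mask, tol_m=0.005, min_pixels=200):
    diff = np.abs(depth_now - depth_ref)
    deformed = (diff > tol_m) & object_mask           # per-pixel surface change on the object
    n_changed = int(np.count_nonzero(deformed))
    if n_changed > min_pixels:                        # enough area changed to count as deformation
        return {"event": "surface_deformation", "pixels": n_changed}
    return None                                       # no message sent to the robot controller
```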
Abstract:
This thesis explores the role of multimodality in language learners' comprehension and, more specifically, the effects on students' audio-visual comprehension when different orchestrations of modes appear in the visualization of vodcasts. Firstly, I describe the state of the art of its three main areas of concern, namely the evolution of meaning-making, Information and Communication Technology (ICT), and audio-visual comprehension. One of the most important contributions of the theoretical overview is the suggested integrative model of audio-visual comprehension, which attempts to explain how students process information received from different inputs. Secondly, I present a study based on the following research questions: 'Which modes are orchestrated throughout the vodcasts?', 'Are there any multimodal ensembles that are more beneficial for students' audio-visual comprehension?', and 'What are the students' attitudes towards audio-visual (e.g., vodcasts) compared to traditional audio (e.g., audio tracks) comprehension activities?'. Along with these research questions, I have formulated two hypotheses: audio-visual comprehension improves when there is a greater number of orchestrated modes, and students have a more positive attitude towards vodcasts than traditional audios when carrying out comprehension activities. The study includes a multimodal discourse analysis, audio-visual comprehension tests, and student questionnaires. The multimodal discourse analysis of two British Council language-learning vodcasts, entitled English is GREAT and Camden Fashion, using ELAN as the multimodal annotation tool, shows that there is a variety of multimodal ensembles of two, three and four modes. The audio-visual comprehension tests were given to 40 Spanish students learning English as a foreign language after viewing the vodcasts. These comprehension tests contain questions related to specific orchestrations of modes appearing in the vodcasts. The statistical analysis of the test results, using repeated-measures ANOVA, reveals that students obtain better audio-visual comprehension results when the multimodal ensembles consist of a greater number of orchestrated modes. Finally, the data compiled from the questionnaires show that students have a more positive attitude towards vodcasts than towards traditional audio listening activities. Results from the audio-visual comprehension tests and questionnaires confirm the two hypotheses of this study.
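For readers unfamiliar with the analysis named above, the following is a hedged sketch of a repeated-measures ANOVA in Python with statsmodels, assuming a hypothetical scores table with one comprehension score per student and per number of orchestrated modes; the file and column names are assumptions.

```python
# Sketch of a repeated-measures ANOVA: comprehension score as the dependent variable,
# number of orchestrated modes as the within-subject factor.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

scores = pd.read_csv("scores.csv")  # hypothetical columns: student, n_modes, score
result = AnovaRM(data=scores, depvar="score",
                 subject="student", within=["n_modes"]).fit()
print(result)  # F-test for the effect of the number of modes on comprehension
```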
Abstract:
Vol. 5 issued by the National League for Nursing, Division of Nursing Education.
Abstract:
In this paper, we review evidence from comparative studies of primate cortical organization, highlighting recent findings and hypotheses that may help us to understand the rules governing evolutionary changes of the cortical map and the process of formation of areas during development. We argue that clear unequivocal views of cortical areas and their homologies are more likely to emerge for 'core' fields, including the primary sensory areas, which are specified early in development by precise molecular identification steps. In primates, the middle temporal area is probably one of these primordial cortical fields. Areas that form at progressively later stages of development correspond to progressively more recent evolutionary events, their development being less firmly anchored in molecular specification. The certainty with which areal boundaries can be delimited, and likely homologies can be assigned, becomes increasingly blurred in parallel with this evolutionary/developmental sequence. For example, while current concepts for the definition of cortical areas have been vindicated in allowing a clarification of the organization of the New World monkey 'third tier' visual cortex (the third and dorsomedial areas, V3 and DM), our analyses suggest that more flexible mapping criteria may be needed to unravel the organization of higher-order visual association and polysensory areas.
Abstract:
This work addresses issues in the production and composition of high-definition images for digital HDTV. Based on data gathered from the specialized literature, both print and electronic, on interviews with professionals in the field, and on observations of the HDTV programming available in the city of São Paulo, it was possible to analyze the images and visual composition that come with high-definition, interactive digital TV. High-definition image production must serve two kinds of audience: viewers watching the digital broadcast and viewers who will continue watching on the analog system, with low perception of visual detail. The results revealed two fundamental and interdependent issues: production practices, scenographic materials, and the processes for composing image elements need to be updated to match the new technological characteristics; and the rollout of digital TV in Brazil should be reviewed, with corrections to its timetable and policies, at the risk of delaying the entire process of producing content and high-definition images for this medium.
Abstract:
Models of visual motion processing that introduce priors for low speed through Bayesian computations are sometimes treated with scepticism by empirical researchers because of the convenient way in which the parameters of the Bayesian priors have been chosen. Using the effects of motion adaptation on motion perception as an illustration, we show that the Bayesian prior, far from being merely convenient, may be estimated on-line and therefore represents a useful tool by which visual motion processes may be optimized to extract the motion signals commonly encountered in everyday experience. The prescription for optimization, when combined with system constraints on the transmission of visual information, may lead to an exaggeration of perceptual bias through the process of adaptation. Our approach extends the Bayesian model of visual motion proposed by Weiss et al. [Weiss, Y., Simoncelli, E. P., & Adelson, E. H. (2002). Motion illusions as optimal percepts. Nature Neuroscience, 5, 598-604] in suggesting that perceptual bias reflects a compromise made by a rational system in the face of uncertain signals and system constraints.
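A worked sketch of the kind of estimator discussed above, with a zero-mean Gaussian prior over speed and a toy on-line update of the prior width, is given below; the update rule is an illustrative assumption, not the authors' model.

```python
# Weiss-style MAP speed estimate: Gaussian likelihood around the measured speed
# times a zero-mean Gaussian prior shrinks the estimate toward zero, more strongly
# when the measurement is noisy.
def map_speed(v_measured, sigma_likelihood, sigma_prior):
    w = sigma_prior**2 / (sigma_prior**2 + sigma_likelihood**2)
    return w * v_measured

def update_prior(sigma_prior, recent_speeds, rate=0.1):
    # Toy on-line estimate (assumption): nudge the prior width toward the spread of
    # recently experienced speeds, e.g. during motion adaptation.
    target = (sum(v**2 for v in recent_speeds) / len(recent_speeds)) ** 0.5
    return (1 - rate) * sigma_prior + rate * target

print(map_speed(10.0, sigma_likelihood=2.0, sigma_prior=4.0))  # 8.0: mild slow bias
print(map_speed(10.0, sigma_likelihood=8.0, sigma_prior=4.0))  # 2.0: strong slow bias
```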
Abstract:
Knowledge elicitation is a well-known bottleneck in the production of knowledge-based systems (KBS). Past research has shown that visual interactive simulation (VIS) can be used effectively to elicit episodic knowledge suitable for machine learning purposes, with a view to building a KBS. Nonetheless, the VIS-based elicitation process still has much room for improvement. Based at the Ford Dagenham Engine Assembly Plant, a research project is being undertaken to investigate the individual and joint effects of visual display level and mode of problem case generation on the elicitation process. This paper describes the methodology employed and some issues encountered to date.
Abstract:
Expert systems, and artificial intelligence more generally, can provide a useful means for representing decision-making processes. By linking expert systems software to simulation software an effective means of including these decision-making processes in a simulation model can be achieved. This paper demonstrates how a commercial-off-the-shelf simulation package (Witness) can be linked to an expert systems package (XpertRule) through a Visual Basic interface. The methodology adopted could be used for models, and possibly software, other than those presented here.