993 results for Visual Recognition
Abstract:
Human object recognition is considered to be largely invariant to translation across the visual field. However, the origin of this invariance to positional changes has remained elusive, since numerous studies found that the ability to discriminate between visual patterns develops in a largely location-specific manner, with only limited transfer to novel visual field positions. To reconcile these contradictory observations, we traced the acquisition of categories of unfamiliar grey-level patterns within an interleaved learning and testing paradigm that involved either the same or different retinal locations. Our results show that position invariance is an emergent property of category learning. Pattern categories acquired over several hours at a fixed location in either the peripheral or central visual field gradually become accessible at new locations without any position-specific feedback. Furthermore, categories of novel patterns presented in the left hemifield are learnt distinctly faster and generalized better to other locations than those learnt in the right hemifield. Our results suggest that, during learning, initially position-specific representations of categories based on spatial pattern structure become encoded in a relational, position-invariant format. Such representational shifts may provide a generic mechanism for achieving perceptual invariance in object recognition.
Abstract:
A substantial amount of evidence has been collected to propose an exclusive role for the dorsal visual pathway in the control of guided visual search mechanisms, specifically in the preattentive direction of spatial selection [Vidyasagar, T. R. (1999). A neuronal model of attentional spotlight: Parietal guiding the temporal. Brain Research Reviews, 30, 66-76; Vidyasagar, T. R. (2001). From attentional gating in macaque primary visual cortex to dyslexia in humans. Progress in Brain Research, 134, 297-312]. Moreover, it has recently been suggested that the dorsal visual pathway is specifically involved in the spatial selection and sequencing required for orthographic processing in visual word recognition. In this experiment we manipulated the demands for spatial processing in a word-recognition (lexical decision) task by presenting target words either in a normal spatial configuration or with the constituent letters of each word spatially shifted relative to each other. Accurate word recognition in the shifted-words condition should demand higher spatial encoding requirements, thereby making greater demands on the dorsal visual stream. Magnetoencephalographic (MEG) neuroimaging revealed a high-frequency (35-40 Hz) right posterior parietal activation consistent with dorsal stream involvement occurring between 100 and 300 ms post-stimulus onset, and then again at 200-400 ms. Moreover, this signal was stronger in the shifted-words condition than in the normal-words condition. This result provides neurophysiological evidence that the dorsal visual stream may play an important role in visual word recognition and reading. These results further provide a plausible link between early-stage theories of reading and the magnocellular-deficit theory of dyslexia, which characterises many types of reading difficulty. © 2006 Elsevier Ltd. All rights reserved.
Abstract:
We used magnetoencephalography (MEG) to map the spatiotemporal evolution of cortical activity for visual word recognition. We show that for five-letter words, activity in the left hemisphere (LH) fusiform gyrus expands systematically in both the posterior-anterior and medial-lateral directions over the course of the first 500 ms after stimulus presentation. Contrary to what would be expected from cognitive models and hemodynamic studies, the component of this activity that spatially coincides with the visual word form area (VWFA) is not active until around 200 ms post-stimulus, and critically, this activity is preceded by and co-active with activity in parts of the inferior frontal gyrus (IFG, BA44/6). The spread of activity in the VWFA for words does not appear in isolation but is co-active in parallel with spread of activity in anterior middle temporal gyrus (aMTG, BA 21 and 38), posterior middle temporal gyrus (pMTG, BA37/39), and IFG. © 2004 Elsevier Inc. All rights reserved.
Abstract:
In this report we summarize the state of the art of speech emotion recognition from the signal-processing point of view. On the basis of multi-corpus experiments with machine-learning classifiers, we observe that existing supervised machine-learning approaches lead to database-dependent classifiers that cannot be applied to multi-language speech emotion recognition without additional training, because they discriminate the emotion classes according to the language used for training. As there are experimental results showing that humans can perform language-independent categorisation, we draw a parallel between machine recognition and the cognitive process and try to identify the sources of these divergent results. The analysis suggests that the main difference is that speech perception allows the extraction of language-independent features, even though language-dependent features are incorporated at all levels of the speech signal and play a strongly discriminative role in human perception. Based on several results in related domains, we further suggest that the cognitive process of emotion recognition is based on categorisation, assisted by a hierarchical structure of emotional categories that exists in the cognitive space of all humans. We propose a strategy for developing language-independent machine emotion recognition, based on the identification of language-independent speech features and the use of additional information from visual (expression) features.
Abstract:
We report an extension of the procedure devised by Weinstein and Shanks (Memory & Cognition 36:1415-1428, 2008) to study false recognition and priming of pictures. Participants viewed scenes with multiple embedded objects (seen items), then studied the names of these objects and the names of other objects (read items). Finally, participants completed a combined direct (recognition) and indirect (identification) memory test that included seen items, read items, and new items. In the direct test, participants recognized pictures of seen and read items more often than new pictures. In the indirect test, participants' speed at identifying those same pictures was improved for pictures that they had actually studied, and also for falsely recognized pictures whose names they had read. These data provide new evidence that a false-memory induction procedure can elicit memory-like representations that are difficult to distinguish from "true" memories of studied pictures. © 2012 Psychonomic Society, Inc.
Abstract:
Recent experimental studies have shown that development towards adult performance levels in configural processing in object recognition is delayed through middle childhood. While part changes to animal and artefact stimuli are processed with near-adult levels of accuracy from 7 years of age, relative size changes to stimuli result in a significant decrease in relative performance for participants aged between 7 and 10. Two sets of computational experiments were run using adult and 'immature' versions of the JIM3 artificial neural network to simulate these results. One set progressively decreased the number of neurons involved in the representation of view-independent metric relations within multi-geon objects. A second set of computational experiments involved decreasing the number of neurons that represent view-dependent (non-relational) object attributes in JIM3's Surface Map. The simulation results that best matched the empirical data qualitatively occurred when artificial neurons representing metric-precision relations were entirely eliminated. These results therefore provide further evidence for the late development of relational processing in object recognition and suggest that children in middle childhood may recognise objects without forming structural description representations.
Abstract:
Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance of FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
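The stimulus-generation pipeline described in this abstract (forward transform, band-pass filtering of the coefficients, inverse transform) can be sketched in a few lines. The snippet below is only an illustrative analogue, not the authors' implementation: it uses a plain 2D Fourier transform as a stand-in for the Fourier-Bessel decomposition, and the image and cutoff values are placeholder assumptions.

```python
# Minimal sketch of the "transform -> band-pass -> inverse transform" pipeline
# described above. NOTE: a plain 2D Fourier transform is used as a stand-in for
# the Fourier-Bessel decomposition of the original study; the cutoff values are
# illustrative, not the authors' parameters.
import numpy as np

def bandpass_filter_image(image: np.ndarray, low: float, high: float) -> np.ndarray:
    """Keep only spatial frequencies (cycles/image) in the interval [low, high]."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h          # vertical frequencies in cycles/image
    fx = np.fft.fftfreq(w) * w          # horizontal frequencies in cycles/image
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    mask = (radius >= low) & (radius <= high)

    spectrum = np.fft.fft2(image)       # forward transform
    filtered = spectrum * mask          # band-pass in the transform domain
    return np.real(np.fft.ifft2(filtered))  # inverse transform back to image space

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.random((128, 128))       # placeholder for a grey-level face image
    # 11.3-16 is the band where sensitivity peaked in the study.
    stimulus = bandpass_filter_image(face, low=11.3, high=16.0)
    print(stimulus.shape, stimulus.dtype)
```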
Abstract:
In studies of mirror self-recognition, subjects are usually surreptitiously marked on their head and then presented with a mirror. Scores of studies have established that by 18 to 24 months, children investigate their own head upon seeing the mark in the mirror. Scores of papers have debated what this means. Suggestions range from rich interpretations (e.g., the development of self-awareness) to lean accounts (e.g., the development of proprioceptive-visual matching), and include numerous more moderate proposals (e.g., the development of a concept of one's face). In Study 1, 18- to 24-month-old toddlers were given the standard test and a novel task in which they were marked on their legs rather than on their face. Toddlers performed equivalently on both tasks, suggesting that passing the test does not rely on information specific to facial features. In Study 2, toddlers were surreptitiously slipped into trouser legs that were prefixed to a highchair. Toddlers failed to retrieve the sticker now that their legs looked different from expectations. This finding, together with the findings from a third study showing that self-recognition in live video feedback develops later than mirror self-recognition, suggests that performance is not solely the result of proprioceptive-visual matching.
Abstract:
Background: Patients with early age-related maculopathy (ARM) do not necessarily show obvious morphological signs or functional impairment. Many have good visual acuity, yet complain of decreased visual performance. The aim of this study was to investigate the effects of aging on parafoveal letter recognition at reduced contrast, and the defects caused by early ARM or found in normal fellow eyes of patients with unilateral age-related macular degeneration (nfAMD). Methods: Testing of the central visual field (8° radius) was performed with the Macular Mapping Test (MMT), using recognition of letters at 40 parafoveal target locations and four contrast levels (5, 10, 25 and 100%). Effects of aging were investigated in 64 healthy subjects aged 23 to 76 years (CTRL). In addition, 39 eyes (minimum visual acuity of 0.63; 20/30) from 39 patients were examined, with either no visible signs of ARM while the fellow eye had advanced age-related macular degeneration (nfAMD; n=12), or early signs of ARM (eARM; n=27). Performance was summarized as a "field score" (FS). Results: In normal subjects, MMT performance begins to decline linearly with age from 50 and 54 years on, at 5% and 10% contrast respectively. The differentiation between patients and CTRLs was enhanced if the FS at 5% contrast was analyzed along with the FS at 10% contrast. In 8/12 patients from the nfAMD group and in 18/27 from the eARM group, the FS was statistically significantly lower than in the CTRL group at at least one of the lower contrast levels. Conclusion: Using parafoveal test locations, a recognition task and diminished contrast increases the chance of early detection of functional defects due to eARM or nfAMD and can differentiate them from those due to aging alone.
Abstract:
The branching structure of neurones is thought to influence patterns of connectivity and how inputs are integrated within the arbor. Recent studies have revealed a remarkable degree of variation in the branching structure of pyramidal cells in the cerebral cortex of diurnal primates, suggesting regional specialization in neuronal function. Such specialization in pyramidal cell structure may be important for various aspects of visual function, such as object recognition and color processing. To better understand the functional role of regional variation in the pyramidal cell phenotype in visual processing, we determined the complexity of the dendritic branching pattern of pyramidal cells in visual cortex of the nocturnal New World owl monkey. We used the fractal dilation method to quantify the branching structure of pyramidal cells in the primary visual area (V1), the second visual area (V2) and the caudal and rostral subdivisions of inferotemporal cortex (ITc and ITr, respectively), which are often associated with color processing. We found that, as in diurnal monkeys, there was a trend for cells of increasing fractal dimension with progression through these cortical areas. The increasing complexity paralleled a trend for increasing symmetry. That we found a similar trend in both diurnal and nocturnal monkeys suggests that it was a feature of a common anthropoid ancestor.
Abstract:
Introduction – In diagnostic evaluation in mammography, the radiologist's performance may be subject to diagnostic errors. Aims – To describe the importance of visual perception in the analysis of mammograms, identifying the main factors that contribute to the radiologist's visual perception and that condition diagnostic accuracy. Methods – Descriptive study based on a systematic literature review using PubMed and Science Direct. Forty-two articles meeting at least one of the inclusion criteria were included. References were selected using the PRISMA methodology, comprising four phases: identification, screening, eligibility and included studies. Results – In diagnostic evaluation in mammography, visual perception is closely related to: 1) visual parameters and ocular motility (visual acuity, contrast and luminance sensitivity, and eye movements); 2) image viewing conditions (room illuminance and monitor luminance); and 3) eyestrain caused by consecutive daily observation of images. Conclusions – Visual perception can be influenced by three categories of observed errors: search errors (lesions are never fixated with high-resolution foveal vision), recognition errors (lesions are fixated, but not long enough to be detected or recognised), and decision errors (lesions are fixated but not identified as suspicious). The reviewed studies on visual perception, visual attention and visual strategy, as well as studies on viewing conditions, do not characterise the observers' visual function. An accurate evaluation of visual perception in mammography requires studies that correlate visual function with diagnostic quality.
Abstract:
In opening up to the outside world, the "museum" has acquired new forms and new modes of expression. The complexity of museological activity thus leads to new representations that alter the initial image of the museum as a building containing objects. Its 'boundaries' are now less sharp, not only in spatial terms but also in the temporal dimension, creating an additional challenge: the recognition of the museum itself. Design, as a transdisciplinary activity, thereby assumes a key role in how museums communicate through their visual representation and in the recognition of their action. The present study results from a survey conducted in 2010 of 364 Portuguese museums (from a universe of 849 museums), presenting an analysis of the basic elements of their visual identity (name, logo, symbol, and color).
Abstract:
The robotics community is concerned with the ability to reproduce and compare results from researchers in areas such as visual perception and multi-robot cooperative behavior. To accomplish that task, this paper proposes a real-time indoor visual ground-truth system capable of providing accuracy at least one order of magnitude better than the precision of the algorithm being evaluated. A multi-camera architecture is proposed under the ROS (Robot Operating System) framework to estimate the 3D position of objects, and the implementation and results are contextualized to the RoboCup Middle Size League scenario.
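The abstract does not detail how the 3D positions are estimated from the multiple cameras; a common choice for such multi-camera ground-truth systems is linear (DLT) triangulation from calibrated views. The sketch below illustrates that generic technique only; the camera matrices and pixel observations are made up for illustration and are not taken from the paper.

```python
# Hedged sketch: linear (DLT) triangulation of a single 3D point from several
# calibrated cameras. The projection matrices and pixel observations below are
# invented for illustration; the paper's ROS-based pipeline is not described in
# enough detail in the abstract to reproduce here.
import numpy as np

def triangulate(projections: list, pixels: list) -> np.ndarray:
    """Estimate a 3D point from >=2 views via the Direct Linear Transform.

    projections: list of 3x4 camera projection matrices P_i = K_i [R_i | t_i]
    pixels:      list of (u, v) image observations of the same physical point
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        # Each view contributes two linear constraints on the homogeneous point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                  # de-homogenise

if __name__ == "__main__":
    # Two toy cameras observing the point (1, 2, 10) from slightly shifted positions.
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
    X_true = np.array([1.0, 2.0, 10.0, 1.0])
    px = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
    print(triangulate([P1, P2], px))     # should be close to (1, 2, 10)
```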
Abstract:
Due to advances in information technology (e.g., digital video cameras, ubiquitous sensors), the automatic detection of human behaviors from video is a very recent research topic. In this paper, we perform a systematic and recent literature review on this topic, from 2000 to 2014, covering a selection of 193 papers that were searched from six major scientific publishers. The selected papers were classified into three main subjects: detection techniques, datasets and applications. The detection techniques were divided into four categories (initialization, tracking, pose estimation and recognition). The list of datasets includes eight examples (e.g., Hollywood action). Finally, several application areas were identified, including human detection, abnormal activity detection, action recognition, player modeling and pedestrian detection. Our analysis provides a road map to guide future research for designing automatic visual human behavior detection systems.
Abstract:
This project was funded under the Applied Research Grants Scheme administered by Enterprise Ireland. The project was a partnership between Galway-Mayo Institute of Technology and an industrial company, Tyco/Mallinckrodt Galway. The project aimed to develop a semi-automatic, self-learning pattern recognition system capable of detecting defects on printed circuit boards such as component vacancy, component misalignment, component orientation, component error, and component weld. The research was conducted in three directions: image acquisition, image filtering/recognition and software development. Image acquisition studied the process of forming and digitizing images and some fundamental aspects of human visual perception. The importance of choosing the right camera and illumination system for a given type of problem is highlighted. Probably the most important step towards image recognition is image filtering. Filters are used to correct and enhance images in order to prepare them for recognition. Convolution, histogram equalisation, filters based on Boolean mathematics, noise reduction, edge detection, geometrical filters, cross-correlation filters and image compression are some examples of the filters that have been studied and successfully implemented in the software application. The software application developed during the research is customized to meet the requirements of the industrial partner. The application is able to analyze pictures, perform the filtering, build libraries, process images and generate log files. It incorporates most of the filters studied and, together with the illumination system and the camera, provides a fully integrated framework able to analyze defects on printed circuit boards.
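As an illustration of the kind of filtering-plus-matching chain listed above (histogram equalisation, edge detection, cross-correlation), the sketch below uses OpenCV; the file names and the match threshold are assumptions for demonstration only and are not part of the original industrial application.

```python
# Hedged sketch of a PCB-inspection style chain: enhance -> edge detect ->
# cross-correlate against a reference template. File names and the 0.7
# threshold are illustrative assumptions, not values from the project.
import cv2
import numpy as np

def find_component(board_path: str, template_path: str, threshold: float = 0.7):
    board = cv2.imread(board_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    if board is None or template is None:
        raise FileNotFoundError("could not read board or template image")

    # Histogram equalisation compensates for uneven illumination.
    board_eq = cv2.equalizeHist(board)
    template_eq = cv2.equalizeHist(template)

    # Edge maps make the match less sensitive to absolute brightness.
    board_edges = cv2.Canny(board_eq, 50, 150)
    template_edges = cv2.Canny(template_eq, 50, 150)

    # Normalised cross-correlation: peaks indicate candidate component locations.
    response = cv2.matchTemplate(board_edges, template_edges, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)

    present = max_val >= threshold       # below threshold -> possible vacancy/misalignment
    return present, max_val, max_loc

if __name__ == "__main__":
    ok, score, loc = find_component("board.png", "resistor_template.png")
    print(f"component found: {ok} (score={score:.2f} at {loc})")
```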