991 results for Spatial processing
Abstract:
The human auditory system comprises specialized but interacting anatomical and functional pathways encoding object, spatial, and temporal information. We review how learning-induced plasticity manifests along these pathways and to what extent common mechanisms subserve such plasticity. A first series of experiments establishes a temporal hierarchy along which sounds of objects are discriminated, from basic to fine-grained categorical boundaries and learned representations. A widespread network of temporal and (pre)frontal brain regions contributes to object discrimination via recursive processing. Learning-induced plasticity typically manifested as repetition suppression within a common set of brain regions. A second series considered how the temporal sequence of sound sources is represented. We show that lateralized responsiveness during the initial encoding phase of pairs of auditory spatial stimuli is critical for their accurate ordered perception. Finally, we consider how spatial representations are formed and modified through training-induced learning. A population-based model of spatial processing is supported, wherein temporal and parietal structures interact in the encoding of relative and absolute spatial information over the initial ∼300 ms post-stimulus onset. Collectively, these data provide insights into the functional organization of human audition and open directions for new developments in targeted diagnostic and neurorehabilitation strategies.
Abstract:
Abstract (English)

General background
Multisensory stimuli are easier to recognize, can improve learning, and are processed faster than unisensory ones. As such, the ability of an organism to extract and synthesize relevant sensory inputs across multiple sensory modalities shapes its perception of, and interaction with, the environment. A major question in the field is how the brain extracts and fuses relevant information to create a unified perceptual representation (and also how it segregates unrelated information). This fusion between the senses has been termed "multisensory integration", a notion that derives from seminal single-cell animal studies performed in the superior colliculus, a subcortical structure shown to create a multisensory output that differs from the sum of its unisensory inputs. At the cortical level, integration of multisensory information has traditionally been ascribed to higher-order classical associative cortical regions within the frontal, temporal and parietal lobes, occurring only after extensive processing within sensory-specific, segregated pathways. However, many anatomical, electrophysiological and neuroimaging findings now speak for multisensory convergence and interactions as a distributed process beginning much earlier than previously appreciated, within the initial stages of sensory processing.

The work presented in this thesis is aimed at studying the neural bases and mechanisms of how the human brain combines sensory information between the senses of hearing and touch. Early-latency non-linear auditory-somatosensory neural response interactions have been repeatedly observed in humans and non-human primates. Whether these early, low-level interactions directly influence behavioral outcomes remains an open question, as they have been observed under diverse experimental circumstances such as anesthesia, passive stimulation, and speeded reaction time tasks. Under laboratory settings, it has been demonstrated that simple reaction times to auditory-somatosensory stimuli are facilitated over their unisensory counterparts whether or not the stimuli are delivered to the same spatial location, suggesting that auditory-somatosensory integration must occur in cerebral regions with large-scale spatial representations. However, experiments that required spatial processing of the stimuli have observed effects limited to spatially aligned conditions, or effects varying depending on which body part was stimulated. Whether those divergences stem from task requirements and/or the need for spatial processing has not been firmly established.

Hypotheses and experimental results
In a first study, we hypothesized that early non-linear auditory-somatosensory neural response interactions are relevant to behavior. Performing a median split according to reaction time on a subset of behavioral and electroencephalographic data, we found that the earliest non-linear multisensory interactions measured within the EEG signal (i.e. between 40 and 83 ms post-stimulus onset) were specific to fast reaction times, indicating a direct correlation between early neural response interactions and behavior.

In a second study, we hypothesized that the relevance of spatial information for task performance has an impact on behavioral measures of auditory-somatosensory integration. Across two psychophysical experiments we show that facilitated detection occurs even when attending to spatial information, with no modulation according to the spatial alignment of the stimuli.
On the other hand, discrimination performance with probes, quantified using sensitivity (d'), is impaired following multisensory trials in general, and significantly more so following misaligned multisensory trials.

In a third study, we hypothesized that behavioral improvements might vary depending on which body part is stimulated. Preliminary results suggest a possible dissociation between behavioral improvements and ERPs: reaction times to multisensory stimuli were modulated by space only when somatosensory stimuli were delivered to the neck, whereas multisensory ERPs were modulated by spatial alignment for both types of somatosensory stimuli.

Conclusion
This thesis provides insight into the functional role played by early, low-level multisensory interactions. Combining psychophysics and electrical neuroimaging techniques, we demonstrate the behavioral relevance of early, low-level interactions in the normal human system. Moreover, we show that these early interactions are hermetic to top-down influences on spatial processing, suggesting their occurrence within cerebral regions having access to large-scale spatial representations. We finally highlight specific interactions between auditory space and somatosensory stimulation of different body parts. Gaining an in-depth understanding of how multisensory integration normally operates is of central importance, as it will ultimately permit us to consider how the impaired brain could benefit from rehabilitation with multisensory stimulation.

Abstract (French)

Theoretical background
Multisensory stimuli are easier to recognize, can improve learning, and are processed faster than unisensory stimuli. Thus, the ability of an organism to extract and synthesize relevant sensory inputs across its different sensory modalities shapes its perception of and interaction with the environment. A major question in the field is how the brain manages to extract and fuse stimuli to create a coherent perceptual representation (and also how it segregates unrelated stimuli). This fusion between the senses is called "multisensory integration", a notion that stems from work carried out in the superior colliculus in animals, a subcortical structure whose neurons produce a multisensory output that differs from the sum of the unisensory inputs. Traditionally, the integration of multisensory information at the cortical level is considered to occur late, in higher associative areas of the frontal, temporal and parietal lobes, following extensive processing within primary unisensory regions. However, several anatomical, electrophysiological and neuroimaging findings call this view into question, suggesting the existence of early multisensory convergence and interactions.

The work presented in this thesis aims at a better understanding of the neural bases and mechanisms involved in the combination of sensory information between the senses of hearing and touch in humans. Early non-linear auditory-somatosensory neural interactions have been observed repeatedly in humans and monkeys under circumstances as varied as anesthesia, passive stimulation, and tasks requiring a behavioral response (simple stimulus detection, for example).

Thus, the functional role played by these interactions at such an early stage of information processing remains an open question. It has also been shown that reaction times to auditory-somatosensory stimuli are facilitated relative to their unisensory counterparts regardless of their spatial position. This result suggests that auditory-somatosensory integration takes place in cerebral regions possessing large-scale spatial representations. However, experiments that required spatial processing of the stimuli have yielded effects limited to conditions in which the multisensory stimuli were spatially aligned, or effects varying according to the body part stimulated. It has not been established to date whether these divergences are due to task constraints and/or the need for processing of spatial information.

Hypotheses and experimental results
In a first study, we hypothesized that early auditory-somatosensory interactions are relevant for behavior. By performing a median split of the reaction times on a subset of behavioral and electroencephalographic data, we found that the multisensory interactions occurring at early latencies (between 40 and 83 ms) are specific to fast reaction times, indicating a direct correlation between these early neural interactions and behavior.

In a second study, we hypothesized that if spatial information becomes relevant for the task, it could exert an influence on behavioral measures of auditory-somatosensory integration. In two psychophysical experiments, we show that even when participants attend to spatial information, detection facilitation still occurs, and always independently of the spatial configuration of the stimuli. However, discrimination performance, quantified with a sensitivity index (d'), is impaired following multisensory trials in general, and more markedly so for multisensory trials that are not spatially aligned.

In a third study, we hypothesized that behavioral improvements could differ according to the body part that is stimulated (the hand vs. the neck). Preliminary results suggest a possible dissociation between behavioral facilitation and evoked potentials: reaction times were influenced by the spatial configuration only when the somatosensory stimuli were delivered to the neck, whereas the evoked potentials were modulated by spatial alignment for both types of somatosensory stimuli.

Conclusion
This thesis provides new insights into the functional role played by early, low-level multisensory interactions. Combining psychophysics and electrical neuroimaging, we demonstrate the behavioral relevance of these interactions in the normal human system. Moreover, we show that these early interactions are hermetic to top-down influences on spatial processing, suggesting that they occur in cerebral regions with access to large-scale spatial representations. Finally, we highlight specific interactions between auditory space and somatosensory stimulation of different body parts.

Deepening our knowledge of the neural bases and mechanisms involved in multisensory integration in the normal system is of central importance, as it will make it possible to examine and better understand how the impaired brain could benefit from rehabilitation with multisensory stimulation.
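As an illustration of the analysis logic behind the first study described above, the following sketch applies a median split on reaction times and computes the additive-model ("multisensory minus sum of unisensory") interaction term within an early time window. All data, the sampling rate, and the exact use of the 40-83 ms window are synthetic or assumed for illustration; this is not the authors' actual pipeline.

```python
# Minimal sketch, assuming synthetic single-trial data; not the thesis pipeline.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                      # sampling rate in Hz (assumed)
t = np.arange(-100, 400) / fs  # epoch from -100 ms to +399 ms, in seconds

def fake_erp(n_trials, gain):
    """Synthetic single-trial ERPs: a small evoked deflection plus noise."""
    evoked = gain * np.exp(-((t - 0.060) ** 2) / (2 * 0.010 ** 2))
    return evoked + 0.5 * rng.standard_normal((n_trials, t.size))

# Per-trial ERPs for multisensory (AS) and unisensory (A, S) conditions,
# plus simple-detection reaction times for the multisensory trials (all synthetic).
erp_AS, erp_A, erp_S = fake_erp(200, 1.4), fake_erp(200, 0.8), fake_erp(200, 0.7)
rt_AS = rng.normal(230, 40, size=200)            # reaction times in ms

# Median split of the multisensory trials by reaction time.
fast = rt_AS <= np.median(rt_AS)

# Additive-model ("non-linear") interaction: AS minus the sum of A and S.
def interaction(trial_mask):
    return erp_AS[trial_mask].mean(axis=0) - (erp_A.mean(axis=0) + erp_S.mean(axis=0))

window = (t >= 0.040) & (t <= 0.083)             # 40-83 ms post-stimulus onset
print("mean interaction, fast trials:", interaction(fast)[window].mean())
print("mean interaction, slow trials:", interaction(~fast)[window].mean())
```

In a real analysis the interaction term would be assessed statistically across subjects and electrodes; the sketch only shows where the median split and the additive comparison enter the computation.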
Abstract:
ABSTRACT (FRENCH)

This thesis work, based on the visual system in healthy subjects and in patients with schizophrenia, is organized around three scientific articles that are published or in press. The first article presents a new method for controlling the physical features of stimuli (luminance and spatial frequency). The second article shows, through analyses of EEG data, a deficit of the magnocellular pathway in the visual processing of illusions in patients with schizophrenia; this is demonstrated by the absence of modulation of the P1 component in the patients, in contrast to healthy subjects, in response to Kanizsa-type illusory stimuli presented at different eccentricities. Finally, the third article, also using electrical neuroimaging (EEG) methods and "misaligned gratings" illusions, shows that illusory contour processing takes place in the lateral occipital complex (LOC); it further reveals that the activities previously reported in primary visual areas are due to "top-down" inferences.

To make these three articles accessible, the introduction of this manuscript presents the essential concepts, and time-frequency analysis methods are also presented. The introduction is divided into four parts: the first presents the visual system, from retino-cortical cells to the two information-processing pathways, by way of the regions that compose the visual system. The second part presents schizophrenia through its diagnosis, its low-level deficits in the processing of visual stimuli, and its cognitive deficits. The third part presents illusory contour processing and the three models used in the last article. Finally, the methods for processing EEG data are described, including the time-frequency methods.

The results of the three articles are presented in the chapter of the same name, which also includes the results obtained with the time-frequency methods.

Finally, the discussion is organized along three axes: the time-frequency methods, together with a proposal for analyzing these data with a reference-independent statistical method; the discussion of the first article, which shows the quality of the treatment of these stimuli; and the discussion of the two neurophysiological articles, which proposes new experiments to refine the current results on the deficits of patients with schizophrenia. This could make it possible to establish a reliable biological marker of schizophrenia.

ABSTRACT (ENGLISH)

This thesis focuses on the visual system in healthy subjects and schizophrenic patients. To address this research, advanced methods for the analysis of electroencephalographic (EEG) data were used and developed. This manuscript comprises three scientific articles. The first article presents a novel method to control the physical features of visual stimuli (luminance and spatial frequencies). The second article shows, using electrical neuroimaging of EEG, a deficit in spatial processing associated with the dorsal pathway in chronic schizophrenic patients. This deficit was evidenced by an absent modulation of the P1 component in terms of response strength and topography as well as source estimations, and was orthogonal to the preserved ability to process Kanizsa-type illusory contours.
Finally, the third article resolved ongoing debates concerning the neural mechanism mediating illusory contour sensitivity by using electrical neuroimaging to show that the first differentiation of illusory contour presence vs. absence is localized within the lateral occipital complex. This effect was subsequent to modulations due to the orientation of the misaligned grating stimuli. Collectively, these results support a model in which effects in V1/V2 are mediated by "top-down" modulation from the LOC.

To aid understanding of these three articles, the Introduction of this thesis presents the major concepts they use; additionally, a section is devoted to time-frequency analysis methods not presented in the articles themselves. The Introduction is divided into four parts. The first part presents three aspects of the visual system: its cells, its regions, and their functional interactions. The second part presents an overview of schizophrenia and its sensory-cognitive deficits. The third part presents an overview of illusory contour processing and the three models examined in the third article. Finally, advanced analysis methods for EEG are presented, including time-frequency methodology.

The Introduction is followed by a synopsis of the main results of the articles as well as those obtained from the time-frequency analyses.

Finally, the Discussion chapter is divided along three axes. The first axis discusses the time-frequency analysis and proposes a novel statistical approach that is independent of the reference. The second axis contextualizes the first article and discusses the quality of the stimulus control and directions for further improvement. Finally, both neurophysiological articles are contextualized by proposing future experiments and hypotheses that may serve to improve our understanding of schizophrenia, on the one hand, and of visual functions more generally, on the other.
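As a generic illustration of the kind of time-frequency analysis this thesis refers to, the sketch below performs a Morlet-wavelet decomposition of a single synthetic EEG epoch. The signal, frequency range, and wavelet width are assumptions chosen for the example, not parameters taken from the thesis.

```python
# Minimal sketch of Morlet-wavelet time-frequency decomposition (synthetic data).
import numpy as np

fs = 512                                  # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)               # 2-second epoch
rng = np.random.default_rng(0)
# Transient 10 Hz burst around t = 1 s plus background noise.
signal = np.sin(2 * np.pi * 10 * t) * np.exp(-((t - 1.0) ** 2) / 0.02) \
         + 0.3 * rng.standard_normal(t.size)

freqs = np.arange(6, 41, 2)               # 6-40 Hz in 2 Hz steps (assumed)
n_cycles = 6                              # wavelet width (assumed)

power = np.empty((freqs.size, t.size))
for i, f in enumerate(freqs):
    sigma_t = n_cycles / (2 * np.pi * f)                  # temporal std of the wavelet
    wt = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt ** 2 / (2 * sigma_t ** 2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))      # unit-energy normalisation
    power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

# 'power' is a frequency-by-time map; the burst should peak near 10 Hz, t = 1 s.
peak_f, peak_t = np.unravel_index(power.argmax(), power.shape)
print(f"peak power at ~{freqs[peak_f]} Hz, t = {t[peak_t]:.2f} s")
```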
Abstract:
Systems that employ underwater acoustic energy for observation or communication are called sonar systems. Active and passive sonars are the two types of systems used for the detection and localisation of underwater targets. Active sonar involves the transmission of an acoustic signal which, when reflected from a target, provides the sonar receiver with a basis for detection and estimation. Passive sonar bases its detection and estimation on sounds that emanate from the target itself: machinery noise, flow noise, transmissions from its own active sonar, etc.

Electroacoustic transducers are used in sonar systems for the transmission and detection of acoustic energy. The transducer used for the transmission of acoustic energy is called a projector, and the one used for reception is called a hydrophone. Since a single transducer is not sufficient for long-range and directional transmission, a properly distributed array of transducers must be used [9-11]. The need for spatial processing to generate the most favourable directivity patterns for transducer systems used in underwater applications has already been analysed by several investigators [12-21]. The desired directivity pattern can be generated by the use of suitable focussing techniques, by an array of non-directional sensor elements whose arrangement, spacing and mode of excitation provide the required radiation pattern, or by a combination of these.

In computing the directivity pattern, it is assumed that the source strengths of the elements are unaffected by the acoustic pressure at each source. However, in closely packed arrays the acoustic interaction effects experienced among the elements modify the behaviour of the individual elements, which in turn reduces the acoustic source level with respect to the maximum theoretical value as well as degrading the beam pattern. This effect should be reduced in systems that are intended to generate high acoustic power output and unperturbed beam patterns [2,22-31].

The work presented here includes an approach for designing efficient and well-behaved underwater transducer arrays, taking into account the acoustic interaction effect experienced among closely packed multielement arrays, together with architectural modifications that reduce the interaction effect for different radiating apertures.
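To make the notion of a directivity pattern concrete, the following sketch computes the textbook far-field beam pattern of a uniform line array of omnidirectional elements. It deliberately ignores the inter-element acoustic interaction effects that this work addresses; the element count, spacing, and operating frequency are illustrative assumptions.

```python
# Minimal sketch: beam pattern of a uniform line array (no interaction effects).
import numpy as np

c = 1500.0            # sound speed in water, m/s
f = 10e3              # operating frequency, Hz (assumed)
wavelength = c / f
N = 16                # number of elements (assumed)
d = wavelength / 2    # half-wavelength spacing (assumed)
k = 2 * np.pi / wavelength

theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)   # angle from broadside
n = np.arange(N)
weights = np.ones(N) / N                           # uniform (unshaded) excitation

# Array factor: coherent sum of element contributions for each look angle.
steering = np.exp(1j * k * d * np.outer(np.sin(theta), n))
pattern = np.abs(steering @ weights)
pattern_db = 20 * np.log10(pattern / pattern.max())

# Half-power (-3 dB) beamwidth of the main lobe, in degrees.
main = np.degrees(theta[pattern_db >= -3])
print(f"-3 dB beamwidth: {main.max() - main.min():.1f} deg")
```

Shading the weights (e.g. tapering them) trades main-lobe width against sidelobe level; accounting for mutual acoustic interaction would further modify each element's effective source strength, which is the effect the thesis sets out to control.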
Abstract:
Although there is evidence that exact calculation recruits left hemisphere perisylvian language systems, recent work has shown that exact calculation can be retained despite severe damage to these networks. In this study, we sought to identify a “core” network for calculation and hence to determine the extent to which left hemisphere language areas are part of this network. We examined performance on addition and subtraction problems in two modalities: one using conventional two-digit problems that can be easily encoded into language; the other using novel shape representations. With regard to numerical problems, our results revealed increased left fronto-temporal activity in addition, and increased parietal activity in subtraction, potentially reflecting retrieval of linguistically encoded information during addition. The shape problems elicited activations of occipital, parietal and dorsal temporal regions, reflecting visual reasoning processes. A core activation common to both calculation types involved the superior parietal lobule bilaterally, right temporal sub-gyral area, and left lateralized activations in inferior parietal (BA 40), frontal (BA 6/8/32) and occipital (BA 18) regions. The large bilateral parietal activation could be attributed to visuo-spatial processing in calculation. The inferior parietal region, and particularly the left angular gyrus, was part of the core calculation network. However, given its activation in both shape and number tasks, its role is unlikely to reflect linguistic processing per se. A possibility is that it serves to integrate right hemisphere visuo-spatial and left hemisphere linguistic and executive processing in calculation.
Abstract:
A substantial amount of evidence has been collected to propose an exclusive role for the dorsal visual pathway in the control of guided visual search mechanisms, specifically in the preattentive direction of spatial selection [Vidyasagar, T. R. (1999). A neuronal model of attentional spotlight: Parietal guiding the temporal. Brain Research and Reviews, 30, 66-76; Vidyasagar, T. R. (2001). From attentional gating in macaque primary visual cortex to dyslexia in humans. Progress in Brain Research, 134, 297-312]. Moreover, it has been suggested recently that the dorsal visual pathway is specifically involved in the spatial selection and sequencing required for orthographic processing in visual word recognition. In this experiment we manipulate the demands for spatial processing in a word recognition, lexical decision task by presenting target words in a normal spatial configuration, or where the constituent letters of each word are spatially shifted relative to each other. Accurate word recognition in the Shifted-words condition should demand higher spatial encoding requirements, thereby making greater demands on the dorsal visual stream. Magnetoencephalographic (MEG) neuroimaging revealed a high frequency (35-40 Hz) right posterior parietal activation consistent with dorsal stream involvement occurring between 100 and 300 ms post-stimulus onset, and then again at 200-400 ms. Moreover, this signal was stronger in the shifted word condition, compared to the normal word condition. This result provides neurophysiological evidence that the dorsal visual stream may play an important role in visual word recognition and reading. These results further provide a plausible link between early stage theories of reading, and the magnocellular-deficit theory of dyslexia, which characterises many types of reading difficulty. © 2006 Elsevier Ltd. All rights reserved.
Abstract:
The early stages of dieting to lose weight have been associated with neuro-psychological impairments. Previous work has not elucidated whether these impairments are a function solely of unsupported or supported dieting. Raised cortico-steroid levels have been implicated as a possible causal mechanism. Healthy, overweight, pre-menopausal women were randomised to one of three conditions in which they dieted either as part of a commercially available weight loss group, dieted without any group support or acted as non-dieting controls for 8 weeks. Testing occurred at baseline and at 1, 4 and 8 weeks post baseline. During each session, participants completed measures of simple reaction time, motor speed, vigilance, immediate verbal recall, visuo-spatial processing and (at Week 1 only) executive function. Cortisol levels were gathered at the beginning and 30 min into each test session, via saliva samples. Also, food intake was self-recorded prior to each session and fasting body weight and percentage body fat were measured at each session. Participants in the unsupported diet condition displayed poorer vigilance performance (p=0.001) and impaired executive planning function (p=0.013) (along with a marginally significant trend for poorer visual recall (p=0.089)) after 1 week of dieting. No such impairments were observed in the other two groups. In addition, the unsupported dieters experienced a significant rise in salivary cortisol levels after 1 week of dieting (p<0.001). Both dieting groups lost roughly the same amount of body mass (p=0.011) over the course of the 8 weeks of dieting, although only the unsupported dieters experienced a significant drop in percentage body fat over the course of dieting (p=0.016). The precise causal nature of the relationship between stress, cortisol, unsupported dieting and cognitive function is, however, uncertain and should be the focus of further research. © 2005 Elsevier Ltd. All rights reserved.
Abstract:
The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data can be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases and the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcome this problem is to preserve spatial locality in task decomposition. We show in this paper that a near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
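A minimal sketch of the general partition / filter-and-refine scheme discussed above, assuming a simple uniform grid: rectangles are multi-assigned to every cell they overlap, each cell is joined independently (the part that could be parallelized), and duplicate result pairs caused by multi-assignment are eliminated locally with a reference-point check, preserving spatial locality without extra communication. The grid size, data, and bounding-box-only refinement are simplifications for illustration, not the paper's algorithms.

```python
# Minimal sketch, assuming axis-aligned rectangles (id, xmin, ymin, xmax, ymax).
from collections import defaultdict
from itertools import product

def cells_for(rect, cell):
    """Grid cells a rectangle overlaps (multi-assignment)."""
    _, xmin, ymin, xmax, ymax = rect
    return product(range(int(xmin // cell), int(xmax // cell) + 1),
                   range(int(ymin // cell), int(ymax // cell) + 1))

def partition(rects, cell):
    buckets = defaultdict(list)
    for r in rects:
        for c in cells_for(r, cell):
            buckets[c].append(r)
    return buckets

def overlaps(a, b):
    return a[1] <= b[3] and b[1] <= a[3] and a[2] <= b[4] and b[2] <= a[4]

def join_cell(cell_id, ra, rb, cell):
    out = []
    for a in ra:
        for b in rb:
            if not overlaps(a, b):
                continue                   # filter step on bounding boxes
            # Reference point: report the pair only in the cell that contains
            # the lower-left corner of the intersection, so duplicates caused
            # by multi-assignment vanish without cross-task communication.
            ix, iy = max(a[1], b[1]), max(a[2], b[2])
            if (int(ix // cell), int(iy // cell)) == cell_id:
                out.append((a[0], b[0]))   # a refine step would test exact geometry here
    return out

def spatial_join(A, B, cell=10.0):
    pa, pb = partition(A, cell), partition(B, cell)
    results = []
    for c in pa.keys() & pb.keys():        # each cell is an independent task
        results.extend(join_cell(c, pa[c], pb[c], cell))
    return results

A = [("a1", 0, 0, 12, 5), ("a2", 30, 30, 35, 35)]
B = [("b1", 8, 2, 20, 9), ("b2", 100, 100, 110, 110)]
print(spatial_join(A, B))                  # -> [('a1', 'b1')]
```

Because duplicates are resolved inside each cell, the per-cell tasks can be distributed across processors without a global deduplication pass, which is the locality property the paper identifies as the key to near-optimal speedup.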
Abstract:
In this study we report the results of two experiments on visual attention conducted with patients with early-onset schizophrenia. These experiments investigated the effect of irrelevant spatial-scale information upon the processing of relevant spatial-scale information, and the ability to shift the spatial scale of attention, across consecutive trials, between different levels of the hierarchical stimulus. Twelve patients with early-onset schizophrenia and matched controls performed local-global tasks under: (1) directed attention conditions with a consistency manipulation and (2) divided-attention conditions. In the directed-attention paradigm, the early-onset patients exhibited the normal patterns of global advantage and interference, and were not unduly affected by the consistency manipulation. Under divided-attention conditions, however, the early-onset patients exhibited a local-processing deficit. The source of this local processing deficit lay in the prolonged reaction time to local targets, when these had been preceded by a global target, but not when preceded by a local target. These findings suggest an impaired ability to shift the spatial scale of attention from a global to a local spatial scale in early-onset schizophrenia. (C) 2003 Elsevier Science (USA). All rights reserved.
Abstract:
Auditory spatial deficits occur frequently after a hemispheric lesion; a previous case report suggested that the explicit ability to recognise sound positions, as in sound localisation, can be impaired while the implicit use of auditory cues for the recognition of sound objects in a noisy environment remains preserved. By systematically testing patients with a first hemispheric lesion, we showed that (1) the explicit and/or implicit use of auditory spatial cues can be disturbed; (2) the dissociation between impaired explicit use and preserved implicit use of these cues is rather frequent; and (3) different types of sound localisation deficits can be associated with preserved implicit use of these spatial cues. Conceptually, the dissociation between the explicit and implicit use of these cues may illustrate the dual-pathway organisation of the auditory system. Our results argue for a systematic assessment of auditory spatial functions in clinical settings, especially when adaptation to a sound environment is at stake. Moreover, systematic studies are needed to link impairments of the explicit versus implicit use of these spatial cues with difficulties in carrying out activities of daily living, to develop appropriate rehabilitation strategies, and to ascertain to what extent the explicit and implicit use of spatial cues can be retrained after brain damage.
Abstract:
Auditory spatial deficits occur frequently after hemispheric damage; a previous case report suggested that explicit awareness of sound positions, as in sound localisation, can be impaired while the implicit use of auditory cues for the segregation of sound objects in noisy environments remains preserved. By systematically assessing patients with a first hemispheric lesion, we have shown that (1) explicit and/or implicit use can be disturbed; (2) dissociations between impaired explicit and preserved implicit use occur rather frequently; and (3) different types of sound localisation deficits can be associated with preserved implicit use. Conceptually, the dissociation between explicit and implicit use may reflect the dual-stream dichotomy of auditory processing. Our results speak in favour of systematic assessment of auditory spatial functions in clinical settings, especially when adaptation to the auditory environment is at stake. Further, systematic studies are needed to link deficits of explicit vs. implicit use to disability in everyday activities, to design appropriate rehabilitation strategies, and to ascertain how far the explicit and implicit use of spatial cues can be retrained following brain damage.
Abstract:
Using a head-mounted eye tracker, we assessed spatial recognition abilities (e.g., reactions to object permutation, removal, or replacement with a new object) in participants with intellectual disabilities. The "Intellectual Disabilities (ID)" group (n=40) achieved a 93.7% success rate, whereas the "Normal Control" group (n=40) scored 55.6% and took longer to fix their attention on the displaced object. The participants with an intellectual disability thus perceived spatial changes more accurately than controls. Interestingly, the ID participants were more reactive to object displacement than to removal of an object. In the specific test of novelty detection, however, the scores were similar, with both groups approaching 100% detection. Analysis of the strategies used by the ID group revealed that they engaged in more systematic object checking and were more sensitive than the control group to changes in the structure of the environment. Indeed, during the familiarisation phase, the ID group explored the collection of objects more slowly and fixed their gaze for longer on a significantly lower number of fixation points during visual sweeping.
Abstract:
Working memory, commonly defined as the ability to transiently hold mental representations online and to manipulate them, is known to be a core deficit in schizophrenia. The aim of the present study was to investigate the visuo-spatial component of working memory in schizophrenia and, more precisely, the extent to which dynamic visuo-spatial information processing is impaired in schizophrenia patients. For this purpose we used a computerized paradigm in which 29 patients with schizophrenia (DSM-IV; Diagnostic Interview for Genetic Studies, DIGS) and 29 age- and sex-matched control subjects (DIGS) had to memorize the path of a plane moving across the computer screen and to identify the observed trajectory among 9 trajectories presented together. Each trajectory could be viewed a maximum of 3 times if needed. The results showed no difference between schizophrenia patients and controls in the number of correct trajectories identified after the first presentation. However, when the mean number of correct trajectories was determined on the basis of all 3 trials, schizophrenia patients performed significantly worse than controls (Mann-Whitney, p = 0.002). These findings suggest that, although schizophrenia patients are able to memorize some dynamic trajectories as well as controls, they do not profit from the repetition of the trajectory presentation. These findings are congruent with the hypothesis that schizophrenia could induce an imbalance between local and global information processing: the patients may be able to focus on details of the trajectory, which could allow them to find the right target (bottom-up processes), but may have difficulty referring to previous experience in order to filter incoming information (top-down processes) and thereby enhance their visuo-spatial working memory abilities.
Abstract:
Psychophysical studies suggest that humans preferentially use a narrow band of low spatial frequencies for face recognition. Here we asked whether artificial face recognition systems show improved recognition performance at the same spatial frequencies as humans. To this end, we estimated recognition performance over a large database of face images by computing three discriminability measures: Fisher Linear Discriminant Analysis, Non-Parametric Discriminant Analysis, and Mutual Information. In order to address frequency dependence, discriminabilities were measured as a function of (filtered) image size. All three measures revealed a maximum at the same image sizes, where the spatial frequency content corresponds to the psychophysically determined frequencies. Our results therefore support the notion that the critical band of spatial frequencies for face recognition in humans and machines follows from inherent properties of face images, and that the use of these frequencies is associated with optimal face recognition performance.
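As a toy illustration of measuring discriminability as a function of image size, the sketch below computes a Fisher-style scatter ratio on synthetic "identities" at several down-scaled resolutions. The data are random patterns standing in for the face database, and only one of the three measures is shown; with real face images, rather than noise, the score would be expected to peak within a limited band of sizes, as reported above.

```python
# Minimal sketch, assuming synthetic image classes; not the study's database or code.
import numpy as np

rng = np.random.default_rng(1)

def downscale(img, size):
    """Crude block-average resize of a square image to size x size."""
    f = img.shape[0] // size
    return img[:f * size, :f * size].reshape(size, f, size, f).mean(axis=(1, 3))

def fisher_score(X, y):
    """Ratio of between-class to within-class scatter (summed over features)."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    sb = sum((X[y == c].mean(axis=0) - mu) ** 2 * (y == c).sum() for c in classes)
    sw = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0) for c in classes)
    return sb.sum() / (sw.sum() + 1e-12)

# Synthetic "identities": each class is a random prototype plus per-image noise.
n_classes, n_per_class, full = 20, 10, 64
prototypes = [rng.standard_normal((full, full)) for _ in range(n_classes)]
images, labels = [], []
for c, proto in enumerate(prototypes):
    for _ in range(n_per_class):
        images.append(proto + 0.8 * rng.standard_normal((full, full)))
        labels.append(c)
labels = np.array(labels)

# Discriminability as a function of (down-scaled) image size.
for size in (8, 16, 32, 64):
    X = np.stack([downscale(im, size).ravel() for im in images])
    print(f"{size:2d} px: Fisher score = {fisher_score(X, labels):.3f}")
```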