143 results for PERCEPTUAL RATINGS
at Université de Lausanne, Switzerland
Abstract:
Most theories of perception assume a rigid relationship between objects of the physical world and the corresponding mental representations. We show by a priori reasoning that this assumption is not fulfilled. We claim instead that all object-representation correspondences have to be learned. However, we cannot learn to perceive all objects that exist in the world. We arrive at these conclusions through a combinatorial analysis of a fictitious stimulus world and of the way its complexity can be coped with, namely perceptual learning. We show that successful perceptual learning requires changes in the representational states of the brain that are not derived directly from the constitution of the physical world. The mind constitutes itself through perceptual learning.
Abstract:
Mental imagery is defined as an experience that resembles perceptual experience but occurs in the absence of physical stimulation. Previous research has shown that mental imagery improves performance in certain domains, for example the motor domain; however, its role in perceptual learning has not yet been addressed. Here we focus on a specific sensory modality: vision. Perceptual learning is the ability to improve perception in a stable way through the repetition of a given task. This thesis presents a series of empirical results demonstrating that perceptual improvement can also occur in the absence of physical stimulation: imagining visual stimuli is sufficient for successful perceptual learning and leads to better performance with the real stimuli. Hence, the processes underlying perceptual learning are not only stimulus-driven but can also be driven by internally generated signals. Moreover, perceptual learning via mental imagery can occur not only when physical stimuli are (partially) absent, but also in conditions where the stimuli that are shown are uninformative with respect to the task that has to be learned.
Abstract:
An object's motion relative to an observer can convey ethologically meaningful information. Approaching or looming stimuli can signal threats or collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and, more recently, multisensory looming stimuli, none has investigated whether looming signals are integrated across modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings: maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex, ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perception and action is therefore supported.
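The abstract mentions enhancement indices for detection speed but does not give the formula, so the following is only a minimal sketch of one common way such an index could be computed: the relative speed-up of the multisensory condition over the faster unisensory condition. The reaction-time values and the function name are hypothetical.

```python
import numpy as np

def multisensory_enhancement(rt_auditory, rt_visual, rt_audiovisual):
    """Relative speed-up (in percent) of the multisensory condition over the
    faster of the two unisensory conditions; positive values indicate
    multisensory facilitation. This is an assumed convention, not necessarily
    the index used in the study."""
    best_unisensory = min(np.median(rt_auditory), np.median(rt_visual))
    gain = (best_unisensory - np.median(rt_audiovisual)) / best_unisensory
    return 100.0 * gain

# Hypothetical detection latencies (ms) for one observer
rng = np.random.default_rng(0)
rt_a  = rng.normal(420, 40, 100)   # auditory looming alone
rt_v  = rng.normal(400, 40, 100)   # visual looming alone
rt_av = rng.normal(355, 35, 100)   # congruent audio-visual looming

print(f"Enhancement index: {multisensory_enhancement(rt_a, rt_v, rt_av):.1f}%")
```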
Abstract:
Various lines of evidence accumulated over the past 30 years indicate that the cerebellum, long recognized as essential for motor control, also has considerable influence on perceptual processes. In this paper, we bring together experts from psychology and neuroscience, with the aim of providing a succinct but comprehensive overview of key findings related to the involvement of the cerebellum in sensory perception. The contributions cover such topics as anatomical and functional connectivity, evolutionary and comparative perspectives, visual and auditory processing, biological motion perception, nociception, self-motion, timing, predictive processing, and perceptual sequencing. While no single explanation has yet emerged concerning the role of the cerebellum in perceptual processes, this consensus paper summarizes the impressive empirical evidence on this problem and highlights diversities as well as commonalities between existing hypotheses. In addition to work with healthy individuals and patients with cerebellar disorders, it is also apparent that several neurological conditions in which perceptual disturbances occur, including autism and schizophrenia, are associated with cerebellar pathology. A better understanding of the involvement of the cerebellum in perceptual processes will thus likely be important for identifying and treating perceptual deficits that may at present go unnoticed and untreated. This paper provides a useful framework for further debate and empirical investigations into the influence of the cerebellum on sensory perception.
Abstract:
This study was designed to investigate personality development in children aged 8 to 12. For this purpose, children's self-perceptions were compared to parents' ratings. 506 children and their parents completed a selection of 38 questions from the Hierarchical Personality Inventory for Children (HiPIC). Results showed an age-related increase in the structural congruence of children's ratings with parents' ratings and a highly significant increase in the reliabilities of both parents' and children's assessments. The mean correlations between the children's self-descriptions and the parents' ratings were higher for Conscientiousness and Imagination than for Extraversion, Benevolence and Emotional Stability, and increased significantly with the children's age. Mean levels decreased with age for Imagination in parents' ratings and for Benevolence, Conscientiousness, and Imagination in children's ratings. This study showed that personality development from 8 to 12 years goes along with an increase in the agreement between the children's self-perceptions and the parents' perceptions of the children's personality.
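To make the self-other agreement analysis concrete, here is a minimal sketch of how per-trait correlations between children's self-ratings and parents' ratings could be computed separately for younger and older children. The data, rating scale, and age split below are invented for illustration and are not the study's actual values or procedure.

```python
import numpy as np
from scipy.stats import pearsonr

traits = ["Extraversion", "Benevolence", "Conscientiousness",
          "Emotional Stability", "Imagination"]

# Hypothetical ratings: rows = children, columns = HiPIC domains (1-5 scale)
rng = np.random.default_rng(1)
n = 506
child_ratings  = rng.integers(1, 6, size=(n, len(traits))).astype(float)
parent_ratings = np.clip(child_ratings + rng.normal(0, 1.2, size=(n, len(traits))), 1, 5)
ages = rng.integers(8, 13, size=n)

# Self-other agreement per trait, computed for younger vs. older children
for trait_idx, trait in enumerate(traits):
    for label, mask in [("8-9 y", ages <= 9), ("10-12 y", ages >= 10)]:
        r, _ = pearsonr(child_ratings[mask, trait_idx], parent_ratings[mask, trait_idx])
        print(f"{trait:20s} {label:8s} r = {r:.2f}")
```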
Abstract:
The investigation of gender differences in emotion has attracted much attention given the potential ramifications for our understanding of sex differences in disorders involving emotion dysregulation. Yet, research on content-specific gender differences in emotional responding across adulthood is lacking. The aims of the present study were twofold. First, we sought to investigate to what extent gender differences in self-reported emotional experience are content specific. Second, we sought to determine whether gender differences are stable across the adult lifespan. We assessed valence and arousal ratings of 14 picture series, each of a different content, in 94 men and 118 women aged 20 to 81. Compared to women, men reacted more positively to erotic images, whereas women rated low-arousing pleasant family scenes and landscapes as particularly positive. Women displayed a disposition to respond with greater defensive activation (i.e., more negative valence and higher arousal), in particular to the most arousing unpleasant contents. Importantly, no significant interaction between gender and age was found for any single content. This study makes a novel contribution by showing that gender differences in affective experiences in response to different contents persist across the adult lifespan. These findings support the "stability hypothesis" of gender differences across age.
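The key statistical claim is the absence of a gender-by-age interaction for any picture content. The abstract does not state the exact model used, so the following is only a sketch, under the assumption of a simple gender x age linear model on per-participant mean valence ratings for one content; all data, column names, and effect sizes are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-participant mean valence ratings for one picture content
rng = np.random.default_rng(2)
n_men, n_women = 94, 118
df = pd.DataFrame({
    "gender": ["m"] * n_men + ["f"] * n_women,
    "age": np.concatenate([rng.integers(20, 82, n_men), rng.integers(20, 82, n_women)]),
})
df["valence"] = np.where(df["gender"] == "m", 6.5, 5.2) + rng.normal(0, 1.0, len(df))

# Gender x age model: a non-significant interaction term is what the
# "stability hypothesis" of gender differences across the lifespan predicts.
model = smf.ols("valence ~ C(gender) * age", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```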
Abstract:
Intuitively, we think of perception as providing us with direct cognitive access to physical objects and their properties. But this common sense picture of perception becomes problematic when we notice that perception is not always veridical. In fact, reflection on illusions and hallucinations seems to indicate that perception cannot be what it intuitively appears to be. This clash between intuition and reflection is what generates the puzzle of perception. The task and enterprise of unravelling this puzzle took, and still takes, centre stage in the philosophy of perception. The goal of my dissertation is to make a contribution to this enterprise by formulating and defending a new structural approach to perception and perceptual consciousness. The argument for my structural approach is developed in several steps. Firstly, I develop an empirically inspired causal argument against naïve and direct realist conceptions of perceptual consciousness. Basically, the argument says that perception and hallucination can have the same proximal causes and must thus belong to the same mental kind. I emphasise that this insight gives us good reasons to abandon what we are instinctively driven to believe - namely that perception is directly about the outside physical world. The causal argument essentially highlights that the information that the subject acquires in perceiving a worldly object is always indirect. To put it another way, the argument shows that what we, as perceivers, are immediately aware of, is not an aspect of the world but an aspect of our sensory response to it. A view like this is traditionally known as a Representative Theory of Perception. As a second step, emphasis is put on the task of defending and promoting a new structural version of the Representative Theory of Perception; one that is immune to some major objections that have been standardly levelled at other Representative Theories of Perception. As part of this defence and promotion, I argue that it is only the structural features of perceptual experiences that are fit to represent the empirical world. This line of thought is backed up by a detailed study of the intriguing phenomenon of synaesthesia. More precisely, I concentrate on empirical cases of synaesthetic experiences and argue that some of them provide support for a structural approach to perception. The general picture that emerges in this dissertation is a new perspective on perceptual consciousness that is structural through and through.
Abstract:
Repetition of environmental sounds, like that of their visual counterparts, can facilitate behavior and modulate neural responses, exemplifying plasticity in how auditory objects are represented or accessed. It remains controversial whether such repetition priming/suppression involves solely plasticity based on acoustic features and/or also access to semantic features. To evaluate the contributions of physical and semantic features in eliciting repetition-induced plasticity, the present functional magnetic resonance imaging (fMRI) study repeated either identical or different exemplars of the initially presented object, reasoning that identical exemplars share both physical and semantic features, whereas different exemplars share only semantic features. Participants performed a living/man-made categorization task while being scanned at 3T. Repeated stimuli of both types significantly facilitated reaction times versus initial presentations, demonstrating perceptual and semantic repetition priming. There was also repetition suppression of fMRI activity within overlapping temporal, premotor, and prefrontal regions of the auditory "what" pathway. Importantly, the magnitude of suppression effects was equivalent for physically identical and semantically related exemplars. That the degree of repetition suppression did not depend on whether both perceptual and semantic information was repeated suggests a degree of acoustically independent semantic analysis in how object representations are maintained and retrieved.
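As a rough illustration of the logic of these comparisons (not the study's actual analysis pipeline), the sketch below contrasts repetition priming and repetition suppression between the two repeat types with paired tests; all subject counts, reaction times, and beta values are invented.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-subject mean reaction times (ms) and fMRI betas;
# condition labels follow the design described above, values are illustrative.
rng = np.random.default_rng(3)
n_subj = 20
rt_initial   = rng.normal(820, 60, n_subj)
rt_identical = rt_initial - rng.normal(70, 20, n_subj)   # same exemplar repeated
rt_different = rt_initial - rng.normal(60, 20, n_subj)   # different exemplar, same object

# Repetition priming: both repeat types should be faster than initial presentations
print(ttest_rel(rt_initial, rt_identical))
print(ttest_rel(rt_initial, rt_different))

# Repetition suppression per repeat type (initial minus repeated activity);
# equivalent magnitudes would point to acoustically independent semantic plasticity.
beta_initial   = rng.normal(1.0, 0.2, n_subj)
beta_identical = beta_initial - rng.normal(0.25, 0.05, n_subj)
beta_different = beta_initial - rng.normal(0.24, 0.05, n_subj)
print(ttest_rel(beta_initial - beta_identical, beta_initial - beta_different))
```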
Abstract:
We perceive our environment through multiple sensory channels. Nonetheless, research has traditionally focused on the investigation of sensory processing within single modalities. Investigating how our brain integrates multisensory information is thus of crucial importance for understanding how organisms cope with a constantly changing and dynamic environment. During my thesis I investigated how multisensory events impact our perception and brain responses, both when auditory-visual stimuli are presented simultaneously and when multisensory events at one point in time impact later unisensory processing. In "Looming signals reveal synergistic principles of multisensory integration" (Cappe, Thelen et al., 2012) we investigated the neuronal substrates involved in motion-in-depth detection under multisensory vs. unisensory conditions. We showed that congruent auditory-visual looming (i.e. approaching) signals are preferentially integrated by the brain. Further, we showed that early effects under these conditions are relevant for behavior, effectively speeding up responses to these combined stimulus presentations. In "Electrical neuroimaging of memory discrimination based on single-trial multisensory learning" (Thelen et al., 2012), we investigated the behavioral impact of single encounters with meaningless auditory-visual object pairings upon subsequent visual object recognition. In addition to showing that these encounters lead to impaired recognition accuracy upon repeated visual presentations, we showed that the brain discriminates images according to the initial encounter context as early as ~100 ms post-stimulus onset. In "Single-trial multisensory memories affect later visual and auditory object recognition" (Thelen et al., in review) we addressed whether auditory object recognition is affected by single-trial multisensory memories, and whether recognition accuracy for sounds is affected by the initial encounter context in the same way as for visual objects. We found that this is indeed the case. Based on our behavioral findings, we propose that a common underlying brain network is differentially involved during encoding and retrieval of images and sounds.
Abstract:
BACKGROUND: Appropriateness criteria for the treatment of Crohn's disease (CD) and ulcerative colitis (UC) have been developed by expert panels. Little is known about the acceptance of such recommendations by care providers. The aim was to explore how treatment decisions of practicing gastroenterologists differ from those of experts, using a vignette case study and a focus group. METHODS: Seventeen clinical vignettes were drawn from clinical indications evaluated by the expert panel. A vignette case questionnaire asking for treatment options in 9-10 clinical situations was submitted to 26 practicing gastroenterologists. For each vignette case, practitioners' answers on treatments deemed appropriate were compared to panel decisions. A qualitative analysis based on a focus group discussion explored acceptance and reasons for divergence. RESULTS: 239 clinical vignettes were completed, 98 for CD and 141 for UC. Divergence between proposed treatments and panel results was more frequent for CD (34%) than for UC (27%). Among UC clinical vignettes, the main divergences from the panel were linked to the assessment of 5-ASA failure and to situations where stopping treatment was the main decision. For CD, care providers' propositions diverged from the panel in mild-to-moderate active disease, where practitioners were more prone to an accelerated step-up approach than the panel's recommendations. CONCLUSIONS: In about one third of vignette cases, IBD treatment propositions made by practicing gastroenterologists diverged from expert recommendations. Practicing gastroenterologists may experience difficulties in applying recommendations in daily practice.
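To illustrate how a per-disease divergence rate like the 34% (CD) vs. 27% (UC) figures could be tallied from completed vignettes, here is a minimal sketch; the treatment names, column names, and the panel's appropriateness sets below are hypothetical placeholders, not the study's coding scheme.

```python
import pandas as pd

# Hypothetical completed vignettes: one row per vignette, with the treatment the
# practitioner chose and the set of treatments the expert panel rated appropriate.
answers = pd.DataFrame({
    "disease":  ["CD", "CD", "UC", "UC", "UC"],
    "chosen":   ["anti-TNF", "steroids", "5-ASA", "stop treatment", "steroids"],
    "panel_ok": [{"steroids", "azathioprine"}, {"steroids"},
                 {"5-ASA"}, {"5-ASA", "stop treatment"}, {"steroids"}],
})

# A vignette diverges when the chosen treatment is not in the panel's appropriate set
answers["diverges"] = [
    choice not in appropriate
    for choice, appropriate in zip(answers["chosen"], answers["panel_ok"])
]

# Divergence rate per disease, in percent
print(answers.groupby("disease")["diverges"].mean().mul(100).round(1))
```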
Abstract:
The construct of cognitive errors is clinically relevant for cognitive therapy of mood disorders. Beck's universality hypothesis postulates the relevance of negative cognitions in all subtypes of mood disorders, as well as of positive cognitions for manic states. This hypothesis has rarely been empirically addressed in patients presenting with bipolar affective disorder (BD). In-patients (n = 30) presenting with BD were interviewed, as were 30 participants of a matched control group. A valid and reliable observer-rater methodology for cognitive errors was applied to the session transcripts. Overall, patients made more cognitive errors than controls. When manic and depressive patients were compared, parts of the universality hypothesis were confirmed: manic symptoms were related to both positive and negative cognitive errors. These results are discussed with regard to the main assumptions of the cognitive model of depression, adding an argument for extending it to the BD diagnostic group while taking into consideration its specificities in terms of cognitive errors. Clinical implications for cognitive therapy of BD are suggested.
Abstract:
Discriminating complex sounds relies on multiple stages of differential brain activity. The specific roles of these stages and their links to perception were the focus of the present study. We presented 250 ms sounds of living and man-made objects while recording 160-channel electroencephalography (EEG). Subjects categorized each sound as that of a living, man-made or unknown item. We tested whether and when the brain discriminates between sound categories even when this discrimination does not transpire behaviorally. We applied a single-trial classifier that identified the voltage topographies and latencies at which brain responses are most discriminative. For sounds that the subjects could not categorize, we could successfully decode the semantic category based on differences in voltage topographies during the 116-174 ms post-stimulus period. Sounds that were correctly categorized as that of a living or man-made item by the same subjects exhibited two periods of differences in voltage topographies at the single-trial level: differential activity before the sound ended (starting at 112 ms) and during a separate period at ~270 ms post-stimulus onset. Because each of these periods could be used to reliably decode semantic categories, we interpret the first as related to an implicit tuning for sound representations and the second as linked to perceptual decision-making processes. Collectively, our results show that the brain discriminates environmental sounds during early stages and independently of behavioral proficiency, and that explicit sound categorization requires a subsequent processing stage.
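The abstract describes single-trial decoding of semantic category from voltage topographies in specific time windows, but not the classifier itself. The sketch below is therefore only an assumed, generic alternative (a cross-validated linear discriminant on window-averaged topographies), not the topographic single-trial method used in the study; data dimensions, sampling rate, and labels are hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical single-trial EEG: trials x channels x time samples (160 channels,
# 1 kHz sampling assumed); labels code the sound category (living vs. man-made).
rng = np.random.default_rng(4)
n_trials, n_channels, n_times = 200, 160, 500
eeg = rng.normal(size=(n_trials, n_channels, n_times))
labels = rng.integers(0, 2, n_trials)

# Average the voltage topography within the 116-174 ms window and decode the
# category from it; chance level is 50%, so above-chance cross-validated
# accuracy would indicate that this period carries category information on
# single trials.
window = slice(116, 175)                     # samples ~ ms at the assumed rate
topographies = eeg[:, :, window].mean(axis=2)

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, topographies, labels, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```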