10 results for Visual imagery
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Prompted reports of recall of spontaneous, conscious experiences were collected in a no-input, no-task, no-response paradigm (30 random prompts to each of 13 healthy volunteers). The mentation reports were classified into visual imagery and abstract thought. Spontaneous 19-channel brain electric activity (EEG) was continuously recorded, viewed as a series of momentary spatial distributions (maps) of the brain electric field, and segmented into microstates, i.e. time segments characterized by quasi-stable landscapes of the potential distribution maps, with varying durations in the sub-second range. Microstate segmentation used a data-driven strategy. Different microstates, i.e. different brain electric landscapes, must have been generated by the activity of different neural assemblies and are therefore hypothesized to implement different functions. The two types of reported experiences were associated with significantly different microstates (mean duration 121 ms) immediately preceding the prompts; across subjects, these microstates showed, for abstract thought compared to visual imagery, a shift of the electric gravity center to the left and a clockwise rotation of the field axis. In contrast, the microstates 2 s before the prompt did not differ between the two types of experiences. The results support the hypothesis that different microstates of the brain, as recognized in its electric field, implement different conscious, reportable mind states, i.e. different classes (types) of thoughts (mentations); thus, the microstates might be candidates for the 'atoms of thought'.
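The data-driven segmentation strategy is not detailed in this abstract; the sketch below shows one common microstate approach (polarity-invariant k-means clustering of momentary EEG maps). It is a minimal illustration only: the array shapes, the choice of four states, and all parameters are assumptions, not the study's actual pipeline.

```python
import numpy as np

def segment_microstates(eeg, n_states=4, n_iter=100, seed=0):
    """Toy microstate segmentation: polarity-invariant k-means on EEG maps.

    eeg: (n_samples, n_channels) array of average-referenced potentials.
    Returns (templates, labels): cluster map templates and per-sample labels.
    """
    rng = np.random.default_rng(seed)
    # Normalize each momentary map to unit norm so clustering uses topography only.
    maps = eeg / np.linalg.norm(eeg, axis=1, keepdims=True)
    # Initialize templates from randomly chosen momentary maps.
    templates = maps[rng.choice(len(maps), n_states, replace=False)]
    for _ in range(n_iter):
        # Polarity-invariant similarity: absolute spatial correlation.
        sim = np.abs(maps @ templates.T)            # (n_samples, n_states)
        labels = sim.argmax(axis=1)
        # Update each template as the dominant spatial pattern of its members
        # (first right singular vector, which is indifferent to map polarity).
        for k in range(n_states):
            members = maps[labels == k]
            if len(members):
                _, _, vt = np.linalg.svd(members, full_matrices=False)
                templates[k] = vt[0]
    return templates, labels

# Illustrative synthetic data: 19 channels (as in the study), 1000 samples.
eeg = np.random.default_rng(1).standard_normal((1000, 19))
templates, labels = segment_microstates(eeg)
print(templates.shape, np.bincount(labels))
```

Runs of identical labels in the output correspond to the quasi-stable segments the abstract calls microstates; their lengths give the sub-second durations.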
Core networks for visual-concrete and abstract thought content: a brain electric microstate analysis
Abstract:
Commonality of activation between spontaneously forming and stimulus-induced mental representations is an often-made but rarely tested assumption in neuroscience. In a conjunction analysis of two earlier studies, brain electric activity during visual-concrete and abstract thoughts was studied. The conditions were: in study 1, spontaneous stimulus-independent thinking (post hoc, thoughts were classified as visual imagery or abstract); in study 2, reading of single nouns ranking high or low on a visual imagery scale. In both studies, the subjects' tasks were similar: when prompted, they had to recall the last thought (study 1) or the last word (study 2). In both studies, subjects had no instruction to classify or to visually imagine their thoughts, and accordingly were not aware of the studies' aim. Brain electric data were analyzed into functional tomographic brain images (using LORETA) of the last microstate before the prompt (study 1) and of the word-type-discriminating event-related microstate after word onset (study 2). Conjunction analysis across the two studies yielded commonality of activation of core networks for abstract thought content in left anterior superior regions, and for visual-concrete thought content in right temporal posterior inferior regions. The results suggest that two different core networks are automatically activated when abstract or visual-concrete information, respectively, enters working memory, without any subject task or instruction concerning the two classes of information, and regardless of internal or external origin and of input modality. These core machineries of working memory are thus invariant to the source and modality of input when treating the two types of information.
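The abstract does not spell out how the conjunction was computed; a minimum-statistic conjunction is one standard formalization, and the sketch below illustrates it on synthetic voxel-wise t-values. The array size and threshold are illustrative assumptions, not the paper's LORETA-based analysis.

```python
import numpy as np

def min_statistic_conjunction(t_map1, t_map2, t_crit):
    """Conjunction by the minimum statistic: a voxel counts as commonly
    activated only if it exceeds the critical value in BOTH studies."""
    t_min = np.minimum(t_map1, t_map2)
    return t_min > t_crit

# Illustrative voxel-wise t-values from two independent studies
# (2394 is the voxel count of the classic LORETA solution space).
rng = np.random.default_rng(0)
t_study1 = rng.normal(0, 1, size=2394)
t_study2 = rng.normal(0, 1, size=2394)
common = min_statistic_conjunction(t_study1, t_study2, t_crit=3.1)
print(f"{common.sum()} voxels commonly activated in both studies")
```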
Abstract:
Visual imagery, like visual perception, activates feature-specific and category-specific visual areas. This is frequently observed in experiments where the instruction is to imagine stimuli that have been shown immediately before the imagery task. Hence, feature-specific activation could be related to short-term memory retrieval of previously presented sensory information. Here, we investigated mental imagery of stimuli that subjects had not seen before, eliminating the effects of short-term memory. We recorded brain activation using fMRI while subjects performed a behaviourally controlled guided imagery task in predefined retinotopic coordinates to optimize sensitivity in early visual areas. Whole-brain analyses revealed activation in a parieto-frontal network and lateral occipital cortex. Region-of-interest (ROI) based analyses showed activation in left hMT/V5+. Granger causality mapping taking left hMT/V5+ as source revealed an imagery-specific directed influence from the left inferior parietal lobule (IPL). Interestingly, we observed a negative BOLD response in V1-V3 during imagery, modulated by the retinotopic location of the imagined motion trace. Our results indicate that rule-based motion imagery can activate higher-order visual areas involved in motion perception, with a role for top-down directed influences originating in the IPL. Lower-order visual areas (V1, V2 and V3) were down-regulated during this type of imagery, possibly reflecting inhibition that prevents visual input from interfering with the imagery construction. This suggests that the activation in early visual areas observed in previous studies might be related to short- or long-term memory retrieval of specific sensory experiences.
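Granger causality mapping in the study is a whole-brain fMRI analysis; the sketch below only illustrates the underlying bivariate test on two synthetic time series, using statsmodels. The variable names ('ipl', 'hmt'), the coupling structure, and all parameters are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200

# Illustrative ROI time series: 'ipl' drives 'hmt' with a one-sample lag.
ipl = rng.standard_normal(n)
hmt = np.roll(ipl, 1) * 0.8 + rng.standard_normal(n) * 0.5
hmt[0] = rng.standard_normal()  # discard the wrap-around sample from np.roll

# Test H0: the second column (ipl) does NOT Granger-cause the first (hmt).
data = np.column_stack([hmt, ipl])
results = grangercausalitytests(data, maxlag=2, verbose=False)
for lag, (tests, _) in results.items():
    f_stat, p_value = tests["ssr_ftest"][:2]
    print(f"lag {lag}: F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant test in this direction, but not the reverse, is the kind of asymmetry that a directed influence from IPL to hMT/V5+ would produce.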
Abstract:
In this study we investigated whether synaesthesia is associated with a particular cognitive style. Cognitive style refers to preferred modes of information processing, such as a verbal style or a visual style. We reasoned that, given the enriched world of experiences created by synaesthesia and its association with enhanced verbal and visual memory, higher imagery, and creativity, synaesthetes might show an enhanced preference for a verbal as well as for a visual cognitive style compared to non-synaesthetes. In Study 1 we tested a large convenience sample of 1046 participants, who classified themselves as grapheme-color, sound-color, lexical-gustatory, sequence-space, or non-synaesthetes. To assess cognitive style, we used the revised verbalizer-visualizer questionnaire (VVQ), which comprises three independent cognitive style dimensions (verbal style, visual-spatial style, and vivid imagery style). The most important result was that those who reported grapheme-color synaesthesia showed higher ratings on the verbal and vivid imagery style dimensions, but not on the visual-spatial style dimension. In Study 2 we replicated this finding in a laboratory study involving 24 grapheme-color synaesthetes with objectively confirmed synaesthesia and a closely matched control group. Our results indicate that grapheme-color synaesthetes prefer both a verbal and a specific visual cognitive style. We suggest that this enhanced preference, probably together with a greater ease of switching between a verbal and a vivid visual imagery style, may be related to cognitive advantages associated with grapheme-color synaesthesia such as enhanced memory performance and creativity.
Abstract:
OBJECTIVE: To test the prediction by the Perception and Attention Deficit (PAD) model of complex visual hallucinations that cognitive impairment, specifically in visual attention, is a key risk factor for complex hallucinations in eye disease. METHODS: Two studies of elderly patients with acquired eye disease investigated the relationship between complex visual hallucinations (CVH) and impairments in general cognition and verbal attention (Study 1) and between CVH, selective visual attention and visual object perception (Study 2). The North East Visual Hallucinations Inventory was used to classify CVH. RESULTS: In Study 1, there was no relationship between CVH (n=10/39) and performance on cognitive screening or verbal attention tasks. In Study 2, participants with CVH (n=11/31) showed poorer performance on a modified Stroop task (p<0.05), a novel imagery-based attentional task (p<0.05) and picture (p<0.05) but not silhouette naming (p=0.13) tasks. Performance on these tasks correctly classified 83% of the participants as hallucinators or non-hallucinators. CONCLUSIONS: The results suggest that, consistent with the PAD model, complex visual hallucinations in people with acquired eye disease are associated with visual attention impairment.
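The abstract does not report which classifier yielded the 83% figure; linear discriminant analysis on the task scores is one conventional choice, and the sketch below illustrates it with synthetic scores under leave-one-out cross-validation. The group sizes mirror the abstract (11/31 with CVH), but the data and procedure are illustrative assumptions, not the study's.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Illustrative z-scores (Stroop, imagery task, picture naming) for 31 people:
# 11 hallucinators (poorer performance on average) and 20 non-hallucinators.
hallucinators = rng.normal(loc=[-1.0, -1.0, -0.8], scale=1.0, size=(11, 3))
controls = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(20, 3))
X = np.vstack([hallucinators, controls])
y = np.array([1] * 11 + [0] * 20)

# Leave-one-out classification: refit without each participant, then predict them.
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    model = LinearDiscriminantAnalysis().fit(X[mask], y[mask])
    correct += model.predict(X[i:i + 1])[0] == y[i]
print(f"classified correctly: {correct / len(y):.0%}")
```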
Abstract:
The present study shows that different neural activity during mental imagery and abstract mentation can be assigned to well-defined steps of the brain's information processing. During randomized visual presentation of single imagery-type and abstract-type words, 27-channel event-related potential (ERP) field maps were obtained from 25 subjects (sequence-divided into a first and a second group for statistics). The brain field map series showed a sequence of typical map configurations that were quasi-stable for brief time periods (microstates). The microstates were concatenated by rapid map changes. As different map configurations must result from different spatial patterns of neural activity, each microstate represents different active neural networks. Accordingly, microstates are assumed to correspond to discrete steps of information processing. Comparing microstate topographies (using centroids) between imagery- and abstract-type words, significantly different microstates were found in both subject groups at 286–354 ms, where imagery-type words were more right-lateralized than abstract-type words, and at 550–606 ms and 606–666 ms, where anterior-posterior differences occurred. We conclude that language processing consists of several well-defined steps and that the brain states incorporating those steps are altered by the stimuli's capacity to generate mental imagery or abstract mentation in a state-dependent manner.
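The centroid descriptor used here to compare map topographies can be illustrated as follows: a minimal sketch computing the potential-weighted centroids of the positive and negative map areas, assuming a single momentary average-referenced map and arbitrary electrode coordinates. All values and shapes are illustrative.

```python
import numpy as np

def map_centroids(potentials, xy):
    """Centroids of the positive and negative areas of a momentary EEG map.

    potentials: (n_channels,) momentary potential values (average reference).
    xy: (n_channels, 2) electrode positions projected onto the scalp plane.
    Returns the potential-weighted centroid of the positive and of the
    negative map area, a compact two-point descriptor of map topography.
    """
    pos, neg = potentials > 0, potentials < 0
    c_pos = np.average(xy[pos], axis=0, weights=potentials[pos])
    c_neg = np.average(xy[neg], axis=0, weights=-potentials[neg])
    return c_pos, c_neg

# Illustrative 27-channel map (as in the study) and random electrode layout.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(27, 2))
v = rng.standard_normal(27)
v -= v.mean()                       # enforce average reference
print(map_centroids(v, xy))
```

Left-right shifts of such centroids across conditions are what the lateralization comparisons between word types operate on.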
Abstract:
OBJECTIVE This study aimed to test the prediction from the Perception and Attention Deficit model of complex visual hallucinations (CVH) that impairments in visual attention and perception are key risk factors for complex hallucinations in eye disease and dementia. METHODS Two studies ran concurrently to investigate the relationship between CVH and impairments in perception (picture naming using the Graded Naming Test) and attention (a Stroop task plus a novel imagery task). The studies covered two populations: older patients with dementia (n = 28) and older people with eye disease (n = 50), with a shared control group (n = 37). The same methodology was used in both studies, and the North East Visual Hallucinations Inventory was used to identify CVH. RESULTS A reliable relationship was found for older patients with dementia between impaired perceptual and attentional performance and CVH. A reliable relationship was not found in the population of people with eye disease. CONCLUSIONS The results add to previous evidence that object perception and attentional deficits are associated with CVH in dementia, but show that risk factors for CVH in eye disease are inconsistent, suggesting that dynamic rather than static impairments in attentional processes may be key in this population.
Abstract:
Perceptual learning can occur when stimuli are only imagined, i.e., without actual stimulus presentation. For example, perceptual learning improved bisection discrimination when only the two outer lines of the bisection stimulus were presented and the central line had to be imagined. Performance also improved with other static stimuli. In non-learning imagery experiments, imagining static stimuli differs from imagining motion stimuli. We hypothesized that those differences also affect imagery perceptual learning. Here, we show that imagery training also improves motion direction discrimination. Learning occurs when no stimulus at all is presented during training, whereas no learning occurs when only noise is presented; the interference between the noise and the mental imagery possibly hinders learning. For static bisection stimuli, the pattern is just the opposite: learning occurs when the two outer lines of the bisection stimulus are presented, i.e., only a part of the visual stimulus, while no learning occurs when no stimulus at all is presented.
Abstract:
Mental color imagery abilities are commonly measured using paradigms that involve naming, judging, or comparing the colors of visual mental images of well-known objects (e.g., “Is a sunflower darker yellow than a lemon?”). Although this approach is widely used in patient studies, differences in the ability to perform such color comparisons might simply reflect participants’ general knowledge of object colors rather than their ability to generate accurate visual mental images of the colors of the objects. The aim of the present study was to design a new color imagery paradigm. Participants were asked to visualize a color for 3 s and then to identify a visually presented color by pressing one of six keys. The authors reasoned that participants would react faster when the imagined and perceived colors were congruent than when they were incongruent. In Experiment 1, participants were slower on incongruent than congruent trials, but only when they were instructed to visualize the colors. The results of Experiment 2 demonstrate that the congruency effect reported in Experiment 1 cannot be attributed to verbalization of the color that had to be visualized. Finally, in Experiment 3, the congruency effect evoked by mental imagery correlated with performance in a perceptual version of the task. The authors discuss these findings with respect to the mechanisms that underlie mental imagery and to patients suffering from color imagery deficits.
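The congruency effect described here is a within-subject reaction-time difference; a minimal sketch of such an analysis (a paired t-test over per-participant mean RTs) is shown below. The sample size, RT values, and effect size are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative per-participant mean RTs (ms): incongruent trials slower.
congruent = rng.normal(loc=620, scale=40, size=20)
incongruent = congruent + rng.normal(loc=35, scale=25, size=20)

# Congruency effect: within-subject RT difference, tested with a paired t-test.
effect = incongruent - congruent
t, p = stats.ttest_rel(incongruent, congruent)
print(f"mean congruency effect = {effect.mean():.1f} ms, t = {t:.2f}, p = {p:.4f}")
```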