925 results for Human visual processing
Abstract:
Delta-9-tetrahydrocannabinol (Delta-9-THC) and Cannabidiol (CBD), the two main ingredients of the Cannabis sativa plant, have distinct symptomatic and behavioral effects. We used functional magnetic resonance imaging (fMRI) in healthy volunteers to examine whether Delta-9-THC and CBD had opposite effects on regional brain function. We then assessed whether pretreatment with CBD can prevent the acute psychotic symptoms induced by Delta-9-THC. Fifteen healthy men with minimal earlier exposure to cannabis were scanned while performing a verbal memory task, a response inhibition task, a sensory processing task, and when viewing fearful faces. Subjects were scanned on three occasions, each preceded by oral administration of Delta-9-THC, CBD, or placebo. BOLD responses were measured using fMRI. In a second experiment, six healthy volunteers were administered Delta-9-THC intravenously on two occasions, after placebo or CBD pretreatment to examine whether CBD could block the psychotic symptoms induced by Delta-9-THC. Delta-9-THC and CBD had opposite effects on activation relative to placebo in the striatum during verbal recall, in the hippocampus during the response inhibition task, in the amygdala when subjects viewed fearful faces, in the superior temporal cortex when subjects listened to speech, and in the occipital cortex during visual processing. In the second experiment, pretreatment with CBD prevented the acute induction of psychotic symptoms by Delta-9-tetrahydrocannabinol. Delta-9-THC and CBD can have opposite effects on regional brain function, which may underlie their different symptomatic and behavioral effects, and CBD's ability to block the psychotogenic effects of Delta-9-THC. Neuropsychopharmacology (2010) 35, 764-774; doi:10.1038/npp.2009.184; published online 18 November 2009
Abstract:
Recent studies have revealed marked variation in pyramidal cell structure in the visual cortex of macaque and marmoset monkeys. In particular, there is a systematic increase in the size of, and number of spines in, the arbours of pyramidal cells with progression through occipitotemporal (OT) visual areas. In the present study we extend the basis for comparison by investigating pyramidal cell structure in visual areas of the nocturnal owl monkey. As in the diurnal macaque and marmoset monkeys, pyramidal cells became progressively larger and more spinous with anterior progression through OT visual areas. These data suggest that: 1. the trend for more complex pyramidal cells with anterior progression through OT visual areas is a fundamental organizational principle in primate cortex; 2. areal specialization of the pyramidal cell phenotype provides an anatomical substrate for the reconstruction of the visual scene in OT areas; 3. evolutionary specialization of different aspects of visual processing may determine the extent of interareal variation in the pyramidal cell phenotype in different species; and 4. pyramidal cell structure is not necessarily related to brain size. Crown Copyright (C) 2003 Published by Elsevier Science Ltd on behalf of IBRO. All rights reserved.
Abstract:
Time is embedded in any sensory experience: the movements of a dance, the rhythm of a piece of music, the words of a speaker are all examples of temporally structured sensory events. In humans, if and how visual cortices perform temporal processing remains unclear. Here we show that both primary visual cortex (V1) and extrastriate area V5/MT are causally involved in encoding and keeping time in memory and that this involvement is independent from low-level visual processing. Most importantly we demonstrate that V1 and V5/MT are functionally linked and temporally synchronized during time encoding whereas they are functionally independent and operate serially (V1 followed by V5/MT) while maintaining temporal information in working memory. These data challenge the traditional view of V1 and V5/MT as visuo-spatial feature detectors and highlight the functional contribution and the temporal dynamics of these brain regions in the processing of time in the millisecond range. The present project resulted in the paper entitled: 'How the visual brain encodes and keeps track of time' by Paolo Salvioni, Lysiann Kalmbach, Micah Murray and Domenica Bueti that is now submitted for publication to the Journal of Neuroscience.
Abstract:
Since the early days of functional magnetic resonance imaging (fMRI), retinotopic mapping emerged as a powerful and widely-accepted tool, allowing the identification of individual visual cortical fields and furthering the study of visual processing. In contrast, tonotopic mapping in auditory cortex proved more challenging primarily because of the smaller size of auditory cortical fields. The spatial resolution capabilities of fMRI have since advanced, and recent reports from our labs and several others demonstrate the reliability of tonotopic mapping in human auditory cortex. Here we review the wide range of stimulus procedures and analysis methods that have been used to successfully map tonotopy in human auditory cortex. We point out that recent studies provide a remarkably consistent view of human tonotopic organisation, although the interpretation of the maps continues to vary. In particular, there remains controversy over the exact orientation of the primary gradients with respect to Heschl's gyrus, which leads to different predictions about the location of human A1, R, and surrounding fields. We discuss the development of this debate and argue that the literature is converging towards an interpretation that core fields A1 and R fold across the rostral and caudal banks of Heschl's gyrus, with tonotopic gradients laid out in a distinctive V-shaped manner. This suggests an organisation that is largely homologous with non-human primates. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Abstract:
Approaching or looming sounds (L-sounds) have been shown to selectively increase visual cortex excitability [Romei, V., Murray, M. M., Cappe, C., & Thut, G. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology, 19, 1799-1805, 2009]. These cross-modal effects start at an early, preperceptual stage of sound processing and persist with increasing sound duration. Here, we identified individual factors contributing to cross-modal effects on visual cortex excitability and studied the persistence of effects after sound offset. To this end, we probed the impact of different L-sound velocities on phosphene perception after sound offset as a function of individual auditory versus visual preference/dominance using single-pulse TMS over the occipital pole. We found that the boosting of phosphene perception by L-sounds continued for several tens of milliseconds after the end of the L-sound and was temporally sensitive to different L-sound profiles (velocities). In addition, we found that this depended on an individual's preferred sensory modality (auditory vs. visual) as determined through a divided attention task (attentional preference), but not on their simple threshold detection level per sensory modality. Whereas individuals with "visual preference" showed enhanced phosphene perception irrespective of L-sound velocity, those with "auditory preference" showed differential peaks in phosphene perception whose delays after sound offset followed the different L-sound velocity profiles. These novel findings suggest that looming signals modulate visual cortex excitability beyond sound duration, possibly to support prompt identification of, and reaction to, potentially dangerous approaching objects. The observed interindividual differences favor the idea that, unlike the early effects, this late L-sound impact on visual cortex excitability is influenced by cross-modal attentional mechanisms rather than low-level sensory processes.
Abstract:
Time is embedded in any sensory experience: the movements of a dance, the rhythm of a piece of music, the words of a speaker are all examples of temporally structured sensory events. In humans, if and how visual cortices perform temporal processing remains unclear. Here we show that both primary visual cortex (V1) and extrastriate area V5/MT are causally involved in encoding and keeping time in memory and that this involvement is independent from low-level visual processing. Most importantly we demonstrate that V1 and V5/MT come into play simultaneously and seem to be functionally linked during interval encoding, whereas they operate serially (V1 followed by V5/MT) and seem to be independent while maintaining temporal information in working memory. These data help to refine our knowledge of the functional properties of human visual cortex, highlighting the contribution and the temporal dynamics of V1 and V5/MT in the processing of the temporal aspects of visual information.
Abstract:
Image filtering is a widely used approach to image enhancement in the design of digital imaging systems. It is employed in television and camera design to improve the quality of the output image and to avoid problems such as image blurring, which grows in importance in the design of large displays and of digital cameras. This thesis proposes a new image filtering method based on visual characteristics of the human eye, such as the modulation transfer function (MTF). In contrast to traditional filtering methods based on human visual characteristics, this thesis takes into account the anisotropy of human vision. The proposed method is based on laboratory measurements of the human eye MTF and accounts for the degradation the latter introduces into the image. The method enhances an image so that, after being degraded by the eye's MTF, it gives the perception of the original image quality. The thesis gives a basic understanding of the image filtering approach and the concept of the MTF, and describes an algorithm for image enhancement based on the MTF of the human eye. Experiments have shown good results according to human evaluation. Suggestions for future improvements of the algorithm are also given.
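A minimal sketch of the general idea of such pre-compensation (not the thesis' algorithm: the Gaussian anisotropic MTF, its parameters, and the regularisation constant below are illustrative assumptions): divide the image spectrum by a regularised inverse of the eye MTF so that the image, once degraded by the MTF, approximates the original.

```python
import numpy as np

# Sketch only: a toy anisotropic Gaussian MTF standing in for a measured
# human eye MTF; sigma_x != sigma_y models the anisotropy mentioned above.
def eye_mtf(fx, fy, sigma_x=0.15, sigma_y=0.20):
    return np.exp(-0.5 * ((fx / sigma_x) ** 2 + (fy / sigma_y) ** 2))

def precompensate(image, noise_power=1e-2):
    """Boost the spatial frequencies the MTF attenuates, so that the image
    seen through the MTF approximates the original (Wiener-style regularisation
    keeps the inverse filter from amplifying noise at high frequencies)."""
    fy, fx = np.meshgrid(np.fft.fftfreq(image.shape[0]),
                         np.fft.fftfreq(image.shape[1]), indexing="ij")
    h = eye_mtf(fx, fy)
    wiener = h / (h ** 2 + noise_power)          # regularised inverse of the MTF
    return np.real(np.fft.ifft2(np.fft.fft2(image) * wiener))

# Usage (hypothetical): enhanced = precompensate(grayscale_image_as_float_array)
```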
Abstract:
Understanding how the human visual system recognizes objects is one of the key challenges in neuroscience. Inspired by a large body of physiological evidence (Felleman and Van Essen, 1991; Hubel and Wiesel, 1962; Livingstone and Hubel, 1988; Tso et al., 2001; Zeki, 1993), a general class of recognition models has emerged which is based on a hierarchical organization of visual processing, with succeeding stages being sensitive to image features of increasing complexity (Hummel and Biederman, 1992; Riesenhuber and Poggio, 1999; Selfridge, 1959). However, these models appear to be incompatible with some well-known psychophysical results. Prominent among these are experiments investigating recognition impairments caused by vertical inversion of images, especially those of faces. It has been reported that faces that differ "featurally" are much easier to distinguish when inverted than those that differ "configurally" (Freire et al., 2000; Le Grand et al., 2001; Mondloch et al., 2002), a finding that is difficult to reconcile with the aforementioned models. Here we show that after controlling for subjects' expectations, there is no difference between "featurally" and "configurally" transformed faces in terms of inversion effect. This result reinforces the plausibility of simple hierarchical models of object representation and recognition in cortex.
Abstract:
The existence of hand-centred visual processing has long been established in the macaque premotor cortex. These hand-centred mechanisms have been thought to play some general role in the sensory guidance of movements towards objects, or, more recently, in the sensory guidance of object avoidance movements. We suggest that these hand-centred mechanisms play a specific and prominent role in the rapid selection and control of manual actions following sudden changes in the properties of the objects relevant for hand-object interactions. We discuss recent anatomical and physiological evidence from human and non-human primates, which indicates the existence of rapid processing of visual information for hand-object interactions. This new evidence demonstrates how several stages of the hierarchical visual processing system may be bypassed, feeding the motor system with hand-related visual inputs within just 70 ms following a sudden event. This time window is early enough, and this processing rapid enough, to allow the generation and control of rapid hand-centred avoidance and acquisitive actions, for aversive and desired objects, respectively.
Abstract:
We show that the affective experience of touch and the sight of touch can be modulated by cognition, and investigate in an fMRI study where top-down cognitive modulations of bottom-up somatosensory and visual processing of touch and its affective value occur in the human brain. The cognitive modulation was produced by word labels, 'Rich moisturizing cream' or 'Basic cream', while cream was being applied to the forearm, or was seen being applied to a forearm. The subjective pleasantness and richness were modulated by the word labels, as were the fMRI activations to touch in parietal cortex area 7, the insula and ventral striatum. The cognitive labels influenced the activations to the sight of touch and also the correlations with pleasantness in the pregenual cingulate/orbitofrontal cortex and ventral striatum. Further evidence of how the orbitofrontal cortex is involved in affective aspects of touch was that touch to the forearm [which has C fiber Touch (CT) afferents sensitive to light touch] compared with touch to the glabrous skin of the hand (which does not) revealed activation in the mid-orbitofrontal cortex. This is of interest as previous studies have suggested that the CT system is important in affiliative caress-like touch between individuals.
Abstract:
Synesthesia entails a special kind of sensory perception, where stimulation in one sensory modality leads to an internally generated perceptual experience of another, not stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed altered multimodal integration, thus suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task with visually or audio-visually presented animate and inanimate objects, in an audio-visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
Abstract:
In visual tracking experiments, distributions of the relative phase between target and tracer showed positive relative phase, indicating that the tracer precedes the target position. We found a mode transition from the reactive to the anticipatory mode. The proposed integrated model provides a framework for understanding human anticipatory behaviour, focusing on the integration of visual and somatosensory information. The time delays in visual processing and somatosensory feedback are explicitly treated in the simultaneous differential equations. The anticipatory behaviour observed in the visual tracking experiments can be explained by the feedforward term of target velocity, internal dynamics, and the time delay in somatosensory feedback.
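A minimal simulation sketch of this class of model (the structure, gains, and delays below are illustrative assumptions, not the paper's equations): a tracer driven by delayed visual error feedback, a delayed somatosensory state, and a feedforward term on target velocity; the sign of the best-matching time shift between tracer and target then indicates a reactive (lag) or anticipatory (lead) mode.

```python
import numpy as np

# Sketch only: first-order tracking model with a visual delay, a somatosensory
# feedback delay, and a feedforward term on (here undelayed) target velocity.
dt, T = 0.001, 10.0                     # time step and duration (s)
tau_vis, tau_som = 0.15, 0.05           # assumed visual / somatosensory delays (s)
k_p, k_v = 8.0, 1.0                     # assumed feedback / feedforward gains

n = int(T / dt)
t = np.arange(n) * dt
target = np.sin(2 * np.pi * 0.4 * t)    # 0.4 Hz sinusoidal target motion
d_vis, d_som = int(tau_vis / dt), int(tau_som / dt)

x = np.zeros(n)                         # tracer position
for i in range(1, n):
    visual_error = target[max(i - d_vis, 0)] - x[max(i - d_som, 0)]
    feedforward = (target[i] - target[i - 1]) / dt   # target velocity term
    v = k_p * visual_error + k_v * feedforward       # velocity command
    x[i] = x[i - 1] + v * dt

# Relative phase estimate: the shift of the tracer that best matches the target.
# A positive shift means the tracer precedes the target (anticipatory mode),
# a negative shift means it lags behind (reactive mode).
shifts = np.arange(-300, 301)           # +/- 300 ms in 1 ms steps
best = shifts[int(np.argmax([np.dot(np.roll(x, s), target) for s in shifts]))]
print(f"tracer-target shift: {best} ms")
```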
Abstract:
The aim of this study was to estimate the joint entropy of the human visual system in the space domain and in the spatial frequency domain by means of psychometric functions. These were obtained from discrimination tests with stimuli whose luminance or chromaticity was modulated by Gabor functions. The essence of the method was to estimate the entropy in the space domain by testing the subject's ability to discriminate stimuli that differed only in spatial extent, and the entropy in the spatial frequency domain by testing the subject's ability to discriminate stimuli that differed only in spatial frequency. The joint entropy was then calculated from these two individual entropy values. Three visual conditions were studied: achromatic, chromatic without fine correction for equiluminance, and chromatic with equiluminance correction by heterochromatic flicker photometry. Four subjects were tested in all three conditions, two additional subjects were tested in the chromatic condition without fine equiluminance, and a seventh subject also performed the achromatic test. All subjects were examined by an ophthalmologist and considered ophthalmologically normal, with no reports, symptoms, or signs of visual dysfunction or of diseases potentially capable of affecting the visual system. They had normal or corrected visual acuity of at least 20/30. The study was approved by the Research Ethics Committee of the Núcleo de Medicina Tropical, UFPA, and followed the recommendations of the Declaration of Helsinki. The Gabor functions used for luminance or chromaticity modulation comprised horizontal one-dimensional sinusoidal gratings, modulated in the vertical direction, within two-dimensional Gaussian envelopes whose spatial extent was measured by the standard deviation of the Gaussian. The stimuli were generated by a routine written in Pascal in a Delphi 7 Enterprise environment. A Dell Precision 390 Workstation and a CRS VSG ViSaGe stimulus generator were used to display the stimuli on a 20", 800 x 600 pixel, 120 Hz RGB Mitsubishi Diamond Pro 2070SB CRT. In the achromatic experiments, the stimuli were generated by luminance modulation of a white with chromaticity CIE1931 (x = 0.270; y = 0.280) or CIE1976 (u' = 0.186; v' = 0.433) and a mean luminance of 44.5 cd/m2. In the chromatic experiments, the mean luminance was kept at 15 cd/m2 and two series of green-red stimuli were used. The stimuli of one series were formed by two chromaticities defined on the M-L axis of the DKL color space (CIE1976: green, u' = 0.131, v' = 0.380; red, u' = 0.216, v' = 0.371). The stimuli of the other series were formed by two chromaticities defined along a horizontal green-red axis in the CIE1976 color space (green, u' = 0.150, v' = 0.480; red, u' = 0.255, v' = 0.480). The reference stimuli were composed of gratings of three different spatial frequencies (0.4, 2, and 10 cycles per degree) with a Gaussian envelope of 1 degree standard deviation. The test stimuli were composed of one of 19 different spatial frequencies around the reference spatial frequency and one of 21 different Gaussian envelopes with standard deviations around 1 degree. In the achromatic condition, four Michelson contrast levels were studied: 2%, 5%, 10%, and 100%.
In the two chromatic conditions, the highest pooled cone contrast allowed by the monitor gamut, 17%, was used. The experiment consisted of a two-interval forced choice whose testing procedure comprised the following sequence: i) presentation of a reference stimulus for 1 s; ii) replacement of the reference stimulus by an equiluminant background of the same chromaticity for 1 s; iii) presentation of the test stimulus, also for 1 s, differing from the reference stimulus either in spatial frequency or in spatial extent, with an auditory cue signalling that the subject had to report whether the test stimulus was the same as or different from the reference stimulus; iv) replacement of the test stimulus by the background. The spatial extent or the spatial frequency of the test stimulus was changed randomly from trial to trial using the method of constant stimuli. In one series of 300 trials the spatial frequency was varied; in another series, also of 300 trials, the spatial extent was varied, and each test stimulus in each series was presented at least 10 times. The subject's response in each trial was recorded as correct or incorrect for later construction of the psychometric curves. The experimental points of the psychometric functions for space and for spatial frequency at each contrast level, corresponding to the percentages of correct responses, were fitted with Gaussian functions by the least-squares method. For each contrast level, the entropies for space and spatial frequency were estimated from the standard deviations of these Gaussian functions, and the joint entropy was obtained by multiplying the square root of the entropy for space by the entropy for spatial frequency. The joint entropy values were compared with the theoretical minimum for linear systems, 1/4π or 0.0796. For low and intermediate spatial frequencies, the joint entropy reached levels below the theoretical minimum at high contrasts, suggesting nonlinear interactions between two or more visual mechanisms. This phenomenon occurred in all conditions (achromatic, chromatic, and equiluminant chromatic) and was most pronounced at the spatial frequency of 0.4 cycles per degree. A possible explanation for this phenomenon is a nonlinear interaction among the retino-geniculo-striate visual pathways, such as the K, M, and P pathways, in the primary visual area or at higher levels of neural processing.
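A minimal sketch of the final analysis step (not the thesis code: the data below are synthetic, and the joint estimate here is taken simply as the product of the two fitted standard deviations, the quantity bounded by 1/(4π) for linear filters).

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch only: synthetic Gaussian-shaped psychometric data centred on an
# assumed reference of 1 deg spatial extent and 0.4 cycles per degree.
def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

rng = np.random.default_rng(0)
extents = np.linspace(0.5, 1.5, 21)          # test spatial extents (deg)
freqs = np.linspace(0.2, 0.6, 19)            # test spatial frequencies (cpd)
pc_extent = gaussian(extents, 0.9, 1.0, 0.12) + 0.02 * rng.standard_normal(21)
pc_freq = gaussian(freqs, 0.9, 0.4, 0.05) + 0.02 * rng.standard_normal(19)

# Least-squares Gaussian fits; the fitted sigmas serve as the entropy estimates.
(_, _, sigma_x), _ = curve_fit(gaussian, extents, pc_extent, p0=[1.0, 1.0, 0.1])
(_, _, sigma_f), _ = curve_fit(gaussian, freqs, pc_freq, p0=[1.0, 0.4, 0.05])

joint = abs(sigma_x) * abs(sigma_f)          # joint estimate (deg x cpd)
print(f"joint entropy ~ {joint:.4f}; linear-system minimum 1/(4*pi) = {1/(4*np.pi):.4f}")
```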
Abstract:
The processing of orientations is at the core of our visual experience. Orientation selectivity in human visual cortex has been inferred from psychophysical experiments and more recently demonstrated with functional magnetic resonance imaging (fMRI). One method to identify orientation-selective responses is fMRI adaptation, in which two stimuli—either with the same or with different orientations—are presented successively. A region containing orientation-selective neurons should demonstrate an adapted response to the “same orientation” condition in contrast to the “different orientation” condition. So far, human primary visual cortex (V1) showed orientation-selective fMRI adaptation only in experimental designs using prolonged pre-adaptation periods (∼40 s) in combination with top-up stimuli that are thought to maintain the adapted level. This finding has led to the notion that orientation-selective short-term adaptation in V1 (but not V2 or V3) cannot be demonstrated using fMRI. The present study aimed at re-evaluating this question by testing three differently timed adaptation designs. With the use of a more sensitive analysis technique, we show robust orientation-selective fMRI adaptation in V1 evoked by a short-term adaptation design.
Abstract:
OBJECTIVE: To investigate whether autistic subjects show a different pattern of neural activity than healthy individuals during processing of faces and complex patterns. METHODS: Blood oxygen level-dependent (BOLD) signal changes accompanying visual processing of faces and complex patterns were analyzed in an autistic group (n = 7; 25.3 [6.9] years) and a control group (n = 7; 27.7 [7.8] years). RESULTS: Compared with unaffected subjects, autistic subjects demonstrated lower BOLD signals in the fusiform gyrus, most prominently during face processing, and higher signals in the more object-related medial occipital gyrus. Further signal increases in autistic subjects vs controls were found in regions highly important for visual search: the superior parietal lobule and the medial frontal gyrus, where the frontal eye fields are located. CONCLUSIONS: The cortical activation pattern during face processing indicates deficits in the face-specific regions, with higher activations in regions involved in visual search. These findings reflect different strategies for visual processing, supporting models that propose a predisposition to local rather than global modes of information processing in autism.