925 results for Human visual processing
Abstract:
The reading of printed materials implies the visual processing of information originating in two distinct semiotic systems. The rapid identification of rhetorical strategies of redundancy, complementation, or contradiction between the two types of information may be crucial for an adequate interpretation of bimodal materials. Hybrid texts (verbal and visual) are particular instances of bimodal materials, where redundant information is often neglected while complementary and contradictory information is essential. Studies using the ASL 504 eye-tracking system during the reading of either additive or exhibiting captions (Baptista, 2009) revealed fixations on the verbal material, and transitions between the written and the pictorial material, in far greater number and of longer duration than initially foreseen as necessary to read the verbal text. We therefore hypothesized that confirmation strategies for the written information take place, using information available in the other semiotic system. Such eye-gaze patterns obtained from denotative texts and pictures seem to contradict some of the scarce existing data on the visual processing of texts and images, namely cartoons (Carroll, Young and Guertain, 1992), descriptive captions (Hegarty, 1992a and 1992b), and advertising images with descriptive and explanatory texts (cf. Rayner and Rotello, 2001, who refer to a reading of the whole text before looking at the image, or Rayner, Miller and Rotello, 2008, who refer to an earlier and longer look at the picture), and they seem to consolidate the findings of Radach et al. (2003) on systematic transitions between text and image. By framing interest areas in the printed pictorial material of non-redundant hybrid texts, we have identified the specific areas where transitions take place after fixations in the verbal text. The way those transitions are processed opens new questions for further research.
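For illustration, a minimal sketch of the kind of interest-area analysis described above: fixations are assigned to text or picture areas of interest (AOIs), dwell time is accumulated per AOI, and text-picture transitions are counted. The AOI rectangles, data layout, and function names are hypothetical, not the study's actual processing pipeline.

```python
# Hypothetical interest-area (AOI) summary for eye-tracking data (illustrative only).
from typing import List, Tuple

AOIS = {
    "text":    (0, 0, 800, 300),      # (x_min, y_min, x_max, y_max) in pixels; assumed layout
    "picture": (0, 300, 800, 900),
}

def aoi_of(x: float, y: float) -> str:
    """Return the name of the AOI containing a fixation, or 'outside'."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "outside"

def summarize(fixations: List[Tuple[float, float, float]]):
    """fixations: (x, y, duration_ms). Returns per-AOI dwell time and text-picture transition count."""
    dwell = {"text": 0.0, "picture": 0.0, "outside": 0.0}
    transitions = 0
    prev = None
    for x, y, dur in fixations:
        current = aoi_of(x, y)
        dwell[current] += dur
        if prev in ("text", "picture") and current in ("text", "picture") and current != prev:
            transitions += 1
        prev = current
    return dwell, transitions

# Example: two fixations on the text, one on the picture, one back on the text.
print(summarize([(100, 150, 220), (400, 200, 180), (300, 600, 350), (150, 120, 200)]))
```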
Abstract:
Do we view the world differently if it is described to us in figurative rather than literal terms? An answer to this question would reveal something about both the conceptual representation of figurative language and the scope of top-down influences on scene perception. Previous work has shown that participants will look longer at a path region of a picture when it is described with a type of figurative language called fictive motion (The road goes through the desert) rather than without (The road is in the desert). The current experiment provided evidence that such fictive motion descriptions affect eye movements by evoking mental representations of motion. If participants heard contextual information that would hinder actual motion, it influenced how they viewed a picture when it was described with fictive motion. Inspection times and eye movements scanning along the path increased during fictive motion descriptions when the terrain was first described as difficult (The desert is hilly) as compared to easy (The desert is flat); there were no such effects for descriptions without fictive motion. It is argued that fictive motion evokes a mental simulation of motion that is immediately integrated with visual processing, and hence figurative language can have a distinct effect on perception. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Using fMRI, we examined the neural correlates of maternal responsiveness. Ten healthy mothers viewed alternating blocks of video: (i) 40 s of their own infant; (ii) 20 s of a neutral video; (iii) 40 s of an unknown infant and (iv) 20 s of neutral video, repeated 4 times. Predominant BOLD signal change occurred, for the infants minus neutral stimulus contrast, in bilateral visual processing regions; for the own infant minus unknown infant contrast, in the temporal pole (BA 38), left amygdala and visual cortex (BA 19); and for the unknown infant minus own infant contrast, in bilateral orbitofrontal cortex (BA 10, 47) and medial prefrontal cortex (BA 8). These findings suggest that the amygdala and temporal pole may be key sites in mediating a mother's response to her infant and reaffirm their importance in face emotion processing and social behaviour.
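For illustration, a minimal sketch of how the block design and contrasts described above could be encoded as regressors; the 1 s sampling grid, the gamma-shaped HRF approximation, and all variable names are assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np

# Hypothetical encoding of the block design (illustrative only).
# One cycle: 40 s own infant, 20 s neutral, 40 s unknown infant, 20 s neutral; 4 cycles.
TR = 1.0                                              # assumed sampling interval (s)
cycle = [("own", 40), ("neutral", 20), ("unknown", 40), ("neutral", 20)]
conditions = {"own": [], "neutral": [], "unknown": []}
t = 0.0
for _ in range(4):
    for name, dur in cycle:
        conditions[name].append((t, dur))
        t += dur
n_scans = int(t / TR)

def boxcar(events):
    """Build a 0/1 regressor from (onset, duration) pairs."""
    x = np.zeros(n_scans)
    for onset, dur in events:
        x[int(onset / TR):int((onset + dur) / TR)] = 1.0
    return x

# Simple gamma-shaped HRF approximation, for illustration only.
ht = np.arange(0, 30, TR)
hrf = (ht ** 8.6) * np.exp(-ht / 0.547)
hrf /= hrf.sum()

# Design matrix with columns own, unknown, neutral (boxcars convolved with the HRF).
X = np.column_stack([np.convolve(boxcar(conditions[c]), hrf)[:n_scans]
                     for c in ("own", "unknown", "neutral")])

# Contrast vectors analogous to those reported.
c_infants_vs_neutral = np.array([0.5, 0.5, -1.0])
c_unknown_vs_own     = np.array([-1.0, 1.0, 0.0])
```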
Abstract:
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved [1, 2]. Previous evidence shows that the human visual system accounts for the distance the observer has walked [3,4] and the separation of the eyes [5-8] when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
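A minimal sketch of the reliability-weighted cue combination that such a Bayesian model implies, assuming Gaussian likelihoods for the disparity and motion (distance-walked) cues; the noise parameters and their scaling with viewing distance are illustrative assumptions, not values fitted in the study.

```python
# Illustrative Bayesian (reliability-weighted) combination of two size cues.
def integrate_cues(size_from_disparity, size_from_motion, viewing_distance_m):
    # Assume both cues lose reliability with viewing distance (disparity faster),
    # so their efficacy is greater at near distances, as in the model described above.
    var_disparity = (0.05 * viewing_distance_m ** 2) ** 2
    var_motion    = (0.10 * viewing_distance_m) ** 2
    w_disp = 1.0 / var_disparity
    w_mot  = 1.0 / var_motion
    estimate = (w_disp * size_from_disparity + w_mot * size_from_motion) / (w_disp + w_mot)
    combined_var = 1.0 / (w_disp + w_mot)
    return estimate, combined_var

# Example: conflicting size estimates are pulled toward the currently more reliable cue,
# and the weighting shifts as viewing distance changes.
print(integrate_cues(size_from_disparity=1.0, size_from_motion=1.5, viewing_distance_m=0.5))
print(integrate_cues(size_from_disparity=1.0, size_from_motion=1.5, viewing_distance_m=3.0))
```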
Abstract:
Within the context of active vision, scant attention has been paid to the execution of motion saccades, rapid re-adjustments of the direction of gaze to attend to moving objects. In this paper we first develop a methodology for, and give real-time demonstrations of, the use of motion detection and segmentation processes to initiate capture saccades towards a moving object. The saccade is driven by both the position and the velocity of the moving target under the assumption of constant target velocity, using prediction to overcome the delay introduced by visual processing. We next demonstrate the use of a first-order approximation to the segmented motion field to compute bounds on the time-to-contact in the presence of looming motion. If the bound falls below a safe limit, a panic saccade is fired, moving the camera away from the approaching object. We then describe the use of image motion to realize smooth pursuit, tracking using velocity information alone, where the camera is moved so as to null a single constant image motion fitted within a central image region. Finally, we glue together capture saccades with smooth pursuit, thus effecting changes both in what is being attended to and in how it is being attended to. To couple the different visual activities of waiting, saccading, pursuing and panicking, we use a finite state machine, which provides inherent robustness outside of visual processing and a means of repeated exploration. We demonstrate in repeated trials that the transition from saccadic motion to tracking is more likely to succeed using position and velocity control than when using position alone.
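As a sketch of how such behaviours might be coupled, the following hypothetical finite state machine switches between waiting, capture saccades, smooth pursuit and panic saccades; the state names, observation fields and safety threshold are illustrative, not the authors' implementation.

```python
# Hypothetical finite state machine coupling the visual behaviours described above.
from dataclasses import dataclass

@dataclass
class Observation:
    motion_detected: bool      # segmentation found a moving object
    target_position: tuple     # predicted image position of the target
    target_velocity: tuple     # estimated image velocity of the target
    time_to_contact: float     # lower bound from the first-order motion field (s)
    tracking_lost: bool        # pursuit can no longer null the image motion

SAFE_TTC = 1.0                 # assumed safety limit in seconds

def step(state, obs):
    if obs.time_to_contact < SAFE_TTC:
        return "PANIC_SACCADE"                 # move the camera away from the looming object
    if state == "WAIT":
        return "CAPTURE_SACCADE" if obs.motion_detected else "WAIT"
    if state == "CAPTURE_SACCADE":
        # Saccade driven by predicted position and velocity to absorb the processing delay,
        # then hand over to smooth pursuit.
        return "PURSUIT"
    if state == "PURSUIT":
        return "WAIT" if obs.tracking_lost else "PURSUIT"
    if state == "PANIC_SACCADE":
        return "WAIT"                          # resume exploration after avoidance
    return "WAIT"

# Example: a detected moving object with a safe time-to-contact triggers a capture saccade.
obs = Observation(True, (120, 80), (5.0, -2.0), 3.5, False)
print(step("WAIT", obs))
```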
Abstract:
A wealth of literature suggests that emotional faces are given special status as visual objects: Cognitive models suggest that emotional stimuli, particularly threat-relevant facial expressions such as fear and anger, are prioritized in visual processing and may be identified by a subcortical “quick and dirty” pathway in the absence of awareness (Tamietto & de Gelder, 2010). Both neuroimaging studies (Williams, Morris, McGlone, Abbott, & Mattingley, 2004) and backward masking studies (Whalen, Rauch, Etcoff, McInerney, & Lee, 1998) have supported the notion of emotion processing without awareness. Recently, our own group (Adams, Gray, Garner, & Graf, 2010) showed adaptation to emotional faces that were rendered invisible using a variant of binocular rivalry: continuous flash suppression (CFS; Tsuchiya & Koch, 2005). Here we (i) respond to Yang, Hong, and Blake's (2010) criticisms of our adaptation paper and (ii) provide a unified account of adaptation to facial expression, identity, and gender under conditions of unawareness.
Abstract:
The loss of motor function at the elbow joint can result as a consequence of stroke. Stroke is a clinical illness resulting in long-lasting neurological deficits, often affecting the somatosensory and motor cortices. More than half of those who survive a stroke are left with disability in their upper arm and need rehabilitation therapy to help them regain functions of daily living. In this paper, we demonstrated a prototype of a low-cost, ultra-light and wearable soft robotic assistive device that could aid the administration of elbow motion therapies to stroke patients. To assist the rotation of the elbow joint, the soft module, which consists of soft wedge-like cellular units, was inflated with air to produce torque at the elbow joint. Highly compliant rotation can be naturally realised by the elastic property of the soft silicone and the pneumatic control of air. In the direct visual-actuation control scheme, a higher control loop utilised visual processing to apply positional control, while a lower control loop was implemented by an electronic circuit to achieve the desired pressure in the soft modules by pulse-width modulation. To examine the functionality of the proposed soft modular system, we used an anatomical model of the upper limb and performed experiments with healthy participants.
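A minimal sketch of the two-level control idea described above, assuming a vision-derived elbow angle feeds the outer positional loop and a measured module pressure feeds the inner PWM loop; the gains, units and function names are hypothetical, not the device's actual controller.

```python
# Illustrative two-level control loop for a pneumatic soft elbow module.
def outer_loop(target_angle_deg, measured_angle_deg, k_angle_to_kpa=2.0):
    """Positional control from visual processing: elbow-angle error -> target pressure (kPa)."""
    error = target_angle_deg - measured_angle_deg
    return max(0.0, k_angle_to_kpa * error)

def inner_loop(target_pressure_kpa, measured_pressure_kpa, k_p=0.05):
    """Pressure control: proportional term mapped to a PWM duty cycle in [0, 1]."""
    duty = k_p * (target_pressure_kpa - measured_pressure_kpa)
    return min(1.0, max(0.0, duty))

# Example: the elbow lags the commanded flexion, so the module is pressurised further.
target_p = outer_loop(target_angle_deg=90.0, measured_angle_deg=60.0)
duty_cycle = inner_loop(target_p, measured_pressure_kpa=30.0)
print(target_p, duty_cycle)
```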
Abstract:
Electrical coupling provided by connexins (Cx) in gap junctions (GJ) plays important roles in both the developing and the mature retina. In mammalian nocturnal species, Cx36 is an essential component in the rod pathway, the retinal circuit specialized for night, scotopic vision. Here, we report the expression of Cx36 in a species (Gallus gallus) whose phylogenetic development has endowed it with an essentially rodless retina. The Cx36 gene is very highly expressed in comparison with other Cxs previously described in the adult retina, such as Cx43, Cx45, and Cx50. Moreover, real-time PCR, Western blot, and immunofluorescence all revealed that Cx36 expression increased massively over time during development. We thoroughly examined Cx36 in the inner and outer plexiform layers, where this protein was particularly abundant. Cx36 was observed mainly in the off sublamina of the inner plexiform layer rather than in the on sublamina previously described in the mammalian retina. In addition, Cx36 colocalized with specific cell markers, revealing the expression of this protein in distinct amacrine cells. To further investigate the involvement of Cx36 in visual processing, we examined its functional regulation in retinas from dark-adapted animals. Light deprivation markedly up-regulates Cx36 gene expression in the retina, resulting in an increased accumulation of the protein within and between cone synaptic terminals. In summary, the developmental regulation of Cx36 expression results in particular circuitry-related roles in the chick retina. Moreover, this study demonstrated that Cx36 onto- and phylogenesis in the vertebrate retina simultaneously exhibit similarities and particularities. J. Comp. Neurol. 512:651-663, 2009. (C) 2008 Wiley-Liss, Inc.
Abstract:
Gap junction (GJ) channels couple adjacent cells, allowing the transfer of second messengers, ions, and molecules up to 1 kDa. These channels are composed of integral membrane proteins called connexins (Cx), which are encoded by a multigene family. In the retina, besides being essential circuit elements in visual processing, GJ channels also play important roles during its development. Herein, we analyzed Cx43, Cx45, Cx50, and Cx56 expression during chick retinal histogenesis. The Cxs exhibited distinct expression profiles during retinal development, except for Cx56, whose expression was not detected. Cx43 immunolabeling was observed early in development, at the transition between the ventricular zone and the pigmented epithelium. Later, Cx43 was seen in the outer plexiform and ganglion cell layers, and afterwards also in the inner plexiform layer. We observed remarkable changes in the phosphorylation status of this protein, indicating modifications in its functional properties during retinal histogenesis. By contrast, Cx45 showed stable gene expression levels throughout development and ubiquitous immunoreactivity in progenitor cells. From late embryonic development onwards, Cx45 was mainly observed in the inner retina, where it was expressed by glial cells and neurons. In turn, Cx50 was virtually absent from the chick retina at the initial embryonic phases. The combination of PCR, immunohistochemistry and Western blot indicated that this Cx was present in differentiated cells, arising in parallel with the formation of the visual circuitry. Characterization of Cx expression in the developing chick retina indicated particular roles for these proteins and revealed similarities and differences compared to other species. (C) 2008 Wiley Periodicals, Inc.
Abstract:
The present study aimed to analyze the gene and protein expression and the pattern of distribution of the vanilloid receptors TRPV1 and TRPV2 in the developing rat retina. During the early phases of development, TRPV1 was found mainly in the neuroblastic layer of the retina and in the pigmented epithelium. In the adult, TRPV1 was found in microglial cells, blood vessels, astrocytes and in neuronal structures, namely synaptic boutons of both retinal plexiform layers, as well as in cell bodies of the inner nuclear layer and the ganglion cell layer. The pattern of distribution of TRPV1 was mainly punctate, and TRPV1 labeling was higher in the peripheral retina than in central regions. TRPV2 expression was quite distinct: its expression was virtually undetectable by immunoblotting before P1, and the receptor was detected by immunohistochemistry only by postnatal day 15 (P15). RNA and protein analyses showed that adult levels are only reached by P60; the adult pattern includes small processes in the retinal plexiform layers and labeled cell bodies in the inner nuclear layer and the ganglion cell layer. There was no overlap between the signals observed for the two receptors. In conclusion, our results showed that the patterns of distribution of TRPV1 and TRPV2 differ during the development of the rat retina, suggesting that they have specific roles both in visual processing and in providing specific cues for neural development. (C) 2009 ISDN. Published by Elsevier Ltd. All rights reserved.
Abstract:
Processing in the visual system starts in the retina. Its complex network of cells with different properties enables parallel encoding and transmission of visual information to the lateral geniculate nucleus (LGN) and to the cortex. In the retina, it has been shown that responses are often accompanied by fast synchronous oscillations (30-90 Hz) in a stimulus-dependent manner. Studies in the frog, rabbit, cat and monkey have shown strong oscillatory responses to large stimuli, which probably encode global stimulus properties such as size and continuity (Neuenschwander and Singer, 1996; Ishikane et al., 2005). Moreover, simultaneous recordings from different levels of the visual system have demonstrated that the oscillatory patterning of retinal ganglion cell responses is transmitted to the cortex via the LGN (Castelo-Branco et al., 1998). Overall, these results suggest that feedforward synchronous oscillations contribute to visual encoding. In the present study on the LGN of the anesthetized cat, we further investigated the role of retinal oscillations in visual processing by applying complex stimuli, such as natural visual scenes, light spots of varying size and contrast, and flickering checkerboards. This is a necessary step for understanding encoding mechanisms in more naturalistic conditions, as most data on retinal oscillations have so far been limited to simple, flashed and stationary stimuli. Correlation analysis of spiking responses confirmed previous results showing that oscillatory responses in the retina (observed here from the LGN responses) largely depend on the size and stationarity of the stimulus. For natural scenes (gray-level and binary movies), oscillations appeared only for brief moments, probably when receptive fields were dominated by large, continuous, flat-contrast surfaces. Moreover, oscillatory responses to a circle stimulus could be broken with an annular mask, indicating that synchronization arises from relatively local interactions among populations of activated cells in the retina. A surprising finding in this study was that retinal oscillations are highly dependent on halothane anesthesia levels. In the absence of halothane, oscillatory activity vanished, independent of the characteristics of the stimuli. The same results were obtained for isoflurane, which has similar pharmacological properties. These new and unexpected findings raise the question of whether feedforward oscillations in the early visual system are simply due to an imbalance between excitation and inhibition in the retinal networks generated by the halogenated anesthetics. Further studies in awake behaving animals are necessary to extend these conclusions.
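A minimal sketch of the kind of correlation analysis of spiking responses mentioned above: an autocorrelogram of a spike train, in which 30-90 Hz oscillations appear as side peaks at multiples of the oscillation period. The function, bin size and the synthetic 60 Hz example are assumptions for illustration, not the study's actual analysis code.

```python
import numpy as np

# Illustrative autocorrelogram of a spike train (times in seconds).
def autocorrelogram(spike_times_s, bin_ms=1.0, window_ms=100.0):
    bins = np.arange(-window_ms, window_ms + bin_ms, bin_ms)
    counts = np.zeros(len(bins) - 1)
    for t in spike_times_s:
        lags_ms = (spike_times_s - t) * 1000.0
        lags_ms = lags_ms[(lags_ms != 0) & (np.abs(lags_ms) <= window_ms)]
        counts += np.histogram(lags_ms, bins=bins)[0]
    return bins[:-1] + bin_ms / 2.0, counts

# Example: a synthetic 60 Hz modulated spike train produces side peaks near +/- 16.7 ms.
rng = np.random.default_rng(0)
t = np.arange(0, 5, 0.001)
rate = 30 * (1 + np.sin(2 * np.pi * 60 * t))          # instantaneous rate in spikes/s
spikes = t[rng.random(t.size) < rate * 0.001]
lags, counts = autocorrelogram(spikes)
```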
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The study aims to characterize and compare the cognitive-linguistic performance of schoolchildren with learning disabilities and schoolchildren with good academic performance. Forty schoolchildren aged 8 to 12 years, attending the 2nd to 4th grades of elementary school in municipal schools in the city of Marília, SP, participated and were divided into GI (schoolchildren with good academic performance) and GII (schoolchildren with learning disabilities). The procedure used was the Cognitive-Linguistic Performance Test (Teste de Desempenho Cognitivo-Linguístico), in its collective and individual versions. The results showed that GI outperformed GII in reading, writing, processing speed, and auditory and visual processing skills. It was concluded that the lower performance of GII in these skills indicates a limitation in the linguistic performance of these schoolchildren compared with those in GI, except for phonological awareness, in which the groups showed similar difficulties, suggesting that this difficulty is not specific to schoolchildren with learning disabilities.
Abstract:
Fifty-four extracted human mandibular molars were embedded and sectioned at two levels. The reassembled mesial root canals were prepared with stainless-steel hand K-files (Flexofiles) and either Nitiflex or Mity nickel-titanium hand K-files, using a push-pull anticurvature filing technique. Each of the three experimental groups contained 36 randomly distributed mesial canals. Superimposed pre- and postinstrumentation cross-sectional root images were magnified using a stereomicroscope and transferred to a computer for measurement and statistical analysis. The direction and extent of canal center movement were evaluated. At the apical level, the groups showed no significant difference in the direction of canal center movement. In cervical sections, the canal centers in all groups tended to move in a distolingual direction. The three groups, however, showed no significant difference in the extent of canal center movement in the cervical sections. In apical sections, Nitiflex produced the least canal center movement. Copyright © 1999 by The American Association of Endodontists.
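For illustration, a brief sketch of how the direction and extent of canal center movement could be computed from digitized pre- and post-instrumentation center coordinates; the coordinates, units and reference axis are hypothetical, not the study's measurement software.

```python
import math

# Illustrative quantification of canal center movement from superimposed section images:
# extent is the Euclidean distance between the two centers, direction is the angle of the
# displacement vector relative to an assumed reference axis in the image plane.
def canal_center_movement(pre_center, post_center):
    dx = post_center[0] - pre_center[0]
    dy = post_center[1] - pre_center[1]
    extent_mm = math.hypot(dx, dy)
    direction_deg = math.degrees(math.atan2(dy, dx))  # 0 deg along the reference x-axis
    return extent_mm, direction_deg

# Example with hypothetical coordinates (mm) digitized from a cervical section.
print(canal_center_movement(pre_center=(1.20, 0.85), post_center=(1.05, 0.62)))
```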
Abstract:
The orbitofrontal cortex (OFC) is a heterogeneous prefrontal sector selectively connected with a wide constellation of other prefrontal, limbic, sensory and premotor areas. Among the limbic cortical connections, those with the hippocampus and parahippocampal cortex are particularly salient. Sensory cortices connected with the OFC include areas involved in olfactory, gustatory, somatosensory, auditory and visual processing. Subcortical structures with prominent OFC connections include the amygdala, numerous thalamic nuclei, the striatum, hypothalamus, periaqueductal gray matter, and biochemically specific cell groups in the basal forebrain and brainstem. Architectonic and connectional evidence supports parcellation of the OFC. The rostrally placed isocortical sector is mainly connected with isocortical areas, including sensory areas of the auditory, somatic and visual modalities, whereas the caudal non-isocortical sector is principally connected with non-isocortical areas and, in the sensory domain, with olfactory and gustatory areas. The connections of the isocortical and non-isocortical orbital sectors with the amygdala, thalamus, striatum, hypothalamus and periaqueductal gray matter are also specific. The medial sector of the OFC is selectively connected with the hippocampus, posterior parahippocampal cortex, posterior cingulate and retrosplenial areas, and area prostriata, while the lateral orbitofrontal sector is the most heavily connected with sensory areas of the gustatory, somatic and visual modalities, with premotor regions, and with the amygdala.