992 results for Perceptual learning


Relevance:

60.00%

Publisher:

Abstract:

This paper explores the situated body by briefly surveying the historical studies of effect and of affect that converge in current work on attention. This common approach to the situated body through attention prompted the coining of a more inclusive term, Æffect, to indicate the situated body’s mode of observation. Examples from the work of the artists-turned-architects Arakawa and Gins will be discussed to show how architectural environments can act as heuristic tools that allow the situated body to research its own conditions. Rather than isolating effect from affect, observer from subject, organism from environment, Arakawa and Gins’ work optimises the use of situated complexity in the study of the site of person. By constructing surroundings in which to observe and learn about the shape of awareness, their procedural architecture suggests ways in which the interaction of top-down conceptual knowledge and bottom-up perceptual learning may construct possibilities in emergent rather than programmatic ways.

Relevance:

60.00%

Publisher:

Abstract:

Wild bearded capuchin monkeys, Cebus libidinosus, use stone tools to crack palm nuts to obtain the kernel. In five experiments, we gave 10 monkeys from one wild group of bearded capuchins a choice of two nuts differing in resistance and size and/or two manufactured stones of the same shape, volume and composition but different mass. Monkeys consistently selected the nut that was easier to crack and the heavier stone. When choosing between two stones differing in mass by a ratio of 1.3:1, monkeys frequently touched the stones or tapped them with their fingers or with a nut. They showed these behaviours more frequently before making their first selection of a stone than afterward. These results suggest that capuchins discriminate between nuts and between stones, selecting materials that allow them to crack nuts with fewer strikes, and generate exploratory behaviours to discriminate stones of varying mass. In the final experiment, humans effectively discriminated the mass of stones using the same tapping and handling behaviours as capuchins. Capuchins explore objects in ways that allow them to perceive invariant properties (e.g. mass) of objects, enabling selection of objects for specific uses. We predict that species that use tools will generate behaviours that reveal invariant properties of objects such as mass; species that do not use tools are less likely to explore objects in this way. The precision with which individuals can judge invariant properties may differ considerably, and this too should predict the prevalence of tool use across species.

Relevance:

60.00%

Publisher:

Abstract:

The human brain is equipped with a flexible audio-visual system, which interprets and guides responses to external events according to the spatial alignment, temporal synchronization and effectiveness of unimodal signals. The aim of the present thesis was to explore the possibility that such a system might represent the neural correlate of sensory compensation after damage to one sensory pathway. To this purpose, three experimental studies were conducted, which addressed the immediate, short-term and long-term effects of audio-visual integration on patients with Visual Field Defect (VFD). Experiment 1 investigated whether the integration of stimuli from different modalities (cross-modal) and from the same modality (within-modal) has a different, immediate effect on localization behaviour. Patients had to localize modality-specific stimuli (visual or auditory), cross-modal stimulus pairs (visual-auditory) and within-modal stimulus pairs (visual-visual). Results showed that cross-modal stimuli evoked a greater improvement than within-modal stimuli, consistent with a Bayesian explanation. Moreover, even when visual processing was impaired, cross-modal stimuli improved performance in an optimal fashion. These findings support the hypothesis that the improvement derived from multisensory integration is not attributable to simple target redundancy, and show that optimal integration of cross-modal signals occurs at processing stages that are not consciously accessible. Experiment 2 examined the possibility of inducing a short-term improvement of localization performance without explicit knowledge of the visual stimulus. Patients with VFD and patients with neglect had to localize weak sounds before and after a brief exposure to passive cross-modal stimulation, which comprised spatially disparate or spatially coincident audio-visual stimuli. After exposure to spatially disparate stimuli in the affected field, only patients with neglect exhibited a shift of auditory localization toward the visual attractor (the so-called Ventriloquism After-Effect). In contrast, after adaptation to spatially coincident stimuli, both neglect and hemianopic patients exhibited a significant improvement of auditory localization, demonstrating the occurrence of an After-Effect for multisensory enhancement. These results suggest the presence of two distinct recalibration mechanisms, each mediated by a different neural route: a geniculo-striate circuit and a colliculus-extrastriate circuit, respectively. Finally, Experiment 3 verified whether systematic audio-visual stimulation could exert a long-lasting effect on patients’ oculomotor behaviour. Eye-movement responses during a visual search task and a reading task were studied before and after visual (control) or audio-visual (experimental) training in a group of twelve patients with VFD and twelve control subjects. Results showed that, prior to treatment, patients’ performance was significantly different from that of controls with respect to fixation and saccade parameters; after audio-visual training, all patients showed an improvement in ocular exploration characterized by fewer fixations and refixations, quicker and larger saccades, and reduced scanpath length. Similarly, reading parameters were significantly affected by the training, with respect to the specific impairments observed in left- and right-hemisphere-damaged patients.
The present findings provide evidence that systematic audio-visual stimulation may encourage a more organized pattern of visual exploration with long-lasting effects. In conclusion, results from these studies clearly demonstrate that the beneficial effects of audio-visual integration can be retained in the absence of explicit processing of the visual stimulus. Surprisingly, an improvement of spatial orienting can be obtained not only when an on-line response is required, but also after either brief or prolonged adaptation to audio-visual stimulus pairs, suggesting the maintenance of mechanisms subserving cross-modal perceptual learning after damage to the geniculo-striate pathway. The colliculus-extrastriate pathway, which is spared in patients with VFD, seems to play a pivotal role in this sensory compensation.
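
For context, the "optimal fashion" reported for Experiment 1 is conventionally benchmarked against maximum-likelihood cue combination, in which each unimodal estimate is weighted by its reliability. The formulation below is the standard one from the multisensory-integration literature and is offered only as a sketch; it is not spelled out in the thesis abstract itself:

\hat{s}_{AV} = w_A\,\hat{s}_A + w_V\,\hat{s}_V, \qquad w_i = \frac{1/\sigma_i^{2}}{1/\sigma_A^{2} + 1/\sigma_V^{2}}, \qquad \sigma_{AV}^{2} = \frac{\sigma_A^{2}\,\sigma_V^{2}}{\sigma_A^{2} + \sigma_V^{2}} \le \min\!\big(\sigma_A^{2}, \sigma_V^{2}\big).

Under this rule the bimodal localization estimate is never less reliable than the better unimodal estimate, which is the criterion against which the patients' cross-modal improvement can be judged optimal.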

Relevance:

60.00%

Publisher:

Abstract:

There is a range of tempos within which listeners can identify familiar tunes (around 0.8 to 6.0 notes/s). Faster and slower tunes are difficult to identify. The authors assessed fast and slow melody-identification thresholds for 80 listeners ages 17–79 years with expertise varying from musically untrained to professional. On fast-to-slow (FS) trials the tune started at a very fast tempo and slowed until the listener identified it. Slow-to-fast (SF) trials started slow and accelerated. Tunes either retained their natural rhythms or were stylized isochronous versions. Increased expertise led to better performance for both FS and SF thresholds (r = .45). Performance declined uniformly across the 62-year age range in the FS condition (r = .27). SF performance was unaffected by age. Although early encoding processes may slow with age, expertise has a greater effect. Musical expertise involves perceptual learning with melodies at a wide range of tempos.

Relevance:

60.00%

Publisher:

Abstract:

In typical perceptual learning experiments, one stimulus type (e.g., a bisection stimulus offset either to the left or right) is presented per trial. In roving, two different stimulus types (e.g., a 30′ and a 20′ wide bisection stimulus) are randomly interleaved from trial to trial. Roving can impair both perceptual learning and task sensitivity. Here, we investigate the relationship between the two. Using a bisection task, we found no effect of roving before training. We next trained subjects and they improved. A roving condition applied after training impaired sensitivity.

Relevance:

60.00%

Publisher:

Abstract:

We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform and asked to indicate the direction of motion. A total of eleven participants underwent 3,360 practice trials, distributed over twelve days (Experiment 1) or six days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising, since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvements in the same task were found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation for the absence of perceptual learning in darkness.

Relevance:

60.00%

Publisher:

Abstract:

Recent studies show that neuronal mechanisms for learning and memory both dynamically modulate and permanently alter the representations of visual stimuli in the adult monkey cortex. Three commonly observed neuronal effects in memory-demanding tasks are repetition suppression, enhancement, and delay activity. In repetition suppression, repeated experience with the same visual stimulus leads to both short- and long-term suppression of neuronal responses in subpopulations of visual neurons. Enhancement works in an opposite fashion, in that neuronal responses are enhanced for objects with learned behavioral relevance. Delay activity is found in tasks in which animals are required to actively hold specific information “on-line” for short periods. Repetition suppression appears to be an intrinsic property of visual cortical areas such as inferior temporal cortex and is thought to be important for perceptual learning and priming. By contrast, enhancement and delay activity may depend on feedback to temporal cortex from prefrontal cortex and are thought to be important for working memory. All of these mnemonic effects on neuronal responses bias the competitive interactions that take place between stimulus representations in the cortex when there is more than one stimulus in the visual field. As a result, memory will often determine the winner of these competitions and, thus, will determine which stimulus is attended.

Relevance:

60.00%

Publisher:

Abstract:

Amblyopia is a neuronal abnormality of vision that is often considered irreversible in adults. We found strong and significant improvement of Vernier acuity in human adults with naturally occurring amblyopia following practice. Learning was strongest at the trained orientation and did not transfer to an untrained task (detection), but it did transfer partially to the untrained eye (primarily at the trained orientation). We conclude that this perceptual learning reflects alterations in early neural processes that are localized beyond the site of convergence of the two eyes. Our results suggest a significant degree of plasticity in the visual system of adults with amblyopia.

Relevance:

60.00%

Publisher:

Abstract:

Cells in adult primary visual cortex are capable of integrating information over much larger portions of the visual field than was originally thought. Moreover, their receptive field properties can be altered by the context within which local features are presented and by changes in visual experience. The substrate for both spatial integration and cortical plasticity is likely to be found in a plexus of long-range horizontal connections, formed by cortical pyramidal cells, which link cells within each cortical area over distances of 6-8 mm. The relationship between horizontal connections and cortical functional architecture suggests a role in visual segmentation and spatial integration. The distribution of lateral interactions within striate cortex was visualized with optical recording, and their functional consequences were explored by using comparable stimuli in human psychophysical experiments and in recordings from alert monkeys. They may represent the substrate for perceptual phenomena such as illusory contours, surface fill-in, and contour saliency. The dynamic nature of receptive field properties and cortical architecture has been seen over time scales ranging from seconds to months. One can induce a remapping of the topography of visual cortex by making focal binocular retinal lesions. Shorter-term plasticity of cortical receptive fields was observed following brief periods of visual stimulation. The mechanisms involved entail altering the effectiveness of existing cortical connections for the short-term changes, and sprouting of axon collaterals and synaptogenesis for the long-term changes. The mutability of cortical function implies a continual process of calibration and normalization of the perception of visual attributes that is dependent on sensory experience throughout adulthood and might further represent the mechanism of perceptual learning.

Relevance:

60.00%

Publisher:

Abstract:

We summarize the various strands of research on peripheral vision and relate them to theories of form perception. After a historical overview, we describe quantifications of the cortical magnification hypothesis, including an extension of Schwartz's cortical mapping function. The merits of this concept are considered across a wide range of psychophysical tasks, followed by a discussion of its limitations and the need for non-spatial scaling. We also review the eccentricity dependence of other low-level functions including reaction time, temporal resolution, and spatial summation, as well as perimetric methods. A central topic is then the recognition of characters in peripheral vision, both at low and high levels of contrast, and the impact of surrounding contours known as crowding. We demonstrate how Bouma's law, specifying the critical distance for the onset of crowding, can be stated in terms of the retinocortical mapping. The recognition of more complex stimuli, like textures, faces, and scenes, reveals a substantial impact of mid-level vision and cognitive factors. We further consider eccentricity-dependent limitations of learning, both at the level of perceptual learning and pattern category learning. Generic limitations of extrafoveal vision are observed for the latter in categorization tasks involving multiple stimulus classes. Finally, models of peripheral form vision are discussed. We report that peripheral vision is limited with regard to pattern categorization by a distinctly lower representational complexity and processing speed. Taken together, the limitations of cognitive processing in peripheral vision appear to be as significant as those imposed on low-level functions and by way of crowding.
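
As a point of reference for the statement that Bouma's law can be expressed in terms of the retinocortical mapping, the standard formalization is sketched below; the proportionality constant b ≈ 0.4–0.5 and the mapping parameters λ and E_2 are generic values from the crowding literature rather than figures taken from this review:

w_c \approx b\,E \quad (b \approx 0.4\text{–}0.5), \qquad x(E) = \lambda \ln\!\left(1 + \frac{E}{E_2}\right),

\Delta x = x(E + w_c) - x(E) = \lambda \ln\!\frac{E_2 + (1+b)\,E}{E_2 + E} \;\longrightarrow\; \lambda \ln(1+b) \quad \text{for } E \gg E_2.

In other words, a critical spacing that grows linearly with eccentricity E corresponds, under the logarithmic retinocortical mapping x(E), to an approximately constant critical distance on the cortical surface.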

Relevance:

60.00%

Publisher:

Abstract:

One of the overarching questions in the field of infant perceptual and cognitive development concerns how selective attention is organized during early development to facilitate learning. The following study examined how infants' selective attention to properties of social events (i.e., prosody of speech and facial identity) changes in real time as a function of intersensory redundancy (redundant audiovisual, nonredundant unimodal visual) and exploratory time. Intersensory redundancy refers to the spatially coordinated and temporally synchronous occurrence of information across multiple senses. Real-time macro- and micro-structural change in infants' scanning patterns of dynamic faces was also examined. According to the Intersensory Redundancy Hypothesis, information presented redundantly and in temporal synchrony across two or more senses recruits infants' selective attention and facilitates perceptual learning of highly salient amodal properties (properties that can be perceived across several sensory modalities, such as the prosody of speech) at the expense of less salient modality-specific properties. Conversely, information presented to only one sense facilitates infants' learning of modality-specific properties (properties that are specific to a particular sensory modality, such as facial features) at the expense of amodal properties (Bahrick & Lickliter, 2000, 2002). Infants' selective attention and discrimination of prosody of speech and facial configuration were assessed in a modified visual paired-comparison paradigm. In redundant audiovisual stimulation, it was predicted that infants would show discrimination of prosody of speech in the early phases of exploration and facial configuration in the later phases of exploration. Conversely, in nonredundant unimodal visual stimulation, it was predicted that infants would show discrimination of facial identity in the early phases of exploration and prosody of speech in the later phases of exploration. Results provided support for the first prediction and indicated that, following redundant audiovisual exposure, infants showed discrimination of prosody of speech earlier in processing time than discrimination of facial identity. Data from the nonredundant unimodal visual condition provided partial support for the second prediction and indicated that infants showed discrimination of facial identity, but not prosody of speech. The dissertation study contributes to the understanding of the nature of infants' selective attention and processing of social events across exploratory time.

Relevance:

60.00%

Publisher:

Abstract:

Recent findings indicate that bimodal-redundant stimulation promotes perceptual learning and recruits attention to amodal properties in non-human as well as human infants. However, it is not clear whether bimodal-redundant stimulation can also facilitate memory during the postnatal period. Moreover, most animal and human studies have employed an operant paradigm to study memory, but have not compared the effectiveness of contingent versus passive presentation of information on memory. The current study investigated the role of unimodal versus bimodal presentation and the role of contingent versus passive exposure in memory retention in the bobwhite quail (Colinus virginianus). Results revealed that contingently trained chicks demonstrated a preference for the familiarized call under both unimodal and bimodal conditions. Between-group analyses revealed that the contingent-bimodal group preferred the familiarized call as compared to the passive-bimodal group. These results indicate that the contingency paradigm, combined with the bimodal stimulus type, facilitated memory during early development.

Relevance:

40.00%

Publisher:

Abstract:

A neural model is proposed of how laminar interactions in the visual cortex may learn and recognize object texture and form boundaries. The model brings together five interacting processes: region-based texture classification, contour-based boundary grouping, surface filling-in, spatial attention, and object attention. The model shows how form boundaries can determine regions in which surface filling-in occurs; how surface filling-in interacts with spatial attention to generate a form-fitting distribution of spatial attention, or attentional shroud; how the strongest shroud can inhibit weaker shrouds; and how the winning shroud regulates learning of texture categories, and thus the allocation of object attention. The model can discriminate abutted textures with blurred boundaries and is sensitive to texture boundary attributes like discontinuities in orientation and texture flow curvature as well as to relative orientations of texture elements. The model quantitatively fits a large set of human psychophysical data on orientation-based textures. Object boundary output of the model is compared to computer vision algorithms using a set of human-segmented photographic images. The model classifies textures and suppresses noise using a multiple-scale oriented filterbank and a distributed Adaptive Resonance Theory (dART) classifier. The matched signal between the bottom-up texture inputs and top-down learned texture categories is utilized by oriented competitive and cooperative grouping processes to generate texture boundaries that control surface filling-in and spatial attention. Top-down modulatory attentional feedback from boundary and surface representations to early filtering stages results in enhanced texture boundaries and more efficient learning of texture within attended surface regions. Surface-based attention also provides a self-supervising training signal for learning new textures. The importance of surface-based attentional feedback in texture learning and classification is tested using a set of textured images from the Brodatz micro-texture album. Benchmark classification accuracy varies from 95.1% to 98.6% with attention and from 90.6% to 93.2% without attention.
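
To make concrete what the "multiple-scale oriented filterbank" front end of such a model involves, the Python sketch below builds a small bank of Gabor filters and returns rectified responses as texture features. It is a generic illustration under assumed scales and orientations, not the authors' dART-based implementation.

import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(frequency, theta, sigma, size=31):
    # Real (cosine-phase) Gabor: an oriented carrier under a Gaussian envelope.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates to the filter orientation
    yr = -x * np.sin(theta) + y * np.cos(theta)
    kernel = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * frequency * xr)
    return kernel - kernel.mean()                 # zero mean: no response to uniform regions

def texture_features(image, frequencies=(0.1, 0.2, 0.4), n_orient=4):
    # One rectified response map per (scale, orientation) pair.
    maps = []
    for f in frequencies:
        sigma = 0.5 / f                           # envelope width tied to carrier period (assumed)
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            resp = convolve2d(image, gabor_kernel(f, theta, sigma),
                              mode="same", boundary="symm")
            maps.append(np.abs(resp))             # local oriented energy
    return np.stack(maps, axis=0)                 # shape: (n_filters, H, W)

if __name__ == "__main__":
    patch = np.random.rand(64, 64)                # stand-in for a texture patch
    print(texture_features(patch).shape)          # (12, 64, 64)

In the model described above, responses of this kind would feed the texture classifier and boundary-grouping stages; here they simply illustrate the multi-scale, multi-orientation decomposition.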

Relevance:

40.00%

Publisher:

Abstract:

Abstract taken from the publication.