930 results for Visual and auditory processing
Abstract:
The coding of body part location may depend upon both visual and proprioceptive information, and allows targets to be localized with respect to the body. The present study investigates the interaction between visual and proprioceptive localization systems under conditions of multisensory conflict induced by optokinetic stimulation (OKS). Healthy subjects were asked to estimate the apparent motion speed of a visual target (LED) that could be located either in the extrapersonal space (visual encoding only, V), or at the same distance, but stuck on the subject's right index finger-tip (visual and proprioceptive encoding, V-P). Additionally, the multisensory condition was performed with the index finger kept in position both passively (V-P passive) and actively (V-P active). Results showed that the visual stimulus was always perceived to move, irrespective of its out- or on-the-body location. Moreover, this apparent motion speed varied consistently with the speed of the moving OKS background in all conditions. Surprisingly, no differences were found between V-P active and V-P passive conditions in the speed of apparent motion. The persistence of the visual illusion during the active posture maintenance reveals a novel condition in which vision totally dominates over proprioceptive information, suggesting that the hand-held visual stimulus was perceived as a purely visual, external object despite its contact with the hand.
Abstract:
The 'irrelevant sound effect' in short-term memory is commonly believed to entail a number of direct consequences for cognitive performance in the office and other workplaces (e.g. S. P. Banbury, S. Tremblay, W. J. Macken, & D. M. Jones, 2001). It may also help to identify what types of sound are most suitable as auditory warning signals. However, the conclusions drawn are based primarily upon evidence from a single task (serial recall) and a single population (young adults). This evidence is reconsidered from the standpoint of different worker populations confronted with common workplace tasks and auditory environments. Recommendations are put forward for factors to be considered when assessing the impact of auditory distraction in the workplace. Copyright (c) 2005 John Wiley & Sons, Ltd.
Abstract:
Perception of our own bodies is based on integration of visual and tactile inputs, notably by neurons in the brain’s parietal lobes. Here we report a behavioural consequence of this integration process. Simply viewing the arm can speed up reactions to an invisible tactile stimulus on the arm. We observed this visual enhancement effect only when a tactile task required spatial computation within a topographic map of the body surface and the judgements made were close to the limits of performance. This effect of viewing the body surface was absent or reversed in tasks that either did not require a spatial computation or in which judgements were well above performance limits. We consider possible mechanisms by which vision may influence tactile processing.
Abstract:
The existence of hand-centred visual processing has long been established in the macaque premotor cortex. These hand-centred mechanisms have been thought to play some general role in the sensory guidance of movements towards objects, or, more recently, in the sensory guidance of object avoidance movements. We suggest that these hand-centred mechanisms play a specific and prominent role in the rapid selection and control of manual actions following sudden changes in the properties of the objects relevant for hand-object interactions. We discuss recent anatomical and physiological evidence from human and non-human primates, which indicates the existence of rapid processing of visual information for hand-object interactions. This new evidence demonstrates how several stages of the hierarchical visual processing system may be bypassed, feeding the motor system with hand-related visual inputs within just 70 ms following a sudden event. This time window is early enough, and this processing rapid enough, to allow the generation and control of rapid hand-centred avoidance and acquisitive actions, for aversive and desired objects, respectively.
Abstract:
Objective: This work investigates the nature of the comprehension impairment in Wernicke’s aphasia, by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. Wernicke’s aphasia, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. Methods: We examined analysis of basic acoustic stimuli in Wernicke’s aphasia participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in “moving ripple” stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Results: Participants with Wernicke’s aphasia showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both frequency and dynamic modulation detection correlated significantly with auditory comprehension abilities in the Wernicke’s aphasia participants. 
Conclusion: These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in Wernicke’s aphasia, which may contribute causally to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing.
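The "criterion-free, adaptive measures of threshold" mentioned in the methods can be illustrated with a simple two-down/one-up staircase, a common adaptive rule that converges on roughly the 70.7%-correct point of the psychometric function. The abstract does not specify which adaptive procedure was used, so the rule, step size, and reversal count below are illustrative assumptions:

```python
def staircase_threshold(respond, start=40.0, step=4.0, n_reversals=8):
    """Two-down/one-up adaptive staircase.

    `respond(level)` returns True for a correct response at the given
    stimulus level (e.g. FM depth).  The track moves down (harder) after
    two consecutive correct responses and up (easier) after any error;
    the threshold estimate is the mean level at the final reversals.
    """
    level, correct_run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 2:           # two correct in a row -> harder
                correct_run = 0
                if direction == +1:        # direction changed: a reversal
                    reversals.append(level)
                direction = -1
                level = max(level - step, 0.0)
        else:                               # any error -> easier
            correct_run = 0
            if direction == -1:            # direction changed: a reversal
                reversals.append(level)
            direction = +1
            level += step

    last = reversals[-6:]                   # average the last reversals
    return sum(last) / len(last)

# Simulated observer with a true threshold of 20 units: always correct
# above it, always wrong below it (a deterministic toy psychometric function).
estimate = staircase_threshold(lambda level: level >= 20.0)
```

With this deterministic observer the track descends to the threshold and then oscillates between 16 and 20, so the reversal average brackets the true value.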
Abstract:
Across two studies, we examined the association between adiposity, restrictive feeding practices and cortical processing bias to food stimuli in children. We assessed P3b event-related potential (ERP) during visual oddball tasks in which the frequently presented stimulus was non-food and the infrequently presented stimulus was either a food (Study 1) or non-food (Study 2) item. Children responded to the infrequently presented stimulus and accuracy and speed responses were collected. Restrictive feeding practices, children's height and weight were also measured. In Study 1, the difference in P3b amplitude for infrequently presented food stimuli, relative to frequently presented non-food stimuli, was negatively associated with adiposity and positively associated with restrictive feeding practices after controlling for adiposity. There was no association between P3b amplitude difference and adiposity or restriction in Study 2, suggesting that the effects seen in Study 1 were not due to general attentional processes. Taken together, our results suggest that attentional salience, as indexed by the P3b amplitude, may be important for understanding the neural correlates of adiposity and restrictive feeding practices in children.
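The P3b measure used above is a difference in mean ERP amplitude between infrequent and frequent stimuli. A minimal sketch of that computation for one electrode is given below; the 300-600 ms scoring window and the synthetic data are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def p3b_difference(rare_epochs, frequent_epochs, times, window=(0.3, 0.6)):
    """Mean-amplitude P3b difference (rare minus frequent).

    Each epochs array is (n_trials, n_samples) for one electrode (e.g. Pz);
    `times` gives the latency in seconds of each sample.  The P3b is scored
    as the mean amplitude of the trial-averaged ERP inside a post-stimulus
    window (300-600 ms here, an illustrative choice), and the returned
    value is that score for rare stimuli minus the score for frequent ones.
    """
    mask = (times >= window[0]) & (times <= window[1])
    rare_erp = rare_epochs.mean(axis=0)       # average across trials
    freq_erp = frequent_epochs.mean(axis=0)
    return rare_erp[mask].mean() - freq_erp[mask].mean()

# Synthetic check: rare trials carry a 5 microvolt plateau in the window.
times = np.linspace(0.0, 1.0, 101)
freq = np.zeros((20, 101))
rare = np.zeros((20, 101))
rare[:, (times >= 0.3) & (times <= 0.6)] = 5.0
diff = p3b_difference(rare, freq, times)
```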
Abstract:
Background: Auditory discrimination is significantly impaired in Wernicke’s aphasia (WA) and is thought to be causatively related to the language comprehension impairment which characterises the condition. This study used mismatch negativity (MMN) to investigate the neural responses corresponding to successful and impaired auditory discrimination in WA. Methods: Behavioural auditory discrimination thresholds of CVC syllables and pure tones were measured in WA (n=7) and control (n=7) participants. Threshold results were used to develop multiple-deviant MMN oddball paradigms containing deviants which were either perceptibly or non-perceptibly different from the standard stimuli. MMN analysis investigated differences associated with group, condition and perceptibility, as well as the relationship between MMN responses and comprehension (within which behavioural auditory discrimination profiles were examined). Results: MMN waveforms were observable for both perceptible and non-perceptible auditory changes. Perceptibility was distinguished by MMN amplitude only in the pure tone condition. The WA group could be distinguished from controls by an increase in MMN response latency to CVC stimulus change. Correlation analyses revealed a relationship between behavioural CVC discrimination and MMN amplitude in the control group, where greater amplitude corresponded to better discrimination. The WA group displayed the inverse effect: both discrimination accuracy and auditory comprehension scores were reduced with increased MMN amplitude. In the WA group, a further correlation was observed between the lateralisation of the MMN response and CVC discrimination accuracy; the greater the bilateral involvement, the better the discrimination accuracy.
Conclusions: The results from this study provide further evidence for the nature of auditory comprehension impairment in WA and indicate that the auditory discrimination deficit is grounded in a reduced ability to engage in efficient hierarchical processing and the construction of invariant auditory objects. Correlation results suggest that people with chronic WA may rely on an inefficient, noisy right hemisphere auditory stream when attempting to process speech stimuli.
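The MMN amplitude and latency measures this study correlates with comprehension are conventionally scored from the deviant-minus-standard difference wave. A minimal sketch is shown below; the 100-250 ms search window is a typical choice in the MMN literature, assumed here since the abstract does not state the study's exact window:

```python
import numpy as np

def mmn_peak(deviant_erp, standard_erp, times, window=(0.1, 0.25)):
    """Peak MMN amplitude and latency from a difference wave.

    The MMN is scored as the most negative point of the deviant-minus-
    standard difference wave inside a post-stimulus search window
    (100-250 ms here, an illustrative assumption).
    Returns (amplitude, latency_in_seconds).
    """
    diff = deviant_erp - standard_erp
    mask = (times >= window[0]) & (times <= window[1])
    idx_in_window = np.flatnonzero(mask)
    peak = idx_in_window[np.argmin(diff[idx_in_window])]
    return diff[peak], times[peak]

# Synthetic check: a negativity of -3 microvolts placed at 150 ms.
times = np.linspace(0.0, 0.5, 501)     # 1 ms resolution
standard = np.zeros(501)
deviant = np.zeros(501)
deviant[150] = -3.0                    # sample 150 -> 150 ms
amp, lat = mmn_peak(deviant, standard, times)
```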
Abstract:
Threat detection is a challenging problem, because threats appear in many variations and differences from normal behaviour can be very subtle. In this paper, we consider threats on a parking lot, where theft of a truck’s cargo occurs. The threats range from explicit, e.g. a person attacking the truck driver, to implicit, e.g. somebody loitering and then fiddling with the exterior of the truck in order to open it. Our goal is a system that is able to recognize threats instantaneously as they develop. Typical observables of the threats are a person’s activity, presence in a particular zone, and trajectory. The novelty of this paper is an encoding of these threat observables in a semantic, intermediate-level representation, based on low-level visual features that have no intrinsic semantic meaning themselves. The aim of this representation is to bridge the semantic gap between the low-level tracks and motion and the higher-level notion of threats. In our experiments, we demonstrate that our semantic representation is more descriptive for threat detection than directly using low-level features. We find that a person’s activities are the most important elements of this semantic representation, followed by the person’s trajectory. The proposed threat detection system is very accurate: 96.6% of the tracks are correctly interpreted when considering the temporal context.
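The idea of an intermediate semantic layer can be sketched as follows: raw track measurements are mapped into the semantic observables the abstract names (activity, zone presence, trajectory), and a decision is made over that layer rather than over the raw features. The feature names, thresholds, and rule below are illustrative assumptions, not the paper's actual representation or classifier:

```python
def semantic_features(track):
    """Map a low-level track into intermediate-level semantic observables.

    `track` is a dict of raw measurements: mean speed (m/s), total time
    spent within the truck zone (s), and net displacement of the track (m).
    The thresholds are hypothetical, chosen only for illustration.
    """
    return {
        "loitering": track["speed"] < 0.3 and track["displacement"] < 5.0,
        "in_truck_zone": track["zone_time"] > 10.0,
        "approaching": track["displacement"] > 5.0 and track["speed"] >= 0.3,
    }

def is_threat(track):
    """Toy decision rule over the semantic layer: prolonged loitering at
    the truck is flagged, matching the 'fiddling with the truck' example."""
    f = semantic_features(track)
    return f["loitering"] and f["in_truck_zone"]

# Two hypothetical tracks: one suspicious, one ordinary passer-by.
suspicious = {"speed": 0.1, "zone_time": 45.0, "displacement": 2.0}
passer_by = {"speed": 1.4, "zone_time": 0.0, "displacement": 30.0}
```

In the paper itself the semantic layer feeds a learned detector; a hand-written rule stands in here only to make the layering concrete.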
Abstract:
Adults diagnosed with autism spectrum disorder (ASD) show a reduced sensitivity (degree of selective response) to social stimuli such as human voices. In order to determine whether this reduced sensitivity is a consequence of years of poor social interaction and communication or is present prior to significant experience, we used functional MRI to examine cortical sensitivity to auditory stimuli in infants at high familial risk for later emerging ASD (HR group, N = 15), and compared this to infants with no family history of ASD (LR group, N = 18). The infants (aged between 4 and 7 months) were presented with voice and environmental sounds while asleep in the scanner and their behaviour was also examined in the context of observed parent-infant interaction. Whereas LR infants showed early specialisation for human voice processing in right temporal and medial frontal regions, the HR infants did not. Similarly, LR infants showed stronger sensitivity than HR infants to sad vocalisations in the right fusiform gyrus and left hippocampus. Also, in the HR group only, there was an association between each infant's degree of engagement during social interaction and the degree of voice sensitivity in key cortical regions. These results suggest that at least some infants at high-risk for ASD have atypical neural responses to human voice with and without emotional valence. Further exploration of the relationship between behaviour during social interaction and voice processing may help better understand the mechanisms that lead to different outcomes in at risk populations.
Abstract:
Among lampyrids, intraspecific sexual communication is facilitated by spectral correspondence between visual sensitivity and bioluminescence emission from the single lantern in the tail. Could a similar strategy be utilized by the elaterids (click beetles), which have one ventral abdominal and two dorsal prothoracic lanterns? Spectral sensitivity [S(lambda)] and bioluminescence were investigated in four Brazilian click beetle species, Fulgeochlizus bruchii, Pyrearinus termitilluminans, Pyrophorus punctatissimus and P. divergens, representing three genera. In addition, in situ microspectrophotometric absorption spectra were obtained for visual and screening pigments in the P. punctatissimus and P. divergens species. In all species, the electroretinographic S(lambda) functions showed broad peaks in the green with a shoulder in the near-ultraviolet, suggesting the presence of short- and long-wavelength receptors in the compound eyes. The long-wavelength receptor in Pyrophorus species is mediated by a P540 rhodopsin in conjunction with a species-specific screening pigment. A correspondence was found between the green to yellow bioluminescence emissions and the broad S(lambda) maximum in each of the four species. It is hypothesized that in elaterids, bioluminescence of the abdominal lantern is an optical signal for intraspecific sexual communication, while the signals from the prothoracic lanterns serve to warn predators and may also provide illumination in flight.
Abstract:
The effectiveness of Cognitive Behavioral Therapy (CBT) for eating disorders has established a link between cognitive processes and unhealthy eating behaviors. However, the relationship between individual differences in unhealthy eating behaviors that are not related to clinical eating disorders, such as overeating and restrained eating, and the processing of food-related verbal stimuli remains undetermined. Furthermore, the cognitive processes that promote unhealthy and healthy exercise patterns remain virtually unexplored by previous research. The present study compared individual differences in attitudes and behaviors around eating and exercise to responses to food- and exercise-related words using a Lexical Decision Task (LDT). Participants were recruited from Colby (n = 61) and the greater Waterville community (n = 16). The results indicate the following trends in the data: individuals who scored high in “thin ideal” responded faster to food-related words than individuals with low “thin ideal” scores did. Regarding the exercise-related data, individuals who engage in more “low intensity exercise” responded faster to exercise-related words than individuals who engage in less “low intensity exercise” did. These findings suggest that cognitive schemata about food and exercise might mediate individuals’ eating and exercise patterns.
Abstract:
Visual Odometry is the process that estimates camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advance of Computer Vision algorithms and computer processing power, the subarea known as Structure from Motion (SFM) started to supply mathematical tools for localization systems in robotics and Augmented Reality applications, in contrast with its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modelling. This work therefore proposes a pipeline to obtain relative position using a previously calibrated camera as a positional sensor, based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information such as probabilistic models for camera state transition unnecessary. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform.
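A core building block of any SFM pipeline like the one described above is triangulation: recovering a 3D landmark from its projections in two calibrated views. The linear (DLT) formulation can be sketched in a few lines of numpy; this is a generic textbook method, not necessarily the exact implementation used in the work:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 are 3x4 camera projection matrices (intrinsics already applied,
    i.e. K @ [R | t]); x1, x2 are the matched 2D image points (x, y).
    Each view contributes two rows of the homogeneous system A X = 0,
    derived from x cross (P X) = 0; the solution is the right singular
    vector of A with the smallest singular value.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize

# Synthetic check: project a known point with two cameras, then recover it.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # reference camera
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # 1 m baseline
X_true = np.array([0.5, -0.2, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free correspondences the DLT solution is exact up to numerical precision; in a real pipeline the camera poses themselves come from feature matching and essential-matrix estimation.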
Abstract:
Aims: To determine the occurrence of isolated and recurrent episodes of conductive hearing loss (CHL) during the first two years of life in very low birth weight (VLBW) infants with and without bronchopulmonary dysplasia (BPD). Study design, subjects and outcome measures: In a longitudinal clinical study, 187 children were evaluated at 6, 9, 12, 15, 18 and 24 months of age by visual reinforcement audiometry, tympanometry and auditory brainstem response. Results: Of the children with BPD, 54.5% presented with episodes of CHL, as opposed to 34.7% of the children without BPD. This difference was found to be statistically significant. Recurrent or persistent episodes were more frequent among children with BPD (25.7%) than among those without BPD (8.3%). The independent variables that contributed to this finding were being small for gestational age and the 5-minute Apgar score. Conclusions: Recurrent CHL episodes are more frequent among VLBW infants with BPD than among VLBW infants without BPD. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Statement of problem. An increase in occlusal vertical dimension (OVD) may occur after processing complete dentures. Although many factors that generate this change are known, no information is available in the dental literature regarding the effect that the occlusal scheme may have on the change in OVD. Purpose. This in vitro study compared the increase in OVD, after processing, between complete dentures with teeth arranged in lingualized balanced occlusion and conventional balanced occlusion. Material and methods. Thirty sets of complete dentures were evaluated as follows: 15 sets of complete dentures were arranged in conventional balanced occlusion (control) and 15 sets of complete dentures were arranged in lingualized balanced occlusion. All dentures were compression molded with a long polymerization cycle. The occlusal vertical dimension was measured with a micrometer (mm) before and after processing each set of dentures. Data were analyzed using an independent t test (alpha=.05). Results. The mean increase in the OVD, after processing, was 0.87 ± 0.21 mm for the control group and 0.90 ± 0.27 mm for the experimental group. There was no significant difference between the groups. Conclusion. After processing, dentures set in lingualized balanced occlusion showed an increase in OVD similar to those set in conventional balanced occlusion.
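The independent t test reported above can be reproduced from the summary statistics alone (means, SDs, and n = 15 per group). A minimal sketch, assuming the classical equal-variance (pooled) form of the test; the critical value is taken from a standard t table:

```python
import math

def t_from_stats(m1, sd1, n1, m2, sd2, n2):
    """Student's independent-samples t statistic from summary statistics,
    assuming equal variances (pooled SD), with df = n1 + n2 - 2."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, n1 + n2 - 2

# The abstract's data: lingualized 0.90 +/- 0.27 mm vs control
# 0.87 +/- 0.21 mm, n = 15 per group.
t, df = t_from_stats(0.90, 0.27, 15, 0.87, 0.21, 15)
T_CRIT = 2.048  # two-tailed critical value for alpha = .05, df = 28 (t table)
significant = abs(t) > T_CRIT
```

The resulting t is about 0.34 on 28 degrees of freedom, well below the critical value, which is consistent with the abstract's conclusion of no significant difference.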
Abstract:
The purpose of this study was to examine the effects of visual and somatosensory information on body sway in individuals with Down syndrome (DS). Nine adults with DS (19-29 years old) and nine control subjects (CS) (19-29 years old) stood in the upright stance in four experimental conditions: no vision and no touch; vision and no touch; no vision and touch; and vision and touch. In the vision condition, participants looked at a target placed in front of them; in the no vision condition, participants wore a black cotton mask. In the touch condition, participants touched a stationary surface with their right index finger; in the no touch condition, participants kept their arms hanging alongside their bodies. A force plate was used to estimate center of pressure excursion for both anterior-posterior and medial-lateral directions. MANOVA revealed that both the individuals with DS and the control subjects used vision and touch to reduce overall body sway, although individuals with DS still oscillated more than did the CS. These results indicate that adults with DS are able to use sensory information to reduce body sway, and they demonstrate that there is no difference in sensory integration between the individuals with DS and the CS.
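The "center of pressure excursion" quantified above is typically summarized by measures such as RMS excursion per direction and total sway path length. A minimal sketch of these computations, assuming already-extracted COP time series (the study's exact sway measures are not specified in the abstract):

```python
import numpy as np

def sway_measures(cop_ap, cop_ml):
    """Summary measures of postural sway from center-of-pressure data.

    cop_ap / cop_ml are the anterior-posterior and medial-lateral COP
    time series (same units, e.g. cm).  Returns the RMS excursion about
    the mean in each direction and the total 2-D sway path length.
    """
    rms_ap = np.sqrt(np.mean((cop_ap - cop_ap.mean()) ** 2))
    rms_ml = np.sqrt(np.mean((cop_ml - cop_ml.mean()) ** 2))
    # Path length: sum of sample-to-sample displacements in the AP/ML plane.
    path = np.sum(np.hypot(np.diff(cop_ap), np.diff(cop_ml)))
    return rms_ap, rms_ml, path

# Synthetic check: a 1 cm amplitude oscillation in the AP direction only,
# sampled over one full cycle.
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
rms_ap, rms_ml, path = sway_measures(np.sin(t), np.zeros_like(t))
```

For a unit sinusoid the AP RMS is 1/sqrt(2) and the path length approaches 4 (the total rise and fall over one cycle), while the ML measures are zero, matching the construction.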