976 results for Perceptual modalities


Relevance: 20.00%

Abstract:

The aim of this experiment was to determine the effectiveness of two video-based perceptual training approaches designed to improve the anticipatory skills of junior tennis players. Players were assigned equally to an explicit learning group, an implicit learning group, a placebo group, or a control group. A progressive temporal occlusion paradigm was used to examine, before and after training, the ability of the players to predict the direction of an opponent's service in an in-vivo on-court setting. The players responded either by hitting a return stroke or by making a verbal prediction of stroke direction. Results revealed that the implicit learning group, whose training required them to predict service direction while viewing temporally occluded video footage of the return-of-serve scenario, significantly improved their prediction accuracy after the training intervention. However, this training effect dissipated after a 32-day unfilled retention interval. The explicit learning group, who received instructions about the specific aspects of the pre-contact service kinematics that are informative with respect to service direction, did not demonstrate any significant performance improvements after the intervention. This, together with the absence of any significant improvements for the placebo and control groups, demonstrated that the improvement observed for the implicit learning group was not a consequence of either expectancy or familiarity effects.

Relevance: 20.00%

Abstract:

Compression amplification significantly alters the acoustic speech signal in comparison to linear amplification. The central hypothesis of the present study was that the compression settings of a two-channel aid that best preserved the acoustic properties of speech compared to linear amplification would yield the best perceptual results, and that the compression settings that most altered the acoustic properties of speech compared to linear amplification would yield significantly poorer speech perception. On the basis of initial acoustic analysis of the test stimuli recorded through a hearing aid, two different compression amplification settings were chosen for the perceptual study. Participants were 74 adults with mild to moderate sensorineural hearing impairment. Overall, the speech perception results supported the hypothesis. A further aim of the study was to determine whether variation in participants' speech perception with compression amplification (compared to linear amplification) could be explained by the individual characteristics of age, degree of loss, dynamic range, temporal resolution, and frequency selectivity; however, no significant relationships were found.
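The contrast between compression and linear amplification can be made concrete with a static input/output rule. The sketch below is purely illustrative; the kneepoint, ratio, and gain values are invented and are not the study's two-channel hearing-aid settings:

```python
def compressor_output_db(input_db, kneepoint_db=50.0, ratio=3.0, gain_db=20.0):
    """Static input/output rule of one compression channel: linear
    gain below the kneepoint, 1/ratio dB-per-dB growth above it.
    All parameter values here are illustrative, not the study's."""
    out = input_db + gain_db
    if input_db > kneepoint_db:
        out -= (input_db - kneepoint_db) * (1.0 - 1.0 / ratio)
    return out

# A 30-dB swing in speech input level leaves the channel smaller:
low = compressor_output_db(45.0)
high = compressor_output_db(75.0)
print(high - low)  # less than the 30 dB a linear aid would preserve
```

In a linear aid the output swing equals the input swing; the reduced swing above the kneepoint is exactly the kind of alteration of the acoustic speech signal the study's initial acoustic analysis quantified.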

Relevance: 20.00%

Abstract:

When asked to compare two lateralized shapes for horizontal size, neglect patients often indicate the left stimulus to be smaller. Gainotti and Tiacci (1971) hypothesized that this phenomenon might be related to a rightward bias in the patients' gaze. This study aimed to assess the relation between this size underestimation and oculomotor asymmetries. Eye movements were recorded while three neglect patients judged the horizontal extent of two rectangles. Two experimental manipulations were performed to increase the likelihood of symmetrical scanning of the stimulus display. The first manipulation entailed a sequential, rather than simultaneous presentation of the two rectangles. The second required adaptation to rightward displacing prisms, which is known to reduce many manifestations of neglect. All patients consistently underestimated the left rectangle, but the pattern of verbal responses and eye movements suggested different underlying causes. These include a distortion of space perception without ocular asymmetry, a failure to view the full leftward extent of the left stimulus, and a high-level response bias. Sequential presentation of the rectangles and prism adaptation reduced ocular asymmetries without affecting size underestimation. Overall, the results suggest that leftward size underestimation in neglect can arise for a number of different reasons. Incomplete leftward scanning may perhaps be sufficient to induce perceptual size distortion, but it is not a necessary prerequisite.

Relevance: 20.00%

Abstract:

When interacting with each other, people often spontaneously synchronize their movements, e.g. during pendulum swinging, chair rocking [5], walking [4][7], and when executing periodic forearm movements [3]. Although the spatiotemporal information that establishes the coupling, leading to synchronization, might be provided by several perceptual systems, the systematic study of the contribution of different sensory modalities has been widely neglected. Considering a) differences in sensory dominance on the spatial and temporal dimensions [5], b) different cue combination and integration strategies [1][2], and c) that different senses might provide different aspects of the same event, synchronization should be moderated by the type of sensory modality. Here, 9 naïve participants placed a bottle periodically between two target zones, 40 times, in 12 conditions, while sitting in front of a confederate executing the same task. The participant could a) see and hear the confederate, b) only see the confederate, c) only hear the confederate, or d) received no audiovisual information about the confederate's movements. Each pair started in 3 different relative positions (in-phase, anti-phase, out of phase). A retro-reflective marker was attached to the top of each bottle, and bottle displacement was captured by a motion capture system. We analyzed the variability of the continuous relative phase, reflecting the degree of synchronization. Results indicate the emergence of spontaneous synchronization, an increase with bimodal information, and an influence of the initial phase relation on the particular synchronization pattern. The results have theoretical implications for studying cue combination in interpersonal coordination and are consistent with coupled-oscillator models.
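The degree of synchronization was indexed by the variability of the continuous relative phase. A minimal sketch of such an analysis, assuming (as is common, though not stated in the abstract) a Hilbert-transform phase estimate; the signals and names below are invented stand-ins for the recorded bottle displacements:

```python
import numpy as np
from scipy.signal import hilbert

def relative_phase_variability(x, y):
    """Circular variability of the continuous relative phase between
    two periodic displacement signals (lower = tighter coupling)."""
    # Instantaneous phase of each signal via the analytic signal
    phase_x = np.angle(hilbert(x - np.mean(x)))
    phase_y = np.angle(hilbert(y - np.mean(y)))
    rel = phase_x - phase_y
    # Circular SD from the length of the mean resultant vector
    r = np.abs(np.mean(np.exp(1j * rel)))
    return np.sqrt(-2.0 * np.log(r))

t = np.linspace(0, 10, 2000)
a = np.sin(2 * np.pi * 1.0 * t)        # "participant" movement
b = np.sin(2 * np.pi * 1.0 * t + 0.3)  # "confederate", constant lag
print(relative_phase_variability(a, b))  # small: stable synchronization
```

Two signals with a constant phase lag yield low variability; signals at different frequencies drift through all phase relations and yield high variability, which is how synchronized and unsynchronized trials are distinguished.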

Relevance: 20.00%

Abstract:

Many of our everyday tasks require control of the serial order and timing of component actions. Using the dynamic neural field (DNF) framework, we address the learning of representations that support the performance of precisely timed action sequences. Continuing previous modeling work and robotics implementations, we specifically ask how feedback about executed actions might be used by the learning system to fine-tune a joint memory representation of the ordinal and temporal structure initially acquired by observation. The perceptual memory is represented by a self-stabilized, multi-bump activity pattern of neurons encoding instances of a sensory event (e.g., color, position or pitch), which guides sequence learning. The strength of the population representation of each event is a function of the time elapsed since sequence onset. We propose, and test in simulations, a simple learning rule that detects a mismatch between the expected and realized timing of events and adapts the activation strengths to compensate for the movement time needed to achieve the desired effect. The simulation results show that the effector-specific memory representation can be robustly recalled. We discuss the impact of the fast, activation-based learning that the DNF framework provides for robotics applications.
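The mismatch-driven adaptation can be caricatured in a few lines. This is a toy sketch, not the paper's DNF equations: it keeps only the idea that recall time falls monotonically with stored activation strength, so a timing error is compensated by shifting that strength.

```python
import numpy as np

def adapt_strengths(u, realized, desired, eta=0.5):
    """Mismatch rule: shift each event's activation strength in
    proportion to the error between realized and desired timing."""
    return u + eta * (np.asarray(realized) - np.asarray(desired))

def recall_times(u, base=10.0):
    """Toy read-out: stronger activation -> earlier recall."""
    return base - u

u = np.zeros(3)                      # naive initial memory
desired = np.array([8.0, 6.0, 4.0])  # target event times
for _ in range(20):                  # repeated rehearsal trials
    u = adapt_strengths(u, recall_times(u), desired)
```

With this read-out the rule is a contraction (each pass halves the timing error for eta=0.5), so the recalled times converge to the desired ones, mirroring the robust recall reported in the simulations.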

Relevance: 20.00%

Abstract:

The Symbol Digit Modalities Test (SDMT) is a widely used instrument for assessing information processing speed, attention, visual scanning, and tracking. Considering that repeated evaluations are a common need in neuropsychological assessment routines, we explored the test–retest reliability and practice effects of two alternate SDMT forms with a short inter-assessment interval. A total of 123 university students completed the written SDMT version at two time points separated by a 150-min interval. Half of the participants completed the same form on both occasions, while the other half completed different forms. Overall, reasonable test–retest reliabilities were found (r = .70), and the subjects who completed the same form showed significant practice effects (p < .001, dz = 1.61), which were almost non-existent in those completing different forms. The alternate forms were found to be moderately reliable and to elicit similar performance across participants, suggesting their utility in repeated cognitive assessments when brief inter-assessment intervals are required.

Relevance: 20.00%

Abstract:

The investigation of perceptual and cognitive functions with non-invasive brain imaging methods critically depends on the careful selection of stimuli for use in experiments. For example, it must be verified that any observed effects follow from the parameter of interest (e.g. semantic category) rather than from other low-level physical features (e.g. luminance or spectral properties); otherwise, interpretation of the results is confounded. Researchers often circumvent this issue by including additional control conditions or tasks, both of which are flawed and also prolong experiments. Here, we present new approaches for controlling classes of stimuli intended for use in cognitive neuroscience; these methods can be readily extrapolated to other applications and stimulus modalities. Our approach comprises two levels. The first level equalizes individual stimuli in terms of their mean luminance: each data point in a stimulus is adjusted toward a standard value computed across the stimulus battery. The second level analyzes two populations of stimuli along their spectral properties (i.e. spatial frequency), using a dissimilarity metric equal to the root mean square of the distance between the two populations of objects as a function of spatial frequency along the x- and y-dimensions of the image. Randomized permutations are used to minimize, in a completely data-driven manner, the spectral differences between image sets. While another paper in this issue applies these methods to acoustic stimuli (Aeschlimann et al., Brain Topogr 2008), we illustrate the approach here in detail for complex visual stimuli.
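The two levels can be sketched as follows. This is a schematic reconstruction assuming grayscale image arrays; the permutation search that minimizes the spectral distance is omitted, and the random images merely stand in for real stimulus categories:

```python
import numpy as np

def equalize_mean_luminance(images):
    """Level 1: shift every image to the battery-wide grand mean
    luminance, so mean luminance cannot confound comparisons."""
    images = [np.asarray(im, float) for im in images]
    target = np.mean([im.mean() for im in images])
    return [im - im.mean() + target for im in images]

def spectral_rms_distance(set_a, set_b):
    """Level 2: RMS distance between the two sets' mean amplitude
    spectra as a function of spatial frequency (x and y)."""
    spec_a = np.mean([np.abs(np.fft.fft2(im)) for im in set_a], axis=0)
    spec_b = np.mean([np.abs(np.fft.fft2(im)) for im in set_b], axis=0)
    return float(np.sqrt(np.mean((spec_a - spec_b) ** 2)))

rng = np.random.default_rng(0)
faces = [rng.random((32, 32)) for _ in range(10)]         # stand-ins for
houses = [rng.random((32, 32)) + 0.2 for _ in range(10)]  # two categories
battery = equalize_mean_luminance(faces + houses)
faces_eq, houses_eq = battery[:10], battery[10:]
print(spectral_rms_distance(faces_eq, houses_eq))
```

The data-driven step described in the text would then swap images between (or discard images from) the two sets over many random permutations, keeping the assignment that minimizes this distance.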

Relevance: 20.00%

Abstract:

Abstract (English)

General background

Multisensory stimuli are easier to recognize, can improve learning, and are processed faster than unisensory ones. As such, the ability of an organism to extract and synthesize relevant sensory inputs across multiple sensory modalities shapes its perception of and interaction with the environment. A major question in the field is how the brain extracts and fuses relevant information to create a unified perceptual representation (but also how it segregates unrelated information). This fusion between the senses has been termed "multisensory integration", a notion that derives from seminal single-cell animal studies in the superior colliculus, a subcortical structure shown to create a multisensory output differing from the sum of its unisensory inputs. At the cortical level, integration of multisensory information has traditionally been deferred to higher classical associative regions within the frontal, temporal and parietal lobes, after extensive processing within the sensory-specific, segregated pathways. However, many anatomical, electrophysiological and neuroimaging findings now speak for multisensory convergence and interactions as a distributed process beginning much earlier than previously appreciated, within the initial stages of sensory processing.

The work presented in this thesis studies the neural bases and mechanisms by which the human brain combines sensory information between the senses of hearing and touch. Early-latency non-linear auditory-somatosensory neural response interactions have been repeatedly observed in humans and non-human primates. Whether these early, low-level interactions directly influence behavioral outcomes remains an open question, as they have been observed under diverse experimental circumstances such as anesthesia, passive stimulation, and speeded reaction-time tasks. Under laboratory settings, it has been demonstrated that simple reaction times to auditory-somatosensory stimuli are facilitated over their unisensory counterparts whether or not the stimuli are delivered to the same spatial location, suggesting that auditory-somatosensory integration must occur in cerebral regions with large-scale spatial representations. However, experiments that required spatial processing of the stimuli have observed effects limited to spatially aligned conditions, or varying depending on which body part was stimulated. Whether those divergences stem from task requirements and/or the need for spatial processing has not been firmly established.

Hypotheses and experimental results

In a first study, we hypothesized that early non-linear auditory-somatosensory multisensory neural response interactions are relevant to behavior. Performing a median split according to reaction time on a subset of behavioral and electroencephalographic data, we found that the earliest non-linear multisensory interactions measured within the EEG signal (i.e., between 40-83 ms post-stimulus onset) were specific to fast reaction times, indicating a direct correlation between early neural response interactions and behavior.

In a second study, we hypothesized that the relevance of spatial information for task performance affects behavioral measures of auditory-somatosensory integration. Across two psychophysical experiments we show that facilitated detection occurs even when attending to spatial information, with no modulation according to the spatial alignment of the stimuli. On the other hand, discrimination performance with probes, quantified using sensitivity (d'), is impaired following multisensory trials in general, and significantly more so following misaligned multisensory trials.

In a third study, we hypothesized that behavioral improvements might vary depending on which body part (hand vs. neck) is stimulated. Preliminary results suggest a possible dissociation between behavioral improvements and ERPs. RTs to multisensory stimuli were modulated by space only when somatosensory stimuli were delivered to the neck, whereas multisensory ERPs were modulated by spatial alignment for both types of somatosensory stimuli.

Conclusion

This thesis provides insight into the functional role played by early, low-level multisensory interactions. Combining psychophysics and electrical neuroimaging techniques, we demonstrate the behavioral relevance of early, low-level interactions in the normal human system. Moreover, we show that these early interactions are hermetic to top-down influences on spatial processing, suggesting their occurrence within cerebral regions having access to large-scale spatial representations. We finally highlight specific interactions between auditory space and somatosensory stimulation on different body parts. Gaining an in-depth understanding of how multisensory integration normally operates is of central importance, as it will ultimately permit us to consider how the impaired brain could benefit from rehabilitation with multisensory stimulation.
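The discrimination measure in the second study, the sensitivity index d', is a standard signal-detection quantity: d' = z(hit rate) − z(false-alarm rate). A minimal sketch, in which the trial counts are hypothetical and the +0.5 log-linear correction is a common convention assumed here rather than taken from the thesis:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    with the log-linear (+0.5) correction so rates of 0 or 1
    stay finite (a common convention, assumed here)."""
    h = (hits + 0.5) / (hits + misses + 1)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(h) - z(fa)

# Hypothetical counts for aligned vs. misaligned multisensory trials
aligned = d_prime(45, 5, 10, 40)
misaligned = d_prime(38, 12, 18, 32)
print(aligned > misaligned)
```

Because d' separates sensitivity from response bias, a drop in d' on misaligned trials reflects genuinely impaired discrimination rather than a mere shift in response criterion.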

Relevance: 20.00%

Abstract:

Most theories of perception assume a rigid relationship between objects of the physical world and the corresponding mental representations. We show by a priori reasoning that this assumption is not fulfilled. We claim instead that all object–representation correspondences have to be learned. However, we cannot learn to perceive all objects that exist in the world. We arrive at these conclusions through a combinatorial analysis of a fictive stimulus world and of the means of coping with its complexity, namely perceptual learning. We show that successful perceptual learning requires changes in the representational states of the brain that are not derived directly from the constitution of the physical world. The mind constitutes itself through perceptual learning.

Relevance: 20.00%

Abstract:

INTRODUCTION: We tested the hypothesis that twitch potentiation would be greater following conventional (CONV) neuromuscular electrical stimulation (50-µs pulse width and 25-Hz frequency) than following wide-pulse high-frequency (WPHF) neuromuscular electrical stimulation (1-ms pulse width and 100-Hz frequency) and voluntary (VOL) contractions, because of specificities in motor unit recruitment (random in CONV vs. random and orderly in WPHF vs. orderly in VOL). METHODS: A single twitch was evoked by means of tibial nerve stimulation before and 2 s after CONV, WPHF, and VOL conditioning contractions of the plantar flexors (intensity: 10% of maximal voluntary contraction; duration: 10 s) in 13 young healthy subjects. RESULTS: Peak twitch increased (P<0.05) after CONV (+4.5±4.0%) and WPHF (+3.3±5.9%), with no difference between the two modalities, whereas no change was observed after VOL (+0.8±2.6%). CONCLUSIONS: Our results demonstrate that the presumed differences in motor unit recruitment between WPHF and CONV do not appear to influence twitch potentiation.
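The potentiation values reported above are per-subject percent changes in peak twitch, summarized as mean ± SD. A trivial sketch of that computation; the torque values are invented, not the study's recordings:

```python
import numpy as np

def percent_potentiation(pre, post):
    """Per-subject twitch potentiation, 100 * (post - pre) / pre,
    summarized as mean and SD across subjects."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    change = 100.0 * (post - pre) / pre
    return change.mean(), change.std(ddof=1)

# Hypothetical peak-twitch torques (N*m) before/after a conditioning
# contraction; the study's +4.5 +/- 4.0 % came from 13 subjects.
pre = [12.0, 10.5, 14.2, 11.8, 13.1]
post = [12.6, 10.9, 14.8, 12.4, 13.5]
mean_change, sd_change = percent_potentiation(pre, post)
print(round(mean_change, 1))
```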

Relevance: 20.00%

Abstract:

Mental imagery is an experience that resembles perceptual experience but occurs in the absence of physical stimulation. Previous research has shown that mental imagery improves performance in several domains, for example motor performance; however, its role in perceptual learning has not yet been addressed. Here we focus on a specific sensory modality: vision. Perceptual learning is the ability to improve perception in a stable way through the repetition of a given task. This thesis presents a series of empirical results demonstrating that perceptual improvement can also occur in the absence of physical stimulation: imagining visual stimuli is sufficient for successful perceptual learning. Hence, the processes underlying perceptual learning are not only stimulus-driven but can also be driven by internally generated signals. Moreover, perceptual learning via mental imagery occurs not only when physical stimuli are (partially) absent, but also in conditions where the presented stimuli are uninformative with respect to the task that has to be learned.

Relevance: 20.00%

Abstract:

An object's motion relative to an observer can convey ethologically meaningful information. Approaching (looming) stimuli can signal threats or collisions to be avoided, or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and, more recently, multisensory looming stimuli, none has investigated whether looming signals are integrated between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between the enhancement indices for detection speed and for subjective ratings: maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex, ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. This supports a new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perception and action.
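An enhancement index of the kind used in the detection-speed analysis can be sketched as a percent gain over the best unisensory condition. This is a common formulation assumed for illustration, and the reaction times below are invented:

```python
def enhancement_index(multi_rt, best_uni_rt):
    """Percent multisensory gain relative to the best unisensory
    condition. For reaction times smaller is better, hence the
    (best_uni - multi) ordering: positive = multisensory speed-up."""
    return 100.0 * (best_uni_rt - multi_rt) / best_uni_rt

# Hypothetical median detection RTs (ms) per condition
rt = {"auditory": 320.0, "visual": 340.0, "audiovisual_looming": 288.0}
gain = enhancement_index(rt["audiovisual_looming"],
                         min(rt["auditory"], rt["visual"]))
print(round(gain, 1))
```

Computing one such index per stimulus type for detection speed, and an analogous one for subjective ratings, is what allows the negative linear relationship reported for looming stimuli to be tested.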