993 results for auditory-motor interaction
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The aim of the present study was to examine tapping synchronization in children with and without Developmental Coordination Disorder (DCD). Participants were 27 children: 13 diagnosed with motor difficulties composed the DCD group, and 14 children with typical development (TD) composed the comparison group. The experimental task consisted of performing 25 continuous taps on the surface of an electronic drum with the preferred hand. Participants were required to tap in synchrony with an auditory beep generated by customized software. Three interval values for the tapping task were tested: 470 ms, 1000 ms, and 1530 ms. The dependent variables were constant error (CE), absolute error (AE), and the standard deviation of absolute error (SD of AE). A 2 x 3 x 3 (Group x Age x Interval) ANOVA with repeated measures on the last factor for CE indicated a significant Group x Age x Interval interaction. For AE and SD of AE, the ANOVAs yielded a significant main effect of Interval and a significant Group x Interval interaction. The results of the present study indicated that children with DCD were less accurate and more variable in tapping synchronization than children with TD. Differences in performance between children with DCD and children with TD became larger as the interval of the auditory signal increased.
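The three dependent variables named above can be made concrete with a short sketch. The abstract does not give the formulas, so the definitions below are the standard assumptions: CE as the mean signed tap-beep asynchrony, AE as the mean absolute asynchrony, and SD of AE as the standard deviation of those absolute asynchronies; the example data are purely illustrative.

```python
# Minimal sketch (not from the study): computing the dependent variables named
# in the abstract from hypothetical tap and target onset times in milliseconds.
import numpy as np

def tapping_errors(tap_times_ms, target_times_ms):
    """Return (CE, AE, SD of AE) for one participant and one interval condition."""
    asynchrony = np.asarray(tap_times_ms) - np.asarray(target_times_ms)
    ce = asynchrony.mean()              # constant error: signed bias
    abs_err = np.abs(asynchrony)
    ae = abs_err.mean()                 # absolute error: overall inaccuracy
    sd_ae = abs_err.std(ddof=1)         # variability of the absolute error
    return ce, ae, sd_ae

# Example: 25 taps synchronized to a 1000 ms beep interval (illustrative data).
targets = np.arange(1, 26) * 1000.0
taps = targets + np.random.default_rng(0).normal(loc=-15.0, scale=40.0, size=25)
print(tapping_errors(taps, targets))
```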
Abstract:
This study investigated the influence of top-down and bottom-up information on speech perception in complex listening environments. Specifically, the effects of listening to different types of processed speech were examined on intelligibility and on simultaneous visual-motor performance. The goal was to extend the generalizability of results in speech perception to environments outside the laboratory. The effect of bottom-up information was evaluated with natural, cell phone, and synthetic speech. The effect of simultaneous tasks was evaluated with concurrent visual-motor and memory tasks. Earlier work on the perception of speech during simultaneous visual-motor tasks has shown inconsistent results (Choi, 2004; Strayer & Johnston, 2001). In the present experiments, two dual-task paradigms were constructed in order to mimic non-laboratory listening environments. In the first two experiments, an auditory word repetition task was the primary task and a visual-motor task was the secondary task. Participants were presented with different kinds of speech in a background of multi-speaker babble and were asked to repeat the last word of every sentence while performing the simultaneous tracking task. Word accuracy and visual-motor task performance were measured. Taken together, the results of Experiments 1 and 2 showed that the intelligibility of natural speech was better than that of synthetic speech and that synthetic speech was better perceived than cell phone speech. The visual-motor methodology provided independent, supplementary information and a better understanding of the entire speech perception process. Experiment 3 was conducted to determine whether the automaticity of the tasks (Schneider & Shiffrin, 1977) helped to explain the results of the first two experiments. It was found that cell phone speech allowed better simultaneous pursuit-rotor performance only at low intelligibility levels, when participants ignored the listening task. Also, simultaneous task performance improved dramatically for natural speech when intelligibility was good. Overall, it can be concluded that knowledge of intelligibility alone is insufficient to characterize the processing of different speech sources. Additional measures, such as attentional demands and performance of simultaneous tasks, were also important in characterizing the perception of different kinds of speech in complex listening environments.
Abstract:
This study verified the effects of contralateral noise on otoacoustic emissions and auditory evoked potentials. Short-, middle- and late-latency auditory evoked potentials, as well as otoacoustic emissions with and without white noise, were assessed. Twenty-five normal-hearing subjects of both genders, aged 18 to 30 years, were tested. In general, latencies of the various auditory potentials were increased in the noise conditions, whereas amplitudes were diminished in the noise conditions for short-, middle- and late-latency responses combined in the same subject. The amplitude of otoacoustic emissions decreased significantly in the condition with contralateral noise in comparison to the condition without noise. Our results indicate that most subjects presented different responses between conditions (with and without noise) in all tests, thereby suggesting that the efferent system acts on both caudal and rostral portions of the auditory system.
Abstract:
The effect produced by a warning stimulus (WS) in reaction time (RT) tasks is commonly attributed to a facilitation of sensorimotor mechanisms by alertness. Recently, evidence was presented that this effect is also related to a proactive inhibition of motor control mechanisms; this inhibition would hinder responding to the WS instead of the target stimulus (TS). Some studies have shown that auditory WS produce a stronger facilitatory effect than visual WS. The present study investigated whether the former also produce a stronger inhibitory effect than the latter. In one session, the RTs to a visual target were evaluated in two groups of volunteers. In a second session, subjects reacted to the visual target both with (50% of the trials) and without (50% of the trials) a WS. On trials in which a WS was presented, one group received a visual WS and the other group an auditory WS. In the first session, the mean RTs of the two groups did not differ significantly. In the second session, the mean RT of the two groups in the presence of the WS was shorter than in its absence. The mean RT in the absence of the auditory WS was significantly longer than the mean RT in the absence of the visual WS. Mean RTs did not differ significantly between the WS-present conditions of the visual and auditory groups. The longer RTs of the auditory WS group, as opposed to the visual WS group, in the WS-absent trials suggest that auditory WS exert a stronger inhibitory influence on responsivity than visual WS.
Abstract:
Healthcare, Human Computer Interfaces (HCI), security and biometrics are the most promising application scenarios directly involved in the evolution of Body Area Networks (BANs). Both wearable devices and sensors directly integrated into garments envision a world in which each of us is supervised by an invisible assistant monitoring our health and daily-life activities. New opportunities are enabled by improvements in sensor miniaturization and in the transmission efficiency of wireless protocols, which have made it possible to integrate high computational power into independent, energy-autonomous, small-form-factor devices. Applications serve various purposes: (I) data collection for off-line knowledge discovery; (II) notifying users about their activities or warning them when danger occurs; (III) biofeedback rehabilitation; (IV) remote alarm activation in case the subject needs assistance; (V) introduction of a more natural interaction with the surrounding computerized environment; (VI) user identification by physiological or behavioral characteristics. Telemedicine and mHealth [1] are two of the leading concepts directly related to healthcare. The capability to wear unobtrusive devices supports users' autonomy: a new sense of freedom is offered to the user, supported not only by psychological help but also by a real improvement in safety. Furthermore, the medical community aims to introduce new devices to innovate patient treatments, in particular by extending ambulatory analysis to real-life scenarios through continuous acquisition. The wide diffusion of emerging portable wellness equipment has extended the usability of wearable devices to fitness and training, by monitoring user performance on the task at hand. Learning the correct execution techniques for work, sport, or music can be supported by an electronic trainer furnishing adequate aid. HCIs made real the concepts of Ubiquitous Computing, Pervasive Computing and Calm Technology, introduced in 1988 by Mark Weiser and John Seely Brown. They promote the creation of pervasive environments that enhance the human experience: context-aware, adaptive and proactive environments serve and help people by becoming sensitive and reactive to their presence, since electronics is ubiquitous and deployed everywhere. In this thesis we pay attention to the integration of all the aspects involved in the development of a BAN. Starting from the choice of sensors, we design the node, configure the radio network, implement real-time data analysis and provide feedback to the user. We present algorithms to be implemented in a wearable assistant for posture and gait analysis and to provide assistance in different walking conditions, preventing falls. Our aim, expressed by the idea of contributing to the development of non-proprietary solutions, drove us to integrate commercial and standard solutions in our devices. We used sensors available on the market and avoided designing specialized sensors in ASIC technologies. We employed standard radio protocols and open-source projects whenever possible. The specific contributions of the PhD research activities are presented and discussed in the following.
• We have designed and built several wireless sensor nodes providing both sensing and actuation capabilities, with a focus on flexibility, small form factor and low power consumption. The key idea was to develop a simple, general-purpose architecture for rapid analysis, prototyping and deployment of BAN solutions. Two different sensing units are integrated: kinematic (3D accelerometer and 3D gyroscope) and kinetic (foot-floor contact pressure forces). Two kinds of feedback were implemented: audio and vibrotactile.
• Since the system built is a suitable platform for testing and measuring the features and constraints of a sensor network (radio communication, network protocols, power consumption and autonomy), we compared Bluetooth and ZigBee performance in terms of throughput and energy efficiency. Field tests evaluated usability in the fall-detection scenario.
• To prove the flexibility of the architecture designed, we implemented a wearable system for human posture rehabilitation. The application was developed in conjunction with biomedical engineers, who provided the audio algorithms used to furnish biofeedback to the user about his/her stability.
• We explored off-line gait analysis of the collected data, developing an algorithm to detect foot inclination in the sagittal plane during walking (see the sketch after this abstract).
• In collaboration with the Wearable Lab – ETH Zurich, we developed an algorithm to monitor the user in several walking conditions in which a load is carried.
The remainder of the thesis is organized as follows. Chapter I gives an overview of Body Area Networks (BANs), illustrating the relevant features of this technology and the key challenges still open; it concludes with a short list of real solutions and prototypes proposed by academic research and manufacturers. The domain of posture and gait analysis, the methodologies, and the technologies used to provide real-time feedback on detected events are illustrated in Chapter II. Chapters III and IV present BANs developed to detect falls and to monitor gait, respectively, taking advantage of two inertial measurement units and baropodometric insoles. Chapter V reports an audio-biofeedback system to improve balance based on information about the user's centre of mass. A walking assistant based on a KNN classifier to detect gait alterations under load carriage is described in Chapter VI.
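The foot-inclination bullet above refers to an off-line estimate of sagittal-plane orientation from the kinematic sensing unit. The thesis abstract does not spell the algorithm out, so the following is only a minimal sketch of one common approach: fusing the gyroscope pitch rate with the accelerometer gravity direction through a complementary filter (the sampling period, filter coefficient and axis convention are assumptions, not values from the thesis).

```python
# Illustrative sketch only: one common way to estimate sagittal-plane (pitch)
# inclination from a wearable IMU, fusing gyroscope and accelerometer data
# with a complementary filter.
import math

def estimate_pitch(acc_samples, gyro_pitch_rates, dt=0.01, alpha=0.98):
    """acc_samples: iterable of (ax, ay, az) in g, x assumed forward;
    gyro_pitch_rates: pitch angular rate in deg/s.
    Returns a list of pitch (sagittal inclination) estimates in degrees."""
    pitch = 0.0
    estimates = []
    for (ax, ay, az), gy in zip(acc_samples, gyro_pitch_rates):
        # Accelerometer-only pitch: reliable when the foot is roughly static.
        acc_pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
        # Gyro integration tracks fast motion; the accelerometer corrects drift.
        pitch = alpha * (pitch + gy * dt) + (1.0 - alpha) * acc_pitch
        estimates.append(pitch)
    return estimates

# Example call with synthetic data: a level, motionless foot.
print(estimate_pitch([(0.0, 0.0, 1.0)] * 5, [0.0] * 5))
```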
Abstract:
Numerous studies show that temporal intervals are represented through a spatial code extending from left to right, with short intervals represented to the left of long ones. Moreover, this spatial arrangement of time can be influenced by manipulating spatial attention. The present thesis contributes to the current debate on the relationship between the spatial representation of time and spatial attention by using a technique that modulates spatial attention, namely Prismatic Adaptation (PA). The first part is devoted to the mechanisms underlying this relationship. We showed that shifting spatial attention with PA toward one side of space produces a distortion of the representation of temporal intervals consistent with the side of the attentional shift. This occurs with both visual and auditory stimuli, even though the auditory modality is not directly involved in the visuo-motor PA procedure. This result suggests that the spatial code used to represent time is a central mechanism that is influenced at high levels of spatial cognition. The thesis continues with an investigation of the cortical areas mediating the space-time interaction, using neuropsychological, neurophysiological and neuroimaging methods. In particular, we showed that areas located in the right hemisphere are crucial for time processing, whereas areas located in the left hemisphere are crucial for the PA procedure and for PA to affect temporal intervals. Finally, the thesis addresses disorders of the spatial representation of time. The results indicate that a spatial-attention deficit following right-hemisphere damage causes a deficit in the spatial representation of time, which negatively affects patients' daily life. Particularly interesting are the results obtained with PA: a PA treatment that is effective in reducing the spatial-attention deficit also reduces the deficit in the spatial representation of time, improving patients' quality of life.
Abstract:
The aim of this thesis was to investigate the respective contributions of prior information and sensorimotor constraints to action understanding, and to estimate their consequences for the evolution of human social learning. Even though a huge amount of literature is dedicated to the study of action understanding and its role in social learning, these issues are still largely debated. Here, I critically describe two main perspectives. The first perspective interprets faithful social learning as an outcome of a fine-grained representation of others' actions and intentions that requires sophisticated socio-cognitive skills. In contrast, the second perspective highlights the role of simpler decision heuristics, the recruitment of which is determined by individual and ecological constraints. The present thesis aims to show, through four experimental works, that these two contributions are not mutually exclusive. A first study investigates the role of the inferior frontal cortex (IFC), the anterior intraparietal area (AIP) and the primary somatosensory cortex (S1) in the recognition of other people's actions, using a transcranial magnetic stimulation adaptation paradigm (TMSA). The second work studies whether, and how, higher-order and lower-order prior information (acquired from the probabilistic sampling of past events vs. derived from an estimation of the biomechanical constraints of observed actions) interact during the prediction of other people's intentions. Using a single-pulse TMS procedure, the third study investigates whether the interaction between these two classes of priors modulates motor system activity. The fourth study tests the extent to which behavioral and ecological constraints influence the emergence of faithful social learning strategies at a population level. The collected data help to elucidate how higher-order and lower-order prior expectations interact during action prediction, and clarify the neural mechanisms underlying such interaction. Finally, these works open promising perspectives for a better understanding of social learning, with possible extensions to animal models.
Abstract:
Cytoplasmic dynein in filamentous fungi accumulates at microtubule plus-ends near the hyphal tip, which is important for minus-end-directed transport of early endosomes. It was hypothesized that dynein is switched on at the plus-end by cargo association. Here, we show in Aspergillus nidulans that kinesin-1-dependent plus-end localization is not a prerequisite for dynein ATPase activation. First, the Walker A and Walker B mutations in the dynein heavy chain AAA1 domain implicated in blocking different steps of the ATPase cycle cause different effects on dynein localization to microtubules, arguing against the suggestion that the ATPase is inactive before arriving at the plus-end. Second, dynein from kinA (kinesin 1) mutant cells has normal ATPase activity despite the absence of dynein plus-end accumulation. In kinA hyphae, dynein localizes along microtubules and does not colocalize with abnormally accumulated early endosomes at the hyphal tip. This is in contrast to the colocalization of dynein and early endosomes in the absence of NUDF/LIS1. However, the Walker B mutation allows dynein to colocalize with the hyphal-tip-accumulated early endosomes in the kinA background. We suggest that the normal ability of dynein to interact with microtubules as an active minus-end-directed motor demands kinesin-1-mediated plus-end accumulation for effective interactions with early endosomes.
When that tune runs through your head: a PET investigation of auditory imagery for familiar melodies
Abstract:
The present study used positron emission tomography (PET) to examine the cerebral activity pattern associated with auditory imagery for familiar tunes. Subjects either imagined the continuation of nonverbal tunes cued by their first few notes, listened to a short sequence of notes as a control task, or listened and then reimagined that short sequence. Subtraction of the activation in the control task from that in the real-tune imagery task revealed primarily right-sided activation in frontal and superior temporal regions, plus supplementary motor area (SMA). Isolating retrieval of the real tunes by subtracting activation in the reimagine task from that in the real-tune imagery task revealed activation primarily in right frontal areas and right superior temporal gyrus. Subtraction of activation in the control condition from that in the reimagine condition, intended to capture imagery of unfamiliar sequences, revealed activation in SMA, plus some left frontal regions. We conclude that areas of right auditory association cortex, together with right and left frontal cortices, are implicated in imagery for familiar tunes, in accord with previous behavioral, lesion and PET data. Retrieval from musical semantic memory is mediated by structures in the right frontal lobe, in contrast to results from previous studies implicating left frontal areas for all semantic retrieval. The SMA seems to be involved specifically in image generation, implicating a motor code in this process.
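The three subtractions described in this abstract can be illustrated with a minimal sketch on hypothetical, subject-averaged activation maps; this is not the study's actual PET analysis pipeline, only the condition-contrast arithmetic it describes, with an invented voxel grid.

```python
# Hedged illustration only: the paired condition subtractions described in the
# abstract, applied to hypothetical voxel-wise activation maps.
import numpy as np

rng = np.random.default_rng(1)
shape = (40, 48, 40)                    # hypothetical voxel grid
imagery_map = rng.normal(size=shape)    # real-tune imagery condition
control_map = rng.normal(size=shape)    # listening control condition
reimagine_map = rng.normal(size=shape)  # listen-and-reimagine condition

# Imagery minus control: activity attributable to imagining familiar tunes.
imagery_contrast = imagery_map - control_map
# Imagery minus reimagine: isolates retrieval of the familiar tunes.
retrieval_contrast = imagery_map - reimagine_map
# Reimagine minus control: imagery of unfamiliar (just-heard) sequences.
unfamiliar_contrast = reimagine_map - control_map

print(imagery_contrast.mean(), retrieval_contrast.mean(), unfamiliar_contrast.mean())
```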
Abstract:
Most people intuitively understand what it means to “hear a tune in your head.” Converging evidence now indicates that auditory cortical areas can be recruited even in the absence of sound and that this corresponds to the phenomenological experience of imagining music. We discuss these findings as well as some methodological challenges. We also consider the role of core versus belt areas in musical imagery, the relation between auditory and motor systems during imagery of music performance, and practical implications of this research.
Abstract:
The vocal imitation of pitch by singing requires one to plan laryngeal movements on the basis of anticipated target pitch events. This process may rely on auditory imagery, which has been shown to activate motor planning areas. As such, we hypothesized that poor-pitch singing, although not typically associated with deficient pitch perception, may be associated with deficient auditory imagery. Participants vocally imitated simple pitch sequences by singing, discriminated pitch pairs on the basis of pitch height, and completed an auditory imagery self-report questionnaire (the Bucknell Auditory Imagery Scale). The percentage of trials participants sang in tune correlated significantly with self-reports of vividness for auditory imagery, although not with the ability to control auditory imagery. Pitch discrimination was not predicted by auditory imagery scores. The results thus support a link between auditory imagery and vocal imitation.
Abstract:
BACKGROUND: The interaction of sevoflurane and opioids can be described by response surface modeling using the hierarchical model. We expanded this for combined administration of sevoflurane, opioids, and 66 vol.% nitrous oxide (N2O), using historical data on the motor and hemodynamic responsiveness to incision, the minimal alveolar concentration, and the minimal alveolar concentration to block autonomic reflexes to nociceptive stimuli, respectively. METHODS: Four potential actions of 66 vol.% N2O were postulated: (1) N2O is equivalent to A ng/ml of fentanyl (additive); (2) N2O reduces the C50 of fentanyl by factor B; (3) N2O is equivalent to X vol.% of sevoflurane (additive); (4) N2O reduces the C50 of sevoflurane by factor Y. These four actions, and all combinations, were fitted on the data using NONMEM (version VI, Icon Development Solutions, Ellicott City, MD), assuming identical interaction parameters (A, B, X, Y) for movement and sympathetic responses. RESULTS: Sixty-six vol.% nitrous oxide evokes an additive effect corresponding to 0.27 ng/ml fentanyl (A) together with an additive effect corresponding to 0.54 vol.% sevoflurane (X). Parameters B and Y did not improve the fit. CONCLUSION: The effect of nitrous oxide can be incorporated into the hierarchical interaction model with a simple extension. The model can be used to predict the probability of movement and sympathetic responses during sevoflurane anesthesia, taking into account interactions with opioids and 66 vol.% N2O.
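The reported additive equivalences can be written out as a small worked example. Only the concentration adjustment is sketched here, under the assumption stated in the abstract that 66 vol.% N2O is handled by adding the fentanyl equivalent A = 0.27 ng/ml and the sevoflurane equivalent X = 0.54 vol.% before evaluating the hierarchical response-surface model; the model itself is not reproduced, and the example concentrations are invented.

```python
# Hedged sketch of the additive N2O extension reported in the abstract:
# 66 vol.% N2O is treated as equivalent to adding 0.27 ng/ml fentanyl (A)
# and 0.54 vol.% sevoflurane (X) before the hierarchical model is evaluated.
def effective_concentrations(sevo_vol_pct, fentanyl_ng_ml, n2o_66_pct_present):
    """Return (effective sevoflurane vol.%, effective fentanyl ng/ml)."""
    A_FENTANYL_EQ_NG_ML = 0.27   # additive fentanyl equivalent of 66% N2O
    X_SEVO_EQ_VOL_PCT = 0.54     # additive sevoflurane equivalent of 66% N2O
    if n2o_66_pct_present:
        fentanyl_ng_ml += A_FENTANYL_EQ_NG_ML
        sevo_vol_pct += X_SEVO_EQ_VOL_PCT
    return sevo_vol_pct, fentanyl_ng_ml

# Example: 1.2 vol.% sevoflurane and 1.0 ng/ml fentanyl, with 66% N2O present.
print(effective_concentrations(1.2, 1.0, True))  # -> (1.74, 1.27)
```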
Abstract:
Imitation learning is a promising approach for generating life-like behaviors of virtual humans and humanoid robots. So far, however, imitation learning has been mostly restricted to single-agent settings, where observed motions are adapted to new environmental conditions but not to the dynamic behavior of interaction partners. In this paper, we introduce a new imitation learning approach that is based on the simultaneous motion capture of two human interaction partners. From the observed interactions, low-dimensional motion models are extracted and a mapping between these motion models is learned. This interaction model allows the real-time generation of agent behaviors that are responsive to the body movements of an interaction partner. The interaction model can be applied both to the animation of virtual characters and to behavior generation for humanoid robots.
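As a hedged illustration of the pipeline this abstract outlines (the paper's actual motion-model and mapping techniques are not specified here), the sketch below extracts low-dimensional motion models with PCA from two synchronized motion-capture streams and learns a ridge-regression mapping between the latent spaces, so that a responsive pose for one partner can be generated in real time from an observed pose of the other; the data and dimensionalities are invented.

```python
# Illustrative sketch (not the paper's actual pipeline): low-dimensional motion
# models via PCA and a learned latent-space mapping between interaction partners.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# Hypothetical synchronized motion capture: frames x joint-angle dimensions.
rng = np.random.default_rng(42)
frames_a = rng.normal(size=(500, 60))                          # observed human partner
frames_b = frames_a @ rng.normal(size=(60, 60)) * 0.1 \
           + rng.normal(size=(500, 60)) * 0.01                 # responding partner

pca_a = PCA(n_components=8).fit(frames_a)   # low-dimensional motion model, partner A
pca_b = PCA(n_components=8).fit(frames_b)   # low-dimensional motion model, partner B

latent_a = pca_a.transform(frames_a)
latent_b = pca_b.transform(frames_b)

mapping = Ridge(alpha=1.0).fit(latent_a, latent_b)   # interaction model

# Real-time use: map a newly observed pose of A to a responsive pose for B.
new_pose_a = frames_a[:1]
generated_b = pca_b.inverse_transform(mapping.predict(pca_a.transform(new_pose_a)))
print(generated_b.shape)  # (1, 60): a full joint-angle pose for the agent
```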