Abstract:
Robotic manipulanda are extensively used in the investigation of the motor control of human arm movements. They permit the application of translational forces to the arm based on its state and can be used to probe issues ranging from mechanisms of neural control to biomechanics. However, most current designs are optimized for studying either motor learning or stiffness, and few include end-point torque control, which is important for the simulation of objects and the study of tool use. Here we describe a modular, general-purpose, two-dimensional planar manipulandum (vBOT) primarily optimized for dynamic learning paradigms. It employs a carbon-fibre arm arranged as a parallelogram, which is driven by motors via timing pulleys. The design minimizes the intrinsic dynamics of the manipulandum without active compensation. A novel variant of the design (WristBOT) can apply torques at the handle using an add-on cable-drive mechanism. In a second variant (StiffBOT), a more rigid arm can be substituted and zero-backlash belts can be used, making the StiffBOT more suitable for the study of stiffness. The three variants can be used with custom-built display rigs, mountings, and air tables. We investigated the performance of the vBOT and its variants in terms of effective end-point mass, viscosity and stiffness. Finally, we present an object manipulation task using the WristBOT. This demonstrates that subjects can perceive the orientation of the principal axis of an object based on haptic feedback arising from its rotational dynamics.
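As a rough illustration of how effective end-point mass, viscosity and stiffness can be characterized from handle measurements, the following is a minimal sketch that fits a generic second-order model F = M·a + B·v + K·x by least squares. The recorded signals, parameter values and single-axis model are illustrative assumptions, not the vBOT's actual identification procedure.

```python
# Minimal sketch: estimate effective end-point mass (M), viscosity (B) and
# stiffness (K) along one axis by a least-squares fit of F = M*a + B*v + K*x.
# Assumes force f and kinematics x, v, a are recorded at the handle; this
# generic second-order fit is illustrative, not the vBOT's actual procedure.
import numpy as np

def estimate_endpoint_impedance(f, x, v, a):
    """Return (M, B, K) from force f and kinematics x, v, a (1-D arrays)."""
    A = np.column_stack([a, v, x])              # regressors: accel, vel, pos
    coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
    M, B, K = coeffs
    return M, B, K

# Synthetic demo: a 2 kg, 5 N·s/m, 100 N/m end-point with measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 2000)
x = 0.05 * np.sin(2 * np.pi * 2 * t)            # 5 cm, 2 Hz oscillation
v = np.gradient(x, t)
a = np.gradient(v, t)
f = 2.0 * a + 5.0 * v + 100.0 * x + rng.normal(0, 0.05, t.size)
print(estimate_endpoint_impedance(f, x, v, a))  # approximately (2.0, 5.0, 100.0)
```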
Abstract:
Humans use their arms to engage in a wide variety of motor tasks during everyday life. However, little is known about the statistics of these natural arm movements. Studies of the sensory system have shown that the statistics of sensory inputs are key to determining sensory processing. We hypothesized that the statistics of natural everyday movements may, in a similar way, influence motor performance as measured in laboratory-based tasks. We developed a portable motion-tracking system that could be worn by subjects as they went about their daily routine outside of a laboratory setting. We found that the well-documented symmetry bias is reflected in the relative incidence of movements made during everyday tasks. Specifically, symmetric and antisymmetric movements are predominant at low frequencies, whereas only symmetric movements are predominant at high frequencies. Moreover, the statistics of natural movements, that is, their relative incidence, correlated with subjects' performance on a laboratory-based phase-tracking task. These results provide a link between natural movement statistics and motor performance and confirm that the symmetry bias documented in laboratory studies is a natural feature of human movement.
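As a hedged sketch of one way such movement statistics could be summarized, the decomposition below splits bimanual traces into symmetric and antisymmetric components and compares their spectral power by frequency; the coordinate convention and analysis choices are assumptions, not the study's actual pipeline.

```python
# Sketch: split bimanual hand traces into symmetric and antisymmetric
# components and compare their spectral power, as one simple way to quantify
# the relative incidence of each movement type by frequency. Assumes the two
# traces are expressed in mirror-symmetric coordinates; illustrative only.
import numpy as np

def sym_antisym_power(left, right, fs):
    """left, right: 1-D position traces (mirror-symmetric coordinates); fs in Hz."""
    sym = 0.5 * (left + right)                  # in-phase (symmetric) component
    antisym = 0.5 * (left - right)              # anti-phase (antisymmetric) component
    freqs = np.fft.rfftfreq(left.size, d=1.0 / fs)
    p_sym = np.abs(np.fft.rfft(sym)) ** 2
    p_anti = np.abs(np.fft.rfft(antisym)) ** 2
    return freqs, p_sym, p_anti
```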
Abstract:
Our ability to skillfully manipulate an object often involves the motor system learning to compensate for the dynamics of the object. When the two arms learn to manipulate a single object they can act cooperatively, whereas when they manipulate separate objects they control each object independently. We examined how learning transfers between these two bimanual contexts by applying force fields to the arms. In a coupled context, a single dynamic is shared between the arms, and in an uncoupled context, separate dynamics are experienced independently by each arm. In a composition experiment, we found that when subjects had learned uncoupled force fields, they were able to transfer to a coupled field that was the sum of the two fields. However, the contribution of each arm repartitioned over time so that, when they returned to the uncoupled fields, the error initially increased but rapidly reverted to the previous level. In a decomposition experiment, after subjects learned a coupled field, their error increased when exposed to uncoupled fields that were orthogonal components of the coupled field. However, when the coupled field was reintroduced, subjects rapidly readapted. These results suggest that the representations of dynamics for uncoupled and coupled contexts are partially independent. We found additional support for this hypothesis by showing significant learning of opposing curl fields when the context, coupled versus uncoupled, was alternated with the curl field direction. These results suggest that the motor system is able to use partially separate representations for dynamics of the two arms acting on a single object and two arms acting on separate objects.
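A minimal sketch of the composition idea follows, assuming two illustrative linear velocity-dependent fields (one per arm in the uncoupled context) whose sum defines the coupled field; the specific field matrices are not those used in the experiment.

```python
# Sketch: composing a "coupled" field as the sum of two "uncoupled" linear
# velocity-dependent fields, one per arm. The field matrices below are
# illustrative assumptions, not the fields used in the experiment.
import numpy as np

B_LEFT = np.array([[0.0, 13.0], [-13.0, 0.0]])   # curl-like field for the left arm
B_RIGHT = np.array([[-8.0, 0.0], [0.0, -8.0]])   # viscous-like field for the right arm

def uncoupled_forces(v_left, v_right):
    """Uncoupled context: each arm independently experiences its own field."""
    return B_LEFT @ v_left, B_RIGHT @ v_right

def coupled_force(v_object):
    """Coupled context: the shared object experiences the sum of the two fields."""
    return (B_LEFT + B_RIGHT) @ v_object
```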
Abstract:
Predictions for a 75 × 205 mm surface semi-elliptic defect in the NESC-1 spinning cylinder test have been made using BS PD 6493:1991, the R6 procedure, non-linear cracked-body finite element analysis techniques and the local approach to fracture. All the techniques agree in predicting ductile tearing near the inner surface of the cylinder followed by cleavage initiation. However, they differ in the amount of ductile tearing, and in the exact location and time of any cleavage event. The amount of ductile tearing decreases with increasing sophistication in the analysis, due to the drop in peak crack driving force and more explicit consideration of constraint effects. The local approach predicts a high probability of cleavage in both the HAZ (heat-affected zone) and the base material after 190 s, while the other predictions suggest that cleavage is unlikely in the HAZ due to constraint loss, but likely in the underlying base material. The timing of this event varies from ∼150 s for the R6 predictions to ∼250–300 s using non-linear cracked-body analysis.
Abstract:
Recent research into the acquisition of spoken language has stressed the importance of learning through embodied linguistic interaction with caregivers rather than through passive observation. However, the necessity of interaction makes experimental work into the simulation of infant speech acquisition difficult because of the technical complexity of building real-time embodied systems. In this paper we present KLAIR: a software toolkit for building simulations of spoken language acquisition through interactions with a virtual infant. The main part of KLAIR is a sensori-motor server that supplies a client machine-learning application with a virtual infant on screen that can see, hear and speak. By encapsulating the real-time complexities of audio and video processing within a server that will run on a modern PC, we hope that KLAIR will encourage and facilitate more experimental research into spoken language acquisition through interaction.
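To make the client-server architecture concrete, here is a purely hypothetical sketch of a machine-learning client polling a sensori-motor server; the host, port and message names are invented for illustration and do not reflect KLAIR's actual interface.

```python
# Purely hypothetical sketch of a learning client talking to a sensori-motor
# server such as KLAIR's. The host, port and line-based message format are
# illustrative assumptions only; consult the KLAIR documentation for the
# actual interface.
import socket

HOST, PORT = "localhost", 5000                 # hypothetical server address

with socket.create_connection((HOST, PORT)) as sock:
    stream = sock.makefile("rw")               # line-based text stream

    def request(message):
        """Send one newline-terminated request and return the reply line."""
        stream.write(message + "\n")
        stream.flush()
        return stream.readline().strip()

    audio_frame = request("GET_AUDIO")         # hypothetical: what the infant hears
    video_frame = request("GET_VIDEO")         # hypothetical: what the infant sees
    request("SPEAK babble_001")                # hypothetical articulation command
```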
Abstract:
In virtual assembly verification or remote maintenance tasks, bimanual haptic interfaces play a crucial role in successful task completion. This paper proposes a method for objectively comparing how well a haptic interface covers the reachable workspace of human arms. Two system configurations are analyzed for a recently introduced haptic device that is based on two DLR-KUKA lightweight robots: the standard configuration, where the device is opposite the human operator, and the ergonomic configuration, where the haptic device is mounted behind the human operator. The human operator directly controls the robotic arms using handles. The analysis is performed using a representation of the robot arm workspace. The merits of restricting the comparisons to the most significant regions of the human workspace are discussed. Using this method, the ergonomic configuration was shown to provide greater workspace correspondence.
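As an illustration of workspace comparison by sampling, the sketch below estimates coverage as the fraction of sampled human-workspace points reachable by the device; the spherical reachability models and dimensions are placeholder assumptions, not the workspace representation used in the paper.

```python
# Sketch: quantify workspace correspondence as the fraction of sampled human
# arm workspace points that lie inside the device's reachable workspace.
# The reachability tests below are crude placeholders; the actual study used
# a detailed representation of the robot arm workspace.
import numpy as np

def coverage(human_points, device_reachable):
    """human_points: (N, 3) samples; device_reachable: callable point -> bool."""
    inside = np.fromiter((device_reachable(p) for p in human_points), dtype=bool)
    return inside.mean()

def sphere_reachable(centre, radius=0.8):
    """Placeholder reachability model: a sphere around the device base."""
    return lambda p: np.linalg.norm(p - centre) <= radius

rng = np.random.default_rng(1)
human_ws = rng.uniform(-0.7, 0.7, size=(10000, 3))        # crude human workspace samples
standard = sphere_reachable(np.array([0.9, 0.0, 0.0]))    # device opposite the operator
ergonomic = sphere_reachable(np.array([-0.3, 0.0, 0.0]))  # device behind the operator
print(coverage(human_ws, standard), coverage(human_ws, ergonomic))
```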
Abstract:
Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar object dynamics, however, the representations can be engaged based on visual context, and are updated by a single-rate process.
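For reference, a minimal sketch of the two standard trial-by-trial learning rules discussed, a single-rate process and a dual-rate (fast plus slow) process, is given below; the retention and learning-rate parameters are illustrative, not fitted values from the study.

```python
# Minimal sketch of the trial-by-trial state-space learning rules discussed:
# a single-rate process x(n+1) = A*x(n) + B*e(n), and a dual-rate process
# whose net output is the sum of a fast and a slow state. Parameter values
# are illustrative only.
import numpy as np

def simulate_single(perturbation, A=0.99, B=0.05):
    """Single-rate adaptation to a trial-by-trial perturbation sequence."""
    x, states = 0.0, []
    for p in perturbation:
        e = p - x                 # error experienced on this trial
        x = A * x + B * e         # retention plus error-driven update
        states.append(x)
    return np.array(states)

def simulate_dual(perturbation, Af=0.92, Bf=0.20, As=0.996, Bs=0.02):
    """Dual-rate adaptation: net output is the sum of fast and slow states."""
    xf = xs = 0.0
    states = []
    for p in perturbation:
        e = p - (xf + xs)
        xf = Af * xf + Bf * e
        xs = As * xs + Bs * e
        states.append(xf + xs)
    return np.array(states)

# Exposure (+1) followed by de-adaptation (0): the dual-rate model exhibits
# rebound effects that the single-rate model lacks.
pert = np.r_[np.ones(200), np.zeros(100)]
single, dual = simulate_single(pert), simulate_dual(pert)
```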
Abstract:
A recent study has found that toddlers do not compensate for an artificial alteration in a vowel they hear themselves producing. This raises questions about how young children learn speech sounds.
Abstract:
Real-world tasks often require movements that depend on a previous action or on changes in the state of the world. Here we investigate whether motor memories encode the current action in a manner that depends on previous sensorimotor states. Human subjects performed trials in which they made movements in a randomly selected clockwise or counterclockwise velocity-dependent curl force field. Movements during this adaptation phase were preceded by a contextual phase that determined which of the two fields would be experienced on any given trial. As expected from previous research, when static visual cues were presented in the contextual phase, strong interference (resulting in an inability to learn either field) was observed. In contrast, when the contextual phase involved subjects making a movement that was continuous with the adaptation-phase movement, a substantial reduction in interference was seen. As the time between the contextual and adaptation movement increased, so did the interference, reaching a level similar to that seen for static visual cues for delays >600 ms. This contextual effect generalized to purely visual motion, active movement without vision, passive movement, and isometric force generation. Our results show that sensorimotor states that differ in their recent temporal history can engage distinct representations in motor memory, but this effect decays progressively over time and is abolished by ∼600 ms. This suggests that motor memories are encoded not simply as a mapping from current state to motor command but are encoded in terms of the recent history of sensorimotor states.
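A minimal sketch of the trial structure follows, assuming a velocity-dependent curl field whose direction (clockwise or counterclockwise) is selected by the contextual phase; the gain and context labels are illustrative assumptions.

```python
# Sketch of the basic trial structure: a contextual cue (here, which of two
# lead-in movements was made) selects a clockwise or counterclockwise
# velocity-dependent curl field for the adaptation movement. The gain value
# and context labels are illustrative assumptions.
import numpy as np

ROTATION = np.array([[0.0, 1.0], [-1.0, 0.0]])

def curl_force(velocity, gain):
    """F = gain * [[0, 1], [-1, 0]] @ v: perpendicular to the hand velocity."""
    return gain * ROTATION @ np.asarray(velocity, dtype=float)

def field_for_trial(context, velocity, gain=13.0):
    """context: 'lead_in_A' -> clockwise field, 'lead_in_B' -> counterclockwise."""
    signed_gain = gain if context == "lead_in_A" else -gain
    return curl_force(velocity, signed_gain)
```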
Abstract:
Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
Abstract:
Humans appear to be sensitive to relatively small changes in their surroundings. These changes are often initially perceived as irrelevant, but they can cause significant changes in behavior. However, exactly how people's behavior changes is often hard to quantify. A reliable and valid tool is needed in order to address such a question, ideally measuring an important point of interaction, such as the hand. Wearable body-sensor systems can be used to obtain valuable behavioral information. These systems are particularly useful for assessing functional interactions that occur between the endpoints of the upper limbs and our surroundings. A new method is explored that consists of computing hand position using a wearable sensor system and validating it against a gold-standard reference measurement (an optical tracking device). Initial outcomes related well to the gold-standard measurements (r = 0.81), with an acceptable average root-mean-square error of 0.09 m. Subsequently, the use of this approach was further investigated by measuring differences in motor behavior in response to a changing environment. Three subjects were asked to perform a water-pouring task with three slightly different containers. Wavelet analysis was introduced to assess how motor consistency was affected by these small environmental changes. Results showed that the behavioral motor adjustments to a variable environment could be assessed by applying wavelet coherence techniques. Applying these procedures in everyday life, combined with correct research methodologies, can assist in quantifying how environmental changes can cause alterations in our motor behavior.
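As a hedged sketch of the validation step, the code below compares wearable-sensor hand positions against time-aligned optical-tracking references using Pearson correlation and root-mean-square error (the study reports r = 0.81 and an RMSE of 0.09 m); the synchronization, coordinate alignment and correlation choice are simplifying assumptions, and the wavelet coherence analysis is not reproduced here.

```python
# Sketch of the validation step: compare wearable-sensor hand-position
# estimates against synchronized optical-tracking references using Pearson
# correlation and root-mean-square error. Assumes equal-length, time-aligned
# (N, 3) arrays in a common coordinate frame; illustrative only.
import numpy as np
from scipy.stats import pearsonr

def validate(wearable, optical):
    """wearable, optical: (N, 3) hand positions in metres."""
    wearable, optical = np.asarray(wearable), np.asarray(optical)
    err = np.linalg.norm(wearable - optical, axis=1)        # per-sample 3-D error
    rmse = np.sqrt(np.mean(err ** 2))
    # One simple choice: correlate each system's displacement from its first sample.
    r, _ = pearsonr(np.linalg.norm(wearable - wearable[0], axis=1),
                    np.linalg.norm(optical - optical[0], axis=1))
    return r, rmse
```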