25 results for Hand radiography
in CentAUR: Central Archive University of Reading - UK
Abstract:
We investigated previously observed but unexplained differences in incubation success between wild and hand-reared common pheasants Phasianus colchicus. Hand-reared birds are widely released in late summer in Britain and elsewhere to supplement wild stocks for shooting purposes. We radio-tracked 53 wild and 35 previously released reared female pheasants simultaneously occupying the same areas on a game-keepered estate in eastern England between February and mid-July in 1999 and 2000. Predation of adult birds was comparatively low for both wild and reared birds, and overall survival did not differ between years or between groups. However, of 52 nests incubated by wild females 49% hatched, whereas of 30 nests incubated by reared females only 22% hatched. Mayfield estimates of daily nest survival probability thus differed significantly between groups. However, predation of eggs was similar for both wild and reared birds. Instead, the observed difference in hatch rates was due to nest abandonment, with more reared females (41%) than wild females (6%) deserting apparently unmolested nest sites.
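The "Mayfield estimate" referred to above is the standard daily-nest-survival calculation. As a purely illustrative sketch (the abstract reports neither exposure days nor the incubation period used, so the counts and the 25-day incubation length below are assumptions), the computation looks like this in Python:

    # Illustrative Mayfield (1975) daily nest survival calculation.
    # Exposure-day counts and the 25-day incubation period are hypothetical;
    # the abstract does not report them.
    def mayfield_daily_survival(failed_nests, exposure_days):
        """Daily survival rate = 1 - (nest failures / nest exposure-days)."""
        return 1.0 - failed_nests / exposure_days

    def overall_nest_success(daily_survival, incubation_days=25):
        """Probability that a nest survives the full incubation period."""
        return daily_survival ** incubation_days

    wild_dsr = mayfield_daily_survival(failed_nests=27, exposure_days=900)    # hypothetical counts
    reared_dsr = mayfield_daily_survival(failed_nests=23, exposure_days=450)  # hypothetical counts
    print(overall_nest_success(wild_dsr), overall_nest_success(reared_dsr))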
Abstract:
Background: Shifting gaze and attention ahead of the hand is a natural component in the performance of skilled manual actions. Very few studies have examined the precise co-ordination between the eye and hand in children with Developmental Coordination Disorder (DCD). Methods: This study directly assessed the maturity of eye-hand co-ordination in children with DCD. A double-step pointing task was used to investigate the coupling of the eye and hand in 7-year-old children with and without DCD. Sequential targets were presented on a computer screen, and eye and hand movements were recorded simultaneously. Results: There were no differences between typically developing (TD) and DCD groups when completing fast single-target tasks. There were very few differences in the completion of the first movement in the double-step tasks, but differences did occur during the second sequential movement. One factor appeared to be the propensity of the children with DCD to delay their hand movement until some period after the eye had landed on the target. This resulted in a marked increase in eye-hand lead during the second movement, disrupting the close coupling and leading to a slower and less accurate hand movement among children with DCD. Conclusions: In contrast to skilled adults, both groups of children preferred to foveate the target prior to initiating a hand movement if time allowed. The TD children, however, were more able to reduce this foveation period and shift towards a feedforward mode of control for hand movements. The children with DCD persevered with a look-then-move strategy, which led to an increase in error. For the group of children with DCD in this study, there was no evidence of a problem in speed or accuracy of simple movements, but there was a difficulty in concatenating the sequential shifts of gaze and hand required for the completion of everyday tasks or typical assessment items.
Abstract:
Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixated. Three explanations as to why free gaze improves accuracy are: shifting gaze to a target allows visual feedback to guide the hand to the target (feedback loop); shifting gaze generates ocular proprioception which can be used to update a movement (feedback-feedforward); or efference copy could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback about eye position relative to the head by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements with a double-step pointing task and document eye-hand lead times of approximately 200 ms for both initial and secondary movements. The observation that participants move gaze well ahead of the current hand target rules out foveal feedback as a major contribution. We argue for a feedforward model based on eye movement efference as the major factor in enabling accurate hand movements. The results with the double-step target task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation where the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand movement accuracy.
Abstract:
The coding of body part location may depend upon both visual and proprioceptive information, and allows targets to be localized with respect to the body. The present study investigates the interaction between visual and proprioceptive localization systems under conditions of multisensory conflict induced by optokinetic stimulation (OKS). Healthy subjects were asked to estimate the apparent motion speed of a visual target (LED) that could be located either in the extrapersonal space (visual encoding only, V), or at the same distance, but stuck on the subject's right index finger-tip (visual and proprioceptive encoding, V-P). Additionally, the multisensory condition was performed with the index finger kept in position both passively (V-P passive) and actively (V-P active). Results showed that the visual stimulus was always perceived to move, irrespective of its out- or on-the-body location. Moreover, this apparent motion speed varied consistently with the speed of the moving OKS background in all conditions. Surprisingly, no differences were found between V-P active and V-P passive conditions in the speed of apparent motion. The persistence of the visual illusion during the active posture maintenance reveals a novel condition in which vision totally dominates over proprioceptive information, suggesting that the hand-held visual stimulus was perceived as a purely visual, external object despite its contact with the hand.
Abstract:
Defensive behaviors, such as withdrawing your hand to avoid potentially harmful approaching objects, rely on rapid sensorimotor transformations between visual and motor coordinates. We examined the reference frame for coding visual information about objects approaching the hand during motor preparation. Subjects performed a simple visuomanual task while a task-irrelevant distractor ball rapidly approached a location either near to or far from their hand. After the appearance of the distractor ball, single pulses of transcranial magnetic stimulation were delivered over the subject's primary motor cortex, eliciting motor evoked potentials (MEPs) in their responding hand. MEP amplitude was reduced when the ball approached near the responding hand, both when the hand was to the left and to the right of the midline. Strikingly, this suppression occurred very early, at 70-80 ms after ball appearance, and was not modified by visual fixation location. Furthermore, it was selective for approaching balls, since static visual distractors did not modulate MEP amplitude. Together with additional behavioral measurements, we provide converging evidence for automatic hand-centered coding of visual space in the human brain.
Abstract:
Historically, commercial hand prostheses have adopted a low level of innovation, mainly because of the strict conditions such a system must satisfy. The difficulty of providing feedback to the prosthesis user has limited the functional range of commercial systems. Nevertheless, the use of advanced sensors in combination with high-performance hand mechanisms and microcontrollers could lead to more natural and functional prototypes. The Oxford and Manus intelligent hand prostheses are examples of innovative approaches. This paper compares and contrasts the technological solutions implemented in both systems to address the design conditions.
Abstract:
The Southampton Hand Assessment Procedure (SHAP) was devised to assess quantitatively the functional range of injured and healthy adult hands. It was designed to be a practical tool for use in a busy clinical setting; thus, it was made simple to use and easy to interpret. This paper describes four examples of its use: before and after a surgical procedure, to observe the impact of an injury, use with prostheses, and during recovery following a fracture. The cases show that the SHAP is capable of monitoring progress and recovery, identifying functional abilities in prosthetic hands and comparing the capabilities of different groups of injuries.
Abstract:
The spectral content of the myoelectric signals from the muscles of the remnant forearms of three persons with congenital absences (CA) of their forearms was compared with signals from their intact contra-lateral limbs, from similar muscles in three persons with acquired losses (AL), and from seven persons without absences [no loss (NL)]. The observed bandwidth for the CA subjects was broader, with peak energy between 200 and 300 Hz, whereas the signals from the contra-lateral limbs and from the AL and NL subjects peaked in the 100-150 Hz range. The mean skew of the signals from the AL subjects was 46.3 +/- 6.7 and of those with NL 45.4 +/- 8.7, while the signals from those with CAs had a skew of 11.0 +/- 11. The structure of the muscles of one CA subject was observed ultrasonically. The muscle showed greater disruption than normally developed muscles. It is speculated that the myographic signal reflects the structure of the muscle, which has developed in a more disorganized manner as a result of the muscle not being stretched by other muscles across the missing distal joint, even in the muscles that are used regularly to control arm prostheses.
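The kind of spectral comparison described above can be outlined with a standard power-spectral-density estimate. The sketch below assumes a sampled myoelectric recording (array emg, sampling rate fs in Hz) and interprets the reported "skew" as the power-weighted skewness of the spectrum; that interpretation, and the window length, are assumptions rather than details taken from the abstract.

    # Sketch: peak frequency and a power-weighted spectral skewness for a
    # myoelectric recording. `emg` and `fs` are assumed inputs; treating the
    # reported "skew" as spectral skewness is an assumption.
    import numpy as np
    from scipy.signal import welch

    def spectral_summary(emg, fs):
        f, pxx = welch(emg, fs=fs, nperseg=1024)   # power spectral density
        peak_freq = f[np.argmax(pxx)]              # frequency of peak energy
        w = pxx / pxx.sum()                        # normalised spectral weights
        mean_f = np.sum(w * f)
        std_f = np.sqrt(np.sum(w * (f - mean_f) ** 2))
        skew_f = np.sum(w * (f - mean_f) ** 3) / std_f ** 3
        return peak_freq, skew_f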
Abstract:
The problem of the appropriate distribution of forces among the fingers of a four-fingered robot hand is addressed. The finger-object interactions are modelled as point frictional contacts, hence the system is indeterminate and an optimal solution is required for controlling forces acting on an object. A fast and efficient method for computing the grasping and manipulation forces is presented, where computation has been based on using the true model of the nonlinear frictional cone of contact. Results are compared with previously employed methods of linearizing the cone constraints and minimizing the internal forces.
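The difference between the true nonlinear friction cone and a linearised approximation, which the comparison above turns on, can be made concrete with a simple feasibility check. The following Python sketch is only illustrative: the friction coefficient and the example force are made up, and this is not the optimisation method of the paper.

    # Sketch: test whether a contact force lies inside the true (quadratic)
    # Coulomb friction cone versus an inscribed four-sided pyramid
    # approximation. Values are illustrative, not from the paper.
    import numpy as np

    def in_friction_cone(f, mu):
        """f = (fn, ft1, ft2) in the contact frame; true nonlinear cone."""
        fn, ft1, ft2 = f
        return fn >= 0 and np.hypot(ft1, ft2) <= mu * fn

    def in_linearised_pyramid(f, mu):
        """Inscribed 4-sided pyramid: a conservative linear approximation."""
        fn, ft1, ft2 = f
        bound = mu * fn / np.sqrt(2.0)
        return fn >= 0 and abs(ft1) <= bound and abs(ft2) <= bound

    mu = 0.5
    f = (10.0, 3.6, 3.3)
    print(in_friction_cone(f, mu))       # True: tangential norm 4.88 <= 5.0
    print(in_linearised_pyramid(f, mu))  # False: 3.6 exceeds the 3.54 bound

Forces such as this one are admitted by the true cone but rejected by the conservative pyramid, which is one reason an optimisation over the true cone can reach lower internal (squeezing) forces than a linearised formulation.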
Abstract:
Manipulation of an object by a multi-fingered robot hand requires task planning, which involves computation of joint space vectors and fingertip forces. To implement a task as quickly as possible, these computations have to be carried out in minimum time. State-of-the-art multi-fingered robot hand designs have shown the possible use of remotely driven finger joints. Such remotely driven hands require computation of tendon displacement to evaluate joint space vectors before signals are sent to the actuators. Alternatively, a direct drive hand is a mechanical hand in which the shafts of the articulated joints are directly coupled to the rotors of motors with high output torques. This article is divided into two main sections. The first section presents a brief overview of manipulation using a direct drive approach, and the second presents ongoing research being carried out to design a four-finger articulated hand in the Department of Cybernetics at the University of Reading.
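As a rough illustration of the extra computation step that remotely (tendon-) driven hands need before actuator commands can be issued, the sketch below uses a simplified pulley model in which tendon displacement is a signed sum of pulley radius times joint rotation; the radii and routing signs are hypothetical, and real tendon routings are more complex.

    # Simplified pulley model: tendon displacement for a remotely driven
    # finger versus a direct-drive finger. Radii and routing signs are
    # hypothetical.
    import numpy as np

    def tendon_displacement(delta_theta, radii, signs):
        """delta_l = sum_i sign_i * r_i * delta_theta_i."""
        return float(np.sum(np.asarray(signs) * np.asarray(radii) * np.asarray(delta_theta)))

    delta_theta = np.deg2rad([10.0, 15.0, 5.0])  # commanded joint rotations (rad)
    radii = [0.008, 0.006, 0.005]                # pulley radii (m), hypothetical
    signs = [+1, +1, +1]                         # flexor tendon routed on one side

    # Remotely driven: the joint-space command must first be mapped to tendon space.
    print(tendon_displacement(delta_theta, radii, signs))

    # Direct drive: the joint rotations themselves are the actuator commands,
    # so this conversion (and its computation time) is avoided.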
Abstract:
The existence of hand-centred visual processing has long been established in the macaque premotor cortex. These hand-centred mechanisms have been thought to play some general role in the sensory guidance of movements towards objects, or, more recently, in the sensory guidance of object avoidance movements. We suggest that these hand-centred mechanisms play a specific and prominent role in the rapid selection and control of manual actions following sudden changes in the properties of the objects relevant for hand-object interactions. We discuss recent anatomical and physiological evidence from human and non-human primates, which indicates the existence of rapid processing of visual information for hand-object interactions. This new evidence demonstrates how several stages of the hierarchical visual processing system may be bypassed, feeding the motor system with hand-related visual inputs within just 70 ms following a sudden event. This time window is early enough, and this processing rapid enough, to allow the generation and control of rapid hand-centred avoidance and acquisitive actions, for aversive and desired objects, respectively.