973 results for Visual feedback


Relevance:

60.00%

Publisher:

Abstract:

The main objective of this thesis was to identify the factors likely to influence the efficiency of online control processes during manual reaching movements. Nowadays, manual reaching movements performed in a virtual environment (moving a computer mouse to control a cursor on screen, for example) have become commonplace. Compared with movements performed in a natural context (pressing the computer's power button), those performed in a virtual context impose significant constraints on the central nervous system, because the visual and proprioceptive information defining the effector's position is not perfectly congruent. Consequently, this thesis centres on the effects of a virtual context on the control of manual reaching movements. In our first article, we sought to determine whether factors such as (a) the amount of practice, (b) the orientation of the virtual setup (aligned vs. non-aligned), or (c) alternating trials performed with and without vision of the effector could increase the efficiency of online control processes for movements performed in a virtual context. None of these factors influenced the efficiency of the control processes, suggesting that it is difficult to optimize the control of manual reaching movements when they are performed in a virtual context. One of the most surprising results of this study was that we found no effect of screen orientation on participants' performance, which contradicted the existing literature on the subject. Article 2 aimed to deepen our understanding of movement control in virtual and natural contexts.
In the second article, we demonstrated the detrimental effects of a virtual context on the online control of manual reaching movements. Specifically, we observed that using a non-aligned setup (vertical screen/movement on a horizontal plane) to present visual information resulted in a marked decrease in performance compared with an aligned virtual setup and a natural setup. We also observed a performance decrease when movements were performed in an aligned virtual context compared with a natural context. The performance decrease noted in both virtual conditions was largely explained by reduced efficiency of online control processes. We therefore suggested that using a virtual representation of the hand introduces uncertainty about its position in space. In Article 3, we set out to determine the origin of this uncertainty. Two hypotheses were tested in this third article. The first held that the increased uncertainty reported in the virtual context of the previous study was due to a loss of visual information about the arm's configuration. The second held instead that the uncertainty arose from visual and proprioceptive information being imperfectly congruent in a virtual context compared with a natural one (the cursor is not directly aligned with the fingertip, for example). The data did not support our first hypothesis. Rather, the uncertainty appears to be caused by the dissociation of visual and proprioceptive information. We also demonstrated that information about hand position available at the starting base strongly influences online control processes, even when vision of the effector is available during the movement.
This result suggests that internal feedback loops use this information to modulate the movement during its execution.


The two principal functions of the hand are object manipulation and tactile exploration. Slip detection, reported by the mechanoreceptors of the glabrous skin, is essential for performing both functions. During object manipulation, rapid detection of incipient slip leads the hand to increase grip force to prevent the object from falling. Conversely, slip is an essential aspect of tactile exploration, since it promotes greater tactile acuity. For both actions, the normal and tangential forces exerted on the skin describe the slip, but also what happens just before slip occurs. However, it is unknown how these subject-controlled forces might be encoded at the cortical level. We therefore recorded single-unit activity of neurons in primary somatosensory cortex (S1) during the execution of two haptic tasks in primates. In the first task, two monkeys had to grasp a fixed metal tab and exert shear forces on it, without slip, in one of four orthogonal directions. Of the 144 neurons recorded, 111 (77%) were modulated by the direction of the shear force. The set of preferred vectors covered all directions, with tuning arcs ranging from 50° to 170°. A further 21 of these neurons (19%) were also modulated by the intensity of the shear force. Although 66 neurons (59%) clearly showed a slowly adapting response and 45 others (41%) a rapidly adapting response, this classification did not appear to explain the modulation by the intensity and direction of the shear force. These results show that S1 neurons simultaneously encode the direction and intensity of forces, even in the absence of slip.
In the second task, two monkeys scanned different surfaces with their fingertips in search of a tactile target, without visual feedback. During exploration, the monkeys, like humans, controlled their finger forces and speed within a narrow range of values. Surfaces with a high coefficient of friction offered greater tangential resistance to the skin and led the monkeys to lighten the contact force normal to the skin. Consequently, the scalar sum of the normal and tangential components remained constant across surfaces. These observations demonstrate that monkeys control the normal and tangential forces they apply during tactile exploration, adjusting them according to surface properties such as texture and friction. Of the 230 neurons recorded during the tactile exploration task, 96 (42%) showed an instantaneous discharge rate related to the forces exerted by the fingers on the surface. Of these neurons, 52 (54%) were modulated by the normal force or the tangential force, while the other, orthogonal component had little or no influence on the discharge rate. Another subpopulation of 44 neurons (46%) responded to the ratio between the normal and tangential forces, independently of intensity. Specifically, 29 neurons (30%) increased and 15 (16%) decreased their discharge rate in relation to this ratio. Moreover, about half of all neurons (112) were significantly modulated by the direction of the tangential force; of these, 59 (53%) responded to both the direction and the intensity of the forces. Exploration of three or four different surfaces made it possible to assess the impact of the friction coefficient on the modulation of 102 S1 neurons.
In fact, 17 neurons (17%) showed an increase in discharge rate with increasing friction coefficient, whereas 8 (8%) showed the opposite behaviour. In contrast, 37 neurons (36%) discharged maximally on one particular surface, with no linear relationship to the surfaces' friction coefficients. The rapidly or slowly adapting classification of S1 neurons could not be related to the modulation by forces and friction. These results show that the discharge rate of S1 neurons encodes the intensity of the normal and tangential forces, the ratio between the two components, and the direction of movement. They show that the behaviour of a large subpopulation of S1 neurons is determined by the normal and tangential forces on the skin. The force modulation presented here bridges the work assessing surface properties such as roughness and the studies of object manipulation. This reference system applies in the presence or absence of slip between the skin and the surface. Our results on the modulation of rapidly and slowly adapting neurons lead us to suggest that this classification stems from the way the skin is stimulated. We also discuss the possibility that S1 neuron activity may include a motor component during these sensorimotor tasks. Finally, a new three-dimensional reference frame is proposed to describe and unite, within a single continuum, the various modulations by normal and tangential forces observed in S1 during tactile exploration.
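As background to the directional tuning described above: a neuron's preferred direction is commonly estimated from its responses to the tested force directions as a rate-weighted vector sum. The sketch below is illustrative only, not the study's analysis code, and the firing rates are invented:

```python
import numpy as np

def preferred_direction(rates, angles_deg):
    """Estimate a preferred direction as the firing-rate-weighted
    vector sum of the tested force directions (returned in degrees)."""
    ang = np.deg2rad(angles_deg)
    vx = np.sum(rates * np.cos(ang))
    vy = np.sum(rates * np.sin(ang))
    return np.rad2deg(np.arctan2(vy, vx)) % 360

# Hypothetical firing rates (spikes/s) for four orthogonal shear directions
angles = np.array([0, 90, 180, 270])
rates = np.array([30.0, 12.0, 4.0, 12.0])  # strongest response at 0 degrees
pd = preferred_direction(rates, angles)
```

With symmetric off-axis responses, as here, the estimate falls on the axis of the strongest response.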


Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixated. Three explanations as to why free gaze improves accuracy are: shifting gaze to a target allows visual feedback in guiding the hand to the target (feedback loop); shifting gaze generates ocular proprioception which can be used to update a movement (feedback-feedforward); or efference copy could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback on eye-to-head position by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements with a double-step pointing task and document eye-hand lead times of approximately 200 ms for both initial and secondary movements. The observation that participants move gaze well ahead of the current hand target dismisses foveal feedback as a major contribution. We argue for a feedforward model based on eye-movement efference as the major factor in enabling accurate hand movements. The results with the double-step target task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation where the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand movement accuracy.


Objective: To evaluate the effect of robot-mediated therapy on arm dysfunction post stroke. Design: A series of single-case studies using a randomized multiple-baseline design with ABC or ACB order. Subjects (n = 20) had a baseline length of 8, 9 or 10 data points, and measurement continued through the B (robot-mediated therapy) and C (sling suspension) phases. Setting: Physiotherapy department, teaching hospital. Subjects: Twenty subjects with varying degrees of motor and sensory deficit completed the study. Subjects attended three times a week, with each treatment phase lasting three weeks. Interventions: In the robot-mediated therapy phase subjects practised three functional exercises with haptic and visual feedback from the system; in the sling suspension phase they practised three single-plane exercises. Main measures: The range of active shoulder flexion, the Fugl-Meyer motor assessment and the Motor Assessment Scale were measured at each visit. Results: Each subject had a varied response to the measurement and intervention phases. For the majority of subjects, the rate of recovery was greater during the robot-mediated therapy phase than in the baseline phase, and greater than during the sling suspension phase. Conclusion: The positive treatment effect suggests that robot-mediated therapy can have a treatment effect greater than the same duration of non-functional exercises. Further studies investigating the optimal duration of treatment, in the form of a randomized controlled trial, are warranted.


Researchers in the rehabilitation engineering community have been designing and developing a variety of passive/active devices to help persons with limited upper-extremity function perform essential daily manipulations. Devices range from low-end tools such as head/mouth sticks to sophisticated robots using vision and speech input. While almost all of the high-end equipment developed to date relies on visual feedback alone to guide the user, providing no tactile or proprioceptive cues, the “low-tech” head/mouth sticks deliver better “feel” because of the inherent force feedback through physical contact with the user's body. However, the disadvantage of a conventional head/mouth stick is that it can only function in a limited workspace and its performance is limited by the user's strength. It therefore seems reasonable to develop a system that exploits the advantages of both approaches: the power and flexibility of robotic systems with the sensory feedback of a headstick. The system presented in this paper reflects this design philosophy. It contains a pair of master-slave robots, with the master operated by the user's head and the slave acting as a telestick. Described in this paper are the design, control strategies, implementation and performance evaluation of the head-controlled force-reflecting telestick system.


This paper describes the design, implementation and testing of a high speed controlled stereo “head/eye” platform which facilitates the rapid redirection of gaze in response to visual input. It details the mechanical device, which is based around geared DC motors, and describes hardware aspects of the controller and vision system, which are implemented on a reconfigurable network of general purpose parallel processors. The servo-controller is described in detail and higher level gaze and vision constructs outlined. The paper gives performance figures gained both from mechanical tests on the platform alone, and from closed loop tests on the entire system using visual feedback from a feature detector.


The authors demonstrate four real-time reactive responses to movement in everyday scenes using an active head/eye platform. They first describe the design and realization of a high-bandwidth four-degree-of-freedom head/eye platform and visual feedback loop for the exploration of motion processing within active vision. The vision system divides processing into two scales and two broad functions. At a coarse, quasi-peripheral scale, detection and segmentation of new motion occur across the whole image; at fine scale, tracking of already-detected motion takes place within a foveal region. Several simple coarse-scale motion sensors, which run concurrently at 25 Hz with latencies around 100 ms, are detailed. The use of these sensors to drive the following real-time responses is discussed: (1) head/eye saccades to moving regions of interest; (2) a panic response to looming motion; (3) an opto-kinetic response to continuous motion across the image; and (4) smooth pursuit of a moving target using motion alone.


Individuals with schizophrenia, particularly those with passivity symptoms, may not feel in control of their actions, believing them to be controlled by external agents. Cognitive operations that contribute to these symptoms may include abnormal processing in agency as well as body representations that deal with body schema and body image. However, these operations in schizophrenia are not fully understood, and the questions of general versus specific deficits in individuals with different symptom profiles remain unanswered. Using the projected-hand illusion (a digital video version of the rubber-hand illusion) with synchronous and asynchronous stroking (500 ms delay), and a hand laterality judgment task, we assessed sense of agency, body image, and body schema in 53 people with clinically stable schizophrenia (with a current, past, and no history of passivity symptoms) and 48 healthy controls. The results revealed a stable trait in schizophrenia with no difference between clinical subgroups (sense of agency) and some quantitative (specific) differences depending on the passivity symptom profile (body image and body schema). Specifically, a reduced sense of self-agency was a common feature of all clinical subgroups. However, subgroup comparisons showed that individuals with passivity symptoms (both current and past) had significantly greater deficits on tasks assessing body image and body schema, relative to the other groups. In addition, patients with current passivity symptoms failed to demonstrate the normal reduction in body illusion typically seen with a 500 ms delay in visual feedback (asynchronous condition), suggesting internal timing problems. Altogether, the results underscore self-abnormalities in schizophrenia, provide evidence for both trait abnormalities and state changes specific to passivity symptoms, and point to a role for internal timing deficits as a mechanistic explanation for external cues becoming a possible source of self-body input.


This paper describes a technique for the real-time modeling of deformable tissue, geared specifically towards needle-insertion simulation. The low computational requirements of the model enable highly accurate haptic feedback to a user without introducing the noticeable time delay or buzzing generally associated with haptic surgery simulation. Using a spherical voxel array combined with aspects of computational geometry and agent communication and interaction principles, the model is capable of providing haptic update rates of over 1000 Hz with real-time visual feedback. It iterates through over 1000 voxels per millisecond to determine collision and haptic response, making use of Vieta's theorem for extraneous force culling.


Although the emotion of anger has, in recent years, been the subject of increasing theoretical analysis, there are relatively few accounts of how interventions designed to reduce problematic anger might be related to cognitively oriented theories of emotion. In this review of the literature we describe how a cognitive-behavioural approach to the treatment of those with anger-related problems might be understood in relation to conceptualizations of anger from a cognitive perspective. Three additional interventions (visual feedback, chair-work, forgiveness therapy) are identified that aim to improve the perspective-taking skills of angry clients. It is concluded that such interventions might be considered for use within the context of cognitive-behavioural treatment.


The purpose of this study was to examine the reliability of normalisation methods used in the study of the posterior and posterolateral neck muscles in a group of healthy controls. Six asymptomatic male subjects performed a total of 12 maximum voluntary isometric contractions (MVIC) and 60%-submaximal isometric contractions (60%-MVIC) against the torque arm of an isokinetic dynamometer whilst surface and intramuscular electromyography (EMG) was recorded unilaterally from representative posterior and posterolateral locations. Reliability was calculated using intra-class correlation coefficient (ICC), relative standard error of measurement (%SEM) and relative coefficient of variation (%CV). Maximal torque output was found to be highly reliable in the directions of extension and right lateral bending when the first of three MVIC contractions was excluded. When averaged across contraction direction, high reliability was found for both surface (MVIC: ICC = 0.986, %SEM = 7.5, %CV = 9.2; 60%-MVIC: ICC = 0.975, %SEM = 10, %CV = 13.7) and intramuscular (MVIC: ICC = 0.910, %SEM = 20, %CV = 19.1; 60%-MVIC: ICC = 0.952, %SEM = 16.5, %CV = 13.5) electrodes. Intramuscular electrodes displayed the least reliability in right lateral bending. The use of visual feedback markedly increased the reliability of 60%-MVIC contractions.
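The reliability indices reported above can all be computed from a subjects × repeated-trials matrix of measurements. The sketch below is not the authors' code: it assumes a one-way random-effects ICC(1,1), and the EMG amplitudes are invented purely for illustration:

```python
import numpy as np

def reliability_indices(x):
    """Reliability of repeated measures.

    x: 2-D array, rows = subjects, columns = repeated trials.
    Returns one-way random-effects ICC(1,1), %SEM and %CV.
    """
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    # One-way ANOVA mean squares: between-subjects and within-subjects
    ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((x - subj_means[:, None]) ** 2) / (n * (k - 1))
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    sem = np.sqrt(ms_within)                      # standard error of measurement
    pct_sem = 100 * sem / grand                   # SEM relative to the grand mean
    # Mean within-subject coefficient of variation
    pct_cv = 100 * np.mean(x.std(axis=1, ddof=1) / subj_means)
    return icc, pct_sem, pct_cv

# Hypothetical normalised EMG amplitudes: 6 subjects x 3 contractions
emg = np.array([[100, 104,  98],
                [ 80,  83,  79],
                [120, 118, 123],
                [ 95,  97,  92],
                [110, 108, 113],
                [ 70,  72,  69]], dtype=float)
icc, pct_sem, pct_cv = reliability_indices(emg)
```

With large between-subject spread and small trial-to-trial variation, as in this made-up matrix, the ICC approaches 1 while %SEM and %CV stay small.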


We present methods for automatically constructing representations of fiction books in a range of modalities: audibly, graphically and as 3D virtual environments. The correspondence between the sequential ordering of events and the order in which events are presented in the text is used to correctly resolve the dynamic interactions for each representation. Synthesised audio created from the fiction text is used to calibrate the base time-line against which the other forms of media are aligned. The audio stream is based on speech synthesis using the text of the book, and is enhanced using distinct voices for the different characters in a book. Sound effects are included automatically. The graphical representation presents the text (as subtitles), identifies active characters and provides visual feedback of the content of the story. Dynamic virtual environments conform to the constraints implied by the story, and are used as a source of further visual content. These representations are all aligned to a common time-line and combined using sequencing facilities to provide a multimodal version of the original text.


Microrobotic cell injection is an area of growing research interest. Typically, operators rely on visual feedback to perceive the microscale environment and are subject to lengthy training times and low success rates. Haptic interaction offers the ability to utilise the operator’s haptic modality and to enhance operator performance. Our earlier work presented a haptically enabled system for assisting the operator with certain aspects of the cell injection task. The system aimed to enhance the operator’s controllability of the micropipette through a logical mapping between the haptic device and microrobot, as well as introducing virtual fixtures for haptic guidance. The system was also designed in such a way that given the availability of appropriate force sensors, haptic display of the cell penetration force is straightforward. This work presents our progress towards a virtual replication of the system, aimed at facilitating offline operator training. It is suggested that operators can use the virtual system to train offline and later transfer their skills to the physical system. In order to achieve the necessary representation of the cell within the virtual system, methods based on a particle-based cell model are utilised. In addition to providing the necessary visual representation, the cell model provides the ability to estimate cell penetration forces and haptically display them to the operator. Two different approaches to achieving the virtual system are discussed.


Navigation based on visual feedback for robots working in a closed environment can be achieved by mounting a camera on each robot (a local vision system). However, this solution requires a camera and local processing capacity for each robot. When possible, a global vision system is a cheaper solution: one camera, or a small number of cameras covering the whole workspace, can be shared by the entire team of robots, saving the cost of many cameras and the associated processing hardware needed in a local vision system. This work presents the implementation and experimental results of a global vision system for mobile mini-robots, using robot soccer as the test platform. The proposed vision system consists of a camera, a frame grabber and a computer (PC) for image processing. The PC is responsible for the team's motion control, based on the visual feedback, sending commands to the robots through a radio link. So that the system can unequivocally recognize each robot, each one carries a label on its top consisting of two colored circles. Image-processing algorithms were developed for the efficient computation, in real time, of the position of all objects (robots and ball) and the orientation of each robot. A major problem was labelling the color of each colored point of the image, in real time, under time-varying illumination conditions. To overcome this problem, an automatic camera calibration based on the K-means clustering algorithm was implemented. This method guarantees that similar pixels are clustered around a unique color class. The experimental results show that the position and orientation of each robot can be obtained with a precision of a few millimeters, and that both are updated in real time, analyzing 30 frames per second.
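The K-means calibration step can be sketched as follows. This is not the paper's implementation: the sampled pixel values, the number of colour classes, and the initialisation scheme are illustrative assumptions. The point is only how sampled pixel colours get grouped so that similar pixels map to a single colour class:

```python
import numpy as np

def kmeans_colors(pixels, k, iters=50, seed=0):
    """Cluster RGB pixel samples into k colour classes with plain K-means.

    pixels: (N, 3) float array of RGB samples taken from the image.
    Returns (centers, labels): class centroids and per-pixel class ids.
    """
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest centroid (Euclidean distance in RGB)
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned pixels
        new_centers = np.array([
            pixels[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Illustrative samples scattered around three nominal marker colours
rng = np.random.default_rng(1)
base = np.array([[255, 0, 0], [0, 0, 255], [255, 255, 0]], dtype=float)
pixels = np.vstack([b + rng.normal(0, 10, (100, 3)) for b in base])
centers, labels = kmeans_colors(pixels, k=3)
```

At classification time, each incoming pixel would simply be assigned the class of its nearest centroid, which is what makes the labelling robust to moderate illumination drift.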


In this work an image pre-processing module has been developed to extract quantitative information from plantation images with various degrees of infestation. Four filters comprise this module: the first smooths the image, the second removes the image background while enhancing plant leaves, the third removes isolated dots not removed by the previous filter, and the fourth highlights leaf edges. The filters were first tested in MATLAB, for quick visual feedback on their behavior, and were then implemented in the C programming language. Finally, the module has been coded in VHDL for implementation on a Stratix II family FPGA. Tests were run and the results are shown in this paper. © 2008 Springer-Verlag Berlin Heidelberg.
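A four-stage pipeline of this kind can be sketched as small array operations, as below. This is not the published implementation; the 3x3 kernel sizes, the excess-green background test, and the neighbour-count threshold are assumptions chosen only to show the shape of each stage:

```python
import numpy as np

def smooth(img):
    """Filter 1: 3x3 mean filter, replicating the border pixels."""
    p = np.pad(img, 1, mode='edge')
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def remove_background(rgb):
    """Filter 2: keep pixels where green dominates (excess-green mask)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (2 * g - r - b) > 20          # boolean leaf mask

def remove_isolated(mask):
    """Filter 3: drop set pixels with fewer than 2 set 8-neighbours."""
    p = np.pad(mask.astype(int), 1)
    neigh = sum(p[i:i + mask.shape[0], j:j + mask.shape[1]]
                for i in range(3) for j in range(3)) - mask
    return mask & (neigh >= 2)

def edges(mask):
    """Filter 4: keep mask pixels that have at least one clear 4-neighbour."""
    p = np.pad(mask, 1)
    interior = (p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:])
    return mask & ~interior
```

Each stage consumes the previous stage's output, so the whole module is a simple composition: `edges(remove_isolated(remove_background(rgb)))` after smoothing, which also mirrors how the stages would be chained as hardware blocks in the FPGA version.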