969 results for Motion estimation


Relevance:

30.00%

Publisher:

Abstract:

Electromyography (EMG) readings from the quadriceps of fifteen subjects were recorded during whole-body vibration treatment at different frequencies (10-50 Hz). Additional electrodes were placed on the patella to monitor the occurrence of motion artifact, and triaxial accelerometers were placed on the quadriceps to monitor motion. Signal spectra revealed sharp peaks corresponding to the vibration frequency and its harmonics, in accordance with the accelerometer data. Total EMG power was compared to the power within narrow bands around the vibration harmonics, before and during vibration. On average, vibration-associated power accounted for only 3% (±0.9%) of the total power prior to vibration and 29% (±13.4%) during vibration. Studies often employ surface EMG to quantitatively evaluate vibration-evoked muscular activity and to set the stimulation frequency; however, previous research has not accounted for motion artifacts. The data presented in this study emphasize the need for the removal of motion artifacts, as they consistently affect RMS estimation, which is often used as a concise muscle-activity index during vibration. Such artifacts, rather unpredictable in amplitude, might be the cause of large inter-study differences and must be eliminated before analysis. Motion-artifact filtering will contribute to a thorough and precise interpretation of the neuromuscular response to vibration treatment. © 2008 Elsevier Ltd. All rights reserved.
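
The comparison of total EMG power against the power in narrow bands around the vibration harmonics can be sketched as follows. This is an illustrative reconstruction with synthetic data (white noise standing in for muscle activity, sinusoids for the vibration artifact); the sampling rate, band half-width, and amplitudes are assumptions, not the authors' recordings or exact pipeline:

```python
import numpy as np

def harmonic_power_ratio(x, fs, f0, n_harmonics=4, half_bw=1.0):
    """Fraction of total signal power lying in narrow bands around
    the vibration frequency f0 and its harmonics (half-width half_bw Hz)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    in_band = np.zeros_like(freqs, dtype=bool)
    for k in range(1, n_harmonics + 1):
        in_band |= np.abs(freqs - k * f0) <= half_bw
    return psd[in_band].sum() / psd.sum()

rng = np.random.default_rng(0)
fs, f0 = 1000.0, 30.0
t = np.arange(0, 5, 1 / fs)
emg = rng.normal(size=t.size)                      # broadband "muscle" activity
artifact = 2.0 * np.sin(2 * np.pi * f0 * t) \
         + 0.8 * np.sin(2 * np.pi * 2 * f0 * t)    # vibration + first harmonic
ratio_clean = harmonic_power_ratio(emg, fs, f0)
ratio_vib = harmonic_power_ratio(emg + artifact, fs, f0)
print(ratio_clean, ratio_vib)  # artifact inflates the harmonic-band share
```

A large jump in this ratio during vibration is exactly the kind of artifact contamination the abstract warns would bias RMS-based activity indices.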

Relevance:

30.00%

Publisher:

Abstract:

The ability to capture human motion allows researchers to evaluate an individual's gait. Gait can be measured in different ways, from camera-based systems to Magnetic and Inertial Measurement Units (MIMUs). The former use cameras to track the positions of photo-reflective markers, while the latter use accelerometers, gyroscopes, and magnetometers to measure segment orientation. Both systems can be used to measure joint kinematics, but the results vary because of differences in their anatomical calibrations. The objective of this thesis was to study potential solutions for reducing joint-angle discrepancies between MIMU and camera-based systems. The first study corrected the anatomical-frame differences between MIMU and camera-based systems via the joint angles of both systems, comparing full lower-body correction with correcting a single joint. Single-joint correction showed slightly better alignment of the two systems, but does not take into account that body segments are generally affected by more than one joint. The second study explored the possibility of anatomical landmarking using a single camera and a pointer apparatus. Results showed that anatomical landmark positions could be determined using a single camera, as the landmarks found in this study and those from a camera-based system showed similar results. This thesis thus provides a novel way of obtaining anatomical landmarks with a single point-and-shoot camera, as well as a way of aligning anatomical frames between MIMUs and camera-based systems using joint angles.
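
A minimal sketch of the single-joint correction idea, using made-up knee-flexion angles and assuming (as a simplification of the thesis method) that the anatomical-frame difference reduces to a constant joint-angle offset between the two systems:

```python
import numpy as np

# Hypothetical knee-flexion angles (degrees) over one gait cycle: the MIMU
# trace differs from the camera trace by a roughly constant anatomical-frame
# offset plus measurement noise.
camera = np.array([5.0, 15.0, 40.0, 60.0, 35.0, 10.0])
noise = np.array([0.3, -0.2, 0.1, -0.4, 0.2, 0.0])
mimu = camera + 7.5 + noise

offset = np.mean(mimu - camera)        # least-squares constant offset
mimu_corrected = mimu - offset

rmse_before = np.sqrt(np.mean((mimu - camera) ** 2))
rmse_after = np.sqrt(np.mean((mimu_corrected - camera) ** 2))
print(round(offset, 2), round(rmse_before, 2), round(rmse_after, 2))
```

Removing the estimated offset aligns the two traces; what it cannot fix, as the abstract notes, is cross-talk from the other joints that also move the same segments.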

Relevance:

30.00%

Publisher:

Abstract:

We apply the formalism of quantum estimation theory to extract information about potential collapse mechanisms of the continuous spontaneous localisation (CSL) form. In order to estimate the strength with which the field responsible for the CSL mechanism couples to massive systems, we consider the optomechanical interaction between a mechanical resonator and a cavity field. Our estimation strategy proceeds by probing either the state of the oscillator or that of the electromagnetic field that drives its motion. In particular, we concentrate on all-optical measurements, such as homodyne and heterodyne detection. We also compare the performance of such strategies with that of a spin-assisted optomechanical system, where the estimation of the CSL parameter is performed through time-gated spin-like measurements.
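
As a rough classical analogue of the estimation problem (not the quantum CSL model itself), one can check numerically how the Cramér-Rao bound limits the precision with which a parameter encoded in the mean of a Gaussian measurement outcome can be estimated; all values below are illustrative assumptions:

```python
import numpy as np

# A homodyne-like outcome x is Gaussian with mean mu(lam) = k * lam and fixed
# variance sigma**2, where lam plays the role of the coupling strength.
# The classical Fisher information is F = k**2 / sigma**2, and the
# Cramer-Rao bound reads Var(lam_hat) >= 1 / (N * F) for N measurements.
k, sigma, N = 3.0, 0.5, 1000
F = k ** 2 / sigma ** 2
crb = 1.0 / (N * F)

rng = np.random.default_rng(1)
lam_true = 0.2
estimates = [rng.normal(k * lam_true, sigma, size=N).mean() / k
             for _ in range(200)]
var_hat = np.var(estimates)
print(crb, var_hat)  # the sample-mean estimator saturates the bound
```

In the quantum setting the same logic applies with the (quantum) Fisher information of the probed state, which is what singles out homodyne or heterodyne detection as good strategies.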

Relevance:

30.00%

Publisher:

Abstract:

This paper presents MOTION, a modular on-line model for urban traffic signal control. It consists of a network level and a local level and builds on enhanced traffic-state estimation. Special consideration is given to the prioritization of public transit. MOTION also provides possibilities for interaction with integrated urban management systems.

Relevance:

30.00%

Publisher:

Abstract:

Simultaneous Localization and Mapping (SLAM) is a procedure used to determine the location of a mobile vehicle in an unknown environment while constructing a map of that environment at the same time. Mobile platforms that make use of SLAM algorithms have industrial applications in autonomous maintenance, such as the inspection of flaws and defects in oil pipelines and storage tanks. A typical SLAM system consists of four main components: experimental setup (data gathering), vehicle pose estimation, feature extraction, and filtering. Feature extraction is the process of identifying significant features of the unknown environment, such as corners, edges, walls, and interior features. In this work, an original feature-extraction algorithm specific to distance measurements obtained from SONAR sensor data is presented. The algorithm has been constructed by combining the SONAR Salient Feature Extraction Algorithm and the Triangulation Hough Based Fusion with point-in-polygon detection. The maps reconstructed from simulations and experimental data with the fusion algorithm are compared to the maps obtained with existing feature-extraction algorithms. Based on the results obtained, it is suggested that the proposed algorithm can be employed as an option for data obtained from SONAR sensors in environments where other forms of sensing are not viable. The algorithm fusion for feature extraction requires the vehicle pose estimation as an input, which is obtained from a vehicle pose estimation model. For the vehicle pose estimation, the author uses sensor integration to estimate the pose of the mobile vehicle. Different combinations of sensors are studied (e.g., encoder, gyroscope, or encoder and gyroscope), and the different sensor-fusion techniques for pose estimation are experimentally studied and compared.
The vehicle pose estimation model, which produces the least amount of error, is used to generate inputs for the feature extraction algorithm fusion. In the experimental studies, two different environmental configurations are used, one without interior features and another one with two interior features. Numerical and experimental findings are discussed. Finally, the SLAM algorithm is implemented along with the algorithms for feature extraction and vehicle pose estimation. Three different cases are experimentally studied, with the floor of the environment intentionally altered to induce slipping. Results obtained for implementations with and without SLAM are compared and discussed. The present work represents a step towards the realization of autonomous inspection platforms for performing concurrent localization and mapping in harsh environments.
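
The point-in-polygon detection mentioned above is commonly implemented with a ray-casting test; a minimal sketch (not the thesis' exact fusion code, with a hypothetical rectangular room as the polygon) is:

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting test: count crossings of a rightward horizontal ray from
    (px, py) with the polygon edges; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):  # edge straddles the ray's y level
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                inside = not inside
    return inside

room = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]  # hypothetical map wall
print(point_in_polygon(1.0, 1.0, room))   # inside
print(point_in_polygon(5.0, 1.0, room))   # outside
```

Such a test lets the fusion algorithm decide whether a candidate feature lies inside the mapped environment boundary or is a spurious reading beyond it.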

Relevance:

30.00%

Publisher:

Abstract:

In Robot-Assisted Rehabilitation (RAR), the accurate estimation of the patient's limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect the limb posture, as their kinematic models differ. To address these limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP on the rehabilitation exoskeleton. The GH joint angles are then estimated by combining the estimated marker poses with the exoskeleton forward kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint-angle estimation due to the kinematic mismatch between the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method's accuracy to marker-position estimation errors, due to system calibration errors and marker drift, has been carried out. The results show that, even with significant errors in the marker-position estimation, the method's accuracy is adequate for RAR.
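
The forward-kinematics ingredient, composing per-link homogeneous transforms to map joint angles to an end-effector pose, can be sketched as follows. The two-link planar chain and its segment lengths are illustrative assumptions, not the actual exoskeleton model:

```python
import numpy as np

def link_transform(theta, length):
    """Homogeneous transform of a planar revolute link: rotate by theta,
    then translate along the rotated link axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, length * c],
                     [s,  c, length * s],
                     [0,  0, 1.0]])

# Hypothetical 2-link arm (upper-arm and forearm lengths in metres)
q = [np.pi / 2, -np.pi / 2]      # joint angles
lengths = [0.30, 0.25]
T = np.eye(3)
for theta, L in zip(q, lengths):
    T = T @ link_transform(theta, L)
wrist = T[:2, 2]                 # end-effector position in the base frame
print(np.round(wrist, 3))
```

In the proposed hybrid system, chaining such transforms from the camera-estimated marker poses through the exoskeleton links is what yields the GH joint-angle estimate.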

Relevance:

30.00%

Publisher:

Abstract:

Robot-Assisted Rehabilitation (RAR) is relevant for treating patients affected by nervous system injuries (e.g., stroke and spinal cord injury). The accurate estimation of the joint angles of the patient's limbs in RAR is critical to assess patient improvement. The economical, prevalent method to estimate patient posture in exoskeleton-based RAR is to approximate the limb joint angles with those of the exoskeleton. This approximation is rough, since their kinematic structures differ. Motion capture systems (MOCAPs) can improve the estimates, at the expense of considerably overloading the therapy setup. Alternatively, the Extended Inverse Kinematics Posture Estimation (EIKPE) computational method models the limb and exoskeleton as differing parallel kinematic chains. EIKPE has been tested with single-DOF movements of the wrist and elbow joints. This paper presents the assessment of EIKPE with elbow-shoulder compound movements (i.e., object prehension). Ground truth for the assessment is obtained from an optical MOCAP (not intended for the treatment stage). The assessment shows EIKPE rendering a good numerical approximation of the actual posture during compound movement execution, especially for the shoulder joint angles. This work opens the horizon for clinical studies with patient groups, exoskeleton models, and movement types.

Relevance:

30.00%

Publisher:

Abstract:

This work presents two new, simple systems for analyzing human gait with a depth camera (Microsoft Kinect) placed in front of a subject walking on a conventional treadmill, capable of distinguishing healthy from impaired gait. The first system relies on the fact that a normal walk typically produces a smooth depth signal at each pixel, with little high-frequency content, which makes it possible to estimate a map indicating the location and amplitude of high-frequency spectral energy (HFSE). The second system analyzes the body parts whose motion pattern is irregular, in terms of periodicity, during walking. We assume that the gait of a healthy subject exhibits, everywhere on the body and across gait cycles, a depth signal with a noise-free periodic pattern. From each subject's video sequence, we estimate a map showing the areas of gait irregularity (also called aperiodic noise energy). The HFSE map, or the map visualizing the aperiodic noise energy, can be used as a good indicator of possible pathology in an early, fast, and reliable diagnostic tool, or can provide information on the presence and extent of the patient's disease or problems (orthopedic, muscular, or neurological). Even though the resulting maps are informative and highly discriminative for direct visual classification, even by a non-specialist, the proposed systems can automatically detect healthy individuals and those with locomotor problems.
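
The per-pixel high-frequency spectral energy (HFSE) idea can be sketched with synthetic depth signals; the frame rate, cutoff frequency, and signals below are illustrative assumptions, not the system's actual parameters:

```python
import numpy as np

# Toy depth sequences for two "pixels" sampled at 30 fps: a smooth periodic
# signal (healthy-like gait) and the same signal plus high-frequency jitter
# (impaired-like gait).
fs = 30.0
t = np.arange(0, 4, 1 / fs)
smooth = 0.5 * np.sin(2 * np.pi * 1.0 * t)             # ~1 Hz gait-like motion
jittery = smooth + 0.3 * np.sin(2 * np.pi * 10.0 * t)  # added 10 Hz component

def high_freq_energy(x, fs, cutoff=5.0):
    """Spectral energy above `cutoff` Hz for one pixel's depth signal."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec = np.abs(np.fft.rfft(x)) ** 2
    return spec[freqs > cutoff].sum()

e_smooth = high_freq_energy(smooth, fs)
e_jittery = high_freq_energy(jittery, fs)
print(e_smooth < e_jittery)  # the jittery pixel lights up in the HFSE map
```

Evaluating this quantity at every pixel yields the kind of map the first system uses to localize where on the body the gait departs from smooth motion.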

Relevance:

30.00%

Publisher:

Abstract:

A camera maps 3-dimensional (3D) world space to a 2-dimensional (2D) image space. In the process it loses the depth information, i.e., the distance from the camera focal point to the imaged objects. It is impossible to recover this information from a single image; however, by using two or more images taken from different viewing angles, it can be recovered and in turn used to obtain the pose (position and orientation) of the camera. Using this pose, a 3D reconstruction of the imaged objects can be computed. Numerous algorithms have been proposed and implemented to solve this problem; they are commonly called Structure from Motion (SfM). State-of-the-art SfM techniques have been shown to give promising results. However, unlike a Global Positioning System (GPS) or an Inertial Measurement Unit (IMU), which directly give the position and orientation respectively, a camera system estimates them by running SfM as described above. This makes the pose obtained from a camera highly sensitive to the captured images and to other effects, such as low lighting, poor focus, or improper viewing angles. In some applications, for example an Unmanned Aerial Vehicle (UAV) inspecting a bridge or a robot mapping an environment using Simultaneous Localization and Mapping (SLAM), it is often difficult to capture images under ideal conditions. This report examines the use of SfM methods in such applications and the role of combining multiple sensors, viz., sensor fusion, to achieve more accurate and usable position and reconstruction information. The project investigates the role of sensor fusion in accurately estimating the pose of a camera for the application of 3D scene reconstruction. The first set of experiments is conducted in a motion capture room, and its results are taken as ground truth in order to evaluate the strengths and weaknesses of each sensor and to map their coordinate systems.
Then a number of scenarios in which SfM fails are targeted. The pose estimates obtained from SfM are replaced by those obtained from other sensors, and the 3D reconstruction is completed. Quantitative and qualitative comparisons are made between the 3D reconstruction obtained using only a camera and that obtained using the camera along with a LIDAR and/or an IMU. Additionally, the project addresses the performance issue faced when handling large data sets of high-resolution images by implementing the system on the Superior high-performance computing cluster at Michigan Technological University.
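
The core SfM step of recovering depth from two views can be illustrated with linear (DLT) triangulation. The projection matrices below are simplified assumptions (identity intrinsics, a pure translation between the cameras), not the report's actual calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: normalized image coords (u, v).
    The point is the null vector of the stacked cross-product constraints."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical calibrated pair: first camera at the origin, second camera
# translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]
print(np.round(triangulate(P1, P2, x1, x2), 6))
```

With noise-free correspondences the true point is recovered exactly; with poor images, the corrupted correspondences degrade this step, which is where the fused LIDAR/IMU pose estimates step in.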
