966 results for Optical Motion Capture


Relevance:

100.00%

Publisher:

Abstract:

Model-based optical motion capture systems require knowledge of the position of the markers relative to the underlying skeleton, the lengths of the skeleton's limbs, and which limb each marker is attached to. These model parameters are typically assumed and entered into the system manually, although techniques exist for calculating some of them, such as the position of the markers relative to the skeleton's joints. We present a fully automatic procedure for determining these model parameters. It tracks the 2D positions of the markers on the cameras' image planes and determines which markers lie on each limb before calculating the position of the underlying skeleton. The only assumption is that the skeleton consists of rigid limbs connected with ball joints. The proposed system is demonstrated on a number of real data examples and is shown to calculate good estimates of the model parameters in each. © 2004 Elsevier B.V. All rights reserved.
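
The abstract leaves the marker-to-limb assignment step at a high level. As an illustration of the rigid-limb assumption it rests on, the sketch below groups marker trajectories whose pairwise distances stay nearly constant over a sequence; the function name, the tolerance, and the use of reconstructed 3D positions (rather than the paper's 2D image-plane tracks) are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def group_markers_by_rigidity(positions, tol=0.01):
    """Group markers that keep (nearly) constant pairwise distances.

    positions: array of shape (frames, n_markers, 3) with marker
    trajectories; tol is the allowed standard deviation (metres) of a
    marker-pair distance before the pair stops counting as rigid.
    Returns a list of marker-index sets, one per putative rigid limb.
    """
    n = positions.shape[1]
    rigid = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            # Distance between markers i and j in every frame.
            d = np.linalg.norm(positions[:, i] - positions[:, j], axis=1)
            rigid[i, j] = rigid[j, i] = d.std() < tol

    # Union-find over the "rigidly connected" relation.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if rigid[i, j]:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())
```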

Relevance:

100.00%

Publisher:

Abstract:

Optical motion capture systems suffer from marker occlusions, resulting in the loss of useful information. This paper addresses the problem of real-time joint localisation of legged skeletons in the presence of such missing data. The data are assumed to be labelled 3D marker positions from a motion capture system. An integrated framework is presented which predicts the occluded marker positions using a Variable Turn Model within an Unscented Kalman filter. Inferred information from neighbouring markers is used as observation states; these constraints are efficient, simple, and implementable in real time. This work also takes advantage of the common case in which missing markers are still visible to a single camera, combining predictions with the under-determined positions to give more accurate estimates. An Inverse Kinematics technique is then applied to ensure that the bone lengths remain constant over time, so the system can maintain a continuous data flow. The marker and Centre of Rotation (CoR) positions can be calculated with high accuracy even when markers are occluded for long periods. Our methodology is tested against some of the most popular methods for marker prediction, and the results confirm that our approach outperforms them in estimating both marker and CoR positions. © 2012 Springer-Verlag.
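
The paper's actual predictor (a Variable Turn Model inside an Unscented Kalman filter) is not reproduced here. The sketch below is a deliberately simplified stand-in: a constant-velocity prediction blended with a rigid offset from a visible neighbouring marker, followed by enforcing a constant marker-to-CoR distance in the spirit of the constraint-based and IK steps the abstract describes. All names, the equal-weight blend, and the 120 Hz frame time are assumptions.

```python
import numpy as np

def predict_occluded_marker(prev_pos, prev_vel, neighbour_pos, offset,
                            cor_pos, marker_radius, dt=1.0 / 120.0):
    """Simplified occluded-marker predictor (not the paper's UKF).

    prev_pos, prev_vel : last known position/velocity of the occluded marker
    neighbour_pos      : current position of a visible marker on the same limb
    offset             : assumed-constant vector from that neighbour to the
                         occluded marker, learned while both were visible
    cor_pos, marker_radius : limb centre of rotation and the constant
                         marker-to-CoR distance enforced afterwards
    """
    # 1. Motion-model prediction (constant velocity here; the paper uses a
    #    Variable Turn Model inside an Unscented Kalman filter).
    predicted = prev_pos + prev_vel * dt

    # 2. Constraint inferred from the visible neighbouring marker.
    constrained = neighbour_pos + offset

    # 3. Naive fusion of the two estimates (equal weights assumed).
    fused = 0.5 * (predicted + constrained)

    # 4. Project back onto the sphere of constant marker-to-CoR distance,
    #    echoing the bone-length-preserving IK step in the abstract.
    direction = fused - cor_pos
    direction /= np.linalg.norm(direction)
    return cor_pos + marker_radius * direction
```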

Relevance:

100.00%

Publisher:

Abstract:

Clinical optical motion capture allows us to obtain kinematic and kinetic outcome measures that aid clinicians in diagnosing and treating pathologies affecting healthy gait. The long-term aim for gait centres is subject-specific analyses that can predict, prevent, or reverse the effects of pathologies through gait retraining. To track the body, anatomical segment coordinate systems are commonly created by applying markers to the surface of the skin over specific bony anatomy that is manually palpated. The location and placement of these markers is subjective, and precision errors of up to 25 mm have been reported [1]. Additionally, the selection of which anatomical landmarks to use in segment models can result in large angular differences; for example, angular differences in the trunk can reach 53° for the same motion depending on marker placement [2]. These errors can produce erroneous kinematic outcomes that either diminish or increase the apparent effects of a treatment or pathology compared to healthy data. Our goal was to improve the accuracy and precision of optical motion capture outcome measures. This thesis describes two separate studies. In the first study we aimed to establish an approach that would allow us to independently quantify the error among trunk models; using this approach, we determined whether there was a best model for accurately tracking trunk motion. In the second study we designed a device to improve precision in test-retest protocols and reduce the set-up time for motion capture experiments. Our method of comparing a kinematically derived centre-of-mass velocity with a kinetically derived one was successful in quantifying error among trunk models. Our findings indicate that models that use lateral shoulder markers and limit the translational degrees of freedom of the trunk through shared pelvic markers produce the least error for the tasks we studied. We also successfully reduced intra- and inter-operator anatomical marker placement errors using a marker alignment device. The improved accuracy and precision resulting from the methods established in this thesis may lead to increased sensitivity to changes in kinematics and, ultimately, more consistent treatment outcomes.
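
The thesis's validation idea, comparing a centre-of-mass velocity derived kinematically (from segment motion) with one derived kinetically (by integrating ground reaction forces), can be sketched as follows. The segment-mass weighting, force-plate input, and RMS score shown are illustrative assumptions rather than the thesis's actual pipeline.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2, z-up lab frame (assumed)

def com_velocity_kinematic(segment_velocities, segment_masses):
    """Whole-body CoM velocity as the mass-weighted mean of segment CoM
    velocities; segment_velocities has shape (segments, frames, 3)."""
    m = np.asarray(segment_masses)
    v = np.asarray(segment_velocities)
    return np.tensordot(m, v, axes=(0, 0)) / m.sum()

def com_velocity_kinetic(grf, body_mass, v0, dt):
    """CoM velocity from Newton's second law: integrate (GRF/m + g),
    given a (frames, 3) ground reaction force and an initial velocity."""
    accel = np.asarray(grf) / body_mass + GRAVITY
    return v0 + np.cumsum(accel * dt, axis=0)

def rms_velocity_error(v_kinematic, v_kinetic):
    """RMS difference, used here as a simple trunk-model error score."""
    return np.sqrt(np.mean(np.sum((v_kinematic - v_kinetic) ** 2, axis=1)))
```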

Relevance:

100.00%

Publisher:

Abstract:

In 1999, Richards compared the accuracy of commercially available motion capture systems commonly used in biomechanics. Richards identified that in static tests the optical motion capture systems generally produced RMS errors of less than 1.0 mm. During dynamic tests, the RMS error increased to as much as 4.2 mm in some systems. In the 12 years since, motion capture systems have continued to evolve and now include high-resolution CCD or CMOS image sensors, wireless communication, and high full-frame sampling frequencies. In addition to hardware advances, there have also been a number of advances in software, including improved calibration and tracking algorithms, real-time data streaming, and the introduction of the C3D standard. These advances have allowed the system manufacturers to maintain a high retail price in the name of advancement. In areas such as gait analysis and ergonomics, many of the advanced features such as high-resolution image sensors and high sampling frequencies are not required, given the nature of the tasks typically investigated. Recently, Natural Point introduced low-cost cameras which, at face value, appear suitable as at the very least a high-quality teaching tool in biomechanics, and possibly even a research tool when coupled with the correct calibration and tracking software. The aim of this study was therefore to compare both the linear accuracy and the quality of angular kinematics from a typical high-end motion capture system and a low-cost system during a simple task.
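
The abstract does not spell out the comparison protocol. A common way to assess linear accuracy, in the spirit of Richards-style tests, is to track two markers fixed to a rigid bar and compare their measured separation with the calibrated length; the sketch below shows that idea, with hypothetical function and variable names.

```python
import numpy as np

def rigid_bar_rms_error(marker_a, marker_b, known_length_mm):
    """RMS error of the measured distance between two markers fixed on a
    rigid bar of known length.

    marker_a, marker_b: (frames, 3) trajectories in millimetres from the
    system under test; known_length_mm: the calibrated bar length.
    """
    measured = np.linalg.norm(marker_a - marker_b, axis=1)
    return np.sqrt(np.mean((measured - known_length_mm) ** 2))

# Hypothetical usage comparing the two systems on the same bar:
# err_high_end = rigid_bar_rms_error(a_high_end, b_high_end, 500.0)
# err_low_cost = rigid_bar_rms_error(a_low_cost, b_low_cost, 500.0)
```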

Relevance:

100.00%

Publisher:

Abstract:

The motion capture process places unique demands on performers. The impact of this process on the simultaneously artistic/somatic nature of dance practice is profound. This paper explores, from a performer’s perspective, how the process of performing in an optical motion capture system can impact and limit, but also expand and reconfigure, a dancer’s somatic practice. It argues that working within motion capture processes not only affects the immediate contexts of capture and interactive performance, but also sets up a dialogue between dance practices within and beyond the motion capture studio.

Relevance:

100.00%

Publisher:

Abstract:

Methods for optical motion capture often require time-consuming manual processing before the data can be used for subsequent tasks such as retargeting or character animation. These processing steps restrict the applicability of motion capture, especially in dynamic VR environments with real-time requirements. To address these problems, we present two additional, fast, and automatic processing stages based on the motion capture pipeline presented in [HSK05]. A normalization step aligns the recorded coordinate systems with the skeleton structure to yield a common and intuitive data basis across different recording sessions. A second step computes a parameterization based on automatically extracted main movement axes to generate a compact motion description. Our method restricts neither the placement of marker bodies nor the recording setup, and requires only a short calibration phase.
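
The abstract does not state how the main movement axes are extracted; principal component analysis is one plausible reading, and the sketch below shows that variant under the assumption that each frame is a flattened vector of joint angles or marker coordinates. The function name and the choice of three axes are illustrative.

```python
import numpy as np

def main_movement_axes(frames, n_axes=3):
    """Extract dominant movement axes from a motion clip via PCA.

    frames: (n_frames, n_dofs) array, e.g. flattened joint angles or
    marker coordinates per frame. Returns the mean pose, the principal
    axes, and the low-dimensional parameterization of the clip.
    """
    mean = frames.mean(axis=0)
    centred = frames - mean
    # SVD of the centred data gives the principal movement directions.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    axes = vt[:n_axes]                 # (n_axes, n_dofs)
    params = centred @ axes.T          # compact motion description
    return mean, axes, params
```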

Relevance:

100.00%

Publisher:

Abstract:

3D motion capture is a medium that records motion, typically human motion, and converts it into a form that can be represented digitally. It is a fast-evolving field, and recent inertial technology may provide new artistic possibilities for its use in live performance. Although not often used in this context, motion capture has a combination of attributes that can provide unique forms of collaboration with the performing arts. The inertial motion capture suit used for this study has orientation sensors placed at strategic points on the body to map body motion. Its portability, real-time performance, ease of use, and immunity from the line-of-sight problems inherent in optical systems suggest it would work well as a live-performance technology. Many animation techniques can be used in real time. This research examines a broad cross-section of these techniques using four practice-led cases to assess the suitability of inertial motion capture for live performance. Although each case explores different visual possibilities, all make use of the performativity of the medium, using either an improvisational format or interactivity among stage, audience, and screen that would be difficult to emulate any other way. A real-time environment cannot reproduce the depth and sophistication of the animation people have come to expect from other media, which takes many hours to render. In time, the combination of what can be produced in real time and the tools available in a 3D environment will no doubt create its own tree of aesthetic directions in live performance. The case studies examine the potential for interactivity that this technology offers.

Relevance:

100.00%

Publisher:

Abstract:

My practice-led research explores and maps workflows for generating experimental creative work involving inertia-based motion capture technology. Motion capture has often been used as a way to bridge animation and dance, resulting in abstracted visual outcomes. In early works this process was largely achieved through rotoscoping, reference footage, and mechanical forms of motion capture. With the evolution of the technology, optical and inertial forms of motion capture are now more accessible and able to accurately capture a larger range of complex movements. Made by Motion is a collaboration between digital artist Paul Van Opdenbosch and performer and choreographer Elise May: a series of studies on captured motion data used to generate experimental visual forms that reverberate in space and time. The project investigates the invisible forces generated by, and influencing, the movement of a dancer, and how those forces can be captured and applied to generate visual outcomes that surpass simple data visualisation and project the intent of the performer’s movements. The source or ‘seed’ comes from using an Xsens MVN inertial motion capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. In my presentation I will display and discuss selected creative works from the project, along with the process and considerations behind them.