943 results for 3D motion capture


Relevance: 100.00%

Abstract:

This study describes the validation of a new wearable system for the assessment of 3D spatial parameters of gait. The method is based on the detection of temporal parameters, coupled with optimized fusion and de-drifted integration of inertial signals. Composed of two wireless inertial modules attached to the feet, the system provides stride length, stride velocity, foot clearance, and turning angle at each gait cycle, based on the computation of 3D foot kinematics. The accuracy and precision of the proposed system were assessed against an optical motion capture system as reference. Its repeatability across measurements (test-retest reliability) was also evaluated. Measurements were performed on 10 young (mean age 26.1±2.8 years) and 10 elderly volunteers (mean age 71.6±4.6 years) who were asked to perform U-shaped and 8-shaped walking trials, and then a 6-min walking test (6MWT). A total of 974 gait cycles were used to compare gait parameters with the reference system. Mean accuracy±precision was 1.5±6.8 cm for stride length, 1.4±5.6 cm/s for stride velocity, 1.9±2.0 cm for foot clearance, and 1.6±6.1° for turning angle. A difference in gait performance between young and elderly volunteers was observed during the 6MWT, particularly in foot clearance. The proposed method allows various aspects of gait to be analyzed, including turns, gait initiation and termination, and inter-cycle variability. The system is lightweight, easy to wear and use, and suitable for clinical applications requiring objective evaluation of gait outside the lab environment.
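The de-drifted integration the abstract describes can be illustrated with a minimal sketch: integrate foot acceleration between two foot-flat instants and remove the drift linearly, assuming zero velocity at both ends (a zero-velocity-update-style correction). This is a generic illustration of the idea, not the authors' actual algorithm; the function name and the 1-D simplification are assumptions.

```python
import numpy as np

def dedrifted_stride_length(acc, dt):
    """Estimate stride displacement from forward acceleration samples
    between two foot-flat instants, assuming velocity is zero at both
    ends (zero-velocity update).  acc: 1-D array (m/s^2), dt: sample period (s)."""
    # Naive integration of acceleration -> velocity (accumulates drift).
    vel = np.cumsum(acc) * dt
    # The residual velocity at the last sample is attributed to drift and
    # removed linearly over the cycle, so velocity starts and ends at zero.
    drift = np.linspace(0.0, vel[-1], len(vel))
    vel_corrected = vel - drift
    # Integrate the corrected velocity to obtain displacement.
    return float(np.sum(vel_corrected) * dt)
```

A symmetric accelerate-then-decelerate profile, for instance, integrates to a stride of 0.25 m at 100 Hz with ±1 m/s² over one second.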

Relevance: 100.00%

Abstract:

Measurement of three-dimensional (3D) knee joint angles outside a laboratory is of benefit in clinical examination and for comparing therapeutic treatments. Although several motion capture devices exist, there is a need for an ambulatory system that could be used in routine practice. To date, inertial measurement units (IMUs) have proven suitable for unconstrained measurement of knee joint differential orientation. Nevertheless, this differential orientation must be converted into three reliable and clinically interpretable angles. Thus, the aim of this study was to propose a new calibration procedure adapted to the joint coordinate system (JCS) that requires only IMU data. The repeatability of the calibration procedure, as well as the errors in the measurement of the 3D knee angle during gait in comparison with a reference system, were assessed on eight healthy subjects. The new procedure, relying on active and passive movements, showed high repeatability of the mean values (offset<1°) and angular patterns (SD<0.3° and CMC>0.9). In comparison with the reference system, this functional procedure showed high precision (SD<2° and CC>0.75) and moderate accuracy (between 4.0° and 8.1°) for the three knee angles. The combination of the inertial-based system with the functional calibration procedure proposed here results in a promising tool for the measurement of 3D knee joint angles. Moreover, the method could be adapted to measure other complex joints, such as the ankle or elbow.
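The conversion from two sensor orientations to an interpretable hinge angle can be sketched with quaternions: form the differential orientation of the shank relative to the thigh, then read off the rotation about the (assumed) medio-lateral axis. This is a generic illustration, not the paper's JCS calibration procedure; the axis convention and function names are assumptions.

```python
import math

def q_conj(q):
    """Conjugate of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def q_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def flexion_angle_deg(q_thigh, q_shank):
    """Differential orientation of the shank relative to the thigh,
    reduced to the rotation angle about the x-axis.  Valid only when the
    joint behaves mostly like a hinge about that axis."""
    w, x, _, _ = q_mul(q_conj(q_thigh), q_shank)
    return math.degrees(2.0 * math.atan2(x, w))
```

For example, with the thigh at identity and the shank rotated 30° about x, the function returns 30°.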

Relevance: 100.00%

Abstract:

The aging population has recently become a pressing issue for modern societies around the world, raising two important problems. The first is how to continuously monitor the movements of people who have suffered a stroke in natural living environments, in order to provide more valuable feedback to guide clinical interventions. The second is how to guide elderly people effectively when they are at home or inside other buildings, making their lives easier and more convenient. Human motion tracking and navigation have therefore become active research fields as the number of elderly people increases. However, motion capture has been extremely challenging to take beyond laboratory environments while still obtaining accurate measurements of human physical activity, especially in free-living environments, and navigation in free-living environments also poses problems such as GPS signal denial and the moving objects commonly present in such environments. This thesis seeks to develop new technologies to enable accurate motion tracking and positioning in free-living environments. It comprises three specific goals, pursued using our developed IMU board and a camera from The Imaging Source: (1) to develop a robust, real-time orientation algorithm using only IMU measurements; (2) to develop robust distance estimation in static free-living environments, in order to estimate people's position and navigate them, while solving the scale ambiguity problem that usually appears in monocular camera tracking by integrating data from the visual and inertial sensors; (3) for the case where moving objects viewed by the camera are present in free-living environments, to first design a robust scene segmentation algorithm and then separately estimate the motion of the vIMU system and of the moving objects.
To achieve real-time orientation tracking, an Adaptive-Gain Orientation Filter (AGOF) is proposed in this thesis, based on the theory of deterministic and frequency-based approaches and using only measurements from the newly developed MARG (Magnetic, Angular Rate, and Gravity) sensors. To further obtain robust positioning, an adaptive frame-rate vision-aided IMU system is proposed to develop and implement fast vIMU ego-motion estimation algorithms, in which the orientation is first estimated in real time from the MARG sensors and then used to estimate the position based on data from the visual and inertial sensors. For the case of moving objects viewed by the camera in free-living environments, a robust scene segmentation algorithm is first proposed to obtain the position estimate and, simultaneously, the 3D motion of the moving objects. Finally, corresponding simulations and experiments have been carried out.
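The adaptive-gain idea behind a filter of this kind can be illustrated with a 1-D complementary filter whose accelerometer gain shrinks when the measured acceleration magnitude departs from gravity, i.e. when the gravity reference is corrupted by linear acceleration. This is a simplified stand-in, not the thesis's AGOF; the gain schedule and names are assumptions.

```python
import math

G = 9.81  # gravity magnitude used as the accelerometer reference (m/s^2)

def adaptive_complementary_step(angle, gyro_rate, acc_y, acc_z, dt,
                                base_gain=0.02):
    """One update of a 1-D tilt estimate (rotation about x, radians).
    The accelerometer gain is scaled by a confidence factor that is 1 at
    rest and falls to 0 as |acc| deviates from gravity."""
    acc_norm = math.hypot(acc_y, acc_z)
    confidence = max(0.0, 1.0 - abs(acc_norm - G) / G)
    gain = base_gain * confidence
    angle_gyro = angle + gyro_rate * dt      # propagate with the gyroscope
    angle_acc = math.atan2(acc_y, acc_z)     # gravity-based tilt measurement
    # Blend: mostly gyro, corrected toward the accelerometer when trustworthy.
    return (1.0 - gain) * angle_gyro + gain * angle_acc
```

Under strong dynamics (|acc| far from g) the update degenerates to pure gyro integration; at rest it slowly pulls the estimate toward the accelerometer tilt.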

Relevance: 100.00%

Abstract:

We propose the design of a real-time system to recognize and interpret hand gestures. The acquisition devices are low-cost 3D sensors. The 3D hand pose is segmented, characterized, and tracked using a growing neural gas (GNG) structure. The system's capacity to obtain information with a high degree of freedom allows the encoding of many gestures and very accurate motion capture. The use of hand pose models combined with the motion information provided by the GNG makes it possible to deal with the problem of hand motion representation. A natural interface applied to a virtual mirror-writing system and to a hand pose estimation system will be designed to demonstrate the validity of the approach.
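A single GNG-style adaptation step can be sketched as follows: find the best-matching node for an input sample and move it (and, in this heavily simplified version, its runner-up standing in for the topological neighbours) toward the sample. Full GNG also maintains edges, edge ageing, accumulated error, and periodic node insertion, all omitted here; the function name and learning rates are assumptions.

```python
import math

def gng_step(nodes, sample, eps_winner=0.2, eps_neighbor=0.006):
    """One simplified Growing-Neural-Gas adaptation step on 2-D data.
    nodes: list of [x, y] reference vectors, mutated in place.
    Returns the index of the winning (best-matching) node."""
    dists = [math.dist(n, sample) for n in nodes]
    order = sorted(range(len(nodes)), key=dists.__getitem__)
    s1, s2 = order[0], order[1]          # winner and runner-up
    for i, eps in ((s1, eps_winner), (s2, eps_neighbor)):
        # Move the node a fraction eps of the way toward the sample.
        nodes[i][0] += eps * (sample[0] - nodes[i][0])
        nodes[i][1] += eps * (sample[1] - nodes[i][1])
    return s1
```

Repeatedly feeding samples drawn from the observed hand surface makes the node set approximate the hand's shape, which is the role the GNG structure plays in the abstract above.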

Relevance: 100.00%

Abstract:

A camera maps 3-dimensional (3D) world space to a 2-dimensional (2D) image space. In the process it loses the depth information, i.e., the distance from the camera focal point to the imaged objects. It is impossible to recover this information from a single image. However, by using two or more images from different viewing angles this information can be recovered, which in turn can be used to obtain the pose (position and orientation) of the camera. Using this pose, a 3D reconstruction of imaged objects in the world can be computed. Numerous algorithms have been proposed and implemented to solve this problem; they are commonly called Structure from Motion (SfM). State-of-the-art SfM techniques have been shown to give promising results. However, unlike a Global Positioning System (GPS) or an Inertial Measurement Unit (IMU), which directly give position and orientation respectively, a camera system estimates them only after running SfM as described above. This makes the pose obtained from a camera highly sensitive to the captured images and to effects such as low lighting, poor focus, or improper viewing angles. In some applications, for example an Unmanned Aerial Vehicle (UAV) inspecting a bridge or a robot mapping an environment using Simultaneous Localization and Mapping (SLAM), it is often difficult to capture images under ideal conditions. This report examines the use of SfM methods in such applications and the role of combining multiple sensors, viz., sensor fusion, to achieve more accurate and usable position and reconstruction information. This project investigates the role of sensor fusion in accurately estimating the pose of a camera for the application of 3D reconstruction of a scene. The first set of experiments is conducted in a motion capture room. These results are taken as ground truth in order to evaluate the strengths and weaknesses of each sensor and to map their coordinate systems.
Then a number of scenarios in which SfM fails are targeted. The pose estimates obtained from SfM are replaced by those obtained from other sensors and the 3D reconstruction is completed. Quantitative and qualitative comparisons are made between the 3D reconstruction obtained using only a camera and that obtained using the camera together with a LIDAR and/or an IMU. Additionally, the project addresses the performance issues faced when handling large data sets of high-resolution images by implementing the system on the Superior high-performance computing cluster at Michigan Technological University.
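The core SfM step of recovering depth from two views can be sketched with linear (DLT) triangulation, assuming the two camera projection matrices are already known. This is textbook material used to illustrate the report's topic, not its actual pipeline; names are assumptions.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image
    coordinates of the same point in each view."""
    # Each view contributes two homogeneous linear constraints on X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value; dehomogenize to get (X, Y, Z).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

With a canonical first camera and a second camera translated one unit along x, a point at depth 5 projects to (0, 0) and (-0.2, 0), and the function recovers (0, 0, 5).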

Relevance: 100.00%

Abstract:

Movement analysis systems are expanding continuously and are increasingly used in sports and rehabilitation. In particular, the assessment of ACL injuries currently relies on classical procedures, as well as on gold-standard analytical instruments such as optoelectronic systems. The use of inertial sensors for movement analysis is growing rapidly, including in the fields described above. However, the accuracy of such systems in the execution of complex, highly dynamic movements remains to be assessed. The objective of this work was to validate an inertial sensor system against an optoelectronic system for specific motor tasks in the context of ACL injury prevention. Thirty healthy subjects were evaluated using two technologies synchronously: the Vicon optoelectronic system and the Xsens inertial system. The movements performed by the subjects belonged to an ACL injury-prevention protocol developed at the Isokinetic centre, comprising six highly dynamic tasks common in major sports. An excellent correlation and low error were found for all joint angles analysed in the sagittal plane, a good correlation in the frontal plane for most angles (excluding ankle and pelvis), and lower reproducibility in the transverse plane, particularly for the ankle, knee, and pelvis angles. The results showed little dependence on the type of task analysed. Inertial technology proved to be an excellent alternative for movement analysis in tasks specific to the assessment of ACL biomechanics. The discrepancies found may be attributable to the different kinematic protocols used and to marker placement. Advances in the technology could improve the precision of these sensors, offering even more detailed information where gaps now exist.
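The agreement measures used in validations of this kind, correlation plus error between the inertial and optoelectronic angle curves, can be sketched as follows. This is a generic Pearson-correlation/RMSE computation, not the thesis's exact statistics (which also include CMC); the function name is an assumption.

```python
import math

def curve_agreement(ref, test):
    """Pearson correlation and RMSE between two joint-angle curves
    sampled at the same instants (e.g. optoelectronic vs inertial)."""
    n = len(ref)
    mr = sum(ref) / n
    mt = sum(test) / n
    cov = sum((r - mr) * (t - mt) for r, t in zip(ref, test))
    var_r = sum((r - mr) ** 2 for r in ref)
    var_t = sum((t - mt) ** 2 for t in test)
    corr = cov / math.sqrt(var_r * var_t)
    rmse = math.sqrt(sum((r - t) ** 2 for r, t in zip(ref, test)) / n)
    return corr, rmse
```

A curve offset by a constant bias, for instance, still correlates perfectly while the RMSE exposes the offset, which is why both measures are reported together.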

Relevance: 100.00%

Abstract:

For some years now we have been introduced to the possibility of living in a virtual world: it is enough to put on an augmented, virtual, or mixed reality headset that reproduces, in the surrounding environment, objects that do not physically exist. In recent months this possibility has become increasingly concrete with the introduction, by the giants of the IT industry, of the concept of the "Metaverse": a completely digital parallel universe where every social activity can be carried out. The goal of this thesis is to contribute, in a small way, to this enormous project by creating a mode of interaction between users that is virtual but based on entirely real behaviours. Hence the title of the work: "B-R1ING MoCap: registrazione e riproduzione dei movimenti umani su avatar 3D in realtà aumentata" (recording and playback of human movements on 3D avatars in augmented reality). The aim of the project is to allow a person to record a video of a moving subject, save the subject's movements in a data packet, and finally play them back on a 3D avatar animated in augmented reality. All of this is part of a social-network application that enables interaction between users in this way: a user can record human movements and send them to another user, who can play back the message in augmented reality on a smartphone. This introduces a new type of indirect digital communication, moving from written communication, established for decades in text messages, through oral communication, introduced a few years ago with voice messages, to the gestural communication made possible by this work. The project had two main phases: one in which, after identifying the best technique, the motion capture was performed, and another in which the recorded movement was transformed into an animation for a 3D subject displayed in augmented reality.

Relevance: 90.00%

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance: 90.00%

Abstract:

The evolution of computer animation represents one of the most relevant and revolutionary aspects of the rise of contemporary digital visual culture (Darley, 2000), in particular of phenomena such as "spectacular" cinema (ibidem) and video games. This article analyzes the characteristics of this "culture of simulation" (Turkle, 1995: 20), relating the multidisciplinarity and the spectrum of technical and stylistic choices to the dimension of virtual character acting. The result of these hybrid mixtures and computerized human motion capture techniques - called virtual cinema, universal capture, motion capture, etc. - consists mainly in a sophistication of rotoscoping, as a new interpretation and appropriation of the captured image. This human motion capture technology, used largely by cinema and digital games, is one of the reasons why the authenticity of the animation is sometimes questioned. It is in the field of 3D computer animation that this change is most significant, with innovative techniques of image manipulation and "hyper-cinema" (Lamarre, 2006: 31) character control with a deeper sense of emotions appearing regularly. This shift in the culture - which Manovich (2006: 27) calls "photo-GRAPHICS", and which Mulvey (2007) argues creates a new form of possessive relationship with the viewer, who can analyze the image in detail, acquire it, and modify it - is one of the most important aspects of the rise of Cubitt's (2007) "cinema of attraction". This article delves into the analysis of virtual character animation, particularly in the field of 3D computer animation and human digital acting.

Relevance: 90.00%

Abstract:

In this text, we intend to explore augmented reality as a means to visualise interactive communication projects. With ARToolkit, Virtools and 3ds Max applications, we aim to show how to create a portable interactive platform that resorts to the environment and markers for constructing the game’s scenario. We plan to show that the realism of simulation, together with the merger of artificial objects with the real world, can generate interactive empathy between players and their avatars.

Relevance: 90.00%

Abstract:

One of the major challenges in the development of an immersive system is handling the delay between the tracking of the user's head position and the updated projection of a 3D image or auralised sound, also called end-to-end delay. Excessive end-to-end delay can result in a general decrease of the "feeling of presence", the occurrence of motion sickness, and poor performance in perception-action tasks. These latencies must be known in order to provide insight into the technological (hardware/software optimization) or psychophysical (recalibration sessions) strategies for dealing with them. Our goal was to develop a new measurement method for end-to-end delay that is both precise and easily replicated. We used a Head and Torso Simulator (HATS) as an auditory signal sensor, a fast-response photo-sensor to detect the visual stimulus rendered from a motion capture system, and a voltage input trigger as a real-time event marker. The HATS was mounted on a turntable, which allowed us to precisely change the 3D sound relative to the head position. When the virtual sound source was at 90° azimuth, the corresponding HRTF set all the intensity values to zero, and at the same time a trigger registered the real-time event of turning the HATS to 90° azimuth. Furthermore, with the HATS turned 90° to the left, the motion capture marker visualization fell exactly on the photo-sensor receptor. This method allowed us to precisely measure the delay from tracking to display. Moreover, our results show that the tracking method, its tracking frequency, and the rendering of the sound reflections are the main predictors of end-to-end delay.
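The final delay computation, pairing each real-time trigger event with the first sensed display or auralisation response that follows it, can be sketched like this. The pairing rule and names are assumptions for illustration, not the authors' published procedure.

```python
def end_to_end_delays(trigger_times, response_times):
    """Pair each real-time trigger with the first response detected at or
    after it and return the resulting latencies (same time unit as input).
    Both lists are assumed sorted in ascending order."""
    delays = []
    j = 0
    for t in trigger_times:
        # Advance to the first response not earlier than this trigger.
        while j < len(response_times) and response_times[j] < t:
            j += 1
        if j == len(response_times):
            break  # no response left for this (or later) triggers
        delays.append(response_times[j] - t)
        j += 1  # each response is consumed by at most one trigger
    return delays
```

Averaging the returned latencies over many turntable events would then give the end-to-end delay estimate discussed above.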

Relevance: 90.00%

Abstract:

The video-game industry is growing exponentially and is already overtaking other leading entertainment industries. In this project we set out to build a video game rendered in real 3D space. The following software was used: Blender to design the 3D models, C++ as the programming language for the code, and Ogre3D, a set of basic libraries (a graphics engine) for developing video games. The logic for 3D motion and for the collisions between the game's particles was designed entirely within this project according to the needs of the game, in a way compatible with the Blender files and the Ogre3D libraries.
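The particle-collision test such a game needs can be sketched as a sphere-sphere overlap check: two spherical particles collide when the distance between their centres does not exceed the sum of their radii. Shown in Python here for brevity, although the project itself is written in C++; the function name is an assumption.

```python
import math

def spheres_collide(c1, r1, c2, r2):
    """Return True when two spheres overlap.
    c1, c2: (x, y, z) centres; r1, r2: radii."""
    dx, dy, dz = c1[0] - c2[0], c1[1] - c2[1], c1[2] - c2[2]
    # Collision iff centre distance <= sum of radii.
    return math.sqrt(dx*dx + dy*dy + dz*dz) <= r1 + r2
```

In practice engines compare squared distances to avoid the square root; the form above is kept for readability.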

Relevance: 90.00%

Abstract:

A child's natural gait pattern may be affected by the gait laboratory environment. Wearable devices using body-worn sensors have been developed for gait analysis. The purpose of this study was to validate and explore the use of foot-worn inertial sensors for the measurement of selected spatio-temporal parameters, based on the 3D foot trajectory, in independently walking children with cerebral palsy (CP). We performed a case control study with 14 children with CP aged 6-15 years and 15 age-matched controls. Accuracy and precision of the foot-worn device were measured using an optical motion capture system as the reference system. Mean accuracy±precision for both groups was 3.4±4.6cm for stride length, 4.3±4.2cm/s for speed and 0.5±2.9° for strike angle. Longer stance and shorter swing phases with an increase in double support were observed in children with CP (p=0.001). Stride length, speed and peak angular velocity during swing were decreased in paretic limbs, with significant differences in strike and lift-off angles. Children with cerebral palsy showed significantly higher inter-stride variability (measured by their coefficient of variation) for speed, stride length, swing and stance. During turning trajectories speed and stride length decreased significantly (p<0.01) for both groups, whereas stance increased significantly (p<0.01) in CP children only. Foot-worn inertial sensors allowed us to analyze gait spatio-temporal data outside a laboratory environment with good accuracy and precision, and with results congruent with what is known of gait variations during linear walking in children with CP.

Relevance: 90.00%

Abstract:

Thanks to decades of research, gait analysis has become an efficient tool. However, mainly due to the price of motion capture systems, standard gait laboratories can measure only a few consecutive steps of ground walking. Recently, wearable systems were proposed to measure human motion without volume limitation. Although accurate, these systems are incompatible with most existing calibration procedures, and several years of research will be necessary for their validation. A new approach, consisting of using a stationary system with a small capture volume for the calibration procedure and then measuring gait with a wearable system, could be very advantageous. It could benefit from the knowledge related to stationary systems, allow long-distance monitoring, and provide new descriptive parameters. The aim of this study was to demonstrate the potential of this approach. Thus, a combined system was proposed to measure the 3D lower body joint angles and segmental angular velocities. It was then assessed in terms of reliability with respect to the calibration procedure, repeatability, and concurrent validity. The dispersion of the joint angles across calibrations was comparable to that of stationary systems, and good reliability was obtained for the angular velocities. The repeatability results confirmed that mean cycle kinematics of long-distance walks can be used for comparisons between subjects, and highlighted the interest of the variability between cycles. Finally, kinematic differences were observed between participants with different ankle conditions. In conclusion, this study demonstrated the potential of a mixed approach for human movement analysis.