983 results for Motion capture, Gait, Markerless, Segmentation


Relevance:

100.00%

Publisher:

Abstract:

Recent studies have shown that walking in an aquatic environment can bring considerable benefits in the context of a rehabilitation process: walking in water is today considered one of the main therapies for patients with gait disorders, and it is also used to improve recovery after surgery and injury. A biomechanical characterisation of human gait in water would, however, allow a deeper understanding of the effects of this activity on the rehabilitation process, and therefore a more targeted prescription of it as part of therapy. Despite growing interest, one of the reasons why few studies have yet been carried out in this direction lies in the inadequacy of many traditional motion capture systems for underwater use. The new branch of markerless motion capture could represent a solution in this respect. In particular, this thesis deals with the markerless technique based on visual hull reconstruction by back-projection of silhouettes. The initial process that produces the silhouettes from the acquisition videos is called segmentation, and it is a particularly important step for achieving good final accuracy in the reconstruction of joint kinematics. Seven segmentation algorithms were therefore developed and characterised in this thesis, designed specifically for the analysis of gait in water with the markerless technique. It is also shown how certain characteristics of the algorithms influence the final quality of the segmentation, and a further post-processing algorithm for improving the quality of the segmented images is presented.
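As a hedged illustration of the segmentation step described above, the following minimal Python sketch extracts a silhouette by simple background subtraction with morphological clean-up; it assumes a static camera and a pre-recorded background frame, and it does not reproduce any of the seven algorithms developed in the thesis.

```python
# Minimal silhouette-segmentation sketch (assumed setup: static camera,
# pre-recorded background frame); not the thesis's algorithms.
import cv2
import numpy as np

def segment_silhouette(frame_bgr, background_bgr, threshold=30, kernel_size=5):
    """Return a binary silhouette mask for one video frame."""
    # Work in grayscale to reduce sensitivity to underwater colour casts.
    frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    background = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)

    # Absolute difference from the background model, then a fixed threshold.
    diff = cv2.absdiff(frame, background)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)

    # Morphological opening and closing as simple post-processing of the mask.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```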

Relevance:

100.00%

Publisher:

Abstract:

3D motion capture is a medium that records motion, typically human motion, converting it into a form that can be represented digitally. It is a fast-evolving field, and recent inertial technology may provide new artistic possibilities for its use in live performance. Although not often used in this context, motion capture has a combination of attributes that can provide unique forms of collaboration with the performing arts. The inertial motion capture suit used for this study has orientation sensors placed at strategic points on the body to map body motion. Its portability, real-time performance, ease of use, and immunity from the line-of-sight problems inherent in optical systems suggest it would work well as a live performance technology. Many animation techniques can be used in real time. This research examines a broad cross-section of these techniques, using four practice-led cases to assess the suitability of inertial motion capture for live performance. Although each case explores different visual possibilities, all make use of the performativity of the medium, using either an improvisational format or interactivity among stage, audience and screen that would be difficult to emulate any other way. A real-time environment cannot reproduce the depth and sophistication of the animation people have come to expect from film and games, which can take many hours to render. In time, the combination of what can be produced in real time and the tools available in a 3D environment will no doubt create its own tree of aesthetic directions in live performance. The case studies also look at the potential for interactivity that this technology offers.

Relevance:

100.00%

Publisher:

Abstract:

3D motion capture is a fast-evolving field, and recent inertial technology may expand the artistic possibilities for its use in live performance. Inertial motion capture has three attributes that make it suitable for use in live performance: it is portable, easy to use, and can operate in real time. Using four projects, this paper discusses the suitability of inertial motion capture for live performance, with a particular emphasis on dance. Dance is an artistic application of human movement, and motion capture is the means to record human movement as digital data. As such, dance is clearly a field in which the use of real-time motion capture is likely to become more common, particularly as projected visual effects, including real-time video, are already often used in dance performances. Understandably, animation generated in real time using motion capture is not as extensive or as clean as the highly mediated animation used in movies and games, but the quality is still impressive, and the ‘liveness’ of the animation has compensating features that offer new ways of communicating with an audience.

Relevance:

100.00%

Publisher:

Abstract:

The Silk Road Project was a practice-based research project investigating the potential of motion capture technology to inform perceptions of embodiment in dance performance. The project created a multi-disciplinary collaborative performance event using dance performance and real-time motion capture at Deakin University’s Deakin Motion Lab. Several new technological advances in producing real-time motion capture performance were made, along with a performance event that examined the aesthetic interplay between a dancer’s movement and the precise mappings of its trajectories created by motion capture and real-time motion-graphic visualisations.

Relevance:

100.00%

Publisher:

Abstract:

This paper investigates virtual reality representations of the 1599 Boar’s Head Theatre and the Rose Theatre, two Renaissance places and spaces. These models become a “world elsewhere” in that they represent virtual recreations of these venues in as much detail as possible. The models are based on accurate archeological and theatre historical records and are easy to navigate, particularly for current use. This paper demonstrates the ways in which these models can be instructive for reading theatre today. More importantly, we introduce human figures onto the stage via motion capture, which allows us to explore the relationships between space, actor and environment. This facilitates a new way of thinking about early modern playwrights’ “attitudes to locality and localities large and small”. These venues are thus activated to intersect productively with early modern studies so that the paper can test the historical and contemporary limits of such research.

Relevance:

100.00%

Publisher:

Abstract:

This article investigates virtual reality representations of performance in London’s late sixteenth-century Rose Theatre, a venue that, by means of current technology, can once again challenge perceptions of space, performance, and memory. The VR model of The Rose represents a virtual recreation of this venue in as much detail as possible and attempts to recover graphic demonstrations of the trace memories of the performance modes of the day. The VR model is based on accurate archeological and theatre historical records and is easy to navigate. The introduction of human figures onto The Rose’s stage via motion capture allows us to explore the relationships between space, actor and environment. The combination of venue and actors facilitates a new way of thinking about how the work of early modern playwrights can be stored and recalled. This virtual theatre is thus activated to intersect productively with contemporary studies in performance; as such, our paper provides a perspective on and embodiment of the relation between technology, memory and experience. It is, at its simplest, a useful archiving project for theatrical history, but it is directly relevant to contemporary performance practice as well. Further, it reflects upon how technology and ‘re-enactments’ of sorts mediate the way in which knowledge and experience are transferred, and even what may be considered ‘knowledge.’ Our work provides opportunities to begin addressing what such intermedial confrontations might produce for ‘remembering, experiencing, thinking and imagining.’ We contend that these confrontations will enhance live theatre performance rather than impeding or disrupting contemporary performance practice. Our ‘paper’ is in the form of a video which covers the intellectual contribution while also permitting a demonstration of the interventions we are discussing.

Relevance:

100.00%

Publisher:

Abstract:

This paper investigates virtual reality representations of performance in London’s late sixteenth-century Rose Theatre, a venue that, by means of current technology, can once again challenge perceptions of space, performance, and memory. The VR model of The Rose becomes a Camillo device in that it represents a virtual recreation of this venue in as much detail as possible and attempts to recover graphic demonstrations of the trace memories of the performance modes of the day. The VR model is based on accurate archeological and theatre historical records and is easy to navigate. The introduction of human figures onto The Rose’s stage via motion capture allows us to explore the relationships between space, actor and environment. The combination of venue and actors facilitates a new way of thinking about how the work of early modern playwrights can be stored and recalled. This virtual theatre is thus activated to intersect productively with contemporary studies in performance; as such, our paper provides a perspective on and embodiment of the relation between technology, memory and experience. It is, at its simplest, a useful archiving project for theatrical history, but it is directly relevant to contemporary performance practice as well. Further, it reflects upon how technology and ‘re-enactments’ of sorts mediate the way in which knowledge and experience are transferred, and even what may be considered ‘knowledge.’ Our work provides opportunities to begin addressing what such intermedial confrontations might produce for ‘remembering, experiencing, thinking and imagining.’ We contend that these confrontations will enhance live theatre performance rather than impeding or disrupting contemporary performance practice. This paper intersects with the CFP’s ‘Performing Memory’ and ‘Memory Lab’ themes. Our presentation (which includes a demonstration of the VR model and the motion capture it requires) takes the form of two closely linked papers that share a single abstract. The two papers will be given by two people, one of whom will be physically present in Utrecht, the other participating via Skype.

Relevance:

100.00%

Publisher:

Abstract:

The objective quantification of three-dimensional kinematics during different functional and occupational tasks is now more in demand than ever. The introduction of a new generation of low-cost passive motion capture systems from a number of manufacturers has made this technology accessible for teaching, clinical practice and small/medium industry. Despite the attractive nature of these systems, their accuracy remains unproven in independent tests. We assessed static linear accuracy and dynamic linear accuracy, and compared gait kinematics from a Vicon MX20 system with a Natural Point OptiTrack system. In all experiments data were sampled simultaneously. We found that both systems performed excellently in the linear accuracy tests, with absolute errors not exceeding 1%. In the gait data there was again strong agreement between the two systems in sagittal and coronal plane kinematics. Transverse plane kinematics differed by up to 3° at the knee and hip, which we attributed to the impact of soft tissue artifact accelerations on the data. We suggest that low-cost systems are comparably accurate to their high-end competitors and offer a platform with accuracy acceptable for research in laboratories with a limited budget.
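The comparison described above reduces to simple error metrics. The sketch below, with hypothetical inputs and variable names, shows how a percentage linear error and an RMS difference between time-synchronised joint-angle traces might be computed; it is an illustration, not the authors' analysis code.

```python
# Illustrative error metrics for comparing two motion capture systems;
# inputs and names are assumptions, not the study's data or code.
import numpy as np

def percent_error(measured_mm, reference_mm):
    """Absolute linear error as a percentage of the reference length."""
    return abs(measured_mm - reference_mm) / reference_mm * 100.0

def rms_difference(angles_a_deg, angles_b_deg):
    """RMS difference between two time-synchronised joint-angle signals."""
    a = np.asarray(angles_a_deg, dtype=float)
    b = np.asarray(angles_b_deg, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```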

Relevance:

100.00%

Publisher:

Abstract:

The accuracy of marker placement on palpable surface anatomical landmarks is an important consideration in biomechanics. Although marker placement reliability has been studied in some depth, it remains unclear whether or not markers are accurately positioned over the intended landmark when defining the static position and orientation of a segment. A novel method using commonly available X-ray imaging was developed to identify the accuracy of markers placed on the shoe surface by palpating landmarks through the shoe. Anterior–posterior and lateral–medial X-rays were taken of 24 participants with a newly developed marker set applied to both the skin and the shoe. The vector magnitude of both skin- and shoe-mounted markers from the anatomical landmark was calculated, as well as the mean marker offset between skin- and shoe-mounted markers. The accuracy of placing markers on the shoe relative to the skin-mounted markers, accounting for shoe thickness, was less than 5 mm for all markers studied. Further, when the guidelines developed in this study were followed, the method was deemed reliable (intra-rater ICCs = 0.50–0.92). In conclusion, the method proposed here can reliably assess marker placement accuracy on the shoe surface relative to chosen anatomical landmarks beneath the skin.
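As an illustration of the offset measures described, the sketch below computes the vector magnitude of a marker from its intended landmark and the mean offset between paired skin- and shoe-mounted markers; coordinates are assumed to be digitised in millimetres, and the code is not the authors' own.

```python
# Marker-offset sketch under assumed inputs (coordinates in mm).
import numpy as np

def marker_offset(marker_xyz, landmark_xyz):
    """Euclidean distance (mm) between a marker and its intended landmark."""
    return float(np.linalg.norm(np.asarray(marker_xyz, dtype=float)
                                - np.asarray(landmark_xyz, dtype=float)))

def mean_skin_shoe_offset(skin_markers, shoe_markers):
    """Mean offset (mm) across paired skin- and shoe-mounted markers."""
    offsets = [marker_offset(s, h) for s, h in zip(skin_markers, shoe_markers)]
    return float(np.mean(offsets))
```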

Relevance:

100.00%

Publisher:

Abstract:

In 1999, Richards compared the accuracy of commercially available motion capture systems commonly used in biomechanics. Richards identified that in static tests the optical motion capture systems generally produced RMS errors of less than 1.0 mm. During dynamic tests, the RMS error increased to as much as 4.2 mm in some systems. In the last 12 years motion capture systems have continued to evolve and now include high-resolution CCD or CMOS image sensors, wireless communication and high full-frame sampling frequencies. In addition to hardware advances, there have also been a number of advances in software, including improved calibration and tracking algorithms, real-time data streaming, and the introduction of the c3d standard. These advances have allowed the system manufacturers to maintain a high retail price in the name of advancement. In areas such as gait analysis and ergonomics, many of the advanced features, such as high-resolution image sensors and high sampling frequencies, are not required given the nature of the tasks typically investigated. Recently, Natural Point introduced low-cost cameras which, on face value, appear suitable as at the very least a high-quality teaching tool in biomechanics, and possibly even a research tool when coupled with the correct calibration and tracking software. The aim of this study was therefore to compare both the linear accuracy and the quality of angular kinematics from a typical high-end motion capture system and a low-cost system during a simple task.
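As a hedged sketch of the kind of linear-accuracy test implied here, the snippet below computes the RMS error of a frame-by-frame inter-marker distance against a known rigid length; the rigid-bar setup, inputs and names are assumptions rather than details taken from the study.

```python
# Rigid-bar linear-accuracy sketch (assumed setup and inputs).
import numpy as np

def rms_length_error(marker_a_xyz, marker_b_xyz, known_length_mm):
    """RMS error (mm) of the frame-by-frame distance between two markers on a
    rigid bar of known length; inputs are (n_frames, 3) position arrays."""
    a = np.asarray(marker_a_xyz, dtype=float)
    b = np.asarray(marker_b_xyz, dtype=float)
    measured = np.linalg.norm(a - b, axis=1)
    return float(np.sqrt(np.mean((measured - known_length_mm) ** 2)))
```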

Relevance:

100.00%

Publisher:

Abstract:

This creative work is the outcome of preliminary practice-based experiments exploring the collaboration of a dancer/choreographer with an animator, along with an enquiry into the integration of motion capture technologies within the workflow. The animated visuals derived from the motion capture data are not aimed simply at re-targeting movement from one source to another, but at describing the thought and emotions of the choreographed dance through visual aesthetics.

Relevance:

100.00%

Publisher:

Abstract:

Motion capture continues to be adopted across a range of creative fields including animation, games, visual effects, dance, live theatre and the visual arts. This panel will discuss and showcase the use of motion capture across these creative fields and consider the future of virtual production in the creative industries.

Relevance:

100.00%

Publisher:

Abstract:

My practice-led research explores and maps workflows for generating experimental creative work involving inertia-based motion capture technology. Motion capture has often been used as a way to bridge animation and dance, resulting in abstracted visual outcomes. In early works this process was largely achieved through rotoscoping, reference footage and mechanical forms of motion capture. With the evolution of technology, optical and inertial forms of motion capture are now more accessible and able to accurately capture a larger range of complex movements. Made by Motion is a collaboration between digital artist Paul Van Opdenbosch and performer and choreographer Elise May: a series of studies on captured motion data used to generate experimental visual forms that reverberate in space and time. The project investigates the invisible forces generated by, and influencing, the movement of a dancer, and how these forces can be captured and applied to generate visual outcomes that surpass simple data visualisation, projecting the intent of the performer’s movements. The source or ‘seed’ comes from using an Xsens MVN inertial motion capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. In my presentation I will display and discuss selected creative works from the project, along with the process and considerations behind the work.

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Markerless motion capture systems are relatively new devices that can significantly speed up the capture of full-body motion. The precision of finger position assessment with this type of equipment has been evaluated at 17.30 ± 9.56 mm when compared to an active-marker system [1]. The Microsoft Kinect has been proposed to standardise and enhance the clinical evaluation of patients with hemiplegic cerebral palsy [2]. Markerless motion capture systems have the potential to be used in a clinical setting for movement analysis, as well as for large-cohort research; however, the precision of such systems needs to be characterised.

Global objectives:
• To assess the precision within the recording field of the OpenStage 2 markerless motion capture system (Organic Motion, NY).
• To compare the markerless motion capture system with an optoelectric motion capture system using active markers.

Specific objectives:
• To assess the noise of a static body at 13 different locations within the recording field of the markerless motion capture system.
• To assess the smallest oscillation detected by the markerless motion capture system.
• To assess the difference between the two systems in body joint angle measurement.

Methods (equipment):
• OpenStage® 2 (Organic Motion, NY): markerless motion capture system; 16 video cameras (acquisition rate: 60 Hz); recording zone of 4 m × 5 m × 2.4 m (depth × width × height); provides the position and angle of 23 different body segments.
• Visualeyez™ VZ4000 (PhoeniX Technologies Incorporated, BC): optoelectric motion capture system with active markers; 4-tracker system (12 cameras in total); accuracy of 0.5–0.7 mm.

Methods (protocol and analysis):
• Static noise: motion of a humanoid mannequin was recorded in 13 different locations, and the RMSE was calculated for each segment in each location.
• Smallest oscillation detected: small oscillations were induced in the humanoid mannequin and motion was recorded until it stopped; the correlation between the head displacement recorded by both systems was measured, along with a corresponding magnitude.
• Body joint angles: body motion was recorded simultaneously with both systems (left side only) in 6 participants (3 females; 32.7 ± 9.4 years old). Tasks: walking, squat, shoulder flexion and abduction, elbow flexion, wrist extension, pronation/supination (not in results), head flexion and rotation (not in results), leg rotation (not in results), and trunk rotation (not in results). Several body joint angles were measured with both systems and the RMSE was calculated between the signals from the two systems.

Conclusion: The results show that the Organic Motion markerless system has the potential to be used for the assessment of clinical motor symptoms or motor performance. However, the following points should be considered:
• the precision of the OpenStage system varied within the recording field;
• the precision is not constant between limb segments;
• the error appears to be higher close to the extremities of the range of motion.
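As an illustration of two of the measures listed above, the sketch below computes the static-noise RMSE of a nominally stationary segment and the Pearson correlation between displacement signals from the two systems; variable names are assumed and the code is not from the study.

```python
# Static-noise and correlation sketch (assumed inputs and names).
import numpy as np

def static_noise_rmse(positions_xyz):
    """RMS deviation of a nominally static segment's position samples,
    shape (n_frames, 3), from their mean position."""
    p = np.asarray(positions_xyz, dtype=float)
    deviations = np.linalg.norm(p - p.mean(axis=0), axis=1)
    return float(np.sqrt(np.mean(deviations ** 2)))

def displacement_correlation(displ_markerless, displ_active):
    """Pearson correlation between time-synchronised displacement signals."""
    return float(np.corrcoef(displ_markerless, displ_active)[0, 1])
```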

Relevance:

100.00%

Publisher:

Abstract:

Action recognition plays an important role in various applications, including smart homes and personal assistive robotics. In this paper, we propose an algorithm for recognizing human actions using motion capture action data. Motion capture data provides accurate three-dimensional positions of the joints that constitute the human skeleton. We model the movement of the skeletal joints temporally in order to classify the action. The skeleton in each frame of an action sequence is represented as a 129-dimensional vector, each component of which is a 3D angle made by a joint with a fixed point on the skeleton. Finally, the video is represented as a histogram over a codebook obtained from all action sequences. Along with this, the temporal variance of the skeletal joints is used as an additional feature. The actions are classified using a Meta-Cognitive Radial Basis Function Network (McRBFN) and its Projection Based Learning (PBL) algorithm. We achieve over 97% recognition accuracy on the widely used Berkeley Multimodal Human Action Database (MHAD).
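As a hedged sketch of the bag-of-poses representation described (not the authors' implementation), the snippet below learns a codebook over per-frame joint-angle vectors and builds a per-sequence descriptor from the codeword histogram plus the temporal variance of the features; the McRBFN/PBL classifier is not reproduced, and any off-the-shelf classifier could be substituted for experimentation.

```python
# Bag-of-poses descriptor sketch; the McRBFN/PBL classifier is not included.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(all_frames, n_words=64, seed=0):
    """all_frames: (n_total_frames, 129) array of per-frame angle vectors."""
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(all_frames)

def sequence_descriptor(frames, codebook):
    """Codeword histogram plus per-dimension temporal variance for one sequence."""
    frames = np.asarray(frames, dtype=float)
    words = codebook.predict(frames)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    hist /= hist.sum()                     # normalise to a distribution
    temporal_var = frames.var(axis=0)      # temporal variance of each angle
    return np.concatenate([hist, temporal_var])
```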