993 results for video capture


Relevance: 30.00%

Abstract:

This creative work is the outcome of preliminary experiments through practice that explore the collaboration between a dancer/choreographer and an animator, along with an enquiry into the integration of motion capture technologies within the workflow. The animated visuals derived from the motion capture data are not aimed at merely retargeting movement from one source to another, but seek to describe the thought and emotions of the choreographed dance through visual aesthetics.

Relevance: 30.00%

Abstract:

This video article articulates two exercises that have been developed to respond to the need for preparedness in the growing field of Performance Capture. The first is called Walking Through (Delbridge 2013), in which the actor navigates a series of objects that exist in screen space through a developed sense of the existing physical particularities of the studio and an interaction with a screen (or feedback loop). The second exercise is called The Donut (Delbridge 2013), in which the performer continues to navigate screen space, but this time does so through the literal stepping through of a torus in the virtual – again with nothing but the studio infrastructure and the screen as a guide. Notions of motion-captured performance imply the existence of an interface that combines performer with system, separating (or intervening in) the space between performance and the screen. It is precisely the effect of this intermediary device, and the opportunities it provides, on the practice, craft and preparedness of the actor as they navigate the connection between 3D screen space and the physical properties of the studio that are examined here. Defining the scope of current practice for the contemporary actor is a key construct of this challenge, with the most appropriate definition revolving around the provision of a required mixture of performance and content for live, mediated, framed and variously captured formats. One of these formats is Performance Capture. The exercises presented here are two from a series, developed over a three-year study, that contribute to our understanding of the potential for a training regimen for the rigors of Performance Capture.

Relevance: 30.00%

Abstract:

A simple but accurate method for measuring the Earth’s radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of the sidereal day were used to calculate the radius of the Earth. The radius was measured as 6394.3 ± 118 km, which is within 1.8% of the accepted average value of 6371 km and well within the experimental error. The experiment is suitable as a high school or university project and should produce a value for Earth’s radius within a few per cent at latitudes towards the equator, where at some times of the year the ecliptic is approximately normal to the horizon.
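The geometry behind the calculation can be sketched as follows: the horizon-dip angle at height h is approximately sqrt(2h/R), and when the ecliptic is normal to the horizon the Sun's altitude decreases at the sidereal rotation rate ω = 2π/T, so sqrt(2h/R) = ωt and R = 2h/(ωt)². A minimal sketch, with illustrative numbers rather than the paper's actual measurements:

```python
import math

SIDEREAL_DAY_S = 86164.1  # length of the sidereal day in seconds

def earth_radius(shadow_rise_m: float, rise_time_s: float) -> float:
    """Estimate Earth's radius (in metres) from the time a sunset shadow
    takes to climb a known height, assuming the ecliptic is normal to
    the horizon. Horizon-dip geometry gives sqrt(2h/R) = w*t, hence
    R = 2h / (w*t)**2 with w = 2*pi / T_sidereal."""
    w = 2.0 * math.pi / SIDEREAL_DAY_S  # Earth's angular speed, rad/s
    return 2.0 * shadow_rise_m / (w * rise_time_s) ** 2

# Illustrative (not from the paper): a shadow climbing 50 m in about
# 54 s yields a radius close to the accepted 6371 km.
print(earth_radius(50.0, 54.3) / 1000.0)  # ~6378 km
```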

Relevance: 30.00%

Abstract:

Importance: Active video games may offer an effective strategy to increase physical activity in overweight and obese children. However, the specific effects of active gaming when delivered within the context of a pediatric weight management program are unknown.

Objective: To evaluate the effects of active video gaming on physical activity and weight loss in children participating in an evidence-based weight management program delivered in the community.

Design, Setting, and Participants: Group-randomized clinical trial conducted during a 16-week period in YMCAs and schools located in Massachusetts, Rhode Island, and Texas. Seventy-five overweight or obese children (41 girls [55%], 34 whites [45%], 20 Hispanics [27%], and 17 blacks [23%]) enrolled in a community-based pediatric weight management program. Mean (SD) age of the participants was 10.0 (1.7) years; body mass index (BMI) z score, 2.15 (0.40); and percentage overweight from the median BMI for age and sex, 64.3% (19.9%).

Interventions: All participants received a comprehensive family-based pediatric weight management program (JOIN for ME). Participants in the program-and-active-gaming group received hardware consisting of a game console and motion capture device and 1 active game at their second treatment session and a second game in week 9 of the program. Participants in the program-only group were given the hardware and 2 games at the completion of the 16-week program.

Main Outcomes and Measures: Objectively measured daily moderate-to-vigorous and vigorous physical activity, percentage overweight, and BMI z score.

Results: Participants in the program-and-active-gaming group exhibited significant increases in moderate-to-vigorous (mean [SD], 7.4 [2.7] min/d) and vigorous (2.8 [0.9] min/d) physical activity at week 16 (P < .05). In the program-only group, a decline or no change was observed in moderate-to-vigorous (mean [SD] net difference, 8.0 [3.8] min/d; P = .04) and vigorous (3.1 [1.3] min/d; P = .02) physical activity. Participants in both groups exhibited significant reductions in percentage overweight and BMI z scores at week 16. However, the program-and-active-gaming group exhibited significantly greater reductions in percentage overweight (mean [SD], −10.9% [1.6%] vs −5.5% [1.5%]; P = .02) and BMI z score (−0.25 [0.03] vs −0.11 [0.03]; P < .001).

Conclusions and Relevance: Incorporating active video gaming into an evidence-based pediatric weight management program has positive effects on physical activity and relative weight.
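The "percentage overweight" outcome reported above is defined relative to the median BMI for age and sex. A minimal sketch of that calculation; the median value used below is illustrative (in practice it comes from a growth reference such as the CDC charts):

```python
def percent_overweight(bmi: float, median_bmi_for_age_sex: float) -> float:
    """Percentage overweight relative to the median BMI for age and sex:
    100 * (BMI - median) / median. The median itself must be looked up
    in a growth reference; it is not computed here."""
    return 100.0 * (bmi - median_bmi_for_age_sex) / median_bmi_for_age_sex

# A hypothetical child with BMI 28 against an illustrative median of
# 16.6 is ~68.7% overweight, in the range of the study mean of 64.3%.
print(round(percent_overweight(28.0, 16.6), 1))  # 68.7
```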

Relevance: 30.00%

Abstract:

Viewer interests, evoked by video content, can potentially identify the highlights of a video. This paper explores the use of the facial expressions (FE) and heart rate (HR) of viewers, captured using a camera and a non-strapped sensor, for identifying interesting video segments. Data from ten subjects watching three videos showed that these signals are viewer-dependent and not synchronized with the video contents. To address this issue, new algorithms are proposed to effectively combine FE and HR signals to identify the times when viewer interest is potentially high. The results show that, compared with subjective annotation and match-report highlights, ‘non-neutral’ FE and ‘relatively higher and faster’ HR are able to capture 60%-80% of goal, foul, and shot-on-goal soccer video events. FE is found to be more indicative of viewers’ interests than HR, but the fusion of these two modalities outperforms each of them.
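One way to combine the two signals along the lines described, per-viewer normalization of HR plus a non-neutral FE gate, can be sketched as follows. The AND-fusion rule and the threshold are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def interest_segments(fe_neutral: np.ndarray, hr: np.ndarray,
                      z_thresh: float = 1.0) -> np.ndarray:
    """Flag time windows of potentially high viewer interest.

    fe_neutral: boolean array, True where the facial expression is neutral.
    hr:         per-window heart-rate samples for one viewer.
    Per-viewer z-scoring addresses the viewer dependence noted in the
    paper; a window is flagged when FE is non-neutral AND HR is both
    relatively high and rising."""
    hr = np.asarray(hr, dtype=float)
    hr_z = (hr - hr.mean()) / (hr.std() + 1e-9)   # relative HR level
    hr_rise = np.gradient(hr) > 0                 # HR getting faster
    return (~np.asarray(fe_neutral)) & (hr_z > z_thresh) & hr_rise
```

Applied per viewer, the flagged windows can then be compared against annotated highlights, as in the paper's evaluation.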

Relevance: 30.00%

Abstract:

Introduction
Markerless motion capture systems are relatively new devices that can significantly speed up capturing full-body motion. The precision of the assessment of finger position with this type of equipment was evaluated at 17.30 ± 9.56 mm when compared to an active-marker system [1]. The Microsoft Kinect has been proposed to standardize and enhance the clinical evaluation of patients with hemiplegic cerebral palsy [2]. Markerless motion capture systems have the potential to be used in a clinical setting for movement analysis, as well as for large-cohort research. However, the precision of such systems needs to be characterized.

Global objectives
• To assess the precision within the recording field of the markerless motion capture system OpenStage 2 (Organic Motion, NY).
• To compare the markerless motion capture system with an optoelectric motion capture system with active markers.

Specific objectives
• To assess the noise of a static body at 13 different locations within the recording field of the markerless motion capture system.
• To assess the smallest oscillation detected by the markerless motion capture system.
• To assess the difference between both systems regarding body joint angle measurement.

Methods
Equipment
• OpenStage® 2 (Organic Motion, NY)
  o Markerless motion capture system
  o 16 video cameras (acquisition rate: 60 Hz)
  o Recording zone: 4 m × 5 m × 2.4 m (depth × width × height)
  o Provides position and angle of 23 different body segments
• Visualeyez™ VZ4000 (PhoeniX Technologies Incorporated, BC)
  o Optoelectric motion capture system with active markers
  o 4-tracker system (total of 12 cameras)
  o Accuracy: 0.5–0.7 mm

Protocol & Analysis
• Static noise:
  o Motion of a humanoid mannequin was recorded in 13 different locations.
  o RMSE was calculated for each segment in each location.
• Smallest oscillation detected:
  o Small oscillations were induced in the humanoid mannequin and motion was recorded until it stopped.
  o The correlation between the displacement of the head recorded by both systems was measured. A corresponding magnitude was also measured.
• Body joint angles:
  o Body motion was recorded simultaneously with both systems (left side only).
  o 6 participants (3 females; 32.7 ± 9.4 years old).
  o Tasks: walk, squat, shoulder flexion & abduction, elbow flexion, wrist extension, pronation/supination (not in results), head flexion & rotation (not in results), leg rotation (not in results), trunk rotation (not in results).
  o Several body joint angles were measured with both systems.
  o RMSE was calculated between the signals of both systems.

Results
Conclusion
Results show that the Organic Motion markerless system has the potential to be used for the assessment of clinical motor symptoms or motor performance. However, the following points should be considered:
• Precision of the OpenStage system varied within the recording field.
• Precision is not constant between limb segments.
• The error seems to be higher close to the extremities of the range of motion.
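The RMSE comparison between the two systems' signals can be sketched as follows, assuming the joint-angle traces are already resampled to a common rate and time-synchronized:

```python
import numpy as np

def rmse(signal_a, signal_b) -> float:
    """Root-mean-square error between two time-aligned joint-angle
    signals, e.g. markerless vs. active-marker recordings of the same
    movement, as used in the protocol above."""
    a = np.asarray(signal_a, dtype=float)
    b = np.asarray(signal_b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# e.g. two elbow-flexion traces differing by a constant 2-degree offset
print(rmse([10, 20, 30], [12, 22, 32]))  # 2.0
```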

Relevance: 30.00%

Abstract:

Action recognition plays an important role in various applications, including smart homes and personal assistive robotics. In this paper, we propose an algorithm for recognizing human actions using motion capture action data. Motion capture data provides accurate three-dimensional positions of the joints that constitute the human skeleton. We model the movement of the skeletal joints temporally in order to classify the action. The skeleton in each frame of an action sequence is represented as a 129-dimensional vector, each component of which is a 3D angle made by a joint with a fixed point on the skeleton. Finally, the video is represented as a histogram over a codebook obtained from all action sequences. Along with this, the temporal variance of the skeletal joints is used as an additional feature. The actions are classified using a Meta-Cognitive Radial Basis Function Network (McRBFN) and its Projection Based Learning (PBL) algorithm. We achieve over 97% recognition accuracy on the widely used Berkeley Multimodal Human Action Database (MHAD).
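The histogram-over-codebook representation can be sketched as follows. Learning the codebook itself (e.g. by k-means over all training sequences) is omitted, and the dimensions are generic rather than the paper's exact 129:

```python
import numpy as np

def bow_histogram(frames, codebook) -> np.ndarray:
    """Bag-of-words descriptor for an action sequence.

    frames:   (T, D) per-frame joint-angle vectors (D = 129 in the paper).
    codebook: (K, D) codewords, e.g. k-means centroids learned over all
              training sequences (learning step omitted here).
    Each frame is assigned to its nearest codeword and the sequence is
    summarized as a normalized histogram of codeword counts."""
    f = np.asarray(frames, dtype=float)
    c = np.asarray(codebook, dtype=float)
    d = np.linalg.norm(f[:, None, :] - c[None, :, :], axis=2)
    words = d.argmin(axis=1)                 # nearest codeword per frame
    hist = np.bincount(words, minlength=len(c)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histogram (optionally concatenated with the temporal-variance feature) is what a classifier such as the McRBFN consumes.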

Relevance: 30.00%

Abstract:

The relative abundance of Bristol Bay red king crab (Paralithodes camtschaticus) is estimated each year for stock assessment by using catch-per-swept-area data collected on the Alaska Fisheries Science Center’s annual eastern Bering Sea bottom trawl survey. To estimate survey trawl capture efficiency for red king crab, an experiment was conducted with an auxiliary net (fitted with its own heavy chain-link footrope) that was attached beneath the trawl to capture crabs escaping under the survey trawl footrope. Capture probability was then estimated by fitting a model to the proportion of crabs captured and crab size data. For males, mean capture probability was 72% at 95 mm (carapace length), the size at which full vulnerability to the survey trawl is assigned in the current management model; 84.1% at 135 mm, the legal size for the fishery; and 93% at 184 mm, the maximum size observed in this study. For females, mean capture probability was 70% at 90 mm, the size at which full vulnerability to the survey trawl is assigned in the current management model, and 77% at 162 mm, the maximum size observed in this study. The precision of our estimates for each sex decreased for juveniles under 60 mm and for the largest crab because of small sample sizes. In situ data collected from trawl-mounted video cameras were used to determine the importance of various factors associated with the capture of individual crabs. Capture probability was significantly higher when a crab was standing when struck by the footrope, rather than crouching, and higher when a crab was hit along its body axis, rather than from the side. Capture probability also increased as a function of increasing crab size but decreased with increasing footrope distance from the bottom and when artificial light was provided for the video camera.
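A size-dependent capture probability of the kind fitted here is commonly modeled as a logistic curve of carapace length. A sketch with hypothetical parameters, not the paper's fitted values:

```python
import math

def capture_probability(carapace_mm: float, l50: float, slope: float) -> float:
    """Illustrative logistic size-selectivity curve of the kind fitted
    to the proportion of crabs retained above vs. below the footrope:
    p(L) = 1 / (1 + exp(-slope * (L - l50))). l50 is the length at 50%
    capture; both parameters below are hypothetical."""
    return 1.0 / (1.0 + math.exp(-slope * (carapace_mm - l50)))

# With hypothetical l50 = 55 mm and slope = 0.025 per mm, capture
# probability rises with size, as reported for both sexes.
p95 = capture_probability(95, 55, 0.025)
p135 = capture_probability(135, 55, 0.025)
print(round(p95, 2), round(p135, 2))  # 0.73 0.88
```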

Relevance: 30.00%

Abstract:

Establishing correspondences among object instances is still challenging in multi-camera surveillance systems, especially when the cameras’ fields of view are non-overlapping. Spatiotemporal constraints can help in solving the correspondence problem but still leave a wide margin of uncertainty. One way to reduce this uncertainty is to use appearance information about the moving objects in the site. In this paper we present the preliminary results of a new method that can capture salient appearance characteristics at each camera node in the network. A Latent Dirichlet Allocation (LDA) model is created and maintained at each node in the camera network. Each object is encoded in terms of the LDA bag-of-words model for appearance. The encoded appearance is then used to establish probable matching across cameras. Preliminary experiments are conducted on a dataset of 20 individuals and comparison against Madden’s I-MCHR is reported.
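Once each object is encoded as an LDA topic distribution, matching across cameras reduces to comparing distributions. A sketch using Jensen–Shannon divergence as the symmetric comparison; this divergence is an illustrative choice, not necessarily the authors' matching rule, and fitting the LDA model itself is omitted:

```python
import numpy as np

def match_across_cameras(query_topics, gallery_topics) -> int:
    """Match an object seen at one camera node against candidate objects
    at another node, given each object's LDA topic distribution (the
    encoded bag-of-words appearance). Returns the index of the gallery
    object with the smallest Jensen-Shannon divergence from the query."""
    def jsd(p, q):
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        m = 0.5 * (p + q)
        kl = lambda a, b: float(np.sum(np.where(a > 0, a * np.log(a / b), 0.0)))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)
    return int(np.argmin([jsd(query_topics, g) for g in gallery_topics]))
```

Spatiotemporal constraints could then prune the gallery before this appearance-based comparison, reducing the uncertainty the paper describes.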

Relevance: 30.00%

Abstract:

A project within a computing department at the University of Greenwich has been carried out to identify whether podcasting can be used to help the understanding and learning of a subject (3D Animation). We know that the benefits of podcasting in higher education (HE) can be justified [1–6] and that some success has been proven, but this paper aims to report the results of a term-long project that provided podcast materials to help support students’ learning, using Xserve and Podcast Producer technology. Findings in a previous study [6] identified podcasting as a way to diversify learning and provide a more personalised learning experience for students, as well as being able to provide access to a greater mix of learning styles [7]. Finally, this paper aims to present the method of capture and distribution, the methodologies of the study, the analysis of results, and conclusions that relate to podcasting and enhanced supported learning.

Relevance: 30.00%

Abstract:

In this article, we discuss our experiences of using photography and video while observing contentious parades and protests in Belfast. We show how our use of these methods drew us into a series of unplanned-for non-verbal interactions with other event participants, who were also freely and abundantly using photographic and filming equipment to capture their own images. This interactive use of photography and video affected us emotionally and influenced what we noticed and what we omitted in our observations. In particular, it forced us to reflect upon and question our role as researchers in the events we observed and in the changing balance of power between researchers and researched.

Relevance: 30.00%

Abstract:

In this paper we propose a novel recurrent neural network architecture for video-based person re-identification. Given the video sequence of a person, features are extracted from each frame using a convolutional neural network that incorporates a recurrent final layer, which allows information to flow between time steps. The features from all time steps are then combined using temporal pooling to give an overall appearance feature for the complete sequence. The convolutional network, recurrent layer, and temporal pooling layer are jointly trained to act as a feature extractor for video-based re-identification using a Siamese network architecture. Our approach makes use of colour and optical-flow information in order to capture the appearance and motion information that is useful for video re-identification. Experiments are conducted on the iLIDS-VID and PRID-2011 datasets to show that this approach outperforms existing methods of video-based re-identification.

Project source code: https://github.com/niallmcl/Recurrent-Convolutional-Video-ReID
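The temporal pooling step, collapsing per-time-step features into one sequence descriptor, can be sketched as follows (the CNN and recurrent layer that produce the frame features are omitted):

```python
import numpy as np

def temporal_pool(frame_features) -> np.ndarray:
    """Temporal mean-pooling of per-time-step features into a single
    sequence-level appearance descriptor, as in the architecture above.
    frame_features: (T, D) array, one D-dimensional feature per step."""
    return np.asarray(frame_features, dtype=float).mean(axis=0)

# Pooled descriptors of two sequences can then be compared with e.g.
# Euclidean distance, as in the Siamese training objective.
seq = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(temporal_pool(seq))  # [3. 4.]
```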

Relevance: 30.00%

Abstract:

The proliferation of video lecture capture in universities worldwide presents an opportunity to analyse video-watching patterns in an attempt to quantify and qualify how students engage and learn with the videos. It also presents an opportunity to investigate whether there are similar student learning patterns during the equivalent physical lecture. The goal of this action-based research project was to capture and quantitatively analyse the viewing behaviours and patterns of a series of video lecture captures across several university Java programming modules. It sought to study whether a quantitative analysis of viewing behaviours of lecture capture videos, coupled with a qualitative evaluation from the students and lecturers, could be correlated to provide generalised patterns that could then be used to understand the learning experience of students during videos and, potentially, face-to-face lectures, thereby presenting opportunities to reflectively enhance lecturer performance and the students’ overall learning experience. The report establishes a baseline understanding of the analytics of videos of several commonly used pedagogical teaching methods in the delivery of programming courses. It reflects on possible concurrences with live lecture delivery, with the potential to inform and improve lecturing performance.

Relevance: 30.00%

Abstract:

A rich-model motion vector steganalysis benefiting from both temporal and spatial correlations of motion vectors is proposed in this work. The proposed steganalysis method has substantially higher detection accuracy than previous methods, even targeted ones. The improvement in detection accuracy stems from several novel approaches introduced in this work. Firstly, it is shown that there is a strong correlation, not only spatially but also temporally, among neighbouring motion vectors over longer distances. Therefore, temporal motion vector dependency alongside spatial dependency is utilized for rigorous motion vector steganalysis. Secondly, unlike the filters previously used, which were heuristically designed against a specific motion vector steganography, a diverse set of filters that can capture aberrations introduced by various motion vector steganography methods is used. The variety and number of the filter kernels are substantially greater than in previous work. Moreover, filters up to fifth order are employed, whereas previous methods use at most second-order filters. As a result, the proposed system captures various decorrelations over a wide spatio-temporal range and provides a better cover model. The proposed method is tested against the most prominent motion vector steganalysis and steganography methods. To the best of the authors' knowledge, the experiments section contains the most comprehensive tests in the motion vector steganalysis field, covering five steganography and seven steganalysis methods. Test results show that the proposed method yields around a 20% increase in detection accuracy at low payloads and 5% at higher payloads.
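The spatio-temporal filter residuals on which such a model is built can be sketched minimally as follows. Only first-order difference kernels are shown here; the paper's feature set uses many kernels, up to fifth order, and pools the residuals into co-occurrence features:

```python
import numpy as np

def mv_residuals(mv_field) -> dict:
    """Spatial and temporal filter residuals of one motion-vector
    component field, of the kind pooled into a rich steganalysis model.
    mv_field: (T, H, W) array of one MV component over T frames, where
    H x W is the block grid. Embedding perturbs the natural spatial and
    temporal smoothness of the field, which these residuals expose."""
    f = np.asarray(mv_field, dtype=float)
    return {
        "spatial_h": f[:, :, 1:] - f[:, :, :-1],  # horizontal neighbour diff
        "spatial_v": f[:, 1:, :] - f[:, :-1, :],  # vertical neighbour diff
        "temporal":  f[1:, :, :] - f[:-1, :, :],  # same block, next frame
    }
```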