910 results for VIDEOS


Relevance: 10.00%

Abstract:

PURPOSE: To determine the reproducibility and validity of video screen measurement (VSM) of sagittal plane joint angles during gait. METHODS: 17 children with spastic cerebral palsy walked on a 10 m walkway. Videos were recorded and 3D instrumented gait analysis (3D-IGA) was performed. Two investigators measured six sagittal joint/segment angles (shank, ankle, knee, hip, pelvis, and trunk) using a custom-made software package. Intra- and interrater reproducibility were expressed by the intraclass correlation coefficient (ICC), the standard error of measurement (SEM) and the smallest detectable difference (SDD). The agreement between VSM and 3D joint angles was illustrated by Bland-Altman plots and limits of agreement (LoA). RESULTS: For the intrarater reproducibility of VSM, the ICC ranged from 0.99 (shank) to 0.58 (trunk), the SEM from 0.81 degrees (shank) to 5.97 degrees (trunk), and the SDD from 1.80 degrees (shank) to 16.55 degrees (trunk). For the interrater reproducibility, the ICC ranged from 0.99 (shank) to 0.48 (trunk), the SEM from 0.70 degrees (shank) to 6.78 degrees (trunk), and the SDD from 1.95 degrees (shank) to 18.8 degrees (trunk). The LoA between VSM and 3D data ranged from 0.4+/-13.4 degrees (knee extension in stance) to 12.0+/-14.6 degrees (ankle dorsiflexion in swing). CONCLUSION: When performed by the same observer, VSM mostly allows the detection of relevant changes after an intervention. However, VSM angles differ from 3D-IGA and do not reflect the true sagittal joint position, probably because of additional movements in the other planes.
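
For readers unfamiliar with these reliability statistics, the conventional definitions relate them as follows (a standard formulation; the abstract does not state which exact formulas were used):

    \mathrm{SEM} = \mathrm{SD}\,\sqrt{1 - \mathrm{ICC}}, \qquad \mathrm{SDD} = 1.96 \cdot \sqrt{2} \cdot \mathrm{SEM} \approx 2.77 \cdot \mathrm{SEM}

Under these definitions the reported values are broadly consistent; for example, 2.77 x 6.78 degrees is approximately 18.8 degrees, matching the interrater trunk SDD quoted above.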

Relevance: 10.00%

Abstract:

BACKGROUND: Only a few standardized apraxia scales are available, and they do not cover all domains and semantic features of gesture production. Therefore, the objective of the present study was to evaluate the reliability and validity of a newly developed test of upper limb apraxia (TULIA), which is comprehensive yet short to administer. METHODS: The TULIA consists of 48 items covering the imitation and pantomime domains of non-symbolic (meaningless), intransitive (communicative) and transitive (tool-related) gestures, corresponding to 6 subtests. A 6-point scoring method (0-5) was used (score range 0-240). Performance was assessed by blinded raters based on videos in 133 stroke patients, 84 with left hemisphere damage (LHD) and 49 with right hemisphere damage (RHD), as well as 50 healthy subjects (HS). RESULTS: The clinimetric findings demonstrated mostly good to excellent internal consistency, inter-rater and intra-rater (test-retest) reliability, both at the level of the six subtests and at the individual item level. Criterion validity was evaluated by confirming hypotheses based on the literature. Construct validity was demonstrated by a high correlation (r = 0.82) with the De Renzi test. CONCLUSION: These results show that the TULIA is both a reliable and valid test to systematically assess gesture production. The test can be easily applied and is therefore useful for both research purposes and clinical practice.

Relevance: 10.00%

Abstract:

Background: Action observation leads to neural activation of the human premotor cortex. This study examined how the level of motor expertise (expert vs. novice) in ballroom dancing and the visual viewpoint (internal vs. external viewpoint) influence this activation within different parts of this area of the brain. Results: Sixteen dance experts and 16 novices observed ballroom dance videos from internal or external viewpoints while lying in a functional magnetic resonance imaging scanner. A conjunction analysis of all observation conditions showed that action observation activated distinct networks of premotor, parietal, and cerebellar structures. Experts revealed increased activation in the ventral premotor cortex compared to novices. An internal viewpoint led to higher activation of the dorsal premotor cortex. Conclusions: The present results suggest that the ventral and dorsal premotor cortex adopt differential roles during action observation depending on the level of motor expertise and the viewpoint.

Relevance: 10.00%

Abstract:

Speech is often a multimodal process, presented audiovisually through a talking face. One area of speech perception influenced by visual speech is speech segmentation, the process of breaking a stream of speech into individual words. Mitchel and Weiss (2013) demonstrated that a talking face contains specific cues to word boundaries and that subjects can correctly segment a speech stream when given a silent video of a speaker. The current study expanded upon these results, using an eye tracker to identify the most heavily attended facial features of the audiovisual display used in Mitchel and Weiss (2013). In Experiment 1, subjects spent the most time watching the eyes and mouth, with a trend suggesting that the mouth was viewed more than the eyes. Although subjects displayed significant learning of word boundaries, performance was not correlated with gaze duration on any individual feature, nor was it correlated with a behavioral measure of autistic-like traits. However, trends suggested that as autistic-like traits increased, gaze duration on the mouth increased and gaze duration on the eyes decreased, similar to significant trends seen in autistic populations (Boraston & Blakemore, 2007). In Experiment 2, the same video was modified so that a black bar covered either the eyes or the mouth. Both videos elicited learning of word boundaries equivalent to that seen in the first experiment. Again, no correlations were found between segmentation performance and scores on the autistic-traits measure (SRS) in either condition. These results, taken together with those of Experiment 1, suggest that neither the eyes nor the mouth are critical to speech segmentation and that perhaps more global head movements indicate word boundaries (see Graf, Cosatto, Strom, & Huang, 2002). Future work will elucidate the contribution of individual features relative to global head movements, as well as extend these results to additional types of speech tasks.

Relevance: 10.00%

Abstract:

The purpose of our study is to investigate the effects of chronic estrogen administration on same-sex interactions during exposure to a social stressor and on oxytocin (OT) levels in prairie voles (Microtus ochrogaster). Estrogen and OT are two hormones known to be involved in social behavior and stress. Estrogen is involved in the transcription of OT and its receptor; because of this, it is generally thought that estrogen upregulates OT, but evidence to support this assumption is weak. While estrogen has been shown to either increase or decrease stress, OT has been shown to have stress-dampening properties. The goal of our experiment is to determine how estrogen affects OT levels as well as behavior during a social stressor in voles. In addition, estrogen is required for many opposite-sex interactions, but little is known about its influence on same-sex interactions. We hypothesized that prairie voles receiving chronic estrogen injections would show an increase in OT levels in the brain and altered behavior in response to a social stressor, the resident-intruder test. To test this hypothesis, 73 female prairie voles were ovariectomized and then administered daily injections of estrogen (0.05 µg in peanut oil, s.c.) or vehicle for 8 days. On the final day of injections, half of the voles were given the resident-intruder test, a stressful 5-minute interaction with a same-sex stranger, and their behavior was video-recorded. These animals were then sacrificed either 10 minutes or 60 minutes after the conclusion of the test. The other half of the animals (no-stress group) were not given the resident-intruder test. After sacrifice, trunk blood and brains were collected from the animals. Videos of the resident-intruder tests were analyzed for pro-social and aggressive behavior. The density of OT-activated neurons in the brain was measured via pixel count using immunohistochemistry. No differences were found in pro-social behavior (focal sniffing, p = 0.242; focal-initiated sniffing, p = 0.142; focal-initiated sniffing/focal sniffing, p = 0.884) or aggressive behavior (total time fighting, p = 0.763; number of fights, p = 0.148; number of strikes, p = 0.714). No differences were found in the activation of OT neurons in the brain, neither in the anterior paraventricular nucleus (PVN) (pixel count, p = 0.358; % area containing pixelated neurons, p = 0.443) nor in the medial PVN (pixel count, p = 0.999; % area containing pixelated neurons, p = 0.916). These results suggest that estrogen most likely does not directly upregulate OT and that estrogen does not alter behavior in stressful social interactions with a same-sex stranger. Estrogen may instead prepare the animal to respond to OT rather than increase production of the peptide itself, suggesting that we need to shift the framework in which we consider estrogen and OT interactions.

Relevance: 10.00%

Abstract:

From Bush's September 20, 2001 "War on Terror" speech to Congress to President-Elect Barack Obama's acceptance speech on November 4, 2008, the U.S. Army produced visual recruitment material that addressed falling enlistment numbers (due to the prolonged and difficult war in Iraq) with quickly evolving and compelling rhetorical appeals: from the introduction of an "Army of One" (2001) to "Army Strong" (2006); from messages focused on education and individual identity to high-energy adventure and simulated combat scenarios, distributed through everything from printed posters and music videos to first-person tactical-shooter video games. These highly polished, professional visual appeals, introduced to the American public during an unpopular war fought by volunteers, provide rich subject matter for research and analysis. This dissertation takes a multidisciplinary approach to the visual media used in the Army's recruitment efforts during the War on Terror, focusing on American myths, as defined by Barthes, and on how these myths are both revealed and reinforced through design across media platforms. Placing each selection in its historical context, this dissertation analyzes how printed materials changed as the War on Terror continued. It examines the television ad that introduced "Army Strong" to the American public, considering how the combination of moving image, text, and music structures the message and the way we receive it. This dissertation also analyzes the video game America's Army, focusing on how the interaction of the human player and the computer-generated player combines to enhance the persuasive qualities of the recruitment message. Each chapter discusses how the design of the particular medium facilitates the viewer's engagement and interactivity. The conclusion considers what recruitment material produced during this time period suggests about the persuasive strategies of different media and how they create distinct relationships with their spectators. It also addresses how theoretical frameworks and critical concepts from a variety of disciplines can be combined to analyze recruitment media using a Selber-inspired three-literacy framework (functional, critical, rhetorical), and how this framework can contribute to the multimodal classroom by allowing instructors and students to carry out comparative analyses of multiple forms of visual media with similar content.

Relevance: 10.00%

Abstract:

Today's technology is evolving at an exponential rate, and every day technology finds more inroads into our education system. This study seeks to determine whether having access to technology, including iPad tablets and a teacher's physical science webpage resources (videos, PowerPoint® presentations, and audio podcasts), helps ninth-grade high school students attain greater comprehension and improved scientific literacy. Comprehension of the science concepts was measured by comparing students' pre-test and post-test scores on a teacher-written assessment. The current students' post-test scores were also compared with those of prior classes (2010-2011 and 2009-2010) to determine whether outcomes differed between the technology interventions and traditional instruction. Students completed a technology survey that measured how much they used each intervention and how helpful they perceived it to be. The current class's mean composite score increased by 6.9 points (32.5%) between the pre-test and post-test. Student composite scores also demonstrated that the interventions were successful in helping a majority of students (64.7%) attain the curriculum goals. The interventions were also successful in increasing students' scientific literacy, meeting all of Bloom's cognitive levels that were assessed. When compared with the prior 2010-2011 and 2009-2010 classes, the current class received a higher mean post-test score, indicating a positive effect of the technological interventions. The survey showed that a majority of students used at least some of the technology interventions and perceived them as helpful, especially the videos and PowerPoint® presentations.

Relevance: 10.00%

Abstract:

Obesity is becoming an epidemic in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended, so it is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions. The accuracy of the results relies heavily on uncertain factors such as the user's memory, food knowledge, and portion estimation, and is therefore often compromised. Accurate and convenient dietary assessment methods are still lacking and are needed in both the general population and the research community. In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) of smartphones was developed to help people foster a healthy lifestyle. With this method, users use their smartphones before and after a meal to capture images or videos of the meal; the smartphone recognizes the food items, calculates the volume of the food consumed and provides the results to the user. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals: (1) to develop a prototype system with existing methods in order to review the literature, find its drawbacks and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that improve recognition accuracy to a field-application level; (3) to design indexing methods for large-scale image databases to facilitate the development of new food image recognition and retrieval algorithms; (4) to develop novel, convenient and accurate food volume estimation methods using only smartphones with cameras and IMUs. A prototype system was implemented to review existing methods. An image feature detector and descriptor were developed, and a nearest-neighbor classifier was implemented to classify food items. A credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regular-shaped food items. To further increase the accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing the reliance on the marker and introducing IMUs. Sensor fusion techniques that combine measurements from cameras and IMUs were explored to infer the metric scale of the 3D model as well as to reduce noise from these sensors.
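
To illustrate the credit card marker idea mentioned above, the following sketch (not the thesis's actual implementation; the function names, the prism-shaped volume model and the fronto-parallel assumption are simplifications for the example) shows how an object of known physical size in the image yields the metric scale that converts pixel measurements into real-world volumes:

    # Hypothetical sketch: recover metric scale from a credit card marker.
    # A standard card measures 85.60 mm x 53.98 mm (ISO/IEC 7810 ID-1).
    CARD_WIDTH_MM = 85.60

    def mm_per_pixel(card_width_px):
        """Metric scale factor, assuming the card lies roughly in the food plane
        and is viewed approximately fronto-parallel."""
        return CARD_WIDTH_MM / card_width_px

    def estimate_volume_ml(footprint_area_px, height_px, card_width_px):
        """Very rough prism-shaped volume estimate in millilitres (= cm^3)."""
        scale = mm_per_pixel(card_width_px)        # mm per pixel
        area_mm2 = footprint_area_px * scale ** 2  # pixel area -> mm^2
        height_mm = height_px * scale              # pixel height -> mm
        return area_mm2 * height_mm / 1000.0       # mm^3 -> ml

    if __name__ == "__main__":
        # e.g. card 400 px wide, food footprint 90,000 px^2, food height 150 px
        print(round(estimate_volume_ml(90000, 150, 400), 1), "ml")

In the thesis itself, multi-view reconstruction and camera/IMU sensor fusion replace the single-view, marker-based assumptions used in this toy example.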

Relevance: 10.00%

Abstract:

Dan Cornell returned to Vietnam in 2012, more than 40 years after he was stationed there. From 1970 to 1971, Dan spent his time flying around Vietnam and the neighboring countries in a large CH-47 helicopter. There was not much time to think about what he was doing or why. In spite of this, Dan became enticed by this country so different from his own. This presentation features videos and photos from his 8-week trip.

Relevance: 10.00%

Abstract:

Good cooperation between the farrier, the veterinarian and the horse owner is an important prerequisite for optimal care of the horse with regard to shoeing and hoof health. The introduction of a joint educational aid aims to improve the level of education of both veterinarians and farriers. The interactive, multimedia approach represents an innovative new dimension in instruction techniques, with content delivered predominantly through images and videos. The contents of the new teaching aid will focus on the detailed anatomy of the foot and distal limb, as well as currently accepted shoeing practices and techniques and pathologic conditions of the hoof and foot.

Relevance: 10.00%

Abstract:

Visual fixation is employed by humans and some animals to keep a specific 3D location at the center of the visual gaze. Inspired by this phenomenon in nature, this paper explores the idea of transferring this mechanism to the context of video stabilization for a handheld video camera. A novel approach is presented that stabilizes a video by fixating on automatically extracted 3D target points. This approach is different from existing automatic solutions that stabilize the video by smoothing. To determine the 3D target points, the recorded scene is analyzed with a state-of-the-art structure-from-motion algorithm, which estimates camera motion and reconstructs a 3D point cloud of the static scene objects. Special algorithms are presented that search for either virtual or real 3D target points, which back-project close to the center of the image for as long a period as possible. The stabilization algorithm then transforms the original images of the sequence so that these 3D target points are kept exactly in the center of the image, which, in the case of real 3D target points, produces a perfectly stable result at the image center. Furthermore, different methods of additional user interaction are investigated. It is shown that the stabilization process can easily be controlled and that it can be combined with state-of-the-art tracking techniques in order to obtain a powerful image stabilization tool. The approach is evaluated on a variety of videos taken with a handheld camera in natural scenes.
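
A minimal sketch of the fixation idea described above, under simplifying assumptions that are not from the paper (a pure translational warp, known per-frame poses and intrinsics from structure-from-motion, and a single chosen 3D target point); the actual method applies a more general image transformation:

    import numpy as np
    import cv2

    def project(K, R, t, X):
        """Project a 3D world point X into the image of a camera with
        intrinsics K and pose [R|t] (world -> camera)."""
        x_cam = R @ X + t
        x_img = K @ x_cam
        return x_img[:2] / x_img[2]

    def fixate_frame(frame, K, R, t, target):
        """Shift the frame so that the 3D target point projects to the image center."""
        h, w = frame.shape[:2]
        center = np.array([w / 2.0, h / 2.0])
        dx, dy = center - project(K, R, t, target)
        M = np.float32([[1, 0, dx], [0, 1, dy]])   # pure translation warp
        return cv2.warpAffine(frame, M, (w, h))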

Relevance: 10.00%

Abstract:

Internet Service Providers' liability for copyright infringement is a debated issue in France and Belgium, particularly with respect to intermediaries such as providers of hyperlinks and location tool services, for which the e-Commerce Directive does not explicitly set any exemption from liability. Thus, the question arises, among other things, whether the safe harbour provisions provided for caching and hosting could also apply to search engines. French and Belgian courts have recently had to decide on this issue in several cases concerning Google's complementary tools such as Google Videos, Google Images, Google Suggest and Google News. This article summarizes and assesses this recent case law.

Relevance: 10.00%

Abstract:

When depicting both virtual and physical worlds, the viewer's impression of presence in these worlds is strongly linked to camera motion. Plausible and artist-controlled camera movement can substantially increase scene immersion. While physical camera motion exhibits subtle details of position, rotation, and acceleration, these details are often missing from virtual camera motion. In this work, we analyze camera movement using signal theory. Our system allows us to stylize a smooth, user-defined virtual base camera motion by enriching it with plausible details. A key component of our system is a database of videos filmed with physical cameras. These videos are analyzed with a camera-motion estimation algorithm (structure-from-motion) and labeled manually with a specific style. By considering spectral properties of location, orientation and acceleration, our solution learns camera motion details. Consequently, an arbitrary virtual base motion, defined in any conventional animation package, can be automatically modified according to a user-selected style. In an animation package, the camera-motion base path is typically defined by the user via function curves; another possibility is to obtain the camera path using a mixed reality camera in a motion-capture studio. As shown in our experiments, the resulting shots are still fully artist-controlled, but appear richer and more physically plausible.
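
The spectral stylization described above can be illustrated with a small sketch; this is an assumption-laden simplification (a single motion channel, a fixed cutoff frequency, and simple additive transfer of detail) rather than the authors' learning-based method:

    import numpy as np

    def high_freq_detail(signal, cutoff_hz, fs):
        """Return the high-frequency residual of a 1-D camera-path signal."""
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        spectrum[freqs < cutoff_hz] = 0.0        # discard the smooth, low-frequency part
        return np.fft.irfft(spectrum, n=len(signal))

    def stylize(base_path, recorded_path, cutoff_hz=1.0, fs=30.0, gain=1.0):
        """Add detail recorded from a physical camera onto a smooth virtual base path.
        base_path, recorded_path: per-frame values of one motion channel (e.g. x position)."""
        n = min(len(base_path), len(recorded_path))
        detail = high_freq_detail(np.asarray(recorded_path[:n], float), cutoff_hz, fs)
        return np.asarray(base_path[:n], float) + gain * detail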

Relevance: 10.00%

Abstract:

Mobile learning, in the past defined as learning with mobile devices, now refers to any type of learning on the go, or learning that takes advantage of mobile technologies. This new definition shifts the focus from the mobility of the technology to the mobility of the learner (O'Malley and Stanton 2002; Sharples, Arnedillo-Sanchez et al. 2009). Placing emphasis on the mobile learner's perspective requires studying "how the mobility of learners augmented by personal and public technology can contribute to the process of gaining new knowledge, skills, and experience" (Sharples, Arnedillo-Sanchez et al. 2009). The demands of an increasingly knowledge-based society and the advances in mobile phone technology are combining to spur the growth of mobile learning. Around the world, mobile learning is predicted to be the future of online learning and is slowly entering mainstream education. However, for mobile learning to attain its full potential, it is essential to develop more advanced technologies that are tailored to the needs of this new learning environment. A research field that puts the development of such technologies on a solid basis is user experience design, which addresses how to improve the usability, and therefore the user acceptance, of a system. Although there is no consensus definition of user experience, simply stated it focuses on how a person feels about using a product, system or service. It is generally agreed that user experience adds subjective attributes and social aspects to a space that has previously concerned itself mainly with ease of use. In addition, it can include users' perceptions of usability and system efficiency. Recent advances in mobile and ubiquitous computing technologies further underline the importance of human-computer interaction and user experience (feelings, motivations, and values) with a system. Today, there are plenty of reports on the limitations of mobile technologies for learning (e.g., small screen size, slow connections), but there is a lack of research on user experience with mobile technologies. This dissertation fills this gap with a new approach to building a user experience-based mobile learning environment. The optimized user experience we suggest integrates three priorities, namely a) content, by improving the quality of the delivered learning materials, b) the teaching and learning process, by enabling live and synchronous learning, and c) the learners themselves, by enabling timely detection of their emotional state during mobile learning. In detail, the contributions of this thesis are as follows:

• A video codec optimized for screencast videos, which achieves an unprecedented compression rate while maintaining very high video quality, and a novel UI layout for video lectures, which together enable truly mobile access to live lectures.

• A new approach to HTTP-based multimedia delivery that exploits the characteristics of live lectures in a mobile context and enables a significantly improved user experience for mobile live lectures (a generic segment-polling sketch follows below).

• A non-invasive affective learning model based on multi-modal emotion detection with very high recognition rates, which enables real-time emotion detection and subsequent adaptation of the learning environment on mobile devices.

The technology resulting from the research presented in this thesis is in daily use at the School of Continuing Education of Shanghai Jiaotong University (SOCE), a blended-learning institution with 35,000 students.
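
The following is a generic, simplified illustration of segment-based HTTP delivery for a live lecture stream, given only as background for the second contribution above. The URL pattern, segment naming and duration are assumptions for the example, and this is not the delivery scheme developed in the thesis:

    import time
    import urllib.error
    import urllib.request
    from typing import Optional

    BASE_URL = "http://example.org/lecture"   # hypothetical endpoint
    SEGMENT_SECONDS = 4                       # assumed segment duration

    def fetch_segment(index: int) -> Optional[bytes]:
        """Fetch one media segment; return None if it is not yet published."""
        try:
            with urllib.request.urlopen(f"{BASE_URL}/seg_{index:05d}.m4s") as resp:
                return resp.read()
        except urllib.error.HTTPError:
            return None

    def decode_and_render(segment: bytes) -> None:
        """Stand-in for the real decoder/renderer on the mobile client."""
        print(f"rendering {len(segment)} bytes")

    def play_live(start_index: int = 0) -> None:
        index = start_index
        while True:
            segment = fetch_segment(index)
            if segment is None:
                time.sleep(SEGMENT_SECONDS / 2)   # segment not yet available; poll again
                continue
            decode_and_render(segment)
            index += 1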

Relevance: 10.00%

Abstract:

We present a user-supported tracking framework that combines automatic tracking with extended user input to create error-free tracking results suitable for interactive video production. The goal of our approach is to keep the necessary user input as small as possible. In our framework, the user can select between different tracking algorithms, both existing ones and new ones described in this paper. Furthermore, the user can automatically fuse the results of different tracking algorithms with our robust fusion approach. The tracked object can be marked in more than one frame, which can significantly improve the tracking result. After tracking, the user can easily validate the results thanks to the support of a powerful interpolation technique. The tracking results are iteratively improved until the complete track has been found. After the iterative editing process, the tracking result of each object is stored in an interactive video file that can be loaded by our player for interactive videos.
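
As a rough illustration of two ideas mentioned above, fusing the outputs of several trackers and interpolating between user-validated keyframes, here is a minimal sketch; the bounding-box format, the per-coordinate median fusion rule and the linear interpolation are simplifying assumptions, not the paper's robust fusion or interpolation technique:

    import numpy as np

    # A bounding box is (x, y, width, height) in pixels.

    def fuse_boxes(boxes):
        """Combine the outputs of several trackers for one frame by taking
        the per-coordinate median, which is robust to a single failing tracker."""
        return tuple(np.median(np.asarray(boxes, float), axis=0))

    def interpolate_track(keyframes):
        """Linearly interpolate boxes between user-confirmed keyframes.
        keyframes: dict {frame_index: (x, y, w, h)} with at least two entries."""
        frames = sorted(keyframes)
        track = {}
        for f0, f1 in zip(frames, frames[1:]):
            b0, b1 = np.asarray(keyframes[f0], float), np.asarray(keyframes[f1], float)
            for f in range(f0, f1 + 1):
                alpha = (f - f0) / (f1 - f0)
                track[f] = tuple((1 - alpha) * b0 + alpha * b1)
        return track

    # Example: three trackers disagree on one frame; two keyframes span frames 0-10.
    print(fuse_boxes([(10, 10, 50, 80), (12, 9, 52, 78), (200, 5, 40, 90)]))
    print(interpolate_track({0: (10, 10, 50, 80), 10: (30, 20, 50, 80)})[5])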