397 results for "Turning voice emotion into graphical movement"
Abstract:
Purpose. To determine how Developmental Eye Movement (DEM) test results relate to reading eye movement patterns recorded with the Visagraph in visually normal children, and whether DEM results and recorded eye movement patterns relate to standardized reading achievement scores. Methods. Fifty-nine school-age children (age = 9.7 ± 0.6 years) completed the DEM test and had eye movements recorded with the Visagraph III while reading for comprehension. Monocular visual acuity in each eye and random dot stereoacuity were measured, and standardized scores on an independently administered reading comprehension test [reading progress test (RPT)] were obtained. Results. Children with slower DEM horizontal and vertical adjusted times tended to have slower reading rates with the Visagraph (r = -0.547 and -0.414, respectively). Although a significant correlation was also found between the DEM ratio and Visagraph reading rate (r = -0.368), the strength of the relationship was less than that between DEM horizontal adjusted time and reading rate. DEM outcome scores were not significantly associated with RPT scores. When the relative contribution of reading ability (RPT) and DEM scores was accounted for in multivariate analysis, DEM outcomes were not significantly associated with Visagraph reading rate. RPT scores were associated with the Visagraph outcomes of duration of fixations (r = -0.403) and calculated reading rate (r = 0.366) but not with DEM outcomes. Conclusions. DEM outcomes can identify children whose Visagraph-recorded eye movement patterns show slow reading rates. However, when reading ability is accounted for, DEM outcomes are a poor predictor of reading rate. Visagraph outcomes of duration of fixation and reading rate relate to standardized reading achievement scores, whereas DEM results do not. Copyright © 2011 American Academy of Optometry.
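For readers unfamiliar with this kind of analysis, the sketch below illustrates the bivariate and multivariate steps described in the abstract, assuming a hypothetical per-child table with columns `dem_horiz_adj` (DEM horizontal adjusted time), `visagraph_rate` (Visagraph reading rate) and `rpt_score`; it is not the authors' actual analysis code.

```python
# Hypothetical sketch: Pearson correlations and a multivariate model in the
# spirit of the analysis described above (file and column names are assumptions).
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("dem_visagraph.csv")  # assumed table, one row per child

# Bivariate step: DEM horizontal adjusted time vs Visagraph reading rate
r, p = stats.pearsonr(df["dem_horiz_adj"], df["visagraph_rate"])
print(f"DEM horizontal adjusted time vs reading rate: r={r:.3f}, p={p:.3f}")

# Multivariate step: does DEM still predict reading rate once reading
# ability (RPT score) is included in the model?
X = sm.add_constant(df[["rpt_score", "dem_horiz_adj"]])
model = sm.OLS(df["visagraph_rate"], X).fit()
print(model.summary())
```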
Abstract:
High levels of sitting have been linked with poor health outcomes. Previously, a pragmatic MTI accelerometer cut-point (100 counts·min⁻¹) has been used to estimate sitting, but data on the accuracy of this cut-point are unavailable. PURPOSE: To ascertain whether the 100 counts·min⁻¹ cut-point accurately isolates sitting from standing activities. METHODS: Participants fitted with an MTI accelerometer were observed performing a range of sitting, standing, light and moderate activities. One-minute epoch MTI data were matched to the observed activities, then re-categorised as either sitting or not using the 100 counts·min⁻¹ cut-point. Self-reported demographics and current physical activity were collected. Generalised estimating equation (GEE) analyses for repeated measures with a binary logistic model, corrected for age, gender and BMI, were conducted to ascertain the odds of the MTI data being misclassified. RESULTS: Data were from 26 healthy subjects (8 men; 50% aged <25 years; mean BMI (SD) 22.7 (3.8) kg/m²). The mode of the MTI sitting and standing data was 0 counts·min⁻¹, with 46% of sitting activities and 21% of standing activities recording 0 counts·min⁻¹. The GEE was unable to accurately isolate sitting from standing activities using the 100 counts·min⁻¹ cut-point, since all sitting activities were incorrectly predicted as standing (p=0.05). To further explore the sensitivity of MTI data to delineate sitting from standing, the upper 95% confidence limit of the mean for the sitting activities (46 counts·min⁻¹) was used to re-categorise the data; this resulted in the GEE correctly classifying 49% of sitting and 69% of standing activities. Using the 100 counts·min⁻¹ cut-point, the data were re-categorised into a combined 'sit/stand' category and tested against other light activities: 88% of sit/stand and 87% of light activities were accurately predicted. Using Freedson's moderate cut-point of 1952 counts·min⁻¹, the GEE accurately predicted 97% of light vs. 90% of moderate activities. CONCLUSION: The distributions of MTI-recorded sitting and standing data overlap considerably; as such, the 100 counts·min⁻¹ cut-point did not accurately isolate sitting from other static standing activities. The 100 counts·min⁻¹ cut-point more accurately predicted sit/stand vs. other movement-oriented activities.
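A minimal sketch of the cut-point classification step described above, assuming a hypothetical epoch table with columns `counts_per_min` and `observed`; the GEE modelling is omitted and this is not the authors' analysis code.

```python
# Hypothetical sketch of cut-point classification: each 1-min epoch's count
# value is compared against a threshold (e.g. 100 counts/min) and the
# prediction is scored against the directly observed posture.
import pandas as pd

CUT_POINT = 100  # counts per minute

epochs = pd.read_csv("mti_epochs.csv")  # assumed columns: counts_per_min, observed ("sit"/"stand")

# Epochs below the cut-point are labelled as sitting
epochs["predicted"] = (epochs["counts_per_min"] < CUT_POINT).map({True: "sit", False: "stand"})

# Per-class agreement with direct observation
for posture in ("sit", "stand"):
    subset = epochs[epochs["observed"] == posture]
    accuracy = (subset["predicted"] == posture).mean()
    print(f"{posture}: {accuracy:.0%} of epochs correctly classified")
```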
Abstract:
The 5th World Summit on Media for Children and Youth, held in Karlstad, Sweden in June 2010, provided a unique media literacy experience for approximately thirty young people from diverse backgrounds through participation in the Global Youth Media Council. This article focuses on the Summit's aim to give young people a 'voice' through intercultural dialogue about media reform. The accounts of four young Australians are discussed in order to consider how successful the Summit was in achieving this goal. The article concludes by making recommendations for future international media literacy conferences involving young people. It also advocates for the expansion of the Global Youth Media Council concept as a grassroots movement to involve more young people in discussions about media reform.
Abstract:
An ability to recognise and resolve ethical dilemmas was identified by the Australian Law Reform Commission as one of the ten fundamental lawyering skills. While the ‘Priestley 11’ list of areas of law required to qualify for legal practice includes ethics and professional responsibility, the commitment to ethics learning in Australian law schools has been far from uniform. The obligation imposed by the Priestley 11 is frequently discharged by a traditional teaching and learning approach involving lectures and/or tutorials and focusing on the content of the formal rules of professional responsibility. However, the effectiveness of such an approach is open to question. Instead, a practical rather than a theoretical approach to the teaching of legal ethics is required. Effective final-year student learning of ethics may be achieved by an approach which engages students, enabling them to appreciate the relevance of what they are learning to the real world and facilitating their transition from study to their working lives. Entry into Valhalla comprises a suite of modules featuring ‘machinima’ (computer-generated imagery) created using the Second Life virtual environment to contextualise otherwise abstract concepts. It provides an engaging learning environment which enables students to obtain an appreciation of ethical responsibility in a real-world context and facilitates understanding and problem-solving ability.
Abstract:
Gesture in performance is widely acknowledged in the literature as an important element in making a performance expressive and meaningful. The body has been shown to play an important role in the production and perception of vocal performance in particular. This paper examines the role of gesture in creative works that seek to extend vocal performance via technology. A creative work for vocal performer, laptop computer and a Human Computer Interface called the eMic (Extended Microphone Stand Interface controller) is presented as a case study to explore the relationships between movement, voice production, and musical expression. The eMic is an interface for live vocal performance that allows the singer's gestures and interactions with a sensor-based microphone stand to be captured and mapped to musical parameters. The creative work discussed in this paper presents a new compositional approach for the eMic: working with movement as the starting point for the composition and thus using choreographed gesture as the basis for musical structures. By foregrounding the body and movement in the creative process, the aim is to create a more visually engaging performance in which the performer can use the body more effectively to express their musical objectives.
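As an illustration of the general gesture-to-sound mapping idea (not the eMic's actual implementation), the sketch below scales a hypothetical sensor reading from a stand-mounted controller onto a synthesis parameter and sends it as an OSC message; the address, ranges and listening port are assumptions.

```python
# Hypothetical sketch of gesture-to-sound mapping: a raw sensor reading is
# scaled onto a musical parameter range and forwarded to a synthesis engine
# over OSC. None of this describes the eMic's real protocol.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # assumed synth listening locally

def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map a raw sensor value onto a target parameter range."""
    value = max(in_min, min(in_max, value))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

def on_tilt_reading(raw_tilt):
    # e.g. map the stand's tilt sensor (assumed 0-1023) to a filter cutoff in Hz
    cutoff = scale(raw_tilt, 0, 1023, 200.0, 4000.0)
    client.send_message("/voice/filter_cutoff", cutoff)  # assumed OSC address

on_tilt_reading(512)  # toy usage: mid-range tilt maps to a mid-range cutoff
```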
Abstract:
Travel time is an important network performance measure, and it quantifies congestion in a manner easily understood by all transport users. In urban networks, travel time estimation is challenging for a number of reasons, such as fluctuations in traffic flow due to traffic signals and significant flow to/from mid-link sinks/sources. The classical analytical procedure utilises cumulative plots at upstream and downstream locations to estimate travel time between the two locations. In this paper, we discuss the issues and challenges with the classical analytical procedure, such as its vulnerability to non-conservation of flow between the two locations, and the complexity of estimating exit-movement-specific travel time. Recently, we developed a methodology utilising the classical procedure to estimate average travel time and its statistics on urban links (Bhaskar, Chung et al. 2010), in which detector, signal and probe vehicle data are fused. In this paper we extend the methodology to route travel time estimation and test its performance using simulation. The originality lies in defining cumulative plots for each exit turning movement utilising a historical database that is self-updated after each estimation. The performance is also compared with a method based solely on probe vehicles (Probe-only). The performance of the proposed methodology has been found to be insensitive to different route flows, with an average accuracy of more than 94% given one probe per estimation interval, which is an improvement of more than 5% in accuracy over the Probe-only method.
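A minimal sketch of the classical cumulative-plot idea referred to above, under the assumption of conservation of flow: the travel time of the n-th vehicle is the horizontal gap between the upstream and downstream cumulative count curves at count n. Function and variable names are illustrative, and this is not the authors' extended methodology.

```python
# Hypothetical sketch of the classical cumulative-plot procedure: with
# conservation of flow between detectors, per-vehicle travel time is the
# horizontal distance between the two cumulative count curves.
import numpy as np

def travel_times_from_cumulative_counts(t_upstream, t_downstream):
    """t_upstream[i] / t_downstream[i]: time (s) at which the cumulative count
    reaches i+1 at the upstream / downstream detector. Returns per-vehicle travel times."""
    n = min(len(t_upstream), len(t_downstream))
    return np.asarray(t_downstream[:n]) - np.asarray(t_upstream[:n])

# Toy example: vehicles cross upstream every 2 s and downstream 30 s later
t_up = np.arange(0, 200, 2.0)
t_down = t_up + 30.0
tt = travel_times_from_cumulative_counts(t_up, t_down)
print(f"average travel time: {tt.mean():.1f} s")
```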
Abstract:
Visual activity detection of lip movements can be used to overcome the poor performance of voice activity detection based solely on the audio domain, particularly in noisy acoustic conditions. However, most of the research conducted on visual voice activity detection (VVAD) has neglected to address variabilities in the visual domain such as viewpoint variation. In this paper we investigate the effectiveness of the visual information from the speaker's frontal and profile views (i.e., left and right side views) for the task of VVAD. As far as we are aware, our work constitutes the first real attempt to study this problem. We describe our visual front-end approach and the Gaussian mixture model (GMM) based VVAD framework, and report experimental results using the freely available CUAVE database. The experimental results show that VVAD is indeed possible from profile views, and we give a quantitative comparison of VVAD based on frontal and profile views. The results presented are useful in the development of multi-modal Human Machine Interaction (HMI) using a single camera, where the speaker's face may not always be frontal.
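A minimal sketch of a GMM-based VVAD decision rule of the kind described above: one mixture per class is fitted to lip-region feature vectors and each frame is labelled by comparing log-likelihoods. The visual front end is omitted, and the array names, mixture sizes and toy data are assumptions rather than the authors' configuration.

```python
# Hypothetical sketch of GMM-based visual voice activity detection: fit one
# Gaussian mixture per class (speech / non-speech) on lip-region features,
# then label each frame by comparing per-frame log-likelihoods.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_vvad(speech_feats, nonspeech_feats, n_components=8):
    gmm_speech = GaussianMixture(n_components=n_components, covariance_type="diag").fit(speech_feats)
    gmm_nonspeech = GaussianMixture(n_components=n_components, covariance_type="diag").fit(nonspeech_feats)
    return gmm_speech, gmm_nonspeech

def is_speech(frame_feats, gmm_speech, gmm_nonspeech):
    """frame_feats: (n_frames, n_dims) lip-region feature vectors for one utterance."""
    ll_speech = gmm_speech.score_samples(frame_feats)
    ll_nonspeech = gmm_nonspeech.score_samples(frame_feats)
    return ll_speech > ll_nonspeech  # boolean decision per frame

# Toy usage with random "features" standing in for real lip descriptors
rng = np.random.default_rng(0)
speech = rng.normal(1.0, 1.0, size=(500, 20))
nonspeech = rng.normal(-1.0, 1.0, size=(500, 20))
gs, gn = train_vvad(speech, nonspeech)
print(is_speech(rng.normal(1.0, 1.0, size=(10, 20)), gs, gn))
```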