850 results for Visual Performance
Abstract:
The international focus on embracing daylighting for energy-efficient lighting, together with the corporate sector’s pursuit of perceived workplace and work-practice “transparency”, has spurred an increase in highly glazed commercial buildings. This in turn has renewed issues of visual comfort and daylight-derived glare for occupants. To ascertain evidence of, or predict the risk of, these events, appraisals of these complex visual environments require detailed information on the luminances present in an occupant’s field of view. Conventional luminance meters are an expensive and time-consuming means of obtaining these results: creating a luminance map of an occupant’s visual field with such a meter requires too many individual measurements to be a practical measurement technique. The application of digital cameras as luminance measurement devices has solved this problem. With high dynamic range imaging, a single digital image can provide luminances on a pixel-by-pixel level within the broad field of view afforded by a fish-eye lens, virtually replicating an occupant’s visual field and providing rapid yet detailed luminance information for the entire scene. With proper calibration, relatively inexpensive digital cameras can be successfully applied to luminance measurement, placing them in the realm of tools that any lighting professional should own. This paper discusses how a digital camera can become a luminance measurement device and then presents an analysis of results obtained from post-occupancy measurements from building assessments conducted by the Mobile Architecture Built Environment Laboratory (MABEL) project. This discussion leads to the important realisation that placing such tools in the hands of lighting professionals internationally will provide new opportunities for the lighting community for research on critical issues such as daylight glare and visual quality and comfort.
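The pixel-level luminance mapping described above can be sketched in a few lines. The block below is a minimal illustration, assuming linearized HDR pixel values; the Rec. 709 luma weights and the default calibration factor `k` (which in practice would come from a spot luminance meter reading of a reference patch) are illustrative assumptions, not values from the paper:

```python
# Sketch: per-pixel photometric luminance (cd/m^2) from a linearized HDR image.
# Assumes RGB values are linear (radiance-proportional); k is a hypothetical
# calibration factor fitted against a spot luminance meter.

def pixel_luminance(r, g, b, k=179.0):
    """Luminance from linear RGB using Rec. 709 weights, scaled by k."""
    return k * (0.2126 * r + 0.7152 * g + 0.0722 * b)

def luminance_map(image, k=179.0):
    """image: rows of (r, g, b) tuples -> rows of luminance values."""
    return [[pixel_luminance(r, g, b, k) for (r, g, b) in row] for row in image]
```

With a fish-eye HDR capture, applying this per pixel yields the scene-wide luminance map that would otherwise require thousands of individual meter readings.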
Abstract:
Paul Makeham’s work in AusStage Phase 3 has centred on regional mapping of live performance activity. A pilot mapping project was developed to identify regional clusters of performance as well as key regional organisations. In designing this pilot project, reference was made to two other ARC-funded projects. The first of these was Talking Theatre, an audience development research initiative for Queensland and the Northern Territory supported by an ARC Projects-Linkage grant. Talking Theatre was funded between 2004 and 2006 as a Linkage between the ARC, NARPACA (the Northern Australian Regional Performing Arts Centres Association), Arts Queensland, Arts Northern Territory, and QUT. The second project was the Creative Digital Industries National Mapping Project, operating through QUT’s Centre for Excellence in the Creative Industries (CCi). The NMP is designed to develop and publish a range of accurate and timely measures of the Creative Digital Industries in Australia.
Abstract:
This 90 minute panel session is designed to explore issues relating to the teaching of drama, performance studies, and theatre studies within Higher Education. Some of the issues that will be raised include: developing an understanding of the learning that students believe they are experiencing through performance; contemporary models for teaching; and the suggestion that the body can be an important site for acquiring a variety of different knowledges. Paul Makeham will present a general position paper to commence the session (15 minutes). Maryrose Casey, Gillian Kehoul, and Delyse Ryan will each speak briefly (15 minutes) about aspects of their research into Higher Education teaching before opening the floor for a round-table discussion of issues affecting the teaching of these disciplines.
Abstract:
In 2003, Bill Dunstone, John McCallum and Paul Makeham began a collaboration with researchers at the Centre for the Management of Arid Environments (CMAE) in Kalgoorlie, Western Australia. CMAE researchers are keen to develop 'people-oriented' strategies for implementing agricultural extension initiatives in their region. Traditional hierarchies of knowledge-transfer have impeded the 'connectedness' between community and researchers that gives meaning and relevance to useful practice (Ison and Russell, 2000). Our aim is to establish a partnership between the Live Events Research Network (LERN) and CMAE, investigating ways to link creative, performance-based research and practice with the scientific methodologies associated with natural resources management. This accords with recent work undertaken by Deborah Mills and Paul Brown, showing how community cultural development strategies enhance the implementation of policy concerned with community wellbeing. Mills and Brown 'adopted a concept of wellbeing which builds on a social and environmental view of health', and considered such themes as ecological sustainability, rural economic revitalisation, community strengthening, health and wellbeing (Mills, 2003). We propose that rangeland communities can creatively manage some of the challenges confronting them through performance-based projects which:
- activate the stories through which a community enacts its sense of place;
- facilitate live events in which the community enacts ownership of its culture and identity;
- directly involve the community in the formulation of research issues.
Abstract:
When communicating emotion in music, composers and performers encode their expressive intentions through the control of basic musical features such as pitch, loudness, timbre, mode and articulation. The extent to which emotion can be controlled through the systematic manipulation of these features has not been fully examined. In this paper we present CMERS, a Computational Music Emotion Rule System for the control of perceived musical emotion, which modifies features at the levels of score and performance in real-time. CMERS was evaluated in two rounds of perceptual testing. In experiment I, 20 participants continuously rated the perceived emotion of 15 music samples generated by CMERS. Three music works, each with five emotional variations (normal, happy, sad, angry and tender), were used. The emotion intended by CMERS was correctly identified 78% of the time, with significant shifts in valence and arousal also recorded, regardless of each work's original emotion.
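The abstract does not give the CMERS rules themselves; the sketch below is a hypothetical illustration of how a rule system of this kind might map a target emotion onto score- and performance-level feature adjustments. Every rule value here is invented for illustration:

```python
# Hypothetical emotion-to-feature rule table (values are illustrative only;
# the actual CMERS rules are not stated in the abstract).
RULES = {
    # emotion: (mode, tempo_scale, loudness_delta_dB, articulation)
    "happy":  ("major", 1.15, +3.0, "staccato"),
    "sad":    ("minor", 0.85, -3.0, "legato"),
    "angry":  ("minor", 1.20, +6.0, "staccato"),
    "tender": ("major", 0.90, -4.0, "legato"),
}

def apply_emotion(base_tempo_bpm, emotion):
    """Return the feature settings a rule system might emit for one emotion."""
    mode, tempo_scale, loudness_db, articulation = RULES[emotion]
    return {
        "mode": mode,                              # score-level change
        "tempo_bpm": base_tempo_bpm * tempo_scale, # performance-level change
        "loudness_delta_db": loudness_db,
        "articulation": articulation,
    }
```

For example, requesting "sad" for a piece at 100 bpm would slow it and shift it to the minor mode, consistent with the valence/arousal shifts the study measured.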
Abstract:
Surveillance systems such as object tracking and abandoned object detection systems typically rely on a single modality of colour video for their input. These systems work well in controlled conditions but often fail when low lighting, shadowing, smoke, dust or unstable backgrounds are present, or when the objects of interest are a similar colour to the background. Thermal images are not affected by lighting changes or shadowing, and are not overly affected by smoke, dust or unstable backgrounds. However, thermal images lack colour information, which makes distinguishing between different people or objects of interest within the same scene difficult.

By using modalities from both the visible and thermal infrared spectra, we are able to obtain more information from a scene and overcome the problems associated with using either modality individually. We evaluate four approaches for fusing visual and thermal images for use in a person tracking system (two early fusion methods, one mid fusion and one late fusion method), in order to determine the most appropriate method for fusing multiple modalities. We also evaluate two of these approaches for use in abandoned object detection, and propose an abandoned object detection routine that utilises multiple modalities. To aid in the tracking and fusion of the modalities we propose a modified condensation filter that can dynamically change the particle count and features used according to the needs of the system.

We compare tracking and abandoned object detection performance for the proposed fusion schemes and the visual and thermal domains on their own. Testing is conducted using the OTCBVS database to evaluate object tracking, and data captured in-house to evaluate the abandoned object detection. Our results show that significant improvement can be achieved, and that a middle fusion scheme is most effective.
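As a minimal illustration of the early-fusion idea, the sketch below combines a visible-light intensity frame and a thermal frame at the pixel level before any segmentation or tracking. The weight `alpha` is a hypothetical tuning parameter, not a value from the thesis:

```python
# Sketch of one early-fusion strategy: per-pixel weighted combination of a
# visible-light intensity frame and a co-registered thermal frame.
# alpha is a hypothetical blend weight (1.0 = visual only, 0.0 = thermal only).

def fuse_frames(visual, thermal, alpha=0.5):
    """visual, thermal: equally sized 2-D lists of intensities in [0, 1]."""
    return [[alpha * v + (1.0 - alpha) * t
             for v, t in zip(v_row, t_row)]
            for v_row, t_row in zip(visual, thermal)]
```

Downstream stages (background subtraction, the condensation filter) would then run on the fused frame; mid and late fusion instead combine features or per-modality decisions, respectively.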
Abstract:
Performance evaluation of object tracking systems is typically performed after the data has been processed, by comparing tracking results to ground truth. Whilst this approach is fine for offline testing, it does not allow for real-time analysis of the system's performance, which may be of use for live systems to either automatically tune the system or report reliability. In this paper, we propose three metrics that can be used to dynamically assess the performance of an object tracking system. Outputs and results from various stages in the tracking system are used to obtain measures that indicate the performance of motion segmentation, object detection and object matching. The proposed dynamic metrics are shown to accurately indicate tracking errors when visually comparing metric results to tracking output, and display similar trends to the ETISEO metrics when comparing different tracking configurations.
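The three metrics themselves are not specified in the abstract; as an illustrative stand-in, the sketch below computes one plausible per-frame measure of object-matching quality: the mean intersection-over-union between each track's predicted box and its matched detection, available online without ground truth:

```python
# Illustrative dynamic metric (not the paper's): per-frame mean IoU between
# tracker predictions and their matched detections.

def iou(a, b):
    """Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def frame_match_quality(pairs):
    """pairs: list of (predicted_box, detected_box) for one frame."""
    if not pairs:
        return 1.0  # nothing tracked, nothing mismatched
    return sum(iou(p, d) for p, d in pairs) / len(pairs)
```

A live system could watch this value frame by frame and flag sustained drops as likely tracking failure, or use them to trigger re-tuning.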
Abstract:
This paper presents an implementation of an aircraft pose and motion estimator using visual systems as the principal sensor for controlling an Unmanned Aerial Vehicle (UAV), or as a redundant system for an Inertial Measurement Unit (IMU) and gyroscopic sensors. First, we explore the applications of the unified theory for central catadioptric cameras for attitude and heading estimation, explaining how the skyline is projected on the catadioptric image and how it is segmented and used to calculate the UAV’s attitude. Then we use appearance images to obtain a visual compass, and we calculate the relative rotation and heading of the aerial vehicle. Additionally, we show the use of a stereo system to calculate the aircraft height and to measure the UAV’s motion. Finally, we present a visual tracking system based on Fuzzy controllers working in both a UAV and a camera pan-and-tilt platform. Every part is tested using the UAV COLIBRI platform to validate the different approaches, including comparison of the estimated data with the inertial values measured onboard the helicopter platform and validation of the tracking schemes on real flights.
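The visual compass idea can be illustrated with a toy example: for a 360° panoramic appearance image, the relative heading is the circular column shift that best aligns the current view with a reference view. The sketch below works on a single intensity row purely for clarity; a real system would use the full catadioptric image and a more robust similarity measure:

```python
# Toy visual compass: heading change as the circular shift minimizing the
# summed absolute difference between a reference panoramic row and the
# current one. Single-row input is a simplifying assumption.

def heading_shift(reference, current):
    """Return the circular shift (in columns) that best aligns the rows."""
    n = len(reference)
    best_shift, best_err = 0, float("inf")
    for s in range(n):
        err = sum(abs(reference[(i + s) % n] - current[i]) for i in range(n))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

def shift_to_degrees(shift, n_columns):
    """Convert a column shift into a heading change in degrees."""
    return 360.0 * shift / n_columns
```

Because the panorama covers the full horizon, each column corresponds to a fixed angular increment, so the best-matching shift maps directly to relative yaw.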
Abstract:
Purpose: To investigate the impact of glaucomatous visual impairment on postural sway and falls among older adults. Methods: The sample comprised 72 community-dwelling older adults with open-angle glaucoma, aged 74.0 ± 5.8 years (range 62 to 90 years). Measures of visual function included binocular visual acuity (high-contrast), binocular contrast sensitivity (Pelli-Robson) and binocular visual fields (merged monocular HFA 24-2 SITA-Std). Postural stability was assessed under four conditions: eyes open and closed, on a firm and on a foam surface. Falls were monitored for six months with prospective falls diaries. Regression models, adjusting for age and gender, examined the association between vision measures and postural stability (linear regression) and the number of falls (negative binomial regression). Results: Greater visual field loss was significantly associated with poorer postural stability with eyes open, both on firm (r = 0.34, p < 0.01) and foam (r = 0.45, p < 0.001) surfaces. Eighteen (25 per cent) participants experienced at least one fall: 12 (17 per cent) participants fell only once and six (eight per cent) participants fell two or more times (up to five falls). Visual field loss was significantly associated with falling; the rate of falls doubled for every 10 dB reduction in field sensitivity (rate ratio = 1.08, 95% CI = 1.02–1.13). Importantly, in a model comprising upper and lower field sensitivity, only lower field loss was significantly associated with the number of falls (rate ratio = 1.17, 95% CI = 1.04–1.33). Conclusions: Binocular visual field loss was significantly associated with postural instability and falls among older adults with glaucoma. These findings provide valuable directions for developing falls risk assessment and falls prevention strategies for this population.
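A quick arithmetic check connects the two reported effect sizes: reading the rate ratio of 1.08 as a per-1-dB effect (an assumption, since the abstract does not state the unit), it compounds to roughly a doubling of the fall rate over a 10 dB loss in field sensitivity:

```python
# Compounding a per-unit rate ratio from a negative binomial model over a
# larger exposure change. Assumes the reported 1.08 is per 1 dB of loss.

def compounded_rate_ratio(per_db_ratio, db_loss):
    """Rate ratio implied for db_loss decibels of field sensitivity loss."""
    return per_db_ratio ** db_loss
```

With these values, `compounded_rate_ratio(1.08, 10)` is about 2.16, matching the abstract's statement that the rate of falls doubled per 10 dB reduction.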
Abstract:
Network Jamming systems provide real-time collaborative media performance experiences for novice or inexperienced users. In this paper we will outline the theoretical and developmental drivers for our Network Jamming software, called jam2jam. jam2jam employs generative algorithmic techniques with particular implications for accessibility and learning. We will describe how theories of engagement have directed the design and development of jam2jam and show how iterative testing cycles in numerous international sites have informed the evolution of the system and its educational potential. Generative media systems present an opportunity for users to leverage computational systems to make sense of complex media forms through interactive and collaborative experiences. Generative music and art are relatively new phenomena that use procedural invention as a creative technique to produce music and visual media. These kinds of systems present a range of affordances that can facilitate new kinds of relationships with music and media performance and production. Early systems have demonstrated the potential to provide access to collaborative ensemble experiences to users with little formal musical or artistic expertise. This presentation examines the educational affordances of these systems, evidenced by field data drawn from the Network Jamming Project. These generative performance systems enable access to a unique kind of music/media ensemble performance with very little musical/media knowledge or skill, and they further offer the possibility of unique interactive relationships with artists and creative knowledge through collaborative performance. Through the process of observing, documenting and analysing young people interacting with the generative media software jam2jam, a theory of meaningful engagement has emerged from the need to describe and codify how users experience creative engagement with music/media performance and the locations of meaning.
In this research we observed that the musical metaphors and practices of 'ensemble' or collaborative performance and improvisation as a creative process for experienced musicians can be made available to novice users. The relational meanings of these musical practices afford access to high-level personal, social and cultural experiences. Within the creative process of collaborative improvisation lies a series of modes of creative engagement that move from appreciation through exploration, selection and direction toward embodiment. The expressive sounds and visions made in real-time by collaborating improvisers are immediate and compelling. Generative media systems let novices access these experiences with simple interfaces that allow them to make highly professional and expressive sonic and visual content simply by using gestures and being attentive and perceptive to their collaborators. These kinds of experiences present the potential for highly complex expressive interactions with sound and media as a performance. Evidence that has emerged from this research suggests that collaborative performance with generative media is transformative and meaningful. In this presentation we draw out these ideas around an emerging theory of meaningful engagement that has evolved from the development of network jamming software. Primarily we focus on demonstrating how these experiences might lead to understandings that may be of educational and social benefit.
Abstract:
Purpose: To investigate whether wearing different presbyopic vision corrections alters the pattern of eye and head movements when viewing dynamic driving-related traffic scenes. Methods: Participants included 20 presbyopes (mean age: 56 ± 5.7 years) who had no experience of wearing presbyopic vision corrections (i.e. all were single vision wearers). Eye and head movements were recorded while wearing five different vision corrections in random order: single vision lenses (SV), progressive addition spectacle lenses (PALs), bifocal spectacle lenses (BIF), monovision (MV) and multifocal contact lenses (MTF CL). Videotape recordings of traffic scenes of suburban roads and expressways (with edited targets) were presented as dynamic driving-related stimuli, and digital numeric display panels were included as near visual stimuli (simulating a speedometer and radio). Eye and head movements were recorded using the faceLAB™ system, and the accuracy of target identification was also recorded. Results: The magnitude of eye movements while viewing the driving-related traffic scenes was greater when wearing BIF and PALs than MV and MTF CL (p≤0.013). The magnitude of head movements was greater when wearing SV, BIF and PALs than MV and MTF CL (p<0.0001), and the number of saccades was significantly higher for BIF and PALs than MV (p≤0.043). Target recognition accuracy was poorer for all vision corrections when the near stimulus was located at eccentricities inferiorly and to the left, rather than directly below the primary position of gaze (p=0.008), and PALs gave better performance than MTF CL (p=0.043). Conclusions: Different presbyopic vision corrections alter eye and head movement patterns. In particular, the larger magnitude of eye and head movements and greater number of saccades associated with the spectacle presbyopic corrections may impact on driving performance.
Abstract:
There has been growing interest in smart grids and the possibility of significantly enhanced performance from remote measurements and intelligent controls. For transmission, the use of PMU signals from remote sites and direct load-shed controls can give significant enhancement for large system disturbances, rather than relying on local measurements and linear controls. This lecture will emphasize what can be learned from remote measurements and the mechanisms for achieving a smarter response to major disturbances. For distribution systems there has been a significant history in the area of distribution reconfiguration automation. This lecture will emphasize the incorporation of Distributed Generation into distribution networks and its impact on voltage/frequency control and protection. Overall, the performance of both transmission and distribution will be affected by demand-side management and the capabilities built into the system. In particular, we consider different time scales of load communication and response, and examine the resulting benefits for the system, energy use and lines.
Abstract:
In this article, we investigate the pay-performance relationship of soccer players using individual data from eight seasons of the German soccer league Bundesliga. We find a nonlinear pay-performance relationship, indicating that salary does indeed affect individual performance. The results further show that player performance is affected not only by absolute income level but also by relative income position. An additional analysis of the performance impact of team effects provides evidence of a direct impact of team-mate attributes on individual player performance.
Abstract:
Research on expertise, talent identification and development has tended to be mono-disciplinary, typically adopting neurogenetic determinist or environmentalist positions, with an overriding focus on operational issues. In this paper the validity of dualist positions on sport expertise is evaluated. It is argued that, to advance understanding of expertise and talent development, a shift towards a multi-disciplinary and integrative science focus is necessary, along with the development of a comprehensive multi-disciplinary theoretical rationale. Here we elucidate dynamical systems theory as a multi-disciplinary theoretical rationale for capturing how multiple interacting constraints can shape the development of expert performers. This approach suggests that talent development programmes should eschew the notion of common optimal performance models, emphasise the individual nature of pathways to expertise, and identify the range of interacting constraints that impinge on the performance potential of individual athletes, rather than evaluating current performance on physical tests referenced to group norms.