116 results for Body image, form perception
Abstract:
Purpose: In animal models, hemi-field deprivation results in localised, graded vitreous chamber elongation and, presumably, deprivation-induced localised changes in retinal processing. The aim of this research was to determine whether there are variations in ERG responses across the retina in normal chick eyes, and to examine the effect of hemi-field and full-field deprivation on ERG responses across the retina, at earlier times than have previously been examined electrophysiologically. Methods: Chicks either were untreated or wore monocular full-diffusers or half-diffusers (depriving nasal retina) (n = 6-8 per group) from day 8. mfERG responses were measured using the VERIS mfERG system across the central 18.2° × 16.7° (H × V) field. The stimulus consisted of 61 unscaled hexagons, each modulated between black and white according to a pseudorandom binary m-sequence. The mfERG was measured on day 12 in untreated chicks, following 4 days of hemi-field diffuser wear, and 2, 48 and 96 h after application of full-field diffusers. Results: The ERG response of untreated chick eyes did not vary across the measured field; there was no effect of retinal location on the N1-P1 amplitude (p = 0.108) or on P1 implicit time (p > 0.05). This finding is consistent with the retinal ganglion cell density of the chick varying by only a factor of two across the entire retina. Half-diffusers produced a ramped, graded effect across the retina (p < 0.0001); changes in retinal processing were localised. The untreated retina showed increasing complexity of the ERG waveform with development; form-deprivation prevented this increasing complexity of the response at the 2, 48 and 96 h measurement times and produced alterations in response timing. Conclusions: Form-deprivation, with its concomitant loss of image contrast and high spatial frequency content, prevented development of the ERG responses, consistent with a disruption of the development of retinal feedback systems.
The characterisation of ERG responses in normal and deprived chick eyes across the retina allows the assessment of concurrent visual and retinal manipulations in this model. (Ophthalmic & Physiological Optics © 2013 The College of Optometrists.)
Abstract:
Aim: To explore weight status perception and its relation to actual weight status in a contemporary cohort of 5- to 17-year-old children and adolescents. Methods: Body mass index (BMI), derived from height and weight measurements, and perception of weight status ('too thin', 'about right' and 'too fat') were evaluated in 3043 participants from the Healthy Kids Queensland Survey. In children less than 12 years of age, weight status perception was obtained from the parents, whereas the adolescents self-reported their perceived weight status. Results: Compared with weight status as measured against established BMI cut-offs, just over 20% of parents underestimated their child's weight status and only 1% overestimated it. Adolescent boys were more likely to underestimate their weight status than girls (26.4% vs. 10.2%, P < 0.05), whereas adolescent girls were more likely to overestimate than underestimate (11.8% vs. 3.4%, P < 0.05). Underestimation was greater among parents of overweight children than among parents of obese children, yet fewer than 50% of parents identified their obese child as 'too fat'. There was greater recognition of overweight status among the adolescents, with 83% of those who were obese reporting they were 'too fat'. Conclusion: Whilst there was a high degree of accuracy of weight status perception in those of healthy weight, there was considerable underestimation of weight status, particularly by parents of children who were overweight or obese. Strategies are required that enable parents to identify what a healthy weight looks like and help them understand when intervention is needed to prevent further weight gain as the child gets older.
Abstract:
This thesis introduces improved techniques for automatically estimating the pose of humans from video. It examines a complete workflow for estimating pose, from segmenting the raw video stream to extract silhouettes, to using the silhouettes to determine the relative orientation of parts of the human body. The proposed segmentation algorithms have improved performance and reduced complexity, while the pose estimation shows superior accuracy in difficult cases of self-occlusion.
Abstract:
This research is in the field of arts education. Eisner claims that 'teachers rarely view themselves as artists' (Taylor, 1993:21). Situating professional dance artists and teacher-artists (Mc Lean, 2009) in close proximity to classroom dance teachers, spatially, through a shared rehearsal studio, and creatively, by engaging them in a co-artistry approach, allows participants to map unique and new creative processes, kinaesthetically and experientially. This practice encourages teachers to attune and align themselves with artists' states of mind and enables them to nurture both their teacher-self and their artist-self (Lichtenstein, 2009). The research question was: can interactions between professional dance artists, teacher-artists (Mc Lean, 2009) and classroom dance teachers change classroom dance teachers' self-perceptions? The research found that Artists in Residence projects provide up-skilling in situ for classroom dance teachers and give credence to the act of art making for classroom dance teachers within their peer context, positively enhancing their self-image and promoting self-identification as 'teacher-artists' (Mc Lean, 2009). This project received an Artist in Residence Grant (an Australia Council for the Arts, Education Queensland and Queensland Arts Council partnership). The research findings were chosen for inclusion in the Queensland Performing Arts Complex program, Feet First: an invitation to dance, 2013, and selected for inclusion on the Creative Campus website, http://www.creative-campus.org.uk.
Abstract:
This body of photographic work was created, firstly, to explore a new approach to practice-led research that uses an 'action genre' approach to reflective practice (Lemke) and, secondly, to visually explore human interaction with the fundamental item in life - water. The first of these is based on the contention that to understand the meanings inherent in photographs we cannot look merely at the end result. It is essential to keep looking at the actions of practitioners, and the influences upon them, to determine how external influences affect the meaning potential of editorial photographs (Grayson, 2012). WATER therefore provides an ideal platform to reflect upon the actions and influences involved in creating work within the photographic genre of photojournalism. It enables this practitioner to reflect on each stage of production to gain a better understanding of how external influences impact the narrative potential of the images created. Photographers experience multi-faceted influences when creating images that, in turn, construct and present the narrative potential of editorial photographs. There is an important relationship between professional photographers and the technical, cultural, economic and institutional forces that impinge upon all stages of production and publication, and this project yields a greater understanding of those forces. Therefore, to understand the meanings inherent in the photographs within WATER, I do not look merely at the end result. The project provides a case study examining my actions in the field, and the influences upon me, to determine how external influences affect the meaning potential of these photographs (Grayson, 2012).
As a result, this project adds to the body of scholarship around the definition of photojournalism and how it has adapted to the current media environment, and provides scope for further research into emerging genres within editorial photography, such as citizen photojournalism. Concurrently, the photographs themselves were created to visually explore how a humanistic desire to interact with the natural form of water persists even within a modern cosmopolitan life lived around it. Taking a photojournalistic approach to exploring this phenomenon, the images were created by 'capturing moments as they happened', with no posing or setting up of images. This serendipitous approach to the photographic medium provides the practitioner with at least an attempt to direct the subjectivity contained explicitly in photographs. What results is a series of images that extend the visual dialogue around the role of water within modern humanistic lifestyles and how it remains an integral part of our society's behaviours. It captures important moments that document this relationship at this time of modern development. The resulting works were exhibited and published as part of the Head On Photo Festival, Australia's largest photo festival and the world's second largest, in Sydney, 20-24 May 2013. The WATER series of images was curated by three Magnum members: Ian Berry, Eli Reed and Chris Steele-Perkins. Magnum is a highly regarded international photographic co-operative with editorial offices in New York, London, Paris and Tokyo. The works were projected as part of the official festival programme, presented to both members of the public and Sydney's photography professionals. In addition, a sample of images from the WATER series was chosen for inclusion in the Magnum-published hardcover book.
References: Grayson, Louise. 2012. 'Editorial photographs and patterns of practice.' Journalism Practice. http://www.tandfonline.com/doi/abs/10.1080/17512786.2012.726836#.UbZN-L--1RQ. Lemke, Jay. 1995. Textual Politics: Discourse and Social Dynamics. London: Taylor & Francis.
Abstract:
Research Statement: In this research project, film groups of 4-5 students under my direction produced a 3-5 minute urban film that explored the Brisbane Northbank, which then became the basis for an urban proposal and the design of a small film studio for independent filmmakers on the site. The theoretical premise was that a film studio does not simply produce movies; it creates urban effects all around it and acts as a vortex of cultural activity and social life. For this modest facility, where the cinema goes out into the street, the city itself becomes the studio. Students were asked to observe the historical problematics of technique, image and effect that arise in the cinema, and to apply these to their own urban-film practice. A panel of judges working in film and architecture shortlisted the 12 best films in 2010, and a major public film screening event took place at the Tribal Cinema. The shortlisted films now form a permanent 'exhibit' on YouTube. The research project was funded by the Queensland University of Technology, School of Design, and received accolades from film faculty in the Creative Industries Faculty. The diverse body of work that emanated from the screening contributed a unique analysis of the Northbank to Brisbane.
Abstract:
Field robots often rely on laser range finders (LRFs) to detect obstacles and navigate autonomously. Despite recent progress in sensing technology and perception algorithms, adverse environmental conditions, such as the presence of smoke, remain a challenging issue for these robots. In this paper, we investigate the possibility of improving laser-based perception applications by anticipating situations in which laser data are affected by smoke, using supervised learning and state-of-the-art visual image quality analysis. We propose to train a k-nearest-neighbour (kNN) classifier to recognise situations where a laser scan is likely to be affected by smoke, based on visual data quality features. This method is evaluated experimentally using a mobile robot equipped with LRFs and a visual camera. The strengths and limitations of the technique are identified and discussed, and we show that the method is beneficial when conservative decisions are the most appropriate.
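The abstract does not reproduce the classifier's details; as a purely illustrative sketch (the two-feature design, the feature values and the labels below are all hypothetical, not the paper's data), a kNN majority vote over visual quality features might look like:

```python
import math

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training feature vectors, using Euclidean distance."""
    ranked = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = [y for _, y in ranked[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical training set: each row is a vector of image-quality
# features (say, normalised contrast and sharpness) computed from the
# camera frame, labelled True if the concurrent laser scan was smoke-affected.
features = [(0.9, 0.8), (0.85, 0.9), (0.2, 0.3), (0.25, 0.2), (0.3, 0.35)]
smoke_affected = [False, False, True, True, True]

print(knn_predict(features, smoke_affected, (0.22, 0.28)))  # low quality -> True
print(knn_predict(features, smoke_affected, (0.88, 0.85)))  # high quality -> False
```

A conservative system, as the abstract suggests, would then discard or down-weight laser scans whenever the classifier predicts smoke contamination.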
Abstract:
This paper presents an approach to promote the integrity of perception systems for outdoor unmanned ground vehicles (UGV) operating in challenging environmental conditions (presence of dust or smoke). The proposed technique automatically evaluates the consistency of the data provided by two sensing modalities: a 2D laser range finder and a millimetre-wave radar, allowing for perceptual failure mitigation. Experimental results, obtained with a UGV operating in rural environments, and an error analysis validate the approach.
Abstract:
This work aims to contribute to the reliability and integrity of perceptual systems for unmanned ground vehicles (UGVs). A method is proposed to evaluate the quality of sensor data prior to its use in a perception system, by applying a quality metric to heterogeneous sensor data such as visual and infrared camera images. The concept is illustrated with sensor data that is evaluated prior to its use in a standard SIFT feature extraction and matching technique. The method is then evaluated using various experimental data sets collected from a UGV in challenging environmental conditions, represented by the presence of airborne dust and smoke. In the first series of experiments, a motionless vehicle observes a 'reference' scene; the method is then extended to the case of a moving vehicle by compensating for its motion. This paper shows that it is possible to anticipate the degradation of a perception algorithm by evaluating the input data prior to any actual execution of the algorithm.
Abstract:
This paper proposes an experimental study of quality metrics that can be applied to visual and infrared images acquired from cameras onboard an unmanned ground vehicle (UGV). The relevance of existing metrics in this context is discussed and a novel metric is introduced. Selected metrics are evaluated on data collected by a UGV in clear and challenging environmental conditions, represented in this paper by the presence of airborne dust or smoke. An example of application is given with monocular SLAM estimating the pose of the UGV while smoke is present in the environment. It is shown that the proposed novel quality metric can be used to anticipate situations where the quality of the pose estimate will be significantly degraded by the input image data. This enables decisions such as advantageously switching between data sources (e.g. using infrared images instead of visual images).
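The paper's actual metric is not reproduced here; a toy stand-in (global contrast measured as intensity variance, with an illustrative threshold) is enough to sketch how a quality metric could drive the switch between visual and infrared sources:

```python
def image_quality(pixels):
    """Toy quality metric: variance of pixel intensities over the image.
    Airborne dust or smoke tends to flatten contrast, lowering this value."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum((p - mean) ** 2 for p in flat) / len(flat)

def select_source(visual, infrared, threshold=100.0):
    """Fall back to the infrared image when the visual metric drops
    below the threshold (the threshold value is purely illustrative)."""
    if image_quality(visual) >= threshold:
        return ("visual", visual)
    return ("infrared", infrared)

clear_frame = [[0, 255], [255, 0]]       # high-contrast visual frame
smoky_frame = [[120, 125], [130, 128]]   # washed-out visual frame
ir_frame = [[40, 200], [210, 35]]

print(select_source(clear_frame, ir_frame)[0])  # visual
print(select_source(smoky_frame, ir_frame)[0])  # infrared
```

A real metric would be computed per incoming frame, before the SLAM front end consumes it, so that degraded pose estimates are anticipated rather than detected after the fact.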
Abstract:
This document describes large, accurately calibrated and time-synchronised datasets, gathered in controlled environmental conditions using an unmanned ground vehicle equipped with a wide variety of sensors: multiple laser scanners, a millimetre-wave radar scanner, a colour camera and an infra-red camera. Full details of the sensors are given, as well as the calibration parameters needed to locate them with respect to each other and to the platform. The report also specifies the format and content of the data and the conditions in which the data were gathered. Data were collected in two vehicle configurations: static and dynamic. The static tests consisted of sensing a fixed 'reference' terrain, containing simple known objects, from a motionless vehicle. For the dynamic tests, data were acquired from a moving vehicle in various environments, mainly rural, including an open area, a semi-urban zone and a natural area with different types of vegetation. For both categories, data were gathered in controlled environmental conditions, which included the presence of dust, smoke and rain. Most of the environments involved were static, except for a few specific datasets that involve the presence of a walking pedestrian. Finally, this document presents illustrations of the effects of adverse environmental conditions on sensor data, as a first step towards reliability and integrity in autonomous perceptual systems.
Abstract:
Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
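The core mapping operation described above, taking a range return from the sensor frame through the vehicle pose into a fixed navigation frame, can be sketched in 2D with homogeneous transforms. The mounting offset and vehicle pose below are made-up numbers, and a real system would also propagate the paper's error model alongside the point:

```python
import math

def make_transform(x, y, theta):
    """3x3 homogeneous transform for a 2D pose: translation (x, y), heading theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def matmul(a, b):
    """Compose two 3x3 transforms."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(t, px, py):
    """Map a point from the transform's source frame to its target frame."""
    return (t[0][0] * px + t[0][1] * py + t[0][2],
            t[1][0] * px + t[1][1] * py + t[1][2])

# Illustrative numbers: sensor mounted 1 m ahead of the vehicle origin
# (extrinsic calibration), vehicle at (10, 5) heading 90 degrees (pose).
T_vehicle_sensor = make_transform(1.0, 0.0, 0.0)
T_nav_vehicle = make_transform(10.0, 5.0, math.pi / 2)
T_nav_sensor = matmul(T_nav_vehicle, T_vehicle_sensor)

# A range return 2 m directly ahead of the sensor, in navigation coordinates:
print(apply(T_nav_sensor, 2.0, 0.0))  # approximately (10.0, 8.0)
```

Errors in the extrinsic calibration (`T_vehicle_sensor`) or in pose timing enter this chain multiplicatively, which is why the paper argues for modelling them explicitly rather than assuming the transform is exact.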
Abstract:
In this paper we present large, accurately calibrated and time-synchronized data sets, gathered outdoors in controlled and variable environmental conditions, using an unmanned ground vehicle (UGV), equipped with a wide variety of sensors. These include four 2D laser scanners, a radar scanner, a color camera and an infrared camera. It provides a full description of the system used for data collection and the types of environments and conditions in which these data sets have been gathered, which include the presence of airborne dust, smoke and rain.
Abstract:
This work aims to promote integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicles equipped with a camera and a 2D laser range finder. A method to check for inconsistencies between the data provided by these two heterogeneous sensors is proposed and discussed. First, uncertainties in the estimated transformation between the laser and camera frames are evaluated and propagated up to the projection of the laser points onto the image. Then, for each pair of laser scan and camera image acquired, the information at corners of the laser scan is compared with the content of the image, yielding a likelihood of correspondence. The result of this process is then used to validate segments of the laser scan that are found to be consistent with the image, while inconsistent segments are rejected. Experimental results illustrate how this technique can improve the reliability of perception in challenging environmental conditions, such as in the presence of airborne dust.
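The projection step mentioned above, laser points mapped onto the image via the estimated laser-camera transformation, reduces to a pinhole model once a point is expressed in the camera frame. As a minimal sketch (the intrinsics below are assumed values for a 640x480 camera, not those of the paper, and uncertainty propagation is omitted):

```python
def project_to_image(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3D point in the camera frame (z forward)
    to pixel coordinates; returns None for points behind the camera."""
    x, y, z = point_cam
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

def in_image(pixel, width=640, height=480):
    """Only projections landing inside the image can be checked
    against image content for laser-camera consistency."""
    if pixel is None:
        return False
    u, v = pixel
    return 0 <= u < width and 0 <= v < height

# A laser corner point, already transformed into the camera frame:
pixel = project_to_image((0.5, 0.1, 2.0), fx=500, fy=500, cx=320, cy=240)
print(pixel, in_image(pixel))  # (445.0, 265.0) True
```

In the method described, each such projected corner would then be compared with local image content, and segments whose corners disagree with the image would be rejected.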