836 results for Tragic vision
Abstract:
Purpose. To investigate the effect of various presbyopic vision corrections on nighttime driving performance on a closed-road driving circuit. Methods. Participants were 11 presbyopes (mean age, 57.3 ± 5.8 years), with a mean best sphere distance refractive error of R+0.23±1.53 DS and L+0.20±1.50 DS, whose only experience of wearing presbyopic vision correction was reading spectacles. The study involved a repeated-measures design by which a participant's nighttime driving performance was assessed on a closed-road circuit while wearing each of four power-matched vision corrections. These included single-vision distance lenses (SV), progressive-addition spectacle lenses (PAL), monovision contact lenses (MV), and multifocal contact lenses (MTF CL), worn in a randomized order. Measures included low-contrast road hazard detection and avoidance, road sign and near target recognition, lane-keeping, driving time, and legibility distance for street signs. Eye movement data (fixation duration and number of fixations) were also recorded. Results. Street sign legibility distances were shorter when wearing MV and MTF CL than SV and PAL (P < 0.001), and participants drove more slowly with MTF CL than with PAL (P = 0.048). Wearing SV resulted in more errors (P < 0.001) and in more (P = 0.002) and longer (P < 0.001) fixations when responding to near targets. Fixation duration was also longer when viewing distant signs with MTF CL than with PAL (P = 0.031). Conclusions. Presbyopic vision corrections worn by naive, unadapted wearers affected nighttime driving. Overall, spectacle corrections (PAL and SV) performed well for distance driving tasks, but SV negatively affected viewing near dashboard targets. MTF CL resulted in the shortest legibility distance for street signs and longer fixation times.
Abstract:
This paper argues that young people need to be given the opportunity to recognise the interaction between their own understandings of the world as it is now and the vision of what it might become. To support this argument, we discuss an urban planning project, known as the Lower Mill Site Project, which involved active participation of high school students from the local community. The outcomes of this project demonstrate the positive contributions young people can make to the process of urban redevelopment, the advantages of using a participatory design approach, and the utopian possibilities that can emerge when young people are invited to be part of an intergenerational community project.
Abstract:
This paper presents the development of a low-cost sensor platform for use in ground-based visual pose estimation and scene mapping tasks. We seek to develop a technical solution using low-cost vision hardware that allows us to accurately estimate robot position for SLAM tasks. We present results from the application of a vision-based pose estimation technique to simultaneously determine camera poses and scene structure. The results are generated from a dataset gathered traversing a local road at the St Lucia Campus of the University of Queensland. We show the accuracy of the pose estimation over a 1.6 km trajectory in relation to GPS ground truth.
Abstract:
We aim to demonstrate unaided visual 3D pose estimation and map reconstruction using both monocular and stereo vision techniques. To date, our work has focused on collecting data from Unmanned Aerial Vehicles, which generates a number of significant issues specific to the application. Such issues include scene reconstruction degeneracy from planar data, poor structure initialisation for monocular schemes and difficult 3D reconstruction due to high feature covariance. Most modern Visual Odometry (VO) and related SLAM systems make use of a number of sensors to inform pose and map generation, including laser range-finders, radar, inertial units and vision [1]. By fusing sensor inputs, the advantages and deficiencies of each sensor type can be handled in an efficient manner. However, many of these sensors are costly and each adds to the complexity of such robotic systems. With continual advances in the abilities, small size, passivity and low cost of visual sensors, along with the dense, information-rich data that they provide, our research focuses on the use of unaided vision to generate pose estimates and maps from robotic platforms. We propose that highly accurate (±5 cm) dense 3D reconstructions of large-scale environments can be obtained in addition to the localisation of the platform described in other work [2]. Using images taken from cameras, our algorithm simultaneously generates an initial visual odometry estimate and scene reconstruction from visible features, then passes this estimate to a bundle-adjustment routine to optimise the solution. From this optimised scene structure and the original images, we aim to create a detailed, textured reconstruction of the scene. By applying such techniques to a unique airborne scenario, we hope to expose new robotic applications of SLAM techniques. The ability to obtain highly accurate 3D measurements of an environment at a low cost is critical in a number of agricultural and urban monitoring situations.
We focus on cameras as such sensors are small, cheap and light-weight and can therefore be deployed in smaller aerial vehicles. This, coupled with the ability of small aerial vehicles to fly near to the ground in a controlled fashion, will assist in increasing the effective resolution of the reconstructed maps.
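The bundle-adjustment step described above refines camera poses and scene structure by minimising reprojection error. The sketch below is illustrative Python, not the authors' implementation; the simple pinhole model, the focal length, and all names are assumptions introduced for clarity:

```python
import numpy as np

def project(point_3d, R, t, f):
    # Pinhole projection of a world point into a camera with rotation R,
    # translation t and focal length f (an assumed, simplified camera model)
    p_cam = R @ point_3d + t
    return f * p_cam[:2] / p_cam[2]

def reprojection_error(points_3d, observations, R, t, f):
    # Sum of squared pixel residuals; bundle adjustment jointly adjusts the
    # camera poses and the 3D points to drive this quantity down
    residuals = [project(X, R, t, f) - u for X, u in zip(points_3d, observations)]
    return sum(float(r @ r) for r in residuals)

# A camera at the origin looking down +Z, and two points it observes exactly
R = np.eye(3)
t = np.zeros(3)
f = 500.0
points = [np.array([0.1, 0.2, 5.0]), np.array([-0.3, 0.1, 4.0])]
obs = [project(X, R, t, f) for X in points]

print(reprojection_error(points, obs, R, t, f))  # 0.0 for a perfect solution
```

In a real system the observations are noisy feature tracks, the poses come from the initial visual odometry estimate, and a nonlinear least-squares solver iterates on this cost.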
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area-based, transform-based, feature-based, phase-based, hybrid, relaxation-based, dynamic programming and object-space methods. A number of area-based matching metrics, as well as the rank and census transforms, were implemented in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications.
A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
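The trade-off the report describes between the Sum of Absolute Differences and the radiometrically robust metrics can be illustrated in a few lines. The following Python sketch (illustrative only, not the report's implementation; window sizes and values are made up) shows why SAD is fooled by a brightness offset while ZSAD and the census transform are not:

```python
import numpy as np

def sad(window_l, window_r):
    # Sum of Absolute Differences: cheapest metric, but sensitive to
    # radiometric distortion such as a brightness offset between cameras
    return np.abs(window_l - window_r).sum()

def zsad(window_l, window_r):
    # Zero Mean SAD: subtracting each window's mean cancels brightness
    # offsets, at the cost of extra computation
    return np.abs((window_l - window_l.mean()) - (window_r - window_r.mean())).sum()

def census(window):
    # Census transform: a bit string recording whether each pixel is darker
    # than the centre pixel; windows are then compared by Hamming distance
    centre = window[window.shape[0] // 2, window.shape[1] // 2]
    return (window < centre).flatten()

def hamming(c1, c2):
    return int(np.count_nonzero(c1 != c2))

# Toy 3x3 windows; the right window is the left one plus a brightness offset
left = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90]], dtype=float)
right = left + 25  # pure radiometric distortion, same scene content

print(sad(left, right))                       # 225: SAD is fooled by the offset
print(zsad(left, right))                      # 0.0: ZSAD cancels the offset
print(hamming(census(left), census(right)))   # 0: census keeps only pixel ordering
```

Because the census transform depends only on the relative ordering of intensities, any monotonic radiometric change leaves the bit strings, and hence the Hamming cost, unchanged.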
Abstract:
Construction 2020 is a national initiative undertaken by CRC for Construction Innovation to focus its ongoing leadership of the Australian property and construction industry in applied research and best contribute to the industry's national and international growth and competitiveness. It is the first major report on the long-term outlook for the industry since the late 1990s. The report identifies nine key themes for the future of the property and construction industry. These visions describe the major concerns of the industry and the improved future working environment favoured by its stakeholders. The first and clearest vision, agreed across the industry, is that environmentally sustainable construction (the creation of buildings and infrastructure that minimise their impact on the natural environment) is an area of huge potential. Here, technologies like Construction Innovation's LCADesign can make a big difference. This is a calculator that automatically works out, from a 3D computer-aided design model, the environmental costs of the materials in a building, all at the push of a button. By working with industry, we'd expect to have a comprehensive set of eco-design tools for all stages of the construction life cycle, to minimise energy use, greenhouse emissions and other forms of waste or pollution. Other significant areas of focus in the report include the development of nationally uniform codes of practice, new tools to evaluate design and product performance, comparisons with overseas industries, and a worldwide research network to ensure that Australian technology is at the cutting edge.
Abstract:
Previous research has suggested that perceptual-motor difficulties may account for obese children's lower motor competence; however, specific evidence is currently lacking. Therefore, this study examined the effect of altered visual conditions on spatiotemporal and kinematic gait parameters in obese versus normal-weight children. Thirty-two obese and normal-weight children (11.2 ± 1.5 years) walked barefoot on an instrumented walkway at constant self-selected speed during LIGHT and DARK conditions. Three-dimensional motion analysis was performed to calculate spatiotemporal parameters, as well as sagittal trunk segment and lower extremity joint angles at heel-strike and toe-off. Self-selected speed did not significantly differ between groups. In the DARK condition, all participants walked at a significantly slower speed, with decreased stride length and increased stride width. Without normal vision, obese children had a more pronounced increase in relative double support time compared to the normal-weight group, resulting in a significantly greater percentage of the gait cycle spent in stance. Walking in the DARK, both groups showed greater forward tilt of the trunk and restricted hip movement. All participants had increased knee flexion at heel-strike, as well as decreased knee extension and ankle plantarflexion at toe-off in the DARK condition. The removal of normal vision affected obese children's temporal gait pattern to a larger extent than that of normal-weight peers. Results suggest an increased dependency on vision in obese children to control locomotion. Beyond the mechanical problem of moving excess mass, a different coupling between perception and action appears to be governing obese children's motor coordination and control.
Abstract:
The OECD (2006, Starting Strong II: Early Childhood Education and Care, OECD Publishing, Paris) envisions early childhood education and care settings as meeting places for diverse social groups; places that build social capital. This vision was assessed in a comparison of three preschool types: full-fee paying, subsidised-fee and publicly funded. The social composition within each was examined and the connectedness of the children (n = 472) who attended was compared. Publicly funded preschools had more socially diverse populations. The quantity of social connectedness did not differ, but children in publicly funded preschools described higher quality social relationships. Not all preschool settings are socially diverse but, where they are, the quality of relationships is highest.
Abstract:
This study investigated the Kinaesthetic Fusion Effect (KFE) first described by Craske and Kenny in 1981. The current study did not replicate these findings. Participants did not perceive any reduction in the sagittal separation of a button pressed by the index finger of one arm and a probe touching the other, following repeated exposure to the tactile stimuli present on both unseen arms. This study's failure to replicate the widely cited KFE as described by Craske et al. (1984) suggests that it may be contingent on several aspects of visual information, especially the availability of a specific visual reference, the role of instructions regarding gaze direction, and the potential use of a line-of-sight strategy when referring felt positions to an interposed surface. In addition, a foreshortening effect was found; this may result from a line-of-sight judgment and represent a feature of the reporting method used. The transformed line-of-sight data were regressed against the participant-reported values, resulting in slopes of 1.14 (right arm) and 1.11 (left arm), with r > 0.997 for each. The study also provides additional evidence that mis-perception of the mediolateral position of the limbs (specifically their separation), consistent with notions of Gestalt grouping, is somewhat labile and can be influenced by active motions causing touch of one limb by the other. Finally, this research will benefit future studies that require participants to report the perceived locations of unseen limbs.
Abstract:
Robotics and computer vision each involve the application of computational algorithms to data. The research community has developed a very large body of algorithms, but for a newcomer to the field this can be quite daunting. For more than 10 years the author has maintained two open-source MATLAB® Toolboxes, one for robotics and one for vision. They provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. This new book makes the fundamental algorithms of robotics, vision and control accessible to all. It weaves together theory, algorithms and examples in a narrative that covers robotics and computer vision separately and together. Using the latest versions of the Toolboxes, the author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by real problems observed by the author over many years as a practitioner of both robotics and computer vision. Written in a light but informative style, it is easy to read and absorb, and includes over 1000 MATLAB® and Simulink® examples and figures. The book is a real walk through the fundamentals of mobile robots, navigation, localization, arm-robot kinematics, dynamics and joint-level control, then camera models, image processing, feature extraction and multi-view geometry, finally bringing it all together with an extensive discussion of visual servo systems.
Abstract:
In this paper we describe a body of work aimed at extending the reach of mobile navigation and mapping. We describe how running topological and metric mapping and pose estimation processes concurrently, using vision and laser ranging, has produced a full six-degree-of-freedom outdoor navigation system. It is capable of producing intricate three-dimensional maps over many kilometers and in real time. We consider issues concerning the intrinsic quality of the built maps and describe our progress towards adding semantic labels to maps via scene de-construction and labeling. We show how our choices of representation, inference methods and use of both topological and metric techniques naturally allow us to fuse maps built from multiple sessions with no need for manual frame alignment or data association.
Abstract:
This paper presents a preliminary flight-test-based detection range versus false alarm performance characterisation of a morphological-hidden Markov model filtering approach to vision-based airborne dim-target collision detection. On the basis of compelling in-flight collision scenario data, we calculate system operating characteristic (SOC) curves that concisely illustrate the detection range versus false alarm rate performance design trade-offs. These preliminary SOC curves provide a more complete dim-target detection performance description than previous studies (due to the experimental difficulties involved, previous studies have been limited to very short flight data sample sets and hence have not been able to quantify false alarm behaviour). The preliminary investigation here is based on data collected from 4 controlled collision encounters and supporting non-target flight data. This study suggests head-on detection ranges of approximately 2.22 km under blue-sky background conditions (1.26 km in cluttered background conditions), whilst experiencing false alarms at a rate of less than 1.7 false alarms/hour (i.e. less than once every 36 minutes). Further data collection is currently in progress.
Abstract:
Computer vision is an attractive solution for uninhabited aerial vehicle (UAV) collision avoidance, due to the low weight, size and power requirements of the hardware. A two-stage paradigm has emerged in the literature for detection and tracking of dim targets in images, comprising spatial preprocessing followed by temporal filtering. In this paper, we investigate a hidden Markov model (HMM) based temporal filtering approach. Specifically, we propose an adaptive HMM filter, in which the variance of the model parameters is refined as the quality of the target estimate improves. Filters with high variance (fat filters) are used for target acquisition, and filters with low variance (thin filters) are used for target tracking. The adaptive filter is tested in simulation and with real data (video of a collision-course aircraft). Our test results demonstrate that our adaptive filtering approach has improved tracking performance, and provides an estimate of target heading not present in previous HMM filtering approaches.
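The two-stage paradigm and the fat/thin filter idea can be sketched as an HMM forward (filtering) recursion over candidate target positions. This Python fragment is an illustrative toy, not the paper's algorithm: the 1-D pixel grid, Gaussian transition model, intensity likelihood and all parameter values are assumptions:

```python
import numpy as np

N = 20  # candidate target positions along a 1-D strip of pixels

def transition_matrix(sigma):
    # Gaussian target-motion model; sigma plays the role of the filter
    # "variance": large sigma = fat acquisition filter, small = thin tracker
    idx = np.arange(N)
    A = np.exp(-0.5 * ((idx[None, :] - idx[:, None]) / sigma) ** 2)
    return A / A.sum(axis=1, keepdims=True)

def hmm_forward_step(belief, A, likelihood):
    # One step of the HMM forward recursion: predict with the transition
    # model, weight by the per-pixel frame likelihood, then renormalise
    posterior = (belief @ A) * likelihood
    return posterior / posterior.sum()

rng = np.random.default_rng(0)
belief = np.full(N, 1.0 / N)            # uniform prior over positions
A = transition_matrix(sigma=3.0)        # start with a fat acquisition filter
for true_pos in range(5, 12):           # dim target drifting right
    frame = rng.normal(0.0, 0.2, N)     # background noise
    frame[true_pos] += 1.0              # faint target return
    belief = hmm_forward_step(belief, A, np.exp(frame))
    if belief.max() > 0.5:              # estimate quality has improved:
        A = transition_matrix(sigma=1.0)  # switch to a thin tracking filter

print(int(np.argmax(belief)))  # posterior peak near the final target position
```

In the real system the likelihood comes from the morphologically preprocessed image rather than raw intensities, and the state space is a 2-D pixel grid, but the predict-update structure is the same.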