945 results for Camera Obscura
Corneal topography with Scheimpflug imaging and videokeratography: comparative study of normal eyes
Abstract:
PURPOSE: To compare the repeatability within anterior corneal topography measurements and agreement between measurements with the Pentacam HR rotating Scheimpflug camera and with a previously validated Placido disk–based videokeratoscope (Medmont E300).
SETTING: Contact Lens and Visual Optics Laboratory, School of Optometry, Queensland University of Technology, Brisbane, Queensland, Australia.
METHODS: Normal eyes in 101 young adult subjects had corneal topography measured using the Scheimpflug camera (6 repeated measurements) and videokeratoscope (4 repeated measurements). The best-fitting axial power corneal spherocylinder was calculated and converted into power vectors. Corneal higher-order aberrations (HOAs) (up to the 8th Zernike order) were calculated using the corneal elevation data from each instrument.
RESULTS: Both instruments showed excellent repeatability for axial power spherocylinder measurements (repeatability coefficients <0.25 diopter; intraclass correlation coefficients >0.9) and good agreement for all power vectors. Agreement between the 2 instruments was closest when the mean of multiple measurements was used in analysis. For corneal HOAs, both instruments showed reasonable repeatability for most aberration terms and good correlation and agreement for many aberrations (eg, spherical aberration, coma, higher-order root mean square). For other aberrations (eg, trefoil and tetrafoil), the 2 instruments showed relatively poor agreement.
CONCLUSIONS: For normal corneas, the Scheimpflug system showed excellent repeatability and reasonable agreement with a previously validated videokeratoscope for the anterior corneal axial curvature best-fitting spherocylinder and several corneal HOAs. However, for certain aberrations with higher azimuthal frequencies, the Scheimpflug system had poor agreement with the videokeratoscope; thus, caution should be used when interpreting these corneal aberrations with the Scheimpflug system.
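The spherocylinder-to-power-vector conversion mentioned in METHODS is commonly done with the standard Thibos formulae (M, J0, J45); a minimal sketch under that assumption (function name and example values are illustrative, not from the study):

```python
import math

def power_vectors(sphere, cylinder, axis_deg):
    """Convert a spherocylinder (S, C, axis) into Thibos power vectors.

    M   : spherical equivalent power
    J0  : Jackson cross-cylinder component at 0/90 degrees
    J45 : Jackson cross-cylinder component at 45/135 degrees
    """
    a = math.radians(axis_deg)
    M = sphere + cylinder / 2.0
    J0 = -(cylinder / 2.0) * math.cos(2 * a)
    J45 = -(cylinder / 2.0) * math.sin(2 * a)
    return M, J0, J45

# Illustrative corneal axial power: 44.0 D sphere, -1.0 D cylinder at 90 deg
M, J0, J45 = power_vectors(44.0, -1.0, 90.0)
```

Working in power-vector space makes repeated measurements averageable component-by-component, which is why agreement studies such as this one compare M, J0, and J45 rather than raw sphere/cylinder/axis triplets.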
Abstract:
The international focus on embracing daylighting for energy efficient lighting purposes and the corporate sector’s indulgence in the perception of workplace and work practice “transparency” has spurred an increase in highly glazed commercial buildings. This in turn has renewed issues of visual comfort and daylight-derived glare for occupants. To ascertain evidence of, or predict the risk of, these events, appraisals of these complex visual environments require detailed information on the luminances present in an occupant’s field of view. Conventional luminance meters are an expensive and time-consuming method of achieving these results. To create a luminance map of an occupant’s visual field using such a meter requires too many individual measurements to be a practical measurement technique. The application of digital cameras as luminance measurement devices has solved this problem. With high dynamic range imaging, a single digital image can be created to provide luminances on a pixel-by-pixel level within the broad field of view afforded by a fish-eye lens: virtually replicating an occupant’s visual field and providing rapid yet detailed luminance information for the entire scene. With proper calibration, relatively inexpensive digital cameras can be successfully applied to the task of luminance measurements, placing them in the realm of tools that any lighting professional should own. This paper discusses how a digital camera can become a luminance measurement device and then presents an analysis of results obtained from post-occupancy measurements from building assessments conducted by the Mobile Architecture Built Environment Laboratory (MABEL) project. This discussion leads to the important realisation that the placement of such tools in the hands of lighting professionals internationally will provide new opportunities for the lighting community in terms of research on critical issues in lighting such as daylight glare and visual quality and comfort.
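The per-pixel luminance step described above can be illustrated with a minimal sketch, assuming an HDR image that stores linear RGB radiance and adopting the Radiance convention of 179 lm/W with Rec. 709 channel weights (the paper's actual calibration procedure may differ):

```python
def pixel_luminance(r, g, b, k=179.0):
    """Approximate photopic luminance (cd/m^2) for one pixel.

    Assumes r, g, b are linear radiance values (not gamma-encoded),
    weighted by the Rec. 709 luminous-efficiency coefficients and
    scaled by the Radiance luminous-efficacy constant of 179 lm/W.
    """
    return k * (0.2126 * r + 0.7152 * g + 0.0722 * b)

# A pixel of (1, 1, 1) linear radiance maps to 179 cd/m^2
print(pixel_luminance(1.0, 1.0, 1.0))
```

Applying this per pixel over a fish-eye HDR capture yields exactly the luminance map the abstract describes: one photograph replacing hundreds of spot-meter readings.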
Abstract:
This paper presents an implementation of an aircraft pose and motion estimator using visual systems as the principal sensor for controlling an Unmanned Aerial Vehicle (UAV) or as a redundant system for an Inertial Measurement Unit (IMU) and gyro sensors. First, we explore the applications of the unified theory for central catadioptric cameras for attitude and heading estimation, explaining how the skyline is projected on the catadioptric image and how it is segmented and used to calculate the UAV’s attitude. Then we use appearance images to obtain a visual compass, and we calculate the relative rotation and heading of the aerial vehicle. Additionally, we show the use of a stereo system to calculate the aircraft height and to measure the UAV’s motion. Finally, we present a visual tracking system based on fuzzy controllers working on both a UAV and a camera pan-and-tilt platform. Every part is tested using the UAV COLIBRI platform to validate the different approaches, which include comparison of the estimated data with the inertial values measured onboard the helicopter platform and the validation of the tracking schemes on real flights.
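The appearance-image visual compass mentioned above can be sketched as a brute-force column-shift search over horizontally unwrapped 360° panoramas (the unwrapped-panorama format, names, and error metric here are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def visual_compass_rotation(prev, curr):
    """Estimate relative yaw between two 360-degree panoramic
    appearance images (H x W grayscale, float) by finding the
    horizontal column shift that minimises the sum of absolute
    pixel differences, then converting columns to degrees."""
    h, w = prev.shape
    errs = [np.abs(np.roll(curr, s, axis=1) - prev).sum() for s in range(w)]
    best = int(np.argmin(errs))       # columns of rotation
    return best * 360.0 / w           # degrees of rotation
```

Because a pure yaw rotation of an omnidirectional camera shifts the unwrapped panorama horizontally, the best-matching shift is a direct heading increment that can be accumulated frame to frame.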
Abstract:
Appearance-based mapping and localisation is especially challenging when separate processes of mapping and localisation occur at different times of day. The problem is exacerbated in the outdoors where continuous change in sun angle can drastically affect the appearance of a scene. We confront this challenge by fusing the probabilistic local feature based data association method of FAB-MAP with the pose cell filtering and experience mapping of RatSLAM. We evaluate the effectiveness of our amalgamation of methods using five datasets captured throughout the day from a single camera driven through a network of suburban streets. We show further results when the streets are re-visited three weeks later, and draw conclusions on the value of the system for lifelong mapping.
Abstract:
Competent navigation in an environment is a major requirement for an autonomous mobile robot to accomplish its mission. Nowadays, many successful systems for navigating a mobile robot use an internal map which represents the environment in a detailed geometric manner. However, building, maintaining and using such environment maps for navigation is difficult because of perceptual aliasing and measurement noise. Moreover, geometric maps require the processing of huge amounts of data, which is computationally expensive. This thesis addresses the problem of vision-based topological mapping and localisation for mobile robot navigation. Topological maps are concise and graphical representations of environments that are scalable and amenable to symbolic manipulation. Thus, they are well-suited for basic robot navigation applications, and also provide a representational basis for the procedural and semantic information needed for higher-level robotic tasks. In order to make vision-based topological navigation suitable for inexpensive mobile robots for the mass market, we propose to characterise key places of the environment based on their visual appearance through colour histograms. The approach for representing places using visual appearance is based on the fact that colour histograms change slowly as the field of vision sweeps the scene when a robot moves through an environment. Hence, a place represents a region of the environment rather than a single position. We demonstrate, in experiments using an indoor data set, that a topological map in which places are characterised using visual appearance augmented with metric clues provides sufficient information to perform continuous metric localisation which is robust to the kidnapped robot problem. Many topological mapping methods build a topological map by clustering visual observations to places.
However, due to perceptual aliasing, observations from different places may be mapped to the same place representative in the topological map. A main contribution of this thesis is a novel approach for dealing with the perceptual aliasing problem in topological mapping. We propose to incorporate neighbourhood relations for disambiguating places which otherwise are indistinguishable. We present a constraint-based stochastic local search method which integrates the approach for place disambiguation in order to induce a topological map. Experiments show that the proposed method is capable of mapping environments with a high degree of perceptual aliasing, and that a small map is found quickly. Moreover, the method of using neighbourhood information for place disambiguation is integrated into a framework for topological off-line simultaneous localisation and mapping which does not require an initial categorisation of visual observations. Experiments on an indoor data set demonstrate the suitability of our method to reliably localise the robot while building a topological map.
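The colour-histogram place signature described in this abstract can be sketched as follows; the bin count, joint-RGB binning, and histogram-intersection similarity here are illustrative assumptions, not necessarily the thesis's exact choices:

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Build a normalised joint RGB histogram as a place signature.

    `image` is an (H, W, 3) uint8 array; the result is a flat
    probability vector of length bins**3.
    """
    hist, _ = np.histogramdd(
        image.reshape(-1, 3).astype(float),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    return hist.ravel() / hist.sum()

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions,
    approaching 0.0 for disjoint ones."""
    return float(np.minimum(h1, h2).sum())
```

Because such histograms vary smoothly as the camera sweeps a scene, consecutive frames from the same region score high similarity, which is what lets a "place" stand for a region rather than a single pose.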
Abstract:
Five Minutes featured in the author's exhibition Lightsite, which toured Western Australian galleries from February 2006 to November 2007. It is a five-minute-long exposure photographic image captured inside a purpose-built, room-sized pinhole camera which is demountable and does not have a floor. Five Minutes depicts an inverted image of the outside environment where two botanists stand. The light from this exterior passes through the pinhole camera's aperture and illuminates the internal scene, which includes the ground of the site lying inside the room along with another two botanists standing inside. The image evokes the connectivity the botanists have with this landscape, a site which they are presently revegetating with endemic and indigenous plant species. By illuminating the botanists only with light projected from the landscape itself (through the agency of the pinhole camera's aperture) the inhabitant and their landscape are depicted as inseparable subjects.
Abstract:
Self Landscape is an introductory image of the author's exhibition Lightsite, which toured Western Australian galleries from February 2006 to November 2007. It is a five-minute-long exposure photographic image captured inside a purpose-built, room-sized pinhole camera which is demountable and does not have a floor. Self Landscape depicts an inverted image of the outside environment - a revegetated farmland in the Great Southern Region of Western Australia. The light from the exterior landscape passes through the pinhole camera's aperture and illuminates the internal scene which includes that part of the landscape upon which the floorless room is erected, along with the author who is standing inside. The text "Hotspot" appears where the author has scribed that word with a torch during the long exposure. The image evokes the temporality of light. Here, light itself is portrayed as the primary medium through which we both perceive and describe landscape. It is through the agency of light that we construct our connectivity to landscape.
Abstract:
The Light of Gairdner is a key work of the author's exhibition Lightsite, which toured Western Australian galleries from February 2006 to November 2007. It is a five-minute-long exposure photographic image captured inside a purpose-built, room-sized pinhole camera which is demountable and does not have a floor. The Light of Gairdner depicts two brothers, Allan and Harvey Lynch, during their barley harvest. Allan is standing outside the pinhole camera-room in the barley field. The light from this exterior landscape is 'projected' inside the camera-room and illuminates the interior scene which includes that part of the barley field upon which the floorless room is erected, along with Harvey who is standing inside. The image evokes the temporality of light. Here, light itself is portrayed as the primary medium through which we both perceive and describe landscape. It is through the agency of light that we construct our connectivity to landscape. The exhibition/catalogue statement reads: "Harvey and Allan Lynch lost their father Frank, in a crop dusting crash five years ago. They now manage their dad's 6000 acre farm and are photographed here at the time of their barley harvest."
Abstract:
The Light of Gairdner 2 is a key work of the author's exhibition Lightsite, which toured Western Australian galleries from February 2006 to November 2007. It is a five-minute-long exposure photographic image captured inside a purpose-built, room-sized pinhole camera which is demountable and does not have a floor. The Light of Gairdner 2 depicts two brothers, Allan and Harvey Lynch, during their barley harvest. Allan is standing outside the pinhole camera-room in the barley field with their new 'CASE' harvester. The light from this exterior landscape is 'projected' inside the camera-room and illuminates the interior scene which includes that part of the barley field upon which the floorless room is erected, along with Harvey who is standing inside. The image evokes the temporality of light. Here, light itself is portrayed as the primary medium through which we both perceive and describe landscape. In this way it is through the agency of light that we construct our connectivity to landscape. The exhibition/catalogue statement reads: "Harvey and Allan Lynch lost their father Frank, in a crop dusting crash five years ago. They now manage their dad's 6000 acre farm and are photographed here at the time of their barley harvest. The Light of Gairdner 2 features their new 'CASE' harvester, and in the distance, the grain silos of Gairdner."
Abstract:
Barbara at Content Too is a key work of the author's exhibition Lightsite, which toured Western Australian galleries from February 2006 to November 2007. It is a five-minute-long exposure photographic image captured inside a purpose-built, room-sized pinhole camera which is demountable and does not have a floor. The work depicts amateur botanist Barbara Miller-Hornsey conducting a botanical survey. The pinhole camera-room is sited within the biodiverse heath landscape at Bremer Bay in the Great Southern Region of Western Australia. The light from this exterior landscape is 'projected' inside the camera-room and illuminates the interior scene which includes that part of the heath upon which the floorless room is erected, along with Barbara who is kneeling inside. The image evokes the temporality of light. Here, light itself is portrayed as the primary medium through which we both perceive and describe landscape. In this way it is through the agency of light that we construct our connectivity to landscape.
Abstract:
Jack's Bay (the architecturalisation of memory) is a key work of the author's exhibition Lightsite, which toured Western Australian galleries from February 2006 to November 2007. It is a five-minute-long exposure photographic image captured inside a purpose-built, room-sized pinhole camera which is demountable and does not have a floor. The work depicts octogenarian Jack Morris, who for forty years held the professional salmon fishing license in the hamlet of Bremer Bay, on the SE coast of Western Australia. The pinhole camera-room is sited within sand dunes near Jack's now-demolished beachside camp. Three generations of Jack's descendants stand outside the room - from his daughter to his great-grandchildren. The light from this exterior landscape is 'projected' inside the camera-room and illuminates the interior scene which includes that part of the sand dune upon which the floorless room is erected, along with Jack who is sitting inside. The image evokes the temporality of light. Here, light itself is portrayed as the primary medium through which we both perceive and describe landscape. In this way it is through the agency of light that we construct our connectivity to landscape.
Abstract:
Working Sheep on 'Glen Shiel' is a key work of the author's exhibition Lightsite, which toured Western Australian galleries from February 2006 to November 2007. It is a five-minute-long exposure photographic image captured inside a purpose-built, room-sized pinhole camera which is demountable and does not have a floor. The work depicts octogenarian Ian Mangan, who is one of both the first and the last soldier settler farmers in the Gairdner-Jerramungup district in the Great Southern Region of Western Australia. Ian, his son Stuart, and grandson Jacob are preparing the last mob of sheep for sale before they move off their farm. The pinhole camera-room is sited amongst the sheep in the farm's sheep yards. Stuart and Jacob are depicted here standing amongst the sheep. The light from this exterior landscape is 'projected' inside the camera-room and illuminates the interior scene which includes that part of the sheep yards upon which the floorless room is erected, along with Ian who is standing motionless inside. The image evokes the temporality of light. Here, light itself is portrayed as the primary medium through which we both perceive and describe landscape. In this way it is through the agency of light that we construct our connectivity to landscape.
Abstract:
Lightsite is a room-sized pinhole camera which has been transported to a number of locations throughout the south of Western Australia. Lightsite was conceived as a way to celebrate a variety of individuals and their families, who have a very strong sense of connection to the landscapes in this region. The project documented herein was completed for “Hotspot”, a cultural project initiated by Mix Artists Incorporated for the 2006 Perth International Arts Festival. The photographic works illustrated within were exhibited in regional centres throughout Western Australia during 2006, continuing into 2007.
Abstract:
Position estimation for planetary rovers has typically been limited to odometry based on proprioceptive measurements such as the integration of distance traveled and measurement of heading change. Here we present and compare two methods of online visual odometry suited for planetary rovers. Both methods use omnidirectional imagery to estimate motion of the rover. One method is based on robust estimation of optical flow and subsequent integration of the flow. The second method is a full structure-from-motion solution. To make the comparison meaningful we use the same set of raw corresponding visual features for each method. The dataset is a sequence of 2000 images taken during a field experiment in the Atacama Desert, for which high-resolution GPS ground truth is available.
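The "subsequent integration of the flow" in the first method amounts to dead-reckoning per-frame motion increments into an accumulated pose; a minimal planar sketch (the body-frame increment convention and names are assumptions for illustration, not the paper's formulation):

```python
import numpy as np

def integrate_planar_odometry(motions):
    """Dead-reckon a planar pose from per-frame motion estimates.

    `motions` is a sequence of (dx, dy, dtheta) increments expressed
    in the rover's body frame, e.g. recovered from robustly estimated
    optical flow between consecutive omnidirectional images.
    Returns the accumulated world-frame pose (x, y, theta)."""
    x = y = theta = 0.0
    for dx, dy, dtheta in motions:
        # Rotate the body-frame increment into the world frame,
        # then accumulate position and heading.
        x += dx * np.cos(theta) - dy * np.sin(theta)
        y += dx * np.sin(theta) + dy * np.cos(theta)
        theta += dtheta
    return x, y, theta
```

Because each increment carries its own estimation error, heading drift compounds over the trajectory; this is exactly why the paper benchmarks both methods against high-resolution GPS ground truth.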
Abstract:
Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of vision sensors (as opposed to radar and TCAS). This paper describes the development and evaluation of a real-time vision-based collision detection system suitable for fixed-wing aerial robotics. Using two fixed-wing UAVs to recreate various collision-course scenarios, we were able to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. This type of image data is extremely scarce and was invaluable in evaluating the detection performance of two candidate target detection approaches. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400m to about 900m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to advance warning of between 8-10 seconds ahead of impact, which approaches the 12.5 second response time recommended for human pilots. We overcame the challenge of achieving real-time computational speeds by exploiting the parallel processing architectures of graphics processing units found on commercial off-the-shelf graphics devices. Our chosen GPU device, suitable for integration onto UAV platforms, can be expected to handle real-time processing of 1024 by 768 pixel image frames at a rate of approximately 30Hz. Flight trials using manned Cessna aircraft, where all processing is performed onboard, will be conducted in the near future, followed by further experiments with fully autonomous UAV platforms.
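The link between detection range and advance warning quoted above is a straight-line time-to-impact calculation; a sketch of that arithmetic (the closing speeds below are illustrative values chosen to reproduce the quoted 8-10 second figures, not numbers from the paper):

```python
def warning_time(detection_range_m, closing_speed_mps):
    """Seconds of advance warning at detection, assuming a constant
    closing speed along a head-on collision course (a simplifying
    assumption; real trajectories and speeds vary)."""
    return detection_range_m / closing_speed_mps

# Illustrative: 400 m detected at 50 m/s closing -> 8 s of warning;
# 900 m detected at 90 m/s closing -> 10 s of warning.
print(warning_time(400.0, 50.0), warning_time(900.0, 90.0))
```

Both figures fall short of the 12.5-second human-pilot response time the abstract cites, which is why detection range (and hence sensor resolution and algorithm sensitivity) is the critical design variable.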