953 results for Pushbroom camera


Relevance:

10.00%

Publisher:

Abstract:

In public venues, crowd size is a key indicator of crowd safety and stability. Crowding levels can be detected using holistic image features; however, this requires a large amount of training data to capture the wide variations in crowd distribution. If a crowd counting algorithm is to be deployed across a large number of cameras, such a large and burdensome training requirement is far from ideal. In this paper we propose an approach that uses local features to count the number of people in each foreground blob segment, so that the total crowd estimate is the sum of the group sizes. This results in an approach that is scalable to crowd volumes not seen in the training data and can be trained on a very small data set. As a local approach is used, the proposed algorithm can easily be used to estimate crowd density throughout different regions of the scene and in a multi-camera environment. A unique localised approach to ground truth annotation, which reduces the required training data, is also presented, as a localised approach to crowd counting has different training requirements to a holistic one. Testing on a large pedestrian database compares the proposed technique to existing holistic techniques and demonstrates improved accuracy, and superior performance when test conditions are unseen in the training set or a minimal training set is used.
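The summation idea described in the abstract can be sketched as follows. The linear regressor, its weights, and the feature set (blob area and perimeter) are illustrative assumptions, not the paper's actual model:

```python
# Sketch of local-feature crowd counting: the total estimate is the sum of
# per-blob group-size predictions. The linear model, weights and features
# (blob area, perimeter) are illustrative assumptions.
def estimate_blob_count(area, perimeter, w_area=0.002, w_perim=0.005):
    """Predict how many people a single foreground blob contains."""
    return max(1, round(w_area * area + w_perim * perimeter))

def estimate_crowd(blobs):
    """Total crowd estimate = sum of group sizes over all foreground blobs."""
    return sum(estimate_blob_count(a, p) for a, p in blobs)

# Three blobs: a lone pedestrian, a pair, and a small group.
print(estimate_crowd([(250, 70), (520, 120), (1400, 260)]))  # → 7
```

Because counting is done per blob, summing over the blobs inside any image region gives a regional density estimate, which is what makes the approach scalable across scenes and cameras.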

Relevance:

10.00%

Publisher:

Abstract:

To navigate successfully in a previously unexplored environment, a mobile robot must be able to estimate the spatial relationships of the objects of interest accurately. A Simultaneous Localization and Mapping (SLAM) system employs its sensors to incrementally build a map of its surroundings and to localize itself in the map simultaneously. The aim of this research project is to develop a SLAM system suitable for self-propelled household lawnmowers. The proposed bearing-only SLAM system requires only an omnidirectional camera and some inexpensive landmarks. The main advantage of an omnidirectional camera is the panoramic view of all the landmarks in the scene. Placing landmarks in a lawn field to define the working domain is much easier and more flexible than installing the perimeter wire required by existing autonomous lawnmowers. The common approach of existing bearing-only SLAM methods relies on a motion model for predicting the robot's pose and a sensor model for updating the pose. In the motion model, the error in the estimates of object positions accumulates, due mainly to wheel slippage. Quantifying the uncertainty of object positions accurately is a fundamental requirement. In bearing-only SLAM, the Probability Density Function (PDF) of landmark position should be uniform along the observed bearing. Existing methods that approximate the PDF with a Gaussian estimation do not satisfy this uniformity requirement. This thesis introduces both geometric and probabilistic methods to address the above problems. The main novel contributions of this thesis are: 1. A bearing-only SLAM method not requiring odometry. The proposed method relies solely on the sensor model (landmark bearings only) without relying on the motion model (odometry). The uncertainty of the estimated landmark positions depends on the vision error only, instead of the combination of both odometry and vision errors. 2. The transformation of the spatial uncertainty of objects.
This thesis introduces a novel method for translating the spatial uncertainty of objects estimated from a moving frame attached to the robot into the global frame attached to the static landmarks in the environment. 3. The characterization of an improved PDF for representing landmark position in bearing-only SLAM. The proposed PDF is expressed in polar coordinates, and the marginal probability on range is constrained to be uniform. Compared to the PDF estimated from a mixture of Gaussians, the PDF developed here has far fewer parameters and can be easily adopted in a probabilistic framework, such as a particle filtering system. The main advantages of our proposed bearing-only SLAM system are its lower production cost and flexibility of use. The proposed system can be adopted in other domestic robots as well, such as vacuum cleaners or robotic toys, where the terrain is essentially 2D.
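The polar-coordinate landmark PDF of contribution 3 can be illustrated with a simple particle sampler: the range marginal is uniform along the observed bearing, while the bearing itself carries only the vision (Gaussian) error. The range limits and bearing sigma below are illustrative assumptions:

```python
# Sketch of the thesis's landmark PDF in polar coordinates: uniform
# marginal over range along the observed bearing, Gaussian noise on the
# bearing itself. Range limits and bearing sigma are illustrative values.
import math
import random

def sample_landmark_particles(bearing, n=1000, r_min=0.5, r_max=10.0,
                              sigma_bearing=0.02, seed=0):
    rng = random.Random(seed)
    particles = []
    for _ in range(n):
        r = rng.uniform(r_min, r_max)          # uniform marginal on range
        b = rng.gauss(bearing, sigma_bearing)  # vision (bearing) error only
        particles.append((r * math.cos(b), r * math.sin(b)))
    return particles

pts = sample_landmark_particles(bearing=math.pi / 4)
```

Such a particle set plugs directly into a particle filtering framework, which is the integration path the abstract describes.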

Relevance:

10.00%

Publisher:

Abstract:

Exposure to particles emitted by cooking activities may be responsible for a variety of respiratory health effects. However, the relationship between these exposures and their subsequent effects on health cannot be evaluated without understanding the properties of the emitted aerosol or the main parameters that influence particle emissions during cooking. Whilst traffic-related emissions, stack emissions and ultrafine particle concentrations (UFP, diameter < 100 nm) in urban ambient air have been widely investigated for many years, indoor exposure to UFPs is a relatively new field, and in order to evaluate indoor UFP emissions accurately, it is vital to improve scientific understanding of the main parameters that influence particle number, surface area and mass emissions. The main purpose of this study was to characterise the particle emissions produced during grilling and frying as a function of the food, source, cooking temperature and type of oil. Emission factors, along with particle number concentrations and size distributions, were determined in the size range 0.006–20 µm using a Scanning Mobility Particle Sizer (SMPS) and an Aerodynamic Particle Sizer (APS). An infrared camera was used to measure the temperature field. Overall, increased emission factors were observed to be a function of increased cooking temperatures. Cooking fatty foods also produced higher particle emission factors than vegetables, mainly in terms of mass concentration, and particle emission factors also varied significantly according to the type of oil used.

Relevance:

10.00%

Publisher:

Abstract:

Surveillance networks are typically monitored by a few people, viewing several monitors displaying the camera feeds. It is then very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data, to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognise an event, people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, making it unnecessary for detection to be carried out separately, except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow for multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter.
A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, and the tracking system benefits from the particle filter's improved performance under the uncertain conditions arising from occlusion and noise. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually. Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore improve security in areas under surveillance.
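The predict / weight / resample cycle a particle filter performs each frame can be sketched minimally. The 1-D constant-position model and Gaussian likelihood below are simplifying assumptions; the SCF in the dissertation evaluates image features, not raw 1-D measurements:

```python
# Minimal bootstrap particle filter: predict with a motion model, weight
# by measurement likelihood, resample proportional to weight. The 1-D
# state and Gaussian likelihood are illustrative simplifications.
import math
import random

def particle_filter_step(particles, measurement, rng,
                         motion_noise=0.5, meas_noise=1.0):
    # Predict: diffuse each particle with the motion model.
    predicted = [p + rng.gauss(0, motion_noise) for p in particles]
    # Weight: likelihood of the measurement given each particle.
    weights = [math.exp(-0.5 * ((measurement - p) / meas_noise) ** 2)
               for p in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return rng.choices(predicted, weights=weights, k=len(particles))

rng = random.Random(1)
particles = [rng.uniform(-10, 10) for _ in range(500)]
for z in [2.0, 2.2, 2.1, 2.3]:   # noisy observations of a target near 2
    particles = particle_filter_step(particles, z, rng)
estimate = sum(particles) / len(particles)
```

After a few updates the particle cloud concentrates near the true position; the mean of the particles serves as the state estimate.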

Relevance:

10.00%

Publisher:

Abstract:

The relationship between multiple cameras viewing the same scene may be discovered automatically by finding corresponding points in the two views and then solving for the camera geometry. In camera networks with sparsely placed cameras, low-resolution cameras, or scenes with few distinguishable features, it may be difficult to find a sufficient number of reliable correspondences from which to compute geometry. This paper presents a method for extracting a larger number of correspondences from an initial set of putative correspondences without any knowledge of the scene or camera geometry. The method may be used to increase the number of correspondences and make geometry computations possible in cases where existing methods have produced insufficient correspondences.
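The paper's correspondence-growing method is not reproduced here, but the downstream geometry step it feeds, estimating the fundamental matrix from point correspondences, can be sketched with the standard normalised 8-point algorithm (Hartley). This is a generic sketch, not the paper's implementation:

```python
# Normalised 8-point algorithm: estimate the fundamental matrix F such
# that x2^T F x1 ~ 0 for corresponding points x1, x2 in two views.
import numpy as np

def normalise(pts):
    """Translate/scale points so the centroid is the origin and the
    mean distance from it is sqrt(2) (Hartley normalisation)."""
    centroid = pts.mean(axis=0)
    d = np.sqrt(((pts - centroid) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * centroid[0]],
                  [0, s, -s * centroid[1]],
                  [0, 0, 1.0]])
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ homog.T).T, T

def fundamental_8point(x1, x2):
    """x1, x2: (N, 2) arrays of corresponding points, N >= 8."""
    p1, T1 = normalise(x1)
    p2, T2 = normalise(x2)
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1]
                  for (u1, v1, _), (u2, v2, _) in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)          # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1                 # undo the normalisation
```

More correspondences, as the paper's method provides, make this least-squares estimate better conditioned and more robust to outliers.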

Relevance:

10.00%

Publisher:

Abstract:

Research has highlighted the relationship between vehicle speed and increased crash risk and severity. Evidence suggests that police speed enforcement, in particular speed camera operations, can be an effective tool for reducing traffic crashes. A quantitative survey of Queensland drivers (n = 852) was conducted to investigate the impact of police speed enforcement methods on self-reported speeding behaviour. Results indicate that visible enforcement was associated with significantly greater self-reported compliance than covert operations irrespective of the mobility of the approach, and the effects on behaviour were longer lasting. The mobility of operations appeared to be moderated by the visibility of the approach. Specifically, increased mobility was associated with increased reported compliance, but only for covert operations, and with increased longevity of reported compliance, but only for overt operations. The perceived effectiveness of various speed enforcement approaches is also analysed across a range of driving scenarios. Results are discussed in light of the small effect sizes. Recommendations for policy and future research are presented.

Relevance:

10.00%

Publisher:

Abstract:

Purpose: To investigate associations between the diurnal variation in a range of corneal parameters, including anterior and posterior corneal topography, and regional corneal thickness. ----- Methods: Fifteen subjects had their corneas measured using a rotating Scheimpflug camera (Pentacam) every 3-7 hours over a 24-hour period. Anterior and posterior corneal axial curvature, pachymetry and anterior chamber depth were analysed. The best-fitting corneal sphero-cylinder from the axial curvature, and the average corneal thickness for a series of different corneal regions, were calculated. Intraocular pressure and axial length were also measured at each measurement session. Repeated measures ANOVA were used to investigate diurnal change in these parameters. Analysis of covariance was used to examine associations between the measured ocular parameters. ----- Results: Significant diurnal variation was found to occur in both the anterior and posterior corneal curvature and in the regional corneal thickness. Flattening of the anterior corneal best sphere was observed at the early morning measurement (p < 0.0001). The posterior cornea also underwent a significant steepening (p < 0.0001) and change in astigmatism 90/180° at this time. A significant swelling of the cornea (p < 0.0001) was also found to occur immediately after waking. Highly significant associations were found between the diurnal variation in corneal thickness and the changes in corneal curvature. ----- Conclusions: Significant diurnal variation occurs in the regional thickness and the shape of the anterior and posterior cornea. The largest changes in the cornea were typically evident upon waking. The observed non-uniform regional corneal thickness changes resulted in a steepening of the posterior cornea and a flattening of the anterior cornea at this time.

Relevance:

10.00%

Publisher:

Abstract:

PURPOSE: To compare the repeatability within anterior corneal topography measurements and agreement between measurements with the Pentacam HR rotating Scheimpflug camera and with a previously validated Placido disk–based videokeratoscope (Medmont E300). ----- SETTING: Contact Lens and Visual Optics Laboratory, School of Optometry, Queensland University of Technology, Brisbane, Queensland, Australia. ----- METHODS: Normal eyes in 101 young adult subjects had corneal topography measured using the Scheimpflug camera (6 repeated measurements) and videokeratoscope (4 repeated measurements). The best-fitting axial power corneal spherocylinder was calculated and converted into power vectors. Corneal higher-order aberrations (HOAs) (up to the 8th Zernike order) were calculated using the corneal elevation data from each instrument. ----- RESULTS: Both instruments showed excellent repeatability for axial power spherocylinder measurements (repeatability coefficients <0.25 diopter; intraclass correlation coefficients >0.9) and good agreement for all power vectors. Agreement between the 2 instruments was closest when the mean of multiple measurements was used in analysis. For corneal HOAs, both instruments showed reasonable repeatability for most aberration terms and good correlation and agreement for many aberrations (eg, spherical aberration, coma, higher-order root mean square). For other aberrations (eg, trefoil and tetrafoil), the 2 instruments showed relatively poor agreement. ----- CONCLUSIONS: For normal corneas, the Scheimpflug system showed excellent repeatability and reasonable agreement with a previously validated videokeratoscope for the anterior corneal axial curvature best-fitting spherocylinder and several corneal HOAs. However, for certain aberrations with higher azimuthal frequencies, the Scheimpflug system had poor agreement with the videokeratoscope; thus, caution should be used when interpreting these corneal aberrations with the Scheimpflug system.
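The spherocylinder-to-power-vector conversion mentioned in the methods follows the standard Thibos notation (M, J0, J45). The formulas are standard; the sample input values below are illustrative only:

```python
# Convert a spherocylinder (sphere S, cylinder C in dioptres, axis in
# degrees) into Thibos power vectors M, J0, J45.
import math

def power_vectors(sphere, cyl, axis_deg):
    a = math.radians(axis_deg)
    M = sphere + cyl / 2.0                 # spherical equivalent
    J0 = -(cyl / 2.0) * math.cos(2 * a)    # 90/180-degree astigmatism
    J45 = -(cyl / 2.0) * math.sin(2 * a)   # oblique astigmatism
    return M, J0, J45

# Illustrative corneal power of 43 D with 1 D of with-the-rule cylinder.
M, J0, J45 = power_vectors(43.0, -1.0, 180)
```

Working in (M, J0, J45) turns the axis-dependent spherocylinder into three orthogonal components that can be averaged and compared statistically across instruments, which is why the study uses them.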

Relevance:

10.00%

Publisher:

Abstract:

The international focus on embracing daylighting for energy-efficient lighting purposes and the corporate sector's indulgence in the perception of workplace and work practice “transparency” have spurred an increase in highly glazed commercial buildings. This in turn has renewed issues of visual comfort and daylight-derived glare for occupants. In order to ascertain evidence, or predict risk, of these events, appraisals of these complex visual environments require detailed information on the luminances present in an occupant's field of view. Conventional luminance meters are an expensive and time-consuming method of achieving these results. To create a luminance map of an occupant's visual field using such a meter requires too many individual measurements to be a practical measurement technique. The application of digital cameras as luminance measurement devices has solved this problem. With high dynamic range imaging, a single digital image can be created to provide luminances on a pixel-by-pixel level within the broad field of view afforded by a fish-eye lens: virtually replicating an occupant's visual field and providing rapid yet detailed luminance information for the entire scene. With proper calibration, relatively inexpensive digital cameras can be successfully applied to the task of luminance measurement, placing them in the realm of tools that any lighting professional should own. This paper discusses how a digital camera can become a luminance measurement device and then presents an analysis of results obtained from post-occupancy measurements from building assessments conducted by the Mobile Architecture Built Environment Laboratory (MABEL) project. This discussion leads to the important realisation that the placement of such tools in the hands of lighting professionals internationally will provide new opportunities for the lighting community in terms of research on critical issues in lighting, such as daylight glare and visual quality and comfort.
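Once HDR imaging yields linear RGB values per pixel, per-pixel luminance is a weighted sum of the channels scaled by a calibration factor. The sketch below uses the Rec. 709 luminance weights and the 179 lm/W luminous-efficacy constant familiar from the Radiance system; the sample pixel values are illustrative, and real calibration compares camera output against a spot luminance meter:

```python
# Linear RGB -> luminance (cd/m^2). Rec. 709 channel weights; k_cal is a
# per-camera calibration factor (179 lm/W is the Radiance convention and
# an assumption here, not a measured value).
def pixel_luminance(r, g, b, k_cal=179.0):
    return k_cal * (0.2126 * r + 0.7152 * g + 0.0722 * b)

# Illustrative mid-grey pixel.
L = pixel_luminance(0.18, 0.18, 0.18)   # ~32 cd/m^2
```

Applying this per pixel across a fish-eye HDR capture yields exactly the scene-wide luminance map the paper describes.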

Relevance:

10.00%

Publisher:

Abstract:

This paper presents an implementation of an aircraft pose and motion estimator using visual systems as the principal sensor for controlling an Unmanned Aerial Vehicle (UAV) or as a redundant system for an Inertial Measurement Unit (IMU) and gyro sensors. First, we explore the applications of the unified theory for central catadioptric cameras for attitude and heading estimation, explaining how the skyline is projected on the catadioptric image and how it is segmented and used to calculate the UAV's attitude. Then we use appearance images to obtain a visual compass, and we calculate the relative rotation and heading of the aerial vehicle. Additionally, we show the use of a stereo system to calculate the aircraft height and to measure the UAV's motion. Finally, we present a visual tracking system based on Fuzzy controllers working in both a UAV and a camera pan and tilt platform. Every part is tested using the UAV COLIBRI platform to validate the different approaches, which include comparison of the estimated data with the inertial values measured onboard the helicopter platform and the validation of the tracking schemes on real flights.
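The appearance-based visual compass idea can be sketched as follows: the relative heading between two panoramic views is the circular column shift that best aligns them. Real systems operate on full catadioptric or panoramic images; the 1-D column-mean intensity profile and toy values here are simplifying assumptions:

```python
# Visual compass sketch: relative heading = circular shift (in columns)
# that minimises the sum-of-squares difference between two panoramic
# intensity profiles. The 1-D profile is an illustrative simplification.
def heading_shift(profile_a, profile_b):
    """Return the circular shift of profile_a that best matches profile_b."""
    n = len(profile_a)
    best_shift, best_err = 0, float("inf")
    for s in range(n):
        err = sum((profile_a[(i + s) % n] - profile_b[i]) ** 2
                  for i in range(n))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift   # heading change = best_shift * (360 / n) degrees

profile = [0, 1, 4, 9, 4, 1, 0, 0]        # toy 8-column panorama
rotated = profile[3:] + profile[:3]       # same scene after a 3-column turn
print(heading_shift(profile, rotated))    # → 3, i.e. 3 * 45 = 135 degrees
```

In practice the shift is interpolated to sub-column accuracy and the profiles come from successive camera frames, giving incremental heading.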

Relevance:

10.00%

Publisher:

Abstract:

Appearance-based mapping and localisation is especially challenging when separate processes of mapping and localisation occur at different times of day. The problem is exacerbated in the outdoors where continuous change in sun angle can drastically affect the appearance of a scene. We confront this challenge by fusing the probabilistic local feature based data association method of FAB-MAP with the pose cell filtering and experience mapping of RatSLAM. We evaluate the effectiveness of our amalgamation of methods using five datasets captured throughout the day from a single camera driven through a network of suburban streets. We show further results when the streets are re-visited three weeks later, and draw conclusions on the value of the system for lifelong mapping.

Relevance:

10.00%

Publisher:

Abstract:

Competent navigation in an environment is a major requirement for an autonomous mobile robot to accomplish its mission. Nowadays, many successful systems for navigating a mobile robot use an internal map which represents the environment in a detailed geometric manner. However, building, maintaining and using such environment maps for navigation is difficult because of perceptual aliasing and measurement noise. Moreover, geometric maps require the processing of huge amounts of data, which is computationally expensive. This thesis addresses the problem of vision-based topological mapping and localisation for mobile robot navigation. Topological maps are concise and graphical representations of environments that are scalable and amenable to symbolic manipulation. Thus, they are well-suited for basic robot navigation applications, and also provide a representational basis for the procedural and semantic information needed for higher-level robotic tasks. In order to make vision-based topological navigation suitable for inexpensive mobile robots for the mass market, we propose to characterise key places of the environment based on their visual appearance through colour histograms. The approach for representing places using visual appearance is based on the fact that colour histograms change slowly as the field of vision sweeps the scene when a robot moves through an environment. Hence, a place represents a region of the environment rather than a single position. We demonstrate in experiments using an indoor data set that a topological map in which places are characterised using visual appearance augmented with metric clues provides sufficient information to perform continuous metric localisation which is robust to the kidnapped robot problem. Many topological mapping methods build a topological map by clustering visual observations to places.
However, due to perceptual aliasing, observations from different places may be mapped to the same place representative in the topological map. A main contribution of this thesis is a novel approach for dealing with the perceptual aliasing problem in topological mapping. We propose to incorporate neighbourhood relations for disambiguating places which otherwise are indistinguishable. We present a constraint-based stochastic local search method which integrates the approach for place disambiguation in order to induce a topological map. Experiments show that the proposed method is capable of mapping environments with a high degree of perceptual aliasing, and that a small map is found quickly. Moreover, the method of using neighbourhood information for place disambiguation is integrated into a framework for topological off-line simultaneous localisation and mapping which does not require an initial categorisation of visual observations. Experiments on an indoor data set demonstrate the suitability of our method to reliably localise the robot while building a topological map.
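The colour-histogram place representation the thesis describes can be sketched with a standard histogram similarity measure. Histogram intersection is one common choice; the bin count and toy histograms below are illustrative assumptions:

```python
# Place recognition sketch: places are characterised by normalised colour
# histograms and compared by histogram intersection (similarity in [0, 1]).
# The 4-bin histograms below are toy data.
def histogram_intersection(h1, h2):
    """Similarity between two normalised histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def normalise(hist):
    total = sum(hist)
    return [v / total for v in hist]

place_a = normalise([40, 10, 5, 45])   # reference view of a place
place_b = normalise([38, 12, 6, 44])   # same place, slightly different view
place_c = normalise([5, 50, 40, 5])    # a different place
print(histogram_intersection(place_a, place_b) >
      histogram_intersection(place_a, place_c))   # → True
```

Because histograms change slowly as the camera sweeps a scene, nearby views of the same place stay similar under this measure, which is exactly why a place can represent a region rather than a single position, and also why perceptually aliased places need the neighbourhood-based disambiguation the thesis contributes.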

Relevance:

10.00%

Publisher:

Abstract:

Time Alone is the introductory image to the exhibition Lightsite, which toured Western Australian galleries from February 2006 to November 2007. It is a five-minute-long exposure photographic image captured inside an abandoned building which the author converted into a camera obscura. It depicts an inverted image of the outside environment and the text 'time', which is constructed by torch-light within the building interior during the photographic exposure. The image evokes isolation and the temporality of inhabitation within the remote farmlands of the Great Southern Region of Western Australia: the region of focus for all of the twelve works in Lightsite. Indeed, the owner of this now-abandoned house passed away and was not found for a week - bringing poignancy to the central theme of this creative work.

Relevance:

10.00%

Publisher:

Abstract:

Five Minutes featured in the author's exhibition Lightsite, which toured Western Australian galleries from February 2006 to November 2007. It is a five-minute-long exposure photographic image captured inside a purpose-built, room-sized pinhole camera which is demountable and does not have a floor. Five Minutes depicts an inverted image of the outside environment where two botanists stand. The light from this exterior passes through the pinhole camera's aperture and illuminates the internal scene, which includes the ground of the site that lies inside the room, along with another two botanists standing inside. The image evokes the connectivity the botanists have with this landscape, a site which they are presently revegetating with endemic and indigenous plant species. By illuminating the botanists only with light projected from the landscape itself (through the agency of the pinhole camera's aperture), the inhabitant and their landscape are depicted as inseparable subjects.

Relevance:

10.00%

Publisher:

Abstract:

Self Landscape is an introductory image of the author's exhibition Lightsite, which toured Western Australian galleries from February 2006 to November 2007. It is a five-minute-long exposure photographic image captured inside a purpose-built, room-sized pinhole camera which is demountable and does not have a floor. Self Landscape depicts an inverted image of the outside environment - a revegetated farmland in the Great Southern Region of Western Australia. The light from the exterior landscape passes through the pinhole camera's aperture and illuminates the internal scene, which includes that part of the landscape upon which the floorless room is erected, along with the author who is standing inside. The text "Hotspot" appears where the author has scribed that word with a torch during the long exposure. The image evokes the temporality of light. Here, light itself is portrayed as the primary medium through which we both perceive and describe landscape. It is through the agency of light that we construct our connectivity to landscape.