185 results for Cameras.
Abstract:
In this paper we propose a method to generate a large-scale and accurate dense 3D semantic map of street scenes. A dense 3D semantic model of the environment can significantly improve a number of robotic applications such as autonomous driving, navigation or localisation. Instead of using offline-trained classifiers for semantic segmentation, our approach employs a data-driven, nonparametric method to parse scenes which scales easily to large environments and generalises to different scenes. We use stereo image pairs collected from cameras mounted on a moving car to produce dense depth maps, which are combined into a global 3D reconstruction using camera poses from stereo visual odometry. Simultaneously, 2D automatic semantic segmentation using a nonparametric scene parsing method is fused into the 3D model. Furthermore, the resulting 3D semantic model is improved by taking moving objects in the scene into account. We demonstrate our method on the publicly available KITTI dataset and evaluate its performance against manually generated ground truth.
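A minimal sketch of the fusion step described in this abstract, under stated assumptions: the variable names, voxel size and per-voxel label voting are illustrative, not the authors' implementation. Depth maps are back-projected with the camera intrinsics `K`, transformed into the world frame with the visual-odometry pose `T_wc`, and the 2D semantic labels vote into a voxel grid.

```python
# Illustrative fusion sketch; K, T_wc and the frame loop are assumptions.
import numpy as np
from collections import defaultdict

VOXEL = 0.1  # voxel edge length in metres (illustrative)

def fuse_frame(label_votes, depth, labels, K, T_wc):
    """Accumulate per-voxel semantic label votes for one stereo frame."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    valid = z > 0
    # Back-project pixels to camera coordinates, then into the world frame.
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts_c = np.stack([x, y, z, np.ones_like(z)])[:, valid]
    pts_w = (T_wc @ pts_c)[:3].T
    for p, lab in zip(pts_w, labels.ravel()[valid]):
        key = tuple(np.floor(p / VOXEL).astype(int))
        label_votes[key][lab] += 1  # simple per-voxel label voting

votes = defaultdict(lambda: defaultdict(int))
# for depth, labels, T_wc in frames: fuse_frame(votes, depth, labels, K, T_wc)
# semantic_map = {vox: max(c, key=c.get) for vox, c in votes.items()}
```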
Abstract:
Camera trapping is a scientific survey technique that involves placing heat- and motion-sensing automatically triggered cameras in an ecosystem to record images of animals for the purpose of studying wildlife. As the technology continues to advance in sophistication, the use of camera trapping is becoming more widespread, and it is a crucial tool in the study of, and attempts to preserve, various species of animals, particularly those that are internationally endangered. However, whatever their value as an ecological device, camera traps also create a new risk of incidentally and accidentally capturing images of humans who venture into the area under surveillance. This article examines the current legal position in Australia in relation to such unintended invasions of privacy. It considers the current patchwork of statute and common law that may provide a remedy in such circumstances. It also discusses the position that may prevail should the recommendations of either the Australian Law Reform Commission and/or the New South Wales Law Reform Commission be adopted and a statutory cause of action protecting personal privacy be enacted.
Abstract:
Collisions between pedestrians and vehicles continue to be a major problem throughout the world. Pedestrians trying to cross roads and railway tracks without any caution are often highly susceptible to collisions with vehicles and trains. Continuous financial, human and other losses have prompted transport-related organizations to come up with various solutions addressing this issue. However, the quest for new and significant improvements in this area is still ongoing. This work addresses the issue by building a general framework that uses computer vision techniques to automatically monitor pedestrian movements in such high-risk areas, enabling better analysis of activity and the creation of future alerting strategies. As a result of rapid development in the electronics and semiconductor industry, there is extensive deployment of CCTV cameras in public places to capture video footage, which can then be used to analyse crowd activities in those places. This work seeks to identify abnormal behaviour of individuals in video footage. We propose using a Semi-2D Hidden Markov Model (HMM), a Full-2D HMM and a Spatial HMM to model the normal activities of people; the outliers of the model (i.e. those observations with insufficient likelihood) are identified as abnormal activities. Location features, flow features and optical flow textures are used as the features for the model. The proposed approaches are evaluated using the publicly available UCSD datasets, and we demonstrate improved performance using the Semi-2D Hidden Markov Model compared to other state-of-the-art methods. Further, we illustrate how the proposed methods can be applied to detect anomalous events at rail level crossings.
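As a hedged illustration of the outlier idea only (not the paper's Semi-2D HMM): a standard Gaussian HMM from the hmmlearn package stands in, and `X_train`, the window length and the percentile threshold are assumptions.

```python
# Sketch: flag low-likelihood windows under an HMM trained on normal activity.
import numpy as np
from hmmlearn import hmm

# X_train: (N, D) features (location, flow, optical-flow texture) taken
# from footage assumed to contain only normal behaviour.
model = hmm.GaussianHMM(n_components=8, covariance_type="diag", n_iter=50)
model.fit(X_train)

def window_scores(X, w=15):
    # Average log-likelihood per sliding window of w frames.
    return np.array([model.score(X[i:i + w]) / w
                     for i in range(len(X) - w + 1)])

# Threshold chosen from training likelihoods (assumed heuristic): windows
# with insufficient likelihood are flagged as abnormal activity.
threshold = np.percentile(window_scores(X_train), 1)
is_abnormal = window_scores(X_test) < threshold
```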
Abstract:
In this paper, we describe a method to represent and discover adversarial group behavior in a continuous domain. In comparison to other types of behavior, adversarial behavior is heavily structured, as the location of a player (or agent) depends both on their teammates and adversaries, in addition to the tactics or strategies of the team. We present a method which can exploit this relationship through the use of a spatiotemporal basis model. As players constantly change roles during a match, we show that employing a "role-based" representation instead of one based on player "identity" can best exploit the playing structure. As vision-based systems currently do not provide perfect detection/tracking (e.g. missed or false detections), we show that our compact representation can effectively "denoise" erroneous detections as well as enable temporal analysis, which was previously prohibitive due to the dimensionality of the signal. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras, evaluated our approach on approximately 200,000 frames of data from a state-of-the-art real-time player detector, and compared the results to manually labelled data.
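A small sketch of the role-based idea under stated assumptions: detections are assigned to roles by solving an assignment problem against a mean formation (a standard Hungarian-algorithm step, not necessarily the authors' method), and noisy frames are denoised by projection onto a low-dimensional basis assumed to be learned beforehand.

```python
# Role assignment + subspace denoising; formation, basis and mean are
# assumed to be learned from training data.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_roles(detections, formation):
    """detections, formation: (P, 2) player positions; returns detections
    reordered so that index i corresponds to role i."""
    cost = np.linalg.norm(detections[:, None] - formation[None], axis=2)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    ordered = np.empty_like(detections)
    ordered[cols] = detections[rows]
    return ordered

def denoise(frame_vec, basis, mean):
    """Project a role-ordered frame (flattened positions) onto a learned
    low-dimensional basis and reconstruct; suppresses erroneous detections."""
    coeff = basis.T @ (frame_vec - mean)
    return mean + basis @ coeff
```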
Abstract:
The task of person re-identification involves recognising an individual, after an initial observation, at different locations and later times across a network of cameras. Traditionally, this task has been performed by first extracting appearance features of an individual and then matching these features to the previous observation. However, identifying an individual based solely on appearance can be ambiguous, particularly when people wear similar clothing (e.g. uniforms in sporting and school settings). The task is made more difficult when the resolution of the input image is small, as is typically the case in multi-camera networks. To circumvent these issues, we need to use other contextual cues. In this paper, we use "group" information as our contextual feature to aid the re-identification of a person, motivated by the fact that people generally move together as a collective group. To encode group context, we learn a linear mapping function to assign each person to a "role" or position within the group structure. We then combine the appearance and group-context cues using a weighted summation. We demonstrate how this improves the performance of person re-identification in a sports environment over appearance-based features.
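A hedged sketch of the fusion step described above; the distance-based score functions and the weight `alpha` are illustrative assumptions rather than the paper's learned values.

```python
# Weighted-summation fusion of appearance and group-context cues.
import numpy as np

def match_score(app_a, app_b, role_a, role_b, alpha=0.7):
    s_app = -np.linalg.norm(app_a - app_b)        # appearance similarity
    s_grp = -np.linalg.norm(role_a - role_b)      # group-context similarity
    return alpha * s_app + (1.0 - alpha) * s_grp  # weighted summation

def reidentify(query_app, query_role, gallery):
    """gallery: iterable of (person_id, appearance, role) tuples."""
    best = max(gallery, key=lambda g: match_score(query_app, g[1],
                                                  query_role, g[2]))
    return best[0]
```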
Abstract:
Automated crowd counting has become an active field of computer vision research in recent years. Existing approaches are scene-specific, as they are designed to operate in the single camera viewpoint that was used to train the system. Real-world camera networks often span multiple viewpoints within a facility, including many regions of overlap. This paper proposes a novel scene-invariant crowd counting algorithm that is designed to operate across multiple cameras. The approach uses camera calibration to normalise features between viewpoints and to compensate for regions of overlap. This compensation is performed by constructing an 'overlap map' which provides a measure of how much an object at one location is visible within other viewpoints. An investigation into the suitability of various feature types and regression models for scene-invariant crowd counting is also conducted. The features investigated include object size, shape, edges and keypoints. The regression models evaluated include neural networks, K-nearest neighbours, linear regression and Gaussian process regression. Our experiments demonstrate that accurate crowd counting was achieved across seven benchmark datasets, with optimal performance observed when all features were used and when Gaussian process regression was used. The combination of scene invariance and multi-camera crowd counting is evaluated by training the system on footage obtained from the QUT camera network and testing it on three cameras from the PETS 2009 database. Highly accurate crowd counting was observed, with a mean relative error of less than 10%. Our approach enables a pre-trained system to be deployed in a new environment without any additional training, bringing the field one step closer toward a 'plug and play' system.
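A minimal sketch of the counting pipeline, assuming pre-computed calibration-normalised features `X_train`/`y_train` and a per-camera overlap weighting derived from the overlap map; scikit-learn's Gaussian process regressor stands in for the paper's model.

```python
# Scene-invariant counting sketch with overlap-map weighting.
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# X_train: (N, D) calibration-normalised features (size, shape, edge,
# keypoint statistics); y_train: (N,) ground-truth counts.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
gp.fit(X_train, y_train)

def network_count(features_per_camera, overlap_weight):
    # overlap_weight[c] in (0, 1]: fraction of camera c's view not better
    # covered by another viewpoint, derived from the overlap map.
    counts = [gp.predict(f[None, :])[0] for f in features_per_camera]
    return sum(w * c for w, c in zip(overlap_weight, counts))
```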
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs shows great potential for growth, with applications spanning a vast practical domain such as security surveillance and health care, it also suffers from tremendous constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of determining optimal camera configurations. Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images captured by the camera, provided that the axis of rotation between the two images passes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground-plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image-plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions. The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc. of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met. To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
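A minimal sketch of the robot-assisted homography calibration described in the abstract, assuming correspondences `img_pts`/`world_pts` collected as the robot broadcasts its position; OpenCV's RANSAC homography estimator stands in for the dissertation's method.

```python
# Image-plane -> ground-plane homography from robot correspondences.
import numpy as np
import cv2

# img_pts:   (N, 2) float32 robot detections on a camera's image plane
# world_pts: (N, 2) float32 positions broadcast by the robot (global frame)
H, inliers = cv2.findHomography(img_pts, world_pts, cv2.RANSAC, 5.0)

def localise(u, v):
    """Map an image-plane detection to global ground-plane coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]
```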
Abstract:
Highly sensitive infrared cameras can produce high-resolution diagnostic images of the temperature and vascular changes of breasts. Wavelet-transform-based features are suitable for extracting the texture-difference information of these images due to their scale-space decomposition. The objective of this study is to investigate the potential of extracted features for differentiating between breast lesions by comparing the two corresponding pectoral regions of two breast thermograms. The pectoral regions of breasts are important because nearly 50% of all breast cancers are located in this region. In this study, the pectoral region of the left breast is selected and the corresponding pectoral region of the right breast is identified. Texture features based on first- and second-order statistics are extracted from wavelet-decomposed images of the pectoral regions of the two breast thermograms. Principal component analysis is used to reduce dimensionality and an AdaBoost classifier is used to evaluate classification performance. A number of different wavelet features are compared, and it is shown that complex non-separable 2D discrete wavelet transform features perform better than their real separable counterparts.
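An illustrative pipeline only: PyWavelets' real separable transform stands in here (the paper's complex non-separable transform would replace it), with sub-band statistics reduced by PCA and classified by AdaBoost; the wavelet name, level and component count are assumptions.

```python
# Wavelet sub-band statistics -> PCA -> AdaBoost (illustrative pipeline).
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline

def wavelet_features(region, wavelet="db4", level=3):
    """First- and second-order statistics of each detail sub-band."""
    coeffs = pywt.wavedec2(region, wavelet, level=level)
    feats = []
    for detail in coeffs[1:]:
        for band in detail:  # horizontal, vertical, diagonal sub-bands
            feats += [band.mean(), band.std(),
                      np.abs(band).mean(), (band ** 2).mean()]
    return np.array(feats)

# X: stacked features of paired pectoral regions; y: lesion labels.
clf = make_pipeline(PCA(n_components=10), AdaBoostClassifier())
# clf.fit(X, y); clf.predict(X_new)
```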
Abstract:
Safety concerns in the operation of autonomous aerial systems require that safe-landing protocols be followed in situations where a mission must be aborted due to mechanical or other failure. On-board cameras provide information that can be used to determine potential landing sites, which are continually updated and ranked to prevent injury and minimize damage. Pulse Coupled Neural Networks (PCNNs) have been used for the detection of features in images that assist in the classification of vegetation and can be used to minimize damage to the aerial vehicle. However, a significant drawback in the use of PCNNs is that they are computationally expensive and have been more suited to off-line applications on conventional computing architectures. As heterogeneous computing architectures become more common, an OpenCL implementation of a PCNN feature generator is presented and its performance is compared across OpenCL kernels designed for CPU, GPU and FPGA platforms. This comparison examines the compute times required for network convergence on a variety of images obtained during unmanned aerial vehicle trials, to determine the feasibility of real-time feature detection.
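For reference, a compact NumPy version of the classic PCNN iteration that such OpenCL kernels parallelise per pixel; the parameter values and linking kernel below are illustrative, not those used in the trials.

```python
# Classic PCNN iteration in NumPy; parameters are illustrative.
import numpy as np
from scipy.ndimage import convolve

def pcnn(S, iters=30, a_f=0.1, a_l=1.0, a_t=0.5,
         beta=0.1, v_f=0.5, v_l=0.2, v_t=20.0):
    """S: normalised input image in [0, 1]; returns per-pixel firing counts,
    a texture-like feature map usable for vegetation classification."""
    k = np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]])  # link kernel
    F = np.zeros_like(S); L = np.zeros_like(S)
    Y = np.zeros_like(S); T = np.ones_like(S)
    fired = np.zeros_like(S)
    for _ in range(iters):
        F = np.exp(-a_f) * F + v_f * convolve(Y, k) + S  # feeding input
        L = np.exp(-a_l) * L + v_l * convolve(Y, k)      # linking input
        U = F * (1.0 + beta * L)                         # internal activity
        Y = (U > T).astype(S.dtype)                      # pulse output
        T = np.exp(-a_t) * T + v_t * Y                   # dynamic threshold
        fired += Y
    return fired
```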
Abstract:
In this paper we examine passenger actions and activities at the security screening points of Australian domestic and international airports. Our findings and analysis provide a more complete understanding of the current airport passenger security screening experience. The data in this paper comprise field studies conducted at two Australian airports, one domestic and one international. Video data was collected by cameras situated on either side of the security screening point. A total of one hundred and ninety-six passengers were observed. Two methods of analysis are used. First, the activities of passengers are coded and analysed to reveal the common activities at domestic and international security regimes and between quiet and busy periods. Second, observation of passenger activities is used to reveal uncommon aspects. The results show that passengers do more at security screening than being passively scanned. Passengers queue, unpack the required items from their bags and pockets, walk through the metal detector, re-pack, and occasionally return to be re-screened. For each of these activities, passengers must understand the procedures at the security screening point and must coordinate various actions and objects in time and space. Through this coordination, passengers are active participants in making the security checkpoint function: they are co-producers of the security screening process.
Abstract:
Novel computer vision techniques have been developed for automatic monitoring of crowded environments such as airports, railway stations and shopping malls. Using video feeds from multiple cameras, the techniques enable crowd counting, crowd flow monitoring, queue monitoring and abnormal event detection. The outcome of the research is useful for surveillance applications and for obtaining operational metrics to improve business efficiency.
Abstract:
In outdoor environments shadows are common. These typically strong visual features cause considerable change in the appearance of a place and therefore confound vision-based localisation approaches. In this paper we describe how to convert a colour image of a scene into a greyscale invariant image, where pixel values are a function of the underlying material properties rather than the lighting. We summarise the theory of shadow-invariant images and discuss the modelling and calibration issues that are important for non-ideal off-the-shelf colour cameras. We evaluate the technique with a commonly used robotic camera and an autonomous car operating in an outdoor environment, and show that it can outperform the use of ordinary greyscale images for the task of visual localisation.
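A short sketch of the invariant-image construction summarised above (the Finlayson-style log-chromaticity projection); `theta` is the camera-specific invariant angle that the calibration discussion concerns, and the epsilon guard is an implementation assumption.

```python
# Log-chromaticity shadow-invariant image (Finlayson-style projection).
import numpy as np

def shadow_invariant(rgb, theta):
    """rgb: HxWx3 linear float image; theta: calibrated invariant angle."""
    eps = 1e-6  # guard against log(0); implementation assumption
    r = np.log(rgb[..., 0] + eps) - np.log(rgb[..., 1] + eps)  # log(R/G)
    b = np.log(rgb[..., 2] + eps) - np.log(rgb[..., 1] + eps)  # log(B/G)
    # Project the 2D log-chromaticity onto the lighting-invariant direction:
    # pixel values then depend on material, not illumination.
    return r * np.cos(theta) + b * np.sin(theta)
```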
Abstract:
This paper presents a new multi-scale place recognition system inspired by the recent discovery of overlapping, multi-scale spatial maps stored in the rodent brain. By training a set of Support Vector Machines to recognize places at varying levels of spatial specificity, we are able to validate spatially specific place recognition hypotheses against broader place recognition hypotheses without sacrificing localization accuracy. We evaluate the system in a range of experiments using cameras mounted on a motorbike and a human in two different environments. At 100% precision, the multi-scale approach results in a 56% average improvement in recall rate across both datasets. We analyse the results and then discuss future work that may lead to improvements in both robotic mapping and our understanding of sensory processing and encoding in the mammalian brain.
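A hedged sketch of one way to realise the validation scheme, not necessarily the paper's design: scikit-learn's linear SVM stands in, and the scale set, label quantisation and `coarse_of` parent-label mapping are assumptions.

```python
# One SVM per spatial scale; a fine hypothesis must agree with coarser ones.
from sklearn.svm import LinearSVC

# scales: e.g. place labels quantised at 5 m, 50 m, 500 m (finest first).
# y_by_scale[s][i]: scale-s place label of training image i.
classifiers = {s: LinearSVC().fit(X_train, y_by_scale[s]) for s in scales}

def recognise(x, coarse_of):
    """coarse_of[s][fine_label]: the scale-s parent of a finest-scale place."""
    preds = {s: classifiers[s].predict(x[None, :])[0] for s in scales}
    fine = preds[scales[0]]
    consistent = all(coarse_of[s][fine] == preds[s] for s in scales[1:])
    return fine if consistent else None  # reject unvalidated hypotheses
```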
Abstract:
Cognitive impairment and physical disability are common in Parkinson's disease (PD). As a result, dietary intake can be difficult to measure. This study aimed to evaluate the use of a photographic dietary record (PhDR) in people with PD. During a 12-week nutrition intervention study, 19 individuals with PD kept 3-day PhDRs on three occasions using point-and-shoot digital cameras. Details of food items present in the PhDRs, and of those not photographed, were collected retrospectively during an interview. Following the first use of the PhDR method, the photographer completed a questionnaire (n=18). In addition, the quality of the PhDRs was evaluated at each time point. The person with PD was the sole photographer in 56% of cases, with the remainder taken by the carer or by a combination of the person with PD and the carer. The camera was rated as easy to use by 89%, keeping a PhDR was considered acceptable by 94%, and none would rather use a "pen and paper" method. Eighty-three percent felt confident to use the camera again to record intake. Of the photos captured (n=730), 89% were of adequate quality (items visible, in focus), while only 21% could be used alone (without interview information) to assess intake. Over the study, 22% of eating/drinking occasions were not photographed. PhDRs were considered an easy and acceptable method for measuring intake among individuals with PD and their carers. The majority of PhDRs were of adequate quality; however, in order to quantify intake, the interview was necessary to obtain sufficient detail and capture missing items.
Abstract:
Access to dietetic care is important in chronic disease management, and innovative technologies assist in this purpose. Photographic dietary records (PhDRs) using mobile phones or cameras are valid and convenient for patients. Innovations in providing dietary interventions via telephone and computer can also inform dietetic practice. Three studies are presented. A mobile phone method was validated by comparing energy intake (EI) to a weighed food record and to a measure of energy expenditure (EE) obtained using the doubly labelled water technique in 10 adults with type 2 (T2) diabetes. The level of agreement between mean (±sd) energy intake from the mobile phone method (8.2±1.7 MJ) and the weighed record (8.5±1.6 MJ) was high (p=0.392); however, EI/EE for both methods indicated similar levels of under-reporting (0.69 and 0.72). All subjects preferred the mobile phone over the weighed record. Nineteen individuals with Parkinson's disease kept 3-day PhDRs on three occasions using point-and-shoot digital cameras over a 12-week period. The camera was rated as easy to use by 89%, keeping a PhDR was considered acceptable by 94%, and none would rather use a "pen and paper" method. Eighty-three percent felt confident to use the camera again to record intake. An interactive, automated telephone system designed to coach people with T2 diabetes to adopt and maintain diabetes self-care behaviours, including nutrition, showed trends toward improvements in total fat, saturated fat and vegetable intake in the intervention group compared to control participants over 6 months. Innovative technologies are acceptable to patients with chronic conditions and can be incorporated into dietetic care.