990 results for Visual surveillance
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Office of Driver and Pedestrian Research, Washington, D.C.
Abstract:
Inspired by the human visual cognition mechanism, this paper presents a scene classification method based on an improved standard model feature. Compared with state-of-the-art efforts in scene classification, the newly proposed method is more robust, more selective, and of lower complexity. These advantages are demonstrated by two sets of experiments on both our own database and standard public ones. Furthermore, occlusion and disorder problems in scene classification for video surveillance are also studied for the first time in this paper. © 2010 IEEE.
Abstract:
This project looks at the ways Northeastern Ontario citizens in rural communities regulate their private property through traditional and contemporary surveillance means. Through art and objects, this project allows viewers the opportunity to experience surveillance in rural areas through visual and creative ways that encourage interaction and critique. This project defines organic surveillance by looking at the ways ruralists in Markstay, Ontario practice surveillance and deterrence, practices influenced by characteristics of the land, risks, and other determining factors such as psychology, resourcefulness, sustainability, technology, and private property. Organic surveillance argues that surveillance and deterrence are prevalent far beyond datamining, GPS tracking, and social media. Surveillance and deterrence as methods of survival are found everywhere, even in the farthest, most “wild” and forested areas.
Abstract:
The police use both subjective (i.e. police staff) and automated (e.g. face recognition systems) methods for the completion of visual tasks (e.g. person identification). Image quality for police tasks has been defined as the image usefulness, or image suitability of the visual material to satisfy a visual task. Usefulness is not necessarily reduced by artefacts that degrade visual image quality (i.e. decrease fidelity), as long as those artefacts do not affect the information relevant to the task. The capture of useful information is affected by the unconstrained conditions commonly encountered by CCTV systems, such as variations in illumination and high compression levels. The main aim of this thesis is to investigate aspects of image quality and video compression that may affect the completion of police visual tasks/applications with respect to CCTV imagery. This is accomplished by investigating three specific police areas/tasks utilising: 1) the human visual system (HVS) for a face recognition task, 2) automated face recognition systems, and 3) automated human detection systems. These systems (HVS and automated) were assessed with defined scene content properties, and video compression, i.e. H.264/MPEG-4 AVC. The performance of imaging systems/processes (e.g. subjective investigations, performance of compression algorithms) is affected by scene content properties. No other investigation has been identified that takes scene content properties into consideration to the same extent. Results have shown that the HVS is more sensitive to compression effects than the automated systems. In automated face recognition systems, `mixed lightness' scenes were the most affected and `low lightness' scenes were the least affected by compression. In contrast, for the HVS in the face recognition task, `low lightness' scenes were the most affected and `medium lightness' scenes the least affected.
For the automated human detection systems, `close distance' and `run approach' are some of the most commonly affected scenes. Findings have the potential to broaden the methods used for testing imaging systems for security applications.
Abstract:
Objective
Pedestrian detection under video surveillance systems has always been a hot topic in computer vision research. These systems are widely used in train stations, airports, large commercial plazas, and other public places. However, pedestrian detection remains difficult because of complex backgrounds. Given its development in recent years, the visual attention mechanism has attracted increasing interest in object detection and tracking research, and previous studies have achieved substantial progress and breakthroughs. We propose a novel pedestrian detection method based on semantic features under the visual attention mechanism.
Method
The proposed semantic feature-based visual attention model is a spatial-temporal model that consists of two parts: the static visual attention model and the motion visual attention model. The static visual attention model in the spatial domain is constructed by combining bottom-up with top-down attention guidance. Based on the characteristics of pedestrians, the bottom-up visual attention model of Itti is improved by intensifying the orientation vectors of elementary visual features to make the visual saliency map suitable for pedestrian detection. In terms of pedestrian attributes, skin color is selected as a semantic feature for pedestrian detection. The regional and Gaussian models are adopted to construct the skin color model. Skin feature-based visual attention guidance is then proposed to complete the top-down process. The bottom-up and top-down visual attentions are linearly combined, using weights obtained from experiments, to construct the static visual attention model in the spatial domain. The spatial-temporal visual attention model is then constructed via the motion features in the temporal domain. Based on the static visual attention model in the spatial domain, the frame difference method is combined with optical flow to detect motion vectors. Filtering is applied to process the field of motion vectors. The saliency of motion vectors can be evaluated via motion entropy to make the selected motion feature more suitable for the spatial-temporal visual attention model.
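The three building blocks named in this abstract, a weighted linear combination of bottom-up and top-down saliency maps, frame-difference motion detection, and motion entropy, can be sketched in plain Python. This is an illustrative sketch only: the function names, weights, and threshold values are assumptions, not the authors' implementation.

```python
import math

def combine_saliency(bottom_up, top_down, w_bu=0.6, w_td=0.4):
    # Linear combination of bottom-up and top-down saliency maps
    # (2-D lists of floats); the paper tunes the weights experimentally,
    # these values are placeholders.
    return [[w_bu * b + w_td * t for b, t in zip(rb, rt)]
            for rb, rt in zip(bottom_up, top_down)]

def frame_difference(prev, curr, thresh=10):
    # Crude motion map: mark pixels whose absolute inter-frame
    # difference exceeds a threshold. A real system would combine
    # this with optical-flow vectors, as the abstract describes.
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(rp, rc)]
            for rp, rc in zip(prev, curr)]

def motion_entropy(magnitudes, bins=8):
    # Shannon entropy over a histogram of motion-vector magnitudes:
    # uniform motion gives low entropy, disordered motion gives high
    # entropy and is treated as more salient.
    if not magnitudes:
        return 0.0
    lo, hi = min(magnitudes), max(magnitudes)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for m in magnitudes:
        hist[min(int((m - lo) / width), bins - 1)] += 1
    n = len(magnitudes)
    return -sum((h / n) * math.log2(h / n) for h in hist if h)
```

A saliency map built this way would then drive focus-of-attention shifts toward candidate pedestrian regions.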
Result
Standard datasets and practical videos are selected for the experiments. The experiments are performed on a MATLAB R2012a platform. The experimental results show that our spatial-temporal visual attention model demonstrates favorable robustness under various scenes, including indoor train station surveillance videos and outdoor scenes with swaying leaves. Our proposed model outperforms the visual attention model of Itti, the graph-based visual saliency model, the phase spectrum of quaternion Fourier transform model, and the motion channel model of Liu in terms of pedestrian detection. The proposed model achieves a 93% accuracy rate on the test video.
Conclusion
This paper proposes a novel pedestrian detection method based on the visual attention mechanism. A spatial-temporal visual attention model that uses low-level and semantic features is proposed to calculate the saliency map. Based on this model, pedestrian targets can be detected through shifts in the focus of attention. The experimental results verify the effectiveness of the proposed attention model for detecting pedestrians.
Abstract:
Bactrocera tryoni (Froggatt) is Australia's major horticultural insect pest, yet monitoring females remains logistically difficult. We trialled the ‘Ladd trap’ as a potential female surveillance or monitoring tool. This trap design is used to trap and monitor fruit flies in countries other than Australia (e.g. the USA). The Ladd trap consists of a flat yellow panel (a traditional ‘sticky trap’) with a three-dimensional red sphere (= a fruit mimic) attached in the middle. We confirmed, in field-cage trials, that the combination of yellow panel and red sphere was more attractive to B. tryoni than the two components in isolation. In a second set of field-cage trials, we showed that it was the red-yellow contrast, rather than the three-dimensional effect, that was responsible for the trap's effectiveness, with B. tryoni equally attracted to a Ladd trap as to a two-dimensional yellow panel with a circular red centre. The sex ratio of catches was approximately even in the field-cage trials. In field trials, we tested the traditional red-sphere Ladd trap against traps for which the sphere was painted blue, black or yellow. The colour of the sphere did not significantly influence trap efficiency in these trials, despite the fact that the yellow-panel/yellow-sphere combination presented no colour contrast to the flies. In 6 weeks of field trials, over 1500 flies were caught, almost exactly two-thirds of them females. Overall, flies were more likely to be caught on the yellow panel than on the sphere; but, for the commercial Ladd trap, proportionally more females were caught on the red sphere versus the yellow panel than would be predicted from the relative surface area of each component, a result also seen in the field-cage trials. We determined that no modification of the trap was more effective than the commercially available Ladd trap and so consider that product suitable for more extensive field testing as a B. tryoni research and monitoring tool.