873 results for visual surveillance system
Abstract:
We have developed a digital image registration program for an MC 68000-based fundus image processing system (FIPS). Not only is FIPS capable of executing typical image processing algorithms in the spatial as well as the Fourier domain; the execution time for many operations has also been made much shorter by using a hybrid of C, Fortran and MC 68000 assembly languages.
Abstract:
This report provides an evaluation of the current available evidence-base for identification and surveillance of product-related injuries in children in Queensland. While the focal population was children in Queensland, the identification of information needs and data sources for product safety surveillance has applicability nationally for all age groups. The report firstly summarises the data needs of product safety regulators regarding product-related injury in children, describing the current sources of information informing product safety policy and practice, and documenting the priority product surveillance areas affecting children which have been a focus over recent years in Queensland. Health data sources in Queensland which have the potential to inform product safety surveillance initiatives were evaluated in terms of their ability to address the information needs of product safety regulators. Patterns in product-related injuries in children were analysed using routinely available health data to identify areas for future intervention, and the patterns in product-related injuries in children identified in health data were compared to those identified by product safety regulators. Recommendations were made for information system improvements and improved access to and utilisation of health data for more proactive approaches to product safety surveillance in the future.
Abstract:
Approximately 20 years have now passed since the NTSB issued its original recommendation to expedite the development, certification and production of low-cost proximity warning and conflict detection systems for general aviation [1]. While some systems are in place (TCAS [2]), "see-and-avoid" remains the primary means of separation between light aircraft sharing the national airspace. The requirement for a collision avoidance, or sense-and-avoid, capability onboard unmanned aircraft has been identified by leading government, industry and regulatory bodies as one of the most significant challenges facing the routine operation of unmanned aerial systems (UAS) in the national airspace system (NAS) [3, 4]. In this thesis, we propose and develop a novel image-based collision avoidance system to detect and avoid an upcoming conflict scenario (with an intruder) without first estimating or filtering range. The proposed collision avoidance system (CAS) uses the relative bearing and the angular area subtended by the intruder, estimated from an image, to form a test statistic. This test statistic is used in a thresholding technique to decide whether a conflict scenario is imminent. If deemed necessary, the system commands the aircraft to perform a manoeuvre based on the relative bearing and constrained by the CAS sensor field of view. Using a simulation environment in which the UAS is mathematically modelled and a flight controller developed, we show through Monte Carlo simulations that a Mid Air Collision (MAC) or Near Mid Air Collision (NMAC) risk ratio can be estimated. We also show the performance gain this system has over a simplified, bearings-only version. This performance gain is demonstrated in the form of a standard operating characteristic curve. Finally, it is shown that the proposed CAS performs at a level comparable to current manned aviation's equivalent level of safety (ELOS) expectations for Class E airspace. In some cases, the CAS may manoeuvre the owncraft when not strictly necessary, but this constitutes a more conservative, and therefore safer, flying procedure in most instances.
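The thresholding idea described in the abstract can be sketched as follows. This is an illustrative toy, not the thesis's actual statistic: the function names, the way bearing stability and angular-area growth are combined, and the threshold value are all assumptions chosen to show the classic constant-bearing, growing-area collision cue.

```python
# Hypothetical sketch: a test statistic built from relative bearing and
# subtended angular area, compared against a threshold to declare a conflict.

def conflict_statistic(bearing_history, area_history):
    """Combine bearing stability and angular-area growth into one score."""
    # A near-constant bearing with growing angular area suggests a collision
    # course (constant bearing, decreasing range).
    bearing_drift = max(bearing_history) - min(bearing_history)
    area_growth = area_history[-1] / max(area_history[0], 1e-9)
    return area_growth / (bearing_drift + 1e-3)

def conflict_imminent(bearing_history, area_history, threshold=50.0):
    """Thresholding step: decide whether a conflict scenario is imminent."""
    return conflict_statistic(bearing_history, area_history) > threshold
```

A stable bearing with a rapidly growing image footprint trips the threshold, while a sweeping bearing with a static footprint does not.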
Abstract:
Video surveillance systems using Closed Circuit Television (CCTV) cameras form one of the fastest growing areas in the field of security technologies. However, existing video surveillance systems are still not at a stage where they can be used for crime prevention. The systems rely heavily on human observers and are therefore limited by factors such as fatigue and monitoring capability over long periods of time. This work attempts to address these problems by proposing an automatic suspicious behaviour detection system which utilises contextual information. Contextual information is utilised via three main components: a context space model, a data stream clustering algorithm, and an inference algorithm. The use of contextual information is still limited in the domain of suspicious behaviour detection, yet it is nearly impossible to correctly understand human behaviour without considering the context in which it is observed. This work presents experiments using video feeds taken from the CAVIAR dataset and from a camera mounted on one of the buildings (Z-Block) at the Queensland University of Technology, Australia. These experiments show that, by exploiting contextual information, the proposed system is able to make more accurate detections, especially of behaviours which are suspicious only in some contexts while being normal in others. Moreover, this information gives critical feedback to the system designers to refine the system.
Abstract:
This paper investigates the use of visual artifacts to represent a complex adaptive system (CAS). The integrated master schedule (IMS) is one such visual, widely used in complex projects for scheduling, budgeting, and project management. In this paper, we discuss how the IMS outperforms traditional timelines and acts as a ‘multi-level and poly-temporal boundary object’ that visually represents the CAS. We report the findings of a case study project on the way the IMS mapped interactions, interdependencies, constraints and fractal patterns in a complex project. Finally, we discuss how the IMS was utilised as a complex boundary object by eliciting commitment, developing shared mental models, and facilitating negotiation through the layers of multiple interpretations from stakeholders.
Abstract:
In this paper, we present a new algorithm for boosting visual template recall performance through a process of visual expectation. Visual expectation dynamically modifies the recognition thresholds of learnt visual templates based on recently matched templates, improving the recall of sequences of familiar places while keeping precision high, without any feedback from a mapping backend. We demonstrate the performance benefits of visual expectation using two 17-kilometre datasets gathered in an outdoor environment at two times separated by three weeks. The visual expectation algorithm provides up to a 100% improvement in recall. We also combine the visual expectation algorithm with the RatSLAM SLAM system and show how the algorithm enables successful mapping.
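The threshold-modulation mechanism described above can be sketched roughly as follows. This is a minimal illustration under assumed parameters (base threshold, boost amount, neighbourhood window); the paper's actual update rule and values are not reproduced here.

```python
# Sketch of "visual expectation": when a template matches, the recognition
# thresholds of templates expected next in the learnt sequence are temporarily
# relaxed, boosting recall along familiar routes. Parameters are illustrative.

class VisualExpectation:
    def __init__(self, n_templates, base_threshold=0.8, boost=0.15, window=2):
        self.base = base_threshold
        self.thresholds = [base_threshold] * n_templates
        self.boost = boost
        self.window = window

    def on_match(self, i):
        # Relax thresholds of the templates likely to be seen next.
        for j in range(i + 1, min(i + 1 + self.window, len(self.thresholds))):
            self.thresholds[j] = self.base - self.boost

    def matches(self, i, similarity):
        ok = similarity >= self.thresholds[i]
        if ok:
            self.on_match(i)
            self.thresholds[i] = self.base  # reset the used template
        return ok
```

After template 0 matches, a borderline similarity of 0.7 on template 1 is accepted, whereas at the base threshold of 0.8 it would have been rejected.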
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability in displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and visual consistency of virtual environment content is carried entirely by developers and play-testers. While considerable research has been conducted in assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry change of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques.
Experiments were conducted on a game engine and other virtual worlds prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
Abstract:
Automatic detection of anomalous human behaviour is one of the goals of research into smart surveillance systems. Automatic detection addresses several human-factor issues underlying existing surveillance systems. To create such a detection system, contextual information needs to be considered, because context is required in order to correctly understand human behaviour. Unfortunately, the use of contextual information is still limited in automatic anomalous human behaviour detection approaches. This paper proposes a context space model which has two benefits: (a) it provides guidelines for system designers to select information which can be used to describe context; (b) it enables a system to distinguish between different contexts. A comparative analysis is conducted between a context-based system which employs the proposed context space model and a system implemented based on one of the existing approaches. The comparison is applied to a scenario constructed using video clips from the CAVIAR dataset. The results show that the context-based system outperforms the other system, because the context space model allows the system to consider knowledge learned from the relevant context only.
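The idea of keeping behaviour knowledge separated by context can be illustrated with a small sketch. The context dimensions (time of day, location) and the count-based anomaly rule below are hypothetical examples, not the paper's actual context space model.

```python
# Illustrative context space: observations are tagged with a point in a small
# discrete context space, and behaviour statistics are kept per context cell,
# so the same behaviour can be normal in one context and suspicious in another.
from collections import defaultdict

class ContextSpace:
    def __init__(self):
        # context cell -> behaviour -> observation count
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, context, behaviour):
        self.counts[context][behaviour] += 1

    def is_anomalous(self, context, behaviour, min_support=3):
        # A behaviour rarely seen in THIS context is flagged,
        # even if it is common in other contexts.
        return self.counts[context][behaviour] < min_support

cs = ContextSpace()
for _ in range(10):
    cs.observe(("daytime", "foyer"), "loitering")  # common by day
```

Here "loitering" is treated as normal in the daytime foyer context but flagged in a night-time context where it has never been observed.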
Abstract:
Rapid prototyping environments can speed up the research of visual control algorithms. We have designed and implemented a software framework for fast prototyping of visual control algorithms for Micro Aerial Vehicles (MAVs). We have applied a combination of a proxy-based network communication architecture and a custom Application Programming Interface. This allows multiple experimental configurations, such as drone swarms or distributed processing of a drone's video stream. Currently, the framework supports a low-cost MAV: the Parrot AR.Drone. Real tests have been performed on this platform, and the results show that the extra communication delay introduced by the framework is comparatively low, while the framework adds new functionality and flexibility to the selected drone. This implementation is open-source and can be downloaded from www.vision4uav.com/?q=VC4MAV-FW
Abstract:
The time-consuming and labour-intensive task of identifying individuals in surveillance video is often challenged by poor resolution and the sheer volume of stored video. Faces or identifying marks such as tattoos are often too coarse for direct matching by machine or human vision. Object tracking and super-resolution can be combined to facilitate the automated detection and enhancement of areas of interest. The object tracking process enables the automatic detection of people of interest, greatly reducing the amount of data for super-resolution. Smaller regions such as faces can also be tracked, and a number of instances of such regions can then be utilised to obtain a super-resolved version for matching. A performance improvement from super-resolution is demonstrated using a face verification task. It is shown that there is a consistent improvement of approximately 7% in verification accuracy, using both Eigenface and Elastic Bunch Graph Matching approaches for automatic face verification, starting from faces with an eye-to-eye distance of 14 pixels. Visual improvement in image fidelity from super-resolved images over low-resolution and interpolated images is demonstrated on a small database. Current research and future directions in this area are also summarised.
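The multi-frame idea behind this pipeline can be shown with a deliberately simplified sketch: several tracked, already-registered low-resolution observations of the same region are fused to suppress noise. Plain averaging is only a stand-in for true super-resolution reconstruction, and the function name is hypothetical.

```python
# Toy fusion step: average several registered low-resolution observations of
# the same face region. Real super-resolution also recovers sub-pixel detail;
# averaging here only illustrates why multiple tracked instances help.

def fuse_aligned_frames(frames):
    """frames: list of equal-length pixel lists, already registered."""
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]
```

In the full system, object tracking supplies the registered face instances and the fused result is what gets passed to the face verifier.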
Abstract:
Process-aware information systems, ranging from generic workflow systems to dedicated enterprise information systems, use work-lists to offer so-called work items to users. In real scenarios, users can be confronted with a very large number of work items that stem from multiple cases of different processes. In this jungle of work items, users may find it hard to choose the right item to work on next. The system cannot autonomously decide which is the right work item, since the decision is also dependent on conditions that are somehow outside the system. For instance, what is “best” for an organisation should be mediated with what is “best” for its employees. Current work-list handlers show work items as a simple sorted list and therefore do not provide much decision support for choosing the right work item. Since the work-list handler is the dominant interface between the system and its users, it is worthwhile to provide an intuitive graphical interface that uses contextual information about work items and users to provide suggestions about prioritisation of work items. This paper uses the so-called map metaphor to visualise work items and resources (e.g., users) in a sophisticated manner. Moreover, based on distance notions, the work-list handler can suggest the next work item by considering different perspectives. For example, urgent work items of a type that suits the user may be highlighted. The underlying map and distance notions may be of a geographical nature (e.g., a map of a city or office building), but may also be based on process designs, organisational structures, social networks, due dates, calendars, etc. The framework proposed in this paper is generic and can be applied to any process-aware information system. 
Moreover, in order to show its practical feasibility, the paper discusses a full-fledged implementation developed in the context of the open-source workflow environment YAWL, together with two real examples stemming from two very different scenarios. The results of an initial usability evaluation of the implementation are also presented, which provide a first indication of the validity of the approach.
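The distance-based suggestion mechanism described above can be sketched in a few lines. The Euclidean distance and the coordinates stand in for any of the distance notions the paper mentions (geographic, organisational, due-date based); the function and item names are illustrative, not part of the YAWL implementation.

```python
# Map-metaphor sketch: work items and users are placed in some metric space,
# and the work-list handler suggests the item closest to the user under the
# chosen distance notion.
import math

def suggest_next(user_pos, work_items):
    """work_items: dict mapping item id -> (x, y) position on the map."""
    def dist(p):
        return math.hypot(p[0] - user_pos[0], p[1] - user_pos[1])
    return min(work_items, key=lambda item: dist(work_items[item]))
```

Swapping in a different `dist` (e.g. hours until the deadline, or hops in the organisational chart) changes the perspective without changing the handler.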
Abstract:
In this paper we use a sequence-based visual localization algorithm to reveal surprising answers to the question of how much visual information is actually needed to conduct effective navigation. The algorithm actively searches for the best local image matches within a sliding window of short route segments, or 'sub-routes', and matches sub-routes by searching for coherent sequences of local image matches. In contrast to many existing techniques, the technique requires no pre-training or camera parameter calibration. We compare the algorithm's performance to the state-of-the-art FAB-MAP 2.0 algorithm on a 70 km benchmark dataset. Performance matches or exceeds the state-of-the-art feature-based localization technique using images as small as 4 pixels, fields of view reduced by a factor of 250, and pixel bit depths reduced to 2 bits. We present further results demonstrating the system localizing in an office environment with near 100% precision using two 7-bit Lego light sensors, as well as using 16- and 32-pixel images from a motorbike race and a mountain rally car stage. By demonstrating how little image information is required to achieve localization along a route, we hope to stimulate future 'low fidelity' approaches to visual navigation that complement probabilistic feature-based techniques.
Abstract:
Maternal deaths have been a critical issue for women living in rural and remote areas. The need to travel long distances, the shortage of primary care providers such as physicians, specialists and nurses, and the closing of small hospitals have been identified as problems in many rural areas. Some research has been undertaken, and a few techniques have been developed, to remotely measure the physiological condition of pregnant women through sophisticated ultrasound equipment. There are numerous ways to reduce maternal deaths, and an important step is to select the right approaches to achieving this reduction. One such approach is the provision of decision support systems in rural and remote areas. Decision support systems (DSSs) have already shown great potential in many health fields. This thesis proposes an ingenious decision support system (iDSS), based on a survey-instrument methodology and the identification, through statistical analysis, of significant variables to be used in the iDSS. A survey of pregnant women was undertaken, and a factorial experimental design was chosen to determine the sample size. Variables with good reliability in any one of the statistical techniques used, such as Chi-square, Cronbach's α and Classification Tree, were incorporated in the iDSS. The decision support system was developed with significant variables such as: Place of residence, Seeing the same doctor, Education, Tetanus injection, Baby weight, Previous baby born, Place of birth, Assisted delivery, Pregnancy parity, Doctor visits and Occupation. The iDSS was implemented with Visual Basic as the front end and Microsoft SQL Server Management as the back end. Outcomes of the iDSS include advice on Symptoms, Diet and Exercise for pregnant women. The system's conditional advice was reviewed and validated by a gynaecologist. Another outcome of the iDSS was to provide better pregnancy health awareness and reduce long-distance travel, especially for women in rural areas. The proposed system has qualities such as usefulness, accuracy and accessibility.
Abstract:
Person re-identification involves recognising individuals in different locations across a network of cameras and is a challenging task due to a large number of varying factors such as pose (of both subject and camera) and ambient lighting conditions. Existing databases do not adequately capture these variations, making evaluations of proposed techniques difficult. In this paper, we present a new challenging multi-camera surveillance database designed for the task of person re-identification. This database consists of 150 unscripted sequences of subjects travelling in a building environment through up to eight camera views, appearing from various angles and in varying illumination conditions. A flexible XML-based evaluation protocol is provided to allow a highly configurable evaluation setup, enabling a variety of scenarios relating to pose and lighting conditions to be evaluated. A baseline person re-identification system consisting of colour, height and texture models is demonstrated on this database.