81 results for discrete event systems
Abstract:
Novel computer vision techniques have been developed to automatically detect unusual events in crowded scenes from video feeds of surveillance cameras. The research is useful in the design of next-generation intelligent video surveillance systems. Two major contributions are the construction of a novel machine learning model for multiple instance learning through compressive sensing, and the design of novel feature descriptors in the compressed video domain.
Abstract:
Due to the popularity of security cameras in public places, it is of interest to design an intelligent system that can efficiently detect events automatically. This paper proposes a novel algorithm for multi-person event detection. To ensure faster-than-real-time performance, features are extracted directly from compressed MPEG video. A novel histogram-based feature descriptor that captures the angles between extracted particle trajectories is proposed, which allows us to capture the motion patterns of multi-person events in the video. To alleviate the need for fine-grained annotation, we propose the use of Labelled Latent Dirichlet Allocation, a “weakly supervised” method that allows the use of coarse temporal annotations, which are much simpler to obtain. This novel system is able to run at approximately ten times real-time while preserving state-of-the-art detection performance for multi-person events on a 100-hour real-world surveillance dataset (TRECVid SED).
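The descriptor above hinges on binning angles between particle trajectories. A minimal sketch of that idea in Python, assuming trajectories arrive as (x, y) point sequences; the function name, bin count, and use of net-displacement directions are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from itertools import combinations

def trajectory_angle_histogram(trajectories, bins=8):
    """Histogram of angles between the mean motion directions of
    trajectory pairs. trajectories: list of (N, 2) sequences of (x, y)."""
    dirs = []
    for traj in trajectories:
        d = np.asarray(traj[-1], float) - np.asarray(traj[0], float)  # net displacement
        n = np.linalg.norm(d)
        if n > 0:
            dirs.append(d / n)
    # Pairwise angles between unit direction vectors, in [0, pi].
    angles = [np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
              for a, b in combinations(dirs, 2)]
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
    return hist / max(hist.sum(), 1)  # normalised descriptor
```

A crowd moving coherently concentrates mass in the low-angle bins, while converging or crossing motion spreads it toward pi, which is what makes such a histogram discriminative for multi-person events.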
Abstract:
This paper presents a low-bandwidth multi-robot communication system designed to serve as a backup communication channel in the event a robot suffers a network device fault. While much research has been performed in the area of distributing network communication across multiple robots within a system, individual robots are still susceptible to hardware failure. In the past, such robots would simply be removed from service, and their tasks re-allocated to other members. However, there are times when a faulty robot might be crucial to a mission, or be able to contribute in a less communication-intensive area. By allowing robots to encode and decode messages into unique sequences of DTMF symbols, called words, our system is able to facilitate continued low-bandwidth communication between robots without access to network communication. Our results have shown that the system is capable of permitting robots to negotiate task initiation and termination, and is flexible enough to permit a pair of robots to perform a simple turn-taking task.
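Since the system encodes messages as fixed sequences of DTMF symbols, a simple base-16 codec over the standard 16-symbol DTMF alphabet conveys the flavour. This is a hypothetical sketch, not the paper's actual word scheme:

```python
# Standard 4x4 DTMF keypad alphabet: 16 symbols, so each symbol
# carries exactly 4 bits of the message ID.
DTMF_SYMBOLS = "0123456789ABCD*#"

def encode_word(message_id: int, length: int = 4) -> str:
    """Encode an integer message ID as a fixed-length DTMF word (base 16)."""
    symbols = []
    for _ in range(length):
        symbols.append(DTMF_SYMBOLS[message_id % 16])
        message_id //= 16
    return "".join(reversed(symbols))

def decode_word(word: str) -> int:
    """Recover the message ID from a DTMF word."""
    value = 0
    for s in word:
        value = value * 16 + DTMF_SYMBOLS.index(s)
    return value

assert decode_word(encode_word(473)) == 473  # round-trip check: word "01D9"
```

A fixed word length keeps decoding stateless on the receiving robot, at the cost of a bounded message space per word.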
Abstract:
Abnormal event detection has attracted a lot of attention in the computer vision research community during recent years due to the increased focus on automated surveillance systems to improve security in public places. Due to the scarcity of training data and the definition of an abnormality being dependent on context, abnormal event detection is generally formulated as a data-driven approach where activities are modeled in an unsupervised fashion during the training phase. In this work, we use a Gaussian mixture model (GMM) to cluster the activities during the training phase, and propose a Gaussian mixture model based Markov random field (GMM-MRF) to estimate the likelihood scores of new videos in the testing phase. Furthermore, we propose two new features, optical acceleration and the histogram of optical flow gradients, to detect the presence of abnormal objects and speed violations in the scene. We show that our proposed method outperforms other state-of-the-art abnormal event detection algorithms on the publicly available UCSD dataset.
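The GMM half of such a pipeline, clustering normal activity and then scoring new observations by likelihood, can be sketched with scikit-learn. The MRF smoothing over neighbouring regions and the paper's actual features are omitted, and the random data below is a stand-in:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train_features = rng.normal(size=(1000, 8))  # stand-in for real motion descriptors

# Fit a mixture of Gaussians to features extracted from normal videos only.
gmm = GaussianMixture(n_components=5, covariance_type="full").fit(train_features)

# At test time, low log-likelihood under the normal model flags abnormality.
test_features = rng.normal(size=(20, 8))
scores = gmm.score_samples(test_features)                    # per-sample log-likelihood
threshold = np.percentile(gmm.score_samples(train_features), 1)
abnormal = scores < threshold                                # boolean anomaly flags
```

The threshold here is a simple percentile of training likelihoods; any held-out calibration scheme would do.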
Abstract:
DNA double-strand breaks (DSBs) are particularly lethal and genotoxic lesions that can arise either by endogenous (physiological or pathological) processes or by exogenous factors, particularly ionizing radiation and radiomimetic compounds. Phosphorylation of the H2A histone variant, H2AX, at the serine-139 residue, in the highly conserved C-terminal SQEY motif, forming γH2AX, is an early response to DNA double-strand breaks [1]. This phosphorylation event is mediated by the phosphatidyl-inositol 3-kinase (PI3K) family of proteins: ataxia telangiectasia mutated (ATM), the DNA-protein kinase catalytic subunit, and ATM and RAD3-related (ATR) [2]. Overall, DSB induction results in the formation of discrete nuclear γH2AX foci which can be easily detected and quantitated by immunofluorescence microscopy [2]. Given the unique specificity and sensitivity of this marker, analysis of γH2AX foci has led to a wide range of applications in biomedical research, particularly in radiation biology and nuclear medicine. The quantitation of γH2AX foci has been most widely investigated in cell culture systems in the context of ionizing radiation-induced DSBs. Apart from cellular radiosensitivity, immunofluorescence-based assays have also been used to evaluate the efficacy of radiation-modifying compounds. In addition, γH2AX has been used as a molecular marker to examine the efficacy of various DSB-inducing compounds and has recently been heralded as an important marker of ageing and disease, particularly cancer [3]. Further, immunofluorescence-based methods have been adapted to suit detection and quantitation of γH2AX foci ex vivo and in vivo [4,5]. Here, we demonstrate a typical immunofluorescence method for detection and quantitation of γH2AX foci in mouse tissues.
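Foci quantitation from such microscopy images is often automated with simple thresholding and connected-component labelling. The sketch below is a generic illustration using scikit-image, not the specific protocol demonstrated in this work; the minimum-area cutoff is an assumed parameter:

```python
import numpy as np
from skimage import filters, measure

def count_foci(nucleus_img: np.ndarray) -> int:
    """Count bright foci in a single-nucleus grayscale image by
    Otsu thresholding followed by connected-component labelling."""
    mask = nucleus_img > filters.threshold_otsu(nucleus_img)
    labels = measure.label(mask)
    # Discard tiny specks unlikely to be genuine foci (assumed cutoff).
    regions = [r for r in measure.regionprops(labels) if r.area >= 4]
    return len(regions)
```

In practice a per-nucleus segmentation step precedes this, and the area and intensity cutoffs are calibrated against manually scored images.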
Abstract:
Today’s information systems log vast amounts of data. These collections of data (implicitly) describe events (e.g. placing an order or taking a blood test) and, hence, provide information on the actual execution of business processes. The analysis of such data provides an excellent starting point for business process improvement. This is the realm of process mining, an area which has provided a repertoire of many analysis techniques. Despite the impressive capabilities of existing process mining algorithms, dealing with the abundance of data recorded by contemporary systems and devices remains a challenge. Of particular importance is the capability to guide the meaningful interpretation of “oceans of data” by process analysts. To this end, insights from the field of visual analytics can be leveraged. This article proposes an approach where process states are reconstructed from event logs and visualised in succession, leading to an animated history of a process. This approach is customisable in how a process state, partially defined through a collection of activity instances, is visualised: one can select a map and specify a projection of events on this map based on the properties of the events. This paper describes a comprehensive implementation of the proposal. It was realised using the open-source process mining framework ProM. Moreover, this paper also reports on an evaluation of the approach conducted with Suncorp, one of Australia’s largest insurance companies.
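Reconstructing a process state from an event log reduces to replaying start/complete events up to a chosen instant; each reconstructed state is one animation frame. A minimal sketch, assuming a log of lifecycle events (the Event fields are illustrative, not ProM's data model):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    case_id: str
    activity: str
    lifecycle: str      # "start" or "complete"
    timestamp: datetime

def state_at(events, t):
    """Activity instances started but not yet completed at time t.
    The returned set of (case_id, activity) pairs is the snapshot
    that would be projected onto a map for one animation frame."""
    active = set()
    for e in sorted(events, key=lambda e: e.timestamp):
        if e.timestamp > t:
            break
        key = (e.case_id, e.activity)
        if e.lifecycle == "start":
            active.add(key)
        elif e.lifecycle == "complete":
            active.discard(key)
    return active
```

Sliding t forward in fixed increments and re-rendering the projection yields the animated process history described above.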
Abstract:
Human resources are often responsible for the execution of business processes. In order to evaluate resource performance and identify best practices as well as opportunities for improvement, managers need objective information about resource behaviours. Companies often use information systems to support their processes and these systems record information about process execution in event logs. We present a framework for analysing and evaluating resource behaviour through mining such event logs. The framework provides a method for extracting descriptive information about resource skills, utilisation, preferences, productivity and collaboration patterns; a method for analysing relationships between different resource behaviours and outcomes; and a method for evaluating the overall resource productivity, tracking its changes over time and comparing it with the productivity of other resources. To demonstrate the applicability of our framework we apply it to analyse behaviours of employees in an Australian company and evaluate its usefulness by a survey among managers in industry.
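As an illustration of the utilisation metric in such a framework, here is a toy sketch that assumes busy time per resource has already been extracted from matched start/complete events; the input shape is an assumption, not the framework's interface:

```python
from collections import defaultdict

def resource_utilisation(work_items, window_hours):
    """Fraction of the observation window each resource spent on work items.
    work_items: iterable of (resource, duration_hours) pairs, a simplified
    stand-in for start/complete event pairs extracted from a log."""
    busy = defaultdict(float)
    for resource, duration in work_items:
        busy[resource] += duration
    return {r: min(h / window_hours, 1.0) for r, h in busy.items()}

# resource_utilisation([("alice", 30), ("bob", 12)], window_hours=40)
# -> {'alice': 0.75, 'bob': 0.3}
```

Tracking this ratio per week or month gives the over-time productivity comparison the framework describes.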
Abstract:
Oscillations of neural activity may bind widespread cortical areas into a neural representation that encodes disparate aspects of an event. To test this theory we have turned to data collected from complex partial epilepsy (CPE) patients with chronically implanted depth electrodes. Data from regions critical to word and face information processing were analyzed using spectral coherence measurements. Similar analyses of intracranial EEG (iEEG) during seizure episodes display hippocampal formation (HCF) to neocortical (NC) spectral coherence patterns that are characteristic of specific seizure stages (Klopp et al. 1996). We are now building a computational memory model to examine whether spatio-temporal patterns of human iEEG spectral coherence emerge in a computer simulation of HCF cellular distribution, membrane physiology and synaptic connectivity. Once the model is reasonably scaled it will be used as a tool to explore neural parameters that are critical to memory formation and epileptogenesis.
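Spectral coherence between two channels is a standard computation. A minimal sketch with SciPy on simulated data sharing an 8 Hz component; channel names, sampling rate, and window length are illustrative, not drawn from the study:

```python
import numpy as np
from scipy.signal import coherence

fs = 256.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Two simulated channels that share a common 8 Hz oscillation plus noise.
shared = np.sin(2 * np.pi * 8 * t)
ch_hcf = shared + 0.5 * rng.standard_normal(t.size)   # "hippocampal" channel
ch_nc = shared + 0.5 * rng.standard_normal(t.size)    # "neocortical" channel

# Magnitude-squared coherence as a function of frequency (Welch's method).
f, Cxy = coherence(ch_hcf, ch_nc, fs=fs, nperseg=512)
print(f"coherence near 8 Hz: {Cxy[np.argmin(np.abs(f - 8))]:.2f}")
```

Coherence near 1 at the shared frequency and near 0 elsewhere is the signature such analyses look for across electrode pairs and seizure stages.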
Abstract:
Magnetic resonance is a well-established tool for structural characterisation of porous media. Features of pore-space morphology can be inferred from NMR diffusion-diffraction plots or the time-dependence of the apparent diffusion coefficient. Diffusion NMR signal attenuation can be computed from the restricted diffusion propagator, which describes the distribution of diffusing particles for a given starting position and diffusion time. We present two techniques for efficient evaluation of restricted diffusion propagators for use in NMR porous-media characterisation. The first is the Lattice Path Count (LPC). Its physical essence is that the restricted diffusion propagator connecting points A and B in time t is proportional to the number of distinct length-t paths from A to B. By using a discrete lattice, the number of such paths can be counted exactly. The second technique is the Markov transition matrix (MTM). The matrix represents the probabilities of jumps between every pair of lattice nodes within a single timestep. The propagator for an arbitrary diffusion time can be calculated as the appropriate matrix power. For periodic geometries, the transition matrix needs to be defined only for a single unit cell. This makes MTM ideally suited for periodic systems. Both LPC and MTM are closely related to existing computational techniques: LPC, to combinatorial techniques; and MTM, to the Fokker-Planck master equation. The relationship between LPC, MTM and other computational techniques is briefly discussed in the paper. Both LPC and MTM perform favourably compared to Monte Carlo sampling, yielding highly accurate and almost noiseless restricted diffusion propagators. Initial tests indicate that their computational performance is comparable to that of finite element methods. Both LPC and MTM can be applied to complicated pore-space geometries with no analytic solution. We discuss the new methods in the context of diffusion propagator calculation in porous materials and model biological tissues.
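The MTM technique is straightforward to sketch: build a one-step transition matrix on a lattice and raise it to the power of the diffusion time. A minimal 1-D example with reflecting boundaries, assuming unit lattice spacing and timestep (not the paper's geometry):

```python
import numpy as np

# 1-D interval discretised into n lattice nodes; a particle jumps to each
# neighbour with probability 1/2 per timestep.
n = 50
P = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            P[i, j] = 0.5
    P[i, i] = 1.0 - P[i].sum()   # reflecting walls: leftover mass stays put

# The restricted diffusion propagator after k timesteps is the k-th matrix power.
k = 200
propagator = np.linalg.matrix_power(P, k)
# propagator[a, b]: probability a particle starting at node a is at node b.
assert np.allclose(propagator.sum(axis=1), 1.0)   # probability is conserved
```

For periodic geometries the same matrix need only be built for one unit cell, which is the efficiency advantage noted above; the result is exact up to discretisation, with none of the sampling noise of Monte Carlo.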
Abstract:
This paper addresses the problem of identifying and explaining behavioral differences between two business process event logs. The paper presents a method that, given two event logs, returns a set of statements in natural language capturing behavior that is present or frequent in one log, while absent or infrequent in the other. This log delta analysis method allows users to diagnose differences between normal and deviant executions of a process or between two versions or variants of a process. The method relies on a novel approach to losslessly encode an event log as an event structure, combined with a frequency-enhanced technique for differencing pairs of event structures. A validation of the proposed method shows that it accurately diagnoses typical change patterns and can explain differences between normal and deviant cases in a real-life log, more compactly and precisely than previously proposed methods.
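The paper's method differences event structures, which is considerably more involved; a much simpler relative-frequency delta over directly-follows pairs conveys the general flavour of log delta analysis:

```python
from collections import Counter

def dfollows(log):
    """Count directly-follows pairs in a log given as lists of activity names."""
    c = Counter()
    for trace in log:
        c.update(zip(trace, trace[1:]))
    return c

def delta(log_a, log_b):
    """Relative-frequency difference per directly-follows pair; large
    positive values mark behaviour frequent in log_a but rare in log_b."""
    fa, fb = dfollows(log_a), dfollows(log_b)
    na, nb = sum(fa.values()) or 1, sum(fb.values()) or 1
    return {k: fa[k] / na - fb[k] / nb for k in set(fa) | set(fb)}

# delta([["a", "b", "c"]] * 9 + [["a", "c"]], [["a", "c"]] * 10)
# surfaces ("a", "b") and ("b", "c") as behaviour present only in the first log.
```

Each surfaced pair would then be verbalised as a natural-language statement, which is the step the event-structure encoding makes lossless and precise.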
Abstract:
Automatic Vehicle Identification (AVI) systems are increasingly used as a new source of travel information. Because in past decades these systems relied on expensive new technologies, only a few detectors were scattered across a network, thus making travel-time and average-speed estimation their main objectives. However, as prices dropped, the opportunity to build dense AVI networks arose, as in Brisbane, where more than 250 Bluetooth detectors are now installed. As a consequence, this technology represents an effective means of acquiring accurate time-dependent Origin-Destination information. In order to obtain reliable estimations, however, a number of issues need to be addressed. Some of these problems stem from the structure of a network built from isolated detectors, while others are inherent to Bluetooth technology (overlapping detection areas, missing detections, ...). The aim of this paper is threefold. First, after presenting the level of detail that can be reached with a network of isolated detectors, we describe how we modelled Brisbane's network, keeping only the information valuable for the retrieval of trip information. Second, we give an overview of the issues inherent to Bluetooth technology and propose a methodology to cleanse, correct and aggregate Bluetooth data and to retrieve the itineraries of individual Bluetooth-equipped vehicles. Last, through a comparison with Brisbane Transport Strategic Model results, we highlight the opportunities and the limits of Bluetooth detector networks. We postulate that the methods introduced in this paper are the first crucial steps towards computing accurate Origin-Destination matrices in urban road networks.
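One of the cleansing steps implied above, collapsing the repeated sightings caused by overlapping detection areas and dwell time into single passages, can be sketched as follows; the five-minute gap is an assumed parameter, not the paper's value:

```python
from datetime import timedelta

def merge_detections(detections, gap=timedelta(minutes=5)):
    """Collapse repeated sightings of a device at the same detector into
    single passages. detections: (device_id, detector_id, timestamp)
    triples, pre-sorted by timestamp."""
    passages, last_seen = [], {}
    for device, detector, ts in detections:
        key = (device, detector)
        # A new passage starts only after a sufficiently long silent gap.
        if key not in last_seen or ts - last_seen[key] > gap:
            passages.append((device, detector, ts))
        last_seen[key] = ts
    return passages
```

The cleaned passage sequence per device is then the input for itinerary retrieval and, ultimately, Origin-Destination estimation.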
Abstract:
With organisations facing significant challenges to remain competitive, Business Process Improvement (BPI) initiatives are often conducted to improve the efficiency and effectiveness of their business processes, focussing on time, cost, and quality improvements. Event logs which contain a detailed record of business operations over a certain time period, recorded by an organisation's information systems, are the first step towards initiating evidence-based BPI activities. Given an (original) event log as a starting point, an approach to explore better ways to execute a business process was developed, resulting in an improved (perturbed) event log. Identifying the differences between the original event log and the perturbed event log can provide valuable insights, helping organisations to improve their processes. However, there is a lack of automated techniques to detect the differences between two event logs. Therefore, this research aims to develop visualisation techniques to provide targeted analysis of resource reallocation and activity rescheduling. The differences between two event logs are first identified. The changes between the two event logs are conceptualised and realised with a number of visualisations. With the proposed visualisations, analysts will then be able to identify the changes related to resource and time, resulting in a more efficient business process. Ultimately, analysts can make use of this comparative information to initiate evidence-based BPI activities.
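Detecting resource reallocation between an original and a perturbed log can be reduced to differencing resource-activity workload counts; a minimal sketch under that simplification (the input shape is an assumption, and the visualisation layer is omitted):

```python
from collections import Counter

def resource_activity_counts(log):
    """log: iterable of (resource, activity) assignments from an event log."""
    return Counter(log)

def reallocation_diff(original, perturbed):
    """(resource, activity) pairs whose workload changed between the logs;
    positive values mean work was added in the perturbed log."""
    a = resource_activity_counts(original)
    b = resource_activity_counts(perturbed)
    return {k: b[k] - a[k] for k in set(a) | set(b) if b[k] != a[k]}
```

Rendering this difference dictionary as a heat map over resources and activities is one of the simplest visualisations that makes reallocations visible at a glance.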
Abstract:
Companies standardise and automate their business processes in order to improve process efficiency and minimise operational risks. However, it is difficult to eliminate all process risks during the process design stage because processes often run in complex and changeable environments and rely on human resources. Timely identification of process risks is crucial in order to ensure the achievement of process goals. Business processes are often supported by information systems that record information about their executions in event logs. In this article we present an approach and a supporting tool for the evaluation of the overall process risk and for the prediction of process outcomes based on the analysis of information recorded in event logs. It can help managers evaluate the overall risk exposure of their business processes, track the evolution of overall process risk, identify changes and predict process outcomes based on the current value of overall process risk. The approach was implemented and validated using synthetic event logs and through a case study with a real event log.
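A common way to realise such outcome prediction is a classifier over per-case features with historical outcomes as labels. A hedged sketch with scikit-learn on synthetic stand-in data; the article's actual model and features may differ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-case features extracted from an event log (e.g. elapsed time, count of
# reworked activities) paired with historical outcomes (1 = undesired outcome).
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                              # stand-in features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Score currently running cases; the mean predicted risk is one simple
# aggregate of overall process risk exposure that can be tracked over time.
running_cases = rng.normal(size=(5, 3))
risk = model.predict_proba(running_cases)[:, 1]
overall_risk = risk.mean()
```

Re-fitting periodically and plotting overall_risk over time gives the risk-evolution tracking described above.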
Abstract:
Background: At Queensland University of Technology (QUT), the Bachelor of Radiation Therapy course evaluation has previously suffered from low online survey participation rates. A communal instantaneous feedback event using an audience response system (ARS) was evaluated as a potential solution to this problem. The aims of the project were to determine the extent to which this feedback event could be facilitated by ARS technology and to evaluate the impact the technology made on student satisfaction and engagement. Methods: Students were invited to a timetabled session to provide feedback on individual study units and the course overall. They provided quantitative Likert-style responses to prompts for each unit and the course using an ARS as well as anonymous typed qualitative comments. Data collection was performed live so students were able to view collective class responses. This prompted further discussion and enabled a prospective action plan to be developed. To inform future ARS use, students were asked for their opinions on the feedback method. Results: Despite technological difficulties, student evaluation indicated that all responders enjoyed the session and the opportunity to view the combined responses. All students felt that useful feedback was generated and that this method should be used in the future. The student attendance and response rates were high, and it was clear that the session had led to the development of some insightful qualitative feedback comments. Conclusions: An ARS contributed well to the collection of course feedback in a communal and interactive environment. Students found it enjoyable to use, and it helped to stimulate useful qualitative comments.