339 results for Rainfall event classification
Abstract:
The rapid increase in the deployment of CCTV systems has led to a greater demand for algorithms that are able to process incoming video feeds. These algorithms are designed to extract information of interest for human operators. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task where the system is trained on normal data and is required to detect events which do not fit the learned 'normal' model. Many researchers have tried various sets of features to train different learning models to detect abnormal behaviour in video footage. In this work we propose using a Semi-2D Hidden Markov Model (HMM) to model the normal activities of people. The outliers of the model with insufficient likelihood are identified as abnormal activities. Our Semi-2D HMM is designed to model both the temporal and spatial causalities of the crowd behaviour by assuming that the current state of the Hidden Markov Model depends not only on the previous state in the temporal direction, but also on the previous states of the adjacent spatial locations. Two different HMMs are trained to model both the vertical and horizontal spatial causal information. Location features, flow features and optical flow textures are used as the features for the model. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
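The likelihood-thresholding idea in this abstract can be sketched as follows. This is a minimal illustration using an ordinary one-dimensional discrete HMM with hand-set parameters, not the paper's Semi-2D formulation or its flow features; the model values, observation symbols, and threshold are assumptions for demonstration only.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm and per-step scaling."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
        s = alpha.sum()
        log_lik += np.log(s)
        alpha /= s
    return log_lik

# Illustrative 2-state model standing in for one "trained on normal data".
pi = np.array([0.6, 0.4])                 # initial state distribution
A  = np.array([[0.9, 0.1],
               [0.2, 0.8]])              # state transition matrix
B  = np.array([[0.95, 0.05],
               [0.90, 0.10]])            # emission probs for 2 symbols

normal_seq   = [0, 0, 1, 0, 0]           # mostly the common symbol
abnormal_seq = [1, 1, 1, 1, 1]           # a run of the rare symbol

threshold = -5.0  # in practice chosen on held-out normal data
for seq in (normal_seq, abnormal_seq):
    ll = forward_log_likelihood(seq, pi, A, B)
    print(ll, "abnormal" if ll < threshold else "normal")
```

Sequences whose log-likelihood falls below the threshold are flagged as outliers, which is the detection rule the abstract describes.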
Abstract:
A review of 291 catalogued particles on the basis of particle size, shape, bulk chemistry, and texture is used to establish a reliable taxonomy. Extraterrestrial materials occur in three defined categories: spheres, aggregates and fragments. Approximately 76% of aggregates are of probable extraterrestrial origin, whereas spheres contain the smallest amount of extraterrestrial material (approximately 43%). -B.M.
Abstract:
Process mining encompasses the research area which is concerned with knowledge discovery from information system event logs. Within the process mining research area, two prominent tasks can be discerned. First of all, process discovery deals with the automatic construction of a process model out of an event log. Secondly, conformance checking focuses on the assessment of the quality of a discovered or designed process model with respect to the actual behavior as captured in event logs. Hereto, multiple techniques and metrics have been developed and described in the literature. However, the process mining domain still lacks a comprehensive framework for assessing the goodness of a process model from a quantitative perspective. In this study, we describe the architecture of an extensible framework within ProM, allowing for the consistent, comparative and repeatable calculation of conformance metrics. For the development and assessment of both process discovery and conformance checking techniques, such a framework is considered greatly valuable.
Abstract:
This item provides supplementary materials for the paper mentioned in the title, specifically a range of organisms used in the study. The full abstract for the main paper is as follows: Next Generation Sequencing (NGS) technologies have revolutionised molecular biology, allowing clinical sequencing to become a matter of routine. NGS data sets consist of short sequence reads obtained from the machine, given context and meaning through downstream assembly and annotation. For these techniques to operate successfully, the collected reads must be consistent with the assumed species or species group, and not corrupted in some way. The common bacterium Staphylococcus aureus may cause severe and life-threatening infections in humans, with some strains exhibiting antibiotic resistance. In this paper, we apply an SVM classifier to the important problem of distinguishing S. aureus sequencing projects from alternative pathogens, including closely related Staphylococci. Using a sequence k-mer representation, we achieve precision and recall above 95%, implicating features with important functional associations.
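The k-mer representation used as input to the SVM can be sketched as follows: each read is mapped to a normalised frequency vector over all possible k-mers. The value of k, the alphabet handling, and the example read are illustrative assumptions; the paper's actual feature pipeline and classifier settings are not reproduced here.

```python
from itertools import product

def kmer_vector(seq, k=3, alphabet="ACGT"):
    """Normalised k-mer frequency vector for a DNA read.
    Features are ordered by the lexicographic order of all |alphabet|^k k-mers."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = [0] * len(kmers)
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in index:              # skip k-mers containing ambiguous bases
            counts[index[km]] += 1
    total = sum(counts) or 1         # avoid division by zero on short reads
    return [c / total for c in counts]

read = "ACGTACGTGG"                  # a toy read, not from the study's data
vec = kmer_vector(read, k=3)
print(len(vec))                      # 64 features for k=3 over ACGT
```

Vectors of this form, one per read or per project, would then be fed to a standard SVM for the S. aureus vs. non-aureus decision.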
Abstract:
Free association norms indicate that words are organized into semantic/associative neighborhoods within a larger network of words and links that bind the net together. We present evidence indicating that memory for a recent word event can depend on implicitly and simultaneously activating related words in its neighborhood. Processing a word during encoding primes its network representation as a function of the density of the links in its neighborhood. Such priming increases recall and recognition and can have long-lasting effects when the word is processed in working memory. Evidence for this phenomenon is reviewed in extralist cuing, primed free association, intralist cuing, and single-item recognition tasks. The findings also show that when a related word is presented to cue the recall of a studied word, the cue activates it in an array of related words that distract and reduce the probability of its selection. The activation of the semantic network produces priming benefits during encoding and search costs during retrieval. In extralist cuing, recall is a negative function of cue-to-distracter strength and a positive function of neighborhood density, cue-to-target strength, and target-to-cue strength. We show how four measures derived from the network can be combined and used to predict memory performance. These measures play different roles in different tasks, indicating that the contribution of the semantic network varies with the context provided by the task. We evaluate spreading activation and quantum-like entanglement explanations for the priming effect produced by neighborhood density.
Abstract:
In this paper, we propose an approach which attempts to solve the problem of surveillance event detection, assuming that we know the definition of the events. To facilitate the discussion, we first define two concepts. The event of interest refers to the event that the user requests the system to detect; and the background activities are any other events in the video corpus. This is an unsolved problem due to many factors as listed below: 1) Occlusions and clustering: The surveillance scenes which are of significant interest at locations such as airports, railway stations and shopping centers are often crowded, where occlusions and clustering of people are frequently encountered. This significantly affects the feature extraction step; for instance, trajectories generated by object tracking algorithms are usually not robust under such a situation. 2) The requirement for real-time detection: The system should process the video fast enough in both the feature extraction and detection steps to facilitate real-time operation. 3) Massive size of the training data set: Suppose there is an event that lasts for 1 minute in a video with a frame rate of 25 fps; the number of frames for this event is 60 × 25 = 1500. If we want to have a training data set with many positive instances of the event, the video is likely to be very large in size (i.e. hundreds of thousands of frames or more). How to handle such a large data set is a problem frequently encountered in this application. 4) Difficulty in separating the event of interest from background activities: The events of interest often co-exist with a set of background activities. Temporal ground truth is typically very ambiguous, as it does not distinguish the event of interest from a wide range of co-existing background activities. However, it is not practical to annotate the locations of the events in large amounts of video data.
This problem becomes more serious in the detection of multi-agent interactions, since the location of these events often cannot be constrained to within a bounding box. 5) Challenges in determining the temporal boundaries of the events: An event can occur at any arbitrary time with an arbitrary duration. The temporal segmentation of events is difficult and ambiguous, and also affected by other factors such as occlusions.
Abstract:
Bridges are currently rated individually for maintenance and repair action according to the structural conditions of their elements. Dealing with thousands of bridges and the many factors that cause deterioration makes this rating process extremely complicated. The current simplified but practical methods are not accurate enough. On the other hand, the sophisticated, more accurate methods are only used for a single or particular bridge type. It is therefore necessary to develop a practical and accurate rating system for a network of bridges. The first and most important step in achieving this aim is to classify bridges based on the differences in nature and the unique characteristics of the critical factors and the relationship between them, for a network of bridges. Critical factors and vulnerable elements will be identified and placed in different categories. This classification method will be used to develop a new practical rating method for a network of railway bridges based on criticality and vulnerability analysis. This rating system will be more accurate and economical, as well as improve the safety and serviceability of railway bridges.
Abstract:
More than 750 individual particles have now been selected from collection flags housed in the JSC Cosmic Dust Curatorial Facility, and most have been documented in the Cosmic Dust Catalogs [1]. As increasing numbers of particles are placed in Cosmic Dust Collections, and a greater diversity of particles are introduced to the stratosphere through natural and man-made processes (e.g. decaying orbits of space debris [2]), there is an even greater need for a classification scheme to encompass all stratospheric particles rather than only extraterrestrial particles. The fundamental requirements for a suitable classification scheme have been outlined in earlier communications [3,4]. A quantitative survey of particles on collection flag W7017 indicates that there is some bias in the number of samples selected within a given category for the Cosmic Dust Catalog [5]. However, the sample diversity within this selection is still appropriate for the development of a reliable classification scheme. In this paper, we extend the earlier works on stratospheric particle classification to include particles collected during the period May 1981 to November 1983.
Abstract:
Extracting and aggregating the relevant event records relating to an identified security incident from the multitude of heterogeneous logs in an enterprise network is a difficult challenge. Presenting the information in a meaningful way is an additional challenge. This paper looks at solutions to this problem by first identifying three main transforms: log collection, correlation, and visual transformation. Having identified that the CEE project will address the first transform, this paper focuses on the second, while the third is left for future work. To aggregate by correlating event records we demonstrate the use of two correlation methods, simple and composite. These make use of a defined mapping schema and confidence values to dynamically query the normalised dataset and to constrain result events to within a time window. Doing so improves the quality of results, required for the iterative re-querying process being undertaken. Final results of the process are output as nodes and edges suitable for presentation as a network graph.
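The simple, time-windowed correlation step described above can be sketched as follows. The record field names, window size, and seed-based querying here are illustrative assumptions for demonstration, not the paper's mapping schema, confidence values, or the CEE record format.

```python
from datetime import datetime, timedelta

# Toy normalised event records; the field names are assumptions.
events = [
    {"time": datetime(2013, 1, 1, 10, 0, 5),  "host": "fw1",  "src_ip": "10.0.0.5"},
    {"time": datetime(2013, 1, 1, 10, 0, 40), "host": "web1", "src_ip": "10.0.0.5"},
    {"time": datetime(2013, 1, 1, 11, 30, 0), "host": "web1", "src_ip": "10.0.0.9"},
]

def simple_correlate(records, field, seed, window=timedelta(minutes=5)):
    """Simple correlation: records whose `field` matches the seed record's
    value and whose timestamp lies within `window` of the seed's timestamp."""
    return [r for r in records
            if r[field] == seed[field]
            and abs(r["time"] - seed["time"]) <= window]

seed = events[0]
hits = simple_correlate(events, "src_ip", seed)

# Emit nodes and edges suitable for a network-graph presentation,
# mirroring the paper's final output step.
nodes = {r["host"] for r in hits}
edges = [(seed["host"], r["host"]) for r in hits if r is not seed]
print(nodes, edges)
```

Composite correlation would chain several such field matches with per-field confidence weights; iterative re-querying would feed each hit back in as a new seed.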
Abstract:
Background: Studies on the relationship between performance and design of the throwing frame have been limited and therefore require further investigation. Objectives: The specific objectives were to provide benchmark information about performance and whole body positioning of male athletes in F30s classes. Study Design: Descriptive analysis. Methods: A total of 48 attempts performed by 12 stationary discus throwers in F33 and F34 classes during the seated discus throwing event of the 2002 International Paralympic Committee Athletics World Championships were analysed in this study. The whole body positioning included overall throwing posture (i.e. number of points of contact between the thrower and the frame, body position, throwing orientation and throwing side) and lower limb placements (i.e. seating arrangements, points of contact on both feet, type of attachment of both legs and feet). Results: Three (25%), five (42%), one (8%) and three (25%) athletes used three, four, five and six points of contact, respectively. Seven (58%) and five (42%) athletes threw from a standing or a seated position, respectively. A straddle, a stool or a chair was used by six (50%), four (33%) or two (17%) throwers, respectively. Conclusions: This study provides key information for a better understanding of the interaction between the throwing technique of elite seated throwers and their throwing frame.