898 results for detection systems
Abstract:
In the chemical textile domain, experts have to analyse chemical components and substances that might be harmful for their use in clothing and textiles. Part of this analysis is performed by searching the opinions and reports people have expressed concerning these products on the Social Web. However, this type of information is not as frequent on the Internet for this domain as for others, so its detection and classification is difficult and time-consuming. Consequently, problems associated with the use of chemical substances in textiles may not be detected early enough and could lead to health problems such as allergies or burns. In this paper, we propose a framework able to detect, retrieve, and classify subjective sentences related to the chemical textile domain that could be integrated into a wider health surveillance system. We also describe the creation of several datasets with opinions from this domain, the experiments performed using machine learning techniques and different lexical resources such as WordNet, and the evaluation focusing on sentiment classification and complaint detection (i.e., negativity). Despite the challenges involved in this domain, our approach obtains promising results, with an F-score of 65% for polarity classification and 82% for complaint detection.
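A minimal sketch of the kind of lexicon-based polarity classification the abstract describes. The tiny word lists and example sentences are illustrative placeholders, not the authors' actual lexical resources or trained models:

```python
# Hypothetical sentiment lexicon for the chemical textile domain (illustrative only).
POSITIVE = {"safe", "comfortable", "good", "soft"}
NEGATIVE = {"allergy", "burn", "harmful", "rash", "itchy"}

def polarity(sentence: str) -> str:
    """Classify a sentence as positive, negative (a complaint), or neutral
    by counting lexicon hits -- a stand-in for the learned classifiers
    evaluated in the paper."""
    words = sentence.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("this shirt gave me a rash and an allergy"))  # negative
print(polarity("soft and comfortable fabric"))               # positive
```

In the paper's setting, complaint detection reduces to recognising the negative class; the machine-learning models replace this hand-built scoring.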
Abstract:
Feature selection is an important and active issue in clustering and classification problems. By choosing an adequate feature subset, the dimensionality of a dataset can be reduced, which decreases the computational complexity of classification and improves classifier performance by avoiding redundant or irrelevant features. Although feature selection can be formally defined as an optimisation problem with a single objective (the classification accuracy obtained with the selected feature subset), in recent years some multi-objective approaches to this problem have been proposed. These either select features that improve not only the classification accuracy but also the generalisation capability of supervised classifiers, or counterbalance the bias toward lower or higher numbers of features exhibited by some of the methods used to validate the clustering/classification in the unsupervised case. The main contribution of this paper is a multi-objective approach for feature selection and its application to an unsupervised clustering procedure based on Growing Hierarchical Self-Organising Maps (GHSOMs) that includes a new method for unit labelling and efficient determination of the winning unit. In the network anomaly detection problem considered here, this multi-objective approach makes it possible not only to differentiate between normal and anomalous traffic but also among different anomalies. The efficiency of our proposals has been evaluated using the well-known DARPA/NSL-KDD datasets, which contain extracted features and labelled attacks from around 2 million connections. The feature sets selected in our experiments provide detection rates of up to 99.8% for normal traffic and up to 99.6% for anomalous traffic, as well as accuracy values of up to 99.12%.
Abstract:
Outliers are objects that show abnormal behavior with respect to their context or that have unexpected values in some of their parameters. In decision-making processes, information quality is of the utmost importance. In specific applications, an outlying data element may represent an important deviation in a production process or a damaged sensor. Therefore, the ability to detect these elements could make the difference between making a correct and an incorrect decision. This task is complicated by the large sizes of typical databases. Because of their importance in searching large volumes of data, researchers pay special attention to the development of efficient outlier detection techniques. This article presents a computationally efficient algorithm for the detection of outliers in large volumes of information. The proposal is based on an extension of the mathematical framework upon which the basic theory of outlier detection, founded on Rough Set Theory, has been constructed. From this starting point, current problems are analyzed; a detection method is proposed, along with a computational algorithm that performs outlier detection with almost-linear complexity. To illustrate its viability, the results of applying the outlier-detection algorithm to the concrete example of a large database are presented.
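To make the near-linear-cost idea concrete, here is a simplified frequency-based outlier score over categorical records: two passes over the data give O(n·m) cost for n records of m attributes. This is an illustrative sketch in the same spirit, not the article's rough-set formulation:

```python
from collections import Counter

def outlier_scores(records):
    """Score each record by how rare its attribute values are.

    Pass 1 builds per-column value frequencies; pass 2 sums rarity
    (1 - relative frequency) per attribute. Both passes are linear
    in the number of records for a fixed number of attributes.
    """
    n = len(records)
    m = len(records[0])
    freq = [Counter(r[j] for r in records) for j in range(m)]
    return [sum(1 - freq[j][r[j]] / n for j in range(m)) for r in records]

# Toy data: the ("blue", "XL") record is unusual in both columns.
data = [("red", "S"), ("red", "M"), ("red", "M"), ("blue", "XL")]
scores = outlier_scores(data)
print(max(range(len(data)), key=scores.__getitem__))  # 3
```

A rough-set approach would instead reason over indiscernibility classes, but the complexity argument (avoiding pairwise comparisons between records) is the same.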
Abstract:
Society, as we know it today, is completely dependent on computer networks, the Internet and distributed systems, which place at our disposal the services needed to perform our daily tasks. Moreover, often without our noticing, all of these services and distributed systems require network management systems. These systems allow us, in general, to maintain, manage, configure, scale, adapt, modify, edit, protect or improve the main distributed systems. Their role is secondary and remains unknown and transparent to users; they provide the support necessary to maintain the distributed systems whose services we use every day. If network management systems are not considered during the development stage of the main distributed systems, there can be serious consequences or even total failures in their development. It is therefore necessary to consider system management within the design of distributed systems and to systematise its conception, so as to minimise the impact of network management on distributed-system projects. In this paper, we present a formalisation method for the conceptual modelling of a network management system design using formal modelling tools, which makes it possible to identify, from the definition of the processes, the parties responsible for them. Finally, we propose a use case: the design of a conceptual model for a network intrusion detection system.
Abstract:
The explosive growth of traffic in computer systems has made it clear that traditional control techniques are not adequate to provide users fast access to network resources and to prevent unfair use. In this paper, we present a reconfigurable digital hardware implementation of a specific neural model for intrusion detection. It uses a characterisation vector of the network packets (the intrusion vector), built from information obtained during the access attempt, which is then processed by the system. Our approach is adaptive and detects these intrusions by using an artificial intelligence method known as the multilayer perceptron. The implementation has been developed and tested on reconfigurable hardware (an FPGA) for embedded systems. Finally, the intrusion detection system was tested in a real-world simulation to gauge its effectiveness and real-time response.
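The core computation is a multilayer perceptron forward pass over the intrusion vector. The following sketch shows that computation in software; the feature names, weights and threshold are illustrative placeholders (in the paper they would be trained offline and mapped onto the FPGA):

```python
import math

def mlp_forward(x, w_hidden, w_out):
    """One forward pass of a small multilayer perceptron with sigmoid units.

    x        : input feature vector (the "intrusion vector")
    w_hidden : one weight row per hidden neuron
    w_out    : output-layer weights over the hidden activations
    """
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)))

# Hypothetical 3-feature intrusion vector (e.g. normalised packet rate,
# payload size, flag anomaly score) and hand-picked weights for illustration.
x = [0.9, 0.1, 0.8]
w_hidden = [[2.0, -1.0, 1.5], [-1.0, 2.0, -0.5]]
w_out = [1.5, -1.2]
score = mlp_forward(x, w_hidden, w_out)
print("intrusion" if score > 0.5 else "normal")  # intrusion
```

On an FPGA, each multiply-accumulate and activation would be realised in fixed-point logic, which is what makes the real-time response the abstract mentions achievable.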
Abstract:
Software-based techniques offer several advantages for increasing the reliability of processor-based systems at very low cost, but they cause performance degradation and an increase in code size. To meet constraints in performance and memory, we propose SETA, a new control-flow software-only technique that uses assertions to detect errors affecting the program flow. SETA is an independent technique, but it was conceived to work together with previously proposed data-flow techniques that aim at reducing performance and memory overheads. Thus, SETA is combined with such data-flow techniques and submitted to a fault injection campaign. Simulation and neutron-induced SEE tests show high fault coverage with performance and memory overheads below those of state-of-the-art techniques.
Abstract:
Object tracking with subpixel accuracy is of fundamental importance in many fields since it provides optimal performance at relatively low cost. Although there are many theoretical proposals that lead to resolution increments of several orders of magnitude, in practice this resolution is limited by the imaging systems. In this paper we propose and demonstrate through numerical models a realistic limit for subpixel accuracy. The final result is that the maximum achievable resolution enhancement is connected with the dynamic range of the image, i.e., the detection limit is 1/2^(nr.bits). The results presented here may help in the proper design of superresolution experiments in microscopy, surveillance, defense and other fields.
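The stated limit is directly computable: with an n-bit sensor the dynamic range caps localisation at 1/2^n of a pixel. A quick worked example (function name is ours, for illustration):

```python
def subpixel_limit(n_bits: int) -> float:
    """Detection limit (in pixels) set by the image dynamic range: 1/2^n_bits."""
    return 1.0 / (2 ** n_bits)

# A standard 8-bit camera cannot resolve displacements below ~1/256 of a pixel,
# while a 12-bit scientific sensor pushes the floor down to 1/4096.
print(subpixel_limit(8))   # 0.00390625
print(subpixel_limit(12))  # 0.000244140625
```

So claims of, say, 1/1000-pixel tracking with an 8-bit camera would exceed this bound, whatever the estimation algorithm.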
Abstract:
Federal Highway Administration, Office of Implementation, Washington, D.C.
Abstract:
Transportation Department, Office of University Research, Washington, D.C.
Abstract:
Federal Aviation Administration, Atlantic City International Airport, N.J.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Office of Research and Development, Washington, D.C.
Abstract:
Federal Transit Administration, Washington, D.C.
Abstract:
The use of modulated temperature differential scanning calorimetry (MTDSC) has provided further insight into the gelatinisation process, since it allows the detection of the glass transition during gelatinisation. It was found in this work that the glass transition overlapped with the gelatinisation peak temperature for all maize starch formulations studied. A systematic investigation of maize starch gelatinisation over a range of water-glycerol concentrations with MTDSC revealed that the addition of glycerol increased the gelatinisation onset temperature to an extent that depended on the water content of the system. Furthermore, the addition of glycerol promoted starch gelatinisation at low water content (0.4 g water/g dry starch), and the enthalpy of gelatinisation varied with glycerol concentration (0.73-19.61 J/g dry starch) depending on the water content and starch type. The validity of published gelatinisation models was explored. These models failed to explain the glass transition phenomena observed during the course of gelatinisation and failed to describe the gelatinisation behaviour observed over the range of water-glycerol concentrations investigated. A hypothesis for the mechanisms involved during gelatinisation was proposed, based on the side-chain liquid crystalline polymer model for starch structure and the concept that the order-disorder transition in starch requires that the hydrogen bonds (the major structural element in granule packing) be broken before the collapse of order (helix-coil transition) can take place.