934 results for Detection process
Abstract:
Intrusion detection is a critical component of information security systems. The intrusion detection process attempts to detect malicious attacks by examining various data collected during processes on the protected system. This paper examines anomaly-based intrusion detection based on sequences of system calls. The aim is to construct a model that describes normal or acceptable system activity using the classification trees approach. The created database is then used as a basis for distinguishing intrusive activity from legitimate activity using string metric algorithms. The major results of the simulation experiments are presented and discussed.
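As an illustration of the string-metric step, here is a minimal Python sketch that flags a system-call trace as intrusive when its minimum edit distance to a database of normal traces exceeds a threshold. Levenshtein distance stands in for the unspecified string metric, and the profile database and threshold are hypothetical.

```python
# Minimal sketch: a trace is anomalous when its minimum string-metric
# distance to every sequence in the normal-profile database exceeds a
# threshold. Levenshtein distance is a representative string metric;
# the database contents and threshold are hypothetical.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

NORMAL_DB = [("open", "read", "write", "close"),
             ("open", "read", "close")]               # hypothetical profiles

def is_intrusive(trace, threshold=1):
    return min(levenshtein(trace, ref) for ref in NORMAL_DB) > threshold

print(is_intrusive(("open", "exec", "socket", "write", "close")))  # True
```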
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications are restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications has remained unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which maps directly to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons, which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications in photonics and optoelectronics. Different approaches to using RET networks exist, with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples, such as (1) fluorescent taggants and (2) stochastic computing.
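The CTMC abstraction can be made concrete with a small simulation. The sketch below draws one sample from the phase-type distribution defined by a hypothetical three-state chain, which is the software analogue of the physical sampling a RET network performs; the rate values are illustrative only.

```python
import random

# Minimal sketch: simulate a continuous-time Markov chain to absorption and
# return the absorption time, i.e. one draw from the phase-type distribution
# the chain defines. The rate matrix is a hypothetical 3-state example,
# standing in for rates set by the chromophore network geometry.

RATES = {  # RATES[i][j]: transition rate from state i to state j
    0: {1: 2.0, 2: 0.5},
    1: {0: 1.0, "absorb": 3.0},   # "absorb" ~ radiative decay (photon emitted)
    2: {"absorb": 0.8},
}

def sample_phase_type(start=0):
    state, t = start, 0.0
    while state != "absorb":
        out = RATES[state]
        total = sum(out.values())
        t += random.expovariate(total)        # exponential holding time
        r, acc = random.uniform(0, total), 0.0
        for nxt, rate in out.items():         # next state w.p. rate / total
            acc += rate
            if r <= acc:
                state = nxt
                break
    return t

samples = [sample_phase_type() for _ in range(10000)]
print(sum(samples) / len(samples))  # empirical mean of the absorption time
```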
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by the number of resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime-coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
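A minimal sketch of the MLE identification step, simplifying the phase-type photon-delay density to a single-exponential decay and using hypothetical candidate rates:

```python
import math
import random

# Minimal sketch of MLE-based taggant identification: given the delays of a
# few hundred detected photons, pick the candidate taggant whose decay model
# maximizes the log-likelihood. Exponential decay stands in for the full
# phase-type density; the candidate rates are hypothetical.

CANDIDATES = {"taggant_A": 0.8, "taggant_B": 1.5, "taggant_C": 3.0}  # rates

def log_likelihood(delays, rate):
    return sum(math.log(rate) - rate * t for t in delays)

def identify(delays):
    return max(CANDIDATES, key=lambda name: log_likelihood(delays, CANDIDATES[name]))

true_rate = CANDIDATES["taggant_B"]
delays = [random.expovariate(true_rate) for _ in range(300)]
print(identify(delays))  # usually "taggant_B", even with only 300 photons
```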
Meanwhile, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms for wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware that traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor/GPU as a specialized functional unit or organized as a discrete accelerator to bring substantial speedups and power savings.
Abstract:
In order to optimize frontal detection in sea surface temperature fields at 4 km resolution, a combined statistical and expert-based approach is applied to test different spatial smoothings of the data prior to the detection process. Fronts are usually detected at 1 km resolution using the histogram-based, single-image edge detection (SIED) algorithm developed by Cayula and Cornillon in 1992, with a standard preliminary smoothing using a median filter and a 3 × 3 pixel kernel. Here, detections are performed in three study regions (off Morocco, the Mozambique Channel, and north-western Australia) and across the Indian Ocean basin using the combination of multiple windows (CMW) method developed by Nieto, Demarcq and McClatchie in 2012, which improves on the original Cayula and Cornillon algorithm. Detections at 4 km and 1 km resolution are compared. Fronts are divided into two intensity classes (“weak” and “strong”) according to their thermal gradient. A preliminary smoothing is applied prior to the detection using different convolutions: three types of filters (median, average and Gaussian) combined with four kernel sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9 pixels) and three detection window sizes (16 × 16, 24 × 24 and 32 × 32 pixels), to test the effect of these smoothing combinations on reducing the background noise of the data and therefore on improving the frontal detection. The performance of the combinations on 4 km data is evaluated using two criteria: detection efficiency and front length. We find that the optimal combination of preliminary smoothing parameters for enhancing detection efficiency and preserving front length includes a median filter, a 16 × 16 pixel window size, and a 5 × 5 pixel kernel for strong fronts or a 7 × 7 pixel kernel for weak fronts. Results show an improvement in detection performance (from the largest to the smallest window size) of 71% for strong fronts and 120% for weak fronts. Despite the small window used (16 × 16 pixels), the length of the fronts is preserved relative to that found with 1 km data. This optimal preliminary smoothing and the CMW detection algorithm on 4 km sea surface temperature data are then used to describe the spatial distribution of the monthly frequencies of occurrence of both strong and weak fronts across the Indian Ocean basin. In general, strong fronts are observed in coastal areas, whereas weak fronts, with some seasonal exceptions, are mainly located in the open ocean. This study shows that adequate noise reduction through a preliminary smoothing of the data considerably improves the frontal detection efficiency as well as the overall quality of the results. Consequently, the use of 4 km data enables frontal detections similar to those with 1 km data (using a standard median 3 × 3 convolution) in terms of detectability, length and location. This method, using 4 km data, is easily applicable to large regions or at the global scale with far fewer constraints on data manipulation and processing time relative to 1 km data.
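As an illustration of the preliminary-smoothing stage, the sketch below applies one of the three filter types at a chosen kernel size and computes the thermal gradient on which the weak/strong classification rests. It does not reproduce the SIED/CMW detection itself, and the SST field and gradient threshold are hypothetical.

```python
import numpy as np
from scipy import ndimage

# Minimal sketch of the preliminary-smoothing step: apply one of the three
# filter types at a chosen kernel size to a 4 km SST field, then compute the
# thermal gradient used to class fronts as "weak" or "strong". Only the
# smoothing/gradient stage is shown; the SST array and the class threshold
# are hypothetical.

def smooth(sst, kind="median", kernel=5):
    if kind == "median":
        return ndimage.median_filter(sst, size=kernel)
    if kind == "average":
        return ndimage.uniform_filter(sst, size=kernel)
    if kind == "gaussian":
        return ndimage.gaussian_filter(sst, sigma=kernel / 4.0)
    raise ValueError(kind)

sst = np.random.default_rng(0).normal(20.0, 1.0, (256, 256))  # fake SST field
smoothed = smooth(sst, kind="median", kernel=5)   # optimal for strong fronts
gy, gx = np.gradient(smoothed, 4.0)               # 4 km pixel spacing
gradient = np.hypot(gx, gy)                       # degrees C per km
strong = gradient > 0.1                           # hypothetical class threshold
print(strong.sum(), "pixels above the strong-front gradient threshold")
```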
Abstract:
A micro gas sensor has been developed by our group for the detection of organophosphate vapors using an aqueous oxime solution. The analyte diffuses from the high-flow-rate gas stream through a porous membrane to the low-flow-rate aqueous phase. It reacts with the oxime PBO (1-phenyl-1,2,3-butanetrione 2-oxime) to produce cyanide ions, which are then detected electrochemically from the change in solution potential. Previous work on this oxime-based electrochemistry indicated that the optimal buffer pH for the aqueous solution is approximately 10. A basic environment is needed for the oxime anion to form and the detection reaction to take place. At this pH, the potential response of the sensor to an analyte (such as acetic anhydride) is maximized. However, sensor response slowly decreases as the aqueous oxime solution ages, by as much as 80% in the first 24 hours. The decrease in sensor response is due to cyanide produced during the oxime degradation process, as evidenced by a cyanide-selective electrode. Solid-phase micro-extraction carried out on the oxime solution found several other possible degradation products, including acetic acid, N-hydroxybenzamide, benzoic acid, benzoyl cyanide, 1-phenyl-1,3-butanedione, 2-isonitrosoacetophenone and an imine derived from the oxime. It was concluded that degradation occurred through nucleophilic attack by a hydroxide or oxime anion to produce cyanide, as well as a nitrogen-atom rearrangement similar to the Beckmann rearrangement. The stability of the oxime in organic solvents is most likely due to the lack of water, and specifically of hydroxide ions. The reaction between the oxime and an organophosphate to produce cyanide ions requires hydroxide ions, and therefore pure organic solvents are not compatible with the current micro-sensor electrochemistry. By combining a concentrated organic oxime solution with the basic aqueous buffer just prior to use in the detection process, oxime degradation can be avoided while preserving the original electrochemical detection scheme. Based on beaker-cell experiments with cyanide-selective electrodes, ethanol was chosen as the best organic solvent due to its stabilizing effect on the oxime, minimal interference with the aqueous electrochemistry, and compatibility with the current microsensor material (PMMA). Further studies showed that ethanol had a small effect on micro-sensor performance, reducing the rate of cyanide production and decreasing the overall response time. To avoid incomplete mixing of the aqueous and organic solutions, they were pre-mixed externally at a 10:1 ratio, respectively. To adapt the microsensor design to allow mixing to take place within the device, a small serpentine-channel component was fabricated with the same dimensions and material as the original sensor, allowing seamless integration of the microsensor with the serpentine mixing channel. Mixing in the serpentine microchannel takes place via diffusion. Both detector potential response and diffusional mixing improve with increased liquid residence time, and thus decreased liquid flow rate. Micromixer performance was studied at a 10:1 aqueous buffer to organic solution flow rate ratio, for a total rate of 5.5 μL/min. It was found that the sensor response using the integrated micromixer was nearly identical to the response when the solutions were premixed and fed at the same rate.
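The pre-mixing arithmetic is straightforward; the sketch below splits the 5.5 μL/min total flow at the 10:1 ratio and estimates the residence time available for diffusive mixing, assuming a hypothetical serpentine-channel volume.

```python
# Minimal sketch of the 10:1 pre-mixing arithmetic: split the 5.5 uL/min
# total flow into aqueous-buffer and organic-oxime streams, and estimate the
# residence time available for diffusive mixing in the serpentine channel.
# Only the 10:1 ratio and the 5.5 uL/min total come from the text; the
# channel volume is hypothetical.

TOTAL_FLOW = 5.5          # uL/min
RATIO = 10                # aqueous : organic

organic = TOTAL_FLOW / (RATIO + 1)      # 0.5 uL/min
aqueous = TOTAL_FLOW - organic          # 5.0 uL/min

CHANNEL_VOLUME = 2.0      # uL, hypothetical serpentine-channel volume
residence_time = CHANNEL_VOLUME / TOTAL_FLOW  # minutes available for diffusion

print(f"aqueous {aqueous:.1f} uL/min, organic {organic:.1f} uL/min, "
      f"residence {residence_time:.2f} min")
```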
Abstract:
This paper presents a prototype system for tracking people in enclosed indoor environments with a high rate of occlusions. The system uses a stereo camera for acquisition and is capable of disambiguating occlusions using a combination of depth-map analysis, a two-step ellipse-fitting people detection process, motion models with Kalman filters, and a novel fit metric based on computationally simple object statistics. Testing shows that our fit metric outperforms commonly used position-based and histogram-based metrics, resulting in more accurate tracking of people.
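As an illustration of the motion-model component, here is a minimal constant-velocity Kalman filter tracking a person's image position; the noise covariances are hypothetical, and the ellipse fitting and fit metric of the paper are not reproduced.

```python
import numpy as np

# Minimal sketch of the motion-model stage: a constant-velocity Kalman
# filter predicting a person's (x, y) position between frames.

dt = 1.0  # one frame
F = np.array([[1, 0, dt, 0],   # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # we observe position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01           # process noise (hypothetical)
R = np.eye(2) * 0.5            # measurement noise (hypothetical)

x = np.zeros(4)                # initial state
P = np.eye(4)                  # initial covariance

def step(z):
    """One predict/update cycle given a measured position z = (x, y)."""
    global x, P
    x = F @ x                          # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y                      # update
    P = (np.eye(4) - K @ H) @ P
    return x[:2]                       # filtered position

for z in [(1.0, 1.0), (2.1, 1.9), (3.0, 3.2)]:
    print(step(np.array(z)))
```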
Abstract:
Current IEEE 802.11 wireless networks are vulnerable to session hijacking attacks, as the existing standards fail to address the lack of authentication of management frames and network card addresses and rely on loosely coupled state machines. Even the new WLAN security standard, IEEE 802.11i, does not address these issues. In our previous work, we proposed two new techniques for improving detection of session hijacking attacks that are passive, computationally inexpensive, reliable, and have minimal impact on network performance. These techniques utilise unspoofable characteristics from the MAC protocol and the physical layer to enhance confidence in the intrusion detection process. This paper extends our earlier work and explores the usability, robustness and accuracy of these intrusion detection techniques by applying them to eight distinct test scenarios. A correlation engine has also been introduced to keep false positives and false negatives at a manageable level. We also explore the process of selecting optimum thresholds for both detection techniques. For the purposes of our experiments, the Snort-Wireless open-source wireless intrusion detection system was extended to implement the new techniques and the correlation engine. The absence of any false negatives and the low number of false positives in all eight test scenarios demonstrate the effectiveness of the correlation engine and the accuracy of the detection techniques.
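The correlation idea can be sketched as a simple two-detector agreement rule: an alert fires only when the MAC-layer and physical-layer detectors both flag activity within a short window, suppressing isolated false positives. The feature names, thresholds and window below are hypothetical, not the paper's actual detectors.

```python
# Minimal sketch of a correlation engine in this spirit. All names and
# values are hypothetical illustrations.

SEQ_GAP_THRESHOLD = 50     # MAC sequence-number jump considered anomalous
RSS_DELTA_THRESHOLD = 10   # dB change in received signal strength
WINDOW = 2.0               # seconds within which the two alarms must agree

def correlate(events):
    """events: time-ordered list of (timestamp, detector, score) tuples."""
    recent = {}  # detector -> timestamp of its last alarm
    alerts = []
    for t, detector, score in events:
        threshold = SEQ_GAP_THRESHOLD if detector == "mac" else RSS_DELTA_THRESHOLD
        if score > threshold:
            recent[detector] = t
            other = "phy" if detector == "mac" else "mac"
            if other in recent and t - recent[other] <= WINDOW:
                alerts.append(t)  # both layers agree: likely hijack
    return alerts

print(correlate([(0.0, "mac", 80), (0.5, "phy", 15), (9.0, "mac", 90)]))
# -> [0.5]: only the correlated pair of alarms produces an alert
```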
Abstract:
In today's technological age, fraud has become more complicated and increasingly difficult to detect, especially when it is collusive in nature. Fraud surveys have shown that the median loss from collusive fraud is much greater than that from fraud perpetrated by a single person. Despite its prevalence and potentially devastating effects, collusion is commonly overlooked as an organizational risk. Internal auditors often fail to proactively consider collusion in their fraud assessment and detection efforts. In this paper, we consider fraud scenarios involving collusion. We present six potentially collusive fraudulent behaviors and show their detection process in an ERP system. We have enhanced our fraud detection framework to aggregate different sources of logs in order to detect communication, and have further enhanced it to be system-agnostic, achieving portability and making it generally applicable to all ERP systems.
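A minimal sketch of the log-aggregation idea: merge entries from several sources and flag creator/approver pairs that co-occur suspiciously often. The log format, rule and threshold are hypothetical and do not reproduce the paper's framework.

```python
from collections import defaultdict

# Minimal sketch: merge entries from different log sources, then flag
# transactions where the separation-of-duties rule "creator and approver
# must differ" is met on paper but the same pair of users co-occurs often,
# hinting at collusion. Everything below is hypothetical.

logs = [  # (source, transaction_id, action, user) -- hypothetical entries
    ("erp",  "T1", "create",  "alice"), ("erp",  "T1", "approve", "bob"),
    ("erp",  "T2", "create",  "alice"), ("erp",  "T2", "approve", "bob"),
    ("erp",  "T3", "create",  "alice"), ("erp",  "T3", "approve", "bob"),
    ("mail", "T3", "message", "alice->bob"),  # communication evidence
]

pair_counts = defaultdict(int)
for source, tid, action, user in logs:
    if source == "erp" and action == "approve":
        creator = next(u for s, t, a, u in logs
                       if t == tid and a == "create")
        pair_counts[(creator, user)] += 1

SUSPICION_THRESHOLD = 3  # hypothetical: same pair on 3+ transactions
for pair, n in pair_counts.items():
    if n >= SUSPICION_THRESHOLD:
        print("possible collusion:", pair, "on", n, "transactions")
```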
Abstract:
Probabilistic topic models have recently been used for activity analysis in video processing, due to their strong capacity to model both local activities and interactions in crowded scenes. In those applications, a video sequence is divided into a collection of uniform non-overlapping video clips, and the high-dimensional continuous inputs are quantized into a bag of discrete visual words. The hard division of video clips and the hard assignment of visual words lead to problems when an activity is split over multiple clips or when the most appropriate visual word for quantization is unclear. In this paper, we propose a novel algorithm which makes use of a soft histogram technique to compensate for the loss of information in the quantization process, and a soft cut technique in the temporal domain to overcome the problems caused by separating an activity into two video clips. In the detection process, we also apply a soft decision strategy to detect unusual events. We show that the proposed soft decision approach outperforms its hard decision counterpart in both local and global activity modelling.
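The soft histogram idea can be illustrated in a few lines: instead of giving all of a feature's mass to its nearest visual word, the mass is split between the two nearest bins in proportion to proximity. The bin layout and feature values below are hypothetical.

```python
import numpy as np

# Minimal sketch of soft histogram assignment: spread a unit of mass over
# the two nearest bins in proportion to proximity, so quantization loses
# less information than a hard nearest-bin assignment.

def soft_histogram(values, edges):
    centers = (edges[:-1] + edges[1:]) / 2
    hist = np.zeros(len(centers))
    for v in values:
        i = np.argmin(np.abs(centers - v))           # nearest bin
        j = i + 1 if v > centers[i] else i - 1       # second-nearest bin
        if 0 <= j < len(centers):
            d_i, d_j = abs(v - centers[i]), abs(v - centers[j])
            w_i = d_j / (d_i + d_j)                  # closer bin gets more mass
            hist[i] += w_i
            hist[j] += 1 - w_i
        else:
            hist[i] += 1.0                           # edge bin keeps all mass
    return hist

edges = np.linspace(0.0, 1.0, 6)                     # five visual words
print(soft_histogram([0.08, 0.52, 0.97], edges))
```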
Abstract:
This paper presents two novel concepts to enhance the accuracy of damage detection using the Modal Strain Energy based Damage Index (MSEDI) in the presence of noise in the mode shape data. Firstly, the paper presents a sequential curve fitting technique that reduces the effect of noise on the calculation process of the MSEDI more effectively than the two commonly used curve fitting techniques, namely polynomial and Fourier series fitting. Secondly, a probability-based Generalized Damage Localization Index (GDLI) is proposed as a viable improvement to the damage detection process. The study uses a validated ABAQUS finite-element model of a reinforced concrete beam to obtain mode shape data in the undamaged and damaged states. Noise is simulated by adding three levels of random noise (1%, 3%, and 5%) to the mode shape data. Results show that damage detection is enhanced as the number of modes and samples used with the GDLI increases.
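The statistical normalization that underlies probability-based localization can be sketched as follows: an element-wise damage index is converted to a Z-score and elements beyond a confidence threshold are flagged. This is the standard normalization step used with strain-energy indices, not the GDLI itself, and the index values are hypothetical.

```python
import numpy as np

# Minimal sketch of the statistical step behind probability-based damage
# localization: normalize an element-wise damage index to a Z-score and
# flag elements exceeding a confidence threshold. Index values are
# hypothetical.

beta = np.array([1.01, 0.98, 1.02, 1.35, 0.99, 1.00])  # index per element
z = (beta - beta.mean()) / beta.std()
damaged = np.where(z > 2.0)[0]                 # ~97.7% one-sided confidence
print("suspected damaged elements:", damaged)  # element 3
```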
Abstract:
The modern structural diagnosis process relies on vibration characteristics to assess the safe serviceability level of a structure. This paper examines the potential of the change-in-flexibility method for use in the damage detection process, together with two main practical constraints associated with it. The first constraint addressed in this paper is the reduction in the number of data acquisition points due to a limited number of sensors. Results show that the accuracy of the change-in-flexibility method is influenced by the number of data acquisition points/sensor locations in real structures. Secondly, the effect of higher modes on the damage detection process is studied, addressing the difficulty of extracting higher-order modal data with the available sensors. Four damage indices are presented to identify their potential for damage detection with respect to different locations and severities of damage. A simply supported beam with two degrees of freedom at each node is considered, with only single damage cases, throughout the paper.
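A minimal sketch of the change-in-flexibility method: the flexibility matrix is approximated from the few measured mass-normalized modes, and the largest column-wise change between the intact and damaged states indicates the damage location. Convergence with only the lower-order modes is what makes the method attractive when higher modal data are hard to measure. The mode shapes and frequencies below are hypothetical.

```python
import numpy as np

# Minimal sketch of the change-in-flexibility method:
# F ~ sum_i phi_i phi_i^T / omega_i^2 over the measured modes, and the
# maximum column-wise change in F points to the damage location.

def flexibility(phis, omegas):
    F = np.zeros((phis.shape[1], phis.shape[1]))
    for phi, w in zip(phis, omegas):
        F += np.outer(phi, phi) / w**2
    return F

# rows = modes, columns = sensor locations (hypothetical data)
phi_intact  = np.array([[0.3, 0.6, 0.6, 0.3], [0.6, 0.3, -0.3, -0.6]])
phi_damaged = np.array([[0.3, 0.7, 0.6, 0.3], [0.6, 0.4, -0.3, -0.6]])
omegas = np.array([10.0, 25.0])  # natural frequencies, rad/s

delta = np.abs(flexibility(phi_damaged, omegas) - flexibility(phi_intact, omegas))
print("suspected location:", delta.max(axis=0).argmax())  # column of max change
```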
Abstract:
Damage assessment (damage detection, localization and quantification) in structures, together with appropriate retrofitting, enables the safe and efficient function of structures. In this context, many Vibration Based Damage Identification Techniques (VBDIT) have emerged with potential for accurate damage assessment. VBDITs have attracted significant research interest in recent years, mainly due to their non-destructive nature and their ability to assess inaccessible and invisible damage locations. Damage Index (DI) methods are also vibration based, but they do not rely on a structural model. DI methods are fast and inexpensive compared to model-based methods and have the ability to automate the damage detection process. A DI method analyses the change in the vibration response of the structure between two states so that damage can be identified. Extensive research has been carried out on applying DI methods to assess damage in steel structures. Comparatively, there has been very little research interest in using DI methods to assess damage in Reinforced Concrete (RC) structures, due to the complexity of simulating the predominant damage type, the flexural crack. Flexural cracks in RC beams distribute non-linearly and propagate in all directions. Secondary cracks extend more rapidly along the longitudinal and transverse directions of an RC structure than existing cracks propagate in the depth direction, owing to the stress distribution caused by the tensile reinforcement. Simplified damage simulation techniques (such as reductions in the modulus or section depth, or the use of rotational spring elements) that have been used extensively in research on steel structures cannot be applied to simulate flexural cracks in RC elements. This highlights a significant gap in knowledge, and as a consequence VBDITs have not been successfully applied to damage assessment in RC structures. This research addresses the above gap by developing and applying a modal strain energy based DI method to assess damage in RC flexural members. Firstly, the research evaluated different damage simulation techniques and recommended an appropriate technique to simulate the post-cracking behaviour of RC structures. The ABAQUS finite element package was used throughout the study with properly validated material models. The damaged plasticity model was recommended as the method that can correctly simulate the post-cracking behaviour of RC structures, and it was used in the rest of the study. Four different forms of Modal Strain Energy based Damage Indices (MSEDIs) were proposed to improve damage assessment capability by minimising the number and intensity of false alarms. The developed MSEDIs were then used to automate the damage detection process by incorporating programmable algorithms. The developed algorithms have the ability to identify common issues associated with the vibration properties, such as mode shifting and phase change. To minimise the effect of noise on the DI calculation process, this research proposed a sequential curve fitting technique. Finally, a statistics-based damage assessment scheme was proposed to enhance the reliability of the damage assessment results. The proposed techniques were applied to locate damage in RC beams and in a slab-on-girder bridge model to demonstrate their accuracy and efficiency. The outcomes of this research make a significant contribution to the technical knowledge of VBDITs and enhance the accuracy of damage assessment in RC structures. The application of the research findings to RC flexural members will enable their safe and efficient performance.
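The core of a modal strain energy based damage index can be sketched for a beam, where element strain energy is proportional to the squared mode-shape curvature. The sketch below uses the generic Stubbs-type construction, not any of the four MSEDI forms developed in this research, and the mode shapes are synthetic.

```python
import numpy as np

# Minimal sketch of a Stubbs-type modal strain energy damage index: the
# ratio of damaged to undamaged element curvature energy (each normalized
# by its total) rises above 1 where stiffness has been lost. The mode
# shapes are synthetic and the local perturbation is hypothetical.

x = np.linspace(0, 1, 21)
phi_u = np.sin(np.pi * x)        # undamaged first mode of a simple beam
phi_d = phi_u.copy()
phi_d[9:12] *= 1.06              # hypothetical local stiffness loss

def stubbs_index(phi_u, phi_d, x):
    ku = np.gradient(np.gradient(phi_u, x), x) ** 2  # curvature energy density
    kd = np.gradient(np.gradient(phi_d, x), x) ** 2
    num = (kd + kd.sum()) / kd.sum()
    den = (ku + ku.sum()) / ku.sum()
    return num / den

beta = stubbs_index(phi_u, phi_d, x)
print("peak damage index at node", beta.argmax())  # inside the perturbed region
```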
Abstract:
A considerable amount of research has proposed optimization-based approaches employing various vibration parameters for structural damage diagnosis. Damage detection by these methods is in fact the result of updating the analytical structural model in line with the current physical model. The feasibility of these approaches has been proven, but most of the verification has been done on simple structures, such as beams or plates. In application to a complex structure, such as a steel truss bridge, a traditional optimization process consumes massive computational resources and converges slowly. This study presents a multi-layer genetic algorithm (ML-GA) to overcome this problem. Unlike the tedious convergence of a conventional damage optimization process, in each layer the proposed algorithm divides the GA's population into groups with a smaller number of damage candidates; the converged population in each group then serves as the initial population of the next layer, where the groups merge into larger groups. In a damage detection process featuring ML-GA, parallel computation can be implemented, enhancing optimization performance and computational efficiency. To assess the proposed algorithm, the modal strain energy correlation (MSEC) is taken as the objective function. Several damage scenarios of a complex steel truss bridge's finite element model are employed to evaluate the effectiveness and performance of ML-GA against a conventional GA. In both single- and multiple-damage scenarios, the analytical and experimental study shows that the MSEC index achieves excellent damage indication and efficiency using the proposed ML-GA, whereas the conventional GA only converges to a local solution.
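A schematic of the layered structure follows, with a toy objective (distance to a hidden damage vector) standing in for the MSEC; the group split, operators and parameters are all hypothetical and only illustrate how groups evolve and then merge.

```python
import random

# Schematic of the multi-layer GA idea: layer 1 evolves groups restricted
# to fewer damage candidates; survivors merge into a group whose search
# space is the union, and evolution continues. Toy objective only.

TRUE_DAMAGE = {3: 0.4, 7: 0.2}            # element -> severity (hidden)

def fitness(candidate):
    keys = set(candidate) | set(TRUE_DAMAGE)
    return -sum((candidate.get(k, 0) - TRUE_DAMAGE.get(k, 0))**2 for k in keys)

def evolve(pop, elements, generations=50):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:len(pop) // 2]                   # selection
        children = []
        for parent in pop:
            child = dict(parent)
            k = random.choice(elements)             # mutate one candidate element
            child[k] = min(1.0, max(0.0, child.get(k, 0) + random.gauss(0, 0.1)))
            children.append(child)
        pop += children
    return pop

random.seed(1)
groups = [[{} for _ in range(20)] for _ in range(2)]
element_groups = [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]    # layer 1: split candidates

layer1 = [evolve(p, e) for p, e in zip(groups, element_groups)]
merged = [{**a, **b} for a, b in zip(*layer1)]         # layer 2: merge groups
final = evolve(merged, list(range(10)))
best = max(final, key=fitness)
print({k: round(v, 2) for k, v in best.items() if v > 0.05})
# typically close to the hidden damage {3: 0.4, 7: 0.2}
```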
Abstract:
A victim of phishing emails can be subjected to money loss and identity theft. This paper investigates the different types of phishing email victims, with the goal of strengthening such victims' defences. To obtain this kind of information, an experiment which involves sending a phishing email to participants is conducted. Quantitative and qualitative methods are also used to collect users' information. A model for detecting deception has been employed to understand victims' behaviour. This paper reports the qualitative results. The findings suggest that victims of phishing emails do not always exhibit the same vulnerability. Falling victim results from three weaknesses in the detection process: (1) lack of knowledge; (2) a weak confirmation channel; and (3) a high propensity towards risk-taking. It is therefore suggested that users be provided with suitable confirmation channels and be more risk-averse in their behaviour so that they do not fall victim to phishing emails.
Abstract:
Ever since Cox et al. published their paper “A Secure, Robust Watermark for Multimedia” in 1996 [6], there has been tremendous progress in multimedia watermarking. The same pattern re-emerged when Agrawal and Kiernan published “Watermarking Relational Databases” in 2001 [1]. However, little attention has been given to primitive data collections, with only a handful of research works known to the authors [11, 10]. This is primarily due to the absence of an attribute that differentiates marked items from unmarked items during the insertion and detection processes. This paper presents a distribution-independent watermarking model that is secure against secondary watermarking in addition to conventional attacks such as data addition, deletion and distortion. Low false positives and high capacity provide additional strength to the scheme. These claims are backed by the experimental results provided in the paper.
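For orientation, the general keyed insertion/detection mechanism can be sketched as below, in the style of Agrawal and Kiernan's relational scheme applied naively to numeric items. This only illustrates the mechanism; it is not the distribution-independent, secondary-watermarking-resistant scheme the paper presents.

```python
import hmac, hashlib

# Naive sketch of keyed watermark insertion/detection on numeric items:
# a secret key and each item's stable high digits decide whether the item
# carries a mark in its last digit. Illustration only.

KEY = b"secret"
GAMMA = 2  # roughly 1 in GAMMA items carries a mark

def keyed(value, salt):
    digest = hmac.new(KEY, f"{salt}:{value}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def insert(items):
    marked = []
    for v in items:
        base = v - (v % 10)                     # key on the stable high digits
        if keyed(base, "select") % GAMMA == 0:
            v = base + keyed(base, "bit") % 2   # overwrite last digit with mark
        marked.append(v)
    return marked

def detect(items):
    hits = total = 0
    for v in items:
        base = v - (v % 10)
        if keyed(base, "select") % GAMMA == 0:
            total += 1
            hits += (v % 10) == keyed(base, "bit") % 2
    return hits / total if total else 0.0       # ~1.0 if watermark present

data = [1037, 2254, 887, 4196, 5520, 6743, 7314, 88, 9901, 342]
print(detect(insert(data)))                     # 1.0; unmarked data gives ~0.5
```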
Abstract:
In the field of vibration-based damage detection of concrete structures, efficient damage models are needed to better understand changes in the vibration properties of cracked structures. These models should quantitatively replicate the damage mechanisms in concrete and be easy to use as damage detection tools. In this paper, the flexural cracking behaviour of plain concrete prisms subject to monotonic and cyclic loading regimes under displacement control is tested experimentally and modelled numerically. Four-point bending tests on simply supported un-notched prisms are conducted, and the cracking process is monitored using a digital image correlation system. A numerical model with a single crack at midspan is presented, in which the cracked zone is modelled using the fictitious crack approach and the parts outside that zone are treated in a linear-elastic manner. The model considers crack initiation, growth and closure by adopting cyclic constitutive laws. A multi-variate Newton-Raphson iterative solver is used to solve the non-linear equations and ensure equilibrium and compatibility at the interface of the cracked zone. The numerical results agree well with the experiments for both loading scenarios. The model gives good predictions of the degradation of stiffness with increasing load, and it approximates the crack-mouth-opening displacement well when compared with the experimental data from the digital image correlation system. The model is computationally efficient, running a full cyclic-loading analysis in less than 2 min, and it can therefore be used within the damage detection process.
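The solver component is standard; a minimal multi-variate Newton-Raphson sketch on a toy two-equation residual (not the model's actual cohesive-crack equations) looks like this:

```python
import numpy as np

# Minimal sketch of multi-variate Newton-Raphson iteration:
# x_{k+1} = x_k - J(x_k)^{-1} r(x_k), iterated until the residual norm is
# small. The residual below is a toy system standing in for the equilibrium
# and compatibility equations at the cracked-zone interface.

def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(jacobian(x), r)  # Newton step
    return x

# toy system: r1 = x^2 + y - 3, r2 = x - y (solution x = y ~ 1.3028)
residual = lambda v: np.array([v[0]**2 + v[1] - 3, v[0] - v[1]])
jacobian = lambda v: np.array([[2 * v[0], 1.0], [1.0, -1.0]])
print(newton_raphson(residual, jacobian, [1.0, 1.0]))
```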