889 results for intrusion detection system (IDS)


Relevance: 30.00%

Publisher:

Abstract:

We seek to both detect and segment objects in images. To exploit both local image data as well as contextual information, we introduce Boosted Random Fields (BRFs), which uses Boosting to learn the graph structure and local evidence of a conditional random field (CRF). The graph structure is learned by assembling graph fragments in an additive model. The connections between individual pixels are not very informative, but by using dense graphs, we can pool information from large regions of the image; dense models also support efficient inference. We show how contextual information from other objects can improve detection performance, both in terms of accuracy and speed, by using a computational cascade. We apply our system to detect stuff and things in office and street scenes.

Relevance: 30.00%

Publisher:

Abstract:

This thesis presents a learning-based approach for detecting classes of objects and patterns with variable image appearance but highly predictable image boundaries. It consists of two parts. In part one, we introduce our object and pattern detection approach using a concrete human face detection example. The approach first builds a distribution-based model of the target pattern class in an appropriate feature space to describe the target's variable image appearance. It then learns from examples a similarity measure for matching new patterns against the distribution-based target model. The approach makes few assumptions about the target pattern class and should therefore be fairly general, as long as the target class has predictable image boundaries. Because our object and pattern detection approach is very much learning-based, how well a system eventually performs depends heavily on the quality of the training examples it receives. The second part of this thesis looks at how one can select high-quality examples for function approximation learning tasks. We propose an active learning formulation for function approximation and show, for three specific approximation function classes, that the active example selection strategy learns its target with fewer data samples than random sampling. We then simplify the original active learning formulation and show how it leads to a tractable example selection paradigm, suitable for use in many object and pattern detection problems.
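As a toy illustration of the active example selection idea described above, the sketch below queries the input where an ensemble of fitted approximators disagrees most, instead of sampling at random. It is a generic, hypothetical stand-in for an uncertainty-driven selection criterion, not the thesis's formulation or its three specific function classes; all names and parameters are illustrative.

```python
import numpy as np

def select_next_query(candidate_x, fitted_models):
    """Pick the candidate input where an ensemble of approximators disagrees most:
    a simple stand-in for an uncertainty-driven active-learning criterion."""
    predictions = np.stack([m(candidate_x) for m in fitted_models])  # (models, candidates)
    disagreement = predictions.var(axis=0)
    return candidate_x[np.argmax(disagreement)]

# Example: polynomial fits to points sampled only from [-1, 0] disagree most
# on the unexplored half of the domain, so that region is queried next.
rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 0, size=5)
y_train = np.sin(3 * x_train)
models = [np.poly1d(np.polyfit(x_train, y_train, deg=d)) for d in (1, 2, 3)]
candidates = np.linspace(-1, 1, 201)
print(select_next_query(candidates, models))
```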

Relevance: 30.00%

Publisher:

Abstract:

Information representation is a critical issue in machine vision. The representation strategy in the primitive stages of a vision system has enormous implications for the performance in subsequent stages. Existing feature extraction paradigms, like edge detection, provide sparse and unreliable representations of the image information. In this thesis, we propose a novel feature extraction paradigm. The features consist of salient, simple parts of regions bounded by zero-crossings. The features are dense, stable, and robust. The primary advantage of the features is that they have abstract geometric attributes pertaining to their size and shape. To demonstrate the utility of the feature extraction paradigm, we apply it to passive navigation. We argue that the paradigm is applicable to other early vision problems.

Relevance: 30.00%

Publisher:

Abstract:

This report describes the implementation of a theory of edge detection, proposed by Marr and Hildreth (1979). According to this theory, the image is first processed independently through a set of different-size filters whose shape is the Laplacian of a Gaussian, ∇²G. Zero-crossings in the output of these filters mark the positions of intensity changes at different resolutions. Information about these zero-crossings is then used for deriving a full symbolic description of changes in intensity in the image, called the raw primal sketch. The theory is closely tied to early processing in the human visual system. In this report, we first examine the critical properties of the initial filters used in the edge detection process, from both a theoretical and a practical standpoint. The implementation is then used as a test bed for exploring aspects of the human visual system, in particular acuity and hyperacuity. Finally, we present some preliminary results concerning the relationship between zero-crossings detected at different resolutions, and some observations relevant to the process by which the human visual system integrates descriptions of intensity changes obtained at different resolutions.
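For concreteness, here is a minimal sketch of the multi-scale ∇²G filtering and zero-crossing detection the report builds on, using NumPy and SciPy; the filter scales and the sign-change test are illustrative choices rather than the report's exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def zero_crossings(response):
    """Mark pixels where the filter output changes sign horizontally or vertically."""
    positive = response > 0
    zc = np.zeros_like(positive, dtype=bool)
    zc[:, :-1] |= positive[:, :-1] != positive[:, 1:]   # horizontal sign changes
    zc[:-1, :] |= positive[:-1, :] != positive[1:, :]   # vertical sign changes
    return zc

def multiscale_edges(image, sigmas=(1.0, 2.0, 4.0)):
    """Filter the image with Laplacian-of-Gaussian kernels of several sizes and
    return one zero-crossing map per scale (scales chosen for illustration)."""
    image = np.asarray(image, dtype=float)
    return {s: zero_crossings(gaussian_laplace(image, sigma=s)) for s in sigmas}
```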

Relevance: 30.00%

Publisher:

Abstract:

Malicious software (malware) has significantly increased in number and effectiveness during the past years. Until 2006, such software was mostly used to disrupt network infrastructures or to show off coders' skills. Nowadays, malware constitutes a very important source of economic profit and is very difficult to detect. Thousands of novel variants are released every day, and modern obfuscation techniques are used to ensure that signature-based anti-malware systems are not able to detect such threats. This tendency has also appeared on mobile devices, with Android being the most targeted platform. To counteract this phenomenon, the scientific community has developed many approaches that attempt to increase the resilience of anti-malware systems. Most of these approaches rely on machine learning and have also become very popular in commercial applications. However, attackers are now knowledgeable about these systems and have started preparing their countermeasures. This has led to an arms race between attackers and developers: novel systems are progressively built to tackle attacks that get more and more sophisticated. For this reason, developers increasingly need to anticipate the attackers' moves. This means that defense systems should be built proactively, i.e., by introducing security design principles into their development. The main goal of this work is to show that such a proactive approach can be employed on a number of case studies. To do so, I adopted a global methodology that can be divided into two steps: first, understanding the vulnerabilities of current state-of-the-art systems (thus anticipating the attackers' moves); then, developing novel systems that are robust to these attacks, or suggesting research guidelines with which current systems can be improved. This work presents two main case studies, concerning the detection of PDF and Android malware, the idea being to show that a proactive approach can be applied to both the x86 and the mobile world. The contributions provided on these two case studies are manifold. With respect to PDF files, I first develop novel attacks that can empirically and optimally evade current state-of-the-art detectors. Then, I propose possible solutions with which it is possible to increase the robustness of such detectors against known and novel attacks. With respect to the Android case study, I first show how current signature-based tools and academically developed systems are weak against empirical obfuscation attacks, which can be easily employed without particular knowledge of the targeted systems. Then, I examine a possible strategy to build a machine learning detector that is robust against both empirical obfuscation and optimal attacks. Finally, I show how proactive approaches can also be employed to develop systems that are not aimed at detecting malware, such as mobile fingerprinting systems. In particular, I propose a methodology to build a powerful mobile fingerprinting system, and examine possible attacks with which users might be able to evade it, thus preserving their privacy.
To provide the aforementioned contributions, I co-developed (in cooperation with researchers at PRALab and Ruhr-Universität Bochum) various systems: a library to perform optimal attacks against machine learning systems (AdversariaLib), a framework for automatically obfuscating Android applications, a system for the robust detection of JavaScript malware inside PDF files (LuxOR), a robust machine learning system for the detection of Android malware, and a system to fingerprint mobile devices. I also contributed to the development of Android PRAGuard, a dataset containing a large number of empirical obfuscation attacks against the Android platform. Finally, I entirely developed Slayer NEO, an evolution of a previous system for the detection of PDF malware. The results attained by using the aforementioned tools show that it is possible to proactively build systems that predict possible evasion attacks. This suggests that a proactive approach is crucial to building systems that provide concrete security against general and evasion attacks.
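To make the "optimal attack" idea concrete, the following is a textbook-style sketch of gradient-descent evasion against a differentiable classifier: the attacker perturbs a malicious sample's feature vector to lower its malicious score while staying within a distance budget. This is a generic illustration of the attack family discussed, not AdversariaLib's API or the thesis's exact formulation; all function names and parameters are assumptions.

```python
import numpy as np

def evade(x_malicious, score_fn, grad_fn, step=0.05, budget=1.0, iters=100):
    """Gradient-descent evasion: nudge the feature vector to lower the classifier's
    'malicious' score while keeping the perturbation inside an L2 budget.
    score_fn and grad_fn are assumed to expose a differentiable target classifier."""
    x = x_malicious.astype(float)
    for _ in range(iters):
        x = x - step * grad_fn(x)                  # descend the malicious score
        delta = x - x_malicious
        norm = np.linalg.norm(delta)
        if norm > budget:                          # project back onto the budget ball
            x = x_malicious + delta * (budget / norm)
        if score_fn(x) < 0:                        # crossed the decision boundary
            break
    return x

# Toy usage with a linear score w.x + b (purely illustrative).
w, b = np.array([1.0, -2.0, 0.5]), -0.1
x_adv = evade(np.array([2.0, 0.5, 1.0]), lambda x: w @ x + b, lambda x: w)
```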

Relevance: 30.00%

Publisher:

Abstract:

We introduce "BU-MIA," a Medical Image Analysis system that integrates various advanced chest image analysis methods for detection, estimation, segmentation, and registration. BU-MIA evaluates repeated computed tomography (CT) scans of the same patient to facilitate identification and evaluation of pulmonary nodules for interval growth. It provides a user-friendly graphical user interface with a number of interaction tools for development, evaluation, and validation of chest image analysis methods. The structures that BU-MIA processes include the thorax, lungs, and trachea, pulmonary structures, such as lobes, fissures, nodules, and vessels, and bones, such as sternum, vertebrae, and ribs.

Relevance: 30.00%

Publisher:

Abstract:

An automated system for detection of head movements is described. The goal is to label relevant head gestures in video of American Sign Language (ASL) communication. In the system, a 3D head tracker recovers head rotation and translation parameters from monocular video. Relevant head gestures are then detected by analyzing the length and frequency of the motion signal's peaks and valleys. Each parameter is analyzed independently, because a number of relevant head movements in ASL are associated with major changes around one rotational axis. No explicit training of the system is necessary. Currently, the system can detect "head shakes." In experimental evaluation, classification performance is compared against ground-truth labels obtained from ASL linguists. Initial results are promising, as the system matches the linguists' labels in a significant number of cases.
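A rough sketch of the peak-and-valley analysis: treating the yaw (rotation about the vertical axis) signal on its own, a window is flagged as a possible head shake when it contains enough sufficiently large peaks and valleys. The thresholds, window length, and function name are invented for illustration and are not the system's actual parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_head_shake(yaw_deg, fps=30, min_prominence=3.0, min_extrema=3, window_s=1.0):
    """Flag frames belonging to windows of the yaw signal (degrees) that contain
    enough sufficiently prominent peaks and valleys, suggesting a head shake."""
    win = int(window_s * fps)
    flags = np.zeros(len(yaw_deg), dtype=bool)
    for start in range(0, len(yaw_deg) - win):
        segment = np.asarray(yaw_deg[start:start + win], dtype=float)
        peaks, _ = find_peaks(segment, prominence=min_prominence)
        valleys, _ = find_peaks(-segment, prominence=min_prominence)
        if len(peaks) + len(valleys) >= min_extrema:
            flags[start:start + win] = True
    return flags
```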

Relevance: 30.00%

Publisher:

Abstract:

A common design of an object recognition system has two steps: a detection step followed by a foreground within-class classification step. For example, consider face detection by a boosted cascade of detectors followed by face ID recognition via one-vs-all (OVA) classifiers. Another example is human detection followed by pose recognition. Although the detection step can be quite fast, the foreground within-class classification process can be slow and becomes a bottleneck. In this work, we formulate a filter-and-refine scheme, where the binary outputs of the weak classifiers in a boosted detector are used to identify a small number of candidate foreground state hypotheses quickly via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the FRGC V2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and view angle estimation on a multi-view vehicle data set. On all data sets, our approach has comparable accuracy and is at least five times faster than the brute-force approach.
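A minimal sketch of the filter step, assuming the weak classifiers' binary outputs are already available as bit vectors for the query and for a gallery of foreground state hypotheses; the candidate count k and all names are illustrative.

```python
import numpy as np

def weak_classifier_bits(scores, thresholds):
    """Binarize weak-classifier scores (one row per example) into 0/1 codes."""
    return (np.asarray(scores) > np.asarray(thresholds)).astype(np.uint8)

def filter_candidates(query_bits, gallery_bits, k=10, weights=None):
    """Filter step: rank gallery hypotheses by (weighted) Hamming distance to the
    query's weak-classifier bit vector and return the k nearest candidates; the
    slower within-class classifier is then run only on these candidates."""
    disagreements = gallery_bits != query_bits             # boolean disagreement matrix
    if weights is None:
        distances = disagreements.sum(axis=1)              # plain Hamming distance
    else:
        distances = disagreements.astype(float) @ weights  # weighted Hamming distance
    return np.argsort(distances)[:k]                       # indices of the k best hypotheses
```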

Relevance: 30.00%

Publisher:

Abstract:

A method for deformable shape detection and recognition is described. Deformable shape templates are used to partition the image into a globally consistent interpretation, determined in part by the minimum description length principle. Statistical shape models enforce the prior probabilities on global, parametric deformations for each object class. Once trained, the system autonomously segments deformed shapes from the background, without merging them with adjacent objects or shadows. The formulation can be used to group image regions based on any image homogeneity predicate, e.g., texture, color, or motion. The recovered shape models can be used directly in object recognition. Experiments with color imagery are reported.

Relevance: 30.00%

Publisher:

Abstract:

A human-computer interface (HCI) system designed for use by people with severe disabilities is presented. People who are severely paralyzed or afflicted with diseases such as ALS (Lou Gehrig's disease) or multiple sclerosis may be unable to move or control any part of their body except their eyes. The system presented here detects the user's eye blinks and analyzes the pattern and duration of the blinks, using them to provide input to the computer in the form of a mouse click. After the system initializes itself automatically by processing the user's involuntary eye blinks in the first few seconds of use, the eye is tracked in real time using correlation with an online template. If the user's depth changes significantly or rapid head movement occurs, the system is automatically reinitialized. Neither special lighting nor offline templates are required for the proper functioning of the system. The system works with inexpensive USB cameras and runs at a frame rate of 30 frames per second. Extensive experiments were conducted to determine both the system's accuracy in classifying voluntary and involuntary blinks, and the system's fitness under varying environmental conditions, such as alternative camera placements and different lighting conditions. These experiments on eight test subjects yielded an overall detection accuracy of 95.3%.
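A bare-bones sketch of the two ingredients described above: locating the eye by normalized cross-correlation with an online template (here via OpenCV's matchTemplate), and classifying closures as voluntary or involuntary from their duration. Thresholds, the search-box convention, and function names are illustrative assumptions, not the published system's parameters.

```python
import cv2
import numpy as np

def track_eye(frame_gray, template, search_box):
    """Locate the eye template inside a search box (x, y, w, h) using
    normalized cross-correlation; returns the best score and location."""
    x, y, w, h = search_box
    region = frame_gray[y:y + h, x:x + w]
    result = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(result)
    return best_score, (x + best_loc[0], y + best_loc[1])

def classify_blinks(open_scores, fps=30, open_threshold=0.6, voluntary_ms=250):
    """Turn per-frame 'eye open' correlation scores into blink events, labelling
    closures longer than voluntary_ms as voluntary (thresholds are illustrative)."""
    closed = np.append(np.asarray(open_scores) < open_threshold, False)
    events, run = [], 0
    for is_closed in closed:
        if is_closed:
            run += 1
        elif run:
            duration_ms = 1000.0 * run / fps
            events.append("voluntary" if duration_ms >= voluntary_ms else "involuntary")
            run = 0
    return events
```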

Relevance: 30.00%

Publisher:

Abstract:

Accurate head tilt detection has great potential to aid people with disabilities in the use of human-computer interfaces and provide universal access to communication software. We show how it can be utilized to tab through links on a web page or control a video game with head motions. It may also be useful as a correction method for currently available video-based assistive technology that requires upright facial poses. Few of the existing computer vision methods that detect head rotations in and out of the image plane with reasonable accuracy can operate within the context of a real-time communication interface, because the computational expense they incur is too great. Our method uses a variety of metrics to obtain a robust head tilt estimate without incurring the computational cost of previous methods. Our system runs in real time on a computer with a 2.53 GHz processor, 256 MB of RAM, and an inexpensive webcam, using only 55% of the processor cycles.

Relevance: 30.00%

Publisher:

Abstract:

A neural network system, NAVITE, for incremental trajectory generation and obstacle avoidance is presented. Unlike other approaches, the system is effective in unstructured environments. Multimodal information from visual and range data is used for obstacle detection and to eliminate uncertainty in the measurements. Optimal paths are computed without explicitly optimizing cost functions, thereby reducing computational expense. Simulations of a planar mobile robot (including the dynamic characteristics of the plant) in obstacle-free and obstacle-avoidance trajectories are presented. The system can be extended to incorporate global map information into the local decision-making process.

Relevance: 30.00%

Publisher:

Abstract:

The research work in this thesis addressed the sensitive and selective separation of biological substances by capillary electrophoresis with a boron-doped diamond electrode for amperometric detection. Chapter 1 introduced capillary electrophoresis and electrochemical detection, covering the different modes of capillary electrophoresis, polyelectrolyte multilayer coatings for open tubular capillary electrochromatography, the different modes of electrochemical detection, and carbon-based electrodes. Chapter 2 described the synthesis and electropolymerization of N-acetyltyramine with a negatively charged sulfobutylether-β-cyclodextrin on a boron-doped diamond (BDD) electrode, followed by the electropolymerization of pyrrole, to form a stable and permselective film for selective dopamine detection. For comparison, a glassy carbon (GC) electrode with a combined electropolymerized permselective film of polytyramine and polypyrrole-1-propionic acid was used for selective detection of dopamine. The detection limit of dopamine was improved from 100 nM at a GC electrode to 5 nM at a BDD electrode. Chapter 3 presented field-amplified sample stacking using a fused silica capillary coated with gold nanoparticles embedded in poly(diallyldimethylammonium) chloride, which was investigated for the electrophoretic separation of indoxyl sulphate, homovanillic acid, and vanillylmandelic acid. The detection limit of the three analytes obtained by using a boron-doped diamond electrode was around 75 nM, which is significantly below their normal physiological levels in biological fluids. This combined separation and detection scheme was applied to the direct analysis of these analytes and other interfering chemicals, including uric and ascorbic acids, in urine samples without off-line sample treatment or preconcentration. Chapter 4 described the selective detection of the Pseudomonas Quinolone Signal (PQS), used in quorum sensing, from its precursor HHQ, using a simple boron-doped diamond electrode. Furthermore, by combining a poly(diallyldimethylammonium) chloride-modified fused silica capillary with a BDD electrode for amperometric detection, PQS was separated from HHQ and other analogues. The detection limit of PQS was as low as 65 nM. Different P. aeruginosa mutant strains were studied. Chapter 5 described the separation of aminothiols using a layer-by-layer-coated silica capillary with a boron-doped diamond electrode. The capillary was layer-by-layer coated with the polycation poly(diallyldimethylammonium) chloride and negatively charged silica nanoparticles. All the aminothiols were separated and detected using a BDD electrode in an acidic electrolyte. This is a novel scheme for the separation and detection of the reduced and oxidized forms of glutathione, which is important for estimating oxidative stress levels in the human body.

Relevance: 30.00%

Publisher:

Abstract:

Lidar is an optical remote sensing instrument that can measure atmospheric parameters. A Raman lidar instrument (UCLID) was established at University College Cork to contribute to the European lidar network, EARLINET. System performance tests were carried out to ensure strict data quality assurance for submission to the EARLINET database; procedures include overlap correction, the telecover test, the Rayleigh test, and the zero-bin test. Raman backscatter coefficients, extinction coefficients, and lidar ratios were measured from April 2010 to May 2011 and from February 2012 to June 2012. Statistical analysis of the profiles over these periods provided new information about the typical atmospheric scenarios over Southern Ireland in terms of aerosol load in the lower troposphere, planetary boundary layer (PBL) height, aerosol optical depth (AOD) at 532 nm, and lidar ratio values. The arithmetic average of the PBL height was found to be 608 ± 138 m with a median of 615 m, while the average AOD at 532 nm was 0.119 ± 0.023 for clean marine air masses and 0.170 ± 0.036 for polluted air masses. The lidar ratio showed a seasonal dependence, with lower values found in winter and autumn (20 ± 5 sr) and higher values during spring and summer (30 ± 12 sr). Volcanic particles from the eruption of the volcano Eyjafjallajökull in Iceland were detected between 21 April and 7 May 2010. The backscatter coefficient of the ash layer varied between 2.5 Mm⁻¹sr⁻¹ and 3.5 Mm⁻¹sr⁻¹, and the AOD at 532 nm was estimated to be between 0.090 and 0.215. Several aerosol loads due to Saharan dust particles were detected in spring 2011 and 2012. The lidar ratios of the dust layers were determined to be between 45 and 77 sr, and the AOD at 532 nm during the dust events ranged between 0.84 and 0.494.
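As a small numerical aside on how the quoted AOD values relate to the retrieved profiles: the aerosol optical depth at 532 nm is the vertical integral of the particle extinction coefficient, which can be approximated by trapezoidal integration of the Raman-retrieved extinction profile over the layer of interest. The function below is an illustrative sketch; variable names and layer bounds are assumptions.

```python
import numpy as np

def aerosol_optical_depth(altitude_m, extinction_per_m, z_min=0.0, z_max=None):
    """Trapezoidal approximation of AOD = integral of alpha(z) dz over a layer,
    given altitudes (m) and the particle extinction coefficient profile (1/m)."""
    z = np.asarray(altitude_m, dtype=float)
    alpha = np.asarray(extinction_per_m, dtype=float)
    if z_max is None:
        z_max = z.max()
    mask = (z >= z_min) & (z <= z_max)
    z, alpha = z[mask], alpha[mask]
    return float(np.sum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(z)))
```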

Relevance: 30.00%

Publisher:

Abstract:

Hazard perception has been found to correlate with crash involvement, and has thus been suggested as the most likely source of any skill gap between novice and experienced drivers. The most commonly used method for measuring hazard perception is to evaluate the perception-reaction time to filmed traffic events. It can be argued that this method lacks ecological validity and may be of limited value in predicting the actions drivers will take when they encounter hazards. The first two studies of this thesis compare novice and experienced drivers' performance on a hazard detection test, requiring discrete button-press responses, with their behaviour in a more dynamic driving environment, requiring hazard handling ability. Results indicate that the hazard handling test is more successful at identifying experience-related differences in response time to hazards. Hazard detection test scores were strongly related to performance on a driver theory test, implying that traditional hazard perception tests may focus more on declarative knowledge of driving than on the procedural knowledge required to successfully avoid hazards while driving. One in five Irish drivers crashes within a year of passing the driving test. This suggests that the current driver training system does not fully prepare drivers for the dangers they will encounter. Thus, the third and fourth studies in this thesis focus on the development of two simulator-based training regimes. In the third study, participants receive intensive training on the molar elements of driving, i.e., speed and distance evaluation. The fourth study focuses on training higher-order situation awareness skills, including perception, comprehension, and projection. Results indicate significant improvement in aspects of speed, distance, and situation awareness across training days. However, neither training programme leads to significant improvements in hazard handling performance, highlighting the difficulty of applying learning to situations not previously encountered.