934 results for "Detection process"
Abstract:
The study presents a multi-layer genetic algorithm (GA) approach using correlation-based methods to facilitate damage determination for through-truss bridge structures. To begin, the structure's damage-suspicious elements are divided into several groups. In the first GA layer, damage is optimised independently for each group using a correlation-based objective function. In the second layer, the groups are merged into larger groups and the optimisation restarts from the normalised result of the first layer. The identification process then repeats until the final layer, where a single group contains all structural elements and only minor optimisation is required to fine-tune the final result. Several damage scenarios on a complicated through-truss bridge example are used to demonstrate the effectiveness of the proposed approach. Structural modal strain energy is employed as the variable vector in the correlation function for damage determination. Simulations and comparison with traditional single-layer optimisation show that the proposed approach is efficient and feasible for complicated truss bridge structures when measurement noise is taken into account.
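The layered optimisation can be illustrated with a toy numerical sketch, not the authors' implementation: a stand-in linear "model" maps a damage vector to modal strain energy change, a correlation objective scores candidates, and a simplified GA-style loop (selection plus mutation, no crossover) optimises small element groups first, seeding each coarser layer with the previous result. The model matrix, group sizes, and GA settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_elem = 16
true_damage = np.zeros(n_elem)
true_damage[[3, 9]] = 0.3                       # assumed damage scenario
S = rng.random((n_elem, n_elem))                # stand-in sensitivity model

def msec(d):
    """Stand-in for the modal strain energy change under damage vector d."""
    return S @ d

target = msec(true_damage)                      # plays the role of measurements

def fitness(d):
    """Correlation-based objective between predicted and 'measured' MSEC."""
    return np.corrcoef(msec(d), target)[0, 1]

def ga(mask, seed, gens=150, pop=60):
    """Toy evolutionary loop optimising only the elements selected by mask."""
    P = np.clip(seed + 0.1 * rng.standard_normal((pop, n_elem)), 0.0, 1.0)
    P[:, ~mask] = seed[~mask]                   # freeze out-of-group elements
    for _ in range(gens):
        f = np.array([fitness(x) for x in P])
        elite = P[np.argsort(f)[-pop // 2:]]    # keep the better half
        kids = elite[rng.integers(0, len(elite), pop - len(elite))]
        kids = kids + 0.05 * rng.standard_normal(kids.shape) * mask
        P = np.clip(np.vstack([elite, kids]), 0.0, 1.0)
    return P[np.argmax([fitness(x) for x in P])]

est = np.zeros(n_elem)
for size in (4, 8, 16):                         # layers: groups grow to one group
    for start in range(0, n_elem, size):
        mask = np.zeros(n_elem, bool)
        mask[start:start + size] = True
        est = ga(mask, est)
print(np.round(est, 2))                         # peaks should sit at elements 3, 9
```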
Abstract:
Background: Right-to-left shunting via a patent foramen ovale (PFO) has a recognized association with embolic events in younger patients. The use of agitated saline contrast imaging (ASCi) for detecting atrial shunting is well documented; however, the optimal technique is not well described. The purpose of this study was to assess the efficacy and safety of ASCi via TTE for the assessment of right-to-left atrial communication in a large cohort of patients. Method: A retrospective review was undertaken of 1162 consecutive transthoracic echocardiography (TTE) ASCi studies, of which 195 had also undergone clinically indicated transesophageal echocardiography (TEE). ASCi shunt results were compared with color flow imaging (CFI), and the role of provocative maneuvers (PM) was assessed. Results: 403 TTE studies (35%) showed paradoxical shunting during ASCi. Of these, 48% were positive with PM only. There was strong agreement between TTE ASCi and reported TEE findings (99% sensitivity, 85% specificity), with six false positive and two false negative results. In hindsight, the latter were likely due to suboptimal right atrial opacification, and the former to transpulmonary shunting. TTE CFI was found to be insensitive (22%) for the detection of a PFO compared with TTE ASCi. Conclusions: TTE ASCi is minimally invasive and highly accurate for the detection of right-to-left atrial communication when PM are used. TTE CFI was found to be insensitive for PFO screening. It is recommended that TTE ASCi be considered the initial diagnostic tool for the detection of PFO in clinical practice. A dedicated protocol should be followed to ensure adequate agitated saline contrast delivery and performance of provocative maneuvers.
Abstract:
Approximate clone detection is the process of identifying similar process fragments in business process model collections. The tool presented in this paper can efficiently cluster approximate clones in large process model repositories. Once a repository is clustered, users can filter and browse the clusters using different filtering parameters. Our tool can also visualize clusters in the 2D space, allowing a better understanding of clusters and their member fragments. This demonstration will be useful for researchers and practitioners working on large process model repositories, where process standardization is a critical task for increasing the consistency and reducing the complexity of the repository.
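As a rough illustration of the clustering step, the hypothetical sketch below groups fragments by a stand-in Jaccard distance over node labels; the tool's actual similarity metric and clustering algorithm are not specified here, so both the distance function and the DBSCAN parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def fragment_distance(a, b):
    """Placeholder: Jaccard distance between the node-label sets of fragments."""
    la, lb = set(a), set(b)
    return 1.0 - len(la & lb) / len(la | lb)

fragments = [["check", "approve"], ["check", "approve", "archive"],
             ["ship", "invoice"], ["ship", "invoice", "notify"]]
n = len(fragments)
D = np.array([[fragment_distance(fragments[i], fragments[j])
               for j in range(n)] for i in range(n)])

labels = DBSCAN(eps=0.5, min_samples=2, metric="precomputed").fit_predict(D)
print(labels)    # fragments with label -1 would be unclustered
```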
Abstract:
The rapid increase in the deployment of CCTV systems has led to greater demand for algorithms able to process incoming video feeds. These algorithms are designed to extract information of interest for human operators. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task, where the system is trained on normal data and is required to detect events which do not fit the learned 'normal' model. Many researchers have tried various sets of features to train different learning models to detect abnormal behaviour in video footage. In this work we propose using a Semi-2D Hidden Markov Model (HMM) to model the normal activities of people. Observations with insufficient likelihood under the model are identified as abnormal activities. Our Semi-2D HMM is designed to model both the temporal and spatial causalities of crowd behaviour by assuming that the current state of the Hidden Markov Model depends not only on the previous state in the temporal direction, but also on the previous states of adjacent spatial locations. Two different HMMs are trained to model the vertical and horizontal spatial causal information. Location features, flow features and optical flow textures are used as the features for the model. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
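A simplified numeric sketch of the Semi-2D idea follows (toy parameters and greedy decoding, not the paper's model or training procedure): the state at grid cell (t, x) is scored by an emission term, a temporal transition from (t-1, x), and a spatial transition from the already-decoded neighbour (t, x-1); a low total log-likelihood would flag the clip as abnormal.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, T, X = 3, 6, 4
A_t = rng.dirichlet(np.ones(n_states), n_states)    # temporal transitions
A_s = rng.dirichlet(np.ones(n_states), n_states)    # spatial transitions
means = np.array([0.0, 1.0, 2.0])                   # toy 1-D Gaussian emissions

def emission(obs):
    return np.exp(-0.5 * (obs - means) ** 2)        # unnormalized likelihoods

obs = rng.normal(size=(T, X))                       # features per (time, location)
state = np.zeros((T, X), int)
loglik = 0.0
for t in range(T):
    for x in range(X):
        score = emission(obs[t, x]).copy()
        if t > 0:                                   # temporal causality
            score = score * A_t[state[t - 1, x]]
        if x > 0:                                   # spatial causality
            score = score * A_s[state[t, x - 1]]
        state[t, x] = int(np.argmax(score))
        loglik += np.log(score[state[t, x]])
print(loglik)   # a low value would flag the clip as abnormal
```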
Abstract:
Background subtraction is a fundamental low-level processing task in numerous computer vision applications. The vast majority of algorithms process images on a pixel-by-pixel basis, where an independent decision is made for each pixel. A general limitation of such processing is that rich contextual information is not taken into account. We propose a block-based method capable of dealing with noise, illumination variations, and dynamic backgrounds, while still obtaining smooth contours of foreground objects. Specifically, image sequences are analyzed on an overlapping block-by-block basis. A low-dimensional texture descriptor obtained from each block is passed through an adaptive classifier cascade, where each stage handles a distinct problem. A probabilistic foreground mask generation approach then exploits block overlaps to integrate interim block-level decisions into final pixel-level foreground segmentation. Unlike many pixel-based methods, ad-hoc postprocessing of foreground masks is not required. Experiments on the difficult Wallflower and I2R datasets show that the proposed approach obtains on average better results (both qualitatively and quantitatively) than several prominent methods. We furthermore propose the use of tracking performance as an unbiased approach for assessing the practical usefulness of foreground segmentation methods, and show that the proposed approach leads to considerable improvements in tracking accuracy on the CAVIAR dataset.
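The block-overlap fusion can be sketched as follows; this is a minimal illustration, with a mean-difference test standing in for the paper's texture descriptor and classifier cascade, and with assumed block size, step, and thresholds.

```python
import numpy as np

def foreground_mask(frame, background, block=8, step=4, thresh=0.15):
    """Fuse overlapping block-level decisions into a pixel-level mask."""
    votes = np.zeros(frame.shape, float)
    counts = np.zeros(frame.shape, float)
    H, W = frame.shape
    for y in range(0, H - block + 1, step):
        for x in range(0, W - block + 1, step):
            fb = frame[y:y + block, x:x + block]
            bb = background[y:y + block, x:x + block]
            # stand-in block descriptor test: mean absolute difference
            decision = float(np.abs(fb - bb).mean() > thresh)
            votes[y:y + block, x:x + block] += decision
            counts[y:y + block, x:x + block] += 1.0
    # probabilistic fusion: fraction of overlapping blocks voting foreground
    return votes / np.maximum(counts, 1.0) > 0.5

bg = np.zeros((32, 32))
fr = bg.copy()
fr[10:20, 10:20] = 1.0                    # synthetic foreground object
print(foreground_mask(fr, bg).sum(), "pixels flagged")
```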
Abstract:
In this paper, we propose an approach to the problem of surveillance event detection, assuming that the definition of the events is known. To facilitate the discussion, we first define two concepts: the event of interest refers to the event that the user requests the system to detect, and the background activities are any other events in the video corpus. This is an unsolved problem due to the factors listed below.
1) Occlusions and clustering: Surveillance scenes of significant interest, such as airports, railway stations, and shopping centers, are often crowded, and occlusions and clustering of people are frequently encountered. This significantly affects the feature extraction step; for instance, trajectories generated by object tracking algorithms are usually not robust in such situations.
2) The requirement for real-time detection: The system should process the video fast enough, in both the feature extraction and detection steps, to facilitate real-time operation.
3) Massive size of the training data set: Suppose an event lasts for 1 minute in a video with a frame rate of 25 fps; the number of frames for this event is 60 × 25 = 1500. If we want a training data set with many positive instances of the event, the video is likely to be very large (i.e., hundreds of thousands of frames or more). Handling such a large data set is a problem frequently encountered in this application.
4) Difficulty in separating the event of interest from background activities: The events of interest often co-exist with a set of background activities. Temporal groundtruth is typically very ambiguous, as it does not distinguish the event of interest from the wide range of co-existing background activities, yet it is not practical to annotate the locations of events in large amounts of video data. This problem becomes more serious in the detection of multi-agent interactions, since the location of these events often cannot be constrained to within a bounding box.
5) Challenges in determining the temporal boundaries of the events: An event can occur at any time with an arbitrary duration. The temporal segmentation of events is difficult and ambiguous, and is also affected by other factors such as occlusions.
Abstract:
Securing the IT infrastructures of our modern lives is a challenging task because of their increasing complexity, scale, and agile nature. Monolithic approaches, such as stand-alone firewalls and IDS devices protecting the perimeter, cannot cope with complex malware and multi-step attacks. Collaborative security emerges as a promising approach, but research results in collaborative security are not yet mature and require continuous evaluation and testing. In this work, we present CIDE, a Collaborative Intrusion Detection Extension for the network security simulation platform NeSSi2. Built-in functionalities include dynamic group formation based on node preferences, group-internal communication, group management, and an approach for handling the infection process of malware-based attacks. The CIDE simulation environment provides functionalities for easy implementation of collaborating nodes in large-scale setups. We evaluate the group communication mechanism, and we present a case study evaluating our collaborative security evaluation platform in a signature exchange scenario.
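A toy sketch of preference-based group formation and group-internal broadcast (an assumed data model; CIDE's actual mechanism within NeSSi2 is richer): nodes join the existing group whose preference centroid is closest, or found a new one, and signatures are broadcast within a group.

```python
import numpy as np

rng = np.random.default_rng(2)
nodes = rng.random((10, 3))              # per-node preference vectors
groups = []                              # each group: list of member indices

def centroid(members):
    return nodes[members].mean(axis=0)

for i, pref in enumerate(nodes):
    dists = [np.linalg.norm(pref - centroid(g)) for g in groups]
    if dists and min(dists) < 0.4:       # assumed similarity threshold
        groups[int(np.argmin(dists))].append(i)
    else:
        groups.append([i])               # found a new group

def broadcast(group, signature):
    """Group-internal communication: deliver a signature to every member."""
    return {member: signature for member in group}

print([len(g) for g in groups])
print(broadcast(groups[0], "sig-123"))
```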
Abstract:
This paper presents a formal methodology for attack modeling and detection for networks. Our approach has three phases. First, we extend the basic attack tree approach [1] to capture (i) the temporal dependencies between components, and (ii) the expiration of an attack. Second, using the enhanced attack trees (EAT), we build a tree automaton that accepts a sequence of actions from the input stream if it constitutes a traversal of an attack tree from the leaves to the root node. Finally, we show how to construct an enhanced parallel automaton (EPA) that has each tree automaton as a subroutine and can process the input stream by considering multiple trees simultaneously. As a case study, we show how to represent attacks in IEEE 802.11 and construct an EPA for them.
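The tree-automaton idea can be sketched as follows, under a hypothetical tree encoding: leaves match observed actions, AND/OR nodes aggregate child states bottom-up, and an alarm is raised when the root becomes satisfied. The paper's EAT/EPA construction also models temporal order and attack expiration, which this sketch omits.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                     # "LEAF", "AND", or "OR"
    action: str = ""
    children: list = field(default_factory=list)
    satisfied: bool = False

def feed(node, action):
    """Propagate one observed action bottom-up through the tree."""
    if node.op == "LEAF":
        node.satisfied = node.satisfied or node.action == action
    else:
        for child in node.children:
            feed(child, action)
        hits = [child.satisfied for child in node.children]
        node.satisfied = all(hits) if node.op == "AND" else any(hits)
    return node.satisfied

# hypothetical attack: deauth flood AND (probe OR spoofed association)
root = Node("AND", children=[
    Node("LEAF", "deauth_flood"),
    Node("OR", children=[Node("LEAF", "probe"),
                         Node("LEAF", "spoof_assoc")]),
])
alarm = False
for action in ["probe", "deauth_flood"]:
    alarm = feed(root, action)
print(alarm)    # True: the root is satisfied, raise an alert
```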
Abstract:
Monitoring and estimation of marine populations is of paramount importance for the conservation and management of sea species. Regular surveys are conducted for this purpose, often followed by a manual counting process. This paper proposes an algorithm for the automatic detection of dugongs in imagery taken during aerial surveys. Our algorithm exploits the fact that dugongs are rare in most images; we therefore determine regions of interest based partially on color rarity. This simple observation makes the system robust to changes in illumination. We also show that by applying the extended-maxima transform to red-ratio images, submerged dugongs with very fuzzy edges can be detected. The performance figures obtained here are promising in terms of the degree of confidence in the detection of marine species, but more importantly our approach represents a significant step towards automating this type of survey.
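A sketch of the red-ratio and extended-maxima step on a synthetic image (the depth parameter h and the test image are assumptions): morphological reconstruction builds the h-maxima transform, whose regional maxima form the extended-maxima mask.

```python
import numpy as np
from skimage.morphology import reconstruction, local_maxima

rng = np.random.default_rng(3)
rgb = rng.random((64, 64, 3))
rgb[30:34, 30:34, 0] += 2.0                       # faint reddish blob

# red-ratio image: red channel relative to overall intensity
red_ratio = rgb[..., 0] / (rgb.sum(axis=2) + 1e-6)

h = 0.2                                           # assumed maxima depth
# h-maxima transform via reconstruction by dilation of (image - h)
hmax = reconstruction(red_ratio - h, red_ratio, method='dilation')
detections = local_maxima(hmax)                   # extended-maxima mask
print(int(detections.sum()), "candidate pixels")
```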
Abstract:
This paper presents a new framework for distributed intrusion detection based on taint marking. Our system tracks information flows between applications of multiple hosts gathered in groups (i.e., sets of hosts sharing the same distributed information flow policy) by attaching taint labels to system objects such as files, sockets, Inter-Process Communication (IPC) abstractions, and memory mappings. Labels are carried over the network by tainting network packets. A distributed information flow policy is defined for each group at the host level by labeling information and defining how users and applications can legally access, alter, or transfer information towards other trusted or untrusted hosts. As opposed to existing approaches, where information is most often represented by two security levels (low/high, public/private, etc.), our model identifies each piece of information within a distributed system and defines their legal interactions in a fine-grained manner. Hosts store and exchange security labels in a peer-to-peer fashion, and there is no central monitor. Our IDS is implemented in the Linux kernel as a Linux Security Module (LSM) and runs standard software on commodity hardware with no modification required. The only trusted code is our modified operating system kernel. We finally present a scenario of intrusion in a web service running on multiple hosts, and show how our distributed IDS is able to report security violations at each host.
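A toy user-space model of the taint-marking idea (class, object, and policy names below are hypothetical; the actual system is a Linux Security Module in the kernel): labels propagate with information flows, and a per-host policy decides whether a transfer is legal.

```python
class TaintedObject:
    """System object (file, socket, IPC, mapping) carrying taint labels."""
    def __init__(self, name, labels=None):
        self.name = name
        self.labels = set(labels or [])

def flow(src, dst):
    """An information flow src -> dst carries src's taint labels along."""
    dst.labels |= src.labels

# per-host policy: which labels each host may legally receive
policy = {"hostA": {"public"}, "hostB": {"public", "secret"}}

def send(obj, host):
    if not obj.labels <= policy[host]:
        raise PermissionError(f"{obj.name} -> {host}: illegal flow")

secret_file = TaintedObject("payroll.db", {"secret"})
sock = TaintedObject("tcp:443")
flow(secret_file, sock)          # reading the file taints the socket
send(sock, "hostB")              # allowed: hostB is trusted for "secret"
try:
    send(sock, "hostA")          # would leak "secret" to hostA
except PermissionError as err:
    print("violation reported:", err)
```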
Abstract:
This paper presents an investigation into event detection in crowded scenes, where the event of interest co-occurs with other activities and only binary labels at the clip level are available. The proposed approach incorporates a fast feature descriptor from the MPEG domain, and a novel multiple instance learning (MIL) algorithm using sparse approximation and random sensing. MPEG motion vectors are used to build particle trajectories that represent the motion of objects in uniform video clips, and the MPEG DCT coefficients are used to compute a foreground map to remove background particles. Trajectories are transformed into the Fourier domain, and the Fourier representations are quantized into visual words using the K-Means algorithm. The proposed MIL algorithm models the scene as a linear combination of independent events, where each event is a distribution of visual words. Experimental results show that the proposed approaches achieve promising results for event detection compared to the state-of-the-art.
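The trajectory-to-visual-word step can be sketched on synthetic data (the MPEG-domain extraction and foreground filtering are not shown, and the descriptor length and vocabulary size are assumptions): trajectories are mapped to Fourier magnitude descriptors and quantized with K-Means.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# toy particle trajectories: 200 tracks of 16 (x, y) points each
tracks = rng.random((200, 16, 2)).cumsum(axis=1)

# Fourier representation: magnitude spectrum of x(t) + i*y(t)
z = tracks[..., 0] + 1j * tracks[..., 1]
desc = np.abs(np.fft.fft(z, axis=1))[:, 1:9]      # drop DC, keep 8 bins

kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(desc)
words = kmeans.labels_                            # one visual word per track
print(np.bincount(words))                         # word histogram for the corpus
```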
Abstract:
OBJECTIVES: To provide an overview of 1) traditional methods of skin cancer early detection, 2) current technologies for skin cancer detection, and 3) evolving practice models of early detection. DATA SOURCES: Peer-reviewed articles and reviews, scholarly texts, and Web-based resources. CONCLUSION: Early detection of skin cancer through established methods or newer technologies is critical for reducing both skin cancer mortality and the overall skin cancer burden. IMPLICATIONS FOR NURSING PRACTICE: A basic knowledge of recommended skin examination guidelines and risk factors for skin cancer, traditional methods to further examine lesions suspicious for skin cancer, and evolving detection technologies can guide patient education and skin inspection decisions.
Abstract:
The huge amount of available CCTV footage makes it very burdensome to process these videos manually through human operators, making automated processing of video footage through computer vision technologies necessary. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task, where the system is trained on normal data and is required to detect events which do not fit the learned 'normal' model. There is no precise and exact definition of an abnormal activity; it depends on the context of the scene. Hence, different feature sets are required to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features in detecting the presence of abnormal objects in the scene. These include optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These extracted features, in different combinations, are modeled using different state-of-the-art models such as the Gaussian mixture model (GMM) and Semi-2D Hidden Markov model (HMM) to analyse their performance. Furthermore, we apply perspective normalization to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects of consideration. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
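As a minimal sketch of the novelty-detection formulation (scikit-learn's GMM standing in for the paper's models; feature extraction and perspective normalization omitted): a GMM is fitted to normal-clip features, and test samples whose log-likelihood falls below an assumed cutoff are flagged as abnormal.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
normal_feats = rng.normal(0.0, 1.0, size=(500, 8))     # features of normal clips
gmm = GaussianMixture(n_components=4, random_state=0).fit(normal_feats)

# assumed cutoff: 1st percentile of training log-likelihoods
thresh = np.percentile(gmm.score_samples(normal_feats), 1)

test = np.vstack([rng.normal(0, 1, (5, 8)),            # normal-looking samples
                  rng.normal(6, 1, (5, 8))])           # shifted, abnormal samples
abnormal = gmm.score_samples(test) < thresh
print(abnormal)     # the last five samples should be flagged
```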