395 results for Intrusion Detection, Computer Security, Misuse
Abstract:
This paper presents a method of voice activity detection (VAD) for high-noise scenarios, using a noise-robust voiced speech detection feature. The developed method is based on the fusion of two systems. The first system utilises the maximum peak of the normalised time-domain autocorrelation function (MaxPeak). The second system uses a novel combination of cross-correlation and the zero-crossing rate of the normalised autocorrelation to approximate a measure of signal pitch and periodicity (CrossCorr) that is hypothesised to be noise robust. The scores output by the two systems are then merged using weighted-sum fusion to create the proposed autocorrelation zero-crossing rate (AZR) VAD. The accuracy of AZR was compared with state-of-the-art and standardised VAD methods and shown to outperform the best-performing of these, with an average relative improvement of 24.8% in half-total error rate (HTER) on the QUT-NOISE-TIMIT database, which was created using real recordings from high-noise environments.
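As a rough illustration of the fused feature, the sketch below computes a MaxPeak score from the normalised autocorrelation and a periodicity proxy from its zero-crossing rate, then combines them with weighted-sum fusion. The function names, the fusion weight, and the exact CrossCorr formulation are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def norm_autocorr(frame):
    """Normalised time-domain autocorrelation for lags 0 .. N-1."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    return ac / (ac[0] + 1e-12)  # ac[0] becomes 1

def zero_crossing_rate(x):
    """Fraction of sample-to-sample sign changes in x."""
    return np.mean(np.abs(np.diff(np.sign(x)))) / 2

def azr_score(frame, w=0.5):
    """Weighted-sum fusion of the two sub-system scores (illustrative).
    MaxPeak: largest autocorrelation peak beyond lag 0 (periodicity).
    CrossCorr proxy: a periodic (voiced) frame yields a slowly oscillating
    autocorrelation, i.e. a low zero-crossing rate, so the ZCR is inverted."""
    ac = norm_autocorr(frame)
    max_peak = ac[1:].max()
    periodicity = 1.0 - zero_crossing_rate(ac)
    return w * max_peak + (1 - w) * periodicity  # threshold for speech/non-speech
```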
Abstract:
As organizations reach higher levels of business process management maturity, they often find themselves maintaining repositories of hundreds or even thousands of process models, representing valuable knowledge about their operations. Over time, process model repositories tend to accumulate duplicate fragments (also called clones) as new process models are created or extended by copying and merging fragments from other models. This calls for methods to detect clones in process models, so that these clones can be refactored as separate subprocesses in order to improve maintainability. This paper presents an indexing structure to support the fast detection of clones in large process model repositories. The proposed index is based on a novel combination of a method for process model decomposition (specifically the Refined Process Structure Tree) with established graph canonization and string matching techniques. Experiments show that the algorithm scales to repositories with hundreds of models. The experimental results also show that a significant number of non-trivial clones can be found in process model repositories taken from industrial practice.
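A minimal sketch of the indexing idea, under the assumption that each model has been decomposed into fragments and each fragment is reduced to a canonical string code so that duplicates collide in a hash map. The toy canonisation below (sorting edge labels) merely stands in for proper graph canonisation over RPST fragments:

```python
from collections import defaultdict

def canonical_code(fragment):
    """Toy canonisation: sort the fragment's edge labels so that equal
    fragments map to the same string regardless of edge order. Real
    systems canonise the SESE fragments of the Refined Process
    Structure Tree."""
    return "|".join(sorted(f"{src}->{dst}" for src, dst in fragment))

def build_clone_index(models):
    """models: {model_id: [fragment, ...]}, fragment = list of edges."""
    index = defaultdict(list)
    for model_id, fragments in models.items():
        for frag in fragments:
            index[canonical_code(frag)].append(model_id)
    # clone candidates: codes shared by more than one fragment
    return {code: ids for code, ids in index.items() if len(ids) > 1}

models = {
    "m1": [[("check", "approve"), ("approve", "notify")]],
    "m2": [[("approve", "notify"), ("check", "approve")]],  # same fragment, reordered
}
print(build_clone_index(models))  # both models share one clone
```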
Abstract:
This work proposes to improve spoken term detection (STD) accuracy by optimising the Figure of Merit (FOM). In this approach, the index takes the form of a phonetic posterior-feature matrix. Accuracy is improved by formulating STD as a discriminative training problem and directly optimising the FOM, through its use as an objective function to train a transformation of the index. The outcome of indexing is then a matrix of enhanced posterior-features directly tailored for the STD task. The technique is shown to improve the FOM by up to 13% on held-out data. Additional analysis explores the effect of the technique on phone recognition accuracy, examines the actual values of the learned transform, and demonstrates that using an extended training data set results in further improvement in the FOM.
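The sketch below is a simplified illustration rather than the paper's method: it applies an assumed learned linear transform W to a posterior-feature index, and evaluates a rough FOM (mean detection rate over operating points within a false-alarm budget), the quantity the discriminative training would optimise directly:

```python
import numpy as np

def enhance_index(index, W):
    """index: T frames x P posterior-features; W: learned P x P transform.
    Returns the matrix of enhanced posterior-features."""
    return index @ W

def figure_of_merit(scores, labels, hours, max_fa_per_hour=10.0):
    """Simplified FOM: mean detection rate over operating points with at
    most `max_fa_per_hour` false alarms per hour of searched speech.
    scores/labels are parallel NumPy arrays over detection trials."""
    ranked = labels[np.argsort(-scores)]        # labels sorted by score
    n_true, fa_limit = ranked.sum(), int(max_fa_per_hour * hours)
    rates, hits, fas = [], 0, 0
    for is_hit in ranked:
        if is_hit:
            hits += 1
        else:
            fas += 1
            if fas > fa_limit:
                break
        rates.append(hits / n_true)
    return float(np.mean(rates)) if rates else 0.0

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4])
labels = np.array([1, 0, 1, 1, 0])              # 1 = true term occurrence
print(figure_of_merit(scores, labels, hours=0.5))
```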
Abstract:
Detection of Regions of Interest (ROI) in a video leads to more efficient utilization of bandwidth, because any ROIs in a given frame can be encoded at higher quality than the rest of that frame with little or no degradation of quality as perceived by viewers. Consequently, it is not necessary to uniformly encode the whole video in high quality. One approach to determining ROIs is to use saliency detectors to locate salient regions. This paper proposes a methodology for obtaining ground-truth saliency maps to measure the effectiveness of ROI detection, by considering the role of user experience during the labelling of such maps. User perceptions can be captured and incorporated into the definition of salience in a particular video, taking advantage of human visual recall within a given context. Experiments with two state-of-the-art saliency detectors demonstrate the effectiveness of this approach to evaluating visual saliency in video. The paper also provides the relevant datasets associated with the experiments.
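One plausible way to score a detector against such user-labelled ground truth is ROC analysis; the sketch below is an assumption about the evaluation, not the paper's protocol. It computes the AUC of a predicted saliency map against a binary ground-truth mask via the Mann-Whitney rank statistic:

```python
import numpy as np

def saliency_auc(pred, gt_mask):
    """AUC of a predicted saliency map `pred` against a binary
    user-labelled ground-truth mask `gt_mask` (same shape).
    Equals the probability that a randomly chosen salient pixel
    scores higher than a randomly chosen non-salient one.
    For large frames, subsample pixels to bound memory."""
    pos = pred[gt_mask > 0].ravel()
    neg = pred[gt_mask == 0].ravel()
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties   # 0.5 = chance, 1.0 = perfect ranking
```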
Abstract:
This paper proposes a semi-supervised intelligent visual surveillance system to exploit the information from multi-camera networks for the monitoring of people and vehicles. Modules are proposed to perform critical surveillance tasks including: the management and calibration of cameras within a multi-camera network; tracking of objects across multiple views; recognition of people utilising biometrics and in particular soft-biometrics; the monitoring of crowds; and activity recognition. Recent advances in these computer vision modules and capability gaps in surveillance technology are also highlighted.
Abstract:
The ultimate goal of an authorisation system is to allocate each user the level of access they need to complete their job - no more and no less. This proves challenging in an organisational setting: on the one hand, employees need enough access to perform their tasks; on the other, more access brings an increased risk of misuse - either intentional, where an employee uses the access for personal benefit, or unintentional, through carelessness, losing the information, or being socially engineered into giving access to an adversary. With the goal of developing a more dynamic authorisation model, we have adopted a game-theoretic framework to reason about the factors that may affect a user's likelihood of misusing a permission at the time of an access decision. Game theory provides a useful but previously ignored perspective in authorisation theory: the notion of the user as a self-interested player who selects among a range of possible actions depending on their pay-offs.
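A minimal sketch of this game-theoretic view, with all numbers and names purely illustrative: a self-interested user misuses a permission only when the expected pay-off from misuse exceeds that of honest behaviour:

```python
def misuse_payoff(benefit, penalty, p_detect):
    """Expected pay-off to a self-interested user who misuses a
    permission: the gain if undetected minus the sanction if caught."""
    return (1 - p_detect) * benefit - p_detect * penalty

def rational_user_misuses(benefit, penalty, p_detect, honest_payoff=0.0):
    """The user misuses only when the expected pay-off beats honesty."""
    return misuse_payoff(benefit, penalty, p_detect) > honest_payoff

# Example: raising the audit probability from 10% to 50% flips the decision.
print(rational_user_misuses(benefit=100, penalty=300, p_detect=0.1))  # True
print(rational_user_misuses(benefit=100, penalty=300, p_detect=0.5))  # False
```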
Abstract:
We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which 'classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are mapped to well-known security design principles such as 'assigning the least privilege' and 'reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open-source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
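A toy sketch of the roll-up idea, under assumed metric definitions (the actual model uses a richer set of readability and writability metrics): a per-class exposure ratio is aggregated into a single program-level index:

```python
def class_attribute_exposure(n_classified, n_public_classified):
    """Low-level metric (assumed): fraction of a class's classified
    attributes that are publicly exposed (lower is better)."""
    return n_public_classified / n_classified if n_classified else 0.0

def program_security_index(class_metrics, weights=None):
    """Roll per-class metrics up into a single index in [0, 1],
    where 1 means no classified data is exposed anywhere."""
    exposures = [class_attribute_exposure(c, p) for c, p in class_metrics]
    weights = weights or [1.0] * len(exposures)
    weighted = sum(w * e for w, e in zip(weights, exposures)) / sum(weights)
    return 1.0 - weighted

# Three classes: (classified attributes, publicly exposed classified attributes)
print(program_security_index([(4, 1), (2, 0), (5, 5)]))  # ~0.58
```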
Abstract:
Climate change effects are expected to substantially raise the average sea level. It is widely assumed that this rise will have a severe adverse impact on saltwater intrusion processes in coastal aquifers. In this study we hypothesize that a natural mechanism, identified as the “lifting process”, has the potential to mitigate or in some cases completely reverse the adverse intrusion effects induced by sea-level rise. A detailed numerical study using the MODFLOW-family computer code SEAWAT was completed to test this hypothesis and to understand the effects of this lifting process in both confined and unconfined systems. Our conceptual simulation results show that if the ambient recharge remains constant, sea-level rise will have no long-term impact (i.e., it will not affect the steady-state salt wedge) on confined aquifers. Our transient confined-flow simulations show a self-reversal mechanism whereby the wedge that initially intrudes into the formation due to sea-level rise is naturally driven back to its original position. In unconfined systems, the lifting process has a lesser influence due to changes in the value of effective transmissivity. A detailed sensitivity analysis was also completed to understand the sensitivity of this self-reversal effect to various aquifer parameters.
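The confined/unconfined contrast can be summarised with the textbook transmissivity relations (a standard hydrogeology identity, not taken from the paper): in a confined aquifer the saturated thickness is fixed, while in an unconfined aquifer it is the head itself, which lifts with sea level.

```latex
% Transmissivity T controls the freshwater flux opposing the salt wedge.
% Confined: thickness b is fixed, so T is unaffected by a rise \Delta z.
% Unconfined: the saturated thickness is the head h, so T shifts as the
% water table lifts, weakening the self-reversal effect.
\[
  T_{\mathrm{conf}} = K\,b \quad (\text{constant}), \qquad
  T_{\mathrm{unconf}} \approx K\,h, \quad h \to h + \Delta z .
\]
```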
Comparison of standard image segmentation methods for segmentation of brain tumors from 2D MR images
Abstract:
In the analysis of medical images for computer-aided diagnosis and therapy, segmentation is often required as a preliminary step. Medical image segmentation is a challenging task due to the complexity of the images. The brain has a particularly complicated structure, and its precise segmentation is very important for detecting tumors, edema, and necrotic tissues in order to prescribe appropriate therapy. Magnetic Resonance Imaging (MRI) is an important diagnostic imaging technique utilized for early detection of abnormal changes in tissues and organs. It possesses good contrast resolution for different tissues and is thus preferred over Computerized Tomography for brain study; consequently, the majority of research in medical image segmentation concerns MR images. At the core of this research, a set of MR images has been segmented using standard image segmentation techniques to isolate a brain tumor from the other regions of the brain. The resultant images from the different segmentation techniques were then compared with each other and analyzed by professional radiologists to determine which segmentation technique is the most accurate. Experimental results show that Otsu's thresholding method is the most suitable image segmentation method for segmenting a brain tumor from a Magnetic Resonance Image.
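For reference, Otsu's method picks the grey-level threshold that maximises the between-class variance of the image histogram; a minimal NumPy sketch (not the study's exact pipeline) is:

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Otsu's method: choose the grey-level threshold that maximises the
    between-class variance of the two resulting pixel populations."""
    hist, edges = np.histogram(image.ravel(), bins=n_bins)
    p = hist.astype(float) / hist.sum()           # grey-level probabilities
    omega = np.cumsum(p)                          # class-0 probability
    mu = np.cumsum(p * np.arange(n_bins))         # cumulative mean
    mu_total = mu[-1]
    # between-class variance for every candidate threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    k = np.nanargmax(sigma_b)                     # best bin index
    return edges[k + 1]                           # upper edge of that bin

# Pixels above the threshold form the candidate tumour region, e.g.:
# mask = mri_slice > otsu_threshold(mri_slice)    # mri_slice: 2D array
```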
Abstract:
This paper uses dynamic computer simulation techniques to develop and apply a multi-criteria procedure using non-destructive vibration-based parameters for damage assessment in truss bridges. In addition to changes in natural frequencies, this procedure incorporates two parameters, namely the modal flexibility and the modal strain energy. Using the numerically simulated modal data obtained through finite element analysis of the healthy and damaged bridge models, algorithms based on modal flexibility and modal strain energy changes before and after damage are obtained and used as the indices for the assessment of structural health state. The application of the two proposed parameters to truss-type structures is limited in the literature. The proposed multi-criteria based damage assessment procedure is therefore developed and applied to truss bridges. The application of the approach is demonstrated through numerical simulation studies of a single-span simply supported truss bridge with eight damage scenarios corresponding to different types of deck and truss damage. Results show that the proposed multi-criteria method is effective in damage assessment in this type of bridge superstructure.
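A sketch of the modal flexibility index, using the standard truncated-modal-sum formula (the modal strain energy index follows a similar healthy-versus-damaged comparison); variable names are illustrative:

```python
import numpy as np

def modal_flexibility(phis, omegas):
    """Modal flexibility from mass-normalised mode shapes `phis`
    (n_dof x n_modes) and natural circular frequencies `omegas`:
        F = sum_i phi_i phi_i^T / omega_i^2
    Lower modes dominate the sum, so a truncated modal set suffices."""
    return sum(np.outer(phis[:, i], phis[:, i]) / omegas[i] ** 2
               for i in range(phis.shape[1]))

def flexibility_change_index(phis_h, omegas_h, phis_d, omegas_d):
    """Damage index per DOF: maximum absolute change in each column of
    the flexibility matrix between healthy (h) and damaged (d) states;
    peaks indicate likely damage locations along the truss."""
    delta = (modal_flexibility(phis_d, omegas_d)
             - modal_flexibility(phis_h, omegas_h))
    return np.abs(delta).max(axis=0)
```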
Abstract:
Given the serious nature of computer crime, and its global nature and implications, it is clear that there is a crucial need for a common international understanding of such criminal activity in order to deal with it effectively. Research into the extent to which legislation, international initiatives, and policies and procedures to combat and investigate computer crime are consistent globally is therefore of enormous importance. The challenge is to study, analyse, and compare the policies and practices of combating computer crime under different jurisdictions, in order to identify the extent to which they are consistent with each other and with international guidelines, and the extent of their successes and limitations. The ultimate purpose is to identify areas where improvements are needed and what those improvements should be. This thesis examines approaches used for combating computer crime, including money laundering, in Australia, the UAE, the UK and the USA, four countries which represent a spectrum of economic development and culture. It does so in the context of the guidelines of international organizations such as the Council of Europe (CoE) and the Financial Action Task Force (FATF). In the case of the UAE, we also examine the cultural influences which differentiate it from the other three countries and which have necessarily been a factor in shaping its approaches for countering money laundering in particular. The thesis concludes that, because of the transnational nature of computer crime, there is a need internationally for further harmonisation of approaches for combating computer crime. The specific contributions of the thesis are as follows:
- Developing a new unified comprehensive taxonomy of computer crime based upon the dual characteristics of the role of the computer and the contextual nature of the crime
- Revealing differences in computer crime legislation in Australia, the UAE, the UK and the USA, and how they correspond to the CoE Convention on Cybercrime, and identifying a new framework for developing harmonised computer crime or cybercrime legislation globally
- Identifying some important issues that continue to create problems for law enforcement agencies, such as insufficient resources, coping internationally with computer crime legislation that differs between countries, having comprehensive documented procedures and guidelines for combating computer crime, and reporting and recording of computer crime offences as distinct from other forms of crime
- Completing the most comprehensive study currently available regarding the extent of money laundered in four such developed or fast-developing countries
- Identifying that the UK and the USA are the most advanced of the four countries with regard to anti-money laundering and combating the financing of terrorism (AML/CFT) systems, based on compliance with the FATF recommendations.
In addition, the thesis has identified that local factors have affected how the UAE has implemented its financial and AML/CFT systems, and reveals that such local and cultural factors should be taken into account when implementing or evaluating any country's AML/CFT system.
Abstract:
This paper describes an effective method for signal authentication and spoofing detection for civilian GNSS receivers, using the GPS L1 C/A signal and the Galileo E1-B Safety of Life service. The paper discusses various spoofing attack profiles and how the proposed method is able to detect these attacks. The method is relatively low-cost and can be suitable for numerous mass-market applications. This paper is the subject of a pending patent.
Abstract:
In dynamic and uncertain environments such as healthcare, where the needs of security and information availability are difficult to balance, an access control approach based on a static policy will be suboptimal regardless of how comprehensive it is. The uncertainty stems from the unpredictability of users' operational needs as well as their private incentives to misuse permissions. In Role-Based Access Control (RBAC), a user's legitimate access request may be denied because the need for it has not been anticipated by the security administrator. Alternatively, even when the policy is correctly specified, an authorised user may accidentally or intentionally misuse a granted permission. This paper introduces a novel approach to access control under uncertainty and presents it in the context of RBAC. Taking insights from the field of economics, in particular the insurance literature, we propose a formal model in which the values of resources are explicitly defined and an RBAC policy (entailing the predictable access needs) is used only as a reference point to determine the price each user has to pay for access, as opposed to representing hard-and-fast rules that are always rigidly applied.
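As a purely illustrative sketch of the pricing idea (the paper's formal model is richer), an access price can be set to the expected loss from misuse, discounted when the request falls within the RBAC reference policy:

```python
def access_price(resource_value, p_misuse, in_policy, discount=0.8):
    """Illustrative price a user pays for a permission: the expected
    loss from misuse, discounted when the request is anticipated by
    the RBAC policy (the policy is a reference point, not a hard rule)."""
    expected_loss = resource_value * p_misuse
    return expected_loss * (discount if in_policy else 1.0)

# An off-policy request is still grantable, just costlier:
print(access_price(resource_value=10_000, p_misuse=0.02, in_policy=True))   # 160.0
print(access_price(resource_value=10_000, p_misuse=0.02, in_policy=False))  # 200.0
```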
Abstract:
Distributed Denial-of-Service (DDoS) attacks continue to be one of the most pernicious threats to the delivery of services over the Internet. Not only are DDoS attacks present in many guises, they are also continuously evolving as new vulnerabilities are exploited. Hence accurate detection of these attacks still remains a challenging problem and a necessity for ensuring high-end network security. An intrinsic challenge in addressing this problem is to effectively distinguish these Denial-of-Service attacks from similar looking Flash Events (FEs) created by legitimate clients. A considerable overlap between the general characteristics of FEs and DDoS attacks makes it difficult to precisely separate these two classes of Internet activity. In this paper we propose parameters which can be used to explicitly distinguish FEs from DDoS attacks and analyse two real-world publicly available datasets to validate our proposal. Our analysis shows that even though FEs appear very similar to DDoS attacks, there are several subtle dissimilarities which can be exploited to separate these two classes of events.
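One commonly cited discriminating parameter in the FE-versus-DDoS literature, shown here as an assumption rather than as the paper's proposed parameters, is the fraction of source addresses never seen before the surge: flash crowds are dominated by returning clients, while DDoS floods tend to arrive from largely new or spoofed sources:

```python
def new_source_fraction(event_ips, historical_ips):
    """Fraction of source IPs in a traffic surge that were never seen
    before the event; a high fraction suggests an attack rather than
    a flash event."""
    event_ips = set(event_ips)
    new = event_ips - set(historical_ips)
    return len(new) / len(event_ips)

history = {"10.0.0.1", "10.0.0.2", "10.0.0.3"}
flash   = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.9"]
attack  = ["172.16.%d.%d" % (i, j) for i in range(4) for j in range(4)]

print(new_source_fraction(flash, history))   # 0.25 -> likely flash event
print(new_source_fraction(attack, history))  # 1.0  -> likely DDoS
```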