883 results for Information security evaluation


Relevance: 80.00%

Abstract:

Speaker attribution is the task of annotating a spoken audio archive based on speaker identities. This can be achieved using speaker diarization and speaker linking. In our previous work, we proposed an efficient attribution system, using complete-linkage clustering, for conducting attribution of large sets of two-speaker telephone data. In this paper, we build on that approach to achieve a robust system applicable to multiple recording domains. To do this, we first extend the diarization module of our system to accommodate multi-speaker (>2) recordings. We achieve this by using a robust cross-likelihood ratio (CLR) threshold as the stopping criterion for clustering, in place of the original stopping criterion of two speakers used for telephone data. We evaluate this baseline diarization module on a dataset of Australian broadcast news recordings, showing a significant loss of diarization accuracy when the true number of speakers in a recording is not known in advance. We thus propose applying an additional pass of complete-linkage clustering in the diarization module, demonstrating an absolute improvement of 20% in diarization error rate (DER). We then evaluate our proposed multi-domain attribution system on the broadcast news data, demonstrating attribution error rates (AER) as low as 17%.
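
As a rough illustration of the clustering step described above, the following Python sketch implements complete-linkage agglomerative clustering with a threshold stopping criterion; the `clr` scoring function and the exact threshold semantics are assumptions standing in for the paper's trained CLR models.

    def complete_linkage_clustering(segments, clr, threshold):
        """Agglomerative clustering of speaker segments.

        segments:  list of segment identifiers
        clr:       assumed function returning a cross-likelihood-ratio
                   similarity between two segments (higher = more similar)
        threshold: CLR stopping threshold; merging stops when no cluster
                   pair scores above it, instead of stopping at a fixed
                   speaker count as for two-speaker telephone data
        """
        clusters = [[s] for s in segments]
        while len(clusters) > 1:
            best_pair, best_score = None, float("-inf")
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    # Complete linkage: a pair's score is the *worst* CLR
                    # between any two members, so every segment must match.
                    score = min(clr(a, b)
                                for a in clusters[i] for b in clusters[j])
                    if score > best_score:
                        best_pair, best_score = (i, j), score
            if best_score < threshold:   # robust stopping criterion
                break
            i, j = best_pair
            clusters[i].extend(clusters.pop(j))
        return clusters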

Relevance: 80.00%

Abstract:

This project was a step forward in developing intrusion detection systems for distributed environments such as web services. It investigates a new detection approach based on so-called "taint-marking" techniques and introduces a theoretical framework along with its implementation in the Linux kernel.

Relevance: 80.00%

Abstract:

Information privacy is a critical success/failure factor in information technology supported healthcare (eHealth). eHealth systems use electronic health records (EHRs) as their main source of information, so implementing appropriate privacy-preserving methods for EHRs is vital to the proliferation of eHealth. While information privacy may be a fundamental requirement for eHealth consumers, healthcare professionals demand unrestricted access to patient information for improved healthcare delivery, creating an environment in which stakeholder requirements conflict. There is therefore a need to achieve an appropriate balance of requirements in order to build successful eHealth systems. Towards achieving this balance, a new genre of eHealth systems called Accountable-eHealth (AeH) systems has been proposed. In this paper, an access control model for EHRs is presented that AeH systems can use to create information usage policies fulfilling both stakeholders' requirements. These policies accomplish the aforementioned balance of requirements, creating a satisfactory eHealth environment for all stakeholders. The access control model is validated using a Web-based prototype as a proof of concept.
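
The following is a minimal sketch of the accountability idea behind information usage policies, under our own assumptions: consumers whitelist roles, and any access outside that list must carry a logged justification. All class and field names are hypothetical; the paper's actual model is considerably richer.

    from dataclasses import dataclass, field

    @dataclass
    class UsagePolicy:
        """Hypothetical information usage policy for one EHR item."""
        record_id: str
        permitted_roles: set = field(default_factory=set)
        audit_log: list = field(default_factory=list)

        def request_access(self, user, role, justification=None):
            if role in self.permitted_roles:
                return True                  # consumer-approved access
            if justification:
                # Accountable override: access is granted but logged,
                # enabling after-the-fact scrutiny of the professional.
                self.audit_log.append((user, role, justification))
                return True
            return False

    policy = UsagePolicy("ehr-123", permitted_roles={"general_practitioner"})
    policy.request_access("dr_smith", "emergency_physician",
                          justification="emergency admission")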

Relevance: 80.00%

Abstract:

Discipline boundaries in science and technology education are inevitable, and such barriers often obstruct industry-based learning, leading to preventable complexities. Unlike conventional learning, industry-based learning is a complex scenario, which motivates the study of liquid learning, a timely concept for investigating learning without boundaries. Liquid learning is characterised by accountability and expectations, is driven by outcomes, and offers different learning choices. It is a significant phenomenon that deserves greater awareness in science and technology education. This paper discusses practical issues in designing industry-based learning without boundaries, and reviews and presents a case study approach.

Relevance: 80.00%

Abstract:

With increasing interest from universities in workplace learning, especially in STEM disciplines, an issue has arisen amongst educators and industry partners regarding authentic assessment tasks for work integrated learning (WIL) subjects. This paper describes a matrix, also available as a decision tree, based on the features of the WIL experience, which facilitates the selection of appropriate assessment strategies. The matrix divides WIL experiences into seven categories, based on factors such as: the extent to which the experience is compulsory, required for membership of a professional body, or elective; whether the student is undertaking a project or embedding in a professional culture; and other key aspects of the WIL experience. One important variable is the fundamental purpose of the assessment: whether it focuses on the person (student development), the process (professional conduct/language), or the product (project, assignment, literature review, report, software). The matrix has been trialled at QUT in the Faculty of Science and Technology, and also at the University of Surrey, UK, and has proven to have good applicability at both universities.

Relevance: 80.00%

Abstract:

A5/1 is a shift-register-based stream cipher that provides privacy for the GSM system. In this paper, we analyse the loading of the secret key and IV during the initialisation process of A5/1. We demonstrate the existence of weak key-IV pairs in the A5/1 cipher due to this loading process; these weak key-IV pairs may leave one, two or three registers containing all-zero values, which may in turn lead to weak keystream sequences. In the case where two or three registers contain only zeros, we describe a distinguisher which leads to complete decryption of the affected messages.
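
For readers unfamiliar with A5/1, the sketch below reproduces the standard published key/IV loading procedure (register sizes, taps, and the regular clocking used during loading) and tests for the weak condition analysed above, namely a register left all-zero after loading. Helper names are ours.

    R1_MASK, R2_MASK, R3_MASK = (1 << 19) - 1, (1 << 22) - 1, (1 << 23) - 1

    def _clock(reg, taps, mask):
        """Regular (non-majority) clocking: shift in the XOR of tap bits."""
        fb = 0
        for t in taps:
            fb ^= (reg >> t) & 1
        return ((reg << 1) | fb) & mask

    def load_a5_1(key_bits, iv_bits):
        """key_bits: 64 bits, iv_bits: 22 bits (frame number), as 0/1 lists."""
        r1 = r2 = r3 = 0
        for b in key_bits + iv_bits:
            # During loading, all three registers are clocked regularly
            # and the key/IV bit is XORed into the input position of each.
            r1 = _clock(r1, (13, 16, 17, 18), R1_MASK) ^ b
            r2 = _clock(r2, (20, 21), R2_MASK) ^ b
            r3 = _clock(r3, (7, 20, 21, 22), R3_MASK) ^ b
        return r1, r2, r3

    def is_weak(key_bits, iv_bits):
        """A register that is all-zero after loading stays zero forever,
        producing the weak keystream behaviour studied in the paper."""
        return any(r == 0 for r in load_a5_1(key_bits, iv_bits))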

Relevance: 80.00%

Abstract:

This paper presents a new framework for distributed intrusion detection based on taint marking. Our system tracks information flows between applications on multiple hosts gathered into groups (i.e., sets of hosts sharing the same distributed information flow policy) by attaching taint labels to system objects such as files, sockets, Inter-Process Communication (IPC) abstractions, and memory mappings. Labels are carried over the network by tainting network packets. A distributed information flow policy is defined for each group at the host level by labeling information and defining how users and applications may legally access, alter or transfer information towards other trusted or untrusted hosts. As opposed to existing approaches, in which information is most often represented by two security levels (low/high, public/private, etc.), our model identifies each piece of information within a distributed system and defines their legal interactions in a fine-grained manner. Hosts store and exchange security labels in a peer-to-peer fashion, with no central monitor. Our IDS is implemented in the Linux kernel as a Linux Security Module (LSM) and runs standard software on commodity hardware without modification; the only trusted code is our modified operating system kernel. Finally, we present an intrusion scenario in a web service running on multiple hosts, and show how our distributed IDS reports security violations at the level of each host.
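
The label bookkeeping described above can be illustrated with a toy user-space model. The real system performs this tracking inside the kernel via an LSM, so this sketch, with its invented names, only mirrors the propagation rules.

    class TaintedObject:
        """A system object (file, socket, IPC channel...) carrying labels."""
        def __init__(self, name, labels=None):
            self.name = name
            self.labels = set(labels or ())

    def read(process, obj):
        # Information flows object -> process: the process accumulates
        # every label attached to data it has observed.
        process.labels |= obj.labels

    def write(process, obj):
        # Information flows process -> object: anything the process may
        # have learned taints what it writes.
        obj.labels |= process.labels

    def may_send(packet_labels, dest_host, policy):
        """policy maps each label to the set of hosts trusted to receive it;
        a packet may leave only if every label permits the destination."""
        return all(dest_host in policy.get(label, set())
                   for label in packet_labels)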

Relevance: 80.00%

Abstract:

Trivium is a bit-based stream cipher in the final portfolio of the eSTREAM project. In this paper, we apply the algebraic attack approach of Berbain et al. to Trivium-like ciphers and perform new analyses of them. We demonstrate a new algebraic attack on Bivium-A that requires less time and memory than previous techniques to recover Bivium-A's initial state. Although our attacks on Bivium-B, Trivium and Trivium-N are worse than exhaustive keysearch, the systems of equations constructed are smaller and less complex than in previous algebraic analyses. We also answer an open question posed by Berbain et al. on the feasibility of applying their technique to Trivium-like ciphers. Factors which can affect the complexity of our attack on Trivium-like ciphers are discussed in detail. Analyses of Bivium-B and Trivium-N are omitted from this manuscript; the full paper is available on the IACR ePrint Archive.
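
To make the source of the equation systems concrete, here is a minimal Bivium keystream sketch following the standard published description of Bivium as a truncated Trivium. Each output bit is a sparse, low-degree function of the 177-bit state, which is exactly what an algebraic attack exploits when collecting equations relating keystream to state.

    def bivium_keystream(state, n, variant="A"):
        """state: list of 177 bits s1..s177 (index 0 = s1); returns n bits."""
        s = list(state)
        out = []
        for _ in range(n):
            t1 = s[65] ^ s[92]                    # s66 + s93
            t2 = s[161] ^ s[176]                  # s162 + s177
            # Bivium-A outputs t2 only; Bivium-B outputs t1 + t2.
            out.append(t2 if variant == "A" else t1 ^ t2)
            t1 ^= (s[90] & s[91]) ^ s[170]        # + s91*s92 + s171
            t2 ^= (s[174] & s[175]) ^ s[68]       # + s175*s176 + s69
            s = [t2] + s[:92] + [t1] + s[93:176]  # shift both registers
        return out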

Relevance: 80.00%

Abstract:

The notion of plaintext awareness (PA) has many applications in public key cryptography: it offers unique, stand-alone security guarantees for public key encryption schemes, has been used as a sufficient condition for proving indistinguishability against adaptive chosen-ciphertext attacks (IND-CCA), and can be used to construct privacy-preserving protocols such as deniable authentication. Unlike many other security notions, plaintext awareness is very fragile when it comes to differences between the random oracle and standard models; for example, many implications involving PA in the random oracle model are not valid in the standard model and vice versa. Similarly, strategies for proving PA of schemes in one model cannot be adapted to the other model. Existing research addresses PA in detail only in the public key setting. This paper gives the first formal exploration of plaintext awareness in the identity-based setting and, as initial work, proceeds in the random oracle model. The focus is mainly on identity-based key encapsulation mechanisms (IB-KEMs), for which the paper presents the first definitions of plaintext awareness, highlights the role of PA in proof strategies for IND-CCA security, and explores relationships between PA and other security properties. On the practical side, our work offers the first highly efficient, general approach for building IB-KEMs that are simultaneously plaintext-aware and IND-CCA-secure. Our construction is inspired by the Fujisaki-Okamoto (FO) transform, but demands weaker and more natural properties of its building blocks. This result comes from a new look at the notion of γ-uniformity that was inherent in the original FO transform. We show that for IB-KEMs (and PK-KEMs), this assumption can be replaced with a weaker computational notion, which is in fact implied by one-wayness. Finally, we give the first concrete IB-KEM scheme that is PA and IND-CCA-secure by applying our construction to a popular IB-KEM and optimizing it for better performance.
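
As a purely schematic illustration of an FO-style transform of the kind discussed above (not the paper's exact construction), the sketch below derandomises encryption with one hash oracle and derives the session key with another, with a re-encryption check at decapsulation. `enc`, `dec`, and the oracles are placeholders; identities are assumed to be byte strings.

    import hashlib, os

    def _oracle(tag, data):
        # Hash modelling a random oracle; the tag separates G from H.
        return hashlib.sha256(tag + data).digest()

    def encapsulate(enc, pk, identity):
        m = os.urandom(32)                  # random seed message
        r = _oracle(b"G", identity + m)     # encryption coins derived from m
        c = enc(pk, identity, m, r)         # probabilistic enc, explicit coins
        return c, _oracle(b"H", m)          # ciphertext and session key

    def decapsulate(enc, dec, pk, sk, identity, c):
        m = dec(sk, identity, c)
        if m is None:
            return None
        # Re-encryption check: reject ciphertexts whose coins were not
        # derived from m; PA/IND-CCA arguments typically lean on this step.
        if enc(pk, identity, m, _oracle(b"G", identity + m)) != c:
            return None
        return _oracle(b"H", m)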

Relevance: 80.00%

Abstract:

This paper presents a vulnerability in the generic object oriented substation event (GOOSE) communication protocol. It describes an exploit of the vulnerability and proposes a number of attack variants. The attacks send GOOSE frames containing higher status numbers to the receiving intelligent electronic device (IED). This prevents legitimate GOOSE frames from being processed and effectively hijacks the communication channel, which can be used to implement a denial-of-service (DoS) attack or to manipulate the subscriber (unless a status number roll-over occurs). The authors refer to this attack as a poisoning of the subscriber. A number of GOOSE poisoning attacks are evaluated experimentally on a test bed and demonstrated to be successful.
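
A toy model of the subscriber behaviour that the attack exploits may help: subscribers track the highest status number (stNum) seen and drop frames that appear stale, so a forged frame with a very high stNum silences legitimate publishers until a roll-over. Names are illustrative, not taken from the IEC 61850 standard text.

    class GooseSubscriber:
        def __init__(self):
            self.last_st_num = 0

        def receive(self, frame):
            # Frames with an stNum at or below the last accepted value are
            # treated as old retransmissions and discarded.
            if frame["stNum"] <= self.last_st_num:
                return None
            self.last_st_num = frame["stNum"]
            return frame["data"]

    sub = GooseSubscriber()
    sub.receive({"stNum": 1, "data": "breaker closed"})    # accepted
    sub.receive({"stNum": 2**32 - 2, "data": "poison"})    # attacker frame
    sub.receive({"stNum": 2, "data": "breaker open"})      # legitimate, dropped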

Relevance: 80.00%

Abstract:

The Modicon Communication Bus (Modbus) protocol is one of the most commonly used protocols in industrial control systems. Modbus was not designed to provide security. This paper confirms that the Modbus protocol is vulnerable to flooding attacks, which involve the injection of commands that disrupt the normal operation of the control system. It describes a set of experiments showing that both an anomaly-based change detection algorithm and the signature-based Snort threshold module are capable of detecting Modbus flooding attacks. In comparing these intrusion detection techniques, we find that signature-based detection requires a carefully selected threshold value, and that the anomaly-based change detection algorithm may incur a short delay before detecting the attacks, depending on the parameters used. In addition, we generate a network traffic dataset of flooding attacks on the Modbus control system protocol.
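
As one possible shape for the threshold-style detection compared above, here is a minimal sliding-window rate detector. The window and request limit are placeholders that would have to be tuned to the monitored Modbus network; this is not the paper's exact detector.

    from collections import deque

    class RateDetector:
        def __init__(self, window_s=1.0, max_requests=50):
            self.window_s = window_s          # observation window (seconds)
            self.max_requests = max_requests  # alert threshold per window
            self.times = deque()

        def observe(self, t):
            """t: packet timestamp in seconds; returns True on alert."""
            self.times.append(t)
            # Discard timestamps that have fallen out of the window.
            while self.times and t - self.times[0] > self.window_s:
                self.times.popleft()
            return len(self.times) > self.max_requests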

Relevance: 80.00%

Abstract:

This paper proposes techniques to improve the performance of i-vector based speaker verification systems when only short utterances are available. Short-utterance i-vectors vary with the speaker, session variations, and the phonetic content of the utterance. Well-established methods such as linear discriminant analysis (LDA), source-normalized LDA (SN-LDA) and within-class covariance normalisation (WCCN) exist for compensating the session variation, but we have identified the variability introduced by phonetic content as an additional source of degradation when short-duration utterances are used. To compensate for utterance variations in short-utterance i-vector speaker verification systems using cosine similarity scoring (CSS), we introduce a short utterance variance normalization (SUVN) technique and a short utterance variance (SUV) modelling approach at the i-vector feature level. A combination of SUVN with LDA and SN-LDA is proposed to compensate for the session and utterance variations, and is shown to improve performance over the traditional approach of using LDA and/or SN-LDA followed by WCCN. An alternative approach is also introduced, using probabilistic linear discriminant analysis (PLDA) to directly model the SUV. The combination of SUVN, LDA and SN-LDA followed by SUV PLDA modelling provides an improvement over the baseline PLDA approach. We also show that, for this combination of techniques, the utterance variation information needs to be artificially added to full-length i-vectors for PLDA modelling.
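
The scoring backend into which these compensation techniques plug can be sketched in a few lines. Here `P` stands for whatever trained compensation projection (e.g. an LDA or SN-LDA matrix, possibly preceded by SUVN) is in use; this snippet assumes it rather than learns it.

    import numpy as np

    def css_score(w_enrol, w_test, P):
        """Cosine similarity scoring between two i-vectors after the
        compensation projection P (trained elsewhere)."""
        x = P @ w_enrol
        y = P @ w_test
        # The cosine depends only on the angle between the compensated
        # i-vectors, so variance removed by P no longer affects the score.
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))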

Relevance: 80.00%

Abstract:

This paper analyses the probabilistic linear discriminant analysis (PLDA) speaker verification approach with limited development data. It investigates the use of the median as the central tendency of a speaker's i-vector representation, and the effectiveness of weighted discriminative techniques, on the performance of state-of-the-art length-normalised Gaussian PLDA (GPLDA) speaker verification systems. The analysis shows that the median (using a median Fisher discriminator (MFD)) provides a better representation of a speaker when the number of representative i-vectors available during development is reduced, and that the pair-wise weighting approach in weighted LDA and weighted MFD provides further improvement in limited development conditions. The best performance is obtained using a weighted MFD approach, which shows over 10% improvement in EER over the baseline GPLDA system in mismatched and interview-interview conditions.
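
The central-tendency choice being analysed is easy to picture: with few development i-vectors per speaker, the coordinate-wise median is less sensitive to an outlying session than the mean. The shapes and values below are illustrative only.

    import numpy as np

    ivecs = np.random.randn(5, 400)        # 5 sessions, 400-dim i-vectors
    ivecs[0] += 10.0                       # one outlying session

    mean_rep = ivecs.mean(axis=0)          # pulled toward the outlier
    median_rep = np.median(ivecs, axis=0)  # more robust speaker representation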

Relevance: 80.00%

Abstract:

The geographic location of cloud data storage centres is an important issue for many organisations and individuals, owing to regulations that require data and operations to reside in specific geographic locations. Cloud users may therefore want assurance that their stored data have not been relocated to unknown geographic regions that may compromise their security. Albeshri et al. (2012) combined proof of storage (POS) protocols with distance-bounding protocols to address this problem. However, their scheme incurs unnecessary delay when utilising typical POS schemes, due to computational overhead at the server side. The aim of this paper is to improve the basic GeoProof protocol by reducing the computational overhead at the server side. We show how this maintains the same level of security while achieving more accurate geographic assurance.
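
A toy distance-bounding check of the kind GeoProof layers on a POS protocol is sketched below; the point of the proposed improvement is to keep server-side computation out of the timed exchange, so that the round-trip time reflects distance rather than proof-generation cost. Function names and the calibration details are assumptions.

    import time

    SPEED_OF_LIGHT_KM_S = 299_792.458

    def within_distance(send_challenge, max_km, rounds=10):
        """send_challenge(): performs one timed challenge-response exchange
        with the storage server; returns when the response arrives."""
        worst_rtt = 0.0
        for _ in range(rounds):
            t0 = time.perf_counter()
            send_challenge()
            worst_rtt = max(worst_rtt, time.perf_counter() - t0)
        # The RTT covers the distance twice; real deployments would also
        # subtract a calibrated processing delay, omitted in this sketch.
        return (worst_rtt / 2) * SPEED_OF_LIGHT_KM_S <= max_km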

Relevance: 80.00%

Abstract:

INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX 2013 evaluation campaign, which consisted of four activities addressing three themes: searching professional and user-generated data (Social Book Search track); searching structured or semantic data (Linked Data track); and focused retrieval (Snippet Retrieval and Tweet Contextualization tracks). INEX 2013 was an exciting year in which we consolidated the collaboration with (other activities in) CLEF and, for the second time, ran our workshop as part of the CLEF labs in order to facilitate knowledge transfer between the evaluation forums. This paper gives an overview of all the INEX 2013 tracks, their aims and tasks, and the test collections built, and provides an initial analysis of the results.