664 results for Information Security, Safe Behavior, Users’ behavior, Brazilian users, threats
Abstract:
Information privacy requirements of patients and information requirements of healthcare providers (HCP) are competing concerns. Reaching a balance between these requirements has proven difficult but is crucial for the success of eHealth systems. The traditional approaches to information management have been preventive measures which either allow or deny access to information. We believe that this approach is inappropriate for a domain such as healthcare. We contend that introducing information accountability (IA) to eHealth systems can achieve the aforementioned balance without the need for rigid information control. IA is a fairly new concept in computer science; hence, there are no unambiguously accepted principles as yet, but the concept delivers promising advantages for information management in a robust manner. Accountable-eHealth (AeH) systems are eHealth systems which use IA principles as the measure for privacy and information management. AeH systems face three main impediments: technological; social; and ethical and legal. In this paper, we present the AeH model and focus on the legal aspects of AeH systems in Australia. We investigate current legislation available in Australia regarding health information management and identify future legal requirements if AeH systems are to be implemented in Australia.
Abstract:
This paper investigates the use of mel-frequency delta-phase (MFDP) features in comparison to, and in fusion with, traditional mel-frequency cepstral coefficient (MFCC) features within joint factor analysis (JFA) speaker verification. MFCC features, commonly used in speaker recognition systems, are derived purely from the magnitude spectrum, with the phase spectrum completely discarded. In this paper, we investigate whether features derived from the phase spectrum can provide additional speaker discriminant information to the traditional MFCC approach in a JFA-based speaker verification system. Results are presented which provide a comparison of MFCC-only, MFDP-only and score fusion of the two approaches within a JFA speaker verification approach. Based upon the results presented using the NIST 2008 Speaker Recognition Evaluation (SRE) dataset, we believe that, while MFDP features alone cannot compete with MFCC features, MFDP can provide complementary information that results in improved speaker verification performance when both approaches are combined in score fusion, particularly in the case of shorter utterances.
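To make the contrast concrete, here is a minimal Python sketch of how the two feature streams differ at extraction time: MFCCs are computed from the magnitude spectrum, while the delta-phase features use the frame-to-frame phase difference. All parameter values are illustrative, and the exact MFDP definition used in the paper may differ.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Simplified triangular mel filterbank."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = np.linspace(0, 1, max(c - l, 1))
        fb[i, c:r] = np.linspace(1, 0, max(r - c, 1))
    return fb

def mfcc_and_mfdp(signal, sr=8000, n_fft=256, hop=128, n_filters=24, n_ceps=13):
    """Return per-frame cepstra from the magnitude spectrum (MFCC-style)
    and from the frame-to-frame phase difference (MFDP-style)."""
    win = np.hamming(n_fft)
    frames = [signal[i:i + n_fft] * win
              for i in range(0, len(signal) - n_fft, hop)]
    spec = np.fft.rfft(frames, n_fft)
    fb = mel_filterbank(n_filters, n_fft, sr)
    dct = np.cos(np.pi / n_filters * (np.arange(n_filters) + 0.5)[None, :]
                 * np.arange(n_ceps)[:, None])
    mfcc = np.log(np.maximum(np.abs(spec) @ fb.T, 1e-10)) @ dct.T
    # Phase stream: unwrap along time, difference consecutive frames,
    # then apply the same filterbank and DCT (one fewer frame than MFCC).
    dphase = np.diff(np.unwrap(np.angle(spec), axis=0), axis=0)
    mfdp = (np.abs(dphase) @ fb.T) @ dct.T
    return mfcc, mfdp
```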
Abstract:
This paper presents a model for generating a MAC tag by injecting the input message directly into the internal state of a nonlinear filter generator. This model generalises a similar model for unkeyed hash functions proposed by Nakano et al. We develop a matrix representation for the accumulation phase of our model and use it to analyse the security of the model against man-in-the-middle forgery attacks based on collisions in the final register contents. The results of this analysis show that some conclusions of Nakano et al. regarding the security of their model are incorrect. We also use our results to comment on several recent MAC proposals which can be considered as instances of our model, and specify choices of options within the model which should prevent the type of forgery discussed here. In particular, suitable initialisation of the register and active use of a secure nonlinear filter will prevent an attacker from finding a collision in the final register contents which could result in a forged MAC.
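A toy Python instance of the model may help fix ideas: a shift register with linear feedback into which each message bit is injected during accumulation, followed by blank mixing rounds and tag extraction through a nonlinear filter. The register size, tap positions and filter function below are illustrative choices, not the parameters analysed in the paper.

```python
def filter_mac(key, message_bits, n=64, taps=(0, 2, 17, 35), tag_len=32):
    """Toy MAC: message bits injected into the state of a filter generator."""
    state = [(key >> i) & 1 for i in range(n)]    # key-dependent initialisation

    def clock(injected=0):
        fb = injected
        for t in taps:                            # linear feedback
            fb ^= state[t]
        state.append(fb)
        state.pop(0)
        # nonlinear filter over a selection of register stages
        return (state[1] & state[4]) ^ state[9] ^ (state[20] & state[33] & state[50])

    for b in message_bits:                        # accumulation phase
        clock(b)
    for _ in range(n):                            # mixing before tag output
        clock()
    return [clock() for _ in range(tag_len)]      # tag bits from the filter
```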
Abstract:
Control Objectives for Information and related Technology (COBIT) has grown to be one of the most significant IT Governance (ITG) frameworks available and also the best suited for audit, as it provides comprehensive guidance around IT processes and related business goals. However, given the constraints of both time and resources within which the Australian public sector is forced to operate, implementing an audit framework the size of COBIT in its entirety is often considered too large a task. As an alternative to full implementation it is not uncommon for the public sector to “cherry pick” controls from the framework in an effort to reduce its size. This paper reports on research undertaken to evaluate the potential to use an optimised sub-set of COBIT 5 for ITG audit in Australian public sector organisations. A survey methodology was employed to determine the control-objectives considered to be the most important to a selection of public sector organisations. Twelve control-objectives were identified as being most important to Queensland public sector organisations. As ten of these were also identified by previous studies, it appears possible to derive an optimised sub-set from COBIT 5 that would be both enduring and relevant across geographical and organisational contexts.
Abstract:
Modern applications comprise multiple components, such as browser plug-ins, often of unknown provenance and quality. Statistics show that failure of such components accounts for a high percentage of software faults. Enabling isolation of such fine-grained components is therefore necessary to increase the robustness and resilience of security-critical and safety-critical computer systems. In this paper, we evaluate whether such fine-grained components can be sandboxed through the use of the hardware virtualization support available in modern Intel and AMD processors. We compare the performance and functionality of such an approach to two previous software-based approaches. The results demonstrate that hardware isolation minimizes the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. We also show that our relatively simple implementation has equivalent run-time performance, with overheads of less than 34%, does not require custom tool chains and provides enhanced functionality over software-only approaches, confirming that hardware virtualization technology is a viable mechanism for fine-grained component isolation.
Abstract:
The use of the Trusted Platform Module (TPM) is becoming increasingly popular in many security systems. To access objects protected by the TPM (such as cryptographic keys), several cryptographic protocols, such as the Object Specific Authorization Protocol (OSAP), can be used. Given the sensitivity and importance of the objects protected by the TPM, the security of this protocol is vital. Formal methods allow a precise and complete analysis of cryptographic protocols such that their security properties can be asserted with high assurance. Unfortunately, formal verification of these protocols is limited, despite the abundance of formal tools that one can use. In this paper, we demonstrate the use of Coloured Petri Nets (CPN), a type of formal technique, to formally model the OSAP. Using this model, we then verify the authentication property of this protocol using the state space analysis technique. The results of the analysis demonstrate that, as reported by Chen and Ryan, the authentication property of OSAP can be violated.
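For readers unfamiliar with the technique, state space analysis amounts to exhaustively enumerating every reachable state of the model and checking the desired property in each. The actual OSAP model is a Coloured Petri Net written in CPN ML within CPN Tools; the Python sketch below only illustrates the underlying search, under the assumption that states are hashable values and `successors` encodes the protocol and intruder moves.

```python
from collections import deque

def explore(initial, successors, property_holds):
    """Breadth-first state space exploration: return a counterexample state
    violating the property, or None if it holds in every reachable state."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not property_holds(state):
            return state              # e.g. TPM accepts a command A never sent
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None
```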
Abstract:
This paper presents a novel technique for segmenting an audio stream into homogeneous regions according to speaker identities, background noise, music, environmental and channel conditions. Audio segmentation is useful in audio diarization systems, which aim to annotate an input audio stream with information that attributes temporal regions of the audio to their specific sources. The segmentation method introduced in this paper is performed using the Generalized Likelihood Ratio (GLR), computed between two adjacent sliding windows over preprocessed speech. This approach is inspired by the popular segmentation method proposed in the pioneering work of Chen and Gopalakrishnan, using the Bayesian Information Criterion (BIC) with an expanding search window. This paper aims to identify and address the shortcomings associated with such an approach. The results obtained by the proposed segmentation strategy are evaluated on the 2002 Rich Transcription (RT-02) Evaluation dataset, and a miss rate of 19.47% and a false alarm rate of 16.94% are achieved at the optimal threshold.
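A minimal sketch of the GLR computation, assuming each window of acoustic features is modelled by a single full-covariance Gaussian; the window length and hop below are illustrative, not the values tuned in the paper.

```python
import numpy as np

def glr(X, Y):
    """GLR between two feature windows: how much better two separate
    Gaussians explain the data than one shared Gaussian."""
    def n_logdet(Z):
        cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])
        return len(Z) * np.linalg.slogdet(cov)[1]
    return 0.5 * (n_logdet(np.vstack([X, Y])) - n_logdet(X) - n_logdet(Y))

def change_scores(features, win=100, hop=10):
    """Slide two adjacent fixed-size windows over the feature stream;
    peaks in the returned curve indicate candidate change points."""
    return [glr(features[i:i + win], features[i + win:i + 2 * win])
            for i in range(0, len(features) - 2 * win, hop)]
```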
Abstract:
This paper proposes the use of Bayesian approaches with the cross likelihood ratio (CLR) as a criterion for speaker clustering within a speaker diarization system, using eigenvoice modeling techniques. The CLR has previously been shown to be an effective decision criterion for speaker clustering using Gaussian mixture models. Recently, eigenvoice modeling has become an increasingly popular technique, due to its ability to adequately represent a speaker based on sparse training data, as well as to provide an improved capture of differences in speaker characteristics. The integration of eigenvoice modeling into the CLR framework to capitalize on the advantages of both techniques has also been shown to be beneficial for the speaker clustering task. Building on that success, this paper proposes the use of Bayesian methods to compute the conditional probabilities involved in the CLR, thus effectively combining the eigenvoice-CLR framework with the advantages of a Bayesian approach to the diarization problem. Results obtained on the 2002 Rich Transcription (RT-02) Evaluation dataset show an improved clustering performance, resulting in a 33.5% relative improvement in the overall Diarization Error Rate (DER) compared to the baseline system.
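As a reference point, the CLR between two clusters x and y is a sum of two symmetric log-likelihood ratios against a universal background model (UBM). The sketch below assumes models exposing a per-frame `score_samples` method (as in `sklearn.mixture.GaussianMixture`); the paper's eigenvoice and Bayesian variants replace how these likelihoods are obtained.

```python
import numpy as np

def clr(x_feats, y_feats, model_x, model_y, ubm):
    """Cross Likelihood Ratio: high values suggest x and y share a speaker,
    so the pair with the highest CLR is merged during clustering."""
    lx = np.mean(model_y.score_samples(x_feats)) - np.mean(ubm.score_samples(x_feats))
    ly = np.mean(model_x.score_samples(y_feats)) - np.mean(ubm.score_samples(y_feats))
    return lx + ly
```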
Abstract:
A Delay Tolerant Network (DTN) is one where nodes can be highly mobile, with long message delay times forming dynamic and fragmented networks. Traditional centralised network security is difficult to implement in such a network, so distributed security solutions are more desirable in DTN implementations. Establishing effective trust in distributed systems with no centralised Public Key Infrastructure (PKI), such as the Pretty Good Privacy (PGP) scheme, usually requires human intervention. Our aim is to build and compare different decentralised trust systems for implementation in autonomous DTN systems. In this paper, we utilise a key distribution model based on the Web of Trust principle, and employ a simple leverage-of-common-friends trust system to establish initial trust in autonomous DTNs. We compare this system with two other methods of autonomously establishing initial trust by introducing a malicious node and measuring the distribution of malicious and fake keys. Our results show that the new trust system not only reduces the distribution of fake and malicious keys by 40% by the end of the simulation, but also improves key distribution between nodes. This paper contributes a comparison of three decentralised trust systems that can be employed in autonomous DTN systems.
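A minimal sketch of the leverage-of-common-friends rule, assuming each received key carries the identities of the keys that signed it; the threshold of two vouchers is an illustrative choice, not the paper's parameter.

```python
def accept_key(candidate_signers, trusted_keys, min_common=2):
    """Accept a newly received key only if enough already-trusted keys
    ('common friends') have signed it."""
    return len(set(candidate_signers) & set(trusted_keys)) >= min_common

# Example: two of our trusted contacts vouch for the new key, so it is accepted.
print(accept_key({"alice", "bob", "mallory"}, {"alice", "bob", "carol"}))  # True
```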
Abstract:
Many methods currently exist for deformable face fitting. A drawback of nearly all these approaches is that (i) the landmark positions they produce are noisy, and (ii) the noise is biased across frames (i.e., the misalignment is toward common directions across all frames). In this paper we propose a grouped $\ell_1$-norm anchored method for simultaneously aligning an ensemble of deformable face images stemming from the same subject, given noisy heterogeneous landmark estimates. Impressive alignment performance improvement and refinement is obtained using very weak initialization as "anchors".
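One plausible form of such an objective, in our own notation rather than necessarily the paper's, is

$$\min_{\{p_i\}}\; \sum_{i=1}^{N} \sum_{g=1}^{G} \left\| \hat{s}_{i,g} - W(\bar{s}_g; p_i) \right\|_2,$$

where $\hat{s}_{i,g}$ stacks the noisy landmark estimates of group $g$ in frame $i$, $W(\bar{s}_g; p_i)$ warps the shared template landmarks by the parameters $p_i$ of frame $i$, and the sum of unsquared $\ell_2$ norms over groups (a grouped $\ell_1$, or $\ell_{2,1}$, penalty) allows entire outlying landmark groups to be discounted while the weakly initialized anchors tie the ensemble together.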
Abstract:
This paper presents an efficient face detection method suitable for real-time surveillance applications. Improved efficiency is achieved by constraining the search window of an AdaBoost face detector to pre-selected regions. Firstly, the proposed method takes a sparse grid of sample pixels from the image to reduce whole-image scan time. A fusion of foreground segmentation and skin colour segmentation is then used to select candidate face regions. Finally, a classifier-based face detector is applied only to selected regions to verify the presence of a face (the Viola-Jones detector is used in this paper). The proposed system is evaluated using 640 × 480 pixel test images and compared with other relevant methods. Experimental results show that the proposed method reduces the detection time to 42 ms, whereas the Viola-Jones detector alone requires 565 ms (on a desktop processor). This improvement makes the face detector suitable for real-time applications. Furthermore, the proposed method requires 50% of the computation time of the best competing method, while reducing the false positive rate by 3.2% and maintaining the same hit rate.
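A rough OpenCV sketch of the region pre-selection idea, assuming a video stream so that background subtraction is available; the skin-colour bounds, area threshold and cascade file are illustrative, and the sparse-grid pixel sampling step is omitted for brevity.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
bg_subtractor = cv2.createBackgroundSubtractorMOG2()

def detect_faces(frame):
    fg = bg_subtractor.apply(frame)                    # foreground (motion) mask
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # skin-colour mask
    fused = cv2.bitwise_and(fg, skin)                  # fusion of the two cues
    contours, _ = cv2.findContours(fused, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    faces = []
    for c in contours:                                 # Viola-Jones only on ROIs
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:                                # skip tiny candidate regions
            continue
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        for (fx, fy, fw, fh) in face_cascade.detectMultiScale(roi, 1.1, 4):
            faces.append((x + fx, y + fy, fw, fh))
    return faces
```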
Abstract:
Information technology (IT) has been playing a powerful role in creating a competitive advantage for organisations over the past decades. This role has become proportionally greater over time as expectations for IT investments to drive business opportunities keep rising. However, this reliance on IT has also raised concerns about regulatory compliance, governance and security. IT governance (ITG) audit leverages the skills of IS/IT auditors to ensure that IT initiatives are in line with business strategies. ITG audit emerged as part of performance audit to provide an assessment of the effective implementation of ITG. This research empirically examines the ITG audit challenges in the public sector. Based on literature and Delphi research, this paper provides insights regarding the impact of, and the effort required to address, these challenges. The authors also present the ten major ITG audit challenges facing Australian public sector organisations today.
Abstract:
Due to the increased complexity, scale, and functionality of information and telecommunication (IT) infrastructures, new exploits and vulnerabilities are discovered every day. These vulnerabilities are most often used by malicious people to penetrate IT infrastructures, mainly to disrupt business or steal intellectual property. Current incidents prove that it is no longer sufficient to perform manual security tests of the IT infrastructure based on sporadic security audits. Instead, networks should be continuously tested against possible attacks. In this paper we present current results and challenges towards realizing automated and scalable solutions to identify possible attack scenarios in an IT infrastructure. Namely, we define an extensible framework which uses public vulnerability databases to identify probable multi-step attacks in an IT infrastructure, and provide recommendations in the form of patching strategies, topology changes, and configuration updates.
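A small sketch of the framework's core step, assuming vulnerability records reduced to precondition/postcondition privilege pairs; the hosts, placeholder CVE identifiers and reachability relation below are invented for illustration (`networkx` is used for the graph).

```python
import networkx as nx

# Hypothetical inputs: vulnerability records reduced to pre/post privileges,
# plus a reachability relation derived from the network topology.
vulns = [
    {"host": "web01", "cve": "CVE-XXXX-0001", "pre": "network", "post": "user"},
    {"host": "web01", "cve": "CVE-XXXX-0002", "pre": "user",    "post": "root"},
    {"host": "db01",  "cve": "CVE-XXXX-0003", "pre": "user",    "post": "root"},
]
reachable = {("web01", "db01")}           # web01 can open connections to db01

g = nx.DiGraph()
for v in vulns:                           # exploit edges between privilege states
    src = ("attacker", "network") if v["pre"] == "network" else (v["host"], v["pre"])
    g.add_edge(src, (v["host"], v["post"]), exploit=v["cve"])
for a, b in reachable:                    # pivoting after full compromise
    g.add_edge((a, "root"), (b, "user"), exploit="pivot")

# A multi-step attack scenario is any path from the attacker to a critical
# asset; patching recommendations would target the exploits on such paths.
for path in nx.all_simple_paths(g, ("attacker", "network"), ("db01", "root")):
    print(" -> ".join(map(str, path)))
```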
Abstract:
This chapter presents a comparative survey of recent key management (key distribution, discovery, establishment and update) solutions for wireless sensor networks. We consider both distributed and hierarchical sensor network architectures where unicast, multicast and broadcast types of communication take place. Probabilistic, deterministic and hybrid key management solutions are presented, and we determine a set of metrics to quantify their security properties and resource usage such as processing, storage and communication overheads. We provide a taxonomy of solutions, and identify trade-offs in these schemes to conclude that there is no one-size-fits-all solution.
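As one concrete example from the probabilistic family the survey covers, here is a sketch in the spirit of the Eschenauer-Gligor random key predistribution scheme; the pool and ring sizes are illustrative.

```python
import random

POOL_SIZE, RING_SIZE = 1000, 50
pool = list(range(POOL_SIZE))             # global key pool (key identifiers)

def key_ring():
    """Each node is preloaded with a random subset of the pool."""
    return set(random.sample(pool, RING_SIZE))

a, b = key_ring(), key_ring()
shared = a & b                            # neighbours compare key identifiers
if shared:
    print("link key established from shared key id", min(shared))
else:
    print("no shared key: establish a path key via a common neighbour")
```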