866 results for Information Security, Safe Behavior, Users’ behavior, Brazilian users, threats
Abstract:
In this paper, cognitive load analysis via acoustic- and CAN-Bus-based driver performance metrics is employed to assess two different commercial speech dialog systems (SDS) during in-vehicle use. Several metrics are proposed to measure increases in stress, distraction and cognitive load, and these measures are compared with a statistical analysis of the speech recognition component of each SDS. It is found that care must be taken when designing an SDS, as it may increase cognitive load; this can be observed through increased speech response delay (SRD), changes in speech production due to negative emotion towards the SDS, and decreased driving performance on lateral control tasks. From this study, guidelines are presented for designing systems intended for use in vehicular environments.
Abstract:
SITDRM is a privacy protection system that protects private data through the enforcement of MPEG REL licenses provided by consumers. Direct issuing of licenses by consumers has several usability problems, which are discussed in this paper. Further, we describe how SITDRM incorporates the P3P language to provide a consumer-centered privacy protection system.
Abstract:
Innovation Management (IM) in most knowledge-based firms is used on an ad hoc basis: senior managers invoke the term to leverage competitive edge without understanding its true meaning or how its robust application impacts organisational performance. There have been attempts in the manufacturing industry to harness the innovative potential of the business and use it as a point of difference to improve financial and non-financial outcomes. However, further work is required to extrapolate the lessons learnt and introduce incremental and/or radical innovation to knowledge-based firms. An international structural engineering firm has been proactive in exploring and implementing this idea and has forged an alliance with the Queensland University of Technology to start the Innovation Management Program (IMP). The aim was to develop a permanent and sustainable program through which innovation can be woven into the fabric of the organisation, reinforcing the firm's vision, reinvigorating ideas and creating new options that help in its realisation. This paper outlines the need for innovation in knowledge-based firms and how this consulting engineering firm reacted to this exigency. The development of the Innovation Management Program, its different themes (and associated projects) and how they integrate to form a holistic model is also discussed. The model is designed around the need to provide professional qualification improvement opportunities for staff, set up organised, structured and easily accessible knowledge repositories to capture tacit and explicit knowledge, and implement efficient project management strategies with a view to enhancing client satisfaction. A Delphi-type workshop is used to confirm the themes and projects. Some of the individual projects and their expected outcomes are also discussed.
A questionnaire and interviews were used to collect data to select appropriate candidates responsible for leading these projects. Following an in-depth analysis of preliminary research results, some recommendations on the selection process will also be presented.
Abstract:
Public key cryptography, and with it the ability to compute digital signatures, has made it possible for electronic commerce to flourish. It is thus unsurprising that the proposed Australian NECS will also utilise digital signatures so as to provide a fully automated process, from the creation of electronic land title instruments to their digital signing and electronic lodgment. This necessitates an analysis of the fraud risks raised by the use of digital signatures, because a compromise of the integrity of digital signatures would lead to a compromise of the Torrens system itself. This article will show that digital signatures may in fact offer greater security against fraud than handwritten signatures; but to achieve this, digital signatures require an infrastructure in which each component is properly implemented and managed.
Abstract:
We consider one-round key exchange protocols secure in the standard model. The security analysis uses the powerful security model of Canetti and Krawczyk and a natural extension of it to the ID-based setting. It is shown how key encapsulation mechanisms (KEMs) can be used in a generic way to obtain two different protocol designs with progressively stronger security guarantees. A detailed analysis of the performance of the protocols is included; surprisingly, when instantiated with specific KEM constructions, the resulting protocols are competitive with the best previous schemes, which have proofs only in the random oracle model.
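The generic pattern the abstract refers to — building one-round key exchange from a KEM — can be illustrated with a minimal sketch. This is not the paper's actual construction: the toy discrete-log "KEM" below (group, KDF, and transcript binding all invented for illustration) only shows the shape of the protocol, in which each party encapsulates a secret to the peer's long-term public key in a single simultaneous flow and both secrets feed a key derivation function.

```python
import hashlib
import secrets

# Toy ElGamal-style KEM over a Mersenne-prime group (illustration only;
# a real instantiation would use a proper IND-CCA-secure KEM).
P = 2 ** 127 - 1
G = 3

def kem_keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def kem_encap(pk):
    r = secrets.randbelow(P - 2) + 1
    c = pow(G, r, P)                                   # encapsulation
    k = hashlib.sha256(str(pow(pk, r, P)).encode()).digest()
    return c, k

def kem_decap(sk, c):
    return hashlib.sha256(str(pow(c, sk, P)).encode()).digest()

def session_key(k_to_b, k_to_a, transcript):
    # KDF binding both encapsulated secrets and the message transcript.
    return hashlib.sha256(k_to_b + k_to_a + transcript).hexdigest()

# One-round exchange: A and B each hold a long-term KEM key pair and,
# in one simultaneous flow, encapsulate to the peer's public key.
sk_a, pk_a = kem_keygen()
sk_b, pk_b = kem_keygen()
c_a, k_a = kem_encap(pk_b)   # A -> B
c_b, k_b = kem_encap(pk_a)   # B -> A
transcript = str((pk_a, pk_b, c_a, c_b)).encode()
key_at_a = session_key(k_a, kem_decap(sk_a, c_b), transcript)
key_at_b = session_key(kem_decap(sk_b, c_a), k_b, transcript)
assert key_at_a == key_at_b
```

Both parties derive the same session key from the two independently encapsulated secrets; the paper's stronger designs differ in how the KEM outputs are combined and proved secure.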
Abstract:
This research investigates wireless intrusion detection techniques for detecting attacks on IEEE 802.11i Robust Secure Networks (RSNs). Despite a variety of comprehensive preventative security measures, RSNs remain vulnerable to a number of attacks. Because preventative measures fail to address all RSN vulnerabilities, a comprehensive monitoring capability is needed to detect attacks on RSNs and to proactively address potential security vulnerabilities by detecting security policy violations in the WLAN. This research proposes novel wireless intrusion detection techniques to address these monitoring requirements and also studies correlation of the generated alarms across wireless intrusion detection system (WIDS) sensors, and across the detection techniques themselves, for greater reliability and robustness. The specific outcomes of this research are:
- a comprehensive review of the outstanding vulnerabilities and attacks in IEEE 802.11i RSNs;
- a comprehensive review of the wireless intrusion detection techniques currently available for detecting attacks on RSNs;
- identification of the drawbacks and limitations of the currently available wireless intrusion detection techniques in detecting attacks on RSNs;
- development of three novel wireless intrusion detection techniques for detecting RSN attacks and security policy violations in RSNs;
- development of algorithms for each novel intrusion detection technique to correlate alarms across distributed WIDS sensors;
- development of an algorithm for automatic attack scenario detection using cross-detection-technique correlation; and
- development of an algorithm to automatically assign a priority to the detected attack scenario using cross-detection-technique correlation.
Abstract:
The relationship between multiple cameras viewing the same scene may be discovered automatically by finding corresponding points in the two views and then solving for the camera geometry. In camera networks with sparsely placed or low-resolution cameras, or in scenes with few distinguishable features, it may be difficult to find enough reliable correspondences from which to compute geometry. This paper presents a method for extracting a larger number of correspondences from an initial set of putative correspondences, without any knowledge of the scene or camera geometry. The method may be used to increase the number of correspondences and make geometry computations possible in cases where existing methods produce insufficient correspondences.
Abstract:
This paper examines performances that defy established representations of disease, deformity and bodily difference. Historically, the ‘deformed’ body has been cast – onstage and in sideshows – as flawed, an object of pity, or an example of the human capacity to overcome. Such representations define the boundaries of the ‘normal’ body by displaying its Other. They bracket the ‘abnormal’ body off as an example of deviance from the ‘norm’, thus, paradoxically, decreasing the social and symbolic visibility (and agency) of disabled people. Yet, in contemporary theory and culture, these representations are reappropriated – by disabled artists, certainly, but also as what Carrie Sandahl has called a ‘master trope’ for representing a range of bodily differences. In this paper, I investigate this phenomenon. I analyse French Canadian choreographer Marie Chouinard’s bODY rEMIX/gOLDBERG vARIATIONS, in which 10 able-bodied dancers are reborn as bizarre biotechnical mutants via the use of crutches, walkers, ballet shoes and barres as prosthetic pseudo-organs. These bodies defy boundaries, defy expectations, develop new modes of expression, and celebrate bodily difference. The self-inflicted pain dancers experience during training is cast as a ‘disablement’ that is ultimately ‘enabling’. I ask what effect encountering able bodies celebrating ‘dis’ or ‘diff’ ability has on audiences. Do we see the emergence of a once-repressed Other, no longer silenced, censored or negated? Or does using ‘disability’ to express the dancers’ difference and self-determination usurp a ‘trope’ by which disabled people themselves might speak back to the dominant culture, creating further censorship?
Abstract:
The problem of impostor dataset selection for GMM-based speaker verification is addressed through the recently proposed data-driven background dataset refinement technique. The SVM-based refinement technique selects from a candidate impostor dataset those examples that are most frequently selected as support vectors when training a set of SVMs on a development corpus. This study demonstrates the versatility of dataset refinement in the task of selecting suitable impostor datasets for use in GMM-based speaker verification. The use of refined Z- and T-norm datasets provided performance gains of 15% in EER in the NIST 2006 SRE over the use of heuristically selected datasets. The refined datasets were shown to generalise well to the unseen data of the NIST 2008 SRE.
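The core of the refinement technique described here is ranking candidate impostor examples by how often they are selected as support vectors across a set of SVMs trained on a development corpus. A minimal sketch of that idea follows, with several assumptions: the data is synthetic (real systems operate on GMM supervectors), the Pegasos-style linear SVM trainer stands in for whatever SVM package is used, and "selected as a support vector" is approximated by lying on or inside the margin at convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Minimal Pegasos-style linear SVM (stochastic subgradient descent
    on the hinge loss); a stand-in for a full SVM trainer."""
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * X[i].dot(w) < 1:
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

# Synthetic stand-ins: 5 "development speakers" (positive classes) and a
# shared candidate impostor pool (the negative class of every SVM).
candidates = rng.normal(0, 1, size=(60, 8))
sv_counts = np.zeros(len(candidates), dtype=int)
for _ in range(5):
    speaker = rng.normal(2, 1, size=(10, 8))  # one speaker's examples
    X = np.vstack([speaker, candidates])
    y = np.concatenate([np.ones(len(speaker)), -np.ones(len(candidates))])
    w = train_linear_svm(X, y)
    # Negatives on or inside the margin behave as support vectors.
    margins = y[len(speaker):] * candidates.dot(w)
    sv_counts += (margins <= 1.0).astype(int)

# Refined impostor dataset: the candidates most frequently selected.
refined = np.argsort(-sv_counts)[:20]
```

The refined index set would then supply the Z- and T-norm cohorts in place of a heuristically chosen dataset.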
Abstract:
A data-driven background dataset refinement technique was recently proposed for SVM-based speaker verification. This method selects a refined SVM background dataset from a set of candidate impostor examples after individually ranking examples by their relevance. This paper extends this technique to the refinement of the T-norm dataset for SVM-based speaker verification. The independent refinement of the background and T-norm datasets provides a means of investigating the sensitivity of SVM-based speaker verification performance to the selection of each of these datasets. Using refined datasets provided improvements of 13% in minimum DCF and 9% in EER over the full set of impostor examples on the 2006 SRE corpus, with the majority of these gains due to refinement of the T-norm dataset. Similar trends were observed for the unseen data of the NIST 2008 SRE.
Abstract:
This paper presents Scatter Difference Nuisance Attribute Projection (SD-NAP) as an enhancement to NAP for SVM-based speaker verification. While standard NAP may inadvertently remove desirable speaker variability, SD-NAP explicitly de-emphasises this variability by incorporating a weighted version of the between-class scatter into the NAP optimisation criterion. Experimental evaluation of SD-NAP with a variety of SVM systems on the 2006 and 2008 NIST SRE corpora demonstrates that SD-NAP provides improved verification performance over standard NAP in most cases, particularly at the EER operating point.
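The contrast between NAP and the scatter-difference variant can be sketched numerically. In standard NAP the nuisance subspace is spanned by the top eigenvectors of the within-class (session) scatter; the abstract describes SD-NAP as subtracting a weighted between-class (speaker) scatter from that criterion so speaker-discriminative directions are de-emphasised. The sketch below uses synthetic vectors and an assumed weight `alpha`; the paper's exact weighting and subspace dimension may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def scatter_matrices(X, labels):
    """Within-class (session/nuisance) and between-class (speaker) scatter."""
    d = X.shape[1]
    W = np.zeros((d, d))
    B = np.zeros((d, d))
    mu = X.mean(axis=0)
    for s in np.unique(labels):
        Xs = X[labels == s]
        mus = Xs.mean(axis=0)
        D = Xs - mus
        W += D.T @ D
        B += len(Xs) * np.outer(mus - mu, mus - mu)
    return W, B

def nap_projection(W, B=None, alpha=0.0, k=2):
    """P = I - U U^T, with U spanning the top-k eigenvectors of the
    criterion matrix: W for standard NAP, W - alpha*B for SD-NAP."""
    M = W if B is None else W - alpha * B
    vals, vecs = np.linalg.eigh(M)
    U = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k eigenvectors
    return np.eye(W.shape[0]) - U @ U.T

# Synthetic supervector stand-ins: 4 speakers x 6 sessions in 10-d.
labels = np.repeat(np.arange(4), 6)
speaker_means = rng.normal(0, 3, size=(4, 10))
X = speaker_means[labels] + rng.normal(0, 1, size=(24, 10))

W, B = scatter_matrices(X, labels)
P_nap = nap_projection(W, k=2)                 # standard NAP
P_sd = nap_projection(W, B, alpha=0.5, k=2)    # scatter-difference NAP
X_proj = X @ P_sd                              # compensated vectors
```

Both projections are symmetric and idempotent; the difference lies only in which directions are treated as nuisance and projected away.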
Abstract:
This work presents an extended Joint Factor Analysis model including explicit modelling of unwanted within-session variability. The goals of the proposed extended JFA model are to improve verification performance with short utterances by compensating for the effects of limited or imbalanced phonetic coverage, and to produce a flexible JFA model that is effective over a wide range of utterance lengths without adjusting model parameters or retraining session subspaces. Experimental results on the 2006 NIST SRE corpus demonstrate the flexibility of the proposed model by providing competitive results over a wide range of utterance lengths without retraining, while also yielding modest improvements over the current state-of-the-art in a number of conditions.
Abstract:
This paper presents a novel approach to estimating the confidence interval of speaker verification scores. This approach is used to minimise the utterance length required to produce a confident verification decision. The confidence estimation method is also extended to address both the high correlation between consecutive frame scores and robustness with very limited training samples. The proposed technique achieves a drastic reduction in the typical data requirements for producing confident decisions in an automatic speaker verification system. When evaluated on the NIST 2005 SRE, the early verification decision method demonstrates that an average of 5–10 seconds of speech is sufficient to produce verification rates approaching those previously achieved using an average in excess of 100 seconds of speech.
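The early-decision idea described in this abstract can be sketched as follows. The sketch is an assumption-laden illustration, not the paper's method: frame scores are simulated, the correlation between consecutive frames is handled with a lag-1 autocorrelation correction to the effective sample size (one common approximation), and a decision is emitted as soon as the confidence interval for the mean score excludes the threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

def effective_sample_size(scores):
    """Discount the frame count for lag-1 autocorrelation, so the
    confidence interval is not overconfident on correlated frames."""
    n = len(scores)
    if n < 3:
        return n
    x = scores - scores.mean()
    denom = (x ** 2).sum()
    if denom == 0:
        return n
    rho = min(max((x[:-1] * x[1:]).sum() / denom, 0.0), 0.99)
    return n * (1 - rho) / (1 + rho)

def early_decision(frame_scores, threshold=0.0, z=2.576, min_frames=50):
    """Return (decision, frames_used); decision stays None until the
    confidence interval for the mean score excludes the threshold."""
    for n in range(min_frames, len(frame_scores) + 1):
        s = frame_scores[:n]
        half = z * s.std(ddof=1) / np.sqrt(effective_sample_size(s))
        mean = s.mean()
        if mean - half > threshold:
            return True, n    # confident accept
        if mean + half < threshold:
            return False, n   # confident reject
    return None, len(frame_scores)

# Simulated correlated frame scores from a true target trial (mean > 0).
raw = rng.normal(0.3, 1.0, size=2000)
scores = np.convolve(raw, np.ones(5) / 5, mode="valid")  # induce correlation
decision, used = early_decision(scores)
```

The decision typically fires after a fraction of the available frames, which is the mechanism behind the reported reduction from over 100 seconds of speech to 5–10 seconds.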