996 results for program verification
Abstract:
There is little evidence that workshops alone have a lasting impact on the day-to-day practice of participants. The current paper examined a strategy to increase generalization and maintenance of skills in the natural environment, using pseudo-patients and immediate performance feedback to reinforce skill acquisition. A random half of pharmacies (N=30) took part in workshop training aimed at optimizing consumers' use of nonprescription analgesic products. Pharmacies in the training group also received performance feedback on their adherence to the recommended protocol. Feedback occurred immediately after a pseudo-patient visit, in which confederates posed as purchasers of analgesics, and combined positive and corrective elements. Trained pharmacists were significantly more accurate at identifying people who misused the medication (P<0.001). The trained pharmacists were also more likely than controls to use open-ended questions (P<0.001), assess readiness to change problematic use (P<0.001), and deliver a brief intervention tailored to the person's commitment to alter his/her usage (P<0.001). Participants responded positively to the feedback. Results were consistent with the hypothesis that combining workshop training with on-site performance feedback enhances practitioners' adherence to protocols in the natural setting.
Abstract:
The problem of impostor dataset selection for GMM-based speaker verification is addressed through the recently proposed data-driven background dataset refinement technique. The SVM-based refinement technique selects, from a candidate impostor dataset, those examples that are most frequently chosen as support vectors when training a set of SVMs on a development corpus. This study demonstrates the versatility of dataset refinement in the task of selecting suitable impostor datasets for use in GMM-based speaker verification. The use of refined Z- and T-norm datasets provided a 15% relative improvement in EER on the NIST 2006 SRE over heuristically selected datasets. The refined datasets were also shown to generalise well to the unseen data of the NIST 2008 SRE.
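Z- and T-norm are standard score-normalization steps whose quality depends directly on the impostor datasets the refinement technique selects. The two normalizations can be sketched as follows (a minimal illustration with hypothetical function names and toy scores, not the paper's implementation):

```python
import numpy as np

def t_norm(raw_score, cohort_scores):
    """Test-normalization: scale a trial score by the statistics of the
    same test utterance scored against a cohort of impostor (T-norm) models."""
    return (raw_score - np.mean(cohort_scores)) / np.std(cohort_scores)

def z_norm(raw_score, impostor_scores):
    """Zero-normalization: scale a trial score by the statistics of the
    target model scored against a set of impostor (Z-norm) utterances."""
    return (raw_score - np.mean(impostor_scores)) / np.std(impostor_scores)

# A raw trial score is mapped to "standard deviations above the impostor mean",
# making a single decision threshold meaningful across models and utterances.
normalized = t_norm(2.0, [0.0, 2.0])
```

Refining the Z- and T-norm datasets changes which impostor scores feed these statistics, which is why the selection matters for EER.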
Abstract:
A data-driven background dataset refinement technique was recently proposed for SVM-based speaker verification. This method selects a refined SVM background dataset from a set of candidate impostor examples after individually ranking the examples by their relevance. This paper extends the technique to the refinement of the T-norm dataset for SVM-based speaker verification. The independent refinement of the background and T-norm datasets provides a means of investigating the sensitivity of SVM-based speaker verification performance to the selection of each of these datasets. Using refined datasets provided improvements of 13% in minimum DCF and 9% in EER over the full set of impostor examples on the NIST 2006 SRE corpus, with the majority of these gains due to refinement of the T-norm dataset. Similar trends were observed on the unseen data of the NIST 2008 SRE.
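The ranking idea behind the refinement — score each candidate impostor example by how often it is selected as a support vector across SVMs trained on a development set — can be sketched as follows (synthetic data and names; a simplified illustration of the selection criterion, not the authors' pipeline):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical data: 40 candidate impostor examples and 5 development
# "speakers", each represented here by a small cloud of feature vectors.
impostors = rng.normal(0.0, 1.0, size=(40, 8))
dev_speakers = [rng.normal(m, 1.0, size=(6, 8)) for m in np.linspace(1.0, 3.0, 5)]

# Count how often each impostor example becomes a support vector when
# training one speaker-vs-impostors SVM per development speaker.
sv_counts = np.zeros(len(impostors), dtype=int)
for spk in dev_speakers:
    X = np.vstack([spk, impostors])
    y = np.array([1] * len(spk) + [0] * len(impostors))
    clf = SVC(kernel="linear").fit(X, y)
    # support_ holds indices into X; keep only those pointing at impostors
    imp_sv = clf.support_[clf.support_ >= len(spk)] - len(spk)
    sv_counts[imp_sv] += 1

# Refined background: the impostor examples most often chosen as SVs.
refined = np.argsort(sv_counts)[::-1][:20]
```

Ranking examples once and then truncating the list is what allows the background and T-norm datasets to be refined independently.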
Abstract:
This work presents an extended Joint Factor Analysis (JFA) model that includes explicit modelling of unwanted within-session variability. The goals of the proposed extended JFA model are to improve verification performance on short utterances by compensating for the effects of limited or imbalanced phonetic coverage, and to produce a flexible JFA model that is effective over a wide range of utterance lengths without adjustment of model parameters, such as retraining of the session subspaces. Experimental results on the 2006 NIST SRE corpus demonstrate the flexibility of the proposed model, providing competitive results over a wide range of utterance lengths without retraining, and also yield modest improvements over the current state of the art in a number of conditions.
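For context, the baseline JFA model (which the extension augments with an explicit within-session term) decomposes the speaker- and session-dependent GMM mean supervector as:

```latex
% Baseline JFA decomposition (background only; the paper's extension adds
% an explicit within-session variability term to this model)
M = m + Vy + Ux + Dz
% m : speaker-independent UBM mean supervector
% V : eigenvoice matrix,            y : speaker factors
% U : eigenchannel (session) matrix, x : session factors
% D : diagonal residual matrix,      z : speaker residual factors
```

Short utterances give noisy estimates of the factors, which is why limited or imbalanced phonetic coverage degrades verification performance.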
Abstract:
This paper presents a novel approach to estimating the confidence interval of speaker verification scores. The approach is utilised to minimise the utterance length required to produce a confident verification decision. The confidence estimation method is also extended to address both the high correlation between consecutive frame scores and robustness with very limited training samples. The proposed technique achieves a drastic reduction in the typical data requirements for producing confident decisions in an automatic speaker verification system. When evaluated on the NIST 2005 SRE, the early verification decision method demonstrates that an average of 5–10 seconds of speech is sufficient to produce verification rates approaching those previously achieved using an average of over 100 seconds of speech.
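The early-decision idea can be sketched as follows: form a confidence interval around the running mean of per-frame scores, shrink the nominal sample size to account for frame-to-frame correlation, and stop as soon as the whole interval clears the threshold. This is a minimal illustration under an assumed lag-one autocorrelation correction, not the paper's exact formulation; all names are hypothetical:

```python
import math

def early_decision_interval(frame_scores, rho=0.0, z=1.96):
    """Confidence interval for the mean of per-frame verification scores.

    Consecutive frame scores are correlated, so the nominal sample size
    overstates the information available; a common correction shrinks it
    to an effective size n_eff = n * (1 - rho) / (1 + rho), where rho is
    the lag-one autocorrelation (an assumption of this sketch).
    """
    n = len(frame_scores)
    mean = sum(frame_scores) / n
    var = sum((s - mean) ** 2 for s in frame_scores) / (n - 1)
    n_eff = max(1.0, n * (1.0 - rho) / (1.0 + rho))
    half_width = z * math.sqrt(var / n_eff)
    return mean - half_width, mean + half_width

lo, hi = early_decision_interval([0.9, 1.1, 1.0, 1.2, 0.8], rho=0.5)
# Accept early once lo exceeds the decision threshold; reject once hi
# falls below it; otherwise keep accumulating frames.
```

Ignoring the correlation (rho=0) yields an over-narrow interval and premature decisions, which is why the correction matters for short utterances.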
Abstract:
Tzeng et al. proposed a new threshold multi-proxy multi-signature scheme with threshold verification. In their scheme, a subset of original signers authenticates a designated proxy group to sign on behalf of the original group. A message m has to be signed by a subset of proxy signers who can represent the proxy group. The proxy signature is then sent to the verifier group, where a subset of verifiers can likewise represent the group to authenticate the proxy signature. Subsequently, two improved schemes were proposed to eliminate the security weakness of Tzeng et al.'s scheme. In this paper, we point out security weaknesses in all three schemes and further propose a novel threshold multi-proxy multi-signature scheme with threshold verification.
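The (t, n) threshold property underlying such schemes — any t group members can act for the group, fewer than t cannot — is commonly built on Shamir secret sharing. A minimal sketch of that building block (general background, not Tzeng et al.'s construction; the prime and parameters are illustrative):

```python
import random

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def make_shares(secret, t, n, seed=42):
    """Split `secret` into n shares; any t of them reconstruct it."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation of the
            y = (y * x + c) % P      # degree t-1 polynomial at x
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```

In a threshold signature scheme the reconstructed value is never exposed directly; members combine partial signatures instead, but the quorum arithmetic is the same.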
Abstract:
The term self-selected (i.e., individual or comfortable walking pace or speed) is commonly used in the literature (Frost, Dowling, Bar-Or, & Dyson, 1997; Jeng, Liao, Lai, & Hou, 1997; Wergel-Kolmert & Wohlfart, 1999; Maltais, Bar-Or, Pienynowski, & Galea, 2003; Browning & Kram, 2005; Browning, Baker, Herron, & Kram, 2006; Hills, Byrne, Wearing, & Armstrong, 2006) and is identified as the most efficient walking speed, with increased efficiency defined by lower oxygen uptake (VO₂) per unit mechanical work (Hoyt & Taylor, 1981; Taylor, Heglund, & Maloiy, 1982; Hreljac, 1993). [...] assessing individual and group differences in metabolic energy expenditure using oxygen uptake requires individuals to be comfortable with, and able to accommodate to, the equipment.
Abstract:
Privacy enhancing protocols (PEPs) are a family of protocols that allow secure exchange and management of sensitive user information. They are important in preserving users’ privacy in today’s open environment. Proof of the correctness of PEPs is necessary before they can be deployed. However, the traditional provable security approach, though well established for verifying cryptographic primitives, is not applicable to PEPs. We apply the formal method of Coloured Petri Nets (CPNs) to construct an executable specification of a representative PEP, namely the Private Information Escrow Bound to Multiple Conditions Protocol (PIEMCP). Formal semantics of the CPN specification allow us to reason about various security properties of PIEMCP using state space analysis techniques. This investigation provides us with preliminary insights for modeling and verification of PEPs in general, demonstrating the benefit of applying the CPN-based formal approach to proving the correctness of PEPs.
Abstract:
The consistently high failure rate in Queensland University of Technology’s introductory programming subject reflects a similar dilemma facing other universities worldwide. Experiments were conducted to quantify the effectiveness of collaborative learning on introductory level programming students over a number of semesters, replicating previous studies in this area. A selection of workshops in the introductory programming subject required students to problem-solve and program in pairs, mimicking the eXtreme Programming concept of pair programming. The failure rate for the subject fell from what had been an average of 30% since 2003 (with a high of 41% in 2006), to just 5% for those students who worked consistently in pairs.
Abstract:
Our students come from diverse backgrounds. They need flexibility in their learning, and opportunities to review aspects of the curriculum they are less confident with. An online teaching and learning programme called the Histology Challenge has been developed to supplement the learning experiences offered in several first-year anatomy and anatomy & physiology units at QUT. The programme is designed to be integrated with the existing Blackboard sites. The Histology Challenge emphasises the foundation concept that a complex system, such as the human body, can be better understood by examining its simpler components. The tutorial allows students to examine the cells and tissues which ultimately determine the structural and functional properties of body organs. The programme is interactive, asking students to make decisions and choices that demonstrate an integrated understanding of systemic and cellular aspects. It provides users with the ability to progress at their own pace and to test their understanding and knowledge. For the developer, the learning activity can be easily controlled and modified via the use of text files. There are several key elements of this programme designed to promote specific aspects of student learning. Minimal text is used; instead, there is a strong emphasis on instructive artwork and original, high-quality histology images presented within a framework that reinforces learning and promotes problem-solving skills.
Abstract:
In this research, the reliability and availability of a fiberboard pressing plant are assessed, and a cost-based optimization of the system is performed using the Monte Carlo simulation method. The woodchip and pulp, or engineered wood, industry in Australia and around the world is a lucrative one; one such product is hardboard. The pressing system is the main system, as it converts the wet pulp to fiberboard. The assessment identified that the pressing system has the highest downtime in the plant and represents the bottleneck in the process. A survey in the late nineties revealed that there are over one thousand such plants around the world, with the pressing system being common among them. No work has been done to assess or estimate the reliability of such a pressing system; this assessment can therefore be used for assessing any plant of this type.
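The Monte Carlo approach to availability can be sketched for a single repairable unit: alternate random up and down periods and measure the fraction of time the unit is up. This is an illustrative sketch assuming exponential failure and repair times with made-up rates, not the plant's actual data or the study's full cost model:

```python
import random

def simulate_availability(mtbf, mttr, horizon, n_runs=2000, seed=1):
    """Monte Carlo estimate of availability for one repairable unit,
    assuming exponentially distributed failure and repair times."""
    rng = random.Random(seed)
    uptime = 0.0
    for _ in range(n_runs):
        t = 0.0
        while t < horizon:
            run = rng.expovariate(1.0 / mtbf)     # time to next failure
            uptime += min(run, horizon - t)       # credit up-time, clipped
            t += run
            if t >= horizon:
                break
            t += rng.expovariate(1.0 / mttr)      # repair duration (down)
    return uptime / (n_runs * horizon)

# Should approach the analytic value MTBF / (MTBF + MTTR) = 200/208
est = simulate_availability(mtbf=200.0, mttr=8.0, horizon=10000.0)
```

Extending the same loop with per-failure repair costs and lost-production costs gives the cost-based optimization: sweep a design parameter (e.g. maintenance interval) and keep the cheapest configuration.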
Abstract:
The cascading appearance-based (CAB) feature extraction technique has established itself as the state of the art in extracting dynamic visual speech features for speech recognition. In this paper, we focus on investigating the effectiveness of this technique for the related speaker verification application. By investigating the speaker verification ability of each stage of the cascade, we demonstrate that the same steps taken to reduce static speaker and environmental information for the speech recognition application also provide similar improvements for speaker recognition. These results suggest that visual speaker recognition can improve considerably when conducted solely through a consideration of the dynamic speech information rather than the static appearance of the speaker's mouth region.