948 results for "Error threshold"


Relevance:

20.00%

Publisher:

Abstract:

The Chemistry Discipline Network has recently completed two distinct mapping exercises. The first is a snapshot of the chemistry taught at 12 institutions around Australia in 2011. There were many similarities but also important differences in the content taught and assessed at different institutions, as well as significant differences in delivery, particularly laboratory contact hours, and in the forms and weightings of assessment. The second exercise mapped the chemistry degrees at three institutions to the Threshold Learning Outcomes (TLOs) for chemistry. Importantly, some of the TLOs were addressed by multiple units at all institutions, while others were not met, or were met at an introductory level only. The exercise also exposed some challenges in using the TLOs as currently written.

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary care. METHODS: Eleven nominal group interviews of patients and primary health care professionals were held in Auckland, New Zealand, during late 2007. Group members reported and helped to classify types of potential error by patients. We synthesized the ideas that emerged from the nominal groups into a taxonomy of patient error. RESULTS: Our taxonomy is a 3-level system encompassing 70 potential types of patient error. The first level classifies 8 categories of error into 2 main groups: action errors and mental errors. The action errors, which result in part or whole from patient behavior, are attendance errors, assertion errors, and adherence errors. The mental errors, which are errors in patient thought processes, comprise memory errors, mindfulness errors, misjudgments, and, more distally, knowledge deficits and attitudes not conducive to health. CONCLUSION: The taxonomy is an early attempt to understand and recognize how patients may err and what clinicians should aim to influence so they can help patients act safely. This approach begins to balance perspectives on error but requires further research. There is a need to move beyond seeing patient, clinician, and system errors as separate categories of error. An important next step may be research that attempts to understand how patients, clinicians, and systems interact to cocreate and reduce errors.

Relevance:

20.00%

Publisher:

Abstract:

The reliability of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects whose rates are interdependent, so it is difficult to reduce one type of error without increasing the other. The focus of this dissertation is to investigate a method, based on classifier fusion techniques, to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived from the base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is evaluated empirically on text-dependent speaker verification, using Hidden Markov Model based, digit-dependent speaker models in each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled by two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validity of the derived expressions for the error estimates is assessed on test data. The performance of the sequential method is further shown to depend on the order in which the digits (instances) are combined and on the nature of the repeated attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and a sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping. Tuning its parameters - the number of instances and samples - serves both the security and the user-convenience requirements of speaker-specific verification. The architecture investigated here is also applicable to verification using other biometric modalities, such as handwriting, fingerprints and keystrokes.
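The abstract does not reproduce the derived error expressions; as a rough sketch of their flavour, suppose the architecture uses N decision stages (instances) with M attempts (samples) per stage, accepts a stage if any of its M attempts is accepted, and accepts the claim only if every stage accepts. Under the independence assumption, with per-attempt rates FAR_i and FRR_i at stage i:

```latex
% Sketch under assumed decision rules and statistical independence;
% the dissertation's exact expressions may differ.
\mathrm{FAR} \;=\; \prod_{i=1}^{N}\Bigl[\,1-\bigl(1-\mathrm{FAR}_i\bigr)^{M}\Bigr],
\qquad
\mathrm{FRR} \;=\; 1-\prod_{i=1}^{N}\Bigl[\,1-\mathrm{FRR}_i^{\,M}\Bigr]
```

Increasing the number of stages suppresses false accepts at the expense of false rejects, while increasing the attempts per stage does the reverse, which is the controlled trade-off the architecture exploits; correlated decisions perturb these products, motivating the correlation terms mentioned above.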

Relevance:

20.00%

Publisher:

Abstract:

Speaker attribution is the task of annotating a spoken audio archive based on speaker identities. This can be achieved using speaker diarization and speaker linking. In our previous work, we proposed an efficient attribution system, using complete-linkage clustering, for attributing large sets of two-speaker telephone data. In this paper, we build on that approach to achieve a robust system applicable to multiple recording domains. To do this, we first extend the diarization module of our system to accommodate multi-speaker (>2) recordings. We achieve this by using a robust cross-likelihood ratio (CLR) threshold as the stopping criterion for clustering, in place of the original two-speaker stopping criterion used for telephone data. We evaluate this baseline diarization module on a dataset of Australian broadcast news recordings, showing a significant loss of diarization accuracy when the true number of speakers in a recording is not known in advance. We therefore propose applying an additional pass of complete-linkage clustering to the diarization module, demonstrating an absolute improvement of 20% in diarization error rate (DER). We then evaluate our proposed multi-domain attribution system on the broadcast news data, demonstrating achievable attribution error rates (AER) as low as 17%.
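A minimal sketch of the threshold-stopped clustering step described above, assuming a hypothetical pairwise scoring function clr(a, b) that returns a CLR-style similarity between two speaker segments (the system's actual features, models and scoring are not reproduced here):

```python
from itertools import combinations

def complete_linkage_cluster(segments, clr, threshold):
    """Merge segments while the complete-linkage CLR of the most similar pair
    of clusters stays above `threshold` (a sketch; `clr(a, b)` is an assumed
    similarity function between two segments)."""
    clusters = [[s] for s in segments]
    while len(clusters) > 1:
        best_score, best_pair = None, None
        for (i, a), (j, b) in combinations(enumerate(clusters), 2):
            # Complete linkage: the similarity of two clusters is that of
            # their least-similar cross-cluster segment pair.
            score = min(clr(x, y) for x in a for y in b)
            if best_score is None or score > best_score:
                best_score, best_pair = score, (i, j)
        if best_score < threshold:  # CLR threshold stopping criterion
            break
        i, j = best_pair
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

Because merging stops when the best complete-linkage score drops below the threshold, the number of clusters, and hence the number of speakers, is determined by the data rather than fixed in advance.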

Relevance:

20.00%

Publisher:

Abstract:

Expert searchers engage with information as information brokers, researchers, reference librarians, information architects, faculty who teach advanced search, and in a variety of other information-intensive professions. Their experiences are characterized by a profound understanding of information concepts and skills and they have an agile ability to apply this knowledge to interacting with and having an impact on the information environment. This study explored the learning experiences of searchers to understand the acquisition of search expertise. The research question was: What can be learned about becoming an expert searcher from the learning experiences of proficient novice searchers and highly experienced searchers? The key objectives were: (1) to explore the existence of threshold concepts in search expertise; (2) to improve our understanding of how search expertise is acquired and how novice searchers, intent on becoming experts, can learn to search in more expertlike ways. The participant sample drew from two population groups: (1) highly experienced searchers with a minimum of 20 years of relevant professional experience, including LIS faculty who teach advanced search, information brokers, and search engine developers (11 subjects); and (2) MLIS students who had completed coursework in information retrieval and online searching and demonstrated exceptional ability (9 subjects). Using these two groups allowed a nuanced understanding of the experience of learning to search in expertlike ways, with data from those who search at a very high level as well as those who may be actively developing expertise. The study used semi-structured interviews, search tasks with think-aloud narratives, and talk-after protocols. Searches were screen-captured with simultaneous audio-recording of the think-aloud narrative. Data were coded and analyzed using NVivo9 and manually. Grounded theory allowed categories and themes to emerge from the data. Categories represented conceptual knowledge and attributes of expert searchers. In accord with grounded theory method, once theoretical saturation was achieved, during the final stage of analysis the data were viewed through lenses of existing theoretical frameworks. For this study, threshold concept theory (Meyer & Land, 2003) was used to explore which concepts might be threshold concepts. Threshold concepts have been used to explore transformative learning portals in subjects ranging from economics to mathematics. A threshold concept has five defining characteristics: transformative (causing a shift in perception), irreversible (unlikely to be forgotten), integrative (unifying separate concepts), troublesome (initially counter-intuitive), and may be bounded. Themes that emerged provided evidence of four concepts which had the characteristics of threshold concepts. These were: information environment: the total information environment is perceived and understood; information structures: content, index structures, and retrieval algorithms are understood; information vocabularies: fluency in search behaviors related to language, including natural language, controlled vocabulary, and finesse using proximity, truncation, and other language-based tools. The fourth threshold concept was concept fusion, the integration of the other three threshold concepts and further defined by three properties: visioning (anticipating next moves), being light on one's 'search feet' (dancing property), and profound ontological shift (identity as searcher). 
In addition to the threshold concepts, findings were reported that were not concept-based, including praxes and traits of expert searchers. A model of search expertise is proposed with the four threshold concepts at its core that also integrates the traits and praxes elicited from the study, attributes which are likewise long recognized in LIS research as present in professional searchers. The research provides a deeper understanding of the transformative learning experiences involved in the acquisition of search expertise. It adds to our understanding of search expertise in the context of today's information environment and has implications for teaching advanced search, for research more broadly within library and information science, and for methodologies used to explore threshold concepts.

Relevance:

20.00%

Publisher:

Abstract:

Purpose: To examine between-eye differences in corneal higher order aberrations and topographical characteristics in a range of refractive error groups. Methods: One hundred and seventy subjects were recruited, including: 50 emmetropic isometropes, 48 myopic isometropes (spherical equivalent anisometropia ≤ 0.75 D), 50 myopic anisometropes (spherical equivalent anisometropia ≥ 1.00 D) and 22 keratoconics. The corneal topography of each eye was captured using the E300 videokeratoscope (Medmont, Victoria, Australia) and analyzed using custom written software. All left eye data were rotated about the vertical midline to account for enantiomorphism. Corneal height data were used to calculate the corneal wavefront error using a ray tracing procedure and fit with Zernike polynomials (up to and including the eighth radial order). The wavefront was centred on the line of sight by using the pupil offset value from the pupil detection function of the videokeratoscope. Refractive power maps were analysed to assess corneal sphero-cylindrical power vectors. Differences between the more myopic (or more advanced eye for keratoconics) and the less myopic (less advanced) eye were examined. Results: Over a 6 mm diameter, the cornea of the more myopic eye was significantly steeper (refractive power vector M) than that of the fellow eye in both anisometropes (0.10 ± 0.27 D steeper, p = 0.01) and keratoconics (2.54 ± 2.32 D steeper, p < 0.001), while no significant interocular difference was observed for isometropic emmetropes (-0.03 ± 0.32 D) or isometropic myopes (0.02 ± 0.30 D) (both p > 0.05). In keratoconic eyes, the between-eye difference in corneal refractive power was greatest inferiorly (associated with cone location). Similarly, in myopic anisometropes, the more myopic eye displayed a central region of significant inferior corneal steepening (0.15 ± 0.42 D steeper) relative to the fellow eye (p = 0.01). Significant interocular differences in higher order aberrations were only observed in the keratoconic group, for: vertical trefoil C(3,-3), horizontal coma C(3,1), secondary astigmatism along 45° C(4,-2) (p < 0.05) and vertical coma C(3,-1) (p < 0.001). The interocular difference in vertical pupil decentration (relative to the corneal vertex normal) increased with between-eye asymmetry in refraction (isometropia 0.00 ± 0.09, anisometropia 0.03 ± 0.15 and keratoconus 0.08 ± 0.16 mm), as did the interocular difference in corneal vertical coma C(3,-1) (isometropia -0.006 ± 0.142, anisometropia -0.037 ± 0.195 and keratoconus -1.243 ± 0.936 μm), but these differences only reached statistical significance for pair-wise comparisons between the isometropic and keratoconic groups. Conclusions: There is a high degree of corneal symmetry between the fellow eyes of myopic and emmetropic isometropes. Interocular differences in corneal topography and higher order aberrations are more apparent in myopic anisometropes and keratoconics, due to regional (primarily inferior) differences in topography and between-eye differences in vertical pupil decentration relative to the corneal vertex normal. Interocular asymmetries in corneal optics appear to be associated with anisometropic refractive development.
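For reference, the sphero-cylindrical power vector components referred to above are conventionally derived from sphere S, cylinder C and axis α as follows (a standard conversion, not restated in the abstract):

```latex
M = S + \frac{C}{2}, \qquad
J_{0} = -\frac{C}{2}\cos 2\alpha, \qquad
J_{45} = -\frac{C}{2}\sin 2\alpha
```

M is the spherical equivalent on which the interocular steepness comparisons are based, while J0 and J45 capture the orthogonal and oblique astigmatic components.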

Relevance:

20.00%

Publisher:

Abstract:

This paper explores the theoretical framework of threshold concepts and its potential for LIS education. Threshold concepts are key ideas, often troublesome and counter-intuitive, that are critical to profound understanding of a domain. Once understood, they allow mastery of significant aspects of the domain, opening up new, previously inaccessible ways of thinking. The paper is developed in three parts. First, threshold concept theory is introduced and studies of its use in higher education are described, including emergent work related to LIS. Second, results of a recent study on learning experiences integral to learning to search are presented along with their implications for search expertise and search education, forming a case illustration of what threshold concept theory may contribute to this and other areas of LIS education. Third, the potential of threshold concept theory for LIS education is discussed. The paper concludes that threshold concept theory has much to offer LIS education, particularly for researching critical concepts and competencies, and considerations for a research agenda are put forth.

Relevance:

20.00%

Publisher:

Abstract:

This chapter describes an innovative method of curriculum design that is based on combining phenomenographic research, and the associated variation theory of learning, with the notion of disciplinary threshold concepts to focus specialised design attention on the most significant and difficult parts of the curriculum. The method involves three primary stages: (i) identification of disciplinary concepts worthy of intensive curriculum design attention, using the criteria for threshold concepts; (ii) action research into variation in students’ understandings/misunderstandings of those concepts, using phenomenography as the research approach; (iii) design of learning activities to address the poorer understandings identified in the second stage, using variation theory as a guiding framework. The curriculum design method is inherently theory and evidence based. It was developed and trialed during a two-year project funded by the Australian Learning and Teaching Council, using physics and law disciplines as case studies. Disciplinary teachers’ perceptions of the impact of the method on their teaching and understanding of student learning were profound. Attempts to measure the impact on student learning were less conclusive; teachers often unintentionally deviated from the design when putting it into practice for the first time. Suggestions for improved implementation of the method are discussed.

Relevance:

20.00%

Publisher:

Abstract:

Black et al. (2004) identified a systematic difference between LA–ICP–MS and TIMS measurements of 206Pb/238U in zircons, which they correlated with the incompatible trace element content of the zircon. We show that the offset between the LA–ICP–MS and TIMS measured 206Pb/238U correlates more strongly with the total radiogenic Pb than with any incompatible trace element. This suggests that the cause of the 206Pb/238U offset is related to differences in the radiation damage (alpha dose) between the reference and unknowns. We test this hypothesis in two ways. First, we show that there is a strong correlation between the difference in the LA–ICP–MS and TIMS measured 206Pb/238U and the difference in the alpha dose received by unknown and reference zircons. The LA–ICP–MS ages for the zircons we have dated can be as much as 5.1% younger than their TIMS age to 2.1% older, depending on whether the unknown or reference received the higher alpha dose. Second, we show that by annealing both reference and unknown zircons at 850 °C for 48 h in air we can eliminate the alpha-dose-induced differences in measured 206Pb/238U. This was achieved by analyzing six reference zircons a minimum of 16 times in two round robin experiments: the first consisting of unannealed zircons and the second of annealed grains. The maximum offset between the LA–ICP–MS and TIMS measured 206Pb/238U for the unannealed zircons was 2.3%, which reduced to 0.5% for the annealed grains, as predicted by within-session precision based on counting statistics. Annealing unknown zircons and references to the same state prior to analysis holds the promise of reducing the 3% external error for the measurement of 206Pb/238U of zircon by LA–ICP–MS, indicated by Klötzli et al. (2009), to better than 1%, but more analyses of annealed zircons by other laboratories are required to evaluate the true potential of the annealing method.
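The alpha dose invoked above is usually computed from the measured U and Th contents and the age of the grain; one standard formulation (not spelled out in the abstract) is:

```latex
D_{\alpha} = 8\,N_{238}\bigl(e^{\lambda_{238}t}-1\bigr)
           + 7\,N_{235}\bigl(e^{\lambda_{235}t}-1\bigr)
           + 6\,N_{232}\bigl(e^{\lambda_{232}t}-1\bigr)
```

where N_238, N_235 and N_232 are the present numbers of atoms of 238U, 235U and 232Th per gram of zircon, the λ are the corresponding decay constants, t is the age, and the coefficients 8, 7 and 6 count the alpha particles emitted in each decay chain.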

Relevance:

20.00%

Publisher:

Abstract:

Dengue virus (DENV) transmission in Australia is driven by weather factors and imported dengue fever (DF) cases. However, uncertainty remains regarding the threshold effects of high-order interactions among weather factors and imported DF cases and the impact of these factors on autochthonous DF. A time-series regression tree model was used to assess the threshold effects of natural temporal variations of weekly weather factors and weekly imported DF cases in relation to incidence of weekly autochthonous DF from 1 January 2000 to 31 December 2009 in Townsville and Cairns, Australia. In Cairns, mean weekly autochthonous DF incidence increased 16.3-fold when the 3-week lagged moving average maximum temperature was <32 °C, the 4-week lagged moving average minimum temperature was ≥24 °C and the sum of imported DF cases in the previous 2 weeks was >0. When the 3-week lagged moving average maximum temperature was ≥32 °C and the other two conditions mentioned above remained the same, mean weekly autochthonous DF incidence only increased 4.6-fold. In Townsville, the mean weekly incidence of autochthonous DF increased 10-fold when 3-week lagged moving average rainfall was ≥27 mm, but it only increased 1.8-fold when rainfall was <27 mm during January to June. Thus, we found different responses of autochthonous DF incidence to weather factors and imported DF cases in Townsville and Cairns. Imported DF cases may also trigger and enhance local outbreaks under favorable climate conditions.
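As a schematic of the regression-tree approach (not the study's actual code or data: the file name, column names and scikit-learn's CART implementation are all assumptions), the threshold splits reported above could be recovered along the following lines:

```python
# Illustrative sketch: fitting a regression tree to weekly autochthonous
# dengue counts with lagged weather and imported-case predictors.
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

df = pd.read_csv("weekly_dengue_cairns.csv")  # hypothetical file and columns

# Lagged moving-average predictors analogous to those in the abstract.
df["tmax_ma_lag3"] = df["tmax"].rolling(3).mean().shift(3)
df["tmin_ma_lag4"] = df["tmin"].rolling(4).mean().shift(4)
df["imported_sum_2wk"] = df["imported_cases"].rolling(2).sum().shift(1)

features = ["tmax_ma_lag3", "tmin_ma_lag4", "imported_sum_2wk"]
data = df.dropna(subset=features + ["autochthonous_cases"])

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=8, random_state=0)
tree.fit(data[features], data["autochthonous_cases"])

# The printed rules expose threshold splits (e.g. tmax_ma_lag3 < 32) and the
# mean weekly incidence in each terminal node, mirroring the "n-fold"
# contrasts reported in the abstract.
print(export_text(tree, feature_names=features))
```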

Relevance:

20.00%

Publisher:

Abstract:

Fusion techniques can be used in biometrics to achieve higher accuracy. When biometric systems are in operation and the threat level changes, controlling the trade-off between detection error rates can reduce the impact of an attack. In a fused system, varying a single threshold does not allow this to be achieved, but systematic adjustment of a set of parameters does. In this paper, fused decisions from a multi-part, multi-sample sequential architecture are investigated for that purpose in an iris recognition system. A specific implementation of the multi-part architecture is proposed and the effect of the number of parts and samples in the resultant detection error rate is analysed. The effectiveness of the proposed architecture is then evaluated under two specific cases of obfuscation attack: miosis and mydriasis. Results show that robustness to such obfuscation attacks is achieved, since lower error rates than in the case of the non-fused base system are obtained.
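To illustrate why a set of architecture parameters gives control that a single threshold does not, the following sketch assumes identical, statistically independent base decisions and an "accept a part if any sample is accepted, accept the claim only if all parts are accepted" rule; the paper's architecture and error rates will differ:

```python
# Illustrative sketch (assumptions, not the paper's implementation): fused
# false-accept / false-reject rates for a sequential multi-part, multi-sample
# decision rule with per-comparison base rates far0 and frr0.
def fused_error_rates(far0: float, frr0: float, n_parts: int, n_samples: int):
    # A part is passed if any of its n_samples attempts is accepted;
    # the claim is accepted only if every part is passed.
    far_part = 1.0 - (1.0 - far0) ** n_samples
    frr_part = frr0 ** n_samples
    far_fused = far_part ** n_parts
    frr_fused = 1.0 - (1.0 - frr_part) ** n_parts
    return far_fused, frr_fused

# Sweeping the two architecture parameters moves the operating point without
# retuning the base matcher threshold.
for n_parts in (1, 2, 3):
    for n_samples in (1, 2, 3):
        far, frr = fused_error_rates(0.05, 0.05, n_parts, n_samples)
        print(f"parts={n_parts} samples={n_samples} FAR={far:.4f} FRR={frr:.4f}")
```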

Relevance:

20.00%

Publisher:

Abstract:

Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seems to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
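A minimal sketch of the sensor-to-navigation-frame mapping discussed above, with made-up extrinsic calibration and vehicle-pose values; the paper's error model concerns how uncertainty in exactly these transforms, and in the observation timestamp, propagates into the mapped point:

```python
# Illustrative sketch (assumed conventions and values, not the paper's code).
import numpy as np

def homogeneous(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Extrinsic calibration: sensor frame -> vehicle body frame (assumed values).
T_body_sensor = homogeneous(np.eye(3), np.array([1.2, 0.0, 0.8]))

# Vehicle pose at the observation timestamp: body frame -> navigation frame.
# In practice this comes from interpolated navigation data; timestamp error
# biases this transform directly.
T_nav_body = homogeneous(np.eye(3), np.array([10.0, -4.0, 0.0]))

# A single LIDAR return expressed in the sensor frame (homogeneous coords).
p_sensor = np.array([15.0, 2.0, -0.5, 1.0])

# Compose the transforms to express the point in the navigation frame.
p_nav = T_nav_body @ T_body_sensor @ p_sensor
print(p_nav[:3])
```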

Relevance:

20.00%

Publisher:

Abstract:

Objectives: To investigate the relationship between two assessments to quantify delayed onset muscle soreness [DOMS]: visual analog scale [VAS] and pressure pain threshold [PPT]. Methods: Thirty-one healthy young men [25.8 ± 5.5 years] performed 10 sets of six maximal eccentric contractions of the elbow flexors with their non-dominant arm. Before and one to four days after the exercise, muscle pain perceived upon palpation of the biceps brachii at three sites [5, 9 and 13 cm above the elbow crease] was assessed by VAS with a 100 mm line [0 = no pain, 100 = extremely painful], and PPT of the same sites was determined by an algometer. Changes in VAS and PPT over time were compared amongst three sites by a two-way repeated measures analysis of variance, and the relationship between VAS and PPT was analyzed using a Pearson product-moment correlation. Results: The VAS increased one to four days after exercise and peaked two days post-exercise, while the PPT decreased most one day post-exercise and remained below baseline for four days following exercise [p < 0.05]. No significant difference among the three sites was found for VAS [p = 0.62] or PPT [p = 0.45]. The magnitude of change in VAS did not significantly correlate with that of PPT [r = −0.20, p = 0.28]. Conclusion: These results suggest that the level of muscle pain is not region-specific, at least among the three sites investigated in the study, and VAS and PPT provide different information about DOMS, indicating that VAS and PPT represent different aspects of pain.
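As a sketch of the final analysis step only (the arrays below are made up and do not reproduce the study's data), the correlation between the changes in VAS and PPT could be computed as:

```python
# Illustrative only: hypothetical change scores (post-exercise minus baseline).
import numpy as np
from scipy import stats

delta_vas = np.array([42.0, 55.0, 38.0, 61.0, 47.0, 52.0])        # mm
delta_ppt = np.array([-85.0, -60.0, -95.0, -70.0, -55.0, -80.0])  # kPa

r, p = stats.pearsonr(delta_vas, delta_ppt)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```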

Relevance:

20.00%

Publisher:

Abstract:

In 2012 the New Zealand government spent $3.4 billion, or nearly $800 per person, on responses to crime via the justice system. Research shows that much of this spending does little to reduce the chances of re-offending. Relatively little money is spent on victims, the rehabilitation of offenders or to support the families of offenders. This book is based on papers presented at the Costs of Crime forum held by the Institute of Policy Studies in February 2011. It presents lessons from what is happening in Australia, Britain and the United States and focuses on how best to manage crime, respond to victims, and reduce offending in a cost-effective manner in a New Zealand context. It is clear that strategies are needed that are based on better research and a more informed approach to policy development. Such strategies must assist victims constructively while also reducing offending. Using public resources to lock as many people in our prisons as possible cannot be justified by the evidence and is fiscally unsustainable; nor does such an approach make society safer. To reduce the costs of crime we need to reinvest resources in effective strategies to build positive futures for those at risk and the communities needed to sustain them.

Relevance:

20.00%

Publisher:

Abstract:

Purpose: This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology, including measurement of the dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods: A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs, and setting the acceptable uncertainty in OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 mm to 100 mm, using a nominal photon energy of 6 MV. Results: According to the practical definition established in this project, field sizes < 15 mm were considered very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or field size uncertainties were 0.5 mm, field sizes < 12 mm were considered very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. The theoretical definition of very small field size therefore coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect; this was found to occur at field sizes < 12 mm. Source occlusion also caused a large change in OPF for field sizes < 8 mm. Based on the results of this study, field sizes < 12 mm were considered theoretically very small for 6 MV beams. Conclusions: Extremely careful experimental methodology, including measurement of the dosimetric field size at the same time as the output factor measurement for each field size setting, and very precise detector alignment, is required at field sizes at least < 12 mm and more conservatively < 15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
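For context, the output factor discussed above is, in its usual form (assumed here rather than stated in the abstract), the ratio

```latex
\mathrm{OPF}(f) = \frac{D(f)}{D(f_{\mathrm{ref}})}
```

of the dose at the measurement point for field size f to that for the reference field f_ref (commonly 100 mm × 100 mm) delivered with the same monitor units; the factorisation mentioned in the Methods splits this ratio into components isolating lateral electronic disequilibrium, phantom scatter and source occlusion.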