848 results for "Errors and omission"
Abstract:
This paper presents the results of a study that specifically looks at the relationships between measured user capabilities and product demands in a sample of older and disabled users. An empirical study was conducted with 19 users performing tasks with four consumer products (a clock-radio, a mobile phone, a blender and a vacuum cleaner). The sensory, cognitive and motor capabilities of each user were measured using objective capability tests. The study yielded a rich dataset comprising capability measures, product demands, outcome measures (task times and errors), and subjective ratings of difficulty. Scatter plots were produced showing quantified product demands on user capabilities, together with subjective ratings of difficulty. The results are analysed in terms of the strength of the correlations observed, taking into account the limitations of the study sample. Directions for future research are also outlined. © 2011 Springer-Verlag.
Abstract:
With the intermediate-complexity Zebiak-Cane model, we investigate the 'spring predictability barrier' (SPB) problem for El Nino events by tracing the evolution of conditional nonlinear optimal perturbation (CNOP), where CNOP is superimposed on the El Nino events and acts as the initial error with the biggest negative effect on the El Nino prediction. We show that the evolution of CNOP-type errors has obvious seasonal dependence and yields a significant SPB, with the most severe occurring in predictions made before the boreal spring in the growth phase of El Nino. The CNOP-type errors can be classified into two types: one possessing a sea-surface-temperature anomaly pattern with negative anomalies in the equatorial central-western Pacific, positive anomalies in the equatorial eastern Pacific, and a thermocline depth anomaly pattern with positive anomalies along the Equator, and another with patterns almost opposite to those of the former type. In predictions through the spring in the growth phase of El Nino, the initial error with the worst effect on the prediction tends to be the latter type of CNOP error, whereas in predictions through the spring in the decaying phase, the initial error with the biggest negative effect on the prediction is inclined to be the former type of CNOP error. Although the linear singular vector (LSV)-type errors also have patterns similar to the CNOP-type errors, they cover a more localized area than the CNOP-type errors and cause a much smaller prediction error, yielding a less significant SPB. Random errors in the initial conditions are also superimposed on El Nino events to investigate the SPB. We find that, whenever the predictions start, the random errors neither exhibit an obvious season-dependent evolution nor yield a large prediction error, and thus may not be responsible for the SPB phenomenon for El Nino events. These results suggest that the occurrence of the SPB is closely related to particular initial error patterns. The two kinds of CNOP-type error are most likely to cause a significant SPB. They have opposite signs and, consequently, opposite growth behaviours, a result which may demonstrate two dynamical mechanisms of error growth related to SPB: in one case, the errors grow in a manner similar to El Nino; in the other, the errors develop with a tendency opposite to El Nino. The two types of CNOP error may be most likely to provide the information regarding the 'sensitive area' of El Nino-Southern Oscillation (ENSO) predictions. If these types of initial error exist in realistic ENSO predictions and if a target method or a data assimilation approach can filter them, the ENSO forecast skill may be improved. Copyright (C) 2009 Royal Meteorological Society
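For context (the formulation below follows the standard definition of CNOP in the predictability literature; the symbols are generic and not taken from this abstract), the CNOP is the initial perturbation that, subject to an amplitude bound β, maximizes the norm of the nonlinearly evolved prediction error at lead time τ:

\[
J(\delta x_0^{*}) \;=\; \max_{\|\delta x_0\| \le \beta} \bigl\| M_{\tau}(x_0 + \delta x_0) - M_{\tau}(x_0) \bigr\|,
\]

where M_τ is the nonlinear propagator of the forecast model (here the Zebiak-Cane model), x_0 is the reference El Nino state and β bounds the initial error amplitude. The linear singular vector (LSV) mentioned above is the analogous maximizer for the linearized propagator, which is why it can miss error growth that only appears in the full nonlinear evolution.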
Abstract:
Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and uncertainty in the geometry of the environment. The last, which is called model error, has received little previous attention. We present a framework for computing motion strategies that are guaranteed to succeed in the presence of all three kinds of uncertainty. The motion strategies comprise sensor-based gross motions, compliant motions, and simple pushing motions.
Abstract:
M. H. Lee, Q. Meng and H. Holstein, "Learning and Reuse of Experience in Behavior-Based Service Robots", Seventh International Conference on Control, Automation, Robotics and Vision (ICARCV 2002), pp. 1019-1024, December 2002, Singapore.
Abstract:
Background: Hospital clinicians are increasingly expected to practice evidence-based medicine (EBM) in order to minimize medical errors and ensure quality patient care, but experience obstacles to information-seeking. The introduction of a Clinical Informationist (CI) is explored as a possible solution. Aims: This paper investigates the self-perceived information needs, behaviour and skill levels of clinicians in two Irish public hospitals. It also explores clinicians' perceptions of and attitudes to the introduction of a CI into their clinical teams. Methods: A questionnaire survey approach was utilised for this study, with 22 clinicians in two hospitals. Data analysis was conducted using descriptive statistics. Results: Analysis showed that clinicians experience diverse information needs for patient care, and that barriers such as time constraints and insufficient access to resources hinder their information-seeking. Findings also showed that clinicians struggle to fit information-seeking into their working day, regularly seeking to answer patient-related queries outside of working hours. Attitudes towards the concept of a CI were predominantly positive. Conclusion: This paper highlights the factors that characterise and limit hospital clinicians' information-seeking, and suggests the CI as a potentially useful addition to the clinical team, to help them resolve their information needs for patient care.
Abstract:
There have been few genuine success stories about industrial use of formal methods. Perhaps the best known and most celebrated is the use of Z by IBM (in collaboration with Oxford University's Programming Research Group) during the development of CICS/ESA (version 3.1). This work was rewarded with the prestigious Queen's Award for Technological Achievement in 1992 and is especially notable for two reasons: 1) because it is a commercial, rather than safety- or security-critical, system and 2) because the claims made about the effectiveness of Z are quantitative as well as qualitative. The most widely publicized claims are: less than half the normal number of customer-reported errors and a 9% savings in the total development costs of the release. This paper provides an independent assessment of the effectiveness of using Z on CICS based on the set of public domain documents. Using this evidence, we believe that the case study was important and valuable, but that the quantitative claims have not been substantiated. The intellectual arguments and rationale for formal methods are attractive, but their widespread commercial use is ultimately dependent upon more convincing quantitative demonstrations of effectiveness. Despite the pioneering efforts of IBM and PRG, there is still a need for rigorous, measurement-based case studies to assess when and how the methods are most effective. We describe how future similar case studies could be improved so that the results are more rigorous and conclusive.
Abstract:
High-resolution UCLES/AAT spectra are presented for nine B-type supergiants in the SMC, chosen on the basis that they may show varying amounts of nucleosynthetically processed material mixed to their surfaces. These spectra have been analysed using a new grid of approximately 12 000 non-LTE line-blanketed tlusty model atmospheres to estimate atmospheric parameters and chemical composition. The abundance estimates for O, Mg and Si are in excellent agreement with those deduced from other studies, whilst the low estimate for C may reflect the use of the C II doublet at 4267 Å. The N estimates are approximately an order of magnitude greater than those found in unevolved B-type stars or H II regions but are consistent with the other estimates in AB-type supergiants. These results have been combined with results from a unified model atmosphere analysis of UVES/VLT spectra of B-type supergiants (Trundle et al. 2004, A&A, 417, 217) to discuss the evolutionary status of these objects. For two stars that are in common with those discussed by Trundle et al., we have undertaken a careful comparison in order to try to understand the relative importance of the different uncertainties present in such analyses, including observational errors and the use of static or unified models. We find that even for these relatively luminous supergiants, tlusty models yield atmospheric parameters and chemical compositions similar to those deduced from the unified code fastwind.
Abstract:
The results of a study aimed at determining the most important experimental parameters for automated, quantitative analysis of solid dosage form pharmaceuticals (seized and model 'ecstasy' tablets) are reported. Data obtained with a macro-Raman spectrometer were complemented by micro-Raman measurements, which gave information on particle size and provided excellent data for developing statistical models of the sampling errors associated with collecting data as a series of grid points on the tablets' surface. Spectra recorded at single points on the surface of seized MDMA-caffeine-lactose tablets with a Raman microscope (λex = 785 nm, 3 μm diameter spot) were typically dominated by one or other of the three components, consistent with Raman mapping data which showed the drug and caffeine microcrystals were ca 40 μm in diameter. Spectra collected with a microscope from eight points on a 200 μm grid were combined, and in the resultant spectra the average value of the Raman band intensity ratio used to quantify the MDMA:caffeine ratio, μr, was 1.19 with an unacceptably high standard deviation, σr, of 1.20. In contrast, with a conventional macro-Raman system (150 μm spot diameter), combined eight grid point data gave μr = 1.47 with σr = 0.16. A simple statistical model which could be used to predict σr under the various conditions used was developed. The model showed that the decrease in σr on moving to a 150 μm spot was too large to be due entirely to the increased spot diameter but was consistent with the increased sampling volume that arose from a combination of the larger spot size and depth of focus in the macroscopic system. With the macro-Raman system, combining 64 grid points (0.5 mm spacing and 1-2 s accumulation per point) to give a single averaged spectrum for a tablet was found to be a practical balance between minimizing sampling errors and keeping overhead times at an acceptable level. The effectiveness of this sampling strategy was also tested by quantitative analysis of a set of model ecstasy tablets prepared from MDEA-sorbitol (0-30% by mass MDEA). A simple univariate calibration model of averaged 64-point data had R² = 0.998 and an r.m.s. standard error of prediction of 1.1%, whereas data obtained by sampling just four points on the same tablet showed deviations from the calibration of up to 5%.
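A quick way to see the value of the grid-averaging strategy described above is a Monte Carlo sketch (Python; the bulk ratio and per-spot spread below are hypothetical placeholders, not values from the study). It illustrates only the averaging effect, i.e. that the standard deviation of an n-point average falls roughly as 1/sqrt(n); the additional gain from the larger macro sampling volume reported above is not modelled.

# Monte Carlo sketch of grid-point averaging (illustrative values only).
# Each sampled spot is assumed to return a band-intensity ratio drawn from a
# distribution whose spread reflects spot-to-spot heterogeneity.
import numpy as np

rng = np.random.default_rng(0)
true_ratio = 1.5        # hypothetical bulk intensity ratio
per_spot_sigma = 1.2    # hypothetical spot-to-spot standard deviation

def sigma_of_average(n_points, n_trials=20000):
    """Standard deviation of the ratio obtained by averaging n_points spots."""
    samples = rng.normal(true_ratio, per_spot_sigma, size=(n_trials, n_points))
    return samples.mean(axis=1).std()

for n in (1, 8, 64):
    print(f"{n:3d} grid points -> sigma of averaged ratio ~ {sigma_of_average(n):.2f}")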
Abstract:
Many genetic studies have demonstrated an association between the 7-repeat (7r) allele of a 48-base pair variable number of tandem repeats (VNTR) in exon 3 of the DRD4 gene and the phenotype of attention deficit hyperactivity disorder (ADHD). Previous studies have shown inconsistent associations between the 7r allele and neurocognitive performance in children with ADHD. We investigated the performance of 128 children with and without ADHD on the Fixed and Random versions of the Sustained Attention to Response Task (SART). We employed time-series analyses of reaction-time data to allow a fine-grained analysis of reaction time variability, a candidate endophenotype for ADHD. Children were grouped into either the 7r-present group (possessing at least one copy of the 7r allele) or the 7r-absent group. The ADHD group made significantly more commission errors and was significantly more variable in RT in terms of fast moment-to-moment variability than the control group, but no effect of genotype was found on these measures. Children with ADHD without the 7r allele made significantly more omission errors, were significantly more variable in the slow frequency domain and showed less sensitivity to the signal (d') than children with ADHD with the 7r allele and than control children with or without the 7r. These results highlight the utility of time-series analyses of reaction time data for delineating the neuropsychological deficits associated with ADHD and the DRD4 VNTR. Absence of the 7-repeat allele in children with ADHD is associated with a neurocognitive profile of drifting sustained attention that gives rise to variable and inconsistent performance. (c) 2008 Wiley-Liss, Inc.
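The "fast moment-to-moment" versus "slow frequency domain" distinction above can be made concrete with a short sketch (Python; the reaction-time series, drift period and frequency cutoff are synthetic illustrations, not the authors' exact pipeline): fast variability is captured by trial-to-trial successive differences, slow variability by the low-frequency share of the spectrum of the RT series.

# Sketch: separating fast and slow reaction-time (RT) variability.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 256
t = np.arange(n_trials)
# Synthetic RT series (ms): baseline + slow drift + trial-to-trial noise.
rt = 450 + 40 * np.sin(2 * np.pi * t / 80) + rng.normal(0, 25, n_trials)

# Fast moment-to-moment variability: RMS of successive differences.
fast_var = np.sqrt(np.mean(np.diff(rt) ** 2))

# Slow variability: share of spectral power below an illustrative cutoff.
power = np.abs(np.fft.rfft(rt - rt.mean())) ** 2
freqs = np.fft.rfftfreq(n_trials, d=1.0)      # cycles per trial
slow_share = power[(freqs > 0) & (freqs < 0.025)].sum() / power[freqs > 0].sum()

print(f"fast variability (RMS successive difference): {fast_var:.1f} ms")
print(f"slow-frequency share of RT power: {slow_share:.2f}")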
Abstract:
There is a growing literature examining the impact of research on informing policy, and of research and policy on practice. Research and policy do not have the same types of impact on practice but can be evaluated using similar approaches. Sometimes the literature provides a platform for methodological debate but mostly it is concerned with how research can link to improvements in the process and outcomes of education, how it can promote innovative policies and practice, and how it may be successfully disseminated. Whether research-informed or research-based, policy and its implementation are often assessed on such 'hard' indicators of impact as changes in the number of students gaining five or more A to C grades in national examinations or a percentage fall in the number of exclusions in inner city schools. Such measures are necessarily crude, with large samples smoothing out errors and disguising instances of significant success or failure. Even when 'measurable' in such a fashion, however, the impact of any educational change or intervention may require a period of years to become observable. This paper considers circumstances in which short-term change may be implausible or difficult to observe. It explores how impact is currently theorized and researched and promotes the concept of 'soft' indicators of impact in circumstances in which the pursuit of conventional quantitative and qualitative evidence is rendered impractical within a reasonable cost and timeframe. Such indicators are characterized by their avowedly subjective, anecdotal and impressionistic provenance and have particular importance in the context of complex community education issues where the assessment of any impact often faces considerable problems of access. These indicators include the testimonies of those on whom the research intervention or policy focuses (for example, students, adult learners), the formative effects that are often reported (for example, by head teachers, community leaders) and media coverage. The collation and convergence of a wide variety of soft indicators (Where there is smoke …) is argued to offer a credible means of identifying subtle processes that are often neglected as evidence of potential and actual impact (… there is fire).
Abstract:
From the perspective of structure synthesis, certain special geometric constraints, such as joint axes intersecting at one point or being perpendicular to each other, are necessary for realizing the end-effector motion of kinematically decoupled parallel manipulators (PMs) along individual motion axes. These requirements are difficult to achieve in the actual system due to assembly errors and manufacturing tolerances. Errors that violate the geometric constraint requirements are termed “constraint errors”. Constraint errors are usually more troublesome than other manipulator errors because the decoupled motion characteristics of the manipulator may no longer exist and the decoupled kinematic models will be rendered useless. Therefore, identification and prevention of these constraint errors at the initial design and manufacturing stages are of great significance. In this article, three basic types of constraint error are identified, and an approach to evaluate the effects of constraint errors on the decoupling characteristics of PMs is proposed. This approach is illustrated by a 6-DOF PM with decoupled translation and rotation. The results show that the proposed evaluation method is effective in guiding design and assembly.
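As a concrete illustration of the first kind of requirement mentioned above (joint axes that should intersect at one point), the sketch below (Python; a hypothetical check, not the evaluation approach proposed in the article) flags a constraint error when two nominally intersecting axes actually miss each other by more than a tolerance.

# Sketch: detect a violated axis-intersection constraint.
# An axis is modelled as a 3D line: a point p on the axis and a direction d.
import numpy as np

def axis_miss_distance(p1, d1, p2, d2):
    """Minimum distance between two joint axes (3D lines)."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    w = p2 - p1
    cross = np.cross(d1, d2)
    if np.linalg.norm(cross) < 1e-12:                 # parallel axes
        return np.linalg.norm(np.cross(w, d1)) / np.linalg.norm(d1)
    return abs(np.dot(w, cross)) / np.linalg.norm(cross)

# Example: axes meant to intersect at the origin, with the second axis
# offset by a 0.05 mm assembly error along z; tolerance 0.01 mm.
miss = axis_miss_distance([0, 0, 0], [1, 0, 0], [0, 0, 0.05], [0, 1, 0])
print(f"axis miss distance: {miss:.3f} mm -> constraint error: {miss > 0.01}")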
Abstract:
In three studies we looked at two typical misconceptions of probability: the representativeness heuristic and the equiprobability bias. The literature on statistics education predicts that some typical errors and biases (e.g., the equiprobability bias) increase with education, whereas others decrease. This contrasts with the prediction of reasoning theorists, who propose that education reduces misconceptions in general. They also predict that students with higher cognitive ability and higher need for cognition are less susceptible to biases. In Experiments 1 and 2 we found that the equiprobability bias increased with statistics education, and it was negatively correlated with students' cognitive abilities. The representativeness heuristic was mostly unaffected by education, and it was also unrelated to cognitive abilities. In Experiment 3 we demonstrated through an instruction manipulation (asking participants to think logically vs. rely on their intuitions) that the reason for these differences was that the two biases originate in different cognitive processes.
Abstract:
Objectives: Study objectives were to investigate the prevalence and causes of prescribing errors amongst foundation doctors (i.e. junior doctors in their first (F1) or second (F2) year of post-graduate training), describe their knowledge and experience of prescribing errors, and explore their self-efficacy (i.e. confidence) in prescribing.
Method: A three-part mixed-methods design was used, comprising a prospective observational study, semi-structured interviews and a cross-sectional survey. All doctors prescribing in eight purposively selected hospitals in Scotland participated. All foundation doctors throughout Scotland participated in the survey. The number of prescribing errors per patient, doctor, ward and hospital, the perceived causes of errors and a measure of doctors' self-efficacy were established.
Results: 4710 patient charts and 44,726 prescribed medicines were reviewed. There were 3364 errors, affecting 1700 (36.1%) charts (overall error rate: 7.5%; F1: 7.4%; F2: 8.6%; consultants: 6.3%). Higher error rates were associated with teaching hospitals (p < 0.001), surgical (p < 0.001) or mixed wards (p = 0.008) rather than medical wards, higher patient turnover wards (p < 0.001), a greater number of prescribed medicines (p < 0.001) and the months December and June (p < 0.001). One hundred errors were discussed in 40 interviews. Error causation was multi-factorial; work environment and team factors were particularly noted. Of 548 completed questionnaires (national response rate of 35.4%), 508 (92.7% of respondents) reported errors, most of which (328; 64.6%) did not reach the patient. Pressure from other staff, workload and interruptions were cited as the main causes of errors. Foundation year 2 doctors reported greater confidence than year 1 doctors in deciding the most appropriate medication regimen.
Conclusions: Prescribing errors are frequent and of complex causation. Foundation doctors made more errors than other doctors, but undertook the majority of prescribing, making them a key target for intervention. Contributing causes included work environment, team, task, individual and patient factors. Further work is needed to develop and assess interventions that address these.
Abstract:
This study aims to evaluate the use of Varian radiotherapy dynamic treatment log (DynaLog) files to verify IMRT plan delivery as part of a routine quality assurance procedure. Delivery accuracy in terms of machine performance was quantified by multileaf collimator (MLC) position errors and fluence delivery accuracy for patients receiving intensity modulated radiation therapy (IMRT) treatment. The relationship between machine performance and plan complexity, quantified by the modulation complexity score (MCS), was also investigated. Actual MLC positions and the delivered fraction of monitor units (MU), recorded every 50 ms during IMRT delivery, were extracted from the DynaLog files. The planned MLC positions and fractional MU were taken from the record-and-verify system MLC control file. Planned and delivered beam data were compared to determine leaf position errors with and without the overshoot effect. Analysis was also performed on planned and actual fluence maps reconstructed from the MLC control file and the delivered treatment log files, respectively. This analysis was performed for all treatment fractions of 5 prostate, 5 prostate and pelvic node (PPN) and 5 head and neck (H&N) IMRT plans, totalling 82 IMRT fields in ∼5500 DynaLog files. The root mean square (RMS) leaf position errors without the overshoot effect were 0.09, 0.26 and 0.19 mm for the prostate, PPN and H&N plans respectively, which increased to 0.30, 0.39 and 0.30 mm when the overshoot effect was considered. Average errors were not affected by the overshoot effect and were 0.05, 0.13 and 0.17 mm for the prostate, PPN and H&N plans respectively. The percentage of pixels passing fluence map gamma analysis at 3%/3 mm was 99.94 ± 0.25%, which reduced to 91.62 ± 11.39% at the 1%/1 mm criterion. Leaf position errors, but not gamma passing rate, were directly related to plan complexity as determined by the MCS. Site-specific confidence intervals for average leaf position errors were set at −0.03 to 0.12 mm for prostate and −0.02 to 0.28 mm for the more complex PPN and H&N plans. For all treatment sites, the confidence interval for RMS errors with the overshoot effect was set at 0 to 0.50 mm, and a confidence interval of 68.83% was set for the percentage of pixels passing gamma analysis at 1%/1 mm. This work demonstrates the successful implementation of treatment log files to validate IMRT deliveries and shows how dynamic log files can diagnose delivery errors in a way that is not possible with phantom-based QC. Machine performance was found to be directly related to plan complexity, but this is not the dominant determinant of delivery accuracy.
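The per-plan error statistics quoted above (average and RMS leaf position errors) reduce to a simple computation once planned and delivered leaf positions have been extracted; the sketch below (Python, with randomly generated placeholder positions rather than a real DynaLog parser) shows that step only.

# Sketch: average and RMS MLC leaf position errors from planned vs delivered
# positions (placeholder arrays: 1000 time samples x 120 leaves, in mm).
import numpy as np

rng = np.random.default_rng(2)
planned = rng.uniform(-50, 50, size=(1000, 120))
delivered = planned + rng.normal(0.05, 0.2, size=planned.shape)  # hypothetical noise

errors = delivered - planned
print(f"average leaf position error: {errors.mean():.2f} mm")
print(f"RMS leaf position error:     {np.sqrt(np.mean(errors ** 2)):.2f} mm")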
Abstract:
The motivation for this study was to reduce the physics workload relating to patient-specific quality assurance (QA). VMAT plan delivery accuracy was determined from analysis of pre- and on-treatment trajectory log files and phantom-based ionization chamber array measurements. The correlation in this combination of measurements for patient-specific QA was investigated. The relationship between delivery errors and plan complexity was investigated as a potential method to further reduce the patient-specific QA workload. Thirty VMAT plans from three treatment sites - prostate only, prostate and pelvic node (PPN), and head and neck (H&N) - were retrospectively analyzed in this work. The 2D fluence delivery reconstructed from pretreatment and on-treatment trajectory log files was compared with the planned fluence using gamma analysis. Pretreatment dose delivery verification was also carried out using gamma analysis of ionization chamber array measurements compared with calculated doses. Pearson correlations were used to explore any relationship between trajectory log file (pretreatment and on-treatment) and ionization chamber array gamma results (pretreatment). Plan complexity was assessed using the MU/arc and the modulation complexity score (MCS), with Pearson correlations used to examine any relationships between complexity metrics and plan delivery accuracy. Trajectory log files were also used to further explore the accuracy of MLC and gantry positions. Pretreatment 1%/1 mm gamma passing rates for trajectory log file analysis were 99.1% (98.7%-99.2%), 99.3% (99.1%-99.5%), and 98.4% (97.3%-98.8%) (median (IQR)) for prostate, PPN, and H&N, respectively, and were significantly correlated with on-treatment trajectory log file gamma results (R = 0.989, p < 0.001). Pretreatment ionization chamber array (2%/2 mm) gamma results were also significantly correlated with on-treatment trajectory log file gamma results (R = 0.623, p < 0.001). Furthermore, all gamma results displayed a significant correlation with MCS (R > 0.57, p < 0.001), but not with MU/arc. Average MLC position and gantry angle errors were 0.001 ± 0.002 mm and 0.025° ± 0.008° over all treatment sites and were not found to affect delivery accuracy. However, variability in MLC speed was found to be directly related to MLC position accuracy. The accuracy of VMAT plan delivery assessed using pretreatment trajectory log file fluence delivery and ionization chamber array measurements was strongly correlated with on-treatment trajectory log file fluence delivery. The strong correlation between trajectory log file and phantom-based gamma results demonstrates the potential to reduce our current patient-specific QA. Additionally, insight into MLC and gantry position accuracy through trajectory log file analysis and the strong correlation between gamma analysis results and the MCS could also provide further methodologies to both optimize the VMAT planning and QA process.
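The correlation step reported above is straightforward to reproduce; the sketch below (Python with synthetic numbers, not the study data) shows Pearson correlations between pretreatment log-file gamma passing rates, on-treatment gamma passing rates and plan complexity (MCS).

# Sketch: Pearson correlations between gamma passing rates and plan complexity.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
mcs = rng.uniform(0.1, 0.6, 30)                         # modulation complexity score
pre_gamma = 97.0 + 4.0 * mcs + rng.normal(0, 0.3, 30)   # % pixels passing (pretreatment)
on_gamma = pre_gamma + rng.normal(0, 0.1, 30)           # % pixels passing (on-treatment)

r_pre_on, p_pre_on = pearsonr(pre_gamma, on_gamma)
r_mcs, p_mcs = pearsonr(pre_gamma, mcs)
print(f"pre- vs on-treatment gamma: R = {r_pre_on:.3f}, p = {p_pre_on:.3g}")
print(f"pretreatment gamma vs MCS:  R = {r_mcs:.3f}, p = {p_mcs:.3g}")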