173 results for Asymptotic Mean Squared Errors
Abstract:
Objective: To describe the use of a multifaceted strategy for recruiting general practitioners (GPs) and community pharmacists to talk about medication errors that had resulted in preventable drug-related admissions to hospital, a potentially sensitive subject with medicolegal implications. Setting: Four primary care trusts and one teaching hospital in the UK. Method: Letters were mailed to community pharmacists and general practitioners asking for provisional consent to be interviewed and for permission to contact them again should a patient be admitted to hospital as a result of a medication error. In addition, GPs were asked for permission to approach their patients should they be admitted to hospital. A multifaceted approach to recruitment was used, including gaining support for the study from professional defence agencies and local champions. Key findings: Eighty-five percent (310/385) of GPs and 62% (93/149) of community pharmacists responded to the letters. Eighty-five percent (266/310) of responding GPs and 81% (75/93) of responding community pharmacists gave provisional consent to participate in interviews. All GPs (14 of 14) and community pharmacists (10 of 10) who were subsequently asked to participate, when patients were admitted to hospital, agreed to be interviewed. Conclusion: The multifaceted approach to recruitment was associated with a high response rate when asking healthcare professionals to be interviewed about medication errors that had resulted in preventable drug-related morbidity.
Abstract:
Background: Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost effectiveness and acceptability of a pharmacist-led, information-technology-based complex intervention compared with simple feedback in reducing the proportions of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. Methods: Research subject group: "At-risk" patients registered with computerised general practices in two geographical regions in England. Design: Parallel-group pragmatic cluster randomised trial. Interventions: Practices will be randomised to either (i) computer-generated feedback, or (ii) a pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. Primary outcome measures: The proportion of patients in each practice at six and 12 months post intervention (i) with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs; (ii) with a computer-recorded diagnosis of asthma being prescribed beta-blockers; and (iii) aged 75 years and older receiving long-term prescriptions for angiotensin converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months. Secondary outcome measures: These relate to a number of other examples of potentially hazardous prescribing and medicines management. Economic analysis: An economic evaluation of the cost per error avoided will be conducted from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. Qualitative analysis: A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and to investigate possible reasons why the interventions prove effective or ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm, compared with an 11% reduction in the simple feedback arm. Discussion: At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken.
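As a rough illustration of how such a cluster-trial power statement can be checked, the sketch below runs a normal-approximation calculation in Python. The baseline risk of hazardous prescribing, the number of at-risk patients per practice and the intra-cluster correlation are not given in the abstract, so the values used here are illustrative placeholders only.

```python
# Hypothetical power check for one primary outcome of a cluster-randomised
# trial; baseline_risk, patients_per_cluster and icc are assumptions, not
# values taken from the study protocol.
from math import sqrt
from scipy.stats import norm

clusters_per_arm = 34        # stated in the abstract
patients_per_cluster = 200   # assumed number of at-risk patients per practice
icc = 0.02                   # assumed intra-cluster correlation coefficient
baseline_risk = 0.05         # assumed baseline rate of hazardous prescribing

p_feedback = baseline_risk * (1 - 0.11)    # 11% reduction, simple feedback arm
p_pharmacist = baseline_risk * (1 - 0.50)  # 50% reduction, pharmacist-led arm

design_effect = 1 + (patients_per_cluster - 1) * icc
n_eff = clusters_per_arm * patients_per_cluster / design_effect  # per arm

se = sqrt(p_feedback * (1 - p_feedback) / n_eff
          + p_pharmacist * (1 - p_pharmacist) / n_eff)
z_alpha = norm.ppf(1 - 0.05 / 2)           # two-tailed alpha = 0.05
power = norm.cdf(abs(p_feedback - p_pharmacist) / se - z_alpha)
print(f"approximate power: {power:.2f}")
```

With these particular placeholder values the approximation lands near the 80% figure quoted above, but the output depends entirely on the assumed inputs.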
Abstract:
The potential of clarification questions (CQs) to act as a form of corrective input for young children's grammatical errors was examined. Corrective responses were operationalized as those occasions when child speech shifted from erroneous to correct (E -> C) contingent on a clarification question. It was predicted that E -> C sequences would prevail over shifts in the opposite direction (C -> E), as can occur in the case of non-error-contingent CQs. This prediction was tested via a standard intervention paradigm, whereby every 60 s a sequence of two clarification requests (either specific or general) was introduced into conversation with 45 two- and four-year-old children. For 10 categories of grammatical structure, E -> C sequences predominated over their C -> E counterparts, with levels of E -> C shifts increasing after two clarification questions. Children were also more reluctant to repeat erroneous forms than their correct counterparts following the intervention of CQs. The findings provide support for Saxton's prompt hypothesis, which predicts that error-contingent CQs can cue recall of previously acquired grammatical forms.
Abstract:
Purpose. Accommodation can mask hyperopia and reduce the accuracy of non-cycloplegic refraction. It is, therefore, important to minimize accommodation to obtain a measure of hyperopia as accurate as possible. To characterize the parameters required to measure the maximally hyperopic error using photorefraction, we used different target types and distances to determine which target was most likely to maximally relax accommodation and thus more accurately detect hyperopia in an individual. Methods. A PlusoptiX SO4 infra-red photorefractor was mounted in a remote haploscope which presented the targets. All participants were tested with targets at four fixation distances between 0.3 and 2 m containing all combinations of blur, disparity, and proximity/looming cues. Thirty-eight infants (6 to 44 weeks) were studied longitudinally, and 104 children [4 to 15 years (mean 6.4)] and 85 adults, with a range of refractive errors and binocular vision status, were tested once. Cycloplegic refraction data were available for a sub-set of 59 participants spread across the age range. Results. The maximally hyperopic refraction (MHR) found at any time in the session was most frequently found when fixating the most distant targets and those containing disparity and dynamic proximity/looming cues. Presence or absence of blur was less significant, and targets in which only single cues to depth were present were also less likely to produce MHR. MHR correlated closely with cycloplegic refraction (r = 0.93, mean difference 0.07 D, p = n.s., 95% confidence interval +/-<0.25 D) after correction by a calibration factor. Conclusions. Maximum relaxation of accommodation occurred for binocular targets receding into the distance. Proximal and disparity cues aid relaxation of accommodation to a greater extent than blur, and thus non-cycloplegic refraction targets should incorporate these cues. This is especially important in screening contexts with a brief opportunity to test for significant hyperopia. MHR in our laboratory was found to be a reliable estimation of cycloplegic refraction. (Optom Vis Sci 2009;86:1276-1286)
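For readers who want to reproduce this style of agreement analysis between a non-cycloplegic measure and cycloplegic refraction, the short sketch below computes the correlation, mean difference and 95% limits from paired measurements. The data are synthetic; only the analysis pattern, not the study's numbers, is being illustrated.

```python
# Agreement statistics between two refraction measures (synthetic data, in D).
import numpy as np

rng = np.random.default_rng(0)
cyclo = rng.uniform(-2.0, 5.0, 59)                # cycloplegic refractions (D)
mhr = cyclo + rng.normal(0.07, 0.12, cyclo.size)  # calibrated MHR estimates (D)

r = np.corrcoef(mhr, cyclo)[0, 1]                 # Pearson correlation
diff = mhr - cyclo                                # method difference per subject
print(f"r = {r:.2f}, mean difference = {diff.mean():.2f} D, "
      f"95% limits +/- {1.96 * diff.std(ddof=1):.2f} D")
```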
Abstract:
An increasing number of neuroscience experiments are using virtual reality to provide a more immersive and less artificial experimental environment. This is particularly useful to navigation and three-dimensional scene perception experiments. Such experiments require accurate real-time tracking of the observer's head in order to render the virtual scene. Here, we present data on the accuracy of a commonly used six degrees of freedom tracker (Intersense IS900) when it is moved in ways typical of virtual reality applications. We compared the reported location of the tracker with its location computed by an optical tracking method. When the tracker was stationary, the root mean square error in spatial accuracy was 0.64 mm. However, we found that errors increased over ten-fold (up to 17 mm) when the tracker moved at speeds common in virtual reality applications. We demonstrate that the errors we report here are predominantly due to inaccuracies of the IS900 system rather than the optical tracking against which it was compared. (c) 2006 Elsevier B.V. All rights reserved.
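The headline figure here is a root-mean-square (RMS) positional error, which can be computed from paired tracker and reference samples as in the brief sketch below. The 3-D data are synthetic, and the 0.64 mm noise level is borrowed from the stationary case purely for illustration.

```python
# Minimal sketch: RMS positional error of a head tracker against a reference
# (e.g. optical) track, using hypothetical paired 3-D samples in millimetres.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.uniform(0, 2000, (1000, 3))            # reference positions (mm)
reported = reference + rng.normal(0, 0.64, (1000, 3))  # tracker readings (mm)

per_sample_err = np.linalg.norm(reported - reference, axis=1)  # Euclidean error
rmse = np.sqrt(np.mean(per_sample_err ** 2))                   # RMS error (mm)
print(f"RMS spatial error: {rmse:.2f} mm")
```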
Abstract:
Objective: To examine the interpretation of the verbal anchors used in the Borg rating of perceived exertion (RPE) scales in different clinical groups and a healthy control group. Design: Prospective experimental study. Setting: Rehabilitation center. Participants: Nineteen subjects with brain injury, 16 with chronic low back pain (CLBP), and 20 healthy controls. Interventions: Not applicable. Main Outcome Measures: Subjects used a visual analog scale (VAS) to rate their interpretation of the verbal anchors from the Borg RPE 6-20 scale and the newer 10-point category ratio scale. Results: All groups placed the verbal anchors in the order that they occur on the scales. There were significant within-group differences (P < .05) between VAS scores for 4 verbal anchors in the control group, 8 in the CLBP group, and 2 in the brain injury group. There was no significant difference in the rating of each verbal anchor between the groups (P > .05). Conclusions: All subjects rated the verbal anchors in the order they occur on the scales, but there was less agreement in the rating of each verbal anchor among subjects in the brain injury group. Clinicians should consider the possibility of small discrepancies in the meaning of the verbal anchors to subjects, particularly those recovering from brain injury, when they evaluate exercise perceptions.
Abstract:
This paper analyzes the performance of Enhanced relay-enabled Distributed Coordination Function (ErDCF) for wireless ad hoc networks under transmission errors. The idea of ErDCF is to use high data rate nodes to work as relays for the low data rate nodes. ErDCF achieves higher throughput and reduces energy consumption compared to IEEE 802.11 Distributed Coordination Function (DCF) in an ideal channel environment. However, there is a possibility that this expected gain may decrease in the presence of transmission errors. In this work, we modify the saturation throughput model of ErDCF to accurately reflect the impact of transmission errors under different rate combinations. It turns out that the throughput gain of ErDCF can still be maintained under reasonable link quality and distance.
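The intuition that relaying can survive moderate transmission errors can be illustrated with a back-of-the-envelope goodput comparison like the one below. This is not the paper's saturation-throughput model (which accounts for contention, backoff and collisions); the rates, bit error rate and frame length are assumed values.

```python
# Illustrative comparison: useful throughput of a direct low-rate link versus
# a two-hop relayed path over high-rate links, when frames with any bit error
# are discarded. All numbers are assumptions, not the paper's parameters.
payload_bits = 8000            # assumed frame payload length in bits

def goodput(rate_mbps, ber):
    """Throughput after discarding frames that contain at least one bit error."""
    frame_success = (1 - ber) ** payload_bits
    return rate_mbps * frame_success

direct = goodput(2.0, 1e-5)                 # 2 Mb/s direct link
hop = goodput(11.0, 1e-5)                   # each 11 Mb/s relay hop
relayed = 1.0 / (1.0 / hop + 1.0 / hop)     # two hops in series share the airtime

print(f"direct: {direct:.2f} Mb/s, relayed: {relayed:.2f} Mb/s")
```

At this assumed link quality the relayed path still outperforms the direct one, which is the qualitative point made above.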
Abstract:
The paper introduces an efficient construction algorithm for obtaining sparse linear-in-the-weights regression models based on an approach of directly optimizing model generalization capability. This is achieved by utilizing the delete-1 cross-validation concept and the associated leave-one-out test error, also known as the predicted residual sums of squares (PRESS) statistic, without resorting to any other validation data set for model evaluation in the model construction process. Computational efficiency is ensured by using an orthogonal forward regression, but the algorithm incrementally minimizes the PRESS statistic instead of the usual sum of the squared training errors. A local regularization method can naturally be incorporated into the model selection procedure to further enforce model sparsity. The proposed algorithm is fully automatic, and the user is not required to specify any criterion to terminate the model construction procedure. Comparisons with some of the existing state-of-the-art modeling methods are given, and several examples are included to demonstrate the ability of the proposed algorithm to effectively construct sparse models that generalize well.
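A minimal sketch of the selection principle described here (forward selection scored by the PRESS/leave-one-out error rather than the training error) is given below. It uses a direct hat-matrix computation instead of the paper's efficient orthogonal forward regression, omits the local regularization, and all variable names are illustrative.

```python
# Greedy forward selection that adds the regressor giving the lowest PRESS
# (leave-one-out) statistic and stops automatically when PRESS stops falling.
import numpy as np

def press(X, y):
    """PRESS statistic of the least-squares fit of y on the columns of X."""
    H = X @ np.linalg.pinv(X)              # hat (projection) matrix
    e = y - H @ y                          # training residuals
    loo = e / (1.0 - np.diag(H))           # leave-one-out residuals
    return float(loo @ loo)

def forward_press(X, y):
    """Return the selected column indices and the final PRESS value."""
    selected, best = [], np.inf
    while True:
        candidates = {j: press(X[:, selected + [j]], y)
                      for j in range(X.shape[1]) if j not in selected}
        if not candidates:
            return selected, best
        j_best = min(candidates, key=candidates.get)
        if candidates[j_best] >= best:     # no further generalization gain: stop
            return selected, best
        selected.append(j_best)
        best = candidates[j_best]

# Toy example: two informative regressors among five candidates.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.3, size=100)
print(forward_press(X, y))
```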
OFDM joint data detection and phase noise cancellation based on minimum mean square prediction error
Abstract:
This paper proposes a new iterative algorithm for orthogonal frequency division multiplexing (OFDM) joint data detection and phase noise (PHN) cancellation based on minimum mean square prediction error. We particularly highlight the relatively less studied problem of "overfitting", whereby the iterative approach may converge to a trivial solution. Specifically, we apply a hard-decision procedure at every iterative step to overcome the overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the PHN, and finally a more robust and compact fast process based on Givens rotation is proposed to reduce the complexity to a practical level. Numerical simulations are also given to verify the proposed algorithm. (C) 2008 Elsevier B.V. All rights reserved.
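The role of the hard-decision step can be seen in a stripped-down example: estimate a common phase rotation from the current symbol decisions, de-rotate, and re-decide. The sketch below does only this (no Padé phase-noise model and no Givens-rotation solver); the constellation, noise level and phase value are arbitrary assumptions.

```python
# Toy decision-directed correction of a common phase error on one OFDM symbol
# of QPSK data; illustrates only the hard-decision-per-iteration idea.
import numpy as np

rng = np.random.default_rng(2)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
tx = rng.choice(qpsk, 64)                        # transmitted subcarrier symbols

phase_true = 0.3                                 # unknown common phase error (rad)
noise = 0.05 * (rng.normal(size=64) + 1j * rng.normal(size=64))
rx = tx * np.exp(1j * phase_true) + noise        # received subcarrier samples

def hard_decide(z):
    """Snap each sample to the nearest QPSK constellation point."""
    return qpsk[np.argmin(np.abs(z[:, None] - qpsk[None, :]), axis=1)]

est = hard_decide(rx)                            # initial hard decisions
for _ in range(3):
    phase_hat = np.angle(np.sum(rx * np.conj(est)))   # LS phase estimate
    est = hard_decide(rx * np.exp(-1j * phase_hat))   # de-rotate, re-decide

print("symbol errors:", int(np.sum(est != tx)),
      "estimated phase:", round(float(phase_hat), 3))
```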
Abstract:
The THz water content index of a sample is defined, and the advantages of using such a metric to estimate a sample's relative water content are discussed. The errors from reflectance measurements performed at two different THz frequencies using a quasi-optical null-balance reflectometer are propagated to the errors in estimating the sample water content index.
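As a sketch of the propagation step, assume for illustration that the index is a normalized difference of the two reflectances; the paper's actual definition may differ. First-order error propagation then looks like this:

```python
# Hedged sketch of first-order error propagation into a two-frequency index.
# The normalized-difference form of the index is an assumption made here
# purely for illustration.
import numpy as np

def index_and_error(r1, r2, s1, s2):
    """Index (r1 - r2)/(r1 + r2) and its propagated standard error."""
    wci = (r1 - r2) / (r1 + r2)
    d1 = 2.0 * r2 / (r1 + r2) ** 2      # partial derivative w.r.t. r1
    d2 = -2.0 * r1 / (r1 + r2) ** 2     # partial derivative w.r.t. r2
    err = np.sqrt((d1 * s1) ** 2 + (d2 * s2) ** 2)
    return wci, err

# hypothetical reflectances at the two THz frequencies with 1-sigma errors
print(index_and_error(0.42, 0.31, 0.01, 0.01))
```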
Abstract:
A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross validation is often used to estimate generalization errors by choosing amongst different network architectures (M. Stone, "Cross validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 117-147, 1974). Based upon the minimization of an LOO criterion, either the mean square of the LOO errors for regression or the LOO misclassification rate for classification, we present two backward elimination algorithms as model post-processing procedures for regression and classification problems. The proposed backward elimination procedures exploit an orthogonalization procedure to enforce orthogonality between the subspace spanned by the pruned model and the deleted regressor. Subsequently, it is shown that the LOO criteria used in both algorithms can be calculated via an analytic recursive formula, as derived in this contribution, without actually splitting the estimation data set, so as to reduce computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several aspects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic, without an additional stopping criterion; and (iii) the model structure selection is directly based on model generalization performance. Illustrative examples on regression and classification are used to demonstrate that the proposed algorithms are viable post-processing methods to prune a model to gain extra sparsity and improved generalization.
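A compact sketch of the regression variant is given below: starting from the full model, it repeatedly deletes the regressor whose removal most reduces the PRESS statistic and stops when no deletion helps. The paper's analytic recursive formulas are replaced here by a direct hat-matrix computation, so this mirrors only the selection logic, not its efficiency.

```python
# LOO-driven backward elimination for regression (PRESS criterion); a plain
# sketch of the pruning logic without the paper's recursive updates.
import numpy as np

def press(X, y):
    """PRESS (sum of squared leave-one-out residuals) of the OLS fit of y on X."""
    H = X @ np.linalg.pinv(X)
    loo = (y - H @ y) / (1.0 - np.diag(H))
    return float(loo @ loo)

def backward_loo(X, y):
    """Prune regressors while the leave-one-out error keeps improving."""
    keep = list(range(X.shape[1]))
    best = press(X, y)
    while len(keep) > 1:
        trials = {j: press(X[:, [k for k in keep if k != j]], y) for j in keep}
        j_drop = min(trials, key=trials.get)
        if trials[j_drop] >= best:      # deleting anything now hurts: stop
            break
        keep.remove(j_drop)
        best = trials[j_drop]
    return keep, best
```

The classification counterpart would follow the same loop with the LOO misclassification rate in place of PRESS.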
Abstract:
In this paper, we initiate the study of a class of Putnam-type equations of the form x_{n+1} = (A_1 x_n + A_2 x_{n-1} + A_3 x_{n-2} x_{n-3} + A_4) / (B_1 x_n x_{n-1} + B_2 x_{n-2} + B_3 x_{n-3} + B_4), n = 0, 1, 2, ..., where A_1, A_2, A_3, A_4, B_1, B_2, B_3, B_4 are positive constants with A_1 + A_2 + A_3 + A_4 = B_1 + B_2 + B_3 + B_4, and x_{-3}, x_{-2}, x_{-1}, x_0 are positive numbers. A sufficient condition is given for the global asymptotic stability of the equilibrium point c = 1 of such equations. (c) 2005 Elsevier Ltd. All rights reserved.
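A quick numerical experiment (not a proof, and not tied to the paper's sufficient condition) shows the iterates settling near the equilibrium 1 for one symmetric coefficient choice:

```python
# Iterate the Putnam-type recurrence for the symmetric choice A_i = B_i = 1
# (so the two coefficient sums are equal) and watch the orbit approach 1.
A = [1.0, 1.0, 1.0, 1.0]
B = [1.0, 1.0, 1.0, 1.0]
x = [0.7, 1.9, 0.4, 1.2]          # x_{-3}, x_{-2}, x_{-1}, x_0 (positive)

for n in range(100):
    xn, xm1, xm2, xm3 = x[-1], x[-2], x[-3], x[-4]
    x.append((A[0]*xn + A[1]*xm1 + A[2]*xm2*xm3 + A[3])
             / (B[0]*xn*xm1 + B[1]*xm2 + B[2]*xm3 + B[3]))

print(x[-3:])   # the last iterates are close to the equilibrium 1
```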
Abstract:
In this paper, we study the global stability of the difference equation x_n = (a + b x_{n-1} + c x_{n-1}^2) / (d - x_{n-2}), n = 1, 2, ..., where a, b ≥ 0 and c, d > 0. We show that one nonnegative equilibrium point of the equation is a global attractor with a basin that is determined by the parameters, and every positive solution of the equation in the basin converges exponentially to the attractor. (C) 2003 Elsevier Inc. All rights reserved.