47 results for Trial and error
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Background: Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk of measures related to hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. Methods: In this pragmatic, cluster randomised trial, general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to general practices, patients, pharmacists, researchers, and statisticians. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of angiotensin converting enzyme (ACE) inhibitors or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. Findings: 72 general practices with a combined list size of 480 942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0.58, 95% CI 0.38–0.89); a β blocker if they had asthma (0.73, 0.58–0.91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0.51, 0.34–0.78). PINCER has a 95% probability of being cost effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. Interpretation: The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Funding: Patient Safety Research Portfolio, Department of Health, England.
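For readers less familiar with the health-economic terminology, the "cost per error avoided" reported above is an incremental cost-effectiveness ratio. A minimal sketch of the quantity being estimated, in illustrative notation not taken from the paper:

```latex
% Illustrative notation (not the paper's): mean cost per practice and mean
% number of errors avoided in the PINCER arm (C_P, E_P) and the simple
% feedback arm (C_F, E_F).
\[
  \text{ICER} \;=\; \frac{C_P - C_F}{E_P - E_F}
  \qquad \text{(cost per error avoided)}
\]
% The reported 95% probability of cost effectiveness at a ceiling willingness
% to pay \lambda corresponds, schematically, to
% \Pr\big[\lambda\,(E_P - E_F) - (C_P - C_F) > 0\big] = 0.95,
% i.e. the probability that the net monetary benefit is positive.
```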
Abstract:
Low-power medium access control (MAC) protocols used for communication between energy-constrained wireless embedded devices do not cope well with situations where transmission channels are highly erroneous. Existing MAC protocols discard corrupted messages, which leads to costly retransmissions. Transmission performance can be improved by including an error correction scheme and transmit/receive diversity: redundant information can be added to transmitted packets so that data can be recovered from corrupted packets, and transmit/receive diversity via multiple antennas can be used to improve the error resiliency of transmissions. Both schemes may be used in conjunction to further improve performance. In this study, the authors show how an error correction scheme and transmit/receive diversity can be integrated into low-power MAC protocols. Furthermore, the authors investigate the achievable performance gains of both methods. This is important as both methods have associated costs (processing requirements; additional antennas and power), and for a given communication situation it must be decided which methods should be employed. The authors' results show that, in many practical situations, error control coding outperforms transmission diversity; however, if very high reliability is required, it is useful to employ both schemes together.
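As a purely illustrative aside (this is not the authors' coding scheme), the basic idea behind adding redundancy so that corrupted packets can be recovered rather than retransmitted can be sketched with a trivial (3,1) repetition code:

```python
# Minimal sketch of forward error correction for a packet payload, assuming a
# simple (3,1) repetition code; real MAC-layer schemes use far stronger codes.

def fec_encode(bits):
    """Repeat each payload bit three times before transmission."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(received):
    """Majority-vote each group of three received bits."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

payload = [1, 0, 1, 1, 0]
tx = fec_encode(payload)
tx[4] ^= 1                        # the channel corrupts one transmitted bit
assert fec_decode(tx) == payload  # the receiver still recovers the payload
```

A transmit/receive diversity scheme would instead exploit signals seen at multiple antennas; the abstract's point is that both options carry costs, so the appropriate choice depends on the link conditions.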
Abstract:
Background: Cognitive–behavioural therapy (CBT) for childhood anxiety disorders is associated with modest outcomes in the context of parental anxiety disorder. Objectives: This study evaluated whether or not the outcome of CBT for children with anxiety disorders in the context of maternal anxiety disorders is improved by the addition of (i) treatment of maternal anxiety disorders, or (ii) treatment focused on maternal responses. The incremental cost-effectiveness of the additional treatments was also evaluated. Design: Participants were randomised to receive (i) child cognitive–behavioural therapy (CCBT); (ii) CCBT with CBT to target maternal anxiety disorders [CCBT + maternal cognitive–behavioural therapy (MCBT)]; or (iii) CCBT with an intervention to target mother–child interactions (MCIs) (CCBT + MCI). Setting: An NHS university clinic in Berkshire, UK. Participants: Two hundred and eleven children with a primary anxiety disorder, whose mothers also had an anxiety disorder. Interventions: All families received eight sessions of individual CCBT. Mothers in the CCBT + MCBT arm also received eight sessions of CBT targeting their own anxiety disorders. Mothers in the MCI arm received 10 sessions targeting maternal parenting cognitions and behaviours. Non-specific interventions were delivered to balance groups for therapist contact. Main outcome measures: Primary clinical outcomes were the child's primary anxiety disorder status and degree of improvement at the end of treatment. Follow-up assessments were conducted at 6 and 12 months. Outcomes in the economic analyses were identified and measured using estimated quality-adjusted life-years (QALYs). QALYs were combined with treatment, health and social care costs and presented within an incremental cost–utility analysis framework with associated uncertainty. Results: MCBT was associated with significant short-term improvement in maternal anxiety; however, after children had received CCBT, group differences were no longer apparent. CCBT + MCI was associated with a reduction in maternal overinvolvement and more confident expectations of the child. However, neither CCBT + MCBT nor CCBT + MCI conferred a significant post-treatment benefit over CCBT in terms of child anxiety disorder diagnoses [adjusted risk ratio (RR) 1.18, 95% confidence interval (CI) 0.87 to 1.62, p = 0.29; adjusted RR (CCBT + MCI vs. control) 1.22, 95% CI 0.90 to 1.67, p = 0.20, respectively] or global improvement ratings (adjusted RR 1.25, 95% CI 1.00 to 1.59, p = 0.05; adjusted RR 1.20, 95% CI 0.95 to 1.53, p = 0.13). CCBT + MCI outperformed CCBT on some secondary outcome measures. Furthermore, primary economic analyses suggested that, at commonly accepted thresholds of cost-effectiveness, the probability that CCBT + MCI will be cost-effective in comparison with CCBT (plus non-specific interventions) is about 75%. Conclusions: Good outcomes were achieved for children and their mothers across treatment conditions. There was no evidence of a benefit to child outcome of supplementing CCBT with either an intervention focusing on maternal anxiety disorder or one focusing on maternal cognitions and behaviours. However, supplementing CCBT with treatment that targeted maternal cognitions and behaviours represented a cost-effective use of resources, although the high percentage of missing data on some economic variables is a shortcoming. Future work should consider whether or not effects of the adjunct interventions are enhanced in particular contexts.
The economic findings highlight the utility of considering the use of a broad range of services when evaluating interventions with this client group. Trial registration: Current Controlled Trials ISRCTN19762288. Funding: This trial was funded by the Medical Research Council (MRC) and Berkshire Healthcare Foundation Trust and managed by the National Institute for Health Research (NIHR) on behalf of the MRC–NIHR partnership (09/800/17) and will be published in full in Health Technology Assessment; Vol. 19, No. 38.
Abstract:
Often, firms have no information on the specification of the true demand model they face. It is, however, well established that firms may use trial-and-error algorithms to learn how to make optimal decisions. Using experimental methods, we identify a property of the information on past actions which helps the seller of two asymmetric demand substitutes to reach the optimal prices more precisely and faster. The property concerns the possibility of disaggregating changes in each product's demand into client exit/entry and shifts from one product to the other.
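As a hedged, hypothetical illustration of the kind of trial-and-error pricing rule referred to above (the demand functions, step size and adjustment rule below are invented for illustration and are not the experimental design of the paper):

```python
# Hypothetical sketch: a seller of two substitute products adjusts prices by
# trial and error, keeping any small price change that raises observed profit.
# Demand functions and parameters are illustrative assumptions only.

def demand(p1, p2):
    q1 = max(0.0, 100 - 2.0 * p1 + 0.5 * p2)   # product 1, substitute of 2
    q2 = max(0.0, 80 - 1.5 * p2 + 0.5 * p1)    # product 2, substitute of 1
    return q1, q2

def profit(p1, p2):
    q1, q2 = demand(p1, p2)
    return p1 * q1 + p2 * q2

p1, p2, step = 10.0, 10.0, 0.25
for _ in range(500):
    for dp1, dp2 in ((step, 0), (-step, 0), (0, step), (0, -step)):
        if profit(p1 + dp1, p2 + dp2) > profit(p1, p2):
            p1, p2 = p1 + dp1, p2 + dp2
            break                               # keep the first improving move

print(f"prices reached by trial and error: {p1:.2f}, {p2:.2f}")
```

The paper's experimental question is which kind of feedback on past actions (aggregate demand changes versus disaggregated exit/entry and switching) helps sellers follow such a process faster and more accurately.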
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access times depend on the mapping of application tasks and the core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend for shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
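A compressed, hypothetical sketch of the interpolation step described above (the benchmark numbers and the message-cost model are invented placeholders, not results from the work):

```python
# Hypothetical sketch: predict per-timestep cost for an untested deployment by
# interpolating measured compute benchmarks and adding a simple halo-exchange
# cost model. All numbers below are illustrative placeholders.

import numpy as np

bench_local_points = np.array([1e4, 5e4, 1e5, 5e5, 1e6])                # points per core
bench_compute_time = np.array([0.8e-3, 4.1e-3, 8.5e-3, 44e-3, 92e-3])   # seconds per step

def predict_step_time(local_points, halo_points,
                      latency=2e-6, time_per_byte=1e-9, bytes_per_point=8):
    """Interpolated compute time plus a latency/bandwidth halo-exchange term."""
    compute = np.interp(local_points, bench_local_points, bench_compute_time)
    comms = latency + time_per_byte * bytes_per_point * halo_points
    return compute + comms

# e.g. a decomposition giving 250k local points and 2000 halo points per core
print(predict_step_time(local_points=2.5e5, halo_points=2000))
```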
Abstract:
In this paper we consider the scattering of a plane acoustic or electromagnetic wave by a one-dimensional, periodic rough surface. We restrict the discussion to the case when the boundary is sound soft in the acoustic case, or perfectly reflecting with TE polarization in the EM case, so that the total field vanishes on the boundary. We propose a uniquely solvable first kind integral equation formulation of the problem, which amounts to a requirement that the normal derivative of the Green's representation formula for the total field vanish on a horizontal line below the scattering surface. We then discuss the numerical solution by Galerkin's method of this (ill-posed) integral equation. We point out that, with two particular choices of the trial and test spaces, we recover the so-called SC (spectral-coordinate) and SS (spectral-spectral) numerical schemes of DeSanto et al., Waves Random Media, 8, 315–414, 1998. We next propose a new Galerkin scheme, a modification of the SS method that we term the SS* method, which is an instance of the well-known dual least squares Galerkin method. We show that the SS* method is always well-defined and is optimally convergent as the size of the approximation space increases. Moreover, we make a connection with the classical least squares method, in which the coefficients in the Rayleigh expansion of the solution are determined by enforcing the boundary condition in a least squares sense, pointing out that the linear system to be solved in the SS* method is identical to that in the least squares method. Using this connection we show that (reflecting the ill-posed nature of the integral equation solved) the condition number of the linear system in the SS* and least squares methods approaches infinity as the approximation space increases in size. We also provide theoretical error bounds on the condition number and on the errors induced in the numerical solution computed as a result of ill-conditioning. Numerical results confirm the convergence of the SS* method and illustrate the ill-conditioning that arises.
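For orientation, the classical least squares method mentioned above can be written schematically as follows (notation is illustrative, not copied from the paper):

```latex
% Schematic form of the classical least squares method: the scattered field
% above the periodic surface is sought as a truncated Rayleigh expansion
\[
  u^s(x_1,x_2) \;\approx\; \sum_{n=-N}^{N} a_n\,
      e^{\,i(\alpha_n x_1 + \beta_n x_2)},
\]
% and the coefficients a_n are chosen to enforce the sound-soft boundary
% condition (total field u^i + u^s = 0) in a least squares sense over one
% period \Gamma of the surface:
\[
  \min_{a_{-N},\dots,a_N}\;
    \big\| u^i + u^s \big\|_{L^2(\Gamma)}^2 .
\]
% The abstract's point is that the SS* Galerkin scheme leads to the same
% linear system for the a_n, and that this system becomes increasingly
% ill-conditioned as N grows.
```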
Abstract:
Background: Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost-effectiveness and acceptability of a pharmacist-led, information-technology-based complex intervention compared with simple feedback in reducing the proportion of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. Methods: Research subject group: "at-risk" patients registered with computerised general practices in two geographical regions in England. Design: Parallel group pragmatic cluster randomised trial. Interventions: Practices will be randomised to either (i) computer-generated feedback, or (ii) a pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. Primary outcome measures: The proportion of patients in each practice at six and 12 months post intervention: - with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs; - with a computer-recorded diagnosis of asthma being prescribed beta-blockers; - aged 75 years and older receiving long-term prescriptions for angiotensin converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months. Secondary outcome measures: These relate to a number of other examples of potentially hazardous prescribing and medicines management. Economic analysis: An economic evaluation will be done of the cost per error avoided, from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. Qualitative analysis: A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and to investigate possible reasons why the interventions prove effective, or conversely prove ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm compared with an 11% reduction in the simple feedback arm. Discussion: At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken.
Abstract:
We present and analyse a space–time discontinuous Galerkin method for wave propagation problems. The special feature of the scheme is that it is a Trefftz method, namely that trial and test functions are solutions of the partial differential equation to be discretised in each element of the (space–time) mesh. The method considered is a modification of the discontinuous Galerkin schemes of Kretzschmar et al. (2014) and of Monk & Richter (2005). For Maxwell's equations in one space dimension, we prove stability of the method, quasi-optimality, best approximation estimates for polynomial Trefftz spaces and (fully explicit) error bounds with high order in the meshwidth and in the polynomial degree. The analysis framework also applies to scalar wave problems and Maxwell's equations in higher space dimensions. Some numerical experiments demonstrate the theoretical results proved and the faster convergence compared to the non-Trefftz version of the scheme.
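The defining Trefftz property referred to above can be stated schematically (notation illustrative) as requiring that the discrete spaces consist of element-wise solutions of the PDE; for the one-dimensional Maxwell system this reads:

```latex
% Schematic Trefftz space on a space--time element K, for the 1D Maxwell
% system with fields (E, H) and constant coefficients \epsilon, \mu:
\[
  \mathbb{T}^p(K) \;=\; \Big\{ (E,H)\in \big(\mathcal{P}^p(K)\big)^2 \,:\;
      \epsilon\,\partial_t E + \partial_x H = 0, \;\;
      \mu\,\partial_t H + \partial_x E = 0 \ \text{in } K \Big\},
\]
% so that, unlike in a standard DG method, only terms on the mesh facets
% (and no volume integrals) remain in the discrete bilinear form.
```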
Abstract:
Objectives: To assess the potential source of variation that the surgeon may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy, involving 43 surgeons. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, that is, a sparse binary outcome variable. A linear mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted using the method of maximum likelihood in SAS®. Results: There were many convergence problems. These were resolved using a variety of approaches including: treating all effects as fixed for the initial model building; modelling the variance of a parameter on a logarithmic scale; and centring of continuous covariates. The initial model building process indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was not a difference between surgeons. The statistical test may have lacked sufficient power, and the variance estimates were small with large standard errors, indicating that the precision of the variance estimates may be questionable.
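The random-effects model described above has the following schematic form (notation illustrative):

```latex
% Schematic mixed logistic regression: patient i treated by surgeon j,
% y_{ij} = 1 if at least one major complication occurred.
\[
  \operatorname{logit} \Pr(y_{ij}=1)
    \;=\; \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j,
  \qquad u_j \sim \mathcal{N}(0,\sigma_u^2),
\]
% where x_{ij} holds fixed effects such as type of operation, \beta are the
% fixed-effect coefficients, and \sigma_u^2 is the between-surgeon variance
% whose size (and imprecise estimation) the conclusions refer to.
```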
Abstract:
Garment information tracking is required for clean-room garment management. In this paper, we present a camera-based robust system that implements Optical Character Recognition (OCR) techniques to perform garment label recognition. In the system, a camera is used for image capture; an adaptive thresholding algorithm is employed to generate binary images; Connected Component Labelling (CCL) is then adopted for object detection in the binary image as part of finding the ROI (Region of Interest); Artificial Neural Networks (ANNs) with the BP (Back Propagation) learning algorithm are used for digit recognition; and finally the system output is verified against a system database. The system has been tested, and the results show that it is capable of coping with variance in lighting, digit twisting, background complexity, and font orientation. The system's performance with respect to the digit recognition rate has met the design requirement, and it achieved real-time, error-free garment information tracking during testing.
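A rough, hypothetical sketch of the image-processing front end described above, using OpenCV (parameter values, the camera interface and the trained classifier of the original system are not reproduced here):

```python
# Hypothetical sketch of the label-reading front end: adaptive thresholding
# followed by connected-component labelling to locate candidate digit regions.
# Threshold and size-filter values are illustrative, not the original system's.

import cv2

def find_digit_regions(gray):
    """Return cropped candidate digit regions from a greyscale label image."""
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV, blockSize=31, C=10)
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    regions = []
    for i in range(1, n_labels):               # label 0 is the background
        x, y, w, h, area = stats[i]
        if 50 < area < 5000:                   # crude size filter for digits
            regions.append(gray[y:y + h, x:x + w])
    return regions

# Each cropped region would then be resized, fed to a trained backpropagation
# neural network for digit classification, and the recognised string checked
# against the garment database.
```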
Abstract:
A predominance of small, dense low-density lipoprotein (LDL) is a major component of an atherogenic lipoprotein phenotype, and a common, but modifiable, source of increased risk for coronary heart disease in the free-living population. While much of the atherogenicity of small, dense LDL is known to arise from its structural properties, the extent to which an increase in the number of small, dense LDL particles (hyper-apoprotein B) contributes to this risk of coronary heart disease is currently unknown. This study reports a method for the recruitment of free-living individuals with an atherogenic lipoprotein phenotype for a fish-oil intervention trial, and critically evaluates the relationship between LDL particle number and the predominance of small, dense LDL. In this group, volunteers were selected through local general practices on the basis of a moderately raised plasma triacylglycerol (triglyceride) level (>1.5 mmol/l) and a low concentration of high-density-lipoprotein cholesterol (<1.1 mmol/l). The screening of LDL subclasses revealed a predominance of small, dense LDL (LDL subclass pattern B) in 62% of the cohort. As expected, subjects with LDL subclass pattern B were characterized by higher plasma triacylglycerol and lower high-density lipoprotein cholesterol (<1.1 mmol/l) levels and, less predictably, by lower LDL cholesterol and apoprotein B levels (P<0.05; LDL subclass A compared with subclass B). While hyper-apoprotein B was detected in only five subjects, the relative percentage of small, dense LDL-III in subjects with subclass B showed an inverse relationship with LDL apoprotein B (r=-0.57; P<0.001), identifying a subset of individuals with plasma triacylglycerol above 2.5 mmol/l and a low concentration of LDL almost exclusively in a small and dense form. These findings indicate that a predominance of small, dense LDL and hyper-apoprotein B do not always co-exist in free-living groups. Moreover, if coronary risk increases with increasing LDL particle number, these results imply that the risk arising from a predominance of small, dense LDL may actually be reduced in certain cases when plasma triacylglycerol exceeds 2.5 mmol/l.
Abstract:
Numerical weather prediction (NWP) centres use numerical models of the atmospheric flow to forecast future weather states from an estimate of the current state. Variational data assimilation (VAR) is commonly used to determine an optimal state estimate that minimizes the errors between observations of the dynamical system and model predictions of the flow. The rate of convergence of the VAR scheme and the sensitivity of the solution to errors in the data are dependent on the condition number of the Hessian of the variational least-squares objective function. The traditional formulation of VAR is ill-conditioned and hence leads to slow convergence and an inaccurate solution. In practice, operational NWP centres precondition the system via a control variable transform to reduce the condition number of the Hessian. In this paper we investigate the conditioning of VAR for a single, periodic, spatially-distributed state variable. We present theoretical bounds on the condition number of the original and preconditioned Hessians and hence demonstrate the improvement produced by the preconditioning. We also investigate theoretically the effect of observation position and error variance on the preconditioned system and show that the problem becomes more ill-conditioned with increasingly dense and accurate observations. Finally, we confirm the theoretical results in an operational setting by giving experimental results from the Met Office variational system.
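For context, the quantities discussed above can be written in standard, schematic variational assimilation notation (the specifics of the paper's single-variable system are not reproduced here):

```latex
% Standard (schematic) 3D-Var notation: background x_b with error covariance
% B, observations y with error covariance R, observation operator H.
\[
  J(\mathbf{x}) = \tfrac12(\mathbf{x}-\mathbf{x}_b)^{\top}B^{-1}(\mathbf{x}-\mathbf{x}_b)
   + \tfrac12(\mathbf{y}-H\mathbf{x})^{\top}R^{-1}(\mathbf{y}-H\mathbf{x}),
  \qquad
  S = \nabla^2 J = B^{-1} + H^{\top}R^{-1}H .
\]
% The control variable transform x = x_b + B^{1/2} v preconditions the
% problem, giving the Hessian
\[
  \hat{S} \;=\; I + B^{1/2} H^{\top} R^{-1} H\, B^{1/2},
\]
% whose condition number governs the convergence behaviour analysed above.
```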
Abstract:
Models play a vital role in supporting a range of activities in numerous domains. We rely on models to support the design, visualisation, analysis and representation of parts of the world around us, and as such significant research effort has been invested into numerous areas of modelling, including support for model semantics, dynamic states and behaviour, and temporal data storage and visualisation. Whilst these efforts have increased our capabilities and allowed us to create increasingly powerful software-based models, the process of developing models, supporting tools and/or data structures remains difficult, expensive and error-prone. In this paper we define from the literature the key factors in assessing a model's quality and usefulness: semantic richness, support for dynamic states and object behaviour, and temporal data storage and visualisation. We also identify a number of shortcomings in both existing modelling standards and model development processes, and propose a unified generic process to guide users through the development of semantically rich, dynamic and temporal models.
Abstract:
Our group considered the desirability of including representations of uncertainty in the development of parameterizations. (By 'uncertainty' here we mean the deviation of sub-grid scale fluxes or tendencies in any given model grid box from truth.) We unanimously agreed that ECMWF should attempt to provide a more physical basis for uncertainty estimates than the very effective but ad hoc methods being used at present. Our discussions identified several issues that will arise.