993 results for "Systematic errors"
Abstract:
View-based and Cartesian representations provide rival accounts of visual navigation in humans, and here we explore possible models for the view-based case. A visual “homing” experiment was undertaken by human participants in immersive virtual reality. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on visual landmark configuration and relative goal location. A model based on simple visual cues captures important characteristics of these distributions. Augmenting the visual features to include 3D elements such as stereo and motion parallax results in a set of models that describe the data accurately, demonstrating the effectiveness of a view-based approach.
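One simple way to make a view-based model of this kind concrete, offered purely as a hedged illustration (the abstract does not state the model's exact form), is an image-difference homing rule: the predicted end-point is the ground-plane position whose current view best matches the remembered view at the goal,

\[
\hat{\mathbf{x}} = \arg\min_{\mathbf{x}}\; \sum_i \big\| f_i\big(V(\mathbf{x})\big) - f_i\big(V(\mathbf{x}_{\mathrm{goal}})\big) \big\|^2 ,
\]

where \(V(\mathbf{x})\) is the view available at position \(\mathbf{x}\) and the \(f_i\) are visual features (for example landmark bearings and elevations, augmented in the richer models by stereo and motion-parallax cues). The shape and extent of the predicted end-point error distribution then follow from the flatness and anisotropy of this cost surface around its minimum.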
Abstract:
Background: Medication errors in general practice are an important source of potentially preventable morbidity and mortality. Building on previous descriptive, qualitative and pilot work, we sought to investigate the effectiveness, cost-effectiveness and likely generalisability of a complex pharmacist-led, IT-based intervention aiming to improve prescribing safety in general practice.
Objectives: We sought to:
• Test the hypothesis that a pharmacist-led, IT-based complex intervention using educational outreach and practical support is more effective than simple feedback in reducing the proportion of patients at risk from errors in prescribing and medicines management in general practice.
• Conduct an economic evaluation of the cost per error avoided, from the perspective of the National Health Service (NHS).
• Analyse data recorded by pharmacists, summarising the proportions of patients judged to be at clinical risk, the actions recommended by pharmacists, and the actions completed in the practices.
• Explore the views and experiences of healthcare professionals and NHS managers concerning the intervention; investigate potential explanations for the observed effects; and inform decisions on the future roll-out of the pharmacist-led intervention.
• Examine secular trends in the outcome measures of interest, allowing for informal comparison between trial practices and practices contributing to the QRESEARCH database that did not participate in the trial.
Methods: Two-arm cluster randomised controlled trial of 72 English general practices with embedded economic analysis and longitudinal descriptive and qualitative analysis. Informal comparison of the trial findings with a national descriptive study of secular trends undertaken using data from practices contributing to the QRESEARCH database. The main outcomes of interest were prescribing errors and medication monitoring errors at six and 12 months following the intervention.
Results: Participants in the pharmacist intervention arm practices were significantly less likely to have been prescribed a non-selective NSAID without a proton pump inhibitor (PPI) if they had a history of peptic ulcer (OR 0.58, 95% CI 0.38 to 0.89), to have been prescribed a beta-blocker if they had asthma (OR 0.73, 95% CI 0.58 to 0.91), or (in those aged 75 years and older) to have been prescribed an ACE inhibitor or diuretic without a measurement of urea and electrolytes in the last 15 months (OR 0.51, 95% CI 0.34 to 0.78). The economic analysis suggests that the PINCER pharmacist intervention has a 95% probability of being cost-effective if the decision-maker's ceiling willingness to pay reaches £75 (six months) or £85 (12 months) per error avoided. The intervention addressed an issue that was important to professionals and their teams and was delivered in a way that was acceptable to practices, with minimum disruption of normal work processes. Comparison of the trial findings with changes seen in QRESEARCH practices indicated that any reductions achieved in the simple feedback arm were likely, in the main, to have been related to secular trends rather than the intervention.
Conclusions: Compared with simple feedback, the pharmacist-led intervention resulted in reductions in the proportions of patients at risk of prescribing and monitoring errors for the primary outcome measures and the composite secondary outcome measures at six months and (with the exception of the NSAID/peptic ulcer outcome measure) 12 months post-intervention. The intervention is acceptable to pharmacists and practices, and is likely to be seen as cost-effective by decision makers.
Abstract:
This paper considers the effect of GARCH errors on the tests proposed by Perron (1997) for a unit root in the presence of a structural break. We assess the impact of degeneracy and integratedness of the conditional variance individually and find that, apart from in the limit, the testing procedure is insensitive to the degree of degeneracy but does exhibit an increasing over-sizing as the process becomes more integrated. When we consider the GARCH specifications that we are likely to encounter in empirical research, we find that the Perron tests are reasonably robust to the presence of GARCH and do not suffer from severe over- or under-rejection of a correct null hypothesis.
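The kind of Monte Carlo size study described can be sketched as follows. This is a hedged illustration only: the Zivot–Andrews test from statsmodels stands in for the Perron (1997) procedure, and the sample size, replication count and GARCH parameters are chosen for illustration rather than taken from the paper.

```python
# Hedged sketch: empirical size of a structural-break unit-root test when the
# errors follow a GARCH(1,1) process. Zivot-Andrews is used as a readily
# available stand-in for Perron (1997); all parameter values are illustrative.
import numpy as np
from statsmodels.tsa.stattools import zivot_andrews

def simulate_garch_errors(n, omega=0.05, alpha=0.1, beta=0.85, seed=None):
    """GARCH(1,1) innovations; alpha + beta close to 1 gives a near-integrated variance."""
    rng = np.random.default_rng(seed)
    e = np.zeros(n)
    h = omega / (1.0 - alpha - beta)          # start at the unconditional variance
    for t in range(n):
        e[t] = np.sqrt(h) * rng.standard_normal()
        h = omega + alpha * e[t] ** 2 + beta * h
    return e

def rejection_rate(n_obs=200, n_rep=200, level=0.05, seed=0):
    """Share of replications rejecting the (true) unit-root null at the given level."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_rep):
        e = simulate_garch_errors(n_obs, seed=int(rng.integers(1 << 31)))
        y = np.cumsum(e)                      # random walk: the unit-root null is true
        _, pvalue, _, _, _ = zivot_andrews(y, regression="c")
        rejections += pvalue < level
    return rejections / n_rep

if __name__ == "__main__":
    print("Empirical size at nominal 5% level:", rejection_rate())
```

A rejection rate well above the nominal 5% would indicate the over-sizing discussed above; repeating the exercise with alpha + beta closer to one probes the effect of a more integrated conditional variance.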
Abstract:
Background: Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk of measures related to hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. Methods: In this pragmatic, cluster randomised trial, general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to general practices, patients, pharmacists, researchers, and statisticians. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of angiotensin converting enzyme (ACE) inhibitor or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. Findings: 72 general practices with a combined list size of 480 942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0.58, 95% CI 0.38–0.89); a β blocker if they had asthma (0.73, 0.58–0.91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0.51, 0.34–0.78). PINCER has a 95% probability of being cost-effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. Interpretation: The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Funding: Patient Safety Research Portfolio, Department of Health, England.
Abstract:
The incorporation of potentially catalytic groups in DNA is of interest for the in vitro selection of novel deoxyribozymes. A series of 10 C5-modified analogues of 2'-deoxyuridine triphosphate have been synthesised that possess side chains of differing flexibility and bearing a primary amino or imidazole functionality. For each series of nucleotide analogues, differing degrees of flexibility of the C5 side chain were achieved through the use of alkynyl, alkenyl and alkyl moieties. The imidazole function was conjugated to these C5-amino-modified nucleotides using either imidazole 4-acetic acid or imidazole 4-acrylic acid (urocanic acid). The substrate properties of the nucleotides (fully replacing dTTP) with Taq polymerase during PCR have been investigated in order to evaluate their potential applications for in vitro selection experiments. 5-(3-Aminopropynyl)dUTP and 5-(E-3-aminopropenyl)dUTP and their imidazole 4-acetic acid- and urocanic acid-modified conjugates were found to be substrates. In contrast, C5-amino-modified dUTPs with alkane or Z-alkene linkers and their corresponding conjugates were not substrates. The incorporation of these analogues during PCR has been confirmed by inhibition of restriction enzyme digestion using XbaI and by mass spectrometry of the PCR products.
Abstract:
If secondary structure predictions are to be incorporated into fold recognition methods, an assessment of the effect of specific types of errors in predicted secondary structures on the sensitivity of fold recognition should be carried out. Here, we present a systematic comparison of different secondary structure prediction methods by measuring frequencies of specific types of error. We carry out an evaluation of the effect of specific types of error on secondary structure element alignment (SSEA), a baseline fold recognition method. The results of this evaluation indicate that missing out whole helix or strand elements, or predicting the wrong type of element, is more detrimental than predicting the wrong lengths of elements or overpredicting helix or strand. We also suggest that SSEA scoring is an effective method for assessing accuracy of secondary structure prediction and perhaps may also provide a more appropriate assessment of the “usefulness” and quality of predicted secondary structure, if secondary structure alignments are to be used in fold recognition.
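A minimal sketch of what an element-level alignment score of this kind might look like is given below. The element representation, gap penalty and scoring weights are illustrative assumptions, not the SSEA scheme used in the paper; the point is only to show how dropping a whole element is penalised more than mispredicting a length.

```python
# Hedged sketch of a secondary-structure element alignment (SSEA) style score.
# Representation and weights are illustrative assumptions, not the paper's method.
from typing import List, Tuple

Element = Tuple[str, int]   # ("H", 12) = helix of 12 residues; "E" = strand; "C" = coil

def element_score(a: Element, b: Element) -> float:
    """Reward matching element types, scaled by how well the lengths agree."""
    type_a, len_a = a
    type_b, len_b = b
    if type_a != type_b:
        return -1.0                                        # wrong element type
    return 2.0 * min(len_a, len_b) / max(len_a, len_b)     # same type: 0..2

def ssea_align(seq1: List[Element], seq2: List[Element], gap: float = -0.5) -> float:
    """Global (Needleman-Wunsch style) alignment over elements; returns the score."""
    n, m = len(seq1), len(seq2)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap                      # skipping an element costs a gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + element_score(seq1[i - 1], seq2[j - 1]),
                dp[i - 1][j] + gap,
                dp[i][j - 1] + gap,
            )
    return dp[n][m]

# A prediction that drops a whole strand scores worse than one that only gets a
# helix length wrong, mirroring the ranking of error types described above.
native    = [("H", 12), ("E", 6), ("H", 10)]
wrong_len = [("H", 8),  ("E", 6), ("H", 10)]
missing_E = [("H", 12), ("H", 10)]
print(ssea_align(native, wrong_len), ">", ssea_align(native, missing_E))
```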
Abstract:
Data assimilation aims to incorporate measured observations into a dynamical system model in order to produce accurate estimates of all the current (and future) state variables of the system. The optimal estimates minimize a variational principle and can be found using adjoint methods. The model equations are treated as strong constraints on the problem. In reality, the model does not represent the system behaviour exactly and errors arise due to lack of resolution and inaccuracies in physical parameters, boundary conditions and forcing terms. A technique for estimating systematic and time-correlated errors as part of the variational assimilation procedure is described here. The modified method determines a correction term that compensates for model error and leads to improved predictions of the system states. The technique is illustrated in two test cases. Applications to the 1-D nonlinear shallow water equations demonstrate the effectiveness of the new procedure.
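A minimal sketch of the variational formulation behind this (generic 4D-Var notation assumed here, not necessarily the paper's): under the strong-constraint assumption the model is treated as perfect,

\[
J(\mathbf{x}_0) = \tfrac{1}{2}(\mathbf{x}_0 - \mathbf{x}^b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}_0 - \mathbf{x}^b)
+ \tfrac{1}{2}\sum_{k=0}^{N}\big(\mathbf{y}_k - \mathcal{H}_k(\mathbf{x}_k)\big)^{\mathsf T}\mathbf{R}_k^{-1}\big(\mathbf{y}_k - \mathcal{H}_k(\mathbf{x}_k)\big),
\qquad \mathbf{x}_{k+1} = \mathcal{M}_k(\mathbf{x}_k),
\]

whereas a modified, weak-constraint formulation augments the control vector with a (systematic, possibly time-correlated) model-error term \(\boldsymbol{\eta}\), evolving the states as \(\mathbf{x}_{k+1} = \mathcal{M}_k(\mathbf{x}_k) + \boldsymbol{\eta}_k\) and adding a penalty \(\tfrac{1}{2}\,\boldsymbol{\eta}^{\mathsf T}\mathbf{Q}^{-1}\boldsymbol{\eta}\) to \(J\). Minimising the augmented cost function with the adjoint method then yields both the state estimate and a correction term that compensates for model error, which is the idea behind the improved predictions reported above.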
Abstract:
Current state-of-the-art climate models fail to capture accurately the path of the Gulf Stream and North Atlantic Current. This leads to a warm bias near the North American coast, where the modelled Gulf Stream separates from the coast too far north, and a cold anomaly to the east of the Grand Banks of Newfoundland, where the modelled North Atlantic Current remains too zonal. Using an atmosphere-only model forced with the sea surface temperature (SST) biases in the North Atlantic, we consider the impact they have on the mean state and the variability in the North Atlantic European region in winter. Our results show that the SST errors produce a mean sea-level pressure response that is similar in magnitude and pattern to the atmospheric circulation errors in the coupled climate model. The work also suggests that errors in the coupled model storm tracks and North Atlantic Oscillation, compared to reanalysis data, can also be explained partly by these SST errors. Our results suggest that both the error in the Gulf Stream separation location and the path of the North Atlantic Current around the Grand Banks play important roles in affecting the atmospheric circulation. Reducing these coupled model errors could significantly improve the representation of the large-scale atmospheric circulation of the North Atlantic and European region.
Abstract:
For data assimilation in numerical weather prediction, the initial forecast-error covariance matrix Pf is required. For variational assimilation it is particularly important to prescribe an accurate initial matrix Pf, since Pf is either static (in the 3D-Var case) or constant at the beginning of each assimilation window (in the 4D-Var case). At large scales the atmospheric flow is well approximated by hydrostatic balance and this balance is strongly enforced in the initial matrix Pf used in operational variational assimilation systems such as that of the Met Office. However, at convective scales this balance does not necessarily hold any more. Here we examine the extent to which hydrostatic balance is valid in the vertical forecast-error covariances for high-resolution models in order to determine whether there is a need to relax this balance constraint in convective-scale data assimilation. We use the Met Office Global and Regional Ensemble Prediction System (MOGREPS) and a 1.5 km resolution version of the Unified Model for a case study characterized by the presence of convective activity. An ensemble of high-resolution forecasts valid up to three hours after the onset of convection is produced. We show that at 1.5 km resolution hydrostatic balance does not hold for forecast errors in regions of convection. This indicates that in the presence of convection hydrostatic balance should not be enforced in the covariance matrix used for variational data assimilation at this scale. The results show the need to investigate covariance models that may be better suited for convective-scale data assimilation. Finally, we give a measure of the balance present in the forecast perturbations as a function of the horizontal scale (from 3–90 km) using a set of diagnostics.
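For reference, the balance in question relates vertical pressure-gradient and buoyancy perturbations (standard notation assumed here rather than quoted from the paper):

\[
\frac{\partial p'}{\partial z} = -\rho' g ,
\]

so a natural diagnostic of how well forecast errors respect hydrostatic balance is the size of the residual \(\partial p'/\partial z + \rho' g\) relative to either of its terms. In convective regions, where vertical accelerations are significant, this residual is no longer small, which is the behaviour reported for the 1.5 km ensemble perturbations above.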
Abstract:
Evolutionary developmental genetics brings together systematists, morphologists and developmental geneticists; it will therefore impact on each of these component disciplines. The goals and methods of phylogenetic analysis are reviewed here, and the contribution of evolutionary developmental genetics to morphological systematics, in terms of character conceptualisation and primary homology assessment, is discussed. Evolutionary developmental genetics, like its component disciplines phylogenetic systematics and comparative morphology, is concerned with homology concepts. Phylogenetic concepts of homology and their limitations are considered here, and the need for independent homology statements at different levels of biological organisation is evaluated. The role of systematics in evolutionary developmental genetics is outlined. Phylogenetic systematics and comparative morphology will suggest effective sampling strategies to developmental geneticists. Phylogenetic systematics provides hypotheses of character evolution (including parallel evolution and convergence), stimulating investigations into the evolutionary gains and losses of morphologies. Comparative morphology identifies those structures that are not easily amenable to typological categorisation, and that may be of particular interest in terms of developmental genetics. The concepts of latent homology and genetic recall may also prove useful in the evolutionary interpretation of developmental genetic data.
Abstract:
The turbulent mixing in thin ocean surface boundary layers (OSBL), which occupy the upper 100 m or so of the ocean, controls the exchange of heat and trace gases between the atmosphere and ocean. Here we show that current parameterizations of this turbulent mixing lead to systematic and substantial errors in the depth of the OSBL in global climate models, which then leads to biases in sea surface temperature. One reason, we argue, is that current parameterizations are missing key surface-wave processes that force Langmuir turbulence, which deepens the OSBL more rapidly than steady wind forcing. Scaling arguments are presented to identify two dimensionless parameters that measure the importance of wave forcing against wind forcing, and against buoyancy forcing. A global perspective on the occurrence of wave-forced turbulence is developed using re-analysis data to compute these parameters globally. The diagnostic study developed here suggests that the turbulent energy available for mixing the OSBL is under-estimated without forcing by surface waves. Wave forcing, and hence Langmuir turbulence, could be important over wide areas of the ocean and in all seasons in the Southern Ocean. We conclude that surface-wave-forced Langmuir turbulence is an important process in the OSBL that requires parameterization.
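The two dimensionless parameters are of the following kind (a hedged reconstruction using standard surface-wave scalings; the paper's exact definitions may differ):

\[
\mathrm{La}_t = \sqrt{\frac{u_*}{u_s}}, \qquad
\frac{h}{L_L} = \frac{B_0\,h}{u_*^2\,u_s},
\]

where \(u_*\) is the water-side friction velocity, \(u_s\) the surface Stokes drift, \(h\) the boundary-layer depth and \(B_0\) the surface buoyancy flux. Small \(\mathrm{La}_t\) indicates wave forcing dominating wind forcing (Langmuir turbulence), while \(h/L_L\) compares buoyancy forcing with wave forcing; mapping these parameters from re-analysis data provides the global perspective referred to above.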
Landscape, regional and global estimates of nitrogen flux from land to sea: errors and uncertainties
Abstract:
Regional to global scale modelling of N flux from land to ocean has progressed to date through the development of simple empirical models representing bulk N flux rates from large watersheds, regions, or continents on the basis of a limited selection of model parameters. Watershed scale N flux modelling has developed a range of approaches, from models where N flux rates are predicted through a physical representation of the processes involved, through to catchment scale models which provide a simplified representation of true systems behaviour. Generally, these watershed scale models describe within their structure the dominant process controls on N flux at the catchment or watershed scale, and take into account variations in the extent to which these processes control N flux rates as a function of landscape sensitivity to N cycling and export. This paper addresses the nature of the errors and uncertainties inherent in existing regional to global scale models, and the nature of error propagation associated with upscaling from small catchment to regional scale, through a suite of spatial aggregation and conceptual lumping experiments conducted on a validated watershed scale model, the export coefficient model. Results from the analysis support the findings of other researchers developing macroscale models in allied research fields. Conclusions from the study confirm that reliable and accurate regional scale N flux modelling needs to take account of the heterogeneity of landscapes and the impact that this has on N cycling processes within homogeneous landscape units.
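The export coefficient approach, and why spatial lumping can introduce error, can be sketched as follows. All land-use classes, areas, coefficients and the sensitivity function below are illustrative assumptions, not calibrated values from the study; the sketch only shows that once export responds nonlinearly to a landscape property, a lumped unit and its distributed sub-units no longer predict the same flux.

```python
# Hedged sketch of an export coefficient model for catchment N flux, with a
# nonlinear landscape-sensitivity term to illustrate why spatial lumping matters.
# Classes, areas, coefficients and the sensitivity function are illustrative only.

EXPORT_COEFF = {"arable": 20.0, "grassland": 8.0, "woodland": 2.0}  # kg N/ha/yr

def sensitivity(rainfall_mm: float) -> float:
    """Illustrative nonlinear modifier: wetter units export disproportionately more N."""
    return (rainfall_mm / 800.0) ** 1.5

def n_flux(areas_ha: dict, rainfall_mm: float) -> float:
    """Annual N flux (kg/yr) = sum over land uses of area * export coefficient * sensitivity."""
    return sensitivity(rainfall_mm) * sum(
        EXPORT_COEFF[use] * area for use, area in areas_ha.items()
    )

# Two heterogeneous landscape units modelled separately ...
unit_wet = {"arable": 900.0, "grassland": 100.0, "woodland": 0.0}
unit_dry = {"arable": 100.0, "grassland": 400.0, "woodland": 500.0}
distributed = n_flux(unit_wet, rainfall_mm=1200.0) + n_flux(unit_dry, rainfall_mm=600.0)

# ... versus one lumped unit with area-weighted mean rainfall: the nonlinearity
# means the lumped prediction differs, i.e. a spatial aggregation error.
lumped_areas = {use: unit_wet[use] + unit_dry[use] for use in EXPORT_COEFF}
lumped = n_flux(lumped_areas, rainfall_mm=900.0)

print(f"distributed: {distributed:.0f} kg N/yr, lumped: {lumped:.0f} kg N/yr")
```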
Abstract:
We introduce an algorithm (called REDFITmc2) for spectrum estimation in the presence of timescale errors. It is based on the Lomb-Scargle periodogram for unevenly spaced time series, in combination with Welch's Overlapped Segment Averaging procedure, bootstrap bias correction and persistence estimation. The timescale errors are modelled parametrically and included in the simulations for determining (1) the upper levels of the spectrum of the red-noise AR(1) alternative and (2) the uncertainty of the frequency of a spectral peak. Application of REDFITmc2 to ice core and stalagmite records of palaeoclimate allowed a more realistic evaluation of spectral peaks than when this source of uncertainty is ignored. The results support qualitatively the intuition that stronger effects on the spectrum estimate (decreased detectability and increased frequency uncertainty) occur at higher frequencies. The added value of algorithm REDFITmc2 is that those effects are quantified. Regarding timescale construction, not only the fixed points, dating errors and the functional form of the age-depth model play a role; the joint distribution of all time points (serial correlation, stratigraphic order) also determines spectrum estimation.
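The basic ingredients can be sketched as follows. This is not the REDFITmc2 implementation itself (there is no WOSA segmenting, bias correction or AR(1) red-noise level estimation here), and all values are illustrative: the sketch only shows how repeating a Lomb–Scargle estimate over simulated perturbations of the time axis quantifies the frequency uncertainty introduced by dating errors.

```python
# Hedged sketch of spectrum estimation under timescale errors:
# Lomb-Scargle periodogram of an unevenly spaced series, plus a small Monte Carlo
# over perturbed time axes. Not the REDFITmc2 algorithm; values are illustrative.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(42)

# Unevenly spaced "record" with a periodic component plus noise
t = np.sort(rng.uniform(0.0, 1000.0, size=300))            # e.g. ages in years
signal = np.sin(2 * np.pi * t / 100.0) + 0.5 * rng.standard_normal(t.size)

freqs = np.linspace(0.001, 0.05, 400)                      # cycles per year
omega = 2 * np.pi * freqs                                   # lombscargle expects angular freq.

def periodogram(times: np.ndarray) -> np.ndarray:
    x = signal - signal.mean()
    return lombscargle(times, x, omega, normalize=True)

# Reference spectrum on the nominal timescale
p_nominal = periodogram(t)

# Monte Carlo over timescale errors: jitter each time point (keeping the record in
# stratigraphic order), mimicking dating uncertainty in an age-depth model.
n_sim, sigma_age = 200, 5.0                                  # 5-year 1-sigma dating error
peak_freqs = np.empty(n_sim)
for i in range(n_sim):
    t_pert = np.sort(t + sigma_age * rng.standard_normal(t.size))
    peak_freqs[i] = freqs[np.argmax(periodogram(t_pert))]

print("nominal peak frequency :", freqs[np.argmax(p_nominal)])
print("peak frequency spread  :", peak_freqs.std(), "(from timescale errors)")
```

Increasing sigma_age, or looking at higher-frequency peaks, increases the spread of the recovered peak frequency, consistent with the qualitative statement above that timescale errors hit high frequencies hardest.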