980 results for "Systematic errors"


Relevance: 30.00%

Abstract:

Hybrid quantum mechanics/molecular mechanics (QM/MM) simulations provide a powerful tool for studying chemical reactions, especially in complex biochemical systems. In most work to date, the quantum region is kept fixed throughout the simulation and is defined in an ad hoc way based on chemical intuition and available computational resources. The simulation errors associated with a given choice of the quantum region are, however, rarely assessed in a systematic manner. Here we study the dependence of two relevant quantities on the QM region size: the force error at the center of the QM region and the free energy of a proton transfer reaction. Taking lysozyme as our model system, we find that in an apolar region the average force error decreases rapidly with increasing QM region size. In contrast, the average force error at the polar active site is considerably higher, exhibits large oscillations, decreases more slowly, and may not fall below acceptable limits even for a quantum-region radius of 9.0 Å. Although free-energy computations could only be afforded up to a radius of 6.0 Å, the results were found to change considerably within these limits. These errors demonstrate that the results of QM/MM calculations are heavily affected by the definition of the QM region (not only its size), and a convergence test is proposed as part of the setup of QM/MM simulations.
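A minimal sketch of such a convergence test, assuming illustrative names, force values and tolerance (none of which come from the paper): grow the QM radius and accept the smallest radius whose force error, measured against a large-region reference calculation, stays below a tolerance.

```python
# Hypothetical convergence test for the QM-region size: compare the force
# on the central atom for increasing QM radii against a large-region
# reference, and pick the smallest radius below a given error tolerance.
# All numbers here are illustrative, not results from the paper.

def force_error(qm_forces, ref_forces):
    """Euclidean norm of the difference between two force vectors (e.g. eV/A)."""
    return sum((a - b) ** 2 for a, b in zip(qm_forces, ref_forces)) ** 0.5

def converged_radius(forces_by_radius, ref_forces, tol=0.1):
    """Return the smallest radius whose force error is below tol,
    or None if no tested radius converges (as seen for polar sites)."""
    for radius in sorted(forces_by_radius):
        if force_error(forces_by_radius[radius], ref_forces) < tol:
            return radius
    return None

# Illustrative data: force (x, y, z) on the central atom for three QM
# radii, plus a large-region reference calculation.
forces = {3.0: (1.40, 0.10, 0.00),
          6.0: (1.10, 0.05, 0.00),
          9.0: (1.02, 0.01, 0.00)}
reference = (1.00, 0.00, 0.00)

radius = converged_radius(forces, reference, tol=0.1)
```

With a tighter tolerance the test may fail for every tested radius, mirroring the paper's observation for the polar active site.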

Relevance: 30.00%

Abstract:

An alternating combination of genetic algorithm and neural network (AGANN) is presented to correct the systematic error of density functional theory (DFT) calculations. It treats the DFT as a black box and models the error using external statistical information. As a demonstration, the AGANN method has been applied to correct the lattice energies from DFT calculations for 72 metal halides and hydrides. With the AGANN correction, the mean absolute value of the relative errors of the calculated lattice energies with respect to the experimental values decreases from 4.93% to 1.20% in the testing set. For comparison, a neural network alone reduces the mean value to 2.56%, and the conventional combination of genetic algorithm and neural network reduces it to 2.15%. Multiple linear regression has almost no correction effect here.
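The core correction idea can be sketched as follows. This is a simplified stand-in, not the authors' AGANN implementation: the alternating genetic-algorithm step is omitted, the data are synthetic, and only the black-box principle is shown — train a small neural network to predict the DFT error from descriptors, then subtract that prediction from the raw DFT value.

```python
import numpy as np

# Hedged sketch: learn the systematic error of a black-box calculation
# from descriptors using a one-hidden-layer network trained by plain
# full-batch gradient descent. Descriptors, error surface, and network
# size are all illustrative assumptions.

rng = np.random.default_rng(0)

# Synthetic descriptors (e.g. ionic radii, electronegativities) and a
# synthetic systematic error of DFT lattice energies to be learned.
X = rng.uniform(-1.0, 1.0, size=(72, 2))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] ** 2      # hypothetical error surface

W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

_, pred = forward(X)
loss0 = np.mean((pred - y) ** 2)            # loss before training

lr = 0.1
for _ in range(2000):
    h, pred = forward(X)
    g = 2.0 * (pred - y)[:, None] / len(y)  # dL/dpred
    gh = (g @ W2.T) * (1.0 - h ** 2)        # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)             # loss after training
```

The corrected energy would then be the raw DFT value minus the network's predicted error; in the full AGANN scheme the genetic algorithm and the network training alternate.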

Relevance: 30.00%

Abstract:

Systematic principal component analysis (PCA) methods are presented in this paper for reliable islanding detection in power systems with significant penetration of distributed generation (DG), where synchrophasors recorded by phasor measurement units (PMUs) are used for system monitoring. Existing islanding detection methods such as rate-of-change-of-frequency (ROCOF) and vector shift are fast at processing local information; however, with the growth in installed DG capacity, they suffer from several drawbacks. Incumbent genset islanding detection cannot distinguish a system-wide disturbance from an islanding event, leading to mal-operation. The problem is even more significant when, owing to the high penetration of DG, the grid does not have sufficient inertia to limit frequency excursions during system faults or stress. To tackle these problems, this paper introduces PCA methods for islanding detection. A simple control chart is established for intuitive visualization of the transients. A recursive PCA (RPCA) scheme is proposed as a reliable extension of the PCA method that reduces false alarms for time-varying processes. To further reduce the computational burden, the approximate linear dependence condition (ALDC) errors are calculated to update the associated PCA model. The proposed PCA and RPCA methods are verified by detecting abnormal transients occurring in the UK utility network.
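The control-chart principle behind PCA-based detection can be sketched as below. This is a generic illustration under stated assumptions, not the paper's method: the recursive (RPCA) update and the ALDC test are omitted, the data are synthetic, and the squared prediction error (SPE) with an empirical percentile limit stands in for the full control chart.

```python
import numpy as np

# Sketch of PCA anomaly detection on synchrophasor-like data: fit the
# principal subspace on "normal" operation, then flag samples whose
# squared prediction error (SPE) exceeds a control limit.

rng = np.random.default_rng(1)

# Training data: four correlated PMU-like channels under normal operation.
base = rng.normal(size=(500, 1))
normal = np.hstack([base + 0.05 * rng.normal(size=(500, 1)) for _ in range(4)])

mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
P = Vt[:1].T                                 # retain one principal component

def spe(x):
    """Squared prediction error of a sample w.r.t. the PCA model."""
    xc = x - mean
    resid = xc - P @ (P.T @ xc)
    return float(resid @ resid)

# Control-chart limit from training residuals (empirical 99th percentile).
limit = np.percentile([spe(x) for x in normal], 99)

on_model = mean + 0.5 * P[:, 0]              # sample lying on the PCA subspace
islanding_like = normal[0] + np.array([0.0, 2.0, -2.0, 0.0])  # decorrelated jump

alarm_normal = spe(on_model) > limit
alarm_event = spe(islanding_like) > limit
```

An islanding-type transient breaks the correlation structure between PMUs, so its residual leaves the principal subspace and trips the SPE limit, whereas an in-model sample does not.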

Relevance: 30.00%

Abstract:

Medicines reconciliation is a way to identify and act on discrepancies in patients' medication histories, and it has been found to play a key role in patient safety. This review focuses on discrepancies and medication errors occurring at the point of discharge from hospital. Studies were identified through the following electronic databases: PubMed, ScienceDirect, EMBASE, Google Scholar, Cochrane Reviews and CINAHL. Each of the six databases was screened from inception to the end of January 2014. To determine eligibility, the title, abstract and full manuscript of each study were screened, yielding 15 articles that met the inclusion criteria. The median rate of discrepancies across the articles was 60%. On average, patients had between 1.2 and 5.3 discrepancies when leaving the hospital. Several studies also found a relation between the number of drugs a patient was taking and the number of discrepancies. The variation in the number of discrepancies found across the 15 studies could be due to the fact that some studies excluded patients taking more than five drugs at admission. Medication reconciliation would be a way to avoid the high number of discrepancies found in this literature review and thereby increase patient safety.

Relevance: 30.00%

Abstract:

Details about the parameters of kinetic systems are crucial for progress in both medical and industrial research, including drug development, clinical diagnosis and biotechnology applications. Such details must be collected through a series of kinetic experiments and investigations. Correct experimental design is essential for collecting data suitable for analysis, modelling and deriving the correct information. We have developed a systematic and iterative Bayesian method, with accompanying sets of rules, for the design of enzyme kinetic experiments. Our method selects the optimum design for collecting data suitable for accurate modelling and analysis, and it minimises the error in the estimated parameters. The rules select features of the design such as the substrate range and the number of measurements. We show here that this method can be applied directly to the study of other important kinetic systems, including drug transport, receptor binding, microbial culture and cell transport kinetics. It is possible to reduce the errors in the estimated parameters and, most importantly, to increase efficiency and cost-effectiveness by reducing the number of experiments and data points required. (C) 2003 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
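The flavour of optimal kinetic experiment design can be illustrated with a much simpler classical criterion than the authors' Bayesian rules. In this hedged sketch (all parameter values and candidate concentrations are assumptions), substrate levels for a Michaelis-Menten assay are chosen to maximise the determinant of the Fisher information, which shrinks the joint confidence region of the estimated parameters.

```python
import numpy as np
from itertools import combinations

# D-optimal-style design sketch for v = Vmax*S/(Km+S): choose the
# substrate concentrations that maximise det(J^T J), where J holds the
# sensitivities of the rate to the parameters (Vmax, Km).

def sensitivities(S, Vmax, Km):
    """Partial derivatives of v = Vmax*S/(Km+S) w.r.t. (Vmax, Km)."""
    dVmax = S / (Km + S)
    dKm = -Vmax * S / (Km + S) ** 2
    return np.column_stack([dVmax, dKm])

def d_optimal(candidates, n_points, Vmax=1.0, Km=2.0):
    """Exhaustively choose n_points substrate levels maximising det(J^T J)."""
    best, best_det = None, -np.inf
    for design in combinations(candidates, n_points):
        J = sensitivities(np.array(design), Vmax, Km)
        det = np.linalg.det(J.T @ J)
        if det > best_det:
            best, best_det = design, det
    return best, best_det

# Candidate substrate concentrations (illustrative prior: Km = 2).
candidates = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
design, det = d_optimal(candidates, 2)
```

As is typical for Michaelis-Menten designs, the criterion pairs the highest available concentration (informative about Vmax) with an intermediate one near the prior Km (informative about Km); an iterative Bayesian scheme would refine the prior and the design after each round of measurements.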

Relevance: 30.00%

Abstract:

Aim: To determine the prevalence and nature of prescribing errors in general practice, to explore their causes, and to identify defences against error. Methods: 1) Systematic reviews; 2) retrospective review of unique medication items prescribed over a 12-month period to a 2% sample of patients from 15 general practices in England; 3) interviews with 34 prescribers regarding 70 potential errors, 15 root cause analyses, and six focus groups involving 46 primary health care team members. Results: The study involved examination of 6,048 unique prescription items for 1,777 patients. Prescribing or monitoring errors were detected for one in eight patients, involving around one in 20 of all prescription items. The vast majority of the errors were of mild to moderate severity, with one in 550 items being associated with a severe error. The following factors were associated with increased risk of prescribing or monitoring errors: male gender, age less than 15 years or greater than 64 years, number of unique medication items prescribed, and being prescribed preparations in the following therapeutic areas: cardiovascular, infections, malignant disease and immunosuppression, musculoskeletal, eye, ENT and skin. Prescribing or monitoring errors were not associated with the grade of GP or with whether prescriptions were issued as acute or repeat items. A wide range of underlying causes of error was identified, relating to the prescriber, the patient, the team, the working environment, the task, the computer system and the primary/secondary care interface. Many defences against error were also identified, including strategies employed by individual prescribers and primary care teams, and making best use of health information technology. Conclusion: Prescribing errors in general practices are common, although severe errors are unusual. Many factors increase the risk of error.
Strategies for reducing the prevalence of error should focus on GP training, continuing professional development for GPs, clinical governance, effective use of clinical computer systems, and improving safety systems within general practices and at the interface with secondary care.

Relevance: 30.00%

Abstract:

SST errors in the tropical Atlantic are large and systematic in current coupled general-circulation models. We analyse the growth of these errors in the south-eastern tropical Atlantic in initialised decadal hindcast integrations for three of the models participating in the Coupled Model Intercomparison Project 5. A variety of causes for the initial bias development are identified, but in all cases considered ocean-atmosphere coupling is found to be crucial for the errors' maintenance. The mechanism involves an oceanic "bridge" between the Equator and the Benguela-Angola coastal seas, which communicates sub-surface ocean anomalies and couples SSTs in the south-eastern tropical Atlantic to the winds over the Equator. The resulting coupling between SSTs, winds and precipitation represents a positive feedback for warm SST errors in the south-eastern tropical Atlantic.

Relevance: 30.00%

Abstract:

Recent work has shown that both the amplitude of upper-level Rossby waves and the tropopause sharpness decrease with forecast lead time for several days in some operational weather forecast systems. In this contribution, the evolution of error growth in a case study of this forecast error type is diagnosed through analysis of operational forecasts and hindcast simulations. Potential vorticity (PV) on the 320-K isentropic surface is used to diagnose Rossby waves. The Rossby-wave forecast error in the operational ECMWF high-resolution forecast is shown to be associated with errors in the forecast of a warm conveyor belt (WCB), through trajectory analysis and an error metric for WCB outflows. The WCB forecast error is characterised by an overestimation of WCB amplitude, a location of the WCB outflow regions that is too far to the southeast, and a resulting underestimation of the magnitude of the negative PV anomaly in the outflow. Essentially the same forecast error development also occurred in all members of the ECMWF Ensemble Prediction System and the Met Office MOGREPS-15, suggesting that in this case model error made an important contribution to the development of forecast error, in addition to initial condition error. Exploiting this robustness of the forecast error, a comparison was performed between the realised flow evolution, proxied by a sequence of short-range simulations, and a contemporaneous forecast. Both the proxy to the realised flow and the contemporaneous forecast were produced with the Met Office Unified Model enhanced with tracers of diabatic processes modifying potential temperature and PV. Clear differences were found in the way potential temperature and PV are modified in the WCB between the proxy and the forecast. These results demonstrate that differences in potential temperature and PV modification in the WCB can be responsible for forecast errors in Rossby waves.

Relevance: 30.00%

Abstract:

In numerical weather prediction, parameterisations are used to simulate missing physics in a model, whether due to a lack of scientific understanding or a lack of computing power to address all the known physical processes. Parameterisations are a source of large uncertainty in a model: the parameter values used cannot be measured directly and hence are often not well known, and the parameterisations themselves are approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation (DA), such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential DA methods to estimate errors in the numerical model at each space-time point for each model equation. These errors are then fitted to pre-determined functional forms of the missing physics or parameterisations, based upon prior information. We applied the method to a one-dimensional advection model with additive model error, and we show that the method can accurately estimate parameterisations, with consistent error estimates. Furthermore, we show how the method depends on the quality of the DA results. The results indicate that this new method is a powerful tool for systematic model improvement.
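A toy version of this idea, under strong simplifying assumptions (perfect observations stand in for the DA analysis, a single time step, and an illustrative sinusoidal "missing physics" term), looks like the following: run a model that lacks a source term, treat observation-minus-forecast increments as pointwise model-error estimates, and fit them to candidate functional forms by least squares.

```python
import numpy as np

# 1-D advection toy model on a periodic domain: the "truth" includes a
# source term a_true*sin(x) that the forecast model is missing. The
# per-grid-point error estimate is then fitted to candidate forms.

nx = 64
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
c, dt = 1.0, 0.01                    # advection speed and time step
a_true = 0.3                         # amplitude of the missing physics

def step(u, source=0.0):
    """One upwind advection step, optionally with an extra source term."""
    return u - c * dt / dx * (u - np.roll(u, 1)) + dt * source

u0 = np.sin(x)
u_truth = step(u0, source=a_true * np.sin(x))   # "truth" includes the term
u_model = step(u0)                              # forecast model misses it

# Estimated model error per grid point per unit time (the "DA increment").
err = (u_truth - u_model) / dt

# Fit the error to candidate parameterisation forms a*sin(x) + b*cos(x).
A = np.column_stack([np.sin(x), np.cos(x)])
(a_est, b_est), *_ = np.linalg.lstsq(A, err, rcond=None)
```

In this idealised setting the fit recovers the missing amplitude exactly; with a real sequential DA system the increments are noisy, and the quality of the fitted parameterisation degrades with the quality of the analyses, as the abstract notes.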

Relevance: 30.00%

Abstract:

The aim of this article is to discuss the estimation of systematic risk in capital asset pricing models with heavy-tailed error distributions for the asset returns. Diagnostic methods for assessing departures from the model assumptions, as well as the influence of observations on the parameter estimates, are also presented. It may be shown that outlying observations are down-weighted in the maximum likelihood equations of linear models with heavy-tailed error distributions, such as the Student-t, power exponential and logistic-II distributions. This robustness also extends to influential observations. As an illustration, the systematic risk estimate of Microsoft is compared under normal and heavy-tailed errors.
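The down-weighting of outliers under Student-t errors can be demonstrated with a small simulation. This is a generic sketch, not the article's estimator: the data are synthetic, and the degrees of freedom and scale rule are fixed assumptions rather than estimated by full maximum likelihood.

```python
import numpy as np

# CAPM regression r_asset = alpha + beta*r_market + e, fitted by OLS and
# by IRLS with Student-t weights w_i = (nu+1)/(nu + r_i^2/s^2), the
# EM-type scheme for t-distributed errors. One gross outlier is injected.

rng = np.random.default_rng(2)
n, alpha_true, beta_true = 200, 0.001, 1.2
x = rng.normal(0.0, 0.02, n)                 # market excess returns
y = alpha_true + beta_true * x + rng.normal(0.0, 0.005, n)
x[0] = 0.08                                  # extreme market move...
y[0] = alpha_true + beta_true * 0.08 + 0.30  # ...with a gross return error

X = np.column_stack([np.ones(n), x])

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def t_irls(X, y, nu=4.0, iters=50):
    coef = ols(X, y)
    for _ in range(iters):
        r = y - X @ coef
        s2 = np.median(r ** 2) / 0.455       # robust scale (median of chi2_1)
        w = (nu + 1.0) / (nu + r ** 2 / s2)  # Student-t likelihood weights
        sw = np.sqrt(w)
        coef = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return coef

beta_ols = ols(X, y)[1]                      # distorted by the outlier
beta_t = t_irls(X, y)[1]                     # outlier down-weighted
```

The t-based estimate of the systematic risk (beta) stays close to the true value because the outlier's weight collapses, whereas the OLS beta is pulled substantially away.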

Relevance: 30.00%

Abstract:

Grammar has always been an important part of language learning. Based on various theories, such as universal grammar theory (Chomsky, 1959) and the input theory (Krashen, 1970), explicit and implicit teaching methods have been developed. Research shows that both methods have benefits and disadvantages. Attitudes towards grammar teaching in schools have also changed, and nowadays grammar teaching methods and learning strategies, as part of language mastery, are a topic of discussion among linguists. This study focuses on teacher and learner experiences and beliefs about teaching English grammar and the difficulties learners may face. The aim of the study is to conduct a literature review and to establish what scientific knowledge exists concerning these topics. In addition, the relevant steering documents are examined with a focus on grammar teaching at Swedish upper secondary schools. Chomsky's universal grammar theory and Krashen's input hypothesis provide the theoretical background for the study. The study was conducted using qualitative and quantitative methods; a systematic search of four databases (LIBRIS, ERIC, LLBA and Google Scholar) was used to collect relevant publications. The results show that the literature identifies various grammar areas that are perceived as problematic for learners all over the world. The most common explanation for these difficulties is the influence of the learner's L1. Research also presents teachers' and learners' beliefs about the benefits of particular grammar teaching methods. An effective combination of teaching methods needs to be found to fit learners' expectations and individual needs; together, these can contribute to achieving higher language proficiency levels and can therefore be successfully applied at Swedish upper secondary schools.

Relevance: 30.00%

Abstract:

Neurally adjusted ventilatory assist (NAVA) delivers airway pressure (P(aw)) in proportion to the electrical activity of the diaphragm (EAdi) using an adjustable proportionality constant (NAVA level, cm·H(2)O/μV). During systematic increases in the NAVA level, feedback-controlled down-regulation of the EAdi results in a characteristic two-phased response in P(aw) and tidal volume (Vt). The transition from the first to the second response phase allows identification of adequate unloading of the respiratory muscles with NAVA (NAVA(AL)). We aimed to develop and validate a mathematical algorithm to identify NAVA(AL). P(aw), Vt, and EAdi were recorded while systematically increasing the NAVA level in 19 adult patients. In a multistep approach, inspiratory P(aw) peaks were first identified by dividing the EAdi into inspiratory portions using Gaussian mixture modeling. Two polynomials were then fitted onto the curves of the P(aw) peaks and of Vt. The beginning of the P(aw) and Vt plateaus, and thus NAVA(AL), was identified at the minimum of the squared polynomial derivative and the polynomial fitting errors. A graphical user interface was developed in the Matlab computing environment. The median NAVA(AL) visually estimated by 18 independent physicians was 2.7 (range 0.4 to 5.8) cm·H(2)O/μV, and that identified by our model was 2.6 (range 0.6 to 5.0) cm·H(2)O/μV. NAVA(AL) identified by our model was below the range of visually estimated NAVA(AL) in two instances and above it in one instance. We conclude that our model identifies NAVA(AL) in most instances with acceptable accuracy for application in clinical routine and research.
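The plateau-detection step can be sketched in a heavily simplified form. This is not the authors' Matlab algorithm: the Gaussian-mixture breath segmentation and the Vt curve are omitted, the polynomial degree and the two-phase response data are illustrative, and only the "minimum of the squared polynomial derivative" idea is shown.

```python
import numpy as np

# Toy two-phase response: peak airway pressure rises with the NAVA level
# and then plateaus. A polynomial is fitted to the curve, and the
# adequate-assist level is placed where the squared derivative of the
# fitted polynomial is smallest (the beginning of the plateau).

levels = np.linspace(0.0, 4.0, 21)          # NAVA levels (cmH2O/uV)
paw = 10.0 * np.minimum(levels, 2.0)        # rise phase, then plateau

coeffs = np.polyfit(levels, paw, deg=2)     # fitted polynomial
dcoeffs = np.polyder(coeffs)                # its derivative

grid = np.linspace(levels[0], levels[-1], 401)
sq_deriv = np.polyval(dcoeffs, grid) ** 2
nava_al = grid[np.argmin(sq_deriv)]         # estimated adequate NAVA level
```

With noisy clinical recordings the polynomial fitting error also enters the criterion, which is why the full method combines both terms rather than using the derivative alone.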

Relevance: 30.00%

Abstract:

Purpose: The accuracy, efficiency, and efficacy of four commonly recommended medication safety assessment methodologies were systematically reviewed. Methods: Medical literature databases were systematically searched for any comparative study conducted between January 2000 and October 2009 in which at least two of the four methodologies (incident report review, direct observation, chart review, and trigger tool) were compared with one another. Any study that compared two or more methodologies for quantitative accuracy (adequacy of the assessment of medication errors and adverse drug events), efficiency (effort and cost), and efficacy, and that provided numerical data, was included in the analysis. Results: Twenty-eight studies were included in this review. Of these, 22 compared two of the methodologies and 6 compared three. Direct observation identified the greatest number of reports of drug-related problems (DRPs), while incident report review identified the fewest. However, incident report review generally showed higher specificity than the other methods and most effectively captured severe DRPs. In contrast, the sensitivity of incident report review was lower than that of the trigger tool. While the trigger tool was the least labor-intensive of the four methodologies, incident report review appeared to be the least expensive, but only when linked with concomitant automated reporting systems and targeted follow-up. Conclusion: All four medication safety assessment techniques (incident report review, chart review, direct observation, and trigger tool) have different strengths and weaknesses, and the overlap between methods in identifying DRPs is minimal. While the trigger tool appeared to be the most effective and labor-efficient method, incident report review best identified high-severity DRPs.

Relevance: 30.00%

Abstract:

In many field or laboratory situations, well-mixed reservoirs such as injection or detection wells and gas distribution or sampling chambers define the boundaries of transport domains. Exchange of solutes or gases across such boundaries can occur through advective or diffusive processes. First, we systematically analyzed situations where the inlet region consists of a well-mixed reservoir by interpreting them in terms of injection type. Second, we discuss the mass balance errors that seem to appear in the case of resident injections. Mixing cells (MC) can be coupled mathematically to a domain with advective-dispersive transport in different ways: by assuming a continuous solute flux at the interface (flux injection, MC-FI), or by assuming a continuous resident concentration (resident injection). In the latter case, the flux leaving the mixing cell can be defined in two ways: either as the value obtained when the interface is approached from the mixing-cell side (MC-RI-), or as the value obtained when it is approached from the column side (MC-RI+). Solutions for these injection types with constant or, in one case, distance-dependent transport parameters were compared to each other as well as to a solution for a two-layer system in which the first layer is characterized by a large dispersion coefficient. These solutions differ mainly at small Peclet numbers. For most real situations, the model for resident injection MC-RI+ is considered to be the relevant one. This type of injection was modeled with a constant or with an exponentially varying dispersion coefficient within the porous medium. A constant dispersion coefficient is appropriate for gases because of the Eulerian nature of the usually dominant gaseous diffusion coefficient, whereas an asymptotically growing dispersion coefficient is more appropriate for solutes, owing to the Lagrangian nature of mechanical dispersion, which evolves only with the fluid flow.
Assuming a continuous resident concentration at the interface between a mixing cell and a column, as in the MC-RI+ model, entails a flux discontinuity. This discontinuity arises inherently from the definition of a mixing cell: the mixing process is included in the balance equation but does not appear in the description of the flux through the mixing cell, where only convection appears because of the homogeneous concentration within the cell. Thus, the solute flux through a mixing cell in close contact with a transport domain is generally underestimated. This leads to (apparent) mass balance errors, which are often reported for similar situations and erroneously used to judge the validity of such models. Finally, the mixing cell model MC-RI+ defines a universal basis regarding the type of solute injection at a boundary: depending on the mixing cell parameters, it represents, in its limits, flux as well as resident injections. (C) 1998 Elsevier Science B.V. All rights reserved.
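The flux discontinuity of a resident-injection coupling can be made visible with a crude finite-difference sketch. Every parameter and discretisation choice below is an illustrative assumption, not taken from the paper: a well-mixed cell is flushed by solute-free water and feeds a 1-D advection-dispersion column whose inlet resident concentration is tied to the cell value.

```python
import numpy as np

# MC-RI+-style coupling sketch: the cell-side flux carries convection
# only (homogeneous cell), while the column-side flux at the same
# interface also has a dispersive part, so the two fluxes differ.

nx, L = 200, 1.0
dx = L / nx
v, D = 1.0, 0.05                       # velocity and dispersion coefficient
dt = 0.2 * min(dx / v, dx * dx / (2 * D))   # stable explicit time step
V_cell, q = 0.1, 1.0                   # mixing-cell volume and flow rate

c = np.zeros(nx)                       # resident concentration in the column
C = 1.0                                # initial concentration in the cell

t, t_end = 0.0, 0.2
while t < t_end:
    C += dt * q / V_cell * (0.0 - C)   # cell flushed by solute-free inflow
    cb = np.concatenate(([C], c))      # ghost node carries the cell value
    adv = -v * (cb[1:] - cb[:-1]) / dx # upwind advection
    disp = np.zeros(nx)
    disp[1:-1] = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
    disp[0] = D * (c[1] - 2 * c[0] + C) / dx ** 2
    disp[-1] = D * (c[-2] - c[-1]) / dx ** 2   # zero-gradient outlet
    c = c + dt * (adv + disp)
    t += dt

# Cell-side flux (convection only) vs column-side flux (convection plus
# a dispersive contribution estimated from the inlet gradient).
flux_cell_side = v * C
flux_column_side = v * C - D * (c[0] - C) / dx
flux_gap = abs(flux_column_side - flux_cell_side)
```

The nonzero `flux_gap` is exactly the dispersive contribution that the mixing-cell description omits; accumulating it over time produces the apparent mass balance error discussed above.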

Relevance: 30.00%

Abstract:

BACKGROUND: There is limited research on anaesthesiologists' attitudes and experiences regarding medical error communication, particularly concerning the disclosure of errors to patients. OBJECTIVE: To characterise anaesthesiologists' attitudes and experiences regarding disclosing errors to patients and reporting errors within the hospital, and to examine factors influencing their willingness to disclose or report errors. DESIGN: Cross-sectional survey. SETTING: The departments of anaesthesia of Switzerland's five university hospitals, in 2012/2013. PARTICIPANTS: Two hundred and eighty-one clinically active anaesthesiologists. MAIN OUTCOME MEASURES: Anaesthesiologists' attitudes and experiences regarding medical error communication. RESULTS: The overall response rate of the survey was 52% (281/542). Respondents broadly endorsed disclosing harmful errors to patients (100% for serious errors, 77% for minor errors, 19% for near misses), but also reported factors that might make them less likely to actually disclose such errors. Only 12% of respondents had previously received training on how to disclose errors to patients, although 93% were interested in receiving such training. Overall, 97% of respondents agreed that serious errors should be reported, but willingness to report minor errors (74%) and near misses (59%) was lower. Respondents were more likely to strongly agree that serious errors should be reported if they also thought that their hospital would implement systematic changes after errors were reported (odds ratio 2.097; 95% confidence interval, 1.16 to 3.81). Significant differences in attitudes towards error disclosure and reporting were noted between departments. CONCLUSION: Willingness to disclose or report errors varied widely between hospitals. Heads of department and hospital chiefs therefore need to be aware of the importance of local culture when it comes to error communication.
Error disclosure training and improving feedback on how error reports are being used to improve patient safety may also be important steps in increasing anaesthesiologists' communication of errors.