914 results for Analyses errors
Abstract:
Introduction: Care home residents are at particular risk from medication errors, and our objective was to determine the prevalence and potential harm of prescribing, monitoring, dispensing and administration errors in UK care homes, and to identify their causes. Methods: A prospective study of a random sample of residents within a purposive sample of homes in three areas. Errors were identified by patient interview, note review, observation of practice and examination of dispensed items. Causes were investigated by observation and through theoretically framed interviews with home staff, doctors and pharmacists. Potential harm from errors was assessed by expert judgement. Results: The 256 residents recruited in 55 homes were taking a mean of 8.0 medicines. One hundred and seventy-eight residents (69.5%) had one or more errors; the mean number per resident was 1.9 errors. The mean potential harm from prescribing, monitoring, administration and dispensing errors was 2.6, 3.7, 2.1 and 2.0 (0 = no harm, 10 = death), respectively. Contributing factors from the 89 interviews included doctors who were not accessible, did not know the residents and lacked information in homes when prescribing; home staff's high workload, lack of medicines training and interruptions during drug rounds; a lack of teamwork among home, practice and pharmacy; inefficient ordering systems; inaccurate medicine records and the prevalence of verbal communication; and difficult-to-fill (and check) medication administration systems. Conclusions: That two thirds of residents were exposed to one or more medication errors is of concern. The will to improve exists, but there is a lack of overall responsibility. Action is required from all concerned.
Abstract:
BACKGROUND: Differences in the interindividual response to dietary intervention could be modified by genetic variation in nutrient-sensitive genes. OBJECTIVE: This study examined single nucleotide polymorphisms (SNPs) in presumed nutrient-sensitive candidate genes for obesity and obesity-related diseases for main and dietary interaction effects on weight, waist circumference, and fat mass regain over 6 mo. DESIGN: In total, 742 participants who had lost ≥ 8% of their initial body weight were randomly assigned to follow 1 of 5 different ad libitum diets with different glycemic indexes and contents of dietary protein. The SNP main and SNP-diet interaction effects were analyzed by using linear regression models, corrected for multiple testing by using Bonferroni correction and evaluated by using quantile-quantile (Q-Q) plots. RESULTS: After correction for multiple testing, none of the SNPs were significantly associated with weight, waist circumference, or fat mass regain. Q-Q plots showed that ALOX5AP rs4769873 showed a higher observed than predicted P value for the association with less waist circumference regain over 6 mo (-3.1 cm/allele; 95% CI: -4.6, -1.6; P/Bonferroni-corrected P = 0.000039/0.076), independently of diet. Additional associations were identified by using Q-Q plots for SNPs in ALOX5AP, TNF, and KCNJ11 for main effects; in LPL and TUB for glycemic index interaction effects on waist circumference regain; in GHRL, CCK, MLXIPL, and LEPR on weight; in PPARC1A, PCK2, ALOX5AP, PYY, and ADRB3 on waist circumference; and in PPARD, FABP1, PLAUR, and LPIN1 on fat mass regain for dietary protein interaction. CONCLUSION: The observed effects of SNP-diet interactions on weight, waist, and fat mass regain suggest that genetic variation in nutrient-sensitive genes can modify the response to diet. This trial was registered at clinicaltrials.gov as NCT00390637.
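As a hedged illustration of the type of SNP-diet interaction analysis this abstract describes (not the authors' code), the sketch below fits a linear model with a SNP × dietary-protein interaction term on synthetic data and applies a Bonferroni correction; the variable names, sample data and number of tests are assumptions.

```python
# Hypothetical sketch of a SNP x diet interaction test with Bonferroni correction.
# The data, variable names, and the number of tests are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 742  # sample size reported in the abstract
df = pd.DataFrame({
    "snp": rng.integers(0, 3, n),             # risk-allele count 0/1/2
    "protein": rng.choice([0, 1], n),         # low vs high dietary-protein arm
    "waist_regain": rng.normal(2.0, 4.0, n),  # cm regained over 6 mo (synthetic)
})

# Main effects plus the SNP:diet interaction, as in a standard regression framework
fit = smf.ols("waist_regain ~ snp * protein", data=df).fit()
p_interaction = fit.pvalues["snp:protein"]

n_tests = 100  # assumed number of SNPs tested; Bonferroni multiplies p by this count
p_bonferroni = min(p_interaction * n_tests, 1.0)
print(f"interaction p = {p_interaction:.3g}, Bonferroni-corrected p = {p_bonferroni:.3g}")
```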
Abstract:
Bayesian analysis is given of an instrumental variable model that allows for heteroscedasticity in both the structural equation and the instrument equation. Specifically, the approach for dealing with heteroscedastic errors in Geweke (1993) is extended to the Bayesian instrumental variable estimator outlined in Rossi et al. (2005). Heteroscedasticity is treated by modelling the variance for each error using a hierarchical prior that is Gamma distributed. The computation is carried out by using a Markov chain Monte Carlo sampling algorithm with an augmented draw for the heteroscedastic case. An example using real data illustrates the approach and shows that ignoring heteroscedasticity in the instrument equation when it exists may lead to biased estimates.
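A minimal sketch of the kind of hierarchical error model the abstract describes, following the Geweke-style scale-mixture treatment it cites; the notation is assumed rather than quoted from the paper, and the cross-equation error correlation that motivates the instrument (retained in the full Rossi et al. model) is omitted here for brevity:

\[
\begin{aligned}
x_i &= \delta z_i + \varepsilon_{x,i}, &\qquad \varepsilon_{x,i} &\sim N\!\left(0,\ \sigma_x^2\,\lambda_{x,i}\right),\\
y_i &= \beta x_i + \varepsilon_{y,i}, &\qquad \varepsilon_{y,i} &\sim N\!\left(0,\ \sigma_y^2\,\lambda_{y,i}\right),\\
\nu/\lambda_{x,i} &\sim \chi^2_\nu, &\qquad \nu/\lambda_{y,i} &\sim \chi^2_\nu,
\end{aligned}
\]

where the per-observation scales \(\lambda_{x,i},\lambda_{y,i}\) carry the heteroscedasticity (the \(\chi^2_\nu\) prior on \(\nu/\lambda_i\) is the Gamma-distributed hierarchical prior) and are drawn in the additional, augmented step of the MCMC sampler.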
Abstract:
As a part of the Atmospheric Model Intercomparison Project (AMIP), the behaviour of 15 general circulation models has been analysed in order to diagnose and compare the ability of the different models in simulating Northern Hemisphere midlatitude atmospheric blocking. In accordance with the established AMIP procedure, the 10-year model integrations were performed using prescribed, time-evolving monthly mean observed SSTs spanning the period January 1979–December 1988. Atmospheric observational data (ECMWF analyses) over the same period have also been used to verify the models' results. The models involved in this comparison represent a wide spectrum of model complexity, with different horizontal and vertical resolutions, numerical techniques and physical parametrizations, and exhibit large differences in blocking behaviour. Nevertheless, a few common features can be found, such as a general tendency to underestimate both blocking frequency and the average duration of blocks. The possible relationship between model blocking and model systematic errors has also been assessed, although without ad hoc numerical experimentation it is impossible to relate particular model deficiencies in representing blocking to precise parts of the model formulation with certainty.
Abstract:
Using annual observations on industrial production over the last three centuries, and on GDP over a 100-year period, we seek an historical perspective on the forecastability of these UK output measures. The series are dominated by strong upward trends, so we consider various specifications of the trend, including the local linear trend structural time-series model, which allows the level and slope of the trend to vary. Our results are not unduly sensitive to how the trend in the series is modelled: the average sizes of the forecast errors of all models, and the wide span of the prediction intervals, attest to a great deal of uncertainty in the economic environment. It appears that, from an historical perspective, the postwar period has been relatively more forecastable.
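For orientation, the local linear trend structural time-series model mentioned above has the standard state-space form shown below (the textbook formulation, not notation taken from the paper itself):

\[
\begin{aligned}
y_t &= \mu_t + \varepsilon_t, &\qquad \varepsilon_t &\sim N(0,\sigma_\varepsilon^2),\\
\mu_{t+1} &= \mu_t + \beta_t + \xi_t, &\qquad \xi_t &\sim N(0,\sigma_\xi^2),\\
\beta_{t+1} &= \beta_t + \zeta_t, &\qquad \zeta_t &\sim N(0,\sigma_\zeta^2),
\end{aligned}
\]

where \(\mu_t\) is the trend level and \(\beta_t\) its slope, both allowed to evolve over time; setting the slope and level disturbance variances to zero recovers a deterministic linear trend.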
Abstract:
Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and error compensation. The developing seamless approach proposes that identifying and correcting short-term climate model errors has the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal-to-decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the coupled model's long-term pervasive SST errors. A protocol is designed to attribute the SST biases to their source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation, (2) determine the time scale of the advent of the bias and its propagation, (3) find the geographical origin of the bias, (4) evaluate the degree of coupling in the development of the bias, and (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale on which biases appear, regionally restored experiments show the geographical origin, and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in the bias development. This strategy is applied to four prominent SST biases of the IPSLCM5A-LR coupled model in the tropical Pacific, which are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears within a few months and is caused by a lack of upwelling due to too-weak meridional coastal winds off Peru. The cold equatorial bias, which surprisingly takes 30 years to develop, results from equatorward advection of midlatitude cold SST errors. Despite large development efforts, the current generation of coupled models shows little improvement. The strategy proposed in this study is a further step towards moving from the current ad hoc approach to a bias-targeted, priority-setting, systematic model development approach.
Abstract:
Radar refractivity retrievals can capture near-surface humidity changes, but noisy phase changes of the ground clutter returns limit the accuracy for both klystron- and magnetron-based systems. Observations with a C-band (5.6 cm) magnetron weather radar indicate that the correction for phase changes introduced by local oscillator frequency changes leads to refractivity errors no larger than 0.25 N units: equivalent to a relative humidity change of only 0.25% at 20°C. Requested stable local oscillator (STALO) frequency changes were accurate to 0.002 ppm based on laboratory measurements. More serious are the random phase change errors introduced when targets are not at the range-gate center and there are changes in the transmitter frequency (Δf_Tx) or the refractivity (ΔN). Observations at C band with a 2-μs pulse show an additional 66° of phase change noise for a Δf_Tx of 190 kHz (34 ppm); this allows the effect due to ΔN to be predicted. Even at S band with klystron transmitters, significant phase change noise should occur when a large ΔN develops relative to the reference period [e.g., ~55° when ΔN = 60 for the Next Generation Weather Radar (NEXRAD) radars]. At shorter wavelengths (e.g., C and X band) and with magnetron transmitters in particular, refractivity retrievals relative to an earlier reference period are even more difficult, and operational retrievals may be restricted to changes over shorter (e.g., hourly) periods of time. Target location errors can be reduced by using a shorter pulse or identified by a new technique making alternate measurements at two closely spaced frequencies, which could even be achieved with a dual–pulse repetition frequency (PRF) operation of a magnetron transmitter.
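The retrieval rests on the commonly used relation between the two-way phase of a fixed ground target at range r and the refractive index along the path; the form below is stated as background under that assumption rather than quoted from the paper:

\[
\phi(r) = \frac{4\pi f}{c}\int_0^{r} n(s)\,\mathrm{d}s,
\qquad
\Delta\phi \;\approx\; \frac{4\pi f\, r}{c}\,\Delta N \times 10^{-6},
\]

with \(n = 1 + N\times 10^{-6}\). Because the phase scales with both the transmitter frequency f and the target range r, small frequency changes or errors in the assumed target position map directly into spurious refractivity changes ΔN, which is the error mechanism the abstract quantifies.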
Abstract:
Despite the increasing number of studies examining the correlates of interest and boredom, surprisingly little research has focused on within-person fluctuations in these emotions, making it difficult to describe their situational nature. To address this gap in the literature, this study conducted repeated measurements (12 times) on a sample of 158 undergraduate students using a variety of self-report assessments, and examined the within-person relationships between task-specific perceptions (expectancy, utility, and difficulty) and interest and boredom. This study further explored the role of achievement goals in predicting between-person differences in these within-person relationships. Utilizing hierarchical-linear modeling, we found that, on average, a higher perception of both expectancy and utility, as well as a lower perception of difficulty, was associated with higher interest and lower boredom levels within individuals. Moreover, mastery-approach goals weakened the negative within-person relationship between difficulty and interest and the negative within-person relationship between utility and boredom. Mastery-avoidance and performance-avoidance goals strengthened the negative relationship between expectancy and boredom. These results suggest how educators can more effectively instruct students with different types of goals, minimizing boredom and maximizing interest and learning.
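As a hedged sketch of this kind of within-person analysis (not the authors' code), a Level-1 model can be fit as a mixed-effects regression with a person-mean-centred predictor and a random slope; the variable names and synthetic data below are illustrative assumptions.

```python
# Illustrative multilevel (HLM) sketch: within-person effect of perceived difficulty
# on interest, with a random intercept and a random difficulty slope per student.
# Variable names and the synthetic data are assumptions, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
students, waves = 158, 12  # sample size and number of repeated measurements from the abstract
df = pd.DataFrame({
    "student": np.repeat(np.arange(students), waves),
    "difficulty": rng.normal(0, 1, students * waves),
})
df["interest"] = 3.0 - 0.4 * df["difficulty"] + rng.normal(0, 1, len(df))

# Person-mean centring isolates the within-person part of the predictor
df["difficulty_wp"] = df["difficulty"] - df.groupby("student")["difficulty"].transform("mean")

model = smf.mixedlm("interest ~ difficulty_wp", df,
                    groups=df["student"], re_formula="~difficulty_wp")
result = model.fit()
print(result.summary())
```

Cross-level moderation of the kind reported (e.g., achievement goals altering the difficulty-interest slope) would add a person-level goal variable and its product with the centred predictor to the formula.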
Abstract:
The IEEE 754 standard for floating-point arithmetic is widely used in computing. It is based on real arithmetic and is made total by adding both a positive and a negative infinity, a negative zero, and many Not-a-Number (NaN) states. The IEEE infinities are said to have the behaviour of limits. Transreal arithmetic is total. It also has a positive and a negative infinity but no negative zero, and it has a single, unordered number, nullity. We elucidate the transreal tangent and extend real limits to transreal limits. Arguing from this firm foundation, we maintain that there are three category errors in the IEEE 754 standard. Firstly, the claim that IEEE infinities are limits of real arithmetic confuses limiting processes with arithmetic. Secondly, a defence of IEEE negative zero confuses the limit of a function with the value of a function. Thirdly, the definition of IEEE NaNs confuses undefined with unordered. Furthermore, we prove that the tangent function, with the infinities given by geometrical construction, has a period of an entire rotation, not half a rotation as is commonly understood. This illustrates a category error, confusing the limit with the value of a function, in an important area of applied mathematics: trigonometry. We briefly consider the wider implications of this category error. Another paper proposes transreal arithmetic as a basis for floating-point arithmetic; here we take the profound step of proposing transreal arithmetic as a replacement for real arithmetic to remove the possibility of certain category errors in mathematics. Thus we propose both theoretical and practical advantages of transmathematics. In particular, we argue that implementing transreal analysis in trans-floating-point arithmetic would extend the coverage, accuracy and reliability of almost all computer programs that exploit real analysis: essentially all programs in science and engineering and many in finance, medicine and other socially beneficial applications.
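For readers unfamiliar with the IEEE 754 behaviours under discussion, the short snippet below demonstrates the signed infinities, the negative zero, and the unordered NaN comparisons using ordinary Python floats; it illustrates standard IEEE semantics only and says nothing about transreal arithmetic itself.

```python
# Standard IEEE 754 behaviours referenced in the abstract (plain Python floats).
import math

inf = math.inf
print(1.0 / inf, -1.0 / inf)        # 0.0 -0.0 : division by signed infinity gives a signed zero
print(-0.0 == 0.0)                  # True     : negative zero compares equal to positive zero
print(math.copysign(1.0, -0.0))     # -1.0     : yet its sign is still observable

nan = math.nan
print(nan == nan, nan < 1.0, nan > 1.0)  # False False False : NaN is unordered with everything
print(inf - inf)                         # nan   : indeterminate forms yield NaN
```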
Abstract:
Theoretical estimates for the cutoff errors in the Ewald summation method for dipolar systems are derived. Absolute errors in the total energy, forces and torques, for both the real- and reciprocal-space parts, are considered. The applicability of the estimates is tested and confirmed in several numerical examples. We demonstrate that these estimates can easily be used to determine the optimal parameters of the dipolar Ewald summation, in the sense that they minimize the computation time for a predefined, user-set accuracy.
Abstract:
This article shows how one can formulate the representation problem starting from Bayes' theorem. The purpose of this article is to raise awareness of the formal solutions, so that approximations can be placed in a proper context. The representation errors appear in the likelihood, and the different possibilities for the representation of reality in model and observations are discussed, including nonlinear representation probability density functions. Specifically, the assumptions needed in the usual procedure to add a representation error covariance to the error covariance of the observations are discussed, and it is shown that, when several sub-grid observations are present, their mean still has a representation error; so-called 'superobbing' does not resolve the issue. A connection is made to the off-line or on-line retrieval problem, providing a new simple proof of the equivalence of assimilating linear retrievals and original observations. Furthermore, it is shown how nonlinear retrievals can be assimilated without loss of information. Finally, we discuss how errors in the observation-operator model can be treated consistently in the Bayesian framework, connecting to previous work in this area.
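The "usual procedure" the abstract refers to can be summarised in the standard data-assimilation notation given below, stated here for orientation rather than quoted from the article: the observation y is related to the model state x through an observation operator H with additive instrument error and representation error, and the two error covariances are simply summed in a Gaussian likelihood.

\[
y = H(x) + \varepsilon_i + \varepsilon_r,
\qquad
p(y \mid x) = N\!\big(H(x),\ R_i + R_r\big),
\]

where \(\varepsilon_i\) is the instrument error with covariance \(R_i\) and \(\varepsilon_r\) the representation error with covariance \(R_r\); the assumptions required for this additive step are exactly what the article examines within the full Bayesian formulation.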
Abstract:
Glutamine synthetase (GS) is a key enzyme in nitrogen (N) assimilation, particularly during seed development. Three cytosolic GS isoforms (HvGS1) were identified in barley (Hordeum vulgare L. cv Golden Promise). Quantitation of gene expression, localization and response to N supply revealed that each gene plays a non-redundant role in different tissues and during development. Localization of HvGS1_1 in vascular cells of different tissues, combined with its abundance in the stem and its response to changes in N supply, indicate that it is important in N transport and remobilization. HvGS1_1 is located on chromosome 6H at 72.54 cM, close to the marker HVM074 which is associated with a major quantitative trait locus (QTL) for grain protein content (GPC). HvGS1_1 may be a potential candidate gene to manipulate barley GPC. HvGS1_2 mRNA was localized to the leaf mesophyll cells, in the cortex and pericycle of roots, and was the dominant HvGS1 isoform in these tissues. HvGS1_2 expression increased in leaves with an increasing supply of N, suggesting its role in the primary assimilation of N. HvGS1_3 was specifically and predominantly localized in the grain, being highly expressed throughout grain development. HvGS1_3 expression increased specifically in the roots of plants grown on high NH4+, suggesting that it has a primary role in grain N assimilation and also in the protection against ammonium toxicity in roots. The expression of HvGS1 genes is directly correlated with protein and enzymatic activity, indicating that transcriptional regulation is of prime importance in the control of GS activity in barley.
Abstract:
Observational analyses of running 5-year ocean heat content trends (Ht) and net downward top-of-atmosphere radiation (N) are significantly correlated (r ~ 0.6) from 1960 to 1999, but a spike in Ht in the early 2000s is likely spurious since it is inconsistent with estimates of N from both satellite observations and climate model simulations. Variations in N between 1960 and 2000 were dominated by volcanic eruptions, and are well simulated by the ensemble mean of coupled models from the Fifth Coupled Model Intercomparison Project (CMIP5). We find an observation-based reduction in N of -0.31 ± 0.21 W m-2 between 1999 and 2005 that potentially contributed to the recent warming slowdown, but the relative roles of external forcing and internal variability remain unclear. While present-day anomalies of N in the CMIP5 ensemble mean and observations agree, this may be due to a cancellation of errors in outgoing longwave and absorbed solar radiation.
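A minimal sketch of the kind of diagnostic described, computing running 5-year trends of ocean heat content and correlating them with net TOA radiation; the data are synthetic and all variable names are assumptions, not the observational datasets used in the study.

```python
# Illustrative sketch: running 5-year OHC trends (Ht) vs net downward TOA radiation (N).
# Synthetic annual series; a real analysis would use observed OHC and TOA datasets.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1960, 2000)
N = 0.5 + rng.normal(0, 0.3, years.size)              # net downward TOA flux (W m-2), synthetic
ohc = np.cumsum(N + rng.normal(0, 0.2, years.size))   # heat content in flux-equivalent units

window = 5
Ht = np.array([np.polyfit(np.arange(window), ohc[i:i + window], 1)[0]
               for i in range(years.size - window + 1)])      # running 5-yr trend of OHC
N_mean = np.array([N[i:i + window].mean()
                   for i in range(years.size - window + 1)])  # matching 5-yr mean of N

r = np.corrcoef(Ht, N_mean)[0, 1]
print(f"correlation between 5-yr OHC trends and 5-yr mean N: r = {r:.2f}")
```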