135 results for errors-in-variables model
Abstract:
Introduction: Care home residents are at particular risk from medication errors, and our objective was to determine the prevalence and potential harm of prescribing, monitoring, dispensing and administration errors in UK care homes, and to identify their causes. Methods: A prospective study of a random sample of residents within a purposive sample of homes in three areas. Errors were identified by patient interview, note review, observation of practice and examination of dispensed items. Causes were understood by observation and from theoretically framed interviews with home staff, doctors and pharmacists. Potential harm from errors was assessed by expert judgement. Results: The 256 residents recruited in 55 homes were taking a mean of 8.0 medicines. One hundred and seventy-eight residents (69.5%) had one or more errors; the mean number was 1.9 errors per resident. The mean potential harm from prescribing, monitoring, administration and dispensing errors was 2.6, 3.7, 2.1 and 2.0 (0 = no harm, 10 = death), respectively. Contributing factors from the 89 interviews included doctors who were not accessible, did not know the residents and lacked information in homes when prescribing; home staff's high workload, lack of medicines training and drug round interruptions; lack of teamwork among home, practice and pharmacy; inefficient ordering systems; inaccurate medicine records and a prevalence of verbal communication; and difficult-to-fill (and check) medication administration systems. Conclusions: That two thirds of residents were exposed to one or more medication errors is of concern. The will to improve exists, but there is a lack of overall responsibility. Action is required from all concerned.
Abstract:
A model of species migration is presented which takes the form of a reaction-diffusion system. We consider special limits of this model in which we demonstrate the existence of travelling wave solutions. These solutions can be used to describe the migration of cells, bacteria, and some organisms.
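As a minimal illustration of the class of model described (the classical Fisher-KPP equation, not the authors' specific system), a scalar reaction-diffusion equation with logistic growth admits travelling wave solutions:

```latex
% Fisher-KPP equation: diffusivity D, growth rate r, population density u.
\frac{\partial u}{\partial t} = D\,\frac{\partial^2 u}{\partial x^2} + r\,u(1 - u)
```

Substituting the travelling-wave ansatz $u(x,t) = U(x - ct)$ reduces this to the ODE $D U'' + c U' + r U(1 - U) = 0$, which has monotone front solutions for every wave speed $c \ge 2\sqrt{rD}$.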
Abstract:
The IEEE 754 standard for floating-point arithmetic is widely used in computing. It is based on real arithmetic and is made total by adding both a positive and a negative infinity, a negative zero, and many Not-a-Number (NaN) states. The IEEE infinities are said to have the behaviour of limits. Transreal arithmetic is total. It also has a positive and a negative infinity but no negative zero, and it has a single, unordered number, nullity. We elucidate the transreal tangent and extend real limits to transreal limits. Arguing from this firm foundation, we maintain that there are three category errors in the IEEE 754 standard. Firstly, the claim that IEEE infinities are limits of real arithmetic confuses limiting processes with arithmetic. Secondly, a defence of IEEE negative zero confuses the limit of a function with the value of a function. Thirdly, the definition of IEEE NaNs confuses undefined with unordered. Furthermore, we prove that the tangent function, with the infinities given by geometrical construction, has a period of an entire rotation, not half a rotation as is commonly understood. This illustrates a category error, confusing the limit with the value of a function, in an important area of applied mathematics: trigonometry. We briefly consider the wider implications of this category error. Another paper proposes transreal arithmetic as a basis for floating-point arithmetic; here we take the profound step of proposing transreal arithmetic as a replacement for real arithmetic to remove the possibility of certain category errors in mathematics. Thus we propose both theoretical and practical advantages of transmathematics. In particular, we argue that implementing transreal analysis in trans-floating-point arithmetic would extend the coverage, accuracy and reliability of almost all computer programs that exploit real analysis: essentially all programs in science and engineering and many in finance, medicine and other socially beneficial applications.
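The specific IEEE 754 behaviours at issue can be reproduced directly, since Python floats are IEEE 754 binary64 values. A minimal demonstration of the standard's choices, not an implementation of transreal arithmetic:

```python
# Python floats are IEEE 754 binary64, so the behaviours criticised
# above can be observed directly. This demonstrates the standard's
# semantics; it is not an implementation of transreal arithmetic.
import math

# Negative zero: compares equal to 0.0 yet carries a distinct sign bit.
print(-0.0 == 0.0)               # True
print(math.copysign(1.0, -0.0))  # -1.0

# Infinities order as extreme values.
print(math.inf > 1e308)          # True
print(-math.inf < math.inf)      # True

# NaN is unordered: every comparison involving a NaN is False,
# including equality of a NaN with itself.
nan = float("nan")
print(nan == nan, nan < 1.0, nan > 1.0)  # False False False
```

The last group exhibits the 'unordered' semantics that the abstract argues conflates undefined with unordered.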
Abstract:
Theoretical estimates for the cutoff errors in the Ewald summation method for dipolar systems are derived. Absolute errors in the total energy, forces and torques, both for the real- and reciprocal-space parts, are considered. The applicability of the estimates is tested and confirmed in several numerical examples. We demonstrate that these estimates can easily be used to determine the optimal parameters of the dipolar Ewald summation, in the sense that they minimize the computation time for a predefined, user-set accuracy.
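For orientation, a generic sketch of the splitting behind such estimates (shown here for point charges; the paper treats the dipolar case) and the schematic decay of the two cutoff errors:

```latex
% Ewald splitting with parameter \alpha, real-space cutoff r_c and
% reciprocal-space cutoff k_c (point-charge form shown for brevity;
% the error scalings are schematic, not the paper's full estimates):
\frac{1}{r} = \frac{\operatorname{erfc}(\alpha r)}{r}
            + \frac{\operatorname{erf}(\alpha r)}{r},
\qquad
\Delta_{\mathrm{real}} \sim e^{-\alpha^{2} r_c^{2}},
\qquad
\Delta_{\mathrm{recip}} \sim e^{-k_c^{2}/(4\alpha^{2})}
```

Balancing the two exponential error terms at the target accuracy is what fixes the optimal $\alpha$, $r_c$ and $k_c$, and hence the minimal computation time.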
Abstract:
Contemporary research in generative second language (L2) acquisition has attempted to address observable target-deviant aspects of L2 grammars within a UG-continuity framework (e.g. Lardiere 2000; Schwartz 2003; Sprouse 2004; Prévost & White 1999, 2000). With the aforementioned in mind, the independence of pragmatic and syntactic development, independently observed elsewhere (e.g. Grodzinsky & Reinhart 1993; Lust et al. 1986; Pacheco & Flynn 2005; Serratrice, Sorace & Paoli 2004), becomes particularly interesting. In what follows, I examine the resetting of the Null-Subject Parameter (NSP) for English learners of L2 Spanish. I argue that insensitivity to the associated discourse-pragmatic constraints on the distribution of overt and null subjects accounts for what appear to be errors resulting from syntactic deficits. It is demonstrated that, despite target-deviant performance, the majority of learners must have native-like syntactic competence, given their knowledge of the Overt Pronoun Constraint (Montalbetti 1984), a principle associated with the Spanish-type setting of the NSP.
Abstract:
The absorption coefficient of a substance distributed as discrete particles in suspension is less than that of the same material dissolved uniformly in a medium, a phenomenon commonly referred to as the flattening effect. The decrease in the absorption coefficient owing to the flattening effect depends on the concentration of the absorbing pigment inside the particle, the specific absorption coefficient of the pigment within the particle, and the diameter of the particle, if the particles are assumed to be spherical. For phytoplankton cells in the ocean, with diameters ranging from less than 1 µm to more than 100 µm, the flattening effect is variable, and sometimes pronounced, as has been well documented in the literature. Here, we demonstrate how the in vivo absorption coefficient of phytoplankton cells per unit concentration of their major pigment, chlorophyll a, can be used to determine the average cell size of the phytoplankton population. Sensitivity analyses are carried out to evaluate the errors in the estimated diameter owing to potential errors in the model assumptions. Cell sizes computed for field samples using the model are compared qualitatively with indirect estimates of size classes derived from high-performance liquid chromatography data. The results are also compared quantitatively against measurements of cell size in laboratory cultures. The method is easy to apply as an operational tool for in situ observations and has potential for application to remote sensing of ocean-colour data.
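A minimal sketch of how such an inversion can work, assuming the classical homogeneous-sphere absorption efficiency of van de Hulst as used by Morel and Bricaud; the function names, the intracellular absorption value a_cm and the measured packaging factor below are illustrative, not the paper's actual model:

```python
# A minimal sketch (not the authors' exact model) of inverting the
# package ("flattening") effect for mean cell diameter, assuming the
# classical homogeneous-sphere absorption efficiency. a_cm (absorption
# per unit path of cell material) and the packaging factor 0.6 are
# hypothetical inputs.
import numpy as np
from scipy.optimize import brentq

def Q_a(rho):
    """Absorption efficiency of a homogeneous sphere; rho = a_cm * d."""
    return 1.0 + 2.0 * np.exp(-rho) / rho + 2.0 * (np.exp(-rho) - 1.0) / rho**2

def packaging_factor(rho):
    """Q*_a = (3/2) Q_a(rho)/rho: packaged-to-dissolved absorption ratio."""
    return 1.5 * Q_a(rho) / rho

def diameter_from_packaging(q_star, a_cm):
    """Invert Q*_a(rho) = q_star for rho, then return d = rho / a_cm (m)."""
    rho = brentq(lambda r: packaging_factor(r) - q_star, 1e-6, 1e3)
    return rho / a_cm

# Hypothetical example: packaging factor 0.6, a_cm = 2.0e5 m^-1.
print(diameter_from_packaging(0.6, 2.0e5))  # ~7.6e-06 m (about 8 um)
```

Because the packaging factor decreases monotonically from 1 as the dimensionless optical thickness rho = a_cm * d grows, the inversion is well posed.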
Abstract:
The representation of the diurnal cycle in the Hadley Centre climate model is evaluated using simulations of the infrared radiances observed by Meteosat 7. In both the window and water vapour channels, the standard version of the model with 19 levels produces a good simulation of the geographical distributions of the mean radiances and of the amplitude of the diurnal cycle. Increasing the vertical resolution to 30 levels leads to further improvements in the mean fields. The timing of the maximum and minimum radiances reveals significant model errors, however, which are sensitive to the frequency with which the radiation scheme is called. In most regions, these errors are consistent with well documented errors in the timing of convective precipitation, which peaks before noon in the model, in contrast to the observed peak in the late afternoon or evening. When the radiation scheme is called every model time step (half an hour), as opposed to every three hours in the standard version, the timing of the minimum radiance is improved for convective regions over central Africa, due to the creation of upper-level layer-cloud by detrainment from the convection scheme, which persists well after the convection itself has dissipated. However, this produces a decoupling between the timing of the diurnal cycles of precipitation and window channel radiance. The possibility is raised that a similar decoupling may occur in reality and the implications of this for the retrieval of the diurnal cycle of precipitation from infrared radiances are discussed.
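One common way to quantify such timing (a generic sketch; the study's actual diagnostic is not specified in the abstract) is the phase of the fitted 24-hour harmonic of the time series:

```python
# A generic sketch of estimating diurnal timing via the first harmonic
# of a 24-hour cycle; the 3-hourly sampling and synthetic data are
# illustrative, not taken from the study.
import numpy as np

def diurnal_phase(hours, values):
    """Local time (hours) at which the fitted 24 h harmonic peaks."""
    omega = 2.0 * np.pi / 24.0
    a = 2.0 * np.mean(values * np.cos(omega * hours))  # A cos(phi)
    b = 2.0 * np.mean(values * np.sin(omega * hours))  # A sin(phi)
    phase = np.arctan2(b, a)   # radians; series ~ A cos(omega*t - phase)
    return (phase / omega) % 24.0

hours = np.arange(0, 24, 3.0)                            # 3-hourly samples
values = 10 + 5 * np.cos(2 * np.pi / 24 * (hours - 15))  # peak at 15 LT
print(diurnal_phase(hours, values))                      # ~15.0
```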
Abstract:
The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) at predicting the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model machine learning based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of the global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at: http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from: http://www.reading.ac.uk/bioinf/downloads/.
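The general idea behind clustering-based quality assessment can be sketched simply: score each model by its mean structural similarity (e.g. TM-score) to all other models in the set. The following illustrates that consensus principle, not the ModFOLDclust code itself; the similarity matrix is assumed to come from an external structure-comparison tool:

```python
# A generic sketch of a clustering-based quality score: each model is
# scored by its mean pairwise structural similarity (e.g. TM-score)
# to all the other models in the set. This illustrates the consensus
# idea behind clustering MQAPs, not ModFOLDclust itself.
import numpy as np

def consensus_scores(similarity: np.ndarray) -> np.ndarray:
    """similarity: (n, n) matrix of pairwise model-model scores in [0, 1]."""
    n = similarity.shape[0]
    off_diagonal_sum = similarity.sum(axis=1) - np.diag(similarity)
    return off_diagonal_sum / (n - 1)  # mean similarity to the other models

# Hypothetical 3-model example: model 0 agrees most with the others.
sim = np.array([[1.0, 0.8, 0.7],
                [0.8, 1.0, 0.5],
                [0.7, 0.5, 1.0]])
print(consensus_scores(sim))  # [0.75 0.65 0.6]
```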
Abstract:
The realistic representation of rainfall on the local scale in climate models remains a key challenge. Realism encompasses the full spatial and temporal structure of rainfall, and is a key indicator of model skill in representing the underlying processes. In particular, if rainfall is more realistic in a climate model, there is greater confidence in its projections of future change. In this study, the realism of rainfall in a very high-resolution (1.5 km) regional climate model (RCM) is compared to a coarser-resolution 12-km RCM. This is the first time a convection-permitting model has been run for an extended period (1989–2008) over a region of the United Kingdom, allowing the characteristics of rainfall to be evaluated in a climatological sense. In particular, the duration and spatial extent of hourly rainfall across the southern United Kingdom is examined, with a key focus on heavy rainfall. Rainfall in the 1.5-km RCM is found to be much more realistic than in the 12-km RCM. In the 12-km RCM, heavy rain events are not heavy enough, and tend to be too persistent and widespread. While the 1.5-km model does have a tendency for heavy rain to be too intense, it still gives a much better representation of its duration and spatial extent. Long-standing problems in climate models, such as the tendency for too much persistent light rain and errors in the diurnal cycle, are also considerably reduced in the 1.5-km RCM. Biases in the 12-km RCM appear to be linked to deficiencies in the representation of convection.
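One of the diagnostics described, the duration of hourly rainfall events, can be sketched as run lengths of consecutive wet hours; the threshold and data below are illustrative, not the study's:

```python
# A generic sketch of one diagnostic mentioned above: the duration of
# rainfall events in an hourly series, defined here as runs of
# consecutive hours at or above a wet threshold.
import numpy as np

def event_durations(rain_mm_per_hr, threshold=0.1):
    """Lengths (hours) of consecutive runs with rain >= threshold."""
    wet = np.asarray(rain_mm_per_hr) >= threshold
    # Pad with dry hours so every event has both a start and an end.
    edges = np.diff(np.concatenate(([False], wet, [False])).astype(int))
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    return ends - starts

rain = [0, 0.5, 2.0, 1.2, 0, 0, 0.3, 0, 4.0, 3.5, 0]
print(event_durations(rain))  # [3 1 2]
```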
Abstract:
Diabatic processes can alter Rossby wave structure; consequently, errors arising from model processes propagate downstream. However, the chaotic spread of forecasts arising from initial-condition uncertainty makes it difficult to trace back from root-mean-square forecast errors to model errors. Here, diagnostics unaffected by phase errors are used, enabling investigation of systematic errors in Rossby waves in winter-season forecasts from three operational centers. Tropopause sharpness adjacent to ridges decreases with forecast lead time. It depends strongly on model resolution, even though the models are examined on a common grid. Rossby wave amplitude decreases with lead time up to about five days, consistent with under-representation of diabatic modification and of transport of air from the lower troposphere into upper-tropospheric ridges, and with humidity gradients across the tropopause that are too weak. However, amplitude also decreases when resolution is decreased. Further work is necessary to isolate the contribution from errors in the representation of diabatic processes.
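One standard example of a diagnostic unaffected by phase errors (a generic sketch, not necessarily the paper's choice) is the amplitude of zonal wavenumbers around a latitude circle, since a longitudinal displacement of the wave changes only the Fourier phases:

```python
# A generic sketch of a phase-insensitive diagnostic: the amplitude of
# zonal wavenumbers from a field sampled around a latitude circle.
# A longitude shift changes only the phase of the Fourier coefficients,
# so these amplitudes are unaffected by displacement (phase) errors.
# The synthetic field below is illustrative, not the paper's data.
import numpy as np

def zonal_wave_amplitudes(field, max_wavenumber=10):
    """field: values at equally spaced longitudes around a latitude circle."""
    coeffs = np.fft.rfft(field)
    n = len(field)
    return 2.0 * np.abs(coeffs[1:max_wavenumber + 1]) / n

lons = np.linspace(0, 2 * np.pi, 128, endpoint=False)
v = 12.0 * np.sin(5 * lons + 0.3)   # wave-5 with an arbitrary phase shift
print(zonal_wave_amplitudes(v)[4])  # ~12.0, regardless of the 0.3 shift
```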
Abstract:
This article shows how one can formulate the representation problem starting from Bayes' theorem. The purpose of this article is to raise awareness of the formal solutions, so that approximations can be placed in a proper context. The representation errors appear in the likelihood, and the different possibilities for the representation of reality in model and observations are discussed, including nonlinear representation probability density functions. Specifically, the assumptions needed in the usual procedure to add a representation error covariance to the error covariance of the observations are discussed, and it is shown that, when several sub-grid observations are present, their mean still has a representation error; so-called 'superobbing' does not resolve the issue. Connection is made to the off-line or on-line retrieval problem, providing a new simple proof of the equivalence of assimilating linear retrievals and original observations. Furthermore, it is shown how nonlinear retrievals can be assimilated without loss of information. Finally, we discuss how errors in the observation operator model can be treated consistently in the Bayesian framework, connecting to previous work in this area.
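The usual procedure referred to can be written schematically in standard data-assimilation notation, assuming Gaussian, mutually independent errors:

```latex
% Schematic form of the standard treatment: observation y, model-space
% state x, observation operator H, instrument error covariance R_o and
% representation error covariance R_r (assumed Gaussian and mutually
% independent):
p(x \mid y) \propto p(y \mid x)\, p(x),
\qquad
y = H(x) + \epsilon_o + \epsilon_r,
\qquad
p(y \mid x) = \mathcal{N}\!\left(H(x),\; R_o + R_r\right)
```

The discussion in the article concerns precisely when the additive step $R = R_o + R_r$ is justified.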
Abstract:
Arctic flaw polynyas are considered to be highly productive areas for the formation of sea ice throughout the winter season. Most estimates of sea-ice production are based on the surface energy balance equation and use global reanalyses as atmospheric forcing, which are too coarse to take into account the impact of polynyas on the atmosphere. Additional errors in the estimates of polynya ice production may result from the methods of calculating atmospheric energy fluxes and the assumption of a thin-ice distribution within polynyas. The present study uses simulations with the mesoscale weather prediction model of the Consortium for Small-scale Modelling (COSMO), in which polynya area is prescribed from satellite data. The polynya area is either assumed to be ice-free or to be covered with thin ice of 10 cm. Simulations have been performed for two winter periods (2007/08 and 2008/09). When using a realistic thin-ice thickness of 10 cm, sea-ice production in Laptev polynyas amounts to 30 km3 and 73 km3 for the winters 2007/08 and 2008/09, respectively. The higher turbulent energy fluxes of open-water polynyas result in a 50-70% increase in sea-ice production (49 km3 in 2007/08 and 123 km3 in 2008/09). Our results suggest that previous studies have overestimated ice production in the Laptev Sea.
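Schematically, estimates of this kind balance net surface heat loss over the polynya against the latent heat released by freezing (standard notation, not the paper's specific parameterisation):

```latex
% Schematic surface-energy-balance estimate of ice production: the net
% upward heat flux Q_net over the polynya is balanced by the latent
% heat released by freezing, giving an ice growth rate dh/dt:
\rho_i\, L_f\, \frac{dh}{dt} = Q_{\mathrm{net}}
```

where $\rho_i$ is the ice density, $L_f$ the latent heat of fusion and $h$ the ice thickness; integrating over the polynya area and the winter season yields volume estimates of the kind quoted above.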
Abstract:
Tropical cyclones have been investigated in a T159 version of the MPI ECHAM5 climate model using a novel technique to diagnose the evolution of the three-dimensional vorticity structure of tropical cyclones, including their full life cycle from weak initial vortex to possible extra-tropical transition. Results have been compared with reanalyses (ERA40 and JRA25) and observed tropical storms during the period 1978-1999 for the Northern Hemisphere. There is no indication of any trend in the number or intensity of tropical storms during this period in ECHAM5 or in the reanalyses, but there are distinct inter-annual variations. The storms simulated by ECHAM5 are realistic in both space and time, but the model, and even more so the reanalyses, underestimate the intensities of the most intense storms (in terms of their maximum wind speeds). There is an indication of a response to ENSO, with a smaller number of Atlantic storms during El Niño, in agreement with previous studies. The global divergence circulation responds to El Niño by setting up a large-scale convergence flow centred over the central Pacific, with enhanced subsidence over the tropical Atlantic. At the same time there is an increase in the vertical wind shear in the region of the tropical Atlantic where tropical storms normally develop. There is a good correspondence between the model and ERA40, except that the divergence circulation is somewhat stronger in the model. The model underestimates storms in the Atlantic but tends to overestimate them in the Western Pacific and in the North Indian Ocean. It is suggested that the overestimation of storms in the Pacific by the model is related to an overly strong response to the tropical Pacific SST anomalies. The overestimation in the North Indian Ocean is likely due to an overprediction of the intensity of monsoon depressions, which are then classified as intense tropical storms. Nevertheless, the overall results are encouraging and will further contribute to increased confidence in simulating intense tropical storms with high-resolution climate models.