46 results for Geographic and dimension errors
Abstract:
Aim: To examine the causes of prescribing and monitoring errors in English general practices and provide recommendations for how they may be overcome. Design: Qualitative interview and focus group study with purposive sampling and thematic analysis informed by Reason's accident causation model. Participants: General practice staff participated in a combination of semi-structured interviews (n=34) and six focus groups (n=46). Setting: Fifteen general practices across three primary care trusts in England. Results: We identified seven categories of high-level error-producing conditions: the prescriber, the patient, the team, the task, the working environment, the computer system, and the primary-secondary care interface. Each of these was further broken down to reveal various error-producing conditions. The prescriber's therapeutic training, drug knowledge and experience, knowledge of the patient, perception of risk, and physical and emotional health were all identified as possible causes. The patient's characteristics and the complexity of the individual clinical case were also found to have contributed to prescribing errors. The importance of feeling comfortable within the practice team was highlighted, as well as the safety implications of general practitioners (GPs) signing prescriptions generated by nurses when they had not seen the patient for themselves. The working environment, with its high workload, time pressures, and interruptions, and computer-related issues associated with mis-selecting drugs from electronic pick-lists and overriding alerts were also highlighted as possible causes of prescribing errors; these conditions were often interconnected. Conclusion: This study has highlighted the complex underlying causes of prescribing and monitoring errors in general practices, several of which are amenable to intervention.
Abstract:
Decadal climate predictions exhibit large biases, which are often subtracted and forgotten. However, understanding the causes of bias is essential to guide efforts to improve prediction systems, and may offer additional benefits. Here the origins of biases in decadal predictions are investigated, including whether analysis of these biases might provide useful information. The focus is especially on the lead-time-dependent bias tendency. A “toy” model of a prediction system is initially developed and used to show that there are several distinct contributions to bias tendency. Contributions from sampling of internal variability and a start-time-dependent forcing bias can be estimated and removed to obtain a much improved estimate of the true bias tendency, which can provide information about errors in the underlying model and/or errors in the specification of forcings. It is argued that the true bias tendency, not the total bias tendency, should be used to adjust decadal forecasts. The methods developed are applied to decadal hindcasts of global mean temperature made using the Hadley Centre Coupled Model, version 3 (HadCM3), climate model, and it is found that this model exhibits a small positive bias tendency in the ensemble mean. When considering different model versions, it is shown that the true bias tendency is very highly correlated with both the transient climate response (TCR) and non–greenhouse gas forcing trends, and can therefore be used to obtain observationally constrained estimates of these relevant physical quantities.
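As a hedged illustration of the lead-time-dependent bias idea described above, the sketch below estimates a total bias tendency as the linear drift of the hindcast-minus-truth bias with lead time, using synthetic data; the further separation into internal-variability sampling and start-time-dependent forcing contributions carried out in the paper is not reproduced here.

```python
# Minimal sketch (not the paper's method): estimate a lead-time-dependent
# bias and its linear "bias tendency" from a set of hindcasts.
# Array shapes and the synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_starts, n_leads = 30, 10             # start dates x lead years
truth = rng.normal(size=(n_starts, n_leads))
drift = 0.03 * np.arange(n_leads)      # imposed model drift per lead year
hindcast = truth + drift + 0.2 * rng.normal(size=(n_starts, n_leads))

bias = (hindcast - truth).mean(axis=0)          # bias as a function of lead time
lead = np.arange(n_leads, dtype=float)
tendency, intercept = np.polyfit(lead, bias, 1)  # total bias tendency (slope)

# Sampling uncertainty on the tendency from the spread across start dates
per_start = np.array([np.polyfit(lead, row, 1)[0] for row in hindcast - truth])
stderr = per_start.std(ddof=1) / np.sqrt(n_starts)
print(f"bias tendency ~ {tendency:.3f} +/- {stderr:.3f} per lead year")
```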
Abstract:
The convectively active part of the Madden-Julian Oscillation (MJO) propagates eastward through the warm pool, from the Indian Ocean through the Maritime Continent (the Indonesian archipelago) to the western Pacific. The Maritime Continent's complex topography means the exact nature of the MJO propagation through this region is unclear. Model simulations of the MJO are often poor over the region, leading to local errors in latent heat release and global errors in medium-range weather prediction and climate simulation. Using 14 northern winters of TRMM satellite data it is shown that, where the mean diurnal cycle of precipitation is strong, 80% of the MJO precipitation signal in the Maritime Continent is accounted for by changes in the amplitude of the diurnal cycle. Additionally, the relationship between outgoing long-wave radiation (OLR) and precipitation is weakened here, such that OLR is no longer a reliable proxy for precipitation. The canonical view of the MJO as the smooth eastward propagation of a large-scale precipitation envelope also breaks down over the islands of the Maritime Continent. Instead, a vanguard of precipitation (anomalies of 2.5 mm day^-1 over 10^6 km^2) jumps ahead of the main body by approximately 6 days or 2000 km. Hence, there can be enhanced precipitation over Sumatra, Borneo or New Guinea when the large-scale MJO envelope over the surrounding ocean is one of suppressed precipitation. This behaviour can be accommodated into existing MJO theories. Frictional and topographic moisture convergence and relatively clear skies ahead of the main convective envelope combine with the low thermal inertia of the islands, to allow a rapid response in the diurnal cycle which rectifies onto the lower-frequency MJO. Hence, accurate representations of the diurnal cycle and its scale interaction appear to be necessary for models to simulate the MJO successfully.
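A minimal sketch of the rectification idea, using a synthetic 3-hourly rain series in place of TRMM data: split precipitation into a daily mean and a diurnal-cycle amplitude, then ask how much of the slow (MJO-like) variation in daily rain is mirrored by variations in diurnal amplitude. All values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, per_day = 120, 8                        # 3-hourly samples per day
hours = np.arange(per_day) * 3.0
mjo = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n_days) / 45.0)   # slow envelope
diurnal_shape = np.clip(np.cos(2 * np.pi * (hours - 15) / 24), 0, None)
rain = (mjo[:, None] * (1.0 + 3.0 * diurnal_shape[None, :])
        + 0.3 * rng.normal(size=(n_days, per_day)))

daily_mean = rain.mean(axis=1)                  # low-frequency precipitation signal
diurnal_amp = rain.max(axis=1) - rain.min(axis=1)  # amplitude of the diurnal cycle

# Fraction of daily-mean variance linearly accounted for by diurnal amplitude
r = np.corrcoef(daily_mean, diurnal_amp)[0, 1]
print(f"variance of daily rain explained by diurnal amplitude: {100 * r**2:.0f}%")
```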
Abstract:
Separating edaphic impacts on tree distributions from those of climate and geography is notoriously difficult. Aboveground and belowground factors play important roles, and determining their relative contribution to tree success will greatly assist in refining predictive models and forestry strategies in a changing climate. In a common glasshouse, seedlings of interior Douglas-fir (Pseudotsuga menziesii var. glauca) from multiple populations were grown in multiple forest soils. Fungicide was applied to half of the seedlings to separate soil fungal and nonfungal impacts on seedling performance. Soils of varying geographic and climatic distance from seed origin were compared, using a transfer function approach. Seedling height and biomass were optimized following seed transfer into drier soils, whereas survival was optimized when elevation transfer was minimised. Fungicide application reduced ectomycorrhizal root colonization by c. 50%, with treated seedlings exhibiting greater survival but reduced biomass. Local adaptation of Douglas-fir populations to soils was mediated by soil fungi to some extent in 56% of soil origin by response variable combinations. Mediation by edaphic factors in general occurred in 81% of combinations. Soil biota, hitherto unaccounted for in climate models, interacts with biogeography to influence plant ranges in a changing climate.
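A hedged sketch of a transfer-function style analysis (synthetic numbers, not the study's data): fit a quadratic to a seedling response against the climatic transfer distance between seed origin and soil origin, and locate the transfer distance at which the fitted response peaks.

```python
import numpy as np

rng = np.random.default_rng(2)
transfer = rng.uniform(-4, 4, size=80)        # e.g. soil-origin minus seed-origin mean temperature (deg C)
height = 30 - 1.2 * (transfer + 1.0) ** 2 + rng.normal(0, 2, size=80)

c2, c1, c0 = np.polyfit(transfer, height, 2)  # height ~ c2*x^2 + c1*x + c0
optimum = -c1 / (2 * c2)                      # vertex of the fitted parabola
print(f"fitted optimal transfer distance: {optimum:+.2f} deg C")
```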
Abstract:
This study has explored the prediction errors of tropical cyclones (TCs) in the European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System (EPS) for the Northern Hemisphere summer period for five recent years. Results for the EPS are contrasted with those for the higher-resolution deterministic forecasts. Various metrics of location and intensity errors are considered and contrasted for verification based on IBTrACS and the numerical weather prediction (NWP) analysis (NWPa). Motivated by the aim of exploring extended TC life cycles, location and intensity measures are introduced based on lower-tropospheric vorticity, which are contrasted with traditional verification metrics. Results show that location errors are almost identical when verified against IBTrACS or the NWPa. However, intensity in the form of the mean sea level pressure (MSLP) minima and 10-m wind speed maxima is significantly underpredicted relative to IBTrACS. Using the NWPa for verification results in much better consistency between the different intensity error metrics and indicates that the lower-tropospheric vorticity provides a good indication of vortex strength, with error results showing similar relationships to those based on MSLP and 10-m wind speeds for the different forecast types. The interannual variation in forecast errors is discussed in relation to changes in the forecast and NWPa system, and variations in forecast errors between different ocean basins are discussed in terms of the propagation characteristics of the TCs.
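A minimal sketch, not ECMWF's verification code, of two of the metrics mentioned above: the great-circle location error between a forecast and a best-track TC centre, and an MSLP intensity error. The positions and pressures are hypothetical.

```python
import math

def location_error_km(lat_f, lon_f, lat_o, lon_o, radius_km=6371.0):
    """Haversine distance between forecast and observed TC centres."""
    phi1, phi2 = math.radians(lat_f), math.radians(lat_o)
    dphi = math.radians(lat_o - lat_f)
    dlam = math.radians(lon_o - lon_f)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Hypothetical forecast vs best-track values
print(f"location error: {location_error_km(21.3, 135.2, 20.8, 134.1):.0f} km")
print(f"MSLP intensity error: {965.0 - 950.0:+.0f} hPa (forecast minus observed)")
```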
Abstract:
Sixteen monthly air–sea heat flux products from global ocean/coupled reanalyses are compared over 1993–2009 as part of the Ocean Reanalysis Intercomparison Project (ORA-IP). Objectives include assessing the global heat closure, the consistency of temporal variability, comparison with other flux products, and documenting errors against in situ flux measurements at a number of OceanSITES moorings. The ensemble of 16 ORA-IP flux estimates has a global positive bias over 1993–2009 of 4.2 ± 1.1 W m^-2. Residual heat gain (i.e., surface flux + assimilation increments) is reduced to a small positive imbalance (typically, +1–2 W m^-2). This compensation between surface fluxes and assimilation increments is concentrated in the upper 100 m. Implied steady meridional heat transports also improve by including assimilation sources, except near the equator. The ensemble spread in surface heat fluxes is dominated by turbulent fluxes (>40 W m^-2 over the western boundary currents). The mean seasonal cycle is highly consistent, with variability between products mostly <10 W m^-2. The interannual variability has a consistent signal-to-noise ratio (~2) throughout the equatorial Pacific, reflecting ENSO variability. Comparisons at tropical buoy sites (10°S–15°N) over 2007–2009 showed too little ocean heat gain (i.e., flux into the ocean) in ORA-IP (up to 1/3 smaller than buoy measurements), primarily due to latent heat flux errors in ORA-IP. Comparisons with the Stratus buoy (20°S, 85°W) over a longer period, 2001–2009, also show the ORA-IP ensemble has a 16 W m^-2 smaller net heat gain, nearly all of which is due to too much latent cooling caused by differences in the surface winds imposed in ORA-IP.
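As a simple illustration of the global heat-closure diagnostic quoted above, the sketch below computes an area-weighted global mean of a net surface heat flux on a regular latitude-longitude grid. The flux field is synthetic, not ORA-IP output.

```python
import numpy as np

lat = np.arange(-89.5, 90, 1.0)
lon = np.arange(0.5, 360, 1.0)
rng = np.random.default_rng(3)
net_flux = 4.0 + 25.0 * rng.normal(size=(lat.size, lon.size))   # W m^-2, synthetic

# Weight each grid cell by cos(latitude) to approximate its relative area
weights = np.cos(np.deg2rad(lat))[:, None] * np.ones((1, lon.size))
global_mean = np.average(net_flux, weights=weights)
print(f"global-mean net surface flux: {global_mean:.2f} W m^-2")
```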
Abstract:
Objectives: The current study examined younger and older adults' error detection accuracy, prediction calibration, and postdiction calibration on a proofreading task, to determine whether age-related differences would be present in this type of common error detection task. Method: Participants were given text passages and were first asked to predict the percentage of errors they would detect in the passage. They then read the passage and circled errors (which varied in complexity and locality), and made postdictions regarding their performance, before repeating this with another passage and answering a comprehension test of both passages. Results: There were no age-related differences in error detection accuracy, text comprehension, or metacognitive calibration, though participants in both age groups were overconfident overall in their metacognitive judgments. Both groups gave similar ratings of motivation to complete the task. The older adults rated the passages as more interesting than the younger adults did, although this level of interest did not appear to influence error-detection performance. Discussion: The age equivalence in both proofreading ability and calibration suggests that the ability to proofread text passages and the associated metacognitive monitoring used in judging one's own performance are maintained in aging. These age-related similarities persisted when younger adults completed the proofreading tasks on a computer screen rather than with paper and pencil. The findings provide novel insights regarding the influence that cognitive aging may have on metacognitive accuracy and text processing in an everyday task.
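A hedged sketch of the calibration measures described above, with hypothetical numbers: over- or underconfidence is taken as the signed difference between a metacognitive judgment (prediction or postdiction) and actual error-detection accuracy.

```python
def calibration(judgment_pct, detected, total_errors):
    """Signed calibration score: positive = overconfident, negative = underconfident."""
    accuracy_pct = 100.0 * detected / total_errors
    return judgment_pct - accuracy_pct

prediction, postdiction = 80.0, 70.0     # judged % of errors that would be / were found
detected, total_errors = 12, 20          # actual proofreading performance

print(f"prediction calibration:  {calibration(prediction, detected, total_errors):+.1f} points")
print(f"postdiction calibration: {calibration(postdiction, detected, total_errors):+.1f} points")
```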
Abstract:
We have developed an ensemble Kalman filter (EnKF) to estimate 8-day regional surface fluxes of CO2 from space-borne CO2 dry-air mole fraction observations (XCO2) and evaluate the approach using a series of synthetic experiments, in preparation for data from the NASA Orbiting Carbon Observatory (OCO). The 32-day duty cycle of OCO alternates every 16 days between nadir and glint measurements of backscattered solar radiation at short-wave infrared wavelengths. The EnKF uses an ensemble of states to represent the error covariances to estimate 8-day CO2 surface fluxes over 144 geographical regions. We use a 12×8-day lag window, recognising that XCO2 measurements include surface flux information from prior time windows. The observation operator that relates surface CO2 fluxes to atmospheric distributions of XCO2 includes: a) the GEOS-Chem transport model that relates surface fluxes to global 3-D distributions of CO2 concentrations, which are sampled at the time and location of OCO measurements that are cloud-free and have aerosol optical depths <0.3; and b) scene-dependent averaging kernels that relate the CO2 profiles to XCO2, accounting for differences between nadir and glint measurements and the associated scene-dependent observation errors. We show that OCO XCO2 measurements significantly reduce the uncertainties of surface CO2 flux estimates. Glint measurements are generally better at constraining ocean CO2 flux estimates. Nadir XCO2 measurements over the terrestrial tropics are sparse throughout the year because of either clouds or smoke. Glint measurements provide the most effective constraint for estimating tropical terrestrial CO2 fluxes by accurately sampling fresh continental outflow over neighbouring oceans. We also present results from sensitivity experiments that investigate how flux estimates change with 1) biased and unbiased errors, 2) alternative duty cycles, 3) measurement density and correlations, 4) the spatial resolution of the estimated fluxes, and 5) reducing the length of the lag window and the size of the ensemble. During the revision of this manuscript, the OCO instrument failed to reach orbit after its launch on 24 February 2009. The EnKF formulation presented here is also applicable to GOSAT measurements of CO2 and CH4.
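A generic stochastic EnKF analysis step is sketched below as an illustration of the assimilation machinery, not the paper's 144-region, 12×8-day lag-window system: the state is a small flux vector, a random linear operator H stands in for GEOS-Chem plus averaging kernels, and observations are perturbed for each ensemble member.

```python
import numpy as np

rng = np.random.default_rng(4)
n_state, n_obs, n_ens = 6, 4, 50

truth = rng.normal(size=n_state)
H = rng.normal(size=(n_obs, n_state))            # stand-in linear observation operator
R = 0.1 * np.eye(n_obs)                          # observation-error covariance
y = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)

# Prior ensemble, deliberately offset from the truth
ens = truth[:, None] + 1.0 + rng.normal(size=(n_state, n_ens))

Xp = ens - ens.mean(axis=1, keepdims=True)       # state perturbations
Pf = Xp @ Xp.T / (n_ens - 1)                     # sample forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain

# Perturbed observations, one realisation per member
y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
analysis = ens + K @ (y_pert - H @ ens)

print("prior RMSE:    ", np.sqrt(np.mean((ens.mean(axis=1) - truth) ** 2)))
print("posterior RMSE:", np.sqrt(np.mean((analysis.mean(axis=1) - truth) ** 2)))
```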
Abstract:
The common GIS-based approach to regional analyses of soil organic carbon (SOC) stocks and changes is to define geographic layers for which unique sets of driving variables are derived, which include land use, climate, and soils. These GIS layers, with their associated attribute data, can then be fed into a range of empirical and dynamic models. Common methodologies for collating and formatting regional data sets on land use, climate, and soils were adopted for the project Assessment of Soil Organic Carbon Stocks and Changes at National Scale (GEFSOC). This permitted the development of a uniform protocol for handling the various inputs for the dynamic GEFSOC Modelling System. Consistent soil data sets for Amazon-Brazil, the Indo-Gangetic Plains (IGP) of India, Jordan and Kenya, the case study areas considered in the GEFSOC project, were prepared using methodologies developed for the World Soils and Terrain Database (SOTER). The approach involved three main stages: (1) compiling new soil geographic and attribute data in SOTER format; (2) using expert estimates and common sense to fill selected gaps in the measured or primary data; (3) using a scheme of taxonomy-based pedotransfer rules and expert rules to derive soil parameter estimates for similar soil units with missing soil analytical data. The most appropriate approach varied from country to country, depending largely on the overall accessibility and quality of the primary soil data available in the case study areas. The secondary SOTER data sets discussed here are appropriate for a wide range of environmental applications at national scale. These include agro-ecological zoning, land evaluation, modelling of soil C stocks and changes, and studies of soil vulnerability to pollution. Estimates of national-scale stocks of SOC, calculated using SOTER methods, are presented as a first example of database application. Independent estimates of SOC stocks are needed to evaluate the outcome of the GEFSOC Modelling System for current conditions of land use and climate.
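As an example of the kind of database application mentioned above, the sketch below computes a per-layer soil organic carbon stock with the standard depth × bulk density × carbon concentration × stone-correction formula. The layer values are hypothetical, and this is a generic textbook calculation rather than the GEFSOC Modelling System itself.

```python
def soc_stock_t_ha(oc_percent, bulk_density_g_cm3, thickness_cm, gravel_fraction=0.0):
    """SOC stock in t C per hectare for one soil layer (1 g/cm2 = 100 t/ha)."""
    return oc_percent / 100.0 * bulk_density_g_cm3 * thickness_cm * (1.0 - gravel_fraction) * 100.0

layers = [  # (OC %, bulk density g/cm3, thickness cm, gravel fraction)
    (2.1, 1.25, 20, 0.05),
    (0.9, 1.40, 30, 0.10),
    (0.4, 1.50, 50, 0.15),
]
total = sum(soc_stock_t_ha(*layer) for layer in layers)
print(f"0-100 cm SOC stock: {total:.1f} t C/ha")
```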
Abstract:
Temporal and spatial patterns of soil water content affect many soil processes including evaporation, infiltration, ground water recharge, erosion and vegetation distribution. This paper describes the analysis of a soil moisture dataset comprising a combination of continuous time series of measurements at a few depths and locations, and occasional roving measurements at a large number of depths and locations. The objectives of the paper are: (i) to develop a technique for combining continuous measurements of soil water contents at a limited number of depths within a soil profile with occasional measurements at a large number of depths, to enable accurate estimation of the soil moisture vertical pattern and the integrated profile water content; and (ii) to estimate time series of soil moisture content at locations where there are just occasional soil water measurements available and some continuous records from nearby locations. The vertical interpolation technique presented here can strongly reduce errors in the estimation of profile soil water and its changes with time. On the other hand, the temporal interpolation technique is tested for different sampling strategies in space and time, and the errors generated in each case are compared.
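A hedged sketch, not the paper's interpolation scheme: profile water content is obtained by trapezoidal integration of a roving profile over depth, and an occasional profile is scaled to a later date using the continuously monitored depths as an index. All values are hypothetical.

```python
import numpy as np

depths_cm = np.array([5, 15, 30, 60, 100])                  # occasional roving profile depths
theta_profile = np.array([0.32, 0.28, 0.25, 0.22, 0.20])    # volumetric water content

profile_water_mm = np.trapz(theta_profile, depths_cm) * 10  # integrate over depth, cm -> mm
print(f"profile water content: {profile_water_mm:.1f} mm")

# Continuous probes exist only at 5 and 30 cm; on a later date they read:
continuous_now = {5: 0.26, 30: 0.22}
scale = np.mean([continuous_now[d] / theta_profile[list(depths_cm).index(d)]
                 for d in continuous_now])
theta_estimated = theta_profile * scale                     # crude temporal update of the profile
print(f"estimated profile water now: {np.trapz(theta_estimated, depths_cm) * 10:.1f} mm")
```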
Abstract:
The radar scattering properties of realistic aggregate snowflakes have been calculated using the Rayleigh-Gans theory. We find that the effect of the snowflake geometry on the scattering may be described in terms of a single universal function, which depends only on the overall shape of the aggregate and not the geometry or size of the pristine ice crystals which compose the flake. This function is well approximated by a simple analytic expression at small sizes; for larger snowflakes we fit a curve to our numerical data. We then demonstrate how this allows a characteristic snowflake radius to be derived from dual wavelength radar measurements without knowledge of the pristine crystal size or habit, while at the same time showing that this detail is crucial to using such data to estimate ice water content. We also show that the 'effective radius', characterizing the ratio of particle volume to projected area, cannot be inferred from dual wavelength radar data for aggregates. Finally, we consider the errors involved in approximating snowflakes by 'air-ice spheres', and show that for small enough aggregates the predicted dual wavelength ratio typically agrees to within a few percent, provided some care is taken in choosing the radius of the sphere and the dielectric constant of the air-ice mixture; at larger sizes the radar becomes more sensitive to particle shape, and the errors associated with the sphere model are found to increase accordingly.
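A hedged sketch of the dual-wavelength inversion step only: the measured dual-wavelength ratio (DWR) is inverted for a characteristic radius through a hypothetical monotonic DWR(r0) lookup table, which merely stands in for the universal shape function derived in the paper; all numbers are made up to illustrate the mechanics.

```python
import numpy as np

r0_mm = np.linspace(0.2, 5.0, 100)           # candidate characteristic radii
dwr_table_db = 0.4 * r0_mm ** 1.5            # assumed monotonic DWR(r0) relation (illustrative)

z_35ghz_dbz, z_94ghz_dbz = 18.0, 12.5        # hypothetical measured reflectivities
dwr_measured = z_35ghz_dbz - z_94ghz_dbz     # DWR in dB

r0_estimate = np.interp(dwr_measured, dwr_table_db, r0_mm)
print(f"DWR = {dwr_measured:.1f} dB -> characteristic radius ~ {r0_estimate:.2f} mm")
```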
Abstract:
In many practical situations where spatial rainfall estimates are needed, rainfall occurs as a spatially intermittent phenomenon. An efficient geostatistical method for rainfall estimation in the case of intermittency has previously been published and comprises the estimation of two independent components: a binary random function for modeling the intermittency and a continuous random function that models the rainfall inside the rainy areas. The final rainfall estimates are obtained as the product of the estimates of these two random functions. However, the published approach does not contain a method for estimation of uncertainties. The contribution of this paper is the presentation of the indicator maximum likelihood estimator, from which the local conditional distribution of the rainfall value at any location may be derived using an ensemble approach. From the conditional distribution, representations of uncertainty such as the estimation variance and confidence intervals can be obtained. An approximation to the variance can be calculated more simply by assuming rainfall intensity is independent of location within the rainy area. The methodology has been validated using simulated and real rainfall data sets. The results of these case studies show good agreement between predicted uncertainties and measured errors obtained from the validation data.
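A minimal sketch of the product estimator described above (hypothetical local estimates, not the indicator maximum likelihood machinery): rainfall is estimated as the probability of rain times the conditional mean intensity, with an approximate variance obtained under the stated assumption that intensity is independent of the rain/no-rain indicator.

```python
p_rain = 0.6            # estimated probability that the location is inside the rainy area
mean_intensity = 8.0    # conditional mean rainfall inside rainy areas (mm)
var_intensity = 9.0     # conditional variance of intensity (mm^2)

estimate = p_rain * mean_intensity
# Var(I*Y) with I ~ Bernoulli(p) independent of intensity Y:
variance = p_rain * (var_intensity + mean_intensity ** 2) - estimate ** 2

print(f"rainfall estimate: {estimate:.1f} mm, approx. std dev: {variance ** 0.5:.1f} mm")
```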
Abstract:
Aims To investigate the effects of electronic prescribing (EP) on prescribing quality, as indicated by prescribing errors and pharmacists' clinical interventions, in a UK hospital. Methods Prescribing errors and pharmacists' interventions were recorded by the ward pharmacist during a 4 week period both pre- and post-EP, with a second check by the principal investigator. The percentage of new medication orders with a prescribing error and/or pharmacist's intervention was calculated for each study period. Results Following the introduction of EP, there was a significant reduction in both pharmacists' interventions and prescribing errors. Interventions reduced from 73 (3.0% of all medication orders) to 45 (1.9%) (95% confidence interval (CI) for the absolute reduction 0.2, 2.0%), and errors from 94 (3.8%) to 48 (2.0%) (95% CI 0.9, 2.7%). Ten EP-specific prescribing errors were identified. Only 52% of pharmacists' interventions related to a prescribing error pre-EP, and 60% post-EP; only 40% and 56% of prescribing errors resulted in an intervention pre- and post-EP, respectively. Conclusions EP improved the quality of prescribing by reducing both prescribing errors and pharmacists' clinical interventions. Prescribers and pharmacists need to be aware of new types of error with EP, so that they can best target their activities to reduce clinical risk. Pharmacists may need to change the way they work to complement, rather than duplicate, the benefits of EP.
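As a hedged illustration of the interval quoted for the error reduction, the sketch below applies a standard Wald confidence interval for a difference of two proportions to the counts implied by the abstract; the denominators are back-calculated approximations, and this is not necessarily the exact method used in the study.

```python
import math

def diff_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for the difference of two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, d - z * se, d + z * se

# Approximate denominators: 94 errors at 3.8% ~ 2473 orders; 48 at 2.0% ~ 2400 orders
d, lo, hi = diff_ci(94, 2473, 48, 2400)
print(f"absolute reduction: {100 * d:.1f}% (95% CI {100 * lo:.1f}% to {100 * hi:.1f}%)")
```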
Abstract:
Mannitol is a polymorphic pharmaceutical excipient, which commonly exists in three forms: alpha, beta and delta. Each polymorph has a needle-like morphology, which can give preferred orientation effects when analysed by X-ray powder diffractometry (XRPD), thus providing difficulties for quantitative XRPD assessments. The occurrence of preferred orientation may be demonstrated by sample rotation, and the consequent effects on X-ray data can be minimised by reducing the particle size. Using two particle size ranges (less than 125 and 125–500 microns), binary mixtures of beta and delta mannitol were prepared and the delta component was quantified. Samples were assayed in either a static or rotating sampling accessory. Rotation and reducing the particle size range to less than 125 microns halved the limits of detection and quantitation to 1 and 3.6%, respectively. Numerous potential sources of assay errors were investigated; sample packing and mixing errors contributed the greatest source of variation. However, the rotation of samples for both particle size ranges reduced the majority of assay errors examined. This study shows that coupling sample rotation with a particle size reduction minimises preferred orientation effects on assay accuracy, allowing discrimination of two very similar polymorphs at around the 1% level.
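An illustrative sketch with synthetic calibration data, not the study's assay: ICH-style limits of detection and quantitation from a linear calibration of XRPD peak intensity against % delta-mannitol, using LOD = 3.3 s/slope and LOQ = 10 s/slope, where s is the residual standard deviation of the fit.

```python
import numpy as np

pct_delta = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # % delta polymorph (synthetic)
intensity = np.array([1.1, 21.0, 40.2, 61.5, 79.8, 101.3])   # peak area, arbitrary units

slope, intercept = np.polyfit(pct_delta, intensity, 1)
residuals = intensity - (slope * pct_delta + intercept)
s = residuals.std(ddof=2)                                     # residual std dev (n - 2 dof)

print(f"LOD ~ {3.3 * s / slope:.1f}%   LOQ ~ {10 * s / slope:.1f}%")
```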
Abstract:
Following a malicious or accidental atmospheric release in an outdoor environment, it is essential for first responders to ensure safety by identifying areas where human life may be in danger. For this to happen quickly, reliable information is needed on the source strength and location, and the type of chemical agent released. We present here an inverse modelling technique that estimates the source strength and location of such a release, together with the uncertainty in those estimates, using a limited number of concentration measurements from a network of chemical sensors and considering a single, steady, ground-level source. The technique is evaluated using data from a set of dispersion experiments conducted in a meteorological wind tunnel, where simultaneous measurements of concentration time series were obtained in the plume from a ground-level point-source emission of a passive tracer. In particular, we analyze the response to the number of sensors deployed and their arrangement, and to sampling and model errors. We find that the inverse algorithm can generate acceptable estimates of the source characteristics with as few as four sensors, provided these are well placed and the sampling error is controlled. Configurations with at least three sensors in a profile across the plume were found to be superior to other arrangements examined. Analysis of the influence of sampling error due to the use of short averaging times showed that the uncertainty in the source estimates grew as the sampling time decreased, demonstrating that averaging times greater than about 5 min (full-scale time) are needed for acceptable accuracy.
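A hedged sketch of the inversion idea using a simple Gaussian-plume forward model and a grid search, not the paper's algorithm: because concentration is linear in the emission rate, the optimal strength for each candidate source location follows analytically, and the best (location, strength) pair minimises the misfit to the sensor readings. All numbers, including the plume-spread coefficients, are hypothetical.

```python
import numpy as np

def plume(q, xs, ys, rx, ry, u=2.0):
    """Ground-level concentration at receptors (rx, ry) from a source of strength q at (xs, ys)."""
    dx, dy = rx - xs, ry - ys
    c = np.zeros_like(dx, dtype=float)
    down = dx > 0                                  # only receptors downwind see the plume
    sy, sz = 0.08 * dx[down], 0.06 * dx[down]      # assumed plume-spread parameters
    c[down] = q / (np.pi * sy * sz * u) * np.exp(-dy[down] ** 2 / (2 * sy ** 2))
    return c

rx = np.array([60.0, 60.0, 60.0, 120.0])           # four sensor positions (m)
ry = np.array([-10.0, 0.0, 10.0, 5.0])
true_q, true_xs, true_ys = 50.0, 0.0, 2.0
obs = plume(true_q, true_xs, true_ys, rx, ry) * (1 + 0.05 * np.random.default_rng(5).normal(size=4))

best = None
for xs in np.linspace(-20, 40, 31):
    for ys in np.linspace(-10, 10, 21):
        g = plume(1.0, xs, ys, rx, ry)             # unit-strength footprint at the sensors
        if g @ g == 0:
            continue
        q = (g @ obs) / (g @ g)                    # least-squares optimal strength for this location
        misfit = np.sum((obs - q * g) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, xs, ys, q)

print(f"estimated source: x={best[1]:.0f} m, y={best[2]:.1f} m, strength={best[3]:.1f} g/s")
```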