29 results for ERROR rates


Relevance: 30.00%

Abstract:

Dissolution rates were calculated for a range of grain sizes of anorthite and biotite dissolved under far-from-equilibrium conditions at pH 3 and T = 20 °C. Dissolution rates were normalized to initial and final BET surface area, geometric surface area, mass and (for biotite only) geometric edge surface area. For biotite, constant (within error) dissolution rates were obtained only by normalizing to initial BET surface area. The normalizing term that gave the smallest variation about the mean for anorthite was initial BET surface area. In field studies, only the current (final) surface area is measurable. In this study, final geometric surface area gave the smallest variation for anorthite dissolution rates, and final geometric edge surface area for biotite dissolution rates.
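The normalization at issue divides a bulk dissolution rate by a measure of reactive surface area. A minimal sketch of the geometric variant, assuming smooth spherical grains (the usual geometric approximation) and hypothetical values throughout:

```python
# Sketch: normalizing a bulk dissolution rate by geometric surface area,
# assuming smooth spherical grains. All numeric values are hypothetical.
RHO_ANORTHITE = 2760.0   # grain density, kg/m^3
d = 100e-6               # grain diameter, m
mass = 1e-3              # sample mass, kg
bulk_rate = 5e-12        # bulk dissolution rate, mol/s (hypothetical)

# Specific geometric surface area of a sphere: A/m = 6 / (rho * d)
ssa_geometric = 6.0 / (RHO_ANORTHITE * d)      # m^2/kg
area = ssa_geometric * mass                    # m^2

rate_normalized = bulk_rate / area             # mol m^-2 s^-1
print(f"geometric SSA = {ssa_geometric:.2f} m^2/kg")
print(f"normalized rate = {rate_normalized:.2e} mol m^-2 s^-1")
```

BET normalization works the same way, but with the gas-adsorption surface area (which includes surface roughness and internal porosity) in place of the sphere estimate.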

Relevance: 30.00%

Abstract:

New data show that island arc rocks have (Pb-210/Ra-226)(o) ratios ranging from as low as 0.24 up to 2.88. In contrast, (Ra-228/Th-232) always appears to be within error of 1, suggesting that the large Ra-226 excesses observed in arc rocks were generated more than 30 years ago. This places a maximum estimate on melt ascent velocities of around 4000 m/year and provides further confidence that the Ra-226 excesses reflect deep (source) processes rather than shallow-level alteration or seawater contamination. Conversely, partial melting must have occurred more than 30 years prior to eruption. The Pb-210 deficits are most readily explained by protracted magma degassing. Using published numerical models, the data suggest that degassing occurred continuously for periods of up to several decades just prior to eruption, but no link with eruption periodicity was found. Longer periods are required if degassing is discontinuous, less than 100% efficient, or if magma is recharged or stored after degassing. The long durations suggest much of this degassing occurs at depth, with implications for the formation of hydrothermal and copper-porphyry systems. A suite of lavas erupted in 1985–1986 from Sangeang Api volcano in the Sunda arc is characterised by deficits of Pb-210 relative to Ra-226, from which 6–8 years of continuous Rn-222 degassing would be inferred from recent numerical models. These data also form a linear (Pb-210)/Pb versus (Ra-226)/Pb array which might be interpreted as a 71-year isochron. However, the array passes through the origin, suggesting displacement downwards from the equiline in response to degassing, and so the slope of the array is inferred not to have any age significance. Simple modelling shows that the range of (Ra-226)/Pb ratios requires thousands of years to develop, consistent with differentiation occurring in response to cooling at the base of the crust. Thus, degassing post-dated, and was not responsible for, magma differentiation. The formation, migration and extraction of gas bubbles must be extremely efficient in mafic magma, whereas the higher viscosity of more siliceous magmas retards the process and can lead to Pb-210 excesses. A possible negative correlation between (Pb-210/Ra-226)(o) and SO2 emission rate requires further testing but may have implications for future eruptions.
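For orientation, the timescale arithmetic behind such inferences can be sketched in the simplest end-member, where degassing removes Rn-222 completely and continuously, so that unsupported Pb-210 simply decays away. The published models cited in the abstract treat partial and discontinuous degassing and are more involved; the ratio below is hypothetical:

```python
# Sketch: inferring a degassing duration from a 210Pb deficit under the
# end-member assumption of complete, continuous 222Rn loss, in which case
# (210Pb/226Ra)(t) = exp(-lambda_210 * t).
import math

T_HALF_PB210 = 22.3                    # half-life of 210Pb, years
lam = math.log(2) / T_HALF_PB210       # decay constant, 1/yr

ratio_observed = 0.80                  # hypothetical (210Pb/226Ra) activity ratio
t = -math.log(ratio_observed) / lam
print(f"inferred continuous degassing duration ~ {t:.1f} years")  # ~7.2 years
```

A ratio near 0.8 yields roughly 7 years, the same order as the 6–8 years quoted for Sangeang Api.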

Relevance: 30.00%

Abstract:

The third episode of lava dome growth at Soufrière Hills Volcano began on 1 August 2005 and ended on 20 April 2007. Volumes of the dome and talus produced were measured using a photo-based method with a calibrated camera for increased accuracy. The total dense rock equivalent (DRE) volume of extruded andesite magma (306 ± 51 Mm³) was similar within error to that produced in the earlier episodes, but the average extrusion rate of 5.6 ± 0.9 m³ s⁻¹ (DRE) was higher than in the previous episodes. Extrusion rates varied in a pulsatory manner from <0.5 m³ s⁻¹ to ∼20 m³ s⁻¹. On 18 May 2006, the lava dome had reached a volume of 85 Mm³ DRE; it was removed in its entirety during a massive dome collapse on 20 May 2006. Extrusion began again almost immediately and built a dome of 170 Mm³ DRE with a summit height of 1047 m above sea level by 4 April 2007. There were few moderate-sized dome collapses (1–10 Mm³) during this extrusive episode, in contrast to the first episode of dome growth in 1995–1998, when they were numerous. The first and third episodes of dome growth showed a similar pattern of low (<0.5 m³ s⁻¹) but increasing magma flux during the early stages, with steady high flux after extrusion of ∼25 Mm³.
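The quoted average extrusion rate follows directly from the total DRE volume and the episode duration; a quick consistency check:

```python
# Sketch: consistency check on the average extrusion rate,
# dividing the total DRE volume by the episode duration.
from datetime import date

volume_dre = 306e6                                        # m^3 (306 Mm^3 DRE)
duration_s = (date(2007, 4, 20) - date(2005, 8, 1)).days * 86400

print(f"{volume_dre / duration_s:.1f} m^3/s")             # 5.6, matching 5.6 +/- 0.9
```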

Relevance: 30.00%

Abstract:

Threshold error correction models are used to analyse the term structure of interest rates. The paper develops and uses a generalisation of existing models that encompasses both the Band and Equilibrium threshold models of Balke and Fomby (1997, Threshold cointegration, Int Econ Rev 38(3):627–645), and estimates this model using a Bayesian approach. Evidence is found for threshold effects in pairs of longer rates but not in pairs of short rates. The Band threshold model is supported in preference to the Equilibrium model.
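In the Band threshold model favoured here, deviations from the long-run equilibrium between two rates are left uncorrected inside a band around equilibrium and mean-revert only outside it. A minimal simulation sketch of that mechanism (parameter values hypothetical; the paper's Bayesian estimation is not shown):

```python
# Sketch of a Band threshold error-correction mechanism for a cointegrating
# deviation z_t (e.g. the spread between two interest rates): inside the band
# |z| <= tau there is no correction; outside it, z mean-reverts.
import numpy as np

rng = np.random.default_rng(0)
tau, rho = 0.5, 0.8          # band half-width, outside-band AR coefficient
z = np.zeros(1000)
for t in range(1, len(z)):
    eps = rng.normal(scale=0.1)
    if abs(z[t - 1]) <= tau:
        z[t] = z[t - 1] + eps            # random walk inside the band
    else:
        z[t] = rho * z[t - 1] + eps      # error correction outside the band

print(f"share of time inside band: {np.mean(np.abs(z) <= tau):.2f}")
```

The Equilibrium variant instead applies (a possibly different strength of) correction in both regimes; the paper's generalisation nests both cases.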

Relevance: 30.00%

Abstract:

Exact error estimates for evaluating multidimensional integrals are considered. An estimate is called exact if the rates of convergence of the lower- and upper-bound estimates coincide. An algorithm with such an exact rate is called optimal: it has an unimprovable rate of convergence. The problem of the existence of exact estimates and optimal algorithms is discussed for some functional spaces that define the regularity of the integrand. Data classes important for practical computations are considered: classes of functions with bounded derivatives and with Hölder-type conditions. The aim of the paper is to analyse the performance of two optimal classes of algorithms, deterministic and randomized, for computing multidimensional integrals. It is also shown how the smoothness of the integrand can be exploited to construct better randomized algorithms.
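The deterministic/randomized contrast can be seen in a small experiment: for a smooth integrand, a tensor-product midpoint rule with n total nodes converges like n^(-2/d), while plain Monte Carlo converges like n^(-1/2) independently of dimension, so randomization wins once d is large. A sketch with an illustrative integrand (not taken from the paper):

```python
# Sketch: tensor-product midpoint rule vs plain Monte Carlo on [0,1]^d,
# with the same budget of function evaluations. Illustrative integrand.
import numpy as np
from itertools import product

def f(x):                                   # exact integral over [0,1]^d is 1
    return np.prod(np.pi / 2 * np.sin(np.pi * x), axis=-1)

d, m = 6, 4                                 # dimension, nodes per axis
nodes_1d = (np.arange(m) + 0.5) / m
grid = np.array(list(product(nodes_1d, repeat=d)))
midpoint = f(grid).mean()                   # midpoint rule with m**d nodes

rng = np.random.default_rng(1)
mc = f(rng.random((m**d, d))).mean()        # Monte Carlo with the same budget

print(f"midpoint error: {abs(midpoint - 1):.2e}")   # ~1e-1 at d = 6
print(f"MC error:       {abs(mc - 1):.2e}")         # ~1e-2
```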

Relevance: 30.00%

Abstract:

A key strategy to improve the skill of quantitative predictions of precipitation, as well as of hazardous weather such as severe thunderstorms and flash floods, is to exploit observations of convective activity (e.g. from radar). In this paper, a convection-permitting ensemble prediction system (EPS), aimed at addressing the problems of forecasting localized weather events with relatively short predictability time scales and based on a 1.5 km grid-length version of the Met Office Unified Model, is presented. Particular attention is given to the impact of using predicted observations of radar-derived precipitation intensity in the ensemble transform Kalman filter (ETKF) used within the EPS. Our initial results, based on a 24-member ensemble of forecasts for two summer case studies, show that the convective-scale EPS produces fairly reliable forecasts of temperature, horizontal winds and relative humidity at 1 h lead time, as is evident from inspection of rank histograms. On the other hand, the rank histograms also seem to show that the EPS generates too much spread for forecasts of (i) surface pressure and (ii) surface precipitation intensity. This may indicate that, for (i), the surface pressure observation error standard deviation used to generate the rank histograms is too large, while (ii) may be the result of non-Gaussian precipitation observation errors. However, further investigation is needed to better understand these findings. Finally, the inclusion of predicted observations of precipitation from radar in the 24-member EPS considered in this paper does not seem to improve the 1 h lead-time forecast skill.
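Rank histograms, the diagnostic used above, are built by ranking each observation within the sorted ensemble forecasts: a flat histogram suggests a reliable ensemble, a U shape under-dispersion, and a dome shape over-dispersion (too much spread). A minimal sketch with synthetic data standing in for EPS output:

```python
# Sketch: building a rank histogram (Talagrand diagram) for a 24-member
# ensemble against observations. Synthetic data replace real EPS output.
import numpy as np

rng = np.random.default_rng(2)
n_cases, n_members = 5000, 24

forecasts = rng.normal(size=(n_cases, n_members))
observations = rng.normal(size=n_cases)     # drawn from the same distribution

# Rank of each observation within its sorted ensemble (0 .. n_members)
ranks = (forecasts < observations[:, None]).sum(axis=1)
hist = np.bincount(ranks, minlength=n_members + 1)
print(hist / hist.sum())                    # approx. uniform, ~1/25 per bin
```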

Relevance: 30.00%

Abstract:

Background: Medication errors in general practice are an important source of potentially preventable morbidity and mortality. Building on previous descriptive, qualitative and pilot work, we sought to investigate the effectiveness, cost-effectiveness and likely generalisability of a complex pharmacist-led IT-based intervention aiming to improve prescribing safety in general practice.

Objectives: We sought to:

• Test the hypothesis that a pharmacist-led IT-based complex intervention using educational outreach and practical support is more effective than simple feedback in reducing the proportion of patients at risk from errors in prescribing and medicines management in general practice.
• Conduct an economic evaluation of the cost per error avoided, from the perspective of the National Health Service (NHS).
• Analyse data recorded by pharmacists, summarising the proportions of patients judged to be at clinical risk, the actions recommended by pharmacists, and the actions completed in the practices.
• Explore the views and experiences of healthcare professionals and NHS managers concerning the intervention; investigate potential explanations for the observed effects; and inform decisions on the future roll-out of the pharmacist-led intervention.
• Examine secular trends in the outcome measures of interest, allowing for informal comparison between trial practices and practices that did not participate in the trial but contributed to the QRESEARCH database.

Methods: Two-arm cluster randomised controlled trial of 72 English general practices with embedded economic analysis and longitudinal descriptive and qualitative analysis. Informal comparison of the trial findings with a national descriptive study investigating secular trends, undertaken using data from practices contributing to the QRESEARCH database. The main outcomes of interest were prescribing errors and medication monitoring errors at six and 12 months following the intervention.

Results: Participants in the pharmacist intervention arm practices were significantly less likely to have been prescribed a non-selective NSAID without a proton pump inhibitor (PPI) if they had a history of peptic ulcer (OR 0.58, 95% CI 0.38 to 0.89), to have been prescribed a beta-blocker if they had asthma (OR 0.73, 95% CI 0.58 to 0.91) or (in those aged 75 years and older) to have been prescribed an ACE inhibitor or diuretic without a measurement of urea and electrolytes in the last 15 months (OR 0.51, 95% CI 0.34 to 0.78). The economic analysis suggests that the PINCER pharmacist intervention has a 95% probability of being cost-effective if the decision-maker's ceiling willingness to pay reaches £75 (6 months) or £85 (12 months) per error avoided. The intervention addressed an issue that was important to professionals and their teams and was delivered in a way that was acceptable to practices, with minimum disruption of normal work processes. Comparison of the trial findings with changes seen in QRESEARCH practices indicated that any reductions achieved in the simple feedback arm were likely, in the main, to have been related to secular trends rather than to the intervention.

Conclusions: Compared with simple feedback, the pharmacist-led intervention resulted in reductions in the proportions of patients at risk of prescribing and monitoring errors for the primary outcome measures and the composite secondary outcome measures at six months and (with the exception of the NSAID/peptic ulcer outcome measure) 12 months post-intervention. The intervention is acceptable to pharmacists and practices, and is likely to be seen as cost-effective by decision makers.
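For readers unfamiliar with the reporting above, an odds ratio and its Wald 95% confidence interval are derived from a 2×2 table of at-risk counts. The sketch below uses hypothetical counts, not the trial's data, and ignores the adjustment for clustering that a cluster randomised trial requires:

```python
# Sketch: odds ratio and Wald 95% CI from a 2x2 table (hypothetical counts).
import math

a, b = 40, 160    # intervention arm: at risk / not at risk (hypothetical)
c, d = 70, 130    # control arm:      at risk / not at risk (hypothetical)

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```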

Relevance: 30.00%

Abstract:

In this paper, we extend to the time-harmonic Maxwell equations the p-version analysis technique developed in [R. Hiptmair, A. Moiola and I. Perugia, Plane wave discontinuous Galerkin methods for the 2D Helmholtz equation: analysis of the p-version, SIAM J. Numer. Anal., 49 (2011), 264-284] for Trefftz-discontinuous Galerkin approximations of the Helmholtz problem. While error estimates in a mesh-skeleton norm are derived parallel to the Helmholtz case, the derivation of estimates in a mesh-independent norm requires new twists in the duality argument. The particular case where the local Trefftz approximation spaces are built of vector-valued plane wave functions is considered, and convergence rates are derived.
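For context, the vector-valued plane waves referred to are the standard Trefftz functions for the constant-coefficient time-harmonic Maxwell operator (this construction is textbook material, not quoted from the paper):

```latex
% A vector plane wave with propagation direction d on the unit sphere and
% polarization a orthogonal to d:
\mathbf{w}(\mathbf{x}) = \mathbf{a}\, e^{\mathrm{i} k\, \mathbf{d}\cdot\mathbf{x}},
\qquad |\mathbf{d}| = 1, \qquad \mathbf{a}\cdot\mathbf{d} = 0 .
```

Each such w satisfies curl curl w − k² w = 0 and div w = 0 exactly, so a local space spanned by finitely many directions (with two independent polarizations per direction) consists entirely of homogeneous Maxwell solutions, which is the Trefftz property the method exploits.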

Relevance: 30.00%

Abstract:

An analysis of the attribution of past and future changes in stratospheric ozone and temperature to anthropogenic forcings is presented. The analysis is an extension of the study of Shepherd and Jonsson (2008) who analyzed chemistry-climate simulations from the Canadian Middle Atmosphere Model (CMAM) and attributed both past and future changes to changes in the external forcings, i.e. the abundances of ozone-depleting substances (ODS) and well-mixed greenhouse gases. The current study is based on a new CMAM dataset and includes two important changes. First, we account for the nonlinear radiative response to changes in CO2. It is shown that over centennial time scales the radiative response in the upper stratosphere to CO2 changes is significantly nonlinear and that failure to account for this effect leads to a significant error in the attribution. To our knowledge this nonlinearity has not been considered before in attribution analysis, including multiple linear regression studies. For the regression analysis presented here the nonlinearity was taken into account by using CO2 heating rate, rather than CO2 abundance, as the explanatory variable. This approach yields considerable corrections to the results of the previous study and can be recommended to other researchers. Second, an error in the way the CO2 forcing changes are implemented in the CMAM was corrected, which significantly affects the results for the recent past. As the radiation scheme, based on Fomichev et al. (1998), is used in several other models we provide some description of the problem and how it was fixed.
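The substitution described above (a heating-rate regressor in place of abundance) can be illustrated with a toy regression. Here the nonlinear radiative response is mimicked by a logarithmic function of CO2, a common approximation used only as a stand-in; the paper used actual CMAM heating rates, and all data below are synthetic:

```python
# Sketch: when the response is nonlinear in CO2 abundance, regressing on a
# heating-rate-like regressor (here log CO2) restores a good linear fit.
import numpy as np

rng = np.random.default_rng(3)
co2 = np.linspace(280, 1120, 200)                    # ppmv, synthetic scenario
response = 5.35 * np.log(co2 / 280) + rng.normal(scale=0.05, size=co2.size)

def r2(x, y):
    X = np.column_stack([x, np.ones_like(x)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"R^2 vs abundance:     {r2(co2, response):.4f}")        # noticeably < 1
print(f"R^2 vs log-abundance: {r2(np.log(co2), response):.4f}") # ~1
```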

Relevance: 30.00%

Abstract:

Simulations of ozone loss rates using a three-dimensional chemical transport model and a box model during recent Antarctic and Arctic winters are compared with experimentally derived loss rates. The study focuses on the Antarctic winter 2003, during which the first Antarctic Match campaign was organized, and on the Arctic winters 1999/2000 and 2002/2003. The maximum ozone loss rates retrieved by the Match technique for the winters and levels studied reached 6 ppbv/sunlit hour, and both types of simulation could generally reproduce the observations at the 2-sigma level. In some cases, for example for the Arctic winter 2002/2003 at the 475 K level, excellent agreement within 1 sigma was obtained. An overestimation was also found with the box model simulation at some isentropic levels for the Antarctic winter and the Arctic winter 1999/2000, indicating an overestimation of chlorine activation in the model. Loss rates in the Antarctic show signs of saturation in September, which has to be considered in the comparison. Sensitivity tests were performed with the box model to assess the impact of the kinetic parameters of the ClO-Cl2O2 catalytic cycle and of the total bromine content on the ozone loss rate. These tests resulted in a maximum change in ozone loss rates of 1.2 ppbv/sunlit hour, generally under high solar zenith angle conditions. In some cases a better agreement was achieved with faster photolysis of Cl2O2 and an additional source of total inorganic bromine, but at the expense of overestimating the smaller ozone loss rates derived later in the winter.

Relevance: 30.00%

Abstract:

This paper considers the effect of short- and long-term interest rates, and of interest rate spreads, upon real estate index returns in the UK. Using Johansen's vector autoregressive framework, it is found that the real estate index cointegrates with the term spread, but not with the short or long rates themselves. Granger causality tests indicate that movements in short-term interest rates and in the spread cause movements in the returns series. However, decomposition of the forecast error variances from VAR models indicates that changes in these variables can explain only a small proportion of the overall variability of the returns, and that the effect has fully worked through after two months. The results suggest that these financial variables could potentially be used as leading indicators for real estate markets, with corresponding implications for return predictability.
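A sketch of this workflow in Python with statsmodels, using simulated placeholder series rather than the UK data analysed in the paper:

```python
# Sketch: Johansen cointegration test between an index and a term spread,
# followed by a Granger causality test. Series here are simulated placeholders.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(4)
n = 500
spread = np.cumsum(rng.normal(size=n))            # I(1) "term spread"
re_index = 0.5 * spread + rng.normal(size=n)      # cointegrated with the spread

data = np.column_stack([re_index, spread])
res = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", res.lr1)               # compare against res.cvt
print("critical values (90/95/99%):", res.cvt)

# Does the spread Granger-cause the index? (tests column 2 -> column 1)
grangercausalitytests(data, maxlag=2)
```

Forecast error variance decomposition, the third step in the paper, is available from a fitted statsmodels VAR via its `fevd` method.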

Relevance: 30.00%

Abstract:

We examine a method recently proposed by Hinich and Patterson (mimeo, University of Texas at Austin, 1995) for testing the validity of specifying a GARCH error structure for financial time series data in the context of a set of ten daily Sterling exchange rates. The results demonstrate that there are statistical structures present in the data that cannot be captured by a GARCH model, or any of its variants. This result has important implications for the interpretation of the recent voluminous literature which attempts to model financial asset returns using this family of models.
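The Hinich-Patterson bicorrelation test is not available in standard Python libraries, so the sketch below substitutes a simpler diagnostic in the same spirit: fit a GARCH(1,1) with the arch package, then test the standardized residuals for structure the model failed to capture (here via a Ljung-Box test on their squares). The data are synthetic stand-ins for Sterling returns:

```python
# Sketch: GARCH(1,1) fit plus a residual diagnostic for remaining structure.
# The Ljung-Box check stands in for the Hinich-Patterson bicorrelation test.
import numpy as np
from arch import arch_model
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(5)
returns = rng.standard_t(df=6, size=2000)        # placeholder for GBP returns

res = arch_model(returns, vol="Garch", p=1, q=1).fit(disp="off")
std_resid = res.resid / res.conditional_volatility

# Remaining serial dependence in squared standardized residuals?
print(acorr_ljungbox(std_resid**2, lags=[10]))
```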

Relevance: 30.00%

Abstract:

This paper forecasts daily Sterling exchange rate returns using various naive, linear and non-linear univariate time-series models. The accuracy of the forecasts is evaluated using mean squared error and sign prediction criteria. These show only a very modest improvement over forecasts generated by a random walk model. The Pesaran-Timmermann test and a comparison with artificially generated forecasts show that even the best models display no evidence of market timing ability.
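The two headline criteria, mean squared error and sign prediction, can be computed as follows; the data are synthetic stand-ins for daily Sterling returns, and the Pesaran-Timmermann statistic itself is not reproduced here:

```python
# Sketch: MSE and sign-prediction evaluation of a naive constant-mean
# benchmark versus an AR(1) forecast, on synthetic daily returns.
import numpy as np

rng = np.random.default_rng(6)
r = rng.normal(scale=0.006, size=2001)        # synthetic daily returns

actual = r[1:]
naive = np.full_like(actual, r[:-1].mean())   # constant-mean benchmark
phi = np.corrcoef(r[:-1], r[1:])[0, 1]        # in-sample AR(1) coefficient
ar1 = phi * r[:-1]

for name, f in [("naive mean", naive), ("AR(1)", ar1)]:
    mse = np.mean((actual - f) ** 2)
    hit = np.mean(np.sign(f) == np.sign(actual))
    print(f"{name:10s} MSE={mse:.2e}  sign hit rate={hit:.3f}")
```

On serially uncorrelated data both models land near a 0.5 sign hit rate, which mirrors the paper's finding of only modest gains over the random walk.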

Relevance: 30.00%

Abstract:

In recent years an increasing number of papers have employed meta-analysis to integrate the effect sizes of a researcher's own series of studies within a single paper ("internal meta-analysis"). Although this approach has the obvious advantage of yielding narrower confidence intervals, we show that it can inadvertently inflate false-positive rates if researchers are motivated to use internal meta-analysis in order to obtain a significant overall effect. Specifically, if one decides whether to stop or to run a further replication experiment depending on the significance of an internal meta-analysis, false-positive rates increase beyond the nominal level. We conducted a set of Monte Carlo simulations to demonstrate this argument and provide a literature review to gauge awareness and prevalence of the issue. Finally, we offer several recommendations for the use of internal meta-analysis when judging statistical significance.
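The argument can be reproduced with a short Monte Carlo in the spirit of the simulations described (a sketch, not the paper's exact design): under the null, studies are added one at a time and pooled with a fixed-effect meta-analysis, stopping as soon as the pooled result is significant.

```python
# Sketch: optional stopping via internal meta-analysis inflates false positives.
# Equal-n studies with known unit variance; fixed-effect pooling after each study.
import numpy as np

rng = np.random.default_rng(7)
n_sims, max_studies, n_per_study = 5000, 5, 50

false_positives = 0
for _ in range(n_sims):
    effects = []
    for _ in range(max_studies):
        x = rng.normal(0.0, 1.0, n_per_study)        # true effect is zero
        effects.append(x.mean())
        # Pooled z for k equal-n studies: mean of means / (1 / sqrt(k * n))
        k = len(effects)
        z = np.sum(effects) * np.sqrt(n_per_study) / np.sqrt(k)
        if abs(z) > 1.96:                            # stop at "significance"
            false_positives += 1
            break

print(f"false-positive rate: {false_positives / n_sims:.3f}")  # well above 0.05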