86 results for Three models

in CentAUR: Central Archive University of Reading - UK


Relevance:

100.00%

Publisher:

Abstract:

A series of inquiries and reports suggest considerable failings in the care provided to some patients in the NHS. Although the Bristol Inquiry report of 2001 led to the creation of many new regulatory bodies to supervise the NHS, they have never enjoyed consistent support from government, and the Mid Staffordshire Inquiry in 2013 suggests they made little difference. Why do some parts of the NHS disregard patients’ interests, and how should we respond to the challenge? The following discusses the evolution of approaches to NHS governance through the Hippocratic, Managerial and Commercial models, and assesses their risks and benefits. Apart from the ethical imperative, the need for effective governance is driven both by the growth in information available to the public and the resources wasted by ineffective systems of care. Appropriate solutions depend on an understanding of the perverse incentives inherent in each model and the need for greater sensitivity to the voices of patients and the public.

Relevance:

80.00%

Publisher:

Abstract:

Canopy interception of incident precipitation is a critical component of the forest water balance during each of the four seasons. Models have been developed to predict precipitation interception from standard meteorological variables because of acknowledged difficulty in extrapolating direct measurements of interception loss from forest to forest. No known study has compared and validated canopy interception models for a leafless deciduous forest stand in the eastern United States. Interception measurements from an experimental plot in a leafless deciduous forest in northeastern Maryland (39°42'N, 75°5'W) for 11 rainstorms in winter and early spring 2004/05 were compared to predictions from three models. The Mulder model maintains a moist canopy between storms. The Gash model requires few input variables and is formulated for a sparse canopy. The WiMo model optimizes the canopy storage capacity for the maximum wind speed during each storm. All models showed marked underestimates and overestimates for individual storms when the measured ratio of interception to gross precipitation was far more or less, respectively, than the specified fraction of canopy cover. The models predicted the percentage of total gross precipitation (PG) intercepted to within the probable standard error (8.1%) of the measured value: the Mulder model overestimated the measured value by 0.1% of PG; the WiMo model underestimated by 0.6% of PG; and the Gash model underestimated by 1.1% of PG. The WiMo model’s advantage over the Gash model indicates that the canopy storage capacity increases logarithmically with the maximum wind speed. This study has demonstrated that dormant-season precipitation interception in a leafless deciduous forest may be satisfactorily predicted by existing canopy interception models.
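
The abstract does not reproduce the WiMo formulation, but the reported logarithmic dependence of canopy storage capacity on maximum wind speed can be sketched as follows; the function name and parameter values are hypothetical, chosen only to illustrate the shape of the relationship.

```python
import numpy as np

def wimo_storage_capacity(u_max, s_min=0.3, b=0.25):
    """Hypothetical sketch of the WiMo idea: canopy storage capacity S (mm)
    grows logarithmically with the storm's maximum wind speed u_max (m/s).
    s_min and b are illustrative fit parameters, not values from the paper."""
    return s_min + b * np.log1p(u_max)

# Illustration: storage capacity for calm vs. windy storms
for u in (1.0, 5.0, 15.0):
    print(f"u_max = {u:4.1f} m/s -> S = {wimo_storage_capacity(u):.2f} mm")
```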

Relevance:

80.00%

Publisher:

Abstract:

The performance of three urban land surface models, run in offline mode with their default external parameters, is evaluated for two distinctly different sites in Helsinki: Torni and Kumpula. The former is a dense city centre site with 22% vegetation, while the latter is a suburban site with over 50% vegetation. At both locations the models are compared against sensible and latent heat fluxes measured using the eddy covariance technique, along with snow depth observations. The cold climate experienced by the city causes strong seasonal variations that include snow cover and stable atmospheric conditions. Most of the time the three models are able to account for the differences between the study areas as well as the seasonal and diurnal variability of the energy balance components. However, performance is not consistent across the modelled components, seasons and surface types. The net all-wave radiation is well simulated, with the greatest uncertainties related to snowmelt timing, when the fraction of snow cover has a key role, particularly in determining the surface albedo. For the turbulent fluxes, more variation between the models is seen, which can partly be explained by the different methods used in their calculation and partly by surface parameter values. For the sensible heat flux, simulation of wintertime values was the main problem, which also led to issues in predicting near-surface stability, particularly at the dense city centre site. All models have the most difficulty in simulating the latent heat flux. This study particularly emphasizes that improvements are needed in the parameterization of anthropogenic heat flux and thermal parameters in winter, snow cover in spring and evapotranspiration, in order to improve surface energy balance modelling in cold climate cities.
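
As a reference for the quantities being compared, the sketch below checks closure of the urban surface energy balance to which these components belong; the function and the example values are illustrative, not output from any of the three models.

```python
def energy_balance_residual(q_star, q_f, q_h, q_e, delta_q_s):
    """Residual of the urban surface energy balance (all terms in W/m^2):
    Q* + Q_F = Q_H + Q_E + dQ_S. A well-closed simulation has a small residual.
    Inputs here are illustrative scalars; in practice they are time series."""
    return (q_star + q_f) - (q_h + q_e + delta_q_s)

# Example with made-up midday values for a dense city-centre site
print(energy_balance_residual(q_star=450.0, q_f=30.0,
                              q_h=250.0, q_e=80.0, delta_q_s=140.0))
```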

Relevance:

70.00%

Publisher:

Abstract:

We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a “random” model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulated net primary production (NPP) is too high relative to independent measurements. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
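
The exact metrics are not given in the abstract, but the comparison it describes, scoring a model against both the observations' mean value and a bootstrap-resampled "random" model, can be sketched with a normalised mean error; the metric choice, function names and synthetic data below are assumptions.

```python
import numpy as np

def nme(model, obs):
    """Normalised mean error: 0 is perfect; 1 matches a model that always
    predicts the observed mean. One plausible benchmarking metric; the
    paper's exact definitions are not reproduced here."""
    return np.abs(model - obs).sum() / np.abs(obs - obs.mean()).sum()

rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=1.5, size=500)       # stand-in "observations"
model = obs + rng.normal(0.0, 0.5, size=500)          # stand-in "simulation"

score_model = nme(model, obs)
score_mean = nme(np.full_like(obs, obs.mean()), obs)  # mean-value null score: 1.0
score_random = np.mean([nme(rng.choice(obs, size=obs.size), obs)
                        for _ in range(200)])         # bootstrap "random" model
print(score_model, score_mean, score_random)
```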

Relevance:

70.00%

Publisher:

Abstract:

We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). In general, the SDBM performs better than either of the DGVMs. It reproduces independent measurements of net primary production (NPP) but underestimates the amplitude of the observed CO2 seasonal cycle. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.

Relevance:

70.00%

Publisher:

Abstract:

The contraction of a species’ distribution range, which results from the extirpation of local populations, generally precedes its extinction. Therefore, understanding drivers of range contraction is important for conservation and management. Although there are many processes that can potentially lead to local extirpation and range contraction, three main null models have been proposed: demographic, contagion, and refuge. The first two models postulate that the probability of local extirpation for a given area depends on its relative position within the range; but these models generate distinct spatial predictions because they assume either a ubiquitous (demographic) or a clinal (contagion) distribution of threats. The third model (refuge) postulates that extirpations are determined by the intensity of human impacts, leading to heterogeneous spatial predictions potentially compatible with those made by the other two null models. A few previous studies have explored the generality of some of these null models, but we present here the first comprehensive evaluation of all three models. Using descriptive indices and regression analyses, we contrast the predictions made by each of the null models using empirical spatial data describing range contraction in 386 terrestrial vertebrates (mammals, birds, amphibians, and reptiles) distributed across the world. Observed contraction patterns do not consistently conform to the predictions of any of the three models, suggesting that these may not be adequate null models to evaluate range contraction dynamics among terrestrial vertebrates. Instead, our results support alternative null models that account for both relative position and intensity of human impacts. These new models provide a better multifactorial baseline to describe range contraction patterns in vertebrates. This general baseline can be used to explore how additional factors influence contraction, and ultimately extinction, for particular areas or species, as well as to predict future changes in light of current and new threats.
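
A minimal sketch of the three null models' contrasting spatial predictions, assuming simple illustrative functional forms (uniform, clinal, and impact-driven extirpation probability); nothing here reproduces the paper's actual indices or regressions.

```python
import numpy as np

def extirpation_probability(rel_pos, human_impact, model):
    """Illustrative extirpation probabilities under the three null models.
    rel_pos: relative position within the range (0 = core, 1 = edge).
    human_impact: normalised impact intensity in [0, 1].
    Functional forms are hypothetical stand-ins for each model's assumption."""
    if model == "demographic":          # threats ubiquitous: uniform probability
        return np.full_like(rel_pos, 0.5)
    if model == "contagion":            # clinal threats: rises toward one edge
        return rel_pos
    if model == "refuge":               # driven by intensity of human impacts
        return human_impact
    raise ValueError(model)

pos = np.linspace(0.0, 1.0, 5)
impact = np.array([0.9, 0.2, 0.7, 0.1, 0.4])  # made-up impact surface
for m in ("demographic", "contagion", "refuge"):
    print(m, extirpation_probability(pos, impact, m))
```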

Relevance:

60.00%

Publisher:

Abstract:

We examine the motion of the ground surface on the Soufriere Hills Volcano, Montserrat between 1998 and 2000 using radar interferometry (InSAR). To minimise the effects of variable atmospheric water vapour on the InSAR measurements we use independently-derived measurements of the radar path delay from six continuous GPS receivers. The surfaces providing a measurable interferometric signal are those on pyroclastic flow deposits, mainly emplaced in 1997. Three types of surface motion can be discriminated. Firstly, the surfaces of thick, valley-filling deposits subsided at rates that decreased from 120-150 mm/year in the year after emplacement to 30-50 mm/year two years later. This must be due to contraction and settling effects during cooling. The second type is the near-field motion localised within about one kilometre of the dome. Both subsidence and uplift events are seen, and though the former could be due to surface gravitational effects, the latter may reflect shallow (< 1 km) pressurisation effects within the conduit/dome. Far-field motions of the surface away from the deeply buried valleys are interpreted as crustal strains. Because the flux of magma to the surface stopped from March 1998 to November 1999 and then resumed from November 1999 through 2000, we use InSAR data from these two periods to test the crustal strain behaviour of three models of magma supply: open, depleting and unbalanced. The InSAR observations of strain gradients of 75-80 mm/year/km uplift during the period of quiescence on the western side of the volcano are consistent with an unbalanced model in which magma supply into a crustal magma chamber continues during quiescence, raising chamber pressure that is then released upon resumption of effusion. GPS motion vectors agree qualitatively with the InSAR displacements but are of smaller magnitude. The discrepancy may be due to inaccurate compensation for atmospheric delays in the InSAR data.
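
The "unbalanced" interpretation can be illustrated with the standard elastic-chamber relation between continued supply and pressure rise; this relation and all parameter values below are generic assumptions, not quantities inferred for Soufriere Hills.

```python
def chamber_pressure_rise(q_in_m3s, duration_s, v_chamber_m3, bulk_modulus_pa):
    """Pressure increase in an elastic crustal magma chamber that keeps
    receiving magma while effusion is halted ("unbalanced" supply):
    dP = K * (Q_in * t) / V_chamber. Standard elastic relation; the values
    used below are illustrative, not those inferred for this volcano."""
    return bulk_modulus_pa * (q_in_m3s * duration_s) / v_chamber_m3

# Example: 0.5 m^3/s supplied for ~20 months into a 4 km^3 chamber, K = 10 GPa
dp = chamber_pressure_rise(0.5, 20 * 30 * 86400.0, 4e9, 1e10)
print(f"pressure rise ~ {dp / 1e6:.1f} MPa")
```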

Relevance:

60.00%

Publisher:

Abstract:

This paper investigates the impact of aerosol forcing uncertainty on the robustness of estimates of the twentieth-century warming attributable to anthropogenic greenhouse gas emissions. Attribution analyses on three coupled climate models with very different sensitivities and aerosol forcing are carried out. The Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3), Parallel Climate Model (PCM), and GFDL R30 models all provide good simulations of twentieth-century global mean temperature changes when they include both anthropogenic and natural forcings. Such good agreement could result from a fortuitous cancellation of errors, for example, by balancing too much (or too little) greenhouse warming by too much (or too little) aerosol cooling. Despite a very large uncertainty for estimates of the possible range of sulfate aerosol forcing obtained from measurement campaigns, results show that the spatial and temporal nature of observed twentieth-century temperature change constrains the component of past warming attributable to anthropogenic greenhouse gases to be significantly greater (at the 5% level) than the observed warming over the twentieth century. The cooling effects of aerosols are detected in all three models. Both spatial and temporal aspects of observed temperature change are responsible for constraining the relative roles of greenhouse warming and sulfate cooling over the twentieth century. This is because there are distinctive temporal structures in differential warming rates between the hemispheres, between land and ocean, and between mid- and low latitudes. As a result, consistent estimates of warming attributable to greenhouse gas emissions are obtained from all three models, and predictions are relatively robust to the use of more or less sensitive models. The transient climate response following a 1% yr⁻¹ increase in CO2 is estimated to lie between 2.2 and 4 K century⁻¹ (5th-95th percentiles).
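
Attribution analyses of this kind are commonly framed as a regression of observed change onto model-simulated response patterns; the sketch below shows that idea with plain least squares on synthetic data, omitting the noise-covariance weighting a full analysis would use, and none of its numbers come from the paper.

```python
import numpy as np

# Minimal sketch of the regression idea behind attribution: observed change y
# is modelled as a sum of scaled fingerprints (e.g. GHG, aerosol, natural).
# Full analyses weight by internal-variability covariance; plain OLS here is
# a simplification. All numbers are synthetic.
rng = np.random.default_rng(1)
n = 120                                     # e.g. space-time data points
X = rng.normal(size=(n, 3))                 # columns: simulated response patterns
beta_true = np.array([1.2, 0.8, 1.0])       # scaling factors to recover
y = X @ beta_true + rng.normal(0.0, 0.3, size=n)   # synthetic "observations"

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated scaling factors:", beta_hat.round(2))
# A factor consistent with 1 means the model's response amplitude fits the data.
```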

Relevance:

60.00%

Publisher:

Abstract:

Three existing models of Interplanetary Coronal Mass Ejection (ICME) transit between the Sun and the Earth are compared to coronagraph and in situ observations: all three models are found to perform with a similar level of accuracy (i.e. an average error between observed and predicted 1 AU transit times of approximately 11 h). To improve long-term space weather prediction, factors influencing CME transit are investigated. Both the removal of the plane-of-sky projection (as suffered by coronagraph-derived speeds of Earth-directed CMEs) and the use of observed values of solar wind speed fail to significantly improve transit time prediction. However, a correlation is found to exist between the late/early arrival of an ICME and the width of the preceding sheath region, suggesting that the error is a geometrical effect that can only be removed by a more accurate determination of a CME's trajectory and expansion. The correlation between magnetic field intensity and speed of ejecta at 1 AU is also investigated. It is found to be weak in the body of the ICME, but strong in the sheath, if the upstream solar wind conditions are taken into account.
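
For scale, the simplest possible transit-time baseline is a constant-speed extrapolation from Sun to Earth; the three models compared in the paper are not reproduced here, and this sketch only illustrates the size of the quantity being predicted.

```python
AU_KM = 1.496e8  # astronomical unit in km

def constant_speed_transit_hours(v_km_s):
    """Naive constant-speed baseline for Sun-to-Earth ICME transit time.
    Real models add corrections (e.g. drag toward the solar wind speed);
    this baseline just shows the scale of the prediction problem."""
    return AU_KM / v_km_s / 3600.0

for v in (400.0, 800.0, 1500.0):
    print(f"{v:6.0f} km/s -> {constant_speed_transit_hours(v):5.1f} h")
```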

Relevance:

60.00%

Publisher:

Abstract:

The skill of numerical Lagrangian drifter trajectories in three numerical models is assessed by comparing these numerically obtained paths to the trajectories of drifting buoys in the real ocean. The skill assessment is performed using the two-sample Kolmogorov–Smirnov statistical test. To demonstrate the assessment procedure, it is applied to three different models of the Agulhas region. The test can either be performed using crossing positions of one-dimensional sections, in order to test model performance in specific locations, or using the total two-dimensional data set of trajectories. The test yields four quantities: a binary decision of model skill, a confidence level which can be used as a measure of goodness-of-fit of the model, a test statistic which can be used to determine the sensitivity of the confidence level, and cumulative distribution functions that aid in the qualitative analysis. The ordering of models by their confidence levels is the same as the ordering based on the qualitative analysis, which suggests that the method is suited for model validation. Only one of the three models, a 1/10° two-way nested regional ocean model, might have skill in the Agulhas region. The other two models, a 1/2° global model and a 1/8° assimilative model, might have skill only on some sections in the region.
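
A minimal sketch of the described test using SciPy's two-sample Kolmogorov–Smirnov implementation on section-crossing positions; the crossing samples are synthetic stand-ins for real buoy and model trajectories.

```python
import numpy as np
from scipy.stats import ks_2samp

# Sketch of the skill test: compare where simulated and observed drifters
# cross a one-dimensional section (e.g. crossing latitudes along a meridian).
# Synthetic samples stand in for real trajectory crossings.
rng = np.random.default_rng(2)
obs_crossings = rng.normal(loc=-36.0, scale=1.5, size=200)    # drifting buoys
model_crossings = rng.normal(loc=-36.5, scale=1.8, size=300)  # numerical drifters

stat, p_value = ks_2samp(model_crossings, obs_crossings)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
# Binary skill decision at a chosen significance level, e.g. 5%:
print("model has skill" if p_value > 0.05 else "no skill on this section")
```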

Relevance:

60.00%

Publisher:

Abstract:

Estimation of population size with a missing zero-class is an important problem that is encountered in epidemiological assessment studies. Fitting a Poisson model to the observed data by the method of maximum likelihood and estimating the population size based on this fit is an approach that has been widely used for this purpose. In practice, however, the Poisson assumption is seldom satisfied. Zelterman (1988) has proposed a robust estimator for unclustered data that works well in a wide class of distributions applicable for count data. In the work presented here, we extend this estimator to clustered data. The estimator requires fitting a zero-truncated homogeneous Poisson model by maximum likelihood and then using a Horvitz-Thompson estimator of population size. This was found to work well when the data follow the hypothesized homogeneous Poisson model. However, when the true distribution deviates from the hypothesized model, the population size was found to be underestimated. In search of a more robust estimator, we focused on three models that use all clusters with exactly one case, those with exactly two cases, and those with exactly three cases to estimate the probability of the zero-class and thereby use data collected on all the clusters in the Horvitz-Thompson estimator of population size. The loss in efficiency associated with the gain in robustness was examined in a simulation study. As a trade-off between gain in robustness and loss in efficiency, the model that uses data collected on clusters with at most three cases to estimate the probability of the zero-class was found to be preferred in general. In applications, we recommend obtaining estimates from all three models and making a choice by weighing the estimates against robustness and the loss in efficiency.
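
A minimal sketch of the baseline approach described here, assuming the zero-truncated Poisson fit followed by the Horvitz-Thompson step; the function names and the synthetic example are illustrative, and the clustered extensions are not reproduced.

```python
import numpy as np
from scipy.optimize import brentq

def zt_poisson_mle(counts):
    """MLE of lambda for a zero-truncated Poisson: solves
    lambda / (1 - exp(-lambda)) = sample mean of the observed (>0) counts."""
    xbar = np.mean(counts)
    f = lambda lam: lam / (1.0 - np.exp(-lam)) - xbar
    return brentq(f, 1e-8, 100.0)

def horvitz_thompson_n(counts):
    """Population size estimate N-hat = n / (1 - p0), where
    p0 = exp(-lambda-hat) is the estimated probability of the zero-class."""
    lam = zt_poisson_mle(counts)
    p0 = np.exp(-lam)
    return len(counts) / (1.0 - p0)

# Synthetic example: zero-truncated sample from a Poisson(0.8) population
rng = np.random.default_rng(3)
full = rng.poisson(0.8, size=1000)
observed = full[full > 0]                # zero-class unobserved
print(f"observed n = {len(observed)}, N-hat = {horvitz_thompson_n(observed):.0f}")
```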

Relevance:

60.00%

Publisher:

Abstract:

Varroa destructor is a parasitic mite of the Eastern honeybee Apis cerana. Fifty years ago, two distinct evolutionary lineages (Korean and Japanese) invaded the Western honeybee Apis mellifera. This haplo-diploid parasite species reproduces mainly through brother-sister matings, a system which largely favors the fixation of new mutations. In a worldwide sample of 225 individuals from 21 locations collected on Western honeybees and analyzed at 19 microsatellite loci, a series of de novo mutations was observed. Using historical data concerning the invasion, this original biological system has been exploited to compare three mutation models with allele size constraints for microsatellite markers: the stepwise (SMM) and generalized (GSM) mutation models, and a model with mutation rate increasing exponentially with microsatellite length (ESM). Posterior probabilities of the three models have been estimated for each locus individually using reversible jump Markov chain Monte Carlo. The relative support of each model varies widely among loci, but the GSM is the only model that always receives at least 9% support, whatever the locus. The analysis also provides robust estimates of mutation parameters for each locus and of the divergence time of the two invasive lineages (67,000 generations, with a 90% credibility interval of 35,000-174,000). With an average of 10 generations per year, this divergence time fits with the last post-glacial Korea-Japan land separation.
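
The three competing mutation models can be sketched as different step-size and rate rules for a single evolving allele; the functional forms and parameter values below are illustrative assumptions, not the paper's fitted models or its reversible jump MCMC machinery.

```python
import numpy as np

rng = np.random.default_rng(4)

def mutate(length, model, mu=1e-3, p_geom=0.5, alpha=0.05):
    """One generation of microsatellite mutation for an allele of given repeat
    length, under illustrative versions of the three models. mu, p_geom and
    alpha are hypothetical parameters, not estimates from the paper."""
    if model == "SMM":                    # stepwise: +/-1 repeat at rate mu
        rate, step = mu, 1
    elif model == "GSM":                  # generalized: geometric step size
        rate, step = mu, rng.geometric(p_geom)
    elif model == "ESM":                  # rate grows exponentially with length
        rate, step = mu * np.exp(alpha * length), 1
    else:
        raise ValueError(model)
    if rng.random() < rate:
        length += step if rng.random() < 0.5 else -step
    return max(length, 1)                 # crude allele size constraint

# Evolve one allele for 10,000 generations under each model
for m in ("SMM", "GSM", "ESM"):
    L = 20
    for _ in range(10_000):
        L = mutate(L, m)
    print(m, "final length:", L)
```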

Relevance:

60.00%

Publisher:

Abstract:

In the reliability literature, maintenance time is usually ignored during the optimization of maintenance policies. In some scenarios, costs due to system failures may vary with time, and ignoring maintenance time leads to unrealistic results. This paper develops maintenance policies for such situations, where the system under study alternates between two successive states: up and down. The costs due to system failure at the up state consist of both business losses and maintenance costs, whereas those at the down state include only maintenance costs. We consider three models: A, B, and C. Model A performs only corrective maintenance (CM). Model B performs sequential imperfect preventive maintenance (PM) as well as CM. Model C performs periodic PM as well as CM; this PM can restore the system to a state as good as that just after the latest CM. The CM in this paper is imperfect repair. Finally, the impact of these maintenance policies is illustrated through numerical examples.
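
A minimal sketch of Model A under common renewal-reward assumptions (exponential lifetimes, fixed repair time, constant costs); the distributions and cost figures are hypothetical, and Models B and C are not reproduced.

```python
import numpy as np

def model_a_cost_rate(mtbf, repair_time, cm_cost, loss_rate,
                      n_cycles=100_000, seed=5):
    """Monte Carlo sketch of Model A (corrective maintenance only): the system
    alternates up (exponential lifetime) and down (fixed repair time). Each
    failure incurs a CM cost plus business losses accrued over the downtime.
    Long-run cost rate = total cost / total time (renewal-reward argument).
    All distributions and cost figures are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    up_times = rng.exponential(mtbf, size=n_cycles)
    cycle_cost = cm_cost + loss_rate * repair_time
    return n_cycles * cycle_cost / (up_times.sum() + n_cycles * repair_time)

# Example: MTBF 500 h, 20 h repairs, 1000 per CM, 50/h business loss
print(f"cost rate ~ {model_a_cost_rate(500.0, 20.0, 1000.0, 50.0):.2f} per hour")
```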

Relevance:

60.00%

Publisher:

Abstract:

Interpretation of ambiguity is consistently associated with anxiety in children; however, the temporal relationship between interpretation and anxiety remains unclear, as do the developmental origins of interpretative biases. This study set out to test a model of the development of interpretative biases in a prospective study of 110 children aged 5–9 years. Children and their parents were assessed three times, annually, on measures of anxiety and interpretation of ambiguous scenarios (including, for parents, both their own interpretations and their expectations regarding their child). Three models were constructed to assess associations between parent and child anxiety and threat and distress cognitions and expectancies. All three models were a reasonable fit to the data, and supported the conclusions that: (i) children’s threat and distress cognitions were stable over time and were significantly associated with anxiety, (ii) parents’ threat and distress cognitions and expectancies significantly predicted child threat cognitions at some time points, and (iii) parental anxiety significantly predicted parents’ cognitions, which predicted parental expectancies at some time points. Parental expectancies were also significantly predicted by child cognitions. The findings varied depending on the assessment time point and on whether threat or distress cognitions were being considered. The findings support the notion that child and parent cognitive processes, in particular parental expectations, may be a useful target in the treatment or prevention of anxiety disorders in children.

Relevance:

60.00%

Publisher:

Abstract:

A simple and coherent framework for partitioning uncertainty in multi-model climate ensembles is presented. The analysis of variance (ANOVA) is used to decompose a measure of total variation additively into scenario uncertainty, model uncertainty and internal variability. This approach requires fewer assumptions than existing methods and can easily be used to quantify uncertainty related to model-scenario interaction: the contribution to model uncertainty arising from the variation across scenarios of model deviations from the ensemble mean. Uncertainty in global mean surface air temperature is quantified as a function of lead time for a subset of the Coupled Model Intercomparison Project phase 3 ensemble, and results largely agree with those published by other authors: scenario uncertainty dominates beyond 2050 and internal variability remains approximately constant over the 21st century. Both elements of model uncertainty, due to scenario-independent and scenario-dependent deviations from the ensemble mean, are found to increase with time. Estimates of model deviations that arise as by-products of the framework reveal significant differences between models that could lead to a deeper understanding of the sources of uncertainty in multi-model ensembles. For example, three models show diverging patterns of deviation over the 21st century, while another model exhibits an unusually large variation among its scenario-dependent deviations.
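
A minimal sketch of the additive partition on a synthetic ensemble indexed by model, scenario and initial-condition run; the variance measures below are simplified stand-ins for the paper's definitions.

```python
import numpy as np

# Sketch of an ANOVA-style partition of ensemble spread at one lead time.
# y[m, s, r]: projected change for model m, scenario s, initial-condition run r.
# Synthetic numbers; the paper's exact variance measure is not reproduced here.
rng = np.random.default_rng(6)
M, S, R = 5, 3, 4
model_eff = rng.normal(0.0, 0.4, size=(M, 1, 1))       # scenario-independent deviations
scen_eff = np.array([1.0, 2.0, 3.0]).reshape(1, S, 1)  # scenario means
interact = rng.normal(0.0, 0.15, size=(M, S, 1))       # scenario-dependent deviations
y = scen_eff + model_eff + interact + rng.normal(0.0, 0.2, size=(M, S, R))

run_mean = y.mean(axis=2)          # average out internal variability
scen_mean = run_mean.mean(axis=0)  # ensemble mean per scenario

internal = y.var(axis=2).mean()    # within-(model, scenario) spread
scenario = scen_mean.var()         # spread of scenario ensemble means
model_u = (run_mean - scen_mean).var(axis=0).mean()  # model deviations (incl. interaction)
print(f"scenario={scenario:.3f} model={model_u:.3f} internal={internal:.3f}")
```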