890 results for the EFQM excellence model
Abstract:
Development research has responded to a number of charges over the past few decades. For example, when traditional research was accused of being 'top-down', the response was participatory research, linking the 'receptors' to the generators of research. As participatory processes were recognised as producing limited outcomes, the demand-led agenda was born. In response to the alleged failure of research to deliver its products, the 'joined-up' model, which links research with the private sector, has become popular. However, using examples from animal-health research, this article demonstrates that all the aforementioned approaches are seriously limited in their attempts to generate outputs to address the multi-faceted problems facing the poor. The article outlines a new approach to research: the Mosaic Model. By combining different knowledge forms, and focusing on existing gaps, the model aims to bridge basic and applied findings to enhance the efficiency and value of research, past, present, and future.
A hierarchical Bayesian model for predicting the functional consequences of amino-acid polymorphisms
Abstract:
Genetic polymorphisms in deoxyribonucleic acid coding regions may have a phenotypic effect on the carrier, e.g. by influencing susceptibility to disease. Detection of deleterious mutations via association studies is hampered by the large number of candidate sites; therefore methods are needed to narrow down the search to the most promising sites. For this, a possible approach is to use structural and sequence-based information of the encoded protein to predict whether a mutation at a particular site is likely to disrupt the functionality of the protein itself. We propose a hierarchical Bayesian multivariate adaptive regression spline (BMARS) model for supervised learning in this context and assess its predictive performance by using data from mutagenesis experiments on lac repressor and lysozyme proteins. In these experiments, about 12 amino-acid substitutions were performed at each native amino-acid position and the effect on protein functionality was assessed. The training data thus consist of repeated observations at each position, which the hierarchical framework is needed to account for. The model is trained on the lac repressor data and tested on the lysozyme mutations and vice versa. In particular, we show that the hierarchical BMARS model, by allowing for the clustered nature of the data, yields lower out-of-sample misclassification rates compared with both a BMARS and a frequentist MARS model, a support vector machine classifier and an optimally pruned classification tree.
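(MARS-family models such as the BMARS model above are built from piecewise-linear "hinge" basis functions of the form max(0, ±(x − t)). A minimal Python sketch of evaluating such a basis expansion, with knots and coefficients chosen purely for illustration; the hierarchical, position-level structure described in the abstract is not reproduced here.)

```python
import numpy as np

def hinge(x, knot, sign=+1):
    """Piecewise-linear MARS basis function: max(0, sign * (x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

# Illustrative MARS-style fit with a mirrored pair of basis functions:
# f(x) = b0 + b1 * max(0, x - 2.0) + b2 * max(0, 2.0 - x)
x = np.linspace(0.0, 4.0, 9)
b0, b1, b2 = 1.0, 0.5, -0.3          # made-up coefficients
f = b0 + b1 * hinge(x, 2.0, +1) + b2 * hinge(x, 2.0, -1)
print(f)
```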
Abstract:
In this paper we focus on the one-year-ahead prediction of the electricity peak-demand daily trajectory during the winter season in Central England and Wales. We define a Bayesian hierarchical model for predicting the winter trajectories and present results based on the past observed weather. Thanks to the flexibility of the Bayesian approach, we are able to produce the marginal posterior distributions of all the predictands of interest. This is fundamental progress with respect to classical methods. The results are encouraging in both skill and representation of uncertainty. Further extensions are straightforward, at least in principle; the two main ones consist of conditioning the weather generator model on additional information, such as knowledge of the first part of the winter and/or the seasonal weather forecast.
Abstract:
A model in the existing literature comparing the inventory costs of purchasing under the economic order quantity (EOQ) system and the just-in-time (JIT) purchasing system concluded that JIT purchasing was virtually always the preferable inventory ordering system, especially at high levels of annual demand. By expanding the classical EOQ model, this paper shows that it is possible for the EOQ system to be more cost effective than the JIT system once the inventory demand approaches the EOQ-JIT cost indifference point. A case study conducted in the ready-mixed concrete industry in Singapore supported this proposition.
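(For reference, the classical EOQ quantities the comparison builds on are Q* = sqrt(2DS/H) and a minimum annual ordering-plus-holding cost of sqrt(2DSH). A minimal Python sketch follows, with a simplified per-unit JIT premium p standing in for the paper's cost model; all numbers are illustrative, not from the study.)

```python
import math

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Classical economic order quantity: Q* = sqrt(2 D S / H)."""
    return math.sqrt(2.0 * annual_demand * order_cost / holding_cost_per_unit)

def eoq_total_cost(annual_demand, order_cost, holding_cost_per_unit):
    """Minimum annual ordering-plus-holding cost at Q*: sqrt(2 D S H)."""
    return math.sqrt(2.0 * annual_demand * order_cost * holding_cost_per_unit)

def indifference_demand(order_cost, holding_cost_per_unit, jit_premium_per_unit):
    """Demand D* at which sqrt(2 D S H) equals a per-unit JIT premium p * D,
    i.e. D* = 2 S H / p**2 (a simplified indifference-point sketch)."""
    return 2.0 * order_cost * holding_cost_per_unit / jit_premium_per_unit ** 2

D, S, H = 50_000, 100.0, 2.5         # illustrative demand and cost parameters
print(eoq(D, S, H), eoq_total_cost(D, S, H), indifference_demand(S, H, 0.1))
```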
Abstract:
Our objective in this study was to develop and implement an effective intervention strategy to manipulate the amount and composition of dietary fat and carbohydrate (CHO) in free-living individuals in the RISCK study. The study was a randomized, controlled dietary intervention study conducted in 720 participants identified as at higher risk for or with metabolic syndrome. All followed a 4-wk run-in reference diet [high saturated fatty acids (SF)/high glycemic index (GI)]. Volunteers were randomized to continue this diet for a further 24 wk or to 1 of 4 isoenergetic prescriptions [high monounsaturated fatty acids (MUFA)/high GI; high MUFA/low GI; low fat (LF)/high GI; and LF/low GI]. We developed a food exchange model to implement each diet. Dietary records and plasma phospholipid fatty acids were used to assess the effectiveness of the intervention strategy. Reported fat intake from the LF diets was significantly reduced to 28% of energy (%E), compared with 38%E from the HM and reference diets. SF intake was successfully decreased to ~10%E in the HM and LF diets, compared with 17%E in the reference diet (P = 0.001). Dietary MUFA in the HM diets was ~17%E, significantly higher than in the reference (12%E) and LF diets (10%E) (P = 0.001). Changes in plasma phospholipid fatty acids provided further evidence for the successful manipulation of fat intake. The GI of the HGI and LGI arms differed by ~9 points (P = 0.001). The food exchange model provided an effective dietary strategy for the design and implementation across multiple sites of 5 experimental diets with specific targets for the proportion of fat and CHO. J. Nutr. 139: 1534-1540, 2009.
Abstract:
In addition to projected increases in global mean sea level over the 21st century, model simulations suggest there will also be changes in the regional distribution of sea level relative to the global mean. There is a considerable spread in the projected patterns of these changes by current models, as shown by the recent Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment (AR4). This spread has not reduced from that given by the Third Assessment models. Comparison with projections by ensembles of models based on a single structure supports an earlier suggestion that models of similar formulation give more similar patterns of sea level change. Analysing an AR4 ensemble of model projections under a business-as-usual scenario shows that steric changes (associated with subsurface ocean density changes) largely dominate the sea level pattern changes. The relative importance of subsurface temperature or salinity changes in contributing to this differs from region to region and, to an extent, from model to model. In general, thermosteric changes give the spatial variations in the Southern Ocean, halosteric changes dominate in the Arctic and strong compensation between thermosteric and halosteric changes characterises the Atlantic. The magnitude of sea level and component changes in the Atlantic appears to be linked to the amount of Atlantic meridional overturning circulation (MOC) weakening. When the MOC weakening is substantial, the Atlantic thermosteric patterns of change arise from a dominant role of ocean advective heat flux changes.
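(Steric sea-level change of the kind discussed above can be decomposed into thermosteric and halosteric parts using a linearized equation of state, Δρ ≈ ρ0(−αΔT + βΔS). A rough Python sketch of that decomposition; the coefficients and anomaly profiles are illustrative assumptions, not values from the assessment.)

```python
import numpy as np

rho0, alpha, beta = 1027.0, 2.0e-4, 7.6e-4   # illustrative seawater values
z = np.linspace(0.0, 2000.0, 201)            # depth grid (m)
dz = z[1] - z[0]
dT = 0.5 * np.exp(-z / 500.0)                # assumed warming profile (K)
dS = -0.05 * np.exp(-z / 800.0)              # assumed freshening profile (psu)

# Steric height change: d_eta = -integral(d_rho / rho0) dz
#                             = integral(alpha * dT - beta * dS) dz
thermosteric = np.sum(alpha * dT) * dz       # rise from warming (m)
halosteric = np.sum(-beta * dS) * dz         # rise from freshening (m)
print(thermosteric, halosteric, thermosteric + halosteric)
```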
Abstract:
A Bayesian Model Averaging approach to the estimation of lag structures is introduced, and applied to assess the impact of R&D on agricultural productivity in the US from 1889 to 1990. Lag and structural break coefficients are estimated using a reversible jump algorithm that traverses the model space. In addition to producing estimates and standard deviations for the coefficients, the probability that a given lag (or break) enters the model is estimated. The approach is extended to select models populated with Gamma distributed lags of different frequencies. Results are consistent with the hypothesis that R&D positively drives productivity. Gamma lags are found to retain their usefulness in imposing a plausible structure on lag coefficients, and their role is enhanced through the use of model averaging.
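(A gamma lag of the kind referred to above weights lag l in proportion to a gamma density, imposing a smooth, hump-shaped lag profile. A minimal Python sketch; the shape/rate values and the R&D series are made up for illustration.)

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_lag_weights(n_lags, shape, rate):
    """Weights proportional to the gamma density at lags 1..n_lags,
    normalized to sum to one."""
    lags = np.arange(1, n_lags + 1, dtype=float)
    dens = rate**shape * lags**(shape - 1.0) * np.exp(-rate * lags) / gamma_fn(shape)
    return dens / dens.sum()

# Distributed-lag contribution of a (stand-in) R&D series to productivity:
w = gamma_lag_weights(n_lags=30, shape=4.0, rate=0.3)
rd_series = np.random.default_rng(0).lognormal(size=60)  # hypothetical R&D spending
effect_t = np.convolve(rd_series, w)[:60]                # lagged R&D contribution
print(w[:5], effect_t[-1])
```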
Abstract:
We compared output from 3 dynamic process-based models (DMs: ECOSSE, MILLENNIA and the Durham Carbon Model) and 9 bioclimatic envelope models (BCEMs; including BBOG ensemble and PEATSTASH) ranging from simple threshold to semi-process-based models. Model simulations were run at 4 British peatland sites using historical climate data and climate projections under a medium (A1B) emissions scenario from the 11-RCM (regional climate model) ensemble underpinning UKCP09. The models showed that blanket peatlands are vulnerable to projected climate change; however, predictions varied between models as well as between sites. All BCEMs predicted a shift from presence to absence of a climate associated with blanket peat, where the sites with the lowest total annual precipitation were closest to the presence/absence threshold. DMs showed a more variable response. ECOSSE predicted a decline in net C sink and shift to net C source by the end of this century. The Durham Carbon Model predicted a smaller decline in the net C sink strength, but no shift to net C source. MILLENNIA predicted a slight overall increase in the net C sink. In contrast to the BCEM projections, the DMs predicted that the sites with coolest temperatures and greatest total annual precipitation showed the largest change in carbon sinks. In this model inter-comparison, the greatest variation in model output in response to climate change projections was not between the BCEMs and DMs but between the DMs themselves, because of different approaches to modelling soil organic matter pools and decomposition amongst other processes. The difference in the sign of the response has major implications for future climate feedbacks, climate policy and peatland management. Enhanced data collection, in particular monitoring peatland response to current change, would significantly improve model development and projections of future change.
Abstract:
We test the response of the Oxford-RAL Aerosol and Cloud (ORAC) retrieval algorithm for MSG SEVIRI to changes in the aerosol properties used in the dust aerosol model, using data from the Dust Outflow and Deposition to the Ocean (DODO) flight campaign in August 2006. We find that using the observed DODO free-tropospheric aerosol size distribution and refractive index increases simulated top-of-atmosphere radiance at 0.55 µm by 10–15%, assuming a fixed aerosol optical depth of 0.5, with the maximum difference at low solar zenith angles. We test the sensitivity of the retrieval to the vertical distribution of the aerosol and find that this is unimportant in determining simulated radiance at 0.55 µm. We also test the ability of the ORAC retrieval, as used to produce the GlobAerosol dataset, to correctly identify continental aerosol outflow from the African continent, and we find that it poorly constrains aerosol speciation. We develop spatially and temporally resolved prior distributions of aerosols to inform the retrieval, incorporating five aerosol models: desert dust, maritime, biomass burning, urban and continental. We use a Saharan Dust Index and the GEOS-Chem chemistry transport model to describe dust and biomass-burning aerosol outflow, and compare AOD using our speciation against the GlobAerosol retrieval during January and July 2006. We find AOD discrepancies of 0.2–1 over regions of intense biomass-burning outflow, where AOD from our aerosol speciation and GlobAerosol speciation can differ by as much as 50–70%.
Abstract:
We examine whether a three-regime model that allows for dormant, explosive and collapsing speculative behaviour can explain the dynamics of the S&P 500. We extend existing models of speculative behaviour by including a third regime that allows a bubble to grow at a steady rate, and propose abnormal volume as an indicator of the probable time of bubble collapse. We also examine the financial usefulness of the three-regime model by studying a trading rule formed using inferences from it, whose use leads to higher Sharpe ratios and end-of-period wealth than existing models or a buy-and-hold strategy.
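(The financial evaluation above rests on the Sharpe ratio, i.e. mean excess return divided by its standard deviation, and on end-of-period wealth. A minimal Python sketch of that comparison; the simulated returns and the regime signal are stand-ins for inferences from the three-regime model.)

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=12):
    """Annualized Sharpe ratio: mean excess return over its standard deviation."""
    excess = np.asarray(returns) - risk_free
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

rng = np.random.default_rng(1)
market = rng.normal(0.006, 0.04, size=240)   # stand-in monthly index returns
# A regime-based rule might exit when the inferred collapse probability is high;
# here a made-up binary signal switches the strategy in or out of the market.
in_market = rng.random(240) > 0.2            # hypothetical regime signal
strategy = np.where(in_market, market, 0.0)

print(sharpe_ratio(market), sharpe_ratio(strategy))
print((1 + market).prod(), (1 + strategy).prod())  # end-of-period wealth
```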
Abstract:
The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalizations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts.
Abstract:
We describe the HadGEM2 family of climate configurations of the Met Office Unified Model, MetUM. The concept of a model "family" comprises a range of specific model configurations incorporating different levels of complexity but with a common physical framework. The HadGEM2 family of configurations includes atmosphere and ocean components, with and without a vertical extension to include a well-resolved stratosphere, and an Earth-System (ES) component which includes dynamic vegetation, ocean biology and atmospheric chemistry. The HadGEM2 physical model includes improvements designed to address specific systematic errors encountered in the previous climate configuration, HadGEM1, namely Northern Hemisphere continental temperature biases and tropical sea surface temperature biases and poor variability. Targeting these biases was crucial in order that the ES configuration could represent important biogeochemical climate feedbacks. Detailed descriptions and evaluations of particular HadGEM2 family members are included in a number of other publications, and the discussion here is limited to a summary of the overall performance using a set of model metrics which compare the way in which the various configurations simulate present-day climate and its variability.
Abstract:
Using the virtual porous carbon model proposed by Harris et al., we study the effect of carbon surface oxidation on the pore size distribution (PSD) curve determined from simulated Ar, N2 and CO2 isotherms. It is assumed that surface oxidation is not destructive to the carbon skeleton, and that all pores are accessible to the studied molecules (i.e., only the effect of the change in surface chemical composition is studied). The results show two important things: oxidation of the carbon surface changes the absolute porosity (calculated from the geometric method of Bhattacharya and Gubbins (BG)) only very slightly; however, the PSD curves calculated from simulated isotherms are, to a greater or lesser extent, affected by the presence of surface oxides. The most reliable results are obtained from Ar adsorption data. Not only is adsorption of this adsorbate practically independent of the presence of surface oxides but, more importantly, for this molecule one can apply the slit-like pore model as a first approach to recover the average pore diameter of a real carbon structure. For nitrogen, the effect of carbon surface chemical composition is observed due to the quadrupole moment of this molecule, and this effect shifts the PSD curves compared with Ar. The largest differences are seen for CO2, and it is clearly demonstrated that the PSD curves obtained from adsorption isotherms of this molecule contain artificial peaks and the average pore diameter is strongly influenced by the presence of electrostatic adsorbate-adsorbate as well as adsorbate-adsorbent interactions.
Abstract:
Variations in the Atlantic Meridional Overturning Circulation (MOC) exert an important influence on climate, particularly on decadal time scales. Simulation of the MOC in coupled climate models is compromised, to a degree that is unknown, by their lack of fidelity in resolving some of the key processes involved. There is an overarching need to increase the resolution and fidelity of climate models, but also to assess how increases in resolution influence the simulation of key phenomena such as the MOC. In this study we investigate the impact of significantly increasing the (ocean and atmosphere) resolution of a coupled climate model on the simulation of MOC variability by comparing high and low resolution versions of the same model. In both versions, decadal variability of the MOC is closely linked to density anomalies that propagate from the Labrador Sea southward along the deep western boundary. We demonstrate that the MOC adjustment proceeds more rapidly in the higher resolution model due to the increased speed of western boundary waves. However, the response of Atlantic Sea Surface Temperatures (SSTs) to MOC variations is relatively robust, in pattern if not in magnitude, across the two resolutions. The MOC also excites a coupled ocean-atmosphere response in the tropical Atlantic in both model versions. In the higher resolution model, but not the lower resolution model, there is evidence of a significant response in the extratropical atmosphere over the North Atlantic 6 years after a maximum in the MOC. In both models there is evidence of a weak negative feedback on deep density anomalies in the Labrador Sea, and hence on the MOC (with a time scale of approximately ten years). Our results highlight the need for further work to understand the decadal variability of the MOC and its simulation in climate models.
Abstract:
We present an analysis of the oceanic heat advection and its variability in the upper 500 m in the southeastern tropical Pacific (100°W–75°W, 25°S–10°S) as simulated by the global coupled model HiGEM, which has one of the highest resolutions currently used in long-term integrations. The simulated climatology represents a temperature advection field arising from transient small-scale (<450 km) features, with structures and transport that appear consistent with estimates based on available observational data for the mooring at 20°S, 85°W. The transient structures are very persistent (>4 months), and in specific locations they generate an important contribution to the local upper-ocean heat budget, characterised by scales of a few hundred kilometres and periods of over a year. The contribution from such structures to the local, long-term oceanic heat budget, however, can be of either sign, or vanishing, depending on the location; and, although there appears to be some organisation in preferential areas of activity, the average over the entire region is small. While several different mechanisms may be responsible for the temperature advection by transients, we find that a significant, and possibly dominant, component is associated with vortices embedded in the large-scale, climatological salinity gradient associated with the fresh intrusion of mid-latitude intermediate water which penetrates north-westward beneath the tropical thermocline.