890 results for the EFQM excellence model


Relevance:

100.00%

Publisher:

Abstract:

Financial assets are often modelled by stochastic differential equations (SDEs). These equations can describe the behaviour of the asset, and sometimes certain model parameters as well. For example, the Heston (1993) model, which belongs to the class of stochastic volatility models, describes the behaviour of the asset and of its variance. The Heston model is attractive because it admits semi-analytical formulas for certain derivatives, along with a degree of realism. However, most simulation algorithms for this model run into difficulties when the Feller (1951) condition is not satisfied. In this thesis, we introduce three new simulation algorithms for the Heston model. These algorithms aim to accelerate the well-known algorithm of Broadie and Kaya (2006); to do so, we rely, among other tools, on Markov chain Monte Carlo (MCMC) methods and on approximations. In the first algorithm, we modify the second step of the Broadie-Kaya method to speed it up: instead of using the second-order Newton method together with the inversion approach, we use the Metropolis-Hastings algorithm (see Hastings (1970)). The second algorithm improves on the first: instead of using the true density of the integrated variance, we use the approximation of Smith (2007). This improvement reduces the dimension of the characteristic function and accelerates the algorithm. Our last algorithm is not based on an MCMC method, but we still aim to accelerate the second step of the Broadie and Kaya (2006) method. To achieve this, we use a gamma random variable whose moments are matched to those of the true time-integrated variance. According to Stewart et al. (2007), a convolution of gamma random variables (which closely resembles the representation given by Glasserman and Kim (2008) when the time step is small) can be approximated by a single gamma random variable.
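The moment-matching idea behind the third algorithm can be sketched in a few lines. This is an illustrative Python sketch, not the thesis's algorithm: the mean and variance of the time-integrated variance used below are hypothetical placeholder values, whereas in the actual method they would come from the Heston conditional distribution over one time step.

```python
import numpy as np

def gamma_moment_match(mean, var, size, rng):
    # Match a gamma distribution's first two moments to the target:
    # shape k = mean^2 / var, scale theta = var / mean.
    k = mean**2 / var
    theta = var / mean
    return rng.gamma(shape=k, scale=theta, size=size)

rng = np.random.default_rng(42)

# Hypothetical first two moments of the time-integrated variance over
# one step (placeholders; the real values depend on the Heston state).
m1, v = 0.04, 0.0008
samples = gamma_moment_match(m1, v, 100_000, rng)
```

By construction the sample mean and variance reproduce the targets, which is the whole point of replacing the exact (and expensive) integrated-variance draw with a gamma surrogate.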

Abstract:

The purpose of this chapter is to provide an elementary introduction to the non-renewable resource model with multiple demand curves. The theoretical literature following Hotelling (1931) assumed that all energy needs are satisfied by one type of resource (e.g. ‘oil’), extractible at different per-unit costs. This formulation implicitly assumes that all users are the same distance from each resource pool, that all users are subject to the same regulations, and that motorist users can switch as easily from liquid fossil fuels to coal as electric utilities can. These assumptions imply, as Herfindahl (1967) showed, that in competitive equilibrium all users will exhaust a lower cost resource completely before beginning to extract a higher cost resource: simultaneous extraction of different grades of oil or of oil and coal should never occur. In trying to apply the single-demand curve model during the last twenty years, several teams of authors have independently found a need to generalize it to account for users differing in their (1) location, (2) regulatory environment, or (3) resource needs. Each research team found that Herfindahl's strong, unrealistic conclusion disappears in the generalized model; in its place, a weaker Herfindahl result emerges. Since each research team focussed on a different application, however, it has not always been clear that everyone has been describing the same generalized model. Our goal is to integrate the findings of these teams and to exposit the generalized model in a form which is easily accessible.
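Herfindahl's least-cost-first result under a single demand curve can be illustrated with a toy computation. The pools, costs, and constant per-period demand below are hypothetical, and the setup abstracts away from the Hotelling price path:

```python
def extraction_order(pools, demand_per_period):
    """Herfindahl's result under a single demand curve: deplete the
    cheapest pool completely before touching the next one.
    pools: list of (unit_cost, stock) tuples."""
    schedule = []
    for cost, stock in sorted(pools):  # sort by unit cost, cheapest first
        periods = stock / demand_per_period
        schedule.append((cost, periods))
    return schedule

# Hypothetical pools: cheap 'oil' (cost 10, stock 100) and expensive
# 'coal' (cost 30, stock 200), with demand of 20 units per period.
plan = extraction_order([(30, 200), (10, 100)], 20)
```

In the generalized model with users differing in location, regulation, or resource needs, this strict sequencing breaks down and simultaneous extraction of different grades can occur.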

Abstract:

Thunderstorms, resulting from vigorous convective activity, are among the most spectacular weather phenomena in the atmosphere. A common feature of the weather during the pre-monsoon season over the Indo-Gangetic Plain and northeast India is the outburst of severe local convective storms, commonly known as 'Nor'westers' (as they move from northwest to southeast). These severe thunderstorms, with their associated thunder, squall lines, lightning and hail, cause extensive agricultural losses, damage to structures and loss of life. In this paper, sensitivity experiments have been conducted with the Non-hydrostatic Mesoscale Model (NMM) to test the impact of three microphysical schemes in capturing the severe thunderstorm event that occurred over Kolkata on 15 May 2009. The results show that the WRF-NMM model with the Ferrier microphysical scheme reproduces the cloud and precipitation processes more realistically than the other schemes. We have also attempted to diagnose four severe thunderstorms that occurred during the pre-monsoon seasons of 2006, 2007 and 2008 through simulated radar reflectivity fields from the NMM model with the Ferrier microphysics scheme, and validated the model results against Kolkata Doppler Weather Radar (DWR) observations. Composite radar reflectivity simulated by the WRF-NMM model clearly shows the severe thunderstorm movement seen in the DWR imagery, but fails to capture the observed intensity. These analyses demonstrate the capability of the high-resolution WRF-NMM model in simulating severe thunderstorm events and indicate that the 3 km model improves upon current abilities for simulating severe thunderstorms over the east Indian region.

Abstract:

In recent decades, macro-scale hydrological models have become established as important tools for assessing the state of global renewable freshwater resources in a spatially comprehensive way. Today they are used to answer a wide range of scientific questions, in particular concerning the impacts of anthropogenic interventions on the natural flow regime and the impacts of global change and climate change on water resources. These impacts can be estimated through a variety of water-related indicators, such as renewable (ground)water resources, flood risk, droughts, water stress and water scarcity. The further development of macro-scale hydrological models has been favoured in particular by steadily increasing computing capacity, but also by the growing availability of remote-sensing data and derived data products that can be used to drive and improve the models. Like all macro- to global-scale modelling approaches, macro-scale hydrological simulations are subject to considerable uncertainties, which stem (i) from spatial input data sets, such as meteorological variables or land-surface parameters, and (ii) especially from the (often) simplified representation of physical processes in the model. Given these uncertainties, it is essential to verify the models' actual applicability and predictive capability under diverse climatic and physiographic conditions. So far, however, most evaluation studies have been carried out in only a few large river basins or have focused on continental water fluxes. This contrasts with many application studies whose analyses and conclusions are based on simulated state variables and fluxes at a much finer spatial resolution (grid cell).
The core of this dissertation is a comprehensive evaluation of the general applicability of the global hydrological model WaterGAP3 for simulating monthly flow regimes and low and high flows, based on more than 2400 discharge time series for the period 1958-2010. The river basins considered represent a wide range of climatic and physiographic conditions; basin size ranges from 3000 to several million square kilometres. The model evaluation has two objectives. First, the model performance achieved is to serve as a benchmark against which any further model improvements can be compared. Second, a method for diagnostic model evaluation is to be developed and tested that points to concrete avenues for model improvement where performance is inadequate. To this end, complementary performance metrics are linked with nine basin attributes that quantify the climatic and physiographic conditions as well as the degree of anthropogenic influence in the individual basins. WaterGAP3 achieves medium to high performance in simulating both monthly flow regimes and low and high flows, but clear spatial patterns are evident for all performance metrics considered. Of the nine basin attributes, the degree of aridity and the mean basin slope in particular exert a strong influence on model performance. The model tends to overestimate annual flow volume with increasing aridity. This behaviour is characteristic of macro-scale hydrological models and can be traced to the inadequate representation of runoff generation and concentration processes in water-limited regions.
In steep basins, low model performance is found with respect to reproducing monthly flow variability and temporal dynamics, which is also reflected in the quality of the low- and high-flow simulations. This observation points to necessary model improvements regarding (i) the partitioning of total runoff into fast and delayed flow components and (ii) the calculation of flow velocity in the channel. The method for diagnostic model evaluation developed in this dissertation, linking complementary performance metrics with basin attributes, was tested using the WaterGAP3 model as an example. The method has proven to be an efficient tool for explaining spatial patterns in model performance and for identifying deficits in the model structure. It is applicable in principle to any hydrological model, but it is particularly relevant for macro-scale models and multi-basin studies, since it can partly compensate for the lack of site-specific knowledge and targeted measurement campaigns that catchment-scale modelling usually relies on.
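A minimal sketch of the kind of performance metric used in such evaluations is the Nash-Sutcliffe efficiency, a standard hydrological skill score (the dissertation may use different or additional metrics). The monthly discharge values below are invented for illustration:

```python
import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit; values below 0
    # mean the simulation is worse than simply using the observed mean.
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

# Hypothetical monthly discharge (observed vs. simulated) for one basin.
obs = np.array([10., 14., 30., 55., 40., 22., 15., 12., 11., 13., 18., 25.])
sim = np.array([12., 15., 28., 50., 44., 25., 14., 13., 12., 12., 17., 27.])
score = nse(obs, sim)
```

Computing such a score per basin and correlating it with basin attributes (aridity, slope, etc.) is the essence of the diagnostic linkage described above.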

Abstract:

Objective: To establish a prediction model of the degree of disability in adults with spinal cord injury (SCI) based on the use of the WHO-DAS II. Methods: The disability degree was correlated with three variable groups: clinical, sociodemographic and those related to rehabilitation services. A multiple linear regression model was built to predict disability. 45 people with SCI of diverse etiology, neurological level and completeness participated. Patients were older than 18 and were more than six months post-injury. The WHO-DAS II and the ASIA impairment scale (AIS) were used. Results: Variables that showed a significant relationship with disability were the following: occupational situation, type of affiliation to the public health care system, injury evolution time, neurological level, partial preservation zone, AIS motor and sensory scores, and number of clinical complications during the last year. Complications significantly associated with disability were joint pain, urinary infections, intestinal problems and autonomic dysreflexia. None of the variables related to rehabilitation services showed a significant association with disability. The disability degree exhibited significant differences in favor of the groups that received the following services: assistive device supply and vocational, job or educational counseling. Conclusions: The best disability prediction model in adults with SCI more than six months post-injury was built with the variables injury evolution time, AIS sensory score and injury-related unemployment.
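The regression step can be sketched with ordinary least squares. The toy data below are entirely hypothetical (not the study's 45 patients), and only three of the reported predictors are included:

```python
import numpy as np

# Hypothetical toy data: columns are injury evolution time (months),
# AIS sensory score, and injury-related unemployment (0/1); the
# response is a WHO-DAS II style disability score. All values invented.
X = np.array([[ 8, 150, 1],
              [24, 200, 0],
              [12, 120, 1],
              [60, 180, 0],
              [18, 100, 1],
              [36, 220, 0]], dtype=float)
y = np.array([62., 35., 70., 30., 75., 28.])

# Add an intercept column and fit by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
```

With real data one would also examine p-values and model diagnostics, which `lstsq` does not provide; a statistics package would be used for the full analysis.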

Abstract:

The classical description of Si oxidation given by Deal and Grove has well-known limitations for thin oxides (below 200 Å). Among the large number of alternative models published so far, the interfacial emission model has shown the greatest ability to fit the experimental oxidation curves. It relies on the assumption that during oxidation Si interstitials are emitted into the oxide to release strain and that the accumulation of these interstitials near the interface reduces the reaction rate there. The resulting set of differential equations makes it possible to model diverse oxidation experiments. In this paper, we have compared its predictions with two sets of experiments: (1) the pressure dependence for subatmospheric oxygen pressure and (2) the enhancement of the oxidation rate after annealing in an inert atmosphere. The result is not satisfactory and raises serious doubts about the model's correctness.
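For reference, the classical Deal-Grove relation x² + Ax = B(t + τ) can be solved in closed form for the oxide thickness; the coefficients below are illustrative placeholders, not fitted values:

```python
import math

def deal_grove_thickness(t, A, B, tau=0.0):
    """Oxide thickness from the Deal-Grove relation x^2 + A*x = B*(t + tau),
    taking the positive root of the quadratic."""
    c = B * (t + tau)
    return (-A + math.sqrt(A * A + 4.0 * c)) / 2.0

# Illustrative (not fitted) coefficients: A in um, B in um^2/h, t in h.
x = deal_grove_thickness(t=1.0, A=0.165, B=0.0117)
```

The model's two limits are visible in the formula: for short times growth is reaction-limited and roughly linear, x ≈ (B/A)t, while for long times it is diffusion-limited and parabolic, x ≈ √(Bt). It is precisely in the thin-oxide (short-time) regime that the relation breaks down, motivating alternatives such as the interfacial emission model discussed above.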

Abstract:

Simulations of the top-of-atmosphere radiative-energy budget from the Met Office global numerical weather-prediction model are evaluated using new data from the Geostationary Earth Radiation Budget (GERB) instrument on board the Meteosat-8 satellite. Systematic discrepancies between the model simulations and GERB measurements greater than 20 W m-2 in outgoing long-wave radiation (OLR) and greater than 60 W m-2 in reflected short-wave radiation (RSR) are identified over the period April-September 2006 using 12 UTC data. Convective cloud over equatorial Africa is spatially less organized and less reflective than in the GERB data. This bias depends strongly on convective-cloud cover, which is highly sensitive to changes in the model convective parametrization. Underestimates in model OLR over the Gulf of Guinea coincide with unrealistic southerly cloud outflow from convective centres to the north. Large overestimates in model RSR over the subtropical ocean, greater than 50 W m-2 at 12 UTC, are explained by unrealistic radiative properties of low-level cloud relating to overestimation of cloud liquid water compared with independent satellite measurements. The results of this analysis contribute to the development and improvement of parametrizations in the global forecast model.

Abstract:

Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared-difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface. 
The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well-suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent with a non-spatial validation. (c) 2007 Elsevier B.V. All rights reserved.
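The scale-dependent behaviour described above can be illustrated with a much-simplified sketch: block averaging on a synthetic 1-D transect, rather than the study's REML-estimated linear mixed model. The synthetic setup assumes the predictions track the coarse-scale trend well but damp the fine-scale variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D transect: a smooth trend plus fine-scale noise.
n = 240
x = np.arange(n)
obs = 50 + 0.05 * x + rng.normal(0, 4.0, n)    # "observed" mineral-N
pred = 50 + 0.05 * x + rng.normal(0, 1.0, n)   # model tracks the trend,
                                               # underestimates fine noise

def block_means(v, size):
    # Coarse-scale signal: non-overlapping block averages.
    return v[: len(v) // size * size].reshape(-1, size).mean(axis=1)

# Fine scale: variance of within-block residuals.
fine_var_obs = np.var(obs - block_means(obs, 8).repeat(8))
fine_var_pred = np.var(pred - block_means(pred, 8).repeat(8))

# Coarse scale: correlation of block means.
coarse_corr = np.corrcoef(block_means(obs, 8), block_means(pred, 8))[0, 1]
```

The sketch reproduces the qualitative finding: good agreement at the trend/coarse scale, underestimated variance (and weak correlation) at the fine scale. The mixed-model approach in the paper formalizes exactly this decomposition.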

Abstract:

Observations show the oceans have warmed over the past 40 yr, with appreciable regional variation and more warming at the surface than at depth. Comparing the observations with results from two coupled ocean-atmosphere climate models [the Parallel Climate Model version 1 (PCM) and the Hadley Centre Coupled Climate Model version 3 (HadCM3)] that include anthropogenic forcing shows remarkable agreement between the observed and model-estimated warming. In this comparison the models were sampled at the same locations as the gridded yearly observed data. In the top 100 m of the water column the warming is well separated from natural variability, including both variability arising from internal instabilities of the coupled ocean-atmosphere climate system and that arising from volcanism and solar fluctuations. Between 125 and 200 m the agreement is not significant, but it increases again below this level and remains significant down to 600 m. Analysis of PCM's heat budget indicates that the warming is driven by an increase in net surface heat flux that reaches 0.7 W m(-2) by the 1990s; the downward longwave flux increases by 3.7 W m(-2), which is not fully compensated by an increase in the upward longwave flux of 2.2 W m(-2). Latent and net solar heat fluxes each decrease by about 0.6 W m(-2). The changes in the individual longwave components are distinguishable from the preindustrial mean by the 1920s, but due to cancellation of components, changes in the net surface heat flux do not become well separated from zero until the 1960s. Changes in advection can also play an important role in local ocean warming due to anthropogenic forcing, depending on the location. The observational sampling of ocean temperature is highly variable in space and time, but sufficient to detect the anthropogenic warming signal in all basins, at least in the surface layers, by the 1980s.

Abstract:

We develop the linearization of a semi-implicit semi-Lagrangian model of the one-dimensional shallow-water equations using two different methods. The usual tangent linear model, formed by linearizing the discrete nonlinear model, is compared with a model formed by first linearizing the continuous nonlinear equations and then discretizing. Both models are shown to perform equally well for finite perturbations. However, the asymptotic behaviour of the two models differs as the perturbation size is reduced. This leads to difficulties in showing that the models are correctly coded using the standard tests. To overcome this difficulty we propose a new method for testing linear models, which we demonstrate both theoretically and numerically. © Crown copyright, 2003. Royal Meteorological Society
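The standard correctness test alluded to above, checking that the ratio of the nonlinear perturbation growth to the tangent-linear prediction tends to one as the perturbation shrinks, can be sketched on a toy nonlinear map (not the shallow-water model itself; the map and perturbations below are invented):

```python
import numpy as np

# Toy nonlinear "model": one step of a quadratic map.
def nonlinear(x):
    return x + 0.1 * x**2

def tangent_linear(x, dx):
    # Jacobian of nonlinear() at x, applied to the perturbation dx.
    return (1.0 + 0.2 * x) * dx

x = np.array([1.0, 2.0, -0.5])
dx = np.array([0.3, -0.2, 0.1])

# Ratio ||M(x + eps*dx) - M(x)|| / ||L(eps*dx)|| should tend to 1
# as eps shrinks if the tangent linear model is coded correctly.
ratios = []
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    num = np.linalg.norm(nonlinear(x + eps * dx) - nonlinear(x))
    den = np.linalg.norm(tangent_linear(x, eps * dx))
    ratios.append(num / den)
```

The paper's point is that for their semi-implicit semi-Lagrangian discretization this asymptotic behaviour differs between the discretize-then-linearize and linearize-then-discretize models, which is why they propose a new testing method.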

Abstract:

Ozone and temperature profiles from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) have been assimilated, using three-dimensional variational assimilation, into a stratosphere troposphere version of the Met Office numerical weather-prediction system. Analyses are made for the month of September 2002, when there was an unprecedented split in the southern hemisphere polar vortex. The analyses are validated against independent ozone observations from sondes, limb-occultation and total column ozone satellite instruments. Through most of the stratosphere, precision varies from 5 to 15%, and biases are 15% or less of the analysed field. Problems remain in the vortex and below the 60 hPa level, especially at the tropopause where the analyses have too much ozone and poor agreement with independent data. Analysis problems are largely a result of the model rather than the data, giving confidence in the MIPAS ozone retrievals, though there may be a small high bias in MIPAS ozone in the lower stratosphere. Model issues include an excessive Brewer-Dobson circulation, which results both from known problems with the tracer transport scheme and from the data assimilation of dynamical variables. The extreme conditions of the vortex split reveal large differences between existing linear ozone photochemistry schemes. Despite these issues, the ozone analyses are able to successfully describe the ozone hole split and compare well to other studies of this event. Recommendations are made for the further development of the ozone assimilation system.

Abstract:

In this paper we focus on the one-year-ahead prediction of the electricity peak-demand daily trajectory during the winter season in Central England and Wales. We define a Bayesian hierarchical model for predicting the winter trajectories and present results based on past observed weather. Thanks to the flexibility of the Bayesian approach, we are able to produce the marginal posterior distributions of all the predictands of interest, a fundamental advance with respect to classical methods. The results are encouraging in both skill and representation of uncertainty. Further extensions are straightforward, at least in principle; the two main ones consist in conditioning the weather-generator model on additional information, such as knowledge of the first part of the winter and/or the seasonal weather forecast. Copyright (C) 2006 John Wiley & Sons, Ltd.
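As a much-simplified illustration of producing a marginal posterior predictive distribution for a predictand, here is a conjugate normal sketch with invented demand figures; the paper's hierarchical model, which couples a weather generator to demand, is considerably richer:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical past winter peak demands (GW) as the observed data.
y = np.array([54.2, 55.1, 53.8, 56.0, 54.9, 55.6, 54.4, 55.3])

# Conjugate normal model with known observation variance: the posterior
# for the mean peak demand is normal, and the posterior predictive for
# next winter adds the observation variance back in.
sigma2 = 0.6**2                  # assumed known observation variance
mu0, tau2 = 55.0, 2.0**2         # weakly informative prior
n = len(y)
post_var = 1.0 / (1.0 / tau2 + n / sigma2)
post_mean = post_var * (mu0 / tau2 + y.sum() / sigma2)

# Marginal posterior predictive draws for next winter's peak demand.
draws = rng.normal(post_mean, np.sqrt(post_var + sigma2), size=50_000)
```

The practical payoff noted in the abstract is exactly this: the full predictive distribution (here, `draws`) is available for any functional of interest, not just a point forecast.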

Abstract:

One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near-Earth space, arising from both quasi-steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang-Sheeley-Arge (WSA) empirical model. The mean-square error (MSE) between the observed and model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown to frequently be an inadequate "figure of merit" for assessing solar wind speed predictions. A complementary, event-based analysis technique is developed in which high-speed enhancements (HSEs) are systematically selected and associated from observed and model time series. The WSA model is validated using comparisons of the number of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
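The event-based verification can be sketched as a simple nearest-neighbour matching of forecast and observed arrival times. The matching window and event times below are hypothetical, not the paper's selection criteria:

```python
def match_events(observed, forecast, window=2.0):
    """Associate forecast events with observed events (times in days).
    A forecast within `window` days of an unmatched observed event is a
    hit; leftover observed events are misses, leftover forecasts are
    false alarms."""
    hits, timing_errors, false_alarms = 0, [], 0
    unmatched = list(observed)
    for f in forecast:
        best = min(unmatched, key=lambda o: abs(o - f), default=None)
        if best is not None and abs(best - f) <= window:
            hits += 1
            timing_errors.append(f - best)  # signed timing error
            unmatched.remove(best)
        else:
            false_alarms += 1
    misses = len(unmatched)
    return hits, misses, false_alarms, timing_errors

# Hypothetical HSE arrival times (days) from observations and the model.
obs_t = [3.0, 10.5, 17.0, 24.0]
fc_t = [3.5, 11.0, 30.0]
result = match_events(obs_t, fc_t)
```

Hit/miss/false-alarm counts plus the distribution of timing errors convey skill information that a single MSE value, computed point-by-point over the whole time series, cannot.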

Abstract:

We introduce a technique for assessing the diurnal development of convective storm systems based on outgoing longwave radiation fields. Using the size distribution of the storms measured from a series of images, we generate an array in the lengthscale-time domain based on the standard score statistic. It demonstrates succinctly the size evolution of storms as well as the dissipation kinematics. It also provides evidence related to the temperature evolution of the cloud tops. We apply this approach to a test case comparing observations made by the Geostationary Earth Radiation Budget instrument to output from the Met Office Unified Model run at two resolutions. The 12 km resolution model produces peak convective activity on all lengthscales significantly earlier in the day than shown by the observations and no evidence for storms growing in size. The 4 km resolution model shows realistic timing and growth evolution although the dissipation mechanism still differs from the observed data.
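The standard-score array in the lengthscale-time domain can be sketched as follows, with synthetic storm counts standing in for the measured size distributions (the bin layout and injected midday peak are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical storm counts: rows are lengthscale bins, columns are
# hourly images through the day; a midday burst at mid lengthscales.
counts = rng.poisson(5, size=(6, 24)).astype(float)
counts[2:4, 11:15] += 20.0   # injected convective peak, hours 11-14

# Standard score per lengthscale bin: deviation from that bin's
# diurnal mean in units of its diurnal standard deviation.
z = (counts - counts.mean(axis=1, keepdims=True)) / counts.std(axis=1, keepdims=True)

# Time of peak activity on each lengthscale.
peak_hours = z.argmax(axis=1)
```

Plotted as an image, `z` shows at a glance when storms of each size peak, which is how the technique exposes the early-peaking, non-growing convection of the coarser model run.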

Abstract:

The objective of this work was to construct a dynamic model of hepatic amino acid metabolism in the lactating dairy cow that could be parameterized using net flow data from in vivo experiments. The model considers 22 amino acids, ammonia, urea, and 13 energetic metabolites, and was parameterized using a steady-state balance model and two in vivo net flow experiments conducted with mid-lactation dairy cows. Extracellular flows were derived directly from the observed data. An optimization routine was used to derive nine intracellular flows. The resulting dynamic model was found to be stable across a range of inputs, suggesting that it can be perturbed and applied to other physiological states. Although nitrogen was generally in balance, leucine was in slight deficit compared to predicted needs for export protein synthesis, suggesting that an alternative source of leucine (e.g. peptides) was utilized. Simulations of varying glucagon concentrations indicated that an additional 5 mol/d of glucose could be synthesized at the reference substrate concentrations and blood flows. The increased glucose production was supported by increased removal from blood of lactate, glutamate, aspartate, alanine, asparagine, and glutamine. As glucose output increased, ketone body and acetate release increased while CO2 release declined. The pattern of amino acids appearing in hepatic vein blood was affected by changes in amino acid concentration in portal vein blood, portal blood flow rate and glucagon concentration, with methionine and phenylalanine being the most affected of the essential amino acids. Experimental evidence is insufficient to determine whether essential amino acids are affected by varying gluconeogenic demands. (C) 2004 Published by Elsevier Ltd.