Abstract:
Weather and climate model simulations of the West African Monsoon (WAM) generally represent the rainfall distribution and monsoon circulation poorly because key processes, such as clouds and convection, are poorly characterized. The vertical distribution of cloud and precipitation during the WAM is evaluated in Met Office Unified Model simulations against CloudSat observations. Simulations were run at 40-km and 12-km horizontal grid length with a convection parameterization scheme, and at 12-km, 4-km, and 1.5-km grid length with the convection scheme effectively switched off, to study the impact of model resolution and convection parameterization on the organisation of tropical convection. Radar reflectivity is forward-modelled from the model cloud fields using the CloudSat simulator to allow a like-with-like comparison with the CloudSat radar observations. The representation of cloud and precipitation at 12-km horizontal grid length improves dramatically when the convection parameterization is switched off, primarily because of a reduction in daytime (moist) convection. Further improvement is obtained when the model grid length is reduced to 4 km or 1.5 km, especially in the representation of thin anvil and mid-level cloud, but three issues remain in all model configurations. Firstly, all simulations underestimate the fraction of anvils with cloud top height above 12 km, which can be attributed to ice water contents in the model that are too low compared with satellite retrievals. Secondly, the model consistently detrains mid-level cloud too close to the freezing level, compared to higher altitudes in the CloudSat observations. Finally, there is too much low-level cloud cover in all simulations, and this bias was not improved by adjusting the rainfall parameters in the microphysics scheme. To improve model simulations of the WAM, more detailed in-situ observations of the dynamics and microphysics targeting these non-precipitating cloud types are required.
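A like-with-like model-versus-CloudSat comparison of this kind is usually summarised as a joint histogram of reflectivity against altitude (a CFAD). Below is a minimal sketch of that bookkeeping step, assuming forward-modelled (or observed) reflectivity and height samples are already in hand; the function name and bin choices are illustrative, not taken from the paper.

```python
import numpy as np

def reflectivity_cfad(dbz, height_km, dbz_bins=None, z_bins=None):
    """Joint frequency of reflectivity vs altitude (CFAD-style histogram).

    dbz       : 1-D array of reflectivity samples [dBZ]
    height_km : 1-D array of the corresponding sample altitudes [km]
    Returns (hist, dbz_edges, z_edges), with hist normalized per altitude bin.
    """
    if dbz_bins is None:
        dbz_bins = np.arange(-30, 21, 2.5)   # illustrative CloudSat-like range
    if z_bins is None:
        z_bins = np.arange(0, 18.5, 0.5)     # 0-18 km in 500 m layers
    hist, dbz_edges, z_edges = np.histogram2d(dbz, height_km,
                                              bins=[dbz_bins, z_bins])
    # Normalize each altitude column so model and observations are comparable
    col_sums = hist.sum(axis=0, keepdims=True)
    hist = np.divide(hist, col_sums, out=np.zeros_like(hist),
                     where=col_sums > 0)
    return hist, dbz_edges, z_edges

# Usage: build hist_model and hist_obs with the same bins, then difference them.
```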
Abstract:
We present the global general circulation model IPSL-CM5, developed to study the long-term response of the climate system to natural and anthropogenic forcings as part of the fifth phase of the Coupled Model Intercomparison Project (CMIP5). This model includes an interactive carbon cycle, a representation of tropospheric and stratospheric chemistry, and a comprehensive representation of aerosols. As it represents the principal dynamical, physical, and biogeochemical processes relevant to the climate system, it may be referred to as an Earth System Model. However, the IPSL-CM5 model may be used in a multitude of configurations associated with different boundary conditions and with a range of complexities in terms of processes and interactions. This paper presents an overview of the different model components and explains how they were coupled and used to simulate historical climate change over the past 150 years and different scenarios of future climate change. A single version of the model (IPSL-CM5A-LR) was used to provide climate projections associated with different socio-economic scenarios, including the Representative Concentration Pathways considered by CMIP5 and several scenarios from the Special Report on Emission Scenarios considered by CMIP3. Results suggest that the magnitude of projected global warming depends primarily on the socio-economic scenario considered, that an aggressive mitigation policy has the potential to limit global warming to about two degrees, and that some components of the climate system, such as Arctic sea ice and the Atlantic Meridional Overturning Circulation, may change drastically by the end of the twenty-first century under a no-climate-policy scenario. Although the magnitude of regional temperature and precipitation changes depends fairly linearly on the magnitude of the projected global warming (and thus on the scenario considered), the geographical pattern of these changes is strikingly similar across scenarios. The representation of atmospheric physical processes in the model is shown to strongly influence the simulated climate variability and both the magnitude and pattern of the projected climate changes.
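The near-linear dependence of regional change on global-mean warming noted above is the basis of the standard pattern-scaling technique. The sketch below illustrates that idea under simple assumptions (a per-gridpoint regression through the origin); it is a reader's illustration, not the IPSL-CM5 analysis.

```python
import numpy as np

def fit_pattern(regional_change, global_dT):
    """Least-squares pattern p such that regional_change ~ p * global_dT.

    regional_change : array (n_scenarios, n_gridpoints) of regional changes
    global_dT       : array (n_scenarios,) of global-mean warming per scenario
    Regression through the origin per gridpoint: p = sum(g*y) / sum(g^2).
    """
    g = np.asarray(global_dT)
    return (g[:, None] * regional_change).sum(axis=0) / (g ** 2).sum()

def scale_pattern(pattern, new_global_dT):
    """Project regional change for a new scenario's global-mean warming."""
    return pattern * new_global_dT

# Toy numbers (not model output): 3 scenarios, 2 gridpoints.
y = np.array([[0.8, 1.6], [1.6, 3.1], [3.3, 6.2]])
g = np.array([1.0, 2.0, 4.0])          # global-mean warming [K]
p = fit_pattern(y, g)
print(scale_pattern(p, 3.0))           # regional change in a 3 K world
```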
Abstract:
The polynyas of the Laptev Sea are regions of particular interest due to the strong formation of Arctic sea ice. In order to simulate the polynya dynamics and to quantify ice production, we apply the Finite Element Sea-Ice Ocean Model (FESOM). In previous simulations, FESOM was forced with daily data from the NCEP (National Centers for Environmental Prediction) reanalysis 1. For the periods 1 April to 9 May 2008 and 1 January to 8 February 2009, we examine the impact of different forcing data: daily and 6-hourly NCEP reanalysis 1 (1.875° x 1.875°), 6-hourly NCEP reanalysis 2 (1.875° x 1.875°), 6-hourly analyses from the GME (Global Model of the German Weather Service) (0.5° x 0.5°), and high-resolution hourly COSMO (Consortium for Small-Scale Modeling) data (5 km x 5 km). In all FESOM simulations, except for those with 6-hourly and daily NCEP 1 data, the openings and closings of polynyas are simulated in general agreement with satellite products. Over the fast-ice area the wind fields of all atmospheric data sets are similar and close to in situ measurements. Over the polynya areas, however, there are strong differences between the forcing data with respect to air temperature and turbulent heat flux. These differences have a strong impact on sea-ice production rates. Depending on the forcing fields, polynya ice production ranges from 1.4 km3 to 7.8 km3 during 1 April to 9 May 2008 and from 25.7 km3 to 66.2 km3 during 1 January to 8 February 2009. Atmospheric forcing data with high spatial and temporal resolution, which account for the presence of the polynyas, are therefore needed to reduce the uncertainty in quantifying ice production in polynyas.
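The conversion from a surface heat loss to an ice-production volume behind figures like those above is a simple energy balance: volume = energy / (rho_ice * L_fusion). A hedged sketch with textbook constants follows; this is the generic relation, not the exact FESOM diagnostic.

```python
# Ice production from net surface heat loss over a polynya (energy balance).
RHO_ICE = 910.0      # sea-ice density [kg m-3]
L_FUSION = 3.34e5    # latent heat of fusion [J kg-1]

def ice_production_km3(net_heat_loss_wm2, area_m2, seconds):
    """Ice volume produced when a net heat loss (positive upward, W m-2)
    acts over a given area for a given duration. Returns km^3 of ice."""
    energy = net_heat_loss_wm2 * area_m2 * seconds   # J extracted from the ocean
    volume_m3 = energy / (RHO_ICE * L_FUSION)        # m^3 of new ice
    return volume_m3 / 1e9                           # km^3

# E.g. 300 W m-2 over 2000 km^2 of open water for 39 days (1 April - 9 May):
print(ice_production_km3(300.0, 2000e6, 39 * 86400))  # ~6.6 km^3
```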
Abstract:
The Arctic sea ice cover is thinning and retreating, causing changes in surface roughness that in turn modify the momentum flux from the atmosphere through the ice into the ocean. New model simulations comprising variable sea ice drag coefficients for both the air and water interfaces demonstrate that the heterogeneity in sea ice surface roughness significantly impacts the spatial distribution and trends of ocean surface stress over recent decades. Simulations with constant sea ice drag coefficients, as used in most climate models, show an increase in annual mean ocean surface stress (0.003 N/m2 per decade, 4.6%) because the reduction in ice thickness weakens the ice and accelerates ice drift. In contrast, with variable drag coefficients our simulations show the annual mean ocean surface stress declining at a rate of -0.002 N/m2 per decade (3.1%) over the period 1980-2013, because of a significant reduction in surface roughness associated with an increasingly thinner and younger sea ice cover. The effectiveness of sea ice in transferring momentum depends not only on its resistive strength against the wind forcing but also on its top and bottom surface roughness, which varies with ice type and ice conditions. This reveals the need to account for variations in sea ice surface roughness in climate simulations in order to correctly represent the implications of sea ice loss under global warming.
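The ocean surface stress discussed here follows the standard quadratic drag law; what the study varies is the drag coefficient. The sketch below contrasts the two configurations, with a deliberately hypothetical roughness dependence standing in for the paper's actual parameterization.

```python
import numpy as np

RHO_W = 1027.0  # seawater density [kg m-3]

def ice_ocean_stress(u_ice, u_ocean, c_dw):
    """Quadratic ice-ocean drag law: tau = rho_w * c_dw * |du| * du [N m-2]."""
    du = np.asarray(u_ice) - np.asarray(u_ocean)
    return RHO_W * c_dw * np.abs(du) * du

# Constant coefficient, as in many climate models:
tau_const = ice_ocean_stress(0.15, 0.02, c_dw=5.5e-3)

# Variable coefficient: hypothetical increase with keel-related roughness
# (for illustration only; not the parameterization used in the paper).
def c_dw_variable(ice_thickness_m, c0=3.0e-3, slope=1.5e-3):
    return c0 + slope * ice_thickness_m

tau_var = ice_ocean_stress(0.15, 0.02, c_dw_variable(ice_thickness_m=1.2))
print(tau_const, tau_var)
```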
Abstract:
The Madden-Julian Oscillation (MJO) is the dominant mode of intraseasonal variability in the Tropics. It can be characterised as a planetary-scale coupling between the atmospheric circulation and organised deep convection that propagates east through the equatorial Indo-Pacific region. The MJO interacts with weather and climate systems on a near-global scale and is a crucial source of predictability for weather forecasts on medium to seasonal timescales. Despite its global significance, accurately representing the MJO in numerical weather prediction (NWP) and climate models remains a challenge. This thesis focuses on the representation of the MJO in the Integrated Forecasting System (IFS) at the European Centre for Medium-Range Weather Forecasts (ECMWF), a state-of-the-art NWP model. Recent modifications to the model physics in Cycle 32r3 (Cy32r3) of the IFS led to advances in the simulation of the MJO; for the first time, the observed amplitude of the MJO was maintained throughout the integration period. A set of hindcast experiments, which differ only in their formulation of convection, have been performed between May 2008 and April 2009 to assess the sensitivity of MJO simulation in the IFS to the Cy32r3 convective parameterization. Unique to this thesis is the attribution of the advances in MJO simulation in Cy32r3 to the modified convective parameterization, specifically, the relative-humidity-dependent formulation for organised deep entrainment. Increasing the sensitivity of the deep convection scheme to environmental moisture is shown to modify the relationship between precipitation and moisture in the model. Through dry-air entrainment, convective plumes ascending in low-humidity environments terminate lower in the atmosphere. As a result, there is an increase in the occurrence of cumulus congestus, which acts to moisten the mid-troposphere. Due to the modified precipitation-moisture relationship, more moisture is able to build up, which effectively preconditions the tropical atmosphere for the transition to deep convection. Results from this thesis suggest that a tropospheric moisture control on convection is key to simulating the interaction between the physics and the large-scale circulation associated with the MJO.
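The mechanism at the heart of the Cy32r3 change, entrainment that strengthens as the environment dries, can be written schematically as an entrainment rate that grows with subsaturation. The functional form and constant below are illustrative assumptions, not the IFS code.

```python
def organized_entrainment(rh, eps0=1.8e-3):
    """Schematic relative-humidity-dependent entrainment rate [m-1].

    rh   : environmental relative humidity (0-1)
    eps0 : base entrainment rate; the value here is illustrative only.
    Drier environments (low rh) entrain more dry air, so convective
    plumes terminate lower in the atmosphere.
    """
    rh = min(max(rh, 0.0), 1.0)
    return eps0 * (1.3 - rh)   # functional form assumed for illustration

for rh in (0.9, 0.7, 0.5):
    print(f"RH={rh:.1f}: entrainment = {organized_entrainment(rh):.2e} m^-1")
```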
Abstract:
The General Ocean Turbulence Model (GOTM) is applied to the diagnostic turbulence field of the mixing layer (ML) over the equatorial region of the Atlantic Ocean. Two situations were investigated: the rainy and dry seasons, defined, respectively, by the presence of the intertropical convergence zone and by its northward displacement. Simulations were carried out using data from a PIRATA buoy located on the equator at 23 degrees W to compute surface turbulent fluxes, and from the NASA/GEWEX Surface Radiation Budget Project to close the surface radiation balance. A data assimilation scheme was used as a surrogate for the physical effects not present in the one-dimensional model. In the rainy season, results show that the ML is shallower due to the weaker surface stress and stronger stable stratification; the maximum ML depth reached during this season is around 15 m, with an average diurnal variation of 7 m. In the dry season, the stronger surface stress and the enhanced surface heat balance components enable higher mechanical production of turbulent kinetic energy and, at night, buoyancy also acts to enhance turbulence in the first meters of depth, characterizing a deeper ML that reaches around 60 m with an average diurnal variation of 30 m.
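The contrast between mechanical and buoyant production of turbulent kinetic energy drawn above can be made concrete with two standard surface-layer scales, the friction velocity and the surface buoyancy flux. The sketch below uses common textbook definitions and illustrative numbers, not the GOTM configuration of the study.

```python
import math

RHO_W = 1025.0   # seawater density [kg m-3]
CP_W = 3990.0    # seawater heat capacity [J kg-1 K-1]
ALPHA = 2.5e-4   # thermal expansion coefficient [K-1]
G = 9.81         # gravity [m s-2]

def friction_velocity(tau):
    """u* = sqrt(tau / rho): velocity scale for mechanical TKE production."""
    return math.sqrt(tau / RHO_W)

def buoyancy_flux(net_heat_loss):
    """Surface buoyancy flux B = g*alpha*Q/(rho*cp) [m2 s-3].
    Positive Q = ocean heat loss -> B > 0 -> convective deepening at night."""
    return G * ALPHA * net_heat_loss / (RHO_W * CP_W)

# Dry season: stronger stress, larger nocturnal heat loss -> deeper ML.
print(friction_velocity(0.08), buoyancy_flux(150.0))
# Rainy season: weaker stress, stable stratification -> shallow ML.
print(friction_velocity(0.03), buoyancy_flux(40.0))
```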
Abstract:
In this paper, we propose a new two-parameter lifetime distribution with increasing failure rate, the complementary exponential geometric distribution, which is complementary to the exponential geometric model proposed by Adamidis and Loukas (1998). The new distribution arises in a latent complementary risks scenario, in which the lifetime associated with a particular risk is not observable; rather, we observe only the maximum lifetime value among all risks. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulas for its reliability and failure rate functions, moments (including the mean and variance), variation coefficient, and modal value. Parameter estimation is based on the usual maximum likelihood approach. We report the results of a misspecification simulation study performed in order to assess the extent of misspecification errors when testing the exponential geometric distribution against our complementary one for different sample sizes and censoring percentages. The methodology is illustrated on four real datasets, and we compare the two modeling approaches. (C) 2011 Elsevier B.V. All rights reserved.
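The latent complementary-risks construction pins the distribution down: if the number of risks N is geometric(theta) and the individual lifetimes are i.i.d. exponential(lambda), the observed maximum has the closed-form density used below. The algebra here is a reader's reconstruction from the construction described in the abstract, not a quotation of the paper.

```python
import math
import random

def ceg_pdf(y, lam, theta):
    """Density of the maximum of a geometric(theta) number of exponential(lam)
    lifetimes: f(y) = theta*lam*exp(-lam*y) / (theta + (1-theta)*exp(-lam*y))^2
    for y > 0 (derived from F(y) = theta*F_e(y) / (1 - (1-theta)*F_e(y)))."""
    e = math.exp(-lam * y)
    return theta * lam * e / (theta + (1.0 - theta) * e) ** 2

def ceg_sample(lam, theta, rng=random):
    """Simulate directly from the latent complementary-risks story."""
    n = 1
    while rng.random() > theta:   # N ~ geometric(theta) on {1, 2, ...}
        n += 1
    return max(rng.expovariate(lam) for _ in range(n))

# Sanity check: the sample mean should stabilize for fixed (lam, theta).
random.seed(1)
xs = [ceg_sample(2.0, 0.4) for _ in range(50_000)]
print(sum(xs) / len(xs), ceg_pdf(0.5, 2.0, 0.4))
```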
Abstract:
We construct static soliton solutions with non-zero Hopf topological charges for a theory that extends the Skyrme-Faddeev model by the addition of a further quartic term in derivatives. We use an axially symmetric ansatz based on toroidal coordinates and solve the resulting two coupled non-linear partial differential equations in two variables by a successive over-relaxation (SOR) method. We construct numerical solutions with Hopf charge up to four, and calculate their analytical behavior in some limiting cases. The solutions present an interesting behavior under changes of a special combination of the coupling constants of the quartic terms: their energies and sizes tend to zero as that combination approaches a particular special value. We calculate the equivalent of the Vakulenko and Kapitanskii energy bound for the theory and find that it vanishes at that same special value of the coupling constants. In addition, the model presents an integrable sector with an infinite number of local conserved currents which apparently are not related to symmetries of the action. In the intersection of those two special sectors the theory possesses exact vortex solutions (static and time dependent) which were constructed in a previous paper by one of the authors. It is believed that this model describes some aspects of the low-energy limit of the pure SU(2) Yang-Mills theory, and our results may help identify relevant structures in that strong-coupling regime.
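Successive over-relaxation is the numerical workhorse mentioned above. The coupled non-linear equations are specific to the theory, but the relaxation sweep is generic; the sketch below shows it on a single linear Poisson problem purely to illustrate the method.

```python
import numpy as np

def sor_poisson(f, h, omega=1.8, tol=1e-8, max_iter=20_000):
    """Solve laplacian(u) = f on a square grid with u = 0 on the boundary,
    using successive over-relaxation (Gauss-Seidel blended with weight omega)."""
    u = np.zeros_like(f)
    n, m = f.shape
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                gs = 0.25 * (u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1]
                             - h * h * f[i, j])        # Gauss-Seidel update
                new = (1 - omega) * u[i, j] + omega * gs  # over-relaxation
                max_change = max(max_change, abs(new - u[i, j]))
                u[i, j] = new
        if max_change < tol:   # converged
            break
    return u

u = sor_poisson(np.ones((33, 33)), h=1.0 / 32)
print(u[16, 16])
```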
Abstract:
In this article, we present the EM algorithm for performing maximum likelihood estimation of an asymmetric linear calibration model under the assumption of skew-normally distributed errors. A simulation study is conducted to evaluate the performance of the calibration estimator in interpolation and extrapolation situations. As an application to a real data set, we fitted the model to a dimensional measurement method used for calculating testicular volume with a caliper, calibrated against ultrasonography as the standard method. With this methodology, we do not need to transform the variables to obtain symmetrical errors. Another interesting aspect of the approach is that the transformation developed to make the information matrix nonsingular when the skewness parameter is near zero leaves the parameter of interest unchanged. Model fitting is implemented, and the best choice between the usual calibration model and the model proposed in this article is evaluated using the Akaike information criterion, Schwarz's Bayesian information criterion, and the Hannan-Quinn criterion.
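Model choice among the three criteria named above follows one template, a penalized log-likelihood. A small sketch with the standard definitions (not code from the article):

```python
import math

def information_criteria(loglik, k, n):
    """AIC, BIC (Schwarz) and Hannan-Quinn for a fit with log-likelihood
    `loglik`, k free parameters and n observations. Smaller is better."""
    aic = -2.0 * loglik + 2.0 * k
    bic = -2.0 * loglik + k * math.log(n)
    hqc = -2.0 * loglik + 2.0 * k * math.log(math.log(n))
    return aic, bic, hqc

# E.g. usual calibration model (k=3) vs a skew-normal one (k=4), n=80
# (the log-likelihood values below are placeholders, not the paper's):
print(information_criteria(-152.3, 3, 80))
print(information_criteria(-149.1, 4, 80))
```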
Abstract:
The Grubbs measurement model is frequently used to compare several measuring devices. It is common to assume that the random terms have a normal distribution. However, such an assumption makes the inference vulnerable to outlying observations, whereas scale mixtures of normal distributions have been an interesting alternative for producing robust estimates while keeping the elegance and simplicity of maximum likelihood theory. The aim of this paper is to develop an EM-type algorithm for parameter estimation and to use the local influence method to assess the robustness of these parameter estimates under some usual perturbation schemes. In order to identify outliers and to criticize the model building, we use the local influence procedure in a study comparing the precision of several thermocouples. (C) 2008 Elsevier B.V. All rights reserved.
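The robustness argument rests on the scale-mixture representation: a heavy-tailed error is a normal whose variance is inflated by a latent mixing weight, which is what an EM-type algorithm exploits. The sketch below shows the representation for the Student-t member of the family, a standard fact rather than the paper's algorithm.

```python
import random

def student_t_via_scale_mixture(nu, rng=random):
    """Draw from t_nu as N(0, 1/U) with U ~ Gamma(nu/2, rate=nu/2).
    Small realizations of U inflate the variance, producing the heavy
    tails that downweight outliers in the resulting ML estimates."""
    u = rng.gammavariate(nu / 2.0, 2.0 / nu)   # shape, scale = 1/rate
    return rng.gauss(0.0, 1.0) / u ** 0.5

random.seed(7)
draws = [student_t_via_scale_mixture(nu=4.0) for _ in range(50_000)]
print(max(draws), min(draws))  # noticeably heavier tails than N(0, 1)
```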
Abstract:
The main objective of this paper is to study a logarithmic extension of the bimodal skew-normal model introduced by Elal-Olivero et al. [1]. The model can then be seen as an alternative to the log-normal model typically used for fitting positive data. We study some basic properties, such as the distribution function and moments, and discuss maximum likelihood estimation of the parameters. We report results of an application to a real data set on nickel concentration in soil samples. Comparison with several alternative models indicates that the proposed model presents the best fit, so it can be quite useful in real applications involving chemical data on substance concentrations. Copyright (C) 2011 John Wiley & Sons, Ltd.
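The logarithmic extension works by the usual change of variables: if X follows the bimodal skew-normal law, then Y = exp(X) has density f_X(log y)/y on y > 0, making it a direct competitor to the log-normal. The sketch below exercises that transform with a placeholder base density; the Elal-Olivero et al. form is not reproduced here.

```python
import math

def log_extended_density(y, base_pdf):
    """Density of Y = exp(X) given the density of X: f_Y(y) = f_X(log y)/y."""
    if y <= 0:
        return 0.0
    return base_pdf(math.log(y)) / y

# Placeholder base density (standard normal) just to exercise the transform;
# substituting the bimodal skew-normal pdf yields the log-extended model.
def std_normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

print(log_extended_density(1.5, std_normal_pdf))  # log-normal density at 1.5
```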
Abstract:
There is a need for scientific evidence of claimed nutraceutical effects, but there is also a social movement towards the use of natural products, among which algae are seen as rich resources. Within this scenario, methodology for rapid and reliable assessment of markers of the efficacy and safety of these extracts is necessary. The rat treated with streptozotocin has been proposed as the most appropriate model of systemic oxidative stress for studying antioxidant therapies. Cystoseira is a brown alga containing fucoxanthin and other carotenes, whose pressure-assisted extracts were assayed to discover a possible beneficial effect on complications related to diabetes evolution in an acute but short-term model. Urine was selected as the sample and CE-TOF-MS as the analytical technique to obtain the fingerprints in a non-targeted metabolomic approach. Multivariate data analysis revealed a good clustering of the groups and permitted the putative assignment of the compounds statistically significant in the classification. Interestingly, a group of compounds associated with lysine glycation and cleavage from proteins was found to be increased in diabetic animals receiving vehicle compared to control animals receiving vehicle (N6,N6,N6-trimethyl-L-lysine, N-methylnicotinamide, galactosylhydroxylysine, L-carnitine, N6-acetyl-N6-hydroxylysine, fructose-lysine, pipecolic acid, urocanic acid, amino-isobutanoate, formylisoglutamine). Fructose-lysine decreased significantly after the treatment, changing from a 24% increase to a 19% decrease. CE-MS fingerprinting of urine has provided a group of compounds different from those detected with other techniques, demonstrating the need for cross-platform analysis to obtain a broad view of biological samples.
Methodology for identifying parameters for the TRNSYS model Type 210 - wood pellet stoves and boilers
Abstract:
This report describes a method for performing measurements on boilers and stoves and for identifying, from the measurements, the parameters of the boiler/stove model TRNSYS Type 210. The model can be used for detailed annual system simulations in TRNSYS. Experience from measurements on three different pellet stoves and four boilers was used to develop this methodology. Recommendations for the set-up of the measurements are given, together with the combustion theory required for data evaluation and preparation. The data evaluation showed that the uncertainties are quite large for the measured flue gas flow rate; for boilers and stoves with a high fraction of the energy going to the water jacket, the calculated heat rate to the room may also have large uncertainties. A methodology for the parameter identification process is described, and identified parameters are given for two different stoves and three boilers. Finally, the identified models are compared with measured data, showing that the model generally agrees well with measurements under both stationary and dynamic conditions.
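The parameter identification the report describes amounts to fitting simulated boiler output to measured time series. A hedged sketch of that loop with SciPy's standard least-squares driver follows; simulate_type210 is a hypothetical stand-in for the actual TRNSYS Type 210 model, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_type210(params, time_s):
    """Hypothetical stand-in for the boiler/stove model: a first-order
    thermal response with gain and time constant as the parameters."""
    gain, tau = params
    return gain * (1.0 - np.exp(-time_s / tau))

def residuals(params, time_s, measured_kw):
    return simulate_type210(params, time_s) - measured_kw

# Synthetic "measurements" standing in for logged boiler output:
t = np.linspace(0.0, 3600.0, 61)   # one hour, 1-minute steps
measured = (8.0 * (1.0 - np.exp(-t / 600.0))
            + np.random.default_rng(0).normal(0.0, 0.1, t.size))

fit = least_squares(residuals, x0=[5.0, 300.0], args=(t, measured),
                    bounds=([0.0, 1.0], [50.0, 5000.0]))
print(fit.x)  # identified gain [kW] and time constant [s]
```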
Abstract:
A customer is presumed to gravitate to a facility according to the distance to it and its attractiveness. However, regarding the location of the facility, the presumption is that the customer opts for the shortest route to the nearest facility. This paradox was recently resolved by the introduction of the gravity p-median model. The model has yet to be implemented and tested empirically. We implemented the model in an empirical problem of locating locksmiths, vehicle inspections, and retail stores of vehicle spare-parts, and we compared the solutions with those of the p-median model. We found the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model or gives unstable solutions due to a non-concave objective function.
Abstract:
Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic in free markets, since the customer is presumed to gravitate to a facility according to the distance to it and its attractiveness. The recently introduced gravity p-median model offers an extension to the p-median model that accounts for this. The model is therefore potentially interesting, although it had not yet been implemented and tested empirically. In this paper, we implement the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare-parts, in order to investigate its potential superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model or gives unstable solutions due to a non-concave objective function.
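The difference between the two models is entirely in the allocation rule: the p-median sends each customer to the nearest open facility, while the gravity version spreads patronage across facilities with Huff-style probabilities. The sketch below evaluates both objectives for a fixed candidate set; the exponential distance decay, its parameter, and the attractiveness values are illustrative assumptions.

```python
import numpy as np

def p_median_cost(d, w):
    """d: distances (customers x open facilities); w: customer weights.
    Each customer travels to the nearest open facility."""
    return float(w @ d.min(axis=1))

def gravity_p_median_cost(d, w, attract, beta=0.1):
    """Expected travel distance when customer i visits facility j with
    Huff probability A_j*exp(-beta*d_ij) / sum_k A_k*exp(-beta*d_ik)."""
    util = attract * np.exp(-beta * d)              # customers x facilities
    prob = util / util.sum(axis=1, keepdims=True)   # allocation probabilities
    return float(w @ (prob * d).sum(axis=1))

# 3 customers, 2 candidate sites (toy numbers):
d = np.array([[2.0, 9.0], [6.0, 3.0], [8.0, 4.0]])
w = np.array([10.0, 5.0, 8.0])
print(p_median_cost(d, w))
print(gravity_p_median_cost(d, w, attract=np.array([1.0, 2.0])))
```

Searching over which p sites to open then means minimizing this cost over candidate subsets; the gravity objective's probabilistic allocation is what produces the non-concave surface, and hence the unstable solutions, reported above.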