35 results for the EFQM excellence model
in Aston University Research Archive
Abstract:
With the advent of globalisation, companies all around the world must improve their performance in order to survive. The threats come from everywhere and in different forms, such as low-cost products, high-quality products, new technologies, and new products. Companies in different countries use various techniques and quality criteria items to strive for excellence, and continuous improvement techniques enable them to improve their operations. Companies therefore use techniques such as TQM, Kaizen, Six Sigma and Lean Manufacturing, and quality award criteria items such as Customer Focus, Human Resources, Information & Analysis, and Process Management. The purpose of this paper is to compare the use of these techniques and criteria items in two countries, Mexico and the United Kingdom, which differ in culture and industrial structure. In terms of continuous improvement tools and techniques, Mexico formally began by creating its National Quality Award soon after the Americans launched the Malcolm Baldrige National Quality Award, while the United Kingdom formally started with the European Quality Award (EQA), later modified and renamed the EFQM Excellence Model. The methodology of this study was to undertake a literature review of the subject matter and to examine some general applications around the world. A questionnaire survey was then designed and administered using the same scale, about the same sample size, and about the same industrial sector in the two countries. The survey presents a brief definition of each construct to facilitate understanding of the questions. The data were then analysed with the assistance of a statistical software package. The survey results indicate both similarities and differences in the strengths and weaknesses of the companies in the two countries.
One outcome of the analysis is that it enables the companies to use the results to benchmark themselves and thus act to reinforce their strengths and to reduce their weaknesses.
Abstract:
The size frequency distributions of diffuse, primitive and classic β-amyloid (Aβ) deposits were studied in single sections of cortical tissue from patients with Alzheimer's disease (AD) and Down's syndrome (DS) and compared with those predicted by the log-normal model. In a sample of brain regions, these size distributions were compared with those obtained by serial reconstruction through the tissue, and the data were used to adjust the size distributions obtained in single sections. The adjusted size distributions of the diffuse, primitive and classic deposits deviated significantly from a log-normal model in AD and DS, the greatest deviations from the model being observed in AD. More Aβ deposits were observed close to the mean, and fewer in the larger size classes, than predicted by the model. Hence, the growth of Aβ deposits in AD and DS does not strictly follow the log-normal model, deposits growing to within a more restricted size range than predicted. However, Aβ deposits grow to a larger size in DS than in AD, which may reflect differences in the mechanism of Aβ formation.
Abstract:
The size frequency distributions of diffuse, primitive and cored senile plaques (SP) were studied in single sections of the temporal lobe from 10 patients with Alzheimer's disease (AD). The size distribution curves were unimodal and positively skewed. The size distribution curve of the diffuse plaques was shifted towards larger plaques, while those of the neuritic and cored plaques were shifted towards smaller plaques. The neuritic/diffuse plaque ratio was maximal in the 11–30 micron size class and the cored/diffuse plaque ratio in the 21–30 micron size class. The size distribution curves of the three types of plaque deviated significantly from a log-normal distribution. Distributions expressed on a logarithmic scale were 'leptokurtic', i.e. with an excess of observations near the mean. These results suggest that SP in AD grow to within a more restricted size range than predicted from a log-normal model. In addition, there appear to be differences in the patterns of growth of diffuse, primitive and cored plaques. If neuritic and cored plaques develop from earlier diffuse plaques, then smaller diffuse plaques are more likely to be converted to mature plaques.
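As an illustrative aside, the kind of comparison made in the two abstracts above — fitting a log-normal model to a size distribution and checking the fit and the kurtosis of the log-transformed sizes — can be sketched as follows. The data, parameters, and the choice of a Kolmogorov–Smirnov test are assumptions for illustration, not the authors' actual procedure.

```python
# Sketch: fit a log-normal to (hypothetical) plaque diameters, test goodness
# of fit, and check whether the log-diameters are leptokurtic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical plaque diameters in microns (synthetic, for illustration only)
diameters = rng.lognormal(mean=3.0, sigma=0.5, size=500)

# Fit a log-normal with location fixed at zero, then test the fit (K-S test)
shape, loc, scale = stats.lognorm.fit(diameters, floc=0)
ks_stat, p_value = stats.kstest(diameters, 'lognorm', args=(shape, loc, scale))

# Excess kurtosis of log-diameters: a positive value ("leptokurtic") means
# more observations near the mean than a normal distribution predicts
excess_kurtosis = stats.kurtosis(np.log(diameters))
print(ks_stat, p_value, excess_kurtosis)
```

A significant K-S statistic (small p-value) together with positive excess kurtosis would correspond to the pattern the abstracts describe: deposits confined to a narrower size range than the log-normal model predicts.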
Abstract:
The number of new chemical entities (NCEs) is increasing every day following the introduction of combinatorial chemistry and high-throughput screening into the drug discovery cycle. One third of these new compounds have an aqueous solubility of less than 20 µg/mL [1]. A great deal of interest has therefore been directed at the salt formation technique to overcome solubility limitations. This study aims to improve the solubility of a Biopharmaceutical Classification System class II (BCS II) model drug (indomethacin; IND) using basic amino acids (L-arginine, L-lysine and L-histidine) as counterions. Three new salts were prepared using the freeze-drying method and characterised by FT-IR spectroscopy, proton nuclear magnetic resonance (¹H NMR), differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). The effect of pH on IND solubility was also investigated using a pH-solubility profile. Both arginine and lysine formed novel salts with IND, while histidine failed to dissociate the free acid and thus no salt was formed. Arginine and lysine increased IND solubility by 10,000- and 2296-fold, respectively. An increase in dissolution rate was also observed for the novel salts. Since these new salts have improved IND solubility to a level similar to that of BCS class I drugs, IND salts could be considered for possible waivers of bioequivalence.
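For a weak acid such as indomethacin, the pH-solubility profile mentioned above is commonly described (below pH_max) by the Henderson–Hasselbalch relation S = S0 · (1 + 10^(pH − pKa)). The sketch below is purely illustrative: the intrinsic solubility S0 and the pKa value are assumed placeholder numbers, not the study's measured data.

```python
# Illustrative pH-solubility profile for a weak acid (Henderson-Hasselbalch).
# S0 (intrinsic solubility) and pKa are assumed values for demonstration.
def weak_acid_solubility(pH, pKa=4.5, S0_ug_per_mL=1.0):
    """Total solubility of a weak acid at a given pH (valid below pH_max)."""
    return S0_ug_per_mL * (1.0 + 10.0 ** (pH - pKa))

for pH in (2.0, 4.5, 6.5):
    print(pH, round(weak_acid_solubility(pH), 3))
```

The relation shows why basic counterions raise solubility: the ionised fraction, and hence total solubility, grows tenfold per pH unit above the pKa until the salt's solubility product takes over.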
Abstract:
This paper examines the implications of the EEC common energy policy for the UK energy sector as represented by a long-term programming model. The model suggests that the UK will be a substantial net exporter of energy in 1985 and will therefore make an important contribution towards the EEC's efforts to meet its import dependency target of 50% or less of gross inland consumption. Furthermore, the UK energy sector could operate within the 1985 EEC energy policy constraints with relatively low extra cost up to the year 2020 (the end of the period covered by the model). The main effect of the constraints would be to bring forward the production of synthetic gas and oil from coal.
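The structure of such a long-term programming model — minimising supply cost subject to demand and a policy cap on import dependency — can be sketched as a toy linear programme. All numbers and the two-variable formulation are illustrative assumptions; the actual model covers many fuels, technologies, and time periods.

```python
# Toy linear programme with an import-dependency constraint
# (imports <= 50% of gross consumption), in the spirit of the model above.
from scipy.optimize import linprog

# Decision variables: [domestic_production, imports] (arbitrary energy units)
cost = [3.0, 2.0]            # imports assumed cheaper per unit in this toy case
demand = 100.0

A_ub = [[-1.0, -1.0],        # -(production + imports) <= -demand
        [0.0, 1.0]]          # imports <= 0.5 * demand
b_ub = [-demand, 0.5 * demand]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)  # imports pushed to the 50% cap, remainder produced domestically
```

The shadow prices of such constraints are what allow a study like this one to report the "extra cost" of operating within the EEC policy limits.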
Abstract:
In this paper we investigate whether consideration of store-level heterogeneity in marketing mix effects improves the accuracy of the marketing mix elasticities, fit, and forecasting accuracy of the widely applied SCAN*PRO model of store sales. Models with continuous and discrete representations of heterogeneity, estimated using hierarchical Bayes (HB) and finite mixture (FM) techniques, respectively, are empirically compared to the original model, which does not account for store-level heterogeneity in marketing mix effects, and is estimated using ordinary least squares (OLS). The empirical comparisons are conducted in two contexts: Dutch store-level scanner data for the shampoo product category, and an extensive simulation experiment. The simulation investigates how between- and within-segment variance in marketing mix effects, error variance, the number of weeks of data, and the number of stores impact the accuracy of marketing mix elasticities, model fit, and forecasting accuracy. Contrary to expectations, accommodating store-level heterogeneity does not improve the accuracy of marketing mix elasticities relative to the homogeneous SCAN*PRO model, suggesting that little may be lost by employing the original homogeneous SCAN*PRO model estimated using ordinary least squares. Improvements in fit and forecasting accuracy are also fairly modest. We pursue an explanation for this result since research in other contexts has shown clear advantages from assuming some type of heterogeneity in market response models. In an Afterthought section, we comment on the controversial nature of our result, distinguishing factors inherent to household-level data and associated models vs. general store-level data and associated models vs. the unique SCAN*PRO model specification.
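The homogeneous benchmark in this comparison is a multiplicative sales-response model made linear by a log transform and estimated by OLS. The sketch below illustrates that estimation strategy on synthetic data; the variable set (one price index, one feature indicator) and all coefficients are assumptions for illustration — the full SCAN*PRO specification also includes cross-brand prices, display terms, and weekly and store indices.

```python
# Sketch: OLS estimation of a multiplicative (log-log) store-sales model,
# the homogeneous benchmark in the SCAN*PRO comparison. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_weeks = 104
log_price_index = np.log(rng.uniform(0.7, 1.0, n_weeks))  # actual/regular price
feature = rng.integers(0, 2, n_weeks).astype(float)       # feature-ad indicator

true_elasticity, true_feature_lift = -2.5, 0.3            # assumed true values
log_sales = (5.0 + true_elasticity * log_price_index
             + true_feature_lift * feature
             + rng.normal(0, 0.1, n_weeks))               # multiplicative error

# OLS on the log-linearised model recovers elasticities directly
X = np.column_stack([np.ones(n_weeks), log_price_index, feature])
coef, *_ = np.linalg.lstsq(X, log_sales, rcond=None)
print(coef)  # [intercept, price elasticity, feature effect]
```

Because the model is linear in logs, the price coefficient is itself the elasticity — the quantity whose estimation accuracy the paper compares across OLS, HB, and FM approaches.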
Abstract:
This chapter explains a functional integral approach to an impurity in the Tomonaga–Luttinger model. The Tomonaga–Luttinger model of one-dimensional (1D) strongly correlated electrons gives a striking example of non-Fermi-liquid behavior. For simplicity, the chapter considers only a single-mode Tomonaga–Luttinger model, with one species of right- and left-moving electrons, thus omitting spin indices and eventually considering the simplest linearized model of a single-valley parabolic electron band. The standard operator bosonization is one of the most elegant methods developed in theoretical physics. The main advantage of bosonization, in either standard or functional form, is that including the quartic electron–electron interaction does not substantially change the free action. The chapter demonstrates how to develop the formalism of bosonization based on the functional integral representation of observable quantities within the Keldysh formalism.
Abstract:
Recent investigations into cross-country convergence follow Mankiw, Romer, and Weil (1992) in using a log-linear approximation to the Swan-Solow growth model to specify regressions. These studies tend to assume a common and exogenous technology. In contrast, the technology catch-up literature endogenises the growth of technology. The use of capital stock data renders the approximations and over-identification of the Mankiw model unnecessary and enables us, using dynamic panel estimation, to estimate the separate contributions of diminishing returns and technology transfer to the rate of conditional convergence. We find that both effects are important.
Abstract:
In this rejoinder, we provide a response to the three commentaries written by Diamantopoulos, Howell, and Rigdon (all this issue) on our paper 'The MIMIC Model and Formative Variables: Problems and Solutions' (also this issue). We contrast the approach taken in that paper (where we focus on clarifying the assumptions required to reject the formative MIMIC model) by discussing what assumptions would be necessary to accept the use of the formative MIMIC model as a viable approach. Importantly, we clarify the implications of entity realism and show how it is entirely logical that some theoretical constructs can be considered to have real existence independent of their indicators, while some cannot. We show how the formative model only logically holds when considering these 'unreal' entities. In doing so, we provide important counter-arguments to many of the criticisms made in Diamantopoulos' commentary, and the distinction also helps clarify a number of issues in the commentaries of Howell and Rigdon (both of which broadly agree with our original paper). We draw together these various threads to provide a set of conceptual tools researchers can use when thinking about the entities in their theoretical models.
Abstract:
UncertWeb is a European research project running from 2010 to 2013 that will realize the uncertainty-enabled model web. The assumption is that data services, in order to be useful, need to provide information about the accuracy or uncertainty of their data in a machine-readable form. Models taking these data as input should understand this information and propagate errors through model computations, and should quantify and communicate the errors or uncertainties generated by their own approximations. The project will develop technology to realize this and provide demonstration case studies.
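The core idea — a service publishing a value with a machine-readable uncertainty description, and a downstream model propagating it — can be illustrated with a simple Monte Carlo sketch. The toy model, the Gaussian error description, and all numbers are assumptions for illustration, not UncertWeb components.

```python
# Sketch of uncertainty propagation through a model chain: sample the
# uncertain input, run the model on each sample, summarise the output.
import numpy as np

rng = np.random.default_rng(42)

def model(temperature_c):
    """A toy downstream model consuming the uncertain input."""
    return 2.0 * temperature_c + 1.0

# Input published in machine-readable form as a mean and standard deviation
mean, sd = 15.0, 0.5
samples = rng.normal(mean, sd, size=10_000)
outputs = model(samples)

# Output uncertainty quantified and communicated alongside the result
print(outputs.mean(), outputs.std())
```

For a linear model like this toy one the output uncertainty could be derived analytically; Monte Carlo sampling is the general-purpose approach when the chained models are nonlinear black boxes, which is the situation a model web must handle.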
Abstract:
We investigate a class of simple models for Langevin dynamics of turbulent flows, including the one-layer quasi-geostrophic equation and the two-dimensional Euler equations. Starting from a path integral representation of the transition probability, we compute the most probable fluctuation paths from one attractor to any state within its basin of attraction. We prove that such fluctuation paths are the time reversed trajectories of the relaxation paths for a corresponding dual dynamics, which are also within the framework of quasi-geostrophic Langevin dynamics. Cases with or without detailed balance are studied. We discuss a specific example for which the stationary measure displays either a second order (continuous) or a first order (discontinuous) phase transition and a tricritical point. In situations where a first order phase transition is observed, the dynamics are bistable. Then, the transition paths between two coexisting attractors are instantons (fluctuation paths from an attractor to a saddle), which are related to the relaxation paths of the corresponding dual dynamics. For this example, we show how one can analytically determine the instantons and compute the transition probabilities for rare transitions between two attractors.
Abstract:
The small-scale energy-transfer mechanism in zero-temperature superfluid turbulence of helium-4 is still a widely debated topic. Currently, the main hypothesis is that weakly nonlinear interacting Kelvin waves (KWs) transfer energy to sufficiently small scales such that energy is dissipated as heat via phonon excitations. Theoretically, there are at least two proposed theories for Kelvin-wave interactions. We perform the most comprehensive numerical simulation of weakly nonlinear interacting KWs to date and show, using a specially designed numerical algorithm incorporating the full Biot-Savart equation, that our results are consistent with the nonlocal six-wave KW interactions as proposed by L'vov and Nazarenko.
Abstract:
In recent years there have been a number of high-profile plant closures in the UK. In several cases, the policy response has included setting up a task force to deal with the impacts of the closure. It can be hypothesised that a task force involving multi-level working across territorial boundaries and tiers of government is crucial to devising a policy response tailored to people's needs and to ensuring success in dealing with the immediate impacts of a closure. This suggests that leadership and vision, partnership working and community engagement, and delivery of high-quality services are important. This paper examines the case of the MG Rover closure in 2005 to assess the extent to which the policy response at the national, regional and local levels dealt effectively with the immediate impacts of the closure, and the lessons that can be learned from the experience. Such lessons are of particular relevance given the closure of the LDV van plant in Birmingham in 2009 and, more broadly, cases such as the downsizing of the Opel operation in Europe following its takeover by Magna.