890 results for the EFQM excellence model
Abstract:
The nonequilibrium phase transition of the one-dimensional triplet-creation model is investigated using the n-site approximation scheme. We find that the phase diagram in the space of parameters (gamma, D), where gamma is the particle decay probability and D is the diffusion probability, exhibits a tricritical point for n >= 4. However, fitting the tricritical coordinates (gamma_t, D_t) with data for 4 <= n <= 13 predicts that gamma_t becomes negative for n >= 26, thus indicating that the phase transition is always continuous in the limit n -> infinity. Nevertheless, the large discrepancies between the critical parameters obtained in this limit and those obtained by Monte Carlo simulations, as well as a puzzling non-monotonic dependence of these parameters on the order of the approximation n, argue for the inadequacy of the n-site approximation for studying the triplet-creation model at computationally feasible values of n.
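As a purely illustrative sketch of the extrapolation described above (the gamma_t values below are hypothetical placeholders, not the paper's data), one can fit the tricritical coordinate against 1/n and ask at which order the fit crosses zero:

```python
import numpy as np

# Hypothetical tricritical decay probabilities gamma_t from the n-site
# approximation at several orders n (placeholder values for illustration only).
n_values = np.array([4, 6, 8, 10, 12, 13], dtype=float)
gamma_t  = np.array([0.150, 0.083, 0.050, 0.030, 0.017, 0.012])

# Fit gamma_t ~ a*(1/n) + b and extrapolate to large n.
a, b = np.polyfit(1.0 / n_values, gamma_t, deg=1)

# If the extrapolated intercept b is negative, gamma_t turns negative for
# n > -a/b, i.e. the tricritical point leaves the physical region and the
# transition is predicted to be continuous in the n -> infinity limit.
if b < 0:
    print(f"gamma_t(n) ~ {a:.3f}/n {b:+.3f}; negative for n > {(-a / b):.1f}")
else:
    print(f"gamma_t(n) ~ {a:.3f}/n {b:+.3f}; stays positive for all n")
```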
Abstract:
We study a long-range percolation model whose dynamics describe the spreading of an infection on an infinite graph. We obtain a sufficient condition for a phase transition and prove an upper bound for the critical parameter of spherically symmetric trees.
Abstract:
Science centres offer one of the best opportunities for the informal study of natural science. Learning in a science centre has many advantages over traditional methods: it can motivate visitors and provide them with a social experience, and it can improve people's understanding and attitudes, thereby fostering a wider interest in natural science. In science centres, pupils show interest, enthusiasm, motivation, self-confidence and sensitivity, and they are more open and eager to learn. Traditional school classes, however, mostly do not favour these capabilities. This research presents a qualitative study in a science centre. Data were gathered from observations and interviews at the Science North science centre in Canada. Pupils' learning behaviours were studied at different exhibits in the science centre. The learning behaviours are classified as follows: reading labels, experimenting with the exhibits, observing others or the exhibit, using a guide, repeating the activity, positive emotional response, acknowledged relevance, and seeking and sharing information. The research shows that, in general, pupils do not read labels; in most cases they do not use the guide's help; they prefer exhibits that allow a high level of interactivity; and they display more learning behaviours at such highly interactive exhibits.
Chinese Basic Pension Substitution Rate: A Monte Carlo Demonstration of the Individual Account Model
Abstract:
At the end of 2005, the State Council of China passed "The Decision on Adjusting the Individual Account of the Basic Pension System", which adjusted the individual account of the 1997 basic pension system. In this essay, we analyze this adjustment and use life annuity actuarial theory to establish a model of the basic pension substitution rate. Monte Carlo simulation is also used to demonstrate the soundness of the model. Some suggestions concerning the substitution rate under the current policy are put forward.
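A minimal sketch of the kind of Monte Carlo exercise described (every parameter below is a hypothetical placeholder, not a value from the 2005 policy or the paper): simulate the accumulation of an individual account under random returns and wage growth, then estimate the substitution rate as the resulting pension relative to the final wage.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters (placeholders for illustration only).
years = 35                      # contribution years
contrib_rate = 0.08             # share of the wage paid into the individual account
wage_growth = 0.05              # average annual wage growth
ret_mean, ret_sd = 0.04, 0.02   # mean and std. dev. of annual account returns
annuity_years = 15              # assumed payout period used to convert the balance

def substitution_rate():
    wage, balance = 1.0, 0.0
    for _ in range(years):
        balance = balance * (1 + rng.normal(ret_mean, ret_sd)) + contrib_rate * wage
        wage *= 1 + wage_growth
    pension = balance / annuity_years
    return pension / wage       # pension relative to the final wage

rates = np.array([substitution_rate() for _ in range(10_000)])
print(f"mean substitution rate: {rates.mean():.3f} "
      f"(95% band: {np.percentile(rates, 2.5):.3f}-{np.percentile(rates, 97.5):.3f})")
```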
Abstract:
The Open Provenance Model is a model of provenance that is designed to meet the following requirements: (1) to allow provenance information to be exchanged between systems, by means of a compatibility layer based on a shared provenance model; (2) to allow developers to build and share tools that operate on such a provenance model; (3) to define provenance in a precise, technology-agnostic manner; (4) to support a digital representation of provenance for any 'thing', whether produced by computer systems or not; (5) to allow multiple levels of description to coexist; and (6) to define a core set of rules that identify the valid inferences that can be made on provenance representations. This document contains the specification of the Open Provenance Model (v1.1), resulting from a community effort to achieve interoperability in the Provenance Challenge series.
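To make the core vocabulary concrete, here is a minimal sketch (not an official OPM serialization) of a causal graph built from the model's node types (artifact, process, agent) and dependency edges such as used, wasGeneratedBy, wasControlledBy and wasDerivedFrom; edges point from effect to cause.

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceGraph:
    """Minimal in-memory representation of an OPM-style causal graph."""
    nodes: dict = field(default_factory=dict)   # id -> kind ("artifact" | "process" | "agent")
    edges: list = field(default_factory=list)   # (effect_id, dependency_label, cause_id)

    def add_node(self, node_id, kind):
        self.nodes[node_id] = kind

    def add_edge(self, effect, label, cause):
        self.edges.append((effect, label, cause))

g = ProvenanceGraph()
g.add_node("raw.csv", "artifact")
g.add_node("clean", "process")
g.add_node("clean.csv", "artifact")
g.add_node("alice", "agent")

g.add_edge("clean", "used", "raw.csv")               # the process used an input artifact
g.add_edge("clean.csv", "wasGeneratedBy", "clean")   # the output was generated by the process
g.add_edge("clean", "wasControlledBy", "alice")      # the process was controlled by an agent
g.add_edge("clean.csv", "wasDerivedFrom", "raw.csv") # one artifact was derived from another

for effect, label, cause in g.edges:
    print(f"{effect} --{label}--> {cause}")
```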
Abstract:
A description of a data item's provenance can be provided in different forms, and which form is best depends on the intended use of that description. Because of this, different communities have made quite distinct underlying assumptions in their models for electronically representing provenance. Approaches deriving from the library and archiving communities emphasise an agreed vocabulary by which resources can be described and, in particular, their attribution asserted (who created the resource, who modified it, where it was stored, etc.). The primary purpose here is to provide intuitive metadata by which users can search for and index resources. In comparison, models for representing the results of scientific workflows have been developed with the assumption that each event or piece of intermediary data in a process' execution can and should be documented, to give a full account of the experiment undertaken. These occurrences are connected together by stating where one derived from, triggered, or otherwise caused another, and so form a causal graph. Mapping between the two approaches would be beneficial in integrating systems and exploiting the strengths of each. In this paper, we specify such a mapping between Dublin Core and the Open Provenance Model. We further explain the technical issues to overcome and the rationale behind the approach, so that the same method can be applied in mapping similar schemes.
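An illustrative (hypothetical) fragment of the kind of mapping discussed, re-expressing Dublin Core attribution terms as elements of a small causal graph around the process that produced the resource; this is not the paper's actual specification.

```python
# Hypothetical Dublin Core -> OPM mapping fragment (illustration only).
DC_TO_OPM = {
    "dc:creator": "agent linked to the generating process via wasControlledBy",
    "dc:source":  "wasDerivedFrom edge from the resource to its source artifact",
    "dc:date":    "timestamp attached to the resource's wasGeneratedBy edge",
}

for dc_term, opm_pattern in DC_TO_OPM.items():
    print(f"{dc_term:12s} -> {opm_pattern}")
```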
Abstract:
Excessive labor turnover may be considered, to a great extent, an undesirable feature of a given economy. This follows from considerations such as firms' underinvestment in human capital. Understanding the determinants and the evolution of turnover in a particular labor market is therefore of paramount importance, not least for policy purposes. The present paper proposes an econometric analysis of turnover in the Brazilian labor market, based on a partial observability bivariate probit model. This model considers the interdependence of the decisions taken by workers and firms, helping to elucidate the causes that lead each of them to end an employment relationship. The Employment and Unemployment Survey (PED), conducted by the State System of Data Analysis (SEADE) and the Inter-Union Department of Statistics and Socioeconomic Studies (DIEESE), provides data at the individual worker level, allowing estimation of the joint probabilities of the worker's decision to quit or stay on the job and the firm's decision to keep or fire the employee during a given time period. The estimated parameters relate these probabilities to the characteristics of workers and job contracts, and to potential macroeconomic determinants in different time periods. The results confirm the theoretical prediction that the probability of termination of an employment relationship tends to fall as the worker acquires specific skills. They also show that a formal employment relationship reduces the probability of a quit decision by the worker and, in non-industrial sectors, of the firm's firing decision. With regard to the evolution of the quit probability over time, the results show that an increase in the unemployment rate inhibits quitting, although this effect tends to wane as the unemployment rate rises.
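For concreteness, here is a minimal sketch of a partial-observability bivariate probit likelihood of the Poirier type (the simulated data and covariates below are hypothetical; a real application would use the PED microdata). Only the product of the two latent binary decisions, worker stays and firm retains, is observed, so the probability of a continued match is the bivariate normal CDF of the two index functions.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

def neg_loglik(params, y, X1, X2):
    """Negative log-likelihood when only y = y1*y2 is observed."""
    k1 = X1.shape[1]
    b1, b2 = params[:k1], params[k1:-1]
    rho = np.tanh(params[-1])                 # keeps the correlation in (-1, 1)
    z1, z2 = X1 @ b1, X2 @ b2
    cov = [[1.0, rho], [rho, 1.0]]
    # P(y = 1) = Phi_2(x1'b1, x2'b2; rho), evaluated observation by observation
    p = np.array([multivariate_normal.cdf([a, b], mean=[0.0, 0.0], cov=cov)
                  for a, b in zip(z1, z2)])
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Small simulated example (hypothetical data, not the PED survey).
rng = np.random.default_rng(0)
n = 200
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])   # worker-side covariates
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])   # firm-side covariates
eps = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=n)
y = ((X1 @ [0.2, 0.8] + eps[:, 0] > 0) &
     (X2 @ [-0.1, 0.5] + eps[:, 1] > 0)).astype(float)   # match continues only if both agree

start = np.zeros(X1.shape[1] + X2.shape[1] + 1)
fit = minimize(neg_loglik, start, args=(y, X1, X2), method="BFGS",
               options={"maxiter": 50})
print("estimated (b1, b2, atanh(rho)):", np.round(fit.x, 3))
```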
Abstract:
When the joint assumption of optimal risk sharing and coincidence of beliefs is added to the collective model of Browning and Chiappori (1998), income pooling and symmetry of the pseudo-Hicksian matrix are shown to be restored. Because these are also the features of the unitary model usually rejected in empirical studies, one may argue that these assumptions are at odds with the evidence. We argue that this need not be the case. The use of cross-section data to generate price and income variation is based on a definition of income pooling or symmetry suitable for testing the unitary model, but not the collective model with risk sharing. Also, by relaxing the assumptions on beliefs, we show that symmetry and income pooling are lost. However, under the usual assumptions on the existence of assignable goods, we show that beliefs are identifiable. More importantly, if differences in beliefs are not too extreme, the risk-sharing hypothesis is still testable.
Abstract:
We study the effects of population size in the Peck-Shell analysis of bank runs. We find that a contract featuring equal treatment for almost all depositors of the same type approximates the optimum. Because the approximation also satisfies the Green-Lin incentive constraints, under which the planner discloses positions in the queue, welfare in these alternative specifications is sandwiched. Disclosure, however, is not needed, since our approximating contract is not subject to runs.
Abstract:
I study the welfare cost of inflation and the effect on prices of a permanent increase in the interest rate. In the steady state, real money demand is homogeneous of degree one in income and its interest-rate elasticity is approximately −1/2. Consumers are indifferent between an economy with 10% p.a. inflation and one with zero inflation if their income is 1% higher in the first economy. A permanent increase in the interest rate makes the price level drop initially and inflation adjust slowly to its steady-state level.
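As a back-of-the-envelope illustration of how such numbers arise (a sketch under a standard constant-elasticity, Bailey/Lucas-style money demand, not necessarily the paper's exact specification), write real balances relative to income as a power function of the nominal rate and measure the welfare cost as the area under the demand curve:

```latex
% Constant-elasticity money demand: elasticity with respect to i equals -1/2.
\[
  \frac{m}{Y} \;=\; A\, i^{-1/2}
  \qquad\Longrightarrow\qquad
  \frac{\partial \ln (m/Y)}{\partial \ln i} \;=\; -\tfrac{1}{2}.
\]
% Bailey-style welfare cost as a fraction of income: area under the demand
% curve net of the interest paid on the balances actually held.
\[
  w(i) \;=\; \int_0^{i} A\, x^{-1/2}\, dx \;-\; i \cdot A\, i^{-1/2}
        \;=\; 2A\sqrt{i} - A\sqrt{i} \;=\; A\sqrt{i}.
\]
```

Evaluating w(i) at the nominal rates implied by 10% versus zero inflation then yields a compensating-income figure of the order of 1% for plausible values of the (assumed, placeholder) level parameter A.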