33 results for Decomposition algorithms


Relevance:

20.00%

Publisher:

Abstract:

In recent work, the concentration index has been widely used as a measure of income-related health inequality. The purpose of this note is to illustrate two different methods for decomposing the overall health concentration index, using data collected from a Short Form (SF-36) survey of the general Australian population conducted in 1995. For simplicity, we focus on the physical functioning scale of the SF-36. First, we examine decomposition 'by component' by separating the concentration index for the physical functioning scale into the ten items on which it is based. The results show that the items contribute differently to the overall inequality measure; for example, two of the items contributed 13% and 5%, respectively, to the overall measure. Second, to illustrate the 'by subgroup' method, we decompose the concentration index by employment status. This involves separating the population into two groups: individuals currently in employment and individuals not currently employed. We find that the inequality between these groups is about five times greater than the inequality within each group. These methods provide insights into the nature of inequality that can be used to inform policy design to reduce income-related health inequalities. Copyright (C) 2002 John Wiley & Sons, Ltd.
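
A rough sketch of the quantities above, assuming synthetic data: the concentration index is computed from the standard covariance formula, C = 2*cov(health, fractional income rank)/mean(health), and a crude between-group component is obtained by replacing each score with its employment-group mean. The data, variable names and this shortcut decomposition are illustrative assumptions, not the SF-36 data or the exact weighting scheme of the note.

import numpy as np

def concentration_index(health, income):
    # C = 2 * cov(health, fractional income rank) / mean(health)
    n = len(income)
    rank = np.empty(n)
    rank[np.argsort(income)] = (np.arange(n) + 0.5) / n  # fractional rank by income
    return 2.0 * np.cov(health, rank, bias=True)[0, 1] / health.mean()

rng = np.random.default_rng(0)
income = rng.lognormal(10.0, 0.5, 2000)        # hypothetical incomes
employed = rng.random(2000) < 0.6              # hypothetical employment status
health = 55.0 + 10.0 * employed + 0.5 * np.log(income) + rng.normal(0.0, 10.0, 2000)

C_total = concentration_index(health, income)
# Between-group piece: replace each score by its group mean and recompute the index.
group_mean = np.where(employed, health[employed].mean(), health[~employed].mean())
C_between = concentration_index(group_mean, income)
print(f"overall C = {C_total:.4f}, between-group component = {C_between:.4f}")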

Relevance:

20.00%

Publisher:

Abstract:

A robust semi-implicit central partial difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. It can be used for calculating correlation functions of systems of interacting stochastic fields. Such field equations can arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used for studying the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in chi^(2) parametric waveguides. This example uses a non-diagonal coherent state representation, and correctly predicts the sub-shot-noise spectral fluctuations observed in homodyne detection measurements. It is expected that the methods used will be applicable to higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. This involves solving the nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed. (C) 1997 Academic Press.
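
The sketch below shows the general shape of a semi-implicit (iterated implicit-midpoint) step with central spatial differences for a single stochastic parabolic PDE with multiplicative noise, du/dt = D d2u/dx2 + g*u*xi(x,t). It is a generic illustration of this class of scheme under assumed parameters, not the coupled chi^(2) waveguide equations or the reference-process variance reduction described in the paper.

import numpy as np

def laplacian(u, dx):
    # Central second difference with periodic boundaries.
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

def semi_implicit_step(u, dt, dx, D, g, rng, n_iter=3):
    # The noise realisation is held fixed while the midpoint field is iterated
    # to self-consistency; the full step is then completed from the midpoint.
    xi = rng.normal(0.0, 1.0, u.shape) / np.sqrt(dt * dx)   # delta-correlated noise
    u_mid = u.copy()
    for _ in range(n_iter):
        drift = D * laplacian(u_mid, dx) + g * u_mid * xi
        u_mid = u + 0.5 * dt * drift
    return 2.0 * u_mid - u

rng = np.random.default_rng(1)
nx, dx, dt = 128, 0.1, 1e-3
u = np.exp(-((np.arange(nx) * dx - 6.4) ** 2))              # assumed initial pulse
for _ in range(1000):
    u = semi_implicit_step(u, dt, dx, D=0.5, g=0.1, rng=rng)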

Relevance:

20.00%

Publisher:

Abstract:

The concept of parameter-space size adjustment is proposed in order to enable the successful application of genetic algorithms to continuous optimization problems. The performance of genetic algorithms with six different combinations of selection and reproduction mechanisms, with and without parameter-space size adjustment, was rigorously tested on eleven multi-minima test functions. The best-performing algorithm was then employed to determine the model parameters of the optical constants of Pt, Ni and Cr.
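
Since the abstract does not spell out the adjustment mechanism, the sketch below shows one plausible reading: a bare-bones real-valued genetic algorithm whose search bounds are periodically contracted around the best individual. The Rastrigin test function, operators and shrink schedule are assumptions, not the six operator combinations or the optical-constants model of the paper.

import numpy as np

def rastrigin(x):
    # Standard multi-minima test function (used here only as a stand-in).
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def ga_with_size_adjustment(f, lo, hi, pop=40, gens=200, shrink=0.95, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    X = rng.uniform(lo, hi, (pop, lo.size))
    for g in range(gens):
        fit = np.array([f(x) for x in X])
        best = X[fit.argmin()].copy()
        # Binary tournament selection.
        a, b = rng.integers(0, pop, (2, pop))
        parents = np.where((fit[a] < fit[b])[:, None], X[a], X[b])
        # Arithmetic crossover with a shuffled partner, then Gaussian mutation.
        partners = parents[rng.permutation(pop)]
        w = rng.random((pop, 1))
        X = w * parents + (1.0 - w) * partners
        X += rng.normal(0.0, 0.05 * (hi - lo), X.shape)
        X[0] = best                                  # elitism
        # "Parameter-space size adjustment" (our reading): contract the bounds around the best point.
        if g % 20 == 19:
            half = 0.5 * shrink * (hi - lo)
            lo, hi = best - half, best + half
        X = np.clip(X, lo, hi)
    return best

print(ga_with_size_adjustment(rastrigin, [-5.12] * 5, [5.12] * 5))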

Relevance:

20.00%

Publisher:

Abstract:

We suggest a new notion of behaviour-preserving refinement based on partial-order semantics, which we call transition refinement. We introduced transition refinement for elementary (low-level) Petri nets in earlier work. For modelling and verifying complex distributed algorithms, however, high-level (Algebraic) Petri nets are usually used. In this paper, we define transition refinement for Algebraic Petri nets. This notion is more powerful than transition refinement for elementary Petri nets because it corresponds to the simultaneous refinement of several transitions in an elementary net. Transition refinement is particularly suitable for refinement steps that increase the degree of distribution of an algorithm, for example when synchronous communication is replaced by asynchronous message passing. We study how to prove that a given replacement of a transition is indeed a transition refinement.
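
The sketch below illustrates only the structural side of such a step: replacing one transition of a place/transition net by a small subnet, in the spirit of the synchronous-to-asynchronous example just mentioned. The data structures and names are assumptions, and the semantic condition (based on partial-order runs) under which the replacement preserves behaviour is exactly what the paper develops and is not checked here.

from dataclasses import dataclass, field

@dataclass
class Net:
    # transition name -> (set of input places, set of output places)
    transitions: dict = field(default_factory=dict)

def refine_transition(net: Net, t: str, subnet: Net, start: str, end: str) -> Net:
    # Replace t by subnet: the subnet's start transition inherits t's input
    # places and its end transition inherits t's output places.
    pre, post = net.transitions[t]
    new = Net(dict(net.transitions))
    del new.transitions[t]
    for name, (s_pre, s_post) in subnet.transitions.items():
        new.transitions[name] = (set(s_pre), set(s_post))
    s_pre, s_post = new.transitions[start]
    new.transitions[start] = (s_pre | pre, s_post)
    e_pre, e_post = new.transitions[end]
    new.transitions[end] = (e_pre, e_post | post)
    return new

# Refine a synchronous "handshake" transition into "send" and "receive"
# connected through an internal message place, increasing distribution.
sync_net = Net({"handshake": ({"p_ready"}, {"p_done"})})
subnet = Net({"send": (set(), {"msg"}), "receive": ({"msg"}, set())})
refined = refine_transition(sync_net, "handshake", subnet, "send", "receive")
print(refined.transitions)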

Relevance:

20.00%

Publisher:

Abstract:

Numerical optimisation methods are increasingly being applied to agricultural systems models to identify the most profitable management strategies. The available optimisation algorithms are reviewed and compared; the literature and our own studies identify evolutionary algorithms (including genetic algorithms) as superior in this regard to simulated annealing, tabu search, hill-climbing, and direct-search methods. Results of a complex beef property optimisation, using a real-valued genetic algorithm, are presented. The relative contributions of the method's operational options and parameters are discussed, and general recommendations are listed to assist practitioners applying evolutionary algorithms to agricultural systems problems. (C) 2001 Elsevier Science Ltd. All rights reserved.
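
As a toy illustration of applying an off-the-shelf evolutionary optimiser to a management problem of this kind, the sketch below maximises an invented two-variable gross-margin function with SciPy's differential evolution. The objective, decision variables (stocking rate and supplement level) and bounds are placeholders, not the beef property model or the real-valued genetic algorithm used in the study.

import numpy as np
from scipy.optimize import differential_evolution

def negative_gross_margin(x):
    # Toy response surface: income rises with stocking rate but the pasture
    # degrades, and supplementary feed lifts output at a cost.
    stocking_rate, supplement = x
    income = 400.0 * stocking_rate * (1.0 - 0.15 * stocking_rate) + 120.0 * np.sqrt(supplement)
    costs = 150.0 * stocking_rate + 60.0 * supplement
    return -(income - costs)                     # minimise the negative of profit

result = differential_evolution(negative_gross_margin,
                                bounds=[(0.1, 4.0), (0.0, 5.0)],
                                seed=0)
print("strategy:", result.x, "gross margin:", -result.fun)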

Relevance:

20.00%

Publisher:

Abstract:

Differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) have been used to study the thermal decomposition, the melting behavior and the low-temperature transitions of copolymers obtained by radiation-induced grafting of styrene onto poly(tetrafluoroethylene-perfluoropropylvinylether) (PFA) substrates. PFA substrates with different contents of the perfluoropropylvinylether (PPVE) comonomer were investigated. A two-step degradation pattern was observed in the TGA thermograms of all the grafted copolymers, attributed to degradation of the grafted polystyrene (PSTY) followed by degradation of the PFA backbone at higher temperature. One broad melting peak can be identified for all copolymers, which has two components in the samples with higher PPVE content. The melting peak, the crystal-crystal transition and the degree of crystallinity of the grafted copolymers increase with grafting dose up to 50 kGy, followed by a decrease at higher doses. No such decrease was observed in the ungrafted PFA samples after irradiation. This indicates that the changes in the heats of transition and in crystallinity at low doses are due to radiation effects on the microstructure of PFA (chain scission), whereas at higher doses the grafted PSTY is the driving force behind these changes. (C) 2001 Elsevier Science Ltd. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

The principal aim of this paper is to measure the amount by which the profit of a multi-input, multi-output firm deviates from maximum short-run profit, and then to decompose this profit gap into components that are of practical use to managers. In particular, we are interested in measuring the contributions of unused capacity, technical inefficiency and allocative inefficiency to this profit gap. We survey existing definitions of capacity and, after discussing their shortcomings, propose a new ray economic capacity measure that involves short-run profit maximisation with the output mix held constant. We then describe how the gap between observed profit and maximum profit can be calculated and decomposed using linear programming methods. The paper concludes with an empirical illustration involving data on 28 international airline companies. The empirical results indicate that these airline companies achieve profit levels that are on average US$815m below potential levels, and that 70% of the gap may be attributed to unused capacity. (C) 2002 Elsevier Science B.V. All rights reserved.
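
The sketch below sets up a DEA-style short-run profit-maximisation linear programme for a single firm with scipy.optimize.linprog, so the gap between observed and maximum profit can be read off. The one-output, two-input data, the prices and the variable-returns-to-scale technology are illustrative assumptions; the ray economic capacity measure and the full decomposition proposed in the paper are not reproduced here.

import numpy as np
from scipy.optimize import linprog

# Observed data: one output y, one variable input xv, one fixed input xf per firm.
Y  = np.array([10.0, 14.0,  9.0, 16.0])
XV = np.array([ 5.0,  8.0,  4.0,  9.0])
XF = np.array([ 3.0,  3.0,  2.0,  4.0])
p, w = 2.0, 1.0          # output price and variable-input price (assumed)
firm = 0                 # evaluate firm 0, holding its fixed input at XF[firm]

n = len(Y)
# Decision vector z = [lambda_1..lambda_n, y, xv]; maximise p*y - w*xv.
c = np.concatenate([np.zeros(n), [-p, w]])                 # linprog minimises
A_ub = np.vstack([
    np.concatenate([-Y,  [1.0,  0.0]]),                    # y  <= sum(lambda * Y)
    np.concatenate([ XV, [0.0, -1.0]]),                    # sum(lambda * XV) <= xv
    np.concatenate([ XF, [0.0,  0.0]]),                    # sum(lambda * XF) <= XF[firm]
])
b_ub = np.array([0.0, 0.0, XF[firm]])
A_eq = np.array([np.concatenate([np.ones(n), [0.0, 0.0]])])  # variable returns to scale
b_eq = np.array([1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 2))

max_profit = -res.fun
observed_profit = p * Y[firm] - w * XV[firm]
print("observed:", observed_profit, "maximum:", max_profit, "gap:", max_profit - observed_profit)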

Relevance:

20.00%

Publisher:

Abstract:

In an earlier paper [Journal of Mathematical Economics, 37 (2002) 17-38], we proved that if a preference relation (chain) on a commodity space is not representable by a real-valued function, then that chain is necessarily a long chain, a planar chain, an Aronszajn-like chain or a Souslin chain. In this paper, we study the class of planar chains, the simplest example of which is the Debreu chain (R^2, <_l). (C) 2002 Elsevier Science B.V. All rights reserved.
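
For reference, the order underlying the Debreu chain is the lexicographic order on the plane, recalled below (a textbook definition, not material from the paper); it is the classical example of a chain that admits no real-valued utility representation.

\[
  (x_1, x_2) <_{l} (y_1, y_2)
  \iff
  x_1 < y_1 \ \text{ or } \ \bigl( x_1 = y_1 \ \text{and} \ x_2 < y_2 \bigr)
\]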

Relevance:

20.00%

Publisher:

Abstract:

Fault detection and isolation (FDI) are important steps in the monitoring and supervision of industrial processes. Biological wastewater treatment (WWT) plants are difficult to model, and hence to monitor, because of the complexity of the biological reactions and because plant influent and disturbances are highly variable and/or unmeasured. Multivariate statistical models have been developed for a wide variety of situations over the past few decades, proving successful in many applications. In this paper we develop a new monitoring algorithm based on Principal Components Analysis (PCA). It can be seen, equivalently, as making Multiscale PCA (MSPCA) adaptive, or as a multiscale decomposition of adaptive PCA. Adaptive Multiscale PCA (AdMSPCA) exploits the changing multivariate relationships between variables at different time-scales. Adaptation of the scale PCA models over time permits them to follow the evolution of the process, its inputs and disturbances. The performance of AdMSPCA and of adaptive PCA on a real WWT data set is compared and contrasted. The most significant difference observed was the ability of AdMSPCA to adapt to a much wider range of changes. This was mainly due to the flexibility afforded by allowing each scale model to adapt whenever it did not signal an abnormal event at that scale. Relative detection speeds were examined only summarily, but appeared to depend on the characteristics of the faults/disturbances. The results of the two algorithms were similar for sudden changes, but AdMSPCA appeared more sensitive to slower changes.
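
To make the multiscale idea concrete, the sketch below splits each variable into a coarse and a fine scale with a Haar transform, fits a PCA model per scale on reference data, and flags new observations whose squared prediction error (SPE) exceeds a crude control limit at that scale. The synthetic data, thresholds, choice of wavelet and the absence of any adaptive updating are all simplifying assumptions; this is not the AdMSPCA algorithm developed in the paper.

import numpy as np

def haar_scales(X):
    # Split consecutive row pairs into approximation (coarse) and detail (fine) coefficients.
    X = X[: len(X) // 2 * 2]
    return (X[0::2] + X[1::2]) / np.sqrt(2), (X[0::2] - X[1::2]) / np.sqrt(2)

def fit_pca(X, n_comp):
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_comp]                                   # retained loadings

def spe(X, mu, P):
    # Squared prediction error: residual variation not captured by the PCA model.
    R = (X - mu) - (X - mu) @ P.T @ P
    return np.sum(R**2, axis=1)

rng = np.random.default_rng(2)
mix = rng.normal(size=(6, 6))                                # induces correlation between variables
train = rng.normal(size=(400, 6)) @ mix                      # reference (in-control) data
test = rng.normal(size=(100, 6)) @ mix
test[60:] += 3.0                                             # injected step disturbance

for name, scales in zip(("coarse", "fine"), zip(haar_scales(train), haar_scales(test))):
    scale_train, scale_test = scales
    mu, P = fit_pca(scale_train, n_comp=2)
    limit = np.percentile(spe(scale_train, mu, P), 99)       # crude 99% control limit
    alarms = spe(scale_test, mu, P) > limit
    print(f"{name} scale: {int(alarms.sum())} alarms out of {len(alarms)} test points")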