14 results for Income distribution -- Mathematical models

in Aston University Research Archive


Relevance: 100.00%

Abstract:

This paper formulates several mathematical models for simultaneously determining the optimal sequence of component placements and the assignment of component types to feeders, that is, the integrated scheduling problem, for a type of surface mount technology placement machine called the sequential pick-and-place (PAP) machine. A PAP machine has multiple stationary feeders storing components, a stationary working table holding a printed circuit board (PCB), and a movable placement head that picks up components from the feeders and places them onto the board. The objective of the integrated problem is to minimize the total distance traveled by the placement head. Two integer nonlinear programming models are formulated first. Each is then equivalently converted into an integer linear form. The models for the integrated problem are verified by two commercial packages. In addition, a hybrid genetic algorithm previously developed by the authors is adopted to solve the models. The algorithm not only generates optimal solutions quickly for small-sized problems, but also outperforms the genetic algorithms developed by other researchers in terms of total traveling distance.
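For a toy instance, the coupling that makes the integrated problem harder than either subproblem alone can be seen by enumerating both decisions together. The sketch below uses hypothetical coordinates and a brute-force search, not the paper's integer programming models or its hybrid genetic algorithm, and minimises head travel over all placement orders and type-to-feeder assignments jointly:

```python
import itertools
import math

# Hypothetical toy instance: three placement points, two feeder slots,
# two component types. Brute force stands in for the paper's exact models.
board_points = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0)]   # placement locations
feeder_slots = [(-2.0, 0.0), (-2.0, 2.0)]             # stationary feeders
component_types = [0, 1, 0]                           # type needed at each point

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def head_travel(order, type_to_slot):
    """Total placement-head travel for one order/assignment pair."""
    total = 0.0
    pos = feeder_slots[type_to_slot[component_types[order[0]]]]  # first pick-up
    for i, p in enumerate(order):
        total += dist(pos, board_points[p])        # feeder -> placement point
        pos = board_points[p]
        if i + 1 < len(order):
            nxt = feeder_slots[type_to_slot[component_types[order[i + 1]]]]
            total += dist(pos, nxt)                # placement point -> next feeder
            pos = nxt
    return total

# Enumerate both decisions at once: placement sequence and feeder assignment.
best_cost = min(
    head_travel(order, {0: slots[0], 1: slots[1]})
    for order in itertools.permutations(range(len(board_points)))
    for slots in itertools.permutations(range(len(feeder_slots)))
)
```

Joint enumeration is feasible only for tiny instances, which is why the abstract converts the nonlinear models into linear ones and relies on a genetic algorithm for realistic problem sizes.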

Relevance: 100.00%

Abstract:

High velocity oxyfuel (HVOF) thermal spraying is one of the most significant developments in the thermal spray industry since the original plasma spray technique. The first investigation deals with the combustion and discrete particle models within the general-purpose commercial CFD code FLUENT, used to solve the combustion of kerosene and to couple the motion of fuel droplets with the gas flow dynamics in a Lagrangian fashion. The effects of liquid fuel droplets on the thermodynamics of the combusting gas flow are examined thoroughly, showing that the combustion process of kerosene is independent of the initial fuel droplet sizes. The second analysis deals with the full water-cooling numerical model, which can assist in thermal performance optimisation or in determining the best method for heat removal without the cost of building physical prototypes. The numerical results indicate that the water flow rate and direction have a noticeable influence on the cooling efficiency but no noticeable effect on the gas flow dynamics within the thermal spraying gun. The third investigation deals with the development and implementation of discrete phase particle models. The results indicate that most powder particles are not melted upon hitting the substrate to be coated. The oxidation model confirms that HVOF guns can produce metallic coatings with low oxidation within the typical stand-off distance of about 30 cm. Physical properties such as porosity, microstructure, surface roughness and adhesion strength of coatings produced by droplet deposition in a thermal spray process are determined to a large extent by the dynamics of deformation and solidification of the particles impinging on the substrate. Therefore, one of the objectives of this study is to present a complete numerical model of droplet impact and solidification. The modelling results show that solidification of droplets is significantly affected by the thermal contact resistance and substrate surface roughness.
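The Lagrangian coupling of droplet motion to the gas flow mentioned in the first investigation can be illustrated in miniature. The snippet below is a one-way-coupled sketch under assumed property values (Stokes drag only, constant gas velocity), not the FLUENT model itself:

```python
import math

# Assumed, kerosene-like droplet properties; the gas velocity is fixed here,
# whereas the full CFD model couples it back to the droplet phase.
rho_d = 800.0        # droplet density, kg/m^3
d = 20e-6            # droplet diameter, m
mu_g = 1.8e-5        # gas dynamic viscosity, Pa.s
tau = rho_d * d ** 2 / (18.0 * mu_g)   # Stokes relaxation time, s

def track(u_gas, v0, dt, steps):
    """Explicit Euler integration of Stokes drag: dv/dt = (u_gas - v) / tau."""
    v = v0
    for _ in range(steps):
        v += dt * (u_gas - v) / tau
    return v

# A droplet injected at rest relaxes towards the gas velocity within a few tau.
v_end = track(u_gas=500.0, v0=0.0, dt=tau / 50.0, steps=500)
```

The relaxation time tau is the key parameter: small droplets follow the gas almost instantly, which is consistent with the abstract's finding that the kerosene combustion process is insensitive to the initial droplet sizes.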

Relevance: 100.00%

Abstract:

Oxygen is a crucial molecule for cellular function. When oxygen demand exceeds supply, the oxygen sensing pathway centred on the hypoxia inducible factor (HIF) is switched on and promotes adaptation to hypoxia by up-regulating genes involved in angiogenesis, erythropoiesis and glycolysis. The regulation of HIF is tightly modulated through intricate regulatory mechanisms. Notably, its protein stability is controlled by the oxygen sensing prolyl hydroxylase domain (PHD) enzymes and its transcriptional activity is controlled by the asparaginyl hydroxylase FIH (factor inhibiting HIF-1). To probe the complexity of hypoxia-induced HIF signalling, efforts in mathematical modelling of the pathway have been underway for around a decade. In this paper, we review the existing mathematical models developed to describe and explain specific behaviours of the HIF pathway and how they have contributed new insights into our understanding of the network. Topics for modelling included the switch-like response to decreased oxygen gradient, the role of microenvironmental factors, the regulation by FIH and the temporal dynamics of the HIF response. We will also discuss the technical aspects, extent and limitations of these models. Recently, the HIF pathway has been implicated in other disease contexts, such as hypoxic inflammation and cancer, through crosstalk with pathways like NF-κB and mTOR. We will examine how future mathematical modelling and simulation of interlinked networks can aid in understanding HIF behaviour in complex pathophysiological situations. Ultimately this would allow the identification of new pharmacological targets in different disease settings.
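The switch-like response listed among the modelling topics can be captured by a minimal steady-state sketch. All parameter values below are hypothetical: PHD activity is taken as a Hill function of oxygen, and the HIF level as constant production over PHD-dependent degradation:

```python
def hif_steady_state(o2, k_prod=1.0, k_basal=0.05, k_deg=10.0, km=30.0, n=4):
    """Steady-state HIF level under assumed kinetics: constant production,
    degradation driven by oxygen-dependent PHD hydroxylation. The Hill
    coefficient n > 1 is what produces the switch-like transition."""
    phd_activity = o2 ** n / (km ** n + o2 ** n)
    return k_prod / (k_basal + k_deg * phd_activity)

# HIF is high in hypoxia and sharply suppressed once oxygen passes km.
hif_hypoxic = hif_steady_state(5.0)
hif_mid = hif_steady_state(30.0)
hif_normoxic = hif_steady_state(100.0)
```

Varying n in such toy models is one way the reviewed literature probes how ultrasensitive the oxygen response is; the full models add FIH regulation and temporal dynamics on top of this core loop.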

Relevance: 100.00%

Abstract:

Using a modified deprivation (or poverty) function, in this paper we theoretically study the changes in poverty with respect to the 'global' mean and variance of the income distribution using Indian survey data. We show that when income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty, while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is no longer tenable once the poverty index is found to follow a Pareto distribution. Here, although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function there is a critical value of the variance below which poverty decreases with increasing variance, while beyond this value poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 48 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219], whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy. © 2006 Elsevier B.V. All rights reserved.
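The log-normal part of the argument is easy to check numerically. The sketch below uses illustrative numbers (the poverty line and the moments are chosen arbitrarily, not taken from the Indian survey data): it computes the headcount below a line z from the 'global' mean and variance, confirming that the headcount falls with a rising mean and rises with a rising variance:

```python
import math

def lognormal_headcount(z, mean, var):
    """P(income < z) when income is log-normal with the given 'global'
    mean and variance (moment matching, then the log-normal CDF via erf)."""
    s2 = math.log(1.0 + var / mean ** 2)      # variance of log-income
    mu = math.log(mean) - s2 / 2.0            # mean of log-income
    x = (math.log(z) - mu) / math.sqrt(s2)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Illustrative poverty line z = 1 (arbitrary units).
hc_high_mean = lognormal_headcount(z=1.0, mean=3.0, var=2.0)
hc_low_mean = lognormal_headcount(z=1.0, mean=2.0, var=2.0)   # poorer on average
hc_high_var = lognormal_headcount(z=1.0, mean=3.0, var=6.0)   # more unequal
```

Under the log-normal assumption both comparative statics are monotone, which is precisely why the abstract's Pareto case, with its inflexion point, behaves so differently.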

Relevance: 100.00%

Abstract:

The thesis presents a two-dimensional Risk Assessment Method (RAM) in which the assessment of risk to the groundwater resources incorporates both the quantification of the probability of the occurrence of contaminant source terms and the assessment of the resultant impacts. The approach emphasizes the need for a greater dependency on the potential pollution sources, rather than the traditional approach where assessment is based mainly on the intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation methods, whereby random pollution events are generated to the same distribution as historically occurring events or an a priori potential probability distribution. Integrated mathematical models then simulate contaminant concentrations at predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations are calculated from repeated realisations, and the number of times a user-defined concentration magnitude is exceeded is quantified as a risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch processing file. GIS software was employed in producing the input files and in presenting the results. The functionalities of the method, as well as its sensitivities to the model grid sizes, contaminant loading rates, length of stress periods, and the historical frequencies of occurrence of pollution events, were evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as an input to the transport model. The results of the applications of the method are presented in the form of tables, graphs and spatial maps.
Varying the model grid sizes indicates no significant effects on the simulated groundwater head. The simulated frequency of daily occurrence of pollution incidents is also independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid size. The migration of the contaminant plume also advances faster with coarse grids than with finer grids. The number of daily contaminant source terms generated, and consequently the total mass of contaminant within the aquifer, increases nonlinearly with the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated and presented quantitatively as risk maps. This capability to combine the risk to a groundwater feature from numerous potential sources of pollution proved to be a great asset to the method, and a large benefit over contemporary risk and vulnerability methods.
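The exceedance-counting definition of risk used by the RAM can be sketched with a toy surrogate in place of the MODFLOW/MT3DMS transport models. The daily source-generation rule follows the abstract (a source term is generated whenever a random number falls below the occurrence probability); the decay constant, load and threshold are invented for illustration:

```python
import random

random.seed(1)

def simulate_concentration(p_event, days=365, decay=0.9, load=5.0):
    """Toy surrogate for the transport model: each day a pollution event
    occurs with probability p_event and injects 'load'; the plume decays."""
    conc, peak = 0.0, 0.0
    for _ in range(days):
        conc *= decay
        if random.random() < p_event:   # the RAM's source-generation rule
            conc += load
        peak = max(peak, conc)
    return peak

def risk(threshold, p_event, n_real=2000):
    """Fraction of Monte Carlo realisations whose peak concentration
    exceeds a user-defined threshold."""
    exceeded = sum(simulate_concentration(p_event) > threshold
                   for _ in range(n_real))
    return exceeded / n_real

# Risk rises with the historical frequency of pollution events.
r_rare, r_frequent = risk(8.0, 0.02), risk(8.0, 0.10)
```

The repeated-realisation structure is the essential point: the risk number is not a property of one simulation but of the ensemble, exactly as in the thesis's risk maps.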

Relevance: 100.00%

Abstract:

Deposition of insoluble prion protein (PrP) in the brain in the form of protein aggregates or deposits is characteristic of the ‘transmissible spongiform encephalopathies’ (TSEs). Understanding the growth and development of these PrP aggregates is important both in attempting to elucidate the pathogenesis of prion disease and in the development of treatments designed to prevent or inhibit the spread of prion pathology within the brain. Aggregation and disaggregation of proteins and the diffusion of substances into the developing aggregates (surface diffusion) are important factors in the development of protein aggregates. Mathematical models suggest that if aggregation/disaggregation or surface diffusion is the predominant factor, the size frequency distribution of the resulting protein aggregates in the brain should be described by a power-law or a log-normal model, respectively. This study tested this hypothesis for two different types of PrP deposit, viz., the diffuse and florid-type PrP deposits in patients with variant Creutzfeldt-Jakob disease (vCJD). The size distributions of the florid and diffuse plaques were fitted by a power-law function in 100% and 42% of the brain areas studied, respectively. By contrast, the size distributions of both types of plaque deviated significantly from a log-normal model in all brain areas. Hence, protein aggregation and disaggregation may be the predominant factor in the development of the florid plaques. A more complex combination of factors appears to be involved in the pathogenesis of the diffuse plaques. These results may be useful in the design of treatments to inhibit the development of protein aggregates in vCJD.
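The diagnostic behind the power-law test is that a power-law size-frequency distribution is linear on log-log axes. The sketch below uses synthetic sizes, not vCJD data, and a simple binned log-log regression rather than the study's actual fitting procedure, to recover the exponent of a known power law:

```python
import math
import random

random.seed(0)

def loglog_slope(sizes, n_bins=20):
    """Slope of log(density) vs log(size) from a histogram of log-sizes.
    For Pareto(alpha) sizes the density of log-size is exponential with
    rate alpha, so the expected slope here is -alpha."""
    logs = [math.log(s) for s in sizes]
    lo, hi = min(logs), max(logs)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for y in logs:
        counts[min(int((y - lo) / width), n_bins - 1)] += 1
    pts = [(lo + (i + 0.5) * width, math.log(c / (len(sizes) * width)))
           for i, c in enumerate(counts) if c > 0]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    return (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, _ in pts))

# Pareto(2.5) sizes: pdf ~ s^(-3.5), so the log-binned slope should be ~ -2.5.
sizes = [random.paretovariate(2.5) for _ in range(20000)]
slope = loglog_slope(sizes)
```

A log-normal sample run through the same regression would show systematic curvature rather than a straight line, which is the qualitative distinction the study exploits between the two candidate growth mechanisms.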

Relevance: 100.00%

Abstract:

This paper focuses on minimizing printed circuit board (PCB) assembly time for a chip shooter machine, which has a movable feeder carrier holding components, a movable X–Y table carrying a PCB, and a rotary turret with multiple assembly heads. The assembly time of the machine depends on two inter-related optimization problems: the component sequencing problem and the feeder arrangement problem. Nevertheless, they have often been regarded as two individual problems and solved separately. This paper proposes two complete mathematical models for the integrated problem of the machine. The models are verified by two commercial packages. Finally, a hybrid genetic algorithm previously developed by the authors is presented to solve the models. The algorithm not only generates optimal solutions quickly for small-sized problems, but also outperforms the genetic algorithms developed by other researchers in terms of total assembly time.

Relevance: 100.00%

Abstract:

Deposition of insoluble prion protein (PrP) in the brain in the form of protein aggregates or deposits is characteristic of the ‘transmissible spongiform encephalopathies’ (TSEs). Understanding the growth and development of PrP aggregates is important both in attempting to elucidate the pathogenesis of prion disease and in the development of treatments designed to inhibit the spread of prion pathology within the brain. Aggregation and disaggregation of proteins and the diffusion of substances into the developing aggregates (surface diffusion) are important factors in the development of protein deposits. Mathematical models suggest that if either aggregation/disaggregation or surface diffusion is the predominant factor, then the size frequency distribution of the resulting protein aggregates will be described by either a power-law or a log-normal model respectively. This study tested this hypothesis for two different populations of PrP deposit, viz., the diffuse and florid-type PrP deposits characteristic of patients with variant Creutzfeldt-Jakob disease (vCJD). The size distributions of the florid and diffuse deposits were fitted by a power-law function in 100% and 42% of brain areas studied respectively. By contrast, the size distributions of both types of aggregate deviated significantly from a log-normal model in all areas. Hence, protein aggregation and disaggregation may be the predominant factor in the development of the florid deposits. A more complex combination of factors appears to be involved in the pathogenesis of the diffuse deposits. These results may be useful in the design of treatments to inhibit the development of PrP aggregates in vCJD.

Relevance: 100.00%

Abstract:

The literature on heat and mass transfer mechanisms in the convective drying of thick beds of solids has been critically reviewed. Related mathematical models of heat transfer are also considered. Experimental and theoretical studies were made of the temperature distribution within beds, and of drying rates, with various materials undergoing convective drying. The experimental work covered thick beds of hygroscopic and non-hygroscopic materials (glass beads of different diameters, polystyrene pellets, activated alumina and wood powder) at air temperatures of 54°C to 84°C. Tests were carried out in a laboratory drying apparatus comprising a wind tunnel through which the air, of controlled temperature and humidity, was passed over a sample suspended from a balance. Thermocouples were inserted at different depths within the sample bed. The temperature distribution profiles for both hygroscopic and non-hygroscopic beds exhibited a clear difference between the temperatures at the surface and bottom during the constant rate period. An effective method was introduced for predicting the critical moisture content. During the falling rate period, the profiles showed the existence of a receding evaporation plane; this divided the system into a hotter dry zone in the upper section and a wet zone near the bottom. A graphical procedure was established to predict accurately the position of the receding evaporation front at any time. A new mathematical model, based on the receding evaporation front phenomenon, was proposed to predict temperature distributions throughout a bed during drying. Good agreement was obtained when the model was validated by comparing its predictions with experimental data. The model was also able to predict the duration of each drying stage. In experiments using sample trays of different diameters, the drying rate was found to increase with a decrease in the effective length of the bed surface.
During the constant rate period with trays of a small effective length, i.e. less than 0.08 m, an 'inversion' in temperature distribution occurred in the bed; the bottom temperature increased and became greater than that of the surface. Experimental measurements were verified in several ways to ensure this phenomenon was real. Theoretical explanations are given for both the effective length and temperature inversion phenomena.
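The receding-evaporation-plane picture admits a back-of-envelope model: in a quasi-steady state, vapour leaving the front must diffuse through the dry upper zone, so the front depth grows like the square root of time. All property values below are assumed, purely to illustrate the scaling; the thesis's model is considerably more complete:

```python
import math

# Assumed illustrative properties of the drying bed.
D_eff = 1.0e-6    # effective vapour diffusivity of the dry zone, m^2/s
C_s = 0.02        # vapour concentration at the evaporation front, kg/m^3
rho_w = 50.0      # evaporable moisture per unit bed volume, kg/m^3

def front_depth(t):
    """Quasi-steady mass balance rho_w * dx/dt = D_eff * C_s / x
    integrates to x(t) = sqrt(2 * D_eff * C_s * t / rho_w)."""
    return math.sqrt(2.0 * D_eff * C_s * t / rho_w)

depth_1h = front_depth(3600.0)                    # front position after 1 h, m
ratio = front_depth(4.0 * 3600.0) / depth_1h      # sqrt-of-time scaling -> 2.0
```

The square-root slowdown is why the falling-rate period falls: the deeper the front recedes, the longer the diffusion path and the lower the drying rate.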

Relevance: 100.00%

Abstract:

The literature relating to haze formation, methods of separation, coalescence mechanisms, and models by which droplets <100 μm are collected, coalesced and transferred has been reviewed, with particular reference to particulate bed coalescers. The separation of secondary oil-water dispersions was studied experimentally using packed beds of monosized glass ballotini particles. The variables investigated were superficial velocity, bed depth, particle size, and the phase ratio and drop size distribution of the inlet secondary dispersion. A modified pump loop was used to generate secondary dispersions of toluene or Clairsol 350 in water with phase ratios between 0.5 and 6.0 v/v%. Inlet drop size distributions were determined using a Malvern Particle Size Analyser; effluent, coalesced droplets were sized by photography. Single phase flow pressure drop data were correlated by means of a Carman-Kozeny type equation. Correlations were obtained relating the single and two phase pressure drops, in the form (ΔP2/μc)/(ΔP1/μd) = kp·U^a·L^b·dc^c·dp^d·Cin^e. A flow equation was derived to correlate the two phase pressure drop data as ΔP2/(ρc·U²) = 8.64×10⁷ [dc/D]^−0.27 [L/D]^0.71 [dp/D]^−0.17 [NRe]^1.5 [e1]^−0.14 [Cin]^0.26. In a comparison between functions to characterise the inlet drop size distributions, a modification of the Weibull function provided the best fit to the experimental data. The general mean drop diameter was correlated by d_qp^(q−p) = d_fr^(q−p)·α^((p−q)/β)·Γ((q−3)/β + 1)/Γ((p−3)/β + 1). The measured and predicted mean inlet drop diameters agreed within ±15%. Secondary dispersion separation depends largely upon drop capture within the bed. A theoretical analysis of drop capture mechanisms in this work indicated that indirect interception and London-van der Waals mechanisms predominate.
Mathematical models of dispersed phase concentration in the bed were developed by considering drop motion to be analogous to molecular diffusion. The number of possible channels in a bed was predicted from a model in which the pores comprised randomly-interconnected passageways between adjacent packing elements and axial flow occurred in cylinders on an equilateral triangular pitch. An expression was derived for the length of service channels in a queuing system, leading to the prediction of filter coefficients. The insight provided into the mechanisms of drop collection and travel, and the correlations of operating parameters, should assist the design of industrial particulate bed coalescers.
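The single-phase pressure-drop correlation mentioned above belongs to the Carman-Kozeny family, which is standard enough to state concretely. The sketch below uses the textbook laminar form with the usual constant of 180 (the thesis fits its own constants) and an invented operating point:

```python
def carman_kozeny_dp(mu, U, eps, d_p, L, k=180.0):
    """Laminar Carman-Kozeny pressure drop (Pa) across a packed bed:
    dP = k * mu * U * L * (1 - eps)^2 / (eps^3 * d_p^2),
    with mu in Pa.s, superficial velocity U in m/s, voidage eps,
    particle diameter d_p in m and bed depth L in m."""
    return k * mu * U * L * (1.0 - eps) ** 2 / (eps ** 3 * d_p ** 2)

# Invented operating point: water through 0.1 m of 500-micron ballotini.
dp1 = carman_kozeny_dp(mu=1.0e-3, U=0.005, eps=0.38, d_p=500e-6, L=0.1)
dp2 = carman_kozeny_dp(mu=1.0e-3, U=0.010, eps=0.38, d_p=500e-6, L=0.1)
```

In this laminar regime the drop is linear in superficial velocity, so doubling U doubles ΔP1; deviations from that single-phase baseline are what the two-phase correlation in the abstract captures.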

Relevance: 100.00%

Abstract:

The state of the art in productivity measurement and analysis shows a gap between simple methods having little relevance in practice and sophisticated mathematical theory which is unwieldy for strategic and tactical planning purposes, particularly at company level. This thesis extends the method of productivity measurement and analysis based on the concept of added value, appropriate to those companies in which the materials, bought-in parts and services change substantially and a number of plants and inter-related units are involved in providing components for final assembly. Reviews and comparisons of productivity measurement dealing with alternative indices and their problems have been made, and appropriate solutions put forward for productivity analysis in general and the added value method in particular. Based on this concept and method, three kinds of computerised model have been developed: two deterministic, called sensitivity analysis and deterministic appraisal, and a third, stochastic, called risk simulation. They cope with the planning of productivity and productivity growth with reference to the changes in their component variables, ranging from a single value to a class interval of values of a productivity distribution. The models are designed to be flexible and can be adjusted according to the available computer capacity, the expected accuracy and the presentation of the output. The stochastic model is based on the assumptions of statistical independence between individual variables and of normality in their probability distributions. The component variables have been forecast using polynomials of degree four. This model was tested by comparing its behaviour with that of a mathematical model using real historical data from British Leyland, and the results were satisfactory within acceptable levels of accuracy. Modifications to the model and its statistical treatment have been made as required.
The results of applying these measurement and planning models to British motor vehicle manufacturing companies are presented and discussed.
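The stochastic model's two working assumptions, independence between variables and normality of their distributions, are simple to reproduce in a toy Monte Carlo. All figures below are hypothetical, not British Leyland data; the point is that the productivity ratio comes out as a distribution rather than a single value:

```python
import random
import statistics

random.seed(7)

def simulate_productivity(n=10000):
    """Sample an added-value productivity ratio under the model's assumptions
    of independent, normally distributed component variables."""
    ratios = []
    for _ in range(n):
        added_value = random.gauss(120.0, 10.0)      # hypothetical mean and sd
        employment_cost = random.gauss(80.0, 5.0)    # hypothetical mean and sd
        ratios.append(added_value / employment_cost)
    return ratios

ratios = simulate_productivity()
mean_ratio = statistics.fmean(ratios)
sd_ratio = statistics.stdev(ratios)
```

A planner can then read off class intervals of the simulated productivity distribution, as the thesis describes, instead of relying on a single point estimate.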

Relevance: 100.00%

Abstract:

We present a stochastic agent-based model for the distribution of personal incomes in a developing economy. We start with the assumption that incomes are determined both by individual labour and by stochastic effects of trading and investment. The income from personal effort alone is distributed about a mean, while the income from trade, which may be positive or negative, is proportional to the trader's income. These assumptions lead to a Langevin model with multiplicative noise, from which we derive a Fokker-Planck (FP) equation for the income probability density function (IPDF) and its variation in time. We find that high earners have a power-law income distribution, while the low-income groups have a Lévy IPDF. Comparing our analysis with Indian survey data (obtained from the World Bank website: http://go.worldbank.org/SWGZB45DN0) taken over many years, we obtain a near-perfect data collapse onto our model's equilibrium IPDF. Using survey data to relate the IPDF to actual food consumption, we define a poverty index (Sen A. K., Econometrica, 44 (1976) 219; Kakwani N. C., Econometrica, 48 (1980) 437), which is consistent with traditional indices but independent of an arbitrarily chosen "poverty line" and therefore less susceptible to manipulation. Copyright © EPLA, 2010.
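The Langevin picture in the abstract can be integrated directly with Euler-Maruyama. The drift and noise coefficients below are illustrative placeholders, not the paper's fitted values: each agent earns an additive, mean-reverting income from effort plus a multiplicative trading term, and it is the multiplicative noise that fattens the upper tail into a power law:

```python
import math
import random
import statistics

random.seed(42)

def evolve(n_agents=2000, steps=1500, dt=0.01, a=1.0, b=1.0):
    """Euler-Maruyama integration of dI = (a - I) dt + b * I dW:
    additive mean-reverting 'effort' income plus trading gains or
    losses proportional to current income (multiplicative noise)."""
    incomes = [1.0] * n_agents
    sqrt_dt = math.sqrt(dt)
    for _ in range(steps):
        for i in range(n_agents):
            dw = random.gauss(0.0, sqrt_dt)
            incomes[i] = max(1e-9,
                             incomes[i] + (a - incomes[i]) * dt
                             + b * incomes[i] * dw)
    return incomes

incomes = evolve()
mean_income = statistics.fmean(incomes)
top_to_median = max(incomes) / statistics.median(incomes)
```

With these coefficients the stationary mean stays near a, while the largest incomes sit far above the median, the qualitative signature of the power-law upper tail the FP analysis derives.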

Relevance: 100.00%

Abstract:

This work presents a two-dimensional risk assessment method based on the quantification of the probability of the occurrence of contaminant source terms, as well as the assessment of the resultant impacts. The risk is calculated using Monte Carlo simulation methods, whereby synthetic contaminant source terms are generated to the same distribution as historically occurring pollution events or an a priori potential probability distribution. The spatial and temporal distributions of the generated contaminant concentrations at pre-defined monitoring points within the aquifer are then simulated from repeated realisations using integrated mathematical models. The number of times user-defined ranges of concentration magnitudes are exceeded is quantified as the risk. The utility of the method was demonstrated using hypothetical scenarios, and the risk of pollution from a number of sources all occurring by chance together was evaluated. The results are presented in the form of charts and spatial maps. The generated risk maps show the risk of pollution at each observation borehole, as well as the trends within the study area. This capability to generate synthetic pollution events from numerous potential sources of pollution, based on the historical frequency of their occurrence, proved to be a great asset to the method and a large benefit over contemporary methods.