12 results for Stochastic processes -- Mathematical models
in Aston University Research Archive
Abstract:
This paper formulates several mathematical models for simultaneously determining the optimal sequence of component placements and the assignment of component types to feeders, i.e., the integrated scheduling problem, for a type of surface mount technology placement machine called the sequential pick-and-place (PAP) machine. A PAP machine has multiple stationary feeders storing components, a stationary working table holding a printed circuit board (PCB), and a movable placement head that picks up components from the feeders and places them on the board. The objective of the integrated problem is to minimize the total distance traveled by the placement head. Two integer nonlinear programming models are formulated first, and each is then converted into an equivalent integer linear model. The models for the integrated problem are verified by two commercial packages. In addition, a hybrid genetic algorithm previously developed by the authors is adopted to solve the models. The algorithm not only generates optimal solutions quickly for small-sized problems, but also outperforms the genetic algorithms developed by other researchers in terms of total traveling distance.
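As a toy illustration of the integrated problem described above (not the authors' models), the sketch below brute-forces a tiny instance: it jointly searches placement sequences and type-to-feeder assignments for the head tour of minimum length. The coordinates, the Euclidean metric and the pick-then-place motion pattern are all illustrative assumptions.

```python
# Toy brute-force search for the integrated sequencing/feeder-assignment
# problem: minimise total head travel over all sequences and assignments.
from itertools import permutations
from math import dist

feeders = [(0, 0), (4, 0), (8, 0)]                        # candidate feeder slots (assumed)
placements = [((2, 3), 'A'), ((6, 2), 'B'), ((3, 5), 'A')]  # (board point, component type)
types = ['A', 'B']

def tour_length(seq, slot_of):
    """Head alternates feeder -> board point for each placement in seq."""
    total, pos = 0.0, feeders[slot_of[seq[0][1]]]
    for point, ctype in seq:
        f = feeders[slot_of[ctype]]
        total += dist(pos, f) + dist(f, point)
        pos = point
    return total

best = min(
    ((tour_length(seq, dict(zip(types, slots))), seq, slots)
     for seq in permutations(placements)
     for slots in permutations(range(len(feeders)), len(types))),
    key=lambda t: t[0],
)
print(f"minimum head travel = {best[0]:.2f} (feeder slots {best[2]})")
```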
Abstract:
High velocity oxyfuel (HVOF) thermal spraying is one of the most significant developments in the thermal spray industry since the original plasma spray technique. The first investigation deals with the combustion and discrete particle models within the general-purpose commercial CFD code FLUENT, used to solve the combustion of kerosene and to couple the motion of fuel droplets with the gas flow dynamics in a Lagrangian fashion. The effects of liquid fuel droplets on the thermodynamics of the combusting gas flow are examined thoroughly, showing that the combustion process of kerosene is independent of the initial fuel droplet sizes. The second analysis deals with a full numerical model of the water cooling, which can assist in optimising thermal performance and in determining the best method of heat removal without the cost of building physical prototypes. The numerical results indicate that the water flow rate and direction have a noticeable influence on the cooling efficiency but no noticeable effect on the gas flow dynamics within the thermal spraying gun. The third investigation deals with the development and implementation of discrete-phase particle models. The results indicate that most powder particles are not melted upon hitting the substrate to be coated. The oxidation model confirms that HVOF guns can produce metallic coatings with low oxidation within the typical stand-off distance of about 30 cm. Physical properties such as porosity, microstructure, surface roughness and adhesion strength of coatings produced by droplet deposition in a thermal spray process are determined to a large extent by the dynamics of deformation and solidification of the particles impinging on the substrate. It is therefore one of the objectives of this study to present a complete numerical model of droplet impact and solidification. The modelling results show that solidification of droplets is significantly affected by the thermal contact resistance and the substrate surface roughness.
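The discrete-phase idea mentioned above can be caricatured in a few lines: one-way Lagrangian tracking of a single particle under Stokes drag in an assumed decaying gas jet. This is only a hedged sketch, not the FLUENT setup; Stokes drag is valid only at low particle Reynolds number, and every number below is an assumption.

```python
# One-way Lagrangian particle tracking under Stokes drag (illustrative only).
import numpy as np

rho_p, d_p, mu_g = 8900.0, 30e-6, 4e-5   # particle density [kg/m3], diameter [m], gas viscosity [Pa s]
tau_p = rho_p * d_p**2 / (18.0 * mu_g)   # particle response time [s]

def gas_velocity(x):
    return 1500.0 * np.exp(-x / 0.1)     # assumed decaying axial jet velocity [m/s]

x, v, dt = 0.0, 0.0, 1e-7
while x < 0.30:                          # integrate out to a 30 cm stand-off distance
    v += dt * (gas_velocity(x) - v) / tau_p   # Stokes drag acceleration
    x += dt * v
print(f"particle velocity at 30 cm stand-off: {v:.0f} m/s")
```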
Abstract:
Oxygen is a crucial molecule for cellular function. When oxygen demand exceeds supply, the oxygen sensing pathway centred on the hypoxia inducible factor (HIF) is switched on and promotes adaptation to hypoxia by up-regulating genes involved in angiogenesis, erythropoiesis and glycolysis. The regulation of HIF is tightly modulated through intricate regulatory mechanisms. Notably, its protein stability is controlled by the oxygen sensing prolyl hydroxylase domain (PHD) enzymes and its transcriptional activity is controlled by the asparaginyl hydroxylase FIH (factor inhibiting HIF-1). To probe the complexity of hypoxia-induced HIF signalling, efforts in mathematical modelling of the pathway have been underway for around a decade. In this paper, we review the existing mathematical models developed to describe and explain specific behaviours of the HIF pathway and how they have contributed new insights into our understanding of the network. Topics for modelling include the switch-like response to a decreasing oxygen gradient, the role of microenvironmental factors, the regulation by FIH and the temporal dynamics of the HIF response. We will also discuss the technical aspects, extent and limitations of these models. Recently, the HIF pathway has been implicated in other disease contexts, such as hypoxic inflammation and cancer, through crosstalk with pathways like NF-κB and mTOR. We will examine how future mathematical modelling and simulation of interlinked networks can aid in understanding HIF behaviour in complex pathophysiological situations. Ultimately this would allow the identification of new pharmacological targets in different disease settings.
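A minimal sketch of the switch-like behaviour such models address: a single ODE in which HIF is synthesised at a constant rate and degraded at a PHD-dependent, oxygen-sensitive rate. This is not any specific published model, and all rate constants are assumed for illustration.

```python
# Steady state of dH/dt = k_syn - (k_deg_max*O2/(K_O2+O2) + k_basal)*H,
# showing strong HIF accumulation as oxygen falls (all constants assumed).
k_syn = 1.0          # HIF synthesis rate
k_deg_max = 50.0     # maximal PHD-dependent degradation rate constant
K_O2 = 100.0         # oxygen level at half-maximal PHD activity
k_basal = 0.05       # oxygen-independent degradation

def hif_steady_state(o2):
    return k_syn / (k_deg_max * o2 / (K_O2 + o2) + k_basal)

for o2 in [210.0, 100.0, 20.0, 5.0, 1.0]:   # arbitrary oxygen units
    print(f"O2 = {o2:6.1f}  ->  steady-state HIF = {hif_steady_state(o2):6.2f}")
```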
Abstract:
Integrated supplier selection and order allocation is an important decision in both designing and operating supply chains. This decision is often influenced by the concerned stakeholders: suppliers, plant operators and customers in different tiers. As firms continue to seek competitive advantage through supply chain design and operations, they aim to create optimized supply chains. This calls for, on the one hand, consideration of multiple conflicting criteria and, on the other, consideration of uncertainties of demand and supply. Although there are studies on supplier selection that use advanced mathematical models to cover a stochastic approach, multiple criteria decision-making techniques and multiple stakeholder requirements separately, to the authors' knowledge there is no work that integrates these three aspects in a common framework. This paper proposes an integrated method for dealing with such problems, using a combined Analytic Hierarchy Process-Quality Function Deployment (AHP-QFD) and chance-constrained optimization algorithm approach that selects appropriate suppliers and allocates orders optimally between them. The effectiveness of the proposed decision support system has been demonstrated through application and validation in the bioenergy industry.
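As a sketch of the AHP ingredient of such a framework, the snippet below derives criteria weights from a Saaty-style pairwise-comparison matrix via its principal eigenvector and reports a consistency index. The 3x3 matrix (cost vs quality vs delivery) is an illustrative assumption, not data from the paper.

```python
# AHP priority weights from a reciprocal pairwise-comparison matrix.
import numpy as np

# A[i, j] = judged importance of criterion i relative to criterion j (assumed).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()          # normalised priority weights

lam_max = eigvals.real.max()
ci = (lam_max - len(A)) / (len(A) - 1)         # consistency index
print("weights:", np.round(weights, 3), " CI:", round(ci, 3))
```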
Abstract:
The thesis presents a two-dimensional Risk Assessment Method (RAM) in which the assessment of risk to groundwater resources incorporates both the quantification of the probability of occurrence of contaminant source terms and the assessment of the resultant impacts. The approach emphasizes the need for a greater dependency on the potential pollution sources, rather than the traditional approach in which assessment is based mainly on intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation, whereby random pollution events are generated according to the distribution of historically occurring events or an a priori probability distribution of potential events. Integrated mathematical models then simulate contaminant concentrations at predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations are calculated from repeated realisations, and the number of times a user-defined concentration magnitude is exceeded is quantified as a risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch processing file. GIS software was employed to produce the input files and to present the results. The functionality of the method, as well as its sensitivity to the model grid sizes, contaminant loading rates, length of stress periods, and the historical frequencies of occurrence of pollution events, was evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as an input to the transport model. The results of applying the method are presented in the form of tables, graphs and spatial maps. Varying the model grid sizes indicates no significant effect on the simulated groundwater head, and the simulated frequency of daily occurrence of pollution incidents is likewise independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid size; the contaminant plume also migrates faster on coarse grids than on finer ones. The number of daily contaminant source terms generated, and consequently the total mass of contaminant within the aquifer, increases nonlinearly with the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated and presented quantitatively as risk maps. This capability to combine the risk to a groundwater feature from numerous potential sources of pollution proved to be a great asset of the method, and a large benefit over contemporary risk and vulnerability methods.
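The core Monte Carlo risk metric described above can be sketched as follows: daily Bernoulli trials generate source terms at a historical frequency, a placeholder transport function maps them to a concentration, and risk is the fraction of realisations exceeding a user-defined threshold. The `toy_transport` function stands in for the MODFLOW-2000/MT3DMS simulation and is purely an assumption, as are all the numbers.

```python
# Monte Carlo exceedance risk with Bernoulli-generated pollution source terms.
import random

def toy_transport(events):
    """Placeholder for the transport model: each event adds a diluted pulse."""
    return sum(10.0 * random.uniform(0.1, 1.0) for _ in range(events))

def pollution_risk(p_event, days, threshold, n_realisations=10_000, seed=1):
    random.seed(seed)
    exceed = 0
    for _ in range(n_realisations):
        # A source term is generated whenever a random number falls below
        # the historical probability of occurrence (one trial per day).
        events = sum(random.random() < p_event for _ in range(days))
        if toy_transport(events) > threshold:
            exceed += 1
    return exceed / n_realisations

print("risk =", pollution_risk(p_event=0.02, days=365, threshold=60.0))
```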
Abstract:
This paper focuses on minimizing printed circuit board (PCB) assembly time for a chip shooter machine, which has a movable feeder carrier holding components, a movable X–Y table carrying a PCB, and a rotary turret with multiple assembly heads. The assembly time of the machine depends on two inter-related optimization problems: the component sequencing problem and the feeder arrangement problem. Nevertheless, they have often been regarded as two individual problems and solved separately. This paper proposes two complete mathematical models for the integrated problem of the machine. The models are verified by two commercial packages. Finally, a hybrid genetic algorithm previously developed by the authors is presented to solve the models. The algorithm not only generates optimal solutions quickly for small-sized problems, but also outperforms the genetic algorithms developed by other researchers in terms of total assembly time.
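A generic permutation genetic algorithm for the component sequencing subproblem can be sketched as below; this is not the authors' hybrid algorithm, and the Manhattan travel metric, coordinates and GA settings are illustrative assumptions.

```python
# Generic permutation GA: order crossover + swap mutation, minimising
# table travel between consecutive placement points.
import random

random.seed(0)
points = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(15)]

def travel(seq):
    return sum(abs(points[a][0] - points[b][0]) + abs(points[a][1] - points[b][1])
               for a, b in zip(seq, seq[1:]))

def order_crossover(p1, p2):
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]                       # copy a slice from parent 1
    fill = [g for g in p2 if g not in child]   # remaining genes in parent-2 order
    for k in range(len(child)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

pop = [random.sample(range(len(points)), len(points)) for _ in range(60)]
for _ in range(300):
    pop.sort(key=travel)
    parents = pop[:30]                         # truncation selection
    children = [order_crossover(*random.sample(parents, 2)) for _ in range(30)]
    for c in children:                         # swap mutation
        if random.random() < 0.3:
            a, b = random.sample(range(len(c)), 2)
            c[a], c[b] = c[b], c[a]
    pop = parents + children
print("best travel distance:", round(travel(min(pop, key=travel)), 1))
```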
Abstract:
Diffusion processes are a family of continuous-time continuous-state stochastic processes that are in general only partially observed. The joint estimation of the forcing parameters and the system noise (volatility) in these dynamical systems is a crucial, but non-trivial task, especially when the system is nonlinear and multimodal. We propose a variational treatment of diffusion processes, which allows us to compute type II maximum likelihood estimates of the parameters by simple gradient techniques and which is computationally less demanding than most MCMC approaches. We also show how a cheap estimate of the posterior over the parameters can be constructed based on the variational free energy.
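As a simplified stand-in for the estimation task (not the authors' variational scheme), the sketch below recovers the drift and noise parameters of a linear Ornstein-Uhlenbeck diffusion by gradient-based maximisation of the discretised transition likelihood; for linear SDEs the Gaussian treatment is exact. The true parameter values and the Euler discretisation are assumptions.

```python
# ML estimation of theta and sigma for dx = -theta*x dt + sigma dW
# via gradient-based optimisation of the Euler transition likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
theta_true, sigma_true, dt, n = 2.0, 0.5, 0.01, 5000
x = np.zeros(n)
for t in range(n - 1):                       # Euler-Maruyama sample path
    x[t + 1] = x[t] - theta_true * x[t] * dt + sigma_true * np.sqrt(dt) * rng.standard_normal()

def neg_log_lik(params):
    theta, log_sigma = params
    var = np.exp(2 * log_sigma) * dt         # variance of one Euler transition
    resid = x[1:] - x[:-1] + theta * x[:-1] * dt
    return 0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

res = minimize(neg_log_lik, x0=[1.0, 0.0])   # BFGS: a simple gradient technique
print("theta ~", round(res.x[0], 2), " sigma ~", round(np.exp(res.x[1]), 3))
```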
Abstract:
The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. In recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. In this thesis we analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly decreasing behaviour of the curves and investigate the asymptotic behaviour of the upper bounds. The effects of the noise and of the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix estimating the distance between input points is often taken to be diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix with respect to the diagonal one. The thesis also presents an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov chain Monte Carlo method and the evidence framework; the neural networks were trained on the task of labelling segmented outdoor images.
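The general distance matrix idea can be sketched directly: an RBF kernel k(x, x') = exp(-0.5 (x - x')^T M (x - x')) with a full positive semi-definite M rather than a diagonal one. A low-rank M makes the GP vary only along a few linear feature directions, which is what reveals the effective dimensionality; the particular M below is an illustrative assumption.

```python
# RBF kernel with a general (here rank-1) distance matrix M.
import numpy as np

def general_rbf(X1, X2, M):
    diff = X1[:, None, :] - X2[None, :, :]          # pairwise differences
    return np.exp(-0.5 * np.einsum('ijk,kl,ijl->ij', diff, M, diff))

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))                     # 3 manifest input variables

w = np.array([[1.0], [2.0], [-1.0]])                # one hidden feature direction
M = w @ w.T                                         # rank-1 distance matrix
print("eigenvalues of M:", np.round(np.linalg.eigvalsh(M), 2))  # one nonzero -> effective dim 1

# Points differing only orthogonally to w are indistinguishable to this kernel:
x_shift = X[0] + np.array([2.0, -1.0, 0.0])         # shift orthogonal to w
print("k(x, x + orthogonal shift) =", general_rbf(X[:1], x_shift[None, :], M)[0, 0])
```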
Abstract:
The primary aim of the thesis is to provide a comprehensive investigation of osmotic dehydration processes in plant tissue, with effort concentrated on modelling for simulating these processes. Two mathematical models for simulating the mass transfer during osmotic dehydration of plant tissues are developed and verified against existing experimental data. Both models are based on the mechanism of diffusion and convection of any mobile material that can be transported in plant tissues. The mass balance equation for the transport of each constituent is established separately for the intracellular and extra-cellular volumes, taking into account the mass transfer across the cell membrane between the intracellular and extra-cellular volumes and the shrinkage of the whole tissue. The contribution from turgor pressure is considered in both models. Model two uses Darcy's law to relate the shrinkage velocity to the hydrostatic pressure in each volume, because the plant tissue can be treated as a porous medium; moreover, it has been extended to solve multi-dimensional problems. Considerable effort has been devoted to parameter studies and sensitivity analyses. The parameters investigated include the concentration of the osmotic solution, the diffusion coefficient, the permeability of the cell membrane, the elastic modulus of the cell wall and the critical cell volume. The models allow us to quantitatively simulate the time evolution of the intracellular and extra-cellular volumes as well as the time evolution of the concentrations in each cross-section.
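A far cruder sketch than the two-compartment models above, shown for orientation only: explicit finite differences for 1D Fickian solute diffusion into a tissue slab held in osmotic solution. The diffusivity, geometry and boundary values are illustrative assumptions.

```python
# Explicit finite-difference solution of C_t = D C_xx in a tissue slab with
# fixed surface concentration and a zero-flux centre boundary.
import numpy as np

D, L, nx = 1e-10, 5e-3, 51            # diffusivity [m2/s], half-thickness [m], grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D                  # within the explicit stability limit dt <= dx^2/(2D)

C = np.zeros(nx)                      # initial solute concentration in the tissue
C[0] = 1.0                            # surface held at the osmotic solution concentration

t, t_end = 0.0, 3600.0                # simulate one hour of osmotic treatment
while t < t_end:
    C[1:-1] += D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])
    C[-1] = C[-2]                     # zero-flux condition at the slab centre
    t += dt
print("concentration profile:", np.round(C[::10], 3))
```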
Abstract:
Calibration of stochastic traffic microsimulation models is a challenging task. This paper proposes a fast iterative probabilistic precalibration framework and demonstrates how it can be successfully applied to a real-world traffic simulation model of a section of the M40 motorway and its surrounding area in the U.K. The efficiency of the method stems from the use of emulators of the stochastic microsimulator, which provide fast surrogates of the traffic model. The use of emulators minimizes the number of microsimulator runs required, and the emulators' probabilistic construction allows for the consideration of the extra uncertainty introduced by the approximation. It is shown that automatic precalibration of this real-world microsimulator, using turn-count observational data, is possible, considering all parameters at once, and that this precalibrated microsimulator improves on the fit to observations compared with the traditional expertly tuned microsimulation.
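The emulator idea can be sketched in a few lines: run the expensive stochastic simulator at a handful of design points, fit a cheap Gaussian process emulator to the outputs, then search the emulator rather than the simulator for the parameter value that best matches an observation. The toy "simulator" and every number below are assumptions.

```python
# GP-emulator-based precalibration of a toy stochastic simulator.
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta):
    """Stand-in for a slow stochastic microsimulator returning a turn count."""
    return 500 * np.exp(-(theta - 2.0)**2) + rng.normal(0, 5)

def rbf(a, b, ell=0.8):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

design = np.linspace(0.0, 4.0, 8)                       # a handful of simulator runs
y = np.array([simulator(t) for t in design])

grid = np.linspace(0.0, 4.0, 401)                       # dense search on the cheap emulator
K = rbf(design, design) + 25.0 * np.eye(len(design))    # noise variance ~ simulator noise
mean = rbf(grid, design) @ np.linalg.solve(K, y)        # GP posterior mean (zero prior mean)

observed = 480.0                                        # field observation (assumed)
theta_hat = grid[np.argmin((mean - observed)**2)]
print("precalibrated parameter:", round(theta_hat, 2))
```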
Abstract:
This work introduces a Gaussian variational mean-field approximation for inference in dynamical systems which can be modeled by ordinary stochastic differential equations. This new approach allows one to express the variational free energy as a functional of the marginal moments of the approximating Gaussian process. A restriction of the moment equations to piecewise polynomial functions, over time, dramatically reduces the complexity of approximate inference for stochastic differential equation models and makes it comparable to that of discrete time hidden Markov models. The algorithm is demonstrated on state and parameter estimation for nonlinear problems with up to 1000 dimensional state vectors and compares the results empirically with various well-known inference methodologies.
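For a linear SDE the marginal moment equations that such a Gaussian approximation parameterises take a simple closed form, dm/dt = A m and dS/dt = A S + S A^T + Sigma, and can be integrated directly. The forward-Euler integration and the 2-D system below are illustrative assumptions, not the paper's piecewise polynomial scheme.

```python
# Forward-Euler integration of the Gaussian marginal moment ODEs for a
# linear SDE dx = A x dt + Sigma^{1/2} dW.
import numpy as np

A = np.array([[-1.0, 2.0],
              [-2.0, -1.0]])            # stable drift matrix (assumed)
Sigma = 0.1 * np.eye(2)                 # diffusion (system noise) covariance

m = np.array([1.0, 0.0])                # initial mean
S = 0.01 * np.eye(2)                    # initial covariance
dt = 1e-3
for _ in range(int(5.0 / dt)):          # integrate the moment ODEs to t = 5
    m = m + dt * (A @ m)
    S = S + dt * (A @ S + S @ A.T + Sigma)

print("mean(5) ~", np.round(m, 4))
print("cov(5) ~\n", np.round(S, 4))
# The long-run covariance solves the Lyapunov equation A S + S A^T + Sigma = 0.
```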