939 results for Markov process modeling
Abstract:
A steady state mathematical model for co-current spray drying was developed for sugar-rich foods with the application of the glass transition temperature concept. Maltodextrin-sucrose solution was used as a sugar-rich food model. The model included mass, heat and momentum balances for a single droplet drying as well as the temperature and humidity profile of the drying medium. A log-normal volume distribution of the droplets was generated at the exit of the rotary atomizer. This distribution was divided into a number of bins, forming a system of non-linear first-order differential equations as a function of the axial distance of the drying chamber. The model was used to calculate the changes of droplet diameter, density, temperature, moisture content and velocity in association with the change of air properties along the axial distance. The difference between the outlet air temperature and the glass transition temperature of the final products (ΔT) was considered as an indicator of stickiness of the particles in the spray drying process. The calculated and experimental ΔT values were close, indicating successful validation of the model. (c) 2004 Elsevier Ltd. All rights reserved.
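The structure described above (a log-normal droplet-size distribution discretised into bins, each governed by first-order ODEs in the axial coordinate) can be sketched generically as below. The drying kinetics are deliberately a toy placeholder (simple first-order moisture loss with a size-dependent rate constant), not the coupled mass, heat and momentum balances of the actual model; the bin count, rate constants and distribution parameters are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import lognorm

# Discretise an assumed log-normal volume distribution of droplet diameters into bins
n_bins = 10
sigma, median_d = 0.5, 50e-6                       # assumed spread and median diameter (m)
edges = lognorm.ppf(np.linspace(0.05, 0.95, n_bins + 1), s=sigma, scale=median_d)
diameters = 0.5 * (edges[:-1] + edges[1:])         # representative diameter per bin

# Toy first-order drying: dX_i/dz = -k_i * X_i, with smaller droplets drying faster.
# A real model would couple mass, heat and momentum balances with the air-side profiles.
k = 2e-4 / diameters                               # assumed size-dependent rate constant (1/m)

def rhs(z, X):
    return -k * X                                  # one moisture ODE per droplet-size bin

X0 = np.full(n_bins, 3.0)                          # assumed initial moisture (kg water / kg solids)
sol = solve_ivp(rhs, (0.0, 1.5), X0, dense_output=True)   # integrate over 1.5 m of chamber length

print("outlet moisture per bin:", np.round(sol.y[:, -1], 3))
```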
Abstract:
The leaching of elements from the surface of charged fly ash particles is known to be an unsteady process. The mass transfer resistance provided by the diffuse double layer has been quantified as one of the reasons for this delayed leaching. In this work, a model based on mass transfer principles for predicting the concentration of calcium hydroxide in the diffuse double layer is presented. The significant difference between the predicted and experimentally measured calcium hydroxide concentrations is explained.
Abstract:
We derive necessary and sufficient conditions for the existence of bounded or summable solutions to systems of linear equations associated with Markov chains. This substantially extends a famous result of G. E. H. Reuter, which provides a convenient means of checking various uniqueness criteria for birth-death processes. Our result allows chains with much more general transition structures to be accommodated. One application is to give a new proof of an important result of M. F. Chen concerning upwardly skip-free processes. We then use our generalization of Reuter's lemma to prove new results for downwardly skip-free chains, such as the Markov branching process and several of its many generalizations. This permits us to establish uniqueness criteria for several models, including the general birth, death, and catastrophe process, extended branching processes, and asymptotic birth-death processes, the latter being neither upwardly skip-free nor downwardly skip-free.
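For context, the classical result being generalised (Reuter's lemma) is usually stated along the following lines; this is a standard textbook formulation rather than a quotation from the paper. For a stable, conservative q-matrix Q = (q_ij) on a countable state space, the minimal transition function is the unique Q-function if and only if, for some (equivalently, every) lambda > 0, the system

```latex
\sum_{j} q_{ij}\, x_j = \lambda x_i, \qquad 0 \le x_i \le 1 \quad \text{for all } i,
```

has only the trivial solution x ≡ 0. Checking uniqueness for a given model then reduces to ruling out non-trivial bounded solutions of such a system, which is the type of criterion the paper extends to chains that are neither upwardly nor downwardly skip-free.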
Abstract:
We present a new method of modeling imaging of laser beams in the presence of diffraction. Our method is based on the concept of first orthogonally expanding the resultant diffraction field (that would have otherwise been obtained by the laborious application of the Huygens diffraction principle) and then representing it by an effective multimodal laser beam with different beam parameters. We show not only that the process of obtaining the new beam parameters is straightforward but also that it permits a different interpretation of the diffraction-caused focal shift in laser beams. All of the criteria that we have used to determine the minimum number of higher-order modes needed to accurately represent the diffraction field show that the mode-expansion method is numerically efficient. Finally, the characteristics of the mode-expansion method are such that it allows modeling of a vast array of diffraction problems, regardless of the characteristics of the incident laser beam, the diffracting element, or the observation plane. (C) 2005 Optical Society of America.
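As a toy illustration of the general idea of representing a diffracted field by a small set of orthogonal beam modes, the sketch below expands a hard-apertured 1-D Gaussian field onto Hermite-Gaussian modes and reports how much of the power a few modes capture. The aperture size, waist and number of modes are arbitrary assumptions; this is not the authors' formulation of the effective multimodal beam.

```python
import numpy as np
from scipy.special import eval_hermite
from math import factorial

w0 = 1.0                                     # assumed beam waist
x = np.linspace(-6, 6, 4001)
dx = x[1] - x[0]

def hg_mode(n, x, w):
    """Normalised 1-D Hermite-Gaussian mode u_n(x)."""
    norm = (2.0 / (np.pi * w**2))**0.25 / np.sqrt(2.0**n * factorial(n))
    return norm * eval_hermite(n, np.sqrt(2.0) * x / w) * np.exp(-(x / w)**2)

# Field just after a hard aperture of half-width 1.5*w0: the "diffracted" field to expand
field = np.exp(-(x / w0)**2) * (np.abs(x) <= 1.5 * w0)
power = np.sum(field**2) * dx

# Orthogonal expansion; only even-order modes couple to this symmetric field
coeffs = {n: np.sum(field * hg_mode(n, x, w0)) * dx for n in range(0, 13, 2)}
captured = np.cumsum([c**2 for c in coeffs.values()]) / power

for n, frac in zip(coeffs, captured):
    print(f"modes up to n={n:2d}: {100*frac:5.1f}% of the power represented")
```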
Abstract:
Increasingly, large areas of native tropical forests are being transformed into a mosaic of human dominated land uses with scattered mature remnants and secondary forests. In general, at the end of the land clearing process, the landscape will have two forest components: a stable component of surviving mature forests, and a dynamic component of secondary forests of different ages. As the proportion of mature forests continues to decline, secondary forests play an increasing role in the conservation and restoration of biodiversity. This paper aims to predict and explain spatial and temporal patterns in the age of remnant mature and secondary forests in lowland Colombian landscapes. We analyse the age distributions of forest fragments, using detailed temporal land cover data derived from aerial photographs. Ordinal logistic regression analysis was applied to model the spatial dynamics of mature and secondary forest patches. In particular, the effect of soil fertility, accessibility and auto-correlated neighbourhood terms on forest age and time of isolation of remnant patches was assessed. In heavily transformed landscapes, forests account for approximately 8% of the total landscape area, of which three quarters are secondary forests. Secondary forest growth adjacent to mature forest patches increases mean patch size and core area, and therefore plays an important ecological role in maintaining landscape structure. The regression models show that forest age is positively associated with the amount of neighbouring forest, and negatively associated with the amount of neighbouring secondary vegetation, so the older the forest, the less secondary vegetation is adjacent to it. Accessibility and soil fertility also have a negative but variable influence on the age of forest remnants. The probability of future clearing if current conditions hold is higher for regenerated than for mature forests. The challenge of biodiversity conservation and restoration in dynamic and spatially heterogeneous landscape mosaics composed of mature and secondary forests is discussed. (c) 2004 Elsevier B.V. All rights reserved.
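As a generic sketch of the ordinal logistic regression step (relating an ordered forest-age class to covariates of the kind named above), the snippet below fits synthetic data; the covariate names, effect sizes and the choice of the statsmodels OrderedModel API are illustrative assumptions, not the authors' dataset or code.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 500

# Synthetic covariates of the kind discussed above (all values are made up)
soil_fertility = rng.normal(size=n)
accessibility = rng.normal(size=n)           # e.g. proximity to roads
neighbour_forest = rng.normal(size=n)        # amount of neighbouring forest

# Latent "age" score: older where more neighbouring forest, lower accessibility/fertility
latent = (1.0 * neighbour_forest - 0.6 * accessibility - 0.4 * soil_fertility
          + rng.logistic(size=n))
age_class = pd.Series(pd.cut(latent, bins=[-np.inf, -1, 1, np.inf],
                             labels=["young", "intermediate", "old"], ordered=True))

X = pd.DataFrame({"soil_fertility": soil_fertility,
                  "accessibility": accessibility,
                  "neighbour_forest": neighbour_forest})

model = OrderedModel(age_class, X, distr="logit")    # ordinal (proportional-odds) logit
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```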
Abstract:
Many populations have a negative impact on their habitat or upon other species in the environment if their numbers become too large. For this reason they are often subjected to some form of control. One common control regime is the reduction regime: when the population reaches a certain threshold it is controlled (for example culled) until it falls below a lower predefined level. The natural model for such a controlled population is a birth-death process with two phases, the phase determining which of two distinct sets of birth and death rates governs the process. We present formulae for the probability of extinction and the expected time to extinction, and discuss several applications. (c) 2006 Elsevier Inc. All rights reserved.
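A minimal Monte Carlo sketch of the controlled population described above follows: a birth-death process switches into a control phase (reduced birth rate, increased death rate) when it reaches an upper threshold and switches back once it falls below a lower level, and the extinction probability and time are then estimated by simulation. All rates, thresholds and the time horizon are made-up illustrative values rather than the paper's formulae.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n0=10, upper=30, lower=15, t_max=1_000.0):
    """Return the time to extinction (or inf if not extinct by t_max) for one run."""
    # Phase 0: normal rates; phase 1: control (culling) rates -- illustrative values.
    birth = {0: 1.1, 1: 0.5}       # per-capita birth rate in each phase
    death = {0: 1.0, 1: 1.5}       # per-capita death rate in each phase
    n, phase, t = n0, 0, 0.0
    while n > 0 and t < t_max:
        rate = n * (birth[phase] + death[phase])
        t += rng.exponential(1.0 / rate)
        if rng.random() < birth[phase] / (birth[phase] + death[phase]):
            n += 1
        else:
            n -= 1
        if phase == 0 and n >= upper:
            phase = 1               # start the reduction regime
        elif phase == 1 and n < lower:
            phase = 0               # stop controlling
    return t if n == 0 else np.inf

times = np.array([simulate() for _ in range(1000)])
extinct = np.isfinite(times)
print("estimated extinction probability by t=1000:", extinct.mean())
print("mean time to extinction (extinct runs only):", times[extinct].mean())
```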
Abstract:
Stochastic models based on Markov birth processes are constructed to describe the process of invasion of a fly larva by entomopathogenic nematodes. Various forms for the birth (invasion) rates are proposed. These models are then fitted to data sets describing the observed numbers of nematodes that have invaded a fly larva after a fixed period of time. Non-linear birth rates are required to achieve good fits to these data, with their precise form leading to different patterns of invasion being identified for the three populations of nematodes considered. One of these (Nemasys) showed the greatest propensity for invasion. This form of modelling may be useful more generally for analysing data that show variation which is different from that expected from a binomial distribution.
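To illustrate the kind of Markov birth (invasion) process described above, the sketch below simulates the number of nematodes that have invaded a larva by a fixed observation time under a non-linear, facilitation-style invasion rate, and compares the spread of the counts with the binomial variance at the same mean. The rate function and every parameter value are illustrative assumptions, not fitted quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def invade(n_outside=30, t_obs=24.0, alpha=0.01, beta=0.15):
    """Number of nematodes invaded by t_obs under a non-linear (facilitated) rate."""
    inside, t = 0, 0.0
    while inside < n_outside:
        # Invasion rate rises with the number already inside: a toy non-linear form
        rate = (n_outside - inside) * alpha * (1.0 + beta * inside)
        t += rng.exponential(1.0 / rate)
        if t > t_obs:
            break
        inside += 1
    return inside

counts = np.array([invade() for _ in range(2000)])
p_hat = counts.mean() / 30
binom_var = 30 * p_hat * (1 - p_hat)
print(f"mean invaded: {counts.mean():.2f}, observed variance: {counts.var():.2f}, "
      f"binomial variance at the same mean: {binom_var:.2f}")
```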
Abstract:
Gaussian processes provide natural non-parametric prior distributions over regression functions. In this paper we consider regression problems where there is noise on the output, and the variance of the noise depends on the inputs. If we assume that the noise is a smooth function of the inputs, then it is natural to model the noise variance using a second Gaussian process, in addition to the Gaussian process governing the noise-free output value. We show that prior uncertainty about the parameters controlling both processes can be handled and that the posterior distribution of the noise rate can be sampled from using Markov chain Monte Carlo methods. Our results on a synthetic data set give a posterior noise variance that approximates the true variance well.
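A compact sketch of the model structure described above follows: one GP prior for the signal, a second GP prior over the log noise variance, and MCMC over the latent log-noise values. The kernel choices, the fixed hyperparameters and the plain random-walk Metropolis sampler are simplifying assumptions for illustration; they are not the paper's sampler.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data whose noise level grows with the input
X = np.linspace(0.0, 1.0, 40)
true_sd = 0.05 + 0.3 * X
y = np.sin(2 * np.pi * X) + rng.normal(0.0, true_sd)

def rbf(a, b, var, ls):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

K_f = rbf(X, X, 1.0, 0.2)                          # GP prior over the noise-free signal
K_g = rbf(X, X, 1.0, 0.3) + 1e-8 * np.eye(len(X))  # GP prior over g = log noise variance
L_g = np.linalg.cholesky(K_g)

def log_post(g):
    """log p(y | g) + log p(g), up to constants, with the signal GP integrated out."""
    C = K_f + np.diag(np.exp(g))                   # marginal covariance of y given g
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    loglik = -0.5 * y @ alpha - np.log(np.diag(L)).sum()
    logprior = -0.5 * g @ np.linalg.solve(K_g, g)
    return loglik + logprior

# Random-walk Metropolis over the latent log noise variances
g = np.full(len(X), np.log(0.1**2))
lp = log_post(g)
sd_samples = []
for it in range(4000):
    prop = g + 0.05 * (L_g @ rng.normal(size=len(X)))   # correlated proposal step
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        g, lp = prop, lp_prop
    if it >= 2000:
        sd_samples.append(np.exp(0.5 * g))              # posterior noise std dev draws

post_sd = np.mean(sd_samples, axis=0)
print("posterior noise sd near x=0 and x=1:", post_sd[0].round(3), post_sd[-1].round(3))
```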
Abstract:
This paper considers the problem of extracting the relationships between two time series in a non-linear non-stationary environment with Hidden Markov Models (HMMs). We describe an algorithm which is capable of identifying associations between variables. The method is applied to both synthetic data and real data. We show that HMMs are capable of modelling the oil drilling process and that they outperform existing methods.
Abstract:
Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a non-linear non-stationary environment, these techniques are not sufficient. We show in this paper how to use hidden Markov models to identify the lag (or delay) between different variables for such data. Adopting an information-theoretic approach, we develop a procedure for training HMMs to maximise the mutual information (MMI) between delayed time series. The method is used to model the oil drilling process. We show that cross-correlation gives no information and that the MMI approach outperforms maximum likelihood.
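For comparison, a minimal likelihood-based lag scan of the kind the MMI criterion is benchmarked against can be sketched as follows: shift one synthetic series against the other, fit a two-dimensional Gaussian HMM to each aligned pair, and pick the lag with the highest log-likelihood. The synthetic data, the use of the hmmlearn library and all settings are illustrative assumptions; this is the maximum-likelihood baseline, not the mutual-information training procedure developed in the paper.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(5)

# Two synthetic regime-switching series where y lags x by 7 samples
T, true_lag = 600, 7
regime = np.cumsum(rng.random(T) < 0.02) % 2          # slow 0/1 regime switching
x = regime + 0.3 * rng.normal(size=T)
y = np.roll(regime, true_lag) + 0.3 * rng.normal(size=T)

def aligned_loglik(lag):
    """Fit a 2-state HMM to the lag-aligned pair and return its log-likelihood."""
    obs = np.column_stack([x[:T - lag], y[lag:]])
    model = GaussianHMM(n_components=2, covariance_type="full",
                        n_iter=50, random_state=0)
    model.fit(obs)
    return model.score(obs)

lags = range(0, 15)
scores = [aligned_loglik(l) for l in lags]
best = lags[int(np.argmax(scores))]
print(f"estimated lag: {best} (true lag: {true_lag})")
```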
Abstract:
The data available during the drug discovery process is vast in amount and diverse in nature. To gain useful information from such data, an effective visualisation tool is required. To provide better visualisation facilities to the domain experts (screening scientists, biologists, chemists, etc.), we developed software based on recently developed principled visualisation algorithms such as Generative Topographic Mapping (GTM) and Hierarchical Generative Topographic Mapping (HGTM). The software also supports conventional visualisation techniques such as Principal Component Analysis, NeuroScale, PhiVis, and Locally Linear Embedding (LLE). The software also provides global and local regression facilities. It supports regression algorithms such as Multilayer Perceptron (MLP), Radial Basis Functions network (RBF), Generalised Linear Models (GLM), Mixture of Experts (MoE), and the newly developed Guided Mixture of Experts (GME). This user manual gives an overview of the purpose of the software tool, highlights some of the issues to take care of when creating a new model, and provides information about how to install and use the tool. The user manual does not require the readers to have familiarity with the algorithms it implements. Basic computing skills are enough to operate the software.
Abstract:
Today, the data available to tackle many scientific challenges is vast in quantity and diverse in nature. The exploration of heterogeneous information spaces requires suitable mining algorithms as well as effective visual interfaces. miniDVMS v1.8 provides a flexible visual data mining framework which combines advanced projection algorithms developed in the machine learning domain and visual techniques developed in the information visualisation domain. The advantage of this interface is that the user is directly involved in the data mining process. Principled projection methods, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), are integrated with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates, and user interaction facilities, to provide this integrated visual data mining framework. The software also supports conventional visualisation techniques such as principal component analysis (PCA), Neuroscale, and PhiVis. This user manual gives an overview of the purpose of the software tool, highlights some of the issues to take care of when creating a new model, and provides information about how to install and use the tool. The user manual does not require the readers to have familiarity with the algorithms it implements. Basic computing skills are enough to operate the software.
Abstract:
In recent work we have developed a novel variational inference method for partially observed systems governed by stochastic differential equations. In this paper we provide a comparison of the Variational Gaussian Process Smoother with an exact solution computed using a Hybrid Monte Carlo approach to path sampling, applied to a stochastic double well potential model. It is demonstrated that the variational smoother provides a very accurate estimate of the mean path, while the conditional variance is slightly underestimated. We conclude with some remarks as to the advantages and disadvantages of the variational smoother. © 2008 Springer Science + Business Media LLC.
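The benchmark system referred to above, a diffusion in a double-well potential observed sparsely with noise, can be reproduced generically with an Euler-Maruyama scheme. The drift form, noise levels and observation schedule below are common illustrative choices, not necessarily the exact settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Double-well drift: f(x) = 4*x*(1 - x^2), the negative gradient of (x^2 - 1)^2
def drift(x):
    return 4.0 * x * (1.0 - x**2)

dt, T = 0.01, 20.0
sigma = 1.0                        # assumed diffusion coefficient
steps = int(T / dt)

x = np.empty(steps + 1)
x[0] = -1.0                        # start in the left well
for k in range(steps):             # Euler-Maruyama discretisation
    x[k + 1] = x[k] + drift(x[k]) * dt + sigma * np.sqrt(dt) * rng.normal()

# Sparse, noisy observations of the path: the data a smoother would condition on
obs_every, obs_sd = 100, 0.2
t_obs = np.arange(0, steps + 1, obs_every) * dt
y_obs = x[::obs_every] + obs_sd * rng.normal(size=t_obs.size)

# Rough count of well switches: zero crossings of the simulated path
print("approximate number of well transitions:", int(np.sum(np.diff(np.sign(x)) != 0)))
print("first few observations:", np.round(y_obs[:5], 2))
```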
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a sine drift. The new algorithms' accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite sample applications the algorithms are accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
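A bare-bones version of the random-walk component of such a path sampler can be sketched as follows: the discretised path is scored under the Euler-Maruyama transition densities of a double-well SDE plus Gaussian observation terms, and local perturbations of single path points are accepted or rejected by Metropolis. This plain single-site sampler is only the simplest ingredient; it omits the variational proposal, the blocking strategies and the independence component described above, and all settings (including the pretend observations) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Discretised double-well SDE prior and sparse Gaussian observations (toy setup)
dt, steps, sigma = 0.05, 200, 1.0
def drift(x):
    return 4.0 * x * (1.0 - x**2)
obs_idx = np.arange(0, steps + 1, 40)
obs_val = np.array([-1.0, -0.9, -1.1, 0.9, 1.0, 1.1])   # pretend data showing a well switch
obs_sd = 0.2

def log_target(path):
    """Euler-Maruyama path log-density plus observation log-likelihood, up to constants."""
    inc = path[1:] - path[:-1] - drift(path[:-1]) * dt
    log_prior = -0.5 * np.sum(inc**2) / (sigma**2 * dt)
    log_lik = -0.5 * np.sum((path[obs_idx] - obs_val)**2) / obs_sd**2
    return log_prior + log_lik

# Initialise by interpolating through the observations, then do single-site updates
path = np.interp(np.arange(steps + 1), obs_idx, obs_val)
lp = log_target(path)
accept, n_iter = 0, 20000
for it in range(n_iter):
    i = rng.integers(0, steps + 1)               # pick one path point
    prop = path.copy()
    prop[i] += 0.15 * rng.normal()               # local random-walk move
    lp_prop = log_target(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        path, lp = prop, lp_prop
        accept += 1

print(f"acceptance rate: {accept / n_iter:.2f}")
print("sampled path at the observation times:", np.round(path[obs_idx], 2))
```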
Abstract:
The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high efficiency cyclones for char removal and a disk and doughnut quench column combined with a wet walled electrostatic precipitator, which is directly mounted on top, for liquids collection. In order to aid design and scale-up calculations, detailed mathematical modelling was undertaken of the reaction system, enabling sizes, efficiencies and operating conditions to be determined. Specifically, a modular approach was taken due to the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, specification of the reactor capacity, cyclone design, quench column design and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed, capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C with a relative velocity between the heated surface and reacting biomass particle of 12.1 m/s. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply. The reactor is capable of being operated at a far higher throughput, but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a throughput of approximately 30 kg/hr. This is an area that should be considered in the future, as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second. Operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% on a dry wood feed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties, and if longer operating runs had been attempted to offset product losses occurring due to the difficulties in collecting all available product from a large scale collection unit. The liquids collection system was highly efficient: modelling determined a liquid collection efficiency of above 99% on a mass basis, and this was validated by a dry ice/acetone condenser and a cotton wool filter downstream of the collection unit, which enabled mass measurements of the amount of condensable product exiting the product collection unit. These measurements showed that the collection efficiency was in excess of 99% on a mass basis.
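As a rough illustration of how the measured ablation rate constrains throughput, the back-of-envelope below converts the reported 0.63 mm/s ablation rate into the heated contact area needed for a given feed rate. The assumed wood density and the simple rate x area x density relation are illustrative assumptions, not the thesis's detailed reactor model.

```python
# Back-of-envelope: contact area needed so that ablation keeps up with the feed rate.
ablation_rate = 0.63e-3        # m/s, measured for pine at 525 degC (from the abstract)
wood_density = 500.0           # kg/m^3, assumed bulk density of pine (illustrative)

for feed_kg_per_hr in (2.3, 10.0, 30.0):
    feed_kg_per_s = feed_kg_per_hr / 3600.0
    area_m2 = feed_kg_per_s / (ablation_rate * wood_density)
    print(f"{feed_kg_per_hr:5.1f} kg/hr  ->  ~{area_m2 * 1e4:6.1f} cm^2 of ablating contact area")
```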