939 results for Markov process modeling
Abstract:
The present study is an integral part of a broader study focused on the design and implementation of self-cleaning culverts, i.e., configurations that prevent the formation of sediment deposits after culvert construction or cleaning. Sediment deposition at culverts is influenced by many factors, including the size and characteristics of the material of which the channel is composed, the hydraulic conditions generated by different hydrologic events, the culvert geometry, the channel transition design, and the vegetation around the channel. The multitude of combinations produced by this set of variables makes the investigation of practical situations a complex undertaking. In addition, field and analytical observations have revealed complexities affecting the flow and sediment transport through culverts that further increase the dimensions of the investigation. The flow complexities investigated in this study comprise: flow non-uniformity in the areas of transition to and from the culvert, flow unsteadiness due to flood wave propagation through the channel, and the asynchronous correlation between the flow and sediment hydrographs resulting from storm events. To date, the literature contains no systematic studies on sediment transport through multi-box culverts or investigations of the adverse effects of sediment deposition at culverts. Moreover, there is limited knowledge about non-uniform, unsteady sediment transport in channels of variable geometry. Furthermore, there are few readily usable (inexpensive and practical) numerical models that can reliably simulate flow and sediment transport in such complex situations. Given the current state of knowledge, the main goal of the present study is to investigate the above flow complexities in order to provide the insights needed for a series of ongoing culvert studies. The research was phased so that field observations were conducted first, to understand culvert behavior in the Iowa landscape. Modeling through complementary hydraulic-model and numerical experiments was subsequently carried out to gain the practical knowledge needed to develop the self-cleaning culvert designs.
Abstract:
The Feller process is a one-dimensional diffusion process with linear drift and a state-dependent diffusion coefficient that vanishes at the origin. The process is positive, and it is this property, along with its linear character, that has made the Feller process a convenient candidate for the modeling of a number of phenomena ranging from single-neuron firing to the volatility of financial assets. While the general properties of the process have long been well known, less known are properties related to level crossing, such as the first-passage and the escape problems. In this work we thoroughly address these questions.
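In the usual parameterization, the Feller (square-root) process solves dX_t = (a - b X_t) dt + sigma * sqrt(X_t) dW_t with a, b, sigma > 0. As a rough numerical companion to the first-passage problem, the sketch below estimates first-passage times to a level by Monte Carlo with a full-truncation Euler scheme; the function name and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def feller_first_passage(x0, a, b, sigma, level, dt=1e-2, t_max=20.0,
                         n_paths=5_000, seed=0):
    """Monte Carlo first-passage times of dX = (a - b*X) dt + sigma*sqrt(X) dW
    from x0 to `level`, full-truncation Euler scheme (NaN = no crossing)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, float(x0))
    fpt = np.full(n_paths, np.nan)
    alive = np.ones(n_paths, dtype=bool)
    going_up = level > x0
    for k in range(1, int(t_max / dt) + 1):
        idx = np.flatnonzero(alive)
        if idx.size == 0:
            break
        xp = np.maximum(x[idx], 0.0)          # truncation keeps sqrt() real
        dw = rng.standard_normal(idx.size) * np.sqrt(dt)
        x[idx] += (a - b * x[idx]) * dt + sigma * np.sqrt(xp) * dw
        hit = (x[idx] >= level) if going_up else (x[idx] <= level)
        fpt[idx[hit]] = k * dt
        alive[idx[hit]] = False
    return fpt

# Example: mean first-passage time from x0 = 1.0 down to the level 0.2
times = feller_first_passage(x0=1.0, a=0.5, b=1.0, sigma=0.4, level=0.2)
print(np.nanmean(times), np.mean(np.isnan(times)))
```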
Abstract:
In this thesis, I develop analytical models to price the value of supply chain investments under demand uncertainty. The thesis includes three self-contained papers. In the first paper, we investigate the value of lead-time reduction under the risk of sudden and abnormal changes in demand forecasts. We first consider the risk of a complete and permanent loss of demand. We then provide a more general jump-diffusion model, in which we add a compound Poisson process to a constant-volatility demand process to explore the impact of sudden changes in demand forecasts on the value of lead-time reduction. We use an Edgeworth series expansion to divide the lead-time cost into the part arising from constant instantaneous volatility and the part arising from the risk of jumps. We show that the value of lead-time reduction increases substantially with the intensity and/or the magnitude of jumps. In the second paper, we analyze the value of quantity flexibility in the presence of supply-chain disintermediation problems. We use the multiplicative martingale model and the "contracts as reference points" theory to capture both the positive and the negative effects of quantity flexibility for the downstream level in a supply chain. We show that lead-time reduction reduces both supply-chain disintermediation problems and supply-demand mismatches. We furthermore analyze the impact of the supplier's cost structure on the profitability of quantity-flexibility contracts. When the supplier's initial investment cost is relatively low, supply-chain disintermediation risk becomes less important, and hence the contract becomes more profitable for the retailer. We also find that supply-chain efficiency increases substantially with the supplier's ability to disintermediate the chain when the initial investment cost is relatively high. In the third paper, we investigate the value of dual sourcing for products with heavy-tailed demand distributions. We apply extreme-value theory to analyze the effects of the tail heaviness of the demand distribution on the optimal dual-sourcing strategy. We find that these effects depend on the characteristics of the demand and profit parameters. When both the profit margin of the product and the cost differential between the suppliers are relatively high, it is optimal to buffer the mismatch risk by increasing both the inventory level and the responsive capacity as demand uncertainty increases. In that case, however, both the optimal inventory level and the optimal responsive capacity decrease as the tail of demand becomes heavier. When the profit margin of the product is relatively high and the cost differential between the suppliers is relatively low, it is optimal to buffer the mismatch risk by increasing the responsive capacity and reducing the inventory level as demand uncertainty increases. In that case, however, it is optimal to buffer with more inventory and less capacity as the tail of demand becomes heavier. We also show that the optimal responsive capacity is higher for products with heavier tails when the fill rate is extremely high.
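The jump-diffusion model of the first paper superimposes a compound Poisson jump component on a constant-volatility demand process. The sketch below simulates such a process, assuming lognormal diffusion and normally distributed log-jump sizes; every name and number here is an illustrative placeholder, not the thesis's specification or calibration.

```python
import numpy as np

def simulate_demand(d0, mu, sigma, jump_rate, jump_mean, jump_std,
                    horizon=1.0, n_steps=250, n_paths=10_000, seed=1):
    """Terminal demand under a jump-diffusion: GBM plus compound Poisson
    jumps in the log, with Normal(jump_mean, jump_std) log-jump sizes."""
    rng = np.random.default_rng(seed)
    dt = horizon / n_steps
    log_d = np.full(n_paths, np.log(d0))
    for _ in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        n_jumps = rng.poisson(jump_rate * dt, n_paths)
        # Sum of n i.i.d. normal jumps: mean and variance scale with n.
        jumps = rng.normal(jump_mean * n_jumps, jump_std * np.sqrt(n_jumps))
        log_d += (mu - 0.5 * sigma**2) * dt + sigma * dw + jumps
    return np.exp(log_d)

# Heavier/larger jumps fatten the left tail of the demand distribution.
d = simulate_demand(d0=100.0, mu=0.02, sigma=0.25,
                    jump_rate=0.5, jump_mean=-0.4, jump_std=0.1)
print(f"mean {d.mean():.1f}, 5% quantile {np.quantile(d, 0.05):.1f}")
```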
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ the method to compute definition scripts for the JPEG standard's progressive operation mode using a quantization-based approach, so that a trial-and-error procedure is no longer necessary to obtain a desired PSNR and/or definition script, reducing cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed under the JPEG standard subject to a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard's progressive operation mode from a quantization-based approach. A relationship between the measured image quality at a given stage of the coding process and a quantization matrix is found, so the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts may be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image-quality improvement during decoding.
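The core relation, between a Laplacian source and its uniform quantization error, can be checked numerically. The sketch below estimates the MSE of a uniform mid-tread quantizer applied to Laplacian-distributed DCT coefficients and converts it to PSNR; it is a Monte Carlo sanity check of the kind of relation the paper derives analytically, with illustrative parameter values.

```python
import numpy as np

def laplacian_quant_mse(lam, step, n=1_000_000, seed=0):
    """MSE of uniform mid-tread quantization (step `step`) of a zero-mean
    Laplacian source with density (lam/2) * exp(-lam * |x|)."""
    rng = np.random.default_rng(seed)
    x = rng.laplace(loc=0.0, scale=1.0 / lam, size=n)
    xq = step * np.round(x / step)          # mid-tread uniform quantizer
    return np.mean((x - xq) ** 2)

def psnr(mse, peak=255.0):
    """PSNR in dB for 8-bit imagery."""
    return 10.0 * np.log10(peak**2 / mse)

mse = laplacian_quant_mse(lam=0.1, step=16.0)
print(f"MSE = {mse:.2f}, PSNR = {psnr(mse):.2f} dB")
```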
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residual sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, combined with sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms by analyzing the quality and quantity of the spatially structured information they extract from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study from the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process.
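The hybrid structure, an ML model for the long-range trend plus stochastic simulation of its residuals, can be sketched compactly. The toy below fits a multilayer perceptron to synthetic spatial data and then generates residual realizations by a naive bootstrap; the paper's variogram-based sequential simulation is deliberately replaced by this much cruder stand-in, so only the overall MLRSS structure is illustrated.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(500, 2))                       # sample locations
z = np.sin(xy[:, 0] / 15) + 0.1 * rng.standard_normal(500)    # synthetic field

# Step 1: the ML model captures the long-range spatial trend.
trend = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                     random_state=0).fit(xy, z)
residuals = z - trend.predict(xy)

# Step 2: simulate residual realizations on a grid (bootstrap stand-in for
# sequential simulation) and derive a probability-of-exceedance map.
g = np.linspace(0, 100, 50)
grid = np.column_stack([np.repeat(g, 50), np.tile(g, 50)])
sims = np.stack([trend.predict(grid) +
                 rng.choice(residuals, size=len(grid), replace=True)
                 for _ in range(100)])
prob_exceed = (sims > 0.5).mean(axis=0)      # P(field > 0.5) at each node
print("mean exceedance probability:", prob_exceed.mean().round(3))
```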
Abstract:
Software development tools use information produced from the developer's source code. This information is exploited in different phases of a software project and for different purposes. In modern software projects, the amount of information used can grow very large. Software tools have their own information models and access mechanisms. The amount of information, together with the separate tool-specific information models, makes it very hard to build a flexible tool environment, particularly for a domain-specific software development process. In this work, basic information metamodels of the Unified Modeling Language, the Python programming language, and the C++ programming language are analyzed. The level of meta-information is restricted to the structural level; executable structures are left out. A ModelBase metamodel is composed from the analyzed existing metamodels. This metamodel can be used in the future for the development of software tools.
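As a rough illustration of what a merged structural metamodel might look like, the sketch below defines a few common concepts (classifier, attribute, operation) that UML, Python, and C++ structural models could all map onto. The class and field names are invented for illustration and are not the actual ModelBase definitions; executable structures are left out, as in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    type_name: str

@dataclass
class Operation:
    name: str
    parameter_types: list = field(default_factory=list)

@dataclass
class Classifier:
    """A structural element common to UML, Python, and C++ models."""
    name: str
    language: str                              # "UML" | "Python" | "C++"
    attributes: list = field(default_factory=list)
    operations: list = field(default_factory=list)

# The same structural view can describe an element from any source language.
cpp_cls = Classifier("Buffer", "C++",
                     attributes=[Attribute("size", "std::size_t")],
                     operations=[Operation("resize", ["std::size_t"])])
print(cpp_cls)
```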
Abstract:
The disintegration of recovered paper is the first operation in the preparation of recycled pulp. It is known that the defibering process follows first-order kinetics, from which the disintegration kinetic constant (K_D) can be obtained in different ways. The disintegration constant can be obtained from the Somerville index results (%Isv) and from the dissipated energy per volume unit (Ss). The %Isv is related to the quantity of non-defibered paper, as a measure of the residual non-disintegrated fiber (percentage of flakes), and is expressed in disintegration time units. In this work, the disintegration kinetics of recycled coated paper has been evaluated, working at a rotor speed of 20 rev/s and at different fiber consistencies (6, 8, 10, 12 and 14%). The results showed that the experimental disintegration kinetic constant, K_D, can be obtained through the analysis of the Somerville index as a function of time, and that as consistency increased, the disintegration time was drastically reduced. The disintegration kinetic constant calculated (modeled K_D) from Rayleigh's dissipation function showed a good correlation with the experimental values obtained from the evolution of the Somerville index or from the dissipated energy.
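First-order kinetics means the non-disintegrated fraction decays exponentially, S(t) = S0 * exp(-K_D * t), so K_D can be estimated from a log-linear fit of the Somerville index against disintegration time. A minimal sketch with invented measurements:

```python
import numpy as np

# Illustrative data only: Somerville index (% flakes) vs. disintegration time.
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 30.0])   # time, min
S = np.array([35.0, 21.0, 12.5, 7.6, 4.5, 1.7])    # Somerville index, %

# ln S = ln S0 - K_D * t  ->  least-squares line in (t, ln S)
slope, intercept = np.polyfit(t, np.log(S), 1)
print(f"K_D = {-slope:.4f} 1/min, S0 = {np.exp(intercept):.1f} %")
```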
Abstract:
A rigorous unit operation model is developed for vapor membrane separation. The new model is able to describe temperature-, pressure-, and concentration-dependent permeation, as well as real-fluid effects, in vapor and gas separation with hydrocarbon-selective rubbery polymeric membranes. Permeation through the membrane is described by a separate treatment of sorption and diffusion within the membrane. Chemical engineering thermodynamics is used to describe the equilibrium sorption of vapors and gases in rubbery membranes, with equation-of-state models for polymeric systems. A new modification of the UNIFAC model is also proposed for this purpose. Various thermodynamic models are extensively compared in order to verify their ability to predict and correlate experimental vapor-liquid equilibrium data. Penetrant transport through the selective layer of the membrane is described with the generalized Maxwell-Stefan equations, which are able to account for the bulk flux contribution as well as the diffusive coupling effect. A method is described to compute and correlate binary penetrant-membrane diffusion coefficients from experimental permeability coefficients at different temperatures and pressures. A fluid flow model for spiral-wound modules is derived from the conservation equations of mass, momentum, and energy. The conservation equations are presented in discretized form using the control volume approach. A combination of the permeation model and the fluid flow model yields the desired rigorous model for vapor membrane separation. The model is implemented in an in-house process simulator, so that vapor membrane separation may be evaluated as an integral part of a process flowsheet.
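In its simplest uncoupled limit, the sorption-diffusion treatment reduces to the solution-diffusion picture: permeability is the product of solubility and diffusivity, and the steady flux is driven by the partial-pressure difference across the selective layer. The sketch below shows only that limit (the thesis itself uses the generalized Maxwell-Stefan equations with bulk-flux and coupling effects); all property values are invented placeholders.

```python
def permeate_flux(solubility, diffusivity, thickness, p_feed, p_perm):
    """Steady-state solution-diffusion flux through a dense selective layer.
    solubility   : equilibrium sorption coefficient   [mol/(m3 Pa)]
    diffusivity  : penetrant-membrane diffusion coef. [m2/s]
    thickness    : selective-layer thickness          [m]
    p_feed/p_perm: penetrant partial pressures        [Pa]
    Returns flux in mol/(m2 s)."""
    permeability = solubility * diffusivity
    return permeability / thickness * (p_feed - p_perm)

# Example: a hydrocarbon vapor through a 2-micron rubbery layer.
print(permeate_flux(solubility=1.5e-3, diffusivity=2.0e-10,
                    thickness=2e-6, p_feed=3.0e5, p_perm=0.2e5))
```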
Abstract:
In the European Union, the importance of mobile communications was realized early on. The process of mobile communications becoming ubiquitous has taken time, as the innovation diffused into society. The aim of this study is to find out how the evolution and spatial patterns of the diffusion of mobile communications within the European Union could be taken into account in forecasting the diffusion process. There is a relatively large body of research on innovation diffusion at the individual (micro) and country (macro) levels, compared to the territorial level. Territorial or spatial diffusion refers either to the intra-country or the inter-country diffusion of an innovation. In both settings, the diffusion of a technological innovation has gained scarce attention. This study adds knowledge of diffusion between countries, focusing especially on the role of location in this process. The main findings of the study are the following. The penetration rates of the European Union member countries became more even in the period of observation, from 1981 to 2000; the common digital GSM system seems to have hastened this process. As to the role of location in the diffusion process, neighboring countries have had similar diffusion processes. They can be grouped into three: the Nordic countries, the central and southern European countries, and the remote southern European countries. The neighborhood effect also dominates in the gravity model that is used for modeling the adoption timing of the countries. The subsequent diffusion within a country, measured by the logistic model in Finland, is affected positively by the country's economic situation, and it seems to level off at some 92%. Launching future mobile communications systems on a common standard should thus imply an even development across the countries. The launch time should be carefully selected, as diffusion is probably delayed in economic downturns. The location of a country, measured by distance, can be used in forecasting adoption and diffusion. Finally, the finding that penetration rates become more even implies that, in a relatively homogeneous set of countries such as the European Union member states, the estimated final penetration of a single country can be used to approximate the penetration of the others. The estimated eventual penetration of Finland, some 92%, should thus also be the eventual level for all the European Union countries and for the European Union as a whole.
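The within-country logistic model mentioned above has the form P(t) = K / (1 + exp(-r (t - t0))), with K the eventual penetration ceiling. The sketch below fits such a curve to an invented penetration series; the data and starting values are illustrative, not the Finnish series used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic diffusion curve: penetration saturating at ceiling K (%)."""
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(1981, 2001)
pen = np.array([0.5, 0.8, 1.2, 1.8, 2.6, 3.7, 5.2, 7.1, 9.5, 12.4,
                16.0, 20.5, 26.0, 33.0, 41.0, 50.0, 59.0, 67.0, 74.0, 79.0])

(K, r, t0), _ = curve_fit(logistic, years - 1981.0, pen, p0=[90.0, 0.4, 15.0])
print(f"ceiling K = {K:.1f} %, rate r = {r:.2f}, midpoint = {1981 + t0:.1f}")
```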
Abstract:
Growing interest in software quality has drawn attention to software processes and their improvement in recent years. Software companies around the world have adopted software process improvement models, such as CMM and SPICE, in their pursuit of higher-quality software products. At the same time, it has been recognized that effective process improvement and enactment need the support of a process description, so that the process can be thoroughly understood and communicated. Software processes can be described in many different ways. A process guide is a representation of a process whose main purpose is to facilitate the understanding and communication of the process. An electronic process guide is a process guide that makes use of Web technology. In this work, a development environment for electronic process guides is created, with the purpose of supporting software process improvement and enactment. The environment enables software process modeling as well as the creation and editing of individual guides. The development environment is used to model the software process of a company developing telecommunications software and to create electronic process guides that support process improvement and enactment. Finally, the support and opportunities offered by the process guides in the target company are discussed.
Abstract:
Many European states apply scoring systems to evaluate the disability severity of victims of non-fatal motor accidents under third-party liability law. The score is a non-negative integer, bounded above by 100, that increases with severity. It may be automatically converted into financial terms, and thus also reflects the compensation cost for disability. In this paper, discrete regression models are applied to analyze the factors that influence the disability severity score of victims. Standard and zero-altered regression models are compared from two perspectives: the interpretation of the data-generating process and the level of statistical fit. The results have implications for traffic-safety policy decisions aimed at reducing accident severity. An application using data from Spain is provided.
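A zero-altered specification separates the chance of a zero score from the severity level of non-zero scores. The sketch below contrasts a standard Poisson fit with a zero-inflated Poisson on synthetic data, assuming statsmodels' ZeroInflatedPoisson class; the covariates and data are invented for illustration and do not reproduce the Spanish application.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 2000
age = rng.integers(18, 80, n)
motorcycle = rng.integers(0, 2, n)                 # illustrative covariates
X = sm.add_constant(np.column_stack([age, motorcycle]).astype(float))

# Synthetic severity scores: excess zeros plus a Poisson severity component.
score = np.where(rng.random(n) < 0.4, 0, rng.poisson(3 + 2 * motorcycle))

poisson_res = sm.Poisson(score, X).fit(disp=False)
zip_res = ZeroInflatedPoisson(score, X, exog_infl=X).fit(disp=False, maxiter=200)
print(f"Poisson AIC {poisson_res.aic:.0f} vs zero-inflated AIC {zip_res.aic:.0f}")
```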
Abstract:
The literature part of the work reviews the overall Fischer-Tropsch process, Fischer-Tropsch reactors, and catalysts. The fundamentals of Fischer-Tropsch modeling are also presented, with emphasis on the reactor unit. The reactors and catalysts are compared in order to choose a suitable reactor setup for the modeling work, and the effects of the operating conditions are investigated. A slurry bubble column reactor model operating with a cobalt catalyst is developed by taking into account the mass transfer of the reacting components (CO and H2) and the consumption of the reactants in the liquid phase. The effect of hydrostatic pressure and the change in the total molar flow rate in the gas phase are taken into account in the calculation of the solubilities. The hydrodynamics, reaction kinetics, and product composition are determined according to the literature. The cooling system, and hence the required heat-transfer area and number of cooling tubes, are also determined. The model is implemented in Matlab. A commercial-scale reactor setup is modeled and the behavior of the model is investigated; possible inaccuracies are evaluated and suggestions for future work are presented. The model is also integrated into the Aspen Plus process simulation software, which enables its use in more extensive Fischer-Tropsch process simulations. A commercial-scale reactor 7 m in diameter and 30 m in height was modeled. The capacity of the reactor was calculated to be about 9,800 barrels/day at a CO conversion of 75%. The behavior of the model was realistic and the results were in the right range. The largest uncertainty in the model was estimated to be caused by the determination of the kinetic rate.
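At its core, such a reactor model couples gas-phase depletion of CO along the column height to gas-liquid mass transfer and consumption on the catalyst. The sketch below shows a drastically simplified steady-state version of that coupling, with the liquid assumed to be at its local steady state and a first-order lumped rate; every number is an illustrative placeholder rather than a thesis parameter.

```python
from scipy.integrate import solve_ivp

kLa   = 0.3     # volumetric gas-liquid mass-transfer coefficient, 1/s
k_rxn = 0.08    # lumped first-order CO consumption rate, 1/s
H_cc  = 3.0     # dimensionless gas/liquid partition coefficient
u_g   = 0.2     # superficial gas velocity, m/s (held constant here)
height = 30.0   # reactor height, m

def gas_balance(z, c_g):
    """dc_g/dz for gas-phase CO, with the liquid at local steady state
    (transfer in = reaction out), so c_l = kLa/(kLa + k_rxn) * c_g/H."""
    c_l = kLa / (kLa + k_rxn) * c_g / H_cc
    transfer = kLa * (c_g / H_cc - c_l)      # mol/(m3 s) into the liquid
    return -transfer / u_g

sol = solve_ivp(gas_balance, [0.0, height], [40.0])   # inlet CO 40 mol/m3
print(f"CO conversion = {1 - sol.y[0, -1] / 40.0:.1%}")
```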
Abstract:
The chemistry of gold dissolution in alkaline cyanide solution has continually received attention, and new rate equations expressing gold leaching are still being developed. The effect of leaching parameters on gold cyanidation is studied in this work in order to optimize the leaching process. A gold leaching model, based on the well-known shrinking-core model, is presented. It is proposed that the reaction takes place at the surface of the reacting particle, which is continuously reduced as the reaction proceeds. The model parameters are estimated by comparing experimental data and simulations. The experimental data used in this work were obtained from Ling et al. (1996) and de Andrade Lima and Hodouin (2005). Two different rate equations are investigated, one of which takes the unreacted amount of gold into account. It is shown that the reaction at the surface is the rate-controlling step, since there is no internal diffusion limitation. The model considering the effect of non-reacting gold shows that the reaction orders are consistent with the experimental observations reported by Ling et al. (1996) and de Andrade Lima and Hodouin (2005). It should be noted, however, that the model obtained in this work is based on the assumptions of no side reactions, no solid-liquid mass-transfer resistance, and no temperature effects.
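Under surface-reaction control, the shrinking-core picture makes the dissolution rate proportional to the remaining particle surface, which scales as (1 - X)^(2/3) for a leached fraction X. The sketch below integrates such a rate law with empirical orders in cyanide and oxygen; the rate constant, orders, and concentrations are illustrative placeholders, not the values fitted in this work or in the cited data sets.

```python
import numpy as np
from scipy.integrate import solve_ivp

k     = 200.0    # lumped rate constant (units set by the chosen orders), 1/h basis
alpha = 0.8      # assumed reaction order in free cyanide
beta  = 0.5      # assumed reaction order in dissolved oxygen
c_cn  = 5.0e-3   # free cyanide, mol/L (held constant here)
c_o2  = 2.5e-4   # dissolved oxygen, mol/L (held constant here)

def rate(t, X):
    """dX/dt for the leached gold fraction X under surface-reaction control."""
    return k * c_cn**alpha * c_o2**beta * np.maximum(1.0 - X, 0.0) ** (2.0 / 3.0)

sol = solve_ivp(rate, [0.0, 24.0], [0.0], dense_output=True)   # 24 h leach
for t in (4, 8, 16, 24):
    print(f"t = {t:2d} h, X = {sol.sol(t)[0]:.2f}")
```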