901 results for Process Modeling, Collaboration, Distributed Modeling, Collaborative Technology
Abstract:
In work-zone configurations where lane drops are present, the merging of traffic at the taper presents an operational concern. In addition, as flow through the work zone is reduced, the relative safety of the work zone is also reduced. Improving work-zone flow through merge points depends on the behavior of individual drivers. By better understanding driver behavior, traffic control plans, work-zone policies, and countermeasures can be better targeted to reinforce desirable lane-closure merging behavior, improving both safety and work-zone capacity. The researchers collected data for two work-zone scenarios that included lane drops, one on an Interstate and the other on an urban arterial roadway. They then modeled and calibrated these scenarios in VISSIM using real-world speeds, travel times, queue lengths, and merging behaviors (the percentage of vehicles merging upstream versus near the merge point). Once the models were built and calibrated, the researchers modeled various countermeasure strategies in the two work zones. The models were then used to test and evaluate how various merging strategies affect safety and operations at the merge areas in these two work zones.
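The abstract does not name its calibration criterion; a widely used acceptance measure when calibrating microsimulation models such as VISSIM against observed flows is the GEH statistic, sketched below. The threshold of 5 mentioned in the comment is the conventional rule of thumb, not a value from this study.

```python
import math

def geh(modeled: float, observed: float) -> float:
    # GEH statistic comparing a modeled flow against an observed flow (veh/h).
    # A calibration is commonly judged acceptable when GEH < 5 for most counts.
    return math.sqrt(2.0 * (modeled - observed) ** 2 / (modeled + observed))
```

For example, a modeled flow of 1050 veh/h against an observed 1000 veh/h gives a GEH of about 1.56, comfortably within the conventional threshold.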
Abstract:
The disintegration of recovered paper is the first operation in the preparation of recycled pulp. The defibering process is known to follow first-order kinetics, from which the disintegration kinetic constant (KD) can be obtained in different ways. The constant can be derived from the Somerville index results (%Isv) or from the dissipated energy per volume unit (Ss). The %Isv is related to the quantity of non-defibered paper, as a measure of the residual non-disintegrated fiber (percentage of flakes), and is expressed as a function of disintegration time. In this work, the disintegration kinetics of recycled coated paper has been evaluated at a rotor speed of 20 rev/s and at different fiber consistencies (6, 8, 10, 12, and 14%). The experimental disintegration kinetic constant, KD, was obtained through the analysis of the Somerville index as a function of time; as consistency increased, the disintegration time was drastically reduced. The disintegration kinetic constant calculated from Rayleigh's dissipation function (modelled KD) showed a good correlation with the experimental values obtained from either the evolution of the Somerville index or the dissipated energy.
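The first-order kinetics described above can be sketched as follows. The initial flake content, the sampling times, and the through-the-origin least-squares fit are illustrative assumptions, not values or procedures from the study.

```python
import math

def somerville(t: float, s0: float, kd: float) -> float:
    # First-order defibering kinetics: residual flake content decays
    # exponentially with disintegration time t.
    return s0 * math.exp(-kd * t)

def fit_kd(times, sv, s0):
    # Linearize ln(Sv/S0) = -KD * t and take the least-squares slope
    # through the origin to recover the kinetic constant KD.
    num = sum(t * math.log(s / s0) for t, s in zip(times, sv))
    den = sum(t * t for t in times)
    return -num / den
```

Fitting synthetic data generated with a known KD recovers that constant, which is the consistency check one would apply before fitting measured Somerville-index series.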
Abstract:
Much of the analytical modeling of morphogen profiles is based on simplistic scenarios, where the source is abstracted to be point-like and fixed in time, and where only the steady-state solution of the morphogen gradient in one dimension is considered. Here we develop a general formalism that allows modeling diffusive gradient formation from an arbitrary source. This mathematical framework, based on the Green's function method, applies to various diffusion problems. In this paper, we illustrate our theory with the explicit example of the Bicoid gradient establishment in Drosophila embryos. The gradient formation arises by protein translation from an mRNA distribution followed by morphogen diffusion with linear degradation. We investigate quantitatively the influence of the spatial extension and time evolution of the source on the morphogen profile. For different biologically meaningful cases, we obtain explicit analytical expressions for both the steady-state and time-dependent 1D problems. We show that extended sources, whether of finite size or normally distributed, give rise to more realistic gradients than a single point source at the origin. Furthermore, the steady-state solutions are fully compatible with a decreasing exponential behavior of the profile. We also consider the case of a dynamic source (e.g., bicoid mRNA diffusion), for which a protein profile similar to the ones obtained from static sources can be achieved.
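A minimal sketch of the Green's-function superposition for the steady-state 1D problem with linear degradation, D c'' - k c = -s(x): the profile is the source density integrated against the Green's function, and far from an extended source it decays exponentially with length sqrt(D/k). The symbols D, k, and the source density are generic placeholders, not parameter values from the paper.

```python
import math

def green(x: float, xp: float, D: float, k: float) -> float:
    # Steady-state Green's function of D*c'' - k*c = -delta(x - x') on the
    # infinite line; the decay length is lam = sqrt(D / k).
    lam = math.sqrt(D / k)
    return math.exp(-abs(x - xp) / lam) / (2.0 * math.sqrt(D * k))

def profile(x, source, xs, D, k):
    # Superpose the source density over the Green's function
    # (simple Riemann sum over the grid xs).
    dx = xs[1] - xs[0]
    return sum(green(x, xp, D, k) * source(xp) for xp in xs) * dx
```

Evaluating `profile` with a Gaussian source reproduces the point made in the abstract: outside the source's support the profile falls off as a pure decreasing exponential, just as for a point source, while the shape near the origin is smoothed by the source extension.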
Abstract:
In this thesis, I develop analytical models to price the value of supply chain investments under demand uncertainty. This thesis includes three self-contained papers. In the first paper, we investigate the value of lead-time reduction under the risk of sudden and abnormal changes in demand forecasts. We first consider the risk of a complete and permanent loss of demand. We then provide a more general jump-diffusion model, where we add a compound Poisson process to a constant-volatility demand process to explore the impact of sudden changes in demand forecasts on the value of lead-time reduction. We use an Edgeworth series expansion to divide the lead-time cost into that arising from constant instantaneous volatility and that arising from the risk of jumps. We show that the value of lead-time reduction increases substantially in the intensity and/or the magnitude of jumps. In the second paper, we analyze the value of quantity flexibility in the presence of supply-chain disintermediation problems. We use the multiplicative martingale model and the "contracts as reference points" theory to capture both positive and negative effects of quantity flexibility for the downstream level in a supply chain. We show that lead-time reduction reduces both supply-chain disintermediation problems and supply-demand mismatches. We furthermore analyze the impact of the supplier's cost structure on the profitability of quantity-flexibility contracts. When the supplier's initial investment cost is relatively low, supply-chain disintermediation risk becomes less important, and hence the contract becomes more profitable for the retailer. We also find that supply-chain efficiency increases substantially with the supplier's ability to disintermediate the chain when the initial investment cost is relatively high. In the third paper, we investigate the value of dual sourcing for products with heavy-tailed demand distributions.
We apply extreme-value theory and analyze the effects of tail heaviness of demand distribution on the optimal dual-sourcing strategy. We find that the effects of tail heaviness depend on the characteristics of demand and profit parameters. When both the profit margin of the product and the cost differential between the suppliers are relatively high, it is optimal to buffer the mismatch risk by increasing both the inventory level and the responsive capacity as demand uncertainty increases. In that case, however, both the optimal inventory level and the optimal responsive capacity decrease as the tail of demand becomes heavier. When the profit margin of the product is relatively high, and the cost differential between the suppliers is relatively low, it is optimal to buffer the mismatch risk by increasing the responsive capacity and reducing the inventory level as the demand uncertainty increases. In that case, however, it is optimal to buffer with more inventory and less capacity as the tail of demand becomes heavier. We also show that the optimal responsive capacity is higher for the products with heavier tails when the fill rate is extremely high.
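The jump-diffusion demand model of the first paper can be illustrated with a minimal Euler simulation: a geometric diffusion with constant volatility to which compound Poisson jumps (sudden forecast revisions) are added. All parameter names and the lognormal jump-size choice are assumptions made purely for illustration, not the thesis's specification.

```python
import math
import random

def simulate_forecast(d0, mu, sigma, lam, jump_mean, T, n, rng):
    # Euler scheme for a demand-forecast path: geometric diffusion with
    # drift mu and volatility sigma, plus compound Poisson jumps with
    # intensity lam; jump sizes here are lognormal for illustration only.
    dt = T / n
    d = d0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        d *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
        if rng.random() < lam * dt:  # a jump arrives in this step
            d *= math.exp(rng.gauss(jump_mean, 0.1))
    return d
```

Setting the volatility and jump intensity to zero recovers the deterministic exponential trend, a quick sanity check on the discretization.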
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, reducing cost. Firstly, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Secondly, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship is found between the measured image quality at a given stage of the coding process and a quantization matrix. Thus, the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The estimation of PSNR usually has an error smaller than 1 dB, and this figure decreases for high PSNR values. Definition scripts may be generated while avoiding an excessive number of stages and removing small stages that do not contribute a noticeable image-quality improvement during the decoding process.
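The PSNR/MSE bookkeeping behind such a method can be sketched as follows. The q²/12 rule in the last function is the classical high-rate approximation for uniform quantization error, used here as a stand-in for the paper's exact Laplacian-source model.

```python
import math

def psnr_from_mse(mse: float, peak: float = 255.0) -> float:
    # PSNR in dB for 8-bit images (peak value 255).
    return 10.0 * math.log10(peak ** 2 / mse)

def mse_from_psnr(psnr: float, peak: float = 255.0) -> float:
    # Invert the PSNR definition to obtain the target MSE budget.
    return peak ** 2 / 10.0 ** (psnr / 10.0)

def step_for_mse(target_mse: float) -> float:
    # High-rate approximation: uniform quantization error variance ~ q^2/12,
    # so a per-coefficient MSE target suggests a step q = sqrt(12 * MSE).
    return math.sqrt(12.0 * target_mse)
```

For instance, a 40 dB PSNR target corresponds to an MSE budget of 255²/10⁴ = 6.5025, which can then be apportioned among DCT coefficients to derive quantization steps.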
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on the decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms, used for modeling long-range spatial trends, combined with sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process. (C) 2003 Elsevier Ltd. All rights reserved.
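A toy sketch of the trend-plus-residual-simulation idea: fit a trend model, detrend, then generate equiprobable realizations of the residuals. The MLP/SVR trend and geostatistical sequential simulation of the paper are replaced here by ordinary least squares on a 1-D transect and a residual bootstrap, purely to make the decomposition concrete.

```python
import random

def fit_trend(x, z):
    # Stand-in for the ML trend model (the paper uses MLP / SVR):
    # ordinary least squares on a 1-D transect.
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    a = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z)) \
        / sum((xi - mx) ** 2 for xi in x)
    b = mz - a * mx
    return lambda xi: a * xi + b

def residual_simulations(x, z, trend, n_sims, rng):
    # Detrend, then bootstrap the residuals to produce equiprobable
    # realizations -- a crude stand-in for sequential simulation, which
    # would additionally honor the residuals' spatial correlation.
    res = [zi - trend(xi) for xi, zi in zip(x, z)]
    return [[trend(xi) + rng.choice(res) for xi in x] for _ in range(n_sims)]
```

The spread across realizations at each location is what the paper's probability and risk maps summarize.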
Abstract:
Web services form a central part of the Semantic Web. They provide modern and efficient machinery for distributed computing and lay the foundation for service-oriented architectures. Networked, automated business requires continuous activity from all parties. In addition, the supporting system must be flexible and support versatile functionality. These goals can be achieved by composing web services. The composition process consists of a set of tasks such as service modeling, service composition, service execution, and verification. In this work, a simple business process was implemented. For the implementation, alternative standards and implementation techniques were examined. Aspects related to the optimization of execution were also taken into account.
Abstract:
The goal of this Master's thesis was to study Perlos's technology competences. In the future, Perlos aims to combine and apply new technologies and smart materials to plastics mechanics. The idea was to model Perlos's competences and competence gaps, taking its future vision into account. The project product used in the competence modeling was a Perlos Healthcare measurement device that analyzes the customer. The value of the study is considerable, since by identifying its competences and capabilities a company can create a better offering when responding to ever-growing customer demands. The study is part of the LIIMA project funded by TEKES. The first part of the work presents theories related to competences and partnering. The competence modeling was done with an Excel-based tool. For the project product, it includes the modeling of competence dependencies and a gap analysis. Interviews were used as one of the research methods. The work and its results provide operational benefit in the field between technologies and markets.
Abstract:
The use of belts for high-precision applications has become appropriate because of the rapid development in motor and drive technology as well as the implementation of timing belts in servo systems. Belt drive systems provide high speed and acceleration, accurate and repeatable motion with high efficiency, long stroke lengths, and low cost. The modeling of a linear belt-drive system and the design of its position control are examined in this work. Friction phenomena and the position-dependent elasticity of the belt are analyzed. Computer-simulated results show that the developed model is adequate. A PID controller for accurate tracking and position control is designed and applied to the real test setup. Both the simulation and the experimental results demonstrate that the designed controller meets the specified performance requirements.
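A minimal sketch of a discrete PID loop on a simplified plant. The gains and the frictionless double-integrator plant below are illustrative assumptions; the thesis's belt-drive model additionally includes friction and position-dependent belt elasticity.

```python
class PID:
    # Textbook discrete PID controller; the gains used below are
    # illustrative only, not the thesis's tuned values.
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate(pid, target, steps, dt, mass=1.0):
    # Double-integrator stand-in for the belt-drive plant: the control
    # output is a force, integrated twice to obtain position.
    x = v = 0.0
    for _ in range(steps):
        u = pid.update(target, x)
        v += (u / mass) * dt
        x += v * dt
    return x
```

With moderate gains the loop settles on the setpoint, which is the behavior the calibrated belt-drive model is then used to verify against the real test setup.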
Abstract:
The goal of this Master's thesis is to clarify and align innovation processes for radical technology innovations. A prerequisite for aligning the processes is understanding the radicalness of the technology innovation. The focus is on the early-stage innovation process, during which official cooperation is not yet possible. With the developed model, the radicalness experienced by a company in the early stage of the innovation process can be assessed. It is important to understand how the interacting companies experience the radicalness of the innovation in the early stage of the process. The differences between the processes can be said to depend on the companies' perspectives, because their actions are determined by the experienced radicalness. Radicalness experienced as equal creates a basis for open interaction and gives indications of deeper cooperation in the future. The proposed alignment of the processes of vertically positioned companies is based on mutual radicalness and corresponding internal approval. In that case, the parties participate with the same commitment while the continuation of the process remains uncertain.
Abstract:
Software development tools use information produced from the developer's source code. This information is exploited in different phases of a software project and for different purposes. In modern software projects, the amount of information used can grow very large. Software tools have their own information models and access mechanisms. The amount of information, together with the separate tool-specific information models, makes it very difficult to build a flexible tool environment, especially for a domain-specific software development process. In this work, basic information metamodels of the Unified Modeling Language, the Python programming language, and the C++ programming language have been analyzed. The level of meta-information is restricted to the structural level; executable structures are left out. The ModelBase metamodel has been composed from the analyzed existing metamodels. This metamodel can be used in the future for the development of software tools.
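A hypothetical fragment of what such a combined structural metamodel might look like: a small language-neutral core onto which UML, Python, and C++ elements could be mapped. The class names here are invented for illustration and are not the actual ModelBase schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Attribute:
    # A structural attribute of a classifier (no behavior, per the
    # structural-level restriction described in the abstract).
    name: str
    type_name: str

@dataclass
class Classifier:
    # Language-neutral classifier: maps onto a UML class, a Python class,
    # or a C++ class/struct.
    name: str
    attributes: List[Attribute] = field(default_factory=list)
    supertypes: List[str] = field(default_factory=list)

@dataclass
class Model:
    # The model holds all classifiers, keyed by name, so different tools
    # can query one shared information model.
    classifiers: Dict[str, Classifier] = field(default_factory=dict)

    def add(self, c: Classifier) -> Classifier:
        self.classifiers[c.name] = c
        return c
```

A tool importing a C++ header and a tool importing a Python module would both populate the same `Model`, which is the interoperability point the abstract argues for.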
Abstract:
A rigorous unit-operation model is developed for vapor membrane separation. The new model is able to describe temperature-, pressure-, and concentration-dependent permeation as well as real-fluid effects in vapor and gas separation with hydrocarbon-selective rubbery polymeric membranes. Permeation through the membrane is described by a separate treatment of sorption and diffusion within the membrane. Chemical engineering thermodynamics is used to describe the equilibrium sorption of vapors and gases in rubbery membranes with equation-of-state models for polymeric systems. A new modification of the UNIFAC model is also proposed for this purpose. Various thermodynamic models are extensively compared in order to verify the models' ability to predict and correlate experimental vapor-liquid equilibrium data. Penetrant transport through the selective layer of the membrane is described with the generalized Maxwell-Stefan equations, which are able to account for the bulk-flux contribution as well as the diffusive coupling effect. A method is described to compute and correlate binary penetrant-membrane diffusion coefficients from the experimental permeability coefficients at different temperatures and pressures. A fluid-flow model for spiral-wound modules is derived from the conservation equations of mass, momentum, and energy. The conservation equations are presented in a discretized form using the control-volume approach. A combination of the permeation model and the fluid-flow model yields the desired rigorous model for vapor membrane separation. The model is implemented in an in-house process simulator, so vapor membrane separation may be evaluated as an integral part of a process flowsheet.
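The sorption-diffusion split described above can be illustrated in its simplest uncoupled limit: permeability as the product of an equilibrium sorption term and a diffusion term, with flux driven by the partial-pressure difference. This deliberately ignores the Maxwell-Stefan coupling, real-fluid effects, and pressure/concentration dependence the full model includes; units are arbitrary.

```python
def permeate_flux(solubility, diffusivity, p_feed, p_perm, thickness):
    # Uncoupled solution-diffusion limit: permeability = solubility *
    # diffusivity, and the flux is proportional to the partial-pressure
    # difference across a membrane of the given thickness.
    permeability = solubility * diffusivity
    return permeability * (p_feed - p_perm) / thickness
```

In the full model, the solubility term would come from the equation-of-state sorption model and the diffusivity from the correlated binary penetrant-membrane coefficients, both varying with temperature, pressure, and composition.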