958 results for Process Modeling
Abstract:
The Feller process is a one-dimensional diffusion process with linear drift and a state-dependent diffusion coefficient that vanishes at the origin. The process is positive definite, and it is this property, along with its linear character, that has made the Feller process a convenient candidate for modeling a number of phenomena ranging from single-neuron firing to the volatility of financial assets. While the general properties of the process have long been well known, less known are properties related to level crossing, such as the first-passage and escape problems. In this work we thoroughly address these questions.
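The linear-drift, square-root-diffusion dynamics described above can be sketched with a simple Euler-Maruyama simulation. This is a minimal illustration, not the authors' method: the parameter names (kappa, theta, sigma) and the full-truncation scheme are assumptions made for the sketch.

```python
import numpy as np

def simulate_feller(x0=1.0, kappa=2.0, theta=1.0, sigma=0.5,
                    dt=1e-3, n_steps=10_000, rng=None):
    """Euler-Maruyama for dX = kappa*(theta - X) dt + sigma*sqrt(X) dW.

    Uses full truncation (sqrt of max(X, 0)) so the square root is
    always defined even if the discretized path dips slightly below 0.
    """
    rng = rng or np.random.default_rng(0)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        xp = max(x[i], 0.0)  # full truncation
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = x[i] + kappa * (theta - xp) * dt + sigma * np.sqrt(xp) * dw
    return x

path = simulate_feller()
```

With these illustrative parameters the Feller condition 2*kappa*theta >= sigma**2 holds, so the exact process never reaches the origin and the simulated path should hover around theta.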
Abstract:
Many European states apply score systems to evaluate the disability severity of non-fatal victims of motor accidents under third-party liability law. The score is a non-negative integer with an upper bound of 100 that increases with severity. It may be automatically converted into financial terms and thus also reflects the compensation cost for disability. In this paper, discrete regression models are applied to analyze the factors that influence the disability severity score of victims. Standard and zero-altered regression models are compared from two perspectives: the interpretation of the data generating process and the level of statistical fit. The results have implications for traffic safety policy decisions aimed at reducing accident severity. An application using data from Spain is provided.
Abstract:
The literature part of the work reviews the overall Fischer-Tropsch process, Fischer-Tropsch reactors and catalysts. Fundamentals of Fischer-Tropsch modeling are also presented, with the emphasis on the reactor unit. A comparison of the reactors and catalysts is carried out to choose a suitable reactor setup for the modeling work. The effects of the operating conditions are also investigated. A slurry bubble column reactor model operating with a cobalt catalyst is developed by taking into account the mass transfer of the reacting components (CO and H2) and the consumption of the reactants in the liquid phase. The effect of hydrostatic pressure and the change in total mole flow rate in the gas phase are taken into account in the calculation of the solubilities. The hydrodynamics, reaction kinetics and product composition are determined according to the literature. The cooling system, and consequently the required heat transfer area and number of cooling tubes, are also determined. The model is implemented in Matlab. A commercial-scale reactor setup is modeled and the behavior of the model is investigated. Possible inaccuracies are evaluated and suggestions for future work are presented. The model is also integrated into the Aspen Plus process simulation software, which enables its use in more extensive Fischer-Tropsch process simulations. A commercial-scale reactor 7 m in diameter and 30 m in height was modeled. The capacity of the reactor was calculated to be about 9 800 barrels/day at a CO conversion of 75%. The behavior of the model was realistic and the results were in the right range. The largest model uncertainty was estimated to come from the determination of the kinetic rate.
Abstract:
The chemistry of gold dissolution in alkaline cyanide solution has continually received attention, and new rate equations expressing gold leaching are still being developed. The effect of leaching parameters on gold cyanidation is studied in this work in order to optimize the leaching process. A gold leaching model, based on the well-known shrinking-core model, is presented. It is proposed that the reaction takes place at the surface of the reacting particle, which is continuously reduced as the reaction proceeds. The model parameters are estimated by comparing experimental data and simulations. The experimental data used in this work were obtained from Ling et al. (1996) and de Andrade Lima and Hodouin (2005). Two different rate equations, one of which accounts for the unreacted amount of gold, are investigated. It is shown that the reaction at the surface is the rate-controlling step, since there is no internal diffusion limitation. The model considering the effect of non-reacting gold shows that the reaction orders are consistent with the experimental observations reported by Ling et al. (1996) and de Andrade Lima and Hodouin (2005). However, it should be noted that the model obtained in this work assumes no side reactions, no solid-liquid mass transfer resistances and no temperature effects.
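For a surface-reaction-controlled shrinking core with no internal diffusion limitation, as assumed above, the core radius shrinks at a constant rate and conversion follows 1 - (1 - X)^(1/3) = t/tau. A minimal sketch of that relation; tau and the time grid are illustrative values, not fitted to the cited data:

```python
import numpy as np

def conversion(t, tau):
    """Conversion X(t) for a surface-reaction-controlled shrinking core.

    From 1 - (1 - X)**(1/3) = t/tau, where tau is the time needed for
    complete conversion of the particle.
    """
    t = np.minimum(np.asarray(t, dtype=float), tau)  # X = 1 for t >= tau
    return 1.0 - (1.0 - t / tau) ** 3

tau = 8.0                          # hours to full dissolution (illustrative)
times = np.linspace(0.0, 10.0, 6)  # 0, 2, ..., 10 h
X = conversion(times, tau)         # monotone, reaching 1 at t = tau
```

The rate-equation variant with non-reacting gold would cap X below 1; here only the ideal surface-controlled case is shown.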
Abstract:
The disintegration of recovered paper is the first operation in the preparation of recycled pulp. It is known that the defibering process follows first-order kinetics, from which the disintegration kinetic constant (KD) can be obtained in different ways. The disintegration constant can be obtained from the Somerville index results (%Isv) and from the dissipated energy per volume unit (Ss). The %Isv is related to the quantity of non-defibered paper, as a measure of the non-disintegrated fiber residual (percentage of flakes), and is expressed in disintegration time units. In this work, the disintegration kinetics of recycled coated paper was evaluated, working at a rotor speed of 20 rev/s and at different fiber consistencies (6, 8, 10, 12 and 14%). The experimental disintegration kinetic constant, KD, was obtained through the analysis of the Somerville index as a function of time. As consistency increased, the disintegration time was drastically reduced. The disintegration kinetic constant calculated (modelled KD) from Rayleigh's dissipation function showed a good correlation with the experimental values, whether obtained from the evolution of the Somerville index or from the dissipated energy.
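First-order disintegration kinetics imply that the non-disintegrated fraction decays exponentially, S(t) = S0 * exp(-KD * t), so KD can be estimated from a log-linear fit of Somerville index measurements over time. A sketch with synthetic data; the numbers are illustrative, not the paper's measurements:

```python
import numpy as np

def fit_kd(t, somerville):
    """Estimate the disintegration constant KD from first-order decay.

    Fits ln(S) = ln(S0) - KD * t by least squares and returns KD.
    """
    slope, _ = np.polyfit(t, np.log(somerville), 1)
    return -slope

# Synthetic Somerville-index data following S = 40 * exp(-0.12 * t)
t = np.arange(0.0, 30.0, 5.0)   # disintegration time (illustrative units)
S = 40.0 * np.exp(-0.12 * t)
kd = fit_kd(t, S)               # recovers the decay constant 0.12
```

On real data the log-linear fit would be applied to the measured flake percentages at each consistency level.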
Abstract:
This work presents the use of potentiometric measurements for kinetic studies of biosorption of Cd2+ ions from aqueous solutions on Eichhornia crassipes roots. The open circuit potential of the Cd/Cd2+ electrode of the first kind was measured during the bioadsorption process. The amount of Cd2+ ions accumulated was determined in real time. The data were fit to different models, with the pseudo-second-order model proving to be the best in describing the data. The advantages and limitations of the methodology proposed relative to the traditional method are discussed.
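The pseudo-second-order model found above to describe the uptake data best has the integrated form q(t) = k*qe^2*t / (1 + k*qe*t), which linearizes as t/q = 1/(k*qe^2) + t/qe. A sketch of fitting it; the uptake values are synthetic, not the measured Cd2+ data:

```python
import numpy as np

def pso_q(t, qe, k):
    """Pseudo-second-order uptake: q(t) = k*qe^2*t / (1 + k*qe*t)."""
    return k * qe**2 * t / (1.0 + k * qe * t)

def fit_pso(t, q):
    """Linearized fit: t/q = 1/(k*qe^2) + t/qe is a line in t."""
    slope, intercept = np.polyfit(t, t / q, 1)
    qe = 1.0 / slope
    k = 1.0 / (intercept * qe**2)
    return qe, k

# Synthetic uptake curve with equilibrium capacity qe = 2.5, rate k = 0.8
t = np.linspace(0.5, 20.0, 10)
q = pso_q(t, 2.5, 0.8)
qe_fit, k_fit = fit_pso(t, q)   # recovers qe and k from the data
```

With potentiometric data the same fit would be applied to the real-time accumulation curve derived from the open circuit potential.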
Abstract:
Agile software development has grown in popularity since the agile manifesto was declared in 2001. However, there is a strong belief that agile methods are not suitable for embedded, critical or real-time software development, even though multiple studies and cases show otherwise. This thesis presents a custom agile process that can be used in embedded software development. The presumed unfitness of agile methods for embedded software development has mainly been based on the perception that these methods provide no real control, no strict discipline and less rigorous engineering practices. One starting point is therefore to provide a light process with a disciplined approach to embedded software development. Agile software development has gained popularity because there are still big issues in software development as a whole: projects fail due to schedule slips, budget overruns or failure to meet business needs. This does not change when talking about embedded software development. These issues remain valid, with multiple new ones arising from the complex and hard domain in which embedded software developers work. These issues are another starting point for this thesis. The thesis is based heavily on Feature Driven Development (FDD), a software development methodology that can be seen as a runner-up to the most popular agile methodologies. FDD as such is quite process oriented and lacks a few practices commonly considered extremely important in agile development methodologies. For FDD to gain acceptance in the software development community, it needs to be modified and enhanced. This thesis presents an improved custom agile process that can be used in embedded software development projects varying in size from 10 to 500 persons. This process is based on Feature Driven Development, complemented where suitable by Extreme Programming, Scrum and Agile Modeling.
Finally, the thesis presents how the new process responds to the common issues in embedded software development. The work of creating the new process is evaluated in a retrospective, and guidelines for such process creation are introduced. These emphasize agility also in process development, through early and frequent deliveries and the teamwork needed to create a suitable process.
Abstract:
My thesis deals with how disordered materials conduct electric current. The materials studied include conducting polymers, i.e., plastics that conduct current, and, more generally, organic semiconductors. Electronic components have been built from these materials, and there is hope of printing entire circuits from organic materials. For these applications it is important to understand how the materials themselves conduct electric current. The term disordered materials refers to materials that lack crystal structure. The disorder causes the electron states to become localized in space, so that an electron in a given state is confined, e.g., to a molecule or a segment of a polymer. This can be contrasted with crystalline materials, where an electron state is spread out over the entire crystal (but instead has a well-defined momentum). The electrons (or holes) in the disordered material can move by tunneling between the localized states. Starting from the properties of this tunneling process, the transport properties of the whole material can be determined. This is the starting point for the so-called hopping transport model, which I have used. The hopping transport model contains several drastic simplifications. For example, the electron states are treated as point-like, so that the tunneling probability between two states depends only on the distance between them, and not on their relative orientation. Another simplification is to treat the quantum-mechanical tunneling problem as a classical process, a random walk. Despite these crude approximations, the hopping transport model still exhibits many of the phenomena observed in the real materials one wants to model. One might say that the hopping transport model is the simplest model of disordered materials that is still interesting to study.
Exact analytical solutions of the hopping transport model have not been found, so approximations and numerical methods, often in the form of computer calculations, are used instead. We have used both analytical methods and numerical calculations to study different aspects of the hopping transport model. An important part of the articles on which my thesis is based is the comparison of analytical and numerical results. My share of the work has mainly been to develop the numerical methods and apply them to the hopping transport model, so I focus on this part of the work in the introductory part of the thesis. One way to study the hopping transport model numerically is to directly carry out a random-walk process with a computer program. By collecting statistics on the random walk, various transport properties of the model can be computed. This is a so-called Monte Carlo method, since the calculation itself is a random process. Instead of following the trajectories of individual electrons, one can compute the equilibrium probability of finding an electron in the various states. A system of equations is set up that relates the probabilities of finding the electron in the different states of the system to the flow, the current, between the states. Solving this system of equations yields the probability distribution of the electrons, from which the current and the transport properties of the material can then be computed. One aspect of the hopping transport model that we have studied is the diffusion of the electrons, i.e., their random motion. A collection of electrons spreads out over a larger region with time. It is known that the diffusion rate depends on the electric field, so that the electrons spread faster when subjected to an electric field. We have investigated this process and shown that the behavior is very different in one-dimensional systems compared to two- and three-dimensional ones.
In two and three dimensions the diffusion coefficient depends quadratically on the electric field, while in one dimension the dependence is linear. Another aspect we have studied is negative differential conductivity, i.e., the current through a material decreasing as the voltage across it is increased. Since this phenomenon has been measured in organic memory cells, we wanted to investigate whether it can also arise in the hopping transport model. It turned out that the model contains two different mechanisms that can give rise to negative differential conductivity. First, the electrons can get stuck in traps, dead ends in the system, that are harder to escape when the electric field is large. The mean velocity of the electrons, and thus the current in the material, can then decrease with increasing electric field. Electrical interaction between the electrons can also lead to the same behavior, through a so-called Coulomb blockade. A Coulomb blockade can arise if the number of conduction electrons in the material increases with increasing voltage. The electrons repel each other, and a larger number of electrons can make the transport slower, i.e., decrease the current.
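The random-walk Monte Carlo approach described above can be sketched for a one-dimensional chain of localized states with Miller-Abrahams-type hopping rates. This is a minimal illustration, not the thesis code: the Gaussian site energies, the unit site spacing, and the field value are assumptions made for the sketch.

```python
import numpy as np

def drift_velocity(n_sites=200, sigma=1.0, field=0.5, n_hops=20_000, seed=0):
    """Kinetic Monte Carlo for a single carrier hopping on a 1D chain.

    Miller-Abrahams rates between equally spaced nearest-neighbor sites
    with Gaussian random energies (units: kT = 1, site spacing = 1).
    Returns the mean drift velocity along the field.
    """
    rng = np.random.default_rng(seed)
    energies = rng.normal(0.0, sigma, n_sites)
    pos, time = 0, 0.0
    site = 0
    for _ in range(n_hops):
        right = (site + 1) % n_sites        # periodic boundary
        left = (site - 1) % n_sites
        # Energy change of a hop includes the field term -field * dx
        dE_r = energies[right] - energies[site] - field
        dE_l = energies[left] - energies[site] + field
        w_r = np.exp(-max(dE_r, 0.0))       # Miller-Abrahams: only uphill
        w_l = np.exp(-max(dE_l, 0.0))       # hops are thermally activated
        total = w_r + w_l
        time += rng.exponential(1.0 / total)  # KMC waiting time
        if rng.random() < w_r / total:
            site, pos = right, pos + 1
        else:
            site, pos = left, pos - 1
    return pos / time

v = drift_velocity()   # positive: the carrier drifts along the field
```

Collecting such statistics over many carriers and field values is what lets the Monte Carlo approach map out mobility and field-dependent diffusion; the master-equation alternative instead solves the linear system for the steady-state occupation probabilities.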
Abstract:
This thesis considers the modeling and analysis of noise and interconnects in on-chip communication. Besides transistor count and speed, the capabilities of a modern design are often limited by on-chip communication links. These links typically consist of multiple interconnects that run parallel to each other for long distances between functional or memory blocks. Due to technology scaling, the interconnects have considerable electrical parasitics that affect their performance, power dissipation and signal integrity. Furthermore, because of electromagnetic coupling, the interconnects in a link need to be considered as an interacting group rather than as isolated signal paths. Accurate and computationally efficient models are needed in the early stages of the chip design process to assess and optimize issues affecting these interconnects. For this purpose, a set of analytical models is developed for on-chip data links in this thesis. First, a model is proposed for crosstalk and intersymbol interference. The model takes into account the effects of inductance, initial states and bit sequences. Intersymbol interference is shown to affect crosstalk voltage and propagation delay depending on bus throughput and the amount of inductance. Next, a model is proposed for the switching current of a coupled bus. The model is combined with an existing model to evaluate power supply noise, and is then applied to reduce both functional crosstalk and power supply noise caused by a bus as a trade-off with time. The proposed reduction method is shown to be effective in reducing long-range crosstalk noise. The effects of process variation on encoded signaling are then modeled. In encoded signaling, the input signals to a bus are encoded using additional signaling circuitry. The proposed model includes variation in both the signaling circuitry and the wires to calculate the total delay variation of a bus.
The model is applied to study level-encoded dual-rail and 1-of-4 signaling. In addition to regular voltage-mode and encoded voltage-mode signaling, current-mode signaling is a promising technique for global communication. A model for energy dissipation in RLC current-mode signaling is proposed in the thesis. The energy is derived separately for the driver, wire and receiver termination.
Abstract:
This study aimed to verify the differences in radiation intensity as a function of distinct relief exposure surfaces and to quantify their effects on the leaf area index (LAI) and other variables expressing eucalyptus forest productivity, for simulations in a process-based growth model. The study was carried out at two contrasting edaphoclimatic locations in the Rio Doce basin in Minas Gerais, Brazil. Two stands with 32-year-old plantations were used, with fixed plots allocated in locations with northern and southern exposure surfaces. The meteorological data were obtained from two automated weather stations located near the study sites. Solar radiation was corrected for terrain inclination and exposure surface, since it is measured on a horizontal plane, perpendicular to the local vertical. The LAI values collected in the field were used. For the comparative simulations of productivity variation, the mechanistic 3PG model was used, considering the relief exposure surfaces. During most of the year, the southern surfaces showed lower availability of incident solar radiation, with losses of up to 66% compared to the same surface considered flat, probably related to their geographical position and higher declivity. Higher values of LAI, volume and mean annual wood increment were obtained for the plantings located on the northern surface, and this tendency was repeated in the 3PG model simulations.
Abstract:
Traditionally, limestone has been used for flue gas desulfurization in fluidized bed combustion. Recently, several studies have examined the use of limestone in applications that enable the removal of carbon dioxide from the combustion gases, such as calcium looping technology and oxy-fuel combustion. In these processes interlinked limestone reactions occur, but the reaction mechanisms and kinetics are not yet fully understood. To examine these phenomena, analytical and numerical models have been created. In this work, the limestone reactions were studied with the aid of a one-dimensional numerical particle model. The model describes a single limestone particle in the process as a function of time, the progress of the reactions, and the mass and energy transfer in the particle. The model-based results were compared with experimental laboratory-scale BFB results. It was observed that increasing the temperature from 850 °C to 950 °C enhanced the calcination but no longer improved the sulfate conversion. A higher sulfur dioxide concentration accelerated the sulfation reaction and, based on the modeling, the sulfation is first order with respect to SO2. The reaction order of O2 appears to approach zero at high oxygen concentrations.
Application of simulated annealing in simulation and optimization of drying process of Zea mays malt
Abstract:
Kinetic simulation and drying-process optimization of corn malt by Simulated Annealing (SA), used to estimate the temperature and time parameters that preserve maximum amylase activity in the obtained product, are presented here. Germinated corn seeds were dried at 54-76 °C in a convective dryer, with occasional measurement of moisture content and enzymatic activity. The experimental data obtained were submitted to modeling. Simulation and optimization of the drying process were carried out using the SA method, a randomized improvement algorithm analogous to the physical annealing process. Results showed that the seeds were best dried between 3 h and 5 h. Among the models used in this work, the kinetic model of water diffusion into corn seeds showed the best fit. Drying temperature and time showed a quadratic influence on the enzymatic activity. Optimization through SA indicated the best condition at 54 °C and between 5.6 h and 6.4 h of drying. Specific activity values of 5.26±0.06 SKB/mg were found in the corn malt, with 15.69±0.10% remaining moisture.
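The SA search for the best temperature-time condition can be sketched as follows. The objective function standing in for enzymatic-activity loss is purely hypothetical (a quadratic with its optimum placed near 54 °C and 6 h to mirror the reported result); only the SA mechanics are the point of the sketch.

```python
import math
import random

def simulated_annealing(objective, x0, bounds, n_iter=5000,
                        t0=1.0, cooling=0.999, seed=1):
    """Minimize `objective` over box `bounds` by simulated annealing.

    A candidate move is accepted if it improves the objective, or with
    probability exp(-delta / T) otherwise; T decays geometrically.
    """
    rng = random.Random(seed)
    x = list(x0)
    best, best_f = list(x), objective(x)
    f, T = best_f, t0
    for _ in range(n_iter):
        # Gaussian move, clamped to the box
        cand = [min(max(xi + rng.gauss(0.0, 0.1 * (hi - lo)), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]
        fc = objective(cand)
        if fc < f or rng.random() < math.exp(-(fc - f) / T):
            x, f = cand, fc
            if f < best_f:
                best, best_f = list(x), f
        T *= cooling
    return best, best_f

# Hypothetical activity-loss surface with its optimum near 54 degC, 6 h
loss = lambda p: (p[0] - 54.0) ** 2 / 100.0 + (p[1] - 6.0) ** 2
best, best_loss = simulated_annealing(loss, [65.0, 4.0],
                                      [(54.0, 76.0), (3.0, 8.0)])
```

In the actual study the objective would be the fitted activity model over the measured temperature and time ranges rather than this stand-in surface.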
Abstract:
In the forced-air cooling of fruits, mass transfer by evaporation occurs in addition to convective heat transfer. The energy required for evaporation is taken from the fruit, which lowers its temperature. This study proposes the use of empirical correlations for calculating the convective heat transfer coefficient as a function of the surface temperature of the strawberry during the cooling process. The aim of this variation of the convective coefficient is to compensate for the effect of evaporation in the heat transfer process. Linear and exponential correlations are tested, both with two adjustable parameters. The simulations are performed using experimental conditions reported in the literature for the cooling of strawberries. The results confirm the suitability of the proposed methodology.
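The idea of letting the convective coefficient vary with surface temperature can be sketched with a lumped-capacitance cooling step. The exponential correlation form and all parameter values below are illustrative assumptions, not the fitted values from the study:

```python
import math

def cool(T0, T_air, a, b, m_cp_per_area, dt, n_steps):
    """Lumped cooling with h depending exponentially on surface temp.

    Explicit Euler on dT/dt = -h(T) * (T - T_air) / (m*cp/A),
    with h(T) = a * exp(b * T). The enlarged effective h stands in
    for the extra evaporative heat loss.
    """
    T = T0
    history = [T]
    for _ in range(n_steps):
        h = a * math.exp(b * T)            # W/(m^2 K), illustrative
        T += -h * (T - T_air) / m_cp_per_area * dt
        history.append(T)
    return history

# Strawberry cooled from 20 degC in 1 degC air (illustrative values)
temps = cool(T0=20.0, T_air=1.0, a=15.0, b=0.01,
             m_cp_per_area=4000.0, dt=1.0, n_steps=3600)
```

Because h shrinks as the surface cools, the effective cooling rate decreases during the process, which is the behavior the temperature-dependent correlation is meant to capture.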
Abstract:
This thesis discusses the adoption of a new project management tool at the ABB Oy Motors and Generators business unit, Synchronous Machines profit centre. The thesis studies project modeling in general and delves into the Gate Model used at ABB Synchronous Machines. Understanding the Gate Model is essential because the new project management tool, called the Project Master Document, is built on the existing project model. The thesis also analyzes the goals and structure of the Project Master Document in order to ease the implementation of this new tool. The Project Master Document aims to improve customer order fulfillment by clarifying the order handover interface. The office process, especially responsibilities and target dates, also becomes clearer after Master Document implementation. The document is built to be a frame for the whole order fulfillment process, including checkpoints for each gate of the project model and updated memos from all project meetings. Furthermore, project progress is clearly stated by status markings and visualized with colors.
Abstract:
To obtain the desired accuracy of a robot, two techniques are available. The first option would be to make the robot match the nominal mathematical model. In other words, the manufacturing and assembly tolerances of every part would be extremely tight so that all of the various parameters would match the "design" or "nominal" values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances. By modifying the mathematical model in the controller, the actual errors of the robot can be compensated. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator is used to provide high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, i.e. the differential-evolution (DE) algorithm and the Markov Chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model.
A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a SolidWorks environment to simulate the real experimental validation. Numerical simulations and SolidWorks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
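The differential-evolution identification step can be sketched with a hand-rolled DE minimizing the residual between "measured" and model-predicted poses. The two-parameter error model (a link-length error and a joint-offset error on a single rotary link) and the synthetic measurements are illustrative stand-ins, not the robot's DH or POE model:

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=20, F=0.7, CR=0.9,
                           n_gen=200, seed=3):
    """Minimal DE/rand/1/bin minimizer over box constraints."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, (pop_size, len(bounds)))
    costs = np.array([cost(p) for p in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(len(bounds)) < CR        # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            tc = cost(trial)
            if tc <= costs[i]:                          # greedy selection
                pop[i], costs[i] = trial, tc
    best = np.argmin(costs)
    return pop[best], costs[best]

# Hypothetical 2-parameter kinematic error model: tool positions generated
# with a "true" link-length error (m) and joint-offset error (rad)
true_err = np.array([0.002, -0.01])
angles = np.linspace(0.0, np.pi, 15)
measured = (1.0 + true_err[0]) * np.cos(angles + true_err[1])

def cost(p):
    model = (1.0 + p[0]) * np.cos(angles + p[1])
    return np.sum((model - measured) ** 2)

est, residual = differential_evolution(cost, [(-0.01, 0.01), (-0.05, 0.05)])
```

In the thesis the residual would instead be built from the wire-based pose measurements, and the MCMC algorithm would explore the same parameter space to quantify uncertainty in the identified errors.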