124 resultados para Simulation Environments


Relevância: 20.00%

Resumo:

The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators, etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines, and efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits, numerical problems arise from the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations, and efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems add noise to the results, which in many cases causes the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or eliminate them by introducing steady-state solutions for the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits with explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied: the pressure drop approaching zero in the turbulent orifice model, and the volume approaching zero in the equation of pressure build-up. These are the critical areas for which alternative methods for modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly turbulent. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations, e.g. when a valve is closed, when an actuator is driven against an end stop, or when an external force makes the actuator switch its direction during operation. This means that in terms of accuracy, a description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur as the pressure drop approaches zero, since the first derivative of flow with respect to the pressure drop approaches infinity. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed, using a cubic spline function to describe the flow in the laminar and transition regions. The parameters of the cubic spline are selected such that its first derivative equals the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits there is a trade-off between accuracy and calculation speed, and this trade-off is investigated for the two-regime orifice flow model.
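
The abstract does not give the exact spline construction, but the derivative-matching idea can be sketched as follows: below an assumed transition pressure p_tr the square-root law is replaced by an odd cubic whose value and slope match the turbulent branch at the boundary (all parameter values here are illustrative, not taken from the thesis).

```python
import numpy as np

def orifice_flow(dp, Cq=0.6, A=1.0e-5, rho=860.0, p_tr=2.0e5):
    """Volumetric flow through an orifice [m^3/s].

    Turbulent sqrt-law for |dp| > p_tr; below p_tr an odd cubic
    a*dp + b*dp**3 whose value and first derivative match the turbulent
    branch at the boundary, so dq/d(dp) stays finite as dp -> 0.
    All parameter values are illustrative.
    """
    K = Cq * A * np.sqrt(2.0 / rho)      # turbulent coefficient: q = K*sqrt(|dp|)*sign(dp)
    a = 1.25 * K / np.sqrt(p_tr)         # from matching q(p_tr)  = K*sqrt(p_tr)
    b = -0.25 * K / p_tr**2.5            # and          q'(p_tr) = K/(2*sqrt(p_tr))
    dp = np.asarray(dp, dtype=float)
    q_turb = np.sign(dp) * K * np.sqrt(np.abs(dp))
    q_cubic = a * dp + b * dp**3
    return np.where(np.abs(dp) > p_tr, q_turb, q_cubic)

# quick check of smoothness at the assumed transition pressure
print(orifice_flow([1.999e5, 2.001e5]))
```

Because the slope of the cubic stays finite at zero pressure drop, an explicit fixed-step integrator no longer sees an unbounded flow gain near a zero pressure drop.
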
Very small volumes exist especially inside many types of valves, as well as between them. The integration of pressures in such small fluid volumes causes numerical problems in fluid power circuit simulation, and particularly in real-time simulation these numerical problems are a great weakness. The system stiffness approaches infinity as the fluid volume approaches zero. If fixed-step explicit algorithms for solving ordinary differential equations (ODE) are used, stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/Be of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided. The method is also freely applicable regardless of the integration routine applied. A further advantage of both of the above methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled; most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and presents several numerical examples to demonstrate how they improve the dynamic simulation of various hydraulic circuits.
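
To make the pseudo-dynamic idea concrete, the sketch below drives the pressure of a small node to the value at which the net flow vanishes in a separate inner loop, instead of integrating the real, stiff pressure build-up equation. Scaling the step by a finite-difference estimate of the flow gain is one simple way to keep the inner loop stable; the thesis' actual cascade-loop construction may differ, and all numerical values are illustrative.

```python
import math

def steady_node_pressure(net_flow, p_init, rel=0.8, tol=1e-12, max_iter=100):
    """Pseudo-dynamic solution of the pressure in a very small volume.

    Instead of integrating the stiff equation dp/dt = (Be/V) * net_flow(p),
    the node pressure is driven to the value where the net flow vanishes
    in a separate inner loop.  Each step is scaled by a finite-difference
    estimate of d(net_flow)/dp to keep the loop stable (a simple choice;
    the thesis' cascade loop may be constructed differently).
    """
    p = p_init
    for _ in range(max_iter):
        q = net_flow(p)
        if abs(q) < tol:                    # net flow ~ 0 -> steady state
            break
        h = max(1.0, 1e-6 * abs(p))         # finite-difference step [Pa]
        dqdp = (net_flow(p + h) - q) / h    # local flow gain (negative)
        p -= rel * q / dqdp                 # damped Newton-type update
    return p

# example: node between a 10 MPa supply and a 1 MPa return line,
# both connected through identical sqrt-law orifices (illustrative)
K = 3e-7  # lumped orifice coefficient [m^3/(s*sqrt(Pa))], made up
flow = lambda dp: math.copysign(K * math.sqrt(abs(dp)), dp)
p_node = steady_node_pressure(lambda p: flow(10e6 - p) - flow(p - 1e6),
                              p_init=2e6)
print(p_node)   # converges to ~5.5e6 Pa, where inflow equals outflow
```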

Relevância: 20.00%

Resumo:

This thesis studies the use of heuristic algorithms in a number of combinatorial problems that occur in various resource-constrained environments. Such problems occur, for example, in manufacturing, where a restricted number of resources (tools, machines, feeder slots) is needed to perform some operations. Many of these problems turn out to be computationally intractable, and heuristic algorithms are used to provide efficient, yet sub-optimal solutions. The main goal of the present study is to build upon existing methods to create new heuristics that provide improved solutions for some of these problems. All of these problems occur in practice, and one of the motivations of our study was the request for improvements from industrial sources. We approach three different resource-constrained problems. The first is the tool switching and loading problem, which occurs especially in the assembly of printed circuit boards. This problem has to be solved when an efficient yet small primary storage is used to access resources (tools) from a less efficient (but unlimited) secondary storage area. We study various forms of the problem and provide improved heuristics for its solution. Second, the nozzle assignment problem is concerned with selecting a suitable set of vacuum nozzles for the arms of a robotic assembly machine. It turns out that this is a specialized formulation of the MINMAX resource allocation formulation of the apportionment problem, and it can be solved efficiently and optimally. We construct an exact algorithm specialized for nozzle selection and provide a proof of its optimality. Third, the problem of feeder assignment and component tape construction occurs when electronic components are inserted and certain component types cause tape movement delays that can significantly impact the efficiency of printed circuit board assembly. Here, careful selection of component slots in the feeder improves the tape movement speed. We provide a formal proof that this problem is of the same complexity as the turnpike problem (a well-studied geometric optimization problem), and provide a heuristic algorithm for it.
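
The abstract does not spell out the heuristics themselves. As one concrete point of reference for the tool switching problem, the classic keep-tool-needed-soonest (KTNS) replacement rule, which minimizes switches once the job sequence is fixed, can be sketched as follows; the job and tool data below are made up for illustration and are not from the thesis.

```python
def ktns_switches(jobs, capacity):
    """Keep-Tool-Needed-Soonest (KTNS): process a fixed job sequence in
    order, load the tools each job needs and, when the magazine is full,
    evict the tool whose next use lies furthest in the future.  Counts one
    switch per eviction-plus-load after the magazine has filled up.
    Assumes no single job needs more tools than the magazine capacity."""
    def next_use(tool, start):
        for i in range(start, len(jobs)):
            if tool in jobs[i]:
                return i
        return len(jobs)                       # tool is never needed again

    magazine, switches = set(), 0
    for i, needed in enumerate(jobs):
        for tool in needed:
            if tool in magazine:
                continue
            if len(magazine) >= capacity:      # magazine full -> evict
                victim = max(magazine - needed,
                             key=lambda t: next_use(t, i + 1))
                magazine.remove(victim)
                switches += 1
            magazine.add(tool)
    return switches

# hypothetical instance: five jobs, tools 1..6, magazine holding 3 tools
jobs = [{1, 2}, {2, 3, 4}, {1, 4}, {5, 6}, {2, 5}]
print(ktns_switches(jobs, capacity=3))         # -> 4 switches
```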

Relevância: 20.00%

Resumo:

Traditionally simulators have been used extensively in robotics to develop robotic systems without the need to build expensive hardware. However, simulators can be also be used as a “memory”for a robot. This allows the robot to try out actions in simulation before executing them for real. The key obstacle to this approach is an uncertainty of knowledge about the environment. The goal of the Master’s Thesis work was to develop a method, which allows updating the simulation model based on actual measurements to achieve a success of the planned task. OpenRAVE was chosen as an experimental simulation environment on planning,trial and update stages. Steepest Descent algorithm in conjunction with Golden Section search procedure form the principle part of optimization process. During experiments, the properties of the proposed method, such as sensitivity to different parameters, including gradient and error function, were examined. The limitations of the approach were established, based on analyzing the regions of convergence.
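
The combination of steepest descent with a golden section line search can be sketched as below; the error function here is a toy stand-in for the actual simulation-versus-measurement error, and all names and values are illustrative rather than taken from the thesis.

```python
import numpy as np

GOLDEN = (np.sqrt(5) - 1) / 2   # ~0.618

def golden_section(f, a, b, tol=1e-6):
    """Golden-section search for the minimizer of a 1-D function on [a, b]."""
    c, d = b - GOLDEN * (b - a), a + GOLDEN * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - GOLDEN * (b - a)
        else:
            a, c = c, d
            d = a + GOLDEN * (b - a)
    return 0.5 * (a + b)

def steepest_descent(error, grad, x0, step_max=5.0, tol=1e-6, max_iter=200):
    """Steepest descent with a golden-section line search along -grad."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g / np.linalg.norm(g)                        # descent direction
        step = golden_section(lambda s: error(x + s * d), 0.0, step_max)
        x = x + step * d
    return x

# toy error function standing in for the simulation-vs-measurement error
error = lambda x: (x[0] - 1.0) ** 2 + 10 * (x[1] + 2.0) ** 2
grad  = lambda x: np.array([2 * (x[0] - 1.0), 20 * (x[1] + 2.0)])
print(steepest_descent(error, grad, x0=[0.0, 0.0]))       # -> close to [1, -2]
```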

Relevância: 20.00%

Resumo:

Solids processes are used to recover valuable minerals. Because of their value, numerous experiments are needed to determine the relevant properties of these minerals. Carrying out such experiments for every mineral and every characteristic has become increasingly difficult owing to high labour costs and the time required. Scientists and engineers have therefore developed simulation software to address this problem, and Aspen is one such package for calculating the relevant process parameters. The aim of this work was to simulate solids processes in order to observe their effects on mineral streams. Different solid processes, namely crushing, screening, filtration and crystallization, were simulated with Aspen Plus, and the simulation results are described in this thesis. The results were found to be acceptable for all of the solid processes studied. The software can therefore be used for designing crushers by calculating their power consumption, for designing filters, and for calculating the material balance of all the processes.
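
Crusher sizing in flowsheet simulators of this kind is commonly based on comminution relations such as Bond's law, so the power-consumption calculation mentioned above can be checked by hand with a short sketch; the feed data here are made up for illustration.

```python
from math import sqrt

def bond_crusher_power(throughput_tph, work_index_kwh_t, f80_um, p80_um):
    """Crusher power demand from Bond's comminution law.

    Specific energy  W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80))   [kWh/t]
    with F80 / P80 the 80 %-passing sizes of feed and product in micrometres.
    Power [kW] = specific energy * throughput [t/h].
    """
    w_spec = 10.0 * work_index_kwh_t * (1.0 / sqrt(p80_um) - 1.0 / sqrt(f80_um))
    return w_spec * throughput_tph

# made-up example: 100 t/h of ore (Wi = 14 kWh/t) crushed from 150 mm to 20 mm
print(bond_crusher_power(100.0, 14.0, f80_um=150_000, p80_um=20_000))  # ~63 kW
```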

Relevância: 20.00%

Resumo:

The purpose of this study was to simulate and optimize an integrated gasification combined cycle (IGCC) for power generation and hydrogen (H2) production using low-grade Thar lignite coal and cotton stalk. Lignite is high in moisture and ash content; the idea behind adding cotton stalk is to increase the mass of combustible material per unit mass of feed, to reduce coal consumption, and to make efficient use of cotton stalk in the IGCC process. Aspen Plus software is used to simulate the process with different mass ratios of coal to cotton stalk; for optimization, the process efficiencies, net power generation and H2 production are considered, while hazardous emissions are kept at an acceptable level. With the addition of cotton stalk to the feed, the process efficiencies began to decline along with the net power production. H2 production initially increased, but beyond 40% cotton stalk addition it also began to decline. The addition also affects emissions negatively: the mass of emissions per unit of net power produced increases linearly with the share of cotton stalk in the feed mixture. Overall, the effects of adding cotton stalk appear negative, and markedly so beyond 40% addition. It is therefore concluded that, to obtain maximum process efficiencies and high production, only a limited amount of cotton stalk should be added to the feed, with the maximum level estimated at 40%. The gasification temperature should be kept low, at around 1140 °C, and the preferred technique for the studied feed in IGCC is a fluidized bed gasifier (ash in dry form) rather than an ash-slagging gasifier.
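
The reasoning about combustible material per unit mass of feed is a simple weighted average over the blend ratio. A minimal sketch is given below, with placeholder proximate compositions rather than measured Thar lignite or cotton-stalk data.

```python
def blend_properties(x_stalk, coal, stalk):
    """Weighted-average proximate composition of a coal / cotton-stalk blend.

    x_stalk : mass fraction of cotton stalk in the feed (0..1)
    coal, stalk : dicts with mass fractions of 'moisture', 'ash' and
                  'combustible' (as-received basis); the values used below
                  are placeholders, not measured fuel data.
    """
    return {k: (1 - x_stalk) * coal[k] + x_stalk * stalk[k] for k in coal}

coal  = {"moisture": 0.45, "ash": 0.10, "combustible": 0.45}   # assumed
stalk = {"moisture": 0.10, "ash": 0.05, "combustible": 0.85}   # assumed

for x in (0.0, 0.2, 0.4, 0.6):
    print(x, blend_properties(x, coal, stalk))   # combustible fraction rises with x
```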

Relevância: 20.00%

Resumo:

Transportation and warehousing are large and growing sectors in society, and their efficiency is of high importance. Transportation also has a large share of global carbon dioxide emissions, which are one of the leading causes of anthropogenic climate warming. Various countries have agreed to decrease their carbon emissions according to the Kyoto Protocol. Transportation is the only sector where emissions have steadily increased since the 1990s, which highlights the importance of transportation efficiency. The efficiency of transportation and warehousing can be improved with the help of simulations, but models alone are not sufficient. This research concentrates on the use of simulations in decision support systems. Three main simulation approaches are used in logistics: discrete-event simulation, system dynamics, and agent-based modeling. However, individual simulation approaches have weaknesses of their own. Hybridization (combining two or more approaches) can improve the quality of the models, as it allows a different method to be used to overcome the weakness of another. It is important to choose the correct approach (or a combination of approaches) when modeling transportation and warehousing issues. If an inappropriate method is chosen (this can occur if the modeler is proficient in only one approach or the model specification is not conducted thoroughly), the simulation model will have an inaccurate structure, which in turn will lead to misleading results. This issue can escalate further, as the decision-maker may assume that the presented simulation model gives the most useful results available, even though the whole model can be based on a poorly chosen structure. In this research it is argued that simulation-based decision support systems need to take various issues into account to make a functioning decision support system. The actual simulation model can be constructed using any (or multiple) approach, it can be combined with different optimization modules, and there needs to be a proper interface between the model and the user. These issues are presented in a framework which simulation modelers can use when creating decision support systems. In order for decision-makers to fully benefit from the simulations, the user interface needs to clearly separate the model and the user, but at the same time, the user needs to be able to run the appropriate simulation runs in order to analyze the problems correctly. This study recommends that simulation modelers should start to transfer their tacit knowledge to explicit knowledge. This would greatly benefit the whole simulation community and improve the quality of simulation-based decision support systems as well. More studies should also be conducted using hybrid models and integrating simulations with Geographic Information Systems (GIS).
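
The framework itself is described at the level of principles. As a purely illustrative sketch (the class and method names are invented, not the thesis' actual framework), the separation between simulation model, optional optimization module and user-facing interface could be wired roughly like this:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SimulationModel:
    """Wraps any modelling approach (DES, SD, ABM or a hybrid) behind
    one call: scenario parameters in, performance indicators out."""
    run: Callable[[Dict[str, float]], Dict[str, float]]

@dataclass
class Optimizer:
    """Optional module that searches the parameter space of the model."""
    search: Callable[[SimulationModel, Dict[str, float]], Dict[str, float]]

class DecisionSupportSystem:
    """Keeps the user interface separate from the model, while letting the
    decision-maker launch the runs needed to analyse a problem."""
    def __init__(self, model: SimulationModel, optimizer: Optimizer = None):
        self.model, self.optimizer = model, optimizer

    def evaluate(self, scenario: Dict[str, float]) -> Dict[str, float]:
        return self.model.run(scenario)

    def optimize(self, initial: Dict[str, float]) -> Dict[str, float]:
        if self.optimizer is None:
            raise RuntimeError("no optimization module attached")
        return self.optimizer.search(self.model, initial)

# toy usage: a stand-in 'model' mapping fleet size to cost and service level
toy = SimulationModel(run=lambda p: {"cost": 100 / p["trucks"] + 5 * p["trucks"],
                                     "service": min(1.0, p["trucks"] / 8)})
dss = DecisionSupportSystem(toy)
print(dss.evaluate({"trucks": 5}))
```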

Relevância: 20.00%

Resumo:

Combating climate change is one of the key tasks of humanity in the 21st century. One of its leading causes is carbon dioxide emissions from the use of fossil fuels, so renewable energy sources should be used instead of relying on oil, gas, and coal. In Finland a significant amount of energy is produced using wood, and the use of wood chips is expected to increase significantly in the future, by over 60%. The aim of this research is to improve understanding of the costs of wood chip supply chains. This is done by using simulation as the main research method. The simulation model combines agent-based modelling and discrete-event simulation to imitate the wood chip supply chain. This thesis concentrates on the use of simulation-based decision support systems in strategic decision-making. The simulation model is part of a decision support system, which connects the simulation model to databases and also provides a graphical user interface for the decision-maker. The main analysis conducted with the decision support system compares a traditional supply chain to a supply chain utilizing specialized containers. According to the analysis, the container supply chain achieves lower costs than the traditional supply chain. A container supply chain can also be scaled up more easily owing to faster emptying operations. Initially, the container operations would supply only part of the fuel needs of a power plant and would complement the current supply chain. The model can be expanded to include intermodal supply chains, since with increasing demand there will not be enough wood chips located close to current and future power plants.
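
The cost comparison hinges on the emptying time at the plant. A minimal per-cycle cost sketch is given below, in which the unloading time is the only structural difference between the two chains; every numerical value is a placeholder rather than data from the thesis.

```python
def chip_supply_cost(distance_km, unload_h, truck_cost_h=85.0, speed_kmh=60.0,
                     load_h=0.75, payload_mwh=120.0, chipping_cost_mwh=4.0):
    """Delivered wood-chip cost [EUR/MWh] for one truck cycle.

    cycle time = loading + driving there and back + unloading (emptying);
    transport cost per MWh = hourly truck cost * cycle time / payload.
    All parameter values are illustrative placeholders.
    """
    cycle_h = load_h + 2 * distance_km / speed_kmh + unload_h
    return chipping_cost_mwh + truck_cost_h * cycle_h / payload_mwh

# same route, only the emptying time differs between the two chains
print("traditional:", chip_supply_cost(80, unload_h=1.0))
print("container:  ", chip_supply_cost(80, unload_h=0.25))  # lower cost/MWh
```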

Relevância: 20.00%

Resumo:

Computational model-based simulation methods were developed for the modelling of bioaffinity assays. Bioaffinity-based methods are widely used to quantify a biological substance in biological research and development and in routine clinical in vitro diagnostics. Bioaffinity assays are based on the high affinity and structural specificity between the binding biomolecules. The simulation methods developed are based on a mechanistic assay model, which relies on chemical reaction kinetics and describes the formation of the bound component as a function of time from the initial binding interaction. The simulation methods focused on studying the behaviour and reliability of bioaffinity assays and the possibilities that modelling of binding reaction kinetics provides, such as predicting assay results even before the binding reaction has reached equilibrium. A rapid quantitative result from a clinical bioaffinity assay sample can be very significant; for example, even the smallest elevation of a heart muscle marker can reveal a cardiac injury. The simulation methods were used to identify critical error factors in rapid bioaffinity assays. A new kinetic calibration method was developed to calibrate a measurement system from kinetic measurement data using only one standard concentration. A node-based method was developed to model multi-component binding reactions, which have been a challenge for traditional numerical methods. The node-based method was also used to model protein adsorption as an example of nonspecific binding of biomolecules. These methods have been compared with experimental data from practice and can be utilized in in vitro diagnostics, drug discovery and medical imaging.
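
At its simplest, a mechanistic assay model of this kind is the 1:1 binding reaction A + B ⇌ AB integrated over time; a minimal sketch is given below (the rate constants and concentrations are illustrative, and the thesis' models cover more general multi-component cases).

```python
import numpy as np

def bound_concentration(t, a0, b0, kon, koff, n_steps=20000):
    """Bound complex concentration [AB](t) for a 1:1 binding reaction
    A + B <-> AB, integrated with a simple explicit Euler scheme:

        d[AB]/dt = kon*(A0 - [AB])*(B0 - [AB]) - koff*[AB]

    Rate constants and concentrations used below are illustrative only.
    """
    ab, dt = 0.0, t / n_steps
    out = np.empty(n_steps + 1)
    out[0] = ab
    for i in range(1, n_steps + 1):
        ab += dt * (kon * (a0 - ab) * (b0 - ab) - koff * ab)
        out[i] = ab
    return out

# e.g. 1 nM analyte, 10 nM binder, kon = 1e6 1/(M*s), koff = 1e-4 1/s
curve = bound_concentration(t=600.0, a0=1e-9, b0=10e-9, kon=1e6, koff=1e-4)
print(curve[-1] / 1e-9)   # fraction of analyte bound after 10 minutes
```

A curve like this is what allows the assay result to be predicted from early kinetic data, before the reaction has reached equilibrium.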

Relevância: 20.00%

Resumo:

The last decade has shown that the global paper industry needs new processes and products in order to reassert its position. As the paper markets in Western Europe and North America have stabilized, the competition has tightened. Along with the development of more cost-effective processes and products, new process design methods are also required to break the old molds and create new ideas. This thesis discusses the development of a process design methodology based on simulation and optimization methods. A bi-level optimization problem and a solution procedure for it are formulated and illustrated. Computational models and simulation are used to illustrate the phenomena inside a real process, and mathematical optimization is exploited to find the best process structures and control principles for the process. Dynamic process models are used inside the bi-level optimization problem, which is assumed to be dynamic and multiobjective due to the nature of papermaking processes. The numerical experiments show that the bi-level optimization approach is useful for different kinds of problems related to process design and optimization. Here, the design methodology is applied to a constrained process area of a papermaking line. However, the same methodology is applicable to all types of industrial processes, e.g. the design of biorefineries, because the methodology is fully generalized and can be easily modified.
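
In generic form (the abstract does not state the thesis' exact objectives and constraints), a bi-level design problem of this kind has the structure below, with the design variables x chosen at the upper level and the operational/control variables y obtained from a lower-level problem constrained by the dynamic process model:

```latex
\begin{aligned}
\min_{x \in X}\;\; & F\bigl(x,\; y^{*}(x)\bigr)
  && \text{(upper level: process structure and design)}\\
\text{s.t.}\;\; & G\bigl(x,\; y^{*}(x)\bigr) \le 0,\\
& y^{*}(x) \in \arg\min_{y \in Y}
  \bigl\{\, f(x,y) \;:\; g(x,y) \le 0,\;\; \dot{z} = h(x,y,z,t) \,\bigr\}
  && \text{(lower level: control of the dynamic model)}
\end{aligned}
```

Here F and f may be vector-valued to reflect the multiobjective, dynamic nature of the papermaking design problem described above.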

Relevância: 20.00%

Resumo:

This dissertation presents studies on the environments of active galaxies. Paper I is a case study of a cluster of galaxies containing the BL Lac object RGB 1745+398. We measured the velocity dispersion, mass, and richness of the cluster. This was one of the most thorough studies of the environment of a BL Lac object, and the methods used in the paper could be applied to other clusters in the future. In Paper II we studied the environments of nearby quasars in the Sloan Digital Sky Survey (SDSS). We found that quasars have fewer neighboring galaxies than luminous inactive galaxies. In the large-scale structure, quasars are usually located at the edges of superclusters or even in void regions. We concluded that these low-redshift quasars may have become active only recently, because galaxies in low-density environments evolve later to the phase where quasar activity can be triggered. In Paper III we extended the analysis of Paper II to other types of AGN besides quasars. We found that different types of AGN have different large-scale environments. Radio galaxies are more concentrated in superclusters, while quasars and Seyfert galaxies prefer low-density environments. The different environments indicate that AGN have different roles in galaxy evolution. Our results suggest that the activity of galaxies may depend on their environment on the large scale. The results of Paper III raised questions about the cause of the environment dependence in the evolution of galaxies. Because high-density large-scale environments contain richer groups and clusters than underdense environments, our results could reflect smaller-scale effects. In Paper IV we addressed this problem by studying the group- and supercluster-scale environments of galaxies together. We compared the galaxy populations in groups of different richnesses in different large-scale environments. We found that the large-scale environment affects the galaxies independently of the group richness. Galaxies in low-density environments on the large scale are more likely to be star-forming than those in superclusters, even if they are in groups with the same richness. Based on these studies, the conclusion of this dissertation is that the large-scale environment affects the evolution of galaxies. This may be caused by a different “speed” of galaxy evolution in low- and high-density environments: galaxies in dense environments reach certain phases of evolution earlier than galaxies in underdense environments. As a result, the low-density regions at low redshifts are populated by galaxies in earlier phases of evolution than galaxies in high-density regions.

Relevância: 20.00%

Resumo:

The aim of my Master's thesis is to analyse the meanings of the environment in the novel White Noise (1985) by the American author Don DeLillo (1936-). I approach the novel from the perspective of ecocritical literary studies and connect it to the ecocritic Lawrence Buell's notion of the so-called environmental unconscious. I also analyse DeLillo's novel in connection with the philosopher Jean Baudrillard's idea of the system of simulations that prevails in postmodern Western, and especially American, society. The reality of White Noise corresponds to Baudrillard's idea of a society in which representations and simulations have replaced reality. The media, especially television, constantly produces images and simulations, and in a reality saturated with them the material world and nature remain out of reach. The characters of White Noise have lost their connection to their material environment and to natural phenomena, as their everyday life largely revolves around consumption and watching television. In the reality of the novel, identity too has become a kind of product that everyone can construct to their liking through their consumer choices. Alongside the problematics of identity, death also has a central place in my thesis. The protagonist of White Noise, Jack Gladney, suffers from a panic-like fear of death, which he tries to ward off by various means without success. My aim is to show that this tormenting fear of death has arisen as a result of the society of simulations: alienation from the material world and from natural processes has led to alienation from the body and from death. I analyse death in the novel as a kind of outer limit of the world of simulations, the last natural event. Jack Gladney becomes distressed in his consumption-centred living environment saturated with simulations. I interpret this anxiety as a need to recognise the important interactive relationship between the individual and his material environment. Jack has not yet fully merged into the world of simulations; he is aware of the connection between himself and the material world. This sense of awareness, which emerges implicitly from the novel, emphasises the necessary connection between the human being and the environment and, more broadly, between culture and nature. DeLillo's novel thus contains the idea of an environmental unconscious, which underlines the significance of the environment and nature for human beings.

Relevância: 20.00%

Resumo:

Modern machine structures are often fabricated by welding. From a fatigue point of view, the structural details, and especially the welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure's critical details and on the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already a current method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. Also, to take operator influence into account, real-time models, i.e. simplified and computationally efficient models, are required. This, in turn, requires stress computation to be efficient if it is to be performed during dynamic simulation. The research looks back at the theoretical background of multibody dynamic simulation and the finite element method to find suitable parts for a new approach to efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the relevant fatigue assessment stress history for a structural detail during or after dynamic simulation. In this work, numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
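
With modal superposition, the stress recovery itself reduces to a cheap post-processing step: the stress history of a detail is a linear combination of pre-computed modal stresses weighted by the modal coordinate history from the simulation. A minimal sketch of that step is shown below, with placeholder matrix sizes and values; the thesis' actual sub-modeling workflow involves more than this.

```python
import numpy as np

def stress_history(modal_stress, modal_coords):
    """Recover the stress history of a structural detail from a dynamic
    simulation using the floating frame of reference formulation with
    modal superposition.

    modal_stress : (n_components, n_modes) modal stresses at the detail,
                   pre-computed once from the (sub-)model of the detail
    modal_coords : (n_modes, n_time) modal coordinate history q_i(t)
                   exported from the multibody simulation
    returns      : (n_components, n_time) stress history  sigma(t) = S q(t)
    """
    return modal_stress @ modal_coords

# illustrative sizes: 6 stress components, 8 retained modes, 1000 time steps
rng = np.random.default_rng(0)
S = rng.normal(size=(6, 8))          # placeholder modal stress matrix [MPa]
q = rng.normal(size=(8, 1000))       # placeholder modal coordinate history
sigma = stress_history(S, q)
print(sigma.shape)                   # (6, 1000) -> ready for fatigue analysis
```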

Relevância: 20.00%

Resumo:

Energy efficiency is one of the major objectives that should be achieved in order to use the limited energy resources of the world in a sustainable way. Since radiative heat transfer is the dominant heat transfer mechanism in most fossil fuel combustion systems, more accurate insight and models may improve the energy efficiency of newly designed combustion systems. The radiative properties of combustion gases are highly wavelength dependent, and better models for calculating them are needed in the modeling of large-scale industrial combustion systems. With detailed knowledge of the spectral radiative properties of gases, the modeling of combustion processes in different applications can be more accurate. In order to propose a new method for effective non-gray modeling of radiative heat transfer in combustion systems, different models for the spectral properties of gases, including the SNBM, EWBM, and WSGGM, have been studied in this research. Using this detailed analysis of the different approaches, the thesis presents new methods for gray and non-gray radiative heat transfer modeling in homogeneous and inhomogeneous H2O–CO2 mixtures at atmospheric pressure. The proposed method is able to support the modeling of a wide range of combustion systems, including the oxy-fired combustion scenario. The new methods are based on implementing pre-obtained correlations for the total emissivity and band absorption coefficient of H2O–CO2 mixtures at different temperatures, gas compositions, and optical path lengths. They can be easily used within any commercial CFD software for radiative heat transfer modeling, resulting in more accurate, simple, and fast calculations. The new methods were successfully used in CFD modeling by applying them to an industrial-scale backpass channel under oxy-fired conditions. The developed approaches are more accurate than other methods; moreover, they can provide a complete explanation and detailed analysis of the radiative heat transfer in different systems under different combustion conditions. The methods were verified by applying them to several benchmarks, and they showed a good level of accuracy and computational speed compared to other methods. Furthermore, the implementation of the suggested banded approach in CFD software is easy and straightforward.
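
Total-emissivity correlations of the weighted-sum-of-gray-gases (WSGG) type have the general form sketched below; the gray-gas coefficients and temperature weights used here are placeholders, not the correlations developed in the thesis or any published WSGG set.

```python
import numpy as np

def wsgg_emissivity(T, pL, kappa, b):
    """Total emissivity of an H2O-CO2 mixture from a weighted-sum-of-
    gray-gases (WSGG) model:

        eps(T, pL) = sum_i a_i(T) * (1 - exp(-kappa_i * pL)),
        a_i(T)     = sum_j b[i, j] * (T / 1200.0)**j

    kappa : gray-gas pressure absorption coefficients [1/(atm*m)]
    b     : polynomial coefficients of the temperature weights a_i(T)
    The numerical values used below are placeholders only.
    """
    Tr = T / 1200.0                                  # reduced temperature
    a = np.array([np.polyval(b[i, ::-1], Tr) for i in range(len(kappa))])
    return float(np.sum(a * (1.0 - np.exp(-np.asarray(kappa) * pL))))

kappa = [0.4, 6.0, 120.0]                            # placeholder gray gases
b = np.array([[0.3, 0.10, -0.05],                    # placeholder weight
              [0.2, 0.05, -0.02],                    # polynomials per gas
              [0.1, -0.02, 0.00]])
print(wsgg_emissivity(T=1400.0, pL=1.0, kappa=kappa, b=b))
```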