55 results for GIS BASED SIMULATION
Abstract:
Simulation has traditionally been used for analyzing the behavior of complex real-world problems. Even though only some features of the problems are considered, simulation time tends to become quite high even for common simulation problems. Parallel and distributed simulation is a viable technique for accelerating simulations. The success of parallel simulation depends heavily on the combination of the simulation application, algorithm and environment. In this thesis a conservative, parallel simulation algorithm is applied to the simulation of a cellular network application in a distributed workstation environment. This thesis presents a distributed simulation environment, Diworse, which is based on the use of networked workstations. The distributed environment is considered especially hard for conservative simulation algorithms due to the high cost of communication. In this thesis, however, the distributed environment is shown to be a viable alternative if the amount of communication is kept reasonable. The novel ideas of multiple message simulation and channel reduction enable efficient use of this environment for the simulation of a cellular network application. The distribution of the simulation is based on a modification of the well-known Chandy-Misra deadlock avoidance algorithm with null messages. The basic Chandy-Misra algorithm is modified by using the null message cancellation and multiple message simulation techniques. The modifications reduce the number of null messages and the time required for their execution, thus reducing the overall simulation time. The null message cancellation technique reduces the processing time of null messages, as an arriving null message cancels other unprocessed null messages. Multiple message simulation forms groups of messages, simulating several messages before releasing the newly created messages.
If the message population in the simulation is sufficient, no additional delay is caused by this operation. A new technique for taking the simulation application into account is also presented. The performance is improved by establishing a neighborhood for the simulation elements. The neighborhood concept is based on a channel reduction technique, where the properties of the application exclusively determine which connections are necessary when a certain accuracy of the simulation results is required. Distributed simulation is also analyzed in order to find out the effect of the different elements of the implemented simulation environment. This analysis is performed using critical path analysis, which allows the determination of a lower bound for the simulation time. In this thesis critical times are computed for sequential and parallel traces. The analysis based on sequential traces reveals the parallel properties of the application, whereas the analysis based on parallel traces reveals the properties of the environment and the distribution.
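The null-message mechanism described in the abstract above can be sketched as follows. This is a minimal illustrative model of a single conservative logical process, not the Diworse implementation; all class and field names are assumptions.

```python
class LogicalProcess:
    """Toy conservative logical process illustrating null messages and
    null message cancellation (illustrative sketch, not Diworse)."""

    def __init__(self, name, lookahead):
        self.name = name
        self.lookahead = lookahead  # minimum delay promised to neighbours
        self.clock = 0.0
        self.inbox = []             # (timestamp, is_null, payload) tuples

    def receive(self, timestamp, is_null=False, payload=None):
        if is_null:
            # Null message cancellation: an arriving null message makes all
            # earlier unprocessed null messages redundant, so drop them.
            self.inbox = [m for m in self.inbox if not m[1]]
        self.inbox.append((timestamp, is_null, payload))
        self.inbox.sort(key=lambda m: m[0])

    def safe_to(self):
        # A conservative LP may only advance to the smallest input timestamp.
        return self.inbox[0][0] if self.inbox else self.clock

    def make_null_message(self):
        # Promise: this LP will send no event earlier than clock + lookahead.
        return (self.clock + self.lookahead, True, None)
```

Cancellation is what keeps the channel queues short: however many nulls a neighbour sends, at most one unprocessed null per channel survives, which is the source of the reduced null-message processing time mentioned above.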
Abstract:
This thesis introduces a real-time simulation environment based on the multibody simulation approach. The environment consists of components that are used in conventional product development, including computer-aided drawing, visualization, dynamic simulation and finite element software, software architecture, data transfer and haptics. These components are combined to perform as a coupled system on one platform. The environment is used to simulate mobile and industrial machines at different stages of the product lifetime; consequently, the demands of the simulated scenarios vary. In this thesis, a real-time simulation environment based on the multibody approach is used to study a reel mechanism of a paper machine and a gantry crane. These case systems are used to demonstrate the usability of the real-time simulation environment for fault detection purposes and in the context of a training simulator. In order to describe the dynamic performance of a mobile or industrial machine, the nonlinear equations of motion must be defined. In this thesis, the dynamic behaviour of machines is modelled using the multibody simulation approach. A multibody system may consist of rigid and flexible bodies which are joined using kinematic joint constraints, while force components are used to describe the actuators. The strength of multibody dynamics lies in its ability to describe, in a systematic manner, nonlinearities arising from wear of the components, friction, large rotations or contact forces. For this reason, the interfaces between subsystems such as the mechanics, hydraulics and control systems of a mechatronic machine can be defined and analyzed in a straightforward manner.
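The constrained nonlinear equations of motion referred to above are commonly written as an index-3 differential-algebraic system (generic multibody notation, not taken from this thesis):

```latex
\mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}}
  + \mathbf{C}_{\mathbf{q}}^{\mathsf{T}}\,\boldsymbol{\lambda}
  = \mathbf{Q}(\mathbf{q},\dot{\mathbf{q}},t),
\qquad
\mathbf{C}(\mathbf{q},t) = \mathbf{0}
```

where \(\mathbf{q}\) are the generalized coordinates, \(\mathbf{M}\) is the mass matrix, \(\mathbf{C}\) collects the kinematic joint constraints, \(\mathbf{C}_{\mathbf{q}}\) is the constraint Jacobian, \(\boldsymbol{\lambda}\) are the Lagrange multipliers carrying the joint reaction forces, and \(\mathbf{Q}\) gathers the applied and actuator forces (hydraulics, contacts, friction).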
Abstract:
A coupled system simulator, based on analytical circuit equations and a finite element method (FEM) model of the motor, has been developed and used to analyse a frequency-converter-fed industrial squirrel-cage induction motor. Two control systems that emulate the behaviour of commercial direct-torque-controlled (DTC) and vector-controlled industrial frequency converters have been studied, implemented in the simulation software and verified by extensive laboratory tests. Numerous factors that affect the operation of a variable speed drive (VSD) and its energy efficiency have been investigated, and their significance for the simulation results of the VSD has been studied. The dependency of the frequency converter, induction motor and system losses on the switching frequency is investigated by simulations and measurements at different speeds for both the vector control and the DTC. Intensive laboratory measurements have been carried out to verify the simulation results.
Abstract:
A set of models in Aspen Plus was built to simulate the direct synthesis process of hydrogen peroxide in a microreactor system. The process model can be used to carry out material balance calculations under various experimental conditions. Three thermodynamic property methods were compared by calculating gas solubility, and the UNIQUAC-RK method was finally selected for the process model. Two different operation modes with corresponding operation conditions were proposed as the starting point of future experiments. Simulations for these two modes were carried out to obtain information about the material streams. Moreover, hydrodynamic parameters such as gas/liquid superficial velocity and gas holdup were also calculated with the improved process model; these parameters indicated that the proposed experimental conditions are reasonable. The influence of operation conditions, including temperature, pressure and circulation ratio, was analyzed for the first operation mode, where pure oxygen was fed into the dissolving tank and a hydrogen-carbon dioxide mixture was fed into the microreactor directly. The preferred operation conditions for the system are low temperature (2°C) and high pressure (30 bar) in the dissolving tank. A high circulation ratio might be beneficial in the sense that more oxygen could be dissolved and fed into the reactor, but at the same time the hydrodynamics of the microreactor should be considered. Furthermore, more operation conditions of the reactor gas/liquid feeds in both operation modes were proposed to provide guidance for future experiment design, and the corresponding hydrodynamic parameters were also calculated. Finally, safety was considered from a thermodynamic point of view: there is no explosion danger under the given experimental plan, since the released reaction heat will not cause solvent vaporization inside the microchannels. The improvement of the process model still requires further study based on future experimental results.
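The preference for a cold, pressurised dissolving tank follows directly from Henry's law. The sketch below uses a simple van 't Hoff temperature correction with a typical literature coefficient for O2 in water; it is only an illustration of the trend, not the UNIQUAC-RK property method actually used in the thesis, and all numerical values are assumptions.

```python
import math

def henry_solubility(kH_ref, T, vant_hoff=1700.0, T_ref=298.15):
    """Solubility-form Henry constant kH(T) [mol/(L*bar)] with a van 't Hoff
    temperature correction. 1700 K is a typical literature coefficient for
    O2 in water; all numbers are illustrative."""
    return kH_ref * math.exp(vant_hoff * (1.0 / T - 1.0 / T_ref))

def dissolved_concentration(p_bar, kH):
    """Henry's law: dissolved gas concentration [mol/L] at partial pressure p_bar."""
    return p_bar * kH

# Why the abstract prefers 2 degC / 30 bar over ambient conditions:
c_cold = dissolved_concentration(30.0, henry_solubility(1.3e-3, 275.15))
c_warm = dissolved_concentration(1.0, henry_solubility(1.3e-3, 298.15))
```

The colder, pressurised tank dissolves tens of times more oxygen per litre of solvent, which is exactly the lever the first operation mode exploits.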
Abstract:
The control of coating layer properties is becoming increasingly important as a result of an emerging demand for novel coated paper-based products and the increasing popularity of new coating application methods. The governing mechanisms of microstructure formation dynamics during consolidation and drying are, nevertheless, still poorly understood. Some of the difficulties encountered by experimental methods can be overcome by numerical modelling and simulation-based studies of the consolidation process. The objective of this study was to improve the fundamental understanding of pigment coating consolidation and the structure formation mechanisms taking place on the microscopic level, and furthermore to relate the impact of process and suspension properties to the microstructure of the coating layer. A mathematical model based on a modified Stokesian dynamics particle simulation technique was developed and applied in several studies of consolidation-related phenomena. The model includes particle-particle and particle-boundary hydrodynamics, colloidal interactions, Born repulsion, and a steric repulsion model. Brownian motion and a free surface model were incorporated to enable the specific investigation of consolidation and drying. Filter cake stability was simulated in various particle systems subjected to a range of base substrate absorption rates and system temperatures. The stability of the filter cake was primarily affected by the absorption rate and the size of the particles; temperature was also shown to have an influence. The consolidation of polydisperse systems, with varying wet coating thicknesses, was studied using imposed pilot trial and model-based drying conditions. The results show that drying methods have a clear influence on the microstructure development, on small particle distributions in the coating layer and also on the mobility of particles during consolidation.
It is concluded that colloidal properties can significantly impact coating layer shrinkage as well as the internal solids concentration profile. Visualisations of particle system development over time and comparisons of systems under different conditions are useful in illustrating coating layer structure formation mechanisms. The results aid in understanding the underlying mechanisms of pigment coating layer consolidation. Guidance is given regarding the relationship between coating process conditions and internal coating slurry properties and their effects on the microstructure of the coating.
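Stokesian dynamics itself resolves many-body hydrodynamic interactions, which is well beyond an abstract-sized example. As a far simpler illustration of the drag/Brownian-motion balance that such particle simulations integrate, a single-particle overdamped Euler step can be sketched as follows (all symbols and parameter values are illustrative, not the thesis model):

```python
import math
import random

def brownian_step(x, force, radius, dt, eta=1.0e-3, kT=4.11e-21, rng=random):
    """One explicit Euler step of overdamped (Brownian) dynamics for a single
    spherical particle -- a simple cousin of Stokesian dynamics, shown only
    to illustrate the drag + Brownian displacement balance.
    eta: solvent viscosity [Pa s]; kT at ~298 K [J]; SI units throughout."""
    zeta = 6.0 * math.pi * eta * radius        # Stokes drag coefficient
    D = kT / zeta                              # Stokes-Einstein diffusivity
    drift = force / zeta * dt                  # deterministic displacement
    noise = math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
    return x + drift + noise
```

With zero applied force the mean-square displacement per step recovers the Stokes-Einstein value 2*D*dt, which is the standard sanity check for this kind of integrator.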
Abstract:
The objective of the thesis was to create three tutorials for the MeVEA Simulation Software to introduce new users to the modeling methodology used in the software. The MeVEA Simulation Software is a real-time simulation software based on multibody dynamics, designed to create simulation models of complete mechatronic systems. The thesis begins with a more detailed description of the MeVEA Simulation Software and its components, and then presents the three simulation models together with the theory behind each step of model creation. The first tutorial introduces the basic features which are used in most simulation models: bodies, constraints, forces, basic hydraulics and motors. The second tutorial introduces the power transmission components, tyres and user input definitions for the different components of power transmission systems. The third tutorial introduces the definitions of two different types of collisions and the collision graphics used in the MeVEA Simulation Software.
Abstract:
Over recent years, development in mobile working machines has concentrated on reducing emissions, owing to tightening regulations and the need to improve energy utilization and reduce power losses. This study focuses on energy utilization and regeneration in an electro-hydraulic forklift, which is a lifting equipment application. The study starts with the modelling and simulation of a hydraulic forklift, and the energy regeneration from the potential energy of the load is studied. A flow-based electric motor speed control is also suggested in this thesis instead of the throttle control method or variable displacement pump control. Topics related to further development are discussed, and finally a summary and conclusions are presented.
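The regeneration potential mentioned above is bounded by the potential energy of the lowered load times the efficiency of the recovery path. A back-of-the-envelope sketch, where both efficiency figures are illustrative assumptions rather than values from the thesis:

```python
def regenerated_energy(mass, height, eta_hydraulic=0.85, eta_motor=0.90, g=9.81):
    """Estimate of electrical energy [J] recoverable when lowering a load:
    potential energy m*g*h times assumed (illustrative) efficiencies of the
    hydraulic circuit and the motor/generator path."""
    return mass * g * height * eta_hydraulic * eta_motor
```

For a 1000 kg load lowered 2 m, this gives roughly 15 kJ per cycle under the assumed efficiencies, which is the kind of figure that motivates regeneration studies for frequently cycling lift equipment.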
Abstract:
Traditionally, simulators have been used extensively in robotics to develop robotic systems without the need to build expensive hardware. However, simulators can also be used as a “memory” for a robot: the robot can try out actions in simulation before executing them for real. The key obstacle to this approach is the uncertainty of knowledge about the environment. The goal of this Master's Thesis was to develop a method for updating the simulation model based on actual measurements so that the planned task succeeds. OpenRAVE was chosen as the experimental simulation environment for the planning, trial and update stages. The steepest descent algorithm, in conjunction with a golden section search procedure, forms the principal part of the optimization process. During the experiments, the properties of the proposed method, such as its sensitivity to different parameters, including the gradient and the error function, were examined. The limitations of the approach were established by analyzing the regions of convergence.
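The optimization core named above, steepest descent with a golden section line search, can be sketched generically as follows (a textbook formulation under assumed interfaces, not the thesis code):

```python
import math

def golden_section(phi, a, b, tol=1e-6):
    """Golden section search for the minimiser of a unimodal 1-D function
    phi on the interval [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0          # ~0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2.0

def steepest_descent(f, grad, x0, max_iter=100, gtol=1e-8):
    """Steepest descent where each step length is chosen by a golden
    section line search along the negative gradient direction."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) < gtol:        # gradient ~ zero: done
            break
        step = golden_section(
            lambda t: f([xi - t * gi for xi, gi in zip(x, g)]), 0.0, 1.0)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```

In the thesis setting, `f` would be the error function between simulated and measured outcomes and `x` the simulation model parameters being updated; here those roles are only assumed for illustration.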
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem: learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how these techniques can be implemented efficiently. The contributions of this thesis are as follows.
First, we develop RankRLS, a computationally efficient kernel method for learning to rank, which is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the best established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions for cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
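The pairwise view of bipartite ranking used throughout this abstract is easy to make concrete: AUC equals the probability that a randomly drawn positive-negative pair is ordered correctly by the scoring function. A direct (quadratic-time, illustrative) computation:

```python
def pairwise_auc(scores, labels):
    """AUC computed directly as the fraction of positive-negative pairs
    ranked correctly (ties count half). O(n_pos * n_neg): fine for a
    demonstration, not for large data."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    correct = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return correct / (len(pos) * len(neg))
```

Minimizing the expected pairwise misranking probability, as RankRLS and RankSVM do with surrogate losses, is exactly maximizing this quantity; leave-pair-out cross-validation estimates it by holding out one positive-negative pair at a time.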
Abstract:
The aim of this dissertation is to investigate whether participation in business simulation gaming sessions can make different leadership styles visible and provide students with experiences beneficial for the development of leadership skills. In particular, the focus is to describe the development of leadership styles when leading virtual teams in computer-supported collaborative game settings and to identify the outcomes of using computer simulation games as leadership training tools. To meet these objectives, three empirical experiments were conducted to explore whether participation in business simulation gaming sessions (Studies I and II), which integrate face-to-face and virtual communication (Studies III and IV), can make different leadership styles visible and provide students with experiences beneficial for the development of leadership skills. In the first experiment, a group of multicultural graduate business students (N=41) participated in gaming sessions with a computerized business simulation game (Study III). In the second experiment, a group of graduate students (N=9) participated in training with a ‘real estate’ computer game (Studies I and II). In the third experiment, a business simulation gaming session was organized for a group of graduate students (N=26), and the participants played the simulation game in virtual teams, which were organizationally and geographically dispersed but connected via technology (Study IV). Each team in all experiments had three to four students, and the students were between 22 and 25 years old. The business computer games used for the empirical experiments presented an enormous number of complex operations in which a team leader needed to make the final decisions involved in leading the team to win the game. These gaming environments were interactive; participants interacted by solving the given tasks in the game. Thus, strategy and appropriate leadership were needed to be successful.
The training was competition-based and required the implementation of leadership skills. The data of these studies consist of observations, participants' reflective essays written after the gaming sessions, pre- and post-test questionnaires, and participants' answers to open-ended questions. Participants' interactions and collaboration were observed while they played the computer games. The transcripts of notes from observations and students' dialogues were coded in terms of transactional, transformational, heroic and post-heroic leadership styles. For the analysis of the transcribed notes from observations, content analysis and discourse analysis were applied. The Multifactor Leadership Questionnaire (MLQ) was also utilized in the study to measure transformational and transactional leadership styles; in addition, quantitative (one-way repeated measures ANOVA) and qualitative data analyses were performed. The results of this study indicate that in the business simulation gaming environment, certain leadership characteristics emerged spontaneously. Experiences of leadership varied between the teams and depended on the role individual students had in their team. These four studies showed that the simulation gaming environment has the potential to be used in higher education to exercise the leadership styles relevant in real-world work contexts. Further, the study indicated that, given debriefing sessions, the simulation game context has much potential to benefit learning. The participants who showed interest in leadership roles were given the opportunity to develop leadership skills in practice. The study also provides evidence of unpredictable situations that participants can experience and learn from during the gaming sessions. The study illustrates the complex nature of experiences from the gaming environments and the need for a team leader and role divisions during the gaming sessions.
It can be concluded that the simulation game training illustrated the complexity of real-life situations and presented participants with the challenges of virtual leadership and the difficulties of using leadership styles in practice. As a result, the study suggests playing computer simulation games in small teams as one way to exercise leadership styles in practice.
Abstract:
The focus of the present work was on 10- to 12-year-old elementary school students' conceptual learning outcomes in science in two specific inquiry-learning environments, laboratory and simulation. The main aim was to examine whether it would be more beneficial to combine rather than contrast simulation and laboratory activities in science teaching. It was argued that the status quo, where laboratories and simulations are seen as alternative or competing methods in science teaching, is hardly an optimal solution to promote students' learning and understanding in various science domains. It was hypothesized that it would make more sense and be more productive to combine laboratories and simulations, and several explanations and examples were provided to back up this hypothesis. In order to test whether learning with the combination of laboratory and simulation activities can result in better conceptual understanding in science than learning with laboratory or simulation activities alone, two experiments were conducted in the domain of electricity. In these experiments students constructed and studied electrical circuits in three different learning environments: laboratory (real circuits), simulation (virtual circuits), and simulation-laboratory combination (real and virtual circuits used simultaneously). In order to measure and compare how these environments affected students' conceptual understanding of circuits, a subject knowledge assessment questionnaire was administered before and after the experimentation. The results of the experiments are presented in four empirical studies: three of the studies focus on learning outcomes between the conditions and one on learning processes. Study I analyzed learning outcomes from experiment I. The aim of the study was to investigate whether it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity.
Matched trios were created based on the pre-test results of 66 elementary school students and divided randomly into laboratory (real circuits), simulation (virtual circuits) and simulation-laboratory combination (real and virtual circuits simultaneously) conditions. In each condition students had 90 minutes to construct and study various circuits. The results showed that studying electrical circuits in the simulation-laboratory combination environment improved students' conceptual understanding more than studying circuits in the simulation or laboratory environments alone. Although there were no statistical differences between the simulation and laboratory environments, the learning effect was more pronounced in the simulation condition, where the students made clear progress during the intervention, whereas in the laboratory condition students' conceptual understanding remained at an elementary level after the intervention. Study II analyzed learning outcomes from experiment II. The aim of the study was to investigate if and how learning outcomes in the simulation and simulation-laboratory combination environments are mediated by implicit (only procedural guidance) and explicit (more structure and guidance for the discovery process) instruction in the context of simple DC circuits. Matched quartets were created based on the pre-test results of 50 elementary school students and divided randomly into simulation implicit (SI), simulation explicit (SE), combination implicit (CI) and combination explicit (CE) conditions. The results showed that when the students were working with the simulation alone, they were able to gain a significantly greater amount of subject knowledge when they received metacognitive support (explicit instruction; SE) for the discovery process than when they received only procedural guidance (implicit instruction; SI). However, this additional scaffolding was not enough to reach the level of the students in the combination environment (CI and CE).
A surprising finding in Study II was that instructional support had a different effect in the combination environment than in the simulation environment. In the combination environment, explicit instruction (CE) did not seem to elicit much additional gain in students' understanding of electric circuits compared to implicit instruction (CI). Instead, explicit instruction slowed down the inquiry process substantially in the combination environment. Study III analyzed, from video data, the learning processes of the 50 students who participated in experiment II (cf. Study II above). The focus was on three specific learning processes: cognitive conflicts, self-explanations, and analogical encodings. The aim of the study was to find possible explanations for the success of the combination condition in experiments I and II. The video data provided clear evidence of the benefits of studying with the real and virtual circuits simultaneously (the combination conditions). Mostly the representations complemented each other; that is, one representation helped students to interpret and understand the outcomes they received from the other representation. However, there were also instances in which analogical encoding took place, that is, situations in which slightly discrepant results between the representations ‘forced’ students to focus on those features that could be generalised across the two representations. No statistical differences were found in the amount of experienced cognitive conflicts and self-explanations between the simulation and combination conditions, though for self-explanations there was a nascent trend in favour of the combination. There was also a clear tendency suggesting that explicit guidance increased the amount of self-explanations. Overall, the amount of cognitive conflicts and self-explanations was very low.
The aim of Study IV was twofold: the main aim was to provide an aggregated overview of the learning outcomes of experiments I and II; the secondary aim was to explore the relationship between the learning environments and students' prior domain knowledge (low and high) in the experiments. Aggregated results of experiments I and II showed that, on average, 91% of the students in the combination environment scored above the average of the laboratory environment, and 76% of them also scored above the average of the simulation environment. Seventy percent of the students in the simulation environment scored above the average of the laboratory environment. The results further showed that overall, students seemed to benefit from combining simulations and laboratories regardless of their level of prior knowledge; that is, students with either low or high prior knowledge who studied circuits in the combination environment outperformed their counterparts who studied in the laboratory or simulation environment alone. The effect seemed to be slightly bigger among the students with low prior knowledge. However, a more detailed inspection of the results showed that there were considerable differences between the experiments regarding how students with low and high prior knowledge benefitted from the combination: in experiment I, especially students with low prior knowledge benefitted from the combination compared to those students who used only the simulation, whereas in experiment II, only students with high prior knowledge seemed to benefit from the combination relative to the simulation group. Regarding the differences between the simulation and laboratory groups, the benefits of using a simulation seemed to be slightly higher among students with high prior knowledge. The results of the four empirical studies support the hypothesis concerning the benefits of using simulation along with laboratory activities to promote students' conceptual understanding of electricity.
It can be concluded that when teaching students about electricity, the students gain a better understanding when they have an opportunity to use the simulation and the real circuits in parallel than when they have only the real circuits or only a computer simulation available, even when the use of the simulation is supported with explicit instruction. The outcomes of the empirical studies can be considered the first unambiguous evidence of the (additional) benefits of combining laboratory and simulation activities in science education, as compared to learning with laboratories or simulations alone.
Abstract:
The last decade has shown that the global paper industry needs new processes and products in order to reassert its position. As the paper markets in Western Europe and North America have stabilized, competition has tightened. Along with the development of more cost-effective processes and products, new process design methods are also required to break the old molds and create new ideas. This thesis discusses the development of a process design methodology based on simulation and optimization methods. A bi-level optimization problem and a solution procedure for it are formulated and illustrated. Computational models and simulation are used to illustrate the phenomena inside a real process, and mathematical optimization is exploited to find the best process structures and control principles for the process. Dynamic process models are used inside the bi-level optimization problem, which is assumed to be dynamic and multiobjective due to the nature of papermaking processes. The numerical experiments show that the bi-level optimization approach is useful for different kinds of problems related to process design and optimization. Here, the design methodology is applied to a constrained process area of a papermaking line. However, the same methodology is applicable to all types of industrial processes, e.g., the design of biorefineries, because the methodology is fully generalized and can be easily modified.
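The bi-level problem mentioned above has, in its generic form, an upper level choosing design variables subject to the lower level's optimal response (notation here is the standard textbook form, not the thesis' specific formulation):

```latex
\begin{aligned}
\min_{x \in X} \quad & F\bigl(x,\, y^{*}(x)\bigr) \\
\text{s.t.} \quad & y^{*}(x) \in \arg\min_{y \in Y(x)} f(x, y),
\end{aligned}
```

where, in the process design context, \(x\) would represent the process structure decisions, \(y\) the operational or control trajectories evaluated by the dynamic process models, \(F\) the design objectives (vector-valued in the multiobjective case), and \(f\) the operational objective solved for each candidate design.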
Abstract:
In this doctoral thesis, methods are developed to estimate the expected power cycling life of power semiconductor modules based on chip temperature modeling. Frequency converters operate under dynamic loads in most electric drives. The varying loads cause thermal expansion and contraction, which stresses the internal boundaries between the material layers in the power module. Eventually, this stress wears out the semiconductor modules. The wear-out cannot be detected by traditional temperature or current measurements inside the frequency converter; therefore, it is important to develop a method to predict the end of the converter lifetime. The thesis concentrates on power-cycling-related failures of insulated gate bipolar transistors (IGBTs). Two types of power modules are discussed: a direct bonded copper (DBC) sandwich structure with and without a baseplate. The most common failure mechanisms are reviewed, and methods to improve the power cycling lifetime of the power modules are presented. Power cycling curves are determined for a module with a lead-free solder by accelerated power cycling tests. A lifetime model is selected and its parameters are updated based on the power cycling test results. According to the measurements, the factor of improvement in the power cycling lifetime of modern IGBT power modules has been greater than 10 during the last decade. It is also observed that a 10 °C increase in the chip temperature cycle amplitude decreases the lifetime by 40%. A thermal model for chip temperature estimation is developed. The model is based on power loss estimation of the chip from the output current of the frequency converter. The model is verified with purpose-built test equipment, which allows simultaneous measurement and simulation of the chip temperature with an arbitrary load waveform. The measurement system is shown to be convenient for studying the thermal behavior of the chip. The thermal model is found to have a 5 °C accuracy in the temperature estimation.
The temperature cycles that the power semiconductor chip has experienced are counted by the rainflow algorithm. The counted cycles are compared with the experimentally verified power cycling curves to estimate the life consumption based on the mission profile of the drive. The methods are validated by the lifetime estimation of a power module in a direct-driven wind turbine. The estimated lifetime of the IGBT power module in a direct-driven wind turbine is 15 000 years, if the turbine is located in south-eastern Finland.
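The damage-accumulation step described above, rainflow-counted cycles compared against a power cycling curve, can be sketched with Miner's rule and a power-law curve. The exponent below is chosen only so that a 40 °C to 50 °C amplitude step cuts life by roughly the 40% quoted in the abstract; all parameters are illustrative, not the thesis' fitted lifetime model:

```python
def cycles_to_failure(delta_T, Nf_ref=1.0e6, dT_ref=40.0, exponent=-2.3):
    """Power-law (Coffin-Manson-style) power cycling curve: cycles to
    failure as a function of chip temperature cycle amplitude [K].
    Illustrative parameters only."""
    return Nf_ref * (delta_T / dT_ref) ** exponent

def life_consumption(counted_cycles):
    """Miner's rule over rainflow-counted cycles, given as
    (delta_T, count) pairs; accumulated damage of 1.0 = end of life."""
    return sum(n / cycles_to_failure(dT) for dT, n in counted_cycles)
```

In the thesis' workflow, the `(delta_T, count)` pairs would come from applying the rainflow algorithm to the estimated chip temperature history of the mission profile, e.g. the wind turbine drive.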
Abstract:
Energy efficiency is one of the major objectives to be achieved in order to use the world's limited energy resources in a sustainable way. Since radiative heat transfer is the dominant heat transfer mechanism in most fossil fuel combustion systems, more accurate insight and models can improve the energy efficiency of newly designed combustion systems. The radiative properties of combustion gases are highly wavelength dependent, and better models for calculating these properties are needed in the modeling of large-scale industrial combustion systems. With detailed knowledge of the spectral radiative properties of gases, the modeling of combustion processes in different applications can be more accurate. In order to propose a new method for effective non-gray modeling of radiative heat transfer in combustion systems, different models for the spectral properties of gases, including the SNBM, EWBM, and WSGGM, have been studied in this research. Using this detailed analysis of different approaches, the thesis presents new methods for gray and non-gray radiative heat transfer modeling in homogeneous and inhomogeneous H2O–CO2 mixtures at atmospheric pressure. The proposed method is able to support the modeling of a wide range of combustion systems, including the oxy-fired combustion scenario. The new methods are based on implementing pre-obtained correlations for the total emissivity and band absorption coefficient of H2O–CO2 mixtures at different temperatures, gas compositions, and optical path lengths. They can easily be used within any commercial CFD software for radiative heat transfer modeling, resulting in more accurate, simple, and fast calculations. The new methods were successfully used in CFD modeling by applying them to an industrial-scale backpass channel under oxy-fired conditions.
The developed approaches are more accurate than other methods; moreover, they can provide a complete explanation and detailed analysis of the radiative heat transfer in different systems under different combustion conditions. The methods were verified by applying them to several benchmarks, and they showed a good level of accuracy and computational speed compared to other methods. Furthermore, the implementation of the suggested banded approach in CFD software is straightforward.
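The weighted-sum-of-gray-gases model (WSGGM) named in the abstract above expresses total emissivity as a sum over a few gray gases plus a transparent remainder. A generic sketch of that evaluation, with made-up coefficients for illustration (real WSGG coefficients come from fitted correlations in temperature, gas composition, and path length):

```python
import math

def wsgg_emissivity(a_weights, k_coeffs, p_atm, path_length):
    """Weighted-sum-of-gray-gases total emissivity:
        eps = sum_i a_i * (1 - exp(-k_i * p * L))
    a_i: temperature-dependent weights (summing to <= 1; the remainder is
    the transparent 'clear gas'); k_i: gray-gas absorption coefficients
    [1/(atm*m)]; p: H2O+CO2 partial pressure [atm]; L: path length [m].
    Coefficient values are placeholders, not a fitted correlation."""
    return sum(a * (1.0 - math.exp(-k * p_atm * path_length))
               for a, k in zip(a_weights, k_coeffs))
```

This is also why the approach drops into commercial CFD so easily: each gray gas behaves like an ordinary gray medium, so the radiative transfer equation is solved once per gray gas (or per band, in the banded variant) and the results are weighted together.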