989 results for parallel simulation
Abstract:
Electrochromatography, numerical simulation, electrokinetics, electroosmosis, parallel computing
Abstract:
This paper shows how a high-level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations. Parallelization is implemented so that an investigator may use the programs without any knowledge of parallel programming. A bootable CD that allows rapid creation of a cluster for parallel computing is introduced. Examples show that parallelization can lead to substantial reductions in computational time. A detailed discussion of how the Monte Carlo problem was parallelized is included as an example of how to write parallel programs for Octave.
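The paper's implementation targets GNU Octave; purely as an illustration of the same embarrassingly parallel pattern, here is a minimal sketch in Python, assuming only the standard multiprocessing module (the simulated model and seeding scheme are invented for the example, not taken from the paper):

    import multiprocessing as mp
    import numpy as np

    def one_replication(seed):
        """One Monte Carlo replication: OLS slope on freshly simulated data."""
        rng = np.random.default_rng(seed)
        x = rng.normal(size=100)
        y = 2.0 * x + rng.normal(size=100)   # true slope is 2
        return np.dot(x, y) / np.dot(x, x)

    if __name__ == "__main__":
        seeds = range(10_000)                # one independent seed per replication
        with mp.Pool() as pool:              # spreads replications over all local cores
            slopes = pool.map(one_replication, seeds)
        print(np.mean(slopes), np.std(slopes))

Because the replications are independent, the investigator only supplies one_replication; the pool handles the distribution, which is the same division of labour the paper automates for Octave users.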
Abstract:
In this master's thesis, a mechanical model of a drivetrain driven by a variable-speed synchronous machine was developed. The model simulates the mechanics of power transmission and its torsional vibrations, and was developed for the needs of the branched mechanics of a rolling mill and the propulsion system of a tanker. First, the scope of the thesis was to clarify the concepts connected to the mechanical model: the variable-speed drive, the mechanics of power transmission, and the vibrations in the power transmission. Next, the original mechanical model, a straight shaft line with twelve moments of inertia, was extended to branched configurations covering the cases of parallel machines and parallel rolls. Additionally, the model was expanded to up to thirty moments of inertia for more accurate simulation, and enhanced to handle a three-phase short circuit of the simulated machine. The mechanical model was then validated by comparing the results of the developed simulation tool to those of other simulation tools. The compared results are the natural frequencies and mode shapes of torsional vibration, the response to a load torque step, and the stress in the mechanical system caused by the disturbance of the magnetic field arising from a three-phase short circuit. The comparisons agreed well, and the mechanical model was validated for the compared cases. Further development would make the load torque time-dependent and allow two frequency converters and two FEM-modeled machines to be simulated in parallel.
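As a rough illustration of such a lumped-inertia torsional model (a sketch only: the thesis model has up to thirty inertias, branching, and a machine model, none of which are reproduced here), a two-inertia shaft with invented parameters can be simulated in Python as follows:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative two-inertia drivetrain: motor inertia J1, load inertia J2,
    # joined by a torsional spring k and damper c
    J1, J2 = 2.0, 5.0                 # moments of inertia (kg m^2)
    k, c = 1.0e4, 5.0                 # stiffness (N m/rad), damping (N m s/rad)
    T_motor, T_load = 100.0, -100.0   # step torques (N m)

    def rhs(t, y):
        th1, w1, th2, w2 = y
        T_shaft = k * (th1 - th2) + c * (w1 - w2)   # torque carried by the shaft
        return [w1, (T_motor - T_shaft) / J1,
                w2, (T_load + T_shaft) / J2]

    sol = solve_ivp(rhs, (0.0, 1.0), [0, 0, 0, 0], max_step=1e-4)
    f_n = np.sqrt(k * (J1 + J2) / (J1 * J2)) / (2 * np.pi)
    print(f"free-free torsional natural frequency ~ {f_n:.1f} Hz")

The load torque step excites the shaft's torsional mode, which is the kind of response quantity the thesis compares against other simulation tools.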
Abstract:
Numerical weather prediction and climate simulation have been among the computationally most demanding applications of high performance computing ever since they were started in the 1950s. Since the 1980s, the most powerful computers have featured an ever larger number of processors; by the early 2000s, this number was often several thousand. An operational weather model must use all these processors in a highly coordinated fashion. The critical resource in running such models is not computation, but the amount of necessary communication between the processors, and the communication capacity of parallel computers often falls far short of their computational power. The articles in this thesis cover fourteen years of research into how to harness thousands of processors for a single weather forecast or climate simulation, so that the application can benefit as much as possible from the power of parallel high performance computers. The results attained in these articles have already been widely applied: currently, most of the organizations that carry out global weather forecasting or climate simulation anywhere in the world use methods introduced in them. Some further studies extend parallelization opportunities into other parts of the weather forecasting environment, in particular to the data assimilation of satellite observations.
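The methods themselves are in the cited articles; as a heavily simplified sketch of the domain-decomposition pattern they build on, where each processor owns a band of the grid and the exchange of halo rows is the communication that limits scaling, assuming Python with mpi4py and invented sizes:

    # run with e.g.: mpiexec -n 4 python halo_sketch.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank owns 10 grid rows plus one halo row on each side.
    local = np.full((10 + 2, 64), float(rank))
    up   = rank - 1 if rank > 0 else MPI.PROC_NULL
    down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Exchange boundary rows with neighbours before each compute step;
    # as 'size' grows, this communication, not the arithmetic, dominates.
    comm.Sendrecv(local[1],  dest=up,   recvbuf=local[-1], source=down)
    comm.Sendrecv(local[-2], dest=down, recvbuf=local[0],  source=up)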
Abstract:
This master's thesis deals with the development of the design and testing environment for the user interface software of Nokia Mobile Phones' handsets. Two software modules were added to the environment to assist simulation and version control. With the visualization tool, the operation of a mobile phone can be traced in the design diagrams as state transitions, while with the comparison application the differences between diagrams can be seen graphically. The developed applications improve the user interface design process by making error hunting, optimization and version control more efficient. The benefits of the visualization tool are significant, because the behaviour of the user interface applications can be observed in the design diagrams during real-time simulation, so errors can be located immediately. In addition, the tool can be used when optimizing the diagrams, which reduces the size and memory requirements of the applications. The graphical comparison tool brings an advantage to concurrent software development: the differences between different versions of the design diagrams can be seen directly in the diagram instead of through manual comparison. Both tools were successfully taken into use at NMP at the beginning of 2001.
Abstract:
The structure of the electric double layer in contact with discrete and continuously charged planar surfaces is studied within the framework of the primitive model through Monte Carlo simulations. Three different discretization models are considered together with the case of uniform distribution. The effect of discreteness is analyzed in terms of charge density profiles. For point surface groups, a complete equivalence with the situation of uniformly distributed charge is found if profiles are analyzed exclusively as a function of the distance to the charged surface. However, some differences are observed moving parallel to the surface. Significant discrepancies with approaches that do not account for discreteness are reported if charge sites of finite size placed on the surface are considered.
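A full primitive-model simulation includes pairwise Coulomb interactions, hard-sphere sizes and the discrete surface sites studied here; the deliberately stripped-down Metropolis sketch below (point counterions in the mean field of a uniformly charged wall only, reduced units, all values invented) shows just the sampling machinery used to accumulate such charge density profiles:

    import numpy as np

    rng = np.random.default_rng(0)

    n_ions, L = 64, 10.0        # counterions confined to the slab 0 < z < L
    E_wall, beta = 2.0, 1.0     # reduced wall field and inverse temperature

    z = rng.uniform(0, L, n_ions)

    def energy(zz):
        return E_wall * zz      # wall attraction only; ion-ion terms omitted

    hist = np.zeros(50)
    for step in range(200_000):
        i = rng.integers(n_ions)
        z_new = z[i] + rng.normal(0, 0.5)
        if 0.0 < z_new < L:                      # hard walls on both sides
            dU = energy(z_new) - energy(z[i])
            if dU <= 0 or rng.random() < np.exp(-beta * dU):
                z[i] = z_new                     # Metropolis acceptance
        if step % 10 == 0:
            hist += np.histogram(z, bins=50, range=(0, L))[0]

    print(hist / hist.sum())                     # density profile rho(z)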
Abstract:
This thesis presents briefly the basic operation and use of centrifugal pumps and parallel pumping applications. The characteristics of parallel pumping applications are compared to electric circuits in order to find an analogy between these technical fields. The purpose of studying circuit theory is to find out whether common software tools for solving circuit performance could be used to study parallel pumping applications. The empirical part of the thesis introduces a simulation environment for parallel pumping systems, based on the circuit components of Matlab Simulink software. The created simulation environment allows variable-speed-controlled parallel pumping systems to be studied under different control methods. The simulation environment was evaluated by building a simulation model of an actual parallel pumping system at Lappeenranta University of Technology and comparing the simulated performance of the parallel pumps to measured values from the actual system. The gathered information shows that, if the initial data on the system and pump performance are adequate, the circuit-based simulation environment can be used to study parallel pumping systems, and it can represent the actual operation of parallel pumps with reasonable accuracy. Thereby the circuit-based simulation can be used as a research tool to develop new control methods for parallel pumps.
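As a worked illustration of the parallel pumping behaviour in question (not the thesis's Simulink model; the pump and system coefficients are invented), the operating point of one pump versus two identical pumps in parallel can be found by intersecting quadratic pump and system curves:

    import numpy as np
    from scipy.optimize import brentq

    # Invented quadratic pump curve: H = H0 - a*Q^2 (head in m, flow in l/s)
    H0, a = 30.0, 0.02
    pump_head = lambda Q: H0 - a * Q**2

    # System curve: static head plus friction losses
    Hs, ksys = 10.0, 0.008
    system_head = lambda Q: Hs + ksys * Q**2

    # One pump: intersection of pump and system curves
    Q1 = brentq(lambda Q: pump_head(Q) - system_head(Q), 0, 40)

    # Two identical pumps in parallel: flows add at equal head,
    # so each pump delivers Q/2 at the common head
    Q2 = brentq(lambda Q: pump_head(Q / 2) - system_head(Q), 0, 80)

    print(f"one pump: {Q1:.1f} l/s, two in parallel: {Q2:.1f} l/s")

Two pumps deliver well under twice the single-pump flow, which is exactly the kind of interaction between pump and system curves that such a simulation environment is built to capture.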
Abstract:
There is an increasing reliance on computers to solve complex engineering problems, because computers, in addition to supporting the development and implementation of adequate and clear models, can greatly reduce the financial outlay required. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics makes it difficult or impossible to solve the flow equations around an object exactly. Approximate solutions can be obtained by constructing and measuring prototypes placed in a flow, or by numerical simulation. Since prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process, where the setup and parameters can be altered far more easily than in a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids), as well as fluid flow through porous media. The models have merit as a scientific tool and also have practical application in industry. Most of the numerical simulations were done with the commercial software Fluent, with user-defined functions added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions can elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole to a venule via a capillary showed that the VOF-based model can successfully predict the deformation and flow of RBCs in an arteriole; moreover, the results correspond to experimental observations showing that the RBC deforms during its movement. The concluding remarks provide a sound methodology and a mathematical and numerical framework for the simulation of blood flow in branching geometries. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even stronger than the conventional one and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared to cases where the magnetic field is parallel to the temperature gradient. In addition, a statistical evaluation (the Taguchi technique) of the magnetic fluids showed that the temperature and the initial concentration of the magnetic phase make the maximum and minimum contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers based on pore permeability and interstitial fluid velocity. The results agreed well with the correlation of Macdonald et al. (1979) over the range of flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
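For the porous-media part, the reference correlation is the Macdonald et al. (1979) modification of the Ergun equation; the sketch below assumes its commonly quoted smooth-particle constants A = 180 and B = 1.8 (roughness raises B), which should be checked against the original paper:

    import numpy as np

    def macdonald_pressure_gradient(u, d_p, eps, rho, mu, A=180.0, B=1.8):
        """Packed-bed pressure gradient (Pa/m) from the Macdonald et al.
        (1979) modified Ergun correlation; u is superficial velocity,
        d_p particle diameter, eps porosity."""
        Re = rho * u * d_p / (mu * (1.0 - eps))    # modified Reynolds number
        f = A / Re + B                             # dimensionless friction factor
        return f * rho * u**2 * (1.0 - eps) / (d_p * eps**3)

    print(macdonald_pressure_gradient(u=0.01, d_p=1e-3, eps=0.4,
                                      rho=1000.0, mu=1e-3))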
Abstract:
Over the last decades, calibration techniques have been widely used to improve the accuracy of robots and machine tools, since they involve only software modification instead of changes to the design and manufacture of the hardware. Traditionally, four steps are required for a calibration: error modeling, measurement, parameter identification and compensation. The objective of this thesis is to propose a method for the kinematics analysis and error modeling of a newly developed hybrid redundant robot, the IWR (Intersector Welding Robot), which possesses ten degrees of freedom (DOF): 6 DOF in parallel and an additional 4 DOF in serial. The problems of kinematics modeling and error modeling of the proposed IWR robot are discussed. Based on the vector arithmetic method, the kinematics model and the sensitivity model of the end-effector with respect to the structural parameters are derived and analyzed. The relations between the pose (position and orientation) accuracy and the manufacturing tolerances, actuation errors, and connection errors are formulated. Computer simulation is performed to examine the validity and effectiveness of the proposed method.
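The IWR's ten-DOF hybrid kinematics are not reproduced here; as a minimal sketch of the sensitivity-model idea (how pose accuracy relates to structural-parameter tolerances), a planar 2R arm stands in, with the sensitivity Jacobian taken by central differences:

    import numpy as np

    def fk(params, q):
        """Forward kinematics of a planar 2R arm (stand-in model):
        params = link lengths (L1, L2), q = joint angles (rad)."""
        L1, L2 = params
        return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                         L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

    def sensitivity(params, q, h=1e-6):
        """d(pose)/d(structural parameters) by central differences."""
        params = np.asarray(params, dtype=float)
        J = np.zeros((2, len(params)))
        for j in range(len(params)):
            dp = np.zeros_like(params); dp[j] = h
            J[:, j] = (fk(params + dp, q) - fk(params - dp, q)) / (2 * h)
        return J

    J = sensitivity([0.5, 0.4], [0.3, 0.7])
    tolerances = np.array([1e-4, 1e-4])      # link-length tolerances (m)
    print("worst-case position error:", np.abs(J) @ tolerances)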
Abstract:
The focus of the present work was on 10- to 12-year-old elementary school students' conceptual learning outcomes in science in two specific inquiry-learning environments, laboratory and simulation. The main aim was to examine whether it would be more beneficial to combine rather than contrast simulation and laboratory activities in science teaching. It was argued that the status quo, where laboratories and simulations are seen as alternative or competing methods in science teaching, is hardly an optimal solution to promote students' learning and understanding in various science domains. It was hypothesized that it would make more sense and be more productive to combine laboratories and simulations, and several explanations and examples were provided to back up the hypothesis. In order to test whether learning with the combination of laboratory and simulation activities can result in better conceptual understanding in science than learning with laboratory or simulation activities alone, two experiments were conducted in the domain of electricity. In these experiments students constructed and studied electrical circuits in three different learning environments: laboratory (real circuits), simulation (virtual circuits), and simulation-laboratory combination (real and virtual circuits used simultaneously). In order to measure and compare how these environments affected students' conceptual understanding of circuits, a subject knowledge assessment questionnaire was administered before and after the experimentation. The results of the experiments were presented in four empirical studies: three focused on learning outcomes between the conditions and one on learning processes. Study I analyzed learning outcomes from Experiment I. The aim of the study was to investigate whether it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity. Matched trios were created based on the pre-test results of 66 elementary school students and divided randomly into laboratory (real circuits), simulation (virtual circuits) and simulation-laboratory combination (real and virtual circuits simultaneously) conditions. In each condition students had 90 minutes to construct and study various circuits. The results showed that studying electrical circuits in the simulation-laboratory combination environment improved students' conceptual understanding more than studying circuits in the simulation or laboratory environment alone. Although there were no statistical differences between the simulation and laboratory environments, the learning effect was more pronounced in the simulation condition, where the students made clear progress during the intervention, whereas in the laboratory condition students' conceptual understanding remained at an elementary level after the intervention. Study II analyzed learning outcomes from Experiment II. The aim of the study was to investigate if and how learning outcomes in the simulation and simulation-laboratory combination environments are mediated by implicit (only procedural guidance) and explicit (more structure and guidance for the discovery process) instruction in the context of simple DC circuits. Matched quartets were created based on the pre-test results of 50 elementary school students and divided randomly into simulation implicit (SI), simulation explicit (SE), combination implicit (CI) and combination explicit (CE) conditions.
The results showed that when the students were working with the simulation alone, they gained a significantly greater amount of subject knowledge when they received metacognitive support (explicit instruction; SE) for the discovery process than when they received only procedural guidance (implicit instruction; SI). However, this additional scaffolding was not enough to reach the level of the students in the combination environment (CI and CE). A surprising finding in Study II was that instructional support had a different effect in the combination environment than in the simulation environment: in the combination environment, explicit instruction (CE) did not seem to elicit much additional gain in students' understanding of electric circuits compared to implicit instruction (CI); instead, explicit instruction slowed down the inquiry process substantially in the combination environment. Study III analyzed, from video data, the learning processes of the 50 students who participated in Experiment II (cf. Study II above). The focus was on three specific learning processes: cognitive conflicts, self-explanations, and analogical encodings. The aim of the study was to find possible explanations for the success of the combination condition in Experiments I and II. The video data provided clear evidence of the benefits of studying with the real and virtual circuits simultaneously (the combination conditions). Mostly the representations complemented each other: one representation helped students to interpret and understand the outcomes they received from the other. However, there were also instances in which analogical encoding took place, that is, situations in which slightly discrepant results between the representations 'forced' students to focus on those features that could be generalised across the two representations. No statistical differences were found in the amount of experienced cognitive conflicts and self-explanations between the simulation and combination conditions, though for self-explanations there was a nascent trend in favour of the combination. There was also a clear tendency suggesting that explicit guidance increased the amount of self-explanations. Overall, the amount of cognitive conflicts and self-explanations was very low. The aim of Study IV was twofold: the main aim was to provide an aggregated overview of the learning outcomes of Experiments I and II; the secondary aim was to explore the relationship between the learning environments and students' prior domain knowledge (low and high) in the experiments. Aggregated results of Experiments I and II showed that, on average, 91% of the students in the combination environment scored above the average of the laboratory environment, and 76% of them also scored above the average of the simulation environment; seventy percent of the students in the simulation environment scored above the average of the laboratory environment. The results further showed that overall, students seemed to benefit from combining simulations and laboratories regardless of their level of prior knowledge: students with either low or high prior knowledge who studied circuits in the combination environment outperformed their counterparts who studied in the laboratory or simulation environment alone. The effect seemed to be slightly bigger among the students with low prior knowledge.
However, a more detailed inspection of the results showed that there were considerable differences between the experiments in how students with low and high prior knowledge benefited from the combination: in Experiment I, especially students with low prior knowledge benefited from the combination compared to those students who used only the simulation, whereas in Experiment II, only students with high prior knowledge seemed to benefit from the combination relative to the simulation group. Regarding the differences between the simulation and laboratory groups, the benefits of using a simulation seemed to be slightly higher among students with high prior knowledge. The results of the four empirical studies support the hypothesis concerning the benefits of using simulation along with laboratory activities to promote students' conceptual understanding of electricity. It can be concluded that when teaching students about electricity, they gain a better understanding when they have an opportunity to use the simulation and the real circuits in parallel than if they have only the real circuits or only a computer simulation available, even when the use of the simulation is supported with explicit instruction. The outcomes of the empirical studies can be considered the first unambiguous evidence of the (additional) benefits of combining laboratory and simulation activities in science education, as compared to learning with laboratories or simulations alone.
Abstract:
This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize on-chip resources. As the dark silicon era approaches, where power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most work on resource management treats only the physical components (i.e. computation, communication, and memory blocks) as resources and manipulates the component-to-application mapping to optimize various parameters (e.g. energy efficiency). To further enhance the optimization potential, we propose to manipulate abstract resources (i.e. the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture) in addition to the physical ones. The proposed framework (i.e. VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) the Private Operating Environment (POE), (ii) the Private Reliability Environment (PRE), and (iii) the Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work several novel architectural enhancements, algorithms, and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse Grained Reconfigurable Architectures (CGRAs) and Networks on Chip (NoCs) to test the feasibility of our approach; specifically, we have chosen the Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.
Abstract:
Global energy consumption has been increasing yearly, and a large portion of it is used in rotating electrical machines, where energy should clearly be used efficiently. This dissertation aims to improve the design process of high-speed electrical machines, especially from the mechanical engineering perspective, in order to achieve more reliable and efficient machines. The design process of high-speed machines is challenging due to high demands and the many interactions between engineering disciplines such as mechanical, electrical and energy engineering. A multidisciplinary design flow chart for a specific type of high-speed machine, in which computer simulation is utilized, is proposed. In addition to utilizing simulation in parallel with the design process, two simulation studies are presented: the first finds the limits of two ball bearing models; the second studies how the machine load capacity in a compressor application can be raised beyond the limits of current machinery. The proposed flow chart and simulation studies show clearly that improvements in the high-speed machinery design process can be achieved. Engineers designing high-speed machines can use the flow chart and simulation results as a guideline during the design phase to achieve more reliable and efficient machines that use energy efficiently across the required operating conditions.
Abstract:
In this thesis, we present a new smoothed particle hydrodynamics (SPH) method for solving the incompressible Navier-Stokes equations, even in the presence of singular forces. Singular source terms are treated in a manner similar to that found in the Immersed Boundary (IB) method of Peskin (2002) or the method of regularized Stokeslets (Cortez, 2001). In our numerical scheme, we implement a second-order pressure-free projection method inspired by Kim and Moin (1985). This scheme completely avoids the difficulties that can be encountered when prescribing Neumann boundary conditions on the pressure. We present two variants of this approach: one Lagrangian, which is commonly used, and the other Eulerian, in which we simply regard the SPH particles as quadrature points where the fluid properties are computed, so that these points can be kept fixed in time. Our SPH method is first tested on two-dimensional Poiseuille flow between two infinite plates, and we carry out a detailed error analysis of the computations. For this problem, the results are similar whether the SPH particles are free to move or kept fixed. We also treat the dynamics of a membrane immersed in a viscous, incompressible fluid with our SPH method. The membrane is represented by a cubic spline along which the membrane tension is computed and transmitted to the surrounding fluid. The Navier-Stokes equations, with a singular force arising from the membrane, are then solved to determine the velocity of the fluid in which the membrane is immersed, and the fluid velocity thus obtained is interpolated onto the interface in order to determine its displacement. We discuss the advantages of keeping the SPH particles fixed instead of letting them move. We then apply our SPH method to the simulation of confined flows of non-dilute polymer solutions with hydrodynamic interaction and excluded-volume forces. The starting point of the algorithm is the system of coupled Langevin equations for polymer and solvent (CLEPS) (see, for example, Oono and Freed (1981) and Öttinger and Rabin (1989)), describing, in the present case, the microscopic dynamics of a flowing polymer solution with a bead-spring representation of the macromolecules. Numerical tests on some two-dimensional channel flows reveal that using the second-order projection method coupled with fixed SPH quadrature points leads to second-order convergence of the velocity and close to second-order convergence of the pressure, provided the solution is sufficiently smooth. For large-scale computations with dumbbells and bead-spring chains, an appropriate choice of the number of SPH particles as a function of the number of beads N shows, in the absence of excluded-volume forces, that the cost of our algorithm is O(N). Finally, we initiate three-dimensional computations with our SPH model.
With this in mind, we solve the problem of three-dimensional Poiseuille flow between two infinite parallel plates and of Poiseuille flow in an infinitely long rectangular duct. In addition, we simulate, in three dimensions, flows of non-dilute polymer solutions confined between two infinite plates, with hydrodynamic interaction and excluded-volume forces.
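The projection scheme and membrane coupling are not reproduced here; as a minimal sketch of the Eulerian ingredient described above, fixed SPH quadrature points on a lattice with the standard two-dimensional cubic spline kernel (all sizes invented):

    import numpy as np

    def W_cubic(r, h):
        """Standard 2D cubic spline SPH kernel, normalization 10/(7*pi*h^2)."""
        q = r / h
        sigma = 10.0 / (7.0 * np.pi * h**2)
        return sigma * np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
                        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

    # Fixed (Eulerian) quadrature points: the particles never move,
    # only the field values carried at them are updated.
    h, dx = 0.12, 0.1
    X, Y = np.meshgrid(np.arange(0, 1, dx), np.arange(0, 1, dx))
    pts = np.column_stack([X.ravel(), Y.ravel()])
    m = 1.0 * dx * dx                    # mass per quadrature point (rho0 = 1)

    # SPH summation: rho_i = sum_j m_j W(|x_i - x_j|, h)
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    rho = (m * W_cubic(r, h)).sum(axis=1)
    print(rho.reshape(X.shape)[5, 5])    # ~1.0 at an interior point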
Abstract:
In radiotherapy, computed tomography (CT) provides the anatomical information about the patient needed for dose calculation during treatment planning. In order to account for the heterogeneous composition of tissues, calculation techniques such as the Monte Carlo method are required to compute the dose accurately. Importing CT images into such a calculation requires that each voxel, expressed in Hounsfield units (HU), be converted into a physical quantity such as the electron density (ED). This conversion is usually carried out with an HU-ED calibration curve. An anomaly or artifact that appears in a CT image before calibration is liable to assign the wrong tissue to a voxel, and such errors can cause a critical loss of reliability in the dose calculation. This work aims to assign accurate values to the voxels of CT images in order to ensure the reliability of dose calculations during radiotherapy treatment planning. To this end, a study is carried out on artifacts reproduced by Monte Carlo simulation. To reduce the computation time, the simulations are parallelized and ported to a supercomputer. A sensitivity study of the HU numbers in the presence of artifacts is then performed through a statistical analysis of the histograms. Beam hardening, which is at the origin of many artifacts, is studied further: a review of the state of the art in beam-hardening correction is presented, followed by an explicit demonstration of an empirical correction.
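As a minimal sketch of the HU-to-ED conversion step (the calibration points below are invented; real curves are measured per scanner with a tissue-characterization phantom), the piecewise-linear calibration can be applied voxel-wise:

    import numpy as np

    # Invented HU -> relative electron density calibration points
    hu_pts = np.array([-1000.0, 0.0, 1000.0, 3000.0])   # air, water, bone, ...
    ed_pts = np.array([0.001, 1.0, 1.6, 2.8])

    def hu_to_ed(hu_image):
        """Piecewise-linear HU-ED calibration applied voxel-wise,
        clipping values outside the calibrated range."""
        hu = np.clip(hu_image, hu_pts[0], hu_pts[-1])
        return np.interp(hu, hu_pts, ed_pts)

    ct_slice = np.array([[-1000.0, -30.0, 40.0], [60.0, 300.0, 1500.0]])
    print(hu_to_ed(ct_slice))

An artifact that shifts HU values (e.g. beam-hardening streaks) moves voxels along this curve and hence assigns them the wrong density, which is exactly the failure mode the sensitivity study quantifies.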
Abstract:
Motion instability is an important issue during the operation of towed underwater vehicles (TUVs), and it considerably affects the accuracy of the high-precision acoustic instrumentation housed inside them. Of the various parameters responsible for this, disturbances from the tow-ship are the most significant. The present study focuses on the motion dynamics of an underwater towing system with ship-induced disturbances as the input, and in particular on an innovative system called two-part towing. The methodology involves numerical modeling of the tow system, which consists of modeling the tow-cables and formulating the vehicles. A previous study in this direction used a segmental approach for modeling the cable; although that model was successful in predicting the heave response of the tow-body, instabilities were observed in the numerical solution. The present study devises a simple approach called the lumped mass spring model (LMSM) for the cable formulation. In this work, the traditional LMSM has been modified in two ways: first, by implementing advanced time integration procedures, and second, by using a modified beam model that uses only translational degrees of freedom for solving the beam equation. A number of time integration procedures, such as Euler, Houbolt, Newmark and HHT-α, were implemented in the traditional LMSM, and the strengths and weaknesses of each scheme were numerically estimated. In most previous studies, the hydrodynamic forces acting on the tow system, such as drag and lift, are approximated as analytical expressions of the velocities. This approach restricts those models to simple cylindrically shaped towed bodies and may not be applicable to modern tow systems, which are diverse in shape and complexity. Hence, in this study, hydrodynamic parameters such as the drag and lift of the tow system are estimated using CFD techniques. To achieve this, a RANS-based CFD code has been developed. Further, a new convection interpolation scheme for CFD simulation, called BNCUS, which is a blend of cell-based and node-based formulations, is proposed and numerically tested. Because solving the fluid dynamic equations takes considerable time, a dedicated parallel computing setup has been developed. Two types of computational parallelism are explored in the current study: a model for shared-memory processors and one for distributed-memory processors. The shared-memory model was used for the structural dynamic analysis of the towing system, while the distributed-memory one was devised for solving the fluid dynamic equations.
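Of the time integration schemes compared, Newmark is representative; below is a minimal sketch of one Newmark-beta step for a linear lumped mass-spring system (average-acceleration parameters beta = 1/4, gamma = 1/2; the matrices are invented, not the thesis's cable model):

    import numpy as np

    def newmark_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
        """One step of the Newmark-beta scheme; unconditionally stable
        for linear systems with the average-acceleration parameters."""
        Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
        feff = (f_next
                + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
                + C @ (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                       + dt * (gamma / (2 * beta) - 1.0) * a))
        u_next = np.linalg.solve(Keff, feff)
        a_next = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)
        return u_next, v_next, a_next

    # Two-DOF demo with invented values
    M = np.diag([1.0, 1.0]); C = 0.02 * np.eye(2)
    K = np.array([[200.0, -100.0], [-100.0, 100.0]])
    u = np.zeros(2); v = np.zeros(2); f = np.array([0.0, 10.0])
    a = np.linalg.solve(M, f - C @ v - K @ u)   # consistent initial acceleration
    for _ in range(1000):
        u, v, a = newmark_step(M, C, K, f, u, v, a, dt=1e-3)
    print(u)   # oscillates about the static deflection K^-1 f = [0.1, 0.2]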