950 results for Simulation methods.
Abstract:
BACKGROUND: Simulation techniques are spreading rapidly in medicine. Such resources are increasingly concentrated in Simulation Laboratories. The MSRP-USP is structuring such a laboratory and is interested in the prevalence of individual initiatives that could be centralized there. The MSRP-USP currently has five full-curriculum courses in the health sciences: Medicine, Speech Therapy, Physical Therapy, Nutrition, and Occupational Therapy, all consisting of core disciplines. GOAL: To determine the prevalence of simulation techniques in the regular courses at MSRP-USP. METHODS: Coordinators of disciplines in the various courses were interviewed using a specifically designed semi-structured questionnaire, and all the collected data were stored in a dedicated database. The disciplines were grouped according to whether they used (GI) or did not use (GII) simulation resources. RESULTS AND DISCUSSION: 256 disciplines were analyzed, of which only 18.3% used simulation techniques, varying by course: Medicine (24.7%), Occupational Therapy (23.0%), Nutrition (15.9%), Physical Therapy (9.8%), and Speech Therapy (9.1%). Computer simulation programs predominated (42.5%) in all five courses. The resources were provided mainly by MSRP-USP (56.3%), with additional funding coming from other sources through individual initiatives. The same pattern was observed for maintenance. There was great interest in centralizing the resources in the new Simulation Laboratory in order to facilitate maintenance, but there was concern about training and access to the material. CONCLUSIONS: 1) The MSRP-USP simulation resources are of low complexity and mainly limited to computer programs; 2) use of simulation varies by course and is most prevalent in Medicine; 3) resources are scattered across several locations, and their acquisition and maintenance depend on individual initiatives rather than central coordination or curricular guidelines.
Abstract:
The objective of this research project is to develop a novel force control scheme for the teleoperation of a hydraulically driven manipulator, and to implement an ideal, transparent mapping between human-machine interaction and machine-task-environment interaction. This master's thesis provides a preparatory study for the research project. The research is limited to a single-degree-of-freedom hydraulic slider combined with a 6-DOF Phantom haptic device. The key contribution of the thesis is the setup of the experimental rig, including the electromechanical haptic device, the hydraulic servo, and a 6-DOF force sensor. The slider is first tested as a position servo using a previously developed intelligent switching control algorithm. Subsequently, the teleoperated system is set up and preliminary experiments are carried out. In addition to the development of the single-DOF experimental setup, methods such as passivity control in teleoperation are reviewed. The thesis also contains a review of the modeling of the servo slider, with particular reference to the servo valve. The Markov chain Monte Carlo method is used to improve the robustness of the model in the presence of noise.
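The abstract does not detail how the Markov chain Monte Carlo method is applied to the model; as a minimal, hypothetical sketch of the general technique (random-walk Metropolis sampling of a model parameter from noisy measurements, here with an illustrative first-order step-response model rather than the actual servo-valve model), one might write:

```python
import numpy as np

# Random-walk Metropolis sketch: estimate the time constant tau of a
# hypothetical first-order response y(t) = 1 - exp(-t/tau) from noisy
# samples. Model, prior, and all numbers are illustrative only.
rng = np.random.default_rng(0)
t = np.linspace(0.1, 5.0, 50)
tau_true, noise_sd = 1.5, 0.05
y_obs = 1.0 - np.exp(-t / tau_true) + rng.normal(0.0, noise_sd, t.size)

def log_posterior(tau):
    if tau <= 0.0:                      # flat prior on tau > 0
        return -np.inf
    resid = y_obs - (1.0 - np.exp(-t / tau))
    return -0.5 * np.sum(resid ** 2) / noise_sd ** 2

tau, samples = 1.0, []
for _ in range(20000):
    prop = tau + rng.normal(0.0, 0.1)   # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(tau):
        tau = prop                      # accept the proposal
    samples.append(tau)

burned = np.array(samples[5000:])       # discard burn-in
print(f"tau estimate: {burned.mean():.3f} +/- {burned.std():.3f}")
```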
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages these methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem: learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction, and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts. Part I provides the background for the research work and summarizes the most central results; Part II consists of the five original research articles that are the main contribution of this thesis.
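The regularized pairwise least-squares loss behind RankRLS has a closed-form minimizer in the linear, all-pairs case, since the sum of squared pairwise errors equals a quadratic form with the matrix L = nI - 11^T. The sketch below illustrates that special case on synthetic data; it is a simplified illustration under these assumptions, not the thesis' implementation, which also covers kernels and the matrix-algebra shortcuts mentioned above:

```python
import numpy as np

# Linear RankRLS sketch (single query, all pairs). Minimizing
#   sum_{i,j} ((y_i - y_j) - (w.x_i - w.x_j))^2 + lam * ||w||^2
# is equivalent to solving (X^T L X + lam * I) w = X^T L y with
# L = n*I - 1*1^T. Data and regularization are illustrative.
rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1.0
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)    # noisy utility scores

L = n * np.eye(n) - np.ones((n, n))          # pairwise-difference matrix
w = np.linalg.solve(X.T @ L @ X + lam * np.eye(d), X.T @ L @ y)

# Evaluate: fraction of correctly ordered test pairs
X_te = rng.normal(size=(100, d))
y_te, scores = X_te @ w_true, X_te @ w
i, j = np.triu_indices(100, k=1)
acc = np.mean(np.sign(scores[i] - scores[j]) == np.sign(y_te[i] - y_te[j]))
print(f"pairwise ranking accuracy: {acc:.3f}")
```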
Abstract:
This tactics research focuses on computational methods for computer-assisted simulation that can be used in tactical-level war games. The main outputs of the work are computational models for tactical-level combat simulators that enable probability-based analysis and can be used for comparative analysis in scenarios from platoon to brigade level. The computational models focus on weapons effects. The models concern the probability of a damaging hit, on the basis of which the effect on a unit is modeled with state machines and Markov chains. The results of these are further carried into an event-tree analysis of the probability of success of the operation. The smallest computational unit is modeled at platoon or squad level, so that the computation time in brigade-level war-game studies remains sufficiently short while the results remain sufficiently accurate for Finnish terrain. Platoon personnel and weapon-system strengths are represented as distributions rather than single numbers. The numerical integration of the simulation can use weapon-system-specific predictor-corrector parameters, which makes it possible to model battlefield phenomena of shorter duration than the time step. The weapon models are based on earlier studies and field experiments, some of which belong to this doctoral research. The programmability and usability of the computational models as part of a simulation tool have been demonstrated with the "Sandis" combat simulation software, programmed by a research group led by the author and developed and used at the Finnish Defence Forces Technical Research Centre. Sandis includes a map-based user interface and computational models that simulate the course of battle. A user or group of users makes the tactical decisions and feeds them into the simulation through the map interface; the simulation outputs the loss distribution of each platoon-level game unit, the mean losses inflicted by each weapon system on each target, ammunition consumption, radio connections and their status, and the evacuation situation of the wounded from platoon level up to the evacuation hospital. The key contributions of the research are 1) a new computational model for brigade-level war-game scenarios whose smallest unit is a platoon or squad; 2) determination of a unit's breaking point based on losses and on the soldiers tied up in evacuating the wounded; 3) the applicability of probability-based risk analysis to comparative studies; 4) experimentally tested models of the effects of fire; and 5) working numerical integration solutions. The work is limited to a computational model producing platoon-level probability distributions for land combat, a field-medicine model, and an indirect-fire model with their integration methods, as well as the applicability of their results. Air-to-ground and sea-to-ground weapons effects can be examined, but not air or naval combat. The models, usage practices, and software engineering of the Sandis software that applies these methods are being developed further. Significant topics for further modeling research include urban combat, duels between armored vehicles, the effect of terrain on artillery fire, and the estimation of materiel consumption.
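As a minimal illustration of the Markov-chain idea described above, where unit strength is carried as a probability distribution over states rather than as a single number, consider the following sketch; the squad size, per-step hit probability, and breaking-point threshold are invented placeholders, not values from the dissertation or from Sandis:

```python
import numpy as np
from math import comb

# State k = number of effective soldiers in a squad. Each time step every
# remaining soldier suffers a damaging hit with probability p_hit, giving
# a binomial transition matrix; the strength distribution is propagated
# as a whole instead of tracking a single expected value.
n, p_hit, steps = 8, 0.02, 60
P = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    for losses in range(k + 1):
        P[k, k - losses] = comb(k, losses) * p_hit**losses * (1 - p_hit)**(k - losses)

dist = np.zeros(n + 1)
dist[n] = 1.0                            # start at full strength
for _ in range(steps):
    dist = dist @ P                      # one Markov step

mean_strength = dist @ np.arange(n + 1)
print(f"mean strength after {steps} steps: {mean_strength:.2f}")
print(f"P(strength <= {n // 2}): {dist[: n // 2 + 1].sum():.3f}")  # breaking-point proxy
```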
Abstract:
The focus of the present work was on 10- to 12-year-old elementary school students' conceptual learning outcomes in science in two specific inquiry-learning environments, laboratory and simulation. The main aim was to examine whether it would be more beneficial to combine simulation and laboratory activities in science teaching than to contrast them. It was argued that the status quo, in which laboratories and simulations are seen as alternative or competing methods in science teaching, is hardly an optimal solution for promoting students' learning and understanding in various science domains. It was hypothesized that it would make more sense, and be more productive, to combine laboratories and simulations. Several explanations and examples were provided to back up this hypothesis. In order to test whether learning with the combination of laboratory and simulation activities can result in better conceptual understanding in science than learning with laboratory or simulation activities alone, two experiments were conducted in the domain of electricity. In these experiments students constructed and studied electrical circuits in three different learning environments: laboratory (real circuits), simulation (virtual circuits), and a simulation-laboratory combination (real and virtual circuits used simultaneously). In order to measure and compare how these environments affected students' conceptual understanding of circuits, a subject knowledge assessment questionnaire was administered before and after the experimentation. The results of the experiments were presented in four empirical studies. Three of the studies focused on learning outcomes between the conditions and one on learning processes. Study I analyzed learning outcomes from Experiment I. The aim of the study was to investigate whether it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity. Matched trios were created based on the pre-test results of 66 elementary school students and divided randomly into laboratory (real circuits), simulation (virtual circuits), and simulation-laboratory combination (real and virtual circuits simultaneously) conditions. In each condition students had 90 minutes to construct and study various circuits. The results showed that studying electrical circuits in the simulation-laboratory combination environment improved students' conceptual understanding more than studying circuits in the simulation or laboratory environment alone. Although there were no statistical differences between the simulation and laboratory environments, the learning effect was more pronounced in the simulation condition, where the students made clear progress during the intervention, whereas in the laboratory condition students' conceptual understanding remained at an elementary level after the intervention. Study II analyzed learning outcomes from Experiment II. The aim of the study was to investigate whether and how learning outcomes in the simulation and simulation-laboratory combination environments are mediated by implicit (only procedural guidance) and explicit (more structure and guidance for the discovery process) instruction in the context of simple DC circuits. Matched quartets were created based on the pre-test results of 50 elementary school students and divided randomly into simulation implicit (SI), simulation explicit (SE), combination implicit (CI), and combination explicit (CE) conditions.
The results showed that when the students were working with the simulation alone, they were able to gain a significantly greater amount of subject knowledge when they received metacognitive support (explicit instruction; SE) for the discovery process than when they received only procedural guidance (implicit instruction; SI). However, this additional scaffolding was not enough to reach the level of the students in the combination environment (CI and CE). A surprising finding in Study II was that instructional support had a different effect in the combination environment than in the simulation environment. In the combination environment, explicit instruction (CE) did not seem to elicit much additional gain for students' understanding of electric circuits compared to implicit instruction (CI). Instead, explicit instruction slowed down the inquiry process substantially in the combination environment. Study III analyzed, from video data, the learning processes of the 50 students who participated in Experiment II (cf. Study II above). The focus was on three specific learning processes: cognitive conflicts, self-explanations, and analogical encodings. The aim of the study was to find possible explanations for the success of the combination condition in Experiments I and II. The video data provided clear evidence of the benefits of studying with the real and virtual circuits simultaneously (the combination conditions). Mostly the representations complemented each other, that is, one representation helped students to interpret and understand the outcomes they received from the other representation. However, there were also instances in which analogical encoding took place, that is, situations in which slightly discrepant results between the representations 'forced' students to focus on those features that could be generalised across the two representations. No statistical differences were found in the amount of experienced cognitive conflicts and self-explanations between the simulation and combination conditions, though in self-explanations there was a nascent trend in favour of the combination. There was also a clear tendency suggesting that explicit guidance increased the amount of self-explanations. Overall, the amount of cognitive conflicts and self-explanations was very low. The aim of Study IV was twofold: the main aim was to provide an aggregated overview of the learning outcomes of Experiments I and II; the secondary aim was to explore the relationship between the learning environments and students' prior domain knowledge (low and high) in the experiments. Aggregated results of Experiments I and II showed that, on average, 91% of the students in the combination environment scored above the average of the laboratory environment, and 76% of them also scored above the average of the simulation environment. Seventy percent of the students in the simulation environment scored above the average of the laboratory environment. The results further showed that overall, students seemed to benefit from combining simulations and laboratories regardless of their level of prior knowledge; that is, students with either low or high prior knowledge who studied circuits in the combination environment outperformed their counterparts who studied in the laboratory or simulation environment alone. The effect seemed to be slightly bigger among the students with low prior knowledge.
However, more detailed inspection of the results showed that there were considerable differences between the experiments regarding how students with low and high prior knowledge benefitted from the combination: in Experiment I, especially students with low prior knowledge benefitted from the combination compared with students who used only the simulation, whereas in Experiment II, only students with high prior knowledge seemed to benefit from the combination relative to the simulation group. Regarding the differences between the simulation and laboratory groups, the benefits of using a simulation seemed to be slightly higher among students with high prior knowledge. The results of the four empirical studies support the hypothesis concerning the benefits of using simulation along with laboratory activities to promote students' conceptual understanding of electricity. It can be concluded that when teaching students about electricity, they can gain a better understanding when they have the opportunity to use the simulation and the real circuits in parallel than when they have only the real circuits or only a computer simulation available, even when the use of the simulation is supported with explicit instruction. The outcomes of the empirical studies can be considered the first unambiguous evidence of the (additional) benefits of combining laboratory and simulation activities in science education, as compared to learning with laboratories or simulations alone.
Abstract:
The last decade has shown that the global paper industry needs new processes and products in order to reassert its position. As the paper markets in Western Europe and North America have stabilized, competition has tightened. Along with the development of more cost-effective processes and products, new process design methods are also required to break the old molds and create new ideas. This thesis discusses the development of a process design methodology based on simulation and optimization methods. A bi-level optimization problem and a solution procedure for it are formulated and illustrated. Computational models and simulation are used to illustrate the phenomena inside a real process, and mathematical optimization is exploited to find the best process structures and control principles for the process. Dynamic process models are used inside the bi-level optimization problem, which is assumed to be dynamic and multiobjective due to the nature of papermaking processes. The numerical experiments show that the bi-level optimization approach is useful for different kinds of problems related to process design and optimization. Here, the design methodology is applied to a constrained process area of a papermaking line. However, the same methodology is applicable to all types of industrial processes, e.g., the design of biorefiners, because the methodology is fully generalized and can be easily modified.
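To make the bi-level structure concrete, the sketch below nests an inner control optimization inside an outer design optimization, using generic quadratic stand-ins for the process model; it illustrates only the nesting, not the dynamic, multiobjective papermaking formulation developed in the thesis:

```python
from scipy.optimize import minimize, minimize_scalar

# Bi-level sketch: the outer level chooses a design variable d, the inner
# level finds the best control u for that design, and the outer objective
# is evaluated at the inner optimum. The quadratic costs are placeholders.
def inner_best_control(d):
    # inner problem: min_u running cost of design d under control u
    res = minimize_scalar(lambda u: (u - 2.0 * d) ** 2 + 0.1 * u ** 2)
    return res.x, res.fun

def outer_objective(x):
    d = x[0]
    _, run_cost = inner_best_control(d)
    capital_cost = 0.5 * (d - 1.0) ** 2      # illustrative design cost
    return capital_cost + run_cost

res = minimize(outer_objective, x0=[0.0], method="Nelder-Mead")
print(f"optimal design d = {res.x[0]:.3f}, total cost = {res.fun:.3f}")
```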
Abstract:
The rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated on a single chip. An emerging challenge is the implementation of reliable and efficient interconnections between these cores as well as the other components in such systems. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to, for instance, congestion and system control. Additionally, faults can cause problems in multiprocessor systems. These faults can be transient faults or permanent manufacturing faults, or they can appear due to aging. To solve the emerging traffic-management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long. Therefore, the system should be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems. The monitoring structure can also be successfully applied for task-mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
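As a toy illustration of how clustered monitoring can support traffic management, the sketch below routes packets on a mesh, always taking the productive (X or Y) move whose cluster monitor reports the lower aggregate load; the mesh size, cluster size, and load values are invented, and the thesis' actual analysis is done in a SystemC transaction-level environment rather than anything like this:

```python
import numpy as np

# 8x8 mesh divided into 4x4 clusters; each cluster monitor exposes the
# mean buffer occupancy of its routers. Routing is minimal (only moves
# toward the destination) but picks the less congested direction.
size, cluster = 8, 4
rng = np.random.default_rng(1)
load = rng.random((size, size))          # per-router load, illustrative

def cluster_load(x, y):
    cx, cy = (x // cluster) * cluster, (y // cluster) * cluster
    return load[cx:cx + cluster, cy:cy + cluster].mean()

def route(src, dst):
    x, y = src
    path = [(x, y)]
    while (x, y) != dst:
        moves = []
        if x != dst[0]:
            moves.append((x + (1 if dst[0] > x else -1), y))
        if y != dst[1]:
            moves.append((x, y + (1 if dst[1] > y else -1)))
        x, y = min(moves, key=lambda m: cluster_load(*m))
        path.append((x, y))
    return path

print(route((0, 0), (7, 5)))
```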
Abstract:
Energy efficiency is one of the major objectives to be achieved in order to use the world's limited energy resources in a sustainable way. Since radiative heat transfer is the dominant heat transfer mechanism in most fossil fuel combustion systems, more accurate insight and models may improve the energy efficiency of newly designed combustion systems. The radiative properties of combustion gases are highly wavelength dependent, and better models for calculating them are much needed in the modeling of large-scale industrial combustion systems. With detailed knowledge of the spectral radiative properties of gases, the modeling of combustion processes in different applications can be more accurate. In order to propose a new method for effective non-gray modeling of radiative heat transfer in combustion systems, different models for the spectral properties of gases, including the SNBM, EWBM, and WSGGM, have been studied in this research. Building on this detailed analysis of the different approaches, the thesis presents new methods for gray and non-gray radiative heat transfer modeling in homogeneous and inhomogeneous H2O–CO2 mixtures at atmospheric pressure. The proposed method is able to support the modeling of a wide range of combustion systems, including the oxy-fired combustion scenario. The new methods are based on implementing pre-derived correlations for the total emissivity and band absorption coefficient of H2O–CO2 mixtures at different temperatures, gas compositions, and optical path lengths. They can easily be used within any commercial CFD software for radiative heat transfer modeling, resulting in more accurate, simple, and fast calculations. The new methods were successfully used in CFD modeling by applying them to an industrial-scale backpass channel under oxy-fired conditions. The developed approaches are more accurate than other methods; moreover, they can provide a complete explanation and detailed analysis of radiative heat transfer in different systems under different combustion conditions. The methods were verified by applying them to several benchmarks, where they showed a good level of accuracy and computational speed compared to other methods. Furthermore, the implementation of the suggested banded approach in CFD software is straightforward.
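The WSGGM mentioned above approximates the total emissivity as a weighted sum of a few gray gases plus a clear gas, with temperature-dependent weights. The sketch below shows this generic form; the absorption coefficients and weight polynomials are hypothetical placeholders, not the H2O–CO2 correlations derived in the thesis:

```python
import numpy as np

# Generic WSGG form:  eps(T, pL) = sum_i a_i(T) * (1 - exp(-k_i * p * L)),
# where i = 0 is the clear gas (k_0 = 0) and the weights a_i(T) sum to one.
k = np.array([0.0, 0.4, 6.0, 120.0])        # 1/(atm*m), hypothetical
b = np.array([[0.45, 0.30, 0.18, 0.07],     # weight coefficients: constant
              [0.10, -0.05, -0.03, -0.02]]) # and linear-in-T terms (made up)

def wsgg_emissivity(T, pL, T_ref=1200.0):
    a = b[0] + b[1] * (T / T_ref)           # a_i(T), linear in T here
    a = a / a.sum()                         # enforce sum of weights = 1
    return float(np.sum(a[1:] * (1.0 - np.exp(-k[1:] * pL))))

# pL is the partial-pressure path length in atm*m
print(f"eps = {wsgg_emissivity(T=1400.0, pL=0.3):.3f}")
```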
Abstract:
When modeling machines in their natural working environment, collisions become a very important factor in simulation accuracy. As simulations expand to include the operating environment, the need for a general collision model able to handle a wide variety of cases has become central in the development of simulation environments. With the addition of the operating environment, the challenges for the collision modeling method also change: more simultaneous contacts with more objects occur in more complicated situations, which makes the real-time requirement more difficult to meet. Common problems in current collision modeling methods include, for example, dependency on geometry shape or mesh density, computational cost that grows exponentially with the number of contacts, the lack of a proper friction model, and failures in certain configurations such as closed kinematic loops. All these problems mean that current modeling methods will fail in certain situations. A method that never fails in any situation is not very realistic, but improvements can be made over the current methods.
Abstract:
The increasing complexity of controller systems applied in modern passenger cars requires adequate simulation tools. The toolset FASIM_C++, described in the following, uses complex vehicle models in three-dimensional vehicle dynamics simulation. The structure of the implemented dynamic models and the generation of the equations of motion by the method of kinematic differentials are explained briefly. After a short introduction to methods of event handling, several vehicle models and applications such as controller development, roll-over simulation, and real-time simulation are explained. Finally, some simulation results are presented.
Abstract:
Numerical simulation of machining processes can be traced back to the early seventies, when finite element models for continuous chip formation were first proposed. The advent of fast computers and the development of new techniques to model large plastic deformations have favoured machining simulation. Relevant aspects of finite element simulation of machining processes are discussed in this paper, such as solution methods, material models, thermo-mechanical coupling, friction models, chip separation and breakage strategies, and meshing/re-meshing strategies.
Abstract:
A stochastic differential equation (SDE) is a differential equation in which some of the terms, and its solution, are stochastic processes. SDEs play a central role in modeling physical systems in fields such as finance, biology, and engineering. In the modeling process, the computation of the trajectories (sample paths) of solutions to SDEs is very important. However, the exact solution to an SDE is generally difficult to obtain due to the non-differentiability of realizations of Brownian motion. There exist approximation methods for the solutions of SDEs; these solutions are continuous stochastic processes that represent diffusive dynamics, a common modeling assumption for financial, biological, physical, and environmental systems. This Master's thesis is an introduction to and survey of numerical solution methods for stochastic differential equations. Standard numerical methods, local linearization methods, and filtering methods are described in detail. We compute the root mean square error for each method, on the basis of which we propose a preferable numerical scheme. Stochastic differential equations can also be formulated from given ordinary differential equations. In this thesis, we describe two kinds of formulation: parametric and non-parametric techniques. The formulation is based on the epidemiological SEIR model. These methods tend to increase the number of parameters in the constructed SDEs and hence require more data. We compare the two techniques numerically.
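As an example of the standard schemes the thesis surveys, the sketch below applies the Euler-Maruyama method to geometric Brownian motion, whose exact solution is known, and estimates the strong root mean square error at the endpoint; all parameter values are illustrative:

```python
import numpy as np

# Euler-Maruyama for dX = mu*X dt + sigma*X dW, X(0) = x0. GBM has the
# exact solution X(T) = x0 * exp((mu - sigma^2/2) T + sigma W(T)), so the
# endpoint RMSE can be computed on the same Brownian paths.
rng = np.random.default_rng(0)
mu, sigma, x0, T = 0.5, 0.3, 1.0, 1.0
n_steps, n_paths = 200, 5000
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
X = np.full(n_paths, x0)
for k in range(n_steps):
    X = X + mu * X * dt + sigma * X * dW[:, k]    # one EM step per path

W_T = dW.sum(axis=1)                              # terminal Brownian value
X_exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)
rmse = np.sqrt(np.mean((X - X_exact) ** 2))
print(f"strong RMSE at T = {T}: {rmse:.4f}")      # O(sqrt(dt)) for EM
```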
Abstract:
The main objective of this master's thesis is to examine whether Weibull analysis is a suitable method for warranty forecasting in the Case Company. The Case Company has used ReliaSoft's Weibull++ software, which is based on the Weibull method, but the Company has noticed that the analysis has not given correct results. This study was conducted by running Weibull simulations in different profit centers of the Case Company and then comparing actual costs with forecasted costs. Simulations were made using different time frames and two methods for determining future deliveries. The first sub-objective is to examine which simulation parameters give the best result for each profit center. The second sub-objective of this study is to create a simple control model for following forecasted costs and actual realized costs. The third sub-objective is to document all Qlikview parameters of the profit centers. This study is constructive research, and solutions to the company's problems are worked out in this master's thesis. The theory part introduces quality issues, for example what quality is, quality costing, and the cost of poor quality. Quality is one of the major concerns in the Case Company, so understanding the link between quality and warranty forecasting is important. Warranty management and other tools for warranty forecasting were also introduced, as were the Weibull method, its mathematical properties, and reliability engineering. The main result of this master's thesis is that the Weibull analysis forecasted too-high costs when calculating provisions. Although some forecasted values for profit centers were lower than the actual values, the method works better for planning purposes. One reason is that quality improvement, or alternatively quality deterioration, does not show up in the results of the analysis in the short run. The other reason for the too-high values is that the products of the Case Company are complex and the analyses were made at the profit-center level. The Weibull method was developed for standard products, but the products of the Case Company consist of many complex components. According to the theory, the method was developed for homogeneous data. The most important finding is therefore that the analysis should be made at the product level, where the data are more homogeneous, not at the profit-center level.
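A minimal sketch of the underlying forecasting idea: a fitted Weibull distribution gives the probability that a unit fails within the warranty period, and multiplying by delivery volumes and an average repair cost yields a cost forecast. All figures below are hypothetical placeholders, not Case Company data or Weibull++ output:

```python
import numpy as np

# Weibull cdf F(t) = 1 - exp(-(t/eta)^beta): probability of failure by
# time t for shape beta and scale eta (months). Values are made up.
beta, eta = 1.4, 60.0

def weibull_cdf(t):
    return 1.0 - np.exp(-(np.asarray(t, dtype=float) / eta) ** beta)

deliveries = np.array([120, 150, 90])        # units per delivery batch
warranty_months = 24
avg_repair_cost = 800.0                      # currency units per failure

expected_failures = deliveries * weibull_cdf(warranty_months)
print(f"expected warranty failures: {expected_failures.sum():.1f}")
print(f"forecast warranty cost: {expected_failures.sum() * avg_repair_cost:.0f}")
```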
Abstract:
A high-frequency cycloconverter acts as a direct ac-to-ac power converter circuit that does not require a diode bridge rectifier. The bridgeless topology eliminates the forward voltage drop losses present in a diode bridge. In addition, the on-state losses can be reduced to 1.5 times the on-state resistance of the switches in half-bridge operation of the cycloconverter. A high-frequency cycloconverter is reviewed and the charging effect of the dc capacitors in "back-to-back" or synchronous-mode operation is analyzed. In addition, a control method is introduced for regulating the dc voltage of the ac-side capacitors in the synchronous operation mode. The controller regulates the dc-capacitor voltages and prevents the switches from reaching the overvoltage level. This is accomplished by varying the phase shift between the upper and lower gate signals. By adding a phase shift between the gate-signal pairs, the charge stored in the energy storage capacitors can be discharged through the resonant load, and the output resonant current amplitude can be substantially improved. These goals are analyzed and illustrated with simulation. The theory is supported by practical measurements in which the proposed control method is implemented in an FPGA device and tested with a high-frequency cycloconverter using super-junction power MOSFETs as switching devices.
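As a rough waveform-level sketch of the phase-shift mechanism (gate signals only, no converter model implied), the code below generates the upper and lower gate-signal pairs and measures the overlap window that a given phase shift opens in each switching cycle; the switching frequency and shift values are illustrative:

```python
import numpy as np

# Two 50 % duty square waves at the switching frequency; delaying the
# lower signal by a phase shift opens a window (upper on, lower off) in
# which the dc capacitors can discharge into the resonant load.
f_sw = 100e3                               # switching frequency, illustrative
t = np.linspace(0.0, 2.0 / f_sw, 20000)    # two switching periods

def gate(phase_rad):
    return np.mod(f_sw * t - phase_rad / (2.0 * np.pi), 1.0) < 0.5

upper = gate(0.0)
for shift_deg in (0.0, 30.0, 60.0):
    lower = gate(np.deg2rad(shift_deg))
    window = np.mean(upper & ~lower)       # fraction of time: upper on, lower off
    print(f"shift {shift_deg:5.1f} deg -> discharge window {window:.3f} of period")
```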