Abstract:
APROS (Advanced Process Simulation Environment) is a computer simulation program developed to simulate thermal hydraulic processes in nuclear and conventional power plants. Earlier research at VTT Technical Research Centre of Finland had found the current version of APROS to produce inaccurate simulation results for a certain case of loop seal clearing. The objective of this Master's thesis is to find and implement an alternative method for calculating the rate of stratification in APROS, which was found to be the reason for the inaccuracies. A brief literature study was performed and a promising candidate for the new method was found. The new method was implemented in APROS and tested against experiments and simulations from two test facilities and against the current version of APROS. Simulation results with the new version were partially conflicting: in some cases the new method was more accurate than the current version, in others the current method was better. Overall, the new method can be assessed as an improvement.
Abstract:
This Master's thesis deals with methods for calculating the loads, displacements and stresses acting in the bodies of an actuator or mechanical system modelled as a multibody system. The methods included in the work were selected on the basis of articles published in the 2000s in scientific journals on virtual design. The purpose of the work is to form a literature review of the properties and methodology of new calculation methods, which can be applied to the needs of virtual design as required. Two of the presented methods are optimization methods (RBDO and ESL). The other methods deal with, among other things, strain reconstruction and the stresses imposed on components by sliding friction. The moving frame method applies the principle of the floating frame of reference, one of the methods is clearly based on substructuring, and in one the flexible behaviour of the bodies is modelled with shape functions. In addition, there are three applied examples of load monitoring in industrial machines. The calculation methods differ in character and applicability. The optimization methods are at their best in the further development of structures, whereas the other methods are suited either to modelling existing structures or as design tools for entirely new systems. This diversity can be considered a good thing, as it allows choosing the method best suited to one's own purposes.
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure, and then extends the program incrementally, adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.
Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be efficiently detected. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem with the aid of the tool into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses.
Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
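The invariant-first workflow described above can be approximated in ordinary code by writing the invariant before the loop body and checking it at every step. The sketch below is a minimal Python illustration of the idea only, not output of the Socos tool: a real invariant-based development would discharge the invariant as a proof obligation in a theorem prover rather than check it at run time.

```python
def sum_array(a):
    """Sum a list, with the loop invariant stated and checked explicitly.

    In invariant-based programming the invariant is established first and
    each code addition is proved consistent with it; here the proof is
    approximated by a runtime assertion.
    """
    i, s = 0, 0
    while True:
        # Invariant: 0 <= i <= len(a) and s == sum(a[:i])
        assert 0 <= i <= len(a) and s == sum(a[:i])
        if i == len(a):
            return s  # postcondition: s == sum(a)
        s += a[i]
        i += 1
```

Each pass through the loop re-establishes the invariant, so the postcondition follows directly from the invariant plus the exit condition, which is exactly the argument structure an invariant diagram makes explicit.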
Abstract:
The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines, and efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits, numerical problems arise from the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise in the results, which in many cases leads the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. This thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits using explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied: the pressure drop approaching zero in the turbulent orifice model, and the volume approaching zero in the equation of pressure build-up.
These are the critical areas for which alternative methods for modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent area. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations: e.g. when a valve is closed, when an actuator is driven against an end stop, or when an external force makes the actuator switch its direction during operation. This means that in terms of accuracy, the description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur when the pressure drop comes close to zero, since the first derivative of flow with respect to the pressure drop approaches infinity as the pressure drop approaches zero. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed, using a cubic spline function to describe the flow in the laminar and transition areas. Parameters for the cubic spline function are selected such that its first derivative is equal to the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits, a tradeoff exists between accuracy and calculation speed; this tradeoff is investigated for the two-regime orifice flow model. Especially inside many types of valves, as well as between them, there exist very small volumes. The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation. Particularly in real-time simulation, these numerical problems are a great weakness. The system stiffness approaches infinity as the fluid volume approaches zero.
If fixed-step explicit algorithms for solving ordinary differential equations (ODEs) are used, system stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/Be of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided. Also, the method is freely applicable regardless of the integration routine applied. A further strength of both above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled. In this modelling method, most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and shows several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
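The two-regime orifice model described above can be sketched as follows. This is a minimal illustration of the idea only (replace the square-root law below a transition pressure with a smooth odd cubic whose value and first derivative match the turbulent branch at the boundary), not the thesis's actual implementation; the flow gain K and transition pressure p_tr are placeholder values, not calibrated parameters.

```python
import math

def orifice_flow(dp, K=1.0, p_tr=1e4):
    """Orifice volume flow as a function of pressure drop dp [Pa].

    The purely turbulent model Q = K*sqrt(dp) has dQ/d(dp) -> infinity
    as dp -> 0, which stalls variable-step integrators.  Below the
    transition pressure p_tr the flow is therefore replaced by an odd
    cubic  q = a*dp + b*dp**3  whose value and first derivative match
    the turbulent branch at dp = +/- p_tr.
    """
    if abs(dp) >= p_tr:
        # Turbulent branch, made antisymmetric for reversed flow.
        return math.copysign(K * math.sqrt(abs(dp)), dp)
    # Coefficients from matching value and slope at dp = p_tr.
    a = 5.0 * K / (4.0 * math.sqrt(p_tr))
    b = -K / (4.0 * p_tr ** 2.5)
    return a * dp + b * dp ** 3
```

Because the cubic is odd, the curve passes smoothly through zero pressure drop with a finite slope, removing the infinite derivative that causes the numerical noise discussed above.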
Abstract:
Early identification of beginning readers at risk of developing reading and writing difficulties plays an important role in the prevention and provision of appropriate intervention. In Tanzania, as in other countries, there are children in schools who are at risk of developing reading and writing difficulties. Many of these children complete school without being identified and without proper and relevant support. The main language in Tanzania is Kiswahili, a transparent language. Contextually relevant, reliable and valid instruments of identification are needed in Tanzanian schools. This study aimed at the construction and validation of a group-based screening instrument in the Kiswahili language for identifying beginning readers at risk of reading and writing difficulties. In studying the function of the test there was special interest in analyzing the explanatory power of certain contextual factors related to the home and school. Halfway through grade one, 337 children from four purposively selected primary schools in Morogoro municipality were screened with a group test consisting of 7 subscales measuring phonological awareness, word and letter knowledge and spelling. A questionnaire about background factors and the home and school environments related to literacy was also used. The schools were chosen based on performance status (i.e. high, good, average and low performing schools) in order to include variation. For validation, 64 children were chosen from the original sample to take an individual test measuring nonsense word reading, word reading, actual text reading, one-minute reading and writing. School marks from grade one and a follow-up test half way through grade two were also used for validation. The correlations between the results from the group test and the three measures used for validation were very high (.83-.95). Content validity of the group test was established by using items drawn from authorized text books for reading in grade one. 
Construct validity was analyzed through item analysis and principal component analysis. The difficulty level of most items in both the group test and the follow-up test was good, and the items also discriminated well. Principal component analysis revealed one powerful latent dimension (an initial literacy factor), accounting for 93% of the variance. This implies that it could be possible to use any set of the subtests of the group test for screening and prediction. K-Means cluster analysis revealed four clusters: at-risk children, strugglers, readers and good readers. The main concern in this study was with the groups of at-risk children (24%) and strugglers (22%), who need the most assistance. The predictive validity of the group test was analyzed by correlating the measures from the two school years and by cross-tabulating the grade one and grade two clusters. All the correlations were positive and very high, and 94% of the at-risk children in grade two had already been identified by the group test in grade one. The explanatory power of some of the home and school factors was very strong. The number of books at home accounted for 38% of the variance in reading and writing ability measured by the group test. Parents' reading ability and the support children received at home for schoolwork were also influential factors. Among the studied school factors, school attendance had the strongest explanatory power, accounting for 21% of the variance in reading and writing ability. Having been in nursery school was also of importance. Based on the findings of the study, a short version of the group test was created. It is suggested for use in grade-one screening processes aimed at identifying children at risk of reading and writing difficulties in the Tanzanian context. Suggestions for further research as well as actions for improving the literacy skills of Tanzanian children are presented.
Abstract:
The aim of the work was to create a tool for analyzing the fatigue of permanent magnet machine rotors. The tool was implemented so that load data measured from a real machine, together with the necessary material data, can be fed into it. In the tool, the load data is converted into a stress history using a scaling factor computed with the finite element method. For calculating the fatigue life, the analysis tool uses a stress-based method together with the rainflow method and the Palmgren-Miner cumulative damage rule. In addition, the tool produces a Smith fatigue strength diagram for the case under study. Besides the above-mentioned methods, the theoretical part of the work also presented the local strain-based method and fracture mechanics as fatigue analysis methods; owing to their complexity, these were not implemented in the tool. The fatigue analysis tool was used to calculate the fatigue lives of two example cases. In both cases the result was an infinite fatigue life, but the dynamic safety factor of the axial-flux machine rotor was small. Although the results appear reasonable, it would still be good to verify them, for example with commercial software, to obtain full certainty.
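The stress-based life calculation combining rainflow counting with the Palmgren-Miner rule can be sketched as below. The rainflow counting step is assumed already done (its output is a list of amplitude/count pairs), and the S-N curve is a generic Basquin-type form with placeholder constants, not the material data used in the thesis tool.

```python
def miner_damage(cycles, C=1e12, m=3.0):
    """Palmgren-Miner cumulative damage for a rainflow-counted history.

    `cycles` is a list of (stress_amplitude, count) pairs, e.g. the
    output of a rainflow count of the stress history.  The S-N curve is
    taken in the generic form N(S) = C / S**m; C and m are illustrative
    placeholders.  Damage D >= 1 predicts failure, and 1/D is the number
    of repetitions of the load block to failure.
    """
    damage = 0.0
    for amplitude, count in cycles:
        n_allowed = C / amplitude ** m  # cycles to failure at this amplitude
        damage += count / n_allowed
    return damage

# One load block: 1000 cycles at 100 MPa plus 50 cycles at 200 MPa.
D = miner_damage([(100.0, 1000), (200.0, 50)])
blocks_to_failure = 1.0 / D
```

If every counted amplitude falls below the material's endurance limit, the damage sum stays at zero and the predicted life is infinite, which matches the outcome reported for the two example cases above.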
Abstract:
This Master's thesis investigates the performance of the Olkiluoto 1 and 2 APROS model in the case of fast transients. The thesis includes a general description of the Olkiluoto 1 and 2 nuclear power plants and of the most important safety systems. The theoretical background of the APROS code as well as the scope and content of the Olkiluoto 1 and 2 APROS model are also described. The event sequences of the anticipated operational transients considered in the thesis are presented in detail, as they form the basis for the analysis of the APROS calculation results. The calculated fast operational transients comprise loss-of-load cases and two cases related to an inadvertent closure of one main steam isolation valve. As part of the thesis work, the inaccurate initial data values found in the original 1-D reactor core model were corrected, and the input data needed for the creation of a more accurate 3-D core model were defined. The analysis of the APROS calculation results showed that while the main results were in good accordance with the measured plant data, some differences were also detected. These differences were found to be caused by deficiencies and uncertainties related to the calculation model. According to the results, the reactor core and the feedwater systems cause most of the differences between the calculated and measured values. Based on these findings, it will be possible to develop the APROS model further to make it a reliable and accurate tool for the analysis of operational transients and possible plant modifications.
Abstract:
Nordic Forum for Nursing Teachers. Wednesday-Friday, 9-11 November 2011, Ounasvaara Campus, School of Health Care and Sports, Rovaniemi University of Applied Sciences.
Abstract:
The interaction between the soil and a tillage tool can be examined using different parameters for the soil and the tool. Among the soil parameters are the shear stress, cohesion, internal friction angle of the soil and the pre-compression stress. The tool parameters are mainly the tool geometry and the depth of operation. For the soils of Rio Grande do Sul there are hardly any studies evaluating the parameters that are important when using mathematical models to predict tractive effort. The objective was to obtain parameters related to the soils of Rio Grande do Sul that are used in soil-tool analysis, more specifically in mathematical models that allow the calculation of tractive effort for symmetric and narrow tools. Two of the main soils of Rio Grande do Sul, an Albaqualf and a Paleudult, were studied. Equations relating the cohesion, internal friction angle of the soil, adhesion, soil-tool friction angle and pre-compression stress as functions of the water content in the soil were obtained, providing important information for the use of mathematical models in tractive effort calculation.
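Deriving an equation for a soil parameter as a function of water content, as the study does, amounts to fitting measured points with a regression. The sketch below is a minimal least-squares illustration; the data points are invented for illustration only and are not measurements from the Albaqualf or Paleudult soils.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x, in pure Python.

    Used here to illustrate fitting a soil parameter, e.g. cohesion
    [kPa], as a function of gravimetric water content [%].
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical water contents [%] and corresponding cohesion values [kPa]:
water = [10.0, 15.0, 20.0, 25.0]
cohesion = [42.0, 36.0, 31.0, 24.0]
a, b = linear_fit(water, cohesion)  # cohesion ~ a + b * water_content
```

With an equation of this form in hand, a tractive-effort model can be evaluated at the actual field water content instead of at a single laboratory condition.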
Abstract:
The purpose of this study is to examine how well risk parity works in terms of risk, return and diversification relative to more traditional minimum variance, 1/N and 60/40 portfolios. Risk parity portfolios were constituted of five risk sources: three common asset classes and two alternative beta investment strategies. The three common asset classes were equities, bonds and commodities, and the alternative beta investment strategies were carry trade and trend following. Risk parity portfolios were constructed using five different risk measures, four of which were tail risk measures. The risk measures were standard deviation, Value-at-Risk, Expected Shortfall, modified Value-at-Risk and modified Expected Shortfall. We also studied how sensitive risk parity is to the choice of risk measure. The hypothesis is that risk parity portfolios provide better return with the same amount of risk and are better diversified than the benchmark portfolios. We used two data sets, monthly and weekly data. The monthly data was from the years 1989-2011 and the weekly data was from the years 2000-2011. Empirical studies showed that risk parity portfolios provide better diversification since the diversification is made at the risk level. Risk-based portfolios provided superior return compared to the asset-based portfolios. Using tail risk measures in risk parity portfolios does not necessarily provide a better hedge against tail events than standard deviation.
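The simplest risk parity construction under the standard-deviation risk measure, assuming uncorrelated risk sources, is inverse-volatility weighting. The sketch below illustrates that special case only; the study's portfolios, which use covariances and tail risk measures, require a numerical solver for the full equal-risk-contribution problem, and the volatilities used here are illustrative placeholders.

```python
def inverse_volatility_weights(vols):
    """Naive risk-parity weights: w_i proportional to 1/sigma_i.

    With uncorrelated assets this equalizes each asset's contribution
    w_i * sigma_i to portfolio risk, which is the defining property of
    risk parity: diversification is made at the risk level, not by
    capital allocation.
    """
    inv = [1.0 / v for v in vols]
    total = sum(inv)
    return [x / total for x in inv]

# e.g. equities 15%, bonds 5%, commodities 20% annualized volatility:
weights = inverse_volatility_weights([0.15, 0.05, 0.20])
```

Note how the low-volatility asset (bonds) receives the largest capital weight, which is why risk parity portfolios are typically bond-heavy in capital terms even though each risk source contributes equally.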
Abstract:
This work studies the optimization of process steam production costs at the Neste Oil Naantali refinery, focusing above all on process steam production at an oil refinery. The aim was to create a simple tool for comparing steam production costs across the refinery's production units, and at the same time to chart the steam production possibilities in the units. The Excel-based spreadsheet created as a result of the work is presented here. The accuracy of the program can be improved with process test runs, but even at the current calculation accuracy the program meets the set goals: it makes it possible to compare production costs between units easily and illustratively. In addition to its own steam production, the Naantali refinery buys steam from the neighbouring Fortum Naantali power plant, and the price of purchased steam is included in the comparison in the tool. Based on the work, purchased steam is clearly a cheaper option than own production in the case of the Naantali refinery. The cost difference arises from the fuel used, which in the refinery's case is refinery gas (fuel gas). At the current level of crude oil prices, the situation is likely to remain so in the future as well. The refinery also has targets where energy efficiency can be improved outside the current steam network. As examples of such targets, this work examines building a second waste heat boiler alongside the one currently in operation, and installing steam turbines in pressure reduction lines. Of these projects, the waste heat boiler proved to be a highly recommendable investment.
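The core comparison the tool performs, own production cost versus purchase price per tonne of steam, can be sketched in a few lines. Every number below is a generic placeholder (fuel price, enthalpy rise, boiler efficiency, purchase price), not a figure from the Naantali study.

```python
def own_steam_cost(fuel_price_eur_per_mwh,
                   specific_energy_mwh_per_t=0.75,
                   boiler_efficiency=0.90):
    """Rough production cost of one tonne of steam [EUR/t].

    specific_energy is the heat needed per tonne of steam (the enthalpy
    rise from feedwater to delivery conditions); dividing by the boiler
    efficiency gives the fuel energy actually burned per tonne.
    """
    return fuel_price_eur_per_mwh * specific_energy_mwh_per_t / boiler_efficiency

# Compare with an assumed purchase price to pick the cheaper source:
own = own_steam_cost(40.0)   # fuel gas assumed priced at 40 EUR/MWh
purchased = 25.0             # assumed price of steam bought from the power plant
cheaper = "purchase" if purchased < own else "own production"
```

Because own-production cost scales directly with the fuel price, the conclusion that purchased steam is cheaper hinges on the fuel-gas valuation, which is exactly why the abstract ties the result to the prevailing crude oil price level.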
Abstract:
The purpose of this Master's thesis is to optimize the calculation of customers' electricity bills by means of distributed computing. As smart, remotely read energy meters arrive in every household, energy companies are obliged to calculate customers' electricity bills based on hourly metering data. The growing amount of data also increases the number of calculation tasks required. The work evaluates alternatives for implementing distributed computing and takes a closer look at the possibilities of cloud computing. In addition, simulations were run to assess the differences between parallel and sequential computation. To support the correct calculation of electricity bills, a measurement tree algorithm was developed.
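Distributing the per-customer billing work described above can be sketched as follows. This is a generic parallel map over customers (hourly consumption times hourly price), not the thesis's measurement tree algorithm or its cloud deployment; the worker count and data shapes are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def bill(hourly_consumption_kwh, hourly_price_eur_per_kwh):
    """Bill for one customer: sum over hours of consumption times price."""
    return sum(c * p for c, p in
               zip(hourly_consumption_kwh, hourly_price_eur_per_kwh))

def bill_all(customers, prices, workers=4):
    """Compute every customer's bill in parallel.

    `customers` maps a customer id to its list of hourly kWh readings;
    `prices` is the shared hourly price series.  Customers are
    independent, so the work partitions cleanly across workers.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {cid: pool.submit(bill, usage, prices)
                   for cid, usage in customers.items()}
        return {cid: f.result() for cid, f in futures.items()}
```

Because each customer's bill depends only on that customer's own readings, this is an embarrassingly parallel workload, which is what makes the cloud-computing alternatives evaluated in the thesis attractive as the metering data volume grows.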
Abstract:
This three-phase study was conducted to examine the effect of the Breast Cancer Patient's Pathway program (BCPP) on breast cancer patients' empowering process from the viewpoint of the difference between knowledge expectations and perceptions of received knowledge, knowledge level, quality of life, anxiety and treatment-related side effects during the breast cancer treatment process. The BCPP is an Internet-based patient education tool describing a flow chart of the patient pathway during the breast cancer treatment process, from breast cancer diagnostic tests to the follow-up after treatments. The ultimate goal of this study was to evaluate the effect of the BCPP on the breast cancer patient's empowerment when the patient pathway is used as a patient education tool. In phase I, a systematic literature review was carried out to chart the solutions and outcomes of Internet-based educational programs for breast cancer patients. In phase II, a Delphi study was conducted to evaluate the usability of the web pages and the adequacy of their content. In phase III, the BCPP program was piloted with 10 patients, after which patients were randomised to an intervention group (n=50) and a control group (n=48). According to the results of this study, the Internet is an effective patient education tool for increasing knowledge, and the BCPP can be used as a patient education method supporting other education methods. However, breast cancer patients' perceptions of received knowledge were not fulfilled; their knowledge expectations exceeded the perceived amount of received knowledge. Although the control group patients' knowledge expectations were met better by the knowledge they received in hospital than those of the patients in the intervention group, no statistically significant differences were found between the groups in terms of quality of life, anxiety and treatment-related side effects.
However, anxiety decreased faster in the intervention group when looking at internal differences between the groups at different measurement times. In the intervention group, the difference between knowledge expectations and perceptions of received knowledge correlated significantly with quality of life and anxiety. The intervention group's knowledge level was also significantly higher than that of the control group. These results support the theory that the empowering process requires the patient's awareness of knowledge expectations and perceptions of received knowledge. There is a need to develop patient education, including oral and written education and the BCPP, to meet patients' knowledge expectations and facilitate the empowering process. Further research is needed on the process of cognitive empowerment with breast cancer patients. There is a need for new patient education methods to increase breast cancer patients' awareness of knowing.