19 results for Test data
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
To enable a mathematically and physically sound execution of fatigue tests and a correct interpretation of their results, statistical evaluation methods are used to assist in the analysis of fatigue testing data. The main objective of this work is to develop step-by-step instructions for the statistical analysis of laboratory fatigue data. The scope of this project is to provide practical cases answering the various questions raised in the treatment of test data, applying the methods and formulae of the document IIW-XIII-2138-06 (Best Practice Guide on the Statistical Analysis of Fatigue Data). Generally, the questions in the data sheets involve several aspects: estimation of the necessary sample size, verification of the statistical equivalence of the collated data sets, and determination of characteristic curves in different cases. The series of comprehensive examples given in this thesis demonstrates the various statistical methods and helps develop a sound procedure for creating reliable calculation rules for fatigue analysis.
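As an illustration of the characteristic-curve question mentioned above, the following sketch fits a Basquin-type S-N curve to fatigue data by least squares and shifts it down by a fixed number of residual standard deviations. The data, the function name, and the value of k are all assumptions for illustration; the IIW guide derives k from the sample size and the required survival probability.

```python
import numpy as np

def characteristic_curve(stress, cycles, k=2.0):
    """Fit log10(N) = a + b*log10(S) by least squares, then shift the
    mean curve down by k residual standard deviations to obtain a
    characteristic (lower-bound) curve.  k=2 is illustrative only."""
    x = np.log10(stress)
    y = np.log10(cycles)
    b, a = np.polyfit(x, y, 1)          # slope and intercept of the mean curve
    resid = y - (a + b * x)
    s = resid.std(ddof=2)               # residual std dev (2 fitted parameters)
    return a, b, a - k * s              # mean intercept, slope, characteristic intercept

# Hypothetical test data: stress ranges [MPa] and cycles to failure
S = np.array([200.0, 180.0, 160.0, 140.0, 120.0])
N = np.array([1.1e5, 2.3e5, 5.0e5, 1.2e6, 3.1e6])
a, b, a_char = characteristic_curve(S, N)
```

The characteristic curve lies below the mean curve by construction, so a survival probability higher than 50 % is attached to it.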
Abstract:
The safe use of nuclear power plants (NPPs) requires a deep understanding of the physical processes and systems involved. Studies on thermal hydraulics have been carried out in various separate effects and integral test facilities at Lappeenranta University of Technology (LUT), either to ensure the functioning of the safety systems of light water reactors (LWRs) or to produce validation data for the computer codes used in the safety analyses of NPPs. Several examples of safety studies on the thermal hydraulics of nuclear power plants are discussed. The studies are related to physical phenomena occurring in different processes in NPPs, such as rewetting of the fuel rods, emergency core cooling (ECC), natural circulation, small break loss-of-coolant accidents (SBLOCA), non-condensable gas release and transport, and passive safety systems. Studies on both VVER and advanced light water reactor (ALWR) systems are included. The set of cases includes separate effects tests for understanding and modeling a single physical phenomenon, separate effects tests to study the behavior of an NPP component or a single system, and integral tests to study the behavior of the whole system. The following steps can be found in the studies, though not necessarily all in the same study. Experimental studies as such have provided solutions to existing design problems. Experimental data have been created to validate a single model in a computer code. Validated models are used in various transient analyses of scaled facilities or NPPs. Integral test data are used to validate the computer codes as a whole, to see how the implemented models work together in a code. In the final stage, test results from the facilities are transferred to the NPP scale using computer codes.
Some of the experiments have confirmed the expected behavior of the system or procedure under study; in some experiments there have been unexpected phenomena that have caused changes to the original design to avoid the recognized problems. This is the main motivation for experimental studies on the thermal hydraulics of NPP safety systems. Naturally, the behavior of new system designs has to be checked with experiments, but so does that of existing designs if they are applied in conditions that differ from those they were originally designed for. New procedures for existing reactors and new safety-related systems have been developed for new nuclear power plant concepts. New experiments are continuously needed.
Abstract:
The objective of this master's thesis was to develop methods and guidelines for testing the embedded software of a frequency converter during its development. Suitable methods were sought through an extensive literature review and by surveying the company's testing practice. The methods found in the literature included test frameworks, simulation, and static as well as automated testing. The literature was also searched for methods that ease or otherwise improve the testing process; of these, test data selection, test-driven development, and improving testability were studied, among others. In addition, programming languages suitable for writing reusable tests were examined. Interviews and documentation gave a good picture of the company's prevailing testing practice and its problem areas. The identified problems were the lack of systematicness in the testing process and the need for testing training for the designers. To improve the testing process, the adoption of a unit testing framework is proposed. Testing training for the designers is also estimated to have a large effect on the whole testing process. Methods for designing more comprehensive test cases are presented.
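The unit testing framework idea can be illustrated with a minimal sketch using Python's unittest module. The module under test here, a ramp limiter of the kind frequency converter firmware might contain, and all its names are invented for illustration; the actual firmware is not described in the abstract.

```python
import unittest

# Hypothetical module under test: a ramp limiter that moves an output
# value toward a target by at most `max_step` per call.
def limit_ramp(target, current, max_step):
    """Return `current` moved toward `target`, clamped to `max_step`."""
    delta = max(-max_step, min(max_step, target - current))
    return current + delta

class RampLimiterTest(unittest.TestCase):
    def test_small_step_reaches_target(self):
        self.assertEqual(limit_ramp(10.0, 9.5, 1.0), 10.0)

    def test_large_step_is_clamped(self):
        self.assertEqual(limit_ramp(10.0, 0.0, 1.0), 1.0)

    def test_negative_direction(self):
        self.assertEqual(limit_ramp(0.0, 5.0, 2.0), 3.0)

# Run the suite programmatically (equivalent to `python -m unittest`)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RampLimiterTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test case exercises one behaviour of the module in isolation, which is the systematic, repeatable practice the framework is meant to enforce.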
Abstract:
The purpose of this thesis is to reveal how laser cutting parameters influence the laser cutting of particleboard, HDF and MDF. The literature review introduces the basic principles of the CO2 laser, CO2 laser equipment, and its usage in the cutting of wood-based materials. The experimental part focuses on the discussion and analysis of the test data and attempts to draw conclusions on the influence of various parameters, including laser power, focal length of the lens, and cutting gas, on the cutting speed and kerf quality. The tested materials include particleboard, HDF and MDF samples of various thicknesses. A TRUMPF TLF2700 HQ laser was used for the experiments. To obtain valid data, the test samples must be completely cut through without any bonding of wood fibre. The maximum cutting speed depends linearly on the laser power, on the condition that the other parameters are constant. For each thickness of a specific material type, there is a minimum laser power for cutting. Normally, the top and bottom kerf widths increase as the laser power increases. There may be a critical laser power which generates the minimum cross-sectional kerf width. A lens with a larger focal length may achieve a higher cutting speed. As the focal length becomes larger, the top kerf width tends to increase while the bottom and cross-sectional kerf widths tend to decrease. Of all the cutting gases, oxygen helps achieve the highest cutting speed. The gas pressure of nitrogen does not seem to have a strong influence on the cutting result. Generally, 2 bar air is preferable for higher cutting speed. For particleboard and MDF samples thicker than 12 mm, 2 bar argon can be used to reach a remarkably higher cutting speed than 5 bar. Generally, the 190.5 mm lens produces the smallest total kerf width. The kerf sides of thicker samples are darker than those of thinner ones. The sample darkness tends to decrease as the laser power increases.
The 63.5 mm lens seemed to cause more darkening than the other lenses. Cutting gases at 5 bar produce less dark kerf sides than those at 2 bar. Oxygen normally causes darker kerfs than the other gases. No distinct differences were found between nitrogen and argon.
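The stated linear dependence of maximum cutting speed on laser power, together with the existence of a minimum cutting power, can be sketched as a straight-line fit whose zero crossing estimates the minimum power. The measurement values below are invented for illustration, not taken from the thesis.

```python
import numpy as np

# Hypothetical measurements for one material and thickness:
# laser power [W] vs. maximum speed [m/min] at which the sample
# was still cut completely through.
power = np.array([800.0, 1200.0, 1600.0, 2000.0, 2400.0])
speed = np.array([1.2, 3.1, 5.0, 6.8, 8.9])

# Linear model v = k*P + c; the minimum cutting power is where v = 0.
k, c = np.polyfit(power, speed, 1)
p_min = -c / k   # estimated minimum laser power for this material [W]
```

The fit only makes sense above the minimum power; below it the model extrapolates into a regime where no cut is obtained at all.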
Abstract:
This master's thesis studies a starting failure of line-started brushless synchronous motors, in which the motor becomes excited only several seconds after the excitation is switched on, even though the excitation equipment operates normally. The reason for the delayed excitation is that the exciter is short-circuited through the thyristor branch acting as the rotor's overvoltage protection, even though the thyristors of this branch are supposed to be in a non-conducting state when the exciter starts supplying the excitation current. The reasons why the thyristors remain conducting after the excitation is switched on are sought from measurement results obtained in starting tests of synchronous motors and from starting simulations performed with the SMT and FCSMEK calculation programs. At the same time, the usefulness of these programs in predicting the starting failure is evaluated. The thesis presents the causes of the delayed excitation in two example machines, as well as modifications to the rotor circuit and the starting procedure with which the studied starting failure could probably be avoided in the future.
Abstract:
This master's thesis surveys the literature on how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods which imitate the natural evolution process. An artificial evolution process evaluates the fitness of each individual, the individuals being candidate solutions. The next population of candidate solutions is formed from the good properties of the current population by applying different mutation and crossover operations. Different kinds of evolutionary algorithm applications related to software engineering were sought in the literature, classified, and presented, together with the necessary basics of evolutionary algorithms. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling in parallel computing, allocating modules to subsystems, N-version programming, test data generation, and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer Aided Software Engineering tools based on evolutionary algorithms.
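A minimal sketch of the evolutionary loop described above, applied to the test data generation use case: the toy fitness function rewards inputs that get close to a hypothetical hard-to-hit branch condition (`x == 4242`) in a program under test, and the population is improved through selection, crossover and mutation. All parameters, names and the target value are illustrative assumptions.

```python
import random

random.seed(1)

# Toy fitness: distance of a candidate input from triggering the
# hypothetical branch `if x == 4242:` (closer is fitter).
def fitness(x):
    return -abs(x - 4242)

def evolve(pop_size=30, generations=60, mutation=0.3):
    pop = [random.randint(0, 10000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                    # crossover: averaging of parents
            if random.random() < mutation:
                child += random.randint(-100, 100)  # mutation: small perturbation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()   # an input close to (or exactly) the branch condition
```

Because the fittest half is carried over unchanged, the best candidate never gets worse from one generation to the next, which is the elitism property many of the surveyed applications rely on.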
Abstract:
The purpose of this work was to design and carry out thermal-hydraulic experiments on overcooling transients of a VVER-440-type nuclear reactor pressure vessel. A sudden overcooling accident could have a negative effect on the mechanical strength of the pressure vessel. If part of the pressure vessel is compromised, the intense pressure inside a pressurized water reactor could cause the wall to fracture. Information on the heat transfer along the outside of the pressure vessel wall is necessary for stress analysis. Basic knowledge of the overcooling accident and of the heat transfer types on the outside of the pressure vessel is presented as background information. A test facility was designed and built to study and measure heat transfer during specific overcooling scenarios. Two test series were conducted, the first one concentrating on the very beginning of the transient and the second one on steady state heat transfer. Heat transfer coefficients are calculated from the test data using an inverse method, which yields better results in fast transients than direct calculation from the measurement results. The results show that the heat transfer rate varies considerably during the transient, being very high in the beginning and dropping to a steady state in a few minutes. The test results show that appropriate correlations can be used in future analyses.
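To give a feel for what recovering a heat transfer coefficient from temperature data involves, the sketch below uses the simplest possible approach: a lumped-capacitance cooling curve linearized by a logarithm. All material values are assumed, and this direct fit is only illustrative; the thesis uses a proper inverse method, which behaves better in fast transients than direct calculation of this kind.

```python
import numpy as np

# Lumped-capacitance model: a wall element cooling in coolant at T_inf
# follows T(t) = T_inf + (T0 - T_inf) * exp(-h*A/(m*c) * t).
m, c, A = 2.0, 500.0, 0.01          # kg, J/(kg K), m^2 (assumed values)
T_inf, T0, h_true = 20.0, 300.0, 1500.0   # deg C, deg C, W/(m^2 K)

# Synthetic "measured" temperature history
t = np.linspace(0.0, 200.0, 50)
T = T_inf + (T0 - T_inf) * np.exp(-h_true * A / (m * c) * t)

# Linearize: log(T - T_inf) = log(T0 - T_inf) - (h*A/(m*c)) * t,
# so the heat transfer coefficient follows from the fitted slope.
slope, _ = np.polyfit(t, np.log(T - T_inf), 1)
h_est = -slope * m * c / A
```

On noise-free synthetic data the fit recovers h exactly; on real transient measurements the inverse method is needed precisely because this simple linearization breaks down.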
Abstract:
This work answers the need to manage the quality of a high-pressure water mist nozzle with the tools of fluid mechanics. In addition to nozzle test data, the behaviour of the flow inside the nozzle is studied by means of CFD calculation. The flow modelling is done with a Navier-Stokes based calculation method. The theoretical part of the work discusses fluid engineering and its development in general. The basic theory and technical solutions used in nozzle calculation are also presented, as is the basic theory of computational fluid dynamics (CFD). The research part presents the processed nozzle test results and models the nozzle flow with a calculation method based on steady-state flow computation. The flow calculation uses the SIMPLE solver of the OpenFOAM software together with the k-omega SST turbulence model. Flow modelling was carried out at all pressures that are actually used in nozzle testing. In addition, possible cavitation locations in the nozzle were identified and a cavitation-preventing nozzle geometry was designed. Temperature and impurities were also found to affect cavitation, and the effect of temperature was modelled. A model was created with which the challenges of nozzle design can be answered by numerical calculation.
Abstract:
Nowadays the variety of fuels used in power boilers is widening, and new boiler constructions and operating models have to be developed. This research and development is done in small pilot plants, where a faster analysis of the boiler mass and heat balance is needed in order to make the right decisions already during the test run. The obstacle to determining the boiler balance during test runs is the long process of chemically analysing the collected input and output matter samples. The present work concentrates on finding a way to determine the boiler balance without chemical analyses and on optimising the test rig to get the best possible accuracy for the heat and mass balance of the boiler. The purpose of this work was to create an automatic boiler balance calculation method for the 4 MW CFB/BFB pilot boiler of Kvaerner Pulping Oy, located in Messukylä in Tampere. The calculation was created in the data management computer of the pilot plant's automation system. The calculation is made in the Microsoft Excel environment, which gives a good base and functions for handling large databases and calculations without any delicate programming. The automation system of the pilot plant was reconstructed and updated by Metso Automation Oy during the year 2001, and the new MetsoDNA system has good data management properties, which is necessary for big calculations such as the boiler balance calculation. Two possible methods for calculating the boiler balance during a test run were found. Either the fuel flow is determined and used to calculate the boiler's mass balance, or the unburned carbon loss is estimated and the mass balance of the boiler is calculated on the basis of the boiler's heat balance. Both methods have their own weaknesses, so they were constructed in parallel in the calculation and the choice of method was left to the user. The user also needs to define the fuels used and some solid mass flows that are not measured automatically by the automation system.
A sensitivity analysis showed that the most essential values for an accurate boiler balance determination are the flue gas oxygen content, the boiler's measured heat output, and the lower heating value of the fuel. The theoretical part of this work concentrates on the error management of these measurements and analyses, and on measurement accuracy and boiler balance calculation in theory. The empirical part concentrates on the creation of the balance calculation for the boiler in question and on describing the work environment.
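The second balance method described above can be sketched in a few lines: estimate the fuel flow from the boiler's measured heat output and the fuel's lower heating value, with an assumed unburned-carbon loss. All numbers and the loss fractions are illustrative assumptions, not values from the pilot plant.

```python
# Heat-balance route to the mass balance: the fuel flow is whatever
# closes the heat balance for the measured heat output.
def fuel_flow_from_heat_balance(heat_output_kw, lhv_kj_per_kg,
                                unburned_carbon_fraction=0.02,
                                heat_loss_fraction=0.05):
    """Return the fuel mass flow [kg/s] that closes the heat balance."""
    # Effective heat released per kg of fuel after the assumed
    # unburned-carbon loss and other heat losses.
    effective_lhv = lhv_kj_per_kg * (1.0 - unburned_carbon_fraction) \
                                  * (1.0 - heat_loss_fraction)
    return heat_output_kw / effective_lhv

# A 4 MW boiler burning a fuel with an LHV of 17 MJ/kg
m_fuel = fuel_flow_from_heat_balance(4000.0, 17000.0)
```

This also makes the sensitivity result plausible: the estimate scales directly with the measured heat output and inversely with the lower heating value, so errors in either feed straight into the balance.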
Abstract:
This work is devoted to the problem of reconstructing the basis weight structure of a paper web with black-box techniques. The data analyzed comes from a real paper machine and was collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analyzing the properties of the paper web. Both ARMA and the DFT are used independently to represent the given signal in a simplified version of our algorithm, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test, combined with the Root Mean Squared Error coefficient, gives a tool to separate significant signals from noise.
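The Ljung-Box test mentioned above can be sketched directly from its definition, Q = n(n+2) Σ r_k²/(n−k): white noise yields a small Q, while an autocorrelated signal (here a synthetic AR(1) process, not paper machine data) yields a Q far above the chi-square critical value, flagging structure left in the signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def ljung_box_q(x, max_lag):
    """Ljung-Box Q statistic: Q = n(n+2) * sum_k r_k^2 / (n-k)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    denom = np.dot(x, x)
    q = 0.0
    for k in range(1, max_lag + 1):
        r_k = np.dot(x[:-k], x[k:]) / denom   # lag-k sample autocorrelation
        q += r_k**2 / (n - k)
    return n * (n + 2) * q

CHI2_95_DF10 = 18.31   # chi-square 95% critical value, 10 degrees of freedom

# White noise should typically pass the test; a strongly autocorrelated
# AR(1) signal should fail it decisively.
noise = rng.standard_normal(500)
ar1 = np.zeros(500)
for t in range(1, 500):
    ar1[t] = 0.8 * ar1[t - 1] + noise[t]

q_noise = ljung_box_q(noise, 10)
q_ar1 = ljung_box_q(ar1, 10)
```

In the thesis setting, a fitted ARMA model is adequate when its residuals behave like the `noise` case, i.e. the Q statistic stays below the critical value.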
Abstract:
The purpose of this work is to collect information on all test facilities around the world that have been used to study the blowdown phase of a large-break LOCA. The work is also intended to provide a basis for deciding whether it is necessary to build a new test facility for validating the calculation of fluid-structure interaction codes. Before building the actual test facility, it would also be appropriate to build a smaller pilot facility with which the measurement techniques to be used could be tested. Suitable measurement data are needed for validating the coupled calculation of new CFD codes and structural analysis codes. These codes can be used, for example, to assess the structural integrity of reactor internals during the blowdown phase of a large-break LOCA. The report concentrates on the test facilities found around the world, on the design basis of a new test facility, and on general matters related to the topic. The report does not replace existing validation matrices, but it can be used as an aid when searching for a large-break LOCA blowdown test facility suitable for validation purposes.
Abstract:
The purpose of the work was to realize a high-speed digital data transfer system for the RPC muon chambers in the CMS experiment at CERN's new LHC accelerator. This large-scale system took many years and many stages of prototyping to develop, and required the participation of tens of people. The system interfaces to the Front-end Boards (FEB) at the 200,000-channel detector and to the trigger and readout electronics in the control room of the experiment. The distance between these two is about 80 metres, and the speed required of the optical links was pushing the limits of available technology when the project was started. Here, as in many other aspects of the design, it was assumed that the features of readily available commercial components would develop in the course of the design work, just as they did. By choosing a high speed it was possible to multiplex the data from some of the chambers into the same fibres to reduce the number of links needed. Further reduction was achieved by employing zero suppression and data compression, so that a total of only 660 optical links were needed. Another requirement, which conflicted somewhat with choosing the components as late as possible, was that the design needed to be radiation tolerant to an ionizing dose of 100 Gy and to have a moderate tolerance to Single Event Effects (SEEs). This required some radiation test campaigns, and eventually led to ASICs being chosen for some of the critical parts. The system was made to be as reconfigurable as possible. The reconfiguration needs to be done from a distance, as the electronics is not accessible except during some short and rare service breaks once the accelerator starts running. Therefore reconfigurable logic is extensively used, and the firmware development for the FPGAs constituted a sizable part of the work. Some special techniques were needed there too, to achieve the required radiation tolerance.
The system has been demonstrated to work in several laboratory and beam tests, and now we are waiting to see it in action when the LHC starts running in the autumn of 2008.
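The zero-suppression idea mentioned above can be sketched simply: instead of transmitting every channel, send only (channel index, value) pairs for the channels that fired. The pair format here is invented for illustration; the real link protocol is far more involved.

```python
# Zero suppression: drop the channels that read zero and keep only
# (index, value) pairs, which pays off when most channels are quiet.
def zero_suppress(frame):
    """Compress a readout frame by dropping zero channels."""
    return [(i, v) for i, v in enumerate(frame) if v != 0]

def expand(pairs, length):
    """Reconstruct the original frame from the suppressed form."""
    frame = [0] * length
    for i, v in pairs:
        frame[i] = v
    return frame

frame = [0, 0, 3, 0, 0, 0, 1, 0]
packed = zero_suppress(frame)               # [(2, 3), (6, 1)]
assert expand(packed, len(frame)) == frame  # lossless round trip
```

The compression is lossless and its gain grows with the sparsity of the data, which is what made it effective for reducing the number of optical links.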
Abstract:
In this thesis, the components important for testing work and the organisational test process are identified and analysed. This work focuses on testing activities in real-life software organisations: identifying the important test process components, observing testing work in practice, and analysing how the organisational test process could be developed. Software professionals from 14 different software organisations were interviewed to collect data on the organisational test process and testing-related factors. Moreover, additional data on organisational aspects was collected with a survey of 31 organisations. This data was further analysed with the Grounded Theory method to identify the important test process components and to observe how real-life test organisations develop their testing activities. The results indicate that test management at the project level is an important factor; the organisations do have sufficient test resources available, but the resources are not necessarily applied efficiently. In addition, organisations in general are reactive: they develop their process mainly to correct problems, not to enhance their efficiency or output quality. The results of this study allow organisations to gain a better understanding of their test processes and to develop toward better practices and a culture of preventing problems rather than reacting to them.