12 results for Validation studies (Estudos de validação)

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

This thesis gives an overview of the validation process for thermal hydraulic system codes and presents in more detail the assessment and validation of the French code CATHARE for VVER calculations. Three assessment cases are presented: loop seal clearing, core reflooding and flow in a horizontal steam generator. The experience gained during these assessment and validation calculations has been used to analyze the behavior of the horizontal steam generator and the natural circulation in the geometry of the Loviisa nuclear power plant. The cases presented are not exhaustive, but they give a good overview of the work performed by the personnel of Lappeenranta University of Technology (LUT). A large part of the work has been performed in co-operation with the CATHARE team in Grenoble, France. The design of a Russian-type pressurized water reactor, VVER, differs from that of a Western-type PWR. Most thermal-hydraulic system codes are validated only for Western-type PWRs. The codes should therefore also be assessed and validated for the VVER design in order to establish any weaknesses in the models; this information is needed before the codes can be used for safety analysis. The results of the assessment and validation calculations presented here show that the CATHARE code can also be used for thermal-hydraulic safety studies of VVER-type plants. However, some areas have been identified that need to be reassessed once further experimental data become available. These areas are mostly connected to the horizontal steam generators, such as condensation and phase separation in the primary-side tubes. The work presented in this thesis covers a large number of the phenomena included in the CSNI code validation matrices for small and intermediate leaks and for transients, as well as some of the phenomena included in the matrix for large break LOCAs. The matrices for code validation for VVER applications should be used when future experimental programs for code validation are planned.

Relevance:

100.00%

Publisher:

Abstract:

Phytoestrogens are plant-derived compounds that exhibit activity similar to the human hormone estrogen. Phytoestrogens can be divided into three main groups, one significant group being the lignans. Lignans have been shown to have antioxidative, antiviral and antibacterial properties. They have also been shown to have positive effects in the prevention of hormone-dependent cancers. Because of these properties, there are efforts to exploit lignans, for example as active ingredients in functional foods. This work investigated the chemical properties of a lignan, hydroxymatairesinol (HMRlignanTM), and its suitability for different foodstuffs. The purpose of the work was to determine the chemical stability of hydroxymatairesinol added to foodstuffs under different storage and processing conditions, and to study the solubility of the lignan in different solvents. A high-performance liquid chromatography method was used to analyze hydroxymatairesinol in the foodstuffs. The method was validated before the actual analyses in accordance with the ICH guidelines. The validation examined the specificity, linearity, precision, accuracy, and the limits of detection and quantification of the chromatographic method for the lignan under study. The method proved well suited to the analysis of the lignan in foodstuffs. The water solubility of hydroxymatairesinol was found to be approximately 1 mg/ml. The studies showed hydroxymatairesinol to be stable at temperatures below 50 °C. At higher temperatures, hydroxymatairesinol was stable when added in powder form.
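
For reference, the detection and quantification limits mentioned above are commonly estimated under the ICH Q2 guideline from the calibration data as

    LOD = 3.3 * sigma / S        LOQ = 10 * sigma / S

where sigma is the standard deviation of the response (e.g., of the blank or of the regression residuals) and S is the slope of the calibration curve. This is the standard ICH formulation; the thesis may have used another of the ICH-sanctioned approaches (e.g., signal-to-noise).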

Relevance:

100.00%

Publisher:

Abstract:

Radiostereometric analysis (RSA) is a highly accurate method for the measurement of in vivo micromotion of orthopaedic implants. Validation of the RSA method is a prerequisite for performing clinical RSA studies. Only a limited number of studies have utilised the RSA method in the evaluation of migration and inducible micromotion during fracture healing. Volar plate fixation of distal radial fractures has increased in popularity. There is still very little prospective randomised evidence supporting the use of these implants over other treatments. The aim of this study was to investigate the precision, accuracy, and feasibility of using RSA in the evaluation of healing in distal radius fractures treated with a volar fixed-angle plate. A physical phantom model was used to validate the RSA method for simple distal radius fractures. A computer simulation model was then used to validate the RSA method for more complex interfragmentary motion in intra-articular fractures. A separate pre-clinical investigation was performed in order to evaluate the possibility of using novel resorbable markers for RSA. Based on the validation studies, a prospective RSA cohort study of fifteen patients with plated AO type-C distal radius fractures with a 1-year follow-up was performed. RSA was shown to be highly accurate and precise in the measurement of fracture micromotion using both physical and computer simulated models of distal radius fractures. Resorbable RSA markers demonstrated potential for use in RSA. The RSA method was found to have a high clinical precision. The fractures underwent significant translational and rotational migration during the first two weeks after surgery, but not thereafter. Maximal grip caused significant translational and rotational interfragmentary micromotion. This inducible micromotion was detectable up to eighteen weeks, even after the achievement of radiographic union. The application of RSA in the measurement of fracture fragment migration and inducible interfragmentary micromotion in AO type-C distal radius fractures is feasible but technically demanding. RSA may be a unique tool in defining the progress of fracture union.
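
As background on how marker-based migration of the kind reported above is typically computed (a standard RSA formulation; the abstract does not specify the exact algorithm used): the rigid-body motion of a fracture fragment between two examinations is obtained from its marker coordinates x_i (reference) and y_i (follow-up) by minimizing

    sum_i || R x_i + t - y_i ||^2

over rotations R and translations t, for example with a singular value decomposition based least-squares fit; R yields the rotational and t the translational migration components.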

Relevance:

40.00%

Publisher:

Abstract:

Coronary artery disease is an atherosclerotic disease that leads to narrowing of the coronary arteries, deteriorated myocardial blood flow and myocardial ischaemia. In acute myocardial infarction, a prolonged period of myocardial ischaemia leads to myocardial necrosis, and the necrotic myocardium is replaced with scar tissue. Myocardial infarction results in various changes in cardiac structure and function over time, a process known as "adverse remodelling". This remodelling may result in a progressive worsening of cardiac function and the development of chronic heart failure. In this thesis, we developed and validated three different large animal models of coronary artery disease, myocardial ischaemia and infarction for translational studies. In the first study, the coronary artery disease model combined induced diabetes and hypercholesterolemia. In the second study, myocardial ischaemia and infarction were induced surgically, and in the third study by catheterisation. For model characterisation, we used non-invasive positron emission tomography (PET) methods to measure myocardial perfusion, oxidative metabolism and glucose utilisation. Additionally, cardiac function was measured by echocardiography and computed tomography. To study the metabolic changes that occur during atherosclerosis, the hypercholesterolemic and diabetic model was used with [18F]fluorodeoxyglucose ([18F]FDG) PET imaging. The coronary occlusion models were used to evaluate metabolic and structural changes in the heart and the cardioprotective effects of levosimendan during post-infarction cardiac remodelling. The large animal models were also used to test novel radiopharmaceuticals for myocardial perfusion imaging. In the coronary artery disease model, we observed atherosclerotic lesions that were associated with focally increased [18F]FDG uptake. In the heart failure models, chronic myocardial infarction led to worsening of systolic function, cardiac remodelling and decreased efficiency of the cardiac pumping function. Levosimendan therapy reduced post-infarction infarct size and improved cardiac function. The novel 68Ga-labeled radiopharmaceuticals tested in this study were not successful for the determination of myocardial blood flow. In conclusion, diabetes and hypercholesterolemia led to the development of early-phase atherosclerotic lesions, and coronary artery occlusion produced considerable myocardial ischaemia, later infarction and subsequent myocardial remodelling. The experimental models evaluated in these studies will enable further studies of disease mechanisms, new radiopharmaceuticals and interventions in coronary artery disease and heart failure.

Relevance:

30.00%

Publisher:

Abstract:

The safe use of nuclear power plants (NPPs) requires a deep understanding of the physical processes and systems involved. Studies on thermal hydraulics have been carried out in various separate effects and integral test facilities at Lappeenranta University of Technology (LUT), either to ensure the functioning of the safety systems of light water reactors (LWRs) or to produce validation data for the computer codes used in the safety analyses of NPPs. Several examples of safety studies on the thermal hydraulics of nuclear power plants are discussed. The studies are related to physical phenomena occurring in different processes in NPPs, such as rewetting of the fuel rods, emergency core cooling (ECC), natural circulation, small break loss-of-coolant accidents (SBLOCA), non-condensable gas release and transport, and passive safety systems. Studies on both VVER and advanced light water reactor (ALWR) systems are included. The set of cases includes separate effects tests for understanding and modeling a single physical phenomenon, separate effects tests to study the behavior of an NPP component or a single system, and integral tests to study the behavior of the system as a whole. The following steps can be found in these studies, although not necessarily all within the same study. Experimental studies as such have provided solutions to existing design problems. Experimental data have been produced to validate a single model in a computer code. Validated models are used in various transient analyses of scaled facilities or NPPs. Integral test data are used to validate the computer codes as a whole, to see how the implemented models work together in a code. In the final stage, test results from the facilities are transferred to the NPP scale using computer codes. Some of the experiments have confirmed the expected behavior of the system or procedure studied; in others, unexpected phenomena have led to changes in the original design to avoid the recognized problems. This is the main motivation for experimental studies on the thermal hydraulics of NPP safety systems. Naturally, the behavior of new system designs has to be checked with experiments, but so does that of existing designs if they are applied under conditions that differ from those they were originally designed for. New procedures for existing reactors and new safety-related systems for new nuclear power plant concepts have been developed, so new experiments are continuously needed.

Relevance:

30.00%

Publisher:

Abstract:

Validation and verification operations encounter various challenges in the product development process, and requirements for an increasing development cycle pace set new demands on the component development process. Verification and validation usually represent the largest activities, consuming up to 40-50 % of the R&D resources utilized. This research studies validation and verification as part of the case company's component development process. The target is to define a framework that can be used to evaluate and develop validation and verification capability in display module development projects. The definition and background of validation and verification are studied, together with theories of project management, systems, organisational learning and causality. The framework and the key findings of the research are presented, and a feedback system based on the framework is defined and implemented in the case company. The research is divided into a theoretical part, conducted as a literature review, and an empirical part, conducted as a case study using the constructive and design research methods. A framework for capability evaluation and development was defined and developed as a result of this research. A key finding was that a double-loop learning approach combined with the validation and verification V+ model enables the definition of a feedback reporting solution; in addition, some minor changes to the validation and verification process were proposed. A few concerns are expressed about the validity and reliability of the study, the most important being the selected research method and model themselves: the final state can be normative, and the researcher may set expectations, or even results, before the actual study. Finally, the reliability and validity of the work are discussed.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents an experimental study and a numerical study, based on the discrete element method (DEM), of bell-less charging in the blast furnace. The numerical models are based on the microscopic interactions between the particles in the blast furnace charging process. The emphasis is on model validation, on investigating several phenomena in the charging process, and on finding factors that influence the results. The study considers and simulates size segregation in the hopper discharging process, as well as particle flow and behavior on the chute, which is the key piece of equipment in the charging system, using mono-sized spherical particles, multi-sized spheres and non-spherical particles. The behavior of the particles at the burden surface and pellet percolation into a coke layer are also studied. Small-scale experiments are used to validate the DEM models.
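
For context, DEM simulations of this kind integrate Newton's second law for each particle i,

    m_i dv_i/dt = sum_j F_ij + m_i g,

where F_ij are the contact forces with neighboring particles and walls. A widely used linear spring-dashpot contact law (one common choice; the thesis's actual contact model is not stated in the abstract) gives the normal force as

    F_n = k_n * delta_n - gamma_n * v_n,

with delta_n the particle overlap, v_n the normal relative velocity, k_n a spring stiffness and gamma_n a damping coefficient.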

Relevance:

30.00%

Publisher:

Abstract:

Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining this data with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to be able to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended to run on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, the models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
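
As an illustration of the nested cross-validation recommended above, here is a minimal sketch in Python with scikit-learn, using synthetic data; the feature counts, estimator and parameter grid are placeholders, not the thesis's actual setup:

    # Minimal sketch of nested cross-validation for variant selection,
    # using synthetic data as a stand-in for real GWAS genotypes.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.pipeline import Pipeline

    # Synthetic "genotype matrix": 200 samples, 1000 features (variants).
    X, y = make_classification(n_samples=200, n_features=1000,
                               n_informative=20, random_state=0)

    # Feature selection sits inside the pipeline so it is refit on every
    # training fold; selecting features outside the loop would leak
    # information and inflate the accuracy estimate.
    pipe = Pipeline([("select", SelectKBest(f_classif)),
                     ("clf", LogisticRegression(max_iter=1000))])
    grid = {"select__k": [10, 50, 100]}

    # The inner loop tunes the number of selected variants; the outer
    # loop gives a less biased estimate of generalization performance.
    inner = GridSearchCV(pipe, grid, cv=3)
    scores = cross_val_score(inner, X, y, cv=5)
    print("accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))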

Relevance:

30.00%

Publisher:

Abstract:

Positron emission tomography (PET) imaging has both academic and applied uses in revealing the distribution and density of different molecular targets in the central nervous system. Following the significant progress made with the dopamine D2 receptor, advances have been made in developing PET tracers that allow the analysis of receptor occupancy of many other receptor types, as well as the evaluation of changes in endogenous synaptic concentrations of transmitters such as serotonin and noradrenaline. Noradrenergic receptors are divided into α1-, α2- and β-adrenoceptor subfamilies, each of which in humans is composed of three receptor subtypes. The α2-adrenoceptors have an important presynaptic auto-inhibitory function on noradrenaline release, but they also have postsynaptic roles in modulating the release of other neurotransmitters, such as serotonin and dopamine. One of the subtypes, the α2C-adrenoceptor, has been detected at distinct locations in the central nervous system, most notably the dorsal striatum. Several serious neurological conditions causing dementia, including Alzheimer's disease and Parkinson's disease, have been linked to disturbed noradrenergic signaling. Furthermore, altered noradrenergic signaling has also been implicated in conditions like ADHD, depression, anxiety and schizophrenia. To benefit future research into these central nervous system disorders, and to support the clinical development of drugs affecting brain noradrenergic neurotransmission, validation work on a novel tracer for positron emission tomography studies in humans was performed. Altogether, 85 PET imaging experiments were performed during four separate clinical trials. The repeatability of [11C]ORM-13070 binding was tested in healthy individuals, followed by a study evaluating the dose-dependent displacement of [11C]ORM-13070 from α2C-adrenoceptors by a competing ligand; the final two studies examined the sensitivity of [11C]ORM-13070 binding to changes in endogenous noradrenaline levels. The repeatability of [11C]ORM-13070 binding was very high. The binding properties of the tracer allowed a reliable estimation of α2C-adrenoceptor occupancy using the reference tissue ratio method, with low test-retest variability. [11C]ORM-13070 was dose-dependently displaced from its specific binding sites by the subtype-nonselective α2-adrenoceptor antagonist atipamezole, and it thus proved suitable for use in the clinical drug development of novel α2C-adrenoceptor ligands, e.g. to determine the best doses and dosing intervals for clinical trials. Convincing experimental evidence was gained to support the suitability of [11C]ORM-13070 for detecting an increase in endogenous synaptic noradrenaline in the human brain. Tracer binding in the thalamus tended to increase in accordance with reduced activity of noradrenergic projections from the locus coeruleus, although statistical significance was not reached. Thus, the investigation was unable to fully validate [11C]ORM-13070 for the detection of pharmacologically evoked reductions in noradrenaline levels.
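
For orientation, ratio-based occupancy estimates of the kind mentioned above are commonly computed as follows (a standard formulation; the thesis's exact kinetic analysis is not given in the abstract): with the specific-binding index BP = C_target / C_reference - 1 computed from the tissue activity ratio, receptor occupancy by a competing drug is

    Occupancy = 1 - BP_drug / BP_baseline,

so that complete blockade of specific binding drives the target-to-reference ratio toward 1.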

Relevance:

30.00%

Publisher:

Abstract:

Prostate cancer (PCa) has emerged as the most commonly diagnosed lethal cancer in European men. PCa is a heterogeneous cancer that in the majority of cases is slow growing; consequently, many of these patients would not need any medical treatment. Currently, the measurement of prostate-specific antigen (PSA) from blood by immunoassay, followed by digital rectal examination and a pathological examination of prostate tissue biopsies, are the most widely used methods in the diagnosis of PCa. These methods suffer from a lack of sensitivity and specificity that may cause either missed cancers or overtreatment as a consequence of over-diagnosis. Therefore, more reliable biomarkers are needed for a better discrimination between indolent and potentially aggressive cancers. The aim of this thesis was the identification and validation of novel biomarkers for PCa. The mRNA expression levels of 14 genes, including AMACR, AR, PCA3, SPINK1, TMPRSS2-ERG, KLK3, ACSM1, CACNA1D, DLX1, LMNB1, PLA2G7, RHOU, SPON2, and TDRD1, were measured by a truly quantitative reverse transcription PCR in different prostate tissue samples from men with and without PCa. For the last eight genes, the function of the gene in PCa progression was studied by specific siRNA knockdown in PC-3 and VCaP cells. The results from radical prostatectomy and cystoprostatectomy samples showed statistically significant overexpression of all the target genes except KLK3 in men with PCa compared with men without PCa. Statistically significant differences were also observed between low and high Gleason grade tumors (for PLA2G7), between PSA relapse and no relapse (for SPON2), and between low and high TNM stages (for CACNA1D and DLX1). Functional studies with siRNA silencing revealed a cytotoxic effect of the knockdown of DLX1, PLA2G7, and RHOU, and altered tumor cell invasion for ACSM1, CACNA1D, PLA2G7, and RHOU knockdown under 3D conditions. In addition, effects on tumor cell motility were observed after silencing PLA2G7 and RHOU in 2D monolayer cultures. Altogether, these findings indicate the possibility of utilizing these new markers as diagnostic and prognostic markers, and they may also represent therapeutic targets for PCa.

Relevance:

30.00%

Publisher:

Abstract:

The traditional business models and the traditionally successful development methods that were distinctive of the industrial era do not satisfy the needs of modern IT companies. Due to the rapid nature of IT markets, the uncertainty of new innovations' success and the overwhelming competition with established companies, startups need to make quick decisions and eliminate wasted resources more effectively than ever before. There is a need for an empirical basis on which to build business models, as well as to evaluate presumptions regarding value and profit. Less than ten years ago, the Lean software development principles and practices became widely known in academic circles. Those practices help startup entrepreneurs to validate their learning, test their assumptions and become more dynamic and flexible. What is special about today's software startups is that they are increasingly individual. Quantitative research on the details of Lean startups is available; broad research with hundreds of companies presented in a few charts is informative, but a detailed study of fewer examples gives insight into the way software entrepreneurs see the Lean startup philosophy and how they describe it in their own words. This thesis focuses on the early phases of Lean software startups, namely Customer Discovery (discovering a valuable solution to a real problem) and Customer Validation (being in a good market with a product that satisfies that market). The thesis first offers a sufficiently compact introduction to the Lean software startup concept for a reader who is not previously familiar with the term. The Lean startup philosophy is then put to a real-life test, based on interviews with four Finnish Lean software startup entrepreneurs. The interviews reveal 1) whether the Lean startup philosophy is actually valuable for them, 2) how the theory can be practically implemented in real life, and 3) whether theoretical Lean startup knowledge compensates for a lack of entrepreneurship experience. The reader becomes familiar with the key elements and tools of Lean startups, as well as their mutual connections. The thesis explains why Lean startups waste less time and money than many other startups. The thesis, especially its research sections, aims at providing data and analysis simultaneously.
