945 results for Air Quality Modelling


Relevance: 30.00%

Abstract:

In many European countries, image quality for digital x-ray systems used in screening mammography is currently specified using a threshold-detail detectability method. This is a two-part study that proposes an alternative method based on calculated detectability for a model observer: the first part of the work presents a characterization of the systems. Eleven digital mammography systems were included in the study: four computed radiography (CR) systems and a group of seven digital radiography (DR) detectors, composed of three amorphous selenium-based detectors, three caesium iodide scintillator systems and a silicon wafer-based photon counting system. The technical parameters assessed included the system response curve, detector uniformity error, pre-sampling modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE). The approximate quantum-noise-limited exposure range was examined using a separation of noise sources based upon standard deviation. Noise separation showed that electronic noise was the dominant noise at low detector air kerma for three systems; the remaining systems showed quantum-noise-limited behaviour between 12.5 and 380 µGy. Greater variation in detector MTF was found for the DR group than for the CR systems; MTF at 5 mm⁻¹ varied from 0.08 to 0.23 for the CR detectors against a range of 0.16-0.64 for the DR units. The needle CR detector had a higher MTF, lower NNPS and higher DQE at 5 mm⁻¹ than the powder CR phosphors. DQE at 5 mm⁻¹ ranged from 0.02 to 0.20 for the CR systems, while DQE at 5 mm⁻¹ for the DR group ranged from 0.04 to 0.41, indicating higher DQE for the DR detectors and the needle CR system than for the powder CR phosphor systems. The technical evaluation section of the study showed that the digital mammography systems were well set up and exhibited typical performance for the detector technology employed in the respective systems.
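As a pointer to how the reported metrics fit together, the sketch below combines a measured presampling MTF and NNPS into a DQE estimate via the standard relation DQE(f) = MTF²(f) / (Ka · q · NNPS(f)); the air kerma Ka, the fluence-per-kerma factor q and all numeric inputs are placeholders, not values from this study.

```python
import numpy as np

def dqe(freq_mm, mtf, nnps_mm2, air_kerma_uGy, q_per_mm2_per_uGy):
    """DQE(f) = MTF^2(f) / (Ka * q * NNPS(f)), as in IEC 62220-style analyses.

    freq_mm           : spatial frequencies [mm^-1]
    mtf               : presampling MTF at those frequencies (unitless)
    nnps_mm2          : normalized noise power spectrum [mm^2]
    air_kerma_uGy     : detector air kerma Ka [uGy]
    q_per_mm2_per_uGy : photon fluence per unit air kerma for the beam quality
                        (placeholder value below; depends on the spectrum used)
    """
    mtf = np.asarray(mtf, dtype=float)
    nnps = np.asarray(nnps_mm2, dtype=float)
    return mtf**2 / (air_kerma_uGy * q_per_mm2_per_uGy * nnps)

# Illustrative placeholder inputs (not measurements from the study):
f = np.array([0.5, 1.0, 2.0, 5.0])           # mm^-1
mtf = np.array([0.95, 0.88, 0.70, 0.30])     # unitless
nnps = np.array([4e-6, 3e-6, 2e-6, 1.5e-6])  # mm^2
print(dqe(f, mtf, nnps, air_kerma_uGy=100.0, q_per_mm2_per_uGy=5000.0))
```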

Relevance: 30.00%

Abstract:

In recent research, both soil (root-zone) and air temperature have been used as predictors of the treeline position worldwide. In this study, we intended to (a) test the proposed temperature limitation at the treeline, and (b) investigate effects of season length on both heat sum and mean temperature variables in the Swiss Alps. As soil temperature data are available for a limited number of sites only, we developed an air-to-soil transfer model (ASTRAMO). The air-to-soil transfer model predicts daily mean root-zone temperatures (10 cm below the surface) at the treeline exclusively from daily mean air temperatures. The model, calibrated with air and root-zone temperature measurements at nine treeline sites in the Swiss Alps, incorporates time lags to account for the damping effect between air and soil temperatures as well as the temporal autocorrelations typical of such chronological data sets. Based on the measured and modeled root-zone temperatures, we analyzed the suitability of the thermal treeline indicators seasonal mean and degree-days to describe the Alpine treeline position. The root-zone indicators were then compared to the respective indicators based on measured air temperatures, with all indicators calculated for two different indicator period lengths. For both temperature types (root-zone and air) and both indicator periods, seasonal mean temperature was the indicator with the lowest variation across all treeline sites. The resulting indicator values were 7.0 °C ± 0.4 SD (short indicator period) and 7.1 °C ± 0.5 SD (long indicator period) for root-zone temperature, and 8.0 °C ± 0.6 SD (short indicator period) and 8.8 °C ± 0.8 SD (long indicator period) for air temperature. Generally, a higher variation was found for all air-based treeline indicators than for the root-zone temperature indicators. Despite this, we showed that treeline indicators calculated from both air and root-zone temperatures can be used to describe the Alpine treeline position.
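To make the two indicators concrete, here is a minimal sketch of how a seasonal mean and a degree-day sum could be computed from daily mean temperatures over an indicator period; the 0 °C degree-day base and the season bounds are assumptions for illustration, not the definitions used in the paper.

```python
import numpy as np

def treeline_indicators(daily_temp_c, season_mask, base_c=0.0):
    """Seasonal mean and degree-day sum over an indicator period.

    daily_temp_c : array of daily mean temperatures [deg C] (root-zone or air)
    season_mask  : boolean array marking days inside the indicator period
    base_c       : degree-day base temperature (assumed 0 deg C here)
    """
    t = np.asarray(daily_temp_c, dtype=float)[np.asarray(season_mask, dtype=bool)]
    seasonal_mean = t.mean()
    degree_days = np.clip(t - base_c, 0.0, None).sum()
    return seasonal_mean, degree_days

# Toy example: an assumed 120-day indicator period with synthetic temperatures
days = np.arange(365)
temps = 8.0 + 6.0 * np.sin(2 * np.pi * (days - 120) / 365)
season = (days >= 150) & (days < 270)
print(treeline_indicators(temps, season))
```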

Relevance: 30.00%

Abstract:

The Driver Scheduling Problem (DSP) consists of selecting a set of duties for vehicle drivers, for example bus, train, plane or boat drivers or pilots, for the transportation of passengers or goods. This is a complex problem because it involves several constraints related to labour and company rules and can also present different evaluation criteria and objectives. Being able to develop an adequate model for this problem, one that represents the real problem as closely as possible, is an important research area. The main objective of this research work is to present new mathematical models for the DSP that represent all the complexity of the driver scheduling problem, and also to demonstrate that the solutions of these models can easily be implemented in real situations. This issue has been recognized by several authors as an important problem in public transportation. The most well-known and general formulation for the DSP is the Set Partitioning/Set Covering model (SPP/SCP). However, to a large extent these models simplify some of the specific business aspects and issues of real problems. This makes it difficult to use them in automatic planning systems, because the schedules obtained must be modified manually before they can be implemented in real situations. Based on extensive passenger transportation experience with bus companies in Portugal, we propose new alternative models to formulate the DSP. These models are also based on set partitioning/covering models; however, they take into account the bus operator's issues as well as the perspective, opinions and environment of the user. We follow the steps of the Operations Research methodology, which consist of: identify the problem; understand the system; formulate a mathematical model; verify the model; select the best alternative; present the results of the analysis; and implement and evaluate. All the processes are carried out with the close participation and involvement of the final users from different transportation companies. The planners' opinions and main criticisms are used to improve the proposed model in a continuous enrichment process. The final objective is to have a model that can be incorporated into an information system and used as an automatic tool to produce driver schedules. Therefore, the criterion for evaluating the models is their capacity to generate real and useful schedules that can be implemented without many manual adjustments or modifications. We have considered the following as measures of the quality of the model: simplicity, solution quality and applicability. We tested the alternative models with a set of real data obtained from several different transportation companies and analyzed the optimal schedules obtained with respect to the applicability of the solution to the real situation. To do this, the schedules were analyzed by the planners to determine their quality and applicability. The main result of this work is the proposal of new mathematical models for the DSP that better represent the realities of passenger transportation operators and lead to better schedules that can be implemented directly in real situations.
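For reference, the generic set partitioning/set covering (SPP/SCP) formulation that the proposed models extend can be written as follows; the operator-specific labour rules and evaluation criteria discussed above are added on top of this base model.

```latex
% Generic SPP/SCP formulation: J = set of feasible duties, I = set of pieces of
% work to be covered, c_j = cost of duty j, a_ij = 1 if duty j covers piece i.
\begin{align*}
\min\ & \sum_{j \in J} c_j x_j \\
\text{s.t.}\ & \sum_{j \in J} a_{ij} x_j = 1 && \forall i \in I
  \quad\text{(set partitioning; use $\ge 1$ for set covering)} \\
& x_j \in \{0, 1\} && \forall j \in J
\end{align*}
```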

Relevance: 30.00%

Abstract:

Contamination of weather radar echoes by anomalous propagation (anaprop) mechanisms remains a serious issue in the quality control of radar precipitation estimates. Although significant progress has been made in identifying clutter due to anaprop, there is no unique method that settles the question of data reliability without removing genuine data. The work described here relates to the development of a software application that uses a numerical weather prediction (NWP) model to obtain the temperature, humidity and pressure fields needed to calculate the three-dimensional structure of the atmospheric refractive index, from which a physically based prediction of the incidence of clutter can be made. This technique can be used in conjunction with existing methods for clutter removal by modifying the parameters of detectors or filters according to the physical evidence for anomalous propagation conditions. The parabolic equation method (PEM) is a well-established technique for solving the equations for beam propagation in a non-uniformly stratified atmosphere, but, although intrinsically very efficient, it is not sufficiently fast to be practicable for near real-time modelling of clutter over the entire area observed by a typical weather radar. We demonstrate a fast hybrid PEM technique that is capable of providing acceptable results in conjunction with a high-resolution terrain elevation model, using a standard desktop personal computer. We discuss the performance of the method and approaches for the improvement of the model profiles in the lowest levels of the troposphere.
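As background to the refractive-index calculation, the sketch below derives modified refractivity profiles from NWP-style temperature, pressure and humidity columns using the standard relations N = 77.6 P/T + 3.73×10⁵ e/T² and M = N + 0.157 h (h in metres); layers where dM/dh < 0 indicate ducting conditions favourable to anaprop. The column values are placeholders, not model output.

```python
import numpy as np

def refractivity(p_hpa, t_kelvin, e_hpa):
    """Radio refractivity N = 77.6*P/T + 3.73e5*e/T^2 (P, e in hPa, T in K)."""
    return 77.6 * p_hpa / t_kelvin + 3.73e5 * e_hpa / t_kelvin**2

def modified_refractivity(n, height_m):
    """Modified refractivity M = N + 0.157*h, with h in metres."""
    return n + 0.157 * height_m

# Toy NWP column (placeholder values, not actual model output):
h = np.array([0.0, 100.0, 200.0, 300.0])      # m above ground
p = np.array([1013.0, 1001.0, 989.0, 977.0])  # hPa
t = np.array([288.0, 289.5, 288.5, 287.5])    # K (elevated inversion)
e = np.array([14.0, 9.0, 8.0, 7.5])           # hPa water vapour pressure

m = modified_refractivity(refractivity(p, t, e), h)
dm_dh = np.diff(m) / np.diff(h)   # ducting layers where dM/dh < 0
print(m, dm_dh)
```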

Relevance: 30.00%

Abstract:

Asbestos is an industrial term describing certain fibrous silicate minerals belonging to the amphibole or serpentine groups. Six minerals are defined as asbestos, but only in their fibrous form: chrysotile (white asbestos), amosite (grunerite, brown asbestos), crocidolite (riebeckite, blue asbestos), anthophyllite, tremolite and actinolite. In 1973, the IARC (International Agency for Research on Cancer) classified the asbestos minerals as carcinogenic substances (IARC, 1973). The Swiss threshold limit (VME) is 0.01 fibre/ml (SUVA, 2007). Asbestos has been prohibited in Switzerland since 1990, but this does not mean the asbestos problem is over. Up to 20,000 tonnes/year of asbestos were imported between the end of WWII and 1990, and all this asbestos is still present in buildings renovated or built during that period. During renovations, asbestos fibres can be emitted into the air, and the emission has to be quantified accurately. Defining the exact risk to workers or to the population is difficult, as many factors must be considered. The methods for detecting asbestos in the air or in materials are still being discussed today. Even though the EPA 600 method (EPA, 1993) has proved itself for the analysis of bulk materials, the method for air analysis is more problematic. In Switzerland, the recommended method is VDI 3492, using scanning electron microscopy (SEM), but we have encountered many identification problems with this method. For instance, overloaded filters or long-term exposed filters cannot be analysed. This is why the Institute for Work and Health (IST) has adapted the ISO 10312 method: ambient air - determination of asbestos fibres - direct-transfer transmission electron microscopy (TEM) method (ISO, 1995). Quality controls have already been carried out at a French institute (INRS), which validate our practical experience. The direct transfer from MEC filters onto TEM supports (grids) is a delicate part of the preparation for analysis and required many trials in the laboratory; IST managed to produce proper grid preparations after about two years of development. In addition to the preparation of samples, the micro-analysis (EDX), the micro-diffraction and the morphological analysis (figure 1.a-c) also have to be mastered. These are the three elements that prove the different features of asbestos identification, and SEM is not able to combine these three analyses. TEM is also able to distinguish artificial from natural fibres with very similar chemical compositions, as well as to differentiate types of asbestos. Finally, the experiments carried out by IST show that TEM is the best method to quantify and identify asbestos in the air.
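For orientation, the airborne fibre concentration in direct-transfer TEM counting follows the usual membrane-filter relation sketched below; the input values are placeholders, and the counting and identification rules of ISO 10312 are not reproduced here.

```python
def fibre_concentration(n_fibres, filter_area_mm2, n_openings,
                        opening_area_mm2, air_volume_l):
    """Fibres per litre: C = n * A_filter / (k * a_opening * V).

    n_fibres         : asbestos fibres counted on the examined grid openings
    filter_area_mm2  : effective filtration area of the sampling filter
    n_openings       : number of TEM grid openings examined
    opening_area_mm2 : area of one grid opening
    air_volume_l     : volume of air sampled [litres]
    """
    return n_fibres * filter_area_mm2 / (n_openings * opening_area_mm2 * air_volume_l)

# Placeholder example (values are illustrative only):
c_per_l = fibre_concentration(n_fibres=12, filter_area_mm2=385.0,
                              n_openings=40, opening_area_mm2=0.01,
                              air_volume_l=3000.0)
print(c_per_l, "fibres per litre =", c_per_l / 1000.0, "fibres per ml")
```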

Relevance: 30.00%

Abstract:

The proportion of the population living in or around cities is greater than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems such as air pollution, land waste or noise, as well as health problems, are the result of this still-continuing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to get a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming the geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed. A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scale laws are used for characterising urban clusters. In a last section, population evolution is modelled using a model close to the well-established gravity model. The work covers quite a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
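As one concrete example of the fractal statistics mentioned above, here is a minimal gliding-box lacunarity sketch for a binary built-up-area map, following the Allain and Cloitre definition Λ(r) = ⟨M²⟩ / ⟨M⟩²; the map is synthetic and the code is illustrative, not the thesis implementation.

```python
import numpy as np

def gliding_box_lacunarity(binary_map, box_size):
    """Lacunarity L(r) = <M^2> / <M>^2 over all r x r gliding boxes."""
    a = np.asarray(binary_map, dtype=float)
    n_rows, n_cols = a.shape
    masses = []
    for i in range(n_rows - box_size + 1):
        for j in range(n_cols - box_size + 1):
            masses.append(a[i:i + box_size, j:j + box_size].sum())
    masses = np.array(masses)
    mean = masses.mean()
    return np.nan if mean == 0 else (masses**2).mean() / mean**2

# Toy binary "built-up" map (1 = built cell), illustrative only
rng = np.random.default_rng(0)
urban = (rng.random((64, 64)) < 0.2).astype(int)
for r in (2, 4, 8, 16):
    print(r, gliding_box_lacunarity(urban, r))
```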

Relevance: 30.00%

Abstract:

The no-till system with complex cropping sequences may improve the structural quality and carbon (C) sequestration in soils of the tropics. Thus, the objective of this study was to evaluate the effects of cropping sequences after eight years under the no-till system on the physical properties and C sequestration of an Oxisol in the municipality of Jaboticabal, Sao Paulo, Brazil. A randomized split-block design with three replications was used. The treatments were combinations of three summer cropping sequences - corn/corn (Zea mays L.) (CC), soybean/soybean (Glycine max (L.) Merrill) (SS), and soybean-corn (SC) - and seven winter crops - corn, sunflower (Helianthus annuus L.), oilseed radish (Raphanus sativus L.), pearl millet (Pennisetum americanum (L.) Leeke), pigeon pea (Cajanus cajan (L.) Millsp), grain sorghum (Sorghum bicolor (L.) Moench), and sunn hemp (Crotalaria juncea L.). Soil samples were taken at the 0-10 cm depth after eight years of experimentation. Soil under SC and CC had higher mean weight diameter (3.63 and 3.55 mm, respectively) and geometric mean diameter (3.55 and 2.92 mm) of the aggregates compared to soil under SS (3.18 and 2.46 mm). CC resulted in the highest soil organic C content (17.07 g kg⁻¹), soil C stock (15.70 Mg ha⁻¹), and rate of C sequestration (0.70 Mg ha⁻¹ yr⁻¹) among the summer crops. Among the winter crops, soil under pigeon pea had the highest total porosity (0.50 m³ m⁻³), and that under sunn hemp had the highest proportion of water-stable aggregates (93.74%). In addition, sunn hemp did not differ from grain sorghum and had the highest soil organic C content (16.82 g kg⁻¹) as well as the highest rate of C sequestration (0.67 Mg ha⁻¹ yr⁻¹). Soil resistance to penetration was the lower limit of the least limiting water range, while the upper limit was air-filled porosity for soil bulk densities higher than 1.39 kg dm⁻³ for all cropping sequences. Within the SC sequence, soil under corn and pigeon pea increased the least limiting water range through the formation of biopores, because soil resistance to penetration decreased with the increase in soil bulk density.
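For clarity on the aggregate statistics reported, the sketch below computes mean weight diameter and geometric mean diameter from sieve data using the standard definitions MWD = Σ x̄ᵢwᵢ and GMD = exp(Σ wᵢ ln x̄ᵢ / Σ wᵢ); the sieve classes and fractions are placeholders, not data from this experiment.

```python
import numpy as np

def mwd_gmd(mean_diam_mm, weight_fraction):
    """Mean weight diameter and geometric mean diameter of soil aggregates.

    mean_diam_mm    : mean diameter of each sieve size class [mm]
    weight_fraction : proportion of sample mass retained in each class (sums to 1)
    """
    x = np.asarray(mean_diam_mm, dtype=float)
    w = np.asarray(weight_fraction, dtype=float)
    mwd = np.sum(x * w)
    gmd = np.exp(np.sum(w * np.log(x)) / np.sum(w))
    return mwd, gmd

# Placeholder sieve classes (not data from the study):
print(mwd_gmd([6.0, 3.0, 1.5, 0.75, 0.25], [0.45, 0.25, 0.15, 0.10, 0.05]))
```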

Relevance: 30.00%

Abstract:

Longitudinal joint quality control/assurance is essential to the successful performance of asphalt pavements, and it has received a considerable amount of attention in recent years. The purpose of this study is to evaluate the level of compaction at the longitudinal joint and determine the effect of segregation on longitudinal joint performance. Five paving projects were selected, covering the traditional butt joint, infrared joint heater, edge restraint by milling, and modified butt joint with hot pinch longitudinal joint construction techniques. For each project, field density and permeability tests were performed, and cores were taken from the pavement for laboratory permeability, air void, and indirect tensile strength testing. Asphalt content and gradations were also obtained to determine joint segregation. In general, this study finds that the minimum required joint density should be around 90.0% of the theoretical maximum density based on the AASHTO T166 method. The edge restraint by milling and the butt joint with infrared heat treatment construction methods both produced joint densities above this 90.0% limit, whereas the traditional butt joint exhibited lower density and higher permeability than the criterion. In addition, all of the projects appear to have segregation at the longitudinal joint except for the edge-restraint by milling method.
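The 90.0% criterion translates into a simple acceptance check on each joint core, sketched below with placeholder specific-gravity values (the bulk value standing in for an AASHTO T166 measurement and the maximum value for the theoretical maximum specific gravity).

```python
def percent_of_tmd(bulk_specific_gravity, theoretical_max_specific_gravity):
    """Joint density expressed as a percentage of theoretical maximum density."""
    return 100.0 * bulk_specific_gravity / theoretical_max_specific_gravity

# Placeholder core values (not data from the study):
gmb, gmm = 2.245, 2.478
pct = percent_of_tmd(gmb, gmm)
print(f"{pct:.1f}% of TMD ->",
      "meets 90.0% criterion" if pct >= 90.0 else "below criterion")
```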

Relevance: 30.00%

Abstract:

The purpose of this study was to investigate the effect of cement paste quality on concrete performance, particularly the fresh properties, by changing the water-to-cementitious materials ratio (w/cm), the type and dosage of supplementary cementitious materials (SCM), and the air-void system in binary and ternary mixtures. In this experimental program, a total matrix of 54 mixtures was prepared with w/cm of 0.40 and 0.45; target air contents of 2%, 4%, and 8%; a fixed cementitious content of 600 pounds per cubic yard (pcy); and three types of SCMs at different dosages. The fine aggregate-to-total aggregate ratio was fixed at 0.42. Workability, rheology, air-void system, setting time, strength, Wenner probe surface resistivity, and shrinkage were determined. The effects of the paste variables on workability are more marked at the higher w/cm. Compressive strength is strongly influenced by paste quality, dominated by w/cm and air content. Surface resistivity is improved by the inclusion of Class F fly ash and slag cement, especially at later ages. Ternary mixtures performed in accordance with their ingredients. The data collected will be used to develop models that will be part of an innovative mix proportioning procedure.
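The 54-mixture matrix is consistent with a full factorial of two w/cm levels, three target air contents and nine SCM combinations; the nine combinations listed below are hypothetical placeholders chosen only to illustrate such a layout, since the abstract does not enumerate them.

```python
from itertools import product

w_cm = [0.40, 0.45]
target_air_pct = [2, 4, 8]
# Hypothetical SCM combinations chosen only to reach 54 mixtures; the actual
# binary/ternary types and dosages are not given in the abstract.
scm_combos = ["control", "fly ash low", "fly ash high", "slag low", "slag high",
              "third SCM", "fly ash + slag", "fly ash + third SCM",
              "slag + third SCM"]

mixtures = list(product(w_cm, target_air_pct, scm_combos))
print(len(mixtures))   # 54
print(mixtures[0])     # (0.4, 2, 'control')
```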

Relevance: 30.00%

Abstract:

This report discusses the asphalt pavement recycling project designated Project HR-188 in Kossuth County, Iowa. Specific objectives were: (a) to determine the effectiveness of drum mixing plant modifications designed to control air pollution within limits specified by the Iowa Department of Environmental Quality; (b) to assess the impact of varying the proportions of recycled and virgin aggregates; (c) to assess the impact of varying the production rate of the plant; and (d) to assess the impact of varying the mixing temperature. The discussion includes information on the proposed use of research funds, project location and description, the project planning conference, plan development, bid letting, asphalt plant configuration, actual plant operation, why this method is successful, probable process limitations, pollution results, recycled pavement test results, and the cost of virgin vs. recycled asphalt pavements.

Relevance: 30.00%

Abstract:

MRI has evolved into an important diagnostic technique in medical imaging. However, the reliability of the derived diagnosis can be degraded by artifacts, which challenge both radiologists and automatic computer-aided diagnosis. This work proposes a fully automatic method for measuring the image quality of three-dimensional (3D) structural MRI. Quality measures are derived by analyzing the air background of magnitude images and are capable of detecting image degradation from several sources, including bulk motion, residual magnetization from incomplete spoiling, blurring, and ghosting. The method has been validated on 749 3D T1-weighted 1.5 T and 3 T head scans acquired at 36 Alzheimer's Disease Neuroimaging Initiative (ADNI) study sites operating with various software and hardware combinations. Results are compared against qualitative grades assigned by the ADNI quality control center (taken as the reference standard). The derived quality indices are independent of the MRI system used and agree with the reference standard quality ratings with high sensitivity and specificity (>85%). The proposed procedures for quality assessment could be of great value for both research and routine clinical imaging, and could greatly improve workflow through the ability to rule out the need for a repeat scan while the patient is still in the magnet bore.
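As a rough illustration of background-based quality measures (not the validated ADNI indices themselves), the toy sketch below derives a noise floor and a crude artifact index from the air background of a simulated magnitude slice, exploiting the fact that structured artifacts such as ghosting raise the background mean above the noise floor.

```python
import numpy as np

def background_quality_index(magnitude_img, background_threshold_frac=0.1):
    """Crude air-background statistics for a 2D magnitude slice.

    Returns (noise_level, artifact_index), where artifact_index compares the
    mean background intensity to a robust noise floor; structured artifacts
    (ghosting, motion) inflate this ratio above its noise-only value.
    """
    img = np.asarray(magnitude_img, dtype=float)
    bg_mask = img < background_threshold_frac * img.max()
    bg = img[bg_mask]
    noise_level = np.median(bg)                  # robust noise floor
    artifact_index = bg.mean() / (noise_level + 1e-12)
    return noise_level, artifact_index

# Toy slice: object in the centre plus a faint "ghost" in the air background
rng = np.random.default_rng(1)
img = rng.rayleigh(scale=5.0, size=(128, 128))   # Rayleigh-distributed air noise
img[48:80, 48:80] += 400.0                       # object
img[48:80, 8:24] += 20.0                         # ghost artifact
print(background_quality_index(img))
```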

Relevance: 30.00%

Abstract:

Assessment of image quality for digital x-ray mammography systems used in European screening programs relies mainly on contrast-detail CDMAM phantom scoring and requires the acquisition and analysis of many images in order to reduce variability in threshold detectability. Part II of this study proposes an alternative method based on the detectability index (d') calculated for a non-prewhitened model observer with an eye filter (NPWE). The detectability index was calculated from the normalized noise power spectrum and image contrast, both measured from an image of a 5 cm poly(methyl methacrylate) phantom containing a 0.2 mm thick aluminium square, and the pre-sampling modulation transfer function. This was performed as a function of air kerma at the detector for 11 different digital mammography systems. These calculated d' values were compared against threshold gold thickness (T) results measured with the CDMAM test object and against derived theoretical relationships. A simple relationship was found between T and d', as a function of detector air kerma; a linear relationship was found between d' and contrast-to-noise ratio. The values of threshold thickness used to specify acceptable performance in the European Guidelines for 0.10 and 0.25 mm diameter discs were equivalent to threshold calculated detectability indices of 1.05 and 6.30, respectively. The NPWE method is a validated alternative to CDMAM scoring for use in the image quality specification, quality control and optimization of digital x-ray systems for screening mammography.
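For reference, the NPWE detectability index has the standard form below, written for a task function S, detector MTF, eye filter E and NNPS; the specific eye-filter model and integration details used in the paper are not reproduced here.

```latex
% Standard NPWE model observer detectability index (S = task/signal spectrum,
% E = eye filter, MTF and NNPS as measured for the detector):
\[
d'_{\mathrm{NPWE}} =
\frac{\displaystyle \iint |S(u,v)|^{2}\,\mathrm{MTF}^{2}(u,v)\,E^{2}(u,v)\,du\,dv}
     {\sqrt{\displaystyle \iint |S(u,v)|^{2}\,\mathrm{MTF}^{2}(u,v)\,E^{4}(u,v)\,\mathrm{NNPS}(u,v)\,du\,dv}}
\]
```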

Relevance: 30.00%

Abstract:

One of the most important issues in molecular biology is to understand the regulatory mechanisms that control gene expression. Gene expression is often regulated by proteins called transcription factors, which bind to short (5 to 20 base pairs), degenerate segments of DNA. Experimental effort towards understanding the sequence specificity of transcription factors is laborious and expensive, but can be substantially accelerated with the use of computational predictions. This thesis describes the use of algorithms and resources for transcription factor binding site analysis in quantitative modelling, where probabilistic models are built to represent the binding properties of a transcription factor and can be used to find new functional binding sites in genomes. Initially, an open-access database (HTPSELEX) was created, holding high-quality binding sequences for two eukaryotic families of transcription factors, namely CTF/NF1 and LEF1/TCF. The binding sequences were elucidated using a recently described experimental procedure called HTP-SELEX, which allows the generation of a large number (>1000) of binding sites using mass sequencing technology. For each HTP-SELEX experiment we also provide accurate primary experimental information about the protein material used, details of the wet-lab protocol, an archive of sequencing trace files, and assembled clone sequences of binding sequences. The database also offers reasonably large SELEX libraries obtained with conventional low-throughput protocols. The database is available at http://wwwisrec.isb-sib.ch/htpselex/ and ftp://ftp.isrec.isb-sib.ch/pub/databases/htpselex. The Expectation-Maximisation (EM) algorithm is one of the methods frequently used to estimate probabilistic models representing the sequence specificity of transcription factors. We present computer simulations that estimate the precision of EM-estimated models as a function of data set parameters (such as the length of the initial sequences, the number of initial sequences, and the percentage of non-binding sequences). We observed a remarkable robustness of the EM algorithm with regard to the length of the training sequences and the degree of contamination. The HTPSELEX database and the benchmark results of the EM algorithm formed part of the foundation for the subsequent project, in which a statistical framework based on hidden Markov models was developed to represent the sequence specificity of the transcription factors CTF/NF1 and LEF1/TCF using the HTP-SELEX data. The hidden Markov model framework is capable of both predicting and classifying CTF/NF1 and LEF1/TCF binding sites. A covariance analysis of the binding sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism. We then tested the LEF1/TCF model by computing binding scores for a set of LEF1/TCF binding sequences for which relative affinities had been determined experimentally using non-linear regression. The predicted and experimentally determined binding affinities were in good correlation.
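As a simplified illustration of the probabilistic modelling described (a position weight matrix rather than the full hidden Markov model), the sketch below builds a log-odds matrix from hypothetical site counts and scores candidate sites; the counts are invented and do not represent the CTF/NF1 or LEF1/TCF models.

```python
import numpy as np

BASES = "ACGT"

def pwm_from_counts(count_matrix, pseudocount=1.0):
    """Position weight matrix (log-odds vs. a uniform background) from site counts."""
    counts = np.asarray(count_matrix, dtype=float) + pseudocount
    probs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(probs / 0.25)

def score(pwm, site):
    """Sum of per-position log-odds for a candidate site."""
    return sum(pwm[i, BASES.index(b)] for i, b in enumerate(site))

# Hypothetical counts for a 4 bp core (not the CTF/NF1 or LEF1/TCF models):
counts = [[80, 5, 10, 5],    # position 1: mostly A
          [5, 5, 5, 85],     # position 2: mostly T
          [5, 5, 80, 10],    # position 3: mostly G
          [10, 70, 10, 10]]  # position 4: mostly C
pwm = pwm_from_counts(counts)
print(score(pwm, "ATGC"), score(pwm, "GGGG"))
```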

Relevance: 30.00%

Abstract:

Electrical Impedance Tomography (EIT) is an imaging method which enables a volume conductivity map of a subject to be produced from multiple impedance measurements. It has the potential to become a portable non-invasive imaging technique of particular use in imaging brain function. Accurate numerical forward models may be used to improve image reconstruction but, until now, have employed an assumption of isotropic tissue conductivity. This may be expected to introduce inaccuracy, as body tissues, especially those such as white matter and the skull in head imaging, are highly anisotropic. The purpose of this study was, for the first time, to develop a method for incorporating anisotropy in a forward numerical model for EIT of the head and to assess the resulting improvement in image quality in the case of linear reconstruction of one example of the human head. A realistic Finite Element Model (FEM) of an adult human head with segments for the scalp, skull, CSF, and brain was produced from a structural MRI. Anisotropy of the brain was estimated from a diffusion tensor MRI of the same subject, and anisotropy of the skull was approximated from the structural information. A method for incorporating anisotropy in the forward model and using it in image reconstruction was produced. The improvement in reconstructed image quality was assessed in computer simulation by producing forward data and then performing linear reconstruction using a sensitivity matrix approach. The mean boundary data difference between anisotropic and isotropic forward models for a reference conductivity was 50%. Use of the correct anisotropic FEM in image reconstruction, as opposed to an isotropic one, corrected an error of 24 mm in imaging a 10% conductivity decrease located in the hippocampus, improved localisation for conductivity changes deep in the brain and due to epilepsy by 4-17 mm, and, overall, led to a substantial improvement in image quality. This suggests that incorporation of anisotropy in numerical models used for image reconstruction is likely to improve EIT image quality.
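The sensitivity-matrix reconstruction step referred to above can be sketched as a single regularised least-squares solve; the Jacobian below is random stand-in data with Tikhonov regularisation assumed, not an EIT forward model or the authors' reconstruction code.

```python
import numpy as np

def linear_reconstruction(jacobian, delta_v, reg_lambda=1e-3):
    """Tikhonov-regularised linear step:
    delta_sigma = (J^T J + lambda I)^-1 J^T delta_v
    """
    J = np.asarray(jacobian, dtype=float)
    JtJ = J.T @ J
    rhs = J.T @ np.asarray(delta_v, dtype=float)
    return np.linalg.solve(JtJ + reg_lambda * np.eye(JtJ.shape[0]), rhs)

# Stand-in problem: 208 boundary measurements, 500 conductivity elements
rng = np.random.default_rng(2)
J = rng.normal(size=(208, 500))
true_dsigma = np.zeros(500)
true_dsigma[123] = -0.1                     # a local 10% conductivity decrease
dv = J @ true_dsigma + 0.01 * rng.normal(size=208)
print(np.argmin(linear_reconstruction(J, dv)))   # expected to be near 123
```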

Relevance: 30.00%

Abstract:

Nuclear power plants are designed and built to withstand various operational transients and accidents without damage to the plant and without endangering the population or the environment. It is highly unlikely that a nuclear power plant accident would progress as far as damage to the reactor core, in which case oxidation of the core materials can produce hydrogen. Following a break in the cooling circuit, this hydrogen may be transported into the containment building, where it can form a flammable mixture with the oxygen in the air and burn or even detonate. The temperature and pressure loads caused by a hydrogen fire threaten the integrity of the containment and the operability of the safety systems inside it, so an effective and reliable hydrogen management system is needed. Passive autocatalytic hydrogen recombiners are used for hydrogen management in an increasing number of European nuclear power plants. These recombiners remove hydrogen through a catalytic reaction in which hydrogen reacts with oxygen on the catalyst surface to form water vapour. The recombiners are fully passive and require no external energy or operator action to start up or operate. Research on recombiner behaviour aims to establish their performance in all possible accident scenarios, to optimize their design, and to determine their optimal number and placement inside the containment. Containment modelling is performed with lumped parameter (LP, 0D) codes, computational fluid dynamics (CFD) codes, or combinations of the two. In these codes, recombiners are modelled using either an experimental, a theoretical, or a global-approach model. This thesis presents the results of verification calculations of the hydrogen depletion rate of the Siemens FR90/1-150 recombiner model included in the TONUS 0D code, together with the results of TONUS 0D calculations on the interactions of Siemens recombiners. TONUS is an LP (0D) and CFD hydrogen analysis code developed by the CEA (Commissariat à l'Énergie Atomique) and used to model hydrogen distribution, combustion and detonation; it is also used to model hydrogen removal by passive autocatalytic recombiners. The factors affecting hydrogen depletion were isolated and studied one at a time. To study recombiner interactions, recombiners of different sizes and in different numbers were placed in the same volume. The Siemens recombiner model in TONUS 0D computes the hydrogen depletion as expected, and the results confirm the reliability of the physical modelling in TONUS 0D. Possible local distributions within the studied volume could not be detected with the LP code, because it works with volume-averaged quantities; a CFD code is required to study local distributions.
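As a generic illustration of how a recombiner model enters such calculations (not the proprietary Siemens FR90/1-150 correlation implemented in TONUS), the sketch below uses a depletion rate that grows roughly linearly with pressure and hydrogen mole fraction, with placeholder constants.

```python
def par_depletion_rate(pressure_bar, x_h2, a=1.0, b=0.5, x_on=0.02):
    """Generic PAR depletion-rate shape, R = (A*p + B) * x_H2 (arbitrary units).

    a, b : placeholder constants; real vendor correlations (e.g. for the
           Siemens FR90/1-150) are fitted to test data and not reproduced here.
    x_on : assumed start-up hydrogen mole fraction below which the unit is idle.
    """
    return 0.0 if x_h2 < x_on else (a * pressure_bar + b) * x_h2

# Illustrative containment state: 1.5 bar absolute, 4 vol-% hydrogen
print(par_depletion_rate(1.5, 0.04))
```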