990 results for Safety assessments
Abstract:
This paper presents two simple simulation and modelling tools designed to aid in the safety assessment required for unmanned aircraft operations within unsegregated airspace. First, a fast pair-wise encounter generator is derived to simulate the See and Avoid environment. The utility of the encounter generator is demonstrated through the development of a hybrid database and a statistical performance evaluation of an autonomous See and Avoid decision and control strategy. Second, an unmanned aircraft mission generator is derived to help visualise the impact of multiple persistent unmanned operations on existing air traffic. The utility of the mission generator is demonstrated through an example analysis of a mixed airspace environment using real traffic data in Australia. These simulation and modelling approaches constitute a useful and extensible set of analysis tools that can be leveraged to help explore some of the more fundamental and challenging problems facing civilian unmanned aircraft system integration.
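The idea of a fast pair-wise encounter generator can be sketched in outline. The following is a minimal illustrative sketch, not the authors' generator: it samples two constant-velocity aircraft whose closest point of approach (CPA) occurs at a chosen time, by drawing the miss distance perpendicular to the relative velocity and back-propagating the straight-line trajectories. All parameter names and ranges are hypothetical.

```python
import numpy as np

def generate_encounter(rng, t_cpa=60.0, speed_range=(50.0, 80.0),
                       miss_distance_max=500.0):
    """Sample one pairwise encounter: two constant-velocity aircraft whose
    closest point of approach (CPA) occurs at time t_cpa.

    Returns initial positions (2x2) and velocity vectors (2x2), one row
    per aircraft. Parameter values are illustrative assumptions."""
    # Sample ground speeds [m/s] and headings [rad] for the two aircraft.
    speeds = rng.uniform(*speed_range, size=2)
    headings = rng.uniform(0.0, 2.0 * np.pi, size=2)
    v = np.stack([speeds * np.cos(headings),
                  speeds * np.sin(headings)], axis=1)
    # Place the CPA of aircraft 2 at a sampled miss distance from aircraft 1,
    # perpendicular to the relative velocity so it really is the closest point.
    rel_v = v[1] - v[0]
    perp = np.array([-rel_v[1], rel_v[0]])
    perp /= np.linalg.norm(perp)
    miss = rng.uniform(0.0, miss_distance_max) * perp
    p_cpa = np.array([[0.0, 0.0], miss])
    # Back-propagate both aircraft from their CPA positions to time 0.
    p0 = p_cpa - t_cpa * v
    return p0, v

rng = np.random.default_rng(0)
p0, v = generate_encounter(rng)
```

Because the miss vector is perpendicular to the relative velocity, separation is minimal exactly at `t_cpa`, which makes the construction cheap to batch for statistical evaluations.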
Abstract:
Active vibration control (AVC) is a relatively new technology for the mitigation of annoying human-induced vibrations in floors, and recent technological developments have demonstrated its great potential in this field. Despite this, when a floor is found to have problematic vibrations after construction, the unfamiliar technology of AVC is usually avoided in favour of more common techniques, such as tuned mass dampers (TMDs), which have a proven track record of successful application, particularly for footbridges and staircases. This study investigates the advantages and disadvantages of AVC, compared with TMDs, for mitigating pedestrian-induced floor vibrations in offices. Simulations are performed using the results from a finite element model of a typical office layout that has a high vibration response level. The vibration problems on this floor are then alleviated using both AVC and TMDs, and the results of each mitigation configuration are compared. The results of this study will enable building owners and structural engineers to make more informed decisions regarding suitable technologies for reducing floor vibrations.
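For context on the TMD side of such a comparison, the classical Den Hartog tuning rules give a first-pass damper design for a single floor mode. The sketch below is illustrative only, not the study's model; the modal mass, frequency and mass ratio are assumed values.

```python
import math

def tmd_design(floor_mass, floor_freq_hz, mass_ratio=0.02):
    """Den Hartog's classical optimum for a tuned mass damper attached to a
    single-degree-of-freedom approximation of a floor mode.

    floor_mass     effective modal mass of the problem mode [kg]
    floor_freq_hz  natural frequency of the problem mode [Hz]
    mass_ratio     TMD mass / modal mass (typically 1-5 % for floors)
    """
    mu = mass_ratio
    m_tmd = mu * floor_mass
    # Optimal frequency ratio and damping ratio (Den Hartog, harmonic forcing).
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    tmd_freq_hz = f_opt * floor_freq_hz
    # Spring stiffness [N/m] and dashpot coefficient [N s/m] for the TMD.
    omega_tmd = 2.0 * math.pi * tmd_freq_hz
    k_tmd = m_tmd * omega_tmd ** 2
    c_tmd = 2.0 * zeta_opt * m_tmd * omega_tmd
    return {"mass": m_tmd, "freq_hz": tmd_freq_hz,
            "stiffness": k_tmd, "damping": c_tmd}

# Hypothetical floor mode: 20 t modal mass at 6 Hz.
design = tmd_design(floor_mass=20000.0, floor_freq_hz=6.0)
```

These closed-form optima assume harmonic forcing of an undamped primary system; for pedestrian loading they serve only as a starting point before simulation.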
Abstract:
Current procedures for post-earthquake safety and structural assessment are performed manually by a skilled triage team of structural engineers/certified inspectors. These procedures, and particularly the physical measurement of damage properties, are time-consuming and qualitative in nature. This paper proposes a novel method that automatically detects spalled regions on the surface of reinforced concrete columns and measures their properties in image data. Spalling is accepted as an important indicator of significant damage to structural elements during an earthquake. In this method, the region of spalling is first isolated by way of a local entropy-based thresholding algorithm. Following this, the exposure of longitudinal reinforcement (depth of spalling into the column) and the length of spalling along the column are measured using a novel global adaptive thresholding algorithm in conjunction with image processing methods in template matching and morphological operations. The method was tested on a database of damaged RC column images collected after the 2010 Haiti earthquake, and comparison of the results with manual measurements indicates the validity of the method.
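Entropy-based thresholding of the kind mentioned can be illustrated with Kapur's maximum-entropy method, one common variant and not necessarily the exact local algorithm the paper proposes:

```python
import numpy as np

def kapur_threshold(gray):
    """Maximum-entropy (Kapur) threshold for an 8-bit grayscale image.

    Splits the histogram at the level t that maximises the summed Shannon
    entropies of the background (<= t) and foreground (> t) classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    cum = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = cum[t], 1.0 - cum[t]
        if w0 <= 1e-12 or w1 <= 1e-12:
            continue  # one class is empty; entropy undefined
        p0 = p[: t + 1] / w0
        p1 = p[t + 1 :] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# A synthetic bimodal test image: dark background, bright "spalled" patch.
img = np.full((64, 64), 40, dtype=np.uint8)
img[20:40, 20:40] = 200
mask = img > kapur_threshold(img)
```

A real pipeline would apply such a threshold locally (per window) and then clean the mask with the morphological operations the abstract mentions.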
Abstract:
During the last 2 decades, the public and private sectors have made substantial international research progress toward improving the nutritional value of a wide range of food and feed crops. Nevertheless, significant numbers of people still suffer from the effects of undernutrition. In addition, the nutritional quality of feed is often a limiting factor in livestock production systems, particularly those in developing countries. As newly developed crops with nutritionally improved traits come closer to being available to producers and consumers, we must ensure that scientifically sound and efficient processes are used to assess the safety and nutritional quality of these crops. Such processes will facilitate deploying these crops to those world areas with large numbers of people who need them. This document describes 5 case studies of crops with improved nutritional value. These case studies examine the principles and recommendations published by the Intl. Life Sciences Inst. (ILSI) in 2004 for the safety and nutritional assessment of foods and feeds derived from nutritionally improved crops (ILSI 2004). One overarching conclusion that spans all 5 case studies is that the comparative safety assessment process is a valid approach. Such a process has been endorsed by many publications and organizations, including the 2004 ILSI publication. The type and extent of data that are appropriate for a scientifically sound comparative safety assessment are presented on a case-by-case basis in a manner that takes into account scientific results published since the 2004 ILSI report. This report will appear in the January issue of Comprehensive Reviews in Food Science and Food Safety.
Abstract:
The growing number of potential applications of Unmanned Aircraft Systems (UAS) in civilian operations and national security is putting pressure on National Airworthiness Authorities to provide a path for certification and allow UAS integration into the national airspace. The success of this integration depends not only on developments in improved UAS reliability and safety, but also on regulations for certification and methodologies for operational performance and safety assessment. This paper focuses on the latter and describes progress in relation to a previously proposed framework for evaluating robust autonomy of UAS. The paper draws parallels between the proposed evaluation framework and the evaluation of pilots during the licensing process. It discusses how data from the proposed evaluation can be used as an aid for decision making in certification and UAS design. Finally, it discusses challenges associated with the evaluation.
Abstract:
In February 2009, a report from PhRMA (Pharmaceutical Research and Manufacturers of America) confirmed that more than 300 drugs for the treatment of heart disease were in clinical trials or under review by regulatory agencies. Despite this abundance of new cardiovascular therapies, the number of new drugs approved each year (all indications combined) is declining, with only 17 and 24 new drugs approved in 2007 and 2008, respectively. Only 1 drug in 5,000 is approved, after 10 to 15 years of development at an average cost of $800 million. Regulatory agencies have launched numerous initiatives to increase the success rate in new drug development, but results have been slow to come. This stagnation is attributed in many cases to a lack of efficacy of the new drug, but safety assessments are the leading cause of development termination. Primum non nocere, the maxim of Hippocrates, the father of medicine, remains relevant in preclinical and clinical drug development. About 3% of drugs approved over the last 20 years were subsequently withdrawn from the market after adverse effects were identified. Cardiovascular adverse effects are the most frequent cause of development termination or drug withdrawal (27%), followed by effects on the nervous system. After defining the context of safety pharmacology assessments and the use of biomarkers, we validated models for evaluating the safety of new drugs on the cardiovascular, respiratory and nervous systems. Working within the constraints and challenges of drug development programs, we evaluated the efficacy and safety of oxytocin (OT), an endogenous peptide, for therapeutic purposes.
OT, a hormone historically associated with reproduction, has been shown to induce the in vitro differentiation of cell lines (P19), and also of embryonic stem cells, into beating cardiomyocytes. These observations led us to consider the use of OT in the treatment of myocardial infarction. To reach this ultimate goal, we first evaluated the pharmacokinetics of OT in an anesthetized rat model. These studies revealed unique characteristics of OT, including a short half-life and a non-linear pharmacokinetic profile with respect to the administered dose. We then evaluated the cardiovascular effects of OT in healthy animals of several species. In preclinical research, the use of several species and of different states (conscious and anesthetized) is recognized as one of the best approaches to increase the predictive value of animal results for the human response. Anesthetized and conscious rat models, anesthetized and conscious dog models, and conscious monkey models with cardiovascular monitoring by telemetry were used. OT proved to be an agent with important hemodynamic effects, showing a response that varied with the state (anesthetized or conscious), the dose, the mode of administration (bolus or infusion) and the species used. These studies allowed us to establish doses and treatment regimens without adverse cardiovascular effects that could be used in subsequent efficacy studies. A porcine model of myocardial infarction with reperfusion was used to evaluate the effects of OT in the treatment of myocardial infarction.
In a pilot project, continuous OT infusion initiated immediately at coronary reperfusion induced adverse cardiovascular effects in all treated animals, including a reduction of left-ventricular fractional shortening and an aggravation of the post-infarction dilated cardiomyopathy. Given these observations, the therapeutic approach was revised to avoid treatment during the acute inflammation period, considered maximal around the third day after ischemia. When initiated 8 days after myocardial ischemia, OT infusion caused adverse effects in animals with high endogenous OT levels, whereas no adverse effect (and a non-significant improvement) was observed in animals with low endogenous OT levels. Among the placebo-group animals, a trend toward better recovery was noted in those with high initial endogenous levels. Although the size of the ischemic zone at risk is comparable to that encountered in infarct patients, the use of juvenile animals and the absence of coronary disease are important limitations of the porcine model used. OT retains potential for the treatment of myocardial infarction, but our results suggest that systemic administration as OT replacement therapy should be considered as a function of the endogenous level. Further safety evaluations of OT treatment in animal models of myocardial infarction will be necessary before considering the use of OT in a population of myocardial infarction patients. On the other hand, endogenous OT levels may have prognostic value, and clinical studies in this regard could be of interest.
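The short half-life noted above is a standard pharmacokinetic quantity; assuming first-order (mono-exponential) elimination, it can be estimated from two plasma samples. A minimal sketch, with purely hypothetical concentrations and times:

```python
import math

def half_life_from_two_samples(c1, t1, c2, t2):
    """Terminal half-life from two plasma concentrations, assuming
    mono-exponential (first-order) elimination: C(t) = C0 * exp(-k * t)."""
    k = math.log(c1 / c2) / (t2 - t1)  # elimination rate constant [1/min]
    return math.log(2.0) / k

# Hypothetical oxytocin plasma samples (ng/mL) at 5 and 15 min post-bolus.
t_half = half_life_from_two_samples(c1=8.0, t1=5.0, c2=2.0, t2=15.0)
```

Note that a non-linear (dose-dependent) profile, as reported above, violates the first-order assumption, which is precisely why such a simple estimate would vary with dose.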
Abstract:
Platelet-derived growth factor-BB (PDGF-BB) stimulates repair of healing-impaired chronic wounds such as diabetic ulcers and periodontal lesions. However, limitations in predictability of tissue regeneration occur due, in part, to transient growth factor bioavailability in vivo. Here, we report that gene delivery of PDGF-B stimulates repair of oral implant extraction socket defects. Alveolar ridge defects were created in rats and were treated at the time of titanium implant installation with a collagen matrix containing an adenoviral (Ad) vector encoding PDGF-B (5.5 x 10^8 or 5.5 x 10^9 pfu ml^-1), Ad encoding luciferase (Ad-Luc; 5.5 x 10^9 pfu ml^-1; control) or recombinant human PDGF-BB protein (rhPDGF-BB, 0.3 mg ml^-1). Bone repair and osseointegration were measured through backscattered scanning electron microscopy, histomorphometry, microcomputed tomography and biomechanical assessments. Furthermore, a panel of local and systemic safety assessments was performed. Results indicated that bone repair was accelerated by Ad-PDGF-B and rhPDGF-BB delivery compared with Ad-Luc, with the high dose of Ad-PDGF-B more effective than the low dose. No significant dissemination of the vector construct or alteration of systemic parameters was noted. In summary, gene delivery of Ad-PDGF-B shows regenerative and safety capabilities for bone tissue engineering and osseointegration in alveolar bone defects comparable with rhPDGF-BB protein delivery in vivo. Gene Therapy (2010) 17, 95-104; doi: 10.1038/gt.2009.117; published online 10 September 2009
Abstract:
For the safety assessment of nuclear waste repositories, the possible migration of radiotoxic waste into the environment must be considered. Since plutonium is the major contributor to the radiotoxicity of spent nuclear fuel, it requires special care with respect to its mobilization into the groundwater. Plutonium has one of the most complicated chemistries of all elements: it can coexist in four oxidation states in parallel in one solution. In this work it is shown that, in the presence of humic substances, it is reduced to Pu(III) and Pu(IV). This work focuses on the interaction of Pu(III) with naturally occurring compounds (humic substances and clay minerals, i.e. kaolinite), while Pu(IV) was studied in a parallel doctoral work by Banik (in preparation). As plutonium is expected at extremely low concentrations in the environment, very sensitive methods are needed to monitor its presence and speciation. Resonance ionization mass spectrometry (RIMS) was used to determine the concentration of Pu in environmental samples, with a detection limit of 10^6-10^7 atoms. For the speciation of plutonium, CE-ICP-MS was routinely used to monitor the behaviour of Pu in the presence of humic substances. To lower the detection limits of the speciation methods, the coupling of CE to RIMS was proposed; the first steps have shown that this can be a powerful tool for studies of Pu under environmental conditions. Furthermore, the first steps in coupling two parallel working detectors (DAD and ICP-MS) to CE were performed, enabling a precise study of the complexation constants of plutonium with humic substances. The redox stabilization of Pu(III) was studied, and it was determined that NH2OH·HCl can maintain Pu(III) in the reduced form up to pH 5.5-6. The complexation constants of Pu(III) with Aldrich humic acid (AHA) were determined at pH 3 and 4; the log β = 6.2-6.8 found in these experiments is comparable with the literature.
The sorption of Pu(III) onto kaolinite was studied in batch experiments, and it was determined that the pH edge was at pH ~5.5. The speciation of plutonium on the surface of kaolinite was studied by EXAFS/XANES, and it was determined that the sorbed species was Pu(IV). The influence of AHA on the sorption of Pu(III) onto kaolinite was also investigated: at pH < 5 the adsorption is enhanced by the presence of AHA (25 mg/L), while at pH > 6 the adsorption is strongly impaired (depending also on the order in which the components are added), leading to a mobilization of plutonium in solution.
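Batch sorption experiments of this kind are typically summarized by a distribution coefficient Kd; a minimal sketch of that standard calculation follows (the concentrations and solid/liquid ratio below are hypothetical, not values from the work):

```python
def batch_sorption(c0, c_eq, volume_l, mass_g):
    """Distribution coefficient Kd and sorbed fraction from a batch experiment.

    c0, c_eq  initial and equilibrium solution concentrations (same units)
    volume_l  solution volume [L]
    mass_g    sorbent (e.g. kaolinite) mass [g]
    """
    sorbed_fraction = (c0 - c_eq) / c0
    # Kd = (amount sorbed per gram of solid) / (equilibrium concentration)
    kd = (c0 - c_eq) / c_eq * volume_l / mass_g  # [L/g]
    return kd, sorbed_fraction

# Hypothetical run: 20 mL of 1e-9 mol/L Pu solution on 0.1 g of kaolinite.
kd, frac = batch_sorption(c0=1.0e-9, c_eq=2.0e-10, volume_l=0.02, mass_g=0.1)
```

Plotting the sorbed fraction against pH over a series of such runs is what reveals the pH edge reported above.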
Abstract:
In this work, the detection of the isotope Np-237 by resonance ionization mass spectrometry (RIMS) was developed and optimized. In RIMS, sample atoms are resonantly excited in several steps by laser radiation, ionized and then detected mass-spectrometrically. Suitable energy states for the excitation and ionization of Np-237 were determined by resonance ionization spectroscopy (RIS), identifying more than 300 previously unknown energy levels and autoionizing states of Np-237. With in-source RIMS, a detection limit of 9x10^5 atoms is reached for this isotope. The mobility of Np in the environment depends strongly on its elemental speciation. Safety assessments of potential repository sites therefore require methods that can determine which neptunium species are present under different conditions. For this purpose, an online coupling of capillary electrophoresis (CE) and ICP-MS (inductively coupled plasma mass spectrometry) was used, with which the Np redox species Np(IV) and Np(V) can still be detected selectively at a concentration of 1x10^-9 mol/L. The method was applied to investigate the interaction of the element with Opalinus Clay under various conditions. It could be shown that in the presence of Fe(II), Np(V) is reduced to Np(IV), which then sorbs onto the clay rock; overall, this leads to a markedly increased sorption of Np on the clay.
Abstract:
The present work was carried out within the BMWi joint project "Interaction and transport of actinides in natural clay rock with consideration of humic substances and clay organics: interaction of neptunium and plutonium with natural clay rock". To assess the long-term safety of nuclear repositories, the possible migration of the radiotoxic waste into the environment must be considered. Because of its long half-life (24,000 a), Pu-239 contributes substantially to the radiotoxicity of spent nuclear fuel in a repository. Under environmentally relevant conditions, redox-sensitive Pu occurs in solution in the oxidation states +III to +VI and can be present in up to four oxidation states simultaneously. Clay rock formations are considered as a possible host rock for repositories for highly radioactive waste; detailed information on the mobilization and immobilization of Pu by/into the groundwater from a repository is therefore of particular importance. In this work, new insights were gained into the interaction between Pu and the natural clay rock Opalinus Clay (OPA, Mont Terri, Switzerland) with regard to the disposal of heat-generating radioactive waste in a deep geological repository. The focus of the work was the determination of the speciation of Pu on the mineral surface after sorption and diffusion processes by means of various synchrotron-based methods (µ-XRF, µ-XANES/EXAFS, µ-XRD, XANES/EXAFS). The interaction between Pu and OPA was first investigated in batch and diffusion experiments as a function of various experimental parameters (pH and Pu oxidation state, among others). Sorption experiments showed that some parameters (e.g. temperature and humic acid) have a marked influence on the sorption of Pu. The speciation studies were performed on powder samples from batch experiments as well as on OPA thin sections and diffusion samples, as a function of various experimental parameters. EXAFS measurements at the Pu L-III edge of the powder samples showed that inner-sphere sorption of Pu(IV) onto the clay rock took place regardless of the initial oxidation state of the plutonium in solution. By combining the spatially resolved methods, the distribution of Pu and of other elements contained in OPA was determined for the first time by µ-XRF. µ-XANES spectra of Pu enrichments on OPA thin sections and in diffusion samples confirm that the less mobile Pu(IV) is the dominant species after the sorption and diffusion processes. In addition, a diffusion profile of Pu in OPA was measured for the first time by µ-XRF. Speciation studies by µ-XANES showed that the applied Pu(V) is increasingly reduced to Pu(IV) along its diffusion path. With µ-XRD, illite was identified as the dominant environment in which Pu was enriched, and siderite was identified as a redox-active phase that can occur. The results of this work support a positive assessment of OPA as a host rock for a repository for highly radioactive waste.
Abstract:
Un escenario habitualmente considerado para el uso sostenible y prolongado de la energía nuclear contempla un parque de reactores rápidos refrigerados por metales líquidos (LMFR) dedicados al reciclado de Pu y la transmutación de actínidos minoritarios (MA). Otra opción es combinar dichos reactores con algunos sistemas subcríticos asistidos por acelerador (ADS), exclusivamente destinados a la eliminación de MA. El diseño y licenciamiento de estos reactores innovadores requiere herramientas computacionales prácticas y precisas, que incorporen el conocimiento obtenido en la investigación experimental de nuevas configuraciones de reactores, materiales y sistemas. A pesar de que se han construido y operado un cierto número de reactores rápidos a nivel mundial, la experiencia operacional es todavía reducida y no todos los transitorios se han podido entender completamente. Por tanto, los análisis de seguridad de nuevos LMFR están basados fundamentalmente en métodos deterministas, al contrario que las aproximaciones modernas para reactores de agua ligera (LWR), que se benefician también de los métodos probabilistas. La aproximación más usada en los estudios de seguridad de LMFR es utilizar una variedad de códigos, desarrollados a base de distintas teorías, en busca de soluciones integrales para los transitorios e incluyendo incertidumbres. En este marco, los nuevos códigos para cálculos de mejor estimación ("best estimate") que no incluyen aproximaciones conservadoras, son de una importancia primordial para analizar estacionarios y transitorios en reactores rápidos. Esta tesis se centra en el desarrollo de un código acoplado para realizar análisis realistas en reactores rápidos críticos aplicando el método de Monte Carlo. Hoy en día, dado el mayor potencial de recursos computacionales, los códigos de transporte neutrónico por Monte Carlo se pueden usar de manera práctica para realizar cálculos detallados de núcleos completos, incluso de elevada heterogeneidad material. 
Además, los códigos de Monte Carlo se toman normalmente como referencia para los códigos deterministas de difusión en multigrupos en aplicaciones con reactores rápidos, porque usan secciones eficaces punto a punto, un modelo geométrico exacto y tienen en cuenta intrínsecamente la dependencia angular de flujo. En esta tesis se presenta una metodología de acoplamiento entre el conocido código MCNP, que calcula la generación de potencia en el reactor, y el código de termohidráulica de subcanal COBRA-IV, que obtiene las distribuciones de temperatura y densidad en el sistema. COBRA-IV es un código apropiado para aplicaciones en reactores rápidos ya que ha sido validado con resultados experimentales en haces de barras con sodio, incluyendo las correlaciones más apropiadas para metales líquidos. En una primera fase de la tesis, ambos códigos se han acoplado en estado estacionario utilizando un método iterativo con intercambio de archivos externos. El principal problema en el acoplamiento neutrónico y termohidráulico en estacionario con códigos de Monte Carlo es la manipulación de las secciones eficaces para tener en cuenta el ensanchamiento Doppler cuando la temperatura del combustible aumenta. Entre todas las opciones disponibles, en esta tesis se ha escogido la aproximación de pseudo materiales, y se ha comprobado que proporciona resultados aceptables en su aplicación con reactores rápidos. Por otro lado, los cambios geométricos originados por grandes gradientes de temperatura en el núcleo de reactores rápidos resultan importantes para la neutrónica como consecuencia del elevado recorrido libre medio del neutrón en estos sistemas. Por tanto, se ha desarrollado un módulo adicional que simula la geometría del reactor en caliente y permite estimar la reactividad debido a la expansión del núcleo en un transitorio. 
éste módulo calcula automáticamente la longitud del combustible, el radio de la vaina, la separación de los elementos de combustible y el radio de la placa soporte en función de la temperatura. éste efecto es muy relevante en transitorios sin inserción de bancos de parada. También relacionado con los cambios geométricos, se ha implementado una herramienta que, automatiza el movimiento de las barras de control en busca d la criticidad del reactor, o bien calcula el valor de inserción axial las barras de control. Una segunda fase en la plataforma de cálculo que se ha desarrollado es la simulació dinámica. Puesto que MCNP sólo realiza cálculos estacionarios para sistemas críticos o supercríticos, la solución más directa que se propone sin modificar el código fuente de MCNP es usar la aproximación de factorización de flujo, que resuelve por separado la forma del flujo y la amplitud. En este caso se han estudiado en profundidad dos aproximaciones: adiabática y quasiestática. El método adiabático usa un esquema de acoplamiento que alterna en el tiempo los cálculos neutrónicos y termohidráulicos. MCNP calcula el modo fundamental de la distribución de neutrones y la reactividad al final de cada paso de tiempo, y COBRA-IV calcula las propiedades térmicas en el punto intermedio de los pasos de tiempo. La evolución de la amplitud de flujo se calcula resolviendo las ecuaciones de cinética puntual. Este método calcula la reactividad estática en cada paso de tiempo que, en general, difiere de la reactividad dinámica que se obtendría con la distribución de flujo exacta y dependiente de tiempo. No obstante, para entornos no excesivamente alejados de la criticidad ambas reactividades son similares y el método conduce a resultados prácticos aceptables. Siguiendo esta línea, se ha desarrollado después un método mejorado para intentar tener en cuenta el efecto de la fuente de neutrones retardados en la evolución de la forma del flujo durante el transitorio. 
El esquema consiste en realizar un cálculo cuasiestacionario por cada paso de tiempo con MCNP. La simulación cuasiestacionaria se basa EN la aproximación de fuente constante de neutrones retardados, y consiste en dar un determinado peso o importancia a cada ciclo computacial del cálculo de criticidad con MCNP para la estimación del flujo final. Ambos métodos se han verificado tomando como referencia los resultados del código de difusión COBAYA3 frente a un ejercicio común y suficientemente significativo. Finalmente, con objeto de demostrar la posibilidad de uso práctico del código, se ha simulado un transitorio en el concepto de reactor crítico en fase de diseño MYRRHA/FASTEF, de 100 MW de potencia térmica y refrigerado por plomo-bismuto. ABSTRACT Long term sustainable nuclear energy scenarios envisage a fleet of Liquid Metal Fast Reactors (LMFR) for the Pu recycling and minor actinides (MAs) transmutation or combined with some accelerator driven systems (ADS) just for MAs elimination. Design and licensing of these innovative reactor concepts require accurate computational tools, implementing the knowledge obtained in experimental research for new reactor configurations, materials and associated systems. Although a number of fast reactor systems have already been built, the operational experience is still reduced, especially for lead reactors, and not all the transients are fully understood. The safety analysis approach for LMFR is therefore based only on deterministic methods, different from modern approach for Light Water Reactors (LWR) which also benefit from probabilistic methods. Usually, the approach adopted in LMFR safety assessments is to employ a variety of codes, somewhat different for the each other, to analyze transients looking for a comprehensive solution and including uncertainties. In this frame, new best estimate simulation codes are of prime importance in order to analyze fast reactors steady state and transients. 
This thesis is focused on the development of a coupled code system for best estimate analysis in fast critical reactor. Currently due to the increase in the computational resources, Monte Carlo methods for neutrons transport can be used for detailed full core calculations. Furthermore, Monte Carlo codes are usually taken as reference for deterministic diffusion multigroups codes in fast reactors applications because they employ point-wise cross sections in an exact geometry model and intrinsically account for directional dependence of the ux. The coupling methodology presented here uses MCNP to calculate the power deposition within the reactor. The subchannel code COBRA-IV calculates the temperature and density distribution within the reactor. COBRA-IV is suitable for fast reactors applications because it has been validated against experimental results in sodium rod bundles. The proper correlations for liquid metal applications have been added to the thermal-hydraulics program. Both codes are coupled at steady state using an iterative method and external files exchange. The main issue in the Monte Carlo/thermal-hydraulics steady state coupling is the cross section handling to take into account Doppler broadening when temperature rises. Among every available options, the pseudo materials approach has been chosen in this thesis. This approach obtains reasonable results in fast reactor applications. Furthermore, geometrical changes caused by large temperature gradients in the core, are of major importance in fast reactor due to the large neutron mean free path. An additional module has therefore been included in order to simulate the reactor geometry in hot state or to estimate the reactivity due to core expansion in a transient. The module automatically calculates the fuel length, cladding radius, fuel assembly pitch and diagrid radius with the temperature. This effect will be crucial in some unprotected transients. 
Also related to geometrical changes, an automatic control rod movement feature has been implemented in order to achieve a just critical reactor or to calculate control rod worth. A step forward in the coupling platform is the dynamic simulation. Since MCNP performs only steady state calculations for critical systems, the more straight forward option without modifying MCNP source code, is to use the flux factorization approach solving separately the flux shape and amplitude. In this thesis two options have been studied to tackle time dependent neutronic simulations using a Monte Carlo code: adiabatic and quasistatic methods. The adiabatic methods uses a staggered time coupling scheme for the time advance of neutronics and the thermal-hydraulics calculations. MCNP computes the fundamental mode of the neutron flux distribution and the reactivity at the end of each time step and COBRA-IV the thermal properties at half of the the time steps. To calculate the flux amplitude evolution a solver of the point kinetics equations is used. This method calculates the static reactivity in each time step that in general is different from the dynamic reactivity calculated with the exact flux distribution. Nevertheless, for close to critical situations, both reactivities are similar and the method leads to acceptable practical results. In this line, an improved method as an attempt to take into account the effect of delayed neutron source in the transient flux shape evolutions is developed. The scheme performs a quasistationary calculation per time step with MCNP. This quasistationary simulations is based con the constant delayed source approach, taking into account the importance of each criticality cycle in the final flux estimation. Both adiabatic and quasistatic methods have been verified against the diffusion code COBAYA3, using a theoretical kinetic exercise. 
Finally, as an application example to a real system, a transient in a critical 100 MWth lead-bismuth eutectic reactor concept is analyzed using the adiabatic method.
Resumo:
Carbon dioxide deep geological storage, especially in deep saline aquifers, is one of the preferred technological options for mitigating the effects of greenhouse gas emissions. Thus, over the last decade, studies characterising the behaviour of potential CO2 deep geological storage sites, along with thorough safety assessments, have been considered essential to minimise the risks associated with such sites. The study of natural analogues represents the best source of reliable information about the hydrogeochemical processes expected during CO2 storage in deep saline aquifers. In this work, a comprehensive study of the hydrogeochemical features and processes taking place at the natural analogue of the Alicún de las Torres thermal system (Betic Cordillera) has been conducted. The main water/CO2/rock interaction processes occurring in the thermal system have been identified, quantified and modelled, and a principal conclusion is that the hydrogeochemical evolution of the system is controlled by a global dedolomitization process triggered by gypsum dissolution. This geochemical process generates a geochemical environment different from the one that would result from the exclusive dissolution of carbonates in the deep aquifer, which is generally considered the direct result of CO2 injection into a deep carbonate aquifer. Therefore, neglecting the dedolomitization process in any CO2 deep geological storage assessment may lead to erroneous conclusions. This process will also influence the porosity evolution of the storage formation, a very relevant parameter when evaluating a reservoir for CO2 storage. The geothermometric calculations performed in this work indicate that the thermal water reservoir lies between 650 and 800 m depth, very close to the minimum depth required to inject CO2 in a deep geological storage site.
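The final step of a geothermometric calculation, converting an inferred reservoir temperature into a depth, can be sketched as a simple gradient calculation. The temperatures and gradient below are illustrative placeholders, not the paper's measured values.

```python
def reservoir_depth(t_reservoir, t_surface, gradient):
    """Depth (m) at which t_reservoir (degC) is reached, assuming a
    uniform geothermal gradient (degC per m) below a surface at
    t_surface (degC)."""
    if gradient <= 0:
        raise ValueError("gradient must be positive")
    return (t_reservoir - t_surface) / gradient

# E.g. a chemical geothermometer giving ~40 degC, a 16 degC mean surface
# temperature and a 0.033 degC/m gradient place the reservoir near 730 m.
depth = reservoir_depth(40.0, 16.0, 0.033)
```

In practice the reservoir temperature itself would come from a chemical geothermometer (e.g. silica or cation ratios) applied to the spring-water analyses, and local gradient data would replace the generic value used here.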
It is clear that the proper characterisation of the features and hydrogeochemical processes taking place at a natural system analogous to a man-made deep geological storage site will provide useful conceptual, semi-quantitative and even quantitative information about the processes and consequences that may occur in the artificial storage system.
Resumo:
Regional safety program managers face a daunting challenge in attempting to reduce the deaths, injuries, and economic losses that result from motor vehicle crashes. This difficult mission is complicated by the combination of a large perceived need, a small budget, and uncertainty about how effective each proposed countermeasure would be if implemented. A manager can turn to the research record for insight, but the measured effect of a single countermeasure often varies widely from study to study and across jurisdictions. The challenge of converting widespread and conflicting research results into a regionally meaningful conclusion can be addressed by incorporating "subjective" information into a Bayesian analysis framework. In the proposed framework, engineering evaluations of crashes provide the subjective input on countermeasure effectiveness. Empirical Bayes approaches are widely used in before-and-after studies and "hot-spot" identification; in those cases, however, the prior information is typically obtained from the data (empirically), not from subjective sources. The power and advantages of Bayesian methods for assessing countermeasure effectiveness are presented, and an engineering evaluation approach developed at the Georgia Institute of Technology is described. Results are presented from an experiment conducted to assess the repeatability and objectivity of subjective engineering evaluations. In particular, the focus is on the importance, methodology, and feasibility of subjective engineering evaluation for assessing countermeasures.
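The general mechanism of folding subjective engineering judgment into a Bayesian analysis can be illustrated with a minimal conjugate Beta-Binomial update; this is a sketch of the framework's logic, not the specific model or data used in the study, and all numbers below are hypothetical.

```python
def beta_binomial_update(a_prior, b_prior, crashes_affected, crashes_total):
    """Update a Beta(a, b) prior on the share of crashes a countermeasure
    could affect, given that crashes_affected out of crashes_total
    reviewed crashes were of the targeted type.

    With a Binomial likelihood the Beta prior is conjugate, so the
    posterior is Beta(a + k, b + n - k).
    """
    a_post = a_prior + crashes_affected
    b_post = b_prior + (crashes_total - crashes_affected)
    mean = a_post / (a_post + b_post)
    return a_post, b_post, mean

# Engineers' subjective judgment encoded as Beta(3, 7) (prior mean 0.30);
# field review: 12 of 30 crashes were of the type the countermeasure targets.
a, b, posterior_mean = beta_binomial_update(3.0, 7.0, 12, 30)
```

The prior parameters play the role of the subjective engineering evaluation, and the observed counts play the role of the crash record; as more crashes are reviewed, the data increasingly dominate the subjective prior.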