991 results for Nuclear fuel
Abstract:
Over a time span of almost a decade, the FUELCON project in nuclear engineering has led to a fully functional expert system and spawned sequel projects. Its task is in-core fuel management, also called 'refueling': finding a good fuel allocation for reloading the core of a given nuclear reactor for a given operation cycle. The task is crucial for keeping down operating costs at nuclear power plants. Fuel comes in different types and is positioned in a grid representing the core of a reactor. The tool is useful for practitioners, but it also helps the domain expert test existing rules of thumb and discover new ones.
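The grid representation and rule-of-thumb checking described above can be sketched in a few lines. This is a hypothetical illustration only: the data structure, the "fresh/once/twice-burnt" labels, and the adjacency rule are assumptions of mine, not FUELCON's actual rule base.

```python
# Toy sketch of an in-core fuel-management check: assemblies of different
# types sit on a grid representing the core, and a heuristic rule flags
# placements where a fresh assembly has too many fresh neighbors.
# The rule itself is illustrative, not taken from FUELCON.

def violates_adjacency(core, max_fresh_neighbors=1):
    """Return grid positions of 'fresh' assemblies whose number of
    orthogonally adjacent 'fresh' assemblies exceeds the limit."""
    rows, cols = len(core), len(core[0])
    bad = []
    for r in range(rows):
        for c in range(cols):
            if core[r][c] != "fresh":
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            fresh = sum(1 for (i, j) in neighbors
                        if 0 <= i < rows and 0 <= j < cols
                        and core[i][j] == "fresh")
            if fresh > max_fresh_neighbors:
                bad.append((r, c))
    return bad

core = [["fresh", "once", "fresh"],
        ["fresh", "fresh", "once"],
        ["once", "twice", "once"]]
print(violates_adjacency(core))  # position (1, 0) has two fresh neighbors
```

An expert system like the one described would combine many such rules, and discovering which rules actually improve a reload pattern is precisely the kind of feedback loop the abstract mentions.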
Abstract:
Impedance spectroscopy and nuclear magnetic resonance (NMR) were used to investigate the mobility of water molecules located in the interlayer space of H⁺-exchanged bentonite clay. The conductivity obtained by ac measurements was 1.25 × 10⁻⁴ S/cm at 298 K. Proton (¹H) lineshapes and spin-lattice relaxation times were measured as a function of temperature over the range 130-320 K. The NMR experiments exhibit the qualitative features associated with proton motion, namely the presence of ¹H NMR line narrowing and a well-defined spin-lattice relaxation rate maximum. The temperature dependence of the proton spin-lattice relaxation rates was analyzed with the spectral density function appropriate for proton dynamics in a two-dimensional system. The self-diffusion coefficient estimated from our NMR data, D ≈ 2 × 10⁻⁷ cm²/s at 300 K, is consistent with those reported for exchanged montmorillonite clay hydrates studied by NMR and quasi-elastic neutron scattering (QNS).
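The reported conductivity and self-diffusion coefficient can be linked through the Nernst-Einstein relation, sigma = n q² D / (kB T), to back out an effective charge-carrier density. The sketch below is my own consistency check, not a calculation from the paper; it assumes all carriers are protons with the NMR diffusion coefficient.

```python
# Back-of-the-envelope Nernst-Einstein estimate (illustrative, not from the
# paper): effective proton carrier density n = sigma * kB * T / (q^2 * D),
# using the abstract's conductivity and NMR self-diffusion coefficient.
kB = 1.380649e-23    # J/K, Boltzmann constant
q = 1.602176634e-19  # C, proton charge
sigma = 1.25e-2      # S/m  (1.25e-4 S/cm at 298 K)
D = 2e-11            # m^2/s (2e-7 cm^2/s at 300 K)
T = 300.0            # K

n = sigma * kB * T / (q ** 2 * D)  # carriers per m^3
print(f"effective carrier density: {n:.2e} m^-3")
```

The result, of order 10²⁶ m⁻³ (10²⁰ cm⁻³), is a plausible magnitude for mobile interlayer protons, which is consistent with the paper's conclusion that the same proton motion underlies both measurements.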
Abstract:
A silica surface chemically modified with [3-(2,2'-dipyridylamine) propyl] groups was prepared, characterized, and evaluated for metal ion preconcentration from fuel ethanol. To our knowledge, we are the first authors to report this modification of the silica gel surface. The material was characterized using infrared spectra, scanning electron microscopy, and ¹³C and ²⁹Si solid-state NMR spectra. Batch and column experiments were conducted to investigate metal ion removal from fuel ethanol. The results showed that the Langmuir model describes the sorption equilibrium data of the metal ions in a satisfactory way. From the Langmuir isotherms, the following maximum adsorption capacities (in mmol g⁻¹) were determined: 1.81 for Fe(III), 1.75 for Cr(III), 1.30 for Cu(II), 1.25 for Co(II), 1.15 for Pb(II), 0.95 for Ni(II), and 0.87 for Zn(II). The thermodynamic functions, i.e. the changes in free energy (ΔG), enthalpy (ΔH), and entropy (ΔS), showed that the adsorption of metal ions onto Si-Pr-DPA was feasible, spontaneous, and endothermic. The sorption-desorption of the metal ions made possible the development of a preconcentration and quantification method for metal ions in fuel ethanol. © 2012 Elsevier Inc.
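The Langmuir model the abstract refers to has the closed form q = qmax·KL·C / (1 + KL·C). A minimal sketch, using the reported Fe(III) capacity but an affinity constant KL that is purely an assumption of mine (the abstract does not report KL values):

```python
# Langmuir isotherm sketch: q = qmax * KL * C / (1 + KL * C).
# qmax for Fe(III) (1.81 mmol/g) is taken from the abstract;
# KL below is a hypothetical value for illustration only.

def langmuir(C, qmax, KL):
    """Adsorbed amount q (mmol/g) at equilibrium concentration C (mmol/L)."""
    return qmax * KL * C / (1.0 + KL * C)

qmax_fe = 1.81  # mmol/g, Fe(III) maximum adsorption capacity (from abstract)
KL = 2.0        # L/mmol, assumed affinity constant

# q rises with concentration and saturates at qmax:
print(langmuir(0.5, qmax_fe, KL))    # well below qmax
print(langmuir(100.0, qmax_fe, KL))  # approaches qmax
```

Fitting this expression to batch equilibrium data is what yields the qmax values listed in the abstract for each metal ion.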
Abstract:
The frequency of spontaneous micronucleus (MN) formation in fish species needs to be determined to evaluate their usefulness for genotoxic biomonitoring. The definition of a good bioindicator takes into account the current knowledge of its metabolic traits as well as other factors including its feeding behavior and relationship to the environment. In this study, we compared the basal frequencies of micronucleated erythrocytes and nuclear abnormalities (NA) among different species of the fish Order Gymnotiformes (Rhamphichthys marmoratus, Steatogenys elegans, Sternopygus macrurus, Parapteronotus hasemani, Gymnotus mamiraua, Gymnotus arapaima, Brachyhypopomus beebei, Brachyhypopomus n. sp. BENN) sampled in several localities of the Eastern Amazon. A baseline of MN and NA frequency in these fish was determined, enabling the identification of potentially useful species as models for genotoxicity studies. Only one impacted sample collected at a site in the River Caripetuba showed a significant number of NAs, which may be due to the release of wastewater by neighbouring mining industries and by the burnt fuel released by the small boats used by a local community. Our results may provide support for further studies in areas of the Eastern Amazon affected by mining, deforestation and other anthropogenic activities.
Abstract:
Vinylphosphonic acid (VPA) was polymerized at 80 °C by free radical polymerization to give polymers (PVPA) of different molecular weight depending on the initiator concentration. The highest molecular weight, Mw, achieved was 6.2 × 10⁴ g/mol as determined by static light scattering. High resolution nuclear magnetic resonance (NMR) spectroscopy was used to gain microstructure information about the polymer chain. Information based on tetrad probabilities was utilized to deduce an almost atactic configuration. In addition, ¹³C NMR gave evidence for the presence of head-head and tail-tail links. Refined analysis of the ¹H NMR spectra allowed for the quantitative determination of the fraction of these links (23.5% of all links). Experimental evidence suggested that the polymerization proceeded via cyclopolymerization of the vinylphosphonic acid anhydride as an intermediate. Titration curves indicated that high molecular weight poly(vinylphosphonic acid) PVPA behaved as a monoprotic acid. Proton conductors with phosphonic acid moieties as protogenic groups are promising due to their high charge carrier concentration, thermal stability, and oxidation resistivity. Blends and copolymers of PVPA have already been reported, but PVPA has not been characterized sufficiently with respect to its polymer properties. Therefore, we also studied the proton conductivity behaviour of a well-characterized PVPA. PVPA is a conductor; however, the conductivity depends strongly on the water content of the material. The phosphonic acid functionality in the resulting polymer, PVPA, undergoes condensation leading to the formation of phosphonic anhydride groups at elevated temperature. Anhydride formation was found to be temperature dependent by solid state NMR. Anhydride formation affects the proton conductivity to a large extent because not only the number of charge carriers but also the mobility of the charge carriers seems to change.
Abstract:
Previous studies have highlighted the severity of the detrimental effects for life on Earth after an assumed regionally limited nuclear war. These effects are caused by climatic, chemical and radiative changes persisting for up to one decade. However, so far only a very limited number of climate model simulations have been performed, raising the question of how realistic previous computations have been. This study uses the coupled chemistry climate model (CCM) SOCOL, which belongs to a different family of CCMs than those previously used, to investigate the consequences of such a hypothetical nuclear conflict. In accordance with previous studies, the present work assumes a scenario of a nuclear conflict between India and Pakistan, each applying 50 warheads with an individual blasting power of 15 kt ("Hiroshima size") against the major population centers, resulting in the emission of tiny soot particles generated in the firestorms expected in the aftermath of the detonations. Substantial uncertainties related to the calculation of likely soot emissions, particularly concerning assumptions of target fuel loading and targeting of weapons, have been addressed by simulating several scenarios, with soot emissions ranging from 1 to 12 Tg. The high absorptivity of the soot particles with respect to solar radiation leads to their rapid self-lofting into the strato- and mesosphere within a few days after emission, where they remain for several years. Consequently, the model suggests that Earth's surface temperatures drop by several degrees Celsius due to the shielding of solar irradiance by the soot, indicating a major global cooling. In addition, there is a substantial reduction of precipitation lasting 5 to 10 yr after the conflict, depending on the magnitude of the initial soot release.
Extreme cold spells associated with an increase in sea ice formation are found during Northern Hemisphere winter, exposing the continental land masses of North America and Eurasia to a cooling of several degrees. In the stratosphere, the strong heating leads to an acceleration of catalytic ozone loss and, consequently, to enhancements of UV radiation at the ground. In contrast to the surface temperature and precipitation changes, which show a linear dependence on the soot burden, there is a saturation effect with respect to stratospheric ozone chemistry. Soot emissions of 5 Tg lead to an ozone column reduction of almost 50% in northern high latitudes, while emitting 12 Tg only increases the ozone loss by a further 10%. In summary, this study, though using a different chemistry climate model, corroborates the previous investigations with respect to the atmospheric impacts. In addition to these persistent effects, the present study draws attention to episodic cold phases, which would likely add to the severity of human harm worldwide. The best insurance against such a catastrophic development would be the delegitimization of nuclear weapons.
Abstract:
The conceptual design of a pebble bed gas-cooled transmutation device is presented, with the aim of evaluating its potential for deployment in the context of sustainable nuclear energy development, which considers high temperature reactors operating in cogeneration mode, producing electricity, heat and hydrogen. As distinguishing characteristics, our device operates in subcritical mode, driven by a neutron source activated by an accelerator, which adds clear safety advantages and fuel flexibility, opening the possibility of reducing the nuclear stockpile by producing energy from actual LWR irradiated fuel with an efficiency of 45-46%, either in the form of hydrogen, electricity, or both.
Abstract:
The aim of this paper is to study the importance of nuclear data uncertainties in the prediction of the uncertainties in keff for LWR (Light Water Reactor) unit cells. The first part of this work focuses on the comparison of different sensitivity/uncertainty propagation methodologies based on the TSUNAMI and MCNP codes; this study is undertaken for fresh fuel at different operational conditions. The second part of this work studies the burnup effect, where the indirect contribution due to the uncertainty of the isotopic evolution is also analyzed.
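The deterministic side of such sensitivity/uncertainty propagation is the first-order "sandwich rule": var(keff) = Sᵀ C S, where S is the vector of keff sensitivities to the nuclear data parameters and C is their covariance matrix. The sketch below uses made-up numbers purely for illustration; the real analysis involves thousands of energy-group/reaction pairs.

```python
# First-order ("sandwich rule") propagation sketch: var(keff) = S^T C S.
# S holds relative sensitivities (dk/k per dp/p); C is the relative
# covariance matrix of the parameters. All numbers are illustrative.
import numpy as np

S = np.array([0.30, -0.15, 0.05])          # e.g. capture, fission, scatter
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    1.0e-4]])   # relative covariance matrix

var_k = S @ C @ S                          # relative variance of keff
print(f"relative std of keff: {np.sqrt(var_k):.2e}")
```

Monte Carlo approaches (sampling the nuclear data and re-running the transport calculation) avoid the first-order assumption at much higher computational cost; comparing the two styles is exactly the kind of exercise the paper describes.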
Abstract:
For an adequate assessment of the safety margins of nuclear facilities, e.g. nuclear power plants, it is necessary to consider all possible uncertainties that affect their design, performance and possible accidents. Nuclear data are a source of uncertainty involved in neutronics, fuel depletion and activation calculations. These calculations can predict critical response functions during operation and in the event of an accident, such as decay heat and the neutron multiplication factor.
Thus, the impact of nuclear data uncertainties on these response functions needs to be addressed for a proper evaluation of the safety margins. Methodologies for performing uncertainty propagation calculations need to be implemented in order to analyse the impact of nuclear data uncertainties. It is also necessary to understand the current status of nuclear data and their uncertainties in order to be able to handle this type of data. Great efforts are underway to enhance the European capability to analyse, process and produce covariance data, especially for isotopes which are of importance for advanced reactors. At the same time, new methodologies and codes are being developed and implemented for using such data and evaluating their impact. These were the objectives of the European ANDES (Accurate Nuclear Data for nuclear Energy Sustainability) project, which provided the framework for the development of this PhD thesis. Accordingly, a review of the state of the art of nuclear data and their uncertainties is first conducted, focusing on the three kinds of data: decay data, fission yields and cross sections. A review of the current methodologies for propagating nuclear data uncertainties is also performed. The Nuclear Engineering Department of UPM has proposed a methodology for propagating uncertainties in depletion calculations, the Hybrid Method, which has been taken as the starting point of this thesis. This methodology has been implemented, developed and extended, and its advantages, drawbacks and limitations have been analysed. It is used in conjunction with the ACAB depletion code and is based on Monte Carlo sampling of variables with uncertainties. Different approaches are presented depending on the cross-section energy structure: one-group, one-group with correlated sampling, and multi-group. Differences and applicability criteria are presented.
Sequences have been developed for using different nuclear data libraries in different storage formats: ENDF-6 (for evaluated libraries), COVERX (for the multi-group libraries of SCALE), and EAF (for activation libraries). A revision of the state of the art of fission yield data shows gaps in the uncertainty data, specifically the lack of complete covariance matrices. Furthermore, the international community has expressed renewed interest in the issue through the Working Party on International Nuclear Data Evaluation Co-operation (WPEC) Subgroup 37 (SG37), which is dedicated to assessing the need for improved nuclear data. This motivated a review of the state of the art of methodologies for generating covariance data for fission yields; a Bayesian/generalised least squares (GLS) updating sequence was selected and implemented to answer this need. Once the Hybrid Method had been implemented, developed and extended, along with the fission yield covariance generation capability, different applications were studied. The fission pulse decay heat problem is tackled first because of its importance for any event after reactor shutdown, and because it is a clean exercise for showing the impact and importance of decay and fission yield data uncertainties in conjunction with the new covariance data. Two fuel cycles of advanced reactors are then studied: the European Facility for Industrial Transmutation (EFIT) and the European Sodium Fast Reactor (ESFR); uncertainties in response functions such as isotopic composition, decay heat and radiotoxicity are addressed. Different nuclear data libraries are used and compared. These applications serve as frameworks for comparing the different approaches of the Hybrid Method, and also for comparison with other methodologies: Total Monte Carlo (TMC), developed at NRG by A.J. Koning and D. Rochman, and NUDUNA, developed at AREVA GmbH by O. Buss and A. Hoefer.
These comparisons reveal the advantages, limitations and the range of application of the Hybrid Method.
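The Monte Carlo sampling idea behind the Hybrid Method can be illustrated with a deliberately trivial depletion model: sample an uncertain one-group cross section, evolve the inventory, and read the response uncertainty off the sample spread. Everything below (the single-nuclide burnout model, the 5% uncertainty, the flux and time values) is an assumption of mine for illustration; the actual method couples the sampling to the ACAB depletion code.

```python
# Toy Monte Carlo propagation sketch (illustrative, not ACAB): sample a
# one-group capture cross section from its assumed uncertainty, apply a
# single-nuclide burnout step N = N0 * exp(-sigma * phi * t), and estimate
# the resulting spread of the inventory.
import math
import random

random.seed(0)

sigma_c = 2.0e-24   # cm^2, nominal one-group capture cross section (assumed)
rel_unc = 0.05      # 5% relative standard deviation (assumed)
phi = 3.0e14        # n/cm^2/s, neutron flux (assumed)
t = 3.0e7           # s, irradiation time (~1 year)
N0 = 1.0e24         # initial nuclide inventory

samples = []
for _ in range(5000):
    s = random.gauss(sigma_c, rel_unc * sigma_c)   # sampled cross section
    samples.append(N0 * math.exp(-s * phi * t))    # surviving inventory

mean = sum(samples) / len(samples)
std = math.sqrt(sum((x - mean) ** 2 for x in samples) / (len(samples) - 1))
print(f"relative uncertainty of inventory: {std / mean:.2%}")
```

In the real method the sampled quantities are full libraries (decay data, fission yields, multi-group cross sections with covariances), and each sample drives a complete depletion calculation, which is why the one-group and correlated-sampling approximations mentioned in the abstract matter for cost.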
Abstract:
May 1, 1959.