118 results for Nuclides


Relevance:

10.00%

Publisher:

Abstract:

A sensitivity analysis of the multiplication factor, keff, to the cross-section data has been carried out for the MYRRHA critical configuration in order to identify the most relevant reactions. With these results, a further analysis of the 238Pu and 56Fe cross sections has been performed, comparing the evaluations provided for these nuclides in the JEFF-3.1.2 and ENDF/B-VII.1 libraries. The effect in MYRRHA of the differences between evaluations is then analysed, and the source of the differences is presented. Based on these results, recommendations for the 56Fe and 238Pu evaluations are suggested. These calculations have been performed with SCALE6.1 and MCNPX-2.7e.
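The abstract does not give the sensitivity coefficients themselves; as a rough illustration (all names and numbers below are hypothetical, not from the study), a first-order relative sensitivity S = (Δk/k)/(Δσ/σ) can be estimated from a pair of direct transport calculations:

```python
# Hypothetical illustration: first-order sensitivity of k_eff to a cross section,
# S = (dk/k) / (dsigma/sigma), estimated from a reference and a perturbed run.

def sensitivity(k_ref, k_pert, rel_sigma_change):
    """Relative sensitivity coefficient from two k_eff calculations."""
    return ((k_pert - k_ref) / k_ref) / rel_sigma_change

# Example: a +1% perturbation of one reaction's cross section (made-up values)
k_ref, k_pert = 0.95000, 0.94981
S = sensitivity(k_ref, k_pert, 0.01)
print(round(S, 3))  # -0.02
```

In practice such coefficients are obtained with perturbation-theory tools (e.g. the TSUNAMI sequences of SCALE) rather than by brute-force reruns, which is why the direct-difference sketch above is only illustrative.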

Relevance:

10.00%

Publisher:

Abstract:

The management of long-lived radioactive waste produced by nuclear reactors is one of the main challenges of nuclear technology today. One option for its management is the transmutation of long-lived nuclides into shorter-lived ones. Accelerator-Driven Subcritical Systems (ADS) are one of the technologies under development to achieve this goal. An ADS consists of a subcritical nuclear reactor maintained in a steady state by an external neutron source driven by a particle accelerator. The interest of these systems lies in their capacity to be loaded with fuels having larger contents of minor actinides than conventional critical reactors, thereby increasing the transmutation rates of these elements, which are mainly responsible for the long-term radiotoxicity of nuclear waste. One of the key points identified for the operation of an industrial-scale ADS is the need to monitor the reactivity of the subcritical system continuously during operation. For this reason, since the 1990s a number of experiments have been conducted in zero-power subcritical assemblies (MUSE, RACE, KUCA, Yalina, GUINEVERE/FREYA) to validate these techniques experimentally. In this context, the present thesis is concerned with the validation of reactivity-monitoring techniques at the Yalina-Booster subcritical assembly, which belongs to the Joint Institute for Power and Nuclear Research (JIPNR-Sosny) of the National Academy of Sciences of Belarus. Experiments concerning reactivity monitoring were performed in this facility in 2008, within the EUROTRANS project of the 6th EU Framework Programme, under the direction of CIEMAT. Two types of experiments were carried out: experiments with a pulsed neutron source (PNS) and experiments with a continuous source with short interruptions (beam trips).
For the PNS experiments, two fundamental techniques exist to measure the reactivity, known as the prompt-to-delayed neutron area-ratio technique (or Sjöstrand technique) and the prompt neutron decay constant technique. However, previous experiments have shown the need to apply correction techniques that take into account the spatial and energy effects present in a real system and thus obtain accurate values of the reactivity. In this thesis, these corrections have been investigated through simulations of the system with the Monte Carlo code MCNPX. This research has also served to propose a generalized version of these techniques in which relationships between the reactivity of the system and the measured quantities are obtained through Monte Carlo simulations. The second type of experiment, with a continuous source and beam trips, is the one more likely to be employed in an industrial ADS. The generalized version of the techniques developed for the PNS experiments has also been applied to the results of these experiments. Furthermore, the work presented in this thesis is, to my knowledge, the first time that the reactivity of a subcritical system has been monitored during operation simultaneously with three different techniques: the current-to-flux, source-jerk and prompt neutron decay techniques. The cases analyzed include fast variations of the system reactivity (insertion and extraction of a control rod) and fast variations of the neutron source (long beam interruption and subsequent recovery).
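The Sjöstrand area-ratio technique estimates the reactivity in dollars as minus the ratio of the prompt-neutron area to the delayed-neutron area of the PNS response. A minimal sketch on synthetic data (the decay constant, plateau level and binning below are made up, and the delayed background is idealized as flat):

```python
# Sketch of the Sjoestrand (area-ratio) technique: after a neutron pulse, the
# detector response is a fast prompt decay on top of a nearly constant delayed
# plateau; the reactivity in dollars is rho($) = -A_prompt / A_delayed.
import math

dt = 1e-5                     # time-bin width (s), made-up
alpha = 500.0                 # prompt decay constant (1/s), made-up
delayed_level = 50.0          # delayed-neutron plateau (counts per bin), made-up
n_bins = 2000

counts = [1000.0 * math.exp(-alpha * i * dt) + delayed_level for i in range(n_bins)]

a_delayed = delayed_level * n_bins    # delayed area over the counting interval
a_prompt = sum(counts) - a_delayed    # prompt area = total area minus plateau
rho_dollars = -a_prompt / a_delayed
print(round(rho_dollars, 2))
```

The thesis' point is precisely that this pointwise estimate needs Monte Carlo-derived spatial and energy corrections before it matches the true reactivity of a real assembly.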

Relevance:

10.00%

Publisher:

Abstract:

The feasibility of automatic gamma-radiation spectral decomposition by linear algebraic equation-solving algorithms using pseudo-inverse techniques is explored in this work. The algorithms have been designed taking into account their possible implementation on special-purpose processors of low complexity. In the first chapter, the techniques for the detection and measurement of gamma radiation employed to construct the spectra used throughout the research are reviewed. The basic concepts related to the nature and properties of hard electromagnetic radiation are also re-examined, together with the physical and electronic processes involved in its detection, with special emphasis on the intrinsically statistical nature of the spectrum build-up process, which is regarded as a classification of the number of individual photon detections as a function of the energy associated with each photon. To this end, a brief description is given of the most important radiation-matter interaction phenomena conditioning the detection and spectrum-formation processes. The radiation detector is considered the most critical element of the measurement system, as this device strongly conditions the detection process. For this reason, the characteristics of the most common detectors are re-examined, with special emphasis on semiconductor detectors, as these are the most frequently employed nowadays.
Finally, the fundamental electronic subsystems for pre-conditioning and treating the signal delivered by the detector, classically referred to as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem of most interest for the present research is the so-called multichannel analyzer, which is devoted to the qualitative treatment of the signal, building up a histogram of radiation intensity over the range of energies to which the detector is sensitive. The resulting N-dimensional vector is generally known as the radiation spectrum. The different radionuclides contributing to the spectrum of a composite source leave their fingerprint in the resulting spectrum. In the second chapter, an exhaustive review is given of the mathematical methods devised to date to identify the radionuclides present in a composite spectrum and to quantify their relative contributions. One of the most popular is multiple linear regression, which is proposed as the approach best suited to the constraints and restrictions of the problem: the need to treat low-resolution spectra, the absence of control by a human operator (unsupervised operation), and the possibility of implementation as low-complexity algorithms amenable to being supported by VLSI special-purpose processors. The analysis problem is formally stated in the third chapter, following the guidelines established above, and it is shown that the problem admits a solution within the theory of linear associative memories. An operator based on this kind of structure can provide the solution to the spectral decomposition problem posed.
In the same context, a pair of complementary adaptive algorithms for the construction of the solving operator is proposed; they share certain arithmetic characteristics that render them especially suitable for implementation on VLSI processors. The adaptive nature of the associative memory gives this operator great flexibility regarding the progressive incorporation of new information into the knowledge base. The fourth chapter treats this aspect together with a new and highly complex problem: the treatment of the deformations introduced into the spectrum by instrumental drifts in both the detecting device and the pre-conditioning electronics. These deformations render the linear regression model almost useless for describing the resulting spectrum. A new model including the drifts is therefore derived as an extension of the individual contributions to the composite spectrum, which implies a simple extension of the associative memory that makes it tolerant to drifts in the composite spectrum and thus capable of a robust analysis of the contributions. The extension method is based on the small-perturbation hypothesis. Laboratory practice shows that in certain cases the instrumental drifts may provoke severe distortions in the spectrum that cannot be treated under this hypothesis. To cover these less frequent cases as well, the fifth chapter treats the problem of measurements affected by strong drifts from the point of view of non-linear optimization. This reformulation leads to recursive algorithms inspired by the Gauss-Newton method, which allow the introduction of feedback linear memories: computing elements with a markedly improved capability to decompose spectra affected by strong drifts, without the excessive computational load of classical non-linear optimization algorithms.
The work concludes with a discussion of the results obtained at the three main levels of study, presented in the third, fourth and fifth chapters, together with a review of the main conclusions derived from the study and an outline of the research lines opened by the present work.
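The core of the decomposition described above, recovering the relative activities of the nuclides present in a composite spectrum, is a linear least-squares problem: a linear associative memory effectively stores a pseudo-inverse of the reference-spectrum matrix. A toy sketch with two made-up six-channel reference spectra, solved via the 2x2 normal equations in pure Python:

```python
# Toy spectral unmixing: a composite spectrum s is modeled as a linear mixture
# s = a1*r1 + a2*r2 of known single-nuclide reference spectra r1, r2.
# Solving the normal equations (R^T R) a = R^T s gives the least-squares
# activities; this is exactly what applying the pseudo-inverse of R computes.

r1 = [0.0, 1.0, 4.0, 1.0, 0.0, 0.0]        # made-up reference spectrum, nuclide 1
r2 = [0.0, 0.0, 1.0, 3.0, 2.0, 0.0]        # made-up reference spectrum, nuclide 2
s = [2 * x + 3 * y for x, y in zip(r1, r2)]  # synthetic mixture: a1=2, a2=3

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
g11, g12, g22 = dot(r1, r1), dot(r1, r2), dot(r2, r2)   # Gram matrix entries
b1, b2 = dot(r1, s), dot(r2, s)

det = g11 * g22 - g12 * g12   # non-zero when the reference spectra are independent
a1 = (b1 * g22 - g12 * b2) / det
a2 = (g11 * b2 - g12 * b1) / det
print(a1, a2)   # recovers 2.0 and 3.0
```

With noisy, drift-affected spectra this plain inversion degrades, which is the motivation for the drift-extended and feedback memories developed in chapters four and five.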

Relevance:

10.00%

Publisher:

Abstract:

This work explores the multi-element capabilities of inductively coupled plasma mass spectrometry with collision/reaction cell technology (CCT-ICP-MS) for the simultaneous determination of both spectrally interfered and non-interfered nuclides in wine samples using a single set of experimental conditions. The influence of the cell gas type (i.e. He, He+H2 and He+NH3), cell gas flow rate and sample pre-treatment (i.e. water dilution or acid digestion) on the background-equivalent concentration (BEC) of several nuclides covering the mass range from 7 to 238 u has been studied. Results obtained in this work show that operating the collision/reaction cell at a compromise cell gas flow rate (i.e. 4 mL min−1) improves BEC values for interfered nuclides without a significant effect on the BECs for non-interfered nuclides, with the exception of the light elements Li and Be. Among the different cell gas mixtures tested, the use of He or He+H2 is preferred over He+NH3 because NH3 generates new spectral interferences. No significant influence of the sample pre-treatment methodology (i.e. dilution or digestion) on the multi-element capabilities of CCT-ICP-MS for the simultaneous analysis of interfered and non-interfered nuclides was observed. Nonetheless, sample dilution should be kept to a minimum to ensure that light nuclides (e.g. Li and Be) can be quantified in wine. Finally, a direct 5-fold aqueous dilution is recommended for the simultaneous trace and ultra-trace determination of spectrally interfered and non-interfered elements in wine by CCT-ICP-MS. The use of the CCT is mandatory for interference-free ultra-trace determination of Ti and Cr. Only Be could not be determined when using the CCT, owing to a deteriorated limit of detection compared to conventional ICP-MS.
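The background-equivalent concentration used as the figure of merit above is commonly computed from a blank and a standard of known concentration; a minimal sketch (the intensities are made up):

```python
# Background-equivalent concentration (BEC): the analyte concentration that would
# produce a signal equal to the background, BEC = c_std * I_blank / (I_std - I_blank).

def bec(c_std, i_std, i_blank):
    """BEC in the units of c_std, from standard and blank intensities (e.g. cps)."""
    return c_std * i_blank / (i_std - i_blank)

# Made-up example: a 1 ug/L standard gives 50000 cps, the blank gives 500 cps
print(bec(1.0, 50000.0, 500.0))   # ~0.0101 ug/L
```

A lower BEC after switching on the cell gas indicates that the polyatomic background on the interfered mass has been suppressed more than the analyte signal.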

Relevance:

10.00%

Publisher:

Abstract:

Results of a systematic study of non-spectral interferences from sulfuric-acid-containing matrices on a large number of elements in inductively coupled plasma mass spectrometry (ICP-MS) are presented in this work. The signals obtained with sulfuric acid solutions of different concentrations (up to 5% w w−1) have been compared with the corresponding signals for a 1% w w−1 nitric acid solution at different experimental conditions (i.e., sample uptake rates, nebulizer gas flows and r.f. powers). The signals observed for 128Te+, 78Se+ and 75As+ were significantly higher when using sulfuric acid matrices (up to 2.2-fold for 128Te+ and 78Se+ and 1.8-fold for 75As+ in the presence of 5% w w−1 sulfuric acid) over the whole range of experimental conditions tested. This is in agreement with previously reported observations. The signal for 31P+ is also higher (1.1-fold) in the presence of sulfuric acid. The signal enhancements for 128Te+, 78Se+, 75As+ and 31P+ are explained by an increase in the analyte ion population as a result of charge-transfer reactions involving S+ species in the plasma. Theoretical data suggest that Os, Sb, Pt, Ir, Zn and Hg could also be involved in sulfur-based charge-transfer reactions, but no experimental evidence has been found. The presence of sulfuric acid gives rise to lower ion signals (about 10-20% lower) for the other nuclides tested, indicating a negative matrix effect caused by changes in the analyte loading of the plasma. The elemental composition of a certified low-density polyethylene sample (ERM-EC681K) was determined by ICP-MS after two different sample digestion procedures, one of them including sulfuric acid. Element concentrations were in agreement with the certified values, irrespective of the acids used for the digestion. These results demonstrate that the use of matrix-matched standards allows the accurate determination of the tested elements in a sulfuric acid matrix.
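The closing point, that matrix-matched standards recover accurate concentrations despite the 10-20% suppression, holds because a multiplicative matrix effect cancels when sample and standard share the matrix. A schematic example (all numbers made up):

```python
# Why matrix matching corrects a multiplicative matrix effect: if the matrix
# suppresses the sensitivity by the same factor f in both standard and sample,
# f cancels out of the external calibration.

f = 0.85                         # 15% signal suppression by the matrix (made-up)
sensitivity_clean = 1000.0       # cps per ug/L without matrix (made-up)
true_conc = 2.0                  # ug/L

i_sample = f * sensitivity_clean * true_conc   # suppressed sample signal
slope_matched = f * sensitivity_clean          # slope from matrix-matched standards
slope_clean = sensitivity_clean                # slope from clean (nitric-only) standards

print(i_sample / slope_matched)   # 2.0 -> accurate
print(i_sample / slope_clean)     # 1.7 -> biased low by the matrix effect
```

The same cancellation does not help for the charge-transfer enhancements unless the standards contain sulfur at the same level as the samples, which is why matching the acid matrix matters in both directions.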

Relevance:

10.00%

Publisher:

Abstract:

The McMurdo Dry Valleys, Antarctica (MDV) are among the oldest landscapes on Earth, and some landforms there present an intriguing apparent contradiction: surface deposits millions of years old maintain their meter-scale morphology despite measured erosion rates of 0.1-4 m/Ma. We analyzed the concentration of cosmic-ray-produced 10Be and 26Al in quartz sands from regolith directly above and below two well-documented ash deposits in the MDV, the Arena Valley ash (40Ar/39Ar age of 4.33 Ma) and the Hart ash (K-Ar age of 3.9 Ma). Measured concentrations of 10Be and 26Al are significantly less than expected given the age of the in situ air-fall ashes and are best interpreted as reflecting the degradation rate of the overlying sediments. The erosion rate of the material above the Arena Valley ash that best explains the observed isotope profiles is 3.5 ± 0.41 × 10^-5 g/cm^2/yr (~0.19 m/Ma) for the past ~4 Ma. For the Hart ash, the erosion rate is 4.8 ± 0.21 × 10^-4 g/cm^2/yr (~2.6 m/Ma) for the past ~1 Ma. The concentration profiles show no signs of mixing, creep, or deflation caused by sublimation of ground ice. These results indicate that slow, steady lowering of the surface without vertical mixing may allow landforms to maintain their meter-scale morphology even though they are actively eroding.
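The mass-based erosion rates quoted above convert to linear rates through the regolith density; with an assumed bulk density of about 1.85 g/cm^3 (not stated in the abstract) the two quoted pairs are mutually consistent:

```python
# Convert a mass erosion rate (g/cm^2/yr) into a linear rate (m/Ma) via density:
# (g/cm^2/yr) / (g/cm^3) = cm/yr, and 1 cm/yr = 1e4 m/Ma.

def to_m_per_ma(mass_rate, density=1.85):   # density in g/cm^3, assumed value
    return mass_rate / density * 1e4

print(round(to_m_per_ma(3.5e-5), 2))   # ~0.19 m/Ma (Arena Valley ash)
print(round(to_m_per_ma(4.8e-4), 1))   # ~2.6 m/Ma (Hart ash)
```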

Relevance:

10.00%

Publisher:

Abstract:

A simple, reliable and efficient method has been developed for the direct determination of the isotopic composition of authigenic uranium in siliceous lacustrine sediments. The method is based on studying the kinetics of the selective extraction of authigenic uranium from the sediments by weak solutions of ammonium hydrocarbonate, followed by ICP-MS analysis of the nuclides. To estimate the contamination of the authigenic uranium by terrigenous uranium, the contents of 232Th and some other clastogenic elements in the extracts were measured simultaneously. The selectivity of the extraction of authigenic uranium from sediments treated with 1% NH4HCO3 solution proved to be no worse than 99%. The method was applied to the analysis of the isotopic composition of authigenic uranium at several key horizons of a previously dated core from Lake Baikal. The measurements show directly that 234U/238U values in Baikal water varied depending on climate, which contradicts existing hypotheses. Measured 234U/238U ratios in the water of the paleo-Baikal match the corresponding values reconstructed from isotopic data for total uranium in the sediments on the supposition that the U/Th ratio is constant in the terrigenous fraction of the sediments. Direct experimental determination of total and authigenic nuclides in sediments enhances the potential of the method for 234U-230Th dating of non-carbonate lacustrine sediments, including those from Lake Baikal within intervals corresponding to glacial periods, when the sediments were rich in terrigenous components. The terrigenous and authigenic uranium fractions are well separated, making it possible to study the variability of the sources of terrigenous matter and to refine the earlier model for reconstructing climate humidity in East Siberia.
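Reconstructions of this kind rest on the decay of excess 234U toward secular equilibrium: the activity ratio evolves as AR(t) = 1 + (AR0 - 1)·exp(-λ234·t). A sketch that back-calculates the water ratio at deposition (the half-life is an approximate literature value; the measured ratio and age are made up):

```python
# Back-calculate the initial 234U/238U activity ratio recorded in a sediment
# horizon of known age, from AR(t) = 1 + (AR0 - 1) * exp(-lam234 * t).
import math

HALF_LIFE_234U = 2.455e5                 # years, approximate literature value
LAM = math.log(2) / HALF_LIFE_234U

def initial_ratio(ar_measured, age_yr):
    """Invert the decay equation for AR0."""
    return 1.0 + (ar_measured - 1.0) * math.exp(LAM * age_yr)

# Made-up example: measured AR = 1.60 in a 100 kyr old horizon
ar0 = initial_ratio(1.60, 1.0e5)
print(round(ar0, 2))
```

Comparing AR0 values between horizons is what reveals the climate-dependent variation of the 234U/238U ratio in the lake water.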

Relevance:

10.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

10.00%

Publisher:

Abstract:

Nine samples of supergene goethite (FeOOH) from Brazil and Australia were selected to test the suitability of this mineral for (U-Th)/He dating. Measured He ages ranged from 61 to 8 Ma and were reproducible to better than a few percent despite very large variations in [U] and [Th]. In all samples with internal stratigraphy or independent age constraints, the He ages corroborated the expected relationships. These data demonstrate that internally consistent He ages can be obtained on goethite, but do not prove quantitative 4He retention. To assess possible diffusive He loss, stepped-heating experiments were performed on two goethite samples that were subjected to proton irradiation to produce a homogeneous distribution of spallogenic 3He. The 3He release pattern indicates the presence of at least two diffusion domains, one with high helium retentivity and the other with very low retentivity at Earth-surface conditions. The low-retentivity domain, which accounts for ~5% of the 3He, contains no natural 4He and may represent poorly crystalline or intergranular material that has lost all radiogenic 4He by diffusion in nature. Diffusive loss of 3He from the high-retentivity domain is independent of the macroscopic dimensions of the analyzed polycrystalline aggregate, so it probably represents diffusion from individual micrometer-size goethite crystals. The 4He/3He evolution during the incremental heating experiments shows that the high-retentivity domain has retained 90%-95% of its radiogenic helium. This degree of retentivity is in excellent agreement with that independently predicted from the helium diffusion coefficients extrapolated to Earth-surface temperature and held for the appropriate duration. Considering both the high- and low-retentivity domains, these data indicate that one of the samples retained 90% of its radiogenic 4He over 47.5 Ma and the other retained 86% over 12.3 Ma.
Thus, while diffusive-loss corrections to supergene goethite He ages are required, these initial results indicate that the corrections are not extremely large and can be rigorously quantified using the proton-irradiation 4He/3He method. Copyright (C) 2005 Elsevier Ltd.
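The raw He ages discussed above come from inverting the standard 4He ingrowth equation for t. A self-contained sketch using bisection (the decay constants are standard values; the parent amounts in the example are made up, and the sketch ignores the diffusive-loss correction that the paper shows is needed):

```python
# (U-Th)/He age: invert
#   He = 8*U238*(e^(l238*t)-1) + 7*U235*(e^(l235*t)-1) + 6*Th232*(e^(l232*t)-1)
# for t by bisection. Amounts are in atoms (any consistent unit works);
# natural U235 = U238 / 137.88.
import math

L238, L235, L232 = 1.55125e-10, 9.8485e-10, 4.9475e-11   # decay constants, 1/yr

def he_produced(t, u238, th232):
    u235 = u238 / 137.88
    return (8 * u238 * (math.exp(L238 * t) - 1)
            + 7 * u235 * (math.exp(L235 * t) - 1)
            + 6 * th232 * (math.exp(L232 * t) - 1))

def he_age(he, u238, th232, t_hi=5e9):
    lo, hi = 0.0, t_hi
    for _ in range(200):          # He production is monotonic in t, so bisect
        mid = 0.5 * (lo + hi)
        if he_produced(mid, u238, th232) < he:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check with made-up parent amounts and a 47.5 Ma age:
he = he_produced(47.5e6, u238=1.0e12, th232=2.0e12)
print(round(he_age(he, 1.0e12, 2.0e12) / 1e6, 1))   # 47.5
```

A diffusion-corrected age would divide the measured 4He by the retention fraction (e.g. 0.90) before inverting.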

Relevance:

10.00%

Publisher:

Abstract:

137Cs and 134Cs, components of the radioactive release from the Chernobyl reactor catastrophe of 26 April 1986, were deposited in the sediments of lakes in Schleswig-Holstein (Germany). Three years later, in autumn 1989, a sediment core was taken from the Großer Plöner See and the distribution of both caesium isotopes was determined. The radiocaesium profiles were dated by 210Pb. The radiocaesium nuclides from Chernobyl had diffused into sediment layers deposited decades before the catastrophe. The activity of 137Cs from Chernobyl was higher than that from nuclear-weapons fallout.
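Distinguishing Chernobyl-derived caesium from bomb fallout relies on 134Cs, which is absent from aged weapons fallout and decays much faster than 137Cs, so a measured 134Cs/137Cs activity ratio can be corrected back to the deposition date. A sketch (the half-lives are standard values; the measured ratio is made up):

```python
# Decay-correct a measured 134Cs/137Cs activity ratio back to the release date:
# each activity decays as e^(-lam*t), so R(0) = R(t) * exp((lam134 - lam137) * t).
import math

T12_CS134, T12_CS137 = 2.065, 30.17            # half-lives in years
lam134 = math.log(2) / T12_CS134
lam137 = math.log(2) / T12_CS137

def ratio_at_release(r_measured, years_elapsed):
    return r_measured * math.exp((lam134 - lam137) * years_elapsed)

# Made-up measurement ~3.4 years after April 1986 (i.e. autumn 1989):
print(round(ratio_at_release(0.18, 3.4), 2))
```

Layers whose back-corrected ratio matches the Chernobyl release signature can then be attributed to 1986, while 134Cs-free 137Cs is assigned to older bomb fallout.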

Relevance:

10.00%

Publisher:

Abstract:

In this thesis it is shown that the cosmogenic radionuclide 10Be is a sensitive stratigraphic tool for sediment cores from the Arctic Ocean with a low or negligible content of biogenic carbonate, which impedes a reliable δ18O stratigraphy. 10Be enables a stratigraphy of Arctic sediments comparable to the δ18O stratigraphy of Imbrie et al. [1984], in that high concentrations of 10Be are related to interglacial stages, in contrast to lower values during glacial periods. To use the 10Be profile as a dating tool it is necessary to investigate the sources and sinks as well as the pathways of this radiotracer. 10Be is produced in the upper atmosphere and transferred to the Earth's surface by dry and wet deposition. Besides the atmospheric component, there is an important riverine input of 10Be to the Arctic Ocean. I determined depositional 10Be fluxes in the shelf area of the Laptev Sea, which is characterized by a huge input of river water, on the continental slope of the Laptev Sea, in the central Arctic Ocean and in the Norwegian and Greenland Seas. The depositional 10Be fluxes of (20 ± 5) × 10^6 atoms/cm^2/a in the shelf area of the Laptev Sea are two orders of magnitude higher than the recent atmospheric input of (0.2 - 0.5) × 10^6 atoms/cm^2/a in Greenland, while the fluxes in the central Arctic Ocean are in the same range. Further, I developed a model to reconstruct the pathways of the radionuclides 230Th, 231Pa and 10Be in high northern latitudes. The modelling results were compared with the measured concentrations in the water column and the recent depositional fluxes. These results show that the recent pathways of these nuclides can be reproduced by this model. We can therefore apply the model to earlier oxygen-isotope stages to find out which prevailing conditions led to the observed depositional fluxes.
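When 10Be concentrations are compared downcore for stratigraphy, radioactive decay over the burial time has to be accounted for (half-life ~1.39 Myr). A minimal sketch (the measured concentration and age are made up):

```python
# Decay-correct a measured 10Be concentration to its value at deposition:
# N0 = N_measured * exp(lam * age), with lam = ln(2) / T_half.
import math

T_HALF_BE10 = 1.387e6                    # years, approximate literature value
LAM = math.log(2) / T_HALF_BE10

def at_deposition(n_measured, age_yr):
    return n_measured * math.exp(LAM * age_yr)

# Made-up example: 1.0e9 atoms/g measured in a 700 kyr old layer
print(round(at_deposition(1.0e9, 7.0e5) / 1e9, 2))
```

Only after this correction do downcore concentration maxima map cleanly onto interglacial stages.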

Relevance:

10.00%

Publisher:

Abstract:

In this work, the mean fluvial incision rate of the Darro River (Granada, Spain) during the period 1890-2010 is calculated for its urban reach (Alhambra-Valparaíso sector). Historical photographs showing the river were used to determine the position of the channel at the time the photographs were taken. Comparing those images with the present-day scenes made it possible to determine the difference in channel elevation through absolute-elevation measurements taken with a theodolite. This methodology has allowed a quantitative estimate of a mean vertical incision rate of 1.05 cm/yr for the historical period considered.
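The reported rate is simply the channel-elevation change divided by the elapsed time; a tiny sketch (the elevation difference shown is illustrative, chosen to reproduce the published rate, not a value from the study):

```python
# Mean incision rate = channel elevation change / elapsed time.
def incision_rate_cm_per_yr(delta_h_cm, year_start, year_end):
    return delta_h_cm / (year_end - year_start)

# Illustrative: ~126 cm of incision between 1890 and 2010 gives the quoted rate
print(incision_rate_cm_per_yr(126.0, 1890, 2010))   # 1.05
```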

Relevance:

10.00%

Publisher:

Abstract:

Polyhydroxybutyrate (PHB) is an intracellular biopolymer synthesized by bacterial fermentation. It offers numerous advantages (thermal and mechanical properties comparable to those of polyolefins, high hydrophobicity, biocompatibility, biodegradability, suitability for industrial production) but also significant drawbacks (poor thermal stability and brittleness). The poor stability is due to a thermal-degradation phenomenon close to the melting point of PHB, which reduces the molecular weight by a random chain-scission process and thereby irreversibly modifies the crystalline and rheological properties: the melting and crystallization temperatures of PHB decrease drastically and the crystallization kinetics are slowed. In addition, a second phenomenon, self-nucleation, occurs near the melting point. A certain amount of energy is needed to erase all crystalline residues from the melt; these residues can act as nuclei during the crystallization process and significantly influence the crystallization kinetics of PHB. This thesis aims to show the effect of the thermal-degradation and self-nucleation processes of PHB on its crystallization kinetics. To this end, three specific thermal protocols were proposed, varying the maximum heating temperature (Th) and the holding time at that temperature (th), in order to provide a new view of the effect of thermal treatment on the crystallization, self-nucleation, thermal degradation and microstructure of PHB, using differential scanning calorimetry (DSC) under non-isothermal and isothermal crystallization conditions, wide-angle X-ray diffraction (WAXD), infrared spectroscopy (FT-IR) and optical microscopy, respectively. Th was varied between 167 and 200 °C and th between 3 and 10 min.
At Th ≥ 185 °C, chain scission is the only phenomenon influencing the crystallization kinetics, whereas at Th < 180 °C a nucleation process promoted by the presence of crystalline residues predominates over the degradation phenomenon. Regarding the effect of the holding time, th, the thermal-degradation phenomenon was found to be sensitive to this parameter, whereas the self-nucleation process was not. Finally, it was shown that the crystalline morphology is strongly affected by both the thermal-degradation and self-nucleation mechanisms.
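Isothermal crystallization kinetics from DSC, as studied here, are conventionally analyzed with the Avrami model X(t) = 1 - exp(-k t^n), fit by linearizing to ln(-ln(1 - X)) = ln(k) + n ln(t). A sketch on synthetic data, using a pure-Python least-squares line fit (the k and n values are made up, not from the thesis):

```python
# Avrami analysis of isothermal crystallization: generate a synthetic relative
# crystallinity curve X(t) = 1 - exp(-k * t**n), then recover n and k from the
# double-log linearization ln(-ln(1 - X)) = ln(k) + n * ln(t).
import math

k_true, n_true = 2.0e-3, 2.5                # made-up Avrami constants
times = list(range(1, 41))                  # time in seconds
X = [1 - math.exp(-k_true * t**n_true) for t in times]

pairs = [(math.log(t), math.log(-math.log(1 - x)))
         for t, x in zip(times, X) if 0.02 < x < 0.98]   # usual fitting window

# Ordinary least-squares fit of the line y = ln(k) + n * x
N = len(pairs)
sx = sum(x for x, _ in pairs)
sy = sum(y for _, y in pairs)
sxx = sum(x * x for x, _ in pairs)
sxy = sum(x * y for x, y in pairs)
n_fit = (N * sxy - sx * sy) / (N * sxx - sx * sx)
k_fit = math.exp((sy - n_fit * sx) / N)

print(round(n_fit, 2), round(k_fit, 4))   # recovers 2.5 and 0.002
```

Degradation by chain scission shows up in such an analysis as a drop in k (slower kinetics) between protocols with increasing Th or th.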