228 results for Cosmogenic nuclides
Abstract:
The calcareous nannofossil assemblages of Ocean Drilling Program Hole 963D from the central Mediterranean Sea have been investigated to document oceanographic changes in surface waters. The studied site is located in an area sensitive to large-scale atmospheric and climatic systems and to high- and low-latitude climate connections. It is characterized by a high sedimentation rate (the achieved mean sampling resolution is <70 years), which allowed the environmental changes of the Sicily Channel to be examined in great detail over the last 12 ka BP. We focused on the species Florisphaera profunda, which lives in the lower photic zone. Its distribution pattern shows repeated abundance fluctuations of about 10-15%. Such variations can be related to different primary production levels, given that the distribution of this species on the Sicily Channel seafloor correlates significantly with productivity changes derived from satellite imagery. Productivity variations were quantitatively estimated and interpreted in terms of the relocation of the nutricline within the photic zone, driven by the dynamics of the summer thermocline. Productivity changes were compared with oceanographic, atmospheric, and cosmogenic nuclide proxies. The good match with Holocene master records, such as ice-rafted detritus in the subpolar North Atlantic, and the near-1500-year periodicity suggest that the Sicily Channel environment responded to worldwide climate anomalies. Enhanced Northern Hemisphere atmospheric circulation, reported as one of the most important forcing mechanisms for Holocene coolings in previous Mediterranean studies, had a remarkable impact on the water-column dynamics of the Sicily Channel.
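Quantitative productivity estimates of the kind described above typically rest on a transfer function calibrated against the relative abundance of F. profunda. As an illustration only (the log-linear form is a common choice in the literature, and the coefficients below are placeholders, not the calibration used in the study), such a function can be sketched as:

```python
import math

def primary_productivity(fp_percent, a=617.0, b=279.0):
    """Estimate primary productivity (gC m-2 yr-1) from the relative
    abundance (%) of F. profunda, using a log-linear transfer function
    of the form PP = a - b*log10(%Fp + 3).  The coefficients a and b
    are placeholders, NOT the calibration used in the study."""
    return a - b * math.log10(fp_percent + 3.0)

# A 10-15% swing in F. profunda abundance maps onto a productivity change;
# with these placeholder coefficients the gain is roughly 93 gC m-2 yr-1:
gain = primary_productivity(10.0) - primary_productivity(25.0)
print(round(gain, 1))
```

The inverse relation (more F. profunda, less surface productivity) reflects a deeper nutricline; the exact coefficients would come from the seafloor-versus-satellite calibration the abstract mentions.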
Abstract:
We report new 187Os/186Os data and Re and Os concentrations in metalliferous sediments from the Pacific to construct a composite Os isotope seawater evolution curve over the past 80 m.y. Analyses of four samples of Upper Cretaceous age yield 187Os/186Os values between 3 and 6.5 and 187Re/186Os values below 55. Mass-balance calculations indicate that the pronounced minimum of about 2 in the Os isotope ratio of seawater at the K-T boundary probably reflects the enormous input of cosmogenic material into the oceans by the K-T impactor(s). Following a rapid recovery to a 187Os/186Os of 3.5 at 63 Ma, data for the early and middle Cenozoic show an increase in 187Os/186Os to about 6 at 15 Ma. The isotopic composition of leachable Os from slowly accumulating metalliferous sediments shows large fluctuations over short time spans. In contrast, analyses of rapidly accumulating metalliferous carbonates do not exhibit the large oscillations observed in the pelagic clay leach data. These results, together with sediment-leaching experiments, indicate that dissolution of non-hydrogenous Os can occur during the hydrogen peroxide leach and demonstrate that Os data from pelagic clay leachates do not always reflect the Os isotopic composition of seawater. New data for the late Cenozoic further substantiate the rapid increase in the 187Os/186Os of seawater during the past 15 Ma. We interpret the correlation between the marine Sr and Os isotope records during this period as evidence that weathering within the drainage basin of the Ganges-Brahmaputra river system is responsible for driving seawater Sr and Os toward more radiogenic isotopic compositions. The positive correlation between 87Sr/86Sr and U concentration, the covariation of U and Re concentrations, and the high dissolved Re, U and Sr concentrations found in Ganges-Brahmaputra river waters support this interpretation.
Accelerating uplift of many orogens worldwide over the past 15 Ma, especially during the last 5 Ma, could have contributed to the rapid increase in 187Os/186Os from 6 to 8.5 over this period. Prior to 15 Ma the marine Sr and Os records are not tightly coupled. The heterogeneous distribution of lithologies within eroding terrains may play an important role in decoupling the supplies of radiogenic Os and Sr to the oceans, and may account for the periods of decoupling between the marine Sr and Os isotope records.
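The mass-balance argument for the K-T minimum can be illustrated with a simple two-end-member mixing calculation. Taking the pre-boundary seawater ratio (~6.5) and a cosmic end member near 1 as illustrative values consistent with the text (not measured data), the fraction of impact-derived Os needed to pull seawater down to ~2 is:

```python
def cosmic_os_fraction(r_mix, r_terrestrial=6.5, r_cosmic=1.0):
    """Two-component 187Os/186Os mixing (linear in the 186Os budget),
    solved for the fraction of cosmic (impactor-derived) Os.
    End-member ratios are illustrative, not measured values."""
    return (r_terrestrial - r_mix) / (r_terrestrial - r_cosmic)

# The drop to ~2 at the K-T boundary implies most dissolved Os was
# impact-derived under these assumptions:
f = cosmic_os_fraction(2.0)
print(f"{f:.0%}")  # ≈ 82%
```

A shift this large is hard to produce by continental weathering alone, which is why the abstract attributes the minimum to the impactor input.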
Abstract:
We report iodine and bromine concentrations in a total of 256 pore-water samples collected from all nine sites of Ocean Drilling Program Leg 204, Hydrate Ridge. In a subset of these samples, we also determined iodine ages in the fluids using the cosmogenic isotope 129I (t1/2 = 15.7 Ma). The presence of this cosmogenic isotope, combined with the strong association of iodine with methane, allows identification of the organic source material responsible for the iodine and methane in gas hydrates. In all cores, iodine concentrations increase strongly with depth, from values close to that of seawater (0.0004 mM) to concentrations >0.5 mM. Several of the cores taken from the northwest flank of the southern summit show a pronounced maximum in iodine concentrations at depths between 100 and 150 meters below seafloor, in the layer just above the bottom-simulating reflector. This maximum is especially visible at Site 1245, where concentrations reach values as high as 2.3 mM, but maxima are absent in the cores taken from the slope basin sites (Sites 1251 and 1252). Bromine concentrations follow similar trends, but enrichment factors for Br are only 4-8 times that of seawater, i.e., considerably lower than those for iodine. Iodine concentrations are sufficient to allow isotope determinations by accelerator mass spectrometry in individual pore-water samples collected onboard (~5 mL). We report 129I/I ratios in a few samples from each core and a more complete profile for one flank site (Site 1245). All 129I/I ratios are below the marine input ratio (Ri = 1500 × 10^-15). The lowest values found at most sites are between 150 and 250 × 10^-15, corresponding to minimum ages of 40-55 Ma. These ages rule out derivation of most of the iodine (and, by association, of the methane) from the sediments hosting the gas hydrates or from currently subducting sediments.
The iodine maximum at Site 1245 is accompanied by an increase in 129I/I ratios, suggesting the presence of an additional source younger than 10 Ma; there are indications that younger sources also contribute at other sites, but data coverage is not yet sufficient for a definitive identification of sources there. Likely sources for the older component are formations of early Eocene age close to the backstop in the overriding wedge, whereas the younger sources might be found in recent sediments underlying the current locations of the gas hydrates.
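The minimum ages quoted above follow directly from closed-system decay of 129I from the marine input ratio. With the half-life and Ri given in the text, the arithmetic is:

```python
import math

T_HALF_129I = 15.7        # Ma, from the text
R_MARINE_INPUT = 1500e-15  # marine 129I/I input ratio (Ri), from the text

def iodine_age(ratio, ri=R_MARINE_INPUT, t_half=T_HALF_129I):
    """Minimum iodine age (Ma) from a measured 129I/I ratio, assuming
    closed-system decay from the marine input ratio: R = Ri*exp(-lam*t)."""
    lam = math.log(2) / t_half
    return math.log(ri / ratio) / lam

# The lowest reported ratios (150-250 x 10^-15) reproduce the quoted
# 40-55 Ma range of minimum ages:
print(round(iodine_age(250e-15)), round(iodine_age(150e-15)))  # → 41 52
```

These are minimum ages because any admixture of younger iodine would raise the measured ratio.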
Abstract:
The onset of abundant ice-rafted debris (IRD) deposition in the Nordic Seas and subpolar North Atlantic Ocean 2.72 million years ago (Ma) is thought to record the Pliocene onset of major northern hemisphere glaciation (NHG), due to a synchronous advance of the North American Laurentide, Scandinavian and Greenland ice sheets to their marine calving margins during marine isotope stage (MIS) G6. Numerous marine and terrestrial records from the Nordic Seas region indicate that extensive ice sheets on Greenland and Scandinavia increased IRD inputs to these seas from 2.72 Ma. The timing of ice-sheet expansion on North America as tracked by IRD deposition in the subpolar North Atlantic Ocean, however, is less clear, because both Europe and North America are potential sources for icebergs in this region. Moreover, cosmogenic dating of terrestrial tills in North America indicates that the Laurentide Ice Sheet did not extend to ~39°N until 2.4 ± 0.14 Ma, at least 180 kyr after the onset of major IRD deposition at 2.72 Ma. To address this problem, we present the first detailed analysis of the geochemical provenance of individual sand-sized IRD grains deposited in the subpolar North Atlantic Ocean between MIS G6 and MIS 100 (~2.72-2.52 Ma). IRD provenance is assessed using laser-ablation lead (Pb) isotope analyses of single ice-rafted (>150 μm) feldspar grains. To track when an ice-rafting setting consistent with major NHG first occurred in the North Atlantic Ocean during the Pliocene intensification of NHG (iNHG), we investigate when the Pb isotope composition (206Pb/204Pb, 207Pb/204Pb, 208Pb/204Pb) of feldspars deposited at DSDP Site 611 first resembles that determined for IRD deposited at this site during MIS 100, the oldest glacial for which convincing evidence exists for widespread glaciation of North America. Whilst Quaternary-magnitude IRD fluxes exist at Site 611 during glacials from 2.72 Ma, we find that the provenance of this IRD is not constant.
Instead, we find that the Pb isotope composition of IRD at our study site is not consistent with major NHG until MIS G2 (2.64 Ma). We hypothesise that IRD deposition in the North Atlantic Ocean prior to MIS G2 was dominated by iceberg calving from Greenland and Scandinavia. We further suggest that the grounding line of continental ice on northeast America may not have extended onto the continental shelf, and calved significant numbers of icebergs to the North Atlantic Ocean, during glacials until 2.64 Ma.
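The provenance comparison described above amounts to asking whether a grain's three Pb isotope ratios fall inside a reference field. A minimal sketch of one such test, with the field idealized as a mean ± 2σ box and entirely hypothetical field statistics (not data from the study):

```python
def matches_source_field(grain, field_means, field_sds, k=2.0):
    """True if a grain's Pb isotope ratios fall within k standard
    deviations of a source field on all three axes
    (206Pb/204Pb, 207Pb/204Pb, 208Pb/204Pb).  Field statistics here
    are hypothetical placeholders, not values from the study."""
    return all(abs(g - m) <= k * s
               for g, m, s in zip(grain, field_means, field_sds))

# Illustrative numbers only (typical crustal Pb isotope magnitudes):
mis100_field_mean = (18.5, 15.6, 38.3)
mis100_field_sd = (0.4, 0.1, 0.5)
print(matches_source_field((18.7, 15.65, 38.5),
                           mis100_field_mean, mis100_field_sd))
```

In practice single-grain fields overlap, so such assignments are made on populations of grains rather than one grain at a time.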
Abstract:
Sediment accretion and subduction at convergent margins play an important role in the nature of hazardous interplate seismicity (the seismogenic zone) and the subduction recycling of volatiles and continentally derived materials to the Earth's mantle. Identifying and quantifying sediment accretion, essential for a complete mass balance across the margin, can be difficult. Seismic images do not define the processes by which a prism was built, and cored sediments may show disturbed magnetostratigraphy and sparse biostratigraphy. This contribution reports the first use of cosmogenic 10Be depth profiles to define the origin and structural evolution of forearc sedimentary prisms. Biostratigraphy and 10Be model ages generally are in good agreement for sediments drilled at Deep Sea Drilling Project Site 434 in the Japan forearc, and support an origin by imbricate thrusting for the upper section. Forearc sediments from Ocean Drilling Program Site 1040 in Costa Rica lack good fossil or paleomagnetic age control above the decollement. Low and homogeneous 10Be concentrations show that the prism sediments are older than 3-4 Ma, and that the prism is either a paleoaccretionary prism or it formed largely from slump deposits of apron sediments. Low 10Be in Costa Rican lavas and the absence of frontal accretion imply deeper sediment underplating or subduction erosion.
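The "older than 3-4 Ma" inference above rests on the radioactive decay of 10Be. Assuming the currently accepted half-life of ~1.387 Ma (the text does not state one), the model-age arithmetic is:

```python
import math

T_HALF_10BE = 1.387  # Ma; currently accepted value, not stated in the text

def be10_model_age(c_measured, c_initial):
    """Closed-system 10Be model age (Ma): solves C = C0 * exp(-lam*t)
    for t, given a measured and an initial (depositional) concentration."""
    lam = math.log(2) / T_HALF_10BE
    return math.log(c_initial / c_measured) / lam

# After 3 and 4 Ma only ~22% and ~14% of the initial 10Be survives,
# which is why low, homogeneous concentrations imply old sediments:
print(round(math.exp(-math.log(2) * 3 / T_HALF_10BE), 2),
      round(math.exp(-math.log(2) * 4 / T_HALF_10BE), 2))  # → 0.22 0.14
```

Depth profiles of such model ages are what distinguish an in-sequence (age-ordered) section from one stacked by imbricate thrusting.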
Abstract:
Studies of Be distributions in subduction-zone sediments help address questions regarding the enrichment of cosmogenic Be-10 in arc volcanic rocks. Analyses of Be-10 and Be-9 in sediments of Ocean Drilling Program Site 808, Nankai Trough, and of Be-9 in pore waters of Site 808 and Sites 671 and 672, Barbados ridge complex, show significant decreases in solid-phase Be-10 and large increases in pore-water Be-9 at the décollement zone and below or at potential flow conduits. These data imply potential mobilization of Be during pore-fluid expulsion upon sediment burial. Experiments reacting a décollement sediment with a synthetic NaCl-CaCl2 solution at elevated pressure and temperature were conducted in an attempt to mimic early subduction-zone processes. The results demonstrate that Be is mobilized under elevated pressure and temperature, with a strong pH dependence. This Be mobilization provides an explanation for the Be-10 enrichment in arc volcanic rocks and supports the argument for the importance of fluid processes in subduction zones at convergent margins.
Abstract:
For a number of important nuclides, complete activation data libraries with covariance data will be produced, so that uncertainty propagation in fuel cycle codes (in this case ACAB, FISPIN, ...) can be developed and tested. Eventually, fuel inventory codes should be able to handle the complete set of uncertainty data, i.e., those of nuclear reactions (cross sections, etc.), radioactive decay and fission yield data. To this end, capabilities will be developed both to produce covariance data and to propagate the uncertainties through the inventory calculations.
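The propagation step can be illustrated with a toy Monte Carlo scheme: sample the uncertain nuclear data, re-run the (here, drastically simplified) inventory calculation for each sample, and read the spread of the result. A 1-D normal distribution stands in for a full covariance matrix, and all physical inputs are illustrative:

```python
import math, random

def activation_activity(n_target, sigma_b, flux, t_irr, t_half):
    """Activity (Bq) of a product nuclide after irradiation, for a thin
    target with no burnup: A = N * sigma * phi * (1 - exp(-lam*t))."""
    lam = math.log(2) / t_half
    return n_target * sigma_b * 1e-24 * flux * (1.0 - math.exp(-lam * t_irr))

def propagate(rel_sigma_unc, n_samples=20000, seed=0):
    """Sample the cross section from a normal distribution (a 1-D
    stand-in for a covariance matrix) and return the relative spread
    of the computed activity.  All inputs below are illustrative."""
    rng = random.Random(seed)
    acts = [activation_activity(1e20,
                                rng.gauss(1.0, rel_sigma_unc),  # barn
                                1e14, 3600.0,
                                5.27 * 3.156e7)  # a 60Co-like half-life, s
            for _ in range(n_samples)]
    mean = sum(acts) / n_samples
    sd = math.sqrt(sum((a - mean) ** 2 for a in acts) / n_samples)
    return sd / mean

# Activity is linear in sigma here, so a 5% cross-section uncertainty
# passes straight through to the activity:
print(round(propagate(0.05), 3))  # ≈ 0.05
```

Real inventory codes propagate correlated uncertainties in many reactions, decay constants, and fission yields at once, which is exactly why full covariance libraries are needed.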
Abstract:
A sensitivity analysis of the multiplication factor, keff, to the cross-section data has been carried out for the MYRRHA critical configuration in order to identify the most relevant reactions. With these results, a further analysis of the 238Pu and 56Fe cross sections has been performed, comparing the evaluations provided for these nuclides in the JEFF-3.1.2 and ENDF/B-VII.1 libraries. The effect in MYRRHA of the differences between the evaluations is then analysed, and the source of the differences is presented. On this basis, recommendations for the 56Fe and 238Pu evaluations are suggested. These calculations have been performed with SCALE6.1 and MCNPX-2.7e.
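A sensitivity coefficient of the kind ranked by such an analysis can be estimated by direct perturbation: change one cross section by a known relative amount, re-run the keff calculation, and take the ratio of relative changes. The keff values below are illustrative, not MYRRHA results:

```python
def sensitivity(k_nominal, k_perturbed, rel_perturbation):
    """First-order sensitivity coefficient S = (dk/k) / (dsigma/sigma),
    estimated by direct perturbation of a single cross section.
    Inputs are illustrative, not MYRRHA results."""
    return ((k_perturbed - k_nominal) / k_nominal) / rel_perturbation

# e.g. a +1% change in a capture cross section lowering keff by ~50 pcm:
S = sensitivity(0.95000, 0.94953, 0.01)
print(round(S, 3))  # → -0.049
```

Production codes such as SCALE's TSUNAMI compute these coefficients with adjoint methods rather than one perturbation run per reaction, but the quantity is the same.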
Abstract:
The management of long-lived radioactive waste produced in nuclear reactors is one of the main challenges of nuclear technology today. One option for its management is the transmutation of long-lived nuclides into shorter-lived ones. Accelerator-driven subcritical systems (ADS) are one of the technologies under development to achieve this goal. An ADS consists of a subcritical nuclear reactor maintained in a steady state by an external neutron source driven by a particle accelerator. The interest of these systems lies in their capacity to be loaded with fuels containing larger amounts of minor actinides than conventional critical reactors, thereby increasing the transmutation rates of these elements, which are mainly responsible for the long-term radiotoxicity of nuclear waste. One of the key points identified for the operation of an industrial-scale ADS is the need to continuously monitor the reactivity of the subcritical system during operation. For this reason, since the 1990s a number of experiments have been conducted in zero-power subcritical assemblies (MUSE, RACE, KUCA, Yalina, GUINEVERE/FREYA) to experimentally validate such techniques. In this context, the present thesis is concerned with the validation of reactivity-monitoring techniques at the Yalina-Booster subcritical assembly, which belongs to the Joint Institute for Power and Nuclear Research (JIPNR-Sosny) of the National Academy of Sciences of Belarus. Experiments concerning reactivity monitoring were performed in this facility in 2008 under the direction of CIEMAT, within the EUROTRANS project of the 6th EU Framework Programme. Two types of experiments were carried out: experiments with a pulsed neutron source (PNS) and experiments with a continuous source with short interruptions (beam trips).
In PNS experiments, two fundamental techniques exist to measure the reactivity: the prompt-to-delayed neutron area-ratio technique (or Sjöstrand technique) and the prompt neutron decay constant technique. However, previous experiments have shown the need to apply correction techniques that take into account the spatial and energy effects present in a real system, in order to obtain accurate reactivity values. In this thesis, these corrections have been investigated through simulations of the system with the Monte Carlo code MCNPX. This research has also served to propose a generalized version of these techniques, in which relationships between the reactivity of the system and the measured quantities are obtained through Monte Carlo simulations. The second type of experiment, with a continuous source and beam trips, is more likely to be employed in an industrial ADS. The generalized version of the techniques developed for the PNS experiments has also been applied to the results of these experiments. Furthermore, the work presented in this thesis is, to my knowledge, the first time that the reactivity of a subcritical system has been monitored during operation simultaneously with three different techniques: the current-to-flux, source-jerk, and prompt neutron decay techniques. The cases analyzed include fast variation of the system reactivity (insertion and extraction of a control rod) and fast variation of the neutron source (long beam interruption and subsequent recovery).
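In its uncorrected form, the Sjöstrand area-ratio technique named above reduces to a single ratio of integrated detector counts; a minimal sketch with illustrative numbers (the spatial/energy corrections discussed in the thesis are precisely what this bare formula omits):

```python
def sjostrand_reactivity_dollars(prompt_area, delayed_area):
    """Reactivity in dollars from the Sjostrand (area-ratio) technique:
    rho($) = -(prompt area)/(delayed area), where the areas are the
    integrals of the prompt-decay part and the delayed-neutron plateau
    of the detector response to a neutron pulse."""
    return -prompt_area / delayed_area

# Illustrative integrated counts over one pulse repetition period:
rho = sjostrand_reactivity_dollars(prompt_area=5.0e6, delayed_area=1.0e6)
print(rho)  # → -5.0 (dollars)
```

Multiplying by the effective delayed neutron fraction beta_eff converts dollars to absolute reactivity.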
Abstract:
Through the present work, the feasibility of automatic gamma-radiation spectral decomposition by linear algebraic equation-solving algorithms using pseudo-inverse techniques is explored. The algorithms have been designed with their possible implementation on low-complexity, specific-purpose processors in mind. In the first chapter, the techniques for the detection and measurement of gamma radiation used to construct the spectra treated throughout the work are reviewed. The basic concepts related to the nature and properties of hard electromagnetic radiation are re-examined, together with the physical and electronic processes involved in its detection, with special emphasis on the intrinsically statistical nature of the spectrum build-up process, which is treated as a classification of the number of individual photon detections as a function of the nominally continuous energy associated with each one. To this end, a brief description is given of the most important radiation-matter interaction phenomena conditioning the detection and spectrum-formation processes. The radiation detector is considered the critical element of the measurement system, since it strongly conditions the detection process. For this reason, the main detector types are examined, with special emphasis on semiconductor detectors, as these are the most widely used today. Finally, the fundamental electronic subsystems for conditioning and pre-treating the signal delivered by the detector, traditionally referred to as Nuclear Electronics, are described.
As far as spectroscopy is concerned, the subsystem of main interest for this work is the multichannel analyzer, which performs the qualitative treatment of the signal and builds a histogram of radiation intensity over the energy range to which the detector is sensitive. The resulting N-dimensional vector is generally known as the radiation spectrum. The different radionuclides contributing to a non-pure radiation source leave their fingerprint in this spectrum. In the second chapter, an exhaustive review is made of the mathematical methods devised to date for identifying the radionuclides present in a composite spectrum and for determining their relative activities. One of them, multiple linear regression, is proposed as the approach best suited to the constraints and restrictions of the problem: the ability to treat low-resolution spectra, the absence of a human operator (unsupervised operation), and the possibility of being supported by low-complexity algorithms that can be implemented on dedicated VLSI processors. The analysis problem is formally stated in the third chapter along these lines, and it is shown that the problem admits a solution within the theory of linear associative memories: an operator based on this kind of structure can provide the desired spectral decomposition. In the same context, a pair of complementary adaptive algorithms is proposed for the construction of the operator, with arithmetic characteristics that make them especially suitable for implementation on VLSI processors. The adaptive nature of the associative memory gives it great flexibility for the progressive incorporation of new information.
The fourth chapter deals with an additional, highly complex problem: the treatment of the spectral deformations introduced by instrumental drifts in the detector and in the preconditioning electronics. These deformations invalidate the linear regression model used to describe the spectrum under analysis. A model is therefore derived that includes these deformations as additional contributions to the composite spectrum, leading to a simple extension of the associative memory that tolerates drifts in the mixture under analysis and carries out a robust analysis of contributions. The extension method is based on a small-perturbation assumption. Laboratory practice shows, however, that instrumental drifts can sometimes provoke severe distortions in the spectrum that cannot be handled by this model. The fifth chapter therefore addresses measurements affected by strong drifts from the standpoint of nonlinear optimization theory. This reformulation leads to a recursive algorithm inspired by the Gauss-Newton method, which introduces the concept of a feedback linear memory. This operator offers a markedly improved capability for decomposing mixtures with strong drift, without the excessive computational load of classical nonlinear optimization algorithms.
The work concludes with a discussion of the results obtained at the three main levels of study, presented in the third, fourth and fifth chapters, together with the main conclusions derived from the study and an outline of possible lines of future work.
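The pseudoinverse operator at the core of the multiple-linear-regression decomposition described above can be sketched in a few lines. The two library spectra below are toy examples, not real detector responses:

```python
import numpy as np

def decompose(composite, library):
    """Least-squares estimate of the activities of known radionuclides
    in a composite gamma spectrum: solves composite ≈ library @ a via
    the Moore-Penrose pseudoinverse, i.e. the multiple-linear-regression
    operator discussed in the text.  `library` is (channels x nuclides),
    one column per pure-nuclide reference spectrum."""
    return np.linalg.pinv(library) @ composite

# Toy 6-channel reference spectra for two hypothetical nuclides:
s1 = np.array([0.0, 1.0, 4.0, 1.0, 0.0, 0.0])  # photopeak at channel 2
s2 = np.array([0.0, 0.0, 0.0, 1.0, 4.0, 1.0])  # photopeak at channel 4
library = np.column_stack([s1, s2])
mix = 2.0 * s1 + 0.5 * s2
print(decompose(mix, library))  # recovers the mixing weights ~[2.0, 0.5]
```

The adaptive algorithms of the thesis build this operator incrementally instead of computing the pseudoinverse outright, which is what makes a low-complexity hardware implementation feasible.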