962 results for Radiation-hard Detector
Abstract:
RATIONALE AND OBJECTIVES: To evaluate the effect of a modified abdominal multislice computed tomography (CT) protocol for obese patients on image quality and radiation dose. MATERIALS AND METHODS: An adult female anthropomorphic phantom was used to simulate obese patients by adding one or two 4-cm circumferential layers of fat-equivalent material to the abdominal portion. The phantom was scanned with a subcutaneous fat thickness of 0, 4, and 8 cm using the following parameters (detector configuration/beam pitch/table feed per rotation/gantry rotation time/kV/mA): standard protocol A: 16 x 0.625 mm/1.75/17.5 mm/0.5 seconds/140/380, and modified protocol B: 16 x 1.25 mm/1.375/27.5 mm/1.0 seconds/140/380. Radiation doses to six abdominal organs and the skin, image noise values, and contrast-to-noise ratios (CNRs) were analyzed. Statistical analysis included analysis of variance, the Wilcoxon rank sum test, and Student's t-test (P < .05). RESULTS: With the modified protocol B and one or two fat rings, image noise decreased significantly (P < .05) while the CNR simultaneously increased significantly compared with protocol A (P < .05). Organ doses increased significantly, by up to 54.7%, when comparing modified protocol B with one fat ring to the routine protocol A with no fat rings (P < .05). However, no significant change in organ dose was seen for protocol B with two fat rings compared with protocol A without fat rings (range -2.1% to 8.1%) (P > .05). CONCLUSIONS: Using a modified abdominal multislice CT protocol for obese patients with 8 cm or more of subcutaneous fat, image quality can be substantially improved without a significant increase in radiation dose to the abdominal organs.
Abstract:
PURPOSE: To prospectively evaluate, for the depiction of simulated hypervascular liver lesions in a phantom, the effect of a low tube voltage, high tube current computed tomographic (CT) technique on image noise, contrast-to-noise ratio (CNR), lesion conspicuity, and radiation dose. MATERIALS AND METHODS: A custom liver phantom containing 16 cylindrical cavities (four cavities each of 3, 5, 8, and 15 mm in diameter) filled with various iodinated solutions to simulate hypervascular liver lesions was scanned with a 64-section multi-detector row CT scanner at 140, 120, 100, and 80 kVp, with corresponding tube current-time product settings of 225, 275, 420, and 675 mAs, respectively. The CNRs for six simulated lesions filled with different iodinated solutions were calculated. A figure of merit (FOM) for each lesion was computed as the ratio of CNR² to effective dose (ED). Three radiologists independently graded the conspicuity of 16 simulated lesions. An anthropomorphic phantom was scanned to evaluate the ED. Statistical analysis included one-way analysis of variance. RESULTS: Image noise increased by 45% with the 80-kVp protocol compared with the 140-kVp protocol (P < .001). However, the lowest ED and the highest CNR were achieved with the 80-kVp protocol. The FOM results indicated that at a constant ED, a reduction of tube voltage from 140 to 120, 100, and 80 kVp increased the CNR by factors of at least 1.6, 2.4, and 3.6, respectively (P < .001). At a constant CNR, the corresponding reductions in ED were by factors of 2.5, 5.5, and 12.7, respectively (P < .001). The highest lesion conspicuity was achieved with the 80-kVp protocol. CONCLUSION: The CNR of simulated hypervascular liver lesions can be substantially increased, and the radiation dose reduced, by using an 80-kVp, high tube current CT technique.
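The figure of merit above is a one-line computation; a minimal Python sketch, with illustrative numbers rather than the study's data:

    # Figure of merit (FOM) as defined above: FOM = CNR^2 / ED.
    # Values below are illustrative, not the study's measured data.
    def figure_of_merit(cnr: float, effective_dose_msv: float) -> float:
        """CNR squared per unit effective dose (mSv)."""
        return cnr ** 2 / effective_dose_msv

    # At a fixed effective dose, a 3.6x CNR gain raises the FOM by
    # 3.6^2 = 12.96, consistent with the ~12.7x dose-reduction factor
    # quoted at constant CNR.
    print(figure_of_merit(cnr=3.6, effective_dose_msv=1.0))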
Abstract:
RATIONALE AND OBJECTIVES: The aim of this study was to measure the radiation dose of dual-energy and single-energy multidetector computed tomographic (CT) imaging using adult liver, renal, and aortic imaging protocols. MATERIALS AND METHODS: Dual-energy CT (DECT) imaging was performed on a conventional 64-detector CT scanner using a software upgrade (Volume Dual Energy) at tube voltages of 140 and 80 kVp (with tube currents of 385 and 675 mA, respectively), with a 0.8-second gantry revolution time in axial mode. Parameters for single-energy CT (SECT) imaging were a tube voltage of 140 kVp, a tube current of 385 mA, a 0.5-second gantry revolution time, helical mode, and a pitch of 1.375:1. The volume CT dose index (CTDIvol) value displayed on the console for each scan was recorded. Organ doses were measured using metal oxide semiconductor field-effect transistor technology. Effective dose was calculated as the sum of 20 organ doses multiplied by the weighting factors given in International Commission on Radiological Protection Publication 60. The radiation dose saving with virtual noncontrast imaging reconstruction was also determined. RESULTS: The CTDIvol values were 49.4 mGy for DECT imaging and 16.2 mGy for SECT imaging. Effective dose ranged from 22.5 to 36.4 mSv for DECT imaging and from 9.4 to 13.8 mSv for SECT imaging. Virtual noncontrast imaging reconstruction reduced the total effective dose of multiphase DECT imaging by 19% to 28%. CONCLUSION: Using the current Volume Dual Energy software, radiation doses with DECT imaging were higher than those with SECT imaging. Substantial radiation dose savings are possible with DECT imaging if virtual noncontrast imaging reconstruction replaces precontrast imaging.
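The effective-dose calculation described above is a weighted sum over organ doses; a minimal Python sketch using a subset of the ICRP 60 tissue weighting factors and hypothetical organ doses (the study's actual 20 measured organs and values are not reproduced here):

    # Effective dose E = sum_i w_i * H_i, with organ doses H_i (mSv) and
    # ICRP Publication 60 tissue weighting factors w_i.
    icrp60_weights = {  # subset of the full ICRP 60 table
        "gonads": 0.20, "lung": 0.12, "stomach": 0.12, "colon": 0.12,
        "liver": 0.05, "bladder": 0.05,
    }
    organ_doses_msv = {  # hypothetical MOSFET-measured organ doses
        "gonads": 18.0, "lung": 25.0, "stomach": 30.0, "colon": 28.0,
        "liver": 32.0, "bladder": 22.0,
    }
    effective_dose = sum(w * organ_doses_msv[organ]
                         for organ, w in icrp60_weights.items())
    print(f"effective dose contribution: {effective_dose:.1f} mSv")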
Abstract:
PURPOSE: To determine if multi-detector row computed tomography (CT) can replace conventional radiography and be performed alone in severe trauma patients for the depiction of thoracolumbar spine fractures. MATERIALS AND METHODS: One hundred consecutive severe trauma patients who underwent conventional radiography of the thoracolumbar spine as well as thoracoabdominal multi-detector row CT were prospectively identified. Conventional radiographs were reviewed independently by three radiologists and two orthopedic surgeons; CT images were reviewed by three radiologists. Reviewers were blinded both to one another's reviews and to the results of the initial evaluation. Presence, location, and stability of fractures, as well as the quality of the reviewed images, were assessed. Statistical analysis was performed to determine sensitivity and interobserver agreement for each procedure, with the results of clinical and radiologic follow-up as the standard of reference. The time to perform each examination and the radiation dose involved were evaluated. A resource cost analysis was performed. RESULTS: Sixty-seven fractured vertebrae were diagnosed in 26 patients. Twelve patients had unstable spine fractures. Mean sensitivity and interobserver agreement, respectively, for detection of unstable fractures were 97.2% and 0.951 for multi-detector row CT and 33.3% and 0.368 for conventional radiography. The median times to perform a conventional radiographic and a multi-detector row CT examination, respectively, were 33 and 40 minutes. Effective radiation doses at conventional radiography of the spine and thoracoabdominal multi-detector row CT, respectively, were 6.36 mSv and 19.42 mSv. Multi-detector row CT enabled identification of 146 associated traumatic lesions. The costs of conventional radiography and multi-detector row CT, respectively, were $145 and $880 per patient. CONCLUSION: Multi-detector row CT depicts spine fractures better than conventional radiography. It can replace conventional radiography and be performed alone in patients who have sustained severe trauma.
Abstract:
PURPOSE Computed tomography (CT) accounts for more than half of the total radiation exposure from medical procedures, which makes dose reduction in CT an effective means of reducing radiation exposure. We analysed the dose reduction that can be achieved with a new CT scanner [Somatom Edge (E)] that incorporates new developments in hardware (detector) and software (iterative reconstruction). METHODS We compared weighted volume CT dose index (CTDIvol) and dose length product (DLP) values of 25 consecutive patients studied with non-enhanced standard brain CT on the new scanner and on each of two previous models: a 64-row multi-detector CT (MDCT) scanner (S64) and a 16-row MDCT scanner (S16). We analysed signal-to-noise and contrast-to-noise ratios in images from the three scanners, and three neuroradiologists performed a quality rating to assess whether the dose reduction techniques still yield sufficient diagnostic quality. RESULTS CTDIvol of scanner E was 41.5% and 36.4% less than the values of scanners S16 and S64, respectively; the DLP values were 40% and 38.3% less. All differences were statistically significant (p < 0.0001). Signal-to-noise and contrast-to-noise ratios were best in S64; these differences also reached statistical significance. Image analysis, however, showed "non-inferiority" of scanner E regarding image quality. CONCLUSIONS The first experience with the new scanner shows that the new dose reduction techniques allow for up to 40% dose reduction while maintaining image quality at a diagnostically usable level.
Abstract:
OBJECTIVES The aim of this phantom study was to minimize the radiation dose by finding the combination of low tube current and low tube voltage that would still yield accurate volume measurements, compared to standard CT imaging, without significantly decreasing the sensitivity of lung nodule detection, both with and without the assistance of CAD. METHODS An anthropomorphic chest phantom containing artificial solid and ground glass nodules (GGNs, 5-12 mm) was examined with a 64-row multi-detector CT scanner at three tube currents of 100, 50 and 25 mAs in combination with three tube voltages of 120, 100 and 80 kVp. This resulted in eight different protocols that were then compared to the standard CT protocol (100 mAs/120 kVp). For each protocol, at least 127 different nodules were scanned in 21-25 phantoms. The nodules were analyzed in two separate sessions by three independent, blinded radiologists and by computer-aided detection (CAD) software. RESULTS The mean sensitivity of the radiologists for identifying solid lung nodules on standard CT was 89.7% ± 4.9%. The sensitivity was not significantly impaired when the tube current and voltage were lowered at the same time, except at the lowest exposure level of 25 mAs/80 kVp [80.6% ± 4.3% (p = 0.031)]. Compared to standard CT, the sensitivity for detecting GGNs was significantly lower at all dose levels when the voltage was 80 kVp; this result was independent of the tube current. CAD significantly increased the radiologists' sensitivity for detecting solid nodules at all dose levels (by 5-11%). No significant volume measurement errors (VMEs) were documented for the radiologists or the CAD software at any dose level. CONCLUSIONS Our results suggest that a CT protocol with 25 mAs and 100 kVp is optimal for detecting solid and ground glass nodules in lung cancer screening. The use of CAD software is highly recommended at all dose levels.
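For reference, the sensitivity figures above follow the usual definition (detected nodules over nodules present); a small Python sketch with made-up counts, not the phantom study's raw data:

    # Sensitivity = true positives / total nodules present in the phantoms.
    def sensitivity(detected: int, present: int) -> float:
        return detected / present

    reader_alone = sensitivity(detected=114, present=127)     # ~0.898
    reader_with_cad = sensitivity(detected=121, present=127)  # ~0.953
    print(f"reader: {reader_alone:.1%}, reader + CAD: {reader_with_cad:.1%}")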
Abstract:
This paper presents the first analysis of the input impedance and radiation properties of a dipole antenna placed on top of Fan's three-dimensional electromagnetic bandgap (EBG) structure (Applied Physics Letters, 1994), constructed using a high dielectric constant ceramic. The best position of the dipole on the EBG surface is determined through impedance and radiation pattern analyses. Based on this optimum configuration, an integrated Schottky heterodyne detector was designed, manufactured and tested from 0.48 to 0.52 THz. The main antenna features were not degraded by the high dielectric constant substrate, thanks to the use of the EBG approach. Measured radiation patterns are in good agreement with the predicted ones.
Abstract:
The new Bern cyclotron laboratory aims at industrial radioisotope production for PET diagnostics and multidisciplinary research by means of a specifically conceived beam transfer line, terminated in a separate bunker. In this framework, an innovative beam monitor detector based on doped silica and optical fibres has been designed, constructed, and tested. Scintillation light produced by Ce and Sb doped silica fibres moving across the beam is measured, giving information on beam position, shape, and intensity. The doped fibres are coupled to commercial optical fibres, allowing the read-out of the signal far away from the radiation source. This general-purpose device can be easily adapted for any accelerator used in medical applications and is suitable either for low currents used in hadrontherapy or for currents up to a few μA for radioisotope production, as well as for both pulsed and continuous beams.
Abstract:
OBJECTIVE The aim of the present study was to evaluate dose reduction in contrast-enhanced chest computed tomography (CT) by comparing the three latest generations of Siemens CT scanners used in clinical practice. We analyzed the amount of radiation needed with filtered back projection (FBP) and with an iterative reconstruction (IR) algorithm to yield the same image quality. Furthermore, the influence of the most recent integrated circuit detector (ICD; Stellar detector, Siemens Healthcare, Erlangen, Germany) on the radiation dose was investigated. MATERIALS AND METHODS 136 patients were included. Scan parameters were set to a routine thorax protocol: SOMATOM Sensation 64 (FBP), SOMATOM Definition Flash (IR), and SOMATOM Definition Edge (ICD and IR). Tube current was set to a constant reference level of 100 mAs, with automated tube current modulation using reference mAs. CARE kV was used on the Flash and Edge scanners, while tube potential on the SOMATOM Sensation was selected individually between 100 and 140 kVp by the medical technologists. Quality assessment was performed on soft-tissue kernel reconstructions. Dose was represented by the dose-length product (DLP). RESULTS The DLP with FBP for the average chest CT was 308 mGy*cm ± 99.6. In contrast, the DLP for chest CT with the IR algorithm was 196.8 mGy*cm ± 68.8 (P = 0.0001). A further decline in dose was noted with IR and the ICD: DLP 166.4 mGy*cm ± 54.5 (P = 0.033). The dose reduction compared to FBP was 36.1% with IR and 45.6% with IR/ICD. Signal-to-noise ratio (SNR) in the aorta, bone, and soft tissue favored the IR/ICD combination over FBP (P values ranged from 0.003 to 0.048). Overall contrast-to-noise ratio (CNR) improved with declining DLP. CONCLUSION The most recent technical developments, namely IR in combination with integrated circuit detectors, can significantly lower radiation dose in chest CT examinations.
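SNR and CNR of the kind reported above are conventionally computed from region-of-interest (ROI) statistics; a sketch under that generic convention (synthetic HU values, not the paper's ROI protocol):

    import numpy as np

    def snr(roi: np.ndarray) -> float:
        # Signal-to-noise ratio: mean HU over standard deviation within the ROI.
        return roi.mean() / roi.std()

    def cnr(roi_tissue: np.ndarray, roi_reference: np.ndarray) -> float:
        # Contrast-to-noise ratio: HU difference over reference-ROI noise.
        return abs(roi_tissue.mean() - roi_reference.mean()) / roi_reference.std()

    rng = np.random.default_rng(0)
    aorta = rng.normal(300.0, 15.0, 500)   # synthetic contrast-enhanced aorta
    muscle = rng.normal(60.0, 12.0, 500)   # synthetic soft-tissue reference
    print(f"SNR={snr(aorta):.1f}, CNR={cnr(aorta, muscle):.1f}")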
Abstract:
This paper presents the performance of the ATLAS muon reconstruction during the LHC run with pp collisions at √s = 7–8 TeV in 2011–2012, focusing mainly on data collected in 2012. Measurements of the reconstruction efficiency and of the momentum scale and resolution, based on large reference samples of J/ψ → μμ, Z → μμ and ϒ → μμ decays, are presented and compared to Monte Carlo simulations. Corrections to the simulation, to be used in physics analysis, are provided. Over most of the covered phase space (muon |η| < 2.7 and 5 ≲ pT ≲ 100 GeV) the efficiency is above 99% and is measured with per-mille precision. The momentum resolution ranges from 1.7% at central rapidity and for transverse momentum pT ≈ 10 GeV, to 4% at large rapidity and pT ≈ 100 GeV. The momentum scale is known with an uncertainty of 0.05% to 0.2%, depending on rapidity. A method for the recovery of final-state radiation from the muons is also presented.
Abstract:
Distributions sensitive to the underlying event in QCD jet events have been measured with the ATLAS detector at the LHC, based on 37 pb−1 of proton–proton collision data collected at a centre-of-mass energy of 7 TeV. Charged-particle mean pT and densities of all-particle ET and of charged-particle multiplicity and pT have been measured in regions azimuthally transverse to the hardest jet in each event. These are presented both as one-dimensional distributions and with their mean values as functions of the leading-jet transverse momentum from 20 to 800 GeV. The correlation of charged-particle mean pT with charged-particle multiplicity is also studied, and the ET densities include the forward rapidity region; these features provide extra data constraints for Monte Carlo modelling of colour reconnection and beam-remnant effects respectively. For the first time, underlying event observables have been computed separately for inclusive jet and exclusive dijet event selections, allowing more detailed study of the interplay of multiple partonic scattering and QCD radiation contributions to the underlying event. Comparisons to the predictions of different Monte Carlo models show a need for further model tuning, but the standard approach is found to generally reproduce the features of the underlying event in both types of event selection.
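The "azimuthally transverse" regions used above are conventionally the wedges at 60° < |Δφ| < 120° from the leading jet (an assumed convention here; the abstract does not restate it); a minimal Python sketch of the region assignment:

    import math

    def in_transverse_region(phi: float, phi_leading_jet: float) -> bool:
        # Fold the azimuthal difference into [0, pi], then test the
        # conventional transverse window pi/3 < |dphi| < 2*pi/3.
        dphi = abs(phi - phi_leading_jet) % (2.0 * math.pi)
        if dphi > math.pi:
            dphi = 2.0 * math.pi - dphi
        return math.pi / 3.0 < dphi < 2.0 * math.pi / 3.0

    # Example: a particle at 90 degrees from the leading jet is transverse.
    print(in_transverse_region(phi=math.pi / 2, phi_leading_jet=0.0))  # True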
Abstract:
Measurements of the natural background radiation have been made at numerous places throughout the world, but very little work in this field has been done in developing countries. In Mexico, the natural radiation to which the population is exposed has not been assessed; this dissertation represents a pioneer study in this environmental area. The principal focus is the radiation exposure which occupants of buildings receive as a result of naturally occurring radionuclides present in construction materials. Data were collected between August 1979 and November 1980. Continuous monitoring was done with TLDs placed on site for periods of 3 to 6 months. The instrumentation used for "real-time" measurements was a portable NaI(Tl) scintillation detector. In addition, radiometric measurements were performed on construction materials commonly used in Mexican homes. Based on TLD readings taken within 75 dwellings, the typical indoor exposure for a resident of the study area is 9.2 μR/h. The average reading of the 152 indoor scintillometer surveys was 9.5 μR/h; the average outdoor reading was 7.5 μR/h. Results of one-way and multi-way analyses of the exposure data, performed to determine the effects of building material type, geologic subsoil, age of dwelling, and elevation, are also presented.
Abstract:
Neutron spectrum unfolding and dose equivalent calculation are complicated tasks in radiation protection; they are highly dependent on the neutron energy, and precise knowledge of neutron spectrometry is essential for all dosimetry-related studies as well as for many nuclear physics experiments. Previous works have reported neutron spectrometry and dosimetry results obtained using ANN technology as an alternative solution, starting from the count rates of a Bonner sphere system with a LiI(Eu) thermal neutron detector, 7 polyethylene spheres, and the UTA4 response matrix with 31 energy bins. In this work, an ANN was designed and optimized using the RDANN methodology for the Bonner sphere system used at CIEMAT, Spain, which is composed of a ³He neutron detector, 12 moderator spheres, and a response matrix with 72 energy bins. For the ANN design process, a catalogue of neutron spectra compiled by the IAEA was used. From this compilation, the neutron spectra were converted from lethargy to energy spectra, and the resulting energy fluence spectra were re-binned, using the MCNP code, to the energy bins of the aforementioned ³He response matrix. With the response matrix and the re-binned spectra, the count rates of the Bonner sphere system were calculated, and the resulting re-binned neutron spectra and calculated count rates were used as the ANN training data set.
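The training-set construction described above amounts to a matrix-vector product of the response matrix with each re-binned spectrum; a minimal numpy sketch with the 12-sphere, 72-bin shapes (synthetic values, not the CIEMAT response matrix):

    import numpy as np

    n_spheres, n_bins = 12, 72
    rng = np.random.default_rng(1)
    R = rng.random((n_spheres, n_bins))   # placeholder response matrix
    phi = rng.random(n_bins)              # one re-binned fluence spectrum
    counts = R @ phi                      # expected count rate per sphere
    # Each (phi, counts) pair is one training example for the ANN,
    # which learns the inverse mapping counts -> spectrum.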
Abstract:
A passive neutron area monitor has been designed using Monte Carlo methods; the monitor is a polyethylene cylinder with pairs of thermoluminescent dosimeters (TLD600 and TLD700) as the thermal neutron detector. The monitor was calibrated with a bare and a thermalized ²⁴¹AmBe neutron source, and its performance was evaluated by measuring the ambient dose equivalent due to photoneutrons produced by a 15 MV linear accelerator for radiotherapy and the neutrons at the output of a TRIGA Mark III radial beam port.
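The TLD600/TLD700 pairing exploits the fact that TLD600 (⁶LiF-based) responds to thermal neutrons plus photons while TLD700 (⁷LiF-based) responds essentially to photons only, so their difference isolates the neutron signal; a hedged Python sketch with a hypothetical calibration factor:

    # Thermal-neutron signal from a TLD600/TLD700 pair: TLD600 reads
    # neutrons + photons, TLD700 photons only, so subtraction isolates
    # the neutron component.
    def neutron_reading(m_tld600: float, m_tld700: float, k_cal: float) -> float:
        # k_cal: neutron calibration factor (instrument-specific; hypothetical).
        return k_cal * (m_tld600 - m_tld700)

    print(neutron_reading(m_tld600=12.4, m_tld700=4.1, k_cal=0.35))  # arbitrary readings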
Abstract:
Through the present research, the feasibility of automatic gamma-radiation spectral decomposition by linear algebraic equation-solving algorithms using pseudo-inverse techniques is explored. The design of these algorithms has been carried out with a view to their possible implementation on specific-purpose processors of low complexity.

In the first chapter, the techniques for the detection and measurement of gamma radiation employed to construct the spectra used throughout the research are reviewed. The basic concepts related to the nature and properties of hard electromagnetic radiation are re-examined, together with the physical processes and the electronic treatment involved in its detection, with special emphasis on the intrinsically statistical nature of the spectrum build-up process, considered as a classification of the number of individual photon detections as a function of the energy associated with each photon. A brief description is given of the most important matter-energy interaction phenomena conditioning the detection and spectrum formation processes. The radiation detector is considered the most critical element of the measurement system, since it strongly conditions the detection process; for this reason the main detector types are re-examined, with special emphasis on semiconductor detectors, as these are the most widely employed nowadays. Finally, the fundamental electronic subsystems for preconditioning and treating the signal delivered by the detector, classically referred to as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem of most interest for the present work is the multichannel analyzer, which performs the qualitative treatment of the signal and builds a histogram of radiation intensity over the range of energies to which the detector is sensitive. The resulting N-dimensional vector is generally known as the radiation spectrum. The different radionuclides contributing to a non-pure radiation source leave their fingerprint in this spectrum.

In the second chapter, an exhaustive review is made of the mathematical methods devised to date for identifying the radionuclides present in a composite spectrum and for determining their relative activities. One of them, multiple linear regression, is proposed as the approach best suited to the constraints and restrictions of the problem: the ability to treat low-resolution spectra, the absence of a human operator (unsupervised operation), and the possibility of being supported by low-complexity algorithms implementable on dedicated VLSI processors.

The analysis problem is formally stated in the third chapter along the lines indicated above, and it is shown that it admits a solution within the theory of linear associative memories; an operator based on this kind of structure can provide the desired spectral decomposition. In the same context, a pair of complementary adaptive algorithms is proposed for the construction of the solving operator, whose arithmetic characteristics make them especially suitable for implementation on VLSI processors. The adaptive nature of the associative memory gives the operator great flexibility with regard to the progressive incorporation of new information.

The fourth chapter deals with an additional, highly complex problem: the treatment of the spectral deformations introduced by instrumental drifts in the detecting device and in the preconditioning electronics. These deformations invalidate the linear regression model used to describe the problem spectrum. A model is therefore derived that includes the drifts as additional contributions to the composite spectrum, implying a simple extension of the associative memory that tolerates drifts in the problem mixture and carries out a robust analysis of contributions. The extension method is based on the assumption of small perturbations.

Laboratory practice shows that instrumental drifts can occasionally provoke severe distortions in the spectrum that cannot be treated by the previous model. In the fifth chapter, therefore, the problem of measurements affected by strong drifts is posed from the point of view of non-linear optimization theory. This reformulation leads to a recursive algorithm inspired by the Gauss-Newton method, which allows the introduction of the concept of a feedback linear memory. This operator offers a markedly improved capability for decomposing mixtures with strong drift, without the excessive computational load of classical non-linear optimization algorithms.

The work concludes with a discussion of the results obtained at the three main levels of study addressed in chapters three, four and five, together with the main conclusions derived from the study and an outline of possible lines along which the present work could be continued.
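As a concrete illustration of the pseudo-inverse decomposition the thesis develops, the linear mixing model y = A·x (library spectra in the columns of A, relative activities in x) can be inverted in the least-squares sense with the Moore-Penrose pseudoinverse; a minimal numpy sketch with synthetic spectra, not the thesis data:

    import numpy as np

    rng = np.random.default_rng(2)
    n_channels, n_nuclides = 256, 4
    A = rng.random((n_channels, n_nuclides))  # library: one column per radionuclide
    x_true = np.array([1.0, 0.5, 0.0, 2.0])   # true relative activities
    y = A @ x_true + rng.normal(0.0, 0.01, n_channels)  # noisy composite spectrum

    x_hat = np.linalg.pinv(A) @ y             # least-squares activity estimate
    print(np.round(x_hat, 3))                 # recovers ~ [1.0, 0.5, 0.0, 2.0]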