966 results for RESPONSE FUNCTIONS


Relevance: 60.00%

Abstract:

The contributions of contrast-detection mechanisms to the visual evoked cortical potential (VECP) have been investigated through the study of contrast response and spatial frequency response functions. Previously, the use of m-sequences to control stimulation was restricted to multifocal electrophysiological stimulation, which in some respects differs substantially from the conventional VECP. Single stimulation with spatial contrast controlled by m-sequences has not been extensively studied or compared with the responses obtained with multifocal techniques. The aim of this work was to evaluate the influence of the spatial frequency and contrast of sinusoidal gratings on the VECP generated by pseudorandom stimulation. Nine normal subjects were stimulated with achromatic sinusoidal gratings controlled by a pseudorandom binary m-sequence at 7 spatial frequencies (0.4 to 10 cpd) and at 3 different sizes (4°, 8°, and 16° of visual angle). At 8°, six contrast levels (3.12% to 99%) were additionally tested. The first-order kernel did not provide consistent responses with measurable signals across the spatial frequencies and contrasts tested (the signal was very small or absent), whereas the first and second slices of the second-order kernel exhibited very reliable responses over the stimulus ranges tested. The main differences between the results obtained with the first and second slices of the second-order kernel were the profiles of the amplitude-versus-contrast and amplitude-versus-spatial-frequency functions. The results indicated that the first slice of the second-order kernel was dominated by the M pathway, although for some stimulus conditions a contribution of the P pathway could be discerned, whereas the second slice of the second-order kernel reflected contributions from the P pathway only. The present work extends previous findings on the contribution of the visual pathways to the VECP generated by pseudorandom stimulation to a wide range of spatial frequencies.
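The pseudorandom binary m-sequences that drive this kind of stimulation can be generated with a linear-feedback shift register (LFSR). A minimal sketch; the register length and tap positions below are illustrative, not those used in the study:

```python
def msequence(taps=(5, 3), nbits=5):
    """Maximal-length binary sequence from a Fibonacci LFSR.

    taps are 1-based feedback positions (here for x^5 + x^3 + 1,
    a primitive polynomial over GF(2)). The output is mapped to
    +1/-1 stimulus states (e.g. grating contrast on/off).
    """
    state = [1] * nbits              # any nonzero seed works
    seq = []
    for _ in range(2 ** nbits - 1):  # period of an m-sequence: 2^n - 1
        seq.append(1 if state[-1] else -1)
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return seq

seq = msequence()
```

An m-sequence of length 2^n − 1 is balanced (2^(n−1) states of one polarity, 2^(n−1) − 1 of the other), the property underlying the cross-correlation extraction of the first- and second-order kernel slices.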

Relevance: 60.00%

Abstract:

The identification and description of the lithological characteristics of a formation are indispensable for the evaluation of complex formations. To this end, combinations of nuclear tools have been used systematically in uncased (open-hole) wells. The resulting logs can be regarded as the interaction of two distinct phases: • the transport phase, in which radiation travels from the source through the formation to one or more detectors; • the detection phase, which consists of collecting the radiation, converting it into current pulses and, finally, forming the spectral distribution of these pulses. Since the presence of the detector does not strongly affect the result of the radiation transport, each phase can be simulated independently of the other, which makes it possible to introduce a new type of modeling that decouples the two phases. In this work, the final response is simulated by combining numerical transport solutions with a library of detector response functions, for different incident energies and for each specific source-detector arrangement. The radiation transport is computed with a finite element method (FEM) algorithm, in the form of a 2½-D scalar flux, obtained from the numerical solution of the multigroup diffusion approximation of the Boltzmann transport equation in phase space, the so-called P1 approximation, in which the angular variable is expanded in orthogonal Legendre polynomials. This reduces the dimensionality of the problem, making it more compatible with the FEM algorithm, in which the flux depends exclusively on the spatial variable and on the physical properties of the formation. The response function of the NaI(Tl) detector is obtained independently by the Monte Carlo (MC) method, in which the life of a particle inside the scintillator crystal is reconstructed by simulating, interaction by interaction, the position, direction and energy of the different particles, with the aid of random numbers to which appropriate probability laws are associated. 
The possible interaction types (Rayleigh scattering, photoelectric effect, Compton scattering and pair production) are determined similarly. The simulation is completed when the detector response functions are convolved with the scalar flux, producing as the final response the pulse-height spectrum of the modeled system. In this spectrum, sets of channels called detection windows are selected. The count rates in each window show distinct dependences on electron density and lithology, which makes it possible to use combinations of these windows to determine the density and the photoelectric absorption factor of the formations. With the methodology developed, logs could be simulated for both thick- and thin-layer models. The performance of the method was tested in complex formations, mainly those in which the presence of clay minerals, feldspar and mica produced effects large enough to perturb the final tool response. The results showed that formations with densities between 1.8 and 4.0 g/cm3 and photoelectric absorption factors in the range 1.5 to 5 barns/e- had their physical and lithological characteristics perfectly identified. The concentrations of potassium, uranium and thorium could be obtained by introducing a new calibration system capable of correcting for the high variances and negative correlations observed mainly in the calculation of the mass concentrations of uranium and potassium. In the simulation of the CNL sonde response, using Tittle's polynomial regression algorithm, it was found that, because of the tool's limited vertical resolution, layers thinner than the longest source-detector spacing had their apparent porosity measured erroneously. This is because Tittle's algorithm applies exclusively to thick layers. 
Because of this error, a method was developed that takes into account a contribution factor determined by the relative area of each layer within the zone of maximum information. Thus, the porosity at each point in the subsurface could be determined by convolving these factors with the local porosity indices, while assuming each layer thick enough to conform to Tittle's algorithm. Finally, the additional limitations imposed by the presence of perturbing minerals were resolved by treating the formation as composed of a base mineral fully saturated with water, with the remaining components regarded as perturbations on this base case. These results make it possible to compute synthetic well logs, which can be used in inversion schemes to obtain a more detailed quantitative evaluation of complex formations.
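The decoupling described above amounts to a matrix-vector product: the transport code supplies a scalar flux per energy group at the detector, and the Monte Carlo library supplies, per incident energy, a distribution over pulse-height channels. A toy sketch with made-up numbers; the grid sizes, response matrix, and window placement are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_channels = 8, 16

# R[i, j]: probability that a photon in energy group i produces a count
# in pulse-height channel j (would come from the MC detector simulation).
R = rng.random((n_groups, n_channels))
R /= R.sum(axis=1, keepdims=True)

# Scalar flux per energy group at the detector (would come from the
# FEM transport solution).
flux = rng.random(n_groups)

# Final response: the pulse-height spectrum of the modeled sonde.
spectrum = flux @ R

# Detection windows: channel ranges whose count rates depend differently
# on electron density and lithology (window positions are hypothetical).
windows = {"density": slice(2, 6), "photoelectric": slice(8, 12)}
counts = {name: spectrum[s].sum() for name, s in windows.items()}
```

Because each row of `R` is normalized, the total count rate equals the total flux reaching the detector; real response matrices also carry detector efficiency.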

Relevance: 60.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 60.00%

Abstract:

This article reports a combined experimental and theoretical analysis of the one- and two-photon absorption properties of a novel class of organic molecules with a pi-conjugated backbone based on phenylacetylene (JCM874, FD43, and FD48) and azoaromatic (YB3p2S) moieties. Linear optical measurements show that the phenylacetylene-based compounds exhibit strong molar absorptivity in the UV and high fluorescence quantum yields, with lifetimes of approximately 2.0 ns, while the azoaromatic compound has strong absorption in the visible region with a very low fluorescence quantum yield. Two-photon absorption was investigated using nonlinear optical techniques and quantum chemical calculations based on the response-function formalism within the density functional theory framework. The experimental data revealed well-defined 2PA spectra with reasonable cross-section values in the visible and IR. In the nonlinear spectra we observed two 2PA-allowed bands, as well as a resonance enhancement effect due to the presence of an intermediate one-photon-allowed state. Quantum chemical calculations revealed that the 2PA-allowed bands correspond to transitions to states that are also one-photon allowed, indicating a relaxation of the electric-dipole selection rules. Moreover, using the theoretical results, we were able to interpret the experimental trends of the 2PA spectra. Finally, using a few-energy-level diagram within the sum-over-essential-states approach, we observed strong qualitative and quantitative correlation between experimental and theoretical results.
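The resonance enhancement mentioned above falls out of the sum-over-states expression: the two-photon amplitude through an intermediate state grows as that state approaches the energy of one photon. A schematic three-level calculation in arbitrary units; all energies, dipole moments, and the damping value below are invented for illustration:

```python
def two_photon_amplitude(E_final, E_mid, mu_0m, mu_mf, gamma=0.1):
    """Degenerate two-photon amplitude via a single intermediate state
    (one term of a sum-over-states expansion, arbitrary units)."""
    photon = E_final / 2.0  # each photon carries half the transition energy
    return mu_0m * mu_mf / (E_mid - photon - 1j * gamma)

# The amplitude grows as the intermediate state approaches the
# one-photon energy E_final / 2 = 2.0 (resonance enhancement).
amps = [abs(two_photon_amplitude(4.0, E_mid, 1.0, 1.0))
        for E_mid in (3.5, 2.5, 2.1)]
```

A full cross-section calculation would sum such terms over all essential states and multiply by a lineshape function, but the enhancement trend is already visible in this single term.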

Relevance: 60.00%

Abstract:

In this work from the field of few-nucleon physics, the newly developed Lorentz Integral Transform (LIT) method is applied to the study of nuclear photoabsorption and electron scattering on light nuclei. The LIT method makes exact calculations possible without explicit determination of the final states in the continuum. The problem is reduced to the solution of a bound-state-like equation in which the final-state interaction is fully taken into account. The LIT equation is solved by means of an expansion in hyperspherical harmonic functions, whose convergence is accelerated by the use of an effective interaction within the hyperspherical formalism (EIHH). This work presents the first microscopic calculation of the total photoabsorption cross section below the pion production threshold for 6Li, 6He and 7Li. The calculations are performed with central semirealistic NN interactions that partially simulate the tensor force, since the binding energies of the deuteron and of the three-body nuclei are reproduced correctly. The photoabsorption cross section of 6Li shows only one giant dipole resonance, whereas 6He exhibits two distinct peaks, corresponding to the breakup of the halo and of the alpha core. Comparison with experimental data shows that adding a P-wave interaction improves the agreement substantially. For 7Li only one giant dipole resonance is found, in good agreement with the available experimental data. Regarding electron scattering, the calculation of the longitudinal and transverse response functions of 4He in the quasi-elastic region for intermediate momentum transfer is presented. A nonrelativistic model is used for the charge and current operators. The calculations are performed with semirealistic interactions, and a gauge-invariant current is obtained by introducing a meson-exchange current. The effect of the two-body current on the transverse response functions is investigated. Preliminary results are shown and compared with the available experimental data.
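The forward transform itself is simply an integral of the response function against a Lorentzian kernel, L(σ_R, σ_I) = ∫ R(ω) dω / ((ω − σ_R)² + σ_I²); the point of the LIT method is that L can be computed from a bound-state-like equation without ever knowing R. A numerical sketch of the forward transform for a fabricated response function (grid, peak position, and widths are illustrative):

```python
import numpy as np

def lorentz_transform(omega, R, sigma_R, sigma_I):
    """Forward Lorentz integral transform on a discrete omega grid."""
    kernel = 1.0 / ((omega - sigma_R) ** 2 + sigma_I ** 2)
    d_omega = omega[1] - omega[0]
    return (R * kernel).sum() * d_omega  # simple rectangle rule

omega = np.linspace(0.0, 100.0, 2001)       # energy grid (MeV, illustrative)
R = np.exp(-(omega - 30.0) ** 2 / 50.0)     # toy resonance peaked at 30 MeV
L = [lorentz_transform(omega, R, sR, 10.0) for sR in (0.0, 30.0, 60.0)]
```

The transform is largest when σ_R sits on the resonance; recovering R from a computed L is the (ill-posed) inversion step that closes the method.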

Relevance: 60.00%

Abstract:

In this work, nonlinear experiments for investigating the dynamics of amorphous solids are discussed in the framework of model calculations. The experiments address the question of dynamic heterogeneities, by which one understands the presence of dynamic processes on different time scales. If it is possible to selectively probe 'slow' or 'fast' dynamics in the sample, the existence of dynamic heterogeneities is demonstrated. The goal of the experiments is therefore a so-called frequency-selective excitation of the system. In the two experiments discussed, nonresonant hole burning on the one hand and a similar experiment based on the dynamic Kerr effect on the other, nonlinear response functions are measured. To excite a sample in a frequency-selective way, one or more cycles of an oscillating electric field are first applied to it. The experiments are first examined in the terahertz range. On this time scale one finds phonon-like collective vibrations in glasses, which are described by (anharmonic) Brownian oscillators. The central finding of the model calculations is that frequency-selective excitation in the terahertz range is possible, so that both experiments can demonstrate dynamic heterogeneities in this range. The proposed Kerr-effect experiment is then discussed at much lower frequencies. The slow reorientational dynamics in supercooled liquids is described by a rotational diffusion model, assuming either a heterogeneous or a homogeneous scenario. It turns out that, as with hole burning, the experiment can distinguish between the two. The Kerr-effect experiment is therefore proposed as a relatively simple alternative to the technique of nonresonant hole burning.

Relevance: 60.00%

Abstract:

OBJECTIVE: To investigate effects of isoflurane at approximately the minimum alveolar concentration (MAC) on the nociceptive withdrawal reflex (NWR) of the forelimb of ponies as a method for quantifying anesthetic potency. ANIMALS: 7 healthy adult Shetland ponies. PROCEDURE: Individual MAC (iMAC) for isoflurane was determined for each pony. Then, effects of isoflurane administered at 0.85, 0.95, and 1.05 iMAC on the NWR were assessed. At each concentration, the NWR threshold was defined electromyographically for the common digital extensor and deltoid muscles by stimulating the digital nerve; additional electrical stimulations (3, 5, 10, 20, 30, and 40 mA) were delivered, and the evoked activity was recorded and analyzed. After the end of anesthesia, the NWR threshold was assessed in standing ponies. RESULTS: Mean +/- SD MAC of isoflurane was 1.0 +/- 0.2%. The NWR thresholds for both muscles increased significantly in a concentration-dependent manner during anesthesia, whereas they decreased in awake ponies. Significantly higher thresholds were found for the deltoid muscle, compared with thresholds for the common digital extensor muscle, in anesthetized ponies. At each iMAC tested, amplitudes of the reflex responses from both muscles increased as stimulus intensities increased from 3 to 40 mA. A concentration-dependent depression of evoked reflexes with reduction in slopes of the stimulus-response functions was detected. CONCLUSIONS AND CLINICAL RELEVANCE: Anesthetic-induced changes in sensory-motor processing in ponies anesthetized with isoflurane at concentrations of approximately 1.0 MAC can be detected by assessment of NWR. This method will permit comparison of effects of inhaled anesthetics or anesthetic combinations on spinal processing in equids.

Relevance: 60.00%

Abstract:

Custom modes at a wavelength of 1064 nm were generated with a deformable mirror. The required surface deformations of the adaptive mirror were calculated with the Collins integral written in a matrix formalism. The appropriate size and shape of the actuators, as well as the required stroke, were determined to ensure that the surface of the controllable mirror matches the phase front of the custom modes. A semipassive bimorph adaptive mirror with five concentric ring-shaped actuators and one defocus actuator was manufactured and characterised. The surface deformation was modelled with the response functions of the adaptive mirror in terms of an expansion with Zernike polynomials. In the experiments the Nd:YAG laser crystal was quasi-CW pumped to avoid thermally induced distortions of the phase front. The adaptive mirror makes it possible to switch between a super-Gaussian mode, a doughnut mode, a Hermite-Gaussian fundamental beam, multi-mode operation, or no oscillation in real time during laser operation.
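Expressing a measured surface deformation "in terms of an expansion with Zernike polynomials" is a linear least-squares fit once the response functions are tabulated. A sketch with the two lowest radially symmetric Zernike terms; the synthetic "measurement" and its coefficients are fabricated:

```python
import numpy as np

r = np.linspace(0.0, 1.0, 200)   # normalized pupil radius

def z_defocus(r):    # radially symmetric Zernike Z(2,0)
    return np.sqrt(3.0) * (2.0 * r**2 - 1.0)

def z_spherical(r):  # radially symmetric Zernike Z(4,0)
    return np.sqrt(5.0) * (6.0 * r**4 - 6.0 * r**2 + 1.0)

# Design matrix: one column per basis polynomial sampled on the radius grid.
basis = np.column_stack([z_defocus(r), z_spherical(r)])

# Synthetic measured deformation (coefficients chosen arbitrarily).
surface = 0.8 * z_defocus(r) + 0.1 * z_spherical(r)

# Least-squares expansion coefficients of the surface in the Zernike basis.
coeffs, *_ = np.linalg.lstsq(basis, surface, rcond=None)
```

With real interferometric data the fit residual, rather than being zero, quantifies how well the ring-actuator geometry can reproduce the target phase front.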

Relevance: 60.00%

Abstract:

Squeeze film damping effects naturally occur when structures are subjected to loading situations such that a very thin film of fluid is trapped within structural joints, interfaces, etc. An accurate estimate of squeeze film effects is important for predicting the performance of dynamic structures. Starting from the linear Reynolds equation, which governs the fluid behavior, coupled with the structural domain, which is modeled by the Kirchhoff plate equation, the effects of nondimensional parameters on the damped natural frequencies are presented using boundary characteristic orthogonal functions. For this purpose, the nondimensional coupled partial differential equations, obtained using the Rayleigh-Ritz method and the weak formulation, are solved using polynomial and sinusoidal boundary characteristic orthogonal functions for the structural and fluid domains, respectively. In order to apply the present approach to complex geometries, a two-dimensional isoparametric coupled finite element is developed based on Reissner-Mindlin plate theory and the linearized Reynolds equation. The coupling between fluid and structure is handled by considering the pressure forces and structural surface velocities on the boundaries. The effects of the driving parameters on the frequency response functions are investigated. As the next logical step, an analytical method for the solution of squeeze film damping, based upon Green's functions for the nonlinear Reynolds equation and considering an elastic plate, is studied. This allows the modal damping and stiffness forces to be calculated rapidly for various boundary conditions. The nonlinear Reynolds equation is divided into multiple linear non-homogeneous Helmholtz equations, which can then be solved using the presented approach. Approximate mode shapes of a rectangular elastic plate are used, enabling calculation of the damping ratio and frequency shift as well as the complex resistant pressure. 
Moreover, the theoretical results are correlated and compared with experimental results, both from the literature and from in-house experimental procedures, including comparison against viscoelastic dampers.
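The headline effect, that squeeze-film damping lowers and broadens the resonance peaks of the frequency response functions, can be reproduced with a single-degree-of-freedom toy model in which the film contributes an equivalent viscous coefficient. All numbers below are invented; the real analysis couples a full plate model to the Reynolds equation:

```python
import numpy as np

m, k = 1.0, 1.0e4                  # mass (kg) and stiffness (N/m)
w = np.linspace(1.0, 300.0, 3000)  # rad/s; resonance near sqrt(k/m) = 100

def receptance(c):
    """Displacement-per-force FRF of a viscously damped SDOF oscillator."""
    return 1.0 / (k - m * w**2 + 1j * c * w)

peak_light = np.abs(receptance(2.0)).max()    # thin film: little damping
peak_heavy = np.abs(receptance(20.0)).max()   # thicker squeeze film
```

In the paper's setting the equivalent `c` (and an added-stiffness term) come out of the fluid solution and vary with the nondimensional squeeze number, which is what shifts the damped natural frequencies.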

Relevance: 60.00%

Abstract:

The capability to detect combustion in a diesel engine has the potential to be an important control feature for meeting increasingly stringent emission regulations, developing alternative combustion strategies, and using biofuels. In this dissertation, block-mounted accelerometers were investigated as potential feedback sensors for detecting combustion characteristics in a high-speed, high-pressure common rail (HPCR), 1.9 L diesel engine. Accelerometers were positioned in multiple placements and orientations on the engine, and engine testing was conducted under motored, single-injection, and pilot-main injection conditions. Engine tests were conducted at varying injection timings, engine loads, and engine speeds to observe the resulting time- and frequency-domain changes of the cylinder pressure and accelerometer signals. The frequency content of the cylinder-pressure-based signals and the accelerometer signals between 0.5 kHz and 6 kHz indicated a strong correlation, with coherence values of nearly 1. The accelerometers were used to produce estimated combustion signals using the Frequency Response Functions (FRF) measured from the frequency-domain characteristics of the cylinder pressure signals and the response of the accelerometers attached to the engine block. When compared to the actual combustion signals, the estimated combustion signals produced from the accelerometer response had Root Mean Square Errors (RMSE) between 7% and 25% of the actual signal's peak value. Weighting the FRFs from multiple test conditions along their frequency axis with the coherent output power reduced the median RMSE of the estimated combustion signals and the 95th percentile of the RMSE produced for each test condition. The RMSEs of the magnitude-based combustion metrics, including peak cylinder pressure, MPG, peak ROHR, and work, estimated from the combustion signals produced by the accelerometer responses were between 15% and 50% of their actual values. 
The MPG measured from the estimated pressure gradient shared a direct relationship with the actual MPG. The location-based combustion metrics, such as the locations of peak values and burn durations, were capable of RMSE measurements as low as 0.9°. Overall, the accelerometer-based combustion sensing system was capable of detecting combustion and providing feedback regarding the in-cylinder combustion process.
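The FRF step described above is commonly done with the H1 estimator, the cross-spectrum between input and output divided by the input auto-spectrum, with the coherence flagging bands where the estimate can be trusted. A sketch on synthetic data; the low-pass filter standing in for the engine-block transfer path, the sample rate, and the noise level are all arbitrary:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs = 20_000.0                          # Hz, illustrative sample rate
x = rng.standard_normal(200_000)       # stand-in for the cylinder pressure

# A second-order low-pass filter plays the role of the transfer path
# from the combustion chamber to the block-mounted accelerometer.
b, a = signal.butter(2, 0.2)
y = signal.lfilter(b, a, x) + 0.01 * rng.standard_normal(x.size)

f, Sxy = signal.csd(x, y, fs=fs, nperseg=1024)    # cross-spectrum
_, Sxx = signal.welch(x, fs=fs, nperseg=1024)     # input auto-spectrum
H1 = Sxy / Sxx                                    # estimated FRF
_, coh = signal.coherence(x, y, fs=fs, nperseg=1024)
```

Weighting FRFs by coherent output power, as the dissertation does, down-weights exactly the bands where `coh` drops.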

Relevance: 60.00%

Abstract:

To assess if tree age may modulate the main climatic drivers of radial growth, two relict Pinus nigra subsp. salzmannii populations (Maria, most xeric site; Magina, least xeric site) were sampled in southern Spain near the limits of the species range. Tree-ring width residual chronologies for two age groups (mature trees, age <= 100 years (minimum 40 years); old trees, age > 100 years) were built to evaluate their responses to climate by relating them to monthly precipitation and temperature and a drought index (DRI) using correlation and response functions. We found that drought is the main driver of growth of relict P. nigra populations, but differences between sites and age classes were also observed. First, growth in the most xeric site depends on the drought severity during the previous autumn and the spring of the year of tree-ring formation, whereas in the relatively more mesic site growth is mainly enhanced by warm and wet conditions in spring. Second, growth of mature trees responded more to drought severity than that of old trees. Our findings indicate that drought severity will mainly affect growth of relict P. nigra populations dominated by mature trees in xeric sites. This conclusion may also apply to similar mountain Mediterranean conifer relicts.
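The correlation-function part of such an analysis reduces to correlating the ring-width index (RWI) chronology with each monthly climate series. A toy example on synthetic data; the 0.7 "drought sensitivity" and the variable names are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n_years = 60

# Hypothetical standardized monthly climate predictors.
climate = {
    "prev_autumn_dri": rng.standard_normal(n_years),
    "spring_dri": rng.standard_normal(n_years),
    "summer_temp": rng.standard_normal(n_years),
}

# Synthetic chronology of a drought-limited stand: responds to spring
# drought only, plus noise.
rwi = 0.7 * climate["spring_dri"] + 0.3 * rng.standard_normal(n_years)

corr = {name: np.corrcoef(rwi, series)[0, 1]
        for name, series in climate.items()}
```

In practice these coefficients are screened with bootstrapped confidence intervals (the standard response-function procedure) before a month is interpreted as a growth driver.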

Relevance: 60.00%

Abstract:

Investigations have shown that the analysis results of ground level enhancements (GLEs) based on neutron monitor (NM) data for a selected event can differ considerably depending on the procedure used. This may have significant consequences, e.g. for the assessment of radiation doses at flight altitudes. The reasons for the spread of the GLE parameters deduced from NM data can be manifold and are at present unclear. They include differences in specific properties of the various analysis procedures (e.g. NM response functions, different ways of taking into account the dynamics of the Earth's magnetospheric field), different characterisations of the solar particle flux near Earth, as well as the specific selection of NM stations used for the analysis. In the present paper we quantitatively investigate this problem for a time interval during the maximum phase of the GLE on 13 December 2006. We present and discuss the changes in the resulting GLE parameters when using different NM response functions, different model representations of the Earth's magnetospheric field, and different assumptions for the solar particle spectrum and pitch angle distribution near Earth. The results of the study are expected to yield a basis for reducing the spread of the GLE parameters deduced from NM data.
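A neutron monitor's count rate is, schematically, the integral of the near-Earth particle spectrum times the monitor's yield (response) function above the station's geomagnetic cutoff rigidity; the spread the paper discusses enters through the choice of each factor. A deliberately crude sketch in which the power-law index, the yield shape, and the cutoffs are all invented:

```python
import numpy as np

P = np.linspace(1.0, 20.0, 2000)   # rigidity grid, GV
dP = P[1] - P[0]
spectrum = P ** -5.0               # assumed solar-proton power law

def count_rate(cutoff_gv):
    """Toy GLE count-rate increase for a station with the given cutoff."""
    yield_fn = np.where(P >= cutoff_gv, np.log1p(P), 0.0)  # toy response
    return (spectrum * yield_fn).sum() * dP                # rectangle rule

rates = [count_rate(c) for c in (2.0, 5.0, 10.0)]
```

Because the solar spectrum is steep, low-cutoff (polar) stations dominate the signal, which is one reason the station selection itself changes the fitted GLE parameters.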

Relevance: 60.00%

Abstract:

We combine phytoplankton occurrence data for 119 species from the continuous plankton recorder with climatological environmental variables in the North Atlantic to obtain ecological response functions of each species using the MaxEnt statistical method. These response functions describe how the probability of occurrence of each species changes as a function of environmental conditions and can be reduced to a simple description of phytoplankton realized niches using the mean and standard deviation of each environmental variable, weighted by its response function. Although there was substantial variation in the realized niche among species within groups, the envelope of the realized niches of North Atlantic diatoms and dinoflagellates are mostly separate in niche space.
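The reduction to a realized niche described above is a weighted mean and standard deviation of each environmental variable, with the response function as the weight. A sketch with a fabricated Gaussian response over sea-surface temperature (the 12 °C optimum and 3 °C width are invented):

```python
import numpy as np

sst = np.linspace(0.0, 25.0, 251)   # environmental grid, deg C

# Toy occurrence-probability response function (MaxEnt would supply this).
response = np.exp(-(sst - 12.0) ** 2 / (2.0 * 3.0 ** 2))

w = response / response.sum()       # normalize to weights
niche_mean = (w * sst).sum()        # realized-niche center
niche_std = np.sqrt((w * (sst - niche_mean) ** 2).sum())  # niche breadth
```

Repeating this for every environmental variable places each species as a point (mean) with an envelope (standard deviation) in niche space, which is how the diatom and dinoflagellate envelopes are compared.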

Relevance: 60.00%

Abstract:

Scaling is becoming an increasingly important topic in the earth and environmental sciences as researchers attempt to understand complex natural systems through the lens of an ever-increasing set of methods and scales. The guest editors introduce the papers in this issue’s special section and present an overview of some of the work being done. Scaling remains one of the most challenging topics in earth and environmental sciences, forming a basis for our understanding of process development across the multiple scales that make up the subsurface environment. Tremendous progress has been made in discovery, explanation, and applications of scaling. And yet much more needs to be done and is being done as part of the modern quest to quantify, analyze, and manage the complexity of natural systems. Understanding and succinct representation of scaling properties can unveil underlying relationships between system structure and response functions, improve parameterization of natural variability and heterogeneity, and help us address societal needs by effectively merging knowledge acquired at different scales.

Relevance: 60.00%

Abstract:

For an adequate assessment of the safety margins of nuclear facilities, e.g. nuclear power plants, it is necessary to consider all the uncertainties that affect their design, performance and response to accidents. Nuclear data are one such source of uncertainty, entering neutronics, fuel depletion and activation calculations. These calculations predict response functions that are critical during operation and in the event of an accident, such as the decay heat and the neutron multiplication factor. 
Thus, the impact of nuclear data uncertainties on these response functions needs to be addressed for a proper evaluation of the safety margins. Methodologies for performing uncertainty propagation calculations need to be implemented in order to analyse the impact of nuclear data uncertainties. Nevertheless, it is necessary to understand the current status of nuclear data and their uncertainties, in order to be able to handle this type of data. Great efforts are underway to enhance the European capability to analyse/process/produce covariance data, especially for isotopes which are of importance for advanced reactors. At the same time, new methodologies/codes are being developed and implemented for using and evaluating the impact of uncertainty data. These were the objectives of the European ANDES (Accurate Nuclear Data for nuclear Energy Sustainability) project, which provided a framework for the development of this PhD Thesis. Accordingly, first a review of the state-of-the-art of nuclear data and their uncertainties is conducted, focusing on the three kinds of data: decay, fission yields and cross sections. A review of the current methodologies for propagating nuclear data uncertainties is also performed. The Nuclear Engineering Department of UPM has proposed a methodology for propagating uncertainties in depletion calculations, the Hybrid Method, which has been taken as the starting point of this thesis. This methodology has been implemented, developed and extended, and its advantages, drawbacks and limitations have been analysed. It is used in conjunction with the ACAB depletion code, and is based on Monte Carlo sampling of variables with uncertainties. Different approaches are presented depending on the cross-section energy structure: one-group, one-group with correlated sampling and multi-group. Differences and applicability criteria are presented. 
Sequences have been developed for using different nuclear data libraries in different storage formats: ENDF-6 (for evaluated libraries) and COVERX (for multi-group libraries of SCALE), as well as the EAF format (for activation libraries). A revision of the state-of-the-art of fission yield data shows inconsistencies in uncertainty data, specifically the lack of complete covariance matrices. Furthermore, the international community has expressed renewed interest in the issue through Subgroup 37 (SG37) of the Working Party on International Nuclear Data Evaluation Co-operation (WPEC), which is dedicated to assessing the needs for improved nuclear data. This motivates a review of the state-of-the-art of methodologies for generating covariance data for fission yields; a Bayesian/generalised least squares (GLS) updating sequence has been selected and implemented to address this need. Once the Hybrid Method has been implemented, developed and extended, along with the fission yield covariance generation capability, different applications are studied. The Fission Pulse Decay Heat problem is tackled first because of its importance during events after shutdown and because it is a clean exercise for showing the impact and importance of decay and fission yield data uncertainties in conjunction with the new covariance data. Two fuel cycles of advanced reactors are studied: the European Facility for Industrial Transmutation (EFIT) and the European Sodium Fast Reactor (ESFR), and response function uncertainties such as isotopic composition, decay heat and radiotoxicity are addressed. Different nuclear data libraries are used and compared. These applications serve as frameworks for comparing the different approaches of the Hybrid Method, and also for comparing with other methodologies: Total Monte Carlo (TMC), developed at NRG by A.J. Koning and D. Rochman, and NUDUNA, developed at AREVA GmbH by O. Buss and A. Hoefer. 
These comparisons reveal the advantages, limitations and the range of application of the Hybrid Method.
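The core of the Hybrid Method, Monte Carlo sampling of uncertain nuclear data followed by repeated depletion runs, can be illustrated with a one-nuclide, one-group toy problem. All numbers and the 5% relative uncertainty below are invented; the real method samples full covariance matrices and drives the ACAB code:

```python
import numpy as np

rng = np.random.default_rng(3)

sigma_mean = 2.7e-24   # one-group absorption cross section, cm^2 (assumed)
rel_unc = 0.05         # 5% relative standard deviation (assumed)
phi, t = 1.0e14, 3.15e7   # flux (n/cm^2/s) and irradiation time (s)
N0 = 1.0e24               # initial number of nuclei

# Sample the uncertain cross section, then "deplete" once per sample:
# for a single nuclide, N(t) = N0 * exp(-sigma * phi * t).
sigma = rng.normal(sigma_mean, rel_unc * sigma_mean, size=10_000)
N_end = N0 * np.exp(-sigma * phi * t)

# Response function (final inventory) and its propagated uncertainty.
mean, std = N_end.mean(), N_end.std()
```

The same sampling loop, applied to correlated decay, fission-yield and cross-section data, yields distributions for decay heat or radiotoxicity rather than a single nuclide inventory.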