977 results for OPACITY CALCULATIONS

Tin disulfide (SnS2) was recently proposed as a precursor for high-efficiency solar cells [1]. The aim of this work is an in-depth study of the structural arrangement of the most important polytypes of this layered material, describing not only the electronic correlation but also the interatomic Van der Waals interactions present between the layers. The two recent implementations available in the VASP code to take Van der Waals interactions into account are the self-consistent Dion et al. functional [2], optimized for solids by Michaelides et al. [3], and the Grimme dispersion correction [4], which is applied after each self-consistent PBE electronic calculation. In this work these two methods are compared with the DFT-PBE functional. The results presented at this Conference demonstrate that including Van der Waals interactions improves the geometric parameters, bringing them into agreement with the experimental values.
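As an illustrative sketch only (not the actual inputs of this work), the dispersion treatment in VASP is selected through the INCAR file; per the VASP documentation, IVDW = 1 activates the Grimme DFT-D2 correction on top of PBE, while the self-consistent vdW-DF functionals are enabled instead through the LUSE_VDW family of tags:

```
GGA  = PE      # standard PBE exchange-correlation functional
IVDW = 1       # add the Grimme DFT-D2 dispersion correction after each SCF cycle
```

Removing the IVDW tag recovers a plain PBE calculation, which is the baseline the two dispersion schemes are compared against.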


A review of the experimental data for natC(n,γ) and 12C(n,γ) was made to identify the origin of the natC capture cross sections included in evaluated data libraries and to clarify differences observed in neutronic calculations for graphite moderated reactors using different libraries. The performance of the JEFF-3.1.2 and ENDF/B-VII.1 libraries was verified by comparing results of criticality calculations with experimental results obtained for the BR1 reactor. This reactor is an air-cooled reactor with graphite as moderator and is located at the Belgian Nuclear Research Centre SCK-CEN in Mol (Belgium). The results of this study confirm conclusions drawn from neutronic calculations of the High Temperature Engineering Test Reactor (HTTR) in Japan. Furthermore, both BR1 and HTTR calculations support the capture cross section of 12C at thermal energy which is recommended by Firestone and Révay. Additional criticality calculations were carried out in order to illustrate that the natC thermal capture cross section is important for systems with a large amount of graphite. The present study shows that only the evaluation carried out for JENDL-4.0 reflects the current status of the experimental data.
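The natC capture cross section is simply the abundance-weighted sum over the natural isotopes. A minimal sketch with illustrative numbers close to commonly tabulated thermal values (consult the evaluated libraries for the actual data):

```python
# Natural carbon thermal (n,gamma) capture as an abundance-weighted sum.
# Abundances and cross sections below are illustrative, not evaluated data.
ab_12c, ab_13c = 0.9893, 0.0107        # natural isotopic abundances
sig_12c, sig_13c = 3.53e-3, 1.37e-3    # thermal capture cross sections, barns

sig_nat = ab_12c * sig_12c + ab_13c * sig_13c  # ~3.5 mb, dominated by 12C
```

Because 12C dominates both the abundance and the capture, a change in the recommended 12C thermal value propagates almost one-to-one into natC, which is why graphite-moderated systems are sensitive to it.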


In activation calculations, there are several approaches to quantifying uncertainties: deterministic, by means of sensitivity analysis, and stochastic, by means of Monte Carlo. Here, two different Monte Carlo approaches to nuclear data uncertainty are presented. The first is the Total Monte Carlo (TMC). The second is a Monte Carlo sampling of the covariance information included in the nuclear data libraries, propagating these uncertainties throughout the activation calculations; we refer to this approach as Covariance Uncertainty Propagation (CUP). This work presents both approaches and their differences. They are also compared by means of an activation calculation in which the cross-section uncertainties of 239Pu and 241Pu are propagated through an ADS activation calculation.
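The CUP idea can be sketched in a few lines: draw cross-section samples from their covariance matrix and push each sample through the activation calculation. Everything below is hypothetical (one-group toy cross sections, covariance, flux and response), not data from this work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-group cross sections (barns) for two reactions,
# with a correlated covariance matrix (~5-6 % relative uncertainty)
sigma_mean = np.array([1.8, 1.4])
cov = np.array([[0.0100, 0.0040],
                [0.0040, 0.0064]])

phi = 1e14 * 1e-24   # flux, converted to 1/(barn s)
t = 3.15e7           # irradiation time: one year, s
n0 = 1.0             # initial nuclide inventory (normalized)

def activation(sig):
    # Toy response: fraction of initial nuclei transmuted by each reaction
    return n0 * (1.0 - np.exp(-sig * phi * t))

samples = rng.multivariate_normal(sigma_mean, cov, size=5000)
resp = activation(samples)              # one response pair per sample
mean, std = resp.mean(axis=0), resp.std(axis=0)
```

The spread of `resp` is the propagated uncertainty; because the toy response is nearly linear in the cross section, the relative output uncertainty tracks the ~5.6 % relative input uncertainty.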


In the framework of the OECD/NEA project on Benchmark for Uncertainty Analysis in Modeling (UAM) for Design, Operation, and Safety Analysis of LWRs, several approaches and codes are being used to deal with the exercises proposed in Phase I, “Specifications and Support Data for Neutronics Cases.” At UPM, our research group treats these exercises with sensitivity calculations and the “sandwich formula” to propagate cross-section uncertainties. Two different codes are employed to calculate the sensitivity coefficients of k-effective to cross sections in criticality calculations: MCNPX-2.7e, which uses the Differential Operator Technique, and SCALE-6.1, which uses the Adjoint-Weighted Technique. In this paper, the main results for exercise I-2, “Lattice Physics,” are presented for the criticality calculations of a PWR. These calculations are performed for a TMI fuel assembly at four different states: HZP-Unrodded, HZP-Rodded, HFP-Unrodded, and HFP-Rodded. The results of the two codes are presented and compared. The comparison shows good agreement between SCALE-6.1 and MCNPX-2.7e in the uncertainty derived from the sensitivity coefficients calculated by both codes. Differences are found when the sensitivity profiles are analysed, but they do not lead to differences in the uncertainty.
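The sandwich formula combines a sensitivity vector S with a relative covariance matrix C of the cross sections to give the relative variance of k-effective, var(k)/k² = Sᵀ C S. A minimal sketch with hypothetical numbers (the sensitivities and covariances are made up for illustration, not benchmark values):

```python
import numpy as np

# Relative sensitivity coefficients S_i = (dk/k)/(dsigma_i/sigma_i)
S = np.array([0.25, -0.12, 0.05])           # hypothetical values

# Relative covariance matrix of the three cross sections
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    1.6e-3]])

var_k = S @ C @ S                # sandwich rule: var(k)/k^2 = S^T C S
unc_pct = 100.0 * np.sqrt(var_k) # relative uncertainty of k-eff in percent
```

Note how the off-diagonal covariance term enters with the product of the two sensitivities: here S[0] and S[1] have opposite signs, so the positive correlation between those cross sections reduces the total uncertainty.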


Generation of Fission Yield covariance data and application to Fission Pulse Decay Heat calculations


Propagation of nuclear data uncertainties in reactor calculations is of interest for design purposes and for library evaluation. Previous versions of the GRS XSUSA library propagated only neutron cross-section uncertainties. We have extended the XSUSA uncertainty assessment capabilities by including propagation of fission yield and decay data uncertainties, due to their relevance in depletion simulations. We apply this extended methodology to the UAM6 PWR Pin-Cell Burnup Benchmark, which involves uncertainty propagation through burnup.
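Propagating a decay data uncertainty through depletion can be illustrated with the simplest possible case: sample a decay constant from its uncertainty and observe the spread of the surviving fraction after a cooling time. All numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical decay constant (1/s) with a 2 % relative 1-sigma uncertainty
lam_mean, rel = 1.0e-5, 0.02
t = 1.0e5                       # cooling time (s), chosen so lam_mean*t = 1

lams = rng.normal(lam_mean, rel * lam_mean, size=4000)
n_frac = np.exp(-lams * t)      # surviving fraction of the nuclide per sample

# First order, std(n)/mean(n) ~ (lam*t) * rel, i.e. ~0.02 here
rel_unc = n_frac.std() / n_frac.mean()
```

The same sampling pattern extends to fission yields: each sampled yield set feeds one depletion run, and the ensemble of runs gives the output uncertainty.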


The calibration coefficients of several models of cup and propeller anemometers were analysed. The analysis was based on a series of laboratory calibrations between January 2003 and August 2007. Mean and standard deviation values of calibration coefficients from the anemometers studied were included. Two calibration procedures were used and compared. In the first, recommended by the Measuring network of Wind Energy Institutes (MEASNET), 13 measurement points were taken over a wind speed range of 4 to 16 m s−1. In the second procedure, 9 measurement points were taken over a wider speed range of 4 to 23 m s−1. Results indicated no significant differences between the two calibration procedures applied to the same anemometer in terms of measured wind speed and wind turbines' Annual Energy Production (AEP). The influence of the cup anemometers' design on the calibration coefficients was also analysed. The results revealed that the slope of the calibration curve, if based on the rotation frequency and not the anemometer's output frequency, seemed to depend on the cup center rotation radius.
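An anemometer calibration fits the linear transfer function v = A·f + B (slope and offset) by least squares over the measured points. The sketch below compares the two point layouts described, using synthetic data with hypothetical coefficients and tunnel noise:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical true transfer function v = A*f + B (A: slope, B: offset)
A_true, B_true = 0.62, 0.25

def calibrate(speeds):
    f = (speeds - B_true) / A_true                       # rotor output frequency
    v_meas = speeds + rng.normal(0.0, 0.02, speeds.size) # tunnel reading noise
    A, B = np.polyfit(f, v_meas, 1)                      # least-squares line
    return A, B

# MEASNET-style layout: 13 points over 4-16 m/s
A1, B1 = calibrate(np.linspace(4, 16, 13))
# Alternative layout: 9 points over the wider 4-23 m/s range
A2, B2 = calibrate(np.linspace(4, 23, 9))
```

With realistic noise levels both layouts recover essentially the same coefficients, consistent with the finding of no significant difference between the two procedures.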


Improvements to ARWEN, a radiation transport code coupled to hydrodynamics for the study of systems in the High Energy Density Physics regime, are presented. The developments introduced cover the following areas:

• Equations of state: a new methodology was developed to fit the results of a simple equation-of-state model such as QEOS to experimental data and AIMD results. This methodology is general enough to be applied to a large number of materials of interest, and it increases the fitting flexibility of the methods on which this work is based. In addition, a new library has been developed to manage equation-of-state data tables, which also handles opacity and ionization tables. The new library extends the capabilities of the previous one with more specific routines that speed up the calculations and with the possibility of using multiple tables for several materials at once.

• Diffusion solver: a new package has been developed to solve the diffusion equation as applied to heat conduction within the plasma. The previous method could not be run in parallel and produced mesh-dependent results, whereas the new one is parallelizable and achieves better convergence, yielding a solution that does not depend on the refinement of the mesh.

• Radiation package: a review of the implementation of the radiation model uncovered several bugs, which have been fixed. The new table-management library has been incorporated, allowing the optical properties of the plasma to be obtained in energy multigroups. The calculation of the transport coefficients has also been extended to the multimaterial scheme introduced by David Portillo García in the hydrodynamic package of the code, and the transport solution scheme has been revised and modified to make it parallelizable.

• A ray-tracing package for laser deposition has been implemented that extends the previous one to 3D, enabling different configurations to be used.

• Once all these tasks were completed, the ARWEN code was applied to laboratory astrophysics, simulating the radiative-shock experiments carried out at the PALS facility by Chantal Stehlé. The experimental results were compared with the ARWEN predictions, showing close agreement in the speed of the generated shock wave and in the dimensions of the precursor.

The simulation code developed in this work, together with the contributions of other researchers during this thesis, has enabled collaborations with laboratories in France and Japan, and published scientific results have been produced based on the work described in this thesis.
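The heat-conduction update described above is an implicit diffusion solve. The following is a minimal one-dimensional sketch of such a step (backward Euler on a uniform grid), not the ARWEN implementation; all grid and material numbers are made up:

```python
import numpy as np

# One implicit (backward Euler) step of 1D heat diffusion:
# (1 + 2r) T_i - r T_{i-1} - r T_{i+1} = T_i^old, with r = kappa*dt/dx^2.
n, dx, dt, kappa = 50, 1.0, 0.5, 1.0
r = kappa * dt / dx**2

# Tridiagonal system matrix (fixed-value boundaries outside the domain)
A = (np.diag((1 + 2 * r) * np.ones(n))
     + np.diag(-r * np.ones(n - 1), 1)
     + np.diag(-r * np.ones(n - 1), -1))

T_old = np.zeros(n)
T_old[n // 2] = 100.0            # localized hot spot
T_new = np.linalg.solve(A, T_old)
```

Unlike an explicit update, the implicit step is unconditionally stable for any time step, and the linear solve is the part that production codes parallelize.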


For an adequate assessment of the safety margins of a nuclear facility, e.g. a nuclear power plant, it is necessary to consider all the uncertainties that affect its design, operation and accident response. Nuclear data are one source of uncertainty, entering the neutronics, fuel depletion and activation calculations. These calculations predict response functions that are essential during operation and in the event of an accident, such as the neutron multiplication factor or the decay heat after reactor shutdown. The impact of nuclear data uncertainties on these response functions therefore needs to be evaluated. To perform such uncertainty propagation calculations, methodologies capable of assessing the impact of nuclear data uncertainties must be implemented, and the available uncertainty data must be well understood in order to handle them. Great efforts are currently being invested in enhancing the capability to analyse, process and produce covariance data, especially for isotopes of importance for advanced reactors, and new codes are being developed and implemented to use these data and analyse their impact. These were the objectives of the European ANDES (Accurate Nuclear Data for nuclear Energy Sustainability) project, which provided the framework for this PhD Thesis.

Accordingly, a review of the state of the art of nuclear data and their uncertainties was first carried out, focusing on three kinds of data: decay data, fission yields and cross sections, together with a review of the current methodologies for propagating nuclear data uncertainties. The Nuclear Engineering Department (DIN) of UPM had proposed a methodology for propagating uncertainties in depletion calculations, the Hybrid Method, which was taken as the starting point of this thesis: it has been implemented, developed and extended, and its advantages, drawbacks and limitations have been analysed. The Hybrid Method is used in conjunction with the ACAB depletion code and is based on Monte Carlo sampling of the nuclear data with uncertainties. Different approaches are presented depending on the energy-group structure of the cross sections: one group, one group with correlated sampling, and multigroup. Sequences have been developed to use nuclear data libraries stored in different formats: ENDF-6 (for the evaluated libraries), COVERX (for the SCALE multigroup libraries) and EAF (for the activation libraries).

The review of the state of the art of fission yield data identified a lack of uncertainty information, specifically of complete covariance matrices. In view of the renewed interest of the international community, expressed through Subgroup 37 (SG37) of the Working Party on International Nuclear Data Evaluation Co-operation (WPEC), which is dedicated to assessing nuclear data improvement needs, a review of the methodologies for generating covariance data was carried out. The Bayesian/generalised least squares (GLS) updating approach was selected and implemented to answer this lack of complete covariance matrices for fission yields.

Once the Hybrid Method had been implemented, developed and extended, together with the capability of generating complete covariance matrices for fission yields, different nuclear applications were studied. First, the decay heat after a fission pulse was analysed, because of its importance for any event after reactor shutdown and because it is a clean exercise for showing the importance of decay data and fission yield uncertainties together with the new complete covariance matrices. Two advanced-reactor fuel cycles were then studied, those of the European Facility for Industrial Transmutation (EFIT) and the European Sodium Fast Reactor (ESFR), analysing the impact of nuclear data uncertainties on the isotopic composition, decay heat and radiotoxicity. Different nuclear data libraries were used in these studies, comparing the impact of their uncertainties. These applications also served to compare the different approaches of the Hybrid Method with other methodologies for propagating nuclear data uncertainties: Total Monte Carlo (TMC), developed at NRG by A.J. Koning and D. Rochman, and NUDUNA, developed at AREVA GmbH by O. Buss and A. Hoefer. These comparisons reveal the advantages of the Hybrid Method, as well as its limitations and its range of application.
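A Bayesian/GLS update of fission yields can be sketched on a toy problem: start from an uncorrelated prior covariance and constrain the yields with an integral measurement (here a known sum). All numbers are hypothetical; the point is that the constraint produces the off-diagonal (anti-correlated) covariance entries that complete matrices require:

```python
import numpy as np

# Hypothetical prior: three independent yields with 5 % relative uncertainty
y = np.array([0.06, 0.03, 0.01])
C = np.diag((0.05 * y) ** 2)

# Integral constraint: the yields sum to a measured value m with variance V
S = np.ones((1, 3))                 # design matrix of the constraint
m = np.array([0.0995])
V = np.array([[1.0e-8]])

# Bayesian/GLS update equations
K = C @ S.T @ np.linalg.inv(S @ C @ S.T + V)   # gain matrix
y_post = y + (K @ (m - S @ y)).ravel()          # updated yields
C_post = C - K @ S @ C                          # updated (complete) covariance
```

After the update the posterior sum matches the measurement, each variance shrinks, and negative off-diagonal terms appear: raising one yield must lower the others to preserve the sum.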


Liquid-fueled burners are used in a number of propulsion devices ranging from internal combustion engines to gas turbines. The structure of spray flames is quite complex and involves a wide range of temporal and spatial scales in both premixed and non-premixed modes (Williams 1965; Luo et al. 2011). A number of spray-combustion regimes can be observed experimentally in canonical scenarios of practical relevance, such as counterflow diffusion flames (Li 1997), as sketched in figure 1, for which different microscale modelling strategies are needed. In this study, source terms for the conservation equations are calculated for heating, vaporizing and burning sprays in the single-droplet combustion regime. The present analysis provides an extended formulation for the source terms, which includes non-unity Lewis numbers and variable thermal conductivities.
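The simplest single-droplet vaporization model underlying such source terms is the classical d²-law, under which the droplet surface area shrinks linearly in time. A minimal sketch with hypothetical numbers (not values from this study):

```python
# Classical d^2-law: d(t)^2 = d0^2 - K*t, so the droplet lifetime is d0^2 / K.
d0 = 50e-6    # initial droplet diameter, m (hypothetical)
K = 1.0e-7    # evaporation-rate constant, m^2/s (hypothetical)

t_b = d0**2 / K   # droplet burnout/evaporation time, s
```

The extended formulation in the abstract refines the constant K through non-unity Lewis numbers and variable thermal conductivity, but the d²-law scaling with initial diameter remains the reference behaviour.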


Multigroup diffusion codes for three-dimensional LWR core analysis use as input data pre-generated homogenized few-group cross sections and discontinuity factors for certain combinations of state variables, such as temperatures or densities. The simplest way of compiling those data is tabulated libraries, where a grid covering the domain of state variables is defined and the homogenized cross sections are computed at the grid points. Then, during the core calculation, an interpolation algorithm is used to compute the cross sections from the table values. Since interpolation errors depend on the distance between the grid points, a certain mesh refinement is required to reach a target accuracy, which can lead to large data storage volumes and a large number of lattice transport calculations. In this paper, a simple and effective procedure to optimize the distribution of grid points for tabulated libraries is presented. Optimality is considered in the sense of building a non-uniform point distribution with the minimum number of grid points for each state variable satisfying a given target accuracy in k-effective. The procedure consists of determining the sensitivity coefficients of k-effective to cross sections using perturbation theory, and estimating the interpolation errors committed with different mesh steps for each state variable. These results allow evaluating the influence of interpolation errors of each cross section on k-effective for any combination of state variables, and estimating the optimal distance between grid points.
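The core of such a procedure is the classical bound on linear interpolation error, |e| ≤ h²/8 · max|f''|, which can be inverted to pick the largest grid step meeting a target accuracy. A sketch on a toy cross-section shape (the functional form and tolerance are made up for illustration):

```python
import numpy as np

# Invert the linear-interpolation error bound |e| <= h^2/8 * max|f''|
def optimal_step(f2_max, target):
    return np.sqrt(8.0 * target / f2_max)

# Toy cross section sigma(T) = a + b*sqrt(T); |f''(T)| = b / (4 T^1.5)
a, b, T_min = 10.0, 0.5, 300.0
f2_max = b / (4.0 * T_min**1.5)       # curvature is largest at T_min
h = optimal_step(f2_max, target=1e-4)

# Check: actual midpoint interpolation error on the first interval
f = lambda T: a + b * np.sqrt(T)
mid = T_min + h / 2
err = abs(f(mid) - 0.5 * (f(T_min) + f(T_min + h)))
```

Because the curvature falls with temperature here, the same target accuracy permits progressively wider steps at higher T, which is exactly what a non-uniform, optimized grid exploits.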


The 8-dimensional Luttinger–Kohn–Pikus–Bir Hamiltonian matrix may be made up of four 4-dimensional blocks. A 4-band Hamiltonian is presented, obtained by setting the off-diagonal blocks to zero. The parameters of the new Hamiltonian are adjusted to fit the calculated effective masses and strained QD bandgap to the measured ones. The 4-dimensional Hamiltonian thus obtained agrees well with the measured quantum efficiency of a quantum dot intermediate band solar cell, and the full absorption spectrum can be calculated in about two hours using Mathematica® on a notebook. This is a hundred times faster than with the commonly used 8-band Hamiltonian and is considered suitable for helping design engineers in the development of nanostructured solar cells.
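The block reduction can be illustrated numerically: zeroing the off-diagonal 4×4 blocks of an 8×8 Hermitian matrix decouples it into two 4-band problems whose eigenvalues are obtained independently. The sketch below uses a random Hermitian stand-in, not the actual Luttinger–Kohn–Pikus–Bir matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H8 = (M + M.conj().T) / 2            # stand-in Hermitian 8-band Hamiltonian

# Zero the off-diagonal 4x4 blocks -> two decoupled 4-band Hamiltonians
H4_up, H4_lo = H8[:4, :4], H8[4:, 4:]

# The block-diagonal approximation's spectrum is the union of the two blocks
e_blocks = np.sort(np.concatenate([np.linalg.eigvalsh(H4_up),
                                   np.linalg.eigvalsh(H4_lo)]))
```

Diagonalizing two 4×4 blocks is much cheaper than one 8×8 matrix at every k-point, which is the source of the speed-up the abstract reports (with the block parameters refit to recover the physical masses and bandgap).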


In the thin-film photovoltaic industry, achieving high light scattering at one or more of the cell interfaces is one of the strategies that allow an enhancement of light absorption inside the cell and, therefore, better device behavior and efficiency. Although chemical etching is the standard method of texturing surfaces for that scattering improvement, laser processing has emerged as a new way of texturing different materials, maintaining good control of the final topography with a unique, clean, and quite precise process. In this work, AZO films with different texture parameters are fabricated. The typical parameters used to characterize them, such as the root-mean-square roughness or the haze factor, are discussed and, for a deeper understanding of the scattering mechanisms, the light behavior in the films is simulated using a finite element method code. This method gives information about the light intensity at each point of the system, allowing precise characterization of the scattering behavior near the film surface, and it can also be used to calculate a simulated haze factor that can be compared with experimental measurements. A discussion of the validation of the numerical code, based on a comprehensive comparison with experimental data, is included.
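The haze factor mentioned above is conventionally the ratio of diffuse to total transmittance; the diffuse part is what remains after subtracting the specular (straight-through) component. A one-line sketch with hypothetical transmittance values for a textured film:

```python
def haze_factor(t_total, t_specular):
    """Haze = diffuse / total transmittance (dimensionless, 0..1)."""
    return (t_total - t_specular) / t_total

# Hypothetical measurements for a textured AZO film
h = haze_factor(t_total=0.82, t_specular=0.60)
```

The same definition applies to the simulated quantities: integrating the computed far-field intensity with and without the specular lobe yields a simulated haze directly comparable with spectrophotometer measurements.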


We investigated the relative free energies of hapten binding to the germ line and mature forms of the 48G7 antibody Fab fragments by applying a continuum model to structures sampled from molecular dynamics simulations in explicit solvent. Reasonable absolute and very good relative free energies were obtained. As a result of nine somatic mutations that do not contact the hapten, the affinity-matured antibody binds the hapten >10^4-fold more tightly than the germ line antibody. Energetic analysis reveals that van der Waals interactions and nonpolar contributions to solvation are similar in the two complexes and drive the formation of both the germ line and mature antibody–hapten complexes. Affinity maturation of the 48G7 antibody therefore appears to occur through reorganization of the combining-site geometry in a manner that optimizes the balance of gaining favorable electrostatic interactions with the hapten and losing those with solvent during the binding process. As reflected by lower rms fluctuations in the antibody–hapten complex, the mature complex undergoes more restricted fluctuations than the germ line complex. The dramatically increased affinity of the 48G7 antibody over its germ line precursor is thus made possible by electrostatic optimization.
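The roughly four-orders-of-magnitude affinity gain translates into a relative binding free energy through the standard relation ΔΔG = −RT ln(K_mature/K_germline). A quick sketch of that conversion (the ratio is taken from the abstract; the arithmetic is standard thermodynamics):

```python
import math

R = 1.987e-3          # gas constant, kcal/(mol K)
T = 298.0             # temperature, K
ratio = 1.0e4         # affinity improvement of mature over germ line antibody

ddG = -R * T * math.log(ratio)   # relative binding free energy, kcal/mol
```

A 10^4-fold tighter complex therefore corresponds to about 5.5 kcal/mol of additional binding free energy, which sets the scale the continuum-model calculations must resolve.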


Neisseria gonorrhoeae (GC) or Escherichia coli expressing phase-variable opacity (Opa) protein (Opa+ GC or Opa+ E. coli) adhere to human neutrophils and stimulate phagocytosis, whereas their counterparts not expressing Opa protein (Opa− GC or Opa− E. coli) do not. Opa+ GC or E. coli do not adhere to human lymphocytes or promyelocytic cell lines such as HL-60 cells. The adherence of Opa+ GC to neutrophils can be enhanced dramatically if the neutrophils are preactivated. These data suggest that the components binding the Opa+ bacteria might reside in the granules. CGM1a antigen, a transmembrane protein of the carcinoembryonic antigen family, is exclusively expressed in the granulocytic lineage. The predicted molecular weight of CGM1a is ≈30 kDa. We observed specific binding of OpaI+ E. coli to a 30-kDa band of polymorphonuclear leukocyte lysates. To prove the hypothesis that the 30-kDa CGM1a antigen from neutrophils was the receptor of Opa+ bacteria, we showed that a HeLa cell line expressing human CGM1a antigen (HeLa-CGM1a) bound Opa+ E. coli and subsequently engulfed the bacteria. Monoclonal antibodies (COL-1) against CGM1 blocked the interaction between Opa+ E. coli and HeLa-CGM1a. These results demonstrate that HeLa cells expressing the CGM1a antigen bind and internalize OpaI+ bacteria.