266 results for CRITICALITY


Relevance: 10.00%

Abstract:

Currently, more than half of Electronic Health Record (EHR) projects fail. Most of these failures are due not to flawed technology but to the lack of systematic consideration of human issues. Among the barriers to EHR adoption, function mismatching among users, activities, and systems is a major area that has not been systematically addressed from a human-centered perspective. A theoretical framework called the Functional Framework was developed for identifying and reducing functional discrepancies among users, activities, and systems. The Functional Framework is composed of three models: the User Model, the Designer Model, and the Activity Model. The User Model was developed by conducting a survey (N = 32) that identified the functions needed and desired from the user's perspective. The Designer Model was developed by conducting a systematic review of an Electronic Dental Record (EDR) and its functions. The Activity Model was developed using an ethnographic method called shadowing, in which EDR users (5 dentists, 5 dental assistants, 5 administrative personnel) were followed quietly and observed in their activities. These three models were combined to form a unified model. From the unified model the work domain ontology was developed by asking users, in a survey, to rate the functions in the unified model (190 in total) along the dimensions of frequency and criticality. The functional discrepancies, as indicated by the regions of the Venn diagram formed by the three models, were consistent with the survey results, especially with user satisfaction. The survey for the Functional Framework also indicated the preference of one system over the other (R = 0.895). The results of this project showed that the Functional Framework provides a systematic method for identifying, evaluating, and reducing functional discrepancies among users, systems, and activities. Limitations and the generalizability of the Functional Framework are discussed.
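The discrepancy regions the framework reasons about can be pictured as ordinary set operations over the three models' function inventories. A minimal Python sketch of that Venn-region bookkeeping (the function names and sets below are hypothetical illustrations, not data from the thesis):

```python
# Hypothetical function inventories for the three models.
user_model = {"enter_chart", "view_xray", "bill_patient", "recall_reminder"}
designer_model = {"enter_chart", "view_xray", "audit_log"}
activity_model = {"enter_chart", "bill_patient", "paper_notes"}

# Venn regions that signal functional discrepancies.
needed_not_implemented = user_model - designer_model
implemented_not_needed = designer_model - user_model - activity_model
performed_outside_system = activity_model - designer_model
well_matched = user_model & designer_model & activity_model

print("needed but not implemented:", needed_not_implemented)
print("implemented but never needed or used:", implemented_not_needed)
print("performed outside the system:", performed_outside_system)
print("well matched:", well_matched)
```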

Relevance: 10.00%

Abstract:

This work is devoted to the study of the macroscopic structures known in the literature as filaments or blobs, which have been observed universally at the edge of every kind of magnetic confinement fusion device. These filaments, convective cells elongated along the magnetic field lines, arise from the highly turbulent plasma present in such machines and seem to dominate the radial transport of particles and energy in the region known as the Scrape-off Layer, in which the field lines become open and the plasma is directed towards the solid wall of the vacuum vessel. Although the behaviour and scaling laws of these structures are relatively well known, there is still no generally accepted theory of the physical mechanism responsible for their formation, which remains one of the main open questions in the edge transport theory of fusion plasmas and a matter of great practical importance for the development of the next generation of fusion reactors (including devices such as ITER and DEMO), since the confinement efficiency and the amount of energy deposited on the wall depend directly on the characteristics of edge transport. The work has been carried out from a mainly experimental perspective, including the observation and analysis of this kind of structures in the heliotron-type stellarator LHD (a large device capable of generating plasmas with conditions close to those required in a fusion reactor) and in the heliac-type stellarator TJ-II (a medium-sized device that produces relatively colder plasmas but offers better accessibility and diagnostic availability). In particular, in LHD the generation of filaments during high-β discharges (with a high ratio of kinetic to magnetic pressure) was observed by means of an ultrafast visible camera; their behaviour was characterized, and the possible role of Self-Organized Criticality in the formation of these structures was investigated through statistical analysis and comparison with theoretical models. In TJ-II, a probe head capable of simultaneously measuring the electrostatic and electromagnetic fluctuations of the plasma was designed and built. Thanks to this new diagnostic, experiments could be carried out to determine the presence of parallel current through the filaments (a parameter of great importance for their modelling) and to relate the electromagnetic (EM) and electrostatic (ES) fluctuations for the first time in a stellarator. Likewise, also for the first time in this kind of device, simultaneous measurements of the viscous and magnetic momentum transport tensors (Reynolds and Maxwell) were performed.
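Self-Organized Criticality is commonly probed through the statistics of intermittent transport events, for example by checking whether the waiting times between bursts follow a heavy tail rather than the exponential expected of uncorrelated events. A generic Python sketch of such a test (not the thesis' actual analysis code; the synthetic signal stands in for, say, an ion saturation current trace from a probe):

```python
import numpy as np

def waiting_time_histogram(signal, threshold, dt=1.0):
    """Detect bursts above `threshold` and return a log-spaced
    histogram of the waiting times between consecutive bursts."""
    above = signal > threshold
    # rising edges mark the start of each burst
    starts = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    waits = np.diff(starts) * dt
    bins = np.logspace(np.log10(waits.min()), np.log10(waits.max()), 20)
    counts, edges = np.histogram(waits, bins=bins, density=True)
    return counts, edges

# Synthetic stand-in signal; a real analysis would use measured data.
rng = np.random.default_rng(0)
sig = np.abs(rng.standard_normal(200_000))
counts, edges = waiting_time_histogram(sig, threshold=2.5)

# Slope of the log-log histogram: a straight heavy tail hints at SOC.
centers = np.sqrt(edges[:-1] * edges[1:])
mask = counts > 0
slope = np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)[0]
print(f"waiting-time tail exponent ~ {slope:.2f}")
```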

Relevance: 10.00%

Abstract:

Burn-up credit analyses are based on depletion calculations, which provide an accurate prediction of the spent fuel isotopic contents, followed by criticality calculations to assess keff.
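For reference, the quantity assessed is the effective multiplication factor, stated in the usual way:

```latex
k_{\mathrm{eff}}
  \;=\;
  \frac{\text{neutron production by fission in one generation}}
       {\text{neutron losses by absorption and leakage in the previous generation}},
\qquad
k_{\mathrm{eff}} < 1 \;\text{(subcritical)}.
```

Burn-up credit consists in demonstrating this subcriticality while crediting the reactivity loss of the irradiated fuel, instead of conservatively assuming fresh fuel.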

Relevance: 10.00%

Abstract:

Determining the spent nuclear fuel isotopic content as accurately as possible is gaining importance due to its safety and economic implications. Since higher burn-ups are nowadays achievable through increased initial enrichments, more efficient burn-up strategies within the reactor cores and the extension of irradiation periods, establishing and improving computational methodologies is mandatory in order to carry out reliable criticality and isotopic prediction calculations. Several codes (WIMSD5, SERPENT 1.1.7, SCALE 6.0, MONTEBURNS 2.0 and MCNP-ACAB) and methodologies are tested here and compared against consolidated benchmarks (the OECD/NEA light-water-moderated pin cell) with the purpose of validating them and reviewing the state of the isotopic prediction capabilities. These preliminary comparisons suggest what can generally be expected of these codes when applied to real problems. In the present paper, SCALE 6.0 and MONTEBURNS 2.0 are used to model the reported geometries, material compositions and burn-up history of cycles 7-11 of the Spanish Vandellós II reactor and to reproduce the measured isotopic compositions after irradiation and decay times. We analyse comparisons between the measurements and each code's results for several levels of geometrical modelling detail, using different libraries and cross-section treatment methodologies. The power and flux normalization method implemented in MONTEBURNS 2.0 is discussed, and a new normalization strategy is developed to deal with the selected problems; further options are included to reproduce the temperature distributions of the materials within the fuel assemblies, and a new code is introduced to automate series of simulations and manage material information between them. In order to have a realistic confidence level in the prediction of the spent fuel isotopic content, we have estimated uncertainties using our MCNP-ACAB system. This depletion code, which combines the neutron transport code MCNP and the inventory code ACAB, propagates the uncertainties in the nuclide inventory, assessing the potential impact of uncertainties in the basic nuclear data: cross sections, decay data and fission yields.
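The normalization issue can be made concrete as follows (the notation is assumed here, not taken from the paper): a Monte Carlo code returns the flux per source neutron, so each depletion step must rescale it to match the known step power,

```latex
\phi \;=\; \frac{P}{\,V \displaystyle\sum_{i} N_i \,\sigma_{f,i}\, \kappa_i\,},
```

where P is the power during the step, V the fuel volume, N_i the nuclide number densities, σ_f,i their one-group fission cross sections and κ_i the recoverable energy per fission. Which nuclides, cells and energy-release values enter this sum is precisely what distinguishes one normalization strategy from another.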

Relevance: 10.00%

Abstract:

The accurate prediction of the spent nuclear fuel content is essential for its safe and optimized transportation, storage and management. This isotopic evolution can be predicted using powerful codes and methodologies, throughout irradiation as well as cooling time periods. However, in order to have a realistic confidence level in the prediction of the spent fuel isotopic content, it is desirable to quantify how uncertainties in the underlying data and models propagate into the isotopic prediction calculations.
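One standard way to quantify such propagation, and the idea behind sampling-based systems of the MCNP-ACAB kind, is to sample the uncertain inputs and repeat the depletion calculation. A deliberately minimal Python sketch for a single destruction channel (all numbers are invented for illustration; a real analysis samples full covariance data across the whole inventory):

```python
import numpy as np

# Monte Carlo propagation for one depletion channel:
#   dN/dt = -sigma * phi * N  ->  N(t) = N0 * exp(-sigma * phi * t)
rng = np.random.default_rng(42)

N0 = 1.0e24            # initial nuclide density [1/cm^3] (made up)
phi = 3.0e14           # one-group flux [n/cm^2/s] (made up)
t = 3.15e7             # one year of irradiation [s]
sigma_mean = 2.0e-24   # one-group cross section [cm^2] (made up)
sigma_rel_unc = 0.05   # assumed 5 % relative uncertainty

samples = rng.normal(sigma_mean, sigma_rel_unc * sigma_mean, size=10_000)
N_end = N0 * np.exp(-samples * phi * t)

print(f"mean inventory: {N_end.mean():.3e}")
print(f"relative uncertainty: {N_end.std() / N_end.mean():.2%}")
```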

Relevance: 10.00%

Abstract:

Isotopic content assessment is of paramount importance for safety and storage reasons. In recent years, a great variety of codes have been developed to perform transport and decay calculations, but only those that couple both in an iterative manner achieve an accurate prediction of the final isotopic content of irradiated fuels. Needless to say, all of them are expected to pass the test of comparing their predictions against the corresponding experimental measurements.
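The quantity these coupled codes integrate between transport updates is the depletion (Bateman) system; in one-group notation, assumed here for compactness:

```latex
\frac{dN_i}{dt}
  \;=\;
  \sum_{j \neq i} \bigl( \lambda_{j\to i} + \sigma_{j\to i}\,\phi \bigr) N_j
  \;-\;
  \bigl( \lambda_i + \sigma_{a,i}\,\phi \bigr) N_i ,
```

where N_i are the nuclide densities, λ the decay constants, σ the one-group cross sections and φ the flux. The transport step updates φ and σ, the decay step integrates the system, and iterating the two captures their mutual feedback.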

Relevance: 10.00%

Abstract:

As defined in the ATM 2000+ Strategy (Eurocontrol 2001), the mission of the Air Traffic Management (ATM) System is: “For all the phases of a flight, the ATM system should facilitate a safe, efficient, and expeditious traffic flow, through the provision of adaptable ATM services that can be dimensioned in relation to the requirements of all the users and areas of the European air space. The ATM services should comply with the demand, be compatible, operate under uniform principles, respect the environment and satisfy the national security requirements.” The objective of this paper is to present a methodology designed to evaluate the status of the ATM system in terms of the relationship between offered capacity and traffic demand, identifying weak areas and proposing solutions. The first part of the methodology concerns the characterization and evaluation of the current system, while the second part proposes an approach to analyse its possible development limit. As part of the work, general criteria are established to define the framework in which the analysis and diagnostic methodology is placed: the use of Air Traffic Control (ATC) sectors as the unit of analysis, the presence of network effects, the tactical focus, the relative character of the analysis, objectivity, and a high-level assessment that allows assumptions on the human and Communications, Navigation and Surveillance (CNS) elements, considered the typical high-density air traffic resources. The steps of the methodology start with the definition of indicators and metrics, such as the nominal criticality or the nominal efficiency of a sector; scenario characterization, where the necessary data are collected; network effects analysis, to study the relations among the constitutive elements of the ATC system; diagnosis by means of the “System Status Diagram”; an analytical study of the ATC system development limit; and finally, the formulation of conclusions and proposals for improvement. This methodology was employed by Aena (the Spanish airports manager and air navigation service provider) and INECO (a Spanish transport engineering company) in the analysis of the Spanish ATM system within the Spanish airspace capacity sustainability programme, although it could be applied elsewhere.
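The paper's indicators (nominal criticality, nominal efficiency) are not defined in this abstract, so the following Python sketch is only an illustration of how a capacity-versus-demand screening per ATC sector could be organized; the sector names, figures and the alert threshold are all invented assumptions, not the paper's definitions:

```python
# Illustrative capacity/demand screening per ATC sector.
# All values and the 0.9 alert threshold are invented assumptions.
sectors = {
    "SECTOR_A": {"declared_capacity": 46, "peak_demand": 51},
    "SECTOR_B": {"declared_capacity": 40, "peak_demand": 33},
    "SECTOR_C": {"declared_capacity": 38, "peak_demand": 37},
}

for name, s in sectors.items():
    load = s["peak_demand"] / s["declared_capacity"]
    status = ("overloaded" if load > 1.0
              else "near limit" if load > 0.9
              else "ok")
    print(f"{name}: demand/capacity = {load:.2f} -> {status}")
```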

Relevance: 10.00%

Abstract:

There are a number of research and development activities exploring Time and Space Partitioning (TSP) to implement safe and secure flight software. This approach makes it possible to execute different real-time applications, with different levels of criticality, on the same computer board. In order to do so, flight applications must be isolated from each other in the temporal and spatial domains. This paper presents the first results of a partitioning platform based on the Open Ravenscar Kernel (ORK+) and the XtratuM hypervisor. ORK+ is a small, reliable real-time kernel supporting the Ada Ravenscar computational model that is central to the ASSERT development process. XtratuM supports multiple virtual machines, i.e. partitions, on a single computer and is being used in the Integrated Modular Avionics for Space study. ORK+ executes in an XtratuM partition, enabling Ada applications to share the computer board with other applications.
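Temporal isolation of this kind is typically organized as a fixed cyclic schedule: a major frame divided into slots, each owned by exactly one partition. The sketch below illustrates that scheduling idea in Python as a conceptual model only; ORK+ and XtratuM realise it in Ada and C at the kernel/hypervisor level, and the partition names and slot lengths here are invented:

```python
import itertools

# Conceptual model of a fixed major frame divided into partition slots.
MAJOR_FRAME_MS = 100
schedule = [
    ("flight_control", 40),   # high-criticality partition
    ("telemetry",      35),
    ("payload",        25),   # low-criticality partition
]
assert sum(ms for _, ms in schedule) == MAJOR_FRAME_MS

t = 0
for partition, ms in itertools.islice(itertools.cycle(schedule), 6):
    print(f"[t = {t:4d} ms] {partition} runs for {ms} ms, temporally isolated")
    t += ms
```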

Relevance: 10.00%

Abstract:

Fuel cycles are designed with the aim of obtaining the highest possible amount of energy. Since ever higher burn-up values are being reached, it is necessary to improve disposal designs, traditionally based on the conservative assumption that they contain fresh fuel. The criticality calculations involved must take burn-up into account, making the most of the experimental and computational capabilities developed, respectively, to measure and predict the isotopic content of the spent nuclear fuel. These high burn-up scenarios encourage a review of the computational tools in order to find possible weaknesses in the nuclear data libraries, in the methodologies applied, and in their range of applicability. Experimental measurements of spent nuclear fuel provide the perfect framework for benchmarking the best-known and most established codes in both industry and academic research. For the present paper, SCALE 6.0/TRITON and MONTEBURNS 2.0 have been chosen to follow the isotopic content of four samples irradiated in the Spanish Vandellós-II pressurized water reactor up to burn-up values ranging from 40 GWd/MTU to 75 GWd/MTU. By comparison with the experimental data reported for these samples, we can probe the applicability of these codes to high burn-up problems. We have developed new computational tools within MONTEBURNS 2.0 that make it possible to handle an irradiation history including geometrical and positional changes of the samples within the reactor core. This paper describes the irradiation scenario against which the mentioned codes and our capabilities are to be benchmarked.
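Benchmarks of this kind are usually summarized through C/E (calculated-over-experimental) ratios per nuclide. A minimal Python sketch of that bookkeeping (the values are invented placeholders, not results for these samples):

```python
# C/E reporting convention for destructive-assay benchmarks.
# All values below are invented placeholders.
measured = {"U-235": 8.1e-3, "Pu-239": 5.9e-3, "Nd-148": 1.6e-3}
calculated = {"U-235": 7.9e-3, "Pu-239": 6.2e-3, "Nd-148": 1.6e-3}

for nuclide, e in measured.items():
    c = calculated[nuclide]
    print(f"{nuclide}: C/E = {c / e:.3f} ({(c / e - 1):+.1%})")
```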

Relevance: 10.00%

Abstract:

Partitioning is a common approach to developing mixed-criticality systems, where partitions are isolated from each other both in the temporal and the spatial domain in order to prevent low-criticality subsystems from compromising other subsystems with a high level of criticality in case of misbehaviour. The advent of many-core processors, on the other hand, opens the way to highly parallel systems in which all partitions can be allocated to dedicated processor cores. This trend will simplify processor scheduling, although other issues, such as mutual interference in the temporal domain, may arise as a consequence of memory and device sharing. The paper describes an architecture for multi-core partitioned systems including critical subsystems built with the Ada Ravenscar profile. Some implementation issues are discussed, and experience in implementing the ORK kernel on the XtratuM partitioning hypervisor is presented.
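To make the core-per-partition idea and its residual interference concrete, here is a conceptual Python sketch; the partition names and shared resources are invented, and this is not the paper's architecture:

```python
# Give each partition a dedicated core (simplifying CPU scheduling),
# then list partition pairs that can still interfere in the temporal
# domain through shared memory or devices. Purely illustrative.
partitions = ["guidance", "comms", "housekeeping", "experiment"]
allocation = {p: core for core, p in enumerate(partitions)}

shared_resources = {
    "sdram_bus": ["guidance", "experiment"],
    "io_link":   ["comms", "housekeeping"],
}

print("core allocation:", allocation)
for resource, users in shared_resources.items():
    if len(users) > 1:
        print(f"temporal interference still possible on {resource}: {users}")
```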

Relevance: 10.00%

Abstract:

Following the processing and validation of JEFF-3.1 performed in 2006 and presented at ND2007, and as a consequence of the latest update of this library (JEFF-3.1.2) in February 2012, a new processing and validation of the JEFF-3.1.2 cross-section library is presented in this paper. The processed library, in ACE format at ten different temperatures, was generated with the NJOY-99.364 nuclear data processing system. In addition, NJOY-99 inputs are provided to generate the PENDF, GENDF, MATXSR and BOXER formats. The library has undergone strict QA procedures, being compared with other available libraries (e.g. ENDF/B-VII.1) and with other processing codes, such as the PREPRO-2000 codes. A set of 119 criticality benchmark experiments taken from ICSBEP-2010 has been used for validation purposes.
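Validation against such a benchmark suite is typically reported as the bias between calculated and benchmark keff, expressed in pcm. A minimal Python sketch of that summary over a few cases (the numbers are invented placeholders, not the paper's results):

```python
import numpy as np

# Difference between calculated and benchmark keff, in pcm
# (1 pcm = 1e-5 in keff). Values are invented placeholders.
k_calc = np.array([0.99812, 1.00143, 0.99967])
k_bench = np.array([1.00000, 1.00060, 1.00000])

diff_pcm = (k_calc - k_bench) * 1e5
print(f"mean bias: {diff_pcm.mean():+.0f} pcm, "
      f"spread: {diff_pcm.std(ddof=1):.0f} pcm")
```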

Relevance: 10.00%

Abstract:

Location-based services (LBS) rely heavily on the location of the mobile user in order to tailor the service to that location. This location is calculated differently depending on the technology available in the mobile device used. No matter which technology is used, the location will never be calculated with complete accuracy; there will always be a margin of error in the calculation, referred to as positional accuracy. This research has reviewed the eight most common positioning technologies available in current major smartphones and assessed their positional accuracy with respect to its use by LBS applications. Given the vast number of these applications, this research classified them into thirteen categories; these categories were in turn classified by their level of criticality (low, medium or high) and by whether they function indoors or outdoors. The accuracies of the different positioning technologies were compared against these two criteria. Low-criticality outdoor and high-criticality indoor applications were found to be technologically covered; high- and medium-criticality outdoor ones were not fully resolved. Finally, three potential solutions are suggested for implementation in future smartphones to close this technological gap: Real-Time Kinematic Global Positioning System (RTK GPS), terrestrial transmitters, and a combination of Wireless Sensor Networks and Radio Frequency Identification (WSN-RFID).
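The cross-classification of technologies against category requirements can be pictured as a simple accuracy-matching table. A Python sketch of the idea (the accuracy figures are rough, commonly cited orders of magnitude, and the three categories are invented examples rather than the paper's thirteen):

```python
# Rough, commonly cited accuracy orders of magnitude in metres;
# not the paper's measured values.
tech_accuracy_m = {"GPS": 5, "A-GPS": 8, "WiFi": 25, "Cell-ID": 500}

# Invented example categories with assumed accuracy requirements.
app_required_m = {
    "turn-by-turn navigation (high criticality, outdoor)": 5,
    "friend finder (medium criticality, outdoor)": 50,
    "local weather (low criticality, outdoor)": 1000,
}

for app, required in app_required_m.items():
    covered = [t for t, acc in tech_accuracy_m.items() if acc <= required]
    print(f"{app}: covered by {covered or 'no reviewed technology'}")
```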

Relevance: 10.00%

Abstract:

Numerical simulations of axisymmetric reactive jets with one-step Arrhenius kinetics are used to investigate the problem of deflagration initiation in a premixed fuel-air mixture by the sudden discharge of a hot jet of its adiabatic reaction products. For the moderately large values of the jet Reynolds number considered in the computations, chemical reaction is seen to occur initially in the thin mixing layer that separates the hot products from the cold reactants. This mixing layer is wrapped around by the starting vortex, thereby enhancing mixing at the jet head, and is followed by an annular mixing layer that trails behind, connecting the leading vortex with the orifice rim. A successful deflagration is seen to develop for values of the orifice radius larger than a critical value a_c of the order of the flame thickness of the planar deflagration, δ_L. Introduction of appropriate scales provides the dimensionless formulation of the problem, with flame initiation characterized in terms of a critical Damköhler number Δ_c = (a_c/δ_L)^2, whose parametric dependence is investigated. The numerical computations reveal that, while the jet Reynolds number exerts a limited influence on the criticality conditions, the effect of the reactant diffusivity on ignition is much more pronounced, with the value of Δ_c increasing significantly with increasing Lewis numbers. The reactant diffusivity also affects the way ignition takes place, so that for reactants with near-unity or larger Lewis numbers the flame develops as a result of ignition in the annular mixing layer surrounding the developing jet stem, whereas for highly diffusive reactants with Lewis numbers sufficiently smaller than unity combustion is initiated in the mixed core formed around the starting vortex. The analysis provides increased understanding of deflagration initiation processes, including the effects of differential diffusion, and points to the need for further investigations incorporating detailed chemistry models for specific fuel-air mixtures.
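Restating the ignition criterion from the abstract in display form, with a the orifice radius and δ_L the planar-deflagration flame thickness:

```latex
\Delta \;=\; \left(\frac{a}{\delta_L}\right)^{\!2},
\qquad
\text{successful deflagration for } \Delta \;\ge\; \Delta_c \;=\; \left(\frac{a_c}{\delta_L}\right)^{\!2},
```

with Δ_c depending strongly on the reactant Lewis number and only weakly on the jet Reynolds number.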

Relevance: 10.00%

Abstract:

Long-term sustainable nuclear energy scenarios envisage a fleet of Liquid Metal Fast Reactors (LMFR) for Pu recycling and minor actinide (MA) transmutation, possibly combined with some accelerator driven systems (ADS) dedicated exclusively to MA elimination. The design and licensing of these innovative reactor concepts require accurate and practical computational tools, implementing the knowledge obtained in experimental research on new reactor configurations, materials and associated systems. Although a number of fast reactors have already been built and operated worldwide, the operational experience is still limited and not all transients are fully understood. The safety analysis approach for LMFR is therefore based mainly on deterministic methods, unlike the modern approach for Light Water Reactors (LWR), which also benefits from probabilistic methods. The approach usually adopted in LMFR safety assessments is to employ a variety of codes, built on different theories, to analyse transients in search of a comprehensive solution that includes uncertainties. In this frame, new best-estimate simulation codes, free of conservative approximations, are of prime importance for analysing steady states and transients in fast reactors.

This thesis is focused on the development of a coupled code system for best-estimate analysis of fast critical reactors using the Monte Carlo method. Currently, given the increase in computational resources, Monte Carlo neutron transport codes can be used in practice for detailed full-core calculations, even for highly heterogeneous cores. Furthermore, Monte Carlo codes are usually taken as the reference for deterministic multigroup diffusion codes in fast reactor applications, because they employ point-wise cross sections in an exact geometry model and intrinsically account for the angular dependence of the flux. The coupling methodology presented here uses MCNP to calculate the power deposition within the reactor, and the subchannel thermal-hydraulics code COBRA-IV to obtain the temperature and density distributions in the system. COBRA-IV is suitable for fast reactor applications because it has been validated against experimental results in sodium rod bundles, and the appropriate correlations for liquid metals have been added to it.

In a first phase, both codes were coupled at steady state using an iterative method with external file exchange. The main issue in steady-state Monte Carlo/thermal-hydraulics coupling is the handling of cross sections to account for Doppler broadening when the fuel temperature rises. Among all available options, the pseudo-material approach has been chosen in this thesis, and it has been shown to provide acceptable results in fast reactor applications. Furthermore, the geometrical changes caused by the large temperature gradients in the core are important for the neutronics of fast reactors as a consequence of the long neutron mean free path in these systems. An additional module has therefore been developed that simulates the reactor geometry in the hot state and allows the reactivity effect of core expansion in a transient to be estimated. The module automatically calculates the fuel length, cladding radius, fuel assembly pitch and diagrid radius as functions of temperature. This effect is crucial in some unprotected transients. Also related to geometrical changes, an automatic control rod movement feature has been implemented in order to reach a just-critical reactor or to calculate control rod worth.

A step forward in the coupling platform is the dynamic simulation. Since MCNP performs only steady-state calculations for critical or supercritical systems, the most straightforward option that does not require modifying the MCNP source code is to use the flux factorization approach, solving separately for the flux shape and its amplitude. Two options have been studied in depth to tackle time-dependent neutronic simulations with a Monte Carlo code: the adiabatic and the quasistatic methods. The adiabatic method uses a staggered time coupling scheme for the time advance of the neutronics and thermal-hydraulics calculations: MCNP computes the fundamental mode of the neutron flux distribution and the reactivity at the end of each time step, while COBRA-IV computes the thermal properties at the midpoint of each time step. The evolution of the flux amplitude is obtained by solving the point kinetics equations. This method calculates the static reactivity at each time step, which in general differs from the dynamic reactivity that would be obtained with the exact, time-dependent flux distribution. Nevertheless, for situations not too far from criticality both reactivities are similar and the method leads to acceptable practical results. Along this line, an improved method was then developed in an attempt to take into account the effect of the delayed neutron source on the evolution of the flux shape during the transient. The scheme performs one quasistationary calculation per time step with MCNP. This quasistationary simulation is based on the constant delayed neutron source approach, assigning a specific weight or importance to each computational cycle of the MCNP criticality calculation in the final flux estimate. Both the adiabatic and the quasistatic methods have been verified against the diffusion code COBAYA3 on a common and sufficiently significant kinetic exercise. Finally, in order to demonstrate the practical applicability of the code, a transient has been simulated with the adiabatic method in the MYRRHA/FASTEF critical reactor concept, a 100 MWth lead-bismuth-cooled design currently in its design phase.
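The amplitude equations solved at each step of the adiabatic scheme are the standard point kinetics system (standard notation, with the reactivity ρ supplied by MCNP at each time step):

```latex
\frac{dn}{dt} \;=\; \frac{\rho(t)-\beta}{\Lambda}\,n(t) \;+\; \sum_{k=1}^{6} \lambda_k C_k(t),
\qquad
\frac{dC_k}{dt} \;=\; \frac{\beta_k}{\Lambda}\,n(t) \;-\; \lambda_k C_k(t),
```

where n is the flux amplitude, C_k the delayed neutron precursor concentrations, β_k and λ_k the delayed fractions and decay constants (with β = Σ_k β_k), and Λ the prompt neutron generation time.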

Relevance: 10.00%

Abstract:

The growing complexity, heterogeneity and dynamism inherent in telecommunications networks, distributed systems and the emerging advanced information and communication services, as well as their increased criticality and strategic importance, call for the adoption of increasingly sophisticated technologies for their management, coordination and integration by network operators, service providers and end-user companies, in order to assure adequate levels of functionality, performance and reliability. The management strategies adopted traditionally follow models that are too static and centralised, have a high supervision component and are difficult to scale. The pressing need to make management more flexible and, at the same time, more scalable and robust has in recent years generated considerable interest in developing new paradigms based on hierarchical and distributed models, as a natural evolution from the first weakly distributed hierarchical models that succeeded the centralised paradigm. Thus, new models based on management by delegation, the mobile code paradigm, distributed object technologies and web services came into being. These alternatives have proved enormously robust, flexible and scalable compared with the traditional management strategies, but many problems still remain unsolved. Current research lines start from the fact that many problems of robustness, scalability and flexibility remain unresolved by the hierarchical-distributed paradigm, and advocate migration towards a strongly distributed cooperative paradigm. These lines of research were spawned by Distributed Artificial Intelligence (DAI) and, specifically, by the autonomous agent paradigm and Multi-Agent Systems (MAS). They all revolve around a set of objectives that can be summarised as: achieving greater autonomy in management functionality and a greater self-configuration capability, so as to solve the scalability problems and the need for supervision present in current systems; evolving towards strongly distributed, goal-driven cooperative control techniques; and semantically enriching the information models. More and more researchers are starting to use agents for network and distributed systems management. However, the boundaries established in their work between mobile agents (which follow the mobile code paradigm) and autonomous agents (which really follow the cooperative paradigm) are fuzzy. Many of these approaches focus on the use of mobile agents, which, as with the mobile code techniques mentioned above, allows them to inject more dynamism into the traditional concept of management by delegation. They are thereby able to make management more flexible, distribute the management logic close to the data and distribute control; however, they remain within the hierarchical-distributed paradigm. While a management architecture faithful to the strongly distributed cooperative paradigm has yet to be defined, these lines of research have revealed serious adequacy problems in the information, communication and organisational models of existing management architectures.

In this context, this dissertation presents an architectural model for the holonic management of distributed systems and services through societies of autonomous agents. Its main objectives are to raise the degree of automation associated with management tasks, to increase the scalability of management solutions, to support delegation both by domains and by macro-tasks, and to achieve a high degree of interoperability in open environments. Based on these objectives, a formal semantic information model has been developed, grounded in description logic, which increases management automation through the use of rational autonomous agents capable of reasoning about, inferring and dynamically integrating knowledge and services conceptualised by means of the CIM model and formalised at the semantic level through description logic. The information model also includes a mapping, at the CIM meta-model level, to the OWL ontology specification language, which represents a significant advance in the field of XML-based representation and exchange of models and meta-information. At the interaction level, the model contributes a formal specification language (ACSL) for conversations between agents, based on speech act theory, and provides an operational semantics for this language that eases the task of verifying formal properties associated with the interaction protocol. A role-oriented holonic organisational model has also been developed, whose main features are aligned with those demanded by emerging distributed services, including the absence of centralised control, dynamic restructuring capabilities, cooperation capabilities, and facilities for adaptation to different organisational cultures. The model includes a normative submodel suited to the autonomous character of management holons and based on the deontic and action modal logics.
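As a purely conceptual illustration of delegation by domain in a holonic hierarchy (every name and structure below is an invented assumption, not the thesis' model or API):

```python
class Holon:
    """A holon is both a whole (it coordinates member holons) and a
    part (it can be delegated to by a higher-level holon)."""
    def __init__(self, name, domain):
        self.name, self.domain, self.members = name, domain, []

    def add(self, holon):
        self.members.append(holon)
        return holon

    def delegate(self, task, domain):
        # A leaf holon responsible for the domain executes the task.
        if self.domain == domain and not self.members:
            return f"{self.name} executes '{task}'"
        # Otherwise recurse into members; the first match wins,
        # with no central controller involved.
        for member in self.members:
            result = member.delegate(task, domain)
            if result is not None:
                return result
        return None

root = Holon("network_manager", "network")
core = root.add(Holon("core_manager", "core"))
core.add(Holon("router_agent", "routing"))
root.add(Holon("access_manager", "access"))

print(root.delegate("reconfigure links", "routing"))
# -> router_agent executes 'reconfigure links'
```

The point the sketch tries to capture is that each holon is simultaneously a whole and a part, so delegation recurses down the hierarchy without any centralised controller.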