25 results for Benchmarks

at Universidad Politécnica de Madrid


Relevance:

10.00%

Abstract:

Determining spent nuclear fuel isotopic content as accurately as possible is gaining importance due to its safety and economic implications. Since higher burnups are now achievable through increased initial enrichments, more efficient burnup strategies within the reactor cores and extended irradiation periods, establishing and improving computational methodologies is mandatory in order to carry out reliable criticality and isotopic prediction calculations. Several codes (WIMSD5, SERPENT 1.1.7, SCALE 6.0, MONTEBURNS 2.0 and MCNP-ACAB) and methodologies are tested here and compared to consolidated benchmarks (the OECD/NEA pin cell moderated with light water) with the purpose of validating them and reviewing the state of isotopic prediction capabilities. These preliminary comparisons suggest what can generally be expected of these codes when applied to real problems. In the present paper, SCALE 6.0 and MONTEBURNS 2.0 are used to model the same reported geometries, material compositions and burnup history of cycles 7-11 of the Spanish Vandellós II reactor, and to reproduce measured isotopic compositions after irradiation and decay times. We analyze comparisons between the measurements and each code's results for several levels of geometric modelling detail, using different libraries and cross-section treatment methodologies. The power and flux normalization method implemented in MONTEBURNS 2.0 is discussed, and a new normalization strategy is developed to deal with the selected and similar problems; further options are included to reproduce the temperature distributions of the materials within the fuel assemblies, and a new code is introduced to automate series of simulations and manage material information between them. In order to have a realistic confidence level in the prediction of spent fuel isotopic content, we have estimated uncertainties using our MCNP-ACAB system. This depletion code, which combines the neutron transport code MCNP and the inventory code ACAB, propagates the uncertainties in the nuclide inventory, assessing the potential impact of uncertainties in the basic nuclear data: cross-sections, decay data and fission yields.
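
As a point of reference for the normalization discussion, a minimal sketch, ours and with invented numbers rather than the MONTEBURNS 2.0 implementation, of how a Monte Carlo flux tally reported per source particle is typically rescaled so that the recoverable fission energy matches a target power:

```python
# Hypothetical illustration (not MONTEBURNS 2.0): rescale a per-source-particle
# flux tally so that the recoverable fission energy release equals the target
# power of the modelled region.
MEV_TO_J = 1.602176634e-13  # joules per MeV
KAPPA = 200.0               # assumed recoverable energy per fission [MeV]

def normalize_flux(flux_per_source, fissions_per_source, power_w):
    """Scale per-source-particle fluxes so that fission power equals power_w.

    flux_per_source     -- regional flux tallies per source particle
    fissions_per_source -- total fissions per source particle
    power_w             -- target power [W]
    """
    # Source rate [particles/s] that reproduces the requested power:
    source_rate = power_w / (fissions_per_source * KAPPA * MEV_TO_J)
    return [phi * source_rate for phi in flux_per_source]

# Invented tallies from a transport run, normalized to 17 kW of rod power:
print(normalize_flux([4.1e-5, 3.7e-5], fissions_per_source=8.3e-3,
                     power_w=1.7e4))
```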

Relevance:

10.00%

Abstract:

Owing to the complexity of Ambient Assisted Living (AAL) systems and platforms, the evaluation of AAL solutions is a complex task that will challenge researchers for years to come. However, the analysis and comparison of proposed solutions are paramount to enable us to assess research results in this area. We have thus organized an international contest called EvAAL: Evaluating AAL Systems through Competitive Benchmarking. Its aims are to raise interest within the research and developer communities in the multidisciplinary research fields enabling AAL, and to create benchmarks for the evaluation and comparison of AAL systems.

Relevance:

10.00%

Abstract:

The liberalization of electricity markets more than ten years ago in the vast majority of developed countries has introduced the need to model and forecast electricity prices and volatilities, both in the short and the long term. Thus, there is a need for methodology able to deal with the most important features of electricity price series, which are well known for presenting not only structure in the conditional mean but also time-varying conditional variances. In this work we propose a new model that makes it possible to extract conditionally heteroskedastic common factors from the vector of electricity prices. These common factors are estimated jointly with their relationship to the original vector of series and with the dynamics affecting both their conditional mean and variance. The estimation of the model is carried out under the state-space formulation. The proposed model is applied to extract seasonal common dynamic factors as well as common volatility factors for electricity prices, and the estimation results are used to forecast electricity prices and their volatilities in the Spanish zone of the Iberian Market. Several simplified/alternative models are also considered as benchmarks to illustrate that the proposed approach is superior to all of them in terms of explanatory and predictive power.
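
For orientation, such a conditionally heteroskedastic factor model can be written, in a standard notation that is ours rather than necessarily the authors', as:

```latex
y_t = \Lambda f_t + \varepsilon_t, \qquad
f_t = \Phi f_{t-1} + \eta_t, \qquad
\eta_t \mid \mathcal{F}_{t-1} \sim N(0, \Sigma_t),
\qquad
\sigma_{it}^2 = \omega_i + \alpha_i \eta_{i,t-1}^2 + \beta_i \sigma_{i,t-1}^2
```

where y_t is the vector of electricity prices, \Lambda the loading matrix, and f_t the common factors whose conditional variances \sigma_{it}^2 (the diagonal of \Sigma_t) follow GARCH(1,1)-type recursions; the state-space form allows joint estimation of loadings, factor dynamics and variances.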

Relevance:

10.00%

Abstract:

The need to refine models for best-estimate calculations, based on good-quality experimental data, has been expressed in many recent meetings in the field of nuclear applications. The modeling needs arising in this respect should not be limited to the currently available macroscopic methods but should be extended to next-generation analysis techniques that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modeling was developed by the Nuclear Power Engineering Corporation (NUPEC), Japan. From 1987 to 1995, NUPEC performed steady-state and transient critical power and departure from nucleate boiling (DNB) test series based on equivalent full-size mock-ups. Considering the reliability not only of the measured data but also of other relevant parameters such as the system pressure, inlet sub-cooling and rod surface temperature, these test series supplied the first substantial database for the development of truly mechanistic and consistent models for boiling transition and critical heat flux. Over the last few years, the Pennsylvania State University (PSU), under the sponsorship of the U.S. Nuclear Regulatory Commission (NRC), has prepared, organized, conducted and summarized the OECD/NRC Full-size Fine-mesh Bundle Tests (BFBT) Benchmark. The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency/Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) organization, Japan. Consequently, JNES has made the Boiling Water Reactor (BWR) NUPEC database available for the purposes of the benchmark. Building on the success of the OECD/NRC BFBT benchmark, JNES decided to also release the data from the NUPEC Pressurized Water Reactor (PWR) subchannel and bundle tests for a follow-up international benchmark, the OECD/NRC PWR Subchannel and Bundle Tests (PSBT) benchmark. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known subchannel code COBRA-TF, namely CTF, to the critical power and DNB exercises of the OECD/NRC BFBT and PSBT benchmarks.
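
For orientation on the DNB exercises, the conventional figure of merit is the departure-from-nucleate-boiling ratio (DNBR): the predicted critical heat flux over the local heat flux, minimized over the axial nodes of a channel. The sketch below is our illustration with invented numbers, not CTF itself:

```python
# Illustrative only: the conventional DNBR figure of merit. CTF's actual
# closure models and subchannel solution are far more involved.
def min_dnbr(chf_pred, q_local):
    """chf_pred, q_local: per-axial-node heat fluxes [kW/m^2]."""
    return min(chf / q for chf, q in zip(chf_pred, q_local))

# DNB is approached as the minimum DNBR drops toward 1.0:
print(min_dnbr([5200.0, 4800.0, 4500.0], [2100.0, 3300.0, 2600.0]))  # ~1.45
```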

Relevance:

10.00%

Abstract:

The importance of safety in the application of nuclear technology permeates every task associated with the use of this energy source, from the design phase through operation and subsequent decommissioning or waste management. In all of these steps, computational simulation tools play an essential role as a guide for design, as support during operation, and for predicting the isotopic evolution of reactor materials. The steady improvements in computational resources since the mid-20th century, together with advances in calculation methods, make it possible to treat the complexity of these situations in ever greater detail, detail that in the past was simply discarded for lack of computing capacity or adequate tools. The present work focuses on the development of a neutronic calculation method for light water reactors based on corrected diffusion theory, with detail down to the fuel rod level, considering more energy groups than the traditional fast and thermal ones and modelling the three-dimensional geometry of the reactor core. The capability to simulate both steady states, with an optional criticality search, and the evolution of the neutron flux during transients has been included, together with an adaptive time-step algorithm to improve simulation performance. An optimization study of the calculation methods used to solve the diffusion equation has been carried out, covering both the source iteration loop and the linear system solvers employed in the inner iterations. Moreover, the memory and computing time required to solve full-core problems on a fine mesh make parallelization of the calculation necessary; a subdomain decomposition based on the alternating Schwarz method, accompanied by a nodal acceleration, has been applied. The diffusion approximation must be corrected if the results are to approach the accuracy obtained with the transport equation. The interface discontinuity factors used for this correction cannot, in practice, be computed and stored for every possible configuration of a fuel rod of given composition inside the reactor. For this reason, a neighbourhood-dependent parametrization of the discontinuity factor has been studied, which would allow this factor to be treated like another cross-section, parametrized as a function of significant values of the surroundings of the material rod. Coupling with thermal-hydraulic codes has also been considered, enabling multiphysics simulations and more realistic results. In view of the nuclear industry's growing demand for realistic results delivered together with their confidence margins, the capability to obtain result sensitivities through adjoint flux calculations has been developed, in order to subsequently propagate cross-section uncertainties to full-core calculations. All of this work has been integrated into the COBAYA3 code, part of the code platform developed in the European NURESIM project of the 6th Framework Programme.
The developments have been verified with respect to their capability to model the problem at hand, and the implementation in the code has been validated numerically against the OECD Nuclear Energy Agency benchmark on an accidental transient in a PWR with UO2/MOX fuel, as well as against other LWR benchmarks defined in the European NURESIM and NURISP projects.
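
As a minimal sketch of the source (power) iteration underlying such criticality calculations, assuming small dense matrices for the loss and fission-production operators (purely illustrative; COBAYA3 solves the fine-mesh multigroup problem with the accelerated methods described above):

```python
# Toy source iteration for the k-effective eigenvalue problem M*phi = (1/k)*F*phi.
import numpy as np

def power_iteration(M, F, tol=1e-8, max_it=500):
    """Dominant eigenvalue k-eff of M^-1 F by source iteration."""
    phi = np.ones(M.shape[0])
    k = 1.0
    for _ in range(max_it):
        phi_new = np.linalg.solve(M, F @ phi / k)   # inner solve: M*phi = S/k
        k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
        converged = abs(k_new - k) < tol
        k, phi = k_new, phi_new
        if converged:
            break
    return k, phi / np.linalg.norm(phi)

# Two-group toy example (all numbers invented):
M = np.array([[0.10, 0.00],      # group-1 removal
              [-0.02, 0.08]])    # down-scatter into group 2, group-2 removal
F = np.array([[0.005, 0.12],     # nu-Sigma_f contributions into group 1
              [0.000, 0.00]])
k_eff, flux = power_iteration(M, F)
print(round(k_eff, 5))           # ~0.35 for this toy data
```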

Relevance:

10.00%

Abstract:

Modeling the evolution of the state of program memory during program execution is critical to many parallelization techniques. Current memory analysis techniques either provide very accurate information but run prohibitively slowly, or produce very conservative results. An approach based on abstract interpretation is presented for analyzing programs at compile time, which can accurately determine many important program properties such as aliasing, logical data structures and shape. These properties are known to be critical for transforming a single-threaded program into a version that can be run on multiple execution units in parallel. The analysis is shown to be of polynomial complexity in the size of the memory heap. Experimental results for benchmarks in the Jolden suite are given. These results show that in practice the analysis method is efficient and is capable of accurately determining shape information in programs that create and manipulate complex data structures.
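
For flavor, a made-up miniature of the kind of abstract "shape" domain such an analysis manipulates; real shape domains additionally track sharing, reachability and structure invariants:

```python
# Hypothetical shape lattice: heap abstractions ordered in a small chain, with
# information from merging program paths combined by least upper bound (join).
LATTICE = ["bottom", "list", "tree", "dag", "cyclic"]   # a chain, for brevity

def join(s1, s2):
    """Least upper bound of two shape abstractions in the chain above."""
    return max(s1, s2, key=LATTICE.index)

# Merging a branch that built a list with one that built a tree:
print(join("list", "tree"))    # 'tree' (a list is a degenerate tree)
print(join("tree", "cyclic"))  # 'cyclic'
```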

Relevance:

10.00%

Abstract:

Several models for context-sensitive analysis of modular programs have been proposed, each with different characteristics and representing different trade-offs. The advantage of these context-sensitive analyses is that they provide information which is potentially more accurate than that provided by context-free analyses. Such information can then be applied to validating/debugging the program and/or to specializing the program in order to obtain important performance improvements. Some very preliminary experimental results have also been reported for some of these models, providing initial evidence of their potential. However, further experimentation, which is needed in order to understand the many issues left open and to show that the proposed models scale and are usable in the context of large, real-life modular programs, was left as future work. The aim of this paper is two-fold. On one hand, we provide an empirical comparison of the different models proposed in previous work, as well as experimental data on the different choices left open in those designs. On the other hand, we explore the scalability of these models by using larger modular programs as benchmarks. The results have been obtained from a realistic implementation of the models, integrated in a production-quality compiler (CiaoPP/Ciao). Our experimental results shed light on the practical implications of the different design choices and of the models themselves. We also show that context-sensitive analysis of modular programs is indeed feasible in practice, and that in certain critical cases it provides better performance results than those achievable by analyzing the whole program at once, especially in terms of memory consumption and when reanalyzing after making changes to a program, as is often the case during program development.

Relevance:

10.00%

Abstract:

In this paper we propose a complete scheme for automatic exploitation of independent and-parallelism in CLP programs. We first discuss the new problems that arise because of the different properties of the independence notions applicable to CLP. We then show how independence can be derived from a number of standard analysis domains for CLP. Finally, we perform a preliminary evaluation of the efficiency, accuracy, and effectiveness of the approach by implementing a parallelizing compiler for CLP based on the proposed ideas and applying it to a number of CLP benchmarks.
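
For intuition, the classical strict-independence test from logic programming, sketched below in a hypothetical miniature of ours, lets goals that share no variables run in parallel; the point of the paper is precisely that CLP requires refined independence notions beyond this one:

```python
# Made-up encoding: a goal is a nested tuple ('functor', arg1, ...) and
# capitalized strings stand for logic variables, as in Prolog source syntax.
def vars_of(term):
    """Variables occurring in a term."""
    if isinstance(term, str):
        return {term} if term[:1].isupper() else set()
    return set().union(*(vars_of(arg) for arg in term[1:]))

def independent(goal1, goal2):
    """Strict independence: the goals share no variables."""
    return vars_of(goal1).isdisjoint(vars_of(goal2))

# p(X, f(Y)) and q(Z) may run in parallel; p(X, f(Y)) and q(Y) may not.
print(independent(('p', 'X', ('f', 'Y')), ('q', 'Z')))   # True
print(independent(('p', 'X', ('f', 'Y')), ('q', 'Y')))   # False
```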

Relevance:

10.00%

Abstract:

Abstract interpreters rely on the existence of a fixpoint algorithm that calculates a least upper bound approximation of the semantics of the program. Usually, that algorithm is described in terms of the particular language under study and is therefore not directly applicable to programs written in a different source language. In this paper we introduce a generic, block-based, and uniform representation of the program control flow graph and a language-independent fixpoint algorithm that can be applied to a variety of languages and, in particular, Java. Two major characteristics of our approach are accuracy (obtained through a top-down, context-sensitive approach) and reasonable efficiency (achieved by means of memoization and dependency tracking techniques). We have also implemented the proposed framework and show some initial experimental results for standard benchmarks, which further support the feasibility of the solution adopted.
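
A made-up miniature of such a fixpoint engine over a block-based control flow graph, assuming a finite abstract domain with a monotone transfer function (the paper's algorithm is top-down and context-sensitive; this simplified sketch is forward and context-insensitive):

```python
# Worklist fixpoint: memoize the abstract state at each block's entry and
# re-propagate along dependencies until nothing changes.
from collections import deque

def fixpoint(cfg, entry, init, transfer, join, bottom):
    """cfg: dict mapping each block to its list of successor blocks."""
    memo = {b: bottom for b in cfg}      # memoized abstract entry states
    memo[entry] = init
    work = deque([entry])
    while work:
        block = work.popleft()
        out = transfer(block, memo[block])       # abstract effect of block
        for succ in cfg[block]:
            new = join(memo[succ], out)
            if new != memo[succ]:                # a dependency changed:
                memo[succ] = new                 # update and re-propagate
                work.append(succ)
    return memo

# Toy use: states are sets of "facts"; join is set union.
cfg = {"a": ["b", "c"], "b": ["c"], "c": []}
res = fixpoint(cfg, "a", frozenset({"x>0"}),
               transfer=lambda b, s: s | {f"visited:{b}"},
               join=lambda s1, s2: s1 | s2, bottom=frozenset())
print(sorted(res["c"]))
```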

Relevance:

10.00%

Abstract:

We present and evaluate a compiler from Prolog (and extensions) to JavaScript which makes it possible to use (constraint) logic programming to develop the client side of web applications while being compliant with current industry standards. Targeting JavaScript makes (C)LP programs executable in virtually every modern computing device with no additional software requirements from the point of view of the user. In turn, the use of a very high-level language facilitates the development of high-quality, complex software. The compiler is a back end of the Ciao system and supports most of its features, including its module system and its rich language extension mechanism based on packages. We present an overview of the compilation process and a detailed description of the run-time system, including the support for modular compilation into separate JavaScript code. We demonstrate the maturity of the compiler by testing it with complex code such as a CLP(FD) library written in Prolog with attributed variables. Finally, we validate our proposal by measuring the performance of some LP and CLP(FD) benchmarks running on top of major JavaScript engines.

Relevance:

10.00%

Abstract:

The traditional ballasted track remains a choice for high-speed lines despite its technical problems and performance limitations. The fundamental problem of ballasted track is the continuous deterioration of the ballast under railway traffic loads, so continual maintenance is indispensable to keep an adequate track alignment. There is therefore a need to better understand the mechanisms involved in track deterioration and the key factors governing its progression over load cycles, in order to reduce track maintenance costs and improve the design of new tracks. This thesis attempts, on the one hand, to develop the most adequate and efficient vehicle and track models for calculating the dynamic effects of railway traffic on ballasted-track infrastructure and, on the other, to evaluate these dynamic effects on long-term ballasted-track deterioration, using a suitable track settlement prediction model. A review of the state of the art regarding track dynamics and the modelling of the vehicle, the track and the interaction between them is included, together with an overview of track deterioration and the factors influencing its progression. For the first research line of this thesis, the different vehicle and track models and the modelling of the interaction between them have been developed for dynamic calculations in two and three dimensions. In the vehicle-track interaction, a node-to-surface contact formulation is used to identify the surfaces in contact, and the Lagrange multiplier method to impose the contact constraints. The interaction model has been verified against benchmark cases reported in the literature. Taking into account the nonlinear wheel-rail contact and distributed track irregularity profiles, the dynamic effects on the vehicle-track system have been evaluated and compared for different vehicle running speeds, in aspects such as vehicle vibration, contact force, force transmitted through the railpads, and rail vibration. The influence of the properties of the track components on the dynamic response of the vehicle-track system has also been studied. A track settlement model has been developed, consisting of the implementation of the Bochum accumulation model and a hypoplastic model in the user subroutine UMAT of the program ABAQUS; the numerical implementation has been verified by comparing the numerical results with those reported in the literature. The geometric quality of the ballasted track in the study sections has been evaluated with real auscultation data provided by ADIF (2012). A simulation methodology has been proposed, using the settlement model for the ballast material, to reproduce the deterioration of the track geometry; the measured longitudinal level profiles are used as the initial irregularity profiles of the track in the numerical simulations. The influence of running speed on track deterioration is also investigated.
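
For illustration only, a common empirical settlement law of the kind long-term deterioration studies build on; this is not the Bochum accumulation or hypoplastic model used in the thesis, and the coefficients are invented:

```python
# Classic logarithmic ballast settlement law: permanent settlement grows
# roughly with the logarithm of the number of load cycles.
import math

def settlement_mm(n_cycles, s1=0.9, c=0.43):
    """s1: first-cycle settlement [mm]; c: material/load constant."""
    return s1 * (1.0 + c * math.log10(n_cycles))

for n in (1, 10_000, 1_000_000):
    print(f"{n:>9} cycles: {settlement_mm(n):.2f} mm")
```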

Relevance:

10.00%

Abstract:

It is known that the Minimum Weight Triangulation (MWT) problem is NP-hard. The complexity of the Minimum Weight Pseudo-Triangulation (MWPT) problem is unknown, though it is suspected to be NP-hard as well. We therefore focused on the development of approximate algorithms to find high-quality triangulations and pseudo-triangulations of minimum weight. In this work we propose two metaheuristics to solve these problems: Ant Colony Optimization (ACO) and Simulated Annealing (SA). For the experimental study we have created a set of instances for the MWT and MWPT problems, since no reference to benchmarks for these problems was found in the literature. Through experimental evaluation, we assess the applicability of the ACO and SA metaheuristics to the MWT and MWPT problems. These results are compared with those obtained from the application of deterministic algorithms for the same problems (the Delaunay triangulation for MWT, and a greedy algorithm for MWT and MWPT, respectively).
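
Since the Delaunay triangulation serves as the deterministic baseline for MWT, here is a small sketch of ours, on a randomly generated instance, of the kind of quantity such comparisons report: the total edge length ("weight") of the Delaunay triangulation:

```python
# Weight of the Delaunay triangulation of a random point set (illustrative;
# the paper uses its own instance set, which is not reproduced here).
import itertools
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(7)
pts = rng.random((50, 2))                    # a random 50-point instance
tri = Delaunay(pts)

edges = set()
for simplex in tri.simplices:                # collect unique triangle edges
    for i, j in itertools.combinations(simplex, 2):
        edges.add((min(i, j), max(i, j)))

weight = sum(np.linalg.norm(pts[i] - pts[j]) for i, j in edges)
print(f"Delaunay weight: {weight:.3f}")
```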

Relevance:

10.00%

Abstract:

Globally optimal triangulations are difficult to find with deterministic methods since, for most types of criteria, no polynomial algorithm is known. In this work, we consider the Minimum Weight Triangulation (MWT) problem for a given set of n points in the plane. This paper shows how the Ant Colony Optimization (ACO) metaheuristic can be used to find high-quality triangulations. For the experimental study we have created a set of instances for the MWT problem, since no reference to benchmarks for these problems was found in the literature. Through the experimental evaluation, we assess the applicability of the ACO metaheuristic to the MWT problem.
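
For reference, the pheromone update at the core of ACO, in its standard textbook form (the paper's exact variant may differ), rewards edges that appear in low-weight triangulations:

```latex
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k} \Delta\tau_{ij}^{k},
\qquad
\Delta\tau_{ij}^{k} =
\begin{cases}
  Q / W_k & \text{if ant } k \text{ used edge } (i,j),\\
  0       & \text{otherwise,}
\end{cases}
```

where \rho is the evaporation rate, W_k the weight of the triangulation built by ant k, and Q a constant, so lighter triangulations deposit more pheromone on their edges.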

Relevance:

10.00%

Abstract:

In this work, we consider the Minimum Weight Pseudo-Triangulation (MWPT) problem for a given set of n points in the plane. Globally optimal pseudo-triangulations with respect to weight are difficult to find with deterministic methods, since no polynomial algorithm is known. We show how the Ant Colony Optimization (ACO) metaheuristic can be used to find high-quality pseudo-triangulations of minimum weight. We present an experimental and statistical study based on our own set of instances, since no reference to benchmarks for these problems was found in the literature. Through the experimental evaluation, we assess the performance of the ACO metaheuristic for the MWPT problem.

Relevance:

10.00%

Abstract:

Over the last few years, the Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized two international benchmarks based on the NUPEC data: the OECD/NRC Full-Size Fine-Mesh Bundle Test (BFBT) Benchmark and the OECD/NRC PWR Sub-Channel and Bundle Test (PSBT) Benchmark. The benchmark activities have been conducted in cooperation with the Nuclear Energy Agency/Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) Organization. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known sub-channel code COBRA-TF (Coolant Boiling in Rod Array-Two Fluid), namely CTF, to the steady-state critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks. The goal is twofold: first, to assess these models and examine their strengths and weaknesses; and second, to identify areas for improvement.