915 results for deterministic fractals
Resumo:
The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent and to limit the consequences of any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, considered to be plausible have been taken into account, and that the monitoring systems and the engineered safety and safeguard systems will be capable of ensuring the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective in the comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) demanded a more extensive use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. Here is where the theory of stimulated dynamics (TSD) intervenes, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology attempts to extend classical PSA by including accident dynamic analysis, an assessment of the damage associated with the transients and a computation of the damage frequency. The application of the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS supports accident dynamic analysis through the simulation of nuclear accident sequences and operating procedures. Furthermore, it includes probabilistic quantification of fault trees and sequences, and integration and statistical treatment of risk metrics. SCAIS relies on an intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident and probability calculation codes. The integration of accident simulation, and thus of complex nuclear plant models, into the risk assessment process is what makes the methodology so powerful, yet at the cost of an enormous increase in complexity. As that complexity is primarily concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations, which is the focus of the present work. This document presents the work done on the investigation of more efficient techniques applied to the risk assessment process within the ISA methodology. Such techniques therefore have the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, there is not much previous work along this line of investigation, making it a difficult but necessary task, and because of time limitations the scope of the work had to be reduced.
Therefore, some assumptions were made in order to work in simplified scenarios best suited for an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section then introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, giving full detail of their mathematical background and procedures. Later, the test case used is described and the results from the application of the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
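For reference, the sketch below shows the crude Monte Carlo estimate of a sequence damage probability that such techniques would aim to reproduce with far fewer runs. The simulator, damage threshold and parameter distributions are invented placeholders and are not part of the ISA/SCAIS tool chain.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sequence(params):
    """Hypothetical stand-in for a coupled thermal-hydraulic simulation.

    `params` are the uncertain sequence parameters (e.g. an operator action
    time and a leak size); the return value plays the role of a damage
    indicator such as a peak cladding temperature."""
    t_action, leak_size = params
    return 800.0 + 35.0 * leak_size + 4.0 * t_action + rng.normal(0.0, 20.0)

DAMAGE_LIMIT = 1200.0   # assumed damage threshold (e.g. K)
N = 2000                # number of simulations (the quantity to be reduced)

# Sample the uncertain parameters of the sequence (assumed distributions).
t_action = rng.uniform(5.0, 60.0, N)     # min
leak_size = rng.lognormal(1.0, 0.5, N)   # arbitrary units

damage = np.array([simulate_sequence(p) for p in zip(t_action, leak_size)])
p_damage = np.mean(damage > DAMAGE_LIMIT)        # exceedance probability
sigma = np.sqrt(p_damage * (1 - p_damage) / N)   # Monte Carlo standard error
print(f"P(damage) = {p_damage:.4f} +/- {sigma:.4f}")
```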
Resumo:
Stochastic model updating must be considered for quantifying the uncertainties inherently present in real-world engineering structures. By this means the statistical properties, instead of deterministic values, of structural parameters can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in terms of theoretical complexity and computational cost. This study proposes a simple and cost-efficient method that decomposes the stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of the responses. Each sample corresponds to an individual deterministic inverse process predicting deterministic values of the parameters. The parameter means and variances can then be statistically estimated from the parameter predictions obtained by running all the samples. Meanwhile, the analysis of variance approach is employed to evaluate the significance of the parameter variability. The proposed method has been demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method presents similar accuracy, while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
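A minimal sketch of the decomposition described above: a toy two-parameter "FE" model is replaced by a quadratic response surface, Monte Carlo samples of the measured responses are drawn, and each sample is inverted deterministically. All models, distributions and numbers are illustrative assumptions, not the beam or plates studied by the authors.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# --- 1. Surrogate (response surface) fitted to a few "FE" runs -------------
def fe_model(theta):
    """Placeholder for the original FE model: two structural parameters
    (e.g. a stiffness scale and a thickness scale) -> two natural frequencies."""
    E, t = theta
    return np.array([10.0 * np.sqrt(E) * t, 25.0 * np.sqrt(E) * t ** 1.5])

doe = rng.uniform([0.8, 0.8], [1.2, 1.2], size=(30, 2))   # design points
Y = np.array([fe_model(th) for th in doe])

def basis(th):
    # quadratic response surface: 1, E, t, E^2, t^2, E*t
    E, t = th.T
    return np.column_stack([np.ones_like(E), E, t, E ** 2, t ** 2, E * t])

coeffs = np.linalg.lstsq(basis(doe), Y, rcond=None)[0]
surrogate = lambda th: basis(np.atleast_2d(th)) @ coeffs

# --- 2. Monte Carlo over measured responses, one deterministic update each -
resp_mean, resp_cov = fe_model([1.05, 0.95]), np.diag([0.05, 0.2]) ** 2
samples = rng.multivariate_normal(resp_mean, resp_cov, size=500)

estimates = []
for y_meas in samples:
    sol = least_squares(lambda th: surrogate(th).ravel() - y_meas,
                        x0=[1.0, 1.0], bounds=([0.8, 0.8], [1.2, 1.2]))
    estimates.append(sol.x)
estimates = np.array(estimates)

print("parameter means    :", estimates.mean(axis=0))
print("parameter std. dev.:", estimates.std(axis=0, ddof=1))
```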
Resumo:
This paper focuses on the railway rolling stock circulation problem in rapid transit networks, where the known demand and train schedule must be met by a given fleet. In rapid transit networks the frequencies are high and the distances are relatively short. Although the distances are not very large, service times are high due to the large number of intermediate stops required to allow proper passenger flow. These circumstances, together with the reduced capacity of the depot stations and the fact that the rolling stock is shared between the different lines, force the introduction of empty trains and a careful control of shunting operations. In practice the future demand is generally unknown and decisions must be based on uncertain forecasts. We have developed a stochastic rolling stock formulation of the problem. The computational experiments were carried out on a commercial line of the Madrid suburban rail network operated by RENFE (the main Spanish operator of suburban passenger trains). Comparing the results obtained with deterministic scenarios and with the stochastic approach, some useful conclusions may be drawn.
Resumo:
This study was motivated by the need to improve the densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hourly periodicity, and examining methods for the spatial estimation (by interpolation) of GHI with that periodicity at other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geostatistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. Conversely, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied shows that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations).
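To make the regression-kriging idea concrete, the following sketch uses a linear regression on satellite-derived GHI plus a Gaussian process on its residuals as a stand-in for the kriging step. The station layout, satellite values and kernel choice are invented for illustration and do not reproduce the study's configuration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

# Synthetic stand-ins: station coordinates (km), satellite-derived GHI and
# 15-min ground observations (W/m2). All values are illustrative only.
xy = rng.uniform(0, 500, size=(80, 2))
ghi_sat = 600 + 0.2 * xy[:, 0] + rng.normal(0, 30, 80)
ghi_obs = 0.9 * ghi_sat + 20 + 15 * np.sin(xy[:, 1] / 80) + rng.normal(0, 10, 80)

# 1) Regression part: ground GHI explained by the satellite estimate.
trend = LinearRegression().fit(ghi_sat.reshape(-1, 1), ghi_obs)
residuals = ghi_obs - trend.predict(ghi_sat.reshape(-1, 1))

# 2) Kriging part: spatially interpolate the regression residuals
#    (a GP with an RBF kernel plays the role of the kriging step).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=100.0) + WhiteKernel(1.0),
                              normalize_y=True).fit(xy, residuals)

# Prediction at an unobserved location with a known satellite value.
xy_new = np.array([[250.0, 120.0]])
ghi_sat_new = np.array([[680.0]])
ghi_rk = trend.predict(ghi_sat_new) + gp.predict(xy_new)
print("Regression-kriging GHI estimate:", float(ghi_rk[0]))
```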
Resumo:
In activation calculations there are several approaches to quantify uncertainties: deterministic, by means of sensitivity analysis, and stochastic, by means of Monte Carlo. Here, two different Monte Carlo approaches for nuclear data uncertainty are presented. The first one is the Total Monte Carlo (TMC). The second one is a Monte Carlo sampling of the covariance information included in the nuclear data libraries to propagate these uncertainties through the activation calculations; this latter approach is what we call Covariance Uncertainty Propagation (CUP). This work presents both approaches and their differences. They are also compared on an ADS activation calculation in which the cross-section uncertainties of 239Pu and 241Pu are propagated.
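A minimal sketch of the CUP approach under strong simplifications is shown below: a one-group toy activation formula replaces the real inventory code, and the cross-section means and covariance are invented rather than taken from the 239Pu/241Pu evaluations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative one-group data for two reactions (NOT evaluated nuclear data).
sigma_mean = np.array([1.80, 1.35])                 # barn
rel_cov = np.array([[0.06**2, 0.5 * 0.06 * 0.08],   # relative covariance with
                    [0.5 * 0.06 * 0.08, 0.08**2]])  # an assumed 0.5 correlation
cov = rel_cov * np.outer(sigma_mean, sigma_mean)

phi, t_irr = 1.0e15, 3.0e7          # flux (n/cm2/s) and irradiation time (s)
N0 = np.array([1.0e24, 5.0e23])     # initial target nuclide inventories
lam = np.array([2.5e-9, 1.5e-9])    # decay constants of the products (1/s)

def activation(sigma):
    """Toy model: saturation activity of the product after irradiation."""
    rate = N0 * sigma * 1e-24 * phi                 # reactions per second
    return rate * (1.0 - np.exp(-lam * t_irr))      # saturation factor

# CUP-style propagation: sample cross sections from the covariance matrix and
# push each sample through the (here trivial) activation calculation.
samples = rng.multivariate_normal(sigma_mean, cov, size=5000)
acts = np.array([activation(s) for s in samples])

nominal = activation(sigma_mean)
rel_unc = acts.std(axis=0, ddof=1) / nominal
print("relative uncertainty per reaction:", rel_unc)
```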
Resumo:
Steam Generator Tube Rupture (SGTR) sequences in Pressurized Water Reactors are known to be among the most demanding transients for the operating crew. SGTRs are a special kind of transient, as they could lead to radiological releases without core damage or containment failure, since they can constitute a direct path from the reactor coolant system to the environment. The first methodology used to perform the Deterministic Safety Analysis (DSA) of an SGTR did not credit operator action for the first 30 min of the transient, assuming that the operating crew was able to stop the primary-to-secondary leakage within that period of time. However, the real SGTR accidents that have occurred in the USA and elsewhere demonstrated that operators usually take more than 30 min to stop the leakage in actual sequences. Some methodologies were proposed to overcome that fact, considering operator actions from the beginning of the transient, as is done in Probabilistic Safety Analysis. This paper presents the results of comparing different assumptions regarding the single failure criterion and the operator actions taken from the most common methodologies included in the different Deterministic Safety Analyses. A single failure criterion that has not been analysed previously in the literature is also proposed and analysed in this paper. The comparison is done with a Westinghouse three-loop PWR model (Almaraz NPP) in the TRACE code, with best-estimate assumptions but including deterministic hypotheses such as the single failure criterion or the loss of offsite power. The behaviour of the reactor is quite diverse depending on the different assumptions made regarding the operator actions. On the other hand, although strong conservatisms are included in the hypotheses, such as the single failure criterion, all the results are quite far from the regulatory limits. In addition, some improvements to the Emergency Operating Procedures to minimize the offsite release from the damaged SG in case of an SGTR are outlined, taking into account the offsite dose sensitivity results.
Resumo:
Machine and Statistical Learning techniques are used in almost all online advertisement systems. The problem of discovering which content is more demanded (e.g. receives more clicks) can be modeled as a multi-armed bandit problem. Contextual bandits (i.e., bandits with covariates, side information or associative reinforcement learning) associate, to each specific content, several features that define the "context" in which it appears (e.g. user, web page, time, region). This problem can be studied in the stochastic/statistical setting by means of the conditional probability paradigm using Bayes' theorem. However, for very large contextual information and/or real-time constraints, the exact calculation of the Bayes rule is computationally infeasible. In this article, we present a method that is able to handle large contextual information for learning in contextual-bandit problems. This method was tested in the challenge on the Yahoo! dataset at ICML 2012's Workshop "New Challenges for Exploration & Exploitation 3", obtaining second place. Its basic exploration policy is deterministic in the sense that for the same input data (as a time series) the same results are obtained. We address the deterministic exploration vs. exploitation issue, explaining the way in which the proposed method deterministically finds an effective dynamic trade-off based solely on the input data, in contrast to other methods that use a random number generator.
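The sketch below illustrates the general idea of a deterministic exploration policy with a generic LinUCB-style rule: identical context/reward streams always yield identical choices. It is not the method submitted to the challenge; the arm models, features and synthetic click model are assumptions.

```python
import numpy as np

class DeterministicLinUCB:
    """Generic LinUCB-style policy: given the same stream of contexts and
    rewards it always produces the same choices (no random tie-breaking)."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = np.array([np.eye(dim) for _ in range(n_arms)])  # per-arm Gram
        self.b = np.zeros((n_arms, dim))

    def choose(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))        # deterministic arg-max

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Tiny usage example with synthetic click data.
rng = np.random.default_rng(4)
true_w = rng.normal(size=(5, 8))             # 5 ads, 8 context features
policy = DeterministicLinUCB(n_arms=5, dim=8, alpha=0.5)
for t in range(1000):
    x = rng.normal(size=8)
    arm = policy.choose(x)
    click = float(rng.random() < 1 / (1 + np.exp(-true_w[arm] @ x)))
    policy.update(arm, x, click)
```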
Resumo:
El proyecto geotécnico de columnas de grava tiene todas las incertidumbres asociadas a un proyecto geotécnico y además hay que considerar las incertidumbres inherentes a la compleja interacción entre el terreno y la columna, la puesta en obra de los materiales y el producto final conseguido. Este hecho es común a otros tratamientos del terreno cuyo objetivo sea, en general, la mejora “profunda”. Como los métodos de fiabilidad (v.gr., FORM, SORM, Monte Carlo, Simulación Direccional) dan respuesta a la incertidumbre de forma mucho más consistente y racional que el coeficiente de seguridad tradicional, ha surgido un interés reciente en la aplicación de técnicas de fiabilidad a la ingeniería geotécnica. Si bien la aplicación concreta al proyecto de técnicas de mejora del terreno no es tan extensa. En esta Tesis se han aplicado las técnicas de fiabilidad a algunos aspectos del proyecto de columnas de grava (estimación de asientos, tiempos de consolidación y aumento de la capacidad portante) con el objetivo de efectuar un análisis racional del proceso de diseño, considerando los efectos que tienen la incertidumbre y la variabilidad en la seguridad del proyecto, es decir, en la probabilidad de fallo. Para alcanzar este objetivo se ha utilizado un método analítico avanzado debido a Castro y Sagaseta (2009), que mejora notablemente la predicción de las variables involucradas en el diseño del tratamiento y su evolución temporal (consolidación). Se ha estudiado el problema del asiento (valor y tiempo de consolidación) en el contexto de la incertidumbre, analizando dos modos de fallo: i) el primer modo representa la situación en la que es posible finalizar la consolidación primaria, parcial o totalmente, del terreno mejorado antes de la ejecución de la estructura final, bien sea por un precarga o porque la carga se pueda aplicar gradualmente sin afectar a la estructura o instalación; y ii) por otra parte, el segundo modo de fallo implica que el terreno mejorado se carga desde el instante inicial con la estructura definitiva o instalación y se comprueba que el asiento final (transcurrida la consolidación primaria) sea lo suficientemente pequeño para que pueda considerarse admisible. Para trabajar con valores realistas de los parámetros geotécnicos, los datos se han obtenido de un terreno real mejorado con columnas de grava, consiguiendo, de esta forma, un análisis de fiabilidad más riguroso. La conclusión más importante, obtenida del análisis de este caso particular, es la necesidad de precargar el terreno mejorado con columnas de grava para conseguir que el asiento ocurra de forma anticipada antes de la aplicación de la carga correspondiente a la estructura definitiva. De otra forma la probabilidad de fallo es muy alta, incluso cuando el margen de seguridad determinista pudiera ser suficiente. En lo que respecta a la capacidad portante de las columnas, existen un buen número de métodos de cálculo y de ensayos de carga (tanto de campo como de laboratorio) que dan predicciones dispares del valor de la capacidad última de las columnas de grava. En las mallas indefinidas de columnas, los resultados del análisis de fiabilidad han confirmado las consideraciones teóricas y experimentales existentes relativas a que no se produce fallo por estabilidad, obteniéndose una probabilidad de fallo prácticamente nula para este modo de fallo. 
Sin embargo, cuando se analiza, en el contexto de la incertidumbre, la capacidad portante de pequeños grupos de columnas bajo zapatas se ha obtenido, para un caso con unos parámetros geotécnicos típicos, que la probabilidad de fallo es bastante alta, por encima de los umbrales normalmente admitidos para Estados Límite Últimos. Por último, el trabajo de recopilación sobre los métodos de cálculo y de ensayos de carga sobre la columna aislada ha permitido generar una base de datos suficientemente amplia como para abordar una actualización bayesiana de los métodos de cálculo de la columna de grava aislada. El marco bayesiano de actualización ha resultado de utilidad en la mejora de las predicciones de la capacidad última de carga de la columna, permitiendo "actualizar" los parámetros del modelo de cálculo a medida que se dispongan de ensayos de carga adicionales para un proyecto específico. Constituye una herramienta valiosa para la toma de decisiones en condiciones de incertidumbre ya que permite comparar el coste de los ensayos adicionales con el coste de una posible rotura y, en consecuencia, decidir si es procedente efectuar dichos ensayos. The geotechnical design of stone columns has all the uncertainties associated with a geotechnical project, and in addition the uncertainties inherent to the complex interaction between the soil and the column, the installation of the materials and the characteristics of the final (as-built) column must be considered. This is common to other soil treatments aimed, in general, at "deep" soil improvement. Since reliability methods (e.g., FORM, SORM, Monte Carlo, Directional Simulation) deal with uncertainty in a much more consistent and rational way than the traditional safety factor, recent interest has arisen in the application of reliability techniques to geotechnical engineering. However, the specific application of these techniques to soil improvement projects is not as extensive. In this thesis reliability techniques have been applied to some aspects of stone column design (estimated settlements, consolidation times and increased bearing capacity) to make a rational analysis of the design process, considering the effects of uncertainty and variability on the safety of the project, i.e., on the probability of failure. To achieve this goal an advanced analytical method due to Castro and Sagaseta (2009), which significantly improves the prediction of the variables involved in the design of the treatment and their temporal evolution (consolidation), has been employed. This thesis studies the problem of stone column settlement (amount and speed) in the context of uncertainty, analyzing two failure modes: i) the first mode represents the situation in which it is possible to complete, partially or totally, the primary consolidation of the improved ground prior to the execution of the final structure, either by a preload or because the load can be applied gradually or in a programmed manner without affecting the structure or installation; and ii) on the other hand, the second mode implies that the improved ground is loaded from the initial instant with the final structure or installation, and it is checked that the final settlement (once primary consolidation has elapsed) is small enough to be considered allowable. To work with realistic values of the geotechnical parameters, data were obtained from a real soil improved with stone columns, hence producing a more rigorous reliability analysis.
The most important conclusion obtained from the analysis of this particular case is the need to preload the ground improved with stone columns so that the settlement occurs in advance, before the application of the load corresponding to the final structure. Otherwise the probability of failure is very high, even when the deterministic safety margin might be sufficient. With respect to the bearing capacity of the columns, there are numerous calculation methods and load tests (both field and laboratory) giving disparate predictions of the ultimate capacity of stone columns. For indefinite column grids, the results of the reliability analysis confirmed the existing theoretical and experimental considerations that no failure occurs due to the stability failure mode, resulting in a practically negligible probability of failure for this mode. However, when analyzed in the context of uncertainty (for a case with typical geotechnical parameters), the results show that the probability of failure due to the bearing capacity failure mode of a group of columns under a footing is quite high, above the thresholds usually admitted for Ultimate Limit States. Finally, the review of calculation methods and load test results for isolated columns has generated a large enough database to allow a subsequent Bayesian updating of the methods for calculating the bearing capacity of isolated stone columns. The Bayesian updating framework has been useful to improve the predictions of the ultimate load capacity of the column, allowing the parameters of the calculation model to be "updated" as additional load tests become available for a specific project. Moreover, it is a valuable tool for decision making under uncertainty, since it makes it possible to compare the cost of further testing with the cost of a possible failure and therefore to decide whether it is appropriate to perform such tests.
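As an illustration of the reliability analyses mentioned above, the sketch below estimates a probability of failure by crude Monte Carlo for a settlement limit state. The toy settlement formula, parameter distributions and admissible settlement are invented and only stand in for the Castro & Sagaseta (2009) solution and the site data used in the thesis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def settlement_cm(E_soil_kpa, cu_kpa, load_kpa, thickness_cm=400.0):
    """Toy settlement model for the improved ground; it merely stands in
    for the analytical solution used in the thesis."""
    e_equiv = E_soil_kpa + 20.0 * cu_kpa      # crude column contribution
    return load_kpa * thickness_cm / e_equiv

N = 200_000
# Assumed parameter variability (lognormal soil properties, normal load).
E_soil = rng.lognormal(np.log(8000.0), 0.25, N)   # kPa
cu = rng.lognormal(np.log(30.0), 0.20, N)         # kPa
load = rng.normal(100.0, 10.0, N)                 # kPa

s_allow = 6.0                                     # admissible settlement (cm)
g = s_allow - settlement_cm(E_soil, cu, load)     # limit-state function
pf = np.mean(g < 0.0)                             # probability of failure
beta = -norm.ppf(pf)                              # equivalent reliability index
print(f"P_f = {pf:.4f}, beta = {beta:.2f}")
```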
Resumo:
A genetic algorithm (GA) is employed for the multi-objective shape optimization of the nose of a high-speed train. Aerodynamic problems observed at high speeds become even more relevant when traveling through a tunnel. The objective is to minimize both the aerodynamic drag and the amplitude of the pressure gradient of the compression wave generated when a train enters a tunnel. The main drawback of GAs is the large number of evaluations needed in the optimization process. Metamodel-based optimization is considered to overcome this problem. As a result, an explicit relationship between the pressure gradient and the geometrical parameters is obtained.
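A minimal sketch of metamodel-based optimization in this spirit: a handful of "expensive" evaluations are used to fit quadratic surrogates of the two objectives, and an evolutionary optimizer then searches the cheap surrogates. The aerodynamic model, bounds and the weighted-sum scalarization are assumptions, and SciPy's differential evolution is used only as a stand-in for the GA.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(6)

def expensive_eval(x):
    """Placeholder for the expensive aerodynamic simulations: nose length and
    bluntness -> (drag, pressure-gradient amplitude). Purely illustrative."""
    length, blunt = x
    drag = 1.0 / (1.0 + length) + 0.3 * blunt ** 2
    dpdx = 0.8 * blunt + 0.2 / (0.5 + length)
    return np.array([drag, dpdx])

# 1) Small design of experiments evaluated with the "expensive" model.
X = rng.uniform([1.0, 0.1], [10.0, 1.0], size=(40, 2))
Y = np.array([expensive_eval(x) for x in X])

# 2) Quadratic metamodels (one per objective) fitted by least squares.
def feats(P):
    l, b = P[:, 0], P[:, 1]
    return np.column_stack([np.ones_like(l), l, b, l * b, l ** 2, b ** 2])

C = np.linalg.lstsq(feats(X), Y, rcond=None)[0]
surrogate = lambda x: feats(np.atleast_2d(np.asarray(x, dtype=float))) @ C

# 3) Evolutionary search on the cheap surrogate; a weighted sum scalarizes
#    the two objectives, and differential evolution stands in for the GA.
w = np.array([0.5, 0.5])
res = differential_evolution(lambda x: (surrogate(x) @ w).item(),
                             bounds=[(1.0, 10.0), (0.1, 1.0)], seed=0)
print("surrogate optimum:", res.x, "-> true objectives:", expensive_eval(res.x))
```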
Resumo:
Un escenario habitualmente considerado para el uso sostenible y prolongado de la energía nuclear contempla un parque de reactores rápidos refrigerados por metales líquidos (LMFR) dedicados al reciclado de Pu y la transmutación de actínidos minoritarios (MA). Otra opción es combinar dichos reactores con algunos sistemas subcríticos asistidos por acelerador (ADS), exclusivamente destinados a la eliminación de MA. El diseño y licenciamiento de estos reactores innovadores requiere herramientas computacionales prácticas y precisas, que incorporen el conocimiento obtenido en la investigación experimental de nuevas configuraciones de reactores, materiales y sistemas. A pesar de que se han construido y operado un cierto número de reactores rápidos a nivel mundial, la experiencia operacional es todavía reducida y no todos los transitorios se han podido entender completamente. Por tanto, los análisis de seguridad de nuevos LMFR están basados fundamentalmente en métodos deterministas, al contrario que las aproximaciones modernas para reactores de agua ligera (LWR), que se benefician también de los métodos probabilistas. La aproximación más usada en los estudios de seguridad de LMFR es utilizar una variedad de códigos, desarrollados a base de distintas teorías, en busca de soluciones integrales para los transitorios e incluyendo incertidumbres. En este marco, los nuevos códigos para cálculos de mejor estimación ("best estimate") que no incluyen aproximaciones conservadoras, son de una importancia primordial para analizar estacionarios y transitorios en reactores rápidos. Esta tesis se centra en el desarrollo de un código acoplado para realizar análisis realistas en reactores rápidos críticos aplicando el método de Monte Carlo. Hoy en día, dado el mayor potencial de recursos computacionales, los códigos de transporte neutrónico por Monte Carlo se pueden usar de manera práctica para realizar cálculos detallados de núcleos completos, incluso de elevada heterogeneidad material. Además, los códigos de Monte Carlo se toman normalmente como referencia para los códigos deterministas de difusión en multigrupos en aplicaciones con reactores rápidos, porque usan secciones eficaces punto a punto, un modelo geométrico exacto y tienen en cuenta intrínsecamente la dependencia angular de flujo. En esta tesis se presenta una metodología de acoplamiento entre el conocido código MCNP, que calcula la generación de potencia en el reactor, y el código de termohidráulica de subcanal COBRA-IV, que obtiene las distribuciones de temperatura y densidad en el sistema. COBRA-IV es un código apropiado para aplicaciones en reactores rápidos ya que ha sido validado con resultados experimentales en haces de barras con sodio, incluyendo las correlaciones más apropiadas para metales líquidos. En una primera fase de la tesis, ambos códigos se han acoplado en estado estacionario utilizando un método iterativo con intercambio de archivos externos. El principal problema en el acoplamiento neutrónico y termohidráulico en estacionario con códigos de Monte Carlo es la manipulación de las secciones eficaces para tener en cuenta el ensanchamiento Doppler cuando la temperatura del combustible aumenta. Entre todas las opciones disponibles, en esta tesis se ha escogido la aproximación de pseudo materiales, y se ha comprobado que proporciona resultados aceptables en su aplicación con reactores rápidos. 
Por otro lado, los cambios geométricos originados por grandes gradientes de temperatura en el núcleo de reactores rápidos resultan importantes para la neutrónica como consecuencia del elevado recorrido libre medio del neutrón en estos sistemas. Por tanto, se ha desarrollado un módulo adicional que simula la geometría del reactor en caliente y permite estimar la reactividad debida a la expansión del núcleo en un transitorio. Este módulo calcula automáticamente la longitud del combustible, el radio de la vaina, la separación de los elementos de combustible y el radio de la placa soporte en función de la temperatura. Este efecto es muy relevante en transitorios sin inserción de bancos de parada. También relacionado con los cambios geométricos, se ha implementado una herramienta que automatiza el movimiento de las barras de control en busca de la criticidad del reactor, o bien calcula el valor de inserción axial de las barras de control. Una segunda fase en la plataforma de cálculo que se ha desarrollado es la simulación dinámica. Puesto que MCNP sólo realiza cálculos estacionarios para sistemas críticos o supercríticos, la solución más directa que se propone sin modificar el código fuente de MCNP es usar la aproximación de factorización de flujo, que resuelve por separado la forma del flujo y la amplitud. En este caso se han estudiado en profundidad dos aproximaciones: adiabática y cuasiestática. El método adiabático usa un esquema de acoplamiento que alterna en el tiempo los cálculos neutrónicos y termohidráulicos. MCNP calcula el modo fundamental de la distribución de neutrones y la reactividad al final de cada paso de tiempo, y COBRA-IV calcula las propiedades térmicas en el punto intermedio de los pasos de tiempo. La evolución de la amplitud de flujo se calcula resolviendo las ecuaciones de cinética puntual. Este método calcula la reactividad estática en cada paso de tiempo que, en general, difiere de la reactividad dinámica que se obtendría con la distribución de flujo exacta y dependiente de tiempo. No obstante, para entornos no excesivamente alejados de la criticidad ambas reactividades son similares y el método conduce a resultados prácticos aceptables. Siguiendo esta línea, se ha desarrollado después un método mejorado para intentar tener en cuenta el efecto de la fuente de neutrones retardados en la evolución de la forma del flujo durante el transitorio. El esquema consiste en realizar un cálculo cuasiestacionario por cada paso de tiempo con MCNP. La simulación cuasiestacionaria se basa en la aproximación de fuente constante de neutrones retardados, y consiste en dar un determinado peso o importancia a cada ciclo computacional del cálculo de criticidad con MCNP para la estimación del flujo final. Ambos métodos se han verificado tomando como referencia los resultados del código de difusión COBAYA3 frente a un ejercicio común y suficientemente significativo. Finalmente, con objeto de demostrar la posibilidad de uso práctico del código, se ha simulado un transitorio en el concepto de reactor crítico en fase de diseño MYRRHA/FASTEF, de 100 MW de potencia térmica y refrigerado por plomo-bismuto. ABSTRACT Long term sustainable nuclear energy scenarios envisage a fleet of Liquid Metal Fast Reactors (LMFR) for Pu recycling and minor actinide (MA) transmutation, or combined with some accelerator driven systems (ADS) dedicated solely to MA elimination.
Design and licensing of these innovative reactor concepts require accurate computational tools, implementing the knowledge obtained in experimental research on new reactor configurations, materials and associated systems. Although a number of fast reactor systems have already been built, the operational experience is still limited, especially for lead reactors, and not all the transients are fully understood. The safety analysis approach for LMFR is therefore based mainly on deterministic methods, different from the modern approach for Light Water Reactors (LWR), which also benefits from probabilistic methods. Usually, the approach adopted in LMFR safety assessments is to employ a variety of codes, somewhat different from each other, to analyze transients looking for a comprehensive solution and including uncertainties. In this frame, new best estimate simulation codes are of prime importance in order to analyze fast reactor steady states and transients. This thesis is focused on the development of a coupled code system for best estimate analysis in fast critical reactors. Currently, due to the increase in computational resources, Monte Carlo methods for neutron transport can be used for detailed full core calculations. Furthermore, Monte Carlo codes are usually taken as a reference for deterministic multigroup diffusion codes in fast reactor applications because they employ point-wise cross sections in an exact geometry model and intrinsically account for the directional dependence of the flux. The coupling methodology presented here uses MCNP to calculate the power deposition within the reactor. The subchannel code COBRA-IV calculates the temperature and density distribution within the reactor. COBRA-IV is suitable for fast reactor applications because it has been validated against experimental results in sodium rod bundles. The proper correlations for liquid metal applications have been added to the thermal-hydraulics program. Both codes are coupled at steady state using an iterative method and external file exchange. The main issue in the Monte Carlo/thermal-hydraulics steady state coupling is the cross section handling to take into account Doppler broadening when the temperature rises. Among all the available options, the pseudo materials approach has been chosen in this thesis. This approach obtains reasonable results in fast reactor applications. Furthermore, geometrical changes caused by large temperature gradients in the core are of major importance in fast reactors due to the large neutron mean free path. An additional module has therefore been included in order to simulate the reactor geometry in the hot state or to estimate the reactivity due to core expansion in a transient. The module automatically calculates the fuel length, cladding radius, fuel assembly pitch and diagrid radius as a function of temperature. This effect will be crucial in some unprotected transients. Also related to geometrical changes, an automatic control rod movement feature has been implemented in order to achieve a just critical reactor or to calculate the control rod worth. A step forward in the coupling platform is the dynamic simulation. Since MCNP performs only steady state calculations for critical systems, the most straightforward option, without modifying the MCNP source code, is to use the flux factorization approach, solving separately the flux shape and amplitude. In this thesis two options have been studied to tackle time-dependent neutronic simulations using a Monte Carlo code: the adiabatic and quasistatic methods.
The adiabatic method uses a staggered time coupling scheme for the time advance of the neutronics and thermal-hydraulics calculations. MCNP computes the fundamental mode of the neutron flux distribution and the reactivity at the end of each time step, and COBRA-IV computes the thermal properties at the midpoint of the time steps. To calculate the flux amplitude evolution, a solver of the point kinetics equations is used. This method calculates the static reactivity in each time step, which in general differs from the dynamic reactivity calculated with the exact time-dependent flux distribution. Nevertheless, for situations close to criticality, both reactivities are similar and the method leads to acceptable practical results. Along this line, an improved method is developed as an attempt to take into account the effect of the delayed neutron source in the evolution of the transient flux shape. The scheme performs a quasistationary calculation per time step with MCNP. This quasistationary simulation is based on the constant delayed neutron source approach, taking into account the importance of each criticality cycle in the final flux estimation. Both the adiabatic and quasistatic methods have been verified against the diffusion code COBAYA3, using a theoretical kinetic exercise. Finally, a transient in a critical 100 MWth lead-bismuth-eutectic reactor concept is analyzed using the adiabatic method as an application example in a real system.
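A minimal sketch of the adiabatic (staggered) coupling with one delayed-neutron group is given below: a linear Doppler law stands in for the MCNP static reactivity calculation and a one-node heat balance stands in for COBRA-IV, while the point kinetics equations advance the flux amplitude. All constants are invented and purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-group point-kinetics data (illustrative values only).
beta, lam, LAMBDA = 0.0035, 0.08, 4.0e-7   # beff, decay const (1/s), gen. time (s)
alpha_D = -1.0e-5                          # assumed Doppler coefficient (1/K)

def static_reactivity(T_fuel):
    """Stand-in for the MCNP criticality calculation at the current
    thermal-hydraulic state (here a linear Doppler feedback only)."""
    return 2.0e-4 + alpha_D * (T_fuel - 900.0)   # small inserted reactivity

def fuel_temperature(T_fuel, power, dt):
    """Stand-in for COBRA-IV: a one-node fuel heat balance."""
    heat_cap, cooling = 5.0e6, 1.0e4             # J/K, W/K (arbitrary)
    return T_fuel + dt * (power - cooling * (T_fuel - 600.0)) / heat_cap

def point_kinetics(t, y, rho):
    n, c = y
    return [(rho - beta) / LAMBDA * n + lam * c,
            beta / LAMBDA * n - lam * c]

# Adiabatic (staggered) coupling over macro time steps.
dt, n, c, T = 0.1, 1.0, 1.0 * beta / (lam * LAMBDA), 900.0
P0 = 1.0e8                                       # nominal power (W)
for step in range(50):
    rho = static_reactivity(T)                   # "MCNP" at the step start
    sol = solve_ivp(point_kinetics, (0.0, dt), [n, c], args=(rho,),
                    method="LSODA", rtol=1e-8)
    n, c = sol.y[:, -1]
    T = fuel_temperature(T, P0 * n, dt)          # "COBRA-IV" within the step
print(f"relative power after {50*dt:.1f} s: {n:.3f}, fuel T: {T:.1f} K")
```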
Resumo:
The calculation of the effective delayed neutron fraction, beff, with Monte Carlo codes is a complex task due to the requirement of properly considering the adjoint weighting of delayed neutrons. Nevertheless, several techniques have been proposed to circumvent this difficulty and obtain accurate Monte Carlo results for beff without the need to explicitly determine the adjoint flux. In this paper, we review some of these techniques; namely, we have analyzed two variants of what we call the k-eigenvalue technique and other techniques based on different interpretations of the physical meaning of the adjoint weighting. To test the validity of all these techniques we have implemented them with the MCNPX code and benchmarked them against a range of critical and subcritical systems for which either experimental or deterministic values of beff are available. Furthermore, several nuclear data libraries have been used in order to assess the impact of the uncertainty in nuclear data on the calculated value of beff.
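One of the k-eigenvalue variants can be written compactly as beff ≈ 1 - kp/k, where kp comes from a run in which delayed neutrons are not produced. The sketch below only evaluates this ratio and propagates the two statistical uncertainties to first order; the k values are invented, not benchmark results.

```python
import numpy as np

def beff_prompt_method(k_total, sig_k, k_prompt, sig_kp):
    """beff from two k-eigenvalue runs (the prompt k-ratio estimate):
    a standard run (k_total) and a run with delayed neutrons switched off
    (k_prompt). Returns the estimate and its statistical uncertainty."""
    beff = 1.0 - k_prompt / k_total
    # first-order propagation of the (independent) Monte Carlo uncertainties
    sig = (k_prompt / k_total) * np.sqrt((sig_kp / k_prompt) ** 2 +
                                         (sig_k / k_total) ** 2)
    return beff, sig

# Illustrative numbers only (not results from the paper's benchmarks).
beff, sig = beff_prompt_method(1.00120, 0.00008, 0.99400, 0.00008)
print(f"beff = {beff*1e5:.0f} +/- {sig*1e5:.0f} pcm")
```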
Resumo:
Whole brain resting state connectivity is a promising biomarker that might help to obtain an early diagnosis in many neurological diseases, such as dementia. Inferring resting-state connectivity is often based on correlations, which are sensitive to indirect connections, leading to an inaccurate representation of the real backbone of the network. The precision matrix is a better representation of whole brain connectivity, as it considers only direct connections. The network structure can be estimated using the graphical lasso (GL), which achieves sparsity through l1-regularization on the precision matrix. In this paper, we propose a structural connectivity adaptive version of the GL, where weaker anatomical connections are represented as stronger penalties on the corresponding functional connections. We applied beamformer source reconstruction to the resting state MEG recordings of 81 subjects, of whom 29 were healthy controls, 22 were single-domain amnestic Mild Cognitive Impairment (MCI) patients, and 30 were multiple-domain amnestic MCI patients. An atlas-based anatomical parcellation of 66 regions was obtained for each subject, and time series were assigned to each of the regions. The fiber densities between the regions, obtained with deterministic tractography from diffusion-weighted MRI, were used to define the anatomical connectivity. Precision matrices were obtained with the region-specific time series in five different frequency bands. We compared our method with the traditional GL and a functional adaptive version of the GL, in terms of log-likelihood and classification accuracies between the three groups. We conclude that introducing an anatomical prior improves the expressivity of the model and, in most cases, leads to a better classification between groups.
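A sketch of the structurally adaptive objective assumed above is given below in LaTeX; the specific mapping from fiber density to penalty weight is only one possible monotone decreasing choice and may differ from the weighting actually used by the authors.

```latex
% Sketch of a structurally adaptive graphical-lasso objective: S is the sample
% covariance of the band-limited source time series, \Theta the precision
% matrix, f_{ij} the fiber density between regions i and j, and the mapping
% \lambda_{ij} = \lambda_0 / (1 + f_{ij}) is an assumed, illustrative choice.
\hat{\Theta} \;=\; \arg\max_{\Theta \succ 0}\;
  \log\det\Theta \;-\; \operatorname{tr}(S\Theta)
  \;-\; \sum_{i \neq j} \lambda_{ij}\,\lvert\Theta_{ij}\rvert ,
\qquad
\lambda_{ij} \;=\; \frac{\lambda_0}{1 + f_{ij}} .
```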
Resumo:
La necesidad de desarrollar técnicas para predecir la respuesta vibroacústica de estructuras espaciales ha ido ganando importancia en los últimos años. Las técnicas numéricas existentes en la actualidad son capaces de predecir de forma fiable el comportamiento vibroacústico de sistemas con altas o bajas densidades modales. Sin embargo, ambos rangos no siempre solapan, lo que hace que sea necesario el desarrollo de métodos específicos para este rango, conocido como densidad modal media. Es en este rango, conocido también como media frecuencia, donde se centra la presente Tesis doctoral, debido a la carencia de métodos específicos para el cálculo de la respuesta vibroacústica. Para las estructuras estudiadas en este trabajo, los mencionados rangos de baja y alta densidad modal se corresponden, en general, con los rangos de baja y alta frecuencia, respectivamente. Los métodos numéricos que permiten obtener la respuesta vibroacústica para estos rangos de frecuencia están bien especificados. Para el rango de baja frecuencia se emplean técnicas deterministas, como el método de los Elementos Finitos, mientras que, para el rango de alta frecuencia, las técnicas estadísticas son más utilizadas, como el Análisis Estadístico de la Energía. En el rango de medias frecuencias ninguno de estos métodos numéricos puede ser usado con suficiente precisión y, como consecuencia -a falta de propuestas más específicas- se han desarrollado métodos híbridos que combinan el uso de métodos de baja y alta frecuencia, intentando que cada uno supla las deficiencias del otro en este rango medio. Este trabajo propone dos soluciones diferentes para resolver el problema de la media frecuencia. El primero de ellos, denominado SHFL (del inglés Subsystem based High Frequency Limit procedure), propone un procedimiento multihíbrido en el cual cada subestructura del sistema completo se modela empleando una técnica numérica diferente, dependiendo del rango de frecuencias de estudio. Con este propósito se introduce el concepto de límite de alta frecuencia de una subestructura, que marca el límite a partir del cual dicha subestructura tiene una densidad modal lo suficientemente alta como para ser modelada utilizando Análisis Estadístico de la Energía. Si la frecuencia de análisis es menor que el límite de alta frecuencia de la subestructura, ésta se modela utilizando Elementos Finitos. Mediante este método, el rango de media frecuencia se puede definir de una forma precisa, estando comprendido entre el menor y el mayor de los límites de alta frecuencia de las subestructuras que componen el sistema completo. Los resultados obtenidos mediante la aplicación de este método evidencian una mejora en la continuidad de la respuesta vibroacústica, mostrando una transición suave entre los rangos de baja y alta frecuencia. El segundo método propuesto se denomina HS-CMS (del inglés Hybrid Substructuring method based on Component Mode Synthesis). Este método se basa en la clasificación de la base modal de las subestructuras en conjuntos de modos globales (que afectan a todo o a varias partes del sistema) o locales (que afectan a una única subestructura), utilizando un método de Síntesis Modal de Componentes. De este modo es posible situar espacialmente los modos del sistema completo y estudiar el comportamiento del mismo desde el punto de vista de las subestructuras. De nuevo se emplea el concepto de límite de alta frecuencia de una subestructura para realizar la clasificación global/local de los modos en la misma.
Mediante dicha clasificación se derivan las ecuaciones globales del movimiento, gobernadas por los modos globales, y en las que la influencia del conjunto de modos locales se introduce mediante modificaciones en las mismas (en su matriz dinámica de rigidez y en el vector de fuerzas). Las ecuaciones locales se resuelven empleando Análisis Estadístico de la Energía. Sin embargo, este último será un modelo híbrido, en el cual se introduce la potencia adicional aportada por la presencia de los modos globales. El método ha sido probado para el cálculo de la respuesta de estructuras sometidas tanto a cargas estructurales como acústicas. Ambos métodos han sido probados inicialmente en estructuras sencillas para establecer las bases e hipótesis de aplicación. Posteriormente, se han aplicado a estructuras espaciales, como satélites y reflectores de antenas, mostrando buenos resultados, como se concluye de la comparación de las simulaciones y los datos experimentales medidos en ensayos, tanto estructurales como acústicos. Este trabajo abre un amplio campo de investigación a partir del cual es posible obtener metodologías precisas y eficientes para reproducir el comportamiento vibroacústico de sistemas en el rango de la media frecuencia. ABSTRACT Over the last years an increasing need for novel prediction techniques for the vibro-acoustic analysis of space structures has arisen. Current numerical techniques are able to predict with enough accuracy the vibro-acoustic behaviour of systems with low and high modal densities. However, space structures are, in general, very complex and they present a range of frequencies in which a mixed behaviour exists. In such cases, the full system is composed of some sub-structures which have low modal density, while others present high modal density. This frequency range is known as the mid-frequency range, and the development of methods to accurately describe the vibro-acoustic response in this frequency range is the scope of this dissertation. For the low frequency range, deterministic techniques such as the Finite Element Method (FEM) are used while, for the high frequency range, statistical techniques such as Statistical Energy Analysis (SEA) are considered more appropriate. In the mid-frequency range, where a mixed vibro-acoustic behaviour is expected, none of these numerical methods can be used with an adequate confidence level. As a consequence, it is usual to obtain an undetermined gap between the low and high frequencies in the vibro-acoustic response function. This dissertation proposes two different solutions to the mid-frequency range problem. The first one, named the Subsystem based High Frequency Limit (SHFL) procedure, proposes a multi-hybrid procedure in which each sub-structure of the full system is modelled with the appropriate modelling technique, depending on the frequency of study. With this purpose, the concept of the high frequency limit of a sub-structure is introduced, marking out the limit above which a sub-structure has enough modal density to be modelled by SEA. For a certain analysis frequency, if it is lower than the high frequency limit of the sub-structure, the sub-structure is modelled through FEM and, if the frequency of analysis is higher than the high frequency limit, the sub-structure is modelled by SEA.
The procedure leads to a number of hybrid models required to cover the medium frequency range, which is defined as the frequency range between the lowest sub-structure high frequency limit and the highest one. Using this procedure, the mid-frequency range can be defined precisely and, as a consequence, an improvement in the continuity of the vibro-acoustic response function is achieved, closing the undetermined gap between the low and high frequency ranges. The second proposed mid-frequency solution is the Hybrid Sub-structuring method based on Component Mode Synthesis (HS-CMS). The method adopts a partition scheme based on classifying the system modal basis into global and local sets of modes. This classification is performed by using Component Mode Synthesis, in particular a Craig-Bampton transformation, in order to express the system modal basis in terms of the modal bases associated with each sub-structure. Then, each sub-structure modal basis is classified into a global and a local set, the first associated with long-wavelength motion and the second with short-wavelength motion. The high frequency limit of each sub-structure is used as the frequency frontier between both sets of modes. From this classification, the equations of motion associated with the global modes are derived, which include the interaction of local modes by means of corrections in the dynamic stiffness matrix and the force vector of the global problem. The local equations of motion are solved through SEA, where again interactions with global modes are included through the inclusion of an additional input power into the SEA model. The method has been tested for the calculation of the response function of structures subjected to structural and acoustic loads. Both methods have been first tested on simple structures to establish their basis and main characteristics. The methods are also verified on space structures, such as satellites and antenna reflectors, providing good results, as concluded from the comparison with experimental results obtained in both acoustic and structural load tests. This dissertation opens a wide field of research through which further studies could be performed to obtain efficient and accurate methodologies to appropriately reproduce the vibro-acoustic behaviour of complex systems in the mid-frequency range.
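The SHFL selection rule described above reduces to a simple comparison per sub-structure, as in the sketch below; the sub-structure names, high frequency limits and analysis frequencies are invented for illustration.

```python
# Minimal sketch of the SHFL selection rule: each sub-structure is assigned
# FEM or SEA by comparing the analysis frequency with its high frequency limit.
high_freq_limit = {"panel": 400.0, "reflector": 900.0, "strut": 2500.0}  # Hz

def modelling_technique(analysis_freq_hz):
    return {name: ("SEA" if analysis_freq_hz >= limit else "FEM")
            for name, limit in high_freq_limit.items()}

for f in (200.0, 700.0, 1800.0, 3000.0):
    print(f"{f:6.0f} Hz ->", modelling_technique(f))

# The mid-frequency range of the assembled system is bounded by the smallest
# and the largest sub-structure limits:
mid_band = (min(high_freq_limit.values()), max(high_freq_limit.values()))
print("mid-frequency range (Hz):", mid_band)
```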
Resumo:
La computación con membranas surge como una alternativa a la computación tradicional. Dentro de este campo se sitúan los denominados Sistemas P de Transición, que se basan en la existencia de regiones que contienen recursos y reglas que hacen evolucionar a dichos recursos para poder llevar a cada una de las regiones a una nueva situación denominada configuración. La sucesión de las diferentes configuraciones conforma la computación. En este campo, el Grupo de Computación Natural de la Universidad Politécnica de Madrid lleva a cabo numerosas investigaciones al amparo de las cuales se han publicado numerosos artículos y realizado varias tesis doctorales. Las principales vías de investigación han sido, hasta el momento, el estudio del modelo teórico sobre el que se definen los Sistemas P, el estudio de los algoritmos que se utilizan para la aplicación de las reglas de evolución en las regiones, el diseño de nuevas arquitecturas que mejoren las comunicaciones entre las diferentes membranas (regiones) que componen el sistema y la implantación de estos sistemas en dispositivos hardware que pudiesen definir futuras máquinas basadas en este modelo. Dentro de este último campo, es decir, dentro del objetivo de construir finalmente máquinas que puedan llevar a cabo la funcionalidad de la computación con Sistemas P, la presente tesis doctoral se centra en el diseño de dos procesadores paralelos que, aplicando variantes de algoritmos existentes, favorezcan el crecimiento en el nivel de intra-paralelismo a la hora de aplicar las reglas. El diseño y creación de ambos procesadores presentan novedosas aportaciones al entorno de investigación de los Sistemas P de Transición en tanto en cuanto se utilizan conceptos que, aunque previamente definidos de manera teórica, no habían sido introducidos en el hardware diseñado para estos sistemas. Así, los dos procesadores mantienen las siguientes características: - Presentan un alto rendimiento en la fase de aplicación de reglas, manteniendo por otro lado una flexibilidad y escalabilidad medias que son dependientes de la tecnología final sobre la que se sinteticen dichos procesadores. - Presentan un alto nivel de intra-paralelismo en las regiones al permitir la aplicación simultánea de reglas. - Tienen carácter universal en tanto en cuanto no dependen del carácter de las reglas que componen el Sistema P. - Tienen un comportamiento indeterminista que es inherente a la propia naturaleza de estos sistemas. El primero de los circuitos utiliza el conjunto potencia del conjunto de reglas de aplicación así como el concepto de máxima aplicabilidad para favorecer el intra-paralelismo y el segundo incluye, además, el concepto de dominio de aplicabilidad para determinar el conjunto de reglas que son aplicables en cada momento con los recursos existentes. Ambos procesadores se diseñan y se prueban mediante herramientas de diseño electrónico y se preparan para ser sintetizados sobre FPGAs. ABSTRACT Membrane computing appears as an alternative to traditional computing. P Systems are placed inside this field and they are based upon the existence of regions called "membranes" that contain resources and rules that describe how the resources may vary to take each of these regions to a new situation called "configuration". The succession of configurations makes up the computation. Within this field, the Natural Computing Group of the Universidad Politécnica de Madrid carries out a large amount of research that has produced numerous papers and several doctoral theses.
Main research lines have been, up to now, the study of the theoretical model on which Transition P Systems are defined, the study of the algorithms that are used for the application of the evolution rules in the regions, the design of new architectures that may improve communication among the different membranes (regions) that compose the whole system, and the implementation of such systems on hardware devices that may define future machines based upon this new model. Within this last research field, that is, within the objective of finally building machines that may accomplish the functionality of computation with P Systems, the present thesis is centered on the design of two parallel processors that, applying several variants of known algorithms, improve the level of internal parallelism in the evolution rule application phase. The design and creation of both processors present innovations to the field of Transition P Systems research because they use concepts that, even though previously defined theoretically, had never been used in circuits that implement the evolution rule application phase. So, both processors present the following characteristics: - They present a very high performance during the rule application phase, keeping, on the other hand, a level of flexibility and scalability that, although not very high, seems acceptable. - They present a very high level of internal parallelism inside the regions, allowing several rules to be applied at the same time. - They present a universal character, meaning that they are not dependent upon the rules that compose the P System. - They have a non-deterministic behavior that is inherent to the nature of these systems. The first processor uses the concept of the "power set of the application rule set" as well as the concept of the "maximal applicability" number to improve parallelism, and the second one includes, besides the previous ones, the concept of the "applicability domain" to determine the set of rules that may be applied at each moment with the existing resources. Both processors are designed and tested with design software by Altera Corporation and they are ready to be synthesized on FPGAs.
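The multiset notions behind both processors can be illustrated in software, as in the sketch below, which computes the maximal applicability of each rule and the applicability domain of a region; the rules and resources are a toy example, not the hardware implementation.

```python
from collections import Counter

def max_applicability(rule_lhs, region):
    """Maximal applicability of a rule: how many times its left-hand side
    multiset fits into the resources currently present in the region."""
    return min(region.get(sym, 0) // need for sym, need in rule_lhs.items())

def applicability_domain(rules, region):
    """Rules that can be applied at least once with the existing resources."""
    return {name: k for name, lhs in rules.items()
            if (k := max_applicability(lhs, region)) > 0}

# Toy Transition P System region: resources and rule left-hand sides.
region = Counter({"a": 7, "b": 3, "c": 1})
rules = {"r1": {"a": 2, "b": 1},      # r1: a a b     -> ...
         "r2": {"a": 1, "c": 1},      # r2: a c       -> ...
         "r3": {"b": 2, "c": 2}}      # r3: b b c c   -> ...

print(applicability_domain(rules, region))   # {'r1': 3, 'r2': 1}
```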
Resumo:
El enriquecimiento del conocimiento sobre la Irradiancia Solar (IS) a nivel de superficie terrestre, así como su predicción, cobran gran interés para las Energías Renovables (ER) - Energía Solar (ES) - y para distintas aplicaciones industriales o ecológicas. En el ámbito de las ER, el uso óptimo de la ES implica contar con datos de la IS en superficie que ayuden tanto en la selección de emplazamientos para instalaciones de ES, como en su etapa de diseño (dimensionar la producción) y, finalmente, en su explotación. En este último caso, la observación y la predicción son útiles para el mercado energético, la planificación y gestión de la energía (generadoras y operadoras del sistema eléctrico), especialmente en los nuevos contextos de las redes inteligentes de transporte. A pesar de la importancia estratégica de contar con datos de la IS, especialmente los observados por sensores de IS en superficie (los que mejor captan esta variable), estos no siempre están disponibles para los lugares de interés ni con la resolución espacial y temporal deseada. Esta limitación se une a la necesidad de disponer de predicciones a corto plazo de la IS que ayuden a la planificación y gestión de la energía. Se han indagado y caracterizado las Redes de Estaciones Meteorológicas (REM) existentes en España que publican en internet sus observaciones, focalizando en la IS. Se han identificado 24 REM (16 gubernamentales y 8 redes de voluntarios) que aglutinan 3492 estaciones, convirtiéndose éstas en las fuentes de datos meteorológicos utilizados en la tesis. Se han investigado cinco técnicas de estimación espacial de la IS en intervalos de 15 minutos para el territorio peninsular (3 técnicas geoestadísticas, una determinística y el método HelioSat2, basado en imágenes satelitales) con distintas configuraciones espaciales. Cuando el área de estudio tiene una adecuada densidad de observaciones, el mejor método identificado para estimar la IS es el Kriging con Regresión usando variables auxiliares -una de ellas la IS estimada a partir de imágenes satelitales-. De este modo es posible estimar espacialmente la IS más allá de los 25 km identificados en la bibliografía. En caso contrario, se corrobora la idoneidad de utilizar estimaciones a partir de sensores remotos cuando la densidad de observaciones no es adecuada. Se ha experimentado con el modelado de Redes Neuronales Artificiales (RNA) para la predicción a corto plazo de la IS utilizando observaciones próximas (componentes espaciales) en sus entradas, y los resultados son prometedores. Así, los niveles de error disminuyen bajo las siguientes condiciones: (1) cuando el horizonte temporal de predicción es inferior o igual a 3 horas, las estaciones vecinas que se incluyen en el modelo deben encontrarse a una distancia máxima aproximada de 55 km. Esto permite concluir que las RNA son capaces de aprender cómo afectan las condiciones meteorológicas vecinas a la predicción de la IS. ABSTRACT The enrichment of knowledge about Solar Irradiance (SI) at the Earth's surface, as well as its prediction, is of great interest for Renewable Energy (RE) - Solar Energy (SE) - and for various industrial and environmental applications. In the field of RE, the optimal use of SE involves having surface SI data that help in the selection of sites for SE facilities, in their design stage (sizing energy production) and, finally, in their operation.
In the latter case, observation and prediction are useful for the energy market and for energy planning and management (generators and electrical system operators), especially in the new context of smart transport networks (smart grids). Despite the strategic importance of SI data, especially those observed by surface SI sensors (the ones that best measure this environmental variable), these are not always available for the sites of interest nor with the desired spatial and temporal resolution. This limitation adds to the need for short-term predictions of SI to help in energy planning and management. The existing Networks of Weather Stations (NWS) in Spain that publish their observations online have been surveyed and characterized, focusing on SI. 24 NWS have been identified (16 governmental and 8 volunteer networks), comprising 3492 stations, which became the sources of meteorological data used in the thesis. Five techniques for the spatial estimation of SI at 15-minute intervals have been investigated for mainland Spain (three geostatistical techniques, one deterministic method and HelioSat2, a method based on satellite images) with different spatial configurations. When the study area has an adequate density of observations, the best method identified to estimate SI is Regression Kriging with auxiliary variables (one of them being the SI estimated from satellite images). Thus it is possible to spatially estimate SI beyond the 25 km identified in the literature. Otherwise, when the density of observations is inadequate, it is more appropriate to use estimates from remote sensing. Artificial Neural Network (ANN) modelling has been explored for the short-term prediction of SI using observations from neighbouring weather stations (spatial components) as inputs, and the results are promising. The error levels decrease under the following conditions: (1) when the prediction horizon is less than or equal to 3 hours, the best models are the ones that include data from the neighbouring stations (at a maximum distance of approximately 55 km). It is concluded that ANNs are able to learn how neighbouring weather conditions affect the prediction of SI at such spatio-temporal horizons.
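A minimal sketch of the ANN experiment described above: a multilayer perceptron is trained to predict a station's GHI 3 h ahead from current observations at nearby stations. The synthetic series, network size and 55 km neighbourhood are illustrative assumptions, not the thesis data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic stand-in for 15-min GHI series at a target station and at three
# neighbouring stations within ~55 km (values and structure are invented).
T = 5000
base = 500 + 300 * np.sin(np.linspace(0, 60 * np.pi, T))
neighbours = np.column_stack([base * s + rng.normal(0, 25, T)
                              for s in (0.95, 1.00, 1.05)])
target = base + rng.normal(0, 25, T)

horizon = 12                       # 12 x 15 min = 3 h ahead
X = neighbours[:-horizon]          # current neighbour observations
y = target[horizon:]               # target GHI 3 h later

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, shuffle=False)
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

rmse = np.sqrt(np.mean((ann.predict(X_te) - y_te) ** 2))
print(f"3-hour-ahead RMSE on the synthetic series: {rmse:.1f} W/m2")
```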