995 results for Critical coupling parameter
Abstract:
A commonly considered scenario for the sustainable, long-term use of nuclear energy envisages a fleet of liquid-metal-cooled fast reactors (LMFR) dedicated to Pu recycling and the transmutation of minor actinides (MA). Another option is to combine these reactors with a number of accelerator-driven subcritical systems (ADS) devoted exclusively to MA elimination. The design and licensing of these innovative reactors require practical and accurate computational tools that incorporate the knowledge gained from experimental research on new reactor configurations, materials and systems. Although a number of fast reactors have been built and operated worldwide, the operational experience is still limited and not all transients are fully understood. Safety analyses of new LMFR are therefore based essentially on deterministic methods, unlike the modern approaches for light water reactors (LWR), which also benefit from probabilistic methods. The approach most commonly used in LMFR safety studies is to employ a variety of codes, developed from different theories, in search of comprehensive solutions for the transients, including uncertainties. In this framework, new best estimate codes, which do not include conservative approximations, are of prime importance for analysing steady states and transients in fast reactors. This thesis focuses on the development of a coupled code system for realistic analyses of critical fast reactors using the Monte Carlo method. Nowadays, given the greater availability of computational resources, Monte Carlo neutron transport codes can be used in practice to perform detailed full-core calculations, even for highly heterogeneous material configurations. Moreover, Monte Carlo codes are usually taken as the reference for deterministic multigroup diffusion codes in fast reactor applications, because they use point-wise cross sections, an exact geometry model, and intrinsically account for the angular dependence of the flux. This thesis presents a coupling methodology between the well-known code MCNP, which calculates the power generation in the reactor, and the subchannel thermal-hydraulics code COBRA-IV, which obtains the temperature and density distributions in the system. COBRA-IV is an appropriate code for fast reactor applications since it has been validated against experimental results in sodium-cooled rod bundles and includes the most suitable correlations for liquid metals. In a first phase of the thesis, both codes have been coupled at steady state using an iterative method with external file exchange. The main issue in steady-state neutronic and thermal-hydraulic coupling with Monte Carlo codes is the handling of cross sections to account for Doppler broadening when the fuel temperature rises. Among all the available options, the pseudo-material approach has been chosen in this thesis, and it has been shown to provide acceptable results in fast reactor applications.
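As a rough illustration of the pseudo-material idea (a minimal sketch, not taken from the thesis; the square-root-of-temperature weighting shown here is a common choice but is only assumed), a nuclide at an intermediate temperature is represented as a mixture of two library evaluations at bounding temperatures:

```python
# Minimal sketch (not from the thesis): pseudo-material weighting for Doppler
# broadening.  A nuclide at temperature T is represented as a mixture of two
# cross-section library evaluations at bounding temperatures T_low and T_high,
# interpolated here in sqrt(T), which is a common (assumed) choice.
from math import sqrt

def pseudo_material_fractions(T, T_low, T_high):
    """Return (w_low, w_high), the atomic fractions of the low- and
    high-temperature library evaluations for a nuclide at temperature T."""
    if not T_low <= T <= T_high:
        raise ValueError("T must lie between the bounding library temperatures")
    w_high = (sqrt(T) - sqrt(T_low)) / (sqrt(T_high) - sqrt(T_low))
    return 1.0 - w_high, w_high

if __name__ == "__main__":
    # Illustrative example: fuel at 1200 K between 900 K and 1500 K libraries.
    w_low, w_high = pseudo_material_fractions(1200.0, 900.0, 1500.0)
    print(f"fraction of 900 K library:  {w_low:.4f}")
    print(f"fraction of 1500 K library: {w_high:.4f}")
```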
Moreover, the geometric changes caused by large temperature gradients in the core of fast reactors are important for the neutronics as a consequence of the long neutron mean free path in these systems. An additional module has therefore been developed that simulates the reactor geometry in the hot state and allows the reactivity due to core expansion during a transient to be estimated. This module automatically calculates the fuel length, the cladding radius, the fuel assembly pitch and the diagrid radius as functions of temperature. This effect is very relevant in transients without insertion of the shutdown rod banks. Also related to geometric changes, a tool has been implemented that automates the movement of the control rods in search of reactor criticality, or alternatively calculates the control rod worth for a given axial insertion. A second phase of the calculation platform developed here is the dynamic simulation. Since MCNP only performs steady-state calculations for critical or supercritical systems, the most direct solution proposed without modifying the MCNP source code is to use the flux factorization approach, which solves the flux shape and the amplitude separately. Two approximations have been studied in depth for this purpose: the adiabatic and the quasistatic methods. The adiabatic method uses a coupling scheme that staggers the neutronic and thermal-hydraulic calculations in time. MCNP calculates the fundamental mode of the neutron distribution and the reactivity at the end of each time step, and COBRA-IV calculates the thermal properties at the midpoint of each time step. The evolution of the flux amplitude is obtained by solving the point kinetics equations. This method calculates the static reactivity at each time step which, in general, differs from the dynamic reactivity that would be obtained with the exact, time-dependent flux distribution. Nevertheless, for conditions not too far from criticality both reactivities are similar and the method leads to acceptable practical results. Along this line, an improved method has subsequently been developed to try to take into account the effect of the delayed neutron source on the evolution of the flux shape during the transient. The scheme consists of performing one quasistationary calculation per time step with MCNP. The quasistationary simulation is based on the constant delayed neutron source approximation, and consists of giving a specific weight or importance to each computational cycle of the MCNP criticality calculation when estimating the final flux. Both methods have been verified against the results of the diffusion code COBAYA3 on a common and sufficiently significant exercise. Finally, in order to demonstrate the practical applicability of the code, a transient has been simulated for the MYRRHA/FASTEF critical reactor concept, currently in the design phase, with 100 MW of thermal power and cooled by lead-bismuth.

ABSTRACT

Long term sustainable nuclear energy scenarios envisage a fleet of Liquid Metal Fast Reactors (LMFR) for Pu recycling and minor actinide (MA) transmutation, or combined with some accelerator driven systems (ADS) dedicated to MA elimination.
Design and licensing of these innovative reactor concepts require accurate computational tools that incorporate the knowledge obtained from experimental research on new reactor configurations, materials and associated systems. Although a number of fast reactor systems have already been built, the operational experience is still limited, especially for lead-cooled reactors, and not all transients are fully understood. The safety analysis approach for LMFR is therefore based essentially on deterministic methods, unlike the modern approach for Light Water Reactors (LWR), which also benefits from probabilistic methods. Usually, the approach adopted in LMFR safety assessments is to employ a variety of codes, somewhat different from each other, to analyze transients in search of a comprehensive solution, including uncertainties. In this frame, new best estimate simulation codes are of prime importance for analyzing fast reactor steady states and transients. This thesis is focused on the development of a coupled code system for best estimate analysis of fast critical reactors. Currently, owing to the increase in computational resources, Monte Carlo methods for neutron transport can be used for detailed full-core calculations. Furthermore, Monte Carlo codes are usually taken as the reference for deterministic multigroup diffusion codes in fast reactor applications because they employ point-wise cross sections in an exact geometry model and intrinsically account for the directional dependence of the flux. The coupling methodology presented here uses MCNP to calculate the power deposition within the reactor, while the subchannel code COBRA-IV calculates the temperature and density distribution. COBRA-IV is suitable for fast reactor applications because it has been validated against experimental results in sodium-cooled rod bundles, and the appropriate correlations for liquid metal applications have been added to the thermal-hydraulics program. Both codes are coupled at steady state using an iterative method and external file exchange. The main issue in the Monte Carlo/thermal-hydraulics steady-state coupling is the cross section handling needed to take Doppler broadening into account when the temperature rises. Among all the available options, the pseudo-material approach has been chosen in this thesis; it yields reasonable results in fast reactor applications. Furthermore, geometrical changes caused by large temperature gradients in the core are of major importance in fast reactors due to the long neutron mean free path. An additional module has therefore been included in order to simulate the reactor geometry in the hot state or to estimate the reactivity due to core expansion in a transient. The module automatically calculates the fuel length, cladding radius, fuel assembly pitch and diagrid radius as functions of temperature. This effect is crucial in some unprotected transients. Also related to geometrical changes, an automatic control rod movement feature has been implemented in order to reach a just-critical reactor or to calculate control rod worth. A step forward in the coupling platform is the dynamic simulation. Since MCNP performs only steady-state calculations for critical systems, the most straightforward option without modifying the MCNP source code is to use the flux factorization approach, solving the flux shape and amplitude separately. In this thesis two options have been studied for tackling time-dependent neutronic simulations with a Monte Carlo code: the adiabatic and quasistatic methods.
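To make the hot-geometry module concrete, the following sketch scales cold dimensions to the hot state with simple linear thermal expansion; the expansion coefficients, reference temperature and dimensions are illustrative assumptions, not the correlations used in the thesis:

```python
# Minimal sketch (illustrative, not the thesis module): scale cold core
# dimensions to the hot state with linear thermal expansion,
# L_hot = L_cold * (1 + alpha * (T - T_ref)).  All coefficients are examples.
T_REF = 293.15  # K, reference (cold) temperature

def expand(length_cold, alpha, T):
    """Linear thermal expansion of a single dimension."""
    return length_cold * (1.0 + alpha * (T - T_REF))

def hot_geometry(cold, temps):
    """cold: dict of cold dimensions [cm]; temps: dict of average temperatures [K].
    Each dimension is driven by the temperature of the structure controlling it."""
    alpha = {"fuel": 1.1e-5, "clad": 1.6e-5, "diagrid": 1.8e-5}  # 1/K, illustrative
    return {
        "fuel_length": expand(cold["fuel_length"], alpha["fuel"], temps["fuel"]),
        "clad_radius": expand(cold["clad_radius"], alpha["clad"], temps["clad"]),
        "assembly_pitch": expand(cold["assembly_pitch"], alpha["diagrid"], temps["diagrid"]),
        "diagrid_radius": expand(cold["diagrid_radius"], alpha["diagrid"], temps["diagrid"]),
    }

if __name__ == "__main__":
    cold = {"fuel_length": 60.0, "clad_radius": 0.325,
            "assembly_pitch": 10.15, "diagrid_radius": 150.0}
    temps = {"fuel": 1500.0, "clad": 700.0, "diagrid": 600.0}
    print(hot_geometry(cold, temps))
```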
The adiabatic method uses a staggered time coupling scheme for the time advance of the neutronic and thermal-hydraulic calculations. MCNP computes the fundamental mode of the neutron flux distribution and the reactivity at the end of each time step, and COBRA-IV computes the thermal properties at the midpoint of each time step. To calculate the evolution of the flux amplitude, a point kinetics solver is used. This method calculates the static reactivity at each time step which, in general, differs from the dynamic reactivity that would be obtained with the exact, time-dependent flux distribution. Nevertheless, for close-to-critical situations both reactivities are similar and the method leads to acceptable practical results. Along this line, an improved method has been developed in an attempt to take into account the effect of the delayed neutron source on the evolution of the flux shape during the transient. The scheme performs one quasistationary calculation per time step with MCNP. This quasistationary simulation is based on the constant delayed neutron source approach, taking into account the importance of each criticality cycle in the final flux estimate. Both the adiabatic and quasistatic methods have been verified against the diffusion code COBAYA3 using a theoretical kinetics exercise. Finally, a transient in a critical 100 MWth lead-bismuth-eutectic-cooled reactor concept is analyzed using the adiabatic method as an application example in a real system.
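A possible structure of the staggered adiabatic loop is sketched below; run_mcnp_shape and run_cobra are hypothetical placeholder names standing in for the external MCNP and COBRA-IV runs (file exchange omitted), and the amplitude is advanced with a one-delayed-group point kinetics integrator. All constants are illustrative, not the thesis values.

```python
# Minimal sketch of the staggered adiabatic coupling loop (illustrative only).
BETA, GEN_TIME, DECAY = 3.3e-3, 4.0e-7, 0.08   # beta, Lambda [s], lambda [1/s]

def point_kinetics_step(n, c, rho, dt, substeps=50_000):
    """Advance amplitude n and one-group precursor concentration c over dt
    with forward Euler, holding the static reactivity rho constant."""
    h = dt / substeps
    for _ in range(substeps):
        dn = ((rho - BETA) / GEN_TIME) * n + DECAY * c
        dc = (BETA / GEN_TIME) * n - DECAY * c
        n, c = n + h * dn, c + h * dc
    return n, c

def run_mcnp_shape(thermal_state):
    """Placeholder for an MCNP criticality run: returns flux shape and static rho."""
    return {"shape": None, "rho": -1.0e-4}

def run_cobra(shape, amplitude):
    """Placeholder for a COBRA-IV run: returns the updated thermal state."""
    return {"T_fuel": 1200.0, "T_coolant": 700.0}

def adiabatic_transient(n_steps=4, dt=0.5):
    n = 1.0
    c = BETA * n / (GEN_TIME * DECAY)          # equilibrium precursor concentration
    thermal = run_cobra(None, n)               # initial thermal state
    for step in range(n_steps):
        neut = run_mcnp_shape(thermal)         # shape and static rho for this step
        n, c = point_kinetics_step(n, c, neut["rho"], dt)
        thermal = run_cobra(neut["shape"], n)  # thermal update with the new amplitude
        print(f"t = {(step + 1) * dt:4.1f} s   amplitude = {n:.4f}")
    return n

if __name__ == "__main__":
    adiabatic_transient()
```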
Abstract:
Wave energy conversion differs essentially from other renewable energies in that the dependence between device design and the energy resource is stronger. Dimensioning is therefore considered a key stage in any design project for Wave Energy Converters (WEC). Location, WEC concept, Power Take-Off (PTO) type, control strategy and hydrodynamic resonance considerations are some of the critical aspects to take into account to achieve a good performance. The paper proposes an automatic dimensioning methodology to be applied at the initial design stages, and the following elements are described to carry out the study: an optimization design algorithm, its objective functions and restrictions, a PTO model, and a procedure to evaluate the WEC energy production. A parametric analysis is then included considering different combinations of the key parameters previously introduced. A variety of study cases are analysed from the point of view of energy production for different design parameters, and all of them are compared with a reference case. Finally, a discussion is presented based on the results obtained, and some recommendations for facing the WEC design stage are given.
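As a rough illustration of the kind of energy-production evaluation such a methodology rests on (hypothetical numbers, not the paper's model), mean annual energy production can be estimated by weighting a device power matrix with a sea-state scatter table:

```python
# Illustrative sketch (hypothetical numbers, not the paper's model): estimate a
# WEC's mean annual energy production (AEP) from a sea-state scatter table
# (fraction of the year in each Hs/Te bin) and a device power matrix [kW].
HOURS_PER_YEAR = 8766.0

scatter = {  # (Hs [m], Te [s]) -> occurrence fraction
    (1.0, 6.0): 0.30, (2.0, 8.0): 0.40, (3.0, 10.0): 0.20, (4.0, 12.0): 0.10,
}
power_matrix = {  # (Hs [m], Te [s]) -> mean absorbed power [kW]
    (1.0, 6.0): 15.0, (2.0, 8.0): 60.0, (3.0, 10.0): 130.0, (4.0, 12.0): 210.0,
}

def annual_energy_production(scatter, power_matrix):
    """Return AEP in MWh/year as the occurrence-weighted sum of bin powers."""
    kwh = sum(frac * HOURS_PER_YEAR * power_matrix[bin_]
              for bin_, frac in scatter.items())
    return kwh / 1000.0

if __name__ == "__main__":
    print(f"AEP ~ {annual_energy_production(scatter, power_matrix):.0f} MWh/year")
```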
Abstract:
Propagation of discharges in cortical and thalamic systems, which is used as a probe for examining network circuitry, is studied by constructing a one-dimensional model of integrate-and-fire neurons that are coupled by excitatory synapses with delay. Each neuron fires only one spike. The velocity and stability of propagating continuous pulses are calculated analytically. Above a certain critical value of the constant delay, these pulses lose stability. Instead, lurching pulses propagate with discontinuous and periodic spatio-temporal characteristics. The parameter regime for which lurching occurs is strongly affected by the footprint (connectivity) shape; bistability may occur with a square footprint shape but not with an exponential footprint shape. For strong synaptic coupling, the velocity of both continuous and lurching pulses increases logarithmically with the synaptic coupling strength gsyn for an exponential footprint shape, and it is bounded for a step footprint shape. We conclude that the differences in velocity and shape between the front of thalamic spindle waves in vitro and cortical paroxysmal discharges stem from their different effective delay; in thalamic networks, large effective delay between inhibitory neurons arises from their effective interaction via the excitatory cells which display postinhibitory rebound.
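A minimal numerical counterpart of this model (a simulation sketch under simplifying assumptions, not the authors' analytical treatment) is given below: a one-dimensional chain of integrate-and-fire neurons, each firing at most once, coupled by delayed excitatory synapses with an exponential footprint. All parameter values are illustrative.

```python
import numpy as np

# Simulation sketch (simplifying assumptions, not the paper's analytical model).
N, dx = 200, 0.01                     # number of neurons, spacing [mm]
tau, theta = 10.0, 1.0                # membrane time constant [ms], firing threshold
gsyn, sigma, delay = 5.0, 0.05, 2.0   # coupling strength, footprint length [mm], delay [ms]
dt, t_max = 0.01, 200.0               # time step and simulation length [ms]

w = lambda d: np.exp(-d / sigma) / (2.0 * sigma)   # exponential footprint shape

V = np.zeros(N)                   # membrane potentials
fired = np.full(N, np.inf)        # spike time of each neuron (inf = not yet fired)
fired[:10] = 0.0                  # stimulate the left edge of the chain at t = 0

t = 0.0
while t < t_max and np.isinf(fired).any():
    t += dt
    # delta-pulse synaptic kicks arriving one delay after each presynaptic spike
    arriving = np.abs(t - (fired + delay)) < dt / 2.0
    if arriving.any():
        src = np.where(arriving)[0]
        dist = np.abs(np.arange(N)[:, None] - src[None, :]) * dx
        V += gsyn * (w(dist) * dx).sum(axis=1)
    V -= V / tau * dt                               # leak
    just_fired = np.isinf(fired) & (V >= theta)
    fired[just_fired] = t                           # each neuron fires only once

# crude velocity estimate from the right half of the chain, if the pulse got there
i0, i1 = N // 2, N - 1
if np.isfinite(fired[i1]):
    print(f"propagation velocity ~ {(i1 - i0) * dx / (fired[i1] - fired[i0]):.3f} mm/ms")
else:
    print("pulse failed to propagate to the end of the chain")
```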
Abstract:
The reason that the indefinite exponential increase in the number of one's ancestors does not take place is found in the law of sibling interference, which can be expressed by the following simple equation: $\frac{N_n}{ASZ} \times 2 = N_{n+1}$, where $N_n$ is the number of ancestors in the nth generation, ASZ is the average sibling size of these ancestors, and $N_{n+1}$ is the number of ancestors in the next older generation (n + 1). Accordingly, the exponential increase in the number of one's ancestors is an initial anomaly that occurs while ASZ remains at 1. Once ASZ begins to exceed 1, the rate of increase in the number of ancestors is progressively curtailed, falling further and further behind the exponential increase rate. Eventually, ASZ reaches 2, and at that point, the number of ancestors stops increasing for two generations. These two generations, named AN SA and AN SA + 1, are the most critical in the ancestry, for one's ancestors at that point come to represent all the progeny-produced adults of the entire ancestral population. Thereafter, the fate of one's ancestors becomes the fate of the entire population. If the population to which one belongs is a successful, slowly expanding one, the number of ancestors would slowly decline as you move toward the remote past. This is because ASZ would exceed 2. Only when ASZ is less than 2 would the number of ancestors increase beyond the AN SA and AN SA + 1 generations. Since the above is an indication of a failing population on the way to extinction, there had to be the previous AN SA involving a far greater number of individuals for such a population. Simulations indicated that for a member of a continuously successful population, the AN SA ancestors might have numbered as many as 5.2 million, the AN SA generation being the 28th generation in the past. However, because of the law of increasingly irrelevant remote ancestors, only a very small fraction of the AN SA ancestors would have left genetic traces in the genome of each descendant of today.
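A small numerical illustration of this relation (with a purely hypothetical ASZ schedule) shows how sibling size first permits and then curbs the growth in ancestor numbers:

```python
# Illustrative iteration of the sibling-interference relation
# N_{n+1} = 2 * N_n / ASZ_n, with a hypothetical schedule for how the
# average sibling size ASZ grows toward (and past) 2 in older generations.
def ancestor_counts(generations, asz_of):
    counts = [1.0]                      # generation 0: the individual
    for n in range(generations):
        counts.append(2.0 * counts[-1] / asz_of(n))
    return counts

# Hypothetical ASZ schedule: 1 for recent generations, then rising to 2.5.
asz = lambda n: 1.0 if n < 10 else min(1.0 + 0.15 * (n - 10), 2.5)

for n, c in enumerate(ancestor_counts(40, asz)):
    if n % 5 == 0:
        print(f"generation {n:2d}: ~{c:,.0f} ancestors")
```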
Abstract:
Each G protein-coupled receptor recognizes only a distinct subset of the many structurally closely related G proteins expressed within a cell. How this selectivity is achieved at the molecular level is not well understood, particularly since no specific point-to-point contact sites between a receptor and its cognate G protein(s) have been identified. In this study, we demonstrate that a 4-aa epitope on the m2 muscarinic acetylcholine receptor, a prototypical Gi/o-coupled receptor, can specifically recognize the C-terminal 5 aa of alpha subunits of the Gi/o protein family. The m2 receptor residues involved in this interaction are predicted to be located on one side of an alpha-helical receptor region present at the junction between the third intracellular loop and the sixth transmembrane domain. Coexpression studies with hybrid m2/m3 muscarinic receptors and mutant G-protein alpha q subunits showed that the receptor/G-protein contact site identified in this study is essential for coupling specificity and G-protein activation.
Abstract:
We present Tethered Monte Carlo, a simple, general purpose method of computing the effective potential of the order parameter (Helmholtz free energy). This formalism is based on a new statistical ensemble, closely related to the micromagnetic one, but with an extended configuration space (through Creutz-like demons). Canonical averages for arbitrary values of the external magnetic field are computed without additional simulations. The method is put to work in the two-dimensional Ising model, where the existence of exact results enables us to perform high precision checks. A rather peculiar feature of our implementation, which employs a local Metropolis algorithm, is the total absence, within errors, of critical slowing down for magnetic observables. Indeed, high accuracy results are presented for lattices as large as L = 1024.
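For context, the sketch below implements only the standard local Metropolis update for the 2D Ising model, the baseline algorithm mentioned in the abstract; it is not the tethered ensemble itself, and the lattice size and temperature are illustrative.

```python
import numpy as np

# Standard local Metropolis sweep for the 2D Ising model (baseline algorithm;
# this is NOT the tethered ensemble described in the abstract).
rng = np.random.default_rng(0)
L, beta = 32, 0.44                     # lattice size and inverse temperature (illustrative)
spins = rng.choice([-1, 1], size=(L, L))

def metropolis_sweep(spins, beta):
    n = spins.shape[0]
    for _ in range(spins.size):
        i, j = rng.integers(n), rng.integers(n)
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
              spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * nb    # energy change from flipping spin (i, j)
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

for sweep in range(50):
    metropolis_sweep(spins, beta)
print("magnetization per spin:", spins.mean())
```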
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Adsorption of ethylene and ethane on graphitized thermal carbon black and in slit pores whose walls are composed of graphene layers is studied in detail to investigate the packing efficiency, the two-dimensional critical temperature, and the variation of the isosteric heat of adsorption with loading and temperature. Here we used Monte Carlo simulation in the grand canonical ensemble. A number of two-center Lennard-Jones (2C-LJ) potential models are investigated to study the impact of the choice of potential model on the description of adsorption behavior. Two 2C-LJ potential models are chosen for our investigation: (i) the UA-TraPPE-LJ model of Martin and Siepmann (J. Phys. Chem. B 1998, 102, 2569-2577) for ethane and of Wick et al. (J. Phys. Chem. B 2000, 104, 8008-8016) for ethylene, and (ii) the AUA4-LJ model of Ungerer et al. (J. Chem. Phys. 2000, 112, 5499-5510) for ethane and of Bourasseau et al. (J. Chem. Phys. 2003, 118, 3020-3034) for ethylene. These models are used to study the adsorption of ethane and ethylene on graphitized thermal carbon black. It is found that the solid-fluid binary interaction parameter is a function of adsorbate and temperature, and the adsorption isotherms and heat of adsorption are well described by both the UA-TraPPE and AUA models, although the UA-TraPPE model performs slightly better. However, the local distributions predicted by these two models are slightly different. These two models are also used to explore two-dimensional condensation on graphitized thermal carbon black, for which the two-dimensional critical temperatures are found to be about 110 K for ethylene and 120 K for ethane.
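The following sketch shows the generic ingredients of such a grand canonical Monte Carlo calculation: a Lennard-Jones site-site energy and the standard insertion/deletion acceptance probabilities. The parameters are illustrative, not the TraPPE or AUA4 values used in the study.

```python
import math

# Illustrative GCMC building blocks (not the TraPPE/AUA4 parameterizations).
K_B = 1.380649e-23  # J/K

def lj(r, eps, sigma):
    """Lennard-Jones pair energy for a site-site distance r."""
    x6 = (sigma / r) ** 6
    return 4.0 * eps * (x6 * x6 - x6)

def acc_insert(N, V, T, mu, dU, lam):
    """Acceptance probability for inserting one molecule.
    dU is the energy change; lam is the thermal de Broglie wavelength."""
    return min(1.0, V / (lam ** 3 * (N + 1)) * math.exp((mu - dU) / (K_B * T)))

def acc_delete(N, V, T, mu, dU, lam):
    """Acceptance probability for deleting one molecule."""
    return min(1.0, lam ** 3 * N / V * math.exp((-mu - dU) / (K_B * T)))

if __name__ == "__main__":
    eps, sigma = 98.0 * K_B, 3.75e-10        # rough united-atom CH3-CH3 values
    print("u(4 A) =", lj(4.0e-10, eps, sigma), "J")
    # hypothetical state point, just to exercise the acceptance rule
    print(acc_insert(N=100, V=(5e-9) ** 3, T=120.0, mu=-2.0e-20, dU=-1.0e-20, lam=1.5e-11))
```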
Abstract:
We consider the solvability of the Neumann problem for the equation $-\Delta u + \lambda u = 0$ in $\Omega$, $\partial u/\partial \nu = Q(x)\,|u|^{q-2}u$ on $\partial\Omega$, where $Q$ is a positive and continuous coefficient on $\partial\Omega$, $\lambda$ is a parameter and $q = 2(N-1)/(N-2)$ is the critical Sobolev exponent for the trace embedding of $H^{1}(\Omega)$ into $L^{q}(\partial\Omega)$. We investigate the joint effect of the mean curvature of $\partial\Omega$ and the shape of the graph of $Q$ on the existence of solutions. As a by-product we establish a sharp Sobolev inequality for the trace embedding. In Section 6 we establish the existence of solutions when the parameter $\lambda$ interferes with the spectrum of $-\Delta$ with Neumann boundary conditions. We apply a min-max principle based on topological linking.
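For orientation, solutions of this problem are usually obtained as critical points of the associated energy functional; the form below is the standard one for such trace-critical Neumann problems and is assumed here rather than quoted from the paper.

```latex
% Standard energy functional for the Neumann problem above (assumed form,
% not quoted from the paper): critical points of J on H^1(\Omega) are weak
% solutions, and the min-max (linking) argument is applied to J.
\[
  J(u) \;=\; \frac{1}{2}\int_{\Omega}\bigl(|\nabla u|^{2} + \lambda u^{2}\bigr)\,dx
  \;-\; \frac{1}{q}\int_{\partial\Omega} Q(x)\,|u|^{q}\,dS ,
  \qquad q = \frac{2(N-1)}{N-2}.
\]
```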
Abstract:
This paper presents a review of the modelling and control of biological nutrient removal (BNR) activated sludge processes for wastewater treatment using distributed parameter models described by partial differential equations (PDE). Numerical methods for solving the BNR activated sludge process dynamics are reviewed, including the method of lines, global orthogonal collocation and orthogonal collocation on finite elements. Fundamental techniques and conceptual advances of the distributed parameter approach to the dynamics and control of activated sludge processes are briefly described. A critical analysis of the advantages of the distributed parameter approach over the conventional modelling strategy shows that the activated sludge process is more adequately described by the former, and the method is recommended for application in the wastewater industry. (c) 2006 Elsevier Ltd. All rights reserved.
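As a small illustration of the first numerical approach mentioned (the method of lines), the sketch below discretizes a generic one-dimensional advection-diffusion-reaction balance in space and integrates the resulting ODE system; the first-order kinetics and parameters are placeholders, not an ASM-type BNR model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch for a generic 1-D advection-diffusion-reaction balance
#   dC/dt = -v dC/dz + D d2C/dz2 - k C
# standing in for a distributed-parameter bioreactor model (placeholder
# first-order kinetics, illustrative parameters; not an ASM model).
L_reactor, n = 10.0, 50                       # reactor length [m], grid points
z = np.linspace(0.0, L_reactor, n)
dz = z[1] - z[0]
v, D, k, C_in = 0.05, 0.01, 2.0e-4, 100.0     # m/s, m2/s, 1/s, g/m3

def rhs(t, C):
    dCdt = np.empty_like(C)
    for i in range(n):
        up = C_in if i == 0 else C[i - 1]         # inlet boundary (Dirichlet)
        down = C[i] if i == n - 1 else C[i + 1]   # outlet: zero-gradient
        adv = -v * (C[i] - up) / dz               # upwind advection
        diff = D * (down - 2.0 * C[i] + up) / dz ** 2
        dCdt[i] = adv + diff - k * C[i]
    return dCdt

sol = solve_ivp(rhs, (0.0, 3600.0), np.zeros(n), method="BDF", t_eval=[3600.0])
print("outlet concentration after 1 h:", float(sol.y[-1, -1]), "g/m3")
```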
Abstract:
Structure–activity relationships are indispensable for identifying the most effective antioxidants. The advantages of in vitro over in vivo experiments for obtaining these relationships are that the structure is better defined in vitro, since less metabolism takes place, and that the concentration, a parameter that is directly linked to activity, is more accurately controlled. Moreover, the reactions that occur in vivo, including feedback mechanisms, are often too multi-faceted and diverse to be compensated for during the assessment of a single structure–activity relationship. Pitfalls of in vitro antioxidant research include: (i) by definition, antioxidants are not stable, and substantial amounts of oxidation products are formed; and (ii) during the scavenging of reactive species, reaction products of the antioxidants accumulate. Another problem is that the maintenance of a defined antioxidant concentration is affected by oxidation and the formation of reaction products during the actual antioxidant reaction, as well as by the compartmentalization of the antioxidant and the reactive species in the in vitro test system. Determinations of in vitro structure–activity relationships are therefore subject to many competing variables and should always be evaluated critically. (c) 2005 Published by Elsevier B.V.
Abstract:
This article qualitatively analyzes the Critical Success Factors (CSFs) for Information Systems (IS) executive careers based on evidence gathered from five case studies carried out in 1997. Typical IS executive career paths are presented in a time-series style, and the CSFs are interpreted within a descriptive framework obtained by synthesising the case data on the basis of Social Cognitive Theory. The descriptive framework suggests that successful IS executive careers are most likely to be achieved by well educated and experienced IS employees who have the right attitude towards both their career and their work, together with good performance. They also exhibit an ability for self-learning and for anticipating future IT uses, as well as proficient IS management knowledge and skills, while working within an appropriate organizational environment. Moreover, the framework systematically indicates the interactions between the coupling factors in the typical career development processes. This provides a benchmark for employees who are aiming at a senior IS executive career, against which they can compare their own achievements and aspirations. It also raises propositions for further research on theory building.
Abstract:
We apply the well-known nonlinear diffraction theory governing the focusing of a powerful light beam of arbitrary shape in a medium with Kerr nonlinearity to the analysis of femtosecond (fs) laser processing of dielectrics in the sub-critical regime (input power less than the critical power of self-focusing). Simple analytical expressions are derived for the input beam power and the spatial focusing parameter (numerical aperture) required to reach the inscription threshold. The application of non-Gaussian laser beams for better-controlled fs inscription at higher powers is also discussed. © 2007 Optical Society of America.
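For scale, the critical power of self-focusing for a Gaussian beam is commonly estimated from the textbook relation P_cr ≈ 3.77 λ² / (8π n0 n2); the snippet below evaluates it for fused-silica-like values (this relation is assumed here, not taken from the paper).

```python
import math

# Textbook estimate (assumption, not the paper's derivation) of the critical
# power for self-focusing of a Gaussian beam: P_cr ~ 3.77 * lambda^2 / (8*pi*n0*n2).
def critical_power(wavelength, n0, n2):
    """wavelength [m], n0 linear index, n2 Kerr index [m^2/W] -> P_cr [W]."""
    return 3.77 * wavelength ** 2 / (8.0 * math.pi * n0 * n2)

if __name__ == "__main__":
    # Fused-silica-like values at 800 nm (approximate literature numbers)
    p_cr = critical_power(800e-9, n0=1.45, n2=2.5e-20)
    print(f"P_cr ~ {p_cr / 1e6:.1f} MW")
```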
Abstract:
Contrast susceptibility is defined as the difference in visual acuity recorded for high and low contrast optotypes. Other researchers refer to this parameter as "normalised low contrast acuity". Pilot surveys have revealed that contrast susceptibility deficits are more strongly related to driving accident involvement than are deficits in high contrast visual acuity. It has been hypothesised that driving situation avoidance is based purely on high contrast visual acuity; hence, the relationship between high contrast visual acuity and accidents is masked by situation avoidance, whilst drivers with contrast susceptibility deficits remain prone to accidents in poor visibility conditions. A national survey carried out to test this hypothesis provided no support either for the link between contrast susceptibility deficits and accident involvement or for the proposed hypothesis. Further, systematically worse contrast susceptibility scores emerged from vision screeners compared with wall-mounted test charts. This discrepancy was not due to variations in test luminance or instrument myopia; instead, optical imperfections inherent in vision screeners were considered responsible. Although contrast susceptibility is unlikely to provide a useful means of screening drivers' vision, previous research does support its ability to detect visual deficits that may influence everyday tasks. In this respect, individual variations in contrast susceptibility were found to reflect variations in the contrast sensitivity function, a parameter that provides a global estimate of human contrast sensitivity.