912 results for Realistic design fire conditions
Abstract:
The rotor blade tip and casing are among the most critical gas turbine engine components, as they are exposed to severe thermal loads. Protecting the turbine materials under these conditions is a significant design challenge. Because of geometric complexity and experimental limitations, Computational Fluid Dynamics (CFD) tools have been used to predict blade tip leakage flow aerodynamics and heat transfer at typical engine operating conditions. In this paper, the effect of turbine inlet temperature on the tip leakage flow structure and heat transfer is studied numerically. Uniform low (LTIT: 444 K) and high (HTIT: 800 K) turbine inlet temperatures are considered. The results show that the higher turbine inlet temperature yields larger velocity and temperature variations in the leakage flow aerodynamics and heat transfer. For a given turbine geometry and on-design operating conditions, the turbine power output increases by a factor of 1.48 when the turbine inlet temperature increases by a factor of 1.80, whereas the averaged heat fluxes on the casing and the blade tip become 2.71 and 2.82 times larger, respectively. Therefore, roughly 2.8 times more cooling capacity is required to maintain the same turbine material temperature. Furthermore, the maximum heat flux on the blade tip in the HTIT case reaches up to 3.348 times that of the LTIT case. The effect of stator-rotor interaction on heat transfer is also explored using unsteady simulations.
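The scaling factors quoted above can be collected in a few lines of arithmetic (a sketch; the 1.48 power ratio and the 2.71/2.82 flux ratios are the reported values, not recomputed here):

```python
# Scaling factors quoted in the abstract (LTIT = 444 K, HTIT = 800 K)
ltit, htit = 444.0, 800.0

tit_ratio = htit / ltit                          # temperature ratio, ~1.80
power_ratio = 1.48                               # reported power-output increase
flux_ratio_casing, flux_ratio_tip = 2.71, 2.82   # reported heat-flux increases

# Cooling capacity must scale with the larger of the two flux increases
cooling_scale = max(flux_ratio_casing, flux_ratio_tip)

print(f"TIT ratio: {tit_ratio:.2f}, extra cooling needed: ~{cooling_scale:.1f}x")
```

This makes the asymmetry of the trade-off explicit: a 1.80x inlet-temperature increase buys 1.48x power but demands ~2.8x cooling.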
Abstract:
Heating rate is one of the main variables that define a firing cycle. In industrial processes that use high temperatures, higher heating rates can reduce production costs and increase productivity. The use of faster and more efficient firing cycles has been little investigated by the structural ceramics industry in Brazil. However, one of the possibilities for modernizing the sector is the use of roller kilns and the adoption of natural gas as fuel. Thus, the purpose of this study is to investigate the effect of heating rate on the technological properties of structural ceramic products. Clay raw materials from the main ceramic industries in the state of Rio Grande do Norte were characterized. Some of the characterized raw materials were formulated to obtain the best physical and mechanical properties. Next, raw materials and formulations were selected to study the influence of heating rate on the final properties of the ceramic materials. The samples were shaped by pressing and extrusion and subjected to heating rates of 1 °C/min, 10 °C/min and 20 °C/min, with final temperatures of 850 °C, 950 °C and 1050 °C. Discontinuous cycles with rates of 10 °C/min or 15 °C/min up to 600 °C and a rate of 20 °C/min up to the final temperature were also investigated. Technological properties were determined for all samples, and microstructural analysis was carried out for a number of firing conditions. The results indicate that firing cycles faster and more efficient than those currently in practice could be used, with only some clay bodies limited to certain firing conditions. The best results were obtained for samples subjected to slow cycles up to 600 °C and fast firing up to 950 °C. This paper presents for the first time the use of fast firing rates for these raw materials and clay formulations and seeks to determine the ideal body compositions and processing conditions for shorter firing times, thus enabling the use of roller kilns and natural gas in structural ceramics industries.
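The time saved by the faster schedules is easy to quantify (a sketch assuming a 25 °C starting temperature and no hold times at the final temperature, neither of which is stated in the abstract):

```python
def ramp_minutes(t_start, t_end, rate):
    """Minutes to ramp from t_start to t_end (degrees C) at rate (degrees C/min)."""
    return (t_end - t_start) / rate

slow = ramp_minutes(25, 950, 1)    # slowest schedule to 950 C: 925 min
fast = ramp_minutes(25, 950, 20)   # fastest continuous schedule: 46.25 min

# Discontinuous cycle: 10 C/min up to 600 C, then 20 C/min up to 950 C
mixed = ramp_minutes(25, 600, 10) + ramp_minutes(600, 950, 20)  # 75 min

print(f"slow: {slow} min, fast: {fast} min, discontinuous: {mixed} min")
```

Even the discontinuous cycle, which protects the body through the low-temperature range, cuts heating time by more than a factor of ten relative to the 1 °C/min schedule.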
Abstract:
This study evaluated the effect of dormancy-breaking methods and storage environment on the physiological and phytopathological quality of canafístula (Peltophorum dubium) seeds. The seeds were subjected to the following dormancy-breaking treatments: scarification with sandpaper (grit 200); immersion in water at room temperature for 24 and 72 h; immersion in sulfuric acid for 2, 6, 10, 15, 20 and 30 min; immersion in hot water (70, 80 and 90 °C); and moistening of the substrate with 0.2% KNO3 solution. The seeds were stored at room temperature and at 10 °C for 210 days. The effects of the treatments and of storage were evaluated through water content, germination tests (five replicates of 30 seeds), seedling length, and seed health (400 seeds), with incubation for eight days at 22-25 °C. Statistical analysis used a completely randomized design in a 2 x 14 factorial scheme (storage conditions x dormancy-breaking treatments), with means compared by Tukey's test (P < 0.05). For non-stored seeds, the best treatments for breaking dormancy and promoting germination were scarification with sandpaper or sulfuric acid for 15 to 30 min; for stored seeds, immersion in hot water (70 to 80 °C) performed best. The fungi detected on the seeds were Pestalotia sp., Alternaria sp., Rhizopus sp., Nigrospora sp., Curvularia sp., Fusarium sp., Rhizoctonia sp., Aspergillus sp., Cladosporium sp. and Fusarium semitectum.
Abstract:
We investigate how the initial geometry of a heavy-ion collision is transformed into final flow observables by solving event-by-event ideal hydrodynamics with realistic fluctuating initial conditions. We study quantitatively to what extent anisotropic flow (v_n) is determined by the initial eccentricity ε_n for a set of realistic simulations, and we discuss which definition of ε_n gives the best estimator of v_n. We find that the common practice of using an r^2 weight in the definition of ε_n in general results in a poorer predictor of v_n than using an r^n weight, for n > 2. We similarly study the importance of additional properties of the initial state. For example, we show that in order to correctly predict v_4 and v_5 for noncentral collisions, one must take into account nonlinear terms proportional to ε_2^2 and ε_2 ε_3, respectively. We find that it makes no difference whether one calculates the eccentricities over a range of rapidity or in a single slice at z = 0, nor is it important whether one uses an energy or entropy density weight. This knowledge will be important for making a more direct link between experimental observables and hydrodynamic initial conditions, the latter being poorly constrained at present.
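The two eccentricity definitions being compared can be written out explicitly (these are the standard moment definitions used in this literature; the abstract itself does not spell them out). With angle brackets denoting an average over the transverse plane weighted by the energy or entropy density, in polar coordinates (r, φ):

```latex
% r^n-weighted eccentricity (the better v_n estimator for n > 2)
\varepsilon_n \, e^{i n \Phi_n}
  = -\,\frac{\left\langle r^{n} e^{i n \phi} \right\rangle}
            {\left\langle r^{n} \right\rangle}, \qquad n \ge 2,

% r^2-weighted variant (common practice, poorer predictor for n > 2)
\varepsilon_n' \, e^{i n \Phi_n'}
  = -\,\frac{\left\langle r^{2} e^{i n \phi} \right\rangle}
            {\left\langle r^{2} \right\rangle}.
```

The two coincide for n = 2; the abstract's finding is that for n > 2 the first form is the better predictor of v_n.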
Abstract:
Successful conservation of tropical montane forest, one of the most threatened ecosystems on earth, requires detailed knowledge of its biogeochemistry. Of particular interest is the response of the biogeochemical element cycles to external influences such as element deposition or climate change. The overall objective of my study was therefore to contribute to an improved understanding of the role and functioning of the Andean tropical montane forest. Specifically, my objectives were to determine (1) the role of long-range transported aerosols and their transport mechanisms, and (2) the role of short-term extreme climatic events in the element budget of Andean tropical forest. In a whole-catchment approach covering three 8-13 ha microcatchments under tropical montane forest on the east-exposed slope of the eastern cordillera in the south Ecuadorian Andes, at 1850-2200 m above sea level, I monitored, at least at weekly resolution, the concentrations and fluxes of Ca, Mg, Na, K, NO3-N, NH4-N, DON, P, S, TOC, Mn, and Al in bulk deposition, throughfall, litter leachate, soil solution at 0.15 and 0.3 m depth, and runoff between May 1998 and April 2003. I also used meteorological data from my study area collected by cooperating researchers and the Brazilian meteorological service (INPE), as well as remote sensing products of the North American and European space agencies NASA and ESA. My results show that (1) there was strong interannual variation in the deposition of Ca (4.4-29 kg ha-1 a-1), Mg (1.6-12), and K (9.8-30) between 1998 and 2003. High deposition changed the Ca and Mg budgets of the catchments from loss to retention, suggesting that the additionally available Ca and Mg was used by the ecosystem. Increased base metal deposition was related to dust outbursts from the Sahara and an Amazonian precipitation pattern with trans-regional dry spells allowing dust transport to the Andes. The increased base metal deposition coincided with a strong La Niña event in 1999/2000.
There were also significantly elevated H+, N, and Mn depositions during the annual biomass burning period in the Amazon basin. Elevated H+ deposition during the biomass burning period caused elevated base metal loss from the canopy and the organic horizon and deteriorated the already low base metal supply of the vegetation. Nitrogen was retained only during biomass burning, not during non-fire conditions when deposition was much smaller. Biomass burning-related aerosol emissions in Amazonia therefore seem large enough to substantially increase element deposition at the western rim of Amazonia. In particular, the related increase in acid deposition impoverishes already base-metal-scarce ecosystems. As biomass burning is most intense during El Niño situations, a shortened ENSO cycle under global warming would likely enhance acid deposition at the study forest. (2) Storm events causing near-surface water flow through the C- and nutrient-rich topsoil during rainstorms were the major export pathway for C, N, Al, and Mn (contributing >50% of the total export of these elements). Near-surface flow also accounted for one third of total base metal export. This demonstrates that storm-event-related near-surface flow markedly affects the cycling of many nutrients in steep tropical montane forests. Changes in the rainfall regime possibly associated with global climate change will therefore also change element export from the study forest. Element budgets of Andean tropical montane rain forest proved to be markedly affected by long-range transport of Saharan dust, biomass burning-related aerosols, and heavy rainfall during storm events. Thus, increased acid and nutrient deposition and global climate change will probably drive the tropical montane forest to another state, with unknown consequences for its functions and biological diversity.
Abstract:
In animal experiments, animals, husbandry and test procedures are traditionally standardized to maximize test sensitivity and minimize animal use, assuming that this will also guarantee reproducibility. However, by reducing within-experiment variation, standardization may limit inference to the specific experimental conditions. Indeed, we have recently shown in mice that standardization may generate spurious results in behavioral tests, accounting for poor reproducibility, and that this can be avoided by population heterogenization through systematic variation of experimental conditions. Here, we examined whether a simple form of heterogenization effectively improves reproducibility of test results in a multi-laboratory situation. Each of six laboratories independently ordered 64 female mice of two inbred strains (C57BL/6NCrl, DBA/2NCrl) and examined them for strain differences in five commonly used behavioral tests under two different experimental designs. In the standardized design, experimental conditions were standardized as much as possible in each laboratory, while they were systematically varied with respect to the animals' test age and cage enrichment in the heterogenized design. Although heterogenization tended to improve reproducibility by increasing within-experiment variation relative to between-experiment variation, the effect was too weak to account for the large variation between laboratories. However, our findings confirm the potential of systematic heterogenization for improving reproducibility of animal experiments and highlight the need for effective and practicable heterogenization strategies.
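The heterogenized design described above, with systematic variation of test age and cage enrichment, amounts to crossing the varied factors into a small set of conditions and allocating the animals across them (a sketch; the two factor levels shown are hypothetical, as the abstract does not give them):

```python
from itertools import product

# Hypothetical levels for the two systematically varied factors
test_ages_weeks = [8, 12]
cage_enrichment = ["standard", "enriched"]

# Cross the factors into the heterogenized conditions
conditions = list(product(test_ages_weeks, cage_enrichment))

# Allocate the 64 mice per laboratory evenly across the conditions
mice_per_condition = 64 // len(conditions)

print(conditions, mice_per_condition)
```

The point of the design is that within-experiment variation now spans these conditions, so a strain difference that survives it is more likely to replicate in another laboratory.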
Abstract:
Reducing the uncertainties related to blade dynamics by improving the quality of numerical simulations of the fluid-structure interaction process is key to a breakthrough in wind-turbine technology. A fundamental step in that direction is the implementation of aeroelastic models capable of capturing the complex features of innovative prototype blades, so they can be tested at realistic full-scale conditions with a reasonable computational cost. We make use of a code based on a combination of two advanced numerical models implemented on a parallel HPC supercomputer platform. First, a model of the structural response of heterogeneous composite blades, based on a variation of the dimensional reduction technique proposed by Hodges and Yu. This technique reduces the geometrical complexity of the blade section into a stiffness matrix for an equivalent beam. The reduced 1-D strain energy is equivalent to the actual 3-D strain energy in an asymptotic sense, allowing accurate modeling of the blade structure as a 1-D finite-element problem and substantially reducing the computational effort required to model the structural dynamics at each time step. Second, a novel aerodynamic model based on an advanced implementation of BEM (Blade Element Momentum) theory, in which all velocities and forces are re-projected through orthogonal matrices into the instantaneous deformed configuration, to fully include the effects of large displacements and rotation of the airfoil sections in the computation of aerodynamic forces. This allows the aerodynamic model to account for the complex flexo-torsional deformation captured by the more sophisticated structural model mentioned above. In this thesis we have successfully developed a powerful computational tool for the aeroelastic analysis of wind-turbine blades.
Owing to the features mentioned above, namely a full representation of the combined modes of deformation of the blade as a complex structural part and of their effects on the aerodynamic loads, it constitutes a substantial advancement over the state-of-the-art aeroelastic models currently available, such as the FAST-Aerodyn suite. In this thesis we also include the results of several experiments on the NREL-5MW blade, which is widely accepted today as a benchmark blade, together with some modifications intended to explore the capabilities of the new code in capturing features of blade dynamic behavior that are normally overlooked by existing aeroelastic models.
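The re-projection step can be illustrated in two dimensions (a minimal sketch, not the thesis code: the twist angle, velocity components, and frame convention are all hypothetical):

```python
import numpy as np

def section_rotation(twist_rad):
    """Orthogonal rotation matrix for an airfoil section twisted by twist_rad."""
    c, s = np.cos(twist_rad), np.sin(twist_rad)
    return np.array([[c, -s],
                     [s,  c]])

R = section_rotation(np.deg2rad(5.0))   # instantaneous torsional deformation
v_inflow = np.array([10.0, 2.0])        # inflow velocity in the undeformed frame (m/s)

# Re-project the inflow into the deformed section frame; because R is
# orthogonal, the velocity magnitude is preserved exactly
v_local = R.T @ v_inflow
alpha = np.degrees(np.arctan2(v_local[1], v_local[0]))  # local angle of attack (deg)

print(f"alpha = {alpha:.2f} deg")
```

Orthogonality is what makes the projection safe for large rotations: only the direction of the relative wind seen by the airfoil changes, so the aerodynamic force computation stays consistent however far the section has deformed.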
Abstract:
The purpose of this dissertation was to explore and describe the factors that influence the safer-sex choices of African-American college women. The HIV pandemic and the prevalence of other sexually transmitted diseases have disproportionately affected African-American females. As young women enter college they are faced with a myriad of choices, and unprotected sexual exploration is one choice that can lead to deadly consequences. This dissertation explores, through in-depth interviews, the factors associated with the decision to practice or not practice safe sex. The first study describes the factors associated with increased sexual risk taking among African-American college women. Sexual risk taking, or sex without a condom, was found to be more likely when issues of self or partner pleasure were raised. Participants were also likely to have sexual intercourse without a condom if they desired a long-term relationship with their partner. The second study examined safe-sex decision-making processes among a group of African-American college women. Women were found to employ both emotional and philosophical strategies to determine their safe-sex behavior, ranging from assessing a partner's physical capabilities and appearance to the length of the dating relationship. The third study explores the association between knowledge and risk perception as predictors of safer-sex behaviors. Knowledge of HIV/AIDS and other STDs was not found to be a determinant of safer-sex behavior, and perception of personal risk was also not highly correlated with consistent safer-sex behavior. These studies demonstrate the need for risk-based safer-sex education and intervention programs. The current climate of knowledge-based program development ensures that women will continue to predicate their decision to practice safer sex on their limited perception and understanding of the risks associated with unprotected sexual behavior. Further study into the emotional and philosophical determinants of sexual behavior is necessary for the realistic design of applicable and meaningful interventions.
Abstract:
Wetlands store large amounts of carbon and, depending on their status and type, release specific amounts of methane to the atmosphere. The connection between wetland type and methane emission has been investigated in various studies and utilized in climate change monitoring and modelling. For improved estimation of methane emissions, land surface models require information such as the wetland fraction and its dynamics over large areas. Existing datasets of wetland dynamics present the total wetland fraction for each model grid cell but do not discriminate between wetland types such as permanent lakes, periodically inundated areas or peatlands. Wetland types influence methane fluxes differently, so their contributions to the total wetland fraction should be quantified. Wetlands in permafrost regions in particular are expected to have a strong impact on future climate due to soil thawing. In this study, ENVISAT ASAR Wide Swath data were tested for operational monitoring of the distribution of areas with a long-term SW near 1 (hSW) in northern Russia (SW = degree of saturation with water; 1 = saturated), which is a specific characteristic of peatlands. For the whole of northern Russia, areas with hSW were delineated and discriminated from dynamic and open water bodies for the years 2007 and 2008. The area identified with this method amounts to approximately 300,000 km² in northern Siberia in 2007 and overlaps with zones of high carbon storage. Comparison with a range of related datasets (static and dynamic) showed that hSW represents not only peatlands but also temporary wetlands associated with post-forest-fire conditions in permafrost regions. Annual long-term monitoring of change in boreal and tundra environments is possible with the presented approach. Sentinel-1, the successor of ENVISAT ASAR, will provide data that may allow continuous monitoring of these wetland dynamics in the future, complementing global observations of wetland fraction.
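The hSW criterion, a long-term degree of saturation near 1 with open water excluded, reduces to a simple temporal-mean threshold per pixel (a toy sketch: the SW values, the 0.95 threshold, and the water mask are all hypothetical, not the study's actual processing chain):

```python
import numpy as np

# Hypothetical SW (degree of saturation with water, 0-1) time series,
# shaped (time steps, pixels)
sw = np.array([
    [0.98, 0.30, 1.00, 0.95],
    [0.97, 0.80, 1.00, 0.99],
    [0.99, 0.20, 1.00, 0.96],
])
# Open water known from a separate mask; it must be excluded from hSW
open_water = np.array([False, False, True, False])

mean_sw = sw.mean(axis=0)
# Long-term SW near 1, excluding open water: the peatland-like class
hsw = (mean_sw > 0.95) & ~open_water

print(hsw)
```

Pixels that are only intermittently saturated (dynamic wetlands) fail the temporal-mean test, which is what separates them from the persistently saturated peatland class.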
Abstract:
The geometrical factors defining an adhesive joint are of great importance, as the joint design strongly conditions the performance of the bond. One of the most relevant geometrical factors is the thickness of the adhesive, which decisively influences the mechanical properties of the bond and has a clear economic impact on manufacturing processes, especially in long production runs. Traditional mechanical joints (riveting, welding, etc.) are characterised by predictable performance and are very reliable in service conditions. Thus, structural adhesive joints will only be selected for industrial applications with demanding mechanical requirements and adverse environmental conditions if suitable reliability (equal to or higher than that of mechanical joints) is guaranteed. For this purpose, the objective of this paper is to analyse the influence of adhesive thickness on the mechanical behaviour of the joint and, by means of a statistical analysis based on the Weibull distribution, to propose the optimum adhesive thickness combining the best mechanical performance with high reliability. This procedure, which can be applied without great difficulty to other joints and adhesives, provides a basis for more reliable use of adhesive bonding and, therefore, for its better and wider use in industrial manufacturing processes.
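A two-parameter Weibull analysis of joint strength, of the general kind the paper relies on, can be sketched with the classical median-rank regression method (the strength values below are hypothetical, and this standard estimation procedure is not necessarily the paper's exact one):

```python
import numpy as np

# Hypothetical joint strength data for one adhesive thickness (MPa)
strengths = np.array([18.2, 20.5, 21.1, 22.8, 23.0, 24.6, 25.3, 26.9])

x = np.sort(strengths)
n = len(x)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank plotting positions

# Weibull CDF linearizes as ln(-ln(1-F)) = m*ln(x) - m*ln(eta)
X = np.log(x)
Y = np.log(-np.log(1.0 - F))
m, c = np.polyfit(X, Y, 1)   # slope m = Weibull modulus (shape)
eta = np.exp(-c / m)         # characteristic strength (scale, 63.2% quantile)

print(f"Weibull modulus m = {m:.2f}, characteristic strength eta = {eta:.1f} MPa")
```

In this framework a higher modulus m means lower scatter, i.e. higher reliability; comparing m and eta across thicknesses is what lets one pick the thickness combining strength with consistency.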
Abstract:
This study characterises the abatement effect of large dams with fixed-crest spillways under extreme design flood conditions. In contrast to previous studies, which used specific hydrographs for flow into the reservoir and simplifications to obtain analytical solutions, an automated tool was designed for calculations based on a Monte Carlo simulation environment, integrating models that represent the different physical processes in watersheds with areas of 150-2000 km². The tool was applied to 21 sites uniformly distributed throughout continental Spain, with 105 fixed-crest dam configurations. It allowed a set of hydrographs to be obtained as an approximation of the hydrological forcing of a dam, and the response of the dam to this forcing to be characterised. For all cases studied, we obtained a strong linear correlation between the peak flow entering the reservoir and the peak flow discharged by the dam, and a simple general procedure was proposed to characterise the peak-flow attenuation behaviour of the reservoir. Additionally, two dimensionless coefficients were defined to relate the variables governing both the generation of the flood and its abatement in the reservoir. Using these coefficients, a model was defined to estimate the flood abatement effect of a reservoir from the available information. This model should be useful in the hydrological design of spillways and in the evaluation of the hydrological safety of dams. Finally, the proposed procedure and model were evaluated and representative applications were presented.
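The reported linear relation between inflow and discharged peaks suggests a one-line characterisation of a dam's attenuation behaviour (a sketch on synthetic data: the flow values and the 0.62 response coefficient are invented for illustration, not taken from the study):

```python
import numpy as np

# Hypothetical (inflow peak, discharged peak) pairs in m^3/s, mimicking the
# strong linear relation reported between the two variables
q_in = np.array([200.0, 450.0, 800.0, 1200.0, 1600.0])
q_out = 0.62 * q_in + 15.0   # synthetic dam response

slope, intercept = np.polyfit(q_in, q_out, 1)
attenuation = 1.0 - slope    # fraction of the peak removed by the reservoir

print(f"slope = {slope:.2f}, attenuation = {attenuation:.2f}")
```

Once such a slope is fitted from Monte Carlo hydrographs, the discharged design peak for any inflow peak follows immediately, which is the practical value of the linear correlation for spillway design.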
Abstract:
The supercritical Rankine power cycle offers a net improvement in plant efficiency compared with a subcritical Rankine cycle. For fossil power plants the minimum supercritical steam turbine size is about 450 MW. A recent study between Sandia National Laboratories and Siemens Energy, Inc., published in March 2013, confirmed the feasibility of adapting the Siemens SST-900 turbine for supercritical steam in concentrated solar power plants, with live steam conditions of 230-260 bar and an output range of 140-200 MWe. In this context, this analysis focuses on integrating a line-focus solar field with a supercritical Rankine power cycle. For this purpose two heat transfer fluids were assessed: direct steam generation and Hitec XL molten salt. To isolate the solar field from the high-pressure supercritical water power cycle, an intermediate heat exchanger was installed between the linear solar collectors and the balance of plant. Due to receiver selective-coating temperature limitations, the turbine inlet temperature was fixed at 550 °C. The design-point conditions were 550 °C and 260 bar at the turbine inlet, with 165 MWe gross power output. Plant performance at the design point was assessed for the supercritical power plant (43-45% net plant efficiency, depending on the balance-of-plant configuration) and for the subcritical plant configuration (~40% net plant efficiency). Regarding the balance-of-plant configuration, direct reheating was adopted as the optimum solution to avoid any intermediate heat exchanger. One direct reheating stage between the high-pressure and intermediate-pressure turbines is common practice; however, General Electric ultra-supercritical (350 bar) fossil power plants have also considered double-reheat applications. In this study, heat balances with single-reheat, double-reheat and even triple-reheat configurations were analyzed. In all cases, suitable reheating solar field configurations were adopted to limit solar collector pressure drops.
As the main conclusion, it was confirmed that the net plant efficiency improvements in supercritical Rankine line-focus (parabolic trough or linear Fresnel) solar plant configurations are mainly due to two factors: a higher number of feed-water preheaters (up to seven) delivering hotter water at the solar field inlet, and two or even three direct reheating stages (550 °C reheating temperature) in the high- or intermediate-pressure turbines. However, the turbine manufacturer should confirm the equipment constraints regarding reheating stages and the number of steam extractions to the feed-water heaters.
Abstract:
This thesis presents a comprehensive study of the evaluation of the Quality of Experience (QoE) perceived by users of 3D video systems, analyzing the impact of the effects introduced by all elements of the 3D video processing chain. Several subjective assessment tests are presented, specifically designed to evaluate the systems under consideration and taking into account all the perceptual factors related to the 3D visual experience, such as depth perception and visual discomfort. In particular, a subjective test is described that evaluates typical degradations which may appear during content creation, for instance due to incorrect camera calibration or video processing algorithms (e.g., 2D-to-3D conversion). Moreover, the generation of a high-quality dataset of stereoscopic 3D videos is described; the dataset is freely available to the research community and has already been widely used in different works related to 3D video. In addition, an inter-laboratory subjective study is presented that analyzes the impact of coding impairments and representation formats of stereoscopic video. Also, three subjective tests are presented studying the effects of transmission events that take place in Internet Protocol Television (IPTV) networks and adaptive streaming scenarios for 3D video. For these cases, a novel subjective evaluation methodology, called Content-Immersive Evaluation of Transmission Impairments (CIETI), was proposed, specifically designed to evaluate transmission events while simulating realistic home-viewing conditions, in order to obtain more representative conclusions about the visual experience of end users. Finally, two subjective experiments are reported, comparing various current 3D displays available in the consumer market and evaluating perceptual factors of Super Multiview Video (SMV) systems, expected to be the future technology for consumer 3D displays thanks to promising glasses-free visualization of 3D content. The work presented in this thesis has made it possible to understand the perceptual and technical factors related to the processing and visualization of 3D video content, which may be useful in the development of new technologies and approaches to QoE evaluation, including both subjective methodologies and objective metrics.
Resumo:
The Integrated Safety Analysis (ISA) methodology, developed in the Modelling and Simulation (MOSI) branch of the Consejo de Seguridad Nuclear (CSN), is an integrated safety analysis method that is being evaluated and tested through several applications promoted by the CSN; integrated safety analysis combines the evolved techniques of the safety analyses in current use, deterministic and probabilistic. It is considered suitable to support Risk-Informed Regulation (RIR), the current approach to nuclear safety, which is being developed and applied worldwide. Within this context fall the Safety Margin Action Plan (SMAP) and Safety Margin Assessment Application (SM2A) projects, promoted by the Committee on the Safety of Nuclear Installations (CSNI) of the Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD) to develop an adequate approach for using integrated methodologies to evaluate the change in safety margins caused by changes in the conditions of nuclear power plants. The committee provides a forum for the exchange of technical information and for collaboration among the member organizations, which contribute their own ideas in research, development, and engineering. The CSN proposal is the application of the ISA methodology, which is especially well suited to analysis under the approach developed in the SMAP project: obtaining best-estimate-plus-uncertainty values of the safety variables, which are compared with the safety limits in order to obtain the frequency with which those limits are exceeded.
The advantage offered by ISA is that it allows a selective, discrete analysis of the ranges of the uncertain parameters with the greatest influence on the exceedance of the safety limits, i.e., on the limit exceedance frequency, thereby making it possible to evaluate changes produced by variations in the design or operation of the plant that would be imperceptible or complicated to quantify with other kinds of methodologies. ISA belongs to the family of discrete dynamic PSA methodologies that generate dynamic event trees (DETs) and is based on the Theory of Stimulated Dynamics (TSD), a simplified dynamic reliability theory that allows the risk of each sequence to be quantified. With ISA, all the relevant interactions in a plant are modeled and simulated: design, operating conditions, maintenance, operator actions, stochastic events, etc. It therefore requires the integration of codes for: thermal-hydraulic simulation and operating procedures; event tree delineation; fault tree and event tree quantification; uncertainty treatment; and risk integration. The thesis contains the application of the ISA methodology to the integrated analysis of the initiating event of loss of the Component Cooling Water System (CCWS), which generates sequences of loss of reactor coolant through the seals of the main pumps of the reactor coolant system (SLOCA). It is used to test the change in margins, with respect to the maximum peak cladding temperature limit (1477 K), that would be possible under a potential 10% power uprate in the pressurized water reactor of Zion NPP. The work carried out for this thesis, the fruit of the collaboration of the Escuela Técnica Superior de Ingenieros de Minas y Energía and the technological solutions company Ekergy Software S.L.
(NFQ Solutions) with the MOSI branch of the CSN, has been the basis for the CSN's contribution to the SM2A exercise. This exercise was used to evaluate the development of some of the ideas, suggestions, and algorithms behind the ISA methodology. As a result, a slight increase in the damage exceedance frequency (DEF) caused by the power uprate was obtained. This result demonstrates the viability of the ISA methodology for measuring variations in safety margins caused by plant modifications. It has also been shown to be especially suitable in scenarios where stochastic events or operator recovery and mitigation actions can play a relevant role in the risk. The results have no validity beyond showing the viability of the ISA methodology: the nuclear power plant studied has been decommissioned and the information on its safety analyses is deficient, so unverified assumptions or approximations based on generic studies or on other plants were necessary. Three phases were established in the analysis process: first, obtaining the reference dynamic event tree; second, uncertainty analysis and obtaining the damage domains; and third, risk quantification. Several applications of the methodology, and its advantages over classical PSA, were shown. The work also contributed to the development of the prototype tool for the application of the ISA methodology (SCAIS). ABSTRACT The Integrated Safety Analysis methodology (ISA), developed by the Consejo de Seguridad Nuclear (CSN), is being assessed in various applications encouraged by the CSN. An Integrated Safety Analysis merges the evolved techniques of the usually applied safety analysis methodologies: deterministic and probabilistic.
It is considered a suitable tool for assessing risk in a Risk-Informed Regulation framework, the approach under development that is being adopted in nuclear safety around the world. In this policy framework, the Safety Margin Action Plan (SMAP) and Safety Margin Assessment Application (SM2A) projects, set up by the Committee on the Safety of Nuclear Installations (CSNI) of the Nuclear Energy Agency within the Organisation for Economic Co-operation and Development (OECD), aimed to produce a methodology and its application for integrating risk and safety margins when assessing changes to overall safety resulting from changes in the condition of a nuclear plant. The committee provides a forum for the exchange of technical information and cooperation among member organizations, which contribute their respective approaches in research, development, and engineering. The ISA methodology, proposed by the CSN, fits especially well with the SMAP approach, which aims at obtaining Best Estimate Plus Uncertainty values of the safety variables to be compared with the safety limits; this makes it possible to obtain the exceedance frequencies of each safety limit. ISA has the advantage over other methods of allowing the specific and discrete evaluation of the uncertain parameters that most influence the limit exceedance frequency. In this way, changes due to design or operation variations that would be imperceptible or complicated to quantify with other methods are correctly evaluated. The ISA methodology is one of the discrete methodologies of the dynamic PSA framework that use the generation of dynamic event trees (DETs). It is based on the Theory of Stimulated Dynamics (TSD), a simplified version of the theory of probabilistic dynamics that allows risk quantification. ISA models and simulates all the important interactions in a nuclear power plant: design, operating conditions, maintenance, operator actions, stochastic events, etc.
To that end, it requires the integration of codes for: thermal-hydraulic simulation and operator actions; event tree delineation; fault tree and event tree quantification; and uncertainty analysis and risk assessment. This dissertation describes the application of the ISA methodology to the initiating event of the loss of the Component Cooling Water System (CCWS), which generates sequences of loss of reactor coolant through the seals of the reactor coolant pumps (SLOCA). It is used to test the change in margins, with respect to the maximum cladding temperature limit (1477 K), that would be possible under a potential 10% power uprate in the pressurized water reactor of Zion NPP. The work done to achieve the thesis, fruit of the collaborative agreement of the School of Mining and Energy Engineering and the technological solutions company Ekergy Software S.L. (NFQ Solutions) with the specialized modeling and simulation branch of the CSN, has been the basis for the contribution of the CSN to the SM2A exercise. This exercise has been used to assess the development of some of the ideas, suggestions, and algorithms behind the ISA methodology. A slight increase in the Damage Exceedance Frequency (DEF) caused by the power uprate has been obtained. This result shows that the ISA methodology allows quantifying the change in safety margins when design modifications are performed in an NPP, and that it is especially suitable for scenarios where stochastic events or human responses play an important role in preventing or mitigating the accidental consequences and the total risk. The results have no validity beyond showing the viability of the ISA methodology: Zion NPP has been retired and information on its safety analyses is scarce, so unverified assumptions or approximations based on generic studies have been required.
Three phases are established in the analysis process: first, obtaining the reference dynamic event tree; second, uncertainty analysis and obtaining the damage domains; and third, risk quantification. Various applications of the methodology have been shown, along with its advantages over classical PSA. The work has also contributed to the development of the prototype tool for the implementation of the ISA methodology (SCAIS).
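The core mechanics that the abstract describes, expanding a discrete dynamic event tree (DET) from an initiating event and quantifying a damage exceedance frequency against the 1477 K peak cladding temperature limit, can be sketched as follows. This is an illustrative toy, not the CSN's SCAIS tool or the actual ISA algorithms: the branch points, branch probabilities, initiating-event frequency, and the surrogate temperature increments are all hypothetical placeholders.

```python
# Illustrative sketch (hypothetical numbers): discrete dynamic event tree (DET)
# expansion and damage exceedance frequency (DEF) quantification.
from dataclasses import dataclass, field

PCT_LIMIT = 1477.0  # K, maximum peak cladding temperature limit (from the abstract)

@dataclass
class Sequence:
    prob: float            # conditional probability of this branch history
    peak_clad_temp: float  # K, outcome of the surrogate "simulation"
    history: list = field(default_factory=list)

def expand_det(branch_points):
    """Breadth-first expansion of a DET.

    branch_points: list of (label, [(outcome, prob, dT), ...]) where dT is a
    surrogate effect of each branch on peak cladding temperature (a real tool
    would run a thermal-hydraulic code per sequence instead).
    """
    sequences = [Sequence(prob=1.0, peak_clad_temp=600.0)]
    for label, outcomes in branch_points:
        nxt = []
        for seq in sequences:
            for outcome, p, dT in outcomes:
                nxt.append(Sequence(seq.prob * p,
                                    seq.peak_clad_temp + dT,
                                    seq.history + [(label, outcome)]))
        sequences = nxt
    return sequences

def damage_exceedance_frequency(init_freq, sequences, limit=PCT_LIMIT):
    """DEF = initiating-event frequency x total probability of the
    sequences whose peak cladding temperature exceeds the limit."""
    return init_freq * sum(s.prob for s in sequences if s.peak_clad_temp > limit)

# Hypothetical branch points after a CCWS-loss / seal-LOCA initiator:
det = expand_det([
    ("HPSI starts",         [("success", 0.99, 100.0), ("failure", 0.01, 900.0)]),
    ("operator trips RCPs", [("in time", 0.90, 0.0),   ("late",    0.10, 200.0)]),
])
def_value = damage_exceedance_frequency(init_freq=1e-3, sequences=det)
# 4 sequences; only the two HPSI-failure branches exceed 1477 K,
# so DEF = 1e-3 * (0.01*0.9 + 0.01*0.1) = 1e-5 per year.
```

Comparing `def_value` before and after perturbing the branch data (e.g. a power uprate raising the temperature increments) mirrors, in miniature, how the SM2A exercise measured the margin change.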
Resumo:
In this paper some topics related to the design of reinforced concrete (RC) shells are addressed. The influence of the reinforcement layout on the service and ultimate behavior of the shell structure is discussed. The well-established methodology for dimensioning and verifying RC sections of beam structures is extended. In this way it is possible to treat within a unified procedure the design and verification of RC two-dimensional structures, in particular membrane and shell structures. Realistic design situations, such as multiple steel families and non-orthogonal reinforcement layouts, can be handled. Finally, some examples and applications of the proposed methodology are presented.
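The kind of membrane dimensioning that the paper generalizes can be illustrated, for the simplest case of a single orthogonal two-family layout, by the classical Baumann-type equilibrium rules: tension is assigned to the steel and compression to the concrete. The sketch below is a minimal assumed version of those classical rules only; it is not the paper's unified procedure for multiple or non-orthogonal reinforcement families.

```python
# Minimal sketch of classical orthogonal membrane reinforcement design
# (Baumann-type equilibrium rules), tension positive, forces per unit length.
def membrane_reinforcement(nx, ny, nxy):
    """Required tensile forces per unit length to be resisted by orthogonal
    reinforcement in the x and y directions of a membrane element under
    in-plane forces nx, ny, nxy. Compression is assigned to the concrete."""
    v = abs(nxy)
    if nx + v >= 0.0 and ny + v >= 0.0:
        # Both directions need reinforcement; the concrete strut carries -2|nxy|.
        return nx + v, ny + v
    if nx + v < 0.0:
        # x direction wholly in compression (nx < -|nxy|): no x steel;
        # the y demand grows by nxy**2 / |nx|.
        return 0.0, max(ny + v * v / abs(nx), 0.0)
    # Symmetric case: y direction wholly in compression.
    return max(nx + v * v / abs(ny), 0.0), 0.0
```

Dividing each returned force by the steel design strength gives the required area per unit length of each family; the paper's contribution is precisely to replace these two-direction closed-form cases with a procedure valid for several, possibly skew, steel families.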