15 results for "zero tolerance" at Universidad Politécnica de Madrid
Abstract:
The classical Kramer sampling theorem, which provides a method for obtaining orthogonal sampling formulas, can be formulated in a more general nonorthogonal setting. In this setting, a challenging problem is to characterize the situations in which the resulting nonorthogonal sampling formulas can be expressed as Lagrange-type interpolation series. In this article a necessary and sufficient condition is given in terms of the zero removing property. Roughly speaking, this property concerns the stability of the sampled functions under the removal of a finite number of their zeros.
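As a point of reference, a minimal sketch of the objects involved (generic textbook notation; the kernel K, the interval [a,b] and the sampling points {t_n} are placeholders, not those of the article). A Kramer-type sampling expansion recovers a function of the form f(t) = \int_a^b F(x)\,K(x,t)\,dx from its samples as

    f(t) = \sum_{n} f(t_n)\, S_n(t),

and the question addressed is when the reconstruction functions S_n can be written in Lagrange-type interpolation form,

    S_n(t) = \frac{P(t)}{(t - t_n)\, P'(t_n)},

for some entire function P vanishing exactly at the sampling points {t_n}.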
Abstract:
In the present uncertain global context of achieving social stability and a steadily thriving economy, power demand is expected to grow, and global electricity generation could nearly double between 2005 and 2030. Fossil fuels will remain a significant part of this energy mix up to 2050, with an expected share of around 70% of global and about 60% of European electricity generation, and coal will remain a key player. A direct effect on CO2 emissions is therefore expected under a business-as-usual scenario, with forecasts of roughly three times the present CO2 concentration, up to 1,200 ppm, by the end of this century. The Kyoto Protocol was the first attempt to take global responsibility for monitoring and capping CO2 emissions by 2012 with reference to 1990 levels, although some of the principal CO2 emitters did not ratify the reduction targets; the USA and China are nevertheless taking their own, parallel reduction measures. More efficient combustion processes that consume less fuel, while a significant contribution from the electricity generation sector to lowering CO2 concentration levels, might not be sufficient. Carbon Capture and Storage (CCS) technologies have gained importance since the beginning of the decade, with research and funding aimed at bringing them into use. After the first research projects and initial-scale testing, three principal capture processes are available today, with first figures showing up to 90% CO2 removal in standard applications at coal-fired power stations. Regarding the last part of the CO2 reduction chain, two options can be considered: reuse (EOR and EGR) and storage. The study evaluates the state of development of CO2 capture technology and the availability and investment cost of the different technologies, with only limited operating-cost analysis possible at this time. The main findings and the abatement potential for coal applications are presented. DOE, NETL, MIT, European universities and research institutions, key technology enterprises and utilities, and key technology suppliers are the main sources of this study. A vision of the technology deployment is presented.
Abstract:
This paper presents an analysis of the fault tolerance achieved by an autonomous, fully embedded evolvable hardware system, which uses a combination of partial dynamic reconfiguration and an evolutionary algorithm (EA). It demonstrates that the system may self-recover from both transient and cumulative permanent faults. This self-adaptive system, based on a 2D array of 16 (4×4) Processing Elements (PEs), is tested with an image filtering application. Results show that it may properly recover from faults in up to 3 PEs, that is, more than 18% cumulative permanent faults. Two fault models are used for testing purposes, at the PE and CLB levels. Two self-healing strategies are also introduced, depending on whether fault diagnosis is available or not. They are based on scrubbing, fitness evaluation, dynamic partial reconfiguration and in-system evolutionary adaptation. Since most of these adaptability features are already available in the system for its normal operation, the resource cost of self-healing is very low (only some code additions in the internal microprocessor core).
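As a rough illustration of the kind of evolutionary self-healing loop described above, here is a minimal 1+1 evolutionary-strategy sketch over a toy 4×4 PE-array model. The array encoding, the fitness function and the reconfiguration hook are hypothetical placeholders, not the system's actual implementation.

import random

ARRAY_W, ARRAY_H = 4, 4                          # 2D array of 16 Processing Elements (PEs)
FUNCTIONS = ["min", "max", "avg", "identity"]    # toy PE behaviours (placeholder)

def random_config():
    # One function assignment per PE position
    return [random.choice(FUNCTIONS) for _ in range(ARRAY_W * ARRAY_H)]

def mutate(cfg, rate=0.1):
    return [random.choice(FUNCTIONS) if random.random() < rate else g for g in cfg]

def fitness(cfg, faulty_pes):
    # Placeholder metric: reward configurations that avoid routing useful work
    # through faulty PEs. In the real system this would be the image-filtering
    # quality obtained after partial dynamic reconfiguration of the FPGA.
    return sum(1.0 for i, g in enumerate(cfg) if i not in faulty_pes and g != "identity")

def self_heal(faulty_pes, generations=200):
    best = random_config()
    best_fit = fitness(best, faulty_pes)
    for _ in range(generations):
        child = mutate(best)
        f = fitness(child, faulty_pes)
        if f >= best_fit:            # 1+1 evolutionary strategy: keep non-worse offspring
            best, best_fit = child, f
            # here the real system would trigger partial reconfiguration of the array
    return best, best_fit

if __name__ == "__main__":
    cfg, fit = self_heal(faulty_pes={5, 9, 12})  # up to 3 faulty PEs (about 18% of the array)
    print("recovered fitness:", fit)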
Abstract:
This work proposes a method for determining the individual tolerances of the parts that form an assembly, starting from the tolerance values specified for the final assembly and minimizing the total manufacturing cost of the individual parts by means of cost-tolerance functions based on the manufacturing process of each part. To this end, the main previous work on tolerance allocation is reviewed and a working model is proposed, based on cost optimization through the application of the method of Lagrange multipliers to various cost-tolerance curves.
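As an illustration of the kind of optimization involved (a generic textbook formulation with a linear stack-up constraint; the actual cost-tolerance curves and assembly condition of the work may differ): with part tolerances t_i, cost functions C_i(t_i) and an assembly tolerance T_asm,

    \min_{t_1,\dots,t_n} \sum_{i=1}^{n} C_i(t_i) \quad \text{s.t.} \quad \sum_{i=1}^{n} t_i = T_{asm},

    L = \sum_{i} C_i(t_i) + \lambda \Big( \sum_{i} t_i - T_{asm} \Big), \qquad \frac{\partial L}{\partial t_i} = C_i'(t_i) + \lambda = 0,

so at the optimum every part operates at the same marginal cost of tightening its tolerance, C_1'(t_1) = \dots = C_n'(t_n) = -\lambda. For instance, with reciprocal cost curves C_i(t_i) = a_i + b_i/t_i this condition gives t_i = \sqrt{b_i/\lambda}, i.e. tolerances allocated in proportion to \sqrt{b_i}.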
Abstract:
The damage tolerance of high strength cold-drawn ferritic–austenitic stainless steel wires is assessed by means of tensile fracture tests on cracked wires. A fatigue crack is propagated transversally from the wire surface, and the damage tolerance curve of the wires is obtained from the experimental failure load expressed as a function of crack depth. As a consequence of cold drawing, the wire microstructure is oriented along the longitudinal axis, and anisotropic fracture behaviour is found at the macrostructural level in the tensile failure of the cracked specimens. An in situ optical technique known as video image correlation (VIC-2D) is used to gain insight into this failure mechanism by tensile testing transversally fatigue-cracked plane specimens extracted from the cold-drawn wires. Finally, the experimentally obtained damage tolerance curve of the cold-drawn ferritic–austenitic stainless steel wires is compared with that of an elementary plastic collapse model and with existing data for two types of high strength eutectoid steel currently used as prestressing steel for concrete.
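As a hedged sketch of what an elementary plastic collapse model of this kind typically predicts (a standard net-section estimate, not necessarily the exact model used in the paper): for a wire of radius R with a straight-fronted transverse crack of depth a, failure is assumed to occur when the net ligament reaches the ultimate strength \sigma_u,

    F_c \approx \sigma_u \, A_{lig}(a), \qquad A_{lig}(a) = \pi R^2 - R^2\big(\alpha - \sin\alpha\,\cos\alpha\big), \qquad \alpha = \arccos\!\Big(1 - \frac{a}{R}\Big),

so the damage tolerance curve F_c(a) decreases from \sigma_u \pi R^2 at a = 0 towards zero as the crack consumes the cross-section.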
Abstract:
Modules are an important part of a CPV system. In pursuing our objective of a 35%-efficiency module, we must look for a significant improvement in the state of the art of CPV modules, since no commercial module is currently capable of achieving that efficiency. Achieving it will require high efficiency cells, progress in the optics (lenses) implemented in these modules, and careful integration at the module level. A basic design of a 35% CPV module is presented, conceived for practical and rapid industrial application. Its output is 385 W while its weight is only 18 kg. In spite of its high concentration ratio of 1,000X, its acceptance angle is as high as 1.1 degrees.
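For context, a hedged back-of-the-envelope check (the standard étendue bound for a concentrator in air, assuming the quoted 1.1 degrees denotes the acceptance half-angle; these figures are not taken from the paper): conservation of étendue limits the geometric concentration C to

    C \le \frac{1}{\sin^2\theta} \;\Rightarrow\; C_{max}(1.1^\circ) \approx \frac{1}{(0.0192)^2} \approx 2.7\times 10^3,

so a 1,000X module with a 1.1-degree acceptance angle operates at roughly 60% of the thermodynamic limit (CAP = \sqrt{C}\,\sin\theta \approx 0.61), which is what makes the combination of high concentration and wide acceptance angle notable.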
Abstract:
Soil salinity is a major abiotic stress that causes serious problems in agriculture, as most crops are affected by its osmotic and toxic effects. Moreover, the contamination and shortage of freshwater, progressive land salinization and the exponential increase of the human population aggravate the problem, so that world food security may not be ensured for the next generations. Increasing the salinity tolerance of crops is therefore a strategic and unavoidable goal to secure the future food supply. Maintaining optimal K+ homeostasis in plants under salinity stress is an important trait to pursue in the process of engineering salt-tolerant plants. Although the model of K+ homeostasis in plants is reasonably well described in terms of K+ influx, very little is known about the genes implicated in K+ efflux or in its release from the vacuole. This work aims to clarify some of the mechanisms involved in K+ homeostasis in plants. For this purpose we chose the bryophyte Physcomitrella patens, a nonvascular plant with a simple structure and a dominant haploid phase that, among many other characteristics, make it an ideal model. Most importantly, not only is P. patens very tolerant to high concentrations of Na+, but its phylogenetic position in land plant evolution also opens the possibility of studying the key changes that occurred in K+ transporter families during the course of evolution. Several cation transporters have been proposed as candidates with a role in K+ efflux or release from the vacuole, especially members of the CPA2 family, which contains the KEA and CHX transporter families. In this study we sought to increase our understanding of the functions of CHX transporters in plant cells using P. patens, in which four CHX genes have been identified, PpCHX1-4. Two of these genes, PpCHX1 and PpCHX2, are expressed at approximately the same level as the PpACT5 gene, whereas the other two show very low expression. PpCHX1 and PpCHX2 restored the growth of K+-uptake-deficient Escherichia coli mutants on low-K+ media, suggesting that they mediate K+ uptake, possibly energized by symport with H+. On the other hand, these genes suppressed the defect associated with the kha1 mutation in Saccharomyces cerevisiae, which suggests that they might mediate K+/H+ antiport. The PpCHX1-GFP protein transiently expressed in P. patens protoplasts co-localized with a Golgi marker. In similar experiments, the PpCHX2-GFP protein appeared to localize to the tonoplast and the plasma membrane. We constructed the ΔPpchx1 and ΔPpchx2 single mutant lines, as well as the ΔPpchx2 ΔPphak1 double mutant. Single mutant plants grew normally under all conditions tested and exhibited normal K+ and Rb+ influxes; the ΔPpchx2 mutation did not increase the defect of ΔPphak1 plants. In long-term experiments, ΔPpchx2 plants showed slightly higher Rb+ retention than wild-type plants, which suggests that PpCHX2 mediates the transfer of Rb+ either from the vacuole to the cytosol or from the cytosol to the external medium, acting in parallel with other transporters. We suggest that K+ transporters of several families are involved in the pH homeostasis of organelles by mediating either K+/H+ antiport or K+-H+ symport.
Abstract:
This work studies the relationship between the cold hardiness of 8 olive cultivars observed in the field and that measured in the laboratory from the lethal freezing temperature. A correlation was observed between the percentage of frozen shoots and the 50% lethal freezing temperature. The most resistant cultivars were Cornicabra, Arbequina and Picual; the most sensitive was Empeltre.
Abstract:
Various systems for measuring propellant content in spacecraft under weightlessness conditions are reviewed. The cavity resonator method is found to be the most suitable measurement technique, and it is analyzed in detail. A determination of the errors intrinsic to the method is carried out.
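As a hedged illustration of the measurement principle (a first-order perturbation relation in generic notation, not the specific error model analyzed in the paper): treating the tank as a microwave cavity, the resonant frequency shifts as the dielectric propellant of relative permittivity \varepsilon_r fills a fraction of the volume,

    f_{filled} \approx \frac{f_{empty}}{\sqrt{\varepsilon_{eff}}}, \qquad \varepsilon_{eff} \approx 1 + (\varepsilon_r - 1)\,\frac{V_{propellant}}{V_{tank}},

so the measured frequency shift provides a first-order estimate of the remaining propellant volume that is largely insensitive to where the liquid sits, which is the property that makes the method attractive under weightlessness.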
Abstract:
We extend in this paper some previous results concerning the differential-algebraic index of hybrid models of electrical and electronic circuits. Specifically, we present a comprehensive index characterization which holds without passivity requirements, in contrast to previous approaches, and which applies to nonlinear circuits composed of uncoupled, one-port devices. The index conditions, which are stated in terms of the forest structure of certain digraph minors, do not depend on the specific tree chosen in the formulation of the hybrid equations. Additionally, we show how to include memristors in hybrid circuit models; in this direction, we extend the index analysis to circuits including active memristors, which have been recently used in the design of nonlinear oscillators and chaotic circuits. We also discuss the extension of these results to circuits with controlled sources, making our framework of interest in the analysis of circuits with transistors, amplifiers, and other multiterminal devices.
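For readers unfamiliar with the device mentioned, a hedged sketch of the constitutive relations that the hybrid equations must accommodate (standard definitions, not the paper's specific formulation): a charge-controlled memristor enters the model as

    \dot{q} = i, \qquad v = M(q)\, i,

and a flux-controlled one as

    \dot{\varphi} = v, \qquad i = W(\varphi)\, v,

where activity corresponds to operating ranges in which the memristance M(q) (or the memductance W(\varphi)) becomes negative. Together with the branch relations of the remaining one-port devices, these equations form a differential-algebraic system whose index is what the article characterizes in terms of the forest structure of digraph minors.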
Abstract:
The annual grass Brachypodium distachyon has recently been recognized as the model plant for functional genomics of temperate grasses, including cereals of economic relevance such as wheat and barley. Sixty-two lines of B. distachyon were assessed for their response to drought stress and heat tolerance. All these lines, except the reference genotype BD21, derive from specimens collected in 32 distinct locations of the Iberian Peninsula, covering a wide range of geo-climatic conditions. Sixteen lines of Brachypodium hybridum, an allotetraploid closely related to B. distachyon, were used as a reference of genotypes well adapted to abiotic stress. Drought tolerance was assessed in a greenhouse trial. At the rosette stage, no irrigation was applied to the treated plants, whereas their control replicates were kept well watered throughout the experiment. Thermographic images of treated and control plants were taken after 2 and 3 weeks of drought treatment, when the stressed plants showed medium and extreme wilting symptoms. The mean leaf temperature of stressed (LTs) and control (LTc) plants was estimated from thermographic records of selected pixels (183 per image) that strictly correspond to leaf tissue. The response to drought was based on the analysis of two parameters: LTs and the thermal difference (TD) between stressed and control plants (LTs - LTc). The response to heat stress was based on LTc. Comparison of the mean values of these parameters showed that: 1) genotypes better adapted to drought (B. hybridum lines) presented a higher LTs and TD than B. distachyon lines; 2) under high temperature conditions, watered plants of B. hybridum lines maintained a lower LTc than those of B. distachyon. These results suggest that in these species adaptation to drought is linked to more efficient stomatal regulation: under water stress the stomata close, increasing foliar temperature but also increasing water use efficiency by reducing transpiration. With high temperature and water availability the results are less definite, but it still seems that opening the stomata allows plants to increase transpiration and therefore to lower foliar temperature.
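A minimal sketch of the kind of per-image computation described (the array names, placeholder thermograms and random pixel coordinates are hypothetical; the abstract does not specify the instrument pipeline or the pixel-selection procedure):

import numpy as np

def mean_leaf_temperature(thermal_image, leaf_pixel_coords):
    """Average temperature over pixels previously verified to be leaf tissue."""
    rows, cols = zip(*leaf_pixel_coords)          # e.g. 183 (row, col) pairs per image
    return float(np.mean(thermal_image[rows, cols]))

# Hypothetical example: one stressed and one control thermogram of the same genotype (values in degrees C)
stressed_img = np.random.normal(31.0, 0.5, size=(240, 320))
control_img  = np.random.normal(28.5, 0.5, size=(240, 320))
pixels = [(np.random.randint(240), np.random.randint(320)) for _ in range(183)]

LTs = mean_leaf_temperature(stressed_img, pixels)   # mean leaf temperature, stressed plant
LTc = mean_leaf_temperature(control_img, pixels)    # mean leaf temperature, watered control
TD  = LTs - LTc                                     # thermal difference used to rank drought response
print(f"LTs={LTs:.2f} C  LTc={LTc:.2f} C  TD={TD:.2f} C")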
Abstract:
The advanced optical modulation format polarization-division multiplexed quadrature phase shift keying (PDM-QPSK) has become a key ingredient in the design of 100- and 200-Gb/s dense wavelength-division multiplexed (DWDM) networks. The performance of this format varies according to the shape of the pulses employed by the optical carrier: non-return-to-zero (NRZ), return-to-zero (RZ) or carrier-suppressed return-to-zero (CSRZ). In this paper we analyze the tolerance of PDM-QPSK to linear and nonlinear optical impairments: amplified spontaneous emission (ASE) noise, crosstalk, distortion by optical filtering, chromatic dispersion (CD), polarization mode dispersion (PMD) and fiber Kerr nonlinearities. RZ formats with a low duty cycle reduce pulse-to-pulse interaction, yielding a higher tolerance to CD, PMD and intrachannel nonlinearities.
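A hedged toy sketch of the modulation format itself (baseband only, no fiber or impairment model; the pulse shape, duty cycle and sequence lengths are illustrative choices, not the paper's simulation setup):

import numpy as np

def qpsk_symbols(n, seed=0):
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=(n, 2))
    return ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # Gray-mapped QPSK

def rz_pulse(samples_per_symbol, duty_cycle=0.5):
    # Toy raised-cosine-shaped RZ carving pulse; lower duty cycles narrow the pulse,
    # which is what reduces pulse-to-pulse interaction in the abstract's argument.
    t = np.arange(samples_per_symbol) / samples_per_symbol
    pulse = 0.5 * (1 - np.cos(2 * np.pi * t / duty_cycle))
    pulse[t >= duty_cycle] = 0.0
    return pulse

sps = 16
x_pol = np.kron(qpsk_symbols(64, seed=1), rz_pulse(sps, duty_cycle=0.33))
y_pol = np.kron(qpsk_symbols(64, seed=2), rz_pulse(sps, duty_cycle=0.33))
pdm_qpsk = np.stack([x_pol, y_pol])   # two orthogonal polarizations carry independent QPSK streams
print(pdm_qpsk.shape)                 # (2, 1024) complex baseband samples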
Abstract:
The study brings new insights into the effect of hydrogen-assisted stress corrosion on the damage tolerance of a high-strength duplex stainless steel wire, in relation to its potential use as active reinforcement for prestressed concrete. The adopted procedure was to establish experimentally the effect of hydrogen on the damage tolerance of cylindrical smooth and precracked wire specimens exposed to stress corrosion cracking in the aggressive medium of the standard test developed by FIP (International Prestressing Federation). Stress corrosion testing, mechanical fracture tests and scanning electron microscopy analysis allowed the damage to be assessed and explain the synergy between mechanical loading and environmental action in the failure sequence of the wire. In the presence of previous damage, hydrogen affects the wire behavior in a qualitative sense, consistent with the fracture anisotropy attributable to cold drawing, but it does not produce quantitative changes, since the steel fully preserves its damage tolerance.
Abstract:
Deterministic Safety Analysis (DSA) is the procedure used in the design of the safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents representative of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes and define the regulatory acceptance criteria (RAC) that these magnitudes must fulfil. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies make use of markedly pessimistic models and assumptions and are therefore relatively simple; they do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions and must be supplemented with an uncertainty analysis of their main results. They are also termed BEPU ("Best Estimate Plus Uncertainty") methodologies, and they typically represent uncertainty in probabilistic terms.

For conservative methodologies, the RAC simply restrict the calculated values of the safety magnitudes to "acceptance regions" defined on their range. For BEPU methodologies the RAC cannot be so simple, because the safety magnitudes are now uncertain variables. This Thesis develops the way in which uncertainty is introduced into the RAC. Basically, confinement to the same regulator-defined acceptance region is maintained, but strict compliance is replaced by a high level of certainty, understood here as a high probability of fulfilment. The calculation uncertainty of the magnitudes is regarded as originating in the inputs to the computational model and as being propagated through it; the uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which carry the uncertainty due to model imperfection. Fulfilment of the RAC is therefore required with a probability not lower than a value P0 close to 1 and defined by the regulator (probability or coverage level).

Calculation uncertainty is not the only uncertainty involved. Even if a model (its basic equations) is perfectly known, the input-output mapping it produces is imperfectly known (unless the model is very simple). This ignorance about the action of the model is called epistemic uncertainty, and it concerns the propagation itself; as a consequence, the probability of fulfilling the RAC cannot be known exactly and is itself an uncertain quantity. Another term used in the Thesis for this epistemic uncertainty is metauncertainty. The RAC must therefore incorporate both types of uncertainty: the uncertainty in the calculation of the safety magnitude (here called aleatory) and the uncertainty in the calculation of the probability (epistemic, or metauncertainty). The two uncertainties can be treated separately or combined; in either case the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. In the first case the regulator must impose a second level of fulfilment, referred to the epistemic uncertainty, termed the regulatory confidence level and again close to 1. The pair formed by the two regulatory levels (probability and confidence) is called the regulatory tolerance level. The Thesis argues that the adequate way of building the BEPU RAC is to separate the uncertainties, for two reasons: experts advocate the separate treatment of aleatory and epistemic uncertainty, and the separated RAC is (except in exceptional cases) more conservative than the combined one.

The BEPU RAC is a hypothesis on a probability distribution and must be tested statistically. The Thesis classifies the statistical methods for verifying the BEPU RAC into three categories, based respectively on the construction of tolerance regions, on quantile estimation, and on the estimation of probabilities (of compliance, or of exceedance of regulatory limits). Following a recently proposed terminology, the first two categories correspond to Q-methods and the third to P-methods. The purpose of the classification is not to survey the very numerous and varied methods in each category, but to relate the categories and to discuss the most widely used methods and those best regarded from a regulatory standpoint. Special mention is made of the method most used so far, the nonparametric method of Wilks, together with its extension by Wald to the multidimensional case, and of its P-method counterpart, the Clopper-Pearson interval, typically ignored in the BEPU realm.

In this context, the problem of the computational cost of the uncertainty analysis is addressed. The Wilks, Wald and Clopper-Pearson methods require a minimum sample size, which grows with the required tolerance level. The sample size is an indicator of the computational cost, because each sample element is a value of the safety magnitude obtained from a run of the predictive model (code). Special emphasis is placed on the computational cost when the safety magnitude is multidimensional, i.e. when the RAC is a multiple criterion. It is shown that, when the different components of the magnitude are outputs of the same calculation, the multidimensional character introduces no additional computational cost. This disproves a common belief in the BEPU realm, namely that the multidimensional problem can only be tackled through the Wald extension, whose computational cost grows with the dimension of the problem. When the components of the magnitude are calculated independently of one another, as sometimes happens, the influence of the dimension on the cost cannot be avoided.

The early BEPU methodologies propagated uncertainties through a surrogate model (metamodel or emulator) of the predictive code. The goal of the metamodel is not predictive capability, which is clearly inferior to that of the original model, but to replace it exclusively in the propagation of uncertainties. To this end, the metamodel must be built with the input parameters that contribute most to the output uncertainty, which requires a prior importance (sensitivity) analysis. Because of its simplicity, the surrogate model is practically inexpensive to run and can be analyzed exhaustively, for instance by Monte Carlo sampling; the epistemic uncertainty (metauncertainty) thus practically disappears, and the BEPU criterion for metamodels reduces to a single probability. As a quick summary, the regulator will more readily accept the statistical methods that need the fewest assumptions: exact rather than approximate, nonparametric rather than parametric, and frequentist rather than Bayesian.

The BEPU RAC is based on a second-order probability. The probability that the safety magnitudes lie in the acceptance region can be regarded not only as a probability of success or a degree of fulfilment of the RAC; it also has a metric interpretation, as a distance (within the range of the magnitudes) from the calculated magnitude to the regulatory acceptance limits. This interpretation leads to a definition proposed in this Thesis: the probabilistic safety margin. Given a scalar safety magnitude with an upper acceptance limit, the safety margin (SM) between two values A and B is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is nondimensional, it can be combined according to the laws of probability, and it is easily generalized to several dimensions; moreover, it is not symmetric. The term safety margin can be applied to different situations: the distance from a calculated magnitude to a regulatory limit (licensing margin), from the real value of the magnitude to its calculated value (analytical margin), or from a regulatory limit to the damage threshold of a barrier (barrier margin). This idea of representing distances in the range of the safety magnitudes by probabilities can be applied to the study of conservatism: the analytical margin can be interpreted as the degree of conservativeness (DG) of the computational methodology. Using probability, the conservatism of tolerance limits of a magnitude can be quantified, and conservativeness indicators can be established to compare different methods of constructing tolerance limits and regions.

A topic that has never been rigorously addressed is the validation of BEPU methodologies. Like any other computational tool, a methodology must be validated before being applied to licensing analyses, by comparing its predictions with real values of the safety magnitudes. Such a comparison can only be made for accident scenarios for which measured values of the safety magnitudes exist, which basically means experimental facilities. The ultimate goal of establishing the RAC is to verify that they are fulfilled by the real values of the safety magnitudes, not only by their calculated values. The Thesis proves that a sufficient condition for this ultimate goal is the joint fulfilment of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation, the latter demonstrated in experimental scenarios and extrapolated to nuclear power plants. The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e. of the DG). These minimum levels are basically complementary: the higher one of them, the lower the other. Current regulatory practice imposes a high licensing margin, so the required DG is low. Adopting lower values of P0 would imply a weaker requirement on RAC fulfilment and, in exchange, a stronger requirement on the conservativeness of the methodology. It is important to note that the higher the minimum value of the (licensing or analytical) margin, the higher the computational cost of demonstrating it, so the computational efforts are also complementary. If an intermediate value of P0 is adopted, the required DG is also intermediate, the methodology need not be very conservative, and the total computational cost (licensing plus validation) can be optimized.
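A hedged numerical illustration of the sample-size issue discussed above (textbook formulas for the first-order, one-sided case; this is not code from the Thesis): for Wilks' nonparametric method, taking the largest of N code runs as the tolerance limit covers a fraction P0 of the output distribution with confidence 1 - P0^N, which fixes the minimum sample size, and the Clopper-Pearson interval gives the analogous P-method statement.

import math
from scipy import stats

def wilks_sample_size(p0, confidence):
    """Smallest N such that the sample maximum is a one-sided (p0, confidence) tolerance limit."""
    return math.ceil(math.log(1.0 - confidence) / math.log(p0))

def clopper_pearson_lower(successes, n, confidence):
    """One-sided lower confidence bound on the fulfilment probability from n runs, `successes` of them inside the acceptance region."""
    if successes == 0:
        return 0.0
    return stats.beta.ppf(1.0 - confidence, successes, n - successes + 1)

print(wilks_sample_size(0.95, 0.95))           # 59: the classical 95/95 first-order Wilks sample size
print(clopper_pearson_lower(59, 59, 0.95))     # ~0.95: 59 successes out of 59 also demonstrate P >= 0.95 at 95% confidence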
Abstract:
Given the global energy and environmental situation, the European Union has been issuing directives with increasingly demanding requirements in terms of the energy efficiency of buildings. The international competition of sustainable houses, Solar Decathlon Europe (SDE), is aligned with these European objectives. SDE houses are low-energy solar buildings that must reach the nearly zero-energy house goal. In the 2012 edition, in order to emphasize its significance, the Energy Efficiency Contest was added. The SDE houses' interior comfort, functioning and energy performance are monitored, and the monitoring data can give an idea of the efficiency of the houses. However, a jury composed of international experts is responsible for carrying out the evaluation of the houses' energy efficiency; passive strategies and house services are analyzed. Additionally, the jury's assessment has been compared with the behavior of the houses during the monitoring period. The comparative studies emphasize the energy aspects, the functioning of the houses and their interior comfort. The conclusions include thoughts on the evaluation process, the results of the comparative studies and suggestions for the next competitions.