951 results for laser induced damage threshold
Abstract:
The generation of collimated electron beams from metal double-gate nanotip arrays excited by near-infrared laser pulses is studied. Using electromagnetic and particle tracking simulations, we show that electron pulses with small rms transverse velocities are efficiently produced from nanotip arrays by laser-induced field emission when the laser wavelength is tuned to the surface plasmon polariton resonance of the stacked double-gate structure. The result indicates the possibility of realizing a metal nanotip array cathode that outperforms state-of-the-art photocathodes.
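A minimal post-processing sketch of how the quoted rms transverse velocity (and the related normalized rms emittance) could be evaluated from particle-tracking output; this is not from the paper, and the phase-space arrays and spreads below are hypothetical stand-ins for exported tracker data:

import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(0.0, 50e-9, n)        # transverse position (m), placeholder spread
vx = rng.normal(0.0, 2.0e5, n)       # transverse velocity (m/s), placeholder spread

c = 299_792_458.0                    # speed of light (m/s)

# rms transverse velocity about the beam centroid
vx_rms = np.sqrt(np.mean((vx - vx.mean()) ** 2))

# Normalized rms emittance in the non-relativistic limit, using centered moments
xc = x - x.mean()
bx = (vx - vx.mean()) / c
eps_n = np.sqrt(np.mean(xc**2) * np.mean(bx**2) - np.mean(xc * bx) ** 2)

print(f"rms transverse velocity: {vx_rms:.3e} m/s")
print(f"normalized rms emittance: {eps_n:.3e} m rad")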
Abstract:
The zebrafish heart has the capacity to regenerate after ventricular resection. Although this regeneration model has proved useful for the elucidation of certain regeneration mechanisms, it is based on the removal of heart tissue rather than on tissue damage. We recently characterized the cellular response and regenerative capacity of the zebrafish heart after cryoinjury (CI), an alternative procedure that more closely models the pathophysiological process undergone by the human heart after myocardial infarction (MI). After anesthesia, localized CI with a liquid nitrogen-cooled copper probe induced damage in 25% of the ventricle, in a procedure requiring <5 min. Here we present a detailed description of the technique, which provides a valuable system for the study of the mechanisms of heart regeneration and scar removal after MI in a versatile vertebrate model.
Abstract:
Regulation of tissue size requires fine tuning at the single-cell level of proliferation rate, cell volume, and cell death. Whereas the adjustment of proliferation and growth has been widely studied [1, 2, 3, 4 and 5], the contribution of cell death and its adjustment to tissue-scale parameters have so far been much less explored. Recently, it was shown that epithelial cells can be eliminated by live-cell delamination in response to an increase of cell density [6]. Cell delamination was thought to occur independently of caspase activation and was suggested to be based on a gradual and spontaneous disappearance of junctions in the delaminating cells [6]. Studying the elimination of cells in the midline region of the Drosophila pupal notum, we found that, contrary to what was suggested before, Caspase 3 activation precedes and is required for cell delamination. Yet, using particle image velocimetry, genetics, and laser-induced perturbations, we confirmed [6] that local tissue crowding is necessary and sufficient to drive cell elimination and that cell elimination is independent of known fitness-dependent competition pathways [7, 8 and 9]. Accordingly, activation of the oncogene Ras in clones was sufficient to compress the neighboring tissue and eliminate cells up to several cell diameters away from the clones. Mechanical stress has previously been proposed to contribute to cell competition [10 and 11]. These results provide the first experimental evidence that crowding-induced death could be an alternative mode of super-competition, namely mechanical super-competition, independent of known fitness markers [7, 8 and 9], that could promote tumor growth.
Abstract:
The BCR-ABL fusion gene is the molecular hallmark of Philadelphia-positive leukemias. Normal Bcr is a multifunctional protein, originally localized to the cytoplasm. It has serine kinase activity and has been implicated in cellular signal transduction. Recently, it has been reported that Bcr can interact with xeroderma pigmentosum group B (XPB/ERCC3), a nuclear protein active in UV-induced DNA repair. Two major Bcr proteins (p160 Bcr and p130 Bcr) have been characterized, and our preliminary results using metabolic labeling and immunoblotting demonstrated that, while both the p160 and p130 forms of Bcr localized to the cytoplasm, the p130 form (and to a lesser extent p160) could also be found in the nucleus. Furthermore, electron microscopy confirmed the presence of Bcr in the nucleus and demonstrated that this protein associates with metaphase chromatin as well as condensed interphase heterochromatin. Since serine kinases that associate with condensed DNA are often cell cycle regulatory, these observations suggested a novel role for nuclear Bcr in cell cycle regulation and/or DNA repair. However, cell cycle synchronization analysis did not demonstrate changes in levels of Bcr throughout the cell cycle. We therefore hypothesized that BCR serves as a DNA repair gene and that its function is altered by formation of BCR-ABL. This hypothesis was investigated using cell lines stably transfected with the BCR-ABL gene and their parental counterparts (MBA-1 vs. M07E and Bcr-AblT1 vs. 4A2+pZAP), and several DNA repair assays: the Comet assay, a radioimmunoassay for UV-induced cyclobutane pyrimidine dimers (CPDs), and clonogenic assays. Comet assays demonstrated that, after exposure to either ultraviolet (UV)-C (0.5 to 10.0 J m−2) or gamma radiation (200–1000 rads), DNA repair was more efficient in the BCR-ABL-transfected cells than in their parental controls. Furthermore, after UVC irradiation, there was less production of CPDs and a more rapid disappearance of these adducts in BCR-ABL-bearing cells. UV survival, as reflected by clonogenic assays, was also greater in the BCR-ABL-transfected cells. Taken together, these results indicate that, in our systems, BCR-ABL confers resistance to UVC-induced damage and increases DNA repair efficiency in response to both UVC and gamma irradiation.
Abstract:
Las "orugas defoliadoras" afectan la producción del cultivo de soja, sobre todo en años secos y con altas temperaturas que favorecen su desarrollo. El objetivo del presente trabajo fue evaluar la eficiencia de control de insecticidas neurotóxicos e IGRs sobre "orugas defoliadoras" en soja. Se realizaron ensayos en lotes comerciales en tres localidades de la provincia de Córdoba en las campañas agrícolas 2008/09 y 2009/10, bajo un diseño de bloques al azar, con seis tratamientos y tres repeticiones. Los tratamientos fueron: T1: Clorpirifos (384 g p.a.ha-1), T2: Cipermetrina (37,5 g p.a.ha-1), T3: Lufenuron+Profenofos (15 + 150 g p.a.ha-1), T4: Metoxifenocide (28,8 g p.a.ha-1), T5: Novaluron (10 g p.a.ha-1) y T6: Testigo. El tamaño de las parcelas fue de 12 surcos de 10 m de largo distanciados a 0,52 m. La aplicación se realizó con una mochila provista de boquillas de cono hueco (40 gotas.cm-2), cuando la plaga alcanzó el umbral de daño económico. En cada parcela se tomaron cinco muestras a los 0, 2, 7 y 14 días después de la aplicación (DDA) utilizando el paño vertical, identificando y cuantificando las orugas vivas mayores a 1,5 cm. A los 14 DDA se extrajeron 30 folíolos por parcela (estrato medio y superior de la planta) y se determinó el porcentaje de defoliación utilizando el software WinFolia Reg. 2004. Se estimó el rendimiento sobre 5 muestras de 1 m2 en cada parcela y se realizó ANOVA y test de comparación de medias LSD de Fisher. El Clorpirifos mostró el mayor poder de volteo y el Metoxifenocide la mayor eficiencia a los 7 DDA. En general los IGRs mostraron mayor poder residual.
Abstract:
Increasing atmospheric CO2 concentration is responsible for progressive ocean acidification and ocean warming, as well as for a decreased thickness of the upper mixing layer (UML), thus exposing phytoplankton cells not only to lower pH and higher temperatures but also to higher levels of solar UV radiation. In order to evaluate the combined effects of ocean acidification, UV radiation and temperature, we used the diatom Phaeodactylum tricornutum as a model organism and examined its physiological performance after it had been grown under two CO2 concentrations (390 and 1000 µatm) for more than 20 generations. Compared to the ambient CO2 level (390 µatm), growth at the elevated CO2 concentration increased the non-photochemical quenching (NPQ) of cells and partially counteracted the harm to PS II (photosystem II) caused by UV-A and UV-B. This effect was less pronounced at increased temperature. The ratio of repair to UV-B-induced damage decreased with increased NPQ, reflecting induction of NPQ when repair lagged behind the damage, and it was higher under the ocean acidification condition, showing that the increased pCO2 and lowered pH counteracted UV-B-induced harm. The photosynthetic carbon fixation rate increased with increasing temperature from 15 to 25 °C, and the elevated CO2 and temperature levels interacted synergistically to reduce the inhibition caused by UV-B and thus increase carbon fixation.
Abstract:
The Stark full widths at half of the maximal line intensity (FWHM, ω) have been measured for 25 spectral lines of Pb III (15 measured for the first time) arising from the 5d¹⁰6s8s, 5d¹⁰6s7p, 5d¹⁰6s5f and 5d¹⁰6s5g electronic configurations, in a lead plasma produced by ablation with a Nd:YAG laser. The optical emission spectroscopy from a laser-induced plasma generated by 10 640 Å radiation, with an irradiance of 2 × 10¹⁰ W cm⁻² on a lead target (99.99% purity) in an argon atmosphere, was analysed in the wavelength interval between 2000 and 7000 Å. The broadening parameters were obtained with the target placed in an argon atmosphere at 6 Torr and 400 ns after each laser light pulse, which provides appropriate measurement conditions. A Boltzmann plot was used to obtain the plasma temperature (21,400 K), and published values of the Stark widths in Pb I, Pb II and Pb III were used to obtain the electron number density (7 × 10¹⁶ cm⁻³); with these values, the plasma composition was determined by means of the Saha equation. Local Thermodynamic Equilibrium (LTE) conditions and plasma homogeneity have been checked. Special attention was dedicated to the possible self-absorption of the different transitions. A comparison of the new results with recently available data is also presented.
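For reference, the Boltzmann-plot method mentioned above is normally based on the standard relation (assumed form; the abstract does not spell it out):

\[
  \ln\!\left(\frac{I_{ki}\,\lambda_{ki}}{g_k A_{ki}}\right) = -\frac{E_k}{k_B T} + \ln\!\left(\frac{h c\, N}{4\pi\, U(T)}\right),
\]

where I_{ki} is the measured line intensity, \lambda_{ki} the wavelength, g_k, E_k and A_{ki} the statistical weight, energy and transition probability of the upper level, N the species number density and U(T) its partition function; a linear fit of the left-hand side versus E_k yields the plasma temperature from the slope -1/(k_B T), independently of the intensity calibration constant.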
Abstract:
In recent years, laser technology has become an essential tool in the fabrication of photovoltaic devices, helping to achieve two key objectives for this energy option to become a viable alternative: reduction of manufacturing costs and increase of device efficiency. Among photovoltaic technologies, those based on crystalline silicon (c-Si) remain dominant in the market, and current scientific efforts in this field are directed mainly at obtaining higher-efficiency cells at lower cost; as noted above, a large part of the solutions can come from a greater use of laser technology in their fabrication. In this context, this Thesis carries out a complete study and develops, up to their application in a final device, three specific laser processes for the optimization of high-efficiency silicon-based photovoltaic devices. The purpose of these processes is to improve the front and rear contacts of c-Si photovoltaic cells in order to improve their electrical efficiency and reduce their production cost. Specifically, for the front contact, innovative solutions have been developed based on the use of laser technology for metallization and for the fabrication of point selective emitters based on laser doping techniques, while for the rear contact the work has focused on laser point-contact processes to improve the passivation of the device. Achieving these objectives has entailed reaching a series of milestones, summarized below: - Understanding the impact of the interaction of the laser with the different materials used in the device and its influence on device performance, identifying the damaging effects and mitigating them as far as possible. - Developing laser processes compatible with devices that tolerate little thermal load during fabrication (low-temperature processes), such as heterojunction devices. - Developing specific, fully parameterized processes for laser selective doping, laser point contacts and metallization by laser-induced material transfer techniques. - Defining these processes so that they reduce the complexity of device fabrication and are easy to integrate into a production line. - Improving the characterization techniques used to verify the quality of the processes, which has required the specific adaptation of characterization techniques of considerable complexity. - Demonstrating their viability in a final device. As detailed in this work, reaching these milestones within the framework of this Thesis has contributed to the fabrication of the first photovoltaic devices in Spain that incorporate these advanced concepts and, in the case of laser doping technology, has enabled completely novel advances at the international level. Likewise, the proposed laser metallization concepts open entirely original routes for the improvement of the devices considered.
Finally, this work was made possible by a very close collaboration between the Laser Center of the UPM, where the author carries out her work, and the Micro and Nanotechnology Research Group of the Universidad Politécnica de Cataluña, which was in charge of preparing and fine-tuning the samples and of developing some laser processes for comparison. The contribution of the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, CIEMAT, in the preparation of specific experiments of great importance for the development of this work should also be highlighted. These collaborations were carried out within the framework of several projects, such as the strategic singular project PSE-MICROSIL08 (PSE-120000-2006-6) and the INNDISOL project (IPT-420000-2010-6), both funded by the European Regional Development Fund FEDER (EU) “Una manera de hacer Europa” and the MICINN, and the National Plan project AMIC (ENE2010-21384-C04-02), whose funding has largely made it possible to complete this work. ABSTRACT. In recent years lasers have become a fundamental tool in the photovoltaic (PV) industry, helping this technology to achieve two major goals: cost reduction and efficiency improvement. Among the present PV technologies, crystalline silicon (c-Si) maintains a clear market supremacy and, in this particular field, the technological efforts are focused on improving the device efficiency using different approaches (reducing, for instance, the electrical or optical losses in the device) and on reducing the device fabrication cost (using less silicon in the final device or implementing more cost-effective production steps). In both approaches lasers appear as ideally suited tools to achieve the desired success. In this context, this work makes a comprehensive study and develops, up to their implementation in a final device, three specific laser processes designed for the optimization of high-efficiency PV devices based on c-Si. Those processes are intended to improve the front and back contacts of the considered solar cells in order to reduce the production costs and to improve the device efficiency. In particular, to improve the front contact, this work has developed innovative solutions using lasers as fundamental processing tools to metallize, using laser-induced forward transfer techniques, and to create local selective emitters by means of laser doping techniques. On the other side, for the back contact, an approach based on the optimization of standard laser-fired contact formation has been envisaged. To achieve these fundamental goals, a number of milestones have been reached in the development of this work, namely: - To understand the basics of the laser-matter interaction physics in the considered processes, in order to preserve the functionality of the irradiated materials. - To develop laser processes fully compatible with low-temperature device concepts (as is the case of heterojunction solar cells). - In particular, to completely parameterize processes of laser doping, laser-fired contacts and metallization via laser transfer of material. - To define such processes in such a way that their final industrial implementation could be a real option. - To improve widely used characterization techniques in order to apply them to the study of these particular processes. - To prove their viability in a final PV device.
The achievement of these milestones has led to the fabrication of the first devices in Spain incorporating these concepts. In particular, the developments achieved in laser doping are relevant not only for Spanish science but also in the international context, with the introduction of really innovative concepts such as local selective emitters. Finally, the advances reached in the laser metallization approach presented in this work open the door to future, fully innovative developments in the field of industrial PV metallization techniques. This work was made possible by a very close collaboration between the Laser Center of the UPM, in which the author carries out her work, and the Micro and Nanotechnology Research Group of the Universidad Politécnica de Cataluña, in charge of the preparation and development of samples and the assessment of some laser processes for comparison. It is also important to highlight the collaboration of the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, CIEMAT, in the preparation of specific experiments of great importance in the development of the work. These collaborations have been developed within the framework of various projects such as PSE-MICROSIL08 (PSE-120000-2006-6) and INNDISOL (IPT-420000-2010-6), both funded by the Fondo Europeo de Desarrollo Regional FEDER (UE) “Una manera de hacer Europa” and the MICINN, and the project AMIC (ENE2010-21384-C04-02), whose funding has largely made it possible to complete this work.
Abstract:
We present experimental and numerical results on the transport of fast electron beams produced by intense laser pulses through aluminum samples, either solid or compressed and heated by laser-induced planar shock propagation. Thanks to absolute Kα yield measurements and their very good agreement with results from numerical simulations, we quantify the collisional and resistive fast electron stopping powers: for electron current densities of ≈8 × 10¹⁰ A/cm² they reach 1.5 keV/µm and 0.8 keV/µm, respectively. For higher current densities up to 10¹² A/cm², numerical simulations show resistive and collisional energy losses at comparable levels. Analytical estimations predict that the resistive stopping power will remain at the level of 1 keV/µm for electron current densities of 10¹⁴ A/cm², representative of the full-scale conditions in the fast ignition of inertially confined fusion targets.
Abstract:
Helium retention in irradiated tungsten leads to swelling, pore formation, sample exfoliation and embrittlement, with deleterious consequences in many applications. In particular, tungsten is proposed for use in future nuclear fusion plants because of its good refractory properties. However, serious concerns about tungsten survivability stem from the fact that it must withstand severe irradiation conditions. In magnetic fusion as well as in inertial fusion (particularly with direct-drive targets), tungsten components will be exposed to low- and high-energy ion (helium) irradiation, respectively. A common feature is that the most detrimental situations will take place in pulsed mode, i.e., high-flux irradiation. There is increasing evidence of a correlation between a high helium flux and an enhancement of detrimental effects on tungsten. Nevertheless, the nature of these effects is not well understood owing to the subtleties imposed by the exact temperature profile evolution, ion energy, pulse duration, presence of impurities and simultaneous irradiation with other species. Physically based kinetic Monte Carlo is the technique of choice to simulate the evolution of radiation-induced damage inside solids on large temporal and spatial scales. We have used the recently developed code MMonCa (Modular Monte Carlo simulator), presented at this conference for the first time, to study He retention (and, in general, defect evolution) in tungsten samples irradiated with high-intensity helium pulses. The code simulates the interactions among a large variety of defects and impurities (He and C) during the irradiation stage and the subsequent annealing steps. In addition, it allows us to vary the sample temperature to follow the severe thermo-mechanical effects of the pulses. In this work we describe the helium kinetics for different irradiation conditions. A competition is established between fast helium cluster migration and trapping at large defects, with temperature being a determining factor. In fact, the high temperatures induced by the pulses are responsible for the formation of large vacancy clusters and for subsequent additional trapping with respect to low-flux irradiation.
Abstract:
Helium retention in irradiated tungsten leads to swelling, pore formation, sample exfoliation and embrittlement, with deleterious consequences in many applications. In particular, tungsten is proposed for use in future nuclear fusion plants because of its good refractory properties. However, serious concerns about tungsten survivability stem from the fact that it must withstand severe irradiation conditions. In magnetic fusion as well as in inertial fusion (particularly with direct-drive targets), tungsten components will be exposed to low- and high-energy ion (helium) irradiation, respectively. A common feature is that the most detrimental situations will take place in pulsed mode, i.e., high-flux irradiation. There is increasing evidence of a correlation between a high helium flux and an enhancement of detrimental effects on tungsten. Nevertheless, the nature of these effects is not well understood owing to the subtleties imposed by the exact temperature profile evolution, ion energy, pulse duration, presence of impurities and simultaneous irradiation with other species. Object kinetic Monte Carlo is the technique of choice to simulate the evolution of radiation-induced damage inside solids on large temporal and spatial scales. We have used the recently developed code MMonCa (Modular Monte Carlo simulator), presented at COSIRES 2012 for the first time, to study He retention (and, in general, defect evolution) in tungsten samples irradiated with high-intensity helium pulses. The code simulates the interactions among a large variety of defects and impurities during the irradiation stage and the subsequent annealing steps. The results show that the pulsed mode leads to significantly higher He retention at temperatures above 700 K. In this paper we discuss the process of He retention in terms of trap evolution. In addition, we discuss the implications of these findings for inertial fusion.
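To make the simulation approach concrete, here is a minimal object-kinetic-Monte-Carlo sketch (residence-time algorithm) for mobile helium hopping on a 1-D domain and being captured at immobile traps; it is a generic illustration under assumed placeholder rates and geometries, not MMonCa code or its API:

import math, random

K_B = 8.617e-5            # Boltzmann constant (eV/K)
T = 900.0                 # sample temperature (K), placeholder

def arrhenius(prefactor, e_mig):
    """Rate (1/s) of a thermally activated event."""
    return prefactor * math.exp(-e_mig / (K_B * T))

# State: mobile He positions and immobile traps (e.g. vacancy clusters) on a 1-D line (nm)
he_atoms = [random.uniform(0.0, 100.0) for _ in range(50)]
traps = [20.0, 55.0, 80.0]
capture_radius = 1.0      # nm
jump_length = 0.3         # nm
rate_per_atom = arrhenius(1e13, 1.0)   # placeholder attempt frequency and migration barrier

time, trapped = 0.0, 0
for _ in range(200_000):                                  # cap on the number of KMC events
    if not he_atoms:
        break
    total_rate = rate_per_atom * len(he_atoms)
    time += -math.log(1.0 - random.random()) / total_rate  # residence-time increment
    i = random.randrange(len(he_atoms))                    # pick one mobile He at random
    he_atoms[i] += random.choice((-jump_length, jump_length))
    if any(abs(he_atoms[i] - t) < capture_radius for t in traps):
        he_atoms.pop(i)                                    # captured: He becomes trapped
        trapped += 1

print(f"t = {time:.3e} s, trapped He = {trapped}, still mobile = {len(he_atoms)}")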
Abstract:
This doctoral thesis presents a series of studies in the heritage field based on monitoring methodologies using sensor networks and non-invasive techniques, with the aim of making new contributions to preventive conservation by tracking deterioration damage or preventing it. The monitoring methodologies based on the deployment of three-dimensional data logger networks address short-term microclimatic, comfort and energy studies, from which conclusions are drawn on the energy efficiency of three heating systems widely used in churches of the central region of the Iberian Peninsula, including their effect on occupant comfort and on the deterioration of heritage or building elements. Several wireless sensor network platforms were also deployed, and this thesis analyzes which of them gives the best results in the heritage field with a view to long-term monitoring, considering communications, power consumption and network configuration. Once the platform with the best comparative results had been identified, a methodology is presented for studying the quality of its communications in multiple cultural and natural heritage scenarios, which will serve to establish a series of aspects to be considered when deploying wireless sensor networks in future monitoring scenarios. As with the data-logger-based sensor networks, the monitoring carried out in this thesis with the different wireless platforms has allowed the detection of numerous deterioration phenomena, which are described throughout the research and whose tracking constitutes a contribution to damage prevention in the different scenarios. The thesis also contributes to preventive conservation through monitoring with different non-invasive techniques, such as infrared thermography, surface moisture measurements with a protimeter, high-resolution electrical resistivity surveys and ground-penetrating radar surveys. Different contributions and conclusions are thus presented on the advantages and/or limitations of these techniques, analyzing the suitability of applying each of them in different phases of analysis or with different capabilities for detecting or characterizing damage. The combined use of these techniques was studied in a real scenario with severe damp-induced damage, whose origin it was possible to characterize. ABSTRACT This doctoral dissertation discusses field research conducted to monitor heritage assets with sensor networks and other non-invasive techniques. The aim pursued was to contribute to conservation by tracking or preventing decay-induced damage. Monitoring methodologies based on three-dimensional data logger networks were used in short-term micro-climatic, comfort and energy studies to draw conclusions about the energy efficiency of three heating systems widely used in central Iberian churches. The impact of these systems on occupant comfort and decay of heritage or built elements was also explored.
Different wireless sensor platforms were deployed and analysed to determine which delivered the best results in the context of long-term heritage monitoring from the standpoints of communications, energy demand and network architecture. A methodology was subsequently designed to study communication quality in a number of cultural and natural heritage scenarios and help establish the considerations to be borne in mind when deploying wireless sensor networks for heritage monitoring in future. As in data logger-based sensor networks, the monitoring conducted in this research with wireless platforms identified many instances of decay, described hereunder. Tracking those situations will help prevent damage in the respective scenarios. The research also contributes to preventive conservation based on non-invasive monitoring using techniques such as infrared thermography, protimeter-based surface damp measurements, high resolution electrical resistivity surveys and georadar analysis. The conclusions drawn address the advantages and drawbacks of each technique and its suitability for the various phases of analysis and capacity to detect or characterise damage. This dissertation also describes the intermeshed usage of these techniques that led to the identification of the origin of severe damp-induced damage in a real scenario.
Abstract:
Different treatments (consolidant and water-repellent) were applied to samples of marble and granite from the front stage of the Roman Theatre of Merida (Spain). The main goal is to study the effects of these treatments on archaeological stone material by analyzing the surface changes. X-Ray Fluorescence and Laser-Induced Breakdown Spectroscopy, as well as Nuclear Magnetic Resonance, have been used to study changes in the surface properties of the material, comparing treated and untreated specimens. The results confirm that tracking the silicon (Si) marker allows the detection of the applied treatments, the peak signal increasing in treated specimens. Furthermore, it is also possible to prove changes both within the pore system of the material and in the distribution of surface water resulting from the application of these products.
Abstract:
Deterministic safety analysis (DSA) is the procedure used to design safety-related systems, structures and components in nuclear power plants. DSA is based on computational simulations of a set of hypothetical accidents representative of the installation, called design basis scenarios (DBS). The regulatory bodies specify a set of safety magnitudes that must be calculated in the simulations and establish regulatory acceptance criteria (RAC), which are restrictions that the values of those magnitudes must satisfy. Methodologies for performing DSA can be of two types: conservative or realistic. Conservative methodologies use markedly pessimistic, and therefore relatively simple, predictive models and assumptions. They do not need to include an uncertainty analysis of their results. Realistic methodologies are based on realistic, generally mechanistic, assumptions and predictive models, and are supplemented with an uncertainty analysis of their main results. They are also called BEPU ("Best Estimate Plus Uncertainty") methodologies. In them, uncertainty is represented, basically, in a probabilistic manner. For conservative methodologies, the RAC are simply restrictions on the calculated values of the safety magnitudes, which must remain confined to an "acceptance region" of their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain variables. The thesis develops the way in which uncertainty is introduced into the RAC. Basically, confinement to the same acceptance region established by the regulator is maintained, but strict fulfillment is not required; instead, a high level of certainty is demanded. In the adopted formalism this is understood as a "high level of probability", and this probability corresponds to the calculation uncertainty of the safety magnitudes. Such uncertainty can be regarded as originating in the inputs to the calculation model and propagated through that model. The uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which are used to incorporate the uncertainty due to model imperfection. Fulfillment of the RAC is therefore required with a probability not lower than a value P0 close to 1 and defined by the regulator (probability or coverage level). However, the calculation uncertainty of the magnitude is not the only uncertainty involved. Even if a model (its basic equations) is known perfectly, the input-output mapping it produces is known imperfectly (unless the model is very simple). The uncertainty due to ignorance about the action of the model is called epistemic; it can also be described as uncertainty about the propagation. The consequence is that the probability of fulfilling the RAC cannot be known perfectly; it is an uncertain magnitude. This justifies another term used here for this epistemic uncertainty: metauncertainty. The RAC must incorporate both types of uncertainty: that of the calculation of the safety magnitude (here called aleatory) and that of the calculation of the probability (called epistemic or metauncertainty). Both uncertainties can be introduced in two ways: separately or combined. In both cases, the RAC becomes a probabilistic criterion.
If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. If the second-order probability is used, the regulator must impose a second fulfillment level referring to the epistemic uncertainty. It is called the regulatory confidence level, and it must be a number close to 1. The pair formed by the two regulatory levels (probability and confidence) is called the regulatory tolerance level. The thesis argues that the best way to construct the BEPU RAC is by separating the uncertainties, for two reasons. First, experts advocate the separate treatment of aleatory and epistemic uncertainty. Second, the separated RAC is (except in exceptional cases) more conservative than the combined RAC. The BEPU RAC is nothing more than a hypothesis about a probability distribution, and its verification is carried out statistically. The thesis classifies the statistical methods for verifying the BEPU RAC into 3 categories, according to whether they are based on the construction of tolerance regions, on quantile estimates or on probability estimates (either of fulfillment or of exceedance of regulatory limits). According to a recently proposed terminology, the first two categories correspond to Q-methods and the third to P-methods. The purpose of the classification is not to make an inventory of the different methods in each category, which are very numerous and varied, but to relate the different categories and to cite the most widely used methods and those best regarded from the regulatory point of view. Special mention is made of the method most used to date: the nonparametric method of Wilks, together with its extension by Wald to the multidimensional case. Its homologous P-method, the Clopper-Pearson interval, typically ignored in the BEPU field, is also described. In this context, the problem of the computational cost of the uncertainty analysis is addressed. The Wilks, Wald and Clopper-Pearson methods require the random sample used to have a minimum size, which grows with the required tolerance level. The sample size is an indicator of the computational cost, because each sample element is a value of the safety magnitude that requires a calculation with predictive models. Special emphasis is placed on the computational cost when the safety magnitude is multidimensional, that is, when the RAC is a multiple criterion. It is shown that, when the different components of the magnitude are obtained from the same calculation, the multidimensional character introduces no additional computational cost. This proves the falsehood of a common belief in the BEPU field: that the multidimensional problem can only be tackled through the Wald extension, whose computational cost grows with the dimension of the problem. In the case (which sometimes occurs) in which each component of the magnitude is calculated independently of the others, the influence of the dimension on the cost cannot be avoided. The first BEPU methodologies carried out the propagation of uncertainties through a surrogate model (metamodel or emulator) of the predictive model or code. The purpose of the metamodel is not its predictive capability, which is much lower than that of the original model, but to replace the original model exclusively in the propagation of uncertainties.
To this end, the metamodel must be built with the input parameters that contribute most to the uncertainty of the result, and that requires a prior importance or sensitivity analysis. Because of its simplicity, the surrogate model entails hardly any computational cost and can be studied exhaustively, for example by means of random samples. Consequently, the epistemic uncertainty or metauncertainty disappears, and the BEPU criterion for metamodels becomes a simple probability. In short, the regulator will more readily accept the statistical methods that require the fewest assumptions: exact methods rather than approximate ones, nonparametric rather than parametric, and frequentist rather than Bayesian. The BEPU criterion is based on a second-order probability. The probability that the safety magnitudes lie in the acceptance region can not only be regarded as a probability of success or a degree of fulfillment of the RAC. It also has a metric interpretation: it represents a distance (within the range of the magnitudes) from the calculated magnitude to the regulatory acceptance limits. This interpretation gives rise to a definition proposed in this thesis: the probabilistic safety margin. Given a scalar safety magnitude with an upper acceptance limit, the safety margin (SM) between two values A and B of that magnitude is defined as the probability that A is less than B, obtained from the uncertainties of A and B. The probabilistic definition of the SM has several advantages: it is dimensionless, it can be combined according to the laws of probability, and it is easily generalizable to several dimensions. Moreover, it does not satisfy the symmetry property. The term safety margin can be applied to different situations: the distance from a calculated magnitude to a regulatory limit (licensing margin); the distance from the real value of the magnitude to its calculated value (analytical margin); the distance from a regulatory limit to the damage threshold of a barrier (barrier margin). This idea of representing distances (in the range of safety magnitudes) by means of probabilities can be applied to the study of conservatism. The analytical margin can be interpreted as the degree of conservatism (DG) of the calculation methodology. Using probability, the conservatism of tolerance limits of a magnitude can be quantified, and conservatism indicators can be established to compare different methods for constructing tolerance limits and regions. A topic that has never been addressed rigorously is the validation of BEPU methodologies. Like any other calculation tool, a methodology, before it can be applied to licensing analysis, has to be validated by comparing its predictions with real values of the safety magnitudes. Such a comparison can only be made for accident scenarios for which measured values of the safety magnitudes exist, and that happens, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that they are fulfilled for the real values of the safety magnitudes, and not only for their calculated values. The thesis shows that a sufficient condition for this ultimate goal is the joint fulfillment of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation.
And the validation criterion must be demonstrated in experimental scenarios and extrapolated to nuclear plants. The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (the DG). These minimum levels are basically complementary: the higher one is, the lower the other. Current regulatory practice imposes a high value on the licensing margin, which means that the required DG is small. Adopting lower values for P0 implies a weaker requirement on RAC fulfillment and, in exchange, a stronger requirement on the DG of the methodology. It is important to stress that the higher the minimum value of the margin (licensing or analytical), the higher the computational cost of demonstrating it. Thus the computational efforts are also complementary: if one of the levels is high (which increases the demands on criterion fulfillment), the computational cost increases. If an intermediate value of P0 is adopted, the required DG is also intermediate, so the methodology does not have to be very conservative, and the total computational cost (licensing plus validation) can be optimized. ABSTRACT Deterministic Safety Analysis (DSA) is the procedure used in the design of safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes, and define the regulatory acceptance criteria (RAC) that must be fulfilled by them. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies make use of pessimistic models and assumptions, and are relatively simple. They do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions, and need to be supplemented with uncertainty analyses of their results. They are also termed BEPU ("Best Estimate Plus Uncertainty") methodologies, and are typically based on a probabilistic representation of the uncertainty. For conservative methodologies, the RAC are simply the restriction of calculated values of safety magnitudes to "acceptance regions" defined on their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain. In the present Thesis, the inclusion of uncertainty in the RAC is studied. Basically, the restriction to the acceptance region must be fulfilled "with a high certainty level". Specifically, a high probability of fulfillment is required. The calculation uncertainty of the magnitudes is considered as propagated from inputs through the predictive model. Uncertain inputs include model empirical parameters, which store the uncertainty due to model imperfection. Fulfillment of the RAC is required with a probability not less than a value P0 close to 1 and defined by the regulator (probability or coverage level). Calculation uncertainty is not the only one involved. Even if a model (i.e. the basic equations) is perfectly known, the input-output mapping produced by the model is imperfectly known (unless the model is very simple). This ignorance is called epistemic uncertainty, and it is associated with the process of propagation. In fact, it is propagated to the probability of fulfilling the RAC. Another term used in the Thesis for this epistemic uncertainty is metauncertainty.
The RAC must include the two types of uncertainty: one for the calculation of the magnitude (aleatory uncertainty) and the other for the calculation of the probability (epistemic uncertainty). The two uncertainties can be taken into account separately, or they can be combined. In either case the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if both are combined, a single probability is used. In the first case, the regulator must define a level of fulfillment for the epistemic uncertainty, termed the regulatory confidence level, as a value close to 1. The pair of regulatory levels (probability and confidence) is termed the regulatory tolerance level. The Thesis concludes that the adequate way of setting the BEPU RAC is by separating the uncertainties. There are two reasons to do so: experts recommend the separation of aleatory and epistemic uncertainty, and the separated RAC is in general more conservative than the joint RAC. The BEPU RAC is a hypothesis on a probability distribution, and must be statistically tested. The Thesis classifies the statistical methods to verify RAC fulfillment into 3 categories: methods based on tolerance regions, on quantile estimators and on probability (of success or failure) estimators. The former two have been termed Q-methods, whereas those in the third category are termed P-methods. The purpose of this categorization is not to make an exhaustive survey of the very numerous existing methods. Rather, the goal is to relate the three categories and examine the most used methods from a regulatory standpoint. The most widely used method, due to Wilks, and its extension to multidimensional variables (due to Wald) deserve special mention. The P-method counterpart of Wilks' method is the Clopper-Pearson interval, typically ignored in the BEPU realm. The problem of the computational cost of an uncertainty analysis is then tackled. Wilks', Wald's and Clopper-Pearson's methods require a minimum sample size, which is a growing function of the tolerance level. The sample size is an indicator of the computational cost, because each element of the sample must be calculated with the predictive models (codes). When the RAC is a multiple criterion, the safety magnitude becomes multidimensional. When all its components are outputs of the same calculation, the multidimensional character does not introduce additional computational cost. In this way, a widespread idea in the BEPU realm, stating that the multi-D problem can only be tackled with the Wald extension, is proven to be false. When the components of the magnitude are calculated independently, the influence of the problem dimension on the cost cannot be avoided. The first BEPU methodologies performed the uncertainty propagation through a surrogate model of the code, also termed emulator or metamodel. The goal of a metamodel is not predictive capability, which is clearly inferior to that of the original code, but the capacity to propagate uncertainties at a lower computational cost. The emulator must contain the input parameters contributing the most to the output uncertainty, and this requires a previous importance analysis. The surrogate model is practically inexpensive to run, so it can be exhaustively analyzed through Monte Carlo sampling. Therefore, the epistemic uncertainty due to sampling is reduced to almost zero, and the BEPU RAC for metamodels involves a simple probability.
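As a concrete illustration of the sample-size discussion above (a sketch, not taken from the Thesis): the classical first-order, one-sided Wilks formula, together with the matching Clopper-Pearson check for a run with zero exceedances, using the common 95/95 choice as an example:

import math

def wilks_sample_size(coverage: float, confidence: float) -> int:
    """Smallest N such that the largest of N runs is a one-sided tolerance
    limit with the requested coverage and confidence: 1 - coverage**N >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

coverage, confidence = 0.95, 0.95
n = wilks_sample_size(coverage, confidence)        # 59 for 95/95

# Clopper-Pearson: with n runs and zero exceedances of the regulatory limit,
# the one-sided lower confidence bound on the success probability is alpha**(1/n).
lower_bound = (1.0 - confidence) ** (1.0 / n)

print(f"Wilks (first order, one-sided): N = {n}")
print(f"Clopper-Pearson lower bound with 0 failures in {n} runs: {lower_bound:.4f}")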
The regulatory authority will tend to accept the use of statistical methods which need a minimum of assumptions: exact, nonparametric and frequentist methods rather than approximate, parametric and Bayesian methods, respectively. The BEPU RAC is based on a second-order probability. The probability of the safety magnitudes being inside the acceptance region is a success probability and can be interpreted as a degree of fulfillment of the RAC. Furthermore, it has a metric interpretation, as a distance (in the range of the magnitudes) from the calculated values of the magnitudes to the regulatory acceptance limits. A probabilistic definition of safety margin (SM) is proposed in the thesis. The SM from a value A to another value B of a safety magnitude is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is nondimensional, it ranges in the interval (0,1) and it can be easily generalized to multiple dimensions. Furthermore, probabilistic SMs are combined according to the laws of probability. And a basic property: probabilistic SMs are not symmetric. There are several types of SM: the distance from a calculated value to a regulatory limit (licensing margin); from the real value to the calculated value of a magnitude (analytical margin); or from the regulatory limit to the damage threshold (barrier margin). These representations of distances (in the magnitudes' range) as probabilities can be applied to the quantification of conservativeness. Analytical margins can be interpreted as the degree of conservativeness (DG) of the computational methodology. Conservativeness indicators are established in the Thesis, useful for comparing different methods of constructing tolerance limits and regions. There is a topic which has not been rigorously tackled to date: the validation of BEPU methodologies. Before being applied in licensing, methodologies must be validated on the basis of comparisons of their predictions with real values of the safety magnitudes. Real data are obtained, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that real values (aside from calculated values) fulfill them. In the Thesis it is proved that a sufficient condition for this goal is the conjunction of 2 criteria: the BEPU RAC and an analogous criterion for validation. And this last criterion must be proved in experimental scenarios and extrapolated to NPPs. The licensing RAC requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DG). These minimum values are basically complementary; the higher one of them, the lower the other one. Regulatory practice sets a high value on the licensing margin, so that the required DG is low. The possible adoption of lower values for P0 would imply a weaker demand on RAC fulfillment and, on the other hand, a stronger demand on the conservativeness of the methodology. It is important to highlight that a higher minimum value of the licensing or analytical margin requires a higher computational cost. Therefore, the computational efforts are also complementary. If medium levels are adopted, the required DG is also medium, and the methodology does not need to be very conservative. The total computational effort (licensing plus validation) can then be optimized.
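A minimal Monte Carlo sketch of the probabilistic safety margin defined above, SM(A, B) = P(A < B), where A and B are two uncertain values of the same safety magnitude; the distributions and numbers below are placeholders chosen only for illustration:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a = rng.normal(loc=1100.0, scale=40.0, size=n)   # uncertain calculated magnitude (placeholder)
b = rng.normal(loc=1200.0, scale=10.0, size=n)   # a second, more severe uncertain value (placeholder)

safety_margin = np.mean(a < b)      # Monte Carlo estimate of P(A < B)
print(f"probabilistic safety margin SM(A,B) = {safety_margin:.4f}")
# Note the asymmetry stressed in the abstract: for continuous A and B,
# SM(B,A) = 1 - SM(A,B), so SM is not a symmetric quantity.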
Abstract:
A high-fidelity virtual tool for the numerical simulation of low-velocity impact damage in unidirectional composite laminates is proposed. A continuum material model for the simulation of intraply damage phenomena is implemented in a numerical scheme as a user subroutine of the commercially available Abaqus finite element package. Delaminations are simulated using cohesive surfaces. The use of structured meshes aligned with the fiber directions allows the physically sound simulation of matrix cracks parallel to the fibers and of their interaction with the development of delaminations. The implementation of element erosion criteria and the application of intraply and interlaminar friction allow for the simulation of fiber splits and their entanglement, which in turn results in permanent indentation in the impacted laminate. It is shown that this simulation strategy gives sound results for impact energies below and above the Barely Visible Impact Damage threshold, up to laminate perforation conditions.