15 results for Laser damage threshold

at Universidad Politécnica de Madrid


Relevance: 80.00%

Publisher:

Abstract:

El análisis determinista de seguridad (DSA) es el procedimiento que sirve para diseñar sistemas, estructuras y componentes relacionados con la seguridad en las plantas nucleares. El DSA se basa en simulaciones computacionales de una serie de hipotéticos accidentes representativos de la instalación, llamados escenarios base de diseño (DBS). Los organismos reguladores señalan una serie de magnitudes de seguridad que deben calcularse en las simulaciones, y establecen unos criterios reguladores de aceptación (CRA), que son restricciones que deben cumplir los valores de esas magnitudes. Las metodologías para realizar los DSA pueden ser de 2 tipos: conservadoras o realistas. Las metodologías conservadoras utilizan modelos predictivos e hipótesis marcadamente pesimistas, y, por ello, relativamente simples. No necesitan incluir un análisis de incertidumbre de sus resultados. Las metodologías realistas se basan en hipótesis y modelos predictivos realistas, generalmente mecanicistas, y se suplementan con un análisis de incertidumbre de sus principales resultados. Se les denomina también metodologías BEPU (“Best Estimate Plus Uncertainty”). En ellas, la incertidumbre se representa, básicamente, de manera probabilista. Para metodologías conservadoras, los CRA son, simplemente, restricciones sobre valores calculados de las magnitudes de seguridad, que deben quedar confinados en una “región de aceptación” de su recorrido. Para metodologías BEPU, el CRA no puede ser tan sencillo, porque las magnitudes de seguridad son ahora variables inciertas. En la tesis se desarrolla la manera de introducir la incertidumbre en los CRA. Básicamente, se mantiene el confinamiento a la misma región de aceptación, establecida por el regulador. Pero no se exige el cumplimiento estricto sino un alto nivel de certidumbre. En el formalismo adoptado, se entiende por ello un “alto nivel de probabilidad”, y ésta corresponde a la incertidumbre de cálculo de las magnitudes de seguridad. Tal incertidumbre puede considerarse como originada en los inputs al modelo de cálculo, y propagada a través de dicho modelo. Los inputs inciertos incluyen las condiciones iniciales y de frontera del cálculo, y los parámetros empíricos de modelo, que se utilizan para incorporar la incertidumbre debida a la imperfección del modelo. Se exige, por tanto, el cumplimiento del CRA con una probabilidad no menor a un valor P0 cercano a 1 y definido por el regulador (nivel de probabilidad o cobertura). Sin embargo, la incertidumbre de cálculo de la magnitud no es la única incertidumbre existente. Aunque un modelo (sus ecuaciones básicas) se conozca a la perfección, la aplicación input-output que produce se conoce de manera imperfecta (salvo que el modelo sea muy simple). La incertidumbre debida a la ignorancia sobre la acción del modelo se denomina epistémica; también se puede decir que es incertidumbre respecto a la propagación. La consecuencia es que la probabilidad de cumplimiento del CRA no se puede conocer a la perfección; es una magnitud incierta. Y así se justifica otro término usado aquí para esta incertidumbre epistémica: metaincertidumbre. Los CRA deben incorporar los dos tipos de incertidumbre: la de cálculo de la magnitud de seguridad (aquí llamada aleatoria) y la de cálculo de la probabilidad (llamada epistémica o metaincertidumbre). Ambas incertidumbres pueden introducirse de dos maneras: separadas o combinadas. En ambos casos, el CRA se convierte en un criterio probabilista. 
Si se separan incertidumbres, se utiliza una probabilidad de segundo orden; si se combinan, se utiliza una probabilidad única. Si se emplea la probabilidad de segundo orden, es necesario que el regulador imponga un segundo nivel de cumplimiento, referido a la incertidumbre epistémica. Se denomina nivel regulador de confianza, y debe ser un número cercano a 1. Al par formado por los dos niveles reguladores (de probabilidad y de confianza) se le llama nivel regulador de tolerancia. En la Tesis se razona que la mejor manera de construir el CRA BEPU es separando las incertidumbres, por dos motivos. Primero, los expertos defienden el tratamiento por separado de la incertidumbre aleatoria y la epistémica. Segundo, el CRA separado es (salvo en casos excepcionales) más conservador que el CRA combinado. El CRA BEPU no es otra cosa que una hipótesis sobre una distribución de probabilidad, y su comprobación se realiza de forma estadística. En la tesis, los métodos estadísticos para comprobar el CRA BEPU se clasifican en 3 categorías, según estén basados en construcción de regiones de tolerancia, en estimaciones de cuantiles o en estimaciones de probabilidades (ya sea de cumplimiento, ya sea de excedencia de límites reguladores). Según una denominación propuesta recientemente, las dos primeras categorías corresponden a los métodos Q, y la tercera, a los métodos P. El propósito de la clasificación no es hacer un inventario de los distintos métodos en cada categoría, que son muy numerosos y variados, sino relacionar las distintas categorías y citar los métodos más utilizados y los mejor considerados desde el punto de vista regulador. Se hace mención especial del método más utilizado hasta el momento: el método no paramétrico de Wilks, junto con su extensión, hecha por Wald, al caso multidimensional. Se describe su método P homólogo, el intervalo de Clopper-Pearson, típicamente ignorado en el ámbito BEPU. En este contexto, se menciona el problema del coste computacional del análisis de incertidumbre. Los métodos de Wilks, Wald y Clopper-Pearson requieren que la muestra aleatoria utilizada tenga un tamaño mínimo, tanto mayor cuanto mayor sea el nivel de tolerancia exigido. El tamaño de muestra es un indicador del coste computacional, porque cada elemento muestral es un valor de la magnitud de seguridad, que requiere un cálculo con modelos predictivos. Se hace especial énfasis en el coste computacional cuando la magnitud de seguridad es multidimensional; es decir, cuando el CRA es un criterio múltiple. Se demuestra que, cuando las distintas componentes de la magnitud se obtienen de un mismo cálculo, el carácter multidimensional no introduce ningún coste computacional adicional. Se prueba así la falsedad de una creencia habitual en el ámbito BEPU: que el problema multidimensional sólo es atacable desde la extensión de Wald, que tiene un coste de computación creciente con la dimensión del problema. En el caso (que se da a veces) en que cada componente de la magnitud se calcula independientemente de las demás, la influencia de la dimensión en el coste no se puede evitar. Las primeras metodologías BEPU hacían la propagación de incertidumbres a través de un modelo sustitutivo (metamodelo o emulador) del modelo predictivo o código. El objetivo del metamodelo no es su capacidad predictiva, muy inferior a la del modelo original, sino reemplazar a éste exclusivamente en la propagación de incertidumbres. 
Para ello, el metamodelo se debe construir con los parámetros de input que más contribuyan a la incertidumbre del resultado, y eso requiere un análisis de importancia o de sensibilidad previo. Por su simplicidad, el modelo sustitutivo apenas supone coste computacional, y puede estudiarse exhaustivamente, por ejemplo mediante muestras aleatorias. En consecuencia, la incertidumbre epistémica o metaincertidumbre desaparece, y el criterio BEPU para metamodelos se convierte en una probabilidad simple. En un resumen rápido, el regulador aceptará con más facilidad los métodos estadísticos que menos hipótesis necesiten; los exactos más que los aproximados; los no paramétricos más que los paramétricos, y los frecuentistas más que los bayesianos. El criterio BEPU se basa en una probabilidad de segundo orden. La probabilidad de que las magnitudes de seguridad estén en la región de aceptación no sólo puede asimilarse a una probabilidad de éxito o un grado de cumplimiento del CRA. También tiene una interpretación métrica: representa una distancia (dentro del recorrido de las magnitudes) desde la magnitud calculada hasta los límites reguladores de aceptación. Esta interpretación da pie a una definición que propone esta tesis: la de margen de seguridad probabilista. Dada una magnitud de seguridad escalar con un límite superior de aceptación, se define el margen de seguridad (MS) entre dos valores A y B de la misma como la probabilidad de que A sea menor que B, obtenida a partir de las incertidumbres de A y B. La definición probabilista de MS tiene varias ventajas: es adimensional, puede combinarse de acuerdo con las leyes de la probabilidad y es fácilmente generalizable a varias dimensiones. Además, no cumple la propiedad simétrica. El término margen de seguridad puede aplicarse a distintas situaciones: distancia de una magnitud calculada a un límite regulador (margen de licencia); distancia del valor real de la magnitud a su valor calculado (margen analítico); distancia desde un límite regulador hasta el valor umbral de daño a una barrera (margen de barrera). Esta idea de representar distancias (en el recorrido de magnitudes de seguridad) mediante probabilidades puede aplicarse al estudio del conservadurismo. El margen analítico puede interpretarse como el grado de conservadurismo (GC) de la metodología de cálculo. Utilizando la probabilidad, se puede cuantificar el conservadurismo de límites de tolerancia de una magnitud, y se pueden establecer indicadores de conservadurismo que sirvan para comparar diferentes métodos de construcción de límites y regiones de tolerancia. Un tópico que nunca se ha abordado de manera rigurosa es el de la validación de metodologías BEPU. Como cualquier otro instrumento de cálculo, una metodología, antes de poder aplicarse a análisis de licencia, tiene que validarse, mediante la comparación entre sus predicciones y valores reales de las magnitudes de seguridad. Tal comparación sólo puede hacerse en escenarios de accidente para los que existan valores medidos de las magnitudes de seguridad, y eso ocurre, básicamente, en instalaciones experimentales. El objetivo último del establecimiento de los CRA consiste en verificar que se cumplen para los valores reales de las magnitudes de seguridad, y no sólo para sus valores calculados. En la tesis se demuestra que una condición suficiente para este objetivo último es la conjunción del cumplimiento de 2 criterios: el CRA BEPU de licencia y un criterio análogo, pero aplicado a validación. 
Y el criterio de validación debe demostrarse en escenarios experimentales y extrapolarse a plantas nucleares. El criterio de licencia exige un valor mínimo (P0) del margen probabilista de licencia; el criterio de validación exige un valor mínimo del margen analítico (el GC). Esos niveles mínimos son básicamente complementarios; cuanto mayor uno, menor el otro. La práctica reguladora actual impone un valor alto al margen de licencia, y eso supone que el GC exigido es pequeño. Adoptar valores menores para P0 supone menor exigencia sobre el cumplimiento del CRA, y, en cambio, más exigencia sobre el GC de la metodología. Y es importante destacar que cuanto mayor sea el valor mínimo del margen (de licencia o analítico) mayor es el coste computacional para demostrarlo. Así que los esfuerzos computacionales también son complementarios: si uno de los niveles es alto (lo que aumenta la exigencia en el cumplimiento del criterio) aumenta el coste computacional. Si se adopta un valor medio de P0, el GC exigido también es medio, con lo que la metodología no tiene que ser muy conservadora, y el coste computacional total (licencia más validación) puede optimizarse. ABSTRACT Deterministic Safety Analysis (DSA) is the procedure used in the design of safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes, and define the regulatory acceptance criteria (RAC) that must be fulfilled by them. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies make use of pessimistic models and assumptions, and are relatively simple. They do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions, and need to be supplemented with uncertainty analyses of their results. They are also termed BEPU (“Best Estimate Plus Uncertainty”) methodologies, and are typically based on a probabilistic representation of the uncertainty. For conservative methodologies, the RAC are simply the restriction of calculated values of safety magnitudes to “acceptance regions” defined on their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain. In the present Thesis, the inclusion of uncertainty in RAC is studied. Basically, the restriction to the acceptance region must be fulfilled “with a high certainty level”. Specifically, a high probability of fulfillment is required. The calculation uncertainty of the magnitudes is considered as propagated from inputs through the predictive model. Uncertain inputs include model empirical parameters, which store the uncertainty due to the model imperfection. The fulfillment of the RAC is required with a probability not less than a value P0 close to 1 and defined by the regulator (probability or coverage level). Calculation uncertainty is not the only one involved. Even if a model (i.e. the basic equations) is perfectly known, the input-output mapping produced by the model is imperfectly known (unless the model is very simple). This ignorance is called epistemic uncertainty, and it is associated with the propagation process. In fact, it is propagated to the probability of fulfilling the RAC. Another term used in the Thesis for this epistemic uncertainty is metauncertainty. 
The RAC must include the two types of uncertainty: one for the calculation of the magnitude (aleatory uncertainty); the other one, for the calculation of the probability (epistemic uncertainty). The two uncertainties can be taken into account in a separate fashion, or can be combined. In any case the RAC becomes a probabilistic criterion. If uncertainties are separated, a second-order probability is used; if both are combined, a single probability is used. In the first case, the regulator must define a level of fulfillment for the epistemic uncertainty, termed regulatory confidence level, as a value close to 1. The pair of regulatory levels (probability and confidence) is termed the regulatory tolerance level. The Thesis concludes that the adequate way of setting the BEPU RAC is by separating the uncertainties. There are two reasons to do so: experts recommend the separation of aleatory and epistemic uncertainty; and the separated RAC is in general more conservative than the joint RAC. The BEPU RAC is a hypothesis on a probability distribution, and must be statistically tested. The Thesis classifies the statistical methods to verify the RAC fulfillment in 3 categories: methods based on tolerance regions, on quantile estimators and on probability (of success or failure) estimators. The former two have been termed Q-methods, whereas those in the third category are termed P-methods. The purpose of our categorization is not to make an exhaustive survey of the very numerous existing methods. Rather, the goal is to relate the three categories and examine the most used methods from a regulatory standpoint. The most widely used method, due to Wilks, deserves special mention, together with its extension to multidimensional variables (due to Wald). The P-method counterpart of Wilks' is the Clopper-Pearson interval, typically ignored in the BEPU realm. The problem of the computational cost of an uncertainty analysis is tackled. Wilks', Wald's and Clopper-Pearson methods require a minimum sample size, which is a growing function of the tolerance level. The sample size is an indicator of the computational cost, because each element of the sample must be calculated with the predictive models (codes). When the RAC is a multiple criterion, the safety magnitude becomes multidimensional. When all its components are outputs of the same calculation, the multidimensional character does not introduce additional computational cost. In this way, an extended idea in the BEPU realm, stating that the multi-D problem can only be tackled with the Wald extension, is proven to be false. When the components of the magnitude are independently calculated, the influence of the problem dimension on the cost cannot be avoided. The early BEPU methodologies performed the uncertainty propagation through a surrogate model of the code, also termed emulator or metamodel. The goal of a metamodel is not predictive capability, clearly inferior to that of the original code, but the capacity to propagate uncertainties with a lower computational cost. The emulator must contain the input parameters contributing the most to the output uncertainty, and this requires a previous importance analysis. The surrogate model is practically inexpensive to run, so that it can be exhaustively analyzed through Monte Carlo. Therefore, the epistemic uncertainty due to sampling is reduced to almost zero, and the BEPU RAC for metamodels includes a simple probability. 
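As a purely illustrative aside (not part of the thesis abstract), the sample-size argument behind Wilks-type statements and their Clopper-Pearson counterpart can be sketched in a few lines of Python; the 95%/95% tolerance level used below is only the customary example, not a value prescribed here.

```python
# Illustrative sketch (not from the thesis): minimum sample size for the
# one-sided, first-order Wilks criterion, and the exact Clopper-Pearson
# lower confidence bound on a fulfillment probability. Assumes SciPy.
from scipy.stats import beta

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n such that the largest of n runs bounds the `coverage`
    quantile with probability `confidence`: 1 - coverage**n >= confidence."""
    n = 1
    while 1.0 - coverage**n < confidence:
        n += 1
    return n

def clopper_pearson_lower(successes, n, confidence=0.95):
    """Exact (Clopper-Pearson) lower confidence bound on a success probability."""
    if successes == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, successes, n - successes + 1)

print(wilks_sample_size(0.95, 0.95))        # 59 code runs for a 95/95 statement
print(clopper_pearson_lower(59, 59, 0.95))  # ~0.95 when all 59 runs succeed
```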
The regulatory authority will tend to accept the use of statistical methods which need a minimum of assumptions: exact, nonparametric and frequentist methods rather than approximate, parametric and Bayesian methods, respectively. The BEPU RAC is based on a second-order probability. The probability of the safety magnitudes being inside the acceptance region is a success probability and can be interpreted as a fulfillment degree of the RAC. Furthermore, it has a metric interpretation, as a distance (in the range of magnitudes) from calculated values of the magnitudes to acceptance regulatory limits. A probabilistic definition of safety margin (SM) is proposed in the thesis. The SM from a value A to another value B of a safety magnitude is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is nondimensional, ranges in the interval (0,1) and can be easily generalized to multiple dimensions. Furthermore, probabilistic SM are combined according to the probability laws. And a basic property: probabilistic SM are not symmetric. There are several types of SM: distance from a calculated value to a regulatory limit (licensing margin); or from the real value to the calculated value of a magnitude (analytical margin); or from the regulatory limit to the damage threshold (barrier margin). These representations of distances (in the magnitudes’ range) as probabilities can be applied to the quantification of conservativeness. Analytical margins can be interpreted as the degree of conservativeness (DG) of the computational methodology. Conservativeness indicators are established in the Thesis, useful in the comparison of different methods of constructing tolerance limits and regions. There is a topic which has not been rigorously tackled to date: the validation of BEPU methodologies. Before being applied in licensing, methodologies must be validated, on the basis of comparisons between their predictions and real values of the safety magnitudes. Real data are obtained, basically, in experimental facilities. The ultimate goal of establishing RAC is to verify that real values (aside from calculated values) fulfill them. In the Thesis it is proved that a sufficient condition for this goal is the conjunction of 2 criteria: the BEPU RAC and an analogous criterion for validation. And this last criterion must be proved in experimental scenarios and extrapolated to NPPs. The licensing RAC requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DG). These minimum values are basically complementary; the higher one of them, the lower the other one. The regulatory practice sets a high value on the licensing margin, so that the required DG is low. The possible adoption of lower values for P0 would imply a weaker exigence on the RAC fulfillment and, on the other hand, a higher exigence on the conservativeness of the methodology. It is important to highlight that a higher minimum value of the licensing or analytical margin requires a higher computational cost. Therefore, the computational efforts are also complementary. If medium levels are adopted, the required DG is also medium, and the methodology does not need to be very conservative. The total computational effort (licensing plus validation) could be optimized.
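The probabilistic safety-margin definition quoted above, SM(A, B) = P(A < B), lends itself to a one-line Monte Carlo estimate. The sketch below is illustrative only; the distributions assigned to A and B are hypothetical placeholders, not values from the thesis.

```python
# Minimal sketch of the probabilistic safety margin SM(A, B) = P(A < B),
# estimated by Monte Carlo. The distributions below are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def probabilistic_margin(sample_a, sample_b):
    """Fraction of paired samples in which A is less severe than B."""
    return np.mean(sample_a < sample_b)

# Example: uncertain calculated peak value A versus an uncertain limit B.
A = rng.normal(loc=1100.0, scale=40.0, size=100_000)   # hypothetical calculated magnitude
B = rng.normal(loc=1200.0, scale=10.0, size=100_000)   # hypothetical acceptance limit
print(probabilistic_margin(A, B))   # close to 1 when A lies well below B
# Note the asymmetry stressed in the abstract: for continuous variables
# SM(A, B) + SM(B, A) = 1, so SM(B, A) differs from SM(A, B).
```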

Relevance: 80.00%

Publisher:

Abstract:

A high-fidelity virtual tool for the numerical simulation of low-velocity impact damage in unidirectional composite laminates is proposed. A continuum material model for the simulation of intraply damage phenomena is implemented in a numerical scheme as a user subroutine of the commercially available Abaqus finite element package. Delaminations are simulated using cohesive surfaces. The use of structured meshes aligned with fiber directions allows the physically sound simulation of matrix cracks parallel to fiber directions, and of their interaction with the development of delaminations. The implementation of element erosion criteria and the application of intraply and interlaminar friction allow for the simulation of fiber splits and their entanglement, which in turn results in permanent indentation in the impacted laminate. It is shown that this simulation strategy gives sound results for impact energies below and above the Barely Visible Impact Damage threshold, up to laminate perforation conditions.
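For orientation only, the sketch below shows a generic one-dimensional strain-softening damage law of the kind the abstract alludes to for intraply degradation; it is not the constitutive model implemented in the authors' Abaqus user subroutine, and all symbols are generic placeholders.

```python
# Generic sketch of a 1-D continuum damage law with linear strain softening,
# only to illustrate the kind of intraply degradation model referred to in
# the abstract; it is not the authors' constitutive model.
def damage_variable(strain, eps_onset, eps_final):
    """Scalar damage d in [0, 1]: 0 below the onset strain, 1 at the final strain."""
    if strain <= eps_onset:
        return 0.0
    if strain >= eps_final:
        return 1.0
    # Linear softening branch (classic crack-band style formulation).
    return (eps_final / strain) * (strain - eps_onset) / (eps_final - eps_onset)

def degraded_stress(strain, E, eps_onset, eps_final):
    """Effective stress with the stiffness degraded by the damage variable."""
    d = damage_variable(strain, eps_onset, eps_final)
    return (1.0 - d) * E * strain
```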

Relevance: 40.00%

Publisher:

Abstract:

As an emerging optical material, graphene’s ultrafast dynamics are often probed using pulsed lasers, yet the region in which optical damage takes place is largely uncharted. Here, femtosecond laser pulses induced localized damage in single-layer graphene on sapphire. Raman spatial mapping, SEM and AFM microscopy quantified the damage. The resulting size of the damaged area has a linear correlation with the optical fluence. These results demonstrate local modification of sp2-carbon bonding structures with optical pulse fluences as low as 14 mJ/cm2, an order of magnitude lower than measured and theoretical ablation thresholds.
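A hedged sketch of how such a threshold can be extracted from the reported linear correlation between damaged-area size and fluence (fit the line and take its x-intercept); the input arrays stand for measured data, none of which are reproduced here.

```python
# Sketch: read a damage threshold off a linear area-vs-fluence trend,
# area = k * (F - Fth), by extrapolating the fitted line to zero area.
# The input arrays are measurement placeholders, not data from the paper.
import numpy as np

def fluence_threshold(fluence_mj_cm2, damaged_area_um2):
    """Least-squares line through (fluence, area); the threshold is its x-intercept."""
    slope, intercept = np.polyfit(fluence_mj_cm2, damaged_area_um2, deg=1)
    return -intercept / slope

# usage: fluence_threshold(measured_fluences, measured_areas)
```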

Relevance: 40.00%

Publisher:

Abstract:

Production of back contact solar cells requires the generation of holes in the wafers so that both the positive and negative contacts can be placed on the back side of the cell. This drilling process weakens the wafer mechanically due to the presence of the holes and the damage introduced during the process in the form of microcracks. In this study, several chemical processes have been applied to drilled wafers in order to eliminate or reduce the damage generated during this fabrication step. The treatments analyzed are the following: alkaline etching for 1, 3 and 5 minutes, acid etching for 2 and 4 minutes, and texturisation. To determine the mechanical strength of the samples, a standard mechanical study has been carried out, testing the samples by the Ring-on-Ring bending test and obtaining the stress state at the moment of failure by FE simulation. Finally, the results obtained for each treatment were fitted to a three-parameter Weibull distribution.
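As an illustration of the final fitting step, a three-parameter Weibull distribution can be fitted with scipy.stats.weibull_min, whose third parameter plays the role of a location (threshold) stress; the failure-stress array below is a placeholder for the Ring-on-Ring/FE-derived values, not data from the study.

```python
# Sketch of the three-parameter Weibull fit mentioned in the abstract,
# using scipy.stats.weibull_min (shape, location and scale parameters).
from scipy.stats import weibull_min

def fit_weibull_3p(failure_stress_mpa):
    """Maximum-likelihood fit; returns (Weibull modulus m, location, characteristic stress)."""
    shape, loc, scale = weibull_min.fit(failure_stress_mpa)
    return shape, loc, scale

def survival_probability(stress, shape, loc, scale):
    """Probability that a sample survives the given stress level."""
    return weibull_min.sf(stress, shape, loc=loc, scale=scale)
```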

Relevance: 30.00%

Publisher:

Abstract:

Direct-drive inertial confinement thermonuclear fusion consists in illuminating a shell of cryogenic Deuterium and Tritium (DT) mixture with many intense beams of laser light. The capsule is composed of DT gas surrounded by cryogenic DT as combustible fuel. Basic rules are used to define the shell geometry from the aspect ratio, the fuel mass and the layer densities. We define baseline designs using two aspect ratios (A=3 and A=5) which complete the HiPER baseline design (A=7.7). The aspect ratio is defined as the ratio of the DT ice shell inner radius over the DT shell thickness. A low aspect ratio improves the hydrodynamic stability of the imploding shell. The laser pulse shape and the ablator thickness are initially defined by using the Lindl (1995) ablation pressure and mass ablation formulae for direct drive using a CH layer as ablator. The in-flight adiabat parameter is close to one during the implosion. The implosion velocities chosen are between 260 km/s and 365 km/s. More than a thousand calculations are realized for each aspect ratio in order to optimize the laser pulse shape. Calculations are performed using the one-dimensional version of the Lagrangian radiation hydrodynamics code FCI2. We choose implosion velocities for each initial aspect ratio, and we compute scaled-target family curves for each one to find the self-ignition threshold. Then, we pick points on each curve that potentially produce high thermonuclear gain and compute shock ignition in the context of the Laser MegaJoule. This systematic analysis reveals many working points which complete previous studies, allowing us to highlight baseline designs, according to laser intensity and energy, combustible mass and initial aspect ratio, relevant for the Laser MegaJoule.
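A minimal sketch of the geometric rule implied by the definitions above (aspect ratio A equal to the inner radius over the ice thickness, with the fuel mass carried by a uniform DT ice shell); the DT ice density is a nominal assumption and this is not the full design procedure of the paper.

```python
# Back-of-the-envelope shell geometry from A = R_inner / thickness and
# m = (4/3) * pi * rho * (R_outer**3 - R_inner**3). Density is a nominal
# value for DT ice, used only as an assumption for this sketch.
import math

def dt_shell_geometry(fuel_mass_mg, aspect_ratio, rho_dt_g_cm3=0.25):
    """Return (inner radius, ice thickness) in micrometres for a uniform DT ice shell."""
    m_g = fuel_mass_mg * 1e-3
    growth = (1.0 + 1.0 / aspect_ratio) ** 3 - 1.0
    r_in_cm = (3.0 * m_g / (4.0 * math.pi * rho_dt_g_cm3 * growth)) ** (1.0 / 3.0)
    thickness_cm = r_in_cm / aspect_ratio
    return r_in_cm * 1e4, thickness_cm * 1e4
```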

Relevance: 30.00%

Publisher:

Abstract:

Laser Shock Processing is developing as a key technology for the improvement of surface mechanical and corrosion resistance properties of metals, due to its ability to introduce intense compressive residual stress fields into high elastic limit materials by means of an intense laser-driven shock wave, generated by lasers with intensities exceeding the 10^9 W/cm2 threshold, pulse energies in the range of 1 Joule and interaction times in the range of several ns. However, because of the relatively difficult-to-describe physics of shock wave formation in plasma following laser-matter interaction in solid state, only limited knowledge is available in the way of full comprehension and predictive assessment of the characteristic physical processes and material transformations, with a specific consideration of real material properties. In the present paper, an account of the physical issues dominating the development of LSP processes from a moderately high intensity laser-matter interaction point of view is presented, along with the theoretical and computational methods developed by the authors for their predictive assessment and new experimental contrast results obtained at laboratory scale.
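For orientation, a quick order-of-magnitude check of the regime quoted above (about 1 J delivered in several ns): the spot diameter in the example call is a hypothetical value, used only to show that such pulses comfortably exceed the 10^9 W/cm2 intensity threshold.

```python
# Order-of-magnitude check of the LSP intensity regime; the spot diameter
# below is a hypothetical illustration, not a value from the paper.
import math

def peak_intensity_w_cm2(energy_j, duration_s, spot_diameter_cm):
    area_cm2 = math.pi * (spot_diameter_cm / 2.0) ** 2
    return energy_j / (duration_s * area_cm2)

print(peak_intensity_w_cm2(1.0, 10e-9, 0.15))  # ~5.7e9 W/cm2 for a 1.5 mm spot
```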

Relevance: 30.00%

Publisher:

Abstract:

In this work the use of the ESS-Bilbao fast neutron lines for irradiation of materials for nuclear fusion is studied. For the comparison of ESS-Bilbao with an inertial fusion facility, a simplified model of the HiPER chamber has been used. Several positions for irradiation at ESS-Bilbao have also been compared. The material chosen for the damage analysis is silica, due to its importance for IFC optics. In this work a detailed comparison between the two facilities for silica irradiation is given. The comparison covers the neutron fluxes, doses, defect production and PKA spectra. This study is also intended as a methodological approach or guideline for future work on other materials.
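As a hedged illustration of one kind of damage metric compared in such studies (not the methodology of the paper), a simple NRT-style displacements-per-atom estimate scales as the neutron fluence times an energy-averaged displacement cross-section:

```python
# Hedged sketch: dpa ~ neutron fluence * displacement cross-section.
# The cross-section and exposure values are placeholders, not results.
BARN_CM2 = 1.0e-24

def dpa_estimate(flux_n_cm2_s, exposure_s, sigma_d_barn):
    """dpa accumulated for a given flux, exposure time and displacement cross-section."""
    fluence = flux_n_cm2_s * exposure_s          # n/cm^2
    return fluence * sigma_d_barn * BARN_CM2     # displacements per atom
```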

Relevance: 30.00%

Publisher:

Abstract:

Laser processing has been the tool of choice in recent years to develop improved concepts in contact formation for high efficiency crystalline silicon (c-Si) solar cells. New concepts based on standard laser fired contacts (LFC) or advanced laser doping (LD) techniques are optimal solutions for both the front and back contacts of a number of structures with growing interest in the c-Si PV industry. Nowadays, substantial efforts are underway to optimize these processes in order to apply them industrially in high efficiency concepts. However, a critical issue in these devices is that most of them demand a very low thermal input during the fabrication sequence and minimal damage of the structure during the laser irradiation process. Keeping these two objectives in mind, in this work we discuss the possibility of using laser-based processes to contact the rear side of silicon heterojunction (SHJ) solar cells in an approach fully compatible with the low temperature processing associated with these devices. First we discuss the possibility of using standard LFC techniques in the fabrication of SHJ cells on p-type substrates, studying in detail the effect of the laser wavelength on the contact quality. Secondly, we present an alternative strategy, bearing in mind that a real challenge in the rear contact formation is to reduce the damage induced by the laser irradiation. This new approach is based on local laser doping techniques previously developed by our groups to contact the rear side of p-type c-Si solar cells by means of laser processing, before rear metallization, of dielectric stacks containing Al2O3. In this work we demonstrate the possibility of using this new approach in SHJ cells, with a distinct advantage over other standard LFC techniques.

Relevance: 30.00%

Publisher:

Abstract:

Storm evolution is fundamental for analysing the damage progression of the different failure modes and establishing suitable protocols for maintaining and optimally sizing structures. However, this aspect has hardly been studied, and practically all studies dealing with the subject adopt the Equivalent Triangle Storm. In contrast to this approach, two new ones are proposed. The first is the Equivalent Triangle Magnitude Storm (ETMS) model, whose base, the triangular storm duration D, is established such that its magnitude (the area describing the storm history above the reference threshold level which sets the storm condition), HT, equals the real storm magnitude. The other is the Equivalent Triangle Number of Waves Storm (ETNWS), where the base is referred to in terms of the real storm's number of waves, Nz. Three approaches are used for estimating the mean period, Tm, associated with each of the sea states defining the storm evolution, which is necessary to determine the full energy flux withstood by the structure in the course of the extreme event. Two are based on the Jonswap spectrum representativity and the other uses the bivariate Gumbel copula (Hs, Tm) resulting from adjusting the storm peaks. The representativity of the approaches proposed and of those defined in the specialised literature is analysed by comparing the main armour layer's progressive loss of hydraulic stability caused by real storms and that relating to theoretical ones. An empirical maximum energy flux model is used for this purpose. The agreement between the empirical and theoretical results demonstrates that the representativity of the different approaches depends on the storm characteristics, and points towards a need to investigate other geometrical shapes to characterise the storm evolution associated with sea states heavily influenced by swell wave components.
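Under the simple reading that the equivalent triangle's height is the peak significant wave height in excess of the storm threshold, the two constructions described above reduce to the short relations sketched below; this is an interpretation for illustration, not the authors' formulation.

```python
# Sketch of the two equivalent-triangle constructions, assuming the triangle
# height is the storm-peak exceedance of Hs over the threshold level.
def etms_duration(storm_magnitude_HT, hs_peak, hs_threshold):
    """ETMS base D such that the triangle area equals the storm magnitude HT."""
    return 2.0 * storm_magnitude_HT / (hs_peak - hs_threshold)

def etnws_duration(n_waves_Nz, mean_period_Tm):
    """ETNWS base expressed through the real storm's number of waves."""
    return n_waves_Nz * mean_period_Tm
```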


Relevance: 30.00%

Publisher:

Abstract:

We present direct-drive target design studies for the Laser MégaJoule using two distinct initial aspect ratios (A = 3 and A = 5). Laser pulse shapes are optimized by a random walk method and drive power variations are used to cover a wide variety of implosion velocities between 260 km/s and 365 km/s. For selected implosion velocities and for each initial aspect ratio, scaled-target families are built in order to find the self-ignition threshold. High-gain shock ignition is also investigated in the context of the Laser MégaJoule for marginally igniting targets below their own self-ignition threshold.
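A generic random-walk optimization loop of the kind mentioned for the pulse shapes might look as follows; the objective function is a placeholder for a call to the radiation-hydrodynamics code, and nothing here reproduces the authors' implementation.

```python
# Generic random-walk optimizer: perturb the pulse-shape parameters and keep
# the move only if a figure of merit improves. Objective is a placeholder.
import numpy as np

def random_walk_optimize(objective, x0, step=0.05, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    x_best = np.asarray(x0, dtype=float)
    f_best = objective(x_best)
    for _ in range(iters):
        x_try = x_best + step * rng.standard_normal(x_best.shape)
        f_try = objective(x_try)
        if f_try < f_best:            # keep only improving moves
            x_best, f_best = x_try, f_try
    return x_best, f_best
```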

Relevance: 30.00%

Publisher:

Abstract:

Crystallization and grain growth of thin film silicon are among the most promising techniques for improving the efficiency and lowering the cost of solar cells. A major advantage of laser crystallization and annealing over conventional heating methods is its ability to limit rapid heating and cooling to thin surface layers. Laser energy is used to heat the amorphous silicon thin film, melting it and changing the microstructure to polycrystalline silicon (poly-Si) as it cools. Depending on the laser power density, the vaporization temperature can be reached at the center of the irradiated area. In these cases ablation effects are expected and the annealing process becomes ineffective. The heating process in the a-Si thin film is governed by the general heat transfer equation. The two-dimensional non-linear heat transfer equation with a moving heat source is solved numerically using the finite element method (FEM), in particular COMSOL Multiphysics. The numerical model helps to establish the power density and process speed ranges needed to ensure melting and crystallization without damage or ablation of the silicon surface. The samples of a-Si obtained by physical vapour deposition were irradiated with a cw green laser source (Millennia Prime from Newport-Spectra) that delivers up to 15 W of average power. The morphology of the irradiated area was characterized by confocal laser scanning microscopy (Leica DCM3D) and Scanning Electron Microscopy (SEM Hitachi 3000N). The structural properties were studied by micro-Raman spectroscopy (Renishaw inVia Raman microscope).
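A stripped-down, constant-property finite-difference analogue of the thermal model described above (2-D heat conduction with a moving Gaussian source) is sketched below for illustration; the paper itself solves the non-linear problem by FEM in COMSOL, and every number in the sketch is a placeholder.

```python
# Explicit finite-difference sketch of 2-D heat conduction in a thin film
# heated by a scanning Gaussian spot. Constant properties; all values are
# placeholders, not the parameters of the COMSOL model in the paper.
import numpy as np

def moving_source_temperature(nx=200, ny=100, dx=1.0e-6, alpha=8.0e-5,
                              rho_cp=1.66e6, thickness=1.0e-6,
                              absorbed_power=0.5, beam_radius=10.0e-6,
                              scan_speed=0.1, n_steps=5000):
    """Temperature rise (K) on a 2-D film heated by a scanning Gaussian spot."""
    dt = 0.2 * dx * dx / alpha                 # below the explicit stability limit dx^2/(4*alpha)
    T = np.zeros((ny, nx))
    y, x = np.meshgrid((np.arange(ny) - ny // 2) * dx,
                       np.arange(nx) * dx, indexing="ij")
    for step in range(n_steps):
        x0 = scan_speed * step * dt            # current position of the beam centre
        r2 = (x - x0) ** 2 + y ** 2
        # absorbed surface irradiance spread over the film thickness -> volumetric heating
        heating = (absorbed_power / (np.pi * beam_radius ** 2)
                   * np.exp(-r2 / beam_radius ** 2)) / (rho_cp * thickness)
        lap = np.zeros_like(T)
        lap[1:-1, 1:-1] = (T[1:-1, 2:] + T[1:-1, :-2] + T[2:, 1:-1] + T[:-2, 1:-1]
                           - 4.0 * T[1:-1, 1:-1]) / dx ** 2
        T += dt * (alpha * lap + heating)      # borders held at ambient (Dirichlet)
    return T
```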

Relevance: 30.00%

Publisher:

Abstract:

Analysis of low initial aspect ratio direct-drive target designs is carried out by varying the implosion velocity and the fuel mass. Starting from two different spherical targets with a given 300 µg DT mass, optimization of the laser pulse and drive power allows a set of target seeds to be obtained, referenced by their peak implosion velocities and initial aspect ratio (A = 3 and A = 5). Self-ignition is achieved at a higher implosion velocity for the A = 5 design than for the A = 3 design. Then, rescaling is done to extend the set of designs to a wide range of masses, peak kinetic energies and peak areal densities. The self-ignition kinetic energy threshold Ek is characterized by a dependence Ek ∝ v^α, with α-values which depart from self-ignition models. Nevertheless, the self-ignition energy is seen to be lower for smaller initial aspect ratio. An analysis of the Two-Plasmon Decay threshold and of Rayleigh–Taylor instability e-folding is carried out, and it is shown that the two-plasmon decay threshold is always exceeded for all designs. The hydrodynamic stability analysis is performed by embedded models to deal with the linear and non-linear regimes. It is found that the A = 5 designs are always at the limit of disruption of the shell.
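The exponent of the reported power-law dependence, Ek ∝ v^α, can be read off from any two members of a scaled-target family; the helper below only illustrates that step and quotes no values from the study.

```python
# Tiny helper: exponent alpha of a power law Ek ~ v**alpha from two points
# of a scaled-target family. Inputs are to be supplied; none are quoted here.
import math

def powerlaw_exponent(v1, ek1, v2, ek2):
    """alpha such that ek2/ek1 = (v2/v1)**alpha."""
    return math.log(ek2 / ek1) / math.log(v2 / v1)
```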

Relevance: 30.00%

Publisher:

Abstract:

An advantage of laser crystallization over conventional heating methods is its ability to limit rapid heating and cooling to thin surface layers. Laser energy is used to heat the a-Si thin film to change the microstructure to poly-Si. Thin film samples of a-Si were irradiated with a CW green laser source. Laser irradiated spots were produced by using different laser powers and irradiation times. These parameters are identified as key variables in the crystallization process. The power threshold for crystallization is reduced as the irradiation time is increased. Once this threshold is reached, the crystalline fraction increases linearly with power for each irradiation time. The experimental results are analysed with the aid of a numerical thermal model, and the presence of two crystallization mechanisms is observed: one due to melting and the other due to solid phase transformation.

Relevance: 30.00%

Publisher:

Abstract:

En los últimos años, el Ge ha ganado de nuevo atención con la finalidad de ser integrado en el seno de las existentes tecnologías de microelectrónica. Aunque no se le considera un candidato capaz de reemplazar completamente al Si en el futuro próximo, probablemente servirá como un excelente complemento para mejorar las propiedades eléctricas en dispositivos futuros, especialmente debido a su alta movilidad de portadores. Esta integración requiere de un avance significativo del estado del arte en los procesos de fabricación. Técnicas de simulación, como los algoritmos de Monte Carlo cinético (KMC), proporcionan un ambiente atractivo para llevar a cabo investigación y desarrollo en este campo, especialmente en términos de costes en tiempo y financiación. En este estudio se han usado, por primera vez, técnicas de KMC con el fin de entender el procesado “front-end” de Ge en su fabricación, específicamente la acumulación de dañado y amorfización producidas por implantación iónica y el crecimiento epitaxial en fase sólida (SPER) de las capas amorfizadas. Primero, simulaciones de aproximación de colisiones binarias (BCA) son usadas para calcular el dañado causado por cada ión. La evolución de este dañado en el tiempo se simula usando KMC sin red, o de objetos (OKMC), en el que solamente se consideran los defectos. El SPER se simula a través de una aproximación KMC de red (LKMC), siendo capaz de seguir la evolución de los átomos de la red que forman la intercara amorfo/cristalina. Con el modelo de amorfización desarrollado a lo largo de este trabajo, implementado en un simulador multi-material, se pueden simular todos estos procesos. Ha sido posible entender la acumulación de dañado, desde la generación de defectos puntuales hasta la formación completa de capas amorfas. Esta acumulación ocurre en tres regímenes bien diferenciados, empezando con un ritmo lento de formación de regiones de dañado, seguido por una rápida relajación local de ciertas áreas en la fase amorfa donde ambas fases, amorfa y cristalina, coexisten, para terminar en la amorfización completa de capas extensas, donde satura el ritmo de acumulación. Dicha transición ocurre cuando la concentración de dañado supera cierto valor límite, el cual es independiente de las condiciones de implantación. Cuando se implantan los iones a temperaturas relativamente altas, el recocido dinámico cura el dañado previamente introducido y se establece una competición entre la generación de dañado y su disolución. Estos efectos se vuelven especialmente importantes para iones ligeros, como el B, el cual crea dañado más diluido, pequeño y distribuido de manera diferente que el causado por la implantación de iones más pesados, como el Ge. Esta descripción reproduce satisfactoriamente la cantidad de dañado y la extensión de las capas amorfas causadas por implantación iónica reportadas en la bibliografía. La velocidad de recristalización de la muestra previamente amorfizada depende fuertemente de la orientación del sustrato. El modelo LKMC presentado ha sido capaz de explicar estas diferencias entre orientaciones a través de un modelo simple, dominado por una única energía de activación y diferentes prefactores en las frecuencias de SPER dependiendo de las configuraciones de vecinos de los átomos que recristalizan. La formación de maclas aparece como una consecuencia de esta descripción, y es predominante en sustratos crecidos en la orientación (111)Ge. 
Este modelo es capaz de reproducir resultados experimentales para diferentes orientaciones, temperaturas y tiempos de evolución de la intercara amorfo/cristalina reportados por diferentes autores. Las parametrizaciones preliminares realizadas de los tensores de activación de tensiones son también capaces de proveer una buena correlación entre las simulaciones y los resultados experimentales de velocidad de SPER a diferentes temperaturas bajo una presión hidrostática aplicada. Los estudios presentados en esta tesis han ayudado a alcanzar un mejor entendimiento de los mecanismos de producción de dañado, su evolución, amorfización y SPER para Ge, además de servir como una útil herramienta para continuar el trabajo en este campo. In recent years, Ge has regained attention to be integrated into existing microelectronic technologies. Even though it is not thought to be a feasible full replacement for Si in the near future, it will likely serve as an excellent complement to enhance electrical properties in future devices, especially due to its high carrier mobilities. This integration requires a significant upgrade of the state of the art of regular manufacturing processes. Simulation techniques, such as kinetic Monte Carlo (KMC) algorithms, provide an appealing environment for research and innovation in the field, especially in terms of time and funding costs. In the present study, KMC techniques are used, for the first time, to understand Ge front-end processing, specifically damage accumulation and amorphization produced by ion implantation and Solid Phase Epitaxial Regrowth (SPER) of the amorphized layers. First, Binary Collision Approximation (BCA) simulations are used to calculate the damage caused by every ion. The evolution of this damage over time is simulated using non-lattice, or Object, KMC (OKMC), in which only defects are considered. SPER is simulated through a Lattice KMC (LKMC) approach, which is able to follow the evolution of the lattice atoms forming the amorphous/crystalline interface. With the amorphization model developed in this work, implemented into a multi-material process simulator, all these processes can be simulated. It has been possible to understand damage accumulation, from point defect generation up to full amorphous layer formation. This accumulation occurs in three differentiated regimes, starting at a slow formation rate of the damage regions, followed by a fast local relaxation of areas into the amorphous phase where both crystalline and amorphous phases coexist, and ending in full amorphization of extended layers, where the accumulation rate saturates. This transition occurs when the damage concentration overcomes a certain threshold value, which is independent of the implantation conditions. When implanting ions at relatively high temperatures, dynamic annealing takes place, healing the previously induced damage and establishing a competition between damage generation and its dissolution. These effects become especially important for light ions, such as B, for which the created damage is more diluted, smaller and differently distributed than that caused by implanting heavier ions, such as Ge. This description successfully reproduces the damage quantity and the extension of amorphous layers caused by ion implantation reported in the literature. The recrystallization velocity of the previously amorphized sample strongly depends on the substrate orientation. 
The presented LKMC model has been able to explain these differences between orientations through a simple model, dominated by a single activation energy and different prefactors for the SPER rates depending on the neighboring configuration of the recrystallizing atoms. Twin defect formation appears as a consequence of this description, and is predominant for substrates grown in the (111) Ge orientation. This model is able to reproduce experimental results for different orientations, temperatures and times of evolution of the amorphous/crystalline interface reported by different authors. Preliminary parameterizations for the activation strain tensors are also able to provide a good match between simulations and reported experimental results for SPER velocities at different temperatures under applied hydrostatic pressure. The studies presented in this thesis have helped to achieve a greater understanding of damage generation, evolution, amorphization and SPER mechanisms in Ge, and also provide a useful tool to continue research in this field.
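As a closing illustration of the machinery the thesis builds on, a minimal rejection-free (residence-time, BKL-type) kinetic Monte Carlo loop is sketched below; event names and rates are placeholders rather than the Ge parameters obtained in the work.

```python
# Minimal, generic rejection-free (residence-time / BKL) kinetic Monte Carlo
# loop, illustrating the kind of machinery behind OKMC/LKMC simulations.
# Events and rates are placeholders, not the Ge parameters of the thesis.
import math
import random

def kmc_run(events, t_end, seed=1):
    """events: list of (name, rate, apply_fn); advances the state until t_end."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        total_rate = sum(rate for _, rate, _ in events)
        if total_rate <= 0.0:
            break
        # pick an event with probability proportional to its rate
        target = rng.random() * total_rate
        acc = 0.0
        for name, rate, apply_fn in events:
            acc += rate
            if target < acc:
                apply_fn()
                break
        # advance the clock by an exponentially distributed residence time
        t += -math.log(1.0 - rng.random()) / total_rate
    return t
```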