901 results for "damage threshold"


Relevance: 60.00%

Abstract:

Deterministic safety analysis (DSA) is the procedure used to design the safety-related systems, structures and components of nuclear power plants. DSA is based on computational simulations of a set of hypothetical accidents representative of the facility, called design basis scenarios (DBS). Regulatory bodies specify a set of safety magnitudes that must be calculated in the simulations and establish regulatory acceptance criteria (RAC), which are restrictions that the values of those magnitudes must satisfy. Methodologies for performing DSA can be of two types: conservative or realistic. Conservative methodologies use markedly pessimistic predictive models and assumptions and are therefore relatively simple; they do not need to include an uncertainty analysis of their results. Realistic methodologies are based on realistic, generally mechanistic, assumptions and predictive models, and are supplemented with an uncertainty analysis of their main results. They are also called BEPU ("Best Estimate Plus Uncertainty") methodologies; in them, uncertainty is represented essentially in a probabilistic way. For conservative methodologies, the RAC are simply restrictions on the calculated values of the safety magnitudes, which must remain confined to an "acceptance region" of their range. For BEPU methodologies, the RAC cannot be that simple, because the safety magnitudes are now uncertain variables. The thesis develops how uncertainty is introduced into the RAC. Essentially, confinement to the same acceptance region, established by the regulator, is maintained, but strict fulfillment is not demanded; instead, a high level of certainty is required. In the adopted formalism this is understood as a "high level of probability", and that probability corresponds to the calculation uncertainty of the safety magnitudes. Such uncertainty can be regarded as originating in the inputs to the calculation model and propagated through that model. The uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which are used to incorporate the uncertainty due to model imperfection. Fulfillment of the RAC is therefore required with a probability not lower than a value P0 close to 1 and defined by the regulator (probability or coverage level). However, the calculation uncertainty of the magnitude is not the only uncertainty involved. Even if a model (its basic equations) is known perfectly, the input-output mapping it produces is known only imperfectly (unless the model is very simple). The uncertainty due to this ignorance about the action of the model is called epistemic; it can also be described as uncertainty about the propagation. As a consequence, the probability of fulfilling the RAC cannot be known perfectly; it is itself an uncertain quantity. This justifies another term used here for this epistemic uncertainty: metauncertainty. The RAC must incorporate both types of uncertainty: that of the calculation of the safety magnitude (here called aleatory) and that of the calculation of the probability (called epistemic or metauncertainty). Both uncertainties can be introduced in two ways: separately or combined. In either case, the RAC becomes a probabilistic criterion.
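Written in the separated form that the following passage develops, the criterion takes a second-order shape along the lines sketched below; the symbols (safety magnitude Y, acceptance region A, probability level P0, confidence level C0) are notational choices made here for illustration, not the thesis' own notation.

```latex
% Sketch of the separated (second-order) BEPU acceptance criterion: the aleatory
% probability that the safety magnitude Y lies in the acceptance region A must
% reach the regulatory probability level P0, and that statement itself must hold
% with epistemic confidence at least C0.
\[
  \Pr_{\mathrm{epistemic}}\!\left[\, \Pr_{\mathrm{aleatory}}\left( Y \in A \right) \ge P_0 \,\right] \;\ge\; C_0
\]
```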
If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. If a second-order probability is employed, the regulator must impose a second level of fulfillment, referring to the epistemic uncertainty. It is called the regulatory confidence level, and it must be a number close to 1. The pair formed by the two regulatory levels (probability and confidence) is called the regulatory tolerance level. The thesis argues that the best way to construct the BEPU RAC is by separating the uncertainties, for two reasons. First, experts advocate the separate treatment of aleatory and epistemic uncertainty. Second, the separated RAC is (except in exceptional cases) more conservative than the combined RAC. The BEPU RAC is nothing other than a hypothesis about a probability distribution, and it is checked statistically. The thesis classifies the statistical methods for checking the BEPU RAC into three categories, according to whether they are based on the construction of tolerance regions, on quantile estimation, or on probability estimation (either of fulfillment or of exceedance of regulatory limits). Following a recently proposed nomenclature, the first two categories correspond to the Q-methods and the third to the P-methods. The purpose of the classification is not to inventory the many and varied methods in each category, but to relate the categories to one another and to cite the most used methods and those best regarded from the regulatory point of view. Special mention is made of the most used method so far: Wilks' nonparametric method, together with Wald's extension of it to the multidimensional case. Its P-method counterpart, the Clopper-Pearson interval, typically ignored in the BEPU field, is also described. In this context, the problem of the computational cost of the uncertainty analysis is addressed. The Wilks, Wald and Clopper-Pearson methods require the random sample used to have a minimum size, which grows with the required tolerance level. The sample size is an indicator of the computational cost, because each sample element is a value of the safety magnitude that requires a calculation with predictive models. Special emphasis is placed on the computational cost when the safety magnitude is multidimensional, that is, when the RAC is a multiple criterion. It is shown that, when the different components of the magnitude are obtained from the same calculation, the multidimensional character introduces no additional computational cost. This disproves a common belief in the BEPU field: that the multidimensional problem can only be attacked through the Wald extension, whose computational cost grows with the dimension of the problem. In the case (which sometimes occurs) in which each component of the magnitude is calculated independently of the others, the influence of the dimension on the cost cannot be avoided. The first BEPU methodologies carried out the uncertainty propagation through a surrogate model (metamodel or emulator) of the predictive model or code. The purpose of the metamodel is not its predictive capability, which is much lower than that of the original model, but to replace it exclusively in the propagation of uncertainties.
For that purpose, the metamodel must be built with the input parameters that contribute most to the uncertainty of the result, and that requires a prior importance or sensitivity analysis. Because of its simplicity, the surrogate model involves almost no computational cost and can be studied exhaustively, for example by means of random samples. Consequently, the epistemic uncertainty or metauncertainty disappears, and the BEPU criterion for metamodels becomes a simple probability. In short, the regulator will more readily accept the statistical methods that need fewer assumptions: exact rather than approximate, nonparametric rather than parametric, and frequentist rather than Bayesian. The BEPU criterion is based on a second-order probability. The probability that the safety magnitudes lie in the acceptance region can be regarded not only as a probability of success or a degree of fulfillment of the RAC. It also has a metric interpretation: it represents a distance (within the range of the magnitudes) from the calculated magnitude to the regulatory acceptance limits. This interpretation gives rise to a definition proposed in this thesis: that of probabilistic safety margin. Given a scalar safety magnitude with an upper acceptance limit, the safety margin (SM) between two values A and B of that magnitude is defined as the probability that A is smaller than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is dimensionless, it can be combined according to the laws of probability, and it is easily generalized to several dimensions. Moreover, it is not symmetric. The term safety margin can be applied to different situations: the distance from a calculated magnitude to a regulatory limit (licensing margin); the distance from the real value of the magnitude to its calculated value (analytical margin); the distance from a regulatory limit to the damage threshold of a barrier (barrier margin). This idea of representing distances (in the range of safety magnitudes) by probabilities can be applied to the study of conservatism. The analytical margin can be interpreted as the degree of conservatism (DG) of the calculation methodology. Using probability, the conservatism of tolerance limits of a magnitude can be quantified, and conservatism indicators can be established to compare different methods of constructing tolerance limits and regions. A topic that has never been addressed rigorously is the validation of BEPU methodologies. Like any other calculation tool, a methodology must be validated before it can be applied to licensing analyses, by comparing its predictions with real values of the safety magnitudes. Such a comparison can only be made for accident scenarios for which measured values of the safety magnitudes exist, and that happens, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that they are fulfilled by the real values of the safety magnitudes, and not only by their calculated values. The thesis shows that a sufficient condition for this ultimate goal is the joint fulfillment of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation.
And the validation criterion must be demonstrated in experimental scenarios and extrapolated to nuclear power plants. The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (the DG). These minimum levels are essentially complementary: the higher one of them, the lower the other. Current regulatory practice imposes a high value on the licensing margin, which means that the required DG is small. Adopting lower values for P0 implies a weaker requirement on the fulfillment of the RAC and, in exchange, a stronger requirement on the DG of the methodology. It is important to stress that the higher the minimum value of the margin (licensing or analytical), the higher the computational cost of demonstrating it. The computational efforts are therefore also complementary: if one of the levels is high (which tightens the corresponding criterion), the computational cost increases. If an intermediate value of P0 is adopted, the required DG is also intermediate, so the methodology does not have to be very conservative and the total computational cost (licensing plus validation) can be optimized.
ABSTRACT Deterministic Safety Analysis (DSA) is the procedure used in the design of safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes, and define the regulatory acceptance criteria (RAC) that must be fulfilled by them. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies make use of pessimistic models and assumptions, and are relatively simple. They do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions, and need to be supplemented with uncertainty analyses of their results. They are also termed BEPU ("Best Estimate Plus Uncertainty") methodologies, and are typically based on a probabilistic representation of the uncertainty. For conservative methodologies, the RAC are simply the restriction of calculated values of safety magnitudes to "acceptance regions" defined on their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain. In the present Thesis, the inclusion of uncertainty in the RAC is studied. Basically, the restriction to the acceptance region must be fulfilled "with a high certainty level". Specifically, a high probability of fulfillment is required. The calculation uncertainty of the magnitudes is considered as propagated from inputs through the predictive model. Uncertain inputs include model empirical parameters, which carry the uncertainty due to model imperfection. The fulfillment of the RAC is required with a probability not less than a value P0 close to 1 and defined by the regulator (probability or coverage level). Calculation uncertainty is not the only one involved. Even if a model (i.e. its basic equations) is perfectly known, the input-output mapping produced by the model is imperfectly known (unless the model is very simple). This ignorance is called epistemic uncertainty, and it is associated with the process of propagation. In fact, it is propagated to the probability of fulfilling the RAC. Another term used in the Thesis for this epistemic uncertainty is metauncertainty.
The RAC must include the two types of uncertainty: one for the calculation of the magnitude (aleatory uncertainty) and one for the calculation of the probability (epistemic uncertainty). The two uncertainties can be taken into account separately or can be combined. In either case the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if both are combined, a single probability is used. In the first case, the regulator must define a level of fulfillment for the epistemic uncertainty, termed the regulatory confidence level, as a value close to 1. The pair of regulatory levels (probability and confidence) is termed the regulatory tolerance level. The Thesis concludes that the adequate way of setting the BEPU RAC is by separating the uncertainties. There are two reasons to do so: experts recommend the separation of aleatory and epistemic uncertainty, and the separated RAC is in general more conservative than the joint RAC. The BEPU RAC is a hypothesis on a probability distribution, and must be statistically tested. The Thesis classifies the statistical methods used to verify the fulfillment of the RAC into 3 categories: methods based on tolerance regions, on quantile estimators and on probability (of success or failure) estimators. The first two have been termed Q-methods, whereas those in the third category are termed P-methods. The purpose of our categorization is not to make an exhaustive survey of the very numerous existing methods. Rather, the goal is to relate the three categories and examine the most used methods from a regulatory standpoint. The most used method, due to Wilks, deserves special mention, together with its extension to multidimensional variables due to Wald. The P-method counterpart of Wilks' method is the Clopper-Pearson interval, typically ignored in the BEPU realm. The problem of the computational cost of an uncertainty analysis is tackled. The Wilks, Wald and Clopper-Pearson methods require a minimum sample size, which is a growing function of the tolerance level. The sample size is an indicator of the computational cost, because each element of the sample must be calculated with the predictive models (codes). When the RAC is a multiple criterion, the safety magnitude becomes multidimensional. When all its components are outputs of the same calculation, the multidimensional character does not introduce additional computational cost. In this way, a widespread idea in the BEPU realm, stating that the multidimensional problem can only be tackled with the Wald extension, is proven to be false. When the components of the magnitude are independently calculated, the influence of the problem dimension on the cost cannot be avoided. The first BEPU methodologies performed the uncertainty propagation through a surrogate model of the code, also termed emulator or metamodel. The goal of a metamodel is not predictive capability, which is clearly inferior to that of the original code, but the capacity to propagate uncertainties with a lower computational cost. The emulator must contain the input parameters contributing the most to the output uncertainty, and this requires a previous importance analysis. The surrogate model is practically inexpensive to run, so that it can be exhaustively analyzed through Monte Carlo sampling. Therefore, the epistemic uncertainty due to sampling is reduced to almost zero, and the BEPU RAC for metamodels reduces to a simple probability.
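As a rough illustration of the sample-size argument above, the sketch below computes the minimum number of code runs required by the one-sided, first-order Wilks formula and the matching one-sided Clopper-Pearson lower bound on the success probability; the function names and the use of SciPy are assumptions made here for illustration, not part of the thesis.

```python
# Minimum sample size from Wilks' one-sided, first-order formula, and the
# corresponding one-sided Clopper-Pearson lower confidence bound.
# p0 = regulatory probability level, conf = regulatory confidence level.
import math
from scipy.stats import beta as beta_dist

def wilks_min_sample_size(p0: float, conf: float) -> int:
    """Smallest n such that "all n runs pass" gives >= conf confidence that
    at least a fraction p0 of the output population is acceptable: 1 - p0**n >= conf."""
    return math.ceil(math.log(1.0 - conf) / math.log(p0))

def clopper_pearson_lower(k: int, n: int, conf: float) -> float:
    """One-sided Clopper-Pearson lower bound on the success probability
    after observing k acceptable runs out of n."""
    if k == 0:
        return 0.0
    return beta_dist.ppf(1.0 - conf, k, n - k + 1)

if __name__ == "__main__":
    n = wilks_min_sample_size(0.95, 0.95)          # classic 95/95 level -> 59 runs
    print(n, clopper_pearson_lower(n, n, 0.95))    # lower bound >= 0.95 when all runs pass
```

For the classic 95/95 tolerance level this reproduces the well-known 59-run requirement, and the Clopper-Pearson bound shows the agreement between the Q-method and P-method views when every run is acceptable.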
The regulatory authority will tend to accept the use of statistical methods that need a minimum of assumptions: exact, nonparametric and frequentist methods rather than approximate, parametric and Bayesian methods, respectively. The BEPU RAC is based on a second-order probability. The probability of the safety magnitudes being inside the acceptance region is a success probability and can be interpreted as a degree of fulfillment of the RAC. Furthermore, it has a metric interpretation, as a distance (in the range of the magnitudes) from the calculated values of the magnitudes to the regulatory acceptance limits. A probabilistic definition of safety margin (SM) is proposed in the thesis. The SM from a value A to another value B of a safety magnitude is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is nondimensional, ranges in the interval (0,1) and can be easily generalized to multiple dimensions. Furthermore, probabilistic SMs are combined according to the laws of probability. A basic property is that probabilistic SMs are not symmetric. There are several types of SM: the distance from a calculated value to a regulatory limit (licensing margin); from the real value to the calculated value of a magnitude (analytical margin); or from the regulatory limit to the damage threshold (barrier margin). This representation of distances (in the magnitudes' range) as probabilities can be applied to the quantification of conservativeness. The analytical margin can be interpreted as the degree of conservativeness (DG) of the computational methodology. Conservativeness indicators are established in the Thesis, useful in the comparison of different methods of constructing tolerance limits and regions. One topic has not been rigorously tackled to date: the validation of BEPU methodologies. Before being applied in licensing, methodologies must be validated on the basis of comparisons of their predictions and real values of the safety magnitudes. Real data are obtained, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that the real values (and not only the calculated values) fulfill them. In the Thesis it is proved that a sufficient condition for this goal is the conjunction of 2 criteria: the BEPU RAC and an analogous criterion for validation. This last criterion must be proved in experimental scenarios and extrapolated to NPPs. The licensing RAC requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DG). These minimum values are basically complementary: the higher one of them, the lower the other one. Current regulatory practice sets a high value on the licensing margin, so that the required DG is low. The possible adoption of lower values for P0 would imply a weaker requirement on RAC fulfillment and, on the other hand, a stronger requirement on the conservativeness of the methodology. It is important to highlight that a higher minimum value of the licensing or analytical margin requires a higher computational cost. Therefore, the computational efforts are also complementary. If medium levels are adopted, the required DG is also medium, and the methodology does not need to be very conservative. The total computational effort (licensing plus validation) could then be optimized.
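A minimal Monte Carlo sketch of the probabilistic safety margin defined above, Pr(A less severe than B), is given below; the distributions and numbers are hypothetical placeholders used only to illustrate the definition, not values from the thesis.

```python
# Minimal sketch: probabilistic safety margin SM(A, B) = Pr(A < B), i.e. the
# probability that the uncertain calculated magnitude A stays below the (possibly
# also uncertain) limit B. Distributions below are hypothetical placeholders.
import numpy as np

def probabilistic_safety_margin(sample_a: np.ndarray, sample_b: np.ndarray) -> float:
    """Estimate Pr(A < B) from independent Monte Carlo samples of A and B."""
    rng = np.random.default_rng(0)
    b = rng.choice(sample_b, size=sample_a.size)   # pair each draw of A with a draw of B
    return float(np.mean(sample_a < b))

rng = np.random.default_rng(1)
calc = rng.normal(0.80, 0.05, 100_000)             # uncertain calculated magnitude (normalized)
limit = np.full(100_000, 1.00)                     # fixed regulatory limit -> licensing margin
print(probabilistic_safety_margin(calc, limit))    # dimensionless value in (0, 1), here close to 1
```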

Relevance: 60.00%

Abstract:

A high-fidelity virtual tool for the numerical simulation of low-velocity impact damage in unidirectional composite laminates is proposed. A continuum material model for the simulation of intraply damage phenomena is implemented in a numerical scheme as a user subroutine of the commercially available Abaqus finite element package. Delaminations are simulated using cohesive surfaces. The use of structured meshes aligned with the fiber directions allows the physically sound simulation of matrix cracks parallel to the fibers and of their interaction with the development of delaminations. The implementation of element erosion criteria and the application of intraply and interlaminar friction allow for the simulation of fiber splits and their entanglement, which in turn results in permanent indentation of the impacted laminate. It is shown that this simulation strategy gives sound results for impact energies below and above the Barely Visible Impact Damage threshold, up to laminate perforation conditions.
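As a minimal illustration of how a strain-driven continuum damage law of the kind used for intraply damage degrades stiffness, the sketch below shows a generic one-dimensional linear-softening law; the modulus and strain thresholds are hypothetical values and this is not the constitutive model implemented in the paper's Abaqus subroutine.

```python
# Generic 1-D continuum damage law with linear softening: the damage variable d
# grows from 0 at the onset strain e0 to 1 at the final strain ef, and the
# stress is the degraded elastic response (1 - d) * E * strain.
# E, e0 and ef are hypothetical values used only for illustration.
import numpy as np

def damage_variable(strain: np.ndarray, e0: float, ef: float) -> np.ndarray:
    """Linear-softening damage evolution, clamped to [0, 1]."""
    d = ef * (strain - e0) / (strain * (ef - e0))
    return np.clip(d, 0.0, 1.0)

def stress(strain: np.ndarray, E: float, e0: float, ef: float) -> np.ndarray:
    return (1.0 - damage_variable(strain, e0, ef)) * E * strain

eps = np.linspace(1e-6, 0.03, 200)
sig = stress(eps, E=10e9, e0=0.01, ef=0.025)   # Pa; peaks at e0, decays to zero at ef
```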

Relevance: 60.00%

Abstract:

The patterns of rock comminution within tumbling mills, as well as the nature of the forces involved, are of significant practical importance. Discrete element modelling (DEM) has been used to analyse the pattern of specific energy applied to rock, in terms of its spatial distribution within a pilot AG/SAG mill. We also analysed in some detail the nature of the forces which may result in rock comminution. In order to examine the distribution of energy applied within the mill, the DEM models were compared with measured particle mass losses in small-scale AG and SAG mill experiments. The intensity of contact stresses was estimated using the Hertz theory of elastic contacts. The results indicate that in the case of the AG mill, the highest-intensity stresses and strains are likely to occur deep within the charge and close to its base. This effect is probably more pronounced for large AG mills. In the SAG mill case, the impacts of the steel balls on the surface of the charge are likely to be the most potent. In both cases, the spatial pattern of medium-to-high energy collisions is affected by the rotational speed of the mill. Based on an assumed damage threshold for rock, in terms of specific energy introduced per single collision, the spatial pattern of productive collisions within each charge was estimated and compared with the rates of mass loss. We also investigated the nature of the comminution process in the AG vs. the SAG mill, in order to explain the observed differences in energy utilisation efficiency between the two types of milling. All experiments were performed using a laboratory-scale mill of 1.19 m diameter and 0.31 m length, equipped with 14 square-section lifters of height 40 mm. (C) 2006 Elsevier Ltd. All rights reserved.
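For reference, the Hertz theory mentioned above gives closed-form expressions for the contact radius and peak contact pressure of an elastic sphere-on-sphere contact; the sketch below uses hypothetical ball/rock properties purely for illustration and is not the mill model from the paper.

```python
# Hertzian elastic contact of two spheres: contact radius a and peak contact
# pressure p0 for a given normal force F. Material and size values below are
# hypothetical placeholders, not data from the study.
import math

def hertz_contact(F, R1, R2, E1, nu1, E2, nu2):
    """Return (contact radius a [m], peak pressure p0 [Pa]) for normal force F [N]."""
    R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)                     # effective radius
    E_eff = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)   # effective contact modulus
    a = (3.0 * F * R_eff / (4.0 * E_eff)) ** (1.0 / 3.0)    # contact radius
    p0 = 3.0 * F / (2.0 * math.pi * a**2)                   # peak (maximum) contact pressure
    return a, p0

# e.g. steel ball (radius 60 mm) on a rock particle (radius 20 mm), 1 kN normal force
print(hertz_contact(1e3, 0.06, 0.02, 210e9, 0.30, 60e9, 0.25))
```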

Relevance: 60.00%

Abstract:

This paper reports on buried waveguides fabricated in lithium niobate (LN) by direct femtosecond (fs) laser inscription. 5% MgO-doped LiNbO3 was chosen as the host material because of its high quality and damage threshold, as well as its relatively low cost. Direct fs inscription by an astigmatically shaped beam in crystals usually produces multiple 'smooth' tracks (with reduced refractive index) which encircle the light-guiding 'core', thus creating a depressed-cladding waveguide (WG). A high-repetition-rate fs laser system was used for inscription at a depth of approximately 500 μm. Using numerical modelling, it was demonstrated that the properties of fs-written WGs can be controlled by the WG geometry. Buried, depressed-cladding WGs with circular cross-section in the LN host were also demonstrated. Combining control over the WG dispersion with quasi-phase matching will allow various ultralow-pump-power, highly efficient, nonlinear light-guiding devices - all in an integrated optics format.
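As a rough way to see how waveguide geometry controls guiding, the sketch below evaluates the numerical aperture and V-number of a step-index approximation of a depressed-cladding waveguide; the index contrast, core radius and wavelength are hypothetical values, not measurements from the paper.

```python
# Step-index approximation of a depressed-cladding waveguide: numerical aperture
# NA and normalized frequency V. All numbers are hypothetical placeholders.
import math

def na_and_v(n_core: float, n_clad: float, core_radius_um: float, wavelength_um: float):
    na = math.sqrt(n_core**2 - n_clad**2)
    v = 2.0 * math.pi * core_radius_um * na / wavelength_um
    return na, v

# unmodified LiNbO3 "core" surrounded by fs-written tracks with a slightly reduced index
na, v = na_and_v(n_core=2.21, n_clad=2.205, core_radius_um=15.0, wavelength_um=1.55)
print(na, v, "single-mode" if v < 2.405 else "multimode")
```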

Relevance: 60.00%

Abstract:

Nonlinear optics is a broad field of research and technology that encompasses subject matter from physics, chemistry, and engineering. It is the branch of optics that describes the behavior of light in nonlinear media, that is, media in which the dielectric polarization P responds nonlinearly to the electric field E of the light. This nonlinearity is typically only observed at very high light intensities. The area has applications in all-optical and electro-optical devices used for communication, optical storage and optical computing. Many nonlinear optical effects have proved to be versatile probes for understanding basic and applied problems. Nonlinear optical devices use the nonlinear dependence of the refractive index or absorption coefficient on the applied field. These nonlinear optical devices are passive devices and are referred to as intelligent or smart materials, because the sensing, processing and actuating functions required for optical processes are inherent to them, whereas they are separate in dynamic devices. The large interest in nonlinear optical crystalline materials has been motivated by their potential use in the fabrication of all-optical photonic devices. Transparent crystalline materials can exhibit different kinds of optical nonlinearities, which are associated with a nonlinear polarization. The choice of the most suitable crystal material for a given application is often far from trivial and should involve the consideration of many aspects: a high nonlinearity for frequency conversion of ultra-short pulses does not help if the interaction length is strongly limited by a large group velocity mismatch, or if a low damage threshold limits the applicable optical intensities. Also, it can be highly desirable to use a crystal material which can be critically phase-matched at room temperature. Among the different types of nonlinear crystals, metal halides and tartrates have attracted attention because of their importance in photonics. Metal halides such as lead halides have drawn attention because they exhibit interesting features from the standpoint of the electron-lattice interaction, and these materials are important for their luminescent properties. Tartrate single crystals show many interesting physical properties, such as ferroelectric, piezoelectric, dielectric and optical characteristics. They are used for nonlinear optical devices based on their optical transmission characteristics. Among the several tartrate compounds, strontium tartrate, calcium tartrate and cadmium tartrate have received the greatest attention on account of their ferroelectric, nonlinear optical and spectral characteristics. The present thesis reports the linear and nonlinear aspects of these crystals and their potential applications in the field of photonics.
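The nonlinear response referred to above is conventionally written as a power-series expansion of the induced polarization in the electric field; the standard textbook form is shown below only to make the statement concrete.

```latex
% Power-series expansion of the induced polarization in the optical field:
% chi^(1) gives the linear response, chi^(2) governs frequency mixing and
% second-harmonic generation, chi^(3) the intensity-dependent refractive index, etc.
\[
  P \;=\; \varepsilon_0 \left( \chi^{(1)} E \;+\; \chi^{(2)} E^{2} \;+\; \chi^{(3)} E^{3} \;+\; \cdots \right)
\]
```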

Relevance: 60.00%

Abstract:

In recent years, many tidal turbine projects have been developed using composite blades. Tidal turbine blades are subject to ocean forces and sea water aggression, and the reliability of these components is crucial to the profitability of ocean energy recovery systems. The majority of tidal turbine developers have preferred carbon/epoxy blades, so there is a need to understand how prolonged immersion in the ocean affects these composites. In this study the long-term behaviour of different carbon/epoxy composites has been studied using accelerated ageing tests. A significant reduction of composite strengths has been observed after saturation of the material with water. For longer immersions only small further changes in these properties occur. No significant changes have been observed either for the moduli or for the composite toughness. The effect of sea water ageing on damage thresholds and kinetics has been studied and modelled. After saturation, the damage threshold is modified, while the kinetics of damage development remain the same.
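Water uptake in accelerated ageing tests of this kind is commonly described by one-dimensional Fickian diffusion through the laminate thickness; the sketch below evaluates the classical series solution for the relative mass gain with a hypothetical diffusivity and thickness, and is a generic model rather than the one fitted by the authors.

```python
# 1-D Fickian absorption through a plate of thickness h exposed on both faces:
# relative mass uptake M(t)/M_inf as a function of immersion time.
# D (diffusivity) and h are hypothetical placeholder values.
import math

def fickian_uptake(t_s: float, D_m2_per_s: float, h_m: float, n_terms: int = 50) -> float:
    """Classical series solution for M(t)/M_inf (value between 0 and 1)."""
    s = 0.0
    for n in range(n_terms):
        k = 2 * n + 1
        s += math.exp(-(k * math.pi / h_m) ** 2 * D_m2_per_s * t_s) / k**2
    return 1.0 - (8.0 / math.pi**2) * s

# e.g. 4 mm thick laminate, D = 5e-13 m^2/s, after 30 days of immersion
print(fickian_uptake(30 * 24 * 3600, 5e-13, 4e-3))
```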

Relevance: 40.00%

Abstract:

Fatigue crack growth rate tests have been performed on Nimonic AP1, a powder-formed Ni-base superalloy, in air and vacuum at room temperature. These show that threshold values are higher, and near-threshold (faceted) crack growth rates are lower, in vacuum than in air, although at high growth rates, in the “structure-insensitive” regime, R-ratio and a dilute environment have little effect. Changing the R-ratio from 0.1 to 0.5 in vacuum does not alter near-threshold crack growth rates very much, despite more extensive secondary cracking being noticeable at R = 0.5. In vacuum, rewelding occurs at contact points across the crack as ΔK falls. This leads to the production of extensive fracture surface damage and bulky fretting debris, and is thought to be a significant contributory factor to the observed increase in threshold values.

Relevance: 30.00%

Abstract:

Monte Carlo track structure (MCTS) simulations have been recognized as useful tools for radiobiological modeling. However, the authors noticed several issues regarding the consistency of reported data. Therefore, in this work, they analyze the impact of various user-defined parameters on simulated direct DNA damage yields. In addition, they draw attention to discrepancies in the published literature in DNA strand break (SB) yields and in the selected methodologies. The MCTS code Geant4-DNA was used to compare radial dose profiles in a nanometer-scale region of interest (ROI) for photon sources of varying sizes and energies. Then, electron tracks of 0.28 keV-220 keV were superimposed on a geometric DNA model composed of 2.7 × 10^6 nucleosomes, and SBs were simulated according to four definitions based on energy deposits or energy transfers in DNA strand targets compared to a threshold energy ETH. The SB frequencies and complexities in nucleosomes as a function of incident electron energy were obtained. SBs were classified into higher-order clusters such as single and double strand breaks (SSBs and DSBs) based on inter-SB distances and on the number of affected strands. Comparisons of different nonuniform dose distributions lacking charged particle equilibrium may lead to erroneous conclusions regarding the effect of energy on relative biological effectiveness. The energy transfer-based SB definitions give SB yields similar to the definition based on energy deposit when ETH ≈ 10.79 eV, but deviate significantly for higher ETH values. Between 30 and 40 nucleosomes/Gy show at least one SB in the ROI. The number of nucleosomes that present a complex damage pattern of more than 2 SBs, and the degree of complexity of the damage in these nucleosomes, diminish as the incident electron energy increases. DNA damage classification into SSBs and DSBs is highly dependent on the definitions of these higher-order structures and on their implementations. The authors show that, for the four studied models, yields differing by up to 54% for SSBs and by up to 32% for DSBs are expected, as a function of the incident electron energy and of the models being compared. MCTS simulations make it possible to compare direct DNA damage types and complexities induced by ionizing radiation. However, simulation results depend to a large degree on user-defined parameters, definitions, and algorithms, such as the DNA model, the dose distribution, the SB definition, and the DNA damage clustering algorithm. These interdependencies should be well controlled during the simulations and explicitly reported when comparing results to experiments or calculations.
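To make the clustering step concrete, the sketch below shows one simple way to turn per-strand energy deposits into strand breaks (energy at or above the threshold) and to pair breaks on opposite strands within a base-pair distance into double strand breaks; the 10 bp pairing distance, the data layout and the greedy pairing are assumptions made here for illustration, not the definitions or algorithm used in the paper.

```python
# Toy classification of DNA strand breaks: an energy deposit on a strand segment
# becomes a strand break (SB) if it reaches e_th; SBs on opposite strands within
# max_bp base pairs are paired into a double strand break (DSB); the remaining
# SBs are counted as single strand breaks (SSBs). Values are illustrative only.
from typing import List, Tuple

def classify_breaks(deposits: List[Tuple[int, int, float]],
                    e_th: float = 10.79, max_bp: int = 10):
    """deposits: list of (base-pair index, strand 0 or 1, deposited energy in eV)."""
    sbs = [(bp, strand) for bp, strand, e in deposits if e >= e_th]
    used, dsb = set(), 0
    for i, (bp_i, s_i) in enumerate(sbs):
        if i in used:
            continue
        for j in range(i + 1, len(sbs)):
            bp_j, s_j = sbs[j]
            if j not in used and s_j != s_i and abs(bp_j - bp_i) <= max_bp:
                used.update((i, j))
                dsb += 1
                break
    ssb = len(sbs) - 2 * dsb
    return ssb, dsb

print(classify_breaks([(100, 0, 12.0), (104, 1, 15.5), (300, 0, 11.0), (305, 0, 9.0)]))
```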

Relevance: 30.00%

Abstract:

Dissertation submitted to obtain the degree of Master in Mechanical Engineering.

Relevance: 30.00%

Abstract:

In this thesis, the radiation hardness of the building blocks of a future flexible X-ray sensor system was investigated. The characterized building blocks for the pixel-addressing and signal-amplification electronics are high-mobility semiconducting oxide transistors (HMSO-TFTs) and organic transistors (OTFTs), whereas the photonic detection system is based on organic semiconducting single crystals (OSSCs). TFT parameters such as mobility, threshold voltage and subthreshold slope were measured as a function of cumulative X-ray dose. For OSSCs, instead, conductivity and X-ray sensitivity were analysed after the various irradiation steps. The results show that ionizing radiation does not lead to degradation in HMSO-TFTs. OTFTs, instead, show instability in mobility, which is reduced by up to 73% for doses of 1 kGy. OSSCs demonstrate stable detector properties over the tested total dose range. In conclusion, HMSO-TFTs and OSSCs can be readily employed in the X-ray detector system, allowing operation for total doses exceeding 1 kGy of ionizing radiation.
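One standard way to obtain the threshold voltage and mobility tracked in such measurements is a straight-line fit to the square root of the drain current versus gate voltage in the saturation regime; the sketch below illustrates that extraction with hypothetical device dimensions and synthetic data, and is not the exact procedure or the values from the thesis.

```python
# Extraction of threshold voltage Vth and saturation mobility mu from a TFT
# transfer curve using I_D = (mu * Cox * W) / (2 * L) * (V_G - Vth)^2:
# fit sqrt(I_D) vs V_G; the x-intercept gives Vth and the slope gives mu.
# Device geometry, Cox and the synthetic data are hypothetical placeholders.
import numpy as np

W, L, COX = 200e-6, 20e-6, 1.5e-4          # channel width/length [m], Cox [F/m^2]

def extract_vth_mu(vg: np.ndarray, i_d: np.ndarray):
    slope, intercept = np.polyfit(vg, np.sqrt(i_d), 1)
    vth = -intercept / slope               # x-intercept of the sqrt(I_D) line
    mu = 2.0 * L * slope**2 / (W * COX)    # mobility in m^2/(V s)
    return vth, mu

# synthetic saturation-regime data for a device with Vth = 2 V, mu = 10 cm^2/(V s)
vg = np.linspace(4.0, 12.0, 20)
i_d = (10e-4 * COX * W) / (2 * L) * (vg - 2.0) ** 2
print(extract_vth_mu(vg, i_d))             # recovers ~ (2.0 V, 1e-3 m^2/(V s))
```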

Relevance: 30.00%

Abstract:

This study evaluated the effect of initial pH values of 4.5, 6.5 and 8.5 of the attractant (the protein bait Milhocina® with borax, sodium borate) on the field capture of fruit flies in McPhail traps, using 1, 2, 4 and 8 traps per hectare, in order to estimate control thresholds in a Hamlin orange grove in the central region of the state of São Paulo. The most abundant fruit fly species was Ceratitis capitata, comprising almost 99% of the fruit flies captured, of which 80% were females. The largest captures of C. capitata were found in traps baited with Milhocina® and borax at pH 8.5. Captures per trap for the four densities were similar, indicating that the population can be estimated with one trap per hectare in areas with high populations. Positive relationships were found between captures of C. capitata and the number of damaged Hamlin oranges 2 and 3 weeks after capture. Equations correlating captures and damage levels were obtained, which can be used to estimate control thresholds. The average loss caused in Hamlin orange fruits by C. capitata was 2.5 tons per hectare, or 7.5% of production.
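A minimal sketch of how capture/damage regressions of this kind can be turned into a control threshold is given below; the coefficients, data points and tolerable injury level are hypothetical, not the equations reported in the study.

```python
# Fit a linear relation between weekly trap captures of C. capitata and the number
# of damaged fruits observed 2-3 weeks later, then invert it to find the capture
# level (control threshold) corresponding to a tolerable damage level.
# All numbers are hypothetical placeholders, not the study's data.
import numpy as np

captures = np.array([2, 5, 8, 12, 20, 30, 45])        # flies per trap per week
damaged = np.array([4, 9, 15, 22, 38, 55, 80])        # damaged fruits per hectare

slope, intercept = np.polyfit(captures, damaged, 1)   # damaged ~ slope*captures + intercept

def control_threshold(tolerable_damage: float) -> float:
    """Capture level above which damage is expected to exceed the tolerable level."""
    return (tolerable_damage - intercept) / slope

print(round(control_threshold(25.0), 1))               # captures per trap per week
```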

Relevance: 30.00%

Abstract:

Laser-induced damage and ablation thresholds of bulk superconducting samples of Bi2(SrCa)xCu3Oy (x = 2, 2.2, 2.6, 2.8, 3) and Bi1.6(Pb)xSr2Ca2Cu3Oy (x = 0, 0.1, 0.2, 0.3, 0.4), irradiated with a 1.06 μm beam from a Nd-YAG laser, have been determined as a function of x by the pulsed photothermal deflection technique. The threshold values of power density for ablation as well as for damage are found to increase with increasing values of x in both systems, while in the Pb-doped system the threshold values decrease above a specific value of x, coinciding with the point at which Tc also begins to fall.

Relevance: 30.00%

Abstract:

This work aims to develop an intelligent system for detecting workpiece burn in the surface (flat tangential) grinding process, using a multilayer perceptron neural network trained to generalize the process and, consequently, to obtain the burn threshold. In general, the occurrence of burn in the grinding process can be detected by the DPO and FKS parameters; however, these parameters are not effective under the machining conditions used in this work. The acoustic emission signal and the electric power of the wheel drive motor are the input variables, and the output variable is the occurrence of burn. In the experimental work, one type of steel (quenched ABNT 1045) and one type of grinding wheel, named TARGA, model ART 3TG80.3 NVHB, were employed.
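A minimal sketch of a multilayer perceptron classifier of the kind described, taking an acoustic emission feature and a wheel-drive power feature as inputs and predicting burn/no-burn, is given below; the feature names, network size and synthetic data are assumptions made for illustration, not the network trained in the thesis.

```python
# Toy multilayer perceptron for grinding-burn detection: inputs are an acoustic
# emission feature and a wheel-drive power feature per grinding pass; the output
# is burn (1) / no burn (0). Synthetic data and hyperparameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400
ae_rms = rng.uniform(0.1, 1.0, n)                  # normalized acoustic emission RMS
power = rng.uniform(0.2, 1.0, n)                   # normalized spindle power
burn = ((ae_rms * power) > 0.45).astype(int)       # hypothetical burn labelling rule

X = np.column_stack([ae_rms, power])
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
model.fit(X, burn)
print(model.predict([[0.9, 0.8], [0.2, 0.3]]))     # expected: burn, no burn
```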

Relevance: 30.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Abstract:

Structural health monitoring (SHM) is concerned with the ability to monitor the state of a structure and to assess the level of damage or deterioration within aerospace, civil and mechanical systems. In this sense, this paper deals with the application of a two-step auto-regressive and auto-regressive with exogenous inputs (AR-ARX) model for linear prediction in damage diagnosis of structural systems. This damage detection algorithm is based on the monitoring of residual errors as damage-sensitive indices, obtained through vibration response measurements. In complex structures there are many positions under observation and a large amount of data to be handled, which makes visualization of the signals difficult. This paper also investigates data compression by using principal component analysis. In order to establish a threshold value, fuzzy c-means clustering is used to quantify the damage-sensitive index in an unsupervised learning mode. Tests are made on a benchmark problem proposed by IASC-ASCE, with different damage patterns. The diagnosis obtained showed high correlation with the actual integrity state of the structure. Copyright © 2007 by ABCM.
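To illustrate the residual-based index, the sketch below fits an auto-regressive model to a baseline vibration signal by least squares and uses the ratio of residual standard deviations on new data as a damage-sensitive feature; this is a simplified single-step AR version of the two-step AR-ARX scheme, with synthetic signals, and is not the paper's implementation.

```python
# Simplified residual-based damage index: fit an AR(p) model to a healthy baseline
# signal, apply it to a test signal, and use the ratio of residual standard
# deviations as the damage-sensitive index (> 1 suggests a change in the system).
# Signals and model order are synthetic/illustrative.
import numpy as np

def fit_ar(x: np.ndarray, p: int) -> np.ndarray:
    """Least-squares AR(p) coefficients predicting x[t] from x[t-1..t-p]."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    return np.linalg.lstsq(X, x[p:], rcond=None)[0]

def residuals(x: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    p = len(coeffs)
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    return x[p:] - X @ coeffs

rng = np.random.default_rng(0)
t = np.arange(2000)
baseline = np.sin(0.10 * t) + 0.01 * rng.standard_normal(t.size)   # healthy response
damaged = np.sin(0.30 * t) + 0.01 * rng.standard_normal(t.size)    # shifted dominant frequency

coeffs = fit_ar(baseline, p=2)
index = residuals(damaged, coeffs).std() / residuals(baseline, coeffs).std()
print(index)   # values above 1 indicate deviation from the baseline model
```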