944 results for second order condition


Relevance: 90.00%

Abstract:

Deterministic safety analysis (DSA) is the procedure used to design safety-related systems, structures and components of nuclear power plants. DSA is based on computational simulations of a set of hypothetical accidents representative of the installation, called design basis scenarios (DBS). Regulatory bodies specify a set of safety magnitudes that must be calculated in the simulations and establish regulatory acceptance criteria (RAC), which are restrictions that the values of those magnitudes must satisfy. Methodologies for performing DSA can be of two types: conservative or realistic. Conservative methodologies use markedly pessimistic predictive models and assumptions and are therefore relatively simple; they do not need to include an uncertainty analysis of their results. Realistic methodologies are based on realistic, generally mechanistic, assumptions and predictive models, and are supplemented with an uncertainty analysis of their main results. They are also called BEPU ("Best Estimate Plus Uncertainty") methodologies, and in them uncertainty is represented essentially in probabilistic terms. For conservative methodologies, the RAC are simply restrictions on calculated values of the safety magnitudes, which must remain confined within an "acceptance region" of their range. For BEPU methodologies, the RAC cannot be that simple, because the safety magnitudes are now uncertain variables. The thesis develops how uncertainty is introduced into the RAC. Basically, confinement to the same acceptance region, established by the regulator, is maintained; however, strict compliance is not demanded, but rather a high level of certainty. In the adopted formalism, this is understood as a "high level of probability", the probability corresponding to the calculation uncertainty of the safety magnitudes. That uncertainty can be regarded as originating in the inputs to the calculation model and propagated through the model. The uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which are used to incorporate the uncertainty due to model imperfection. Compliance with the RAC is therefore required with a probability no lower than a value P0 close to 1 and defined by the regulator (probability or coverage level). However, the calculation uncertainty of the magnitude is not the only uncertainty involved. Even if a model (its basic equations) is perfectly known, the input-output mapping it produces is known only imperfectly (unless the model is very simple). The uncertainty due to ignorance about the action of the model is called epistemic; it can also be described as uncertainty about the propagation. The consequence is that the probability of RAC compliance cannot be known perfectly: it is itself an uncertain quantity. This justifies another term used here for this epistemic uncertainty: metauncertainty. The RAC must incorporate both types of uncertainty: that of the calculation of the safety magnitude (here called aleatory) and that of the calculation of the probability (called epistemic, or metauncertainty). Both uncertainties can be introduced in two ways: separately or combined. In either case, the RAC becomes a probabilistic criterion.
If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. If a second-order probability is employed, the regulator must impose a second compliance level, referring to the epistemic uncertainty. It is called the regulatory confidence level and must be a number close to 1. The pair formed by the two regulatory levels (probability and confidence) is called the regulatory tolerance level. The thesis argues that the best way to construct the BEPU RAC is by separating the uncertainties, for two reasons. First, experts advocate the separate treatment of aleatory and epistemic uncertainty. Second, the separated RAC is (except in exceptional cases) more conservative than the combined RAC. The BEPU RAC is nothing more than a hypothesis about a probability distribution, and it is checked statistically. The thesis classifies the statistical methods for checking the BEPU RAC into 3 categories, according to whether they are based on the construction of tolerance regions, on quantile estimation, or on probability estimation (either of compliance or of exceedance of regulatory limits). Following recently proposed terminology, the first two categories correspond to Q-methods and the third to P-methods. The purpose of the classification is not to inventory the individual methods in each category, which are very numerous and varied, but to relate the categories to one another and to cite the most widely used methods and those best regarded from a regulatory standpoint. Special mention is made of the method most used to date: Wilks' nonparametric method, together with its extension by Wald to the multidimensional case. Its P-method counterpart, the Clopper-Pearson interval, typically ignored in the BEPU field, is also described. In this context, the problem of the computational cost of uncertainty analysis is addressed. The Wilks, Wald and Clopper-Pearson methods require the random sample used to have a minimum size, which grows with the required tolerance level. The sample size is an indicator of the computational cost, because each sample element is a value of the safety magnitude, which requires a calculation with predictive models. Special emphasis is placed on the computational cost when the safety magnitude is multidimensional, that is, when the RAC is a multiple criterion. It is shown that, when the different components of the magnitude are obtained from the same calculation, the multidimensional character introduces no additional computational cost. This disproves a common belief in the BEPU field: that the multidimensional problem can only be attacked through the Wald extension, whose computational cost grows with the dimension of the problem. In the case (which sometimes arises) in which each component of the magnitude is calculated independently of the others, the influence of the dimension on the cost cannot be avoided. The earliest BEPU methodologies performed the uncertainty propagation through a surrogate model (metamodel or emulator) of the predictive model or code. The purpose of the metamodel is not its predictive capability, which is much lower than that of the original model, but to replace it exclusively in the propagation of uncertainties.
To that end, the metamodel must be built with the input parameters that contribute most to the uncertainty of the result, and this requires a prior importance or sensitivity analysis. Because of its simplicity, the surrogate model entails almost no computational cost and can be studied exhaustively, for example by means of random samples. As a consequence, the epistemic uncertainty, or metauncertainty, disappears, and the BEPU criterion for metamodels reduces to a simple probability. In brief, the regulator will more readily accept the statistical methods that need the fewest assumptions: exact methods over approximate ones, nonparametric over parametric, and frequentist over Bayesian. The BEPU criterion is based on a second-order probability. The probability that the safety magnitudes lie within the acceptance region can not only be regarded as a probability of success, or a degree of compliance with the RAC; it also has a metric interpretation: it represents a distance (within the range of the magnitudes) from the calculated magnitude to the regulatory acceptance limits. This interpretation motivates a definition proposed in this thesis: that of the probabilistic safety margin. Given a scalar safety magnitude with an upper acceptance limit, the safety margin (SM) between two values A and B of that magnitude is defined as the probability that A is less than B, obtained from the uncertainties of A and B. The probabilistic definition of the SM has several advantages: it is dimensionless, it can be combined according to the laws of probability, and it generalizes easily to several dimensions. Moreover, it is not symmetric. The term safety margin can be applied to different situations: the distance from a calculated magnitude to a regulatory limit (licensing margin); the distance from the real value of the magnitude to its calculated value (analytical margin); and the distance from a regulatory limit to the damage threshold of a barrier (barrier margin). This idea of representing distances (in the range of safety magnitudes) by probabilities can be applied to the study of conservatism. The analytical margin can be interpreted as the degree of conservatism (DG) of the calculation methodology. Using probability, the conservatism of tolerance limits of a magnitude can be quantified, and conservatism indicators can be established to compare different methods for constructing tolerance limits and regions. One topic that has never been addressed rigorously is the validation of BEPU methodologies. Like any other calculation tool, a methodology must be validated before it can be applied to licensing analyses, by comparing its predictions with real values of the safety magnitudes. Such a comparison can only be made for accident scenarios for which measured values of the safety magnitudes exist, and that is basically the case in experimental facilities. The ultimate purpose of establishing the RAC is to verify that they are fulfilled by the real values of the safety magnitudes, and not only by their calculated values. The thesis proves that a sufficient condition for this ultimate purpose is the joint fulfillment of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation.
The validation criterion must be demonstrated in experimental scenarios and extrapolated to nuclear power plants. The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (the DG). These minimum levels are essentially complementary: the higher one is, the lower the other. Current regulatory practice imposes a high value on the licensing margin, which means that the required DG is small. Adopting lower values for P0 implies a weaker requirement on RAC compliance and, in exchange, a stronger requirement on the DG of the methodology. It is important to note that the higher the minimum value of the margin (licensing or analytical), the higher the computational cost of demonstrating it, so the computational efforts are also complementary: if one of the levels is high (which tightens the corresponding criterion), the computational cost increases. If an intermediate value of P0 is adopted, the required DG is also intermediate, so the methodology does not need to be very conservative and the total computational cost (licensing plus validation) can be optimized.

ABSTRACT: Deterministic Safety Analysis (DSA) is the procedure used in the design of safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes, and define the regulatory acceptance criteria (RAC) that must be fulfilled by them. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies make use of pessimistic models and assumptions, and are relatively simple. They do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions, and need to be supplemented with uncertainty analyses of their results. They are also termed BEPU ("Best Estimate Plus Uncertainty") methodologies, and are typically based on a probabilistic representation of the uncertainty. For conservative methodologies, the RAC are simply the restriction of calculated values of safety magnitudes to "acceptance regions" defined on their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain. In the present Thesis, the inclusion of uncertainty in the RAC is studied. Basically, the restriction to the acceptance region must be fulfilled "with a high certainty level"; specifically, a high probability of fulfillment is required. The calculation uncertainty of the magnitudes is considered as propagated from the inputs through the predictive model. Uncertain inputs include model empirical parameters, which carry the uncertainty due to model imperfection. Fulfillment of the RAC is required with a probability not less than a value P0 close to 1 and defined by the regulator (probability or coverage level). Calculation uncertainty is not the only one involved. Even if a model (i.e. its basic equations) is perfectly known, the input-output mapping produced by the model is imperfectly known (unless the model is very simple). This ignorance is called epistemic uncertainty, and it is associated with the process of propagation; in fact, it is propagated to the probability of fulfilling the RAC. Another term used in the Thesis for this epistemic uncertainty is metauncertainty.
The RAC must include the two types of uncertainty: one for the calculation of the magnitude (aleatory uncertainty) and one for the calculation of the probability (epistemic uncertainty). The two uncertainties can be taken into account separately or can be combined; in either case the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if both are combined, a single probability is used. In the first case, the regulator must define a level of fulfillment for the epistemic uncertainty, termed the regulatory confidence level, as a value close to 1. The pair of regulatory levels (probability and confidence) is termed the regulatory tolerance level. The Thesis concludes that the adequate way of setting the BEPU RAC is by separating the uncertainties. There are two reasons to do so: experts recommend the separation of aleatory and epistemic uncertainty, and the separated RAC is in general more conservative than the joint RAC. The BEPU RAC is a hypothesis on a probability distribution and must be statistically tested. The Thesis classifies the statistical methods used to verify RAC fulfillment into 3 categories: methods based on tolerance regions, on quantile estimators, and on probability (of success or failure) estimators. The former two have been termed Q-methods, whereas those in the third category are termed P-methods. The purpose of this categorization is not to make an exhaustive survey of the very numerous existing methods, but rather to relate the three categories and examine the most used methods from a regulatory standpoint. The most used method, due to Wilks, and its extension to multidimensional variables, due to Wald, deserve special mention. The P-method counterpart of Wilks' method is the Clopper-Pearson interval, typically ignored in the BEPU realm. The problem of the computational cost of an uncertainty analysis is tackled. The Wilks, Wald and Clopper-Pearson methods require a minimum sample size, which is a growing function of the tolerance level. The sample size is an indicator of the computational cost, because each element of the sample must be calculated with the predictive models (codes). When the RAC is a multiple criterion, the safety magnitude becomes multidimensional. When all its components are outputs of the same calculation, the multidimensional character does not introduce additional computational cost. In this way, a widespread idea in the BEPU realm, stating that the multidimensional problem can only be tackled with the Wald extension, is proven to be false. When the components of the magnitude are calculated independently, the influence of the problem dimension on the cost cannot be avoided. The earliest BEPU methodologies performed the uncertainty propagation through a surrogate model of the code, also termed an emulator or metamodel. The goal of a metamodel is not predictive capability, which is clearly inferior to that of the original code, but the capacity to propagate uncertainties at a lower computational cost. The emulator must contain the input parameters contributing the most to the output uncertainty, and this requires a previous importance analysis. The surrogate model is practically inexpensive to run, so that it can be exhaustively analyzed through Monte Carlo sampling. Therefore, the epistemic uncertainty due to sampling is reduced to almost zero, and the BEPU RAC for metamodels involves a simple probability.
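
As a concrete illustration of the sample-size argument above, a minimal sketch (in Python, assuming scipy is available; not part of the Thesis) computing the classical one-sided, first-order Wilks sample size and the corresponding Clopper-Pearson lower confidence bound on the fulfillment probability:

import math

from scipy.stats import beta


def wilks_sample_size(p0: float, conf: float) -> int:
    """Smallest N such that the maximum of N runs is a one-sided upper
    tolerance limit with coverage p0 and confidence conf: 1 - p0**N >= conf."""
    return math.ceil(math.log(1.0 - conf) / math.log(p0))


def clopper_pearson_lower(successes: int, n: int, conf: float) -> float:
    """One-sided lower Clopper-Pearson bound on the fulfillment probability,
    given `successes` runs out of n inside the acceptance region."""
    if successes == 0:
        return 0.0
    return float(beta.ppf(1.0 - conf, successes, n - successes + 1))


# Classical 95/95 tolerance level: 59 code runs for a scalar safety magnitude.
print(wilks_sample_size(0.95, 0.95))        # -> 59
# If all 59 runs satisfy the criterion, the 95% lower bound on the
# fulfillment probability is about 0.95, i.e. the same regulatory statement.
print(clopper_pearson_lower(59, 59, 0.95))  # -> ~0.9506
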
The regulatory authority will tend to accept the use of statistical methods which need a minimum of assumptions: exact, nonparametric and frequentist methods rather than approximate, parametric and Bayesian methods, respectively. The BEPU RAC is based on a second-order probability. The probability of the safety magnitudes being inside the acceptance region is a success probability and can be interpreted as a fulfillment degree of the RAC. Furthermore, it has a metric interpretation, as a distance (in the range of the magnitudes) from the calculated values of the magnitudes to the regulatory acceptance limits. A probabilistic definition of safety margin (SM) is proposed in the Thesis. The margin from a value A to another value B of a safety magnitude is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is nondimensional, it ranges in the interval (0,1), and it can be easily generalized to multiple dimensions. Furthermore, probabilistic SMs are combined according to the laws of probability. A basic property is that probabilistic SMs are not symmetric. There are several types of SM: the distance from a calculated value to a regulatory limit (licensing margin); from the real value to the calculated value of a magnitude (analytical margin); or from the regulatory limit to the damage threshold (barrier margin). These representations of distances (in the magnitudes' range) as probabilities can be applied to the quantification of conservativeness. The analytical margin can be interpreted as the degree of conservativeness (DG) of the computational methodology. Conservativeness indicators are established in the Thesis, useful in the comparison of different methods of constructing tolerance limits and regions. One topic has not been rigorously tackled to date: the validation of BEPU methodologies. Before being applied in licensing, methodologies must be validated on the basis of comparisons between their predictions and real values of the safety magnitudes. Real data are obtained, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that real values (and not only calculated values) fulfill them. In the Thesis it is proved that a sufficient condition for this goal is the conjunction of 2 criteria: the licensing BEPU RAC and an analogous criterion for validation. This last criterion must be proved in experimental scenarios and extrapolated to NPPs. The licensing RAC requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DG). These minimum values are basically complementary: the higher one of them, the lower the other one. Current regulatory practice sets a high value for the licensing margin, so that the required DG is low. The possible adoption of lower values for P0 would imply a weaker requirement on RAC fulfillment and, on the other hand, a stronger requirement on the conservativeness of the methodology. It is important to highlight that a higher minimum value of the licensing or analytical margin requires a higher computational cost, so the computational efforts are also complementary. If medium levels are adopted, the required DG is also medium, and the methodology does not need to be very conservative. The total computational effort (licensing plus validation) can then be optimized.
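
The probabilistic safety margin defined above, SM(A, B) = P(A less severe than B), can be estimated directly by Monte Carlo sampling of the uncertainties of A and B. The sketch below is purely illustrative (arbitrary normal distributions and units, not the Thesis' data) and also shows the non-symmetry of the margin:

import numpy as np

rng = np.random.default_rng(42)


def probabilistic_margin(a: np.ndarray, b: np.ndarray) -> float:
    """Estimate SM(A, B) = P(A < B) from independent samples of the two
    uncertain values (A 'less severe' than B for an upper acceptance limit)."""
    return float(np.mean(a[:, None] < b[None, :]))


# Illustrative licensing margin: calculated peak value A vs. regulatory limit B,
# both treated as uncertain (invented distributions).
a = rng.normal(loc=1100.0, scale=60.0, size=2000)
b = rng.normal(loc=1200.0, scale=20.0, size=2000)
print(f"SM(A, B) = P(A < B) ≈ {probabilistic_margin(a, b):.3f}")
print(f"SM(B, A) = P(B < A) ≈ {probabilistic_margin(b, a):.3f}  (not symmetric)")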

Relevance: 90.00%

Abstract:

A first-order Lagrangian L∇, variationally equivalent to the second-order Einstein–Hilbert Lagrangian, is introduced. Such a Lagrangian depends on a symmetric linear connection, but the dependence is covariant under diffeomorphisms. The variational problem defined by L∇ is proved to be regular and its Hamiltonian formulation is studied, including its covariant Hamiltonian attached to ∇.
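
For background only (this is the classical, noncovariant construction, not the covariant Lagrangian L∇ of the paper): the second-order Einstein–Hilbert density differs from Einstein's first-order "ΓΓ" density only by a total divergence, which is why the two define the same variational problem.

% Sign convention assumed here:
% R_{\mu\nu} = \partial_\lambda\Gamma^\lambda_{\mu\nu}
%            - \partial_\nu\Gamma^\lambda_{\mu\lambda}
%            + \Gamma^\lambda_{\lambda\sigma}\Gamma^\sigma_{\mu\nu}
%            - \Gamma^\lambda_{\nu\sigma}\Gamma^\sigma_{\mu\lambda}.
\[
  \sqrt{-g}\,R
  = \sqrt{-g}\,g^{\mu\nu}\bigl(\Gamma^{\lambda}_{\mu\sigma}\Gamma^{\sigma}_{\nu\lambda}
    - \Gamma^{\lambda}_{\mu\nu}\Gamma^{\sigma}_{\sigma\lambda}\bigr)
  + \partial_{\lambda}\!\Bigl[\sqrt{-g}\bigl(g^{\mu\nu}\Gamma^{\lambda}_{\mu\nu}
    - g^{\lambda\nu}\Gamma^{\mu}_{\mu\nu}\bigr)\Bigr].
\]

The ΓΓ density is first order in the metric but is not a scalar density under general coordinate changes; according to the abstract, L∇ avoids this drawback by letting the first-order density depend on an auxiliary symmetric linear connection ∇ in a way that is covariant under diffeomorphisms.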

Relevance: 90.00%

Abstract:

Graphs of second harmonic generation coefficients and electro-optic coefficients (measured by ellipsometry, attenuated total reflection, and two-slit interference modulation) as a function of chromophore number density (chromophore loading) are experimentally observed to exhibit maxima for polymers containing chromophores characterized by large dipole moments and polarizabilities. Modified London theory is used to demonstrate that this behavior can be attributed to the competition of chromophore-applied electric field and chromophore–chromophore electrostatic interactions. The comparison of theoretical and experimental data explains why the promise of exceptional macroscopic second-order optical nonlinearity predicted for organic materials has not been realized, and suggests routes for circumventing current limitations to large optical nonlinearity. The results also suggest extensions of measurement and theoretical methods to achieve an improved understanding of intermolecular interactions in condensed-phase materials, including materials prepared by sequential synthesis and block copolymer methods.
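
A deliberately crude toy calculation (not the modified London theory used in the paper; all parameter values are invented) can illustrate the competition described above: the macroscopic electro-optic response scales roughly as the number density N times the poling-induced acentric order, and if that order is attenuated by chromophore-chromophore electrostatic interactions that grow with N, the product passes through a maximum as loading increases.

import numpy as np

# Toy competition model (arbitrary units): response ~ N * <cos^3 theta>, with the
# poling-induced acentric order attenuated as chromophore-chromophore
# electrostatic interactions grow with number density N.
N = np.linspace(0.05, 10.0, 200)       # chromophore number density (arb. units)
order_dilute = 0.2                     # acentric order at infinite dilution (invented)
interaction_penalty = 0.08 * N**2      # crude interaction term growing with loading
acentric_order = order_dilute * np.exp(-interaction_penalty)
r33 = N * acentric_order               # macroscopic electro-optic response (arb. units)

print(f"toy optimum loading: N ≈ {N[np.argmax(r33)]:.2f} (arbitrary units)")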

Relevance: 90.00%

Abstract:

We present the first detailed numerical study in three dimensions of a first-order phase transition that remains first order in the presence of quenched disorder (specifically, the ferromagnetic-paramagnetic transition of the site-diluted four-state Potts model). A tricritical point, which lies surprisingly near the pure-system limit and is studied by means of finite-size scaling, separates the first-order and second-order parts of the critical line. This investigation has been made possible by a new definition of the disorder average that avoids the diverging-variance probability distributions that plague the standard approach. Entropy, rather than free energy, is the basic object in this approach, which exploits a recently introduced microcanonical Monte Carlo method.

Relevance: 90.00%

Abstract:

We present a detailed numerical study of the effects of adding quenched impurities to a three-dimensional system which in the pure case undergoes a strong first-order phase transition (specifically, the ferromagnetic/paramagnetic transition of the site-diluted four-state Potts model). The transition remains first order in the presence of a small amount of quenched disorder, but becomes second order as more impurities are added. A tricritical point, which is studied by means of finite-size scaling, separates the first-order and second-order parts of the critical line. The results were made possible by a new definition of the disorder average that avoids the diverging-variance probability distributions that arise with the standard methodology. We also made use of a recently proposed microcanonical Monte Carlo method in which entropy, instead of free energy, is the basic quantity.
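
The diverging-variance problem with the standard disorder average can be shown with a minimal numerical sketch (a generic illustration, not the estimator actually introduced in the paper): if a sample-dependent observable is, for illustration, log-normally distributed with a width that grows with system size, the arithmetic mean over disorder realizations is dominated by rare samples, whereas a quantile-based summary such as the median stays well behaved.

import numpy as np

rng = np.random.default_rng(0)

# Standard quenched average: arithmetic mean over disorder realizations of an
# observable that (for illustration) is log-normally distributed with a width
# growing with system size L. The mean becomes dominated by rare samples,
# while the median stays well behaved.
for L in (8, 16, 32):
    sigma = 0.3 * L                          # toy choice: log-width grows with L
    obs = rng.lognormal(mean=0.0, sigma=sigma, size=10_000)
    rel_err_mean = obs.std() / (obs.mean() * np.sqrt(obs.size))
    print(f"L={L:2d}  mean={obs.mean():.3e}  median={np.median(obs):.3e}  "
          f"relative error of the mean={rel_err_mean:.2f}")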

Relevance: 90.00%

Abstract:

Mild traumatic brain injury (mTBI) has complex effects on several brain functions, which can be difficult to assess and monitor. Visual problems and balance disorders are among the complaints most frequently reported after an mTBI. Moreover, these problems can continue to affect people who have sustained an mTBI long after the acute phase of the injury. However, conventional clinical assessments of vision and balance usually fail to objectify these symptoms, especially when they become long-lasting. In addition, to our knowledge, no longitudinal study has examined visual perceptual deficits as such, nor balance disorders secondary to mTBI, in adults. The objective of this project was therefore to determine the nature and duration of the effects of such an injury on visual perception and on postural stability, by assessing mTBI and control adults over a one-year period. Exactly the same subjects participated in both experiments, which were conducted on the same days for each subject. The impact of mTBI on the visual perception of sinusoidal gratings defined by first- and second-order attributes was studied first. Fifteen adults diagnosed with mTBI were assessed 15 days, 3 months and 12 months after their injury; fifteen matched control adults were assessed at identical time points. Reaction times (RT) for flicker detection and for motion-direction discrimination were measured. The contrast levels of the first- and second-order stimuli were adjusted so that they had comparable visibility, and the means, medians, standard deviations (SD) and interquartile ranges (IQR) of the RTs corresponding to correct responses were computed. The level of symptoms was also assessed in order to compare it with the RT data. Overall, the RTs of the mTBI participants were longer and more variable (larger SD and IQR) than those of the controls. In addition, the RTs of the mTBI participants were shorter for first-order stimuli than for second-order stimuli, and more variable for first-order than for second-order stimuli, in the motion-discrimination condition. These observations were repeated across the three sessions. The symptom level of the mTBI participants was higher than that of the control participants, and although it improved, this difference remained significant over the year following the injury. The second experiment was designed to assess the impact of mTBI on postural control. To do so, we measured the amplitude of postural sway in the antero-posterior axis and postural instability (by means of the root mean square (RMS) velocity of postural sway) while subjects stood with feet together on a firm surface in five different conditions: eyes closed, and inside a three-dimensional virtual tunnel that was either static or oscillating sinusoidally in the antero-posterior direction at three different speeds. Balance measures derived from clinical tests, the Bruininks-Oseretsky Test of Motor Proficiency, 2nd edition (BOT-2) and the Balance Error Scoring System (BESS), were also used.
Participants diagnosed with mTBI showed greater postural instability (greater RMS velocity of postural sway) than control participants 2 weeks and 3 months after the injury, across all conditions. These balance deficits secondary to mTBI were no longer present one year after the injury. The results also suggest that the deficits affecting visual integration processes highlighted in the first experiment may have contributed to the balance deficits secondary to mTBI. The amplitude of postural sway in the antero-posterior axis, as well as the measures derived from the clinical balance tests (BOT-2 and BESS), did not prove to be sensitive measures for quantifying the postural deficit in mTBI subjects. Associating RT measures with the perception of specific stimulus properties proved to be both a measurement method particularly sensitive to the visuomotor anomalies secondary to mTBI and a precise tool for investigating the mechanisms underlying these anomalies, which arise when the brain is exposed to a mild trauma. Likewise, the postural instability measures proved sensitive enough to quantify balance deficits secondary to mTBI. The development of screening tests based on these results, intended for the assessment of mTBI from its earliest stages, therefore appears particularly worthwhile. It also seems essential to examine the relationships between such deficits and the performance of activities of daily living, such as academic, professional or sporting activities, in order to determine the functional impacts that these visuomotor and balance-control deficits may have.
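
A minimal sketch (not the study's analysis code; the data below are simulated placeholders) of the two families of measures described above: dispersion statistics of correct-response reaction times, and the root-mean-square velocity of antero-posterior postural sway.

import numpy as np

rng = np.random.default_rng(1)

# --- Reaction-time dispersion statistics (correct responses only) ------------
rt_ms = rng.lognormal(mean=6.2, sigma=0.25, size=300)     # simulated RTs (ms)
q25, q75 = np.percentile(rt_ms, [25, 75])
print(f"RT: mean={rt_ms.mean():.0f} ms  median={np.median(rt_ms):.0f} ms  "
      f"SD={rt_ms.std(ddof=1):.0f} ms  IQR={q75 - q25:.0f} ms")

# --- RMS velocity of antero-posterior postural sway ---------------------------
fs = 100.0                                                # sampling rate (Hz)
t = np.arange(0.0, 30.0, 1.0 / fs)
sway_ap = 0.01 * np.cumsum(rng.normal(size=t.size)) / fs  # simulated AP sway (m)
velocity = np.gradient(sway_ap, 1.0 / fs)                 # numerical derivative (m/s)
print(f"sway RMS velocity = {np.sqrt(np.mean(velocity**2)) * 1000:.1f} mm/s")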

Relevance: 90.00%

Abstract:

This work deals with the random free vibration of functionally graded laminates with general boundary conditions and subjected to a temperature change, taking into account the randomness in a number of independent input variables such as Young's modulus, Poisson's ratio and thermal expansion coefficient of each constituent material. Based on third-order shear deformation theory, the mixed-type formulation and a semi-analytical approach are employed to derive the standard eigenvalue problem in terms of deflection, mid-plane rotations and stress function. A mean-centered first-order perturbation technique is adopted to obtain the second-order statistics of vibration frequencies. A detailed parametric study is conducted, and extensive numerical results are presented in both tabular and graphical forms for laminated plates that contain functionally graded material made of aluminum and zirconia, showing the effects of scattering in thermo-elastic material constants, temperature change, edge support condition, side-to-thickness ratio, and plate aspect ratio on the stochastic characteristics of natural frequencies. (c) 2005 Elsevier B.V. All rights reserved.
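
The phrase "mean-centered first-order perturbation technique ... to obtain the second-order statistics" can be unpacked as follows (standard formulation, not reproduced from the paper): each random frequency is expanded to first order about the mean values of the input variables, which immediately yields its mean and variance.

\[
  \omega(\mathbf{x}) \approx \omega(\bar{\mathbf{x}})
  + \sum_{i}\left.\frac{\partial\omega}{\partial x_i}\right|_{\bar{\mathbf{x}}}
    (x_i-\bar{x}_i),
  \qquad
  \mathrm{E}[\omega] \approx \omega(\bar{\mathbf{x}}),
  \qquad
  \mathrm{Var}[\omega] \approx \sum_{i,j}
  \left.\frac{\partial\omega}{\partial x_i}\right|_{\bar{\mathbf{x}}}
  \left.\frac{\partial\omega}{\partial x_j}\right|_{\bar{\mathbf{x}}}
  \mathrm{Cov}(x_i,x_j).
\]

Because the inputs here (Young's modulus, Poisson's ratio and thermal expansion coefficient of each constituent) are taken as independent, the covariance matrix is diagonal and the variance reduces to a sum of squared sensitivities times input variances.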

Relevance: 90.00%

Abstract:

In this paper, we consider analytical and numerical solutions to the Dirichlet boundary-value problem for the biharmonic partial differential equation on a disc of finite radius in the plane. The physical interpretation of these solutions is that of the harmonic oscillations of a thin, clamped plate. For the linear, fourth-order, biharmonic partial differential equation in the plane, it is well known that the method of separation in polar coordinates is not possible, in general. However, in this paper, for circular domains in the plane, it is shown that a method, here called quasi-separation of variables, does lead to solutions of the partial differential equation. These solutions are products of solutions of two ordinary linear differential equations: a fourth-order radial equation and a second-order angular differential equation. As is to be expected, without complete separation of the polar variables, there is some restriction on the range of these solutions in comparison with the corresponding separated solutions of the second-order harmonic differential equation in the plane. Notwithstanding these restrictions, the quasi-separation method leads to solutions of the Dirichlet boundary-value problem on a disc with centre at the origin, with boundary conditions determined by the solution and its inward-drawn normal derivative taking the value 0 on the edge of the disc. One significant feature of these biharmonic boundary-value problems, in general, follows from the form of the biharmonic differential expression when represented in polar coordinates. In this form, the differential expression has a singularity at the origin in the radial variable. This singularity translates to a singularity at the origin of the fourth-order radial separated equation; this singularity necessitates the application of a third boundary condition in order to determine a self-adjoint solution to the Dirichlet boundary-value problem. The penultimate section of the paper reports on numerical solutions to the Dirichlet boundary-value problem; these results are also presented graphically. Two specific cases are studied in detail, and numerical values of the eigenvalues are compared with the results obtained in earlier studies.
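
In standard polar-coordinate notation (the paper's quasi-separated solutions are of this product type, though the details of the derivation and the range restrictions are the paper's own), the clamped-plate eigenvalue problem and the two ordinary differential equations referred to above can be written as

\[
  \Delta^{2} w = \Lambda\, w \quad\text{on } 0 < r < a,
  \qquad
  \Delta = \frac{\partial^{2}}{\partial r^{2}}
         + \frac{1}{r}\frac{\partial}{\partial r}
         + \frac{1}{r^{2}}\frac{\partial^{2}}{\partial\theta^{2}},
\]
\[
  w(r,\theta) = R(r)\,\Theta(\theta),
  \qquad
  \Theta''(\theta) + m^{2}\,\Theta(\theta) = 0,
  \qquad
  \left(\frac{d^{2}}{dr^{2}} + \frac{1}{r}\frac{d}{dr}
        - \frac{m^{2}}{r^{2}}\right)^{\!2} R(r) = \Lambda\, R(r),
\]

with the clamped-plate conditions R(a) = R'(a) = 0 at the edge and, as noted above, a third condition at the singular endpoint r = 0 needed to single out the self-adjoint problem.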

Relevance: 90.00%

Abstract:

Experimental investigations of 10×118 Gbit/s DP-QPSK WDM transmission using three types of distributed Raman amplification techniques are presented. Novel ultra-long Raman fibre laser (URFL) based amplification with second-order counter-propagated pumping is compared with conventional first-order and dual-order counter-pumped Raman amplification. We demonstrate that URFL-based amplification can extend the transmission reach to 7520 km, compared with 5010 km and 6180 km using first-order and dual-order Raman amplification, respectively. © 2014 IEEE.

Relevance: 90.00%

Abstract:

Mathematics Subject Classification: 33C10, 33D60, 26D15, 33D05, 33D15, 33D90

Relevance: 90.00%

Abstract:

We investigate numerically the effect of ultralong Raman laser fiber amplifier design parameters, such as span length, pumping distribution and grating reflectivity, on the RIN transfer from the pump to the transmitted signal. A comparison is provided with the performance of traditional second-order Raman amplified schemes, showing a relative performance penalty for ultralong laser systems that gets smaller as span length increases. We show that careful choice of system parameters can partially offset this penalty. © 2010 Optical Society of America.

Relevance: 90.00%

Abstract:

The presence of heavy metals, organic contaminants and natural toxins in natural water bodies poses a serious threat to the environment and the health of living organisms. There is therefore a critical need to identify sustainable and environmentally friendly water treatment processes. In this dissertation, I focus on fundamental studies of advanced oxidation processes and magnetic nanomaterials as promising new technologies for water treatment. Advanced oxidation processes employ reactive oxygen species (ROS), which can lead to the mineralization of a number of pollutants and toxins. The rates of formation, steady-state concentrations, and kinetic parameters of hydroxyl radical and singlet oxygen produced by various TiO2 photocatalysts under UV or visible irradiation were measured using selective chemical probes. Hydroxyl radical is the dominant ROS, and its generation depends on the experimental conditions. The optimal conditions for generation of hydroxyl radical by TiO2-coated glass microspheres were studied by response surface methodology and applied to the degradation of dimethyl phthalate. Singlet oxygen (¹O₂) also plays an important role in advanced oxidation processes, so the degradation of microcystin-LR (MC-LR) by rose bengal, a ¹O₂ sensitizer, was studied. The measured bimolecular reaction rate constant between MC-LR and ¹O₂ is ∼10⁶ M⁻¹ s⁻¹, based on competition kinetics with furfuryl alcohol. A typical adsorbent needs to be separated after treatment, whereas magnetic iron oxides can be easily removed with a magnetic field. Maghemite and humic acid coated magnetite (HA-Fe3O4) were synthesized, characterized and applied for chromium(VI) removal. The adsorption of chromium(VI) by maghemite and HA-Fe3O4 follows a pseudo-second-order kinetic process. The adsorption of chromium(VI) by maghemite is accurately modeled using adsorption isotherms, and solution pH and the presence of humic acid influence adsorption. Humic acid coated magnetite can adsorb and reduce chromium(VI) to non-toxic chromium(III), and the reaction is not highly dependent on solution pH. The functional groups associated with humic acid act as ligands, leading to the Cr(III) complex via a coupled reduction-complexation mechanism. Extended X-ray absorption fine structure spectroscopy demonstrates that the Cr(III) in the Cr-loaded HA-Fe3O4 materials has six neighboring oxygen atoms in an octahedral geometry with average bond lengths of 1.98 Å.
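
The pseudo-second-order kinetic model mentioned above has a standard linearized form that makes fitting straightforward; the sketch below (with made-up uptake data, not the dissertation's measurements) recovers the equilibrium uptake qe and the rate constant k2 from a straight-line fit.

import numpy as np

# Pseudo-second-order kinetics: dq/dt = k2 * (qe - q)**2, whose linearized form
#   t / q(t) = 1 / (k2 * qe**2) + t / qe
# lets qe and k2 be read off a straight-line fit of t/q versus t.
t = np.array([5.0, 10, 20, 40, 60, 90, 120])              # contact time (min), made up
q = np.array([8.1, 11.7, 15.0, 17.5, 18.6, 19.3, 19.7])   # Cr(VI) uptake (mg/g), made up

slope, intercept = np.polyfit(t, t / q, 1)
qe = 1.0 / slope                       # equilibrium uptake (mg/g)
k2 = 1.0 / (intercept * qe**2)         # rate constant (g mg^-1 min^-1)
print(f"qe ≈ {qe:.1f} mg/g,  k2 ≈ {k2:.4f} g mg^-1 min^-1")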

Relevance: 90.00%

Abstract:

Among the different possible amplification solutions offered by Raman scattering in optical fibers, ultra-long Raman lasers are particularly promising, as they can provide quasi-lossless second-order amplification with reduced complexity, displaying excellent potential in the design of low-noise long-distance communication systems. Still, some of their advantages can be partially offset by the transfer of relative intensity noise (RIN) from the pump sources and the cavity-generated Stokes wave to the transmitted signal. In this paper we study the effect of ultra-long cavity design (length, pumping, grating reflectivity) on the transfer of RIN to the signal, demonstrating how the impact of noise can be greatly reduced by carefully choosing appropriate cavity parameters depending on the intended application of the system. © 2010 Copyright SPIE - The International Society for Optical Engineering.

Relevance: 90.00%

Abstract:

Under contact metamorphic conditions, carbonate rocks in the direct vicinity of the Adamello pluton reflect temperature-induced grain coarsening. Despite this large-scale trend, a considerable grain size scatter occurs on the outcrop scale, indicating local influence of second-order effects such as thermal perturbations, fluid flow and second-phase particles. Second-phase particles, whose sizes range from the nano- to the micron-scale, induce the most pronounced data scatter, resulting in grain sizes too small by up to a factor of 10 compared with theoretical grain growth in a pure system. Such values are restricted to relatively impure samples consisting of up to 10 vol.% micron-scale second-phase particles, or to samples containing a large number of nano-scale particles. The obtained data set suggests that the second phases induce a temperature-controlled reduction in calcite grain growth. The mean calcite grain size can therefore be expressed in the form D = C2 · exp(Q*/RT) · (dp/fp)^m*, where C2 is a constant, Q* is an activation energy, T is the temperature and m* is the exponent of the ratio dp/fp, i.e. of the average size of the second phases divided by their volume fraction. However, more data are needed to obtain reliable values for C2 and Q*. Besides variations in the average grain size, the presence of second-phase particles generates crystal size distribution (CSD) shapes characterized by lognormal distributions, which differ from the Gaussian-type distributions of the pure samples. In contrast, fluid-enhanced grain growth does not change the shape of the CSDs, but due to enhanced transport properties, the average grain sizes increase by a factor of 2 and the variance of the distribution increases. Stable δ18O and δ13C isotope ratios in fluid-affected zones deviate only slightly from the host rock values, suggesting low fluid/rock ratios. Grain growth modelling indicates that the fluid-induced grain size variations can develop within several ka. As inferred from a combination of thermal and grain growth modelling, dykes with widths of up to 1 m have only a restricted influence, with grain size deviations smaller than a factor of 1.1. To summarize, considerable grain size variations of up to one order of magnitude can locally result from second-order effects. Such effects require special attention when comparing experimentally derived grain growth kinetics with field studies.

Relevance: 90.00%

Abstract:

This research concerns the production of recombinant Trichoderma reesei endoglucanase Cel7B using Kluyveromyces lactis transformed with chromosomally integrated Cel7B cDNA as the host cell (K. lactis Cel7B). Cel7B is one of the glycoside hydrolase family proteins produced by T. reesei. Cel7B, together with other endoglucanases, exoglucanases, and β-glucosidases, hydrolyzes cellulose to glucose, which can then be fermented to biofuels or other value-added products. The research objective of this MS project is to examine favorable fermentation conditions for recombinant Cel7B production and improved activity. Production of the enzyme on different types of media was examined, and its activity was measured using several tools and procedures. The first condition tested was the initial galactose concentration; galactose serves as a carbon and energy source and also acts as a potent inducer of recombinant Cel7B expression in K. lactis Cel7B. The purpose of this test was to determine the relationship between enzyme production and sugar concentration. The second condition tested was the type of medium: a complex medium of yeast extract, peptone and galactose (YPGal); a minimal medium of yeast nitrogen base (YNB) with galactose; and a supplemented minimal medium of yeast nitrogen base with casamino acids (YBC), a nitrogen source, with galactose. The third condition was the type of reactor or fermenter: a small reactor (shake flask) and a larger automated bioreactor (BioFlo 3000 fermenter). The purpose of this test was to determine the quantity of protein produced in different production environments. Different tools were used to determine the presence and activity of the Cel7B enzyme. To confirm the presence of the enzyme, sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) was used. To detect enzyme activity, the carboxymethyl cellulose-3,5-dinitrosalicylic acid (CMC-DNS) assay was employed. SDS-PAGE showed an enzyme band at 67 kDa, which is larger than native Cel7B (52 kDa), likely due to over-glycosylation during post-translational processing in K. lactis. Among the different media used for fermentation, recombinant Cel7B was produced in yeast extract peptone galactose (YPGal) and in yeast nitrogen base with casamino acids (YBC), but no enzyme or activity was detected in yeast nitrogen base (YNB) alone. These experiments indicate that Cel7B production requires amino acid sources as part of the fermentation medium. In experiments where net recombinant Cel7B activity was measured at a 1% initial galactose concentration in YPGal and YBC media, higher enzyme activity was detected for the complex YPGal medium. Higher recombinant Cel7B activity was detected for flask culture in 2% galactose compared with 1% galactose in YBC medium. Two bioreactor experiments were conducted under these culture conditions at 30°C, pH 7.0, dissolved oxygen at 50% of saturation, and 250 rpm agitation (variable depending on DO control). K. lactis Cel7B growth curves were quite reproducible, with a maximum optical density (OD) at 600 nm of between 7 and 8 (accounting for a 10:1 dilution). Galactose was consumed rapidly during the first 15 hours of bioreactor culture, and recombinant Cel7B started to appear in the culture at 10-15 hours and increased thereafter, up to a maximum of between 0.9 and 1.6 mg/mL/hr in these experiments.
These bioreactor enzyme activities are much higher than those obtained in comparable flask-scale culture experiments (0.5 mg/mL/hr). Based on this research, the highest recombinant Cel7B activity from batch culture of K. lactis Cel7B is obtained with a complex medium, a 2% initial galactose concentration, and an automated bioreactor in which good control of temperature, pH, and dissolved oxygen can be achieved.
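
For readers unfamiliar with the CMC-DNS assay mentioned above, here is a hedged sketch of how an activity in mg reducing sugar/mL/hr is typically obtained from absorbance readings via a glucose standard curve; all numbers are hypothetical placeholders, not data from this work.

import numpy as np

# Glucose standard curve for the DNS reaction: absorbance at 540 nm versus
# known reducing-sugar concentrations (all values are hypothetical).
std_glucose = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # mg/mL
std_a540 = np.array([0.02, 0.21, 0.43, 0.62, 0.84])     # absorbance
slope, intercept = np.polyfit(std_glucose, std_a540, 1)

# A CMC reaction incubated with culture supernatant, then developed with DNS.
sample_a540 = 0.35        # hypothetical absorbance of the reaction mixture
blank_a540 = 0.03         # enzyme-free blank
incubation_hr = 0.5       # incubation time of the CMC hydrolysis (hours)

net_a540 = sample_a540 - blank_a540            # background-corrected absorbance
reducing_sugar = net_a540 / slope              # mg/mL glucose equivalents released
print(f"volumetric activity ≈ {reducing_sugar / incubation_hr:.2f} mg/mL/hr")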