941 results for generalized second order conditions


Relevance:

100.00%

Publisher:

Abstract:

The implementation of boundary conditions is one of the points where the SPH methodology still has some work to do. The aim of the present work is to provide an in-depth analysis of the most representative mirroring techniques used in SPH to enforce boundary conditions (BC) along solid profiles. We specifically refer to dummy particles, ghost particles, and Takeda et al. [1] boundary integrals. A Poiseuille flow has been used as an example to gradually evaluate the accuracy of the different implementations. Our goal is to test the behavior of the second-order differential operator with the proposed boundary extensions when the smoothing length h and other discretization parameters such as dx/h tend simultaneously to zero. First, using a smoothed continuous approximation of the unidirectional Poiseuille problem, the evolution of the velocity profile has been studied, focusing on the values of the velocity and the viscous shear at the boundaries, where the exact solution should be recovered as h decreases. Second, to evaluate the impact of the discretization of the problem, an Eulerian SPH discrete version of the former problem has been implemented and similar results have been monitored. Finally, for the sake of completeness, a 2D Lagrangian SPH implementation of the problem has also been studied to compare the consequences of the particle movement.
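
As an illustration of the kind of test described above (not the implementations analyzed in the paper), the following Python sketch evaluates a Brookshaw-type SPH approximation of the second-order operator d²u/dx² on a quadratic, Poiseuille-like profile, enforcing the no-slip wall at x = 0 with antisymmetric ghost particles; the particle number, smoothing length and kernel choice are illustrative assumptions.

```python
import numpy as np

# Illustrative 1D sketch (not the paper's code): SPH estimate of d2u/dx2 for a
# Poiseuille-like profile u(x) = x*(L - x), whose exact second derivative is -2,
# with the wall at x = 0 handled by antisymmetric ghost particles (no-slip).

def dw_dr(r, h):
    """Derivative dW/dr of the 1D cubic-spline kernel (support 2h)."""
    q = r / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (-3.0 * q + 2.25 * q**2) / h
    elif q < 2.0:
        return sigma * (-0.75 * (2.0 - q)**2) / h
    return 0.0

L, N = 1.0, 200                       # channel width and particle number (illustrative)
dx = L / N
h = 1.3 * dx                          # smoothing length; dx/h is the ratio studied in the paper
x = (np.arange(N) + 0.5) * dx
u = x * (L - x)                       # quadratic profile, d2u/dx2 = -2 everywhere
vol = dx                              # particle "volume" m_j / rho_j in 1D

# Antisymmetric ghost particles mirrored across the wall at x = 0 (u_wall = 0)
mask = x < 2.0 * h
x_all = np.concatenate([x, -x[mask]])
u_all = np.concatenate([u, -u[mask]])

# Brookshaw-type second-derivative operator: sum_j 2 V_j (u_i - u_j) W'(r_ij) / r_ij
for i, label in [(N // 2, "channel centre"), (0, "particle nearest the wall")]:
    r = np.abs(x_all - x[i])
    nz = r > 1e-12
    kernel = np.array([dw_dr(rr, h) for rr in r[nz]])
    lap = np.sum(2.0 * vol * (u[i] - u_all[nz]) * kernel / r[nz])
    # The near-wall value illustrates the boundary-induced error that the paper quantifies.
    print(f"{label}: SPH d2u/dx2 = {lap:+.3f}   (exact -2.0)")
```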

Relevance:

100.00%

Publisher:

Abstract:

The Internal Structure of Hydrogen-Air Diffusion Flames. The purpose of this paper is to study finite-rate chemistry effects in diffusion-controlled hydrogen-air flames under conditions appearing in some cases in a supersonic combustor. Since for large reaction rates the flame is close to chemical equilibrium, the reaction takes place in a very thin region, so that a "singular perturbation" treatment of the problem seems appropriate. It has been shown previously that, within the inner or reaction zone, convection effects may be neglected, the temperature is constant across the flame, and the mass fraction distributions are given by ordinary differential equations, where the only independent variable involved is the coordinate normal to the flame surface. The solution of the outer problem, which is a pure mixing problem with the additional condition that fuel and oxidizer do not coexist in any zone, provides the following information: the flame position, rates of fuel consumption, temperature, concentrations of species, fluid velocity outside of the flame, and the boundary conditions required to solve the "inner problem." The main contribution of this paper consists in the introduction of a fairly complicated chemical kinetic scheme representing the hydrogen-oxygen reaction. The nonlinear equations expressing the conservation of chemical species are approximately integrated by means of an integral method. It has been found that, in the case considered of a near-equilibrium diffusion flame, the role played by the dissociation-recombination reactions is purely marginal, and that some of the second-order "shuffling" reactions are close to equilibrium. The method shown here may be applied to compute the distance from the injector corresponding to a given separation from equilibrium, say ten to twenty percent. For the cases where this length is a small fraction of the combustion-zone length, the equilibrium treatment properly describes the flame behavior.
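
For context, the "outer problem" described above is the classical Burke-Schumann (flame-sheet) mixing limit; in standard Shvab-Zeldovich notation (assumed here, not taken from the paper, with equal diffusivities and s the stoichiometric oxidizer-to-fuel mass ratio) it reads

\[
\beta \equiv s\,Y_{F}-Y_{O},\qquad
\rho\,\mathbf{v}\cdot\nabla\beta-\nabla\!\cdot\!\left(\rho D\,\nabla\beta\right)=0,
\]
\[
Y_{F}\,Y_{O}=0\ \text{(fuel and oxidizer do not coexist)},\qquad
\text{flame sheet where }\beta=0,\ \text{i.e. } Y_{F}=Y_{O}=0,
\]

and the solution of this chemistry-free problem supplies the flame position and the boundary conditions for the inner (reaction-zone) problem, as stated in the abstract.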

Relevance:

100.00%

Publisher:

Abstract:

Surface-tension-induced convection in a liquid bridge held between two parallel, coaxial, solid disks is considered. The surface tension gradient is produced by a small temperature gradient parallel to the undisturbed surface. The study is performed by using a mathematical regular perturbation approach based on a small parameter, ε, which measures the deviation of the imposed temperature field from its mean value. The first-order velocity field is given by a Stokes-type problem (viscous terms are dominant) with relatively simple boundary conditions. The first-order temperature field is that imposed from the end disks on a liquid bridge immersed in a non-conductive fluid; radiative effects are assumed to be negligible. The second-order temperature field, which accounts for convective effects, is split into three components: one due to the bulk motion and the other two due to the distortion of the free surface. The relative importance of these components in terms of the heat transfer to or from the end disks is assessed.
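
Stated compactly (a sketch in standard perturbation notation, which need not match the paper's exact scaling), the expansion and the first-order thermocapillary driving condition read

\[
(\mathbf{v},p,T)=(\mathbf{0},p_{0},T_{0})+\varepsilon\,(\mathbf{v}_{1},p_{1},T_{1})+\varepsilon^{2}(\mathbf{v}_{2},p_{2},T_{2})+O(\varepsilon^{3}),
\qquad
\mu\,\frac{\partial v_{1,t}}{\partial n}=\frac{d\sigma}{dT}\,\frac{\partial T_{1}}{\partial s}\ \ \text{on the free surface},
\]

with the first-order velocity obeying a Stokes problem and the second-order temperature collecting the convective corrections described above.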

Relevance:

100.00%

Publisher:

Abstract:

This dissertation is concerned with the formulation, analysis and implementation of structure-preserving time integration methods for the solution of the initial(-boundary) value problems describing the dynamics of smooth dissipative systems, either finite- or infinite-dimensional. Such systems are understood as those involving thermo-mechanical coupling and/or internal dissipative effects modeled by internal state variables, considered smooth in the sense that their evolution follows continuous laws. The dynamics of such systems are governed by the laws of thermodynamics and by symmetries, which together constitute the structure meant to be preserved in the numerical setting. To that end, dissipative systems are described geometrically by metriplectic structures, which clearly identify the reversible and irreversible parts of their dynamical evolution. In particular, the framework known by the acronym GENERIC is used to reveal the systems' dissipative structure in the same way the Hamiltonian reveals that of conservative systems. On this basis, energy-preserving, entropy-producing and momentum-preserving (EEM) second-order accurate methods are formulated using the discrete derivative operator that enabled the formulation of Energy-Momentum methods, which ensure the preservation of the Hamiltonian and of the symmetries of conservative systems. Following these guidelines, two kinds of EEM methods are formulated, using either the entropy or the temperature as the thermodynamic state variable, a choice with important implications discussed throughout the dissertation; notably, the temperature-based formulation accommodates Dirichlet boundary conditions naturally. The EEM methods are finally validated and shown to exhibit enhanced numerical stability and robustness compared to standard methods.
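
For reference, the GENERIC structure invoked above can be summarized as follows (standard form of the framework; the notation is generic and not necessarily that of the dissertation):

\[
\dot{\mathbf z}= \mathbf L(\mathbf z)\,\nabla E(\mathbf z)+\mathbf M(\mathbf z)\,\nabla S(\mathbf z),
\qquad \mathbf L=-\mathbf L^{\mathsf T},\quad \mathbf M=\mathbf M^{\mathsf T}\succeq 0,
\]
\[
\mathbf L\,\nabla S=\mathbf 0,\qquad \mathbf M\,\nabla E=\mathbf 0
\quad\Longrightarrow\quad
\dot E=\nabla E\cdot\dot{\mathbf z}=0,\qquad
\dot S=\nabla S\cdot\mathbf M\,\nabla S\ge 0 .
\]

EEM integrators reproduce these two statements in the discrete setting by replacing the exact gradients of E and S with the discrete derivative operator mentioned above.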

Relevance:

100.00%

Publisher:

Abstract:

Deterministic Safety Analysis (DSA) is the procedure used in the design of safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes and define the regulatory acceptance criteria (RAC) that those magnitudes must fulfill.

Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies make use of pessimistic models and assumptions and are relatively simple; they do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions, and need to be supplemented with uncertainty analyses of their results. They are also termed BEPU ("Best Estimate Plus Uncertainty") methodologies, and are typically based on a probabilistic representation of the uncertainty.

For conservative methodologies, the RAC are simply the restriction of calculated values of safety magnitudes to "acceptance regions" defined on their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain. In the present thesis, the inclusion of uncertainty in the RAC is studied. Basically, the restriction to the acceptance region must be fulfilled with a high level of certainty; specifically, a high probability of fulfillment is required. The calculation uncertainty of the magnitudes is considered as propagated from the inputs through the predictive model. Uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which store the uncertainty due to model imperfection. Fulfillment of the RAC is therefore required with a probability not less than a value P0 close to 1 and defined by the regulator (the probability or coverage level).

Calculation uncertainty is not the only uncertainty involved. Even if a model (i.e. its basic equations) is perfectly known, the input-output mapping it produces is imperfectly known (unless the model is very simple). This ignorance is called epistemic uncertainty and is associated with the propagation process; in fact, it propagates to the probability of fulfilling the RAC. Another term used in the thesis for this epistemic uncertainty is metauncertainty.

The RAC must therefore account for two types of uncertainty: that of the calculation of the magnitude (aleatory uncertainty) and that of the calculation of the probability (epistemic uncertainty). The two uncertainties can be taken into account separately or combined; in either case the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. In the first case, the regulator must define a level of fulfillment for the epistemic uncertainty, termed the regulatory confidence level, as a value close to 1. The pair of regulatory levels (probability and confidence) is termed the regulatory tolerance level. The thesis concludes that the adequate way of setting the BEPU RAC is by separating the uncertainties, for two reasons: experts recommend the separate treatment of aleatory and epistemic uncertainty, and the separated RAC is in general more conservative than the combined one.

The BEPU RAC is a hypothesis on a probability distribution and must be tested statistically. The thesis classifies the statistical methods used to verify RAC fulfillment into three categories: methods based on tolerance regions, on quantile estimators, and on estimators of probability (of success or of failure, i.e. of exceeding regulatory limits). The first two have been termed Q-methods, whereas those in the third category are termed P-methods. The purpose of this categorization is not to make an exhaustive survey of the very numerous existing methods, but to relate the three categories and examine the most used methods and those best regarded from a regulatory standpoint. Special mention is made of the most used method, due to Wilks, and of its extension to multidimensional variables, due to Wald. The P-method counterpart of Wilks' method is the Clopper-Pearson interval, typically ignored in the BEPU realm.

The problem of the computational cost of an uncertainty analysis is also tackled. Wilks', Wald's and Clopper-Pearson's methods require a minimum sample size, which is a growing function of the tolerance level. The sample size is an indicator of the computational cost, because each element of the sample is a value of the safety magnitude that must be calculated with the predictive models (codes). When the RAC is a multiple criterion, the safety magnitude becomes multidimensional. When all its components are outputs of the same calculation, the multidimensional character does not introduce any additional computational cost. In this way, an idea widespread in the BEPU realm, namely that the multidimensional problem can only be tackled with the Wald extension (whose cost grows with the dimension of the problem), is proven to be false. When the components of the magnitude are calculated independently of one another, the influence of the problem dimension on the cost cannot be avoided.

The earliest BEPU methodologies performed the uncertainty propagation through a surrogate model of the code, also termed an emulator or metamodel. The goal of a metamodel is not predictive capability, which is clearly inferior to that of the original code, but the capacity to replace the code exclusively in the propagation of uncertainties. The emulator must contain the input parameters contributing the most to the output uncertainty, and this requires a previous importance or sensitivity analysis. The surrogate model is practically inexpensive to run, so it can be exhaustively analyzed, for example through random sampling. Therefore, the epistemic uncertainty due to sampling is reduced to almost zero, and the BEPU criterion for metamodels reduces to a simple probability.

In short, the regulatory authority will tend to accept the statistical methods that need the fewest assumptions: exact rather than approximate, nonparametric rather than parametric, and frequentist rather than Bayesian.

The BEPU RAC is based on a second-order probability. The probability of the safety magnitudes lying inside the acceptance region is a success probability and can be interpreted as a degree of fulfillment of the RAC. Furthermore, it has a metric interpretation: it represents a distance (in the range of the magnitudes) from the calculated values of the magnitudes to the regulatory acceptance limits. This interpretation motivates a definition proposed in the thesis: the probabilistic safety margin (SM). Given a scalar safety magnitude with an upper acceptance limit, the margin from a value A to another value B is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is nondimensional, it ranges in the interval (0,1), it combines according to the laws of probability, and it is easily generalized to multiple dimensions. A basic property is that it is not symmetric. The term safety margin can be applied to several situations: the distance from a calculated value to a regulatory limit (licensing margin); from the real value of a magnitude to its calculated value (analytical margin); and from a regulatory limit to the damage threshold of a barrier (barrier margin).

This representation of distances (in the range of the safety magnitudes) as probabilities can be applied to the study of conservatism. The analytical margin can be interpreted as the degree of conservatism (DG) of the computational methodology. Using probability, the conservatism of tolerance limits of a magnitude can be quantified, and conservatism indicators can be established to compare different methods of constructing tolerance limits and regions.

One topic that has not been rigorously tackled to date is the validation of BEPU methodologies. As with any other computational tool, a methodology must be validated before being applied in licensing analyses, on the basis of comparisons between its predictions and real values of the safety magnitudes. Such comparisons can only be made for accident scenarios with measured values of the safety magnitudes, which are obtained, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that the real values of the safety magnitudes (and not only the calculated ones) fulfill them. The thesis proves that a sufficient condition for this goal is the conjunction of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation. This last criterion must be demonstrated in experimental scenarios and extrapolated to NPPs. The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DG). These minimum values are basically complementary: the higher one of them, the lower the other. Current regulatory practice sets a high value on the licensing margin, so that the required DG is low. Adopting lower values for P0 would imply a weaker requirement on RAC fulfillment and, on the other hand, a stronger requirement on the conservatism of the methodology. It is important to highlight that a higher minimum value of the licensing or analytical margin requires a higher computational cost to demonstrate it, so the computational efforts are also complementary. If medium levels are adopted, the required DG is also medium, the methodology does not need to be very conservative, and the total computational effort (licensing plus validation) can be optimized.
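
To make the sample-size discussion concrete, here is a short Python sketch (illustrative, not taken from the thesis) of the first-order, one-sided Wilks formula and of its P-method counterpart, the one-sided Clopper-Pearson bound; the 95%/95% tolerance level is the customary regulatory choice, used here only as an example.

```python
import math
from scipy.stats import beta

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n such that the largest of n code runs is a one-sided upper
    tolerance limit with the given coverage and confidence (first-order Wilks):
    1 - coverage**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

def clopper_pearson_lower(k, n, confidence=0.95):
    """One-sided lower Clopper-Pearson confidence bound on the probability of
    'success' (e.g. fulfilling the acceptance criterion) after k successes in n runs."""
    if k == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, k, n - k + 1)

n = wilks_sample_size(0.95, 0.95)
print(f"Wilks (95%/95%, first order): n = {n} code runs")          # the classical 59
print(f"Clopper-Pearson lower bound with {n}/{n} successes: "
      f"{clopper_pearson_lower(n, n):.3f}")
```

With 59 runs and no failures, the Clopper-Pearson lower bound on the fulfillment probability is about 0.95 at 95% confidence, consistent with the Wilks result, which illustrates the link between the Q- and P-method viewpoints discussed above.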

Relevance:

100.00%

Publisher:

Abstract:

A first-order Lagrangian L∇ variationally equivalent to the second-order Einstein-Hilbert Lagrangian is introduced. Such a Lagrangian depends on a symmetric linear connection, but the dependence is covariant under diffeomorphisms. The variational problem defined by L∇ is proved to be regular, and its Hamiltonian formulation is studied, including its covariant Hamiltonian attached to ∇.
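
For context (a standard fact recalled here under the usual sign conventions, and not the L∇ of the paper): the classical first-order "ΓΓ" Lagrangian also differs from the Einstein-Hilbert one only by a total divergence, but, unlike L∇, it is not covariant by itself:

\[
L_{\Gamma\Gamma}=\sqrt{-g}\,g^{\mu\nu}\!\left(\Gamma^{\lambda}_{\mu\sigma}\Gamma^{\sigma}_{\nu\lambda}-\Gamma^{\lambda}_{\mu\nu}\Gamma^{\sigma}_{\lambda\sigma}\right),
\qquad
\sqrt{-g}\,R=L_{\Gamma\Gamma}+\partial_{\lambda}\!\left(\sqrt{-g}\,w^{\lambda}\right),
\]

for a suitable vector density \(\sqrt{-g}\,w^{\lambda}\) built from the metric and its first derivatives.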

Relevance:

100.00%

Publisher:

Abstract:

Averaged event-related potential (ERP) data recorded from the human scalp reveal electroencephalographic (EEG) activity that is reliably time-locked and phase-locked to experimental events. We report here the application of a method based on information theory that decomposes one or more ERPs recorded at multiple scalp sensors into a sum of components with fixed scalp distributions and sparsely activated, maximally independent time courses. Independent component analysis (ICA) decomposes ERP data into a number of components equal to the number of sensors. The derived components have distinct but not necessarily orthogonal scalp projections. Unlike dipole-fitting methods, the algorithm does not model the locations of their generators in the head. Unlike methods that remove second-order correlations, such as principal component analysis (PCA), ICA also minimizes higher-order dependencies. Applied to detected—and undetected—target ERPs from an auditory vigilance experiment, the algorithm derived ten components that decomposed each of the major response peaks into one or more ICA components with relatively simple scalp distributions. Three of these components were active only when the subject detected the targets, three other components only when the target went undetected, and one in both cases. Three additional components accounted for the steady-state brain response to a 39-Hz background click train. Major features of the decomposition proved robust across sessions and changes in sensor number and placement. This method of ERP analysis can be used to compare responses from multiple stimuli, task conditions, and subject states.
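
As an illustration of the decomposition described above, the following Python sketch applies ICA to a synthetic sensors-by-time array; the original study used an infomax ICA algorithm, whereas scikit-learn's FastICA is used here as a stand-in, and the channel count, time points and synthetic sources are all placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_channels, n_times = 31, 600

# Placeholder data: sparse-ish independent "component" time courses mixed into
# channels, standing in for averaged ERPs (channels x time points).
sources = rng.laplace(size=(n_channels, n_times))
mixing = rng.standard_normal((n_channels, n_channels))
erp = mixing @ sources

# ICA returns as many components as sensors; each component has a fixed scalp
# map (a column of the mixing matrix) and a maximally independent time course.
ica = FastICA(n_components=n_channels, random_state=0)
time_courses = ica.fit_transform(erp.T)   # (times x components) activations
scalp_maps = ica.mixing_                  # (channels x components) projections

# The data are reconstructed as the sum of the components: X ~ S A^T + mean.
recon = time_courses @ scalp_maps.T + ica.mean_
print("max reconstruction error:", np.abs(recon - erp.T).max())
```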

Relevance:

100.00%

Publisher:

Abstract:

Graphs of second harmonic generation coefficients and electro-optic coefficients (measured by ellipsometry, attenuated total reflection, and two-slit interference modulation) as a function of chromophore number density (chromophore loading) are experimentally observed to exhibit maxima for polymers containing chromophores characterized by large dipole moments and polarizabilities. Modified London theory is used to demonstrate that this behavior can be attributed to the competition between chromophore-applied electric field interactions and chromophore-chromophore electrostatic interactions. The comparison of theoretical and experimental data explains why the promise of exceptional macroscopic second-order optical nonlinearity predicted for organic materials has not been realized, and suggests routes for circumventing current limitations to large optical nonlinearity. The results also suggest extensions of measurement and theoretical methods to achieve an improved understanding of intermolecular interactions in condensed-phase materials, including materials prepared by sequential synthesis and block copolymer methods.

Relevance:

100.00%

Publisher:

Abstract:

Sulfite oxidase catalyzes the terminal reaction in the degradation of sulfur amino acids. Genetic deficiency of sulfite oxidase results in neurological abnormalities and often leads to death at an early age. The mutation in the sulfite oxidase gene responsible for sulfite oxidase deficiency in a 5-year-old girl was identified by sequence analysis of cDNA obtained from fibroblast mRNA to be a guanine to adenine transition at nucleotide 479 resulting in the amino acid substitution of Arg-160 to Gln. Recombinant protein containing the R160Q mutation was expressed in Escherichia coli, purified, and characterized. The mutant protein contained its full complement of molybdenum and heme, but exhibited 2% of native activity under standard assay conditions. Absorption spectroscopy of the isolated molybdenum domains of native sulfite oxidase and of the R160Q mutant showed significant differences in the 480- and 350-nm absorption bands, suggestive of altered geometry at the molybdenum center. Kinetic analysis of the R160Q protein showed an increase in Km for sulfite combined with a decrease in kcat resulting in a decrease of nearly 1,000-fold in the apparent second-order rate constant kcat/Km. Kinetic parameters for the in vitro generated R160K mutant were found to be intermediate in value between those of the native protein and the R160Q mutant. Native sulfite oxidase was rapidly inactivated by phenylglyoxal, yielding a modified protein with kinetic parameters mimicking those of the R160Q mutant. It is proposed that Arg-160 attracts the anionic substrate sulfite to the binding site near the molybdenum.
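
As a brief reminder of why kcat/Km is the quantity compared (standard Michaelis-Menten algebra, not specific to this paper): at substrate concentrations well below Km the rate law becomes second order, with kcat/Km as its apparent rate constant, so the roughly 1,000-fold drop reported above compounds the increase in Km and the decrease in kcat:

\[
v=\frac{k_{\mathrm{cat}}\,[\mathrm E]_{0}\,[\mathrm S]}{K_{m}+[\mathrm S]}
\;\xrightarrow{\;[\mathrm S]\ll K_{m}\;}\;
\left(\frac{k_{\mathrm{cat}}}{K_{m}}\right)[\mathrm E]_{0}\,[\mathrm S],
\qquad
\frac{(k_{\mathrm{cat}}/K_{m})_{\mathrm{R160Q}}}{(k_{\mathrm{cat}}/K_{m})_{\mathrm{native}}}\approx 10^{-3}.
\]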

Relevance:

100.00%

Publisher:

Abstract:

The primary sensory neurons that respond to noxious stimulation and project to the spinal cord are known to fall into two distinct groups: one sensitive to nerve growth factor and the other sensitive to glial cell-line-derived neurotrophic factor. There is currently considerable interest in the ways in which these factors may regulate nociceptor properties. Recently, however, it has emerged that another trophic factor—brain-derived neurotrophic factor (BDNF)—may play an important neuromodulatory role in the dorsal horn of the spinal cord. BDNF meets many of the criteria necessary to establish it as a neurotransmitter/neuromodulator in small-diameter nociceptive neurons. It is synthesized by these neurons and packaged in dense core vesicles in nociceptor terminals in the superficial dorsal horn. It is markedly up-regulated in inflammatory conditions in a nerve growth factor-dependent fashion. Postsynaptic cells in this region express receptors for BDNF. Spinal neurons show increased excitability to nociceptive inputs after treatment with exogenous BDNF. There are both electrophysiological and behavioral data showing that antagonism of BDNF at least partially prevents some aspects of central sensitization. Together, these findings suggest that BDNF may be released from primary sensory nociceptors with activity, particularly in some persistent pain states, and may then increase the excitability of rostrally projecting second-order systems. BDNF released from nociceptive terminals may thus contribute to the sensory abnormalities associated with some pathophysiological states, notably inflammatory conditions.

Relevance:

100.00%

Publisher:

Abstract:

We present the first detailed numerical study in three dimensions of a first-order phase transition that remains first order in the presence of quenched disorder (specifically, the ferromagnetic-paramagnetic transition of the site-diluted four-state Potts model). A tricritical point, which lies surprisingly near the pure-system limit and is studied by means of finite-size scaling, separates the first-order and second-order parts of the critical line. This investigation has been made possible by a new definition of the disorder average that avoids the diverging-variance probability distributions that plague the standard approach. Entropy, rather than free energy, is the basic object in this approach, which exploits a recently introduced microcanonical Monte Carlo method.
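
For orientation, the following Python sketch simulates the model named above (the site-diluted four-state Potts model) with a plain Metropolis update; note that the study itself relies on a microcanonical Monte Carlo method and a special disorder average, and the lattice size, dilution and temperature below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

Lsize, q, p_occ, beta_T = 8, 4, 0.8, 1.0    # lattice side, Potts states, site occupation, 1/T (illustrative)
occ = rng.random((Lsize,) * 3) < p_occ       # quenched site dilution (fixed during the run)
spin = rng.integers(q, size=(Lsize,) * 3)    # Potts states 0..q-1

def local_energy(s, x, y, z, value):
    """Minus the number of occupied equal-state nearest neighbours (coupling J = 1)."""
    e = 0
    for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        xn, yn, zn = (x + dx) % Lsize, (y + dy) % Lsize, (z + dz) % Lsize
        if occ[xn, yn, zn]:
            e -= int(s[xn, yn, zn] == value)
    return e

for _ in range(20 * Lsize**3):               # a few Metropolis sweeps
    x, y, z = rng.integers(Lsize, size=3)
    if not occ[x, y, z]:
        continue
    new = rng.integers(q)
    dE = local_energy(spin, x, y, z, new) - local_energy(spin, x, y, z, spin[x, y, z])
    if dE <= 0 or rng.random() < np.exp(-beta_T * dE):
        spin[x, y, z] = new

# Crude order parameter: excess population of the majority Potts state on occupied sites
counts = np.bincount(spin[occ], minlength=q)
m = (q * counts.max() / counts.sum() - 1) / (q - 1)
print(f"magnetization-like order parameter: {m:.3f}")
```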

Relevance:

100.00%

Publisher:

Abstract:

We present a detailed numerical study of the effects of adding quenched impurities to a three-dimensional system which, in the pure case, undergoes a strong first-order phase transition (specifically, the ferromagnetic/paramagnetic transition of the site-diluted four-state Potts model). We find that the transition remains first order in the presence of a small amount of quenched disorder, but becomes second order as more impurities are added. A tricritical point, which is studied by means of finite-size scaling, separates the first-order and second-order parts of the critical line. The results were made possible by a new definition of the disorder average that avoids the diverging-variance probability distributions that arise with the standard methodology. We also made use of a recently proposed microcanonical Monte Carlo method in which entropy, instead of free energy, is the basic quantity.

Relevance:

100.00%

Publisher:

Abstract:

We propose cotunneling as the microscopic mechanism that makes possible inelastic electron tunneling spectroscopy of magnetic atoms on surfaces for a wide range of systems, including single magnetic adatoms, molecules, and molecular stacks. We describe electronic transport between the scanning tip and the conducting surface through the magnetic system (MS) with a generalized Anderson model, without making use of effective spin models. Transport and spin dynamics are described with an effective cotunneling Hamiltonian in which the correlations in the magnetic system are calculated exactly and the coupling to the electrodes is included up to second order in the tip-MS and MS-substrate couplings. In the appropriate limit our approach is equivalent to the phenomenological Kondo exchange model that successfully describes the experiments. We apply our method to study in detail inelastic transport in two systems, stacks of cobalt phthalocyanines and a single Mn atom on Cu2N. Our method accounts for both the large contribution of inelastic spin-exchange events to the conductance and the observed conductance asymmetry.

Relevance:

100.00%

Publisher:

Abstract:

The electric vehicle (EV) market has seen rapid growth in the recent past. With an increase in the number of electric vehicles on the road, there is an increase in the number of high-capacity battery banks interfacing with the grid. The battery bank of an EV, besides being the fuel tank, is also a large energy storage unit. Presently, it is used only when the vehicle is being driven and remains idle the rest of the time, rendering it underutilized. On the other hand, the grid needs large energy storage units to filter out the fluctuations of supply and demand during a day. EVs can help bridge this gap: the EV battery bank can store excess energy from the grid (grid-to-vehicle, G2V) or supply stored energy back to the grid (vehicle-to-grid, V2G) when required. For power to flow in both directions, a bidirectional AC-DC converter is required. This thesis concentrates on bidirectional AC-DC converters with control of power flow in all four quadrants, for the application of interfacing an EV battery with the grid. The thesis presents a bidirectional interleaved full-bridge converter topology, which increases the power-processing and current-handling capability of the converter and makes it suitable for EVs. A further benefit of the interleaved topology is that it increases the power density of the converter, optimizing the use of space for the same power-handling capacity. The proposed interleaved converter consists of two full bridges; the gate pulses of corresponding switches in one cell are phase shifted by 180 degrees from those of the other cell. The proposed converter control is based on the one-cycle controller. To meet the new reactive-power-handling requirements that utilities impose on grid-connected converters, the controller is modified to process reactive power: a fictitious current derived from the grid voltage is introduced in the controller, which governs the converter performance. The current references are generated using second-order generalized integrators (SOGI) and a phase-locked loop (PLL). A digital implementation of the proposed control scheme is developed and implemented on DSP hardware. Simulated and experimental results, based on the converter topology and control technique discussed here, are presented to show the performance of the proposed theory.
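
The following Python sketch illustrates the second-order generalized integrator (SOGI) quadrature-signal generator mentioned above; the gain k, grid frequency, sampling period and the forward-Euler discretization are illustrative choices and do not reproduce the thesis' one-cycle controller or PLL.

```python
import math

def sogi_qsg(v_samples, omega, k=1.41, dt=1e-4):
    """Second-order generalized integrator (SOGI) quadrature-signal generator.

    Continuous-time model (standard SOGI structure):
        d(v_a)/dt = omega * (k*(v - v_a) - v_b)   # in-phase (band-pass) output
        d(v_b)/dt = omega * v_a                   # quadrature (90-degree lagging) output
    Discretized here with forward Euler for clarity.
    """
    v_a, v_b = 0.0, 0.0
    out = []
    for v in v_samples:
        dv_a = omega * (k * (v - v_a) - v_b)
        dv_b = omega * v_a
        v_a += dv_a * dt
        v_b += dv_b * dt
        out.append((v_a, v_b))
    return out

# Illustrative 50 Hz grid voltage: v_a converges to the input and v_b to its
# 90-degree-lagged copy, giving the orthogonal pair from which current
# references can then be built (together with a PLL, not shown here).
f, dt = 50.0, 1e-4
t = [n * dt for n in range(4000)]
v_in = [math.sin(2 * math.pi * f * tt) for tt in t]
v_alpha, v_beta = zip(*sogi_qsg(v_in, omega=2 * math.pi * f, dt=dt))
print(f"steady-state samples: v_a={v_alpha[-1]:+.3f}, v_b={v_beta[-1]:+.3f}, v={v_in[-1]:+.3f}")
```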

Relevance:

100.00%

Publisher:

Abstract:

This study is focused on the synthesis and application of glycerol-based carbon materials (GBCM200, GBCM300 and GBCM350) as adsorbents for the removal of the antibiotic compounds flumequine and tetracycline from aqueous solution. The synthesis involved the partial carbonization of a glycerol-sulfuric acid mixture, followed by thermal treatment under inert conditions and further thermal activation under an oxidative atmosphere. The textural properties were investigated through N2 adsorption-desorption isotherms, and the presence of oxygenated groups was discussed based on zeta potential and Fourier transform infrared (FTIR) data. The kinetic data revealed that the equilibrium time for flumequine adsorption was reached within 96 h, while for tetracycline it was reached after 120 h. Several kinetic models, i.e., the pseudo-first-order, pseudo-second-order, fractional power, Elovich and Weber-Morris models, were applied, and the pseudo-second-order model was found to be the most suitable for fitting the experimental kinetic data. The estimated surface diffusion coefficients, Ds, of 3.88 × 10⁻¹⁴ and 5.06 × 10⁻¹⁴ m² s⁻¹ suggest that pore diffusion is the rate-limiting step of the adsorption process. Finally, based on SSE values, the Sips model best fitted the experimental FLQ and TCN adsorption isotherm data, followed by the Freundlich equation. The maximum adsorption capacities on the GBCM350 activated carbon were 41.5 mg g⁻¹ for flumequine and 58.2 mg g⁻¹ for tetracycline.
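
For illustration, the pseudo-second-order model identified above as the best kinetic fit, q(t) = qe² k2 t / (1 + qe k2 t), can be fitted as in the short Python sketch below; the uptake values are placeholders, not the measured flumequine or tetracycline data.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """Uptake q(t) (mg/g) predicted by the pseudo-second-order kinetic model."""
    return (qe**2 * k2 * t) / (1.0 + qe * k2 * t)

# Placeholder data (hours, mg/g); in the study these would be the measured
# flumequine or tetracycline uptakes on the glycerol-based carbons.
t_h = np.array([0.5, 1, 2, 4, 8, 24, 48, 96, 120])
q_t = np.array([5.1, 8.9, 14.2, 20.5, 27.8, 36.0, 39.4, 41.0, 41.3])

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t_h, q_t, p0=(q_t.max(), 0.01))
print(f"qe = {qe_fit:.1f} mg/g, k2 = {k2_fit:.4f} g mg^-1 h^-1")
```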