Abstract:
The social, economic and productive heterogeneity of agriculturization is evident in the diverse types of producers present in the humid Pampa. Agriculturization is associated with changes in land quality, in the socio-productive structure, in the productive strategies applied and in the forms of land use. It is often argued, however, that the soil problems associated with agriculturization are the consequence of applying relatively homogeneous technological packages, regardless of the different types of producers who implement them. Our hypothesis, in contrast, is that the deterioration of typical pampean Argiudolls is the result, for different relief positions, of complex combinations of the diverse productive strategies adopted by different types of producers. In the Luján district we classified producers according to their level of capitalization (capitalized and non-capitalized) and their labour organization (family and non-family). We defined five productive strategies (four agricultural, with one or two crops per year under no-till or conventional tillage, and one livestock strategy) and two environments (upland and lowland). Based on the municipal cadastre, we applied a survey to a statistically representative stratified sample, by plot location and size. On this sample we carried out the soil sampling that allowed us to analyse the following parameters: horizon depth, bulk density, organic matter, acidity, nitrogen, phosphorus and potassium. We calculated the organic matter and nitrogen content per hectare and the relative deterioration. We performed hypothesis tests for the comparison of means (F test and t test) and finally calculated the relative deterioration values. We used organic matter content per hectare as the indicator, given its greater sensitivity to changes in soil conditions. Two of our main findings indicate, first, that when cultivating uplands under no-till, all producer types showed the lowest relative deterioration values, except for non-capitalized family producers, who obtained the lowest relative deterioration with conventional tillage, even in the lowlands. These producers use livestock in rotation with crops as a soil-care tactic. Capitalized entrepreneurs show the highest relative deterioration values in the lowlands. Family producers show smaller losses; when capitalized, they achieve the best relative situations by applying no-till in the lowlands. Second, no single dimension (producer type, productive strategy, environment) analysed in isolation determines the relative deterioration of the soils. At the same time, none can be discarded; each should be included in a combination that integrates the environmental conditions and the productive strategies of the different types of producers who manage the soils.
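The per-hectare stocks and the mean-comparison tests mentioned in this abstract can be illustrated with a short script. This is a minimal sketch, assuming a conventional stock formula (concentration × bulk density × horizon depth) and a relative-deterioration definition of (reference − observed)/reference; the sample values, strata and reference are hypothetical, not data from the study.

```python
import numpy as np
from scipy import stats

def om_stock_t_ha(om_percent, bulk_density_g_cm3, depth_m):
    """Organic-matter stock (t/ha) from concentration, bulk density and horizon depth."""
    # t/ha = (mass fraction) * (t/m3) * (m) * (10,000 m2/ha)
    return om_percent / 100.0 * bulk_density_g_cm3 * depth_m * 10_000

def relative_deterioration(observed, reference):
    """Relative loss (%) with respect to a reference (e.g. best-case) stock."""
    return (reference - observed) / reference * 100.0

# Hypothetical plot-level measurements for two strata
# (e.g. upland no-till vs lowland conventional tillage)
om_a = om_stock_t_ha(np.array([3.1, 2.9, 3.3, 3.0]), 1.25, 0.22)
om_b = om_stock_t_ha(np.array([2.4, 2.2, 2.6, 2.3]), 1.35, 0.20)

# F test for equality of variances (two-sided), then t test for equality of means
f = np.var(om_a, ddof=1) / np.var(om_b, ddof=1)
df1, df2 = len(om_a) - 1, len(om_b) - 1
p_f = 2 * min(stats.f.cdf(f, df1, df2), stats.f.sf(f, df1, df2))
t, p_t = stats.ttest_ind(om_a, om_b, equal_var=p_f > 0.05)  # pooled t only if variances look equal

print(f"F={f:.2f} (p={p_f:.3f}), t={t:.2f} (p={p_t:.3f})")
print("relative deterioration of B vs A (%):", relative_deterioration(om_b.mean(), om_a.mean()))
```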
Abstract:
We present a novel graphical user interface program GrafLab (GRAvity Field LABoratory) for spherical harmonic synthesis (SHS) created in MATLAB®. The program allows the user to conveniently compute 38 different functionals of the geopotential up to ultra-high degrees and orders of the spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) extended-range arithmetic (up to an arbitrary maximum degree). For maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only about 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, and the input coordinates can either be read from a data file or entered manually. For computation on a regular grid we apply the lumped-coefficients approach because of its significant time efficiency. Furthermore, if a full variance-covariance matrix of the spherical harmonic coefficients is available, the commission errors of the functionals can be computed. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
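As an illustration of the first of these approaches, the sketch below implements the standard forward column recursion for fully normalized associated Legendre functions in double precision, using the usual geodetic (4π) normalization; the function name and interface are illustrative, not GrafLab's.

```python
import numpy as np

def fnalf_forward_column(nmax, lat_rad):
    """Fully normalized associated Legendre functions P[n, m] at one latitude,
    computed with the standard forward column recursion (stable in double
    precision only up to degree ~1800-2190, as noted above)."""
    t, u = np.sin(lat_rad), np.cos(lat_rad)      # t = sin(phi), u = cos(phi)
    P = np.zeros((nmax + 1, nmax + 1))
    P[0, 0] = 1.0
    if nmax >= 1:
        P[1, 1] = np.sqrt(3.0) * u
    # sectorial terms P[m, m]
    for m in range(2, nmax + 1):
        P[m, m] = u * np.sqrt((2.0 * m + 1.0) / (2.0 * m)) * P[m - 1, m - 1]
    # non-sectorial terms P[n, m], n > m
    for m in range(0, nmax):
        P[m + 1, m] = np.sqrt(2.0 * m + 3.0) * t * P[m, m]   # first term above the diagonal
        for n in range(m + 2, nmax + 1):
            a = np.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0) / ((n - m) * (n + m)))
            b = np.sqrt((2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                        / ((n - m) * (n + m) * (2.0 * n - 3.0)))
            P[n, m] = a * t * P[n - 1, m] - b * P[n - 2, m]
    return P

# Example: all P[n, m] up to degree and order 4 at 45 deg latitude
print(fnalf_forward_column(4, np.radians(45.0)))
```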
Abstract:
Detailed mineralogical and geochemical studies were performed on samples from selected time intervals recovered during Leg 79 on the Mazagan Plateau. The uppermost Albian and Cenomanian sediments of Sites 545 and 547 can be correlated on the basis of mineralogy and geochemistry; these sediments illustrate differential settling processes and the existence of hot climates with alternating humid and dry seasons in the African coastal zone. The upper Aptian to Albian black shales of Site 545 point to an irregular alternation of stages of tectonic activity and relaxation, which allowed different behaviors in the reworking of soils, crystalline rocks, and sediments formed in peri-marine basins. The barren lower Mesozoic reddish sediments and evaporitic series of Sites 546 and 547 are characterized by strong physical erosion of sialic landscapes, without clear evidence of post-depositional metamorphic events. At Site 546, strong early diagenetic processes in a confined evaporitic environment affect both the mineralogy and the geochemistry of the pre-Miocene rocks.
Abstract:
The software PanXML is a tool to create XML files needed for DOI registration at the German National Library of Science and Technology (TIB). PanXML is distributed as freeware for the operating systems Microsoft Windows, Apple OS X and Linux. An XML file created by PanXML is based on the XSD file article-doi_v3.2.xsd. Further schemas may be added on request.
Abstract:
Ocean acidification influences sediment/water nitrogen fluxes, possibly by impacting the microbial process of ammonia oxidation. To investigate this further, undisturbed sediment cores collected from Ny-Ålesund harbour (Svalbard) were incubated with seawater adjusted to CO2 concentrations of 380, 540, 760, 1,120 and 3,000 µatm. DNA and RNA were extracted from the sediment surface after 14 days' exposure, and the abundance of bacterial and archaeal ammonia-oxidising (amoA) genes and transcripts was quantified using quantitative polymerase chain reaction. While there was no change in the abundance of bacterial amoA genes, an increase to 760 µatm pCO2 reduced the abundance of bacterial amoA transcripts by 65%, and this was accompanied by a shift in the composition of the active community. In contrast, archaeal amoA gene and transcript abundance both doubled at 3,000 µatm, with an increase in species richness also apparent. This suggests that ammonia-oxidising bacteria and archaea in marine sediments have different pH optima, and that the impact of elevated CO2 on N cycling may depend on the relative abundances of these two major microbial groups. A further shift in the balance of key N-cycling groups was also evident: the abundance of nirS-type denitrifier transcripts decreased alongside bacterial amoA transcripts, indicating that NO3− produced by bacterial nitrification fuelled denitrification. An increase in the abundance of Planctomycete-specific 16S rRNA, the vast majority of which grouped with known anammox bacteria, was also apparent at 3,000 µatm pCO2. This could indicate a possible shift from coupled nitrification-denitrification to anammox activity at elevated CO2.
Abstract:
The need to simulate spectrum-compatible earthquake time histories has existed since earthquake engineering for complicated structures began. Beyond the safety of the main structure itself, the analysis of equipment (piping, racks, etc.) can only be carried out on the basis of the time history of the floor on which it is supported. This paper presents several methods for calculating simulated spectrum-compatible earthquakes, together with a comparison between them. As a result of this comparison, the use of the phase content of real earthquakes, as proposed by Ohsaki, appears to be an effective alternative to the classical methods. With this method it is possible to establish an approach that avoids the arbitrary modulation commonly used in other methods. The different procedures are described, as is the influence of the various parameters that appear in the analysis. Several numerical examples are also presented, and the effectiveness of Ohsaki's method is confirmed.
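The overall idea behind such simulations can be sketched in a few lines: take the Fourier phases from a recorded accelerogram (the phase-content idea attributed to Ohsaki), assign trial Fourier amplitudes, and iteratively rescale each frequency component by the ratio of target to computed response spectrum. The sketch below is a simplified illustration of this generic procedure, not a reproduction of any of the methods compared in the paper; the target spectrum, damping, record and convergence settings are placeholders, and the SDOF responses are evaluated in the frequency domain.

```python
import numpy as np

def response_spectrum(acc, dt, periods, zeta=0.05):
    """Pseudo-acceleration response spectrum of a ground-acceleration record,
    with the SDOF oscillators solved in the frequency domain (zero initial
    conditions, record zero-padded to reduce wrap-around)."""
    n = 4 * len(acc)
    Ag = np.fft.rfft(acc, n)
    w = 2.0 * np.pi * np.fft.rfftfreq(n, dt)
    sa = np.empty(len(periods))
    for i, T in enumerate(periods):
        wn = 2.0 * np.pi / T
        H = -1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)   # relative displacement / ground acc.
        u = np.fft.irfft(H * Ag, n)
        sa[i] = wn**2 * np.max(np.abs(u))                # pseudo-acceleration
    return sa

def match_to_target(phases, dt, periods, target_sa, zeta=0.05, iters=10):
    """Iteratively adjust Fourier amplitudes (keeping the recorded phases fixed)
    until the response spectrum approaches the target."""
    n = 2 * (len(phases) - 1)
    amp = np.ones(len(phases))
    freqs = np.fft.rfftfreq(n, dt)
    for _ in range(iters):
        acc = np.fft.irfft(amp * np.exp(1j * phases), n)
        sa = response_spectrum(acc, dt, periods, zeta)
        # rescale every harmonic by the spectral ratio at its own period
        amp *= np.interp(1.0 / np.maximum(freqs, 1e-6), periods, target_sa / sa)
    return np.fft.irfft(amp * np.exp(1j * phases), n)

# Placeholder inputs: phases from a stand-in "recorded" accelerogram, flat target spectrum
rng = np.random.default_rng(0)
phases = np.angle(np.fft.rfft(rng.standard_normal(2048)))
periods = np.linspace(0.05, 3.0, 40)
target = np.full_like(periods, 2.0)                      # m/s^2, purely illustrative
synthetic = match_to_target(phases, dt=0.01, periods=periods, target_sa=target)
```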
Abstract:
A unified solution framework is presented for the one-, two- or three-dimensional complex non-symmetric eigenvalue problems that respectively govern linear modal instability of incompressible fluid flows in rectangular domains having two, one or no homogeneous spatial directions. The solution algorithm is based on subspace iteration in which the spatial discretization matrix is formed, stored and inverted serially. Results delivered by spectral collocation based on the Chebyshev-Gauss-Lobatto (CGL) points and by a suite of high-order finite-difference methods, comprising the Dispersion-Relation-Preserving (DRP) and Padé finite-difference schemes previously employed for this type of work as well as the summation-by-parts (SBP) scheme and the new high-order finite-difference scheme of order q (FD-q), have been compared from the point of view of accuracy and efficiency in standard validation cases of temporal local and BiGlobal linear instability. The FD-q method has been found to significantly outperform all other finite-difference schemes in solving classic linear local, BiGlobal and TriGlobal eigenvalue problems, as regards both memory and CPU time requirements. The results of the present study disprove the paradigm that spectral methods are superior to finite-difference methods in terms of computational cost at equal accuracy, the FD-q spatial discretization delivering a speedup of O(10^4). Consequently, the three-dimensional (TriGlobal) eigenvalue problems may be solved accurately on typical desktop computers with modest computational effort.
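As a toy illustration of spectral collocation on Chebyshev-Gauss-Lobatto points applied to an eigenvalue problem (a 1D Laplacian with Dirichlet boundary conditions rather than the flow-stability operators treated in the paper), the sketch below builds the standard Chebyshev differentiation matrix and compares the computed eigenvalues with the exact values; all names and parameters are illustrative.

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points and first-derivative matrix (Trefethen-style)."""
    if N == 0:
        return np.array([1.0]), np.zeros((1, 1))
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal entries via negative row sums
    return x, D

# Eigenvalues of -u'' = lambda * u on (-1, 1) with u(+-1) = 0; exact lambda_k = (k*pi/2)^2
N = 32
x, D = cheb(N)
L = -(D @ D)[1:-1, 1:-1]                 # drop boundary rows/columns (Dirichlet BCs)
lam = np.sort(np.linalg.eigvals(L).real)
exact = (np.arange(1, 6) * np.pi / 2.0) ** 2
print("computed:", lam[:5])
print("exact:   ", exact)
```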
Abstract:
Spatial pseudo-depolarizers, among which are the so-called depolarizing prisms, are optical devices that modify the state of polarization at each point of the transverse plane of a light beam, so that a continuum of polarization states is obtained and, consequently, a standard degree of polarization close to zero is reached when the average over the whole beam section is taken. In the literature these prisms are repeatedly used under the approximation that no double reflection occurs at their oblique face. In the present work, the local and global polarization characteristics of monochromatic plane beams at the exit of a depolarizing prism are studied in detail, taking the double reflection into account, and the results are compared with those obtained when the double reflection is neglected. As a consequence, the conditions under which the effect of double reflection on the depolarizing action of this class of prisms is negligible are obtained.
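To make the local-versus-global distinction concrete, the sketch below uses Jones calculus for a beam whose retardance varies linearly across one transverse coordinate, a generic model of a spatial pseudo-depolarizer; it does not include the double-reflection effect analysed in the paper, and all numerical values and the Stokes sign convention are illustrative. Each point of the beam remains fully polarized, yet the degree of polarization computed from the cross-section-averaged Stokes parameters is close to zero.

```python
import numpy as np

# Input field: uniform linear polarization at 45 degrees (Jones vector per point)
nx = 2000
x = np.linspace(-1.0, 1.0, nx)                    # normalized transverse coordinate
Ein = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Spatially varying retardance between the x and y components (several full cycles)
delta = 6.0 * np.pi * x
Ex = Ein[0] * np.ones(nx)
Ey = Ein[1] * np.exp(1j * delta)

# Local Stokes parameters at each transverse point
S0 = np.abs(Ex) ** 2 + np.abs(Ey) ** 2
S1 = np.abs(Ex) ** 2 - np.abs(Ey) ** 2
S2 = 2.0 * np.real(Ex * np.conj(Ey))
S3 = -2.0 * np.imag(Ex * np.conj(Ey))             # one common sign convention

dop_local = np.sqrt(S1**2 + S2**2 + S3**2) / S0   # = 1 everywhere (fully polarized locally)

# "Standard" (global) degree of polarization from the averaged Stokes parameters
dop_global = np.sqrt(S1.mean()**2 + S2.mean()**2 + S3.mean()**2) / S0.mean()

print("local DOP (min, max):", dop_local.min(), dop_local.max())
print("global DOP:", dop_global)                  # close to zero
```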
Abstract:
Biomass has always been associated with the development of the population of the Canary Islands: it was the first basic source of energy available in the archipelago and the main cause of the deforestation of its woodlands, and over the years it has been replaced by fossil fuels. The Canary Islands store a large amount of energy in the form of biomass. This may be significant on a small scale for the design of small power plants using similar fuels derived from agricultural activities; such plants could supply rural areas that could thereby achieve energy self-sufficiency. The difficulty the Canary Islands face in boosting this development is ensuring supply to the consumption centres or power plants, which, for greater efficiency, must operate continuously; this requires a resource available with regularity, of adequate quality and at an acceptable cost. The Canary Islands also combine this with a unique topography and very rugged terrain, which makes the use of the resource more difficult and significantly more expensive. In this work all these aspects are studied, and conclusions, lines of action and theoretical potentials are presented.
Abstract:
Deterministic safety analysis (DSA) is the procedure used to design the safety-related systems, structures and components of nuclear power plants. DSA is based on computational simulations of a set of hypothetical accidents representative of the plant, called design basis scenarios (DBS). Regulatory authorities specify a set of safety magnitudes that must be calculated in the simulations and establish regulatory acceptance criteria (RAC), which are restrictions that the values of those magnitudes must fulfil. Methodologies for performing DSA can be of two types: conservative or realistic. Conservative methodologies use markedly pessimistic predictive models and assumptions and are therefore relatively simple; they do not need to include an uncertainty analysis of their results. Realistic methodologies are based on realistic, generally mechanistic, predictive models and assumptions and are supplemented with an uncertainty analysis of their main results. They are also called BEPU ("Best Estimate Plus Uncertainty") methodologies, and in them uncertainty is basically represented in a probabilistic way. For conservative methodologies, the RAC are simply restrictions on the calculated values of the safety magnitudes, which must remain confined within an "acceptance region" of their range. For BEPU methodologies the RAC cannot be that simple, because the safety magnitudes are now uncertain variables. This thesis develops the way in which uncertainty is introduced into the RAC. Basically, confinement to the same acceptance region established by the regulator is maintained, but strict fulfilment is not required; instead, a high level of certainty is demanded. In the formalism adopted, this is understood as a "high level of probability", corresponding to the calculation uncertainty of the safety magnitudes. Such uncertainty can be regarded as originating in the inputs to the computational model and propagated through that model. The uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which are used to incorporate the uncertainty due to model imperfection. Fulfilment of the RAC is therefore required with a probability not lower than a value P0 close to 1 and defined by the regulator (probability or coverage level). The calculation uncertainty of the magnitude is not, however, the only uncertainty involved. Even if a model (its basic equations) is known perfectly, the input-output mapping it produces is known imperfectly (unless the model is very simple). The uncertainty due to this ignorance about the action of the model is called epistemic; it can also be described as uncertainty about the propagation. As a consequence, the probability of fulfilling the RAC cannot be known perfectly: it is itself an uncertain quantity. This justifies another term used here for this epistemic uncertainty: metauncertainty. The RAC must incorporate both types of uncertainty: that of the calculation of the safety magnitude (here called aleatory) and that of the calculation of the probability (called epistemic or metauncertainty). Both uncertainties can be introduced in two ways, separately or combined, and in either case the RAC becomes a probabilistic criterion. 
If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. If a second-order probability is employed, the regulator must impose a second level of fulfilment, referring to the epistemic uncertainty. It is called the regulatory confidence level and must be a number close to 1. The pair formed by the two regulatory levels (probability and confidence) is called the regulatory tolerance level. The thesis argues that the best way of constructing the BEPU RAC is by separating the uncertainties, for two reasons: first, experts advocate the separate treatment of aleatory and epistemic uncertainty; second, the separated RAC is (except in exceptional cases) more conservative than the combined RAC. The BEPU RAC is nothing other than a hypothesis about a probability distribution, and it is checked statistically. The thesis classifies the statistical methods for verifying the BEPU RAC into three categories, according to whether they are based on the construction of tolerance regions, on quantile estimation, or on probability estimation (either of fulfilment or of exceedance of regulatory limits). According to a recently proposed terminology, the first two categories correspond to Q-methods and the third to P-methods. The purpose of this classification is not to make an inventory of the numerous and varied methods in each category, but to relate the categories to one another and to cite the most widely used methods and those best regarded from a regulatory standpoint. Special mention is made of the method most used to date: Wilks' nonparametric method, together with its extension by Wald to the multidimensional case. Its homologous P-method, the Clopper-Pearson interval, typically ignored in the BEPU field, is also described. In this context, the problem of the computational cost of uncertainty analysis is addressed. The Wilks, Wald and Clopper-Pearson methods require the random sample used to have a minimum size, which grows with the required tolerance level. The sample size is an indicator of the computational cost, because each sample element is a value of the safety magnitude that requires a calculation with predictive models. Special emphasis is placed on the computational cost when the safety magnitude is multidimensional, that is, when the RAC is a multiple criterion. It is shown that, when the different components of the magnitude are obtained from the same calculation, the multidimensional character introduces no additional computational cost. This disproves a common belief in the BEPU field: that the multidimensional problem can only be tackled through Wald's extension, whose computational cost grows with the dimension of the problem. In the case (which sometimes occurs) in which each component of the magnitude is calculated independently of the others, the influence of the dimension on the cost cannot be avoided. The first BEPU methodologies propagated uncertainties through a surrogate model (metamodel or emulator) of the predictive model or code. The purpose of the metamodel is not its predictive capability, which is much inferior to that of the original model, but to replace the original model exclusively in the propagation of uncertainties. 
To this end, the metamodel must be built with the input parameters that contribute most to the uncertainty of the result, which requires a prior importance or sensitivity analysis. Because of its simplicity, the surrogate model entails almost no computational cost and can be studied exhaustively, for example by means of random samples. As a consequence, the epistemic uncertainty or metauncertainty practically disappears, and the BEPU criterion for metamodels becomes a simple probability. In short, the regulator will more readily accept the statistical methods that need the fewest assumptions: exact rather than approximate, nonparametric rather than parametric, and frequentist rather than Bayesian. The BEPU criterion is based on a second-order probability. The probability that the safety magnitudes lie within the acceptance region can be read not only as a probability of success or a degree of fulfilment of the RAC; it also has a metric interpretation: it represents a distance (within the range of the magnitudes) from the calculated magnitude to the regulatory acceptance limits. This interpretation leads to a definition proposed in this thesis: the probabilistic safety margin. Given a scalar safety magnitude with an upper acceptance limit, the safety margin (SM) between two values A and B of that magnitude is defined as the probability that A is smaller than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is dimensionless, it takes values in the interval (0,1), it can be combined according to the laws of probability, and it is easily generalized to several dimensions. Moreover, it is not symmetric. The term safety margin can be applied to different situations: the distance from a calculated magnitude to a regulatory limit (licensing margin); the distance from the real value of the magnitude to its calculated value (analytical margin); and the distance from a regulatory limit to the damage threshold of a barrier (barrier margin). This idea of representing distances (in the range of the safety magnitudes) by probabilities can be applied to the study of conservatism. The analytical margin can be interpreted as the degree of conservatism (DC) of the computational methodology. Using probability, the conservatism of tolerance limits of a magnitude can be quantified, and conservatism indicators can be established to compare different methods of constructing tolerance limits and regions. One topic that has never been addressed rigorously is the validation of BEPU methodologies. Like any other computational tool, a methodology must be validated before it can be applied to licensing analyses, by comparing its predictions with real values of the safety magnitudes. Such a comparison can only be made for accident scenarios in which measured values of the safety magnitudes exist, which is basically the case in experimental facilities. The ultimate goal of establishing the RAC is to verify that they are fulfilled by the real values of the safety magnitudes, and not only by their calculated values. The thesis proves that a sufficient condition for this ultimate goal is the joint fulfilment of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation. 
The validation criterion must be demonstrated in experimental scenarios and extrapolated to nuclear power plants. The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (that is, of the DC). These minimum levels are basically complementary: the higher one of them, the lower the other. Current regulatory practice imposes a high value on the licensing margin, which means that the required DC is small. Adopting lower values of P0 implies a weaker requirement on the fulfilment of the RAC and, in exchange, a stronger requirement on the DC of the methodology. It is important to stress that the higher the minimum value of the margin (licensing or analytical), the higher the computational cost of demonstrating it. The computational efforts are therefore also complementary: if one of the levels is high (which increases the demand on fulfilment of the corresponding criterion), the computational cost increases. If an intermediate value of P0 is adopted, the required DC is also intermediate, so the methodology does not have to be very conservative and the total computational cost (licensing plus validation) can be optimized.
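The sample-size considerations discussed above can be made concrete with a short calculation. The sketch below computes the minimum one-sided Wilks sample size for a given probability level P0 and confidence level, i.e. the smallest N with 1 − P0^N ≥ confidence, and the corresponding Clopper-Pearson lower confidence bound on the fulfilment probability when all N runs satisfy the acceptance criterion; it illustrates the standard formulas, not any particular methodology from the thesis.

```python
import math
from scipy.stats import beta

def wilks_sample_size(p0, conf):
    """Smallest N such that the largest of N runs bounds the p0-quantile
    with the given confidence: 1 - p0**N >= conf (first-order, one-sided)."""
    return math.ceil(math.log(1.0 - conf) / math.log(p0))

def clopper_pearson_lower(k, n, conf):
    """Exact (Clopper-Pearson) lower confidence bound on a success probability
    after observing k successes in n independent runs."""
    if k == 0:
        return 0.0
    return beta.ppf(1.0 - conf, k, n - k + 1)

p0, conf = 0.95, 0.95
n = wilks_sample_size(p0, conf)
print(n)                                   # 59 for the classical 95/95 requirement
print(clopper_pearson_lower(n, n, conf))   # lower bound on the fulfilment probability
                                           # when all 59 runs meet the acceptance criterion
```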
Abstract:
Glucose production by liver is a major physiological function, which is required to prevent development of hypoglycemia in the postprandial and fasted states. The mechanism of glucose release from hepatocytes has not been studied in detail but was assumed instead to depend on facilitated diffusion through the glucose transporter GLUT2. Here, we demonstrate that in the absence of GLUT2 no other transporter isoforms were overexpressed in liver and only marginally significant facilitated diffusion across the hepatocyte plasma membrane was detectable. However, the rate of hepatic glucose output was normal. This was evidenced by (i) the hyperglycemic response to i.p. glucagon injection; (ii) the in vivo measurement of glucose turnover rate; and (iii) the rate of release of neosynthesized glucose from isolated hepatocytes. These observations therefore indicated the existence of an alternative pathway for hepatic glucose output. Using a [14C]-pyruvate pulse-labeling protocol to quantitate neosynthesis and release of [14C]glucose, we demonstrated that this pathway was sensitive to low temperature (12°C). It was not inhibited by cytochalasin B nor by the intracellular traffic inhibitors brefeldin A and monensin but was blocked by progesterone, an inhibitor of cholesterol and caveolae traffic from the endoplasmic reticulum to the plasma membrane. Our observations thus demonstrate that hepatic glucose release does not require the presence of GLUT2 nor of any plasma membrane glucose facilitative diffusion mechanism. This implies the existence of an as yet unsuspected pathway for glucose release that may be based on a membrane traffic mechanism.
Abstract:
Long-range promoter–enhancer interactions are a crucial regulatory feature of many eukaryotic genes, yet little is known about the mechanisms involved. Using cloned chicken βA-globin genes, either individually or within the natural chromosomal locus, enhancer-dependent transcription is achieved in vitro at a distance of 2 kb with developmentally staged erythroid extracts. This occurs by promoter derepression and is critically dependent upon DNA topology. In the presence of the enhancer, genes must exist in a supercoiled conformation to be actively transcribed, whereas relaxed or linear templates are inactive. Distal protein–protein interactions in vitro may be favored on supercoiled DNA because of topological constraints. In this system, enhancers act primarily to increase the probability of rapid and efficient transcription complex formation and initiation. Repressor and activator proteins binding within the promoter, including erythroid-specific GATA-1, mediate this process.
Abstract:
Strains of Bacteroides fragilis associated with diarrheal disease (enterotoxigenic B. fragilis) produce a 20-kDa zinc-dependent metalloprotease toxin (B. fragilis enterotoxin; BFT) that reversibly stimulates chloride secretion and alters tight junctional function in polarized intestinal epithelial cells. BFT alters cellular morphology and physiology most potently and rapidly when placed on the basolateral membrane of epithelial cells, suggesting that the cellular substrate for BFT may be present on this membrane. Herein, we demonstrate that BFT specifically cleaves, within 1 min, the extracellular domain of the zonula adherens protein E-cadherin. Cleavage of E-cadherin by BFT is ATP-independent and essential to the morphologic and physiologic activity of BFT. However, the morphologic changes occurring in response to BFT are dependent on target-cell ATP. E-cadherin is shown here to be a cellular substrate for a bacterial toxin, identifying cell-surface proteolytic activity as a mechanism of action for a bacterial toxin.
Abstract:
The immunodominant, CD8+ cytotoxic T lymphocyte (CTL) response to the HLA-B8-restricted peptide, RAKFKQLL, located in the Epstein–Barr virus immediate-early antigen BZLF1, is characterized by a diverse T cell receptor (TCR) repertoire. Here, we show that this diversity can be partitioned on the basis of crossreactive cytotoxicity patterns involving the recognition of a self peptide, RSKFRQIV, located in a serine/threonine kinase, and a bacterial peptide, RRKYKQII, located in the Staphylococcus aureus replication initiation protein. Thus, CTL clones that recognized the viral, self, and bacterial peptides expressed a highly restricted αβ TCR phenotype. The CTL clones that recognized the viral and self peptides were more oligoclonal, whereas clones that strictly recognized the viral peptide displayed a diverse TCR profile. Interestingly, the self and bacterial peptides were both substantially less effective than the cognate viral peptide in sensitizing target cell lysis, and they also resulted in only a weak reactivation of memory CTLs in limiting dilution assays, whereas the cognate peptide was highly immunogenic. The crossreactions described here show that human antiviral, CD8+ CTL responses can be shaped by peptide ligands derived from autoantigens and environmental bacterial antigens, thereby providing a firm structural basis for molecular mimicry involving class I-restricted CTLs in the pathogenesis of autoimmune disease.
Abstract:
In the goldfish (Carassius auratus) the two endogenous forms of gonadotropin-releasing hormone (GnRH), namely chicken GnRH II ([His5,Trp7,Tyr8]GnRH) and salmon GnRH ([Trp7,Leu8]GnRH), stimulate the release of both gonadotropins and growth hormone from the pituitary. This control is thought to occur by means of the stimulation of distinct GnRH receptors. These receptors can be distinguished on the basis of differential gonadotropin and growth hormone releasing activities of naturally occurring GnRHs and GnRHs with variant amino acids in position 8. We have cloned the cDNAs of two GnRH receptors, GfA and GfB, from goldfish brain and pituitary. Although the receptors share 71% identity, there are marked differences in their ligand selectivity. Both receptors are expressed in the pituitary but are differentially expressed in the brain, ovary, and liver. Thus we have found and cloned two full-length cDNAs that appear to correspond to different forms of GnRH receptor, with distinct pharmacological characteristics and tissue distribution, in a single species.