859 results for REPRESENTATIONS OF PARTIALLY ORDERED SETS
Abstract:
Deterministic safety analysis (DSA) is the procedure used to design the safety-related systems, structures and components of nuclear power plants. DSA is based on computational simulations of a set of hypothetical accidents representative of the installation, called design basis scenarios (DBS). Regulatory bodies specify a set of safety magnitudes that must be computed in the simulations and establish regulatory acceptance criteria (RAC), which are constraints that the values of those magnitudes must satisfy. DSA methodologies can be of two types: conservative or realistic. Conservative methodologies use markedly pessimistic predictive models and assumptions and are therefore relatively simple; they do not need to include an uncertainty analysis of their results. Realistic methodologies rely on realistic assumptions and predictive models, generally mechanistic, and are supplemented with an uncertainty analysis of their main results. They are also called BEPU ("Best Estimate Plus Uncertainty") methodologies; in them, uncertainty is represented essentially in probabilistic terms. For conservative methodologies, the RAC are simply constraints on the calculated values of the safety magnitudes, which must remain confined within an "acceptance region" of their range. For BEPU methodologies, the RAC cannot be that simple, because the safety magnitudes are now uncertain variables. The Thesis develops the way uncertainty is introduced into the RAC. Basically, confinement to the same acceptance region established by the regulator is maintained, but strict fulfillment is not required; rather, a high level of certainty is. In the adopted formalism, this is understood as a "high level of probability", and that probability corresponds to the calculation uncertainty of the safety magnitudes. Such uncertainty can be regarded as originating in the inputs to the calculation model and propagated through that model. The uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which are used to incorporate the uncertainty due to model imperfection. Fulfillment of the RAC is therefore required with a probability not less than a value P0 close to 1 and defined by the regulator (probability or coverage level). However, the calculation uncertainty of the magnitude is not the only uncertainty involved. Even if a model (its basic equations) is known perfectly, the input-output mapping it produces is known imperfectly (unless the model is very simple). The uncertainty due to ignorance about the action of the model is called epistemic; it can also be described as uncertainty about the propagation. As a consequence, the probability of fulfilling the RAC cannot be known perfectly; it is itself an uncertain magnitude. This justifies another term used here for this epistemic uncertainty: metauncertainty. The RAC must incorporate both types of uncertainty: that of the calculation of the safety magnitude (here called aleatory) and that of the calculation of the probability (called epistemic, or metauncertainty). Both uncertainties can be introduced in two ways: separately or combined. In either case, the RAC becomes a probabilistic criterion.
If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. When the second-order probability is employed, the regulator must impose a second level of fulfillment, referring to the epistemic uncertainty. It is called the regulatory confidence level, and it must be a number close to 1. The pair formed by the two regulatory levels (probability and confidence) is called the regulatory tolerance level. The Thesis argues that the best way to construct the BEPU RAC is by separating the uncertainties, for two reasons. First, experts advocate the separate treatment of aleatory and epistemic uncertainty. Second, the separated RAC is (except in exceptional cases) more conservative than the combined RAC. The BEPU RAC is nothing but a hypothesis about a probability distribution, and its verification is performed statistically. The Thesis classifies the statistical methods for checking the BEPU RAC into three categories, according to whether they are based on the construction of tolerance regions, on quantile estimates or on probability estimates (whether of fulfillment or of exceedance of regulatory limits). Following a recently proposed terminology, the first two categories correspond to the Q-methods and the third to the P-methods. The purpose of the classification is not to inventory the methods in each category, which are very numerous and varied, but to relate the categories and to cite the most widely used methods and those best regarded from a regulatory standpoint. Special mention is made of the method most used to date: Wilks' nonparametric method, together with its extension by Wald to the multidimensional case. Its P-method counterpart, the Clopper-Pearson interval, typically ignored in the BEPU field, is also described. In this context, the problem of the computational cost of the uncertainty analysis is addressed. The Wilks, Wald and Clopper-Pearson methods require that the random sample used have a minimum size, which grows with the required tolerance level. The sample size is an indicator of the computational cost, because each sample element is a value of the safety magnitude, which requires a calculation with predictive models. Special emphasis is placed on the computational cost when the safety magnitude is multidimensional, that is, when the RAC is a multiple criterion. It is shown that, when the different components of the magnitude are obtained from the same calculation, the multidimensional character introduces no additional computational cost. This disproves a common belief in the BEPU field: that the multidimensional problem can only be tackled through the Wald extension, whose computational cost grows with the dimension of the problem. In the case (which sometimes arises) in which each component of the magnitude is calculated independently of the others, the influence of the dimension on the cost cannot be avoided. The first BEPU methodologies performed the uncertainty propagation through a surrogate model (metamodel or emulator) of the predictive model or code. The aim of the metamodel is not its predictive capability, far inferior to that of the original model, but to replace the original model exclusively in the propagation of uncertainties.
To that end, the metamodel must be built with the input parameters that contribute most to the uncertainty of the result, which requires a prior importance or sensitivity analysis. Owing to its simplicity, the surrogate model entails hardly any computational cost and can be studied exhaustively, for example by means of random samples. Consequently, the epistemic uncertainty, or metauncertainty, vanishes, and the BEPU criterion for metamodels becomes a single probability. In short, the regulator will more readily accept the statistical methods that need the fewest assumptions: exact rather than approximate, nonparametric rather than parametric, and frequentist rather than Bayesian. The BEPU criterion is based on a second-order probability. The probability that the safety magnitudes lie in the acceptance region can be viewed not only as a probability of success or a degree of fulfillment of the RAC. It also has a metric interpretation: it represents a distance (within the range of the magnitudes) from the calculated magnitude to the regulatory acceptance limits. This interpretation gives rise to a definition proposed in this Thesis: the probabilistic safety margin. Given a scalar safety magnitude with an upper acceptance limit, the safety margin (SM) between two values A and B of that magnitude is defined as the probability that A is less than B, obtained from the uncertainties of A and B. The probabilistic definition of the SM has several advantages: it is dimensionless, it can be combined according to the laws of probability and it is easily generalized to several dimensions. Moreover, it is not symmetric. The term safety margin can be applied to different situations: the distance from a calculated magnitude to a regulatory limit (licensing margin); the distance from the real value of the magnitude to its calculated value (analytical margin); the distance from a regulatory limit to the damage threshold of a barrier (barrier margin). This idea of representing distances (in the range of safety magnitudes) by probabilities can be applied to the study of conservatism. The analytical margin can be interpreted as the degree of conservativeness (DG) of the calculation methodology. Using probability, the conservatism of tolerance limits of a magnitude can be quantified, and conservatism indicators can be established to compare different methods of constructing tolerance limits and regions. One topic that has never been addressed rigorously is the validation of BEPU methodologies. Like any other calculation tool, a methodology, before it can be applied to licensing analyses, has to be validated through the comparison of its predictions with real values of the safety magnitudes. Such a comparison can only be made in accident scenarios for which measured values of the safety magnitudes exist, and that occurs, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that they are fulfilled by the real values of the safety magnitudes, not only by their calculated values. The Thesis shows that a sufficient condition for this ultimate goal is the joint fulfillment of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation.
The validation criterion must be demonstrated in experimental scenarios and extrapolated to nuclear power plants. The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (the DG). These minimum levels are essentially complementary: the higher one is, the lower the other. Current regulatory practice imposes a high value on the licensing margin, which implies that the required DG is small. Adopting lower values of P0 would mean a weaker requirement on RAC fulfillment and, in exchange, a stronger requirement on the DG of the methodology. It is important to note that the higher the minimum value of the margin (licensing or analytical), the higher the computational cost of demonstrating it. The computational efforts are therefore also complementary: if one of the levels is high (which tightens the corresponding criterion), its computational cost increases. If an intermediate value of P0 is adopted, the required DG is also intermediate, so the methodology does not need to be very conservative and the total computational cost (licensing plus validation) can be optimized. ABSTRACT Deterministic Safety Analysis (DSA) is the procedure used in the design of safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes, and define the regulatory acceptance criteria (RAC) that must be fulfilled by them. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies make use of pessimistic models and assumptions, and are relatively simple; they do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions, and need to be supplemented with uncertainty analyses of their results. They are also termed BEPU ("Best Estimate Plus Uncertainty") methodologies, and are typically based on a probabilistic representation of the uncertainty. For conservative methodologies, the RAC are simply the restriction of calculated values of safety magnitudes to "acceptance regions" defined on their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain. In the present Thesis, the inclusion of uncertainty in the RAC is studied. Basically, the restriction to the acceptance region must be fulfilled "with a high certainty level"; specifically, a high probability of fulfillment is required. The calculation uncertainty of the magnitudes is considered as propagated from the inputs through the predictive model. Uncertain inputs include model empirical parameters, which store the uncertainty due to model imperfection. Fulfillment of the RAC is required with a probability not less than a value P0 close to 1 and defined by the regulator (probability or coverage level). Calculation uncertainty is not the only one involved. Even if a model (i.e. its basic equations) is perfectly known, the input-output mapping produced by the model is imperfectly known (unless the model is very simple). This ignorance is called epistemic uncertainty, and it is associated with the process of propagation; in fact, it is propagated to the probability of fulfilling the RAC. Another term used in the Thesis for this epistemic uncertainty is metauncertainty.
The RAC must include the two types of uncertainty: one for the calculation of the magnitude (aleatory uncertainty) and one for the calculation of the probability (epistemic uncertainty). The two uncertainties can be taken into account separately or combined; in either case the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. In the first case, the regulator must define a level of fulfillment for the epistemic uncertainty, termed the regulatory confidence level, as a value close to 1. The pair of regulatory levels (probability and confidence) is termed the regulatory tolerance level. The Thesis concludes that the adequate way of setting the BEPU RAC is by separating the uncertainties, for two reasons: experts recommend the separation of aleatory and epistemic uncertainty, and the separated RAC is in general more conservative than the joint RAC. The BEPU RAC is a hypothesis on a probability distribution, and must be statistically tested. The Thesis classifies the statistical methods for verifying RAC fulfillment into 3 categories: methods based on tolerance regions, on quantile estimators and on estimators of probabilities (of success or failure). The first two have been termed Q-methods, whereas those in the third category are termed P-methods. The purpose of this categorization is not to make an exhaustive survey of the very numerous existing methods, but rather to relate the three categories and examine the most widely used methods from a regulatory standpoint. The most widely used method, due to Wilks, deserves special mention, together with its extension to multidimensional variables due to Wald. The P-method counterpart of Wilks' method is the Clopper-Pearson interval, typically ignored in the BEPU realm. The problem of the computational cost of an uncertainty analysis is also tackled. The Wilks, Wald and Clopper-Pearson methods require a minimum sample size, which is a growing function of the tolerance level. The sample size is an indicator of the computational cost, because each element of the sample must be calculated with the predictive models (codes). When the RAC is a multiple criterion, the safety magnitude becomes multidimensional. When all its components are outputs of the same calculation, the multidimensional character introduces no additional computational cost. This disproves a widespread idea in the BEPU realm, namely that the multi-D problem can only be tackled with the Wald extension. When the components of the magnitude are calculated independently, the influence of the problem dimension on the cost cannot be avoided. The first BEPU methodologies performed the uncertainty propagation through a surrogate model of the code, also termed an emulator or metamodel. The goal of a metamodel is not predictive capability, which is clearly inferior to that of the original code, but the capacity to propagate uncertainties at a lower computational cost. The emulator must contain the input parameters contributing the most to the output uncertainty, and this requires a prior importance analysis. The surrogate model is practically inexpensive to run, so it can be exhaustively analyzed through Monte Carlo; the epistemic uncertainty due to sampling is therefore reduced to almost zero, and the BEPU RAC for metamodels involves a single probability.
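The relation described above between Wilks' nonparametric method and its P-method counterpart, the Clopper-Pearson interval, can be illustrated with a minimal sketch (Python with SciPy; the 95/95 tolerance level and the function names are illustrative choices, not taken from the Thesis):

```python
import math
from scipy.stats import beta

def wilks_min_sample_size(coverage: float, confidence: float) -> int:
    """Smallest n such that the sample maximum is a one-sided upper tolerance
    limit with the given coverage and confidence: 1 - coverage**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

def clopper_pearson_lower(successes: int, n: int, confidence: float) -> float:
    """One-sided Clopper-Pearson lower confidence bound on the probability of
    the safety magnitude falling inside the acceptance region."""
    if successes == 0:
        return 0.0
    return float(beta.ppf(1.0 - confidence, successes, n - successes + 1))

n = wilks_min_sample_size(coverage=0.95, confidence=0.95)
print(n)  # 59 code runs for a one-sided 95/95 tolerance limit
# If all 59 runs stay inside the acceptance region, the one-sided
# Clopper-Pearson lower bound on the fulfillment probability is about 0.95,
# recovering the same 95/95 regulatory statement.
print(clopper_pearson_lower(successes=n, n=n, confidence=0.95))
```

For a one-sided 95/95 limit this recovers the classical answer of 59 code runs, and, when every run satisfies the criterion, the Clopper-Pearson lower bound on the fulfillment probability reproduces the same statement, which is the sense in which the two methods are counterparts.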
The regulatory authority will tend to accept statistical methods that need a minimum of assumptions: exact, nonparametric and frequentist methods rather than approximate, parametric and Bayesian methods, respectively. The BEPU RAC is based on a second-order probability. The probability of the safety magnitudes being inside the acceptance region is a success probability and can be interpreted as a fulfillment degree of the RAC. Furthermore, it has a metric interpretation, as a distance (in the range of magnitudes) from the calculated values of the magnitudes to the regulatory acceptance limits. A probabilistic definition of safety margin (SM) is proposed in the Thesis. The SM from a value A to another value B of a safety magnitude is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is nondimensional, ranges in the interval (0,1) and can be easily generalized to multiple dimensions. Furthermore, probabilistic SMs are combined according to the laws of probability. A basic property is that probabilistic SMs are not symmetric. There are several types of SM: the distance from a calculated value to a regulatory limit (licensing margin); from the real value to the calculated value of a magnitude (analytical margin); or from the regulatory limit to the damage threshold (barrier margin). These representations of distances (in the magnitudes' range) as probabilities can be applied to the quantification of conservativeness. The analytical margin can be interpreted as the degree of conservativeness (DG) of the computational methodology. Conservativeness indicators are established in the Thesis, useful for comparing different methods of constructing tolerance limits and regions. One topic has not been rigorously tackled to date: the validation of BEPU methodologies. Before being applied in licensing, methodologies must be validated on the basis of comparisons of their predictions with real values of the safety magnitudes. Real data are obtained, basically, in experimental facilities. The ultimate goal of establishing RAC is to verify that the real values (and not only the calculated values) fulfill them. In the Thesis it is proved that a sufficient condition for this goal is the conjunction of two criteria: the BEPU RAC and an analogous criterion for validation. This last criterion must be proved in experimental scenarios and extrapolated to NPPs. The licensing RAC requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DG). These minimum values are basically complementary; the higher one of them, the lower the other one. Current regulatory practice sets a high value for the licensing margin, so that the required DG is low. The possible adoption of lower values of P0 would imply a weaker requirement on RAC fulfillment and, on the other hand, a stronger requirement on the conservativeness of the methodology. It is important to highlight that a higher minimum value of the licensing or analytical margin requires a higher computational cost to demonstrate it. Therefore, the computational efforts are also complementary. If intermediate levels are adopted, the required DG is also intermediate, and the methodology does not need to be very conservative; the total computational effort (licensing plus validation) can then be optimized.
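As a purely illustrative sketch of the probabilistic safety margin SM(A, B) = P(A less severe than B), the following Python snippet estimates a licensing margin by Monte Carlo. The safety magnitude (a peak cladding temperature), its assumed normal calculation uncertainty and the fixed acceptance limit are placeholders chosen for the example, not values taken from the Thesis:

```python
import numpy as np

def probabilistic_safety_margin(a: np.ndarray, b: np.ndarray) -> float:
    """SM(A, B) = P(A < B), estimated from independent Monte Carlo samples of
    the two uncertain values (A 'less severe than' B for an upper limit)."""
    return float(np.mean(a[:, None] < b[None, :]))

rng = np.random.default_rng(0)

# Hypothetical licensing margin: uncertain calculated peak cladding temperature (A)
# against a fixed upper acceptance limit (B). All numbers are placeholders.
calculated = rng.normal(loc=1350.0, scale=40.0, size=2000)  # assumed calculation uncertainty (K)
limit = np.full(2000, 1478.0)                               # crisp regulatory limit (K)

print(probabilistic_safety_margin(calculated, limit))  # close to 1: large licensing margin
# The margin is not symmetric: SM(B, A) = 1 - SM(A, B) for continuous magnitudes.
print(probabilistic_safety_margin(limit, calculated))
```

The same estimator applies unchanged when both A and B are uncertain (e.g., an analytical margin between real and calculated values), which is the sense in which probabilistic margins combine under the ordinary rules of probability.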
Abstract:
The LU Board of Curators ordered its president, Sherman Scruggs, to have a law school up and running and ready for Lloyd Gaines by September 1, 1939. This task seemed insurmountable; establishing a law school on a par with that of MU in eight months would be, to say the least, miraculous.
Abstract:
Analysis of the genetic changes in human tumors is often problematical because of the presence of normal stroma and the limited availability of pure tumor DNA. However, large amounts of highly reproducible “representations” of tumor and normal genomes can be made by PCR from nanogram amounts of restriction endonuclease cleaved DNA that has been ligated to oligonucleotide adaptors. We show here that representations are useful for many types of genetic analyses, including measuring relative gene copy number, loss of heterozygosity, and comparative genomic hybridization. Representations may be prepared even from sorted nuclei from fixed and archived tumor biopsies.
Abstract:
This study addresses the extent of divergence in the ascending somatosensory pathways of primates. Divergence of inputs from a particular body part at each successive synaptic step in these pathways results in a potential magnification of the representation of that body part in the somatosensory cortex, so that the representation can be expanded when peripheral input from other parts is lost, as in nerve lesions or amputations. Lesions of increasing size were placed in the representation of a finger in the ventral posterior thalamic nucleus (VPL) of macaque monkeys. After a survival period of 1–5 weeks, area 3b of the somatosensory cortex ipsilateral to the lesion was mapped physiologically, and the extent of the representation of the affected and adjacent fingers was determined. Lesions affecting less than 30% of the thalamic VPL nucleus were without effect upon the cortical representation of the finger whose thalamic representation was at the center of the lesion. Lesions affecting about 35% of the VPL nucleus resulted in a shrinkage of the cortical representation of the finger whose thalamic representation was lesioned, with concomitant expansion of the representations of adjacent fingers. Beyond 35–40%, the whole cortical representation of the hand became silent. These results suggest that the divergence of brainstem and thalamocortical projections, although normally not expressed, is sufficiently great to maintain a representation after a major loss of inputs from the periphery. This is likely to be one mechanism of representational plasticity in the cerebral cortex.
Abstract:
Individuals with hemophilia A require frequent infusion of preparations of coagulation factor VIII. The activity of factor VIII (FVIII) as a cofactor for factor IXa in the coagulation cascade is limited by its instability after activation by thrombin. Activation of FVIII occurs through proteolytic cleavage and generates an unstable FVIII heterotrimer that is subject to rapid dissociation of its subunits. In addition, further proteolytic cleavage by thrombin, factor Xa, factor IXa, and activated protein C can lead to inactivation. We have engineered and characterized a FVIII protein, IR8, that has enhanced in vitro stability of FVIII activity due to resistance to subunit dissociation and proteolytic inactivation. FVIII was genetically engineered by deletion of residues 794–1689 so that the A2 domain is covalently attached to the light chain. Missense mutations at thrombin and activated protein C inactivation cleavage sites provided resistance to proteolysis, resulting in a single-chain protein that has maximal activity after a single cleavage after arginine-372. The specific activity of partially purified protein produced in transfected COS-1 monkey cells was 5-fold higher than wild-type (WT) FVIII. Whereas WT FVIII was inactivated by thrombin after 10 min in vitro, IR8 still retained 38% of peak activity after 4 hr. Whereas binding of IR8 to von Willebrand factor (vWF) was reduced 10-fold compared with WT FVIII, in the presence of an anti-light chain antibody, ESH8, binding of IR8 to vWF increased 5-fold. These results demonstrate that residues 1690–2332 of FVIII are sufficient to support high-affinity vWF binding. Whereas ESH8 inhibited WT factor VIII activity, IR8 retained its activity in the presence of ESH8. We propose that resistance to A2 subunit dissociation abrogates inhibition by the ESH8 antibody. The stable FVIIIa described here provides the opportunity to study the activated form of this critical coagulation factor and demonstrates that proteins can be improved by rational design through genetic engineering technology.
Abstract:
The twn2 mutant of Arabidopsis exhibits a defect in early embryogenesis where, following one or two divisions of the zygote, the descendants of the apical cell arrest. The basal cells that normally give rise to the suspensor proliferate abnormally, giving rise to multiple embryos. A high proportion of the seeds fail to develop viable embryos, and those that do contain a high proportion of partially or completely duplicated embryos. The adult plants are smaller and less vigorous than the wild type and have a severely stunted root. The twn2-1 mutation, which is the only known allele, was caused by a T-DNA insertion in the 5′ untranslated region of a putative valyl-tRNA synthetase gene, valRS. The insertion causes reduced transcription of the valRS gene in reproductive tissues and developing seeds but increased expression in leaves. Analysis of transcript initiation sites and the expression of promoter–reporter fusions in transgenic plants indicated that enhancer elements inside the first two introns interact with the border of the T-DNA to cause the altered pattern of expression of the valRS gene in the twn2 mutant. The phenotypic consequences of this unique mutation are interpreted in the context of a model, suggested by Vernon and Meinke [Vernon, D. M. & Meinke, D. W. (1994) Dev. Biol. 165, 566–573], in which the apical cell and its descendants normally suppress the embryogenic potential of the basal cell and its descendants during early embryo development.
Abstract:
Phospholipid signaling mediated by lipid-derived second messengers or biologically active lipids is still new and is not well established in plants. We recently have found that lysophosphatidylethanolamine (LPE), a naturally occurring lipid, retards senescence of leaves, flowers, and postharvest fruits. Phospholipase D (PLD) has been suggested as a key enzyme in mediating the degradation of membrane phospholipids during the early stages of plant senescence. Here we report that LPE inhibited the activity of partially purified cabbage PLD in a cell-free system in a highly specific manner. Inhibition of PLD by LPE was dose-dependent and increased with the length and unsaturation of the LPE acyl chain, whereas individual molecular components of LPE such as ethanolamine and free fatty acid had no effect on PLD activity. Enzyme-kinetic analysis suggested noncompetitive inhibition of PLD by LPE. In comparison, the related lysophospholipids lysophosphatidylcholine, lysophosphatidylglycerol, and lysophosphatidylserine had no significant effect on PLD activity, whereas PLD was stimulated by lysophosphatidic acid and inhibited by lysophosphatidylinositol. Membrane-associated and soluble PLD, extracted from cabbage and castor bean leaf tissues, was also inhibited by LPE. Consistent with acyl-specific inhibition of PLD by LPE, senescence of cranberry fruits as measured by ethylene production was more effectively inhibited with increasing acyl chain length and unsaturation of LPE. There are no known specific inhibitors of PLD in plants and animals. We demonstrate specific inhibitory regulation of PLD by a lysophospholipid.
Abstract:
We created a simulation based on experimental data from bacteriophage T7 that computes the developmental cycle of the wild-type phage and also of mutants that have an altered genome order. We used the simulation to compute the fitness of more than 10^5 mutants. We tested these computations by constructing and experimentally characterizing T7 mutants in which we repositioned gene 1, coding for T7 RNA polymerase. Computed protein synthesis rates for ectopic gene 1 strains were in moderate agreement with observed rates. Computed phage-doubling rates were close to observations for two of four strains, but significantly overestimated those of the other two. Computations indicate that the genome organization of wild-type T7 is nearly optimal for growth: only 2.8% of random genome permutations were computed to grow faster than wild type, the fastest of them by 31%. Specific discrepancies between computations and observations suggest that a better understanding of the translation efficiency of individual mRNAs and the functions of qualitatively “nonessential” genes will be needed to improve the T7 simulation. In silico representations of biological systems can serve to assess and advance our understanding of the underlying biology. Iteration between computation, prediction, and observation should increase the rate at which biological hypotheses are formulated and tested.
Abstract:
Understanding the mechanisms of action of membrane proteins requires the elucidation of their structures to high resolution. The critical step in accomplishing this by x-ray crystallography is the routine availability of well-ordered three-dimensional crystals. We have devised a novel, rational approach to meet this goal using quasisolid lipidic cubic phases. This membrane system, consisting of lipid, water, and protein in appropriate proportions, forms a structured, transparent, and complex three-dimensional lipidic array, which is pervaded by an intercommunicating aqueous channel system. Such matrices provide nucleation sites (“seeding”) and support growth by lateral diffusion of protein molecules in the membrane (“feeding”). Bacteriorhodopsin crystals were obtained from bicontinuous cubic phases, but not from micellar systems, implying a critical role of the continuity of the diffusion space (the bilayer) on crystal growth. Hexagonal bacteriorhodopsin crystals diffracted to 3.7 Å resolution, with a space group P63, and unit cell dimensions of a = b = 62 Å, c = 108 Å; α = β = 90° and γ = 120°.
Abstract:
Capacity is an important numerical invariant of symplectic manifolds. This paper studies when a subset of a symplectic manifold is null, i.e., can be removed without affecting the ambient capacity. After examples of open null sets and codimension-2 non-null sets, geometric techniques are developed to perturb any isotopy of a loop to a Hamiltonian flow; it follows that sets of dimension 0 and 1 are null. For isotropic sets of higher dimensions, obstructions to the perturbation are found in homotopy groups of the orthogonal groups.
Abstract:
Ets factors play a critical role in oncogenic Ras- and growth factor-mediated regulation of the proximal rat prolactin (rPRL) promoter in pituitary cells. The rPRL promoter contains two key functional Ets binding sites (EBS): a composite EBS/Pit-1 element located at –212 and an EBS that co-localizes with the basal transcription element (BTE, or A-site) located at –96. Oncogenic Ras exclusively signals to the –212 site, which we have named the Ras response element (RRE); whereas the response of multiple growth factors (FGFs, EGF, IGF, insulin and TRH) maps to both EBSs. Although Ets-1 and GA binding protein (GABP) have been implicated in the Ras and insulin responses, respectively, the precise identity of the pituitary Ets factors that specifically bind to the RRE and BTE sites remains unknown. In order to identify the Ets factor(s) present in GH4 and GH3 nuclear extracts (GH4NE and GH3NE) that bind to the EBSs contained in the RRE and BTE, we used EBS-RRE and BTE oligonucleotides in electrophoretic mobility shift assays (EMSAs), antibody supershift assays, western blot analysis of partially purified fractions and UV-crosslinking studies. EMSAs, using either the BTE or EBS-RRE probes, identified a specific protein–DNA complex, designated complex A, which contains an Ets factor as determined by oligonucleotide competition studies. Using western blot analysis of GH3 nuclear proteins that bind to heparin–Sepharose, we have shown that Ets-1 and GABP, which are MAP kinase substrates, co-purify with complex A, and supershift analysis with specific antisera revealed that complex A contains Ets-1, GABPα and GABPβ1. In addition, we show that recombinant full-length Ets-1 binds equivalently to BTE and EBS-RRE probes, while recombinant GABPα/β preferentially binds to the BTE probe. Furthermore, comparing the DNA binding of GH4NE containing both Ets-1 and GABP and HeLa nuclear extracts devoid of Ets-1 but containing GABP, we were able to show that the EBS-RRE preferentially binds Ets-1, while the BTE binds both GABP and Ets-1. Finally, UV-crosslinking experiments with radiolabeled EBS-RRE and BTE oligonucleotides showed that these probes specifically bind to a protein of ∼64 kDa, which is consistent with binding to Ets-1 (54 kDa) and/or the DNA binding subunit of GABP, GABPα (57 kDa). These studies show that endogenous, pituitary-derived GABP and Ets-1 bind to the BTE, whereas Ets-1 preferentially binds to the EBS-RRE. Taken together, these data provide important insights into the mechanisms by which the combination of distinct Ets members and EBSs transduce differential growth factor responses.
Abstract:
The Mouse Genome Database (MGD) is the community database resource for the laboratory mouse, a key model organism for interpreting the human genome and for understanding human biology and disease (http://www.informatics.jax.org). MGD provides standard nomenclature and consensus map positions for mouse genes and genetic markers; it provides a curated set of mammalian homology records, user-defined chromosomal maps, experimental data sets and the definitive mouse ‘gene to sequence’ reference set for the research community. The integration and standardization of these data sets facilitates the transition between mouse DNA sequence, gene and phenotype annotations. A recent focus on allele and phenotype representations enhances the ability of MGD to organize and present data for exploring the relationship between genotype and phenotype. This link between the genome and the biology of the mouse is especially important as phenotype information grows from large mutagenesis projects and genotype information grows from large-scale sequencing projects.
Abstract:
Toward the goal of identifying complete sets of transcription factor (TF)-binding sites in the genomes of several gamma proteobacteria, and hence describing their transcription regulatory networks, we present a phylogenetic footprinting method for identifying these sites. Probable transcription regulatory sites upstream of Escherichia coli genes were identified by cross-species comparison using an extended Gibbs sampling algorithm. Close examination of a study set of 184 genes with documented transcription regulatory sites revealed that when orthologous data were available from at least two other gamma proteobacterial species, 81% of our predictions corresponded with the documented sites, and 67% corresponded when data from only one other species were available. That the remaining predictions included bona fide TF-binding sites was proven by affinity purification of a putative transcription factor (YijC) bound to such a site upstream of the fabA gene. Predicted regulatory sites for 2097 E.coli genes are available at http://www.wadsworth.org/resnres/bioinfo/.
Abstract:
Using monoclonal tubulin and actin antibodies, Al-mediated alterations to microtubules (MTs) and actin microfilaments (MFs) were shown to be most prominent in cells of the distal part of the transition zone (DTZ) of an Al-sensitive maize (Zea mays L.) cultivar. An early response to Al (1 h, 90 μM) was the depletion of MTs in cells of the DTZ, specifically in the outermost cortical cell file. However, no prominent changes to the MT cytoskeleton were found in elongating cells treated with Al for 1 h in spite of severe inhibition of root elongation. Al-induced early alterations to actin MFs were less dramatic and consisted of increased actin fluorescence of partially disintegrated MF arrays in cells of the DTZ. These tissue- and development-specific alterations to the cytoskeleton were preceded by and/or coincided with Al-induced depolarization of the plasma membrane and with callose formation, particularly in the outer cortex cells of the DTZ. Longer Al exposures (>6 h) led to progressively more severe lesions to the MT cytoskeleton in the epidermis and two to three outer cortex cell files. Our data show that the cytoskeleton in the cells of the DTZ is especially sensitive to Al, consistent with the recently proposed specific Al sensitivity of this unique, apical maize root zone.
Abstract:
Human area V1 offers an excellent opportunity to study, using functional MRI, a range of properties in a specific cortical visual area, whose borders are defined objectively and convergently by retinotopic criteria. The retinotopy in V1 (also known as primary visual cortex, striate cortex, or Brodmann’s area 17) was defined in each subject by using both stationary and phase-encoded polar coordinate stimuli. Data from V1 and neighboring retinotopic areas were displayed on flattened cortical maps. In additional tests we revealed the paired cortical representations of the monocular “blind spot.” We also activated area V1 preferentially (relative to other extrastriate areas) by presenting radial gratings alternating between 6% and 100% contrast. Finally, we showed evidence for orientation selectivity in V1 by measuring transient functional MRI increases produced at the change in response to gratings of differing orientations. By systematically varying the orientations presented, we were able to measure the bandwidth of the orientation “transients” (45°).