Abstract:
Deterministic safety analysis (DSA) is the procedure used to design safety-related systems, structures and components of nuclear power plants. DSA is based on computational simulations of a set of hypothetical accidents representative of the facility, called design basis scenarios (DBS). Regulatory bodies specify a set of safety magnitudes that must be calculated in the simulations, and establish regulatory acceptance criteria (RAC), which are restrictions that the values of those magnitudes must satisfy. Methodologies for performing DSA can be of two types: conservative or realistic. Conservative methodologies use markedly pessimistic predictive models and assumptions and are, for that reason, relatively simple; they do not need to include an uncertainty analysis of their results. Realistic methodologies are based on realistic, generally mechanistic, assumptions and predictive models, and are supplemented with an uncertainty analysis of their main results. They are also called BEPU ("Best Estimate Plus Uncertainty") methodologies. In them, uncertainty is represented, basically, in a probabilistic way. For conservative methodologies, the RAC are simply restrictions on the calculated values of the safety magnitudes, which must remain confined within an "acceptance region" of their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain variables. The thesis develops how uncertainty is to be introduced into the RAC. Basically, confinement to the same acceptance region, established by the regulator, is maintained, but strict fulfillment is not demanded; rather, a high level of certainty is required. In the formalism adopted, this is understood as a "high level of probability", which corresponds to the calculation uncertainty of the safety magnitudes.
Such uncertainty can be regarded as originating in the inputs to the calculation model and propagated through that model. The uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which are used to incorporate the uncertainty due to model imperfection. Fulfillment of the RAC is therefore required with a probability not less than a value P0 close to 1 and defined by the regulator (probability or coverage level). However, the calculation uncertainty of the magnitude is not the only uncertainty involved. Even if a model (its basic equations) is perfectly known, the input-output mapping it produces is known only imperfectly (unless the model is very simple). The uncertainty due to ignorance about the action of the model is called epistemic; it can also be described as uncertainty about the propagation. As a consequence, the probability of fulfilling the RAC cannot be known perfectly; it is itself an uncertain magnitude. This justifies another term used here for this epistemic uncertainty: metauncertainty. The RAC must incorporate both types of uncertainty: that of the calculation of the safety magnitude (here called aleatory) and that of the calculation of the probability (called epistemic, or metauncertainty). The two uncertainties can be introduced in two ways: separately or combined. In either case, the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. If a second-order probability is employed, the regulator must impose a second level of fulfillment, referring to the epistemic uncertainty. It is called the regulatory confidence level, and must be a number close to 1. The pair formed by the two regulatory levels (probability and confidence) is called the regulatory tolerance level.
The Thesis argues that the best way to construct the BEPU RAC is by separating the uncertainties, for two reasons. First, experts advocate the separate treatment of aleatory and epistemic uncertainty. Second, the separated RAC is (except in exceptional cases) more conservative than the combined RAC. The BEPU RAC is nothing but a hypothesis about a probability distribution, and it is verified statistically. The thesis classifies the statistical methods for verifying the BEPU RAC into 3 categories, according to whether they are based on the construction of tolerance regions, on quantile estimation, or on probability estimation (whether of fulfillment or of exceedance of regulatory limits). According to recently proposed terminology, the first two categories correspond to Q-methods and the third to P-methods. The purpose of the classification is not to inventory the methods in each category, which are numerous and varied, but to relate the categories and cite the most widely used methods and those best regarded from a regulatory standpoint. Special mention is made of the method most used to date: Wilks' nonparametric method, together with its extension by Wald to the multidimensional case. Its P-method counterpart, the Clopper-Pearson interval, typically ignored in the BEPU realm, is described. In this context, the problem of the computational cost of uncertainty analysis is addressed. The Wilks, Wald and Clopper-Pearson methods require that the random sample used have a minimum size, which grows with the required tolerance level. The sample size is an indicator of computational cost, because each sample element is a value of the safety magnitude, which requires a calculation with predictive models.
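The minimum sample size behind Wilks' method, and its agreement with the Clopper-Pearson bound in the all-success case, can be illustrated with a short sketch (the function names below are ours, not the thesis's; only the standard first-order formulas are used):

```python
import math

def wilks_min_size(p0: float, beta: float) -> int:
    """Smallest N such that the sample maximum is a one-sided upper
    tolerance limit with coverage p0 and confidence beta, i.e. the
    smallest N satisfying 1 - p0**N >= beta (first-order Wilks)."""
    return math.ceil(math.log(1.0 - beta) / math.log(p0))

def clopper_pearson_lower_all_success(n: int, conf: float) -> float:
    """Exact Clopper-Pearson lower confidence bound on the success
    probability when all n trials succeed: p_low = (1 - conf)**(1/n)."""
    return (1.0 - conf) ** (1.0 / n)

n9595 = wilks_min_size(0.95, 0.95)   # the classical 95/95 case
print(n9595)                         # -> 59
print(round(clopper_pearson_lower_all_success(n9595, 0.95), 4))  # -> 0.9505
```

With 59 runs all inside the acceptance region, the exact Clopper-Pearson lower bound on the fulfillment probability is just above 0.95, which is why the two methods prescribe the same minimum sample size at the 95/95 tolerance level.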
Special emphasis is placed on the computational cost when the safety magnitude is multidimensional, that is, when the RAC is a multiple criterion. It is shown that, when the different components of the magnitude are obtained from the same calculation, the multidimensional character introduces no additional computational cost. This proves the falsity of a common belief in the BEPU realm: that the multidimensional problem can only be tackled through the Wald extension, whose computational cost grows with the dimension of the problem. In the (occasional) case where each component of the magnitude is calculated independently of the others, the influence of the dimension on the cost cannot be avoided. The first BEPU methodologies performed the uncertainty propagation through a surrogate model (metamodel or emulator) of the predictive model or code. The goal of the metamodel is not predictive capability, which is much inferior to that of the original model, but to replace the latter exclusively in the propagation of uncertainties. To this end, the metamodel must be built with the input parameters that contribute most to the uncertainty of the result, which requires a prior importance or sensitivity analysis. Because of its simplicity, the surrogate model entails almost no computational cost and can be studied exhaustively, for example by means of random samples. As a consequence, the epistemic uncertainty or metauncertainty disappears, and the BEPU criterion for metamodels becomes a simple probability. In a quick summary, the regulator will more readily accept the statistical methods that need the fewest assumptions: exact methods over approximate ones, nonparametric over parametric, and frequentist over Bayesian. The BEPU criterion is based on a second-order probability.
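A minimal sketch of propagation through a surrogate, under invented assumptions (the "expensive" model, the training grid, the input distribution and the acceptance limit below are all hypothetical, chosen only to show the mechanics): a handful of code runs train a cheap piecewise-linear emulator, which is then sampled exhaustively by Monte Carlo.

```python
import bisect
import math
import random

def expensive_code(x):
    # stand-in for a costly predictive-code run (hypothetical model)
    return 300.0 + 80.0 * math.tanh(x)

# a handful of "runs" of the code provide the training set
xs = [i / 4.0 for i in range(-8, 9)]          # 17 training inputs
ys = [expensive_code(x) for x in xs]

def metamodel(x):
    # piecewise-linear emulator built from the training runs
    i = min(max(bisect.bisect_left(xs, x), 1), len(xs) - 1)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

# exhaustive Monte Carlo on the cheap emulator
random.seed(1)
LIMIT = 350.0                                  # hypothetical acceptance limit
N = 100_000
hits = sum(metamodel(random.gauss(0.0, 0.5)) < LIMIT for _ in range(N))
print(hits / N)   # estimated fulfillment probability
```

Because the emulator costs almost nothing per evaluation, the sampling uncertainty of the estimated probability can be driven arbitrarily low, which is the sense in which the metauncertainty vanishes and the criterion reduces to a simple probability.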
The probability that the safety magnitudes lie in the acceptance region can not only be regarded as a probability of success or a degree of fulfillment of the RAC. It also has a metric interpretation: it represents a distance (within the range of the magnitudes) from the calculated magnitude to the regulatory acceptance limits. This interpretation gives rise to a definition proposed in this thesis: that of the probabilistic safety margin. Given a scalar safety magnitude with an upper acceptance limit, the safety margin (SM) between two values A and B of the magnitude is defined as the probability that A is less than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is dimensionless, it can be combined according to the laws of probability, and it is easily generalized to several dimensions. Moreover, it is not symmetric. The term safety margin can be applied to different situations: the distance from a calculated magnitude to a regulatory limit (licensing margin); the distance from the real value of the magnitude to its calculated value (analytical margin); the distance from a regulatory limit to the damage threshold of a barrier (barrier margin). This idea of representing distances (in the range of safety magnitudes) by probabilities can be applied to the study of conservativeness. The analytical margin can be interpreted as the degree of conservativeness (DG) of the calculation methodology. Using probability, the conservativeness of tolerance limits of a magnitude can be quantified, and conservativeness indicators can be established to compare different methods of constructing tolerance limits and regions. One topic that has never been rigorously addressed is the validation of BEPU methodologies.
Like any other calculation tool, a methodology, before it can be applied to licensing analyses, must be validated by comparing its predictions with real values of the safety magnitudes. Such a comparison can only be made in accident scenarios for which measured values of the safety magnitudes exist, and this occurs, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that they are fulfilled by the real values of the safety magnitudes, not only by their calculated values. The thesis proves that a sufficient condition for this ultimate goal is the joint fulfillment of 2 criteria: the licensing BEPU RAC and an analogous criterion applied to validation. The validation criterion must be demonstrated in experimental scenarios and extrapolated to nuclear power plants. The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (the DG). These minimum levels are basically complementary: the higher one, the lower the other. Current regulatory practice imposes a high value on the licensing margin, which means the required DG is small. Adopting lower values for P0 implies a weaker requirement on RAC fulfillment and, in turn, a stronger requirement on the DG of the methodology. It is important to note that the higher the minimum value of the margin (licensing or analytical), the higher the computational cost of demonstrating it. Thus the computational efforts are also complementary: if one of the levels is high (tightening the corresponding criterion), the computational cost increases. If a medium value of P0 is adopted, the required DG is also medium, so the methodology need not be very conservative, and the total computational cost (licensing plus validation) can be optimized.
ABSTRACT Deterministic Safety Analysis (DSA) is the procedure used in the design of safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes, and define the regulatory acceptance criteria (RAC) that must be fulfilled by them. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies make use of pessimistic models and assumptions, and are relatively simple. They do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions, and need to be supplemented with uncertainty analyses of their results. They are also termed BEPU ("Best Estimate Plus Uncertainty") methodologies, and are typically based on a probabilistic representation of the uncertainty. For conservative methodologies, the RAC are simply the restriction of calculated values of safety magnitudes to "acceptance regions" defined on their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain. In the present Thesis, the inclusion of uncertainty in the RAC is studied. Basically, the restriction to the acceptance region must be fulfilled "with a high certainty level"; specifically, a high probability of fulfillment is required. The calculation uncertainty of the magnitudes is considered as propagated from the inputs through the predictive model. Uncertain inputs include empirical model parameters, which store the uncertainty due to model imperfection. Fulfillment of the RAC is required with a probability not less than a value P0 close to 1 and defined by the regulator (probability or coverage level). Calculation uncertainty is not the only one involved. Even if a model (i.e.
the basic equations) is perfectly known, the input-output mapping produced by the model is imperfectly known (unless the model is very simple). This ignorance is called epistemic uncertainty, and it is associated with the process of propagation. In fact, it is propagated to the probability of fulfilling the RAC. Another term used in the Thesis for this epistemic uncertainty is metauncertainty. The RAC must include the two types of uncertainty: one for the calculation of the magnitude (aleatory uncertainty); the other, for the calculation of the probability (epistemic uncertainty). The two uncertainties can be taken into account separately, or they can be combined. In either case the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if both are combined, a single probability is used. In the first case, the regulator must define a level of fulfillment for the epistemic uncertainty, termed the regulatory confidence level, as a value close to 1. The pair of regulatory levels (probability and confidence) is termed the regulatory tolerance level. The Thesis concludes that the adequate way of setting the BEPU RAC is by separating the uncertainties. There are two reasons to do so: experts recommend the separation of aleatory and epistemic uncertainty; and the separated RAC is in general more conservative than the joint RAC. The BEPU RAC is a hypothesis on a probability distribution, and must be statistically tested. The Thesis classifies the statistical methods to verify the RAC fulfillment into 3 categories: methods based on tolerance regions, on quantile estimators, and on probability (of success or failure) estimators. The former two have been termed Q-methods, whereas those in the third category are termed P-methods. The purpose of our categorization is not to make an exhaustive survey of the very numerous existing methods.
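The second-order probability underlying the separated RAC can be pictured with a nested Monte Carlo sketch. All distributions, limits and sample sizes below are invented for illustration: an outer loop samples epistemic states (here, an uncertain mean), an inner loop samples the aleatory variability and yields one fulfillment probability per epistemic state, so the result is a distribution of probabilities rather than a single number.

```python
import random
import statistics

random.seed(42)
LIMIT = 2.0                      # hypothetical regulatory limit

def fulfillment_prob(mu, n_inner=2000):
    """Inner (aleatory) loop: estimate P(magnitude < LIMIT) given
    one epistemic state mu of the uncertain model parameter."""
    return sum(random.gauss(mu, 0.5) < LIMIT for _ in range(n_inner)) / n_inner

# outer (epistemic) loop: the parameter mu is itself uncertain
probs = [fulfillment_prob(random.uniform(0.5, 1.0)) for _ in range(200)]

# a separated RAC then asks that, say, 95 % of the epistemic
# distribution of fulfillment probabilities lie above P0
probs.sort()
p0_with_confidence = probs[int(0.05 * len(probs))]   # 5th percentile
print(round(statistics.mean(probs), 3), round(p0_with_confidence, 3))
```

The 5th percentile plays the role of the regulatory confidence level: the pair (P0, 95 %) is one regulatory tolerance level in the sense described above.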
Rather, the goal is to relate the three categories and examine the most used methods from a regulatory standpoint. Special mention is deserved by the most widely used method, due to Wilks, and its extension to multidimensional variables (due to Wald). The P-method counterpart of Wilks' method is the Clopper-Pearson interval, typically ignored in the BEPU realm. The problem of the computational cost of an uncertainty analysis is tackled. The Wilks, Wald and Clopper-Pearson methods require a minimum sample size, which is a growing function of the tolerance level. The sample size is an indicator of the computational cost, because each element of the sample must be calculated with the predictive models (codes). When the RAC is a multiple criterion, the safety magnitude becomes multidimensional. When all its components are outputs of the same calculation, the multidimensional character does not introduce additional computational cost. In this way, an extended idea in the BEPU realm, stating that the multi-D problem can only be tackled with the Wald extension, is proven to be false. When the components of the magnitude are calculated independently, the influence of the problem dimension on the cost cannot be avoided. The earliest BEPU methodologies performed the uncertainty propagation through a surrogate model of the code, also termed emulator or metamodel. The goal of a metamodel is not predictive capability, clearly inferior to that of the original code, but the capacity to propagate uncertainties at a lower computational cost. The emulator must contain the input parameters contributing the most to the output uncertainty, and this requires a prior importance analysis. The surrogate model is practically inexpensive to run, so that it can be exhaustively analyzed through Monte Carlo. Therefore, the epistemic uncertainty due to sampling is reduced to almost zero, and the BEPU RAC for metamodels includes a simple probability.
The regulatory authority will tend to accept the use of statistical methods which need a minimum of assumptions: exact, nonparametric and frequentist methods rather than approximate, parametric and Bayesian methods, respectively. The BEPU RAC is based on a second-order probability. The probability of the safety magnitudes being inside the acceptance region is a success probability and can be interpreted as a degree of fulfillment of the RAC. Furthermore, it has a metric interpretation, as a distance (in the range of the magnitudes) from calculated values of the magnitudes to the regulatory acceptance limits. A probabilistic definition of safety margin (SM) is proposed in the thesis. The SM from a value A to another value B of a safety magnitude is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is nondimensional, ranges in the interval (0,1) and can be easily generalized to multiple dimensions. Furthermore, probabilistic SM combine according to the laws of probability. And a basic property: probabilistic SM are not symmetric. There are several types of SM: the distance from a calculated value to a regulatory limit (licensing margin); from the real value to the calculated value of a magnitude (analytical margin); or from the regulatory limit to the damage threshold (barrier margin). These representations of distances (in the magnitudes' range) as probabilities can be applied to the quantification of conservativeness. The analytical margin can be interpreted as the degree of conservativeness (DG) of the computational methodology. Conservativeness indicators are established in the Thesis, useful in the comparison of different methods of constructing tolerance limits and regions. There is a topic which has not been rigorously tackled to date: the validation of BEPU methodologies.
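The probabilistic margin SM(A, B) = P(A less severe than B) can be estimated by plain Monte Carlo from the two uncertainty distributions. The normal distributions, means and sample sizes below are illustrative assumptions, not the thesis's models; the sketch also makes the asymmetry visible, since SM(A, B) and SM(B, A) sum to 1 for continuous quantities.

```python
import random

def safety_margin(sample_a, sample_b, n=100_000, seed=0):
    """Monte Carlo estimate of SM(A, B) = P(A < B), where sample_a
    and sample_b each draw one value of the uncertain quantity."""
    rng = random.Random(seed)
    return sum(sample_a(rng) < sample_b(rng) for _ in range(n)) / n

# illustrative case: uncertain calculated magnitude A, uncertain limit B
calc  = lambda rng: rng.gauss(0.0, 1.0)   # calculated value with uncertainty
limit = lambda rng: rng.gauss(3.0, 1.0)   # limit with its own uncertainty

m_ab = safety_margin(calc, limit)   # licensing-type margin, near 0.98 here
m_ba = safety_margin(limit, calc)   # reversed margin: SM is not symmetric
print(round(m_ab, 3), round(m_ba, 3))
```

Because the margin is a probability, it is dimensionless and in (0,1), and margins along a chain of quantities can be composed with the ordinary rules of probability, as the definition above requires.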
Before being applied in licensing, methodologies must be validated, on the basis of comparisons of their predictions with real values of the safety magnitudes. Real data are obtained, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that the real values (not only the calculated values) fulfill them. In the Thesis it is proved that a sufficient condition for this goal is the conjunction of 2 criteria: the BEPU RAC and an analogous criterion for validation. And this last criterion must be proved in experimental scenarios and extrapolated to NPPs. The licensing RAC requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DG). These minimum values are basically complementary; the higher one of them, the lower the other. Current regulatory practice sets a high value on the licensing margin, so that the required DG is low. The possible adoption of lower values for P0 would imply a weaker exigence on the RAC fulfillment and, on the other hand, a higher exigence on the conservativeness of the methodology. It is important to highlight that a higher minimum value of the licensing or analytical margin requires a higher computational cost. Therefore, the computational efforts are also complementary. If medium levels are adopted, the required DG is also medium, and the methodology does not need to be very conservative. The total computational effort (licensing plus validation) could be optimized.
Abstract:
The pararotor is a decelerator device based on the autorotation of a rotating wing. When it is dropped, it generates an aerodynamic force parallel to the main motion direction, acting as a decelerating force. In this paper, the rotational motion equations are shown for the vertical flight without any lateral wind component and some simplifying assumptions are introduced to obtain analytic solutions of the motion. First, the equilibrium state is obtained as a function of the main parameters. Then the equilibrium stability is analyzed. The motion stability depends on two nondimensional parameters, which contain geometric, inertia, and aerodynamic characteristics of the device. Based on these two parameters a stability diagram can be defined. Some stability regions with different types of stability trajectories (nodes, spirals, focuses) can be identified for spinning motion around axes close to the major, minor, and intermediate principal axes. It is found that the blades contribute to stability in a case of spin around the intermediate principal inertia axis, which is otherwise unstable. Subsequently, the equations for determining the angles of nutation and spin of the body are obtained, thus defining the orientation of the body for a stationary motion and the parameters on which that position depends.
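The classification of equilibrium trajectories mentioned above (nodes, spirals/focuses, and the unstable case) follows, for a linearized system, from the trace and determinant of the 2x2 system matrix. The sketch below is a generic illustration of that standard criterion, not the pararotor's actual equations of motion or its two nondimensional parameters.

```python
def classify_equilibrium(a11, a12, a21, a22):
    """Classify the origin of x' = A x for a 2x2 matrix A using the
    trace/determinant criterion of linear stability analysis."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = tr * tr - 4.0 * det       # discriminant of the eigenvalue equation
    if det < 0:
        return "saddle (unstable)"   # real eigenvalues of opposite sign
    kind = "node" if disc >= 0 else "spiral"
    if tr < 0:
        return f"stable {kind}"
    if tr > 0:
        return f"unstable {kind}"
    return "center"

print(classify_equilibrium(-1.0, 0.0, 0.0, -2.0))   # -> stable node
print(classify_equilibrium(-0.5, 2.0, -2.0, -0.5))  # -> stable spiral
print(classify_equilibrium(1.0, 0.0, 0.0, -1.0))    # -> saddle (unstable)
```

A stability diagram such as the one in the abstract is obtained by evaluating a criterion of this kind over a grid of the two governing nondimensional parameters.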
Abstract:
The work of Emilio Pérez Piñero, developed between 1961 and 1972, the year in which he died in a traffic accident while returning from Figueras, centers mainly on deployable and demountable artifacts, executed as prototypes that in the present work have been divided into two groups: the reticular dome and the infrastructure. He was therefore unable to attend the 1972 UIA Congress to receive the Auguste Perret Prize for technological innovation, which in previous years had been awarded to Félix Candela, Jean Prouvé, Hans Scharoun and Frei Otto, and which on that occasion had to be collected by his widow. Parameters such as mobility, indeterminacy, interchangeability, obsolescence and others analyzed in the present work appear throughout his oeuvre, since many of his artifacts are located in non-places and have an itinerant character; their rapid assembly and disassembly is therefore indispensable, resolved sometimes through demountability and sometimes through foldability. Although Piñero may seem an autarkic figure, the truth is that during the decade in which his work is concentrated there was an explosion around the archetype that would come to be generically called the `artifact´, conceptually linked to the parameters defined here. We understand an artifact as a material object made by one or more people to fulfill a function; it is synonymous with machine and apparatus and derives from the Latin words ars or artis (technique) and facto (made), designating objects whose manufacture requires some skill. The Latin term `ars´ encompasses both techniques and the arts, which is not the case with the Spanish term arte that derives from it.
The utopian movements that share the decade with Piñero use the infrastructural, lightweight, high-tech archetype to criticize the institutionalized Modern Movement through an architecture closer to science fiction. They all share a certain obsession with mobility, linked to the idea of flexible, dynamic, nomadic space. This concept of neo-nomadism, which represents a dynamic way of inhabiting, brings together the new ways of living in which social and geographical mobility are commonplace. Nomadism, moreover, is understood as synonymous with democracy and freedom. Architecture goes from being heavy, static and permanent to being a dynamic element in continuous movement. Sometimes, with biological connotations, artifacts are assimilated to living organisms and endowed with their properties of growth and energy autonomy, accumulating around megastructures into which they are `plugged´. In this attempt to give mobility to the immovable, living, modifiable structures are sought that grow in an assimilation of natural laws, using the parameters of metamorphosis, symbiosis and change. These avant-garde movements also have certain political and social connotations based on freedom and mobility, and they reject institutionalized consumerism and architecture as an instrument of consumption, as an object of use in mass culture: the political character of self-management, of customization as a design parameter, of energy self-sufficiency, anticipating the arrival of the energy crisis of 1973. The object of this work is to relate the concepts that appear strongly around the decade of the 1960s to the work of Emilio Pérez Piñero: parameters found as concepts in the avant-garde and utopian groups, which were in turn strongly influenced by the figures of the engineer Richard Buckminster Fuller and the architect Konrad Wachsmann.
The possible influence of Fuller's work, mainly the prototype called the reticular dome, on the work of Pérez Piñero and his contemporaries will be analyzed, examining his theoretical thinking on parameters such as energy, mainly in the theories relating to Synergetics. The term coined by Richard B. Fuller is a contraction of a longer one grouping three English words: synergetic-energetic geometry. Synergy is cooperation, that is, the result of the joint action of two or more causes, but with an effect greater than the sum of those causes. The second term, energetic geometry, refers first to geometry, since it develops the reference system that nature uses to build its systems, and second to energy, since it must also be the system that establishes the most economical relations, using the minimum of energy. The impact of the prototype called Infrastructure is also analyzed, a term coined by Yona Friedman and based structurally and conceptually on Konrad Wachsmann's developments on large structures. The German architect disseminated his knowledge in seminars given all over the world, such as the one held in Tokyo and known as Wachsmann's Seminar, in which some members of the Metabolist group took part; they would astonish the world with their realizations at the Osaka exposition of 1970.
The time interval from 1961 to 1972 refers to the span in which Pérez Piñero produced his architectural work, beginning in 1961, when he won the competition convened in London by the UIA (International Union of Architects) with the project known as the Travelling Theatre, and ending in 1972, when he died returning from Figueras, where he was carrying out two commissions for Salvador Dalí: the covering of the stage of the future Teatro-Museo Salvador Dalí and the Hypercubic Stained-Glass Window that was to close the mouth of that stage. This doctoral thesis, presented under the title `Energetic Artifacts. From Fuller to Piñero (1961-1972)´, intends to link the work of Emilio Pérez Piñero with that of the neo-avant-gardes produced by a series of architects operating internationally. These links are established in a general way, where, through a series of strategies following the methodology described later, relations are sought between the Spanish author's work and some of the most significant movements appearing in that decade, and in a specific way, by establishing relations with the works and ideas of the authors belonging to those movements in whom these relations are most evident. The object of the present work is to analyze and explain the work of the architect Emilio Pérez Piñero, spatially located in Spanish territory, from the point of view of these movements, in order subsequently to determine whether points in common exist and whether the Spanish architect shares the decade not only temporally but also conceptually, and therefore uses the set of ideas employed by his contemporaries who form part of the neo-avant-gardes of the 1960s.
ABSTRACT The work of Emilio Pérez Piñero was developed between 1961 and 1972, when he died in a car accident coming back from Figueres, where he was building a geodesic dome to close the building that encloses Dalí's museum. All his work is centered mainly on artifacts that could be collapsible and removable, taking the two prototypes described in this work as recurrent elements in all his creation: the reticular dome and the infrastructure, both strongly influenced by the work of Richard B. Fuller and Konrad Wachsmann. Emilio Pérez Piñero could not receive the Auguste Perret Prize awarded by the UIA in 1972, which in previous years had been received by architects such as Félix Candela, Jean Prouvé, Hans Scharoun and Frei Otto; on that occasion Pérez Piñero's widow accepted it because of his death. Parameters like mobility, changeability, expendability, indetermination and others appear throughout his work. All the inventions Piñero patented and all the artifacts he created are usually located in no-places, because they have a shifting identity. This kind of building has to be set on site quickly, a problem solved in terms of foldability or demountability. In the decade on which his work focuses, an explosion occurred around this archetype, generically called the artifact, which is usually linked to mobility. We understand an artifact as a material object made by one or more people to work in a particular way. It is sometimes equated with the terms machinery and apparatus, and it derives from the Latin words `ars´ or `artis´, meaning technique, and `facto´ (made). We use this term to refer to objects whose manufacture requires some skill; in fact, the Latin word `ars´ covers both the techniques and the arts, which does not occur with the Castilian term `arte´ that derives from it and means only art.
The term neo-nomadic is a relatively new name for a dynamic way of life, commonly referring to new forms of life in which social and geographical mobility are common. Nomadic, in turn, can be understood as a synonym for democracy and freedom. Architecture is no longer to be heavy and static, but a dynamic element on the move. The neo-avant-garde movement that shares the decade with Piñero uses this infrastructural archetype, light and high-tech, to criticize the institutionalized Modern Movement through an architecture linked to science fiction. They all share an obsession with mobility, a concept connected to the terms `dynamic´, `nomadic´, `flexible´, etc. Sometimes, with biological connotations, the utopians assimilate artifacts to living organisms and give them their properties of growth and energy autonomy, growing around megastructures into which they are plugged. In this attempt to provide mobility to the inert, living structures and the possibility of change are sought, in order to make them grow like living organisms and assimilate the natural laws of growth. According to a definition of architecture provided by Fernández-Galiano, who calls it an `exosomatic artifact´, architecture is an artifact of the human environment that regulates natural energy flows and channels the energy stored in fuels for the benefit of the living beings that inhabit it. It is also true that during the sixties a new environmental awareness formed in public opinion, owing to the disproportionate exploitation and use of energy resources, the acceleration of technological processes and mass consumption. Consequently a new concept was born: energy autonomy, very close to the rational use of natural energy. This concept would be culturally assimilated with the requirement of independence not only in management but also in building construction, until arriving at energy autonomy.
Individuals become energy consumers who, in turn, can feed the energy they produce back into the system and so `live in an eco-mode way´. The objective of this research is to analyze the parameters and concepts that come into view around the decade and relate them to the work of Pérez Piñero: terms strongly present in the avant-garde movements of the period, among a generation of young architects strongly influenced by Richard B. Fuller and Konrad Wachsmann. In particular, the thesis analyzes how important the influence of Buckminster Fuller's work and his theoretical texts on energy was on the work of Pérez Piñero and his contemporaries. The term synergetics was coined by Fuller from the words synergy and energetic geometry. Synergy is the cooperation or interaction of two or more agents to produce an effect greater than the sum of their separate effects; energetic geometry refers to the geometries that nature uses in its constructions, always with low energy consumption. On the other hand, Wachsmann's influence around the prototype called the infrastructure has been analyzed. The German architect developed knowledge of huge structures that he spread around the world through the seminars he conducted. One of these was Wachsmann's seminar in Tokyo, in which some of the members of the Metabolist group took part; these young architects would later surprise the world with their artifacts at the World Exposition in Osaka in 1970. Pérez Piñero produced his architectural work between 1961 and 1972. It began in 1961, when he won first prize with his Mobile Theatre project in the competition organized by the UIA in London. In 1972 he was also granted the Auguste Perret Prize by the UIA; he could not accept it because he died shortly before, in a car accident on his way back from Figueres, where he was designing two projects for Dalí.
Under the title `Energetic Artifacts. From Fuller to Piñero (1961-1972)´, this thesis relates the work of Emilio Pérez Piñero to the neo-avant-garde produced by the generation of young architects who shared his time. Several strategies have been used to establish relationships between them; they are described in the present work so as to set up a method that allows us to relate the work and ideas of the neo-avant-garde architects with those of Piñero. This work aims to analyze and explain the work of Pérez Piñero from the point of view of the international generation of architects operating at the same time, and finally to determine whether Piñero shared with them not only the period but also its concepts, ideas and architectural parameters.
Abstract:
In electric vehicles, passengers sit very close to an electric system of significant power. The high currents carried in these vehicles mean that the passengers could be exposed to significant magnetic fields. One of the electric devices present in the power train is the battery. In this paper, a methodology to evaluate the magnetic field created by these batteries is presented. First, the magnetic field generated by a single battery is analyzed using finite element simulations. Results are compared with laboratory measurements taken from a real battery in order to validate the model. After this, the magnetic field created by a complete battery pack is estimated and the results are discussed.
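As a rough point of reference for the orders of magnitude involved (this is not the paper's finite element model, just the textbook long-straight-conductor approximation), the field around a power-train cable can be sketched as:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A


def field_infinite_wire(current_a: float, distance_m: float) -> float:
    """Biot-Savart result for a long straight conductor: B = mu0*I/(2*pi*r)."""
    return MU0 * current_a / (2 * math.pi * distance_m)


# Illustrative numbers, not from the paper: a 100 A conductor seen from 10 cm.
b = field_infinite_wire(100.0, 0.10)
print(f"B = {b * 1e6:.0f} uT")  # prints "B = 200 uT"
```

Real battery packs have distributed, folded current paths, which is why the paper resorts to finite element simulation rather than closed-form expressions like this one.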
Abstract:
Subunits a and c of Fo are thought to cooperatively catalyze proton translocation during ATP synthesis by the Escherichia coli F1Fo ATP synthase. Optimizing mutations in subunit a at residues A217, I221, and L224 improves the partial function of the cA24D/cD61G double mutant and, on this basis, these three residues were proposed to lie on one face of a transmembrane helix of subunit a, which then interacted with the transmembrane helix of subunit c anchoring the essential aspartyl group. To test this model, in the present work Cys residues were introduced into the second transmembrane helix of subunit c and the predicted fourth transmembrane helix of subunit a. After treating the membrane vesicles of these mutants with Cu(1,10-phenanthroline)2SO4 at 0°, 10°, or 20°C, strong a–c dimer formation was observed at all three temperatures in membranes of 7 of the 65 double mutants constructed, i.e., in the aS207C/cI55C, aN214C/cA62C, aN214C/cM65C, aI221C/cG69C, aI223C/cL72C, aL224C/cY73C, and aI225C/cY73C double mutant proteins. The pattern of cross-linking aligns the helices in a parallel fashion over a span of 19 residues with the aN214C residue lying close to the cA62C and cM65C residues in the middle of the membrane. Lesser a–c dimer formation was observed in nine other double mutants after treatment at 20°C in a pattern generally supporting that indicated by the seven landmark residues cited above. Cross-link formation was not observed between helix-1 of subunit c and helix-4 of subunit a in 19 additional combinations of doubly Cys-substituted proteins. These results provide direct chemical evidence that helix-2 of subunit c and helix-4 of subunit a pack close enough to each other in the membrane to interact during function. The proximity of helices supports the possibility of an interaction between Arg210 in helix-4 of subunit a and Asp61 in helix-2 of subunit c during proton translocation, as has been suggested previously.
Abstract:
Tetraethylammonium (TEA+) is widely used for reversible blockade of K channels in many preparations. We noticed that intracellular perfusion of voltage-clamped squid giant axons with a solution containing K+ and TEA+ irreversibly decreased the potassium current when there was no K+ outside. Five minutes of perfusion with 20 mM TEA+, followed by removal of TEA+, reduced potassium current to <5% of its initial value. The irreversible disappearance of K channels with TEA+ could be prevented by addition of ≥ 10 mM K+ to the extracellular solution. The rate of disappearance of K channels followed first-order kinetics and was slowed by reducing the concentration of TEA+. Killing is much less evident when an axon is held at −110 mV to tightly close all of the channels. The longer-chain TEA+ derivative decyltriethylammonium (C10+) had irreversible effects similar to TEA+. External K+ also protected K channels against the irreversible action of C10+. It has been reported that removal of all K+ internally and externally (dekalification) can result in the disappearance of K channels, suggesting that binding of K+ within the pore is required to maintain function. Our evidence further suggests that the crucial location for K+ binding is external to the (internal) TEA+ site and that TEA+ prevents refilling of this location by intracellular K+. Thus in the absence of extracellular K+, application of TEA+ (or C10+) has effects resembling dekalification and kills the K channels.
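The first-order kinetics reported above put a lower bound on the rate constant: if fewer than 5% of channels survive 5 minutes of TEA+ exposure, then exp(-k*5) < 0.05, i.e. k > ln(20)/5 per minute. A minimal back-of-envelope sketch (the numbers come from the abstract; the exponential model is the standard first-order form, not the authors' fitting code):

```python
import math


def remaining_fraction(k_per_min: float, t_min: float) -> float:
    """First-order loss of channels: N(t)/N0 = exp(-k*t)."""
    return math.exp(-k_per_min * t_min)


# Smallest rate constant consistent with <5% of channels left after 5 min:
k_min = math.log(1 / 0.05) / 5.0
print(f"k > {k_min:.2f} per minute")  # prints "k > 0.60 per minute"
```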
Abstract:
It is clear that the initial analysis of visual motion takes place in the striate cortex, where directionally selective cells are found that respond to local motion in one direction but not in the opposite direction. Widely accepted motion models postulate as inputs to directional units two or more cells whose spatio-temporal receptive fields (RFs) are approximately 90° out of phase (quadrature) in space and in time. Simple cells in macaque striate cortex differ in their spatial phases, but evidence is lacking for the varying time delays required for two inputs to be in temporal quadrature. We examined the space-time RF structure of cells in macaque striate cortex and found two subpopulations of (nondirectional) simple cells, some that show strongly biphasic temporal responses, and others that are weakly biphasic if at all. The temporal impulse responses of these two classes of cells are very close to 90° apart, with the strongly biphasic cells having a shorter latency than the weakly biphasic cells. A principal component analysis of the spatio-temporal RFs of directionally selective simple cells shows that their RFs could be produced by a linear combination of two components; these two components correspond closely in their respective latencies and biphasic characters to those of strongly biphasic and weakly biphasic nondirectional simple cells, respectively. This finding suggests that the motion system might acquire the requisite temporal quadrature by combining inputs from these two classes of nondirectional cells (or from their respective lateral geniculate inputs, which appear to be from magno and parvo lateral geniculate cells, respectively).
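Temporal quadrature, as invoked above, means two responses a quarter-cycle apart; such signals are mutually orthogonal over whole periods. A toy illustration of that property (idealized sinusoids standing in for the biphasic impulse responses; this is not the authors' analysis):

```python
import math

N = 1000  # samples spanning exactly two full cycles
ts = [2 * 2 * math.pi * i / N for i in range(N)]

strong = [math.sin(t) for t in ts]                # toy "strongly biphasic" response
weak = [math.sin(t - math.pi / 2) for t in ts]    # same shape, delayed by 90 degrees

# Signals 90 degrees apart have (near-)zero correlation over whole periods:
dot = sum(a * b for a, b in zip(strong, weak)) / N
print(abs(dot) < 1e-9)  # prints True
```

A pair of such temporally shifted inputs is exactly what quadrature motion models require, which is why the observed ~90° separation between the two cell classes is significant.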
Abstract:
Upstream A-tracts stimulate transcription from a variety of bacterial promoters, and this has been widely attributed to direct effects of the intrinsic curvature of A-tract-containing DNA. In this work we report experiments that suggest a different mechanism for the effects of upstream A-tracts on transcription. The similarity of A-tract-containing sequences to the adenine- and thymine-rich upstream recognition elements (UP elements) found in some bacterial promoters suggested that A-tracts might increase promoter activity by interacting with the α subunit of RNA polymerase (RNAP). We found that an A-tract-containing sequence placed upstream of the Escherichia coli lac or rrnB P1 promoters stimulated transcription both in vivo and in vitro, and that this stimulation required the C-terminal (DNA-binding) domain of the RNAP α subunit. The A-tract sequence was protected by wild-type RNAP but not by α-mutant RNAPs in footprints. The effect of the A-tracts on transcription was not as great as that of the most active UP elements, consistent with the degree of similarity of the A-tract sequence to the UP element consensus. A-tracts functioned best when positioned close to the −35 hexamer rather than one helical turn farther upstream, similar to the positioning optimal for UP element function. We conclude that A-tracts function as UP elements, stimulating transcription by providing binding site(s) for the RNAP αCTD, and we suggest that these interactions could contribute to the previously described wrapping of promoter DNA around RNAP.
Abstract:
The parasitic bacterium Mycoplasma genitalium has a small, reduced genome with close to a basic set of genes. As a first step toward determining the families of protein domains that form the products of these genes, we have used the multiple-sequence programs psi-blast and geanfammer to match the sequences of the 467 gene products of M. genitalium to the sequences of the domains that form proteins of known structure [Protein Data Bank (PDB) sequences]. In total, 274 PDB sequences match 106 M. genitalium sequences over their whole length and parts of another 85; thus, 41% of its sequences are matched in whole or in part. The evolutionary relationships of the PDB domains that match M. genitalium are described in the structural classification of proteins (SCOP) database. Using this information, we show that the domains in the matched M. genitalium sequences come from 114 superfamilies and that 58% of them have arisen by gene duplication. This level of duplication is more than twice that found by using pairwise sequence comparisons. The PDB domain matches also describe the domain structure of the matched sequences: just over a quarter contain one domain and the rest have combinations of two or more domains.
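The headline 41% figure follows directly from the counts given in the abstract, as a quick arithmetic check confirms:

```python
TOTAL = 467    # M. genitalium gene products considered
FULL = 106     # sequences matched by PDB domains over their whole length
PARTIAL = 85   # sequences matched only in part

matched_pct = 100 * (FULL + PARTIAL) / TOTAL
print(round(matched_pct))  # prints 41
```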
Abstract:
When lipid synthesis is limited in HepG2 cells, apoprotein B100 (apoB100) is not secreted but rapidly degraded by the ubiquitin-proteasome pathway. To investigate apoB100 biosynthesis and secretion further, the physical and functional states of apoB100 destined for either degradation or lipoprotein assembly were studied under conditions in which lipid synthesis, proteasomal activity, and microsomal triglyceride transfer protein (MTP) lipid-transfer activity were varied. Cells were pretreated with a proteasomal inhibitor (which remained with the cells throughout the experiment) and radiolabeled for 15 min. During the chase period, labeled apoB100 remained associated with the microsomes. Furthermore, by crosslinking sec61β to apoB100, we showed that apoB100 remained close to the translocon at the same time apoB100–ubiquitin conjugates could be detected. When lipid synthesis and lipoprotein assembly/secretion were stimulated by adding oleic acid (OA) to the chase medium, apoB100 was deubiquitinated, and its interaction with sec61β was disrupted, signifying completion of translocation concomitant with the formation of lipoprotein particles. MTP participates in apoB100 translocation and lipoprotein assembly. In the presence of OA, when MTP lipid-transfer activity was inhibited at the end of pulse labeling, apoB100 secretion was abolished. In contrast, when the labeled apoB100 was allowed to accumulate in the cell for 60 min before adding OA and the inhibitor, apoB100 lipidation and secretion were no longer impaired. Overall, the data imply that during most of its association with the endoplasmic reticulum, apoB100 is close to or within the translocon and is accessible to both the ubiquitin-proteasome and lipoprotein-assembly pathways. Furthermore, MTP lipid-transfer activity seems to be necessary only for early translocation and lipidation events.
Abstract:
Griffonia simplicifolia leaf lectin II (GSII), a plant defense protein against certain insects, consists of an N-acetylglucosamine (GlcNAc)-binding large subunit with a small subunit having sequence homology to class III chitinases. Much of the insecticidal activity of GSII is attributable to the large lectin subunit, because bacterially expressed recombinant large subunit (rGSII) inhibited growth and development of the cowpea bruchid, Callosobruchus maculatus (F). Site-specific mutations were introduced into rGSII to generate proteins with altered GlcNAc binding, and the different rGSII proteins were evaluated for insecticidal activity when added to the diet of the cowpea bruchid. At pH 5.5, close to the physiological pH of the cowpea bruchid midgut lumen, rGSII recombinant proteins were categorized as having high (rGSII, rGSII-Y134F, and rGSII-N196D mutant proteins), low (rGSII-N136D), or no (rGSII-D88N, rGSII-Y134G, rGSII-Y134D, and rGSII-N136Q) GlcNAc-binding activity. Insecticidal activity of the recombinant proteins correlated with their GlcNAc-binding activity. Furthermore, insecticidal activity correlated with the resistance to proteolytic degradation by cowpea bruchid midgut extracts and with GlcNAc-specific binding to the insect digestive tract. Together, these results establish that insecticidal activity of GSII is functionally linked to carbohydrate binding, presumably to the midgut epithelium or the peritrophic matrix, and to biochemical stability of the protein to digestive proteolysis.
Abstract:
Kinetic anomalies in protein folding can result from changes of the kinetic ground states (D, I, and N), changes of the protein folding transition state, or both. The 102-residue protein U1A has a symmetrically curved chevron plot which seems to result mainly from changes of the transition state. At low concentrations of denaturant the transition state occurs early in the folding reaction, whereas at high denaturant concentration it moves close to the native structure. In this study we use this movement to follow continuously the formation and growth of U1A's folding nucleus by φ analysis. Although U1A's transition state structure is generally delocalized and displays a typical nucleation–condensation pattern, we can still resolve a sequence of folding events. However, these events are sufficiently coupled to start almost simultaneously throughout the transition state structure.
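The φ analysis mentioned above has a standard textbook form: for each mutation, φ is the mutation-induced change in the folding activation free energy divided by the change in overall stability, with the kinetic term obtained from the ratio of folding rates. A minimal sketch of that definition (the numbers below are illustrative, not values from the U1A study):

```python
import math

RT = 0.593  # kcal/mol at 25 C


def phi_value(kf_wt: float, kf_mut: float, ddg_eq: float) -> float:
    """phi = ddG(transition state) / ddG(native stability) for one mutation."""
    ddg_ts = RT * math.log(kf_wt / kf_mut)  # change in folding barrier height
    return ddg_ts / ddg_eq


# A mutation slowing folding 10-fold while destabilizing the native state
# by 2.73 kcal/mol gives a fractional phi, i.e. partial structure at the
# mutated site in the transition state:
print(round(phi_value(100.0, 10.0, 2.73), 2))  # prints 0.5
```

φ ≈ 0 indicates the site is as unstructured as in the denatured state at the transition state, φ ≈ 1 that it is native-like; the delocalized, uniformly fractional φ values described above are the signature of nucleation-condensation.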