954 results for Two-component Regulatory System
Abstract:
The solar irradiation that a crop receives is directly related to the physical and biological processes that affect the crop. However, the assessment of solar irradiation poses certain problems when it must be measured at the fruit inside the canopy of a tree. In such cases, it is necessary to monitor many test points, which usually requires an expensive data acquisition system, and the use of conventional irradiance sensors further increases the cost of the experiment, making them unsuitable. Nevertheless, it is still possible to perform precise irradiance tests at a reduced price by using low-cost sensors based on the photovoltaic effect. The aim of this work is to develop a low-cost sensor that permits the measurement of irradiance inside the tree canopy. Two different solar cell technologies were analyzed for their use in the measurement of solar irradiation levels inside tree canopies. Two data acquisition system setups were also tested and compared. Experiments were performed in Ademuz (Valencia, Spain) in September 2011 and September 2012 to check the validity of low-cost sensors based on solar cells and their associated data acquisition systems. The observed difference between solar irradiation at high and low positions was 18.5% ± 2.58% at a 95% confidence level. Large differences were observed between the behavior of the two tested sensors. In the case of a-Si-cell-based mini-modules, a partial-shadowing effect was detected due to the larger size of the devices; the use of individual c-Si cells is therefore recommended over a-Si-based mini-modules.
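As a hedged illustration of the measurement principle behind such low-cost sensors (a PV cell loaded near short circuit delivers a current roughly proportional to irradiance), the following minimal Python sketch converts a shunt-resistor voltage into an irradiance estimate. All constants are hypothetical placeholders, not values from the study.

```python
# Minimal sketch (assumed values, not from the study): a solar cell loaded by a
# small shunt resistor operates near short circuit, where its current is
# approximately proportional to irradiance.

G_REF = 1000.0    # W/m^2, reference irradiance used during calibration (hypothetical)
ISC_REF = 0.180   # A, short-circuit current measured at G_REF (hypothetical cell)
R_SHUNT = 0.1     # ohm, sensing resistor placed across the cell (hypothetical)

def irradiance_from_shunt_voltage(v_shunt):
    """Estimate irradiance (W/m^2) from the voltage measured across the shunt."""
    i_sc = v_shunt / R_SHUNT       # Ohm's law gives the cell current
    return G_REF * i_sc / ISC_REF  # linear short-circuit-current model

# Example: 9 mV across the shunt -> 0.09 A -> ~500 W/m^2
print(irradiance_from_shunt_voltage(0.009))
```

In practice a single calibration point against a reference pyranometer would fix ISC_REF for each cell.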
Abstract:
Intermediate band formation in silicon layers for solar cell applications was achieved by titanium implantation and laser annealing. A two-layer heterogeneous system, consisting of the implanted layer and the un-implanted substrate, was obtained. In this work, we present for the first time electrical characterization results which show that recombination is suppressed when the Ti concentration is high enough to overcome the Mott limit, in agreement with the intermediate band theory. Clear differences have been observed between samples implanted with doses under or over the Mott limit. Samples implanted under the Mott limit have capacitance values much lower than the un-implanted ones, as corresponds to a highly doped semiconductor Schottky junction. However, when the Mott limit is surpassed, the samples have much higher capacitance, revealing that the intermediate band is formed. The capacitance increase is due to the large amount of charge trapped in the intermediate band, even at low temperatures. Ti deep levels have been measured by admittance spectroscopy. These deep levels are located at energies varying from 0.20 to 0.28 eV below the conduction band for implantation doses in the range 10¹³–10¹⁴ at./cm². For doses over the Mott limit, the implanted atoms become non-recombinant. Capacitance-voltage transient measurements prove that the fabricated devices consist of two layers, in which the implanted layer and the substrate behave as an n+/n junction.
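For context, activation energies like the 0.20–0.28 eV values quoted above are typically extracted in admittance spectroscopy from an Arrhenius analysis of the trap emission rate. A standard textbook form of that analysis (an assumption here, since the abstract does not restate its formulas) is:

```latex
% Thermal emission rate of electrons from a deep level E_a below the conduction band:
% sigma_n: capture cross section, v_th: thermal velocity, N_c: effective density of states
e_n(T) = \sigma_n \, v_{th}(T) \, N_c(T) \, \exp\!\left(-\frac{E_a}{kT}\right),
\qquad v_{th} \propto T^{1/2}, \quad N_c \propto T^{3/2}
```

so a plot of ln(e_n/T^2) against 1/T is linear with slope -E_a/k, from which energies such as the quoted 0.20–0.28 eV are read off.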
Abstract:
El principal objetivo del presente trabajo de investigación fue determinar las diferencias en distintas variables relacionadas con el rendimiento físico entre atletas de distinto nivel durante la prueba de los 60 metros vallas. Un total de 59 vallistas masculinos (los 31 participantes en el Campeonato del Mundo Absoluto de Pista Cubierta y los 28 participantes en el Campeonato de España Absoluto de Pista Cubierta, ambos celebrados en Valencia en el año 2008) formaron la muestra del estudio. El análisis biomecánico se realizó mediante un sistema fotogramétrico en dos dimensiones que permitió calcular, aplicando algoritmos basados en el procedimiento de la DLT (Abdel-Aziz y Karara, 1971), las coordenadas (x, y) de los sucesivos apoyos de los pies de los atletas sobre toda la superficie de competición. La filmación de las pruebas se llevó a cabo con seis cámaras de vídeo, ubicadas sobre las gradas, con una frecuencia de muestreo para el tratamiento de los datos de 50 Hz. En la fase de salida, los atletas de nivel superior mostraron una menor longitud (p<0,05) y tiempo de zancada (p<0,001), debido a un menor tiempo de vuelo (p<0,05). En la fase de vallas, los atletas de nivel más elevado presentaron mayores distancias de ataque a la valla (p<0,001), así como menores distancias de caída de la valla (p<0,001), tiempos de zancada (p<0,01-0,001) y de apoyo (p<0,01-0,001) en los cuatro pasos que conforman cada ciclo de vallas, así como un menor tiempo de vuelo en el paso de valla (p<0,001) y en el paso de transición (p<0,001). De manera adicional, se encontraron importantes diferencias en el reparto de los pasos entre vallas, entre la primera y la tercera valla y el resto de los obstáculos. En la fase final, se observó una mayor longitud de zancada en los atletas de nivel superior (p<0,001), así como un menor tiempo de zancada (p<0,01) y de apoyo (p<0,01). Los resultados obtenidos en el presente estudio de investigación avalan la utilización de la fotogrametría en dos dimensiones para el análisis biomecánico de la prueba de 60 metros vallas en competición. Su aplicación en competiciones del máximo nivel internacional ha posibilitado conocer las características de los vallistas a lo largo de toda la prueba y determinar posibles implicaciones de cara al proceso de entrenamiento. ABSTRACT The aim of this research was to determine the differences in several variables related to physical performance between athletes of different levels during the 60-meter hurdles race. A total of 59 male hurdlers (the 31 participants in the World Indoor Championship and the 28 participants in the Spanish Indoor Championship, both held in Valencia in 2008) formed the sample of the study. The biomechanical analysis was performed using a two-dimensional photogrammetric system which enabled the calculation, by applying algorithms based on the DLT method (Abdel-Aziz & Karara, 1971), of the (x, y) coordinates of the successive foot supports on the entire competition surface. The races were filmed with six video cameras located on the bleachers, with a sampling frequency for data processing of 50 Hz. In the approach run phase, the top-level athletes showed a shorter step length (p<0.05) and a shorter step time (p<0.001), due to a shorter flight time (p<0.05).
In the hurdle unit phase, the higher-level athletes had greater take-off distances (p<0.001), shorter landing distances (p<0.001), shorter step times (p<0.01-0.001) and support times (p<0.01-0.001) in the four steps that comprise each hurdle unit, and shorter flight times in the hurdle step (p<0.001) and the recovery step (p<0.001). Additionally, important differences were found in the distribution of the steps between hurdles, comparing the first to third hurdles with the remaining obstacles. In the run-in phase, a greater step length (p<0.001), a shorter step time (p<0.01), and a shorter contact time (p<0.01) were observed in the top-level athletes. The results obtained in this study support the use of two-dimensional photogrammetry for biomechanical analysis of the 60-meter hurdles event in competition. Its application at the highest international level of competition has made it possible to characterize the hurdlers over the entire race and to identify possible implications for the training process.
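Since this abstract leans on the 2D DLT of Abdel-Aziz and Karara (1971), a brief sketch of that standard procedure may help: eight parameters map plane coordinates to image coordinates, are fitted by linear least squares from control points, and are then inverted per digitized point. This is the generic textbook formulation in Python, not the authors' actual software; function names are illustrative.

```python
import numpy as np

def dlt2d_calibrate(world_xy, image_uv):
    """Fit the 8 parameters of the 2D DLT, u = (L1 x + L2 y + L3) / (L7 x + L8 y + 1)
    and v = (L4 x + L5 y + L6) / (L7 x + L8 y + 1), from >= 4 plane control points."""
    A, b = [], []
    for (x, y), (u, v) in zip(world_xy, image_uv):
        # Each point contributes two linear equations in L1..L8.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L

def dlt2d_reconstruct(L, u, v):
    """Recover plane coordinates (x, y) of a digitized point (u, v): solve the
    2x2 linear system obtained by clearing the DLT denominators."""
    M = np.array([[L[0] - u * L[6], L[1] - u * L[7]],
                  [L[3] - v * L[6], L[4] - v * L[7]]])
    rhs = np.array([u - L[2], v - L[5]])
    return np.linalg.solve(M, rhs)
```

With at least four non-collinear control points on the track, dlt2d_calibrate recovers the parameters for one camera view, and dlt2d_reconstruct then converts each digitized foot support from pixel coordinates to track coordinates.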
Abstract:
El auge del "Internet de las Cosas" (IoT, "Internet of Things") y sus tecnologías asociadas han permitido su aplicación en diversos dominios de aplicación, entre los que se encuentran la monitorización de ecosistemas forestales, la gestión de catástrofes y emergencias, la domótica, la automatización industrial, los servicios para ciudades inteligentes, la eficiencia energética de edificios, la detección de intrusos o la monitorización de señales corporales, entre muchas otras. La desventaja de una red IoT es que, una vez desplegada, ésta queda desatendida, es decir, queda sujeta, entre otras cosas, a condiciones climáticas cambiantes y expuesta a catástrofes naturales, fallos de software o hardware o ataques maliciosos de terceros, por lo que se puede considerar que dichas redes son propensas a fallos. El principal requisito de los nodos constituyentes de una red IoT es que deben ser capaces de seguir funcionando a pesar de sufrir errores en el propio sistema. La capacidad de la red para recuperarse ante fallos internos y externos inesperados es lo que se conoce actualmente como "Resiliencia" de la red. Por tanto, a la hora de diseñar y desplegar aplicaciones o servicios para IoT, se espera que la red sea tolerante a fallos y que sea auto-configurable, auto-adaptable y auto-optimizable con respecto a las nuevas condiciones que puedan aparecer durante su ejecución. Esto lleva al análisis de un problema fundamental en el estudio de las redes IoT, el problema de la "Conectividad". Se dice que una red está conectada si todo par de nodos de la red es capaz de encontrar al menos un camino de comunicación entre ambos. Sin embargo, la red puede desconectarse debido a varias razones, como que se agote la batería o que un nodo sea destruido. Por tanto, se hace necesario gestionar la resiliencia de la red con el objeto de mantener la conectividad entre sus nodos, de tal manera que cada nodo IoT sea capaz de proveer servicios continuos a otros nodos, a otras redes o a otros servicios y aplicaciones. En este contexto, el objetivo principal de esta tesis doctoral se centra en el estudio del problema de conectividad IoT, más concretamente en el desarrollo de modelos para el análisis y la gestión de la resiliencia, llevados a la práctica a través de redes WSN, con el fin de mejorar la capacidad de tolerancia a fallos de los nodos que componen la red. Este reto se aborda teniendo en cuenta dos enfoques distintos: por una parte, a diferencia de otros tipos de redes de dispositivos convencionales, los nodos de una red IoT son propensos a perder la conexión, debido a que se despliegan en entornos aislados o con condiciones extremas; por otra parte, los nodos suelen ser recursos con bajas capacidades en términos de procesamiento, almacenamiento y batería, entre otros, lo que requiere que el diseño de la gestión de su resiliencia sea ligero, distribuido y energéticamente eficiente. En este sentido, esta tesis desarrolla técnicas auto-adaptativas que permiten a una red IoT, desde la perspectiva del control de su topología, ser resiliente ante fallos en sus nodos. Para ello, se utilizan técnicas basadas en lógica difusa y técnicas de control proporcional, integral y derivativo (PID, "proportional-integral-derivative"), con el objeto de mejorar la conectividad de la red, teniendo en cuenta que el consumo de energía debe preservarse tanto como sea posible.
De igual manera, se ha tenido en cuenta que el algoritmo de control debe ser distribuido debido a que, en general, los enfoques centralizados no suelen ser factibles en despliegues a gran escala. El presente trabajo de tesis implica varios retos que conciernen a la conectividad de red, entre los que se incluyen: la creación y el análisis de modelos matemáticos que describan la red, una propuesta de sistema de control auto-adaptativo en respuesta a fallos en los nodos, la optimización de los parámetros del sistema de control, la validación mediante una implementación siguiendo un enfoque de ingeniería del software y, finalmente, la evaluación en una aplicación real. Atendiendo a los retos anteriormente mencionados, el presente trabajo justifica, mediante un análisis matemático, la relación existente entre el "grado de un nodo" (definido como el número de nodos en la vecindad del nodo en cuestión) y la conectividad de la red, y prueba la eficacia de varios tipos de controladores que permiten ajustar la potencia de transmisión de los nodos de la red en respuesta a eventuales fallos, teniendo en cuenta el consumo de energía como parte de los objetivos de control. Asimismo, este trabajo realiza una evaluación y comparación con otros algoritmos representativos, en la que se demuestra que el enfoque desarrollado es más tolerante a fallos aleatorios en los nodos de la red y más eficiente energéticamente. Adicionalmente, el uso de algoritmos bioinspirados ha permitido la optimización de los parámetros de control de redes dinámicas de gran tamaño. Con respecto a la implementación en un sistema real, se han integrado las propuestas de esta tesis en un modelo de programación OSGi ("Open Services Gateway Initiative") con el objeto de crear un middleware auto-adaptativo que mejore la gestión de la resiliencia, especialmente la reconfiguración en tiempo de ejecución de componentes software cuando se ha producido un fallo. Como conclusión, los resultados de esta tesis doctoral contribuyen a la investigación teórica y a la aplicación práctica del control resiliente de la topología en redes distribuidas de gran tamaño. Los diseños y algoritmos presentados pueden ser vistos como una prueba novedosa de algunas técnicas para la próxima era de IoT. A continuación, se enuncian de forma resumida las principales contribuciones de esta tesis: (1) Se han analizado matemáticamente propiedades relacionadas con la conectividad de la red. Se estudia, por ejemplo, cómo varía la probabilidad de conexión de la red al modificar el alcance de comunicación de los nodos, así como cuál es el mínimo número de nodos que hay que añadir al sistema desconectado para su reconexión. (2) Se han propuesto sistemas de control basados en lógica difusa para alcanzar el grado deseado de los nodos, manteniendo la conectividad completa de la red. Se han evaluado diferentes tipos de controladores basados en lógica difusa mediante simulaciones, y los resultados se han comparado con otros algoritmos representativos. (3) Se ha investigado más a fondo, dando un enfoque más simple y aplicable, el sistema de control de doble bucle, y sus parámetros de control se han optimizado empleando algoritmos heurísticos como el método de la entropía cruzada (CE, "Cross Entropy"), la optimización por enjambre de partículas (PSO, "Particle Swarm Optimization") y la evolución diferencial (DE, "Differential Evolution").
(4) Se ha evaluado mediante simulación la mayoría de los diseños aquí presentados; además, parte de los trabajos se ha implementado y validado en una aplicación real combinando técnicas de software auto-adaptativo, como por ejemplo las de una arquitectura orientada a servicios (SOA, "Service-Oriented Architecture"). ABSTRACT The advent of the Internet of Things (IoT) enables a tremendous number of applications, such as forest monitoring, disaster management, home automation, factory automation, smart cities, etc. However, various kinds of unexpected disturbances may cause node failure in the IoT, for example battery depletion, software/hardware malfunctions and malicious attacks. The IoT can therefore be considered prone to failure. The ability of the network to recover from unexpected internal and external failures is known as the "resilience" of the network. Resilience usually serves as an important non-functional requirement when designing the IoT, and can further be broken down into "self-*" properties, such as self-adaptation, self-healing, self-configuration, self-optimization, etc. One of the consequences that node failure brings to the IoT is that some nodes may be disconnected from others, such that they are no longer capable of providing continuous services to other nodes, networks, and applications. In this sense, the main objective of this dissertation focuses on the IoT connectivity problem. A network is regarded as connected if any pair of different nodes can communicate with each other either directly or via a limited number of intermediate nodes. More specifically, this thesis focuses on the development of models for the analysis and management of resilience, implemented through Wireless Sensor Networks (WSNs), which is a challenging task. On the one hand, unlike conventional network devices, nodes in the IoT are more likely to be disconnected from each other due to their deployment in hostile or isolated environments. On the other hand, nodes are resource-constrained in terms of limited processing capability, storage and battery capacity, which requires that the design of resilience management for the IoT be lightweight, distributed and energy-efficient. In this context, the thesis presents self-adaptive techniques for the IoT, with the aim of making the IoT resilient against node failures from the network topology control point of view. Fuzzy-logic and proportional-integral-derivative (PID) control techniques are leveraged to improve the network connectivity of the IoT in response to node failures, while taking into consideration that energy consumption must be preserved as much as possible. The control algorithm itself is designed to be distributed, because centralized approaches are usually not feasible in large-scale IoT deployments. The thesis involves various aspects concerning network connectivity, including: the creation and analysis of mathematical models describing the network, the proposal of self-adaptive control systems in response to node failures, control system parameter optimization, implementation following a software engineering approach, and evaluation in a real application. This thesis also justifies the relation between the "node degree" (the number of neighbors of a node) and network connectivity through mathematical analysis, and proves the effectiveness of various types of controllers that can adjust the transmission power of the IoT nodes in response to node failures.
The controllers also take energy consumption into consideration as part of the control goals. The evaluation is performed and a comparison is made with other representative algorithms. The simulation results show that the proposals in this thesis can tolerate more random node failures and save more energy when compared with those representative algorithms. Additionally, the simulations demonstrate that the use of bio-inspired algorithms allows optimizing the parameters of the controller. With respect to the implementation in a real system, the programming model called OSGi (Open Services Gateway Initiative) is integrated with the proposals in order to create a self-adaptive middleware, especially for reconfiguring software components at runtime when failures occur. The outcomes of this thesis contribute to theoretical research and practical applications of resilient topology control for large, distributed networks. The presented controller designs and optimization algorithms can be viewed as novel trials of control and optimization techniques for the coming era of the IoT. The contributions of this thesis can be summarized as follows: (1) Properties related to the connectivity of a large-scale stochastic network are analyzed mathematically. It is studied how the probability of network connectivity depends on the communication range of the nodes, and what is the minimum number of nodes to be added to a disconnected system for its re-connection. (2) A fuzzy-logic control system is proposed, which obtains the desired node degree and in turn maintains the network connectivity when the network is subject to node failures. Different types of fuzzy-logic controllers are evaluated by simulation, and the results demonstrate improved fault tolerance compared to other representative algorithms. (3) A simpler but more applicable approach, the two-loop control system, is further investigated, and its control parameters are optimized by using heuristic algorithms such as Cross Entropy (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs are evaluated by means of simulations, but part of the proposals are implemented and tested in a real-world application by combining the self-adaptive software techniques with the control algorithms presented in this thesis.
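To make the topology-control idea concrete, here is a hedged sketch of the kind of distributed, per-node degree controller the thesis describes: a discrete PID loop that trims transmission power until the measured node degree (neighbor count) matches a target, clamped to the radio's range. Gains, limits, and names are illustrative assumptions, not the thesis's actual parameters.

```python
# Hedged sketch (assumed gains and limits, not the thesis's actual design):
# each node runs its own PID loop over its measured node degree.

class DegreePID:
    def __init__(self, target_degree, kp=0.5, ki=0.05, kd=0.1,
                 p_min=-25.0, p_max=0.0):
        self.target = target_degree
        self.kp, self.ki, self.kd = kp, ki, kd
        self.p_min, self.p_max = p_min, p_max  # dBm limits of the radio (assumed)
        self.integral = 0.0
        self.prev_error = 0.0
        self.power = p_max                     # current transmission power in dBm

    def update(self, measured_degree):
        """One control step: raise power if under-connected after a node failure,
        lower it again to save energy once the neighborhood recovers."""
        error = self.target - measured_degree
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        self.power += self.kp * error + self.ki * self.integral + self.kd * derivative
        self.power = max(self.p_min, min(self.p_max, self.power))  # clamp to radio range
        return self.power
```

Each node would call update() once per neighbor-discovery round with its current neighbor count; raising power restores connectivity after failures, while the clamp and the degree target keep energy spending bounded.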
Abstract:
This paper presents a dynamic LM adaptation based on the topic identified in a speech segment. We use LSA and the topic labels given in the training dataset to obtain the topic models. We propose a dynamic language model adaptation to improve the recognition performance in a two-stage AST system. The final stage makes use of topic identification with two variants: the first one uses just the most probable topic, and the other depends on the relative distances of the topics that have been identified. We perform the adaptation of the LM as a linear interpolation between a background model and a topic-based LM. The interpolation weight is dynamically adapted according to different parameters. The proposed method is evaluated on the Spanish partition of the EPPS speech database. We achieved a relative reduction in WER of 11.13% over the baseline system, which uses a single background LM.
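The adaptation step reduces to the usual linear interpolation between the background LM and the topic LM; the sketch below shows that formula plus one illustrative way (an assumption, not the paper's exact scheme) to derive the dynamic weight from the relative distances of the two best topics.

```python
def interpolated_prob(p_background, p_topic, lam):
    """Linear LM interpolation: P(w|h) = lam*P_topic(w|h) + (1-lam)*P_background(w|h)."""
    assert 0.0 <= lam <= 1.0
    return lam * p_topic + (1.0 - lam) * p_background

def weight_from_topic_distances(d_best, d_second, lam_max=0.6):
    """Illustrative rule (assumed, not the paper's exact scheme): trust the topic
    LM more when the best topic is clearly separated from the runner-up in the
    LSA space. Distances are smaller for closer topics."""
    margin = (d_second - d_best) / max(d_second, 1e-9)  # relative separation in [0, 1]
    return lam_max * max(0.0, min(1.0, margin))

# Example: well-separated topics -> larger weight on the topic LM
lam = weight_from_topic_distances(d_best=0.2, d_second=0.8)
print(interpolated_prob(p_background=1e-4, p_topic=5e-4, lam=lam))
```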
Abstract:
La metodología Integrated Safety Analysis (ISA), desarrollada en el área de Modelación y Simulación (MOSI) del Consejo de Seguridad Nuclear (CSN), es un método de Análisis Integrado de Seguridad que está siendo evaluado y analizado mediante diversas aplicaciones impulsadas por el CSN; el análisis integrado de seguridad combina las técnicas evolucionadas de los análisis de seguridad al uso: deterministas y probabilistas. Se considera adecuado para sustentar la Regulación Informada por el Riesgo (RIR), actual enfoque dado a la seguridad nuclear, que está siendo desarrollado y aplicado en todo el mundo. En este contexto se enmarcan los proyectos Safety Margin Action Plan (SMAP) y Safety Margin Assessment Application (SM2A), impulsados por el Comité para la Seguridad de las Instalaciones Nucleares (CSNI) de la Agencia de la Energía Nuclear (NEA) de la Organización para la Cooperación y el Desarrollo Económicos (OCDE) en el desarrollo del enfoque adecuado para el uso de las metodologías integradas en la evaluación del cambio en los márgenes de seguridad debidos a cambios en las condiciones de las centrales nucleares. El comité constituye un foro para el intercambio de información técnica y de colaboración entre las organizaciones miembro, que aportan sus propias ideas en investigación, desarrollo e ingeniería. La propuesta del CSN es la aplicación de la metodología ISA, especialmente adecuada para el análisis según el enfoque desarrollado en el proyecto SMAP, que pretende obtener los valores best-estimate con incertidumbre de las variables de seguridad que son comparadas con los límites de seguridad, para obtener la frecuencia con la que estos límites son superados. La ventaja que ofrece la ISA es que permite el análisis selectivo y discreto de los rangos de los parámetros inciertos que tienen mayor influencia en la superación de los límites de seguridad, o frecuencia de excedencia del límite, permitiendo así evaluar los cambios producidos por variaciones en el diseño u operación de la central que serían imperceptibles o complicados de cuantificar con otro tipo de metodologías. La ISA se engloba dentro de las metodologías de APS dinámico discreto que utilizan la generación de árboles de sucesos dinámicos (DET) y se basa en la Theory of Stimulated Dynamics (TSD), teoría de fiabilidad dinámica simplificada que permite la cuantificación del riesgo de cada una de las secuencias. Con la ISA se modelan y simulan todas las interacciones relevantes en una central: diseño, condiciones de operación, mantenimiento, actuaciones de los operadores, eventos estocásticos, etc. Por ello requiere la integración de códigos de: simulación termohidráulica y procedimientos de operación; delineación de árboles de sucesos; cuantificación de árboles de fallos y sucesos; tratamiento de incertidumbres e integración del riesgo. La tesis contiene la aplicación de la metodología ISA al análisis integrado del suceso iniciador de la pérdida del sistema de refrigeración de componentes (CCWS), que genera secuencias de pérdida de refrigerante del reactor a través de los sellos de las bombas principales del circuito de refrigerante del reactor (SLOCA). Se utiliza para probar el cambio en los márgenes, con respecto al límite de la máxima temperatura de pico de vaina (1477 K), que sería posible en virtud de un potencial aumento de potencia del 10 % en el reactor de agua a presión de la C.N. Zion.
El trabajo realizado para la consecución de la tesis, fruto de la colaboración de la Escuela Técnica Superior de Ingenieros de Minas y Energía y la empresa de soluciones tecnológicas Ekergy Software S.L. (NFQ Solutions) con el área MOSI del CSN, ha sido la base para la contribución del CSN en el ejercicio SM2A. Este ejercicio ha sido utilizado como evaluación del desarrollo de algunas de las ideas, sugerencias y algoritmos detrás de la metodología ISA. Como resultado se ha obtenido un ligero aumento de la frecuencia de excedencia del daño (DEF) provocado por el aumento de potencia. Este resultado demuestra la viabilidad de la metodología ISA para obtener medidas de las variaciones en los márgenes de seguridad provocadas por modificaciones en la planta. También se ha mostrado que es especialmente adecuada en escenarios donde los eventos estocásticos o las actuaciones de recuperación o mitigación de los operadores pueden tener un papel relevante en el riesgo. Los resultados obtenidos no tienen validez más allá de la de mostrar la viabilidad de la metodología ISA. La central nuclear en la que se aplica el estudio está clausurada y la información relativa a sus análisis de seguridad es deficiente, por lo que han sido necesarias asunciones sin comprobación o aproximaciones basadas en estudios genéricos o de otras plantas. Se han establecido tres fases en el proceso de análisis: primero, obtención del árbol de sucesos dinámico de referencia; segundo, análisis de incertidumbres y obtención de los dominios de daño; y tercero, cuantificación del riesgo. Se han mostrado diversas aplicaciones de la metodología y las ventajas que presenta frente al APS clásico. También se ha contribuido al desarrollo del prototipo de herramienta para la aplicación de la metodología ISA (SCAIS). ABSTRACT The Integrated Safety Analysis (ISA) methodology, developed by the Consejo de Seguridad Nuclear (CSN), is being assessed through various applications encouraged by the CSN. An Integrated Safety Analysis merges the evolved techniques of the usually applied safety analysis methodologies: deterministic and probabilistic. It is considered a suitable tool for assessing risk in a Risk-Informed Regulation framework, the approach under development that is being adopted in nuclear safety around the world. In this policy framework, the projects Safety Margin Action Plan (SMAP) and Safety Margin Assessment Application (SM2A), set up by the Committee on the Safety of Nuclear Installations (CSNI) of the Nuclear Energy Agency within the Organization for Economic Co-operation and Development (OECD), were aimed at obtaining a methodology and its application for the integration of risk and safety margins in the assessment of the changes to the overall safety resulting from changes in the nuclear plant condition. The committee provides a forum for the exchange of technical information and cooperation among member organizations, which contribute their respective approaches in research, development and engineering. The ISA methodology, proposed by the CSN, especially fits the SMAP approach, which aims at obtaining best-estimate-plus-uncertainty values of the safety variables to be compared with the safety limits. This makes it possible to obtain the exceedance frequencies of the safety limits. The ISA has the advantage over other methods of allowing the specific and discrete evaluation of the uncertain parameters that are most influential on the limit exceedance frequency.
In this way, changes due to design or operation variations, imperceptible or difficult to quantify with other methods, are correctly evaluated. The ISA methodology is one of the discrete methodologies of the dynamic PSA framework that use the generation of dynamic event trees (DET). It is based on the Theory of Stimulated Dynamics (TSD), a simplified version of the theory of Probabilistic Dynamics that allows the quantification of the risk of each sequence. The ISA models and simulates all the important interactions in a nuclear power plant: design, operating conditions, maintenance, human actuations, stochastic events, etc. To that end, it requires the integration of codes for: thermal-hydraulic simulation and operating procedures; event tree delineation; fault tree and event tree quantification; and uncertainty treatment and risk integration. This dissertation describes the application of the ISA methodology to the initiating event of the Loss of the Component Cooling Water System (CCWS), which generates sequences of loss of reactor coolant through the seals of the reactor coolant pumps (SLOCA). It is used to test the change in margins, with respect to the maximum clad temperature limit (1477 K), that would be possible under a potential 10% power up-rate in the pressurized water reactor of Zion NPP. The work done to achieve the thesis, the fruit of the collaboration agreement of the School of Mining and Energy Engineering and the technological solutions company Ekergy Software S.L. (NFQ Solutions) with the specialized modeling and simulation branch (MOSI) of the CSN, has been the basis for the contribution of the CSN to the SM2A exercise. This exercise has been used as an assessment of the development of some of the ideas, suggestions, and algorithms behind the ISA methodology. A slight increase in the Damage Exceedance Frequency (DEF) caused by the power up-rate has been obtained. This result shows that the ISA methodology allows quantifying the safety margin change when design modifications are performed in an NPP and is especially suitable for scenarios where stochastic events or human responses play an important role in preventing or mitigating the accidental consequences and the total risk. The results have no validity beyond showing the viability of the ISA methodology. Zion NPP has been decommissioned and information on its safety analyses is scarce, so unverified assumptions or approximations based on generic or other-plant studies have been required. Three phases are established in the analysis process: first, obtaining the reference dynamic event tree; second, uncertainty analysis and obtaining the damage domains; and third, risk quantification. Various applications of the methodology and its advantages over classical PSA have been shown. The work has also contributed to the development of the prototype tool for the implementation of the ISA methodology (SCAIS).
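As a rough illustration of the risk-quantification phase, the sketch below estimates a damage exceedance frequency by weighting each sequence's frequency by the Monte Carlo probability that the peak clad temperature exceeds the 1477 K limit over the uncertain-parameter space. The interfaces and the toy temperature distribution are assumptions, not the SCAIS implementation.

```python
import random

T_LIMIT = 1477.0  # K, peak clad temperature limit quoted in the abstract

def damage_exceedance_frequency(sequences, n_samples=10_000):
    """Hedged sketch of the risk-integration step: for each accident sequence,
    weight its frequency by the estimated probability that the peak clad
    temperature exceeds the safety limit. `sequences` is a list of
    (frequency_per_year, simulate) pairs, where simulate() draws uncertain
    parameters and returns a peak temperature in K (illustrative interface)."""
    total = 0.0
    for freq_per_year, simulate in sequences:
        exceedances = sum(simulate() > T_LIMIT for _ in range(n_samples))
        total += freq_per_year * exceedances / n_samples
    return total

# Toy stand-in for a thermal-hydraulic run: peak temperature drawn from an assumed
# distribution; a real analysis would run the coupled plant simulation instead.
toy_sequence = (1.0e-4, lambda: random.gauss(1350.0, 80.0))
print(damage_exceedance_frequency([toy_sequence]))
```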
Abstract:
La presente tesis doctoral se enmarca dentro del concepto de la sistematización del conocimiento en arquitectura, más concretamente en el campo de las construcciones arquitectónicas y la toma de decisiones en la fase de proyecto de envolventes arquitectónicas multicapa. Por tanto, el objetivo principal es el establecimiento de las bases para una toma de decisiones informadas durante el proyecto de una envolvente multicapa con el fin de colaborar en su optimización. Del mismo modo que la historia de la arquitectura está relacionada con la historia de la innovación en construcción, la construcción está sujeta a cambios como respuesta a los fracasos anteriores. En base a esto, se identifica la toma de decisiones en la fase de proyecto como el estadio inicial para establecer un punto estratégico de reflexión y de control sobre los procesos constructivos. La presente investigación, conceptualmente, define los parámetros intervinientes en el proyecto de envolventes arquitectónicas multicapa a partir de una clasificación y sistematización de todos los componentes (elementos, unidades y sistemas constructivos) utilizados en las fachadas multicapa. Dicha sistematización se materializa en una hoja matriz de datos en la que, dentro de una organización a modo de árbol, se puede acceder a la consulta de cada componente y de su caracterización. Dicha matriz permite la incorporación futura de cualquier componente o sistema nuevo que aparezca en el mercado, relacionándolo con aquellos con los que comparta ubicación, tipo de material, etc. Con base en esa matriz de datos, se diseña la sistematización de la toma de decisiones en la fase de proyecto de una envolvente arquitectónica, en concreto, en el caso de una fachada. Operativamente, el resultado se presenta como una herramienta que permite al arquitecto o proyectista reflexionar y seleccionar el sistema constructivo más adecuado, al enfrentarse con las distintas decisiones o elecciones posibles. La herramienta se basa en las elecciones iniciales tomadas por el proyectista y se estructura, a continuación y sucesivamente, en distintas aproximaciones, criterios, subcriterios y posibilidades que responden a los distintos avances en la definición del sistema constructivo. Se propone una serie de fichas operativas de comprobación que informan sobre el estadio de decisión y de definición de proyecto alcanzados en cada caso. Asimismo, el sistema permite la conexión con otros sistemas de revisión de proyectos para fomentar la reflexión sobre la normalización de los riesgos asociados tanto al propio sistema como a su proceso constructivo y comportamiento futuros. La herramienta proporciona un sistema de ayuda para ser utilizado en el proceso de toma de decisiones en la fase de diseño de una fachada multicapa, minimizando la arbitrariedad y ofreciendo una cualificación previa a la cuantificación que supondrá la elaboración del detalle constructivo y de su medición en las sucesivas fases del proyecto. Al mismo tiempo, la sistematización de dicha toma de decisiones en la fase del proyecto puede constituirse como un sistema de comprobación en las diferentes fases del proceso de decisión proyectual y de definición de la envolvente de un edificio. ABSTRACT This doctoral thesis is framed within the concept of the systematization of knowledge in architecture, in particular in the field of building construction and decision making during the design stage of multilayer building envelope projects.
Therefore, the main objective is to establish the bases for informed decision making during the design of a multilayer building envelope, in order to contribute to its optimization. Just as the history of architecture is connected to the history of innovation in construction, construction itself is subject to changes as a response to previous failures. On this basis, decision making in the design phase is identified as the initial stage at which to establish a strategic point of reflection on, and control over, the construction process. Conceptually, this research defines the parameters involved in multilayer building envelope projects, on the basis of a classification and systematization of all the components (elements, constructive units and constructive systems) used in multilayer façades. This systematization is materialized in a data matrix sheet in which, following a tree-like organization, every single component and its characterization can be consulted. The data matrix allows the future inclusion of any new component or system that may appear on the construction market, relating it to those with which it shares location, material type, and other attributes. Based on the data matrix, the systematization of the decision-making process for the design stage of a building envelope is designed, more particularly for the case of a façade. In practice, the result is presented as a tool that allows the architect or designer to reflect on and select the most appropriate building system when faced with the possible choices. The tool is based on the initial choices made by the designer and is then structured, successively, into different approaches, criteria, sub-criteria and possibilities that respond to the progressive definition of the building construction system. A set of operative checking sheets is proposed to report the decision and project-definition stage reached in each case. Additionally, the system allows connection with other reviewing methods for building projects, in order to encourage reflection on the standardization of the risks associated with the building system itself, its construction process and its future performance. The tool provides a helping system to be used during the decision-making process for a multilayer façade design. It minimizes arbitrariness and offers a qualitative assessment prior to the quantification that the development of the construction details and their bills of quantities will entail in subsequent project stages. At the same time, the systematization of this decision making during the design phase can serve as a checking system in the different stages of the design decision process and of the definition of the building envelope.
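A minimal sketch of how such a tree-organized data matrix of façade components might look in code (the structure, attribute names, and entries are illustrative assumptions, not the thesis's actual tool):

```python
# Illustrative sketch (assumed structure and entries): the data matrix as a tree of
# facade components, each tagged with attributes such as layer/location and material
# type, plus a query that retrieves candidate components for a design decision.

FACADE_MATRIX = {
    "outer_leaf": [
        {"name": "ceramic_cladding", "material": "ceramic"},
        {"name": "composite_panel", "material": "aluminium"},
    ],
    "insulation": [
        {"name": "mineral_wool_board", "material": "mineral"},
        {"name": "eps_board", "material": "polymer"},
    ],
}

def candidates(location, material=None):
    """Return the components of a given layer, optionally filtered by material."""
    items = FACADE_MATRIX.get(location, [])
    return [c for c in items if material is None or c["material"] == material]

# Example: candidates("insulation", "mineral") -> [{'name': 'mineral_wool_board', ...}]
print(candidates("insulation", "mineral"))
```

New components entering the market would simply be appended under the layer they belong to, keeping the tree and its queries unchanged.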
Abstract:
The Escherichia coli umuDC operon is induced in response to replication-blocking DNA lesions as part of the SOS response. UmuD protein then undergoes a RecA-facilitated self-cleavage reaction that removes its N-terminal 24 residues to yield UmuD′. UmuD′, UmuC, RecA, and some form of the E. coli replicative DNA polymerase, DNA polymerase III holoenzyme, function in translesion synthesis, the potentially mutagenic process of replication over otherwise blocking lesions. Furthermore, it has been proposed that, before cleavage, UmuD together with UmuC acts as a DNA damage checkpoint system that regulates the rate of DNA synthesis in response to DNA damage, thereby allowing time for accurate repair to take place. Here we provide direct evidence that both uncleaved UmuD and UmuD′ interact physically with the catalytic, proofreading, and processivity subunits of the E. coli replicative polymerase. Consistent with our model proposing that uncleaved UmuD and UmuD′ promote different events, UmuD and UmuD′ interact differently with DNA polymerase III: whereas uncleaved UmuD interacts more strongly with β than it does with α, UmuD′ interacts more strongly with α than it does with β. We propose that the protein–protein interactions we have characterized are part of a higher-order regulatory system of replication fork management that controls when the umuDC gene products can gain access to the replication fork.
Abstract:
Two-component systems, sensor kinase-response regulator pairs, dominate bacterial signal transduction. Regulation is exerted by phosphorylation of an Asp in receiver domains of response regulators. Lability of the acyl phosphate linkage has limited structure determination for the active, phosphorylated forms of receiver domains. As assessed by both functional and structural criteria, beryllofluoride yields an excellent analogue of aspartyl phosphate in response regulator NtrC, a bacterial enhancer-binding protein. Beryllofluoride also appears to activate the chemotaxis, sporulation, osmosensing, and nitrate/nitrite response regulators CheY, Spo0F, OmpR, and NarL, respectively. NMR spectroscopic studies indicate that beryllofluoride will facilitate both biochemical and structural characterization of the active forms of receiver domains.
Abstract:
The murine B29 (Igβ) promoter is B cell specific and contains essential SP1, ETS, OCT, and Ikaros motifs. Flanking 5′ DNA sequences inhibit B29 promoter activity, suggesting this region contains silencer elements. Two adjacent 5′ DNA segments repress transcription by the murine B29 promoter in a position- and orientation-independent manner, analogous to known silencers. Both these 5′ segments also inhibit transcription by several heterologous promoters in B cells, including mb-1, c-fos, and human B29. These 5′ segments also inhibit transcription by the c-fos promoter in T cells, suggesting they are not B cell-specific elements. DNase I footprint analyses show an approximately 70-bp protected region overlapping the boundary between the two negative regulatory DNA segments and corresponding to binding sites for at least two different DNA-binding proteins. Within this footprint, two unrelated 30-bp cis-acting DNA motifs (designated TOAD and FROG) function as position- and orientation-independent silencers when located directly 5′ of the murine B29 promoter. These two silencer motifs act cooperatively to restrict the transcriptional activity of the B29 promoter. Neither of these motifs resembles any known silencer. Mutagenesis of the TOAD and FROG motifs in their respective 5′ DNA segments eliminates the silencing activity of these upstream regions, indicating that these two motifs are the principal B29 silencer elements within these regions.
Abstract:
Rho1p is a yeast homolog of mammalian RhoA small GTP-binding protein. Rho1p is localized at the growth sites and required for bud formation. We have recently shown that Bni1p is a potential target of Rho1p and that Bni1p regulates reorganization of the actin cytoskeleton through interactions with profilin, an actin monomer-binding protein. Using the yeast two-hybrid screening system, we cloned a gene encoding a protein that interacted with Bni1p. This protein, Spa2p, was known to be localized at the bud tip and to be implicated in the establishment of cell polarity. The C-terminal 254 amino acid region of Spa2p, Spa2p(1213–1466), directly bound to a 162-amino acid region of Bni1p, Bni1p(826–987). Genetic analyses revealed that both the bni1 and spa2 mutations showed synthetic lethal interactions with mutations in the genes encoding components of the Pkc1p-mitogen-activated protein kinase pathway, in which Pkc1p is another target of Rho1p. Immunofluorescence microscopic analysis showed that Bni1p was localized at the bud tip in wild-type cells. However, in the spa2 mutant, Bni1p was not localized at the bud tip and instead localized diffusely in the cytoplasm. A mutant Bni1p, which lacked the Rho1p-binding region, also failed to be localized at the bud tip. These results indicate that both Rho1p and Spa2p are involved in the localization of Bni1p at the growth sites where Rho1p regulates reorganization of the actin cytoskeleton through Bni1p.
Abstract:
Thionein (T) has not previously been isolated from biological material. However, it is generated transiently in situ by removal of zinc from metallothionein under oxidoreductive conditions, particularly in the presence of selenium compounds. T very rapidly activates a group of enzymes in which zinc is bound at an inhibitory site. The reaction is selective, as is apparent from the fact that T does not remove zinc from the catalytic sites of zinc metalloenzymes. T instantaneously reverses the zinc inhibition with a stoichiometry commensurate with its known capacity to bind seven zinc atoms in the form of clusters in metallothionein. The zinc inhibition is much more pronounced than was previously reported, with dissociation constants in the low nanomolar range. Thus, T is an effective, endogenous chelating agent, suggesting the existence of a hitherto unrecognized biological regulatory system. T removes the metal from an inhibitory zinc-specific enzymatic site with a resultant marked increase of activity. The potential significance of this system is supported by the demonstration of its operation in enzymes involved in glycolysis and signal transduction.
Abstract:
Spatial learning requires the septohippocampal pathway. The interaction of learning experience with gene products to modulate the function of a pathway may underlie use-dependent plasticity. The regulated release of nerve growth factor (NGF) from hippocampal cultures and hippocampus, as well as its actions on cholinergic septal neurons, suggest it as a candidate protein to interact with a learning experience. A method was used to evaluate NGF gene-experience interaction on the septohippocampal neural circuitry in mice. The method permits brain region-specific expression of a new gene by using a two-component approach: a virus vector directing expression of cre recombinase; and transgenic mice carrying genomic recombination substrates rendered transcriptionally inactive by a “floxed” stop cassette. Cre recombinase vector delivery into transgenic mouse hippocampus resulted in recombination in 30% of infected cells and the expression of a new gene in those cells. To examine the interaction of the NGF gene and experience, adult mice carrying a NGF transgene with a floxed stop cassette (NGFXAT) received a cre recombinase vector to produce localized unilateral hippocampal NGF gene expression, so-called “activated” mice. Activated and control nonactivated NGFXAT mice were subjected to different experiences: repeated spatial learning, repeated rote performance, or standard vivarium housing. Latency, the time to complete the learning task, declined in the repeated spatial learning groups. The interaction between NGF gene expression and experience on the septohippocampal circuitry was assessed by counting retrogradely labeled basal forebrain cholinergic neurons projecting to the hippocampal site of NGF gene activation. Comparison of all NGF activated groups revealed a graded effect of experience on the septohippocampal pathway, with the largest change occurring in activated mice provided with repeated learning experience. These data demonstrate that plasticity of the adult spatial learning circuitry can be robustly modulated by experience-dependent interactions with a specific hippocampal gene product.
Abstract:
A transition as a function of increasing temperature from harmonic to anharmonic dynamics has been observed in globular proteins by using spectroscopic, scattering, and computer simulation techniques. We present here results of a dynamic neutron scattering analysis of the solvent dependence of the picosecond-time scale dynamic transition behavior of solutions of a simple single-subunit enzyme, xylanase. The protein is examined in powder form, in D2O, and in four two-component perdeuterated single-phase cryosolvents in which it is active and stable. The scattering profiles of the mixed solvent systems in the absence of protein are also determined. The general features of the dynamic transition behavior of the protein solutions follow those of the solvents. The dynamic transition in all of the mixed cryosolvent–protein systems is much more gradual than in pure D2O, consistent with a distribution of energy barriers. The differences between the dynamic behaviors of the various cryosolvent protein solutions themselves are remarkably small. The results are consistent with a picture in which the picosecond-time scale atomic dynamics respond strongly to melting of pure water solvent but are relatively invariant in cryosolvents of differing compositions and melting points.
Abstract:
Cyclin D1 is expressed at abnormally high levels in many cancers and has been specifically implicated in the development of breast cancer. In this report we have extensively analyzed the cyclin D1 promoter in a variety of cancer cell lines that overexpress the protein and identified two critical regulatory elements (CREs), a previously identified CRE at –52 and a novel site at –30. In vivo footprinting experiments demonstrated factors binding at both sites. We have used a novel DNA-binding ligand, GL020924, to target the site at –30 (–30–21) of the cyclin D1 promoter in MCF7 breast cancer cells. A binding site for this novel molecule was constructed by mutating 2 bp of the wild-type cyclin D1 promoter at the –30–21 site. Treatment with GL020924 specifically inhibited expression of the targeted cyclin D1 promoter construct in MCF7 cells in a concentration-dependent manner, thus validating the –30–21 site as a target for minor groove-binding ligands. In addition, this result validates our approach to regulating the expression of genes implicated in disease by targeting small DNA-binding ligands to key regulatory elements in the promoters of those genes.