895 results for time interval
Abstract:
The current study presents quantitative reconstructions of tree cover, annual precipitation and mean July temperature derived from the pollen record of Lake Billyakh (65°17'N, 126°47'E, 340 m above sea level) spanning the last ca. 50 kyr. The reconstruction of tree cover suggests the presence of woody plants throughout the entire analyzed time interval, although trees played only a minor role in the vegetation around Lake Billyakh prior to 14 kyr BP (<5%). This result corroborates the low percentages of tree pollen and the low scores of the cold deciduous forest biome in the PG1755 record from Lake Billyakh. The reconstructed values of the mean temperature of the warmest month, ~8-10 °C, do not support larch forest or woodland around Lake Billyakh during the coldest phase of the last glacial, between ~32 and ~15 kyr BP. However, modern cases from northern Siberia, ca. 750 km north of Lake Billyakh, demonstrate that individual larch plants can grow within shrub and grass tundra landscapes at very low mean July temperatures of about 8 °C. This lends plausibility to our hypothesis that the western and southern foreland of the Verkhoyansk Mountains could have provided sufficiently moist and warm microhabitats, allowing individual larch specimens to survive the climatic extremes of the last glacial. Reconstructed mean values of precipitation are about 270 mm/yr during the last glacial interval. This value is almost 100 mm higher than the modern averages reported for extreme-continental north-eastern Siberia east of Lake Billyakh, where larch-dominated cold deciduous forest grows at present. This suggests that last glacial environments around Lake Billyakh were never too dry for larch to grow, and that summer warmth was the main factor limiting tree growth during the last glacial interval. The n-alkane analysis of the Siberian plants presented in this study demonstrates rather complex alkane distribution patterns, which complicate the interpretation of the fossil records.
In particular, the extremely low n-alkane concentrations in the leaves of the local coniferous trees and shrubs suggest that their contribution to the litter, and therefore to the fossil lake sediments, might not be high enough for tracing the Quaternary history of the needle-leaved taxa using the n-alkane biomarker method.
Abstract:
In this work we propose a method to accelerate time-dependent numerical solvers of systems of PDEs that require a high cost in computational time and memory. The method is based on the combined use of such a numerical solver with a proper orthogonal decomposition, from which we identify modes, a Galerkin projection (that provides a reduced system of equations) and the integration of the reduced system, studying the evolution of the modal amplitudes. We integrate the reduced model until our a priori error estimator indicates that the approximation is no longer accurate. At this point we use our original numerical code again over a short time interval to adapt the POD manifold, and then continue with the integration of the reduced model. The method is applied to two model problems: the Ginzburg-Landau equation in transient chaos conditions and the two-dimensional pulsating cavity problem, which describes the motion of liquid in a box whose upper wall is moving back and forth in a quasi-periodic fashion. Finally, we discuss a way of improving the performance of the method using experimental data or information from numerical simulations.
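The POD/Galerkin reduction described above can be sketched in a few lines of NumPy. This is a minimal illustration with a synthetic snapshot matrix and a stand-in linear operator, not the authors' code; the 99% energy threshold and all matrix sizes are assumptions.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is the PDE state at one time step
# (synthetic data here; a real solver would supply these snapshots).
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 50))  # 200 spatial dofs, 50 snapshots

# Proper orthogonal decomposition via the SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Retain the leading modes capturing (say) 99% of the snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
Phi = U[:, :r]                 # POD basis (orthonormal columns)

# Galerkin projection of a stand-in linear operator A onto the basis:
# the reduced system evolves the r modal amplitudes instead of 200 dofs.
A = -np.eye(200)               # placeholder for the discretized PDE operator
A_r = Phi.T @ A @ Phi          # r x r reduced operator

# Modal amplitudes of a state u, and its reconstruction from the basis.
u = snapshots[:, 0]
a = Phi.T @ u                  # project onto the POD manifold
u_rec = Phi @ a                # approximate state recovered from r modes
```

When the error estimator flags the approximation, fresh snapshots from the full solver would be appended to `snapshots` and the SVD recomputed to adapt `Phi`.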
Abstract:
Natural regeneration is a key ecological process that makes plant persistence possible and, consequently, constitutes an essential element of sustainable forest management. In this respect, natural regeneration in even-aged stands of Pinus pinea L. located in the Spanish Northern Plateau has not always been successfully achieved despite over a century of pine-nut-based management. As a result, natural regeneration has recently become a major concern for forest managers, at a time when investment in silviculture is being rationalized. The present dissertation aims to provide answers for forest managers on this topic through the development of an integral multistage regeneration model for P. pinea stands in the region. From this model, recommendations for natural-regeneration-based silviculture can be derived under present and future climate scenarios. The model structure also makes it possible to detect the likely bottlenecks affecting the process. The integral model consists of five submodels corresponding to each of the subprocesses linking the stages involved in natural regeneration (seed production, seed dispersal, seed germination, seed predation and seedling survival). The outputs of the submodels represent the transitional probabilities between these stages as a function of climatic and stand variables, which in turn are representative of the ecological factors driving regeneration. At the subprocess level, the findings of this dissertation should be interpreted as follows. The scheduling of the shelterwood system currently conducted over low-density stands leads to situations of dispersal limitation from the initial stages of the regeneration period. Concerning predation, predator activity appears to be limited only by the occurrence of severe summer droughts and masting events, making the summer a favourable period for seed survival. Outside this time interval, predators were found to almost totally deplete seed crops.
Given that P. pinea dissemination occurs in summer (i.e. the period that is safe from predation), the likelihood that a seed is not destroyed is conditional on germination occurring before the intensification of predator activity. However, the optimal conditions for germination seldom take place, restricting emergence to a few days during the fall. Thus, the window to reach the seedling stage is narrow. In addition, the seedling survival submodel predicts extremely high seedling mortality rates, and therefore only some individuals from large cohorts will be able to persist. These facts, along with the strong climate-mediated masting habit exhibited by P. pinea, reveal that the overall probability of establishment is low. Given this background, current management (low final stand densities resulting from intense thinning and strict felling schedules) constrains the occurrence of enough favourable events to achieve natural regeneration during the current rotation time. Stochastic simulation and optimisation computed through the integral model confirm this circumstance, suggesting that more flexible and progressive regeneration fellings should be conducted. From an ecological standpoint, these results point to a reproductive strategy leading to uneven-aged stand structures, in full accordance with the medium shade-tolerant behaviour of the species. As a final remark, stochastic simulations performed under a climate-change scenario show that regeneration of the species will not be strongly hampered in the future. This resilient behaviour highlights the fundamental ecological role played by P. pinea in demanding areas where other tree species fail to persist.
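The multistage structure can be illustrated by chaining the transitional probabilities of the five subprocesses: a seed establishes only if it passes every stage in sequence, so the overall probability is the product of the stage probabilities. The values below are made-up placeholders, not the dissertation's fitted submodel outputs.

```python
# Chained transitional probabilities for the regeneration subprocesses.
# All values are illustrative placeholders, NOT fitted model outputs.
p_dispersal = 0.6          # seed reaches the ground under the stand
p_escape_predation = 0.2   # seed survives predator activity
p_germination = 0.1        # germination occurs before predation intensifies
p_survival = 0.05          # seedling survives its first years

# A seed establishes only if it passes every stage in sequence.
p_establishment = (p_dispersal * p_escape_predation
                   * p_germination * p_survival)

seeds_produced = 1_000_000
expected_seedlings = seeds_produced * p_establishment  # ~600 for these values
```

Even with a million seeds, the compounding of small stage probabilities leaves only a few hundred expected recruits, which is the quantitative sense in which "the overall probability of establishment is low".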
Abstract:
In pressure irrigation-water distribution networks, pressure regulating devices for controlling the flow rate discharged by irrigation units are needed due to the variability of the flow rate. In addition, the applied water volume is usually controlled by operating the valve during a calculated time interval, assuming a constant flow rate. In general, a pressure regulating valve (PRV) is the pressure regulating device commonly used in a hydrant, which also performs the open and close function. A hydrant feeds several irrigation units, requiring a wide range of flow rates. In addition, some flow meters are also available: one as a component of the hydrant, and the rest placed downstream. Every landowner has one flow meter for each group of field plots downstream of the hydrant. Its reading could be used for refining the water balance, but its accuracy must be taken into account. An ideal PRV would maintain a constant downstream pressure. However, the true performance depends on both the upstream pressure and the discharged flow rate. The objective of this work is to assess the influence of PRV performance on the applied volume over all the irrigation events in a year. The results of the study have been obtained by introducing the flow rate into a PRV model. Variations in flow rate are simulated by taking into account the consequences of variations in climate conditions and also of decisions in irrigation operation, such as application duration and frequency. The model comprises the continuity, dynamic and energy equations of the components of the PRV.
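The volume error introduced by timing a valve under the constant-flow assumption can be sketched numerically: integrate the true (drifting) flow over the event and compare it with the nominal flow times the interval. All numbers below are illustrative assumptions, not results from the PRV model in the study.

```python
import numpy as np

# The controller assumes a constant nominal flow over the timed interval,
# while the true discharged flow drifts with upstream pressure.
# All numbers are illustrative assumptions.
t = np.linspace(0.0, 3600.0, 361)                 # one-hour event [s]
q_nominal = 10.0                                  # assumed constant flow [l/s]
q_actual = q_nominal * (1.0 + 0.05 * t / 3600.0)  # 5% linear drift [l/s]

# Volume the controller believes it applied vs. volume actually discharged
# (trapezoidal integration of the flow-rate series).
v_assumed = q_nominal * t[-1]                                    # 36000 l
v_actual = float(np.sum(0.5 * (q_actual[1:] + q_actual[:-1]) * np.diff(t)))
error_pct = 100.0 * (v_actual - v_assumed) / v_assumed           # 2.5%
```

A steady 5% drift over the hour over-applies 2.5% of the intended volume; summed over all events in a year, such biases are what the study quantifies.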
Abstract:
Switch mode power supplies (SMPS) are widely used in a great variety of applications. The most challenging task for SMPS designers is to simultaneously achieve high-efficiency operation and high power density. The size and weight of a power converter are dominated by the passive components, since these elements are normally larger and heavier than the other elements in the circuit. For a given output power, the amount of energy stored in the converter that has to be delivered to the load in each switching cycle is inversely proportional to the converter's switching frequency. Therefore, increasing the switching frequency is considered a means to achieve more compact solutions with higher power density levels. The importance of investigating the high switching frequency range lies in all the benefits that can be achieved: besides the reduction in the size of the passive components, increasing the switching frequency can significantly improve the dynamic performance of power converters. Small energy storage and a short switching period lead to a faster transient response of the converter in the presence of input voltage or load variations. The most important limitations of increasing the switching frequency are related to higher losses in the conventional magnetic core, as well as winding losses due to the skin and proximity effects. Another potential problem is the increased effect of the parasitic elements of the magnetic components (leakage inductance and inter-winding capacitance), which cause additional losses due to unwanted currents.
Another limiting factor is the increase in switching losses and the growing influence of parasitic elements (printed circuit traces, interconnections and packaging) on circuit behaviour. Resonant topologies can address these problems by using soft switching techniques to reduce switching losses while incorporating the parasitics into the circuit elements. However, the performance improvements are significantly reduced by circulating currents when the converter operates outside its nominal operating conditions. As the input voltage or the load changes, the circulating currents increase compared with those at nominal operating conditions. Many potential benefits of operating resonant converters at higher frequency can be obtained if they are employed in applications with favourable input voltage conditions, such as those found in distributed power architectures. Load regulation, and in particular input voltage regulation, reduces both the power density and the efficiency of the converter. Owing to the relatively constant bus voltage found in distributed power architectures, resonant converters are well suited for use as bus converters (solid-state dc/dc transformers). Commercial two-port dc/dc transformer products with very high power density and high efficiency are already available on the market; they are based on a series resonant converter operating exactly at the resonant frequency, in the megahertz range. However, future improvements in the efficiency of power architectures are expected to come from the use of two or more low-voltage distribution buses instead of a single one.
With this in mind, the main objective of this thesis is to apply the concept of the series resonant converter operating at its optimum point to a novel bidirectional multiple-port dc/dc transformer to address the future needs of power architectures. The new bidirectional multiple-port dc/dc transformer is based on the series resonant converter topology and reduces the number of magnetic components to only one. Soft switching of the switches makes operation at high switching frequencies possible, allowing high power densities to be achieved. Potential problems regarding stray inductances are eliminated, since they are absorbed into the circuit elements. The converter features very good inherent load and cross regulation owing to its small intrinsic output impedances. The multiple-port dc/dc transformer operates at a fixed switching frequency and without input voltage regulation. This thesis analyses in depth, from a theoretical standpoint, the operation and design of the topology and of the transformer, modelling them in detail in order to optimize their design. The experimental results obtained correspond very accurately to those provided by the models. The effects of the parasitic elements are critical and affect different aspects of the converter: output voltage regulation, conduction losses, cross regulation, etc. Design criteria are also derived for selecting the resonant capacitor values to achieve different design goals, such as minimum conduction losses, elimination of cross regulation, or zero-current turn-off switching at full load of all the secondary-side bridges.
Zero-voltage turn-on switching of all the switches is achieved by adjusting the air gap to obtain a finite magnetizing inductance in the transformer. A change in the driving signals is also proposed so that zero-current turn-off operation of all the secondary-side bridges becomes independent of load variations and of the tolerances of the resonant capacitors. The feasibility of the proposed topology is verified through extensive simulation and experimental work. The optimization of the high-frequency transformer design is also addressed, since it is the bulkiest component in the converter. The impact of the dead-time duration and the air-gap size on converter efficiency is analysed in a design example of a three-port dc/dc transformer of several hundred watts. The final part of this research considers the implementation and performance analysis of a four-port dc/dc transformer for a very-low-voltage application of tens of watts, without isolation requirements. Abstract: Recently, switch mode power supplies (SMPS) have been used in a great variety of applications. The most challenging issue for designers of SMPS is to simultaneously achieve high-efficiency operation at high power density. The size and weight of a power converter are dominated by the passive components, since these elements are normally larger and heavier than the other elements in the circuit. If the output power is constant, the amount of energy stored in the converter that is to be delivered to the load in each switching cycle is inversely proportional to the converter's switching frequency. Therefore, increasing the switching frequency is considered a means to achieve more compact solutions at higher power density levels.
The importance of investigating the high switching frequency range comes from all the benefits that can be achieved. Besides the reduction in the size of passive components, increasing the switching frequency can significantly improve the dynamic performance of power converters. Small energy storage and a short switching period lead to a faster transient response of the converter to input voltage and load variations. The most important limitations to pushing up the switching frequency are related to increased conventional magnetic core loss as well as winding loss due to the skin and proximity effects. A potential problem is also the increased magnetic parasitics (leakage inductance and capacitance between the windings) that cause additional loss due to unwanted currents. Higher switching loss and the increased influence of printed circuit boards, interconnections and packaging on circuit behavior are other limiting factors. Resonant power conversion can address these problems by using soft switching techniques to reduce switching loss, incorporating the parasitics into the circuit elements. However, the performance gains are significantly reduced due to the circulating currents when the converter operates outside the nominal operating conditions. As the input voltage or the load changes, the circulating currents become higher compared with those at nominal operating conditions. Many potential gains from operating resonant converters at higher switching frequency can be obtained if they are employed in applications with favorable input voltage conditions, such as those found in distributed power architectures. Load regulation, and particularly input voltage regulation, reduces a converter's power density and efficiency. Due to the relatively constant bus voltage in distributed power architectures, resonant converters are suitable for bus voltage conversion (dc/dc or solid-state transformation).
Unregulated two-port dc/dc transformer products achieving very high power density and efficiency figures, based on a series resonant converter operating just at the resonant frequency and in the megahertz range, are already available on the market. However, further efficiency improvements of power architectures are expected to come from using two or more separate low-voltage distribution buses instead of a single one. The principal objective of this dissertation is to implement the concept of the series resonant converter operating at its optimum point in a novel bidirectional multiple-port dc/dc transformer to address the future needs of power architectures. The new multiple-port dc/dc transformer is based on a series resonant converter topology and reduces the number of magnetic components to only one. Soft switching commutations make it possible to adopt high switching frequencies and to achieve high power densities. Possible problems regarding stray inductances are eliminated, since they are absorbed into the circuit elements. The converter features very good inherent load and cross regulation due to the small output impedances. The proposed multiple-port dc/dc transformer operates at a fixed switching frequency without line regulation. Extensive theoretical analysis of the topology and detailed modeling are provided in order to compare with the experimental results. The relationships that show how the output voltage regulation and conduction losses are affected by the circuit parasitics are derived. The methods to select the resonant capacitor values to achieve different design goals, such as minimum conduction losses, elimination of cross regulation, or ZCS operation at full load of all the secondary-side bridges, are discussed. ZVS turn-on of all the switches is achieved by relying on the finite magnetizing inductance of the transformer.
A change of the driving pattern is proposed to achieve ZCS operation of all the secondary-side bridges independently of load variations or resonant capacitor tolerances. The feasibility of the proposed topology is verified through extensive simulation and experimental work. The optimization of the high-frequency transformer design is also addressed in this work, since it is the bulkiest component in the converter. The impact of the dead-time interval and the gap size on overall converter efficiency is analyzed using the design example of a three-port dc/dc transformer with several hundred watts of output power for high-voltage applications. The final part of this research considers the implementation and performance analysis of a four-port dc/dc transformer in a low-voltage application of tens of watts of output power, without isolation requirements.
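For orientation, the "optimum point" of a series resonant converter is operation exactly at the resonant frequency of its LC tank, which can be computed directly. The component values below are illustrative assumptions, not those of the converter designed in the dissertation.

```python
import math

# Operating point of a series resonant tank. Component values are
# illustrative assumptions, not the dissertation's design.
L_r = 1.0e-6    # resonant inductance [H]
C_r = 25.0e-9   # resonant capacitance [F]

# Series resonance: f_r = 1 / (2*pi*sqrt(L_r*C_r)). Operating exactly
# here keeps the tank impedance purely resistive, minimizing
# circulating currents.
f_r = 1.0 / (2.0 * math.pi * math.sqrt(L_r * C_r))  # ~1 MHz for these values

# Characteristic impedance, which scales the resonant current amplitude.
Z_0 = math.sqrt(L_r / C_r)                          # ~6.3 ohms
```

With these assumed values the tank resonates near 1 MHz, consistent with the megahertz-range commercial bus converters mentioned above.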
Abstract:
In pressure irrigation-water distribution networks, the applied water volume is usually controlled by opening a valve during a calculated time interval, assuming a constant flow rate. In general, pressure regulating devices for controlling the flow rate discharged by irrigation units are needed due to the variability of pressure conditions.
Abstract:
At present, reservoir management for flood control is commonly carried out using simulation models, mainly because of their ease of use in real time by the dam operator. Optimization models for reservoir management have been developed that improve on the results of simulation models, but their real-time application is very difficult or simply unfeasible, since it requires knowing the future flood inflow to the reservoir before making the release decision. For this reason, the objective set here is to develop a reservoir management model for floods that incorporates the advantages of an optimization model and is easy to use in real time by the dam manager. To this end, a Bayesian network model was built that represents the processes of the contributing catchment and the reservoir, and that learns from cases generated synthetically by a lumped hydrological model and a reservoir management optimization model. In a first stage, a large number of synthetic flood episodes was generated, using the Monte Carlo method to obtain the rainfall and a lumped rainfall-runoff transformation model to obtain the flood hydrographs. The resulting series were then used as input signals to the reservoir management model PLEM, which optimizes a cost objective function by mixed integer linear programming, generating an equal number of optimal events of released flow and reservoir level evolution. The simulated episodes were used to train and evaluate two Bayesian network models: one that forecasts the inflow to the reservoir and another that predicts the released flow, both over a time horizon ranging from one to five hours, in one-hour intervals.
In the case of the hydrological Bayesian network, the inflow chosen is the mean of the forecast probability distribution. In the case of the hydraulic Bayesian network, because of the markedly non-linear behaviour of this process and because the Bayesian network returns a range of possible released-flow values, a methodology has been developed to select a single value, to ease the dam operator's work. This methodology consists of testing various proposed strategies, which include zonings and alternatives for selecting a single released-flow value in each zoning, on a sufficiently large set of synthetic episodes. The results of each strategy were compared with the MEV method, selecting the strategies that improve on the MEV results in terms of the maximum released flow and the maximum level reached by the reservoir; any of them can be used by the dam operator in real time for the reservoir studied (Talave). The proposed methodology could be applied to any isolated reservoir to obtain, for that particular reservoir, various strategies that improve on the MEV results. Finally, by way of example, the methodology has been applied to a synthetic flood, obtaining the released flow and the reservoir level in each time interval, and the MIGEL model has been applied to obtain, at each instant, the opening configuration of the outlet works that will discharge the flow. Currently, dam operators commonly use simulation models for reservoir management during flood events, mainly due to their ease of use in real time. Some models have been developed to optimize the management of the reservoir and improve on the results of simulation models. However, their real-time application becomes very difficult or simply unworkable, because the release decision depends on the unknown future flood entering the reservoir.
For this reason, the main goal is to develop a model of reservoir management during floods that incorporates the advantages of an optimization model. At the same time, it should be easy to use in real time by the dam manager. For this purpose, a Bayesian network model has been developed to represent the processes of the watershed and the reservoir. This model learns from cases generated synthetically by a hydrological model and an optimization model for managing the reservoir. In a first stage, a large number of synthetic flood events was generated, using the Monte Carlo method for the rainfall and a lumped rainfall-runoff transformation model for the flood hydrographs. Subsequently, the series obtained were used as input signals to the reservoir management model PLEM, which optimizes a target cost function using mixed integer linear programming. As a result, an equal number of optimal discharge events and reservoir level evolutions were generated. The simulated events were used to train and test two Bayesian network models. The first one predicts the flow into the reservoir, and the second predicts the discharge flow. They work over a time horizon ranging from one to five hours, in intervals of one hour. In the case of the hydrological Bayesian network, the chosen inflow is the average of the forecast probability distribution. In the case of the hydraulic Bayesian network, the highly non-linear behavior of this process results in a range of possible values of discharge flow. A methodology to select a single value has been developed to facilitate the dam operator's work. This methodology tests various proposed strategies on a sufficient set of synthetic episodes; they include zonings and alternatives for selecting a single discharge value in each zoning. The results of each strategy are compared with the MEV method. The strategies that improve on the outcomes of MEV are selected and can be used by the dam operator in real time for the reservoir case study (Talave).
The methodology could be applied to any single reservoir and thus obtain, for that particular reservoir, various strategies that improve on the results of MEV. Finally, the methodology has been applied to a synthetic flood, obtaining the discharge flow and the reservoir level in each time interval. The opening configuration of the floodgates to evacuate the flow at each interval has been obtained by applying the MIGEL model.
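The first stage described above, Monte Carlo generation of synthetic storms followed by a lumped rainfall-runoff transformation, can be sketched as follows. The rainfall distribution, its parameters, and the linear-reservoir routing are hypothetical placeholders, not the models used in the thesis.

```python
import random

# Monte Carlo generation of synthetic storms followed by a minimal lumped
# rainfall-runoff transformation. Distribution and parameters are
# hypothetical placeholders.
random.seed(1)

def synthetic_storm(n_hours=24):
    # Hourly rainfall depths [mm], exponentially distributed (assumed model).
    return [random.expovariate(1.0 / 5.0) for _ in range(n_hours)]

def runoff_hydrograph(rain, runoff_coeff=0.4, k=0.8):
    # Linear-reservoir routing of effective rainfall into a flood hydrograph.
    q, flows = 0.0, []
    for p in rain:
        q = k * q + (1.0 - k) * runoff_coeff * p   # recession + new input
        flows.append(q)
    return flows

# An ensemble of synthetic flood events; in the thesis, such hydrographs
# feed the optimization model (PLEM) and train the Bayesian networks.
events = [runoff_hydrograph(synthetic_storm()) for _ in range(100)]
```

Each element of `events` is a 24-hour hydrograph; generating many of them is what provides enough training cases for the two Bayesian networks.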
Abstract:
This thesis studies in detail the Hunstanton Secondary School and its significance. It analyses the set of processes that allow this work to be understood as the built manifesto of the New Brutalism in England. The school at Hunstanton was the first work designed and built by the Smithsons and, considering that the legacy Alison and Peter left was more theoretical than built, it has become a significant building within their professional career. Moreover, the rigour with which the project was carried out and the ratification of the ideas underlying it, despite the long time interval that characterized its construction process, make this work a synthesis of the architectural philosophy gestated in England after the war. On the other hand, it must be borne in mind that the simplicity of the constructive language employed derives from the complex reiteration of the type design systems formulated for this project and from the establishment of an almost mathematical grammar. The systematization of its vocabulary means that, after analysing its architecture, new parameters can be found that are capable of documenting this moment in the history of architecture in England. The building envelope is at once facade and structure. This characteristic has gone unnoticed even though, on numerous occasions and over six decades, photographs of the finished work and the drawings the Smithsons made during the design phase have been published. As a result, knowledge of the school's architecture has spread at a more superficial level, showing its formal result and thereby merely hinting at the great influence that Mies van der Rohe exerted on the Smithsons in the early years of their work as architects.
The main objective of this thesis is therefore to facilitate an understanding of the space the Smithsons proposed, based on a detailed analysis of the various construction systems employed and of the team involved in its construction. To this end, it is necessary to study the materials and design mechanisms that allowed this set of spaces, interior and exterior, to be defined through the relationship between two variables: an evident austerity in the use of materials, and the combination of the various participating systems through the resource of repetition. The school at Hunstanton, despite the ill-judged interventions carried out to adapt the centre to the needs arising from its current number of pupils (twice the original), continues to proclaim its spatial integrity. Starting from the hypothesis that the architecture of the Secondary School at Hunstanton represents the built manifesto of the New Brutalism in England, it is concluded that the result of its construction was the consequence of numerous influences that, in connection with the Smithsons, were present during the years in which it was conceived; something that goes beyond the conclusions of the architectural debates that had been conducted, in writing, in the various local architecture journals. The compositional mechanisms employed also had much to do with what art historians had been contributing to the history of architecture up to that moment. Since the 1940s, historians had undertaken a new way of telling history, one in which their critical capacity was strongly involved, causing interference in the mentality of the architects of the new generation and endowing them with a subliminally determined and subjective cultural background.
Of course, the material resources available at the time also had much to do with the final architectural result, as did their optimization through the adoption of new working methods such as multidisciplinary organization. The inclusion of the engineer Ronald Jenkins in the Smithsons' team was a great opportunity. He proposed putting the then innovative Plastic Theory into practice in the structural calculation methodology and thereby enriched the spatial result, making possible the perception of a light architecture, despite its large dimensions, linked to the landscape in which it is set. But all these conditioning factors were in turn passed through the filter of a desire for social regeneration that looked to the model of American society. The Good Life advocated by the Americans travelled to Europe in the hands of advertising. And, just as the advertising component played a part in the creative process of the school's architecture, so did knowledge of pop art and its compositional resources. ABSTRACT The thesis examines in detail the project of Hunstanton Secondary School and the significance of the architectural language used in it. Thus, it reinterprets the set of processes that allows this work to be understood as the "built manifesto" of the English New Brutalism. Hunstanton School was the first work designed and built by the Smithsons and, considering that their legacy was more theoretical than built, this makes it an important work within their career. In addition, the rigor with which the project was carried out and the ratification of the ideas lying behind it make this work a synthesis of the architectural philosophy gestated in England after the war, despite the extensive time interval that characterized its construction process.
On the other hand, the simplicity of the constructive language used in this project must be considered. It arises from the complex projective repetition of type systems and from the establishment of a quasi-mathematical grammar. After a deep analysis of its architecture, the systematization of its vocabulary makes it possible to recognize new parameters capable of documenting this moment in the history of English architecture. The building envelope is at once facade and structure, a feature that has been overlooked as photographs of the finished work and the Smithsons' design-phase drawings have been exhibited over six decades. As a result, knowledge of the architecture of Hunstanton Secondary Modern School has spread at a superficial level, showing only the formal outcome of the project and leaving little more than an impression of the great influence Mies van der Rohe exerted on the Smithsons' thinking during their first years of practice. Therefore, the main objective of this thesis is to facilitate an understanding of the space the Smithsons proposed. This is made possible through a detailed analysis of the different systems used in it and of the expertise of the team involved in its construction. To this end, attention must be paid to the materials and to the design mechanisms that allow this group of spaces, inner and outer, to be defined through the interplay of two variables: an apparent austerity in the use of materials, and the combination of the various participating systems through the device of repetition. Despite the untimely interventions made to adapt the school to new needs (the large increase in the number of students), the building continues to proclaim its spatial integrity.
Assuming that the architecture of Hunstanton Secondary School represents the manifesto of New Brutalism in England, it is concluded that its construction was the result of numerous influences that, in connection with the Smithsons, were present during the years in which the project was conceived. Its meaning goes beyond the conclusions of the architectural debate published in the local architectural magazines. The compositional mechanisms employed are also linked to what art historians had contributed to the history of architecture until then. Since the 1940s, historians had undertaken a new way of telling history, one that strongly engaged their critical capacity, interfering with the mentality of the architects of the new generation and giving them a subliminally determined and very subjective cultural background. Of course, the final architectural result also had much to do with the material resources available at the time, and with their optimization through the adoption of new working methods such as multidisciplinary organization. The inclusion of the engineer Ronald Jenkins in the Smithsons' team was a great opportunity: he proposed applying the then-novel Plastic Theory to the structural calculations and thereby enriched the spatial result, enabling the perception of a lightweight construction, despite its large size, linked to the landscape in which it is inserted. But all these conditions were filtered through the desire for a social regeneration that followed the model of American society, a model that travelled to Europe in the hands of advertising. And just as publicity played a part in the creative process of this architecture, so did a knowledge of Pop Art and its compositional resources.
Abstract:
Development of a Sensorimotor Algorithm Able to Deal with Unforeseen Pushes and Its Implementation Based on VHDL is the title of the thesis that concludes my Bachelor's degree at the Escuela Técnica Superior de Ingeniería y Sistemas de Telecomunicación of the Universidad Politécnica de Madrid. It covers the work I did in the Neurorobotics Research Laboratory of the Beuth Hochschule für Technik Berlin during my ERASMUS year in 2015. The thesis is focused on robotics, specifically on an electronic circuit called the Cognitive Sensorimotor Loop (CSL) and its control algorithm, written in the VHDL hardware description language. What makes the CSL special is its ability to operate a motor both as a sensor and as an actuator. In this way, any robot joint can reach a balanced position (e.g. the robot manages to stand) without any conventional sensor: the back electromotive force (EMF) induced in the motor coils is measured, and the control algorithm responds according to its magnitude. The CSL circuit consists mainly of an analog-to-digital converter (ADC) and a driver. The ADC is a delta-sigma modulator that generates a bit stream whose proportion of 1's and 0's is proportional to the back EMF. The control algorithm, running on an FPGA, processes the bit frame and outputs a signal for the driver. The driver, which has an H-bridge topology, supplies the motor with the required power and lets it rotate in both directions. The objective of this thesis is to document the experiments and overall work done on push-ignoring contractive sensorimotor algorithms, i.e. sensorimotor algorithms that ignore forces of large magnitude (compared to gravity) applied over a short time interval to a pendulum system.
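The delta-sigma stage described above is easy to model in software. The sketch below is an illustration only: the function name and the 0-to-1 normalization of the back EMF are assumptions, not the thesis implementation. It shows the key property the FPGA algorithm exploits, namely that the density of 1's in the output bit stream tracks the input level:

```python
def delta_sigma_bits(level, n_bits):
    """Model a first-order delta-sigma modulator for a constant input.

    level  -- normalized back EMF in [0.0, 1.0] (assumed scaling)
    n_bits -- number of output bits to generate
    """
    integrator = 0.0
    bits = []
    for _ in range(n_bits):
        integrator += level          # accumulate the input
        if integrator >= 1.0:        # 1-bit quantizer with feedback
            bits.append(1)
            integrator -= 1.0
        else:
            bits.append(0)
    return bits

bits = delta_sigma_bits(0.25, 1000)
print(sum(bits) / len(bits))         # density of 1's tracks the input, here 0.25
```

Counting the 1's in a fixed-length frame of this stream is then enough to recover a digital estimate of the back EMF.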
This main objective is divided into two sub-objectives: (1) developing a system based on parameterized thresholds and (2) developing a system based on a push-bypassing filter. System (1) contains a module that outputs a signal blocking the main sensorimotor algorithm when a push is detected. This module takes several parameters as inputs, e.g. the back-EMF increment required to treat a force as a push, or the time interval between samples. System (2) consists of a low-pass infinite impulse response (IIR) digital filter that cuts any frequency faster than a plausible push oscillation. This filter required an intensive study of how to implement certain functions and data types (fixed- or floating-point) not supported by standard VHDL packages. Once this was achieved, the next challenge was to simplify the solution as much as possible without resorting to unofficial, user-made packages. Both systems exhibit a series of advantages and disadvantages of interest for this document: stability, reaction time, simplicity and computational load are among the many factors studied in the designed systems. RESUMEN. Development of a Sensorimotor Algorithm Able to Deal with Unforeseen Pushes and Its Implementation Based on VHDL is the final-year project (Proyecto de Fin de Grado, PFG) that concludes my studies at the Escuela Técnica Superior de Ingeniería y Sistemas de Telecomunicación of the Universidad Politécnica de Madrid. It documents the research work I carried out in the Neurorobotics Research Laboratory of the Beuth Hochschule für Technik Berlin during 2015 under the ERASMUS exchange programme. The project focuses on robotics, specifically on an electronic circuit called the Cognitive Sensorimotor Loop (CSL) and its control algorithm, written in the VHDL hardware description language. The particularity of the CSL is that a single motor is made to act both as sensor and as actuator.
In this way, the joints of a robot can reach an equilibrium position (e.g. the robot stands upright) without sensors in the strict sense of the word: the electromotive force (EMF) induced in the motor itself is measured, and the algorithm responds according to its magnitude. The CSL circuit comprises an analog-to-digital converter (ADC) and a driver. The ADC is a sigma-delta modulator that generates a bit stream whose proportion of 1's and 0's reflects the magnitude of the induced EMF. The control algorithm, running on an FPGA, processes this bit stream and generates a signal for the driver. The driver, with an H-bridge topology, supplies the motor with the necessary power and allows it to rotate in either direction. The objective of this PFG is to document the experiments and, in general, the work carried out on sensorimotor algorithms able to ignore forces of large magnitude (compared with gravity) applied within a short time window; in other words, to ignore pushes while preserving the original behaviour with respect to gravity. To this end, two systems were developed: one based on parameterized thresholds (1) and another based on a filter with an adjustable cut-off (2). System (1) contains a module that, when a push is detected, generates a signal that blocks the sensorimotor algorithm. This module receives parameters such as the EMF increment required for a force to be considered a push, or the time window within which a push is deemed to occur. System (2) consists of an infinite-impulse-response low-pass digital filter that cuts off any variation it considers a push. Building this filter required a study of how to implement certain functions and data types (fixed or floating point) not supported by the basic VHDL libraries.
After this, the goal was to simplify the solution as much as possible without using add-on library packages. Both systems present a series of advantages and disadvantages of interest for this document: stability, reaction time, simplicity and computational load are some of the many factors studied in the designed systems. Finally, some additions to the systems are also documented: a VGA visual interface, a module that compensates the ADC offset, and the implementation of a bank of MIDI faders, among others.
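System (2) can be illustrated with a first-order low-pass IIR filter written in integer arithmetic only, mirroring the fixed-point constraints of plain VHDL. The coefficient (1/32) and the state-scaling scheme below are illustrative assumptions, not the values used in the project:

```python
def iir_lowpass(samples, alpha_num=1, alpha_den=32):
    """First-order low-pass IIR: y[n] = y[n-1] + alpha*(x[n] - y[n-1]).

    Integer-only arithmetic (assumed coefficient alpha = 1/32); the
    state y is kept scaled by alpha_den so that precision is not lost
    to truncation on every step.
    """
    y = samples[0] * alpha_den
    out = []
    for x in samples:
        y += (x * alpha_den - y) * alpha_num // alpha_den
        out.append(y // alpha_den)
    return out

# a steady level (the slow response to gravity) passes through unchanged
print(iir_lowpass([100, 100, 100])[-1])                 # 100
# a one-sample spike (a "push") is strongly attenuated
print(max(iir_lowpass([100] * 10 + [600] + [100] * 10)))
```

The fast, large variation never reaches the sensorimotor loop at full amplitude, which is exactly the push-bypassing behaviour the filter is meant to provide.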
Abstract:
Few studies have examined the temporal continuity of mood states over a competitive period in sport. Although mood states appear stable over time, the stimuli and contexts present modify their intensity and valence. There are also psychological phenomena such as decay, in which traces of information lose their activation mainly through the passage of time, and expectancy, the anticipation of an event occurring at a given time. The aim of this study was to examine changes in the mood states of young soccer athletes, grouped by position and role, over a competitive period, as a function of elapsed time. Processes such as mood-state decay and the influence of expectancy about the upcoming match were analysed, as was the influence of context on variations in the athletes' mood states. Eighteen young athletes (mean age 15.4 years ± 0.266) from a soccer club competing in a state championship took part. Mood states were assessed with the short version of the Lista de Estados de Ânimo Presentes (LEAP), together with a completion-instructions form, administered minutes before selected training sessions and matches. Presence values for each LEAP factor were computed for each participant at each event. Data were collected at three types of event: before the last training session preceding a match (Pre-match Training), before the match (Pre-match), and before the first training session after the match (Post-match Training). The 18 players were divided into two groups: Defensive Actions (AD) and Offensive Actions (AO). Patterns of mood change as a function of elapsed time were found for LEAP Factors II (Fatigue), VII (Interest) and XII (Serenity), allowing analysis of the decay of these mood states and of the influence of expectancy on these changes.
Some mood states were also found to differ in their patterns of change according to the temporal interval (Factors IV, Limerence/Empathy, and VII, Interest), and showed different presence values when these intervals were compared. In addition, Factors III (Hope), V (Physiological) and XI (Receptivity) showed patterns of change as a function of elapsed time across different temporal intervals. Contextual variables, such as match results and the competition itself, also influenced these changes. Fatigue, hope, empathy, proprioception-related states, interest, receptivity and serenity were the mood states present throughout the study. The findings underline the importance of including temporality as an influential variable in models of variation of neurobiological processes, especially in investigations of subjective aspects such as mood states.
Abstract:
Biological systems remain the prevailing option for treating sanitary sewage. In recent decades, systems with anaerobic, anoxic and aerobic regions and/or zones have proved to be attractive alternatives for the simultaneous removal of organic matter, nitrogen and phosphorus. Their operational aspects, however, still deserve study in order to achieve optimized performance. In this scenario, and with the aim of comparing alternatives for operating sewage treatment units, the present work studied operational strategies associated with real-time monitoring, without the addition of an external carbon source, for a non-compartmentalized aerated reactor with suspended growth and continuous flow, preceded by an anaerobic reactor. The bench-scale experimental system consisted of an anaerobic reactor with a working volume of 43.54 L and an aerated reactor with a working volume of 68.07 L, the latter formed by seven sectors in series without physical separation. The study was divided into two stages: I, variation of the volumes of the aerated and non-aerated regions; II, intermittent aeration with the aeration/stirring cycle either pre-set or controlled in real time by a computerized system. In all stages of the study, high BOD removal and conversion of TKN to nitrate were achieved, but denitrification did not reach the desired level. The use of reactors with sequential sectors without physical division (Stage I) made it difficult to obtain distinct, predominantly anoxic and aerobic regions, compromising nitrogen removal (mainly denitrification). The highest mean nitrogen-removal efficiency achieved in the aerated reactor was 35.6% (Stage II), when the reactor was operated with intermittent aeration and the aeration/stirring cycle was controlled in real time. The intermittent aeration strategy studied in Stage II favoured nitrogen removal.
Intermittent aeration proved to be a promising option compared with continuous aeration in specific sectors of the reactor. Automated, computerized real-time control of the aeration/stirring cycles can be applied to improve the operation of sanitary sewage treatment systems.
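As a rough illustration of the Stage II idea, an intermittent aeration/stirring cycle can be driven by a timer with a real-time override from a monitored signal. The use of dissolved oxygen (DO) as the control variable, and every threshold and cycle length below, are hypothetical choices for the sketch, not the values or signals used in the study:

```python
def next_phase(phase, elapsed_min, do_mg_l,
               t_aer=60, t_mix=60, do_high=2.0, do_low=0.5):
    """Return the phase for the next control step: 'aerate' or 'mix'.

    elapsed_min -- minutes spent in the current phase
    do_mg_l     -- measured dissolved oxygen (hypothetical signal)
    """
    if phase == "aerate":
        # end aeration early once DO is high enough, else obey the timer
        if do_mg_l >= do_high or elapsed_min >= t_aer:
            return "mix"
    else:
        # resume aeration once DO is depleted, else obey the timer
        if do_mg_l <= do_low or elapsed_min >= t_mix:
            return "aerate"
    return phase

print(next_phase("aerate", 10, 2.5))   # DO already high -> switches to "mix"
```

A pre-fixed cycle corresponds to ignoring the DO override; the real-time variant ends each phase as soon as the monitored signal indicates the phase has done its work, which is what favoured nitrogen removal in Stage II.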
Abstract:
Differential SAR Interferometry (DInSAR) is a remote sensing method with a well-demonstrated ability to monitor geological hazards such as earthquakes, landslides and subsidence. Among these hazards, subsidence involves the settlement of the ground surface over wide areas. Subsidence is frequently induced by overexploitation of aquifers and is a common problem in developed societies. Excessive pumping of groundwater lowers the piezometric level in the subsoil and, as a consequence, increases the effective stresses with depth, causing consolidation of the soil column. This consolidation produces a settlement of the ground surface that must be withstood by the civil structures built on these areas. In this paper we use an advanced DInSAR approach, the Coherent Pixels Technique (CPT) [1], to monitor subsidence induced by aquifer overexploitation in the Vega Media of the Segura River (SE Spain) from 1993 to the present. 28 ERS-1/2 scenes covering a time interval of about 10 years were used to study this phenomenon. The deformation map retrieved with the CPT technique shows settlements of up to 80 mm at some points of the studied zone. These values agree with data obtained from borehole extensometers, but not with the distribution of damaged buildings, well points and basements, because the occurrence of damage also depends on the structural quality of the buildings and their foundations. The most interesting relationship observed is that between piezometric changes, settlement evolution and local geology. Three main patterns of ground-surface and piezometric-level behaviour have been distinguished for the study zone during this period: 1) areas where deformation occurs while ground conditions remain altered (recent deformable sediments), 2) areas with no deformation (old and non-deformable materials), and 3) areas where ground deformation mimics piezometric-level changes (expansive soils).
The temporal relationship between deformation patterns and soil characteristics has been analysed in this work, showing a delay between them. Moreover, the technique has allowed the measurement of ground subsidence for a period (1993-1995) for which no instrumental information was available.
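The mechanism described above can be made concrete with the classical one-dimensional consolidation relation s = m_v · Δσ' · H, where the effective-stress increase Δσ' equals the unit weight of water times the piezometric drop. The compressibility and layer thickness in the sketch below are invented illustrative values, chosen only to show that settlements of the reported order (up to 80 mm) are plausible:

```python
GAMMA_W = 9.81   # unit weight of water, kN/m^3

def settlement_mm(piezo_drop_m, mv_m2_per_kN, thickness_m):
    """One-dimensional consolidation settlement: s = m_v * delta_sigma' * H.

    piezo_drop_m  -- drop in piezometric level (m)
    mv_m2_per_kN  -- coefficient of volume compressibility (illustrative)
    thickness_m   -- thickness of the compressible layer (illustrative)
    """
    delta_sigma = GAMMA_W * piezo_drop_m          # effective-stress increase, kPa
    return mv_m2_per_kN * delta_sigma * thickness_m * 1000.0   # settlement, mm

# e.g. a 10 m piezometric drop acting on a 20 m compressible layer with
# m_v = 4e-5 m^2/kN gives a settlement on the order of the 80 mm maximum
# retrieved from the CPT deformation map
print(round(settlement_mm(10.0, 4e-5, 20.0), 1))   # 78.5
```

Real profiles are layered and time-dependent (hence the observed delay between piezometric change and deformation), but the order of magnitude is captured by this single-layer estimate.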
Abstract:
Objective. To estimate the reproducibility of three objective measures of physical performance in older people in primary care. Design. Descriptive, prospective study with direct observation of physical function by health professionals following a standardized protocol. Setting. Three primary care centres in the provinces of Alicante and Valencia. Participants. A sample of 66 people aged 70 and over, assessed twice by the same professional, in order to replicate identical study conditions, over a two-week interval (median 14 days). Main measurements. Physical function was assessed with three objective performance tests: the balance test, gait speed, and the ability to rise from and sit down on a chair. These measures come from the EPESE studies (Established Populations for Epidemiologic Studies of the Elderly). Test-retest reliability was estimated with the intraclass correlation coefficient. Results. The intraclass correlation coefficients (ICC) were 0.55 for the balance test, 0.69 for the chair-stand test, and 0.79 for gait speed. The value for the total EPESE battery score was 0.80. Conclusions. The reproducibility of these performance measures is as acceptable as that reported in the reference literature. These performance tests allow rigorous assessment of important changes in functioning and health occurring over time.
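The test-retest statistic reported above is the intraclass correlation coefficient. The sketch below computes the one-way form, ICC(1,1), from two measurement occasions per subject; the abstract does not state which ICC form was used, and the tiny data set is invented for illustration:

```python
def icc_1_1(pairs):
    """One-way ICC(1,1) = (BMS - WMS) / (BMS + (k-1)*WMS).

    pairs -- list of per-subject tuples, one score per measurement
             occasion (k occasions per subject).
    """
    n = len(pairs)
    k = len(pairs[0])
    grand = sum(sum(p) for p in pairs) / (n * k)
    # between-subjects mean square
    bms = k * sum((sum(p) / k - grand) ** 2 for p in pairs) / (n - 1)
    # within-subject mean square
    wms = sum((x - sum(p) / k) ** 2 for p in pairs for x in p) / (n * (k - 1))
    return (bms - wms) / (bms + (k - 1) * wms)

# invented test-retest scores for five subjects, two occasions each
scores = [(4.0, 4.2), (3.1, 3.0), (5.0, 4.7), (2.2, 2.5), (3.8, 3.9)]
print(round(icc_1_1(scores), 2))   # high agreement -> ICC near 1
```

Values near 1 indicate that most variance lies between subjects rather than between occasions, which is the sense in which ICCs of 0.55-0.80 are read as acceptable reproducibility.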
Abstract:
Aims: The recent availability of the novel oral anticoagulants (NOACs) may have changed the anticoagulation regimens of patients referred for catheter ablation of atrial fibrillation (AF). Preliminary data exist for dabigatran, but information on the safety and efficacy of rivaroxaban in this setting is currently scarce. Methods and results: Of the 556 consecutive eligible patients (age 61.0 ± 9.6; 74.6% men; 61.2% paroxysmal AF) undergoing AF catheter ablation in our centre (October 2012 to September 2013) and enrolled in a systematic, standardized 30-day follow-up, 192 patients were under vitamin K antagonists (VKAs), 188 under rivaroxaban, and 176 under dabigatran. Peri-procedural mortality and significant systemic or pulmonary thromboembolism (efficacy outcome), as well as bleeding events (safety outcome), during the 30 days following the ablation were evaluated according to anticoagulation regimen. Over a 12-month interval, the use of NOACs in this population rose from <10% to 70%. Overall, event rates were low, with no significant differences in: thromboembolic events, 1.3% (VKA 2.1%; rivaroxaban 1.1%; dabigatran 0.6%; P = 0.410); major bleeding, 2.3% (VKA 4.2%; rivaroxaban 1.6%; dabigatran 1.1%; P = 0.112); and minor bleeding, 1.4% (VKA 2.1%; rivaroxaban 1.6%; dabigatran 0.6%; P = 0.464). No fatal events were observed. Conclusion: The use of NOACs in patients undergoing catheter ablation of AF evolved rapidly (seven-fold) over 1 year. These preliminary data suggest that rivaroxaban and dabigatran are effective and safe in the setting of AF catheter ablation, compared with the traditional VKAs.
Abstract:
Two cores, Site 1089 (ODP Leg 177) and PS2821-1, recovered from the same location (40°56'S; 9°54'E) at the Subtropical Front (STF) in the Atlantic sector of the Southern Ocean, provide a high-resolution climatic record with an average temporal resolution of less than 600 yr. A multi-proxy approach was used to produce an age model for Core PS2821-1 and to correlate the two cores. Both cores document the last climatic cycle, from Marine Isotope Stage 6 (MIS 6, ca. 160 kyr BP) to the present. Summer sea-surface temperatures (SSSTs) have been estimated for the downcore record, with a standard error of ca. ±1.16°C, using Q-mode factor analysis (the Imbrie and Kipp method). The paleotemperatures show a 7°C warming at Termination II (last interglacial, the transition from MIS 6 to MIS 5). This transition from glacial to interglacial paleotemperatures (with maximum temperatures ca. 3°C warmer than present at the core location) occurs earlier than the corresponding shift in delta18O values for benthic foraminifera from the same core; this suggests that Southern Ocean paleotemperature changes led the global ice-volume changes indicated by the benthic isotopic record. The climatic evolution of the record continues with a progressive temperature deterioration towards MIS 2. High-frequency, millennial-scale climatic instability has been documented for MIS 3 and part of MIS 4, with sudden temperature variations of almost the same magnitude as those observed at the transitions between glacial and interglacial times. These changes occur during the same time interval as the Dansgaard-Oeschger cycles recognized in the delta18O ice record of the GRIP and GISP ice cores from Greenland, and seem to be connected to rapid changes in the position of the STF relative to the core location.
Sudden cooling episodes ('Younger Dryas (YD)-type' and 'Antarctic Cold Reversal (ACR)-type' events) have been recognized for both Termination I (ACR-I and YD-I events) and Termination II (ACR-II and YD-II events), implying that our core is located in an optimal position to record events triggered by phenomena occurring in both hemispheres. Spectral analysis of our SSST record displays strong analogies, particularly at high, sub-orbital frequencies, with equivalent records from Vostok (Antarctica) and from the subtropical North Atlantic. This implies that the climatic variability of widely separated areas (the Antarctic continent, the subtropical North Atlantic, and the subantarctic South Atlantic) can be strongly coupled and co-varying at millennial time scales (a few to 10-ka periods), and may ultimately be induced by the same triggering mechanisms. Climatic variability has also been documented for supposedly warm and stable interglacial intervals (MIS 1 and 5), with several cold events that can be correlated to other Southern Ocean and North Atlantic sediment records.
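The Imbrie and Kipp transfer-function method named above combines Q-mode factor analysis of modern (core-top) faunal assemblages with a regression of observed SSTs on the factor loadings; the fitted regression is then applied to downcore samples. The sketch below shows that structure with an invented toy calibration set; the species responses, the noise level, and the choice of three retained factors are all assumptions, not values from either core:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "modern" calibration set: 40 core-top samples x 6 species
sst_modern = rng.uniform(2.0, 18.0, 40)
weights = (1.0, 0.5, -0.3, -0.6, 0.2, -0.1)     # invented species responses to SST
counts = np.column_stack([
    np.clip(10.0 + w * sst_modern + rng.normal(0.0, 0.5, 40), 0.0, None)
    for w in weights
])
assemblage = counts / counts.sum(axis=1, keepdims=True)   # relative abundances

# Q-mode step: normalize each sample vector to unit length, then SVD
rows = assemblage / np.linalg.norm(assemblage, axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(rows, full_matrices=False)
k = 3                                   # number of retained factors (assumed)
loadings = U[:, :k] * s[:k]             # per-sample factor loadings

# calibration step: least-squares regression SST ~ loadings + intercept
X = np.column_stack([loadings, np.ones(len(sst_modern))])
coef, *_ = np.linalg.lstsq(X, sst_modern, rcond=None)

# application step: project a "fossil" sample onto the factor axes and
# evaluate the regression (sample 0 stands in for a downcore sample)
fossil = rows[0]
f_load = fossil @ Vt[:k].T
est = float(np.append(f_load, 1.0) @ coef)
print(round(est, 1), round(float(sst_modern[0]), 1))
```

The quoted ±1.16°C standard error plays the role of the calibration residual here: it is the scatter of the regression over the modern samples, carried over as the nominal uncertainty of each downcore estimate.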