17 results for monitoring design
at Universidad Politécnica de Madrid
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The steep increase in power densities associated with recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature negatively impacts several circuit parameters, such as gate delay, cooling budgets, reliability and power consumption. To fight these effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis.
This thesis approaches on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that produces a pulse whose width depends on the temperature dependence of leakage currents. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the pulse width displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very small area, 10,250 µm², and very low power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of first publication and, at the time of publication of this thesis, still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration, displaying a 3σ error of 1.97 °C, adequate for DTM applications. The sensor is fully compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip.
The exacerbated process fluctuations of recent technology nodes jeopardize the linearity of the first sensor. To overcome this problem, a new temperature-inferring technique is proposed. It also relies on the thermal dependence of the leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor, the gate voltage, is altered. This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature: a 3σ error of 1.17 °C considering process variations and two-point calibration. The implementation of the sensing part raises several design issues, such as the generation of a process-independent voltage reference, which are analyzed in depth in the thesis. For the time-to-digital conversion, the same digitization structure as in the first sensor is employed. A completely new standard cell library targeting low area and power overhead was built from scratch to implement the digitization part. Putting all the pieces together, the complete sensor is characterized by an ultra-low energy per conversion of 48-640 pJ and an area of only 0.0016 mm², a figure that outperforms all previous works. To support this claim, a thorough comparison with over 40 sensor proposals from the scientific literature is performed.
Moving up to the system level, the third contribution centers on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works in the literature aim to maximize the accuracy of the system with the minimum number of monitors. In contrast, new quality metrics are introduced beyond the number of sensors: power consumption, sampling frequency, interconnection costs and the possibility of choosing among different monitor types. The model is embedded in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected monitor type, the number of monitors, their positions and the optimum sampling rate. The algorithm is validated through several case studies on the Alpha 21364 processor under different constraint configurations. Compared with previous works in the literature, the model presented here is the most complete.
Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, the problem of connecting them in an area- and power-efficient way is addressed. The first proposal in this area is the introduction of a new level in the interconnection hierarchy, the threshing level, between the monitors and the traditional peripheral buses. This level applies data selectivity to reduce the amount of information sent to the central controller. The idea behind it is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data, normally the extreme values, is of interest. To cover the new level, a single-wire monitoring network based on a time-domain signaling scheme is proposed. This scheme significantly reduces both the switching activity over the wire and the power consumption of the network, and the monitor readings arrive at the controller already ordered from maximum to minimum. If this signaling is applied to monitors that perform time-to-digital conversion, the digitization resources can be shared both in time and in space, which yields important area and power savings. Finally, two prototypes of complete monitoring systems are presented that significantly outperform previous works in terms of area and, especially, power consumption.
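The two sensing ideas lend themselves to a compact behavioural model. The sketch below is a minimal Python illustration, not the thesis implementation: it assumes a textbook subthreshold leakage model and invented device parameters (i0, C, Vth, the clock rate and the sub-codes-per-octave resolution). It shows how the discharge pulse shrinks exponentially with temperature, how a logarithmic counter linearizes that width, and how the ratio of two discharge times taken at different gate voltages cancels the process-dependent terms.

```python
# Behavioural sketch (assumed parameters, not the thesis circuit) of:
# (1) leakage-based pulse generation + logarithmic counter linearisation, and
# (2) the ratio of two discharge times, which cancels process-dependent factors.
import math

K_B, Q = 1.380649e-23, 1.602176634e-19   # Boltzmann constant, electron charge

def discharge_time(temp_k, vgs, vth=0.45, n=1.4, i0=1e-7, c=1e-12, swing=1.0):
    """Time for the floating node to discharge through subthreshold leakage:
    I_leak ~ i0 * exp((vgs - vth) / (n * kT/q)); t = C * swing / I_leak."""
    vt = K_B * temp_k / Q                      # thermal voltage kT/q
    i_leak = i0 * math.exp((vgs - vth) / (n * vt))
    return c * swing / i_leak

def log_counter(width_s, clk_hz=1e6):
    """Logarithmic counter: time-to-digital conversion plus linearisation of
    the exponential pulse-width-vs-temperature characteristic."""
    cycles = max(1, int(width_s * clk_hz))
    return int(math.log2(cycles) * 16)         # 16 sub-codes per octave (assumed)

def ratio_temperature(temp_k, vg1=0.0, vg2=0.1, n=1.4, **proc):
    """Second technique: t1/t2 = exp(q*(vg2-vg1)/(n*k*T)), so i0, C and Vth
    cancel and T follows from the logarithm of the ratio."""
    t1 = discharge_time(temp_k, vg1, **proc)
    t2 = discharge_time(temp_k, vg2, **proc)
    return Q * (vg2 - vg1) / (n * K_B * math.log(t1 / t2))

for t_c in (25, 50, 75, 100):
    t_k = t_c + 273.15
    # shifting Vth (a process variation) changes the code but not the ratio
    print(t_c, log_counter(discharge_time(t_k, 0.0)),
          round(ratio_temperature(t_k, vth=0.50) - 273.15, 2))
```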
Abstract:
CO2 capture and storage (CCS) projects are currently being developed to reduce the emission of anthropogenic CO2 into the atmosphere. CCS technologies are expected to account for 20% of the CO2 reduction by 2050. One of the main concerns about CCS is whether the CO2 will remain confined within the geological formation into which it is injected, since post-injection CO2 migration on the time scale of years, decades and centuries is not well understood. Theoretically, CO2 can be retained at depth i) as a supercritical fluid (physical trapping), ii) as a fluid slowly migrating in an aquifer along a long flow path (hydrodynamic trapping), iii) dissolved into groundwater (solubility trapping) and iv) precipitated as secondary carbonates (mineral trapping). Carbon dioxide will be injected in the near future (2012) at Hontomín (Burgos, Spain) within the framework of the Compostilla EEPR project, led by the Fundación Ciudad de la Energía (CIUDEN). In order to detect leakage during the operational stage, a pre-injection geochemical baseline is currently being developed. In this work a geochemical monitoring design is presented to provide information about the feasibility of CO2 storage at depth.
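As a toy illustration of how such a baseline can later serve for leak screening, the sketch below (an assumption of this edit, not a method from the paper) flags readings that fall outside the natural background variability; the flux values and the 3-sigma threshold are invented placeholders.

```python
# Hedged sketch: screening post-injection readings against a pre-injection
# geochemical baseline. All numbers are made-up illustrative values.
from statistics import mean, stdev

baseline_flux = [4.1, 3.8, 5.0, 4.4, 3.9, 4.7, 5.2, 4.0]  # soil CO2 flux, g/m2/d
mu, sigma = mean(baseline_flux), stdev(baseline_flux)
threshold = mu + 3 * sigma          # 3-sigma envelope (assumed criterion)

def screen(sample_flux):
    """Return True when a reading exceeds the baseline envelope."""
    return sample_flux > threshold

for x in (4.6, 5.9, 9.8):
    print(f"{x:4.1f} g/m2/d -> {'anomaly' if screen(x) else 'within baseline'}")
```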
Abstract:
The integration of scientific knowledge about possible climate change impacts on water resources has direct implications for the way water policies are implemented and evolve. This is particularly true for the various technical steps embedded in EU Water Framework Directive river basin management planning, such as risk characterisation, monitoring, design and implementation of action programmes, and evaluation of the achievement of the "good status" objective (in 2015). The need to incorporate climate change considerations into the implementation of EU water policy is currently being discussed with a wide range of experts and stakeholders at EU level. Research is also ongoing, striving to support policy developments and examining how scientific findings and recommendations could best be taken on board by policy-makers and water managers within the forthcoming years. This paper provides a snapshot of policy discussions about climate change in the context of WFD river basin management planning and of specific advances in related EU-funded research projects. Perspectives for strengthening links between the scientific and policy-making communities in this area are also highlighted.
Abstract:
The real increase in energy prices and the intention of reducing pollutant emissions in developed countries make it attractive to use solar energy in all processes where its application is possible. As has been demonstrated in countries located at latitudes with optimal conditions of solar radiation and temperature, it is possible to use solar energy as a heat source for small-scale hatcheries [1,2]; beyond that, with a properly designed installation, solar energy can serve as the main or supporting energy source in medium- and large-size incubators. Monitoring of a normal production process using temperature and relative-humidity sensors is necessary to establish the actual operating conditions for which the solar heating system must be designed and sized. Moreover, the identification and analysis of temperature and enthalpy gradients inside the incubator is of major importance.
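Since enthalpy is not measured directly, it has to be derived from the temperature and relative-humidity readings. The sketch below is a hedged illustration using standard psychrometric approximations (the Magnus saturation-pressure formula and moist-air enthalpy per kilogram of dry air); the sensor readings and the sea-level pressure are assumptions, not data from the paper.

```python
# Turning T/RH sensor readings into moist-air enthalpy to map gradients
# inside the incubator. Textbook psychrometric approximations.
import math

P_ATM = 101.325  # ambient pressure, kPa (assumed sea level)

def saturation_pressure_kpa(t_c):
    """Magnus approximation for water vapour saturation pressure (kPa)."""
    return 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))

def humidity_ratio(t_c, rh):
    """kg of water vapour per kg of dry air, from temperature (C) and RH (0..1)."""
    p_w = rh * saturation_pressure_kpa(t_c)
    return 0.622 * p_w / (P_ATM - p_w)

def enthalpy_kj_per_kg(t_c, rh):
    """Specific enthalpy of moist air, kJ per kg of dry air."""
    w = humidity_ratio(t_c, rh)
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# Example: enthalpy gradient between two hypothetical sensor locations
top, bottom = enthalpy_kj_per_kg(37.8, 0.55), enthalpy_kj_per_kg(37.2, 0.58)
print(f"enthalpy gradient: {top - bottom:.2f} kJ/kg dry air")
```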
Abstract:
Current nanometer technologies are subject to several adverse effects that seriously impact the yield and performance of integrated circuits, such as within-die parameter uncertainties, varying workload conditions, aging and temperature. Monitoring, calibration and dynamic adaptation have appeared as promising solutions to these issues, and many kinds of monitors have been presented recently. In this scenario, where systems with hundreds of monitors of different types have been proposed, light-weight monitoring networks have become essential. In this work we present a light-weight network architecture based on sharing the digitization resources of nodes that require time-to-digital conversion. Our proposal employs a single-wire interface, shared among all the nodes in the network, and quantizes the time domain to perform both access multiplexing and information transmission. It achieves a 16% improvement in area and power consumption compared to traditional approaches.
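A minimal behavioural simulation can convey the time-domain idea. The sketch below is an assumption-laden illustration, not the paper's exact protocol: after a shared synchronization event, each node pulses the single wire after a delay that encodes its reading, so events arrive at the controller already ordered. The slot time, code width and node names are invented, and contention between equal readings is left unresolved.

```python
# Single-wire, time-domain signaling sketch: each node's delay encodes its
# 8-bit reading, so the controller sees values ordered hottest-first.
import heapq

T_SLOT = 1e-6          # time quantum coding one LSB (assumed)
FULL_SCALE = 255       # 8-bit monitor reading (assumed)

def schedule(readings):
    """Map each node's reading to its pulse time on the shared wire."""
    events = [((FULL_SCALE - value) * T_SLOT, node, value)
              for node, value in readings.items()]
    heapq.heapify(events)   # the wire delivers events in time order
    return events

readings = {"mon0": 143, "mon1": 201, "mon2": 88, "mon3": 201}
wire = schedule(readings)
while wire:
    t, node, value = heapq.heappop(wire)
    # mon1 and mon3 share a slot: a real design needs a tie-break, omitted here
    print(f"t = {t*1e6:5.1f} us  {node} -> {value}")   # arrives max-to-min
```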
Abstract:
Geodetic volcano monitoring in Tenerife has mainly focused on the Las Cañadas Caldera, where a geodetic micronetwork and a levelling profile are located. A sensitivity test of this geodetic network showed that it should be extended to cover the whole island for volcano monitoring purposes. Furthermore, InSAR detected two unexpected ground movements that were beyond the scope of the traditional geodetic network. These two facts prompted us to design and observe a GPS network covering the whole of Tenerife, first measured in August 2000. The results obtained were accurate to one centimetre, and they confirm one of the deformations, although they were not definitive enough to confirm the second one. Furthermore, new cases of possible subsidence have been detected in areas where InSAR could not be used to measure deformation due to low coherence. A first modelling attempt has been made using a very simple model, and its results seem to indicate that the observed deformation and the groundwater level variation in the island may be related. Future observations will be necessary for further validation, to study the time evolution of the displacements, to carry out interpretation work using different types of data (gravity, gases, etc.) and to develop models that represent the island more closely. The results obtained are important because they might affect geodetic volcano monitoring on the island, which will only be really useful if it is capable of distinguishing between displacements that might be linked to volcanic activity and those produced by other causes. One important result of this work is that a new geodetic monitoring system based on two complementary techniques, InSAR and GPS, has been set up on Tenerife. This is the first time that the whole surface of one of the volcanic Canary Islands has been covered with a single network for this purpose. This research has demonstrated the need for similar studies in the Canary Islands, at least on the islands posing a greater risk of volcanic reactivation, such as Lanzarote and La Palma, where InSAR techniques have already been used.
Abstract:
Variability associated with CMOS evolution affects the yield and performance of current digital designs. FPGAs, which are widely used for fast prototyping and implementation of digital circuits, also suffer from these issues. Proactive approaches are starting to appear that pursue self-awareness and dynamic adaptation of these devices. To support such techniques we propose the deployment of a multi-purpose sensor network. Through adequate use of configuration and automation tools, this infrastructure is able to obtain relevant data along the life cycle of an FPGA. This is realised at a very reduced cost, not only in terms of area and other limited resources, but also regarding the design effort required to define and deploy the measuring infrastructure. Our proposal has been validated by measuring inter-die and intra-die variability in different FPGA families.
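One simple way to exploit such measurements is a first-order statistical split of the observed spread. The following sketch is this edit's illustration, not the paper's toolflow: it separates inter-die from intra-die variability from hypothetical per-sensor readings such as ring-oscillator frequencies.

```python
# Separating inter-die from intra-die variability in sensor readings.
# Data are made-up placeholders (e.g. RO frequencies in MHz).
import statistics as st

# readings[die] -> list of per-sensor values across that die
readings = {
    "dieA": [412.0, 408.5, 415.2, 410.1],
    "dieB": [398.7, 401.3, 396.9, 400.2],
    "dieC": [405.4, 407.9, 403.1, 406.6],
}

die_means = {die: st.mean(v) for die, v in readings.items()}
inter_die = st.stdev(die_means.values())                      # die-to-die spread
intra_die = st.mean(st.stdev(v) for v in readings.values())   # within-die spread

print(f"inter-die sigma: {inter_die:.2f}")
print(f"mean intra-die sigma: {intra_die:.2f}")
```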
Abstract:
Tool wear detection is a key issue for tool condition monitoring, and maximizing useful tool life is closely related to the optimization of machining processes. This paper presents two model-based approaches for tool wear monitoring built on neuro-fuzzy techniques. Using a neuro-fuzzy hybridization to design a tool wear monitoring system aims to exploit the synergy of neural networks and fuzzy logic, combining human-style reasoning with learning and a connectionist structure. Turning, a well-known machining process, is selected as the case study. A four-input (time, cutting forces, vibrations and acoustic emission signals), single-output (tool wear rate) model is designed and implemented on the basis of three neuro-fuzzy approaches (inductive, transductive and evolving neuro-fuzzy systems). The tool wear model is then used for monitoring the turning process. The comparative study demonstrates that the transductive neuro-fuzzy model provides better error-based performance indices for detecting tool wear than either the inductive or the evolving neuro-fuzzy model.
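For readers unfamiliar with the model family, the sketch below shows the kind of first-order Takagi-Sugeno fuzzy model that neuro-fuzzy systems tune: Gaussian membership functions weight local linear rules, and the wear estimate is their blend. The rule centres, widths and coefficients here are invented for illustration; in the paper they would be learned from the force, vibration and acoustic-emission training data.

```python
# First-order Takagi-Sugeno fuzzy inference sketch for tool wear estimation.
# Rule parameters are invented placeholders, not the paper's trained models.
import math

def gauss(x, c, s):
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

# Each rule: centres for [time, force, vibration, AE], widths, 4 linear coeffs + bias
RULES = [
    ((5.0, 120.0, 0.8, 40.0), (4.0, 60.0, 0.5, 25.0), (0.004, 0.0006, 0.02, 0.001, 0.01)),
    ((20.0, 260.0, 2.1, 95.0), (6.0, 70.0, 0.7, 30.0), (0.006, 0.0009, 0.05, 0.002, 0.05)),
]

def predict_wear(time_min, force_n, vib_g, ae_rms):
    x = (time_min, force_n, vib_g, ae_rms)
    num = den = 0.0
    for centres, widths, coeffs in RULES:
        w = math.prod(gauss(xi, c, s) for xi, c, s in zip(x, centres, widths))
        local = sum(a * xi for a, xi in zip(coeffs, x)) + coeffs[-1]  # rule consequent
        num += w * local
        den += w
    return num / den if den else 0.0

print(f"estimated flank wear: {predict_wear(12.0, 180.0, 1.2, 60.0):.3f} mm")
```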
Abstract:
Geologic storage of carbon dioxide (CO2) has been proposed as a viable means of reducing anthropogenic CO2 emissions. Once injection begins, a program for measurement, monitoring and verification (MMV) of the CO2 distribution is required in order to: a) research key features, effects and processes needed for risk assessment; b) manage the injection process; c) delineate and identify leakage risk and surface escape; d) provide early warnings of failure near the reservoir; and e) verify storage for accounting and crediting. The selection of the monitoring methodology (site characterization, and control and verification in the post-injection phase) is influenced by economic and technological variables. Multiple Criteria Decision Making (MCDM) refers to a methodology developed for making decisions in the presence of multiple criteria. As a discipline, MCDM has a relatively short history of about 40 years, closely tied to advances in computer technology. Evaluation methods and multicriteria decisions involve selecting a set of feasible alternatives, simultaneously optimizing several objective functions, and following decision-making and evaluation procedures that must be rational and consistent. Applying a mathematical decision-making model helps to find the best solution, establishing mechanisms that facilitate the management of the information generated by the many disciplines involved. Problems with a finite number of decision alternatives are called discrete multicriteria decision problems. Such problems are the most common in practice, and this is the scenario applied here to the problem of selecting sites for storing CO2. Discrete MCDM is used to assess and decide on issues that by nature or design support a finite number of alternative solutions. Recently, multicriteria decision analysis has been applied to rank policy incentives for CCS, to assess the role of CCS, and to select potential areas that could be suitable for storage. For these reasons, MCDM is considered here in the monitoring phase of CO2 storage, in order to select suitable technologies that are techno-economically viable. In this paper we identify subsurface gas-measurement techniques currently applied in the characterization (pre-injection) phase; MCDM will help decision-makers rank the techniques best suited to monitoring each specific physico-chemical parameter.
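As a concrete illustration of a discrete-MCDM ranking, the sketch below implements TOPSIS, one common method of this family; the paper does not prescribe this particular method, and the alternatives, criteria, scores and weights are invented placeholders.

```python
# TOPSIS sketch for ranking monitoring techniques. All inputs are invented.
import math

alternatives = ["soil CO2 flux", "downhole geochemistry", "tracer sampling"]
# criteria: cost (lower is better), detection sensitivity, maturity
scores = [
    [3.0, 7.0, 9.0],
    [8.0, 9.0, 6.0],
    [5.0, 8.0, 7.0],
]
weights = [0.4, 0.4, 0.2]
benefit = [False, True, True]   # is each criterion to be maximised?

# 1) vector-normalise each column, 2) apply the weights
norms = [math.sqrt(sum(row[j] ** 2 for row in scores)) for j in range(3)]
v = [[weights[j] * scores[i][j] / norms[j] for j in range(3)] for i in range(3)]

# 3) ideal and anti-ideal points per criterion
ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]

# 4) relative closeness to the ideal solution (higher ranks better)
for name, row in zip(alternatives, v):
    d_pos, d_neg = math.dist(row, ideal), math.dist(row, anti)
    print(f"{name:22s} closeness = {d_neg / (d_pos + d_neg):.3f}")
```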
Abstract:
High-Performance Computing, Cloud computing and next-generation applications such as e-Health or Smart Cities have dramatically increased the computational demand on Data Centers. The huge energy consumption, increasing levels of CO2 and the economic costs of these facilities represent a challenge for industry and researchers alike. Recent research trends propose the usage of holistic optimization techniques to jointly minimize Data Center computational and cooling costs from a multilevel perspective. This paper presents an analysis of the parameters needed to integrate the Data Center in a holistic optimization framework and leverages Cyber-Physical systems to gather workload, server and environmental data via software techniques and by deploying a non-intrusive Wireless Sensor Network (WSN). This solution tackles data sampling, retrieval and storage from a reconfigurable perspective, reducing the amount of data generated for optimization by 68% without information loss, doubling the lifetime of the WSN nodes and allowing runtime energy minimization techniques in a real scenario.
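A change-driven (dead-band) reporting policy is one simple mechanism consistent with this kind of reconfigurable data reduction. The sketch below is an assumption of this edit rather than the paper's exact policy: a node transmits only when its reading leaves a band around the last reported value, so the sink can reconstruct the signal to within that band.

```python
# Dead-band reporting sketch for a WSN node: transmit only significant changes.
# The band width and the temperature trace are invented placeholders.
def dead_band_reporter(samples, band=0.5):
    """Yield (index, value) pairs the node actually transmits."""
    last = None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > band:
            last = x
            yield i, x

inlet_temp = [22.0, 22.1, 22.2, 22.2, 23.1, 23.2, 24.5, 24.4, 24.5, 24.6]
sent = list(dead_band_reporter(inlet_temp))
print(f"transmitted {len(sent)}/{len(inlet_temp)} samples: {sent}")
```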
Abstract:
This thesis examines the technical, political and spatial implications of urban air, and more specifically of "air quality", in order to consider it from an architectural perspective. In opposition to understandings of the air either as a void or as a metaphor, this project proposes to inspect it from a material and technical approach, bringing the background to the fore and acknowledging its multiple agencies. Given the scarce bibliography within the architectural field, its first aim is to construct a theoretical and analytical framework from which to consider urban air. For this purpose, the work attempts the construction of Aeropolis, a heuristic metaphor that describes the city's aero-socio-material assemblage. Located at the intersection of certain currents in cultural philosophy, science and technology studies, and feminist studies of technoscience, this framework enables a methodology and toolset to be extracted in order to approach the subject matter from different angles. The methodological tools stemming from this purpose-built framework were put to the test in a specific case study: Madrid, a highly polluted city whose air has been the subject of political and social controversy, and where no policies or technologies have succeeded in reducing pollution levels. In order to engage with the air, the thesis suggests a method for researching invisible agents by examining the epistemic devices involved. It locates and focuses on the instruments that sense, visualise and communicate urban air, claiming that they do not only represent it but are also instruments that design the air and the city. The notion of "sensing" is then expanded by recognising different practices that enact the air in Madrid. The work claims that the result is not only the opening up of spaces for engagement, but also the legitimisation of existing practices outside science and policy-making environments, such as embodied practices, as well as the redistribution of agency among more actors. So this is a thesis about toxicity, the European Union, collaborative production, scientific computational models, headaches, DIY kits, gases, human bodies, control rooms, blood and politicians, among many others. The devices found throughout the work serve as an exceptional substrate for an investigation of digital infrastructures, making it possible to challenge Smart City tropes. Special attention is paid to the effects of the air on public space, acknowledging the silencing processes its monitoring infrastructures have been subjected to. Finally, the thesis outlines the opportunities that arise for architecture and urban design when the air is taken into account, creating new (queer) urban political ecologies between the air, urban infrastructures, the built environment, public and domestic spaces, and humans and more-than-humans.
Abstract:
In view of growing freshwater demand due to population growth, economic development and the potential effects of climate change, water resources management has become a challenge for present and future societies. Water policy in Spain has focused since the 1960s on the construction of hydraulic infrastructures that dampen flow variability and guarantee water availability throughout the year. As a consequence, natural flow regimes have been deeply altered, and so have the habitats and ecosystems that depend on them. However, societal interest in preserving healthy freshwater ecosystems began to grow in the 1990s, together with the recognition that maintaining them requires environmental flow regimes based on natural flow variability. The Natural Flow Regime paradigm (Richter et al. 1996, Poff et al. 1997) rests on the hypothesis that freshwater ecosystems are made up of elements adapted to natural flow conditions, so that any change in these conditions can inflict deep impacts on the whole system. The environmental flow regime concept consists in designing a flow regime that emulates the main characteristics of the natural one, so that ecosystem structure, functions and services are maintained. The ELOHA framework (Ecological Limits of Hydrological Alteration) aims to identify the key features of the natural flow regime (NFR) needed to maintain healthy freshwater and riparian ecosystems, and to quantify the thresholds of alteration of these features according to their ecological impacts.
This thesis describes the application of the ELOHA framework in the Ebro River basin, the second largest basin in Spain and one heavily regulated for human demands; only the headwaters of the main Ebro tributaries still retain a natural or quasi-natural flow regime. The thesis has six chapters and describes the process step by step. The first chapter introduces the origin of the environmental flow concept and the need it responds to. The second chapter describes the study area. The third chapter develops a classification of the NFRs in the basin, based on minimally altered flow records and using exclusively hydrological parameters. Six NFR types were found: continental Mediterranean-pluvial, nivo-pluvial, continental Mediterranean-pluvial with a groundwater-dominated flow pattern, pluvio-oceanic, pluvio-nival-oceanic and Mediterranean. The fourth chapter develops a regionalization of the six NFR types across the basin using climatic and physiographic variables, extrapolating the NFR type to sites where no unaltered flow data exist. The geographical pattern obtained from the regionalization was consistent with the pattern obtained from the hydrological classification. The fifth chapter performs a biological validation of both classifications. When the aim of a flow classification is to manage water resources according to ecosystem requirements, a validation based on biological data is compulsory, since the NFR types will serve as management units for preserving fluvial ecosystems. Significant differences in reference macroinvertebrate communities were found between five of the six NFR types identified in the basin. Finally, the sixth chapter explores flow alteration-ecological response relationships (FA-E curves) within NFR types in the Ebro River basin. By building these curves along a gradient of hydrological alteration, thresholds of hydrological alteration (ELOHAs) that guarantee good ecological status can be suggested, and ELOHA values were set for three of the six NFR types.
The development of this thesis also revealed an inadequate biological monitoring design in the Ebro River basin. Designing and establishing appropriate monitoring arrangements is a critical final step in the assessment and implementation of environmental flows: the cause-effect relationships between hydrology and macroinvertebrate community condition that sustain FA-E curves require biological sampling sites located close to gauging stations, so that the effects of external factors are minimized. The scarcity of such paired hydro-biological data in the basin prevented the application of the ELOHA method to all NFR types.
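The classification step can be pictured with a small example. The sketch below (illustrative, not the thesis pipeline) groups gauges described by standardized flow metrics using Ward's hierarchical clustering, one common way of deriving flow-regime types; the metric names and values are invented.

```python
# Hydrological classification sketch: cluster gauges by flow metrics.
# Rows = minimally altered gauges; columns = flow metrics (invented), e.g.
# [mean annual flow, baseflow index, winter/summer ratio, flood frequency].
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

metrics = np.array([
    [0.9, 0.31, 2.4, 5.0],
    [1.1, 0.29, 2.6, 4.0],
    [0.4, 0.65, 1.1, 2.0],
    [0.5, 0.61, 1.2, 2.0],
    [2.3, 0.18, 4.0, 7.0],
])
z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)  # standardise

tree = linkage(z, method="ward")                    # agglomerative clustering
labels = fcluster(tree, t=3, criterion="maxclust")  # cut into 3 regime types
print("gauge -> regime type:", dict(enumerate(labels, start=1)))
```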
Abstract:
This thesis aims to develop robust and reliable damage identification methods focused on experimental structural systems, in particular reinforced concrete (RC) structures externally strengthened with fiber-reinforced polymer (FRP) strips. The failure mode of this type of structural system is critical, since it is usually due to the sudden and brittle debonding of the FRP reinforcement originating at intermediate flexural cracks. Detecting the debonding in its initial stage is therefore essential to prevent future failure, which might be catastrophic. Initially, a review of the electro-mechanical impedance (EMI) method is carried out in order to expose its capabilities for local damage detection. Once the appropriate technology is selected, including an impedance analyzer as well as novel PZT sensors for smart monitoring, an automated procedure is designed based on the impedance signatures of several lab-scale structures. On the basis that impedance measurements are made possible by an adequately deployed PZT sensor network, the presence of damage is estimated by analyzing the results of different damage indices taken from the literature. To make this process automatic, so that no prior knowledge of the EMI method is needed to carry out an experimental test, a graphical user interface has been designed, turning the impedance measurement into an easy and intuitive procedure. Damage is then assessed through the corresponding damage indices, trying to estimate not only its severity but also its approximate location.
Running these tests on any kind of structure generates large amounts of data to be processed, and sometimes the information provided by damage indices is not enough for a complete analysis of the structural health condition. In most cases some damage patterns can be found in the data, but no a priori knowledge of the health condition of the structure is available. At this point, an extensive investigation of pattern recognition techniques was carried out, particularly of unsupervised learning, finding interesting applications in the field of medicine. From this investigation a creative and innovative idea arose: to detect and track the evolution of damage in different structures as if it were a cancer propagating through a human body. In that sense, the impedance signatures are used as intrinsic information on the health condition of the structure, so that the same clustering algorithms applied in cancer research can be applied to the problem addressed in this dissertation. Hierarchical clustering is applied, since it also provides a graphical display of the clustered data, including quantitative and qualitative information about the damage. The performance of this approach is first investigated using three lab-scale structures: a simple aluminium beam, a bolt-jointed aluminium beam and an FRP-strengthened concrete specimen. The first shows the performance of the method in simple single- and multiple-damage scenarios, so that the conclusions extracted can be applied to the other two tests, which are designed to simulate debonding conditions in different structures. Once the impedance-based hierarchical clustering method is proven successful, it is applied to the structural system studied in this dissertation, RC structures externally strengthened with FRP strips, where the debonding failure at the FRP-concrete interface is successfully detected and classified, proving the feasibility of the method. Finally, as an alternative to the previous approach, a continuous monitoring procedure for the FRP-concrete interface is proposed, based on an FBG sensor network permanently deployed within that interface. In this way, strain measurements are obtained under controlled loading conditions and used to feed a multi-objective model updating method solved by a multi-objective expansion of the particle swarm optimization (PSO) method. The feasibility of this last proposal is investigated and successfully proven on both numerical and experimental RC beams strengthened with FRP.
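Among the literature damage indices mentioned above, the root-mean-square deviation (RMSD) between a baseline and a current impedance signature is one representative example. The sketch below illustrates it on synthetic signatures; the thesis evaluates several indices, and the numbers here are placeholders.

```python
# RMSD damage index sketch for EMI-based monitoring. Signatures are synthetic.
import math

def rmsd(baseline, current):
    """Root-mean-square deviation between two impedance signatures (%)."""
    num = sum((c - b) ** 2 for b, c in zip(baseline, current))
    den = sum(b ** 2 for b in baseline)
    return 100.0 * math.sqrt(num / den)

# real part of the impedance over the swept frequency band, one value per bin
baseline = [102.0, 98.5, 110.2, 95.1, 99.8]
pristine = [101.8, 98.9, 109.9, 95.4, 99.5]
debonded = [108.3, 92.1, 117.6, 90.2, 104.9]

print(f"pristine check: RMSD = {rmsd(baseline, pristine):.2f} %")
print(f"after damage:   RMSD = {rmsd(baseline, debonded):.2f} %")
```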
Abstract:
Recent research has shown large differences between the expected and the actual energy consumption of buildings. These differences have been attributed partially to the assumptions made during the design phase of buildings when simulation methods are employed. More accurate occupancy profiles of building operation could help to carry out more precise building performance calculations. This study focuses on the post-occupancy evaluation of two apartments, one renovated and one non-renovated, within the same building complex in Madrid. The aim of this paper is to present an application of the mixed-methods methodology (Creswell, 2007) to assess thermal comfort and occupancy practices in the case studies, and to discuss the shortcomings and opportunities associated with it. The mixed-methods methodology offers strategies for integrating qualitative and quantitative methods to investigate complex phenomena. This approach is expected to contribute to the growing knowledge of occupants' behaviour and building performance by explaining the differences observed between energy consumption and thermal comfort in relation to people's saving and comfort practices and the related experiences, preferences and values.
Abstract:
The use of human motion monitoring techniques usually allows researchers to analyse kinematics, and especially motor strategies, in goal-oriented activities of daily living such as the preparation of drinks and food, and even grooming tasks. Additionally, the evaluation of human movement and behaviour in the field of cognitive rehabilitation is essential to gain insight into the difficulties some people find in common activities after a stroke. These difficulties are mainly associated with performing sequences of actions and with recognizing the use of tools and objects. The interpretation of data about the performance of this kind of patient, in order to recognize and determine the level of success in the execution of actions and to broaden knowledge of brain diseases, their consequences and severity, depends entirely on the devices used to capture the data and on the quality of that data. Moreover, there is a real need to improve current cognitive rehabilitation techniques by contributing to the design of automatic systems that act as a kind of virtual therapist, supporting a more independent life for these patients and reducing the workload of the occupational therapists in charge of them. For this purpose, the use of sensors and devices to obtain real-time data on the execution and state of the rehabilitation task is essential, and also contributes to the design and training of future smart algorithms that may recognize errors automatically and provide multimodal feedback to the patient through different types of cues, such as still images, auditory messages or even videos. The technology and solutions currently adopted in the field do not offer a fully robust and effective way of obtaining real-time data: on the one hand, marker-based platforms that need sensors attached to the skin may influence the patient's own movement; on the other hand, the complexity or high cost of deployment makes it difficult to install a system at the hospital or even at the patient's home.
This thesis presents research done in the field of patient monitoring to provide a step forward in terms of detection, tracking and recognition of hand movements, gestures and faces in a non-invasive way, which can improve current cognitive rehabilitation techniques through real-time acquisition of data about the patient's behaviour and the execution of the task. To frame the scope of the thesis, a summary of the main cognitive diseases requiring rehabilitation is first presented, together with an introduction to their consequences for the execution of daily tasks, and current cognitive rehabilitation methodologies are reviewed. Considering that the hands are the main body parts involved in the completion of manual daily tasks, existing technologies for capturing hand movements are also surveyed. One of the main contributions of this thesis is the design and evaluation of a non-invasive approach to detect and track the user's hands during the execution of handmade activities of daily living that involve the manipulation of objects. This approach needs no additional markers, is based only on a low-cost depth camera, and is robust, accurate and easy to install. Another contribution focuses on hand gesture recognition for detecting object grasping, based on a brand-new infrared sensor complemented with a depth camera. This new, likewise non-invasive, solution synchronizes both sensors to track specific tools and to recognize specific events related to grooming. Moreover, a preliminary assessment of facial expression recognition is carried out to analyse whether it is adequate for recognizing mood during the execution of the task. All the hardware and software developed are integrated into a simple prototype to be used as a platform for monitoring the execution of the rehabilitation task. A technical evaluation of the performance of each device is carried out to analyse its suitability for acquiring real-time data during the execution of real daily tasks. Finally, a usability evaluation paying special attention to the interaction with real users and stroke patients is presented; this feedback is essential for considering home-based cognitive rehabilitation as well as a possible hospital installation of the prototype.
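The basic markerless idea behind the depth-camera hand tracker can be sketched in a few lines. The example below is a hedged illustration, not the thesis algorithm: it segments the pixels lying within an assumed working depth band and tracks the centroid of the largest blob. The synthetic frame stands in for what a real camera SDK would deliver as a 16-bit depth image in millimetres, and the OpenCV 4.x API is assumed.

```python
# Markerless hand segmentation sketch from a depth frame (OpenCV 4.x API).
# Depth band limits and the synthetic frame are invented placeholders.
import numpy as np
import cv2

def track_hand(depth_mm, near=400, far=900):
    """Return the (x, y) centroid of the largest blob in the depth band."""
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

# synthetic frame: background at 1.5 m with a 'hand' patch at 0.6 m
frame = np.full((240, 320), 1500, dtype=np.uint16)
frame[100:160, 140:190] = 600
print("hand centroid:", track_hand(frame))
```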