23 results for 300804 Environmental Impact Assessment
Abstract:
The environmental impact of systems managing large (kilogram-scale) tritium inventories is a public scrutiny issue for forthcoming fusion facilities such as ITER and DEMO. Furthermore, potentially stricter dose limits imposed by international regulations (ICRP) may affect the design of forthcoming devices and the overall cost of deploying fusion technology. Refined schemes for assessing the environmental dose impact of tritium are therefore urgently needed. Detailed assessments can be obtained from knowledge of the real boundary conditions of the primary tritium discharge phase into the atmosphere (low levels) and into soils. Lagrangian dispersion models using real-time meteorological and topographic data provide a strong refinement, and advanced simulation tools are being developed along these lines. The tool described here integrates numerical model output records from the European Centre for Medium-Range Weather Forecasts (ECMWF) with a Lagrangian atmospheric dispersion model (FLEXPART). The composite ECMWF/FLEXPART results can be coupled with tools for assessing the secondary-phase tritium dose pathways. Nominal operational reference tritium discharges and selected incidental ITER-like plant system tritium source terms have been assumed. The real-time daily data and mesh-refined records, together with the Lagrangian dispersion model approach, provide accurate results for doses to the population by inhalation or ingestion in the secondary phase.
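For illustration, the Lagrangian approach mentioned above advances a large ensemble of particles with the resolved wind plus a stochastic turbulent component, then grids the particle counts to obtain concentrations. A minimal Python sketch of that idea with invented parameters (this is not FLEXPART's implementation, which adds ECMWF wind fields, boundary-layer turbulence profiles and deposition):

    import numpy as np

    # Minimal 2-D Lagrangian random-walk: advect particles with a mean wind and
    # add a stochastic turbulent displacement each step (parameters invented).
    rng = np.random.default_rng(0)
    n_particles = 10_000
    dt = 60.0                         # time step [s]
    u_mean = np.array([3.0, 1.0])     # mean wind vector [m/s]
    sigma_turb = 0.5                  # turbulent velocity std dev [m/s]

    pos = np.zeros((n_particles, 2))  # all particles start at the release point
    for _ in range(60):               # one hour of transport
        turb = rng.normal(0.0, sigma_turb, size=pos.shape)
        pos += (u_mean + turb) * dt   # resolved advection + turbulent diffusion

    # Concentration follows by counting particles per grid cell; each particle
    # carries an equal share of the released activity [Bq].
    counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=50)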
Abstract:
We examined the consequences of the spatial heterogeneity of atmospheric ammonia (NH3) by measuring and modelling NH3 concentrations and deposition at 25 m grid resolution for a rural landscape containing intensive poultry farming, agricultural grassland, woodland and moorland. The emission pattern gave rise to a high spatial variability of modelled mean annual NH3 concentrations and dry deposition. Largest impacts were predicted for woodland patches located within the agricultural area, while larger moorland areas were at low risk, due to atmospheric dispersion, prevailing wind direction and low NH3 background. These high resolution spatial details are lost in national scale estimates at 1 km resolution due to less detailed emission input maps. The results demonstrate how the spatial arrangement of sources and sinks is critical to defining the NH3 risk to semi-natural ecosystems. These spatial relationships provide the foundation for local spatial planning approaches to reduce environmental impacts of atmospheric NH3.
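For context, the dry deposition mapped in such studies is commonly computed per grid cell as the product of a land-cover-dependent deposition velocity and the local NH3 concentration. A minimal sketch under that standard assumption (the velocities and concentrations below are invented; the study's atmospheric transport model is more detailed):

    import numpy as np

    # Per-cell dry deposition: F = vd * C, with vd depending on land cover
    # because woodland canopies capture NH3 more efficiently than moorland.
    VD = {"woodland": 0.020, "grassland": 0.008, "moorland": 0.010}  # [m/s], invented
    cover = ["woodland", "grassland", "moorland"]
    conc = np.array([4.0, 1.5, 0.3])   # annual mean NH3 [ug/m3], invented

    vd = np.array([VD[c] for c in cover])
    flux = vd * conc                              # [ug m-2 s-1]
    kg_per_ha_yr = flux * 3.1536e7 * 1e4 * 1e-9   # ug/m2/s -> kg/ha/yr
    print(dict(zip(cover, kg_per_ha_yr.round(1))))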
Abstract:
One of the key scrutiny issues of the coming energy era will be the environmental impact of fusion facilities managing about one kilogram of tritium. A potential change in committed dose regulatory limits, together with the implementation of nuclear design principles for fusion facilities (As Low As Reasonably Achievable, ALARA; Defense in Depth, DiD), could strongly impact the cost of deploying fusion technology. Accurate modeling of environmental tritium transport forms (HT, HTO) for assessing the dosimetric impact of a fusion facility in accidental cases is therefore of major interest. This paper considers different short-term releases of tritium forms (HT and HTO) to the atmosphere from a potential fusion reactor located in the Mediterranean Basin. The work models in detail the dispersion of tritium forms and the dosimetric impact on selected environmental patterns, both inland and at sea, using real topography and forecast meteorological data fields (ECMWF/FLEXPART). We explore specific values of the HTO/HT ratio at different levels and examine the influence of meteorological conditions on HTO behavior over 24 hours. For this purpose we have used a tool consisting of a coupled Lagrangian ECMWF/FLEXPART model that can follow real-time releases of tritium at 10, 30 and 60 meters, together with hourly observations of wind (and in some cases precipitation), to provide a short-range approximation of tritium cloud behavior. We have assessed inhalation doses, as well as HTO/HT ratios, for a representative set of cases during winter 2010 and spring 2011 at the three air levels.
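As an illustration of the inhalation dose assessment, a first-order estimate multiplies the time-integrated HTO concentration in air by a breathing rate and an inhalation dose coefficient. A minimal sketch (the dose coefficient is the ICRP adult value commonly quoted for HTO; the concentration and exposure time are placeholders):

    # First-order inhalation dose: D [Sv] = C_air * breathing rate * time * DCF.
    c_air = 1.0e3         # HTO concentration in the plume [Bq/m3], placeholder
    breathing = 2.6e-4    # adult breathing rate [m3/s] (~22 m3/day)
    exposure = 3600.0     # time under the plume [s], placeholder
    dcf_hto = 1.8e-11     # ICRP adult inhalation dose coefficient for HTO [Sv/Bq]

    dose_sv = c_air * breathing * exposure * dcf_hto
    print(f"committed effective dose: {dose_sv * 1e6:.3f} uSv")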
Abstract:
The climate change impacts of transport have become a worldwide concern. The use of Intelligent Transport Systems (ITS) could contribute to a more effective use of resources in toll road networks. Management of toll plazas is central to the reduction of greenhouse gas (GHG) emissions, as it is there that bottlenecks and congestion occur. This study focuses on management strategies aimed at reducing the climate change impacts of toll plazas by managing toll collection systems. These strategies are based on the use of different collection system technologies, Electronic Toll Collection (ETC) and Open Road Tolling (ORT), and on queue management. The carbon footprint of various toll plazas is determined by a proposed integrated methodology which estimates the carbon dioxide (CO2) emissions of the different operational stages at toll plazas (deceleration, service time, acceleration, and queuing) for the different toll collection systems. To validate the methodology, two main-line toll plazas of a Spanish toll highway were evaluated. The findings reveal that the application of new technologies to toll collection systems is an effective management strategy from an environmental point of view. The case studies revealed that ORT systems lead to savings of up to 70% of CO2 emissions at toll plazas, while ETC systems save 20% compared to manual ones. Furthermore, queue management can offer a 16% emissions saving when queue time is reduced by 116 seconds. The integrated methodology provides an efficient environmental management tool for toll plazas, and the use of new technologies is central to their decarbonization.
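The integrated methodology can be pictured as stage-level emission accounting: per-vehicle CO2 is the sum over the four operational stages of the time spent in each stage times a stage emission rate. A hedged sketch (the rates and times below are placeholders, not the paper's measured values):

    # Per-vehicle CO2 = sum over stages of (time in stage) x (stage emission rate).
    RATES_G_PER_S = {"deceleration": 1.5, "service": 0.8,
                     "acceleration": 2.5, "queue": 0.6}   # placeholders

    def vehicle_co2_g(times_s):
        """Grams of CO2 for one vehicle, given seconds spent in each stage."""
        return sum(RATES_G_PER_S[stage] * t for stage, t in times_s.items())

    # Manual lanes add service time and long queues; ETC shortens both; ORT
    # (free-flow) removes the stop-related stages altogether.
    manual = vehicle_co2_g({"deceleration": 12, "service": 20, "acceleration": 15, "queue": 90})
    etc = vehicle_co2_g({"deceleration": 8, "service": 5, "acceleration": 10, "queue": 25})
    ort = vehicle_co2_g({"deceleration": 0, "service": 0, "acceleration": 0, "queue": 0})
    print(manual, etc, ort)   # g CO2 per vehicle at the plaza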
Abstract:
The increasing importance of noise pollution has led to the creation of many new noise testing laboratories in recent years. For this reason, and because of the legal implications that noise reporting may have, procedures are needed to guarantee the quality of the testing and its results. For instance, the ISO/IEC standard 17025:2005 specifies general requirements for the competence of testing laboratories. In this standard, interlaboratory comparisons are one of the main measures that must be applied to guarantee the quality of laboratories applying specific testing methodologies. In the specific case of environmental noise, round robin tests are usually difficult to design, as it is hard to find scenarios that remain available and controlled while the participants carry out the measurements. Monitoring and controlling the factors that can influence the measurements (source emissions, propagation, background noise, etc.) is not usually affordable, so the most widespread solution is to create very simple scenarios that exclude most of the factors that can influence the results (sampling, processing of results, background noise, source detection, etc.). The new approach described in this paper only requires the organizer to make actual measurements (or prepare virtual ones). Applying and interpreting a common reference document (standard, regulation, etc.), the participants must analyze these input data independently to produce results, which are then compared among the participants. The measurement costs are severely reduced for the participants, there is no need to monitor the scenario conditions, and almost any relevant factor can be included in this methodology.
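Although the paper leaves the scoring statistic open, interlaboratory comparisons of this kind often rate each participant against a robust assigned value, for example with the z-scores of ISO 13528. A minimal sketch under that assumption (the reported levels and the assessment standard deviation are invented):

    import statistics

    # Each lab analyzes the same input data set and reports one noise level.
    reported = {"lab_A": 63.1, "lab_B": 62.4, "lab_C": 64.9, "lab_D": 62.8}  # dB, invented

    assigned = statistics.median(reported.values())  # robust assigned value
    sigma_pt = 1.0   # standard deviation for proficiency assessment [dB], invented

    for lab, x in reported.items():
        z = (x - assigned) / sigma_pt
        verdict = ("satisfactory" if abs(z) <= 2
                   else "questionable" if abs(z) <= 3 else "unsatisfactory")
        print(f"{lab}: z = {z:+.2f} ({verdict})")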
Abstract:
The paper considers short-term releases of tritium (mainly, but not only, tritium hydride, HT) to the atmosphere from a potential ITER-like fusion reactor located in the Mediterranean Basin and explores whether the short-range legal exposure limits are exceeded, both locally and downwind. For this, a coupled Lagrangian ECMWF/FLEXPART model has been used to follow real-time releases of tritium. The tool was run for nominal tritium operational conditions and selected incidental conditions to determine the resulting local and Western Mediterranean effects, together with hourly observations of wind, providing a short-range approximation of tritium cloud behavior. Since our results cannot be compared with radiological station measurements of tritium in air, we use the NORMTRI Gaussian model, and we show that it overestimates the sequence of tritium concentrations in the atmosphere close to the reactor when compared with the ECMWF/FLEXPART results. This Gaussian "mesoscale" qualification tool has been used to validate ECMWF/FLEXPART for winter 2010/spring 2011 against a database of the HT plumes. NORMTRI is considered suitable for evaluating tritium-in-air plume patterns and their contribution to doses.
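For context, a Gaussian plume code of the NORMTRI family evaluates the classic closed-form point-source solution; the sketch below is the textbook formula with ground reflection, not NORMTRI's exact implementation:

    import numpy as np

    def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
        """Textbook Gaussian plume concentration [Bq/m3] with ground reflection.
        Q: source strength [Bq/s]; u: wind speed [m/s]; H: release height [m];
        sigma_y, sigma_z: dispersion parameters [m] evaluated at the receptor's
        downwind distance (e.g. from stability-class curves)."""
        lateral = np.exp(-y**2 / (2 * sigma_y**2))
        vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                    + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
        return Q * lateral * vertical / (2 * np.pi * u * sigma_y * sigma_z)

    # Illustrative numbers: 1 GBq/s HT release at 30 m, receptor 1 km downwind
    # on the plume axis at breathing height.
    print(gaussian_plume(Q=1e9, u=4.0, y=0.0, z=1.5, H=30.0,
                         sigma_y=80.0, sigma_z=40.0))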
Abstract:
All activities of an organization involve risks that should be managed. The risk management process aids decision making by taking account of uncertainty and the possibility of future events or circumstances (intended or unintended) and their effects on agreed objectives. With that idea in mind, a new ISO standard, ISO 31010, has recently been issued; it provides a structured process that identifies how objectives may be affected, and analyses risk in terms of consequences and their probabilities, before deciding whether further treatment is required. In this lecture, that ISO standard is adapted to open pit blasting operations, focusing on environmental effects so that they can be managed properly. The technique used is Fault Tree Analysis (FTA), applied to all possible scenarios, giving blasting professionals the tools to identify, analyze and manage environmental effects in blasting operations. The lecture can also help minimize each effect through case-by-case study. This paper can likewise be useful to project managers and Occupational Health and Safety (OH&S) departments, because blasting operations can be evaluated and compared with one another to determine the risks that should be managed in different case studies. The environmental effects studied are ground vibrations, flyrock and air overpressure (airblast). Blasting operations are sometimes carried out near populated areas, where environmental effects may impose several limitations on the use of explosives. In those cases, where these factors approach certain limits, national standards and regulations have to be applied.
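For independent basic events, FTA combines probabilities through the gate logic: an AND gate multiplies them and an OR gate complements the product of the complements. A minimal sketch with an invented flyrock tree (all probabilities are placeholders):

    from math import prod

    # Independent events: AND multiplies probabilities; OR complements the
    # product of complements.
    def p_and(*ps):
        return prod(ps)

    def p_or(*ps):
        return 1 - prod(1 - p for p in ps)

    # Invented tree: flyrock reaches a dwelling if confinement fails AND
    # (stemming is too short OR a geological discontinuity went undetected).
    p_top = p_and(0.02, p_or(0.05, 0.03))
    print(f"P(flyrock event) = {p_top:.5f}")   # 0.00157 with these placeholders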
Abstract:
The purpose of this research is to develop a methodological approach for assessing the potential economic and transportation impacts of transport policies. Transportation departments and other related government bodies are interested in such analyses because they are commonly misrepresented owing to insufficient data and a lack of suitable methodologies. This research aims to fill this gap with a comprehensive analysis of the available techniques that match this purpose, identifying the differences that arise when they are applied to the valuation of user benefits or to other impacts, such as social effects. As a result, this research presents an integrated approach which includes both a random utility-based multiregional Input-Output model (RUBMRIO) and a road transport network model. This model accounts for freight transport with more detail and realism because its commodity-based structure traces the linkages of inter-industry purchases and sales that use freight services within a given country. For this reason, the integrated model is applicable to various transport policies. In fact, the approach is applied to study the regional macroeconomic effects of implementing two different policies in the freight transport system of Spain: a distance-based charge per vehicle-kilometer (€/km) for Heavy Goods Vehicles (HGVs), and the introduction of Longer and Heavier Vehicles (LHVs) in the Spanish road network. The methodological approach has been evaluated case by case on a selected network of highways linking the capitals of the Spanish regions. It also incorporates an economic dimension through a Multiregional Input-Output Table (MRIO) and uses the existing traffic count database for model validation. The integrated approach replicates the observed conditions of trade among regions using the road freight transport system and, by comparison with policy scenarios, determines the contributions to distributional and generative changes. The model therefore estimates the economic impacts in any given region, considering changes in Gross Domestic Product (GDP) and employment (jobs), and identifies the changes in the transportation system across all paths of the network through Measures of Effectiveness (MOEs). The results presented in this research provide substantive evidence that the assessment of transport policies must establish a link between the economic structure of regions and transportation services. The analysis shows that for most regions of the country, GDP and employment changes are noticeable as trade is encouraged or discouraged. The approach shows how traffic is diverted under both policies and also provides details of pollutant emissions in both scenarios. Furthermore, pricing or regulation policies for road freight transportation systems directed at producers and consumers in the regions will promote different regional transformations across the country, leading to different conclusions. In addition, this integrated approach could be useful for assessing other policies and other countries worldwide.
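The economic core of a multiregional input-output approach such as RUBMRIO is the Leontief relation x = (I - A)^(-1) f, which propagates a change in final demand f through inter-industry purchases to gross output x. A two-region, two-sector toy sketch (the coefficient matrix and demands are invented):

    import numpy as np

    # Leontief model: x = A x + f  =>  x = (I - A)^(-1) f.
    # Order: (region1 industry, region1 services, region2 industry, region2 services).
    A = np.array([[0.20, 0.05, 0.10, 0.02],
                  [0.08, 0.15, 0.03, 0.05],
                  [0.12, 0.04, 0.18, 0.06],
                  [0.02, 0.06, 0.07, 0.12]])      # technical coefficients, invented
    f0 = np.array([100.0, 80.0, 90.0, 70.0])      # baseline final demand
    f1 = f0 * np.array([1.00, 1.00, 0.95, 1.00])  # policy dampens region2 industry

    L = np.linalg.inv(np.eye(4) - A)              # Leontief inverse
    dx = L @ (f1 - f0)                            # change in gross output
    print(dx)   # spillovers appear in both regions, not only where demand fell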
Abstract:
This Thesis explores the potential of speech technologies for the detection of clinical disorders connected to the upper airway. The study of speech traditionally covers both the production process and the processing of the signals involved, from the speaker up to the listener, offering an alternative path to study these pathologies. The fact that utterances embed not just the encoded message but also information about the speaker has motivated the development of automatic systems oriented to the identification and verification of the speaker's identity. These have recently been boosted and reoriented either towards the characterization of traits that are common to several speakers, or towards the differences between records of the same speaker collected under different conditions. The first are particularly relevant to this Thesis, as such patterns could reveal the presence of features related to a common condition shared among different speakers, regardless of their identity. Such is the case faced in this Thesis, where the traits identified would relate to a particular pathology directly connected to the speech production system. The Obstructive Sleep Apnea syndrome (OSA) is a paradigmatic case for analysis. It is a disorder with high prevalence among adults, affecting more of them as they grow older. Patients suffering from this disorder experience episodes of involuntary cessation of breath during sleep that may last a few seconds and recur throughout the night, preventing proper rest. In the case of obstructive apnea, these episodes are related to the collapse of the pharynx, which interrupts the air flow. Currently, OSA diagnosis is done through a polysomnographic study, which focuses on the analysis of apnea episodes during sleep and requires the patient to stay at the hospital for a whole night. The complexity and high cost of the procedures involved, combined with growing waiting lists, have evidenced the need for screening techniques which, while perhaps not achieving outstanding performance rates, would allow clinicians to reorganize these lists by ranking patients according to the severity of their condition. Among others, imaging diagnosis and the anthropometric characterization of patients have evidenced the existence of anatomical patterns related to OSA that have a direct influence on speech. Contributions devoted to the study of how this disorder affects speech are scarce and somewhat contradictory. However, the existence of specific patterns related to articulation, phonation and resonance has been known since the late 1980s. At that time these descriptions were virtually useless for the development of an automatic system, but they pointed out the existence of a link between speech and OSA. In recent years automatic processing techniques have evolved and are now able to identify significant differences in the speech of OSA patients when compared to records from healthy subjects. Nevertheless, little is known about the connection between these new results, those published in the past, and the pathogenesis of the OSA syndrome. This Thesis aims to progress beyond the previous research in this area by addressing: the study of how OSA affects patients' speech; the enhancement of automatic OSA classification based on speech analysis; and its integration with the information embedded in the predictors generally used by clinicians in the preliminary examination of patients. The first two tasks, though they may appear symbiotic at first, are quite different. While studying the connection between speech and OSA requires simple, narrow models that can be easily interpreted, classification requires larger models including a large number of dimensions for the characterization and posterior identification of the observed patterns. In any case, progress made on the first task should allow us to improve performance on the second, and the incorporation of the predictors used by clinicians should contribute in the same direction. The Thesis considers both continuous and sustained speech analysis, to exploit the synergies and differences between them. For continuous speech analysis, a conventional speech processing scheme, designed and evaluated before this Thesis, was taken as a baseline. Over this initial system, several alternative representations of the speech information were proposed, optimized and tested to select those most suitable for the characterization of OSA-specific patterns. Evidence was found of a connection between OSA and the fundamental constituents of speech: the formants. Experimental results proved that the success of the proposed solution is well explained by the ability of the speech representations to describe these specific OSA-related components, ignoring the noisy ones as well as those with low discrimination capability. The resulting scheme obtained an 18% error rate with a classification scheme significantly less complex than those described in the literature, operating on a single short speech recording. Regarding the connection between OSA and the observed patterns, it was necessary to consider inter- and intra-group differences and to focus on articulation, replacing the complex classification models with long-term average spectra. The results clearly point to certain regions of the frequency axis, suggesting the existence of a systematic narrowing of the vocal tract section at the oropharynx, already described in the pathogenesis of this syndrome. Regarding sustained speech, experiments similar to those conducted on continuous speech were reproduced on sustained phonations of the vowel /a/. The results were qualitatively similar to the previous ones, though in this case performance rates were noticeably lower. To derive further knowledge from this result, the experiments on long-term average spectra and intra- and inter-group variability ratios were also reproduced on the sustained speech records. Both experiments showed significant differences from those obtained on continuous speech, which could explain the difference observed in performance. However, sustained speech also provided the opportunity to study phonation within the controlled framework it offers, which had likewise been identified in the literature as a source of information for the detection of OSA. This study found that, for the available dataset, no systematic differences related to phonation could be found between the two groups of speakers. Only those dimensions describing the energy distribution along the frequency axis showed significant differences, pointing once again towards the resonant components. Once classification schemes for both continuous and sustained speech had been developed, the Thesis addressed their combination into a single classification system, under the assumption that the information in continuous and sustained speech is fundamentally different. This was tested through a simple fusion scheme which obtained 88.6% correct classification (an 11.4% error rate), a significant improvement over the state of the art. Finally, the combination of this classifier with the predictors used by clinicians reached 91.3% accuracy (an 8.7% error rate), within the range of alternative but costly and intrusive schemes which, unlike the proposed one, cannot be used in the preliminary assessment of patients' condition. In the end, this Thesis sheds new light on the underlying connection between OSA and speech, evidences the degree of maturity reached by speech technology in OSA characterization and detection, shows that its use in patient assessment is already feasible, and leaves the door open for future research to continue along the directions pointed out here.
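The fusion step described above can be as simple as a convex combination of the two subsystems' scores; a hedged sketch of score-level fusion (the weight, threshold and scores are illustrative, not the thesis's tuned values):

    import numpy as np

    def fuse(p_continuous, p_sustained, w=0.6):
        """Score-level fusion: convex combination of per-speaker OSA scores."""
        return w * p_continuous + (1 - w) * p_sustained

    # OSA probabilities from each subsystem for four speakers (invented).
    p_cont = np.array([0.82, 0.35, 0.60, 0.15])
    p_sust = np.array([0.74, 0.55, 0.40, 0.20])

    p_fused = fuse(p_cont, p_sust)
    print(p_fused, p_fused >= 0.5)   # flag OSA when the fused score passes 0.5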
Abstract:
The ex ante quantification of impacts is compulsory when establishing a Rural Development Program (RDP) in the European Union. The purpose of this paper is thus to learn how to perform it better. To this end, all of the European 2007-2013 RDPs (a total of 88) and all of their corresponding available ex ante evaluations were analyzed. Results show that fewer than 50% of the RDPs quantify all the impact indicators, and that the most used methodology that allows the quantification of all impact indicators is Input-Output analysis. Two main difficulties are cited for not accomplishing the impact quantification: the heterogeneity of actors and factors involved in the program impacts, and the lack of the needed information. These difficulties should be addressed by using new methods that can deal with the complexity of the programs and by implementing better planning that facilitates gathering the needed information.
Abstract:
The European Commission established mid-term evaluation of Rural Development Programs for the period 2007-2013 as part of a continuous evaluation system. Mid-term evaluations are important for the Commission because they help measure the success of a program, as well as giving advice and pointing out good practices for the current and subsequent programming periods. One of the main elements used to achieve these objectives is the estimation of the program's impact indicators. This paper focuses on how impact indicator estimation is carried out for the environmental indicators only. To this end, the 88 mid-term evaluations of Rural Development Programs for the 2007-2013 period were analyzed. The study shows how far the current methodologies for obtaining impact indicators' values are from what the European Commission expects when demanding this task.
Abstract:
Data centers are easily found in every sector of the worldwide economy. They are composed of thousands of servers, serving millions of users globally, 24-7. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for higher resources in traditional applications, has driven the rapid proliferation and growth of data centers. A drawback to this capacity growth has been the rapid increase of the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38 GW, and a further rise of 17% to 43 GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions.
Abstract:
Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24-7. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for higher resources in traditional applications, has driven the rapid proliferation and growth of data centers. A drawback to this capacity growth has been the rapid increase of the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38 GW, and a further rise of 17% to 43 GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD Thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. The work develops energy models and uses knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to the energy consumption of a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within the temperature range that ensures safe operation. Because of the cubic relation of fan power with fan speed, solutions based on over-provisioning cold air into the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies also have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective. When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of the data room cooling units improves; however, CPU temperature rises as well, and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, i.e. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD Thesis makes contributions to leakage- and cooling-aware server modeling and optimization, and to data center thermal modeling and heterogeneity-aware resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
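The server-level leakage-cooling tradeoff can be made concrete with a toy power model: fan power grows with the cube of fan speed while leakage grows roughly exponentially with temperature, so total power has an interior minimum. All constants below are invented for illustration:

    import numpy as np

    # Toy server power model: cubic fan law vs. exponential leakage.
    fan = np.linspace(0.2, 1.0, 81)             # normalized fan speed
    p_fan = 30.0 * fan**3                       # fan power [W], invented constant
    t_cpu = 85.0 - 40.0 * fan                   # crude linear cooling effect [degC]
    p_leak = 5.0 * np.exp(0.04 * (t_cpu - 45))  # leakage power [W], invented
    p_total = 120.0 + p_fan + p_leak            # plus fixed dynamic power [W]

    best = np.argmin(p_total)
    print(f"optimal fan speed ~ {fan[best]:.2f}, total power {p_total[best]:.1f} W")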
Abstract:
The energy sector, particularly in Spain and similarly across Europe, has significant generation overcapacity due to the strong growth of renewable energies over the last ten years, and it is seriously affected by the decrease in demand caused by the economic crisis. That situation has forced thermal plants, and combined cycle gas plants in particular, to operate with extremely low annual average capacity factors, very close to 10%. Apart from the reduction in income, working in out-of-design conditions means worse performance and higher costs than expected. In this scenario, anything that can be done to improve the efficiency and the condition of the equipment is positively received. Asset management, as a multidisciplinary and integrated process, is gaining prominence, as reflected in the publication of the ISO 55000 series in 2014. Handling asset management as a global, integrated process involves managing several processes and significant volumes of information, including in real time, which requires information technologies and software applications to support it. This thesis proposes an integrated asset management concept (Integrated Plant Management, IPM) applied to combined cycle power plants and develops a methodology to assess the benefit it can provide. Because a deterministic benefit estimate is difficult to obtain, a probabilistic cost-benefit analysis has been adopted. The quantitative analysis has also been complemented with a qualitative validation, by power generation sector experts, of the technologies included in the IPM and their contribution to key power plant challenges. The cost-benefit analysis provides positive results even in the unfavorable scenario of an annual average capacity factor close to 10%, and is very promising for capacity factors above 30%.
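The probabilistic cost-benefit analysis can be sketched as a Monte Carlo over uncertain benefits and costs, reporting the distribution of net present value. Every figure and distribution below is an invented placeholder, not the thesis's data:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    rate = 0.08                                 # discount rate, invented
    years = np.arange(1, 11)                    # 10-year horizon

    # Uncertain annual benefit of IPM and uncertain one-off investment.
    benefit = rng.triangular(0.5, 1.2, 2.5, n)  # M EUR/year (min, mode, max)
    capex = rng.normal(4.0, 0.8, n)             # M EUR

    npv = benefit * (1 / (1 + rate) ** years).sum() - capex
    print(f"P(NPV > 0) = {(npv > 0).mean():.2f}, "
          f"median NPV = {np.median(npv):.2f} M EUR")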
Abstract:
Over the last decade, scientific studies have indicated an association between the air pollution to which people are exposed and a wide range of adverse health outcomes. We have developed a tool, based on a model (MM5-CMAQ) running over Europe at 50 km spatial resolution with EMEP annual emissions, to produce a short-term forecast of the impact on health. To estimate the mortality change forecast for the next 24 hours, we have chosen a log-linear (Poisson) regression form for the concentration-response (C-R) function. The parameters involved in the C-R function have been estimated from published epidemiological studies. Finally, we have derived the relationship between concentration change and mortality change from the C-R function; this is the final health impact function.
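Concretely, the log-linear (Poisson) form yields the standard health impact function delta_Y = y0 * Pop * (1 - exp(-beta * delta_C)). A minimal sketch (beta is of the order reported in time-series studies for particulate matter; all other numbers are placeholders):

    import numpy as np

    # delta_mortality = y0 * population * (1 - exp(-beta * delta_C))
    beta = 6.0e-4       # C-R coefficient per ug/m3, order of PM time-series studies
    y0 = 2.5e-5         # baseline daily mortality rate [deaths/person/day], invented
    population = 3.0e6  # exposed population, invented
    delta_c = 15.0      # forecast 24 h concentration change [ug/m3], invented

    excess = y0 * population * (1.0 - np.exp(-beta * delta_c))
    print(f"expected excess deaths over the next 24 h: {excess:.2f}")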