Abstract:
Background: Energy policy is one of the main drivers of transport policy. A number of strategies to curb current energy consumption trends in the transport sector have been designed over recent decades, including fuel taxes, more efficient technologies, and changing travel behavior through demand regulation. The energy market, however, carries a high degree of uncertainty, so the effectiveness of these policy options should be assessed. Methods: A scenario-based assessment methodology has been developed within the EU project STEPS. It provides an integrated view of the energy-efficiency, environmental, social, and competitiveness impacts of the different strategies, and has been applied at the European level and to five specific regions. Concluding remarks: The results are quite site-specific. However, they show that regulatory measures appear to be more effective than investments in new technology. Higher energy prices could, in turn, deteriorate competitiveness and threaten social goals.
Abstract:
Noble metal nanoparticles (especially gold ones) have great potential for the development of cancer therapy systems, mainly due to their optical properties: when these particles are irradiated with light whose wavelength is tuned to their Surface Plasmon Resonance maximum, they absorb that light very efficiently and rapidly dissipate it to the surrounding medium as localized heat. This characteristic can therefore be exploited to raise the temperature of tumor cells beyond the thresholds at which cell death occurs. Starting from these principles, this thesis focuses on the development and characterization of a series of optical hyperthermia prototypes based on the irradiation of gold nanoparticles with a suitable light beam, and on the in vitro application of the therapy to cancer cells. The work is also aimed at identifying and understanding the mechanical and thermal processes associated with this kind of hyperthermia, at developing models that describe them, and at studying and proposing new forms of irradiation in order, ultimately, to optimize the described processes and make them more effective. The results indicate that the use of gold nanoparticles, and more specifically gold nanorods, enables very effective optical hyperthermia therapies for inducing death in cancer cells, especially in superficial tumors or as a surgical complement in internal tumors. However, the toxicity effects of gold nanoparticles still need to be studied in detail, since this kind of therapy will only be viable if complete biocompatibility is achieved.
On the other hand, the exhaustive study of the thermal processes that take place during nanoparticle irradiation has produced a series of models that make it possible to determine the photothermal efficiency of the nanoparticles and to visualize the temperature evolution, at both the nanometric and macroscopic scales, as a function of the optical and thermal parameters of the system. The proposal of new forms of irradiation and the development of devices aimed at studying the mechanical phenomena that occur during low-frequency, low-power pulsed irradiation of gold nanoparticles have led to the detection of pressure waves associated with thermoelastic expansion processes, opening the door to hyperthermia therapies that combine cell death produced by heating with death derived from these mechanical phenomena.
Abstract:
In order to assist in the creation and development of forecasting and simulation models that enable citizens and public administrations to manage energy consumption more efficiently and in a more environmentally friendly way, a data management system for energy indicators has been implemented. In 2007 the EU created a directive known as "20/20/20", in which the European Union committed to saving 20% of annual primary energy consumption between that date and 2020. In 2009 the European Commission concluded that the measures proposed in that directive would not achieve the targeted 20% reduction in energy consumption by 2020, falling short by more than half. To give new impetus to energy efficiency, a draft directive was drawn up: 2011/0172(COD). This directive obliges member states to strengthen and expand aggregate statistical information on their final customers (load profiles, customer segmentation, geographic location, etc.). The European Union argues that increasing the volume and accessibility of energy consumption data will significantly help to achieve these objectives. In this context, it seems logical to state that a universally accessible database of energy indicators can contribute effectively to increasing energy efficiency. As the practical part of this final-year project (PFC), an application has been developed that allows the definition and storage of energy indicators, into which different systems, proprietary or open, can load data and from which they can extract it at low cost. The aim was to make the application as open as possible, both from the point of view of functionality, allowing the indicator itself to be defined through the system, and from the point of view of implementation, using only open-source software for its development.
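The kind of open indicator store the project describes can be sketched, for instance, with SQLite. The schema, table names, and indicator below are illustrative assumptions, not the PFC's actual implementation.

```python
import sqlite3

# Minimal sketch of an open energy-indicator store: indicators are defined
# through the system itself, and any client (proprietary or open) can load
# or extract data cheaply.  Layout and names are illustrative assumptions.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE indicator (
        id   INTEGER PRIMARY KEY,
        name TEXT UNIQUE NOT NULL,   -- e.g. 'household_kwh_per_month'
        unit TEXT NOT NULL
    );
    CREATE TABLE reading (
        indicator_id INTEGER REFERENCES indicator(id),
        location     TEXT,           -- geographic segmentation
        period       TEXT,           -- e.g. '2020-01'
        value        REAL
    );
""")

def define_indicator(name, unit):
    # Indicators are created through the system, not hard-coded
    cur = db.execute("INSERT INTO indicator (name, unit) VALUES (?, ?)",
                     (name, unit))
    return cur.lastrowid

def load_reading(indicator_id, location, period, value):
    db.execute("INSERT INTO reading VALUES (?, ?, ?, ?)",
               (indicator_id, location, period, value))

kwh = define_indicator("household_kwh_per_month", "kWh")
load_reading(kwh, "Madrid", "2020-01", 310.5)
load_reading(kwh, "Madrid", "2020-02", 295.0)

# Extraction example: average consumption per location
rows = db.execute("""SELECT location, AVG(value) FROM reading
                     WHERE indicator_id = ? GROUP BY location""",
                  (kwh,)).fetchall()
print(rows)  # [('Madrid', 302.75)]
```

Using only the standard library keeps the sketch consistent with the project's open-source-only constraint.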
Abstract:
Optical hyperthermia systems based on the laser irradiation of gold nanorods are a promising tool in the development of therapies against cancer. After a proof of concept in which the authors demonstrated the efficiency of this kind of system, a modeling process based on an equivalent thermal-electric circuit was carried out to determine the thermal parameters of the system, together with an energy balance obtained from the time-dependent heating and cooling temperature curves of the irradiated samples, in order to obtain the photothermal transduction efficiency. Knowing this parameter makes it possible to increase the effectiveness of the treatments, since the response of the device can be predicted for each working configuration. As an example, the thermal behavior of two different kinds of nanoparticles is compared. The results show that, under identical conditions, PEGylated gold nanorods heat more efficiently than bare nanorods and therefore yield a more effective therapy.
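The energy-balance idea can be sketched as follows: fit the cooling curve to a single exponential to obtain the system time constant, derive the lumped heat-dissipation constant, and apply the steady-state balance to get an efficiency. This is a simplified single-time-constant illustration with invented numbers, not the authors' actual model or data.

```python
import numpy as np

def photothermal_efficiency(t, temp, t_amb, m_c, p_laser):
    """Estimate photothermal transduction efficiency from a cooling curve.

    t, temp : time (s) and temperature (deg C) samples of the cooling phase
    t_amb   : ambient temperature (deg C)
    m_c     : sample heat capacity, mass * specific heat (J/K)
    p_laser : laser power reaching the sample (W), assumed fully absorbed

    Cooling follows dT/dt = -(T - T_amb)/tau, so ln(T - T_amb) is linear
    in t with slope -1/tau.  The lumped dissipation constant is
    hS = m_c / tau, and the steady-state balance hS * dT_max = eta * P
    gives the transduction efficiency eta.
    """
    d_temp = temp - t_amb
    slope, _ = np.polyfit(t, np.log(d_temp), 1)   # slope = -1/tau
    tau = -1.0 / slope
    h_s = m_c / tau                               # W/K
    eta = h_s * d_temp.max() / p_laser
    return tau, h_s, eta

# Synthetic cooling curve: tau = 120 s, dT_max = 15 K, m_c = 4.18 J/K, P = 1 W
t = np.linspace(0, 600, 61)
temp = 25.0 + 15.0 * np.exp(-t / 120.0)
tau, h_s, eta = photothermal_efficiency(t, temp, 25.0, 4.18, 1.0)
print(round(tau, 1), round(eta, 3))  # 120.0 0.522
```

A real measurement would also subtract the baseline heating of the solvent and account for the fraction of laser power actually absorbed by the nanoparticles.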
Abstract:
Modern mobile devices have become increasingly powerful in functionality and entertainment as next-generation mobile computing and communication technologies have rapidly advanced. However, battery capacity has not experienced an equivalent increase. The user experience of modern mobile systems is therefore greatly affected by battery lifetime, which is an unstable factor that is hard to control. To address this problem, previous works proposed energy-centric power management (PM) schemes that provide a strong guarantee on battery lifetime by globally managing energy as a first-class resource in the system. As the processor scheduler plays a pivotal role in power management and application performance guarantees, this thesis explores the user experience optimization of energy-limited mobile systems from the perspective of energy-centric processor scheduling.
This thesis first analyzes the general factors contributing to the mobile system user experience. It then determines the essential requirements on energy-centric processor scheduling for user experience optimization: proportional power sharing, time-constraint compliance, and, when necessary, a tradeoff between the power share and time-constraint compliance. To meet these requirements, the classical fair queuing algorithm and its reference model are extended from the network and CPU bandwidth sharing domains to the energy sharing domain, and on that basis the energy-based fair queuing (EFQ) algorithm is proposed for performing energy-centric processor scheduling. The EFQ algorithm is designed to provide proportional power shares to tasks by scheduling them according to their energy consumption and weights. The power share of each time-sensitive task is protected upon changes in the scheduling environment to guarantee stable performance, and any instantaneous power share that is over-allocated to one time-sensitive task can be fairly re-allocated to the other tasks. In addition, to better support real-time and multimedia scheduling, a real-time-friendly mechanism is combined with the EFQ algorithm to give time-limited scheduling preference to the most urgent time-sensitive tasks.
The properties of the EFQ algorithm are evaluated through high-level modelling and simulation. The simulation results indicate that the essential requirements of energy-centric processor scheduling can be achieved. The EFQ algorithm is then implemented in the Linux kernel. To assess the properties of the Linux-based EFQ scheduler, an experimental test-bench was developed based on an embedded platform, a multithreaded test-bench program, and an open-source benchmark suite. Through specifically designed experiments, this thesis first verifies the properties of EFQ in power share management and real-time scheduling, and then explores the potential benefits of employing EFQ scheduling in user experience optimization for energy-limited mobile systems. Experimental results on power share management show that EFQ is more effective than the Linux CFS scheduler in managing power shares, achieving proportional sharing of system power regardless of on which device the energy is spent. Experimental results on real-time scheduling demonstrate that EFQ can achieve effective, flexible, and robust time-constraint compliance even as the number of tasks or the energy estimation error increases. Finally, a comparative analysis of the experimental results on user experience optimization demonstrates that EFQ is more effective and flexible than traditional processor scheduling algorithms, such as the default Linux scheduler, in optimizing and preserving the user experience of energy-limited mobile systems.
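The core idea of energy-based fair queuing, picking the task with the smallest virtual "energy time" and charging it its energy consumption divided by its weight, can be sketched as follows. The quantum model, task names, weights, and per-quantum energy figures are illustrative assumptions, not the thesis implementation.

```python
import heapq

# Minimal sketch of energy-based fair queuing (EFQ): the most under-served
# task (smallest virtual energy) runs next, and its virtual time advances
# by energy/weight, so long-run energy shares track the weights even when
# per-quantum energy costs differ between tasks.
def efq_schedule(tasks, rounds):
    """tasks: {name: (weight, energy_per_quantum)}; returns energy used per task."""
    heap = [(0.0, name) for name in tasks]        # (virtual energy, task)
    heapq.heapify(heap)
    used = {name: 0.0 for name in tasks}
    for _ in range(rounds):
        v, name = heapq.heappop(heap)             # most under-served task
        weight, energy = tasks[name]
        used[name] += energy
        heapq.heappush(heap, (v + energy / weight, name))
    return used

# A task with twice the weight receives roughly twice the energy share,
# even though its per-quantum energy cost is four times higher.
use = efq_schedule({"video": (2.0, 4.0), "sync": (1.0, 1.0)}, 3000)
print(use["video"] / use["sync"])  # close to 2.0
```

A kernel scheduler additionally needs per-device energy accounting and the real-time preference mechanism the abstract mentions; this sketch only shows the proportional-share core.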
Abstract:
La presente Tesis constituye un avance en el conocimiento de los efectos de la variabilidad climática en los cultivos en la Península Ibérica (PI). Es bien conocido que la temperatura del océano, particularmente de la región tropical, es una de las variables más convenientes para ser utilizado como predictor climático. Los océanos son considerados como la principal fuente de almacenamiento de calor del planeta debido a la alta capacidad calorífica del agua. Cuando se libera esta energía, altera los regímenes globales de circulación atmosférica por mecanismos de teleconexión. Estos cambios en la circulación general de la atmósfera afectan a la temperatura, precipitación, humedad, viento, etc., a escala regional, los cuales afectan al crecimiento, desarrollo y rendimiento de los cultivos. Para el caso de Europa, esto implica que la variabilidad atmosférica en una región específica se asocia con la variabilidad de otras regiones adyacentes y/o remotas, como consecuencia Europa está siendo afectada por los patrones de circulaciones globales, que a su vez, se ven afectados por patrones oceánicos. El objetivo general de esta tesis es analizar la variabilidad del rendimiento de los cultivos y su relación con la variabilidad climática y teleconexiones, así como evaluar su predictibilidad. Además, esta Tesis tiene como objetivo establecer una metodología para estudiar la predictibilidad de las anomalías del rendimiento de los cultivos. El análisis se centra en trigo y maíz como referencia para otros cultivos de la PI, cultivos de invierno en secano y cultivos de verano en regadío respectivamente. Experimentos de simulación de cultivos utilizando una metodología en cadena de modelos (clima + cultivos) son diseñados para evaluar los impactos de los patrones de variabilidad climática en el rendimiento y su predictibilidad. 
La presente Tesis se estructura en dos partes: La primera se centra en el análisis de la variabilidad del clima y la segunda es una aplicación de predicción cuantitativa de cosechas. La primera parte está dividida en 3 capítulos y la segundo en un capitulo cubriendo los objetivos específicos del presente trabajo de investigación. Parte I. Análisis de variabilidad climática El primer capítulo muestra un análisis de la variabilidad del rendimiento potencial en una localidad como indicador bioclimático de las teleconexiones de El Niño con Europa, mostrando su importancia en la mejora de predictibilidad tanto en clima como en agricultura. Además, se presenta la metodología elegida para relacionar el rendimiento con las variables atmosféricas y oceánicas. El rendimiento de los cultivos es parcialmente determinado por la variabilidad climática atmosférica, que a su vez depende de los cambios en la temperatura de la superficie del mar (TSM). El Niño es el principal modo de variabilidad interanual de la TSM, y sus efectos se extienden en todo el mundo. Sin embargo, la predictibilidad de estos impactos es controversial, especialmente aquellos asociados con la variabilidad climática Europea, que se ha encontrado que es no estacionaria y no lineal. Este estudio mostró cómo el rendimiento potencial de los cultivos obtenidos a partir de datos de reanálisis y modelos de cultivos sirve como un índice alternativo y más eficaz de las teleconexiones de El Niño, ya que integra las no linealidades entre las variables climáticas en una única serie temporal. Las relaciones entre El Niño y las anomalías de rendimiento de los cultivos son más significativas que las contribuciones individuales de cada una de las variables atmosféricas utilizadas como entrada en el modelo de cultivo. Además, la no estacionariedad entre El Niño y la variabilidad climática europea se detectan con mayor claridad cuando se analiza la variabilidad de los rendimiento de los cultivos. 
La comprensión de esta relación permite una cierta predictibilidad hasta un año antes de la cosecha del cultivo. Esta predictibilidad no es constante, sino que depende tanto la modulación de la alta y baja frecuencia. En el segundo capítulo se identifica los patrones oceánicos y atmosféricos de variabilidad climática que afectan a los cultivos de verano en la PI. Además, se presentan hipótesis acerca del mecanismo eco-fisiológico a través del cual el cultivo responde. Este estudio se centra en el análisis de la variabilidad del rendimiento de maíz en la PI para todo el siglo veinte, usando un modelo de cultivo calibrado en 5 localidades españolas y datos climáticos de reanálisis para obtener series temporales largas de rendimiento potencial. Este estudio evalúa el uso de datos de reanálisis para obtener series de rendimiento de cultivos que dependen solo del clima, y utilizar estos rendimientos para analizar la influencia de los patrones oceánicos y atmosféricos. Los resultados muestran una gran fiabilidad de los datos de reanálisis. La distribución espacial asociada a la primera componente principal de la variabilidad del rendimiento muestra un comportamiento similar en todos los lugares estudiados de la PI. Se observa una alta correlación lineal entre el índice de El Niño y el rendimiento, pero no es estacionaria en el tiempo. Sin embargo, la relación entre la temperatura del aire y el rendimiento se mantiene constante a lo largo del tiempo, siendo los meses de mayor influencia durante el período de llenado del grano. En cuanto a los patrones atmosféricos, el patrón Escandinavia presentó una influencia significativa en el rendimiento en PI. En el tercer capítulo se identifica los patrones oceánicos y atmosféricos de variabilidad climática que afectan a los cultivos de invierno en la PI. Además, se presentan hipótesis acerca del mecanismo eco-fisiológico a través del cual el cultivo responde. 
Este estudio se centra en el análisis de la variabilidad del rendimiento de trigo en secano del Noreste (NE) de la PI. La variabilidad climática es el principal motor de los cambios en el crecimiento, desarrollo y rendimiento de los cultivos, especialmente en los sistemas de producción en secano. En la PI, los rendimientos de trigo son fuertemente dependientes de la cantidad de precipitación estacional y la distribución temporal de las mismas durante el periodo de crecimiento del cultivo. La principal fuente de variabilidad interanual de la precipitación en la PI es la Oscilación del Atlántico Norte (NAO), que se ha relacionado, en parte, con los cambios en la temperatura de la superficie del mar en el Pacífico Tropical (El Niño) y el Atlántico Tropical (TNA). La existencia de cierta predictibilidad nos ha animado a analizar la posible predicción de los rendimientos de trigo en la PI utilizando anomalías de TSM como predictor. Para ello, se ha utilizado un modelo de cultivo (calibrado en dos localidades del NE de la PI) y datos climáticos de reanálisis para obtener series temporales largas de rendimiento de trigo alcanzable y relacionar su variabilidad con anomalías de la TSM. Los resultados muestran que El Niño y la TNA influyen en el desarrollo y rendimiento del trigo en el NE de la PI, y estos impactos depende del estado concurrente de la NAO. Aunque la relación cultivo-TSM no es igual durante todo el periodo analizado, se puede explicar por un mecanismo eco-fisiológico estacionario. Durante la segunda mitad del siglo veinte, el calentamiento (enfriamiento) en la superficie del Atlántico tropical se asocia a una fase negativa (positiva) de la NAO, que ejerce una influencia positiva (negativa) en la temperatura mínima y precipitación durante el invierno y, por lo tanto, aumenta (disminuye) el rendimiento de trigo en la PI. En relación con El Niño, la correlación más alta se observó en el período 1981 -2001. 
En estas décadas, los altos (bajos) rendimientos se asocian con una transición El Niño - La Niña (La Niña - El Niño) o con eventos de El Niño (La Niña) que están finalizando. Para estos eventos, el patrón atmosférica asociada se asemeja a la NAO, que también influye directamente en la temperatura máxima y precipitación experimentadas por el cultivo durante la floración y llenado de grano. Los co- efectos de los dos patrones de teleconexión oceánicos ayudan a aumentar (disminuir) la precipitación y a disminuir (aumentar) la temperatura máxima en PI, por lo tanto el rendimiento de trigo aumenta (disminuye). Parte II. Predicción de cultivos. En el último capítulo se analiza los beneficios potenciales del uso de predicciones climáticas estacionales (por ejemplo de precipitación) en las predicciones de rendimientos de trigo y maíz, y explora métodos para aplicar dichos pronósticos climáticos en modelos de cultivo. Las predicciones climáticas estacionales tienen un gran potencial en las predicciones de cultivos, contribuyendo de esta manera a una mayor eficiencia de la gestión agrícola, seguridad alimentaria y de subsistencia. Los pronósticos climáticos se expresan en diferentes formas, sin embargo todos ellos son probabilísticos. Para ello, se evalúan y aplican dos métodos para desagregar las predicciones climáticas estacionales en datos diarios: 1) un generador climático estocástico condicionado (predictWTD) y 2) un simple re-muestreador basado en las probabilidades del pronóstico (FResampler1). Los dos métodos se evaluaron en un caso de estudio en el que se analizaron los impactos de tres escenarios de predicciones de precipitación estacional (predicción seco, medio y lluvioso) en el rendimiento de trigo en secano, sobre las necesidades de riego y rendimiento de maíz en la PI. Además, se estimó el margen bruto y los riesgos de la producción asociada con las predicciones de precipitación estacional extremas (seca y lluviosa). 
Los métodos predWTD y FResampler1 usados para desagregar los pronósticos de precipitación estacional en datos diarios, que serán usados como inputs en los modelos de cultivos, proporcionan una predicción comparable. Por lo tanto, ambos métodos parecen opciones factibles/viables para la vinculación de los pronósticos estacionales con modelos de simulación de cultivos para establecer predicciones de rendimiento o las necesidades de riego en el caso de maíz. El análisis del impacto en el margen bruto de los precios del grano de los dos cultivos (trigo y maíz) y el coste de riego (maíz) sugieren que la combinación de los precios de mercado previstos y la predicción climática estacional pueden ser una buena herramienta en la toma de decisiones de los agricultores, especialmente en predicciones secas y/o localidades con baja precipitación anual. Estos métodos permiten cuantificar los beneficios y riesgos de los agricultores ante una predicción climática estacional en la PI. Por lo tanto, seríamos capaces de establecer sistemas de alerta temprana y diseñar estrategias de adaptación del manejo del cultivo para aprovechar las condiciones favorables o reducir los efectos de condiciones adversas. La utilidad potencial de esta Tesis es la aplicación de las relaciones encontradas para predicción de cosechas de la próxima campaña agrícola. Una correcta predicción de los rendimientos podría ayudar a los agricultores a planear con antelación sus prácticas agronómicas y todos los demás aspectos relacionados con el manejo de los cultivos. Esta metodología se puede utilizar también para la predicción de las tendencias futuras de la variabilidad del rendimiento en la PI. Tanto los sectores públicos (mejora de la planificación agrícola) como privados (agricultores, compañías de seguros agrarios) pueden beneficiarse de esta mejora en la predicción de cosechas. 
ABSTRACT The present thesis constitutes a step forward in advancing of knowledge of the effects of climate variability on crops in the Iberian Peninsula (IP). It is well known that ocean temperature, particularly the tropical ocean, is one of the most convenient variables to be used as climate predictor. Oceans are considered as the principal heat storage of the planet due to the high heat capacity of water. When this energy is released, it alters the global atmospheric circulation regimes by teleconnection1 mechanisms. These changes in the general circulation of the atmosphere affect the regional temperature, precipitation, moisture, wind, etc., and those influence crop growth, development and yield. For the case of Europe, this implies that the atmospheric variability in a specific region is associated with the variability of others adjacent and/or remote regions as a consequence of Europe being affected by global circulations patterns which, in turn, are affected by oceanic patterns. The general objective of this Thesis is to analyze the variability of crop yields at climate time scales and its relation to the climate variability and teleconnections, as well as to evaluate their predictability. Moreover, this Thesis aims to establish a methodology to study the predictability of crop yield anomalies. The analysis focuses on wheat and maize as a reference crops for other field crops in the IP, for winter rainfed crops and summer irrigated crops respectively. Crop simulation experiments using a model chain methodology (climate + crop) are designed to evaluate the impacts of climate variability patterns on yield and its predictability. The present Thesis is structured in two parts. The first part is focused on the climate variability analyses, and the second part is an application of the quantitative crop forecasting for years that fulfill specific conditions identified in the first part. 
This Thesis is divided into 4 chapters, covering the specific objectives of the present research work. Part I. Climate variability analyses. The first chapter presents an analysis of potential yield variability at one location as a bioclimatic indicator of El Niño teleconnections with Europe, putting forward its importance for improving predictability in both climate and agriculture. It also presents the methodology chosen to relate yield to atmospheric and oceanic variables. Crop yield is partially determined by atmospheric climate variability, which in turn depends on changes in sea surface temperature (SST). El Niño is the leading mode of SST interannual variability, and its impacts extend worldwide. Nevertheless, the predictability of these impacts is controversial, especially those associated with European climate variability, which have been found to be non-stationary and non-linear. The study showed how potential crop yield obtained from reanalysis data and crop models serves as an alternative and more effective index of El Niño teleconnections, because it integrates the nonlinearities between the climate variables into a single time series. The relationships between El Niño and crop yield anomalies are more significant than the individual contributions of each of the atmospheric variables used as input to the crop model. Additionally, the non-stationarities between El Niño and European climate variability are detected more clearly when analyzing crop-yield variability. Understanding this relationship allows for some predictability up to one year before the crop is harvested. This predictability is not constant, but depends on both high- and low-frequency modulation. The second chapter identifies the oceanic and atmospheric patterns of climate variability affecting summer cropping systems in the IP. Moreover, hypotheses about the eco-physiological mechanisms behind the crop response are presented.
It focuses on an analysis of maize yield variability in the IP over the whole twentieth century, using a crop model calibrated at five contrasting Spanish locations and reanalysis climate datasets to obtain long time series of potential yield. The study tests the use of reanalysis data for obtaining purely climate-dependent time series of simulated crop yield for the whole region, and uses these yields to analyze the influence of oceanic and atmospheric patterns. The results show good reliability of the reanalysis data. The spatial distribution of the leading principal component of yield variability shows similar behaviour over all the studied locations in the IP. The strong linear correlation between the El Niño index and yield is remarkable, although this relation is non-stationary in time; the air temperature-yield relationship, in contrast, holds throughout, with the strongest influence during the grain-filling period. Regarding atmospheric patterns, the summer Scandinavian pattern has a significant influence on yield in the IP. The third chapter identifies the oceanic and atmospheric patterns of climate variability affecting winter cropping systems in the IP. Again, hypotheses about the eco-physiological mechanisms behind the crop response are presented. It focuses on an analysis of rainfed wheat yield variability in the IP. Climate variability is the main driver of changes in crop growth, development and yield, especially for rainfed production systems. In the IP, wheat yields are strongly dependent on the seasonal rainfall amount and on the temporal distribution of rainfall during the growing season. The major source of precipitation interannual variability in the IP is the North Atlantic Oscillation (NAO), which has been related in part to changes in Tropical Pacific (El Niño) and Tropical North Atlantic (TNA) sea surface temperature (SST). The existence of some predictability has encouraged us to analyze the possible predictability of wheat yield in the IP using SST anomalies as predictors.
For this purpose, a crop model with a site-specific calibration for the northeast of the IP and reanalysis climate datasets were used to obtain long time series of attainable wheat yield and to relate their variability to SST anomalies. The results show that El Niño and the TNA influence rainfed wheat development and yield in the IP, and that these impacts depend on the concurrent state of the NAO. Although the crop-SST relationships do not hold equally throughout the analyzed period, they can be explained by a well-understood and stationary eco-physiological mechanism. During the second half of the twentieth century, a positive (negative) TNA index is associated with a negative (positive) phase of the NAO, which exerts a positive (negative) influence on minimum temperatures (Tmin) and precipitation (Prec) during winter and thus increases (decreases) yield in the IP. In relation to El Niño, the highest correlation takes place in the period 1981-2001. For these decades, high (low) yields are associated with transitions from El Niño to La Niña (from La Niña to El Niño) or with the ending of El Niño events. For these events, the associated regional atmospheric pattern resembles the NAO, which also directly influences the maximum temperatures (Tmax) and the precipitation experienced by the crop during flowering and grain filling. The combined effects of the two teleconnection patterns help to increase (decrease) rainfall and decrease (increase) Tmax in the IP, and thus to increase (decrease) wheat yield. Part II. Crop forecasting. The last chapter analyses the potential benefits of using seasonal climate (precipitation) forecasts for wheat and maize yield prediction, and explores methods to link such forecasts to crop models. Seasonal climate prediction has significant potential to contribute to the efficiency of agricultural management, and to food and livelihood security. Climate forecasts come in different forms, but are typically probabilistic.
For this purpose, two methods for disaggregating a seasonal climate forecast into daily weather realizations were evaluated and applied: 1) a conditioned stochastic weather generator (predWTD) and 2) a simple forecast probability resampler (FResampler1). The two methods were evaluated in a case study analyzing the impacts of three seasonal rainfall forecast scenarios on rainfed wheat yield and on the irrigation requirements and yields of maize in the IP. In addition, we estimated the economic margins and production risks associated with extreme seasonal rainfall forecast scenarios (dry and wet). The predWTD and FResampler1 methods used for disaggregating the seasonal rainfall forecast into the daily data needed by the crop simulation models provided comparable predictability. Both methods therefore seem feasible options for linking seasonal forecasts with crop simulation models to establish yield forecasts or irrigation water requirements. The analysis of the impact on gross margin of grain prices for both crops and of maize irrigation costs suggests that combining expected market prices with the seasonal climate forecast can be a good tool for farmers' decision-making, especially under dry forecasts and/or in locations with low annual precipitation. These methodologies would make it possible to quantify the benefits and risks of a seasonal weather forecast to farmers in the IP. We would therefore be able to establish early warning systems and to design crop management adaptation strategies that take advantage of favorable conditions or reduce the effects of adverse conditions. The potential usefulness of this Thesis lies in applying the relationships found to crop forecasting for the next cropping season, suggesting opportunity time windows for the prediction. The methodology can also be used to predict future trends in IP yield variability.
Both public (improvement of agricultural planning) and private (decision support to farmers, insurance companies) sectors may benefit from such an improvement of crop forecasting.
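The FResampler1 idea described above (drawing daily weather realizations from historical years consistent with a probabilistic seasonal forecast) can be sketched as follows. This is a minimal illustration under assumed conventions: tercile probabilities for the forecast and a mapping of years to daily rainfall series. The function name and all details are illustrative, not the thesis's actual implementation.

```python
import random

def fresampler_sketch(daily_rain_by_year, probs, n=100, seed=0):
    """Draw n daily-rainfall realizations from historical years,
    weighting each year's tercile by the seasonal forecast probabilities.

    daily_rain_by_year: dict {year: list of daily rainfall totals}
    probs: (p_below, p_normal, p_above) tercile probabilities
    """
    rng = random.Random(seed)
    # Rank years by seasonal total and split them into terciles.
    years = sorted(daily_rain_by_year, key=lambda y: sum(daily_rain_by_year[y]))
    k = len(years) // 3
    terciles = [years[:k], years[k:2 * k], years[2 * k:]]
    realizations = []
    for _ in range(n):
        # Pick a tercile with the forecast probability, then a year within it,
        # and use that year's observed daily series as the realization.
        tercile = rng.choices(terciles, weights=probs, k=1)[0]
        year = rng.choice(tercile)
        realizations.append(daily_rain_by_year[year])
    return realizations
```

Each realization can then be fed to a crop simulation model as its daily weather input, so that the distribution of simulated yields reflects the forecast probabilities.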
Resumo:
Patterns in sequences of amino acid hydrophobic free energies predict secondary structures in proteins. In protein folding, matches in hydrophobic free energy statistical wavelengths appear to contribute to selective aggregation of secondary structures in “hydrophobic zippers.” In a similar setting, the use of Fourier analysis to characterize the dominant statistical wavelengths of peptide ligands’ and receptor proteins’ hydrophobic modes to predict such matches has been limited by the aliasing and end effects of short peptide lengths, as well as the broad-band, mode multiplicity of many of their frequency (power) spectra. In addition, the sequence locations of the matching modes are lost in this transformation. We make new use of three techniques to address these difficulties: (i) eigenfunction construction from the linear decomposition of the lagged covariance matrices of the ligands and receptors as hydrophobic free energy sequences; (ii) maximum entropy, complex poles power spectra, which select the dominant modes of the hydrophobic free energy sequences or their eigenfunctions; and (iii) discrete, best bases, trigonometric wavelet transformations, which confirm the dominant spectral frequencies of the eigenfunctions and locate them as (absolute valued) moduli in the peptide or receptor sequence. The leading eigenfunction of the covariance matrix of a transmembrane receptor sequence locates the same transmembrane segments seen in n-block-averaged hydropathy plots while leaving the remaining hydrophobic modes unsmoothed and available for further analyses as secondary eigenfunctions. In these receptor eigenfunctions, we find a set of statistical wavelength matches between peptide ligands and their G-protein and tyrosine kinase coupled receptors, ranging across examples from 13.10 amino acids in acid fibroblast growth factor to 2.18 residues in corticotropin releasing factor. 
We find that the wavelet-located receptor modes in the extracellular loops are compatible with studies of receptor chimeric exchanges and point mutations. A nonbinding corticotropin-releasing factor receptor mutant is shown to have lost the signatory mode common to the normal receptor and its ligand. Hydrophobic free energy eigenfunctions and their transformations offer new quantitative physical homologies in database searches for peptide-receptor matches.
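Step (i) above, eigenfunction construction from the lagged covariance matrix of a hydrophobic free energy sequence, can be sketched in a singular-spectrum-analysis style. The window length, the power iteration, and the diagonal averaging used here are illustrative choices, not the paper's exact procedure.

```python
def leading_eigenfunction(x, window, iters=200):
    """Lag-embed the hydrophobicity sequence x, take the leading
    eigenvector of the lagged covariance matrix, and reconstruct the
    dominant hydrophobic mode as a series aligned with the sequence."""
    n = len(x)
    k = n - window + 1
    # Lagged covariance matrix: C[i][j] = mean over t of x[t+i] * x[t+j].
    c = [[sum(x[t + i] * x[t + j] for t in range(k)) / k
          for j in range(window)] for i in range(window)]
    # Power iteration for the leading eigenvector of C.
    v = [1.0] * window
    for _ in range(iters):
        w = [sum(c[i][j] * v[j] for j in range(window)) for i in range(window)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    # Project each lagged vector onto v, then diagonal-average back
    # onto the original sequence positions.
    pcs = [sum(x[t + i] * v[i] for i in range(window)) for t in range(k)]
    recon = [0.0] * n
    counts = [0] * n
    for t in range(k):
        for i in range(window):
            recon[t + i] += pcs[t] * v[i]
            counts[t + i] += 1
    return [r / cnt for r, cnt in zip(recon, counts)]
```

The secondary eigenvectors of the same covariance matrix give the remaining hydrophobic modes mentioned in the abstract, without the smoothing that n-block averaging imposes.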
Resumo:
The perception of presence (PP), an evolution of the concept of telepresence, can be defined as the perceptual illusion of non-mediation and/or the illusory perception of reality. The most widely used method for evaluating PP relies on questionnaires administered to subjects after their participation in an experience. Besides not providing real-time information, this method suffers from many sources of interference, arising both from the subjects undergoing the experiment and from the evaluators of the questionnaires. Methods that could be more effective for evaluating PP in real time make use of physiological signals that vary independently of the subjects' will, such as heart rate, electrocardiogram, electroencephalogram, and skin resistivity and moisture. Physiological signals, however, only vary significantly in stressful situations, which makes them unsuitable for normal, stress-free activities. Another way to evaluate PP is to use eye-tracking systems. Studied and developed since the 19th century, eye-tracking systems provide a mapping of eye movements. Besides indicating where subjects are looking, they can also monitor pupil dilation and blinks. Low-cost commercial eye-tracking systems are available today; although less precise and with lower sampling rates than high-cost equipment, they are more practical and come with open-platform software. In the future they will be as common and as simple to use as the cameras in mobile devices and computers are today, which will make it feasible to apply the techniques and methods proposed here on a large scale, mainly to monitor attention and engagement in video-mediated activities. A tool is presented that uses eye tracking to evaluate the perception of presence in video-mediated activities (with sound stimuli). Two experiments were carried out to validate the research hypotheses and the tool.
A third experiment was carried out to verify the tool's ability to evaluate the perception of presence in non-stressful video-mediated activities.
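One kind of real-time metric such an eye-tracking tool can extract, blink detection from the pupil-diameter trace, might be sketched as follows. The zero-diameter encoding for a lost pupil and the minimum-duration threshold are assumptions for illustration, not the tool's actual algorithm.

```python
def detect_blinks(pupil, min_len=2):
    """Detect blinks in a pupil-diameter trace: a blink is a run of at
    least `min_len` consecutive samples in which the tracker reports no
    pupil (encoded here as 0.0). Returns (start, end) sample indices."""
    blinks = []
    start = None
    for idx, d in enumerate(pupil):
        if d == 0.0:
            if start is None:
                start = idx  # run of lost-pupil samples begins
        else:
            if start is not None and idx - start >= min_len:
                blinks.append((start, idx))
            start = None
    # A run that extends to the end of the trace also counts.
    if start is not None and len(pupil) - start >= min_len:
        blinks.append((start, len(pupil)))
    return blinks
```

Blink rate and pupil dilation computed this way can then be correlated with the questionnaire-based presence scores, which is the kind of validation the experiments above perform.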
Resumo:
Climate change is critically impacting the environment and economy at the local level. County governments have an opportunity to adopt climate change policies that address local environmental and economic concerns. The Colorado counties of Boulder, Gunnison, and Pitkin have all adopted some form of climate change policies. There are some components of each of these policies that are more effective in terms of economic, environmental, and community benefits. An effective climate change policy clearly states specific cost analyses, environmental impacts at the local level, the relationship between impacts and the community, and the economic benefits of policy adoption. This Capstone project addresses specific cost and energy analyses and provides a beneficial policy framework for county governments.
Resumo:
Frequently, the population ecology of marine organisms takes a descriptive approach in which sizes and densities are plotted over time. This approach has limited usefulness for designing management strategies or modelling different scenarios. Population projection matrix models are among the most widely used tools in ecology. Unfortunately, for the majority of pelagic marine organisms it is difficult to mark individuals and follow them over time to determine their vital rates and build a population projection matrix model. Nevertheless, it is possible to obtain time-series data on size structure and the density of each size class in order to determine the matrix parameters. This approach is known as the "demographic inverse problem"; it is based on quadratic programming methods, but it has rarely been used on aquatic organisms. We used unpublished field data on a population of the cubomedusa Carybdea marsupialis to construct a population projection matrix model and compare two management strategies for lowering the population to its pre-2008 values, when there was no significant interaction with bathers: direct removal of medusae, and reduction of their prey. Our results showed that removal of jellyfish from all size classes was more effective than removing only juveniles or adults. When reducing prey, the C. marsupialis population was lowered most efficiently when prey depletion affected the prey of all medusa sizes. Our model fit the field data well and may serve to design an efficient management strategy or to build hypothetical scenarios such as removal of individuals or reduction of prey. This method is applicable to other marine or terrestrial species for which density and population structure over time are available.
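A stage-structured projection of the kind described above can be sketched as follows. The 3-stage matrix values and the removal model (scaling transitions out of the removed size classes) are hypothetical illustrations, not the thesis's calibrated C. marsupialis parameters.

```python
def project(matrix, n0, steps):
    """Project a stage-structured population vector forward:
    n(t+1) = A @ n(t), repeated `steps` times."""
    n = list(n0)
    for _ in range(steps):
        n = [sum(a * x for a, x in zip(row, n)) for row in matrix]
    return n

# Hypothetical 3-stage (juvenile, subadult, adult) matrix for illustration.
A = [
    [0.0, 0.0, 4.0],   # fecundity: adults produce juveniles
    [0.3, 0.4, 0.0],   # juveniles survive/grow into subadults
    [0.0, 0.5, 0.6],   # subadults mature; adults survive
]

def with_removal(matrix, frac, stages):
    """Scale all transitions out of the given stages by (1 - frac),
    mimicking removal of medusae from those size classes."""
    return [[a * (1 - frac) if j in stages else a
             for j, a in enumerate(row)] for row in matrix]
```

With these toy numbers, removing 30% of all size classes lowers the projected population more than removing 30% of juveniles only, mirroring the comparison of strategies reported above.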
Resumo:
Teachers are deeply concerned about how to be more effective in our task of teaching. We must organize the contents of our specific area into a logical configuration, for which we must know the mental structure of the students we have in the classroom. We must shape this mental structure progressively, so that they can assimilate the contents we are trying to transfer and make the learning as meaningful as possible. In the generative learning model, forming the links between the stimulus delivered by the teacher and the information stored in the mind of the learner requires an important effort by the student, who should build new conceptual meanings. That effort, which is essential for good learning, is sometimes the missing ingredient that prevents the teaching-learning process from being properly assimilated. In electrical circuits, which we know are fully described by Ohm's law and Kirchhoff's two rules, there are two concepts that correspond to the following physical quantities: voltage and electrical resistance. These two concepts are integrated and linked when the concept of current is presented. This concept is not subordinated to the previous ones; it has the same degree of inclusiveness and gives rise to substantial relations among the three concepts, materializing in a law, Ohm's law, which allows us to relate the three physical magnitudes and to calculate any one of them when the other two are known. Alternating current, in which both the voltage and the current reverse direction dozens of times per second, plays an important role in many aspects of our modern life, because it is universally used. Its main feature is that its maximum voltage is easily modifiable through the use of transformers, which greatly facilitates its transmission with very few losses.
In this paper, we present a conceptual map to be used as a new tool for analyzing, in a logical manner, the structure underlying alternating current circuits, with the objective of providing students in Science and Engineering majors with another option for achieving meaningful learning of this important part of physics.
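The relations discussed above, Ohm's law V = I * R and the ideal transformer voltage ratio, can be written as a small worked example. The helper names are illustrative.

```python
def ohm(v=None, i=None, r=None):
    """Return the missing quantity in Ohm's law, V = I * R,
    given exactly two of voltage v, current i, resistance r."""
    if [v, i, r].count(None) != 1:
        raise ValueError("give exactly two of v, i, r")
    if v is None:
        return i * r      # V = I * R
    if i is None:
        return v / r      # I = V / R
    return v / i          # R = V / I

def transformer_secondary_voltage(v_primary, n_primary, n_secondary):
    """Ideal transformer: Vs / Vp = Ns / Np, so stepping the turns
    ratio up or down modifies the maximum voltage accordingly."""
    return v_primary * n_secondary / n_primary
```

For instance, a 2 A current through a 5 Ω resistor gives ohm(i=2.0, r=5.0), i.e. 10 V, and a 1000:100 step-down transformer reduces 230 V on the primary to 23 V on the secondary.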
Resumo:
Integrated master's thesis (Tese de mestrado integrado) in Energy and Environmental Engineering, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2016
Resumo:
Falling amounts of natural resources and the 'peak oil' question, i.e. the point in time when the maximum rate of extraction of easily accessible oil reserves is reached, have been among the key issues in public debate in Germany at all levels: expert, business and – most crucially – government. The alarming assessments of German analysts anticipate a rapid shrinkage of oil reserves and a sharp rise in oil prices, which in the longer term will affect the economic and political systems of importer countries. Concerns about the consequences of the projected resource deficit, especially among representatives of German industry, are also fuelled by the stance of those countries which export raw materials. China, which meets 97% of global demand for the minerals crucial to the production of new technologies, cut its exports by 40% in summer 2010 (compared to 2009), arguing that it had to protect its reserves from overexploitation. In 2009 the value of the natural resources Germany imported reached €84 billion, of which €62 billion was spent on energy carriers and €22 billion on metals. For Germany, the shrinkage of resources is a political problem of the utmost importance, since the country is poor in mineral resources and has to acquire petroleum and other necessary raw materials abroad. In autumn 2010, the German minister of economy initiated the establishment of a Resources Agency designed to support companies in their search for natural resources, and the government prepared and adopted a national Raw Material Strategy. In the next decade the policy of the German government, including foreign policy, will be affected by the consequences of the decreasing availability of natural resources. It can be expected that the mission of the Bundeswehr will be redefined, and that the importance for German policy of African states and of current exporter countries such as Russia and China will increase.
At the same time, Germany will seek to strengthen cooperation among importer countries, which should make pressure on resource-exporting states more effective. In this context, it can be expected that the efforts taken to develop an EU resource strategy or even a ‘comprehensive resource policy’ will be intensified; or at least, the EU’s energy policy will permanently include the issue of sourcing raw materials.
Resumo:
Competition law seeks to protect competition on the market as a means of enhancing consumer welfare and of ensuring an efficient allocation of resources. In order to be successful, therefore, competition authorities should be adequately equipped and have at their disposal all necessary enforcement tools. However, at the EU level the current enforcement system of competition rules allows only for the imposition of administrative fines on liable undertakings by the European Commission. The main objectives of an enforcement policy based on financial penalties are, in turn, twofold: to impose sanctions on infringing undertakings which reflect the seriousness of the violation, and to ensure that the risk of penalties deters both the infringing undertakings (often referred to as 'specific deterrence') and other undertakings that may be considering anti-competitive activities (often referred to as 'general deterrence'). In all circumstances, it is important to ensure that pecuniary sanctions imposed on infringing undertakings are proportionate and not excessive. Although pecuniary sanctions against infringing undertakings are a crucial part of the arsenal needed to deter competition law violations, they may not be sufficient. One alternative option in that regard is the strategic use of sanctions against the individuals involved in, or responsible for, the infringements. Sanctions against individuals are documented to focus the minds of directors and employees on complying with competition rules, as they themselves, in addition to the undertakings in which they are employed, are personally at risk in the event of infringements. Individual criminal penalties, including custodial sanctions, have in fact been adopted by almost half of the EU Member States. This is a powerful tool, but one limited in scope and hard to implement in practice, mostly owing to the high standards of proof required and the political consensus that first needs to be built.
Administrative sanctions for individuals, on the other hand, promise to deliver, up to a certain extent, the same beneficial results as criminal sanctions, while at the same time their adoption is not likely to meet strong opposition and their implementation in practice can be both efficient and effective. Directors' disqualification, in particular, provides a strong individual incentive for each member, or prospective member, of the Board, as well as for other senior executives, to take compliance with competition law seriously. It is a flexible and promising tool which, if added to the arsenal of the European Commission, could bring balance to the current sanctioning system and would, in all likelihood, make the enforcement of EU competition rules more effective. Therefore, it is submitted that a competition law regime, in order to be effective, should be able to deliver policy objectives through a variety of tools, not simply by imposing significant pecuniary sanctions on infringing undertakings. It is also clear that individual sanctions, mostly of an administrative nature, are likely to play an increasingly important role, as they focus the minds of those in business who might otherwise be inclined to regard infringing the law as a matter of corporate risk rather than of personal risk. At the EU level, in particular, the adoption of directors' disqualification promises to deliver more effective compliance and greater overall economic impact.
Resumo:
The European Union (EU) is seen as the leading actor in successfully fighting piracy around the Horn of Africa. As a global trade power with strong economic interests, the EU is also challenged by similar maritime security threats in the Gulf of Guinea. To date, there has been no comprehensive analysis assessing the potential transfer of successful EU instruments from the Horn of Africa to the piracy situation in West African waters. This paper examines to what extent the EU can draw on the experience it gained in the Horn of Africa to deter piracy in West African waters. Based on qualitative research interviews, lessons learned from East Africa are identified and subsequently applied to the situation in the Gulf of Guinea. The results show that the EU is only partially drawing on its experience from the Horn of Africa. On the one hand, it is rather reluctant to use crisis management instruments such as naval operations. On the other hand, the EU is drawing on its successful leadership in international political and military cooperation around the Horn of Africa in order to make more effective use of available resources in the Gulf of Guinea.