Abstract:
Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and it cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare.

Chapters 1 and 2 focus on the problem of adverse selection. The effective operation of markets and other institutions often depends on good information-transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage-bargaining process. In Chapter 1, it is shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage-bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be a pure time cost from delaying agreement or a cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time costs and show that communication can play a similar role: the simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure enough voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g., the format war between HD DVD and Blu-ray). It is shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions.

I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality, and the quality is known only to the seller. Indeed, without the possibility of making repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When repeated offers are allowed, however, both types of goods trade with probability one at equilibrium. We provide an experimental test of these predictions. Buyers gather information about sellers through specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, which reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions.

Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they can reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information, explaining these findings by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information.

In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to depend crucially on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externality that can loosely be described as an incentive to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
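To make the adverse-selection logic concrete, here is a stylized numerical sketch, with hypothetical valuations not taken from the thesis, of why a single take-it-or-leave-it price cannot profitably trade the high-quality good; this is precisely the failure that repeated costly offers are shown to repair:

```python
# Stylized one-shot adverse-selection arithmetic (all numbers hypothetical).
# Seller's cost: 8 if high quality, 2 if low; buyer's value: 10 or 4.
q = 0.5                                # share of high-quality sellers

# Any price that high-quality sellers accept must be >= 8, but both types
# accept such a price, so the buyer expects only the pooled average value:
pooled_value = q * 10 + (1 - q) * 4
print(pooled_value - 8)                # -1.0: expected loss, the offer is too risky

# A low price (>= 2 but < 8) attracts only low-quality sellers:
low_price = 3
print(4 - low_price)                   # 1.0: profitable, but high quality never trades
```

With these numbers the high price loses money whenever fewer than two thirds of sellers are high quality, so only the low-quality good trades in one shot.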
Abstract:
Using newly constructed data series on explosions, deaths, and steamboat traffic, we examine econometrically the causes of increased safety in steamboat boilers in the nineteenth century. Although the law of 1852 (but not that of 1838) did have a dramatic initial effect in reducing explosions, that reduction came against the background not of a system out of control but of a system that from the beginning was steadily increasing boiler safety per person-mile. The role of the federal government in conducting and disseminating basic research on boiler technology may have been more significant for increased safety than its explicit regulatory efforts.
Abstract:
State-building is currently considered to be an indispensable process in overcoming state fragility: a condition characterized by frequent armed conflicts as well as chronic poverty. In this process, both the capacity and the legitimacy of the state are supposed to be enhanced; such balanced development of capacity and legitimacy has also been demanded in security sector reform (SSR), which is regarded as being a crucial part of post-conflict state-building. To enhance legitimacy, the importance of democratic governance is stressed in both state-building and SSR in post-conflict countries. In reality, however, the balanced enhancement of capacity and legitimacy has rarely been realized. In particular, legitimacy enhancement tends to stagnate in countries in which one of multiple warring parties takes a strong grip on state power. This paper tries to understand why such unbalanced development of state-building and SSR has been observed in post-conflict countries, through a case study of Rwanda. Analyses of two policy initiatives in the security sector - Gacaca transitional justice and disarmament, demobilization, and reintegration (DDR) - indicate that although these programs achieved goals set by the government, their contribution to the normative objectives promoted by the international community was quite debatable. It can be understood that this is because the country has subordinated SSR to its state-building process. After the military victory of the former rebels, the Rwandan Patriotic Front (RPF), the ruling elite prioritized the establishment of political stability over the introduction of international norms such as democratic governance and the rule of law. SSR was implemented only to the extent that it contributed to, and did not threaten, Rwanda's RPF-led state-building.
Abstract:
In the present uncertain global context of pursuing social stability and a steadily thriving economy, power demand is expected to grow, and global electricity generation could nearly double from 2005 to 2030. Fossil fuels will remain a significant part of this energy mix up to 2050, with an expected share of around 70% of global and ca. 60% of European electricity generation, and coal will remain a key player. Hence, under the business-as-usual scenario considered, CO2 concentrations are forecast to reach roughly three times present values, up to 1,200 ppm, by the end of this century. The Kyoto Protocol was the first attempt to take global responsibility for monitoring CO2 emissions and for cap targets by 2012 with reference to 1990 levels. Some of the principal CO2 emitters did not ratify the reduction targets, although the USA and China are taking their own actions and parallel reduction measures. More efficient combustion processes that consume less fuel, although a significant contribution of the electricity generation sector to dwindling CO2 concentration levels, might not be sufficient. Carbon Capture and Storage (CCS) technologies have gained importance since the beginning of the decade, with research and funding emerging to bring them into practical use. After the first research projects and initial-scale testing, three principal capture processes are available today, with first figures showing up to 90% CO2 removal in standard applications at coal-fired power stations. Regarding the last part of the CO2 reduction chain, two options can be considered worthwhile: reuse (EOR & EGR) and storage. The study evaluates the state of CO2 capture technology development, as well as the availability and investment cost of the different technologies, with few operating-cost analyses possible at the time. The main findings and the abatement potential for coal applications are presented. DOE, NETL, MIT, European universities and research institutions, key technology enterprises and utilities, and key technology suppliers are the main sources of this study. A vision of the technology deployment is presented.
Abstract:
Environmental constraints imposed on hydropower operation are usually given in the form of minimum environmental flows and maximum and minimum rates of change of flow, or ramp rates. One solution proposed to mitigate the environmental impact caused by the flows discharged by a hydropower plant, while reducing the economic impact of the above-mentioned constraints, consists in building a re-regulation reservoir, or afterbay, downstream of the power plant. Adding pumping capability between the re-regulation reservoir and the main one could contribute both to reducing the size of the re-regulation reservoir, with the consequent environmental improvement, and to improving the economic feasibility of the project, while always fulfilling the environmental constraints imposed on hydropower operation. The objective of this paper is to study the contribution of a re-regulation reservoir to fulfilling the environmental constraints while reducing their economic impact. For that purpose, a revenue-driven optimization model based on mixed-integer linear programming is used. Additionally, the advantages of adding pumping capability are analysed. To illustrate the applicability of the methodology, a case study based on a real hydropower plant is presented.
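As a rough illustration of the kind of model described above, the following sketch sets up a toy revenue-maximizing hydro schedule with a minimum environmental flow and ramp-rate limits, using PuLP. All coefficients are hypothetical, and the toy is a pure linear program: it omits the binary commitment decisions and the pumping extension that make the paper's actual model mixed-integer.

```python
# Toy revenue-driven hydro scheduling sketch (hypothetical numbers, LP only).
import pulp

T = list(range(24))                              # hourly periods
price = [30 + 20 * (8 <= t <= 20) for t in T]    # EUR/MWh, toy two-level price
q_min, q_max = 5.0, 50.0                         # environmental min / turbine max flow (m3/s)
ramp = 10.0                                      # max |q[t] - q[t-1]| per hour (m3/s)
energy_per_flow = 0.8                            # MWh per (m3/s over one hour), toy value
volume = 600.0                                   # daily water budget in flow-hours

prob = pulp.LpProblem("hydro_revenue", pulp.LpMaximize)
q = pulp.LpVariable.dicts("q", T, lowBound=q_min, upBound=q_max)

# Objective: revenue from selling energy at hourly prices.
prob += pulp.lpSum(price[t] * energy_per_flow * q[t] for t in T)

# Water budget over the day.
prob += pulp.lpSum(q[t] for t in T) <= volume

# Ramp-rate constraints in both directions.
for t in T[1:]:
    prob += q[t] - q[t - 1] <= ramp
    prob += q[t - 1] - q[t] <= ramp

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([round(q[t].value(), 1) for t in T])       # released flow per hour
```

The environmental constraints appear as variable bounds and ramp limits; a re-regulation reservoir would relax how tightly they bind the turbined flow itself.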
Abstract:
HiPER is the European project for laser fusion that has brought together 26 institutions and has been signed under formal government agreement by 6 countries within the ESFRI Programme of the European Union (EU). The project has already been extended by the EU for two more years (until 2013) after its first preparatory phase, which started in 2008. Extensive work has been carried out in different areas to arrive at a design for repetitive operation of a laser fusion reactor, and decisions are envisioned in the next phase of technology development or risk reduction for engineering or power plant facilities (or both). The chamber design has been largely completed for the engineering phase, and preliminary options for the reactor power plant have been established; both are reviewed here.
Abstract:
Slot antenna arrays have been well known since the 1940s, mainly intended to be part of radar systems on large warships and in terrestrial stations where size and weight were not highly restrictive. Over the years, mainly due to significant advances in materials and manufacturing methods, the range of applications of this type of radiating system has grown considerably: new biomedical technologies, collision-avoidance systems in cars, aircraft navigation, short-range high-bit-rate communication links, and even systems embedded in satellites for television broadcast. Within this family of antennas, two groups stand out as the most widely used: parallel-plate antennas with slots placed in a circular or spiral distribution, and groups of linear arrays built on waveguide. Continuing the research carried out during the last decades at the Tokyo Institute of Technology and in the Radiation Group at the Universidad Politécnica de Madrid, this thesis focuses on the latter group, although, as will be seen, it departs substantially from conventional design techniques and methodologies.

Arrays of slots that are straight and parallel to the axis of the feeding rectangular waveguide are without a doubt the most widely used models because of their reliability at high frequencies, their ability to handle large amounts of power, and their simplicity of design and manufacture. However, they also present disadvantages, such as a narrow return-loss bandwidth and rapid degradation of the radiation pattern with frequency. These are due to the resonant nature of the radiating elements: away from resonance, the overall system becomes mismatched and its performance degrades. In two-dimensional arrays of straight slots, the electric field is polarized in the plane transverse to the slots, which corresponds to the plane of high side-lobe level.

This thesis aims to develop a systematic method for designing arrays of inclined slots displaced from the center (hereinafter "compound slots"), defined in 1971 as one of the challenges to overcome in the world of antenna design. The technique employed is based on the Method of Moments, circuit theory, and the theory of scattering-matrix connection. Since it is a circuit-based method, the first part of the dissertation studies the applicability of the basic equivalent networks, their ability to recreate the physical phenomena of the slot, and the limitations and advantages they present for characterizing the different compound-slot configurations. It delves into the differences between T and π networks and conditions the selection of one or the other on the type of radiating element. Once the type of network to be used in the system design is selected, a progressive cascading algorithm called the Forward Matching Procedure has been developed to connect the appropriate equivalent networks from the feeding port to the short circuit that terminates the model. This algorithm is independent of the number of elements, the central operating frequency, the inclination angle of the slots, and the selected equivalent network (T or π). It is based on defining the array design as a Constraint Satisfaction Problem, solved by means of a backtracking algorithm. As a result, it returns an equivalent circuit of the complete array that is matched at its input port and whose elements consume power according to a given amplitude distribution for the array.

In any antenna array, the mutual coupling between elements through the radiated field represents one of the main problems the engineer faces, and its effects are detrimental to the overall performance of the system, both in matching and in radiation capabilities. The use of an equivalent circuit for array design was discarded by some authors because of the difficulty of characterizing these coupling effects and including them in the design stage. In this thesis the coupling has also been modeled as an equivalent network, whose elements are ideal transformers and admittances, connected to the set of equivalent networks that represents the array. Comparing the estimated results in terms of return loss and radiation with those obtained from popular commercial software such as CST Microwave Studio confirms the validity of the method proposed here, the first systematic design method for compound-slot arrays fed by rectangular waveguide. Since these slots do not operate at resonance, the return-loss bandwidth is much wider than that of straight-slot arrays. For two-dimensional arrays, the inclination angle can be adjusted so that the field is polarized in the planes of low side-lobe level. Besides the full-wave simulations, two prototypes of six and ten elements, centered at 12 GHz, have been designed, built, and measured. The return-loss and radiation-pattern measurements reveal excellent results, certifying the soundness of the genuine Method of Moments - Forward Matching Procedure technique developed throughout this thesis.
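The following minimal sketch illustrates the Constraint Satisfaction Problem / backtracking idea on which the Forward Matching Procedure is based: element parameters are chosen one at a time from the feeding port onward, and the search backtracks when a partial design cannot satisfy the constraints. The candidate values, power targets, and feasibility tests below are hypothetical placeholders standing in for the equivalent-network computations of the actual method.

```python
def backtrack(design, n, candidates, consistent, complete):
    """Generic backtracking search for a Constraint Satisfaction Problem."""
    if len(design) == n:
        return design if complete(design) else None
    for c in candidates(len(design)):
        if consistent(design + [c]):          # prune inconsistent partial designs
            result = backtrack(design + [c], n, candidates, consistent, complete)
            if result is not None:
                return result
    return None                               # dead end: backtrack to previous element

# Toy stand-in for the electromagnetic model: each element should consume a
# power fraction near its target, the running sum may never exceed the input
# power, and a complete design must radiate (almost) all of it.
targets = [0.1, 0.2, 0.3, 0.4]                # hypothetical amplitude distribution
def candidates(k):
    return [targets[k] + d for d in (0.0, -0.02, 0.02)]
def consistent(partial):
    return sum(partial) <= 1.0 + 1e-9
def complete(design):
    return abs(sum(design) - 1.0) < 0.03      # matched input: power fully consumed

print(backtrack([], len(targets), candidates, consistent, complete))
```

In the thesis the consistency test would come from cascading the T or π equivalent networks; the search structure itself is this standard one.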
Abstract:
The kinetics of amorphization in crystalline SiO2 (α-quartz) under irradiation with swift heavy ions (O+1 at 4 MeV, O+4 at 13 MeV, F+2 at 5 MeV, F+4 at 15 MeV, Cl+3 at 10 MeV, Cl+4 at 20 MeV, Br+5 at 15 and 25 MeV, and Br+8 at 40 MeV) is analyzed in this work with an Avrami-type law and also with a recently developed cumulative approach (the track-overlap model). The latter model assumes a track morphology consisting of an amorphous core (area σ) and a surrounding defective halo (area h), both axially symmetric. The parameters of the two approaches that provide the best fit to the experimental data have been obtained as a function of the electronic stopping power Se. The extrapolation of the σ(Se) dependence yields a threshold value for amorphization, Sth ≈ 2.1 keV/nm; a second threshold is also observed around 4.1 keV/nm. We believe that this double-threshold effect could be related to the appearance of discontinuous tracks in the region between 2.1 and 4.1 keV/nm. For stopping-power values around or below the lower threshold, where the ratio h/σ is large, the track-overlap model provides a much better fit than the Avrami function. The data therefore show that a correct modeling of the amorphization kinetics needs to take into account the contribution of the defective track halo. Finally, a short comparison with the kinetic laws obtained for elastic-collision damage is given.
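As an illustration of the first of the two approaches, the sketch below fits an Avrami-type law to synthetic amorphous-fraction-versus-fluence data. The functional form is the standard Avrami expression, which may differ in detail from the parameterization used in the paper, and the data are synthetic.

```python
# Fitting an Avrami-type amorphization law to synthetic fluence data.
import numpy as np
from scipy.optimize import curve_fit

def avrami(phi, sigma, n):
    """Amorphous fraction vs ion fluence phi (Avrami-type kinetics)."""
    return 1.0 - np.exp(-(sigma * phi) ** n)

phi = np.linspace(0.1, 10, 30)                          # fluence (arb. units)
rng = np.random.default_rng(0)
data = avrami(phi, sigma=0.5, n=1.8) + rng.normal(0, 0.02, phi.size)

(sigma_fit, n_fit), _ = curve_fit(avrami, phi, data, p0=(1.0, 1.0))
print(f"sigma = {sigma_fit:.3f}, n = {n_fit:.3f}")      # recovered cross-section, exponent
```

The track-overlap model replaces this single effective cross-section with the core area σ plus the defective halo h, which is why it fits better when h/σ is large.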
Abstract:
The increasing worldwide demand for electricity impels the development of clean and renewable energy resources. In the field of portable power devices, not only are size and weight important aspects to take into account, but the fuel and its storage are also critical issues. In this respect, direct methanol (MeOH) fuel cells (DMFC) play an important role, as they can offer high power and energy density, low emissions, ambient operating conditions, and fast, convenient refuelling.
Abstract:
This paper analyzes the correlation between the fluctuations of the electrical power generated by the ensemble of 70 DC/AC inverters of a 45.6 MW PV plant. The use of real electrical power time series from a large collection of photovoltaic inverters of the same plant is an important contribution in the context of models built upon simplified assumptions to overcome the absence of such data. The data set is divided into three different fluctuation categories with a clustering procedure that performs well with the clearness index and the wavelet variances. Afterwards, the time-dependent correlation between the electrical power time series of the inverters is estimated with the wavelet transform. The wavelet correlation depends on the distance between the inverters, the wavelet time scale, and the daily fluctuation level. Correlation values for time scales below one minute are low, independent of the daily fluctuation level. For time scales above 20 minutes, high positive correlation values are obtained, and the rate of decay with distance depends on the daily fluctuation level. At intermediate time scales the correlation depends strongly on the daily fluctuation level. The proposed methods have been implemented using free software; the source code is available as supplementary material.
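The following sketch illustrates the scale-dependent correlation idea with the stationary wavelet transform from PyWavelets. The two synthetic series share a slow component plus independent fast noise, so the per-scale correlations qualitatively reproduce the reported pattern (low at short scales, high at long scales); this is an illustration, not the paper's exact estimator.

```python
# Per-scale correlation between two synthetic inverter power series.
import numpy as np
import pywt

rng = np.random.default_rng(1)
n = 1024                                      # e.g., one sample per second
common = np.cumsum(rng.normal(size=n))        # shared slow fluctuation (cloud passages)
p1 = common + rng.normal(scale=2.0, size=n)   # inverter 1: + local fast noise
p2 = common + rng.normal(scale=2.0, size=n)   # inverter 2: + independent fast noise

levels = 6
coeffs1 = pywt.swt(p1, "db4", level=levels)
coeffs2 = pywt.swt(p2, "db4", level=levels)

# pywt.swt returns coefficient pairs from the coarsest level down to level 1.
for lvl, ((_, d1), (_, d2)) in zip(range(levels, 0, -1), zip(coeffs1, coeffs2)):
    r = np.corrcoef(d1, d2)[0, 1]
    print(f"level {lvl} (~2**{lvl} samples): corr = {r:+.2f}")
```

Coarse detail levels, dominated by the shared component, correlate strongly; fine levels, dominated by local noise, do not, mirroring the distance- and scale-dependent behavior reported above.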
Abstract:
One of the main drawbacks of wind energy is that it exhibits intermittent generation that depends greatly on environmental conditions. Wind power forecasting has proven to be an effective tool for facilitating wind power integration from both the technical and the economic perspective. Indeed, system operators and energy traders benefit from the use of forecasting techniques, because reducing the inherent uncertainty of wind power allows them to adopt optimal decisions. Wind power integration imposes new challenges as higher penetration levels are attained, and wind power ramp forecasting is an example of such a recent topic of interest. The term "ramp" refers to a large and rapid variation (1-4 hours) observed in the wind power output of a wind farm or portfolio. Ramp events can be caused by a broad number of meteorological processes that occur at different temporal and spatial scales, from the passage of large-scale frontal systems to local processes such as thunderstorms and thermally driven flows. Ramp events may also be conditioned by features of the wind-to-power conversion process, such as the non-linear turbine power curve, yaw misalignment, wind turbine shut-downs, and the aerodynamic interaction between the wind turbines of a wind farm (wake effect).

This work is devoted to wind power ramp forecasting, with special focus on the connection between the global scale and ramp events observed at the wind farm level. The framework of this study is the point-forecasting approach. Time-series-based models were implemented for very short-term prediction, characterized by prediction horizons from one to six hours ahead. As a first step, a methodology to characterize ramps within a wind power time series was proposed. The so-called ramp function is based on the wavelet transform and provides a continuous index related to the ramp intensity at each time step; the underlying idea is that ramps are characterized by high power-output gradients evaluated over different time scales. A number of state-of-the-art time-series models were considered, namely linear autoregressive (AR) models, varying-coefficient models (VCMs) and artificial neural networks (ANNs), which allowed us to gain insight into how model complexity contributes to the accuracy of wind power time-series modelling. The models were trained by minimizing a mean-squared-error criterion, and the final set-up of each model was determined through cross-validation.

In order to investigate the contribution of the global scale to wind power ramp forecasting, a methodology was proposed to identify features in atmospheric raw data that are relevant for explaining wind power ramp events. It is based on two techniques: principal component analysis (PCA) for atmospheric data compression and mutual information (MI) for assessing non-linear dependence between variables. The methodology was applied to reanalysis data generated with a general circulation model (GCM), yielding explanatory variables meaningful for ramp forecasting that were then used as exogenous variables by the forecasting models. The study covered two wind farms located in Spain. All the models outperformed the reference model (persistence) during both ramp and non-ramp situations, and adding atmospheric information yielded further improvements of a similar order, especially during ramp-down events. The results also suggested different levels of connection between ramp occurrence at the wind farm level and the global scale.
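Here is a minimal sketch of a ramp-intensity index in the spirit of the ramp function described above, approximating the wavelet machinery with absolute power gradients evaluated over several time scales; the scales, equal weights, and toy series are hypothetical.

```python
# Multi-scale ramp-intensity index for a wind power time series.
import numpy as np

def ramp_index(p, scales=(1, 2, 4, 8)):
    """Average the absolute power gradient over several time scales (in steps)."""
    idx = np.zeros_like(p, dtype=float)
    for s in scales:
        grad = np.zeros_like(idx)
        grad[s:] = np.abs(p[s:] - p[:-s]) / s    # gradient over a window of s steps
        idx += grad
    return idx / len(scales)

# Toy series: steady output, a ramp-up between t = 40 and t = 50, then steady.
p = np.concatenate([np.full(40, 10.0), np.linspace(10, 60, 10), np.full(50, 60.0)])
r = ramp_index(p)
print("max ramp index at t =", int(np.argmax(r)))    # flags the ramp interval
```

A genuinely wavelet-based version would replace the plain differences with detail coefficients per scale, but the principle, high gradients across a range of scales marking a ramp, is the same.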
Abstract:
The final aim of the research in this doctoral thesis is the estimation of the total ice volume of the more than 1600 glaciers of Svalbard, in the Arctic, and thus their potential contribution to sea-level rise under a global-warming scenario. The most accurate calculations of glacier volume are those based on ice thicknesses measured by ground-penetrating radar (GPR). However, such measurements are not viable for very large sets of glaciers, due to their cost, logistic difficulties and time requirements, especially in polar or mountain regions. By contrast, the calculation of glacier areas from satellite images is perfectly viable at global and regional scales, so volume-area scaling relationships are the most suitable tool to determine glacier volumes at those scales, as done for Svalbard in this thesis.

As part of this work, we have compiled an inventory of the radio-echo-sounded glaciers in Svalbard, and we have calculated the ice volumes of more than 80 glacier basins in Svalbard from GPR data. These volumes have been used to calibrate the volume-area relationships derived in the dissertation. The GPR data were obtained during fieldwork campaigns carried out by international teams, often led by the Group of Numerical Simulation in Science and Engineering of the Universidad Politécnica de Madrid, to which the PhD candidate and her supervisors belong. Furthermore, we have developed a methodology to estimate the error in the volume calculation, which includes a novel technique for calculating the interpolation error for data sets of the type produced by GPR profiling, which show very characteristic spatial distribution patterns but very irregular data density.

We have derived scaling relationships specific to Svalbard glaciers, exploring the sensitivity of the scaling parameters to different glacier morphologies and incorporating new variables. In particular, we carried out experiments aimed at verifying whether scaling relationships obtained by characterizing individual glaciers by size, slope or shape imply significant differences in the estimated total volume of Svalbard glaciers, and whether this partitioning implies any noticeable pattern in the scaling-relationship parameters. Our results indicate that, for a fixed value of the multiplicative factor in the scaling relationship, the exponent of the area in the volume-area relationship decreases as slope and shape factor increase, whereas size-based classifications do not reveal any clear trend. This means that steeper and cirque-type glaciers are less sensitive to changes in glacier area. Moreover, the volumes of the total population of Svalbard glaciers calculated with partitioning into subgroups by size and slope are 1-4% smaller than those obtained considering all glaciers without partitioning, whereas the volumes calculated with partitioning by shape are 3-5% larger. We also carried out multivariate experiments to optimally predict the volume of Svalbard glaciers from a combination of different predictors. Our results show that a simple power-type V-A model explains 98.6% of the variance. Only the predictor glacier length provides statistical significance when used in addition to glacier area, though the coefficient of determination decreases compared with the simpler V-A model; the predictor elevation range provides no additional information when used in addition to glacier area.

Our estimates of the volume of the entire population of Svalbard glaciers, using the different scaling relationships derived in this thesis, range between 6890 and 8106 km³, with estimated relative errors of the order of 6.6-8.1%. The average of all our estimates, which can be taken as our best estimate of the volume, is 7504 km³. In terms of sea-level equivalent (SLE), our volume estimates correspond to a potential sea-level rise of 17-20 mm SLE, averaging 19 ± 2 mm SLE, where the quoted error corresponds to the estimated relative error in volume. For comparison, estimates using the V-A scaling relationships found in the literature range between 13 and 26 mm SLE, averaging 20 ± 2 mm SLE, where the quoted error represents the standard deviation of the different estimates.
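A minimal sketch of the calibration step behind such scaling relationships follows: fitting V = c·A^γ by ordinary least squares in log-log space and scaling up to a population total. The areas and volumes below are synthetic stand-ins for the GPR-calibrated data, and the seed coefficients are illustrative, not the thesis's fitted values.

```python
# Calibrating a volume-area power law and scaling it to a glacier population.
import numpy as np

rng = np.random.default_rng(2)
A = 10 ** rng.uniform(-1, 2, 80)                   # 80 glacier areas, km^2
V_true = 0.03 * A ** 1.36                          # illustrative underlying scaling
V = V_true * np.exp(rng.normal(0, 0.2, A.size))    # multiplicative scatter

# Least squares on log V = gamma * log A + log c.
gamma, log_c = np.polyfit(np.log(A), np.log(V), 1)
c = np.exp(log_c)
print(f"V ~ {c:.3f} * A^{gamma:.2f}")

# Regional estimate: total volume of the population from areas alone.
total = np.sum(c * A ** gamma)
print(f"estimated total volume: {total:.1f} km^3")
```

Partitioning the calibration set by slope or shape, as done in the thesis, amounts to fitting separate (c, γ) pairs per subgroup before summing.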
Abstract:
Sterile coal is a low-value residue associated with coal extraction and mining activity. Depending on the type and origin of the coal-bed configuration, sterile coal production varies mainly in quantity, calorific value, and presence of sulphur compounds. In addition, the potential availability of sterile coal within Spain is apparently high, and its contribution to local power generation could play a significant role. The proposed study evaluates the availability and deployment of gasification technologies to drive clean electricity generation from waste coal and sterile rock coal, incorporating greenhouse-gas emission mitigation systems such as CO2, H2S and NOx removal. It establishes the target facility and its conceptual basic design proposal. The syngas obtained after the gasification of sterile coal is processed through specific conditioning units before entering the combustion chamber of a gas turbine. The flue gas leaving the gas turbine is ducted to a heat-recovery steam-generation boiler; the steam produced in the boiler drives a steam turbine. The target facility resembles a singular Integrated Gasification Combined Cycle (IGCC) power station. The evaluation of the conceptual basic design, for the power output set at maximum sterile contribution, established that removal rates over 95% for H2S and 90% for CO2 can be achieved. A noticeable decrease in NOx compounds can also be achieved using commercial technology. A techno-economic analysis of the conceptual basic design evaluates the integration of potential units and their implementation within the target facility, aiming to achieve clean power generation. Compliance with the most restrictive regulations on environmental emissions is set as the criterion for this analysis.
Abstract:
This PhD thesis, entitled "Contribution to Active Multi-Beam Reconfigurable Antennas for L and S Bands", has been developed by the telecommunication engineer and PhD researcher Javier García-Gasco Trujillo in the Grupo de Radiación of the Departamento de Señales, Sistemas y Radiocomunicaciones, ETSI de Telecomunicación, Universidad Politécnica de Madrid, under the supervision of Dr. Manuel Sierra Pérez and Dr. José Manuel Fernández González.

For decades, the implementation of electronically steerable phased-array antennas was confined to the military domain. Their high cost and complexity were the major obstacles to introducing this technology into large-scale commercial applications. The recent emergence of practical, low-cost, and highly reliable solid-state devices breaks the cost barrier and reduces the complexity, making active phased arrays a viable option in the near future. Phased-array antennas could thus become the crown jewel that makes it possible to meet the coming challenges of both military and civilian communication systems. Now is the time to investigate low-cost electronically steerable antennas in which commercial solid-state components form the core of the architecture. The study and implementation of active phased-array blocks capable of accurately controlling the phase and amplitude of the signals involved is therefore one of the great challenges of our time. This thesis faces this challenge, proposing innovative electronic beam-steering networks and transmit/receive (T/R) modules using affordable solid-state components, which could be integrated into affordable multi-beam reconfigurable active antennas working in the L and S bands.

In the first part of the thesis, the state of the art of phased-array antennas is described, including their fundamentals and competitive advantages. Since the contributions of this thesis were carried out within different research projects, involving antennas with single/double circular polarization and single/double working frequency bands, the frameworks of the two most relevant projects are detailed: the Space Situational Awareness (SSA) programme of the European Space Agency (ESA), a space-debris surveillance radar; and the GEOdesic Dome Array (GEODA) project from ISDEFE-INSA and ESA, a base station for tracking and controlling low-Earth-orbit satellites.

Undoubtedly, phase shifters are among the key components in phased-array design. Recent years have witnessed wide fluctuations in commercial phase-shifter prices, which sometimes reach unaffordable levels. Several beam-steering alternatives to commercial phase shifters are therefore proposed, summarized, and compared: the switched-line phase shifter, the switched-beam network, and a novel phase-shifting power splitter/combiner network. To show a practical use of the three techniques, the five-element subarray of the GEODA-SARAS cell is taken as a case study; the study concludes that the proposed phase-shifting power splitter/combiner network offers the best performance/cost trade-off. To verify its correct operation, the two main blocks composing the complete network were built and measured, confirming that the network behaves as expected.

A triangular array of three radiating elements is the simplest structure that allows planar scanning. A new multi-beam network configuration is introduced that provides three orthogonal beams at a desired elevation angle θ0, plus an extra beam in the broadside direction, for a triangular array of three radiating elements. First, a short introduction to the state of the art of classical multi-beam networks is presented, together with innovative lossless multi-beam network designs. Dissipative networks are then analyzed, presenting their mathematical basis and applications to triangular arrays of three elements. Finally, the proposed basic multi-beam network is presented, simulated, built, and measured for the practical case of the GEODA cell. A combined network composed of two complementary basic networks, providing six quasi-orthogonal beams at a direction θ0 with two superimposed beams at broadside, has also been designed, built, and measured; the fabrication and measurement of these prototypes fully validate the proposed network.

The RF chains of the GEODA-SARAS T/R modules are not trivial to design. To illustrate the development of a complex chain with a high density of solid-state components, a thorough description of the components composing the transmit and receive RF chains of the new GEODA-SARAS antenna is presented. After presenting the general specifications of the GEODA-SARAS antenna and its block diagrams, the two main blocks of the RF chains, the five-element cell and the panel conversion module, are described and analyzed, together with the calibration module integrated within them. The signal flow through the system is evaluated in critical situations such as maximum transmitted power (verifying that the chain does not saturate), minimum and maximum received signal (verifying the required sensitivity range), maximum receiver interference (assuring proper reception), and the G/T figure (fulfilling the technical specification). A brief study of the effect of phase quantization on RF beamforming is also included. These analyses show that the composition of the RF chains meets the required specifications. Finally, the thesis presents the overall conclusions of the work and outlines the future lines of this research.
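As a small illustration of the phase-quantization study mentioned above, the sketch below compares the array factor of a uniform linear array steered with ideal progressive phases against the same array with phases rounded to a 2-, 3- or 4-bit phase shifter. The geometry, spacing, and bit counts are illustrative, not the GEODA-SARAS parameters.

```python
# Effect of phase quantization on the array factor of a uniform linear array.
import numpy as np

n_elem, d = 8, 0.5                               # elements, spacing in wavelengths
theta = np.deg2rad(np.linspace(-90, 90, 721))    # observation angles
steer = np.deg2rad(20)                           # desired steering angle

def array_factor(phases):
    """Normalized |AF| over theta for given per-element phases."""
    pos = np.arange(n_elem) * d
    af = np.exp(1j * (2 * np.pi * np.outer(np.sin(theta), pos) + phases)).sum(axis=1)
    return np.abs(af) / n_elem

ideal = -2 * np.pi * np.arange(n_elem) * d * np.sin(steer)   # progressive phase taper
for bits in (2, 3, 4):
    lsb = 2 * np.pi / 2 ** bits                              # quantizer step size
    quant = np.round(ideal / lsb) * lsb                      # nearest realizable phase
    err = np.max(np.abs(array_factor(quant) - array_factor(ideal)))
    print(f"{bits}-bit phase shifter: max |dAF| = {err:.3f}")
```

As expected, the pattern deviation shrinks roughly by half per extra bit, which is the kind of trade-off such a quantization study quantifies.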
Abstract:
As advanced Cloud services become mainstream, the contribution of data centers to the overall power consumption of modern cities is growing dramatically: the average consumption of a single data center is equivalent to the energy consumption of 25,000 households. Modeling the power consumption of these infrastructures is crucial to anticipate the effects of aggressive optimization policies, but accurate and fast power modeling is a complex challenge for high-end servers that analytical approaches have not yet met. This work proposes an automatic method, based on Multi-Objective Particle Swarm Optimization, for the identification of power models of enterprise servers in Cloud data centers. Our approach, as opposed to previous procedures, does not only consider workload consolidation for deriving the power model, but also incorporates other non-traditional factors, such as the static power consumption and its dependence on temperature. Our experimental results show that we obtain slightly better models than classical approaches while simultaneously simplifying the power-model structure, and thus the number of sensors needed, which is very promising for short-term energy prediction. This work, validated with real Cloud applications, broadens the possibilities of deriving efficient energy-saving techniques for Cloud facilities.
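To make the identification idea concrete, here is a single-objective toy version: a basic particle swarm optimizer fitting a simple power model with a static term, a temperature term, and a utilization term. The paper's method is multi-objective and its model richer; all constants, names, and data below are hypothetical.

```python
# Toy PSO identification of a server power model P = p0 + a*T + b*u_cpu.
import numpy as np

rng = np.random.default_rng(3)
T = rng.uniform(30, 70, 200)                           # temperature samples (deg C)
u = rng.uniform(0, 1, 200)                             # CPU utilization
P = 80 + 0.6 * T + 120 * u + rng.normal(0, 2, 200)     # "measured" power (W)

def mse(params):
    """Mean squared error of each particle's (p0, a, b) against the data."""
    p0, a, b = params.T
    pred = p0[:, None] + a[:, None] * T + b[:, None] * u
    return ((pred - P) ** 2).mean(axis=1)

n_part, dims, iters = 30, 3, 200
x = rng.uniform([0, 0, 0], [200, 5, 300], (n_part, dims))   # particle positions
v = np.zeros_like(x)                                        # particle velocities
pbest, pbest_f = x.copy(), mse(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_part, dims))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)  # inertia + pulls
    x = x + v
    f = mse(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("fitted (p0, a, b):", np.round(gbest, 2))        # should approach (80, 0.6, 120)
```

The multi-objective version would keep a Pareto front trading off, for example, model accuracy against model complexity (the number of sensors required) instead of a single MSE score.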