874 results for Electricity Demand, Causality, Cointegration Analysis
Abstract:
With growing demand for liquefied natural gas (LNG) and liquid transportation fuels, and concerns about climate change and the causes of greenhouse gas emissions, this master’s thesis introduces a new value chain design for LNG and transportation fuels, and respective fundamental business cases, based on hybrid PV-Wind power plants. The value chains are composed of renewable electricity (RE) converted by power-to-gas (PtG), gas-to-liquids (GtL) or power-to-liquids (PtL) facilities into SNG (which is finally liquefied into LNG) or synthetic liquid fuels, mainly diesel, respectively. RE-LNG and RE-diesel are drop-in fuels for the current energy system and can be traded everywhere in the world. The calculations for the hybrid PV-Wind power plants, electrolysis, methanation (H2tSNG), hydrogen-to-liquids (H2tL), GtL and LNG value chain are performed based on both annual full load hours (FLh) and hourly analysis. Results show that the proposed RE-LNG produced in Patagonia, as the study case, is competitive with conventional LNG in Japan for crude oil prices within a minimum price range of about 87–145 USD/barrel (20–26 USD/MBtu of LNG production cost), and the proposed RE-diesel is competitive with conventional diesel in the European Union (EU) for crude oil prices within a minimum price range of about 79–135 USD/barrel (0.44–0.75 €/l of diesel production cost), depending on the chosen specific value chain and assumptions for cost of capital, available oxygen sales and CO2 emission costs. RE-LNG or RE-diesel could become competitive with conventional fuels from an economic perspective, while removing environmental concerns. The RE-PtX value chain needs to be located at the sites in the world with the best complementing solar and wind resources, combined with a de-risking strategy. This could be an opportunity for many countries to satisfy their fuel demand locally. It is also a specific business case for countries with excellent solar and wind resources to export carbon-neutral hydrocarbons, when the reduction in production cost considerably exceeds the shipping cost. This is a unique opportunity to export carbon-neutral hydrocarbons around the world, as the environmental limitations on conventional hydrocarbons become tighter.
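For orientation, the sketch below shows the generic structure of such a production-cost estimate: electricity cost divided by the conversion-chain efficiency, plus annualized plant capex and O&M spread over the full load hours. It is a minimal illustration with invented efficiencies, capex, FLh and WACC values, not the thesis's actual cost model.

```python
# Illustrative sketch of a power-to-liquids production-cost estimate.
# All figures (efficiencies, capex, FLh, WACC) are placeholder assumptions,
# not values taken from the thesis.

def annuity_factor(wacc: float, lifetime_years: int) -> float:
    """Capital recovery factor used to annualize investment costs."""
    return wacc * (1 + wacc) ** lifetime_years / ((1 + wacc) ** lifetime_years - 1)

def ptl_fuel_cost(
    lcoe_eur_per_mwh: float,       # levelized cost of hybrid PV-Wind electricity
    chain_efficiency: float,       # electricity -> liquid fuel (electrolysis * H2tL)
    capex_eur_per_kw_fuel: float,  # specific investment of the conversion plant
    opex_share: float,             # fixed O&M as a fraction of capex per year
    full_load_hours: float,        # annual FLh of the conversion plant
    wacc: float = 0.07,
    lifetime_years: int = 30,
) -> float:
    """Return fuel production cost in EUR per MWh of liquid fuel (LHV)."""
    electricity_cost = lcoe_eur_per_mwh / chain_efficiency
    annual_capex = capex_eur_per_kw_fuel * annuity_factor(wacc, lifetime_years)
    annual_opex = capex_eur_per_kw_fuel * opex_share
    capex_opex = (annual_capex + annual_opex) / full_load_hours * 1000  # EUR/MWh_fuel
    return electricity_cost + capex_opex

if __name__ == "__main__":
    cost = ptl_fuel_cost(
        lcoe_eur_per_mwh=35.0,
        chain_efficiency=0.50,
        capex_eur_per_kw_fuel=1500.0,
        opex_share=0.03,
        full_load_hours=6000.0,
    )
    # 1 l of diesel holds roughly 9.97 kWh (LHV), so EUR/MWh -> EUR/l:
    print(f"fuel cost: {cost:.0f} EUR/MWh  (~{cost * 9.97e-3:.2f} EUR/l)")
```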
Abstract:
This paper proposes a novel demand response model using a fuzzy subtractive clustering approach. The model supports domestic consumers' decisions on controllable load management, considering their consumption needs and the appropriate load shaping or rescheduling needed to achieve possible economic benefits. The model, based on the fuzzy subtractive clustering method, considers clusters of domestic consumption covering an adequate consumption range. An analysis of different scenarios is presented, considering available electric power and electric energy prices. Simulation results are presented and conclusions of the proposed demand response model are discussed.
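As a rough illustration of the clustering step, the following sketch implements a simplified, single-threshold variant of subtractive clustering (after Chiu, 1994) on synthetic, normalized consumption features. It is not the paper's implementation; the radii, stopping threshold and data are assumptions.

```python
# Minimal sketch of subtractive clustering applied to normalized household
# consumption features. Data and parameters are illustrative, not the paper's.
import numpy as np

def subtractive_clustering(X: np.ndarray, ra: float = 0.5, eps: float = 0.25):
    """Return indices of cluster centres for row-wise, normalized samples X."""
    rb = 1.5 * ra
    alpha, beta = 4.0 / ra**2, 4.0 / rb**2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # pairwise squared distances
    potential = np.exp(-alpha * d2).sum(axis=1)              # initial potentials
    centres = []
    p_first = potential.max()
    while True:
        c = int(potential.argmax())
        if potential[c] < eps * p_first:                     # simplified stop rule
            break
        centres.append(c)
        potential -= potential[c] * np.exp(-beta * d2[:, c])  # revise potentials
    return centres

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 2-D consumer features (daily energy, evening peak), three groups.
    groups = [rng.normal(loc, 0.05, size=(60, 2))
              for loc in ([0.2, 0.2], [0.5, 0.6], [0.8, 0.4])]
    X = np.clip(np.vstack(groups), 0.0, 1.0)   # already scaled to [0, 1]
    centres = subtractive_clustering(X, ra=0.4)
    print("cluster centre samples:", centres, X[centres].round(2))
```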
Abstract:
This thesis discusses market design and regulation in electricity systems, focusing on the information exchange between the regulated grid firm and the generation firms as well as on the regulation of the grid firm. In the first chapter, an economic framework is developed to consistently analyze different market designs and the information exchange between the grid firm and the generation firms. Perfect competition between the generation firms and perfect regulation of the grid firm are assumed. A numerical algorithm is developed and its feasibility demonstrated on a large-scale problem. The effects of different market designs for the Central Western European (CWE) region until 2030 are analyzed. In the second chapter, the consequences of restricted grid expansion within the current market design in the CWE region until 2030 are analyzed. In the third chapter, the assumption of efficient markets is modified. The focus of the analysis is then whether and how inefficiencies in information availability and processing affect different market designs. For different parameter settings, nodal and zonal pricing are compared with respect to their welfare in the spot and forward markets. In the fourth chapter, information asymmetries between the regulator and the regulated firm are analyzed. The optimal regulatory strategy for a firm providing one output with two substitutable inputs is defined, whereby one input and the absolute quantity of inputs are not observable to the regulator. The result is then compared to current regulatory approaches.
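To make the nodal-pricing idea concrete, here is a toy two-node dispatch problem solved as a linear program; the costs, capacities and demands are invented, and the formulation is only a sketch of the kind of market-clearing model analyzed in the thesis, not its numerical algorithm.

```python
# Toy two-node nodal-pricing dispatch (one line, DC-flow style), meant only
# to illustrate the market-clearing problem; all numbers are made up.
import numpy as np
from scipy.optimize import linprog

# Decision variables: x = [g_A, g_B, f_AB]  (generation at A and B, flow A->B)
cost = np.array([20.0, 50.0, 0.0])          # marginal generation costs in EUR/MWh

demand_A, demand_B = 30.0, 90.0             # MW
line_limit = 40.0                           # MW transfer capacity A->B

# Nodal balance: g_A - f_AB = demand_A,  g_B + f_AB = demand_B
A_eq = np.array([[1.0, 0.0, -1.0],
                 [0.0, 1.0,  1.0]])
b_eq = np.array([demand_A, demand_B])

bounds = [(0.0, 100.0),                     # g_A capacity
          (0.0, 100.0),                     # g_B capacity
          (-line_limit, line_limit)]        # line flow limits

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
g_A, g_B, f_AB = res.x
print(f"dispatch: g_A={g_A:.1f} MW, g_B={g_B:.1f} MW, flow A->B={f_AB:.1f} MW")
# The duals of the nodal balance constraints correspond to the nodal prices
# (sign convention depends on the solver interface).
print("nodal balance duals:", res.eqlin.marginals)
```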
Abstract:
With the world of professional sports shifting towards employing better sport analytics, the demand for vision-based performance analysis has grown rapidly in recent years. In addition, the nature of many sports does not allow the use of any kind of sensors or other wearable markers attached to players for monitoring their performance during competitions. This makes systematic observations, such as tracking information on the players, a potential aid for coaches in developing the visual skills and perceptual awareness needed to make decisions about team strategy or training plans. My PhD project is part of a larger ongoing project between sport scientists and computer scientists that also involves industry partners and sports organisations. The overall idea is to investigate the contribution technology can make to the analysis of sports performance, using the example of team sports such as rugby, football or hockey. A particular focus is on vision-based tracking, so that information about the location and dynamics of the players can be gained without any additional sensors on the players. To start with, prior approaches to visual tracking are extensively reviewed and analysed. In this thesis, methods are proposed to handle the target appearance changes caused by intrinsic factors (e.g. pose variation) and extrinsic factors, such as occlusion. This analysis highlights the importance of the proposed visual tracking algorithms, which reflect these challenges and suggest robust and accurate frameworks to estimate the target state in a complex tracking scenario such as a sports scene, thereby facilitating the tracking process. Next, a framework for continuously tracking multiple targets is proposed. Compared to single-target tracking, multi-target tracking, such as tracking the players on a sports field, poses additional difficulties, namely data association, which needs to be addressed. Here, the aim is to locate all targets of interest, infer their trajectories and decide which observation corresponds to which target trajectory. In this thesis, an efficient framework is proposed to handle this particular problem, especially in sports scenes, where the players of the same team tend to look similar and exhibit complex interactions and unpredictable movements, resulting in matching ambiguity between the players. The presented approach is also evaluated on different sports datasets and shows promising results. Finally, information from the proposed tracking system is utilised as the basic input for further higher-level performance analysis, such as tactics and team formations, which can help coaches to design a better training plan. Due to the continuous nature of many team sports (e.g. soccer, hockey), it is not straightforward to infer high-level team behaviours, such as players' interactions. The proposed framework relies on two distinct levels of performance analysis: a low-level analysis, such as identifying players' positions on the field of play, and a high-level analysis, where the aim is to estimate the density of player locations or detect their possible interaction groups. The related experiments show that the proposed approach can effectively extract this high-level information, which has many potential applications.
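As an illustration of the data association step only, the sketch below matches predicted track positions to new detections with the Hungarian algorithm and a simple distance gate; the positions and threshold are invented, and this is not the thesis's proposed framework.

```python
# Sketch of frame-to-frame data association for multi-player tracking using the
# Hungarian algorithm, with made-up positions and a plain Euclidean-distance cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks: np.ndarray, detections: np.ndarray, gate: float = 3.0):
    """Match track positions to detections; return list of (track_idx, det_idx)."""
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)        # minimum-cost one-to-one matching
    # Gating: reject pairs whose distance exceeds the threshold (likely new/lost targets).
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]

if __name__ == "__main__":
    tracks = np.array([[10.0, 5.0], [22.0, 14.0], [40.0, 30.0]])      # predicted positions
    detections = np.array([[22.5, 13.4], [9.6, 5.3], [70.0, 70.0]])   # current detections
    print(associate(tracks, detections))   # -> [(0, 1), (1, 0)]; track 2 unmatched
```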
Abstract:
BACKGROUND: Invasive meningococcal disease is a significant cause of mortality and morbidity in the UK. Administration of chemoprophylaxis to close contacts reduces the risk of a secondary case. However, unnecessary chemoprophylaxis may be associated with adverse reactions, increased antibiotic resistance and removal of organisms, such as Neisseria lactamica, which help to protect against meningococcal disease. Limited evidence exists to suggest that overuse of chemoprophylaxis may occur. This study aimed to evaluate the prescribing of chemoprophylaxis for contacts of meningococcal disease by general practitioners and hospital staff. METHODS: A retrospective case note review of cases of meningococcal disease was conducted in one health district from 1st September 1997 to 31st August 1999. Routine hospital and general practitioner prescribing data were searched for chemoprophylactic prescriptions of rifampicin and ciprofloxacin. A questionnaire of general practitioners was undertaken to obtain more detailed information. RESULTS: Prescribing by hospital doctors was in line with the recommendations of the Consultant for Communicable Disease Control. General practitioners prescribed 118% more chemoprophylaxis than was recommended. Size of practice and training status did not affect the level of additional prescribing, but there were significant differences by geographical area. The highest levels of prescribing occurred in areas with high disease rates and associated publicity. However, some true close contacts did not appear to receive prophylaxis. CONCLUSIONS: Receipt of chemoprophylaxis is affected by a series of patient, doctor and community interactions. High publicity appears to increase demand for prophylaxis. Some true contacts do not receive appropriate chemoprophylaxis and are left at an unnecessarily increased risk.
Abstract:
5th International Conference on Education and New Learning Technologies (Barcelona, Spain, 1-3 July 2013)
Abstract:
This master's thesis examines a fully renewable energy system for the region of South Karelia, which is already the most renewable region in Finland. The thesis considers the energy consumption of the public sector, transport and buildings, while industrial energy use is left outside the scope of the analysis. The current South Karelian energy system is reviewed and a reference scenario is constructed on that basis. Future scenarios are built for the years 2030 and 2050. In the future scenarios, the change centres on the electrification of the system and on integrating renewable generation into it. Electrification increases electricity consumption, which is to be covered with renewable generation, mainly wind and solar power. The transport sector is limited to road transport, and its transformation will be the most challenging and time-consuming; it is pursued through transport fuel production within the region and through electric vehicles. A renewable energy system requires flexibility in both generation and demand as well as intelligence from the system. The thesis also examines the costs of the system and its employment effects.
Abstract:
The electricity market and the climate are both undergoing change. These changes affect hydropower and spark interest in hydropower capacity increases. In this thesis, a new methodology was developed that utilises short-term hydropower optimisation and planning software to improve the accuracy of capacity-increase profitability analysis. In the methodology, income increases are calculated over month-long periods while varying average discharge and electricity price volatility. The monthly incomes are used to construct year scenarios, and from different types of year scenarios a long-term profitability analysis can be made. Average price development is included using a multiplier. The method was applied to the Oulujoki hydropower plants. It was found that the capacity additions analysed for Oulujoki were not profitable. However, the methodology proved versatile and useful. The results showed that short periods of peaking prices play a major role in the profitability of capacity increases. Adding more discharge capacity to hydropower plants that initially bypassed water more often showed the best improvements in both income and power generation profile flexibility.
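A minimal sketch of the final long-term step described above, assuming invented monthly income increases, investment cost, price development and discount rate; it is not the thesis's optimisation-based methodology, only the discounting of year scenarios into an NPV.

```python
# Illustrative long-term profitability calculation: monthly income increases form a
# representative year, which is scaled by an assumed price-development multiplier and
# discounted into an NPV. All numbers are invented.

def npv_of_capacity_increase(
    monthly_income_increase: list[float],   # EUR per month for one representative year
    investment: float,                      # EUR, capacity-increase investment cost
    price_growth: float = 0.02,             # assumed annual electricity price development
    discount_rate: float = 0.06,
    lifetime_years: int = 40,
) -> float:
    base_year_income = sum(monthly_income_increase)
    npv = -investment
    for year in range(1, lifetime_years + 1):
        income = base_year_income * (1 + price_growth) ** year   # price-development multiplier
        npv += income / (1 + discount_rate) ** year
    return npv

if __name__ == "__main__":
    # A "peaking-price" year: most of the extra income comes from a few months.
    monthly = [5e3, 5e3, 8e3, 10e3, 12e3, 8e3, 6e3, 6e3, 9e3, 15e3, 60e3, 40e3]
    print(f"NPV: {npv_of_capacity_increase(monthly, investment=2.5e6):,.0f} EUR")
```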
Abstract:
The value of integrating heat storage into a geothermal district heating system has been investigated. The behaviour of the system under a novel operational strategy has been simulated, focusing on the energetic, economic and environmental effects of incorporating the heat storage within the system. A typical geothermal district heating system consists of several production wells, a system of pipelines for the transportation of the hot water to end-users, one or more re-injection wells and peak-up devices (usually fossil-fuel boilers). Traditionally in these systems, the production wells change their production rate throughout the day according to heat demand, and if their maximum capacity is exceeded the peak-up devices are used to meet the balance of the heat demand. In this study, it is proposed to maintain a constant geothermal production and add heat storage to the network. Hot water is then stored when heat demand is lower than the production, and the stored hot water is released into the system to cover the peak demands (or part of them). It is not intended to phase out the peak-up devices entirely, but to decrease their use, as these will often be installed anyway for back-up purposes. Both the integration of heat storage into such a system and the novel operational strategy are the main novelties of this thesis. A robust algorithm for the sizing of these systems has been developed. The main inputs are the geothermal production data, the heat demand data for one year or more and the topology of the installation. The outputs are the sizing of the whole system, including the necessary number of production wells, the size of the heat storage and the dimensions of the pipelines, amongst others. The results provide several useful insights into the initial design considerations for these systems, particularly emphasizing the importance of heat losses. Simulations are carried out for three different cases of sizing of the installation (small, medium and large) to examine the influence of system scale. In the second phase of work, two algorithms are developed which study in detail the operation of the installation over a single day and over a whole year, respectively. The first algorithm can be a powerful tool for the operators of the installation, who can know a priori how to operate the installation on a given day for a given heat demand. The second algorithm is used to obtain the amount of electricity used by the pumps as well as the amount of fuel used by the peak-up boilers over a whole year. These comprise the main operational costs of the installation and are among the main inputs of the third part of the study. In the third part of the study, an integrated energetic, economic and environmental analysis of the studied installation is carried out, together with a comparison with the traditional case. The results show that by implementing heat storage under the novel operational strategy, heat is generated more cheaply, as all the financial indices improve, more geothermal energy is utilised and less fuel is used in the peak-up boilers, with subsequent environmental benefits compared to the traditional case. Furthermore, it is shown that the most attractive case of sizing is the large one, although the addition of the heat storage has the greatest impact on the medium case of sizing. In other words, the geothermal component of the installation should be sized as large as possible.
This analysis indicates that the proposed solution is beneficial from energetic, economic and environmental perspectives. Therefore, it can be stated that the aim of this study has been achieved to its full potential. Furthermore, the new models for the sizing, operation and economic/energetic/environmental analyses of these kinds of systems can be used with few adaptations for real cases, making the practical applicability of this study evident. Taking this study as a starting point, further work could include the integration of these systems with end-user demands, further analysis of component parts of the installation (such as the heat exchangers) and the integration of a heat pump to maximise the utilisation of geothermal energy.
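The following sketch illustrates the proposed operating principle (constant geothermal output, storage absorbing the surplus and shaving peaks, peak-up boiler covering the remainder) on an invented hourly demand profile; it is not the thesis's sizing or operational algorithm, and the capacities are placeholder assumptions.

```python
# Minimal hourly energy balance for the proposed strategy: constant geothermal output,
# a heat store that absorbs surplus and covers part of the peaks, and a peak-up boiler
# for whatever remains. Demand series, capacities and limits are invented.

def simulate_day(demand_mw, geo_mw=8.0, store_cap_mwh=20.0, store_mwh=0.0,
                 charge_limit_mw=5.0, discharge_limit_mw=5.0):
    """Return (boiler output per hour in MW, final storage level in MWh)."""
    boiler = []
    for d in demand_mw:
        if geo_mw >= d:
            surplus = geo_mw - d
            store_mwh = min(store_cap_mwh, store_mwh + min(surplus, charge_limit_mw))
            boiler.append(0.0)
        else:
            deficit = d - geo_mw
            from_store = min(deficit, discharge_limit_mw, store_mwh)
            store_mwh -= from_store
            boiler.append(deficit - from_store)      # peak-up boiler covers the rest
    return boiler, store_mwh

if __name__ == "__main__":
    demand = [5, 4, 4, 4, 5, 7, 11, 13, 12, 10, 9, 8,
              8, 8, 8, 9, 10, 12, 13, 12, 10, 8, 6, 5]   # MW, a typical winter-day shape
    boiler, final_level = simulate_day(demand)
    print(f"boiler energy: {sum(boiler):.1f} MWh, storage at end of day: {final_level:.1f} MWh")
```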
Abstract:
Datacenters have emerged as the dominant form of computing infrastructure over the last two decades. The tremendous increase in the requirements of data analysis has led to a proportional increase in power consumption, and datacenters are now one of the fastest growing electricity consumers in the United States. Another rising concern is the loss of throughput due to network congestion. Scheduling models that do not explicitly account for data placement may lead to a transfer of large amounts of data over the network, causing unacceptable delays. In this dissertation, we study different scheduling models that are inspired by the dual objectives of minimizing energy costs and network congestion in a datacenter. As datacenters are equipped to handle peak workloads, the average server utilization in most datacenters is very low. As a result, one can achieve huge energy savings by selectively shutting down machines when demand is low. In this dissertation, we introduce the network-aware machine activation problem to find a schedule that simultaneously minimizes the number of machines necessary and the congestion incurred in the network. Our model significantly generalizes well-studied combinatorial optimization problems such as hard-capacitated hypergraph covering and is thus strongly NP-hard. As a result, we focus on finding good approximation algorithms. Data-parallel computation frameworks such as MapReduce have popularized the design of applications that require a large amount of communication between different machines. Efficient scheduling of these communication demands is essential to guarantee efficient execution of the different applications. In the second part of the thesis, we study the approximability of the co-flow scheduling problem that has been recently introduced to capture these application-level demands. Finally, we also study the question, "In what order should one process jobs?" Often, precedence constraints specify a partial order over the set of jobs and the objective is to find suitable schedules that satisfy the partial order. However, in the presence of hard deadline constraints, it may be impossible to find a schedule that satisfies all precedence constraints. In this thesis we formalize different variants of job scheduling with soft precedence constraints and conduct the first systematic study of these problems.
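For flavour, here is a toy set-cover-style greedy for a heavily simplified machine-activation instance: activate few machines so that every job is assigned to a compatible machine within capacity. It is only an illustration; the dissertation's approximation algorithms additionally account for network congestion, and all names and numbers below are made up.

```python
# Toy greedy heuristic in the spirit of machine activation (not the dissertation's
# algorithm): repeatedly activate the machine that can absorb the most unassigned jobs.

def greedy_activate(compat: dict[str, set[str]],
                    capacity: dict[str, int],
                    jobs: set[str]) -> dict[str, set[str]]:
    """Return {machine: assigned jobs}, covering all jobs within machine capacities."""
    unassigned = set(jobs)
    remaining = dict(capacity)                      # remaining slots per machine
    activation: dict[str, set[str]] = {}

    def gain(m: str) -> int:
        # How many still-unassigned jobs machine m could take right now.
        return min(remaining[m], len(compat[m] & unassigned))

    while unassigned:
        best = max(compat, key=gain)
        if gain(best) == 0:
            raise ValueError("remaining jobs cannot be covered by any machine")
        take = set(sorted(compat[best] & unassigned)[:gain(best)])
        activation.setdefault(best, set()).update(take)
        remaining[best] -= len(take)
        unassigned -= take
    return activation

if __name__ == "__main__":
    compat = {"m1": {"j1", "j2", "j3"}, "m2": {"j3", "j4"}, "m3": {"j4", "j5", "j6"}}
    cap = {"m1": 2, "m2": 2, "m3": 3}
    print(greedy_activate(compat, cap, {"j1", "j2", "j3", "j4", "j5", "j6"}))
    # e.g. {'m3': {'j4', 'j5', 'j6'}, 'm1': {'j1', 'j2'}, 'm2': {'j3'}}
```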
Abstract:
Discourse analysis as a methodology is perhaps not readily associated with substantive causality claims. At the same time, the study of discourses is very much the study of conceptions of causal relations among a set, or sets, of agents. Within Europeanization research we have seen endeavours to develop discursive institutional analytical frameworks and something that comes close to the formulation of hypotheses on the effects of European Union (EU) policies and institutions on domestic change. Even if these efforts do not yet amount to substantive theories or claims of causality, they suggest that discourse analysis and the study of causality are by no means opposites. The study of Europeanization discourses may even be seen as an essential step in the move towards claims of causality in Europeanization research. This paper deals with the question of how we may move from the study of discursive causalities towards more substantive claims of causality between EU policy and institutional initiatives and domestic change.
Abstract:
Indoor Air 2016 - The 14th International Conference on Indoor Air Quality and Climate
Abstract:
The accurate prediction of stress histories for fatigue analysis is of utmost importance for the design process of wind turbine rotor blades. As detailed, transient, and geometrically non-linear three-dimensional finite element analyses are computationally far too expensive, it is commonly regarded as sufficient to calculate the stresses with a geometrically linear analysis and superimpose different stress states in order to obtain the complete stress histories. In order to quantify the error of geometrically linear simulations in the calculation of stress histories and to verify the practical applicability of the superposition principle in fatigue analyses, this paper studies the influence of geometric non-linearity using the example of a trailing edge bond line, as this subcomponent suffers from high strains in the span-wise direction. The blade under consideration is that of the IWES IWT-7.5-164 reference wind turbine. From turbine simulations, the highest edgewise loading scenario from the fatigue load cases is used as the reference. A 3D finite element model of the blade is created and the bond line fatigue assessment is performed according to the GL certification guidelines in their 2010 edition, and in comparison to the latest DNV GL standard from the end of 2015. The results show a significant difference between the geometrically linear and non-linear stress analyses when the bending moments are approximated via a corresponding external loading, especially in the case of the 2010 GL certification guidelines. This finding emphasizes the need to reconsider the application of the superposition principle in fatigue analyses of modern flexible rotor blades, where geometrical non-linearities become significant. In addition, a new load application methodology is introduced that reduces the geometrically non-linear behaviour of the blade in the finite element analysis.
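The superposition approach under scrutiny can be sketched in a few lines: the stress history at a point is assembled as a linear combination of unit-load stress states weighted by the simulated load time series. The unit stresses and load series below are invented placeholders, and the paper's point is precisely that this linearity breaks down for very flexible blades.

```python
# Sketch of geometrically linear superposition in fatigue post-processing.
# Unit stresses and loads are illustrative values, not the paper's data.
import numpy as np

# Unit stress states at a trailing-edge bond-line point (MPa per unit load),
# e.g. for flapwise moment, edgewise moment and axial force.
unit_stress = np.array([0.8, 2.5, 0.1])

# Load time series from an aeroelastic turbine simulation (columns match unit_stress).
loads = np.array([
    [1.0,  0.2, 0.5],
    [1.1, -0.4, 0.5],
    [0.9,  0.6, 0.4],
    [1.2, -0.8, 0.6],
])

stress_history = loads @ unit_stress   # superimposed stress history in MPa
print(stress_history)                  # would feed rainflow counting and damage summation
```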
Abstract:
Nowadays, the rising cost of health care and pharmaceutical products, the increase in life expectancy, as well as the demand for an improved quality of life have led to increased concern about food intake and the emergence of new concepts of nutrition [1]. Mushrooms have been pointed out as an excellent option to include in a healthy diet, due to their nutritional value [2] associated with their bioactive properties [3]. The current study presents the chemical profile of two edible species, Leccinum molle (Ban) Ban and Leccinum vulpinum Watling, harvested in the outskirts of Bragança (Northeastern Portugal), regarding their content in nutrients and non-nutrients. Individual profiles of sugars and fatty acids were obtained by HPLC-RI and GC-FID, respectively. Tocopherols were analysed by HPLC-fluorescence, and the non-nutrients (i.e., phenolic and other organic acids) by HPLC-PDA. The antioxidant activity of the methanolic extracts obtained from both species was assessed with different assays (e.g. reducing power, radical scavenging activity and lipid peroxidation inhibition), and their hepatotoxicity was evaluated in primary cell cultures obtained from porcine liver (PLP2). Generally, both Leccinum species revealed similar nutrient profiles, with low fat levels, fructose, mannitol and trehalose as the foremost free sugars, and higher percentages of mono- and polyunsaturated fatty acids in comparison with saturated fatty acids. The presence of bioactive compounds was also detected, namely phenolic acids (e.g., gallic, protocatechuic and p-hydroxybenzoic acids) and organic acids (e.g., citric and fumaric acids). Both species presented antioxidant properties, with L. vulpinum showing the most promising results (higher contents of total phenolic acids and lower EC50 values in all the performed assays). Neither of the extracts presented toxicity against the primary liver cells PLP2 up to the maximal concentration tested (GI50 > 400 μg/mL).
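As a side note on how EC50 values of the kind reported here are commonly obtained, the sketch below fits a four-parameter logistic dose-response curve to synthetic assay data; the concentrations and responses are invented, not the study's measurements, and the study's own calculation procedure may differ.

```python
# Hedged sketch of EC50 estimation (concentration giving 50% of maximal effect)
# by fitting a four-parameter logistic dose-response curve to synthetic assay data.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, bottom, top, ec50, hill):
    """Four-parameter logistic (Hill) dose-response model."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

conc = np.array([12.5, 25, 50, 100, 200, 400])          # extract concentration, ug/mL
inhib = np.array([8.0, 18.0, 35.0, 58.0, 79.0, 92.0])   # % scavenging activity (synthetic)

params, _ = curve_fit(logistic4, conc, inhib, p0=[0.0, 100.0, 80.0, 1.0])
print(f"estimated EC50 ~ {params[2]:.1f} ug/mL")
```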
Abstract:
Wind energy is one of the most promising and fastest growing sectors of energy production. Wind is an ecologically friendly and relatively cheap energy resource available for development in practically all corners of the world where the wind blows. Today, wind power has gained broad development in the Scandinavian countries. Three important challenges concerning sustainable development, i.e. energy security, climate change and energy access, make a compelling case for large-scale utilization of wind energy. In Finland, according to the climate and energy strategy accepted in 2008, the total consumption of electricity generated by wind farms should reach 6–7% of the country's total consumption by 2020 [1]. The main challenges associated with wind energy production are the harsh operational conditions that often accompany turbine operation in northern climates and poor accessibility for maintenance and service. One of the major problems requiring a solution is the icing of turbine structures. Icing reduces the performance of wind turbines, which, in conditions of a long cold period, can significantly affect the reliability of power supply. In order to predict and control power performance, the process of ice accretion has to be carefully tracked. There are two ways to detect icing: directly or indirectly. The first relies on dedicated ice detection instruments; the second uses indirect characteristics of turbine performance. One such indirect method for ice detection and power loss estimation is proposed and used in this paper. The results were compared with the results obtained directly from the ice sensors. The data used were measured at the Muukko wind farm, southeast Finland, during the project 'Wind power in cold climate and complex terrain'. The project was carried out in 9/2013 - 8/2015 with the partners Lappeenranta University of Technology, Alstom Renovables España S.L., TuuliMuukko, and TuuliSaimaa.
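A minimal sketch of such an indirect approach, assuming an invented reference power curve and 10-minute data: samples whose measured power falls persistently below a fraction of the power expected at the measured wind speed are flagged as iced, and the shortfall is summed as estimated production loss. This illustrates the general idea only, not the method or data of the paper.

```python
# Indirect icing detection sketch: compare measured power with a reference (ice-free)
# power curve, flag persistent shortfalls, and sum the deficit as estimated loss.
# Power curve and measurements are invented, not the Muukko data.
import numpy as np

# Reference power curve (wind speed m/s -> power kW), interpolated linearly.
pc_ws = np.array([3, 5, 7, 9, 11, 13, 15])
pc_p = np.array([30, 250, 700, 1400, 2100, 2400, 2400])

def expected_power(ws):
    return np.interp(ws, pc_ws, pc_p)

def detect_icing(wind_speed, power, deficit_ratio=0.8, min_consecutive=3):
    """Return (icing_mask, lost_energy_kWh) for 10-minute samples."""
    expected = expected_power(wind_speed)
    under = power < deficit_ratio * expected          # candidate icing samples
    mask = np.zeros_like(under)
    run = 0
    for i, flag in enumerate(under):                  # require persistence to avoid noise
        run = run + 1 if flag else 0
        if run >= min_consecutive:
            mask[i - run + 1:i + 1] = True
    lost_kwh = float(np.sum((expected - power)[mask]) * (10 / 60))
    return mask, lost_kwh

if __name__ == "__main__":
    ws = np.array([8.0, 8.2, 8.1, 7.9, 8.0, 8.3])
    p = np.array([1000, 650, 600, 590, 610, 980])     # dip suggests ice on the blades
    mask, lost = detect_icing(ws, p)
    print(mask, f"estimated loss ~ {lost:.0f} kWh")
```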