868 results for Energy-efficiency
Abstract:
Dissertation presented to the Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Environmental Engineering and Management, branch of Industrial Systems
Abstract:
We discuss the design principles of TCP in the context of heterogeneous wired/wireless networks and mobile networking. We identify three shortcomings in TCP's behavior: (i) the protocol's error detection mechanism, which does not distinguish between different types of errors and thus does not suffice for heterogeneous wired/wireless environments; (ii) its error recovery, which is not responsive to the distinctive characteristics of wireless networks, such as transient or burst errors due to handoffs and fading channels; and (iii) the protocol strategy, which does not control the tradeoff between performance measures such as goodput and energy consumption, and often entails wasteful retransmission and energy expenditure. We discuss a solution framework based on selected research proposals and the associated evaluation criteria for the suggested modifications. We highlight an important angle that has not yet attracted the attention it requires: the need for new performance metrics appropriate for evaluating the impact of protocol strategies on battery-powered devices.
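The tradeoff named in (iii) can be made concrete with a toy metric. A minimal sketch, with invented numbers and hypothetical helper names (`goodput`, `energy_per_bit`), of how a strategy that wins on goodput alone can lose once energy per delivered bit is measured:

```python
# Hypothetical illustration: comparing two protocol strategies by
# goodput alone vs. by energy per successfully delivered bit.

def goodput(delivered_bits: float, elapsed_s: float) -> float:
    """Useful bits delivered per second."""
    return delivered_bits / elapsed_s

def energy_per_bit(energy_j: float, delivered_bits: float) -> float:
    """Joules spent per successfully delivered bit."""
    return energy_j / delivered_bits

# Strategy A retransmits aggressively: slightly higher goodput,
# markedly higher energy use. All numbers are invented.
a = {"bits": 9.6e6, "time": 10.0, "energy": 12.0}
b = {"bits": 9.0e6, "time": 10.0, "energy": 7.5}

print(goodput(a["bits"], a["time"]) > goodput(b["bits"], b["time"]))              # True
print(energy_per_bit(a["energy"], a["bits"]) > energy_per_bit(b["energy"], b["bits"]))  # True
```

Under a goodput-only metric A looks better; under an energy-per-bit metric, the battery-powered device prefers B.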
Abstract:
With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods for optimising area or timing, while power optimisation often relies on heuristics specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into it. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and to which optimisation algorithms can then be applied. In particular, we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or delay. Finally, the third question this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values.
This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto the AND-Inverter Graph under both zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in EDA, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We used SA to decide probabilistically between moving from one optimised solution to another, such that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints. A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a search algorithm for traversing weighted trees or graphs. We used UCS to search the AIG network for a specific node order in which to apply the reordering rules.
After the reordering rules have been applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. Reductions of 23% in power and 15% in delay are achieved with minimal overhead, compared to the best known ABC results. Our approach is also applied to a number of processors with combinational and sequential components, and significant savings are achieved.
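The accept/reject step at the heart of simulated annealing can be sketched as follows. This is not the thesis implementation: the cost function below is a toy stand-in for the annotated power/delay estimates on the AIG, and the neighbour move a stand-in for node reordering.

```python
import math
import random

def anneal(initial, neighbour, cost, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated annealing: accept a worse neighbour with
    probability exp(-delta/T), cooling T geometrically."""
    rng = random.Random(seed)
    state, t = initial, t0
    best, best_cost = state, cost(state)
    for _ in range(steps):
        cand = neighbour(state, rng)
        delta = cost(cand) - cost(state)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            state = cand
            if cost(state) < best_cost:
                best, best_cost = state, cost(state)
        t *= cooling
    return best, best_cost

# Toy cost: a weighted sum standing in for switching power plus a
# delay penalty; a real flow would re-estimate both on the AIG.
def cost(order):
    power = sum(i * v for i, v in enumerate(order))
    delay = max(order)
    return power + 0.1 * delay

def neighbour(order, rng):
    a, b = rng.sample(range(len(order)), 2)
    out = list(order)
    out[a], out[b] = out[b], out[a]
    return out

best, c = anneal(list(range(8)), neighbour, cost)
print(best, c)
```

The same skeleton supports delay-driven power optimisation by adding a hard constraint check before acceptance.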
Abstract:
Countries across the world are being challenged to decarbonise their energy systems in response to diminishing fossil fuel reserves, rising GHG emissions and the dangerous threat of climate change. There has been a renewed interest in energy efficiency, renewable energy and low-carbon energy as policy-makers seek to identify and put in place the most robust sustainable energy system that can address this challenge. This thesis seeks to improve the evidence base underpinning energy policy decisions in Ireland, with a particular focus on natural gas, which by 2011 had grown to a 30% share of Ireland's total primary energy requirement (TPER). Natural gas is used in all sectors of the Irish economy and is seen by many as a transition fuel to a low-carbon energy system; it is also a uniquely rich source of data on many aspects of energy consumption. A detailed decomposition analysis of natural gas consumption in the residential sector quantifies many of the structural drivers of change, with activity (R2 = 0.97) and intensity (R2 = 0.69) being the best explainers of changing gas demand. The 2002 residential building regulations are subject to an ex-post evaluation which, using empirical data, finds a 44 ± 9.5% shortfall in expected energy savings as well as a 13 ± 1.6% level of non-compliance. A detailed energy demand model of the entire Irish energy system is presented together with scenario analysis of a large number of energy efficiency policies, which show an aggregate reduction in total final consumption (TFC) of 8.9% compared to a reference scenario. The role of natural gas as a transition fuel over a long time horizon (2005-2050) is analysed using an energy systems model and a decomposition analysis, which shows the contribution of fuel switching to natural gas to be worth 12 percentage points of an overall 80% reduction in CO2 emissions.
Finally, an analysis of the potential for carbon capture and storage (CCS) in Ireland finds gas CCS to be more robust than coal CCS to changes in fuel prices, capital costs and emissions-reduction requirements; the cost-optimal location for a gas CCS plant in Ireland is found to be in Cork, with sequestration in the depleted Kinsale gas field.
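Decomposition analyses of the kind described above are commonly carried out with the logarithmic mean Divisia index (LMDI). A minimal two-factor sketch, with invented figures rather than the thesis data, showing the defining property that the activity and intensity effects sum exactly to the change in demand:

```python
import math

def logmean(x: float, y: float) -> float:
    """Logarithmic mean, the weighting used by LMDI."""
    return x if x == y else (x - y) / (math.log(x) - math.log(y))

def lmdi_two_factor(a0, i0, a1, i1):
    """Decompose the change in E = A * I into an activity effect
    and an intensity effect (additive LMDI-I form)."""
    e0, e1 = a0 * i0, a1 * i1
    lm = logmean(e1, e0)
    activity_effect = lm * math.log(a1 / a0)
    intensity_effect = lm * math.log(i1 / i0)
    return activity_effect, intensity_effect

# Invented example: connections grow 10%, gas per connection falls.
act, inten = lmdi_two_factor(a0=600_000, i0=12.0, a1=660_000, i1=11.0)
print(round(act + inten, 1))  # equals E1 - E0 = 60000.0
```

The exact additivity (no residual term) is why LMDI is the usual choice for attributing demand change to structural drivers.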
Abstract:
The retrofitting of existing buildings to decrease energy usage through increased energy efficiency, and to minimise carbon dioxide emissions throughout their remaining lifetime, is a major area of research. This area requires development to provide building professionals with more efficient tools for determining building retrofit solutions. The overarching objective of this research is to develop such a tool through the implementation of a prescribed methodology. This has been achieved in three distinct steps. Firstly, the concept of using the degree-days modelling method as an adequate basis for retrofit decisions was analysed, and the results illustrated that the concept had merit. Secondly, the combination of the degree-days modelling method with the Genetic Algorithm optimisation method was investigated as a way of determining optimal thermal energy retrofit solutions. Thirdly, this combination was packaged into a building retrofit decision-support tool named BRaSS (Building Retrofit Support Software). The results demonstrate clearly that fundamental building information, simplified occupancy profiles and weather data used in a static simulation modelling method are a sufficient and adequate basis for retrofitting decisions. They also show that basing retrofit decisions on energy analysis results is the best means of guiding a retrofit project and of achieving results which are optimal for a particular building. Finally, the results indicate that BRaSS is an effective method for determining optimal thermal energy retrofit solutions.
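A minimal sketch of the degree-days-plus-genetic-algorithm idea described above. All measure data, prices and the fitness form are invented for illustration and are not the BRaSS formulation: each individual is a bit-string selecting retrofit measures, and fitness trades annual heating cost (estimated from degree-days and a reduced U-value) against annualised capital cost.

```python
import random

MEASURES = [  # (assumed U-value reduction W/m2K, assumed capital cost)
    (0.4, 5000), (0.25, 3000), (0.15, 1500), (0.1, 800),
]
DEGREE_DAYS, AREA, BASE_U, PRICE = 2800, 120.0, 1.6, 0.07  # price per kWh

def fitness(bits):
    """Annual cost (lower is better): degree-day heating demand
    at the retrofitted U-value, plus annualised capital spend."""
    u = max(0.2, BASE_U - sum(m[0] for m, b in zip(MEASURES, bits) if b))
    energy_cost = u * AREA * DEGREE_DAYS * 24 / 1000 * PRICE  # kWh -> cost
    capital = sum(m[1] for m, b in zip(MEASURES, bits) if b) / 20  # 20-year life
    return energy_cost + capital

def evolve(pop_size=30, gens=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in MEASURES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p, q = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(MEASURES))  # one-point crossover
            child = p[:cut] + q[cut:]
            if rng.random() < 0.1:                 # mutation
                i = rng.randrange(len(MEASURES))
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 1))
```

The static degree-day model keeps each fitness evaluation cheap, which is what makes the GA search over measure combinations tractable.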
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network: routing is performed in the direction of this vector field at every location, and the magnitude of the field at each location represents the density of the data passing through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that, to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). As one application of this vector field model, we offer a scheme for energy-efficient routing: the permittivity coefficient is set to a higher value in parts of the network where nodes have high residual energy, and to a low value where nodes have little energy left. Our simulations show that this method gives a significant increase in network lifetime compared to shortest-path and weighted shortest-path schemes. Our initial focus is on the case where there is only one destination in the network; we later extend the approach to the case of multiple destinations.
In the case of multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case is how to define the regions of attraction and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve this problem. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the network's communication load to the destinations, the value of that potential field must be equal at the locations of all the destinations. Another application of our vector field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm, to be applied during the design phase of a network, that relocates the destinations to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments. In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values.
The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting decrease in its rate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make it robust by borrowing ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce a test of responsiveness for aggregates which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. We then modify CAPM to obtain methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing its response. We offer two methods for conformance testing. In the first, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation tests to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to find suitable signatures; specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods.
As a result of this orthogonality, performance does not degrade due to cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
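A discrete-graph analogue of the permittivity idea from the routing part of this work can be sketched as follows: scaling each edge cost by the inverse of the receiving node's residual energy makes routes bend away from depleted relays, just as lowering the permittivity discourages flux. The topology, energies and cost rule below are invented for illustration.

```python
import heapq

def dijkstra(graph, energy, src, dst):
    """Shortest path where each hop's cost is penalised by the
    inverse residual energy of the receiving node."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, base_cost in graph[u]:
            nd = d + base_cost / energy[v]  # low energy -> high cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

graph = {
    "s": [("a", 1.0), ("b", 1.0)],
    "a": [("t", 1.0)],
    "b": [("t", 1.0)],
    "t": [],
}
energy = {"s": 1.0, "a": 0.2, "b": 0.9, "t": 1.0}  # node a nearly depleted
print(dijkstra(graph, energy, "s", "t"))  # ['s', 'b', 't'] — avoids a
```

With uniform energies this reduces to plain shortest-path routing; the continuum formulation in the work above generalises the same bias to a vector field over the whole network area.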
Abstract:
Biogas is a mixture of methane and other gases. In its crude state it contains carbon dioxide (CO2), which reduces its energy efficiency, and hydrogen sulfide (H2S), which is toxic and highly corrosive. Because chemical methods of removal are expensive and environmentally hazardous, this project investigated an algal-based system to remove CO2 from biogas. An anaerobic digester was used to mimic landfill biogas. Iron oxide and an alkaline spray were used to remove H2S and CO2, respectively. The CO2-laden alkali solution was added to a helical photobioreactor, where the algae metabolized the dissolved CO2 to generate algal biomass. Although technical issues prevented testing of the complete system for functionality, a cost analysis was completed and showed that the system, in its current state, is not economically feasible. However, modifications may reduce operating costs.
Abstract:
Induction Skull Melting (ISM) is a technique for heating, melting, mixing and, possibly, evaporating reactive liquid metals at high temperatures with minimal contact with solid walls. The numerical modelling presented here involves a complete time-dependent process analysis based on the coupled electromagnetic, temperature and turbulent velocity fields during melting and liquid shape changes. The simulation model is validated against measurements of liquid metal height, temperature and heat losses in a commercial-size ISM furnace. The typically observed limiting temperature plateau for increasing input electrical power is explained by turbulent convective heat losses. Various methods to increase the superheat within the liquid melt, the energy efficiency of the process and its stability are proposed.
Abstract:
A casting route is often the most cost-effective means of producing engineering components. However, certain materials, particularly those based on Ti, TiAl and Zr alloy systems, are very reactive in the molten condition and must be melted in special furnaces. Induction Skull Melting (ISM) is the most widely used process for melting these alloys prior to casting components such as turbine blades, engine valves, turbocharger rotors and medical prostheses. A major research project is underway with the specific target of developing robust techniques for casting TiAl components. The aims include increasing the superheat in the molten metal to allow thin-section components to be cast, improving the quality of the cast components and increasing the energy efficiency of the process. As part of this, the University of Greenwich (UK) is developing a computer model of the ISM process in close collaboration with the University of Birmingham (UK), where extensive melting trials are being undertaken. This paper describes the experimental measurements made to obtain data to feed into and validate the model. These include measurements of the true RMS current applied to the induction coil, the heat transfer from the molten metal to the crucible cooling water, and the shape of the column of semi-levitated molten metal. Data are presented for Al, Ni and TiAl.
Abstract:
Induction Skull Melting (ISM) is used for heating, melting, mixing and, possibly, evaporating reactive liquid metals at high temperatures when minimal contact with solid walls is required. The numerical model presented here involves a complete time-dependent process analysis based on the coupled electromagnetic, temperature and turbulent velocity fields during melting and liquid shape changes. The simulation is validated against measurements of liquid metal height, temperature and heat losses in a commercial-size ISM furnace. The often-observed limiting temperature plateau for ever-increasing electrical power input is explained by turbulent convective heat losses. Various methods to increase the superheat within the liquid melt, the energy efficiency of the process and its stability are proposed.
Abstract:
The time-dependent numerical model of cold crucible melting is based on the coupled calculation of the electromagnetic, temperature and turbulent velocity fields, accounting for the continuous change of the magnetically confined liquid-metal shape. The model is applied to investigate how the energy efficiency of the process depends on the critical choice of AC power supply frequency and on the optional addition of a DC magnetic field. Test cases with metal loads of up to 50 kg are considered. The behaviour of the numerical model at high AC frequencies is validated using the analytical electromagnetic solution for a sphere and temperature measurements in a commercial-size cold crucible furnace.
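The sensitivity to AC supply frequency enters largely through the electromagnetic skin depth, δ = √(2 / (ω μ σ)). A minimal sketch of this standard formula, using an illustrative liquid-metal conductivity rather than data from the furnace above:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(freq_hz: float, sigma_s_per_m: float, mu_r: float = 1.0) -> float:
    """Electromagnetic skin depth in metres for a conductor of
    conductivity sigma at the given AC frequency."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu_r * MU0 * sigma_s_per_m))

# Illustrative liquid-metal conductivity ~ 6e5 S/m (assumed value)
for f in (1e3, 1e4, 1e5):
    print(f"{f:>8.0f} Hz -> {skin_depth(f, 6e5) * 1000:.1f} mm")
```

Raising the frequency shrinks the region where Joule heating and stirring forces act, which is why the frequency choice couples to both energy efficiency and melt behaviour.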
Abstract:
This work reviews the evolution and current state of electric mobility; analyses the environmental, energy-efficiency and cost advantages of the electric motor over the internal combustion engine; and presents, as limitations to electric vehicle adoption, the current state of rechargeable battery development and the slow roll-out of charging stations. With the aim of contributing to the development of an environmentally respectful economic activity based on new technologies, and building on previous experience, a charging-point installation is designed for a city of 50,000 inhabitants with a fleet of 100 electric vehicles, comprising two fast-charging bays (three-phase 400 V AC post), seven slow-charging bays (single-phase 230 V AC posts) and 50 photovoltaic modules that produce daily the energy equivalent of one slow vehicle charge in the cold months and two in the warm months.
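The photovoltaic sizing claim can be checked with back-of-the-envelope arithmetic. The battery capacity, module rating, performance ratio and sun-hour figures below are assumptions for illustration; the abstract fixes only the module count and the one-charge/two-charge outcome.

```python
BATTERY_KWH = 24.0  # assumed energy per slow vehicle charge
MODULE_KW = 0.25    # assumed module rating
N_MODULES = 50

def charges_per_day(sun_hours: float, performance_ratio: float = 0.8) -> float:
    """Slow vehicle charges the PV array covers per day, given
    equivalent full-sun hours and a system performance ratio."""
    daily_kwh = N_MODULES * MODULE_KW * sun_hours * performance_ratio
    return daily_kwh / BATTERY_KWH

print(round(charges_per_day(2.5), 2))  # assumed cold-month sun hours -> ~1 charge
print(round(charges_per_day(5.0), 2))  # assumed warm-month sun hours -> ~2 charges
```

With these assumed values the 50-module array lands close to one daily charge in winter and two in summer, consistent with the design described.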
Abstract:
Wireless enabled portable devices must operate with the highest possible energy efficiency while still maintaining a minimum level and quality of service to meet the user's expectations. The authors analyse the performance of a new pointer-based medium access control protocol that was designed to significantly improve the energy efficiency of user terminals in wireless local area networks. The new protocol, pointer controlled slot allocation and resynchronisation protocol (PCSAR), is based on the existing IEEE 802.11 point coordination function (PCF) standard. PCSAR reduces energy consumption by removing the need for power saving stations to remain awake and listen to the channel. Using OPNET, simulations were performed under symmetric channel loading conditions to compare the performance of PCSAR with the infrastructure power saving mode of IEEE 802.11, PCF-PS. The simulation results demonstrate a significant improvement in energy efficiency without significant reduction in performance when using PCSAR. For a wireless network consisting of an access point and 8 stations in power saving mode, the energy saving was up to 31% while using PCSAR instead of PCF-PS, depending upon frame error rate and load. The results also show that PCSAR offers significantly reduced uplink access delay over PCF-PS while modestly improving uplink throughput.
Abstract:
The performance of a new pointer-based medium-access control protocol that was designed to significantly improve the energy efficiency of user terminals in quality-of-service-enabled wireless local area networks was analysed. The new protocol, pointer-controlled slot allocation and resynchronisation protocol (PCSARe), is based on the hybrid coordination function-controlled channel access mode of the IEEE 802.11e standard. PCSARe reduces energy consumption by removing the need for power-saving stations to remain awake for channel listening. Discrete event network simulations were performed to compare the performance of PCSARe with the non-automatic power save delivery (APSD) and scheduled-APSD power-saving modes of IEEE 802.11e. The simulation results show a demonstrable improvement in energy efficiency without significant reduction in performance when using PCSARe. For a wireless network consisting of an access point and eight stations in power-saving mode, the energy saving was up to 39% when using PCSARe instead of IEEE 802.11e non-APSD. The results also show that PCSARe offers significantly reduced uplink access delay over IEEE 802.11e non-APSD, while modestly improving the uplink throughput. Furthermore, although both had the same energy consumption, PCSARe gave a 25% reduction in downlink access delay compared with IEEE 802.11e S-APSD.
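The savings reported in the two abstracts above come from removing idle channel listening. A simple duty-cycle energy model makes the mechanism concrete; the power figures are typical WLAN-card values assumed for illustration, not measurements from these papers.

```python
# Assumed per-state power draw of a WLAN interface (illustrative).
P_TX = 1.4       # W, transmitting
P_RX_IDLE = 0.9  # W, awake and listening
P_SLEEP = 0.05   # W, dozing

def energy_joules(t_tx: float, t_listen: float, t_sleep: float) -> float:
    """Energy over one interval, given seconds spent in each state."""
    return P_TX * t_tx + P_RX_IDLE * t_listen + P_SLEEP * t_sleep

# Over a 1 s beacon interval: a legacy power-save station wakes to
# listen for 200 ms; a pointer-based station sends the same 10 ms of
# traffic but sleeps through the listening window.
legacy = energy_joules(0.01, 0.2, 0.79)
pointer = energy_joules(0.01, 0.0, 0.99)
saving = 1 - pointer / legacy
print(f"{saving:.0%}")
```

Because idle listening draws nearly as much power as transmission, even a short listening window dominates the budget of a lightly loaded station, which is why eliminating it yields savings of the order reported.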