925 results for Energy Requirements
Abstract:
This thesis aims at building and discussing applications of mathematical models to energy problems, on both the thermal and the electrical side. The objective is to show how mathematical programming techniques developed within Operational Research can give useful answers in the energy sector, how they can provide tools to support the decision-making processes of companies operating in energy production and distribution, and how they can be used for simulations and sensitivity analyses to better understand the state of the art and the convenience of a particular technology by comparing it with the available alternatives. The first part discusses the fundamental mathematical background, followed by a comprehensive literature review of mathematical modelling in the energy sector. The second part presents mathematical models for strategic and incremental district heating network design. The objective is the selection of an optimal set of new users to be connected to an existing thermal network, maximizing revenues, minimizing infrastructure and operational costs, and taking into account the main technical requirements of the real-world application. Results on real and randomly generated benchmark networks are discussed, with particular attention to instances characterized by large network dimensions. The third part is devoted to the development of linear programming models for optimal battery operation in off-grid solar power schemes, with consideration of battery degradation. The key contribution of this work is the inclusion of battery degradation costs in the optimisation models. As available data relating degradation costs to the nature of charge/discharge cycles are limited, we concentrate on investigating the sensitivity of operational patterns to the degradation cost structure. The objective is to identify the combination of battery costs and performance at which such systems become economic. We also investigate how the system design should change when battery degradation is taken into account.
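To make the modelling approach concrete, here is a minimal dispatch sketch in the spirit of the third part: a one-day linear program that trades off backup generation cost against a linear, throughput-based battery degradation cost. All data (load, PV profile, costs, battery parameters) and the PuLP formulation are illustrative assumptions, not the thesis model.

```python
# Minimal LP sketch: one-day off-grid battery dispatch with a linear
# throughput-based degradation cost. All figures are assumed, not from the thesis.
import pulp

T = 24                                   # hourly steps
load = [2.0] * T                         # kW demand (hypothetical)
solar = [0.0] * 6 + [4.0] * 12 + [0.0] * 6   # kW PV output (hypothetical)
diesel_cost = 0.40                       # $/kWh of backup generation (assumed)
deg_cost = 0.05                          # $/kWh cycled through the battery (assumed)
cap, p_max, eff = 10.0, 3.0, 0.95        # kWh, kW, charge/discharge efficiency (assumed)

m = pulp.LpProblem("battery_dispatch", pulp.LpMinimize)
ch    = [pulp.LpVariable(f"ch_{t}", 0, p_max) for t in range(T)]   # charging kW
dis   = [pulp.LpVariable(f"dis_{t}", 0, p_max) for t in range(T)]  # discharging kW
gen   = [pulp.LpVariable(f"gen_{t}", 0) for t in range(T)]         # backup generator kW
spill = [pulp.LpVariable(f"spill_{t}", 0) for t in range(T)]       # curtailed PV kW
soc   = [pulp.LpVariable(f"soc_{t}", 0, cap) for t in range(T)]    # state of charge kWh

# Objective: backup energy cost plus degradation proportional to battery throughput.
m += pulp.lpSum(diesel_cost * gen[t] + deg_cost * (ch[t] + dis[t]) for t in range(T))

for t in range(T):
    # Power balance: PV + discharge + backup covers load, charging and curtailment.
    m += solar[t] + dis[t] + gen[t] == load[t] + ch[t] + spill[t]
    prev = soc[t - 1] if t > 0 else 0.5 * cap            # assumed initial SOC
    m += soc[t] == prev + eff * ch[t] - dis[t] / eff     # battery energy balance

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("daily cost:", pulp.value(m.objective))
```

Because the degradation term charges every kWh cycled, the solver only uses the battery when the avoided backup cost exceeds the assumed wear cost, which is exactly the sensitivity the thesis investigates.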
Abstract:
Besides the traditional paradigm of "centralized" power generation, a new concept of "distributed" generation is emerging, in which the user becomes a prosumer. During this transition, Energy Storage Systems (ESS) can provide multiple services and features that are necessary for a higher quality of the electrical system and for the optimization of non-programmable Renewable Energy Source (RES) power plants. An ESS prototype was designed, developed and integrated into a renewable energy production system in order to create a smart microgrid and consequently manage the energy flow efficiently and intelligently as a function of the power demand. The produced energy can be fed into the grid, supplied directly to the load, or stored in batteries. The microgrid comprises a 7 kW wind turbine (WT) and a 17 kW photovoltaic (PV) plant. The load is given by the electrical utilities of a cheese factory. The ESS consists of two subsystems: a Battery Energy Storage System (BESS) and a Power Control System (PCS). With the aim of sizing the ESS, a Remote Grid Analyzer (RGA) was designed, built and connected to the wind turbine, the photovoltaic plant and the switchboard. Afterwards, different electrochemical storage technologies were studied and, taking into account the load requirements of the cheese factory, the most suitable solution was identified as the high-temperature Na-NiCl2 salt battery technology. The data acquisition from all electrical utilities provided a detailed load analysis, indicating an optimal storage size of 30 kW. Moreover, a container was designed and built to house the BESS and PCS, meeting all requirements and safety conditions. Furthermore, a smart control system was implemented in order to handle the different applications of the ESS, such as peak shaving or load levelling.
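As an illustration of what such peak-shaving logic can look like, the rule-based sketch below discharges the battery to clip net load above a threshold and recharges when there is headroom. The threshold, battery capacity, time step and load values are assumed for the example and are not taken from the prototype.

```python
# Illustrative rule-based peak-shaving step, not the prototype's control system.
def peak_shaving_step(net_load_kw, soc_kwh, threshold_kw=20.0,
                      p_max_kw=30.0, cap_kwh=60.0, dt_h=0.25):
    """Return (battery_power_kw, new_soc_kwh); positive power = discharge."""
    if net_load_kw > threshold_kw:
        # Discharge to clip the peak, limited by converter rating and stored energy.
        p = min(net_load_kw - threshold_kw, p_max_kw, soc_kwh / dt_h)
    else:
        # Spare headroom: recharge, limited by rating and remaining capacity.
        p = -min(threshold_kw - net_load_kw, p_max_kw, (cap_kwh - soc_kwh) / dt_h)
    return p, soc_kwh - p * dt_h

soc = 30.0
for load in [12, 18, 27, 35, 22, 15]:   # kW, hypothetical factory load samples
    p, soc = peak_shaving_step(load, soc)
    print(f"load={load:>4} kW  battery={p:+5.1f} kW  grid={load - p:5.1f} kW  soc={soc:5.1f} kWh")
```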
Abstract:
Graphene, the thinnest possible two-dimensional material, is considered a realistic candidate for numerous applications in electronic, energy storage and conversion devices due to its unique properties, such as high optical transmittance, high conductivity, and excellent chemical and thermal stability. However, the electronic and chemical properties of graphene depend strongly on its preparation method. Therefore, the development of novel chemical exfoliation processes aiming at high-yield synthesis of high-quality graphene while maintaining good solution processability is of great concern. This thesis focuses on the solution production of high-quality graphene by wet-chemical exfoliation methods and addresses the applications of the chemically exfoliated graphene in organic electronics and energy storage devices.

Platinum is the most commonly used catalyst for fuel cells, but it suffers from sluggish electron transfer kinetics. On the other hand, heteroatom-doped graphene is known to enhance not only electrical conductivity but also long-term operational stability. In this regard, a simple synthetic method is developed for the preparation of nitrogen-doped graphene (NG). Moreover, iron (Fe) can be incorporated into the synthetic process. The as-prepared NG, with and without Fe, shows excellent catalytic activity and stability compared to Pt-based catalysts.

High electrical conductivity is one of the most important requirements for the application of graphene in electronic devices. Therefore, for the fabrication of electrically conductive graphene films, a novel methane-plasma-assisted reduction of GO is developed. The high electrical conductivity of the plasma-reduced GO films yielded excellent electrochemical performance in terms of high power and energy densities when used as an electrode in micro-supercapacitors.

Although GO can be prepared at bulk scale, its high defect density and low electrical conductivity are major drawbacks. To overcome the intrinsic limitations of poor-quality GO and/or reduced GO, a novel protocol is established for the mass production of high-quality graphene by means of electrochemical exfoliation of graphite. The prepared graphene shows high electrical conductivity, low defect density and good solution processability. Furthermore, when used as electrodes in organic field-effect transistors and/or in supercapacitors, the electrochemically exfoliated graphene shows excellent device performance. The low-cost and environmentally friendly production of such high-quality graphene is of great importance for future-generation electronics and energy storage devices.
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multi-cylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high engine pressure difference between the exhaust and intake manifolds (ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
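As a small illustration of the kind of pre-processing mentioned for transport delays and sensor lags, the sketch below time-advances a measured signal by an assumed transport delay and applies a first-order lag compensation (x_true ≈ x_meas + τ·dx_meas/dt). The delay, time constant and trace are assumed for illustration and do not come from the study.

```python
# Illustrative alignment of a measured opacity trace: undo an assumed transport
# delay, then compensate an assumed first-order sensor lag. Not the study's method.
import numpy as np

dt = 0.1                                  # s, sampling interval (assumed)
t = np.arange(0, 30, dt)
opacity_meas = np.interp(t, [0, 5, 6, 12, 30], [0.1, 0.1, 0.8, 0.2, 0.2])  # fake trace

transport_delay_s = 0.8                   # exhaust transport delay (assumed)
sensor_tau_s = 0.5                        # first-order sensor time constant (assumed)

# 1) Advance the signal to undo the transport delay.
shift = int(round(transport_delay_s / dt))
aligned = np.roll(opacity_meas, -shift)
aligned[-shift:] = aligned[-shift - 1]    # hold the last valid value at the end

# 2) First-order lag compensation using a finite-difference derivative.
compensated = aligned + sensor_tau_s * np.gradient(aligned, dt)

print(compensated[:5])
```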
Abstract:
In the U.S., many electric utility companies offer demand-side management (DSM) programs to their customers as a way to save money and energy. However, it is challenging to compare these programs between utility companies throughout the U.S. because of the variability of state energy policies. For example, some states in the U.S. have deregulated electricity markets while others do not. In addition, utility companies within a state differ depending on ownership and size. This study examines 12 utilities' experiences with DSM programs and compares the programs' annual energy savings results that the selected utilities reported to the Energy Information Administration (EIA). The 2009 EIA data suggest that DSM program effectiveness is not significantly affected by electricity market deregulation or utility ownership. However, DSM programs seem to be generally more effective when administered by utilities located in states with energy savings requirements and DSM program mandates.
Abstract:
The study served to assure the quality of our catering, to locate problems, and to define further optimization measures at the Bern University Hospital. The main objective was to investigate whether the macronutrient and energy content of the hospital food complies with the nutritional value calculated from recipes as well as with the recommendations issued by the German Nutrition Society (DGE).
Abstract:
In recent years, advanced metering infrastructure (AMI) has been a main research focus because the traditional power grid is too restricted to meet development requirements. There has been an ongoing effort to increase the number of AMI devices that provide real-time data readings to improve system observability. AMI deployed across distribution secondary networks provides load and consumption information for individual households, which can improve grid management. The significant upgrade costs associated with retrofitting existing meters with network-capable sensing can be reduced by instead using image processing methods to extract usage information from images of the existing meters. This thesis presents a new solution that uses online exchange of power consumption information with a cloud server without modifying the existing electromechanical analog meters. In this framework, a systematic approach to extracting energy data from images replaces the manual reading process. In a case study, the digital imaging approach is compared to the averages determined by visual readings over a one-month period.
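As a rough sketch of one way to extract a reading from a photo of an analog dial (not the pipeline used in the thesis), the example below detects the longest line segment as the pointer and maps its angle to a value. The file name, the assumption that the dial is cropped and spans a full revolution, and the angle-to-value mapping are all hypothetical.

```python
# Illustrative dial-reading sketch with OpenCV; not the thesis' actual method.
import cv2
import numpy as np

def read_dial(path, full_scale=10.0):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=img.shape[0] // 4, maxLineGap=5)
    if lines is None:
        return None
    # Take the longest detected line segment as the pointer.
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 360
    # Map the pointer angle linearly onto the dial range (assumed 0-360 deg -> 0-full_scale).
    return full_scale * angle / 360.0

print(read_dial("meter_dial.jpg"))   # hypothetical input image
```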
Abstract:
The economic efficiency of international supply chains rests on the efficiency of multimodal transport chains. Materials and products are transported along the edges of transport networks, with the forwarder endeavouring to maximize transport efficiency by exploiting effects of scale along the edges. The network nodes provide the means to transfer the goods between the means of transport. Whilst purely economic criteria were initially the driving force for a change of the means of transport, ecological requirements are now becoming ever more relevant. The transport chains should not only be economically viable but should also have a "green footprint". In this context, the following considerations deal with the transfer processes within the network nodes, especially those within inland and feeder terminals. Answers are to be given to the questions of how far the choice of the crane's primary drive affects the energy consumption and environmental compatibility of handling the goods, and what additional benefit the recuperation of energy brings during the handling process.
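To give a sense of the quantities involved in energy recuperation, the back-of-the-envelope estimate below computes the potential energy released when lowering a loaded container and the share a regenerative drive might recover. The container mass, lowering height and recovery efficiency are assumed figures, not data from the paper.

```python
# Illustrative estimate of recoverable energy per container lowering move.
g = 9.81            # m/s^2
mass_kg = 30_000    # loaded container (assumed)
height_m = 12       # lowering height (assumed)
eta_recup = 0.6     # electrical recovery efficiency of the drive train (assumed)

e_potential_kwh = mass_kg * g * height_m / 3.6e6
e_recovered_kwh = eta_recup * e_potential_kwh
print(f"potential energy ~ {e_potential_kwh:.2f} kWh, recoverable ~ {e_recovered_kwh:.2f} kWh")
```

Even under these assumptions, each lowering move releases roughly one kilowatt-hour, which is why the choice of primary drive and the presence of recuperation matter over thousands of handling cycles.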
Abstract:
This paper proposes the Optimized Power save Algorithm for continuous Media Applications (OPAMA) to improve the energy efficiency of end-user devices. OPAMA enhances the standard legacy Power Save Mode (PSM) of IEEE 802.11 by taking into consideration application-specific requirements combined with data aggregation techniques. By establishing a balanced cost/benefit tradeoff between performance and energy consumption, OPAMA is able to improve energy efficiency while keeping the end-user experience at a desired level. OPAMA was assessed in the OMNeT++ simulator using real traces of variable bitrate video streaming applications. The results showed the capability to enhance energy efficiency, achieving savings of up to 44% when compared with the IEEE 802.11 legacy PSM.
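A toy illustration of the core idea (sleep longer to aggregate traffic, but never beyond the delay the application tolerates) is sketched below. It is not the published OPAMA algorithm; the beacon interval, delay budget and cap are assumed values.

```python
# Simplified sketch of an application-aware sleep-interval choice, OPAMA-style.
def next_sleep_interval_ms(app_max_delay_ms, buffered_ms_of_media,
                           beacon_interval_ms=100, max_multiple=10):
    """Choose how many beacon intervals the station may skip before waking up."""
    # Sleep only as long as the playout buffer plus the delay budget allows.
    budget_ms = min(app_max_delay_ms, buffered_ms_of_media)
    multiples = max(1, min(max_multiple, budget_ms // beacon_interval_ms))
    return multiples * beacon_interval_ms

# Example: a video client tolerating 500 ms of delay with 800 ms of buffered media.
print(next_sleep_interval_ms(500, 800))   # -> 500 ms: wake every 5th beacon
```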
Abstract:
Various applications for the purposes of event detection, localization, and monitoring can benefit from the use of wireless sensor networks (WSNs). Wireless sensor networks are generally easy to deploy, have a flexible topology, and can support a diversity of tasks thanks to the large variety of sensors that can be attached to the wireless sensor nodes. To guarantee the efficient operation of such heterogeneous wireless sensor networks during their lifetime, appropriate management is necessary. Typically, there are three management tasks, namely monitoring, (re)configuration, and code updating. On the one hand, status information, such as battery state and node connectivity, of both the wireless sensor network and the sensor nodes has to be monitored. On the other hand, sensor nodes have to be (re)configured, e.g., by setting the sensing interval. Most importantly, new applications have to be deployed and bug fixes applied during the network lifetime. All management tasks have to be performed in a reliable, time- and energy-efficient manner. The ability to disseminate data from one sender to multiple receivers in a reliable, time- and energy-efficient manner is critical for the execution of the management tasks, especially for code updating. Using multicast communication in wireless sensor networks is an efficient way to handle such a traffic pattern. Due to the nature of code updates, a multicast protocol has to support bulky traffic and end-to-end reliability. Furthermore, the limited resources of wireless sensor nodes demand an energy-efficient operation of the multicast protocol. Current data dissemination schemes do not fulfil all of the above requirements. In order to close the gap, we designed the Sensor Node Overlay Multicast (SNOMC) protocol to support reliable, time-efficient and energy-efficient dissemination of data from one sender node to multiple receiver nodes. In contrast to other multicast transport protocols, which do not support reliability mechanisms, SNOMC provides end-to-end reliability using a NACK-based reliability mechanism (a minimal sketch of this idea follows below). The mechanism is simple and easy to implement and can significantly reduce the number of transmissions. It is complemented by a data acknowledgement after successful reception of all data fragments by the receiver nodes. In SNOMC, three different caching strategies are integrated for an efficient handling of the necessary retransmissions: caching on each intermediate node, caching on branching nodes, or caching only on the sender node. Moreover, an option was included to pro-actively request missing fragments. SNOMC was evaluated both in the OMNeT++ simulator and in our in-house real-world testbed, and compared to a number of common data dissemination protocols, such as Flooding, MPR, TinyCubus, PSFQ, and both UDP and TCP. The results showed that SNOMC outperforms the selected protocols in terms of transmission time, number of transmitted packets, and energy consumption. Moreover, we showed that SNOMC performs well with different underlying MAC protocols, which support different levels of reliability and energy efficiency. Thus, SNOMC can offer a robust, high-performing solution for the efficient distribution of code updates and management information in a wireless sensor network. To address the three management tasks, in this thesis we developed the Management Architecture for Wireless Sensor Networks (MARWIS). MARWIS is specifically designed for the management of heterogeneous wireless sensor networks.
A distinguishing feature of its design is the use of wireless mesh nodes as a backbone, which enables diverse communication platforms and the offloading of functionality from the sensor nodes to the mesh nodes. This hierarchical architecture allows for efficient operation of the management tasks, due to the organisation of the sensor nodes into small sub-networks, each managed by a mesh node. Furthermore, we developed an intuitive graphical user interface, which allows non-expert users to easily perform management tasks in the network. In contrast to other management frameworks, such as Mate, MANNA, and TinyCubus, or code dissemination protocols, such as Impala, Trickle, and Deluge, MARWIS offers an integrated solution for monitoring, configuration and code updating of sensor nodes. The integration of SNOMC into MARWIS further increases the efficiency of the management tasks. To our knowledge, our approach is the first to offer a combination of a management architecture with an efficient overlay multicast transport protocol. This combination of SNOMC and MARWIS supports reliable, time- and energy-efficient operation of a heterogeneous wireless sensor network.
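A minimal sketch of the NACK-based reliability idea described above, with a cache for retransmissions, might look as follows. This is an illustration in plain Python, not SNOMC's actual implementation; the class names and message formats are invented for the example.

```python
# Toy NACK-based reliability: the receiver requests only missing fragments,
# and the sender (or a caching intermediate node) retransmits them from cache.
class Receiver:
    def __init__(self, total_fragments):
        self.total = total_fragments
        self.received = {}

    def on_fragment(self, frag_id, payload):
        self.received[frag_id] = payload

    def feedback(self):
        gaps = [i for i in range(self.total) if i not in self.received]
        # A single NACK lists all missing fragments; an ACK confirms completion.
        return ("NACK", gaps) if gaps else ("ACK", [])

class CachingSender:
    def __init__(self, fragments):
        self.cache = dict(enumerate(fragments))   # cache kept for retransmissions

    def handle_feedback(self, kind, frag_ids):
        if kind == "NACK":
            return [(i, self.cache[i]) for i in frag_ids]  # retransmit only the gaps
        return []                                          # ACK: nothing left to send

sender = CachingSender([b"frag%d" % i for i in range(5)])
rx = Receiver(total_fragments=5)
for i in (0, 1, 3):                      # fragments 2 and 4 are lost in this example
    rx.on_fragment(i, sender.cache[i])
kind, gaps = rx.feedback()
for i, payload in sender.handle_feedback(kind, gaps):
    rx.on_fragment(i, payload)
print(rx.feedback())                     # ('ACK', [])
```

The point of the scheme is that only the gaps travel back upstream, so the number of retransmissions stays small even for bulky code-update traffic.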
Abstract:
The widespread deployment of wireless mobile communications enables an almost permanent usage of portable devices, which imposes high demands on the battery of these devices. Indeed, battery lifetime is becoming one of the most critical factors for end-user satisfaction when using wireless communications. In this work, the Optimized Power save Algorithm for continuous Media Applications (OPAMA) is proposed, aiming at enhancing the energy efficiency of end-user devices. By combining application-specific requirements with data aggregation techniques, OPAMA improves the performance of the standard IEEE 802.11 legacy Power Save Mode (PSM). The algorithm uses feedback on the expected end-user quality to establish a proper tradeoff between energy consumption and application performance. OPAMA was assessed in the OMNeT++ simulator, using real traces of variable bitrate video streaming applications, and in a real testbed employing a novel methodology intended to perform an accurate evaluation of the video Quality of Experience (QoE) perceived by the end-users. The results revealed OPAMA's capability to enhance energy efficiency without degrading the end-user observed QoE, achieving savings of up to 44% when compared with the IEEE 802.11 legacy PSM.
Abstract:
During the last decade, wireless mobile communications have progressively become part of people's daily lives, leading users to expect to be "always best connected" to the Internet, regardless of their location or time of day. This is motivated by the fact that wireless access networks are increasingly ubiquitous, through different types of service providers, together with an outburst of thoroughly portable devices, namely laptops, tablets, and mobile phones, among others. The "anytime and anywhere" connectivity criterion raises new challenges regarding the management of devices' battery lifetime, as energy becomes the most noteworthy constraint on end-users' satisfaction. This wireless access context has also stimulated the development of novel multimedia applications with high network demands, although lacking in energy-aware design. Therefore, the relationship between energy consumption and the quality of the multimedia applications perceived by end-users should be carefully investigated. This dissertation addresses energy-efficient multimedia communications in the IEEE 802.11 standard, which is the most widely used wireless access technology. It advances the literature by proposing a unique empirical assessment methodology and new power-saving algorithms, always bearing in mind the end-users' feedback and evaluating quality perception. The new EViTEQ framework proposed in this thesis, for measuring video transmission quality and energy consumption simultaneously and in an integrated way, reveals the importance of having an empirical and high-accuracy methodology to assess the trade-off between quality and energy consumption raised by the new end-users' requirements. Extensive evaluations conducted with the EViTEQ framework revealed its flexibility and capability to accurately report both video transmission quality and energy consumption, as well as to be employed in rigorous investigations of network interface energy consumption patterns, regardless of the wireless access technology. Following the need to enhance the trade-off between energy consumption and application quality, this thesis proposes the Optimized Power save Algorithm for continuous Media Applications (OPAMA). By using the end-users' feedback to establish a proper trade-off between energy consumption and application performance, OPAMA aims at enhancing the energy efficiency of end-users' devices accessing the network through IEEE 802.11. OPAMA's performance has been thoroughly analyzed within different scenarios and application types, including a simulation study and a real deployment in an Android testbed. When compared with the most popular standard power-saving mechanisms defined in the IEEE 802.11 standard, the obtained results revealed OPAMA's capability to enhance energy efficiency while keeping end-users' Quality of Experience within the defined bounds. Furthermore, OPAMA was optimized to enable superior energy savings in multiple-station environments, resulting in a new proposal called Enhanced Power Saving Mechanism for Multiple station Environments (OPAMA-EPS4ME). The results of this thesis highlight the relevance of having a highly accurate methodology to assess energy consumption and application quality when aiming to optimize the trade-off between energy and quality. Additionally, the results obtained from both simulation and testbed evaluations show clear benefits of employing user-driven power-saving techniques, such as OPAMA, instead of the IEEE 802.11 standard power-saving approaches.
Abstract:
The widespread use of wireless-enabled devices and the increasing capabilities of wireless technologies have promoted multimedia content access and sharing among users. However, the quality perceived by the users still depends on multiple factors, such as video characteristics, device capabilities, and link quality. While video characteristics include the temporal and spatial complexity of the video as well as the coding complexity, one of the most important device characteristics is the battery lifetime. There is a need to assess how these aspects interact and how they impact the overall user satisfaction. This paper advances previous works by proposing and validating a flexible framework, named EViTEQ, to be applied in real testbeds to satisfy the requirements of performance assessment. EViTEQ is able to measure network interface energy consumption with high precision, while being completely technology independent and assessing the application-level quality of experience. The results obtained in the testbed show the relevance of combined multi-criteria measurement approaches, leading to a superior evaluation of perceived end-user satisfaction.
Abstract:
An analysis was made of the composition and content of nutrients, salts, particulate and dissolved organic matter, and various plankton groups in a series of samples collected with a 140-liter sampling bottle at depths down to 150 m at 4 equatorial stations between 97° and 154°W. Large and small phytoplankton, bacteria (aggregated and dispersed), heterotrophic flagellates, infusorians, radiolarians, foraminifers, fine filter-feeders, small and large (mostly herbivorous) copepods, cyclopoids, predatory calanoids, and other predators were investigated separately. Trophic relations between these elements are established from personal and published data, and their metabolic rates and some other physiological parameters are determined. Functional characteristics such as the extent to which the food requirements of organisms belonging to various trophic groups are satisfied, the intensity of trophic relations, the balance between production and consumption by individual elements of the community, ecological efficiency, and the net and specific production of the distinguished groups, of individual trophic levels, of total zooplankton, and of the community as a whole are calculated. Variations of these characteristics along the equator with decreasing upwelling intensity are examined, and their possible causes and mechanisms are discussed.
Abstract:
Modern FPGAs with run-time reconfiguration allow the implementation of complex systems offering the flexibility of software-based solutions combined with the performance of hardware. This combination of characteristics, together with the development of new specific methodologies, makes it feasible to reach new points of the system design space, and embedded systems built on these platforms are acquiring more and more importance. However, the practical exploitation of this technique in fields that have traditionally relied on resource-restricted embedded systems is mainly limited by strict power consumption requirements, by cost, and by the strong dependence of dynamic partial reconfiguration (DPR) techniques on the specific features of the underlying device technology. In this work, we tackle the previously reported problems by designing a reconfigurable platform based on the low-cost and low-power Spartan-6 FPGA family. The full process of developing the platform from scratch is detailed in the paper. In addition, the implementation of the reconfiguration mechanism, including two profiles, is reported. The first profile is a low-area and low-speed reconfiguration engine based mainly on software functions running on the embedded processor, while the other is a hardware version of the same engine, implemented in the FPGA logic. This reconfiguration hardware block was originally designed for the Virtex-5 family, and its porting process is also described in this work, addressing the interoperability problem among different families.