824 results for Energy Management Applications
Abstract:
The evolution of Next Generation Networks, especially wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of "all-IP" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, where end-users expect services to be accessible anytime and anywhere. Service availability also depends on the end-user device, where one of the major constraints is battery lifetime. It is therefore necessary to assess and minimize the energy consumed by end-user devices, given its significance for the user-perceived quality of cloud computing services. In this paper, an empirical methodology to measure the energy consumption of network interfaces is proposed. Employing this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results show the impact of accurate network interface state management and application-level network design on energy consumption. Additionally, the outcomes can be used in further software-based models to optimize energy consumption and increase the Quality of Experience (QoE) perceived by end-users.
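As a back-of-the-envelope illustration of what such an empirical measurement methodology computes, the sketch below integrates sampled voltage and current into consumed energy. The trace format and values are illustrative assumptions, not the paper's actual instrumentation.

```python
# Energy as the time integral of instantaneous power: E = sum(V * I * dt).
# The trace format (timestamp in seconds, volts, amps) is an assumption
# made for illustration; the paper's instrumentation is not specified here.

def energy_joules(samples):
    """samples: list of (t, volts, amps) tuples sorted by time t."""
    energy = 0.0
    for (t0, v0, i0), (t1, _, _) in zip(samples, samples[1:]):
        energy += v0 * i0 * (t1 - t0)  # left Riemann sum over each interval
    return energy

# Hypothetical 1 kHz trace of a wireless interface leaving active mode:
trace = [(0.000, 3.7, 0.250), (0.001, 3.7, 0.252), (0.002, 3.7, 0.080)]
print(f"{energy_joules(trace) * 1e3:.3f} mJ")
```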
Abstract:
Metals price risk management is a key issue in metal markets because of the uncertainty of commodity price fluctuations, exchange rates and interest rate changes, and the huge price risk borne by both metals producers and consumers. It is therefore a concern for all participants in metal markets, including producers, consumers, merchants, banks, investment funds, speculators and traders. Managing price risk provides stable income for both metals producers and consumers, increasing the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have existed in some form for centuries, their growth has accelerated rapidly during the last 20 years, and they are now widely used by financial institutions, corporations, professional investors and individuals. This project focuses on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options. The first part of the project describes basic derivatives and risk management strategies, and discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in derivative markets. The second part considers the valuation of commodity derivatives. In this part, the options pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare theoretical option values with their observed market values. Predicting future trends of copper prices is essential to managing market price risk successfully. Therefore, the third part discusses econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part shows how LME copper prices can be explained by means of a simultaneous-equation structural model (two-stage least squares regression) connecting supply and demand variables. A simultaneous econometric model for the copper industry is built:

\[
\begin{cases}
Q_t^{D} = e^{-5.0485}\, P_{t-1}^{-0.1868}\, GDP_t^{1.7151}\, e^{0.0158\, IP_t} \\
Q_t^{S} = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{OIL(t)}^{-0.1559}\, USDI_t^{1.2432}\, LIBOR_{t-6}^{-0.0561} \\
Q_t^{D} = Q_t^{S}
\end{cases}
\]

which yields the reduced-form price equation

\[
P_{t-1}^{CU} = e^{-2.5165}\, GDP_t^{2.1910}\, e^{0.0202\, IP_t}\, T_t^{-0.1799}\, P_{OIL(t)}^{0.1991}\, USDI_t^{-1.5881}\, LIBOR_{t-6}^{0.0717}
\]

where Q_t^D and Q_t^S are the world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, representing aggregate economic activity. In addition, industrial production should be considered, so global industrial production growth, denoted IP_t, is included in the model.
T_t is the time variable, a useful proxy for technological change. The price of oil at time t, denoted P_{OIL(t)}, is a proxy for the cost of energy in producing copper. USDI_t is the U.S. dollar index at time t, an important variable for explaining copper supply and copper prices. Finally, LIBOR_{t-6} is the 6-month-lagged 1-year London Interbank Offered Rate. Although the model can be applied to other base metals' industries, omitted exogenous variables, such as the price of a substitute or a combined variable related to the prices of substitutes, have not been considered in this study. Based on this econometric model and using a Monte-Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will be greater than a specific strike price of an option are estimated. The final part evaluates risk management strategies, including options strategies, metal swaps and simple options, in relation to the simulation results. Basic options strategies such as bull spreads, bear spreads and butterfly spreads, created using both call and put options in 2006 and 2007, are evaluated. Each risk management strategy in 2006 and 2007 is then analyzed on the basis of the day's data and the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
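The sketch below illustrates, under stated assumptions, how such a Monte-Carlo analysis can turn the reduced-form price equation into an option-exceedance probability. The coefficients are the ones quoted in the abstract; the input levels and the lognormal error sigma are illustrative assumptions, not the study's data.

```python
# Monte-Carlo sketch of the probability that the copper price exceeds an
# option strike, using the reduced-form price equation quoted above. The
# coefficients come from the abstract; input levels and the lognormal
# error sigma are illustrative assumptions, not the study's data.
import math
import random

def price_cu(gdp, ip, t, p_oil, usdi, libor):
    return (math.exp(-2.5165) * gdp ** 2.1910 * math.exp(0.0202 * ip)
            * t ** -0.1799 * p_oil ** 0.1991 * usdi ** -1.5881
            * libor ** 0.0717)

def prob_above_strike(strike, n=100_000, sigma=0.08, **inputs):
    base = price_cu(**inputs)
    hits = sum(base * math.exp(random.gauss(0.0, sigma)) > strike
               for _ in range(n))
    return hits / n

# Hypothetical input levels for one forecast month:
inputs = dict(gdp=44.0, ip=4.0, t=36.0, p_oil=60.0, usdi=90.0, libor=5.0)
base = price_cu(**inputs)
print(f"point forecast: {base:.2f}")
print(f"P(price > 5% above forecast) = {prob_above_strike(1.05 * base, **inputs):.3f}")
```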
Abstract:
The demands on production, and the associated costs of power generation from non-renewable resources, are increasing at an alarming rate. Solar energy is one renewable resource with the potential to minimize this increase. To date, the utilization of solar energy has been concentrated mainly on heating applications. Using solar energy for cooling systems in buildings would greatly benefit the goal of minimizing non-renewable energy use. The approaches of solar energy heating system research conducted by institutions such as the University of Wisconsin-Madison, and the building heat flow model research conducted by Oklahoma State University, can be used to develop and optimize solar cooling building systems. This research uses the two approaches to develop Graphical User Interface (GUI) software for an integrated solar absorption cooling building model, capable of simulating and optimizing an absorption cooling system that uses solar energy as the main energy source driving the cycle. The software was then put through a number of litmus tests to verify its integrity. The litmus tests were conducted on building cooling system data sets from similar applications around the world, and the outputs obtained from the software were identical to the established experimental results from those data sets. Software developed by other research efforts caters to advanced users; the software developed in this research is not only reliable in its code integrity but also, through its integrated approach, suited to new users. Hence, this dissertation aims to correctly model a complete building with an absorption cooling system in an appropriate climate as a cost-effective alternative to the conventional vapor compression system.
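As a minimal illustration of the quantity such a simulation ultimately reports, the sketch below computes the thermal coefficient of performance (COP) of a solar-driven absorption chiller. All numbers are assumed, not taken from the dissertation.

```python
# Thermal COP of an absorption chiller: cooling delivered at the evaporator
# divided by the driving heat supplied to the generator (solar heat here).
# All numbers are illustrative assumptions, not results from the dissertation.

q_generator = 14.0   # kW of solar heat driving the generator
q_evaporator = 9.8   # kW of cooling effect produced

cop_thermal = q_evaporator / q_generator
print(f"Thermal COP = {cop_thermal:.2f}")  # ~0.7, typical of single-effect LiBr-H2O chillers
```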
Abstract:
Carbon nanotubes (CNTs) are interesting materials with extraordinary properties for various applications. Here, vertically-aligned multiwalled CNTs (VA-MWCNTs) are grown by our dual radio frequency plasma-enhanced chemical vapor deposition (PECVD). After optimizing the synthesis process, these VA-MWCNTs were fabricated into a series of devices for applications in vacuum electronics, glucose biosensors, glucose biofuel cells, and supercapacitors. In particular, we have created so-called PMMA-CNT matrices (opened-tip CNTs embedded in poly-methyl methacrylate) that are promising components in a novel energy sensing, generation and storage (SGS) system integrating glucose biosensors, biofuel cells, and supercapacitors. The content of this thesis work is described as follows: 1. We first optimized the synthesis of VA-MWCNTs by our PECVD technique. The effects of CH4 flow rate and growth duration on the lengths of these CNTs were studied. 2. We characterized these VA-MWCNTs for electron field emission. We noticed that as-grown CNTs suffer from a high emission threshold, poor emission density and poor long-term stability, and we carried out a series of experiments to understand ways to overcome these problems. First, we decreased the screening effect on VA-MWCNTs by creating arrays of self-assembled CNT bundles that are catalyst-free and have opened tips. These bundles were found to enhance the field emission stability and emission density. Subsequently, we created PMMA-CNT matrices, which are excellent electron field emitters with an emission threshold field more than two-fold lower than that of the as-grown sample. Furthermore, no significant emission degradation was observed after a continuous emission test of 40 hours (versus the much shorter tests reported in the literature). Based on the understanding gained from the PMMA-CNT matrices, we further created PMMA-STO-CNT matrices by embedding opened-tip VA-MWCNTs coated with strontium titanate (SrTiO3) in PMMA. We found that the PMMA-STO-CNT matrices have all the desired properties of the PMMA-CNT matrices while offering a much lower emission threshold field, about five-fold lower than that of as-grown VA-MWCNTs. These new understandings are important for the practical application of VA-MWCNTs in field emission devices. 3. Subsequently, we functionalized PMMA-CNT matrices for glucose biosensing. Our biosensor was developed by immobilizing glucose oxidase (GOx) on the opened-tip CNTs exposed on the matrices. The durability, stability and sensitivity of the biosensor were studied. To understand the performance of miniaturized glucose biosensors, we then investigated the effect of the working electrode area on the sensitivity and current level of our biosensors. 4. Next, functionalized PMMA-CNT matrices were utilized for energy generation and storage. We found that PMMA-CNT matrices are promising components in glucose/O2 biofuel cells (BFCs) for energy generation. The construction of these BFCs and the effect of the electrode area on their power density were investigated. We then used PMMA-CNT matrices as supercapacitors for energy storage; the performance of these supercapacitors and ways to enhance it are discussed. 5. Finally, we evaluated the concept of an energy SGS system integrating glucose biosensors, biofuel cells, and supercapacitors. Such an SGS system may be implantable, to monitor and control the blood glucose level in the body.
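A brief sketch of the physics behind the reported threshold-field reductions: in the standard Fowler-Nordheim picture, a larger field-enhancement factor at sharp, opened CNT tips produces emission at a lower applied field. The work function and enhancement factors below are illustrative assumptions.

```python
# Fowler-Nordheim sketch: emission current density as a function of the
# applied (macroscopic) field and a field-enhancement factor beta. Constants
# are the textbook FN values; the work function (phi, in eV) and the beta
# values are illustrative assumptions.
import math

A_FN = 1.54e-6   # A * eV / V^2
B_FN = 6.83e9    # V / (m * eV^1.5)

def j_fn(e_applied, beta, phi=5.0):
    """Current density (A/m^2) for applied field e_applied (V/m)."""
    e_local = beta * e_applied       # local field at the emitter tip
    return (A_FN * e_local ** 2 / phi) * math.exp(-B_FN * phi ** 1.5 / e_local)

e_applied = 5e6  # 5 V/um macroscopic field
for beta in (500, 1000, 3000):       # sharper/opened tips -> larger beta
    print(f"beta={beta}: J = {j_fn(e_applied, beta):.2e} A/m^2")
```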
Abstract:
ZnO has proven to be a multifunctional material with important nanotechnological applications. ZnO nanostructures can be grown in various forms such as nanowires, nanorods, nanobelts and nanocombs. In this work, ZnO nanostructures are grown in a double quartz tube configuration thermal Chemical Vapor Deposition (CVD) system. We focus on functionalizing ZnO nanostructures by controlling their structures and tuning their properties for various applications. The following topics have been investigated: 1. We fabricated various ZnO nanostructures using a thermal CVD technique. The growth parameters were optimized and studied for the different nanostructures. 2. We studied the application of ZnO nanowires (ZnO NWs) in field effect transistors (FETs). Unintentional n-type conductivity was observed in our FETs based on as-grown ZnO NWs. We then showed, for the first time, that controlled incorporation of hydrogen into ZnO NWs can introduce p-type character to the nanowires. We further found that the n-type behavior remained, leading to ambipolar behavior of the hydrogen-incorporated ZnO NWs. Importantly, the detected p- and n-type behaviors are stable for longer than two years when the devices are kept in ambient conditions. All of this can be explained by an ab initio model of zinc vacancy-hydrogen complexes, which can serve as donors, acceptors, or green-photoluminescence quenchers, depending on the number of hydrogen atoms involved. 3. Next, ZnO NWs were tested for electron field emission, focusing on reducing the threshold field (Eth) of field emission from non-aligned ZnO NWs. Encouraged by our results on enhancing the conductivity of ZnO NWs by hydrogen annealing, described in Chapter 3, we studied the effect of hydrogen annealing on the field emission behavior of our ZnO NWs. We found that optimally annealed ZnO NWs offered a much lower threshold electric field and improved emission stability. We also studied field emission from ZnO NWs at moderate vacuum levels and found that there exists a minimum Eth as the threshold field scales with pressure; this behavior is explained by reference to Paschen's law. 4. We studied the application of ZnO nanostructures to solar energy harvesting. First, as-grown and (CdSe)ZnS QD-decorated ZnO NBs and ZnO NWs were tested for photocurrent generation. All of these nanostructures offered fast response times to solar radiation. Decoration with QDs decreases the stable current level produced by ZnO NWs but increases that generated by NBs, possibly because NBs offer more stable surfaces for the attachment of QDs. In addition, our results suggest that the performance degradation of solar cells made by growing ZnO NWs on ITO is due to the increase in the resistance of the ITO after the high-temperature growth process. Hydrogen annealing also improves the efficiency of these solar cells by decreasing the resistance of the ITO. Because of the issues with ITO, we used Ni foil as the growth substrate. The performance of solar cells made by growing ZnO NWs on Ni foils degraded after hydrogen annealing at both low (300 °C) and high (600 °C) temperatures, since annealing passivates native defects in ZnO NWs and thus reduces the absorption of the visible spectrum from our solar simulator. Decoration with QDs improves the efficiency of such solar cells by increasing the absorption of light in the visible region. Using a better electrolyte than phosphate buffer solution (PBS), such as KI, also improves the solar cell efficiency.
5. Finally, we attempted p-type doping of ZnO NWs using various growth precursors, including phosphorus pentoxide, sodium fluoride, and zinc fluoride. We also attempted to create p-type carriers by introducing interstitial fluorine through annealing ZnO nanostructures in diluted fluorine gas. In brief, we were unable to reproduce the reported growth of p-type ZnO nanostructures. However, we identified the window of temperature and duration for post-growth annealing of ZnO NWs in dilute fluorine gas that leads to suppression of native defects. This is the first experimental effort on post-growth annealing of ZnO NWs in dilute fluorine gas, although it has been suggested by a recent theory for creating p-type semiconductors. In our experiments, the defect-band peak due to native defects is found to decrease after annealing at 300 °C for 10-30 minutes. One major piece of future work will be to determine the type of charge carriers in our annealed ZnO NWs.
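As a rough illustration of the Paschen-law behavior invoked above for field emission at moderate vacuum, the sketch below evaluates the classic breakdown-voltage curve, which has a minimum as the pressure-gap product varies. The gas constants are textbook values for air and the secondary-emission coefficient is an assumed typical value; none of this reproduces the thesis's measurements.

```python
# Paschen-law sketch: breakdown voltage as a function of the pressure-gap
# product p*d, showing the characteristic minimum. Constants A and B are
# classic textbook values for air; gamma (secondary-emission coefficient)
# is an assumed typical value. Valid only for p*d > ln(1 + 1/gamma) / A.
import math

A = 15.0     # ionisation constant, 1/(Torr*cm), air
B = 365.0    # V/(Torr*cm), air
GAMMA = 0.01

def v_breakdown(pd_torr_cm):
    return (B * pd_torr_cm
            / (math.log(A * pd_torr_cm) - math.log(math.log(1 + 1 / GAMMA))))

for pd in (0.5, 0.84, 2.0, 5.0):     # minimum near pd ~ 0.84 Torr*cm
    print(f"p*d = {pd:5.2f} Torr*cm -> V_breakdown = {v_breakdown(pd):4.0f} V")
```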
Abstract:
Two technical solutions, using either a single or a dual shot, offer different advantages and disadvantages for dual-energy subtraction. Their principles are explained, and the main clinical applications are demonstrated with results. Elimination of overlying bone and proof or exclusion of calcification are the primary aims of energy-subtraction chest radiography, which offers unique information in various clinical situations.
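A minimal sketch of the dual-shot principle, assuming the standard weighted log-subtraction formulation: two exposures at different tube voltages are combined so that the bone signal cancels. The weighting factor here is an illustrative assumption; in practice it is calibrated to the two spectra.

```python
# Weighted log-subtraction for dual-energy chest radiography: combining the
# low- and high-kVp exposures in log space cancels the bone signal, leaving
# soft tissue. The weighting factor w is an illustrative assumption; in
# practice it is calibrated to the two X-ray spectra.
import numpy as np

def soft_tissue_image(i_low, i_high, w=0.5):
    """i_low, i_high: detector intensity arrays from the two exposures."""
    return np.exp(np.log(i_high) - w * np.log(i_low))  # bone suppressed

i_low = np.array([[0.30, 0.60], [0.55, 0.20]])    # toy 2x2 "images"
i_high = np.array([[0.45, 0.70], [0.66, 0.35]])
print(soft_tissue_image(i_low, i_high))
```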
Abstract:
This paper proposes the Optimized Power save Algorithm for continuous Media Applications (OPAMA) to improve end-user device energy efficiency. OPAMA enhances the legacy Power Save Mode (PSM) of IEEE 802.11 by taking into account application-specific requirements combined with data aggregation techniques. By establishing a balanced cost/benefit tradeoff between performance and energy consumption, OPAMA is able to improve energy efficiency while keeping the end-user experience at the desired level. OPAMA was assessed in the OMNeT++ simulator using real traces of variable bitrate video streaming applications. The results showed its capability to enhance energy efficiency, achieving savings of up to 44% compared with the IEEE 802.11 legacy PSM.
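A minimal sketch of the aggregation idea OPAMA builds on, with names and thresholds that are our own illustrative assumptions rather than the paper's specification: the interface stays asleep while buffered traffic can still meet the application's delay requirement.

```python
# Sketch of an aggregation-aware power-save decision: the client stays
# asleep while buffered data can still meet the application's delay budget,
# trading wake-ups (energy) against latency. Parameter names and values are
# illustrative assumptions, not OPAMA's actual specification.

def should_wake(buffered_bytes, oldest_frame_age_ms,
                delay_budget_ms=200, aggregation_threshold=32_000):
    """Wake the Wi-Fi interface only when staying asleep would either
    violate the application delay budget or overflow the aggregation buffer."""
    return (oldest_frame_age_ms >= delay_budget_ms
            or buffered_bytes >= aggregation_threshold)

print(should_wake(buffered_bytes=8_000, oldest_frame_age_ms=120))  # False: keep sleeping
print(should_wake(buffered_bytes=8_000, oldest_frame_age_ms=210))  # True: budget exceeded
```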
Abstract:
The Internet of Things (IoT) is attracting considerable attention from universities, industry, citizens and governments for applications such as healthcare, environmental monitoring and smart buildings. IoT enables network connectivity between smart devices at all times, everywhere, and about everything. In this context, Wireless Sensor Networks (WSNs) play an important role in increasing the ubiquity of networks with smart devices that are low-cost and easy to deploy. However, sensor nodes are restricted in terms of energy, processing and memory, and low-power radios are very sensitive to noise, interference and multipath distortion. This article therefore proposes Routing by Energy and Link quality (REL), a routing protocol for IoT applications. To increase reliability and energy efficiency, REL selects routes on the basis of a proposed end-to-end link quality estimator mechanism, residual energy and hop count. Furthermore, REL includes an event-driven mechanism to provide load balancing and avoid the premature energy depletion of nodes and the network. Performance evaluations were carried out using simulation and testbed experiments to show the impact and benefits of REL in small and large-scale networks. The results show that REL increases the network lifetime and service availability, as well as the quality of service of IoT applications, and that it provides an even distribution of scarce network resources and reduces the packet loss rate compared with well-known protocols.
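A minimal sketch of a REL-style route choice combining the three inputs the abstract names. The weighting scheme is an illustrative assumption, not REL's actual decision rule.

```python
# Route scoring from end-to-end link quality, residual energy and hop count.
# The end-to-end quality is bounded by the weakest link; residual energy is
# the minimum over the route's nodes. Weights are illustrative assumptions.

def route_score(links_quality, residual_energy, hops,
                w_quality=0.5, w_energy=0.4, w_hops=0.1):
    """links_quality: per-hop quality estimates in [0, 1];
    residual_energy: min over the route's nodes, in [0, 1]."""
    e2e_quality = min(links_quality)
    return (w_quality * e2e_quality + w_energy * residual_energy
            + w_hops / hops)

routes = {
    "A": route_score([0.9, 0.8, 0.9], residual_energy=0.3, hops=3),
    "B": route_score([0.7, 0.75], residual_energy=0.9, hops=2),
}
print(max(routes, key=routes.get))  # B: fresher nodes and fewer hops win
```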
Abstract:
The widespread deployment of wireless mobile communications enables an almost permanent usage of portable devices, which imposes high demands on the battery of these devices. Indeed, battery lifetime is becoming one of the most critical factors in end-user satisfaction with wireless communications. In this work, the Optimized Power save Algorithm for continuous Media Applications (OPAMA) is proposed, aiming at enhancing the energy efficiency of end-user devices. By combining application-specific requirements with data aggregation techniques, OPAMA improves on the performance of the standard IEEE 802.11 legacy Power Save Mode (PSM). The algorithm uses feedback on the expected end-user quality to establish a proper tradeoff between energy consumption and application performance. OPAMA was assessed in the OMNeT++ simulator, using real traces of variable bitrate video streaming applications, and in a real testbed employing a novel methodology designed to accurately evaluate the video Quality of Experience (QoE) perceived by end-users. The results revealed OPAMA's capability to enhance energy efficiency without degrading the end-user observed QoE, achieving savings of up to 44% when compared with the IEEE 802.11 legacy PSM.
Abstract:
Cloud Computing enables provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges in maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system (CMS), increasing the efficiency of resource allocation as well as that of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications, and for using these relations to build scaling rules that a CMS can apply for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All of the presented research was implemented and tested using enterprise distributed applications.
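A minimal sketch of the multi-objective flavor of such VM allocation, under assumed criteria and weights (the dissertation's actual algorithm is not reproduced here): each feasible host is scored on several objectives and the best weighted sum wins.

```python
# Multi-objective placement sketch: each feasible host is scored on bin-
# packing fit and idle-power cost, then the best weighted sum is chosen.
# The criteria, weights and normalised host data are illustrative
# assumptions, not the dissertation's algorithm.

def placement_score(cpu_free, mem_free, vm_cpu, vm_mem, idle_power,
                    w_fit=0.7, w_energy=0.3):
    if vm_cpu > cpu_free or vm_mem > mem_free:
        return float("-inf")              # infeasible placement
    fit = 1.0 - max(cpu_free - vm_cpu, mem_free - vm_mem)  # tighter is better
    energy = 1.0 - idle_power             # prefer hosts with low idle power
    return w_fit * fit + w_energy * energy

hosts = {                 # free CPU, free RAM, idle power (all in [0, 1])
    "host-1": (0.6, 0.5, 0.8),
    "host-2": (0.4, 0.4, 0.3),
}
vm_cpu, vm_mem = 0.3, 0.3
best = max(hosts, key=lambda h: placement_score(*hosts[h][:2], vm_cpu, vm_mem,
                                                hosts[h][2]))
print(best)  # host-2: tight fit and low idle power
```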
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require the management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimal amount of computing and network resources for ensuring that the performance requirements of all of his or her applications are met, and in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure provider is interested in optimally provisioning the virtual resources onto the available physical infrastructure so that operational costs are minimized while the performance of tenants' applications is maximized. Motivated by the complexities associated with the management and scaling of distributed applications while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on the dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to a distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces, and we show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We further provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it, and we present a resource management system based on a genetic algorithm that allocates virtual resources while optimizing multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
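A minimal sketch of an SLA-driven scaling rule of the kind described above: the number of service VMs is adjusted so that a performance indicator stays within its SLA bound. Thresholds and step sizes are illustrative assumptions, not the thesis's algorithms.

```python
# SLA-driven horizontal scaling sketch: scale out when the indicator breaches
# the SLA limit, scale in when it sits comfortably below. Thresholds, step
# sizes and bounds are illustrative assumptions.

def scale_decision(current_vms, avg_response_ms, sla_limit_ms,
                   headroom=0.8, min_vms=1, max_vms=20):
    if avg_response_ms > sla_limit_ms:                  # SLA violated: scale out
        return min(current_vms + 1, max_vms)
    if avg_response_ms < headroom * sla_limit_ms and current_vms > min_vms:
        return current_vms - 1                          # well within SLA: scale in
    return current_vms                                  # hold steady

print(scale_decision(current_vms=4, avg_response_ms=260, sla_limit_ms=250))  # 5
print(scale_decision(current_vms=4, avg_response_ms=150, sla_limit_ms=250))  # 3
```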
Abstract:
During the last decade, wireless mobile communications have progressively become part of people's daily lives, leading users to expect to be "always-best-connected" to the Internet, regardless of their location or the time of day. This is motivated by the fact that wireless access networks are increasingly ubiquitous, through different types of service providers, together with an outburst of thoroughly portable devices, namely laptops, tablets, and mobile phones, among others. The "anytime and anywhere" connectivity criterion raises new challenges regarding the management of devices' battery lifetime, as energy becomes the most noteworthy constraint on end-user satisfaction. This wireless access context has also stimulated the development of novel multimedia applications with high network demands, although often lacking energy-aware design. Therefore, the relationship between energy consumption and the quality of multimedia applications perceived by end-users should be carefully investigated. This dissertation addresses energy-efficient multimedia communications in the IEEE 802.11 standard, the most widely used wireless access technology. It advances the literature by proposing a unique empirical assessment methodology and new power-saving algorithms, always bearing in mind the end-users' feedback and evaluating quality perception. The new EViTEQ framework proposed in this thesis, for measuring video transmission quality and energy consumption simultaneously, in an integrated way, reveals the importance of having an empirical, high-accuracy methodology to assess the trade-off between quality and energy consumption raised by new end-user requirements. Extensive evaluations conducted with the EViTEQ framework revealed its flexibility and its capability to accurately report both video transmission quality and energy consumption, as well as to be employed in rigorous investigations of network interface energy consumption patterns, regardless of the wireless access technology. Following the need to enhance the trade-off between energy consumption and application quality, this thesis proposes the Optimized Power save Algorithm for continuous Media Applications (OPAMA). By using end-user feedback to establish a proper trade-off between energy consumption and application performance, OPAMA aims at enhancing the energy efficiency of end-user devices accessing the network through IEEE 802.11. OPAMA's performance has been thoroughly analyzed in different scenarios and with different application types, including a simulation study and a real deployment in an Android testbed. When compared with the most popular standard power-saving mechanisms defined in the IEEE 802.11 standard, the results revealed OPAMA's capability to enhance energy efficiency while keeping end-user Quality of Experience within the defined bounds. Furthermore, OPAMA was optimized to enable superior energy savings in multiple-station environments, resulting in a new proposal called Enhanced Power Saving Mechanism for Multiple station Environments (OPAMA-EPS4ME). The results of this thesis highlight the relevance of a highly accurate methodology for assessing energy consumption and application quality when aiming to optimize the trade-off between energy and quality. Additionally, the results obtained from both simulation and testbed evaluations show clear benefits of employing user-driven power-saving techniques, such as OPAMA, instead of the IEEE 802.11 standard power-saving approaches.
Abstract:
Epilepsy is a very complex disease with a variety of etiologies, co-morbidities, and a long list of psychosocial factors. Clinical management of epilepsy patients typically includes serological tests, EEGs, and imaging studies to determine the single best antiepileptic drug (AED). Self-management is a vital component of achieving optimal health when living with a chronic disease. For patients with epilepsy, self-management includes any actions necessary to control seizures and cope with any subsequent effects of the condition, including aspects of treatment, seizures, and lifestyle. The use of computer-based applications can allow for more effective use of clinic visits and ultimately enhance the patient-provider relationship through focused discussion of the determinants affecting self-management. The purpose of this study is to conduct a systematic literature review on informatics applications in epilepsy self-management, in an effort to describe the current evidence for informatics applications and decision support as an adjunct to successful clinical management of epilepsy. Each publication was analyzed for the type of study design utilized. A total of 68 publications were included and categorized by study design, development stage, and clinical domain. Descriptive study designs comprised three-fourths of the publications, indicating an underwhelming use of prospective studies. The vast majority of prospective studies focused on clinician use to increase knowledge in treating patients with epilepsy. Due to the chronic nature of epilepsy and the difficulty that both clinicians and patients can experience in managing it, more prospective studies are needed to evaluate applications that can effectively increase management activities. Within the last two decades of epilepsy research, management studies have employed biomedical informatics applications. While the use of computer applications to manage epilepsy has increased, more progress is needed.