900 results for Dynamic Energy Budget
Abstract:
In this research we focus on energy-aware topology management for the Tyndall 25mm and 10mm nodes to extend sensor network lifespan and optimise node power consumption. The two-tiered Tyndall Heterogeneous Automated Wireless Sensors (THAWS) tool is used to quickly create and configure application-specific sensor networks. To this end, we propose to implement a distributed route discovery algorithm and a practical energy-aware reaction model on the 25mm nodes. Triggered by energy-warning events, the miniaturised Tyndall 10mm data-collector nodes adaptively and periodically change their association to 25mm base-station nodes, while the 25mm nodes also change the interconnections among themselves, which reconfigures the topology of the 25mm node tier. The distributed routing protocol uses combined weight functions to balance the sensor network traffic. A system-level simulation is used to quantify the power savings of the route management framework compared with other state-of-the-art approaches.
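A minimal sketch of what a combined weight function for energy-aware next-hop selection could look like, in the spirit of the distributed routing protocol described above; the weighting factors, node fields, and function names are illustrative assumptions, not the THAWS implementation.

```python
# Hypothetical combined weight for choosing the next hop: lower weight = more attractive.
# Residual energy enters inversely so that depleted nodes are avoided, balancing traffic.
def link_weight(residual_energy_j, hop_count_to_sink, queue_len,
                w_energy=0.5, w_hops=0.3, w_load=0.2):
    return (w_energy / max(residual_energy_j, 1e-6)
            + w_hops * hop_count_to_sink
            + w_load * queue_len)

candidates = {                      # neighbour -> (residual J, hops to sink, queued packets)
    "node_A": (120.0, 2, 3),
    "node_B": (40.0, 1, 1),
}
best = min(candidates, key=lambda name: link_weight(*candidates[name]))
print(best)  # node_B: fewer hops and a shorter queue outweigh its lower residual energy
```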
Abstract:
Flexible cylindrical structures subjected to wind loading experience vibrations from the periodic shedding of vortices in their wake. Vibrations become excessive when the natural frequencies of the cylinder coincide with the vortex shedding frequency. In this study, cylinder vibrations are transmitted to a beam inside the structure via a dynamic magnifier system. This system amplifies the strain experienced by piezoelectric patches bonded to the beam to maximize the conversion of vibrational energy into electrical energy. Real-world applicability is tested using a wind tunnel to create vortex shedding and comparing the results with finite element modeling that shows the structural vibrational modes. A crucial part of this study is conditioning and storing the harvested energy, with a focus on theoretical modeling, design parameter optimization, and experimental validation. The developed system is helpful in designing wind-induced energy harvesters to meet the need for novel energy resources.
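As a rough illustration of the lock-in condition mentioned above, the vortex shedding frequency can be estimated from the standard Strouhal relation f_s = St * U / D and compared against the cylinder's natural frequency; the Strouhal number, wind speed, diameter, and natural frequency below are assumed values, not the study's measurements.

```python
# Strouhal relation for a bluff cylinder: shedding frequency f_s = St * U / D.
def shedding_frequency(strouhal, wind_speed_m_s, diameter_m):
    return strouhal * wind_speed_m_s / diameter_m

f_shed = shedding_frequency(strouhal=0.2, wind_speed_m_s=5.0, diameter_m=0.05)  # assumed values
f_natural = 20.0   # assumed first natural frequency of the cylinder, Hz
# Lock-in (and hence large harvester output) is expected when the two frequencies are close.
print(f"shedding {f_shed:.1f} Hz, lock-in likely: {abs(f_shed - f_natural) / f_natural < 0.1}")
```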
Abstract:
Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and a methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates, consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil fuel combustion and cement production (EFF) are based on energy statistics and cement production data, respectively, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover-change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models forced by observed climate, CO2, and land-cover change (some including nitrogen–carbon interactions). We compare the mean land and ocean fluxes and their variability to estimates from three atmospheric inverse methods for three broad latitude bands. All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. For the last decade available (2004–2013), EFF was 8.9 ± 0.4 GtC yr−1, ELUC 0.9 ± 0.5 GtC yr−1, GATM 4.3 ± 0.1 GtC yr−1, SOCEAN 2.6 ± 0.5 GtC yr−1, and SLAND 2.9 ± 0.8 GtC yr−1. For year 2013 alone, EFF grew to 9.9 ± 0.5 GtC yr−1, 2.3% above 2012, continuing the growth trend in these emissions, ELUC was 0.9 ± 0.5 GtC yr−1, GATM was 5.4 ± 0.2 GtC yr−1, SOCEAN was 2.9 ± 0.5 GtC yr−1, and SLAND was 2.5 ± 0.9 GtC yr−1. GATM was high in 2013, reflecting a steady increase in EFF and smaller and opposite changes between SOCEAN and SLAND compared to the past decade (2004–2013). The global atmospheric CO2 concentration reached 395.31 ± 0.10 ppm averaged over 2013. We estimate that EFF will increase by 2.5% (1.3–3.5%) to 10.1 ± 0.6 GtC in 2014 (37.0 ± 2.2 GtCO2 yr−1), 65% above emissions in 1990, based on projections of world gross domestic product and recent changes in the carbon intensity of the global economy. From this projection of EFF and assumed constant ELUC for 2014, cumulative emissions of CO2 will reach about 545 ± 55 GtC (2000 ± 200 GtCO2) for 1870–2014, about 75% from EFF and 25% from ELUC. This paper documents changes in the methods and data sets used in this new carbon budget compared with previous publications of this living data set (Le Quéré et al., 2013, 2014). All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center (doi:10.3334/CDIAC/GCP_2014).
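As a quick arithmetic check of the budget closure described above, the residual land sink implied by the 2004–2013 decadal means can be recovered from the other terms (values taken directly from the abstract):

```python
# Budget closure with the 2004-2013 decadal means quoted above (all in GtC/yr):
# the residual land sink is S_LAND = E_FF + E_LUC - G_ATM - S_OCEAN.
E_FF, E_LUC, G_ATM, S_OCEAN = 8.9, 0.9, 4.3, 2.6
S_LAND = E_FF + E_LUC - G_ATM - S_OCEAN
print(round(S_LAND, 1))   # 2.9, matching the SLAND value reported in the abstract
```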
Abstract:
Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and a methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates as well as consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil fuels and industry (EFF) are based on energy statistics and cement production data, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover-change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models forced by observed climate, CO2, and land-cover change (some including nitrogen–carbon interactions). We compare the mean land and ocean fluxes and their variability to estimates from three atmospheric inverse methods for three broad latitude bands. All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. For the last decade available (2005–2014), EFF was 9.0 ± 0.5 GtC yr−1, ELUC was 0.9 ± 0.5 GtC yr−1, GATM was 4.4 ± 0.1 GtC yr−1, SOCEAN was 2.6 ± 0.5 GtC yr−1, and SLAND was 3.0 ± 0.8 GtC yr−1. For the year 2014 alone, EFF grew to 9.8 ± 0.5 GtC yr−1, 0.6 % above 2013, continuing the growth trend in these emissions, albeit at a slower rate compared to the average growth of 2.2 % yr−1 that took place during 2005–2014. Also, for 2014, ELUC was 1.1 ± 0.5 GtC yr−1, GATM was 3.9 ± 0.2 GtC yr−1, SOCEAN was 2.9 ± 0.5 GtC yr−1, and SLAND was 4.1 ± 0.9 GtC yr−1. GATM was lower in 2014 compared to the past decade (2005–2014), reflecting a larger SLAND for that year. The global atmospheric CO2 concentration reached 397.15 ± 0.10 ppm averaged over 2014. For 2015, preliminary data indicate that the growth in EFF will be near or slightly below zero, with a projection of −0.6 [range of −1.6 to +0.5] %, based on national emissions projections for China and the USA, and projections of gross domestic product corrected for recent changes in the carbon intensity of the global economy for the rest of the world. From this projection of EFF and assumed constant ELUC for 2015, cumulative emissions of CO2 will reach about 555 ± 55 GtC (2035 ± 205 GtCO2) for 1870–2015, about 75 % from EFF and 25 % from ELUC. This living data update documents changes in the methods and data sets used in this new carbon budget compared with previous publications of this data set (Le Quéré et al., 2015, 2014, 2013). 
All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center (doi:10.3334/CDIAC/GCP_2015).
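A small sketch of the arithmetic behind the 2015 projection quoted above, applying the projected growth rate and its range to the 2014 estimate of EFF; the figures are taken from the abstract and the rounding is purely illustrative.

```python
# Apply the projected 2015 growth of EFF (-0.6 %, range -1.6 % to +0.5 %) to the
# 2014 estimate of 9.8 GtC/yr quoted in the abstract above.
E_FF_2014 = 9.8                                  # GtC/yr
growth = {"central": -0.006, "low": -0.016, "high": 0.005}
projection = {k: E_FF_2014 * (1 + g) for k, g in growth.items()}
print({k: round(v, 2) for k, v in projection.items()})
# {'central': 9.74, 'low': 9.64, 'high': 9.85}
```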
Abstract:
Current variation-aware design methodologies, tuned for worst-case scenarios, are becoming increasingly pessimistic from the perspective of power and performance. A good example of such pessimism is setting the refresh rate of DRAMs according to the worst-case access statistics, thereby resulting in very frequent refresh cycles, which are responsible for the majority of the standby power consumption of these memories. However, such a high refresh rate may not be required, either due to the extremely low probability of the actual occurrence of such a worst case, or due to the inherent error-resilient nature of many applications that can tolerate a certain number of potential failures. In this paper, we exploit and quantify the possibilities that exist in dynamic memory design by shifting to the so-called approximate computing paradigm in order to save power and enhance yield at no cost. The statistical characteristics of the retention time in dynamic memories were revealed by studying a fabricated 2kb CMOS-compatible embedded DRAM (eDRAM) memory array based on gain cells. Measurements show that up to 73% of the retention power can be saved by altering the refresh time and setting it such that a small number of failures is allowed. We show that these savings can be further increased by utilizing known circuit techniques, such as body biasing, which can help not only extend but also favorably shape the retention time distribution. Our approach is one of the first attempts to assess the data-integrity and energy trade-offs achieved in eDRAMs for utilizing them in error-resilient applications and can prove helpful in the anticipated shift to approximate computing.
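A hypothetical sketch of the trade-off described above: if retention times follow some distribution, setting the refresh period at a quantile that tolerates a small failure rate, rather than at a near-worst-case quantile, cuts refresh power in proportion to the reduction in refresh frequency. The lognormal model and its parameters below are assumptions for illustration, not the measured data of the fabricated 2kb array.

```python
# Refresh-power saving from relaxing the eDRAM refresh period, assuming a lognormal
# retention-time distribution (refresh power taken as proportional to refresh frequency).
from statistics import NormalDist
import math

def refresh_power_saving(mu_log, sigma_log, worst_case_quantile, allowed_fail_rate):
    """Fractional power saving when the refresh period is set at the retention-time
    quantile 'allowed_fail_rate' instead of a near-worst-case quantile."""
    z = NormalDist()
    t_worst = math.exp(mu_log + sigma_log * z.inv_cdf(worst_case_quantile))
    t_relaxed = math.exp(mu_log + sigma_log * z.inv_cdf(allowed_fail_rate))
    return 1.0 - t_worst / t_relaxed   # P_new / P_old = t_worst / t_relaxed

# Assumed numbers: median retention 1 ms, log-domain spread 0.6
saving = refresh_power_saving(mu_log=math.log(1e-3), sigma_log=0.6,
                              worst_case_quantile=1e-6, allowed_fail_rate=1e-3)
print(f"refresh-power saving: {saving:.0%}")
```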
Abstract:
Doctoral thesis, Marine Sciences, Faculdade de Ciências do Mar e do Ambiente, Univ. do Algarve, 2003
Abstract:
This paper considers an overlapping generations model in which capital investment is financed in a credit market with adverse selection. Lenders' inability to commit ex ante not to bail out ex post, together with a wealthy position of entrepreneurs, gives rise to the soft budget constraint syndrome, i.e. the absence of regular liquidation of poorly performing firms. This problem arises endogenously as a result of the interaction among agents' economic behavior, without relying on political-economy explanations. We find the problem to be more binding along the business cycle, providing an explanation for creditors' leniency during booms in some Latin American countries in the late seventies and early nineties.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and a methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates, consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil-fuel combustion and cement production (EFF) are based on energy statistics, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated for the first time in this budget with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models forced by observed climate, CO2 and land cover change (some including nitrogen–carbon interactions). All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. For the last decade available (2003–2012), EFF was 8.6 ± 0.4 GtC yr−1, ELUC 0.9 ± 0.5 GtC yr−1, GATM 4.3 ± 0.1 GtC yr−1, SOCEAN 2.5 ± 0.5 GtC yr−1, and SLAND 2.8 ± 0.8 GtC yr−1. For year 2012 alone, EFF grew to 9.7 ± 0.5 GtC yr−1, 2.2 % above 2011, reflecting a continued growing trend in these emissions, GATM was 5.1 ± 0.2 GtC yr−1, SOCEAN was 2.9 ± 0.5 GtC yr−1, and assuming an ELUC of 1.0 ± 0.5 GtC yr−1 (based on the 2001–2010 average), SLAND was 2.7 ± 0.9 GtC yr−1. GATM was high in 2012 compared to the 2003–2012 average, almost entirely reflecting the high EFF. The global atmospheric CO2 concentration reached 392.52 ± 0.10 ppm averaged over 2012. We estimate that EFF will increase by 2.1 % (1.1–3.1 %) to 9.9 ± 0.5 GtC in 2013, 61 % above emissions in 1990, based on projections of world gross domestic product and recent changes in the carbon intensity of the economy. With this projection, cumulative emissions of CO2 will reach about 535 ± 55 GtC for 1870–2013, about 70 % from EFF (390 ± 20 GtC) and 30 % from ELUC (145 ± 50 GtC). This paper also documents any changes in the methods and data sets used in this new carbon budget from previous budgets (Le Quéré et al., 2013). All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center.
Abstract:
The Chinese government has committed to reaching peak carbon emissions before 2030, which requires China to implement new policies. Using a CGE model, this study conducts simulation studies of an energy tax and a carbon tax and analyzes their effects on macro-economic indices. The Chinese economy is affected at an acceptable level by the two taxes: GDP loses less than 0.8% with a carbon tax of 100, 50, or 10 RMB/ton of CO2, or with an energy tax of 5% of the delivery price, and the loss of real disposable personal income is correspondingly small. Compared with implementing a single tax, a combined carbon and energy tax induces larger emission reductions at relatively smaller economic cost. With these taxes, the domestic competitiveness of energy-intensive industries is improved. Additionally, we find that the sooner such taxes are launched, the smaller the economic costs and the more significant the achieved emission reductions.
Abstract:
Predictions about electric energy needs, based on current electric energy models, forecast that global energy consumption on Earth in 2050 will be double the present rate. Using distributed procedures for control and integration, the expected needs can be halved; therefore, the implementation of Smart Grids is necessary. Interaction between final consumers and utilities is a key factor of future Smart Grids. This interaction aims to achieve efficient and responsible energy consumption. Energy Residential Gateways (ERG) are new in-building devices that will govern the communication between user and utility and will control electric loads. Utilities will offer new services empowering residential customers to lower their electric bills. Some of these services are Smart Metering, Demand Response, and Dynamic Pricing. This paper presents a practical development of an ERG for residential buildings.
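As a hedged illustration of the kind of decision an ERG could make under Dynamic Pricing and Demand Response, the sketch below defers flexible loads while the tariff is high; the class, field names, and price threshold are hypothetical and not taken from the paper.

```python
# Hypothetical demand-response rule an Energy Residential Gateway might apply.
from dataclasses import dataclass

@dataclass
class Load:
    name: str
    power_w: float
    deferrable: bool   # can the gateway postpone this load?

def schedule(loads, price_per_kwh, price_threshold=0.25):
    """Return (run_now, deferred): defer flexible loads while the dynamic price is high."""
    run_now, deferred = [], []
    for load in loads:
        if load.deferrable and price_per_kwh > price_threshold:
            deferred.append(load)
        else:
            run_now.append(load)
    return run_now, deferred

loads = [Load("fridge", 150, False), Load("dishwasher", 1800, True)]
now, later = schedule(loads, price_per_kwh=0.32)
print([l.name for l in now], [l.name for l in later])  # ['fridge'] ['dishwasher']
```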
Abstract:
This work is related to the improvement of the dynamic performance of the Buck converter by means of introducing an additional power path that virtually increases the output capacitance during transients, thus improving the output impedance of the converter. It is well known that in VRM applications with wide load steps, voltage overshoots and undershoots may lead to undesired performance of the load. To solve this problem, high-bandwidth, high-switching-frequency power converters can be applied to reduce the transient time, or a large output capacitor can be applied to reduce the output impedance. The first solution can degrade the efficiency by increasing the switching losses of the MOSFETs, while the second penalizes the cost and size of the output filter. The additional energy path, as presented here, is introduced with the Output Impedance Correction Circuit (OICC) based on the Controlled Current Source (CCS). The OICC uses the CCS to inject or extract a current n - 1 times larger than the output capacitor current, thus virtually increasing the value of the output capacitance n times during transients. This feature allows the use of a low-frequency Buck converter with a smaller capacitor while still satisfying the dynamic requirements.
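A minimal sketch of the capacitance-multiplication idea behind the OICC: if the CCS injects (or extracts) n - 1 times the output-capacitor current, the load transient is absorbed as if the capacitance were n times the physical one, so the voltage deviation during the converter's response time shrinks accordingly. The simplified dv estimate and the numbers below are illustrative assumptions, not results from the paper.

```python
# Simplified view of the OICC effect: i_inject = (n - 1) * i_C adds to the capacitor
# current i_C, so the load step sees an effective capacitance of n * C_out.
def transient_voltage_deviation(delta_i_load_a, c_out_f, n, response_time_s):
    """Rough dv during the converter response time, assuming the load step is supplied
    entirely by the (virtually multiplied) output capacitance: dv = dI * t / (n * C)."""
    return delta_i_load_a * response_time_s / (n * c_out_f)

# Illustrative numbers: 10 A load step, 100 uF capacitor, 5 us converter response time
print(transient_voltage_deviation(10, 100e-6, n=1, response_time_s=5e-6))  # 0.5 V, plain Buck
print(transient_voltage_deviation(10, 100e-6, n=4, response_time_s=5e-6))  # 0.125 V, with OICC
```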
Abstract:
Systems used for localizing targets such as goods, individuals, or animals commonly rely on operational means to meet the final application demands. However, what would happen if some of those means were powered up randomly by harvesting systems? And what if the devices not randomly powered had their duty cycles restricted? Under what conditions would such an operation be tolerable in localization services? What if the references provided by nodes in a tracking problem were distorted? Moreover, there is an underlying topic common to the previous questions regarding the transfer of conceptual models to reality in field tests: what challenges are faced upon deploying a localization network that integrates energy-harvesting modules? The application scenario of the system studied is a traditional herding environment of semi-domesticated reindeer (Rangifer tarandus tarandus) in northern Scandinavia. In these conditions, information on the approximate locations of reindeer is as important as environmental preservation. Herders also need cost-effective devices capable of operating unattended in sometimes extreme weather conditions. The analyses developed are valuable not only for the specific application environment presented, but also because they may serve as an approach to the performance of navigation systems in the absence of reasonably accurate references such as those of the Global Positioning System (GPS). A number of energy-harvesting solutions, such as thermal and radio-frequency harvesting, do not commonly provide power beyond one milliwatt. When they do, battery buffers may be needed (as happens with solar energy), which may raise costs and make systems more dependent on environmental temperatures. In general, given our problem, a harvesting system is needed that is capable of providing energy bursts of at least a few milliwatts. Many works on localization problems assume that devices can determine unknown locations using range-based techniques or fingerprinting, assumptions that do not hold in the approach considered herein. The system presented is akin to range-free techniques, but goes to the extent of considering very low node densities: most range-free techniques are, therefore, not applicable. Animal localization, in particular, is usually supported by accurate devices such as GPS collars, which deplete their batteries in at most a few days. Such short-lived solutions are not particularly desirable in the framework considered. In tracking, the challenge most often addressed is attaining high precision levels from complex, reliable hardware and thorough processing techniques. One of the challenges in this Thesis is the use of equipment with just part of its facilities in permanent operation, which may yield high input noise levels in the form of distorted reference points. The solution presented integrates a kinetic harvesting module in some nodes, which are expected to be a majority in the network. These modules are capable of providing power bursts of a few milliwatts, which suffice to meet node energy demands. The use of harvesting modules in the aforementioned conditions makes the system less dependent on environmental temperatures, as no batteries are used in the nodes with harvesters; it may also be an advantage in economic terms. There is a second kind of node: battery-powered (without kinetic energy harvesters) and therefore dependent on temperature and battery replacements.
In addition, their operation is constrained by duty cycles in order to extend node lifetime and, consequently, their autonomy. There is, in turn, a third type of node (hotspots), which can be static or mobile. They are also battery-powered and are used to retrieve information from the network so that it can be presented to users. The system's operational chain starts with the kinetically powered nodes broadcasting their own identifiers. If an identifier is received at a battery-powered node, the latter stores it in its records. Later, when the recording node meets a hotspot, its full record of detections is transferred to the hotspot. Every detection record comprises, at least, a node identifier and the position that the battery-operated node read from its GPS module prior to the detection. The characteristics of the system presented give the aforementioned operation certain particularities, which are also studied. First, identifier transmissions are random, as they depend on movements of the kinetic modules (reindeer movements in our application); not every movement suffices, since it must overcome a certain energy threshold. Second, identifier transmissions may not be heard unless there is a battery-powered node in the surroundings. Third, battery-powered nodes do not poll their GPS module continuously, so localization errors rise even more; recall that this behavior is tied to the aforementioned power-saving policies that extend node lifetime. Last, some time elapses between the instant a random identifier transmission is detected and the moment the user becomes aware of the detection: it takes some time to find a hotspot. Tracking is posed as a problem with a single kinetically powered target and a population of battery-operated nodes at higher densities than in the localization problem. Since the latter provide their approximate positions as reference locations, the study is again focused on assessing the impact of such distorted references on performance. Unlike in localization, distance-estimation capabilities based on signal parameters are assumed in this problem. Three variants of the Kalman filter family are applied in this context: the regular Kalman filter, the alpha-beta filter, and the unscented Kalman filter. The study enclosed hereafter comprises both field tests and simulations. Field tests were used mainly to assess the challenges related to power supply and operation in extreme conditions, as well as to model the nodes and some aspects of their operation in the application scenario. These models are the basis of the simulations developed later. The overall system performance is analyzed according to three metrics: number of detections per kinetic node, accuracy, and latency. The links between these metrics and the operational conditions are also discussed and characterized statistically. Subsequently, such statistical characterization is used to forecast performance figures given specific operational parameters. In tracking, also studied via simulations, nonlinear relationships are found between accuracy and the duty cycles and cluster sizes of battery-operated nodes. The solution presented may be more complex in terms of network structure than existing solutions based on GPS collars. However, its main gain lies in taking advantage of users' error tolerance to reduce costs and to become more environmentally friendly by diminishing the number of batteries that can potentially be lost.
Whether or not it is applicable ultimately depends on the conditions and requirements imposed by users' needs and operational environments, which is, as explained, one of the topics of this Thesis.
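Of the three filters mentioned, the alpha-beta filter is the simplest; the sketch below shows a one-dimensional version tracking a target from distorted (noisy) position references. The gains, noise level, and trajectory are illustrative assumptions, not the configuration used in the thesis.

```python
# Minimal 1-D alpha-beta tracker fed with noisy position references.
import random

def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
    x, v = measurements[0], 0.0          # initial state from the first reference
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt              # predict position, keep velocity
        r = z - x_pred                   # innovation against the noisy reference
        x = x_pred + alpha * r           # correct position
        v = v + beta * r / dt            # correct velocity
        estimates.append(x)
    return estimates

# Target moving at 1 m per step; references distorted by assumed Gaussian noise (sigma = 5 m)
truth = [float(k) for k in range(50)]
refs = [p + random.gauss(0, 5.0) for p in truth]
print(alpha_beta_track(refs)[-5:])       # smoothed estimates near the true final positions
```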
Abstract:
Remote reprogramming capabilities are one of the major concerns in WSN platforms due to the limitations and constraints that low-power wireless nodes pose, especially when energy efficiency during the reprogramming process is a critical factor for extending the battery life of the devices. Moreover, WSNs are based on low-rate protocols in which the more data is sent, the greater the probability of losing packets during transmission. To overcome these limitations, this work designs and implements a novel on-the-fly reprogramming technique for modifying and updating the application running on the wireless sensor nodes, based on a partial reprogramming mechanism that significantly reduces the size of the files to be downloaded to the nodes, thereby diminishing their power and time consumption. This mechanism also supports multi-experimental capabilities, because it makes it possible to download, manage, test, and debug multiple applications on the wireless nodes, based on a memory-map segmentation of the core. Being an on-the-fly reprogramming process, no additional resources are needed to store and download the configuration file.
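A hypothetical sketch of the segment-diff idea that underlies partial reprogramming with a memory-map segmentation: split the running and the new firmware images into fixed-size segments and transmit only those that changed. The segment size, helper names, and placeholder images are assumptions; the actual node-side update protocol is not shown.

```python
# Compute which fixed-size firmware segments differ, so only those are downloaded.
import hashlib

SEGMENT_SIZE = 256  # bytes, assumed segment granularity

def segments(image: bytes):
    return [image[i:i + SEGMENT_SIZE] for i in range(0, len(image), SEGMENT_SIZE)]

def changed_segments(old_image: bytes, new_image: bytes):
    """Return (index, data) pairs for the segments whose hashes differ."""
    old_segs, new_segs = segments(old_image), segments(new_image)
    updates = []
    for idx, seg in enumerate(new_segs):
        old = old_segs[idx] if idx < len(old_segs) else b""
        if hashlib.sha256(seg).digest() != hashlib.sha256(old).digest():
            updates.append((idx, seg))
    return updates

old = bytes(1024)                                    # placeholder "running" image
new = bytes(512) + b"\x01" * 16 + bytes(496)         # placeholder updated image
print([idx for idx, _ in changed_segments(old, new)])  # [2]: only segment 2 is re-sent
```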
Abstract:
This work is related to the output impedance improvement of a Multiphase Buck converter with Peak Current Mode Control (PCMC) by means of introducing an additional power path that virtually increases the output capacitance during transients. Various solutions exist that can be employed to improve the dynamic behavior of the converter system, but nearly all are developed for a Single-Phase Buck converter with Voltage Mode Control (VMC), whereas in VRM applications, due to the high currents, the system is usually implemented as a Multiphase Buck Converter with Current Mode Control. The additional energy path, as presented here, is introduced with the Output Impedance Correction Circuit (OICC) based on the Controlled Current Source (CCS). The OICC is used to inject or extract a current n-1 times larger than the output capacitor current, thus virtually increasing the value of the output capacitance n times during transients. Furthermore, this work extends the OICC concept to a Multiphase Buck Converter system and compares the proposed solution with a system that has an n times larger output capacitor. In addition, the OICC is implemented as a Synchronous Buck Converter with PCMC, thus reducing its influence on the system efficiency.