819 results for "Energy consumption data sets"


Relevance: 100.00%

Abstract:

Wireless sensor networks (WSN) are becoming widely adopted for many applications, including complicated tasks like building energy management. However, one major concern for WSN technologies is their short lifetime and high maintenance cost due to the limited battery energy. One solution is to scavenge ambient energy, which is then rectified to power the WSN. The objective of this thesis was to investigate the feasibility of an ultra-low energy consumption power management system suitable for harvesting sub-mW photovoltaic and thermoelectric energy to power WSNs. To achieve this goal, energy harvesting system architectures have been analyzed. Detailed analysis of energy storage units (ESU) has led to an innovative ESU solution for the target applications. A battery-less, long-lifetime ESU and its associated power management circuitry, including a fast-charge circuit, a self-start circuit, an output voltage regulation circuit and a hybrid ESU using a combination of super-capacitor and thin-film battery, were developed to achieve continuous operation of the energy harvester. Low start-up voltage DC/DC converters have been developed for 1 mW-level thermoelectric energy harvesting. The novel method of altering the thermoelectric generator (TEG) configuration in order to match impedance has been verified in this work. Novel maximum power point tracking (MPPT) circuits, employing the fractional open-circuit voltage method, were developed specifically for sub-1 mW photovoltaic energy harvesting applications. The MPPT energy model has been developed and verified against both SPICE simulation and implemented prototypes. Both the indoor light and the thermoelectric energy harvesting methods proposed in this thesis have been implemented in prototype devices. The improved indoor light energy harvester prototype demonstrates 81% MPPT conversion efficiency at 0.5 mW input power. This important improvement makes light energy harvesting from small energy sources (e.g. a credit-card-sized solar panel under 500 lux indoor lighting) a feasible approach. The 50 mm × 54 mm thermoelectric energy harvester prototype generates 0.95 mW with 28% conversion efficiency when placed on a 60 °C heat source. Both prototypes can continuously power a WSN for building energy management applications in a typical office building environment. In addition to the hardware development, a comprehensive system energy model has been developed. This system energy model can be used not only to predict the available and consumed energy under real-world ambient conditions, but also to optimize the system design and configuration. The energy model has been verified against indoor photovoltaic energy harvesting prototypes in long-term deployment experiments.
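
The fractional open-circuit voltage technique mentioned above approximates the maximum power point as a fixed fraction of the panel's open-circuit voltage. The sketch below is an illustrative simulation, not the thesis's circuit: the single-diode parameters and the fraction k = 0.76 are assumed values chosen only to show how the estimate compares with the true maximum power point.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

# Illustrative single-diode PV model (parameters are assumed, not from the thesis).
I_PH = 2e-3      # photo-generated current, A (sub-mW indoor cell)
I_0 = 1e-9       # diode saturation current, A
N_VT = 0.05      # ideality factor times thermal voltage, V

def pv_current(v):
    """Cell current at terminal voltage v (single-diode model, no series/shunt R)."""
    return I_PH - I_0 * (np.exp(v / N_VT) - 1.0)

def pv_power(v):
    return v * pv_current(v)

# Open-circuit voltage: the terminal voltage at which the cell current falls to zero.
v_oc = brentq(pv_current, 0.0, 2.0)

# Fractional open-circuit voltage MPPT: operate at a fixed fraction of V_oc.
K_FOCV = 0.76                      # assumed fraction; typical values are roughly 0.71-0.78
v_focv = K_FOCV * v_oc

# True maximum power point, found numerically for comparison.
res = minimize_scalar(lambda v: -pv_power(v), bounds=(0.0, v_oc), method="bounded")
v_mpp = res.x

print(f"V_oc = {v_oc*1e3:.1f} mV, FOCV set-point = {v_focv*1e3:.1f} mV, "
      f"true MPP = {v_mpp*1e3:.1f} mV")
print(f"FOCV tracking efficiency = {pv_power(v_focv)/pv_power(v_mpp):.1%}")
```

In hardware, V_oc would typically be sampled periodically with the load briefly disconnected, rather than computed from a model as in this sketch.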

Relevance: 100.00%

Abstract:

Electron microscopy (EM) has advanced exponentially since the first transmission electron microscope (TEM) was built in the 1930s. The urge to 'see' things is an essential part of human nature ('seeing is believing'), and apart from scanning tunnelling microscopes, which give information about the surface, EM is the only imaging technology capable of truly visualising atomic structures in depth, down to single atoms. With the development of nanotechnology the demand to image and analyse small things has become even greater, and electron microscopes have evolved from highly delicate and sophisticated research-grade instruments to turnkey and even bench-top instruments for everyday use in every materials research lab on the planet. The semiconductor industry is as dependent on the use of EM as the life sciences and the pharmaceutical industry. With this generalisation of use for imaging, the need for advanced uses of EM has become more and more apparent. The combination of several coinciding beams (electron, ion and even light) to create DualBeam or TripleBeam instruments, for instance, extends their usefulness from pure imaging to manipulation on the nanoscale. As for the analytical power of EM, the many ways in which highly energetic electrons and ions interact with the matter in the specimen have given rise, over the last two decades, to a plethora of specialised niches covering virtually every kind of analysis that can be combined with EM. In the course of this study the emphasis was placed on the application of these advanced analytical EM techniques in the context of multiscale and multimodal microscopy: multiscale meaning across length scales from micrometres or larger down to nanometres, multimodal meaning numerous techniques applied to the same sample volume in a correlative manner. To demonstrate the breadth and potential of the multiscale and multimodal concept, its integration was attempted in two areas: I) biocompatible materials, using polycrystalline stainless steel, and II) semiconductors, using thin multiferroic films. I) The motivation to use stainless steel (316L medical grade) comes from the potential modulation of endothelial cell growth, which can have a big impact on the improvement of cardio-vascular stents (mainly made of 316L) through nano-texturing of the stent surface by focused ion beam (FIB) lithography. FIB patterning has not previously been reported in connection with stents and cell growth, so in order to gain a better understanding of the beam-substrate interaction during patterning, a correlative microscopy approach was used to examine the patterning process from as many angles as possible. Electron backscatter diffraction (EBSD) was used to analyse the crystallographic structure, FIB was used for the patterning and for simultaneously visualising the crystal structure as part of the monitoring process, scanning electron microscopy (SEM) and atomic force microscopy (AFM) were employed to analyse the topography, and the final step was 3D visualisation through serial FIB/SEM sectioning. II) The motivation for the use of thin multiferroic films stems from the ever-growing demand for increased data storage at ever lower energy consumption. The Aurivillius phase material used in this study has high potential in this area, yet it is necessary to show clearly that the film is really multiferroic and that no second-phase inclusions are present even at very low concentrations; ~0.1 vol% could already be problematic. Thus, in this study a technique was developed to analyse ultra-low-density inclusions in thin multiferroic films down to concentrations of 0.01%. The goal achieved was a complete structural and compositional analysis of the films, which required identifying second-phase inclusions (through energy-dispersive X-ray (EDX) elemental analysis), localising them (employing 72-hour EDX mapping in the SEM), isolating them for the TEM (using FIB), and placing an upper confidence limit of 99.5% on the influence of the inclusions on the magnetic behaviour of the main phase (statistical analysis).
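
The 99.5% upper confidence limit quoted above is, at its core, a counting-statistics problem: very few (possibly zero) inclusions are observed in a finite inspected volume. The abstract does not state the exact statistical procedure used, so the sketch below shows only one plausible approach, an exact Poisson upper bound on how much second-phase material could be present; the inspected volume and mean inclusion size are hypothetical numbers chosen for illustration.

```python
from scipy.stats import chi2

def poisson_upper_limit(n_observed, confidence=0.995):
    """Exact one-sided upper confidence limit on a Poisson mean
    given n_observed counts (Garwood chi-square construction)."""
    return 0.5 * chi2.ppf(confidence, 2 * (n_observed + 1))

# Hypothetical survey: 0 inclusions found while mapping 1.0e5 um^3 of film,
# assuming a typical inclusion volume of 1.0e-2 um^3.
inspected_volume_um3 = 1.0e5
inclusion_volume_um3 = 1.0e-2
n_found = 0

lam_upper = poisson_upper_limit(n_found)                   # upper limit on expected count
density_upper = lam_upper / inspected_volume_um3           # inclusions per um^3
vol_fraction_upper = density_upper * inclusion_volume_um3  # upper limit on volume fraction

print(f"99.5% upper limit: {vol_fraction_upper:.2e} volume fraction "
      f"({vol_fraction_upper*100:.5f} vol%)")
```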

Relevance: 100.00%

Abstract:

As more diagnostic testing options become available to physicians, it becomes more difficult to combine various types of medical information to optimize the overall diagnosis. To improve diagnostic performance, we introduce here an approach to optimize a decision-fusion technique for combining heterogeneous information, such as from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the area under the receiver operating characteristic (ROC) curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), an artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p < 0.02) and achieved AUC = 0.85 ± 0.01. DF-P surpassed the other classifiers in terms of pAUC (p < 0.01) and reached pAUC = 0.38 ± 0.02. For the mass data set, DF-A outperformed both the ANN and the LDA (p < 0.04) and achieved AUC = 0.94 ± 0.01. Although for this data set there were no statistically significant differences among the classifiers' pAUC values (pAUC = 0.57 ± 0.07 to 0.67 ± 0.05, p > 0.10), DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p < 0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets.
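
AUC and pAUC are the two figures of merit that the decision-fusion variants above optimize directly. The sketch below is a simplified stand-in, not the authors' DF-A/DF-P method: it computes AUC, a normalized partial AUC over a high-sensitivity band (the 90-100% sensitivity range is an assumption), and fuses two classifier outputs with a single mixing weight chosen by grid search on synthetic scores.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def partial_auc(y_true, scores, min_tpr=0.90):
    """Normalized partial AUC restricted to sensitivities above min_tpr."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    mask = tpr >= min_tpr
    # Integrate (1 - FPR) over the high-sensitivity band and normalize by its width.
    area_fpr = np.trapz(fpr[mask], tpr[mask])
    band = tpr[mask].max() - min_tpr
    return (band - area_fpr) / (1.0 - min_tpr) if band > 0 else 0.0

def fuse(scores_a, scores_b, w):
    """Linear decision fusion of two classifier outputs with mixing weight w."""
    return w * scores_a + (1.0 - w) * scores_b

# Toy data standing in for the outputs of two trained classifiers (e.g. LDA and ANN).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=400)
scores_lda = y + rng.normal(scale=1.2, size=400)
scores_ann = y + rng.normal(scale=0.9, size=400)

# Grid-search the fusion weight to maximize AUC (loosely analogous in spirit to DF-A;
# the published method is more general than this single-weight combination).
weights = np.linspace(0.0, 1.0, 101)
best_w = max(weights, key=lambda w: roc_auc_score(y, fuse(scores_lda, scores_ann, w)))
fused = fuse(scores_lda, scores_ann, best_w)

print(f"best weight = {best_w:.2f}")
print(f"AUC  = {roc_auc_score(y, fused):.3f}")
print(f"pAUC = {partial_auc(y, fused):.3f}")
```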

Relevance: 100.00%

Abstract:

We estimate a carbon mitigation cost curve for the U.S. commercial sector based on econometric estimation of the responsiveness of fuel demand and equipment choices to energy price changes. The model econometrically estimates fuel demand conditional on fuel choice, which is characterized by a multinomial logit model. Separate estimation of end uses (e.g., heating, cooking) using the U.S. Commercial Buildings Energy Consumption Survey allows for exceptionally detailed estimation of price responsiveness disaggregated by end use and fuel type. We then construct aggregate long-run elasticities, by fuel type, through a series of simulations; own-price elasticities range from -0.9 for district heat services to -2.9 for fuel oil. The simulations form the basis of a marginal cost curve for carbon mitigation, which suggests that a price of $20 per ton of carbon would result in an 8% reduction in commercial carbon emissions, and a price of $100 per ton would result in a 28% reduction. © 2008 Elsevier B.V. All rights reserved.
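
The fuel-choice component described above is a multinomial logit model, whose own-price choice elasticities have a simple closed form when utility is linear in price. The sketch below illustrates that mechanic with made-up coefficients and prices; it is not the estimated commercial-sector model, and the aggregate elasticities reported above additionally reflect the conditional-demand stage and simulation steps not shown here.

```python
import numpy as np

# Hypothetical multinomial logit fuel choice with utility linear in fuel price.
fuels = ["electricity", "natural_gas", "fuel_oil", "district_heat"]
beta_price = -0.08                            # assumed price coefficient (per $/MMBtu)
alpha = np.array([1.2, 1.5, 0.3, 0.1])        # assumed fuel-specific constants
prices = np.array([30.0, 9.0, 14.0, 12.0])    # assumed prices, $/MMBtu

def choice_probs(prices):
    v = alpha + beta_price * prices
    e = np.exp(v - v.max())                   # subtract the max for numerical stability
    return e / e.sum()

p = choice_probs(prices)

# Own-price elasticity of the choice probability for a linear-in-price logit:
# d ln P_j / d ln price_j = beta_price * price_j * (1 - P_j)
own_elasticity = beta_price * prices * (1.0 - p)

for f, share, el in zip(fuels, p, own_elasticity):
    print(f"{f:13s} share = {share:.2f}  own-price choice elasticity = {el:.2f}")
```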

Relevance: 100.00%

Abstract:

Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges of scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.
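
Flow-time, the objective running through this thesis, is simply a job's completion time minus its release (arrival) time, optionally weighted. The short sketch below computes total and weighted flow-time for a given single-machine schedule; the job data are made up and none of the thesis's algorithms are reproduced.

```python
from dataclasses import dataclass

@dataclass
class Job:
    release: float      # arrival time
    processing: float   # processing requirement
    weight: float = 1.0

def flow_times(jobs, order):
    """Flow-times when jobs run non-preemptively in the given order on one machine
    (a job cannot start before its release time)."""
    t, flows = 0.0, {}
    for j in order:
        job = jobs[j]
        t = max(t, job.release) + job.processing   # completion time of job j
        flows[j] = t - job.release                 # flow-time = completion - release
    return flows

jobs = [Job(0, 3), Job(1, 1, weight=4), Job(2, 2)]
f = flow_times(jobs, order=[0, 1, 2])
print("total flow-time   :", sum(f.values()))
print("weighted flow-time:", sum(jobs[j].weight * f[j] for j in f))
```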

The technically interesting aspect of our work lies in the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. The interplay of ideas from these different areas is at the heart of most of the algorithms presented in this thesis.

The main contributions of the thesis can be placed in one of the following categories.

1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces the iterated rounding technique for offline flow-time optimization and gives the first framework for analyzing non-clairvoyant algorithms for unrelated machines.

2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP generalizes almost all classical scheduling models and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP and its variants for the objectives of minimizing flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing-theoretic notion of stability and resource augmentation analysis.

3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.

4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first linear/convex programming duality based framework to bound the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.

Relevance: 100.00%

Abstract:

BACKGROUND: Determining the evolutionary relationships among the major lineages of extant birds has been one of the biggest challenges in systematic biology. To address this challenge, we assembled or collected the genomes of 48 avian species spanning most orders of birds, including all Neognathae and two of the five Palaeognathae orders. We used these genomes to construct a genome-scale avian phylogenetic tree and perform comparative genomic analyses. FINDINGS: Here we present the datasets associated with the phylogenomic analyses, which include sequence alignment files consisting of nucleotides, amino acids, indels, and transposable elements, as well as tree files containing gene trees and species trees. Inferring an accurate phylogeny required generating: 1) a well-annotated data set across species based on genome synteny; 2) alignments with unaligned or incorrectly overaligned sequences filtered out; and 3) diverse data sets, including genes and their inferred trees, indels, and transposable elements. Our total evidence nucleotide tree (TENT) data set (consisting of exons, introns, and UCEs) gave what we consider our most reliable species tree when using the concatenation-based ExaML algorithm or when using statistical binning with the coalescence-based MP-EST algorithm (which we refer to as MP-EST*). Other data sets, such as the coding sequence of some exons, revealed other properties of genome evolution, namely convergence. CONCLUSIONS: The Avian Phylogenomics Project is the largest vertebrate phylogenomics project to date that we are aware of. The sequence, alignment, and tree data are expected to accelerate analyses in phylogenomics and other related areas.

Relevance: 100.00%

Abstract:

The SB distributional model of Johnson's 1949 paper was introduced by a transformation to normality, that is, to z ~ N(0, 1), consisting of a linear scaling to the range (0, 1), a logit transformation, and an affine transformation, z = γ + δu. The model, in its original parameterization, has often been used in forest diameter distribution modelling. In this paper, we define the SB distribution in terms of the inverse transformation from normality, including an initial linear scaling transformation, u = γ′ + δ′z (δ′ = 1/δ and γ′ = −γ/δ). The SB model in terms of the new parameterization is derived, and maximum likelihood estimation schemes are presented for both model parameterizations. The statistical properties of the two alternative parameterizations are compared empirically on 20 data sets of diameter distributions of Changbai larch (Larix olgensis Henry). The new parameterization is shown to be statistically better than Johnson's original parameterization for the data sets considered here.
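
As a concrete illustration of the two parameterizations described above, the sketch below applies the forward transformation to normality and its inverse for a variable bounded on (ξ, ξ + λ). The parameter values are arbitrary, and the scaling and shift conventions are assumptions made for the example rather than taken from the paper.

```python
import numpy as np

# Assumed SB parameters for illustration: bounds (xi, xi + lam), shape (gamma, delta).
xi, lam = 5.0, 45.0          # e.g. tree diameters between 5 and 50 cm
gamma, delta = 0.8, 1.6

def sb_to_normal(x):
    """Forward transformation: scale to (0, 1), logit, then affine to z ~ N(0, 1)."""
    u = (x - xi) / lam
    return gamma + delta * np.log(u / (1.0 - u))

def normal_to_sb(z):
    """Inverse transformation, written with the alternative parameterization
    gamma' = -gamma/delta, delta' = 1/delta applied to z before the inverse logit."""
    gamma_p, delta_p = -gamma / delta, 1.0 / delta
    logit_u = gamma_p + delta_p * z
    u = 1.0 / (1.0 + np.exp(-logit_u))
    return xi + lam * u

# Round-trip check, plus a simulated diameter sample drawn through the inverse transform.
x = np.array([10.0, 20.0, 35.0])
assert np.allclose(normal_to_sb(sb_to_normal(x)), x)

rng = np.random.default_rng(1)
diameters = normal_to_sb(rng.standard_normal(10000))
print(f"simulated mean diameter: {diameters.mean():.1f} cm")
```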

Relevance: 100.00%

Abstract:

In this paper, the buildingEXODUS evacuation model is described and discussed, and attempts at qualitative and quantitative model validation are presented. The data sets used for validation are the Stapelfeldt and Milburn House evacuation data. As part of the validation exercise, the sensitivity of the buildingEXODUS predictions to a range of variables is examined, including occupant drive, occupant location, exit flow capacity, exit size, occupant response times and geometry definition. An important consideration highlighted by this work is that any validation exercise must be scrutinised to identify both the results generated and the considerations and assumptions on which they are based. During the course of the validation exercise, both data sets were found to be less than ideal for the purpose of validating complex evacuation models. However, the buildingEXODUS evacuation model was found to produce reasonable qualitative and quantitative agreement with the experimental data.

Relevance: 100.00%

Abstract:

This document is the first of three iterations of the Data Management Plan (DMP) that will be formally delivered during the project. Version 2 is due in month 24 and version 3 towards the end of the project. The DMP is thus not a fixed document; it evolves and gains more precision and substance during the lifespan of the project. In this first version we describe the planned research data sets related to the RAGE evaluation and validation activities, and the fifteen principles that will guide data management in RAGE. The former are described in the format of the EU data management template, and the latter in terms of their guiding principle, how we propose to implement them, and when they will be implemented. This document is therefore primarily relevant to WP5 and WP8 members.

Relevance: 100.00%

Abstract:

Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and a methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates, consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil fuel combustion and cement production (EFF) are based on energy statistics and cement production data, respectively, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover-change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models forced by observed climate, CO2, and land-cover change (some including nitrogen–carbon interactions). We compare the mean land and ocean fluxes and their variability to estimates from three atmospheric inverse methods for three broad latitude bands. All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. For the last decade available (2004–2013), EFF was 8.9 ± 0.4 GtC yr−1, ELUC 0.9 ± 0.5 GtC yr−1, GATM 4.3 ± 0.1 GtC yr−1, SOCEAN 2.6 ± 0.5 GtC yr−1, and SLAND 2.9 ± 0.8 GtC yr−1. For year 2013 alone, EFF grew to 9.9 ± 0.5 GtC yr−1, 2.3% above 2012, continuing the growth trend in these emissions, ELUC was 0.9 ± 0.5 GtC yr−1, GATM was 5.4 ± 0.2 GtC yr−1, SOCEAN was 2.9 ± 0.5 GtC yr−1, and SLAND was 2.5 ± 0.9 GtC yr−1. GATM was high in 2013, reflecting a steady increase in EFF and smaller and opposite changes between SOCEAN and SLAND compared to the past decade (2004–2013). The global atmospheric CO2 concentration reached 395.31 ± 0.10 ppm averaged over 2013. We estimate that EFF will increase by 2.5% (1.3–3.5%) to 10.1 ± 0.6 GtC in 2014 (37.0 ± 2.2 GtCO2 yr−1), 65% above emissions in 1990, based on projections of world gross domestic product and recent changes in the carbon intensity of the global economy. From this projection of EFF and assumed constant ELUC for 2014, cumulative emissions of CO2 will reach about 545 ± 55 GtC (2000 ± 200 GtCO2) for 1870–2014, about 75% from EFF and 25% from ELUC. This paper documents changes in the methods and data sets used in this new carbon budget compared with previous publications of this living data set (Le Quéré et al., 2013, 2014). All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center (doi:10.3334/CDIAC/GCP_2014).
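
The residual estimation of the land sink described above follows directly from the carbon budget identity EFF + ELUC = GATM + SOCEAN + SLAND. The sketch below simply checks that identity against the 2004-2013 decadal means quoted in the abstract; it is an arithmetic illustration, not part of the published methodology.

```python
# Decadal mean fluxes for 2004-2013 from the abstract, in GtC per year.
E_FF, E_LUC = 8.9, 0.9        # fossil fuel + cement, land-use change emissions
G_ATM, S_OCEAN = 4.3, 2.6     # atmospheric growth, ocean sink

# Global carbon budget identity: sources = atmospheric growth + sinks,
# so the residual land sink is S_LAND = E_FF + E_LUC - G_ATM - S_OCEAN.
S_LAND_residual = E_FF + E_LUC - G_ATM - S_OCEAN

print(f"residual land sink: {S_LAND_residual:.1f} GtC/yr "
      f"(abstract reports 2.9 +/- 0.8 GtC/yr)")
```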

Relevance: 100.00%

Abstract:

Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and a methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates as well as consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil fuels and industry (EFF) are based on energy statistics and cement production data, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover-change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models forced by observed climate, CO2, and land-cover change (some including nitrogen–carbon interactions). We compare the mean land and ocean fluxes and their variability to estimates from three atmospheric inverse methods for three broad latitude bands. All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. For the last decade available (2005–2014), EFF was 9.0 ± 0.5 GtC yr−1, ELUC was 0.9 ± 0.5 GtC yr−1, GATM was 4.4 ± 0.1 GtC yr−1, SOCEAN was 2.6 ± 0.5 GtC yr−1, and SLAND was 3.0 ± 0.8 GtC yr−1. For the year 2014 alone, EFF grew to 9.8 ± 0.5 GtC yr−1, 0.6 % above 2013, continuing the growth trend in these emissions, albeit at a slower rate compared to the average growth of 2.2 % yr−1 that took place during 2005–2014. Also, for 2014, ELUC was 1.1 ± 0.5 GtC yr−1, GATM was 3.9 ± 0.2 GtC yr−1, SOCEAN was 2.9 ± 0.5 GtC yr−1, and SLAND was 4.1 ± 0.9 GtC yr−1. GATM was lower in 2014 compared to the past decade (2005–2014), reflecting a larger SLAND for that year. The global atmospheric CO2 concentration reached 397.15 ± 0.10 ppm averaged over 2014. For 2015, preliminary data indicate that the growth in EFF will be near or slightly below zero, with a projection of −0.6 [range of −1.6 to +0.5] %, based on national emissions projections for China and the USA, and projections of gross domestic product corrected for recent changes in the carbon intensity of the global economy for the rest of the world. From this projection of EFF and assumed constant ELUC for 2015, cumulative emissions of CO2 will reach about 555 ± 55 GtC (2035 ± 205 GtCO2) for 1870–2015, about 75 % from EFF and 25 % from ELUC. This living data update documents changes in the methods and data sets used in this new carbon budget compared with previous publications of this data set (Le Quéré et al., 2015, 2014, 2013). 
All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center (doi:10.3334/CDIAC/GCP_2015).

Relevance: 100.00%

Abstract:

Charge exchange X-ray and far-ultraviolet (FUV) aurorae can provide detailed insight into the interaction between solar system plasmas. Using the two complementary experimental techniques of photon emission spectroscopy and translational energy spectroscopy, we have studied state-selective charge exchange in collisions between fully ionized helium and target gases characteristic of cometary and planetary atmospheres (H2O, CO2, CO, and CH4). The experiments were performed at velocities typical for the solar wind (200–1500 km s−1). Data sets are produced that can be used for modeling the interaction of solar wind alpha particles with cometary and planetary atmospheres. These data sets are used to demonstrate the diagnostic potential of helium line emission. Existing Extreme Ultraviolet Explorer (EUVE) observations of comets Hyakutake and Hale-Bopp are analyzed in terms of solar wind and coma characteristics. The case of Hale-Bopp illustrates well the dependence of the helium line emission on the collision velocity. For Hale-Bopp, our model requires low velocities in the interaction zone, which we interpret as the effect of severe post-bow-shock cooling in this extraordinarily large comet.

Relevance: 100.00%

Abstract:

A robust method for fitting to the results of gel electrophoresis assays of damage to plasmid DNA caused by radiation is presented. This method uses nonlinear regression to fit analytically derived dose-response curves to observations of the supercoiled, open circular and linear plasmid forms simultaneously, allowing for more accurate results than fitting to individual forms. Comparisons with a commonly used analysis method show that while the difference between the methods is relatively small for data sets with small errors, the parameters generated by this method remain much more closely distributed around the true value in the face of increasing measurement uncertainties. This allows parameters to be specified with greater confidence, which is reflected in a reduction of the errors on fitted parameters. On test data sets, fitted uncertainties were reduced by 30%, similar to the improvement that would be offered by moving from triplicate to fivefold repeats (assuming standard errors). This method has been implemented in a popular spreadsheet package and made available online to improve its accessibility. © 2011 by Radiation Research Society
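
The key idea above is fitting all three plasmid forms simultaneously rather than form by form. The sketch below shows that idea with scipy's nonlinear least squares under a deliberately simple assumed model (single-strand breaks at rate mu and double-strand breaks at rate phi per unit dose); the paper's analytically derived dose-response curves are not reproduced here, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def model(dose, mu, phi):
    """Assumed dose-response model: single-strand breaks (rate mu) convert
    supercoiled -> open circular, double-strand breaks (rate phi) -> linear."""
    s = np.exp(-(mu + phi) * dose)                     # supercoiled fraction
    l = 1.0 - np.exp(-phi * dose)                      # linear fraction
    c = 1.0 - s - l                                    # open circular fraction
    return s, c, l

def residuals(params, dose, s_obs, c_obs, l_obs):
    """Stack the residuals of all three forms so they are fitted simultaneously."""
    s, c, l = model(dose, *params)
    return np.concatenate([s - s_obs, c - c_obs, l - l_obs])

# Synthetic gel-assay data: true mu = 0.30, phi = 0.05 per Gy, plus measurement noise.
rng = np.random.default_rng(42)
dose = np.linspace(0.0, 10.0, 8)
s_true, c_true, l_true = model(dose, 0.30, 0.05)
noise = lambda: rng.normal(scale=0.02, size=dose.size)
s_obs, c_obs, l_obs = s_true + noise(), c_true + noise(), l_true + noise()

fit = least_squares(residuals, x0=[0.1, 0.01], args=(dose, s_obs, c_obs, l_obs),
                    bounds=([0.0, 0.0], [np.inf, np.inf]))
mu_hat, phi_hat = fit.x
print(f"fitted mu = {mu_hat:.3f} /Gy, phi = {phi_hat:.3f} /Gy")
```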

Relevance: 100.00%

Abstract:

Recent years have witnessed a rapidly increasing interest in the topic of incremental learning. Unlike conventional machine learning settings, the data flow targeted by incremental learning becomes available continuously over time. Accordingly, it is desirable to be able to abandon the traditional assumption that representative training data are available during the training period to develop decision boundaries. Under scenarios of continuous data flow, the challenge is how to transform the vast amount of raw streaming data into information and knowledge representations, and to accumulate experience over time to support the future decision-making process. In this paper, we propose a general adaptive incremental learning framework named ADAIN that is capable of learning from continuous raw data, accumulating experience over time, and using such knowledge to improve future learning and prediction performance. Detailed system-level architecture and design strategies are presented in this paper. Simulation results over several real-world data sets are used to validate the effectiveness of this method.
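
The setting described above, learning from chunks of data that arrive over time rather than from a fixed training set, can be illustrated with a minimal incremental-learning loop. The sketch below is a generic skeleton using scikit-learn's partial_fit interface, not the ADAIN framework itself; the stream is simulated by splitting a synthetic data set into sequential chunks.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

# Simulate a data stream by splitting a synthetic data set into sequential chunks.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
chunks = np.array_split(np.arange(len(y)), 10)
classes = np.unique(y)

model = SGDClassifier(random_state=0)

for t, idx in enumerate(chunks):
    X_t, y_t = X[idx], y[idx]
    if t > 0:
        # Test-then-train: evaluate on the new chunk before learning from it,
        # which tracks how accumulated experience improves future predictions.
        acc = accuracy_score(y_t, model.predict(X_t))
        print(f"chunk {t}: accuracy on unseen chunk = {acc:.3f}")
    # Incremental update: only the current chunk is needed, not all past data.
    model.partial_fit(X_t, y_t, classes=classes)
```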

Relevance: 100.00%

Abstract:

Seasonal and day-to-day variations in travel behaviour and in the performance of private passenger vehicles can be partially explained by changes in weather conditions. Likewise, in the electricity sector, weather affects energy demand. The impact of weather conditions on private passenger vehicle performance, usage statistics and travel behaviour has been studied for conventional internal combustion engine vehicles. Similarly, weather-driven variability in electricity demand and generation has been investigated widely. The aim of these analyses in both sectors is to improve energy efficiency, reduce consumption in peak hours and reduce greenhouse gas emissions. However, the potential effects of seasonal weather variations on electric vehicle usage have not yet been investigated. In Ireland the government has set a target requiring 10% of all vehicles in the transport fleet to be powered by electricity by 2020, to meet part of its European Union obligations to reduce greenhouse gas emissions and increase energy efficiency. This paper fills this knowledge gap by compiling some of the published information available for internal combustion engine vehicles and applying the lessons learned to electric vehicles, using an analysis of historical weather data in Ireland and electricity market data in a number of what-if scenarios. The areas particularly affected by weather conditions are battery performance, energy consumption and the choice of transportation mode by private individuals.