819 results for Energy consumption data sets
Abstract:
In this paper we present an empirical analysis of the residential demand for electricity using annual aggregate data at the state level for 48 US states from 1995 to 2007. Earlier literature has examined residential energy consumption at the state level using annual or monthly data, focusing on the variation in price elasticities of demand across states or regions, but has failed to recognize or address two major issues. The first is that, when fitting dynamic panel models, the lagged consumption term on the right-hand side of the demand equation is endogenous. This has resulted in potentially inconsistent estimates of the long-run price elasticity of demand. The second is that energy price is likely mismeasured.
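The endogeneity of the lagged consumption term can be made concrete with a small simulation. The sketch below is illustrative only (the number of states, persistence parameter and error structure are invented, not taken from the paper): in a dynamic panel with unit fixed effects, the within (fixed-effects) OLS estimator of the autoregressive coefficient is biased because demeaning makes the lagged term correlated with the error (the Nickell bias).

```python
import random

# Hypothetical dynamic panel: y[i,t] = RHO * y[i,t-1] + alpha[i] + eps[i,t].
# The within estimator of RHO is biased downward in short panels.
random.seed(42)
N, T, RHO = 500, 13, 0.8          # 13 periods mirrors 1995-2007

panels = []
for _ in range(N):
    alpha = random.gauss(0, 1)    # unit (state) fixed effect
    y = [alpha + random.gauss(0, 1)]
    for _ in range(1, T):
        y.append(RHO * y[-1] + alpha + random.gauss(0, 1))
    panels.append(y)

# Within estimator: demean lagged and current values per unit, then pool OLS.
num = den = 0.0
for y in panels:
    lag, cur = y[:-1], y[1:]
    mlag, mcur = sum(lag) / len(lag), sum(cur) / len(cur)
    for l, c in zip(lag, cur):
        num += (l - mlag) * (c - mcur)
        den += (l - mlag) ** 2

rho_within = num / den
print(f"true rho = {RHO}, within-OLS estimate = {rho_within:.3f}")
```

The standard remedy, consistent with the paper's point, is an instrumental-variables estimator (e.g., Arellano-Bond style, using deeper lags of consumption as instruments for the differenced lag).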
Abstract:
Embedded processors are used in numerous devices executing dedicated applications. This setting makes it worthwhile to optimize the processor for the application it executes, in order to increase its power efficiency. This paper proposes to enhance direct-mapped data caches with automatically tuned randomized set index functions to achieve that goal. We show how randomization functions can be automatically generated and compare them to traditional set-associative caches in terms of performance and energy consumption. A 16 kB randomized direct-mapped cache consumes 22% less energy than a 2-way set-associative cache, while being less than 3% slower. When the randomization function is made configurable (i.e., it can be adapted to the program), the additional reduction of conflicts outweighs the added complexity of the hardware, provided there is a sufficient amount of conflict misses.
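The idea of a randomized set index can be sketched in a few lines. The toy model below assumes a simple XOR-folding (GF(2) linear) hash as the randomization function and a power-of-two strided access pattern; the paper's automatically generated functions and cache parameters are more sophisticated than this illustration.

```python
# Toy direct-mapped cache model comparing a conventional modulo set index
# with a randomized (XOR-folded) set index on a pathological strided trace.
NUM_SETS = 256          # e.g. 16 kB / 64 B lines / 1 way
LINE_BITS = 6           # 64-byte cache lines

def modulo_index(addr):
    """Conventional index: low bits of the line address."""
    return (addr >> LINE_BITS) % NUM_SETS

def randomized_index(addr, seed=0x9E3779B9):
    """XOR-fold higher line-address bits into the index (GF(2) linear hash)."""
    line = addr >> LINE_BITS
    return (line ^ (line >> 8) ^ (line >> 16) ^ seed) % NUM_SETS

def misses(trace, index_fn):
    cache, count = {}, 0
    for addr in trace:
        s, tag = index_fn(addr), addr >> LINE_BITS
        if cache.get(s) != tag:   # direct mapped: one tag per set
            count += 1
            cache[s] = tag
    return count

# A stride equal to the cache size maps every access to one set under
# modulo indexing, so every access conflicts; the hash spreads them out.
trace = [i * NUM_SETS * 64 for i in range(64)] * 4
print(misses(trace, modulo_index), misses(trace, randomized_index))
```

Under modulo indexing all 256 accesses miss, while the randomized index leaves only the 64 compulsory misses of the first pass, which is the conflict-reduction effect the paper exploits.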
Abstract:
Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis involves using methods including generalized least squares, spatial filters, wavelet revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures and using statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but had poorly controlled Type I error rates and so did not show the improvements in performance under model selection seen with the above methods. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
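A minimal sketch of the generalized least squares (GLS) approach compared above: with error covariance Sigma, the estimator is beta = (X' Sigma^-1 X)^-1 X' Sigma^-1 y. The exponential covariance over a 1-D transect and the smooth regressor below are illustrative assumptions, not one of the paper's synthetic landscapes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
coords = np.arange(n, dtype=float)

# Exponentially decaying spatial covariance between sampling locations.
dist = np.abs(coords[:, None] - coords[None, :])
Sigma = np.exp(-dist / 5.0)
Si = np.linalg.inv(Sigma)

# A spatially smooth regressor makes ignoring the correlation costly.
X = np.column_stack([np.ones(n), np.sin(coords / 7.0)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + np.linalg.cholesky(Sigma) @ rng.normal(size=n)

beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

# Exact sampling variances of the slope under correlated errors:
XtXi = np.linalg.inv(X.T @ X)
V_ols = XtXi @ X.T @ Sigma @ X @ XtXi      # OLS ignores the correlation
V_gls = np.linalg.inv(X.T @ Si @ X)        # GLS is the BLUE here
print(V_gls[1, 1] <= V_ols[1, 1])          # True: GLS is never worse
```

The variance comparison is deterministic (it follows from the Gauss-Markov theorem), which mirrors the simulation finding that the GLS family performed well while OLS was less reliable.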
Abstract:
We present nine near-infrared (NIR) spectra of supernova (SN) 2005cf at epochs from -10 to +42 d with respect to B-band maximum, complementing the existing excellent data sets available for this prototypical Type Ia SN at other wavelengths. The spectra show a time evolution and spectral features characteristic of normal Type Ia SNe, as illustrated by a comparison with SNe 1999ee, 2002bo and 2003du. The broad-band spectral energy distribution (SED) of SN 2005cf is studied in combined ultraviolet (UV), optical and NIR spectra at five epochs between ~8 d before and ~10 d after maximum light. We also present synthetic spectra of the hydrodynamic explosion model W7, which reproduce the key properties of SN 2005cf not only at UV-optical wavelengths, as previously reported, but also at NIR wavelengths. From the radiative-transfer calculations we infer that fluorescence is the driving mechanism that shapes the SED of SNe Ia. In particular, the NIR part of the spectrum is almost devoid of absorption features, and is instead dominated by fluorescent emission of both iron-group material and intermediate-mass elements at pre-maximum epochs, and of pure iron-group material after maximum light. A single P-Cygni feature of Mg II at early epochs and a series of relatively unblended Co II lines at late phases allow us to constrain the regions of the ejecta in which the respective elements are abundant. © 2012 The Authors, Monthly Notices of the Royal Astronomical Society © 2012 RAS.
Abstract:
Hydro-entanglement is a versatile process for bonding non-woven fabrics by the use of fine, closely-spaced, high-velocity jets of water to rearrange and entangle arrays of fibres. The cost of the process mainly depends on the amount of energy consumed. Therefore, the economy of the process is highly affected by optimisation of the energy required. In this paper a parameter called critical pressure is introduced which is indicative of the energy level requirement. The results of extensive experimental work are reported and analysed to give a clear understanding of the effect of the web and fibre properties on the critical pressure in the hydro-entanglement process. Furthermore, different energy-transfer distribution schemes are tested on various fabrics. The optimum scheme which involves the lowest energy consumption and the best fabric properties is identified. © 2001 Published by Elsevier Science Ltd. All rights reserved.
Abstract:
Dynamic Voltage and Frequency Scaling (DVFS) exhibits fundamental limitations as a method to reduce energy consumption in computing systems. In the HPC domain, where performance is of highest priority and codes are heavily optimized to minimize idle time, DVFS has limited opportunity to achieve substantial energy savings. This paper explores whether operating processors Near the transistor Threshold Voltage (NTV) is a better alternative to DVFS for breaking the power wall in HPC. NTV presents challenges, since it compromises both performance and reliability to reduce power consumption. We present a first-of-its-kind study of a significance-driven execution paradigm that selectively uses NTV and algorithmic error tolerance to reduce energy consumption in performance-constrained HPC environments. Using an iterative algorithm as a use case, we present an adaptive execution scheme that switches between near-threshold execution on many cores and above-threshold execution on one core, as the computational significance of iterations in the algorithm evolves over time. Using this scheme on state-of-the-art hardware, we demonstrate energy savings ranging from 35% to 67%, while compromising neither correctness nor performance.
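The switching logic of such a significance-driven scheme can be sketched abstractly. Everything below is hypothetical for illustration: the per-iteration energy figures, the significance cutoff and the geometric residual decay are invented, not the paper's measurements.

```python
# Hypothetical significance-driven execution: iterations whose residual is
# still significant run above threshold voltage (accurate, one fast core);
# once the residual falls below the cutoff, iterations can tolerate error
# and run at near-threshold voltage on many slower cores.
E_ATV, E_NTV = 1.0, 0.35          # assumed energy per iteration (normalized)
SIGNIFICANCE_CUTOFF = 1e-3

def adaptive_energy(residuals):
    energy = 0.0
    for r in residuals:
        if r > SIGNIFICANCE_CUTOFF:
            energy += E_ATV       # significant iteration: above threshold
        else:
            energy += E_NTV       # insignificant iteration: near threshold
    return energy

# Residuals of a geometrically converging iterative solver.
residuals = [0.5 * 0.6 ** k for k in range(30)]
baseline = len(residuals) * E_ATV       # all iterations above threshold
adaptive = adaptive_energy(residuals)
print(f"energy saved: {100 * (1 - adaptive / baseline):.0f}%")
```

Under these assumed numbers the late, low-significance iterations dominate the count, so most of the run executes at the cheaper near-threshold operating point.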
Abstract:
1) Executive Summary
Legislation (Autism Act NI, 2011), a cross-departmental strategy (Autism Strategy 2013-2020) and a first action plan (2013-2016) have been developed in Northern Ireland to support individuals and families affected by Autism Spectrum Disorder (ASD), but without a prior thorough baseline assessment of need. At the same time, there are large existing data sets about the population in NI that have never been subjected to a secondary data analysis with regard to data on ASD. This report covers the first comprehensive secondary data analysis and thereby aims to inform future policy and practice.
Following a search of all existing large-scale, regional or national data sets relevant to the lives of individuals and families affected by Autism Spectrum Disorder (ASD) in Northern Ireland, extensive secondary data analyses were carried out. The focus of these analyses was to distil any ASD-related data from larger generic data sets. The findings are reported for each data set and follow a lifespan perspective, i.e., data related to children are reported before data related to adults.
Key findings:
Autism Prevalence:
Of children born in 2000 in the UK,
• 0.9% (1:109) were reported to have ASD when they were 5 years old in 2005;
• 1.8% (1:55) were reported to have ASD when they were 7 years old in 2007;
• 3.5% (1:29) were reported to have ASD when they were 11 years old in 2011.
In mainstream schools in Northern Ireland
• 1.2% of the children were reported to have ASD in 2006/07;
• 1.8% of the children were reported to have ASD in 2012/13.
Economic Deprivation:
• Families of children with autism (CWA) were 9%-18% worse off per week than families of children not on the autism spectrum (COA).
• Between 2006 and 2013, deprivation of CWA compared to COA nearly doubled as measured by eligibility for free school meals (from nearly 20% to 37%).
• In 2006, CWA and COA experienced similar levels of deprivation (approx. 20%); by 2013, a considerable deprivation gap had developed, with CWA experiencing 6% more deprivation than COA.
• Nearly 1/3 of primary school CWA lived in the most deprived areas in Northern Ireland.
• Nearly ½ of children with Asperger’s Syndrome who attended special school lived in the most deprived areas.
Unemployment:
• Mothers of CWA were 6% less likely to be employed than mothers of COA.
• Mothers of CWA earned 35%-56% less than mothers of COA.
• CWA were 9% less likely to live in two income families than COA.
Health:
• Pre-diagnosis, CWA were more likely than COA to have physical health problems, including difficulties with walking on level ground, speech and language, hearing, and eyesight, as well as asthma.
• At 3 years of age, CWA experienced poorer emotional and social health than COA; this difference increased significantly by the time they were 7 years of age.
• Mothers of young CWA had lower levels of life satisfaction and poorer mental health than mothers of young COA.
Education:
• In mainstream education, children with ASD aged 11-16 years reported less satisfaction with their social relationships than COA.
• Younger children with ASD (aged 5 and 7 years) were less likely to enjoy school, were bullied more, and were more reluctant to attend school than COA.
• CWA attended school 2-3 weeks less than COA.
• Children with Asperger’s Syndrome in special schools missed the equivalent of 8-13 school days more than children with Asperger’s Syndrome in mainstream schools.
• Children with ASD attending mainstream schooling were less likely to gain 5+ GCSEs A*-C or subsequently attend university.
Further and Higher Education:
• Enrolment rates for students with ASD have risen in Further Education (FE), from 0% to 0.7%.
• Enrolment rates for students with ASD have risen in Higher Education (HE), from 0.28% to 0.45%.
• Students with ASD chose to study different subjects than students without ASD, although other factors (e.g., gender, age) may have played a part in subject selection.
• Students with ASD from NI were more likely than students without ASD to choose Northern Irish HE Institutions rather than study outside NI.
Participation in adult life and employment:
• A small number of adults with ASD (n=99) have benefitted from DES employment provision over the past 12 years.
• It is unknown how many adults with ASD have received employment support elsewhere (e.g. Steps to Work).
Awareness and Attitudes in the General Population:
• In both the 2003 and 2012 NI Life and Times Surveys (NILTS), the NI public reported positive attitudes towards the inclusion of children with ASD in mainstream education (see also BASE Project Vol. 2).
Gap Analysis Recommendations:
This was the first comprehensive secondary analysis of existing large-scale data sets in Northern Ireland with regard to ASD. Data gaps were identified, and future replications would benefit from the inclusion of the following data:
• ASD should be recorded routinely in the following datasets:
o Census;
o Northern Ireland Survey of Activity Limitation (NISALD);
o Training for Success/Steps to work; Steps to Success;
o Travel survey;
o Hate crime; and
o Labour Force Survey.
• Data should be collected on the destinations/qualifications of special school leavers.
• The NILT Survey autism module should be repeated in 5 years' time (2017) (see the full report of the 1st NILT Survey autism module 2012 in BASE Project Report Volume 2).
• General public attitudes and awareness should be assessed for children and young people, using the Young Life and Times Survey (YLT) and the Kids Life and Times Survey (KLT); (this work is underway, Dillenburger, McKerr, Schubolz, & Lloyd, 2014-2015).
Abstract:
By 2015, with the proliferation of wireless multimedia applications and services (e.g., mobile TV, video on demand, online video repositories, immersive video interaction, peer to peer video streaming, and interactive video gaming), and any-time anywhere communication, the number of smartphones and tablets will exceed 6.5 billion as the most common web access devices. Data volumes in wireless multimedia data-intensive applications and mobile web services are projected to increase by a factor of 10 every five years, associated with a 20 percent increase in energy consumption, 80 percent of which is multimedia traffic related. In turn, multimedia energy consumption is rising at 16 percent per year, doubling every six years. It is estimated that energy costs alone account for as much as half of the annual operating expenditure. This has prompted concerted efforts by major operators to drastically reduce carbon emissions by up to 50 percent over the next 10 years. Clearly, there is an urgent need for new disruptive paradigms of green media to bridge the gap between wireless technologies and multimedia applications.
Abstract:
The demand for richer multimedia services, multifunctional portable devices and high data rates has only become feasible due to improvements in semiconductor technology. Unfortunately, sub-90 nm process nodes open a nanometre Pandora's box, exposing the barriers of technology scaling: parameter variations, which threaten the correct operation of circuits, and increased energy consumption, which limits the operational lifetime of today's systems. The contradictory design requirements of low power and system robustness constitute one of the most challenging design problems of today. Design efforts are further complicated by the heterogeneous types of designs (logic, memory, mixed-signal) that are included in today's complex systems and are characterized by different design requirements. This paper presents an overview of techniques at various levels of design abstraction that lead to low-power and variation-aware logic, memory and mixed-signal circuits and can potentially assist in meeting the strict power budgets and yield/quality requirements of future systems.
Abstract:
Thermal stability is of major importance in polymer extrusion, where product quality is dependent upon the level of melt homogeneity achieved by the extruder screw. Extrusion is an energy intensive process, and optimisation of process energy usage while maintaining melt stability is necessary in order to produce good quality product at low unit cost. Optimisation of process energy usage is timely, as world energy prices have increased rapidly over the last few years. The first part of this study discusses the efficiency of an extruder in general terms. An attempt was then made to explore correlations between melt thermal stability and energy demand in polymer extrusion under different process settings and screw geometries. A commodity grade of polystyrene was extruded using a highly instrumented single screw extruder, equipped with energy consumption and melt temperature field measurement. The melt viscosity of the experimental material was also measured using an off-line rheometer. Results showed that the specific energy demand of the extruder (i.e. the energy required to process a unit mass of polymer) decreased with increasing throughput, while fluctuation in energy demand also reduced. However, the relationship between melt temperature and extruder throughput was found to be complex, with temperature varying with radial position across the melt flow. Moreover, melt thermal stability deteriorated as throughput was increased, meaning that greater efficiency was achieved to the detriment of melt consistency. Extruder screw design also had a significant effect on the relationship between energy consumption and melt consistency. Overall, the relationship between process energy demand and thermal stability appeared to be negatively correlated and highly complex in nature.
Moreover, the level of process understanding achieved here can help to inform selection of equipment and setting of operating conditions to optimise both energy and thermal efficiencies in parallel.
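The specific energy demand mentioned above is simply the energy consumed per unit mass of polymer processed, i.e. average power drawn divided by mass throughput. A minimal sketch, with illustrative numbers that are not the paper's measurements:

```python
def specific_energy_demand(power_kw, throughput_kg_per_h):
    """Specific energy demand (SEC) in kWh/kg = average power / throughput."""
    return power_kw / throughput_kg_per_h

# Doubling throughput with only a modest rise in power draw lowers the
# specific energy demand, matching the trend reported above.
low = specific_energy_demand(10.0, 20.0)   # 0.5 kWh/kg
high = specific_energy_demand(11.0, 40.0)  # 0.275 kWh/kg
print(low, high)
```

This is why higher throughput improves energy efficiency per kilogram even though total power consumption rises, while the melt-consistency penalty noted in the abstract is the competing cost.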
Abstract:
Extrusion is one of the fundamental production methods in the polymer processing industry and is used in the production of a large number of commodities in a diverse industrial sector. As it is an energy intensive production method, process energy efficiency is one of the major concerns, and the selection of the most energy efficient processing conditions is key to reducing operating costs. Usually, extruders consume energy through the drive motor, barrel heaters, cooling fans, cooling water pumps, gear pumps, etc. Typically the drive motor is the largest energy consuming device in an extruder, while barrel/die heaters are responsible for the second largest energy demand. This study focuses on investigating the total energy demand of an extrusion plant under various processing conditions while identifying ways to optimise energy efficiency. Initially, a review was carried out on the monitoring and modelling of energy consumption in polymer extrusion. The power factor, energy demand and losses of a typical extrusion plant are also discussed in detail. The mass throughput, total energy consumption and power factor of an extruder were experimentally observed over different processing conditions, and the total extruder energy demand was modelled both empirically and using commercially available extrusion simulation software. The experimental results show that extruder energy demand is strongly coupled to machine, material and process parameters. The total power predicted by the simulation software exhibits a lagging offset compared with the experimental measurements. The empirical models are in good agreement with the experimental measurements and hence can be used to study process energy behaviour in detail and to identify ways to optimise process energy efficiency.
Abstract:
Building Information Modelling (BIM) is growing in pace, not only in design and construction stages, but also in the analysis of facilities throughout their life cycle. With this continued growth and utilisation of BIM processes comes the possibility of adopting such procedures to accurately estimate the energy usage of buildings. To this end, the aim of this research is to investigate whether BIM energy performance assessment, in the form of software analysis, provides accurate results when compared with actual recorded energy consumption. Through selective sampling, three domestic case studies are scrutinised, with baseline figures taken from existing energy providers; the results are compared with calculations from two separate BIM energy analysis software packages. Of the numerous software packages available, criterion sampling is used to select two of the most prominent platforms on the market today: Integrated Environmental Solutions - Virtual Environment (IES-VE) and Green Building Studio (GBS). The results indicate that IES-VE estimated energy use to within around ±8% in two out of three case studies, while GBS estimated usage to within approximately ±5%. The findings indicate that BIM energy performance assessment, using proprietary software analysis, is a viable alternative to manual calculations of building energy use, mainly due to the accuracy and speed of assessment, even for the most complex models. Given the surge in accurate and detailed BIM models and the importance placed on the continued monitoring and control of buildings' energy use within today's environmentally conscious society, this provides an alternative means by which to accurately assess a building's energy usage in a quick and cost-effective manner.
Abstract:
In the study of complex genetic diseases, the identification of subgroups of patients sharing similar genetic characteristics represents a challenging task, for example, to improve treatment decisions. One type of genetic lesion, frequently investigated in such disorders, is the change of the DNA copy number (CN) at specific genomic traits. Non-negative Matrix Factorization (NMF) is a standard technique to reduce the dimensionality of a data set and to cluster data samples, while keeping its most relevant information in meaningful components. Thus, it can be used to discover subgroups of patients from CN profiles. It is, however, computationally impractical for very high dimensional data, such as CN microarray data. Deciding the most suitable number of subgroups is also a challenging problem. The aim of this work is to derive a procedure to compact high dimensional data, in order to improve NMF applicability without compromising the quality of the clustering. This is particularly important for analyzing high-resolution microarray data. Many commonly used quality measures, as well as our own measures, are employed to decide the number of subgroups and to assess the quality of the results. Our measures are based on the idea of identifying robust subgroups, inspired by biological/clinical relevance instead of simply aiming at well-separated clusters. We evaluate our procedure using four real independent data sets. In these data sets, our method was able to find accurate subgroups with individual molecular and clinical features and outperformed the standard NMF in terms of accuracy in the factorization fitness function. Hence, it can be useful for the discovery of subgroups of patients with similar CN profiles in the study of heterogeneous diseases.
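The basic NMF-based subgrouping step can be sketched compactly. The sketch below uses the standard Lee-Seung multiplicative updates and a tiny synthetic data set with two mirrored copy-number patterns; it is not the paper's compaction procedure, quality measures or CN microarray data.

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Factor V ~ W @ H (W: samples x k, H: k x features), both non-negative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # Lee-Seung multiplicative
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # updates stay non-negative
    return W, H

# Two synthetic "patient" subgroups with mirrored copy-number patterns.
rng = np.random.default_rng(1)
gain = np.r_[np.full(25, 2.0), np.full(25, 0.1)]   # CN gain in first half
loss = gain[::-1]                                   # mirrored pattern
V = np.vstack([gain + 0.05 * rng.random(50) for _ in range(20)] +
              [loss + 0.05 * rng.random(50) for _ in range(20)])

W, _ = nmf(V, k=2)
labels = W.argmax(axis=1)   # subgroup = dominant NMF component per sample
print(labels)
```

Each sample's cluster is read off as its dominant component in W, which is the sense in which NMF both reduces dimensionality and clusters; the paper's contribution is making this tractable and robust for much higher-dimensional CN profiles.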
Abstract:
Energy consumption has recently become an important area of research. With the advent of new manycore processors, situations have arisen where not all the processors need to be active to reach an optimal balance between performance and energy usage. In this paper, a study of the power and energy usage of a series of benchmarks, the PARSEC and SPLASH-2X benchmark suites, on the Intel Xeon Phi for different thread configurations is presented. To carry out this study, a tool was designed to monitor and record the power usage in real time during execution and afterwards to compare the r
Abstract:
This special issue provides the latest research and development on wireless mobile wearable communications. According to a report by Juniper Research, the market value of connected wearable devices is expected to reach $1.5 billion by 2014, and the shipment of wearable devices may reach 70 million by 2017. Good examples of wearable devices are the prominent Google Glass and Microsoft HoloLens. As wearable technology is rapidly penetrating our daily life, mobile wearable communication is becoming a new communication paradigm. Mobile wearable device communications create new challenges compared to ordinary sensor networks and short-range communication. In mobile wearable communications, devices communicate with each other in a peer-to-peer fashion or client-server fashion and also communicate with aggregation points (e.g., smartphones, tablets, and gateway nodes). Wearable devices are expected to integrate multiple radio technologies for various applications' needs with small power consumption and low transmission delays. These devices can hence collect, interpret, transmit, and exchange data among supporting components, other wearable devices, and the Internet. Such data are not limited to people's personal biomedical information but also include human-centric social and contextual data. The success of mobile wearable technology depends on communication and networking architectures that support efficient and secure end-to-end information flows. A key design consideration of future wearable devices is the ability to ubiquitously connect to smartphones or the Internet with very low energy consumption. Radio propagation and, accordingly, channel models are also different from those in other existing wireless technologies. A huge number of connected wearable devices require novel big data processing algorithms, efficient storage solutions, cloud-assisted infrastructures, and spectrum-efficient communications technologies.