819 results for Energy consumption data sets
Abstract:
This paper presents a comprehensive and robust strategy for estimating battery model parameters from noise-corrupted data. The deficiencies of existing parameter estimation methods are studied, and the proposed strategy improves on them by working optimally for both low and high discharge currents, providing accurate estimates even under high levels of noise, and converging from a wide range of initial values. Testing on different data sets confirms the performance of the proposed parameter estimation strategy.
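As a minimal illustration of the estimation problem — recovering model parameters from noise-corrupted discharge data by least squares — consider fitting a toy linear discharge model (the model form, parameter values, and noise level below are assumptions for illustration, not the paper's battery model or method):

```python
import math, random

random.seed(0)

# Illustrative two-parameter discharge model (not the paper's):
# V(t) = v0 - k * t, observed with additive Gaussian noise.
def model(t, v0, k):
    return v0 - k * t

# Synthesize noise-corrupted data from known "true" parameters.
true_v0, true_k = 4.2, 0.01
ts = [i * 10.0 for i in range(50)]
data = [model(t, true_v0, true_k) + random.gauss(0, 0.05) for t in ts]

# Ordinary least squares recovers the parameters from the noisy data.
n = len(ts)
mean_t = sum(ts) / n
mean_v = sum(data) / n
k_hat = -sum((t - mean_t) * (v - mean_v) for t, v in zip(ts, data)) / \
        sum((t - mean_t) ** 2 for t in ts)
v0_hat = mean_v + k_hat * mean_t

print(round(v0_hat, 2), round(k_hat, 4))
```

Despite the noise, the estimates land close to the true values; the paper's contribution is achieving this robustness for realistic nonlinear battery models.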
Abstract:
The ATLAS and CMS collaborations at the LHC have performed analyses on the existing data sets, studying the case of one vector-like fermion or multiplet coupling to the standard model Yukawa sector. In the near future, with more data available, these experimental collaborations will start to investigate more realistic cases. The presence of more than one extra vector-like multiplet is indeed a common situation in many extensions of the standard model. For such vector-like multiplets, the interplay between precision electroweak bounds, flavour constraints and collider phenomenology is an important question in view of establishing bounds or discovering physics beyond the standard model. In this work we study the phenomenological consequences of the presence of two vector-like multiplets. We analyse the constraints on such scenarios from tree-level data and oblique corrections for the case of mixing with each of the SM generations. In the present work, we limit ourselves to scenarios with two top-like partners and no mixing in the down sector.
Abstract:
In this second part of a two-part study, the Tank-to-Wheels results reported in the first part are combined with Well-to-Tank results to provide a comprehensive Well-to-Wheels energy consumption and greenhouse gas emissions evaluation of automotive fuels in India. The results indicate that liquid fuels derived from petroleum have Well-to-Tank efficiencies in the range of 75-85%, with liquefied petroleum gas the most efficient fuel in the Well-to-Tank stage at 85%. Electricity has the lowest efficiency, 20%, mainly attributable to its dependence on coal and to 25.4% losses during transmission and distribution. The complete Well-to-Wheels results show diesel vehicles to be the most efficient among all configurations, specifically the diesel-powered split hybrid electric vehicle. Hydrogen engine configurations are the least efficient, due to the low efficiency of producing hydrogen from natural gas. Hybridizing vehicles reduces Well-to-Wheels greenhouse gas emissions substantially, with the split hybrid configuration being the most efficient. Electric vehicles do not offer any significant improvement over gasoline-powered configurations; however, a shift towards renewable sources for power generation and a reduction in transmission and distribution losses could make them a feasible option in the future. (C) 2015 Elsevier Ltd. All rights reserved.
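The Well-to-Wheels efficiency of a pathway is the product of its Well-to-Tank and Tank-to-Wheels stage efficiencies. A quick sketch of that arithmetic (the Well-to-Tank values are taken from the abstract; the Tank-to-Wheels values are assumed purely for illustration):

```python
# Well-to-Wheels efficiency = Well-to-Tank * Tank-to-Wheels.
# WTT values below come from the abstract; TTW values are
# assumed purely for illustration.
pathways = {
    #             (WTT,  TTW)
    "diesel":     (0.80, 0.30),  # TTW assumed
    "LPG":        (0.85, 0.25),  # TTW assumed
    "electric":   (0.20, 0.80),  # TTW assumed
}

for fuel, (wtt, ttw) in pathways.items():
    wtw = wtt * ttw
    print(f"{fuel}: WTW efficiency = {wtw:.2%}")
```

The pattern matches the abstract's finding: even with a highly efficient drivetrain, a 20% Well-to-Tank stage caps the electric pathway's overall efficiency.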
Abstract:
Ontogenetic patterns in percent dry weight (%DW) and energy density (joules per gram of wet weight) were studied in the early life stages of the subtropical estuarine and marine gray snapper Lutjanus griseus and the warm-temperate estuarine and marine spotted seatrout Cynoscion nebulosus. The %DW was variable among individuals of both species but increased significantly from larval to juvenile stages (<20% for fish <50 mm standard length to 20–30% for fish >50 mm). The lipid percentage, which was determined only for gray snapper, was also variable between individuals but increased significantly with body size. Strong relationships between percent dry weight and energy density were evident for both species; however, the slopes of the regressions were significantly lower than in general multispecies models, demonstrating the need for species- and stage-specific energy density data in bioenergetics models.
Abstract:
Scalable video coding allows efficient provision of video services at different quality levels with different energy demands. Depending on the specific type of service and network scenario, end users and/or operators may choose among different energy-versus-quality combinations. To deal with the resulting trade-off, in this paper we analyze how many video layers are worth receiving given the energy constraints. We first propose a single-objective optimization that dynamically selects the number of layers so as to minimize energy consumption subject to a minimum quality threshold. However, this approach cannot reflect the fact that the same increment in energy consumption may yield different increments in visual quality. Thus, a multiobjective optimization is proposed, with a utility function defined to weight the energy consumption and visual quality criteria. Finally, since the optimization solver is too computationally expensive to implement on mobile devices, a heuristic algorithm is proposed. In this way, significant reductions in energy consumption are achieved while keeping reasonable quality levels.
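A minimal sketch of the two selection strategies the abstract describes — minimize energy subject to a quality threshold, and a weighted-utility multiobjective variant. All per-layer quality and energy numbers, and the utility weights, are hypothetical:

```python
# Hypothetical cumulative quality score and energy cost for
# receiving 1..4 scalable video layers (illustrative values only).
quality = [30.0, 34.0, 36.5, 37.5]
energy  = [1.0, 1.8, 2.9, 4.5]

def min_energy_layers(q_threshold):
    """Fewest layers meeting the quality threshold; since both
    quality and energy grow with the layer count, this also
    minimizes energy consumption."""
    for n, q in enumerate(quality, start=1):
        if q >= q_threshold:
            return n
    return None  # threshold unreachable

def utility_layers(w_quality=1.0, w_energy=4.0):
    """Multiobjective variant: maximize a utility that weights
    visual quality against energy consumption."""
    scores = [w_quality * q - w_energy * e
              for q, e in zip(quality, energy)]
    return scores.index(max(scores)) + 1

print(min_energy_layers(34.0))  # -> 2
print(utility_layers())         # -> 2 with these weights
```

Changing the utility weights shifts the chosen layer count, which is exactly the flexibility the single-objective formulation lacks.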
Abstract:
The possibilities of digital research have altered the production, publication and use of research results. Academic research practice and culture are changing or have already been transformed, but to a large degree the system of academic recognition has not yet adapted to the practices and possibilities of digital research. This applies especially to research data, which are increasingly produced, managed, published and archived, but as yet hardly play a role in research assessment practices. The aim of the workshop was to bring together experts and stakeholders from research institutions, universities, scholarly societies and funding agencies in order to review, discuss and build on possibilities to implement the culture of sharing and to integrate publication of data into research assessment procedures. The report 'The Value of Research Data - Metrics for datasets from a cultural and technical point of view' was presented and discussed. Some of the key findings were that data sharing should be considered normal research practice; indeed, not sharing should be considered malpractice. Research funders and universities should support and encourage data sharing. There are a number of important aspects to consider when making data count in research and evaluation procedures. Metrics are a necessary tool in monitoring the sharing of data sets. However, data metrics are at present not well developed, and there is not yet enough experience of what these metrics actually mean. It is important to implement the culture of sharing through codes of conduct in the scientific communities. For further key findings, please read the report.
Abstract:
The generation of spikes by neurons is an energetically costly process, and evaluating the metabolic energy required to maintain the signaling activity of neurons is a challenge of practical interest. Neuron models are frequently used to represent the dynamics of real neurons but hardly ever to evaluate the electrochemical energy required to maintain those dynamics. This paper discusses the interpretation of a Hodgkin-Huxley circuit as an energy model for real biological neurons and uses it to evaluate the consumption of metabolic energy in the transmission of information between neurons coupled by electrical synapses, i.e., gap junctions. We show that for a single postsynaptic neuron, maximum energy efficiency, measured in bits of mutual information per molecule of adenosine triphosphate (ATP) consumed, requires maximum energy consumption. For groups of parallel postsynaptic neurons, we determine values of the synaptic conductance at which the energy efficiency of the transmission shows clear maxima at relatively low values of metabolic energy consumption. Contrary to what might be expected, the best performance occurs at a low energy cost.
Abstract:
Studies have reported a negative association between dairy product consumption and weight status. However, less research has focused on cheese; therefore, the aim of this study was to examine the association between cheese intake and overweight and obesity in a representative Basque adult population. A food frequency questionnaire (FFQ) was administered to a random sample of 1081 adults (530 males and 551 females, 17–96 years old). Cheese consumption data were expressed as g/1000 kcal/day. The prevalence of overweight/obesity was higher in men (55.1%) than in women (35.4%) (p < 0.001). Participants with low or moderate intake of fresh and processed cheese showed a higher prevalence of excess weight than those with higher consumption. The confounding variables selected in the multivariate analysis were occupational status and age in both genders, and place of residence in men. In conclusion, negative associations were found between consumption of some types of cheese and overweight and obesity in this population.
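Expressing intake as g/1000 kcal/day, as above, normalizes consumption by total energy intake so that participants with different diets are comparable. A one-line sketch of the conversion (the example numbers are hypothetical):

```python
def energy_adjusted_intake(grams_per_day, kcal_per_day):
    """Convert absolute intake (g/day) to g per 1000 kcal per day."""
    return grams_per_day / (kcal_per_day / 1000.0)

# Hypothetical participant: 30 g of cheese/day on a 2000 kcal/day diet.
print(energy_adjusted_intake(30.0, 2000.0))  # -> 15.0 g/1000 kcal/day
```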
Abstract:
Energy and sustainability are among the most critical issues of our generation. While the abundant potential of renewable energy such as solar and wind provides a real opportunity for sustainability, its intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable efficient integration of renewable energy into complex distributed systems with limited information.
The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into them. IT is one of the fastest-growing sectors in energy usage and greenhouse gas pollution. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but these improvements have not necessarily reduced energy consumption because more servers are demanded. Further, little effort has gone into making IT more sustainable, and most of the improvements come from better "engineering" rather than better "algorithms". In contrast, my work focuses on developing algorithms, with rigorous theoretical analysis, that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to respond adaptively to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is, however, very challenging because of limited information, non-smooth objective functions and the need for distributed control. Novel distributed algorithms are developed with theoretically provable guarantees to enable "follow the renewables" routing. Moving from theory to practice, I helped HP design and implement the industry's first Net-zero Energy Data Center.
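As a purely illustrative toy (not the thesis's distributed algorithms, which carry provable guarantees), "follow the renewables" routing can be sketched as greedily sending each unit of load to the data center with the most unused renewable supply:

```python
# Hypothetical per-data-center renewable supply and capacity,
# in load units; all names and numbers are illustrative assumptions.
renewable = {"dc_west": 40, "dc_east": 10, "dc_south": 25}
capacity  = {"dc_west": 50, "dc_east": 50, "dc_south": 50}

def route(total_load):
    """Greedy 'follow the renewables' dispatch: each unit of load
    goes to the data center with the most unused renewable energy,
    subject to capacity limits."""
    assigned = {dc: 0 for dc in renewable}
    for _ in range(total_load):
        best = max(
            (dc for dc in renewable if assigned[dc] < capacity[dc]),
            key=lambda dc: renewable[dc] - assigned[dc],
        )
        assigned[best] += 1
    return assigned

plan = route(60)
# Load served beyond each site's renewable supply is "brown" energy.
brown = sum(max(0, plan[dc] - renewable[dc]) for dc in plan)
print(plan, "non-renewable load:", brown)
```

With 60 units of load against 75 units of total renewable supply, the greedy dispatch serves everything renewably; the hard cases the thesis addresses arise when supply, prices and cooling efficiency fluctuate and information is limited.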
The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges in integrating more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves could potentially solve these issues, they are very costly. One promising alternative is to make cloud data centers demand responsive. The potential of such an approach is huge.
To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work progresses in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance that let data center operators deal with uncertainties under popular demand response programs. Building on local control rules of customers, I have further designed new pricing schemes for demand response that align the interests of customers, utility companies, and society to improve social welfare.
Abstract:
Brazil is one of the largest per capita consumers of sugar, and studies have shown a specific role of excessive sugar consumption in weight gain. With the increase in weight gain observed in several countries, including Brazil, it is important to test which messages, strategies and intervention proposals would be effective in preventing this epidemic. The data reported refer to a cluster-randomized controlled trial conducted in 20 municipal schools in the metropolitan city of Niterói, Rio de Janeiro State, from March to December 2007, which tested the efficacy of guidance given to school cooks ("merendeiras") aimed at reducing the availability of sugar and of sugar-rich foods both in school meals and in the cooks' own consumption. The intervention consisted of a nutrition education program in the schools using messages, activities and educational materials that encouraged the cooks to reduce the sugar added to school meals and to their own diets. The reduction in the schools' per capita availability of sugar was analyzed through spreadsheets recording the use of stock items. The cooks' individual consumption was assessed with a food frequency questionnaire. Anthropometric and biochemical measurements were performed according to standardized techniques. The intervention schools showed a greater reduction in per capita sugar availability than the control schools (-6.0 kg vs. 3.4 kg), but the difference was not statistically significant. Consumption of sweets and sugar-sweetened beverages fell among the cooks in both groups, but sugar consumption did not differ significantly between them. Total energy intake fell in both groups, with no difference between them and no change in the percentage adequacy of macronutrients relative to energy intake.
At the end of the study, only the cooks in the intervention group maintained their weight loss, though without a statistically significant difference. The strategy of reducing the availability and consumption of sugar by public school cooks did not achieve its main objective of reducing added sugar. A secondary analysis of the data assessed the association of self-perceived health and diet quality with excess weight and elevated serum cholesterol among the cooks at baseline. The self-perception questions were administered by interview. Among those who considered their diet healthy, 40% had elevated cholesterol and 61% were overweight, vs. 68% and 74%, respectively, among those who considered their diet unhealthy. Among those who considered their health good, 41% had elevated cholesterol and 59% were overweight, vs. 71% and 81%, respectively, among those who considered their health poor. Most of the women who reported eating a healthy diet consumed fruits, greens and vegetables, beans, and milk and dairy products more frequently, and soft drinks less frequently. We conclude that single, simple questions such as those used for self-rated health can also be valuable in dietary assessment.
Abstract:
Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. In the first area we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group, we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that direction, we study the network codes constructed from finite groups, and in particular show that linear network codes are embedded in the group network codes constructed from these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they can violate the Ingleton inequality while linear network codes cannot. In the second area, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each time the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, one that is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity.
In this work we use techniques from channels with side information and from finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design for this system, we study the pairwise error probabilities of the input sequences.
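For reference, the Ingleton inequality discussed above can be written for four (collections of) random variables $A, B, C, D$ in mutual-information form, with an equivalent statement in joint entropies:

```latex
% Ingleton inequality, mutual-information form:
I(A;B) \le I(A;B \mid C) + I(A;B \mid D) + I(C;D)

% Equivalently, expanding into joint entropies:
H(AB) + H(AC) + H(AD) + H(BC) + H(BD)
  \ge H(A) + H(B) + H(CD) + H(ABC) + H(ABD)
```

Entropy vectors of linear network codes always satisfy this inequality, which is why a group that violates it points toward codes strictly more powerful than linear ones.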
Abstract:
In the first part of the thesis we explore three fundamental questions that arise naturally when we consider a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance can be obtained by training with the dual distribution, which depends on the test distribution set by the problem but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. The benefits of using this distribution are demonstrated on both synthetic and real data sets.
In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the effect of weights on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm, which determines, for a given set of weights, whether the out-of-sample performance will improve in a practical setting. This is necessary because the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
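The weighting idea described above is standard importance weighting from the covariate-shift literature: each training point is reweighted by the ratio of the target density to the training density. A minimal sketch with assumed one-dimensional Gaussians standing in for the training and dual/test distributions (this is not the thesis's Targeted Weighting algorithm):

```python
import math, random

random.seed(1)

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / \
           (sigma * math.sqrt(2 * math.pi))

# Training points drawn from p = N(0, 1); we want estimates under a
# target distribution q = N(1, 1) (a stand-in for the dual or test
# distribution; both densities are illustrative assumptions).
xs = [random.gauss(0, 1) for _ in range(100_000)]
weights = [gauss_pdf(x, 1, 1) / gauss_pdf(x, 0, 1) for x in xs]

# The self-normalized weighted mean approximates E_q[X] = 1,
# even though no sample was drawn from q.
est = sum(w * x for w, x in zip(weights, xs)) / sum(weights)
print(round(est, 2))
```

The cost of this trick is variance inflation from uneven weights, which is exactly the "negative effect of weighting" the paragraph above refers to.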
Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset such as the Netflix dataset. Their lower computational complexity is their main advantage over previous algorithms proposed in the covariate-shift literature.
In the second part of the thesis we apply machine learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system that enables the analysis of behavior in videos of animals with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the positions of animals in videos. The method summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing the means to discriminate groups of animals, for example according to their genetic line.
Abstract:
The report describes the results of preliminary analyses of data obtained from a series of water temperature loggers sited at various distances (0.8 to 21.8 km) downstream of Kielder dam on the River North Tyne and in two natural tributaries. The report deals with three aspects of the water temperature records: an analysis of an operational aspect of the data sets for selected stations; a simple examination of the effects of impoundment upon water temperature at or close to the point of release, relative to natural river temperatures; and an examination of the rate of change of monthly means of daily mean, maximum, minimum and range (maximum - minimum) with distance downstream of the point of release during 1983.
Abstract:
The experimental consequences of Regge cuts in the angular momentum plane are investigated. The principal tool in the study is the set of diagrams originally proposed by Amati, Fubini, and Stanghellini. Mandelstam has shown that the AFS cuts are actually cancelled on the physical sheet, but they may provide a useful guide to the properties of the real cuts. Inclusion of cuts modifies the simple Regge-pole predictions for high-energy scattering data. As an example, an attempt is made to fit high-energy elastic scattering data for pp, p̄p, π±p, and K±p by replacing the Igi pole by terms representing the effect of a Regge cut. The data seem to be compatible with either a cut or the Igi pole.