819 results for Energy consumption data sets
Abstract:
We consider a class of initial data sets (Σ, h, K) for the Einstein constraint equations which we define to be generalized Brill (GB) data. This class of data is simply connected, U(1)²-invariant, maximal, and four-dimensional with two asymptotic ends. We study the properties of GB data and in particular the topology of Σ. The GB initial data sets have applications in geometric inequalities in general relativity. We construct a mass functional M for GB initial data sets and we show: (i) the mass of any GB data set is greater than or equal to M; (ii) M is a non-negative functional for a broad subclass of GB data; (iii) M evaluates to the ADM mass for reduced t − φⁱ symmetric data sets; (iv) the critical points of M are stationary U(1)²-invariant vacuum solutions to the Einstein equations. We then use this mass functional to prove two geometric inequalities: (1) a positive mass theorem for a subclass of GB initial data which includes the Myers-Perry black holes; (2) a class of local mass-angular momenta inequalities for U(1)²-invariant black holes. Finally, we construct a one-parameter family of initial data sets which can be seen as small deformations of the extreme Myers-Perry black hole that preserve the horizon geometry and angular momenta but have strictly greater energy.
Abstract:
the work towards increased energy efficiency. In order to plan and perform effective energy renovation of the buildings, it is necessary to have adequate information on the current status of the buildings in terms of architectural features and energy needs. Unfortunately, the official statistics do not include all of the needed information for the whole building stock. This paper aims to fill the gaps in the statistics by gathering data from studies, projects and national energy agencies, and by calibrating TRNSYS models against the existing data to complete missing energy demand data, for countries with similar climates, through simulation. The survey was limited to residential and office buildings in the EU member states (before July 2013). This work was carried out as part of the EU FP7 project iNSPiRe. The building stock survey revealed that over 70% of the residential and office floor area is concentrated in the six most populated countries. The total energy consumption in the residential sector is 14 times that of the office sector. In the residential sector, single-family houses represent 60% of the heated floor area, albeit with different shares across countries, indicating that retrofit solutions cannot focus only on multi-family houses. The simulation results indicate that residential buildings in central and southern European countries are not always heated to 20 °C, but are kept at a lower temperature during at least part of the day. Improving the energy performance of these houses through renovation could allow the occupants to increase the room temperature and improve their thermal comfort, even though the potential for energy savings would then be reduced.
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
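The multi-hop-neighborhood primitive at the heart of this abstract can be illustrated with a minimal single-machine sketch; the function name, adjacency-dict representation, and toy graph below are our own illustration, not NSCALE's actual API:

```python
from collections import deque

def k_hop_neighborhood(adj, center, k):
    """Collect every vertex within k hops of `center` via breadth-first search.

    `adj` maps each vertex to an iterable of its neighbours.
    Returns the vertex set of the induced k-hop neighborhood.
    """
    seen = {center}
    frontier = deque([(center, 0)])
    while frontier:
        v, depth = frontier.popleft()
        if depth == k:           # do not expand past the hop limit
            continue
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                frontier.append((w, depth + 1))
    return seen

# Toy graph: a path 0-1-2-3-4 plus a chord 1-3.
adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
ego = k_hop_neighborhood(adj, 0, 2)   # vertices within 2 hops of 0
```

A neighborhood-centric framework lets the user program operate on sets like `ego` directly, instead of on one vertex's state at a time.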
Abstract:
Introduction: Life expectancy is increasing and becoming a characteristic phenomenon of developed countries and, increasingly, of developing countries such as Brazil. The aging process causes changes in some physiological functions, such as loss of smell, loss of taste and loss of appetite, among other things that end up changing the food intake of these individuals. Objectives: This study aimed to assess the food consumption of the young elderly and the long-lived elderly in a city in southern Brazil. Methods: A cross-sectional survey conducted through home visits in Palmeira das Missões - RS, Brazil. The sociodemographic, anthropometric and dietary data were collected through questionnaires and 24-hour recall. The adequacy of nutrients was assessed according to the Dietary Reference Intakes. Data were analyzed using SPSS 18.0 software. Results: The study included 424 older adults, 84.4% (n = 358) aged under 80 years and 15.6% (n = 66) older than 80. The intake of energy and protein was insufficient for both the young elderly and the oldest. The consumption of vitamins and minerals was insufficient in all seniors except for iron, which presented an excessive intake. There was a statistically significant difference between the young elderly and the oldest only for the consumption of lipids and vitamin B12. Conclusion: The majority of studies with the elderly corroborate the results found in this article. An inadequate intake of nutrients can lead to nutritional deficiencies and consequently result in physiological and pathological changes that compromise the functional capacity of the elderly. Energy consumption was insufficient and macronutrient intake inadequate, both for the young elderly and for the oldest. Additionally, the consumption of vitamins and minerals was insufficient for everyone except iron, which presented excessive intake for both the young and the oldest elderly.
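The adequacy assessment described above amounts to comparing each nutrient's recalled intake against a reference value. A minimal sketch, using hypothetical reference values and thresholds rather than the actual age- and sex-specific DRIs:

```python
# Hypothetical reference values; real DRIs vary by age group and sex.
DRI = {"energy_kcal": 2000, "protein_g": 60, "iron_mg": 8, "b12_ug": 2.4}

def classify_intake(intake, reference, low=0.9, high=1.1):
    """Label each nutrient as insufficient, adequate, or excessive
    relative to its reference value (the 90%/110% cut-offs are illustrative)."""
    out = {}
    for nutrient, ref in reference.items():
        ratio = intake.get(nutrient, 0) / ref
        out[nutrient] = ("insufficient" if ratio < low
                         else "excessive" if ratio > high
                         else "adequate")
    return out

# A hypothetical 24-hour recall for one participant:
recall = {"energy_kcal": 1500, "protein_g": 45, "iron_mg": 12, "b12_ug": 2.5}
result = classify_intake(recall, DRI)
```

This mirrors the pattern the study reports: energy and protein fall short while iron exceeds its reference.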
Abstract:
Hardware vendors make an important effort creating low-power CPUs that keep battery duration and durability above acceptable levels. In order to achieve this goal and provide good performance-energy trade-offs for a wide variety of applications, ARM designed the big.LITTLE architecture. This heterogeneous multi-core architecture features two different types of cores: big cores oriented to performance, and little cores, slower and aimed at saving energy. As all the cores have access to the same memory, multi-threaded applications must resort to some mutual exclusion mechanism to coordinate the access to shared data by the concurrent threads. Transactional Memory (TM) represents an optimistic approach for shared-memory synchronization. To take full advantage of the features offered by software TM, but also benefit from the characteristics of the heterogeneous big.LITTLE architectures, our focus is to propose TM solutions that take into account the power/performance requirements of the application and what is offered by the architecture. In order to understand the current state of the art and obtain useful information for future power-aware software TM solutions, we have performed an analysis of a popular TM library running on top of an ARM big.LITTLE processor. Experiments show, in general, better scalability for the LITTLE cores for most of the applications, except for one that requires the computing performance that the big cores offer.
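The optimistic approach that TM takes can be sketched, in outline, as a read-validate-commit retry loop: each thread computes speculatively on a snapshot and only commits if no conflicting commit happened in the meantime. The class below is our own minimal single-variable illustration, not the API of the TM library analysed in the paper:

```python
import threading

class TVar:
    """A minimal transactional variable: optimistic read, validate, commit.
    Illustrative sketch of optimistic synchronization, not a real TM library."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()

    def atomically(self, update):
        """Retry `update(old) -> new` until the version read at the start
        is still current at commit time (optimistic concurrency control)."""
        while True:
            seen_version, seen_value = self.version, self.value
            new_value = update(seen_value)          # speculative work, no lock held
            with self._lock:                        # short commit section
                if self.version == seen_version:    # validate: no conflicting commit
                    self.value = new_value
                    self.version += 1
                    return new_value
            # conflict: another thread committed first -> retry

counter = TVar(0)
threads = [threading.Thread(target=lambda: [counter.atomically(lambda v: v + 1)
                                            for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value)  # 4000: every increment commits exactly once
```

Under contention the retry loop burns extra work, which is exactly why mapping transactions onto big versus LITTLE cores changes the power/performance balance.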
Abstract:
To analyze the characteristics and predict the dynamic behaviors of complex systems over time, comprehensive research to enable the development of systems that can intelligently adapt to evolving conditions and infer new knowledge with algorithms that are not predesigned is crucially needed. This dissertation research studies the integration of the techniques and methodologies resulting from the fields of pattern recognition, intelligent agents, artificial immune systems, and distributed computing platforms, to create technologies that can more accurately describe and control the dynamics of real-world complex systems. The need for such technologies is emerging in manufacturing, transportation, hazard mitigation, weather and climate prediction, homeland security, and emergency response. Motivated by the ability of mobile agents to dynamically incorporate additional computational and control algorithms into executing applications, mobile agent technology is employed in this research for adaptive sensing and monitoring in a wireless sensor network. Mobile agents are software components that can travel from one computing platform to another in a network and carry programs and data states that are needed for performing the assigned tasks. To support the generation, migration, communication, and management of mobile monitoring agents, an embeddable mobile agent system (Mobile-C) is integrated with sensor nodes. Mobile monitoring agents visit distributed sensor nodes, read real-time sensor data, and perform anomaly detection using the equipped pattern recognition algorithms. The optimal control of agents is achieved by mimicking the adaptive immune response and applying multi-objective optimization algorithms. The mobile agent approach offers the potential to reduce the communication load and energy consumption in monitoring networks.
The major research work of this dissertation project includes: (1) studying effective feature extraction methods for time series measurement data; (2) investigating the impact of the feature extraction methods and dissimilarity measures on the performance of pattern recognition; (3) researching the effects of environmental factors on the performance of pattern recognition; (4) integrating an embeddable mobile agent system with wireless sensor nodes; (5) optimizing agent generation and distribution using artificial immune system concepts and multi-objective algorithms; (6) applying mobile agent technology and pattern recognition algorithms for adaptive structural health monitoring and driving cycle pattern recognition; (7) developing a web-based monitoring network to enable the visualization and analysis of real-time sensor data remotely. Techniques and algorithms developed in this dissertation project will contribute to research advances in networked distributed systems operating under changing environments.
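Items (1) and (2) above, feature extraction and dissimilarity measures for time-series data, can be sketched with simple statistical features and a Euclidean distance; the feature set and the anomaly threshold below are illustrative stand-ins for the methods actually studied in the dissertation:

```python
import math

def extract_features(series):
    """Simple statistical features (mean, std, peak-to-peak) for one
    time-series window; stand-ins for the thesis's extraction methods."""
    n = len(series)
    mean = sum(series) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    return [mean, std, max(series) - min(series)]

def euclidean(a, b):
    """One possible dissimilarity measure between feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# A quiet baseline window versus a window with large oscillations:
baseline = extract_features([0.0, 0.1, -0.1, 0.05, -0.05])
sample   = extract_features([0.0, 2.0, -2.0, 1.5, -1.5])
anomalous = euclidean(baseline, sample) > 1.0   # illustrative threshold
```

An agent carrying this pair of functions could flag sensor windows whose feature vectors drift far from the baseline, which is the essence of the anomaly detection the agents perform on-node.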
An empirical investigation of the impact of global energy transition on Nigerian oil and gas exports
Abstract:
An 18-month embargo applies to the thesis; see the appendix for copyright materials.
Error, Bias, and Long-Branch Attraction in Data for Two Chloroplast Photosystem Genes in Seed Plants
Abstract:
Sequences of two chloroplast photosystem genes, psaA and psbB, together comprising about 3,500 bp, were obtained for all five major groups of extant seed plants and several outgroups among other vascular plants. Strongly supported, but significantly conflicting, phylogenetic signals were obtained in parsimony analyses from partitions of the data into first and second codon positions versus third positions. In the former, both genes agreed on monophyletic gymnosperms, with Gnetales closely related to certain conifers. In the latter, Gnetales are inferred to be the sister group of all other seed plants, with gymnosperms paraphyletic. None of the data supported the modern “anthophyte hypothesis,” which places Gnetales as the sister group of flowering plants. A series of simulation studies were undertaken to examine the error rate for parsimony inference. Three kinds of errors were examined: random error, systematic bias (both properties of finite data sets), and statistical inconsistency owing to long-branch attraction (an asymptotic property). Parsimony reconstructions were extremely biased for third-position data for psbB. Regardless of the true underlying tree, a tree in which Gnetales are sister to all other seed plants was likely to be reconstructed for these data. None of the combinations of genes or partitions permits the anthophyte tree to be reconstructed with high probability. Simulations of progressively larger data sets indicate the existence of long-branch attraction (statistical inconsistency) for third-position psbB data if either the anthophyte tree or the gymnosperm tree is correct. This is also true for the anthophyte tree using either psaA third positions or psbB first and second positions. A factor contributing to bias and inconsistency is extremely short branches at the base of the seed plant radiation, coupled with extremely high rates in Gnetales and non-seed plant outgroups.
M. J. Sanderson, M. F. Wojciechowski, J.-M. Hu, T. Sher Khan, and S. G. Brady
Abstract:
This thesis is a documented energy audit and long-term study of energy and water reduction in a ghee factory. Global production of ghee exceeds 4 million tonnes annually. The factory in this study refines dairy products by non-traditional centrifugal separation and produces 99.9% pure, canned, crystallised Anhydrous Milk Fat (ghee). Ghee is traditionally made by batch processing methods, which are less efficient than centrifugal separation. An in-depth systematic investigation was conducted of each item of major equipment: ammonia refrigeration, a steam boiler, canning equipment, pumps, heat exchangers and compressed air were all fine-tuned. Continuous monitoring of electrical usage showed that not every initiative worked; others had payback periods of less than a year. In 1994-95 energy consumption was 6,582 GJ and in 2003-04 it was 5,552 GJ, down 16% for a similar output. A significant reduction in water usage was achieved by reducing the airflow in the refrigeration evaporative condensers to match the refrigeration load. Water usage has fallen 68%, from 18 ML in 1994-95 to 5.78 ML in 2003-04. The methods reported in this thesis could be applied to other industries which have similar equipment, and to other ghee manufacturers.
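The reported reductions can be checked directly from the figures in the abstract: a drop from 6,582 GJ to 5,552 GJ is about 16%, and a drop from 18 ML to 5.78 ML is about 68%:

```python
def percent_drop(before, after):
    """Relative reduction, as a percentage of the starting value."""
    return (before - after) / before * 100

energy = percent_drop(6582, 5552)   # GJ, 1994-95 vs 2003-04
water  = percent_drop(18.0, 5.78)   # ML over the same period
print(f"energy down {energy:.0f}%, water down {water:.0f}%")
```

Both computed values agree with the percentages stated in the thesis abstract.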
Abstract:
Engineering education for elementary school students is a new and increasingly important domain of research by mathematics, science, technology, and engineering educators. Recent research has raised questions about the contexts of engineering problems that are meaningful, engaging, and inspiring for young students. In the present study, an environmental engineering activity was implemented in two classes of 11-year-old students in Cyprus. The problem required students to use the given data to develop a procedure for selecting among alternative countries from which to buy water. Students created a range of models that adequately solved the problem, although not all models took into account all of the data provided. The models varied in the number of problem factors taken into consideration and also in the different approaches adopted in dealing with the problem factors. At least two groups of students integrated into their models the environmental aspects of the problem (energy consumption, water pollution) and further refined their models. Results provide evidence that engineering model-eliciting activities can be successfully integrated into the elementary mathematics curriculum. These activities provide rich opportunities for students to deal with engineering contexts and to apply their learning in mathematics and science to solving real-world engineering problems.
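A selection procedure of the kind the students built amounts to weighted scoring across the problem factors. A minimal sketch with hypothetical country data and weights (all numbers are invented for illustration; lower totals are better):

```python
# Hypothetical per-country factors: cost per ML, energy use, pollution score.
countries = {
    "A": {"cost": 3.0, "energy": 2.0, "pollution": 1.0},
    "B": {"cost": 1.0, "energy": 3.0, "pollution": 2.0},
    "C": {"cost": 4.0, "energy": 1.0, "pollution": 1.0},
}
weights = {"cost": 0.5, "energy": 0.3, "pollution": 0.2}  # lower is better

def score(profile):
    """Weighted sum of the factors; the country with the smallest total wins."""
    return sum(weights[f] * profile[f] for f in weights)

best = min(countries, key=lambda c: score(countries[c]))
```

Changing the weights, or dropping a factor entirely, reproduces the variation the study observed between student models that did and did not include the environmental factors.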
Abstract:
1. Ecological data sets often use clustered measurements or use repeated sampling in a longitudinal design. Choosing the correct covariance structure is an important step in the analysis of such data, as the covariance describes the degree of similarity among the repeated observations. 2. Three methods for choosing the covariance are: the Akaike information criterion (AIC), the quasi-information criterion (QIC), and the deviance information criterion (DIC). We compared the methods using a simulation study and using a data set that explored effects of forest fragmentation on avian species richness over 15 years. 3. The overall success was 80.6% for the AIC, 29.4% for the QIC and 81.6% for the DIC. For the forest fragmentation study the AIC and DIC selected the unstructured covariance, whereas the QIC selected the simpler autoregressive covariance. Graphical diagnostics suggested that the unstructured covariance was probably correct. 4. We recommend using DIC for selecting the correct covariance structure.
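The AIC-based selection in point 2 trades goodness of fit against the number of covariance parameters. A sketch with hypothetical log-likelihoods and parameter counts (the fitted values below are invented; in practice they come from fitting each covariance structure to the same data):

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2 ln L; smaller is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits of three covariance structures to one data set:
fits = {
    "independence":   {"logL": -520.0, "k": 3},
    "autoregressive": {"logL": -501.0, "k": 4},
    "unstructured":   {"logL": -498.0, "k": 13},
}
best = min(fits, key=lambda m: aic(fits[m]["logL"], fits[m]["k"]))
```

Here the unstructured covariance fits best in raw likelihood, but its 13 parameters cost more than the gain, so the autoregressive structure wins, illustrating how the penalty term steers the choice.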
Abstract:
An educational priority of many nations is to enhance mathematical learning in early childhood. One area in need of special attention is that of statistics. This paper argues for a renewed focus on statistical reasoning in the beginning school years, with opportunities for children to engage in data modelling activities. Such modelling involves investigations of meaningful phenomena, deciding what is worthy of attention (i.e., identifying complex attributes), and then progressing to organising, structuring, visualising, and representing data. Results are reported from the first year of a three-year longitudinal study in which three classes of first-grade children and their teachers engaged in activities that required the creation of data models. The theme of “Looking after our Environment,” a component of the children’s science curriculum at the time, provided the context for the activities. Findings focus on how the children dealt with given complex attributes and how they generated their own attributes in classifying broad data sets, and the nature of the models the children created in organising, structuring, and representing their data.
Abstract:
Seasonal patterns have been found in a remarkable range of health conditions, including birth defects, respiratory infections and cardiovascular disease. Accurately estimating the size and timing of seasonal peaks in disease incidence is an aid to understanding the causes and possibly to developing interventions. With global warming increasing the intensity of seasonal weather patterns around the world, a review of the methods for estimating seasonal effects on health is timely. This is the first book on statistical methods for seasonal data written for a health audience. It describes methods for a range of outcomes (including continuous, count and binomial data) and demonstrates appropriate techniques for summarising and modelling these data. It has a practical focus and uses interesting examples to motivate and illustrate the methods. The statistical procedures and example data sets are available in an R package called ‘season’. Adrian Barnett is a senior research fellow at Queensland University of Technology, Australia. Annette Dobson is a Professor of Biostatistics at The University of Queensland, Australia. Both are experienced medical statisticians with a commitment to statistical education and have previously collaborated in research in the methodological developments and applications of biostatistics, especially to time series data. Among other projects, they worked together on revising the well-known textbook "An Introduction to Generalized Linear Models," third edition, Chapman Hall/CRC, 2008. In their new book they share their knowledge of statistical methods for examining seasonal patterns in health.
Abstract:
This dissertation develops the model of a prototype system for the digital lodgement of spatial data sets with statutory bodies responsible for the registration and approval of land related actions under the Torrens Title system. Spatial data pertain to the location of geographical entities together with their spatial dimensions and are classified as point, line, area or surface. This dissertation deals with a sub-set of spatial data, land boundary data that result from the activities performed by surveying and mapping organisations for the development of land parcels. The prototype system has been developed, utilising an event-driven paradigm for the user-interface, to exploit the potential of digital spatial data being generated from the utilisation of electronic techniques. The system provides for the creation of a digital model of the cadastral network and dependent data sets for an area of interest from hard copy records. This initial model is calibrated on registered control and updated by field survey to produce an amended model. The field-calibrated model then is electronically validated to ensure it complies with standards of format and content. The prototype system was designed specifically to create a database of land boundary data for subsequent retrieval by land professionals for surveying, mapping and related activities. Data extracted from this database are utilised for subsequent field survey operations without the need to create an initial digital model of an area of interest. Statistical reporting of differences resulting when subsequent initial and calibrated models are compared, replaces the traditional checking operations of spatial data performed by a land registry office. Digital lodgement of survey data is fundamental to the creation of the database of accurate land boundary data. 
This creation of the database is fundamental also to the efficient integration of accurate spatial data about land being generated by modern technology, such as global positioning systems and remote sensing and imaging, with land boundary information and other information held in Government databases. The prototype system developed provides for the delivery of accurate, digital land boundary data for the land registration process to ensure the continued maintenance of the integrity of the cadastre. Such data should also meet the more general and encompassing requirements of, and prove to be of tangible, longer-term benefit to, the developing electronic land information industry.
Abstract:
The Queensland Coal Industry Employees Health Scheme was implemented in 1993 to provide health surveillance for all Queensland coal industry workers. The government, mining employers and mining unions agreed that the scheme should operate for seven years. At the expiry of the scheme, an assessment of the contribution of health surveillance to meeting coal industry needs would be an essential part of determining a future health surveillance program. This research project has analysed the data made available between 1993 and 1998. All current coal industry employees have had at least one health assessment. The project examined how the centralised nature of the Health Scheme benefits industry by identifying key health issues and exploring their dimensions on a scale not possible for corporate-based health surveillance programs. There is a body of evidence indicating that health awareness - on the scale of the individual, the work group and the industry - is not a part of the mining industry culture. There is also growing evidence that there is a need for this culture to change and that some change is in progress. One element of this changing culture is a growth in the interest of the individual and the community in information on health status and benchmarks that are reasonably attainable. This interest opens the way for health education which contains personal, community and occupational elements. An important element of such education is the data on mine site health status. This project examined the role of health surveillance in the coal mining industry as a tool for generating the necessary information to promote an interest in health awareness. The Health Scheme database provides the material for the bulk of the analysis of this project. After a preliminary scan of the data set, more detailed analysis was undertaken on key health and related safety issues, including respiratory disorders, hearing loss and high blood pressure.
The data set facilitates control for confounding factors such as age and smoking status. Mines can be benchmarked to identify those with effective health management and those with particular challenges. While the study has confirmed the very low prevalence of restrictive airway disease such as pneumoconiosis, it has demonstrated a need to examine in detail the emergence of obstructive airway disease such as bronchitis and emphysema, which may be a consequence of the increasing use of high-dust longwall technology. The power of the Health Database's electronic data management is demonstrated by linking the health data to other data sets, such as injury data collected by the Department of Mines and Energy. The analysis examines serious strain-sprain injuries and has identified a marked difference between the underground and open-cut sectors of the industry. The analysis also considers productivity and OHS data to examine the extent to which there is correlation between any pairs of these and the previously analysed health parameters. This project has demonstrated that the current structure of the Coal Industry Employees Health Scheme has largely delivered to mines an effective health screening process. At the same time, the centralised nature of data collection and analysis has provided to the mines, the unions and the government substantial statistical cross-sectional data upon which strategies to more effectively manage health and related safety issues can be based.