805 results for hardware computing
Abstract:
The past decade has seen energy consumption in servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption poses a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but necessary. This dissertation tackles the twin challenges of reducing the energy consumption of server systems and reducing costs for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of total power consumption, and the cooling and humidification systems, which account for about 30%. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy-proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DVFS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for greater energy savings, and corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to IDC cost management, helping OSPs conserve energy, manage their electricity costs, and lower their carbon emissions. We develop an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively. With the rapid development of cloud services, we also investigate reducing server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that effectively maps VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The matrix takes into account resource limitations, VM operation overheads, server reliability, and energy efficiency. The evolution of Internet Data Centers and the increasing demand for web services pose great challenges for improving the energy efficiency of IDCs. We also identify several potential areas for future research in each chapter.
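To make the dispatching idea concrete, the following is a minimal sketch of price-aware request routing under capacity limits. It is an illustration only: the prices, capacities, and greedy policy are assumptions, not the dissertation's optimization model, which additionally handles carbon limits and offset-market volatility.

```python
# Hypothetical sketch of energy-aware load dispatching: route each batch of
# requests to the data centers with the lowest electricity price that still
# have spare capacity. Prices, capacities, and the greedy policy are
# illustrative assumptions, not the dissertation's actual model.

def dispatch(requests, centers):
    """centers: list of dicts with 'name', 'price' ($/kWh), 'capacity' (requests)."""
    assignment = {c["name"]: 0 for c in centers}
    # Cheapest electricity first.
    for center in sorted(centers, key=lambda c: c["price"]):
        take = min(requests, center["capacity"])
        assignment[center["name"]] = take
        requests -= take
        if requests == 0:
            break
    if requests > 0:
        raise RuntimeError("demand exceeds total capacity")
    return assignment

if __name__ == "__main__":
    centers = [
        {"name": "us-east", "price": 0.08, "capacity": 5000},
        {"name": "eu-west", "price": 0.12, "capacity": 4000},
        {"name": "us-west", "price": 0.06, "capacity": 3000},
    ]
    print(dispatch(7000, centers))  # fills us-west first, then us-east
```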
Abstract:
Reuse distance analysis, the prediction of how many distinct memory addresses will be accessed between two accesses to a given address, has been established as a useful technique in profile-based compiler optimization, but the cost of collecting the memory reuse profile has been prohibitive for some applications. In this report, we propose using the hardware monitoring facilities available in existing CPUs to gather an approximate reuse distance profile. The difficulties associated with this monitoring technique are discussed, most importantly that there is no obvious link between the reuse profile produced by hardware monitoring and the actual reuse behavior. Potential applications which would be made viable by a reliable hardware-based reuse distance analysis are identified.
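For reference, the quantity being profiled can be computed exactly in software from an address trace; the sketch below illustrates the definition itself, not the report's hardware-based technique. The quadratic implementation is for clarity; practical profilers use more efficient data structures.

```python
# Illustrative software computation of reuse distance: for each access,
# count the distinct addresses touched since the previous access to the
# same address (None for first-time accesses). This is the quantity the
# report approximates via hardware monitoring, not its mechanism.

def reuse_distances(trace):
    last_seen = {}          # address -> index of its previous access
    distances = []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            # Distinct addresses strictly between the two accesses to `addr`.
            window = trace[last_seen[addr] + 1 : i]
            distances.append(len(set(window)))
        else:
            distances.append(None)  # cold access: no prior reference
        last_seen[addr] = i
    return distances

print(reuse_distances(["a", "b", "c", "a", "b"]))  # [None, None, None, 2, 2]
```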
Abstract:
Small clusters of gallium oxide, a technologically important high-temperature ceramic, together with the interaction of nucleic acid bases with graphene and small-diameter carbon nanotubes, are the focus of the first-principles calculations in this work. A high-performance parallel computing platform was also developed to perform these calculations at Michigan Tech. The first-principles calculations are based on density functional theory, employing either the local density or a gradient-corrected approximation together with plane-wave and Gaussian basis sets. Bulk Ga2O3 is known to be a very good candidate for fabricating electronic devices that operate at high temperatures. To explore the properties of Ga2O3 at the nanoscale, we have performed a systematic theoretical study of small polyatomic gallium oxide clusters. The calculations find that all lowest-energy isomers of GamOn clusters are dominated by Ga-O bonds over metal-metal or oxygen-oxygen bonds. Analysis of atomic charges suggests the clusters are highly ionic, similar to bulk Ga2O3. In the study of the sequential oxidation of these clusters starting from Ga2O, it is found that the most stable isomers display up to four different backbones of constituent atoms. Furthermore, the predicted configuration of the ground state of Ga2O was recently confirmed by experimental results from Neumark's group. Guided by the computational demands of the gallium oxide cluster study, the performance-related challenge of building high-performance computing platforms has also been addressed. Several engineering aspects were thoroughly studied during the design, development, and implementation of the high-performance parallel computing platform, rama, at Michigan Tech. In an attempt to stay true to the principles of the Beowulf revolution, the rama cluster was extensively customized to make it easy to understand and use, for administrators as well as end users. Following the results of benchmark calculations, and to keep up with the complexity of the systems under study, rama has been expanded to a total of sixty-four processors. Interest in the non-covalent interaction of DNA with carbon nanotubes has steadily increased over the past several years. This hybrid system, at the junction of the biological regime and the nanomaterials world, possesses features that make it very attractive for a wide range of applications. Using the in-house computational power available, we have studied the details of the interaction of nucleic acid bases with a graphene sheet as well as with a high-curvature, small-diameter carbon nanotube. The calculated trend in the binding energies strongly suggests that the polarizability of the base molecules determines the interaction strength of the nucleic acid bases with graphene. Comparing the results obtained here for physisorption on the small-diameter nanotube with those from the study on graphene, the interaction strength of the nucleic acid bases is smaller for the tube. Thus, these results show that the effect of introducing curvature is to reduce the binding energy. The binding energies for the two extreme cases of negligible curvature (i.e., a flat graphene sheet) and of very high curvature (i.e., a small-diameter nanotube) may be considered as upper and lower bounds. This finding represents an important step towards a better understanding of the experimentally observed sequence-dependent interaction of DNA with carbon nanotubes.
Abstract:
The development of embedded control systems for a Hybrid Electric Vehicle (HEV) is a challenging task due to the multidisciplinary nature of the HEV powertrain and its complex structure. Hardware-In-the-Loop (HIL) simulation provides an open and convenient environment for modeling, prototyping, testing, and analyzing HEV control systems. This thesis focuses on the development of such a HIL system for hybrid electric vehicle study. The hardware architecture of the HIL system, including the dSPACE eDrive HIL simulator, MicroAutoBox II, and MotoTron Engine Control Module (ECM), is introduced. Software used in the system includes the dSPACE Real-Time Interface (RTI) blockset, Automotive Simulation Models (ASM), Matlab/Simulink/Stateflow, Real-Time Workshop, ControlDesk Next Generation, ModelDesk, and MotoHawk/MotoTune. A case study of the development of control systems for a single-shaft parallel hybrid electric vehicle is presented to summarize the functionality of this HIL system.
Abstract:
Many methodologies dealing with the prediction or simulation of soft tissue deformations on medical image data require preprocessing of the data to produce a different shape representation that complies with standard methodologies, such as mass-spring networks or the finite element method (FEM). On the other hand, methodologies working directly in the image space normally do not take into account the mechanical behavior of tissues and tend to lack the physics foundations driving soft tissue deformations. This chapter presents a method to simulate soft tissue deformations based on coupled concepts from image analysis and mechanics theory. The proposed methodology is based on a robust stochastic approach that takes into account material properties retrieved directly from the image, concepts from continuum mechanics, and FEM. The optimization framework is solved within a hierarchical Markov random field (HMRF) implemented on the graphics processing unit (GPU).
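As background for readers unfamiliar with MRF-based optimization, the sketch below shows the basic energy-minimization idea on a flat (non-hierarchical) label grid using iterated conditional modes. It is a generic illustration only; the chapter's HMRF formulation, its image-derived material terms, and its GPU implementation are not reproduced here.

```python
# Generic (non-hierarchical) illustration of MRF labeling via iterated
# conditional modes: each pixel picks the label minimizing a data term
# plus a Potts smoothness penalty against its 4-neighbors. All costs
# here are synthetic; this only shows the energy-minimization idea.
import numpy as np

def icm_step(labels, data_cost, beta=1.0):
    """labels: (H, W) int array; data_cost: (H, W, L) per-label cost."""
    H, W, L = data_cost.shape
    new = labels.copy()
    for y in range(H):
        for x in range(W):
            best, best_e = labels[y, x], np.inf
            for l in range(L):
                e = data_cost[y, x, l]
                # Potts smoothness term over the 4-connected neighborhood.
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] != l:
                        e += beta
                if e < best_e:
                    best, best_e = l, e
            new[y, x] = best
    return new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data_cost = rng.random((8, 8, 2))          # synthetic per-label costs
    labels = data_cost.argmin(axis=2)          # data-only initialization
    labels = icm_step(labels, data_cost, beta=0.5)
```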
Abstract:
Background: Access to health care can be described along four dimensions: geographic accessibility, availability, financial accessibility, and acceptability. Geographic accessibility measures how physically accessible resources are for the population, while availability reflects what resources are available and in what amount. Combining these two types of measure into a single index provides a measure of geographic (or spatial) coverage, which is an important measure for assessing the degree of accessibility of a health care network. Results: This paper describes the latest version of AccessMod, an extension to the Geographical Information System ArcView 3.x, and provides an example of the application of this tool. AccessMod 3 allows one to compute geographic coverage of health care using terrain information and population distribution. Four major types of analysis are available in AccessMod: (1) modeling the coverage of catchment areas linked to an existing health facility network based on travel time, to provide a measure of physical accessibility to health care; (2) modeling geographic coverage according to the availability of services; (3) projecting the coverage of a scaling-up of an existing network; and (4) providing information for cost-effectiveness analysis when little information about the existing network is available. In addition to integrating travel time, population distribution, and the population coverage capacity specific to each health facility in the network, AccessMod can incorporate the influence of landscape components (e.g. topography, river and road networks, vegetation) that impact travel time to and from facilities. Topographical constraints can be taken into account through an anisotropic analysis that considers the direction of movement. We provide an example of the application of AccessMod in the southern part of Malawi that shows the influence of landscape constraints and modes of transportation on geographic coverage. Conclusion: By incorporating the demand (population) and the supply (capacities of health care centers), AccessMod provides a unifying tool to efficiently assess the geographic coverage of a network of health care facilities. This tool should be of particular interest to developing countries that have relatively good geographic information on population distribution, terrain, and health facility locations.
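To illustrate the kind of travel-time computation on which such coverage analyses rest, here is a generic sketch that accumulates least travel time from facility cells over a friction raster. It is not AccessMod's implementation (which is ArcView-based and supports anisotropic movement); the isotropic friction model and all values are assumptions for illustration.

```python
# Generic sketch of least-travel-time accumulation: Dijkstra over a raster
# of per-cell crossing times (a "friction" surface), seeded at facility
# cells. Friction values and the isotropic 4-neighbor model are
# illustrative, not AccessMod's anisotropic implementation.
import heapq

def travel_time(friction, facilities):
    """friction: 2D list of minutes to cross each cell; facilities: [(row, col)]."""
    H, W = len(friction), len(friction[0])
    time = [[float("inf")] * W for _ in range(H)]
    pq = [(0.0, r, c) for r, c in facilities]
    for _, r, c in pq:
        time[r][c] = 0.0
    heapq.heapify(pq)
    while pq:
        t, r, c = heapq.heappop(pq)
        if t > time[r][c]:
            continue  # stale queue entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                # Average crossing time of the two cells on the move.
                nt = t + 0.5 * (friction[r][c] + friction[nr][nc])
                if nt < time[nr][nc]:
                    time[nr][nc] = nt
                    heapq.heappush(pq, (nt, nr, nc))
    return time

grid = [[1, 1, 4],
        [1, 8, 4],
        [1, 1, 1]]
print(travel_time(grid, facilities=[(0, 0)]))
```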
Abstract:
There is a growing interest in simulating natural phenomena in computer graphics applications. Animating natural scenes in real time is one of the most challenging problems due to the inherent complexity of their structure, formed by millions of geometric entities, and the interactions that happen within. Forests are an example of a natural scene needed in games and simulation programs. Forests are difficult to render because of the huge number of geometric entities and the large amount of detail to be represented. Moreover, the interactions between the objects (grass, leaves) and external forces such as wind are complex to model. In this paper we concentrate on rendering falling leaves at low cost. We present a technique that exploits graphics hardware to render thousands of leaves with different falling paths in real time and with low memory requirements.
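One common way to keep memory low in this setting is to make each leaf's path a pure function of a per-leaf seed and the current time, so no per-frame state is stored. The sketch below illustrates that idea only; it is an assumed, simplified path model (descent plus sinusoidal sway), not the paper's actual hardware technique, and all constants are invented for illustration.

```python
# Illustrative closed-form falling-leaf path: slow descent combined with
# per-leaf sinusoidal sway, evaluated purely from (seed, t). Because no
# per-frame state is kept, thousands of leaves can be animated cheaply
# (e.g. instanced, with this evaluated per leaf on the GPU). Constants
# are illustrative assumptions, not taken from the paper.
import math, random

def leaf_position(seed, t, start_height=30.0, fall_speed=0.8):
    rng = random.Random(seed)                # deterministic per-leaf parameters
    phase = rng.uniform(0.0, 2.0 * math.pi)
    sway_amp = rng.uniform(0.5, 2.0)         # horizontal drift amplitude (m)
    sway_freq = rng.uniform(0.3, 0.9)        # sway cycles per second
    x0, z0 = rng.uniform(-50, 50), rng.uniform(-50, 50)
    y = max(0.0, start_height - fall_speed * t)  # settle on the ground
    x = x0 + sway_amp * math.sin(sway_freq * t + phase)
    z = z0 + sway_amp * math.cos(0.7 * sway_freq * t + phase)
    return (x, y, z)

# Same seed and time always give the same point, so no per-leaf state is stored.
print(leaf_position(seed=42, t=3.0))
```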
Abstract:
Funded by the US-EU Atlantis Program, the International Cooperation in Ambient Computing Education Project is establishing an international knowledge-building community for developing a broader computer science curriculum aimed at preparing students for real-world problems in a multidisciplinary, global world. The project is a collaboration among Troy University (USA), University of Sunderland (UK), FernUniversität in Hagen (Germany), Universidade do Algarve (Portugal), University of Arkansas at Little Rock (USA), and San Diego State University (USA). The curriculum will include aspects of social science, cognitive science, human-computer interaction, organizational studies, global studies, and particular application areas, as well as core computer science subjects. Programs offered at partner institutions will form trajectories through the curriculum. A degree will be defined in terms of combinations of trajectories that satisfy degree requirements set by accreditation organizations. This is expected to lead to joint- or dual-degree programs among the partner institutions in the future. This paper describes the goals and activities of the project and discusses implementation issues.
Abstract:
The ongoing development of concepts and systems for using digital information in industrial settings opens up a wide range of possibilities for optimizing information processing, and with it the effectiveness and efficiency of processes. However, if the relevant product or process data are made available only in digital form, it becomes increasingly difficult for people to engage with this virtual world. Against this background, RFID technology is used as an example to show how digital information can be made usable for people through systems integrated into the workflow. Through the development of a system for paperless production and logistics, example deployment scenarios are presented for supporting workers in assembly processes and for avoiding errors in order picking. To this end, a head-mounted display for visualizing the information is used together with a mobile RFID reader, by means of which the digital transponder data can be used without additional effort on the part of the user.
Abstract:
Cloud computing is a new development based on the premise that data and applications are stored centrally and can be accessed through the Internet. This article sets up a broad analysis of how the emergence of clouds relates to European competition law, network regulation, and electronic commerce regulation, which we relate to challenges for the further development of cloud services in Europe: interoperability and data portability between clouds; issues relating to vertical integration between clouds and Internet Service Providers; and potential problems for clouds operating in the European Internal Market. We find that these issues are not adequately addressed across the legal frameworks we analyse, and argue for further research into how to better facilitate innovative convergent services such as cloud computing through European policy, especially in light of the ambitious digital agenda that the European Commission has set out.
Abstract:
The development of the Internet has made it possible to transfer data 'around the globe at the click of a mouse'. Fresh business models such as cloud computing, the newest driver illustrating the speed and breadth of the online environment, allow this data to be processed across national borders on a routine basis. A number of factors cause the Internet to blur the lines between public and private space. Firstly, globalization and the outsourcing of economic actors drive an ever-growing exchange of personal data. Secondly, security pressure in the name of the legitimate fight against terrorism opens access to a significant amount of data for an increasing number of public authorities. And finally, the tools of the digital society accompany everyone at each stage of life, leaving permanent individual and borderless traces in both space and time. Therefore, calls from both the public and private sectors for an international legal framework for privacy and data protection have become louder. Companies such as Google and Facebook have also come under continuous pressure from governments and citizens to reform their use of data. Thus, Google was not alone in calling for the creation of 'global privacy standards'. Efforts are underway to review established privacy foundation documents, and there are similar efforts to look at standards in global approaches to privacy and data protection. Among the most remarkable steps was the Montreux Declaration, in which the privacy commissioners appealed to the United Nations 'to prepare a binding legal instrument which clearly sets out in detail the rights to data protection and privacy as enforceable human rights'. This appeal was repeated in 2008 at the 30th international conference held in Strasbourg, at the 31st conference in 2009 in Madrid, and in 2010 at the 32nd conference in Jerusalem. In a globalized world, free data flow has become an everyday need. Thus, the aim of global harmonization should be that it makes no difference for data users or data subjects whether data processing takes place in one country or in several. Concern has been expressed that data users might seek to avoid privacy controls by moving their operations to countries with lower standards in their privacy laws, or no such laws at all. To control that risk, some countries have implemented special controls in their domestic law. Again, such controls may interfere with the need for free international data flow. A formula has to be found to ensure that privacy protection at the international level does not prejudice this principle.
Abstract:
Applying location-focused data protection law within the context of a location-agnostic cloud computing framework is fraught with difficulties. While the Proposed EU Data Protection Regulation has introduced many changes to the current data protection framework, the complexities of data processing in the cloud involve various layers and intermediary actors that have not been properly addressed. This leaves gaps in the regulation when it is analyzed in cloud scenarios. This paper gives a brief overview of the relevant provisions of the regulation that will have an impact on cloud transactions and addresses the missing links. It is hoped that these loopholes will be reconsidered before the final version of the law is passed, in order to avoid unintended consequences.
Abstract:
In this paper we present several contributions on collision detection optimization centered on hardware performance. We focus on the broad phase, the first step of the collision detection process, and propose three new ways of parallelizing the well-known Sweep and Prune algorithm. We first develop a multi-core model that takes into account the number of available cores. The multi-core architecture enables us to distribute geometric computations using multi-threading. Critical writing sections and thread idling have been minimized by introducing new data structures for each thread. Programming with directives, such as OpenMP, appears to be a good compromise for code portability. We then propose a new GPU-based algorithm, also based on Sweep and Prune, that has been adapted to multi-GPU architectures. Our technique is based on a spatial subdivision method used to distribute computations among GPUs. Results show that a significant speed-up can be obtained by moving from one to four GPUs in a large-scale environment.
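For readers unfamiliar with the baseline, below is a minimal single-axis Sweep and Prune sketch, i.e. the sequential algorithm the paper parallelizes. The paper's multi-core, GPU, and multi-GPU variants are not reproduced; this only shows the sort-and-sweep idea that produces candidate pairs for the narrow phase.

```python
# Minimal single-axis Sweep and Prune broad phase: sort box intervals
# along one axis, sweep in order, and report pairs whose intervals
# overlap on that axis. Candidate pairs would then be checked on the
# remaining axes and passed to the narrow phase.

def sweep_and_prune(boxes):
    """boxes: list of (min_x, max_x) intervals; returns candidate index pairs."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0])
    active, pairs = [], []
    for i in order:
        min_i = boxes[i][0]
        # Drop boxes whose interval ends before this one starts.
        active = [j for j in active if boxes[j][1] >= min_i]
        pairs.extend((j, i) for j in active)  # every remaining box overlaps i
        active.append(i)
    return pairs

print(sweep_and_prune([(0, 2), (1, 3), (5, 6)]))  # [(0, 1)]
```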