978 results for Data Centers
Abstract:
The ocean plays an important role in modulating the mass balance of the polar ice sheets by interacting with the ice shelves in Antarctica and with the marine-terminating outlet glaciers in Greenland. Given that the flux of warm water onto the continental shelf and into the sub-ice cavities is steered by complex bathymetry, a detailed topography data set is an essential ingredient for models that address ice-ocean interaction. Following the spirit of the global RTopo-1 data set, we compiled consistent maps of global ocean bathymetry, upper and lower ice surface topographies, and global surface height on a spherical grid, now with 30-arc-second resolution. We used the General Bathymetric Chart of the Oceans (GEBCO, 2014) as the backbone and added the International Bathymetric Chart of the Arctic Ocean version 3 (IBCAOv3) and the International Bathymetric Chart of the Southern Ocean (IBCSO) version 1. While RTopo-1 primarily aimed at a good and consistent representation of the Antarctic ice sheet, ice shelves and sub-ice cavities, RTopo-2 now also contains ice topographies of the Greenland ice sheet and outlet glaciers. In particular, we aimed at a good representation of the fjord and shelf bathymetry surrounding the Greenland continent. We corrected data from earlier gridded products in the areas of Petermann Glacier, Hagen Bræ and Sermilik Fjord, assuming that sub-ice and fjord bathymetries roughly follow plausible Last Glacial Maximum ice flow patterns. For the continental shelf off northeast Greenland and the floating ice tongue of Nioghalvfjerdsfjorden Glacier at about 79°N, we incorporated a high-resolution digital bathymetry model that considers the original multibeam survey data for the region. Radar data for surface topographies of the floating ice tongues of Nioghalvfjerdsfjorden Glacier and Zachariæ Isstrøm were obtained from the data centers of the Technical University of Denmark (DTU), Operation IceBridge (NASA/NSF) and the Alfred Wegener Institute (AWI).
For the Antarctic ice sheet and ice shelves, RTopo-2 largely relies on the Bedmap2 product, but applies corrections to the geometry of the Getz, Abbot and Fimbul ice shelf cavities.
Abstract:
Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade. Furthermore, energy production is currently also one of the primary sources of pollution. These concerns are becoming more important in data centers. As more computational power is required to serve hundreds of millions of users, bigger data centers are becoming necessary, which results in higher electrical energy consumption. Of all the energy used in data centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data centers; it is also important in mobile devices, where battery-based energy is a scarce resource. Reducing the energy consumption of these devices will allow them to last longer and recharge less frequently. Saving energy in computer systems is a challenging problem. Improving a system's energy efficiency usually comes at the cost of compromises in other areas such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance.
First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems. Second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy. Finally, we identify and explore solutions for the page fetch-before-update problem in caching systems, which can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.
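The disk spin-down trade-off the abstract mentions can be sketched in a few lines. Everything below (the `Disk` class, the threshold, and the wake-up penalty) is an illustrative toy model, not the thesis's actual mechanism:

```python
# Toy sketch of an idle-timeout spin-down policy for one disk in a
# multi-disk array: a disk is spun down after IDLE_THRESHOLD seconds
# without I/O, trading a wake-up latency penalty for energy savings.
# All names and numbers are illustrative, not from the thesis.

IDLE_THRESHOLD = 10.0   # seconds of idleness before spin-down
SPINUP_LATENCY = 4.0    # seconds to service a request on a sleeping disk

class Disk:
    def __init__(self):
        self.spinning = True
        self.last_access = 0.0

    def tick(self, now):
        # Spin the disk down once it has been idle long enough.
        if self.spinning and now - self.last_access >= IDLE_THRESHOLD:
            self.spinning = False

    def access(self, now):
        # Return the latency observed by this request; a sleeping disk
        # must first spin back up.
        latency = 0.0 if self.spinning else SPINUP_LATENCY
        self.spinning = True
        self.last_access = now
        return latency

disk = Disk()
print(disk.access(0.0))   # spinning: fast access
disk.tick(15.0)           # idle > threshold: spin down
print(disk.access(15.0))  # wake-up penalty on next access
```

The thesis's flash cache attacks exactly this penalty: by absorbing accesses in flash, disks stay idle past the threshold more often without the requester paying the spin-up latency.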
Abstract:
The increasing need for computational power in areas such as weather simulation, genomics or Internet applications has led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main points. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation for the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
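A toy version of the site-selection idea in the third point might score each site by a weighted mix of monetary cost and expected queue wait. The scoring function, site fields and weights below are hypothetical, not the dissertation's actual policy:

```python
# Illustrative sketch (not the work's actual strategy): choose an
# execution site for a parallel job by minimizing a weighted sum of
# estimated cost and a crude queue-wait proxy. Site data are invented.

def select_site(job_cpu_hours, sites, cost_weight=0.5):
    """Return the name of the site with the lowest weighted score."""
    def score(site):
        cost = job_cpu_hours * site["price_per_cpu_hour"]
        # Proxy for slowdown: queued jobs per free execution slot.
        wait = (site["queued_jobs"] / site["free_slots"]
                if site["free_slots"] else float("inf"))
        return cost_weight * cost + (1.0 - cost_weight) * wait
    return min(sites, key=score)["name"]

sites = [
    {"name": "site-a", "price_per_cpu_hour": 0.10, "free_slots": 8, "queued_jobs": 4},
    {"name": "site-b", "price_per_cpu_hour": 0.04, "free_slots": 2, "queued_jobs": 10},
]
print(select_site(100, sites))                    # balances cost and wait
print(select_site(100, sites, cost_weight=0.0))   # wait only
```

Shifting `cost_weight` moves the choice between the cheap-but-congested site and the expensive-but-idle one, which is the slowdown-versus-cost tension the abstract describes.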
Abstract:
Cloud computing realizes the long-held dream of converting computing capability into a type of utility. It has the potential to fundamentally change the landscape of the IT industry and our way of life. However, as cloud computing expands substantially in both scale and scope, ensuring its sustainable growth is a critical problem. Service providers have long been suffering from high operational costs, especially those associated with the skyrocketing power consumption of large data centers. In the meantime, while efficient power/energy utilization is indispensable for the sustainable growth of cloud computing, service providers must also satisfy a user's quality of service (QoS) requirements. This problem becomes even more challenging considering the increasingly stringent power/energy and QoS constraints, as well as other factors such as the highly dynamic, heterogeneous, and distributed nature of the computing infrastructures. In this dissertation, we study the problem of delay-sensitive cloud service scheduling for the sustainable development of cloud computing. We first focus our research on the development of scheduling methods for delay-sensitive cloud services on a single server, with the goal of maximizing a service provider's profit. We then extend our study to scheduling cloud services in distributed environments. In particular, we develop a queue-based model and derive efficient request dispatching and processing decisions in a multi-electricity-market environment to improve profits for service providers. We next study a problem of multi-tier service scheduling. By carefully assigning sub-deadlines to the service tiers, our approach can significantly improve resource usage efficiency with statistically guaranteed QoS. Finally, we study the power-conscious resource provisioning problem for service requests with different QoS requirements.
By properly sharing computing resources among different requests, our method statistically guarantees all QoS requirements with a minimized number of powered-on servers and thus minimized power consumption. The significance of our research is that it is one part of the integrated effort from both industry and academia to ensure the sustainable growth of cloud computing as it continues to evolve and change our society profoundly.
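The multi-electricity-market dispatching idea can be caricatured as "send each request to the data center with the cheapest current electricity price that still has capacity." The dissertation's actual model is queue-based and far richer; the prices, capacities and greedy rule below are illustrative only:

```python
# Greedy sketch of price-aware request dispatching across data centers
# sitting in different electricity markets. Invented numbers; the
# dissertation derives its decisions from a queue-based model instead.

def dispatch(requests, centers):
    """Assign each request id to a center name; returns the mapping."""
    assignment = {}
    for req in requests:
        # Consider only centers that still have headroom.
        candidates = [c for c in centers if c["load"] < c["capacity"]]
        if not candidates:
            assignment[req] = None  # all centers saturated: reject/queue
            continue
        best = min(candidates, key=lambda c: c["price_kwh"])
        best["load"] += 1
        assignment[req] = best["name"]
    return assignment

centers = [
    {"name": "dc-east", "price_kwh": 0.12, "capacity": 2, "load": 0},
    {"name": "dc-west", "price_kwh": 0.07, "capacity": 1, "load": 0},
]
print(dispatch(["r1", "r2", "r3"], centers))
```

Even this toy shows the basic profit lever: as long as a cheap-power site has slack, work flows there, and the expensive site only absorbs the overflow.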
Abstract:
Cloud computing enables independent end users and applications to share data and pooled resources, possibly located in geographically distributed data centers, in a fully transparent way. This need is particularly felt by scientific applications that must exploit distributed resources in an efficient and scalable way for the processing of large amounts of data. This paper proposes an open solution to deploy a Platform as a Service (PaaS) over a set of multi-site data centers by applying open-source virtualization tools to facilitate operation among virtual machines while optimizing the usage of distributed resources. An experimental testbed is set up in an OpenStack environment to obtain evaluations with different types of TCP sample connections, to demonstrate the functionality of the proposed solution and to obtain throughput measurements in relation to relevant design parameters.
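The paper's TCP throughput measurements ran between OpenStack virtual machines; as a minimal stand-in, the sketch below times a bulk transfer over a loopback socket. The sizes and helper names are our own, not the paper's testbed:

```python
import socket
import threading
import time

# Minimal loopback TCP throughput probe: a server thread drains bytes
# while the client times a bulk send. Real measurements would run
# between VMs on separate hosts, not over loopback.

def _drain(server_sock, total):
    conn, _ = server_sock.accept()
    received = 0
    while received < total:
        data = conn.recv(65536)
        if not data:
            break
        received += len(data)
    conn.close()

def measure_throughput(total_bytes=8 * 1024 * 1024):
    """Send total_bytes over a loopback TCP connection; return MB/s."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))           # ephemeral port
    server.listen(1)
    port = server.getsockname()[1]
    t = threading.Thread(target=_drain, args=(server, total_bytes))
    t.start()

    client = socket.create_connection(("127.0.0.1", port))
    chunk = b"x" * 65536
    start = time.perf_counter()
    sent = 0
    while sent < total_bytes:
        client.sendall(chunk)
        sent += len(chunk)
    client.close()
    t.join()
    server.close()
    elapsed = time.perf_counter() - start
    return (sent / (1024 * 1024)) / elapsed

print(f"{measure_throughput():.1f} MB/s")
```

Sweeping `total_bytes` and the chunk size is the loopback analogue of the paper's "different types of TCP sample connections."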
Abstract:
Sub-ice shelf circulation and freezing/melting rates in ocean general circulation models depend critically on an accurate and consistent representation of cavity geometry. Existing global or pan-Antarctic data sets have turned out to contain various inconsistencies and inaccuracies. The goal of this work is to compile independent regional fields into a global data set. We use the S-2004 global 1-minute bathymetry as the backbone and add an improved version of the BEDMAP topography for an area that roughly coincides with the Antarctic continental shelf. Locations of the merging line have been carefully adjusted in order to get the best out of each data set. High-resolution gridded data for the upper and lower ice surface topography and cavity geometry of the Amery, Fimbul, Filchner-Ronne, Larsen C and George VI Ice Shelves, and for Pine Island Glacier, have been carefully merged into the ambient ice and ocean topographies. Multibeam survey data for bathymetry in the former Larsen B cavity and the southeastern Bellingshausen Sea have been obtained from the data centers of the Alfred Wegener Institute (AWI), the British Antarctic Survey (BAS) and the Lamont-Doherty Earth Observatory (LDEO), gridded, and again carefully merged into the existing bathymetry map. The global 1-minute data set (RTopo-1 Version 1.0.5) has been split into two netCDF files. The first contains digital maps of global bedrock topography, ice bottom topography, and surface elevation. The second contains the auxiliary maps for data sources and the surface type mask. A regional subset that covers all variables for the region south of 50°S is also available in netCDF format. Data sets for the locations of grounding and coast lines are provided in ASCII format.
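Merging a regional grid into a global backbone along an adjusted merging line usually involves a transition zone so the two fields join without a step. The 1-D blend below is a toy illustration of that idea; the real RTopo-1 procedure, merge locations and weights are the authors' own:

```python
# Toy 1-D illustration of blending a regional field (e.g. BEDMAP) into
# a global backbone (e.g. S-2004) across a linear transition zone.
# Grid index stands in for latitude; all values are invented.

def blend(global_vals, regional_vals, merge_start, blend_width):
    """Use regional data before merge_start, global data past the blend
    zone, and a linear weight in between."""
    out = []
    for i, (g, r) in enumerate(zip(global_vals, regional_vals)):
        if i < merge_start:
            out.append(r)                        # pure regional
        elif i >= merge_start + blend_width:
            out.append(g)                        # pure global
        else:
            w = (i - merge_start) / blend_width  # 0 -> regional, 1 -> global
            out.append((1 - w) * r + w * g)
    return out

# A 100 m offset between the two sources is smoothed, not stepped:
print(blend([0.0] * 6, [100.0] * 6, merge_start=2, blend_width=2))
```

Without such a zone, any systematic depth offset between the two sources would appear as an artificial scarp at the merging line, which is exactly what careful merging avoids.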
Abstract:
Recently, considerable effort has been invested in developing silicon modulators for optical telecommunications and their application domains. These modulators are useful for short-reach, high-speed data centers. This work therefore focuses on the characterization of two types of silicon-integrated Bragg grating modulators with an interleaved PN junction, whose purpose is to modulate the Bragg wavelength by applying a reverse bias voltage that depletes the carriers within the waveguide. In the first Bragg grating modulator, the period of the PN junction differs from that of the Bragg grating, while the second modulator has its PN junction period matched to that of the Bragg grating. These differences lead to different modulator behavior, and hence to data transmission of different quality, which is what we seek to characterize. The advantage of this Bragg grating modulator is that it is relatively simple to design and uses a uniform Bragg grating whose characteristics are already very well known. The first step in characterizing these modulators was to perform purely optical measurements to observe the spectral response in reflection and in transmission. We then followed the usual approach, performing DC measurements on the modulators. This thesis also reports practical results on the behavior of the electrodes and the PN junction, as well as data transmission results for these modulators using OOK and PAM-4 modulation, highlighting the differences in modulation efficiency between the two modulators.
We then discuss the relevance of this design choice with respect to what can currently be found in the literature.
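The thesis compares OOK against PAM-4 transmission. As a small aside, PAM-4 carries two bits per symbol on four amplitude levels; the Gray-coded mapping below is a common choice, not necessarily the one used in these experiments:

```python
# Gray-coded PAM-4 mapping: adjacent amplitude levels differ by one
# bit, so an amplitude error of one level costs at most one bit error.
# Levels are normalized to 0..3; the mapping choice is illustrative.

GRAY_MAP = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}

def pam4_symbols(bits):
    """Map pairs of bits to PAM-4 levels (2 bits per symbol)."""
    assert len(bits) % 2 == 0, "PAM-4 needs an even number of bits"
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_symbols([0, 0, 0, 1, 1, 1, 1, 0]))  # one level per bit pair
```

Doubling the bits per symbol halves the required symbol rate relative to OOK, at the price of a tighter eye between amplitude levels, which is why modulation efficiency matters so much in the comparison.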
Abstract:
The popularity of cloud computing has led to a dramatic increase in the number of data centers in the world. Ever-increasing computational demands along with the slowdown in technology scaling have ushered in an era of power-limited servers. Techniques such as near-threshold computing (NTC) can be used to improve energy efficiency in the post-Dennard-scaling era. This paper describes an architecture based on the FD-SOI process technology for near-threshold operation in servers. Our work explores the trade-offs in energy and performance when running a wide range of applications found in private and public clouds, ranging from traditional scale-out applications, such as web search or media streaming, to virtualized banking applications. Our study demonstrates the benefits of near-threshold operation and proposes several directions to synergistically increase the energy proportionality of a near-threshold server.
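The energy-versus-performance trade-off behind NTC can be seen in a first-order CMOS model: dynamic energy per operation scales roughly with V², while clock frequency falls with the voltage overdrive. The alpha-power exponent and voltages below are textbook-style illustrations, not figures from the paper's FD-SOI design:

```python
# First-order back-of-the-envelope model of near-threshold operation:
# energy/op ~ V^2 (relative CV^2 dynamic energy), frequency ~
# (V - Vt)^alpha / V (alpha-power model, alpha = 2 for simplicity).
# All numbers are illustrative assumptions.

V_T = 0.3  # assumed threshold voltage (V)

def freq(v, alpha=2.0):
    """Relative clock frequency at supply voltage v."""
    return (v - V_T) ** alpha / v

def energy_per_op(v):
    """Relative dynamic energy per operation at supply voltage v."""
    return v ** 2

nominal, ntc = 1.0, 0.45
energy_saving = 1 - energy_per_op(ntc) / energy_per_op(nominal)
slowdown = freq(nominal) / freq(ntc)
print(f"energy/op reduced by {energy_saving:.0%}, clock {slowdown:.1f}x slower")
```

The model makes the paper's premise concrete: dropping the supply toward threshold cuts energy per operation by several-fold, but frequency collapses much faster, so throughput must be recovered with parallelism or architectural support.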
Abstract:
The multi-faceted evolution of network technologies ranges from big data centers to specialized network infrastructures and protocols for mission-critical operations. For instance, technologies such as Software-Defined Networking (SDN) revolutionized the world of static network configuration, removing the distributed and proprietary configuration of switched networks by centralizing the control plane. While this disruptive approach is interesting from different points of view, it can introduce new, unforeseen classes of vulnerabilities. One topic of particular interest in recent years is industrial network security, an interest that started to rise in 2016 with the introduction of the Industry 4.0 (I4.0) movement. Networks that were essentially isolated by design are now connected to the Internet to collect, archive, and analyze data. While this approach gained a lot of momentum due to its predictive maintenance capabilities, these network technologies can be exploited in various ways from a cybersecurity perspective. Some of these technologies lack security measures and can introduce new families of vulnerabilities. On the other side, these networks can be used to enable accurate monitoring, formal verification, or defenses that were not practical before. This thesis explores these two fields: by introducing monitoring, protection, and detection mechanisms where the new network technologies make them feasible; and by demonstrating attacks in practical scenarios involving emerging network infrastructures that are not sufficiently protected. The goal of this thesis is to highlight this lack of protection in terms of attacks on, and possible defenses enabled by, emerging technologies. We pursue this goal by analyzing the aforementioned technologies and by presenting three years of contributions to this field. In conclusion, we recapitulate the research questions and give answers to them.
Abstract:
This paper presents an Advanced Traveler Information System (ATIS) developed on the Android platform, which is open source and free. The main objective of the developed application is the free use of Vehicle-to-Infrastructure (V2I) communication through the wireless network access points available in urban centers. In addition to providing the information necessary for an Intelligent Transportation System (ITS) to a central server, the application also receives traffic data for the area close to the vehicle. Once this traffic information is obtained, the application displays it to the driver in a clear and efficient way, allowing the user to make decisions about his route in real time. The application was tested in a real environment and the results are presented in the article. In conclusion, we present the benefits of this application. © 2012 IEEE.
Abstract:
The aim of this study was to evaluate the performance of the Centers for Dental Specialties (CDS) in Brazil and its associations with sociodemographic indicators of the municipalities, structural variables of the services, and primary health care organization in the years 2004-2009. The study used secondary data from procedures performed in the CDS for the specialties of periodontics, endodontics, surgery and primary care. Bivariate analysis with the χ2 test was used to test the association between the dependent variable (performance of the CDS) and the independent variables; Poisson regression analysis was then performed. With regard to the overall achievement of targets, the performance of the majority of the CDS (69.25%) was considered poor/regular. The independent factors associated with poor/regular performance of the CDS were: municipalities belonging to the Northeast, South and Southeast regions, with a lower Human Development Index (HDI), lower population density, and shorter time since deployment. HDI and population density are important for the performance of the CDS in Brazil. Similarly, the peculiarities of less populated areas, as well as regional location and time since implementation of the CDS, should be taken into account in the planning of these services.
Abstract:
The concepts of health promotion, self-care and community participation emerged during the 1970s and, since then, their application has grown rapidly in the developed world, showing evidence of effectiveness. In spite of this, a major part of the population in developing countries still has no access to specialized dental care such as endodontic treatment, dental care for patients with special needs, minor oral surgery, periodontal treatment and oral diagnosis. This review focuses on a program of the Brazilian Federal Government named CEOs (Dental Specialty Centers), which is an attempt to address the dental care deficit of a population that suffers from oral diseases and whose oral health care needs have not been met by the regular programs offered by the SUS (Unified National Health System). Literature published from 2000 to the present day, identified through electronic searches of Medline, SciELO and Google as well as hand-searching, was considered. The descriptors used were Brazil, Oral health, Health policy, Health programs, and Dental Specialty Centers. There are currently 640 CEOs in Brazil, distributed across 545 municipal districts, carrying out dental procedures of greater complexity. Based on these data, it was possible to conclude that public actions on oral health must involve both preventive and curative procedures, aiming to minimize the oral health disparities still prevailing in developing countries like Brazil.
Abstract:
Microphysical and thermodynamical features of two tropical systems, namely Hurricane Ivan and Typhoon Conson, and one subtropical system, Catarina, have been analyzed based on spaceborne precipitation radar (PR) measurements available from the TRMM satellite. The procedure to classify the reflectivity profiles followed the Heymsfield et al. (2000) and Steiner et al. (1995) methodologies. The water and ice contents have been calculated using a relationship obtained with data from the surface SPOL radar and the PR in Rondonia State, Brazil. The diabatic heating rate due to latent heat release has been estimated using the methodology developed by Tao et al. (1990). A more detailed analysis has been performed for Hurricane Catarina, the first of its kind in the South Atlantic. High mean water content values have been found in Conson and Ivan at low levels and close to their centers. Results indicate that Hurricane Catarina was shallower than the other two systems, with less water, concentrated closer to its center. The mean ice content in Catarina was about 0.05 g kg-1, while in Conson it was 0.06 g kg-1 and in Ivan 0.08 g kg-1. Conson and Ivan had water contents up to 0.3 g kg-1 above the 0°C layer, while Catarina had less than 0.15 g kg-1. The latent heat released by Catarina was shown to be very similar to that of the other two systems, except in the regions closer to the center.
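Retrievals like the one described, which convert radar reflectivity to water content via a relationship calibrated against a surface radar, typically take the form of a power law M = a·Z^b. The coefficients below are purely hypothetical placeholders, not the fit the authors derived from the SPOL/PR comparison:

```python
# Generic Z-M power-law retrieval: convert reflectivity in dBZ to a
# water content via M = a * Z^b, with Z in linear units (mm^6 m^-3).
# The coefficients a and b here are invented for illustration; the
# study's actual relationship was fit to SPOL and PR data in Rondonia.

A, B = 3.4e-3, 0.55   # hypothetical power-law coefficients

def water_content(dbz):
    """Return a water content estimate (g m^-3) from reflectivity in dBZ."""
    z_linear = 10.0 ** (dbz / 10.0)   # dBZ -> linear reflectivity
    return A * z_linear ** B

for dbz in (20.0, 30.0, 40.0):
    print(dbz, round(water_content(dbz), 3))
```

Because the relation is a power law in linear Z, each 10 dBZ step multiplies the retrieved water content by the same factor (10^B), which is why the high-reflectivity cores near the storm centers dominate the retrieved water budget.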