981 results for internet data centers


Relevance:

90.00%

Publisher:

Abstract:

The Internet has revolutionized the world, creating new forms of interaction among people, organizations, and businesses. The hotel sector has reaped many benefits from services supported by the Internet. The objective of this study is to identify the factors that influence Internet use along three dimensions: individual, organizational, and environmental. A conceptual framework was advanced containing nine independent variables and two dependent variables related to the pattern of Internet use. Data were collected from 52 hotels located along the coast of Recife, PE, Brazil. Inferential analysis of the data revealed distinct patterns of Internet use among small, medium, and large hotels, and showed how the factors described above can be better exploited by owners and managers to achieve a more efficient pattern of use and improve their competitive position. Based on the findings of the study, some recommendations and implications for future research are advanced.


Relevance:

90.00%

Publisher:

Abstract:

Modern data centers host hundreds of thousands of servers to achieve economies of scale. Such a huge number of servers creates challenges for the data center network (DCN) to provide proportionally large bandwidth. In addition, the deployment of virtual machines (VMs) in data centers raises the requirements for efficient resource allocation and fine-grained resource sharing. Further, the large number of servers and switches in the data center consume significant amounts of energy. Even though servers are becoming more energy efficient through various energy-saving techniques, the DCN still accounts for 20% to 50% of the energy consumed by the entire data center. The objective of this dissertation is to enhance DCN performance as well as its energy efficiency by conducting optimizations on both the host and network sides. First, as the DCN demands huge bisection bandwidth to interconnect all the servers, we propose a parallel packet switch (PPS) architecture that directly processes variable-length packets without segmentation-and-reassembly (SAR). The proposed PPS achieves large bandwidth by combining the switching capacities of multiple fabrics, and it further improves switch throughput by avoiding the padding bits of SAR. Second, since certain resource demands of a VM are bursty and stochastic in nature, to satisfy both deterministic and stochastic demands in VM placement we propose the Max-Min Multidimensional Stochastic Bin Packing (M3SBP) algorithm. M3SBP calculates an equivalent deterministic value for the stochastic demands and maximizes the minimum resource-utilization ratio of each server. Third, to provide the necessary traffic isolation for VMs that share the same physical network adapter, we propose the Flow-level Bandwidth Provisioning (FBP) algorithm. By reducing the flow-scheduling problem to multiple stages of packet-queuing problems, FBP guarantees the provisioned bandwidth and delay performance for each flow. Finally, while DCNs are typically provisioned with full bisection bandwidth, DCN traffic demonstrates fluctuating patterns, so we propose a joint host-network optimization scheme to enhance the energy efficiency of DCNs during off-peak traffic hours. The proposed scheme uses a unified representation that converts the VM placement problem into a routing problem, and employs depth-first and best-fit search to find efficient paths for flows.
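The key step of M3SBP, collapsing a stochastic demand into an equivalent deterministic value before packing, can be sketched as follows. This is a minimal illustration under an assumed Gaussian demand model; the z-value and the max-min placement rule shown here are illustrative assumptions, not the dissertation's exact formulation.

```python
def deterministic_equivalent(mean, std, z=1.645):
    """Equivalent deterministic demand for a stochastic demand modeled as
    N(mean, std^2): a value exceeded with small probability (z = 1.645
    corresponds to roughly a 5% overflow probability under this model)."""
    return mean + z * std

def place_vm(vm_demand, servers, capacity):
    """Max-min style placement sketch: among servers that can fit the VM,
    pick the one that keeps the largest headroom, which spreads load and
    raises the minimum utilization across servers."""
    best, best_headroom = None, -1.0
    for i, used in enumerate(servers):
        headroom = capacity - (used + vm_demand)
        if headroom >= 0 and headroom > best_headroom:
            best, best_headroom = i, headroom
    if best is not None:
        servers[best] += vm_demand
    return best
```

A bursty demand of mean 10 and standard deviation 2 would thus be packed as a deterministic demand of about 13.3, trading a small overflow risk for a tractable bin-packing instance.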

Relevance:

90.00%

Publisher:

Abstract:

The increasing need for computational power in areas such as weather simulation, genomics, and Internet applications has led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid, and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main points. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation for the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.

Relevance:

90.00%

Publisher:

Abstract:

In recent years, 380 V DC and 48 V DC distribution systems have been extensively studied for the latest data centers. It is widely believed that the 380 V DC system is a very promising candidate because of its lower cable cost compared to the 48 V DC system. However, previous studies have not adequately addressed the low reliability of 380 V DC systems caused by the large number of series-connected batteries. In this thesis, a quantitative comparison of the two systems is presented in terms of efficiency, reliability, and cost. A new multi-port DC UPS with both a high-voltage output and a low-voltage output is proposed. When utility AC is available, it delivers power to the load through its high-voltage output and charges the battery through its low-voltage output. When utility AC is off, it boosts the low battery voltage and delivers power to the load from the battery. Thus, the advantages of both systems are combined and their disadvantages avoided. High efficiency is also achieved, as only one converter is working in either situation. Details of the design and analysis of the new UPS are presented. For the main AC-DC stage of the new UPS, a novel bridgeless three-level single-stage AC-DC converter is proposed. It eliminates the auxiliary circuit for balancing the capacitor voltages and the two bridge rectifier diodes of the previous topology. Zero-voltage switching, high power factor, and low component stresses are achieved with this topology. Compared to previous topologies, the proposed converter has lower cost, higher reliability, and higher efficiency. The steady-state operation of the converter is analyzed and a decoupled model is proposed for it. For the battery-side converter of the new UPS, a ZVS bidirectional DC-DC converter based on self-sustained oscillation control is proposed. Frequency control is used to ensure ZVS operation of all four switches, and phase-shift control is employed to regulate the converter output power. A detailed analysis of the steady-state operation and design of the converter is presented. Theoretical, simulation, and experimental results are presented to verify the effectiveness of the proposed concepts.

Relevance:

90.00%

Publisher:

Abstract:

In current data centers, an application (e.g., MapReduce, Dryad, or a search platform) usually generates a group of parallel flows to complete a job. These flows compose a coflow, and only completing all of them is meaningful to the application. Accordingly, minimizing the average Coflow Completion Time (CCT) becomes a critical objective of flow scheduling. However, achieving this goal in today's Data Center Networks (DCNs) is quite challenging, not only because the scheduling problem is theoretically NP-hard, but also because it is tough to perform practical flow scheduling in large-scale DCNs. In this paper, we find that minimizing the average CCT of a set of coflows is equivalent to the well-known problem of minimizing the sum of completion times in a concurrent open shop. As there are abundant existing solutions for concurrent open shop, this opens up a variety of techniques for coflow scheduling. Inspired by the best known result, we derive a 2-approximation algorithm for coflow scheduling, and further develop a decentralized coflow scheduling system, D-CAS, which avoids the system problems associated with current centralized proposals while addressing the performance challenges of decentralized ones. Trace-driven simulations indicate that D-CAS achieves performance close to Varys, the state-of-the-art centralized method, and significantly outperforms Baraat, the only existing decentralized method.
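The equivalence to concurrent open shop can be made concrete with a toy model: each coflow places a load on each port, all ports serve coflows in one common priority order, and a coflow completes when its most loaded port finishes. The sketch below is an illustrative assumption, not the paper's D-CAS algorithm; the smallest-bottleneck-first rule is a simple SPT-style heuristic from open-shop scheduling.

```python
def avg_cct(coflows, order, port_rate=1.0):
    """coflows maps name -> {port: load}. Ports serve coflows in the given
    priority order; a coflow completes when the last of its ports finishes
    transmitting its load."""
    busy = {}      # cumulative finish time per port
    total = 0.0
    for c in order:
        finish = 0.0
        for port, load in coflows[c].items():
            busy[port] = busy.get(port, 0.0) + load / port_rate
            finish = max(finish, busy[port])
        total += finish
    return total / len(order)

def greedy_order(coflows):
    """Smallest-bottleneck-first: schedule the coflow whose busiest port
    carries the least load first."""
    return sorted(coflows, key=lambda c: max(coflows[c].values()))
```

With two coflows sharing one port (loads 2 and 1), serving the smaller first gives an average CCT of 2.0 instead of 2.5, which is exactly the ordering effect the scheduling problem optimizes.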

Relevance:

80.00%

Publisher:

Abstract:

Improving energy efficiency has become increasingly important in data centers in recent years, to rein in rapidly growing electricity consumption. The power dissipation of the physical servers is the root cause of the power usage of other systems, such as cooling. Many efforts have been made to make data centers more energy efficient. One of them is to minimize the total power consumption of the servers in a data center through virtual machine consolidation, which is implemented by virtual machine placement. The placement problem is often modeled as a bin packing problem. Due to the NP-hard nature of the problem, heuristics such as the First Fit and Best Fit algorithms have often been used, with generally good results. However, their performance leaves room for further improvement. In this paper we propose a Simulated Annealing (SA) based algorithm, which aims at further improving any feasible placement. This is the first published attempt to use SA to solve the VM placement problem for power optimization. Experimental results show that the SA algorithm generates better results, saving up to 25% more energy than First Fit Decreasing in an acceptable time frame.
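A minimal sketch of the simulated-annealing idea for power-aware VM placement is below. The power model, move set, and cooling schedule are illustrative assumptions and do not reproduce the paper's parameters. Starting from any feasible placement (for example a First Fit result), random reassignments are accepted when they cut power, and occasionally when they do not, to escape local minima.

```python
import math
import random

def power(utils):
    """Toy power model: an active server draws 100 W idle plus up to
    150 W scaled linearly with utilization (illustrative numbers)."""
    return sum(100 + 150 * u for u in utils if u > 0)

def utilization(placement, demands, n_servers):
    """Per-server utilization implied by a VM-to-server assignment."""
    utils = [0.0] * n_servers
    for vm, server in enumerate(placement):
        utils[server] += demands[vm]
    return utils

def anneal(placement, demands, n_servers, cap=1.0,
           t0=10.0, cooling=0.95, steps=2000, seed=1):
    """Simulated annealing over VM-to-server assignments."""
    rng = random.Random(seed)
    cur = list(placement)
    cur_cost = power(utilization(cur, demands, n_servers))
    best, best_cost = list(cur), cur_cost
    t = t0
    for _ in range(steps):
        cand = list(cur)
        cand[rng.randrange(len(cand))] = rng.randrange(n_servers)
        utils = utilization(cand, demands, n_servers)
        if max(utils) > cap:          # skip infeasible moves
            continue
        cost = power(utils)
        # Accept improvements always, worse moves with Boltzmann probability.
        if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / t):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = list(cand), cost
        t *= cooling
    return best, best_cost
```

Consolidating four half-loaded VMs from four servers onto two switches two servers off entirely, which is where the bulk of the idle-power savings comes from.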

Relevance:

80.00%

Publisher:

Abstract:

Stream ciphers are symmetric-key cryptosystems commonly used to provide confidentiality for a wide range of applications, such as mobile telephony, pay TV, and Internet data transmission. This research examines the features and properties of the initialisation processes of existing stream ciphers to identify flaws and weaknesses, and then presents recommendations to improve the security of future cipher designs. It investigates the well-known stream ciphers A5/1, Sfinks, and the Common Scrambling Algorithm Stream Cipher (CSA-SC), focusing on the security of the initialisation process. The recommendations given are based both on results in the literature and on the work in this thesis.
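To make the notion of an initialisation (key/IV loading) process concrete, here is a toy keystream generator with a loading phase followed by extra mixing clocks. It is purely illustrative: the register length, tap positions, and mixing schedule are invented for this sketch and do not correspond to A5/1, Sfinks, or CSA-SC. A sound initialisation must, at minimum, map distinct IVs to distinct internal states.

```python
def clock(state, taps, extra=0):
    """One Fibonacci LFSR step: shift, feed back the XOR of the tap bits,
    optionally XORing in a loading bit."""
    fb = extra
    for t in taps:
        fb ^= state[t]
    return state[1:] + [fb]

def lfsr_init(key_bits, iv_bits, length=16, taps=(0, 2, 3, 5),
              mixing_steps=64):
    """Toy initialisation: XOR key then IV bits into the feedback while
    clocking, then keep clocking autonomously to diffuse them."""
    state = [0] * length
    loading = key_bits + iv_bits
    for step in range(mixing_steps):
        bit = loading[step] if step < len(loading) else 0
        state = clock(state, taps, bit)
    return state

def keystream(state, n, taps=(0, 2, 3, 5)):
    """After initialisation, output the leading state bit each clock."""
    out = []
    for _ in range(n):
        out.append(state[0])
        state = clock(state, taps)
    return out
```

Analyses like those in the thesis probe exactly this phase: whether related keys or IVs lead to related (or colliding) states, and whether the mixing steps diffuse the loaded bits sufficiently before keystream output begins.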

Relevance:

80.00%

Publisher:

Abstract:

Adopting a multi-theoretical approach, I examine external auditors’ perceptions of the reasons why organizations do or do not adopt cloud computing. I interview forensic accountants and IT experts about the adoption, acceptance, institutional motives, and risks of cloud computing. Although the medium to large accounting firms where the external auditors worked almost exclusively used private clouds, both private and public cloud services were gaining a foothold among many of their clients. Despite the advantages of cloud computing, data confidentiality and the involvement of foreign jurisdictions remain a concern, particularly if the data are moved outside Australia. Additionally, some organizations seem to understand neither the technology itself nor their own requirements, which may lead to poorly negotiated contracts and service agreements. To minimize the risks associated with cloud computing, many organizations turn to hybrid solutions or private clouds that include national or dedicated data centers. To the best of my knowledge, this is the first empirical study that reports on cloud computing adoption from the perspectives of external auditors.

Relevance:

80.00%

Publisher:

Abstract:

This study examines young people's political participation in transnational meetings. Methodologically, the study aims to shed light on multi-sited global ethnography. Young people are viewed here as a social age group sensitive to critical, alternative, and even radical political participation. The diversity of the young actors and their actions is captured by using several different methods. What is more, the study spurs those of us from the Global North to develop social science research towards methodological cosmopolitanism and to consider our research practices from a moral cosmopolitan perspective. The research sites are the EU Presidency Youth Event (2006 Hyvinkää, Finland), the Global Young Greens Founding Conference (2007 Nairobi, Kenya), the European Social Forum (2008 Malmö, Sweden), and three World Social Forums (2006 Bamako, Mali; 2007 Nairobi, Kenya; and 2009 Belém, Brazil). The data consist of participant observation, documents and media articles from the meetings, interviews, photos, video, and Internet data. This multidisciplinary study combines youth research, development studies, performative social science, and political sociology. The diverse field of youth political participation in transnational agoras is studied using a cross-tabulation of cosmopolitan resources (or the lack of them) against the everyday-makers versus expert-citizens dichotomy. First, the young participants of the EU Presidency Youth Event are studied as an example of expert citizens with cosmopolitan resources (these resources include, for example, language skills, higher education, and international social networks). Second, the study analyses those everyday-makers who use performative politics to demonstrate their political missions here and now; but in order to make a social movement global, they need cosmopolitan resources to be able to use social media tools and work globally. Third, the study reflects upon the difficulties of reaching those actors who lack cosmopolitan resources, whether everyday-makers or expert citizens. The go-along method and the use of interpreters are shown to be ways of reaching these young people's political missions. Fourth, the research underlines the importance of contact zones (i.e., spaces or situations where the aforementioned orientations and their differences temporarily disappear or weaken) for deeper democracy and for boosted dialogue between different kinds of participants. Keywords: political participation, young people, multi-sited ethnography, youth research, political sociology, development studies, performative social science

Relevance:

80.00%

Publisher:

Abstract:

Thermal management of distributed electronics similar to data centers is studied using a bi-disperse porous medium (BDPM) approach. The BDPM channel comprises heat-generating micro-porous square blocks separated by macro-pores. A laminar forced-convection cooling fluid of Pr = 0.7 saturates both the micro- and macro-pores. The bi-dispersion effect is induced by varying the macro-pore volume fraction φ_E and by changing the number of porous blocks N², both representing a re-distribution of the electronics. When 0.2 ≤ φ_E ≤ 0.86, the heat transfer Nu is roughly doubled (from ~550 to ~1100) while the pressure drop Δp* reduces almost eightfold. For φ_E < 0.5, Nu reduces quickly to reach a minimum at the mono-disperse porous medium (MDPM) limit (φ_E → 0). Compared to the N² = 1 case, Nu for the BDPM configuration is high when N² >> 1, i.e., when the micro-porous blocks are many and well distributed. The increase of Nu with Re changes from non-linear to linear as N² increases from 1 to 81, with a correspondingly insignificant increase in pumping power. (C) 2011 Elsevier Ltd. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

During 11-12 August 2014, a Protein Bioinformatics and Community Resources Retreat was held at the Wellcome Trust Genome Campus in Hinxton, UK. This meeting brought together the principal investigators of several specialized protein resources (such as CAZy, TCDB and MEROPS) as well as those of the protein databases at the large bioinformatics centres (including UniProt and RefSeq). The retreat was divided into five sessions: (1) key challenges, (2) the databases represented, (3) best practices for maintenance and curation, (4) information flow to and from large data centers and (5) communication and funding. An important outcome of this meeting was the creation of a Specialist Protein Resource Network that we believe will improve coordination of the activities of its member resources. We invite further protein database resources to join the network and continue the dialogue.

Relevance:

80.00%

Publisher:

Abstract:

Storage systems are widely used and have played a crucial role in both consumer and industrial products, for example personal computers, data centers, and embedded systems. However, such systems face issues of cost, limited lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory, respectively. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.

We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: what is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
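As background for the rebuilding question, the baseline mechanism is worth seeing in code: with a single XOR parity (RAID-4/5 style, one redundancy symbol), rebuilding one erased block requires reading every surviving block. This sketch shows only that naive baseline, not the optimal-rebuilding construction of Part I, which cuts the accessed fraction to 1/2 for codes with two parities.

```python
from functools import reduce

def parity(blocks):
    """Single-parity (RAID-4/5 style) redundancy: bytewise XOR of all
    data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving, parity_block):
    """Rebuild one erased block by XORing the parity with every survivor,
    i.e. reading 100% of the remaining information."""
    return parity(surviving + [parity_block])
```

The thesis's question is precisely whether this "read everything" cost is necessary; for MDS array codes with more parities, it is not.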

We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only part of the n cells are used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows increase capacity. We present Gray codes spanning all possible partial-rank states and using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem called the universal cycle, which is a sequence of integers generating all possible partial permutations.
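The basic rank-modulation primitives can be sketched directly: reading recovers the permutation induced by the charge levels, and programming uses only push-to-the-top operations. This is a minimal illustration of the representation itself; the bounded and partial variants from Part II are not implemented here, and the charge values are arbitrary.

```python
def read_permutation(charges):
    """Demodulate: the stored symbol is the ranking of cells by charge,
    highest first."""
    return tuple(sorted(range(len(charges)), key=lambda i: -charges[i]))

def push_to_top(charges, i, eps=0.1):
    """The only programming primitive: raise cell i strictly above all
    other cells. No discrete target level means no overshoot error."""
    charges[i] = max(charges) + eps
```

For example, cells with charges [0.3, 1.2, 0.7] store the permutation (1, 2, 0); pushing cell 0 to the top changes the stored symbol to (0, 1, 2) without ever having to hit an exact charge level.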

Relevance:

80.00%

Publisher:

Abstract:

This thesis describes the design and implementation of a situation awareness application. The application gathers data from sensors including accelerometers for monitoring earthquakes, carbon monoxide sensors for monitoring fires, radiation detectors, and dust sensors. The application also gathers Internet data sources including data about traffic congestion on daily commute routes, information about hazards, news relevant to the user of the application, and weather. The application sends the data to a Cloud computing service which aggregates data streams from multiple sites and detects anomalies. Information from the Cloud service is then displayed by the application on a tablet, computer monitor, or television screen. The situation awareness application enables almost all members of a community to remain aware of critical changes in their environments.

Relevance:

80.00%

Publisher:

Abstract:

Energy and sustainability have become one of the most critical issues of our generation. While the abundant potential of renewable energy such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable efficient integration of renewable energy into complex distributed systems with limited information.

The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into these systems. IT is one of the fastest-growing sectors in energy usage and greenhouse gas pollution. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but the efficiency improvements do not necessarily lead to a reduction in energy consumption because more servers are demanded. Further, little effort has been put into making IT more sustainable, and most of the improvements come from improved "engineering" rather than improved "algorithms". In contrast, my work focuses on developing algorithms, with rigorous theoretical analysis, that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to adaptively respond to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is, however, very challenging because of limited information, non-smooth objective functions, and the need for distributed control. Novel distributed algorithms are developed, with theoretically provable guarantees, to enable "follow the renewables" routing. Moving from theory to practice, I helped HP design and implement the industry's first Net-zero Energy Data Center.
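One way to picture "follow the renewables" routing is a greedy cost-tier sketch: renewable-backed capacity at each data center is treated as free and filled first, and any remainder is served by the cheapest brown energy. The model below is a deliberately simplified assumption (no delay constraints, no online arrivals, no distributed control), not the thesis's actual algorithm.

```python
def route(requests, capacity, renewable, brown_price):
    """Greedy request routing across data centers: serve requests from
    the cheapest capacity tier first. Renewable-backed slots cost 0;
    the rest of each site's capacity costs its local brown price."""
    tiers = []
    for s in range(len(capacity)):
        green = min(renewable[s], capacity[s])
        tiers.append((0.0, s, green))                      # free tier
        tiers.append((brown_price[s], s, capacity[s] - green))
    tiers.sort()                  # cheapest energy first
    alloc = [0] * len(capacity)
    left = requests
    for price, s, slots in tiers:
        take = min(left, slots)
        alloc[s] += take
        left -= take
    return alloc
```

With two sites of capacity 10, renewable supplies of 8 and 2, and brown prices of 0.5 and 0.2, routing 12 requests fills both green tiers and then the cheaper brown site, yielding the allocation [8, 4]. As renewable supply shifts between sites over the day, the allocation follows it.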

The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges as we integrate more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially solve these issues, they are very costly. One promising alternative is to make cloud data centers demand responsive. The potential of such an approach is huge.

To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work is progressing in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance for data center operators to deal with uncertainties under popular demand response programs. Based on local control rules of customers, I have further designed new pricing schemes for demand response to align the interests of customers, utility companies, and the society to improve social welfare.