978 results for cloud computing, cloud federation, concurrent live migration, data center, qemu, kvm, libvirt


Relevance:

100.00%

Publisher:

Abstract:

Discounted Cumulative Gain (DCG) is a well-known ranking evaluation measure for models built with multi-graded relevance data. By treating the tagging data used in recommendation systems as an ordinal relevance set {negative, null, positive}, we propose to build a DCG-based recommendation model. We present an efficient and novel learning-to-rank method that optimizes DCG for a recommendation model under this tagging-data interpretation scheme. Evaluating the proposed method on real-world datasets, we demonstrate that it is scalable and outperforms the benchmark methods by generating a higher-quality top-N item recommendation list.
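For reference, DCG discounts each item's gain by the logarithm of its rank, so a ranking that places positively tagged items first scores higher. A minimal sketch, where mapping the ordinal tag set to the numeric gains {negative: -1, null: 0, positive: +1} is our assumption, not necessarily the paper's:

```python
import math

# Assumed numeric gains for the ordinal tag set {negative, null, positive}.
GAINS = {"negative": -1.0, "null": 0.0, "positive": 1.0}

def dcg_at_k(ranked_tags, k):
    """Discounted Cumulative Gain over the top-k items of a ranking."""
    return sum(GAINS[tag] / math.log2(rank + 2)   # rank 0 gets discount log2(2) = 1
               for rank, tag in enumerate(ranked_tags[:k]))

print(dcg_at_k(["positive", "positive", "null", "negative"], 4))  # ~1.20
print(dcg_at_k(["negative", "null", "positive", "positive"], 4))  # ~-0.07
```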

Relevance:

100.00%

Publisher:

Abstract:

Topology-based methods have been successfully used for the analysis and visualization of piecewise-linear functions defined on triangle meshes. This paper describes a mechanism for extending these methods to piecewise-quadratic functions defined on triangulations of surfaces. Each triangular patch is tessellated into monotone regions, so that existing algorithms for computing topological representations of piecewise-linear functions may be applied directly to the piecewise-quadratic function. In particular, the tessellation is used for computing the Reeb graph, a topological data structure that provides a succinct representation of level sets of the function.
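One ingredient such a tessellation needs is the interior critical point of each quadratic patch, which for q(x, y) = ax^2 + bxy + cy^2 + dx + ey + f is found by solving the 2x2 linear system grad q = 0. A minimal sketch of that step (illustrative, not the paper's algorithm):

```python
import numpy as np

def quadratic_critical_point(a, b, c, d, e):
    """Critical point and type for q(x, y) = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f."""
    H = np.array([[2 * a, b], [b, 2 * c]])     # Hessian; constant for a quadratic
    det = np.linalg.det(H)
    if abs(det) < 1e-12:
        return None, "degenerate"
    point = np.linalg.solve(H, [-d, -e])       # solve grad q = 0
    kind = "saddle" if det < 0 else ("minimum" if a > 0 else "maximum")
    return point, kind

print(quadratic_critical_point(1, 0, -1, 0, 0))   # critical point (0, 0), a saddle
```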

Relevance:

100.00%

Publisher:

Abstract:

Exascale systems of the future are predicted to have mean time between failures (MTBF) of less than one hour. Malleable applications, where the number of processors on which the applications execute can be changed during execution, can make use of their malleability to better tolerate high failure rates. We present AdFT, an adaptive fault tolerance framework for long-running malleable applications that maximizes application performance in the presence of failures. The AdFT framework includes cost models for evaluating the benefits of various fault tolerance actions, including checkpointing, live migration and rescheduling, and makes runtime decisions to dynamically select fault tolerance actions at different points of application execution so as to maximize performance. Simulations with real and synthetic failure traces show that our approach outperforms existing fault tolerance mechanisms for malleable applications, yielding up to 23% improvement in application performance, and is effective even for petascale systems and beyond.
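A minimal sketch of the kind of runtime decision such cost models support: estimate a crude failure probability from the MTBF and pick the action with the lowest expected overhead. All costs and the probability model below are assumptions, not AdFT's actual cost models:

```python
# Hypothetical per-action costs (minutes): fixed cost of taking the action now,
# plus the expected cost of recovering if a failure hits in the next interval.
ACTIONS = [
    {"name": "checkpoint",   "cost": 2.0, "recovery": 6.0},
    {"name": "live-migrate", "cost": 4.0, "recovery": 1.0},
    {"name": "reschedule",   "cost": 8.0, "recovery": 0.5},
]

def choose_action(mtbf_hours, interval_hours, actions=ACTIONS):
    p_fail = min(1.0, interval_hours / mtbf_hours)   # crude failure probability
    return min(actions, key=lambda a: a["cost"] + p_fail * a["recovery"])

# With MTBF around an hour, the cheaper recovery of live migration wins here.
print(choose_action(mtbf_hours=1.0, interval_hours=0.5)["name"])  # live-migrate
```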

Relevance:

100.00%

Publisher:

Abstract:

With data centers being the supporting infrastructure for a wide range of IT services, their efficiency has become a major concern to operators, as well as to society, for both economic and environmental reasons. The goal of this thesis is to design energy-efficient algorithms that reduce energy cost while minimizing compromise to service. We focus on the algorithmic challenges at different levels of energy optimization across the data center stack. The algorithmic challenge at the device level is to improve the energy efficiency of a single computational device via techniques such as job scheduling and speed scaling. We analyze common speed scaling algorithms in both the worst-case and stochastic models to answer some fundamental questions in the design of speed scaling algorithms. The algorithmic challenge at the local data center level is to dynamically allocate resources (e.g., servers) and to dispatch the workload within a data center. We develop an online algorithm that makes a data center more power-proportional by dynamically adapting the number of active servers. The algorithmic challenge at the global data center level is to dispatch the workload across multiple data centers, considering the geographical diversity of electricity prices, the availability of renewable energy, and network propagation delay. We propose algorithms to jointly optimize routing and provisioning in an online manner. Motivated by these online decision problems, we move on to study a general class of online problems named "smoothed online convex optimization", which seeks to minimize the sum of a sequence of convex functions when "smooth" solutions are preferred. This model allows us to bridge different research communities and helps us gain a more fundamental understanding of general online decision problems.
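The smoothed online convex optimization setting can be made concrete with a toy example: at each step an algorithm pays a convex hit cost plus a penalty for moving its decision, which discourages abrupt changes in, say, the number of active servers. A minimal sketch with a made-up workload trace; this greedy rule is illustrative, not the thesis's algorithm:

```python
# Each step pays a convex hit cost f_t(x) plus a switching cost beta*|x - x_prev|.
def greedy_step(f_t, x_prev, candidates, beta):
    """Pick the decision minimizing hit cost plus movement penalty."""
    return min(candidates, key=lambda x: f_t(x) + beta * abs(x - x_prev))

x, beta = 0, 2.0
for load in [3, 10, 9, 2]:                        # assumed workload trace
    f_t = lambda x, load=load: (x - load) ** 2    # convex mismatch-to-demand cost
    x = greedy_step(f_t, x, range(16), beta)
    print("active servers:", x)                   # 2, 9, 9, 3: swings are smoothed
```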

Relevance:

100.00%

Publisher:

Abstract:

Theoretical investigations have been carried out to analyze and compare the link power budget and power dissipation of non-return-to-zero (NRZ), pulse amplitude modulation-4 (PAM-4), carrierless amplitude and phase modulation-16 (CAP-16) and 16-quadrature amplitude modulation-orthogonal frequency division multiplexing (16-QAM-OFDM) systems for data center interconnect scenarios. It is shown that for multimode fiber (MMF) links, NRZ modulation schemes with electronic equalization offer the best link power budget margins with the least power dissipation for short transmission distances up to 200 m, while OOFDM is the only scheme which can support a distance of 300 m, albeit with power dissipation as high as 4 times that of NRZ. For short single mode fiber (SMF) links, all the modulation schemes offer similar link power budget margins for fiber lengths up to 15 km, but NRZ and PAM-4 are preferable due to their system simplicity and low power consumption. For lengths of up to 30 km, CAP-16 and OOFDM are required, although these schemes consume 2 and 4 times as much power, respectively, as NRZ. Only OOFDM allows link operation at distances up to 35 km.
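As context for the comparison above, a link power budget margin is simply launch power minus receiver sensitivity, less channel losses and penalties. A minimal sketch with assumed dBm/dB figures, not the paper's values:

```python
def link_margin_db(launch_dbm, sensitivity_dbm, fiber_km, loss_db_per_km, penalties_db):
    """Margin left after subtracting channel losses and penalties from the budget."""
    budget = launch_dbm - sensitivity_dbm
    return budget - (fiber_km * loss_db_per_km + penalties_db)

# e.g. a hypothetical 15 km SMF link with 0.4 dB/km loss and 2 dB of penalties
print(link_margin_db(launch_dbm=0, sensitivity_dbm=-14,
                     fiber_km=15, loss_db_per_km=0.4, penalties_db=2))  # 6.0 dB
```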

Relevance:

100.00%

Publisher:

Abstract:

We propose a low-latency optical data center top-of-rack switch that uses recirculation buffering and a hybrid MZ/SOA switch architecture to reduce the network power dissipated on future optically connected server chips by 53%.

Relevance:

100.00%

Publisher:

Abstract:

High dimensional biomimetic informatics (HDBI) is a novel theory of informatics developed in recent years. Its primary object of research is points in high-dimensional Euclidean space, and its exploratory and resolving procedures are based on simple geometric computations. However, the mathematical description and computation of geometric objects are inconvenient because of the nature of geometry. As the dimension increases and geometric objects grow more varied, these descriptions become ever more complicated and prolix, especially in high-dimensional space. In this paper, we give some definitions and mathematical symbols, and systematically discuss symbolic computing methods in high-dimensional space from the viewpoint of HDBI. With these methods, some multi-variable problems in high-dimensional space can be solved easily. Three detailed algorithms are presented as examples to show the efficiency of our symbolic computing methods: an algorithm for determining the center of a circle given three points on the circle, an algorithm for judging whether two points lie on the same side of a hyperplane, and an algorithm for judging whether a point lies inside a simplex constructed from points in high-dimensional space. Two experiments, in blurred image restoration and uneven-lighting image correction, are presented for these algorithms to demonstrate their good behavior.
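The second of these algorithms is easy to make concrete: two points lie on the same side of the hyperplane {x : w·x + b = 0} exactly when w·p + b and w·q + b share a sign, in any dimension. A minimal sketch (illustrative, not the paper's symbolic formulation):

```python
import numpy as np

def same_side(w, b, p, q):
    """True iff p and q lie strictly on the same side of {x : w.x + b = 0}."""
    return (np.dot(w, p) + b) * (np.dot(w, q) + b) > 0

w, b = np.array([1.0, -1.0, 0.5]), 0.0           # a hyperplane in R^3
print(same_side(w, b, [1, 0, 0], [2, 1, 0]))     # True: both signed values positive
print(same_side(w, b, [1, 0, 0], [0, 2, 0]))     # False: opposite sides
```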

Relevance:

100.00%

Publisher:

Abstract:

Like human immunodeficiency virus type 1 (HIV-1), simian immunodeficiency virus of chimpanzees (SIVcpz) can cause CD4+ T cell loss and premature death. Here, we used molecular surveillance tools and mathematical modeling to estimate the impact of SIVcpz infection on chimpanzee population dynamics. Habituated (Mitumba and Kasekela) and non-habituated (Kalande) chimpanzees were studied in Gombe National Park, Tanzania. Ape population sizes were determined from demographic records (Mitumba and Kasekela) or individual sightings and genotyping (Kalande), while SIVcpz prevalence rates were monitored using non-invasive methods. Between 2002 and 2009, the Mitumba and Kasekela communities experienced mean annual growth rates of 1.9% and 2.4%, respectively, while Kalande chimpanzees suffered a significant decline, with a mean growth rate of -6.5% to -7.4%, depending on population estimates. A rapid decline in Kalande was first noted in the 1990s and originally attributed to poaching and reduced food sources. However, between 2002 and 2009, we found a mean SIVcpz prevalence in Kalande of 46.1%, which was almost four times higher than the prevalence in Mitumba (12.7%) and Kasekela (12.1%). To explore whether SIVcpz contributed to the Kalande decline, we used empirically determined SIVcpz transmission probabilities as well as chimpanzee mortality, mating and migration data to model the effect of viral pathogenicity on chimpanzee population growth. Deterministic calculations indicated that a prevalence of greater than 3.4% would result in negative growth and eventual population extinction, even using conservative mortality estimates. However, stochastic models revealed that in representative populations, SIVcpz, and not its host species, frequently went extinct. High SIVcpz transmission probability and excess mortality reduced population persistence, while intercommunity migration often rescued infected communities, even when immigrating females had a chance of being SIVcpz-infected. Together, these results suggest that the decline of the Kalande community was caused, at least in part, by high levels of SIVcpz infection. However, population extinction is not an inevitable consequence of SIVcpz infection, but depends on additional variables, such as migration, that promote survival. These findings are consistent with the uneven distribution of SIVcpz throughout central Africa and explain how chimpanzees in Gombe and elsewhere can be at equipoise with this pathogen.
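The deterministic threshold argument can be illustrated with a deliberately simplified growth model in which the infected fraction of the population suffers excess mortality; all rates below are assumptions for illustration, not the study's fitted parameters:

```python
def population_growth_rate(prevalence, healthy_rate=0.02, excess_mortality=0.6):
    """Net growth: healthy growth in the uninfected fraction minus excess deaths."""
    return (1 - prevalence) * healthy_rate - prevalence * excess_mortality

for p in [0.0, 0.03, 0.05, 0.46]:
    print(f"prevalence {p:.0%}: growth {population_growth_rate(p):+.3f}/yr")
# With these assumed rates, growth turns negative a little above 3% prevalence.
```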

Relevance:

100.00%

Publisher:

Abstract:

The US National Oceanic and Atmospheric Administration (NOAA) Fisheries Continuous Plankton Recorder (CPR) Survey has sampled four routes: Boston–Nova Scotia (1961–present), New York toward Bermuda (1976–present), Narragansett Bay–Mount Hope Bay–Rhode Island Sound (1998–present) and eastward of Chesapeake Bay (1974–1980). NOAA involvement began in 1974 when it assumed responsibility for the existing Boston–Nova Scotia route from what is now the UK's Sir Alister Hardy Foundation for Ocean Science (SAHFOS). Training, equipment and computer software were provided by SAHFOS to ensure continuity for this and standard protocols for any new routes. Data for the first 14 years of this route were provided to NOAA by SAHFOS. Comparison of collection methods; sample processing; and sample identification, staging and counting techniques revealed near-consistency between NOAA and SAHFOS. One departure involved phytoplankton counting standards. This has since been addressed and the data corrected. Within- and between-survey taxonomic and life-stage names and their consistency through time were, and continue to be, an issue. For this, a cross-reference table has been generated that contains the SAHFOS taxonomic code, NOAA taxonomic code, NOAA life-stage code, National Oceanographic Data Center (NODC) taxonomic code, Integrated Taxonomic Information System (ITIS) serial number and authority and consistent use/route. This table is available for review/use by other CPR surveys. Details of the NOAA and SAHFOS comparison and analytical techniques unique to NOAA are presented.
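A hypothetical record layout mirroring the columns of the cross-reference table described above; the field names and placeholder values are illustrative, not NOAA's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TaxonCrossRef:
    sahfos_code: str      # SAHFOS taxonomic code
    noaa_taxon_code: str  # NOAA taxonomic code
    noaa_stage_code: str  # NOAA life-stage code
    nodc_code: str        # National Oceanographic Data Center taxonomic code
    itis_tsn: int         # Integrated Taxonomic Information System serial number
    authority: str        # taxonomic authority
    use_route: str        # consistent use/route

# Placeholder values only; real codes come from the respective registries.
row = TaxonCrossRef("SAH-0001", "NOAA-0001", "S1", "NODC-0001", 12345,
                    "Author, Year", "Boston-Nova Scotia")
print(row.use_route)
```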

Relevance:

100.00%

Publisher:

Abstract:

While WiFi monitoring networks have been deployed in previous research, to date none have assessed live network data from an open-access, public environment. In this paper we describe the construction of a replicable, independent WLAN monitoring system and address some of the challenges in analysing the resultant traffic. Analysis of traffic from the system demonstrates that basic traffic information from open-access networks varies over time (temporal inconsistency). The results also show that arbitrary selection of Request-Reply intervals can have a significant effect on Probe and Association frame exchange calculations, which can impact the ability to detect flooding attacks.
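The sensitivity to Request-Reply interval selection can be illustrated with a small pairing routine: the number of Probe Request/Response exchanges counted, and hence any flood-detection statistic built on it, changes with the chosen interval threshold. A minimal sketch with assumed timestamps:

```python
def count_exchanges(requests, responses, max_interval):
    """Count request->response pairs whose gap is within max_interval seconds."""
    paired, i = 0, 0
    for t_req in requests:
        while i < len(responses) and responses[i] < t_req:
            i += 1                       # skip responses that precede this request
        if i < len(responses) and responses[i] - t_req <= max_interval:
            paired += 1
            i += 1
    return paired

reqs  = [0.00, 0.10, 0.20, 5.00]         # assumed probe request timestamps (s)
resps = [0.04, 0.35, 5.02]               # assumed probe response timestamps (s)
for window in (0.05, 0.5):
    print(window, count_exchanges(reqs, resps, window))   # 2 vs 3 pairs counted
```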

Relevance:

100.00%

Publisher:

Abstract:

The Intrusion Detection System (IDS) is a common means of protecting networked systems from attack or malicious misuse. The deployment of an IDS can take many different forms depending on protocols, usage and cost. This is particularly true of Wireless Intrusion Detection Systems (WIDS), which face many detection challenges associated with data transmission through an open, shared medium, facilitated by fundamental changes at the Physical and MAC layers. WIDS need to be considered in more detail at these lower layers than their wired counterparts, as they face unique challenges. The remainder of this chapter will investigate three of these challenges, where WiFi deviates significantly from its wired counterparts:

• Attacks Specific to WiFi Networks: Outlining the additional threats which WIDS must account for: Denial of Service, Encryption Bypass and AP Masquerading attacks (a minimal detection sketch follows this list).

• The Effect of Deployment Architecture on WIDS Performance: Demonstrating that the deployment environment of a network protected by a WIDS can influence the prioritisation of attacks.

• The Importance of Live Data in WiFi Research: Investigating the different choices for research data sources with an emphasis on encouraging live network data collection for future WiFi research.
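As promised above, a minimal sketch of one WIDS check from the attack list, flagging AP masquerading by comparing advertised (SSID, BSSID) pairs against a whitelist of authorised access points; the whitelist and beacon data are assumptions, purely for illustration:

```python
AUTHORISED = {("corp-net", "00:11:22:33:44:55")}   # assumed AP whitelist

def masquerading_alerts(beacons):
    """Yield beacons advertising a known SSID from an unauthorised BSSID."""
    known_ssids = {ssid for ssid, _ in AUTHORISED}
    for ssid, bssid in beacons:
        if ssid in known_ssids and (ssid, bssid) not in AUTHORISED:
            yield (ssid, bssid)

beacons = [("corp-net", "00:11:22:33:44:55"),      # legitimate AP
           ("corp-net", "66:77:88:99:aa:bb")]      # masquerading candidate
print(list(masquerading_alerts(beacons)))          # [('corp-net', '66:77:88:99:aa:bb')]
```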

Relevance:

100.00%

Publisher:

Abstract:

High speed downlink packet access (HSDPA) was introduced to the UMTS radio access segment to provide higher capacity for new packet switched services. As a result, packet switched sessions with multiple diverse traffic flows, such as concurrent voice and data, or video and data, being transmitted to the same user are a likely commonplace cellular packet data scenario. In HSDPA, radio access network (RAN) buffer management schemes are essential to support the end-to-end QoS of such sessions. Hence, in this paper we present an end-to-end performance study of a proposed RAN buffer management scheme for multi-flow sessions via dynamic system-level HSDPA simulations. The scheme is an enhancement of a time-space priority (TSP) queuing strategy applied to the node B MAC-hs buffer allocated to an end user with concurrent real-time (RT) and non-real-time (NRT) flows during a multi-flow session. The experimental multi-flow scenario is a packet voice call with a concurrent TCP-based file download to the same user. Results show that with the proposed enhancements to the TSP-based RAN buffer management, end-to-end QoS performance gains accrue to the NRT flow without compromising the RT flow QoS of the same end user session.
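The time-space priority idea gives the RT flow priority in time (it is always served first) while capping its share of buffer space so the NRT flow is not starved of room. A minimal sketch with illustrative capacities, not the paper's parameters or its enhancements:

```python
from collections import deque

class TSPBuffer:
    """Toy time-space priority buffer: RT gets time priority, NRT gets space."""
    def __init__(self, rt_cap=4, nrt_cap=28):
        self.rt = deque(maxlen=rt_cap)     # small space share for RT packets
        self.nrt = deque(maxlen=nrt_cap)   # most of the buffer space for NRT

    def enqueue(self, pkt, is_rt):
        q = self.rt if is_rt else self.nrt
        if len(q) < q.maxlen:
            q.append(pkt)                  # packets beyond the cap are dropped

    def dequeue(self):
        """RT packets are always served first (time priority)."""
        if self.rt:
            return self.rt.popleft()
        return self.nrt.popleft() if self.nrt else None

buf = TSPBuffer()
buf.enqueue("nrt-1", is_rt=False)
buf.enqueue("rt-1", is_rt=True)
print(buf.dequeue())   # rt-1 is served before the earlier-queued nrt-1
```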

Relevance:

100.00%

Publisher:

Abstract:

Increasingly large amounts of data are stored in the main memory of data center servers. However, DRAM-based memory is an important consumer of energy and is unlikely to scale in the future. Various byte-addressable non-volatile memory (NVM) technologies promise high density and near-zero static energy; however, they suffer from increased latency and increased dynamic energy consumption.

This paper proposes to leverage a hybrid memory architecture, consisting of both DRAM and NVM, through novel, application-level data management policies that decide whether to place data in DRAM or NVM. We analyze modern column-oriented and key-value data stores and demonstrate the feasibility of application-level data management. Cycle-accurate simulation confirms that our methodology reduces energy with the least performance degradation compared to the current state-of-the-art hardware or OS approaches. Moreover, we utilize our techniques to apportion DRAM and NVM memory sizes for these workloads.
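A minimal sketch of an application-level placement policy in the spirit of the paper: rank objects by access frequency and keep the hottest in DRAM, demoting the rest to NVM. The unit-size assumption and the example counts are ours, not the paper's policy:

```python
def place_objects(access_counts, dram_capacity):
    """Greedily fill DRAM with the hottest objects; everything else goes to NVM."""
    hot_first = sorted(access_counts, key=access_counts.get, reverse=True)
    dram, nvm, used = [], [], 0
    for obj in hot_first:
        size = 1                           # assume unit-size objects for simplicity
        if used + size <= dram_capacity:
            dram.append(obj)
            used += size
        else:
            nvm.append(obj)
    return dram, nvm

counts = {"index": 900, "hot_rows": 500, "cold_rows": 20, "log": 5}
print(place_objects(counts, dram_capacity=2))
# (['index', 'hot_rows'], ['cold_rows', 'log'])
```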

Relevance:

100.00%

Publisher:

Abstract:

Peak power consumption is the first-order design constraint of data centers. Though peak power consumption is rarely, if ever, observed, the entire data center facility must prepare for it, leading to inefficient usage of its resources. The most prominent way of addressing this issue is to limit the power consumption of the data center IT facility to far below its theoretical peak value. Many approaches have been proposed to achieve that, based on the same small set of enforcement mechanisms, but there has been no corresponding work on systematically examining the advantages and disadvantages of each such mechanism. In the absence of such a study, it is unclear which mechanism is optimal for a given computing environment, which can lead to unnecessarily poor performance if an inappropriate scheme is used. This paper fills this gap by comparing, for the first time, five widely used power capping mechanisms under the same hardware/software setting. We also explore possible alternative power capping mechanisms beyond what has been previously proposed and evaluate them under the same setup. We systematically analyze the strengths and weaknesses of each mechanism in terms of energy efficiency, overhead, and predictable behavior. We show how these mechanisms can be combined in order to implement an optimal power capping mechanism which reduces the slowdown compared to the most widely used mechanism by up to 88%. Our results provide interesting insights regarding the different trade-offs of power capping techniques, which will be useful for designing and implementing highly efficient power capping in the future.
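One widely used family of enforcement mechanisms is a feedback loop that throttles a frequency (or power-limit) knob whenever measured power exceeds the cap, and raises it again when there is headroom. A minimal sketch with a made-up power model, not a real RAPL/DVFS interface:

```python
def simulated_power(freq_ghz):
    return 40 + 25 * freq_ghz ** 2           # assumed watts-vs-frequency model

def power_cap_loop(cap_watts, freq=3.0, step=0.1, iters=20):
    for _ in range(iters):
        power = simulated_power(freq)
        if power > cap_watts:
            freq = max(0.8, freq - step)     # throttle when over the cap
        elif power < cap_watts * 0.95:
            freq = min(3.0, freq + step)     # reclaim headroom when well under
    return freq, simulated_power(freq)

print(power_cap_loop(cap_watts=150))         # settles just below the 150 W cap
```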

Relevance:

100.00%

Publisher:

Abstract:

A first-stage collision database is assembled which contains electron-impact excitation, ionization, and recombination rate coefficients for Be, Be+, Be2+, and Be3+. The first-stage database is constructed using the R-matrix with pseudo-states, time-dependent close-coupling, and perturbative distorted-wave methods. A second-stage collision database is then assembled which contains generalized collisional-radiative and radiated power loss coefficients. The second-stage database is constructed by solution of the collisional-radiative equations in the quasi-static equilibrium approximation using the first-stage database. Both collision database stages reside in electronic form at the ORNL Controlled Fusion Atomic Data Center and in the ADAS database, and are easily accessed over the Internet.
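The quasi-static equilibrium step amounts to solving a linear rate balance for the excited-state populations driven by the ground state: with the excited levels assumed in instantaneous equilibrium, C_ee n_e = -C_eg n_g. A minimal sketch with an invented two-level excited-state rate matrix, not values from the database:

```python
import numpy as np

C_ee = np.array([[-5.0e8,  1.0e7],   # net loss/gain rates among excited levels (1/s)
                 [ 2.0e7, -3.0e8]])
C_eg = np.array([4.0e4, 1.0e3])      # excitation rates from the ground state (1/s)
n_g  = 1.0e10                        # ground-state population (arbitrary units)

# Quasi-static approximation: d n_e / dt = 0  =>  C_ee @ n_e = -C_eg * n_g
n_e = np.linalg.solve(C_ee, -C_eg * n_g)
print(n_e)                           # excited-state populations, ~[8.0e5, 8.7e4]
```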