951 results for load balancing algorithm
Abstract:
The increase in data-center-dependent services has made energy optimization of data centers one of the most pressing challenges of today's Information Age. Green, energy-efficient measures are essential for reducing the carbon footprint and exorbitant energy costs. However, inefficient application management in data centers results in high energy consumption and low resource utilization efficiency. Unfortunately, in most cases, deploying an energy-efficient application management solution inevitably degrades the resource utilization efficiency of the data center. To address this problem, this paper presents a penalty-based Genetic Algorithm (GA) for a defined profile-based application assignment problem that maintains a trade-off between power consumption performance and resource utilization performance. Case studies show that the penalty-based GA is highly scalable and provides 16% to 32% better solutions than a greedy algorithm.
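The abstract does not spell out the penalty function, but the general idea of a penalty-based GA fitness for an assignment problem of this kind can be sketched as follows: fold the resource-utilization constraint into the power objective as a weighted penalty. All names, values and the weight below are illustrative, not the authors':

```python
def fitness(assignment, power, capacity, demand, penalty_weight=10.0):
    """Penalty-based fitness: total power of active servers plus a
    penalty for each server whose aggregated demand exceeds capacity.
    assignment[i] is the server hosting application i. Lower is better."""
    load = {}
    for app, server in enumerate(assignment):
        load[server] = load.get(server, 0) + demand[app]
    total_power = sum(power[s] for s in load)                 # active servers only
    overload = sum(max(0, l - capacity[s]) for s, l in load.items())
    return total_power + penalty_weight * overload
```

A GA would then evolve `assignment` vectors, with `penalty_weight` steering the trade-off between consolidating applications onto few servers (lower power) and avoiding overloads (better utilization).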
Abstract:
In the past few years, the virtual machine (VM) placement problem has been studied intensively and many algorithms for it have been proposed. However, these algorithms have not been widely adopted in today's cloud data centers because they do not consider the cost of migrating from the current VM placement to the new optimal placement. As a result, the gain from optimizing the VM placement may be less than the loss incurred by the migration. To address this issue, this paper presents a penalty-based genetic algorithm (GA) for the VM placement problem that considers the migration cost in addition to the energy consumption of the new VM placement and the total inter-VM traffic flow in the new placement. The GA has been implemented and evaluated by experiments, and the experimental results show that it outperforms two well-known algorithms for the VM placement problem.
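The migration-cost term is what distinguishes this formulation. One common proxy, summing the memory of every VM whose host changes between the two placements, can be sketched like this (the paper's exact cost model is not given in the abstract, so this is illustrative only):

```python
def migration_cost(old_placement, new_placement, vm_mem):
    """Proxy for live-migration cost: total memory (e.g. in GB) of the
    VMs whose host changes between the current and candidate placements.
    placement[i] is the host of VM i."""
    return sum(mem for old, new, mem in zip(old_placement, new_placement, vm_mem)
               if old != new)
```

A placement optimizer can then weigh this cost against the energy and traffic gains of the candidate placement before deciding whether migrating is worthwhile.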
Abstract:
Although live VM migration has been intensively studied, the problem of live migration of multiple interdependent VMs has hardly been investigated. The most important problem in the live migration of multiple interdependent VMs is how to schedule VM migrations as the schedule will directly affect the total migration time and the total downtime of those VMs. Aiming at minimizing both the total migration time and the total downtime simultaneously, this paper presents a Strength Pareto Evolutionary Algorithm 2 (SPEA2) for the multi-VM migration scheduling problem. The SPEA2 has been evaluated by experiments, and the experimental results show that the SPEA2 can generate a set of VM migration schedules with a shorter total migration time and a shorter total downtime than an existing genetic algorithm, namely Random Key Genetic Algorithm (RKGA). This paper also studies the scalability of the SPEA2.
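SPEA2 ranks candidate schedules by Pareto dominance over the two objectives (total migration time, total downtime). That core comparison can be sketched as follows; this is the building block only, not the paper's implementation:

```python
def dominates(a, b):
    """Schedule a Pareto-dominates b if it is no worse in both objectives
    (total migration time, total downtime) and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(schedules):
    """Keep only the non-dominated (time, downtime) pairs."""
    return [s for s in schedules
            if not any(dominates(t, s) for t in schedules if t != s)]
```

On top of this dominance test, SPEA2 additionally assigns strength and density values to individuals and maintains an external archive of non-dominated schedules.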
Abstract:
Background: Resistance exercise is emerging as a potential adjunct therapy to aid in the management of breast cancer-related lymphedema (BCRL). However, the mechanisms underlying the relationships between the acute and long-term benefits of resistance exercise on BCRL are not well understood. Purpose: To examine the acute inflammatory response to upper-body resistance exercise in women with BCRL and to compare these effects between resistance exercises involving low, moderate and high loads. The impact on lymphedema status and associated symptoms was also compared. Methods: Twenty-one women aged 62 ± 10 years with mild to severe BCRL participated in the study. Participants completed low-load (15-20 repetition maximum), moderate-load (10-12 repetition maximum) and high-load (6-8 repetition maximum) exercise sessions consisting of three sets of six upper-body resistance exercises. Sessions were completed in a randomized order separated by a 7- to 10-day wash-out period. Venous blood samples were obtained to assess markers of exercise-induced muscle damage and inflammation (creatine kinase [CK], C-reactive protein [CRP], interleukin-6 [IL-6] and tumour necrosis factor-alpha [TNF-α]). Lymphedema status was assessed using bioimpedance spectroscopy and arm circumferences, and associated symptoms were assessed using visual analogue scales (VAS) for pain, heaviness and tightness. Measurements were conducted before and 24 hours after the exercise sessions. Results: No significant changes in CK, CRP, IL-6 or TNF-α were observed following the low-, moderate- or high-load resistance exercise sessions. There were no significant changes in arm swelling or symptom severity scores across the three resistance exercise conditions. Conclusions: The magnitude of acute exercise-induced inflammation following upper-body resistance exercise in women with BCRL does not vary between resistance exercise loads. Given these observations, moderate- to high-load resistance training is recommended for this patient population, as these loads prompt superior physiological and functional benefits.
Abstract:
Projective Hjelmslev planes and affine Hjelmslev planes are generalisations of projective planes and affine planes. We present an algorithm for constructing projective and affine Hjelmslev planes that uses projective planes, affine planes and orthogonal arrays. We show that all 2-uniform projective Hjelmslev planes and all 2-uniform affine Hjelmslev planes can be constructed in this way. As a corollary, it is shown that all 2-uniform affine Hjelmslev planes are sub-geometries of 2-uniform projective Hjelmslev planes.
Abstract:
This paper investigates communication protocols for relaying sensor data from animal tracking applications back to base stations. While Delay Tolerant Networks (DTNs) are well suited to such challenging environments, most existing protocols do not consider the available energy, which is particularly important when tracking devices can harvest energy. This limits both the network lifetime and the delivery probability in energy-constrained applications, to the point where routing performance becomes worse than using no routing at all. Our work shows that substantial improvements in data yield can be achieved through simple yet efficient energy-aware strategies. Conceptually, there is a need to balance the energy spent on sensing, data mulling, and direct delivery of packets to the destination. We use empirical traces collected in a flying fox (fruit bat) tracking project and show that simple threshold-based energy-aware strategies yield up to 20% higher delivery rates. Furthermore, these results generalize well across a wide range of operating conditions.
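The abstract does not give the thresholds or the exact policy, but a threshold-based energy-aware strategy of this flavour could look like the following sketch, with hypothetical battery cut-offs and mode names:

```python
def node_mode(battery_level, mule_threshold=0.6, sense_threshold=0.3):
    """Threshold-based energy policy (illustrative, not the paper's exact
    rule): mule data for other nodes only when the battery is high, keep
    sensing and delivering own packets at moderate levels, and sleep
    below a survival floor. battery_level is a fraction in [0, 1]."""
    if battery_level >= mule_threshold:
        return "sense+mule"      # spare energy: help relay others' data
    if battery_level >= sense_threshold:
        return "sense-only"      # conserve: deliver only own packets
    return "sleep"               # protect the node's survival
```

The point of such a rule is to stop a nearly-drained, harvesting-dependent node from spending its remaining energy on forwarding traffic it may never be able to deliver.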
Abstract:
Objective: Several genetic risk variants for ankylosing spondylitis (AS) have been identified in genome-wide association studies. Our objective was to examine whether familial AS cases have a higher genetic load of these susceptibility variants. Methods: Overall, 502 AS patients were examined, consisting of 312 patients who had first-degree relatives (FDRs) with AS (familial) and 190 patients who had no FDRs with AS or spondylarthritis (sporadic). All patients and affected FDRs fulfilled the modified New York criteria for AS. The patients were recruited from 2 US cohorts (the North American Spondylitis Consortium and the Prospective Study of Outcomes in Ankylosing Spondylitis) and from the UK-Oxford cohort. The frequencies of AS susceptibility loci in IL-23R, IL1R2, ANTXR2, ERAP-1, 2 intergenic regions on chromosomes 2p15 and 21q22, and HLA-B27 status as determined by the tag single-nucleotide polymorphism (SNP) rs4349859 were compared between familial and sporadic cases of AS. Association between SNPs and multiplex status was assessed by logistic regression controlling for sibship size. Results: HLA-B27 was significantly more prevalent in familial than sporadic cases of AS (odds ratio 4.44 [95% confidence interval 2.06, 9.55], P = 0.0001). Furthermore, the AS risk allele at the chromosome 21q22 intergenic region showed a trend toward higher frequency in the multiplex cases (P = 0.08). The frequencies of the other AS risk variants did not differ significantly between familial and sporadic cases, either individually or combined. Conclusion: HLA-B27 is more prevalent in familial than sporadic cases of AS, demonstrating higher familial aggregation of AS in HLA-B27-positive patients. The frequency of the recently described non-major histocompatibility complex susceptibility loci is not markedly different between the sporadic and familial cases of AS.
Abstract:
Rolling-element bearing failures are the most frequent problems in rotating machinery; they can be catastrophic and cause major downtime. Hence, providing advance failure warning and precise fault detection for such components is pivotal and cost-effective. The vast majority of past research has focused on signal processing and spectral analysis for fault diagnostics in rotating components. In this study, a data mining approach using a machine learning technique called anomaly detection (AD) is presented. This method employs classification techniques to discriminate between normal and defect examples. Two features, kurtosis and the Non-Gaussianity Score (NGS), are extracted to develop the anomaly detection algorithms. The performance of the developed algorithms was examined on real data from a bearing run-to-failure test. Finally, the anomaly detection approach is compared with a popular method, the Support Vector Machine (SVM), to investigate its sensitivity and accuracy and its ability to detect anomalies at an early stage.
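Of the two features, kurtosis has a standard definition and can be sketched directly as the fourth standardized moment of a vibration window (the NGS feature is not defined in the abstract and is omitted here):

```python
def kurtosis(signal):
    """Sample kurtosis of a vibration window (Pearson definition): a
    healthy, near-Gaussian signal sits around 3, while the impulsive
    signature of a bearing defect drives the value well above that."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    m4 = sum((x - mean) ** 4 for x in signal) / n
    return m4 / (var ** 2)
```

An anomaly detector can then flag windows whose kurtosis departs from the band learned on healthy-bearing data.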
Abstract:
We present an algorithm for multiarmed bandits that achieves almost optimal performance in both stochastic and adversarial regimes without prior knowledge about the nature of the environment. Our algorithm is based on augmentation of the EXP3 algorithm with a new control lever in the form of exploration parameters that are tailored individually for each arm. The algorithm simultaneously applies the “old” control lever, the learning rate, to control the regret in the adversarial regime and the new control lever to detect and exploit gaps between the arm losses. This secures problem-dependent “logarithmic” regret when gaps are present without compromising on the worst-case performance guarantee in the adversarial regime. We show that the algorithm can exploit both the usual expected gaps between the arm losses in the stochastic regime and deterministic gaps between the arm losses in the adversarial regime. The algorithm retains “logarithmic” regret guarantee in the stochastic regime even when some observations are contaminated by an adversary, as long as on average the contamination does not reduce the gap by more than a half. Our results for the stochastic regime are supported by experimental validation.
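The two control levers can be made concrete with one round of an EXP3-style update in which each arm carries its own exploration rate. This is a schematic in the spirit of the algorithm described above, not its exact parameter tuning:

```python
import math
import random

def exp3_step(weights, epsilons, eta, losses, rng=random.random):
    """One round of an EXP3-style update with per-arm exploration:
    mix the exponential-weights distribution with per-arm exploration
    rates (epsilons), sample an arm, and apply an importance-weighted
    loss update to that arm only. eta is the learning rate ("old" lever);
    epsilons are the per-arm exploration parameters ("new" lever)."""
    total = sum(weights)
    explore = sum(epsilons)
    probs = [(1 - explore) * w / total + e for w, e in zip(weights, epsilons)]
    # sample an arm from the mixed distribution
    r, acc, arm = rng(), 0.0, len(weights) - 1
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            arm = i
            break
    est = losses[arm] / probs[arm]          # importance-weighted loss estimate
    weights[arm] *= math.exp(-eta * est)    # penalize only the played arm
    return arm, probs
```

Only the observed arm's weight changes, which is what makes the importance-weighted estimate unbiased under bandit feedback; the per-arm `epsilons` are the extra lever used to detect and exploit gaps between arm losses.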
Abstract:
We investigate the terminating-BKZ concept first introduced by Hanrot et al. [Crypto'11] and carry out extensive experiments to predict the number of tours necessary to obtain the best possible trade-off between reduction time and quality. We then improve Buchmann and Lindner's result [Indocrypt'09] for finding sub-lattice collisions in SWIFFT. We illustrate that further improvement in running time is possible through a special setting of the SWIFFT parameters and through adaptively combining different reduction parameters. Our contributions also include a probabilistic simulation approach that tops up the deterministic simulation described by Chen and Nguyen [Asiacrypt'11] and can predict the Gram-Schmidt norms more accurately for large block sizes.
Abstract:
This paper explores novel driving experiences that make use of gamification and augmented reality in the car. We discuss our design considerations, which are grounded in road safety psychology and video game design theory. We aim to address the tension between safe driving practices and player engagement. Specifically, we propose a holistic, iterative thinking process inspired by game design cognition and share our insights generated through the application of this process. We present preliminary game concepts that blend digital components with physical elements from the driving environment. We further highlight how this design process helped us to iteratively evolve these concepts towards being safer while maintaining fun. These insights and game design cognition itself will be useful to the AutomotiveUI community investigating similar novel driving experiences.
Abstract:
Phosphorus has a number of indispensable biochemical roles, but its natural deposition, the low solubility of phosphates and their rapid transformation to insoluble forms commonly make the element the growth-limiting nutrient, particularly in aquatic ecosystems. Famously, phosphorus that reaches water bodies is commonly the main cause of eutrophication. This undesirable process can severely affect many aquatic biotas around the world. Many management practices have been proposed, but long-term monitoring of phosphorus levels is necessary to ensure that eutrophication will not occur. Passive sampling techniques, which have been developed over the last decades, offer several advantages over conventional sampling methods, including simpler sampling devices, more cost-effective sampling campaigns, flow-proportional load estimates and representative average concentrations of phosphorus in the environment. Although some types of passive samplers are commercially available, their use is still scarcely reported in the literature. In Japan, there is limited application of passive sampling techniques to monitor phosphorus, even in agricultural environments. This paper aims to introduce these relatively new P-sampling techniques and their potential for use in environmental monitoring studies.
Abstract:
Background: To date, bone-anchored prostheses are used to alleviate the concerns caused by socket-suspended prostheses and to improve the quality of life of transfemoral amputees (TFAs). Currently, two implants are commercially available: OPRA (Integrum AB, Sweden) and ILP (Orthodynamics GmbH, Germany) [1-17]. The success of the OPRA technique is co-determined by the rehabilitation program. TFAs fitted with an osseointegrated implant perform progressive mechanical loading (i.e., static load-bearing exercises (LBEs)) to facilitate bone remodelling around the implant [18, 19]. Aim: This study investigated the trustworthiness of monitoring the load prescribed (LP) during experimental static LBEs using the vertical force provided by a mechanical bathroom scale, which is considered a surrogate of the actual load applied. Method: Eleven unilateral TFAs fitted with an OPRA implant performed five trials in four loading conditions. The forces and moments on the three axes of the implant were measured directly with an instrumented pylon including a six-channel transducer. The "axial" and "vectorial" comparisons, corresponding respectively to the difference between the force applied on the long axis of the fixation and LP, and between the resultant of the three components of the load applied and LP, were analysed. Results: For each loading condition, Wilcoxon one-sample signed-rank tests were used to investigate whether significant differences (p<0.05) could be demonstrated between the force applied on the long axis and LP, and between the resultant of the force and LP. The results demonstrated that the raw axial and vectorial differences were significantly different from zero in all conditions (p<0.05), except for the vectorial difference in the 40 kg loading condition (p=0.182). The raw axial difference was negative for all participants in every loading condition, except for TFA03 in the 10 kg condition (11.17 N).
Discussion & Conclusion: This study showed a significant lack of axial compliance. The load applied on the long axis was significantly smaller than LP in every loading condition, leading to a systematic underloading of the long axis of the implant during the proposed experimental LBEs. Monitoring the vertical force might be only partially reflective of the actual load applied, particularly on the long axis of the implant.
Abstract:
In this paper, we propose a new load distribution strategy called 'send-and-receive' for scheduling divisible loads in a linear network of processors with communication delay. The strategy is designed to utilize the network resources optimally and thereby minimize the processing time of the entire load. Closed-form expressions for the optimal sizes of the load fractions and the processing time are derived for the cases where the load originates at a processor located at the boundary or in the interior of the network. A condition on processor and link speeds is also derived to ensure that the processors are continuously engaged in load distribution. The paper also presents a parallel implementation of the 'digital watermarking problem' on a personal-computer-based Pentium Linear Network (PLN) topology. Experiments are carried out to study the performance of the proposed strategy, and the results are compared with other strategies found in the literature.
Abstract:
Bearing failure is a form of localized failure that occurs when thin-walled cold-formed steel sections are subjected to concentrated loads or support reactions. To determine the bearing capacity of cold-formed channel sections, a unified design equation with different bearing coefficients is given in the current North American specification AISI S100 and the Australian/New Zealand standard AS/NZS 4600. However, coefficients are not available for unlipped channel sections that are normally fastened to supports through their flanges. Eurocode 3 Part 1.3 includes bearing capacity equations for different load cases, but does not distinguish between fastened and unfastened support conditions. Therefore, an experimental study was conducted to determine the bearing capacities of these sections as used in floor systems. Twenty-eight web bearing tests on unlipped channel sections with restrained flanges were conducted under End One Flange (EOF) and Interior One Flange (IOF) load cases. Using the results from this study, a new equation was proposed within the AISI S100 and AS/NZS 4600 guidelines to determine the bearing capacities of cold-formed unlipped channels with flanges fastened to supports. A new design rule was also proposed based on the direct strength method.