32 results for Distributed data


Relevance:

100.00%

Publisher:

Abstract:

The scheduling problem in distributed data-intensive computing environments has become an active research topic due to the tremendous growth in grid and cloud computing environments. As an innovative distributed intelligent paradigm, swarm intelligence provides a novel approach to solving these potentially intractable problems. In this paper, we formulate the scheduling problem for work-flow applications with security constraints in distributed data-intensive computing environments and present a novel security constraint model. Several meta-heuristic adaptations of the particle swarm optimization algorithm are introduced to deal with the formulation of efficient schedules. A variable neighborhood particle swarm optimization algorithm is compared with a multi-start particle swarm optimization and a multi-start genetic algorithm. Experimental results illustrate that population-based meta-heuristic approaches usually provide a good balance between global exploration and local exploitation, and demonstrate their feasibility and effectiveness for scheduling work-flow applications. © 2010 Elsevier Inc. All rights reserved.
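
As a concrete illustration of the baseline optimizer this comparison starts from, here is a minimal global-best particle swarm optimization loop for a toy task-to-machine assignment that minimizes makespan. The decoding scheme, toy runtimes and all parameter values are illustrative assumptions; the paper's variable neighborhood variant, workflow precedence and security constraints are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tasks, n_machines = 8, 3
runtimes = rng.uniform(1.0, 10.0, size=(n_tasks, n_machines))  # toy cost matrix

def makespan(position):
    # decode continuous coordinates into a task -> machine assignment
    assign = np.clip(position, 0, n_machines - 1e-9).astype(int)
    loads = np.zeros(n_machines)
    for task, machine in enumerate(assign):
        loads[machine] += runtimes[task, machine]
    return loads.max()

# plain global-best PSO; the paper's variable neighborhood variant would add
# a local-search/restart step around this loop
n_particles, iters, w, c1, c2 = 20, 200, 0.7, 1.5, 1.5
X = rng.uniform(0, n_machines, (n_particles, n_tasks))
V = np.zeros_like(X)
pbest = X.copy()
pbest_f = np.array([makespan(x) for x in X])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = np.clip(X + V, 0, n_machines - 1e-9)
    f = np.array([makespan(x) for x in X])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = X[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best makespan found:", round(makespan(gbest), 2))
```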

Relevance:

100.00%

Publisher:

Abstract:

We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick “repairs,” which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been l in the graph formed by considering only the adversarial insertions (not the adversarial deletions), will be at distance at most l log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only, will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure from Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network.
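
A toy simulation of the insert/delete repair interface described above can make the setting concrete. The sketch below (using networkx) tracks the insertions-only graph alongside the actual graph and patches each deletion by ringing the deleted node's neighbors into a cycle. This naive repair is an assumption for illustration only, not the Forgiving Graph's actual reconstruction scheme, and it does not achieve the paper's degree and stretch bounds.

```python
import random
import networkx as nx

random.seed(1)
ideal = nx.Graph()   # adversarial insertions only (deletions ignored)
actual = nx.Graph()  # insertions, deletions and repair edges

def insert(v, neighbors):
    ideal.add_node(v); actual.add_node(v)
    for u in neighbors:
        ideal.add_edge(v, u)
        if actual.has_node(u):
            actual.add_edge(v, u)

def delete_and_heal(v):
    nbrs = list(actual.neighbors(v))
    actual.remove_node(v)
    # naive repair: join the orphaned neighbors in a cycle of new edges
    for a, b in zip(nbrs, nbrs[1:] + nbrs[:1]):
        if a != b:
            actual.add_edge(a, b)

insert(0, []); insert(1, [0]); insert(2, [0, 1]); insert(3, [2]); insert(4, [2, 3])
delete_and_heal(2)
print(actual.edges())  # survivors remain close to each other after the repair
```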

Relevance:

100.00%

Publisher:

Abstract:



We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that the following process continues for up to n rounds, where n is the total number of nodes initially in the network: the adversary deletes an arbitrary node from the network, then the network responds by quickly adding a small number of new edges.

We present a distributed data structure that ensures two key properties. First, the diameter of the network is never more than O(log Delta) times its original diameter, where Delta is the maximum degree of the network initially. We note that for many peer-to-peer systems, Delta is polylogarithmic, so the diameter increase would be an O(log log n) multiplicative factor. Second, the degree of any node never increases by more than 3 over its original degree. Our data structure is fully distributed, has O(1) latency per round and requires each node to send and receive O(1) messages per round. The data structure requires an initial setup phase that has latency equal to the diameter of the original network, and requires, with high probability, each node v to send O(log n) messages along every edge incident to v. Our approach is orthogonal and complementary to traditional topology-based approaches to defending against attack.
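
The two guarantees above can be phrased as checkable predicates over a before/after pair of graphs, which is useful in a simulation harness. In the networkx helper below, a ceil(log2 Delta) factor stands in for the unspecified constant hidden in O(log Delta), so this is an illustrative check under that assumption, not the paper's exact bound.

```python
import math
import networkx as nx

def check_self_healing_bounds(original, healed):
    """Return (diameter_ok, degree_ok) for the two properties quoted above.
    ceil(log2(Delta)) stands in for the hidden O(log Delta) constant, so
    this is an illustrative predicate rather than the proven bound."""
    delta = max(d for _, d in original.degree)
    diameter_ok = (nx.diameter(healed) <=
                   max(1, math.ceil(math.log2(delta))) * nx.diameter(original))
    degree_ok = all(healed.degree(v) <= original.degree(v) + 3
                    for v in healed.nodes if v in original)
    return diameter_ok, degree_ok
```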

Relevance:

100.00%

Publisher:

Abstract:

We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick "repairs," which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been l in the graph formed by considering only the adversarial insertions (not the adversarial deletions), will be at distance at most l log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only, will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure from Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network. © Springer-Verlag 2012.

Relevance:

60.00%

Publisher:

Abstract:

Implications Provision of environmental enrichment in line with that required by welfare-based quality assurance schemes does not always appear to lead to clear improvements in broiler chicken welfare. This research perhaps serves to highlight the deficit in information regarding the 'real world' implications of enrichment with perches, string and straw bales.

Introduction Earlier work showed that provision of natural light and straw bales improved leg health in commercial broiler chickens (Bailie et al., 2013). This research aimed to determine if additional welfare benefits were shown in windowed houses by increasing straw bale provision (Study 1), or by providing perches and string in addition to straw bales (Study 2).

Material and methods Commercial windowed houses in Northern Ireland containing ~23,000 broiler chickens (placed in houses as hatched) were used in this research, which took place in 2011. In Study 1, two houses on a single farm were assigned to one of two treatments: (1) 30 straw bales per house (1 bale/44 m2), or (2) 45 straw bales per house (1 bale/29 m2). Bales of wheat straw, each measuring 80 cm x 40 cm x 40 cm, were provided from day 10 of the rearing cycle, as in Bailie et al. (2013). Treatments were replicated over 6 production cycles (using 276,000 Ross 308 and Cobb birds), and were swapped between houses in each replicate. In Study 2, four houses on a single farm were assigned to 1 of 4 treatments in a 2 x 2 factorial design. Treatments involved 2 levels of access to perches (present (24/house), or absent) and 2 levels of access to string (present (24/house), or absent), and both types of enrichment were provided from the start of the cycle. Each perch consisted of a horizontal, wooden beam (300 cm x 5 cm x 5 cm) with a rounded upper edge resting on 2 supports (15 cm high). In the string treatment, 6 pieces of white nylon string (60 cm x 10 mm) were tied at their mid-point to the wire above each of 4 feeder lines. Thirty straw bales were also provided per house from day 10. This study was replicated over 4 production cycles using 368,000 Ross 308 birds. In both studies behaviour was observed between 0900 and 1800 hours in weeks 3-5 of the cycle. In Study 1, 8 focal birds were selected in each house each week, and general activity, exploratory and social behaviours recorded directly for 10 minutes. In Study 2, 10 minute video recordings were made of 6 different areas (that did not contain enrichment) of each house each week. The percentage of birds engaged in locomotion or standing was determined through scan sampling these recordings at 120 second intervals. Four perches and four pieces of string were filmed for 25 mins in each house that contained these enrichments on one day per week. The total number of times the perch or string was used was recorded, along with the duration of each bout. In both studies, gait scores (0 (perfect) to 5 (unable to walk)) and latency to lie (measured in seconds from when a bird had been encouraged to stand) were recorded in 25 birds in each house each week. Farm and abattoir records were also used in both studies to determine the number of birds culled for leg and other problems, mortality levels, slaughter weights, and levels of pododermatitis and hock burn. Data were analysed using SPSS (version 20.0); treatment and age effects on behavioural parameters were determined in normally distributed data using ANOVA ('straw bale density*week' or 'string*perches*week' as appropriate), and in non-normally distributed data using Kruskal-Wallis tests (P<0.05 for significance). Treatment (but not age) effects on performance and health data were determined using the same tests depending on normality of data.
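
The normality-gated choice between ANOVA and Kruskal-Wallis described above is easy to sketch. The snippet below uses simulated latency-to-lie samples and assumes a Shapiro-Wilk normality check (the abstract does not state which normality test was used), so it illustrates the decision logic rather than the study's exact SPSS workflow.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# simulated latency-to-lie samples (s) for the two straw-bale densities
lat_30sb = rng.gamma(4.0, 6.0, size=25)
lat_45sb = rng.gamma(4.0, 4.5, size=25)

# normality gates the choice of test, as in the analysis described above
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (lat_30sb, lat_45sb))
if normal:
    stat, p = stats.f_oneway(lat_30sb, lat_45sb)   # parametric ANOVA
else:
    stat, p = stats.kruskal(lat_30sb, lat_45sb)    # Kruskal-Wallis
print(f"normal={normal}, statistic={stat:.2f}, p={p:.3f}")
```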

Results Average slaughter weight, and levels of mortality, culling, hock burn and pododermatitis, were not affected by treatment in either study (P>0.05). In Study 1, straw bale (SB) density had no significant effect on the frequency or duration of behaviours including standing, walking, ground pecking, dust bathing, pecking at bales or aggression, or on average gait score (P>0.05). However, the average latency to lie was greater when fewer SB were provided (30SB 23.38 s, 45SB 18.62 s, P<0.01). In Study 2 there was an interaction between perches (Pe) and age in lying behaviour, with higher percentages of birds observed lying in the Pe treatment during weeks 4 and 5 (week 3 +Pe 77.0 -Pe 80.9, week 4 +Pe 79.5 -Pe 75.2, week 5 +Pe 78.4 -Pe 76.2, P<0.02). There was also a significant interaction between string (S) and age in locomotory behaviour, with higher percentages of birds observed in locomotion in the string treatment during week 3 but not weeks 4 and 5 (week 3 +S 4.9 -S 3.9, week 4 +S 3.3 -S 3.7, week 5 +S 2.6 -S 2.8, P<0.04). There was also an interaction between S and age in average gait scores, with lower gait scores in the string treatment in weeks 3 and 5 (week 3: +S 0.7, -S 0.9; week 4: +S 1.5, -S 1.4; week 5: +S 1.9, -S 2.0; P<0.05). On average, per 25 min observation there were 15.1 (±13.6) bouts of perching and 19.2 (±14.08) bouts of string pecking, lasting 117.4 (±92.7) and 4.2 (±2.0) s for perches and string, respectively.

Conclusion Increasing straw bale levels from 1 bale/44 m2 to 1 bale/29 m2 floor space does not appear to lead to significant improvements in the welfare of broilers in windowed houses. The frequent use of perches and string suggests that these stimuli have the potential to improve welfare. Provision of string also appeared to positively influence walking ability. However, this effect was numerically small, was only shown in certain weeks and was not reflected in the latency to lie. Further research on optimum design and level of provision of enrichment items for broiler chickens is warranted. This should include measures of overall levels of activity (both in the vicinity of, and away from, enrichment items).

Relevance:

30.00%

Publisher:

Abstract:

PEGS (Production and Environmental Generic Scheduler) is a generic production scheduler that produces good schedules over a wide range of problems. It is centralised, using search strategies with the Shifting Bottleneck algorithm. We have also developed an alternative distributed approach using software agents. In some cases this reduces run times by a factor of 10 or more. In most cases, the agent-based program also produces good solutions for published benchmark data, and the short run times make our program useful for a large range of problems. Test results show that the agents can produce schedules comparable to the best found so far for some benchmark datasets and actually better schedules than PEGS on our own random datasets. The flexibility that agents can provide for today's dynamic scheduling is also appealing. We suggest that in this sort of generic or commercial system, the agent-based approach is a good alternative.
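
For flavor, here is a minimal sketch of the agent idea: job agents announce their next operation and machine agents "bid" with the earliest achievable finish time, in a greedy contract-net-style round. The tiny job set and the bidding rule are invented for illustration; PEGS's actual agent protocol and the Shifting Bottleneck search are not reproduced here.

```python
# three toy jobs, each a sequence of (machine, processing_time) operations
jobs = [[(0, 3), (1, 2)], [(1, 4), (0, 1)], [(0, 2), (1, 3)]]

machine_free = {0: 0, 1: 0}     # time each machine agent becomes free
job_ready = [0] * len(jobs)     # time each job agent can start its next op
next_op = [0] * len(jobs)
schedule = []

while any(n < len(ops) for n, ops in zip(next_op, jobs)):
    # each job agent announces its next operation; the winning "bid" is the
    # one the relevant machine agent can finish earliest
    bids = []
    for j, ops in enumerate(jobs):
        if next_op[j] < len(ops):
            m, p = ops[next_op[j]]
            finish = max(machine_free[m], job_ready[j]) + p
            bids.append((finish, j, m, p))
    finish, j, m, p = min(bids)
    schedule.append((j, m, finish - p, finish))
    machine_free[m] = job_ready[j] = finish
    next_op[j] += 1

print(schedule)
print("makespan:", max(machine_free.values()))
```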

Relevance:

30.00%

Publisher:

Abstract:

The traditional Time Division Multiple Access (TDMA) protocol provides deterministic, periodic, collision-free data transmissions. However, TDMA lacks flexibility and exhibits low efficiency in dynamic environments such as wireless LANs. On the other hand, contention-based MAC protocols such as the IEEE 802.11 DCF are adaptive to network dynamics but are generally inefficient in heavily loaded or large networks. To take advantage of both types of protocols, the D-CVDMA protocol is proposed. It is based on the k-round elimination contention (k-EC) scheme, which provides fast contention resolution for wireless LANs. D-CVDMA uses a contention mechanism to achieve TDMA-like collision-free data transmissions without needing to reserve time slots for forthcoming transmissions. These features make D-CVDMA robust and adaptive to network dynamics such as nodes leaving and joining and changes in packet size and arrival rate, which in turn make it suitable for the delivery of hybrid traffic including multimedia and data content. Analyses and simulations demonstrate that D-CVDMA outperforms the IEEE 802.11 DCF and k-EC in terms of network throughput, delay, jitter, and fairness.
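
The abstract does not spell out the k-EC mechanics, so the toy below models one common elimination style (each round, survivors pick a random burst length and only the longest-burst nodes survive) purely as an assumption, to show how k rounds whittle many contenders down to a single collision-free winner with high probability.

```python
import random

random.seed(2)

def k_round_elimination(n_contenders, k, max_burst=8):
    """Toy elimination contention: per round, every surviving node draws a
    random burst length and only those drawing the round's maximum survive."""
    survivors = list(range(n_contenders))
    for _ in range(k):
        bursts = {v: random.randint(1, max_burst) for v in survivors}
        longest = max(bursts.values())
        survivors = [v for v in survivors if bursts[v] == longest]
    return survivors

trials = [k_round_elimination(20, k=3) for _ in range(10_000)]
print("P(single winner) ~", sum(len(s) == 1 for s in trials) / len(trials))
```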

Relevance:

30.00%

Publisher:

Abstract:

Haptic information originates from a different human sense (touch), therefore the quality of service (QoS) required to support haptic traffic is significantly different from that used to support conventional real-time traffic such as voice or video. Each type of network impairment has different (and severe) impacts on the user's haptic experience. There has been no specific provision of QoS parameters for haptic interaction. Previous research into distributed haptic virtual environments (DHVEs) has concentrated on synchronization of positions (haptic device or virtual objects), and is based on client-server architectures. We present a new peer-to-peer DHVE architecture that further extends this to enable force interactions between two users, whereby force data are sent to the remote peer in addition to positional information. The work presented involves both simulation and practical experimentation where multimodal data is transmitted over a QoS-enabled IP network. Both forms of experiment produce consistent results which show that the use of specific QoS classes for haptic traffic will reduce network delay and jitter, leading to improvements in users' haptic experiences with these types of applications.
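
One standard way to place traffic in a specific QoS class on an IP network is to mark packets with a DiffServ code point. The sketch below marks a UDP socket for Expedited Forwarding and sends position-plus-force samples to a peer; the peer address, port and 6-float payload layout are illustrative assumptions, not the paper's wire format.

```python
import socket
import struct

EF_DSCP = 46                 # Expedited Forwarding code point
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# DSCP occupies the upper six bits of the (legacy) IP TOS byte
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)

def send_haptic_sample(peer, position, force):
    # hypothetical payload: three position floats plus three force floats
    sock.sendto(struct.pack("<6f", *position, *force), peer)

send_haptic_sample(("192.0.2.10", 9000), (0.10, 0.25, 0.03), (0.0, -1.5, 0.2))
```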

Relevance:

30.00%

Publisher:

Abstract:

A robust method for fitting to the results of gel electrophoresis assays of damage to plasmid DNA caused by radiation is presented. This method makes use of nonlinear regression to fit analytically derived dose response curves to observations of the supercoiled, open circular and linear plasmid forms simultaneously, allowing for more accurate results than fitting to individual forms. Comparisons with a commonly used analysis method show that while there is relatively little difference between the methods for data sets with small errors, the parameters generated by this method remain much more closely distributed around the true value in the face of increasing measurement uncertainties. This allows for parameters to be specified with greater confidence, reflected in a reduction of errors on fitted parameters. On test data sets, fitted uncertainties were reduced by 30%, similar to the improvement that would be offered by moving from triplicate to fivefold repeats (assuming standard errors). This method has been implemented in a popular spreadsheet package and made available online to improve its accessibility. © 2011 by Radiation Research Society
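
The simultaneous fit can be reproduced in outline with scipy by stacking the three observed fractions into one residual vector. The Poisson-breakage dose-response curves below (supercoiled = no breaks; open circular = at least one single-strand break and no double-strand break; linear = at least one double-strand break) and the toy data are assumptions for illustration, not necessarily the exact analytic forms fitted in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def fractions(dose, mu, sigma):
    """Poisson-breakage fractions; mu = DSB yield/Gy, sigma = SSB yield/Gy."""
    s = np.exp(-(mu + sigma) * dose)       # supercoiled: no breaks at all
    c = np.exp(-mu * dose) - s             # open circular: >=1 SSB, no DSB
    l = 1.0 - np.exp(-mu * dose)           # linear (and fragments): >=1 DSB
    return s, c, l

def stacked(dose, mu, sigma):
    # stack all three forms so one fit constrains them simultaneously
    return np.concatenate(fractions(dose, mu, sigma))

dose = np.array([0.0, 5.0, 10.0, 20.0, 40.0])              # Gy, illustrative
obs = np.concatenate([
    [0.95, 0.70, 0.50, 0.26, 0.07],                        # supercoiled
    [0.04, 0.25, 0.40, 0.55, 0.55],                        # open circular
    [0.01, 0.05, 0.10, 0.19, 0.38],                        # linear
])

(mu, sigma), cov = curve_fit(stacked, dose, obs, p0=[0.01, 0.05])
mu_err, sigma_err = np.sqrt(np.diag(cov))
print(f"DSB yield mu    = {mu:.4f} +/- {mu_err:.4f} per Gy")
print(f"SSB yield sigma = {sigma:.4f} +/- {sigma_err:.4f} per Gy")
```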

Relevance:

30.00%

Publisher:

Abstract:

Multicore computational accelerators such as GPUs are now commodity components for high-performance computing at scale. While such accelerators have been studied in some detail as stand-alone computational engines, their integration in large-scale distributed systems raises new challenges and trade-offs. In this paper, we present an exploration of resource management alternatives for building asymmetric accelerator-based distributed systems. We present these alternatives in the context of a capabilities-aware framework for data-intensive computing, which uses an implementation of the MapReduce programming model enhanced for accelerator-based clusters beyond the state of the art. The framework can transparently utilize heterogeneous accelerators to derive high performance with low programming effort. Our work is the first to compare heterogeneous types of accelerators, GPUs and Cell processors, in the same environment, and the first to explore the trade-offs between compute-efficient and control-efficient accelerators in data-intensive systems. Our investigation shows that our framework scales well with the number of different compute nodes. Furthermore, it runs simultaneously on two different types of accelerators, successfully adapts to the resource capabilities, and performs 26.9% better on average than a static execution approach.
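
A core ingredient of any capabilities-aware runtime is splitting map work in proportion to each accelerator's measured throughput. The helper below is a minimal sketch of that idea; the worker names and throughput figures are invented for the example and do not reflect the paper's framework internals.

```python
def split_by_capability(items, throughput):
    """Split map inputs across workers proportionally to their relative
    throughput (a stand-in for a capabilities-aware scheduler)."""
    total = sum(throughput.values())
    workers = list(throughput)
    shares, start = {}, 0
    for i, w in enumerate(workers):
        # the last worker absorbs rounding remainders
        end = len(items) if i == len(workers) - 1 else \
            start + round(len(items) * throughput[w] / total)
        shares[w] = items[start:end]
        start = end
    return shares

chunks = list(range(90))
split = split_by_capability(chunks, {"gpu0": 5.0, "cell0": 3.0, "cpu0": 1.0})
print({w: len(c) for w, c in split.items()})  # {'gpu0': 50, 'cell0': 30, 'cpu0': 10}
```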

Relevance:

30.00%

Publisher:

Abstract:

In studies of radiation-induced DNA fragmentation and repair, analytical models may provide rapid and easy-to-use methods to test simple hypotheses regarding the breakage and rejoining mechanisms involved. The random breakage model, according to which lesions are distributed uniformly and independently of each other along the DNA, has been the model most used to describe spatial distribution of radiation-induced DNA damage. Recently several mechanistic approaches have been proposed that model clustered damage to DNA. In general, such approaches focus on the study of initial radiation-induced DNA damage and repair, without considering the effects of additional (unwanted and unavoidable) fragmentation that may take place during the experimental procedures. While most approaches, including measurement of total DNA mass below a specified value, allow for the occurrence of background experimental damage by means of simple subtractive procedures, a more detailed analysis of DNA fragmentation necessitates a more accurate treatment. We have developed a new, relatively simple model of DNA breakage and the resulting rejoining kinetics of broken fragments. Initial radiation-induced DNA damage is simulated using a clustered breakage approach, with three free parameters: the number of independently located clusters, each containing several DNA double-strand breaks (DSBs), the average number of DSBs within a cluster (multiplicity of the cluster), and the maximum allowed radius within which DSBs belonging to the same cluster are distributed. Random breakage is simulated as a special case of the DSB clustering procedure. When the model is applied to the analysis of DNA fragmentation as measured with pulsed-field gel electrophoresis (PFGE), the hypothesis that DSBs in proximity rejoin at a different rate from that of sparse isolated breaks can be tested, since the kinetics of rejoining of fragments of varying size may be followed by means of computer simulations. The problem of how to account for background damage from experimental handling is also carefully considered. We have shown that the conventional procedure of subtracting the background damage from the experimental data may lead to erroneous conclusions during the analysis of both initial fragmentation and DSB rejoining. Despite its relative simplicity, the method presented allows both the quantitative and qualitative description of radiation-induced DNA fragmentation and subsequent rejoining of double-stranded DNA fragments. © 2004 by Radiation Research Society.
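
The three-parameter clustered-breakage idea is straightforward to prototype. The generator below places cluster centres uniformly along a molecule, draws each cluster's DSB count from a Poisson distribution and scatters the DSBs within the allowed radius; taking the multiplicity to one DSB per cluster recovers random breakage. The specific distributional choices (uniform centres, Poisson multiplicities) go beyond what the abstract states and are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_dsbs(genome_len, n_clusters, mean_multiplicity, max_radius):
    """Clustered DSB generator: uniform cluster centres, Poisson multiplicity
    (at least 1), DSBs scattered uniformly within max_radius of the centre.
    With mean_multiplicity -> 0 every cluster holds a single DSB, which
    reduces to the random breakage model."""
    centres = rng.uniform(0, genome_len, n_clusters)
    breaks = []
    for c in centres:
        k = max(1, rng.poisson(mean_multiplicity))
        lo, hi = max(0.0, c - max_radius), min(genome_len, c + max_radius)
        breaks.extend(rng.uniform(lo, hi, k) if hi > lo else [c] * k)
    return np.sort(np.asarray(breaks))

def fragment_sizes(breaks, genome_len):
    edges = np.concatenate(([0.0], breaks, [genome_len]))
    return np.diff(edges)

breaks = simulate_dsbs(genome_len=5.7e6, n_clusters=40,
                       mean_multiplicity=3.0, max_radius=2.0e3)
print(len(breaks), "DSBs; first fragment sizes:",
      fragment_sizes(breaks, 5.7e6)[:3])
```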

Relevance:

30.00%

Publisher:

Abstract:

Conditional Gaussian (CG) distributions allow the inclusion of both discrete and continuous variables in a model assuming that the continuous variable is normally distributed. However, the CG distributions have proved to be unsuitable for survival data which tends to be highly skewed. A new method of analysis is required to take into account continuous variables which are not normally distributed. The aim of this paper is to introduce the more appropriate conditional phase-type (C-Ph) distribution for representing a continuous non-normal variable while also incorporating the causal information in the form of a Bayesian network.
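
A phase-type distribution is the time to absorption of a finite-state continuous-time Markov chain, and the "conditional" part amounts to letting the discrete parents in the Bayesian network select the chain's parameters. The sampler below is a minimal sketch of that construction; the two-phase parameter values per discrete condition are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_phase_type(alpha, S, n):
    """Draw n absorption times of a CTMC with initial distribution alpha
    over the transient states and subgenerator matrix S."""
    p = len(alpha)
    exit_rates = -S.sum(axis=1)              # rates into the absorbing state
    times = np.empty(n)
    for i in range(n):
        state, t = rng.choice(p, p=alpha), 0.0
        while state is not None:
            rate = -S[state, state]
            t += rng.exponential(1.0 / rate)
            probs = np.append(S[state] / rate, exit_rates[state] / rate)
            probs[state] = 0.0                # no self-transition
            nxt = rng.choice(p + 1, p=probs)
            state = None if nxt == p else nxt
        times[i] = t
    return times

# the C-Ph idea in miniature: a discrete parent picks the phase-type parameters
params = {
    "surgery": (np.array([1.0, 0.0]), np.array([[-0.8, 0.5], [0.0, -0.3]])),
    "no_surgery": (np.array([1.0, 0.0]), np.array([[-2.0, 1.0], [0.0, -0.5]])),
}
alpha, S = params["surgery"]
print(sample_phase_type(alpha, S, 5))  # skewed survival-time samples
```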

Relevance:

30.00%

Publisher:

Abstract:

Background: Ineffective risk stratification can delay diagnosis of serious disease in patients with hematuria. We applied a systems biology approach to analyze clinical, demographic and biomarker measurements (n = 29) collected from 157 hematuric patients: 80 urothelial cancer (UC) and 77 controls with confounding pathologies.

Methods: On the basis of biomarkers, we conducted agglomerative hierarchical clustering to identify patient and biomarker clusters. We then explored the relationship between the patient clusters and clinical characteristics using Chi-square analyses. We determined classification errors and areas under the receiver operating curve of Random Forest Classifiers (RFC) for patient subpopulations using the biomarker clusters to reduce the dimensionality of the data.

Results: Agglomerative clustering identified five patient clusters and seven biomarker clusters. Final diagnosis categories were non-randomly distributed across the five patient clusters. In addition, two of the patient clusters were enriched with patients with 'low cancer-risk' characteristics. The biomarkers which contributed to the diagnostic classifiers for these two patient clusters were similar. In contrast, three of the patient clusters were significantly enriched with patients harboring 'high cancer-risk' characteristics, including proteinuria, aggressive pathological stage and grade, and malignant cytology. Patients in these three clusters included controls, that is, patients with other serious disease, and patients with cancers other than UC. Biomarkers which contributed to the diagnostic classifiers for the largest 'high cancer-risk' cluster were different from those contributing to the classifiers for the 'low cancer-risk' clusters. Biomarkers which contributed to subpopulations that were split according to smoking status, gender and medication were also different.

Conclusions: The systems biology approach applied in this study allowed the hematuric patients to cluster naturally on the basis of the heterogeneity within their biomarker data, into five distinct risk subpopulations. Our findings highlight an approach with the promise to unlock the potential of biomarkers. This will be especially valuable in the field of diagnostic bladder cancer where biomarkers are urgently required. Clinicians could interpret risk classification scores in the context of clinical parameters at the time of triage. This could reduce cystoscopies and enable priority diagnosis of aggressive diseases, leading to improved patient outcomes at reduced costs. © 2013 Emmert-Streib et al; licensee BioMed Central Ltd.
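
In outline, the pipeline pairs agglomerative clustering (over patients and over biomarkers) with per-cluster Random Forest classifiers scored by AUC. The sketch below follows that shape on synthetic data: the cluster counts match the results quoted above, but the random data, the feature-averaging dimensionality reduction and the cross-validation settings are illustrative assumptions rather than the study's exact protocol.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(157, 29))            # biomarker matrix (synthetic)
y = np.array([1] * 80 + [0] * 77)         # 1 = urothelial cancer, 0 = control

# cluster patients (rows) and biomarkers (columns) hierarchically
patient_cluster = AgglomerativeClustering(n_clusters=5).fit_predict(X)
biomarker_cluster = AgglomerativeClustering(n_clusters=7).fit_predict(X.T)

# dimensionality reduction: average each biomarker cluster into one feature
X_red = np.column_stack([X[:, biomarker_cluster == k].mean(axis=1)
                         for k in range(7)])

# one Random Forest classifier per patient subpopulation, scored by AUC
for k in range(5):
    mask = patient_cluster == k
    if np.bincount(y[mask], minlength=2).min() < 3:
        continue  # too few of one class for 3-fold cross-validation
    proba = cross_val_predict(
        RandomForestClassifier(n_estimators=200, random_state=0),
        X_red[mask], y[mask], cv=3, method="predict_proba")[:, 1]
    print(f"cluster {k}: n={mask.sum()}, AUC={roc_auc_score(y[mask], proba):.2f}")
```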