882 results for Distributed replication system


Relevance: 30.00%

Abstract:

Graph analytics is an important and computationally demanding class of data analytics. Large-scale graph analytics must balance scalability, ease of use and high performance, which requires hiding the complexity of parallelism, data distribution and memory locality behind an abstract interface. The aim of this work is to build a NUMA-aware, scalable graph analytics framework that does not demand significant parallel programming experience.
The realization of such a system faces two key problems:
(i) how to develop a scale-free parallel programming framework that scales efficiently across NUMA domains; and (ii) how to apply graph partitioning efficiently in order to create separate and largely independent work items that can be distributed among threads.
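
To make the second problem concrete, a minimal Python sketch of splitting an edge list into largely independent per-thread work items is given below; the range-based ownership scheme is an illustrative assumption, not the framework's actual partitioning strategy.

    # Hypothetical illustration: assign each edge to the thread that owns its
    # destination vertex, so each thread updates a disjoint range of vertex
    # state and cross-partition synchronisation is minimised.
    from collections import defaultdict

    def partition_edges(edges, num_threads, num_vertices):
        """Return a mapping: thread id -> list of edges it processes locally."""
        chunk = (num_vertices + num_threads - 1) // num_threads
        work_items = defaultdict(list)
        for src, dst in edges:
            work_items[dst // chunk].append((src, dst))
        return work_items

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
    print(partition_edges(edges, num_threads=2, num_vertices=4))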

Relevance: 30.00%

Abstract:

Wind generation in highly interconnected power networks creates local and centralised stability issues, depending on its proximity to conventional synchronous generators and load centres. This paper examines the large-disturbance stability issues (i.e. rotor angle and voltage stability) in power networks with geographically distributed wind resources, in the context of a number of dispatch scenarios based on profiles of historical wind generation for a real power network. Stability issues have been analysed using novel stability indices developed from the dynamic characteristics of wind generation. The results of this study show that localised stability issues worsen when significant penetration of both conventional and wind generation is present, due to their non-complementary characteristics. In contrast, network stability improves when either wind or synchronous generation alone is present at high penetration in the network. Therefore, network regions can be clustered into two distinct stability groups (i.e. superior-stability and inferior-stability regions). Network stability improves when a voltage control strategy is implemented at wind farms; however, both stability clusters remain unchanged irrespective of the change in control strategy. Moreover, this study has shown that an enhanced fault ride-through (FRT) strategy for wind farms can improve both voltage and rotor angle stability locally, but only a marginal improvement is evident in neighbouring regions.

Relevance: 30.00%

Abstract:

Future power systems are expected to integrate large-scale stochastic and intermittent generation and load, including renewable energy sources (RES) and electric vehicles (EVs), as the use of fossil fuel resources is reduced. The inclusion of such resources poses challenges for the dynamic stability of synchronous transmission and distribution networks, not least in terms of generation, where system inertia may no longer be wholly governed by large-scale generation but displaced by small-scale and localised generation. Energy storage systems (ESS) can limit the impact of dispersed and distributed generation by offering supporting reserve while accommodating large-scale EV connection, with the latter (load) also participating in storage provision. In this paper, a local energy storage system (LESS) is proposed. The structure, requirements and optimal sizing of the LESS are discussed. Three operating modes are detailed: 1) storage pack management; 2) normal operation; and 3) contingency operation. The proposed LESS scheme is evaluated using simulation studies based on data obtained from the Northern Ireland regional and residential network.
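
For illustration only, a minimal Python sketch of how the three operating modes might be selected is given below; the trigger conditions and thresholds are assumptions, not the control scheme proposed in the paper.

    # Hypothetical mode selection for a local energy storage system (LESS).
    def select_mode(grid_connected, frequency_hz, state_of_charge,
                    soc_min=0.2, soc_max=0.9, f_low=49.5, f_high=50.5):
        """Return one of the three operating modes described above."""
        if not grid_connected or not (f_low <= frequency_hz <= f_high):
            return "contingency operation"     # support the network during a disturbance
        if state_of_charge < soc_min or state_of_charge > soc_max:
            return "storage pack management"   # recondition / rebalance the packs
        return "normal operation"              # routine charge/discharge scheduling

    print(select_mode(grid_connected=True, frequency_hz=49.2, state_of_charge=0.6))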

Relevance: 30.00%

Abstract:

The utilization of renewable energy sources and energy storage systems is increasing as new policies foster the energy industry. However, the growth of distributed generation hinders the reliability of power systems. To stabilize them, the virtual power plant (VPP) has emerged as a novel power grid management system, whose role is to enable the participation of diverse distributed energy resources and energy storage systems. This paper identifies the core technologies of the VPP, namely demand response and ancillary services, with reference to cases in Korea, the United States and Europe. It also suggests how the VPP can be applied to the vehicle-to-grid (V2G) market in order to restructure the national power industry in Korea.

Relevance: 30.00%

Abstract:

Renewable energy sources (RES) will play a vital role in meeting future power needs, in view of the increasing demand for electrical energy and the depletion of fossil fuels with their environmental impact. The main constraints on renewable energy (RE) generation are high capital investment, fluctuating output and the requirement for vast land areas. Distributed RE generation on building rooftops can overcome these issues to some extent. Any such system is feasible only if it is economically viable and reliable, and economic viability depends on the availability of RE and the energy requirement at specific locations. This work examines the economic viability of the system for a desired location and demand.
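
As a minimal illustration of such a viability check, the Python sketch below estimates a simple payback period from an assumed capital cost, local yield and tariff; all figures are placeholders rather than results of this work.

    # Hypothetical payback-period estimate for a rooftop renewable installation.
    def simple_payback_years(capital_cost, annual_yield_kwh, tariff_per_kwh,
                             annual_om_cost=0.0):
        annual_saving = annual_yield_kwh * tariff_per_kwh - annual_om_cost
        if annual_saving <= 0:
            return float("inf")                # never pays back
        return capital_cost / annual_saving

    # e.g. an assumed rooftop array with placeholder yield and tariff figures
    print(round(simple_payback_years(capital_cost=6000.0,
                                     annual_yield_kwh=7500.0,
                                     tariff_per_kwh=0.12), 1), "years")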

Relevance: 30.00%

Abstract:

In the past years, we could observe a significant number of new robotic systems in science, industry, and everyday life. To reduce the complexity of these systems, industry constructs robots that are designated for the execution of a specific task, such as vacuum cleaning, autonomous driving, observation, or transportation. As a result, such robotic systems need to combine their capabilities to accomplish complex tasks that exceed the abilities of individual robots. However, to achieve emergent cooperative behavior, multi-robot systems require a decision process that copes with the communication challenges of the application domain. This work investigates a distributed multi-robot decision process that addresses unreliable and transient communication. The process is composed of five steps, which we embedded into the ALICA multi-agent coordination language, guided by the PROViDE negotiation middleware. The first step encompasses the specification of the decision problem, which is an integral part of the ALICA implementation. In our decision process, we describe multi-robot problems as continuous nonlinear constraint satisfaction problems. The second step addresses the calculation of solution proposals for this problem specification. Here, we propose an efficient solution algorithm that integrates incomplete local search and interval propagation techniques into a satisfiability solver, forming a satisfiability modulo theories (SMT) solver. In the third step, the PROViDE middleware replicates the solution proposals among the robots. This replication process is parameterized with a distribution method, which determines the consistency properties of the proposals. The fourth step investigates conflict resolution: an acceptance method ensures that each robot supports one of the replicated proposals. As the conflict resolution is integrated into the replication process, a sound selection of the distribution and acceptance methods leads to eventual convergence of the robot proposals. To avoid the execution of conflicting proposals, the last step comprises a decision method, which selects a proposal for implementation in case the conflict resolution fails. The evaluation of our work shows that the use of incomplete solution techniques in the constraint satisfaction solver outperforms other state-of-the-art approaches in runtime for many typical robotic problems. We further show, through experimental setups and practical application in the RoboCup environment, that our decision process is suitable for making quick decisions in the presence of packet loss and delay. Moreover, PROViDE requires less memory and bandwidth than other state-of-the-art middleware approaches.
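
A minimal, self-contained Python sketch of the five-step cycle described above (specify, solve, replicate, accept, decide) follows; the proposal format, acceptance rule and the absence of real networking or SMT solving are simplifying assumptions, not the ALICA/PROViDE implementation.

    # Hypothetical sketch of the propose / replicate / accept / decide cycle
    # for a team of robots (no real networking or SMT solving).
    import random

    def solve_locally(robot_id, constraint):
        """Step 2: find any value that satisfies the (continuous) constraint."""
        for _ in range(1000):
            x = random.uniform(-10.0, 10.0)
            if constraint(x):
                return {"robot": robot_id, "value": x}
        return None

    def decide(team_ids, constraint):
        proposals = {}
        for rid in team_ids:                   # step 2: local solving
            p = solve_locally(rid, constraint)
            if p is not None:
                proposals[rid] = p             # step 3: replication (assumed lossless here)
        if not proposals:
            return None
        # step 4: acceptance -- every robot supports the proposal of the lowest robot id
        accepted = proposals[min(proposals)]
        # step 5: a decision method would act as a fallback if acceptance failed
        return accepted

    # step 1: problem specification as a simple continuous constraint
    print(decide(team_ids=[3, 1, 2], constraint=lambda x: 2.0 <= x * x <= 3.0))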

Relevance: 30.00%

Abstract:

A large class of computational problems is characterised by frequent synchronisation and computational requirements that change as a function of time. When such a problem is solved on a message passing multiprocessor machine [5], the combination of these characteristics leads to system performance that deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved with periodic redistribution of computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications. The strategy is intended to handle varying levels of load change throughout the run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, a semi-automatic tool for the parallelisation of mesh-based FORTRAN codes.
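
A minimal Python sketch of such a remapping decision policy follows: re-balance only when the accumulated cost of imbalance is expected to outweigh the cost of remapping. The cost model is an illustrative assumption, not the strategy developed in the paper.

    # Hypothetical load-rebalancing decision policy.
    def should_remap(step_times, remap_cost, steps_remaining):
        """step_times: per-processor times for the last step (seconds)."""
        imbalance_per_step = max(step_times) - sum(step_times) / len(step_times)
        expected_saving = imbalance_per_step * steps_remaining
        return expected_saving > remap_cost    # remap only if it is expected to pay off

    print(should_remap(step_times=[1.0, 1.4, 0.9, 1.6], remap_cost=20.0,
                       steps_remaining=100))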

Relevance: 30.00%

Abstract:

In today's big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value out of this big data. A large portion of this data is stored and processed in the Cloud due to the advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
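
As a minimal illustration of workload-aware placement in the spirit of SWORD, the Python sketch below greedily co-locates items that are frequently accessed by the same transactions so that fewer transactions span partitions; it is a simplification, not SWORD's actual partitioning or replication algorithm.

    # Hypothetical greedy co-location of data items based on transaction co-access.
    from collections import Counter
    from itertools import combinations

    def place_items(transactions, num_partitions, capacity):
        """transactions: list of sets of item ids accessed together."""
        co_access = Counter()
        for txn in transactions:
            for a, b in combinations(sorted(txn), 2):
                co_access[(a, b)] += 1
        placement, loads = {}, [0] * num_partitions
        # place the most frequently co-accessed pairs first, on the same partition
        for (a, b), _ in co_access.most_common():
            for item in (a, b):
                if item not in placement:
                    other = b if item == a else a
                    target = placement.get(other)
                    if target is None or loads[target] >= capacity:
                        target = loads.index(min(loads))   # least-loaded partition
                    placement[item] = target
                    loads[target] += 1
        return placement

    txns = [{1, 2}, {1, 2, 3}, {4, 5}, {2, 3}, {4, 5, 6}]
    print(place_items(txns, num_partitions=2, capacity=4))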

Relevance: 30.00%

Abstract:

In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly; and so on. There is tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework, and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries, and also allows partial pre-computation of the aggregates to minimize the query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs could be specified using both graph structure as well as activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
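
A minimal Python sketch of a read/write-frequency-driven replication choice follows; the thresholds and the eager/lazy/none rule are illustrative assumptions, not the hybrid policy developed in the dissertation.

    # Hypothetical read/write-frequency-driven replication choice for a graph node.
    from collections import Counter

    class ReplicationMonitor:
        def __init__(self, eager_ratio=5.0, lazy_ratio=1.0):
            self.reads, self.writes = Counter(), Counter()
            self.eager_ratio, self.lazy_ratio = eager_ratio, lazy_ratio

        def record_read(self, node):
            self.reads[node] += 1

        def record_write(self, node):
            self.writes[node] += 1

        def policy(self, node):
            r, w = self.reads[node], max(self.writes[node], 1)
            if r / w >= self.eager_ratio:
                return "eager"     # push updates to replicas immediately
            if r / w >= self.lazy_ratio:
                return "lazy"      # replicate, but propagate updates on demand
            return "none"          # write-heavy: keep a single master copy

    m = ReplicationMonitor()
    for _ in range(12):
        m.record_read("u42")
    m.record_write("u42")
    print(m.policy("u42"))         # read-dominated node -> "eager"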

Relevance: 30.00%

Abstract:

The Internet of Things (IoT) is still in its infancy and has attracted much interest in many industrial sectors, including medical fields, logistics tracking, smart cities and automobiles. However, as a paradigm, it is susceptible to a range of significant intrusion threats. This paper presents a threat analysis of the IoT and uses an Artificial Neural Network (ANN) to combat these threats. A multi-layer perceptron, a type of supervised ANN, is trained using internet packet traces and then assessed on its ability to thwart Distributed Denial of Service (DDoS/DoS) attacks. This paper focuses on the classification of normal and threat patterns on an IoT network. The ANN procedure is validated against a simulated IoT network. The experimental results demonstrate 99.4% accuracy in successfully detecting various DDoS/DoS attacks.
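
A minimal Python sketch of training a multi-layer perceptron to separate normal and DoS-like traffic is given below; the feature set, synthetic data and hyperparameters are assumptions for illustration, not the paper's experimental setup.

    # Hypothetical MLP-based normal/DDoS traffic classifier (scikit-learn).
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Assumed per-window features: [packets/s, mean packet size, distinct sources]
    normal = rng.normal([200, 500, 20], [50, 100, 5], size=(500, 3))
    attack = rng.normal([5000, 80, 400], [800, 20, 80], size=(500, 3))
    X = np.vstack([normal, attack])
    y = np.array([0] * 500 + [1] * 500)        # 0 = normal, 1 = DDoS/DoS

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16, 8),
                                      max_iter=500, random_state=0))
    clf.fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))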

Relevance: 30.00%

Abstract:

The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. This paper presents an artificial immune system (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by collaborative filtering (CF). Natural evolution and in particular the immune system have not been designed for classical optimisation. However, for this problem, we are not interested in finding a single optimum. Rather we intend to identify a sub-set of good matches on which recommendations can be based. It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: antigen-antibody interaction for matching and antibody-antibody interaction for diversity. Computational results are presented in support of this conjecture and compared to those found by other CF techniques.
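
A minimal Python sketch of the two interactions described, antigen-antibody affinity for matching and antibody-antibody similarity for diversity, is given below; the affinity measure and suppression rule are simplifications, not the paper's AIS.

    # Hypothetical AIS-style neighbour selection for collaborative filtering.
    import numpy as np

    def affinity(a, b):
        """Affinity between two rating vectors (computed over co-rated items)."""
        mask = (a > 0) & (b > 0)
        if mask.sum() < 2:
            return 0.0
        return float(np.corrcoef(a[mask], b[mask])[0, 1])

    def select_antibodies(antigen, candidates, pool_size=3, diversity=0.9):
        """Keep high-affinity users, suppressing near-duplicates already in the pool."""
        ranked = sorted(candidates, key=lambda c: affinity(antigen, c), reverse=True)
        pool = []
        for c in ranked:
            if all(affinity(c, p) < diversity for p in pool):   # antibody-antibody check
                pool.append(c)
            if len(pool) == pool_size:
                break
        return pool

    antigen = np.array([5, 0, 3, 4, 0])        # the active user's ratings (0 = unrated)
    users = [np.array([5, 1, 3, 4, 2]), np.array([5, 1, 3, 4, 1]),
             np.array([1, 5, 4, 1, 3])]
    print(len(select_antibodies(antigen, users)))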

Relevance: 30.00%

Abstract:

The use of artificial immune systems in intrusion detection is an appealing concept for two reasons. Firstly, the human immune system provides the human body with a high level of protection from invading pathogens in a robust, self-organised and distributed manner. Secondly, current techniques used in computer security are not able to cope with the dynamic and increasingly complex nature of computer systems and their security. It is hoped that biologically inspired approaches in this area, including the use of immune-based systems, will be able to meet this challenge. Here we collate the algorithms used, the development of the systems and the outcomes of their implementation. This paper provides an introduction and review of the key developments within this field, in addition to making suggestions for future research.

Relevance: 30.00%

Abstract:

The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. This paper presents an Artificial Immune System (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by Collaborative Filtering (CF). Natural evolution and in particular the immune system have not been designed for classical optimisation. However, for this problem, we are not interested in finding a single optimum. Rather we intend to identify a sub-set of good matches on which recommendations can be based. It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: Antigen-antibody interaction for matching and idiotypic antibody-antibody interaction for diversity. Computational results are presented in support of this conjecture and compared to those found by other CF techniques.

Relevance: 30.00%

Abstract:

The explosion in mobile data traffic is a driver for future network operator technologies, given its large potential to affect both network performance and generated revenue. The concept of distributed mobility management (DMM) has emerged in order to overcome efficiency-wise limitations of centralized mobility approaches, proposing not only the distribution of anchoring functions but also dynamic mobility activation sensitive to application needs. Nevertheless, there is no acceptable solution for IP multicast in DMM environments, as the first proposals, based on MLD Proxy, are prone to the tunnel replication problem or to service disruption. We propose the application of PIM-SM in mobility entities as an alternative solution for multicast support in DMM, and introduce an architecture enabling mobile multicast listener support over distributed anchoring frameworks in a network-efficient way. The architecture aims at providing operators with flexible options for multicast mobility, supporting three modes: the first introduces basic IP multicast support in DMM; the second improves subscription time through extensions to the mobility protocol, removing the dependence on the MLD protocol; and the third enables fast listener mobility by benefiting from mobility tunnels to avoid the potentially slow multicast tree convergence in larger infrastructures. The different modes were evaluated through mathematical analysis of disruption time and packet loss during handoff against several parameters, of total and tunneling packet delivery cost, and of packet and signaling overhead.
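
As a rough illustration of how the three modes trade off handoff disruption, the Python sketch below sums assumed delay components per mode; the components and values are placeholders, not the paper's analytical model or parameters.

    # Hypothetical handoff disruption-time comparison for the three modes above.
    def disruption_time(mode, t_l2=0.05, t_mld_join=0.10, t_tree_conv=0.50,
                        t_tunnel=0.02):
        if mode == 1:        # basic multicast support: MLD join + tree convergence
            return t_l2 + t_mld_join + t_tree_conv
        if mode == 2:        # subscription via mobility signalling: no MLD join
            return t_l2 + t_tree_conv
        if mode == 3:        # fast mobility: traffic forwarded over a mobility tunnel
            return t_l2 + t_tunnel
        raise ValueError("unknown mode")

    for m in (1, 2, 3):
        print(f"mode {m}: {disruption_time(m):.2f} s")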