977 results for Distributed replication system


Relevance:

80.00%

Publisher:

Abstract:

There is a growing need for massive computational resources for the analysis of new astronomical datasets. To tackle this problem, we present here our first steps towards marrying two new and emerging technologies: the Virtual Observatory (e.g. AstroGrid) and the computational grid (e.g. TeraGrid, COSMOS, etc.). We discuss the construction of VOTechBroker, a modular software tool designed to abstract the tasks of submission and management of a large number of computational jobs to a distributed computer system. The broker will also interact with the AstroGrid workflow and MySpace environments. We discuss our planned uses of the VOTechBroker in computing a huge number of n-point correlation functions from the SDSS data and massive model-fitting of millions of CMBfast models to WMAP data. We also discuss other applications, including the determination of the XMM Cluster Survey selection function and the construction of new WMAP maps.
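By way of illustration, the kind of abstraction such a broker provides can be sketched as follows; the class and method names below are hypothetical and are not the VOTechBroker API.

```python
# Illustrative sketch only: names and back-ends are hypothetical, not the VOTechBroker interface.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Job:
    job_id: str
    command: str
    status: str = "PENDING"   # PENDING -> SUBMITTED -> DONE / FAILED

class GridBackend:
    """Abstract interface to one computational resource (e.g. a grid or a cluster)."""
    def submit(self, job: Job) -> None: ...
    def poll(self, job: Job) -> str: ...

class Broker:
    """Fans a large batch of jobs out over several back-ends and tracks them."""
    def __init__(self, backends: List[GridBackend]):
        self.backends = backends
        self.assignments: Dict[str, GridBackend] = {}

    def submit_all(self, jobs: List[Job]) -> None:
        for i, job in enumerate(jobs):
            backend = self.backends[i % len(self.backends)]  # simple round-robin placement
            backend.submit(job)
            job.status = "SUBMITTED"
            self.assignments[job.job_id] = backend

    def update_statuses(self, jobs: List[Job]) -> None:
        for job in jobs:
            job.status = self.assignments[job.job_id].poll(job)
```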

Relevance:

80.00%

Publisher:

Abstract:

We outline our first steps towards marrying two new and emerging technologies: the Virtual Observatory (e.g. AstroGrid) and the computational grid. We discuss the construction of VOTechBroker, a modular software tool designed to abstract the tasks of submission and management of a large number of computational jobs to a distributed computer system. The broker will also interact with the AstroGrid workflow and MySpace environments. We present our planned usage of the VOTechBroker in computing a huge number of n-point correlation functions from the SDSS, as well as fitting over a million CMBfast models to the WMAP data.

Relevance:

80.00%

Publisher:

Abstract:

A number of recent, highly publicised Distributed Denial of Service (DDoS) attacks have made people aware of the importance of keeping grids' data and services securely available to users. This paper examines the vulnerability of grids to DDoS attacks and proposes a distributed defense system that deploys a mixture of sub-systems to protect grids from such attacks. Simulation experiments show that the system is effective in defending grids against attacks: it can avoid overall network congestion and provide more resources to legitimate grid users.

Relevance:

80.00%

Publisher:

Abstract:

A distributed database system is subject to site failures and link failures. This paper presents a reactive system approach to achieving fault tolerance in such a system. Reactive system concepts are an attractive paradigm for system design, development and maintenance because they separate policies from mechanisms. In this paper we give a solution that uses different reactive modules to implement the fault-tolerance policies and the failure-detection mechanisms. The solution shows that the two can be separated without affecting each other, so the system can adapt to constant changes in user requirements.
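As a rough illustration of the policy/mechanism separation described above (all names and interfaces below are hypothetical, not taken from the paper), a failure-detection mechanism can publish events to which an independently replaceable fault-tolerance policy reacts:

```python
# Hypothetical sketch: a detection mechanism raises events, a separate policy reacts to them.
from typing import Callable, Dict, List

class FailureDetector:
    """Mechanism: decides *that* a site or link is down and notifies listeners."""
    def __init__(self):
        self.listeners: List[Callable[[str, str], None]] = []

    def subscribe(self, listener: Callable[[str, str], None]) -> None:
        self.listeners.append(listener)

    def report_missed_heartbeat(self, kind: str, name: str) -> None:
        for listener in self.listeners:
            listener(kind, name)          # e.g. ("site", "S3") or ("link", "S1-S2")

class FailoverPolicy:
    """Policy: decides *what to do* about a failure; can be swapped without touching the detector."""
    def __init__(self, replicas: Dict[str, List[str]]):
        self.replicas = replicas          # primary site -> ordered backup sites

    def on_failure(self, kind: str, name: str) -> None:
        if kind == "site" and self.replicas.get(name):
            backup = self.replicas[name][0]
            print(f"redirecting transactions from {name} to replica {backup}")

detector = FailureDetector()
detector.subscribe(FailoverPolicy({"S3": ["S5", "S7"]}).on_failure)
detector.report_missed_heartbeat("site", "S3")
```

Because the policy only subscribes to the detector's events, either module can change (a new detection mechanism, a different failover policy) without modifying the other, which is the separation the paper argues for.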

Relevance:

80.00%

Publisher:

Abstract:

This paper addresses the problem of performance modeling of large-scale heterogeneous distributed systems, with emphasis on enterprise grid computing systems. To this end, we present an analytical model that can be employed to explore the effectiveness of different design approaches, so that one can make an intelligent choice during the design and evaluation of a cost-effective large-scale heterogeneous distributed computing system. The model is validated through comprehensive simulation.

Relevance:

80.00%

Publisher:

Abstract:

This paper addresses the problem of performance modeling of heterogeneous multi-cluster computing systems. We present an analytical model that can be employed to explore the effectiveness of different design approaches, so that one can make an intelligent choice during the design and evaluation of a cost-effective large-scale heterogeneous distributed computing system. The proposed model considers stochastic quantities as well as the processor heterogeneity of the target system. The analysis is based on a parametric fat-tree network, the m-port n-tree, and a deterministic routing algorithm. The correctness of the proposed model is validated through comprehensive simulation of different types of clusters.
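For orientation, the node and switch counts of an m-port n-tree as commonly defined in the fat-tree literature can be computed as below; the formulas are quoted from that general literature and are an assumption here, not a result taken from this paper.

```python
# Sketch of the commonly cited size formulas for an m-port n-tree fat-tree
# (assumption: the standard definition used in the multi-cluster modelling literature).
def m_port_n_tree_size(m: int, n: int) -> dict:
    half = m // 2
    return {
        "processing_nodes": 2 * half ** n,            # 2 * (m/2)^n
        "switches": (2 * n - 1) * half ** (n - 1),    # (2n - 1) * (m/2)^(n-1)
    }

# Example: an 8-port 3-tree network.
print(m_port_n_tree_size(8, 3))   # {'processing_nodes': 128, 'switches': 80}
```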

Relevance:

80.00%

Publisher:

Abstract:

Vigorous begging is usually seen as an expression of parent–offspring conflict over limited resources. Chicks signal need by begging, but the evolution of honest signals requires the signals to be costly. Although some possible costs have been identified, the cost-inducing mechanisms underlying this widely distributed signalling system remain unclear. Because hormones associated with stress and hunger (corticosterone) and aggressive behaviour (testosterone) have deleterious side-effects, signalling costs may be coupled to the expression of such hormones, if they are closely associated with the signal. We tested whether begging in chicks of thin-billed prions (Aves, Procellariiformes) is associated with secretion of corticosterone and testosterone. Prion chicks honestly signalled their nutritional state. Begging increased with decreased body condition, both within and between chicks. Adults responded to more intense begging by delivering larger meals. Chick testosterone levels were positively correlated with measures of begging intensity and the mean body condition of chicks was correlated positively with testosterone and negatively with corticosterone. In a cross-fostering experiment, the change in testosterone and corticosterone between control and experimental periods was positively correlated with the change in begging intensity. This is the first experimental evidence that the control of chick begging by endogenously produced testosterone and corticosterone may form a mechanism controlling parental provisioning in birds, and that chick behaviour can help to explain the variation in growth patterns between individual birds.

Relevance:

80.00%

Publisher:

Abstract:

The future of computing lies with distributed systems, i.e. a network of workstations controlled by a modern distributed operating system. By supporting load balancing and parallel execution, the overall performance of a distributed system can be improved dramatically. Process migration, the act of moving a running process from a highly loaded machine to a lightly loaded machine, can be used to support load balancing, parallel execution, reliability, etc. This thesis identifies the problems past process migration facilities have had and determines the possible differing strategies that can be used to resolve these problems. The result of this analysis has led to a new design philosophy: the design of a process migration facility and the design of an operating system should be conducted in parallel. Modern distributed operating systems follow the microkernel and client/server paradigms. Applying these design paradigms, in conjunction with the requirements of both process migration and a distributed operating system, results in a system where each resource is controlled by a separate server process. However, a process is a complex resource composed of simple resources such as data structures, an address space and communication state. For this reason, a process migration facility does not directly migrate the resources of a process. Instead, it requests the appropriate servers to transfer the resources. This novel solution yields a modular, high-performance facility that is easy to create, debug and maintain. Furthermore, the design easily accommodates multiple migration strategies. In order to verify the validity of this design, a process migration facility was developed and tested within RHODOS (ResearcH Oriented Distributed Operating System). RHODOS is a modern microkernel and client/server based distributed operating system. In RHODOS, a process is composed of at least three separate resources: process state, maintained by a process manager; address space, maintained by a memory manager; and communication state, maintained by an InterProcess Communication Manager (IPCM). The RHODOS multiple-strategy migration manager utilises the services of the process, memory and IPC managers to migrate the resources of a process. Performance testing of this facility indicates that this design is as fast as or faster than existing systems that use faster hardware. Furthermore, by studying the results of the performance testing, the conditions under which a particular strategy should be employed have been identified. This thesis also addresses heterogeneous process migration. The current trend is to have islands of homogeneous workstations amid a sea of heterogeneity. From this situation and the current literature on the topic, heterogeneous process migration can be seen as too inefficient for general use; instead, only homogeneous workstations should be used as migration targets. This implies a need to locate homogeneous workstations. Entities called traders, which store and disseminate knowledge about the resources of several workstations, should be used to provide resource discovery. Resource discovery will enable the detection of homogeneous workstations to which processes can be migrated.
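The resource-server design described above can be pictured roughly as follows; the manager names come from the abstract, but the interfaces are invented for illustration and do not reflect the real RHODOS code.

```python
# Illustrative only: interfaces are invented; RHODOS' real managers are not exposed like this.
class ResourceServer:
    """One server owns one kind of process resource (state, address space, IPC state)."""
    def export_resource(self, pid: int) -> bytes: ...
    def import_resource(self, pid: int, blob: bytes) -> None: ...

class ProcessManager(ResourceServer): ...
class MemoryManager(ResourceServer): ...
class IPCManager(ResourceServer): ...

class MigrationManager:
    """Coordinates a migration by delegating to the servers that own each resource."""
    def __init__(self, local: dict, remote: dict):
        self.local = local      # e.g. {"process": ProcessManager(), "memory": MemoryManager(), ...}
        self.remote = remote    # the corresponding servers on the destination workstation

    def migrate(self, pid: int) -> None:
        for name, server in self.local.items():
            blob = server.export_resource(pid)            # each server packages its own resource
            self.remote[name].import_resource(pid, blob)  # its peer re-creates it remotely
```

The migration manager never touches a resource directly; it only orchestrates the servers, which is what makes the design modular and lets different migration strategies be plugged in.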

Relevance:

80.00%

Publisher:

Abstract:

Current attempts to manage parallel applications on Clusters of Workstations (COWs) have generally either followed the parallel execution environment approach or been extensions to existing network operating systems, neither of which provides a complete or satisfactory solution. The efficient and transparent management of parallelism within the COW environment requires enhanced methods of process instantiation, mapping of parallel processes to workstations, maintenance of process relationships, process communication facilities, and process coordination mechanisms. The aim of this research is to synthesise, design, develop and experimentally study a system capable of efficiently and transparently managing SPMD parallelism on a COW. This system should both improve the performance of SPMD-based parallel programs and relieve programmers from involvement in parallelism management, allowing them to concentrate on application programming. It is also the aim of this research to show that such a system is best achieved by adding new special services to, and exploiting the existing services of, a client/server and microkernel based distributed operating system. To achieve these goals, the research methods of experimental computer science were employed. In order to specify the scope of this project, this work investigated the issues related to parallel processing on COWs and surveyed a number of relevant systems, including PVM, NOW and MOSIX. It was shown that although the MOSIX system provides a number of good services related to parallelism management, none of these systems forms a complete solution. The problems identified with these systems include: instantiation services that are not suited to parallel processing; duplication of services between the parallelism management environment and the operating system; and poor levels of transparency. A high-performance and transparent system capable of managing the execution of SPMD parallel applications was synthesised, and the specific services of process instantiation, process mapping and process interaction were detailed. The process instantiation service designed here provides the capability to instantiate parallel processes using either creation or duplication methods, and also supports multiple and group-based instantiation, which is specifically designed for SPMD parallel processing. The process mapping service combines process allocation and dynamic load balancing to ensure that the load of a COW remains balanced not only when a parallel program is initialised but also during its execution. The process interaction service transparently maintains process relationships, communication and coordination services between parallel processes regardless of their location within the COW. The combination of these services provides an original architecture and organisation of a system that is capable of fully managing the execution of SPMD parallel applications on a COW. A logical design of a parallelism management system was derived from the synthesised system, and it was shown that it should ideally be based on a distributed operating system employing the client/server model. A client/server based distributed operating system provides the level of transparency, modularity and flexibility necessary for a complete parallelism management system.
The services identified in the synthesised system were mapped to a set of server processes, including: a Process Instantiation Server providing advanced multiple and group-based process creation and duplication; a Process Mapping Server combining load collection, process allocation and dynamic load balancing services; and a Process Interaction Server providing transparent interprocess communication and coordination. A Process Migration Server was also identified as vital to support both the instantiation and mapping servers. The RHODOS client/server and microkernel based distributed operating system was selected for the detailed design and implementation of this parallelism management system. RHODOS was enhanced to provide the required servers, resulting in the development of the REX Manager, Global Scheduler and Process Migration Manager to provide the services of process instantiation, mapping and migration, respectively. The process interaction services were already provided within RHODOS and required only some extensions to the existing Process Manager and IPC Managers. A variety of experiments showed that when this system was used to support the execution of SPMD parallel applications, the overall execution times were improved, especially when the multiple and group-based instantiation services were employed. The RHODOS PMS was also shown to greatly reduce the programming burden experienced by users when writing SPMD parallel applications, by providing a small set of powerful primitives specially designed to support parallel processing. The system was also shown to be applicable to, and has been used in, a variety of other research areas such as Distributed Shared Memory, parallelising compilers and assisting the port of PVM to the RHODOS system. The RHODOS Parallelism Management System (PMS) provides a unique and creative solution to the problem of transparently and efficiently controlling the execution of SPMD parallel applications on COWs. Combining advanced services such as multiple and group-based process creation and duplication, combined process allocation and dynamic load balancing, and complete COW-wide transparency produces a totally new system that addresses many of the problems not addressed by other systems.
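The flavour of the instantiation and mapping primitives described above can be sketched as follows; the function names and signatures are hypothetical stand-ins, not the actual RHODOS PMS interface.

```python
# Hypothetical sketch of SPMD group instantiation with mapping/load balancing behind one call.
from typing import Callable, List

def pick_least_loaded(workstations: List[str], loads: dict, k: int) -> List[str]:
    """Stand-in for a global scheduler: choose the k least-loaded workstations."""
    return sorted(workstations, key=lambda w: loads[w])[:k]

def spawn_group(program: Callable[[int, int], None], copies: int,
                workstations: List[str], loads: dict) -> List[tuple]:
    """Stand-in for group-based instantiation: start `copies` instances of the same
    SPMD program, each told its rank and the group size."""
    placement = []
    targets = pick_least_loaded(workstations, loads, copies)
    for rank, host in enumerate(targets):
        # In a real system this step would create or duplicate a process on `host`.
        program(rank, copies)
        placement.append((rank, host))
    return placement

if __name__ == "__main__":
    hosts = ["ws1", "ws2", "ws3", "ws4"]
    load = {"ws1": 0.9, "ws2": 0.1, "ws3": 0.4, "ws4": 0.2}
    where = spawn_group(lambda rank, size: print(f"worker {rank}/{size}"), 3, hosts, load)
    print(where)   # e.g. [(0, 'ws2'), (1, 'ws4'), (2, 'ws3')]
```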

Relevance:

80.00%

Publisher:

Abstract:

We present a new approach for speech enhancement in the presence of non-stationary and rapidly changing background noise. A distributed microphone system is used to capture the acoustic characteristics of the environment. The input of each microphone is then classified as either speech or one of the predetermined noise types. Further enhancement of speech in the respective microphones is carried out using a modified spectral subtraction algorithm that incorporates multiple noise models to quickly adapt to rapid background noise changes. Tests on real-world speech captured under diverse conditions demonstrate the effectiveness of this method.
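A compact sketch of multi-model spectral subtraction in this spirit is given below; the nearest-model classifier and the subtraction parameters are placeholder assumptions, not the authors' algorithm.

```python
# Minimal sketch, assuming pre-computed magnitude-spectrum noise models, one per noise type.
import numpy as np

def classify_noise(frame_spectrum: np.ndarray, noise_models: dict) -> str:
    """Placeholder classifier: pick the stored noise model closest to the current frame."""
    return min(noise_models, key=lambda k: np.linalg.norm(frame_spectrum - noise_models[k]))

def spectral_subtract(frame: np.ndarray, noise_models: dict,
                      over_sub: float = 1.5, floor: float = 0.02) -> np.ndarray:
    spectrum = np.fft.rfft(frame)
    magnitude, phase = np.abs(spectrum), np.angle(spectrum)
    noise_mag = noise_models[classify_noise(magnitude, noise_models)]
    # Subtract the selected noise estimate, keeping a small spectral floor.
    cleaned = np.maximum(magnitude - over_sub * noise_mag, floor * magnitude)
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(frame))
```

Switching which noise model is subtracted on a per-frame, per-microphone basis is what lets such a scheme follow rapid changes in the background noise.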

Relevance:

80.00%

Publisher:

Abstract:

We present a distributed surveillance system that works in large and complex indoor environments. To track and recognize the behaviors of people, we propose the use of the Abstract Hidden Markov Model (AHMM), which can be considered an extension of the Hidden Markov Model (HMM) in which the single Markov chain of the HMM is replaced by a hierarchy of Markov policies. In this policy hierarchy, each behavior can be represented as a policy at the corresponding level of abstraction. Noisy observations are handled in the same way as in an HMM, and an efficient Rao-Blackwellised particle filter method is used to compute the probabilities of the current policy at different levels of the hierarchy. The novelty of the paper lies in the implementation of a framework that is scalable in both the scale of the behaviors and the size of the environment, making it ideal for distributed surveillance. The results of the system demonstrate the ability to answer queries about people's behaviors at different levels of detail using multiple cameras in a large and complex indoor environment.
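For orientation, the flat-HMM forward filter that the AHMM generalises hierarchically is sketched below; this is a textbook recursion, not the Rao-Blackwellised particle filter used in the paper.

```python
# Textbook HMM forward filter; the AHMM replaces the single chain with a policy hierarchy
# and uses Rao-Blackwellised particle filtering instead of this exact recursion.
import numpy as np

def forward_filter(prior, transition, emission, observations):
    """prior: (S,), transition: (S, S) with transition[i, j] = P(j | i),
    emission: (S, O), observations: list of observation indices."""
    belief = prior.copy()
    beliefs = []
    for obs in observations:
        belief = transition.T @ belief            # predict: propagate through the chain
        belief = belief * emission[:, obs]        # update: weight by observation likelihood
        belief = belief / belief.sum()            # normalise to a probability distribution
        beliefs.append(belief.copy())
    return beliefs
```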

Relevance:

80.00%

Publisher:

Abstract:

In surveillance systems for monitoring people's behaviours, it is important to build systems that can adapt to the signatures of people's tasks and movements in the environment. At the same time, it is important to cope with noisy observations produced by a set of cameras with possibly different characteristics. In previous work, we implemented a distributed surveillance system designed for complex indoor environments [1]. The system uses the Abstract Hidden Markov mEmory Model (AHMEM) for modelling and specifying complex human behaviours that can take place in the environment. Given a sequence of observations from a set of cameras, the system employs approximate probabilistic inference to compute the likelihood of different possible behaviours in real time. This paper describes the techniques that can be used to learn the different camera noise models and the human movement models used in this system. The system is able to monitor and classify people's behaviours as data is being gathered, and we provide classification results showing that the system is able to identify people's behaviours from their movement signatures.

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we present a distributed surveillance system that uses multiple cheap static cameras to track multiple people in indoor environments. The system has a set of Camera Processing Modules and a Central Module to coordinate the tracking tasks among the cameras. Since each object in the scene can be tracked by a number of cameras, the problem is how to choose the most appropriate camera for each object. We propose a novel algorithm to allocate objects to cameras using the object-to-camera distance while taking into account occlusion. The algorithm attempts to assign objects in the overlapping fields of view to the nearest camera which can see the object without occlusion. Experimental results show that the system can coordinate cameras to track people properly and can deal well with occlusion.
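A simplified version of this allocation rule is sketched below; the distance measure, the occlusion test and the fallback behaviour are idealised assumptions, not the paper's exact algorithm.

```python
# Simplified sketch: assign each tracked object to the nearest camera that sees it unoccluded.
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def allocate(objects: Dict[str, Point], cameras: Dict[str, Point],
             occluded: Dict[str, List[str]]) -> Dict[str, str]:
    """occluded[obj] lists the cameras whose view of obj is blocked."""
    assignment = {}
    for obj, opos in objects.items():
        candidates = [c for c in cameras if c not in occluded.get(obj, [])]
        pool = candidates or list(cameras)   # fall back to all cameras if every view is occluded
        assignment[obj] = min(pool, key=lambda c: math.dist(opos, cameras[c]))
    return assignment

print(allocate({"p1": (1.0, 2.0), "p2": (8.0, 3.0)},
               {"camA": (0.0, 0.0), "camB": (10.0, 0.0)},
               {"p1": ["camA"]}))   # p1 -> camB (camA occluded), p2 -> camB (nearest)
```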

Relevance:

80.00%

Publisher:

Abstract:

The Hadoop framework provides a powerful way to handle Big Data. Since Hadoop has inherent defects of high memory overhead and low computing performance when processing massive numbers of small files, we implement three methods and propose two strategies for solving the small files problem in this paper. First, we implement three methods, i.e., Hadoop Archives (HAR), Sequence Files (SF) and CombineFileInputFormat (CFIF), to compensate for the existing defects of Hadoop. Moreover, we propose two strategies to meet the actual needs of different users. Finally, we evaluate the efficiency of the implemented methods and the validity of the proposed strategies. The experimental results show that our methods and strategies can improve the efficiency of processing massive numbers of small files, thereby enhancing the overall performance of Hadoop.
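The general idea behind packing many small files into one container plus an index, which HAR and Sequence Files realise inside HDFS, can be illustrated outside Hadoop as follows; this is a conceptual sketch, not the Hadoop API.

```python
# Conceptual sketch only: pack many small files into one container plus an index,
# mimicking what HAR/SequenceFile packing achieves inside HDFS (fewer objects to track).
import json, os

def pack(small_files: list, container: str, index: str) -> None:
    offsets = {}
    with open(container, "wb") as out:
        for path in small_files:
            data = open(path, "rb").read()
            offsets[os.path.basename(path)] = (out.tell(), len(data))  # (offset, length)
            out.write(data)
    with open(index, "w") as idx:
        json.dump(offsets, idx)

def read_packed(container: str, index: str, name: str) -> bytes:
    offset, length = json.load(open(index))[name]
    with open(container, "rb") as f:
        f.seek(offset)
        return f.read(length)
```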

Relevance:

80.00%

Publisher:

Abstract:

With the aim of using an IP network to implement closed control loops, this work studies the operation of a distributed dynamic control system, comparing it with the operation of a conventional local control system. In general, the decision to design a distributed control architecture is based on simplicity, cost reduction and reliability; an important differentiator is therefore the use of an IP network. The goal of a control network is not to transmit digital data, but sampled analogue data. Thus, the usual metrics of computer networks, such as data volume and transfer rate, become secondary in a control network. Techniques are proposed to handle delayed packets and to recover the performance of the control system over the IP network. The key to this method is to estimate the content of the delayed packets based on the dynamic model of the system, keeping the system at an adequate level of performance. The system considered is the control of an anthropomorphic manipulator with two arms and a stereo vision head, totalling 18 joints. The results obtained show that a large part of the system's performance can be recovered.
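A minimal sketch of this kind of compensation is given below, assuming a linear discrete-time plant model x_{k+1} = A x_k + B u_k and a state-feedback law u_k = -K x_k; the matrices and the one-step prediction rule are illustrative assumptions, not taken from the work.

```python
# Minimal sketch: when a measurement packet is late, predict the missing sample from the
# plant model instead of waiting, so the control loop keeps an adequate level of performance.
from typing import Optional, Tuple
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed plant model for one joint (position, velocity)
B = np.array([[0.005], [0.1]])
K = np.array([[12.0, 4.0]])              # assumed state-feedback gain

def control_step(x_est: np.ndarray, packet: Optional[np.ndarray],
                 u_prev: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
    """One loop iteration over the IP network: use the measurement if its packet arrived,
    otherwise estimate the delayed sample from the dynamic model."""
    if packet is not None:
        x_est = packet                      # measurement packet arrived in time
    else:
        x_est = A @ x_est + B @ u_prev      # packet delayed: predict the state from the model
    u = -K @ x_est
    return x_est, u
```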