774 results for peer-to-peer (P2P) computing


Relevance: 100.00%

Abstract:

This article presents the results of a qualitative study that investigated the applicability of the L.E.SCAnning method in social enterprises of the peer-to-peer (P2P) economy. The motivation stems from the idea that long-term self-sustainability is one of the greatest challenges facing organizations, especially those grounded in the social economy, among them P2P enterprises. Social enterprises, however, are potentially dynamic and progressive businesses from which the commercial market could learn, since they experiment and innovate. Building on precisely this innovative spirit, many social enterprises have turned to the crowdfunding model of the P2P economy, an emerging trend in the collaborative organization of resources on the Web. From this perspective, one of the new developments in management applicable to organizations with a systemic focus is the practice of Collective Anticipative Strategic Intelligence (IEAc). The case study therefore examined the French social enterprise Babyloan to understand how the organization seeks, monitors, and uses information captured from its external environment, and, based on this diagnosis, prototyped the application of one cycle of the L.E.SCAnning method. The results of this study suggest that a pragmatic understanding of the external scenario through IEAc favors decisions marked by entrepreneurship and innovation, and has a significant potential impact in the P2P social economy, an environment strongly based on perception.

Relevance: 100.00%

Abstract:

JXTA defines a set of six core protocols especially suited to ad hoc, pervasive, multi-hop, peer-to-peer (P2P) computing. These protocols enable peers to cooperate and form autonomous peer groups. This article presents a method that provides security services on top of the core protocols: data protection, authenticity, integrity, and non-repudiation. The mechanisms presented are fully distributed and based on a pure peer-to-peer model, requiring neither the arbitration of a trusted third party nor a previously established trust relationship between peers, which is one of the main challenges in this type of environment.
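A minimal sketch of per-message protection between peers, with a hypothetical `seal_message`/`verify_message` API. Note that mechanisms like those in the article rely on public-key signatures (which is what makes non-repudiation possible); the sketch below uses an HMAC over a shared group key only to stay self-contained, so it illustrates authenticity and integrity but not non-repudiation:

```python
import hashlib
import hmac
import json

def seal_message(shared_key: bytes, payload: dict) -> dict:
    # Canonicalize the payload so both peers hash identical bytes,
    # then attach an authenticity/integrity tag.
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(shared_key, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_message(shared_key: bytes, message: dict) -> bool:
    # Recompute the tag and compare in constant time; any tampering
    # with the payload invalidates the message.
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(shared_key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])
```

A receiver that shares the group key can thus detect both forged and modified messages without consulting any central authority.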

Relevance: 100.00%

Abstract:

The rapidly increasing computing power, storage, and communication capabilities of mobile devices make it possible to start processing and storing data locally rather than offloading it to remote servers, enabling mobile-cloud scenarios with no dependency on infrastructure. We can now aim at connecting neighboring mobile devices, creating a local mobile cloud that provides storage and computing services over locally generated data. In this paper, we give an early overview of a distributed mobile system that allows data distributed across mobile devices to be accessed and processed without an external communication infrastructure. Copyright © 2015 ICST.
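As a rough illustration of the idea (not the system described in the paper), the following sketch fans a computation out to a hypothetical set of neighboring devices, each processing only its locally stored shard, and combines the small per-device results:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical neighbor registry: device name -> locally stored data shard.
neighbors = {
    "phone-a": [3, 1, 4],
    "phone-b": [1, 5, 9],
    "phone-c": [2, 6, 5],
}

def process_locally(device: str, shard: list) -> int:
    # Each device processes its own data in place (here: a local sum),
    # so the raw data never has to leave the device.
    return sum(shard)

def mobile_cloud_aggregate() -> int:
    # Fan the task out to all reachable neighbors in parallel and
    # combine only the small per-device results, mimicking
    # infrastructure-less offloading to a local mobile cloud.
    with ThreadPoolExecutor() as pool:
        partials = pool.map(lambda kv: process_locally(*kv), neighbors.items())
    return sum(partials)
```

Only the per-device summaries cross the (simulated) device boundary, which is the property that makes such local clouds attractive for bandwidth and privacy.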

Relevance: 100.00%

Abstract:

Concurrent aims to be a different type of task distribution system from MPI-like systems. It adds a simple but powerful application abstraction layer that distributes the logic of an entire application onto a swarm of clusters, bearing similarities to volunteer computing systems. Traditional task distribution systems simply run individual tasks on the distributed system and wait for the results. Concurrent goes one step further by letting the tasks and the application decide what to do next. The programming paradigm is therefore fully asynchronous, with no waiting for results: it is based on notifications issued once a computation has been performed.
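The notification-based, wait-free style described above can be sketched with Python's asyncio; `compute`, `on_done`, and the squaring workload are illustrative stand-ins, not Concurrent's actual API:

```python
import asyncio

results = {}

def on_done(task_id: int, value: int) -> None:
    # Notification handler: the application reacts to a completed
    # computation instead of blocking on it.
    results[task_id] = value

async def compute(task_id: int, x: int, notify) -> None:
    # Perform the task, then notify interested parties; no caller
    # is waiting synchronously on the return value.
    await asyncio.sleep(0)          # stand-in for real work
    notify(task_id, x * x)

async def main() -> None:
    # Launch all tasks without waiting on any individual result.
    await asyncio.gather(*(compute(i, i, on_done) for i in range(5)))

asyncio.run(main())
```

The control flow is driven entirely by completion notifications, which is the inversion the abstract contrasts with the submit-and-wait model of traditional task distribution.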

Relevance: 100.00%

Abstract:

Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology will integrate clouds of portable clients and embedded devices exchanging information, through the Internet layer, with processing clusters of servers, data centers, and high-performance computing systems. Yet even as the whole of society prepares to embrace this revolution, there is a flip side to the story. Portable devices need batteries to work far from power plugs, and their storage capacity does not scale with their increasing power requirements. At the other end, processing clusters such as data centers and server farms are built from the integration of thousands of multiprocessors. In each of them, technology scaling over the last decade has produced a dramatic increase in power density, with significant spatial and temporal variability. This leads to power and temperature hot-spots, which may cause non-uniform aging and accelerated chip failure; moreover, all the heat removed from the silicon translates into high cooling costs. Trends in the ICT carbon footprint also show that the run-time power consumption of the whole spectrum of devices accounts for a significant slice of world carbon emissions. This thesis embraces the full ICT ecosystem and its dynamic power consumption concerns by describing a set of new and promising system-level resource management techniques that reduce power consumption and its related issues for two corner cases: mobile devices and high-performance computing.
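One classic system-level technique in this space, given here only as an illustrative sketch and not as the thesis's own method, is a DVFS-style governor that picks the lowest frequency covering the observed load and throttles near a thermal hot-spot; all frequency levels and thresholds below are invented for the example:

```python
# Hypothetical DVFS-style governor: run at the lowest frequency (and
# hence lowest power) that still meets the observed utilization, and
# drop to the minimum when a core approaches a thermal hot-spot.

FREQ_LEVELS_MHZ = [600, 1200, 1800, 2400]   # illustrative P-states
TEMP_LIMIT_C = 85.0                          # illustrative throttling point

def select_frequency(utilization: float, temp_c: float) -> int:
    """utilization is in [0, 1], measured relative to max frequency."""
    if temp_c >= TEMP_LIMIT_C:
        # Hot-spot mitigation takes priority over performance.
        return FREQ_LEVELS_MHZ[0]
    needed = utilization * FREQ_LEVELS_MHZ[-1]
    for freq in FREQ_LEVELS_MHZ:
        if freq >= needed:
            return freq              # lowest level covering the demand
    return FREQ_LEVELS_MHZ[-1]
```

Because dynamic power grows roughly with frequency and the square of voltage, choosing the lowest adequate level is the standard lever such run-time techniques exploit.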

Relevance: 100.00%

Abstract:

Distributed computing frameworks belong to a class of programming models that allow developers to launch workloads on large clusters of machines. Due to the dramatic increase in the volume of data gathered by ubiquitous computing devices, data analytic workloads have become a common case among distributed computing applications, making data science an entire field of computer science. We argue that a data scientist's concern lies in three main components: a dataset, a sequence of operations they wish to apply to this dataset, and the constraints of their work (performance, QoS, budget, etc.). However, performing data science without domain expertise is actually extremely difficult. One needs to select the right amount and type of resources, pick a framework, and configure it. Users are also often running their applications in shared environments, ruled by schedulers that expect them to specify their resource needs precisely. Inherent to the distributed and concurrent nature of the cited frameworks, monitoring and profiling are hard, high-dimensional problems that prevent users from making the right configuration choices and from determining the right amount of resources they need. Paradoxically, the system gathers a large amount of monitoring data at runtime, which remains unused.

In the ideal abstraction we envision for data scientists, the system is adaptive, able to exploit monitoring data to learn about workloads and to turn user requests into a tailored execution context. In this work, we study different techniques that have been used to make steps toward such system awareness, and explore a new way to do so by implementing machine learning techniques that recommend a specific subset of system configurations for Apache Spark applications. Furthermore, we present an in-depth study of Apache Spark executor configuration, which highlights the complexity of choosing the best one for a given workload.
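As a toy illustration of the recommendation idea, assuming hypothetical workload features and a history of monitored runs (none of these numbers or parameter names come from the thesis), a 1-nearest-neighbour lookup over past runs might look like:

```python
import math

# Hypothetical monitoring history: workload features (input GB,
# shuffle GB) mapped to the executor configuration that performed
# best for that workload.
history = [
    ((10.0, 1.0),    {"executor_cores": 2, "executor_memory_gb": 4}),
    ((200.0, 50.0),  {"executor_cores": 4, "executor_memory_gb": 16}),
    ((800.0, 300.0), {"executor_cores": 8, "executor_memory_gb": 32}),
]

def recommend_config(features):
    # 1-nearest-neighbour over past runs: reuse the configuration of
    # the most similar workload observed so far.
    _, best = min(history, key=lambda entry: math.dist(entry[0], features))
    return best
```

Real recommenders would use richer features and a learned model, but the principle is the same: the runtime monitoring data the system already collects becomes training data for configuration choices.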

Relevance: 100.00%

Abstract:

Recently, researchers have introduced the notion of super-peers to improve the signaling efficiency and lookup performance of peer-to-peer (P2P) systems. In a separate development, recent work on applications of mobile ad hoc networks (MANETs) has produced several proposals for using mobile fleets, such as city buses, to deploy a mobile backbone infrastructure for communication and Internet access in a metropolitan environment. This paper further explores the possibility of deploying P2P applications, such as content sharing and distributed computing, over this mobile backbone infrastructure. Specifically, we study how city buses may be deployed as a mobile system of super-peers. We discuss the main motivations behind our proposal and outline in detail the design of a super-peer-based structured P2P system using a fleet of city buses.
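A minimal sketch of the super-peer lookup idea, with invented names and structures: ordinary peers advertise their content to the super-peer they are attached to (here, a bus), which answers queries from its local index or by asking the other super-peers in the backbone:

```python
class SuperPeer:
    """Toy super-peer: indexes content advertised by attached peers."""

    def __init__(self, name: str):
        self.name = name
        self.index = {}              # content key -> owning peer

    def advertise(self, key: str, peer: str) -> None:
        # An attached peer registers the content it is willing to share.
        self.index[key] = peer

    def lookup(self, key: str, backbone: list):
        # Check the local index first, then ask the other super-peers
        # in the mobile backbone (e.g., buses on the same route).
        if key in self.index:
            return self.index[key]
        for sp in backbone:
            if sp is not self and key in sp.index:
                return sp.index[key]
        return None
```

Ordinary peers thus only ever talk to one super-peer, which is what concentrates signaling traffic on the (better-provisioned) mobile backbone.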

Relevance: 100.00%

Abstract:

This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application running on top of P2P networks; typical examples are video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they operate. Indeed, defining an application on top of a P2P network often means defining an application in which peers contribute resources in exchange for their ability to use the application. For example, in a P2P file-sharing application, while the user is downloading a file, the P2P application is in parallel serving that file to other users. Such peers may have limited hardware resources, e.g., CPU, bandwidth, and memory, or the end user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services comprise a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution makes it possible to increase availability and to reduce communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. They typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer.
Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment, and each protocol is evaluated through a set of simulations. The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that gives us an approximated view of the system or part of it; this view includes the topology and the reliability of the components, expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays that maximize broadcast reliability, where reliability is expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled as message quotas reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we take the memory available at each process into account by limiting the view it has to maintain of the system. Using this partial view, we propose three scalable broadcast algorithms based on a propagation overlay that converges to the global tree overlay and adapts to some constraints of the underlying system. At a higher level, this thesis also proposes a data replication solution that is adaptive both in replica placement and in request routing. At the routing level, the solution takes the unreliability of the environment into account in order to maximize the reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
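The tree-routing idea, choosing paths that maximize end-to-end delivery probability, can be sketched as a shortest-path computation: maximizing a product of link reliabilities is equivalent to minimizing the sum of their negative logarithms, so plain Dijkstra applies. This is an illustrative reconstruction of the principle, not the thesis's algorithm:

```python
import heapq
import math

def most_reliable_paths(links: dict, source: str) -> dict:
    """For every reachable node, find the most reliable path from source.

    `links` maps an undirected edge (u, v) to the probability that a
    message sent on it is delivered. Returns node -> best end-to-end
    delivery probability.
    """
    graph = {}
    for (u, v), p in links.items():
        graph.setdefault(u, []).append((v, p))
        graph.setdefault(v, []).append((u, p))

    best = {source: 1.0}                 # best known reliability per node
    heap = [(0.0, source)]               # priority = -log(reliability)
    while heap:
        cost, u = heapq.heappop(heap)
        if cost > -math.log(best[u]):
            continue                     # stale entry
        for v, p in graph.get(u, []):
            r = best[u] * p              # reliability via u
            if r > best.get(v, 0.0):
                best[v] = r
                heapq.heappush(heap, (-math.log(r), v))
    return best
```

Running the per-destination predecessors of such a search yields exactly the kind of max-reliability tree overlay the broadcast layer routes over.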

Relevance: 100.00%

Abstract:

Traffic characteristics on the Internet are increasingly complex, due to the growing diversity of applications, drastic differences in user behavior, the mobility of users and equipment, the complexity of traffic generation and control mechanisms, and the growing diversity of access types and their capacities. In this scenario it is inevitable that network management will be increasingly based on real-time traffic measurements. Given the large amount of information that must be processed and stored, there is also a growing need for traffic measurement platforms to adopt a distributed architecture, allowing distributed storage, replication, and efficient search of the measured data, possibly mimicking the Peer-to-Peer (P2P) paradigm. This dissertation describes the specification, implementation, and testing of a traffic measurement system with a distributed, P2P-style architecture, which gives network managers a tool to remotely configure monitoring systems installed at various points of the network in order to carry out traffic measurements. The system can also be used in community-oriented networks, where users can share their machines' resources to let others carry out measurements and share the collected data. The system is based on an overlay network with a hierarchical structure organized into measurement areas. The overlay network comprises two types of nodes, called probes and super-probes, which perform the measurements and store their results. Super-probes additionally connect measurement areas to each other and manage the exchange of messages between the network and the probes attached to them. The topology of the overlay network can change dynamically, with the insertion of new nodes and the removal of others, and with the promotion of probes to super-probes and vice versa in response to changes in the available resources.
The nodes store two types of measurement results: Light Data Files (LDFs) and Heavy Data Files (HDFs). LDFs hold the mean round-trip delay from each super-probe to all elements connected to it and are replicated on every super-probe, providing a simple but readily accessible view of the network's state. HDFs hold the detailed results of packet-level or flow-level measurements and may be replicated on some nodes of the network. Replicas are distributed across the network taking into account the resources available at the nodes, in order to guarantee fault tolerance. Users can configure measurements and search the results through an element called the client. Several evaluation tests were performed, showing that the system operates correctly and efficiently.
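A toy sketch of the LDF idea described above: each super-probe summarizes the mean round-trip delays to its attached probes, and the summary is copied to every other super-probe so that all of them share the same coarse view of the network. Function names and numbers are illustrative, not from the dissertation:

```python
def build_ldf(super_probe: str, rtt_samples: dict) -> dict:
    # rtt_samples: probe name -> list of measured round-trip times (ms).
    # The LDF keeps only the per-probe mean, which is what makes it "light".
    return {
        "super_probe": super_probe,
        "mean_rtt_ms": {p: sum(s) / len(s) for p, s in rtt_samples.items()},
    }

def replicate_ldf(ldf: dict, super_probe_stores: dict) -> None:
    # Full replication: every super-probe stores every LDF, so any of
    # them can answer coarse state queries without extra lookups.
    for store in super_probe_stores.values():
        store[ldf["super_probe"]] = ldf
```

HDFs would instead be replicated selectively, since packet- or flow-level records are far too large to copy everywhere.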

Relevance: 100.00%

Abstract:

There are three key driving forces behind the development of Internet Content Management Systems (CMS): a desire to manage the explosion of content, a desire to give content structure and meaning in order to make it accessible, and a desire to work collaboratively to manipulate content in some meaningful way. Yet the traditional CMS has been unable to meet the last of these requirements, often failing to provide sufficient tools for collaboration in a distributed context. Peer-to-Peer (P2P) systems are networks in which every node is an equal participant (whether transmitting data, exchanging content, or invoking services) and there is no centralized administrative or coordinating authority. P2P systems are inherently more scalable than equivalent client-server implementations, as they tend to use resources at the edge of the network much more effectively. This paper details the rationale and design of a P2P middleware for collaborative content management.

Relevance: 100.00%

Abstract:

In pursuing the objective of investigating the potential of the P2P innovation to enhance financial inclusion in Brazil, the P2P industry and the current market environment were analyzed in order to highlight the factors that can facilitate this desired enhancement. There seems to be no doubt that there is substantial potential for the P2P industry worldwide and in Brazil; beyond this, a considerable part of the industry could be providing financially inclusive products. The P2P industry in Brazil needs to recognize the potential to grow not only the industry itself but also the market for financially inclusive P2P products. The first section of this thesis briefly addresses financial inclusion in order to establish the frame of what is being discussed. Subsequently, the P2P industry is analyzed globally, locally in Brazil, and with regard to financial inclusion. The study is conducted through an interview with the founder of a P2P platform in Brazil, whose data is used to build a case study that allows an analysis of the P2P industry's potential for financial inclusion and the development of key success factors for converting this potential into results.

Relevance: 100.00%

Abstract:

The popularity of online games is growing, but at the same time the architectures proposed by developers and the connections available to users seem to remain inadequate for it. This thesis describes a peer-to-peer architecture that achieves a reduction in packet loss thanks to the Network Coding mechanism, without side effects on latency.
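A minimal sketch of the network-coding idea behind such architectures (illustrative, not the thesis's protocol): besides two game-state packets, a peer also sends their XOR combination, so a receiver that loses either original packet can rebuild it from the other two without waiting for a retransmission, which is what avoids the added latency:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Bitwise XOR of two equal-length packets.
    return bytes(x ^ y for x, y in zip(a, b))

def encode(p1: bytes, p2: bytes) -> list:
    # Transmit the two originals plus one coded (redundant) packet.
    return [p1, p2, xor_bytes(p1, p2)]

def recover(received: dict) -> dict:
    # received maps slot -> packet, or None if the packet was lost.
    # XOR is its own inverse, so any two surviving packets of the
    # three are enough to rebuild a missing original.
    p1, p2, coded = received.get(0), received.get(1), received.get(2)
    if p1 is None and p2 is not None and coded is not None:
        p1 = xor_bytes(p2, coded)
    if p2 is None and p1 is not None and coded is not None:
        p2 = xor_bytes(p1, coded)
    return {0: p1, 1: p2}
```

The redundancy costs one extra packet per pair, traded against the round-trip delay a retransmission would impose on the game state.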

Relevance: 100.00%

Abstract:

Content providers from the music industry argue that peer-to-peer (P2P) networks such as KaZaA, Morpheus, iMesh, or Audiogalaxy are an enormous threat to their business, and furthermore blame these networks for their recent decline in sales figures. For this reason, an empirical investigation was conducted over a period of six weeks on one of the most popular file-sharing systems, in order to determine the quantity and quality of the pirated music songs being shared. We present empirical evidence on the extent to which, and in what quality, music songs are shared. A number of hypotheses are outlined and tested. We studied, among other things, the number of users online and the number of files accessible on such networks, the free-riding problem, and the duration of each search request. We further tested whether there are any differences in the accessibility of songs based on the nationality of the artist, the language of the song, and the corresponding chart position. Finally, we outline the main hurdles users may face when downloading illegal music and the probability of obtaining high-quality music tracks on such peer-to-peer networks.