86 results for ad-hoc networks distributed algorithms atomic distributed shared memory
Abstract:
This project consists of the design and development of a service architecture under the intelligent-agent paradigm. The purpose of ADASMI (Architecture for Dynamic Agent Service Management and Interaction) is to enable the management and use of services by other agents. The architecture has been implemented using the JADE agent platform and can be used with any other platform that complies with the IEEE FIPA standards. Moreover, it is flexible enough to adapt to dynamic environments, such as ad-hoc networks in emergency situations.
Abstract:
We survey the main theoretical aspects of models for Mobile Ad Hoc Networks (MANETs). We present theoretical characterizations of mobile network structural properties, different dynamic graph models of MANETs, and finally we give detailed summaries of a few selected articles. In particular, we focus on articles dealing with connectivity of mobile networks, and on articles which show that mobility can be used to propagate information between nodes of the network while at the same time maintaining small transmission distances, and thus saving energy.
Abstract:
A common operation in wireless ad hoc networks is the flooding of broadcast messages to establish network topologies and routing tables. The flooding of broadcast messages is, however, a resource-consuming process that may require the retransmission of messages by most network nodes. It is therefore very important to optimize this operation. In this paper, we first analyze the multipoint relaying (MPR) flooding mechanism used by the Optimized Link State Routing (OLSR) protocol to distribute topology control (TC) messages among all the system nodes. We then propose a new flooding method based on the fusion of two key concepts: distance-enabled multipoint relaying and connected dominating set (CDS) flooding. We present experimental simulations that show that our approach improves on previously existing proposals.
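As a rough illustration of the baseline mechanism analyzed in this paper, the following Python sketch implements the classic greedy MPR selection heuristic used by OLSR: cover the whole two-hop neighbourhood with as few one-hop neighbours as possible. The function name, data structures and toy topology are made up for the example; the distance-enabled and CDS-based flooding proposed in the paper is not reproduced here.

def select_mpr(one_hop, two_hop_of):
    """Greedy multipoint-relay (MPR) selection as used by OLSR.

    one_hop    -- set of 1-hop neighbour ids
    two_hop_of -- dict: 1-hop neighbour id -> set of 2-hop neighbours it reaches
    Returns a subset of one_hop whose members together cover the whole
    2-hop neighbourhood.
    """
    two_hop = set().union(*two_hop_of.values()) - one_hop
    uncovered, mprs = set(two_hop), set()
    while uncovered:
        candidates = one_hop - mprs
        if not candidates:
            break  # remaining 2-hop nodes cannot be covered
        # pick the neighbour that covers the most still-uncovered 2-hop nodes
        best = max(candidates, key=lambda n: len(two_hop_of[n] & uncovered))
        if not two_hop_of[best] & uncovered:
            break
        mprs.add(best)
        uncovered -= two_hop_of[best]
    return mprs

# toy topology: neighbours 1 and 3 together cover the 2-hop nodes x, y, z
print(select_mpr({1, 2, 3}, {1: {"x", "y"}, 2: {"y"}, 3: {"z"}}))   # {1, 3}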
Abstract:
Security mechanisms are one of the fundamental requirements for the proper operation of mobile ad hoc network protocols. This article analyzes the security problems of the basic routing protocols, describes the existing security solutions, and studies the computational and energy cost that the inclusion of security mechanisms imposes on the system.
Abstract:
In this project we focus on the operation of wireless mesh networks. A brief explanation of ad-hoc networks is given, since they form an important part of a mesh network. Several routing protocols are explained, such as BATMAN, AODV, 802.11s and OLSR; the latter is the one we use to configure our network. We explain the types of packets exchanged between the nodes so that the network operates optimally and all clients can access it and remain connected to one another. After the theoretical study of this type of network and a review of its advantages and disadvantages with respect to other kinds of wireless networks, two small networks are deployed, using Ubiquiti NanoStation LocoM2 nodes as access points. One of the networks is in the village of Bellcaire d'Urgell, connecting three houses of which only one has Internet access, with the nodes connected in a line: Node1 – Node2 – Node3. The second is in a house in Barcelona where, through three nodes, Internet connectivity is provided to the whole house; in this case the three nodes are connected in a ring: Node1 – Node2 – Node3 – Node1. Finally, a future proposal is considered: talking to the town council of the village where the first trial was carried out in order to build a wireless mesh network covering all public spaces in the village.
Abstract:
One of the major problems when using non-dedicated volunteer resources in a distributed network is the high volatility of these hosts, since they can go offline or become unavailable at any time without control. Furthermore, the use of volunteer resources implies some security issues, because they are generally anonymous entities about which we know nothing. So, how can we trust someone we do not know? Over the last years a significant number of reputation-based trust solutions have been designed to evaluate participants' behavior in a system. However, most of these solutions are addressed to P2P and mobile ad-hoc networks, and may not fit well with other kinds of distributed systems that could take advantage of volunteer resources, such as recent cloud computing infrastructures. In this paper we propose a first approach to the design of an anonymous reputation mechanism for CoDeS [1], a middleware for building fogs where services are deployed using volunteer resources. The participants are reputation clients (RC), a reputation authority (RA) and a certification authority (CA). Users need a valid public-key certificate from the CA to register with the RA and obtain the data needed to participate in the system, namely an opaque identifier that we call a pseudonym and an initial reputation value that users provide to other users when interacting with each other. The mechanism prevents not only the manipulation of the provided reputation values but also any disclosure of the users' identities to other users or authorities, so anonymity is guaranteed.
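To make the registration step concrete, here is a purely illustrative Python sketch under the assumption that the RA derives the pseudonym from a salted hash of the certified identity; the class name ReputationAuthority, the certificate fields and the initial reputation value are hypothetical and do not come from the CoDeS paper.

import hashlib, os

class ReputationAuthority:
    """Illustrative sketch of the registration step: the RA checks a CA-issued
    certificate and returns an opaque pseudonym plus an initial reputation
    value.  All names and values are hypothetical, not the CoDeS protocol."""

    INITIAL_REPUTATION = 0.5   # assumed starting value, not from the paper

    def __init__(self, trusted_ca_issuers):
        self.trusted_ca_issuers = set(trusted_ca_issuers)
        self.registered = {}   # pseudonym -> current reputation value

    def register(self, certificate):
        if certificate["issuer"] not in self.trusted_ca_issuers:
            raise ValueError("certificate not issued by a trusted CA")
        # pseudonym = salted hash of the certified identity, so other
        # participants cannot link it back to the real identity
        salt = os.urandom(16)
        pseudonym = hashlib.sha256(salt + certificate["subject"].encode()).hexdigest()
        self.registered[pseudonym] = self.INITIAL_REPUTATION
        return pseudonym, self.INITIAL_REPUTATION

ra = ReputationAuthority(trusted_ca_issuers={"CoDeS-CA"})
print(ra.register({"issuer": "CoDeS-CA", "subject": "alice@example.org"}))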
Abstract:
This paper describes the state of the art of secure ad hoc routing protocols and presents SEDYMO, a mechanism to secure a dynamic multihop ad hoc routing protocol. The proposed solution defeats internal and external attacks using a trustworthiness model based on a distributed certification authority. Digital signatures and hash chains are used to ensure the correctness of the protocol. The protocol is compared with other alternatives in terms of security strength, energy efficiency and time delay. Both computational and transmission costs are considered and it is shown that the secure protocol overhead is not a critical factor compared to the high network interface cost.
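The abstract mentions hash chains; the Python sketch below shows the generic hash-chain idea as commonly used to protect mutable hop counts in secure ad hoc routing: the originator publishes an anchor obtained by hashing a secret seed max_hops times, each forwarder hashes once more, and a receiver re-hashes the remaining number of times to check the anchor, so the advertised hop count cannot be decreased. This is a generic illustration, not SEDYMO's actual message format or signature scheme.

import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def originate(seed: bytes, max_hops: int):
    """Originator: publish the anchor h^max_hops(seed) and start the chain
    at the seed, which corresponds to hop count 0."""
    anchor = seed
    for _ in range(max_hops):
        anchor = h(anchor)
    return anchor, seed

def forward(current_hash: bytes) -> bytes:
    """Each forwarding node hashes once while incrementing the hop count."""
    return h(current_hash)

def verify(anchor: bytes, received: bytes, hop_count: int, max_hops: int) -> bool:
    """Hash the received value the remaining number of times; it must
    reproduce the anchor if the hop count was not decreased."""
    value = received
    for _ in range(max_hops - hop_count):
        value = h(value)
    return value == anchor

# toy run: 2 hops traversed out of a maximum of 5
anchor, value = originate(b"secret-seed", max_hops=5)
value = forward(forward(value))
print(verify(anchor, value, hop_count=2, max_hops=5))   # True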
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies, and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
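As a hedged sketch of the two-stage structure described above (the exact parameterization in the paper may differ), the first stage models the incidence of zeros and the second stage places a logistic-normal distribution on the non-zero subcomposition:

\[ Z_i \sim \mathrm{Bernoulli}(p_i), \qquad i = 1, \dots, D \qquad \text{(stage 1: which of the } D \text{ parts are zero)} \]
\[ y_j = \log\frac{x_j}{x_d}, \quad j = 1, \dots, d-1, \qquad y \sim \mathcal{N}(\mu, \Sigma) \qquad \text{(stage 2: logistic-normal on the } d \text{ non-zero parts, with part } d \text{ as reference)} \]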
Abstract:
Selenoproteins are a diverse group of proteins usually misidentified and misannotated in sequence databases. The presence of an in-frame UGA (stop) codon in the coding sequence of selenoprotein genes precludes their identification and correct annotation. The in-frame UGA codons are recoded to cotranslationally incorporate selenocysteine, a rare selenium-containing amino acid. The development of ad hoc experimental and, more recently, computational approaches has allowed the efficient identification and characterization of the selenoproteomes of a growing number of species. Today, dozens of selenoprotein families have been described and more are being discovered in recently sequenced species, but the correct genomic annotation is not available for the majority of these genes. SelenoDB is a long-term project that aims to provide, through the collaborative effort of experimental and computational researchers, automatic and manually curated annotations of selenoprotein genes, proteins and SECIS elements. Version 1.0 of the database includes an initial set of eukaryotic genomic annotations, with special emphasis on the human selenoproteome, for immediate inspection by selenium researchers or incorporation into more general databases. SelenoDB is freely available at http://www.selenodb.org.
Abstract:
We present a generator of random networks where both the degree-dependent clustering coefficient and the degree distribution are tunable. Following the same philosophy as in the configuration model, the degree distribution and the clustering coefficient for each class of nodes of degree k are fixed ad hoc and a priori. The algorithm generates the corresponding topologies by first applying a closure of triangles and then the classical closure of the remaining free stubs. The procedure unveils a universal relation between clustering and degree-degree correlations for all networks, where the level of assortativity establishes an upper limit to the level of clustering. Maximum assortativity ensures no restriction on the decay of the clustering coefficient, whereas disassortativity sets a stronger constraint on its behavior. Correlation measures in real networks are seen to obey this structural bound.
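For illustration, here is a minimal Python sketch of the two-phase construction described above: triangles are closed first to inject clustering, and the remaining free stubs are then wired at random as in the classical configuration model. The per-degree-class clustering targets of the paper are collapsed here into a single triangle budget, multi-edges and self-loops are simply skipped, and the sampling of triples is a simplification, not the paper's exact algorithm.

import random

def generate(degrees, n_triangles):
    """Two-phase sketch: close triangles first, then wire the remaining
    free stubs at random (classical configuration-model closure)."""
    stubs = {i: d for i, d in enumerate(degrees)}   # remaining stubs per node
    edges = set()

    def add_edge(a, b):
        # skip self-loops and multi-edges; consume one stub at each endpoint
        if a != b and (a, b) not in edges and (b, a) not in edges \
                and stubs[a] > 0 and stubs[b] > 0:
            edges.add((a, b))
            stubs[a] -= 1
            stubs[b] -= 1

    nodes = list(stubs)
    # phase 1: triangle closure
    for _ in range(n_triangles):
        a, b, c = random.sample(nodes, 3)
        if stubs[a] >= 2 and stubs[b] >= 2 and stubs[c] >= 2:
            add_edge(a, b); add_edge(b, c); add_edge(a, c)

    # phase 2: classical closure of the remaining free stubs
    free = [n for n in nodes for _ in range(stubs[n])]
    random.shuffle(free)
    for a, b in zip(free[::2], free[1::2]):
        add_edge(a, b)
    return edges

print(generate(degrees=[3, 3, 3, 2, 2, 2, 1], n_triangles=2))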
Abstract:
The European Space Agency's Gaia mission will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way), by providing unprecedented position, parallax, proper motion, and radial velocity measurements for about one billion stars. The resulting catalogue will be made available to the scientific community and will be analyzed in many different ways, including the production of a variety of statistics. The latter will often entail the generation of multidimensional histograms and hypercubes as part of the precomputed statistics for each data release, or for scientific analysis involving either the final data products or the raw data coming from the satellite instruments. In this paper we present and analyze a generic framework that allows the hypercube generation to be easily done within a MapReduce infrastructure, providing all the advantages of the new Big Data analysis paradigm but without dealing with any specific interface to the lower-level distributed system implementation (Hadoop). Furthermore, we show how executing the framework for different data storage model configurations (i.e. row or column oriented) and compression techniques can considerably improve the response time of this type of workload for the currently available simulated data of the mission. In addition, we put forward the advantages and shortcomings of deploying the framework on a public cloud provider, benchmark it against other popular available solutions (which are not always the best for such ad-hoc applications), and describe some user experiences with the framework, which was employed for a number of dedicated astronomical data analysis techniques workshops.
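As an illustration of the map/reduce decomposition the framework relies on, here is a minimal Python sketch in which the map phase turns each catalogue record into the tuple of per-dimension bin indices and the reduce phase sums the counts of each hypercube cell. The column names ("mag", "parallax") and bin edges are made up for the example; the real framework runs this pattern on Hadoop over the mission's data formats.

from collections import Counter
from itertools import chain

# bin edges for two hypothetical catalogue columns; the real framework is
# generic over the number of dimensions and the storage format
BINS = {"mag": [0, 5, 10, 15, 20, 25], "parallax": [0, 1, 2, 5, 10]}

def bin_index(value, edges):
    """Index of the histogram bin that contains value (clamped to the last bin)."""
    for i in range(len(edges) - 1):
        if edges[i] <= value < edges[i + 1]:
            return i
    return len(edges) - 2

def mapper(record):
    """Map phase: emit one (cell, 1) pair, the cell being the tuple of
    per-dimension bin indices of this record."""
    cell = tuple(bin_index(record[dim], BINS[dim]) for dim in BINS)
    yield cell, 1

def reducer(pairs):
    """Reduce phase: sum the counts falling into each hypercube cell."""
    counts = Counter()
    for cell, n in pairs:
        counts[cell] += n
    return counts

records = [{"mag": 12.3, "parallax": 0.7}, {"mag": 13.1, "parallax": 0.4},
           {"mag": 21.0, "parallax": 3.2}]
print(reducer(chain.from_iterable(mapper(r) for r in records)))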
Abstract:
JXTA defines a set of six basic protocols especially suited to ad hoc, pervasive, multi-hop, peer-to-peer (P2P) computing. These protocols allow peers to cooperate and form autonomous peer groups. This article presents a method that provides security services on top of the basic protocols: data protection, authenticity, integrity and non-repudiation. The mechanisms presented are fully distributed and based on a pure peer-to-peer model, requiring neither the arbitration of a trusted third party nor a trust relationship previously established between peers, which is one of the main challenges in this kind of environment.
Abstract:
Resource management in multi-core processors has gained importance with the evolution of applications and architectures, but this management is highly complex. For example, the same parallel application executed multiple times with the same input data, on a single multi-core node, can show highly variable execution times. Multiple hardware and software factors affect performance. The way hardware resources (compute and memory) are assigned to the processes or threads, possibly belonging to several competing applications, is fundamental in determining this performance. The gap between assigning resources without knowing the true needs of the application and assigning them with a specific goal keeps growing. The best way to perform this assignment is automatically, with minimal programmer intervention. It is important to note that the way an application runs on an architecture is not necessarily the most suitable one, and this situation can be improved through proper management of the available resources. Appropriate resource management can offer advantages both to the application developer and to the computing environment where the application runs, allowing a larger number of applications to run with the same amount of resources. Moreover, this resource management would not require changes to the application or to its operating strategy. In order to propose resource-management policies, we analyzed the behavior of compute-intensive and memory-intensive applications. This analysis was carried out by studying placement across cores, the need to use shared memory, the size of the input workload, the distribution of data within the processor, and the granularity of work. Our goal is to identify how these parameters influence execution efficiency, identify bottlenecks, and propose possible improvements. A further proposal is to adapt the strategies already used by the scheduler in order to obtain better results.
Abstract:
In the current environment, several branches of science need to rely on high-performance computing to obtain results in a relatively short time. This is mainly due to the large volume of information that needs to be processed and to the computational cost that such calculations demand. Performing this processing in a distributed and parallel way shortens the waiting time for results and thus enables earlier decision making. To support this, there are essentially two widely adopted programming models: message passing, through libraries based on the MPI standard, and shared memory, using OpenMP. Hybrid applications are those that combine both models in order to exploit the specific strengths of the parallelism offered by each. Unfortunately, practice has shown that using this combination of models does not necessarily guarantee an improvement in application behavior. Therefore, an analysis of the factors that influence their performance would be helpful when implementing them, and would also be a first step towards predicting their behavior. Additionally, it would provide a way of determining which application parameters to modify in order to improve performance. In the present work we set out to define a methodology for the identification of performance factors in hybrid applications and, accordingly, to identify some of the factors that influence their performance.