814 results for Peer-to-Peer Networks


Relevance:

100.00%

Publisher:

Abstract:

Technology advances in hardware, software and IP networks such as the Internet or peer-to-peer file sharing systems are threatening the music business. The result has been an increasing amount of illegal copies available online as well as offline. With the emergence of digital rights management systems (DRMS), the music industry seems to have found the appropriate tool to simultaneously fight piracy and monetize its assets. Although these systems are very powerful and include multiple technologies to prevent piracy, it is as yet unknown to what extent such systems are currently being used by content providers. We provide empirical analyses, results, and conclusions related to digital rights management systems and the protection of digital content in the music industry. The analysis shows that most content providers protect their digital content through a variety of technologies such as passwords or encryption. However, each protection technology has its own specific goal, and not all of them prevent piracy. The majority of the respondents are satisfied with their current protection but want to reinforce it in the future, fearing increased piracy. Surprisingly, although encryption is seen as the core DRM technology, only a few companies are currently using it. Finally, half of the respondents do not believe in the success of DRMS and their ability to reduce piracy.

Relevance:

100.00%

Publisher:

Abstract:

Of the many state-of-the-art methods for cooperative localization in wireless sensor networks (WSNs), only very few adapt well to mobile networks. The main problems of the well-known algorithms based on nonparametric belief propagation (NBP) are their high communication cost and inefficient sampling techniques. Moreover, they either do not use smoothing or only apply it offline. Therefore, in this article, we propose more flexible and efficient variants of NBP for cooperative localization in mobile networks. In particular, we provide: i) an optional 1-lag smoothing done almost in real time, ii) a novel low-cost communication protocol based on package approximation and censoring, iii) increased robustness of the standard mixture importance sampling (MIS) technique, and iv) more informative importance densities obtained using the population Monte Carlo (PMC) approach or an auxiliary variable. Through extensive simulations, we confirm that all the proposed techniques outperform the standard NBP method.
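
As a toy illustration of the sampling machinery such methods rely on, the sketch below (Python/NumPy) performs a single mixture importance sampling (MIS) step for one node estimating its 2D position from noisy ranges to neighbours whose position estimates are known. The positions, noise levels and proposal mixture are assumptions chosen for demonstration; this is not the authors' algorithm or code.

    # Illustrative sketch only: one mixture importance sampling (MIS) step for a node
    # estimating its 2D position from noisy ranges to neighbours with known position
    # estimates. Positions, noise levels and the proposal mixture are demonstration assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    neighbor_pos = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])  # neighbours' position estimates
    measured_range = np.array([6.1, 6.4, 4.9])                      # noisy range measurements to them
    sigma_range = 0.5                                                # assumed ranging noise (std dev)
    sigma_prop = 3.0                                                 # spread of each proposal component
    n_samples = 2000

    # Proposal: equally weighted Gaussian mixture centred on the neighbours' positions.
    comp = rng.integers(0, len(neighbor_pos), size=n_samples)
    samples = neighbor_pos[comp] + sigma_prop * rng.standard_normal((n_samples, 2))

    # Squared distances from every sample to every neighbour (n_samples x n_neighbours).
    diff2 = np.sum((samples[:, None, :] - neighbor_pos[None, :, :]) ** 2, axis=2)

    # Importance weights: range likelihood at each sample divided by the proposal density there.
    log_lik = -0.5 * np.sum(((np.sqrt(diff2) - measured_range) / sigma_range) ** 2, axis=1)
    prop_pdf = np.mean(np.exp(-0.5 * diff2 / sigma_prop ** 2) / (2 * np.pi * sigma_prop ** 2), axis=1)
    log_w = log_lik - np.log(prop_pdf + 1e-300)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    estimate = w @ samples  # weighted-mean position estimate (posterior mean approximation)
    print("estimated position:", estimate)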

Relevance:

100.00%

Publisher:

Abstract:

We extend the concept of eigenvector centrality to multiplex networks, and introduce several alternative parameters that quantify the importance of nodes in a multi-layered networked system, including the definition of vectorial-type centralities. In addition, we rigorously show that, under reasonable conditions, such centrality measures exist and are unique. Computer experiments and simulations demonstrate that the proposed measures provide substantially different results when applied to the same multiplex structure, and highlight the non-trivial relationships between the different measures of centrality introduced.
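
As a rough illustration of how such centralities can be computed in practice, the sketch below runs power iteration on the supra-adjacency matrix of a two-layer multiplex, producing a layer-wise ("vectorial-type") score for each node plus one possible aggregated score. The supra-adjacency construction and the coupling strength are common modelling assumptions and are not claimed to be the exact family of measures introduced in the paper.

    # Illustrative sketch: eigenvector-style centrality for a two-layer multiplex network,
    # computed by power iteration on a supra-adjacency matrix. One common construction;
    # not claimed to be the exact measures defined in the paper.
    import numpy as np

    # Two layers over the same 4 nodes (adjacency matrices), plus inter-layer coupling.
    A1 = np.array([[0, 1, 1, 0],
                   [1, 0, 1, 0],
                   [1, 1, 0, 1],
                   [0, 0, 1, 0]], dtype=float)
    A2 = np.array([[0, 0, 1, 1],
                   [0, 0, 0, 1],
                   [1, 0, 0, 1],
                   [1, 1, 1, 0]], dtype=float)
    omega = 1.0  # assumed coupling strength between a node's replicas in the two layers

    n = A1.shape[0]
    supra = np.block([[A1, omega * np.eye(n)],
                      [omega * np.eye(n), A2]])

    # Power iteration for the leading (Perron) eigenvector of the supra-adjacency matrix.
    x = np.ones(2 * n)
    for _ in range(200):
        x = supra @ x
        x /= np.linalg.norm(x)

    per_layer = x.reshape(2, n)          # "vectorial-type" view: one score per node per layer
    aggregated = per_layer.sum(axis=0)   # one possible scalar summary per node
    print("layer-wise centralities:\n", per_layer)
    print("aggregated centralities:", aggregated)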

Relevance:

100.00%

Publisher:

Abstract:

Isolated electrical systems lack electrical interconnection to other networks and are usually located in geographically isolated areas, mainly islands or locations in developing countries. Until recently, only diesel generators were able to assure a safe and reliable supply, in exchange for very high fuel transportation and system operation costs. Transmission system operators (TSOs) are increasingly seeking to replace traditional energy models based on large groups of conventional generation units with mixed solutions, where diesel groups are kept as backup generation and renewable energy sources provide important advantages. Grid codes determine the technical requirements to be fulfilled by the generators connected to any electrical network, but the regulations applied to isolated grids are more demanding. In the technical literature it is fairly easy to find and compare grid codes for interconnected electrical systems; however, the existing literature on isolated grids is incomplete and sparse. This paper reviews the current state of isolated systems and the grid codes applicable to them, specifying points of comparison and outlining the guidelines to be followed by upcoming regulations.

Relevance:

100.00%

Publisher:

Abstract:

The energy consumption of Wireless Sensor Networks (WSNs) is a long-standing problem that has been addressed from different levels and perspectives, since it not only affects the survival of the network itself: the growing use of smart devices and the new Internet of Things paradigm mean that WSNs have an ever greater influence on the overall energy footprint. The upward trend in the use of these networks adds a further problem, spectrum saturation. WSNs usually operate in unlicensed bands such as the Industrial, Scientific and Medical (ISM) bands. These bands are shared with other kinds of networks, such as Wi-Fi or Bluetooth, whose use has grown exponentially in recent years. To address this problem, the Cognitive Radio (CR) paradigm has emerged, a technology that enables opportunistic access to the spectrum. Introducing cognitive capabilities into WSNs not only makes it possible to optimize their spectral efficiency but also has a positive impact on parameters such as quality of service, security and energy consumption. On the other hand, this new paradigm also raises some challenges related to energy consumption. Specifically, spectrum sensing, collaboration between nodes (which requires additional communication) and changes in the transmission parameters increase consumption with respect to classical WSNs. Given that research on energy consumption, as one of the main limitations of these networks, has already been addressed extensively, we assume that new strategies must arise from the new capabilities added by cognitive networks. Furthermore, when designing optimization strategies for CWSNs, the resource limitations of these networks in terms of memory, computation and node energy consumption must be kept firmly in mind. In this doctoral thesis we propose two strategies for reducing energy consumption in CWSNs based on three fundamental pillars. The first is the cognitive capabilities added to WSNs, which provide the possibility of adapting the transmission parameters according to the available spectrum. The second is collaboration, as an intrinsic characteristic of CWSNs. Finally, the third pillar of this work is game theory as a decision-support algorithm, widely used in WSNs because of its simplicity. The first contribution of the thesis is a complete analysis of the possibilities that cognitive radio introduces for reducing consumption in WSNs. From the conclusions drawn from this analysis, the hypotheses of this thesis were formulated, concerning the validity of using cognitive capabilities as a tool for consumption reduction in CWSNs. Having stated the hypotheses, we develop the main contributions of the thesis: the two strategies designed for consumption reduction based on game theory and CR. The first of them uses a non-cooperative game played between pairs of players. In the second strategy, although the game remains non-cooperative, the concept of collaboration is added. For each strategy we present the game model, the formal analysis of equilibria and optima, and a description of the complete strategy, including the interaction between nodes.
In order to test the strategies through simulation and implementation on real devices, we developed a test framework composed of a cognitive simulator and a testbed of cognitive nodes, developed in the B105 Lab, capable of communicating in three ISM bands. This test framework is another contribution of the thesis and will support further research in the CWSN area. Finally, the results obtained from testing the developed strategies are presented and discussed. The first strategy provides energy savings of more than 65% compared with a WSN without cognitive capabilities, and of around 25% compared with a cognitive strategy based on periodic spectrum sensing and channel switching against a fixed noise threshold. The algorithm behaves similarly regardless of the noise level, provided that the noise is spatially uniform. Despite its simplicity, this strategy guarantees optimal behaviour in terms of energy consumption, because game theory is used when designing the behaviour of the nodes. The collaborative strategy improves on the previous one in terms of noise protection in more complex noise scenarios, where it provides a 50% improvement over the previous strategy. ABSTRACT Energy consumption in Wireless Sensor Networks (WSNs) is a long-standing problem that has been addressed from different areas and on many levels, but it should not only be approached from the point of view of the network's own survival. A major portion of communication traffic has migrated to mobile networks and systems. The increased use of smart devices and the introduction of the Internet of Things (IoT) give WSNs a great influence on the carbon footprint, so optimizing the energy consumption of wireless networks could reduce their environmental impact considerably. In recent years, another problem has been added to the equation: spectrum saturation. Wireless Sensor Networks usually operate in unlicensed spectrum bands such as the Industrial, Scientific, and Medical (ISM) bands, shared with other networks (mainly Wi-Fi and Bluetooth). To address the problem of efficient spectrum utilization, Cognitive Radio (CR) has emerged as the key technology that enables opportunistic access to the spectrum. The introduction of cognitive capabilities into WSNs therefore allows their spectral occupation to be optimized. Cognitive Wireless Sensor Networks (CWSNs) not only increase the reliability of communications, but also have a positive impact on parameters such as Quality of Service (QoS), network security, and energy consumption. These new opportunities introduced by CWSNs open up a wide field in the energy consumption research area. However, they also imply some challenges. Specifically, the spectrum sensing stage, collaboration among devices (which requires extra communication), and changes in the transmission parameters increase the total energy consumption of the network. When designing CWSN optimization strategies, the fact that WSN nodes are very limited in terms of memory, computational power, and energy has to be considered; thus, light strategies that require low computing capacity must be found. Since the field of energy conservation in WSNs has been widely explored, we assume that new strategies can emerge from the new opportunities presented by cognitive networks.
In this PhD thesis, we present two strategies for energy consumption reduction in CWSNs supported by three main pillars. The first pillar is that the cognitive capabilities added to the WSN provide the ability to change the transmission parameters according to the available spectrum. The second pillar is collaboration, a basic characteristic of CWSNs. Finally, the third pillar of this work is game theory as a decision-making algorithm, which has been widely used in WSNs because its lightness and simplicity make it suitable for CWSNs. For the development of these strategies, a complete analysis of the possibilities opened up by incorporating cognitive capabilities into the network is first carried out. Once this analysis has been performed, we state the hypotheses of this thesis concerning the use of cognitive capabilities as a tool to reduce energy consumption in CWSNs. We then present the main contributions of this thesis: the two strategies designed for energy consumption reduction based on game theory and cognitive capabilities. The first is based on a non-cooperative game played between two players in a simple and selfish way. In the second strategy, the concept of collaboration is introduced: although the game used is also non-cooperative, the decisions are taken through collaboration. For each strategy, we present the game model, the formal analysis of equilibrium and optimum, and the complete strategy describing the interaction between nodes. In order to test the strategies through simulation and implementation on real devices, we have developed a CWSN framework composed of a CWSN simulator based on Castalia and a testbed of CWSN nodes able to communicate in three different ISM bands. We present and discuss the results derived from the energy optimization strategies. The first strategy yields energy improvement rates of over 65% compared with a WSN without cognitive techniques, and of over 25% compared with sensing strategies that change channels based on a decision threshold. We have also seen that the algorithm behaves similarly even with significant variations in the noise level, as long as the noise scenario is spatially uniform. The collaborative strategy improves on the previous strategy in terms of noise protection in more complex noise scenarios, where it shows improvement rates of over 50%.
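
To make the game-theoretic flavour concrete, the following sketch sets up a toy non-cooperative channel-selection game between two nodes and finds a pure-strategy equilibrium by best-response iteration. The payoff model (a noise-dependent retransmission cost plus a channel-switching cost) and all parameter values are hypothetical stand-ins, not the game actually modelled in the thesis.

    # Illustrative sketch only: a tiny non-cooperative channel-selection game between two
    # CWSN nodes, solved by best-response iteration. The payoff model (noise-dependent
    # retransmission cost plus a channel-switching cost) is a hypothetical stand-in.
    channels = [0, 1]
    noise = {0: 0.6, 1: 0.2}        # assumed probability of a corrupted frame per channel
    E_TX = 1.0                      # energy per transmission attempt
    E_SWITCH = 0.3                  # energy cost of changing channel
    current = {1: 0, 2: 0}          # both players currently sit on channel 0

    def cost(player, choice, other_choice):
        # Expected transmissions until success: 1 / (1 - p_err); sharing a channel is assumed
        # to add interference, modelled here as extra noise.
        p_err = noise[choice] + (0.2 if choice == other_choice else 0.0)
        expected_tx = E_TX / (1.0 - min(p_err, 0.95))
        switch = E_SWITCH if choice != current[player] else 0.0
        return expected_tx + switch

    # Best-response dynamics: each player in turn picks the cheapest channel given the other.
    choice = dict(current)
    for _ in range(10):
        changed = False
        for player, other in ((1, 2), (2, 1)):
            best = min(channels, key=lambda c: cost(player, c, choice[other]))
            if best != choice[player]:
                choice[player] = best
                changed = True
        if not changed:
            break  # no player wants to deviate: a pure-strategy Nash equilibrium

    print("equilibrium channel choices:", choice)
    print("costs:", {p: cost(p, choice[p], choice[3 - p]) for p in (1, 2)})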

Relevance:

100.00%

Publisher:

Abstract:

Being able to accurately classify the application or program from which the flows making up the Internet traffic of a network originate gives companies and organizations a useful tool for managing their network resources, as well as the possibility of establishing policies to block or prioritize specific traffic. The proliferation of new applications and new techniques has made it harder to detect applications by using the well-known port values assigned by the IANA (Internet Assigned Numbers Authority). P2P (peer-to-peer) networks, the use of unknown or random ports, and the masking of many applications' traffic as HTTP and HTTPS traffic in order to traverse firewalls and NATs (Network Address Translation), among other practices, create the need for new traffic detection methods. The aim of this study is to develop a set of practices that allow this task to be carried out with techniques that go beyond the observation of ports and other well-known values. Several methodologies exist: Deep Packet Inspection (DPI), which searches for signatures based on patterns in the packet contents, including the payload, that characterize each application; machine learning over flow parameters, which uses statistical analysis to determine which application the flows may belong to; and, finally, more heuristic techniques based on intuition or prior knowledge about network traffic. Specifically, we propose using some of the techniques mentioned above together with data-mining techniques such as Principal Component Analysis (PCA) and clustering of statistics extracted from the flows contained in network traffic capture files. This involves configuring various parameters through an iterative trial-and-error process in order to arrive at a reliable traffic classification. The ideal result would be one in which each application present in the traffic is identified in a separate cluster, or in clusters that group applications of a similar nature. To this end, traffic captures are created in a controlled environment, identifying each capture with its corresponding application, and the flows are then extracted from those captures. After that, selected parameters of the packets belonging to those flows are obtained, such as the arrival date and time or the length in octets of the IP packet. These parameters are loaded into a MySQL database and used to compute statistics that help, in a subsequent step, to classify the flows by data mining. Specifically, PCA and clustering are applied using the RapidMiner software. Finally, the results obtained are presented in a confusion matrix that allows them to be properly assessed. ABSTRACT. Being able to classify the applications that generate the traffic flows in an Internet network allows companies and organizations to implement efficient resource management policies, such as prohibiting specific applications or prioritizing certain application traffic, in pursuit of an optimized use of the available bandwidth.
The proliferation of new applications and new techniques in recent years has made it more difficult to use well-known values assigned by the IANA (Internet Assigned Numbers Authority), such as UDP and TCP ports, to identify the traffic. In addition, P2P networks and data encapsulation inside HTTP and HTTPS traffic have increased the need to improve these traffic analysis techniques. The aim of this project is to develop a number of techniques that enable us to classify the traffic with more than the simple observation of the well-known ports. Several proposals have been created to cover this need: Deep Packet Inspection (DPI) tries to find signatures in the packets by reading the information contained in them, the payload, looking for patterns that can be used to characterize the applications to which that traffic belongs; machine learning procedures work with statistical analysis of the flows, trying to generate an automatic process that learns from those statistical parameters and calculates the likelihood of a flow belonging to a certain application; heuristic techniques, finally, are based on the intuition or the knowledge of the researchers themselves about the traffic being analyzed, which can help them characterize it. Specifically, we propose the use of some of the techniques previously mentioned in combination with data-mining techniques such as Principal Component Analysis (PCA) and clustering (grouping) of the flows extracted from network traffic captures. An iterative trial-and-error process will be needed to configure these data-mining techniques in search of a reliable traffic classification. The ideal result would be one in which the traffic flows of each application are grouped correctly in their own cluster, or in clusters that contain groups of applications of a similar nature. To do this, network traffic captures will be created in a controlled environment in which every capture is known to belong to a specific application. Then, for each capture, all the flows will be extracted. These flows will be used to extract information such as the date and arrival time or the IP length of the packets inside them. This information will then be loaded into a MySQL database, where all the packets defining a flow will be classified and each flow will be assigned to its specific application. All the information obtained from the packets will be used to generate statistical parameters that describe each flow in the best possible way. After that, the data-mining techniques previously mentioned (PCA and clustering) will be applied to these parameters using the RapidMiner software. Finally, the results obtained from the data mining will be compared with the real classification of the flows stored in the database. A confusion matrix will be used for the comparison, letting us measure the accuracy of the developed classification process.
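
For illustration, the following sketch applies the same pipeline idea (per-flow statistics, PCA, clustering, then a confusion-style table against the known application labels) to synthetic data. The thesis itself uses RapidMiner and a MySQL database; scikit-learn and made-up flow features are used here purely as a stand-in.

    # Illustrative sketch: PCA followed by clustering of per-flow statistics, then a
    # contingency (confusion-style) table against the known application labels.
    # Synthetic features and application names are assumptions for demonstration.
    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(42)

    # Synthetic per-flow statistics (mean packet length, mean inter-arrival time, packets
    # per flow) for three hypothetical applications.
    def make_flows(n, mean_len, mean_iat, mean_pkts, label):
        X = np.column_stack([
            rng.normal(mean_len, 80, n),
            rng.normal(mean_iat, 0.02, n),
            rng.normal(mean_pkts, 15, n),
        ])
        return pd.DataFrame(X, columns=["pkt_len", "iat", "pkts"]).assign(app=label)

    flows = pd.concat([
        make_flows(200, 1200, 0.01, 300, "p2p"),
        make_flows(200, 400, 0.05, 40, "web"),
        make_flows(200, 160, 0.02, 800, "voip"),
    ], ignore_index=True)

    X = StandardScaler().fit_transform(flows[["pkt_len", "iat", "pkts"]])
    X2 = PCA(n_components=2).fit_transform(X)          # project flows onto 2 principal components
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)

    # Contingency table: how the known applications spread across the discovered clusters.
    print(pd.crosstab(flows["app"], clusters, rownames=["application"], colnames=["cluster"]))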

Relevance:

100.00%

Publisher:

Abstract:

Emotional expression is one of the human infant's primary means of communication, and its manifestation is a resource that ensures the infant's very survival. This topic has been widely studied, usually in the infant's relationship with adults, particularly the mother, under laboratory conditions and in the home environment. However, new social configurations have emerged in which the care of young children is increasingly shared with early childhood education institutions, where age peers are the most frequent partners. A review of the literature on emotional expressiveness between infant peers reveals gaps in this field, and some authors do not even acknowledge that such interaction occurs, since interaction between peers in the first two years of life is not regarded as feasible. We therefore set out to verify whether expressions of smiling and crying occur in interactions between pairs of infants in the nursery environment of a daycare centre and, if so, to investigate how they unfold. With a theoretical and methodological grounding in the Network of Meanings (Rede de Significações) framework, we conducted a longitudinal multiple-case study with qualitative analysis. Eighteen infants from a public daycare centre located in the interior of the State of São Paulo, together with the three educators responsible for the group, participated in the research. Among the infants, Tiago (five to ten months of age) and Bruno (eight months to one year and one month of age) were the focal subjects, followed for five months through weekly thirty-minute video recordings of each infant. The analysis of the empirical material was divided into two stages: 1) mapping the occurrences of smiling and crying, identifying the partners with whom the infants interacted when expressing themselves; and 2) qualitative analysis of the interactive episodes in which Bruno and Tiago expressed emotions with their peers. Based on the infants' different forms of expression, drawing both on the literature and on the empirical material analysed, the categories of laughter, smiling, whimpering, crying and prolonged crying were created. The analysis found no occurrence of laughter in the infants' interactions, at least on the days when the recordings were made. Tiago's smiles appeared in playful situations and changed over time. At around eight or nine months of age, his smiles began to have repercussions on his partners, who briefly reacted to the expression. In Bruno's case, also at nine months, he began to produce smiles that resonated with and were contagious to his age peers. Bruno's smiles carried a richness of meanings identifiable in the interactions: he smiled not only at his peers, but also about them and with them. Despite the changes in the infants' smiles over time, the process was not linear. In the infants' expressions of crying, no changes were observed in the interactions with peers, only differences in quality and duration. The peer interactions in which Tiago cried were mostly related to discomfort caused by physical intrusions on him. In Bruno's case, most of the interactions involved discomfort arising from competition or the loss of toys. For both expressions studied, we observed that smiles, although less frequent between peers, were more contagious than crying, in the sense of eliciting some reaction from the peers.
It is worth pointing out, at this stage, the relevance of future studies on this research topic, given the importance and richness of the manifestations of emotional expressiveness in the interactions of infants who live together in a collective environment.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we propose a neural network model to simplify 2D meshes. This model is based on the Growing Neural Gas model and is able to simplify any mesh with different topologies and sizes. A triangulation process is included with the objective of reconstructing the mesh. The model is applied to several problems related to urban networks.
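
For readers unfamiliar with the underlying algorithm, the following is a minimal Growing Neural Gas learning loop on 2D points. The data and parameter values are assumptions chosen for demonstration, node removal is omitted for brevity, and the triangulation step described in the paper is not included.

    # Illustrative sketch of a Growing Neural Gas (GNG) learning loop on 2D points sampled
    # from a stand-in point set; parameters and data are demonstration assumptions, not the
    # model described in the paper (which also adds a triangulation step).
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.uniform(0, 1, size=(2000, 2))            # stand-in for 2D mesh vertices

    eps_b, eps_n = 0.05, 0.006                           # winner / neighbour learning rates
    age_max, insert_every = 50, 100                      # edge ageing limit and node-insertion period
    alpha, d = 0.5, 0.995                                # error decay on split / per step
    max_nodes = 60

    nodes = [rng.uniform(0, 1, 2), rng.uniform(0, 1, 2)]
    error = [0.0, 0.0]
    edges = {(0, 1): 0}                                  # (i, j) with i < j -> edge age

    def neighbours(i):
        return [b if a == i else a for (a, b) in edges if i in (a, b)]

    for step, x in enumerate(data, 1):
        dists = [np.sum((x - w) ** 2) for w in nodes]
        s1, s2 = np.argsort(dists)[:2]                   # winner and runner-up units
        error[s1] += dists[s1]
        nodes[s1] += eps_b * (x - nodes[s1])             # move winner toward the sample
        for nb in neighbours(s1):
            nodes[nb] += eps_n * (x - nodes[nb])         # drag topological neighbours along
            edges[(min(s1, nb), max(s1, nb))] += 1       # age edges emanating from the winner
        edges[(min(s1, s2), max(s1, s2))] = 0            # refresh (or create) winner/runner-up edge
        edges = {e: a for e, a in edges.items() if a <= age_max}   # drop old edges (node removal omitted)
        if step % insert_every == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(error))                    # unit with largest accumulated error
            nbrs = neighbours(q)
            if nbrs:
                f = max(nbrs, key=lambda nb: error[nb])
                r = len(nodes)
                nodes.append((nodes[q] + nodes[f]) / 2)  # insert a unit between q and its worst neighbour
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])
                edges.pop((min(q, f), max(q, f)), None)
                edges[(min(q, r), max(q, r))] = 0
                edges[(min(f, r), max(f, r))] = 0
        error = [e * d for e in error]

    print(len(nodes), "units,", len(edges), "edges approximate the input distribution")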

Relevance:

100.00%

Publisher:

Abstract:

The implementation of the new Architecture Degree and the important normative changes in the building sector imply the need to use new teaching methodologies that enhance skills and competences in order to respond to the increasing requirements that society demands of the future architect. The aim of this paper is to present, analyze and discuss the development of multidisciplinary workshops as a new teaching methodology used in several Construction subjects of the Architecture Degree at the University of Alicante. These workshops are conceived to synthesize and complement the technical knowledge acquired by the students during the Degree and to enhance the skills and competencies necessary for professional practice. With that purpose, we decided to experiment on current subjects of the degree during this academic year, applying the requirements defined for the future Architecture Degree in a practical way, through workshops between different subjects, superposing the technical knowledge with the resolution of constructive problems in the development of an architectural project. By developing these workshops between subjects we can dissolve the traditional boundaries between different areas of the Degree. This multidisciplinary workshop methodology allows the use of all the global knowledge acquired by students during their studies and, at the same time, it enhances students' ability to communicate and discuss their ideas and solutions in public. It also increases their capacity for self-criticism, and it fosters their ability to undertake learning strategies and research in an autonomous way. The methodology used is based on the development of a practical work common to several subjects from different knowledge areas within the "Technology Block" of the future Architecture Degree. Thus, students approach the problem in a global way, discussing it simultaneously with teachers from different areas. By using these new workshops we stimulate an interactive class rather than a traditional lecture. Work is evaluated continuously, valuing the pupil's participative attitude, group work during class time and the achievement of weekly objectives, and stimulating the individual responsibility and positive interdependence of the pupil within the working group. The exercises are designed to improve students' ability to transmit their ideas and solutions in public, knowing how to discuss and defend their technical solutions before peers and teachers (peer reviewing), their capacity for self-criticism, and their capacity to undertake strategies and autonomous learning processes while carrying out personal research into new technologies, systems and materials. Most students have expressed a preference for this teaching methodology in the multidisciplinary workshops offered over the last few years, with very satisfactory academic results. In conclusion, the viability of introducing the new contents and new teaching methodologies necessary for acquiring the skills of the future Architecture Degree can be verified today, through workshops between several subjects that have been widely accepted by students and have produced positive, verifiable academic results.

Relevance:

100.00%

Publisher:

Abstract:

Food policy is one of the most regulated policy fields at the EU level. ‘Unholy alliances’ are collaborative patterns that temporarily bring together antagonistic stakeholders behind a common cause. This paper deals with such ‘transversal’ co-operations between citizens’ groups (NGOs, consumer associations, etc.) and economic stakeholders (food industries, retailers, etc.), focusing on their ambitions and consequences. It builds on two case studies that enable a more nuanced view of the prospects for the development of transversal networks at the EU level. The main findings are that (i) the rationale behind the adoption of collaborative partnerships actually comes from a case-by-case cost/benefit analysis leading to hopes of improved access to institutions; (ii) membership of a collaborative network leads to a learning process closely linked to the network’s performance; and (iii) coalitions can enjoy a better reception, rather than automatically better access, depending on several factors independent of the stakeholders themselves.

Relevance:

100.00%

Publisher:

Abstract:

Hearings held Jan. 26, 1956 - Feb. 5, 1960, pursuant to Senate Resolution 18, 84th Congress [and others]. Volume 8 also has the special subtitle: The final phase of the Committee's inquiry with reference to overall television allocations.

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

100.00%

Publisher:

Abstract:

Traditional methods of R&D management are no longer sufficient for embracing innovations and leveraging complex new technologies to fully integrated positions in established systems. This paper presents the view that the technology integration process is a result of fundamental interactions embedded in inter-organisational activities. Emerging industries, high technology companies and knowledge intensive organisations owe a large part of their viability to complex networks of inter-organisational interactions and relationships. R&D organisations are the gatekeepers in the technology integration process with their initial sanction and motivation to develop technologies providing the first point of entry. Networks rely on the activities of stakeholders to provide the foundations of collaborative R&D activities, business-to-business marketing and strategic alliances. Such complex inter-organisational interactions and relationships influence value creation and organisational goals as stakeholders seek to gain investment opportunities. A theoretical model is developed here that contributes to our understanding of technology integration (adoption) as a dynamic process, which is simultaneously structured and enacted through the activities of stakeholders and organisations in complex inter-organisational networks of sanction and integration.

Relevance:

100.00%

Publisher:

Abstract:

Boolean models of genetic regulatory networks (GRNs) have been shown to exhibit many of the characteristic dynamics of real GRNs, with gene expression patterns settling to point attractors or limit cycles, or displaying chaotic behaviour, depending upon the connectivity of the network and the relative proportions of excitatory and inhibitory interactions. This range of behaviours is only apparent, however, when the nodes of the GRN are updated synchronously, a biologically implausible state of affairs. In this paper we demonstrate that evolution can produce GRNs with interesting dynamics under an asynchronous update scheme. We use an Artificial Genome to generate networks which exhibit limit cycle dynamics when updated synchronously, but collapse to a point attractor when updated asynchronously. Using a hill climbing algorithm the networks are then evolved using a fitness function which rewards patterns of gene expression which revisit as many previously seen states as possible. The final networks exhibit “fuzzy limit cycle” dynamics when updated asynchronously.
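
As a small illustration of the synchronous versus asynchronous distinction discussed above, the sketch below builds a random Boolean network and updates it both ways. The network, its random update functions and the sizes used are generic assumptions; this is not the paper's Artificial Genome, its fitness function or its evolved networks.

    # Illustrative sketch: a small random Boolean genetic regulatory network updated
    # synchronously vs. asynchronously. Network structure, update rules and sizes are
    # demonstration assumptions.
    import numpy as np

    rng = np.random.default_rng(7)
    N, K = 8, 2                                          # genes and regulators per gene
    inputs = np.array([rng.choice(N, K, replace=False) for _ in range(N)])
    tables = rng.integers(0, 2, size=(N, 2 ** K))        # random Boolean function per gene

    def next_value(state, gene):
        idx = 0
        for bit in state[inputs[gene]]:                  # build the truth-table index from regulator bits
            idx = (idx << 1) | int(bit)
        return tables[gene, idx]

    def run(state, asynchronous, steps=60):
        trajectory = [tuple(state)]
        for _ in range(steps):
            if asynchronous:
                g = rng.integers(N)                      # update one randomly chosen gene
                state = state.copy()
                state[g] = next_value(state, g)
            else:
                state = np.array([next_value(state, g) for g in range(N)])  # update all genes at once
            trajectory.append(tuple(state))
        return trajectory

    start = rng.integers(0, 2, N)
    sync = run(start.copy(), asynchronous=False)
    asyn = run(start.copy(), asynchronous=True)
    print("distinct states visited, synchronous:  ", len(set(sync)))
    print("distinct states visited, asynchronous: ", len(set(asyn)))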

Relevance:

100.00%

Publisher:

Abstract:

Arguably, the world has become one large pervasive computing environment. Our planet is growing a digital skin of a wide array of sensors, hand-held computers, mobile phones, laptops, web services and publicly accessible web-cams. Often, these devices and services are deployed in groups, forming small communities of interacting devices. Service discovery protocols allow processes executing on each device to discover services offered by other devices within the community. These communities can be linked together to form a wide-area pervasive environment, allowing processes in one group to interact with services in another. However, the costs of communication, and the protocols by which this communication is mediated, differ in the wide area from those of intra-group, or local-area, communication. Communication is an expensive operation for small, battery-powered devices, but it is less expensive for servers and workstations, which have a constant power supply and are connected to high-bandwidth networks. This paper introduces Superstring, a peer-to-peer service discovery protocol optimised for use in the wide area. Its goals are to minimise computation and memory overhead in the face of large numbers of resources. It achieves this memory and computation scalability by distributing the storage cost of service descriptions and the computation cost of queries over multiple resolvers.
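
As a loose illustration of the final point, the sketch below spreads service descriptions over several resolvers by hashing the service key, so that each description is stored, and each query answered, by a single resolver. This generic hash-partitioning idea is only meant to illustrate the stated goal; it is not the Superstring protocol.

    # Illustrative sketch only: partitioning service descriptions across several resolvers by
    # hashing the service key, so registration and query load are spread out. A generic
    # hash-partitioning idea, not the Superstring protocol itself.
    import hashlib

    resolvers = ["resolver-a", "resolver-b", "resolver-c"]   # hypothetical wide-area resolvers
    registry = {r: {} for r in resolvers}

    def home_resolver(service_key: str) -> str:
        digest = hashlib.sha1(service_key.encode()).digest()
        return resolvers[int.from_bytes(digest[:4], "big") % len(resolvers)]

    def register(service_key: str, description: dict) -> None:
        registry[home_resolver(service_key)][service_key] = description   # only one resolver stores it

    def lookup(service_key: str):
        return registry[home_resolver(service_key)].get(service_key)      # only one resolver is queried

    register("printer/floor2", {"proto": "ipp", "host": "10.0.2.7"})
    register("webcam/lobby", {"proto": "http", "host": "10.0.3.9"})
    print(lookup("printer/floor2"), "stored on", home_resolver("printer/floor2"))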