869 results for Peer-to-peer architecture (Computer networks)
Abstract:
The Session Initiation Protocol (SIP) is an application-layer control protocol standardized by the IETF for creating, modifying and terminating multimedia sessions. With the increasing use of SIP in large deployments, the current SIP design cannot handle overload effectively, which may cause SIP networks to suffer congestion collapse under heavy offered load. This paper introduces a distributed end-to-end overload control (DEOC) mechanism, which is deployed at the edge servers of SIP networks and is easy to implement. By applying overload control closest to the source of traffic, DEOC can maintain high throughput for SIP networks even when the offered load exceeds the capacity of the network. In addition, it responds quickly to sudden variations in the offered load and achieves good fairness. Theoretical analysis and extensive simulations verify that DEOC is effective in controlling overload in SIP networks.
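The abstract does not spell out DEOC's control law, so the following is only a rough Python sketch of the general idea it describes: throttling new SIP requests at the edge, close to the traffic source, and adapting the admitted rate to downstream feedback. All names (EdgeThrottle, on_feedback, the AIMD-style adaptation) are assumptions for illustration, not the paper's mechanism.

```python
# Illustrative sketch only: a token-bucket admission controller at a SIP edge
# server, whose admitted rate adapts to observed downstream rejections.
import time

class EdgeThrottle:
    def __init__(self, rate=100.0, burst=20):
        self.rate = rate            # admitted INVITEs per second
        self.tokens = float(burst)
        self.burst = burst
        self.last = time.monotonic()

    def admit(self):
        """Return True if a new INVITE may be forwarded into the core."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                # reject/queue at the edge, close to the source

    def on_feedback(self, reject_ratio, target=0.01, gain=0.5):
        """Adapt the admitted rate from the observed downstream rejection
        ratio (e.g. SIP 503 responses); a simple AIMD-style reaction."""
        if reject_ratio > target:
            self.rate *= (1.0 - gain)   # back off multiplicatively
        else:
            self.rate += 1.0            # probe additively for spare capacity
```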
Consolidation of a WSN and Minimax method to rapidly neutralise intruders in strategic installations
Abstract:
Due to the sensitive international situation caused by still-recent terrorist attacks, there is a common need to protect the safety of large spaces such as government buildings, airports and power stations. To address this problem, developments in several research fields, such as video and cognitive audio, decision support systems, human interfaces, computer architecture, communications networks and communications security, should be integrated with the goal of achieving advanced security systems capable of meeting all of the specified requirements and closing the gap that presently exists in the market. This paper describes the implementation of a decision system for crisis management in infrastructural building security, specifically the management of building intrusions. The positions of unidentified persons are reported with the help of a Wireless Sensor Network (WSN). The goal is to achieve an intelligent system capable of making the best decision in real time in order to quickly neutralise one or more intruders who threaten strategic installations. It is assumed that the intruders' behaviour is inferred from sequences of sensor activations and their fusion. This article presents a general approach to selecting the optimal operation from the available neutralisation strategies based on a Minimax algorithm. The distances among the different scenario elements are used to measure the risk of the scene, so a path-planning technique is integrated in order to attain good performance. Different actions to be executed on the elements of the scene, such as moving a guard, blocking a door or turning on an alarm, are used to neutralise the crisis. This set of actions executed to stop the crisis is known as the neutralisation strategy. Finally, the system has been tested in simulations of real situations, and the results have been evaluated according to the final state of the intruders. In 86.5% of the cases the system achieved the capture of the intruders, and in 59.25% of the cases they were intercepted before they reached their objective.
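As a rough illustration of the approach described above, the following Python sketch selects a neutralisation action by one-ply Minimax over possible intruder replies, with a distance-based risk score standing in for the article's scene-risk measure. The scene model, action sets and risk function are hypothetical, not taken from the paper.

```python
# A minimal Minimax sketch: a scene is a list of intruders, each described by
# (distance_to_objective, distance_to_nearest_guard). Actions and intruder
# moves are callables that transform one intruder's distances.

def risk(scene):
    # A closer objective raises risk; a nearby guard lowers it (assumed model).
    return sum(d_guard - d_objective for d_objective, d_guard in scene)

def transform(scene, move):
    # Apply one move (our action or an intruder reply) to every intruder.
    return [move(intruder) for intruder in scene]

def minimax_choice(scene, our_actions, intruder_moves):
    """Pick our action minimising the worst-case risk over intruder replies."""
    def worst_case(s):
        return max(risk(transform(s, move)) for move in intruder_moves)
    return min(our_actions, key=lambda a: worst_case(transform(scene, a)))
```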
Abstract:
Networks need to provide higher speeds than those offered today. Given that in radio technologies spectrum is the scarcest resource, it is essential, both in the evolution of these technologies and in new developments, to maximize the number of bits per hertz transmitted. Long Term Evolution (LTE) optimizes spectral efficiency with new modulations in the air interface and more advanced radio algorithms. Added to these capabilities is the fact that LTE is an end-to-end IP-based technology, which makes it possible to offer high transmission rates per user and very low latency, i.e. network response delays of only around 10 milliseconds, so that any real-time application can be offered. LTE is the latest standard in mobile network technology and will ensure the competitiveness of 3GPP in the future; it may be considered a bridge technology between current 3G and 3.5G networks and the future 4G networks, which are expected to reach speeds of up to 1 Gbit/s. LTE will provide operators with a simplified yet robust architecture, supporting services over IP technology. The objectives pursued with its deployment are ambitious: on the one hand, users will have a wide range of added services, with capabilities similar to those they currently enjoy with residential broadband access and at competitive prices, while the operator will have a network based on a fully IP environment, reducing its complexity and cost and giving operators the opportunity to migrate to LTE directly.
A major advantage of LTE is its ability to merge with existing networks, ensuring interconnection with them, extending its current coverage and allowing a data connection established by a user in the LTE environment to continue when LTE coverage fades. Moreover, the operator has the advantage of being able to deploy the LTE network gradually, starting initially in areas of high demand for broadband services and expanding it progressively in line with that demand.
Abstract:
Mobile Edge Computing enables the deployment of services, applications, content storage and processing in close proximity to mobile end users. This highly distributed computing environment can be used to provide ultra-low latency, precise positional awareness and agile applications, which could significantly improve the user experience. In order to achieve this, it is necessary to consider next-generation paradigms such as Information-Centric Networking and Cloud Computing, integrated with the upcoming 5th Generation networking access. A cohesive end-to-end architecture is proposed, fully exploiting Information-Centric Networking together with the Mobile Follow-Me Cloud approach, to enhance the migration of content caches located at the edge of cloudified mobile networks. The chosen content-relocation algorithm attains content-availability improvements of up to 500 when a mobile user performs a request, compared against other existing solutions. The evaluation considers a realistic core network, with functional and non-functional measurements, including the deployment of the entire system and the computation and allocation/migration of resources. The achieved results reveal that the proposed architecture is beneficial not only from the users' perspective but also from the providers' point of view, as providers may be able to optimize their resources and achieve significant bandwidth savings.
Abstract:
A biologically realizable, unsupervised learning rule is described for the online extraction of object features, suitable for solving a range of object recognition tasks. Alterations to the basic learning rule are proposed which allow the rule to better suit the parameters of a given input space. One negative consequence of such modifications is the potential for learning instability. The criteria for such instability are modeled using digital filtering techniques, and the predicted regions of stability and instability are tested. The result is a family of learning rules which can be tailored to the specific environment, improving both convergence times and accuracy over the standard learning rule, while simultaneously ensuring learning stability.
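As a hedged illustration of how digital filtering techniques can model such stability criteria, the sketch below treats a linearised weight-update recursion as a discrete-time filter and declares it stable when every pole of its characteristic polynomial lies inside the unit circle. The example coefficients are made up; the paper's actual learning rule is not given in the abstract.

```python
# Stability of a linear recursion via pole locations (illustrative only).
import numpy as np

def is_stable(char_poly_coeffs):
    """char_poly_coeffs: highest order first; e.g. the recursion
    w[n] = a*w[n-1] + b*w[n-2] has characteristic polynomial [1, -a, -b].
    Stable iff all roots lie strictly inside the unit circle."""
    poles = np.roots(char_poly_coeffs)
    return bool(np.all(np.abs(poles) < 1.0))

# Example: a rule whose linearisation is w[n] = 1.2*w[n-1] - 0.3*w[n-2]
print(is_stable([1.0, -1.2, 0.3]))   # True: poles ~0.36 and ~0.84
```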
Abstract:
Collaborative working with the aid of computers is increasing rapidly due to the widespread use of computer networks, the geographic mobility of people, and small, powerful personal computers. For the past ten years, research has been conducted into this use of computing technology from a wide variety of perspectives and for a wide range of uses. This thesis adds to that previous work by examining the area of collaborative writing amongst groups of people. The research brings together a number of disciplines, namely sociology for examining group dynamics, psychology for understanding individual writing and learning processes, and computer science for database, networking, and programming theory. The project initially looks at groups and how they form, communicate, and work together, progressing on to look at writing and the cognitive processes it entails for both composition and retrieval. The thesis then details a set of issues which need to be addressed in a collaborative writing system. A model for collaborative writing is then developed, detailing an iterative process of co-ordination, writing and annotation, consolidation, and negotiation, based on a structured but extensible document model. Implementation issues for a collaborative application are then described, along with various methods of overcoming them. Finally, the design and implementation of a collaborative writing system, named Collaborwriter, is described in detail, concluding with some preliminary results from initial user trials and testing.
Abstract:
This work was funded by the RCUK Digital Economy award to the dot.rural Digital Economy Hub, University of Aberdeen; award reference: EP/G066051/1. The dataset used by this paper can be acquired by emailing the first author. We thank Matt Dennis, Kirsten A. Smith and Michael Gibson for their contributions to the research.
Abstract:
Corporations and enterprises have embraced the notion of shared experiences and collective workplaces by incorporating coworking spaces. A great deal of the methodology carries over from the studio culture that architecture schools foster, as well as from think-tank culture. Maker spaces and incubator spaces are prime examples of places that engender creative thought and products. This thesis seeks to explore the impact that architecture has on collaborative spaces, with a focus on augmenting the learning and design activities they generate. The investigation explores the collaborative design process as a series of interactions between groups of individuals, including the impact of technology and its implications for those interactions. The goal of this thesis is not to further the use of a tool or systematic procedure, but to use architecture as a framing device to form places for collaborative processes.
Abstract:
The main purpose of this paper is to present the architecture of an automated system that allows monitoring and tracking, in real time (online), the possible occurrence of faults and electromagnetic transients observed in primary power distribution networks. Through the interconnection of this automated system to the utility operation center, it will be possible to provide an efficient tool to assist decision-making by the operation center. In short, the aim is to have all the tools necessary to identify, almost instantaneously, the occurrence of faults and transient disturbances in the primary power distribution system, as well as to determine their respective origin and probable location. The compiled results from the application of this automated system show that the developed techniques provide accurate results, identifying and locating several occurrences of faults observed in the distribution system.
Abstract:
Swallowing dynamics involves the coordination and interaction of several muscles and nerves which allow correct food transport from mouth to stomach without laryngotracheal penetration or aspiration. Clinical swallowing assessment depends on the evaluator's knowledge of the anatomic structures and neurophysiological processes involved in swallowing. Any alteration in those steps is termed oropharyngeal dysphagia, which may have many causes, such as neurological or mechanical disorders. Videofluoroscopy of swallowing is presently considered the best exam to objectively assess the dynamics of swallowing, but it must be conducted under certain restrictions, due to the patient's exposure to radiation, which limits its periodic repetition for monitoring swallowing therapy. Another method, called cervical auscultation, is a promising new diagnostic tool for the assessment of swallowing disorders. The potential to diagnose dysphagia in a noninvasive manner by assessing the sounds of swallowing is a highly attractive option for the dysphagia clinician. Even so, the captured sound contains an amount of noise, which can hamper the evaluator's decision. The present paper therefore proposes the use of a filter to improve the quality of the audible sound and facilitate the interpretation of the examination. A wavelet denoising approach is used to decompose the noisy signal, and the signal-to-noise ratio was evaluated to demonstrate the quantitative results of the proposed methodology. (C) 2007 Elsevier Ltd. All rights reserved.
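A minimal sketch of the kind of wavelet denoising described above, using the PyWavelets library: decompose the recorded swallowing sound, soft-threshold the detail coefficients, reconstruct, and report a signal-to-noise ratio. The wavelet ('db4'), decomposition level and universal threshold are common defaults assumed here, not the paper's reported settings.

```python
# Wavelet denoising with soft thresholding (illustrative defaults).
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients,
    # then the universal threshold sigma * sqrt(2 * log N).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def snr_db(reference, denoised):
    """Signal-to-noise ratio (dB) used to quantify the improvement."""
    noise = reference - denoised
    return 10 * np.log10(np.sum(reference**2) / np.sum(noise**2))
```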
Abstract:
Computer viruses are an important risk to computational systems, endangering both corporations of all sizes and personal computers used for domestic applications. Here, classical epidemiological models for disease propagation are adapted to computer networks and, by using simple system-identification techniques, a model called SAIC (Susceptible, Antidotal, Infectious, Contaminated) is developed. Real data about computer viruses are used to validate the model. (c) 2008 Elsevier Ltd. All rights reserved.
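The abstract names the four SAIC compartments but not the rate equations, so the following is an illustrative compartmental sketch in the style of classical epidemic models, with assumed flows and parameters, integrated with SciPy. It is not the paper's identified model.

```python
# Assumed SAIC-style compartmental dynamics (flows and rates are made up):
# S -> I by contact with infectious machines, S -> A and I -> A by installing
# an antidote (anti-virus), I -> C when an infected machine is disabled.
import numpy as np
from scipy.integrate import odeint

def saic(y, t, beta, delta, alpha_s, alpha_i):
    S, A, I, C = y
    N = S + A + I + C
    dS = -beta * S * I / N - alpha_s * S              # infection + protection
    dA = alpha_s * S + alpha_i * I                    # antidotal machines
    dI = beta * S * I / N - delta * I - alpha_i * I   # infectious machines
    dC = delta * I                                    # contaminated machines
    return [dS, dA, dI, dC]

t = np.linspace(0, 100, 1000)
traj = odeint(saic, y0=[990, 0, 10, 0], t=t, args=(0.6, 0.05, 0.02, 0.1))
print(traj[-1])   # final compartment sizes
```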
Abstract:
The long short-term memory (LSTM) is not the only neural network which learns a context-sensitive language. Second-order sequential cascaded networks (SCNs) are able to induce, from a finite fragment of a context-sensitive language, the means to process strings outside the training set. The dynamical behavior of the SCN is qualitatively distinct from that observed in LSTM networks. Differences in performance and dynamics are discussed.
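For readers unfamiliar with second-order networks, the brief NumPy sketch below shows the multiplicative interaction that characterises an SCN-style step: the weights applied to the input are generated from the current context state, rather than being fixed as in first-order recurrent cells. Dimensions, nonlinearity and initialisation are assumed for illustration only.

```python
# One second-order recurrent step: context h generates the weights for x.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5
W = rng.normal(scale=0.1, size=(n_hidden, n_in, n_hidden))  # second-order tensor
b = np.zeros(n_hidden)

def scn_step(x, h):
    """h selects an effective weight matrix via W[i,j,k] * h[k], which is
    then applied to the input x: a multiplicative h-x interaction."""
    weights = np.einsum("ijk,k->ij", W, h)
    return np.tanh(weights @ x + b)

h = np.full(n_hidden, 0.1)            # small nonzero start so the state evolves
for symbol in np.eye(n_in):           # a one-hot input string (assumed encoding)
    h = scn_step(symbol, h)
```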
Abstract:
This paper presents experimental results of the communication performance evaluation of a prototype ZigBee-based patient monitoring system commissioned in an in-patient floor of a Portuguese hospital (HPG – Hospital Privado de Guimarães). In addition, it revisits relevant problems that affect the performance of non-beacon-enabled ZigBee networks. Initially, the presence of hidden nodes and the impact of sensor node mobility are discussed. It was observed, for instance, that the message delivery ratio in a star network consisting of six wireless electrocardiogram sensor devices may decrease from 100% when no hidden nodes are present to 83.96% when half of the sensor devices are unable to detect the transmissions made by the other half. An additional aspect which affects communication reliability is a deadlock condition that can occur if routers are unable to process incoming packets during the backoff part of the CSMA-CA mechanism. A simple approach to increase the message delivery ratio in this case is proposed and its effectiveness is verified. The discussion and results presented in this paper aim to contribute to the design of efficient networks, and are valid for other scenarios and environments beyond hospitals.
Abstract:
The present work falls within the area of computer networks, covering the protocols and the set of equipment and software required for the administration, control and monitoring of this type of infrastructure. To manage a data network, it is essential to have technical knowledge and documentation that represent the network configuration as faithfully as possible, following step by step the interconnections between the existing equipment and thus offering the most accurate possible view of the installations. The SNMP protocol is used on a large scale, being practically a standard for the administration of networks based on TCP/IP technology. This protocol defines the communication between a manager and an agent, establishing the format and meaning of the messages exchanged between them. It is capable of supporting products from different manufacturers, allowing the administrator to maintain a database with relevant monitoring information from various devices, which can be queried and analysed by NMS software designed specifically for the management of computer networks. The work presented in this dissertation aimed to develop a tool to support the management of the communications infrastructure of Francisco Sá Carneiro Airport, making it possible to know the state of the network elements in real time, to help diagnose possible problems, and to support the planning and expansion of the installed network. The developed tool uses the capabilities of the SNMP protocol to acquire monitoring data from network equipment present in the AFSC network, presenting it in a graphical interface that facilitates the visualization of the parameters and operating alerts most important to network administration.
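A minimal sketch of the kind of SNMP polling such a tool performs, using the pysnmp library. The target address, community string and polled OID (sysUpTime) are placeholders for illustration, not the AFSC configuration.

```python
# Poll a device's sysUpTime over SNMPv2c with pysnmp.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def poll_sysuptime(host, community="public"):
    error_ind, error_stat, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),              # SNMPv2c
        UdpTransportTarget((host, 161), timeout=2),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),  # sysUpTime.0
    ))
    if error_ind or error_stat:
        raise RuntimeError(f"SNMP error: {error_ind or error_stat}")
    return var_binds[0][1]   # timeticks since the agent restarted

print(poll_sysuptime("192.0.2.10"))   # placeholder management address
```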
Abstract:
The ART-WiSe (Architecture for Real-Time communications in Wireless Sensor Networks) framework aims at the design of new communication architectures and mechanisms for time-sensitive Wireless Sensor Networks (WSNs). We adopted a two-tiered architecture where an overlay Wireless Local Area Network (Tier 2) serves as a backbone for a WSN (Tier 1), relying on existing standard communication protocols and commercial off-the-shelf (COTS) technologies – IEEE 802.15.4/ZigBee for Tier 1 and IEEE 802.11 for Tier 2. Along these lines, a test-bed application is being developed for assessing, validating and demonstrating the ART-WiSe architecture. A pursuit-evasion application was chosen since it fulfils a number of requirements: it is feasible and appealing, and it imposes some stress on the architecture in terms of timeliness. To develop the test-bed based on the aforementioned technologies, an implementation of the IEEE 802.15.4/ZigBee protocols is being carried out, since no open-source implementation is available to the community. This paper highlights some relevant aspects of the ART-WiSe architecture, provides some intuition on the protocol stack implementation and presents a general view of the envisaged test-bed application.