907 results for Distributed network protocol
Abstract:
Data mining can be defined as the extraction of implicit, previously unknown, and potentially useful information from data. Numerous researchers have been developing security technology and exploring new methods to detect cyber-attacks with the DARPA 1998 dataset for Intrusion Detection and its modified versions, KDDCup99 and NSL-KDD, but until now no one has examined the performance of the Top 10 data mining algorithms selected by experts in data mining. The classification learning algorithms compared in this thesis are C4.5, CART, k-NN and Naïve Bayes. The performance of these algorithms is compared by accuracy, error rate and average cost on modified versions of the NSL-KDD train and test datasets, where the instances are classified into normal and four cyber-attack categories: DoS, Probing, R2L and U2R. Additionally, the most important features for detecting cyber-attacks, both overall and within each category, are evaluated with Weka's Attribute Evaluator and ranked according to Information Gain. The results show that the classification algorithm with the best performance on the dataset is the k-NN algorithm. The most important features for detecting cyber-attacks are basic features such as the duration of a network connection in seconds, the protocol used for the connection, the network service used, the normal or error status of the connection and the number of data bytes sent. For DoS, Probing and R2L attacks the most important features are basic features and the least important are content features; for U2R attacks, by contrast, content features are the most important.
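The ranking step described here (Information Gain via Weka's Attribute Evaluator) can be approximated outside Weka. Below is a minimal Python sketch using scikit-learn, with mutual information as a stand-in for Information Gain; the array names, the value of k and the data handling are assumptions, not details taken from the thesis.

```python
# A sketch only: mutual information as a stand-in for Weka's Information
# Gain ranking, plus a k-NN accuracy score. X_train/y_train/X_test/y_test
# are assumed to hold an already-encoded NSL-KDD split (all names invented).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier

def rank_features(X_train, y_train, feature_names):
    """Rank features by information gain w.r.t. the class label, best first."""
    gains = mutual_info_classif(X_train, y_train, random_state=0)
    order = np.argsort(gains)[::-1]
    return [(feature_names[i], float(gains[i])) for i in order]

def knn_accuracy(X_train, y_train, X_test, y_test, k=5):
    """Fit k-NN on the train split and report plain accuracy on the test split."""
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)
```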
Abstract:
Wireless sensor networks (WSNs) differ from conventional distributed systems in many aspects. The resource limitations of sensor nodes and the ad-hoc communication and topology of the network, coupled with an unpredictable deployment environment, are difficult non-functional constraints that must be carefully taken into account when developing software systems for a WSN. Thus, more research needs to be done on designing, implementing and maintaining software for WSNs. This thesis aims to contribute to research in this area by presenting an approach to WSN application development that improves the reusability, flexibility, and maintainability of the software. Firstly, we present a programming model and software architecture aimed at describing WSN applications independently of the underlying operating system and hardware. The proposed architecture is described and realized using the Model-Driven Architecture (MDA) standard in order to achieve satisfactory levels of encapsulation and abstraction when programming sensor nodes. In addition, we study different non-functional constraints of WSN applications and propose two approaches to optimizing the application to satisfy these constraints. A real prototype framework was built to demonstrate the solutions developed in the thesis. The framework implements the programming model and the multi-layered software architecture as components. A graphical interface, code generation components and supporting tools were also included to help developers design, implement, optimize, and test the WSN software. Finally, we evaluate and critically assess the proposed concepts. Two case studies are provided to support the evaluation. The first case study, a framework evaluation, is designed to assess the ease with which novice and intermediate users can develop correct and power-efficient WSN applications, the portability achieved by developing applications at a high level of abstraction, and the estimated overhead of the framework in terms of the footprint and executable code size of the application. In the second case study, we discuss the design, implementation and optimization of a real-world application named TempSense, where a sensor network is used to monitor the temperature within an area.
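To make the MDA idea concrete, here is a deliberately toy Python sketch of the split the abstract describes: a platform-independent model of a sensing task and a "transformation" that renders it for one target platform. All names (SensingTask, generate_contiki_stub) are illustrative assumptions, not the thesis's actual framework API.

```python
# A conceptual sketch (not the thesis framework): an MDA-style split between
# a platform-independent application model and platform-specific rendering.
from dataclasses import dataclass

@dataclass
class SensingTask:
    """Platform-independent model (PIM): what to sense, how often, and where to."""
    sensor: str          # e.g. "temperature"
    period_s: int        # sampling period in seconds
    sink: str            # logical destination node

def generate_contiki_stub(task: SensingTask) -> str:
    """Toy platform-specific 'transformation': emit a C-like stub for one target OS."""
    return (
        f"/* generated for Contiki */\n"
        f"etimer_set(&t, CLOCK_SECOND * {task.period_s});\n"
        f"read_sensor(\"{task.sensor}\"); send_to(\"{task.sink}\");\n"
    )

print(generate_contiki_stub(SensingTask("temperature", 60, "gateway")))
```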
Abstract:
This study examined team processes and outcomes among 12 multi-university distributed project teams drawn from 11 universities, during their early and late development stages over a 14-month project period. A longitudinal model of team interaction is presented and tested at the individual level to consider the extent to which both formal and informal network connections (measured as degree centrality) relate to changes in team members' individual perceptions of cohesion and conflict in their teams, and their individual performance as team members over time. The study showed a negative network centrality-cohesion relationship with significant temporal patterns, indicating that as team members perceive less degree centrality in distributed project teams, they report more team cohesion during the last four months of the project. We also found that changes in team cohesion from the first three months (i.e., the early development stage) to the last four months (i.e., the late development stage) of the project relate positively to changes in team member performance. Although degree centrality did not relate significantly to changes in team conflict over time, a strong inverse relationship was found between changes in team conflict and cohesion, suggesting that team conflict captures a different but related aspect of how individuals view their experience of the team process. Changes in team conflict, however, did not relate to changes in team member performance. Ultimately, we showed that individuals who were less central in the network and reported higher levels of team cohesion performed better in distributed teams over time.
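Degree centrality, the network measure used throughout this study, is straightforward to compute. The sketch below uses networkx on an invented five-member team; networkx normalizes degree by the number of other nodes, so values are fractions of possible ties.

```python
# A minimal sketch of the centrality measure used in the study: degree
# centrality over a team's communication network. The edge list is invented.
import networkx as nx

team = nx.Graph()
team.add_edges_from([
    ("ana", "ben"), ("ana", "chen"), ("ben", "chen"),
    ("chen", "dev"), ("dev", "emi"),
])

# Fraction of teammates each member is directly connected to.
for member, centrality in nx.degree_centrality(team).items():
    print(f"{member}: {centrality:.2f}")
```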
Abstract:
The Internet of Things (IoT) is still in its infancy and has attracted much interest in many industrial sectors including medical fields, logistics tracking, smart cities and automobiles. However, as a paradigm, it is susceptible to a range of significant intrusion threats. This paper presents a threat analysis of the IoT and uses an Artificial Neural Network (ANN) to combat these threats. A multi-layer perceptron, a type of supervised ANN, is trained using internet packet traces and then assessed on its ability to thwart Distributed Denial of Service (DDoS/DoS) attacks. This paper focuses on the classification of normal and threat patterns on an IoT network. The ANN procedure is validated against a simulated IoT network. The experimental results demonstrate 99.4% accuracy in detecting various DDoS/DoS attacks.
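As a rough illustration of the classification step, here is a minimal scikit-learn sketch of a supervised multi-layer perceptron separating normal from attack flows. The synthetic data, feature layout and hidden-layer size are assumptions; the paper's actual topology and packet traces are not reproduced.

```python
# A minimal sketch: a supervised MLP classifying normal vs. DDoS/DoS flows.
# The data below is synthetic stand-in material, not internet packet traces.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in data: rows are flows, columns are per-flow features.
X = rng.normal(size=(1000, 4))           # e.g. [pkt_size, iat, pkt_rate, dst_entropy]
y = (X[:, 2] > 0.5).astype(int)          # toy label: high packet rate => attack

X = StandardScaler().fit_transform(X)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
mlp.fit(X[:800], y[:800])
print("held-out accuracy:", mlp.score(X[800:], y[800:]))
```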
Abstract:
The wide adoption of the Internet Protocol (IP) as the de facto protocol for most communication networks has established a need for developing IP-capable data link layer protocol solutions for machine-to-machine (M2M) and Internet of Things (IoT) networks. However, the wireless networks used for M2M and IoT applications usually lack the resources commonly associated with modern wireless communication networks. The existing IP-capable data link layer solutions for wireless IoT networks provide the necessary overhead-minimising and frame-optimising features, but are often built to be compatible only with IPv6 and specific radio platforms. The objective of this thesis is to design an IPv4-compatible data link layer for Netcontrol Oy's narrowband half-duplex packet data radio system. Based on extensive literature research, system modelling and solution concept testing, this thesis proposes the tunslip protocol as the basis for the system's data link layer protocol development. In addition to the functionality of tunslip, this thesis discusses the further network, routing, compression, security and collision avoidance changes required on the radio platform in order for it to be IP compatible while still maintaining its point-to-multipoint and multi-hop network characteristics. The data link layer design consists of the radio application, a dynamic Maximum Transmission Unit (MTU) optimisation daemon and the tunslip interface. The proposed design uses tunslip to create an IP-capable data link protocol interface. The radio application receives data from tunslip, compresses the packets and uses the IP addressing information for radio network addressing and routing before forwarding the message to the radio network. The dynamic MTU size optimisation daemon adjusts the tunslip interface's maximum MTU size according to a link quality assessment calculated from radio network diagnostic data received from the radio application. To determine the usability of tunslip as the basis for the data link layer protocol, the tunslip interface is tested with both IEEE 802.15.4 radios and packet data radios. The test cases measure the radio network's usability for User Datagram Protocol (UDP) based applications without applying any header or content compression. The test results for the packet data radios reveal that the typical success rate for packet reception over a single-hop link is above 99%, with a round-trip delay of 0.315 s for 63 B packets.
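Tunslip carries IP packets over a serial link using SLIP framing (RFC 1055), which is what makes it a plausible data link basis here. Below is a minimal Python sketch of that framing only (END-delimited packets with in-band escaping); it is not Netcontrol's radio code and omits the compression, routing and MTU logic discussed in the thesis.

```python
# SLIP framing per RFC 1055: END delimits packets, and any in-band
# END/ESC byte is replaced by a two-byte escape sequence.
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet: bytes) -> bytes:
    out = bytearray([END])                  # leading END flushes line noise
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def slip_decode(frame: bytes) -> bytes:
    out, esc = bytearray(), False
    for b in frame:
        if esc:
            out.append(END if b == ESC_END else ESC)
            esc = False
        elif b == ESC:
            esc = True
        elif b != END:
            out.append(b)
    return bytes(out)

pkt = bytes([0x45, 0x00, END, ESC, 0x01])   # toy "IP packet" with bytes to escape
assert slip_decode(slip_encode(pkt)) == pkt
```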
Abstract:
The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. This paper presents an artificial immune system (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by collaborative filtering (CF). Natural evolution, and in particular the immune system, has not been designed for classical optimisation. However, for this problem we are not interested in finding a single optimum; rather, we intend to identify a subset of good matches on which recommendations can be based. It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: antigen-antibody interaction for matching and antibody-antibody interaction for diversity. Computational results are presented in support of this conjecture and compared to those found by other CF techniques.
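A schematic Python sketch of the two immune mechanisms named above, as they might look in collaborative filtering: antigen-antibody affinity (here, Pearson correlation over co-rated films) scores candidate neighbours of the target user, and antibody-antibody suppression keeps the selected neighbourhood diverse. The ratings matrix and the suppression threshold are invented; this is not the paper's exact algorithm.

```python
# Affinity selects good matches for the "antigen" (target user); suppression
# between "antibodies" (candidate neighbours) enforces diversity in the pool.
import numpy as np

def affinity(a, b):
    """Pearson correlation over co-rated items as antigen-antibody affinity."""
    mask = (a > 0) & (b > 0)
    if mask.sum() < 2:
        return 0.0
    r = np.corrcoef(a[mask], b[mask])[0, 1]
    return 0.0 if np.isnan(r) else r

def select_antibodies(target, users, pool_size=3, suppress=0.9):
    """Greedily build a diverse pool of good matches for the target user."""
    ranked = sorted(range(len(users)), key=lambda i: -affinity(target, users[i]))
    pool = []
    for i in ranked:
        # Antibody-antibody interaction: skip near-duplicates of pool members.
        if all(affinity(users[i], users[j]) < suppress for j in pool):
            pool.append(i)
        if len(pool) == pool_size:
            break
    return pool

ratings = np.array([            # rows: users, cols: films, 0 = unrated
    [5, 4, 0, 1, 0],
    [5, 5, 0, 1, 2],
    [4, 4, 1, 0, 1],
    [1, 0, 5, 4, 4],
])
print(select_antibodies(ratings[0], ratings[1:]))
```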
Abstract:
The production of artistic prints in the sixteenth- and seventeenth-century Netherlands was an inherently social process. Turning out prints at any reasonable scale depended on fluid coordination between designers, plate cutters, and publishers: roles that, by the sixteenth century, were considered distinguished enough to merit distinct credits engraved on the plates themselves: invenit, fecit/sculpsit, and excudit. While any one designer, plate cutter, or publisher could potentially exercise a great deal of influence over the production of a single print, their individual decisions (Whom to select as an engraver? What subjects to create for a print design? What market to sell to?) would have been variously constrained or encouraged by their position in this larger network (Whom do they already know? And whom, in turn, do their contacts know?). This dissertation addresses the impact of these constraints and affordances through the novel application of computational social network analysis to major databases of surviving prints from this period. This approach is used to evaluate several questions about trends in early modern print production practices that have not been satisfactorily addressed by traditional literature based on case studies alone: Did the social capital demanded by print production result in centralized or distributed production of prints? When, and to what extent, did printmakers and publishers in the Low Countries favor international over domestic collaborators? And were printmakers under the same pressure as painters to specialize in particular artistic genres? This dissertation ultimately suggests how simple professional incentives endemic to the practice of printmaking may, at large scales, have resulted in quite complex patterns of collaboration and production. The framework of network analysis surfaces the role of certain printmakers who tend to be neglected in aesthetically focused histories of art. This approach also highlights important issues concerning art historians' balancing of individual influence against the impact of longue durée trends. Finally, this dissertation also raises questions about the current limitations and future possibilities of combining computational methods with cultural heritage datasets in the pursuit of historical research.
Abstract:
Securing e-health applications in the context of the Internet of Things (IoT) is challenging. Indeed, resource scarcity in such environments hinders the implementation of existing standards-based protocols. Among these protocols, MIKEY (Multimedia Internet KEYing) aims at establishing security credentials between two communicating entities. However, the existing MIKEY modes fail to meet IoT specificities. In particular, the pre-shared key mode is energy efficient but suffers from severe scalability issues. On the other hand, asymmetric modes such as the public key mode are scalable but highly resource consuming. To address this issue, we combine two previously proposed approaches to introduce a new hybrid MIKEY mode. Relying on a cooperative approach, a set of third parties is used to offload heavy computational operations from the constrained nodes. In this way, the pre-shared key mode is used in the constrained part of the network, while the public key mode is used in the unconstrained part. Preliminary results show that our proposed mode is energy preserving while its security properties remain intact.
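A schematic sketch of the hybrid idea, not the MIKEY wire protocol: the constrained node does only cheap symmetric-key work against a pre-shared secret, while a third party would run the expensive public-key mode toward the unconstrained network on its behalf. The KDF, labels and variable names are assumptions.

```python
# Cost asymmetry in the hybrid mode: the constrained side touches only
# symmetric primitives; public-key work is delegated to the third party.
import hmac, hashlib, os

def kdf(secret: bytes, label: bytes) -> bytes:
    """Toy key derivation: HMAC-SHA256 of a label under a shared secret."""
    return hmac.new(secret, label, hashlib.sha256).digest()

psk = os.urandom(32)                 # pre-shared between node and third party
nonce = os.urandom(16)

# Constrained side: symmetric-only work (cheap).
node_key = kdf(psk, b"session" + nonce)

# Third party: would run the public-key MIKEY mode toward the remote peer
# here, then hand that peer the same session key over the secured channel.
proxy_key = kdf(psk, b"session" + nonce)
assert node_key == proxy_key
```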
Abstract:
Real-time mechanisms and techniques are used when a system, whether embedded or large-scale, must exhibit characteristics that assure its quality of service. Real-time systems are thus defined as systems subject to strict temporal constraints, which must offer high levels of reliability in order to guarantee, in every instance, the timely operation of the system. Due to the growing complexity of embedded systems, distributed architectures are frequently employed, where each module is normally responsible for a single function. In these cases a communication medium is needed so that the modules can communicate with each other and fulfil the desired functionality. Owing to its high capacity and low cost, Ethernet technology has been the subject of study aimed at turning it into a communication medium with the quality of service characteristic of real-time systems. In response to this need, the HaRTES switch was developed at the University of Aveiro; it can manage its resources dynamically so as to provide real-time guarantees to the network in which it is deployed. However, for a network architecture to provide quality-of-service guarantees to its nodes, it requires flow specification, correct traffic forwarding, resource reservation, admission control and packet scheduling. Unfortunately, although the HaRTES switch has all these characteristics, it does not support standard protocols. This document presents the work carried out to integrate the SRP protocol into the HaRTES switch.
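Of the requirements listed above, admission control is the easiest to illustrate. The Python sketch below admits a new flow's bandwidth reservation only while the total stays within the capacity set aside for real-time traffic; the flow parameters and the 75 Mbit/s budget are invented, and this is not the HaRTES or SRP admission test.

```python
# Utilization-based admission control: accept a new real-time flow only if
# the sum of reserved bandwidths stays within the real-time budget.
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    frame_bytes: int     # bytes per frame
    period_ms: float     # one frame every period

    def bandwidth_bps(self) -> float:
        return self.frame_bytes * 8 * 1000 / self.period_ms

def admit(flows, candidate, rt_capacity_bps=75e6):
    """Accept the candidate flow only if reserved bandwidth stays in budget."""
    reserved = sum(f.bandwidth_bps() for f in flows)
    return reserved + candidate.bandwidth_bps() <= rt_capacity_bps

flows = [Flow("control", 128, 1.0), Flow("video", 1400, 0.25)]
print(admit(flows, Flow("audio", 256, 1.0)))   # True: ~50 Mbit/s total fits
```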
Abstract:
Introduction and background: Survival following critical illness is associated with a significant burden of physical, emotional and psychosocial morbidity. Recovery can be protracted and incomplete, with important and sustained effects upon everyday life, including family life, social participation and return to work. In stark contrast with other critically ill patient groups (e.g., those recovering from cardiothoracic surgery), there are comparatively few interventional studies of rehabilitation among the general intensive care unit patient population. This paper outlines the protocol for a substudy of the RECOVER study, a randomised controlled trial evaluating a complex intervention of enhanced ward-based rehabilitation for patients following discharge from intensive care. Methods and analysis: The RELINQUISH study is a nested longitudinal, qualitative study of family support and perceived healthcare needs among RECOVER participants at key stages of the recovery process, up to 1 year following hospital discharge. Its central premise is that recovery is a dynamic process wherein patients' needs evolve over time. RELINQUISH is novel in that we incorporate two parallel strategies into our data analysis: (1) a pragmatic, health services-oriented approach using an a priori analytical construct, the 'Timing it Right' framework, and (2) a constructivist grounded theory approach that allows new themes and theoretical understandings to emerge from the data. We will subsequently use Qualitative Health Needs Assessment methodology to inform the development of timely and responsive healthcare interventions throughout the recovery process. Ethics and dissemination: The protocol has been approved by the Lothian Research Ethics Committee (protocol number HSRU011). The study has been added to the UK Clinical Research Network Database (study ID 9986). The authors will disseminate the findings in peer-reviewed publications and to relevant critical care stakeholder groups.
Abstract:
Social networks are a recent phenomenon of communication, with a high prevalence of young users. This concept serves as a motto for a multidisciplinary project which aims to create a simple communication network using light as the transmission medium. Mixed teams, composed of students from secondary and higher education schools, are partners in the development of an optical transceiver. An LED lamp array and a small photodiode serve as the optical transmitter and receiver, respectively. With several transceivers aligned with each other, this configuration creates a ring communication network, enabling the exchange of messages between users. Through this project, some concepts addressed in secondary school physics classes (e.g. photoelectric phenomena and the properties of light) are experimentally verified and used to communicate, in a classroom or a laboratory.
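A toy sketch of the ring topology described here: each transceiver relays a frame to its neighbour until the addressee reads it off the ring. Node names and the frame format are invented; real transceivers would of course signal over light rather than function calls.

```python
# Ring forwarding: a frame hops from transceiver to transceiver until it
# reaches its destination node, which removes it from the ring.
def ring_send(nodes, src, dst, text):
    frame = {"src": src, "dst": dst, "text": text}
    i = nodes.index(src)
    while True:
        i = (i + 1) % len(nodes)            # light travels to the next transceiver
        print(f"{nodes[i]} receives frame from ring")
        if nodes[i] == frame["dst"]:
            return f"{frame['dst']} reads: {frame['text']}"

print(ring_send(["alice", "bruno", "carla"], "alice", "carla", "hello"))
```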
Abstract:
Wydział Matematyki i Informatyki UAM (Faculty of Mathematics and Computer Science, Adam Mickiewicz University)
Abstract:
This dissertation develops an operational control platform for intelligent buildings using a SCADA (Supervisory Control And Data Acquisition) system. This SCADA system integrates different types of information coming from the several technologies present in modern buildings (control of ventilation, temperature, illumination, etc.). The developed control strategy implements a hierarchical cascade controller where the inner loops are performed by local PLCs (Programmable Logic Controllers), and the outer loop is managed by the centralized SCADA system, which interacts with the entire local PLC network. In this dissertation a predictive controller is implemented at the centralized SCADA platform. Tests applied to the control of temperature and luminosity in large-area rooms are presented. The developed predictive controller tries to optimize the satisfaction of explicit user preferences coming from several distributed user interfaces, subject to the constraint of minimizing energy waste. In order to run the predictive controller at the SCADA platform, a communication channel was developed to allow communication between the SCADA application and the MATLAB application where the predictive controller runs.
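A minimal sketch of the outer-loop idea: at each step a receding-horizon controller picks the heating input that best trades user comfort against energy use, and only the first move is sent to the inner PLC loop as a setpoint. The toy thermal model, weights and discretised control set are assumptions, not the dissertation's controller.

```python
# Receding-horizon (predictive) control of room temperature: predict the
# effect of each candidate heating level, pick the cheapest, apply one step.
def predict(temp, u, steps, loss=0.1, gain=0.5):
    """Toy thermal model: temperature drifts down, heating pushes it up."""
    for _ in range(steps):
        temp += gain * u - loss
    return temp

def mpc_step(temp, preferred, horizon=5, energy_weight=0.2):
    """Choose the heating level (0..1 in coarse steps) minimizing the cost."""
    candidates = [i / 10 for i in range(11)]
    def cost(u):
        comfort = abs(predict(temp, u, horizon) - preferred)
        return comfort + energy_weight * u * horizon   # discomfort + energy use
    return min(candidates, key=cost)

setpoint_u = mpc_step(temp=19.0, preferred=22.0)
print("heating command sent to PLC inner loop:", setpoint_u)
```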
Abstract:
Current trends in broadband mobile networks point towards placing different capabilities at the edge of the mobile network in a centralised way. On one hand, the split of the eNB between baseband processing units and remote radio heads makes it possible to process some of the protocols in centralised premises, likely with virtualised resources. On the other hand, mobile edge computing makes use of processing and storage capabilities close to the air interface in order to deploy optimised services with minimum delay. The confluence of both trends is a hot topic in the definition of future 5G networks. The full centralisation of both technologies in cloud data centres imposes stringent requirements on the fronthaul connections in terms of throughput and latency. Therefore, all those cells with limited network access would not be able to offer these types of services. This paper proposes a solution for these cases, based on the placement of processing and storage capabilities close to the remote units, which is especially well suited for the deployment of clusters of small cells. The proposed cloud-enabled small cells include a highly efficient microserver with a limited set of virtualised resources offered to the cluster of small cells. As a result, a light data centre is created and used in common for deploying centralised eNB and mobile edge computing functionalities. The paper covers the proposed architecture, with special focus on the integration of both aspects, and possible scenarios of application.