963 results for DATA LINK LAYER
Abstract:
The wide adoption of the Internet Protocol (IP) as the de facto protocol for most communication networks has established a need to develop IP-capable data link layer protocol solutions for Machine-to-Machine (M2M) and Internet of Things (IoT) networks. However, the wireless networks used for M2M and IoT applications usually lack the resources commonly associated with modern wireless communication networks. The existing IP-capable data link layer solutions for wireless IoT networks provide the necessary overhead-minimising and frame-optimising features, but are often built to be compatible only with IPv6 and specific radio platforms. The objective of this thesis is to design an IPv4-compatible data link layer for Netcontrol Oy's narrow-band half-duplex packet data radio system. Based on extensive literature research, system modelling and solution concept testing, this thesis proposes the use of the tunslip protocol as the basis for the system's data link layer protocol development. In addition to the functionality of tunslip, this thesis discusses the further network, routing, compression, security and collision avoidance changes required of the radio platform for it to be IP compatible while still maintaining its point-to-multipoint and multi-hop network characteristics. The data link layer design consists of the radio application, a dynamic Maximum Transmission Unit (MTU) optimisation daemon and the tunslip interface. The proposed design uses tunslip to create an IP-capable data link protocol interface. The radio application receives data from tunslip, compresses the packets, and uses the IP addressing information for radio network addressing and routing before forwarding the message to the radio network. The dynamic MTU optimisation daemon adjusts the tunslip interface's MTU according to a link quality assessment calculated from radio network diagnostic data received from the radio application. To determine the usability of tunslip as the basis for a data link layer protocol, the tunslip interface is tested with both IEEE 802.15.4 radios and packet data radios. The test cases measure radio network usability for User Datagram Protocol (UDP) based applications without applying any header or content compression. The test results for the packet data radios show that the typical success rate for packet reception over a single-hop link is above 99%, with a round-trip delay of 0.315 s for 63 B packets.
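Tunslip frames IP packets for the serial link using SLIP (RFC 1055). As a point of reference for the protocol discussed above, here is a minimal sketch of that framing in Python; the constants come from the RFC, while the function names are ours:

# SLIP framing per RFC 1055, on which tunslip is built.
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet: bytes) -> bytes:
    """Wrap one IP packet in a SLIP frame for the serial link."""
    out = bytearray([END])              # leading END flushes line noise
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)                     # closing frame delimiter
    return bytes(out)

def slip_decode(frame: bytes) -> bytes:
    """Recover the packet from a SLIP frame (delimiters stripped)."""
    out, esc = bytearray(), False
    for b in frame:
        if esc:
            out.append(END if b == ESC_END else ESC)
            esc = False
        elif b == ESC:
            esc = True
        elif b != END:
            out.append(b)
    return bytes(out)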
Abstract:
The paper presents a link layer stack for wireless sensor networks, which consists of the Burst-aware Energy-efficient Adaptive Medium access control (BEAM) and the Hop-to-Hop Reliability (H2HR) protocol. BEAM can operate with short beacons that announce data transmissions, or include the data within the beacons themselves. Duty cycles can be adapted by a traffic prediction mechanism that indicates pending packets destined for a node and estimates its wake-up times. H2HR takes advantage of information provided by BEAM, such as neighbour and transmission information, to perform per-hop congestion control. We justify the design decisions with measurements in a real-world wireless sensor network testbed and compare the performance with other link layer protocols.
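The paper's own adaptation rules are not reproduced here; the following is a purely hypothetical Python sketch of the general idea behind beacon-driven duty-cycle adaptation: extend the wake period when a beacon announces packets pending for this node, and decay back toward the base duty cycle otherwise. All names and parameters below are illustrative, not BEAM's:

# Hypothetical sketch of beacon-driven duty-cycle adaptation in the
# spirit of BEAM; the real protocol's rules are defined in the paper.
class DutyCycle:
    def __init__(self, base_awake=0.01, period=1.0, max_awake=0.5):
        self.awake = base_awake          # seconds awake per period
        self.base, self.period, self.cap = base_awake, period, max_awake

    def on_beacon(self, pending_for_me: int) -> None:
        if pending_for_me > 0:
            # traffic announced for us: stay awake longer this period
            self.awake = min(self.cap, self.awake * 2)
        else:
            # nothing pending: decay back toward the base duty cycle
            self.awake = max(self.base, self.awake / 2)

    def ratio(self) -> float:
        """Fraction of each period spent awake."""
        return self.awake / self.period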
Abstract:
Final Master's project submitted for the degree of Master in Electronics and Telecommunications Engineering.
Abstract:
The purpose of the work was to realize a high-speed digital data transfer system for the RPC muon chambers in the CMS experiment at CERN's new LHC accelerator. This large-scale system took many years and many stages of prototyping to develop, and required the participation of tens of people. The system interfaces to the Frontend Boards (FEB) at the 200,000-channel detector and to the trigger and readout electronics in the control room of the experiment. The distance between these two is about 80 metres, and the speed required of the optical links was pushing the limits of available technology when the project was started. Here, as in many other aspects of the design, it was assumed that the features of readily available commercial components would develop in the course of the design work, just as they did. By choosing a high speed it was possible to multiplex the data from some of the chambers into the same fibres, reducing the number of links needed. Further reduction was achieved by employing zero suppression and data compression, so that a total of only 660 optical links were needed. Another requirement, which conflicted somewhat with choosing the components as late as possible, was that the design needed to be radiation tolerant to an ionizing dose of 100 Gy and to have a moderate tolerance to Single Event Effects (SEEs). This required several radiation test campaigns, and eventually led to ASICs being chosen for some of the critical parts. The system was made to be as reconfigurable as possible. The reconfiguration needs to be done remotely, as the electronics is not accessible except during some short and rare service breaks once the accelerator starts running. Therefore reconfigurable logic is used extensively, and the firmware development for the FPGAs constituted a sizable part of the work. Special techniques were needed there too, to achieve the required radiation tolerance. The system has been demonstrated to work in several laboratory and beam tests, and we are now waiting to see it in action when the LHC starts running in the autumn of 2008.
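The zero suppression mentioned above has a simple core: transmit only the channels that fired, as (index, word) pairs, instead of every channel word. A minimal illustrative sketch in Python, not the system's actual on-detector format:

# Hypothetical sketch of the zero-suppression idea; the real link
# format and word widths are defined by the system, not shown here.
def zero_suppress(words: list[int]) -> list[tuple[int, int]]:
    """Keep (channel index, data word) pairs for non-zero channels only."""
    return [(i, w) for i, w in enumerate(words) if w != 0]

def expand(pairs: list[tuple[int, int]], n_channels: int) -> list[int]:
    """Rebuild the full channel list at the readout side."""
    words = [0] * n_channels
    for i, w in pairs:
        words[i] = w
    return words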
Abstract:
The research in this Master's thesis project concerns Big Data transfer over a parallel data link, and my main objective is to assist the Saint-Petersburg National Research University ITMO research team in accomplishing this project and applying Green IT methods to the data transfer system. The goal of the team is to transfer Big Data over parallel data links using an SDN OpenFlow approach. My task as a team member was to compare existing data transfer applications, to determine which achieves the highest transfer speed under which circumstances, and to explain the reasons. In the context of this thesis, five different utilities were compared: Fast Data Transfer (FDT), BBCP, BBFTP, GridFTP, and FTS3. A number of scripts were developed to create random binary data (incompressible, to give a fair comparison between utilities), execute the utilities with specified parameters, create log files, record results and system parameters, and plot graphs comparing the results. Transferring such an enormous variety of data can take a long time, and hence the need arises to reduce energy consumption and make the transfers greener. As part of the Green IT approach, our team used a cloud computing infrastructure called OpenStack. It is more efficient to allocate a specific amount of hardware resources to test different scenarios than to use all the resources of our testbed. Testing our implementation on the OpenStack infrastructure shows that the virtual channel carries no other traffic, so we can achieve the highest possible throughput. With the final results in hand, we can identify which utilities provide faster data transfer in different scenarios with specific TCP parameters, and use them on real network data links.
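The point of using random data is that it does not compress, so every utility moves the same effective volume. A minimal sketch of such a generator script in Python; the file name and size are illustrative, not the project's actual parameters:

# Incompressible test data for a fair benchmark: random bytes
# cannot be shrunk by any utility's built-in compression.
import os

def make_test_file(path: str, size_bytes: int, chunk: int = 1 << 20) -> None:
    """Write size_bytes of incompressible random data to path."""
    with open(path, "wb") as f:
        remaining = size_bytes
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(os.urandom(n))
            remaining -= n

make_test_file("testdata.bin", 1 << 30)   # e.g. a 1 GiB sample file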
Abstract:
Written for communications and electronics engineers, technicians and students, this book begins with an introduction to data communications and goes on to explain the concept of layered communications. Other chapters deal with physical communications channels, baseband digital transmission, analog data transmission, error control and data compression codes, physical layer standards, the data link layer, the higher layers of the protocol hierarchy, and local area networks (LANs). Finally, the book explores some likely future developments.
Abstract:
This article presents a fieldbus simulation platform and its remote access interface, which together enable a wide range of experiments in which users can configure operation sequences and procedures typical of Foundation Fieldbus systems. The simulation system was developed using LabVIEW, with requirements for deterministic execution, together with a web-based course management framework, Moodle. Results were obtained through three different evaluations: schedule table execution, simulator functionality and, finally, simulator productivity and achievement. The evaluation attests that this new tool is feasible and can be applied to training for fieldbus automation systems, given its robustness and stability in tests and the positive feedback from users.
Abstract:
The overall goal of the REMPLI project is to design and implement a communication infrastructure for distributed data acquisition and remote control operations using the power grid as the communication medium. The primary target application is remote meter reading with high time resolution, where the meters can be energy, heat, gas or water meters. The users of the system (e.g. utility companies) will benefit from the REMPLI system by gaining more detailed information about how energy is consumed by end-users. In this context, power-line communication (PLC) is deployed to cover the distance between the utility company's private network and the end user. This document specifies a protocol for real-time PLC in the framework of the REMPLI project, mainly comprising the Network Layer and Data Link Layer. The protocol was designed taking into consideration the specific aspects of the network: different network topologies (star, tree, ring, multiple paths), dynamic changes in network topology (due to network maintenance, hazards, etc.), and communication lines strongly affected by noise.
Abstract:
In this paper we address the real-time capabilities of P-NET, a multi-master fieldbus standard based on a virtual token passing scheme. We show how P-NET's medium access control (MAC) protocol is able to guarantee a bounded access time for message requests. We then propose a model for implementing fixed priority-based dispatching mechanisms at each master's application level. In this way, we diminish the impact of the first-come-first-served (FCFS) policy that P-NET uses at the data link layer. The proposed model raises several issues well known within the real-time systems community: message release jitter; pre-run-time schedulability analysis in non-pre-emptive contexts; and non-independence of tasks at the application level. We identify these issues in the proposed model and show how results available for priority-based task dispatching can be adapted to encompass priority-based message dispatching in P-NET networks.
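The bounded access time follows from the token rotation itself: a request issued just after the token leaves a master waits, at worst, for every other master to send one worst-case message plus the token passing overheads. A back-of-the-envelope sketch in Python, not the paper's analysis, with all parameter names ours:

# Rough upper bound on medium access delay under virtual token passing:
# (n - 1) worst-case messages by the other masters, plus n token passes.
def worst_case_access_time(n_masters: int, c_max: float, t_pass: float) -> float:
    """Bound in the same time unit as c_max and t_pass."""
    return (n_masters - 1) * c_max + n_masters * t_pass

# e.g. 8 masters, 2.5 ms worst-case message, 0.1 ms token pass
print(worst_case_access_time(8, 2.5e-3, 1e-4))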
Abstract:
The integration of wired and wireless technologies in modern manufacturing plants is now of paramount importance for the competitiveness of any industry. As PROFIBUS is the most widely used technology for industrial communications, several solutions have been proposed to provide PROFIBUS networks with wireless communication. One of them, the bridge-based hybrid wired/wireless PROFIBUS network approach, proposes an architecture in which the Intermediate Systems operate at the Data Link Layer level, as bridges. In this paper, we propose an architecture for the implementation of such a bridge, together with the protocols required to handle communication between stations in different domains and the mobility of wireless stations.
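The paper's bridge protocols are specific to PROFIBUS and not reproduced here; as background, a hypothetical Python sketch of what "operating at the Data Link Layer as a bridge" generally means: learn each station's domain from observed source addresses, then forward frames only toward the destination's domain. All names below are ours:

# Generic learning-bridge sketch (illustrative only; the paper's
# inter-domain and mobility protocols differ from this).
class Bridge:
    def __init__(self, domains):
        self.domains = domains          # domain name -> port object with .send()
        self.table = {}                 # station address -> domain name

    def on_frame(self, src, dst, frame, from_domain):
        self.table[src] = from_domain   # learning also tracks mobile stations
        to_domain = self.table.get(dst)
        if to_domain is None:
            # unknown destination: flood to all other domains
            for name, port in self.domains.items():
                if name != from_domain:
                    port.send(frame)
        elif to_domain != from_domain:
            self.domains[to_domain].send(frame)
        # destination in the same domain: no forwarding needed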
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods that introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for the network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached through their main design choices: the selection of a topology, routing protocol and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented: the introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault-tolerant network topologies and routing algorithms. Both of these approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
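As an illustration of the error control coding discussed above, here is the textbook Hamming(7,4) single-error-correcting code in Python; it corrects any one flipped bit per codeword, the kind of transient fault such coding targets on a link. This is a standard example, not a design from the thesis:

# Hamming(7,4): 4 data bits protected by 3 parity bits.
def encode(d):                      # d: four data bits, e.g. [1, 0, 1, 1]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4               # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4               # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4               # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):                      # c: seven received bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # recompute each parity check
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3 # position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1        # correct the single-bit error
    return [c[2], c[4], c[5], c[6]] # extract the data bits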
Abstract:
This paper describes an experiment in designing, implementing and testing a Transport layer cluster scheduling and dispatching architecture. The motivation for the experiment was the hypothesis that a Transport layer clustering solution may offer advantages over the existing industry-standard Network layer and Data Link Layer approaches. The critical success factors initially established to guide and evaluate the experiment were reduced dispatcher workload, reduced dispatcher internal state memory requirements, resilience to distributed denial of service, and cluster software design simplicity. The functional design stage of the experiment produced a Transport layer strategy for scheduling and load balancing based on the specification of two new TCP options. Implementation required introducing the newly specified TCP options into the Linux (2.4) kernel. The implementation produced an extended Linux Socket API to facilitate user-process access to the additional TCP capability. The testing stage of the experiment confirmed the operational efficiency of the solution.
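The paper's actual option numbers and API extensions are not given here; as a purely hypothetical sketch, a kernel-level TCP extension is typically surfaced to user processes through setsockopt() with a new option constant, along these lines (the name and option id below are invented for illustration):

# Hypothetical use of an extended socket API for a new TCP option.
import socket

TCP_CLUSTER_DISPATCH = 0x1000          # invented option id, not the paper's

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.setsockopt(socket.IPPROTO_TCP, TCP_CLUSTER_DISPATCH, 1)
except OSError:
    pass                               # stock kernels reject the unknown option
s.close()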