773 results for Data transmission systems.


Relevância:

90.00%

Publicador:

Resumo:

An increasing number of applications, such as distributed interactive simulation, live auctions, distributed games and collaborative systems, require the network to provide a reliable multicast service. This service enables one sender to reliably transmit data to multiple receivers. Reliability is traditionally achieved by having receivers send negative acknowledgments (NACKs) to request from the sender the retransmission of lost (or missing) data packets. However, this Automatic Repeat reQuest (ARQ) approach results in the well-known NACK implosion problem at the sender. Many reliable multicast protocols have recently been proposed to reduce NACK implosion, but the message overhead due to NACK requests remains significant. Another approach, based on Forward Error Correction (FEC), requires the sender to encode additional redundant information so that a receiver can independently recover from losses. However, due to the lack of feedback from receivers, it is impossible for the sender to determine how much redundancy is needed. In this paper, we propose a new reliable multicast protocol, called ARM for Adaptive Reliable Multicast. Our protocol integrates ARQ and FEC techniques. The objectives of ARM are to (1) reduce the message overhead due to NACK requests, (2) reduce the amount of data transmitted, and (3) reduce the time it takes for all receivers to receive the data intact (without loss). During data transmission, the sender periodically informs the receivers of the number of packets that are yet to be transmitted. Based on this information, each receiver predicts whether this amount is enough to recover its losses. Only if it is not enough does the receiver request the sender to encode additional redundant packets. Using ns simulations, we show the superiority of our hybrid ARQ-FEC protocol over the well-known Scalable Reliable Multicast (SRM) protocol.
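The receiver-side decision described above can be summarised in a few lines. The sketch below is a minimal illustration with hypothetical names and a simple erasure-code assumption (any packet still to come can serve as a repair symbol); it is not the ARM implementation.

```python
# Minimal sketch (hypothetical names) of the receiver-side decision in a
# hybrid ARQ-FEC scheme: the sender periodically announces how many packets
# are still to be sent; the receiver only asks for extra redundancy when
# that amount cannot cover its current losses.

def needs_more_redundancy(lost_packets: int,
                          redundancy_received: int,
                          packets_still_to_come: int) -> bool:
    """Return True if the announced remaining packets (all usable as repair
    symbols under an erasure code) cannot repair the losses seen so far."""
    recoverable = redundancy_received + packets_still_to_come
    return lost_packets > recoverable

# Example: 5 losses, 1 repair packet already held, sender announces 3 more.
if needs_more_redundancy(lost_packets=5, redundancy_received=1, packets_still_to_come=3):
    print("send NACK asking the sender to encode extra redundant packets")
```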

Relevância:

90.00%

Publicador:

Resumo:

The demand for optical bandwidth continues to increase year on year and is being driven primarily by entertainment services and video streaming to the home. Current photonic systems are coping with this demand by increasing data rates through faster modulation techniques, spectrally efficient transmission systems and by increasing the number of modulated optical channels per fibre strand. Such photonic systems are large and power hungry due to the high number of discrete components required in their operation. Photonic integration offers excellent potential for combining otherwise discrete system components on a single device to provide robust, power-efficient and cost-effective solutions. In particular, the design of optical modulators has been an area of immense interest in recent times. Not only has research been aimed at developing modulators with faster data rates, but there has also been a push towards making modulators as compact as possible. Mach-Zehnder modulators (MZMs) have proven to be highly successful in many optical communication applications. However, due to the relatively weak electro-optic effect on which they are based, they remain large, with typical device lengths of 4 to 7 mm, while requiring a travelling-wave structure for high-speed operation. Nested MZMs have been extensively used in the generation of advanced modulation formats, where multi-symbol transmission can be used to increase data rates at a given modulation frequency. Such nested structures have high losses and require both complex fabrication and packaging. In recent times, it has been shown that electro-absorption modulators (EAMs) can be used in a specific arrangement to generate Quadrature Phase Shift Keying (QPSK) modulation. EAM-based QPSK modulators have increased potential for integration and can be made significantly more compact than MZM-based modulators. However, such modulator designs suffer from losses in excess of 40 dB, which limits their use in practical applications. The work in this thesis has focused on how these losses can be reduced by using photonic integration. In particular, the integration of multiple lasers with the modulator structure was considered an excellent means of reducing fibre coupling losses while maximising the optical power on chip. A significant difficulty when using multiple integrated lasers in such an arrangement is ensuring coherence between the integrated lasers. The work investigated in this thesis demonstrates for the first time how optical injection locking between discrete lasers on a single photonic integrated circuit (PIC) can be used in the generation of coherent optical signals. This was done by first considering the monolithic integration of lasers and optical couplers to form an on-chip optical power splitter, and then examining the behaviour of a mutually coupled system of integrated lasers. By operating the system in a highly asymmetric coupling regime, a stable phase-locking region was found between the integrated lasers. It was then shown that in this stable phase-locked region the optical outputs of each laser were coherent with each other and phase locked to a common master laser.

Relevância:

90.00%

Publicador:

Resumo:

The purpose of this note is to discuss the role of high-frequency data in ecological modelling and to identify some of the data requirements for the further development of ecological models for operational oceanography. There is a pressing requirement for the establishment of data acquisition systems for key ecological variables with high spatial and temporal coverage. Such a system will facilitate the development of operational models. It is envisaged that both in-situ and remotely sensed measurements will need to be combined to achieve this aim.

Relevância:

90.00%

Publicador:

Resumo:

Noise is one of the main factors degrading the quality of original multichannel remote sensing data, and its presence influences classification efficiency, object detection, etc. Thus, pre-filtering is often used to remove noise and improve the solving of final tasks of multichannel remote sensing. Recent studies indicate that the classical model of additive noise is not adequate for images formed by modern multichannel sensors operating in visible and infrared bands. However, this fact is often ignored by researchers designing noise removal methods and algorithms. Because of this, we focus on the classification of multichannel remote sensing images in the case of signal-dependent noise present in component images. Three approaches to filtering of multichannel images for the considered noise model are analysed, all based on the discrete cosine transform (DCT) applied in blocks. The study is carried out not only in terms of the conventional efficiency metric used in filtering (MSE) but also in terms of multichannel data classification accuracy (probability of correct classification, confusion matrix). The proposed classification system combines a pre-processing stage, where a DCT-based filter processes the blocks of the multichannel remote sensing image, and the classification stage. Two modern classifiers are employed: a radial basis function neural network and support vector machines. Simulations are carried out for a three-channel image from the Landsat TM sensor. Different cases of learning are considered: using noise-free samples of the test multichannel image, the noisy multichannel image and the pre-filtered one. It is shown that the use of the pre-filtered image for training produces better classification than learning on the noisy image. It is demonstrated that the best results for both groups of quantitative criteria are obtained when the proposed 3D discrete cosine transform filter equipped with a variance stabilizing transform is applied. The classification results obtained for data pre-filtered in different ways are in agreement for both considered classifiers. A comparison of classifier performance is carried out as well. The radial basis function neural network classifier is less sensitive to noise in the original images, but after pre-filtering the performance of both classifiers is approximately the same.
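As a rough illustration of the pre-filtering stage, the sketch below applies a variance-stabilizing (Anscombe) transform followed by hard thresholding of block DCT coefficients. It is a simplified 2D per-channel stand-in for the proposed 3D DCT filter, with hypothetical parameter values, not the paper's implementation.

```python
# Minimal sketch of DCT-based block denoising preceded by a variance-stabilizing
# (Anscombe) transform, which turns approximately Poisson-like signal-dependent
# noise into roughly unit-variance additive noise so a fixed threshold can be used.
import numpy as np
from scipy.fft import dctn, idctn

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse; unbiased inverses exist but are omitted here.
    return (y / 2.0) ** 2 - 3.0 / 8.0

def dct_block_filter(channel, block=8, threshold=3.0):
    """Hard-threshold DCT coefficients in non-overlapping blocks of one channel."""
    h, w = channel.shape
    out = channel.astype(float)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            coeffs = dctn(out[i:i + block, j:j + block], norm="ortho")
            coeffs[np.abs(coeffs) < threshold] = 0.0
            out[i:i + block, j:j + block] = idctn(coeffs, norm="ortho")
    return out

def prefilter(image_channels):
    """Per-channel pre-filtering of a multichannel image before classification."""
    return [inverse_anscombe(dct_block_filter(anscombe(c))) for c in image_channels]
```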

Relevância:

90.00%

Publicador:

Resumo:

Multicore computational accelerators such as GPUs are now commodity components for high-performance computing at scale. While such accelerators have been studied in some detail as stand-alone computational engines, their integration in large-scale distributed systems raises new challenges and trade-offs. In this paper, we present an exploration of resource management alternatives for building asymmetric accelerator-based distributed systems. We present these alternatives in the context of a capabilities-aware framework for data-intensive computing, which uses an implementation of the MapReduce programming model for accelerator-based clusters that is enhanced relative to the state of the art. The framework can transparently utilize heterogeneous accelerators to deliver high performance with low programming effort. Our work is the first to compare heterogeneous types of accelerators, GPUs and Cell processors, in the same environment, and the first to explore the trade-offs between compute-efficient and control-efficient accelerators in data-intensive systems. Our investigation shows that our framework scales well with the number of compute nodes. Furthermore, it runs simultaneously on two different types of accelerators, successfully adapts to the resource capabilities, and performs 26.9% better on average than a static execution approach.
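A minimal sketch of the capability-aware idea, with hypothetical names: map tasks are handed out in proportion to each accelerator's observed throughput rather than according to a static split. This is an illustration of the general principle, not the framework's scheduler.

```python
# Capability-aware work partitioning for a heterogeneous MapReduce-style runtime:
# assign map tasks in proportion to each accelerator's measured throughput.

def partition_tasks(num_tasks, measured_throughput):
    """measured_throughput: dict mapping device name -> tasks/second observed
    so far; returns dict mapping device name -> number of tasks to assign."""
    total = sum(measured_throughput.values())
    shares = {dev: int(round(num_tasks * tput / total))
              for dev, tput in measured_throughput.items()}
    # Fix rounding so every task is assigned exactly once.
    leftover = num_tasks - sum(shares.values())
    fastest = max(measured_throughput, key=measured_throughput.get)
    shares[fastest] += leftover
    return shares

# Example: a GPU observed at 320 tasks/s and a Cell blade at 110 tasks/s.
print(partition_tasks(1000, {"gpu0": 320.0, "cell0": 110.0}))
```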

Relevância:

90.00%

Publicador:

Resumo:

The Intrusion Detection System (IDS) is a common means of protecting networked systems from attack or malicious misuse. The deployment of an IDS can take many different forms depending on protocols, usage and cost. This is particularly true of Wireless Intrusion Detection Systems (WIDS), which face many detection challenges associated with data transmission through an open, shared medium, facilitated by fundamental changes at the Physical and MAC layers. WIDS need to be considered in more detail at these lower layers than their wired counterparts as they face unique challenges. The remainder of this chapter will investigate three of these challenges where WiFi deviates significantly from its wired counterparts:

• Attacks Specific to WiFi Networks: Outlining the additional threats which a WIDS must account for: Denial of Service, Encryption Bypass and AP Masquerading attacks (a minimal masquerade-detection check is sketched after this list).

• The Effect of Deployment Architecture on WIDS Performance: Demonstrating that the deployment environment of a network protected by a WIDS can influence the prioritisation of attacks.

• The Importance of Live Data in WiFi Research: Investigating the different choices for research data sources with an emphasis on encouraging live network data collection for future WiFi research.
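As referenced in the first item above, the following is a minimal, hypothetical sketch of one check a WIDS might use against AP masquerading: flag beacon frames that advertise a protected SSID from an unauthorised BSSID. The names and addresses are illustrative only.

```python
# Hypothetical WIDS check for AP masquerading: compare observed (SSID, BSSID)
# pairs parsed from beacon frames against an authorised list and flag unknown
# BSSIDs that advertise a protected SSID.

AUTHORISED_APS = {
    ("CorpNet", "00:11:22:33:44:55"),
    ("CorpNet", "00:11:22:33:44:56"),
}

def masquerade_alerts(observed_beacons):
    """observed_beacons: iterable of (ssid, bssid) pairs from beacon frames."""
    protected_ssids = {ssid for ssid, _ in AUTHORISED_APS}
    return [(ssid, bssid) for ssid, bssid in observed_beacons
            if ssid in protected_ssids and (ssid, bssid) not in AUTHORISED_APS]

# The second beacon advertises the protected SSID from an unknown BSSID.
print(masquerade_alerts([("CorpNet", "00:11:22:33:44:55"),
                         ("CorpNet", "de:ad:be:ef:00:01")]))
```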

Relevância:

90.00%

Publicador:

Resumo:

Modern wireless systems are expected to operate in multiple frequency bands and support diverse communications standards to provide high-volume, high-speed data transmission. The major limitations on their performance today are imposed by interference, spurious emission and noise generated by high-power carriers in antennas and passive components of the RF front-end. Passive Intermodulation (PIM), which causes combinatorial frequency generation in the operational bands, presents a primary challenge to signal integrity, system efficacy and data throughput.
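A worked numerical illustration (not taken from the paper) of why PIM threatens the operational bands: two high-power carriers at f1 and f2 mixing on a nonlinear passive junction produce third-order products at 2f1 − f2 and 2f2 − f1, which can fall inside a receive band.

```python
# Third-order passive intermodulation products of two carriers.
def third_order_pim(f1_mhz, f2_mhz):
    return (2 * f1_mhz - f2_mhz, 2 * f2_mhz - f1_mhz)

# Example: two downlink carriers at 930 MHz and 955 MHz (900 MHz band).
low, high = third_order_pim(930.0, 955.0)
print(low, high)  # 905 MHz lands inside the 880-915 MHz uplink (receive) band
```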

Relevância:

90.00%

Publicador:

Resumo:

In wireless networks, the broadcast nature of the propagation medium makes the communication process vulnerable to malicious nodes (e.g. eavesdroppers) that are in the coverage area of the transmission. Thus, security issues play a vital role in wireless systems. Traditionally, information security has been addressed in the upper layers (e.g. the network layer) through the design of cryptographic protocols. Cryptography-based security aims to design a protocol such that it is computationally prohibitive for the eavesdropper to decode the information. The idea behind this approach relies on the limited computational power of the eavesdroppers. However, with advances in emerging hardware technologies, relying on protocol-based mechanisms alone has become insufficient to achieve secure communications. Owing to this fact, the paradigm of secure communications has shifted towards implementing security at the physical layer. The key principle behind this strategy is to exploit the spatial-temporal characteristics of the wireless channel to guarantee secure data transmission without the need for cryptographic protocols.
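The standard way to quantify this principle is the secrecy capacity of the Gaussian wiretap channel: the rate advantage of the legitimate link over the eavesdropper's link, floored at zero. The snippet below is a textbook illustration with example numbers, not a result from this work.

```python
# Secrecy capacity of a Gaussian wiretap channel (bit/s/Hz).
import math

def secrecy_capacity(snr_main_db, snr_eve_db):
    snr_main = 10 ** (snr_main_db / 10)   # linear SNR of the legitimate link
    snr_eve = 10 ** (snr_eve_db / 10)     # linear SNR of the eavesdropper link
    return max(0.0, math.log2(1 + snr_main) - math.log2(1 + snr_eve))

# Example: 20 dB to the intended receiver, 5 dB to the eavesdropper.
print(f"{secrecy_capacity(20, 5):.2f} bit/s/Hz of secret rate")
```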

Relevância:

90.00%

Publicador:

Resumo:

BACKGROUND: Urothelial pathogenesis is a complex process driven by an underlying network of interconnected genes. The identification of novel genomic target regions and gene targets that drive urothelial carcinogenesis is crucial in order to improve our currently limited understanding of urothelial cancer (UC) at the molecular level. The inference of genome-wide gene regulatory networks (GRN) from large-scale gene expression data provides a promising approach for a detailed investigation of the underlying network structure associated with urothelial carcinogenesis.

METHODS: In our study we inferred and compared three GRNs by applying the BC3Net inference algorithm to large-scale transitional cell carcinoma gene expression data sets from Illumina RNAseq (179 samples), Illumina Bead arrays (165 samples) and Affymetrix Oligo microarrays (188 samples). We investigated the structural and functional properties of the GRNs for the identification of molecular targets associated with urothelial cancer.

RESULTS: We found that the urothelial cancer (UC) GRNs show a significant enrichment of subnetworks that are associated with known cancer hallmarks including cell cycle, immune response, signaling, differentiation and translation. Interestingly, the most prominent subnetworks of co-located genes were found on chromosome regions 5q31.3 (RNAseq), 8q24.3 (Oligo) and 1q23.3 (Bead), which all represent genomic regions known to be frequently deregulated or affected by aberrations in urothelial cancer and other cancer types. Furthermore, the identified hub genes of the individual GRNs, e.g., HID1/DMC1 (tumor development), RNF17/TDRD4 (cancer antigen) and CYP4A11 (angiogenesis/metastasis), are known cancer-associated markers. The GRNs were highly dataset-specific at the level of interactions between individual genes, but showed large similarities at the level of biological function represented by subnetworks. Remarkably, the RNAseq UC GRN showed twice the proportion of significant functional subnetworks. Based on our analysis of inferential and experimental networks, the Bead UC GRN showed the lowest performance compared to the RNAseq and Oligo UC GRNs.

CONCLUSION: To our knowledge, this is the first study investigating genome-scale UC GRNs. RNAseq-based gene expression data is the platform of choice for GRN inference. Our study offers new avenues for the identification of novel putative diagnostic targets for subsequent studies in bladder tumors.
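BC3Net itself is distributed as an R package; as a generic, hypothetical sketch of the downstream hub-gene step described in the results, the snippet below ranks the genes of an inferred network (given as an edge list) by degree. The gene names are placeholders.

```python
# Rank genes of an inferred gene regulatory network by degree to find hub candidates.
import networkx as nx

def hub_genes(edge_list, top_n=10):
    """edge_list: iterable of (gene_a, gene_b) pairs from an inferred GRN."""
    g = nx.Graph()
    g.add_edges_from(edge_list)
    return sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:top_n]

# Toy example; real input would be the edge list of an inferred UC GRN.
print(hub_genes([("HID1", "TP53"), ("HID1", "RB1"), ("CYP4A11", "VEGFA")], top_n=2))
```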

Relevância:

90.00%

Publicador:

Resumo:

Bridge weigh-in-motion (B-WIM), a system that uses strain sensors mounted on a bridge to calculate the weights of trucks passing overhead, requires accurate axle location and speed information for effective performance. The success of a B-WIM system is dependent upon the accuracy of the axle detection method. It is widely recognised that any form of axle detector on the road surface is not ideal for B-WIM applications as it can cause disruption to the traffic (Ojio & Yamada 2002; Zhao et al. 2005; Chatterjee et al. 2006). Sensors under the bridge, that is, Nothing-on-Road (NOR) B-WIM, can perform axle detection via data acquisition systems which detect a peak in strain as each axle passes. The method is often successful, although not all bridges are suitable for NOR B-WIM due to limitations of the system. Significant research has been carried out to further develop the method and the NOR algorithms, but beam-and-slab bridges with deep beams still present a challenge. With these bridges, the slabs are used for axle detection, but peaks in the slab strains are sensitive to the transverse position of wheels on the beam. This next-generation B-WIM research project extends the current B-WIM algorithm to the problem of axle detection and safety, thus overcoming the existing limitations in current state-of-the-art technology. In this paper, Finite Element analysis was used to determine the critical locations for axle-detecting sensors and to identify alternative strategies for axle detection, and the findings were then tested in the field. The site selected for testing was in Loughbrickland, Northern Ireland, along the A1 corridor connecting the two cities of Belfast and Dublin. The structure is on a central route through the island of Ireland and has a high traffic volume, which made it an optimum location for the study. Another benefit of the chosen location was its close proximity to a self-operated weigh station. To determine the accuracy of the proposed B-WIM system and develop a knowledge base of the traffic load on the structure, a pavement WIM system was also installed on the northbound lane on the approach to the structure. The bridge structure selected for this B-WIM research comprised 27 pre-cast prestressed concrete Y4-beams and a cast in-situ concrete deck. The structure, a newly constructed integral bridge, spans 19 m and has an angle of skew of 22.7°.
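As a generic illustration of NOR axle detection (hypothetical sensor layout and parameter values, not the project's algorithm), the sketch below locates strain peaks as axles cross two instrumented lines a known distance apart and derives vehicle speed and axle spacing from the peak times.

```python
# Detect strain peaks per sensor line, then derive speed and axle spacing
# from the time offset between two longitudinal sensor lines.
import numpy as np
from scipy.signal import find_peaks

def axle_event_times(strain, fs_hz, min_height=5e-6):
    """Return times (s) of strain peaks in one sensor line's signal."""
    min_gap = max(1, int(0.05 * fs_hz))  # at least 50 ms between axle peaks
    peaks, _ = find_peaks(strain, height=min_height, distance=min_gap)
    return peaks / fs_hz

def speed_and_spacing(times_line1, times_line2, line_separation_m):
    """Estimate vehicle speed (m/s) and axle spacing (m); assumes the same
    axles are detected in the same order on both sensor lines."""
    dt = np.mean(np.asarray(times_line2) - np.asarray(times_line1))
    speed = line_separation_m / dt
    spacing = speed * np.diff(times_line1)
    return speed, spacing
```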

Relevância:

90.00%

Publicador:

Resumo:

The overall objective of this work was the study and development of sensors based on polymer optical fibre. The growth of polymer technology in recent years has allowed the introduction of this type of optical fibre in telecommunications and in sensor development. The advantages associated with optical metrology based on polymer fibre have been attracting the attention of the scientific community, since they enable the development of low-cost systems, or systems whose cost is competitive with conventional technologies. Given the timeliness of the proposed topic, the polymer optical fibre technology available on the market and the state of the art of polymer optical fibre sensors are described first. This is followed by the description of two types of sensors based on intensity modulation. An extrinsic sensor was designed to evaluate the amount of light scattered and absorbed by particles suspended in a liquid. The sensor was characterised with respect to the concentration, size and reflectivity of the suspended particles. The sensor was tested in the context of environmental monitoring, namely in the analysis of turbidity in sediment samples collected from burnt areas. The developed system was compared with a commercial system. An intrinsic sensor, based on side-polished polymer optical fibre, was analysed analytically. The theoretical model evaluates the sensor under different macrobending conditions and refractive indices of the surrounding medium. The theoretical model was validated against experimental results. The temperature sensitivity was evaluated, and the knowledge acquired was applied to the development of a system capable of monitoring the curing of different materials. A technique to improve the sensitivity of the curvature sensor by applying a coating to the sensitive region is also presented. The curvature dependence of the power transmitted by a side-polished optical fibre formed the basis for the development of an instrumented knee brace and elbow brace capable of quantitatively assessing joint movement. The need for portability led to the development of a wireless system for data acquisition and transmission. The prototypes developed are expected to have a significant impact on future systems applied to physical medicine and rehabilitation.

Relevância:

90.00%

Publicador:

Resumo:

Wireless communication technologies have become widely adopted, appearing in heterogeneous applications ranging from tracking victims, responders and equipment in disaster scenarios to machine health monitoring in networked manufacturing systems. Very often, applications demand a strictly bounded timing response, which, in distributed systems, is generally highly dependent on the performance of the underlying communication technology. These systems are said to have real-time timeliness requirements, since data communication must be conducted within predefined temporal bounds whose violation may compromise the correct behavior of the system and cause economic losses or endanger human lives. The potential adoption of wireless technologies for an increasingly broad range of application scenarios has made the operational requirements more complex and heterogeneous than those previously addressed by wired technologies. In parallel with this trend, there is an increasing demand for cost-effective distributed systems with improved deployment, maintenance and adaptation features. These systems tend to require operational flexibility, which can only be ensured if the underlying communication technology provides both time-triggered and event-triggered data transmission services while supporting on-line, on-the-fly parameter modification. Generally, wireless-enabled applications have deployment requirements that can only be addressed through the use of batteries and/or energy harvesting mechanisms for power supply. These applications usually have stringent autonomy requirements and demand a small form factor, which hinders the use of large batteries. As the communication support may represent a significant part of the energy requirements of a station, the use of power-hungry technologies is not adequate. Hence, in such applications, short-range technologies have been widely adopted. In fact, although short-range technologies provide smaller data rates, they spend just a fraction of the energy of their higher-power counterparts. The timeliness requirements of data communications can, in general, be met by ensuring the availability of the medium for any station initiating a transmission. In controlled (closed) environments this can be guaranteed, as there is strict regulation of which stations are installed in the area and for which purpose. Nevertheless, in open environments this is hard to control, because no a priori knowledge is available of which stations and technologies may contend for the medium at any given instant. Hence, the support of wireless real-time communications in unmanaged scenarios is a highly challenging task. Wireless low-power technologies have been the focus of a large research effort, for example in the Wireless Sensor Network domain. Although bringing extended autonomy to battery-powered stations, such technologies are known to be negatively influenced by similar technologies contending for the medium and, especially, by technologies using higher-power transmissions over the same frequency bands. A frequency band that is becoming increasingly crowded with competing technologies is the 2.4 GHz Industrial, Scientific and Medical band, encompassing, for example, Bluetooth and ZigBee, two low-power communication standards which are the basis of several real-time protocols. Although these technologies employ mechanisms to improve their coexistence, they are still vulnerable to transmissions from uncoordinated stations with similar technologies or to higher-power technologies such as Wi-Fi, which hinders the support of wireless dependable real-time communications in open environments.

The Wireless Flexible Time-Triggered Protocol (WFTT) is a master/multi-slave protocol that builds on the flexibility and timeliness provided by the FTT paradigm and on the deterministic medium capture and maintenance provided by the bandjacking technique. This dissertation presents the WFTT protocol and argues that it can support wireless real-time communication services with high dependability requirements in open environments where multiple contention-based technologies may dispute the medium access. It further claims that it is feasible to provide wireless communications that are both flexible and timely in open environments. The WFTT protocol was inspired by the FTT paradigm, from which higher-layer services such as admission control have been ported. After realizing that bandjacking was an effective technique to ensure medium access and maintenance in open environments crowded with contention-based communication technologies, it was recognized that the mechanism could be used to devise a wireless medium access protocol that would bring the features offered by the FTT paradigm to the wireless domain. The performance of the WFTT protocol is reported in this dissertation, with a description of the implemented devices, the test-bed, and a discussion of the obtained results.
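For readers unfamiliar with the FTT paradigm that WFTT builds on, the sketch below illustrates, in simplified and hypothetical form, how a master might assemble the trigger message for one elementary cycle: periodic (time-triggered) streams are scheduled first and the remaining cycle time is left as an asynchronous window for event-triggered traffic. It is not the WFTT implementation.

```python
# Simplified FTT-style elementary cycle: the master broadcasts a trigger message
# listing the synchronous streams allowed to transmit in this cycle; what is left
# of the cycle becomes the asynchronous (event-triggered) window.
from dataclasses import dataclass

@dataclass
class Stream:
    stream_id: int
    period_cycles: int      # transmit every N elementary cycles
    tx_time_ms: float       # air time needed per transmission

def build_trigger_message(streams, cycle_number, cycle_length_ms, guard_ms=1.0):
    budget = cycle_length_ms - guard_ms
    scheduled = []
    for s in streams:
        if cycle_number % s.period_cycles == 0 and s.tx_time_ms <= budget:
            scheduled.append(s.stream_id)
            budget -= s.tx_time_ms
    return {"cycle": cycle_number,
            "synchronous": scheduled,      # stream IDs allowed in the sync window
            "async_window_ms": budget}     # leftover time for event-triggered data

streams = [Stream(1, 1, 2.0), Stream(2, 4, 5.0)]
print(build_trigger_message(streams, cycle_number=4, cycle_length_ms=20.0))
```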

Relevância:

90.00%

Publicador:

Resumo:

Final Master's project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering.

Relevância:

90.00%

Publicador:

Resumo:

This work is dedicated to the comparison of open source as well as proprietary transport protocols for high-speed data transmission over IP networks. The contemporary common TCP needs significant improvement, since it was developed as a general-purpose transport protocol and first introduced four decades ago. In today's networks, TCP does not fit all communication needs. For this reason, other transport protocols have been developed and are successfully used, for example, for Big Data movement. In the scope of this research the following protocols were investigated for their efficiency on 10 Gbps links: UDT, RBUDP, MTP and RWTP. The protocols were tested under different impairments, such as round-trip times of up to 400 ms and packet losses of up to 2%. The investigated parameters are the data rate under different network conditions, the CPU load of sender and receiver during the experiments, the size of the feedback data, the CPU usage per Gbps, and the amount of feedback data per GiByte of effectively transmitted data. The best performance and the fairest resource consumption were observed with RWTP. Among the open source projects, the best behaviour was shown by RBUDP.
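The two normalised metrics mentioned above can be computed directly from measurement logs; the snippet below shows the arithmetic with hypothetical example values, not figures from the study.

```python
# Normalised efficiency metrics for a transport-protocol run.

def cpu_per_gbps(cpu_load_percent, goodput_gbps):
    """CPU load (percent) spent per Gbps of achieved goodput."""
    return cpu_load_percent / goodput_gbps

def feedback_per_gib(feedback_bytes, payload_bytes):
    """Feedback bytes sent per GiB of effectively transmitted payload."""
    return feedback_bytes / (payload_bytes / 2**30)

# Example: 7.2 Gbps goodput at 65 % sender CPU, 3.1 MB of feedback for 500 GiB moved.
print(f"{cpu_per_gbps(65.0, 7.2):.1f} % CPU per Gbps")
print(f"{feedback_per_gib(3.1e6, 500 * 2**30):.0f} feedback bytes per GiB")
```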

Relevância:

90.00%

Publicador:

Resumo:

One of the major applications of underwater acoustic sensor networks (UWASN) is ocean environment monitoring. Employing data mules is an energy-efficient way of collecting data from the underwater sensor nodes in such a network. A data mule node, such as an autonomous underwater vehicle (AUV), periodically visits the stationary nodes to download data. By conserving the power required for data transmission over long distances to a remote data sink, this approach extends the network lifetime. In this paper we propose a new MAC protocol to support a single mobile data mule node that collects the data sensed by the sensor nodes in periodic runs through the network. In this approach, the nodes need to perform only short-distance, single-hop transmission to the data mule. The protocol design discussed in this paper is motivated by the need to support such an application. The proposed protocol is a hybrid protocol, which employs schedule-based access among the stationary nodes combined with handshake-based access to support mobile data mules. The new protocol, RMAC-M, is developed as an extension to the energy-efficient MAC protocol R-MAC by extending the slot time of R-MAC to include a contention part for a handshake-based data transfer. The mobile node makes use of a beacon to signal its presence to all the nearby nodes, which can then handshake with the mobile node for data transfer. Simulation results show that the new protocol provides efficient support for a mobile data mule node while preserving the advantages of R-MAC such as energy efficiency and fairness.
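A simplified sketch (hypothetical names, not the RMAC-M implementation) of the hybrid slot idea: each slot keeps a scheduled, contention-free part for stationary-node traffic and appends a contention part that is used only when the data mule's beacon has been heard.

```python
# Toy model of a hybrid slot: scheduled access for stationary nodes plus a
# contention part in which nodes that heard the data mule's beacon may
# handshake and upload their buffered data.
import random

def run_slot(slot_no, schedule, nodes_with_data, mule_in_range):
    # Scheduled (contention-free) part: the node owning this slot transmits.
    owner = schedule[slot_no % len(schedule)]
    events = [f"slot {slot_no}: node {owner} uses scheduled transmission"]

    # Contention part: only used when a beacon from the data mule was heard.
    if mule_in_range and nodes_with_data:
        winner = random.choice(sorted(nodes_with_data))  # contention winner
        events.append(f"slot {slot_no}: node {winner} handshakes with the data "
                      f"mule and transfers its buffer")
        nodes_with_data.discard(winner)
    return events

schedule = ["A", "B", "C"]
pending = {"A", "C"}
for s in range(3):
    for line in run_slot(s, schedule, pending, mule_in_range=True):
        print(line)
```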