54 results for Physical layer security
Abstract:
We propose a simple and energy-efficient distributed change detection scheme for sensor networks based on Page's parametric CUSUM algorithm. The sensor observations are IID over time and across the sensors, conditioned on the change variable. Each sensor runs CUSUM and transmits only when its CUSUM statistic is above some threshold. The transmissions from the sensors are fused at the physical layer. The channel is modeled as a multiple access channel (MAC) corrupted with IID noise. The fusion center, which is the global decision maker, performs another CUSUM to detect the change. We provide analysis and simulation results for our scheme and compare its performance with an existing scheme which ensures energy efficiency via optimal power selection.
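A minimal sketch of the scheme as described, assuming Gaussian observations with a known mean shift after the change; the thresholds, transmit level and noise variance below are illustrative choices rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the paper)
N_SENSORS, T, CHANGE_TIME = 10, 400, 200
MU0, MU1, SIGMA = 0.0, 1.0, 1.0                 # pre/post-change means, observation noise std
LOCAL_THRESH, GLOBAL_THRESH, TX_LEVEL = 5.0, 50.0, 1.0
MAC_NOISE_STD = 0.5

def llr(x):
    """Log-likelihood ratio of the post- vs pre-change Gaussian densities."""
    return (MU1 - MU0) * (x - (MU0 + MU1) / 2.0) / SIGMA**2

local_cusum = np.zeros(N_SENSORS)
global_cusum = 0.0

for t in range(T):
    mean = MU1 if t >= CHANGE_TIME else MU0
    obs = rng.normal(mean, SIGMA, size=N_SENSORS)

    # Each sensor runs Page's CUSUM on its own observations.
    local_cusum = np.maximum(0.0, local_cusum + llr(obs))

    # A sensor transmits a fixed level only when its CUSUM exceeds the local threshold;
    # the MAC adds the transmissions and corrupts the sum with noise.
    tx = TX_LEVEL * (local_cusum > LOCAL_THRESH)
    received = tx.sum() + rng.normal(0.0, MAC_NOISE_STD)

    # The fusion center runs a second CUSUM on the received signal; for simplicity
    # this sketch reuses the same Gaussian LLR rather than the exact post-fusion statistic.
    global_cusum = max(0.0, global_cusum + llr(received))
    if global_cusum > GLOBAL_THRESH:
        print(f"change declared at t={t} (true change at {CHANGE_TIME})")
        break
```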
Abstract:
We study the problem of decentralized sequential change detection with conditionally independent observations. The sensors form a star topology with a central node, called the fusion center, as the hub. The sensors transmit a simple function of their observations in an analog fashion over a wireless Gaussian multiple access channel and operate under either a power constraint or an energy constraint. Simulations demonstrate that the proposed techniques have lower detection delays than existing schemes. Moreover, we demonstrate that the energy-constrained formulation enables better use of the total available energy than a power-constrained formulation.
Abstract:
The performance analysis of an adaptive physical layer network-coded two-way relaying scenario is presented, which employs two phases: the Multiple Access (MA) phase and the Broadcast (BC) phase. The deep channel fade conditions which occur at the relay, referred to as singular fade states, fall into the following two classes: (i) removable and (ii) non-removable singular fade states. With every singular fade state, we associate an error probability that the relay transmits a wrong network-coded symbol during the BC phase. It is shown that adaptive network coding provides a coding gain over fixed network coding, by making the error probabilities associated with the removable singular fade states contributing to the average Symbol Error Rate (SER) fall as SNR^-2 instead of SNR^-1. A high-SNR upper bound on the average end-to-end SER for the adaptive network coding scheme is derived for a Rician fading scenario, and is found to be tight through simulations. Specifically, it is shown that for the adaptive network coding scheme, the probability that the relay node transmits a wrong network-coded symbol is upper-bounded by twice the average SER of a point-to-point fading channel at high SNR. Also, it is shown that in a Rician fading scenario, it suffices to remove the effect of only those singular fade states which contribute dominantly to the average SER.
Abstract:
We propose a Physical layer Network Coding (PNC) scheme for the K-user wireless Multiple Access Relay Channel, in which K source nodes want to transmit messages to a destination node D with the help of a relay node R. The proposed scheme involves (i) Phase 1 during which the source nodes alone transmit and (ii) Phase 2 during which the source nodes and the relay node transmit. At the end of Phase 1, the relay node decodes the messages of the source nodes and during Phase 2 transmits a many-to-one function of the decoded messages. To counter the error propagation from the relay node, we propose a novel decoder which takes into account the possibility of error events at R. It is shown that if certain parameters are chosen properly and if the network coding map used at R forms a Latin Hypercube, the proposed decoder offers the maximum diversity order of two. Also, it is shown that for a proper choice of the parameters, the proposed decoder admits fast decoding, with the same decoding complexity order as that of the reference scheme based on Complex Field Network Coding (CFNC). Simulation results indicate that the proposed PNC scheme offers a large gain over the CFNC scheme.
Abstract:
In the design of modulation schemes for the physical layer network-coded two-way relaying scenario with two phases (Multiple Access (MA) phase and Broadcast (BC) phase), it was observed by Koike-Akino et al. that adaptively changing the network coding map used at the relay according to the channel conditions greatly reduces the impact of multiple access interference, and that all these network coding maps should satisfy a requirement called the exclusive law. In [11], the case in which the end nodes use M-PSK signal sets is extensively studied using Latin Squares. This paper deals with the case in which the end nodes use square M-QAM signal sets. In a fading scenario, for certain channel conditions, termed singular fade states, the MA phase performance is greatly degraded. We show that square QAM signal sets lead to fewer singular fade states than PSK signal sets. Because of this, the complexity at the relay is enormously reduced. Moreover, fewer overhead bits are required in the BC phase. We find the number of singular fade states for PAM and QAM signal sets used at the end nodes. The fade state γe^{jθ} = 1 is a singular fade state for M-QAM for all values of M, and it is shown that certain block circulant Latin Squares remove this singular fade state. Simulation results are presented to show that QAM signal sets perform better than PSK.
Abstract:
The analysis of modulation schemes for the physical layer network-coded two-way relaying scenario is presented, which employs two phases: the Multiple Access (MA) phase and the Broadcast (BC) phase. Depending on the signal set used at the end nodes, the minimum distance of the effective constellation seen at the relay becomes zero for a finite number of channel fade states, referred to as singular fade states. The singular fade states fall into the following two classes: (i) the ones which are caused by channel outage and whose harmful effect cannot be mitigated by adaptive network coding, called the non-removable singular fade states, and (ii) the ones which occur due to the choice of the signal set and whose harmful effects can be removed, called the removable singular fade states. In this paper, we derive an upper bound on the average end-to-end Symbol Error Rate (SER), with and without adaptive network coding at the relay, for a Rician fading scenario. It is shown that without adaptive network coding, at high Signal to Noise Ratio (SNR), the contribution to the end-to-end SER comes from the following error events, which fall as SNR^-1: the error events associated with the removable and non-removable singular fade states and the error event during the BC phase. In contrast, for the adaptive network coding scheme, the error events associated with the removable singular fade states fall as SNR^-2, thereby providing a coding gain over the case when adaptive network coding is not used. Also, it is shown that for a Rician fading channel, the error during the MA phase dominates over the error during the BC phase. Hence, adaptive network coding, which improves the performance during the MA phase, provides more gain in a Rician fading scenario than in a Rayleigh fading scenario. Furthermore, it is shown that for large Rician factors, among those removable singular fade states which have the same magnitude, only those which have the least absolute value of the phase angle contribute dominantly to the end-to-end SER, and it is sufficient to remove the effect of only such singular fade states.
Abstract:
The design of modulation schemes for the physical layer network-coded two-way relaying scenario is considered, with the protocol which employs two phases: the Multiple Access (MA) phase and the Broadcast (BC) phase. It was observed by Koike-Akino et al. that adaptively changing the network coding map used at the relay according to the channel conditions greatly reduces the impact of multiple access interference which occurs at the relay during the MA phase. In other words, the set of all possible channel realizations (the complex plane) is quantized into a finite number of regions, with a specific network coding map giving the best performance in a particular region. We obtain such a quantization analytically for the case when M-PSK (for M any power of 2) is the signal set used during the MA phase. We show that the complex plane can be classified into two regions: a region in which any network coding map which satisfies the so-called exclusive law gives the same best performance, and a region in which the choice of the network coding map affects the performance, which is further quantized based on the choice of the network coding map that optimizes the performance. The quantization thus obtained analytically coincides with the one obtained using computer search by Koike-Akino et al. for the 4-PSK signal set, i.e. for the specific value of M = 4.
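As an illustration of the exclusive-law requirement mentioned in this abstract, the short check below verifies whether a candidate relay map satisfies both conditions, i.e. whether its table is a Latin square; the bitwise-XOR map on integer labels of the signal points is an assumed example, not a map from the paper.

```python
from itertools import product

def satisfies_exclusive_law(f, M):
    """Check f(xA, xB) != f(xA', xB) for xA != xA', and the symmetric condition in xB."""
    for xa, xa2, xb in product(range(M), repeat=3):
        if xa != xa2 and f(xa, xb) == f(xa2, xb):
            return False
    for xa, xb, xb2 in product(range(M), repeat=3):
        if xb != xb2 and f(xa, xb) == f(xa, xb2):
            return False
    return True

M = 4                              # e.g. 4-PSK points labeled 0..3 (illustrative labeling)
xor_map = lambda a, b: a ^ b       # candidate relay network coding map
print(satisfies_exclusive_law(xor_map, M))   # True: the XOR table is a Latin square
```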
Abstract:
Using the spatial modulation approach, where only one transmit antenna is active at a time, we propose two transmission schemes for the two-way relay channel using physical layer network coding with space-time coding based on coordinate interleaved orthogonal designs (CIODs). It is shown that using two uncorrelated transmit antennas at the nodes, but only one RF transmit chain, with space-time coding across these antennas can give better performance without using any extra resources and without increasing the hardware implementation cost and complexity. In the first transmission scheme, two antennas are used only at the relay, adaptive network coding (ANC) is employed at the relay, and the relay transmits a CIOD space-time block code (STBC). This gives a better performance compared to an existing ANC scheme for the two-way relay channel which uses one antenna each at all the three nodes. It is shown that for this scheme, at high SNR, the average end-to-end symbol error probability (SEP) is upper-bounded by twice the SEP of a point-to-point fading channel. In the second transmission scheme, two transmit antennas are used at all the three nodes, and CIOD STBCs are transmitted in the multiple access and broadcast phases. This scheme provides a diversity order of two for the average end-to-end SEP, with an increased decoding complexity of O(M^3) for an arbitrary signal set and O(M^2√M) for square QAM signal sets. Simulation results show that the proposed schemes perform better than the existing ANC schemes under perfect and imperfect channel state information.
Abstract:
This paper studies a pilot-assisted physical layer data fusion technique known as Distributed Co-Phasing (DCP). In this two-phase scheme, the sensors first estimate the channel to the fusion center (FC) using pilots sent by the latter; and then they simultaneously transmit their common data by pre-rotating them by the estimated channel phase, thereby achieving physical layer data fusion. First, by analyzing the symmetric mutual information of the system, it is shown that the use of higher order constellations (HOC) can improve the throughput of DCP compared to the binary signaling considered heretofore. Using an HOC in the DCP setting requires the estimation of the composite DCP channel at the FC for data decoding. To this end, two blind algorithms are proposed: 1) power method, and 2) modified K-means algorithm. The latter algorithm is shown to be computationally efficient and converges significantly faster than the conventional K-means algorithm. Analytical expressions for the probability of error are derived, and it is found that even at moderate to low SNRs, the modified K-means algorithm achieves a probability of error comparable to that achievable with a perfect channel estimate at the FC, while requiring no pilot symbols to be transmitted from the sensor nodes. Also, the problem of signal corruption due to imperfect DCP is investigated, and constellation shaping to minimize the probability of signal corruption is proposed and analyzed. The analysis is validated, and the promising performance of DCP for energy-efficient physical layer data fusion is illustrated, using Monte Carlo simulations.
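The blind estimation idea can be illustrated with plain K-means on the received complex samples: cluster them into M groups and take the centroids as the composite-channel-scaled constellation points used for decoding. This is a generic sketch only; the paper's power method and its modified K-means variant are not reproduced, and the constellation, channel gain and noise level are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup (assumed): a 4-point constellation scaled and rotated
# by an unknown composite DCP channel gain h.
M = 4
constellation = np.exp(2j * np.pi * np.arange(M) / M)
h = 1.7 * np.exp(1j * 0.4)
symbols = rng.integers(0, M, size=2000)
noise = 0.1 * (rng.normal(size=2000) + 1j * rng.normal(size=2000))
received = h * constellation[symbols] + noise

# Plain K-means on the complex received samples; the centroids estimate
# h * constellation, which is what the fusion center needs for decoding.
centroids = received[rng.choice(len(received), M, replace=False)]
for _ in range(50):
    dist = np.abs(received[:, None] - centroids[None, :])    # sample-to-centroid distances
    labels = dist.argmin(axis=1)                              # nearest-centroid assignment
    centroids = np.array([received[labels == k].mean() if np.any(labels == k)
                          else centroids[k] for k in range(M)])

print(np.round(np.sort_complex(centroids), 3))   # approx. h * constellation (up to permutation)
```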
Abstract:
The broadcast nature of the wireless medium jeopardizes secure transmissions. Cryptographic measures fail to ensure security when eavesdroppers have superior computational capability; however, security can be assured using information-theoretic approaches. We use physical layer security to guarantee a non-zero secrecy rate in single-source, single-destination multi-hop networks with eavesdroppers, for two cases: when the eavesdropper locations and channel gains are known, and when their positions are unknown. For the case when eavesdropper locations are known, we propose a two-phase solution which consists of finding activation sets and then obtaining transmit powers subject to SINR constraints. We introduce methods to find activation sets and compare their performance. Necessary but reasonable approximations are made in the power minimization formulations for tractability reasons. For scenarios with no eavesdropper location information, we suggest minimizing the vulnerability region (the area having zero secrecy rate) over the network. Our results show that, in the absence of location information, the average number of eavesdroppers who have access to the data is reduced.
Abstract:
Many wireless applications demand a fast mechanism to detect the packet from a node with the highest priority ("best node") only, while packets from nodes with lower priority are irrelevant. In this paper, we introduce an extremely fast contention-based multiple access algorithm that selects the best node and requires only local information of the priorities of the nodes. The algorithm, which we call Variable Power Multiple Access Selection (VP-MAS), uses the local channel state information from the accessing nodes to the receiver, and maps the priorities onto the receive power. It is based on a key result that shows that mapping onto a set of discrete receive power levels is optimal, when the power levels are chosen to exploit packet capture that inherently occurs in a wireless physical layer. The VP-MAS algorithm adjusts the expected number of users that contend in each step and their respective transmission powers, depending on whether previous transmission attempts resulted in capture, idle channel, or collision. We also show how reliable information regarding the total received power at the receiver can be used to improve the algorithm by enhancing the feedback mechanism. The algorithm detects the packet from the best node in 1.5 to 2.1 slots, which is considerably lower than the 2.43 slot average achieved by the best algorithm known to date.
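A rough sketch of the capture mechanism in a single contention slot of the kind described above: each contending node maps its priority onto one of a few discrete receive power levels, and the receiver captures the strongest packet when it exceeds the interference-plus-noise power by a capture threshold. The power levels, capture threshold and priority-to-level mapping are illustrative and do not reproduce the optimized choices of the VP-MAS algorithm, nor its step-by-step adjustment of the contending set.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (assumed, not the paper's optimized values)
N_NODES = 20
POWER_LEVELS = np.array([1.0, 4.0, 16.0])   # discrete receive power levels
CAPTURE_THRESHOLD = 2.0                      # SIR needed for packet capture
NOISE_POWER = 0.1

priorities = rng.random(N_NODES)             # local priorities, assumed uniform in [0, 1]

# Map each node's priority onto a discrete receive power level (higher priority ->
# higher level); here simply by equal-width quantiles of the priority.
levels = np.minimum((priorities * len(POWER_LEVELS)).astype(int), len(POWER_LEVELS) - 1)
rx_power = POWER_LEVELS[levels]              # nodes invert their channel so these are receive powers

# Capture check for one slot: the strongest packet is decoded if its power
# exceeds CAPTURE_THRESHOLD times the interference-plus-noise power.
best = rx_power.argmax()
interference = rx_power.sum() - rx_power[best] + NOISE_POWER
if rx_power[best] >= CAPTURE_THRESHOLD * interference:
    print(f"capture: node {best} (priority {priorities[best]:.2f}) decoded")
else:
    print("no capture: the contending set and power levels would be adjusted next slot")
```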
Abstract:
The WiFiRe (WiFi Rural Extension) proposal for rural broadband access is being developed under the aegis of CEWIT. The system leverages the widely available, and highly cost-reduced, WiFi chipsets. However, only the physical layer from these chipsets is retained. A single base station carries several WiFi transceivers, each serving one sector of the cell, and all operating on the same WiFi channel in a time division duplex (TDD) manner. We replace the contention-based WiFi MAC with a single-channel TDD multisector TDM MAC similar to the WiMax MAC. In this paper we discuss in detail the issues in designing such a MAC for the purpose of carrying packet voice telephony and for Internet access. The problem of determining the optimal spatial reuse is formulated, and the optimal spatial reuse and the corresponding cell size are derived. Then the voice and data scheduler is designed, and it is shown how throughput fairness can be implemented in the data scheduler. A capacity assessment of the system is also provided.
Abstract:
With the increasing adoption of wireless technology, it is reasonable to expect an increase in the demand for supporting both real-time multimedia and high-rate reliable data services. Next generation wireless systems employ an Orthogonal Frequency Division Multiplexing (OFDM) physical layer owing to the high data rate transmissions that are possible without an increase in bandwidth. Towards improving the performance of these systems, we look at the design of resource allocation algorithms at the medium-access layer and their impact on higher layers. While TCP-based elastic traffic needs reliable transport, UDP-based real-time applications have stringent delay and rate requirements. The MAC algorithms, while catering to the heterogeneous service needs of these higher layers, trade off between maximizing the system capacity and providing fairness among users. The novelty of this work is the proposal of various channel-aware resource allocation algorithms at the MAC layer, which can result in significant performance gains in an OFDM-based wireless system.
Abstract:
Next generation wireless systems employ an Orthogonal Frequency Division Multiplexing (OFDM) physical layer owing to the high data rate transmissions that are possible without an increase in bandwidth. While TCP performance has been extensively studied for its interaction with link layer ARQ, little attention has been given to the interaction of TCP with the MAC layer. In this work, we explore cross-layer interactions in an OFDM-based wireless system, specifically focusing on channel-aware resource allocation strategies at the MAC layer and their impact on TCP congestion control. Both efficiency- and fairness-oriented MAC resource allocation strategies were designed for evaluating the performance of TCP. The former schemes try to exploit channel diversity to maximize the system throughput, while the latter schemes try to provide a fair resource allocation over a sufficiently long time duration. From a TCP goodput standpoint, we show that the class of MAC algorithms that incorporate a fairness metric and consider the backlog outperform the channel-diversity-exploiting schemes.
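As an illustration of the two classes of strategies being contrasted (neither is the exact algorithm from this work), the sketch below allocates each OFDM sub-channel either to the user with the best instantaneous rate (efficiency-oriented) or to the backlogged user with the largest rate-to-average-throughput ratio (a proportional-fair style, fairness-oriented rule); all parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)

N_USERS, N_SUBCH, N_SLOTS = 4, 8, 1000
avg_tput = np.full(N_USERS, 1e-3)                    # smoothed throughput for the fair rule
backlog = rng.integers(50, 200, size=N_USERS).astype(float)
ALPHA = 0.01                                          # throughput smoothing factor

def allocate(rates, fair):
    """Pick the user for one sub-channel under either allocation rule."""
    if not fair:
        return int(rates.argmax())                    # exploit channel diversity for throughput
    metric = np.where(backlog > 0, rates / avg_tput, -np.inf)  # fair rule skips empty queues
    return int(metric.argmax())

for _ in range(N_SLOTS):
    # Illustrative per-user, per-sub-channel achievable rates (Rayleigh-like fading)
    rates = rng.exponential(1.0, size=(N_USERS, N_SUBCH))
    served = np.zeros(N_USERS)
    for s in range(N_SUBCH):
        u = allocate(rates[:, s], fair=True)          # set fair=False for the efficiency rule
        sent = min(rates[u, s], backlog[u])
        served[u] += sent
        backlog[u] -= sent
    avg_tput = (1 - ALPHA) * avg_tput + ALPHA * served
    backlog += rng.poisson(1.0, size=N_USERS)         # new arrivals keep queues backlogged

print("long-run throughput per user:", np.round(avg_tput, 2))
```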
Abstract:
A link failure in the path of a virtual circuit in a packet data network will lead to premature disconnection of the circuit by the end-points. A soft failure will result in degraded throughput over the virtual circuit. If these failures can be detected quickly and reliably, then appropriate rerouting strategies can automatically reroute the virtual circuits that use the failed facility. In this paper, we develop a methodology for analysing and designing failure detection schemes for digital facilities. Based on errored-second data, we develop a Markov model for the error and failure behaviour of a T1 trunk. The performance of a detection scheme is characterized by its false alarm probability and its detection delay. Using the Markov model, we analyse the performance of detection schemes that use physical layer or link layer information. The schemes basically rely upon detecting the occurrence of severely errored seconds (SESs). A failure is declared when a counter, driven by the occurrence of SESs, reaches a certain threshold. For hard failures, the design problem reduces to a proper choice of the threshold at which failure is declared, and of the connection reattempt parameters of the virtual circuit end-point session recovery procedures. For soft failures, the performance of a detection scheme depends, in addition, on how long and how frequent the error bursts are in a given failure mode. We also propose and analyse a novel Level 2 detection scheme that relies only upon anomalies observable at Level 2, i.e. CRC failures and idle-fill flag errors. Our results suggest that Level 2 schemes that perform as well as Level 1 schemes are possible.
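A minimal sketch of the counter-based detection rule described above, assuming a simple leaky-bucket counter that increments on each severely errored second (SES) and decrements on error-free seconds; the increment, decrement and threshold values are illustrative, not the designed values from the paper.

```python
def detect_failure(ses_stream, threshold=8, up=1, down=1):
    """Declare a failure when a leaky-bucket counter driven by SES occurrences
    reaches `threshold`. Returns the 1-based second of detection, or None."""
    counter = 0
    for t, is_ses in enumerate(ses_stream, start=1):
        counter = counter + up if is_ses else max(0, counter - down)
        if counter >= threshold:
            return t          # detection delay measured from the start of the stream
    return None

# Example: a hard failure produces a run of consecutive SESs after second 20
stream = [False] * 20 + [True] * 30
print(detect_failure(stream))   # detects at second 28 with these illustrative parameters
```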