962 results for wireless connectivity


Relevance: 20.00%

Abstract:

In Orthogonal Frequency Division Multiplexing and Discrete Multitone transceivers, a guard interval called the Cyclic Prefix (CP) is inserted to avoid inter-symbol interference. The length of the CP is usually greater than that of the channel impulse response, resulting in a loss of useful data carriers. To avoid a long CP, a time-domain equalizer is used to shorten the channel. In this paper, we propose a method to include a delay in the zero-forcing equalizer and obtain an optimal value of the delay based on the locations of the zeros of the channel. The performance of the algorithms is studied using numerical simulations.
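The role of the delay can be illustrated with a small numerical sketch: for a toy channel, a least-squares (zero-forcing-style) shortening equalizer is computed for every candidate delay and the delay with the smallest residual is retained. The channel taps, equalizer length, and the brute-force delay sweep are illustrative assumptions, not the paper's zero-location-based rule.

import numpy as np

# Toy sketch (not the paper's algorithm): an equalizer w of length Lw that tries
# to force the combined channel-equalizer response to a single tap at delay d,
# solved by least squares; the best delay is found by a brute-force sweep.
def zf_equalizer_with_delay(h, Lw, d):
    Lh = len(h)
    n = Lh + Lw - 1                          # length of the combined response
    H = np.zeros((n, Lw))
    for k in range(Lw):                      # convolution (Toeplitz) matrix of h
        H[k:k + Lh, k] = h
    e = np.zeros(n)
    e[d] = 1.0                               # target: a single impulse at delay d
    w = np.linalg.lstsq(H, e, rcond=None)[0]
    return w, np.linalg.norm(H @ w - e)

h = np.array([1.0, 0.6, -0.3, 0.1])          # illustrative channel impulse response
Lw = 8
residuals = [zf_equalizer_with_delay(h, Lw, d)[1] for d in range(len(h) + Lw - 1)]
best_delay = int(np.argmin(residuals))
print("best delay:", best_delay, "residual:", residuals[best_delay])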

Relevance: 20.00%

Abstract:

In this paper, we propose a centralized multicast authentication protocol (MAP) for dynamic multicast groups in wireless networks. In our protocol, a multicast group is defined only at the time of multicasting. The authentication server (AS) in the network generates a session key and authenticates it to each member of a multicast group using the computationally inexpensive least common multiple (LCM) method. In addition, a pseudo-random function (PRF) is used to bind the secret keys of the network members to their identities. By doing this, the AS is relieved from storing per-member secrets in its memory, making the scheme fully storage-scalable. The protocol minimizes the load on the network members by shifting the computational tasks towards the AS node as far as possible. The protocol possesses a membership revocation mechanism and is protected against replay and brute-force attacks. Analytical and simulation results confirm the effectiveness of the proposed protocol.
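The identity-binding idea can be sketched in a few lines: the AS keeps only a master secret and derives each member's key on demand with a PRF, so no per-member secret needs to be stored. HMAC-SHA256 stands in for the PRF here, the LCM-based session-key authentication of the actual protocol is not reproduced, and all names are illustrative.

import hmac, hashlib, os, secrets

MASTER_SECRET = os.urandom(32)               # the only secret the AS must store

def member_key(member_id):
    # PRF(master_secret, identity) -> per-member secret key, derived on demand
    return hmac.new(MASTER_SECRET, member_id.encode(), hashlib.sha256).digest()

def authenticate_session_key(session_key, group):
    # One MAC tag per member, binding the fresh session key to that identity
    return {m: hmac.new(member_key(m), session_key, hashlib.sha256).hexdigest()
            for m in group}

session_key = secrets.token_bytes(16)        # fresh key for this multicast group
tags = authenticate_session_key(session_key, ["node-7", "node-12", "node-31"])
print(tags["node-7"][:16], "...")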

Relevance: 20.00%

Abstract:

Energy harvesting sensor (EHS) nodes provide an attractive and green solution to the problem of limited lifetime of wireless sensor networks (WSNs). Unlike a conventional node that uses a non-rechargeable battery and dies once it runs out of energy, an EHS node can harvest energy from the environment and replenish its rechargeable battery. We consider hybrid WSNs that comprise both EHS and conventional nodes; these arise when legacy WSNs are upgraded or when EHS deployment costs are a concern. We compare conventional and hybrid WSNs on the basis of a new and insightful performance metric called the k-outage duration, which captures the inability of the nodes to transmit data due either to a lack of sufficient battery energy or to wireless fading. The metric overcomes the problem of defining lifetime in networks with EHS nodes, which never die but are occasionally unable to transmit due to lack of sufficient battery energy. It also accounts for the effect of wireless channel fading on the ability of the WSN to transmit data. We develop two novel, tight, and computationally simple bounds for evaluating the k-outage duration. Our results show that increasing the number of EHS nodes has a markedly different effect on the k-outage duration than increasing the number of conventional nodes.
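One plausible Monte Carlo reading of such a metric can be sketched as follows: a node is in outage in a slot if its battery lacks the transmit energy or its fading gain is below a threshold, and a slot counts towards the k-outage duration when at least k nodes are simultaneously in outage. The harvesting and fading models, all parameters, and this exact reading of the metric are illustrative assumptions rather than the paper's definition or bounds.

import numpy as np

rng = np.random.default_rng(0)

def k_outage_slots(n_conv, n_ehs, k, slots=10_000, e_tx=1.0, fade_thresh=0.2):
    # conventional nodes start with a finite battery; EHS nodes recharge each slot
    battery = np.concatenate([np.full(n_conv, 200.0), np.full(n_ehs, 5.0)])
    is_ehs = np.arange(n_conv + n_ehs) >= n_conv
    outage_count = 0
    for _ in range(slots):
        battery[is_ehs] += rng.exponential(0.8, is_ehs.sum())   # energy harvesting
        gains = rng.exponential(1.0, battery.size)              # Rayleigh power gains
        can_tx = (battery >= e_tx) & (gains >= fade_thresh)
        battery[can_tx] -= e_tx                                 # pay the transmit energy
        if (~can_tx).sum() >= k:                                # >= k nodes cannot transmit
            outage_count += 1
    return outage_count

print("all-conventional:", k_outage_slots(n_conv=10, n_ehs=0, k=3))
print("hybrid          :", k_outage_slots(n_conv=5,  n_ehs=5, k=3))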

Relevance: 20.00%

Abstract:

The design of modulation schemes for the physical-layer network-coded two-way relaying scenario is considered, with a protocol that employs two phases: the multiple access (MA) phase and the broadcast (BC) phase. Koike-Akino et al. observed that adaptively changing the network coding map used at the relay according to the channel conditions greatly reduces the impact of the multiple access interference that occurs at the relay during the MA phase, and that all such network coding maps should satisfy a requirement called the exclusive law. We show that every network coding map satisfying the exclusive law is representable by a Latin Square, and conversely; this relationship can be used to obtain the network coding maps satisfying the exclusive law. Using the structural properties of Latin Squares for a given set of parameters, the problem of finding all the required maps is reduced to finding a small set of maps for M-PSK constellations. This is achieved using the notions of isotopic and transposed Latin Squares. Furthermore, the channel conditions under which the bit-wise XOR map performs well are obtained analytically; this holds for all values of M that are powers of 2. We illustrate these results for the case where both end users use the QPSK constellation.
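The exclusive law and its Latin-square characterization are easy to check computationally: a relay map f(x_A, x_B) satisfies the exclusive law exactly when every row and every column of its M x M table contains distinct symbols, i.e. when the table is a Latin square. The sketch below verifies this for the bit-wise XOR map with M = 4 (QPSK); the choice of map and M is illustrative.

import numpy as np

M = 4
xor_map = np.array([[a ^ b for b in range(M)] for a in range(M)])

def satisfies_exclusive_law(table):
    # f(x, y) != f(x', y) for x != x', and f(x, y) != f(x, y') for y != y'
    rows_ok = all(len(set(row)) == len(row) for row in table)
    cols_ok = all(len(set(col)) == len(col) for col in table.T)
    return rows_ok and cols_ok               # exactly the Latin-square condition

print(xor_map)
print("exclusive law satisfied:", satisfies_exclusive_law(xor_map))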

Relevance: 20.00%

Abstract:

The broadcast nature of the wireless medium jeopardizes secure transmissions. Cryptographic measures fail to ensure security when eavesdroppers have superior computational capability; security can, however, be assured through information-theoretic approaches. We use physical-layer security to guarantee a non-zero secrecy rate in single-source, single-destination multi-hop networks with eavesdroppers, for two cases: when the eavesdropper locations and channel gains are known, and when their positions are unknown. For the case when eavesdropper locations are known, we propose a two-phase solution that consists of finding activation sets and then obtaining transmit powers subject to SINR constraints. We introduce methods to find activation sets and compare their performance. Necessary but reasonable approximations are made in the power minimization formulations for tractability. For scenarios with no eavesdropper location information, we propose minimizing the vulnerability region (the area having zero secrecy rate) over the network. Our results show that, in the absence of location information, the average number of eavesdroppers who have access to the data is reduced.
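The quantity underlying all of this is the per-link secrecy rate, the non-negative gap between the capacities seen by the legitimate receiver and by the eavesdropper. A minimal sketch, with arbitrary illustrative SNR values:

import numpy as np

def secrecy_rate(snr_destination, snr_eavesdropper):
    # C_s = max(0, log2(1 + SNR_D) - log2(1 + SNR_E)), in bits/s/Hz
    return max(0.0, np.log2(1 + snr_destination) - np.log2(1 + snr_eavesdropper))

print(secrecy_rate(snr_destination=15.0, snr_eavesdropper=3.0))   # positive secrecy rate
print(secrecy_rate(snr_destination=2.0,  snr_eavesdropper=8.0))   # zero: link is vulnerable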

Relevance: 20.00%

Abstract:

We study the problem of optimal sequential ("as-you-go") deployment of wireless relay nodes as a person walks along a line of random length (with a known distribution). The objective is to create an impromptu multihop wireless network, operating in the light traffic regime, that connects a packet source placed at the end of the line to a sink node located at the starting point. In walking from the sink towards the source, at every step, measurements yield the transmit powers required to establish links to one or more previously placed nodes. Based on these measurements, at every step, a decision is made whether to place a relay node, the overall system objective being to minimize a linear combination of the expected sum power (or the expected maximum power) required to deliver a packet from the source to the sink node and the expected number of relay nodes deployed. For each of these two objectives, two relay selection strategies are considered: (i) each relay communicates with the sink via the immediately preceding relay; (ii) the communication path can skip some of the deployed relays. With appropriate modeling assumptions, we formulate each of these problems as a Markov decision process (MDP). We provide the optimal policy structures for all these cases and illustrate the policies and their performance, via numerical results, for some typical parameters.
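A toy version of such an as-you-go walk (using a simple measurement-threshold rule rather than the optimal MDP policy) is sketched below; the path-loss model, shadowing, threshold, and the geometric line-length distribution are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def deploy_walk(line_length_steps, step_m=10.0, p_threshold=0.5,
                path_loss_exp=3.0, p_ref=1e-3):
    placed_at = [0.0]                            # the sink sits at the origin
    for step in range(1, line_length_steps + 1):
        pos = step * step_m
        dist = pos - placed_at[-1]
        # measured power needed to reach the previously placed node
        p_required = p_ref * dist ** path_loss_exp * rng.lognormal(0.0, 0.3)
        if p_required > p_threshold:             # measurement-triggered placement
            placed_at.append(pos)
    return placed_at

relays = deploy_walk(line_length_steps=int(rng.geometric(0.05)))  # random line length
print("relay positions (m):", relays[1:])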

Relevance: 20.00%

Abstract:

In this paper we consider a single discrete-time queue with an infinite buffer. The channel may experience fading. The transmission rate is a linear function of the power used for transmission. In this scenario we explicitly obtain power control policies that minimize the mean power and/or the mean delay. A peak power constraint may also be imposed.
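A small simulation sketch of this setting is given below: the service offered in a slot is linear in the transmit power scaled by the fading gain, and a simple "drain a fixed fraction of the queue" policy with a peak power cap stands in for the optimal policies derived in the paper; the arrival rate and all other parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

def simulate(slots=50_000, arrival_rate=0.4, rate_per_watt=1.0,
             drain_fraction=0.6, p_peak=2.0):
    q, power_sum, queue_sum = 0.0, 0.0, 0.0
    for _ in range(slots):
        gain = rng.exponential(1.0)                      # fading power gain
        target_service = drain_fraction * q
        power = min(p_peak, target_service / (rate_per_watt * gain + 1e-12))
        q = max(0.0, q - rate_per_watt * gain * power) + rng.poisson(arrival_rate)
        power_sum += power
        queue_sum += q                                   # mean queue tracks mean delay
    return power_sum / slots, queue_sum / slots

mean_power, mean_queue = simulate()
print(f"mean power = {mean_power:.3f}, mean queue length = {mean_queue:.2f}")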

Relevance: 20.00%

Abstract:

The distributed, low-feedback timer scheme is used in several wireless systems to select the best node from the available nodes. In it, each node sets a timer as a function of a local preference number called a metric and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Delta seconds of the transmission by the best node. We derive the optimal metric-to-timer mappings for the practical scenario where the number of nodes is unknown. We consider two cases, in which the probability distribution of the number of nodes is either known a priori or unknown. In the first case, the optimal mapping maximizes the success probability averaged over the probability distribution. In the second case, a robust mapping maximizes the worst-case average success probability over all possible probability distributions on the number of nodes. Results reveal that the proposed mappings deliver significant gains compared to the mappings considered in the literature.
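The success event can be estimated directly by Monte Carlo: each node draws a metric, maps it to a timer, and selection succeeds only if the best node's timer expires at least Delta before every other timer. The inverse-linear mapping and all parameters below are illustrative, not the optimal or robust mappings derived in the paper.

import numpy as np

rng = np.random.default_rng(3)

def success_probability(n_nodes, delta=0.05, t_max=1.0, trials=100_000):
    successes = 0
    for _ in range(trials):
        metrics = rng.random(n_nodes)                # i.i.d. metrics in (0, 1)
        timers = t_max * (1.0 - metrics)             # higher metric -> earlier expiry
        order = np.argsort(timers)
        best_first = order[0] == np.argmax(metrics)
        gap_ok = n_nodes == 1 or timers[order[1]] - timers[order[0]] >= delta
        successes += bool(best_first and gap_ok)
    return successes / trials

for n in (2, 5, 10, 20):
    print(n, "nodes ->", success_probability(n))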

Relevance: 20.00%

Abstract:

Real-world biological systems such as the human brain are inherently nonlinear and difficult to model. However, most previous studies have employed either linear models or parametric nonlinear models for investigating brain function. In this paper, a novel application of a nonlinear, recurrence-based measure of phase synchronization, the correlation between probabilities of recurrence (CPR), is proposed for studying connectivity in the brain. Being non-parametric, the method makes very few assumptions, making it suitable for investigating brain function in a data-driven way. CPR's utility is demonstrated on multichannel electroencephalographic (EEG) signals. Brain connectivity obtained from the thresholded CPR matrix of multichannel EEG signals showed clear differences in the number and pattern of connections between (a) epileptic seizure and pre-seizure states and (b) eyes-open and eyes-closed states. The corresponding brain headmaps provide meaningful insights about synchronization in the brain in those states. K-means clustering of the connectivity parameters obtained from CPR and from linear correlation for global epileptic seizure and pre-seizure data showed significantly larger cluster centroid distances for CPR than for linear correlation, demonstrating the superior ability of CPR to discriminate seizure from pre-seizure. In the case of focal epilepsy, the headmap clearly enables us to identify the focus of the epilepsy, which provides certain diagnostic value. (C) 2013 Elsevier Ltd. All rights reserved.
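A compact sketch of the CPR computation, following the usual recurrence-based definition: each signal's generalized autocorrelation P(tau) is the probability of returning to an epsilon-neighbourhood after lag tau, and CPR is the Pearson correlation of the two normalized P curves. Scalar, non-embedded toy signals and the parameter values are illustrative assumptions.

import numpy as np

def recurrence_probability(x, eps, max_lag):
    p = np.empty(max_lag)
    for tau in range(1, max_lag + 1):
        p[tau - 1] = np.mean(np.abs(x[:-tau] - x[tau:]) < eps)   # P(tau)
    return p

def cpr(x, y, eps_frac=0.1, max_lag=200):
    px = recurrence_probability(x, eps_frac * np.std(x), max_lag)
    py = recurrence_probability(y, eps_frac * np.std(y), max_lag)
    px = (px - px.mean()) / px.std()
    py = (py - py.mean()) / py.std()
    return float(np.mean(px * py))                # correlation between the P curves

rng = np.random.default_rng(4)
t = np.arange(5000) / 250.0                       # 250 Hz toy "EEG-like" signals
x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.3 * rng.standard_normal(t.size)
print("CPR(x, y) =", round(cpr(x, y), 3))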

Relevance: 20.00%

Abstract:

Different medium access control (MAC) layer protocols, for example, the IEEE 802.11 series and others, are used in wireless local area networks. They have limitations in handling bulk data transfer applications such as video-on-demand, videoconferencing, etc. To address this problem, a cooperative MAC protocol environment has been introduced, which enables the MAC protocol of a node to use the MAC protocols of nearby nodes as and when required. It has been observed on various occasions that such a cooperative MAC establishes cooperative transmissions to send the specified data to the destination. In this paper we propose the cooperative MAC priority (CoopMACPri) protocol, which exploits the priority values assigned by the upper layers to select different paths to nodes running heterogeneous applications in a wireless ad hoc network environment. The CoopMACPri protocol improves the system throughput and minimizes energy consumption. Using a Markov chain model, we analyse the performance of the CoopMACPri protocol and derive closed-form expressions for the saturated system throughput and energy consumption. Performance evaluations validate the accuracy of the theoretical analysis and show that the performance of the CoopMACPri protocol varies with the number of nodes. The simulation results and the analysis together reflect the effectiveness of the proposed protocol as per the specifications.
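The priority-driven path selection can be illustrated with a toy rule (this is only an illustration of the idea, not the CoopMACPri rules): a high-priority frame takes the candidate path with the highest effective rate, while a low-priority frame takes the path with the lowest energy per bit. Path candidates, numbers, and the threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class Path:
    name: str
    rate_mbps: float            # effective end-to-end rate via this path
    energy_nj_per_bit: float    # estimated transmit energy cost

def select_path(paths, priority, high_threshold=4):
    if priority >= high_threshold:                         # delay-sensitive traffic
        return max(paths, key=lambda p: p.rate_mbps)
    return min(paths, key=lambda p: p.energy_nj_per_bit)   # best-effort traffic

candidates = [Path("direct", 6.0, 40.0),
              Path("via relay R1", 24.0, 55.0),
              Path("via relay R2", 18.0, 30.0)]
print(select_path(candidates, priority=6).name)            # e.g. a video frame
print(select_path(candidates, priority=1).name)            # e.g. bulk sensor data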

Relevance: 20.00%

Abstract:

Our work is motivated by impromptu (or "as-you-go") deployment of wireless relay nodes along a path, a need that arises in many situations. In this paper, the path is modeled as starting at the origin (where the data sink, e.g., the control center, is located) and evolving randomly over a lattice in the positive quadrant. A person walks along the path deploying relay nodes as he goes. At each step, the path can randomly either continue in the same direction, take a turn, or come to an end, at which point a data source (e.g., a sensor) that will send packets to the data sink has to be placed. A decision has to be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate at the source is very low, and simple link-by-link scheduling, we consider the problem of sequential relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total-cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set and show that this set is characterized by a boundary (with respect to the position of the last placed relay) beyond which it is optimal to place the next relay. Next, based on a simpler one-step-look-ahead characterization of the optimal policy, we propose an algorithm that is proved to converge to the optimal placement set in a finite number of steps and that is faster than value iteration. We show by simulations that the distance-threshold-based heuristic usually assumed in the literature is close to optimal, provided that the threshold distance is carefully chosen. (C) 2014 Elsevier B.V. All rights reserved.
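The distance-threshold heuristic mentioned at the end is easy to sketch on a random path: place a relay whenever the hop distance from the last placed node reaches a threshold, and place the source when the path ends. The end probability, hop cost, and threshold below are illustrative assumptions, and the optimal MDP policy of the paper is not implemented here.

import numpy as np

rng = np.random.default_rng(6)

def walk_and_place(threshold=4, p_end=0.05, hop_cost=lambda d: d ** 2):
    placed, since_last, total_hop_cost = 0, 0, 0.0
    while True:
        since_last += 1
        if since_last >= threshold:                  # threshold rule: place a relay
            total_hop_cost += hop_cost(since_last)
            placed, since_last = placed + 1, 0
        if rng.random() < p_end:                     # path ends: place the source
            total_hop_cost += hop_cost(since_last + 1)
            return placed, total_hop_cost

results = [walk_and_place() for _ in range(10_000)]
print("avg relays placed:", np.mean([r[0] for r in results]))
print("avg sum hop cost :", np.mean([r[1] for r in results]))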

Relevance: 20.00%

Abstract:

In this paper, we study the problem of designing a multi-hop wireless network for connecting sensors (hereafter called source nodes) to a Base Station (BS), by deploying a minimum number of relay nodes at a subset of given potential locations, while meeting a quality of service (QoS) objective specified as a hop count bound for paths from the sources to the BS. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-Hard. For this problem, we propose a polynomial-time approximation algorithm based on iteratively constructing shortest path trees and heuristically pruning away the relay nodes used, until the hop count bound is violated. Results show that the algorithm performs efficiently in various randomly generated network scenarios; in over 90% of the tested scenarios, it gave solutions that were either optimal or worse than optimal by just one relay. We then use random graph techniques to obtain, under a certain stochastic setting, an upper bound on the average-case approximation ratio of a class of algorithms (including the proposed algorithm) for this problem as a function of the number of source nodes and the hop count bound. To the best of our knowledge, this average-case analysis is the first of its kind in the relay placement literature. Since the design is based on a light traffic model, we also provide simulation results (using models for the IEEE 802.15.4 physical layer and medium access control) to assess the traffic levels up to which the QoS objectives continue to be met. (C) 2014 Elsevier B.V. All rights reserved.
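The construct-then-prune idea can be sketched as follows (the exact heuristic in the paper may differ): keep pruning candidate relays as long as every source can still reach the BS within the hop bound, checked with a breadth-first search from the BS. The small example graph and hop bound are illustrative.

from collections import deque

def hops_ok(edges, nodes, sources, bs, hop_bound):
    # BFS hop counts from the BS, restricted to the allowed node set
    adj = {n: [] for n in nodes}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].append(v)
            adj[v].append(u)
    dist, q = {bs: 0}, deque([bs])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return all(dist.get(s, float("inf")) <= hop_bound for s in sources)

def prune_relays(edges, sources, relays, bs, hop_bound):
    kept = set(relays)
    for r in list(relays):                           # try dropping relays one by one
        trial = (kept - {r}) | set(sources) | {bs}
        if hops_ok(edges, trial, sources, bs, hop_bound):
            kept.discard(r)
    return kept

edges = [("BS", "r1"), ("r1", "s1"), ("BS", "r2"), ("r2", "s1"), ("r2", "s2")]
print(prune_relays(edges, sources={"s1", "s2"}, relays={"r1", "r2"},
                   bs="BS", hop_bound=2))            # one relay suffices here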

Relevance: 20.00%

Abstract:

A link-level reliable multicast requires a channel access protocol to resolve the collision of feedback messages sent by the multicast data receivers. Several deterministic medium access control protocols have been proposed to attain high reliability, but they incur large delay. There are also protocols that provide only probabilistic reliability guarantees but have the least delay. In this paper, we propose a virtual token-based channel access and feedback protocol (VTCAF) for link-level reliable multicasting. The VTCAF protocol introduces a virtual (implicit) token-passing mechanism based on carrier sensing to avoid collisions between feedback messages. The delay performance is improved in the VTCAF protocol by reducing the number of feedback messages. Moreover, the VTCAF protocol is parametric in nature and can easily trade off reliability against delay as required by the underlying application. Such a cross-layer design approach is useful for a variety of multicast applications which require reliable communication with different levels of reliability and delay performance. We have analyzed our protocol to evaluate various performance parameters at different packet loss rates and compared its performance with those of others. Our protocol has also been simulated using the Castalia network simulator to evaluate the same performance parameters. Simulation and analytical results together show that the VTCAF protocol considerably reduces the average access delay while ensuring very high reliability at the same time.
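A toy slot-level comparison motivated by the idea above (not a model of VTCAF itself): receivers that must send feedback either pick a random back-off slot, in which case two of them may collide, or follow an implicit token order (e.g., by node ID), in which case each gets a distinct slot and collisions cannot occur. All parameters are illustrative.

import numpy as np

rng = np.random.default_rng(7)

def random_access_collision_prob(n_feedback, n_slots, trials=100_000):
    collisions = 0
    for _ in range(trials):
        slots = rng.integers(0, n_slots, size=n_feedback)       # random back-off slots
        collisions += len(set(slots.tolist())) < n_feedback     # any slot reused?
    return collisions / trials

n_feedback, n_slots = 5, 16
print("random access collision prob.:", random_access_collision_prob(n_feedback, n_slots))
print("implicit token order         : 0.0 (distinct slots by construction)")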

Relevance: 20.00%

Abstract:

In this paper, we consider an intrusion detection application for Wireless Sensor Networks. We study the problem of scheduling the sleep times of the individual sensors, where the objective is to maximize the network lifetime while keeping the tracking error to a minimum. We formulate this problem as a partially observable Markov decision process (POMDP) with continuous state-action spaces, in a manner similar to Fuemmeler and Veeravalli (IEEE Trans Signal Process 56(5), 2091-2101, 2008). However, unlike their formulation, we consider infinite-horizon discounted and average cost objectives as performance criteria. For each criterion, we propose a convergent on-policy Q-learning algorithm that operates on two timescales while employing function approximation. Feature-based representations and function approximation are necessary to handle the curse of dimensionality associated with the underlying POMDP. Our proposed algorithm incorporates a policy gradient update, using a one-simulation simultaneous perturbation stochastic approximation (SPSA) estimate, on the faster timescale, while the Q-value parameter (arising from a linear function approximation architecture for the Q-values) is updated in an on-policy temporal-difference fashion on the slower timescale. The feature selection scheme employed in each of our algorithms manages the energy and tracking components in a manner that assists the search for the optimal sleep-scheduling policy. For the sake of comparison, in both the discounted and average settings, we also develop a function approximation analogue of the Q-learning algorithm. This algorithm, unlike the two-timescale variant, does not possess theoretical convergence guarantees. Finally, we also adapt our algorithms to include a stochastic iterative estimation scheme for the intruder's mobility model, which is useful in settings where the latter is not known. Our simulation results on a synthetic two-dimensional network setting suggest that our algorithms result in better tracking accuracy at the cost of only a few additional sensors, in comparison to a recent prior work.
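The key ingredient on the faster timescale, the one-simulation (one-measurement) SPSA gradient estimate, can be sketched on a toy noisy quadratic cost rather than the sleep-scheduling POMDP. The step-size constants and the cost function are illustrative, and one-measurement estimates are noisy, so many iterations with diminishing steps are used.

import numpy as np

rng = np.random.default_rng(8)

def noisy_cost(theta):
    # toy cost: squared distance to [1, -2] plus observation noise
    return float(np.sum((theta - np.array([1.0, -2.0])) ** 2)
                 + 0.1 * rng.standard_normal())

theta = np.zeros(2)
for k in range(1, 200_001):
    a_k = 0.02 / k ** 0.602                      # step-size schedule
    c_k = 1.0 / k ** 0.101                       # perturbation-size schedule
    delta = rng.choice([-1.0, 1.0], size=2)      # Bernoulli +/-1 perturbation
    g_hat = noisy_cost(theta + c_k * delta) / (c_k * delta)   # one measurement only
    theta -= a_k * g_hat                         # stochastic gradient step

print("estimated minimizer:", np.round(theta, 2), "(true minimizer: [1, -2])")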

Relevance: 20.00%

Abstract:

In this paper, we propose an eigen framework for transmit beamforming for single-hop and dual-hop network models with single-antenna receivers. In cases where the number of receivers is not more than three, the proposed eigen approach is vastly superior, in terms of ease of implementation and computational complexity, to the existing convex-relaxation-based approaches. The essential premise is that the precoding problems can be posed as equivalent optimization problems of searching for an optimal vector in the joint numerical range of Hermitian matrices. We show that the latter problem has two convex approximations: the first is a semi-definite program that yields a lower bound on the solution, and the second is a linear matrix inequality that yields an upper bound on the solution. We study the performance of the proposed and existing techniques using numerical simulations.
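The central object, the joint numerical range of Hermitian matrices, is the set of points (w^H A_1 w, ..., w^H A_K w) over unit-norm vectors w. The sketch below explores it by random sampling, using rank-one A_k built from random channels and a max-min objective; the SDP and LMI bounds of the paper are not implemented, and all dimensions and channels are illustrative.

import numpy as np

rng = np.random.default_rng(9)

n_tx, n_rx = 4, 3
H = rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))
A = [np.outer(h.conj(), h) for h in H]           # A_k = h_k^H h_k (Hermitian, rank one)

def numerical_range_point(w):
    # map a unit-norm beamformer to its K quadratic-form coordinates
    return np.array([np.real(w.conj() @ Ak @ w) for Ak in A])

best_point, best_min = None, -np.inf
for _ in range(20_000):
    w = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)
    w /= np.linalg.norm(w)
    p = numerical_range_point(w)
    if p.min() > best_min:                       # max-min (fairness) objective
        best_min, best_point = p.min(), p

print("best sampled point in the joint numerical range:", np.round(best_point, 3))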