63 results for wireless communication systems
Abstract:
In the area of testing communication systems, the interfaces between the systems under test and their testers have a great impact on test generation and fault detectability. Several types of such interfaces have been standardized by the International Organization for Standardization (ISO). A general distributed test architecture, containing distributed interfaces, has been presented in the literature for testing distributed systems based on the Open Distributed Processing (ODP) Basic Reference Model (BRM); it is a generalized version of the ISO distributed test architecture. In this paper, we study the issue of test selection with respect to such a test architecture. In particular, we consider communication systems that can be modeled by finite state machines with several distributed interfaces, called ports. We develop a test generation method for such finite state machines based on the idea of synchronizable test sequences. Starting from the initial effort by Sarikaya, a certain amount of work has been done on generating test sequences for finite state machines with respect to the ISO distributed test architecture, all based on the idea of modifying existing test generation methods to produce synchronizable test sequences. However, none of these studies addresses the fault coverage provided by the resulting methods. We investigate the issue of fault coverage and point out that the methods given in the literature for the distributed test architecture cannot ensure the same fault coverage as the corresponding original testing methods. We also study the limitation of fault detectability in the distributed test architecture.
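As an illustration of the notion of a synchronizable test sequence for a multi-port FSM, the sketch below checks whether consecutive transitions can be executed without external coordination between port testers: the tester that must apply the next input should have participated in the previous transition, either by sending its input or by receiving one of its outputs. This is a minimal, hypothetical check under the commonly used synchronizability criterion, not the test generation method of the paper; the port names and transition format are assumptions.

```python
# Minimal sketch: check whether a multi-port FSM test sequence is synchronizable.
# A transition is represented as (input_port, input, {output_port: output, ...}).
# Assumption: the usual criterion -- the port that sends the next input must have
# been involved in the previous transition.

def is_synchronizable(sequence):
    for prev, curr in zip(sequence, sequence[1:]):
        prev_in_port, _, prev_outputs = prev
        curr_in_port, _, _ = curr
        involved = {prev_in_port} | set(prev_outputs)
        if curr_in_port not in involved:
            return False  # the tester at curr_in_port cannot know when to send
    return True

# Example with two ports 'U' (upper tester) and 'L' (lower tester):
seq_ok = [("U", "a", {"L": "x"}), ("L", "b", {"U": "y"})]
seq_bad = [("U", "a", {"U": "x"}), ("L", "b", {"U": "y"})]
print(is_synchronizable(seq_ok))   # True
print(is_synchronizable(seq_bad))  # False: port L saw nothing in the first transition
```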
Abstract:
A link failure in the path of a virtual circuit in a packet data network will lead to premature disconnection of the circuit by the end-points. A soft failure will result in degraded throughput over the virtual circuit. If these failures can be detected quickly and reliably, then appropriate rerouting strategies can automatically reroute the virtual circuits that use the failed facility. In this paper, we develop a methodology for analysing and designing failure detection schemes for digital facilities. Based on errored-second data, we develop a Markov model for the error and failure behaviour of a T1 trunk. The performance of a detection scheme is characterized by its false alarm probability and its detection delay. Using the Markov model, we analyse the performance of detection schemes that use physical layer or link layer information. The schemes basically rely upon detecting the occurrence of severely errored seconds (SESs). A failure is declared when a counter, driven by the occurrence of SESs, reaches a certain threshold. For hard failures, the design problem reduces to a proper choice of the threshold at which failure is declared, and of the connection reattempt parameters of the virtual circuit end-point session recovery procedures. For soft failures, the performance of a detection scheme depends, in addition, on how long and how frequent the error bursts are in a given failure mode. We also propose and analyse a novel Level 2 detection scheme that relies only upon anomalies observable at Level 2, i.e. CRC failures and idle-fill flag errors. Our results suggest that Level 2 schemes that perform as well as Level 1 schemes are possible.
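A rough sketch of the counter-and-threshold idea described above: each second is classified as severely errored or not, a counter is incremented on each SES and decremented (floored at zero) otherwise, and a failure is declared when the counter crosses a threshold. All numbers (SES probabilities, threshold, decrement rule) are hypothetical; the paper's Markov model and parameters are not reproduced here.

```python
import random

# Hypothetical per-second model: the 'good' mode rarely produces an SES,
# the 'failed' mode produces SESs frequently. The link fails at a known time
# so detection delay and false alarms can be measured against it.
P_SES = {"good": 0.001, "failed": 0.6}   # assumed SES probabilities per second
THRESHOLD = 5                            # assumed counter threshold
FAIL_AT = 3600                           # link enters the failed mode at t = 3600 s

def run_trial(horizon=7200, seed=None):
    rng = random.Random(seed)
    counter, declared_at, false_alarm = 0, None, False
    for t in range(horizon):
        state = "failed" if t >= FAIL_AT else "good"
        ses = rng.random() < P_SES[state]
        counter = counter + 1 if ses else max(counter - 1, 0)
        if counter >= THRESHOLD and declared_at is None:
            declared_at = t
            false_alarm = t < FAIL_AT
    return declared_at, false_alarm

delays, alarms = [], 0
for i in range(200):
    declared_at, false_alarm = run_trial(seed=i)
    if false_alarm:
        alarms += 1
    elif declared_at is not None:
        delays.append(declared_at - FAIL_AT)

print("false alarm fraction:", alarms / 200)
print("mean detection delay (s):", sum(delays) / len(delays))
```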
Abstract:
We consider the simplest IEEE 802.11 WLAN networks for which analytical models are available and seek to provide an experimental validation of these models. Our experiments include the following cases: (i) two nodes with saturated queues, sending fixed-length UDP packets to each other, and (ii) a TCP-controlled transfer between two nodes. Our experiments are based entirely on Aruba AP-70 access points operating under Linux. We report our observations on certain non-standard behavior of the devices. In cases where the devices adhere to the standards, we find that the analytical models predict the experimental data with a mean error of 3-5%.
Abstract:
We develop analytical models for estimating the energy spent by stations (STAs) in infrastructure WLANs when performing TCP-controlled file downloads. We focus on the energy spent in radio communication when the STAs are in the Continuously Active Mode (CAM) or in the static Power Save Mode (PSM). Our approach is to develop accurate models for obtaining the fractions of time the STA radios spend idling, receiving and transmitting. We discuss two traffic models for each mode of operation: (i) each STA performs one large file download, and (ii) the STAs perform short file transfers. We evaluate the rate of STA energy expenditure with long file downloads, and show that static PSM is worse than just using CAM. For short file downloads we compute the number of file downloads that can be completed with a given battery capacity, and show that PSM performs better than CAM in this case. We provide a validation of our analytical models using the NS-2 simulator.
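The core bookkeeping in such energy models is a weighted sum of per-state power draws and the fractions of time the radio spends in each state. The sketch below uses hypothetical power figures and time fractions purely to illustrate how CAM and PSM would be compared; none of the numbers come from the paper, which derives the time fractions analytically.

```python
# Hypothetical radio power draw in each state (watts).
POWER = {"tx": 1.4, "rx": 0.9, "idle": 0.8, "sleep": 0.05}

def energy_rate(time_fractions):
    """Average power (J/s) given the fraction of time spent in each radio state."""
    assert abs(sum(time_fractions.values()) - 1.0) < 1e-9
    return sum(POWER[state] * frac for state, frac in time_fractions.items())

# Illustrative time fractions during a long TCP download (assumed, not measured):
cam = {"tx": 0.05, "rx": 0.25, "idle": 0.70, "sleep": 0.00}
psm = {"tx": 0.07, "rx": 0.30, "idle": 0.55, "sleep": 0.08}  # PS-Poll overhead assumed

print("CAM energy rate: %.3f W" % energy_rate(cam))
print("PSM energy rate: %.3f W" % energy_rate(psm))
# The comparison depends entirely on the state fractions; obtaining those
# fractions accurately is what the analytical models in the paper are for.
```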
Abstract:
In this paper we are concerned with finding the maximum throughput that a mobile ad hoc network can support. Even when nodes are stationary, the problem of determining the capacity region has long been known to be NP-hard. Mobility introduces an additional dimension of complexity because nodes now also have to decide when they should initiate route discovery. Since route discovery involves communication and computation overhead, it should not be invoked very often. On the other hand, mobility implies that routes are bound to become stale, resulting in sub-optimal performance if routes are not updated. We attempt to gain some understanding of these effects by considering a simple one-dimensional network model. The simplicity of our model allows us to use stochastic dynamic programming (SDP) to find the maximum possible network throughput with ideal routing and medium access control (MAC) scheduling. Using the optimal value as a benchmark, we also propose and evaluate the performance of a simple threshold-based heuristic. Unlike the optimal policy, which requires considerable state information, the heuristic is very simple to implement and is not overly sensitive to the threshold value used. We find empirical conditions for our heuristic to be near-optimal, as well as network scenarios in which it does not perform well. We provide extensive numerical and simulation results for different parameter settings of our model.
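To make the route-refresh trade-off concrete, the toy sketch below contrasts a threshold rule for re-running route discovery against refreshing too rarely or too eagerly, on a crude caricature in which the current route degrades with node movement and discovery costs a slot of overhead. Everything here (the degradation model, the costs, the thresholds) is hypothetical and far simpler than the one-dimensional SDP formulation in the paper.

```python
import random

# Crude caricature: each slot, the current route delivers `quality` packets and
# mobility occasionally degrades it; running route discovery restores full
# quality but wastes one slot. A threshold rule rediscovers when quality drops
# below a cutoff. All parameters are hypothetical.
FULL, DEGRADE_P, DISCOVERY_COST, SLOTS = 1.0, 0.1, 1.0, 10_000

def simulate(threshold, seed=0):
    rng = random.Random(seed)
    quality, delivered = FULL, 0.0
    for _ in range(SLOTS):
        if quality < threshold:
            delivered -= DISCOVERY_COST   # slot spent on route discovery
            quality = FULL
        else:
            delivered += quality
            if rng.random() < DEGRADE_P:  # a node moved; the route gets worse
                quality *= 0.5
    return delivered / SLOTS

for th in (0.0, 0.3, 0.6, 0.9):
    print("threshold %.1f -> throughput %.3f" % (th, simulate(th)))
```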
Abstract:
An automated geo-hazard warning system is the need of the hour. Such a system integrates automated hazard evaluation with automated warning communication. The primary objective of this paper is to describe a geo-hazard warning system, based on an Internet-resident concept and the available cellular mobile infrastructure, that makes use of geo-spatial data. The system has a modular architecture, with input, understanding, expert, output and warning modules. It thus provides flexibility in integrating different types of hazard evaluation and communication systems, leading to a generalized hazard warning system. The developed system has been validated for landslide hazard under Indian conditions. It has been realized by utilizing landslide causative factors, rainfall forecasts from NASA's TRMM (Tropical Rainfall Measuring Mission) and a knowledge base of landslide hazard intensity maps, and it invokes a warning when warranted. The system's hazard evaluation agreed with expert evaluation to within 5-6% variability, and warning message delivery was found to be virtually instantaneous, with a maximum recorded time lag of 50 s and a minimum of 10 s. We therefore conclude that a novel, stand-alone system for dynamic hazard warning has been developed and implemented. Such a system could be very useful in a densely populated country where people are unaware of impending hazards.
Abstract:
In this paper, we investigate the achievable rate region of Gaussian multiple access channels (MAC) with finite input alphabet and quantized output. The two-user Gaussian MAC rate region with a finite input alphabet and an unquantized receiver has been studied earlier. In most high-throughput communication systems based on digital signal processing, however, the analog received signal is quantized using a low-precision quantizer. In this paper, we first derive expressions for the achievable rate region of a two-user Gaussian MAC with finite input alphabet and quantized output. We show that, with a finite input alphabet, the achievable rate region with the commonly used uniform receiver quantizer suffers a significant loss compared to that achieved with an unquantized receiver. This degradation arises because the received analog signal is densely distributed around the origin, and is therefore not efficiently quantized with a uniform quantizer, which has equally spaced quantization intervals. It is also observed that the density of the received analog signal around the origin increases with an increasing number of users. Hence, the loss in the achievable rate region due to uniform receiver quantization is expected to increase with the number of users. We therefore propose a novel non-uniform quantizer with finely spaced quantization intervals near the origin. For a two-user Gaussian MAC with a given finite input alphabet and low-precision receiver quantization, we show that the proposed non-uniform quantizer achieves a significantly larger rate region than a uniform quantizer.
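The computation below shows the kind of comparison involved: for a toy two-user MAC with 4-PAM inputs, it evaluates the sum-rate bound I((X1, X2); Q(Y)) achieved by a 3-bit uniform quantizer and by a simple companding-based non-uniform quantizer whose levels are packed near the origin, where the output mass concentrates. The input alphabet, noise level and companding rule are assumptions made for illustration; the paper's quantizer design and rate-region expressions are not reproduced.

```python
import numpy as np
from math import erf, log2, sqrt, inf

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Two-user Gaussian MAC with 4-PAM inputs and unit-variance noise (all assumed).
alphabet = np.array([-3.0, -1.0, 1.0, 3.0])
sums = np.add.outer(alphabet, alphabet).ravel()   # 16 equally likely input pairs
sigma = 1.0

def sum_rate(levels):
    """I((X1,X2); Q(Y)) in bits for a nearest-level quantizer with given levels."""
    levels = np.sort(levels)
    bounds = np.concatenate(([-inf], (levels[:-1] + levels[1:]) / 2.0, [inf]))
    cond = np.array([[phi((bounds[k + 1] - m) / sigma) - phi((bounds[k] - m) / sigma)
                      for k in range(len(levels))] for m in sums])
    marg = cond.mean(axis=0)
    h_q = -sum(p * log2(p) for p in marg if p > 0)
    h_q_given_s = -sum(p * log2(p) for row in cond for p in row if p > 0) / len(sums)
    return h_q - h_q_given_s

bits, y_max = 3, 8.0
uniform = np.linspace(-y_max, y_max, 2 ** bits)

mu = 4.0                                          # assumed companding parameter
u = np.linspace(-1.0, 1.0, 2 ** bits)
nonuniform = y_max * np.sign(u) * ((1.0 + mu) ** np.abs(u) - 1.0) / mu

print("uniform     quantizer sum-rate: %.3f bits" % sum_rate(uniform))
print("non-uniform quantizer sum-rate: %.3f bits" % sum_rate(nonuniform))
```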
Abstract:
In this paper we consider the downlink of an OFDM cellular system. The objective is to maximise the system utility by means of fractional frequency reuse and interference planning. The problem is a joint scheduling and power allocation problem. Using a gradient scheduling scheme, this problem is transformed into the problem of maximising a weighted sum-rate at each time slot. At each slot, an iterative scheduling and power allocation algorithm is employed to address the weighted sum-rate maximisation problem. The power allocation problem in this algorithm is a nonconvex optimisation problem. We study several algorithms that can tackle this part of the problem. We propose two modifications to these algorithms to address practical and computational feasibility. Finally, we compare the performance of our algorithm with some existing algorithms in terms of the achieved system utility. We show that the practical considerations do not affect the system performance adversely.
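To illustrate the per-slot subproblem, the sketch below sets up a toy two-cell, single-subchannel weighted sum-rate objective with inter-cell interference and solves it by brute force over a discrete power grid. It is only meant to show why the power allocation step is difficult (the rates couple through interference); the scheduler weights, channel gains and power grid are hypothetical, and the paper's iterative algorithm is not reproduced.

```python
import itertools
import math

# Toy downlink: two cells share one subchannel; each base station serves the
# user its scheduler picked, and the other cell's power appears as interference.
# All gains, weights and the power grid are assumed for illustration.
w = [1.0, 0.6]                    # gradient-scheduler weights of the two users
g_direct = [1.0, 0.8]             # gain from the serving BS to its own user
g_cross = [0.3, 0.2]              # gain from the interfering BS
noise = 0.1
p_grid = [0.0, 0.25, 0.5, 1.0]    # allowed transmit power levels (max 1.0)

def weighted_sum_rate(p):
    r0 = math.log2(1 + p[0] * g_direct[0] / (noise + p[1] * g_cross[0]))
    r1 = math.log2(1 + p[1] * g_direct[1] / (noise + p[0] * g_cross[1]))
    return w[0] * r0 + w[1] * r1

best = max(itertools.product(p_grid, repeat=2), key=weighted_sum_rate)
print("best power pair:", best, "-> weighted sum-rate %.3f" % weighted_sum_rate(best))
# For these numbers, muting one cell beats both cells transmitting at full power;
# this reuse-versus-interference coupling is what makes the continuous power
# allocation problem nonconvex.
```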
Abstract:
Amplify-and-forward (AF) relay based cooperation has been investigated in the literature given its simplicity and practicality. Two models for AF, namely fixed gain and fixed power relaying, have been extensively studied. In fixed gain relaying, the relay gain is fixed but its transmit power varies as a function of the source-relay (SR) channel gain. In fixed power relaying, the relay's instantaneous transmit power is fixed, but its gain varies. We propose a general AF cooperation model in which an average transmit power constrained relay jointly adapts its gain and transmit power as a function of the channel gains. We derive the optimal AF gain policy that minimizes the fading-averaged symbol error probability (SEP) of MPSK and present insightful and tractable lower and upper bounds for it. We then analyze the SEP of the optimal policy. Our results show that the optimal scheme is up to 39.7% and 47.5% more energy-efficient than fixed power relaying and fixed gain relaying, respectively. Further, the weaker the direct source-destination link, the greater the energy-efficiency gains.
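For context, the snippet below estimates the fading-averaged MPSK SEP of the two baseline schemes mentioned above, using textbook end-to-end SNR expressions for CSI-assisted (fixed power) and fixed-gain AF relaying over Rayleigh fading together with the standard MPSK SEP approximation. It ignores the direct source-destination link and uses assumed average SNRs; the jointly adaptive policy proposed in the paper is not implemented.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

M = 4                                     # QPSK
avg_snr_sr, avg_snr_rd = 10.0, 10.0       # assumed average per-hop SNRs (linear scale)
n = 100_000

# Rayleigh fading: instantaneous per-hop SNRs are exponentially distributed.
g1 = rng.exponential(avg_snr_sr, n)
g2 = rng.exponential(avg_snr_rd, n)

# Textbook end-to-end SNRs of the relayed path for the two AF baselines.
snr_fixed_power = g1 * g2 / (g1 + g2 + 1.0)   # CSI-assisted gain, fixed relay power
C = 1.0 + avg_snr_sr                          # a common fixed-gain constant
snr_fixed_gain = g1 * g2 / (g2 + C)

def avg_mpsk_sep(snr):
    """Fading-averaged SEP via the approximation SEP ~= 2 Q(sqrt(2*snr) sin(pi/M))."""
    arg = np.sqrt(2.0 * snr) * math.sin(math.pi / M)
    q = np.array([0.5 * math.erfc(a / math.sqrt(2.0)) for a in arg])
    return float(np.mean(2.0 * q))

print("fixed power relaying, average SEP: %.4f" % avg_mpsk_sep(snr_fixed_power))
print("fixed gain relaying,  average SEP: %.4f" % avg_mpsk_sep(snr_fixed_gain))
```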
Abstract:
We consider the secrecy obtained when one transmits on a Gaussian wiretap channel above the secrecy capacity. Instead of equivocation, we consider the probability of error as the criterion of secrecy. Usual channel codes are considered for transmission. The rates obtained can reach the channel capacity. We show that the “confusion” caused to Eve when the rate of transmission is above the capacity of Eve's channel is similar to the confusion caused by using wiretap channel codes below the secrecy capacity.
Abstract:
Quadrature phase shift keying (QPSK) is one of the most popular modulation schemes in coherent optical communication systems for data rates in excess of 40 Gbps because of its high spectral efficiency. This paper proposes a simple method of implementing a QPSK modulator in the integrated optic (IO) domain. The QPSK modulator is realized using standard IO components, such as Y-branches and electro-optic modulators (EOMs). Design optimization of the EOM is carried out considering fabrication constraints, miniaturization aspects, and simplicity. The interdependency between the electrode length, operating voltage, and electrode gap of an EOM has been captured in the form of a family of curves; these plots enable the design of EOMs for custom requirements. An innovative approach has been adopted to demonstrate the operation of the IO QPSK modulator in terms of phase data extracted from a beam propagation model. The results obtained by this approach have been verified using the conventional interferometric approach. The operation of the proposed IO QPSK modulator is demonstrated experimentally. The design of the IO QPSK modulator is taken up as part of a broader scheme that aims at the generation of QPSK-modulated microwave signals based on optical heterodyning. (C) 2014 Society of Photo-Optical Instrumentation Engineers (SPIE)
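As background on how QPSK is produced optically, the sketch below computes the output field of a generic IQ arrangement: two Mach-Zehnder arms driven as BPSK and combined with a 90-degree optical phase offset. This is a common textbook structure used purely as a stand-in; the Y-branch/EOM topology, drive voltages and design values of the paper are not reproduced.

```python
import numpy as np

# Generic IQ (nested Mach-Zehnder) picture of a QPSK transmitter.
# Each arm is a push-pull MZM with field transfer cos(pi*v / (2*V_pi));
# driving at v = 0 or v = 2*V_pi yields a +1/-1 (BPSK) field per arm.
V_PI = 3.0            # assumed half-wave voltage (volts)

def mzm_field(v):
    return np.cos(np.pi * v / (2.0 * V_PI))

def qpsk_field(bit_i, bit_q):
    v_i = 0.0 if bit_i else 2.0 * V_PI
    v_q = 0.0 if bit_q else 2.0 * V_PI
    # Combine the two arms with a 90-degree phase shift in the Q arm
    # (each combiner contributes a factor of 1/2 to the field).
    return 0.5 * (mzm_field(v_i) + 1j * mzm_field(v_q))

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    e = qpsk_field(*bits)
    print("bits %s -> field %+.2f%+.2fj, phase %6.1f deg"
          % (bits, e.real, e.imag, np.degrees(np.angle(e))))
```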
Abstract:
A design methodology based on the Minimum Bit Error Ratio (MBER) framework is proposed for a non-regenerative Multiple-Input Multiple-Output (MIMO) relay-aided system to determine its various linear parameters. We consider both the Relay-Destination (RD) and the Source-Relay-Destination (SRD) link designs based on this MBER framework, including the precoder, the Amplify-and-Forward (AF) matrix and the equalizer matrix of our system. It has been shown in the previous literature that MBER based communication systems are capable of reducing the Bit Error Ratio (BER) compared to their Linear Minimum Mean Square Error (LMMSE) based counterparts. We design a novel relay-aided system using various signal constellations, ranging from QPSK to general M-QAM and M-PSK constellations. Finally, we propose sub-optimal versions for reducing the computational complexity imposed. Our simulation results demonstrate that the proposed scheme indeed achieves a significant BER reduction over the existing LMMSE scheme.
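For reference, the sketch below builds the LMMSE destination equalizer that MBER designs of this kind are typically benchmarked against, for a two-hop channel y = H2 F (H1 P s + n1) + n2 with a given precoder P and AF matrix F. The dimensions, noise levels and random channel draws are assumptions; the MBER optimization itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed dimensions: Ns source streams, Nr relay antennas, Nd destination antennas.
Ns, Nr, Nd = 2, 4, 4
sigma1_sq, sigma2_sq = 0.1, 0.1                # relay / destination noise variances

H1 = (rng.standard_normal((Nr, Ns)) + 1j * rng.standard_normal((Nr, Ns))) / np.sqrt(2)
H2 = (rng.standard_normal((Nd, Nr)) + 1j * rng.standard_normal((Nd, Nr))) / np.sqrt(2)
P = np.eye(Ns, dtype=complex)                  # placeholder precoder
F = np.eye(Nr, dtype=complex)                  # placeholder AF matrix

# Effective channel and coloured noise covariance seen at the destination:
# y = Heff s + H2 F n1 + n2, with unit-power symbols s.
Heff = H2 @ F @ H1 @ P
Rn = sigma1_sq * (H2 @ F) @ (H2 @ F).conj().T + sigma2_sq * np.eye(Nd)

# LMMSE equalizer: W = Heff^H (Heff Heff^H + Rn)^{-1}
W = Heff.conj().T @ np.linalg.inv(Heff @ Heff.conj().T + Rn)

# Sanity check on a random QPSK block.
s = (rng.choice([-1, 1], (Ns, 100)) + 1j * rng.choice([-1, 1], (Ns, 100))) / np.sqrt(2)
n1 = np.sqrt(sigma1_sq / 2) * (rng.standard_normal((Nr, 100)) + 1j * rng.standard_normal((Nr, 100)))
n2 = np.sqrt(sigma2_sq / 2) * (rng.standard_normal((Nd, 100)) + 1j * rng.standard_normal((Nd, 100)))
y = H2 @ F @ (H1 @ P @ s + n1) + n2
print("mean square estimation error: %.4f" % np.mean(np.abs(W @ y - s) ** 2))
```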
Abstract:
The aim of this paper is to allocate the 'sleep time' of the individual sensors in an intrusion detection application so that the energy consumption of the sensors is reduced, while keeping the tracking error to a minimum. We propose two novel reinforcement learning (RL) based algorithms that attempt to minimize a certain long-run average cost objective. Both our algorithms incorporate feature-based representations to handle the curse of dimensionality associated with the underlying partially observable Markov decision process (POMDP). Further, the feature selection scheme used in our algorithms intelligently manages the energy cost and tracking cost factors, which in turn assists the search for the optimal sleeping policy. We also extend these algorithms to a setting where the intruder's mobility model is not known, by incorporating a stochastic iterative scheme for estimating the mobility model. The simulation results on a synthetic 2-d network setting are encouraging.
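As a much-simplified illustration of trading energy against tracking error with RL, the toy below uses tabular Q-learning on a discounted-cost stand-in (not the average-cost, feature-based algorithms of the paper) to learn when a single sensor should wake up: the state is the number of steps since the intruder was last observed, sensing costs energy, and staleness is penalized. All costs and parameters are hypothetical.

```python
import random

random.seed(0)

# Toy MDP: state = steps since the intruder was last observed (capped);
# actions: 0 = sleep, 1 = sense. Cost = sensing energy + a staleness penalty
# standing in for tracking error. All parameters are hypothetical.
MAX_STALE, ENERGY_COST, TRACK_WEIGHT, GAMMA = 10, 1.0, 0.3, 0.95

def step(state, action):
    if action == 1:                      # sense: pay energy, staleness resets
        return 0, ENERGY_COST
    nxt = min(state + 1, MAX_STALE)      # sleep: staleness (tracking error) grows
    return nxt, TRACK_WEIGHT * nxt

Q = [[0.0, 0.0] for _ in range(MAX_STALE + 1)]
state, alpha, eps = 0, 0.1, 0.2
for _ in range(200_000):
    if random.random() < 0.05:           # occasional restart so every state is visited
        state = random.randrange(MAX_STALE + 1)
    action = random.randrange(2) if random.random() < eps else Q[state].index(min(Q[state]))
    nxt, cost = step(state, action)
    Q[state][action] += alpha * (cost + GAMMA * min(Q[nxt]) - Q[state][action])
    state = nxt

policy = ["sense" if Q[s].index(min(Q[s])) == 1 else "sleep" for s in range(MAX_STALE + 1)]
print("learned action by staleness:", policy)  # typically sleeps while fresh, senses once stale
```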
Abstract:
This work considers the identification of the available whitespace, i.e., the regions that do not contain any existing transmitter, within a given geographical area. To this end, n sensors are deployed at random locations within the area. These sensors detect the presence of a transmitter within their radio range r_s using a binary sensing model, and their individual decisions are combined to estimate the available whitespace. The limiting behavior of the recovered whitespace as a function of n and r_s is analyzed. It is shown that both the fraction of the available whitespace that the nodes fail to recover and their radio range optimally scale as log(n)/n as n gets large. The problem of minimizing the sum absolute error in transmitter localization is also analyzed, and the corresponding optimal scaling of the radio range and the necessary minimum transmitter separation is determined.
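A toy one-dimensional Monte Carlo illustration of the recovery question: transmitters occupy exclusion intervals on the unit segment, each of the n randomly placed sensors reports whether it hears a transmitter within range r, and the r-neighbourhoods of the silent sensors are declared whitespace. The snippet estimates the fraction of the true whitespace that is missed when r is scaled as c log(n)/n. The declaration rule, geometry and constants are assumptions made for illustration, not the estimator analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def missed_fraction(n, c=2.0, n_tx=3, tx_radius=0.05, grid=2000):
    """Fraction of true whitespace not covered by any silent sensor's range."""
    r = c * np.log(n) / n                       # sensing range scaled as log(n)/n
    tx = rng.uniform(0, 1, n_tx)                # transmitter locations
    sensors = rng.uniform(0, 1, n)              # sensor locations
    pts = np.linspace(0, 1, grid)               # evaluation grid on the unit segment

    true_white = np.abs(pts[:, None] - tx[None, :]).min(axis=1) > tx_radius
    silent = sensors[np.abs(sensors[:, None] - tx[None, :]).min(axis=1) > r]
    if silent.size == 0:
        return 1.0
    covered = np.abs(pts[:, None] - silent[None, :]).min(axis=1) <= r
    missed = true_white & ~covered
    return missed.sum() / max(true_white.sum(), 1)

for n in (100, 400, 1600):
    print("n = %5d  ->  missed whitespace fraction: %.4f" % (n, missed_fraction(n)))
```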
Abstract:
We revisit a problem studied by Padakandla and Sundaresan [SIAM J. Optim., August 2009] on the minimization of a separable convex function subject to linear ascending constraints. The problem arises as the core optimization in several resource allocation problems in wireless communication settings. It is also a special case of an optimization of a separable convex function over the bases of a specially structured polymatroid. We give an alternative proof of the correctness of the algorithm of Padakandla and Sundaresan. In the process, we relax some of the restrictions they had placed on the objective function.
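To make the problem class concrete, the sketch below solves a small instance of minimizing a separable convex objective subject to linear ascending (nested partial-sum) constraints with a general-purpose solver. The particular objective, the orientation of the constraints and the data are assumptions chosen for illustration; the specialized algorithm whose correctness the paper re-proves is not implemented here.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative instance: minimize sum_i f_i(x_i) with f_i(x) = w_i * x**2 (convex),
# subject to ascending constraints  sum_{i<=k} x_i >= sum_{i<=k} a_i  for each k,
# a total-sum equality, and x >= 0.  (Assumed form; variants appear in the literature.)
w = np.array([1.0, 2.0, 0.5, 1.5])
a = np.array([0.5, 0.3, 0.8, 0.4])
n = len(w)

objective = lambda x: np.sum(w * x ** 2)
constraints = [{"type": "ineq",
                "fun": (lambda x, k=k: np.sum(x[: k + 1]) - np.sum(a[: k + 1]))}
               for k in range(n - 1)]
constraints.append({"type": "eq", "fun": lambda x: np.sum(x) - np.sum(a)})

res = minimize(objective, x0=a.copy(), method="SLSQP",
               constraints=constraints, bounds=[(0.0, None)] * n)
print("optimal x:", np.round(res.x, 4))
print("objective:", round(res.fun, 4))
print("partial sums of x vs a:", np.round(np.cumsum(res.x), 3), np.round(np.cumsum(a), 3))
```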