233 results for IEEE 802.16 Standard
Abstract:
A Radio Frequency (RF) based digital data transmission scheme using 8-channel encoder/decoder ICs is proposed for surface-electrode switching in a 16-electrode wireless Electrical Impedance Tomography (EIT) system. An RF-based wireless digital data transmission module (WDDTM) is developed, and the electrode switching of an EIT system is studied by analyzing the collected boundary data and the resistivity images of practical phantoms. An electrode switching module (ESM) is built from analog multiplexers and driven by parallel digital data sent over a wireless transmitter/receiver (Tx/Rx) module based on radio-frequency technology. The parallel digital bits are generated by an NI USB-6251 card running on the LabVIEW platform and passed to the transmission module, which transmits them to the receiver end. The Tx/Rx module is interfaced with the personal computer (PC) through the USB-based DAQ system and with the practical phantoms through the ESM. The digital bits required for multiplexer operation are generated sequentially by the digital output (D/O) ports of the DAQ card, and parallel-to-serial and serial-to-parallel conversion of the digital data is performed by the encoder and decoder ICs. The WDDTM successfully transmitted and received the parallel data required for switching the current and voltage electrodes wirelessly. A 1 mA, 50 kHz sinusoidal constant current is injected at the phantom boundary using the common-ground current injection protocol, and the boundary potentials developed at the voltage electrodes are measured. Resistivity images of the practical phantoms are reconstructed from the boundary data using EIDORS. The boundary data and the reconstructed resistivity images are studied to assess the wireless digital data transmission system. Boundary data profiles of the practical phantom in different configurations show that the multiplexers operate in the sequence required by the common-ground current injection protocol: the voltage peaks obtained at the proper positions in the boundary data profiles confirm the sequential operation of the multiplexers and the successful wireless transmission of the digital bits. The reconstructed images and their image parameters confirm that the boundary data are successfully acquired by the DAQ system, which again indicates sequential and proper multiplexer operation as well as successful wireless transmission. Hence, the developed RF-based WDDTM is found suitable for transmitting the digital bits required for electrode switching in a wireless EIT data acquisition system.
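The sequencing logic behind the electrode switching can be illustrated in a few lines. The following is a minimal sketch (hypothetical helper names, plain Python rather than the paper's LabVIEW/NI USB-6251 setup) of generating the parallel multiplexer-select bits for a 16-electrode, common-ground current injection protocol:

```python
def select_bits(channel: int, width: int = 4) -> list[int]:
    """4-bit parallel word selecting one of 16 multiplexer channels."""
    return [(channel >> b) & 1 for b in range(width)]

def common_ground_sequence(n_electrodes: int = 16):
    """Yield (current_mux_bits, voltage_mux_bits) for each measurement.

    Current is injected at one electrode against the common ground, and
    the boundary potential is read at every other electrode in turn.
    """
    for i_cur in range(n_electrodes):
        for v_meas in range(n_electrodes):
            if v_meas == i_cur:
                continue  # skip the current-carrying electrode
            yield select_bits(i_cur), select_bits(v_meas)

for cur_bits, volt_bits in common_ground_sequence():
    pass  # in hardware: bits -> encoder IC -> RF Tx -> Rx -> decoder IC -> mux
```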
Abstract:
In the two-user Gaussian Strong Interference Channel (GSIC) with finite-constellation inputs, it is known that a relative rotation between the constellations of the two users enlarges the Constellation Constrained (CC) capacity region. In this paper, a metric for finding the approximate angle of rotation that maximally enlarges the CC capacity is presented. It is shown that, with Gaussian input alphabets, the FDMA rate curve touches the capacity curve of the GSIC over some portion of the Strong Interference (SI) regime. Although this holds for Gaussian alphabets, we show that at high powers, with both users using the same finite constellation, the CC FDMA rate curve lies strictly inside the CC capacity curve for BPSK, QPSK, 8-PSK, 16-QAM, and 64-QAM. It is known that, with Gaussian input alphabets, the FDMA inner bound at the optimum sum-rate point is always better than the simultaneous-decoding inner bound throughout the Weak Interference (WI) regime. For a portion of the WI regime, it is shown that, with identical finite-constellation inputs for both users, the simultaneous-decoding inner bound enlarged by relative rotation between the constellations can be strictly better than the FDMA inner bound.
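How a rotation-angle sweep of this kind can be evaluated is sketched below, under the assumed signal model Y = X1 + exp(j*theta)*X2 + N: a Monte-Carlo estimate of the constellation-constrained sum rate I(X1, X2; Y) for equiprobable inputs, swept over theta. This is illustrative only; the paper's actual angle-selection metric is not reproduced here.

```python
import numpy as np

def cc_sum_rate(s1, s2, theta, snr_db, n_noise=2000):
    """Monte-Carlo estimate of I(X1, X2; Y) in bits, equiprobable inputs."""
    s2r = s2 * np.exp(1j * theta)
    pairs = (s1[:, None] + s2r[None, :]).ravel()   # all joint sum points
    sigma2 = (np.mean(np.abs(s1)**2) + np.mean(np.abs(s2)**2)) / 10**(snr_db / 10)
    n = np.sqrt(sigma2 / 2) * (np.random.randn(n_noise) + 1j * np.random.randn(n_noise))
    h = 0.0
    for y0 in pairs:                                # condition on each (x1, x2)
        d = np.abs(y0 + n[:, None] - pairs[None, :])**2
        logs = np.log2(np.exp((-d + np.abs(n[:, None])**2) / sigma2).sum(axis=1))
        h += logs.mean()
    return np.log2(pairs.size) - h / pairs.size

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
angles = np.linspace(0, np.pi / 2, 16)
rates = [cc_sum_rate(qpsk, qpsk, th, snr_db=5) for th in angles]
print("best rotation ~", angles[int(np.argmax(rates))])
```

At theta = 0 the two QPSK clouds overlap in the sum constellation and the CC rate collapses; the sweep exposes the rotations that separate the sum points and enlarge the rate, which is the effect the rotation metric targets.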
Abstract:
In this letter, we analyze the Diversity-Multiplexing gain Tradeoff (DMT) performance of a training-based reciprocal Single Input Multiple Output (SIMO) system. Assuming Channel State Information (CSI) is available at the Receiver (CSIR), we propose a channel-dependent, power-controlled Reverse Channel Training (RCT) scheme that enables the transmitter to directly estimate the power control parameter to be used for the forward-link data transmission. We show that, with an RCT power of $\bar{P}^{\gamma}$, $\gamma > 0$, and a forward data transmission power of $\bar{P}$, our proposed scheme achieves an infinite diversity order for $0 \le g_m < \frac{L_c - L_{B,\tau}}{L_c}\min(\gamma, 1)$ and $r > 2$, where $g_m$ is the multiplexing gain, $L_c$ is the channel coherence time, $L_{B,\tau}$ is the RCT duration, and $r$ is the number of receive antennas. We also derive an upper bound on the outage probability and show that, at high $\bar{P}$, it goes to zero asymptotically as $\exp(-\bar{P}^{E})$, where $E \triangleq \gamma - \frac{g_m L_c}{L_c - L_{B,\tau}}$. Thus, the proposed scheme achieves a significantly better DMT performance than the finite diversity order achieved by channel-agnostic, fixed-power RCT schemes.
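To make the condition and exponent concrete, here is a worked numeric instance with illustrative parameter values (not taken from the paper):

```latex
% With L_c = 100, L_{B,\tau} = 10 and \gamma = 1, infinite diversity holds for
\[
  0 \le g_m < \frac{100 - 10}{100}\,\min(1, 1) = 0.9,
\]
% and at, say, g_m = 0.5 the outage exponent evaluates to
\[
  E = 1 - \frac{0.5 \times 100}{100 - 10} \approx 0.444,
  \qquad
  P_{\mathrm{out}} \to 0 \ \text{as} \ \exp\!\bigl(-\bar{P}^{\,0.444}\bigr).
\]
```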
Abstract:
In this letter, we characterize the extrinsic information transfer (EXIT) behavior of a factor-graph-based message passing algorithm for detection in large multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas. The EXIT curves of a joint detection-decoding receiver are obtained for low-density parity-check (LDPC) codes of given degree distributions. From the obtained EXIT curves, the LDPC code degree profiles are optimized to design irregular LDPC codes matched to the large-MIMO channel and the joint message passing receiver. With low-complexity joint detection-decoding, these codes are shown to outperform off-the-shelf irregular codes in the literature by about 1 to 1.5 dB at a coded BER of $10^{-5}$ in 16×16, 64×64, and 256×256 MIMO systems.
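The curve-matching machinery behind such a design can be sketched generically. Below is a minimal example (not the paper's joint MIMO detector-decoder curves) of the standard Gaussian-approximation EXIT toolkit: a Monte-Carlo J-function, its numerical inverse, and the usual variable/check node EXIT expressions against which degree profiles are matched:

```python
import numpy as np

rng = np.random.default_rng(0)

def J(sigma, n=100_000):
    """Mutual information of a consistent Gaussian LLR, L ~ N(sigma^2/2, sigma^2)."""
    if sigma <= 0:
        return 0.0
    L = rng.normal(sigma**2 / 2, sigma, n)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-L)))

def J_inv(I, lo=1e-4, hi=40.0):
    """Numerical inverse of J by bisection (J is increasing in sigma)."""
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if J(mid, 20_000) < I:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def exit_vnd(I_A, d_v, sigma_ch):
    """Variable-node EXIT curve, channel LLR variance sigma_ch^2."""
    return J(np.sqrt((d_v - 1) * J_inv(I_A)**2 + sigma_ch**2))

def exit_cnd(I_A, d_c):
    """Check-node EXIT curve (duality approximation)."""
    return 1.0 - J(np.sqrt(d_c - 1) * J_inv(1.0 - I_A))

# Decoding converges when the VND curve stays above the inverted CND curve.
for I_A in (0.1, 0.5, 0.9):
    print(f"I_A={I_A:.1f}  VND={exit_vnd(I_A, d_v=3, sigma_ch=1.5):.3f}  "
          f"CND={exit_cnd(I_A, d_c=6):.3f}")
```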
Abstract:
The effectiveness of the last-level shared cache is crucial to the performance of a multi-core system. In this paper, we observe and exploit the Delinquent PC - Next-Use characteristic to improve shared-cache performance. We propose a new PC-centric cache organization, NUcache, for the shared last-level cache of multi-cores. NUcache logically partitions the associative ways of a cache set into MainWays and DeliWays. While all lines have access to the MainWays, only lines brought in by a subset of delinquent PCs, selected by a PC-selection mechanism, are allowed to enter the DeliWays. The PC-selection mechanism is a cost-benefit-analysis-based algorithm that uses Next-Use information to select the set of PCs that maximizes the hits experienced in the DeliWays. Performance evaluation reveals that NUcache improves performance over a baseline design by 9.6%, 30%, and 33% for dual-, quad-, and eight-core workloads composed of SPEC benchmarks, respectively. We also show that NUcache is more effective than other well-known cache-partitioning algorithms.
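A minimal sketch of the organization for a single cache set follows: the ways are logically split into MainWays and DeliWays, and only blocks fetched by a chosen set of delinquent PCs may occupy DeliWays. The PC-selection cost-benefit algorithm itself is not reproduced; `delinquent_pcs` is assumed given, and insertion is simplified to direct placement:

```python
from collections import OrderedDict

class NUCacheSet:
    def __init__(self, main_ways, deli_ways, delinquent_pcs):
        self.main = OrderedDict()            # LRU order: oldest first
        self.deli = OrderedDict()
        self.main_ways, self.deli_ways = main_ways, deli_ways
        self.delinquent_pcs = set(delinquent_pcs)

    def access(self, tag, pc):
        for part in (self.main, self.deli):
            if tag in part:
                part.move_to_end(tag)        # refresh LRU position on hit
                return True
        self._insert(tag, pc)                # miss: fill the line
        return False

    def _insert(self, tag, pc):
        # Blocks from selected delinquent PCs go to the DeliWays, which
        # shields them from eviction pressure in the MainWays.
        if pc in self.delinquent_pcs and self.deli_ways > 0:
            if len(self.deli) >= self.deli_ways:
                self.deli.popitem(last=False)
            self.deli[tag] = pc
        else:
            if len(self.main) >= self.main_ways:
                self.main.popitem(last=False)
            self.main[tag] = pc
```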
Abstract:
Context-aware computing provides individualized services by acquiring the user's surrounding context. By comparison, very little research has addressed integrating context from different environments, despite its usefulness in diverse applications such as healthcare, M-commerce, and tourist-guide applications. In particular, one of the most important requirements for providing personalized service in a highly dynamic and constantly changing user environment is a context model that aggregates context from different domains to infer the context of an entity at a more abstract level. Hence, the purpose of this paper is to propose a context model, based on cognitive aspects, that relates contextual information to better capture the observations of the worlds of interest for more sophisticated context-aware services. We developed a C-IOB (Context-Information, Observation, Belief) conceptual model that analyzes context data from the physical, system, application, and social domains to infer context at a more abstract level. The beliefs developed about an entity (person, place, or thing) are primitive in most theories of decision making, so applications can use these beliefs, in addition to transaction history, to provide intelligent services. We enhance the proposed context model by further classifying context information into three categories, well-defined, qualitative, and credible, to make the system more realistic for real-world implementation. The proposed model is deployed to assist an M-commerce application. The simulation results show that the service selection and service delivery of the system are high compared to a traditional system.
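A minimal data-structure sketch of the C-IOB pipeline as described, with illustrative field names and a toy aggregation rule (both assumptions, not the paper's inference mechanism): context information from the four domains is recorded as observations, which feed a belief about an entity.

```python
from dataclasses import dataclass, field

@dataclass
class ContextInfo:
    domain: str        # "physical" | "system" | "application" | "social"
    category: str      # "well-defined" | "qualitative" | "credible"
    value: object

@dataclass
class Belief:
    entity: str
    observations: list = field(default_factory=list)

    def observe(self, info: ContextInfo):
        self.observations.append(info)

    def infer(self) -> str:
        # Toy rule: the dominant domain among recent observations is taken
        # as the abstract-level context of the entity.
        domains = [o.domain for o in self.observations]
        return max(set(domains), key=domains.count) if domains else "unknown"

b = Belief("user-42")
b.observe(ContextInfo("physical", "well-defined", "at-airport"))
b.observe(ContextInfo("application", "qualitative", "browsing-flights"))
b.observe(ContextInfo("physical", "credible", "moving"))
print(b.infer())   # -> "physical"
```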
Abstract:
In this paper, we study duty cycling and power management in a network of energy harvesting sensor (EHS) nodes. We consider a one-hop network in which K EHS nodes send data to a destination over a wireless fading channel. The goal is to find the optimum duty cycling and power scheduling across the nodes that maximize the average sum data rate, subject to energy neutrality at each node. We adopt a two-stage approach to simplify the problem. In the inner stage, we solve the problem of optimally duty cycling the nodes subject to the short-term power constraint set by the outer stage. The outer stage sets the short-term power constraints on the inner stage to maximize the long-term expected sum data rate, subject to long-term energy neutrality at each node. Albeit suboptimal, our solutions turn out to have a surprisingly simple form: the duty cycle allotted to each node by the inner stage is simply that node's allotted power as a fraction of the total allotted power, and the allotted sum power is a clipped version of the sum harvested power across all the nodes. The average sum throughput thus ultimately depends only on the sum harvested power and its statistics. We illustrate the performance improvement offered by the proposed solution over naive schemes via Monte Carlo simulations.
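The stated solution form translates almost directly into code. A minimal sketch of one scheduling slot follows, where the outer stage's clipping level `p_max` is an assumed illustrative parameter:

```python
import numpy as np

def schedule_slot(p_alloc, p_max):
    """p_alloc: per-node powers allotted in this slot (must not be all zero)."""
    p_sum = min(p_alloc.sum(), p_max)     # sum power: clipped sum of allotments
    duty = p_alloc / p_alloc.sum()        # duty cycle: fractional power share
    return duty, p_sum

harvested = np.array([2.0, 1.0, 3.0])     # example per-node harvested powers (mW)
duty, p_sum = schedule_slot(harvested, p_max=5.0)
print(duty, p_sum)   # node k transmits for fraction duty[k] of the slot
```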
Abstract:
In this paper, an optical code-division multiple-access (O-CDMA) packet network, which offers inherent security in access networks, is considered. Two types of random access protocols are proposed for packet transmission: in protocol 1, all distinct codes are used; in protocol 2, the distinct codes as well as shifted versions of all these codes are used. O-CDMA network performance is analyzed with one-dimensional (1-D) optical orthogonal codes (OOCs) and two-dimensional (2-D) wavelength/time single-pulse-per-row (W/T SPR) codes. The main advantage of 2-D codes over 1-D codes is the reduction of errors due to multiple-access interference among the users. Both a correlation receiver and a chip-level receiver are considered in the analysis. Using an analytical model, we compute the packet-success probability and throughput for the OOC and SPR codes in an O-CDMA network; the analysis shows improved performance with SPR codes compared to OOC codes.
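A minimal, generic Monte-Carlo sketch of this kind of packet-level analysis follows. The interference-threshold model is an illustrative assumption, not the paper's exact OOC/SPR receiver analysis: each user transmits in a slot with probability p, and a packet is counted lost once the number of simultaneous interferers exceeds a threshold set by the code's correlation properties.

```python
import numpy as np

def throughput(n_users, p_tx, max_interf, n_slots=200_000,
               rng=np.random.default_rng(0)):
    active = rng.random((n_slots, n_users)) < p_tx    # who transmits each slot
    k = active.sum(axis=1)                            # simultaneous packets
    ok = active & (k[:, None] - 1 <= max_interf)      # per-packet success test
    return ok.sum() / n_slots                         # mean packets/slot

# In this toy model, 2-D codes simply tolerate more interferers than 1-D.
for mi in (2, 5):
    print(mi, throughput(n_users=30, p_tx=0.1, max_interf=mi))
```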
Abstract:
This paper presents a new approach to optical beam steering using 1-D linear arrays of curved waveguides as delay lines. The basic delay-generating structure is the curved/bent waveguide; hence its analytical modelling, which involves evaluating mode profiles, propagation constants, and losses, becomes important. This was done by solving the dispersion equation of a bent waveguide with specific refractive index profiles. The phase shifts due to the S-bends are obtained, and the results are compared with theoretical values. 2-D simulations are carried out using the beam propagation method (BPM) and MATLAB.
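The delay-line principle can be illustrated with a short sketch, using assumed geometry and effective-index values rather than results of the paper's bent-mode solver: an S-bend adds path length dL per element, giving a per-element phase shift phi = 2*pi*n_eff*dL/lambda, and a linear phase ramp across the 1-D array steers the beam.

```python
import numpy as np

lam   = 1.55e-6          # wavelength (m)
n_eff = 1.45             # assumed effective index of the guided mode
dL    = 0.8e-6           # assumed extra path length per successive element (m)
n_el  = 8                # array elements
d     = 4e-6             # assumed element pitch (m)

phase = 2 * np.pi * n_eff * dL / lam * np.arange(n_el)   # linear phase ramp
# Standard phased-array relation: sin(theta) = delta_phi * lambda / (2*pi*d)
theta = np.arcsin((phase[1] - phase[0]) * lam / (2 * np.pi * d))
print(np.degrees(theta), "degrees steering angle")
```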
Abstract:
In this paper, we are interested in high-spectral-efficiency multicode CDMA systems with a large number of users employing single/multiple transmit antennas and higher-order modulation. In particular, we consider a local neighborhood search based multiuser detection algorithm that offers very good performance and complexity, suited to systems with a large number of users employing M-QAM/M-PSK. We apply the algorithm to the chip matched filter output vector. We demonstrate near single-user (SU) performance of the algorithm in CDMA systems with a large number of users using 4-QAM/16-QAM/64-QAM/8-PSK on AWGN, frequency-flat, and frequency-selective fading channels. We further show that the algorithm also performs very well in multicode multiple-input multiple-output (MIMO) CDMA systems, outperforming other linear detectors and interference cancelers reported in the literature for such systems. The per-symbol complexity of the search algorithm is $O(K^2 n_t^2 n_c^2 M)$, where $K$ is the number of users, $n_t$ the number of transmit antennas at each user, $n_c$ the number of spreading codes multiplexed on each transmit antenna, and $M$ the modulation alphabet size, making the algorithm attractive for multiuser detection in large-dimension multicode MIMO-CDMA systems with M-QAM.
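A minimal sketch of a local neighborhood search detector of the kind described follows: starting from an initial point, repeatedly move to the best neighbor (one symbol coordinate changed to another alphabet point) that lowers the ML cost ||y - Hx||^2, until no neighbor improves. Tabu lists, escape strategies, and the chip-matched-filter front end of the actual algorithm are omitted.

```python
import numpy as np

def neighborhood_search(y, H, alphabet, x0, max_iter=100):
    x = x0.copy()
    cost = np.linalg.norm(y - H @ x)**2
    for _ in range(max_iter):
        best = None
        for i in range(len(x)):            # one-coordinate neighborhood
            for s in alphabet:
                if s == x[i]:
                    continue
                xn = x.copy(); xn[i] = s
                c = np.linalg.norm(y - H @ xn)**2
                if c < cost - 1e-12:       # track the best improving neighbor
                    cost, best = c, xn
        if best is None:
            return x                       # local optimum reached
        x = best
    return x

# Tiny 4-QAM example with a random channel (illustrative only).
rng = np.random.default_rng(1)
A = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
H = (rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))) / np.sqrt(2)
x_true = rng.choice(A, 4)
y = H @ x_true + 0.1 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
# Initialize with the alphabet point nearest the zero-forcing solution.
x0 = A[np.argmin(np.abs((np.linalg.pinv(H) @ y)[:, None] - A[None, :]), axis=1)]
print(np.allclose(neighborhood_search(y, H, A, x0), x_true))
```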
Abstract:
Constellation Constrained (CC) capacity regions of two-user Gaussian Multiple Access Channels (GMAC) have recently been reported, wherein introducing an appropriate rotation between the constellations of the two users is shown to maximally enlarge the CC capacity region. Such a Non-Orthogonal Multiple Access (NO-MA) method of enlarging the CC capacity region is referred to as the Constellation Rotation (CR) scheme. In this paper, we propose a novel NO-MA technique, the Constellation Power Allocation (CPA) scheme, to enlarge the CC capacity region of the two-user GMAC. We show that the CPA scheme offers CC sum capacities equal (at low SNR values) or close (at high SNR values) to those offered by the CR scheme, with reduced ML decoding complexity for some QAM constellations. For the CR scheme, code pairs approaching the CC sum capacity are known only for the class of PSK and PAM constellations, not for QAM constellations. In this paper, we design code pairs with the CPA scheme that approach the CC sum capacity for 16-QAM constellations. Further, the CPA scheme applied to the two-user GMAC with random phase offsets is shown to provide larger CC sum capacities at high SNR values than the CR scheme.
Abstract:
Low-complexity near-optimal detection of large-MIMO signals has attracted recent research. Recently, we proposed a local neighborhood search algorithm, namely the reactive tabu search (RTS) algorithm, as well as a factor-graph-based belief propagation (BP) algorithm for low-complexity large-MIMO detection. The motivation for the present work arises from two observations on these algorithms: i) although RTS achieves close-to-optimal performance for 4-QAM in large dimensions, significant performance improvement is still possible for higher-order QAM (e.g., 16- and 64-QAM); ii) BP also achieves near-optimal performance in large dimensions, but only for the {±1} alphabet. In this paper, we improve the large-MIMO detection performance for higher-order QAM signals using a hybrid algorithm that employs both RTS and BP. In particular, motivated by the observation that when a detection error occurs at the RTS output, the least significant bits (LSBs) of the symbols are most often in error, we propose to first reconstruct and cancel the interference due to the bits other than the LSBs at the RTS output, and to feed the interference-cancelled received signal to the BP algorithm to improve the reliability of the LSBs. The output of the BP is then fed back to the RTS for the next iteration. Simulation results show that the proposed algorithm outperforms the RTS algorithm as well as the semi-definite relaxation (SDR) and Gaussian tree approximation (GTA) algorithms.
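The cancellation step can be sketched for 4-PAM per real dimension (16-QAM viewed in its real-valued model), writing each symbol as x = 2m + l with m, l in {±1}: take the RTS hard decisions, cancel the reconstructed "m-part", and hand the residual y' = y - 2Hm̂ to a {±1} BP detector to refine the LSBs. The RTS and BP algorithms themselves are not reproduced; the decomposition below is a minimal illustration.

```python
import numpy as np

def split_pam4(x):
    """x in {-3,-1,+1,+3}: return (m, l) with x = 2*m + l and m, l in {+-1}."""
    m = np.where(x > 0, 1, -1)
    return m, x - 2 * m

def cancel_msb(y, H, x_rts):
    """Subtract the reconstructed 'm-part' of the RTS decision from y."""
    m_hat, _ = split_pam4(x_rts)
    return y - 2.0 * (H @ m_hat)      # residual handed to the {+-1} BP stage

H = np.eye(4)                          # toy channel
x_true = np.array([3, -1, 1, -1])      # transmitted 4-PAM symbols
x_rts  = np.array([3, -1, 1, -3])      # RTS output: LSB error in last symbol
y = H @ x_true
print(cancel_msb(y, H, x_rts))         # equals l_true = [1, 1, -1, 1]
```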
Abstract:
A new multi-sensor image registration technique is proposed, based on detecting feature corner points using a modified Harris Corner Detector (HCD). These feature points are matched using multi-objective optimization (a distance condition and an angle criterion) based on Discrete Particle Swarm Optimization (DPSO). The optimization is more efficient because it considers both the distance and the angle criteria, incorporating multi-objective switching in the fitness function. It picks out three corresponding corner points detected in the sensed and base images, from which an affine transformation aligns the sensed image with the base image. The results show that the new approach can provide a new dimension in solving multi-sensor image registration problems. From the obtained results, the performance of the image registration is evaluated, and the proposed approach is concluded to be efficient.
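The final alignment step is standard and easy to illustrate. A minimal sketch follows, taking the three matched (non-collinear) corner pairs from the DPSO matching stage as given inputs, solving for the 6-parameter affine transform, and mapping a point from the sensed image into the base-image frame:

```python
import numpy as np

def affine_from_3pairs(src, dst):
    """src, dst: (3, 2) arrays of matched (x, y) corners; must be non-collinear."""
    A = np.hstack([src, np.ones((3, 1))])   # rows [x, y, 1]
    # Solve A @ M = dst for M (3x2): 2x2 affine matrix stacked on a translation.
    return np.linalg.solve(A, dst)

src = np.array([[10., 10.], [50., 12.], [30., 40.]])   # sensed-image corners
dst = np.array([[12., 15.], [53., 18.], [33., 46.]])   # base-image corners
M = affine_from_3pairs(src, dst)
p = np.array([20., 20.])
print(np.append(p, 1.0) @ M)   # p mapped into the base-image frame
```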
Abstract:
This paper presents a new hierarchical clustering algorithm for crop-stage classification using hyperspectral satellite images. Among the many benefits and uses of remote sensing, one important application is crop-stage classification. Modern commercial imaging satellites, owing to their large volumes of imagery, offer greater opportunities for automated image analysis. Hence, we propose an unsupervised algorithm, the Hierarchical Artificial Immune System (HAIS), consisting of two steps: splitting the cluster centers and merging them. The high dimensionality of the data is reduced with the help of Principal Component Analysis (PCA). The classification results are compared with the K-means and Artificial Immune System algorithms. From the results obtained, we conclude that the proposed hierarchical clustering algorithm is accurate.
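A minimal, generic split-and-merge clustering sketch in the spirit of the two-step procedure follows; the immune-system operators of the actual algorithm are not reproduced, the thresholds are illustrative assumptions, and the input X stands in for PCA-reduced pixel spectra:

```python
import numpy as np

def assign(X, centers):
    """Label each sample with its nearest center."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1)

def split_merge(X, centers, split_std=1.0, merge_dist=0.5, iters=5):
    for _ in range(iters):
        labels, new = assign(X, centers), []
        for k in range(len(centers)):
            pts = X[labels == k]
            if len(pts) == 0:
                continue
            c = pts.mean(axis=0)
            j = pts.std(axis=0).argmax()
            if pts.std(axis=0)[j] > split_std:           # split a loose cluster
                left, right = pts[pts[:, j] <= c[j]], pts[pts[:, j] > c[j]]
                if len(left) and len(right):
                    new += [left.mean(axis=0), right.mean(axis=0)]
                    continue
            new.append(c)
        keep = []                                        # merge close centers
        for c in new:
            if all(np.linalg.norm(c - k) > merge_dist for k in keep):
                keep.append(c)
        centers = np.array(keep)
    return centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.4, (100, 2)), rng.normal(4, 0.4, (100, 2))])
print(len(split_merge(X, X[:1].copy())))   # typically settles near 2 clusters
```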
Abstract:
Effective conservation and management of natural resources requires up-to-date information on land cover (LC) types and their dynamics. LC dynamics are being captured using multi-resolution remote sensing (RS) data with appropriate classification strategies. RS data combined with important environmental layers (either remotely acquired or derived from ground measurements) would, however, be more effective in addressing LC dynamics and the associated changes. These ancillary layers provide additional information for delineating LC class decision boundaries compared to conventional classification techniques. This communication ascertains the possibility of improving the classification accuracy of RS data with ancillary and derived geographical layers such as vegetation index, temperature, digital elevation model (DEM), aspect, slope, and texture, implemented across three terrains of varying topography. The study should help in selecting appropriate ancillary data, depending on the terrain, for better classified information.
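The idea of stacking ancillary layers as extra features before classification can be sketched briefly. All arrays below are random placeholders and the classifier choice is an illustrative assumption, not the paper's exact setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

H, W, B = 64, 64, 4                       # image height, width, spectral bands
spectral = np.random.rand(H, W, B)        # placeholder RS bands
ndvi = (spectral[..., 3] - spectral[..., 2]) / \
       (spectral[..., 3] + spectral[..., 2] + 1e-9)   # derived vegetation index
dem   = np.random.rand(H, W)              # placeholder elevation layer
slope = np.random.rand(H, W)              # placeholder slope layer

# Stack spectral bands with ancillary/derived layers as per-pixel features.
X = np.dstack([spectral, ndvi, dem, slope]).reshape(-1, B + 3)
y = np.random.randint(0, 3, H * W)        # placeholder LC training labels

clf = RandomForestClassifier(n_estimators=50).fit(X, y)
lc_map = clf.predict(X).reshape(H, W)     # classified LC map with ancillary info
```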