295 results for Distributed sensing


Relevance: 20.00%

Abstract:

Opportunistic selection is a practically appealing technique used in multi-node wireless systems to maximize throughput, implement proportional fairness, etc. However, selection is challenging because information about a node's channel gains is often available only locally at each node and not centrally. We propose a novel multiple-access-based distributed selection scheme that generalizes the best features of the timer scheme, which requires minimal feedback but does not always guarantee successful selection, and the fast splitting scheme, which requires more feedback but guarantees successful selection. Unlike the conventional splitting scheme, the proposed scheme's design explicitly accounts for feedback time overheads, and unlike the timer scheme, it guarantees selection of the user with the highest metric. We analyze and minimize the average time, including feedback, that the scheme requires to make a selection. Once feedback overheads are accounted for, the proposed scheme is scalable and considerably faster than several schemes proposed in the literature. Furthermore, its gains increase as the feedback overhead increases.
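
As a rough illustration of the timer-scheme trade-off described above (minimal feedback, but occasional selection failure), here is a minimal Python sketch; the inverse-metric timer mapping, the vulnerability window, and all parameter values are illustrative assumptions, not the proposed multiple-access scheme:

    import random

    def timer_selection(metrics, t_max=1.0, delta=0.05):
        """Timer-based opportunistic selection (baseline, not the proposed scheme).

        Each node maps its local metric to a timer that decreases with the metric;
        the node whose timer fires first transmits. Selection fails if another
        timer fires within the vulnerability window `delta` (collision).
        """
        mu_max = max(metrics)
        timers = sorted(t_max * (1.0 - m / mu_max) for m in metrics)
        first, second = timers[0], timers[1]
        success = (second - first) > delta      # no collision within the window
        return success, first                   # selection outcome and time spent

    # Example: 20 nodes with i.i.d. uniform metrics; estimate the success probability.
    random.seed(1)
    trials = [timer_selection([random.random() for _ in range(20)]) for _ in range(10000)]
    print("success rate:", sum(ok for ok, _ in trials) / len(trials))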

Relevance: 20.00%

Abstract:

This paper analyzes the error exponents in Bayesian decentralized spectrum sensing, i.e., the detection of occupancy of the primary spectrum by a cognitive radio, with probability of error as the performance metric. At the individual sensors, the error exponents of a Central Limit Theorem (CLT) based detection scheme are analyzed. At the fusion center, a K-out-of-N rule is employed to arrive at the overall decision. It is shown that, in the presence of fading, for a fixed number of sensors, the error exponents with respect to the number of observations at both the individual sensors as well as at the fusion center are zero. This motivates the development of the error exponent with a certain probability as a novel metric that can be used to compare different detection schemes in the presence of fading. The metric is useful, for example, in answering the question of whether to sense for a pilot tone in a narrow band (and suffer Rayleigh fading) or to sense the entire wide-band signal (and suffer log-normal shadowing), in terms of the error exponent performance. The error exponents with a certain probability at both the individual sensors and at the fusion center are derived, with both Rayleigh as well as log-normal shadow fading. Numerical results are used to illustrate and provide a visual feel for the theoretical expressions obtained.
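
A minimal sketch of the kind of setup described above, assuming a CLT-thresholded energy detector at each sensor and a K-out-of-N fusion rule; the SNR, sample counts, and threshold are illustrative and do not reproduce the paper's analysis:

    import numpy as np

    rng = np.random.default_rng(0)

    def sensor_decision(n_obs, snr_linear, occupied):
        """Energy detector with a CLT-based threshold (illustrative, not the paper's exact test)."""
        noise = rng.normal(size=n_obs)
        signal = np.sqrt(snr_linear) * rng.normal(size=n_obs) if occupied else 0.0
        energy = np.sum((noise + signal) ** 2)
        # Under H0, energy is approximately Normal(n_obs, 2*n_obs) by the CLT;
        # 1.645 is the 95% Gaussian quantile, i.e. roughly a 5% false-alarm rate.
        threshold = n_obs + np.sqrt(2 * n_obs) * 1.645
        return energy > threshold

    def fusion_k_out_of_n(decisions, k):
        """Declare the band occupied if at least k of the N sensors say so."""
        return sum(decisions) >= k

    # N = 10 sensors, 200 observations each, -5 dB SNR, K = 6 fusion rule.
    N, K = 10, 6
    decisions = [sensor_decision(200, 10 ** (-5 / 10), occupied=True) for _ in range(N)]
    print("fusion decision (occupied?):", fusion_k_out_of_n(decisions, K))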

Relevance: 20.00%

Abstract:

This paper considers sequential hypothesis testing in a decentralized framework. We start with two simple decentralized sequential hypothesis testing algorithms, one of which is later proved to be asymptotically Bayes optimal. We also consider composite versions of decentralized sequential hypothesis testing. A novel nonparametric version of decentralized sequential hypothesis testing, based on universal source coding theory, is developed. Finally, we design a simple decentralized multihypothesis sequential detection algorithm.
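
The abstract does not spell out the algorithms; as background, here is a minimal sketch of Wald's sequential probability ratio test for a Gaussian mean, the standard single-sensor building block in such schemes, with thresholds set from assumed error targets:

    import math, random

    def sprt_gaussian(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
        """Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1.

        Returns (decision, number_of_samples_used). Thresholds use Wald's
        approximations A = (1 - beta) / alpha and B = beta / (1 - alpha).
        """
        upper = math.log((1 - beta) / alpha)
        lower = math.log(beta / (1 - alpha))
        llr = 0.0
        for n, x in enumerate(samples, start=1):
            # per-sample Gaussian log-likelihood ratio
            llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
            if llr >= upper:
                return "H1", n
            if llr <= lower:
                return "H0", n
        return "undecided", n

    random.seed(0)
    stream = (random.gauss(1.0, 1.0) for _ in range(10000))   # data generated under H1
    print(sprt_gaussian(stream))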

Relevance: 20.00%

Abstract:

In this paper, we consider a distributed function computation setting, where there are m distributed but correlated sources X1,...,Xm and a receiver interested in computing an s-dimensional subspace generated by [X1,...,Xm]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting under a common framework, both the Korner-Marton approach of using a common linear encoder as well as the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element in V corresponds to a distinct linear combination of a set {X_1, ..., X_m} of m random variables whose joint probability distribution function is given.
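
A toy illustration of the Korner-Marton ingredient mentioned above, assuming a (7,4) Hamming parity-check matrix as the common linear encoder and sources that differ in at most one position; this is not the paper's nested-code construction:

    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code: column j is the binary expansion of j+1.
    H = np.array([[(j + 1) >> b & 1 for j in range(7)] for b in range(3)])

    def encode(x):
        """Each source sends only its 3-bit syndrome H x (mod 2) instead of all 7 source bits."""
        return H @ x % 2

    def decode_sum(s1, s2):
        """Recover z = x1 XOR x2 (assumed to have weight <= 1) from the syndrome sum."""
        s = (s1 + s2) % 2
        z = np.zeros(7, dtype=int)
        pos = int(s[0]) + 2 * int(s[1]) + 4 * int(s[2])   # syndrome value = position of the single 1
        if pos > 0:
            z[pos - 1] = 1
        return z

    rng = np.random.default_rng(3)
    x1 = rng.integers(0, 2, 7)
    z_true = np.zeros(7, dtype=int)
    z_true[rng.integers(7)] = 1                 # the two sources differ in exactly one bit
    x2 = (x1 + z_true) % 2
    assert np.array_equal(decode_sum(encode(x1), encode(x2)), z_true)
    print("recovered X1 xor X2 from 3+3 syndrome bits instead of 7+7 source bits")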

Relevance: 20.00%

Abstract:

We report the simulation and analytical results obtained for homogeneous or bulk sensing of protein on a silicon-on-insulator strip-waveguide-based microring resonator. The radii of the rings considered are 5 μm and 20 μm; the waveguide dimensions are 300 × 300 nm. A gap of (i) 200 nm and (ii) 300 nm exists between the ring and the bus waveguide. The biomaterial is uniformly distributed over a thickness which exceeds the evanescent-field penetration depth of 150 nm. The sensitivities of the resonators are 32.5 nm/RIU and 17.5 nm/RIU (RIU: refractive index unit), respectively.
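
For a feel of what the reported nm/RIU figures mean, a two-line conversion to resonance-wavelength shift under an assumed bulk index change of 1e-3 RIU:

    # Resonance shift = bulk sensitivity * refractive-index change (assumed, illustrative numbers).
    for S_nm_per_riu in (32.5, 17.5):            # reported sensitivities
        dn = 1e-3                                # assumed bulk index change (RIU)
        print(f"S = {S_nm_per_riu} nm/RIU -> shift = {S_nm_per_riu * dn * 1e3:.1f} pm")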

Relevance: 20.00%

Abstract:

Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. The codes perform poorly, though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a single failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple yet powerful framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
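
A toy numeric illustration of the repair problem motivating the framework, using a real-valued Vandermonde MDS-style code (practical systems use finite fields); it only shows that classical erasure-code repair downloads a whole file's worth of data, not the proposed two-code framework:

    import numpy as np

    k, n = 4, 6                       # k data symbols encoded into n storage nodes
    data = np.array([3.0, 1.0, 4.0, 1.0])
    G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)   # n x k encoding matrix
    fragments = G @ data              # fragment i lives on node i

    # Node 2 fails. To rebuild its fragment, download k = 4 surviving fragments
    # (the size of the whole file), decode the data, and re-encode -- the cost
    # that regenerating codes and repair-friendly frameworks aim to reduce.
    helpers = [0, 1, 3, 4]
    decoded = np.linalg.solve(G[helpers], fragments[helpers])
    rebuilt = G[2] @ decoded
    assert np.isclose(rebuilt, fragments[2])
    print("repaired node 2 after downloading", len(helpers), "fragments")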

Relevance: 20.00%

Abstract:

Recently, Guo and Xia introduced low complexity decoders called Partial Interference Cancellation (PIC) and PIC with Successive Interference Cancellation (PIC-SIC), which include the Zero Forcing (ZF) and ZF-SIC receivers as special cases, for point-to-point MIMO channels. In this paper, we show that PIC and PIC-SIC decoders are capable of achieving the full cooperative diversity available in wireless relay networks. We give sufficient conditions for a Distributed Space-Time Block Code (DSTBC) to achieve full diversity with PIC and PIC-SIC decoders and construct a new class of DSTBCs with low complexity full-diversity PIC-SIC decoding using complex orthogonal designs. The new class of codes includes a number of known full-diversity PIC/PIC-SIC decodable Space-Time Block Codes (STBCs) constructed for point-to-point channels as special cases. The proposed DSTBCs achieve higher rates (in complex symbols per channel use) than the multigroup ML decodable DSTBCs available in the literature. Simulation results show that the proposed codes have better bit error rate performance than the best known low complexity, full-diversity DSTBCs.
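
ZF-SIC is one of the special cases of PIC-SIC noted above; a minimal numpy sketch of ZF-SIC detection of BPSK streams over a random MIMO channel, with all parameters assumed for illustration (this is not the proposed DSTBC construction):

    import numpy as np

    def zf_sic_detect(H, y):
        """Zero-forcing with successive interference cancellation for BPSK symbols.

        At each stage, detect the stream with the least noise enhancement under
        the ZF filter, subtract its contribution, and repeat on the remainder.
        """
        H = H.copy().astype(float)
        y = y.copy().astype(float)
        x_hat = np.zeros(H.shape[1])
        remaining = list(range(H.shape[1]))
        while remaining:
            W = np.linalg.pinv(H[:, remaining])          # ZF filter for remaining streams
            k = int(np.argmin(np.sum(W ** 2, axis=1)))   # stream with least noise enhancement
            s = np.sign(W[k] @ y)                        # BPSK slicing
            idx = remaining.pop(k)
            x_hat[idx] = s
            y -= H[:, idx] * s                           # cancel the detected stream
        return x_hat

    rng = np.random.default_rng(7)
    H = rng.normal(size=(4, 4))
    x = rng.choice([-1.0, 1.0], size=4)
    y = H @ x + 0.1 * rng.normal(size=4)
    print("sent:", x, " detected:", zf_sic_detect(H, y))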

Relevance: 20.00%

Abstract:

Compressive Sensing (CS) is a new sensing paradigm which permits sampling of a signal at its intrinsic information rate, which could be much lower than the Nyquist rate, while guaranteeing good-quality reconstruction for signals sparse in a linear transform domain. We explore the application of the CS formulation to music signals. Since music signals comprise both tonal and transient components, we examine several transforms such as the discrete cosine transform (DCT), the discrete wavelet transform (DWT), the Fourier basis, and also non-orthogonal warped transforms to explore the effectiveness of CS theory and the reconstruction algorithms. We show that for a given sparsity level, the DCT, overcomplete, and warped Fourier dictionaries result in better reconstruction, and the warped Fourier dictionary gives perceptually better reconstruction. “MUSHRA” test results show that a moderate-quality reconstruction is possible with about half the Nyquist sampling rate.
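
A minimal sketch of CS acquisition and greedy recovery of a DCT-sparse signal, using orthogonal matching pursuit for reconstruction; the dimensions, sparsity level, and choice of OMP are assumptions and not the reconstruction algorithms evaluated in the paper:

    import numpy as np
    from scipy.fft import idct

    rng = np.random.default_rng(0)

    n, m, k = 256, 80, 8                          # signal length, measurements, sparsity
    coeffs = np.zeros(n)
    coeffs[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    x = idct(coeffs, norm="ortho")                # signal sparse in the DCT domain

    Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement matrix
    A = Phi @ idct(np.eye(n), norm="ortho", axis=0)   # effective sensing matrix on DCT coefficients
    y = Phi @ x                                   # m << n compressive measurements

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily pick k atoms, re-fit by least squares."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ sol
        est = np.zeros(A.shape[1])
        est[support] = sol
        return est

    x_hat = idct(omp(A, y, k), norm="ortho")
    print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))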

Relevance: 20.00%

Abstract:

Signal acquisition under a compressed sensing scheme offers the possibility of acquiring and reconstructing signals that are sparse in some basis incoherent with the measurement kernel, using a sub-Nyquist number of measurements. In particular, when the sole objective of the acquisition is the detection of the frequency of a signal rather than exact reconstruction, an undersampling framework like CS is able to perform the task. In this paper we explore the possibility of acquisition and detection of the frequencies of multiple analog signals heavily corrupted by additive white Gaussian noise. We improve upon the MOSAICS architecture proposed in our previous work to include a wider class of signals having non-integral frequency components. This makes it possible to perform multiplexed compressed sensing for general frequency-sparse signals.
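
A hedged sketch of sub-Nyquist frequency detection by matching compressive measurements against a compressed dictionary of candidate (possibly non-integral) frequencies; the grid, dimensions, and matched-filter detector are illustrative and not the MOSAICS architecture:

    import numpy as np

    rng = np.random.default_rng(1)

    n, m = 1024, 128                               # Nyquist-rate samples vs compressive measurements
    t = np.arange(n)
    f_true = 0.1237                                 # normalized, non-integral frequency
    x = np.cos(2 * np.pi * f_true * t) + 1.0 * rng.normal(size=n)   # heavily noisy tone

    Phi = rng.normal(size=(m, n)) / np.sqrt(m)      # random sub-Nyquist measurement operator
    y = Phi @ x

    # Dictionary of candidate frequencies, compressed by the same operator.
    freqs = np.linspace(0.0, 0.5, 2000)
    D = np.exp(2j * np.pi * np.outer(t, freqs))
    scores = np.abs((Phi @ D).conj().T @ y)         # match y against each compressed atom
    print("estimated frequency:", freqs[int(np.argmax(scores))])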

Relevance: 20.00%

Abstract:

Effective conservation and management of natural resources requires up-to-date information on land cover (LC) types and their dynamics. LC dynamics are captured using multi-resolution remote sensing (RS) data with appropriate classification strategies. RS data combined with important environmental layers (either remotely acquired or derived from ground measurements) would, however, be more effective in addressing LC dynamics and the associated changes. These ancillary layers provide additional information for delineating the decision boundaries of LC classes compared to conventional classification techniques. This communication ascertains the possibility of improved classification accuracy of RS data with ancillary and derived geographical layers such as vegetation index, temperature, digital elevation model (DEM), aspect, slope and texture. This has been implemented in three terrains of varying topography. The study would help in selecting appropriate ancillary data, depending on the terrain, for improved classification.
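
A minimal sketch of the feature-stacking idea, assuming a random-forest classifier and synthetic per-pixel data (the communication does not prescribe this particular classifier); it only shows the mechanics of appending ancillary layers to spectral bands:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Synthetic per-pixel samples: 4 spectral bands plus 5 ancillary layers
    # (standing in for NDVI, DEM, slope, aspect, texture).
    n_pixels, n_bands = 3000, 4
    spectral = rng.normal(size=(n_pixels, n_bands))
    ancillary = rng.normal(size=(n_pixels, 5))
    score = spectral[:, 0] + ancillary[:, 1] + 0.5 * rng.normal(size=n_pixels)
    labels = np.digitize(score, [-0.8, 0.8])        # 3 toy land-cover classes

    def evaluate(features):
        X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        return accuracy_score(y_te, clf.predict(X_te))

    print("spectral only        :", evaluate(spectral))
    print("spectral + ancillary :", evaluate(np.hstack([spectral, ancillary])))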

Relevance: 20.00%

Abstract:

Structural health monitoring of existing infrastructure is currently an active field of research, where elaborate experimental programs and advanced analytical methods are used to identify the current state of health of critical structures. Change in static deflection is the simplest damage indicator in a structural health monitoring scenario for bridges, yet it is among the least exploited in damage identification strategies. In this paper, some simple and elegant equations based on the loss of symmetry due to damage are derived and presented for identifying damage in a bridge girder, modeled as a simply supported beam, using changes in static deflections and dynamic parameters. A single contiguous and distributed damage, typical of reinforced or prestressed concrete structures, is assumed for the structure. The methodology is extended to both baseline-free and baseline-inclusive measurements. The measurement strategy involves applying loads at only two symmetric points, one at a time, and measuring deflections at those symmetric points as well as at the midspan of the beam. A laboratory-based experiment is used to validate the approach.
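
A minimal sketch of the symmetry check described above, using the textbook unit-load deflection integral for a simply supported beam and modelling damage as an assumed local stiffness reduction; the numbers are illustrative and the equations are not the ones derived in the paper:

    import numpy as np

    L, EI0, P = 10.0, 5.0e7, 1.0e4        # span (m), flexural rigidity (N.m^2), load (N) -- assumed values

    def moment(xi, a):
        """Bending moment at xi in a simply supported beam due to a unit load at a."""
        return np.where(xi <= a, (L - a) / L * xi, a / L * (L - xi))

    def deflection(x_meas, x_load, EI):
        """Unit-load (virtual work) method: integrate M_real * M_virtual / EI over the span."""
        xi = np.linspace(0.0, L, 2001)
        f = P * moment(xi, x_load) * moment(xi, x_meas) / EI(xi)
        return float(np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(xi)))   # trapezoidal rule

    def EI_healthy(xi):
        return np.full_like(xi, EI0)

    def EI_damaged(xi):                   # assumed contiguous damage: 30% stiffness loss over 6 m..7 m
        return np.where((xi > 6.0) & (xi < 7.0), 0.7 * EI0, EI0)

    a = 2.5                               # load/measurement point and its mirror image L - a
    for name, EI in (("healthy", EI_healthy), ("damaged", EI_damaged)):
        d_left = deflection(a, a, EI)             # deflection at a under a load at a
        d_right = deflection(L - a, L - a, EI)    # deflection at L - a under a load at L - a
        print(f"{name:8s} asymmetry |d_left - d_right| = {abs(d_left - d_right):.3e} m")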

Relevance: 20.00%

Abstract:

Several experimental studies have shown that fracture surfaces in brittle metallic glasses (MGs) generally exhibit nanoscale corrugations, which may be attributed to the nucleation and coalescence of nanovoids during crack propagation. Recent atomistic simulations suggest that this phenomenon is due to large spatial fluctuations in material properties in a brittle MG, which lead to void nucleation in regions of low atomic density and then to catastrophic fracture through void coalescence. To explain this behavior, we propose a model of a heterogeneous solid containing a distribution of weak zones to represent a brittle MG. Plane-strain continuum finite element analysis of cavitation in such an elastic-plastic solid is performed, with the weak zones idealized as periodically distributed regions having lower yield strength than the background material. It is found that the presence of weak zones can significantly reduce the critical hydrostatic stress for the onset of cavitation, which is controlled uniquely by the local yield properties of these zones. The presence of weak zones also diminishes the sensitivity of the cavitation stress to the volume fraction of a preexisting void. These results provide plausible explanations for the observations reported in recent atomistic simulations of brittle MGs. An analytical solution for a composite, incompressible elastic-plastic solid with a weak inner core is used to investigate the effect of the volume fraction and yield strength of the core on the nature of cavitation bifurcation. It is shown that snap-cavitation may occur, giving rise to the sudden formation of voids of finite size, which does not happen in a homogeneous plastic solid.
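
For scale, the classical spherical-cavity expansion estimate (Hill's formula) shows how a lower local yield strength reduces the cavitation pressure; this is only a back-of-the-envelope analogue, not the plane-strain heterogeneous analysis of the paper, and the material constants are assumed:

    import math

    def hill_cavitation_pressure(E, sigma_y, nu=0.5):
        """Classical spherical-cavity expansion limit pressure for an elastic-perfectly-plastic solid.

        p_c = (2*sigma_y/3) * (1 + ln(E / (3*(1 - nu)*sigma_y))) -- an estimate only;
        the paper's plane-strain, heterogeneous analysis is different.
        """
        return (2.0 * sigma_y / 3.0) * (1.0 + math.log(E / (3.0 * (1.0 - nu) * sigma_y)))

    E = 100e9                                  # assumed Young's modulus (Pa)
    for sigma_y in (2.0e9, 1.0e9):             # assumed background vs weak-zone yield strengths
        print(f"sigma_y = {sigma_y / 1e9:.1f} GPa -> cavitation pressure ~ "
              f"{hill_cavitation_pressure(E, sigma_y) / 1e9:.2f} GPa")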

Relevance: 20.00%

Abstract:

Let X_1, ..., X_m be a set of m statistically dependent sources over the common alphabet F_q that are linearly independent when considered as functions over the sample space. We consider a distributed function computation setting in which the receiver is interested in the lossless computation of the elements of an s-dimensional subspace W spanned by the elements of the row vector [X_1, ..., X_m]Γ, in which the (m × s) matrix Γ has rank s. A sequence of three increasingly refined approaches is presented, all based on linear encoders. The first approach uses a common matrix to encode all the sources and a Korner-Marton-like receiver to directly compute W. The second improves upon the first by showing that it is often more efficient to compute a carefully chosen superspace U of W. The superspace is identified by showing that the joint distribution of the {X_i} induces a unique decomposition of the set of all linear combinations of the {X_i} into a chain of subspaces identified by a normalized measure of entropy. This subspace chain also suggests a third approach, one that employs nested codes. For any joint distribution of the {X_i} and any W, the sum-rate of the nested code approach is no larger than that under the Slepian-Wolf (SW) approach. Under the SW approach, W is computed by first recovering each of the {X_i}. For a large class of joint distributions and subspaces W, the nested code approach is shown to improve upon SW. Additionally, a class of source distributions and subspaces is identified for which the nested-code approach is sum-rate optimal.
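
A worked version of the classical two-source case that the sum-rate comparison above generalizes: for a doubly symmetric binary source with crossover probability p, Slepian-Wolf (recover both sources) costs H(X1, X2) = 1 + h(p) bits per symbol, while a common linear encoder computing only X1 XOR X2 (Korner-Marton) costs 2h(p):

    import math

    def h(p):
        """Binary entropy in bits."""
        return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    print(" p    SW sum-rate 1+h(p)   Korner-Marton 2h(p)")
    for p in (0.01, 0.05, 0.11, 0.25, 0.5):
        print(f"{p:4.2f}       {1 + h(p):5.3f}              {2 * h(p):5.3f}")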

Relevance: 20.00%

Abstract:

Distributed compressed sensing exploits the information redundancy inherent in multi-signal ensembles with inter- as well as intra-signal correlations to reconstruct undersampled signals. In this paper we revisit this problem, albeit from a different perspective: streaming data from several correlated sources are taken as input to a real-time system which, without any a priori information, incrementally learns and admits each source into the system.