120 results for average complexity
at Indian Institute of Science - Bangalore - India
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S}_I$ and $\mathcal{S}_C$, and the problem reduces to one of determining, in a distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S}_I$ and $\mathcal{S}_C$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. Extensions to the multi-party case are straightforward and are briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized within two bits. In the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
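To make the bit-exchange idea concrete, here is a minimal sketch of one natural interactive protocol for the greater-than (GT) function; it illustrates why the average-case CC of GT is small, and is not necessarily the protocol analyzed in the paper. The two parties hold n-bit integers, compare bits from the most significant end, and stop at the first disagreement, so on uniformly random inputs only about two rounds are needed on average.

import random

def gt_protocol(x: int, y: int, n: int):
    """Return (x > y, number of bits exchanged)."""
    bits = 0
    for i in reversed(range(n)):      # most significant bit first
        xb = (x >> i) & 1             # party A announces its bit
        yb = (y >> i) & 1             # party B announces its bit
        bits += 2
        if xb != yb:                  # first disagreement settles GT
            return xb > yb, bits
    return False, bits                # x == y

# Empirical average over random 16-bit inputs: about 4 bits,
# i.e., roughly two rounds of two bits each.
n, trials = 16, 10_000
avg = sum(gt_protocol(random.getrandbits(n), random.getrandbits(n), n)[1]
          for _ in range(trials)) / trials
print(f"average bits exchanged: {avg:.2f}")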
Abstract:
"Extended Clifford algebras" are introduced as a means to obtain space-time block codes with low ML decoding complexity. Using left regular matrix representations of two specific classes of extended Clifford algebras, two systematic algebraic constructions of full-diversity Distributed Space-Time Codes (DSTCs) are provided for any number of relays that is a power of two. The left regular matrix representation is shown to naturally result in space-time codes meeting the additional constraints required of DSTCs. The DSTCs so constructed have the salient feature of reduced Maximum Likelihood (ML) decoding complexity. In particular, the ML decoding of these codes can be performed by applying a lattice decoding algorithm on a lattice of one-fourth the dimension required in general. Moreover, these codes distribute power uniformly among the relays and in time, leading to a low Peak-to-Average Power Ratio at the relays.
Abstract:
It is known that, in an OFDM system, applying a Hadamard transform or phase alteration before the IDFT operation can reduce the Peak-to-Average Power Ratio (PAPR). Both techniques can be viewed as constellation precoding for PAPR reduction. In general, using non-diagonal transforms such as the Hadamard transform increases the ML decoding complexity. In this paper we propose the use of block-IDFT matrices and show that appropriate block-IDFT matrices give lower PAPR as well as lower decoding complexity than the Hadamard transform. Moreover, we present a detailed study of the tradeoff between PAPR reduction and ML decoding complexity when using block-IDFT matrices with various block sizes.
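As a hedged illustration of the PAPR metric at stake here (the block-IDFT construction itself is not reproduced), the following sketch compares the PAPR of a plain OFDM symbol with that of a Hadamard-precoded one; the carrier count N = 64 and the QPSK alphabet are arbitrary choices.

import numpy as np
from scipy.linalg import hadamard

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N = 64
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

plain = np.fft.ifft(qpsk)                                 # ordinary OFDM symbol
precoded = np.fft.ifft(hadamard(N) @ qpsk / np.sqrt(N))   # Hadamard-precoded symbol

print(f"PAPR, plain:    {papr_db(plain):.2f} dB")
print(f"PAPR, Hadamard: {papr_db(precoded):.2f} dB")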
Abstract:
In this paper, Space-Time Block Codes (STBCs) with reduced Sphere Decoding Complexity (SDC) are constructed for two-user Multiple-Input Multiple-Output (MIMO) fading multiple access channels. In this set-up, both users employ identical STBCs and the destination performs sphere decoding for the symbols of the two users. First, we identify the positions of the zeros in the R matrix arising from the Q-R decomposition of the lattice generator such that (i) the worst-case SDC (WSDC) and (ii) the average SDC (ASDC) are reduced. Then, a set of necessary and sufficient conditions on the lattice generator is provided such that the R matrix has zeros at the identified positions. Subsequently, explicit constructions of STBCs which result in reduced ASDC are presented. The rate (in complex symbols per channel use) of the proposed designs is at most $2/N_t$, where $N_t$ denotes the number of transmit antennas for each user. We also show that the class of STBCs from complex orthogonal designs (other than the Alamouti design) reduces the WSDC but not the ASDC.
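The structural point about zeros in the R matrix can be seen numerically on a familiar example: for the Alamouti code, the equivalent channel matrix has orthogonal columns, so its QR decomposition yields an R with a zero off-diagonal entry and the two symbols can be searched independently by the sphere decoder. This is only a single-user illustration under assumed notation, not the two-user MAC construction of the paper.

import numpy as np

rng = np.random.default_rng(1)
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)

# Equivalent channel of the Alamouti code (one receive antenna),
# written so that y = H s + n with s = (s1, s2):
H = np.array([[h1, h2],
              [np.conj(h2), -np.conj(h1)]])

Q, R = np.linalg.qr(H)
print(np.round(np.abs(R), 3))   # |R[0,1]| is numerically zero: the two
                                # symbols decouple in the sphere search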
Abstract:
We consider near-optimal policies for a single user transmitting on a wireless channel which minimize the average queue length under an average power constraint. Power is consumed in the transmission of data only, and we consider the case when the power used in transmission is a linear function of the data transmitted. The transmission channel may experience multipath fading. Later, we extend these results to the multiuser case. We show that our policies can be used in a system with energy-harvesting sources at the transmitter. Next we consider data users which require minimum rate guarantees. Finally we consider a system which has both data and real-time users. Our policies have low computational complexity and closed-form expressions for mean delays, and require only the mean arrival rate, with no queue-length information.
Abstract:
A detailed investigation of Y0.5Ca0.5MnO3, with a very small average radius of the A-site cations (⟨r_A⟩ ≈ 1.13 Å), reveals the occurrence of a charge-ordering transition in the paramagnetic state at a relatively high temperature of 260 K. The orthorhombic lattice distortion, as measured by the dimensionless index D, is large (∼1.75%) over the entire 300-100 K range, but the antiferromagnetic interactions become prominent only at low temperatures (<160 K). The charge-ordering gap in Y0.5Ca0.5MnO3, measured by low-temperature vacuum tunnelling spectroscopy, is large (∼0.5 eV), and the charge-ordered state is unaffected by the application of a magnetic field of 6 T. The study indicates that the nature of charge-ordering in Y0.5Ca0.5MnO3, which is dominated by the cooperative Jahn-Teller effect and the associated lattice distortion, is distinctly different from that in analogous manganates with larger ⟨r_A⟩.
Abstract:
We consider the problem of deciding whether the output of a boolean circuit is determined by a partial assignment to its inputs. This problem is easily shown to be hard, i.e., co-NP-complete. However, many of the consequences of a partial input assignment may be determined in linear time, by iterating the following step: if we know the values of some inputs to a gate, we can deduce the values of some outputs of that gate. This process of iteratively deducing some of the consequences of a partial assignment is called propagation. This paper explores the parallel complexity of propagation, i.e., the complexity of determining whether the output of a given boolean circuit is determined by propagating a given partial input assignment. We give a complete classification of the problem into those cases that are P-complete and those that are unlikely to be P-complete.
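Since the propagation step is the heart of the problem, here is a small fixed-point sketch of it (a worklist version would achieve the linear time mentioned above; the gate and wire names are illustrative):

def propagate(gates, assignment):
    """gates: list of (out, op, ins); assignment: dict wire -> 0/1."""
    val = dict(assignment)
    changed = True
    while changed:
        changed = False
        for out, op, ins in gates:
            if out in val:
                continue
            known = [val[i] for i in ins if i in val]
            v = None
            if op == "AND":                       # any 0 input forces 0
                if 0 in known: v = 0
                elif len(known) == len(ins): v = 1
            elif op == "OR":                      # any 1 input forces 1
                if 1 in known: v = 1
                elif len(known) == len(ins): v = 0
            elif op == "NOT":
                if known: v = 1 - known[0]
            if v is not None:
                val[out] = v
                changed = True
    return val

# Example: out = (x AND y) OR (NOT x); setting x = 0 already determines out.
gates = [("a", "AND", ["x", "y"]), ("b", "NOT", ["x"]), ("out", "OR", ["a", "b"])]
print(propagate(gates, {"x": 0}).get("out"))   # -> 1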
Abstract:
A variety of data structures such as the inverted file, multi-lists, quad tree, k-d tree, range tree, polygon tree, quintary tree, multidimensional tries, segment tree, doubly chained tree, grid file, d-fold tree, super B-tree, Multiple Attribute Tree (MAT), etc., have been studied for multidimensional searching and related problems. Physical database organization, an important application of multidimensional searching, is traditionally and mostly handled by employing the inverted file. This study proposes the MAT data structure for bibliographic file systems, illustrating the superiority of the MAT data structure over the inverted file. Both methods are compared in terms of preprocessing, storage, and query costs. A worst-case complexity analysis of both methods, for a partial match query, is carried out in two cases: (a) when the directory resides in main memory; (b) when the directory resides in secondary memory. In both cases, the MAT data structure is shown to be more efficient than the inverted file method. Arguments are given to illustrate the superiority of the MAT data structure in the average case as well. An efficient adaptation of the MAT data structure, which exploits the special features of the MAT structure and of bibliographic files, is proposed for bibliographic file systems. In this adaptation, suitable techniques for fixing and ranking the attributes of the MAT data structure are proposed. Conclusions and proposals for future research are presented.
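For readers unfamiliar with the baseline being compared against, a minimal inverted-file sketch for partial-match queries follows: one posting list per (attribute, value) pair, with a query answered by intersecting the lists of the specified attributes. The records and field names are illustrative, and the MAT structure itself is not reconstructed here.

from functools import reduce

records = [
    {"author": "rao",  "year": 1984, "lang": "en"},
    {"author": "rao",  "year": 1986, "lang": "en"},
    {"author": "iyer", "year": 1984, "lang": "hi"},
]

# Build the inverted file: (attribute, value) -> set of record ids
index = {}
for rid, rec in enumerate(records):
    for attr, value in rec.items():
        index.setdefault((attr, value), set()).add(rid)

def partial_match(**query):
    """Records matching every specified attribute; the rest are wildcards."""
    lists = [index.get(item, set()) for item in query.items()]
    return sorted(reduce(set.intersection, lists)) if lists else []

print(partial_match(author="rao", year=1984))   # -> [0]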
Abstract:
We address the issue of complexity in vector quantization (VQ) of wide-band speech LSF (line spectrum frequency) parameters. The recently proposed switched split VQ (SSVQ) method provides better rate-distortion (R/D) performance than the traditional split VQ (SVQ) method, even with lower computational complexity, but at the expense of much higher memory. We develop the two-stage SVQ (TsSVQ) method, which gains both memory and computational advantages while retaining good R/D performance. The proposed TsSVQ method uses a full-dimensional quantizer in its first stage, to exploit all the higher-dimensional coding advantages, and then uses an SVQ method to quantize the residual vector in the second stage so as to reduce complexity. We also develop a transform-domain residual coding method within this two-stage architecture that further reduces computational complexity. To design an effective residual codebook for the second stage, variance normalization of Voronoi regions is carried out, which leads to two new methods, referred to as normalized two-stage SVQ (NTsSVQ) and normalized two-stage transform-domain SVQ (NTsTrSVQ). These two methods have complementary strengths and hence are combined in a switched VQ mode, leading to further improvement in R/D performance while retaining the low complexity requirement. We evaluate the performance of the new methods for wide-band speech LSF parameter quantization and show their advantages over the established SVQ and SSVQ methods.
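A hedged sketch of the two-stage idea (TsSVQ) follows: a small full-dimensional first-stage codebook, then a split VQ of the residual. The codebooks below are random stand-ins; real ones would be trained (e.g., by an LBG-style algorithm) on LSF data, and the transform-domain and normalized variants are not shown.

import numpy as np

rng = np.random.default_rng(0)
D, K1, K2 = 8, 16, 8                    # dimension, stage-1/stage-2 codebook sizes
stage1 = rng.normal(size=(K1, D))       # full-dimensional first-stage codebook
splits = [slice(0, 4), slice(4, 8)]     # residual split into two 4-d subvectors
stage2 = [rng.normal(scale=0.3, size=(K2, 4)) for _ in splits]

def nearest(codebook, v):
    return codebook[np.argmin(((codebook - v) ** 2).sum(axis=1))]

def tssvq(x):
    c1 = nearest(stage1, x)             # stage 1: full-dimensional search
    r = x - c1                          # residual
    c2 = np.concatenate([nearest(cb, r[s]) for cb, s in zip(stage2, splits)])
    return c1 + c2                      # reconstruction

x = rng.normal(size=D)
print(np.linalg.norm(x - tssvq(x)))     # quantization error for one vector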
Abstract:
Space-time codes from complex orthogonal designs (CODs) with no zero entries offer a low Peak-to-Average Power Ratio (PAPR) and avoid the problem of switching off antennas. But square CODs for $2^a$ antennas with $a+1$ complex variables and no zero entries had been discovered only for $a \le 3$ and for $a+1 = 2^k$, $k \ge 4$. In this paper, a method of obtaining no-zero-entry (NZE) square designs, called Complex Partial-Orthogonal Designs (CPODs), for $2^{a+1}$ antennas whenever a certain type of NZE code exists for $2^a$ antennas is presented. Then, starting from such a constructed NZE CPOD for $n = 2^{a+1}$ antennas, a construction procedure is given to successively obtain NZE CPODs for $2n$ antennas. Compared to CODs, CPODs have slightly higher ML decoding complexity for rectangular QAM constellations and the same ML decoding complexity for other complex constellations. Using the recently constructed NZE CODs for 8 antennas, our method leads to NZE CPODs for 16 antennas. The class of CPODs does not offer full diversity for all complex constellations. For the NZE CPODs presented in the paper, conditions on the signal sets which guarantee full diversity are identified. Simulation results show that the bit error performance of our codes is the same as that of CODs under an average power constraint and superior to CODs under a peak power constraint.
Abstract:
Research in software science has so far concentrated on three measures of program complexity: (a) software effort; (b) cyclomatic complexity; and (c) program knots. In this paper we propose a measure of the logical complexity of programs in terms of the variable dependency of sequences of computations, the inductive effort in writing loops, and the complexity of data structures. The proposed complexity measure is described with the aid of a graph which exhibits diagrammatically the dependence of a computation at a node upon the computations of other (earlier) nodes. Complexity measures of several example programs have been computed and the related issues discussed. The paper also describes the role played by data structures in deciding program complexity.
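A toy illustration of the dependency-graph idea: for a straight-line program, draw an edge from each variable used on the right-hand side of an assignment to the variable defined, and count edges as a crude stand-in for the dependency component of the measure. The actual measure proposed in the paper also weighs loops and data structures, and the snippet below is hypothetical.

# Each entry: (variable defined, variables it depends on)
program = [
    ("a", ["x", "y"]),    # a = x + y
    ("b", ["a"]),         # b = 2 * a
    ("c", ["a", "b"]),    # c = a * b
]

edges = [(src, dst) for dst, srcs in program for src in srcs]
print(edges)              # the dependency graph as an edge list
print(len(edges))         # crude dependency count: 5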
Abstract:
Following an invariant-imbedding approach, we obtain analytical expressions for the ensemble-averaged resistance ⟨ρ⟩ and its Sinai fluctuations for a one-dimensional disordered conductor in the presence of a finite electric field F. The mean resistance shows a crossover from an exponential to a power-law length dependence with increasing field strength, in agreement with known numerical results. More importantly, unlike in the zero-field case, the resistance distribution saturates for large sample lengths to a limiting Poissonian form proportional to $A|F|\exp(-A|F|\rho)$, where A is a constant.
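Since the limiting form quoted above is exponential in ρ, its normalization and mean follow at once; as a consistency check (not a derivation from the paper):

P(\rho) = A|F|\, e^{-A|F|\rho}, \qquad
\int_0^\infty P(\rho)\, d\rho = 1, \qquad
\langle \rho \rangle = \int_0^\infty \rho\, P(\rho)\, d\rho = \frac{1}{A|F|}.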
Abstract:
This paper reviews some of the recent developments in complexity theory as applied to telephone switching. Some of these techniques are suitable for practical implementation in India.
Abstract:
We computed Higuchi's fractal dimension (FD) of resting, eyes-closed EEG recorded from 30 scalp locations in 18 male neuroleptic-naive, recent-onset schizophrenia (NRS) subjects and 15 male healthy control (HC) subjects, group-matched for age. Schizophrenia patients showed a diffuse reduction of FD except in the bilateral temporal and occipital regions, with the reduction being most prominent bifrontally. The positive-symptom (PS) schizophrenia subjects showed FD values similar to or even higher than HC in the bilateral temporo-occipital regions, along with a co-existent bifrontal FD reduction as noted in the overall NRS sample. In contrast, this increase in FD values in the bilateral temporo-occipital region was absent in the negative-symptom (NS) subgroup. The regional differences in complexity suggested by these findings may reflect the aberrant brain dynamics underlying the pathophysiology of schizophrenia and its symptom dimensions. Higuchi's method of measuring FD directly in the time domain provides an alternative to the more computationally intensive nonlinear methods of estimating EEG complexity.
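Higuchi's estimator mentioned above is simple enough to state in a few lines; the following is a standard textbook implementation (kmax = 8 and the Brownian test signal are arbitrary choices, not parameters from the study):

import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-d time series."""
    N = len(x)
    logk, logL = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                     # offset within the delay k
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            n_seg = len(idx) - 1
            # curve length with Higuchi's normalization (N-1)/(n_seg*k), then /k
            lengths.append(np.abs(np.diff(x[idx])).sum() * (N - 1) / (n_seg * k * k))
        logk.append(np.log(1.0 / k))
        logL.append(np.log(np.mean(lengths)))
    return np.polyfit(logk, logL, 1)[0]        # slope = fractal dimension

rng = np.random.default_rng(0)
print(higuchi_fd(rng.normal(size=1000).cumsum()))   # ~1.5 for Brownian motion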