205 results for Computational Complexity


Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present a low-complexity detector that achieves near maximum-likelihood (ML) performance for large MIMO systems having tens of transmit and receive antennas. Such large MIMO systems are of interest because of the high spectral efficiencies they make possible. The proposed detection algorithm, termed the multistage likelihood-ascent search (M-LAS) algorithm, is rooted in Hopfield neural networks and is shown to possess excellent performance as well as complexity attributes. In terms of performance, in a 64 x 64 V-BLAST system with 4-QAM, the proposed algorithm achieves an uncoded BER of $10^{-3}$ at an SNR just about 1 dB away from the AWGN-only SISO performance given by $Q(\sqrt{\mathrm{SNR}})$. In terms of coded BER, with a rate-3/4 turbo code at a spectral efficiency of 96 bps/Hz, the algorithm performs within about 4.5 dB of theoretical capacity, which is remarkable in terms of both the high spectral efficiency and the nearness to theoretical capacity. Our simulation results show that the above performance is achieved with a complexity of just $O(N_tN_r)$ per symbol, where $N_t$ and $N_r$ denote the number of transmit and receive antennas.
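
The likelihood-ascent search at the heart of M-LAS can be illustrated with a minimal single-stage sketch in Python; the multistage schedule, the Hopfield-network derivation, and the paper's choice of initial vector are omitted, and all names here are illustrative rather than the authors' code.

```python
import numpy as np

def one_las_detect(H, y, x0):
    """Single-symbol likelihood-ascent search sketch for a real-valued
    +/-1 (BPSK-like) system: starting from an initial estimate x0,
    repeatedly commit the one symbol flip that most reduces the ML cost
    ||y - H x||^2, and stop at a local minimum."""
    x = x0.copy()
    cost = np.sum((y - H @ x) ** 2)
    improved = True
    while improved:
        improved = False
        best_k, best_cost = -1, cost
        for k in range(len(x)):
            x[k] = -x[k]                      # tentative flip
            c = np.sum((y - H @ x) ** 2)
            if c < best_cost:
                best_k, best_cost = k, c
            x[k] = -x[k]                      # undo flip
        if best_k >= 0:
            x[best_k] = -x[best_k]            # commit best flip
            cost = best_cost
            improved = True
    return x

# Toy usage: 8x8 real channel, matched-filter initialization.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 8))
x_true = rng.choice([-1.0, 1.0], size=8)
y = H @ x_true + 0.1 * rng.standard_normal(8)
x0 = np.sign(H.T @ y)                          # crude initial vector
print(one_las_detect(H, y, x0))
```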

Relevance:

20.00%

Publisher:

Abstract:

"Extended Clifford algebras" are introduced as a means to obtain low ML decoding complexity space-time block codes. Using left regular matrix representations of two specific classes of extended Clifford algebras, two systematic algebraic constructions of full diversity Distributed Space-Time Codes (DSTCs) are provided for any power of two number of relays. The left regular matrix representation has been shown to naturally result in space-time codes meeting the additional constraints required for DSTCs. The DSTCs so constructed have the salient feature of reduced Maximum Likelihood (ML) decoding complexity. In particular, the ML decoding of these codes can be performed by applying the lattice decoder algorithm on a lattice of four times lesser dimension than what is required in general. Moreover these codes have a uniform distribution of power among the relays and in time, thus leading to a low Peak to Average Power Ratio at the relays.

Relevance:

20.00%

Publisher:

Abstract:

A half-duplex constrained non-orthogonal cooperative multiple access (NCMA) protocol, suitable for transmitting information from N users to a single destination over a wireless fading channel, is proposed. Transmission in this protocol comprises a broadcast phase and a cooperation phase. In the broadcast phase, each user in turn broadcasts its data to all other users and to the destination, orthogonally in time. In the cooperation phase, each user transmits a linear function of what it received from all other users together with its own data. In the orthogonal extension of cooperative relay protocols to cooperative multiple access channels, only one user at a time acts as a source while all other users behave as relays and do not transmit their own data; the NCMA protocol relaxes this built-in orthogonality and hence uses the channel resources more spectrally efficiently. A code design criterion for achieving the full diversity of N in the NCMA protocol is derived using pairwise error probability (PEP) analysis, and it is shown that full diversity can be achieved with a minimum total duration of 2N - 1 channel uses. An explicit construction of full-diversity codes is then provided for an arbitrary number of users. Since Maximum Likelihood decoding complexity grows exponentially with the number of users, the notion of g-group decodable codes is introduced for this setup, and a set of necessary and sufficient conditions for such codes is obtained.
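
The two-phase schedule and the 2N - 1 channel-use count can be made concrete with a small numeric sketch; the noiseless unit channels and the random combining weights below are our illustration, not the paper's code design.

```python
import numpy as np

# NCMA schedule sketch for N users: N broadcast slots followed by
# N - 1 cooperation slots in which all users transmit simultaneously
# a linear function of everything they have heard.
N = 4
rng = np.random.default_rng(1)
data = rng.standard_normal(N)              # one symbol per user

# Broadcast phase: slot i carries user i's symbol to everyone.
dest_rx = [data[i] for i in range(N)]      # N channel uses

# Cooperation phase: in each slot, every user k sends a linear
# combination c[k] of all users' symbols (it has heard them all),
# and the destination receives the superposition.
for t in range(N - 1):
    c = rng.standard_normal((N, N))        # per-user combining weights
    slot = sum(c[k] @ data for k in range(N))
    dest_rx.append(slot)                   # N - 1 more channel uses

print(f"total channel uses: {len(dest_rx)} (= 2N - 1 = {2 * N - 1})")
```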

Relevance:

20.00%

Publisher:

Abstract:

Two decision versions of a combinatorial power-minimization problem for scheduling in a time-slotted Gaussian multiple-access channel (GMAC) are studied in this paper. If the number of time-slots per second is a variable, the problem is shown to be NP-complete. If the number of time-slots per second is fixed, an algorithm that terminates in $O(\mathrm{Length}(I)^{N+1})$ steps is provided.

Relevance:

20.00%

Publisher:

Abstract:

Here, we present the synthesis, photochemical, and DNA-binding properties of three photoisomerizable azobenzene-distamycin conjugates in which two distamycin units were linked to the azobenzene core via electron-rich alkoxy or electron-withdrawing carboxamido moieties. Like the parent distamycin A, these molecules also demonstrated AT-specific DNA binding. The duplex-DNA binding abilities of these conjugates were found to depend upon the nature and length of the spacer, the location of protonatable residues, and the isomeric state of the conjugate. The changes in the duplex-DNA binding efficiency of the individual conjugates in the dark and in their respective photoirradiated forms were examined by circular dichroism, thermal denaturation of DNA, and a Hoechst displacement assay with poly[d(A-T)·d(T-A)] DNA in 150 mM NaCl buffer. Computational structural analyses of the uncomplexed ligands using ab initio HF and MP2 theory, and molecular docking studies of the conjugates with duplex d[GC(AT)$_{10}$CG]$_2$ DNA, were performed to rationalize the nature of binding of these conjugates.

Relevance:

20.00%

Publisher:

Abstract:

Existing vaccines against influenza are based on the generation of neutralizing antibodies directed primarily against the surface proteins hemagglutinin and neuraminidase. In this work, we have computationally defined conserved T-cell epitopes of proteins of the influenza virus H5N1 to help in the design of a vaccine with haplotype specificity for a target population. The peptides from the H5N1 proteome that are predicted to bind to different HLAs do not show similarity with peptides of the human proteome and are also identified as being generated by proteolytic cleavage. These peptides could be used in the design of either a DNA vaccine or a subunit vaccine against influenza.
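
The three-filter selection the abstract describes can be sketched as follows; the predictor functions `hla_binding_score` and `cleavage_probability` are hypothetical stand-ins for whatever binding and cleavage prediction tools are actually used, and the thresholds are illustrative.

```python
# Illustrative sketch (not the authors' pipeline): keep peptides that
# (1) are predicted to bind a target HLA allele, (2) have no exact match
# in the human proteome, and (3) are predicted to be produced by
# proteolytic cleavage.
def select_epitopes(viral_peptides, human_proteome, hla_allele,
                    hla_binding_score, cleavage_probability,
                    bind_thresh=0.5, cleave_thresh=0.5):
    human_set = set(human_proteome)        # exact-match self filter
    return [
        p for p in viral_peptides
        if hla_binding_score(p, hla_allele) >= bind_thresh
        and p not in human_set
        and cleavage_probability(p) >= cleave_thresh
    ]
```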

Relevance:

20.00%

Publisher:

Abstract:

The polyamidoamine (PAMAM) class of dendrimers was one of the first to be synthesized, by Tomalia and co-workers at Dow. Since its discovery, PAMAM has stimulated many discussions on the structure and dynamics of such hyperbranched polymers. Many questions remain open because the huge conformational disorder, combined with very similar local symmetries, has made it difficult to characterize the structure and dynamics of PAMAM dendrimers experimentally at the atomistic level. The higher-generation dendrimers have also been difficult to characterize computationally because of their large size (294,852 atoms for generation 11) and the huge number of conformations. To provide a practical means of atomistic computational study, we have developed an atomistically informed coarse-grained description of the PAMAM dendrimer. We find that a two-bead-per-monomer representation retains the accuracy of atomistic simulations for predicting size and conformational complexity, while reducing the degrees of freedom tenfold. This mesoscale description has allowed us to study the structural properties of PAMAM dendrimers up to generation 11 on time scales of up to several nanoseconds. Gross properties such as the radius of gyration compare very well with those from full atomistic simulation and with available small-angle x-ray and small-angle neutron scattering data. The radial monomer density shows behavior very similar to that obtained from the fully atomistic simulation. Our approach to deriving the coarse-grained model is general and straightforward to apply to other classes of dendrimers.
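
As a concrete illustration of the kind of observable compared above, here is a minimal sketch of computing the radius of gyration from bead coordinates and masses; this is our example, not the authors' code.

```python
import numpy as np

def radius_of_gyration(coords, masses):
    """Mass-weighted radius of gyration of a bead model:
    Rg^2 = sum_i m_i |r_i - r_cm|^2 / sum_i m_i."""
    coords = np.asarray(coords, dtype=float)   # shape (n_beads, 3)
    masses = np.asarray(masses, dtype=float)
    r_cm = np.average(coords, axis=0, weights=masses)
    sq = np.sum((coords - r_cm) ** 2, axis=1)
    return np.sqrt(np.average(sq, weights=masses))

# Toy usage with random bead positions of equal mass.
rng = np.random.default_rng(2)
beads = rng.standard_normal((1000, 3))
print(radius_of_gyration(beads, np.ones(1000)))
```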

Relevance:

20.00%

Publisher:

Abstract:

Support Vector Machines (SVMs) are hyperplane classifiers defined in a kernel-induced feature space. The training-time complexity of SVMs, which grows with the data size, usually prohibits their use in applications involving more than a few thousand data points. In this paper, we propose a novel kernel-based incremental data clustering approach and its use for scaling non-linear Support Vector Machines to large data sets. The clustering method introduced can find cluster abstractions of the training data in a kernel-induced feature space. These cluster abstractions are then used for selective-sampling-based training of Support Vector Machines, reducing the training time without compromising generalization performance. Experiments on real-world datasets show that this approach gives good generalization performance at reasonable computational expense.
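
The selective-sampling idea can be sketched with standard tools; plain k-means below stands in for the paper's kernel-based incremental clustering, and the near-boundary selection rule is our simplification of its abstraction-based sampling.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Sketch: train a first SVM on cluster centers only, then train the
# final SVM on the points nearest the coarse decision boundary.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X)
center_labels = np.array([np.bincount(y[km.labels_ == c]).argmax()
                          for c in range(20)])
coarse = SVC(kernel="rbf").fit(km.cluster_centers_, center_labels)

# Selective sampling: per class, keep the 100 points closest to the
# coarse boundary, and retrain on those alone.
dist = np.abs(coarse.decision_function(X))
sel = np.concatenate([np.where(y == c)[0][np.argsort(dist[y == c])[:100]]
                      for c in (0, 1)])
fine = SVC(kernel="rbf").fit(X[sel], y[sel])
print(f"trained on {len(sel)} of {len(X)} points, "
      f"accuracy {fine.score(X, y):.3f}")
```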

Relevance:

20.00%

Publisher:

Abstract:

The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder's location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to determining, in a distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm, and the average-case CC of the relevant greater-than (GT) function is characterized to within two bits. In the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
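
The second approach can be sketched in a few lines; the threshold value and the k-out-of-n counting rule below are illustrative choices, not the optimized quantizer and fusion rule derived in the paper.

```python
import numpy as np

# Each sensor quantizes its own noisy reading to one bit with a
# threshold test; a local fusion center declares "intruder" when at
# least k of the n bits are set.
def fuse(readings, tau, k):
    bits = (np.asarray(readings) > tau).astype(int)  # one bit per sensor
    return int(bits.sum() >= k)                      # k-out-of-n fusion

rng = np.random.default_rng(4)
clutter = rng.normal(0.0, 1.0, size=10)     # noise-only readings
intruder = rng.normal(1.5, 1.0, size=10)    # elevated nearby readings
print(fuse(clutter, tau=0.8, k=4), fuse(intruder, tau=0.8, k=4))
```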

Relevance:

20.00%

Publisher:

Abstract:

The differential encoding/decoding setup introduced by Kiran et al., Oggier et al., and Jing et al. for wireless relay networks that use codebooks consisting of unitary matrices is extended to allow codebooks consisting of scaled unitary matrices. For such codebooks to be usable in the Jing-Hassibi protocol for cooperative diversity, the conditions that must be satisfied by the relay matrices and the codebook are identified. A class of previously known rate-one, full-diversity, four-group encodable and four-group decodable Differential Space-Time Codes (DSTCs) is proposed for use as Distributed DSTCs (DDSTCs) in the proposed setup. To the best of our knowledge, this is the first known low-decoding-complexity DDSTC scheme for cooperative wireless networks.
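
The basic recursion can be sketched as follows, in our notation and with our choice of normalization, offered as an illustration rather than the paper's exact scheme. With unitary codebooks the source sends $S_0$ and then $S_t = C_t S_{t-1}$, which keeps the transmit power constant since $C_t$ is unitary. When the codeword is a scaled unitary matrix $C_t = a_t U_t$ with $a_t > 0$ and $U_t U_t^H = I$, a natural modification is to normalize by the previous scale,

$$ S_t = \frac{a_t}{a_{t-1}}\, U_t\, S_{t-1}, $$

so that $\|S_t\|_F \propto a_t$ remains bounded over time because the codebook is finite.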

Relevance:

20.00%

Publisher:

Abstract:

We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussion is couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous; the sensors synchronously measure values and then collaborate to compute and deliver the function of these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure of one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as $\Theta(\sqrt{n/\log n})$ and $\Theta(n)$ asymptotically as the number of sensors $n \to \infty$. Second, we analyze the performance of three specific computation algorithms that may be used in practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for their computation time and energy expenditure as $n \to \infty$. In particular, we show that the computation time for these algorithms scales as $\Theta(\sqrt{n/\log n})$, $\Theta(n)$, and $\Theta(\sqrt{n \log n})$, respectively, whereas the energy expended scales as $\Theta(n)$, $\Theta(\sqrt{n/\log n})$, and $\Theta(\sqrt{n \log n})$, respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling; the simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence our results can be viewed as bounds on the performance achievable with practical distributed schedulers.
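
The tree algorithm for one-shot max computation admits a very small sketch: each node forwards the max of its own reading and its children's reports, so one value per node is transmitted (energy of order n) and the time is governed by the tree depth. The random parent assignment below is purely for illustration and is not the spanning tree construction analyzed in the paper.

```python
import numpy as np

def tree_max(values, parent):
    """Fold each node's value into its parent, so the root (node 0,
    standing in for the operator station) ends up with the global max.
    parent[v] < v is assumed, so a reverse sweep folds children first."""
    best = list(values)
    for v in range(len(values) - 1, 0, -1):
        best[parent[v]] = max(best[parent[v]], best[v])
    return best[0]

rng = np.random.default_rng(5)
n = 100
parent = [0] + [rng.integers(0, v) for v in range(1, n)]  # random tree
vals = rng.standard_normal(n)
assert tree_max(vals, parent) == vals.max()
```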

Relevance:

20.00%

Publisher:

Abstract:

The problem of designing high-rate, full-diversity noncoherent space-time block codes (STBCs) with low encoding and decoding complexity is addressed. First, the notion of g-group encodable and g-group decodable linear STBCs is introduced. Then, for a known class of rate-1 linear designs, an explicit construction of fully diverse signal sets that lead to four-group encodable and four-group decodable differential scaled unitary STBCs is provided for any number of antennas that is a power of two. Previous works on differential STBCs either sacrifice decoding complexity for higher rate or sacrifice rate for lower decoding complexity.
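
What g-group decodability buys can be made explicit with a standard identity, written here in coherent-detection notation for brevity and with our symbols: if the information symbols split as $s = (s_1, \dots, s_g)$ and the linear design satisfies

$$ X(s) = \sum_{k=1}^{g} X_k(s_k), \qquad X_k^H X_l + X_l^H X_k = 0 \;\; (k \neq l), $$

then the ML metric decomposes as

$$ \| Y - X(s) H \|_F^2 = \sum_{k=1}^{g} \| Y - X_k(s_k) H \|_F^2 - (g-1)\, \| Y \|_F^2, $$

so each group $s_k$ can be decoded independently, and the search size falls from $|\mathcal{A}|^K$ to $g\,|\mathcal{A}|^{K/g}$ for $K$ symbols drawn from an alphabet $\mathcal{A}$.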

Relevance:

20.00%

Publisher:

Abstract:

It is known that by employing space-time-frequency codes (STFCs) in frequency-selective MIMO-OFDM systems, all three forms of diversity, viz. spatial, temporal, and multipath, can be exploited. There exist space-time-frequency block codes (STFBCs) designed using orthogonal designs with a constellation precoder to get full diversity (Z. Liu, Y. Xin and G. Giannakis, IEEE Trans. Signal Processing, Oct. 2002). Since rate-one orthogonal designs exist only for two transmit antennas, rate-one full-diversity STFBCs for more than two transmit antennas cannot be constructed using orthogonal designs. This paper presents a rate-one STFBC scheme for four transmit antennas designed using quasi-orthogonal designs along with coordinate-interleaved orthogonal designs (Zafar Ali Khan and B. Sundar Rajan, Proc. ISIT 2002). Conditions on the signal sets that give full diversity are identified. Simulation results are presented to show the superiority of our codes over existing ones.

Relevance:

20.00%

Publisher:

Abstract:

It is known that, in an OFDM system, applying a Hadamard transform or phase alterations before the IDFT operation can reduce the Peak-to-Average Power Ratio (PAPR). Both of these techniques can be viewed as constellation precoding for PAPR reduction. In general, using non-diagonal transforms like the Hadamard transform increases the ML decoding complexity. In this paper, we propose the use of block-IDFT matrices and show that appropriate block-IDFT matrices give both lower PAPR and lower decoding complexity than the Hadamard transform. Moreover, we present a detailed study of the tradeoff between PAPR reduction and ML decoding complexity when using block-IDFT matrices with various block sizes.
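
Measuring the PAPR effect of such precoding is straightforward to sketch; the toy experiment below compares plain OFDM, $x = \mathrm{IDFT}(s)$, against Hadamard-precoded OFDM, $x = \mathrm{IDFT}(Hs)$, over random QPSK vectors. It is our illustration of the baseline precoding idea, not the paper's block-IDFT construction.

```python
import numpy as np
from scipy.linalg import hadamard

def papr_db(x):
    """Peak-to-average power ratio of a time-domain block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N = 64
H = hadamard(N) / np.sqrt(N)                      # unitary Hadamard
rng = np.random.default_rng(6)
qpsk = (rng.choice([-1, 1], (1000, N))
        + 1j * rng.choice([-1, 1], (1000, N))) / np.sqrt(2)

plain = [papr_db(np.fft.ifft(s)) for s in qpsk]
precoded = [papr_db(np.fft.ifft(H @ s)) for s in qpsk]
print(f"mean PAPR: plain {np.mean(plain):.2f} dB, "
      f"Hadamard-precoded {np.mean(precoded):.2f} dB")
```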