925 results for vector error correction


Relevance:

20.00%

Publisher:

Abstract:

The use of mutagenic drugs to drive HIV-1 past its error threshold presents a novel intervention strategy, suggested by quasispecies theory, that may be less susceptible than current strategies to failure via the mutation-induced emergence of drug resistance. The error threshold of HIV-1, μ_c, however, is not known. Applying quasispecies theory to determine μ_c poses significant challenges: whereas the theory considers the asexual reproduction of an infinitely large population of haploid individuals, HIV-1 is diploid, undergoes recombination, and is estimated to have a small effective population size in vivo. We performed population genetics-based stochastic simulations of the within-host evolution of HIV-1 and estimated the structure of the HIV-1 quasispecies and μ_c. We found that with small mutation rates the quasispecies was dominated by genomes with few mutations. Upon increasing the mutation rate, a sharp error catastrophe occurred in which the quasispecies became delocalized in sequence space. Using parameter values that quantitatively captured data on viral diversification in HIV-1 patients, we estimated μ_c to be 7 × 10^-5 to 1 × 10^-4 substitutions/site/replication, roughly 2-6 fold higher than the natural mutation rate of HIV-1, suggesting that HIV-1 survives close to its error threshold and may be readily susceptible to mutagenic drugs. The latter estimate was only weakly dependent on the within-host effective population size of HIV-1. With large population sizes and in the absence of recombination, our simulations converged to quasispecies theory, bridging the gap between quasispecies theory and population genetics-based approaches to describing HIV-1 evolution. Further, μ_c increased with the recombination rate, rendering HIV-1 less susceptible to error catastrophe and thus elucidating an added benefit of recombination to HIV-1. Our estimate of μ_c may serve as a quantitative guideline for the use of mutagenic drugs against HIV-1.
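
As a rough illustration of the error-catastrophe idea (not the authors' population-genetics model of HIV-1, which includes diploidy, recombination and patient-calibrated parameters), the sketch below runs a toy Wright-Fisher quasispecies simulation and sweeps the per-site mutation rate; all parameter values are assumptions.

    # Minimal Wright-Fisher quasispecies sketch (not the authors' HIV-1 model):
    # binary genomes, a fitter "master" sequence, per-site mutation. Sweeping the
    # mutation rate shows the loss of the master class (a toy error catastrophe).
    import numpy as np

    rng = np.random.default_rng(0)
    L, N, GENERATIONS = 50, 1000, 200          # genome length, population, generations
    MASTER_FITNESS, OTHER_FITNESS = 10.0, 1.0  # assumed relative fitnesses

    def master_fraction(mu):
        pop = np.zeros((N, L), dtype=bool)     # start at the master sequence
        for _ in range(GENERATIONS):
            is_master = ~pop.any(axis=1)
            fitness = np.where(is_master, MASTER_FITNESS, OTHER_FITNESS)
            parents = rng.choice(N, size=N, p=fitness / fitness.sum())
            pop = pop[parents]
            pop ^= rng.random((N, L)) < mu     # per-site mutation (bit flips)
        return (~pop.any(axis=1)).mean()

    for mu in [1e-4, 1e-3, 5e-3, 1e-2, 5e-2]:
        print(f"mu = {mu:.0e}: master fraction = {master_fraction(mu):.3f}")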

Relevance:

20.00%

Publisher:

Abstract:

In this paper we discuss SU(N) Chern-Simons theories at level k with both fermionic and bosonic vector matter. In particular, we present an exact calculation of the free energy of the N = 2 supersymmetric model (with one chiral field) for all values of the 't Hooft coupling in the large-N limit. This is done using a generalization of the standard Hubbard-Stratonovich method, because the SUSY model contains higher-order polynomial interactions.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a decentralized, peer-to-peer parallel version of the vector evaluated particle swarm optimization (VEPSO) algorithm for multi-objective design optimization of laminated composite plates using the message passing interface (MPI). The design optimization of laminated composite plates, being a combinatorially explosive constrained non-linear optimization problem (CNOP) with many design variables and a vast solution space, warrants the use of non-parametric, heuristic optimization algorithms such as PSO. The optimization requires minimizing the weight and cost of these composite plates simultaneously, which renders the problem multi-objective; hence VEPSO, a multi-objective variant of the PSO algorithm, is used. Despite the use of such a heuristic, the application problem is computationally intensive and suffers from long execution times under sequential computation. Hence, a parallel version of the PSO algorithm for the problem has been developed to run on several nodes of an IBM P720 cluster. The proposed parallel algorithm, using MPI's collective communication directives, establishes a peer-to-peer relationship between the constituent parallel processes, deviating from the more common master-slave approach, and achieves a reduction of computation time by a factor of up to 10. Finally, we show the effectiveness of the proposed parallel algorithm by comparing it with a serial implementation of VEPSO and a parallel implementation of the vector evaluated genetic algorithm (VEGA) for the same design problem. (c) 2012 Elsevier Ltd. All rights reserved.
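
A hedged sketch of the peer-to-peer exchange described above, not the paper's implementation: each MPI rank runs one single-objective swarm and shares its best position with all peers through a collective (allgather), so no master process is needed. It assumes mpi4py, and the two toy objectives and all constants are made up.

    # Generic peer-to-peer VEPSO sketch using mpi4py collectives (not the paper's code).
    # Run with e.g.: mpiexec -n 2 python vepso_sketch.py   (file name is illustrative)
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    objectives = [lambda x: np.sum(x**2), lambda x: np.sum((x - 2.0)**2)]  # toy objectives
    f = objectives[rank % len(objectives)]

    rng = np.random.default_rng(rank)
    DIM, PARTICLES, ITERS, W, C1, C2 = 4, 20, 100, 0.7, 1.5, 1.5
    x = rng.uniform(-5, 5, (PARTICLES, DIM))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])

    for _ in range(ITERS):
        gbest = pbest[np.argmin(pbest_val)]
        # Peer-to-peer exchange: every rank receives every other rank's best (no master).
        all_bests = comm.allgather(gbest)
        social = all_bests[(rank + 1) % size]          # neighbouring swarm's best
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = W * v + C1 * r1 * (pbest - x) + C2 * r2 * (social - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]

    print(f"rank {rank}: best objective value {pbest_val.min():.4f}")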

Relevance:

20.00%

Publisher:

Abstract:

This study proposes an inverter circuit topology capable of generating multilevel dodecagonal (12-sided polygon) voltage space vectors by the cascaded connection of two-level and three-level inverters. By proper selection of the DC-link voltages and the resultant switching states of the inverters, voltage space vectors whose tips lie on three concentric dodecagons are obtained. A rectifier circuit for the inverter is also proposed, which significantly improves the power factor. The topology offers advantages such as the complete elimination of the fifth and seventh harmonics in the phase voltages and an extension of the linear modulation range. A simple method for the calculation of the pulse width modulation timings is presented, along with extensive simulation and experimental results, in order to validate the proposed concept.
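
The dodecagonal vector geometry can be sketched numerically. The snippet below simply enumerates the tips of three concentric dodecagons, twelve vectors per layer spaced 30° apart; the radii are placeholders, not the paper's DC-link ratios.

    # Sketch of the dodecagonal space-vector geometry: twelve vectors per layer,
    # 30 degrees apart, on three concentric dodecagons with assumed radii.
    import numpy as np

    radii = [0.33, 0.66, 1.0]                      # assumed per-unit dodecagon radii
    angles = np.deg2rad(np.arange(0, 360, 30))     # 12 vector directions, 30 deg apart

    for layer, r in enumerate(radii, start=1):
        tips = r * np.exp(1j * angles)             # complex-valued space-vector tips
        print(f"dodecagon {layer}: first three tips", np.round(tips[:3], 3))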

Relevance:

20.00%

Publisher:

Abstract:

Pulse width modulation (PWM) techniques involving different switching sequences are used in space vector-based PWM generation to reduce line current ripple in induction motor drives. This study proposes a hybrid PWM technique employing five switching sequences. The proposed technique is a combination of continuous PWM, discontinuous PWM (DPWM) and advanced bus clamping PWM methods. The performance of the proposed PWM technique is evaluated and compared with that of existing techniques on a constant volts-per-hertz induction motor drive. In terms of total harmonic distortion (THD) in the line current, the proposed method is shown to be superior to both conventional space vector PWM (CSVPWM) and DPWM over a fundamental frequency range of 32-50 Hz at a given average switching frequency. The reduction in harmonic distortion is about 42% over CSVPWM at the rated speed of the drive.
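
Since the comparison metric above is line-current THD, here is a minimal sketch of how THD can be computed from a sampled current waveform with an FFT; the synthetic current (a fundamental plus small fifth and seventh harmonics) is an assumption for illustration only.

    # THD of a sampled line current via the FFT (illustration, not the paper's drive).
    import numpy as np

    fs, f1, cycles = 20000, 50.0, 10                 # sample rate, fundamental, window
    t = np.arange(0, cycles / f1, 1 / fs)
    i_line = (np.sin(2 * np.pi * f1 * t)
              + 0.05 * np.sin(2 * np.pi * 5 * f1 * t)
              + 0.03 * np.sin(2 * np.pi * 7 * f1 * t))

    spectrum = np.abs(np.fft.rfft(i_line)) / len(i_line)
    freqs = np.fft.rfftfreq(len(i_line), 1 / fs)
    fund_bin = np.argmin(np.abs(freqs - f1))
    fund = spectrum[fund_bin]
    harmonics = np.delete(spectrum, [0, fund_bin])   # drop DC and the fundamental
    thd = np.sqrt(np.sum(harmonics**2)) / fund
    print(f"THD = {100 * thd:.2f} %")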

Relevance:

20.00%

Publisher:

Abstract:

Bubble sizes in a gas-liquid ejector have been measured using an imaging technique and analysed to estimate the Sauter mean diameter. The individual bubble diameter is estimated from the two-dimensional elliptical contour of the actual three-dimensional ellipsoid in the system, by equating the volume of the ellipsoid to that of an equivalent sphere. The bubbles in this air-water system are observed to be oblate and prolate ellipsoids. The bubble diameter is calculated on this basis and the Sauter mean diameter is estimated; the error between the two shape assumptions is reported. The bubble size at different distances from the nozzle of the ejector is presented, along with the corresponding percentage error, which is around 18%.
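
A minimal sketch of the volume-equivalence calculation described above: each imaged ellipse with semi-axes (a, b) is treated as either an oblate or a prolate ellipsoid, converted to an equal-volume sphere diameter, and the Sauter mean diameter d32 = Σd³/Σd² is formed; the semi-axis values are made up.

    # Equivalent-sphere diameter from an ellipsoid and the Sauter mean diameter d32.
    import numpy as np

    def equivalent_diameter(a, b, shape="oblate"):
        # Equal volumes: (pi/6) d^3 = (4/3) pi * (a*a*b for oblate, a*b*b for prolate)
        abc = a * a * b if shape == "oblate" else a * b * b
        return 2.0 * abc ** (1.0 / 3.0)

    def sauter_mean_diameter(diameters):
        d = np.asarray(diameters, dtype=float)
        return np.sum(d**3) / np.sum(d**2)

    # Hypothetical semi-axes (mm) extracted from bubble contours
    semi_axes = [(1.2, 0.9), (0.8, 0.7), (1.5, 1.1), (1.0, 0.8)]
    d_oblate = [equivalent_diameter(a, b, "oblate") for a, b in semi_axes]
    d_prolate = [equivalent_diameter(a, b, "prolate") for a, b in semi_axes]

    d32_o, d32_p = sauter_mean_diameter(d_oblate), sauter_mean_diameter(d_prolate)
    print(f"d32 (oblate)  = {d32_o:.3f} mm")
    print(f"d32 (prolate) = {d32_p:.3f} mm")
    print(f"relative difference = {100 * abs(d32_o - d32_p) / d32_o:.1f} %")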

Relevance:

20.00%

Publisher:

Abstract:

In this paper, a multilevel flying capacitor inverter topology suitable for generating multilevel dodecagonal space vectors for an induction motor drive is proposed. Because of the dodecagonal space vectors, it has an increased modulation range, with the absence of all 6n ± 1 (n odd) harmonics in the phase voltages and currents. The topology, realized by flying capacitor three-level inverters feeding an open-end winding induction motor, does not suffer from the neutral-point voltage imbalance issues seen in NPC inverters, and the capacitors have an inherent charge-balancing capability under PWM control using switching state redundancies. Furthermore, the proposed technique uses fewer power supplies than cascaded H-bridge or NPC-based dodecagonal schemes and has better ride-through capability. Finally, the voltage control is obtained through a simple carrier-based space vector PWM scheme implemented on a DSP.
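
The charge-balancing role of switching-state redundancy can be illustrated with a generic three-level flying-capacitor phase leg (a textbook-style sketch under an assumed switch-labelling and current-sign convention, not the paper's controller): the middle output level has two redundant states, and the one that steers the capacitor voltage toward Vdc/2 is chosen from the capacitor voltage and the phase-current sign.

    # Redundant-state selection for the middle level of one flying-capacitor leg.
    # Convention assumed: state (S1,S2) = (1,0) charges the capacitor for positive
    # phase current and discharges it for negative current; (0,1) does the opposite.
    def middle_level_state(v_cap, i_phase, v_dc):
        """Return the (S1, S2) pair for the middle level that rebalances the capacitor."""
        needs_charging = v_cap < v_dc / 2
        if needs_charging == (i_phase > 0):
            return (1, 0)
        return (0, 1)

    # Hypothetical operating points: (capacitor voltage, phase current) with Vdc = 400 V
    for v_cap, i_phase in [(190.0, 5.0), (190.0, -5.0), (210.0, 5.0), (210.0, -5.0)]:
        s = middle_level_state(v_cap, i_phase, 400.0)
        print(f"Vcap={v_cap:5.1f} V, i={i_phase:+.1f} A -> use state (S1,S2)={s}")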

Relevance:

20.00%

Publisher:

Abstract:

A current-error space phasor based hysteresis controller with nearly constant switching frequency is proposed for a general n-level voltage source inverter fed three-phase induction motor drive. As in voltage-controlled space vector PWM (SVPWM), the proposed controller can precisely detect sub-sector changes and, for switching, selects only the nearest switching voltage vectors, using the estimated fundamental stator voltages along the α and β axes. It provides a smooth transition between voltage levels, including operation in the overmodulation region. Owing to adjacent switching amongst the nearest switching vectors forming the triangular sub-sector in which the tip of the machine's fundamental stator voltage vector lies, switching loss is reduced while the current-error space phasor is kept within a varying parabolic boundary. Appropriate dimensioning and orientation of this parabolic boundary ensure a switching frequency spectrum similar to that of a constant switching frequency SVPWM-based induction motor (IM) drive. The inherent advantages of multilevel inverters and space phasor based current hysteresis control are retained. The proposed controller is simulated as well as implemented on a five-level inverter fed 7.5 kW open-end winding IM drive.
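
For orientation only, the sketch below shows a generic space-phasor current-error hysteresis step, with a circular band and a plain two-level vector set standing in for the paper's sub-sector detection, parabolic boundary and multilevel vector selection; all quantities are assumed.

    # Generic current-error space-phasor hysteresis step (not the paper's method).
    import numpy as np

    def clarke(ia, ib, ic):
        """Amplitude-invariant Clarke transform to the alpha-beta plane."""
        alpha = (2 / 3) * (ia - 0.5 * ib - 0.5 * ic)
        beta = (2 / 3) * (np.sqrt(3) / 2) * (ib - ic)
        return alpha + 1j * beta

    # Active vectors of a plain two-level inverter (per unit), a stand-in for the
    # multilevel vector set of the paper.
    VECTORS = [np.exp(1j * np.deg2rad(60 * k)) for k in range(6)]
    BAND = 0.1   # assumed hysteresis band radius (per unit)

    def select_vector(i_err_abc, current_vector):
        err = clarke(*i_err_abc)                 # error = reference minus measured current
        if abs(err) <= BAND:
            return current_vector                # error inside the band: no switching
        # Pick the vector most aligned with the error phasor, which (neglecting the
        # back-EMF) pushes the machine current toward its reference.
        return max(VECTORS, key=lambda v: np.real(np.conj(v) * err))

    v = VECTORS[0]
    v = select_vector((0.2, -0.1, -0.1), v)
    print(f"selected vector: {v.real:+.3f} {v.imag:+.3f}j")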

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we derive Hybrid, Bayesian and Marginalized Cramér-Rao lower bounds (HCRB, BCRB and MCRB) for the single and multiple measurement vector Sparse Bayesian Learning (SBL) problem of estimating compressible vectors and their prior distribution parameters. We assume the unknown vector to be drawn from a compressible Student's t prior distribution. We derive CRBs that encompass the deterministic or random nature of the unknown parameters of the prior distribution and the regression noise variance. We extend the MCRB to the case where the compressible vector is distributed according to a general compressible prior distribution, of which the generalized Pareto distribution is a special case. We use the derived bounds to uncover the relationship between the compressibility and the Mean Square Error (MSE) of the estimates. Further, we illustrate the tightness and utility of the bounds through simulations, by comparing them with the MSE performance of two popular SBL-based estimators. We find that the MCRB is generally the tightest among the bounds derived and that the MSE performance of the Expectation-Maximization (EM) algorithm coincides with the MCRB for the compressible vector. We also illustrate the dependence of the MSE performance of SBL-based estimators on the compressibility of the vector for several values of the number of observations and at different signal powers.
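
As a simplified illustration of what such bounds look like (a Gaussian-prior stand-in, not the Student's t, hybrid or marginalized bounds derived in the paper), the sketch below computes the Bayesian CRB for a linear model y = Ax + n and checks it against the Monte Carlo MSE of the posterior-mean estimator, which attains it. Dimensions, prior and noise variances are assumptions.

    # Bayesian CRB for y = A x + n with a Gaussian prior on x:
    # BCRB = (A^T A / sigma^2 + P^-1)^-1, attained by the MMSE (posterior-mean) estimator.
    import numpy as np

    rng = np.random.default_rng(1)
    m, n, sigma2, prior_var, trials = 40, 20, 0.1, 1.0, 2000

    A = rng.standard_normal((m, n)) / np.sqrt(m)
    P = prior_var * np.eye(n)                            # prior covariance of x
    bcrb = np.linalg.inv(A.T @ A / sigma2 + np.linalg.inv(P))

    mse = 0.0
    for _ in range(trials):
        x = rng.multivariate_normal(np.zeros(n), P)
        y = A @ x + np.sqrt(sigma2) * rng.standard_normal(m)
        x_hat = bcrb @ (A.T @ y / sigma2)                # MMSE estimate (zero prior mean)
        mse += np.sum((x - x_hat) ** 2) / trials

    print(f"Monte Carlo MSE   : {mse:.4f}")
    print(f"trace of the BCRB : {np.trace(bcrb):.4f}")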

Relevance:

20.00%

Publisher:

Abstract:

Ranking problems have become increasingly important in machine learning and data mining in recent years, with applications ranging from information retrieval and recommender systems to computational biology and drug discovery. In this paper, we describe a new ranking algorithm that directly maximizes the number of relevant objects retrieved at the absolute top of the list. The algorithm is a support vector style algorithm, but due to the different objective, it no longer leads to a quadratic programming problem. Instead, the dual optimization problem involves ℓ1,∞ constraints; we solve this dual problem using the recent ℓ1,∞ projection method of Quattoni et al. (2009). Our algorithm can be viewed as an ℓ∞-norm extreme of the ℓp-norm based algorithm of Rudin (2009) (albeit in a support vector setting rather than a boosting setting); thus we refer to the algorithm as the 'Infinite Push'. Experiments on real-world data sets confirm the algorithm's focus on accuracy at the absolute top of the list.
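
The quantity the algorithm targets, accuracy at the absolute top, is easy to state in code. The sketch below counts how many relevant objects are scored above the highest-scoring irrelevant object; the scores and labels are made up, and this is not the paper's training procedure or its ℓ1,∞ projection step.

    # "Positives at the absolute top": relevant items ranked above every irrelevant item.
    import numpy as np

    def positives_at_the_top(scores, labels):
        """Number of relevant items scored above the highest-scoring irrelevant item."""
        scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
        top_negative = scores[~labels].max()
        return int(np.sum(scores[labels] > top_negative))

    scores = [0.95, 0.90, 0.80, 0.75, 0.60, 0.40]       # hypothetical ranking scores
    labels = [True, True, False, True, False, False]    # True = relevant
    print("relevant items above the top irrelevant item:",
          positives_at_the_top(scores, labels))         # -> 2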

Relevance:

20.00%

Publisher:

Abstract:

This paper analyzes the error exponents in Bayesian decentralized spectrum sensing, i.e., the detection of occupancy of the primary spectrum by a cognitive radio, with the probability of error as the performance metric. At the individual sensors, the error exponents of a Central Limit Theorem (CLT) based detection scheme are analyzed. At the fusion center, a K-out-of-N rule is employed to arrive at the overall decision. It is shown that, in the presence of fading, for a fixed number of sensors, the error exponents with respect to the number of observations at both the individual sensors and the fusion center are zero. This motivates the development of the error exponent with a certain probability as a novel metric for comparing different detection schemes in the presence of fading. The metric is useful, for example, in deciding whether to sense for a pilot tone in a narrow band (and suffer Rayleigh fading) or to sense the entire wide-band signal (and suffer log-normal shadowing), in terms of error exponent performance. The error exponents with a certain probability at both the individual sensors and the fusion center are derived, under both Rayleigh fading and log-normal shadowing. Numerical results illustrate and provide a visual feel for the theoretical expressions obtained.
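
A hedged Monte Carlo sketch of the sensing set-up (an illustration only, not the paper's analytical error exponents): each sensor applies an energy detector to its samples, the received SNR is exponentially distributed to model Rayleigh fading, and a K-out-of-N rule is applied at the fusion centre; all parameter values are assumed.

    # Monte Carlo estimate of the Bayesian probability of error for K-out-of-N fusion
    # of energy detectors under Rayleigh fading (illustrative parameters only).
    import numpy as np

    rng = np.random.default_rng(2)
    N, K, M = 5, 3, 50                  # sensors, fusion threshold, samples per sensor
    P_H1, SNR_MEAN, TRIALS = 0.5, 1.0, 20000
    TAU = M * (1 + 0.5 * SNR_MEAN)      # assumed per-sensor energy threshold

    errors = 0
    for _ in range(TRIALS):
        occupied = rng.random() < P_H1
        # Rayleigh fading => exponentially distributed received SNR per sensor
        snr = rng.exponential(SNR_MEAN, N) if occupied else np.zeros(N)
        noise = rng.standard_normal((N, M))
        signal = np.sqrt(snr)[:, None] * rng.standard_normal((N, M))
        energy = np.sum((noise + signal) ** 2, axis=1)
        local_decisions = energy > TAU                  # each sensor's H1 flag
        fusion_decision = local_decisions.sum() >= K    # K-out-of-N rule
        errors += (fusion_decision != occupied)

    print(f"estimated probability of error: {errors / TRIALS:.4f}")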

Relevance:

20.00%

Publisher:

Abstract:

We study the tradeoff between the average error probability and the average queueing delay of messages that arrive randomly at the transmitter of a point-to-point discrete memoryless channel using variable-rate, fixed-codeword-length random coding. Bounds on the exponential decay rate of the average error probability with average queueing delay, in the regime of large average delay, are obtained. Upper and lower bounds on the optimal average delay for a given average error probability constraint are presented. We then formulate a constrained Markov decision problem to characterize the rate of transmission as a function of queue size under an average error probability constraint. Using a Lagrange multiplier, the constrained Markov decision problem is converted into the problem of minimizing the average cost of an unconstrained Markov decision problem. A simple heuristic policy is proposed that approximately achieves the optimal average cost.
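
The Lagrangian formulation can be sketched as a small queue-length MDP: the action is the transmission rate, the per-step cost is the queue length (a delay proxy) plus a Lagrange multiplier times the error probability of the chosen rate, and value iteration yields a rate-versus-queue-size policy. The paper works with average cost; the discounted version and all numbers below are assumptions.

    # Lagrangian rate-control MDP solved by discounted value iteration (illustrative).
    import numpy as np

    Q_MAX, ARRIVAL_P, GAMMA, LAMBDA = 20, 0.4, 0.98, 50.0
    RATES = [0, 1, 2]                       # messages transmitted per slot
    P_ERR = {0: 0.0, 1: 1e-3, 2: 5e-2}      # assumed error probability per rate

    def q_values(q, V):
        """Lagrangian one-step cost plus discounted expected future cost, per rate."""
        out = []
        for r in RATES:
            served = min(r, q)
            step_cost = q + LAMBDA * P_ERR[r] * (served > 0)
            q_after = q - served
            expected_next = ((1 - ARRIVAL_P) * V[q_after]
                             + ARRIVAL_P * V[min(Q_MAX, q_after + 1)])
            out.append(step_cost + GAMMA * expected_next)
        return out

    V = np.zeros(Q_MAX + 1)
    for _ in range(3000):                   # value iteration
        V = np.array([min(q_values(q, V)) for q in range(Q_MAX + 1)])

    policy = [RATES[int(np.argmin(q_values(q, V)))] for q in range(Q_MAX + 1)]
    print("transmission rate as a function of queue size:", policy)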

Relevance:

20.00%

Publisher:

Abstract:

Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. The codes perform poorly, however, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used, while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple, yet powerful, framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
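
The repair penalty of a traditional erasure code, the motivation above, can be seen in a toy single-parity example (this is not the paper's two-code framework): any one failed node can be rebuilt, but only by downloading a block from every surviving node, roughly the size of the whole stored file.

    # Toy single-XOR-parity code: repairing one node needs a block from every survivor.
    import numpy as np

    rng = np.random.default_rng(3)
    k, block_len = 4, 8
    data_nodes = [rng.integers(0, 256, block_len, dtype=np.uint8) for _ in range(k)]
    parity_node = np.bitwise_xor.reduce(data_nodes)          # node k holds the parity
    nodes = data_nodes + [parity_node]

    failed = 2                                                # any single node failure
    survivors = [blk for i, blk in enumerate(nodes) if i != failed]
    repaired = np.bitwise_xor.reduce(survivors)               # XOR of all survivors

    print("repair correct:", np.array_equal(repaired, nodes[failed]))
    print(f"blocks downloaded for repair: {len(survivors)} of {len(nodes)} stored")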

Relevance:

20.00%

Publisher:

Abstract:

We consider the design of a linear equalizer with a finite number of coefficients, and the associated channel estimation problem, in the context of a classical linear intersymbol-interference channel with additive Gaussian noise. Previous literature has shown that minimum bit error rate (MBER) based detection outperforms minimum mean squared error (MMSE) based detection. We pose the channel estimation problem as a detection problem and propose a novel algorithm to estimate the channel within the MBER framework for BPSK signals. It is shown that the proposed algorithm reduces the BER compared with MMSE-based channel estimation when used with either MMSE or MBER detection.
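
For context, the sketch below shows the standard training-based least-squares channel estimate for a BPSK-driven ISI channel, i.e. the kind of baseline the MMSE comparison refers to; the channel taps, training length and noise level are assumptions, and the paper's MBER estimator is not reproduced here.

    # Training-based least-squares estimate of an ISI channel driven by BPSK symbols.
    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(4)
    h_true = np.array([0.407, 0.815, 0.407])     # assumed 3-tap ISI channel
    L, N_TRAIN, NOISE_STD = len(h_true), 200, 0.3

    s = rng.choice([-1.0, 1.0], N_TRAIN)         # known BPSK training sequence
    # Convolution matrix: row n holds s[n], s[n-1], ..., s[n-L+1]
    S = toeplitz(s, np.r_[s[0], np.zeros(L - 1)])
    y = S @ h_true + NOISE_STD * rng.standard_normal(N_TRAIN)

    h_ls = np.linalg.lstsq(S, y, rcond=None)[0]  # least-squares channel estimate
    print("true taps     :", h_true)
    print("estimated taps:", np.round(h_ls, 3))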