270 results for Minimization algorithm
Abstract:
A simple procedure is presented for the state minimization of an incompletely specified sequential machine whose number of internal states is not very large. It introduces the concept of a compatibility graph, from which the set of maximal compatibles of the machine can be conveniently derived. Primary and secondary implication trees associated with each maximal compatible are then constructed. The minimal-state machine covering the incompletely specified machine is then obtained from these implication trees.
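The maximal compatibles mentioned above are exactly the maximal cliques of the compatibility graph. A minimal sketch of that enumeration step, using the classical Bron-Kerbosch recursion on a hypothetical four-state machine (the graph below is illustrative, not taken from the paper):

```python
# Bron-Kerbosch enumeration of maximal cliques, used here to list the
# maximal compatibles of a compatibility graph (illustrative sketch).
def maximal_compatibles(graph):
    """graph: dict mapping each state to the set of states compatible with it."""
    cliques = []

    def expand(r, p, x):
        if not p and not x:
            cliques.append(r)          # r is a maximal compatible
            return
        for v in list(p):
            expand(r | {v}, p & graph[v], x & graph[v])
            p.remove(v)
            x.add(v)

    expand(set(), set(graph), set())
    return cliques

# Hypothetical 4-state machine: edges mark compatible state pairs.
g = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
print(maximal_compatibles(g))          # [{'A', 'B', 'C'}, {'C', 'D'}]
```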
Abstract:
Scan circuits generally cause excessive switching activity compared with normal circuit operation. The higher switching activity in turn causes a higher peak power-supply current, which results in supply-voltage droop and eventually yield loss. This paper proposes an efficient methodology for test vector re-ordering to achieve the minimum peak power supported by the given test vector set. The proposed methodology also minimizes average power under the minimum-peak-power constraint. A methodology to further reduce the peak power below the minimum supported peak power, by including a minimal number of additional vectors, is also discussed. The paper defines the lower bound on peak power for a given test set. Results on several benchmarks show that the method can reduce peak power by up to 27%.
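The reordering idea can be illustrated with a simple heuristic: peak switching activity between consecutive scan vectors grows with their Hamming distance, so ordering vectors to keep neighbouring distances small lowers the peak. The greedy sketch below only illustrates this idea and is not the paper's algorithm:

```python
# Greedy reordering of test vectors to reduce peak switching activity,
# approximated by the Hamming distance between consecutive vectors.
# Illustrative heuristic only, not the paper's exact methodology.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def reorder(vectors):
    remaining = list(vectors)
    order = [remaining.pop(0)]             # start from the first vector
    while remaining:
        # pick the unused vector closest to the last one scheduled
        nxt = min(remaining, key=lambda v: hamming(order[-1], v))
        remaining.remove(nxt)
        order.append(nxt)
    return order

tests = ["0000", "1111", "0011", "1100", "0101"]
ordered = reorder(tests)
peak = max(hamming(a, b) for a, b in zip(ordered, ordered[1:]))
print(ordered, "peak transitions:", peak)
```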
Abstract:
This is a continuation of earlier studies on the evolution of infinite populations of haploid genotypes within a genetic algorithm framework. We had previously explored the evolutionary consequences of the existence of indeterminate, "plastic", loci, where a plastic locus had a finite probability in each generation of functioning (being switched "on") or not functioning (being switched "off"). The relative probabilities of the two outcomes were assigned on a stochastic basis. The present paper examines what happens when the transition probabilities are biased by the presence of regulatory genes. We find that under certain conditions regulatory genes can improve the adaptation of the population and speed up the rate of evolution (on occasion at the cost of lowering the degree of adaptation). Also, the existence of regulatory loci potentiates selection in favour of plasticity. There is a synergistic effect of regulatory genes on plastic alleles: the frequency of such alleles increases when regulatory loci are present. Thus, phenotypic selection alone can be a potentiating factor in favour of better adaptation.
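A toy rendering of the mechanism described above, in which a regulatory gene biases the on/off probability of a plastic locus; the base probability and bias value are hypothetical, chosen only for illustration:

```python
import random

# Toy model of a plastic locus: each generation it is expressed ("on")
# with a probability that a regulatory gene can bias upward.
# base_p and bias are hypothetical parameter values.
def express(plastic, regulator_on, base_p=0.5, bias=0.3):
    if not plastic:
        return True                    # a fixed locus is always expressed
    p = base_p + (bias if regulator_on else 0.0)
    return random.random() < p

genotype = [True, True, False]         # two plastic loci, one fixed
phenotype = [express(g, regulator_on=True) for g in genotype]
print(phenotype)
```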
Abstract:
Conformational studies have been carried out on a hydrogen-bonded all-trans cyclic pentapeptide backbone. Application of a combination of grid search and energy minimization to this system has yielded 23 minimum energy conformations, which are characterized by unique patterns of hydrogen bonding comprising β- and γ-turns. A study of the minimum energy conformations vis-à-vis non-planar deviation of the peptide units reveals that non-planarity is an inherent feature in many cases. Conformational clustering shows that the minimum energy conformations fall into 6 distinct conformational families. Preliminary comparison with available X-ray structures of cyclic pentapeptides indicates that only some of the minimum energy conformations have formed crystal structures. The set of minimum energy conformations worked out in the present study can form a consolidated database of prototypes for hydrogen-bonded backbones and be useful for modelling cyclic pentapeptides, both synthetic and bioactive in nature.
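The grid-search-plus-minimization strategy can be sketched generically as follows; the two-variable "energy" surface here is a stand-in toy function, not the peptide force field used in the study:

```python
import itertools
import numpy as np
from scipy.optimize import minimize

# Generic grid search over torsion angles followed by local energy
# minimization; the toy surface below stands in for a real force field.
def energy(phi_psi):
    phi, psi = phi_psi
    return np.cos(phi) + 0.5 * np.cos(2 * psi) + 0.1 * phi ** 2

grid = np.deg2rad(np.arange(-180, 180, 60))     # coarse torsion grid
minima = set()
for start in itertools.product(grid, repeat=2):
    res = minimize(energy, np.array(start))     # local refinement
    minima.add(tuple(np.round(res.x, 2)))       # round to merge duplicates

print(len(minima), "distinct minima found")
```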
Abstract:
The estimation of the frequency of a sinusoidal signal is a well-researched problem. In this work we propose an initialization scheme for the popular dichotomous search of the periodogram peak algorithm (DSPA), which is used to estimate the frequency of a sinusoid in white Gaussian noise. Our initialization has low computational cost and gives the same performance as the DSPA, while reducing the number of iterations needed for the fine search stage. We show that our algorithm remains stable as we reduce the number of iterations in the fine search stage. We also compare our modification to a previous modification of the DSPA and show that our initialization technique enhances the performance of the algorithm.
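The overall DSPA structure, a coarse periodogram peak followed by a dichotomous fine search, can be sketched as below; the specific initialization proposed in the paper is not reproduced:

```python
import numpy as np

# Coarse periodogram peak followed by a dichotomous (binary) fine search;
# a generic sketch of the DSPA structure, not the paper's initialization.
def estimate_freq(x, fs, iters=20):
    n = len(x)
    k = np.argmax(np.abs(np.fft.rfft(x)))      # coarse peak bin
    f, step = k * fs / n, fs / (2.0 * n)       # start at the bin, half-bin step

    def power(f0):                             # periodogram value at f0
        return np.abs(np.sum(x * np.exp(-2j * np.pi * f0 * np.arange(n) / fs)))

    for _ in range(iters):                     # dichotomous refinement
        f = f + step if power(f + step) > power(f - step) else f - step
        step /= 2.0
    return f

fs, f0 = 1000.0, 123.4
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.1 * np.random.randn(1024)
print(estimate_freq(x, fs))                    # close to 123.4
```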
Abstract:
This note points out a fallacy in the method given by Sharma and Swarup, in their paper on the time-minimising transportation problem, for determining the set S_hk of all nonbasic cells which, when introduced into the basis, would either eliminate a given basic cell (h, k) from the basis or reduce the amount x_hk.
Abstract:
Rate-constrained power minimization (PMIN) over a code-division multiple-access (CDMA) channel with correlated noise is studied. PMIN is shown to be an instance of a separable convex optimization problem subject to linear ascending constraints. PMIN is further reduced to a dual problem of sum-rate maximization (RMAX). The results highlight the underlying unity between PMIN, RMAX, and a problem closely related to PMIN but with linear receiver constraints. Subsequently, conceptually simple sequence design algorithms are proposed to explicitly identify an assignment of sequences and powers that solves PMIN. The algorithms yield an upper bound of 2N - 1 on the number of distinct sequences, where N is the processing gain. The sequences generated using the proposed algorithms are in general real-valued. If a rate-splitting and multi-dimensional CDMA approach is allowed, the upper bound reduces to N distinct sequences, in which case the sequences can form an orthogonal set and be binary (±1)-valued.
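The separable convex structure with linear ascending constraints referred to above can be written schematically as follows, with convex objectives f_i and nondecreasing bounds alpha_1 <= ... <= alpha_N (generic notation, not the paper's):

```latex
% Schematic form of a separable convex program with linear ascending
% constraints (generic notation, not the paper's):
\begin{aligned}
\min_{x_1,\dots,x_N \,\ge\, 0} \quad & \sum_{i=1}^{N} f_i(x_i) \\
\text{subject to} \quad & \sum_{i=1}^{k} x_i \;\ge\; \alpha_k,
\qquad k = 1,\dots,N .
\end{aligned}
```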
Abstract:
In recent years, the identification of sequence patterns has been given immense importance for better understanding their significance with respect to genomic organization and evolutionary processes. To this end, an algorithm has been derived to identify all similar sequence repeats present in a protein sequence. The proposed algorithm is useful for correlating the three-dimensional structures of various similar sequence repeats available in the Protein Data Bank with the same sequence repeats present in other databases such as SWISS-PROT, PIR and genome databases.
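As a much-simplified illustration of repeat detection, the sketch below finds exact repeats of a fixed length by hashing substrings; the paper's algorithm for similar (inexact) repeats is not reproduced, and the sequence shown is hypothetical:

```python
from collections import defaultdict

# Naive finder of exact repeats of length k in a protein sequence;
# a much-simplified stand-in for similar-sequence-repeat detection.
def find_repeats(seq, k=5):
    positions = defaultdict(list)
    for i in range(len(seq) - k + 1):
        positions[seq[i:i + k]].append(i)
    return {s: p for s, p in positions.items() if len(p) > 1}

seq = "MKTAYIAKQRMKTAYIAKQR"        # hypothetical sequence with one repeat
print(find_repeats(seq, k=10))      # {'MKTAYIAKQR': [0, 10]}
```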
Abstract:
By "phenotypic plasticity" we refer to the capacity of a genotype to exhibit different phenotypes, whether in the same or in different environments. We have previously demonstrated that phenotypic plasticity can improve the degree of adaptation achieved via natural selection (Behera & Nanjundiah, 1995). That result was obtained from a genetic algorithm model of haploid genotypes (idealized as one-dimensional strings of genes) evolving in a fixed environment. Here, the dynamics of evolution are examined under conditions of a cyclically varying environment. We find that the rate of evolution, as well as the extent of adaptation (as measured by mean population fitness), is lowered because of environmental cycling. The decrease in adaptation caused by a varying environment can, however, be partly or wholly compensated for by an increase in the degree of plasticity that a genotype is capable of. Also, the reduction of population fitness caused by a variable environment can be partially offset by decreasing the total number of genetic loci. We conjecture that an increase in genome size may have been among the factors responsible for the evolution of phenotypic plasticity.
Abstract:
Purpose: A computationally efficient algorithm (of linear iterative type) based on singular value decomposition (SVD) of the Jacobian has been developed that can be used in rapid dynamic near-infrared (NIR) diffuse optical tomography. Methods: Numerical and experimental studies have been conducted to demonstrate the computational efficacy of this SVD-based algorithm over conventional optical image reconstruction algorithms. Results: These studies indicate that the performance of linear iterative algorithms in terms of contrast recovery (quantitation of optical images) is better than that of nonlinear iterative (conventional) algorithms, provided the initial guess is close to the actual solution. The nonlinear algorithms can provide better-quality images than the linear iterative type algorithms. Moreover, the analytical and numerical equivalence of the SVD-based algorithm to linear iterative algorithms was also established as part of this work. It is also demonstrated that the SVD-based image reconstruction typically requires O(NN^2) operations per iteration, as contrasted with linear and nonlinear iterative methods that respectively require O(NN^3) and O(NN^6) operations, with NN being the number of unknown parameters in the optical image reconstruction procedure. Conclusions: This SVD-based computationally efficient algorithm can make the integration of the image reconstruction procedure with data acquisition feasible, in turn making rapid dynamic NIR tomography viable in the clinic for continuously monitoring hemodynamic changes in tissue pathophysiology.
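The core of an SVD-based linear reconstruction step is a truncated pseudoinverse applied to the data mismatch. A minimal sketch with random stand-in data (a real NIR Jacobian is not computed here):

```python
import numpy as np

# One linearized reconstruction step via truncated SVD of the Jacobian:
# delta_x = V_r S_r^{-1} U_r^T delta_y. Random stand-in data; the
# Jacobian of a real NIR forward model is not computed here.
rng = np.random.default_rng(0)
J = rng.standard_normal((64, 100))          # Jacobian: measurements x unknowns
dy = rng.standard_normal(64)                # data-model mismatch

U, s, Vt = np.linalg.svd(J, full_matrices=False)
r = np.sum(s > 1e-3 * s[0])                 # truncation: drop tiny singular values
dx = Vt[:r].T @ ((U[:, :r].T @ dy) / s[:r]) # regularized update of the unknowns
print(dx.shape)                             # (100,)
```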
Abstract:
Flexible objects such as a rope or a snake move in such a way that their axial length remains almost constant. To simulate the motion of such an object, one strategy is to discretise the object into a large number of small rigid links connected by joints. However, the resulting discretised system is highly redundant, and the joint rotations for a desired Cartesian motion of any point on the object cannot be solved for uniquely. In this paper, we revisit an algorithm, based on the classical tractrix curve, to resolve the redundancy in such hyper-redundant systems. For a desired motion of the 'head' of a link, the 'tail' is moved along a tractrix, and recursively all links of the discretised object are moved along different tractrix curves. The algorithm is illustrated by simulations of a moving snake, the tying of knots with a rope, and the solution of the inverse kinematics of a planar hyper-redundant manipulator. The simulations show that the tractrix-based algorithm leads to a more 'natural' motion, since the motion is distributed uniformly along the entire object with the displacements diminishing from the 'head' to the 'tail'.
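A simplified "follow-the-leader" version of the link-dragging idea can be sketched as follows; each joint slides toward its leader so that the link length is preserved, which approximates the diminishing-displacement behaviour but is not the paper's exact tractrix solution:

```python
import numpy as np

# Simplified follow-the-leader dragging of a chain of rigid links:
# when a joint's leader moves, the joint slides toward the leader so the
# link length is preserved. Approximates the tractrix behaviour only.
def drag(points, new_head, link_len):
    pts = points.copy()
    pts[0] = new_head
    for i in range(1, len(pts)):
        d = pts[i] - pts[i - 1]
        pts[i] = pts[i - 1] + link_len * d / np.linalg.norm(d)
    return pts

chain = np.array([[float(i), 0.0] for i in range(5)])   # 4 unit links on x-axis
moved = drag(chain, new_head=np.array([0.2, 0.3]), link_len=1.0)
print(np.round(moved, 2))    # displacements shrink from head to tail
```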
Abstract:
Partitional clustering algorithms, which partition the dataset into a pre-defined number of clusters, can be broadly classified into two types: algorithms which explicitly take the number of clusters as input and algorithms that take the expected size of a cluster as input. In this paper, we propose a variant of the k-means algorithm and prove that it is more efficient than standard k-means algorithms. An important contribution of this paper is the establishment of a relation between the number of clusters and the size of the clusters in a dataset through the analysis of our algorithm. We also demonstrate that the integration of this algorithm as a pre-processing step in classification algorithms reduces their running-time complexity.
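For reference, a standard Lloyd-style k-means baseline is sketched below; the paper's variant, which is driven by an expected cluster size rather than k alone, is not reproduced:

```python
import numpy as np

# Standard Lloyd's k-means, shown as a baseline; the paper's
# size-driven variant is not reproduced here.
def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels, centers = kmeans(X, k=2)
print(centers)
```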
Abstract:
The application of computer-aided inspection, integrated with coordinate measuring machines and laser scanners, to inspect manufactured aircraft parts using robust registration of two point datasets is a subject of active research in computational metrology. This paper presents a novel approach to automated inspection by matching shapes based on a modified iterative closest point (ICP) method to define a criterion for the acceptance or rejection of a part. This procedure improves upon existing methods by eliminating the need to construct either a tessellated or smooth representation of the inspected part, as well as the requirement of a priori knowledge of approximate registration and correspondence between the points representing the computer-aided design datasets and the part to be inspected. In addition, this procedure establishes a better measure of error between the two matched datasets. The use of localized region-based triangulation is proposed for tracking the error. The approach described improves the convergence of the ICP technique with a dramatic decrease in computational effort. Experimental results obtained by implementing this approach using both synthetic and practical data show that the present method is efficient and robust. These results validate the algorithm, and the examples demonstrate its potential for use in engineering applications.
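A minimal point-to-point ICP iteration, using nearest-neighbour correspondences and the SVD (Kabsch) solve for the rigid transform, can be sketched as follows; the paper's modifications (error measure, region-based triangulation) are omitted:

```python
import numpy as np

# Minimal point-to-point ICP: nearest-neighbour correspondence followed
# by the SVD (Kabsch) solve for the best rigid transform.
def icp(src, dst, iters=20):
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every point of cur
        idx = np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        p, q = cur - cur.mean(0), dst[idx] - dst[idx].mean(0)
        U, _, Vt = np.linalg.svd(p.T @ q)
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:                 # avoid reflections
            Vt[-1] *= -1
            R = (U @ Vt).T
        t = dst[idx].mean(0) - cur.mean(0) @ R.T
        cur = cur @ R.T + t
    return cur

# Toy check: recover a slightly rotated, shifted copy of a square
# (small motion so the nearest neighbours are the true matches).
dst = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
theta = 0.1
Rt = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = dst @ Rt.T + [0.2, 0.1]
print(np.round(icp(src, dst), 2))                # recovers dst
```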
Abstract:
We propose a self-regularized pseudo-time marching scheme to solve the ill-posed, nonlinear inverse problem associated with diffuse propagation of coherent light in a tissue-like object. In particular, in the context of diffuse correlation tomography (DCT), we consider the recovery of mechanical property distributions from partial and noisy boundary measurements of light intensity autocorrelation. We prove the existence of a minimizer for the Newton algorithm after establishing the existence of weak solutions for the forward equation of light amplitude autocorrelation and its Fréchet derivative and adjoint. The asymptotic stability of the solution of the ordinary differential equation obtained through the introduction of the pseudo-time is also analyzed. We show that the asymptotic solution obtained through pseudo-time marching converges to the optimal solution provided the Hessian of the forward equation is positive definite in the neighborhood of the optimal solution. The superior noise tolerance and regularization-insensitive nature of the pseudo-dynamic strategy are demonstrated through numerical simulations in the context of both DCT and diffuse optical tomography. (C) 2010 Optical Society of America.
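The pseudo-time idea replaces a direct optimization solve by integrating an ODE whose steady state is the minimizer. A toy sketch on a quadratic misfit (the DCT forward model and its Fréchet derivative are not reproduced):

```python
import numpy as np

# Pseudo-time marching: instead of solving grad(Phi) = 0 directly,
# integrate dx/dtau = -grad(Phi(x)) until steady state. Toy misfit
# Phi(x) = 0.5 * ||A x - b||^2 with random stand-in data.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x = np.zeros(5)
dtau = 0.01                                # pseudo-time step
for _ in range(5000):
    x -= dtau * A.T @ (A @ x - b)          # explicit Euler step
# steady state matches the least-squares minimizer
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0], atol=1e-6))
```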
Abstract:
In this paper, we present a low-complexity detector achieving near maximum-likelihood (ML) performance for large MIMO systems having tens of transmit and receive antennas. Such large MIMO systems are of interest because of the high spectral efficiencies possible in such systems. The proposed detection algorithm, termed the multistage likelihood-ascent search (M-LAS) algorithm, is rooted in Hopfield neural networks and is shown to possess excellent performance as well as complexity attributes. In terms of performance, in a 64 x 64 V-BLAST system with 4-QAM, the proposed algorithm achieves an uncoded BER of 10^-3 at an SNR just about 1 dB away from the AWGN-only SISO performance given by Q(sqrt(SNR)). In terms of coded BER, with a rate-3/4 turbo code at a spectral efficiency of 96 bps/Hz, the algorithm performs within about 4.5 dB of theoretical capacity, which is remarkable in terms of both the high spectral efficiency and the nearness to theoretical capacity. Our simulation results show that the above performance is achieved with a complexity of just O(N_t N_r) per symbol, where N_t and N_r denote the number of transmit and receive antennas.
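The likelihood-ascent principle can be illustrated with a toy single-flip search for ±1 symbols, starting from the matched-filter output; the paper's multistage M-LAS is not reproduced here:

```python
import numpy as np

# Toy likelihood-ascent search for +/-1 symbols: start from the sign of
# the matched-filter output and keep flipping whichever single symbol
# most reduces ||y - H x||^2, until no flip helps.
def las_detect(H, y):
    x = np.sign(H.T @ y)                       # matched-filter initial vector
    x[x == 0] = 1.0
    while True:
        costs = [np.sum((y - H @ flip(x, i)) ** 2) for i in range(len(x))]
        best = int(np.argmin(costs))
        if costs[best] < np.sum((y - H @ x) ** 2):
            x = flip(x, best)                  # take the best single flip
        else:
            return x                           # local ML point reached

def flip(x, i):
    z = x.copy()
    z[i] = -z[i]
    return z

nt = 8
rng = np.random.default_rng(2)
H = rng.standard_normal((nt, nt)) / np.sqrt(nt)
x_true = rng.choice([-1.0, 1.0], nt)
y = H @ x_true + 0.05 * rng.standard_normal(nt)
print("detected:", las_detect(H, y))
print("true:    ", x_true)
```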