49 results for Notion of code

at Indian Institute of Science - Bangalore - India


Relevance:

100.00%

Publisher:

Abstract:

This paper looks at the complexity of four incremental problems: (1) interval partitioning of a flow graph, (2) breadth-first search (BFS) of a directed graph, (3) lexicographic depth-first search (DFS) of a directed graph, and (4) constructing the postorder listing of the nodes of a binary tree. The last problem arises from the need to incrementally compute the Sethi-Ullman (SU) ordering [1] of the subtrees of a tree after it has undergone changes of a given type. These problems are among those that claimed our attention while we were designing algorithmic techniques for incremental code generation. BFS and DFS certainly have numerous other applications, but as far as our work is concerned, incremental code generation is the common thread linking these problems. The complexity of these problems is studied from two different perspectives. The theory of incremental relative lower bounds (IRLBs) is given in [2]; we use this theory to derive the IRLBs of the first three problems. We then use the notion of a bounded incremental algorithm [4] to prove the unboundedness of the fourth problem with respect to the locally persistent model of computation. The lower bound result for lexicographic DFS is possibly the most interesting. In [5] the author considers lexicographic DFS to be a problem whose incremental version may require recomputation of the entire solution from scratch. In that sense, our IRLB result provides further evidence for this possibility, with the proviso that the incremental DFS algorithms considered be ones that do not require too much preprocessing.
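
To make the fourth problem concrete: the classic (non-incremental) Sethi-Ullman numbering of [1] labels each subtree with the number of registers its evaluation needs, and the postorder listing is the object whose incremental maintenance the paper analyzes. A minimal Python sketch of both, with an illustrative tree representation not taken from the paper:

class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def su_number(node):
    # Sethi-Ullman number: registers needed to evaluate the subtree.
    if node is None:
        return 0
    if node.left is None and node.right is None:
        return 1  # a leaf loads one value into one register
    l, r = su_number(node.left), su_number(node.right)
    return max(l, r) if l != r else l + 1

def postorder(node, out):
    # The postorder listing whose incremental recomputation is studied.
    if node is not None:
        postorder(node.left, out)
        postorder(node.right, out)
        out.append(node)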

Relevance:

90.00%

Publisher:

Abstract:

In this article, a non-autonomous (time-varying) semilinear system is considered and its approximate controllability is investigated. The notion of a 'bounded integral contractor', introduced by Altman, is exploited to obtain sufficient conditions for approximate controllability. This condition is weaker than the Lipschitz condition. The main theorems of Naito [11, 12] are obtained as corollaries of our main results. An example is also given to show how our results weaken the conditions assumed by Sukavanam [17].
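
For concreteness, a standard form of the setting (the notation below is assumed, not quoted from the article) is the non-autonomous semilinear system

\dot{x}(t) = A(t)\,x(t) + B(t)\,u(t) + f\bigl(t, x(t)\bigr), \qquad x(0) = x_0, \quad t \in [0, T],

and approximate controllability means that the reachable set at time T is dense in the state space: for every target x_T and every \varepsilon > 0 there is an admissible control u with \lVert x(T; u) - x_T \rVert < \varepsilon. The bounded-integral-contractor condition on f then stands in for the usual Lipschitz hypothesis.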

Relevance:

90.00%

Publisher:

Abstract:

Computation of the dependency basis is the fundamental step in solving the membership problem for functional dependencies (FDs) and multivalued dependencies (MVDs) in relational database theory. We examine this problem from an algebraic perspective. We introduce the notion of the inference basis of a set M of MVDs and show that it contains the maximum information about the logical consequences of M. We propose the notion of a dependency-lattice and develop an algebraic characterization of the inference basis using simple notions from lattice theory. We also establish several interesting properties of dependency-lattices related to the implication problem. Based on our characterization, we synthesize efficient algorithms for (a) computing the inference basis of a given set M of MVDs; (b) computing the dependency basis of a given attribute set w.r.t. M; and (c) solving the membership problem for MVDs. We also show that our results naturally extend to incorporate FDs, in a way that enables the solution of the membership problem for FDs and MVDs taken together. Finally, we show that our algorithms are more efficient than existing ones when used to solve what we term the 'generalized membership problem'.
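
As background for step (b), the classical refinement algorithm of Beeri computes the dependency basis directly; the paper's contribution is the more efficient lattice-based synthesis, which is not reproduced here. A minimal sketch of the classical algorithm, with illustrative names:

def dependency_basis(U, X, M):
    # U: frozenset of all attributes; X: frozenset; M: list of (W, Z)
    # pairs representing MVDs W ->> Z over U.
    basis = {frozenset(U - X)} if U - X else set()
    changed = True
    while changed:
        changed = False
        for W, Z in M:
            for Y in list(basis):
                # W ->> Z splits Y if Y avoids W but straddles Z.
                if not (Y & W) and (Y & Z) and (Y - Z):
                    basis.remove(Y)
                    basis.add(frozenset(Y & Z))
                    basis.add(frozenset(Y - Z))
                    changed = True
    return basis

# Membership test: X ->> Y holds iff Y - X is a union of basis elements.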

Relevance:

90.00%

Publisher:

Abstract:

This paper presents an overview of the issues in precisely defining, specifying and evaluating the dependability of software, particularly in the context of computer-controlled process systems. Dependability is intended to be a generic term embodying various quality factors and is useful for both software and hardware. While developments in quality assurance and reliability theories have proceeded mostly in independent directions for hardware and software systems, we present here the case for developing a unified framework of dependability, a facet of the operational effectiveness of modern technological systems, and develop a hierarchical systems model helpful in clarifying this view. In the second half of the paper, we survey the models and methods available for measuring and improving software reliability. The nature of software "bugs", the failure history of the software system in the various phases of its lifecycle, reliability growth in the development phase, estimation of the number of errors remaining in the operational phase, and the complexity of the debugging process are all considered in varying degrees of detail. We also discuss the notion of software fault tolerance, methods of achieving it, and the status of other measures of software dependability such as maintainability, availability and safety.
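
As one concrete instance of the reliability-growth models such a survey covers (chosen here purely for illustration; the paper is not tied to this model), the Goel-Okumoto model takes the expected number of failures observed by time t to be m(t) = a(1 - e^(-bt)), so a - m(t) estimates the number of errors remaining:

import math

def expected_failures(t, a=100.0, b=0.05):
    # Goel-Okumoto mean value function m(t); a and b are hypothetical.
    return a * (1.0 - math.exp(-b * t))

def remaining_errors(t, a=100.0, b=0.05):
    # Estimated errors still latent in the operational phase at time t.
    return a - expected_failures(t, a, b)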

Relevance:

90.00%

Publisher:

Abstract:

Sets of multivalued dependencies (MVDs) having conflict-free covers are important to the theory and design of relational databases [2,12,15,16]. Their desirable properties motivate the problem of testing a set M of MVDs for the existence of a conflict-free cover. In [8] Goodman and Tay have proposed an approach based on the possible equivalence of M to a single (acyclic) join dependency (JD). We remark that their characterization does not lend insight into the nature of such sets of MVDs. Here, we use notions that are intrinsic to MVDs to develop a new characterization. Our approach proceeds in two stages. In the first stage, we use the notion of "split-free" sets of MVDs and obtain a characterization of sets M of MVDs having split-free covers. In the second, we use the notion of "intersection" of MVDs to arrive at a necessary and sufficient condition for a split-free set of MVDs to be conflict-free. Based on our characterizations, we also give polynomial-time algorithms for testing whether M has split-free and conflict-free covers. The highlight of our approach is the clear insight it provides into the nature of sets of MVDs having conflict-free covers. Less emphasis is given in this paper to the actual efficiency of the algorithms. Finally, as a bonus, we derive a desirable property of split-free sets of MVDs, thereby showing that they are interesting in their own right.

Relevance:

90.00%

Publisher:

Abstract:

Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss in data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler and instantiated for the specific problem of constant propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running times over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
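
The precision loss at a merge is easy to see for constant propagation, the instantiation used in the paper. A minimal sketch of the lattice meet (the restructuring itself, conceptually a product of the control-flow graph with the data-flow facts, is not reproduced here):

def meet(a, b):
    # Constant-propagation lattice: 'TOP' (undefined), an int, or 'BOT'.
    if a == 'TOP':
        return b
    if b == 'TOP':
        return a
    return a if a == b else 'BOT'

# x = 1 on one predecessor and x = 2 on the other merge to 'BOT', even
# though x is a known constant along every concrete path; restructuring
# keeps such paths apart so each retains its constant.
assert meet(1, 2) == 'BOT'
assert meet('TOP', 7) == 7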

Relevance:

90.00%

Publisher:

Abstract:

The notion of the gravity-induced electric field has been applied to an entire self-gravitating massive body. The resulting electric polarization of the otherwise neutral body, when taken in conjunction with the latter's rotation, is shown to generate an axial magnetic field of the right type and order of magnitude for certain astrophysical objects. In the present treatment the electric polarization is calculated in the ion-continuum Thomas-Fermi approximation, while the electrodynamics of the continuous medium is treated in the nonrelativistic approximation.

Relevance:

90.00%

Publisher:

Abstract:

By applying the theory of the asymptotic distribution of extremes and a certain stability criterion to the question of the domain of convergence, in the probability sense, of the renormalized perturbation expansion (RPE) for the site self-energy in a cellularly disordered system, an expression has been obtained in closed form for the probability of nonconvergence of the RPE on the real-energy axis. Hence, the intrinsic mobility μ(E) as a function of the carrier energy E is deduced to be given by μ(E) = μ0 exp[-exp((|E| - Ec)/Δ)], where Ec is a nominal 'mobility edge' and Δ is the width of the random site-energy distribution. Thus the mobility falls off sharply but continuously for |E| > Ec, in contradistinction to the notion of an abrupt 'mobility edge' proposed by Cohen et al. and Mott. Also, the calculated electrical conductivity shows a temperature dependence in qualitative agreement with experiments on disordered semiconductors.
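
A numerical reading of the mobility law (the parameter values below are hypothetical, chosen only to exhibit the falloff):

import math

def mobility(E, mu0=1.0, Ec=1.0, Delta=0.1):
    # mu(E) = mu0 * exp(-exp((|E| - Ec) / Delta))
    return mu0 * math.exp(-math.exp((abs(E) - Ec) / Delta))

# Sharp but continuous drop around the nominal mobility edge Ec:
for E in (0.8, 0.95, 1.0, 1.05, 1.2):
    print(E, mobility(E))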

Relevance:

90.00%

Publisher:

Abstract:

Background: The number of available structures of large multi-protein assemblies is quite small. Such structures provide phenomenal insights into the organization, mechanism of formation and functional properties of the assembly, so detailed analysis of such structures is highly rewarding. However, the common problem in such analyses is the low resolution of these structures. In recent times a number of attempts that combine low-resolution cryo-EM data with higher-resolution structures determined using X-ray analysis or NMR, or generated using comparative modeling, have been reported. Even in such attempts, the best result one arrives at is a very coarse idea of the assembly structure in terms of a trace of the C alpha atoms, which are modeled with modest accuracy. Methodology/Principal Findings: In this paper we first present an objective approach to identify potentially solvent-exposed and buried residues solely from the positions of C alpha atoms and the amino acid sequence, using residue type-dependent thresholds for the accessible surface areas of C alpha atoms. We extend the method further to recognize potential protein-protein interface residues. Conclusion/Significance: Our approach to identify buried and exposed residues solely from the positions of C alpha atoms resulted in an accuracy of 84%, sensitivity of 83-89% and specificity of 67-94%, while recognition of interfacial residues corresponded to an accuracy of 94%, sensitivity of 70-96% and specificity of 58-94%. Interestingly, detailed analysis of cases of mismatch between recognition of interface residues from C alpha positions and from all-atom models suggested that recognition of interfacial residues using C alpha atoms only corresponds better with the intuitive notion of an interfacial residue. Our method should be useful in the objective analysis of structures of protein assemblies when only C alpha positions are available, as, for example, in the integration of cryo-EM data with high-resolution structures of the components of the assembly.
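
A minimal sketch of the classification step (the residue type-dependent thresholds below are placeholders; the paper derives its own cutoffs, which are not reproduced here):

# Hypothetical per-residue ASA cutoffs (square angstroms) for C alpha.
ASA_CUTOFF = {'ALA': 20.0, 'LEU': 25.0, 'LYS': 40.0}  # illustrative only

def classify_residue(res_type, calpha_asa):
    # Label a residue exposed or buried from its C alpha accessible
    # surface area, using a residue type-dependent threshold.
    cutoff = ASA_CUTOFF.get(res_type, 25.0)  # assumed fallback value
    return 'exposed' if calpha_asa >= cutoff else 'buried'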

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we present numerical evidence that supports the notion of minimization in the sequence space of proteins for a target conformation. We use the conformations of real proteins in the Protein Data Bank (PDB) and present computationally efficient methods to identify the sequences with minimum energy. We use an edge-weighted connectivity graph for ranking the residue sites with a reduced amino acid alphabet, and then use continuous optimization to obtain the energy-minimizing sequences. Our methods enable the computation of a lower bound as well as a tight upper bound for the energy of a given conformation. We validate our results by using three different inter-residue energy matrices for five proteins from the PDB, and by comparing our energy-minimizing sequences with 80 million diverse sequences generated based on different considerations in each case. When we submitted some of our chosen energy-minimizing sequences to the Basic Local Alignment Search Tool (BLAST), we obtained some sequences from the non-redundant protein sequence database that are similar to ours, with an E-value of the order of 10^-7. In summary, we conclude that proteins show a trend towards minimizing energy in the sequence space but do not seem to adopt the global energy-minimizing sequence. The reason for this could be either that the existing energy matrices are not able to accurately represent the inter-residue interactions in the context of the protein environment, or that Nature does not push the optimization in sequence space once the protein is able to perform its function.
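
The scoring step can be sketched as follows: the energy of a sequence threaded onto a fixed conformation is a sum of contact energies over the structure's contact graph (the energy matrix and contact graph are assumed inputs, and the paper's site-ranking and continuous-optimization steps are omitted):

def sequence_energy(seq, contacts, e):
    # seq: list of residue types; contacts: (i, j) index pairs from the
    # conformation; e: inter-residue energy matrix as a dict of dicts.
    return sum(e[seq[i]][seq[j]] for i, j in contacts)

def best_residue_at(site, seq, contacts, e, alphabet):
    # Greedy illustration of descending in sequence space at one site.
    trials = [(sequence_energy(seq[:site] + [a] + seq[site + 1:],
                               contacts, e), a) for a in alphabet]
    return min(trials)[1]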

Relevance:

90.00%

Publisher:

Abstract:

A half-duplex constrained non-orthogonal cooperative multiple access (NCMA) protocol suitable for transmission of information from N users to a single destination in a wireless fading channel is proposed. Transmission in this protocol comprises a broadcast phase and a cooperation phase. In the broadcast phase, each user takes its turn broadcasting its data to all other users and the destination, orthogonally in time. In the cooperation phase, each user transmits a linear function of what it received from all other users as well as its own data. In the orthogonal extension of cooperative relay protocols to cooperative multiple access channels, at any point of time only one user is considered a source and all the other users behave as relays that do not transmit their own data; in contrast, the NCMA protocol relaxes the orthogonality built into those protocols and hence allows for a more spectrally efficient usage of resources. Code design criteria for achieving full diversity of N in the NCMA protocol are derived using pairwise error probability (PEP) analysis, and it is shown that this can be achieved with a minimum total time duration of 2N - 1 channel uses. Explicit construction of full-diversity codes is then provided for an arbitrary number of users. Since the maximum-likelihood decoding complexity grows exponentially with the number of users, the notion of g-group decodable codes is introduced for our setup and a set of necessary and sufficient conditions is also obtained.
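
Schematically, one round of the protocol might be sketched as below (ideal noise-free reception and a particular slot allocation are assumed; the code design criteria from the PEP analysis are not modeled):

import numpy as np

def ncma_round(user_data, weights):
    # user_data: length-N vector of user symbols; weights: N x N matrix
    # whose row k defines the linear function user k transmits.
    N = len(user_data)
    # Broadcast phase: user k sends its own symbol in slot k (N slots).
    broadcast = list(user_data)
    # Cooperation phase: N - 1 further slots, each carrying a linear
    # function of everything the transmitting user has heard plus its
    # own data.
    cooperation = [float(weights[k] @ user_data) for k in range(1, N)]
    return broadcast + cooperation  # 2N - 1 channel uses in total

x = np.array([1.0, -1.0, 1.0])
W = np.eye(3) + 0.5 * np.ones((3, 3))
print(len(ncma_round(x, W)))  # 5 = 2 * 3 - 1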

Relevance:

90.00%

Publisher:

Abstract:

A major concern of embedded system architects is design for low power. We address one aspect of the problem in this paper, namely the effect of executable code compression. There are two benefits of code compression: first, a reduction in the memory footprint of embedded software, and second, a potential reduction in memory bus traffic and power consumption. Since decompression has to be performed at run time, it is achieved in hardware. We describe a tool called COMPASS which can evaluate a range of strategies for any given set of benchmarks and display compression ratios. Also, given an execution trace, it can compute the effect on bus toggles and cache misses for a range of compression strategies. The tool is interactive and allows the user to vary a set of parameters and observe their effect on performance. We describe an implementation of the tool and demonstrate its effectiveness. To the best of our knowledge this is the first tool proposed for such a purpose.
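
The core measurement such a tool reports can be sketched as follows (zlib stands in for a candidate strategy; COMPASS's actual strategies and its bus-toggle and cache-miss models are not reproduced):

import zlib

def compression_ratio(code_bytes):
    # Ratio of compressed to original size; smaller means a smaller
    # memory footprint for the executable.
    return len(zlib.compress(code_bytes)) / len(code_bytes)

data = b'example executable bytes ' * 100  # stand-in for a code segment
print(compression_ratio(data))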

Relevance:

90.00%

Publisher:

Abstract:

The keyword-based search technique suffers from the problems of synonymic and polysemic queries. Current approaches address only the problem of synonymic queries, in which different queries may have the same information requirement. The problem of polysemic queries, i.e., the same query having different intentions, remains unaddressed. In this paper, we propose the notion of intent clusters, the members of which share the same intention. We develop a clustering algorithm that uses the user session information in query logs, in addition to query-URL entries, to identify clusters of queries having the same intention. The proposed approach has been studied through case examples from actual AOL log data, and the clustering algorithm is shown to be successful in discerning user intentions.
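
A minimal sketch of the grouping idea, clustering queries that share clicked URLs (the paper's session-based algorithm is richer than this single-link illustration, and the example log entries are invented):

def intent_clusters(log):
    # log: list of (query, clicked_url) pairs -> list of query clusters.
    clusters = []  # each cluster: (set of queries, set of URLs)
    for query, url in log:
        merged, rest = ({query}, {url}), []
        for qs, urls in clusters:
            # Merge clusters that share a URL or already hold the query.
            if url in urls or query in qs:
                merged = (merged[0] | qs, merged[1] | urls)
            else:
                rest.append((qs, urls))
        clusters = rest + [merged]
    return [qs for qs, _ in clusters]

log = [('jaguar speed', 'wiki/Jaguar'), ('jaguar car', 'jaguar.com'),
       ('big cat speed', 'wiki/Jaguar')]
print(intent_clusters(log))  # the two 'wiki/Jaguar' queries group together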

Relevance:

90.00%

Publisher:

Abstract:

An analytical treatment of the performance analysis of guidance laws is possible only in simplistic scenarios; as the complexity of the guidance system increases, a search for analytical solutions becomes quite impractical. In this paper, a new performance measure, based upon the notion of a timescale gap that can be computed through numerical simulations, is developed for the performance analysis of guidance laws. Finite-time Lyapunov exponents are used to define the timescale gap. It is shown that the timescale gap can be used to quantify the rate of convergence of trajectories to the collision course. A comparison between several guidance laws, based on the timescale gap, is presented. Realistic simulations to study the effect of aerodynamics and atmospheric variations on the timescale gap of these guidance laws are also presented.
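
The underlying quantity can be sketched as a finite-time Lyapunov exponent estimated from a perturbed twin trajectory (the guidance-loop dynamics f and the construction of the timescale gap from such exponents are assumed, not reproduced):

import numpy as np

def ftle(f, x0, T, dt=1e-3, eps=1e-8):
    # Largest finite-time Lyapunov exponent over horizon T for x' = f(x),
    # from the growth of an initially eps-separated twin trajectory.
    x = np.array(x0, dtype=float)
    y = x.copy()
    y[0] += eps
    for _ in range(int(T / dt)):  # forward Euler, for illustration only
        x = x + dt * f(x)
        y = y + dt * f(y)
    return np.log(np.linalg.norm(y - x) / eps) / T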

Relevance:

90.00%

Publisher:

Abstract:

In order to study the elastic behaviour of matter subjected to very large pressures, such as occur, for example, in the interior of the earth, and to provide an explanation for phenomena like earthquakes, it is essential to be able to calculate the values of the elastic constants of a substance under a state of large initial stress in terms of the elastic constants of a natural or stress-free state. An attempt has been made in this paper to derive expressions for these quantities for a substance of cubic symmetry on the basis of the non-linear theory of elasticity, including up to cubic powers of the strain components in the strain energy function. A simple method of deriving them directly from the energy function itself has been indicated for any general case, and the same has been applied to the case of hydrostatic compression. The notion of an effective elastic energy, the energy required to effect an infinitesimal deformation over a state of finite strain, has been introduced, the coefficients in this expression being the effective elastic constants. A separation of this effective energy function into normal coordinates has been given for the particular case of cubic symmetry, and it has been pointed out that when any of the coefficients in this normal form becomes negative, elastic instability sets in, with an associated release of energy.
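
In the terms of the paper's own description (the symbols here are assumed, not quoted), the effective elastic constants are the quadratic coefficients of the energy of an infinitesimal deformation superposed on a finite strain \bar{\eta}:

\tilde{c}_{ijkl} = \left. \frac{\partial^{2} \Phi}{\partial \eta_{ij}\, \partial \eta_{kl}} \right|_{\eta = \bar{\eta}},
\qquad
\delta\Phi = \tfrac{1}{2}\, \tilde{c}_{ijkl}\, \delta\eta_{ij}\, \delta\eta_{kl},

with \Phi the strain energy taken to cubic order in the strain \eta. Elastic instability sets in when a coefficient of the normal-coordinate form of \delta\Phi becomes negative.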