92 results for Incremental discretization

at Indian Institute of Science - Bangalore - India


Relevance:

20.00%

Publisher:

Abstract:

The paper presents a new adaptive delta modulator, called the hybrid constant factor incremental delta modulator (HCFIDM), which uses instantaneous as well as syllabic adaptation of the step size. Three instantaneous algorithms have been used: two new instantaneous algorithms (CFIDM-3 and CFIDM-2) and the third, Song's voice ADM (SVADM). The quantisers have been simulated on a digital computer and their performances studied. The figure of merit used is the SNR with correlated, RC-shaped Gaussian signals and real speech as the input. The results indicate that the hybrid technique is superior to the nonhybrid adaptive quantisers. Also, the two new instantaneous algorithms developed have improved SNR and fast response to step inputs as compared to the earlier systems.
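To make the idea of combining instantaneous and syllabic adaptation concrete, here is a minimal Python sketch. It is not the published HCFIDM design: the step size is simply the product of a fast constant-factor (instantaneous) multiplier and a slow (syllabic) gain tracking the average step activity, and all constants, limits and names are illustrative assumptions.

```python
import numpy as np

def hybrid_adm(x, k_up=1.5, k_down=1/1.5, alpha=0.99, g_min=0.02, g_max=1.0):
    """Toy hybrid adaptive delta modulator (illustrative only, not the HCFIDM):
    step size = fast instantaneous multiplier * slow syllabic gain."""
    inst, syll, y = 1.0, 0.1, 0.0
    prev_bit = 1
    recon = np.empty_like(x)
    for n, sample in enumerate(x):
        bit = 1 if sample >= y else -1
        # instantaneous adaptation: consecutive equal bits indicate slope overload
        inst = min(max(inst * (k_up if bit == prev_bit else k_down), 0.5), 8.0)
        # syllabic adaptation: slow envelope of the instantaneous activity
        syll = min(max(alpha * syll + (1 - alpha) * inst * g_min, g_min), g_max)
        step = inst * syll            # hybrid step size
        y += bit * step               # integrate the 1-bit output
        recon[n] = y
        prev_bit = bit
    return recon

if __name__ == "__main__":
    t = np.linspace(0, 0.1, 800)          # roughly 8 kHz sampling
    x = np.sin(2 * np.pi * 300 * t)
    recon = hybrid_adm(x)
    snr = 10 * np.log10(np.sum(x**2) / np.sum((x - recon)**2))
    print(f"toy SNR: {snr:.1f} dB")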

Relevance:

20.00%

Publisher:

Abstract:

In this paper, a new incremental algorithm for layout compaction is proposed. In addition to its linear time performance in terms of the number of rectangles in the layout, we describe how incremental compaction can serve as a useful feature of a layout editor, and we present the design of such an editor. In the design of the editor, we describe how arrays can be used to implement quadtrees that represent VLSI layouts. Such a representation provides fast data access and low storage requirements.
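As a hedged sketch of what an array-backed quadtree can look like, the snippet below stores a fixed-depth region quadtree in one flat list with heap-style index arithmetic. The layout, depth and helper names are assumptions for illustration and are not the editor's actual data structure.

```python
# Fixed-depth region quadtree stored in one flat array (heap-style indexing).
# Node i has children 4*i + 1 .. 4*i + 4; leaves hold rectangle ids.
# Illustrative sketch only, not the layout editor's actual representation.

DEPTH = 3                                  # quadtree depth (root is level 0)
NODES = (4 ** (DEPTH + 1) - 1) // 3
tree = [None] * NODES                      # each leaf: list of rectangle ids

def child(i, quadrant):
    """Index of the given child (0..3) of node i."""
    return 4 * i + 1 + quadrant

def locate(x, y, size=1.0):
    """Descend from the root to the leaf covering point (x, y) in [0,1)^2."""
    i, x0, y0 = 0, 0.0, 0.0
    for _ in range(DEPTH):
        size /= 2.0
        q = (1 if x >= x0 + size else 0) + (2 if y >= y0 + size else 0)
        x0 += size if q & 1 else 0.0
        y0 += size if q & 2 else 0.0
        i = child(i, q)
    return i

def insert(rect_id, x, y):
    """Register a rectangle (by its reference corner) in its leaf cell."""
    leaf = locate(x, y)
    if tree[leaf] is None:
        tree[leaf] = []
    tree[leaf].append(rect_id)

insert("R1", 0.10, 0.80)
insert("R2", 0.12, 0.81)
print(tree[locate(0.11, 0.80)])            # both rectangles share this leaf
```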

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a new algorithm for changing the step size of an instantaneous adaptive delta modulator. The present strategy is such that the step size at any sampling instant can increase or decrease by either of two constant factors or can remain the same, depending upon the combination of the three or four most recent output bits. The quantizer has been simulated on a digital computer, and its performance compared with other quantizers. The figure of merit used is the SNR with Gaussian signals as the input. The results indicate that the new design can give an improved SNR over a wider dynamic range and fast response to step inputs, as compared to the earlier systems.
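A minimal sketch of a step-size rule keyed on the three most recent output bits is given below; the specific factor table and constants are illustrative assumptions, not the paper's exact CFIDM rule.

```python
def next_step(step, bits, grow=2.0, shrink=0.5, step_min=1e-3, step_max=1.0):
    """Illustrative constant-factor step-size update keyed on the three most
    recent output bits (+1/-1). Not the paper's exact decision table."""
    b1, b2, b3 = bits[-3:]
    if b1 == b2 == b3:            # sustained slope: slope overload, grow fast
        factor = grow
    elif b2 == b3:                # slope just starting to build up: grow mildly
        factor = (grow + 1.0) / 2.0
    elif b1 == b2:                # slope just broke: hold the step size
        factor = 1.0
    else:                         # alternating bits: granular noise, shrink
        factor = shrink
    return min(max(step * factor, step_min), step_max)

step = 0.1
for history in ([+1, +1, +1], [+1, -1, +1], [-1, +1, +1]):
    step = next_step(step, history)
    print(history, round(step, 4))
```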

Relevance:

20.00%

Publisher:

Abstract:

Incremental semantic analysis in a programming environment based on Attribute Grammars is performed by an Incremental Attribute Evaluator (IAE). Current IAEs are either table-driven or make extensive use of graph structures to schedule reevaluation of attributes. A method of compiling an Ordered Attribute Grammar into mutually recursive procedures is proposed. These procedures form an optimal time Incremental Attribute Evaluator for the attribute grammar, which does not require any graphs or tables.
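The toy below only illustrates the flavour of "attribute evaluation compiled into mutually recursive procedures": one visit procedure per production, calling each other directly, with no dependency graphs or scheduling tables. It is not the incremental (change-propagating) evaluator the paper describes, and all names are assumed for illustration.

```python
# Synthesized attribute `val` for a tiny expression grammar, evaluated by
# per-production procedures that call each other directly (no graphs, no tables).

from dataclasses import dataclass

@dataclass
class Num:
    text: str

@dataclass
class Add:
    left: object
    right: object

def visit_Num(node):            # production: Expr -> number
    node.val = int(node.text)   # Expr.val := number.lexval

def visit_Add(node):            # production: Expr -> Expr '+' Expr
    visit(node.left)            # evaluate child attributes first
    visit(node.right)
    node.val = node.left.val + node.right.val

def visit(node):                # dispatcher standing in for generated code
    {"Num": visit_Num, "Add": visit_Add}[type(node).__name__](node)

tree = Add(Add(Num("1"), Num("2")), Num("4"))
visit(tree)
print(tree.val)                 # 7
```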

Relevance:

20.00%

Publisher:

Abstract:

This paper looks at the complexity of four different incremental problems: (1) interval partitioning of a flow graph; (2) breadth first search (BFS) of a directed graph; (3) lexicographic depth first search (DFS) of a directed graph; and (4) constructing the postorder listing of the nodes of a binary tree. The last problem arises out of the need to incrementally compute the Sethi-Ullman (SU) ordering [1] of the subtrees of a tree after it has undergone changes of a given type. These problems are among those that claimed our attention in the process of designing algorithmic techniques for incremental code generation. BFS and DFS certainly have numerous other applications, but as far as our work is concerned, incremental code generation is the common thread linking these problems. The study of the complexity of these problems is done from two different perspectives. The theory of incremental relative lower bounds (IRLBs) is given in [2]. We use this theory to derive the IRLBs of the first three problems. Then we use the notion of a bounded incremental algorithm [4] to prove the unboundedness of the fourth problem with respect to the locally persistent model of computation. The lower bound result for lexicographic DFS is possibly the most interesting. In [5] the author considers lexicographic DFS to be a problem for which the incremental version may require recomputation of the entire solution from scratch. In that sense, our IRLB result provides further evidence for this possibility, with the proviso that the incremental DFS algorithms considered be ones that do not require too much preprocessing.

Relevance:

20.00%

Publisher:

Abstract:

A method is described for estimating the incremental angle and angular velocity of a spacecraft using integrated rate parameters with the help of a star sensor alone. The chief advantage of this method is that the measured stars need not be identified, whereas the identification of the stars is necessary in earlier methods. This proposed estimation can be carried out with all of the available measurements by a simple linear Kalman filter, albeit with a time-varying sensitivity matrix. The residuals of estimated angular velocity by the proposed spacecraft incremental-angle and angular velocity estimation method are as accurate as the earlier methods. This method also enables the spacecraft attitude to be reconstructed for mapping the stars into an imaginary unit sphere in the body reference frame, which will preserve the true angular separation of the stars. This will pave the way for identification of the stars using any angular separation or triangle matching techniques applied to even a narrow field of view sensor that is made to sweep the sky. A numerical simulation for inertial as well as Earth pointing spacecraft is carried out to establish the results.
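The paper's filter is multi-axis and uses a time-varying sensitivity matrix built from unidentified star measurements; the sketch below is only a one-axis toy with a constant measurement matrix, intended to show the shape of the incremental predict/update cycle for a state of incremental angle and angular rate. Every constant in it is an assumption.

```python
import numpy as np

# One-axis toy Kalman filter with state [incremental angle, angular rate].
# Purely illustrative; not the paper's star-sensor formulation.

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-rate process model
H = np.array([[1.0, 0.0]])                 # "measure" the incremental angle
Q = np.diag([1e-8, 1e-6])                  # process noise (assumed)
R = np.array([[1e-4]])                     # measurement noise (assumed)

x = np.zeros((2, 1))                       # [angle; rate] estimate
P = np.eye(2) * 1e-2

rng = np.random.default_rng(0)
true_rate = 0.02                           # rad/s
for k in range(1, 101):
    z = np.array([[true_rate * k * dt + rng.normal(0, 1e-2)]])
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated rate: {x[1, 0]:.4f} rad/s (true {true_rate})")
```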

Relevance:

20.00%

Publisher:

Abstract:

We present two online algorithms for maintaining a topological order of a directed acyclic graph as arcs are added, and detecting a cycle when one is created. Our first algorithm takes O(m^(1/2)) amortized time per arc and our second algorithm takes O(n^(2.5)/m) amortized time per arc, where n is the number of vertices and m is the total number of arcs. For sparse graphs, our O(m^(1/2)) bound improves the best previous bound by a factor of log n and is tight to within a constant factor for a natural class of algorithms that includes all the existing ones. Our main insight is that the two-way search method of previous algorithms does not require an ordered search, but can be more general, allowing us to avoid the use of heaps (priority queues). Instead, the deterministic version of our algorithm uses (approximate) median-finding; the randomized version of our algorithm uses uniform random sampling. For dense graphs, our O(n^(2.5)/m) bound improves the best previously published bound by a factor of n^(1/4) and a recent bound obtained independently of our work by a factor of log n. Our main insight is that graph search is wasteful when the graph is dense and can be avoided by searching the topological order space instead. Our algorithms extend to the maintenance of strong components, in the same asymptotic time bounds.
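For orientation, here is a small maintainer of an online topological order in the spirit of the simpler local-search-and-reorder schemes these results improve on (roughly the Pearce-Kelly approach), not the O(m^(1/2)) or O(n^(2.5)/m) algorithms of the abstract; class and method names are illustrative.

```python
class CycleError(Exception):
    pass

class IncrementalTopo:
    """Maintain a topological order under arc insertions by locally reordering
    the affected window (a simple scheme, not the paper's algorithms)."""
    def __init__(self, nodes):
        self.adj = {v: set() for v in nodes}
        self.index = {v: i for i, v in enumerate(nodes)}   # current order

    def add_arc(self, u, v):
        self.adj[u].add(v)
        lb, ub = self.index[v], self.index[u]
        if lb >= ub:                     # order already consistent
            return
        # forward from v and backward from u, restricted to the window [lb, ub]
        fwd = self._dfs(v, ub, forward=True)
        if u in fwd:
            self.adj[u].discard(v)
            raise CycleError(f"adding {u}->{v} creates a cycle")
        bwd = self._dfs(u, lb, forward=False)
        # reassign the window's positions: backward set first, then forward set
        affected = sorted(bwd, key=self.index.get) + sorted(fwd, key=self.index.get)
        slots = sorted(self.index[w] for w in affected)
        for w, s in zip(affected, slots):
            self.index[w] = s

    def _dfs(self, start, bound, forward):
        seen, stack = set(), [start]
        while stack:
            w = stack.pop()
            if w in seen:
                continue
            seen.add(w)
            if forward:
                nxt = [x for x in self.adj[w] if self.index[x] <= bound]
            else:
                nxt = [x for x in self.adj
                       if w in self.adj[x] and self.index[x] >= bound]
            stack.extend(nxt)
        return seen

topo = IncrementalTopo(["a", "b", "c", "d"])
for arc in [("c", "d"), ("d", "a"), ("a", "b")]:
    topo.add_arc(*arc)
print(sorted(topo.index, key=topo.index.get))   # ['c', 'd', 'a', 'b']
```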

Relevance:

20.00%

Publisher:

Abstract:

We present four new reinforcement learning algorithms based on actor-critic and natural-gradient ideas, and provide their convergence proofs. Actor-critic reinforcement learning methods are online approximations to policy iteration in which the value-function parameters are estimated using temporal difference learning and the policy parameters are updated by stochastic gradient descent. Methods based on policy gradients in this way are of special interest because of their compatibility with function approximation methods, which are needed to handle large or infinite state spaces. The use of temporal difference learning in this way is of interest because in many applications it dramatically reduces the variance of the gradient estimates. The use of the natural gradient is of interest because it can produce better conditioned parameterizations and has been shown to further reduce variance in some cases. Our results extend prior two-timescale convergence results for actor-critic methods by Konda and Tsitsiklis by using temporal difference learning in the actor and by incorporating natural gradients, and they extend prior empirical studies of natural actor-critic methods by Peters, Vijayakumar and Schaal by providing the first convergence proofs and the first fully incremental algorithms.
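To show the incremental update pattern (TD-learning critic, policy-gradient actor), here is a vanilla one-step actor-critic on an invented two-state MDP. It omits the natural gradient and everything specific to the paper's four algorithms; the toy dynamics, step sizes and names are assumptions.

```python
import numpy as np

# Minimal one-step actor-critic on a toy 2-state, 2-action MDP.
# Illustrates the incremental updates only, not the paper's natural-gradient methods.

rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
theta = np.zeros((n_states, n_actions))   # actor: softmax policy parameters
v = np.zeros(n_states)                    # critic: state-value estimates
alpha_v, alpha_pi, gamma = 0.1, 0.01, 0.95

def step(s, a):
    """Toy dynamics: action 0 keeps the state, action 1 flips it; state 1 pays."""
    s2 = s if a == 0 else 1 - s
    r = 1.0 if s2 == 1 else 0.0
    return s2, r

s = 0
for t in range(5000):
    probs = np.exp(theta[s] - theta[s].max())
    probs /= probs.sum()
    a = rng.choice(n_actions, p=probs)
    s2, r = step(s, a)
    delta = r + gamma * v[s2] - v[s]           # TD error
    v[s] += alpha_v * delta                    # critic update
    grad_log = -probs
    grad_log[a] += 1.0                         # d log pi(a|s) / d theta[s]
    theta[s] += alpha_pi * delta * grad_log    # actor update
    s = s2

print("policy at state 0:", np.round(np.exp(theta[0]) / np.exp(theta[0]).sum(), 2))
```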

Relevance:

20.00%

Publisher:

Abstract:

A new structured discretization of 2D space, named X-discretization, is proposed to solve bivariate population balance equations using the framework of minimal internal consistency of discretization of Chakraborty and Kumar [2007, A new framework for solution of multidimensional population balance equations. Chem. Eng. Sci. 62, 4112-4125] for breakup and aggregation of particles. The 2D space of particle constituents (internal attributes) is discretized into bins by using arbitrarily spaced constant-composition radial lines and constant-mass lines of slope -1. The quadrilaterals are triangulated by using straight lines pointing towards the mean composition line. The monotonicity of the new discretization makes it quite easy to implement, like a rectangular grid but with significantly reduced numerical dispersion. We use the new discretization of space to automate the expansion and contraction of the computational domain for the aggregation process, corresponding to the formation of larger particles and the disappearance of smaller particles, by adding and removing the constant-mass lines at the boundaries. The results show that the predictions of particle size distribution on a fixed X-grid are in better agreement with the analytical solution than those obtained with the earlier techniques. The simulations carried out with expansion and/or contraction of the computational domain as the population evolves show that the proposed strategy of evolving the computational domain with the aggregation process brings down the computational effort quite substantially; the larger the extent of evolution, the greater the reduction in computational effort. (C) 2011 Elsevier Ltd. All rights reserved.
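A small sketch of how pivot points of such a grid can be generated, at the intersections of constant-composition radial lines and constant total-mass lines of slope -1, is given below. The spacings are illustrative assumptions, and the triangulation of the quadrilaterals toward the mean composition line is not shown.

```python
import numpy as np

# Pivot points of an X-type grid: intersections of constant-composition radial
# lines (fixed fraction of component 1) and constant total-mass lines x1 + x2 = m.
# Spacings below are illustrative; triangulation is omitted.

compositions = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # x1 / (x1 + x2)
masses = np.geomspace(1.0, 16.0, num=5)                 # x1 + x2 (geometric spacing)

pivots = [(phi * m, (1.0 - phi) * m)                    # (x1, x2) at each intersection
          for m in masses for phi in compositions]

for x1, x2 in pivots[:5]:
    print(f"x1 = {x1:5.2f}, x2 = {x2:5.2f}, total mass = {x1 + x2:5.2f}")
```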

Relevance:

20.00%

Publisher:

Abstract:

We present two online algorithms for maintaining a topological order of a directed n-vertex acyclic graph as arcs are added, and detecting a cycle when one is created. Our first algorithm handles m arc additions in O(m^(3/2)) time. For sparse graphs (m/n = O(1)), this bound improves the best previous bound by a logarithmic factor, and is tight to within a constant factor among algorithms satisfying a natural locality property. Our second algorithm handles an arbitrary sequence of arc additions in O(n^(5/2)) time. For sufficiently dense graphs, this bound improves the best previous bound by a polynomial factor. Our bound may be far from tight: we show that the algorithm can take Omega(n^2 2^(sqrt(2 lg n))) time by relating its performance to a generalization of the k-levels problem of combinatorial geometry. A completely different algorithm running in Theta(n^2 log n) time was given recently by Bender, Fineman, and Gilbert. We extend both of our algorithms to the maintenance of strong components, without affecting the asymptotic time bounds.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents an experimental study on damage assessment of reinforced concrete (RC) beams subjected to incremental cyclic loading. During testing, acoustic emissions (AEs) were recorded. The analysis of the released AE was carried out using the parameters relaxation ratio, load ratio and calm ratio. The digital image correlation (DIC) technique, with tracking by an available MATLAB program, was used to measure the displacement and surface strains in concrete. Earlier researchers classified the damage in RC beams using the Kaiser effect and crack mouth opening displacement, and proposed a standard. In general (or in practical situations), multiple cracks occur in reinforced concrete beams. In the present study, damage assessment in RC beams was carried out according to the different limit states specified by the code of practice IS-456:2000 and the AE technique. Based on the two ratios, namely the load ratio and the calm ratio, it was observed that the RC beams were heavily damaged when the deflection reached approximately 85% of the maximum allowable deflection. The combination of AE and DIC techniques has the potential to provide the state of damage in RC structures.

Relevance:

20.00%

Publisher:

Abstract:

Finite volume methods traditionally employ dimension by dimension extension of the one-dimensional reconstruction and averaging procedures to achieve spatial discretization of the governing partial differential equations on a structured Cartesian mesh in multiple dimensions. This simple approach based on tensor product stencils introduces an undesirable grid orientation dependence in the computed solution. The resulting anisotropic errors lead to a disparity in the calculations that is most prominent between directions parallel and diagonal to the grid lines. In this work we develop isotropic finite volume discretization schemes which minimize such grid orientation effects in multidimensional calculations by eliminating the directional bias in the lowest order term in the truncation error. Explicit isotropic expressions that relate the cell face averaged line and surface integrals of a function and its derivatives to the given cell area and volume averages are derived in two and three dimensions, respectively. It is found that a family of isotropic approximations with a free parameter can be derived by combining isotropic schemes based on next-nearest and next-next-nearest neighbors in three dimensions. Use of these isotropic expressions alone in a standard finite volume framework, however, is found to be insufficient in enforcing rotational invariance when the flux vector is nonlinear and/or spatially non-uniform. The rotationally invariant terms which lead to a loss of isotropy in such cases are explicitly identified and recast in a differential form. Various forms of flux correction terms which allow for a full recovery of rotational invariance in the lowest order truncation error terms, while preserving the formal order of accuracy and discrete conservation of the original finite volume method, are developed. Numerical tests in two and three dimensions attest to the superior directional attributes of the proposed isotropic finite volume method. Prominent anisotropic errors, such as the spurious asymmetric distortions of a circular reaction-diffusion wave that appear in the conventional finite volume implementation, are effectively suppressed through isotropic finite volume discretization. Furthermore, for a given spatial resolution, a striking improvement in the prediction of kinetic energy decay rate corresponding to a general two-dimensional incompressible flow field is observed with the use of an isotropic finite volume method instead of the conventional discretization. (C) 2014 Elsevier Inc. All rights reserved.
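A familiar finite-difference analogue of the directional-bias idea, not the paper's finite volume construction, is sketched below: the Fourier symbol of the standard 5-point Laplacian has a leading truncation error that varies with the direction of the wavevector, whereas the 9-point ("isotropic") stencil's error is nearly direction-independent at the same wavenumber magnitude. The grid spacing and wavenumber are arbitrary assumptions.

```python
import numpy as np

# Compare the Fourier symbols of two discrete Laplacians against the exact
# symbol -(kx^2 + ky^2) for the same |k| at two wavevector directions.
# Illustration of the isotropy concept only, not the paper's scheme.

def symbol_5pt(kx, ky, h):
    return (2 * np.cos(kx * h) + 2 * np.cos(ky * h) - 4.0) / h**2

def symbol_9pt(kx, ky, h):
    return (8 * np.cos(kx * h) + 8 * np.cos(ky * h)
            + 2 * np.cos((kx + ky) * h) + 2 * np.cos((kx - ky) * h)
            - 20.0) / (6.0 * h**2)

h, k = 0.1, 4.0                            # grid spacing and wavenumber magnitude
for angle in (0.0, np.pi / 4):
    kx, ky = k * np.cos(angle), k * np.sin(angle)
    exact = -(kx**2 + ky**2)
    e5 = symbol_5pt(kx, ky, h) - exact
    e9 = symbol_9pt(kx, ky, h) - exact
    print(f"angle {np.degrees(angle):4.0f} deg:  "
          f"5-pt error {e5:+.4f}   9-pt error {e9:+.4f}")
```

Running this shows the 5-point error changing substantially between the axis-aligned and diagonal directions, while the 9-point error stays nearly the same, which is the sense in which the leading error is made direction-independent.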