Abstract:
We study the quenching dynamics of a many-body system in one dimension described by a Hamiltonian that has spatial periodicity. Specifically, we consider a spin-1/2 chain with equal $xx$ and $yy$ couplings and subject to a periodically varying magnetic field in the $\hat{z}$ direction or, equivalently, a tight-binding model of spinless fermions with a periodic local chemical potential, having period $2q$, where $q$ is a positive integer. For a linear quench of the strength of the magnetic field (or chemical potential) at a rate $1/\tau$ across a quantum critical point, we find that the density of defects thereby produced scales as $1/\tau^{q/(q+1)}$, deviating from the $1/\sqrt{\tau}$ scaling that is ubiquitous in a range of systems. We analyze this behavior by mapping the low-energy physics of the system to a set of fermionic two-level systems labeled by the lattice momentum $k$ undergoing a nonlinear quench as well as by performing numerical simulations. We also show that if the magnetic field is a superposition of different periods, the power law depends only on the smallest period for very large values of $\tau$, although it may exhibit a crossover at intermediate values of $\tau$. Finally, for the case where a $zz$ coupling is also present in the spin chain, or equivalently, where interactions are present in the fermionic system, we argue that the power associated with the scaling law depends on a combination of $q$ and the interaction strength.
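A compact way to see where this exponent comes from is the adiabatic-impulse (Landau-Zener) estimate for each momentum mode; the crossover function $F$ below is an illustrative placeholder, not the paper's exact expression:

```latex
% Excitation probability of the two-level system at momentum k, assuming
% the scaling form suggested by the effective nonlinear quench:
p_k \;\simeq\; F\!\left(\tau^{q}\,k^{\,q+1}\right),
\qquad F(u) \to 0 \ \ \text{for}\ u \gg 1 .
% Integrating over modes and substituting u = \tau^{q/(q+1)} k gives the
% defect density
n \;\propto\; \int_0^{\Lambda} dk\; p_k
  \;=\; \tau^{-q/(q+1)} \int_0^{\Lambda\,\tau^{q/(q+1)}} du\; F\!\left(u^{\,q+1}\right)
  \;\sim\; \tau^{-q/(q+1)} .
```

For $q = 1$ this collapses to the standard Landau-Zener form ($p_k$ decaying in $\tau k^2$) and reproduces the ubiquitous $1/\sqrt{\tau}$ law.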
Abstract:
Points-to analysis is a key compiler analysis. Several memory-related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it to propagate more and more points-to information across its subset edges. So far, the structure of the constraint graph has been exploited only trivially for efficient propagation of information, e.g., in identifying cyclic components or in propagating information in topological order. We perform a careful study of its structure and propose a new inclusion-based, flow-insensitive, context-sensitive points-to analysis algorithm based on the notion of dominant pointers. We also propose a new kind of pointer equivalence based on dominant pointers, which provides significantly more opportunities for reducing the number of pointers tracked during the analysis. Based on this hitherto unexplored form of pointer equivalence, we develop a new context-sensitive, flow-insensitive points-to analysis algorithm that uses incremental dominator updates to compute points-to information efficiently. Using a large suite of programs consisting of the SPEC 2000 benchmarks and five large open-source programs, we show that our points-to analysis is 88% faster than BDD-based Lazy Cycle Detection and 2x faster than Deep Propagation. We argue that our approach of detecting dominator-based pointer equivalence is key to improving points-to analysis efficiency.
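For context, the inclusion-based propagation that such algorithms optimize can be sketched in a few lines. The following is a minimal Andersen-style worklist solver in Python; the data structures are ours for illustration, and the paper's dominant-pointer and context-sensitivity machinery is not shown:

```python
from collections import defaultdict, deque

def andersen(addr_of, copies, loads, stores):
    """Minimal inclusion-based (Andersen-style) points-to analysis.

    addr_of: iterable of (p, x) for p = &x
    copies:  iterable of (p, q) for p = q    (subset edge q -> p)
    loads:   iterable of (p, q) for p = *q
    stores:  iterable of (p, q) for *p = q
    """
    pts = defaultdict(set)        # variable -> points-to set
    succ = defaultdict(set)       # subset edges of the constraint graph
    work = deque()

    def add_edge(src, dst):
        # add subset edge src -> dst and propagate along it
        if dst not in succ[src]:
            succ[src].add(dst)
            if pts[src] - pts[dst]:
                pts[dst] |= pts[src]
                work.append(dst)

    for p, x in addr_of:
        pts[p].add(x)
        work.append(p)
    for p, q in copies:
        add_edge(q, p)

    while work:
        n = work.popleft()
        for p, q in loads:        # p = *q: every object in pts(q) flows to p
            if q == n:
                for o in list(pts[n]):
                    add_edge(o, p)
        for p, q in stores:       # *p = q: q flows into every object in pts(p)
            if p == n:
                for o in list(pts[n]):
                    add_edge(q, o)
        for m in list(succ[n]):   # propagate along existing subset edges
            if pts[n] - pts[m]:
                pts[m] |= pts[n]
                work.append(m)
    return pts
```

For example, `andersen([('p','a'), ('q','p')], [], [('r','q')], [])` resolves `r = *q` to `pts[r] == {'a'}`.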
Abstract:
Pervasive use of pointers in large-scale real-world applications continues to make points-to analysis an important optimization-enabler. The rapid growth of software systems demands a scalable pointer analysis algorithm. A typical inclusion-based points-to analysis iteratively evaluates constraints and computes a points-to solution until a fixpoint is reached. In each iteration, (i) points-to information is propagated across directed edges in a constraint graph G and (ii) more edges are added by processing the points-to constraints. We observe that prioritizing the order in which the information is processed within each of the above two steps can lead to efficient execution of the points-to analysis. While earlier work in the literature focuses only on the propagation order, we argue that the other dimension, that is, prioritizing the constraint processing, can lead to even greater improvements in how quickly the fixpoint of the points-to algorithm is reached. This becomes especially important as we prove that finding an optimal sequence for processing the points-to constraints is NP-Complete. The prioritization scheme proposed in this paper is general enough to be applied to any of the existing points-to analyses. Using the prioritization framework developed in this paper, we implement prioritized versions of Andersen's analysis, Deep Propagation, Hardekopf and Lin's Lazy Cycle Detection, and Bloom-filter-based points-to analysis. In each case, we report significant improvements in the analysis times (33%, 47%, 44%, and 20%, respectively) as well as in the memory requirements for a large suite of programs, including the SPEC 2000 benchmarks and five large open-source programs.
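The general shape of such a prioritization is a priority queue in place of the usual FIFO worklist of constraints. A minimal sketch follows; the priority function is left as a caller-supplied parameter, since the metric itself is the paper's contribution and is not reproduced here:

```python
import heapq
from itertools import count

def prioritized_worklist(constraints, priority):
    """A constraint worklist drained in priority order rather than FIFO.

    constraints: initial constraints
    priority:    function constraint -> sortable key (lower runs first)
    Returns (push, stream): push(c) enqueues a newly discovered
    constraint; stream yields constraints in priority order.
    """
    tie = count()  # stable tie-breaker so equal-priority items stay comparable
    heap = [(priority(c), next(tie), c) for c in constraints]
    heapq.heapify(heap)

    def push(c):
        heapq.heappush(heap, (priority(c), next(tie), c))

    def stream():
        while heap:
            yield heapq.heappop(heap)[2]

    return push, stream()
```

A solver would call `push, stream = prioritized_worklist(init, my_priority)` and then iterate `for c in stream: ...`, pushing any constraints discovered while processing `c`.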
Abstract:
While it is well known that extremely long low-density parity-check (LDPC) codes perform exceptionally well for error-correction applications, short-length codes are preferable in practical applications. However, short-length LDPC codes suffer from performance degradation owing to graph-based impairments such as short cycles, trapping sets, and stopping sets in the bipartite graph of the LDPC matrix. In particular, performance degradation at moderate to high $E_b/N_0$ is caused by oscillations in the bit-node a posteriori probabilities induced by short cycles and trapping sets in bipartite graphs. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high $E_b/N_0$. This algorithm makes use of the information generated by the belief propagation (BP) algorithm in previous iterations before a decoding failure occurs. Using this information, a reliability-based estimation is performed on each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain compared with BP decoding for LDPC codes with code rates equal to or less than 1/2. The coding gains are modest to significant in the case of regular LDPC codes optimised for bipartite-graph conditioning, whereas the coding gains are huge in the case of unoptimised codes. Hence, this algorithm is useful for relaxing some stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
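One simple instance of reusing earlier-iteration information, in the same spirit as the proposed estimator (the time-averaging rule below is our illustration, not the paper's exact method): keep the per-bit posterior LLRs from every BP iteration and, when decoding fails, re-decide each bit from its iteration-averaged LLR, which damps cycle-induced oscillations.

```python
import numpy as np

def bp_with_fallback(bp_iteration, check_syndrome, max_iter=50):
    """Run BP; on failure, re-decide bits from iteration-averaged LLRs.

    bp_iteration:   function(state) -> (state, llr_post) performing one
                    BP pass over the bipartite graph; first call gets None
    check_syndrome: function(hard_bits) -> True if all checks are satisfied
    """
    state, history = None, []
    for _ in range(max_iter):
        state, llr_post = bp_iteration(state)
        history.append(llr_post)
        hard = (llr_post < 0).astype(np.uint8)   # LLR < 0 -> bit 1
        if check_syndrome(hard):
            return hard, True                    # BP converged
    # Decoding failure: bits oscillating across iterations average toward
    # low reliability; decide from the mean posterior LLR instead.
    avg = np.mean(history, axis=0)
    hard = (avg < 0).astype(np.uint8)
    return hard, check_syndrome(hard)
```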
Abstract:
In order to reduce motion artifacts in digital subtraction angiography (DSA), non-rigid image registration is commonly applied before subtracting the mask image from the contrast image. Since DSA registration requires a set of spatially non-uniform control points, a conventional MRF model is not very efficient. In this paper, we introduce the concept of pivotal and non-pivotal control points to address this, and propose a non-uniform MRF for DSA registration. We use quad-trees in a novel way to generate the non-uniform grid of control points. Our MRF formulation produces a smooth displacement field and therefore results in better artifact reduction than registering the control points independently. We achieve improved computational performance using pivotal control points without compromising the artifact reduction. We have tested our approach on several clinical data sets, and we present the results of quantitative analysis, clinical assessment, and performance measurements on a GPU.
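The quad-tree idea can be sketched as recursive subdivision of the difference image, refining only where the residual motion signal is strong. The split criterion and thresholds below are illustrative stand-ins for the paper's:

```python
import numpy as np

def quadtree_control_points(diff, min_size=16, thresh=10.0):
    """Place control points on a non-uniform grid via quad-tree subdivision.

    diff:     |contrast - mask| difference image (2-D array)
    min_size: smallest allowed block edge in pixels
    thresh:   split a block when its intensity std-dev exceeds this
    Returns a list of (row, col) control-point locations (block centres).
    """
    points = []

    def split(r0, r1, c0, c1):
        block = diff[r0:r1, c0:c1]
        if min(r1 - r0, c1 - c0) <= min_size or block.std() <= thresh:
            points.append(((r0 + r1) // 2, (c0 + c1) // 2))
            return
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        split(r0, rm, c0, cm); split(r0, rm, cm, c1)
        split(rm, r1, c0, cm); split(rm, r1, cm, c1)

    split(0, diff.shape[0], 0, diff.shape[1])
    return points
```

Regions with little motion residue end as large blocks (few control points), while artifact-prone regions are refined, which is what makes the resulting grid non-uniform.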
Abstract:
Dialkyl succinates show a pattern of alternating behavior in their melting points as the number of C atoms in the alkane side chain increases, unlike the dialkyl oxalates [Joseph et al. (2011), Acta Cryst. B67, 525-534]. Dialkyl succinates with odd numbers of C atoms in the alkyl side chain show higher melting points than the immediately adjacent analogues with even numbers. The crystal structures and their molecular packing have been analyzed for a series of dialkyl succinates with 1-4 C atoms in the alkyl side chain. The energy difference (ΔE) between the optimized and observed molecular conformations, the density, the Kitaigorodskii packing index (KPI) and the C-H···O interactions are considered to rationalize this behavior. In contrast to the dialkyl oxalates, where a larger number of moderately strong C-H···O interactions was characteristic of oxalates with elevated melting points, here the molecular packing and the density play the major role in raising the melting point. On moving from oxalate to succinate esters, the introduction of the C2 spacer adds two activated H atoms to the asymmetric unit, resulting in the formation of stronger C-H···O hydrogen bonds in all succinates. As a result, the crystallinity of long-chain alkyl-substituted esters improves enormously in the presence of hydrogen bonds from activated donors.
Abstract:
Two Chrastil-type expressions have been developed to model the solubility of supercritical fluids/gases in liquids. The proposed three-parameter expressions correlate the solubility as a function of temperature, pressure and density. The equation can also be used to check the self-consistency of experimental liquid-phase composition data for supercritical fluid-liquid equilibria. Fifty-three different binary systems (carbon dioxide + liquid) comprising around 2700 data points, encompassing a wide range of compounds such as esters, alcohols, carboxylic acids and ionic liquids, were successfully modeled over a wide range of temperatures and pressures. Besides the test for self-consistency, based on data at one temperature, the model can be used to predict the solubility of supercritical fluids in liquids at other temperatures.
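For reference, the classic Chrastil equation that such three-parameter expressions build on ties solubility to solvent density and temperature (the paper's generalized forms may differ in detail):

```latex
S \;=\; \rho^{\,k} \exp\!\left(\frac{a}{T} + b\right)
\qquad\Longleftrightarrow\qquad
\ln S \;=\; k \ln \rho + \frac{a}{T} + b ,
```

with $S$ the solubility, $\rho$ the density, $T$ the absolute temperature, and $(k, a, b)$ the three fitted parameters; the linearity of $\ln S$ in $\ln \rho$ at fixed $T$ is the kind of relationship that underlies the self-consistency test described above.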
Abstract:
Visual tracking is an important task in various computer vision applications, including visual surveillance, human-computer interaction, event detection, and video indexing and retrieval. Recent state-of-the-art sparse representation (SR) based trackers show better robustness than many other existing trackers. One of the issues with these SR trackers is low execution speed. The particle filter framework is one of the major factors responsible for slow execution, and it is common to most existing SR trackers. In this paper, we propose a robust interest-point-based tracker in an $\ell_1$ minimization framework that runs in real time with performance comparable to the state-of-the-art trackers. In the proposed tracker, the target dictionary is obtained from the patches around target interest points. Next, the interest points from the candidate window of the current frame are obtained. The correspondence between target and candidate points is obtained by solving the proposed $\ell_1$ minimization problem. In order to prune noisy matches, a robust matching criterion is proposed, where only the reliable candidate points that mutually match with the target and candidate dictionary elements are considered for tracking. The object is localized by measuring the displacement of these interest points. The reliable candidate patches are used for updating the target dictionary. The performance and accuracy of the proposed tracker are benchmarked with several complex video sequences. The tracker is found to be considerably faster than the reported state-of-the-art trackers. The proposed tracker is further evaluated for various local patch sizes, numbers of interest points and regularization parameters. The performance of the tracker under various challenges, including illumination change, occlusion and background clutter, has been quantified with a benchmark dataset containing 50 videos.
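The core matching step, coding a candidate patch over the target dictionary by $\ell_1$-regularized least squares, can be sketched with an off-the-shelf Lasso solver; the dictionary construction, mutual-match pruning and update logic are elided, and all names here are ours:

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_match(D, y, lam=0.01):
    """Code candidate patch y over target dictionary D by l1 minimization.

    Solves (in sklearn's scaling) min_x ||y - D x||_2^2 / (2 d) + lam ||x||_1,
    where D is (d, m) with one vectorised target patch per column and
    y is a (d,) vectorised candidate patch.
    Returns the index of the dominant atom and the full sparse code.
    """
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    model.fit(D, y)
    x = model.coef_                 # sparse coefficients, shape (m,)
    return int(np.argmax(np.abs(x))), x
```

A candidate point would then be retained only if its best target atom, coded in the reverse direction, selects it back, in the spirit of the mutual-match criterion described above.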
Abstract:
Fringe tracking and fringe order assignment have become central topics of current research in digital photoelasticity. Isotropic points (IPs) appearing in low fringe order zones are often either overlooked or entirely missed in conventional as well as digital photoelasticity. We aim to highlight image processing for characterizing IPs in an isochromatic fringe field. By resorting to a global analytical solution for a circular disk, the sensitivity of IPs to small changes in the far-field loading on the disk is highlighted. A local theory supplements the global closed-form solutions for the three-, four-, and six-point loading configurations of a circular disk. The local theoretical concepts developed in this paper are demonstrated through digital image analysis of isochromatics in circular disks subjected to three- and four-point loads.
Abstract:
The inverted pendulum is a popular model for describing bipedal dynamic walking. The operating point of the walker can be specified by the combination of initial mid-stance velocity ($v_0$) and step angle ($\phi_m$) chosen for a given walk. In this paper, using basic mechanics, a framework of physical constraints that limit the choice of operating points is proposed. The constraint lines thus obtained delimit the allowable region of operation of the walker in the $v_0$-$\phi_m$ plane. A given average forward velocity $v_{x,\mathrm{avg}}$ can be achieved by several combinations of $v_0$ and $\phi_m$. Only one of these combinations results in the minimum mechanical power consumption and can be considered the optimum operating point for the given $v_{x,\mathrm{avg}}$. This paper proposes a method for obtaining this optimal operating point based on tangency of the power and velocity contours. Putting together all such operating points for various $v_{x,\mathrm{avg}}$, a family of optimum operating points, called the optimal locus, is obtained. For the energy loss and internal energy models chosen, the optimal locus obtained has a largely constant step angle with increasing speed but tapers off at non-dimensional speeds close to unity.
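The tangency construction is a standard constrained-minimization condition. In the notation above (with $P$ the average mechanical power, $v^{*}$ the prescribed speed, and $\lambda$ a Lagrange multiplier, all our labels), the optimum operating point satisfies:

```latex
\min_{v_0,\;\phi_m} P(v_0, \phi_m)
\quad \text{subject to} \quad v_{x,\mathrm{avg}}(v_0, \phi_m) = v^{*}
\;\;\Longrightarrow\;\;
\nabla P(v_0, \phi_m) \;=\; \lambda\, \nabla v_{x,\mathrm{avg}}(v_0, \phi_m) ,
```

i.e., the power contour through the operating point touches the velocity contour there; sweeping $v^{*}$ over the allowable region traces out the optimal locus.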