137 results for "large deflections analysis"


Relevance:

30.00%

Publisher:

Abstract:

Points-to analysis is a key compiler analysis. Several memory-related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it to propagate more and more points-to information across its subset edges. So far, the structure of the constraint graph has been exploited only trivially for efficient propagation of information, e.g., in identifying cyclic components or propagating information in topological order. We perform a careful study of its structure and propose a new inclusion-based flow-insensitive context-sensitive points-to analysis algorithm based on the notion of dominant pointers. We also propose a new kind of pointer equivalence based on dominant pointers, which provides significantly more opportunities for reducing the number of pointers tracked during the analysis. Based on this hitherto unexplored form of pointer equivalence, we develop a new context-sensitive flow-insensitive points-to analysis algorithm which uses incremental dominator updates to efficiently compute points-to information. Using a large suite of programs consisting of SPEC 2000 benchmarks and five large open source programs, we show that our points-to analysis is 88% faster than BDD-based Lazy Cycle Detection and 2x faster than Deep Propagation. We argue that our approach of detecting dominator-based pointer equivalence is key to improving points-to analysis efficiency.
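The inclusion-based propagation that the abstract builds on can be sketched as a worklist pass over the constraint graph. This is a minimal baseline sketch, not the dominant-pointer algorithm itself; all names are illustrative.

```python
# Minimal sketch of inclusion-based (Andersen-style) points-to propagation
# over a constraint graph. Subset edge p -> q encodes pts(p) ⊆ pts(q).
# Illustrative baseline only, not the dominant-pointer algorithm.
from collections import defaultdict, deque

def propagate(points_to, subset_edges):
    """Propagate initial points-to sets across subset edges to a fixpoint."""
    pts = defaultdict(set)
    for p, objs in points_to.items():
        pts[p] |= set(objs)
    succ = defaultdict(set)
    for p, q in subset_edges:
        succ[p].add(q)
    work = deque(pts)
    while work:
        p = work.popleft()
        for q in succ[p]:
            if not pts[p] <= pts[q]:      # new facts flow along the edge
                pts[q] |= pts[p]
                work.append(q)            # re-propagate from q
    return dict(pts)
```

For example, `propagate({'a': {'o1'}}, [('a', 'b'), ('b', 'c')])` transitively gives `c` the pointee `o1`. Cycle detection and topological ordering, mentioned above, are optimizations layered on exactly this loop.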


Pervasive use of pointers in large-scale real-world applications continues to make points-to analysis an important optimization-enabler. The rapid growth of software systems demands a scalable pointer analysis algorithm. A typical inclusion-based points-to analysis iteratively evaluates constraints and computes a points-to solution until a fixpoint is reached. In each iteration, (i) points-to information is propagated across directed edges in a constraint graph G and (ii) more edges are added by processing the points-to constraints. We observe that prioritizing the order in which the information is processed within each of the above two steps can lead to efficient execution of the points-to analysis. While earlier work in the literature focuses only on the propagation order, we argue that the other dimension, that is, prioritizing the constraint processing, can lead to even higher improvements in how fast the fixpoint of the points-to algorithm is reached. This becomes especially important as we prove that finding an optimal sequence for processing the points-to constraints is NP-Complete. The prioritization scheme proposed in this paper is general enough to be applied to any of the existing points-to analyses. Using the prioritization framework developed in this paper, we implement prioritized versions of Andersen's analysis, Deep Propagation, Hardekopf and Lin's Lazy Cycle Detection and Bloom Filter based points-to analysis. In each case, we report significant improvements in the analysis times (33%, 47%, 44% and 20% respectively) as well as in the memory requirements for a large suite of programs, including SPEC 2000 benchmarks and five large open source programs.
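The second dimension above, ordering the constraints themselves, can be sketched with a priority queue. The priority function here (resolve constraints whose base pointer has the largest points-to set first) is an illustrative stand-in, not the paper's actual scheme.

```python
# Sketch of prioritized constraint processing: load constraints are drawn
# from a priority queue instead of a FIFO list. The priority used here
# (larger points-to set of the base pointer first) is an illustrative
# stand-in for the paper's prioritization scheme.
import heapq

def process_constraints(pts, load_constraints):
    """Each constraint (dst, p) encodes dst ⊇ *p; resolving it against the
    current points-to sets yields new copy edges o -> dst, in priority order."""
    heap = []
    for i, (dst, p) in enumerate(load_constraints):
        # negate the size so that larger points-to sets are popped first;
        # i breaks ties deterministically
        heapq.heappush(heap, (-len(pts.get(p, ())), i, dst, p))
    edges = []
    while heap:
        _, _, dst, p = heapq.heappop(heap)
        for o in sorted(pts.get(p, ())):
            edges.append((o, dst))
    return edges
```

The point of the ordering is that resolving "big" constraints early exposes more new edges per iteration, so the fixpoint loop converges in fewer passes.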


Welding parameters like welding speed, rotation speed, plunge depth, shoulder diameter etc. influence the weld zone properties, microstructure of friction stir welds, and forming behavior of welded sheets in a synergistic fashion. The main aims of the present work are to (1) analyze the effect of welding speed, rotation speed, plunge depth, and shoulder diameter on the formation of internal defects during friction stir welding (FSW), (2) study the effect on axial force and torque during welding, (3) optimize the welding parameters for producing internal defect-free welds, and (4) propose and validate a simple criterion to identify defect-free weld formation. The base material used for FSW throughout the work is Al 6061T6 with a thickness of 2.1 mm. Only butt welding of sheets is considered in the present work. It is observed from the present analysis that higher welding speed, higher rotation speed, and higher plunge depth are preferred for producing a weld without internal defects. All the shoulder diameters used for FSW in the present work produced defect-free welds. The axial force and torque are not constant, and a large variation with respect to the FSW parameters is seen for defective welds. In the case of defect-free weld formation, the axial force and torque are relatively constant. A simple criterion, (∂τ/∂p)defective > (∂τ/∂p)defect-free and (∂F/∂p)defective > (∂F/∂p)defect-free, is proposed from this observation for identifying the onset of defect-free weld formation. Here F is the axial force, τ is the torque, and p is the welding speed, tool rotation speed or plunge depth. The same criterion is validated with respect to an Al 5xxx base material. Even in this case, the axial force and torque remained constant while producing defect-free welds.
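The criterion above can be evaluated numerically by comparing finite-difference slopes of torque and force against a welding parameter. The numbers below are illustrative, not measured FSW data.

```python
# Sketch of the proposed defect criterion: the variation of torque tau (and
# axial force F) with a welding parameter p is larger when the weld is
# defective. Derivatives are approximated by finite differences; the sample
# values used in the test are illustrative, not measured FSW data.
def mean_abs_slope(p, y):
    """Average |dy/dp| over consecutive finite differences."""
    slopes = [abs((y[i + 1] - y[i]) / (p[i + 1] - p[i])) for i in range(len(p) - 1)]
    return sum(slopes) / len(slopes)

def criterion_satisfied(p, tau_def, f_def, tau_ok, f_ok):
    """True if the defective series varies more than the defect-free series
    in both torque and axial force, i.e. the proposed inequality holds."""
    return (mean_abs_slope(p, tau_def) > mean_abs_slope(p, tau_ok)
            and mean_abs_slope(p, f_def) > mean_abs_slope(p, f_ok))
```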


Classification of a large document collection involves dealing with a huge feature space in which each distinct word is a feature. In such an environment, classification is costly in terms of both running time and computing resources. Moreover, considering every feature is likely to cause overfitting, and so does not guarantee optimal results. In this context, feature selection is inevitable. This work analyses feature selection methods, explores the relations among them, and attempts to find a minimal subset of features that are discriminative for document classification.
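The selection idea can be sketched by scoring each word by how differently it occurs across two classes and keeping only the top-scoring words. This is a simple stand-in for the feature selection methods the abstract analyses; the documents are illustrative.

```python
# Sketch of selecting discriminative features for document classification:
# words are scored by the difference in their document frequency between two
# classes and only the top-k are kept. A simple stand-in for the methods the
# abstract analyses; the documents in the test are illustrative.
from collections import Counter

def select_features(docs_pos, docs_neg, k):
    """Return the k words whose document frequencies differ most between classes."""
    def doc_freq(docs):
        df = Counter()
        for d in docs:
            df.update(set(d.split()))     # count each word once per document
        return df
    df_p, df_n = doc_freq(docs_pos), doc_freq(docs_neg)
    vocab = set(df_p) | set(df_n)
    score = {w: abs(df_p[w] / max(len(docs_pos), 1) - df_n[w] / max(len(docs_neg), 1))
             for w in vocab}
    # highest score first; alphabetical tie-break keeps the result deterministic
    return sorted(vocab, key=lambda w: (-score[w], w))[:k]
```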


Daily rainfall datasets of 10 years (1998-2007) of Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) version 6 and India Meteorological Department (IMD) gridded rain gauge data have been compared over the Indian landmass, on both large and small spatial scales. On the larger spatial scale, the pattern correlation between the two datasets on daily scales during individual years of the study period ranges from 0.4 to 0.7. The correlation improves significantly (~0.9) when the study is confined to specific wet and dry spells, each of about 5-8 days. Wavelet analysis of intraseasonal oscillations (ISO) of the southwest monsoon rainfall shows the percentage contribution of the two major modes (30-50 days and 10-20 days) to range between ~30-40% and ~5-10%, respectively, for the various years. Analysis of inter-annual variability shows the satellite data to be underestimating seasonal rainfall by ~110 mm during the southwest monsoon and overestimating it by ~150 mm during the northeast monsoon season. At high spatio-temporal scales, viz., a 1° × 1° grid, TMPA data do not correspond to the ground truth. We propose here a new analysis procedure to assess the minimum spatial scale at which the two datasets are compatible with each other. This is done by studying the contribution to total seasonal rainfall from different rainfall rate windows (at 1 mm intervals) on different spatial scales (at the daily time scale). The compatibility spatial scale is seen to be beyond the 5° × 5° average spatial scale over the Indian landmass. This will help to decide the usability of TMPA products, if averaged at appropriate spatial scales, for specific process studies, e.g., at cloud scale, meso scale or synoptic scale.
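The comparison step can be sketched as block-averaging two gridded daily-rainfall fields to a coarser spatial scale and then computing their pattern (Pearson) correlation. The grids here are illustrative, not TMPA or IMD data.

```python
# Sketch of the comparison idea: block-average two gridded rainfall fields to
# a coarser spatial scale, then compute the pattern (Pearson) correlation of
# the averaged grids. Grids in the test are illustrative, not TMPA/IMD data.
def block_average(grid, b):
    """Average b x b blocks of a 2-D list (dimensions assumed divisible by b)."""
    n, m = len(grid), len(grid[0])
    return [[sum(grid[i + di][j + dj] for di in range(b) for dj in range(b)) / (b * b)
             for j in range(0, m, b)] for i in range(0, n, b)]

def pattern_correlation(a, b):
    """Pearson correlation between two equally-sized 2-D fields."""
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Raising `b` corresponds to moving from the 1° × 1° grid toward the ~5° × 5° scale at which the abstract finds the datasets compatible.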


We present mathematical modelling and analysis of the deflections of reflection-grating-etched Si AFM cantilevers under different loading conditions. A simple analysis of the effect of the grating structures on cantilever deflection is carried out, with emphasis on optimizing the beam and gratings such that the maximum amount of diffracted light remains within the detector area.


The ability to perform strong updates is the main contributor to the precision of flow-sensitive pointer analysis algorithms. Traditional flow-sensitive pointer analyses cannot strongly update pointers residing in the heap. This is a severe restriction for Java programs. In this paper, we propose a new flow-sensitive pointer analysis algorithm for Java that can perform strong updates on heap-based pointers effectively. Instead of points-to graphs, we represent our points-to information as maps from access paths to sets of abstract objects. We have implemented our analysis and run it on several large Java benchmarks. The results show considerable improvement in precision over the points-to graph based flow-insensitive and flow-sensitive analyses, with reasonable running time.
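The representation described above, a map from access paths to sets of abstract objects, makes the strong/weak update distinction concrete: a strong update replaces the set, a weak update (when the updated location is not known to be unique) only adds to it. Paths and object names below are illustrative.

```python
# Sketch of strong vs weak updates on a points-to map keyed by access paths
# (e.g. "x.f") with sets of abstract objects as values. A strong update kills
# the old contents; a weak update merges. Names are illustrative.
def update(env, path, objs, strong):
    env = dict(env)                       # fresh abstract state (no aliasing)
    if strong:
        env[path] = set(objs)             # old pointees are killed
    else:
        env[path] = env.get(path, set()) | set(objs)
    return env
```

After `x.f = new A()` at a point where `x.f` denotes a single concrete location, the strong update discards stale pointees and is what drives the precision gain reported above.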


Large software systems are developed by composing multiple programs. If the programs manipulate and exchange complex data, such as network packets or files, it is essential to establish that they follow compatible data formats. Most of the complexity of data formats is associated with the headers. In this paper, we address compatibility of programs operating over headers of network packets, files, images, etc. As format specifications are rarely available, we infer the format associated with headers by a program as a set of guarded layouts. In terms of these formats, we define and check compatibility of (a) producer-consumer programs and (b) different versions of producer (or consumer) programs. A compatible producer-consumer pair is free of type mismatches and logical incompatibilities such as the consumer rejecting valid outputs generated by the producer. A backward compatible producer (resp. consumer) is guaranteed to be compatible with consumers (resp. producers) that were compatible with its older version. With our prototype tool, we identified 5 known bugs and 1 potential bug in (a) sender-receiver modules of Linux network drivers of 3 vendors and (b) different versions of a TIFF image library.
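A format modelled as a set of guarded layouts can be sketched as (guard, layout) pairs, with a layout given as a tuple of (field, type) pairs. The check below flags a producer-consumer pair as compatible only if every layout the producer can emit appears, under some guard, among the layouts the consumer accepts; this is a simplification of the paper's actual compatibility check, and the guards and fields are illustrative.

```python
# Sketch of compatibility over inferred header formats, each modelled as a
# set of guarded layouts: (guard, layout) pairs, a layout being a tuple of
# (field, type) pairs. Simplified from the paper: guards are kept as opaque
# strings and only layout coverage is checked.
def compatible(producer_format, consumer_format):
    """True if every layout the producer can emit is accepted by the consumer."""
    accepted = {layout for _guard, layout in consumer_format}
    return all(layout in accepted for _guard, layout in producer_format)
```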


The analysis of modulation schemes for the physical layer network-coded two-way relaying scenario is presented, which employs two phases: the multiple access (MA) phase and the broadcast (BC) phase. Depending on the signal set used at the end nodes, the minimum distance of the effective constellation seen at the relay becomes zero for a finite number of channel fade states referred to as the singular fade states. The singular fade states fall into the following two classes: (i) those which are caused by channel outage and whose harmful effect cannot be mitigated by adaptive network coding, called the non-removable singular fade states, and (ii) those which occur due to the choice of the signal set and whose harmful effects can be removed, called the removable singular fade states. In this paper, we derive an upper bound on the average end-to-end Symbol Error Rate (SER), with and without adaptive network coding at the relay, for a Rician fading scenario. It is shown that without adaptive network coding, at high Signal to Noise Ratio (SNR), the contribution to the end-to-end SER comes from the following error events, which fall as SNR^-1: the error events associated with the removable and non-removable singular fade states and the error event during the BC phase. In contrast, for the adaptive network coding scheme, the error events associated with the removable singular fade states fall as SNR^-2, thereby providing a coding gain over the case when adaptive network coding is not used. Also, it is shown that for a Rician fading channel, the error during the MA phase dominates over the error during the BC phase. Hence, adaptive network coding, which improves the performance during the MA phase, provides more gain in a Rician fading scenario than in a Rayleigh fading scenario.
Furthermore, it is shown that for large Rician factors, among those removable singular fade states which have the same magnitude, those which have the least absolute value of the phase angle alone contribute dominantly to the end-to-end SER, and it is sufficient to remove the effect of only such singular fade states.
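The high-SNR behaviour described above can be written schematically as follows, where c_1, c_2 and c_3 are scheme- and channel-dependent constants introduced here for illustration, not values from the paper:

```latex
% Without adaptive network coding, all dominant error events fall as SNR^{-1}:
P_{\mathrm{SER}} \approx
  \underbrace{\frac{c_1}{\mathrm{SNR}}}_{\text{non-removable states, BC phase}}
  + \underbrace{\frac{c_2}{\mathrm{SNR}}}_{\text{removable states}}
% With adaptive network coding, the removable-state term falls as SNR^{-2},
% which is the source of the coding gain:
P_{\mathrm{SER}}^{\text{adaptive}} \approx
  \frac{c_1}{\mathrm{SNR}} + \frac{c_3}{\mathrm{SNR}^{2}}
```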


A Variable Endmember Constrained Least Squares (VECLS) technique is proposed to account for endmember variability in the linear mixture model by incorporating the variance for each class, whose signals vary from pixel to pixel due to changes in urban land cover (LC) structure. VECLS is first tested with a computer-simulated dataset of three endmember classes and four bands, having small, medium and large variability, at three different spatial resolutions. The technique is next validated with real datasets from IKONOS, Landsat ETM+ and MODIS. The results show that the correlation between actual and estimated proportions is higher by an average of 0.25 for the artificial datasets compared to a situation where variability is not considered. With IKONOS, Landsat ETM+ and MODIS data, the average correlation increased by 0.15 for 2 and 3 classes and by 0.19 for 4 classes, when compared to a single endmember per class. (C) 2013 COSPAR. Published by Elsevier Ltd. All rights reserved.
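The underlying linear mixture model can be sketched for the simplest case: two endmembers with a sum-to-one constraint, solved by least squares in closed form. This is the fixed-endmember baseline that VECLS extends with per-class variance; the spectra are illustrative.

```python
# Sketch of baseline linear spectral unmixing with two endmembers and a
# sum-to-one constraint: pixel ≈ f*e1 + (1 - f)*e2, solved by least squares
# in closed form. This is the single-endmember-per-class baseline that VECLS
# extends with per-class variance; spectra in the test are illustrative.
def unmix(pixel, e1, e2):
    """Return the fraction f of endmember e1 minimizing ||pixel - f*e1 - (1-f)*e2||."""
    d = [a - b for a, b in zip(e1, e2)]
    num = sum((p - b) * di for p, b, di in zip(pixel, e2, d))
    den = sum(di * di for di in d)
    f = num / den
    return max(0.0, min(1.0, f))          # clamp to a physical proportion
```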


Heat and mass transfer studies in a calandria based reactor are quite complex, both due to the geometry and due to the complex mixing flow. It is challenging to devise optimum operating conditions with an efficient but safe working range for such a complex configuration. A numerical study, known to be very effective for such problems, is therefore taken up for investigation. In the present study, a 3D RANS code with a turbulence model has been used to compute the flow fields and obtain the heat transfer characteristics, in order to understand certain design parameters of engineering importance. The angle of injection of the coolant liquid has a large effect on the heat transfer within the reactor.


Homogenization and error analysis of an optimal interior control problem in the framework of the Stokes system, on a domain with a rapidly oscillating boundary, are the subject matter of this article. We consider a three-dimensional domain consisting of a parallelepiped with a large number of rectangular cylinders on top of it. An interior control is applied in a proper subdomain of the parallelepiped, away from the oscillating volume. We consider two types of functionals, namely one involving the L^2-norm of the state variable and another involving its H^1-norm. The asymptotic analysis of the optimality systems for both cases, as the cross-sectional area of the rectangular cylinders tends to zero, is done here. Our major contribution is to derive error estimates for the state, the co-state and the associated pressures, in appropriate function spaces.


In this article, we prove convergence of weakly penalized adaptive discontinuous Galerkin methods. Unlike other works, we derive the contraction property for various discontinuous Galerkin methods assuming only that the stabilizing parameters are large enough to stabilize the method. A central idea in the analysis is to construct an auxiliary solution from the discontinuous Galerkin solution by a simple post-processing. Based on the auxiliary solution, we define the adaptive algorithm which leads to the convergence of the adaptive discontinuous Galerkin methods.


Mass balance between the metal and the electrolytic solution, separated by a moving interface, in stable pit growth results in a set of governing equations which are solved for the concentration field and the interface position (pit boundary evolution). The interface experiences a jump discontinuity in metal concentration. The extended finite-element model (XFEM) handles this jump discontinuity by using a discontinuous-derivative enrichment formulation, eliminating the requirement of a front-conforming mesh and of re-meshing after each time step as in the conventional finite-element method. However, the prior interface location is required in order to solve the governing equations for the concentration field, for which a numerical technique, the level set method, is used to track the interface explicitly and update it over time. The level set method is chosen as it is independent of the shape and location of the interface. Thus, a combined XFEM and level set method is developed in this paper. A numerical analysis for pitting corrosion of stainless steel 304 is presented. The proposed model is validated by comparing the numerical results with experimental results, exact solutions and some other approximate solutions. An empirical model for the pitting potential is also derived based on the finite-element results. Studies show that the pitting profile depends to a large extent on factors such as ion concentration, solution pH and temperature. Studying the individual and combined effects of these factors on the pitting potential is worthwhile, as the pitting potential directly influences the corrosion rate.
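The interface-tracking role of the level set method can be sketched in 1-D: the interface is the zero crossing of a signed-distance function phi, and phi is advanced with a prescribed front speed. This illustrates only the tracking step, not the coupled XFEM concentration solve; the constant front speed and all numbers are illustrative.

```python
# Sketch of 1-D level set interface tracking: the interface is the zero
# crossing of a signed-distance function phi, advanced with a prescribed
# front speed V. For |grad phi| = 1, the level set equation
# d(phi)/dt + V*|grad phi| = 0 reduces to phi <- phi - V*dt each step.
# Tracking step only, not the coupled XFEM concentration solve.
def advance_interface(phi, x, V, dt, steps):
    """Advect a signed-distance function phi(x); return the zero crossing."""
    for _ in range(steps):
        phi = [p - V * dt for p in phi]
    # locate the zero crossing by linear interpolation between grid points
    for i in range(len(phi) - 1):
        if phi[i] <= 0.0 <= phi[i + 1] or phi[i] >= 0.0 >= phi[i + 1]:
            t = phi[i] / (phi[i] - phi[i + 1])
            return x[i] + t * (x[i + 1] - x[i])
    return None
```

Because phi is updated on a fixed grid, the moving pit boundary needs no conforming mesh, which is exactly what lets XFEM avoid re-meshing.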


This report addresses the non-invasive assessment of variation in the elastic property of soft biological tissues using laser speckle contrast measurement. Both experimental and numerical (Monte Carlo simulation) studies are carried out. An intense acoustic burst of ultrasound (an acoustic pulse with high power within standard safety limits), instead of a continuous wave, is employed to induce large modulation of the tissue material in the ultrasound-insonified region of interest (ROI), which enhances the strength of the ultrasound-modulated optical signal in the ultrasound modulated optical tomography (UMOT) system. The intensity fluctuation of the speckle patterns formed by interference of the scattered light (while traversing the tissue medium) is characterized by the motion of the scattering sites. The displacement of the scattering particles is inversely related to the elastic property of the tissue. We study the feasibility of the laser speckle contrast analysis (LSCA) technique to reconstruct a map of the elastic property of a soft tissue-mimicking phantom. We employ a source-synchronized parallel speckle detection scheme to measure (experimentally) the speckle contrast from light traversing the ultrasound (US) insonified tissue-mimicking phantom. The measured relative image contrast (the ratio of the difference between the maximum and minimum values to the maximum value) is 86.44% for the intense acoustic burst, compared to 67.28% for continuous-wave excitation of the ultrasound. We also present 1-D and 2-D images of speckle contrast, which is representative of the elastic property distribution.
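The two contrast measures used above can be sketched directly: the conventional speckle contrast, defined as the ratio of the standard deviation to the mean of intensity over a window, and the relative image contrast quoted in the abstract, (max - min)/max. The intensity values below are illustrative.

```python
# Sketch of the two contrast measures: conventional speckle contrast
# K = sigma/mean of intensity over a window, and the relative image contrast
# (max - min)/max quoted in the abstract. Intensities are illustrative.
def speckle_contrast(intensities):
    """K = standard deviation / mean of the intensity samples in a window."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((v - mean) ** 2 for v in intensities) / n
    return var ** 0.5 / mean

def relative_image_contrast(values):
    """(max - min) / max, the figure of merit reported in the abstract."""
    return (max(values) - min(values)) / max(values)
```

A lower K in the insonified ROI indicates larger scatterer displacement, i.e. a softer (less elastic) region, which is what the reconstructed 1-D and 2-D maps display.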