929 results for Source analysis
Abstract:
Points-to analysis is a key compiler analysis. Several memory-related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it to propagate more and more points-to information across its subset edges. So far, the structure of the constraint graph has been exploited only trivially for efficient propagation of information, e.g., in identifying cyclic components or in propagating information in topological order. We perform a careful study of its structure and propose a new inclusion-based, flow-insensitive, context-sensitive points-to analysis algorithm based on the notion of dominant pointers. We also propose a new kind of pointer equivalence based on dominant pointers, which provides significantly more opportunities for reducing the number of pointers tracked during the analysis. Based on this hitherto unexplored form of pointer equivalence, we develop a new context-sensitive, flow-insensitive points-to analysis algorithm that uses incremental dominator updates to compute points-to information efficiently. Using a large suite of programs consisting of the SPEC 2000 benchmarks and five large open-source programs, we show that our points-to analysis is 88% faster than BDD-based Lazy Cycle Detection and 2x faster than Deep Propagation. We argue that our approach of detecting dominator-based pointer equivalence is key to improving points-to analysis efficiency.
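Where the abstract refers to propagating points-to information across the subset edges of a constraint graph, the following is a minimal sketch of the baseline inclusion-based (Andersen-style) propagation loop that such work builds on. It is a toy illustration assuming a simplified constraint form with only address-of and copy constraints; it omits load/store constraints, cycle detection, and the dominant-pointer equivalence that this paper actually contributes.

```python
from collections import defaultdict, deque

def solve(addr_of, copies):
    """addr_of: (p, o) pairs for p = &o; copies: (dst, src) pairs for dst = src."""
    pts = defaultdict(set)      # points-to set per pointer
    succ = defaultdict(set)     # subset edges: src -> {dst}
    work = deque()
    for p, o in addr_of:
        pts[p].add(o)
        work.append(p)
    for dst, src in copies:
        succ[src].add(dst)
    while work:                 # propagate along subset edges to a fixpoint
        n = work.popleft()
        for m in succ[n]:
            if not pts[n] <= pts[m]:
                pts[m] |= pts[n]
                work.append(m)
    return dict(pts)

print(solve([('a', 'x'), ('b', 'y')], [('c', 'a'), ('c', 'b')]))
# -> 'c' points to {'x', 'y'}
```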
Abstract:
Pervasive use of pointers in large-scale real-world applications continues to make points-to analysis an important optimization enabler. The rapid growth of software systems demands a scalable pointer analysis algorithm. A typical inclusion-based points-to analysis iteratively evaluates constraints and computes a points-to solution until a fixpoint. In each iteration, (i) points-to information is propagated across directed edges in a constraint graph G, and (ii) more edges are added by processing the points-to constraints. We observe that prioritizing the order in which the information is processed within each of the above two steps can lead to efficient execution of the points-to analysis. While earlier work in the literature focuses only on the propagation order, we argue that the other dimension, prioritizing the constraint processing, can lead to even greater improvements in how fast the fixpoint of the points-to algorithm is reached. This becomes especially important as we prove that finding an optimal sequence for processing the points-to constraints is NP-Complete. The prioritization scheme proposed in this paper is general enough to be applied to any of the existing points-to analyses. Using the prioritization framework developed in this paper, we implement prioritized versions of Andersen's analysis, Deep Propagation, Hardekopf and Lin's Lazy Cycle Detection, and Bloom-filter-based points-to analysis. In each case, we report significant improvements in the analysis times (33%, 47%, 44%, and 20%, respectively), as well as in the memory requirements, for a large suite of programs including the SPEC 2000 benchmarks and five large open-source programs.
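As a hedged sketch of the general prioritization idea, the snippet below processes a constraint worklist through a priority queue, so that whatever scoring function is plugged in decides the evaluation order. The scoring function shown (cheap copy constraints before loads and stores) is purely illustrative, not the heuristic proposed in the paper.

```python
import heapq
import itertools

def process_prioritized(constraints, score, handle):
    """Pop pending constraints in priority order; `handle` may return new ones."""
    counter = itertools.count()              # tie-breaker for equal scores
    heap = [(score(c), next(counter), c) for c in constraints]
    heapq.heapify(heap)
    while heap:
        _, _, c = heapq.heappop(heap)
        for new_c in handle(c):              # processing may add constraints
            heapq.heappush(heap, (score(new_c), next(counter), new_c))

# toy run: copy constraints are handled first, store constraints last
cost = {'copy': 0, 'load': 1, 'store': 2}
process_prioritized(
    [('store', 'p', 'q'), ('copy', 'a', 'b'), ('load', 'r', 's')],
    score=lambda c: cost[c[0]],
    handle=lambda c: print('processing', c) or [],
)
```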
Abstract:
In a cooperative system with an amplify-and-forward (AF) relay, the cascaded channel training protocol enables the destination to estimate the source-destination channel gain and the product of the source-relay (SR) and relay-destination (RD) channel gains using only two pilot transmissions from the source. Notably, the destination does not require a separate estimate of the SR channel. We develop a new expression for the symbol error probability (SEP) of AF relaying when imperfect channel state information (CSI) is acquired using the above training protocol. A tight SEP upper bound is also derived; it shows that full diversity is achieved, albeit at a high signal-to-noise ratio (SNR). Our analysis uses fewer simplifying assumptions and leads to expressions that are accurate even at low SNRs and differ from those in the literature. For instance, it does not approximate the estimate of the product of the SR and RD channel gains by the product of the estimates of the SR and RD channel gains. We show that cascaded channel estimation often outperforms a channel estimation protocol that incurs a greater training overhead by forwarding a quantized estimate of the SR channel gain to the destination. The extent of pilot power boosting, if allowed, that is required to improve performance is also quantified.
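As a toy Monte Carlo illustration of the cascaded training idea described above, the sketch below estimates the SR-RD product directly from a single relayed pilot, so no separate SR estimate is ever formed at the destination. The fading and noise models are simplifying assumptions of this sketch (unit-variance Rayleigh gains, relay amplification folded into a single noise term), not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(1)
N, snr = 100_000, 10 ** (10 / 10)      # trials, 10 dB pilot SNR

def rayleigh(n):                        # unit-variance complex Gaussian gains
    return (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

def noise(n):                           # complex noise with variance 1/snr
    return (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * snr)

h_sd, h_sr, h_rd = rayleigh(N), rayleigh(N), rayleigh(N)

# pilot 1 (direct path): estimate of the source-destination gain
h_sd_hat = h_sd + noise(N)
# pilot 2 (relayed path): direct estimate of the SR*RD product
h_cas_hat = h_sr * h_rd + noise(N)

print("MSE, direct estimate  :", np.mean(np.abs(h_sd_hat - h_sd) ** 2))
print("MSE, cascaded estimate:", np.mean(np.abs(h_cas_hat - h_sr * h_rd) ** 2))
```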
Abstract:
We consider the speech production mechanism and the associated linear source-filter model. For voiced speech sounds in particular, the source/glottal excitation is modeled as a stream of impulses and the filter as a cascade of second-order resonators. We show that the process of sampling speech signals can be modeled as filtering a stream of Dirac impulses (a model for the excitation) with a kernel function (the vocal tract response), and then sampling uniformly. We show that the problem of estimating the excitation is equivalent to the problem of recovering a stream of Dirac impulses from samples of a filtered version. We present associated algorithms based on the annihilating filter and also make a comparison with the classical linear prediction technique, which is well known in speech analysis. Results on synthesized as well as natural speech data are presented.
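The annihilating-filter step named above can be sketched in a few lines: given the moments s[m] = sum_k c_k * u_k**m of a stream of K Diracs, solve a small Toeplitz system for the filter whose roots are the u_k, which encode the impulse locations. The values below are synthetic; this is the standard finite-rate-of-innovation recipe, not the paper's full speech pipeline.

```python
import numpy as np

K = 2                                              # number of Diracs
u = np.exp(2j * np.pi * np.array([0.15, 0.40]))    # true "locations"
c = np.array([1.0, 0.7])                           # true amplitudes
M = 2 * K
s = np.array([np.sum(c * u**m) for m in range(M + 1)])   # signal moments

# Annihilation: sum_k a[k] * s[m-k] = 0 for m = K..M, with a[0] = 1
S = np.array([[s[m - k] for k in range(K + 1)] for m in range(K, M + 1)])
a_tail = np.linalg.lstsq(S[:, 1:], -S[:, 0], rcond=None)[0]
a = np.concatenate(([1.0], a_tail))

roots = np.roots(a)                                # roots recover the u_k
print(np.sort(np.angle(roots) / (2 * np.pi)))      # ~ [0.15, 0.40]
```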
Abstract:
In view of the major advancement made in understanding the seismicity and seismotectonics of the Indian region in recent times, an updated probabilistic seismic hazard map of India covering 6-38°N and 68-98°E is prepared. This paper presents the results of a probabilistic seismic hazard analysis of India performed using regional seismic source zones and four well-recognized attenuation relations that account for the varied tectonic provinces in the region. The study area was divided into small grids of size 0.1° × 0.1°. Peak Horizontal Acceleration (PHA) and spectral accelerations for periods of 0.1 s and 1 s have been estimated, and contour maps showing their spatial variation are presented in the paper. The present study shows that the seismic hazard is moderate in the peninsular shield, but the hazard in most parts of North and Northeast India is high.
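For readers unfamiliar with the core computation behind such maps, the sketch below shows the standard hazard integral at a single grid point: a magnitude recurrence model combined with an attenuation relation to give an annual rate of exceedance of a ground-motion level. Every numerical value here (recurrence rate, attenuation coefficients, distance, scatter) is a placeholder, not one of the calibrated models used in the paper.

```python
import math

def rate_exceeding(a_target, nu=0.05, b=1.0, m_min=4.0, m_max=8.0,
                   r_km=50.0, sigma_ln=0.6, dm=0.1):
    """Annual rate of PGA > a_target (g) from one source, toy models only."""
    beta = b * math.log(10.0)
    rate, m = 0.0, m_min
    while m < m_max:
        # truncated-exponential (Gutenberg-Richter) magnitude density
        f_m = beta * math.exp(-beta * (m - m_min)) / \
              (1.0 - math.exp(-beta * (m_max - m_min)))
        # toy attenuation: median ln(PGA) as a function of m and distance
        ln_med = -3.5 + 0.8 * m - 1.1 * math.log(r_km + 10.0)
        z = (math.log(a_target) - ln_med) / sigma_ln
        p_exceed = 0.5 * math.erfc(z / math.sqrt(2.0))   # lognormal scatter
        rate += nu * f_m * p_exceed * dm
        m += dm
    return rate

print(rate_exceeding(0.1))   # annual rate of PGA > 0.1 g at this site
```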
Abstract:
The acoustical behaviour of an elliptical chamber muffler having a side inlet and side outlet port is analyzed in this paper, wherein a uniform-velocity piston source is assumed to model the 3-D acoustic field in the elliptical chamber cavity. Towards this end, we consider the modal expansion of the acoustic pressure field in the elliptical cavity in terms of the angular and radial Mathieu functions, subjected to the rigid-wall condition. Then, the Green's function due to the point source located on the side (curved) surface of the elliptical chamber is obtained. On integrating this function over the elliptical piston area on the curved surface of the chamber and subsequently dividing by the area of the elliptical piston, one obtains the acoustic pressure field due to the piston-driven source, which is equivalent to considering plane wave propagation in the side ports. Thus, one can obtain the acoustic pressure response functions, i.e., the impedance matrix (Z) parameters due to the sources (ports) located on the side surface, from which one may also obtain a progressive wave representation in terms of the scattering matrix (S). Finally, the acoustic performance of the muffler is evaluated in terms of the transmission loss (TL), which is computed from the scattering parameters. The effect of the axial length of the muffler and the angular location of the ports on the TL characteristics is studied in detail. Acoustically long chambers show dominant axial plane-wave propagation, while the TL spectrum of short chambers indicates the dominance of the transversal modes. The 3-D analytical results are compared with 3-D FEM simulations carried out using commercial software and are shown to be in excellent agreement, thereby validating the analytical procedure suggested in this work.
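The final step mentioned above, obtaining TL from the scattering parameters, can be sketched as follows using the standard 2-port conversion from a normalized impedance matrix to a scattering matrix and TL = -20 log10 |S21| for equal-area ports. The numerical Z values are illustrative, not results from the paper.

```python
import numpy as np

def tl_from_Z(Z, Y0=1.0):
    """TL (dB) from a normalized 2x2 impedance matrix; Y0: char. admittance."""
    I = np.eye(2)
    S = (Z * Y0 - I) @ np.linalg.inv(Z * Y0 + I)   # S = (Z - I)(Z + I)^-1
    return -20.0 * np.log10(np.abs(S[1, 0]))       # TL = -20 log10 |S21|

Z = np.array([[1.2 + 0.3j, 0.4 - 0.1j],
              [0.4 - 0.1j, 1.1 + 0.2j]])           # illustrative values
print(f"TL = {tl_from_Z(Z):.2f} dB")
```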
Abstract:
Accidental spills and improper disposal of industrial effluent/sludge containing heavy metals onto open land or into the subsurface result in soil and water contamination. Detailed investigations were carried out to identify the source of heavy-metal contamination in an industrial suburb near Bangalore in India, covering both ground water and subsurface soil. Ground water samples were collected across the entire area through a cluster of borewells. Subsurface soil samples were collected near the borewells that were found to contain heavy metals. Water samples and soil samples (after acid digestion) were analysed as per the APHA standard methods of analysis. While the results for Zn, Ni and Cd showed that they are within allowable limits in the soil, the ground water and soils at the site have Cr(VI) concentrations far exceeding the allowable limits (up to 832 mg/kg). Considering the topography of the area, the ground water movement, and the chromium concentrations measured in the borewells and subsurface, it was possible to identify the origin, the zone of contamination, and the migration path of Cr(VI). The results indicated that the predominant mechanism of migration of Cr(VI) is diffusion.
Abstract:
Given the significant gains that relay-based cooperation promises, the practical problems of acquiring channel state information (CSI) and of characterizing and optimizing performance with imperfect CSI are receiving increasing attention. We develop novel and accurate expressions for the symbol error probability (SEP) of fixed-gain amplify-and-forward relaying when the destination acquires CSI using the time-efficient cascaded channel estimation (CCE) protocol. The CCE protocol saves time by making the destination directly estimate the product of the source-relay and relay-destination channel gains. For a single-relay system, we first develop a novel SEP expression and a tight SEP upper bound. We then similarly analyze an opportunistic multi-relay system, in which both selection and coherent demodulation use imperfect estimates. A distinctive aspect of our approach is the use of as few simplifying approximations as possible, which yields new expressions that are accurate at signal-to-noise ratios as low as 1 dB for single- and multi-relay systems. Using insights gleaned from an asymptotic analysis, we also present a simple, closed-form, nearly optimal solution for the allocation of energy between pilot and data symbols at the source and relay(s).
Abstract:
Estimating program worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that has no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built on these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy of WCET improves to 159% when the CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
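The Chebyshev step described above can be made concrete with a short sketch: for CPI samples with mean mu and standard deviation sigma, the two-sided Chebyshev inequality P(|CPI - mu| >= k*sigma) <= 1/k^2 implies that mu + k*sigma bounds the phase CPI with probability at least p when k = 1/sqrt(1 - p). The sample values below are invented for illustration, not taken from the paper.

```python
import math
import statistics

def cpi_upper_bound(samples, p):
    """Chebyshev-style CPI bound holding with probability at least p."""
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    k = 1.0 / math.sqrt(1.0 - p)       # from 1/k**2 = 1 - p
    return mu + k * sigma

cpi_samples = [1.10, 1.25, 1.05, 1.40, 1.18, 1.33]   # profiled per-phase CPIs
for p in (0.9, 0.95, 0.99):
    print(f"p = {p}: CPI <= {cpi_upper_bound(cpi_samples, p):.3f}")
```

Note how high CPI variance inflates sigma and hence the bound, which is exactly why the paper refines high-variance phases into sub-phases.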
Abstract:
The voltage ripple and power loss in the DC capacitor of a voltage source inverter depend on the harmonic currents flowing through the capacitor. This paper presents a double-Fourier-series-based harmonic analysis of the DC capacitor current in a three-level neutral-point-clamped inverter modulated with sine-triangle PWM. The analytical results are validated experimentally on a 5-kVA three-level inverter prototype. The results of the analysis are used for predicting the power loss in the DC capacitor.
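As a sketch of how such a harmonic spectrum feeds the final loss prediction, the snippet below sums ESR losses over the harmonic components of the capacitor current, P = sum_h ESR * I_h^2. The amplitudes and ESR value are placeholders, not the analytically derived spectrum or measured parameters from the paper.

```python
# illustrative RMS harmonic currents of the DC capacitor: frequency (Hz) -> A
harmonics = {300: 2.1, 600: 1.4, 900: 0.8, 1200: 0.5}
ESR = 0.05  # ohms, assumed constant over frequency in this sketch

i_rms_total = sum(i**2 for i in harmonics.values()) ** 0.5
p_loss = sum(ESR * i**2 for i in harmonics.values())
print(f"total ripple current: {i_rms_total:.2f} A rms")
print(f"estimated capacitor loss: {p_loss:.3f} W")
```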
Abstract:
Advanced bus-clamping pulse width modulation (ABCPWM) techniques are advantageous in terms of line current distortion and inverter switching loss in voltage-source-inverter-fed applications. However, the PWM waveforms corresponding to these techniques are not amenable to carrier-based generation. The modulation process in ABCPWM methods is analyzed here from a “per-phase” perspective. It is shown that three sets of descendant modulating functions (or modified modulating functions) can be generated from the three-phase sinusoidal signals. Each set of the modified modulating functions can be used to produce the PWM waveform of a given phase in a computationally efficient manner. Theoretical results and experimental investigations on a 5 hp motor drive are presented.
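For context, the generic carrier-comparison step that such per-phase generation builds on is sketched below: a switching signal is produced wherever the modulating function exceeds a triangular carrier. The plain sinusoidal modulating function used here is a placeholder; the paper's descendant/modified modulating functions would take its place.

```python
import math

def triangle(t, fc):
    """Unit triangular carrier in [-1, 1] at carrier frequency fc."""
    x = (t * fc) % 1.0
    return 4.0 * abs(x - 0.5) - 1.0

def pwm(m, fc, t):
    """1 when the modulating function is above the carrier, else 0."""
    return 1 if m(t) > triangle(t, fc) else 0

m = lambda t: 0.8 * math.sin(2 * math.pi * 50 * t)   # 50 Hz, index 0.8
samples = [pwm(m, 1950.0, n / 100000.0) for n in range(100)]
print(samples)
```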
Abstract:
A series of spectral-analysis-of-surface-waves (SASW) tests were conducted on a cement concrete pavement by dropping steel balls of four different diameters (D) varying between 25.4 and 76.2 mm. These tests were performed (1) using different combinations of source-to-nearest-receiver distance (S) and receiver spacing (X), and (2) for two different heights of fall (H), namely 0.25 and 0.50 m. The values of the maximum wavelength (λmax) and minimum wavelength (λmin) associated with the combined dispersion curve, corresponding to a particular combination of D and H, were noted to increase almost linearly with an increase in the magnitude of the input source energy (E). A continuous increase in the strength and duration of the signals was noted to occur with an increase in the magnitude of D. Based on statistical analysis, two regression equations have been proposed to determine λmax and λmin for different values of source energy. It is concluded that the SASW technique is capable of producing a nearly unique dispersion curve irrespective of (1) the diameters and heights of fall of the dropping masses used to produce the vibration, and (2) the spacing between different receivers. The results presented in this paper can be used to provide guidelines for deciding on the input source energy based on the required exploration zone of the pavement.
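As a worked example of the input source energy E for a dropped steel ball, E = m*g*H, with the mass obtained from the ball diameter; this is a plain potential-energy calculation, and the paper's fitted regression coefficients relating E to λmax and λmin are not reproduced here.

```python
import math

rho_steel, g = 7850.0, 9.81                   # kg/m^3, m/s^2
for D in (0.0254, 0.0381, 0.0508, 0.0762):    # ball diameters (m)
    for H in (0.25, 0.50):                    # drop heights (m)
        m = rho_steel * math.pi * D**3 / 6.0  # sphere mass
        E = m * g * H                         # potential energy at release
        print(f"D = {D*1000:.1f} mm, H = {H:.2f} m -> E = {E:.3f} J")
```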
Abstract:
This report addresses the non-invasive assessment of variation in the elastic property of soft biological tissues using laser speckle contrast measurement. Both experimental and numerical (Monte Carlo simulation) studies are carried out. An intense acoustic burst of ultrasound (a high-power acoustic pulse within standard safety limits), instead of a continuous wave, is employed to induce large modulation of the tissue in the ultrasound-insonified region of interest (ROI); this enhances the strength of the ultrasound-modulated optical signal in the ultrasound-modulated optical tomography (UMOT) system. The intensity fluctuation of the speckle patterns formed by the interference of light scattered while traversing the tissue medium is characterized by the motion of the scattering sites. The displacement of the scattering particles is inversely related to the elastic property of the tissue. We study the feasibility of the laser speckle contrast analysis (LSCA) technique for reconstructing a map of the elastic property of a soft tissue-mimicking phantom. We employ a source-synchronized parallel speckle detection scheme to experimentally measure the speckle contrast from light traversing the ultrasound (US) insonified tissue-mimicking phantom. The measured relative image contrast (the ratio of the difference between the maximum and minimum values to the maximum value) is 86.44% for the intense acoustic burst, compared to 67.28% for continuous-wave excitation of the ultrasound. We also present 1-D and 2-D images of the speckle contrast, which are representative of the elastic property distribution.
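A minimal sketch of the two statistics involved: the local speckle contrast K = sigma/mean over a sliding window, and the relative image contrast (max - min)/max quoted above. The exponential-intensity image is a synthetic stand-in for a measured speckle pattern, assumed only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
speckle = rng.exponential(scale=1.0, size=(64, 64))   # toy speckle image

def local_contrast(img, w=7):
    """Speckle contrast K = std/mean in a w x w window around each pixel."""
    K = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = img[max(0, i - w // 2):i + w // 2 + 1,
                      max(0, j - w // 2):j + w // 2 + 1]
            K[i, j] = win.std() / win.mean()
    return K

K = local_contrast(speckle)
rel = (K.max() - K.min()) / K.max()    # "relative image contrast"
print(f"relative contrast: {100 * rel:.2f} %")
```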
Abstract:
In this paper, we study the problem of designing a multi-hop wireless network for interconnecting sensors (hereafter called source nodes) to a Base Station (BS) by deploying a minimum number of relay nodes at a subset of given potential locations, while meeting a quality of service (QoS) objective specified as a hop-count bound for paths from the sources to the BS. The hop-count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light-traffic model. We observe that the problem is NP-Hard. For this problem, we propose a polynomial-time approximation algorithm based on iteratively constructing shortest-path trees and heuristically pruning away the relay nodes used, until the hop-count bound is violated. Results show that the algorithm performs efficiently in various randomly generated network scenarios; in over 90% of the tested scenarios, it gave solutions that were either optimal or worse than optimal by just one relay. We then use random graph techniques to obtain, under a certain stochastic setting, an upper bound on the average-case approximation ratio of a class of algorithms (including the proposed algorithm) for this problem, as a function of the number of source nodes and the hop-count bound. To the best of our knowledge, this average-case analysis is the first of its kind in the relay placement literature. Since the design is based on a light-traffic model, we also provide simulation results (using models for the IEEE 802.15.4 physical layer and medium access control) to assess the traffic levels up to which the QoS objectives continue to be met.
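A hedged sketch of the prune-while-feasible idea described above: compute hop counts from the BS by BFS and greedily drop relays as long as every source still meets the hop bound. The tiny graph, the drop order, and the single-pass simplification are illustrative assumptions, not the paper's exact algorithm.

```python
from collections import deque

def hops_from(adj, start, allowed):
    """BFS hop counts from `start`, restricted to nodes in `allowed`."""
    dist, q = {start: 0}, deque([start])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v in allowed and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def prune_relays(adj, bs, sources, relays, hop_bound):
    kept = set(relays)
    for r in sorted(relays):                      # illustrative drop order
        allowed = {bs, *sources, *(kept - {r})}
        dist = hops_from(adj, bs, allowed)
        if all(dist.get(s, float('inf')) <= hop_bound for s in sources):
            kept.discard(r)                       # r was redundant
    return kept

# tiny example: r2 alone covers both sources within 2 hops, so r1 is pruned
edges = [('bs', 'r1'), ('r1', 's1'), ('bs', 'r2'),
         ('r2', 's1'), ('r2', 's2'), ('r1', 'r2')]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)
print(prune_relays(adj, 'bs', ['s1', 's2'], ['r1', 'r2'], hop_bound=2))
```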
Abstract:
The present study examines the improved detoxification and rapid biological degradation of the toxic pollutant acrylamide using a bacterium. The acrylamide-degrading bacterium was isolated from soil and screened for its acrylamide-degrading capability. Minimal medium containing acrylamide (30 mM) served as the sole source of carbon and nitrogen for its growth. The optimization of three different factors was analyzed using Response Surface Methodology (RSM). The bacterium actively degraded the acrylamide at a temperature of 32 °C, with maximum growth at a 30 mM substrate (acrylamide) concentration and a pH of 7.2. The acrylamidase activity and the degradation of acrylamide were determined by High Performance Liquid Chromatography (HPLC) and Matrix-Assisted Laser Desorption/Ionization Time-of-Flight (MALDI-TOF) mass spectrometry. Based on 16S rRNA analysis, the selected strain was identified as the Gram-negative bacillus Stenotrophomonas acidaminiphila MSU12. The acrylamidase was isolated from the bacterial extract and purified by HPLC; its mass spectrum showed a molecular mass of 38 kDa.