929 results for Source analysis
Abstract:
We propose a novel formulation of points-to analysis as a system of linear equations. With this, the efficiency of points-to analysis can be significantly improved by leveraging advances in solution procedures for systems of linear equations. Such a formulation is non-trivial, however, and is made challenging by several factors, namely multiple pointer indirections, address-of operators and multiple assignments to the same variable. The problem is further exacerbated by the need to keep the transformed equations linear. Despite this, we successfully model all the pointer operations and propose a novel inclusion-based context-sensitive points-to analysis algorithm based on prime factorization. Experimental evaluation on SPEC 2000 benchmarks and two large open source programs reveals that our approach is competitive with state-of-the-art algorithms. With an average memory requirement of a mere 21 MB, our context-sensitive points-to analysis algorithm analyzes each benchmark in 55 seconds on average.
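As a rough illustration of how a prime-factorization-based representation of points-to sets can work (a minimal sketch of the general idea only, not the algorithm of the paper; all names are hypothetical), each abstract object can be labelled with a distinct prime so that a points-to set becomes a product of primes, set union becomes an lcm, and membership becomes a divisibility test:

```python
# Minimal sketch (not the paper's algorithm): encode points-to sets as products of
# distinct primes so that union is an lcm and membership is a divisibility test.
from math import gcd

def prime_stream():
    """Yield primes by trial division (sufficient for a small demo)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

_primes = prime_stream()
_label = {}        # abstract object -> its prime label
pts = {}           # pointer -> product of primes of its pointees (1 = empty set)

def label(obj):
    if obj not in _label:
        _label[obj] = next(_primes)
    return _label[obj]

def lcm(a, b):
    return a * b // gcd(a, b)

def address_of(p, obj):            # statement p = &obj
    pts[p] = lcm(pts.get(p, 1), label(obj))

def copy(p, q):                    # statement p = q (inclusion constraint)
    pts[p] = lcm(pts.get(p, 1), pts.get(q, 1))

def may_point_to(p, obj):
    return pts.get(p, 1) % label(obj) == 0

address_of("p", "x"); copy("q", "p"); address_of("r", "y")
print(may_point_to("q", "x"), may_point_to("q", "y"))   # True False
```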
Abstract:
The source localization algorithms in earlier works mostly used non-planar arrays. In scenarios such as human-computer or human-television communication, however, the microphones need to be placed on the computer monitor or the television front panel, i.e., a planar array must be used. The algorithm proposed in [1] is a Linear Closed Form (LCF) source localization algorithm based on Time Differences of Arrival (TDOAs) obtained from the data collected by the microphones; it assumes a non-planar array. In the current work, the LCF algorithm is applied to planar arrays. The relationship between the error in the source location estimate and the perturbation in the TDOAs is derived using first-order perturbation analysis and validated by simulations. If the TDOAs are erroneous, both the coefficient matrix and the data matrix used for obtaining the source location are perturbed, so a total least squares solution for source localization is proposed in the current work. The sensitivity of the source localization algorithm is analyzed for planar and non-planar arrays by introducing perturbations in the TDOAs and the microphone locations. It is shown that, for the same perturbation in the TDOAs or microphone locations, the error in the source location estimate is smaller when a planar array is used instead of the particular non-planar array considered. The location of the reference microphone is shown to be important for obtaining an accurate source location estimate with the LCF algorithm.
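As background for the total least squares step mentioned above, a generic SVD-based TLS solver for an overdetermined system A x ≈ b in which both A and b are perturbed looks as follows (a sketch only; the coefficient and data matrices built from the TDOAs in the LCF formulation are not reproduced here, and the toy data below are placeholders):

```python
# Classical total least squares via the SVD of the augmented matrix [A | b].
import numpy as np

def tls(A, b):
    """Solve A x ~= b when both A and b are noisy (TLS solution)."""
    A = np.asarray(A, float)
    b = np.asarray(b, float).reshape(-1, 1)
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([A, b]), full_matrices=True)
    V = Vt.T
    # The solution comes from the right singular vector of the smallest singular value.
    return (-V[:n, n:] / V[n, n]).ravel()

# Toy usage with a noisy 3-unknown linear system (placeholder, not the LCF equations).
rng = np.random.default_rng(0)
A_true = rng.normal(size=(8, 3))
x_true = np.array([1.0, -2.0, 0.5])
b_noisy = A_true @ x_true + 0.01 * rng.normal(size=8)
A_noisy = A_true + 0.01 * rng.normal(size=(8, 3))
print(tls(A_noisy, b_noisy))   # close to [1, -2, 0.5]
```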
Abstract:
The effect of using a spatially smoothed forward-backward covariance matrix on the performance of weighted eigen-based state space methods/ESPRIT and weighted MUSIC for direction-of-arrival (DOA) estimation is analyzed. Expressions for the mean-squared error in the estimates of the signal zeros and the DOA estimates, along with some general properties of the estimates and optimal weighting matrices, are derived. A key result is that optimally weighted MUSIC and weighted state-space methods/ESPRIT have identical asymptotic performance. Moreover, by properly choosing the number of subarrays, the performance of unweighted state space methods can be significantly improved. It is also shown that the mean-squared error in the DOA estimates is independent of the exact distribution of the source amplitudes. This results in a unified framework for dealing with both DOA estimation using a uniformly spaced linear sensor array and time-series frequency estimation problems.
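For reference, the forward-backward spatially smoothed covariance that such an analysis starts from can be formed as below (the standard textbook construction for a uniform linear array; the weighting matrices analyzed in the paper are not shown):

```python
# Forward-backward spatial smoothing of a sample covariance for a uniform linear array.
# X is (num_sensors, num_snapshots); L is the subarray length, giving M - L + 1 subarrays.
import numpy as np

def fb_spatially_smoothed_cov(X, L):
    M, _ = X.shape
    K = M - L + 1                          # number of forward subarrays
    R = np.zeros((L, L), dtype=complex)
    for k in range(K):
        Xk = X[k:k + L, :]
        R += Xk @ Xk.conj().T / Xk.shape[1]
    R /= K                                 # forward spatially smoothed covariance
    J = np.fliplr(np.eye(L))               # exchange matrix
    return 0.5 * (R + J @ R.conj() @ J)    # add the backward (conjugate-flipped) part
```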
Abstract:
An intelligent computer aided defect analysis (ICADA) system, based on artificial intelligence techniques, has been developed to identify the design, process or material parameters that could be responsible for the occurrence of defective castings in a manufacturing campaign. The data on defective castings for a particular time frame, which is an input to the ICADA system, has been analysed. It was observed that a large proportion, i.e. 50-80%, of all the defective castings produced in a foundry have two, three or four types of defects occurring above a threshold proportion, say 10%. Also, a large number of defect types are either not found at all or found in a very small proportion, below a threshold value of 2%. An important feature of the ICADA system is the recognition of this pattern in the analysis. Thirty casting defect types, each with a large number of causes numbering between 50 and 70, as identified in the AFS analysis of casting defects (the standard reference source for the casting process), constituted the foundation for building the knowledge base. The scientific rationale underlying the formation of a defect during the casting process was identified and 38 metacauses were coded. Process, material and design parameters which contribute to the metacauses were systematically examined and 112 were identified as rootcauses. The interconnections between defects, metacauses and rootcauses were represented as a three-tier structured graph, and the handling of uncertainty in the occurrence of events such as defects, metacauses and rootcauses was achieved by Bayesian analysis. The hill climbing search technique, associated with forward reasoning, was employed to recognize one or several rootcauses.
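A toy sketch of the kind of search described, greedy hill climbing over candidate rootcauses scored by how many observed defects they explain, is given below; the mapping, scores and names are invented placeholders and stand in for neither the ICADA knowledge base nor its Bayesian treatment of uncertainty:

```python
# Toy hill-climbing diagnosis over a tiny invented defect -> rootcause mapping.
explains = {                       # rootcause -> set of defects it can explain
    "low_pouring_temp": {"misrun", "cold_shut"},
    "high_moisture_sand": {"blowhole", "pinhole"},
    "poor_venting": {"blowhole"},
}

def coverage_score(chosen, observed):
    covered = set().union(*(explains[rc] for rc in chosen)) if chosen else set()
    return len(covered & observed) - 0.1 * len(chosen)   # small penalty per extra cause

def hill_climb(observed):
    chosen, best = set(), coverage_score(set(), observed)
    improved = True
    while improved:
        improved = False
        for rc in explains.keys() - chosen:
            score = coverage_score(chosen | {rc}, observed)
            if score > best:
                chosen, best, improved = chosen | {rc}, score, True
    return chosen

print(hill_climb({"blowhole", "misrun", "pinhole"}))
```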
Abstract:
This paper presents a new approach to steady-state power flow analysis for multiterminal DC-AC systems. A flexible and practical choice of per-unit system is used to formulate the DC network and converter equations. A converter is represented by its Norton equivalent: a current source in parallel with the commutation resistance. Unlike in previous literature, the DC network equations are used to derive the controller equations for the DC system from a subset of specifications. The specifications considered are current or power at all terminals except the slack terminal, where the DC voltage is specified. The control equations are solved by Newton's method, using the current injections at the converter terminals as state variables. Further, a systematic approach to the handling of constraints is proposed by identifying priorities in rescheduling the specified variables. The methodology is illustrated with the example of a five-terminal DC system.
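The Newton iteration used to solve such control equations has the following generic form (a sketch with a finite-difference Jacobian; the residual function standing in for the converter and DC-network equations is a placeholder, not the paper's model):

```python
# Generic Newton iteration for a small nonlinear system f(x) = 0.
import numpy as np

def newton(residual, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, float).copy()
    for _ in range(max_iter):
        f = residual(x)
        if np.linalg.norm(f, np.inf) < tol:
            break
        # Finite-difference Jacobian (adequate for a small system).
        n, h = x.size, 1e-7
        J = np.zeros((n, n))
        for j in range(n):
            xp = x.copy(); xp[j] += h
            J[:, j] = (residual(xp) - f) / h
        x -= np.linalg.solve(J, f)
    return x

# Toy usage: solve x0**2 + x1 = 3, x0 + x1**2 = 5 (placeholder, not a converter model).
print(newton(lambda x: np.array([x[0]**2 + x[1] - 3, x[0] + x[1]**2 - 5]), [1.0, 1.0]))
```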
Abstract:
This paper is on the design and performance analysis of practical distributed space-time codes for wireless relay networks with multiple-antenna terminals. The amplify-and-forward scheme is used in such a way that each relay transmits a scaled version of a linear combination of the received symbols. We propose distributed generalized quasi-orthogonal space-time codes which are distributed among the source antennas and relays, and are valid for any number of relays. Assuming M-PSK and M-QAM signals, we derive a formula for the symbol error probability (SER) of the investigated scheme over Rayleigh fading channels. For sufficiently large SNR, we also derive a closed-form average SER expression. The simplicity of the asymptotic results provides valuable insights into the performance of cooperative networks and suggests means of optimizing them. Our analytical results have been confirmed by simulations using full-rate, full-diversity distributed codes.
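Closed-form SER expressions of this kind are typically checked against a Monte Carlo run of the following form (a generic point-to-point M-PSK over flat Rayleigh fading sketch; the relaying and space-time coding structure of the paper is not modeled here):

```python
# Monte Carlo SER estimate for M-PSK over a flat Rayleigh fading channel.
import numpy as np

def mpsk_ser_rayleigh(M, snr_db, num_symbols=200_000, seed=0):
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    sym_idx = rng.integers(0, M, num_symbols)
    s = np.exp(2j * np.pi * sym_idx / M)                    # unit-energy M-PSK symbols
    h = (rng.normal(size=num_symbols) + 1j * rng.normal(size=num_symbols)) / np.sqrt(2)
    n = (rng.normal(size=num_symbols) + 1j * rng.normal(size=num_symbols)) / np.sqrt(2 * snr)
    y = h * s + n
    r = y / h                                               # coherent equalization
    detected = np.round(np.angle(r) * M / (2 * np.pi)) % M  # nearest-phase detection
    return np.mean(detected != sym_idx)

print(mpsk_ser_rayleigh(4, 20))   # e.g. SER of QPSK at 20 dB average SNR
```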
Abstract:
Acoustic Emission (AE) signals, which are the electrical versions of acoustic emissions, are usually analysed using a set of signal parameters. The major objective of signal analysis is to study the characteristics of the sources of the emissions. Peak amplitude (P_a) and rise time (R_t) are two such parameters used for source characterization. In this paper, we theoretically investigate the efficiency of P_a and R_t in classifying and characterizing AE sources by modelling the input stress pulse and the transducer. The analytical expressions obtained for P_a and R_t clearly indicate their use and efficiency for source characterization. It is believed that these results may also be of use to investigators in areas such as control systems and signal processing.
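As a concrete illustration of the two parameters discussed, a simple extraction of P_a and R_t from a sampled waveform might look as follows (definitions of rise time vary between standards; a fixed-threshold definition is assumed here):

```python
# Peak amplitude and threshold-to-peak rise time of a sampled AE waveform.
import numpy as np

def peak_amplitude_and_rise_time(signal, fs, threshold):
    """Return (P_a, R_t): peak magnitude and time from first threshold crossing to the peak.
    fs is the sampling rate in Hz; threshold is in the same units as the signal."""
    x = np.abs(np.asarray(signal, float))
    peak_idx = int(np.argmax(x))
    p_a = x[peak_idx]
    above = np.nonzero(x >= threshold)[0]
    if above.size == 0 or above[0] > peak_idx:
        return p_a, 0.0
    r_t = (peak_idx - above[0]) / fs
    return p_a, r_t
```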
Abstract:
An aeroelastic analysis based on finite elements in space and time is used to model the helicopter rotor in forward flight. The rotor blade is represented as an elastic cantilever beam undergoing flap and lag bending, elastic torsion and axial deformations. The objective of the improved design is to reduce vibratory loads at the rotor hub, which are the main source of helicopter vibration. Constraints are imposed on aeroelastic stability, and move limits are imposed on the blade elastic stiffness design variables. Using the aeroelastic analysis, response surface approximations are constructed for the objective function (vibratory hub loads). It is found that second-order polynomial response surfaces constructed using the central composite design of the theory of design of experiments adequately represent the aeroelastic model in the vicinity of the baseline design. Optimization results show a reduction in the objective function of about 30 per cent. A key accomplishment of this paper is the decoupling of the analysis and optimization problems using response surface methods, which should encourage the use of optimization methods by the helicopter industry.
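A second-order polynomial response surface of the kind mentioned can be fitted by ordinary least squares; the generic construction is sketched below (the design points and objective values are placeholders standing in for the central-composite evaluations of the vibratory hub loads):

```python
# Fit and evaluate a full quadratic response surface by least squares.
import numpy as np

def quadratic_features(X):
    """Columns: 1, x_i, and x_i * x_j (i <= j) for each design point."""
    X = np.atleast_2d(X)
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    """Least-squares coefficients of the second-order surrogate."""
    beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
    return beta

# Example: a two-variable central composite design with placeholder objective values.
a = 2 ** 0.5
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [a, 0], [-a, 0], [0, a], [0, -a], [0, 0]], float)
y = np.array([3.2, 2.8, 3.0, 2.5, 2.7, 3.1, 2.9, 3.0, 2.6])
beta = fit_response_surface(X, y)
y_hat = quadratic_features(X) @ beta      # surrogate predictions at the design points
```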
Abstract:
Fiber-optic CDMA technology is well suited for high-speed local area networks (LANs) owing to its salient features. In this paper, we model the wavelength/time multiple-pulses-per-row (W/T MPR) FO-CDMA network channel as a Z channel. We compare the performance of the W/T MPR code with and without a hard-limiter and show that significant performance improvement can be achieved by using hard-limiters in the receivers. In broadcast channels, multiple-access interference (MAI) is the dominant source of noise; hence the performance analysis considers only MAI, and other receiver noise is neglected.
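For context, the defining property of a Z channel is that only one input symbol can be corrupted: here a transmitted pulse is received correctly, while an empty chip can be turned into a pulse by MAI. A two-line illustration with placeholder numbers (not the paper's analysis):

```python
# Average bit error probability of a Z channel where a "0" flips to "1" with probability p
# and a "1" is always received correctly.
def z_channel_error_probability(p, prob_zero=0.5):
    return prob_zero * p

print(z_channel_error_probability(0.02))   # 0.01 for equiprobable data bits
```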
Abstract:
Fully structured and mature open source spatial and temporal analysis technology appears to be the carrier of the future for planning of natural resources, especially in developing nations. This technology has gained enormous momentum because of its technical superiority, affordability and ability to draw expertise from all sections of society. Sustainable development of a region depends on the integrated planning approaches adopted in decision making, which require timely and accurate spatial data. With increased developmental programmes, the need for appropriate decision support systems has grown, in order to analyse and visualise decisions associated with the spatial and temporal aspects of natural resources. In this regard, Geographic Information Systems (GIS) along with remote sensing data support applications that involve spatial and temporal analysis of digital thematic maps and remotely sensed images. Open source GIS would help in wide-scale applications involving decisions at various hierarchical levels (for example, from village panchayat to planning commission) on economic viability and social acceptance, apart from technical feasibility. GRASS (Geographic Resources Analysis Support System, http://wgbis.ces.iisc.ernet.in/grass) is an open source GIS that works on the Linux platform (freeware), but most of its applications are driven by command-line arguments, necessitating a user-friendly and cost-effective graphical user interface (GUI). Keeping these aspects in mind, the Geographic Resources Decision Support System (GRDSS) has been developed with functionality such as raster, topological vector, image processing, statistical analysis, geographical analysis and graphics production. It operates through a GUI developed in Tcl/Tk (Tool Command Language / Toolkit) under Linux as well as through a shell in X-Windows. GRDSS includes options such as import/export of different data formats, display, digital image processing, map editing, raster analysis, vector analysis, point analysis and spatial query, which are required for regional planning tasks such as watershed analysis and landscape analysis. It is customised to the Indian context with an option to extract individual bands from IRS (Indian Remote Sensing Satellites) data, which is in BIL (Band Interleaved by Lines) format. The integration of PostgreSQL (a freeware) in GRDSS provides an efficient database management system.
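As an aside on the data format mentioned above, extracting a single band from a BIL (Band Interleaved by Lines) file reduces to a reshape once the image dimensions are known; a generic NumPy sketch (the dimensions, band count and dtype below are placeholders, not IRS specifics, and this is not GRDSS code):

```python
# Generic BIL band extraction: in BIL layout each image row stores all bands
# line-by-line, so the raw file reshapes to (rows, bands, cols).
import numpy as np

def read_bil_band(path, rows, cols, num_bands, band, dtype=np.uint8):
    data = np.fromfile(path, dtype=dtype).reshape(rows, num_bands, cols)
    return data[:, band, :]          # 2-D array (rows x cols) for the requested band
```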
Abstract:
Context-sensitive points-to analysis is critical for several program optimizations. However, as the number of contexts grows exponentially, the storage requirements of the analysis increase tremendously for large programs, making the analysis non-scalable. We propose a scalable flow-insensitive, context-sensitive, inclusion-based points-to analysis that uses a specially designed multi-dimensional Bloom filter to store the points-to information. Two key observations motivate our proposal: (i) points-to information (between pointer and object, and between pointer and pointer) is sparse, and (ii) moving from an exact to an approximate representation of points-to information only leads to reduced precision without affecting the correctness of the (may-points-to) analysis. By using an approximate representation, a multi-dimensional Bloom filter can significantly reduce the memory requirements with a probabilistic bound on the loss in precision. Experimental evaluation on SPEC 2000 benchmarks and two large open source programs reveals that with an average storage requirement of 4 MB, our approach achieves almost the same precision (98.6%) as the exact implementation. By increasing the average memory to 27 MB, it achieves precision up to 99.7% for these benchmarks. Using Mod/Ref analysis as the client, we find that the client analysis is often unaffected even when there is some loss of precision in the points-to representation. The NoModRef percentage is within 2% of the exact analysis while requiring 4 MB (maximum 15 MB) of memory and less than 4 minutes on average for the points-to analysis. Another major advantage of our technique is that it allows precision to be traded off against the memory usage of the analysis.
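A plain (single-dimension) Bloom filter already illustrates why an approximate representation is safe for a may-points-to analysis: false positives only cost precision, never correctness. The paper's specially designed multi-dimensional filter is not reproduced here; the following is only a toy sketch of the underlying idea with hypothetical pointer and object names:

```python
# Toy Bloom filter storing (pointer, object) may-point-to facts approximately.
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1 << 20, num_hashes=4):
        self.m, self.k = num_bits, num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "little") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

pts = BloomFilter()
pts.add(("p", "heap_obj_3"))            # record: p may point to heap_obj_3
print(("p", "heap_obj_3") in pts)       # True
print(("q", "heap_obj_3") in pts)       # almost surely False (false positives possible)
```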
Abstract:
In this paper we analyze a novel Micro Opto Electro Mechanical Systems (MOEMS) race track resonator based vibration sensor. In this vibration sensor, the straight portion of a race track resonator is located at the foot of a cantilever beam with a proof mass. As the beam deflects due to vibration, the stress-induced refractive index change in the waveguide located over the beam leads to a wavelength shift that provides the measure of vibration. A wavelength shift of 3.19 pm/g over a range of 280 g has been obtained for a cantilever beam of 1750 μm × 450 μm × 20 μm. The maximum (breakdown) acceleration for these dimensions is 2900 g when a safety factor of 2 is taken into account. Since the wavelength of operation is around 1.55 μm, hybrid integration of the source and detector is possible on the same substrate. The sensor is also less susceptible to noise, since the wavelength shift provides the sensor signal. With suitable design, this type of sensor can be used for aerospace applications and other harsh environments.
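By way of a quick check on the numbers quoted above, the stated sensitivity corresponds to a total wavelength shift of about 3.19 pm/g × 280 g ≈ 0.89 nm at the full-scale acceleration.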
Abstract:
This paper considers the high-rate performance of channel-optimized source coding for noisy discrete symmetric channels with random index assignment. Specifically, with mean squared error (MSE) as the performance metric, an upper bound on the asymptotic (i.e., high-rate) distortion is derived by assuming a general structure on the codebook. This structure enables extension of the analysis of the channel-optimized source quantizer to one with a singular point density: for channels with small errors, the point density that minimizes the upper bound is continuous, while as the error rate increases, the point density becomes singular. The extent of the singularity is also characterized. The accuracy of the expressions obtained is verified through Monte Carlo simulations.
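The Monte Carlo check mentioned at the end can be set up generically as follows (a plain scalar quantizer with a random index assignment over a binary symmetric channel; this sketches the simulation methodology only, not the paper's channel-optimized design or its point density):

```python
# End-to-end MSE of a scalar quantizer over a binary symmetric channel with a
# random index assignment (codebook size must be a power of two for the bit-flip model).
import numpy as np

def mse_over_noisy_channel(codebook, samples, bit_error_rate, rng):
    codebook = np.asarray(codebook, float)
    n, bits = codebook.size, int(np.log2(codebook.size))
    assignment = rng.permutation(n)          # random index assignment
    inverse = np.argsort(assignment)         # decoder's inverse mapping
    idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
    tx = assignment[idx]                     # transmitted channel indices
    flips = rng.random((samples.size, bits)) < bit_error_rate
    rx = tx ^ (flips * (1 << np.arange(bits))).sum(axis=1)
    return np.mean((samples - codebook[inverse[rx]]) ** 2)

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
print(mse_over_noisy_channel(np.linspace(-3, 3, 16), x, bit_error_rate=0.01, rng=rng))
```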
Abstract:
Mufflers with at least one acoustically absorptive duct are generally called dissipative mufflers. Generally, for want of a systems approach, these mufflers are characterized by the transmission loss of the lined duct with overriding corrections for the terminations, mean flow, etc. In this article, it is proposed that the dissipative duct should be integrated with the other muffler elements, the source impedance and the radiation impedance by means of the transfer matrix approach. Towards this end, the transfer matrix for a rectangular duct with mean flow is derived here for the least attenuated mode. Mean flow introduces a coupling between the transverse wave numbers and the axial wave number, the evaluation of which therefore calls for the simultaneous solution of two or three transcendental equations. This is done by means of a Newton-Raphson iteration scheme, which is illustrated here for square ducts lined with porous ceramic tiles.
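For orientation, the transfer (four-pole) matrix approach advocated above chains element matrices by matrix multiplication. The sketch below uses the standard uniform-duct matrix for a stationary medium as a placeholder where the paper's lined-duct-with-mean-flow matrix would go; the numerical values are arbitrary:

```python
# Cascading four-pole (transfer) matrices of muffler elements.
import numpy as np

def uniform_duct_matrix(k, L, Y):
    """Four-pole matrix of a uniform duct: wavenumber k, length L, characteristic impedance Y."""
    kL = k * L
    return np.array([[np.cos(kL), 1j * Y * np.sin(kL)],
                     [1j * np.sin(kL) / Y, np.cos(kL)]])

def cascade(*elements):
    """Overall four-pole matrix of elements in series (ordered from source to radiation end)."""
    total = np.eye(2, dtype=complex)
    for T in elements:
        total = total @ T
    return total

# Placeholder chain of two uniform ducts of different cross-section.
T_total = cascade(uniform_duct_matrix(k=10.0, L=0.1, Y=415.0),
                  uniform_duct_matrix(k=10.0, L=0.3, Y=415.0 / 4))
```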
Abstract:
Three-dimensional effects are a primary source of discrepancy between the measured values of automotive muffler performance and those predicted by plane wave theory at higher frequencies. The essentially exact method of (truncated) eigenfunction expansions for simple expansion chambers involves very complicated algebra, and the numerical finite element method requires large computation time and core storage. A simple numerical method is presented in this paper. It makes use of compatibility conditions for acoustic pressure and particle velocity at a number of equally spaced points in the planes of the junctions (or area discontinuities) to generate the required number of algebraic equations for evaluating the relative amplitudes of the various modes (eigenfunctions), the total number of which is proportional to the area ratio. The method is demonstrated by evaluating the four-pole parameters of rigid-walled simple expansion chambers of rectangular as well as circular cross-section for the case of a stationary medium. Computed values of transmission loss are compared with those computed by means of plane wave theory, in order to highlight the onset (cutting-on) of various higher order modes and their effect on the transmission loss of the muffler. These are also compared with predictions of the finite element method (FEM) and exact methods involving eigenfunction expansions, in order to demonstrate the accuracy of the simple method presented here.
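The plane wave theory used as the baseline comparison gives the well-known transmission loss of a simple expansion chamber with area ratio m and length L, TL = 10 log10[1 + (1/4)(m - 1/m)^2 sin^2(kL)]; a direct implementation (sound speed and the example values below are placeholders, not the paper's configurations):

```python
# Plane-wave transmission loss of a simple expansion chamber (no higher-order modes).
import numpy as np

def expansion_chamber_tl(freq_hz, m, L, c=343.0):
    """TL in dB for area ratio m and chamber length L (metres) at the given frequencies."""
    k = 2 * np.pi * np.asarray(freq_hz, float) / c
    return 10 * np.log10(1 + 0.25 * (m - 1 / m) ** 2 * np.sin(k * L) ** 2)

print(expansion_chamber_tl([125, 250, 500], m=9.0, L=0.3))
```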