185 results for linear approximation method
at Indian Institute of Science - Bangalore - India
Abstract:
A modified linear prediction (MLP) method is proposed in which the reference sensor is optimally located on the extended line of the array. The criterion of optimality is the minimization of the prediction error power, where the prediction error is defined as the difference between the reference sensor and the weighted array outputs. It is shown that the L2-norm of the least-squares array weights attains a minimum value for the optimum spacing of the reference sensor, subject to some soft constraint on signal-to-noise ratio (SNR). How this minimum norm property can be used for finding the optimum spacing of the reference sensor is described. The performance of the MLP method is studied and compared with that of the linear prediction (LP) method using resolution, detection bias, and variance as the performance measures. The study reveals that the MLP method performs much better than the LP technique.
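As an illustration of the minimum-norm idea described above, the sketch below (array geometry, SNR, source direction and snapshot count are assumptions; this is not the authors' implementation) scans the spacing of a reference sensor placed on the extended line of the array and reports the spacing at which the L2-norm of the least-squares prediction weights is smallest.

    import numpy as np

    # Illustrative sketch: scan the reference-sensor spacing and pick the spacing
    # at which the L2-norm of the least-squares prediction weights is minimum.
    rng = np.random.default_rng(0)
    M = 8                       # number of array sensors (assumed)
    d0 = 0.5                    # inter-sensor spacing in wavelengths (assumed)
    theta = np.deg2rad(20.0)    # assumed source direction
    snr_db = 10.0
    snapshots = 2000

    array_pos = np.arange(M) * d0
    sig = (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)) / np.sqrt(2)
    sig *= 10 ** (snr_db / 20)  # unit-variance sensor noise is added below

    def weight_norm(ref_spacing):
        """L2-norm of the least-squares weights that predict the reference output."""
        ref_pos = array_pos[-1] + ref_spacing
        a = np.exp(2j * np.pi * array_pos * np.sin(theta))       # array steering vector
        X = np.outer(a, sig)
        X += (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape)) / np.sqrt(2)
        x_ref = np.exp(2j * np.pi * ref_pos * np.sin(theta)) * sig
        x_ref += (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)) / np.sqrt(2)
        w, *_ = np.linalg.lstsq(X.T, x_ref, rcond=None)          # least-squares array weights
        return np.linalg.norm(w)

    spacings = np.linspace(0.1, 3.0, 30) * d0
    norms = [weight_norm(s) for s in spacings]
    print("spacing with minimum weight norm:", spacings[int(np.argmin(norms))])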
Abstract:
Dynamic systems involving convolution integrals with decaying kernels, of which fractionally damped systems form a special case, are non-local in time and hence infinite dimensional. Straightforward numerical solution of such systems up to time t needs O(t^2) computations owing to the repeated evaluation of integrals over intervals that grow like t. Finite-dimensional and local approximations are thus desirable. We present here an approximation method which first rewrites the evolution equation as a coupled infinite-dimensional system with no convolution, and then uses Galerkin approximation with finite elements to obtain linear, finite-dimensional, constant-coefficient approximations of the convolution. This paper is a broad generalization, based on a new insight, of our prior work with fractional-order derivatives (Singh & Chatterjee 2006 Nonlinear Dyn. 45, 183-206). In particular, the decaying kernels we can address are now generalized to the Laplace transforms of known functions; of these, the power-law kernel of fractional-order differentiation is a special case. The approximation can be refined easily. The local nature of the approximation allows numerical solution up to time t with O(t) computations. Examples with several different kernels show excellent performance. A key feature of our approach is that the dynamic system in which the convolution integral appears is itself approximated using another system, as distinct from numerically approximating just the solution for the given initial values; this allows non-standard uses of the approximation, e.g. in stability analyses.
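A minimal sketch of the underlying idea follows (the kernel, input and quadrature below are assumptions; a crude quadrature in s stands in for the paper's Galerkin/finite-element refinement): if the kernel is the Laplace transform of a known function g, the convolution memory can be traded for a finite set of local internal states, so marching to time t costs O(t) work instead of O(t^2).

    import numpy as np

    # k(t) = integral of exp(-s*t)*g(s) ds; each quadrature node s_i contributes an
    # internal state u_i with u_i' = -s_i*u_i + x(t), and the convolution is a
    # weighted sum of the states.
    alpha = 0.5                                                   # fractional-integral order (assumed)
    g = lambda s: s ** (-alpha) * np.sin(alpha * np.pi) / np.pi   # gives k(t) = t^(alpha-1)/Gamma(alpha)

    s_nodes = np.geomspace(1e-2, 1e3, 40)    # quadrature nodes in s (assumed range)
    w_nodes = np.gradient(s_nodes)           # crude quadrature weights

    x = lambda t: np.sin(t)                  # assumed input signal
    dt, T = 1e-3, 10.0
    u = np.zeros_like(s_nodes)               # internal states replacing the convolution memory
    ts = np.arange(0.0, T, dt)
    conv = np.empty_like(ts)
    for n, t in enumerate(ts):
        conv[n] = np.sum(w_nodes * g(s_nodes) * u)   # local approximation of the convolution
        u += dt * (-s_nodes * u + x(t))              # forward-Euler update of the states
    print("approximate fractional integral of sin at t ~ 10:", conv[-1])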
Abstract:
In our earlier work [1], we employed MVDR (minimum variance distortionless response) based spectral estimation instead of the modified linear prediction method [2] for pitch modification. Here, we use the Bauer method of MVDR spectral factorization, leading to a causal inverse filter rather than the noncausal filter setup of MVDR spectral estimation [1]. Further, this is employed to obtain the source (or residual) signal from pitch-synchronous speech frames. The residual signal is resampled using DCT/IDCT depending on the target pitch scale factor. Finally, forward filters realized from the above factorization are used to obtain the pitch-modified speech. The modified speech is evaluated subjectively by 10 listeners and mean opinion scores (MOS) are tabulated. Further, the modified bark spectral distortion measure is also computed for objective evaluation of performance. We find that the proposed algorithm performs better than time-domain pitch synchronous overlap [3] and the modified-LP method [2]. A good MOS score is achieved with the proposed algorithm, with its causal inverse and forward filter setup, compared to [1].
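A sketch of just one step mentioned above follows: resampling a pitch-synchronous residual frame via DCT/IDCT according to a target pitch scale factor. The frame contents, the scale factor and the energy compensation are assumptions, and the MVDR/Bauer filtering stages of the paper are not reproduced here.

    import numpy as np
    from scipy.fft import dct, idct

    def resample_residual(frame, pitch_scale):
        """Change the frame length by pitch_scale using DCT truncation/zero-padding."""
        N = len(frame)
        M = int(round(N / pitch_scale))          # new pitch-period length
        C = dct(frame, norm='ortho')
        C_new = C[:M] if M <= N else np.pad(C, (0, M - N))
        return idct(C_new, norm='ortho') * np.sqrt(M / N)   # assumed energy compensation

    frame = np.hanning(160) * np.random.default_rng(0).standard_normal(160)  # stand-in residual frame
    shorter = resample_residual(frame, pitch_scale=1.2)      # raise pitch by 20%
    print(len(frame), "->", len(shorter))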
Abstract:
A methodology is presented for the synthesis of analog circuits using piecewise linear (PWL) approximations. The function to be synthesized is divided into PWL segments such that each segment can be realized using elementary MOS current-mode programmable-gain circuits. When a number of these elementary current-mode circuits are connected in parallel, a piecewise linear approximation of any arbitrary analog function can be realized within the allowed approximation error bounds. Simulation results show a close agreement between the desired function and the synthesized output. The number of PWL segments used for approximation, and hence the circuit area, is determined by the required accuracy and the smoothness of the resulting function.
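A hedged sketch of the segmentation step follows (the target function and error bound are assumptions, and the circuit mapping itself is not modeled): greedily split a function into piecewise-linear segments so that each segment stays within the allowed approximation error; each segment would then correspond to one elementary programmable-gain stage.

    import numpy as np

    f = lambda x: np.tanh(3 * x)            # assumed target analog function
    xs = np.linspace(-1.0, 1.0, 2001)
    tol = 0.01                              # allowed approximation error bound (assumed)

    segments = []                           # (x_start, x_end, slope, intercept)
    i = 0
    while i < len(xs) - 1:
        j, best = i + 1, i + 1
        while j < len(xs):                  # extend the segment while the error bound holds
            x0, x1 = xs[i], xs[j]
            slope = (f(x1) - f(x0)) / (x1 - x0)
            seg = f(x0) + slope * (xs[i:j + 1] - x0)
            if np.max(np.abs(seg - f(xs[i:j + 1]))) <= tol:
                best = j
                j += 1
            else:
                break
        x0, x1 = xs[i], xs[best]
        slope = (f(x1) - f(x0)) / (x1 - x0)
        segments.append((x0, x1, slope, f(x0) - slope * x0))
        i = best
    print("PWL segments needed within the error bound:", len(segments))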
Abstract:
This paper presents a chance-constrained linear programming formulation for the operation of a multipurpose reservoir. The release policy is defined by a chance constraint that the probability of the irrigation release in any period equalling or exceeding the irrigation demand is at least equal to a specified value P (called the reliability level). The model determines the maximum annual hydropower produced while meeting the irrigation demand at the specified reliability level. The model considers variation in reservoir water level elevation and also the operating range within which the turbine operates. A linear approximation of the nonlinear power production function is assumed and the solution obtained within a specified tolerance limit. The inflow into the reservoir is considered random. The chance constraint is converted into its deterministic equivalent using a linear decision rule and the inflow probability distribution. The model application is demonstrated through a case study.
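As a generic illustration of how such a conversion works (the notation and this particular decision rule are assumptions, not the paper's exact formulation), suppose the storage S_{t-1} at the start of period t is known, the inflow Q_t is random with continuous CDF F_{Q_t}, and the release follows the linear decision rule R_t = S_{t-1} + Q_t - b_t with deterministic decision variable b_t. Then

    \Pr\{\, R_t \ge D_t \,\} \ge P
    \quad\Longleftrightarrow\quad
    S_{t-1} - b_t \;\ge\; D_t - F_{Q_t}^{-1}(1 - P),

so the chance constraint becomes a deterministic linear constraint that the LP can handle directly.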
Abstract:
This paper presents three methodologies for determining optimum locations and magnitudes of reactive power compensation in power distribution systems. Method I and Method II are suitable for complex distribution systems with a combination of both radial and ring-main feeders and having different voltage levels. Method III is suitable for low-tension, single-voltage-level radial feeders. Method I is based on an iterative scheme with successive power-flow analyses, with formulation and solution of the optimization problem using linear programming. Method II and Method III are essentially based on the steady-state performance of distribution systems. These methods are simple to implement and yield satisfactory results comparable with those of Method I. The proposed methods have been applied to a few distribution systems, and results obtained for two typical systems are presented for illustration.
Abstract:
Head-on infall of two compact objects with arbitrary mass ratio is investigated using the multipolar post-Minkowskian approximation method. At the third post-Newtonian order the energy flux, in addition to the instantaneous contributions, also includes hereditary contributions consisting of the gravitational-wave tails, tails-of-tails, and the tail-squared terms. The results are given both for infall from infinity and for infall from a finite distance. These analytical expressions should be useful for comparison with high-accuracy numerical relativity results in the regime where post-Newtonian approximations are valid.
Abstract:
The basic characteristic of a chaotic system is its sensitivity to infinitesimal changes in its initial conditions. A limit to predictability in a chaotic system arises mainly due to this sensitivity and also due to the ineffectiveness of the model to reveal the underlying dynamics of the system. In the present study, an attempt is made to quantify the uncertainties involved and thereby improve the predictability by adopting multivariate nonlinear ensemble prediction. Daily rainfall data of the Malaprabha basin, India, for the period 1955-2000 are used for the study. The series is found to exhibit a low-dimensional chaotic nature, with the dimension varying from 5 to 7. A multivariate phase space is generated, considering a climate data set of 16 variables. The chaotic nature of each of these variables is confirmed using the false nearest neighbor method. The redundancy, if any, of this atmospheric data set is further removed by employing the principal component analysis (PCA) method, thereby reducing it to eight principal components (PCs). This multivariate series (rainfall along with the eight PCs) is found to exhibit a low-dimensional chaotic nature with dimension 10. Nonlinear prediction employing the local approximation method is done using the univariate series (rainfall alone) and the multivariate series for different combinations of embedding dimensions and delay times. The uncertainty in initial conditions is thus addressed by reconstructing the phase space using different combinations of parameters. The ensembles generated from multivariate predictions are found to be better than those from univariate predictions. The uncertainty in predictions is decreased, or in other words, predictability is increased, by adopting multivariate nonlinear ensemble prediction. The restriction on predictability of a chaotic series can thus be altered by quantifying the uncertainty in the initial conditions and also by including other possible variables which may influence the system. (C) 2011 Elsevier B.V. All rights reserved.
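A minimal sketch of the prediction step follows (a logistic-map series and the parameter values are assumptions, standing in for the rainfall and climate series): delay-embedding phase-space reconstruction followed by local-approximation prediction, i.e. predicting the next value from the nearest neighbours of the current state in the reconstructed phase space.

    import numpy as np

    def embed(series, m, tau):
        """Delay embedding: rows are state vectors of dimension m with delay tau."""
        n = len(series) - (m - 1) * tau
        return np.column_stack([series[i * tau: i * tau + n] for i in range(m)])

    def local_predict(series, m=5, tau=1, k=10):
        """Predict one step ahead of the last embedded state (zeroth-order local model)."""
        X = embed(series, m, tau)
        targets = series[(m - 1) * tau + 1:]       # value following each embedded state
        states, last = X[:len(targets)], X[-1]
        d = np.linalg.norm(states - last, axis=1)
        nn = np.argsort(d)[:k]                     # k nearest neighbours in phase space
        return targets[nn].mean()

    x = np.zeros(2000)
    x[0] = 0.3
    for t in range(1999):                          # logistic map as a stand-in chaotic series
        x[t + 1] = 3.9 * x[t] * (1 - x[t])
    print("predicted:", local_predict(x), " actual:", 3.9 * x[-1] * (1 - x[-1]))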
Abstract:
Alopex is a correlation-based, gradient-free optimization technique useful in many learning problems. However, there are no analytical results on the asymptotic behavior of this algorithm. This article presents a new version of Alopex that can be analyzed using techniques of the two-timescale stochastic approximation method. It is shown that the algorithm asymptotically behaves like a gradient-descent method, though it does not need (or estimate) any gradient information. It is also shown, through simulations, that the algorithm is quite effective.
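A minimal sketch in the spirit of Alopex follows (the probability form, step size, cost function and annealing schedule are assumptions; the article's two-timescale variant is not reproduced): every parameter moves by a small +/- step, and the sign of the next step is biased by the correlation between the previous step and the previous change in cost, so no gradient is computed or estimated anywhere.

    import numpy as np

    rng = np.random.default_rng(0)
    target = np.array([1.0, -2.0, 0.5])
    cost = lambda w: np.sum((w - target) ** 2)       # assumed cost function

    w = np.zeros(3)
    delta, T = 0.01, 5e-4                            # step size and temperature (assumed)
    dw = rng.choice([-delta, delta], size=3)
    E, dE = cost(w), 0.0
    for _ in range(10000):
        corr = dw * dE                               # correlation of last step and last cost change
        p_same = 1.0 / (1.0 + np.exp(np.clip(corr / T, -50, 50)))   # prob. of keeping the sign
        dw = np.where(rng.random(3) < p_same, dw, -dw)
        w = w + dw
        E_new = cost(w)
        dE, E = E_new - E, E_new
        T = max(T * 0.9995, 1e-5)                    # slow temperature annealing
    print("Alopex-like search ends near:", np.round(w, 2), " cost:", round(cost(w), 4))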
Abstract:
Accurate system planning and performance evaluation requires knowledge of the joint impact of scheduling, interference, and fading. However, current analyses either require costly numerical simulations or make simplifying assumptions that limit the applicability of the results. In this paper, we derive analytical expressions for the spectral efficiency of cellular systems that use either the channel-unaware but fair round-robin scheduler or the greedy, channel-aware but unfair maximum signal-to-interference-ratio scheduler. As is the case in real deployments, non-identical co-channel interference at each user, both Rayleigh fading and lognormal shadowing, and limited modulation constellation sizes are accounted for in the analysis. We show that using a simple moment generating function-based lognormal approximation technique and an accurate Gaussian Q-function approximation leads to results that match simulations well. These results are more accurate than earlier results that instead used the moment-matching Fenton-Wilkinson approximation method and bounds on the Q function. The spectral efficiency of cellular systems is strongly influenced by the channel scheduler and the small constellation size that is typically used in third-generation cellular systems.
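A hedged sketch of the lognormal-approximation idea follows (the interferer parameters and matching points are assumptions, and this is not the paper's derivation): approximate the sum of independent lognormal interference powers by a single lognormal whose MGF matches the MGF of the sum at two test points, in the spirit of the MGF-based technique referred to above.

    import numpy as np
    from numpy.polynomial.hermite import hermgauss
    from scipy.optimize import fsolve

    nodes, weights = hermgauss(24)

    def lognormal_mgf(s, mu, sigma):
        """E[exp(-s X)] for X = exp(N(mu, sigma^2)), via Gauss-Hermite quadrature."""
        return np.sum(weights * np.exp(-s * np.exp(np.sqrt(2) * sigma * nodes + mu))) / np.sqrt(np.pi)

    interferers = [(0.0, 1.0), (-0.5, 1.5), (0.3, 0.8)]   # assumed (mu, sigma) per interferer
    s_points = (0.2, 1.0)                                 # assumed MGF matching points

    def mgf_of_sum(s):
        return np.prod([lognormal_mgf(s, m, sg) for m, sg in interferers])

    def residual(params):
        mu, sigma = params
        return [lognormal_mgf(s, mu, sigma) - mgf_of_sum(s) for s in s_points]

    mu_hat, sigma_hat = fsolve(residual, x0=[1.0, 1.0])
    print("approximating lognormal (mu, sigma):", round(mu_hat, 3), round(sigma_hat, 3))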
Abstract:
This paper proposes a new approach for solving the state estimation problem. The approach is aimed at producing a robust estimator that rejects bad data, even if they are associated with leverage-point measurements. This is achieved by solving a sequence of Linear Programming (LP) problems. Optimization is carried out via a new algorithm which is a combination of the "upper bound optimization technique" and "an improved algorithm for discrete linear approximation". In this formulation of the LP problem, in addition to the constraints corresponding to the measurement set, constraints corresponding to bounds on the state variables are also included, which makes the LP problem more effective in rejecting bad data, even if they are associated with leverage-point measurements. Results of the proposed estimator on the IEEE 39-bus system and a 24-bus EHV equivalent system of the southern Indian grid are presented for illustration.
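A hedged sketch of one such LP follows (a small linear measurement model and the state bounds are assumptions; the paper solves a sequence of such LPs on the nonlinear power-system model): least absolute value state estimation posed as an LP, with explicit bounds on the state variables included as extra constraints.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, m = 3, 8
    H = rng.standard_normal((m, n))                  # assumed measurement matrix
    x_true = np.array([1.0, -0.5, 0.2])
    z = H @ x_true + 0.01 * rng.standard_normal(m)
    z[2] += 5.0                                      # one gross error (bad data)

    # variables: [x, u >= 0, v >= 0] with residual split z - H x = u - v
    c = np.concatenate([np.zeros(n), np.ones(2 * m)])        # minimise sum of |residuals|
    A_eq = np.hstack([H, np.eye(m), -np.eye(m)])
    bounds = [(-2.0, 2.0)] * n + [(0, None)] * (2 * m)       # assumed state-variable bounds
    res = linprog(c, A_eq=A_eq, b_eq=z, bounds=bounds)
    print("LAV estimate:", np.round(res.x[:n], 3), " true state:", x_true)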
Abstract:
Dynamic Voltage and Frequency Scaling (DVFS) offers a huge potential for design trade-offs involving energy, power, temperature and performance of computing systems. In this paper, we evaluate three different DVFS schemes - our enhancement of a Petri net performance model based DVFS method for sequential programs to stream programs, a simple profile-based Linear Scaling method, and an existing hardware-based DVFS method for multithreaded applications - using multithreaded stream applications, in a full-system Chip Multiprocessor (CMP) simulator. From our evaluation, we find that the software-based methods achieve significant Energy/Throughput^2 (ET^-2) improvements. The hardware-based scheme degrades performance heavily and suffers ET^-2 loss. Our results indicate that the simple profile-based scheme achieves the benefits of the complex Petri net based scheme for stream programs, and present a strong case for the need for independent voltage/frequency control for different cores of CMPs, which is lacking in most state-of-the-art CMPs. This is in contrast to the conclusions of a recent evaluation of per-core DVFS schemes for multithreaded applications on CMPs.
Abstract:
Solder joints in electronic packages undergo thermo-mechanical cycling, resulting in nucleation of micro-cracks, especially at the solder/bond-pad interface, which may lead to fracture of the joints. The fracture toughness of a solder joint depends on material properties, process conditions and service history, as well as strain rate and mode-mixity. This paper reports on a methodology for determining the mixed-mode fracture toughness of solder joints with an interfacial starter-crack, using a modified compact mixed mode (CMM) specimen containing an adhesive joint. Expressions for the stress intensity factor (K) and strain energy release rate (G) are developed using a combination of experiments and finite element (FE) analysis. The methodology comprises (i) obtaining crack-length-dependent geometry factors for the modified CMM sample under far-field mode I and mode II conditions (f_1(a) and f_2(a)) via the crack-tip opening displacement (CTOD)-based linear extrapolation method, (ii) generation of a master plot to determine a_c, and (iii) computation of K and G to analyze the fracture behavior of the joints. The developed methodology was verified using J-integral calculations, and was also used to calculate experimental fracture toughness values of a few lead-free solder-Cu joints. (C) 2014 Elsevier Ltd. All rights reserved.
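As generic background for how K and G enter such an analysis (these are the standard plane-strain relations for a crack in a homogeneous, isotropic body, not the paper's specific expressions for the adhesive joint), the strain energy release rate and the mode mixity follow from the stress intensity factors as

    G = (K_I^2 + K_{II}^2) / E',  with  E' = E / (1 - \nu^2),
    \qquad \psi = \tan^{-1}(K_{II} / K_I).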
Abstract:
This study concerns the relationship between the power-law recession coefficient k (in -dQ/dt = kQ^alpha, Q being discharge at the basin outlet) and past average discharge Q_N (where N is the temporal distance from the center of the selected time span in the past to the recession peak), which serves as a proxy for the past storage state of the basin. The strength of the k-Q_N relationship is characterized by the coefficient of determination R_N^2, which is expected to indicate the basin's ability to hold water for N days. The main objective of this study is to examine how the R_N^2 value of a basin is related to its physical characteristics. For this purpose, we use streamflow data from 358 basins in the United States and select 18 physical parameters for each basin. First, we transform the physical parameters into mutually independent principal components. Then we employ the multiple linear regression method to construct a model of R_N^2 in terms of the principal components. Furthermore, we employ the step-wise multiple linear regression method to identify the dominant catchment characteristics that influence R_N^2 and their directions of influence. Our results indicate that R_N^2 is appreciably related to catchment characteristics. Particularly, it is noteworthy that the coefficient of determination of the relationship between R_N^2 and the catchment characteristics is 0.643 for N = 45. We found that topographical characteristics of a basin are the most dominant factors in controlling the value of R_N^2. Our results suggest that it may be possible to tell about the water-holding capacity of a basin by knowing just a few of its physical characteristics. (C) 2015 Elsevier B.V. All rights reserved.
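A hedged sketch of the regression step follows, using synthetic data rather than the study's basin data set: transform the physical parameters into principal components, then build a model of R_N^2 by forward step-wise multiple linear regression on those components. The improvement threshold is an assumption.

    import numpy as np

    rng = np.random.default_rng(0)
    n_basins, n_params = 358, 18
    P = rng.standard_normal((n_basins, n_params))                     # stand-in physical parameters
    y = 0.6 * P[:, 0] - 0.3 * P[:, 3] + 0.1 * rng.standard_normal(n_basins)   # stand-in R_N^2

    Z = (P - P.mean(0)) / P.std(0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    PC = Z @ Vt.T                                                     # mutually independent predictors

    def r2(X, y):
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return 1 - (y - X1 @ beta).var() / y.var()

    selected, remaining, best_r2, improved = [], list(range(n_params)), 0.0, True
    while improved and remaining:                                     # forward step-wise selection
        improved = False
        score, j = max((r2(PC[:, selected + [k]], y), k) for k in remaining)
        if score > best_r2 + 1e-3:                                    # assumed improvement threshold
            selected.append(j)
            remaining.remove(j)
            best_r2, improved = score, True
    print("selected components:", selected, " model R^2:", round(best_r2, 3))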
Abstract:
Scaling approaches are widely used by hydrologists for Regional Frequency Analysis (RFA) of floods at ungauged/sparsely gauged site(s) in river basins. This paper proposes a Recursive Multi-scaling (RMS) approach to RFA that overcomes limitations of conventional simple- and multi-scaling approaches. The approach involves identification of a separate set of attributes corresponding to each of the sites (considered in the study area/region) in a recursive manner according to their importance, and utilizing those attributes to construct effective regional regression relationships to estimate statistical raw moments (SMs) of peak flows. The SMs are then utilized to arrive at parameters of the flood frequency distribution and quantile estimate(s) corresponding to target return period(s). The effectiveness of the RMS approach in arriving at flood quantile estimates for ungauged sites is demonstrated through a leave-one-out cross-validation experiment on watersheds in Indiana State, USA. Results indicate that the approach outperforms the index-flood based Region-of-Influence approach, simple- and multi-scaling approaches, and a multiple linear regression method. (C) 2015 Elsevier B.V. All rights reserved.
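A hedged sketch of the moment-regression-to-quantile chain follows (synthetic attributes and a lognormal distribution are assumptions standing in for the paper's recursive attribute selection, regional regressions and flood frequency distribution): regress the first two raw moments of peak flow on basin attributes, then turn the moments predicted for an ungauged site into a quantile estimate for a target return period.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_gauged, n_attr = 40, 3
    A = rng.uniform(1, 10, size=(n_gauged, n_attr))              # assumed basin attributes
    m1 = 5.0 * A[:, 0] + rng.normal(0, 2, n_gauged)              # sample mean of peak flows
    m2 = 1.2 * m1 ** 2 + rng.normal(0, 5, n_gauged)              # sample raw second moment

    X = np.column_stack([np.ones(n_gauged), A])
    b1, *_ = np.linalg.lstsq(X, m1, rcond=None)                  # regional regression for SM 1
    b2, *_ = np.linalg.lstsq(X, m2, rcond=None)                  # regional regression for SM 2

    a_target = np.array([1.0, 6.0, 2.5, 4.0])                    # [1, attributes] of the ungauged site
    m1_hat, m2_hat = a_target @ b1, a_target @ b2

    # moments -> lognormal parameters -> 100-year quantile (return period assumed)
    var_hat = max(m2_hat - m1_hat ** 2, 1e-6)
    sigma2 = np.log(1 + var_hat / m1_hat ** 2)
    mu = np.log(m1_hat) - sigma2 / 2
    q100 = stats.lognorm.ppf(1 - 1 / 100, s=np.sqrt(sigma2), scale=np.exp(mu))
    print("estimated 100-year flood quantile at the ungauged site:", round(q100, 1))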