81 results for hybrid prediction method
at Indian Institute of Science - Bangalore - India
Abstract:
A modified linear prediction (MLP) method is proposed in which the reference sensor is optimally located on the extended line of the array. The criterion of optimality is the minimization of the prediction error power, where the prediction error is defined as the difference between the reference sensor and the weighted array outputs. It is shown that the L2-norm of the least-squares array weights attains a minimum value for the optimum spacing of the reference sensor, subject to some soft constraint on signal-to-noise ratio (SNR). How this minimum norm property can be used for finding the optimum spacing of the reference sensor is described. The performance of the MLP method is studied and compared with that of the linear prediction (LP) method using resolution, detection bias, and variance as the performance measures. The study reveals that the MLP method performs much better than the LP technique.
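The key mechanism above, finding the reference-sensor spacing at which the L2-norm of the least-squares prediction weights is minimized, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the array geometry, source direction, SNR, and candidate spacings below are all toy assumptions, and the weights are estimated from simulated snapshots via the sample covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 2000            # array sensors, snapshots (toy values)
theta = np.deg2rad(20.0)  # assumed source direction
snr_db = 10.0

# Half-wavelength ULA; the reference sensor sits a distance d (in
# wavelengths) beyond the array end, on the extended line of the array.
def snapshots(d):
    pos = np.concatenate([np.arange(M) * 0.5, [(M - 1) * 0.5 + d]])
    a = np.exp(2j * np.pi * pos * np.sin(theta))          # steering vector
    s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    n = (rng.standard_normal((M + 1, N)) + 1j * rng.standard_normal((M + 1, N))) / np.sqrt(2)
    x = np.outer(a, s) * 10 ** (snr_db / 20) + n
    return x[:M], x[M]                                    # array outputs, reference output

# Least-squares weights w minimising the prediction error power
# E|x_ref - w^H x|^2, i.e. w = R^{-1} r with R the array covariance
# and r the array/reference cross-correlation.
def weight_norm(d):
    x, x_ref = snapshots(d)
    R = x @ x.conj().T / N
    r = x @ x_ref.conj() / N
    w = np.linalg.solve(R, r)
    return np.linalg.norm(w)

spacings = np.linspace(0.25, 2.0, 8)     # candidate reference spacings
norms = [weight_norm(d) for d in spacings]
d_opt = spacings[int(np.argmin(norms))]  # minimum-norm spacing ~ optimum spacing
```

The spacing search simply scans the weight-norm curve; the abstract's minimum-norm property says its minimizer coincides with the optimum reference spacing.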
Abstract:
In our earlier work [1], we employed MVDR (minimum variance distortionless response) based spectral estimation instead of the modified linear prediction method [2] for pitch modification. Here, we use the Bauer method of MVDR spectral factorization, which leads to a causal inverse filter rather than the noncausal filter setup of MVDR spectral estimation [1]. This filter is then used to obtain the source (or residual) signal from pitch-synchronous speech frames. The residual signal is resampled using the DCT/IDCT according to the target pitch scale factor. Finally, forward filters realized from the above factorization are used to obtain the pitch-modified speech. The modified speech is evaluated subjectively by 10 listeners, and mean opinion scores (MOS) are tabulated. A modified bark spectral distortion measure is also computed for objective evaluation of performance. We find that the proposed algorithm performs better than time-domain pitch-synchronous overlap-add [3] and the modified-LP method [2]. With its causal inverse- and forward-filter setup, the proposed algorithm also achieves a better MOS than [1].
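The DCT/IDCT resampling step can be sketched in a few lines: truncating the DCT coefficients shortens the residual frame (raising pitch), while zero-padding lengthens it (lowering pitch). This is a generic illustration of that idea, not the paper's exact procedure; the frame, scale factors, and energy rescaling below are assumptions.

```python
import numpy as np
from scipy.fft import dct, idct

def resample_residual(frame, pitch_factor):
    """Resample a pitch-synchronous residual frame via DCT/IDCT.

    DCT coefficients are truncated (pitch raised) or zero-padded
    (pitch lowered) so the IDCT yields a frame of the target length.
    """
    L = len(frame)
    L_new = max(1, int(round(L / pitch_factor)))
    c = dct(frame, norm='ortho')
    if L_new < L:
        c_new = c[:L_new]                  # drop high-order coefficients
    else:
        c_new = np.pad(c, (0, L_new - L))  # zero-pad the spectrum
    # Rescale so frame energy is roughly preserved across lengths.
    return idct(c_new, norm='ortho') * np.sqrt(L_new / L)

frame = np.sin(2 * np.pi * 5 * np.arange(100) / 100)  # toy residual frame
shorter = resample_residual(frame, 1.25)  # raise pitch: shorter period
longer = resample_residual(frame, 0.8)    # lower pitch: longer period
```

In the actual algorithm this resampled residual would then be passed through the causal forward filter from the Bauer factorization to synthesize the pitch-modified speech.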
Abstract:
The significance of treating rainfall as a chaotic system rather than a stochastic system for a better understanding of the underlying dynamics has been taken up by various studies recently. However, an important limitation of all these approaches is their dependence on a single method for identifying the chaotic nature and the parameters involved. Many of these approaches aim only at analyzing the chaotic nature, not at prediction. In the present study, an attempt is made to identify chaos using various techniques, and prediction is also carried out by generating ensembles in order to quantify the uncertainty involved. Daily rainfall data of three regions with contrasting characteristics (mainly in the spatial area covered), Malaprabha, Mahanadi and All-India, for the period 1955-2000 are used for the study. Auto-correlation and mutual information methods are used to determine the delay time for the phase space reconstruction. The optimum embedding dimension is determined using the correlation dimension, the false nearest neighbour algorithm, and also nonlinear prediction methods. The low embedding dimensions obtained from these methods indicate the existence of low-dimensional chaos in the three rainfall series. The correlation dimension method is applied to the phase-randomized and first-derivative versions of the data series to check whether the saturation of the dimension is due to the inherent linear correlation structure or to low-dimensional dynamics. The positive Lyapunov exponents obtained prove the exponential divergence of the trajectories and hence the unpredictability. A surrogate data test is also performed to further confirm the nonlinear structure of the rainfall series. A range of plausible parameters is used for generating an ensemble of predictions of rainfall for each year separately for the period 1996-2000, using the data up to the preceding year.
To analyze the sensitivity to initial conditions, predictions are made from two different months in a year, viz., from the beginning of January and of June. The reasonably good predictions obtained indicate the efficiency of the nonlinear prediction method for predicting the rainfall series. Also, the rank probability skill score and the rank histograms show that the ensembles generated are reliable, with good spread and skill. A comparison of the results for the three regions indicates that although all three are chaotic in nature, spatial averaging over a large area can increase the dimension and improve the predictability, thus destroying the chaotic nature. (C) 2010 Elsevier Ltd. All rights reserved.
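The core of the nonlinear prediction method, delay embedding followed by local nearest-neighbour prediction in phase space, can be sketched generically. The logistic map stands in for a rainfall record here, and the embedding dimension, delay, and neighbour count are illustrative choices, not the study's fitted parameters.

```python
import numpy as np

def embed(series, m, tau):
    """Delay-embed a scalar series into m-dimensional phase space."""
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

def nonlinear_predict(series, m, tau, k=5):
    """One-step local prediction: average the successors of the k nearest
    phase-space neighbours of the current state."""
    X = embed(series, m, tau)
    current, history = X[-1], X[:-1]
    dists = np.linalg.norm(history - current, axis=1)
    nearest = np.argsort(dists)[:k]
    # Successor of state X[j] is the scalar at time j + (m-1)*tau + 1.
    return series[nearest + (m - 1) * tau + 1].mean()

# Toy chaotic series (logistic map) as a stand-in for a rainfall record.
x = np.empty(500); x[0] = 0.4
for i in range(499):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])
pred = nonlinear_predict(x[:-1], m=3, tau=1)
```

An ensemble, as in the study, would repeat this prediction over a range of plausible (m, tau, k) values and treat the spread of outcomes as the uncertainty estimate.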
Abstract:
Single receive antenna selection (AS) allows single-input single-output (SISO) systems to retain the diversity benefits of multiple antennas with minimal hardware costs. We propose a single receive AS method for time-varying channels, in which practical limitations imposed by next-generation wireless standards, such as training, packetization and antenna switching time, are taken into account. The proposed method utilizes low-complexity projection techniques onto subspaces spanned by discrete prolate spheroidal (DPS) sequences. It uses only Doppler bandwidth knowledge and does not need detailed correlation knowledge. Results show that the proposed AS method outperforms both an ideal conventional SISO system with perfect CSI but no AS at the receiver, and AS using the conventional Fourier estimation/prediction method. A closed-form expression for the symbol error probability (SEP) of M-ary phase-shift keying (MPSK) with symbol-by-symbol receive AS is derived.
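The DPS-subspace idea, denoising a time-varying channel estimate by projecting it onto the span of a few Slepian sequences chosen only from the maximum Doppler bandwidth, can be illustrated with scipy. The block length, Doppler bound, and toy "channel" below are assumptions for illustration, not the paper's system parameters.

```python
import numpy as np
from scipy.signal.windows import dpss

M = 128                                  # block length in symbols (toy)
nu_max = 0.05                            # assumed normalised max Doppler frequency
D = int(np.ceil(2 * nu_max * M)) + 1     # subspace dimension ~ 2*nu_max*M + 1

# DPS (Slepian) sequences concentrated in [-nu_max, nu_max]; NW = M * nu_max.
U = dpss(M, M * nu_max, D).T             # M x D orthonormal basis

rng = np.random.default_rng(1)
# Toy band-limited channel gain: a few Doppler tones inside the band.
t = np.arange(M)
h = sum(np.cos(2 * np.pi * f * t + p)
        for f, p in [(0.01, 0.3), (0.03, 1.1), (0.045, 2.0)])
noisy = h + 0.3 * rng.standard_normal(M)

# Subspace projection: needs only nu_max, not the detailed Doppler spectrum.
h_hat = U @ (U.T @ noisy)
err_noisy = np.mean((noisy - h) ** 2)
err_proj = np.mean((h_hat - h) ** 2)
```

Because the noise is spread over all M dimensions while the channel lives in only D of them, the projection suppresses roughly a fraction D/M of the noise power, which is the source of the method's gain.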
Abstract:
Background: The set of indispensable genes that are required by an organism to grow and sustain life are termed essential genes. There is strong interest in identifying the set of essential genes, particularly in pathogens, not only for a better understanding of pathogen biology but also for identifying drug targets and the minimal gene set for the organism. Essentiality is inherently a systems property, and its identification requires consideration of the system as a whole. The available experimental approaches capture some aspects, but each method comes with its own limitations; moreover, in most cases they do not explain the basis for essentiality. A powerful prediction method that recognizes this gene pool, including a rationalization of the known essential genes in a given organism, would therefore be very useful. Here we describe a multi-level, multi-scale approach to identify the essential gene pool in a deadly pathogen, Mycobacterium tuberculosis. Results: The multi-level workflow analyses the bacterial cell by studying (a) genome-wide gene expression profiles, to identify the set of genes which show consistent and significant levels of expression in multiple samples of the same condition, (b) indispensability for growth, by using gene-expression-integrated flux balance analysis of a genome-scale metabolic model, (c) importance for maintaining the integrity and flow in a protein-protein interaction network, and (d) evolutionary conservation in a set of genomes of the same ecological niche. In the gene pool identified, the functional basis for essentiality has been addressed by studying residue-level conservation and the sub-structure at the ligand-binding pockets, from which essential amino acid residues in those pockets have also been identified. 283 genes were identified as essential genes with high confidence. An agreement of about 73.5% is observed with results obtained from the experimental transposon mutagenesis technique.
A large proportion of the identified genes belong to the class of intermediary metabolism and respiration. Conclusions: The multi-scale, multi-level approach described can be applied generally to other pathogens as well. The essential gene pool identified forms a basis for designing experiments to probe these genes' finer functional roles and also serves as a ready shortlist for identifying drug targets.
Abstract:
A simple and efficient two-step hybrid electrochemical-thermal route was developed for the synthesis of large quantities of ZnO nanoparticles, using an aqueous sodium bicarbonate electrolyte and sacrificial Zn anode and cathode in an undivided cell under galvanostatic mode at room temperature. The bath concentration and current density were varied from 30 to 120 mmol and from 0.05 to 1.5 A/dm². The electrochemically generated precursor was calcined for an hour at temperatures ranging from 140 to 600 °C. The calcined samples were characterized by XRD, SEM/EDX, TEM, TG-DTA, FT-IR, and UV-Vis spectral methods. Rietveld refinement of the X-ray data indicates that the calcined compound exhibits a hexagonal (wurtzite) structure with space group P63mc (No. 186). The crystallite sizes were in the range of 22-75 nm based on the Debye-Scherrer equation. The TEM results reveal that the particle sizes were on the order of 30-40 nm. A blue shift was noticed in the UV-Vis absorption spectra; the band gaps were found to be 5.40-5.11 eV. Scanning electron micrographs suggest that all the samples had a randomly oriented granular morphology.
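The Scherrer crystallite-size estimate used above follows D = Kλ / (β cos θ), with β the peak FWHM in radians. A minimal sketch, where the peak position and FWHM are illustrative values (chosen near the ZnO (101) reflection with Cu Kα radiation), not the paper's measured data:

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size in nm from XRD peak broadening:
    D = K * lambda / (beta * cos(theta)), with beta in radians.
    Default wavelength is Cu K-alpha; K = 0.9 is the usual shape factor."""
    theta = np.deg2rad(two_theta_deg / 2)   # Bragg angle from 2-theta
    beta = np.deg2rad(fwhm_deg)             # FWHM converted to radians
    return K * wavelength_nm / (beta * np.cos(theta))

# Illustrative peak near the ZnO (101) reflection: 2-theta = 36.3°, FWHM = 0.25°.
D = scherrer_size(36.3, 0.25)
```

With these illustrative numbers the estimate lands in the few-tens-of-nanometres range, consistent in scale with the 22-75 nm XRD and 30-40 nm TEM sizes reported.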
Abstract:
A hybrid technique to model two-dimensional fracture problems, which makes use of the displacement discontinuity method and the direct boundary element method, is presented. The direct boundary element method is used to model the finite domain of the body, while displacement discontinuity elements are used to represent the cracks. Thus the advantages of the component methods are effectively combined. The method has been implemented in a computer program, and numerical results which show the accuracy of the present method are presented. The cases of bodies containing edge cracks as well as multiple cracks are considered. A direct method and an iterative technique are described. The present hybrid method is most suitable for modeling problems involving crack propagation.
Abstract:
This paper presents a new approach that uses a hybrid of the displacement discontinuity element method and the direct boundary element method to model concrete cracking by incorporating a fictitious crack model. A fracture mechanics approach is followed using Hillerborg's fictitious crack model. A boundary-element-based substructure method and the hybrid technique of combining the displacement discontinuity element method with the direct boundary element method are compared in this paper. In order to represent the process zone ahead of the crack, closing forces are assumed to act in such a way that they obey a linear normal stress-crack opening displacement law. Plain concrete beams with and without an initial crack under three-point loading were analyzed by both methods. The numerical results obtained were shown to agree well with results from the existing finite element method. The model is capable of reproducing the whole range of load-deflection response, including the strain-softening and snap-back behavior, as illustrated in the numerical examples. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
This work focuses on the formulation of an asymptotically correct theory for symmetric composite honeycomb sandwich plate structures. In these panels, transverse stresses strongly influence the design. Conventional 2-D finite elements cannot predict the thickness-wise distributions of transverse shear or normal stresses and 3-D displacements, while the use of the more accurate three-dimensional finite elements is computationally prohibitive. The development of the present theory is based on the Variational Asymptotic Method (VAM). Its unique features are the identification and utilization of additional small parameters associated with the anisotropy and non-homogeneity of composite sandwich plate structures. These parameters are the small ratios of the thickness of the facial layers to that of the core, and of the 3-D stiffness coefficients of the core to those of the face sheets. Finally, anisotropy in the core and face sheets is addressed by the small parameters within the 3-D stiffness matrices. Numerical results are illustrated for several sample problems. The 3-D responses recovered using the VAM-based model are obtained in a much more computationally efficient manner than, and are in agreement with, available 3-D elasticity solutions and 3-D FE solutions from MSC NASTRAN. (c) 2012 Elsevier Ltd. All rights reserved.
Abstract:
An efficient parallelization algorithm for the Fast Multipole Method, which aims to alleviate the parallelization bottleneck arising from the lower job count closer to the root levels of the tree, is presented. An electrostatic problem with 12 million non-uniformly distributed mesh elements is solved with 80-85% parallel efficiency in the matrix setup and the matrix-vector product, using 60 GB of memory and 16 threads on a shared-memory architecture.
Abstract:
We present a framework for obtaining reliable solid-state charge and optical excitations and spectra from optimally tuned range-separated hybrid density functional theory. The approach, which is fully couched within the formal framework of generalized Kohn-Sham theory, allows for the accurate prediction of exciton binding energies. We demonstrate our approach through first-principles calculations of one- and two-particle excitations in pentacene, a molecular semiconducting crystal, where our results are in excellent agreement with experiments and prior computations. We further show that with one adjustable parameter, set to reproduce the known band gap, this method accurately predicts the band structures and optical spectra of silicon and lithium fluoride, prototypical covalent and ionic solids. Our findings indicate that for a broad range of extended bulk systems, this method may provide a computationally inexpensive alternative to many-body perturbation theory, opening the door to studies of materials of increasing size and complexity.
Abstract:
Response analysis of a linear structure with uncertainties in both structural parameters and external excitation is considered here. When such an analysis is carried out using the spectral stochastic finite element method (SSFEM), the computational cost often tends to be prohibitive due to the rapid growth of the number of spectral bases with the number of random variables and the order of expansion. For instance, if the excitation contains a random frequency, or if it is a general random process, then a good approximation of these excitations using polynomial chaos expansion (PCE) involves a large number of terms, which leads to very high cost. To address this issue of high computational cost, a hybrid method is proposed in this work. In this method, the random eigenvalue problem is first solved using the weak formulation of SSFEM, which involves solving a system of deterministic nonlinear algebraic equations to estimate the PCE coefficients of the random eigenvalues and eigenvectors. Then the response is estimated using a Monte Carlo (MC) simulation, where the modal bases are sampled from the PCE of the random eigenvectors estimated in the previous step, followed by a numerical time integration. It is observed through numerical studies that the proposed method successfully reduces the computational burden compared with either a pure SSFEM or a pure MC simulation, and is more accurate than a perturbation method. The computational gain improves as the problem size, in terms of degrees of freedom, grows. It also improves as the time span of interest reduces.
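The two-stage structure of the hybrid method, a PCE representation of the random eigenproblem followed by Monte Carlo sampling of that PCE with time integration, can be sketched on a toy single-degree-of-freedom system. Everything here is a stand-in: the stiffness model, the (here exact, first-order) Hermite PCE of the eigenvalue, the load, and the central-difference integrator are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-DOF system: stiffness k(xi) = k0 * (1 + 0.1*xi), xi ~ N(0, 1).
# Stage 1 (stand-in for the SSFEM eigen-solve): PCE of the eigenvalue
# lambda = k/m in the Hermite basis {1, xi, xi^2 - 1}; here it is exact.
k0, m = 100.0, 1.0
lam_pce = np.array([k0 / m, 0.1 * k0 / m, 0.0])

def hermite(xi):
    return np.array([1.0, xi, xi**2 - 1.0])

# Stage 2: Monte Carlo over the PCE -- sample xi, reconstruct the
# eigenvalue, and integrate q'' + lambda*q = f(t) by central differences.
def response_peak(xi, T=2.0, dt=1e-3):
    lam = lam_pce @ hermite(xi)
    t = np.arange(0, T, dt)
    f = np.sin(8.0 * t)              # deterministic toy load
    q = np.zeros_like(t)
    for i in range(1, len(t) - 1):
        q[i + 1] = 2*q[i] - q[i - 1] + dt**2 * (f[i] - lam * q[i])
    return np.max(np.abs(q))

samples = [response_peak(xi) for xi in rng.standard_normal(200)]
mean_peak, std_peak = np.mean(samples), np.std(samples)
```

The cost saving comes from stage 2 reusing the cheap PCE surrogate of the eigensolution in every MC sample instead of re-solving the eigenproblem, while the time integration keeps the excitation free of any PCE truncation.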
Abstract:
This study investigates the potential of a Relevance Vector Machine (RVM)-based approach to predict the ultimate capacity of laterally loaded piles in clay. The RVM is a sparse, approximate Bayesian kernel method; it can be seen as a probabilistic version of the support vector machine. It provides much sparser regressors without compromising performance, and the kernel bases give a small but worthwhile improvement in performance. The RVM model outperforms the two other models on the root-mean-square error (RMSE) and mean absolute error (MAE) performance criteria, and it also estimates the prediction variance. The results presented in this paper clearly highlight that the RVM is a robust tool for predicting the ultimate capacity of laterally loaded piles in clay.
Abstract:
The K-means algorithm is a well-known nonhierarchical method for clustering data. The most important limitations of this algorithm are that (1) the final clusters depend on the cluster centroids or seed points chosen initially, and (2) it is appropriate only for data sets having fairly isotropic clusters. However, the algorithm has the advantage of low computation and storage requirements. On the other hand, the hierarchical agglomerative clustering algorithm, which can cluster nonisotropic (chain-like and concentric) clusters, has high storage and computation requirements. This paper suggests a new method for selecting the initial seed points, so that the K-means algorithm gives the same results for any input data order. This paper also describes a hybrid clustering algorithm, based on the concepts of multilevel theory, which is nonhierarchical at the first level and hierarchical from the second level onwards, to cluster data sets having (i) chain-like clusters and (ii) concentric clusters. It is observed that this hybrid clustering algorithm gives the same results as the hierarchical clustering algorithm, with lower computation and storage requirements.
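The order-invariance idea can be illustrated with one possible deterministic seeding rule; the paper's exact rule is not specified here, so this sketch uses a farthest-point traversal started from the point nearest the global centroid, which depends only on the data set, not on the order in which its points arrive (assuming no distance ties).

```python
import numpy as np

def order_invariant_seeds(X, k):
    """Deterministic, input-order-invariant seeding (one possible scheme):
    start from the point nearest the global centroid, then repeatedly add
    the point farthest from all seeds chosen so far."""
    centroid = X.mean(axis=0)
    seeds = [int(np.argmin(np.linalg.norm(X - centroid, axis=1)))]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(X[:, None] - X[seeds], axis=2), axis=1)
        seeds.append(int(np.argmax(d)))
    return X[seeds].copy()

def kmeans(X, seeds, iters=50):
    """Plain Lloyd iterations from the given seed centroids."""
    C = seeds
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - C, axis=2), axis=1)
        C = np.array([X[labels == j].mean(axis=0) for j in range(len(C))])
    return labels

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels_a = kmeans(X, order_invariant_seeds(X, 2))
perm = rng.permutation(len(X))            # same data, different input order
labels_b = kmeans(X[perm], order_invariant_seeds(X[perm], 2))
```

Because the seeds are chosen by global geometric criteria, permuting the input rows yields the same seed points and hence the same final partition (up to a relabeling of cluster indices).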