86 results for "Fast methods"
Abstract:
Non-stationary signal modeling is a well-addressed problem in the literature. Many methods have been proposed to model non-stationary signals, such as time-varying linear prediction and AM-FM modeling, the latter being more popular. Estimation techniques for determining the AM-FM components of a narrow-band signal, such as the Hilbert transform, DESA1, DESA2, the auditory processing approach and the zero-crossing (ZC) approach, are prevalent, but their robustness to noise is not clearly addressed in the literature. This is critical for most practical applications, such as communications. We explore the robustness of different AM-FM estimators in the presence of white Gaussian noise. We also propose three new methods for IF estimation based on non-uniform samples of the signal and multi-resolution analysis. Experimental results show that ZC-based methods give better results than popular methods such as DESA in both clean and noisy conditions.
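A minimal sketch of one of the surveyed estimators: instantaneous frequency (IF) via the analytic signal obtained from the Hilbert transform. This illustrates the baseline technique only, not the paper's proposed ZC or non-uniform-sampling methods; the signal parameters below are illustrative assumptions.

```python
# Hedged sketch: IF estimation of a narrow-band AM-FM signal from the
# analytic-signal phase (Hilbert-transform approach). Illustrative only.
import numpy as np
from scipy.signal import hilbert

def estimate_if(x, fs):
    """Estimate instantaneous frequency (Hz) from the analytic-signal phase."""
    analytic = hilbert(x)                      # x(t) + j*H[x](t)
    phase = np.unwrap(np.angle(analytic))      # continuous instantaneous phase
    return np.diff(phase) * fs / (2 * np.pi)   # d(phase)/dt in Hz

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
# AM tone: slowly varying positive envelope on a 440 Hz carrier.
x = (1 + 0.3 * np.sin(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * 440 * t)
f_inst = estimate_if(x, fs)
# Away from the window edges the estimate should hover near the carrier.
print(round(float(np.median(f_inst[100:-100])), 1))
```

Because the envelope bandwidth (2 Hz) is far below the carrier, the analytic-signal phase derivative tracks the 440 Hz carrier closely; noise robustness of this estimator is exactly what the paper puts to the test.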
Abstract:
For the problem of speaker adaptation in speech recognition, performance depends on the availability of adaptation data. In this paper, we compare several existing speaker adaptation methods, viz. maximum likelihood linear regression (MLLR), eigenvoice (EV), eigenspace-based MLLR (EMLLR), segmental eigenvoice (SEV) and hierarchical eigenvoice (HEV) based methods. We also develop a new method by modifying the existing HEV method to achieve further performance improvement when only limited adaptation data are available. With respect to adaptation-data availability, the new modified HEV (MHEV) method is shown to perform better than all the existing methods throughout the range of operation, except against MLLR when more adaptation data are available.
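A hedged sketch of the MLLR idea mentioned above: a single affine transform of the Gaussian means is estimated from adaptation data. With identity covariances and hard frame-to-Gaussian assignments the maximum-likelihood estimate reduces to least squares; the toy dimensions and data below are illustrative assumptions, not from the paper.

```python
# Hedged sketch of MLLR-style mean adaptation: mu_adapted = A @ mu + b,
# with [A | b] estimated by least squares on extended mean vectors.
import numpy as np

rng = np.random.default_rng(0)
D, G = 2, 5                                # feature dim, number of Gaussians
mu = rng.normal(size=(G, D))               # speaker-independent means
A_true = np.array([[1.2, 0.1], [-0.1, 0.9]])   # unknown speaker transform
b_true = np.array([0.5, -0.3])

# Per-Gaussian adaptation statistics (mean of frames assigned to each Gaussian).
obs = mu @ A_true.T + b_true + 0.01 * rng.normal(size=(G, D))

# Solve for W = [A | b] by least squares on extended means [mu; 1].
ext = np.hstack([mu, np.ones((G, 1))])          # G x (D+1)
W, *_ = np.linalg.lstsq(ext, obs, rcond=None)   # (D+1) x D
A_est, b_est = W[:D].T, W[D]
print(np.allclose(A_est, A_true, atol=0.05), np.allclose(b_est, b_true, atol=0.05))
```

Tying all Gaussians to one shared transform is what lets MLLR work with little data; the eigenvoice family constrains the adapted model differently, as a weighted combination of speaker basis vectors.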
Abstract:
Fundamental investigations in ultrasonics in India date back to the early 20th century, but fundamental and applied research in the field of nondestructive evaluation (NDE) came much later. In the last four decades it has grown steadily in academic institutions, national laboratories and industry. Currently, commensurate with rapid industrial growth and realisation of the benefits of NDE, the activity is becoming much stronger, deeper, broader and very widespread. Acoustic Emission (AE) is a recent entry into the field of nondestructive evaluation. Pioneering efforts in India in AE were carried out at the Indian Institute of Science in the early 1970s. The nuclear industry was the first to utilise it. Current activity in AE in the country spans materials research, incipient failure detection, integrity evaluation of structures, fracture mechanics studies and rock mechanics. In this paper, we attempt to project the current scenario in ultrasonics and acoustic emission research in India.
Abstract:
A comprehensive set of new configurations for the holographic simulation of a wide variety of mirrors is described. These holographically simulated mirrors (HSMs) have been experimentally realized and their imaging performance has been studied.
Abstract:
Much progress in nanoscience and nanotechnology has been made in the past few years thanks to the increased availability of sophisticated physical methods to characterize nanomaterials. These techniques include electron microscopy and scanning probe microscopies, in addition to standard techniques such as X-ray and neutron diffraction, X-ray scattering, and various spectroscopies. Characterization of nanomaterials includes the determination not only of size and shape, but also of the atomic and electronic structures and other important properties. In this article we describe some of the important methods employed for characterization of nanostructures, describing a few case studies for illustrative purposes. These case studies include characterizations of Au, ReO3, and GaN nanocrystals; ZnO, Ni, and Co nanowires; inorganic and carbon nanotubes; and two-dimensional graphene.
Abstract:
Combustion is a complex phenomenon involving a multiplicity of variables. Some important variables measured in flame tests follow [1]. In order to characterize ignition, such related parameters as ignition time, ease of ignition, flash ignition temperature, and self-ignition temperature are measured. For studying the propagation of the flame, parameters such as distance burned or charred, area of flame spread, time of flame spread, burning rate, charred or melted area, and fire endurance are measured. Smoke characteristics are studied by determining such parameters as specific optical density, maximum specific optical density, time of occurrence of the densities, maximum rate of density increase, visual obscuration time, and smoke obscuration index. In addition to the above variables, there are a number of specific properties of the combustible system which could be measured. These are soot formation, toxicity of combustion gases, heat of combustion, dripping phenomena during the burning of thermoplastics, afterglow, flame intensity, fuel contribution, visual characteristics, limiting oxygen concentration (OI), products of pyrolysis and combustion, and so forth. A multitude of flammability tests measuring one or more of these properties have been developed [2]. Admittedly, no one small-scale test is adequate to mimic or assess the performance of a plastic in a real fire situation; the conditions are much too complicated [3, 4]. Some conceptual problems associated with flammability testing of polymers have been reviewed [5, 6].
Abstract:
We compare two popular methods for estimating the power spectrum from short data windows, namely the adaptive multivariate autoregressive (AMVAR) method and the multitaper method. By analyzing a simulated signal (embedded in a background Ornstein-Uhlenbeck noise process) we demonstrate that the AMVAR method performs better at detecting short bursts of oscillations compared to the multitaper method. However, both methods are immune to jitter in the temporal location of the signal. We also show that coherence can still be detected in noisy bivariate time series data by the AMVAR method even if the individual power spectra fail to show any peaks. Finally, using data from two monkeys performing a visuomotor pattern discrimination task, we demonstrate that the AMVAR method is better able to determine the termination of the beta oscillations when compared to the multitaper method.
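A hedged sketch of the autoregressive side of this comparison, in the scalar case: fit an AR model to a short window by Yule-Walker and read the power spectrum off the model. This is the univariate analogue of the AMVAR idea, not the paper's multivariate implementation; the signal and model order below are illustrative assumptions.

```python
# Hedged sketch: univariate Yule-Walker AR spectral estimation. The AR model's
# poles produce sharp spectral peaks even on short, noisy windows.
import numpy as np

def ar_psd(x, order, nfft=512):
    """Fit AR(order) by Yule-Walker and return (freqs, model power spectrum)."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)  # autocorrelation
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])          # AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]              # driving-noise variance
    freqs = np.arange(nfft) / (2 * nfft)            # 0 .. 0.5 cycles/sample
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    psd = sigma2 / np.abs(1 - z @ a) ** 2
    return freqs, psd

rng = np.random.default_rng(1)
n = 256                                             # short data window
t = np.arange(n)
x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.normal(size=n)  # tone at 0.1 cyc/sample
freqs, psd = ar_psd(x, order=8)
print(round(float(freqs[np.argmax(psd)]), 2))       # peak near the tone frequency
```

The multivariate (AMVAR) extension fits a vector AR model across channels, which additionally yields cross-spectra and hence the coherence estimates discussed in the abstract.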
Abstract:
The simply supported rhombic plate under transverse load has received extensive attention from elasticians, applied mathematicians and engineers. All known solutions are based on approximate procedures. Now, an exact solution in a fast converging explicit series form is derived for this problem, by applying Stevenson's tentative approach with complex variables. Numerical values for the central deflexion and moments are obtained for various corner angles. The present solution provides a basis for assessing the accuracy of approximate methods for analysing problems of skew plates or domains.
Abstract:
This paper deals with the analysis of the liquid limit of soils, an inferential parameter of universal acceptance. It has been undertaken primarily to re-examine one-point methods of determining liquid limit water contents. It is shown from the basic characteristics of soils and associated physico-chemical factors that the critical shear strengths at liquid limit water contents arise out of force-field equilibrium and are independent of soil type. This provides a scientific basis for liquid limit determination by one-point methods, which hitherto were formulated purely on statistical analysis of data. Available methods (Norman, 1959; Karlsson, 1961; Clayton & Jukes, 1978) of one-point liquid limit determination are critically re-examined. A simple one-point cone penetrometer method of computing the liquid limit is suggested and compared with other methods. The experimental data of Sherwood & Ryley (1970) are employed to compare the different cone penetration methods. The results indicate that, beyond mere statistical considerations, one-point methods have a strong scientific basis in the uniqueness of the modified flow line irrespective of soil type. The normalized flow line is obtained by normalizing water contents by liquid limit values, thereby nullifying the effects of surface areas and associated physico-chemical factors that are otherwise reflected in different responses at the macrolevel.
Abstract:
A 4-degree-of-freedom single-input system and a 3-degree-of-freedom multi-input system are solved by the Coates', modified Coates' and Chan-Mai flowgraph methods. It is concluded that the Chan-Mai flowgraph method is superior to other flowgraph methods in such cases.
Abstract:
The control of shapes of nanocrystals is crucial for using them as building blocks for various applications. In this paper, we present a critical overview of the issues involved in shape-controlled synthesis of nanostructures. In particular, we focus on the mechanisms by which anisotropic structures of high-symmetry materials (fcc crystals, for instance) could be realized. Such structures require a symmetry-breaking mechanism to be operative that typically leads to selection of one of the facets/directions for growth over all the other symmetry-equivalent crystallographic facets. We show how this selection could arise for the growth of one-dimensional structures leading to ultrafine metal nanowires and for the case of two-dimensional nanostructures where the layer-by-layer growth takes place at low driving forces leading to plate-shaped structures. We illustrate morphology diagrams to predict the formation of two-dimensional structures during wet chemical synthesis. We show the generality of the method by extending it to predict the growth of plate-shaped inorganics produced by a precipitation reaction. Finally, we present the growth of crystals under high driving forces that can lead to the formation of porous structures with large surface areas.
Abstract:
A new approach to Penrose's twistor algebra is given. It is based on the use of a generalised quaternion algebra for the translation of statements in projective five-space into equivalent statements in twistor (conformal spinor) space. The formalism leads to SO(4, 2)-covariant formulations of the Pauli-Kofink and Fierz relations among Dirac bilinears, and generalisations of these relations.
Abstract:
The transforms dealt with in this paper are defined in terms of transform kernels which are Kronecker products of two or more component kernels. The signal flow-graph for the computation of such a transform is obtained with the flow-graphs for the component transforms as building blocks.
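The factorisation such a flow-graph construction exploits can be checked numerically. A minimal sketch of the underlying identity, with illustrative random kernels: if the full kernel is a Kronecker product K = A ⊗ B, then K applied to a signal can be computed using only the small component kernels, via the identity (A ⊗ B) vec(X) = vec(B X Aᵀ) with column-major vec.

```python
# Hedged sketch: computing a Kronecker-product transform from its component
# kernels alone, instead of forming the full kernel matrix.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))              # component kernel A
B = rng.normal(size=(4, 4))              # component kernel B
x = rng.normal(size=12)                  # signal of length 3 * 4

direct = np.kron(A, B) @ x               # full 12x12 kernel, O((mn)^2) work

X = x.reshape(3, 4).T                    # column-major unvec: 4x3 matrix
factored = (B @ X @ A.T).T.reshape(-1)   # vec(B X A^T), column-major
print(np.allclose(direct, factored))
```

The factored form replaces one large matrix-vector product with two small matrix products, which is the computational saving the component-wise flow-graph realises.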