46 results for INTERPOLATION
Abstract:
In this paper, we consider Bayesian interpolation and parameter estimation in a dynamic sinusoidal model. This model is more flexible than the static sinusoidal model since it enables the amplitudes and phases of the sinusoids to be time-varying. For the dynamic sinusoidal model, we derive a Bayesian inference scheme for the missing observations, hidden states and model parameters of the dynamic model. The inference scheme is based on a Markov chain Monte Carlo method known as the Gibbs sampler. We illustrate the performance of the inference scheme in an application to packet-loss concealment of lost audio and speech packets. © EURASIP, 2010.
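As a minimal sketch of the Gibbs-sampling idea (not the paper's dynamic sinusoidal model), the toy example below alternates between imputing one missing Gaussian observation and resampling the model mean; the data values and the unit-variance assumption are invented for illustration.

```python
import random
import statistics

# Toy Gibbs sampler: impute one missing observation y_miss and estimate the
# mean mu of a Gaussian model with known unit variance and a flat prior.
observed = [1.2, 0.8, 1.1, 0.9, 1.3]  # hypothetical data

def gibbs(observed, n_iter=2000, seed=0):
    rng = random.Random(seed)
    mu, y_miss = 0.0, 0.0
    mu_samples = []
    n = len(observed) + 1
    for _ in range(n_iter):
        # Sample the missing value given the current mean: y_miss ~ N(mu, 1).
        y_miss = rng.gauss(mu, 1.0)
        # Sample the mean given all data (flat prior): mu ~ N(mean(data), 1/n).
        data_mean = (sum(observed) + y_miss) / n
        mu = rng.gauss(data_mean, (1.0 / n) ** 0.5)
        mu_samples.append(mu)
    return mu_samples

samples = gibbs(observed)
posterior_mean = statistics.fmean(samples[500:])  # discard burn-in
```

The same alternation, with far richer conditional distributions over states, missing packets and model parameters, underlies the paper's inference scheme.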
Abstract:
This paper extends n-gram graphone model pronunciation generation to use a mixture of such models. This technique is useful when pronunciations are needed for a specific variant (or set of variants) of a language, such as a dialect, and only a small amount of pronunciation dictionary training data is available for that variant. The performance of the interpolated n-gram graphone model is evaluated on Arabic phonetic pronunciation generation for words that cannot be handled by the Buckwalter Morphological Analyser. The pronunciations produced are also used to train an Arabic broadcast audio speech recognition system. In both cases the interpolated graphone model leads to improved performance. Copyright © 2011 ISCA.
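A minimal sketch of the mixture idea, interpolating two toy bigram models over invented letter sequences rather than actual graphone models; the corpora and the mixture weight `lam` are assumptions for illustration.

```python
from collections import defaultdict

def bigram_probs(corpus):
    """Maximum-likelihood bigram probabilities from a list of token sequences."""
    counts, context = defaultdict(int), defaultdict(int)
    for seq in corpus:
        # Pad with sentence-boundary symbols so every token has a history.
        for a, b in zip(["<s>"] + seq, seq + ["</s>"]):
            counts[(a, b)] += 1
            context[a] += 1
    return lambda a, b: counts[(a, b)] / context[a] if context[a] else 0.0

# Hypothetical data: a larger general model and a small dialect-specific one.
general = bigram_probs([["k", "i", "t", "a", "b"], ["k", "a", "t", "a", "b"]])
dialect = bigram_probs([["k", "i", "t", "a", "b"]])

def interpolated(a, b, lam=0.3):
    # Linear mixture of the two component models, weight lam on the dialect model.
    return lam * dialect(a, b) + (1 - lam) * general(a, b)
```

In practice the weight would be tuned on held-out dialect data rather than fixed.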
Abstract:
Language models (LMs) are often constructed by building multiple individual component models that are combined using context independent interpolation weights. By tuning these weights, using either perplexity or discriminative approaches, it is possible to adapt LMs to a particular task. This paper investigates the use of context dependent weighting in both interpolation and test-time adaptation of language models. Depending on the previous word contexts, a discrete history weighting function is used to adjust the contribution from each component model. As this dramatically increases the number of parameters to estimate, robust weight estimation schemes are required. Several approaches are described in this paper. The first approach is based on MAP estimation where interpolation weights of lower order contexts are used as smoothing priors. The second approach uses training data to ensure robust estimation of LM interpolation weights. This can also serve as a smoothing prior for MAP adaptation. A normalized perplexity metric is proposed to handle the bias of the standard perplexity criterion to corpus size. A range of schemes to combine weight information obtained from training data and test data hypotheses are also proposed to improve robustness during context dependent LM adaptation. In addition, a minimum Bayes' risk (MBR) based discriminative training scheme is proposed. An efficient weighted finite state transducer (WFST) decoding algorithm for context dependent interpolation is also presented. The proposed technique was evaluated using a state-of-the-art Mandarin Chinese broadcast speech transcription task. Character error rate (CER) reductions of up to 7.3% relative were obtained as well as consistent perplexity improvements. © 2012 Elsevier Ltd. All rights reserved.
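The core of context-dependent interpolation can be sketched as follows; the component probabilities and per-history weights below are invented for illustration (in the paper they are estimated robustly, not set by hand).

```python
# Context-dependent linear interpolation of component LM probabilities:
#   p(w | h) = sum_m lambda_m(h) * p_m(w | h),
# where the weights lambda_m depend on the word history h.
component_probs = [
    {("the", "cat"): 0.2, ("a", "cat"): 0.10},  # component model 1 (invented)
    {("the", "cat"): 0.4, ("a", "cat"): 0.05},  # component model 2 (invented)
]
# Per-history weights lambda_m(h); each list sums to one.
context_weights = {"the": [0.7, 0.3], "a": [0.2, 0.8]}

def p_interp(history, word, default=(0.5, 0.5)):
    # Fall back to context-independent weights for unseen histories.
    lams = context_weights.get(history, default)
    return sum(l * m.get((history, word), 0.0)
               for l, m in zip(lams, component_probs))
```

The explosion in parameters the abstract mentions is visible here: one weight vector per history instead of one per model.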
Abstract:
A multivariate, robust, rational interpolation method for propagating uncertainties in several dimensions is presented. The algorithm for selecting numerator and denominator polynomial orders is based on recent work that uses a singular value decomposition approach. In this paper we extend this algorithm to higher dimensions and demonstrate its efficacy in terms of convergence and accuracy, both as a method for response surface generation and for interpolation. To obtain stable approximants for continuous functions, we use an L2 error norm indicator to rank optimal numerator and denominator solutions. For discontinuous functions, a second criterion setting an upper limit on the approximant value is employed. Analytical examples demonstrate that, for the same stencil, rational methods can yield more rapid convergence compared to pseudospectral or collocation approaches for certain problems. © 2012 AIAA.
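A tiny one-dimensional sketch of linearized rational interpolation, assuming a [1/1] form r(x) = (a0 + a1 x)/(1 + b1 x) fit through three nodes; the paper's method is multivariate and selects the polynomial orders via an SVD rather than fixing them.

```python
# Interpolating r(x) = (a0 + a1*x)/(1 + b1*x) through nodes (x_i, f_i)
# linearizes to: a0 + a1*x_i - f_i*b1*x_i = f_i, a small linear system.
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            M[r] = [m - f * c for m, c in zip(M[r], M[col])]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

nodes = [0.0, 1.0, 2.0]
vals = [1 / (1 + x) for x in nodes]          # sample f(x) = 1/(1+x)
A = [[1.0, x, -f * x] for x, f in zip(nodes, vals)]
a0, a1, b1 = solve3(A, vals)

def r(x):
    return (a0 + a1 * x) / (1 + b1 * x)
```

Because the test function is itself rational, the fit here recovers it exactly; in general the SVD-based order selection guards against spurious poles.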
Abstract:
Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram-plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram-plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique. © 2009 Optical Society of America.
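A minimal sketch of Gaussian interpolation over scattered points, assuming a normalized weighted average with Gaussian kernels (a Shepard-style reconstruction); the point coordinates, values and bandwidth `sigma` are invented and the paper's method operates on 3-D object points, not this 2-D toy.

```python
import math

# Continuous surface from discrete samples: each point contributes with a
# weight exp(-d^2 / (2*sigma^2)), and the result is the normalized average.
points = [((0.0, 0.0), 1.0), ((1.0, 0.0), 3.0), ((0.0, 1.0), 5.0)]

def surface(x, y, sigma=0.3):
    num = den = 0.0
    for (px, py), v in points:
        w = math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * sigma ** 2))
        num += w * v
        den += w
    return num / den
```

Near a sample the surface approaches that sample's value; between samples it blends them smoothly, which is what turns the discrete point cloud into a continuous occluding surface.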
Abstract:
This paper discusses the Cambridge University HTK (CU-HTK) system for the automatic transcription of conversational telephone speech. A detailed discussion of the most important techniques in front-end processing, acoustic modeling and model training, language and pronunciation modeling are presented. These include the use of conversation side based cepstral normalization, vocal tract length normalization, heteroscedastic linear discriminant analysis for feature projection, minimum phone error training and speaker adaptive training, lattice-based model adaptation, confusion network based decoding and confidence score estimation, pronunciation selection, language model interpolation, and class based language models. The transcription system developed for participation in the 2002 NIST Rich Transcription evaluations of English conversational telephone speech data is presented in detail. In this evaluation the CU-HTK system gave an overall word error rate of 23.9%, which was the best performance by a statistically significant margin. Further details on the derivation of faster systems with moderate performance degradation are discussed in the context of the 2002 CU-HTK 10 × RT conversational speech transcription system. © 2005 IEEE.
Abstract:
The Rolls-Royce Integrated-Planar Solid Oxide Fuel Cell (IP-SOFC) consists of ceramic modules which have electrochemical cells printed on the outer surfaces. The cathodes are the outermost layer of each cell and are supplied with oxygen from air flowing over the outside of the module. The anodes are in direct contact with the ceramic structure and are supplied with fuel from internal gas channels. Natural gas is reformed into hydrogen for use by the fuel cells in a separate reformer module of similar design except that the fuel cells are replaced by a reforming catalyst layer. The performance of the modules is intrinsically linked to the behaviour of the gas flows within their porous structures. Because the porous layers are very thin, a one-dimensional flow model provides a good representation of the flow property variations between fuel channel and fuel cell or reforming catalyst. The multi-component convective-diffusive flows are simulated using a new theory of flow in porous material, the Cylindrical Pore Interpolation Model. The effects of the catalysed methane reforming and water-gas shift chemical reactions are also considered using appropriate kinetic models. It is found that the shift reaction, which is catalysed by the anode material, has certain beneficial effects on the fuel cell module performance. In the reformer module it was found that the flow resistance of the porous support structure makes it difficult to sustain a high methane conversion rate. Although the analysis is based on IP-SOFC geometry, the modelling approach and general conclusions are applicable to other types of SOFC.
Abstract:
An increasingly common scenario in building speech synthesis and recognition systems is training on inhomogeneous data. This paper proposes a new framework for estimating hidden Markov models on data containing both multiple speakers and multiple languages. The proposed framework, speaker and language factorization, attempts to factorize speaker-/language-specific characteristics in the data and then model them using separate transforms. Language-specific factors in the data are represented by transforms based on cluster mean interpolation with cluster-dependent decision trees. Acoustic variations caused by speaker characteristics are handled by transforms based on constrained maximum-likelihood linear regression. Experimental results on statistical parametric speech synthesis show that the proposed framework enables data from multiple speakers in different languages to be used to: train a synthesis system; synthesize speech in a language using speaker characteristics estimated in a different language; and adapt to a new language. © 2012 IEEE.
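Cluster mean interpolation can be sketched as a weighted combination of cluster mean vectors; the means and the language-specific weights below are invented, and the real system additionally ties weights to cluster-dependent decision trees.

```python
# Language-specific mean as an interpolation of shared cluster means:
#   mu(language) = sum_c w_c(language) * m_c
cluster_means = [[1.0, 2.0], [3.0, 4.0], [5.0, 0.0]]  # invented mean vectors
weights = [0.5, 0.3, 0.2]  # invented language-specific interpolation weights

def interpolated_mean():
    dim = len(cluster_means[0])
    return [sum(w * m[d] for w, m in zip(weights, cluster_means))
            for d in range(dim)]
```

Factorization comes from letting the cluster means be shared across languages while only the small weight vector is language-specific.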
Abstract:
This paper presents the modeling of second-generation (2G) high-temperature superconducting (HTS) pancake coils using the finite element method. The axisymmetric model can be used to calculate the current and magnetic field distribution inside the coil. The anisotropic characteristics of 2G tapes are included in the model by direct interpolation. The model is validated by comparison with experimental results. We use the model to study critical currents of 2G coils and find that 100μV/m is too high a criterion to determine the long-term operating current of the coils, because the innermost turns of a coil will, due to the effect of the local magnetic field, reach their critical current much earlier than the outer turns. Our modeling shows that an average voltage criterion of 20μV/m over the coil corresponds to the point at which the innermost turns' electric field exceeds 100μV/m. So 20μV/m is suggested as the critical current criterion of the HTS coil. The influence of background field on the coil critical current is also studied in the paper. © 2012 American Institute of Physics.
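Direct interpolation of measured tape characteristics can be sketched as piecewise-linear interpolation of critical current versus field angle; the table values below are invented, and the paper's model interpolates over both field magnitude and angle.

```python
# Piecewise-linear lookup of critical current Ic(theta) from a measured table.
angles = [0.0, 30.0, 60.0, 90.0]      # field angle in degrees (hypothetical)
ic = [100.0, 80.0, 60.0, 50.0]        # measured Ic in A (hypothetical)

def ic_at(theta):
    for (a0, a1), (i0, i1) in zip(zip(angles, angles[1:]), zip(ic, ic[1:])):
        if a0 <= theta <= a1:
            t = (theta - a0) / (a1 - a0)   # fractional position in the segment
            return i0 + t * (i1 - i0)
    raise ValueError("angle out of range")
```

Feeding such interpolated Ic values into the E-J power law is one common way to carry measured anisotropy into a finite element model.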
Abstract:
POMDP algorithms have made significant progress in recent years by allowing practitioners to find good solutions to increasingly large problems. Most approaches (including point-based and policy iteration techniques) operate by refining a lower bound of the optimal value function. Several approaches (e.g., HSVI2, SARSOP, grid-based approaches and online forward search) also refine an upper bound. However, approximating the optimal value function by an upper bound is computationally expensive and therefore tightness is often sacrificed to improve efficiency (e.g., sawtooth approximation). In this paper, we describe a new approach to efficiently compute tighter bounds by i) conducting a prioritized breadth first search over the reachable beliefs, ii) propagating upper bound improvements with an augmented POMDP and iii) using exact linear programming (instead of the sawtooth approximation) for upper bound interpolation. As a result, we can represent the bounds more compactly and significantly reduce the gap between upper and lower bounds on several benchmark problems. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved.
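The sawtooth approximation that the paper's exact-LP interpolation replaces can be sketched for a two-state belief simplex; the corner values and interior belief point below are invented.

```python
# Sawtooth upper-bound interpolation: combine the upper bound at the corner
# beliefs with bound points at interior beliefs.  For each interior point
# (b_i, v_i), the candidate bound at b is
#   v0(b) + (v_i - v0(b_i)) * min_s b(s)/b_i(s),
# where v0 is the corner (linear) interpolation.
corner_vals = [10.0, 4.0]             # upper bound at corner beliefs (invented)
points = [([0.5, 0.5], 5.0)]          # interior (belief, bound) pairs (invented)

def sawtooth(b):
    v0 = sum(bs * v for bs, v in zip(b, corner_vals))
    best = v0
    for bi, vi in points:
        v0_i = sum(bs * v for bs, v in zip(bi, corner_vals))
        ratio = min(bs / bis for bs, bis in zip(b, bi) if bis > 0)
        best = min(best, v0 + (vi - v0_i) * ratio)
    return best
```

The sawtooth bound is cheap (one min per point) but loose away from the stored beliefs, which is exactly the gap the exact linear-programming interpolation tightens.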
Abstract:
Surface temperature measurements from two discs of a gas turbine compressor rig are used as boundary conditions for the transient conduction solution (inverse heat transfer analysis). The disc geometry is complex, and so the finite element method is used. There are often large radial temperature gradients on the discs, and the equations are therefore solved taking into account the dependence of thermal conductivity on temperature. The solution technique also makes use of a multigrid algorithm to reduce the solution time. This is particularly important since a large amount of data must be analyzed to obtain correlations of the heat transfer. The finite element grid is also used for a network analysis to calculate the radiant heat transfer in the cavity formed between the two compressor discs. The work discussed here proved particularly challenging as the disc temperatures were only measured at four different radial locations. Four methods of surface temperature interpolation are examined, together with their effect on the local heat fluxes. It is found that the choice of interpolation method depends on the available number of data points. Bessel interpolation gives the best results for four data points, whereas cubic splines are preferred when there are considerably more data points. The results from the analysis of the compressor rig data show that the heat transfer near the disc inner radius appears to be influenced by the central throughflow. However, for larger radii, the heat transfer from the discs and peripheral shroud is found to be consistent with that of a buoyancy-induced flow.
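With only four radial measurement points, a single cubic through all of them is a natural baseline (the paper prefers Bessel interpolation for this case); the sketch below uses a plain Lagrange cubic with invented radius/temperature pairs.

```python
# Lagrange cubic through four (radius, temperature) samples; evaluating it at
# intermediate radii gives the continuous surface-temperature profile that a
# transient conduction solution needs as a boundary condition.
radii = [0.1, 0.2, 0.3, 0.4]            # m (hypothetical)
temps = [320.0, 355.0, 380.0, 395.0]    # K (hypothetical)

def lagrange(r):
    total = 0.0
    for i, (ri, ti) in enumerate(zip(radii, temps)):
        w = ti
        for j, rj in enumerate(radii):
            if j != i:
                w *= (r - rj) / (ri - rj)  # Lagrange basis polynomial
        total += w
    return total
```

With many more data points a single high-degree polynomial would oscillate, which is why the abstract recommends switching to cubic splines in that regime.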
Abstract:
We offer a solution to the problem of efficiently translating algorithms between different types of discrete statistical model. We investigate the expressive power of three classes of model-those with binary variables, with pairwise factors, and with planar topology-as well as their four intersections. We formalize a notion of "simple reduction" for the problem of inferring marginal probabilities and consider whether it is possible to "simply reduce" marginal inference from general discrete factor graphs to factor graphs in each of these seven subclasses. We characterize the reducibility of each class, showing in particular that the class of binary pairwise factor graphs is able to simply reduce only positive models. We also exhibit a continuous "spectral reduction" based on polynomial interpolation, which overcomes this limitation. Experiments assess the performance of standard approximate inference algorithms on the outputs of our reductions.
Abstract:
The details of the Element Free Galerkin (EFG) method are presented with the method being applied to a study on hydraulic fracturing initiation and propagation process in a saturated porous medium using coupled hydro-mechanical numerical modelling. In this EFG method, interpolation (approximation) is based on nodes without using elements and hence an arbitrary discrete fracture path can be modelled. The numerical approach is based upon solving two governing partial differential equations of equilibrium and continuity of pore water simultaneously. Displacement increment and pore water pressure increment are discretized using the same EFG shape functions. An incremental constrained Galerkin weak form is used to create the discrete system of equations and a fully implicit scheme is used for discretization in the time domain. Implementation of essential boundary conditions is based on the penalty method. In order to model discrete fractures, the so-called diffraction method is used. Examples are presented and the results are compared to some closed-form solutions and FEM approximations in order to demonstrate the validity of the developed model and its capabilities. The model is able to take the anisotropy and inhomogeneity of the material into account. The applicability of the model is examined by simulating hydraulic fracture initiation and propagation process from a borehole by injection of fluid. The maximum tensile strength criterion and Mohr-Coulomb shear criterion are used for modelling tensile and shear fracture, respectively. The model successfully simulates the leak-off of fluid from the fracture into the surrounding material. The results indicate the importance of pore fluid pressure in the initiation and propagation pattern of fracture in saturated soils. © 2013 Elsevier Ltd.