8 results for scattered data interpolation

in Cambridge University Engineering Department Publications Database


Relevance:

80.00%

Publisher:

Abstract:

The detailed understanding of the electronic properties of carbon-based materials requires the determination of their electronic structure and, more precisely, the calculation of their joint density of states (JDOS) and dielectric constant. Low electron energy loss spectroscopy (EELS) provides a continuous spectrum which represents all the excitations of the electrons within the material with energies ranging between zero and about 100 eV. Therefore, EELS is potentially more powerful than conventional optical spectroscopy, which has an intrinsic upper information limit of about 6 eV due to absorption of light by the optical components of the system or the ambient. However, when analysing EELS data, the extraction of the singly scattered data needed for Kramers-Kronig calculations depends on the deconvolution of the zero-loss peak from the raw data. This procedure is particularly critical when attempting to study the near-bandgap region of materials with a bandgap below 1.5 eV. In this paper, we have calculated the electronic properties of three widely studied carbon materials, namely amorphous carbon (a-C), tetrahedral amorphous carbon (ta-C) and C60 fullerite crystal. The JDOS curve starts from zero for energy values below the bandgap and then rises at a rate that depends on whether the material has a direct or an indirect bandgap. Extrapolating a fit to the data immediately above the bandgap in the stronger energy-loss region was used to obtain an accurate value for the bandgap energy and to determine whether the bandgap is direct or indirect in character. Particular problems relating to the extraction of the singly scattered data for these materials are also addressed. The ta-C and C60 fullerite materials are found to be direct-bandgap-like semiconductors with bandgaps of 2.63 and 1.59 eV, respectively. On the other hand, the electronic structure of a-C was unobtainable because its bandgap is so small that most of the information is contained in the first 1.2 eV of the spectrum, a region removed during the zero-loss deconvolution.
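For context (this equation is not part of the abstract), the Kramers-Kronig step mentioned above recovers the real part of the inverse dielectric function from the singly scattered loss function Im[-1/ε(E)], from which the complex dielectric function and hence the JDOS follow. A standard form of the relation is:

\[
  \operatorname{Re}\!\left[\frac{1}{\varepsilon(E)}\right]
    = 1 - \frac{2}{\pi}\,\mathcal{P}\!\int_{0}^{\infty}
      \operatorname{Im}\!\left[\frac{-1}{\varepsilon(E')}\right]
      \frac{E'\,\mathrm{d}E'}{E'^{2} - E^{2}},
\]

where \mathcal{P} denotes the Cauchy principal value. This is why an accurate zero-loss deconvolution of the low-energy region is so critical for small-bandgap materials.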

Relevance:

30.00%

Publisher:

Abstract:

A new interpolation technique has been developed for replacing missing samples in a sampled waveform drawn from a stationary stochastic process, given the power spectrum of the process. The method works with a finite block of data and is based on the assumption that the components of the block DFT are independent, zero-mean Gaussian random variables with variance proportional to the power spectrum at each frequency value. These assumptions make the interpolator particularly suitable for signals with a sharply defined harmonic structure, such as audio waveforms recorded from music or voiced speech. Some results are presented and comparisons are made with existing techniques.
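As a rough, hypothetical illustration of the idea in this abstract (not the authors' code), the stated Gaussian assumption on the block-DFT coefficients amounts to a zero-mean Gaussian prior whose circulant covariance has the power-spectrum values as eigenvalues, so missing samples can be replaced by their conditional mean given the observed ones. A minimal Python sketch, assuming the power spectrum is supplied:

import numpy as np

def interpolate_block(x, missing, power_spectrum):
    # x: length-N block (any values at the missing positions)
    # missing: boolean mask marking the missing samples
    # power_spectrum: length-N positive array, variance of each DFT bin
    N = len(x)
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)                     # unitary DFT matrix
    Q = (F.conj().T @ np.diag(1.0 / power_spectrum) @ F).real  # inverse prior covariance
    m, o = missing, ~missing
    # Minimise the Gaussian negative log-likelihood x^T Q x over the
    # missing samples with the observed samples held fixed.
    x_hat = x.copy()
    x_hat[m] = -np.linalg.solve(Q[np.ix_(m, m)], Q[np.ix_(m, o)] @ x[o])
    return x_hat

# Toy usage: a sinusoid with a gap, using an assumed (stand-in) power spectrum.
N = 256
t = np.arange(N)
clean = np.sin(2 * np.pi * 10 * t / N)
spectrum = np.abs(np.fft.fft(clean)) ** 2 / N + 1e-3
missing = np.zeros(N, dtype=bool)
missing[100:110] = True
observed = np.where(missing, 0.0, clean)
restored = interpolate_block(observed, missing, spectrum)
print(np.max(np.abs(restored[missing] - clean[missing])))      # small residual error

Because the prior is Gaussian, the conditional mean has this closed form; whether the original method uses exactly this formulation is an assumption here.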

Relevance:

30.00%

Publisher:

Abstract:

This paper extends n-gram graphone model pronunciation generation to use a mixture of such models. This technique is useful when pronunciations are required for a specific variant (or set of variants) of a language, such as a dialect, and only a small amount of pronunciation dictionary training data is available for that variant. The performance of the interpolated n-gram graphone model is evaluated on Arabic phonetic pronunciation generation for words that cannot be handled by the Buckwalter Morphological Analyser. The pronunciations produced are also used to train an Arabic broadcast audio speech recognition system. In both cases the interpolated graphone model leads to improved performance. Copyright © 2011 ISCA.
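As a toy illustration of the mixture described in this abstract (the graphone units, probabilities and held-out data below are hypothetical, and this is not the paper's implementation), two n-gram graphone models can be combined with a linear interpolation weight estimated by EM on a small held-out set:

from typing import Dict, List, Tuple

Ngram = Tuple[str, ...]                     # (history graphones ..., graphone)

def em_weight(p_a: Dict[Ngram, float], p_b: Dict[Ngram, float],
              heldout: List[Ngram], iters: int = 20) -> float:
    # Maximum-likelihood weight for model A under the two-component mixture,
    # estimated by EM on held-out graphone n-grams.
    lam = 0.5
    for _ in range(iters):
        resp = []
        for ng in heldout:
            a = lam * p_a.get(ng, 1e-12)
            b = (1.0 - lam) * p_b.get(ng, 1e-12)
            resp.append(a / (a + b))        # responsibility of model A
        lam = sum(resp) / len(resp)
    return lam

def interp_prob(ng: Ngram, lam: float,
                p_a: Dict[Ngram, float], p_b: Dict[Ngram, float]) -> float:
    return lam * p_a.get(ng, 1e-12) + (1.0 - lam) * p_b.get(ng, 1e-12)

# Hypothetical bigram graphone probabilities: key = (previous graphone, graphone)
p_general = {("k:K", "a:AE"): 0.4, ("k:K", "a:AA"): 0.6}
p_dialect = {("k:K", "a:AE"): 0.1, ("k:K", "a:AA"): 0.9}
heldout = [("k:K", "a:AA"), ("k:K", "a:AA"), ("k:K", "a:AE")]
lam = em_weight(p_general, p_dialect, heldout)
print(lam, interp_prob(("k:K", "a:AA"), lam, p_general, p_dialect))

In practice the component models would be full n-gram graphone models with back-off; the EM update for the weight is the same.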

Relevance:

30.00%

Publisher:

Abstract:

Language models (LMs) are often constructed by building multiple individual component models that are combined using context-independent interpolation weights. By tuning these weights, using either perplexity or discriminative approaches, it is possible to adapt LMs to a particular task. This paper investigates the use of context-dependent weighting in both interpolation and test-time adaptation of language models. Depending on the previous word contexts, a discrete history weighting function is used to adjust the contribution from each component model. As this dramatically increases the number of parameters to estimate, robust weight estimation schemes are required, and several approaches are described in this paper. The first approach is based on MAP estimation, where interpolation weights of lower-order contexts are used as smoothing priors. The second approach uses training data to ensure robust estimation of LM interpolation weights; this can also serve as a smoothing prior for MAP adaptation. A normalized perplexity metric is proposed to handle the bias of the standard perplexity criterion with respect to corpus size. A range of schemes to combine weight information obtained from training data and test-data hypotheses is also proposed to improve robustness during context-dependent LM adaptation. In addition, a minimum Bayes' risk (MBR) based discriminative training scheme is proposed. An efficient weighted finite state transducer (WFST) decoding algorithm for context-dependent interpolation is also presented. The proposed technique was evaluated using a state-of-the-art Mandarin Chinese broadcast speech transcription task. Character error rate (CER) reductions of up to 7.3% relative were obtained, as well as consistent perplexity improvements. © 2012 Elsevier Ltd. All rights reserved.
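A minimal sketch of the kind of context-dependent weighting described above (illustrative only: the component models, the context mapping and the toy data are hypothetical, and this is not the paper's WFST-based implementation). Per-context weights are estimated by EM, with pseudo-counts derived from the global, context-independent weights acting as the MAP smoothing prior the abstract mentions:

from typing import Callable, Dict, List, Tuple

def estimate_weights(data: List[Tuple[str, str]],              # (history, word) pairs
                     models: List[Callable[[str, str], float]],
                     context_of: Callable[[str], str],
                     global_lam: List[float],
                     prior_count: float = 5.0,
                     iters: int = 10) -> Dict[str, List[float]]:
    # EM for per-context interpolation weights lam_i(c), smoothed towards the
    # global weights via Dirichlet-style pseudo-counts (MAP estimation).
    contexts = {context_of(h) for h, _ in data}
    lam = {c: list(global_lam) for c in contexts}
    for _ in range(iters):
        counts = {c: [prior_count * g for g in global_lam] for c in contexts}
        for h, w in data:
            c = context_of(h)
            scores = [lam[c][i] * m(h, w) for i, m in enumerate(models)]
            z = sum(scores) or 1e-12
            for i, s in enumerate(scores):
                counts[c][i] += s / z                           # posterior responsibility
        for c in contexts:
            total = sum(counts[c])
            lam[c] = [v / total for v in counts[c]]
    return lam

# Toy usage with two hypothetical component models.
lm_a = lambda h, w: {"the": 0.5, "cat": 0.2}.get(w, 0.05)
lm_b = lambda h, w: {"the": 0.3, "cat": 0.4}.get(w, 0.05)
data = [("saw", "the"), ("saw", "cat"), ("the", "cat"), ("a", "cat")]
weights = estimate_weights(data, [lm_a, lm_b],
                           context_of=lambda h: h, global_lam=[0.5, 0.5])
print(weights)
# The interpolated probability for word w after history h is then
#   sum_i weights[context_of(h)][i] * models[i](h, w).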