41 results for Schaderreger, Schaderregerprognose, Prognosemodelle, GIS, Interpolation (pests, pest forecasting, forecasting models, GIS, interpolation)
Abstract:
In this paper, we consider Bayesian interpolation and parameter estimation in a dynamic sinusoidal model. This model is more flexible than the static sinusoidal model since it allows the amplitudes and phases of the sinusoids to be time-varying. For the dynamic sinusoidal model, we derive a Bayesian inference scheme for the missing observations, hidden states and model parameters. The inference scheme is based on a Markov chain Monte Carlo method known as the Gibbs sampler. We illustrate the performance of the inference scheme in an application to packet-loss concealment of lost audio and speech packets. © EURASIP, 2010.
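A minimal Python sketch (our own, not the paper's implementation) of the generative side of such a model: each sinusoid is a 2-D phasor rotated by its frequency at every step, with process noise making amplitude and phase time-varying, and a single Gibbs-style conditional draw imputes a lost observation given the hidden state. All names and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rotation(omega):
    """State transition for one sinusoid: rotate the 2-D phasor by omega."""
    c, s = np.cos(omega), np.sin(omega)
    return np.array([[c, -s], [s, c]])

def simulate(T, omega, q=1e-3, r=1e-2):
    """Sample states x_t and observations y_t = [1 0] x_t + noise.
    The process noise lets amplitude and phase drift over time, which is
    what makes the model 'dynamic' rather than static."""
    A = rotation(omega)
    x = np.array([1.0, 0.0])
    xs, ys = [], []
    for _ in range(T):
        x = A @ x + np.sqrt(q) * rng.standard_normal(2)
        xs.append(x)
        ys.append(x[0] + np.sqrt(r) * rng.standard_normal())
    return np.array(xs), np.array(ys)

xs, ys = simulate(200, omega=2 * np.pi * 0.05)

# One Gibbs-style conditional draw: given the hidden state at a lost index,
# the missing observation is Gaussian around [1 0] x_t with variance r.
lost = 100
y_imputed = xs[lost, 0] + np.sqrt(1e-2) * rng.standard_normal()
print(f"true {ys[lost]:.3f}  imputed {y_imputed:.3f}")
```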
Abstract:
This paper extends n-gram graphone model pronunciation generation to use a mixture of such models. This technique is useful when pronunciations are required for a specific variant (or set of variants) of a language, such as a dialect, and only a small amount of pronunciation dictionary training data is available for that variant. The performance of the interpolated n-gram graphone model is evaluated on Arabic phonetic pronunciation generation for words that cannot be handled by the Buckwalter Morphological Analyser. The pronunciations produced are also used to train an Arabic broadcast audio speech recognition system. In both cases the interpolated graphone model leads to improved performance. Copyright © 2011 ISCA.
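To make the mixture idea concrete, here is a hedged toy sketch of linear interpolation between two graphone n-gram components, with the single interpolation weight tuned by EM on held-out data. The "graphone" tokens and probabilities are invented for illustration and do not come from the paper.

```python
def interp(p_specific, p_general, lam, token):
    """Mixture probability: lam * specific model + (1 - lam) * general model."""
    return lam * p_specific.get(token, 1e-9) + (1 - lam) * p_general.get(token, 1e-9)

def em_weight(heldout, p_specific, p_general, lam=0.5, iters=20):
    """EM for a single interpolation weight: the E-step computes the posterior
    that each held-out token came from the specific model; the M-step averages."""
    for _ in range(iters):
        post = []
        for tok in heldout:
            a = lam * p_specific.get(tok, 1e-9)
            b = (1 - lam) * p_general.get(tok, 1e-9)
            post.append(a / (a + b))
        lam = sum(post) / len(post)
    return lam

# Toy unigram 'graphone' distributions (grapheme:phone pairs as strings).
p_dialect = {"b:b": 0.5, "a:ae": 0.3, "t:t": 0.2}
p_general = {"b:b": 0.3, "a:aa": 0.4, "t:t": 0.3}
lam = em_weight(["b:b", "a:ae", "t:t", "b:b"], p_dialect, p_general)
print(f"tuned weight {lam:.3f}")
```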
Abstract:
Language models (LMs) are often constructed by building multiple individual component models that are combined using context independent interpolation weights. By tuning these weights, using either perplexity or discriminative approaches, it is possible to adapt LMs to a particular task. This paper investigates the use of context dependent weighting in both interpolation and test-time adaptation of language models. Depending on the previous word context, a discrete history weighting function adjusts the contribution from each component model. As this dramatically increases the number of parameters to estimate, robust weight estimation schemes are required, and several are described in this paper. The first is based on MAP estimation, where the interpolation weights of lower order contexts are used as smoothing priors. The second uses training data to ensure robust estimation of LM interpolation weights; this can also serve as a smoothing prior for MAP adaptation. A normalized perplexity metric is proposed to handle the bias of the standard perplexity criterion towards corpus size. A range of schemes to combine weight information obtained from training data and test data hypotheses are also proposed to improve robustness during context dependent LM adaptation. In addition, a minimum Bayes risk (MBR) based discriminative training scheme and an efficient weighted finite state transducer (WFST) decoding algorithm for context dependent interpolation are presented. The proposed techniques were evaluated on a state-of-the-art Mandarin Chinese broadcast speech transcription task, yielding character error rate (CER) reductions of up to 7.3% relative as well as consistent perplexity improvements. © 2012 Elsevier Ltd. All rights reserved.
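As a rough illustration of the first approach described above (not the paper's implementation), the sketch below shrinks a context dependent interpolation weight toward the weight of its lower-order context via a MAP-style counts-weighted average. The prior strength tau and the toy statistics are assumptions.

```python
def map_weight(count_h, lam_ml_h, lam_backoff, tau=10.0):
    """MAP estimate: counts-weighted mix of the ML weight for this context
    and the smoothing prior from the lower-order context."""
    return (count_h * lam_ml_h + tau * lam_backoff) / (count_h + tau)

lam_global = 0.6                      # context independent weight (lower order)
lam_ml = {"of the": 0.9, "in a": 0.4} # ML weights per two-word history
counts = {"of the": 3, "in a": 200}   # history occurrence counts

for h, lam in lam_ml.items():
    print(h, round(map_weight(counts[h], lam, lam_global), 3))
# The sparse 'of the' history stays near the global prior, while the
# frequent 'in a' history keeps a weight close to its ML estimate.
```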
Abstract:
A multivariate, robust, rational interpolation method for propagating uncertainties in several dimensions is presented. The algorithm for selecting numerator and denominator polynomial orders is based on recent work that uses a singular value decomposition approach. In this paper we extend this algorithm to higher dimensions and demonstrate its efficacy, in terms of convergence and accuracy, both as a method for response surface generation and as an interpolation scheme. To obtain stable approximants for continuous functions, we use an L2 error norm indicator to rank optimal numerator and denominator solutions. For discontinuous functions, a second criterion setting an upper limit on the approximant value is employed. Analytical examples demonstrate that, for the same stencil, rational methods can yield more rapid convergence than pseudospectral or collocation approaches for certain problems. © 2012 AIAA.
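For intuition, the following one-dimensional sketch shows the standard linearized rational fit that underlies such SVD-based methods: solve f(x)q(x) - p(x) ≈ 0 for the polynomial coefficients by taking the right singular vector associated with the smallest singular value. The orders and the test function are our own choices, not the paper's.

```python
import numpy as np

def rational_fit(x, f, m, n):
    """Return numerator (degree m) and denominator (degree n) coefficients,
    in increasing order, of a rational approximant p/q fitted to (x, f)."""
    Vp = np.vander(x, m + 1, increasing=True)   # numerator basis
    Vq = np.vander(x, n + 1, increasing=True)   # denominator basis
    A = np.hstack([Vp, -f[:, None] * Vq])       # linearized system [p | q]
    _, _, vt = np.linalg.svd(A)
    coeffs = vt[-1]                             # smallest-singular-value vector
    return coeffs[: m + 1], coeffs[m + 1 :]

x = np.linspace(-1, 1, 40)
f = 1.0 / (1.0 + 25 * x**2)                     # Runge function
p, q = rational_fit(x, f, m=4, n=4)

xt = 0.3
approx = np.polyval(p[::-1], xt) / np.polyval(q[::-1], xt)
print(f"f(0.3)={1/(1+25*0.3**2):.5f}  rational={approx:.5f}")
```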
Abstract:
Geographical Information Systems (GIS) and Digital Elevation Models (DEMs) can be used for many geospatial and hydrological modelling tasks, including drainage and watershed delineation, flood prediction, and physical development studies of urban and rural settlements. This paper explores the use of contour data and planimetric features extracted from topographic maps to derive digital elevation models (DEMs) for watershed delineation and flood impact analysis (for emergency preparedness) of part of Accra, Ghana, in a GIS environment. In the study, two categories of DEM were developed from 5 m contour and planimetric topographic data: a bare earth DEM and a built environment DEM. These derived DEMs were used as terrain inputs for performing spatial analysis and obtaining derivative products. The generated DEMs were used to delineate drainage patterns and the watershed of the study area using ArcGIS Desktop and its ArcHydro extension tool from the Environmental Systems Research Institute (ESRI). A vector-based approach was used to derive inundation areas at various flood levels. The DEM of built-up areas was used as input for determining properties which would be inundated in a flood event and subsequently for generating flood inundation maps. The resulting maps show that about 80% of the areas in the city which have perennially experienced extensive flooding fall within the predicted flood extent. This approach can therefore provide a simplified means of predicting the extent of inundation during flood events for emergency action, especially in less developed economies where sophisticated technologies and expertise are hard to come by. © 2009 Springer Netherlands.
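The paper itself uses a vector-based approach, but the core idea can be sketched in raster form: cells of a bare-earth DEM whose elevation lies below a given flood stage are flagged as inundated. The tiny grid and the 2.5 m flood level below are illustrative values only.

```python
import numpy as np

dem = np.array([[5.0, 4.2, 3.1, 2.0],
                [4.8, 3.5, 2.4, 1.5],
                [4.1, 2.9, 1.8, 1.0]])   # elevations in metres

flood_level = 2.5                         # flood stage in metres
inundated = dem <= flood_level            # boolean inundation mask

print(inundated)
print(f"{inundated.mean():.0%} of cells inundated at {flood_level} m")
```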
Abstract:
Computer-generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique. © 2009 Optical Society of America.
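A hedged sketch of the Gaussian interpolation step as described: each discrete object point contributes a Gaussian bump, and the continuous surface value at a query location is the normalized weighted sum. The kernel width sigma is an assumed parameter, not a value from the paper.

```python
import numpy as np

def gaussian_surface(query, points, values, sigma=0.5):
    """Normalized Gaussian-kernel estimate of the surface at `query` (x, y)."""
    d2 = np.sum((points - query) ** 2, axis=1)   # squared distances to points
    w = np.exp(-d2 / (2 * sigma**2))             # Gaussian weights
    return np.sum(w * values) / np.sum(w)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
depth = np.array([1.0, 2.0, 2.0, 3.0])           # per-point depth samples
print(gaussian_surface(np.array([0.5, 0.5]), pts, depth))  # ≈ 2.0 at the centre
```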
Abstract:
This paper discusses the Cambridge University HTK (CU-HTK) system for the automatic transcription of conversational telephone speech. A detailed discussion of the most important techniques in front-end processing, acoustic modeling and model training, and language and pronunciation modeling is presented. These include the use of conversation side based cepstral normalization, vocal tract length normalization, heteroscedastic linear discriminant analysis for feature projection, minimum phone error training and speaker adaptive training, lattice-based model adaptation, confusion network based decoding and confidence score estimation, pronunciation selection, language model interpolation, and class based language models. The transcription system developed for participation in the 2002 NIST Rich Transcription evaluations of English conversational telephone speech data is presented in detail. In this evaluation the CU-HTK system gave an overall word error rate of 23.9%, which was the best performance by a statistically significant margin. Further details on the derivation of faster systems with moderate performance degradation are discussed in the context of the 2002 CU-HTK 10 × RT conversational speech transcription system. © 2005 IEEE.
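As one concrete example from the list of techniques above, here is a toy sketch of conversation side based cepstral normalization: the per-side mean (and optionally variance) of the cepstral features is removed, computed over all frames of one conversation side. This is a generic formulation, not the CU-HTK code.

```python
import numpy as np

def side_cmn(feats, normalize_variance=True):
    """feats: (frames, coeffs) cepstra for one conversation side.
    Subtract the per-side mean; optionally scale to unit variance."""
    mu = feats.mean(axis=0)
    out = feats - mu
    if normalize_variance:
        out /= feats.std(axis=0) + 1e-8
    return out

# Synthetic cepstra with a channel offset and scale for one side.
side = np.random.default_rng(1).standard_normal((500, 13)) * 2.0 + 5.0
norm = side_cmn(side)
print(norm.mean(axis=0)[:3].round(6), norm.std(axis=0)[:3].round(3))
```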