66 results for Structuring transforms


Relevance:

20.00%

Publisher:

Abstract:

Models for simulating Scanning Probe Microscopy (SPM) may serve as a reference point for validating experimental data and practice. Generally, simulations use a microscopic model of the sample-probe interaction based on a first-principles approach, or a geometric model of macroscopic distortions due to the probe geometry. Examples of the latter include use of neural networks, the Legendre Transform, and dilation/erosion transforms from mathematical morphology. Dilation and the Legendre Transform fall within a general family of functional transforms, which distort a function by imposing a convex solution. In earlier work, the authors proposed a generalized approach to modeling SPM using a hidden Markov model, wherein both the sample-probe interaction and probe geometry may be taken into account. We present a discussion of the hidden Markov model and its relationship to these convex functional transforms for simulating and restoring SPM images. © 2009 SPIE.
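To illustrate the dilation/erosion transforms this abstract refers to, the following minimal sketch (not the authors' hidden Markov model) simulates probe-broadening of a 1-D surface profile by grayscale dilation with a tip-shaped structuring element, then applies the matching erosion as an approximate restoration. The surface, tip geometry, and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

# Illustrative 1-D surface with a sharp feature (assumed example data).
x = np.linspace(-1.0, 1.0, 401)
surface = np.exp(-(x / 0.05) ** 2)

# Parabolic tip apex sampled on a small window (assumed probe geometry).
tip_x = np.linspace(-0.1, 0.1, 41)
tip = -(tip_x ** 2) / (2 * 0.02)

# Grayscale dilation models the broadened image recorded by a blunt probe;
# erosion with the same structuring element gives a partial reconstruction.
image = ndimage.grey_dilation(surface, structure=tip, mode='nearest')
restored = ndimage.grey_erosion(image, structure=tip, mode='nearest')
```

The restored profile recovers the surface only where the tip actually touched it, which is the usual limitation of the morphological approach and part of the motivation for the more general model the abstract describes.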

Relevance:

20.00%

Publisher:

Abstract:

Successful innovation requires effective communication within and between technical and nontechnical communities, which can be challenging due to different educational backgrounds, experience, perceptions, and attitudes. Roadmapping has emerged as a method that can enable effective dialogue between these groups, and the way in which information is structured is a key feature that enables this communication. This is an area that has not received much attention in the literature, and this article seeks to address this gap by describing in detail the structures that have been successfully applied in roadmapping workshops and processes, from which key learning points and future research directions are identified.

Relevance:

20.00%

Publisher:

Abstract:

Discriminative mapping transforms (DMTs) are an approach to robustly adding discriminative training to unsupervised linear adaptation transforms. In unsupervised adaptation, DMTs are more robust to unreliable transcriptions than adaptation transforms estimated directly in a discriminative fashion. They were previously proposed for use with MLLR transforms, with the associated need to explicitly transform the model parameters. In this work the DMT is extended to CMLLR transforms. As these operate in the feature space, it is only necessary to apply a different linear transform at the front end rather than modifying the model parameters, which is useful for rapidly changing speakers/environments. The performance of DMTs with CMLLR was evaluated on the WSJ 20k task. Experimental results show that DMTs based on constrained linear transforms yield a 3% to 6% relative gain over MLE transforms in unsupervised speaker adaptation. © 2011 IEEE.
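The practical point of feature-space (CMLLR-style) adaptation is that each speaker only needs an affine transform applied to the front-end features, leaving the acoustic model untouched. The sketch below illustrates that idea under stated assumptions: the transforms are random placeholders standing in for estimated CMLLR matrices, and the feature dimension and speaker names are hypothetical.

```python
import numpy as np

dim = 39                                   # assumed front-end feature dimension
rng = np.random.default_rng(0)

# Per-speaker affine transforms x' = A @ x + b (placeholder values, not
# estimated CMLLR parameters).
speaker_transforms = {
    "spk1": (np.eye(dim) + 0.01 * rng.standard_normal((dim, dim)),
             0.1 * rng.standard_normal(dim)),
    "spk2": (np.eye(dim) + 0.01 * rng.standard_normal((dim, dim)),
             0.1 * rng.standard_normal(dim)),
}

def adapt_features(features: np.ndarray, speaker: str) -> np.ndarray:
    """Apply the speaker's feature-space transform to every frame."""
    A, b = speaker_transforms[speaker]
    return features @ A.T + b              # (num_frames, dim) -> (num_frames, dim)

# Switching speakers only means picking a different (A, b); the acoustic
# model parameters and decoder stay fixed.
utterance = rng.standard_normal((200, dim))    # 200 frames of dummy features
adapted = adapt_features(utterance, "spk1")
```

This is why the abstract notes that front-end transforms suit rapidly changing speakers or environments: swapping the pair (A, b) is far cheaper than rewriting model parameters, as MLLR-based adaptation requires.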