19 results for gum arabic


Relevance:

10.00%

Publisher:

Abstract:

Amyloid nanofibers derived from hen egg white lysozyme were processed into macroscopic fibers in a wet-spinning process based on interfacial polyion complexation using a polyanionic polysaccharide as cross-linker. As a result of their amyloid nanostructure, the hierarchically self-assembled protein fibers have a stiffness of up to 14 GPa and a tensile strength of up to 326 MPa. Fine-tuning of the polyelectrolytic interactions via pH makes it possible to trigger the release of small molecules, as demonstrated with riboflavin-5'-phosphate. The amyloid fibrils, highly oriented within the gellan gum matrix, were mineralized with calcium phosphate, mimicking the fibrolamellar structure of bone. The mineral crystals formed are highly oriented along the nanofibers, resulting in a 9-fold increase in fiber stiffness.

Relevance:

10.00%

Publisher:

Abstract:

This paper introduces a novel method for training an acoustic model that is complementary to a set of given acoustic models. The method is based on an extension of the Minimum Phone Error (MPE) criterion and aims to produce a model that makes phone errors complementary to those of the models already trained; the technique is therefore called Complementary Phone Error (CPE) training. The method is evaluated on an Arabic large vocabulary continuous speech recognition task. Combination with a CPE-trained system reduced word error rate (WER) by up to 0.7% absolute for a system trained on 172 hours of acoustic data and by up to 0.2% absolute for the final system trained on nearly 2000 hours of Arabic data.
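For orientation, a sketch of the standard lattice-based MPE objective that CPE extends is given below; the notation is an assumption for illustration rather than taken from the abstract, and the specific CPE modification of the accuracy term is not reproduced here.

$$
\mathcal{F}_{\mathrm{MPE}}(\lambda) = \sum_{r=1}^{R}
\frac{\sum_{s} p_{\lambda}(\mathbf{O}_r \mid s)^{\kappa}\, P(s)\, A(s, s_r)}
     {\sum_{s'} p_{\lambda}(\mathbf{O}_r \mid s')^{\kappa}\, P(s')}
$$

Here $\mathbf{O}_r$ is the observation sequence for utterance $r$, $s$ ranges over hypothesized sentences, $A(s, s_r)$ is the raw phone accuracy against the reference $s_r$, and $\kappa$ is an acoustic scaling factor. CPE, as described in the abstract, alters the objective so that phone errors already made by the existing models are treated differently; the exact form of that alteration is specific to the paper and is not shown above.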

Relevance:

10.00%

Publisher:

Abstract:

Adaptation to speaker and environment changes is an essential part of current automatic speech recognition (ASR) systems. In recent years the use of multi-layer perceptrons (MLPs) has become increasingly common in ASR systems. A standard approach to handling speaker differences when using MLPs is to apply a global speaker-specific constrained MLLR (CMLLR) transform to the features prior to training or using the MLP. This paper considers the situation where the data contains both speaker and channel (communication link) differences. A more powerful transform, front-end CMLLR (FE-CMLLR), is applied to the inputs of the MLP to represent the channel differences. Though global, these FE-CMLLR transforms vary from time instance to time instance. Experiments on a channel-distorted dialect Arabic conversational speech recognition task indicate the usefulness of adapting MLP features using both CMLLR and FE-CMLLR transforms. © 2013 IEEE.
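As an illustration of the feature-space adaptation described above, the sketch below applies affine CMLLR-style transforms x' = A x + b to acoustic features before they are passed to an MLP. It is a minimal sketch with illustrative names and shapes, assuming the speaker-level and per-frame (FE-CMLLR-style) transforms have already been estimated elsewhere; it is not the paper's implementation.

```python
import numpy as np

def apply_affine(features, A, b):
    """Apply a single affine feature-space transform x' = A x + b
    to a (T, d) matrix of acoustic feature vectors."""
    return features @ A.T + b

# Illustrative sizes only: T frames of d-dimensional features.
T, d = 300, 40
rng = np.random.default_rng(0)
features = rng.standard_normal((T, d))

# Global speaker-specific CMLLR transform (placeholder: identity;
# in practice estimated per speaker from adaptation data).
A_spk, b_spk = np.eye(d), np.zeros(d)

# FE-CMLLR-style transforms for channel differences: one affine
# transform per time instance (placeholders: identities).
A_chan = np.stack([np.eye(d)] * T)   # shape (T, d, d)
b_chan = np.zeros((T, d))

# Speaker adaptation: one global transform shared by all frames.
x = apply_affine(features, A_spk, b_spk)

# Channel adaptation: the transform varies from frame to frame.
x = np.einsum('tij,tj->ti', A_chan, x) + b_chan

# x would then be the input to the MLP front end.
```

The design point the abstract highlights is that the speaker transform is a single global affine map, while the channel (FE-CMLLR) transforms, though globally estimated, change from time instance to time instance.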