992 results for "language activation"
Abstract:
In mammals, trefoil factor family (TFF) proteins are involved in mucosal maintenance and repair, and they are also implicated in tumor suppression and cancer progression. A novel two-domain TFF protein from the skin secretions of the frog Bombina maxima (Bm-TFF2) has been purified and cloned. It activated human platelets in a dose-dependent manner, and activation of integrin alpha(IIb)beta(3) was involved. Aspirin and apyrase reduced the platelet response to Bm-TFF2 only modestly (about 30% inhibition), indicating that the aggregation is not substantially dependent on ADP and thromboxane A2 autocrine feedback. Elimination of external Ca2+ with EGTA did not affect the platelet aggregation induced by Bm-TFF2, while a strong calcium signal (cytoplasmic Ca2+ release) was detected, suggesting that activation of phospholipase C (PLC) is involved. Subsequent immunoblotting revealed that, unlike in platelets activated by stejnulxin (a glycoprotein VI agonist), PLC gamma 2 was not phosphorylated in platelets activated by Bm-TFF2. FITC-labeled Bm-TFF2 bound to platelet membranes. Bm-TFF2 is the first TFF protein reported to possess human platelet activation activity. (c) 2005 Elsevier Inc. All rights reserved.
Abstract:
This paper investigates several approaches to bootstrapping a new spoken language understanding (SLU) component in a target language, given a large dataset of semantically annotated utterances in some other source language. The aim is to reduce the cost of porting a spoken dialogue system from one language to another by minimising the amount of data required in the target language. Since word-level semantic annotations are costly, Semantic Tuple Classifiers (STCs) are used in conjunction with statistical machine translation models, both of which are trained from unaligned data, to further reduce development time. The paper presents experiments in which a French SLU component in the tourist information domain is bootstrapped from English data. Results show that training STCs on automatically translated data produced the best performance for predicting the utterance's dialogue act type; however, individual slot/value pairs are best predicted by training STCs on the source language and using them to decode translated utterances. © 2010 ISCA.
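A minimal sketch of the two bootstrapping strategies compared above, assuming scikit-learn classifiers as a stand-in for the Semantic Tuple Classifiers and a hypothetical translate() helper in place of the statistical machine translation models trained from unaligned data; it is illustrative only, not the paper's implementation.

# Sketch (not the paper's code) of bootstrapping a target-language SLU
# component from source-language annotations, in two ways.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def translate(utterances, direction):
    """Placeholder for a statistical MT system; hypothetical helper."""
    raise NotImplementedError("plug in an MT system here")

# Source-language (English) training utterances with dialogue act labels.
en_utts = ["is there a cheap hotel near the centre", "what is the phone number"]
acts = ["inform", "request"]

# Strategy (a): translate the training data into the target language and
# train the classifier there; target-language test utterances are decoded
# directly.
def train_on_translations(en_utts, labels):
    fr_utts = translate(en_utts, direction="en->fr")
    clf = make_pipeline(CountVectorizer(), LinearSVC())
    return clf.fit(fr_utts, labels)

# Strategy (b): train the classifier on the source language and translate
# each incoming target-language utterance back to the source language before
# decoding it.
def train_on_source(en_utts, labels):
    clf = make_pipeline(CountVectorizer(), LinearSVC())
    clf.fit(en_utts, labels)
    return lambda fr_utt: clf.predict(translate([fr_utt], direction="fr->en"))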
Abstract:
Phenoloxidase in shrimp and lobster is shown to exist in a latent form that can be activated by trypsin and by an endogenous enzyme with tryptic activity. On Sephadex G-100 gel, three isoenzymes differing in molecular weight were isolated from naturally activated lobster shell extracts. A mechanism of activation of prophenoloxidase, involving limited proteolysis by the activating enzyme to form the isoenzymes, is proposed.
Abstract:
Gelling times of polyester resins with varying quantities of catalyst and accelerator were studied, and the results are reported in this communication.
Abstract:
Humans are able to stabilize their movements in environments with unstable dynamics by selectively modifying arm impedance independently of force and torque. We further investigated adaptation to unstable dynamics to determine whether the CNS maintains a constant overall level of stability as the instability of the environmental dynamics is varied. Subjects performed reaching movements in unstable force fields of varying strength, generated by a robotic manipulator. Although the force fields disrupted the initial movements, subjects were able to adapt to the novel dynamics and learned to produce straight trajectories. After adaptation, the endpoint stiffness of the arm was measured at the midpoint of the movement. The stiffness had been selectively modified in the direction of the instability. The stiffness in the stable direction was relatively unchanged from that measured during movements in a null force field prior to exposure to the unstable force field. This impedance modification was achieved without changes in force and torque. The overall stiffness of the arm and environment in the direction of instability was adapted to the force field strength such that it remained equivalent to that of the null force field. This suggests that the CNS attempts both to maintain a minimum level of stability and to minimize energy expenditure.
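The stability argument can be made concrete with illustrative numbers (not the study's data): if the divergent force field acts like a negative stiffness of magnitude beta along the unstable direction, the reported adaptation amounts to raising arm stiffness by roughly beta so that the combined arm-plus-environment stiffness stays at its null-field value.

# Illustrative sketch only; stiffness values and field gains are assumed.
import numpy as np

k_arm_null = 300.0                       # arm endpoint stiffness in the null field (N/m), hypothetical
field_strengths = [0.0, 100.0, 200.0]    # divergent-field gain beta (N/m), hypothetical

for beta in field_strengths:
    k_env = -beta                        # a divergent field behaves as negative stiffness
    k_arm = k_arm_null + beta            # adapted arm stiffness in the unstable direction
    k_net = k_arm + k_env                # combined arm + environment stiffness
    # k_net stays at the null-field value, consistent with the finding that
    # the CNS maintains a constant overall level of stability.
    print(f"beta={beta:6.1f}  arm={k_arm:6.1f}  net={k_net:6.1f}")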
Abstract:
Somatic cell nuclear transfer (SCNT) is a remarkable process in which a somatic cell nucleus is acted upon by the ooplasm via mechanisms that today remain unknown. Here we show the developmental competence (% blastocyst) of embryos derived from SCNT (21%)
Abstract:
Functional glycine receptors (GlyRs) are enriched in the hippocampus, but their roles in synaptic transmission are unclear. In this study, we examined the effect of GlyR activation on paired-pulse stimulation of the whole-cell postsynaptic currents (PSCs)
Abstract:
Superimposed on the activation of the embryonic genome in the preimplantation mouse embryo is the formation of a transcriptionally repressive state during the two-cell stage. This repression appears to be mediated at the level of chromatin structure, because it is reversed by inducing histone hyperacetylation or by inhibiting the second round of DNA replication. We report that, of more than 200 amplicons analyzed by mRNA differential display, about 45% are repressed between the two-cell and four-cell stages. This repression is scored either as a decrease in amplicon expression between the two-cell and four-cell stages or by the ability of either trichostatin A (an inhibitor of histone deacetylases) or aphidicolin (an inhibitor of replicative DNA polymerases) to increase the level of amplicon expression. Results of this study also indicate that about 16% of the amplicons analyzed are likely novel genes whose sequences do not correspond to sequences in the current databases, whereas about 20% of the sequences expressed during this transition are likely repetitive sequences. Lastly, inducing histone hyperacetylation in two-cell embryos inhibits cleavage to the four-cell stage. These results suggest that genome activation is global and relatively promiscuous and that a function of the transcriptionally repressive state is to dictate the appropriate profile of gene expression that is compatible with further development.
Abstract:
An increasingly common scenario in building speech synthesis and recognition systems is training on inhomogeneous data. This paper proposes a new framework for estimating hidden Markov models on data containing both multiple speakers and multiple languages. The proposed framework, speaker and language factorization, attempts to factorize speaker-/language-specific characteristics in the data and then model them using separate transforms. Language-specific factors in the data are represented by transforms based on cluster mean interpolation with cluster-dependent decision trees. Acoustic variations caused by speaker characteristics are handled by transforms based on constrained maximum-likelihood linear regression. Experimental results on statistical parametric speech synthesis show that the proposed framework enables data from multiple speakers in different languages to be used to: train a synthesis system; synthesize speech in a language using speaker characteristics estimated in a different language; and adapt to a new language. © 2012 IEEE.
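A small numerical sketch of the two factor types the abstract combines, with dimensions and transform values assumed purely for illustration: a language-specific state mean formed by cluster mean interpolation, and a speaker-specific constrained-MLLR-style affine transform applied to the observation.

# Sketch only; not the paper's implementation or notation.
import numpy as np

rng = np.random.default_rng(0)
D, P = 3, 4                               # feature dimension, number of clusters (assumed)

cluster_means = rng.normal(size=(P, D))   # cluster mean vectors for one HMM state
lang_weights = np.array([0.4, 0.3, 0.2, 0.1])   # language-dependent interpolation weights

# Language factor: state mean as an interpolation of cluster means.
mu_lang = lang_weights @ cluster_means

# Speaker factor: constrained MLLR maps the observation o -> A o + b.
A = np.eye(D) * 0.9                       # hypothetical CMLLR rotation/scale
b = np.full(D, 0.1)                       # hypothetical CMLLR bias
o = rng.normal(size=D)                    # one observation vector
o_speaker_normalised = A @ o + b

# Likelihood of the transformed observation under the language-adapted mean
# (unit covariance assumed for brevity).
log_lik = -0.5 * np.sum((o_speaker_normalised - mu_lang) ** 2)
print(mu_lang, log_lik)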
Abstract:
Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of generated utterances, or (b) using statistics to inform the generation decision process. Both approaches rely on the existence of a handcrafted generator, which limits their scalability to new domains. This paper presents BAGEL, a statistical language generator which uses dynamic Bayesian networks to learn from semantically-aligned data produced by 42 untrained annotators. A human evaluation shows that BAGEL can generate natural and informative utterances from unseen inputs in the information presentation domain. Additionally, generation performance on sparse datasets is improved significantly by using certainty-based active learning, yielding ratings close to the human gold standard with a fraction of the data. © 2010 Association for Computational Linguistics.
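A brief sketch of certainty-based active learning as described above, using a generic probabilistic classifier and a top-posterior confidence score as stand-ins for illustration; BAGEL itself learns dynamic Bayesian networks, which are not reproduced here.

# Sketch only: at each round, the examples the current model is least
# confident about are sent for annotation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_rounds(X_pool, y_pool, seed_size=10, batch=5, rounds=3):
    rng = np.random.default_rng(0)
    labelled = list(rng.choice(len(X_pool), size=seed_size, replace=False))
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_pool[labelled], y_pool[labelled])
        # Certainty = highest posterior probability; query the least certain items.
        certainty = clf.predict_proba(X_pool).max(axis=1)
        candidates = [i for i in np.argsort(certainty) if i not in labelled]
        labelled.extend(candidates[:batch])
    return clf, labelled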
Abstract:
State-of-the-art large vocabulary continuous speech recognition (LVCSR) systems often combine outputs from multiple subsystems developed at different sites. Cross-system adaptation can be used as an alternative to direct hypothesis-level combination schemes such as ROVER. The standard approach involves only cross adapting the acoustic models. To fully exploit the complementary features among subsystems, language model (LM) cross adaptation techniques can be used. In this paper, previous research on multi-level n-gram LM cross adaptation is extended to further include cross adaptation of neural network LMs. Using this improved LM cross adaptation framework, significant error rate gains of 4.0%-7.1% relative were obtained over acoustic-model-only cross adaptation when combining a range of Chinese LVCSR subsystems used in the 2010 and 2011 DARPA GALE evaluations. Copyright © 2011 ISCA.
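A minimal sketch of LM cross adaptation by linear interpolation, under the assumption that each component LM (for example, an n-gram LM and a neural network LM) exposes per-word probabilities and that the interpolation weights are re-estimated on the hypotheses produced by another subsystem; the helper below is illustrative and does not reproduce the evaluation systems.

# Sketch only: EM re-estimation of interpolation weights on cross-system
# hypothesis text, followed by linear interpolation of per-word probabilities.
import numpy as np

def reestimate_weights(word_probs, weights, iters=10):
    """word_probs: array [n_words, n_lms] of component LM probabilities for
    each word in the other subsystem's hypothesis text."""
    w = np.asarray(weights, dtype=float)
    for _ in range(iters):
        # E-step: posterior responsibility of each LM for each word.
        joint = word_probs * w
        resp = joint / joint.sum(axis=1, keepdims=True)
        # M-step: weights proportional to average responsibility.
        w = resp.mean(axis=0)
    return w

# Example with two component LMs and made-up per-word probabilities.
probs = np.array([[0.02, 0.05], [0.10, 0.04], [0.01, 0.03]])
weights = reestimate_weights(probs, [0.5, 0.5])
interpolated = probs @ weights      # interpolated per-word probabilities
print(weights, interpolated)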