988 results for Language loss
Abstract:
Icing is the standard practice for preserving prawns on board fishing boats in India. The majority of these boats need to preserve the catch for only a few hours because of the short duration of their fishing trips. However, with the anticipated introduction of a considerable number of larger fishing vessels that can remain in the fishing grounds for longer periods, more than a fortnight, preservation methods other than icing are required to retain prime quality. Freezing and cold storage of whole prawns on board, followed by thawing and processing on land, is a possible proposition. The extent of quality loss in prawns during these operations is one of the important points to be considered. Hence, laboratory-scale studies were undertaken on double freezing of prawns, and the results are dealt with in this communication.
Abstract:
This paper deals with an extensive study conducted to estimate the extent of weight loss in frozen prawns. The weight loss varied from 7 to 12% in peeled and deveined (PD), 5 to 7% in headless (HL), and about 7% in cooked and peeled (CP) prawns from the date of processing to the date of inspection, normally within two weeks. To compensate for this weight loss, nearly 11% excess material is added to every frozen block, resulting in an average annual loss of Rs. 2.68 crores in foreign exchange. The relevant data pertain to the period 1971 to 1973, and the annual average loss was estimated for the ten years ending 1973.
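The abstract's "nearly 11%" excess-packing figure is consistent with compensating a roughly 10% weight loss: a block that must still meet its declared weight W after losing a fraction f must be packed at W / (1 − f), an excess of f / (1 − f). A minimal sketch of this arithmetic (the function name is illustrative, not from the paper):

```python
# Hedged sketch: relates the reported 5-12% weight-loss range to the
# ~11% excess material the abstract says processors add per block.

def excess_fraction(loss_fraction: float) -> float:
    """Fraction of extra material needed so a block still meets its
    declared weight after losing `loss_fraction` of its weight."""
    return loss_fraction / (1.0 - loss_fraction)

for f in (0.05, 0.07, 0.10, 0.12):
    print(f"loss {f:.0%} -> excess needed {excess_fraction(f):.1%}")
# A 10% loss requires about 11.1% excess, matching the "nearly 11%" figure.
```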
Abstract:
The C1494T mutation in the mitochondrial 12S rRNA gene was recently reported in two large Chinese families with aminoglycoside-induced and nonsyndromic hearing loss (AINHL) and was claimed to be pathogenic. This mutation, however, was first reported in a sample f
Abstract:
Specific interactions among biomolecules drive virtually all cellular functions and underlie phenotypic complexity and diversity. Biomolecules are not isolated particles but elements of integrated interaction networks, and they play their roles through specific interactions. Simultaneous emergence or loss of multiple interacting partners is unlikely. If one of the interacting partners is lost, what are the evolutionary consequences for the retained partner? Taking advantage of the availability of a large number of mammalian genome sequences and knowledge of the phylogenetic relationships of the species, we examined the evolutionary fate of the motilin (MLN) hormone gene after the pseudogenization of its specific receptor, the MLN receptor (MLNR), on the rodent lineage. We speculate that the MLNR gene became a pseudogene before the divergence of the squirrel and other rodents, about 75 mya. The evolutionary consequences for the MLN gene were diverse. While an intact open reading frame for the MLN gene, which appears functional, was preserved in the kangaroo rat, the MLN gene became inactivated independently on the lineages leading to the guinea pig and to the common ancestor of the mouse and rat. Gain and loss of specific interactions among biomolecules through the birth and death of genes point to a general evolutionary dynamic: gene birth and death are widespread phenomena in genome evolution at the genetic level; once mutations arise, a stepwise process of elaboration and optimization ensues, which gradually integrates and orders mutations into a coherent pattern.
Abstract:
Loss of solids from, and gain in weight of, the meat of whole prawns and prawn meat stored in ice has been studied to explain the mechanism of solid loss. Two stages are identified in this phenomenon. In the first stage, water is absorbed without loss of solids, resulting in a maximum increase in weight. In the second stage, both solids and water are lost, resulting in a gradual decrease in weight from the maximum reached, though not back down to the original weight. It is inferred that whole prawns stored in ice for up to two days give the maximum peeled yield without loss of nutrients, while at the same time making the peeling process easier.
Abstract:
Theoretical and experimental AC loss data on a superconducting pancake coil wound using second-generation (2G) conductors are presented. An anisotropic critical state model is used to calculate the critical current and the AC losses of the coil. In the coil there are two regions, the critical state region and the subcritical region; the model assumes that in the subcritical region the flux lines are parallel to the wide face of the tape. AC losses of the superconducting pancake coil are calculated using this model. Both calorimetric and electrical techniques were used to measure AC losses in the coil. The calorimetric method is based on measuring the boil-off rate of liquid nitrogen. The electrical method used a compensation circuit to eliminate the inductive component and measure the loss voltage of the coil. The experimental results are consistent with the theoretical calculations, validating the anisotropic critical state model for loss estimations in the superconducting pancake coil. © 2011 American Institute of Physics.
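The calorimetric method mentioned above converts a measured liquid-nitrogen boil-off rate into a dissipated power via the latent heat of vaporization of nitrogen (about 199 kJ/kg at 77 K). A minimal sketch of that conversion, not the authors' own procedure, which would also measure and subtract the background heat leak:

```python
# Hedged sketch: infer AC loss power from LN2 boil-off rate.
# Assumes all boil-off is driven by the coil's dissipation; in practice
# a background boil-off measurement would be subtracted first.

L_V_N2 = 199e3  # latent heat of vaporization of liquid nitrogen, J/kg

def ac_loss_from_boiloff(boiloff_kg_per_hour: float) -> float:
    """AC loss power in watts implied by a given LN2 boil-off rate."""
    return boiloff_kg_per_hour / 3600.0 * L_V_N2

# A boil-off of 0.1 kg/h corresponds to roughly 5.5 W of dissipation.
print(f"{ac_loss_from_boiloff(0.1):.2f} W")
```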
Abstract:
An increasingly common scenario in building speech synthesis and recognition systems is training on inhomogeneous data. This paper proposes a new framework for estimating hidden Markov models on data containing both multiple speakers and multiple languages. The proposed framework, speaker and language factorization, attempts to factorize speaker-/language-specific characteristics in the data and then model them using separate transforms. Language-specific factors in the data are represented by transforms based on cluster mean interpolation with cluster-dependent decision trees. Acoustic variations caused by speaker characteristics are handled by transforms based on constrained maximum-likelihood linear regression. Experimental results on statistical parametric speech synthesis show that the proposed framework enables data from multiple speakers in different languages to be used to: train a synthesis system; synthesize speech in a language using speaker characteristics estimated in a different language; and adapt to a new language. © 2012 IEEE.
Abstract:
Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of generated utterances, or (b) using statistics to inform the generation decision process. Both approaches rely on the existence of a handcrafted generator, which limits their scalability to new domains. This paper presents BAGEL, a statistical language generator which uses dynamic Bayesian networks to learn from semantically-aligned data produced by 42 untrained annotators. A human evaluation shows that BAGEL can generate natural and informative utterances from unseen inputs in the information presentation domain. Additionally, generation performance on sparse datasets is improved significantly by using certainty-based active learning, yielding ratings close to the human gold standard with a fraction of the data. © 2010 Association for Computational Linguistics.