386 results for lms


Relevance: 10.00%

Abstract:

Background: The function of a protein can be deciphered with higher accuracy from its structure than from its amino acid sequence. Given the huge gap between the available protein sequence and structural space, tools that can generate functionally homogeneous clusters using only sequence information hold great importance. Traditional alignment-based tools work well in most cases, with clustering performed on the basis of sequence similarity. However, for multi-domain proteins the alignment quality may be poor owing to varying protein lengths, domain shuffling or circular permutations. Multi-domain proteins are ubiquitous in nature, hence alignment-free tools, which overcome the shortcomings of alignment-based protein comparison methods, are required. Further, existing tools classify proteins using only domain-level information and hence miss the information encoded in the tethered regions or accessory domains. Our method, in contrast, takes the full-length sequence of a protein into account, consolidating the complete sequence information to understand a given protein better. Results: Our web server, CLAP (Classification of Proteins), is one such alignment-free tool for automatic classification of protein sequences. It uses a pattern-matching algorithm that assigns local matching scores (LMS) to residues that are part of the matched patterns between the two sequences being compared. CLAP works on full-length sequences and does not require prior domain definitions. Pilot studies on protein kinases and immunoglobulins have shown that CLAP yields clusters with high functional and domain-architectural similarity. Moreover, parsing at a statistically determined cut-off produced clusters that corroborated the sub-family-level classification of the domain family concerned.
Conclusions: CLAP is a useful protein-clustering tool, independent of domain assignment, domain order, sequence length and domain diversity. Our method can be used for any set of protein sequences, yielding functionally relevant clusters with high domain architectural homogeneity. The CLAP web server is freely available for academic use at http://nslab.mbu.iisc.ernet.in/clap/.
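CLAP's LMS scoring is the paper's own algorithm and is not reproduced here; as a crude illustration of the alignment-free, full-length comparison idea, two protein sequences can be compared by their k-mer overlap. The function name and parameters below are hypothetical:

```python
def kmer_similarity(seq_a, seq_b, k=3):
    """Crude alignment-free similarity between two protein sequences:
    Jaccard overlap of their k-mer sets. Unlike an alignment, this is
    insensitive to domain order, circular permutation and length
    differences; it only illustrates the idea, not CLAP's actual LMS."""
    ka = {seq_a[i:i + k] for i in range(len(seq_a) - k + 1)}
    kb = {seq_b[i:i + k] for i in range(len(seq_b) - k + 1)}
    if not ka and not kb:
        return 0.0
    return len(ka & kb) / len(ka | kb)
```

A real pattern-matching score would weight residues and allow mismatches; the set-based sketch above already clusters identical sequences at similarity 1.0 and unrelated ones near 0.0.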

Relevance: 10.00%

Abstract:

In speech recognition systems language models (LMs) are often constructed by training and combining multiple n-gram models. They can be used either to represent different genres or tasks found in diverse text sources, or to capture stochastic properties of different linguistic symbol sequences, for example syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal changes to decoding tools are needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences. ©2010 IEEE.

Relevance: 10.00%

Abstract:

State-of-the-art large vocabulary continuous speech recognition (LVCSR) systems often combine outputs from multiple subsystems developed at different sites. Cross system adaptation can be used as an alternative to direct hypothesis level combination schemes such as ROVER. In normal cross adaptation it is assumed that useful diversity among systems exists only at the acoustic level. However, complementary features among complex LVCSR systems also manifest themselves in other layers of the modelling hierarchy, e.g., the subword and word levels. It is thus interesting to also cross adapt language models (LMs) to capture them. In this paper cross adaptation of multi-level LMs modelling both syllable and word sequences was investigated to improve LVCSR system combination. Significant error rate gains of up to 6.7% relative were obtained over ROVER and acoustic-model-only cross adaptation when combining 13 Chinese LVCSR subsystems used in the 2010 DARPA GALE evaluation. © 2010 ISCA.

Relevance: 10.00%

Abstract:

Blackburn College is using its Heritage Library Management System (LMS) for a wide variety of loans beyond books, which has enabled it to better manage the growing number and range of technologies used in teaching and learning. In a further pilot development, the College has taken the bold step of training departments to catalogue their own technologies and add them to the LMS for loan. This lets departments keep track of their own equipment easily and provides a more consistent approach to equipment loan within the College.

Relevance: 10.00%

Abstract:

We present experimental results supporting optical-electrical hybrid data storage by optical recording and electrical reading, using Ge2Sb2Te5 as the recording medium. The sheet resistance of laser-irradiated Ge2Sb2Te5 films exhibits an abrupt change of four orders of magnitude (from 10^7 to 10^3 Ohm/sq) with increasing laser power. Current-voltage curves of the amorphous area and the laser-crystallized dots, measured by a conductive atomic force microscope (C-AFM), show that their resistivities are 2.725 and 3.375 x 10^-3, respectively, and the surface current distribution in the films also shows high- and low-resistance states. All these results suggest that a laser-recorded bit can be read electrically by measuring the change in electrical resistivity, thus making optical-electrical hybrid data storage possible.

Relevance: 10.00%

Abstract:

This project analyses the design and evaluation of two methods for suppressing, in the electrocardiogram (ECG), the interference generated by the chest compressions delivered by the LUCAS mechanical device during cardiopulmonary resuscitation (CPR). The goal is a method that removes the artefact from the ECG effectively enough to allow a reliable diagnosis of the cardiac rhythm. An effective method would make it unnecessary to interrupt the resuscitation massage for a correct rhythm analysis, which would increase the chances of successful resuscitation. For the project, a dedicated database was built from recordings of out-of-hospital cardiac arrests. This new database contains 410 segments from 86 patients; every episode is 30 seconds long and was recorded while the patient received cardiac massage. In addition, a graphical interface was developed to characterize the artefact-suppression methods; it displays the ECG, the thoracic impedance, and the ECG after artefact removal over time. Using this tool, the recordings were processed with an adaptive filter and with a fixed-coefficient filter. The methods were evaluated through the sensitivity and specificity of a rhythm-classification algorithm applied to the filtered ECG signals. The main contribution of the project is therefore a powerful and effective tool for evaluating methods that suppress the artefact caused in the ECG by chest compressions during CPR, and for the subsequent diagnosis. The tool can be used to analyse resuscitation episodes from any source and can integrate new artefact-suppression methods.
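The adaptive-filtering approach described above can be illustrated with a minimal normalized-LMS sketch: a compression-correlated reference signal (e.g. the thoracic impedance) is filtered to estimate the artefact, and the estimate is subtracted from the ECG. The function name and parameter values are illustrative assumptions, not the project's actual implementation:

```python
import numpy as np

def lms_artifact_filter(ecg, ref, n_taps=8, mu=0.05, eps=1e-8):
    """Suppress a compression artefact in `ecg` using a correlated
    reference `ref` via normalized LMS. The adaptive FIR filter models
    the artefact from the reference; the error signal (ECG minus the
    artefact estimate) is the cleaned ECG returned to the caller."""
    w = np.zeros(n_taps)                 # adaptive filter weights
    buf = np.zeros(n_taps)               # sliding window of reference samples
    out = np.zeros(len(ecg))
    for n in range(len(ecg)):
        buf = np.roll(buf, 1)
        buf[0] = ref[n]
        y = w @ buf                      # artefact estimate
        e = ecg[n] - y                   # cleaned sample = estimation error
        w += mu * e * buf / (eps + buf @ buf)   # NLMS weight update
        out[n] = e
    return out
```

Because the cardiac rhythm is uncorrelated with the compression reference, it survives in the error signal while the artefact is cancelled.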

Relevance: 10.00%

Abstract:

The origin, character, analysis and treatment of subsurface damage (SSD) are summarized in this paper. SSD, introduced into substrates by manufacturing processes, may lower the laser-induced damage threshold (LIDT) of substrates and thin films. Nondestructive evaluation (NDE) methods for measuring SSD are used extensively because of their convenience and reliability. The principle, experimental setup and other technological details are given for total internal reflection microscopy (TIRM), high-frequency scanning acoustic microscopy (HFSAM) and laser-modulated scattering (LMS). However, the spatial resolution, probing depth and theoretical models of these NDE methods demand further study. Furthermore, effective surface treatments for minimizing or eliminating SSD are also presented, and the advantages and disadvantages of ion beam etching (IBE) and magnetorheological finishing (MRF) are discussed. Finally, the key problems and research directions for SSD are summarized. (c) 2005 Elsevier GmbH. All rights reserved.

Relevance: 10.00%

Abstract:

The Simulated Moving Bed (Leito Móvel Simulado, LMS) is a very efficient adsorption-based separation process, since it operates continuously and with counter-current flow of the solid phase. Among its many applications, the process stands out in the resolution of petrochemicals and, especially at present, in the separation of racemic mixtures, which are separations of considerable difficulty. In this work two new approaches to modelling the LMS were proposed: the Stepwise approach and the Front Velocity approach. In the Stepwise approach the chromatographic columns of the LMS were modelled discretely, the domain of each column being divided into N mixing cells connected in series, and the concentrations of the compounds in the liquid and solid phases were simulated with two distinct mass-transfer kinetics. This approach assumes that the mass-transfer interactions between the molecules of the compound in the liquid and solid phases occur only at the surface, so that the volume occupied by each molecule in the solid and liquid phases can be taken as the same, which implies that the residence factor can be considered equal to the equilibrium constant. To describe the mass transfer occurring in the chromatographic process, the Front Velocity approach establishes that convection is the dominant mechanism of solute transport along the column. Front Velocity is a discrete (stepwise) model in which the flow rate determines the advance of the liquid phase along the column. The steps are: advance of the liquid phase, followed by mass transfer between the liquid and solid phases, the latter within the same time interval. The experimental volumetric flow is thus used to discretize the control volumes, which move along the porous column at the velocity of the liquid phase.
Mass transfer was represented by two distinct kinetic mechanisms, without (linear type) and with a maximum adsorption capacity (Langmuir type). Both proposed approaches were studied and evaluated by comparison with experimental LMS separation data for the anaesthetic ketamine and, later, for the drug verapamil. They were also compared with simulations of the dispersive equilibrium model used by Santos (2004) for ketamine and by Perna (2013) for verapamil. In the column-characterization stage, the new approaches were coupled to the R2W inverse tool to determine the global mass-transfer parameters using only the experimental residence times of each enantiomer in the high-performance liquid chromatography (HPLC) column. In the second stage, the kinetic models developed in the two approaches were applied to the LMS columns, with the values determined in the column characterization, to simulate the continuous separation process. The simulations show good agreement between the two proposed approaches and the pulse experiments for column characterization in the enantiomeric separation of ketamine over time. The LMS separation simulations, for both verapamil and ketamine, deviate from the experimental data in the first cycles; after these initial cycles, however, the simulations correlate well with the experimental data. For the ketamine separation (Santos, 2004), in which the feed concentration was relatively low, the models predicted the separation process with both the linear and the Langmuir kinetics. For the verapamil separation (Perna, 2013), where the feed concentration is relatively high, only the Langmuir kinetics represented the process, since the linear kinetics cannot represent the saturation of the chromatographic columns.
According to this study, both proposed approaches are promising tools for predicting the chromatographic behaviour of a sample in a pulse experiment, and for simulating the separation of a compound in the LMS, despite the small discrepancies in the first LMS operating cycles. Moreover, they can be easily implemented and applied to process analysis, since they require few parameters and consist of ordinary differential equations.
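The Stepwise discretization described above, a column as N mixing cells in series with a linear mass-transfer kinetics, can be sketched as follows. All parameter values are illustrative, not taken from the thesis:

```python
import numpy as np

def simulate_column(N=30, Q=1.0, V=0.1, k=5.0, K=2.0,
                    c_feed=1.0, t_end=20.0, dt=1e-3):
    """Single chromatographic column as N mixing cells in series with a
    linear driving-force kinetics (a sketch of the Stepwise idea):
        dc_i/dt = (Q/V) * (c_{i-1} - c_i) - k * (K*c_i - q_i)
        dq_i/dt = k * (K*c_i - q_i)
    where c is the liquid-phase and q the solid-phase concentration,
    Q the flow rate, V the cell volume, k the mass-transfer rate and
    K the linear equilibrium constant. Returns the outlet history."""
    c = np.zeros(N)
    q = np.zeros(N)
    out = []
    for _ in range(int(t_end / dt)):        # explicit Euler time stepping
        c_in = np.concatenate(([c_feed], c[:-1]))   # feed enters cell 0
        transfer = k * (K * c - q)                  # liquid -> solid flux
        c += dt * ((Q / V) * (c_in - c) - transfer)
        q += dt * transfer
        out.append(c[-1])
    return np.array(out)
```

The outlet trace shows the expected retarded breakthrough: the front leaves the column at roughly N·(V/Q)·(1+K) time units, later than pure convection would give, because the solid phase retains part of the solute.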

Relevance: 10.00%

Abstract:

Many relevant industrial applications involve adsorption processes, for example product purification, separation of substances, and pollution and humidity control. The growing interest in biomolecule purification processes is mainly due to the development of biotechnology and to the demand of the pharmaceutical and chemical industries for products of high purity. The simulated moving bed (LMS) is a continuous chromatographic process that simulates the movement of the adsorbent bed, counter-current to the movement of the liquid, by periodically switching the positions of the inlet and outlet streams; it operates continuously without loss of purity in the outlet streams. These consist of the extract, rich in the more strongly adsorbed component, and the raffinate, rich in the more weakly adsorbed component, which makes the process particularly suitable for binary separations. The aim of this thesis is to study and evaluate different approaches using stochastic optimization methods for the inverse problem of the phenomena involved in the LMS separation process. Discrete models with different mass-transfer approaches were used, with the advantage of a large number of theoretical plates in a column of moderate length; in this process the separation grows as the solutes flow through the bed, that is, with the number of times the molecules interact between the mobile and the stationary phase, thereby reaching equilibrium. The modelling and simulation carried out with these approaches allowed the main characteristics of an LMS separation unit to be evaluated and identified. The application under study concerns the simulation of the separation of baclofen and ketamine. These compounds were chosen because they are well characterized in the literature, with experimental adsorption kinetics and equilibrium data available.
With the experimental results in hand, the direct and inverse problems of an LMS separation unit were studied in order to compare the simulated results with the experimental ones, always based on criteria of separation efficiency between the mobile and stationary phases. The methods studied were the GA (Genetic Algorithm) and the PCA (Particle Collision Algorithm), and a hybridization of the GA with the PCA was also implemented. As a result, this thesis analyses and compares the optimization methods in different aspects related to the kinetic mechanism of mass transfer by adsorption and desorption involving the solid phase of the adsorbent.

Relevance: 10.00%

Abstract:

State-of-the-art large vocabulary continuous speech recognition (LVCSR) systems often combine outputs from multiple subsystems developed at different sites. Cross system adaptation can be used as an alternative to direct hypothesis level combination schemes such as ROVER. The standard approach involves only cross adapting acoustic models. To fully exploit the complementary features among sub-systems, language model (LM) cross adaptation techniques can be used. Previous research on multi-level n-gram LM cross adaptation is extended in this paper to further include the cross adaptation of neural network LMs. Using this improved LM cross adaptation framework, significant error rate gains of 4.0%-7.1% relative were obtained over acoustic model only cross adaptation when combining a range of Chinese LVCSR sub-systems used in the 2010 and 2011 DARPA GALE evaluations. Copyright © 2011 ISCA.

Relevance: 10.00%

Abstract:

Language models (LMs) are often constructed by building multiple individual component models that are combined using context independent interpolation weights. By tuning these weights, using either perplexity or discriminative approaches, it is possible to adapt LMs to a particular task. This paper investigates the use of context dependent weighting in both interpolation and test-time adaptation of language models. Depending on the previous word contexts, a discrete history weighting function is used to adjust the contribution from each component model. As this dramatically increases the number of parameters to estimate, robust weight estimation schemes are required. Several approaches are described in this paper. The first approach is based on MAP estimation, where interpolation weights of lower order contexts are used as smoothing priors. The second approach uses training data to ensure robust estimation of LM interpolation weights; this can also serve as a smoothing prior for MAP adaptation. A normalized perplexity metric is proposed to handle the bias of the standard perplexity criterion towards corpus size. A range of schemes to combine weight information obtained from training data and test data hypotheses are also proposed to improve robustness during context dependent LM adaptation. In addition, a minimum Bayes' risk (MBR) based discriminative training scheme is proposed. An efficient weighted finite state transducer (WFST) decoding algorithm for context dependent interpolation is also presented. The proposed technique was evaluated using a state-of-the-art Mandarin Chinese broadcast speech transcription task. Character error rate (CER) reductions of up to 7.3% relative were obtained, as well as consistent perplexity improvements. © 2012 Elsevier Ltd. All rights reserved.
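The core mechanism, interpolation weights that depend on the word history and back off to context-independent weights for unseen contexts, can be sketched minimally. The component models and weight tables below are hypothetical, and the paper's MAP smoothing and MBR training are not reproduced:

```python
def interp_prob(word, history, components, ctx_weights, global_weights):
    """Context-dependent linear interpolation of component LM probabilities.

    components     : list of functions p_i(word, history) -> probability
    ctx_weights    : dict mapping a history tuple to its weight vector
    global_weights : context-independent weights used as a fallback for
                     unseen histories (mirroring, very loosely, the
                     lower-order smoothing prior in the abstract)
    """
    lam = ctx_weights.get(history, global_weights)
    return sum(l * p(word, history) for l, p in zip(lam, components))
```

With weights summing to one per context and each component being a proper distribution, the interpolated model remains a proper distribution; tuning the per-context weights is what the robust estimation schemes in the abstract address.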

Relevance: 10.00%

Abstract:

Mandarin Chinese is based on characters, which are syllabic in nature and morphological in meaning. All spoken languages have syllabiotactic rules which govern the construction of syllables and their allowed sequences. These constraints are not as restrictive as those learned from word sequences, but they can provide additional useful linguistic information. Hence, it is possible to improve speech recognition performance by appropriately combining these two types of constraints. For the Chinese language considered in this paper, character level language models (LMs) can be used as a first level approximation to allowed syllable sequences. To test this idea, word and character level n-gram LMs were trained on 2.8 billion words (equivalent to 4.3 billion characters) of texts from a wide collection of text sources. Both hypothesis and model based combination techniques were investigated to combine word and character level LMs. Significant character error rate reductions of up to 7.3% relative were obtained on a state-of-the-art Mandarin Chinese broadcast audio recognition task using an adapted history dependent multi-level LM that performs a log-linear combination of character and word level LMs. This supports the hypothesis that character or syllable sequence models are useful for improving Mandarin speech recognition performance.
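The log-linear combination step can be sketched as follows; the interpolation weight and the renormalization over a fixed candidate set are illustrative assumptions, not the paper's exact formulation:

```python
import math

def loglinear_combine(word_logps, char_logps, alpha=0.7):
    """Log-linearly combine word- and character-level LM scores.

    word_logps / char_logps: dicts mapping each candidate to its
    log-probability under the respective model. The weighted sum of
    log-probabilities is renormalized over the candidate set so the
    result is again a distribution (in log space)."""
    scores = {w: alpha * word_logps[w] + (1.0 - alpha) * char_logps[w]
              for w in word_logps}
    log_z = math.log(sum(math.exp(s) for s in scores.values()))
    return {w: s - log_z for w, s in scores.items()}
```

Unlike linear interpolation, the log-linear form multiplies the component probabilities, so a candidate must score reasonably under both the word- and the character-level model to rank highly.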

Relevance: 10.00%

Abstract:

State-of-the-art large vocabulary continuous speech recognition (LVCSR) systems often combine outputs from multiple sub-systems that may even be developed at different sites. Cross system adaptation, in which model adaptation is performed using the outputs from another sub-system, can be used as an alternative to hypothesis level combination schemes such as ROVER. Normally cross adaptation is only performed on the acoustic models. However, there are many other levels in LVCSR systems' modelling hierarchy where complementary features may be exploited, for example, the sub-word and the word level, to further improve cross adaptation based system combination. It is thus interesting to also cross adapt language models (LMs) to capture these additional useful features. In this paper cross adaptation is applied to three forms of language models: a multi-level LM that models both syllable and word sequences, a word level neural network LM, and the linear combination of the two. Significant error rate reductions of 4.0-7.1% relative were obtained over ROVER and acoustic model only cross adaptation when combining a range of Chinese LVCSR sub-systems used in the 2010 and 2011 DARPA GALE evaluations. © 2012 Elsevier Ltd. All rights reserved.

Relevance: 10.00%

Abstract:

In natural languages multiple word sequences can represent the same underlying meaning. Only modelling the observed surface word sequence can result in poor context coverage, for example, when using n-gram language models (LM). To handle this issue, paraphrastic LMs were proposed in previous research and successfully applied to a US English conversational telephone speech transcription task. In order to exploit the complementary characteristics of paraphrastic LMs and neural network LMs (NNLM), the combination between the two is investigated in this paper. To investigate paraphrastic LMs' generalization ability to other languages, experiments are conducted on a Mandarin Chinese broadcast speech transcription task. Using a paraphrastic multi-level LM modelling both word and phrase sequences, significant error rate reductions of 0.9% absolute (9% relative) and 0.5% absolute (5% relative) were obtained over the baseline n-gram and NNLM systems respectively, after a combination with word and phrase level NNLMs. © 2013 IEEE.

Relevance: 10.00%

Abstract:

Compared with an ordinary adaptive filter, a variable-length adaptive filter is more efficient (lower computational complexity, lower power consumption and higher output SNR) because of its tap-length learning algorithm, which dynamically adapts the tap-length towards the optimal tap-length that best balances the complexity and the performance of the adaptive filter. Among existing tap-length algorithms, the LMS-style Variable Tap-Length Algorithm (also called the Fractional Tap-Length, or FT, Algorithm) proposed by Y. Gong has the best performance, offering the fastest convergence rate and the best stability. However, in some cases its performance deteriorates dramatically. To solve this problem, we first analyse the FT algorithm and point out some of its defects. Second, we propose a new FT algorithm, the 'VSLMS' (Variable Step-size LMS) Style Tap-Length Learning Algorithm, which not only uses the concept of FT but also introduces a new concept of adaptive convergence slope. With this improvement the new FT algorithm achieves an even faster convergence rate and better stability. Finally, we present computer simulations to verify the improvement.
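The fractional tap-length idea can be sketched as follows: the filter compares the squared error of its full length against that of a shortened version, and the difference drives a fractional tap-length that is rounded to the working length. This is a simplified reconstruction with illustrative parameter values, not Gong's exact formulation or the proposed VSLMS variant:

```python
import numpy as np

def ft_lms(x, d, max_taps=64, init_len=8, mu=0.01, delta=4,
           alpha=0.01, gamma=0.1):
    """LMS with fractional tap-length (FT-style) adaptation.

    x, d   : input and desired signals
    delta  : segment gap; the shortened filter uses the first L-delta taps
    alpha  : leakage that shrinks the length when extra taps buy nothing
    gamma  : step size of the tap-length update
    Returns the final weights and the tap-length trajectory."""
    w = np.zeros(max_taps)
    buf = np.zeros(max_taps)
    lf = float(init_len)               # fractional tap-length
    L = init_len
    lengths = []
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        e_full = d[n] - w[:L] @ buf[:L]                    # all L taps
        e_short = d[n] - w[:L - delta] @ buf[:L - delta]   # first L-delta taps
        w[:L] += mu * e_full * buf[:L]                     # ordinary LMS update
        # Grow when the shorter filter is clearly worse, shrink via leakage:
        lf = lf - alpha - gamma * (e_full ** 2 - e_short ** 2)
        lf = min(max(lf, delta + 1.0), float(max_taps))
        L = int(round(lf))
        lengths.append(L)
    return w, lengths
```

Identifying an unknown FIR channel, the tap-length climbs from its initial guess until the shortened filter no longer lags behind the full one, then hovers near the point where extra taps stop paying off.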