973 results for "Word error rate"
Abstract:
A new probabilistic neural network (PNN) learning algorithm based on forward constrained selection (PNN-FCS) is proposed. An incremental learning scheme is adopted such that at each step, new neurons, one for each class, are selected from the training samples and the weights of the neurons are estimated so as to minimize the overall misclassification error rate. In this manner, only the most significant training samples are used as neurons. It is shown by simulation that the resultant PNN-FCS networks achieve classification performance comparable to other types of classifiers, with much smaller model sizes than the conventional PNN.
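The selection scheme described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' PNN-FCS code: the kernel width `sigma`, the greedy training-error criterion, and the toy data are all assumptions.

```python
import numpy as np

def pnn_predict(X, centers, labels, sigma=0.5):
    # Parzen-window PNN: each class score is a sum of Gaussian kernels
    # centered on that class's selected neurons.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    classes = np.unique(labels)
    scores = np.stack([K[:, labels == c].sum(1) for c in classes], axis=1)
    return classes[scores.argmax(1)]

def forward_select(X, y, n_steps=3, sigma=0.5):
    # Incremental forward selection: at each step add, for each class, the
    # training sample that most reduces the overall training error rate.
    chosen = []
    for _ in range(n_steps):
        for c in np.unique(y):
            best, best_err = None, np.inf
            for i in np.where(y == c)[0]:
                if i in chosen:
                    continue
                trial = chosen + [i]
                pred = pnn_predict(X, X[trial], y[trial], sigma)
                err = (pred != y).mean()
                if err < best_err:
                    best, best_err = i, err
            if best is not None:
                chosen.append(best)
    return chosen
```

On two well-separated clusters, a handful of selected neurons already classifies the training set almost perfectly, which is the "much smaller model size" effect the abstract reports.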
Abstract:
The study examined: (a) the role of phonological, grammatical, and rapid automatized naming (RAN) skills in reading and spelling development; and (b) the component processes of early narrative writing skills. Fifty-seven Turkish-speaking children were followed from Grade 1 to Grade 2. RAN was the most powerful longitudinal predictor of reading speed, and its effect was evident even when previous reading skills were taken into account. Broadly, the phonological and grammatical skills made reliable contributions to spelling performance, but their effects were completely mediated by previous spelling skills. Different aspects of the narrative writing skills were related to different processing skills. While handwriting speed predicted writing fluency, spelling accuracy predicted spelling error rate. Vocabulary and working memory were the only reliable longitudinal predictors of the quality of composition content. The overall model, however, failed to explain any reliable variance in the structural quality of the compositions.
Abstract:
Little has so far been reported on the performance of near-far resistant CDMA detectors in the presence of synchronization errors. Starting from a general mathematical model of matched filters, this paper examines the effects of three classes of synchronization errors (time-delay errors, carrier phase errors, and carrier frequency errors) on the performance (bit error rate and near-far resistance) of an emerging type of near-far resistant coherent DS/SSMA detector, the linear decorrelating detector (LDD). For comparison, the corresponding results for the conventional detector are also presented. It is shown that the LDD can still maintain a considerable performance advantage over the conventional detector even when some synchronization errors exist. Finally, several computer simulations are carried out to verify the theoretical conclusions.
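As a point of reference for how a carrier phase error degrades bit error rate, the textbook relation for coherent BPSK (not the paper's LDD analysis) scales the effective correlator output by the cosine of the phase error. A minimal sketch:

```python
import math

def q_func(x):
    # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(ebn0_db, phase_err_rad=0.0):
    # Coherent BPSK bit error rate with a static carrier phase error:
    # the matched-filter output amplitude is scaled by cos(phase error),
    # so BER = Q(sqrt(2 Eb/N0) * cos(phi)).
    ebn0 = 10 ** (ebn0_db / 10)
    return q_func(math.sqrt(2 * ebn0) * math.cos(phase_err_rad))
```

At 10 dB Eb/N0 the ideal BER is a few times 1e-6; even a modest phase error visibly raises it, which is the kind of degradation the paper quantifies for the LDD.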
Abstract:
The existing dual-rate blind linear detectors, which operate in either the low-rate (LR) or the high-rate (HR) mode, are not strictly blind in the HR mode and lack theoretical analysis. This paper proposes subspace-based LR and HR blind linear detectors, i.e., blind decorrelating detectors (BDD) and blind MMSE detectors (BMMSED), for synchronous DS/CDMA systems. To detect an LR data bit in the HR mode, an effective weighting strategy is proposed. Theoretical analyses of the performance of the proposed detectors are carried out. It is proved that the bit error rate of the LR-BDD is superior to that of the HR-BDD and that the near-far resistance of the LR blind linear detectors outperforms that of their HR counterparts. The extension to asynchronous systems is also described. Simulation results show that the adaptive dual-rate BMMSED outperforms the corresponding non-blind dual-rate decorrelators proposed by Saquib, Yates and Mandayam (see Wireless Personal Communications, vol. 9, p. 197-216, 1998).
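The decorrelating idea underlying these detectors can be shown in a few lines. This is the basic synchronous (non-blind) decorrelator, not the paper's subspace-based blind version, and the signature sequences below are made up:

```python
import numpy as np

def decorrelating_detect(y, S):
    # Matched-filter bank followed by decorrelation: inverting the
    # signature cross-correlation matrix nulls multiple-access
    # interference regardless of interferer power (near-far resistance).
    mf = S.T @ y                       # matched-filter outputs
    R = S.T @ S                        # signature cross-correlations
    return np.sign(np.linalg.solve(R, mf))
```

In the noiseless test, a 10x stronger interfering user does not disturb the weak user's bit decision, which is the near-far resistance property analyzed in the abstract.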
Abstract:
In 1997, the UK implemented the world's first commercial digital terrestrial television system. Under the ETS 300 744 standard, the chosen modulation method, COFDM, is assumed to be multipath resilient. Previous work has shown that this is not necessarily the case: the local oscillator required for demodulation from intermediate frequency to baseband must be very accurate. This paper shows that under multipath conditions, standard methods for obtaining local oscillator phase lock may not be adequate. It then demonstrates a set of algorithms, designed for use with a simple local oscillator circuit, that correct for local oscillator phase offset and maintain a low bit error rate in the presence of multipath.
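A common building block for this kind of phase-offset correction is a pilot-aided estimate; the paper's actual algorithms are not reproduced here, and the pilot values in the test are made up. A minimal sketch:

```python
import numpy as np

def estimate_phase_offset(rx_pilots, tx_pilots):
    # Maximum-likelihood phase estimate over known pilot symbols:
    # the angle of the correlation between received and transmitted pilots.
    return np.angle(np.sum(rx_pilots * np.conj(tx_pilots)))

def correct_phase(rx, phase):
    # De-rotate the received samples by the estimated offset
    return rx * np.exp(-1j * phase)
```

Once the offset is estimated, de-rotating the received samples restores the constellation and keeps the demodulator's bit error rate low.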
Abstract:
Motivation: In order to enhance genome annotation, the fully automatic fold recognition method GenTHREADER has been improved and benchmarked. The previous version of GenTHREADER consisted of a simple neural network which was trained to combine sequence alignment score, length information and energy potentials derived from threading into a single score representing the relationship between two proteins, as designated by CATH. The improved version incorporates PSI-BLAST searches, which have been jumpstarted with structural alignment profiles from FSSP, and now also makes use of PSIPRED predicted secondary structure and bi-directional scoring in order to calculate the final alignment score. Pairwise potentials and solvation potentials are calculated from the given sequence alignment which are then used as inputs to a multi-layer, feed-forward neural network, along with the alignment score, alignment length and sequence length. The neural network has also been expanded to accommodate the secondary structure element alignment (SSEA) score as an extra input and it is now trained to learn the FSSP Z-score as a measurement of similarity between two proteins. Results: The improvements made to GenTHREADER increase the number of remote homologues that can be detected with a low error rate, implying higher reliability of score, whilst also increasing the quality of the models produced. We find that up to five times as many true positives can be detected with low error rate per query. Total MaxSub score is doubled at low false positive rates using the improved method.
Abstract:
Single-carrier frequency division multiple access (SC-FDMA) has emerged as a promising technique for high-data-rate uplink communications. Aimed at SC-FDMA applications, a cyclic prefixed version of the offset quadrature amplitude modulation based OFDM (OQAM-OFDM) is first proposed in this paper. We show that cyclic prefixed OQAM-OFDM (CP-OQAM-OFDM) can be realized within the framework of the standard OFDM system, and the perfect recovery condition in the ideal channel is derived. We then apply CP-OQAM-OFDM to SC-FDMA transmission in frequency-selective fading channels. A signal model and low-complexity joint widely linear minimum mean square error (WLMMSE) equalization using a priori information are developed. Compared with the existing DFTS-OFDM based SC-FDMA, the proposed SC-FDMA can significantly reduce the envelope fluctuation (EF) of the transmitted signal while maintaining bandwidth efficiency. The inherent structure of CP-OQAM-OFDM enables low-complexity joint equalization in the frequency domain to combat both multiple access interference and intersymbol interference. The joint WLMMSE equalization using a priori information guarantees optimal MMSE performance and supports a Turbo receiver for improved bit error rate (BER) performance. Simulation results confirm the effectiveness of the proposed SC-FDMA in terms of EF (including peak-to-average power ratio, instantaneous-to-average power ratio and cubic metric) and BER performance.
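The role of the cyclic prefix, which CP-OQAM-OFDM inherits from standard OFDM, can be seen in a plain CP-OFDM round trip (the OQAM staggering and WLMMSE equalizer are omitted; the channel taps are made up):

```python
import numpy as np

def ofdm_roundtrip(X, channel, cp_len):
    # One CP-OFDM block over a frequency-selective channel (no noise).
    # The cyclic prefix turns the linear channel convolution into a
    # circular one, so a one-tap frequency-domain equalizer suffices.
    n = X.shape[0]
    x = np.fft.ifft(X)
    tx = np.concatenate([x[-cp_len:], x])           # prepend cyclic prefix
    rx = np.convolve(tx, channel)[:cp_len + n]      # linear channel
    y = rx[cp_len:]                                 # discard the prefix
    return np.fft.fft(y) / np.fft.fft(channel, n)   # one-tap equalizer
```

With the prefix at least as long as the channel memory, the transmitted symbols are recovered exactly in the noiseless case, which is the "perfect recovery condition" idea in the abstract.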
Abstract:
This paper presents an adaptive frame length mechanism based on a cross-layer analysis of intrinsic relations between the MAC frame length, bit error rate (BER) of the wireless link and normalized goodput. The proposed mechanism selects the optimal frame length that keeps the service normalized goodput at required levels while satisfying the lowest requirement on the BER, thus increasing the transmission reliability. Numerical results are provided and show that an optimal frame length satisfying the lowest BER requirement does indeed exist. The performance of BER requirement as a function of the MAC frame length is evaluated and compared for transmission scenarios with and without automatic repeat request (ARQ). Furthermore, issues related to the MAC overhead length are also discussed to illuminate the functionality and performance of the proposed mechanism.
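Under the simplest assumptions (a fixed per-bit error probability, a frame lost on any bit error, no FEC), the goodput trade-off behind such a mechanism can be written down directly; the overhead and BER figures below are hypothetical, not the paper's:

```python
def goodput(frame_len, overhead, ber):
    # Normalized goodput: fraction of useful bits delivered per channel
    # bit, assuming a frame is discarded if any of its bits is in error.
    payload = frame_len - overhead
    return (payload / frame_len) * (1 - ber) ** frame_len

def optimal_frame_len(overhead, ber, max_len=10000):
    # Exhaustive search for the frame length maximizing goodput:
    # short frames waste capacity on headers, long frames are lost too often.
    return max(range(overhead + 1, max_len),
               key=lambda L: goodput(L, overhead, ber))
```

For a 40-bit header and BER of 1e-3 the optimum lands near 220 bits, illustrating the abstract's claim that an optimal frame length does exist.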
Abstract:
This paper introduces a new adaptive nonlinear equalizer relying on a radial basis function (RBF) model, designed based on the minimum bit error rate (MBER) criterion, in the setting of an intersymbol interference channel with co-channel interference. Our proposed algorithm is referred to as the on-line mixture of Gaussians estimator aided MBER (OMG-MBER) equalizer. Specifically, a mixture of Gaussians based probability density function (PDF) estimator is used to model the PDF of the decision variable, for which a novel on-line PDF update algorithm is derived to track the incoming data. With the aid of this on-line, sample-by-sample updated PDF estimator, our adaptive nonlinear equalizer is capable of updating its parameters sample by sample to aim directly at minimizing the RBF nonlinear equalizer's achievable bit error rate (BER). The proposed OMG-MBER equalizer significantly outperforms the existing on-line nonlinear MBER equalizer, known as the least bit error rate equalizer, in terms of both convergence speed and achievable BER, as confirmed in our simulation study.
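The sample-by-sample mixture-of-Gaussians PDF update at the heart of such an estimator can be sketched as a stochastic EM step. This illustrative univariate version with a fixed learning rate is not the authors' OMG-MBER recursion; the component count, learning rate, and data are assumptions:

```python
import numpy as np

def online_gmm_update(x, w, mu, var, lr=0.05):
    # One stochastic EM step on a univariate Gaussian mixture:
    # E-step computes responsibilities, M-step nudges weights, means
    # and variances toward the new sample.
    p = w * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = p / p.sum()                      # responsibilities
    w = (1 - lr) * w + lr * r
    mu = mu + lr * r * (x - mu)
    var = var + lr * r * ((x - mu) ** 2 - var)
    return w / w.sum(), mu, var
```

Fed a stream of samples from two clusters, the component means track the cluster centers, which is the tracking behavior an on-line PDF estimator needs.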
Abstract:
Whole-genome sequencing (WGS) could potentially provide a single platform for extracting all the information required to predict an organism's phenotype. However, its ability to provide accurate predictions has not yet been demonstrated in large independent studies of specific organisms. In this study, we aimed to develop a genotypic prediction method for antimicrobial susceptibilities. The whole genomes of 501 unrelated Staphylococcus aureus isolates were sequenced, and the assembled genomes were interrogated using BLASTn for a panel of known resistance determinants (chromosomal mutations and genes carried on plasmids). Results were compared with phenotypic susceptibility testing for 12 commonly used antimicrobial agents (penicillin, methicillin, erythromycin, clindamycin, tetracycline, ciprofloxacin, vancomycin, trimethoprim, gentamicin, fusidic acid, rifampin, and mupirocin) performed by the routine clinical laboratory. We investigated discrepancies by repeat susceptibility testing and manual inspection of the sequences and used this information to optimize the resistance determinant panel and BLASTn algorithm. We then tested the performance of the optimized tool in an independent validation set of 491 unrelated isolates, with phenotypic results obtained in duplicate by automated broth dilution (BD Phoenix) and disc diffusion. In the validation set, the overall sensitivity and specificity of the genomic prediction method were 0.97 (95% confidence interval [95% CI], 0.95 to 0.98) and 0.99 (95% CI, 0.99 to 1), respectively, compared to standard susceptibility testing methods. The very major error rate was 0.5%, and the major error rate was 0.7%. WGS was as sensitive and specific as routine antimicrobial susceptibility testing methods. WGS is a promising alternative to culture methods for resistance prediction in S. aureus and ultimately other major bacterial pathogens.
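The reported metrics follow the usual contingency-table definitions. A minimal sketch; note that denominator conventions for very major/major error rates vary between guidelines, so this is one common choice, and the counts in the test are made up, not the study's data:

```python
def sens_spec(tp, fn, tn, fp):
    # Sensitivity: resistant isolates correctly predicted resistant;
    # specificity: susceptible isolates correctly predicted susceptible.
    return tp / (tp + fn), tn / (tn + fp)

def ast_error_rates(tp, fn, tn, fp):
    # Very major error: phenotypically resistant but predicted susceptible;
    # major error: phenotypically susceptible but predicted resistant.
    return fn / (tp + fn), fp / (tn + fp)
```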
Abstract:
Seamless phase II/III clinical trials in which an experimental treatment is selected at an interim analysis have been the focus of much recent research interest. Many of the methods proposed are based on the group sequential approach. This paper considers designs of this type in which the treatment selection can be based on short-term endpoint information for more patients than have primary endpoint data available. We show that in such a case, the familywise type I error rate may be inflated if previously proposed group sequential methods are used and the treatment selection rule is not specified in advance. A method is proposed to avoid this inflation by considering the treatment selection that maximises the conditional error given the data available at the interim analysis. A simulation study is reported that illustrates the type I error rate inflation and compares the power of the new approach with two other methods: a combination testing approach and a group sequential method that does not use the short-term endpoint data, both of which also strongly control the type I error rate. The new method is also illustrated through application to a study in Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
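The inflation mechanism can be demonstrated with a small Monte Carlo: under the global null, pick the better-looking of two arms at the interim and test it against control with all its data at the nominal one-sided 5% level (unit variance assumed known). The sample sizes and simulation settings below are made up, not the paper's:

```python
import numpy as np

def simulate_fwer(n_sims=5000, n1=50, n2=50, z_crit=1.645, seed=1):
    # Monte Carlo under the global null: both treatments and control have
    # mean 0, variance 1. Select the arm with the larger stage-1 mean,
    # then z-test it against control using all its data without any
    # multiplicity adjustment.
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        t1 = rng.normal(0.0, 1.0, (2, n1))   # stage-1 data, two arms
        c = rng.normal(0.0, 1.0, n1 + n2)    # control, both stages
        sel = t1.mean(axis=1).argmax()       # interim treatment selection
        t2 = rng.normal(0.0, 1.0, n2)        # stage-2 data, selected arm
        n = n1 + n2
        diff = (t1[sel].sum() + t2.sum()) / n - c.mean()
        z = diff * np.sqrt(n / 2.0)          # known-variance z statistic
        rejections += z > z_crit
    return rejections / n_sims
```

The simulated rejection rate comes out well above 5%, because the selected arm's interim mean is positively biased under the null; the conditional-error construction in the paper is designed to remove exactly this inflation.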
Abstract:
Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. As predictive performance evaluation is a multidimensional problem, single scalar summaries such as error rate, although quite convenient due to their simplicity, can seldom evaluate all the aspects that a complete and reliable evaluation must consider. Due to this, various graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these methods resides in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing these aspects to an arbitrarily chosen (and often biased) single scalar measure. Furthermore, to appropriately select a suitable graphical method for a given task, it is crucial to identify its strengths and weaknesses. This paper surveys various graphical methods often used for predictive performance evaluation. By presenting these methods in the same framework, we hope this paper may shed some light on deciding which methods are more suitable to use in different situations.
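The ROC curve is the canonical example of such a graphical method, depicting the false-positive/true-positive trade-off across all thresholds rather than a single scalar. A minimal threshold-sweep sketch (ties between scores are not handled specially here):

```python
def roc_points(scores, labels):
    # Sweep the decision threshold down the sorted scores, recording the
    # (false positive rate, true positive rate) pair at each cut.
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for _, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        pts.append((fp / neg, tp / pos))
    return pts

def auc(pts):
    # Trapezoidal area under the ROC curve
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```

A perfect ranking of the positives above the negatives yields an area of 1, a completely inverted ranking 0, and chance-level ranking about 0.5.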
Abstract:
The coexistence of different types of templates has been the solution of choice to the information crisis of prebiotic evolution, triggered by the finding that a single RNA-like template cannot carry enough information to code for any useful replicase. In principle, confining d distinct templates of length L in a package or protocell, whose survival depends on the coexistence of the templates it holds, could resolve this crisis provided that d is made sufficiently large. Here we review the prototypical package model of Niesert et al. [1981. Origin of life between Scylla and Charybdis. J. Mol. Evol. 17, 348-353], which guarantees the greatest possible region of viability of the protocell population, and show that this model, and hence the entire package approach, does not resolve the information crisis. In particular, we show that the total information stored in a viable protocell (Ld) tends to a constant value that depends only on the spontaneous error rate per nucleotide of the template replication mechanism. As a result, an increase of d must be followed by a decrease of L, so that the net information gain is null. (C) 2008 Elsevier Ltd. All rights reserved.
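The constancy of Ld echoes the classic quasispecies error threshold: with per-nucleotide error rate u, an error-free copy of Ld nucleotides occurs with probability (1 - u)^(Ld) ≈ e^(-uLd), and survival requires this to offset a replication advantage σ, giving Ld ≤ ln(σ)/u, a constant set by u alone. This standard bound (with σ a hypothetical superiority factor) is used here for illustration, not taken from the paper:

```python
import math

def max_total_info(u, sigma):
    # Error-threshold bound on the total information L*d a protocell can
    # maintain: (1 - u)**(L*d) >= 1/sigma  =>  L*d <= ln(sigma) / u,
    # using ln(1 - u) ~ -u for a small per-nucleotide error rate u.
    return math.log(sigma) / u

def exact_max_total_info(u, sigma):
    # The same bound without the small-u approximation
    return math.log(sigma) / -math.log(1.0 - u)
```

Doubling the number of templates d therefore forces halving their length L; the product is pinned by u.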
Abstract:
The objective of this research was to identify employees' perceptions of the relationship between procurement legislation and the performance of Embrapa Semiárido, based on the efficiency criterion, after the adoption of the Pregão (reverse auction) as a new bidding modality. Bibliographic, documentary, and field research was carried out, and to ensure the validity of the information, a triangulation of data-collection techniques was used: document analysis, direct observation, and semi-open interviews. The interviews were conducted with employees who have worked in the Procurement Sector for more than 10 years and have served as auctioneers, and with researchers of the Unit, also with more than 10 years of experience, who had approved projects funded by the National Treasury and executed both before and after the adoption of the Pregão. According to the documentation analyzed and in the view of the Sector's employees, since the adoption of the Pregão the unit has achieved savings in its contracting, on average 20% below the reference value established in the bidding notices, which could indicate efficiency. However, it became evident that the error rate in procurement processes became much higher than before the Pregão, and that although this modality is regarded as beneficial, the impression is that it is being applied indiscriminately to any and all contracting, without prior analysis of the most suitable bidding modality. Along the same lines, the research reports and the researchers' opinions showed that there have been budget losses due to the non-completion of processes initiated at the end of the year, resulting in unmet goals, stages, or tasks of the experiments, as well as many delivery delays and waste of resources caused by poor-quality purchases unsuitable for research work.
Finally, as a contribution, this study found that current public procurement legislation directly influences the performance of Embrapa Semiárido and does not meet its needs as a research institution. This indicates that the institution cannot follow the same rules, norms, and legislation governing procurement in other public administration bodies, and needs a new legal framework affording it greater flexibility to carry out its projects and fulfill its institutional mission.
Abstract:
This study aims to develop a portable, restricted-radiation radio communication system for short-range digital biotelemetry applied to the Six-Minute Walk Test (6MWT) in patients with chronic obstructive pulmonary disease or pulmonary hypertension. Peripheral hemoglobin oxygen saturation (SpO2) and heart rate (HR) are monitored in real time. The system uses the industrial, scientific, and medical (ISM) band, with a 916 MHz carrier frequency and 0.75 mW transmission power. It was designed to operate over a half-duplex link with Manchester NRZ coding, incorporating an automatic repeat request (ARQ) error-control protocol that uses a CRC-16 code for error detection. The maximum data transmission rate is 115.2 kbps. The system consists of three parts: a portable unit (Master), a stationary unit (Slave), and real-time display software. The portable unit receives the monitoring parameters from the oximeter, which are transmitted over the radio-frequency link. The stationary unit interfaces with the software through a standard RS-232 serial port. Laboratory and field tests showed that the biotelemetry system is suitable for performing the 6MWT, with SpO2 accuracy of ±3 digits (at ±1 standard deviation) and HR accuracy of ±3%, both with a frame error rate below 10^-4 (0.01%), without restricting the user's movements during monitoring.
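The CRC-16 error-detection step in such a link can be sketched as follows. The abstract does not specify which CRC-16 variant was used, so the common CCITT-FALSE parameters (polynomial 0x1021, initial value 0xFFFF) are an assumption:

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    # Bitwise CRC-16-CCITT: shift each data byte through the register,
    # XORing in the polynomial whenever the top bit falls out.
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

The receiver recomputes the CRC over each received frame and, on a mismatch, requests a retransmission (the ARQ mechanism), which is how the link keeps the frame error rate low.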