846 results for Bit error rate


Relevance:

80.00%

Publisher:

Abstract:

A new probabilistic neural network (PNN) learning algorithm based on forward constrained selection (PNN-FCS) is proposed. An incremental learning scheme is adopted such that at each step new neurons, one for each class, are selected from the training samples and the weights of the neurons are estimated so as to minimize the overall misclassification error rate. In this manner, only the most significant training samples are used as neurons. It is shown by simulation that the resulting PNN-FCS networks have good classification performance compared with other types of classifiers, but much smaller model sizes than the conventional PNN.
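
A minimal sketch of the forward-constrained-selection idea described above, assuming a Gaussian-kernel (Parzen-window) PNN; the function names, the greedy error criterion, and the omission of explicit weight estimation are simplifications for illustration, not a faithful reproduction of the published algorithm.

```python
import numpy as np

def pnn_predict(centers, labels, X, sigma=1.0):
    # Parzen-window (Gaussian kernel) class scores using only the selected centers.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * sigma ** 2))
    classes = np.unique(labels)
    scores = np.stack([k[:, labels == c].sum(1) for c in classes], axis=1)
    return classes[scores.argmax(1)]

def pnn_fcs(X, y, n_steps=10, sigma=1.0):
    # Incrementally grow the network: at each step add one neuron per class,
    # choosing the training sample that most reduces the misclassification rate.
    centers, center_labels = [], []
    for _ in range(n_steps):
        for c in np.unique(y):
            best_err, best_i = np.inf, None
            for i in np.where(y == c)[0]:          # candidate samples of class c
                cand = np.array(centers + [X[i]])
                cand_y = np.array(center_labels + [c])
                err = (pnn_predict(cand, cand_y, X, sigma) != y).mean()
                if err < best_err:
                    best_err, best_i = err, i
            centers.append(X[best_i])
            center_labels.append(c)
    return np.array(centers), np.array(center_labels)
```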

Relevance:

80.00%

Publisher:

Abstract:

The study examined: (a) the role of phonological, grammatical, and rapid automatized naming (RAN) skills in reading and spelling development; and (b) the component processes of early narrative writing skills. Fifty-seven Turkish-speaking children were followed from Grade 1 to Grade 2. RAN was the most powerful longitudinal predictor of reading speed, and its effect was evident even when previous reading skills were taken into account. Broadly, the phonological and grammatical skills made reliable contributions to spelling performance, but their effects were completely mediated by previous spelling skills. Different aspects of the narrative writing skills were related to different processing skills. While handwriting speed predicted writing fluency, spelling accuracy predicted spelling error rate. Vocabulary and working memory were the only reliable longitudinal predictors of the quality of composition content. The overall model, however, failed to explain any reliable variance in the structural quality of the compositions.

Relevance:

80.00%

Publisher:

Abstract:

Motivation: In order to enhance genome annotation, the fully automatic fold recognition method GenTHREADER has been improved and benchmarked. The previous version of GenTHREADER consisted of a simple neural network which was trained to combine sequence alignment score, length information and energy potentials derived from threading into a single score representing the relationship between two proteins, as designated by CATH. The improved version incorporates PSI-BLAST searches, which have been jump-started with structural alignment profiles from FSSP, and now also makes use of PSIPRED predicted secondary structure and bi-directional scoring in order to calculate the final alignment score. Pairwise potentials and solvation potentials are calculated from the given sequence alignment, and these are then used as inputs to a multi-layer, feed-forward neural network, along with the alignment score, alignment length and sequence length. The neural network has also been expanded to accommodate the secondary structure element alignment (SSEA) score as an extra input, and it is now trained to learn the FSSP Z-score as a measure of similarity between two proteins. Results: The improvements made to GenTHREADER increase the number of remote homologues that can be detected with a low error rate, implying higher reliability of the score, whilst also increasing the quality of the models produced. We find that up to five times as many true positives can be detected with a low error rate per query. The total MaxSub score is doubled at low false positive rates using the improved method.
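
As a rough illustration of the scoring stage described above, the sketch below combines threading-derived features with a small feed-forward network; the layer sizes, feature ordering, and untrained random weights are assumptions for illustration, not the published GenTHREADER network.

```python
import numpy as np

def score_pair(features, W1, b1, W2, b2):
    """Combine threading-derived features into a single similarity score.

    features: [alignment_score, alignment_length, sequence_length,
               pairwise_potential, solvation_potential, ssea_score]
    """
    h = np.tanh(features @ W1 + b1)      # hidden layer
    return float(h @ W2 + b2)            # regression output, e.g. an FSSP-style Z-score

# Example with random (untrained) weights, just to show the shapes involved.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(6, 8)), np.zeros(8)
W2, b2 = rng.normal(size=8), 0.0
x = np.array([120.0, 85, 102, -1.3, -0.7, 45.0])
print(score_pair(x, W1, b1, W2, b2))
```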

Relevance:

80.00%

Publisher:

Abstract:

Whole-genome sequencing (WGS) could potentially provide a single platform for extracting all the information required to predict an organism’s phenotype. However, its ability to provide accurate predictions has not yet been demonstrated in large independent studies of specific organisms. In this study, we aimed to develop a genotypic prediction method for antimicrobial susceptibilities. The whole genomes of 501 unrelated Staphylococcus aureus isolates were sequenced, and the assembled genomes were interrogated using BLASTn for a panel of known resistance determinants (chromosomal mutations and genes carried on plasmids). Results were compared with phenotypic susceptibility testing for 12 commonly used antimicrobial agents (penicillin, methicillin, erythromycin, clindamycin, tetracycline, ciprofloxacin, vancomycin, trimethoprim, gentamicin, fusidic acid, rifampin, and mupirocin) performed by the routine clinical laboratory. We investigated discrepancies by repeat susceptibility testing and manual inspection of the sequences and used this information to optimize the resistance determinant panel and BLASTn algorithm. We then tested the performance of the optimized tool in an independent validation set of 491 unrelated isolates, with phenotypic results obtained in duplicate by automated broth dilution (BD Phoenix) and disc diffusion. In the validation set, the overall sensitivity and specificity of the genomic prediction method were 0.97 (95% confidence interval [95% CI], 0.95 to 0.98) and 0.99 (95% CI, 0.99 to 1), respectively, compared to standard susceptibility testing methods. The very major error rate was 0.5%, and the major error rate was 0.7%. WGS was as sensitive and specific as routine antimicrobial susceptibility testing methods. WGS is a promising alternative to culture methods for resistance prediction in S. aureus and ultimately other major bacterial pathogens.
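
For illustration, a small helper of the kind that could produce the reported summary statistics from paired genotypic predictions and phenotypic reference results; the function name, input format, and the choice of all tests as the error-rate denominator are assumptions, not taken from the study.

```python
def agreement_stats(pred_resistant, ref_resistant):
    """pred_resistant / ref_resistant: parallel lists of booleans, one per isolate-drug test."""
    pairs = list(zip(pred_resistant, ref_resistant))
    tp = sum(p and r for p, r in pairs)
    tn = sum((not p) and (not r) for p, r in pairs)
    fn = sum((not p) and r for p, r in pairs)   # resistant by reference, predicted susceptible (very major errors)
    fp = sum(p and (not r) for p, r in pairs)   # susceptible by reference, predicted resistant (major errors)
    n = len(pairs)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "very_major_error_rate": fn / n,
        "major_error_rate": fp / n,
    }
```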

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we develop an energy-efficient resource-allocation scheme with proportional fairness for downlink multiuser orthogonal frequency-division multiplexing (OFDM) systems with distributed antennas. Our aim is to maximize energy efficiency (EE) under constraints on the overall transmit power of each remote access unit (RAU), proportional data-rate fairness, and bit error rates (BERs). Because of the nonconvex nature of the optimization problem, obtaining the optimal solution is extremely computationally complex. Therefore, we develop a low-complexity suboptimal algorithm, which separates subcarrier allocation and power allocation. In the low-complexity algorithm, we first allocate subcarriers by assuming equal power distribution. Then, by exploiting the properties of fractional programming, we transform the nonconvex optimization problem in fractional form into an equivalent optimization problem in subtractive form, which admits a tractable solution. Next, an optimal energy-efficient power-allocation algorithm is developed to maximize EE while maintaining proportional fairness. Through computer simulation, we demonstrate the effectiveness of the proposed low-complexity algorithm and illustrate the fundamental trade-off between energy-efficient and spectrally efficient transmission designs.
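
A toy sketch of the fractional-programming (Dinkelbach-style) transformation mentioned above, maximizing a rate-to-power ratio by iteratively solving the equivalent subtractive problem; the rate and power models, bounds, and solver choice are placeholders, not the paper's system model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rate(p):          # toy achievable rate for transmit power p
    return np.log2(1 + 5.0 * p)

def power(p):         # total consumed power: transmit power plus fixed circuit power
    return p + 0.1

def dinkelbach(max_p=1.0, tol=1e-6):
    q = 0.0                                            # current energy-efficiency estimate
    for _ in range(50):
        # subtractive-form subproblem: maximize rate(p) - q * power(p) over 0 <= p <= max_p
        res = minimize_scalar(lambda p: -(rate(p) - q * power(p)),
                              bounds=(0.0, max_p), method="bounded")
        p_star = res.x
        if rate(p_star) - q * power(p_star) < tol:     # converged: ratio equals q
            break
        q = rate(p_star) / power(p_star)
    return p_star, q

p_opt, ee = dinkelbach()
print(f"optimal power {p_opt:.3f}, energy efficiency {ee:.3f} (toy units)")
```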

Relevance:

80.00%

Publisher:

Abstract:

Cognitive functions such as attention and memory are known to be impaired in End Stage Renal Disease (ESRD), but the sites of the neural changes underlying these impairments are uncertain. Patients and controls took part in a latent learning task, which had previously shown a dissociation between patients with Parkinson’s disease and those with medial temporal damage. ESRD patients (n=24) and age- and education-matched controls (n=24) were randomly assigned to either an exposed or an unexposed condition. In Phase 1 of the task, participants learned that a cue (word) on the back of a schematic head predicted that the subsequently seen face would be smiling. For the exposed (but not unexposed) condition, an additional (irrelevant) colour cue was shown during presentation. In Phase 2, a different association, between colour and facial expression, was learned. Instructions were the same for each phase: participants had to predict whether the subsequently viewed face was going to be happy or sad. No difference in error rate between the groups was found in Phase 1, suggesting that patients and controls learned at a similar rate. However, in Phase 2, a significant interaction was found between group and condition, with exposed controls performing significantly worse than unexposed controls (therefore demonstrating learned irrelevance). In contrast, exposed patients made a similar number of errors to unexposed patients in Phase 2. The pattern of results in ESRD was different from that previously found in Parkinson’s disease, suggesting a different neural origin.

Relevance:

80.00%

Publisher:

Abstract:

Seamless phase II/III clinical trials in which an experimental treatment is selected at an interim analysis have been the focus of much recent research interest. Many of the methods proposed are based on the group sequential approach. This paper considers designs of this type in which the treatment selection can be based on short-term endpoint information for more patients than have primary endpoint data available. We show that in such a case, the familywise type I error rate may be inflated if previously proposed group sequential methods are used and the treatment selection rule is not specified in advance. A method is proposed to avoid this inflation by considering the treatment selection that maximises the conditional error given the data available at the interim analysis. A simulation study is reported that illustrates the type I error rate inflation and compares the power of the new approach with two other methods: a combination testing approach and a group sequential method that does not use the short-term endpoint data, both of which also strongly control the type I error rate. The new method is also illustrated through application to a study in Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
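
As a rough sketch of the conditional error idea behind the proposed method (the notation here is assumed for illustration and is not taken from the paper): if \(A_j(x_1)\) denotes the probability, given the interim data \(x_1\), of ultimately rejecting the null hypothesis for treatment \(j\) when that hypothesis is true, then an unspecified, possibly data-driven selection rule can be accommodated by budgeting for the worst case,
\[
\mathbb{E}\!\left[\max_{j} A_j(X_1)\right] \le \alpha,
\]
so that the familywise type I error rate remains controlled regardless of which treatment the selection rule picks at the interim analysis.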

Relevance:

80.00%

Publisher:

Abstract:

Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. As predictive performance evaluation is a multidimensional problem, single scalar summaries such as error rate, although quite convenient due to their simplicity, can seldom evaluate all the aspects that a complete and reliable evaluation must consider. Due to this, various graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these types of methods resides in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing these aspects to an arbitrarily chosen (and often biased) single scalar measure. Furthermore, to appropriately select a suitable graphical method for a given task, it is crucial to identify its strengths and weaknesses. This paper surveys various graphical methods often used for predictive performance evaluation. By presenting these methods within the same framework, we hope this paper may shed some light on deciding which methods are more suitable to use in different situations.
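
A small, self-contained example (not drawn from the survey) contrasting a single scalar summary with a graphical one on an imbalanced toy problem, using scikit-learn; the dataset and classifier are arbitrary choices.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split

# 90/10 class imbalance: the scalar error rate alone can look good even when
# the classifier ranks the minority class poorly; the ROC curve exposes that trade-off.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

error_rate = (clf.predict(X_te) != y_te).mean()                    # scalar summary
fpr, tpr, _ = roc_curve(y_te, clf.predict_proba(X_te)[:, 1])       # graphical summary
print(f"error rate = {error_rate:.3f}, ROC AUC = {auc(fpr, tpr):.3f}")
```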

Relevance:

80.00%

Publisher:

Abstract:

The coexistence between different types of templates has been the solution of choice to the information crisis of prebiotic evolution, triggered by the finding that a single RNA-like template cannot carry enough information to code for any useful replicase. In principle, confining d distinct templates of length L in a package or protocell, whose survival depends on the coexistence of the templates it holds, could resolve this crisis provided that d is made sufficiently large. Here we review the prototypical package model of Niesert et al. [1981. Origin of life between Scylla and Charybdis. J. Mol. Evol. 17, 348-353], which guarantees the greatest possible region of viability of the protocell population, and show that this model, and hence the entire package approach, does not resolve the information crisis. In particular, we show that the total information stored in a viable protocell (Ld) tends to a constant value that depends only on the spontaneous error rate per nucleotide of the template replication mechanism. As a result, an increase of d must be followed by a decrease of L, so that the net information gain is null. (C) 2008 Elsevier Ltd. All rights reserved.
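
A compact way to state the result (the symbols below are assumed for illustration; u is the spontaneous per-nucleotide error rate and K(u) the constant identified in the abstract):
\[
L\,d \;\lesssim\; K(u) \quad\Longrightarrow\quad L \;\lesssim\; \frac{K(u)}{d},
\]
so doubling the number of template types d halves the admissible template length L, and the total information Ld cannot grow. By analogy with the single-template error threshold one might expect K(u) to behave roughly like \(\ln\sigma / u\) for a replication advantage \(\sigma\), although that specific form is an assumption rather than a statement from the abstract.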

Relevance:

80.00%

Publisher:

Abstract:

The objective of this research was to identify employees' perceptions of the relationship between procurement legislation and the performance of Embrapa Semiárido, based on the efficiency criterion, after the adoption of the Pregão (reverse auction) as a new bidding modality. Bibliographic, documentary, and field research was carried out, and to ensure the validity of the information a triangulation of data-collection techniques was used: documentary analysis, direct observation, and semi-structured interviews. The interviews were conducted with employees who work or have worked in the Procurement Sector for more than 10 years and who had served as auctioneers, and with researchers of the Unit, also with more than 10 years of experience, who had approved projects funded by the National Treasury and executed in the periods before and after the adoption of the Pregão. According to the documentation analyzed and in the view of the Sector's employees, after the adoption of the Pregão the unit has achieved savings in its contracting of, on average, 20% below the reference value established in the bidding notices, which could indicate efficiency. However, it became evident that the error rate in the processes has become much higher than in the processes carried out before the Pregão, and that although this modality is regarded as beneficial, the impression is that it is being used inappropriately, across the board, for any and every contracting without a prior analysis of the most suitable bidding modality. Along the same lines, with regard to the research reports and the researchers' opinions, it was evident that there have been budget losses due to the non-completion of processes submitted at the end of the year, resulting in unmet goals, stages, or tasks of the experiments, as well as many delays in deliveries and waste of resources caused by poor-quality acquisitions that are unsuitable for the research work. Finally, as a contribution, this study showed that the current public procurement legislation directly affects the performance of Embrapa Semiárido and does not meet its needs as a research institution, indicating that the institution cannot follow the same rules, norms, and legislation governing the procurement processes of other public administration bodies, and needs a new legal framework that provides greater flexibility to carry out its projects and fulfill its institutional mission.

Relevance:

80.00%

Publisher:

Abstract:

This study aims to develop a portable, restricted-radiation radio communication system intended for short-range digital biotelemetry applied to the Six-Minute Walk Test (6MWT) in patients with chronic obstructive pulmonary disease or pulmonary hypertension. Peripheral hemoglobin oxygen saturation (SpO2) and heart rate (HR) are monitored in real time. The industrial, scientific, and medical (ISM) band is used, with a carrier frequency of 916 MHz and a transmission power of 0.75 mW. The system was designed to operate over a half-duplex link with Manchester NRZ coding, incorporating an automatic repeat request (ARQ) error-control protocol that uses a CRC-16 code for error detection. The maximum data transmission rate is 115.2 kbps. The system consists of three parts: a portable unit (Master), a stationary unit (Slave), and real-time display software. The portable unit receives the monitored parameters from the oximeter, and these are transmitted over the radio-frequency link. The stationary unit interfaces with the software through a standard RS-232 serial communication port. Laboratory and field tests showed that the biotelemetry system is suitable for performing the 6MWT, with an SpO2 accuracy of ±3 digits (within ±1 standard deviation) and an HR accuracy of ±3%, both with a frame error rate below 10⁻⁴ (0.01%), without restricting the user's movements during monitoring.
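
The abstract does not say which CRC-16 variant the link uses; as an illustration of the error-detection step in an ARQ scheme, here is a plain bit-by-bit CRC-16-CCITT (polynomial 0x1021, initial value 0xFFFF) with a made-up frame layout.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    # Bit-by-bit CRC-16-CCITT, MSB first.
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"\x01\x63\x00\x48"                    # hypothetical payload, e.g. SpO2 = 99%, HR = 72 bpm
tx = frame + crc16_ccitt(frame).to_bytes(2, "big")

# Receiver side: recompute over the payload and compare with the appended CRC;
# a mismatch would trigger a retransmission request under the ARQ protocol.
payload, received_crc = tx[:-2], int.from_bytes(tx[-2:], "big")
assert crc16_ccitt(payload) == received_crc
```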

Relevance:

80.00%

Publisher:

Abstract:

This dissertation describes the implementation of a WirelessHART network simulation module for Network Simulator 3 (ns-3), aiming at its acceptance both in the current context of networking research and in industry. To validate the module, tests were implemented for attenuation, packet error rate, information transfer success rate, and battery duration per station.

Relevance:

80.00%

Publisher:

Abstract:

Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason for this is that the function of a protein is intrinsically related to its spatial conformation. However, such conformations are very difficult to obtain experimentally in the laboratory. Thus, this problem has drawn the attention of many researchers in Bioinformatics. Considering the great difference between the number of protein sequences already known and the number of three-dimensional structures determined experimentally, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used in the recognition of protein structural classes: Decision Trees, k-Nearest Neighbor, Naive Bayes, Support Vector Machine and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these techniques (individual classifiers), homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work presents the problem of imbalanced classes, artificial techniques for class balancing (Random Undersampling, Tomek Links, CNN, NCL and OSS) are used to minimize this problem. In order to evaluate the ML methods, a cross-validation procedure is applied, in which the accuracy of the classifiers is measured as the mean classification error rate on independent test sets. These means are compared, two by two, by a hypothesis test in order to evaluate whether there is a statistically significant difference between them. With respect to the results obtained with the individual classifiers, the Support Vector Machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as the meta-classifier. The Voting method, despite its simplicity, proved adequate for solving the problem addressed in this work. The techniques for class balancing, on the other hand, did not produce a significant improvement in the global classification error. Nevertheless, their use did improve the classification error for the minority class, and in this context the NCL technique proved the most appropriate.
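
An illustrative sketch of the kind of comparison described above, run on a stand-in scikit-learn dataset; the dissertation's protein data, feature representation, and exact classifier settings are not reproduced here.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier

X, y = load_digits(return_X_y=True)            # stand-in multiclass dataset
models = {
    "Decision Tree":    DecisionTreeClassifier(random_state=0),
    "k-NN":             KNeighborsClassifier(),
    "Naive Bayes":      GaussianNB(),
    "SVM":              SVC(),
    "Bagging (trees)":  BaggingClassifier(DecisionTreeClassifier(), random_state=0),
    "Boosting (stumps)": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name:18s} mean error rate = {1 - acc.mean():.3f}")
```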

Relevance:

80.00%

Publisher:

Abstract:

Wavelet coding has emerged as an alternative coding technique to minimize the fading effects of wireless channels. This work evaluates the performance of wavelet coding, in terms of bit error probability, over time-varying, frequency-selective multipath Rayleigh fading channels. The adopted propagation model follows the COST207 norm, the main international standards reference for GSM, UMTS, and EDGE applications. The results show wavelet coding's efficiency against the intersymbol interference that characterizes these communication scenarios. This robustness enables the technique to be used in different environments, bringing it one step closer to being applied in practical wireless communication systems.
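
A toy Monte Carlo estimate of the bit error rate for uncoded BPSK over a flat Rayleigh fading channel; this is far simpler than the frequency-selective COST207 scenarios evaluated in the work, but it shows the basic BER-versus-SNR simulation methodology.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits = 200_000
bits = rng.integers(0, 2, n_bits)
symbols = 1 - 2 * bits                                   # BPSK mapping: 0 -> +1, 1 -> -1

for snr_db in (0, 10, 20):
    snr = 10 ** (snr_db / 10)
    # Unit-power Rayleigh fading coefficients and complex Gaussian noise.
    h = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2)
    noise = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2 * snr)
    r = h * symbols + noise
    detected = (np.real(np.conj(h) * r) < 0).astype(int)  # coherent matched-filter detection
    ber = np.mean(detected != bits)
    print(f"SNR = {snr_db:2d} dB  BER ~ {ber:.4f}")
```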

Relevance:

80.00%

Publisher:

Abstract:

Context-aware applications are typically dynamic and use services provided by several sources, with different quality levels. Context information quality is expressed in terms of Quality of Context (QoC) metadata, such as precision, correctness, refreshment, and resolution. On the other hand, service quality is expressed via Quality of Service (QoS) metadata such as response time, availability and error rate. In order to assure that an application is using services and context information that meet its requirements, it is essential to continuously monitor this metadata. For this purpose, a QoS and QoC monitoring mechanism is needed that meets the following requirements: (i) it supports measurement and monitoring of QoS and QoC metadata; (ii) it supports synchronous and asynchronous operation, thus enabling the application to periodically gather the monitored metadata and also to be asynchronously notified whenever a given metadata item becomes available; (iii) it uses ontologies to represent information in order to avoid ambiguous interpretation. This work presents QoMonitor, a module for QoS and QoC metadata monitoring that meets the above requirements. The architecture and implementation of QoMonitor are discussed. To support asynchronous communication, QoMonitor uses two protocols: JMS and Light-PubSubHubbub. In order to illustrate QoMonitor in the development of ubiquitous applications, it was integrated with OpenCOPI (Open COntext Platform Integration), a middleware platform that integrates several context-provision middleware systems. To validate QoMonitor, two applications were used as proofs of concept: an oil and gas monitoring application and a healthcare application. This work also presents a validation of QoMonitor in terms of performance, for both synchronous and asynchronous requests.
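
A hedged sketch of the synchronous/asynchronous access pattern the monitor is said to support; the class and method names below are illustrative and are not QoMonitor's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class MetadataMonitor:
    metadata: Dict[str, float] = field(default_factory=dict)
    subscribers: Dict[str, List[Callable[[str, float], None]]] = field(default_factory=dict)

    def get(self, name: str) -> float:
        """Synchronous access: the application polls the latest monitored value."""
        return self.metadata[name]

    def subscribe(self, name: str, callback: Callable[[str, float], None]) -> None:
        """Asynchronous access: the application is notified whenever a value arrives."""
        self.subscribers.setdefault(name, []).append(callback)

    def publish(self, name: str, value: float) -> None:
        """Called by the measurement side on each new QoS/QoC sample."""
        self.metadata[name] = value
        for cb in self.subscribers.get(name, []):
            cb(name, value)

monitor = MetadataMonitor()
monitor.subscribe("error_rate", lambda n, v: print(f"notified: {n} = {v}"))
monitor.publish("response_time_ms", 42.0)   # stored; no subscribers to notify
monitor.publish("error_rate", 0.01)         # triggers the asynchronous notification
print(monitor.get("response_time_ms"))      # synchronous poll
```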