960 results for ERROR rates


Relevance:

60.00%

Publisher:

Abstract:

[Es] This document explains the procedure followed to develop the final stage of a DVB-T2 decoder, which consists of extracting a video file from the binary file produced by the rest of the decoder. The decoder is the software of a receiver developed in 2010 by the TSR (Tratamiento de Señal y Radiocomunicaciones, i.e. Signal Processing and Radiocommunications) department of the Escuela de Ingenieros de Bilbao. That software can analyse the received DVB-T2 signal to compute the error rate and determine other relevant parameters, such as the modulation type used. However, to assess the improvements of DVB-T2 subjectively, and even to determine how errors affect image quality, the transmitted video must be viewed. For this reason a project was started whose goal is to program new software that produces a file containing the video in question. This software was written in the Matlab language; it takes the file produced by the receiver as input and processes it to obtain a new file containing the video. Once programmed and tested for correctness, it is run after the TSR department's receiver. With the video available, image quality can be compared under different communication error rates, simulating transmissions in different environments, each with its corresponding noise. In this way, the behaviour of a real transmission can be estimated with very high accuracy as a function of weather and other factors that affect the signal-to-noise ratio.
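
The abstract does not detail how the video file is recovered from the binary output; as a rough illustration of the kind of processing involved, the following minimal Python sketch (the original tool was written in Matlab) recovers aligned MPEG transport stream packets from a raw binary dump. The assumption that the decoder output is a byte-aligned TS container, and all names here, are mine, not the project's.

```python
import sys

TS_PACKET_SIZE = 188          # MPEG transport stream packet length in bytes
SYNC_BYTE = 0x47              # every TS packet starts with this sync byte

def extract_ts(in_path: str, out_path: str) -> int:
    """Scan a raw decoder dump for aligned TS packets and write them out."""
    data = open(in_path, "rb").read()
    out = open(out_path, "wb")
    written = 0
    i = 0
    while i + TS_PACKET_SIZE <= len(data):
        # Accept a packet only if the next packet boundary also syncs,
        # which filters out spurious 0x47 bytes inside payloads.
        if data[i] == SYNC_BYTE and (
            i + TS_PACKET_SIZE == len(data)
            or data[i + TS_PACKET_SIZE] == SYNC_BYTE
        ):
            out.write(data[i : i + TS_PACKET_SIZE])
            written += 1
            i += TS_PACKET_SIZE
        else:
            i += 1            # resynchronise byte by byte
    out.close()
    return written

if __name__ == "__main__":
    n = extract_ts(sys.argv[1], sys.argv[2])
    print(f"wrote {n} TS packets")
```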

Relevance:

60.00%

Publisher:

Abstract:

The growth of broadband services on mobile communication networks has driven demand for ever faster, higher-quality data. The mobile network technology called LTE (Long Term Evolution), or fourth generation (4G), emerged to meet this demand for wireless access to services such as Internet access, online gaming, VoIP, and video conferencing. LTE is part of the 3GPP Release 8 and 9 specifications, operating over an all-IP network and providing transmission rates above 100 Mbps (DL) and 50 Mbps (UL), low latency (10 ms), and compatibility with earlier generations of mobile networks, 2G (GSM/EDGE) and 3G (UMTS/HSPA). The TCP protocol, designed to operate over wired networks, performs poorly over wireless channels such as cellular mobile networks, mainly because of selective fading, shadowing, and the high error rates of the air interface. Since all losses are interpreted as being caused by congestion, the protocol's performance suffers. The goal of this dissertation is to evaluate, through simulation, the performance of several TCP variants under interference on the channels between the mobile terminal (UE, User Equipment) and a remote server. For this, the NS3 software (Network Simulator version 3) was used with the TCP Westwood Plus, New Reno, Reno, and Tahoe protocols. The test results show that TCP Westwood Plus outperforms the others. TCP New Reno and Reno performed very similarly because the interference model used has a uniform distribution, so the probability of losing consecutive bits within the same transmission window is low. TCP Tahoe, as expected, showed the worst performance of all, since it lacks the fast recovery mechanism and its congestion window always falls back to one segment after a timeout. It was also observed that delay has a major influence on TCP performance, even more than the bandwidth of the access and backbone links, since in the tested scenario the bottleneck was the air interface. Simulations with errors on the air interface, introduced with the NS3 fading script, showed that RLC AM mode (acknowledged) performs better than RLC UM mode (unacknowledged) for file-transfer applications in noisy environments.
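
The claim that Tahoe suffers most because it lacks fast recovery can be made concrete with a toy congestion-window model. This is a minimal per-RTT sketch under simplified assumptions (unit MSS, idealized slow start), not the NS3 simulation used in the dissertation:

```python
def cwnd_trace(events, variant="reno", mss=1, ssthresh0=64):
    """Toy congestion-window trace (in MSS units) for a sequence of
    per-RTT events: 'ack' (all data acked), 'dup' (triple duplicate
    ACK) or 'timeout'."""
    cwnd, ssthresh, trace = mss, ssthresh0, []
    for ev in events:
        if ev == "ack":
            if cwnd < ssthresh:
                cwnd *= 2                      # slow start
            else:
                cwnd += mss                    # congestion avoidance
        elif ev == "dup":                      # loss detected by dup ACKs
            ssthresh = max(cwnd // 2, 2 * mss)
            # Reno/New Reno: fast recovery resumes from half the window;
            # Tahoe: no fast recovery, restart from one segment.
            cwnd = ssthresh if variant in ("reno", "newreno") else mss
        elif ev == "timeout":                  # both variants restart
            ssthresh = max(cwnd // 2, 2 * mss)
            cwnd = mss
        trace.append(cwnd)
    return trace

events = ["ack"] * 6 + ["dup"] + ["ack"] * 4
print("reno :", cwnd_trace(events, "reno"))
print("tahoe:", cwnd_trace(events, "tahoe"))
```

On the `dup` event Reno resumes from half the previous window while Tahoe restarts from one segment, which is exactly the gap the dissertation's results reflect.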

Relevance:

60.00%

Publisher:

Abstract:

Biometrics have been used as an access-control solution for many systems for years, but the mere use of biometrics cannot be considered a final, perfect solution. Many risks exist and must not be ignored. Most of the problems relate to the transmission path between the place where users request access and the servers where the biometric data captured at enrolment are stored. Several types of attack can be carried out by impostors who wish to misuse the system. Besides the technical aspects, there is the social one. Users are increasingly concerned both about the storage and about the misuse of their biometrics, since a biometric is a unique identifier and, being invariant over time, may be lost forever if compromised. The fact that several companies, with their different servers, store users' biometrics is a source of discomfort, since it makes the biometrics more susceptible to attack. In this dissertation, the use of smart cards is adopted as a possible solution to the problems above. Multi-application smart cards are used to perform the biometric comparisons internally. This way, multiple servers would no longer be needed, since the biometric traits would always be on a single card held by its owner. Three different biometric identification algorithms were developed and implemented, using different traits: fingerprint, palm print, and iris. Considering memory usage, average execution time, and accuracy, palm-print biometrics achieved the best results, reaching minimal error rates and execution times under half a second.
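
As a sketch of the match-on-card idea, the comparison below runs against a stored template so the enrolled biometric never has to leave the card. The normalised Hamming distance and the 0.32 threshold are illustrative conventions (typical of iris codes), not the dissertation's algorithms:

```python
def match_on_card(stored: bytes, probe: bytes, threshold: float = 0.32) -> bool:
    """Toy match-on-card comparison: normalised Hamming distance
    between two fixed-length binary templates. The reference template
    stays on the card; only the probe is sent to it."""
    assert len(stored) == len(probe)
    bits = 8 * len(stored)
    dist = sum(bin(a ^ b).count("1") for a, b in zip(stored, probe))
    return dist / bits <= threshold

# Hypothetical 512-bit templates: the probe differs in 1 bit per byte.
enrolled = bytes([0b10110100] * 64)
probe    = bytes([0b10110101] * 64)
print(match_on_card(enrolled, probe))    # True: distance 64/512 = 0.125
```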

Relevance:

60.00%

Publisher:

Abstract:

This paper reviews advances in the technology of integrated semiconductor optical amplifier based photonic switch fabrics, with particular emphasis on their suitability for high performance network switches for use within a datacenter. The key requirements for large port count optical switch fabrics are addressed, noting the need for switches with substantial port counts. The design options for a 16×16 port photonic switch fabric architecture are discussed and the choice of a Clos-tree design is described. The control strategy, based on arbitration and scheduling, for an integrated switch fabric is explained. The detailed design and fabrication of the switch is followed by experimental characterization, showing net optical gain and operation at 10 Gb/s with bit error rates lower than 10^-9. Finally, improvements to the switch are suggested, which should result in 100 Gb/s per port operation at energy efficiencies of 3 pJ/bit. © 2011 Optical Society of America.
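
As a quick sanity check on the stated targets, 100 Gb/s per port at 3 pJ/bit implies about 0.3 W of switching power per port:

```python
# Worked arithmetic from the paper's target figures.
bit_rate = 100e9          # bits per second, per port
energy_per_bit = 3e-12    # joules per bit
ports = 16                # fabric size discussed in the paper

power_per_port = bit_rate * energy_per_bit
print(f"per port : {power_per_port:.2f} W")          # 0.30 W
print(f"16 ports : {ports * power_per_port:.1f} W")  # 4.8 W
```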

Relevance:

60.00%

Publisher:

Abstract:

Otolith thermal marking is an efficient method for mass marking hatchery-reared salmon and can be used to estimate the proportion of hatchery fish captured in a mixed-stock fishery. Accuracy of the thermal pattern classification depends on the prominence of the pattern, the methods used to prepare and view the patterns, and the training and experience of the personnel who determine the presence or absence of a particular pattern. Estimating accuracy rates is problematic when no secondary marking is available and no error-free standards exist. Agreement measures, such as kappa (κ), provide a relative measure of the reliability of the determinations when independent readings by two readers are available, but the magnitude of κ can be influenced by the proportion of marked fish. If a third reader is used or if two or more groups of paired readings are examined, latent class models can provide estimates of the error rates of each reader. Applications of κ and latent class models are illustrated by a program providing contribution estimates of hatchery-reared chum and sockeye salmon in Southeast Alaska.
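
For reference, Cohen's kappa for two readers' binary mark/no-mark calls is computed as below; the example also shows how the chance-agreement term p_e ties κ to the marginal proportion of marked fish, which is the dependence the abstract notes. Data are hypothetical:

```python
def cohens_kappa(reader1, reader2):
    """Cohen's kappa for paired binary determinations:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e the agreement expected by chance from the marginals."""
    n = len(reader1)
    p_o = sum(a == b for a, b in zip(reader1, reader2)) / n
    p1 = sum(reader1) / n                 # reader 1's marked proportion
    p2 = sum(reader2) / n                 # reader 2's marked proportion
    p_e = p1 * p2 + (1 - p1) * (1 - p2)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

r1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]       # 1 = thermal mark present
r2 = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0]
print(round(cohens_kappa(r1, r2), 3))     # 0.615
```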

Relevance:

60.00%

Publisher:

Abstract:

In recent years, the use of morphological decomposition strategies for Arabic Automatic Speech Recognition (ASR) has become increasingly popular. Systems trained on morphologically decomposed data are often used in combination with standard word-based approaches, and they have been found to yield consistent performance improvements. The present article contributes to this ongoing research endeavour by exploring the use of the 'Morphological Analysis and Disambiguation for Arabic' (MADA) tools for this purpose. System integration issues concerning language modelling and dictionary construction, as well as the estimation of pronunciation probabilities, are discussed. In particular, a novel solution for morpheme-to-word conversion is presented which makes use of an N-gram Statistical Machine Translation (SMT) approach. System performance is investigated within a multi-pass adaptation/combination framework. All the systems described in this paper are evaluated on an Arabic large vocabulary speech recognition task which includes both Broadcast News and Broadcast Conversation test data. It is shown that the use of MADA-based systems, in combination with word-based systems, can reduce the Word Error Rates by up to 8.1% relative. © 2012 Elsevier Ltd. All rights reserved.
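
The quoted gain is a relative reduction, i.e. 100·(WER_base − WER_sys)/WER_base. A small sketch with hypothetical numbers (the baseline WER is not given in the abstract):

```python
def relative_wer_reduction(wer_baseline: float, wer_system: float) -> float:
    """Relative word error rate reduction, in percent."""
    return 100.0 * (wer_baseline - wer_system) / wer_baseline

# Hypothetical: an 8.1% relative gain on a 20% baseline corresponds
# to an absolute WER of about 18.4%.
print(round(relative_wer_reduction(20.0, 18.38), 1))  # ~8.1
```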

Relevance:

60.00%

Publisher:

Abstract:

Virtual assembly environment (VAE) technology has great potential for benefiting manufacturing applications in industry. Usability is an important aspect of the VAE. This paper presents the usability evaluation of a developed multi-sensory VAE. The evaluation is conducted using three attributes: (a) efficiency of use; (b) user satisfaction; and (c) reliability. These are addressed using task completion times (TCTs), questionnaires, and human performance error rates (HPERs), respectively. A peg-in-a-hole task and a Sener electronic box assembly task were used to perform the experiments, with sixteen participants. The outcomes showed that the introduction of 3D auditory and/or visual feedback could improve usability. They also indicated that the integrated feedback (visual plus auditory) offered better usability than either feedback used in isolation. Most participants preferred the integrated feedback to either single form of feedback (visual or auditory) or no feedback. The participants' comments demonstrated that unrealistic or inappropriate feedback had negative effects on usability and easily made them feel frustrated. The possible reasons behind the outcomes are also analysed. © 2007 ACADEMY PUBLISHER.
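
A minimal sketch of two of the three measures, with hypothetical data; the abstract does not define HPER's exact formula, so errors-per-opportunity is an assumption here:

```python
from statistics import mean

def usability_metrics(tcts_s, errors, opportunities):
    """Mean task completion time (TCT) and human performance error
    rate (HPER), here taken as total errors over total error
    opportunities -- an assumed definition, for illustration only."""
    return {
        "mean_TCT_s": mean(tcts_s),
        "HPER": sum(errors) / sum(opportunities),
    }

# Hypothetical data for one feedback condition (4 of 16 participants).
print(usability_metrics(
    tcts_s=[41.2, 38.5, 45.0, 39.8],
    errors=[2, 1, 3, 1],
    opportunities=[20, 20, 20, 20],
))
```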

Relevance:

60.00%

Publisher:

Abstract:

An electro-optically (EO) modulated oxide-confined vertical-cavity surface-emitting laser (VCSEL) containing a saturable absorber in the VCSEL cavity is studied. The device contains an EO modulator section that is resonant with the VCSEL cavity. A type-II EO superlattice medium is employed in the modulator section and shown to result in a strong negative EO effect in weak electric fields. Applying the reverse bias voltages to the EO section allows triggering of short pulses in the device. Digital data transmission (return-to-zero pseudo-random bit sequence, 2^7-1) at 10 Gb/s at bit-error-rates well below 10^-9 is demonstrated. © 2014 AIP Publishing LLC.
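
The 2^7-1 pattern is the standard PRBS-7 sequence, generated by a 7-bit LFSR with polynomial x^7 + x^6 + 1. A minimal generator sketch:

```python
def prbs7(n_bits: int, seed: int = 0x7F):
    """PRBS 2^7-1 generator: 7-bit LFSR with polynomial x^7 + x^6 + 1
    (feedback from stages 7 and 6)."""
    state = seed & 0x7F
    out = []
    for _ in range(n_bits):
        bit = ((state >> 6) ^ (state >> 5)) & 1   # XOR of stages 7 and 6
        state = ((state << 1) | bit) & 0x7F
        out.append(bit)
    return out

seq = prbs7(254)                   # two full periods of 127 bits
assert seq[:127] == seq[127:]      # repeats with period 2^7 - 1 = 127
print("".join(map(str, seq[:32])))
```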

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we propose a new scheme for omnidirectional object recognition in free space. The proposed scheme divides the problem into several omnidirectional object-recognition subproblems with different depression angles. An omnidirectional object-recognition system with oblique observation directions, based on a new recognition theory, Biomimetic Pattern Recognition (BPR), is discussed in detail. On this basis, the required number of training samples for the omnidirectional object-recognition system in free space can be determined. Omnidirectional cognitive tests were carried out on various kinds of animal models of rather similar shapes. Over the total of 8400 tests, the correct recognition rate is 99.89% and the rejection rate is 0.11%, with zero error rate. Experimental results are presented to show that the proposed approach outperforms SVMs with either a third-degree polynomial kernel or a radial basis function kernel.
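
The decomposition can be pictured as one recognizer per depression angle, each able to reject samples that fall outside its learned region. The sketch below mirrors only the accept/reject structure with a simple distance-to-prototype rule; BPR's actual covering model is more elaborate, and all names and data here are illustrative:

```python
import numpy as np

def recognise(sample, models_by_angle, radius):
    """Search every depression angle's per-class prototypes; accept
    the nearest class only if it lies within `radius`, else reject."""
    best_cls, best_d = None, np.inf
    for angle, protos in models_by_angle.items():
        for cls, pts in protos.items():
            d = np.linalg.norm(pts - sample, axis=1).min()
            if d < best_d:
                best_cls, best_d = cls, d
    return best_cls if best_d <= radius else None   # None = rejection

rng = np.random.default_rng(0)
models = {30: {"cat": rng.normal(0, 1, (50, 8)),
               "dog": rng.normal(4, 1, (50, 8))}}
probe = models[30]["cat"][0] + 0.1           # near a stored 'cat' prototype
print(recognise(probe, models, radius=3.0))  # 'cat'
far = models[30]["cat"][0] + 20.0            # far from every prototype
print(recognise(far, models, radius=3.0))    # None (rejected)
```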

Relevance:

60.00%

Publisher:

Abstract:

A new theoretical model of pattern recognition principles is proposed, based on "matter cognition" instead of the "matter classification" of traditional statistical pattern recognition. This new model is closer to the way human cognition works, rather than traditional statistical pattern recognition, which takes "optimal separation" as its main principle. The new model is therefore called Biomimetic Pattern Recognition (BPR). Its mathematical basis is the topological analysis of the sample set in a high-dimensional feature space, so it is also called Topological Pattern Recognition (TPR). The fundamental idea of this model rests on the continuity, in the feature space, of samples belonging to any given class. We experimented with Biomimetic Pattern Recognition (BPR) using artificial neural networks, which act by covering the high-dimensional geometrical distribution of the sample set in the feature space. Omnidirectional cognitive tests were done on various kinds of animal and vehicle models of rather similar shapes. Over the total of 8800 tests, the correct recognition rate is 99.87% and the rejection rate is 0.13%, with zero error rate; under this zero-error condition, the correct rate of BPR was much better than that of RBF-SVM.
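
The "covering" idea can be sketched as representing each class by a union of hyperspheres centred on training samples, with everything outside all covers rejected rather than forced into a class. A greedy toy version (not the neural-network construction used in the paper):

```python
import numpy as np

def cover(train, radius):
    """Greedy sketch of the covering idea: keep a subset of training
    samples as hypersphere centres so every sample is within `radius`
    of some centre."""
    centres = []
    for x in train:
        if not centres or min(np.linalg.norm(c - x) for c in centres) > radius:
            centres.append(x)
    return np.array(centres)

def classify(x, covers, radius):
    """Accept x only if it lies inside some class's covered region."""
    for cls, centres in covers.items():
        if np.linalg.norm(centres - x, axis=1).min() <= radius:
            return cls
    return None                                   # rejection

rng = np.random.default_rng(1)
train_a = rng.normal(0, 1, (200, 4))
train_b = rng.normal(5, 1, (200, 4))
covers = {"A": cover(train_a, 1.0), "B": cover(train_b, 1.0)}
print(classify(train_a[0], covers, 1.0))       # 'A' (inside its own cover)
print(classify(np.full(4, 2.5), covers, 1.0))  # None: between the classes
```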

Relevance:

60.00%

Publisher:

Abstract:

Stimulus-response compatibility is a key concept in human-machine interaction. It has been shown that mapping stimuli to responses according to the salient-feature coding principle yields a compatible pairing. In the design of Chinese Pinyin code input devices, stimulus-response compatibility gives the device ease of use and ease of learning. In this research, response times and error rates of two designs following the salient-feature coding principle and one design with random mapping were tested, along with the QWERTY keyboard. Cross-modal compatibility effects were found. There was no significant difference between the two salient-feature coding designs in either response time or error rate, but response times did differ between the salient-feature coding designs and the random-mapping design. Compared with the QWERTY keyboard group, the error rates of the chord keyboard group showed no significant differences, but subjects assigned to the QWERTY keyboard group had shorter response times. One possible reason is that the chord keyboard subjects were only at beginner level after at most 6 hours of practice, whereas the QWERTY subjects were at least at novice level, having taken a foundational computer class at their own college.

Relevance:

60.00%

Publisher:

Abstract:

Binary image classification is a problem that has received much attention in recent years. In this paper we evaluate a selection of popular techniques in an effort to find a feature set / classifier combination which generalizes well to full resolution image data. We then apply that system to images at one-half through one-sixteenth resolution, and consider the corresponding error rates. In addition, we further observe generalization performance as it depends on the number of training images, and lastly, compare the system's best error rates to those of a human performing an identical classification task given the same set of test images.
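
A plausible evaluation harness for the resolution experiment: block-average downsampling from full resolution down to one-sixteenth, with an error rate per factor. The `classify` argument stands in for whatever feature set / classifier combination was selected; the demo data and dummy classifier are hypothetical:

```python
import numpy as np

def downsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Reduce resolution by averaging factor x factor tiles
    (one-half resolution = factor 2, one-sixteenth = factor 16)."""
    h, w = img.shape
    return img[: h - h % factor, : w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def error_rate(y_true, y_pred) -> float:
    return float(np.mean(np.asarray(y_true) != np.asarray(y_pred)))

def evaluate(images, labels, classify, factors=(1, 2, 4, 8, 16)):
    """Error rate of one fixed classifier at each resolution factor."""
    return {f: error_rate(labels, [classify(downsample(im, f)) for im in images])
            for f in factors}

rng = np.random.default_rng(0)
imgs = [rng.random((32, 32)) for _ in range(10)]
labels = [int(im.mean() > 0.5) for im in imgs]
dummy = lambda im: int(im.mean() > 0.5)   # the mean survives downsampling
print(evaluate(imgs, labels, dummy))       # error 0.0 at every factor
```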

Relevance:

60.00%

Publisher:

Abstract:

This thesis examines the problem of an autonomous agent learning a causal world model of its environment. Previous approaches to learning causal world models have concentrated on environments that are too "easy" (deterministic finite state machines) or too "hard" (containing much hidden state). We describe a new domain --- environments with manifest causal structure --- for learning. In such environments the agent has an abundance of perceptions of its environment. Specifically, it perceives almost all the relevant information it needs to understand the environment. Many environments of interest have manifest causal structure and we show that an agent can learn the manifest aspects of these environments quickly using straightforward learning techniques. We present a new algorithm to learn a rule-based causal world model from observations in the environment. The learning algorithm includes (1) a low level rule-learning algorithm that converges on a good set of specific rules, (2) a concept learning algorithm that learns concepts by finding completely correlated perceptions, and (3) an algorithm that learns general rules. In addition this thesis examines the problem of finding a good expert from a sequence of experts. Each expert has an "error rate"; we wish to find an expert with a low error rate. However, each expert's error rate and the distribution of error rates are unknown. A new expert-finding algorithm is presented and an upper bound on the expected error rate of the expert is derived.
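
The expert-finding problem can be illustrated with a much-simplified strategy (not the thesis's algorithm, whose bound is derived analytically): estimate each expert's error rate from enough trials for a Hoeffding bound, and keep the first whose estimate is comfortably below a target rate:

```python
import math
import random

def find_good_expert(experts, eps=0.05, delta=0.05, target=0.2):
    """Test each expert in sequence on enough queries to estimate its
    error rate within eps with confidence 1 - delta (Hoeffding bound),
    and keep the first whose estimate is below target - eps."""
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))   # trials per expert
    for expert in experts:
        errors = sum(expert() for _ in range(n))          # 1 = mistake
        if errors / n < target - eps:
            return expert
    return None

# Each "expert" is modelled as a Bernoulli error source with an
# unknown rate; here the true rates are 0.4, 0.3 and 0.1.
random.seed(0)
experts = [lambda p=p: random.random() < p for p in (0.4, 0.3, 0.1)]
good = find_good_expert(experts)
print("found expert with low estimated error" if good else "none found")
```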

Relevance:

60.00%

Publisher:

Abstract:

This paper introduces an algorithm that uses boosting to learn a distance measure for multiclass k-nearest neighbor classification. Given a family of distance measures as input, AdaBoost is used to learn a weighted distance measure, that is, a linear combination of the input measures. The proposed method can be seen both as a novel way to learn a distance measure from data, and as a novel way to apply boosting to multiclass recognition problems that does not require output codes. In our approach, multiclass recognition of objects is reduced to a single binary recognition task, defined on triples of objects. Preliminary experiments with eight UCI datasets yield no clear winner among our method, boosting using output codes, and k-nn classification using an unoptimized distance measure. Our algorithm did achieve lower error rates on some of the datasets, which indicates that, in some domains, it may lead to better results than existing methods.
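
A sketch of the triples construction under illustrative assumptions: the feature vector of a triple (a, b, c) collects, for each input measure, the margin by which b is closer to a than c is, and boosting over decision stumps (scikit-learn's default AdaBoost base learner) yields a learned comparator. The specific measures and data here are mine, not the paper's:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def triple_features(a, b, c, measures):
    """Feature i is d_i(a, c) - d_i(a, b): positive values say
    measure i considers b closer to a than c is."""
    return [d(a, c) - d(a, b) for d in measures]

# An illustrative family of input measures: L1, L2 and Chebyshev.
measures = [
    lambda x, y: float(np.abs(x - y).sum()),
    lambda x, y: float(np.linalg.norm(x - y)),
    lambda x, y: float(np.abs(x - y).max()),
]

# Build triples (a, b, c) with b from a's class and c from another;
# label 1 means "b is the same-class element". Each triple is also
# presented swapped so the binary task is balanced.
rng = np.random.default_rng(0)
X = {0: rng.normal(0, 1, (30, 5)), 1: rng.normal(2, 1, (30, 5))}
feats, labels = [], []
for _ in range(500):
    ca, cc = (0, 1) if rng.random() < 0.5 else (1, 0)
    a, b = X[ca][rng.integers(30)], X[ca][rng.integers(30)]
    c = X[cc][rng.integers(30)]
    feats.append(triple_features(a, b, c, measures)); labels.append(1)
    feats.append(triple_features(a, c, b, measures)); labels.append(0)

clf = AdaBoostClassifier(n_estimators=50).fit(feats, labels)
print("training accuracy:", clf.score(feats, labels))
```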

Relevance:

60.00%

Publisher:

Abstract:

For two multinormal populations with equal covariance matrices the likelihood ratio discriminant function, an alternative allocation rule to the sample linear discriminant function when n1 ≠ n2, is studied analytically. With the assumption of a known covariance matrix its distribution is derived and the expectation of its actual and apparent error rates evaluated and compared with those of the sample linear discriminant function. This comparison indicates that the likelihood ratio allocation rule is robust to unequal sample sizes. The quadratic discriminant function is studied, its distribution reviewed and the evaluation of its probabilities of misclassification discussed. For known covariance matrices the distribution of the sample quadratic discriminant function is derived. When the known covariance matrices are proportional, exact expressions for the expectation of its actual and apparent error rates are obtained and evaluated. The effectiveness of the sample linear discriminant function for this case is also considered. Estimation of the true log-odds for two multinormal populations with equal or unequal covariance matrices is studied. The estimative, Bayesian predictive and kernel methods are compared by evaluating their biases and mean square errors. Some algebraic expressions for these quantities are derived. With equal covariance matrices the predictive method is preferable. Where this superiority comes from is investigated by considering the method's performance for various levels of fixed true log-odds. It is also shown that the predictive method is sensitive to n1 ≠ n2. For unequal but proportional covariance matrices the unbiased estimative method is preferred. Product Normal kernel density estimates are used to give a kernel estimator of the true log-odds. The effect of correlation in the variables with product kernels is considered. With equal covariance matrices the kernel and parametric estimators are compared by simulation. For moderately correlated variables and large dimension sizes the product kernel method is a good estimator of the true log-odds.
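
For reference, the standard textbook forms of the two allocation rules discussed (the thesis's exact notation and cut-offs may differ):

```latex
% Sample linear discriminant function (equal covariance matrices);
% allocate x to population 1 when W(x) exceeds a cut-off (0 for
% equal priors and costs):
W(\mathbf{x}) = (\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2)' S^{-1}
  \left( \mathbf{x} - \tfrac{1}{2}(\bar{\mathbf{x}}_1 + \bar{\mathbf{x}}_2) \right)

% Quadratic discriminant function (unequal covariance matrices),
% i.e. the log likelihood ratio up to the prior term:
Q(\mathbf{x}) = \tfrac{1}{2}\ln\frac{|\Sigma_2|}{|\Sigma_1|}
  + \tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_2)'\Sigma_2^{-1}(\mathbf{x}-\boldsymbol{\mu}_2)
  - \tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_1)'\Sigma_1^{-1}(\mathbf{x}-\boldsymbol{\mu}_1)
```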