869 results for Error-correcting codes (Information theory)
Abstract:
We consider bipartitions of one-dimensional extended systems whose probability distribution functions describe stationary states of stochastic models. We define estimators of the information shared between the two subsystems. If the correlation length is finite, the estimators stay finite for large system sizes. If the correlation length diverges, so do the estimators. The definition of the estimators is inspired by information theory. We look at several models and compare the behaviors of the estimators in the finite-size scaling limit. Analytical and numerical methods as well as Monte Carlo simulations are used. We show how the finite-size scaling functions change for various phase transitions, including the case where one has conformal invariance.
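As an illustration of the general idea (a hypothetical toy model, not the paper's estimators): the plug-in mutual information between the two halves of sampled configurations stays finite when correlations are short-ranged and grows as the correlation length does.

```python
import numpy as np
from collections import Counter

def plug_in_mi(pairs):
    """Plug-in estimate (in bits) of the information shared by the two halves,
    from joint samples of (left half, right half) configurations."""
    n = len(pairs)
    joint, left, right = Counter(pairs), Counter(), Counter()
    for l, r in pairs:
        left[l] += 1
        right[r] += 1
    return sum(c / n * np.log2(c * n / (left[l] * right[r]))
               for (l, r), c in joint.items())

rng = np.random.default_rng(0)
n_sites, n_samples = 8, 20000

def sample_chain(p_flip):
    # Toy stationary ensemble: a two-state Markov chain along the lattice;
    # the flip probability sets the correlation length.
    s = np.empty(n_sites, dtype=int)
    s[0] = rng.integers(2)
    for i in range(1, n_sites):
        s[i] = s[i - 1] ^ (rng.random() < p_flip)
    return s

for p_flip in (0.5, 0.1, 0.01):  # short to long correlation length
    pairs = [(tuple(s[:4]), tuple(s[4:]))
             for s in (sample_chain(p_flip) for _ in range(n_samples))]
    print(f"p_flip={p_flip}: I(L;R) ~ {plug_in_mi(pairs):.3f} bits")
```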
Abstract:
The normal distribution is commonly assumed for the surrogate of the true covariates in the classical error model. This paper considers a broader class of distributions, which includes the normal, for the variables subject to error. An estimation approach yielding consistent estimators is developed, and simulation studies are reported.
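A toy illustration of the setting (a sketch under standard textbook assumptions, not the paper's estimator): in the classical error model the surrogate is W = X + U, naive least squares on W is attenuated toward zero, and a simple method-of-moments correction restores consistency when the error variance is known, even for a non-normal X.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
beta = 2.0
x = rng.gamma(shape=2.0, scale=1.0, size=n)   # true covariate (non-normal)
u = rng.normal(scale=0.5, size=n)             # classical measurement error
w = x + u                                     # observed surrogate
y = beta * x + rng.normal(scale=1.0, size=n)

# Naive OLS on the surrogate is attenuated by var(x) / (var(x) + var(u)).
beta_naive = np.cov(w, y)[0, 1] / np.var(w)
# Method-of-moments correction, assuming the error variance 0.5**2 is known.
beta_mom = np.cov(w, y)[0, 1] / (np.var(w) - 0.5**2)
print(beta_naive, beta_mom)   # ~1.78 (biased) vs ~2.0 (consistent)
```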
Abstract:
This paper uses Shannon's information theory to give a quantitative definition of information flow in systems that transform inputs to outputs. For deterministic systems, the definition is shown to specialise to a simpler form when the information source and the known inputs jointly determine the outputs. For this special case, the definition is related to the classical security condition of non-interference, and an equivalence is established between non-interference and independence of random variables. Quantitative information flow for deterministic systems is then presented in relational form. With this presentation, it is shown how relational parametricity can be used to derive upper and lower bounds on information flows through families of functions defined in the second-order lambda calculus.
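A small sketch of the deterministic special case (function names hypothetical, not the paper's notation): with the secret S uniform and independent of the known input L, the flow I(O; S | L) reduces to the expected entropy of the output for each known input, computable by enumeration.

```python
import math
from collections import defaultdict

def flow_bits(program, secrets, lows):
    """I(O; S | L) for a deterministic program with S uniform and independent
    of L; by determinism this equals the average over L of H(O | L = l)."""
    total = 0.0
    for low in lows:
        outs = defaultdict(int)
        for s in secrets:
            outs[program(s, low)] += 1
        total += -sum(c / len(secrets) * math.log2(c / len(secrets))
                      for c in outs.values()) / len(lows)
    return total

secrets = range(64)  # a 6-bit secret
# A password check leaks only ~0.12 bits per guess...
print(flow_bits(lambda s, l: int(s == l), secrets, range(64)))
# ...while revealing s & l leaks 3 bits on average.
print(flow_bits(lambda s, l: s & l, secrets, range(64)))
```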
Abstract:
This study investigates the validity of the Illinois Test of Psycholinguistic Abilities (ITPA), an instrument for assessing child language development. Its authors, S. Kirk and J. J. McCarthy (1961), adopt the theoretical framework proposed by C. Osgood (1957), incorporating into it a model derived from Information Theory, which allows the ITPA to be included in clinical practice as part of the psychodiagnostic process for assessing communication problems in children between three and ten years of age. The objectives guiding this work can be defined on three levels: 1) the constructs and their interrelations, a critical analysis of the theoretical validity of the ITPA; 2) its adequacy as a diagnostic instrument for school performance, i.e., its ability to discriminate levels of academic achievement; 3) the effectiveness of the psychopedagogical practice proposed by the instrument. The study of theoretical validity was carried out with 931 children between three and ten years of age attending daycare centers, kindergartens, or regular primary-school classes in the municipal school system of Rio de Janeiro. Factor analysis, complemented by a logical approach, confirmed some of the dimensions proposed by Kirk and McCarthy's theoretical framework. For diagnostic validity, 71 children with academic difficulties, classified as insufficient or deficient achievement, were assessed, and their results were compared with those of a randomly constituted subgroup of children from the previous study. Discriminant analysis led to the following conclusion: although the construct validity of the ITPA was not completely confirmed, at a diagnostic level the results make it possible to identify, with a low margin of error, the children belonging to each of the contrast groups. For the third level, a broad review was made of the literature on studies conducted with this instrument in Brazil and abroad, with the aim of evaluating the effectiveness of psychopedagogical practice based on the intervention resources the ITPA proposes. It was concluded that the research carried out so far is not sufficient to support a firm judgment about educational practice aimed at rehabilitating children with communication problems, which hinder their academic performance, given the controversies those studies present.
Abstract:
Advance knowledge of load values is extremely important for the planning and operation of electric power systems. This work presents the results of an investigative study of the application of Multilayer Perceptron artificial neural networks, trained with an information-theoretic criterion, to the short-term load forecasting problem. Information-theoretic learning uses a measure of information content (entropy) to train an artificial neural network. Two forecasting models are presented, both developed from real data supplied by a power utility. For comparison, and to verify the efficiency of the proposed models, a third model was developed using a neural network trained with the classical mean squared error criterion. The results show the efficiency of the proposed systems, which achieved better forecasts than the MSE-trained network and than forecasting systems previously reported in the literature.
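A minimal sketch of the information-theoretic training idea (a generic illustration, not the thesis's forecasting models): instead of MSE, ascend a kernel estimate of the "information potential" of the errors, which is equivalent to minimizing their Renyi quadratic entropy. The kernel width and step size below are arbitrary, untuned choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma, lr = 200, 1.0, 0.1
X = np.column_stack([np.ones(N), rng.uniform(-1, 1, N)])
# Target with heavy-tailed noise, where the MSE criterion is fragile.
y = 1.0 + 3.0 * X[:, 1] + 0.1 * rng.standard_t(df=1.5, size=N)

def info_potential_grad(w):
    e = y - X @ w                        # prediction errors
    d = e[:, None] - e[None, :]          # pairwise error differences
    g = np.exp(-d**2 / (2 * sigma**2))   # Gaussian kernel on the differences
    # Gradient of V(w) = mean_ij G_sigma(e_i - e_j) with respect to w.
    coef = g * d / sigma**2
    return (coef[:, :, None] * (X[:, None, :] - X[None, :, :])).sum(axis=(0, 1)) / N**2

w = np.zeros(2)
for _ in range(1000):
    w += lr * info_potential_grad(w)     # ascend the information potential
# Entropy is shift-invariant, so the criterion fixes only the error *shape*;
# recenter the bias term on the mean error afterwards.
w[0] += np.mean(y - X @ w)
print("entropy-trained weights:", w)     # close to the true (1.0, 3.0)
```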
Abstract:
Corresponding to $C_{0}[n,n-r]$, a binary cyclic code generated by a primitive irreducible polynomial $p(X)\in \mathbb{F}_{2}[X]$ of degree $r=2b$, where $b\in \mathbb{Z}^{+}$, we can construct a binary cyclic code $C[(n+1)^{3^{k}}-1,(n+1)^{3^{k}}-1-3^{k}r]$, generated by the primitive irreducible generalized polynomial $p(X^{\frac{1}{3^{k}}})\in \mathbb{F}_{2}[X;\frac{1}{3^{k}}\mathbb{Z}_{0}]$ of degree $3^{k}r$, where $k\in \mathbb{Z}^{+}$. This new code $C$ improves the code rate and has a higher error-correction capability than $C_{0}$. The purpose of this study is to establish a decoding procedure for $C_{0}$ that uses $C$ in such a way that one obtains an improved code rate and error-correcting capability for $C_{0}$.
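For concreteness, a minimal sketch of systematic encoding and syndrome computation for a binary cyclic code from its generator polynomial, using the degree-3 primitive $p(X)=X^{3}+X+1$ (the (7,4) case) as a stand-in; the paper's monoid-ring construction over $\mathbb{F}_{2}[X;\frac{1}{3^{k}}\mathbb{Z}_{0}]$ is not reproduced here.

```python
def poly_mod2_div(dividend, divisor):
    """Remainder of GF(2) polynomial division; polynomials as int bitmasks."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def encode(msg_bits, gen, r):
    """Systematic cyclic encoding: codeword = msg*x^r + (msg*x^r mod gen)."""
    shifted = msg_bits << r
    return shifted | poly_mod2_div(shifted, gen)

def syndrome(word, gen):
    return poly_mod2_div(word, gen)   # zero iff word is a codeword

g = 0b1011                 # p(X) = X^3 + X + 1, primitive of degree r = 3
cw = encode(0b1101, g, 3)  # (7,4) code: 4 message bits, 3 check bits
print(f"codeword: {cw:07b}, syndrome: {syndrome(cw, g):03b}")
print(f"with a bit flipped: syndrome {syndrome(cw ^ 0b0010000, g):03b}")  # nonzero
```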
Abstract:
The similarity measure is one of the main factors that affect the accuracy of intensity-based 2D/3D registration of X-ray fluoroscopy to CT images. Information theory has been used to derive similarity measures for image registration, leading to the introduction of mutual information, an accurate similarity measure for multi-modal and mono-modal registration tasks. However, the standard mutual information measure takes only intensity values into account, without considering spatial information, and its robustness is questionable. Previous attempts to incorporate spatial information into mutual information either require computing the entropy of higher-dimensional probability distributions or are not robust to outliers. In this paper, we show how to incorporate spatial information into mutual information without suffering from these problems. Using a variational approximation derived from the Kullback-Leibler bound, spatial information can be effectively incorporated into mutual information via energy minimization. The resulting similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experimental results are presented for two applications: (a) intra-operative patient pose estimation from a few (e.g. 2) calibrated fluoroscopic images, and (b) post-operative cup alignment estimation from a single X-ray radiograph with gonadal shielding.
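For reference, a sketch of the standard intensity-only MI the paper improves on (synthetic images and a hypothetical helper name): MI is computed from the joint intensity histogram and is high for a nonlinear intensity remapping of the same image, the property that makes it suitable for multi-modal registration.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Standard intensity-based MI (bits) from the joint histogram of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
fixed = rng.random((128, 128))
# A nonlinearly remapped copy still yields high MI (multi-modal analogy)...
print(mutual_information(fixed, np.cos(3 * fixed)))
# ...while an unrelated image yields MI near zero.
print(mutual_information(fixed, rng.random((128, 128))))
```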
Abstract:
Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as the aero- and hydrodynamic systems that govern various physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while helping the user observe and understand the flow field clearly. My research focuses on the analysis and visualization of flow fields using various techniques, e.g. information-theoretic techniques and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, how to select good streamlines that capture flow patterns and how to pick good viewpoints for observing flow fields become critical. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using a dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, so the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a solution for view-dependent streamline selection that guarantees coherent streamline updates as the view changes gradually. When 3D streamlines are projected to 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we design FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration. It enables observation and exploration of the relationships among field line clusters, spatiotemporal regions, and their interconnections in the transformed space. Most viewpoint selection methods consider only external viewpoints outside of the flow field, which fail to convey a clear view when the flow field is cluttered near the boundary. We therefore propose a new way to explore flow fields: selecting several internal viewpoints around the flow features inside the field and generating a B-spline curve path traversing these viewpoints, giving users close-up views for detailed observation of hidden or occluded internal flow features [54]. This work is also extended to handle unsteady flow fields. Beyond flow field visualization, other topics in visualization also attract my attention. In iGraph [31], we leverage a distributed system along with a tiled display wall to provide users with real-time, high-resolution visual analytics of large image and text collections. Developing pedagogical visualization tools forms another focus of my research. Since most cryptography algorithms use sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does it. We therefore develop a set of visualization tools that give users an intuitive way to learn and understand these algorithms.
Abstract:
Information theory-based metrics such as mutual information (MI) are widely used as similarity measures for multimodal registration. Nevertheless, this metric may lead to matching ambiguity in non-rigid registration, and maximizing MI alone does not necessarily produce an optimal solution. In this paper, we propose a segmentation-assisted similarity metric based on point-wise mutual information (PMI). This metric, termed SPMI, enhances registration accuracy by considering tissue classification probabilities as prior information, generated by an expectation-maximization (EM) algorithm. The diffeomorphic demons method is then adopted as the registration model and optimized in a hierarchical framework (H-SPMI) that uses different levels of anatomical structure as prior knowledge. The proposed method is evaluated on BrainWeb synthetic data and clinical fMRI images. Qualitative and quantitative assessments were performed, as well as a sensitivity analysis with respect to segmentation error. Compared to purely intensity-based approaches that only maximize mutual information, the proposed algorithm provides significantly better accuracy on both synthetic and clinical data.
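For reference, the point-wise quantity behind SPMI: standard MI is the expectation of the point-wise mutual information over the joint intensity distribution,

```latex
\mathrm{PMI}(a,b) = \log\frac{p(a,b)}{p(a)\,p(b)}, \qquad
\mathrm{MI}(A;B) = \mathbb{E}_{p(a,b)}\!\left[\mathrm{PMI}(a,b)\right],
```

so weighting the point-wise terms, here by the tissue-class priors from the EM segmentation, is what localizes the metric.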
Abstract:
The security of quantum key distribution protocols is guaranteed by the laws of quantum mechanics. However, a precise analysis of the security properties requires tools from both classical cryptography and information theory. Here, we employ recent results in non-asymptotic classical information theory to show that information reconciliation imposes fundamental limitations on the amount of secret key that can be extracted in the finite key regime. In particular, we find that an often used approximation for the information leakage during one-way information reconciliation is flawed and we propose an improved estimate.
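The often-used approximation the abstract refers to is commonly written in the QKD literature as (this standard form is an assumption here, not necessarily the paper's notation)

```latex
\mathrm{leak}_{\mathrm{EC}} \approx f_{\mathrm{EC}}\, n\, h(Q), \qquad
h(Q) = -Q\log_2 Q - (1-Q)\log_2(1-Q),
```

where $n$ is the block length, $Q$ the quantum bit error rate, and $f_{\mathrm{EC}} \gtrsim 1$ the reconciliation efficiency; the abstract's point is that this asymptotic form is flawed at finite $n$.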
Abstract:
Ever since Information and Communication Technologies began to assume a major role in society, one of the main objectives has been to ensure that transmitted information reaches the receiver intact. For this reason, new digital communication systems capable of offering secure and reliable transmission must be developed. Over the years their characteristics have steadily improved, bringing important advances to everyday life. In this context, one of the most successful systems is Trellis Coded Modulation (TCM), which offers great advantages in digital communication, especially in narrowband systems. This type of error-protection code, based on convolutional coding, performs modulation and coding in a single function. As a result, a higher data rate is obtained without increasing the bandwidth, at the cost of moving to a larger constellation. This final-year project analyzes the behavior of TCM and the advantages it offers over similar systems. Four simulations are proposed that display graphs relating the bit error rate (BER) to the signal-to-noise ratio (SNR); from these graphs the gain with respect to the theoretical bit error probability can be determined. The systems move from a QPSK modulation to an 8PSK, or from an 8PSK to a 16QAM. Finally, a Matlab graphical environment is developed to provide the user with simple handling and greater interactivity.
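As a baseline for the kind of BER-versus-SNR curves the project produces, here is a Python sketch of uncoded QPSK over AWGN (not the project's Matlab code); a TCM system's coding gain would be read off against curves like this.

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n_bits = 200_000

for ebn0_db in (0, 2, 4, 6, 8):
    bits = rng.integers(0, 2, n_bits)
    # Gray-mapped QPSK, unit symbol energy: one bit on each quadrature axis.
    sym = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = np.sqrt(1 / (4 * ebn0))   # per-dimension noise std; Es = 2*Eb = 1
    rx = sym + sigma * (rng.standard_normal(sym.size)
                        + 1j * rng.standard_normal(sym.size))
    errs = np.count_nonzero(bits[0::2] != (rx.real > 0)) \
         + np.count_nonzero(bits[1::2] != (rx.imag > 0))
    theory = 0.5 * math.erfc(math.sqrt(ebn0))   # theoretical QPSK BER over AWGN
    print(f"Eb/N0 = {ebn0_db} dB: simulated {errs / n_bits:.4f}, theory {theory:.4f}")
```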
Abstract:
In the first part of this work, we show how certain techniques from quantum information theory can be used to obtain very sharp embeddings between noncommutative $L_p$-spaces. We then use these estimates to study the classical capacity, with restricted entanglement assistance, of the quantum erasure channel and the quantum depolarizing channel. In particular, we compute the capacity of the former exactly and show that certain non-multiplicativity results hold for the latter.