869 results for Error-correcting codes (Information theory)
Abstract:
We study noisy computation in randomly generated k-ary Boolean formulas. We establish bounds on the noise level above which the results of computation by random formulas are not reliable. This bound is saturated by formulas constructed from a single majority-like gate. We show that these gates can be used to compute any Boolean function reliably below the noise bound.
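As a hedged illustration of the kind of construction described above (the gate arity, noise model, and parameter values below are assumptions, not the paper's exact setup), the following Python sketch builds a full tree of noisy 3-input majority gates, each of whose outputs is flipped independently with probability eps, and estimates how the output error behaves as the formula depth grows.

```python
import random

def noisy_majority(inputs, eps):
    """k-ary majority gate whose output is flipped with probability eps."""
    out = int(sum(inputs) > len(inputs) / 2)
    return out ^ (random.random() < eps)

def noisy_formula(depth, k, eps, bit):
    """Full k-ary tree of noisy majority gates; every leaf carries the input bit."""
    if depth == 0:
        return bit
    children = [noisy_formula(depth - 1, k, eps, bit) for _ in range(k)]
    return noisy_majority(children, eps)

def error_rate(depth, k, eps, trials=2000):
    """Fraction of trials in which the formula misreports the input bit."""
    errors = sum(noisy_formula(depth, k, eps, 1) != 1 for _ in range(trials))
    return errors / trials

if __name__ == "__main__":
    # Below a critical noise level the error stays bounded as depth grows;
    # above it the output becomes essentially uncorrelated with the input.
    for eps in (0.05, 0.2, 0.4):
        print(eps, [round(error_rate(d, 3, eps), 3) for d in range(1, 6)])
```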
Abstract:
An adequate attitude toward information models and information objects in the context of culture is one of the main problems to be investigated on the threshold of the information society. The goal of this paper is to outline some problems connected with the main styles of perceiving mental and artificially generated information models that are stored in information objects and used in processes of Information Interaction, or simply Inforaction. The influence of culture on inforaction is discussed.
Abstract:
In this paper, the “Information Market” is introduced as a paid exchange of information and the information interaction based on it. In addition, a special kind of Information Market, the Knowledge Market, is outlined. The main focus of the paper is the investigation of the staple commodities of knowledge markets. These are introduced as a kind of information object called “knowledge information objects”. Their main distinctive characteristic is that they contain information models which concern sets of information models and the interconnections between them.
Abstract:
The concept INFOS is very important for understanding information phenomena. Because of this, it is basic for the General Information Theory. A more precise formal definition of this concept is given in the paper.
Abstract:
Determining a multicriteria decision is by its nature a compromise and is fundamentally based on the use of subjective information. The possibility of solving the problem rests on the hypothesis that some utility function exists. The traditional approach of linearizing the utility function has many drawbacks. The concept of a nonlinear scheme of compromises is proposed.
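As a hedged illustration only (the specific functional form below is an assumption and not necessarily the scheme proposed in the paper), one well-known nonlinear trade-off scheme aggregates normalized criteria y_k in [0, 1) with weights w_k as sum_k w_k / (1 - y_k), so that any criterion approaching its worst value dominates the score, unlike a linear weighted sum.

```python
def nonlinear_tradeoff(y, w):
    """Nonlinear compromise score for normalized criteria y_k in [0, 1).

    Each y_k is the criterion's distance from its best value (0 = best,
    1 = worst). The score blows up as any single criterion nears its worst
    value, which is what distinguishes it from a linear weighted sum.
    """
    assert all(0.0 <= yk < 1.0 for yk in y)
    return sum(wk / (1.0 - yk) for yk, wk in zip(y, w))

# Two alternatives with the same linear weighted sum but different balance:
balanced = nonlinear_tradeoff([0.5, 0.5], [0.5, 0.5])   # = 2.0
lopsided = nonlinear_tradeoff([0.1, 0.9], [0.5, 0.5])   # ~ 5.56 (penalized)
print(balanced, lopsided)
```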
Abstract:
Problems of collective choice are considered. The advantages and disadvantages of collective choice and various approaches to solving such problems are discussed. An approach is proposed that allows the apparatus of fuzzy logic to be used for modelling collective-choice problems.
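As a minimal sketch (the aggregation operator and the membership values below are assumptions, not the paper's model), fuzzy collective choice can be illustrated by treating each voter's preference for each alternative as a membership degree in [0, 1], aggregating with a t-norm such as min, and selecting the alternative with the highest aggregate membership.

```python
def fuzzy_collective_choice(preferences):
    """Pick the alternative with the highest aggregated membership.

    preferences: dict mapping alternative -> list of membership degrees in
    [0, 1], one per voter. Aggregation uses the min t-norm, i.e. an
    alternative is only as acceptable as its least satisfied voter.
    """
    aggregate = {alt: min(degrees) for alt, degrees in preferences.items()}
    return max(aggregate, key=aggregate.get), aggregate

choice, scores = fuzzy_collective_choice({
    "A": [0.9, 0.2, 0.8],   # divisive: one voter strongly objects
    "B": [0.6, 0.7, 0.6],   # broadly acceptable
})
print(choice, scores)  # "B" wins under the min t-norm
```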
Abstract:
A definition of the concept "business informatics" based on the General Information Theory is discussed in the paper.
Abstract:
Methods for automatic classification and modelling of the genetic code are proposed. The design principles and the results of using heterogeneous gene-neural networks are presented.
Abstract:
We review the recent progress of information theory in optical communications and describe current experimental results and the associated advances in the individual technologies that increase information capacity. We confirm the widely held belief that reported capacities are approaching the fundamental limits imposed by the signal-to-noise ratio and the distributed nonlinearity of conventional optical fibres, resulting in a reduction in the growth rate of communication capacity. We also discuss techniques that promise to increase and/or approach the information capacity limit.
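The SNR-imposed limit referred to above is the Shannon capacity of a linear additive-noise channel. The sketch below is a simplified illustration, not the review's model: the parameter values are made up, and the nonlinear term is a commonly used heuristic in which fibre nonlinearity is treated as extra noise growing with the cube of the signal power, which produces the capacity peak behind the slowdown discussed in the review.

```python
import math

def linear_capacity(snr):
    """Shannon capacity per unit bandwidth of a linear AWGN channel (bit/s/Hz)."""
    return math.log2(1.0 + snr)

def nonlinear_heuristic_capacity(p_signal, p_ase, eta):
    """Heuristic nonlinear-fibre capacity: nonlinearity modelled as additional
    noise proportional to the cube of signal power (eta is an assumed coefficient)."""
    return math.log2(1.0 + p_signal / (p_ase + eta * p_signal**3))

if __name__ == "__main__":
    p_ase, eta = 1e-3, 1e-2   # illustrative values only
    for p in (0.01, 0.1, 0.3, 1.0, 3.0):
        print(p,
              round(linear_capacity(p / p_ase), 2),
              round(nonlinear_heuristic_capacity(p, p_ase, eta), 2))
```

Running the loop shows the linear-channel capacity growing monotonically with launch power while the heuristic nonlinear capacity peaks and then falls, mirroring the saturation effect the review describes.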
Abstract:
We present an analytical description of a specific nonlinear fibre channel with non-compensated dispersion. Information theory analysis shows that the capacity of such a nonlinear fibre channel does not decay with growing signal-to-noise ratio. © 2012 IEEE.
Abstract:
A closed-form expression for a lower bound on the per soliton capacity of the nonlinear optical fibre channel in the presence of (optical) amplifier spontaneous emission (ASE) noise is derived. This bound is based on a non-Gaussian conditional probability density function for the soliton amplitude jitter induced by the ASE noise and is proven to grow logarithmically as the signal-to-noise ratio increases.
Abstract:
With the advent of peer-to-peer networks, and more importantly sensor networks, the desire to extract useful information from continuous and unbounded streams of data has become more prominent. For example, in tele-health applications, sensor-based data streaming systems are used to continuously and accurately monitor Alzheimer's patients and their surrounding environment. Typically, the requirements of such applications necessitate the cleaning and filtering of continuous, corrupted and incomplete data streams gathered wirelessly under dynamically varying conditions. Yet existing data stream cleaning and filtering schemes are incapable of capturing the dynamics of the environment while simultaneously suppressing the losses and corruption introduced by uncertain environmental, hardware, and network conditions. Consequently, existing data cleaning and filtering paradigms are being challenged. This dissertation develops novel schemes for cleaning data streams received from a wireless sensor network operating under non-linear and dynamically varying conditions. The study establishes a paradigm for validating spatio-temporal associations among data sources to enhance data cleaning. To simplify the validation process, the developed solution maps the requirements of the application onto a geometric space and identifies the potential sensor nodes of interest. Additionally, this dissertation models a wireless sensor network data reduction system by ascertaining that segregating the data adaptation and prediction processes will augment the data reduction rates. The schemes presented in this study are evaluated using simulation and information theory concepts. The results demonstrate that dynamic conditions of the environment are better managed when validation is used for data cleaning. They also show that when a fast-convergent adaptation process is deployed, data reduction rates are significantly improved. Targeted applications of the developed methodology include machine health monitoring, tele-health, environment and habitat monitoring, intermodal transportation, and homeland security.
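As a minimal hedged sketch of the spatio-temporal validation idea (the neighbourhood radius, time window, threshold, and function names below are illustrative assumptions, not the dissertation's algorithm), a reading can be flagged as suspect when it deviates strongly from recent readings of spatially nearby sensors.

```python
import math
from statistics import median

def spatiotemporal_validate(reading, history, radius=10.0, window=30.0, thresh=5.0):
    """Accept a sensor reading only if it agrees with the median of recent
    readings taken by spatially nearby sensors.

    reading: (x, y, t, value); history: list of such tuples from all sensors.
    radius, window and thresh are illustrative tuning parameters.
    """
    x, y, t, value = reading
    neighbours = [v for (nx, ny, nt, v) in history
                  if math.hypot(nx - x, ny - y) <= radius
                  and 0.0 <= t - nt <= window]
    if not neighbours:
        return True  # nothing to validate against: pass the reading through
    return abs(value - median(neighbours)) <= thresh

history = [(0, 0, 1.0, 20.1), (3, 4, 2.0, 20.4), (6, 8, 3.0, 19.8)]
print(spatiotemporal_validate((1, 1, 4.0, 20.2), history))  # True  (consistent)
print(spatiotemporal_validate((1, 1, 4.0, 55.0), history))  # False (suspect)
```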
Abstract:
Information extraction is a frequent and relevant problem in digital signal processing. In the past few years, different methods have been used for the parameterization of signals and the derivation of efficient descriptors. When a signal has cyclostationary statistical properties, the Cyclic Autocorrelation Function (CAF) and the Spectral Cyclic Density (SCD) can be used to extract second-order cyclostationary information. However, second-order cyclostationary information is poor for non-Gaussian signals, as cyclostationary analysis in this case should comprise higher-order statistical information. This paper proposes a new mathematical tool for higher-order cyclostationary analysis based on the correntropy function. Specifically, cyclostationary analysis is revisited from an information-theoretic viewpoint, and the Cyclic Correntropy Function (CCF) and the Cyclic Correntropy Spectral Density (CCSD) are defined. In addition, it is analytically proven that the CCF contains information regarding second- and higher-order cyclostationary moments, being a generalization of the CAF. The performance of these new functions in extracting higher-order cyclostationary characteristics is analyzed in a wireless communication system subject to non-Gaussian noise.
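The sketch below is a hedged numerical illustration of the quantities the paper defines, not its exact estimator (the kernel width, lag, cyclic-frequency values, and toy signal are assumptions): correntropy replaces the product in the autocorrelation with a Gaussian kernel, and the cyclic correntropy at cyclic frequency alpha is taken as the corresponding Fourier coefficient of the time-varying correntropy.

```python
import numpy as np

def cyclic_correntropy(x, lag, alpha, sigma=1.0):
    """Estimate the cyclic correntropy of signal x at a given lag and cyclic
    frequency alpha (in cycles/sample), using a Gaussian kernel of width sigma.

    This mirrors the cyclic autocorrelation estimator, with the product
    x[n] * x[n - lag] replaced by the Gaussian kernel k(x[n], x[n - lag]).
    """
    x = np.asarray(x, dtype=float)
    n = np.arange(lag, len(x))
    k = np.exp(-0.5 * ((x[n] - x[n - lag]) / sigma) ** 2)    # kernel values
    return np.mean(k * np.exp(-2j * np.pi * alpha * n))      # Fourier coefficient

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(4096)
    # BPSK-like cyclostationary toy signal plus impulsive (non-Gaussian) noise
    carrier = np.cos(2 * np.pi * 0.1 * t)
    symbols = np.repeat(rng.choice([-1.0, 1.0], size=len(t) // 8), 8)
    x = symbols * carrier + 0.5 * rng.standard_t(df=2, size=len(t))
    for alpha in (0.0, 0.2, 0.37):
        print(alpha, abs(cyclic_correntropy(x, lag=4, alpha=alpha)))
```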
Abstract:
Erasure control coding has been exploited in communication networks with the aim of improving the end-to-end performance of data delivery across the network. To address concerns over the strengths and constraints of erasure coding schemes in this application, we examine the performance limits of two erasure control coding strategies: forward erasure recovery and adaptive erasure recovery. Our investigation shows that the throughput of a network using an (n, k) forward erasure control code is capped by r = k/n when the packet loss rate p ≤ te/n, and by k(1-p)/(n-te) when p > te/n, where te is the erasure control capability of the code. It also shows that the lower bound of the residual loss rate of such a network is (np-te)/(n-te) for te/n < p ≤ 1. In particular, if the code used is maximum distance separable, the Shannon capacity of the erasure channel, i.e. 1-p, can be achieved and the residual loss rate is lower bounded by (p+r-1)/r for (1-r) < p ≤ 1. To address the requirements of real-time applications, we also investigate the service completion time of the different schemes. It is revealed that the latency of the forward erasure recovery scheme is fractionally higher than that of a scheme without erasure control coding or retransmission mechanisms (using UDP), but much lower than that of the adaptive erasure scheme when the packet loss rate is high. Comparisons between the two erasure control schemes exhibit their advantages as well as disadvantages in delivering end-to-end services. To show the impact of the derived bounds on the end-to-end performance of a TCP/IP network, a case study demonstrates how erasure control coding could be used to maximize the performance of practical systems. © 2010 IEEE.
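Since the abstract states the bounds in closed form, they can be tabulated directly. The sketch below uses illustrative code parameters, and setting te = n - k assumes a maximum-distance-separable code that recovers any te erasures; it evaluates the throughput cap and the residual-loss lower bound against the packet loss rate p.

```python
def throughput_cap(n, k, te, p):
    """Throughput limit of an (n, k) forward-erasure-coded stream:
    r = k/n while p <= te/n, and k(1-p)/(n-te) beyond that."""
    return k / n if p <= te / n else k * (1.0 - p) / (n - te)

def residual_loss_bound(n, te, p):
    """Lower bound on the residual loss rate, nonzero only for te/n < p <= 1."""
    return 0.0 if p <= te / n else (n * p - te) / (n - te)

if __name__ == "__main__":
    n, k = 255, 223
    te = n - k                  # MDS assumption: any te erasures are recoverable
    for p in (0.05, 0.125, 0.2, 0.4):
        print(f"p={p:.3f}  throughput<= {throughput_cap(n, k, te, p):.3f}  "
              f"residual>= {residual_loss_bound(n, te, p):.3f}")
```

For the MDS case the residual-loss bound printed here coincides with the (p+r-1)/r expression quoted in the abstract, since te = n - k gives (np-te)/(n-te) = (p+r-1)/r.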