951 results for One-shot information theory


Relevance:

100.00%

Publisher:

Abstract:

The interaction between globally available information and public library users is a changing one. Global information is readily available, yet providers and users struggle to use time and resources efficiently. As a primary resource for global information, the Denver Public Library (DPL) is approaching this challenge by offering changing technology to a changing user base and by providing a customized approach to immigrant populations. DPL delivers global information to library users through its collections, its programs, and the Internet. Usage of global information through the Internet and the collections cannot be measured directly because of privacy restrictions. Only 12.5% of general user programs focus on global information, and four percent of the budget serves immigrant users, which is greater than the national average.

Relevance:

100.00%

Publisher:

Abstract:

To solve problems in polymer fluid dynamics, one needs the equations of continuity, motion, and energy. The last two contain the stress tensor and the heat-flux vector for the material. There are two ways to formulate the stress tensor: (1) one can write a continuum expression for the stress tensor in terms of kinematic tensors, or (2) one can select a molecular model that represents the polymer molecule and then develop an expression for the stress tensor from kinetic theory. The advantage of the kinetic theory approach is that it yields information about the relation between the molecular structure of the polymers and the rheological properties. In this review, we restrict the discussion primarily to the simplest stress tensor expressions, or "constitutive equations," containing from two to four adjustable parameters, although we do indicate how these formulations may be extended to give more complicated expressions. We also explore how these simplest expressions are recovered as special cases of a more general framework, the Oldroyd 8-constant model. The virtue of studying the simplest models is that we can discover some general notions as to which types of empiricisms or which types of molecular models seem worth investigating further. We also explore equivalences between continuum and molecular approaches. We restrict the discussion to several types of simple flows, such as shearing flows and extensional flows, since these are the flows of greatest importance in industrial operations. Furthermore, if these simple flows cannot be well described by continuum or molecular models, then there is no reason to lavish time and energy on applying them to more complex flow problems.
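
As a concrete illustration of the kind of two-parameter "constitutive equation" discussed above (our example, not necessarily the review's own choice), the upper-convected Maxwell model relates the stress tensor $\boldsymbol{\tau}$ to the rate-of-strain tensor $\dot{\boldsymbol{\gamma}}$ through a relaxation time $\lambda_1$ and a zero-shear-rate viscosity $\eta_0$:

$$
\boldsymbol{\tau} + \lambda_1 \overset{\nabla}{\boldsymbol{\tau}} = \eta_0 \dot{\boldsymbol{\gamma}},
\qquad
\overset{\nabla}{\boldsymbol{\tau}} = \frac{D\boldsymbol{\tau}}{Dt} - \mathbf{L}\cdot\boldsymbol{\tau} - \boldsymbol{\tau}\cdot\mathbf{L}^{\mathsf{T}},
$$

where $\mathbf{L}$ is the velocity gradient tensor, $L_{ij} = \partial v_i/\partial x_j$. In steady shear this model gives a constant viscosity $\eta_0$ and a first normal-stress coefficient $\Psi_1 = 2\eta_0\lambda_1$, already capturing a qualitatively non-Newtonian effect; the Oldroyd 8-constant model contains it as the special case in which the six remaining constants vanish.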

Relevance:

100.00%

Publisher:

Abstract:

Photocopy.

Relevance:

100.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

100.00%

Publisher:

Abstract:

Interviews with some forty Australian university students returning from study in France indicate that problems in accessing crucial information are common experiences, and frequently lead to students reproducing stereotypes of French administrative inefficiency. Our paper argues that the issue is not one of information per se but of cultural differences in the dissemination of information. It analyses the ways in which students interpret their information-gathering difficulties, and the appropriateness of the strategies they devise for overcoming them. It then examines the pedagogical implications for preparing students for study abroad, suggesting means of both equipping students with alternative ways of understanding 'information skills' and intervening in the perpetuation of stereotypes.

Relevance:

100.00%

Publisher:

Abstract:

In this work we investigate the energy gap between the ground state and the first excited state in a model of two single-mode Bose-Einstein condensates coupled via Josephson tunnelling. The energy gap is never zero when the tunnelling interaction is non-zero. The gap exhibits no local minimum below a threshold coupling, which separates a delocalized phase from a self-trapping phase that occurs in the absence of the external potential. Above this threshold one minimum occurs close to the Josephson regime, and a set of minima and maxima appears in the Fock regime. Expressions for the positions of these minima and maxima are obtained. The connection between these extrema and the dynamics of the expectation value of the relative number of particles is analysed in detail. We find that the dynamics of the system changes as the coupling crosses these points.
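
For reference, a standard form of the two-site model described above is the canonical Josephson Hamiltonian (notation ours; the paper's parameter names may differ):

$$
H = \frac{K}{8}\left(N_1 - N_2\right)^2 - \frac{\Delta\mu}{2}\left(N_1 - N_2\right) - \frac{\mathcal{E}_J}{2}\left(a_1^{\dagger} a_2 + a_2^{\dagger} a_1\right),
$$

where $a_i^{\dagger}, a_i$ are the creation and annihilation operators for the two condensate modes, $N_i = a_i^{\dagger} a_i$, $K$ is the atom-atom interaction strength, $\Delta\mu$ is the external potential bias, and $\mathcal{E}_J$ is the tunnelling amplitude. The Josephson and Fock regimes referred to above are conventionally distinguished by the ratio $K/\mathcal{E}_J$: roughly $N^{-2} \ll K/\mathcal{E}_J \ll 1$ for the Josephson regime and $K/\mathcal{E}_J \gg 1$ for the Fock regime, with $N$ the total particle number.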

Relevance:

100.00%

Publisher:

Abstract:

We review recent theoretical progress on the statistical mechanics of error-correcting codes, focusing on low-density parity-check (LDPC) codes in general, and on Gallager and MacKay-Neal codes in particular. By exploiting the relation between LDPC codes and Ising spin systems with multispin interactions, one can carry out a statistical-mechanics-based analysis that determines the practical and theoretical limitations of various code constructions, corresponding to dynamical and thermodynamical transitions respectively, as well as the behaviour of error exponents averaged over the corresponding code ensemble as a function of channel noise. We also contrast the results obtained using methods of statistical mechanics with those derived in the information theory literature, and show how these methods can be generalized to include other channel types and related communication problems.
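
The mapping that underlies this analysis can be stated compactly (a standard presentation; notation ours). Writing each code bit $x_i \in \{0,1\}$ as an Ising spin $S_i = (-1)^{x_i}$, a parity check over the set $\mathcal{L}(\mu)$ of bits becomes a multispin constraint, and decoding becomes a statistical mechanics problem for a Hamiltonian with multispin interactions:

$$
\bigoplus_{i \in \mathcal{L}(\mu)} x_i = 0 \;\Longleftrightarrow\; \prod_{i \in \mathcal{L}(\mu)} S_i = 1,
\qquad
\mathcal{H}(\mathbf{S}) = -\sum_{\mu} J_{\mu} \prod_{i \in \mathcal{L}(\mu)} S_i - \sum_{i} h_i S_i,
$$

where the couplings $J_\mu$ enforce the parity checks and the local fields $h_i$ carry the channel evidence for the received bits.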

Relevance:

100.00%

Publisher:

Abstract:

We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time-sharing methods when algebraic codes are used. A statistical-physics-based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC-based time-sharing codes, while the best performance, obtained when received transmissions are optimally decoded, is bounded by the time-sharing limit.
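
For context, a textbook fact about this channel model (not a result of the paper): for a degraded broadcast channel $X \to Y_1 \to Y_2$, the capacity region is the closure of all rate pairs satisfying

$$
R_1 \le I(X; Y_1 \mid U), \qquad R_2 \le I(U; Y_2),
$$

over auxiliary variables $U$ with $U \to X \to Y_1 \to Y_2$, and it is achieved by superposition coding. Time sharing attains only the straight line between the two single-user rate points, which is the time-sharing limit mentioned above.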

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of methods for measuring the minute magnetic flux variations at the scalp that result from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, its progress hindered by a variety of factors.

Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands commonly referred to in both academic and lay publications. Other efforts have centred on generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which give rise to the observed time series are linear, despite a variety of reasons to suspect that nonlinearity is present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions.

A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of a true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to demonstrate that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings.

Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratio of the recordings. Because the magnetic flux variations resulting from actual cortical processes are extremely minute, the measuring devices used in MEG must be extremely sensitive. The unfortunate side-effect is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings, but that has notable drawbacks: in particular, it is difficult to synchronise high-frequency activity which might be of interest, and such signals are often cancelled out by the averaging process. Further problems are the high cost and low portability of state-of-the-art multichannel machines, with the result that the use of MEG has hitherto been restricted to large institutions able to afford the costs associated with procuring and maintaining them.

In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas, from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
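
As one concrete illustration of the dynamical-systems viewpoint applied to a single unaveraged channel, the sketch below (ours, not code from the thesis) performs time-delay embedding, the usual first step in reconstructing state-space trajectories from a scalar recording before any nonlinearity analysis:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct dim-dimensional state vectors from a scalar series x
    using time-delay embedding with lag tau. Takens' theorem motivates
    this as a way to recover the underlying dynamics from a single
    observable, e.g. one unaveraged MEG channel."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau) choice")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Synthetic single-channel signal standing in for an MEG recording:
# a small number of oscillatory generators plus dynamic noise.
t = np.linspace(0.0, 10.0, 2000)
x = np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.sin(2 * np.pi * 3.1 * t)
x += 0.1 * np.random.randn(t.size)
states = delay_embed(x, dim=5, tau=20)
print(states.shape)  # (1920, 5): reconstructed state-space trajectory
```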

Relevance:

100.00%

Publisher:

Abstract:

This Thesis addresses the problem of automated, false-positive-free detection of epileptic events by the fusion of information extracted from simultaneously recorded electroencephalographic (EEG) and electrocardiographic (ECG) time series. The approach relies on a biomedical case for the coupling of the Brain and Heart systems through the central autonomic network during temporal lobe epileptic events: the neurovegetative manifestations associated with such events include alterations to the cardiac rhythm. From a neurophysiological perspective, epileptic episodes are characterised by a loss of complexity of the state of the brain. The probabilistic description of the arrhythmias observed during temporal lobe epileptic events and the information-theoretic description of the complexity of the state of the brain are integrated in a fusion-of-information framework for temporal lobe epileptic seizure detection. The main contributions of the Thesis include: the introduction of a biomedical case for the coupling of the Brain and Heart systems during temporal lobe epileptic seizures, partially reported in the clinical literature; the investigation of measures for the characterisation of ictal events from the EEG time series, towards their integration in a fusion-of-knowledge framework; the probabilistic description of arrhythmias observed during temporal lobe epileptic events, towards their integration in a fusion-of-knowledge framework; and the investigation of the levels of the fusion-of-information architecture at which to combine the information extracted from the EEG and ECG time series. The method designed in the Thesis for the automated, false-positive-free detection of epileptic events achieved a false-positive rate of zero on the dataset of long-term recordings used in the Thesis.
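
A minimal sketch of the decision-level variant of such a fusion scheme (ours, for illustration; the features, thresholds, and fusion rule are hypothetical stand-ins for the measures investigated in the Thesis):

```python
import numpy as np

def spectral_entropy(eeg):
    """Shannon entropy of the normalised EEG power spectrum, scaled to
    [0, 1]. A drop in this value serves here as a crude stand-in for the
    'loss of complexity' of the brain state during ictal events."""
    eeg = np.asarray(eeg, dtype=float)
    psd = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2
    p = psd / psd.sum()
    p = p[p > 0.0]
    return float(-np.sum(p * np.log2(p)) / np.log2(len(psd)))

def rr_irregularity(rr_intervals):
    """Coefficient of variation of RR intervals: a simple probabilistic
    proxy for the cardiac-rhythm alterations described above."""
    rr = np.asarray(rr_intervals, dtype=float)
    return float(rr.std() / rr.mean())

def fuse_decision(eeg, rr_intervals, h_thr=0.6, cv_thr=0.15):
    """Decision-level fusion: flag a candidate seizure only when BOTH
    modalities agree, which is the intuition behind suppressing false
    positives. Thresholds here are illustrative, not from the Thesis."""
    return spectral_entropy(eeg) < h_thr and rr_irregularity(rr_intervals) > cv_thr

# White noise has near-maximal spectral entropy, so no event is flagged:
rng = np.random.default_rng(0)
print(fuse_decision(rng.standard_normal(1024), [0.8, 0.6, 1.1, 0.7, 1.2]))
```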

Relevance:

100.00%

Publisher:

Abstract:

An adequate attitude to information models and information objects in the context of culture is one of the main problems to be investigated on the threshold of the information society. The goal of this paper is to outline some problems connected with the main styles of perceiving mental and artificially generated information models that are stored in information objects and used in processes of Information Interaction, or simply Inforaction. The influence of culture on inforaction is discussed.

Relevance:

100.00%

Publisher:

Abstract:

The current formal as well as informal definitions of the concept "Information" are presented in the paper.

Relevance:

100.00%

Publisher:

Abstract:

The usual assumption that the processing times of the operations are known in advance is the strictest one in scheduling theory. This assumption essentially restricts the practical applicability of deterministic scheduling theory, since it is not valid for most processes arising in practice. The paper is devoted to a stability analysis of an optimal schedule, which may help to extend the significance of scheduling theory for decision-making in real-world applications. The term stability is generally used for the phase of an algorithm at which an optimal solution of a problem has already been found, and additional calculations are performed in order to study how the optimality of that solution depends on variations in the numerical input data.
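
A minimal illustration of the stability idea, under the simplifying assumption of a single machine minimising total completion time, for which the SPT sequence is optimal (the paper's analysis is far more general; this sketch is ours):

```python
def spt_stability_radius(p):
    """Largest r such that the SPT (shortest-processing-time-first)
    sequence stays optimal for minimising total completion time on one
    machine when every processing time may vary independently by up to r.
    'p' must already be sorted in SPT (non-decreasing) order."""
    assert all(p[i] <= p[i + 1] for i in range(len(p) - 1)), "not in SPT order"
    # Adjacent jobs i and i+1 can profitably swap once
    # p[i] + r > p[i+1] - r, i.e. once r > (p[i+1] - p[i]) / 2,
    # so the schedule remains optimal up to half the smallest adjacent gap.
    return min((p[i + 1] - p[i]) / 2.0 for i in range(len(p) - 1))

print(spt_stability_radius([2.0, 5.0, 6.0, 9.0]))  # 0.5 (jobs with times 5 and 6)
```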