7 results for One-shot information theory

in Digital Commons at Florida International University


Relevance:

100.00%

Publisher:

Abstract:

Most experiments in particle physics are scattering experiments, the analysis of which leads to masses, scattering phases, decay widths, and other properties of one- or multi-particle systems. Until the advent of Lattice Quantum Chromodynamics (LQCD) it was difficult to compare experimental results on low-energy hadron-hadron scattering processes to the predictions of QCD, the current theory of strong interactions. The reason is that at low energies the QCD coupling constant becomes large and the perturbation expansion for scattering amplitudes does not converge. To overcome this, one puts the theory onto a lattice, imposes a momentum cutoff, and computes the path integral numerically. For particle masses, predictions of LQCD agree with experiment, but the area of decay widths is largely unexplored.

LQCD provides ab initio access to unusual hadrons like exotic mesons that are predicted to contain real gluonic structure. To study decays of resonances of this type, the energy spectra of a two-particle decay state in a finite volume of dimension L can be related to the associated scattering phase shift δ(k) at momentum k through exact formulae derived by Lüscher. Because the spectra can be computed using numerical Monte Carlo techniques, the scattering phases can thus be determined using Lüscher's formulae, and the corresponding decay widths can be found by fitting Breit-Wigner functions.

Results of such a decay width calculation for an exotic hybrid (h) meson (J^PC = 1^-+) are presented for the decay channel h → πa1. This calculation employed Lüscher's formulae and an approximation of LQCD called the quenched approximation. Energy spectra for the h and πa1 systems were extracted using eigenvalues of a correlation matrix, and the corresponding scattering phase shifts were determined for a discrete set of πa1 momenta. Although the number of phase-shift data points was sparse, fits to a Breit-Wigner model were made, resulting in a decay width of about 60 MeV.
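As a concrete illustration of the final fitting step described in this abstract, the sketch below, which is not taken from the dissertation, fits a simple Breit-Wigner phase-shift form to a handful of phase-shift points with SciPy; the energies, phase values, and starting guesses are invented placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical pi-a1 energies E (GeV) and extracted phase shifts delta (radians).
E = np.array([1.90, 1.95, 2.00, 2.05, 2.10])
delta = np.array([0.40, 0.80, 1.55, 2.30, 2.70])

def breit_wigner_phase(E, E_R, Gamma):
    """Breit-Wigner phase shift: tan(delta) = (Gamma / 2) / (E_R - E).

    Using arctan2 keeps delta rising smoothly through pi/2 at E = E_R.
    """
    return np.arctan2(Gamma / 2.0, E_R - E)

(E_R, Gamma), _ = curve_fit(breit_wigner_phase, E, delta, p0=[2.0, 0.06])
print(f"resonance energy E_R = {E_R:.3f} GeV, width Gamma = {Gamma * 1000:.0f} MeV")
```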

Relevance:

100.00%

Publisher:

Abstract:

With the advent of peer-to-peer networks, and more importantly sensor networks, the desire to extract useful information from continuous and unbounded streams of data has become more prominent. For example, in tele-health applications, sensor-based data streaming systems are used to continuously and accurately monitor Alzheimer's patients and their surrounding environment. Typically, the requirements of such applications necessitate the cleaning and filtering of continuous, corrupted, and incomplete data streams gathered wirelessly in dynamically varying conditions. Yet existing data stream cleaning and filtering schemes are incapable of capturing the dynamics of the environment while simultaneously suppressing the losses and corruption introduced by uncertain environmental, hardware, and network conditions. Consequently, existing data cleaning and filtering paradigms are being challenged. This dissertation develops novel schemes for cleaning data streams received from a wireless sensor network operating under non-linear and dynamically varying conditions. The study establishes a paradigm for validating spatio-temporal associations among data sources to enhance data cleaning. To simplify the complexity of the validation process, the developed solution maps the requirements of the application onto a geometric space and identifies the potential sensor nodes of interest. Additionally, this dissertation models a wireless sensor network data reduction system by ascertaining that segregating the data adaptation and prediction processes will augment the data reduction rates. The schemes presented in this study are evaluated using simulation and information theory concepts. The results demonstrate that dynamic conditions of the environment are better managed when validation is used for data cleaning. They also show that when a fast-converging adaptation process is deployed, data reduction rates are significantly improved. Targeted applications of the developed methodology include machine health monitoring, tele-health, environment and habitat monitoring, intermodal transportation, and homeland security.
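The general prediction-based data-reduction idea, in which a node transmits a reading only when a prediction shared with the sink is no longer valid, can be illustrated with a minimal sketch; this is not the dissertation's scheme, and the last-value predictor, threshold, and readings are assumptions for illustration only.

```python
# Minimal prediction-based reduction sketch: the sink assumes the last value it
# received; the sensor transmits only when the new reading drifts beyond eps.
def reduce_stream(readings, eps=0.3):
    last_sent = readings[0]
    transmitted = [last_sent]
    for x in readings[1:]:
        if abs(x - last_sent) > eps:   # shared prediction no longer valid
            transmitted.append(x)
            last_sent = x
    return transmitted

readings = [20.0, 20.1, 20.2, 20.1, 22.5, 22.6, 22.4, 22.5]
sent = reduce_stream(readings)
print(f"transmitted {len(sent)} of {len(readings)} readings")  # 2 of 8 here
```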

Relevance:

40.00%

Publisher:

Abstract:

The ultimate intent of this dissertation was to broaden and strengthen our understanding of IT implementation by emphasizing research efforts on the dynamic nature of the implementation process. More specifically, efforts were directed toward opening the "black box" and providing the story that explains how and why contextual conditions and implementation tactics interact to produce project outcomes. In pursuit of this objective, the dissertation was aimed at theory building and adopted a case study methodology combining qualitative and quantitative evidence. Specifically, it examined the implementation process, use, and consequences of three clinical information systems at Jackson Memorial Hospital, a large tertiary care teaching hospital.

As a preliminary step toward the development of a more realistic model of system implementation, the study proposes a new set of research propositions reflecting the dynamic nature of the implementation process.

Findings clearly reveal that successful implementation projects are likely to be those where key actors envision end goals, anticipate challenges ahead, and recognize and seize opportunities when they arise. It was also found that IT implementation is characterized by the systems-theory notion of equifinality, that is, there are likely several equally effective ways to achieve a given end goal. The selection of a particular implementation strategy appears to be a rational process in which actions and decisions are largely influenced by the degree to which key actors recognize the mediating role of each tactic and are motivated to act. The nature of the implementation process is also characterized by the concept of "duality of structure," that is, context and actions mutually influence each other. Another key finding suggests that there is no underlying program that regulates the process of change and moves it from one given point toward a subsequent and already prefigured end. For this reason, the implementation process cannot be thought of as a series of activities performed in a sequential manner, as conceived in stage models. Finally, it was found that IT implementation is punctuated by a certain indeterminacy. Results suggest that only when substantial effort is focused on what to look for and think about is it less likely that unfavorable and undesirable consequences will occur.

Relevance:

40.00%

Publisher:

Abstract:

This dissertation examines one category of international capital flows, private portfolio investments ("private" refers to the source of capital). There is an overall lack of a coherent and consistent definition of foreign portfolio investment, and we clarify these definitional issues.

Two main questions that pertain to private foreign portfolio investment (FPI) are explored. The first is the phenomenon of home preference, often referred to as home bias. Related to this are the observed cross-investment flows between countries that seem to contradict the textbook rendition of private FPI. The theories purporting to resolve the home preference puzzle (and the cross-investment puzzle) are summarized and evaluated. Most of this literature considers investors from major developed countries; I also consider whether investors in less developed countries exhibit home preference.

The dissertation shows that home preference is indeed pervasive and profound across countries, in both developed and emerging markets. For the U.S., I examine home bias in both equity and bond holdings. I find that home bias is greater when equity and bond holdings are considered together than when equity holdings are considered alone.

In this dissertation a model is developed to explain home bias. This model is original and fills a gap in the literature, as there have been no satisfactory models that handle both home preference and cross-border holdings at the same time in the context of information asymmetries. This model reflects what we see in the data and permits us to reach certain results through comparative statics. The model suggests, counter-intuitively, that as the rate of return in a country relative to the world rate of return increases, home preference decreases. In the context of our relatively simple model we ascribe this result to the higher variance of the now higher return for home assets. We also find, this time as intended, that as risk aversion increases, investors diversify further, so that home preference decreases.

The second question the dissertation deals with is the volatility of private foreign portfolio investment. Countries that are recipients of these flows have been wary of them because of their perceived volatility, often in contrast with the perceived absence of volatility in foreign direct investment flows. I analyze the validity of these concerns using first net flow data and then gross flow data. The results show that FPI is not, in relative terms, more volatile than other flows in our sample of eight countries (half developed countries and half emerging markets).

The implication, therefore, is that restricting FPI flows may be harmful in the sense that private capital may not be allocated efficiently worldwide, to the detriment of capital-poor economies. More to the point, any such restrictions would in fact be misguided.
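A standard way to quantify home preference, which may differ from the dissertation's exact measure, compares the weight of foreign assets in an investor's portfolio with their weight in the world market portfolio; the figures below are invented for illustration.

```python
# Home-bias index: 1 - (actual foreign share) / (foreign share of a fully
# diversified world-market portfolio). 0 means no bias, 1 means total bias.
def home_bias(foreign_share_held, foreign_share_of_world_market):
    return 1.0 - foreign_share_held / foreign_share_of_world_market

# Hypothetical figures: the home market is 40% of world capitalization, so a
# fully diversified investor would hold 60% foreign assets; actual share is 12%.
print(f"home bias = {home_bias(0.12, 0.60):.2f}")  # prints 0.80
```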

Relevance:

40.00%

Publisher:

Abstract:

Extreme stock price movements are of great concern to both investors and the entire economy. For investors, a single negative return, or a combination of several smaller returns, can possibly wipe out so much capital that the firm or portfolio becomes illiquid or insolvent. If enough investors experience this loss, it could shock the entire economy. An example of such a case is the stock market crash of 1987. Furthermore, there has been a great deal of recent interest regarding the increasing volatility of stock prices.

This study presents an analysis of extreme stock price movements. The data utilized were the daily returns for the Standard and Poor's 500 index from January 3, 1978 to May 31, 2001. Research questions were analyzed using the statistical models provided by extreme value theory. One of the difficulties in examining stock price data is that there is no consensus regarding the correct shape of the distribution function generating the data. An advantage of extreme value theory is that no detailed knowledge of this distribution function is required to apply the asymptotic theory; we focus on the tail of the distribution.

Extreme value theory allows us to estimate a tail index, which we use to derive bounds on returns that are exceeded only with very low probability. Such information is useful in evaluating the volatility of stock prices. There are three possible limit laws for the maximum: Gumbel (thin-tailed), Fréchet (heavy-tailed), or Weibull (bounded tail). Results indicated that extreme returns during the time period studied follow a Fréchet distribution. Thus, this study finds that extreme value analysis is a valuable tool for examining stock price movements and can be more efficient than the usual variance in measuring risk.
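As a sketch of the tail-index estimation step, the code below applies the Hill estimator, a standard tail-index estimator that is not necessarily the dissertation's exact method, to simulated heavy-tailed returns standing in for the S&P 500 series.

```python
import numpy as np

def hill_tail_index(losses, k):
    """Hill estimator of the tail index alpha from the k largest losses.

    A finite positive alpha indicates a heavy (Frechet-type) tail; larger k
    uses more order statistics at the cost of potentially more bias.
    """
    x = np.sort(losses)[::-1][: k + 1]             # k+1 largest observations
    gamma = np.mean(np.log(x[:k]) - np.log(x[k]))  # mean log-excess over x[k]
    return 1.0 / gamma

# Simulated stand-in for daily returns: Student-t with 4 degrees of freedom,
# whose true tail index is 4 (heavy-tailed, Frechet domain of attraction).
rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=4, size=5000)
losses = -returns[returns < 0]                     # left-tail loss magnitudes
print(f"estimated tail index alpha = {hill_tail_index(losses, k=200):.2f}")
```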

Relevance:

40.00%

Publisher:

Abstract:

Secrecy is fundamental to computer security, but real systems often cannot avoid leaking some secret information. For this reason, the past decade has seen growing interest in quantitative theories of information flow that allow us to quantify the information being leaked. Within these theories, the system is modeled as an information-theoretic channel that specifies the probability of each output, given each input. Given a prior distribution on those inputs, entropy-like measures quantify the amount of information leakage caused by the channel.

This thesis presents new results in the theory of min-entropy leakage. First, we study the perspective of secrecy as a resource that is gradually consumed by a system. We explore this intuition through various models of min-entropy consumption. Next, we consider several composition operators that allow smaller systems to be combined into larger systems, and explore the extent to which the leakage of a combined system is constrained by the leakage of its constituents. Most significantly, we prove upper bounds on the leakage of a cascade of two channels, where the output of the first channel is used as input to the second. In addition, we show how to decompose a channel into a cascade of channels.

We also establish fundamental new results about the recently proposed g-leakage family of measures. These results further highlight the significance of channel cascading. We prove that whenever channel A is composition refined by channel B, that is, whenever A is the cascade of B and R for some channel R, the leakage of A never exceeds that of B, regardless of the prior distribution or leakage measure (Shannon leakage, guessing entropy leakage, min-entropy leakage, or g-leakage). Moreover, we show that composition refinement is a partial order if we quotient away channel structure that is redundant with respect to leakage alone. These results are strengthened by the proof that composition refinement is the only way for one channel to never leak more than another with respect to g-leakage. Therefore, composition refinement robustly answers the question of when a channel is always at least as secure as another from a leakage point of view.
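The min-entropy leakage of a channel, and the fact that cascading (matrix multiplication of channel matrices) cannot increase it, can be checked numerically; the sketch below uses a uniform prior and two invented channel matrices, and is only an illustration of the definitions, not code from the thesis.

```python
import numpy as np

def min_entropy_leakage(C, prior):
    """Min-entropy leakage of channel matrix C (rows = inputs, cols = outputs).

    Leakage = log2(posterior vulnerability / prior vulnerability), where the
    posterior vulnerability sums, over outputs, the best joint guess p(x)C[x,y].
    """
    prior = np.asarray(prior)
    posterior_vuln = np.sum(np.max(prior[:, None] * C, axis=0))
    return np.log2(posterior_vuln / np.max(prior))

# Two hypothetical channels: B maps 3 secrets to 3 outputs, R maps those
# outputs to 2 observables; the cascade A feeds B's output into R.
B = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
R = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])
A = B @ R
prior = np.ones(3) / 3                 # uniform prior on the secret input

print(f"leakage of B       = {min_entropy_leakage(B, prior):.3f} bits")
print(f"leakage of cascade = {min_entropy_leakage(A, prior):.3f} bits")
# The cascade A = B @ R never leaks more than B alone, matching the thesis's bounds.
```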