979 results for quantum information theory


Relevance: 80.00%

Abstract:

The nonlinear interaction between light and atoms is an extensive field of study with a broad range of applications in quantum information science and condensed matter physics. Nonlinear optical phenomena occurring in cold atoms are particularly interesting because such slowly moving atoms can spatially organize into density gratings, which allows for studies involving optical interactions with structured materials. In this thesis, I describe a novel nonlinear optical effect that arises when cold atoms spatially bunch in an optical lattice. I show that employing this spatial atomic bunching provides access to a unique physical regime with reduced thresholds for nonlinear optical processes and enhanced material properties. Using this method, I observe the nonlinear optical phenomenon of transverse optical pattern formation at record-low powers. These transverse optical patterns are generated by a wave-mixing process that is mediated by the cold atomic vapor. The optical patterns are highly multimode and induce rich non-equilibrium atomic dynamics. In particular, I find that there exists a synergistic interplay between the generated optical patterns and the atoms, wherein the scattered fields help the atoms to self-organize into new, multimode structures that are not externally imposed on the atomic sample. These self-organized structures in turn enhance the power in the optical patterns. I provide the first detailed investigation of the motional dynamics of atoms that have self-organized in a multimode geometry. I also show that the transverse optical patterns induce Sisyphus cooling in all three spatial dimensions, which is the first observation of spontaneous three-dimensional cooling. My experiment represents a unique means by which to study nonlinear optics and non-equilibrium dynamics at ultra-low powers.

Relevance: 80.00%

Abstract:

Brain-computer interfaces (BCIs) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electrophysiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electrophysiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.

This research focuses on P300-based BCIs, which rely predominantly on event-related potentials (ERPs) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially for individuals with ALS, who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs are relatively slow compared with other commercial assistive communication devices, and this limits BCI adoption by their target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.

In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
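To make the adaptive estimation idea concrete, the following minimal sketch (illustrative only, not the algorithms developed in this work; the alphabet, the Gaussian score model, and the stopping threshold are assumptions) shows how per-flash EEG classifier scores can be fused with a language-model prior, and how data collection can stop dynamically once the posterior is confident:

```python
import numpy as np

ALPHABET = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ_")

def gauss(y, mu, sigma=1.0):
    # Unnormalized Gaussian likelihood of a classifier score.
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2)

def spell_character(lm_prior, flash_sets, scores,
                    mu_target=1.0, mu_nontarget=0.0, threshold=0.95):
    """Bayesian target-character estimation for a P300 speller (sketch).

    lm_prior   -- character -> prior probability from a language model
    flash_sets -- set of characters highlighted on each stimulus flash
    scores     -- EEG classifier output per flash (higher = more P300-like)
    """
    post = np.array([lm_prior.get(c, 1e-6) for c in ALPHABET])
    post /= post.sum()
    for flashed, y in zip(flash_sets, scores):
        like = np.array([gauss(y, mu_target if c in flashed else mu_nontarget)
                         for c in ALPHABET])
        post *= like                      # Bayes update after each flash
        post /= post.sum()
        if post.max() > threshold:        # dynamic stopping: enough evidence
            break
    return ALPHABET[int(post.argmax())], float(post.max())
```

In such a scheme, lm_prior would come from a language model conditioned on the text spelled so far, while flash_sets and scores come from the stimulus presentation paradigm and the trained EEG classifier, respectively.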

Relevance: 80.00%

Abstract:

The conventional mechanism of fermion mass generation in the Standard Model involves Spontaneous Symmetry Breaking (SSB). In this thesis, we study an alternative mechanism for the generation of fermion masses that does not require SSB, in the context of lattice field theories. Being inherently strongly coupled, this mechanism requires a non-perturbative treatment, such as the lattice approach.

In order to explore this mechanism, we study a simple lattice model with a four-fermion interaction that has massless fermions at weak couplings and massive fermions at strong couplings, but without any spontaneous symmetry breaking. Prior work on this type of mass generation mechanism in 4D was done long ago using either mean-field theory or Monte Carlo calculations on small lattices. In this thesis, we have developed a new computational approach that enables us to perform large-scale quantum Monte Carlo calculations to study the phase structure of this theory. In 4D, our results confirm prior results, but differ in some quantitative details of the phase diagram. In contrast, in 3D, we discover a new second-order critical point using calculations on lattices up to size $60^3$. Such large-scale calculations are unprecedented. The presence of the critical point implies the existence of an alternative mechanism of fermion mass generation without any SSB, which could be of interest in continuum quantum field theory.
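For orientation, a lattice four-fermion model of the kind described here can be written schematically as (the precise action is an assumption, since the abstract does not specify it):

$$S \;=\; \frac{1}{2}\sum_{x,y}\sum_{a=1}^{4} \chi^{a}_{x}\, D_{xy}\, \chi^{a}_{y} \;-\; U \sum_{x} \chi^{1}_{x}\chi^{2}_{x}\chi^{3}_{x}\chi^{4}_{x},$$

where the $\chi^{a}_{x}$ are single-component (reduced staggered) fermion fields, $D$ is the free lattice Dirac operator, and $U$ is the four-fermion coupling: fermions are massless at small $U$ and massive at large $U$, with the 3D critical point separating the two phases.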

Relevance: 80.00%

Abstract:

The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in US Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. In Chapters 1 and 2 we present general background information relevant to the development of probabilistic models applied to predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.

The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique. Model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those of previously published models. Results indicated that the best performance was obtained with a novel hazard function definition that included both ambient pressure scaling and individually fitted compartment exponent scaling terms.
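For context, probabilistic decompression models of this family typically define the probability of decompression sickness through an integrated hazard; the form below is a standard one from this literature (not necessarily the exact definition used in the dissertation):

$$P(\mathrm{DCS}) = 1 - \exp\!\left(-\int_{0}^{T} h(t)\,dt\right), \qquad h(t) = g \cdot \max\!\left(0,\; \frac{p_{\mathrm{tis}}(t) - p_{\mathrm{amb}}(t)}{p_{\mathrm{amb}}(t)}\right),$$

where $p_{\mathrm{tis}}$ is the modeled compartment gas tension, $p_{\mathrm{amb}}$ the ambient pressure, and $g$ a fitted gain; the hazard-function variations explored here modify how $h(t)$ scales with these quantities.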

We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine whether predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that the inclusion of delays might improve correlation between the model predictions and observed data. Model selection techniques identified two models as having the best overall performance, but comparison to the best-performing model without delay, and model selection using our best-identified no-delay pharmacokinetic model, both indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.

Our final investigation explored parameter bounding techniques to identify parameter regions for which statistical model failure will not occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identify regions where model failure will not occur and identify the boundaries of the region using a root bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization for future investigations.

Relevance: 80.00%

Abstract:

Optical nanofibres (ONFs) are very thin optical waveguides with sub-wavelength diameters. ONFs have very strong evanescent fields, and the guided light is tightly confined in the transverse direction. These fibres can be used to achieve strong light-matter interactions. Atoms around the waist of an ONF can be probed by collecting the atomic fluorescence coupled into the fibre or by measuring the transmission (or the polarisation) of a probe beam sent through it. This thesis presents experiments using ONFs for probing and manipulating laser-cooled 87Rb atoms. As an initial experiment, a single-mode ONF was integrated into a magneto-optical trap (MOT) and used for measuring the characteristics of the MOT, such as the loading time and the average temperature of the atom cloud. The effect of a near-resonant probe beam on the local temperature of the cold atoms has been studied. Next, the ONF was used for manipulating the atoms in the evanescent field region in order to generate nonlinear optical effects. Four-wave mixing, the ac Stark effect (Autler-Townes splitting) and electromagnetically induced transparency have been observed at unprecedented ultralow power levels. In another experiment, a few-mode ONF, supporting only the fundamental mode and the first higher-order mode group, has been used for studying cold atoms. Compared to the fundamental mode, a higher pumping rate of atomic fluorescence into the higher-order fibre-guided modes, and stronger interactions between the higher-order-mode evanescent light and the surrounding atoms, have been identified. The results obtained in this thesis are relevant both for a fundamental understanding of light-atom interactions when atoms are near a dielectric surface and for the development of fibre-based quantum information technologies. Atoms coupled to ONFs could be used for preparing intrinsically fibre-coupled quantum nodes for quantum computing, and the studies presented here are significant for a detailed understanding of such a system.

Relevance: 80.00%

Abstract:

Years have passed since the introduction of the Dynamic Network Optimization (DNO) concept, yet DNO development is still at an early stage, largely due to the lack of a breakthrough in reducing the lengthy optimization runtime. Our previous work, a distributed parallel solution, achieved a significant speed gain. To cater for the increased optimization complexity brought by the uptake of smartphones and tablets, however, this paper examines the potential areas for further improvement and presents a novel asynchronous distributed parallel design that minimizes inter-process communication. The new approach is implemented and applied to real-life projects, with results demonstrating a speed-up of 7.5 times on a 16-core distributed system, compared with the 6.1 times of our previous solution, and with no degradation in the optimization outcome. This is a solid step towards the realization of DNO.
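As a generic illustration of the asynchronous pattern described in the abstract (this is not the paper's implementation; the task names and the scoring function are placeholders), workers can pull work items and post results through queues without any synchronization barrier between rounds:

```python
import multiprocessing as mp

def worker(tasks: mp.Queue, results: mp.Queue) -> None:
    # Each worker loops independently: no barrier with other workers.
    while True:
        cell = tasks.get()
        if cell is None:                   # poison pill: shut down
            break
        score = sum(ord(c) for c in cell)  # stand-in for a local optimization step
        results.put((cell, score))

if __name__ == "__main__":
    tasks, results = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(tasks, results)) for _ in range(4)]
    for p in procs:
        p.start()
    work = ["cell-A", "cell-B", "cell-C", "cell-D", "cell-E"]
    for cell in work:
        tasks.put(cell)
    out = [results.get() for _ in range(len(work))]  # consumed as they arrive
    for _ in procs:
        tasks.put(None)
    for p in procs:
        p.join()
    print(out)
```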

Relevance: 80.00%

Abstract:

We present the results of applying an integrative information and knowledge audit methodology at a research centre of the Ministry of Science, Technology and Environment in the province of Holguín, Cuba. The methodology comprises seven stages and takes a hybrid approach aimed at reviewing the information and knowledge management strategy and policy; identifying, inventorying and mapping information and knowledge resources and their flows; and assessing the processes associated with their management. The centre's senior management, specialists and researchers attested to the effectiveness of the applied methodology, whose results led to readjusting the strategic projection of information and knowledge management, redesigning the information flows of key processes, compiling a directory of experts by area, and planning future learning and professional development.

Relevance: 80.00%

Abstract:

This paper investigates the achievable sum-rate of uplink massive multiple-input multiple-output (MIMO) systems under a practical channel impairment, namely, aged channel state information (CSI). Taking into account both maximum ratio combining (MRC) and zero-forcing (ZF) receivers at the base station, we present tight closed-form lower bounds on the sum-rate for both receivers, which provide efficient means to evaluate the sum-rate of the system. More importantly, we characterize the impact of channel aging on the power scaling law. Specifically, we show that the transmit power of each user can be scaled down by 1/√M, where M is the number of base station antennas, which indicates that aged CSI does not affect the power scaling law; instead, it causes only a reduction in the sum-rate by reducing the effective signal-to-interference-and-noise ratio (SINR).
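Schematically, the power scaling result has the following form (symbol conventions assumed here, and exact constants depend on the precise signal model: per-user power $p_u$, large-scale fading $\beta_k$, pilot length $\tau$, and temporal correlation $\rho \in [0,1]$ of the aged channel):

$$p_u = \frac{E_u}{\sqrt{M}} \;\Longrightarrow\; \mathrm{SINR}_k \xrightarrow{\;M \to \infty\;} \tau\, \rho^{2} E_u^{2} \beta_k^{2},$$

so channel aging enters only through the multiplicative factor $\rho^{2}$: the exponent of the scaling law is unchanged, while the limiting SINR, and hence the sum-rate, is reduced.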

Relevance: 80.00%

Abstract:

The exponential growth in demand for communication bandwidth points to a coming saturation of telecommunication network capacity, expected to materialize within the next decade. Indeed, information theory predicts that nonlinear effects in single-mode fibres limit their transmission capacity, and little further gain can be expected from the traditional multiplexing techniques developed and used so far in high-capacity systems. The spatial dimension of the optical channel has been proposed as a new degree of freedom that can be used to increase the number of transmission channels and thereby resolve this threatened "capacity crunch". Thus, inspired by microwave techniques, the emerging technique of space-division multiplexing (SDM) is a promising technology for building next-generation optical networks. Implementing SDM in optical fibre links requires revisiting all integrated devices, equipment and subsystems. Among these elements, the SDM optical amplifier is critical, particularly for long-haul transmission systems. Because of the excellent characteristics of the erbium-doped fibre amplifier (EDFA) used in today's state-of-the-art systems, the EDFA is once again a prime candidate for implementing practical SDM amplifiers. However, since SDM introduces spatial variation of the field in the transverse plane of the fibre, spatially integrated erbium-doped fibre amplifiers (SIEDFA) require careful design. In this thesis, we first review recent progress in SDM, in particular SDM optical amplifiers. We then identify and discuss the main SIEDFA issues that call for scientific investigation. Following this, EDFA theory is briefly presented and a numerical model that can be used to simulate SIEDFA is proposed. Using a home-built simulation tool, we propose a novel annular doping profile design for erbium-doped few-mode fibres (ED-FMF) and numerically evaluate the performance of single-stage and dual-stage amplifiers based on such annularly doped fibres for few-mode communications. We then design annular-cladding erbium-doped multi-core fibres (ED-MCF) and numerically evaluate the overlap of the pump with the multiple cores of these amplifiers. Beyond the design work, we fabricate and characterize an erbium-doped few-mode multi-core fibre and report the first demonstration of spatially integrated optical fibre amplifiers incorporating such doped fibres. Finally, we present the conclusions and perspectives of this research. SIEDFA research and development will offer enormous benefits not only for future SDM transmission systems, but also for single-mode transmission systems over standard single-core fibres, since they allow several amplifiers to be replaced by a single integrated amplifier.
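Numerical models for such amplifiers are typically built on the standard two-level EDFA rate and propagation equations; the form below is the textbook one with symbols assumed (the spatially integrated case makes the overlap factors mode- and core-dependent):

$$\frac{N_2(z)}{N_t} = \frac{\sum_k P_k(z)\,\Gamma_k \sigma^{a}_k / (h\nu_k A)}{1/\tau + \sum_k P_k(z)\,\Gamma_k (\sigma^{a}_k + \sigma^{e}_k)/(h\nu_k A)}, \qquad \frac{dP_k}{dz} = u_k\,\Gamma_k \left[(\sigma^{a}_k + \sigma^{e}_k) N_2 - \sigma^{a}_k N_t\right] P_k,$$

where $\Gamma_k$ is the overlap between optical mode $k$ and the erbium-doped region (the quantity shaped by the annular doping profiles above), $\sigma^{a}_k$ and $\sigma^{e}_k$ are the absorption and emission cross sections, $\tau$ the metastable lifetime, $A$ the doped area, and $u_k = \pm 1$ the propagation direction.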

Relevance: 80.00%

Abstract:

Internet users consume online targeted advertising based on information collected about them and voluntarily share personal information in social networks. Sensor information and data from smartphones are collected and used by applications, sometimes in unclear ways. As happens today with smartphones, in the near future sensors will be shipped in all types of connected devices, enabling ubiquitous information gathering from the physical environment and enabling the vision of Ambient Intelligence. The value of gathered data, if not obvious, can be harnessed through data mining techniques and put to use by enabling personalized and tailored services as well as business intelligence practices, fueling the digital economy. However, the ever-expanding information gathering and use undermines the privacy conceptions of the past. Natural social practices of managing privacy in daily relations are overridden by socially awkward communication tools, service providers struggle with security issues resulting in harmful data leaks, governments use mass surveillance techniques, the incentives of the digital economy threaten consumer privacy, and the advancement of consumer-grade data-gathering technology enables new inter-personal abuses. A wide range of fields attempts to address technology-related privacy problems, but they vary immensely in terms of assumptions, scope and approach. Privacy of future use cases is typically handled vertically, instead of building upon previous work that can be re-contextualized, while current privacy problems are typically addressed per type in a more focused way. Because significant effort was required to make sense of the relations and structure of privacy-related work, this thesis attempts to transmit a structured view of it. It is multi-disciplinary, ranging from cryptography to economics and including distributed systems and information theory, and addresses privacy issues of different natures. As existing work is framed and discussed, the contributions to the state-of-the-art made in the scope of this thesis are presented. The contributions add to five distinct areas: 1) identity in distributed systems; 2) future context-aware services; 3) event-based context management; 4) low-latency information flow control; 5) high-dimensional dataset anonymity. Finally, having laid out such a landscape of privacy-preserving work, the current and future privacy challenges are discussed, considering not only technical but also socio-economic perspectives.

Relevance: 80.00%

Abstract:

In this paper we extend recent results of Fiorini et al. on the extension complexity of the cut polytope and related polyhedra. We first describe a lifting argument to show exponential extension complexity for a number of NP-complete problems, including subset-sum and three-dimensional matching. We then obtain a relationship between the extension complexity of the cut polytope of a graph and that of its graph minors. Using this, we are able to show exponential extension complexity for the cut polytope of a large number of graphs, including those used in quantum information and suspensions of cubic planar graphs.
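For reference, the central quantity here is defined as follows (standard definitions; the slack-matrix characterization is Yannakakis' theorem). The extension complexity $\mathrm{xc}(P)$ of a polytope $P$ is the minimum number of facets of any polytope $Q$ that linearly projects onto $P$. By Yannakakis' theorem,

$$\mathrm{xc}(P) = \mathrm{rank}_+(S), \qquad S_{ij} = b_i - a_i^{\top} v_j,$$

the nonnegative rank of the slack matrix taken over the facet inequalities $a_i^{\top} x \le b_i$ and vertices $v_j$ of $P$; exponential lower bounds on $\mathrm{rank}_+(S)$ therefore rule out compact linear programming formulations.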

Relevance: 80.00%

Abstract:

In economics of information theory, credence products are those whose quality is difficult or impossible for consumers to assess, even after they have consumed the product (Darby & Karni, 1973). This dissertation is focused on the content, consumer perception, and power of online reviews for credence services. Economics of information theory has long assumed, without empirical confirmation, that consumers will discount the credibility of claims about credence quality attributes. The same theories predict that because credence services are by definition obscure to the consumer, reviews of credence services are incapable of signaling quality. Our research aims to question these assumptions. In the first essay we examine how the content and structure of online reviews of credence services systematically differ from the content and structure of reviews of experience services and how consumers judge these differences. We have found that online reviews of credence services have either less important or less credible content than reviews of experience services and that consumers do discount the credibility of credence claims. However, while consumers rationally discount the credibility of simple credence claims in a review, more complex argument structure and the inclusion of evidence attenuate this effect. In the second essay we ask, “Can online reviews predict the worst doctors?” We examine the power of online reviews to detect low quality, as measured by state medical board sanctions. We find that online reviews are somewhat predictive of a doctor’s suitability to practice medicine; however, not all the data are useful. Numerical or star ratings provide the strongest quality signal; user-submitted text provides some signal but is subsumed almost completely by ratings. Of the ratings variables in our dataset, we find that punctuality, rather than knowledge, is the strongest predictor of medical board sanctions. These results challenge the definition of credence products, which is a long-standing construct in economics of information theory. Our results also have implications for online review users, review platforms, and for the use of predictive modeling in the context of information systems research.
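As a purely illustrative sketch of the kind of predictive modeling described in the second essay (hypothetical feature names, synthetic data, and a plain logistic regression; none of this is the dissertation's actual dataset or pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Assumed review features: star sub-ratings on a 1-5 scale plus a text score.
punctuality = rng.uniform(1, 5, n)
knowledge = rng.uniform(1, 5, n)
text_sentiment = rng.uniform(-1, 1, n)
# Synthetic labels that make sanctions more likely when punctuality is poor,
# mimicking the reported finding; real labels would come from board records.
p_sanction = 1 / (1 + np.exp(2.0 * (punctuality - 2.0)))
sanctioned = rng.random(n) < p_sanction

X = np.column_stack([punctuality, knowledge, text_sentiment])
model = LogisticRegression().fit(X, sanctioned)
for name, coef in zip(["punctuality", "knowledge", "text_sentiment"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.2f}")  # punctuality should carry the largest weight
```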

Relevance: 80.00%

Abstract:

We show that the multiscale entanglement renormalization ansatz (MERA) can be reformulated in terms of a causality constraint on discrete quantum dynamics. This causal structure is that of de Sitter space with a flat space-like boundary, where the volume of a spacetime region corresponds to the number of variational parameters it contains. This result clarifies the nature of the ansatz and suggests a generalization to quantum field theory. It also constitutes an independent justification of the connection between MERA and hyperbolic geometry, which was proposed as a concrete implementation of the AdS/CFT correspondence.

Relevance: 80.00%

Abstract:

Homomorphic encryption is a particular type of encryption method that enables computing over encrypted data. This has a wide range of real-world ramifications, such as being able to blindly compute a search result sent to a remote server without revealing its content. In the first part of this thesis, we discuss how database search queries can be made secure using a homomorphic encryption scheme based on the ideas of Gahi et al. Gahi's method is based on the integer-based fully homomorphic encryption scheme proposed by van Dijk et al. We propose a new database search scheme, called the Homomorphic Query Processing Scheme, which can be used with the ring-based fully homomorphic encryption scheme proposed by Brakerski. In the second part of this thesis, we discuss the cybersecurity of the smart electric grid. Specifically, we use the Homomorphic Query Processing scheme to construct a keyword search technique in the smart grid. Our work is based on the Public Key Encryption with Keyword Search (PEKS) method introduced by Boneh et al. and a Multi-Key Homomorphic Encryption scheme proposed by López-Alt et al. A summary of the results of this thesis (specifically the Homomorphic Query Processing Scheme) is published in the proceedings of the 14th Canadian Workshop on Information Theory (CWIT).
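To make the core idea of computing over encrypted data concrete, here is a minimal, self-contained toy using Paillier encryption, which is additively homomorphic. It is illustrative only: it is not the fully homomorphic scheme used in the thesis, and the tiny primes are for demonstration, not security.

```python
import math
import secrets

def keygen():
    p, q = 2741, 3089                  # toy primes; real keys use ~1024-bit primes
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)               # with g = n + 1, L(g^lam mod n^2) = lam mod n
    return n, (lam, mu, n)

def encrypt(n, m):
    g, n2 = n + 1, n * n
    while True:                        # random r coprime to n
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    L = (pow(c, lam, n * n) - 1) // n  # L(x) = (x - 1) / n
    return (L * mu) % n

n, sk = keygen()
c1, c2 = encrypt(n, 20), encrypt(n, 22)
# Multiplying ciphertexts adds plaintexts: E(m1) * E(m2) mod n^2 -> m1 + m2
assert decrypt(sk, (c1 * c2) % (n * n)) == 42
print("homomorphic sum decrypts to", decrypt(sk, (c1 * c2) % (n * n)))
```

Fully homomorphic schemes such as those of van Dijk et al. and Brakerski extend this idea from addition alone to arbitrary computation on ciphertexts, which is what enables blind query processing of the kind developed in the thesis.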