933 results for Quantum computation and information


Relevance: 100.00%

Abstract:

The more information is available, and the more predictable events are, the better forecasts ought to be. In this paper, forecasts by bookmakers, prediction markets and tipsters are evaluated for a range of events with varying degrees of predictability and information availability. The three types of forecast represent different structures of information processing and would therefore be expected to perform differently. By and large, events that are more predictable, and for which more information is available, do tend to be forecast better.
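
A common way to run the kind of comparison described above is a proper scoring rule such as the Brier score. The sketch below is illustrative only: the probabilities and outcomes are invented, not data from the paper.

```python
# Hypothetical sketch: comparing forecast sources (bookmakers, prediction
# markets, tipsters) on the same set of events with the Brier score.
# All numbers below are invented for illustration.

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes = [1, 0, 1, 1, 0]  # realized results of five events
forecasts = {
    "bookmaker":         [0.8, 0.3, 0.6, 0.7, 0.4],
    "prediction_market": [0.9, 0.2, 0.7, 0.8, 0.3],
    "tipster":           [0.7, 0.5, 0.5, 0.6, 0.5],
}
for source, probs in forecasts.items():
    # Lower Brier score = better calibrated and sharper forecasts.
    print(source, round(brier_score(probs, outcomes), 3))
```

A lower score indicates a better forecast; evaluating the same events across sources keeps the comparison fair.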

Relevance: 100.00%

Abstract:

We introduce semiconductor quantum dot-based fluorescence imaging, with approximately 2-fold increased optical resolution in three dimensions, as a method for studying both cellular structures and the spatial organization of biomolecules in membranes and subcellular organelles. Target biomolecules are labelled with quantum dots via immunocytochemistry. The resolution enhancement is achieved by three-photon absorption of quantum dots and subsequent fluorescence emission from a higher-order excitonic state. Unlike conventional multiphoton microscopy, this approach can be realized on any confocal microscope without the need for pulsed excitation light. We demonstrate quantum dot triexciton imaging (QDTI) of the microtubule network of U373 cells, 3D imaging of TNF receptor 2 on the plasma membrane of HeLa cells, and multicolor 3D imaging of mitochondrial cytochrome c oxidase and actin in COS-7 cells.

Relevance: 100.00%

Abstract:

With the increase in e-commerce and the digitisation of design data and information, the construction sector has become reliant upon IT infrastructure and systems. The design and production process is more complex and more interconnected, and relies upon greater information mobility, with seamless exchange of data and information in real time. Construction small and medium-sized enterprises (CSMEs), in particular the speciality contractors, can effectively utilise cost-effective collaboration-enabling technologies, such as cloud computing, to help transfer information and data and so improve productivity. The system dynamics (SD) approach offers a perspective and tools that enable a better understanding of the dynamics of complex systems. This research uses SD as a modelling and analysis tool to understand and identify the key drivers in the absorption of cloud computing by CSMEs. The aim of this paper is to determine how the use of SD can improve the management of information flow through collaborative technologies, leading to improved productivity. The data supporting the use of SD were obtained through a pilot study consisting of questionnaires and interviews with five CSMEs in the UK house-building sector.

Relevance: 100.00%

Abstract:

ZnO nanocrystals are studied using theoretical calculations based on density functional theory. The two main effects related to the reduced size of the nanocrystals are investigated: quantum confinement and a large surface-to-volume ratio. The effects of quantum confinement are studied by saturating the surface dangling bonds of the nanocrystals with hypothetical H atoms. To understand the effects of the surfaces of the nanocrystals, all saturation is then removed and the system is relaxed to its minimum-energy configuration. Several different surface motifs are reported, which should be observable experimentally. Spin-polarized calculations are performed on the nonsaturated nanocrystals, leading to different magnetic moments. We propose that this magnetic moment can be responsible for the intrinsic magnetism observed in ZnO nanostructures.

Relevance: 100.00%

Abstract:

Based only on the parallel-transport condition, we present a general method to compute Abelian or non-Abelian geometric phases acquired by the basis states of pure or mixed density operators, which also holds for nonadiabatic and noncyclic evolution. Two interesting features of the non-Abelian geometric phase obtained by our method stand out: (i) it is a generalization of Wilczek and Zee's non-Abelian holonomy, in that it describes nonadiabatic evolution where the basis states are parallel transported between distinct degenerate subspaces, and (ii) the non-Abelian character of our geometric phase relies on the transitional evolution of the basis states, even in the nondegenerate case. We apply our formalism to a two-level system evolving nonadiabatically under spontaneous decay, to emphasize the non-Abelian nature of the geometric phase induced by the reservoir. We also show, through the generalized invariant theory, that our general approach encompasses previous results in the literature. Copyright (c) EPLA, 2008.
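
For a pure state, the parallel-transport condition and the noncyclic geometric phase it induces can be written in standard textbook form. These expressions summarize the well-known Abelian and non-Abelian constructions that the abstract builds on; they are not reproduced from the paper itself.

```latex
% Parallel-transport condition: no local phase accumulates along the path
\langle \psi(t) \,|\, \dot{\psi}(t) \rangle = 0

% Noncyclic geometric phase via the Pancharatnam connection
\gamma_g = \arg \langle \psi(0) | \psi(\tau) \rangle
         + i \int_0^{\tau} \langle \psi(t) | \dot{\psi}(t) \rangle \, dt

% Non-Abelian case: a path-ordered holonomy over a degenerate subspace
U_g = \mathcal{P} \exp\!\left( - \oint A \right), \qquad
A_{ab} = \langle \psi_a \,|\, \mathrm{d}\psi_b \rangle
```

For cyclic adiabatic evolution in a degenerate subspace, the holonomy above reduces to the Wilczek-Zee result that the paper generalizes.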

Relevance: 100.00%

Abstract:

In this paper, we use nuclear magnetic resonance (NMR) to write electronic states of a ferromagnetic system into high-temperature paramagnetic nuclear spins. By controlling the phase and duration of radio-frequency (RF) pulses, we set the populations of the NMR density matrix and apply quantum state tomography to obtain the matrix elements of the system experimentally, from which we calculate the temperature dependence of the magnetization for different magnetic fields. The effects of varying the temperature and the magnetic field on the populations can be mapped onto the angles of the spin rotations carried out by the RF pulses. The experimental results are compared to the Brillouin functions of ferromagnetically ordered systems in the mean-field approximation for two cases: the mean field is given by (i) B = B_0 + lambda*M and (ii) B = B_0 + lambda*M + lambda'*M^3, where B_0 is the external magnetic field and lambda, lambda' are mean-field parameters. The first case exhibits a second-order transition, whereas the second case has a first-order transition with temperature hysteresis. The NMR simulations are in good agreement with the magnetic predictions.
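
The mean-field case (i), B = B_0 + lambda*M, leads to a self-consistency condition for the magnetization. A minimal sketch for a spin-1/2 system in reduced units, m = tanh((b0 + lam*m) / t), is a standard textbook form; the parameters below are illustrative and not taken from the paper.

```python
import math

def magnetization(t, b0=0.0, lam=1.0, tol=1e-12, max_iter=10000):
    """Fixed-point solution of m = tanh((b0 + lam*m) / t), reduced units.

    t is the reduced temperature, b0 the reduced external field and lam
    the mean-field coupling (all hypothetical illustrative parameters).
    """
    m = 1.0  # start from the fully polarized branch
    for _ in range(max_iter):
        m_new = math.tanh((b0 + lam * m) / t)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

# Below the critical temperature (t < lam) a nonzero spontaneous
# magnetization survives at b0 = 0; above it, m iterates toward 0.
print(magnetization(0.5))  # ordered phase: large spontaneous m
print(magnetization(2.0))  # paramagnetic phase: m near 0
```

The second-order transition mentioned in the abstract shows up here as the spontaneous solution vanishing continuously as t approaches lam; the cubic term of case (ii) would add a lam2*m**3 term inside the tanh and can produce the first-order hysteresis described.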

Relevance: 100.00%

Abstract:

We present parallel algorithms on the BSP/CGM model, with p processors, to count and generate all the maximal cliques of a circle graph with n vertices and m edges. To count the number of all the maximal cliques, without actually generating them, our algorithm requires O(log p) communication rounds with O(nm/p) local computation time. We also present an algorithm to generate the first maximal clique in O(log p) communication rounds with O(nm/p) local computation, and to generate each one of the subsequent maximal cliques this algorithm requires O(log p) communication rounds with O(m/p) local computation. The maximal cliques generation algorithm is based on generating all maximal paths in a directed acyclic graph, and we present an algorithm for this problem that uses O(log p) communication rounds with O(m/p) local computation for each maximal path. We also show that the presented algorithms can be extended to the CREW PRAM model.
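
The reduction named above, generating maximal cliques via maximal paths in a directed acyclic graph, has a simple sequential core. Below is a minimal sketch of that subproblem only; the paper's BSP/CGM parallelization and the circle-graph construction are not reproduced, and the example DAG is invented.

```python
# Enumerate all maximal paths in a DAG: paths that start at a source
# (no incoming edge) and end at a sink (no outgoing edge).

def maximal_paths(adj):
    """Yield every maximal path of the DAG given as {vertex: [successors]}."""
    nodes = set(adj) | {w for vs in adj.values() for w in vs}
    has_in = {w for vs in adj.values() for w in vs}
    sources = [v for v in nodes if v not in has_in]

    def extend(path):
        succs = adj.get(path[-1], [])
        if not succs:              # sink reached: the path is maximal
            yield list(path)
        for w in succs:            # depth-first extension otherwise
            yield from extend(path + [w])

    for s in sorted(sources):
        yield from extend([s])

dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(list(maximal_paths(dag)))  # the two source-to-sink paths through b and c
```

In the paper's setting each maximal path of the derived DAG corresponds to a maximal clique of the circle graph, and the depth-first extension above is what gets distributed across the p processors.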

Relevance: 100.00%

Abstract:

In this article, we discuss inferential aspects of measurement error regression models with null intercepts when the unknown quantity x (a latent variable) follows a skew-normal distribution. We first examine the maximum-likelihood approach to estimation via the EM algorithm, exploring statistical properties of the model considered. Then, the marginal likelihood, the score function and the observed information matrix of the observed quantities are presented, allowing direct implementation of inference. To discuss some diagnostic techniques for this type of model, we derive the appropriate matrices for assessing the local influence on the parameter estimates under different perturbation schemes. The results and methods developed in this paper are illustrated with part of a real data set used by Hadgu and Koch [1999, Application of generalized estimating equations to a dental randomized clinical trial. Journal of Biopharmaceutical Statistics, 9, 161-178].

Relevance: 100.00%

Abstract:

This presentation was offered as part of the CUNY Library Assessment Conference, Reinventing Libraries: Reinventing Assessment, held at the City University of New York in June 2014.

Relevance: 100.00%

Abstract:

The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modelled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design in which the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats where researchers prefer that access arrangement. By decoupling the data model from data persistence, it becomes much easier to use, for instance, relational databases interchangeably, providing stricter provenance and audit-trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate; a schema derived from the CF conventions has been designed to handle SWIFT time series efficiently.
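
The decoupling of the data model from the persistence mechanism described above can be sketched as follows. All class and field names here are hypothetical illustrations, not the actual SWIFT API; the point is that the in-memory model is oblivious to whether JSON or the legacy tab-separated format is used on disk.

```python
# Hypothetical sketch: one in-memory configuration model, two swappable
# serializers (JSON for research use, tab-separated text as the legacy
# option). Names like SubareaConfig and "GR4J" are illustrative only.
import json
from dataclasses import dataclass, asdict

@dataclass
class SubareaConfig:
    name: str
    area_km2: float
    model: str

def to_json(cfgs):
    """Research-friendly JSON persistence of the same data model."""
    return json.dumps([asdict(c) for c in cfgs], indent=2)

def to_tsv(cfgs):
    """Legacy tab-separated persistence of the same data model."""
    header = "name\tarea_km2\tmodel"
    rows = [f"{c.name}\t{c.area_km2}\t{c.model}" for c in cfgs]
    return "\n".join([header] + rows)

cfgs = [SubareaConfig("upper", 12.5, "GR4J")]
print(to_json(cfgs))
print(to_tsv(cfgs))
```

A relational-database back end would be a third serializer over the same dataclass, which is what makes the provenance and audit-trail scenario mentioned above cheap to support.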

Relevance: 100.00%

Abstract:

This thesis studies price-setting models and their macroeconomic implications. In the first two chapters I analyze models in which firms' pricing decisions take menu costs and information costs into account. In Chapter 1 I estimate such models using price-change statistics from the United States, and conclude that information costs are significantly larger than menu costs, and that the data clearly favor the model in which information about aggregate conditions is costly while idiosyncratic information is free. In Chapter 2 I investigate the consequences of monetary shocks and disinflation announcements using the previously estimated models, and show that the degree of monetary non-neutrality is larger in the model in which part of the information is free. Chapter 3 is a joint article with Carlos Carvalho (PUC-Rio) and Antonella Tutino (Federal Reserve Bank of Dallas). We examine a price-setting model in which firms are subject to a Shannon-type information-flow constraint. We calibrate the model and study impulse-response functions to idiosyncratic and aggregate shocks, showing that firms prefer to process aggregate and idiosyncratic information jointly rather than investigate them separately. This type of processing generates more frequent price adjustments, reducing the persistence of the real effects of monetary shocks.

Relevance: 100.00%

Abstract:

My dissertation focuses on dynamic aspects of coordination processes such as the reversibility of early actions, the option to delay decisions, and learning about the environment from observing other people's actions. This study proposes the use of tractable dynamic global games, where players privately and passively learn about their actions' true payoffs and are able to adjust early investment decisions to the arrival of new information, to investigate the consequences of liquidity shocks for the performance of a Tobin tax as a policy intended to foster coordination success (Chapter 1) and the adequacy of a Tobin tax for reducing an economy's vulnerability to sudden stops (Chapter 2). It then analyzes players' incentive to acquire costly information in a sequential decision setting (Chapter 3). In Chapter 1, a continuum of foreign agents decide whether or not to enter an investment project. A fraction λ of them are hit by liquidity restrictions in a second period and are forced to withdraw early investment, or are precluded from investing in the interim period, depending on the actions they chose in the first period. Players not affected by the liquidity shock are able to revise early decisions. Coordination success is increasing in aggregate investment and decreasing in the aggregate volume of capital exit. Without liquidity shocks, aggregate investment is (in a pivotal contingency) invariant to frictions such as a tax on short-term capital. In this case, a Tobin tax always increases the incidence of success. In the presence of liquidity shocks, this invariance result no longer holds in equilibrium. A Tobin tax becomes harmful to aggregate investment, which may reduce the incidence of success if the economy does not benefit enough from avoiding capital reversals. It is shown that the Tobin tax that maximizes the ex-ante probability of successfully coordinated investment is decreasing in the liquidity shock.
Chapter 2 studies the effects of a Tobin tax in the same setting as the global game model proposed in Chapter 1, except that the liquidity shock is stochastic, i.e., there is also aggregate uncertainty about the extent of the liquidity restrictions. It identifies conditions under which, in the unique equilibrium of the model with a low probability of liquidity shocks but large dry-ups, a Tobin tax is welfare improving, helping agents to coordinate on the good outcome. The model provides a rationale for a Tobin tax in economies that are prone to sudden stops. The optimal Tobin tax tends to be larger when capital reversals are more harmful and when the fraction of agents hit by liquidity shocks is smaller. Chapter 3 focuses on information acquisition in a sequential decision game with payoff complementarity and information externality. When information is cheap relative to players' incentive to coordinate actions, only the first player chooses to process information; the second player learns about the true payoff distribution from observing the first player's decision and follows her action. Miscoordination requires that both players privately process information, which tends to happen when information is expensive and the prior knowledge about the distribution of payoffs has a large variance.

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

The electric current and the magnetoresistance effect are studied in a double quantum-dot system, where one of the dots, QD(a), is coupled to two ferromagnetic electrodes (F-1, F-2), while the second, QD(b), is connected to a superconductor S. For energy scales within the superconductor gap, electric conduction is allowed by Andreev reflection processes. Due to the presence of two ferromagnetic leads, non-local crossed Andreev reflections are possible. We find that the sign of the magnetoresistance can be changed by tuning the external potential applied to the ferromagnets. In addition, it is possible to control the current of the first ferromagnet (F-1) through the potential applied to the second one (F-2). We have also included intradot interactions and gate voltages at each quantum dot and analyzed their influence through a mean-field approximation. The interaction reduces the current amplitudes with respect to the non-interacting case, but the switching effect remains as a manifestation of quantum coherence, on scales of the order of the superconductor coherence length. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4723000]