202 results for quantum statistical mechanics
Abstract:
Which gates are universal for quantum computation? Although it is well known that certain gates on two-level quantum systems (qubits), such as the controlled-NOT, are universal when assisted by arbitrary one-qubit gates, it has only recently become clear precisely what class of two-qubit gates is universal in this sense. We present an elementary proof that any entangling two-qubit gate is universal for quantum computation, when assisted by one-qubit gates. A proof of this result for systems of arbitrary finite dimension has been provided by Brylinski and Brylinski; however, their proof relies on a long argument using advanced mathematics. In contrast, our proof provides a simple constructive procedure which is close to optimal and experimentally practical.
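The dichotomy the proof rests on (entangling vs. non-entangling gates) is easy to probe numerically: a two-qubit gate is entangling iff it maps some product state to an entangled one. The sketch below is a heuristic random-sampling check, not the paper's constructive procedure; the tolerance and trial count are arbitrary choices:

```python
import numpy as np

def schmidt_number(state, tol=1e-10):
    # Reshape the 4-vector into a 2x2 matrix and count nonzero singular values.
    s = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    return int(np.sum(s > tol))

def looks_entangling(gate, trials=200, rng=None):
    # Heuristic probe: apply the gate to random product states and check
    # whether any output has Schmidt rank 2 (i.e. is entangled).
    rng = np.random.default_rng(rng)
    for _ in range(trials):
        a = rng.normal(size=2) + 1j * rng.normal(size=2)
        b = rng.normal(size=2) + 1j * rng.normal(size=2)
        psi = np.kron(a / np.linalg.norm(a), b / np.linalg.norm(b))
        if schmidt_number(gate @ psi) == 2:
            return True
    return False

CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=complex)
SWAP = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]], dtype=complex)
print(looks_entangling(CNOT))  # True: CNOT is entangling
print(looks_entangling(SWAP))  # False: SWAP maps products to products
```

SWAP is the standard example of a nontrivial but non-entangling gate, so by the paper's result it is not universal when assisted by one-qubit gates, whereas CNOT is.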
Abstract:
We introduce a model of computation based on read only memory (ROM), which allows us to compare the space-efficiency of reversible, error-free classical computation with reversible, error-free quantum computation. We show that a ROM-based quantum computer with one writable qubit is universal, whilst two writable bits are required for a universal classical ROM-based computer. We also comment on the time-efficiency advantages of quantum computation within this model.
Abstract:
This Letter presents a simple formula for the average fidelity between a unitary quantum gate and a general quantum operation on a qudit, generalizing the formula for qubits found by Bowdrey et al. [Phys. Lett. A 294 (2002) 258]. This formula may be useful for experimental determination of average gate fidelity. We also give a simplified proof of a formula due to Horodecki et al. [Phys. Rev. A 60 (1999) 1888], connecting average gate fidelity to entanglement fidelity. (C) 2002 Elsevier Science B.V. All rights reserved.
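In the form usually quoted from this Letter, the formula reads F̄(E, U) = (Σ_k |Tr(U†E_k)|² + d)/(d² + d), where the E_k are Kraus operators for the operation E and d is the qudit dimension. A quick numerical sanity check against the one-qubit depolarizing channel, whose average fidelity to the identity is known to be 1 − p/2:

```python
import numpy as np

def avg_gate_fidelity(kraus, U):
    # Average fidelity formula: F_avg = (sum_k |Tr(U^dag E_k)|^2 + d) / (d^2 + d)
    d = U.shape[0]
    s = sum(abs(np.trace(U.conj().T @ E))**2 for E in kraus)
    return (s + d) / (d**2 + d)

# Depolarizing channel on one qubit, compared against the identity gate.
p = 0.3
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [np.sqrt(1 - 3*p/4)*I, np.sqrt(p/4)*X, np.sqrt(p/4)*Y, np.sqrt(p/4)*Z]

print(avg_gate_fidelity(kraus, I))  # ~0.85 = 1 - p/2
```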
Abstract:
Recently, quantum tomography has been proposed as a fundamental tool for prototyping few-qubit quantum devices. It allows the complete reconstruction of the state produced from a given input into the device. From this reconstructed density matrix, relevant quantum information quantities such as the degree of entanglement and entropy can be calculated. Generally, orthogonal measurements have been discussed for this tomographic reconstruction. In this paper, we extend the tomographic reconstruction technique to two new regimes. First, we show how nonorthogonal measurements allow the reconstruction of the state of the system provided the measurements span the Hilbert space. We then detail how quantum-state tomography can be performed for multiple qudits, with a specific example illustrating how to achieve this in one- and two-qutrit systems.
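As a minimal concrete instance of the orthogonal-measurement case, a single-qubit state can be rebuilt from its three Pauli expectation values via ρ = (I + ⟨X⟩X + ⟨Y⟩Y + ⟨Z⟩Z)/2. The sketch below works in the ideal, noise-free limit (exact expectations, no finite measurement statistics):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def reconstruct(rho_true):
    # Ideal single-qubit tomography: measure the three Pauli expectation
    # values and rebuild rho = (I + <X>X + <Y>Y + <Z>Z) / 2.
    r = [np.trace(rho_true @ P).real for P in (X, Y, Z)]
    return 0.5 * (I + r[0]*X + r[1]*Y + r[2]*Z)

# Example: the |+> state is recovered exactly.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
print(np.allclose(reconstruct(rho), rho))  # True
```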
Abstract:
Parrondo's paradox arises when two losing games are combined to produce a winning one. A history-dependent quantum Parrondo game is studied where the rotation operators that represent the toss of a classical biased coin are replaced by general SU(2) operators to transform the game into the quantum domain. In the initial state, a superposition of qubits can be used to couple the games and produce interference leading to quite different payoffs to those in the classical case. (C) 2002 Elsevier Science B.V. All rights reserved.
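The classical paradox underlying this construction can be checked directly. The sketch below is a classical toy, not the quantum SU(2) version studied here: it assumes the standard history-dependent Parrondo win probabilities (9/10, 1/4, 1/4, 7/10, each reduced by a bias ε) and computes the per-round drift from the stationary distribution of the four-state history chain:

```python
import numpy as np

def drift(pA, pB, eps, mix):
    # Expected capital gain per round of the history-dependent game B
    # (win probs pB keyed by the last two outcomes), optionally mixed
    # 50/50 with the simple biased-coin game A.
    # History states encode (two-rounds-ago, last-round): 0=LL, 1=LW, 2=WL, 3=WW.
    p = np.array(pB) - eps
    if mix:
        p = (p + (pA - eps)) / 2.0
    # Transition matrix over the 4 history states.
    T = np.zeros((4, 4))
    for s, ps in enumerate(p):
        prev = s & 1                  # last outcome (0=L, 1=W)
        T[s, 2*prev + 1] += ps        # win  -> new state (prev, W)
        T[s, 2*prev + 0] += 1 - ps    # lose -> new state (prev, L)
    # Stationary distribution: left eigenvector of T for eigenvalue 1.
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()
    return float(pi @ (2*p - 1))

pB = [9/10, 1/4, 1/4, 7/10]     # keyed LL, LW, WL, WW (standard choice)
eps = 0.003
gA = -2 * eps                   # game A drift: 2(1/2 - eps) - 1
gB = drift(0.5, pB, eps, mix=False)
gAB = drift(0.5, pB, eps, mix=True)
print(gA < 0, gB < 0, gAB > 0)  # True True True: two losers combine to a winner
```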
Abstract:
The authors investigated the effect of manual hyperinflation (MHI) with set parameters, applied to patients on mechanical ventilation, on hemodynamics, respiratory mechanics, and gas exchange. Sixteen critically ill patients post-septic shock, with acute lung injury, were studied. Heart rate, arterial pressure, and mean pulmonary artery pressure were recorded every minute. Pulmonary artery occlusion pressure, cardiac output, arterial blood gases, and dynamic compliance (C-dyn) were recorded pre- and post-MHI. From these, systemic vascular resistance index (SVRI), cardiac index, oxygen delivery, and the ratio of partial pressure of oxygen to fraction of inspired oxygen (PaO2:FiO2) were calculated. There were significant increases in SVRI (P < 0.05) post-MHI and in diastolic arterial pressure (P < 0.01) during MHI. C-dyn increased post-MHI (P < 0.01) and was sustained at 20 minutes post-MHI (P < 0.01). Subjects with an intrapulmonary cause of lung disease had a significant decrease (P = 0.02) in PaO2:FiO2, and those with extrapulmonary causes of lung disease had a significant increase (P < 0.001) in PaO2:FiO2 post-MHI. In critically ill patients, MHI resulted in an improvement in lung mechanics, an improvement in gas exchange in patients with lung disease due to extrapulmonary events, and no impairment of the cardiovascular system.
Abstract:
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, a relatively new idea that allows individual assessment of predictions. The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, where the use of non-parametric methods such as decision trees and generalized additive models is promoted to identify important variables and their modelling relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. This paper is motivated by a medical problem where interest focuses on developing a risk stratification system for morbidity of 1,710 cardiac patients given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied specifically to this case study, they can be applied across any field, irrespective of the type of response.
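A deliberately crude, NumPy-only sketch of the two-stage idea on synthetic data: a univariate screen stands in for the exploratory stage, and plain logistic regression for the final parametric model (the paper's decision trees and GAMs are not reproduced here; all data and parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: 10 candidate predictors, only variables 0 and 3 matter.
n, p = 500, 10
X = rng.normal(size=(n, p))
eta = 1.2*X[:, 0] - 0.8*X[:, 3]
y = (rng.random(n) < 1/(1 + np.exp(-eta))).astype(float)

# Stage 1 (exploration): rank variables by a crude univariate covariance screen.
score = np.abs((X * (y - y.mean())[:, None]).mean(axis=0))
keep = np.argsort(score)[-2:]            # retain the top 2 predictors

# Stage 2 (prediction): logistic regression on the retained variables,
# fitted by plain gradient ascent on the log-likelihood.
Z = np.column_stack([np.ones(n), X[:, keep]])
beta = np.zeros(Z.shape[1])
for _ in range(2000):
    mu = 1/(1 + np.exp(-Z @ beta))
    beta += 0.05 * Z.T @ (y - mu) / n

print(np.sort(keep))  # the screen recovers the truly informative variables
```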
Abstract:
The effect of the number of samples and the selection of data for analysis on the calculation of surface motor unit potential (SMUP) size in the statistical method of motor unit number estimates (MUNE) was determined in 10 normal subjects and 10 subjects with amyotrophic lateral sclerosis (ALS). We recorded 500 sequential compound muscle action potentials (CMAPs) at three different stable stimulus intensities (10–50% of maximal CMAP). Estimated mean SMUP sizes were calculated using Poisson statistical assumptions from the variance of the 500 sequential CMAPs obtained at each stimulus intensity. The results with the 500 data points were compared with smaller subsets from the same data set, using a range of 50–80% of the 500 data points. The effect of restricting analysis to data between 5 and 20% of the CMAP, and to standard deviation limits, was also assessed. No differences in mean SMUP size were found with stimulus intensity or with the use of different ranges of data. Consistency was improved with a greater sample number. Restricting to data within 5% of CMAP size gave both increased consistency and reduced mean SMUP size in many subjects, but excluded valid responses present at that stimulus intensity. These changes were more prominent in ALS patients, in whom the presence of isolated SMUP responses was a striking difference from normal subjects. Noise, spurious data, and large SMUPs limited the Poisson assumptions. When these factors are considered, consistent statistical MUNE can be calculated from a continuous sequence of data points. A 2 to 2.5 SD window or a 10% window is a reasonable method of limiting data for analysis. Muscle Nerve 27: 320–331, 2003
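The Poisson identity the statistical method rests on, namely that the variance-to-mean ratio of the sweep-to-sweep CMAP recovers the mean SMUP size, can be illustrated on simulated sweeps (the amplitudes, firing rate, and pool size below are illustrative, not drawn from the study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated statistical MUNE under the Poisson assumption: at a fixed
# submaximal stimulus, the number of units firing varies as Poisson(lam),
# and each unit contributes a fixed SMUP amplitude (in mV).
smup, lam, n_sweeps = 0.05, 8.0, 500
cmap = smup * rng.poisson(lam, size=n_sweeps)

# For X = s * Poisson(lam): Var(X) = s^2 lam and E(X) = s lam,
# so the variance-to-mean ratio recovers the mean SMUP size s.
smup_hat = cmap.var() / cmap.mean()
print(round(smup_hat, 3))  # close to the true 0.05

# MUNE follows by dividing the maximal (supramaximal) CMAP by the estimate.
max_cmap = smup * 120      # supramaximal response of a 120-unit pool
print(round(max_cmap / smup_hat))  # MUNE near the true 120
```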
Abstract:
Subcycling, or the use of different timesteps at different nodes, can be an effective way of improving the computational efficiency of explicit transient dynamic structural solutions. The method that has been most widely adopted uses a nodal partition, extending the central difference method, in which small timestep updates are performed interpolating on the displacement at neighbouring large timestep nodes. This approach leads to narrow bands of unstable timesteps, or statistical stability. It can also be in error due to lack of momentum conservation on the timestep interface. The author has previously proposed energy conserving algorithms that avoid the first problem of statistical stability. However, these sacrifice accuracy to achieve stability. An approach to conserve momentum on an element interface by adding partial velocities is considered here. Applied to extend the central difference method, this approach is simple and has accuracy advantages. The method can be programmed by summing impulses of internal forces, evaluated using local element timesteps, in order to predict a velocity change at a node. However, it is still only statistically stable, so an adaptive timestep size is needed to monitor accuracy and to be adjusted if necessary. By replacing the central difference method with the explicit generalized alpha method, it is possible to gain stability by dissipating the high frequency response that leads to stability problems. However, coding the algorithm is less elegant, as the response depends on previous partial accelerations. Extension to implicit integration is shown to be impractical due to the neglect of remote effects of internal forces acting across a timestep interface. (C) 2002 Elsevier Science B.V. All rights reserved.
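The subcycling schemes themselves are too involved for a short sketch, but the baseline they extend, the explicit central difference update with its critical timestep Δt < 2/ω, can be shown in a few lines (single degree of freedom, illustrative parameters):

```python
import numpy as np

def central_difference(m, k, x0, v0, dt, steps):
    # Explicit central-difference (leapfrog) update for m*x'' = -k*x,
    # the baseline scheme that the subcycling methods extend.
    x, v = x0, v0 + 0.5*dt*(-k*x0/m)   # half-step start for velocity
    xs = [x0]
    for _ in range(steps):
        x = x + dt*v                    # displacement update
        v = v + dt*(-k*x/m)             # velocity update from internal force
        xs.append(x)
    return np.array(xs)

# Stability limit dt_crit = 2/omega: just below it the orbit stays bounded,
# just above it the solution grows without bound.
m, k = 1.0, 100.0                       # omega = 10, so dt_crit = 0.2
good = central_difference(m, k, 1.0, 0.0, 0.19, 300)
bad  = central_difference(m, k, 1.0, 0.0, 0.21, 300)
print(np.abs(good).max() < 10, np.abs(bad).max() > 1e6)  # True True
```

The narrow unstable bands and "statistical stability" discussed above arise when different nodes are advanced with different timesteps, coupling updates like this one across a timestep interface.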
Abstract:
There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon and by Zhang and Einstein in recent publications. It is a very useful estimator in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, neither the recent works nor the original work by Pahl provide a rigorous basis for the determination of a confidence interval for the estimator, or of a confidence region for the estimator and the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example. (C) 2003 Elsevier Science Ltd. All rights reserved.
Abstract:
We give a selective review of quantum mechanical methods for calculating and characterizing resonances in small molecular systems, with an emphasis on recent progress in Chebyshev and Lanczos iterative methods. Two archetypal molecular systems are discussed: isolated resonances in HCO, which exhibit regular mode and state specificity, and overlapping resonances in strongly bound HO2, which exhibit irregular and chaotic behavior. Future directions in this field are also discussed.
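At the core of the Chebyshev iterative methods reviewed is the three-term recursion ψ_{k+1} = 2Hψ_k − ψ_{k−1}, applied with the Hamiltonian scaled so its spectrum lies in [−1, 1]. A minimal sketch on a random symmetric matrix standing in for a molecular Hamiltonian, cross-checked against the closed form T_k(x) = cos(k arccos x):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random symmetric "Hamiltonian", scaled so its spectrum lies in (-1, 1).
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2
H /= np.linalg.norm(H, 2) * 1.01

# Chebyshev three-term recursion: psi_{k+1} = 2 H psi_k - psi_{k-1},
# generating psi_k = T_k(H) psi_0.
v0 = rng.normal(size=6)
v0 /= np.linalg.norm(v0)
prev, cur = v0, H @ v0                  # T_0 v0 and T_1 v0
for k in range(2, 50):
    prev, cur = cur, 2*(H @ cur) - prev  # now cur = T_k(H) v0

# Cross-check against T_k(x) = cos(k arccos x) applied in the eigenbasis.
w, Q = np.linalg.eigh(H)
exact = Q @ (np.cos(49 * np.arccos(w)) * (Q.T @ v0))
print(np.allclose(cur, exact))           # True
```

In the spectral methods reviewed, long sequences of such vectors are used to extract eigenvalues and resonance positions without diagonalizing H directly.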