929 results for Quantum Error-correction
Abstract:
PURPOSE: The Bonferroni correction adjusts probability (p) values to account for the increased risk of a type I error when making multiple statistical tests. The routine use of this test has been criticised as deleterious to sound statistical judgment, as testing the wrong hypothesis, and as reducing the chance of a type I error only at the expense of increasing that of a type II error; yet it remains popular in ophthalmic research. The purpose of this article was to survey the use of the Bonferroni correction in research articles published in three optometric journals, viz. Ophthalmic & Physiological Optics, Optometry & Vision Science, and Clinical & Experimental Optometry, and to provide advice to authors contemplating multiple testing. RECENT FINDINGS: Some authors ignored the problem of multiple testing, while others used the method uncritically, with no rationale or discussion. A variety of methods of correcting p values were employed, the Bonferroni method being the single most popular. Bonferroni was used in a variety of circumstances, most commonly to correct the experiment-wise error rate when using multiple t-tests or as a post-hoc procedure to correct the family-wise error rate following analysis of variance (ANOVA). Some studies quoted adjusted p values incorrectly or gave an erroneous rationale. SUMMARY: Whether or not to use the Bonferroni correction depends on the circumstances of the study. It should not be used routinely, and should be considered only if: (1) a single test of the 'universal null hypothesis' (H0) that all tests are not significant is required, (2) it is imperative to avoid a type I error, and (3) a large number of tests are carried out without preplanned hypotheses.
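The adjustment the abstract describes can be sketched in a few lines; the following is a minimal illustration of the Bonferroni procedure itself (function names and example values are my own, not from the surveyed article):

```python
# Minimal sketch of the Bonferroni correction (illustrative, not from
# the surveyed article): each raw p value is multiplied by the number
# of tests, capped at 1, so the family-wise error rate is held at alpha.
def bonferroni(p_values, alpha=0.05):
    m = len(p_values)
    adjusted = [min(p * m, 1.0) for p in p_values]
    significant = [p_adj < alpha for p_adj in adjusted]
    return adjusted, significant

# Three tests: only the first survives correction at alpha = 0.05,
# since 0.01 * 3 = 0.03 < 0.05, while 0.04 * 3 and 0.03 * 3 exceed it.
adjusted, significant = bonferroni([0.01, 0.04, 0.03])
```

Equivalently, one can keep the raw p values and compare each against alpha/m; the two formulations reject exactly the same hypotheses.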
Abstract:
A synchronization scheme for a two-channel phase-sensitive amplifier is implemented based on the injection-locking of a single InP quantum-dash mode-locked laser. Error-free performance with a penalty of <1 dB is demonstrated for both channels. © 2011 Optical Society of America.
Abstract:
For more than a century it has been known that the eye is not a perfect optical system, but rather a system that suffers from aberrations beyond conventional prescriptive descriptions of defocus and astigmatism. Whereas traditional refraction attempts to describe the error of the eye with only two parameters, namely sphere and cylinder, measurements of wavefront aberrations depict the optical error with many more parameters. What remains questionable is the impact these additional parameters have on visual function. Some authors have argued that higher-order aberrations have a considerable effect on visual function and in certain cases this effect is significant enough to induce amblyopia. This has been referred to as ‘higher-order aberration-associated amblyopia’. In such cases, correction of higher-order aberrations would not restore visual function. Others have reported that patients with binocular asymmetric aberrations display an associated unilateral decrease in visual acuity and, if the decline in acuity results from the aberrations alone, such subjects may have been erroneously diagnosed as amblyopes. In these cases, correction of higher-order aberrations would restore visual function. This refractive entity has been termed ‘aberropia’. In order to investigate these hypotheses, the distribution of higher-order aberrations in strabismic, anisometropic and idiopathic amblyopes, and in a group of visual normals, was analysed both before and after wavefront-guided laser refractive correction. 
The results show: (i) there is no significant asymmetry in higher-order aberrations between amblyopic and fixing eyes prior to laser refractive treatment; (ii) the mean magnitude of higher-order aberrations is similar within the amblyopic and visually normal populations; (iii) a significant improvement in visual acuity can be realised for adult amblyopic patients utilising wavefront-guided laser refractive surgery, and a modest increase in contrast sensitivity was observed for the amblyopic eye of anisometropes following treatment; and (iv) an overall trend towards increased higher-order aberrations following wavefront-guided laser refractive treatment was observed for both visually normal and amblyopic eyes. In conclusion, while the data do not provide any direct evidence for the concepts of either ‘aberropia’ or ‘higher-order aberration-associated amblyopia’, it is clear that gains in visual acuity and contrast sensitivity may be realised following laser refractive treatment of the amblyopic adult eye. Possible mechanisms by which these gains are realised are discussed.
Abstract:
Purpose: To investigate the relationship between pupil diameter and refractive error and how refractive correction, target luminance, and accommodation modulate this relationship. Methods: Sixty emmetropic, myopic, and hyperopic subjects (age range, 18 to 35 years) viewed an illuminated target (luminance: 10, 100, 200, 400, 1000, 2000, and 4100 cd/m²) within a Badal optical system, at 0 diopters (D) and −3 D vergence, with and without refractive correction. Refractive error was corrected using daily disposable contact lenses. Pupil diameter and accommodation were recorded continuously using a commercially available photorefractor. Results: No significant difference in pupil diameter was found between the refractive groups at 0 D or −3 D target vergence, in the corrected or uncorrected conditions. As expected, pupil diameter decreased with increasing luminance. Target vergence had no significant influence on pupil diameter. In the corrected condition, at 0 D target vergence, the accommodation response was similar in all refractive groups. At −3 D target vergence, the emmetropic and myopic groups accommodated significantly more than the hyperopic group at all luminance levels. There was no correlation between accommodation response and pupil diameter or refractive error in any refractive group. In the uncorrected condition, the accommodation response was significantly greater in the hyperopic group than in the myopic group at all luminance levels, particularly for near viewing. In the hyperopic group, the accommodation response was significantly correlated with refractive error but not pupil diameter. In the myopic group, accommodation response level was not correlated with refractive error or pupil diameter. Conclusions: Refractive error has no influence on pupil diameter, irrespective of refractive correction or accommodative demand. This suggests that the pupil is controlled by the pupillary light reflex and is not driven by retinal blur.
Abstract:
In this thesis, we consider N quantum particles coupled to collective thermal quantum environments. The coupling is energy conserving and scaled in the mean-field way. There is no direct interaction between the particles; they interact only via the common reservoir. It is well known that an initially disentangled state of the N particles will remain disentangled for all times in the limit N → ∞. In this thesis, we evaluate the η-body reduced density matrix (tracing over the reservoirs and the N − η remaining particles). We identify the main disentangled part of the reduced density matrix and obtain the first-order correction term in 1/N. We show that this correction term is entangled. We also estimate the speed of convergence of the reduced density matrix as N → ∞. Our model is exactly solvable, and our results are not based on numerical approximation.
Abstract:
We recently published an article (García-Pérez & Alcalá-Quintana, 2010) reanalyzing data presented by Lapid, Ulrich, and Rammsayer (2008) and discussing a theoretical argument developed by Ulrich and Vorberg (2009). The purpose of this note is to correct an error in our study that has some theoretical importance, although it does not affect the conclusion that was reached. The error is that asymptote parameters reflecting lapses or finger errors should not enter the constraint relating the psychometric functions that describe performance when the comparison stimulus in a two-alternative forced-choice (2AFC) discrimination task is presented in the first or second interval.
Abstract:
A new method for the evaluation of the efficiency of parabolic trough collectors, called the Rapid Test Method, is investigated at the Solar Institut Jülich. The basic concept is to carry out measurements under stagnation conditions. This allows a fast and inexpensive process, as no working fluid is required. With this approach, the temperature reached by the inner wall of the receiver is assumed to be the stagnation temperature and hence the average temperature inside the collector. This leads to a systematic error which can be rectified through the introduction of a correction factor. A model of the collector is simulated with COMSOL Multiphysics to study the size of the correction factor depending on collector geometry and working conditions. The resulting values are compared with experimental data obtained at a test rig at the Solar Institut Jülich. These results did not match the simulated ones; consequently, it was not possible to verify the model. The reliability of both the COMSOL Multiphysics model and the measurements is analysed. The influence of the correction factor on the Rapid Test Method is also studied, as well as the possibility of neglecting it by measuring the receiver's inner wall temperature where it receives the least amount of solar rays. The last two chapters analyse the specific heat capacity as a function of pressure and temperature and present some considerations about the uncertainties in the efficiency curve obtained with the Rapid Test Method.
Abstract:
The turn-on of a quantum dot (QD) semiconductor laser operating simultaneously at the ground state (GS) and excited state (ES) is investigated both experimentally and theoretically. We find experimentally that the slow passage through the two successive laser thresholds may lead to significant delays in the GS and ES turn-ons. The difference between the turn-on times is measured as a function of the pump rate of change and reveals no clear power law. This has motivated a detailed analysis of rate equations appropriate for two-state lasing QD lasers. We find that the effective time of the GS turn-on follows a −1/2 power law in the rate of change, provided that the rate of change is not too small. The effective time of the ES transition follows a −1 power law, but its first-order logarithmic correction is numerically significant. The two turn-ons result from different physical mechanisms. The delay of the GS transition strongly depends on the slow growth of the dot population, whereas the ES transition depends only on the time needed to leave a repellent steady state.
Abstract:
Bangla OCR (Optical Character Recognition) is long-awaited software for the Bengali community all over the world. Numerous efforts suggest that, due to the inherently complex nature of the Bangla alphabet and its word-formation process, developing a high-fidelity OCR that produces reasonably acceptable output still remains a challenge. One possible route to improvement is post-processing of the OCR's output; algorithms such as edit distance and the use of n-gram statistical information have been used to rectify misspelled words in language processing. This work presents the first known approach to use these algorithms to replace misrecognized words produced by Bangla OCR. The assessment is made on a set of fifty documents written in Bangla script and uses a dictionary of 541,167 words. The proposed correction model lowers the recognition error rate by 2.87% and 3.18% for the character-based n-gram and edit distance algorithms, respectively. The developed system suggests a list of five alternatives for a misspelled word. It is found that in 33.82% of cases the correct word is the topmost of the five suggestions for the n-gram algorithm, while with the edit distance algorithm the first suggestion is the proper match in 36.31% of cases. This work opens avenues for further improvements in character recognition.
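The edit-distance ranking step described above can be sketched as follows; this is a minimal illustration with a toy English word list (the actual system used a 541,167-word Bangla lexicon), and all names are my own:

```python
# Minimal sketch of edit-distance post-processing for OCR output:
# rank dictionary words by Levenshtein distance to the misrecognized
# word and return the top five as suggestions. The toy word list is
# illustrative; the actual system used a 541,167-word Bangla lexicon.
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def suggest(word, dictionary, k=5):
    # sorted() is stable, so ties keep dictionary order.
    return sorted(dictionary, key=lambda w: levenshtein(word, w))[:k]

words = ["receive", "recite", "recipe", "reserve", "remove", "revive"]
suggestions = suggest("recieve", words)  # five closest candidates
```

A production system would combine this ranking with the n-gram statistics mentioned above to break ties between candidates at equal edit distance.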
Abstract:
Activation functions within neural networks play a crucial role in deep learning, since they allow networks to learn complex and non-trivial patterns in the data. However, the ability to approximate non-linear functions is a significant limitation when implementing neural networks on a quantum computer to solve typical machine learning tasks. The main burden lies in the unitarity constraint on quantum operators, which forbids non-linearity and poses a considerable obstacle to developing such non-linear functions in a quantum setting. Nevertheless, several attempts have been made in the literature to realize a quantum activation function. Recently, the idea of QSplines has been proposed to approximate a non-linear activation function by implementing the quantum version of spline functions. Yet QSplines suffer from various drawbacks. Firstly, the final function estimation requires a post-processing step; thus, the value of the activation function is not available directly as a quantum state. Secondly, QSplines need many error-corrected qubits and very long quantum circuits to be executed. These constraints do not allow the adoption of QSplines on near-term quantum devices and limit their generalization capabilities. This thesis aims to overcome these limitations by leveraging hybrid quantum-classical computation. In particular, a few different methods for Variational Quantum Splines are proposed and implemented, to pave the way for the development of complete quantum activation functions and unlock the full potential of quantum neural networks in the field of quantum machine learning.
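As a point of reference for what a spline-based activation approximates, the classical counterpart (tabulating a non-linear activation at fixed knots and interpolating between them) can be sketched in a few lines; the knot choices and names here are illustrative and not taken from the thesis:

```python
import math
from bisect import bisect_right

# Classical counterpart of a spline-based activation: tabulate a
# sigmoid at fixed knots and evaluate by piecewise-linear
# interpolation. Illustrative only; QSplines encode such spline
# segments in quantum states rather than in a Python list.
def make_spline_activation(n_knots=25, lo=-6.0, hi=6.0):
    step = (hi - lo) / (n_knots - 1)
    knots = [lo + i * step for i in range(n_knots)]
    vals = [1.0 / (1.0 + math.exp(-k)) for k in knots]

    def activation(x):
        # Clamp outside the knot range, interpolate linearly inside.
        if x <= knots[0]:
            return vals[0]
        if x >= knots[-1]:
            return vals[-1]
        i = bisect_right(knots, x) - 1
        t = (x - knots[i]) / step
        return vals[i] * (1.0 - t) + vals[i + 1] * t

    return activation

sigmoid_approx = make_spline_activation()
```

With 25 knots over [−6, 6], the piecewise-linear table already tracks the true sigmoid to within about 1%, which is why a small number of spline segments suffices in practice.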
Abstract:
The corpuscular model describes black holes as leaky bound states of gravitons. To account for the role of matter, a coherent state is built and a semiclassical description is given to the gravitational field by connecting the classical source with the quantum state for gravitons. The properties of this state can be analysed with the help of an Unruh-DeWitt detector, coupled to the quantum state of the system. The presence of a detector in general regularises the usual diverging behaviour of the field in the deep ultraviolet region, and will allow us to probe the coherent state structure and the graviton emission. In particular, a Newtonian analogue of the Unruh effect will be discussed and the coherent state will be modified to properly account for the spherical symmetry of the potential at the level of the quantum state. This correction will ensure that vacuum contributions responsible for the Unruh thermal spectrum are present in a coherent state emission process.
Abstract:
This chapter provides a short review of quantum dot (QD) physics, applications, and perspectives. The main advantage of QDs over bulk semiconductors is that size becomes a control parameter to tailor the optical properties of new materials. Size changes the confinement energy, which alters the optical properties of the material, such as absorption, refractive index, and emission bands. Therefore, by using QDs one can make several kinds of optical devices. One class of devices transforms electrons into photons, for use as active optical components in illumination and displays. Other devices transform photons into electrons to produce QD solar cells or photodetectors. At the biomedical interface, the application of QDs, which is the most important aspect of this book, is based on fluorescence, which essentially transforms photons into photons of different wavelengths. This chapter introduces parameters important for QDs' biophotonic applications, such as photostability, excitation and emission profiles, and quantum efficiency. We also present perspectives for the use of QDs in fluorescence lifetime imaging (FLIM) and Förster resonance energy transfer (FRET), so useful in modern microscopy, and show how to take advantage of the usually unwanted blinking effect to perform super-resolution microscopy.
Abstract:
Fluorescence Correlation Spectroscopy (FCS) is an optical technique that allows measurement of the diffusion coefficient of molecules in a dilute sample. From the diffusion coefficient it is possible to calculate the hydrodynamic radius of the molecules. For colloidal quantum dots (QDs), the hydrodynamic radius is valuable information for studying interactions with other molecules or other QDs. In this chapter we describe the main aspects of the technique and how to use it to calculate the hydrodynamic radius of QDs.
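The step from diffusion coefficient to hydrodynamic radius goes through the Stokes-Einstein relation; a minimal sketch, with illustrative values for a colloidal QD diffusing in water at room temperature (the default parameters are my assumptions, not values from the chapter):

```python
import math

# Stokes-Einstein relation: R_h = k_B * T / (6 * pi * eta * D),
# where D is the diffusion coefficient from the FCS fit (m^2/s),
# T the temperature (K), and eta the solvent's dynamic viscosity
# (Pa*s). Defaults are illustrative: water at about 25 C.
def hydrodynamic_radius(D, T=298.15, eta=8.9e-4):
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T / (6.0 * math.pi * eta * D)

# A diffusion coefficient of ~5e-11 m^2/s yields a radius of a few
# nanometres, the expected scale for a small colloidal QD in water.
radius_m = hydrodynamic_radius(5e-11)
```

Since R_h scales as 1/D, a QD that diffuses twice as fast has half the hydrodynamic radius, which is what makes FCS sensitive to ligand shells and aggregation.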
Abstract:
Condensation processes are of key importance in nature and play a fundamental role in chemistry and physics. Owing to size effects at the nanoscale, it is conceptually desirable to probe experimentally the dependence of condensate structure on the number of constituents, one by one. Here we present an approach to study a condensation process atom by atom with the scanning tunnelling microscope, which provides direct real-space access, with atomic precision, to the aggregates formed in atomically defined 'quantum boxes'. Our analysis reveals the subtle interplay of competing directional and non-directional interactions in the emergence of structure and provides unprecedented input for structural comparison with quantum mechanical models. This approach focuses on, but is not limited to, the model case of xenon condensation and goes significantly beyond the well-established statistical size analysis of clusters in atomic or molecular beams by mass spectrometry.