3 results for uneven lighting image correction in CaltechTHESIS
Abstract:
Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.
At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.
In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.
In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.
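The idea of tailoring a code to biased noise can be illustrated with a toy model (not one of the codes analyzed in the chapter): a phase-flip repetition code corrects the dominant dephasing errors by majority vote while leaving the rare bit-flip errors unprotected. A minimal sketch, assuming independent errors and illustrative rates:

```python
from math import comb

def rep_code_logical_error(n, p):
    """Probability that majority vote over an n-qubit repetition code
    fails, i.e. more than n//2 independent errors occur."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p_z = 1e-2   # dephasing (phase-flip) rate: the dominant, biased error
p_x = 1e-4   # bit-flip rate: assumed rare

# A 5-qubit phase-flip repetition code suppresses the dominant Z errors
# (roughly 1e-2 -> 1e-5) but gives no protection against the rare X
# errors, which is exactly the trade-off an asymmetric code exploits.
print(rep_code_logical_error(5, p_z))
```

The asymmetry pays off only while the unprotected error rate stays below the suppressed one; a fully symmetric code would spend resources protecting against bit flips that almost never occur.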
In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled states which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and how quickly states converge to that limit.
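The convergence behavior described above can be illustrated with a toy iteration. The well-known 15-to-1 protocol improves state infidelity as roughly 35ε³ per round; here a hypothetical constant floor γ, contributed by the faulty Clifford gates, is added (an assumed additive model for illustration, not the analysis in the thesis):

```python
def distill(eps, gamma, rounds=6):
    """Toy model of repeated 15-to-1 magic state distillation:
    each round maps the state infidelity eps -> 35 * eps**3, plus a
    hypothetical floor gamma set by the faulty Clifford operations."""
    history = [eps]
    for _ in range(rounds):
        eps = 35 * eps**3 + gamma
        history.append(eps)
    return history

# The infidelity plunges cubically, then stalls at the Clifford floor.
for e in distill(eps=1e-2, gamma=1e-7):
    print(f"{e:.3e}")
```

In this model the fixed point is essentially γ: no amount of further distillation beats the error injected by the Clifford gates themselves, which is the qualitative limit the chapter quantifies.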
Abstract:
We have used the technique of non-redundant masking at the Palomar 200-inch telescope and radio VLBI imaging software to make optical aperture synthesis maps of two binary stars, β Corona Borealis and σ Herculis. The dynamic range of the map of β CrB, a binary star with a separation of 230 milliarcseconds, is 50:1. For σ Her, we find a separation of 70 milliarcseconds and the dynamic range of our image is 30:1. These results demonstrate the potential of the non-redundant masking technique for diffraction-limited imaging of astronomical objects with high dynamic range.
We find that the optimal integration time for measuring the closure phase is longer than that for measuring the fringe amplitude. There is not a close relationship between amplitude errors and phase errors, as is found in radio interferometry. Amplitude self calibration is less effective at optical wavelengths than at radio wavelengths. Primary beam sensitivity correction made in radio aperture synthesis is not necessary in optical aperture synthesis.
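Why the closure phase is robust while the individual fringe phases are not can be seen in a few lines: on a closed triangle of sub-apertures, an atmospheric piston error θᵢ over aperture i shifts the measured phase on baseline (i, j) by θᵢ − θⱼ, and these shifts cancel when summed around the triangle. A minimal numerical illustration (the phase values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Intrinsic source phases on the three baselines of a closed triangle.
phi_true = np.array([0.4, -1.1, 0.9])        # phi_12, phi_23, phi_31 (rad)
true_closure = phi_true.sum()

# Atmospheric piston errors theta_i over the three sub-apertures corrupt
# each measured baseline phase as phi_ij + theta_i - theta_j.
theta = rng.normal(scale=2.0, size=3)
measured = phi_true + np.array([theta[0] - theta[1],
                                theta[1] - theta[2],
                                theta[2] - theta[0]])

# Individual phases are scrambled, but their sum around the triangle
# (the closure phase) is untouched: the atmospheric terms cancel.
print(measured.sum(), true_closure)   # equal up to rounding
```

The fringe amplitude has no comparable closure quantity, which is consistent with the observation above that amplitude and phase errors behave differently at optical wavelengths.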
The effects of atmospheric disturbances on optical aperture synthesis have been studied by Monte Carlo simulations based on the Kolmogorov theory of refractive-index fluctuations. For non-redundant masking with τ_c-sized apertures, the simulated fringe amplitude gives an upper bound on the observed fringe amplitude. A smooth transition is seen from the non-redundant masking regime to the speckle regime with increasing aperture size. The fractional reduction of the fringe amplitude with increasing bandwidth is nearly independent of the aperture size. The limiting magnitudes of optical aperture synthesis with τ_c-sized apertures and with apertures larger than τ_c are derived.
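Monte Carlo simulations of this kind start from phase screens with Kolmogorov statistics. A standard FFT-based sketch of such a screen, using the familiar 0.023 r₀^(−5/3) k^(−11/3) power spectrum, is shown below; the normalization is illustrative (pixel units) and not calibrated to the simulations in the thesis:

```python
import numpy as np

def kolmogorov_phase_screen(n=256, r0_pixels=20.0, seed=0):
    """FFT-based phase screen with the Kolmogorov -11/3 power law.
    r0_pixels is the Fried parameter in pixel units; the overall
    scaling here is illustrative, not calibrated to a real telescope."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(f, f)
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                       # dummy value: avoid divide-by-zero
    psd = 0.023 * r0_pixels**(-5 / 3) * k**(-11 / 3)
    psd[0, 0] = 0.0                     # remove the piston (DC) term
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.fft.ifft2(noise * np.sqrt(psd)).real * n

screen = kolmogorov_phase_screen()
print(screen.shape, screen.std())
```

Sampling fringe amplitudes and phases over many independent screens like this one is what produces the statistical bounds and regime transitions described above.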
Monte Carlo simulations are also made to study the sensitivity and resolution of the bispectral analysis of speckle interferometry. We present the bispectral modulation transfer function and its signal-to-noise ratio at high light levels. The results confirm the validity of the heuristic interferometric view of the image-forming process in the mid-spatial-frequency range. The signal-to-noise ratio of the bispectrum at arbitrary light levels is derived in the mid-spatial-frequency range.
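The usefulness of the bispectrum for speckle imaging rests on a simple property: an image translation (such as atmospheric tip/tilt) changes the Fourier phases linearly but leaves the bispectrum phase invariant. A minimal sketch with a 1D signal:

```python
import numpy as np

def bispectrum(signal, u, v):
    """One element B(u, v) = F(u) F(v) F*(u + v) of the bispectrum."""
    F = np.fft.fft(signal)
    return F[u] * F[v] * np.conj(F[u + v])

rng = np.random.default_rng(2)
x = rng.random(64)
shifted = np.roll(x, 7)      # a translation, mimicking atmospheric tilt

# The shift multiplies F(k) by exp(-2j*pi*k*7/64); in the triple product
# the factors for u, v, and u+v cancel, so the bispectrum is unchanged.
b1 = bispectrum(x, 3, 5)
b2 = bispectrum(shifted, 3, 5)
print(np.angle(b1), np.angle(b2))    # equal up to rounding
```

Accumulating such triple products over many short exposures averages away the atmosphere while preserving object phase information, which is what the simulated signal-to-noise ratios above quantify.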
The non-redundant masking technique is suitable for imaging bright objects with high resolution and high dynamic range, while the faintest limit will be better pursued by speckle imaging.
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data.
Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded.
Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
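The per-harmonic, SNR-dependent attenuation of the "noise filters" can be sketched with a Wiener-style gain SNR/(1 + SNR) applied in the frequency domain. This is a generic illustration assuming a flat white-noise power estimate, not the thesis's treatment of digitization noise:

```python
import numpy as np

def noise_filter(accel, noise_power):
    """Attenuate each harmonic by the Wiener-style gain SNR / (1 + SNR),
    with the per-bin SNR estimated against an assumed flat noise power."""
    A = np.fft.rfft(accel)
    snr = np.abs(A)**2 / noise_power
    return np.fft.irfft(A * snr / (1.0 + snr), n=len(accel))

dt = 0.01
t = np.arange(0, 10, dt)
clean = np.sin(2 * np.pi * 1.5 * t)             # stand-in "ground motion"
rng = np.random.default_rng(3)
noisy = clean + 0.3 * rng.normal(size=t.size)   # digitization-like noise

# Flat estimate of the per-bin noise power for white noise of std 0.3:
# E|N_k|^2 = sigma^2 * N for numpy's unnormalized FFT convention.
filtered = noise_filter(noisy, 0.3**2 * t.size)
print(np.mean((noisy - clean)**2), np.mean((filtered - clean)**2))
```

Noise-dominated bins (SNR near 1) are halved while signal-dominated bins pass almost unchanged; as noted above, this per-harmonic gain alone cannot remove the low-frequency errors that drive drifts in the integrated velocities and displacements.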