16 results for Omission Error
in the Chinese Academy of Sciences Institutional Repositories Grid Portal
Abstract:
It is well known that noise and detection error degrade the performance of an adaptive optics (AO) system. In this paper, their effects on the phase-compensation effectiveness of a dynamic AO system are investigated by pure numerical simulation. A theoretical model for numerically simulating the effects of noise and detection error in a static AO system, together with a corresponding computer program, was presented in a previous article. Here, that noise and detection-error simulation is combined with our previous numerical simulation of a dynamic AO system, and a corresponding computer program has been compiled. The effects of detection error, readout noise, and photon noise are included and investigated numerically to find the preferred working conditions and the best achievable performance of a practical dynamic AO system. An approximate model is presented as well; under many practical conditions it is a good alternative to the more accurate one. A simple algorithm for reducing the effect of noise is also presented. When the signal-to-noise ratio is very low, this method can be used to improve the performance of a dynamic AO system.
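The abstract does not reproduce its noise-reduction algorithm, but one generic idea consistent with the remark about low signal-to-noise ratio is frame averaging of repeated sensor readings, which shrinks independent zero-mean noise by roughly 1/sqrt(N). This is a sketch of that generic technique, not the paper's specific algorithm; the noisy "slope measurement" below is hypothetical.

```python
import numpy as np

def averaged_measurement(measure, n_frames):
    """Average n_frames repeated noisy sensor readings.

    Independent zero-mean noise shrinks by ~1/sqrt(n_frames), improving
    the effective SNR. (Generic sketch; not the paper's specific method.)
    """
    return np.mean([measure() for _ in range(n_frames)], axis=0)

# Demo with a hypothetical noisy wavefront-slope measurement:
rng = np.random.default_rng(0)
true_slope = 0.3
noisy = lambda: true_slope + rng.normal(scale=0.5)

single = [noisy() for _ in range(1000)]
averaged = [averaged_measurement(noisy, 25) for _ in range(1000)]
print(np.std(single), np.std(averaged))  # averaging 25 frames cuts the std by ~5x
```

The trade-off is temporal bandwidth: in a dynamic AO loop, averaging over frames lowers noise at the cost of responsiveness to fast wavefront changes.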
Abstract:
In this paper, the gamma-gamma probability distribution is used to model turbulent channels. The bit error rate (BER) performance of free-space optical (FSO) communication systems employing on-off keying (OOK) or subcarrier binary phase-shift keying (BPSK) modulation is derived. A tip-tilt adaptive optics system is also incorporated into an FSO system using the above modulation formats. Tip-tilt compensation can alleviate the effects of atmospheric turbulence and thereby improve the BER performance; the improvement differs with turbulence strength and modulation format. In addition, the BER performance of systems employing subcarrier BPSK modulation is much better than that of comparable systems employing OOK modulation, with or without tip-tilt compensation.
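For illustration, gamma-gamma irradiance can be generated as the product of two independent unit-mean gamma variates (large- and small-scale turbulence), and the average BER of subcarrier BPSK estimated by Monte Carlo. This is a simplified sketch: the conditional BER model Q(sqrt(2*SNR)*I) and the parameters alpha, beta are illustrative assumptions, not the paper's exact derivation.

```python
import numpy as np
from math import erfc

def gamma_gamma_irradiance(alpha, beta, n, rng):
    # Gamma-Gamma fading: product of two independent unit-mean Gamma
    # variates; smaller alpha, beta correspond to stronger turbulence.
    return rng.gamma(alpha, 1.0 / alpha, n) * rng.gamma(beta, 1.0 / beta, n)

def ber_subcarrier_bpsk(snr_db, alpha, beta, n=50_000, seed=0):
    # Average the conditional BPSK error rate Q(sqrt(2*snr)*I) over the
    # fading samples, using Q(x) = 0.5 * erfc(x / sqrt(2)).
    rng = np.random.default_rng(seed)
    irradiance = gamma_gamma_irradiance(alpha, beta, n, rng)
    snr = 10.0 ** (snr_db / 10.0)
    return float(np.mean([0.5 * erfc(np.sqrt(snr) * i) for i in irradiance]))
```

Because the same fading samples are reused per seed, sweeping `snr_db` shows the expected monotone BER improvement; the heavy lower tail of the gamma-gamma irradiance is what keeps the faded BER well above the no-turbulence value.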
Abstract:
We further analyse the entanglement purification protocol proposed by Feng et al. [Phys. Lett. A 271 (2000) 44] in the case of imperfect local operations and measurements. It is found that this protocol allows a higher error threshold. Compared with the standard entanglement purification protocol proposed by Bennett et al. [Phys. Rev. Lett. 76 (1996) 722], it turns out that this protocol is remarkably robust against the influence of imperfect local operations and measurements.
Abstract:
In interferometric testing, the measurement result is influenced by the system structure, which reduces the measurement accuracy. To obtain an accurate test result, it is necessary to analyze the test system and establish the relationship between the measurement error and the system parameters. In this paper, the influences of the system elements, including the collimating lens and the standard surface, on interferometric testing are analyzed; expressions for the phase distribution and wavefront error at the detector are obtained; a method to remove some element errors is introduced; and the optimal structural relationships are given. (C) 2006 Elsevier GmbH. All rights reserved.
Abstract:
Based on the generalized Huygens-Fresnel diffraction integral and the stationary-phase method, we analyze the influence of an elliptical manufacturing error in an axicon on diffraction-free beam patterns. The numerical simulation is compared with beam patterns photographed with a CCD camera. Theoretical simulation and experimental results indicate that the intensity of the central spot decreases with increasing elliptical manufacturing defect and propagation distance, while the bright rings around the central spot gradually split into four or more symmetric bright spots. The experimental results fit the theoretical simulation very well. (C) 2008 Society of Photo-Optical Instrumentation Engineers.
Abstract:
Only the first-order Doppler frequency shift is considered in current dual-frequency laser interferometers; however, the second-order Doppler frequency shift should also be considered when the measurement corner cube (MCC) moves at high or variable velocity, because it can cause considerable error. The influence of the second-order Doppler frequency shift on interferometer error is studied in this paper, and a model of the second-order Doppler error is put forward. The model has been simulated for both high-velocity and variable-velocity motion. The simulated results show that the second-order Doppler error is proportional to the velocity of the MCC when it moves uniformly over a fixed measured displacement. When the MCC moves with variable velocity, the second-order Doppler error depends not only on velocity but also on acceleration. With zero muzzle velocity, the second-order Doppler error caused by an acceleration of 0.6 g can reach 2.5 nm in 0.4 s, which is not negligible in nanometric measurement. Moreover, when the muzzle velocity is nonzero, accelerated motion may result in a greater error and decelerated motion in a smaller one.
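The quoted figure can be checked with a short numerical integral. Treating the second-order displacement error as the time integral of v(t)^2 / c (an assumed model consistent with the abstract's scaling: over a fixed distance L traversed at uniform speed v it gives v*L/c, i.e. proportional to v) reproduces roughly 2.5 nm for 0.6 g of acceleration over 0.4 s:

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
G = 9.81            # standard gravity, m/s^2

def second_order_doppler_error(v, t):
    # Displacement error taken as the trapezoid-rule integral of
    # v(t)^2 / c over time (a sketch consistent with the abstract's
    # scaling, not the paper's full model).
    return float(np.sum(0.5 * (v[:-1] ** 2 + v[1:] ** 2) * np.diff(t)) / C)

t = np.linspace(0.0, 0.4, 10_001)   # 0.4 s of motion
v = 0.6 * G * t                     # 0.6 g acceleration, zero muzzle velocity
err = second_order_doppler_error(v, t)
print(f"{err * 1e9:.2f} nm")        # close to the 2.5 nm figure in the abstract
```

For constant acceleration from rest the closed form is a^2 t^3 / (3c), which also shows the cubic growth with time that makes the term non-negligible in nanometric measurement.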
Abstract:
An alternate combinational approach of genetic algorithm and neural network (AGANN) is presented to correct the systematic error of density functional theory (DFT) calculations. It treats the DFT as a black box and models the error through external statistical information. As a demonstration, the AGANN method has been applied to correct the lattice energies from DFT calculations for 72 metal halides and hydrides. With the AGANN correction, the mean absolute value of the relative errors of the calculated lattice energies with respect to the experimental values decreases from 4.93% to 1.20% in the testing set. For comparison, a neural network alone reduces the mean value to 2.56%, and the common combinational approach of genetic algorithm and neural network brings it to 2.15%. Multiple linear regression has almost no correction effect here.
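The figure of merit quoted above is the mean absolute relative error of calculated versus experimental values. A minimal sketch of that metric (the sample numbers are hypothetical, for illustration only):

```python
import numpy as np

def mean_abs_relative_error(calculated, experimental):
    # Mean absolute value of the relative errors of calculated vs.
    # experimental values -- the metric the abstract quotes
    # (4.93% before correction, 1.20% after AGANN correction).
    calculated = np.asarray(calculated, dtype=float)
    experimental = np.asarray(experimental, dtype=float)
    return float(np.mean(np.abs((calculated - experimental) / experimental)))

# Hypothetical lattice energies (kJ/mol), illustrative only:
print(mean_abs_relative_error([690.0, 912.0], [700.0, 900.0]))
```

Normalizing by the experimental value makes errors comparable across compounds whose lattice energies differ in magnitude, which is why the abstract reports percentages rather than absolute deviations.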
Abstract:
A new method to measure reciprocal four-port structures, using a 16-term error model, is presented. The measurement is based on five two-port calibration standards connected to two of the ports, while the network analyzer is connected to the two remaining ports. Least-squares data-reduction techniques are used to lower error sensitivity. The effect of connectors is deembedded using closed-form equations. (C) 2007 Wiley Periodicals, Inc.
Abstract:
The formulation of a 16-term error model based on the four-port ABCD-matrix and voltage and current variables is outlined. Matrices A, B, C, and D are each 2 x 2 submatrices of the complete 4 x 4 error matrix. The corresponding equations are linear in the error parameters, which simplifies the calibration process. The parallels with network analyzer calibration procedures and the requirement of five two-port calibration measurements are stressed. Principles for a robust choice of equations are presented. While the formulation is suitable for any network analyzer measurement, it is expected to be a useful alternative to the nonlinear y-parameter approach used in intrinsic semiconductor electrical and noise parameter measurements and parasitics deembedding.
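Because the equations are linear in the error parameters, calibration reduces to solving an overdetermined linear system by least squares, which is what suppresses measurement noise. A minimal sketch with a synthetic system (the matrix, the 16-unknown size mirroring the 16-term model, and the noise level are illustrative assumptions, not measured data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear calibration system A @ e = b: 16 unknown error
# terms, more equations than unknowns so the least-squares fit
# averages down the measurement noise.
A = rng.normal(size=(40, 16))
e_true = rng.normal(size=16)
b = A @ e_true + 1e-3 * rng.normal(size=40)   # small measurement noise

e_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.max(np.abs(e_hat - e_true)))          # recovered close to the noise floor
```

Using more calibration measurements than strictly necessary (rows of `A`) is the "robust choice of equations" idea in miniature: redundancy lowers the sensitivity of the recovered error terms to any single noisy measurement.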
Abstract:
In recognition-based user interfaces, user satisfaction is determined not only by recognition accuracy but also by the effort required to correct recognition errors. In this paper, we introduce a crossmodal error correction technique that allows users to correct Chinese handwriting recognition errors by speech. The focus of the paper is the multimodal fusion algorithm supporting this crossmodal correction. By fusing handwriting and speech recognition, the algorithm can correct errors in both character extraction and character recognition of handwriting. Experimental results indicate that the algorithm is effective and efficient, and the evaluation shows that the correction technique helps users correct handwriting recognition errors more efficiently than two other error correction techniques.
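A generic weighted late-fusion scheme gives the flavor of combining the two recognizers' hypotheses (the score format, weighting, and example characters are illustrative assumptions; the paper's fusion algorithm also handles character-extraction errors, which this sketch does not):

```python
def fuse_candidates(handwriting_scores, speech_scores, w=0.5):
    """Combine per-character confidence scores from handwriting and
    speech recognition and return the best fused candidate.

    Generic weighted late fusion (a sketch, not the paper's algorithm).
    """
    chars = set(handwriting_scores) | set(speech_scores)
    fused = {c: w * handwriting_scores.get(c, 0.0)
                + (1 - w) * speech_scores.get(c, 0.0)
             for c in chars}
    return max(fused, key=fused.get)

# The intended character can win after fusion even when handwriting
# alone ranks it second (visually similar characters confuse the
# handwriting recognizer; the speech modality disambiguates):
print(fuse_candidates({"申": 0.6, "中": 0.5}, {"中": 0.9, "钟": 0.4}))
```

The point of crossmodal correction is exactly this complementarity: the two modalities tend to make different kinds of errors, so their fused ranking is more reliable than either alone.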