961 results for Phase error
Abstract:
Because existing laser diode (LD) real-time microvibration measurement interferometers suffer from problems such as low accuracy and the need for correction before every use, in this paper we propose a new technique to realize real-time microvibration measurement using an LD sinusoidal phase-modulating interferometer, analyze the measurement theory and error, and simulate the measurement accuracy. This interferometer uses a circuit to process the interference signal and obtain the vibration frequency and amplitude of the detected signal, so a computer is not required. The influence of varying light intensity and optical path difference on the measurement result can be eliminated. The technique is real-time, convenient and fast, and also enhances the measurement accuracy. Experiments show that the repeatable measurement accuracy is within 3.37 nm, and this interferometer can be applied to real-time microvibration measurement of MEMS. (C) 2007 Elsevier GmbH. All rights reserved.
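For readers unfamiliar with the technique, the sketch below (Python) simulates a sinusoidally phase-modulated interference signal and recovers the vibration by synchronous detection at the first and second harmonics of the modulation frequency. It is only a numerical illustration of the measurement principle, not the paper's analogue circuit; the wavelength, modulation depth, and vibration parameters are assumed for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.special import jv

# Assumed illustration parameters (not taken from the paper)
lam = 785e-9                      # LD wavelength
fc, z = 100e3, 2.0                # sinusoidal phase-modulation frequency and depth
fv, av = 1e3, 20e-9               # vibration frequency and amplitude to recover
fs = 5e6
t = np.arange(0, 5e-3, 1 / fs)

phi = 4 * np.pi * av * np.cos(2 * np.pi * fv * t) / lam + 0.7   # object phase (0.7 rad bias assumed)
I = 1.0 + 0.8 * np.cos(z * np.cos(2 * np.pi * fc * t) + phi)    # sinusoidally phase-modulated signal

# Synchronous detection at fc and 2*fc, then low-pass filtering
sos = butter(4, 10e3 / (fs / 2), output="sos")
S1 = sosfiltfilt(sos, I * np.cos(2 * np.pi * fc * t))   # ~ -B*J1(z)*sin(phi)
S2 = sosfiltfilt(sos, I * np.cos(4 * np.pi * fc * t))   # ~ -B*J2(z)*cos(phi)
phi_rec = np.arctan2(S1 / jv(1, z), S2 / jv(2, z))      # recovered phase (offset by pi)
d = (phi_rec - phi_rec.mean()) * lam / (4 * np.pi)      # vibration displacement
print("recovered amplitude ≈ %.1f nm" % (1e9 * (d.max() - d.min()) / 2))
```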
Abstract:
A set of recursive formulas for the design of diffractive optical plates is described. The pure-phase plates simulated by this method homogeneously concentrate more than 96% of the incident laser energy in the desired focal-plane region. The focal-plane intensity profile fits a high-order super-Gaussian function and has a nearly perfect flat top. Its fit to the required profile, measured as the mean square error, is 3.576 × 10⁻³. (C) 1996 Optical Society of America
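The recursive design formulas themselves are not given in the abstract. As a rough illustration of the same design problem, the sketch below uses a generic iterative Fourier-transform (Gerchberg-Saxton-style) loop, not the paper's method, to compute a pure-phase plate that reshapes a uniform circular beam into a flat-top (super-Gaussian) focal spot; the grid size, aperture, target width, and super-Gaussian order are all assumed.

```python
import numpy as np

N = 512
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2

amp_in = (r2 <= 0.8**2).astype(float)              # uniform beam over a circular aperture (assumed)
target_amp = np.sqrt(np.exp(-(r2 / 0.05**2)**5))   # flat-top (10th-order super-Gaussian) target (assumed)

phase = np.zeros((N, N))
for _ in range(50):                                 # iterative Fourier-transform loop
    focal = np.fft.fftshift(np.fft.fft2(amp_in * np.exp(1j * phase)))
    focal = target_amp * np.exp(1j * np.angle(focal))          # impose the target amplitude
    phase = np.angle(np.fft.ifft2(np.fft.ifftshift(focal)))    # keep only the phase (pure-phase plate)

# Fraction of energy landing in the flat-top region (illustrative only;
# not a reproduction of the paper's 96% figure)
focal_I = np.abs(np.fft.fftshift(np.fft.fft2(amp_in * np.exp(1j * phase))))**2
inside = target_amp**2 > np.exp(-1)
print("energy inside the flat-top region:", focal_I[inside].sum() / focal_I.sum())
```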
Abstract:
In interferometric testing, the measurement result is influenced by the system structure, which reduces the measurement accuracy. To obtain an accurate test result, it is necessary to analyze the test system and establish the relationship between the measurement error and the system parameters. In this paper, the influences of the system elements, including the collimating lens and the standard (reference) surface, on the interferometric test are analyzed; the expressions for the phase distribution and wavefront error on the detector are obtained; a method to remove some element errors is introduced; and the optimum structural relationships are given. (C) 2006 Elsevier GmbH. All rights reserved.
Abstract:
Based on the generalized Huygens-Fresnel diffraction integral and the stationary-phase method, we analyze the influence of an elliptical manufacturing error in an axicon on the diffraction-free beam patterns. The numerical simulation is compared with beam patterns photographed with a CCD camera. Theoretical simulation and experimental results indicate that the intensity of the central spot decreases with increasing elliptical manufacturing defect and propagation distance. Meanwhile, the bright rings around the central spot gradually split into four or more symmetrical bright spots. The experimental results fit the theoretical simulation very well. (C) 2008 Society of Photo-Optical Instrumentation Engineers.
Abstract:
Estimation of the far-field centre is carried out in beam auto-alignment. In this paper, the features of the far field of a square beam are presented. Based on these features, a phase-only matched filter is designed and an algorithm for centre estimation is developed. Using simulated images with different kinds of noise and 40 test images taken in sequence, the accuracy of this algorithm is estimated. Results show that the error is no more than one pixel for simulated noise images with 99% probability, and the stability is within one pixel for the test images. Using the improved algorithm, the computation time is reduced to 0.049 s.
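A phase-only matched filter for centre estimation can be sketched as follows (Python); the template, regularisation constant, and peak-picking step are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def phase_only_centre(image, template):
    """Locate the template (e.g. the far-field pattern of the square beam)
    in an image using a phase-only matched filter."""
    F_img = np.fft.fft2(image)
    F_tpl = np.fft.fft2(template, s=image.shape)
    H = np.conj(F_tpl) / (np.abs(F_tpl) + 1e-12)     # keep only the phase of the template spectrum
    corr = np.fft.ifft2(F_img * H)
    return np.unravel_index(np.argmax(np.abs(corr)), corr.shape)   # (row, col) of the peak

# Toy usage: a noisy, shifted copy of the template
tpl = np.zeros((32, 32)); tpl[12:20, 12:20] = 1.0
img = np.zeros((256, 256)); img[100:132, 140:172] = tpl
img += 0.1 * np.random.randn(*img.shape)
print(phase_only_centre(img, tpl))   # ~ (100, 140): position of the template's origin
```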
Abstract:
Computational fluid dynamics (CFD) simulations are becoming increasingly widespread with the advent of more powerful computers and more sophisticated software. The aim of these developments is to facilitate more accurate reactor design and optimization methods compared to traditional lumped-parameter models. However, for CFD to be a trusted method, it must be validated using experimental data acquired at sufficiently high spatial resolution. This article validates an in-house CFD code by comparison with flow-field data obtained using magnetic resonance imaging (MRI) for a packed bed with a column-to-particle diameter ratio of 2. Flows characterized by inlet Reynolds numbers, based on particle diameter, of 27, 55, 111, and 216 are considered. The code employs preconditioning to solve directly for pressure in low-velocity flow regimes. Excellent agreement was found between the MRI and CFD data, with the relative error between the experimentally determined and numerically predicted flow-fields being in the range of 3-9%. © 2012 American Institute of Chemical Engineers (AIChE).
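The 3-9% figure is a relative error between measured and predicted flow-fields; the exact norm is not stated in the abstract, so the snippet below assumes a simple L2-norm definition and made-up example fields.

```python
import numpy as np

def relative_error_percent(u_cfd, u_mri):
    """Relative error between predicted and measured velocity fields, assuming
    an L2-norm definition (the norm used in the paper is not stated here)."""
    u_cfd, u_mri = np.asarray(u_cfd, float), np.asarray(u_mri, float)
    return 100.0 * np.linalg.norm(u_cfd - u_mri) / np.linalg.norm(u_mri)

# Example with made-up fields on a 3-D grid
rng = np.random.default_rng(0)
u_mri = rng.normal(size=(32, 32, 32))
u_cfd = u_mri + 0.05 * rng.normal(size=u_mri.shape)
print(f"relative error: {relative_error_percent(u_cfd, u_mri):.1f}%")
```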
Abstract:
This paper presents a long-range and effectively error-free ultra high frequency (UHF) radio frequency identification (RFID) interrogation system. The system is based on a novel technique whereby two or more spatially separated transmit and receive antennas are used to achieve greatly enhanced tag detection performance over longer distances, using antenna diversity combined with frequency and phase hopping. The technique is first theoretically modelled using a Rician fading channel. It is shown that conventional RFID systems suffer from multi-path fading, resulting in nulls in radio environments. We demonstrate, for the first time, that the nulls can be moved around by varying the phase and frequency of the interrogation signals in a multi-antenna system; as a result, much enhanced coverage can be achieved. A proof-of-principle prototype RFID system is built based on an Impinj R2000 transceiver. The demonstrator shows that the new approach improves the tag detection accuracy from <50% to 100% and the tag backscatter signal strength by 10 dB over a 20 m × 9 m area, compared with a conventional switched multi-antenna RFID system.
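The mechanism can be illustrated with a toy two-path fading model (an assumed stand-in for the Rician channel analysed in the paper): at a fixed frequency and phase the direct and reflected rays can cancel, but hopping over frequency and phase moves the null and restores a usable link.

```python
import numpy as np

c = 3.0e8
d_direct, d_reflect = 10.0, 14.098   # assumed path lengths; chosen so 915 MHz sits near a null

def link_gain(f_hz, tx_phase=0.0):
    """Two-path channel: direct ray plus one strong reflection."""
    direct = np.exp(-2j * np.pi * f_hz * d_direct / c)
    reflect = 0.9 * np.exp(-2j * np.pi * f_hz * d_reflect / c + 1j * tx_phase)
    return abs(direct + reflect)

fixed = link_gain(915e6)                                   # one frequency, one phase: deep fade
hops = [link_gain(f, p)
        for f in np.linspace(902e6, 928e6, 50)             # frequency hopping across the UHF band
        for p in np.linspace(0, 2 * np.pi, 8, endpoint=False)]   # phase hopping
print(f"fixed setting gain: {fixed:.3f}, best gain over hops: {max(hops):.3f}")
```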
Abstract:
This paper presents a direct digital frequency synthesizer (DDFS) with a 16-bit accumulator, a fourth-order phase domain single-stage Delta Sigma interpolator, and a 300-MS/s 12-bit current-steering DAC based on the Q² Random Walk switching scheme. The Delta Sigma interpolator is used to reduce the phase truncation error and the ROM size. The implemented fourth-order single-stage Delta Sigma noise shaper reduces the effective phase bits by four and reduces the ROM size by 16 times. The DDFS prototype is fabricated in a 0.35-µm CMOS technology with active area of 1.11 mm² including a 12-bit DAC. The measured DDFS spurious-free dynamic range (SFDR) is greater than 78 dB using a reduced ROM with 8-bit phase, 12-bit amplitude resolution and a size of 0.09 mm². The total power consumption of the DDFS is 200 mW with a 3.3-V power supply.
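The role of the noise shaper can be sketched with a behavioural model of the phase accumulator: truncating the 16 phase bits to an 8-bit ROM address produces spurs, and feeding the truncation error back pushes that error toward high frequencies, where the output filtering removes it. The sketch below (Python) uses a first-order error-feedback loop for brevity, whereas the paper implements a fourth-order single-stage shaper; the frequency control word is an assumed value.

```python
import numpy as np

ACC_BITS, PHASE_BITS = 16, 8          # 16-bit accumulator, 8-bit ROM phase address (as in the paper)
SHIFT = ACC_BITS - PHASE_BITS
FCW = 1237                            # frequency control word (assumed)

def ddfs_sine(n, shape_noise=True):
    """Behavioural phase accumulator with truncation and optional first-order
    error-feedback noise shaping of the discarded phase bits."""
    acc, err, out = 0, 0, []
    for _ in range(n):
        acc = (acc + FCW) & (2**ACC_BITS - 1)
        word = (acc + err) & (2**ACC_BITS - 1) if shape_noise else acc
        addr = word >> SHIFT                     # truncated phase sent to the sine ROM
        if shape_noise:
            err = word - (addr << SHIFT)         # truncation error fed back on the next cycle
        out.append(np.sin(2 * np.pi * addr / 2**PHASE_BITS))
    return np.array(out)

wave = ddfs_sine(4096)   # synthesized output; close-in truncation spurs are whitened and shaped
```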
Abstract:
A novel algorithm for phase reconstruction based on the integral of the phase gradient is presented. The algorithm directly derives two real-valued partial derivatives from three phase-shifted interferograms. By integrating the phase derivatives, the desired phase is reconstructed. During the reconstruction process there is no need for an extra rewrapping step to keep the values of the phase derivatives in the interval [−π, π], as in previous methods, so the algorithm avoids the error or distortion introduced by a phase unwrapping operation. Additionally, the algorithm is fast, easy to implement, and insensitive to nonuniformity of the intensity distribution of the interferogram. The feasibility of the algorithm is demonstrated by both computer simulation and experiment.
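A minimal sketch of the gradient-then-integrate idea is given below. It assumes three interferograms phase-shifted by 0, π/2 and π and a simple row/column path integration; the paper's own three-frame derivative formulas are not reproduced in the abstract.

```python
import numpy as np

def phase_from_three_frames(I1, I2, I3, eps=1e-12):
    """Sketch of gradient-then-integrate phase reconstruction, assuming phase
    shifts of 0, pi/2 and pi between the three interferograms."""
    # Complex fringe term C = B*exp(i*phi) built from the three frames
    C = (I1 - I3) / 2.0 + 1j * ((I1 + I3) / 2.0 - I2)
    # Phase derivatives computed directly from C, so no wrapping ever occurs
    gx = np.imag(np.conj(C) * np.gradient(C, axis=1)) / (np.abs(C)**2 + eps)
    gy = np.imag(np.conj(C) * np.gradient(C, axis=0)) / (np.abs(C)**2 + eps)
    # Simple path integration: down the first column, then along each row
    phi = np.cumsum(gx, axis=1)
    phi -= phi[:, :1]
    col0 = np.cumsum(gy[:, 0])
    phi += (col0 - col0[0])[:, None]
    return phi   # reconstructed up to an additive constant

# Synthetic check with a smooth quadratic phase
y, x = np.mgrid[0:128, 0:128] / 128.0
true_phi = 6 * (x**2 + y**2)
I1 = 1 + 0.5 * np.cos(true_phi)
I2 = 1 + 0.5 * np.cos(true_phi + np.pi / 2)
I3 = 1 + 0.5 * np.cos(true_phi + np.pi)
rec = phase_from_three_frames(I1, I2, I3)
```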
New uniform algorithm to predict reversed phase retention values under different gradient conditions
Abstract:
A new numerical algorithm was established to calculate retention parameters in RP-HPLC from several retention times measured under different linear or nonlinear binary gradient elution conditions, and then to predict the retention time under any other binary gradient conditions. A program was written according to this algorithm and nine solutes were used to test it. The prediction results were excellent: the maximum relative error of the predicted retention times was less than 0.45%. (C) 2002 Elsevier Science B.V. All rights reserved.
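The abstract does not give the algorithm itself. The sketch below illustrates the underlying idea of predicting gradient retention times by numerically integrating the fundamental gradient-elution equation, here with an assumed linear-solvent-strength retention model and assumed solute and gradient parameters, not the paper's method.

```python
import numpy as np

def predict_gradient_retention(lnkw, S, gradient, t0=1.0, dt=1e-3):
    """Numerically integrate the gradient-elution equation for a solute obeying
    the LSS model ln k = lnkw - S*phi (an assumption, not the paper's model).
    `gradient(t)` returns the organic fraction phi at time t (in minutes)."""
    t, travelled = 0.0, 0.0
    while travelled < 1.0:
        k = np.exp(lnkw - S * gradient(t))
        travelled += dt / (t0 * k)        # fraction of the retained migration completed in dt
        t += dt
    return t + t0                         # add the unretained (dead) time

# Example: linear gradient from 30% to 90% organic over 20 min (assumed values)
linear = lambda t: min(0.3 + (0.9 - 0.3) * t / 20.0, 0.9)
print(predict_gradient_retention(lnkw=4.0, S=8.0, gradient=linear))
```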
Abstract:
A method for estimating the one-phase structure seminvariants (OPSSs) having values of 0 or π has been proposed on the basis of the probabilistic theory of the three-phase structure invariants for a pair of isomorphous structures [Hauptman (1982). Acta Cryst. A38, 289-294]. The test calculations using error-free diffraction data of the protein cytochrome c550 and its PtCl₄²⁻ derivative show that reliable estimates of a number of the OPSSs can be obtained. The reliability of the estimation increases with the increase of the differences between diffraction intensities of the native protein and its heavy-atom derivative. A means to estimate the parameters of the distribution from the diffraction ratio is suggested.
Abstract:
With the intermediate-complexity Zebiak-Cane model, we investigate the 'spring predictability barrier' (SPB) problem for El Nino events by tracing the evolution of conditional nonlinear optimal perturbation (CNOP), where CNOP is superimposed on the El Nino events and acts as the initial error with the biggest negative effect on the El Nino prediction. We show that the evolution of CNOP-type errors has obvious seasonal dependence and yields a significant SPB, with the most severe occurring in predictions made before the boreal spring in the growth phase of El Nino. The CNOP-type errors can be classified into two types: one possessing a sea-surface-temperature anomaly pattern with negative anomalies in the equatorial central-western Pacific, positive anomalies in the equatorial eastern Pacific, and a thermocline depth anomaly pattern with positive anomalies along the Equator, and another with patterns almost opposite to those of the former type. In predictions through the spring in the growth phase of El Nino, the initial error with the worst effect on the prediction tends to be the latter type of CNOP error, whereas in predictions through the spring in the decaying phase, the initial error with the biggest negative effect on the prediction is inclined to be the former type of CNOP error. Although the linear singular vector (LSV)-type errors also have patterns similar to the CNOP-type errors, they cover a more localized area than the CNOP-type errors and cause a much smaller prediction error, yielding a less significant SPB. Random errors in the initial conditions are also superimposed on El Nino events to investigate the SPB. We find that, whenever the predictions start, the random errors neither exhibit an obvious season-dependent evolution nor yield a large prediction error, and thus may not be responsible for the SPB phenomenon for El Nino events. These results suggest that the occurrence of the SPB is closely related to particular initial error patterns. The two kinds of CNOP-type error are most likely to cause a significant SPB. They have opposite signs and, consequently, opposite growth behaviours, a result which may demonstrate two dynamical mechanisms of error growth related to SPB: in one case, the errors grow in a manner similar to El Nino; in the other, the errors develop with a tendency opposite to El Nino. The two types of CNOP error may be most likely to provide the information regarding the 'sensitive area' of El Nino-Southern Oscillation (ENSO) predictions. If these types of initial error exist in realistic ENSO predictions and if a target method or a data assimilation approach can filter them, the ENSO forecast skill may be improved. Copyright (C) 2009 Royal Meteorological Society
Abstract:
This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described.
VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
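The core VITE dynamics described above (DV = TPC − PPC, gated by GO and integrated at the PPC) can be written down directly. The sketch below is a minimal numerical illustration with assumed gains and step sizes, not the full AVITE/VAM learning circuit.

```python
import numpy as np

def vite_reach(tpc, ppc0, go=5.0, dt=0.01, steps=400):
    """Minimal sketch of the VITE trajectory generator: the Difference Vector
    DV = TPC - PPC is gated by a GO signal and integrated into the Present
    Position Command until DV reaches zero."""
    ppc = np.array(ppc0, dtype=float)
    trajectory = [ppc.copy()]
    for _ in range(steps):
        dv = np.asarray(tpc) - ppc          # Difference Vector
        ppc += dt * go * dv                 # PPC integrates the (DV)*(GO) product
        trajectory.append(ppc.copy())
    return np.array(trajectory)             # PPC converges to TPC as DV -> 0

# Reach from an initial arm configuration to a visually specified target (assumed coordinates)
path = vite_reach(tpc=[0.4, 0.7], ppc0=[0.0, 0.0])
```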
Abstract:
During the 1970s and 1980s, the late Dr Norman Holme undertook extensive towed sledge surveys in the English Channel and some in the Irish Sea. Only a minority of the resulting images were analysed and reported before his death in 1989, but logbooks, video and film material have been archived in the National Marine Biological Library (NMBL) in Plymouth. A scoping study was therefore commissioned by the Joint Nature Conservation Committee, as part of the Mapping European Seabed Habitats (MESH) project, to identify the value of the archived material and the procedure and cost of undertaking further work. The results of the scoping study are:
1. NMBL archives hold 106 videotapes (reel-to-reel Sony HD format) and 59 video cassettes in VHS format (including 15 from the Irish Sea), together with 90 rolls of 35 mm colour transparency film (various lengths, up to about 240 frames per film). These are stored in the Archive Room, either in a storage cabinet or in the original film canisters.
2. The reel-to-reel material is extensive and had already been selectively copied to VHS cassettes. The cost of transferring it to an accepted 'long-life' medium (Betamax) would be approximately £15,000. It was not possible to view the tapes as a suitable machine was not located. The value of the tapes is uncertain, but they are likely to degrade beyond salvage within one to two years.
3. The video cassette material is in good condition and is expected to remain so for several more years at least. The images viewed were generally of poor quality, and the speed of the tow often blurs the pictures. No immediate action is required.
4. The colour transparency films are in good condition and the images are very clear. They provide the best source of information for mapping seabed biotopes. They should be scanned to digital format, but inexpensive fast copying is problematic because there are no between-frame breaks between images and scanning machines centre each image on such breaks. The minimum cost to scan all of the images commercially is approximately £6,000, and could be as much as £40,000 on some quotations. There is a further cost in coding and databasing each image; all in all, it would seem most economic to purchase a 'continuous film' scanner and undertake the work in-house.
5. Positional information in the ships' logs has been matched to the films and video tapes. Decca Chain co-ordinates recorded in the logbooks have been converted to latitude and longitude (degrees, minutes and seconds), and a further routine was developed to convert these to degrees and decimal degrees as required for GIS mapping (a minimal sketch of such a conversion is given after this list). However, it is unclear whether corrections to the Decca positions were applied at the time each position was noted. Tow tracks have been mapped onto an electronic copy of a Hydrographic Office chart.
6. The positions of the start and end of each tow were entered into a spreadsheet so that they can be displayed in GIS or on a Hydrographic Office chart backdrop. The cost of the Hydrographic Office chart backdrop at a scale of 1:75,000 for the whole area was £458 incl. VAT.
7. Viewing all of the video cassettes to note habitats and biological communities, even by an experienced marine biologist, would take on the order of at least 200 hours and is not recommended.
8. Once the colour transparencies are scanned and indexed, viewing them to identify seabed habitats and biological communities would probably take about 100 hours for an experienced marine biologist and is recommended.
9. It is expected that identifying biotopes along approximately 1 km lengths of each tow would be feasible, although uncertainties about Decca co-ordinate corrections and the exact positions of images most likely give a ±250 m position error. More work to locate each image accurately and to resolve the Decca correction question would improve the accuracy of image location.
10. Using the codings produced by Holme to identify different seabed types, and some viewing of the video and transparency material, 10 biotopes have been identified, although more would be added as a result of full analysis.
11. Using the data available from the Holme archive, it is possible to populate various fields within the Marine Recorder database. The overall 'survey' will be 'English Channel towed video sled survey'. The 'events' become the 104 tows. Each tow could be described as four samples, i.e. the start and end of the tow and two areas in the middle to give examples along the length of the tow. These samples would have their own latitude/longitude co-ordinates. The four samples would link to a GIS map.
12. Stills and video clips, together with text information, could be incorporated into a multimedia presentation to demonstrate the range of level seabed types found along a part of the northern English Channel. More recent images taken during SCUBA diving of reef habitats in the same area as the towed sledge surveys could be added to the Holme images.
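As referenced in item 5, a minimal sketch of the degrees-minutes-seconds to decimal-degrees step is shown below; the example co-ordinates are hypothetical and not taken from the Holme logbooks, and the Decca-to-latitude/longitude conversion itself is not reproduced here.

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds (as logged for each tow) to the
    signed decimal degrees needed for GIS plotting."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if hemisphere in ("S", "W") else value

# Hypothetical example position (not from the archive)
lat = dms_to_decimal(50, 15, 30, "N")   # 50.2583...
lon = dms_to_decimal(4, 10, 12, "W")    # -4.17
```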