56 results for ERROR-CORRECTION
Abstract:
Polymethacrylate-based monolithic columns were prepared for capillary electrochromatography (CEC) by in situ copolymerization of butyl methacrylate (BMA), 2-acrylamido-2-methyl-1-propanesulfonic acid (AMPS), and ethylene dimethacrylate (EDMA) in the presence of a porogen in fused-silica capillaries of 100 μm I.D. The anomalous phenomenon that retention factors for neutral species decrease with applied voltage was observed in CEC. Capillary electrophoresis (CE) instruments usually require a period of time, called the ramp time, to raise the voltage from 0 kV to the set value. This ramp time, together with any error in the determination of the dead time, should be taken into account for accurate calculation of retention factors. After correction, plots of the corrected retention factors for alkylbenzenes versus applied voltage gave slopes with absolute values below 1.8 × 10⁻⁴, indicating that the corrected retention factors for neutral species show no dependence on applied voltage. Further, plots of the corrected retention times for acidic and basic compounds, eluted in their neutral forms, versus the reciprocal of the applied voltage showed excellent linearity, with correlation coefficients above 0.999. Here, the slopes of the plots represent
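As an illustration of the correction described above, the sketch below computes a ramp-time- and dead-time-corrected retention factor. It assumes a linear voltage ramp, so that migration during the ramp is equivalent to half the ramp time at full voltage; the function name and numerical values are hypothetical, not taken from the paper.

```python
# Illustrative sketch (not from the paper): correcting CEC retention factors
# for the voltage ramp time and a dead-time determination error. Assumes a
# linear ramp from 0 kV to the set voltage, so migration during the ramp is
# equivalent to t_ramp / 2 of migration at full voltage.

def corrected_retention_factor(t_r, t_0, t_ramp, dt_0=0.0):
    """Retention factor k = (t_R - t_0) / t_0 after correcting both the
    retention time t_r and the dead time t_0."""
    t_r_eff = t_r - t_ramp / 2.0          # effective retention time at full voltage
    t_0_eff = t_0 - t_ramp / 2.0 - dt_0   # effective dead time, minus its error
    return (t_r_eff - t_0_eff) / t_0_eff

# Hypothetical values (minutes): retention 6.80, dead time 4.20, 0.25 min ramp
print(corrected_retention_factor(6.80, 4.20, 0.25))
```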
Abstract:
Correction of spectral overlap interference in inductively coupled plasma atomic emission spectrometry by factor analysis is attempted. For the spectral overlap of two known lines, a data matrix can be composed from one or two pure spectra and a spectrum of the mixture. The data matrix is decomposed into a spectra matrix and a concentration matrix by target transformation factor analysis. The concentration of the component of interest in a binary mixture is obtained from the concentration matrix, and interference from the other component is thereby eliminated. The method is applied to correcting the spectral interference of yttrium on the determination of copper and aluminium, with satisfactory results. It may also be applied to correcting spectral overlap interference among more than two lines. Like other methods of correcting spectral interferences, factor analysis can only be used for additive spectral overlap. Results obtained from measurements on copper/yttrium mixtures with different levels of added white noise show that random errors in the measurement data do not significantly affect the results of the correction method.
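The additive-overlap idea can be shown with a small numerical sketch. The snippet below uses plain linear least squares in place of the paper's target transformation factor analysis, and the line positions and widths are assumed for illustration only:

```python
# Minimal sketch of additive spectral-overlap correction (illustrative, not
# the paper's exact TTFA implementation): with pure-component spectra as the
# columns of S, an additive mixture spectrum d satisfies d = S @ c, so the
# concentration vector c follows from linear least squares.
import numpy as np

wl = np.linspace(324.70, 324.80, 51)                      # wavelength grid (nm)
line = lambda c0, w: np.exp(-0.5 * ((wl - c0) / w) ** 2)  # Gaussian line profile

s_cu = line(324.754, 0.004)   # Cu I 324.754 nm (assumed profile width)
s_y  = line(324.740, 0.004)   # overlapping Y line (assumed position)
S = np.column_stack([s_cu, s_y])

c_true = np.array([1.0, 0.5])
d = S @ c_true + 0.01 * np.random.default_rng(0).standard_normal(wl.size)

c_est, *_ = np.linalg.lstsq(S, d, rcond=None)
print(c_est)   # close to [1.0, 0.5]; the Y contribution is separated out
```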
Abstract:
This work evaluates the effect of wavelength positioning errors in spectral scans on analytical results when the Kalman filtering technique is used for the correction of line interferences in inductively coupled plasma atomic emission spectrometry (ICP-AES). The results show that a positioning accuracy of 0.1 pm is required in order to obtain accurate and precise estimates for analyte concentrations. The positioning error in sample scans is more crucial than that in model scans. The relative bias in measured analyte concentration originating from a positioning error in a sample scan increases linearly with an increase in the magnitude of the error and the peak distance of the overlapping lines, but is inversely proportional to the signal-to-background ratio. By the use of an optimization procedure for the positions of scans with the innovations number as the criterion, the wavelength positioning error can be reduced and, correspondingly, the accuracy and precision of analytical results improved.
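A generic version of such a Kalman-filter interference correction can be sketched as follows; the state definition, line profiles, and noise levels here are assumptions for illustration, not the paper's actual model scans:

```python
# Hedged sketch of Kalman filtering for line-interference correction.
# State x = [c_analyte, c_interferent, background]; at each wavelength the
# measurement is y_i = c_a*f_a(w_i) + c_b*f_b(w_i) + b + noise, so the state
# is static and the measurement vector H varies across the scan.
import numpy as np

def kalman_concentrations(wl, y, f_a, f_b, r=1e-4):
    x = np.zeros(3)                 # initial state estimate
    P = np.eye(3) * 1e3             # large initial uncertainty
    innovations = []
    for w, yi in zip(wl, y):
        H = np.array([f_a(w), f_b(w), 1.0])   # measurement vector
        nu = yi - H @ x                       # innovation
        S = H @ P @ H + r
        K = P @ H / S                         # Kalman gain
        x = x + K * nu
        P = P - np.outer(K, H @ P)
        innovations.append(nu)
    return x, np.array(innovations)

wl = np.linspace(-10, 10, 201)  # offsets in pm from the analyte line centre
f_a = lambda w: np.exp(-0.5 * (w / 2.0) ** 2)          # analyte profile
f_b = lambda w: np.exp(-0.5 * ((w - 3.0) / 2.0) ** 2)  # interferent, 3 pm away
y = 1.0 * f_a(wl) + 0.4 * f_b(wl) + 0.05
x, nu = kalman_concentrations(wl, y, f_a, f_b)
print(x)   # ~ [1.0, 0.4, 0.05]; the innovations nu flag model mismatch
```

A wavelength positioning error shifts the sample scan relative to f_a and f_b, inflating the innovations; monitoring them is what motivates the innovations-number criterion mentioned in the abstract.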
Abstract:
With the intermediate-complexity Zebiak-Cane model, we investigate the 'spring predictability barrier' (SPB) problem for El Nino events by tracing the evolution of the conditional nonlinear optimal perturbation (CNOP), where the CNOP is superimposed on the El Nino events and acts as the initial error with the biggest negative effect on the El Nino prediction. We show that the evolution of CNOP-type errors has an obvious seasonal dependence and yields a significant SPB, the most severe occurring in predictions made before the boreal spring in the growth phase of El Nino. The CNOP-type errors can be classified into two types: one possessing a sea-surface-temperature anomaly pattern with negative anomalies in the equatorial central-western Pacific, positive anomalies in the equatorial eastern Pacific, and a thermocline depth anomaly pattern with positive anomalies along the Equator; and another with patterns almost opposite to those of the former type. In predictions through the spring in the growth phase of El Nino, the initial error with the worst effect on the prediction tends to be the latter type of CNOP error, whereas in predictions through the spring in the decaying phase, the initial error with the biggest negative effect tends to be the former type. Although linear singular vector (LSV)-type errors have patterns similar to the CNOP-type errors, they cover a more localized area and cause a much smaller prediction error, yielding a less significant SPB. Random errors in the initial conditions are also superimposed on El Nino events to investigate the SPB. We find that, whenever the predictions start, the random errors neither exhibit an obvious season-dependent evolution nor yield a large prediction error, and thus may not be responsible for the SPB phenomenon for El Nino events. These results suggest that the occurrence of the SPB is closely related to particular initial error patterns: the two kinds of CNOP-type error are the most likely to cause a significant SPB. They have opposite signs and, consequently, opposite growth behaviours, which may indicate two dynamical mechanisms of error growth related to the SPB: in one case the errors grow in a manner similar to El Nino; in the other, the errors develop with a tendency opposite to El Nino. The two types of CNOP error are also the most likely to provide information about the 'sensitive area' of El Nino-Southern Oscillation (ENSO) predictions. If these types of initial error exist in realistic ENSO predictions, and if a targeted-observation method or a data assimilation approach can filter them, ENSO forecast skill may be improved. Copyright (C) 2009 Royal Meteorological Society
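To make the CNOP definition concrete: it is the initial perturbation that maximizes the nonlinear forecast error under a norm constraint on the initial error. The toy sketch below illustrates this with a simple two-variable model standing in for Zebiak-Cane; the model, bound, and initial state are all hypothetical:

```python
# Toy illustration of the CNOP idea (conditional nonlinear optimal
# perturbation): maximise nonlinear forecast-error growth over all initial
# errors within a given norm bound. A two-variable toy oscillator stands in
# for the Zebiak-Cane model used in the paper.
import numpy as np
from scipy.optimize import minimize

def propagate(x, steps=200, dt=0.05):
    """Simple nonlinear toy model (stand-in, not Zebiak-Cane)."""
    x = np.array(x, dtype=float)
    for _ in range(steps):
        dx = np.array([x[1] - 0.3 * x[0] * (x[0]**2 + x[1]**2),
                       -x[0] - 0.3 * x[1] * (x[0]**2 + x[1]**2)])
        x = x + dt * dx
    return x

x0 = np.array([0.5, 0.0])   # reference state (the "El Nino event")
beta = 0.1                  # norm bound on the initial error

def neg_growth(d):          # maximise ||M(x0 + d) - M(x0)||
    return -np.linalg.norm(propagate(x0 + d) - propagate(x0))

con = {"type": "ineq", "fun": lambda d: beta - np.linalg.norm(d)}
res = minimize(neg_growth, 0.05 * np.ones(2), constraints=con, method="SLSQP")
print(res.x, -res.fun)      # CNOP-type error and its nonlinear growth
```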
Theoretical investigation on the adsorption of Ag+ and hydrated Ag+ cations on clean Si(111) surface
Abstract:
In this paper, the adsorption of Ag+ and hydrated Ag+ cations on the clean Si(111) surface was investigated using cluster (Gaussian 03) and periodic (DMol3) ab initio calculations. The Si(111) surface was described with cluster models (Si14H17 and Si22H21) and with a four-silicon-layer slab under periodic boundary conditions. The effect of basis set superposition error (BSSE) was taken into account by applying the counterpoise correction. The calculated results indicate that the binding energies between hydrated Ag+ cations and the clean Si(111) surface are large, suggesting a strong interaction between the hydrated cations and the semiconductor surface. As the number of water molecules increases, the water molecules form a hydrogen-bond network with one another, while only one water molecule binds directly to the Ag+ cation. The Ag+ cation in aqueous solution will therefore attach stably to the clean Si(111) surface.
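For reference, the counterpoise correction mentioned here evaluates each fragment in the full basis of the complex. A minimal sketch, with placeholder energies rather than values from the paper:

```python
# Sketch of the counterpoise (CP) correction for basis-set superposition
# error (Boys-Bernardi scheme): all monomer energies are evaluated in the
# full dimer basis. The numbers below are placeholders, not from the paper.

def cp_corrected_binding_energy(e_complex, e_frag1_dimer_basis, e_frag2_dimer_basis):
    """CP-corrected interaction energy:
    dE = E(AB; AB basis) - E(A; AB basis) - E(B; AB basis)."""
    return e_complex - e_frag1_dimer_basis - e_frag2_dimer_basis

# Placeholder hartree values for, e.g., an Ag+ ... Si-cluster adduct
print(cp_corrected_binding_energy(-150.2500, -146.1000, -4.1200))
```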
Abstract:
To model the adsorption of Na+ in aqueous solution on the semiconductor surface, the interactions of Na+ and Na+(H2O)n (n = 1-6) with a clean Si(111) surface were investigated using hybrid density functional theory (B3LYP) and second-order Møller-Plesset perturbation theory (MP2). The Si(111) surface was described with Si8H12, Si16H20, and Si22H21 cluster models. The effect of the basis set superposition error (BSSE) was taken into account by applying the counterpoise (CP) correction. The calculated results indicate that the interactions between the Na+ cation and the dangling bonds of the Si(111) surface are primarily electrostatic, with partial orbital interactions. The magnitude of the binding energy depends only weakly on the adsorption site and the size of the cluster. When water molecules are present, the interaction between Na+ and the Si(111) surface weakens, and the binding energy tends to saturate. On the surface described by the Si22H21 cluster, the optimized Na+-surface distance for Na+(H2O)5 adsorbed at the on-top site is 4.16 Å and the CP-corrected binding energy (MP2) is -35.4 kJ/mol, implying weak adsorption of the hydrated Na+ cation on the clean Si(111) surface.
Abstract:
This paper focuses on the strategies adopted, and the problems encountered, in solving several difficult issues in the implementation of the system: task decomposition and the issuing of locomotion commands; timing allocation and synchronization; landmark-based localization and correction of locomotion errors; sensing, measurement of, and response to dynamic obstacles; and emergency handling of exceptional situations.
Abstract:
Theoretical research, laboratory tests, and field observations show that most sedimentary rocks are anisotropic. Applying isotropic methods such as prestack depth migration and velocity analysis to data acquired under anisotropic conditions produces notable errors and adversely affects geological interpretation. Generally speaking, vertically transverse isotropic (VTI) media are a good approximation to geological structure, so accounting for the anisotropic effects of seismic-wave propagation has important practical significance for anisotropic prestack depth migration theory and for precise imaging of complex geology. Prestack depth migration of real records has two indispensable parts: a proper prestack depth migration algorithm and velocity analysis using prestack seismic data. This thesis addresses both.

Building on the implicit finite-difference work of Dietrich Ristow et al. (1997) on prestack depth migration in VTI media, this thesis proposes a split-step Fourier prestack depth migration algorithm (VTISSF) and a Fourier finite-difference algorithm (VTIFFD) based on the wave equation for VTI media; programs were written and the migration methods tested on synthetic models. The results show that VTISSF is a stable algorithm that generally gives good results when reflector dips are not very steep, although undermigration appears for steep dips; the VTIFFD algorithm gives better results for steep dips, at the cost of lower efficiency and some frequency dispersion.

For anisotropic prestack depth migration velocity analysis in VTI media, the thesis discusses the basic assumptions of the VTI model in the velocity-analysis algorithm, the basis of anisotropic migration velocity analysis, and traveltime-table calculation for VTI media in integral (Kirchhoff-type) prestack depth migration. It then analyzes P-wave common-image gathers for homogeneous and vertically varying velocity, studies the residual moveout in common-image gathers produced by errors in the medium parameters, and derives the conditions for flat events at the correct depth. In this case the anisotropic model parameter vector is (v0, kz, ε, δ), where v0 is the vertical velocity at a point on the top surface, kz is the vertical velocity gradient, and ε and δ are the anisotropy parameters. The vertical velocity gradient can be obtained from the seismic data. Next, for P-wave common-image gathers in VTI media whose velocity varies both vertically and horizontally, the relationship between the medium parameters and the residual time shift of events in the gather is studied, and the condition for flattening the common-image gather at the correct depth is obtained; here the parameter vector is (v0, kz, kx, ε, δ), where kx is the velocity gradient in the horizontal direction. As a result, the vertical velocity gradient can be determined uniquely, but the horizontal velocity gradient and the anisotropy parameters cannot be distinguished without a priori information; our approach is to supply one parameter by velocity scanning, after which the other four parameters of the VTI medium can be obtained from the seismic data.

Based on the above analysis, the thesis discusses the feasibility of migration velocity analysis in vertically and horizontally varying VTI media, and synthetic records from three models are used to test the method: first a simple model with one block, then a model with multiple blocks, and finally part of the Marmousi model. The model results show that this velocity analysis method is feasible and correct.
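A minimal single-frequency sketch of split-step Fourier downward continuation with a weak-anisotropy VTI phase term is given below. It uses Thomsen's weak-anisotropy phase velocity with assumed parameter values and is far simpler than the VTISSF/VTIFFD operators described in the thesis:

```python
# Illustrative single-frequency split-step Fourier (SSF) downward-continuation
# step for a VTI layer. The vertical wavenumber uses Thomsen's weak-anisotropy
# phase velocity v(theta) ~ v0*(1 + delta*sin^2*cos^2 + eps*sin^4), with the
# phase angle evaluated once from the isotropic wavenumbers.
import numpy as np

def ssf_vti_step(P, dx, dz, w, v0, eps, delta, v_x=None):
    """Continue one frequency slice P(x) down by dz through a VTI layer."""
    nx = P.size
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    sin2 = np.clip((kx * v0 / w) ** 2, 0.0, 1.0)          # sin^2(theta)
    v_ph = v0 * (1 + delta * sin2 * (1 - sin2) + eps * sin2**2)
    kz2 = (w / v_ph) ** 2 - kx ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))                    # drop evanescent part
    Pk = np.fft.fft(P) * np.exp(1j * kz * dz)             # phase shift in kx
    P = np.fft.ifft(Pk)
    if v_x is not None:                                   # split-step correction
        P *= np.exp(1j * w * (1.0 / v_x - 1.0 / v0) * dz) # for lateral v(x)
    return P

# One downward-continuation step for a point impulse at 30 Hz (assumed values)
P0 = np.zeros(256, complex); P0[128] = 1.0
P1 = ssf_vti_step(P0, dx=10.0, dz=10.0, w=2*np.pi*30, v0=2000.0,
                  eps=0.2, delta=0.1)
print(abs(P1).max())
```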
Abstract:
The ionogram acquired by vertical-incidence ionospheric sounding is the oldest data type in the history of ionospheric research. Using modern microelectronics and computer technology to digitize, analyse, and preserve the huge archive of historical film ionograms has become increasingly important and urgent. This paper describes progress in film-ionogram digitization: digital image processing is used to correct and repair film ionograms and to convert them into an exchangeable format. Analysis and conversion software based on this method has been developed, and its application, in combination with the SAO Explorer program, to Wuhan film ionograms and to pseudo-colour ionograms from Yamagawa, Japan, is presented. The results show that the method is reliable and the software user-friendly, providing a practical solution for the digitization and analysis of large volumes of historical film ionograms.

First, we briefly introduce the film ionogram and the process of its digitization. From inspection of a large number of film ionograms, we identify the following common characteristics of the scanned images: (1) image rotation is introduced by scanning; (2) the vertical axes of many film ionograms show some degree of tilt and bending; (3) the coordinate scales are non-uniform, a result of unstable drive-motor rotation and errors in positioning the altitude cursor. Based on these characteristics, and on the SAO Explorer software widely used worldwide for digital-ionogram analysis, a new processing method for film ionograms has been developed. The method comprises geometric image correction and format conversion: the geometric correction includes rotation correction, vertical correction, and coordinate-scale correction, after which the BMP images are converted to SBF-format files. We also discuss the data-format conversion methods, which include two image-intensity mappings (normalization-based and logarithmic) and preprocessing by noise filtering and threshold setting. In combination with the SAO Explorer software, ionospheric parameters and electron-density profiles were successfully obtained from the converted SBF digital ionograms.

Based on the above method, software was developed to perform the correction, analysis, and image-format conversion of film ionograms, and its functions and operation are described. The software was then applied to Wuhan film ionograms recorded in a year of high and a year of low solar activity in the 1980s. The results show that the converted SBF digital ionograms preserve almost all of the echo information in the original films. We specifically discuss the application to Wuhan film ionograms from 1958 in order to validate the applicability and credibility of the software for older material: the important information in the film ionograms is retained in the SBF digital ionograms, demonstrating that the conversion remains credible even for the oldest films. In sum, the software can be applied to the digitization and analysis of large volumes of historical film ionograms.

Finally, we extended the software with new conversion methods and applied it to pseudo-colour ionograms from Yamagawa, Japan. The results show that the converted ionograms essentially retain the important ionogram information, and that the scaling error of the converted SBF images is acceptable even though no preprocessing was applied to the original ionograms. The applicable range of the software can therefore be extended, by further improvement of the method and the software, to all kinds of analog ionogram images.
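Two of the processing steps described above, rotation correction and logarithmic intensity mapping, can be sketched as follows; the tilt angle, image sizes, and intensity values are assumed, and the actual software operates on scanned BMP ionograms rather than synthetic arrays:

```python
# Illustrative sketch of two film-ionogram corrections (assumed parameters):
# a small rotation correction followed by a logarithmic intensity mapping
# that spreads the weak echo traces over the output range.
import numpy as np
from scipy import ndimage

def correct_ionogram(img, tilt_deg, out_max=255):
    """Rotate a scanned film ionogram upright, then map intensities
    logarithmically."""
    upright = ndimage.rotate(img, -tilt_deg, reshape=False, order=1)
    upright = np.clip(upright, 0, None).astype(float)
    log_img = np.log1p(upright) / np.log1p(upright.max()) * out_max
    return log_img.astype(np.uint8)

# Synthetic stand-in for a scanned frame: noise plus a bright echo trace
rng = np.random.default_rng(1)
frame = rng.integers(0, 40, (200, 300)).astype(float)
frame[80:83, :] = 200.0                    # a horizontal echo trace
fixed = correct_ionogram(frame, tilt_deg=1.5)
print(fixed.shape, fixed.max())
```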
Abstract:
The CSAMT method has recently come to play an important role in geothermal exploration and in pre-exploration for tunnel construction projects. To guide the interpretation of field data, this paper develops forward-modeling methods from 1D to 3D, and inversion methods in 1D and 2D, for frequency-domain artificial-source magnetotellurics. In general, artificial-source data are inverted only after a near-field correction based on the assumption of a homogeneous half-space; this is not suitable for complex structures, however, because the assumption is no longer valid. A new inversion scheme without near-field correction has recently been published, which avoids the error introduced by the correction; we explore different 1D and 2D inversion schemes using data without near-field correction.

Numerical integration is used for 1D CSAMT forward modeling. An infinite line source is used in the 2D finite-element forward modeling, where the near-field effect occurs, as in CSAMT, because an artificial source is used. The source distribution is modeled with a pseudo-delta function, which reduces the singularity when solving the finite-element equations. The effect on the exploration area of an anomalous body lying beneath the source, or between the source and the exploration area, is discussed. A series of numerical tests shows that the 2D finite-element method is correct, and the modeling results are of practical significance for CSAMT data interpretation. For 3D finite-element forward modeling, the finite-element equation is derived by the Galerkin method with the divergence condition enforced explicitly; the forward-modeling result for a homogeneous half-space model is correct.

Following the new idea of inversion without near-field correction, new 1D and 2D inversion methods are developed in this paper. All of the inversion schemes use data without near-field correction, which avoids introducing the errors caused by the correction. A modified grid-parameter method and a layer-by-layer inversion method are combined in the 1D inversion scheme; an RRI method with an artificial source is developed, and a finite-element inversion method is used, in the 2D scheme. The inversion results for synthetic data and for field data agree with the model and with the known geology, respectively, which shows that inversion without near-field correction is feasible. The feasibility of inverting data from the exploration area alone, when an anomalous body lies between the source and the exploration area, is also discussed.
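For context, the far-field (Cagniard) apparent resistivity that conventional near-field corrections start from, together with a skin-depth estimate for judging whether a sounding is in the far field, can be sketched as follows; the field values and the 3-5 skin-depth rule of thumb are illustrative assumptions:

```python
# Hedged sketch of the far-field (Cagniard) apparent resistivity used as the
# starting point of conventional near-field correction schemes; the thesis
# instead inverts uncorrected data directly. Ex, Hy are measured orthogonal
# field components and f is frequency; the values below are placeholders.
import numpy as np

MU0 = 4e-7 * np.pi

def cagniard_apparent_resistivity(Ex, Hy, f):
    """rho_a = |Ex/Hy|^2 / (omega * mu0); valid only in the far field,
    roughly more than 3-5 skin depths from the transmitter."""
    omega = 2 * np.pi * f
    return abs(Ex / Hy) ** 2 / (omega * MU0)

def skin_depth(rho, f):
    """delta = sqrt(2 * rho / (omega * mu0)), for the near/far-field check."""
    return np.sqrt(2 * rho / (2 * np.pi * f * MU0))

rho_a = cagniard_apparent_resistivity(Ex=1.2e-6, Hy=5.3e-6, f=64.0)
print(rho_a, skin_depth(rho_a, 64.0))
```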