901 results for Generalized Least-squares
Abstract:
This paper first introduces the general principles of visual servoing and then proposes a model-free, calibration-free visual servo control method that requires neither a robot model nor a camera model. A model-free, calibration-free visual servo control law is derived from the principle of variance minimization, and a recursive formula for the image Jacobian matrix is given. Finally, a trajectory-tracking simulation experiment verifies the correctness and effectiveness of the algorithm.
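To make the recursion concrete, the sketch below shows one common way of updating an image Jacobian estimate online and computing the servo step from it. The abstract does not give the exact formula, so the Broyden-style update, the step weight `lam`, the `gain`, and the function names are illustrative assumptions rather than the authors' derivation.

```python
import numpy as np

def update_jacobian(J, dq, df, lam=0.95):
    """Recursive (Broyden-style) update of the image Jacobian estimate.

    J   : current Jacobian estimate (m x n), mapping joint increments to feature increments
    dq  : joint-space increment (n,)
    df  : observed image-feature increment (m,)
    lam : step weight in (0, 1]; assumed here, not taken from the paper
    """
    dq = dq.reshape(-1, 1)
    df = df.reshape(-1, 1)
    denom = float(dq.T @ dq) + 1e-12
    return J + lam * (df - J @ dq) @ dq.T / denom

def servo_step(J, f_error, gain=0.5):
    """Joint increment driving the feature error toward zero (least-squares pseudo-inverse)."""
    dq, *_ = np.linalg.lstsq(J, -gain * f_error, rcond=None)
    return dq
```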
Abstract:
A new laser-based global localization system for mobile robots is introduced. The paper focuses on global localization of mobile robots in structured environments and proposes a new iterative-search localization algorithm based on the least-squares method. Localization experiments carried out on an omnidirectional mobile robot platform confirm the effectiveness of the algorithm.
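As a rough illustration of the least-squares core of such an iterative-search localization, the sketch below solves one rigid 2-D alignment of matched scan points to map points in closed form; the full algorithm would iterate this step after re-matching correspondences. The function name and the assumption of known correspondences are mine, not the paper's.

```python
import numpy as np

def fit_pose_least_squares(scan_pts, map_pts):
    """One least-squares step: rigid 2-D transform aligning scan points to map points.

    scan_pts, map_pts : (N, 2) arrays of matched points (correspondences assumed known).
    Returns rotation R (2x2) and translation t (2,) minimizing sum ||R p + t - q||^2.
    """
    p_mean = scan_pts.mean(axis=0)
    q_mean = map_pts.mean(axis=0)
    H = (scan_pts - p_mean).T @ (map_pts - q_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```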
Abstract:
Exploiting the fact that, in the master UUV's observation system, the measured bearing of the slave UUV is of high precision while the measured range is of low precision, a composite weight formed from a forgetting factor and a position weight is incorporated into the recursive least-squares (RLS) algorithm to analyze the slave UUV's navigation parameters. This avoids the EKF's stringent requirements on the observation noise and overcomes data saturation. In addition, the slave UUV's bearing measurements are preprocessed to accelerate the convergence of the navigation-parameter estimates. Simulation experiments demonstrate the effectiveness of the method.
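A minimal sketch of recursive least squares with a forgetting factor and a per-measurement weight is given below; the scalar weight `w` stands in for the composite weight (forgetting factor combined with a position weight) described above, whose exact form the abstract does not specify.

```python
import numpy as np

class WeightedRLS:
    """Recursive least squares with a forgetting factor and per-sample weights."""

    def __init__(self, n_params, lam=0.98, delta=1e3):
        self.theta = np.zeros(n_params)       # parameter estimate
        self.P = delta * np.eye(n_params)     # inverse-covariance-like matrix
        self.lam = lam                        # forgetting factor (0 < lam <= 1)

    def update(self, phi, y, w=1.0):
        """phi: regressor (n,), y: scalar measurement, w: measurement weight."""
        Pphi = self.P @ phi
        gain = (w * Pphi) / (self.lam + w * phi @ Pphi)
        err = y - phi @ self.theta
        self.theta = self.theta + gain * err
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta
```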
Abstract:
This paper presents a simplified method for online identification of the state-model parameters of multivariable stochastic systems. Compared with the recursive least-squares adaptive algorithm, fewer parameters need to be identified; moreover, for a class of systems whose model parameters vary slowly, the magnitude of parameter variation can be controlled by choosing different forgetting-factor sequences, which solves the aging problem of seasonal models in power-system load forecasting. The method is based on a canonical form of the state model with random noise, which greatly reduces computation and storage and makes it suitable for online use on microprocessors.
Abstract:
This paper discusses, for special input signals, the relationship between the least-squares parameter estimates of a model with incorrect order and incorrect time delay and the true pure time delay, and proposes a new method for identifying the pure time delay: a reduced-order search algorithm.
Abstract:
To address shortcomings of existing methods for constructing fuzzy membership functions, a new construction method is proposed. The membership function is obtained by least-squares fitting of discrete data, and three measures are taken to reduce the fitting error. With the constructed membership function, the membership degree of any input physical quantity in the corresponding fuzzy linguistic variable can be obtained directly, which effectively avoids the subjectivity and inconsistency of expert-assigned membership degrees. The method is simple, highly accurate, broadly applicable, and of considerable practical value. Simulation results confirm its effectiveness.
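As an illustration of the fitting step, the sketch below fits a membership function to discrete (input, membership) samples by least squares. The Gaussian functional form, the sample values, and the use of `scipy.optimize.curve_fit` are assumptions for illustration; the paper does not state which form or solver it uses.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_mf(x, c, sigma):
    """Gaussian-shaped membership function."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

# Hypothetical discrete (x, membership) samples.
x_samples = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
mu_samples = np.array([0.05, 0.3, 0.8, 1.0, 0.7, 0.2])

# Least-squares fit of the membership-function parameters.
(c_hat, sigma_hat), _ = curve_fit(gauss_mf, x_samples, mu_samples, p0=[2.5, 1.0])

def membership(x):
    """Membership degree of any input value, clipped to [0, 1]."""
    return float(np.clip(gauss_mf(x, c_hat, sigma_hat), 0.0, 1.0))
```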
Abstract:
The Tibetan Plateau is the most spectacular and youngest case of continental collision on Earth; investigating its crust and mantle, and thereby revealing its structural and deformational character, is essential to understanding its deformation mechanism and deep processes. A large set of surface-wave data was collected from events that occurred between 1980 and 2002, recorded by 13 broadband digital stations in Eurasia and India. Up to 1,525 source-station Rayleigh waveforms and 1,464 Love wave trains were analysed to obtain group-velocity dispersion curves, accompanied by a detailed, quantitative assessment of the fitness of classical ray theory and of the errors arising from source parameters and measurements. Covering the model region with a mesh of 2°x2° grid cells, we used a damped least-squares approach with SVD to carry out the tomographic inversion, obtained SV- and SH-wave velocity images of the crust and upper mantle beneath the Tibetan Plateau and its surroundings, and then computed the radial anisotropy from the Love-Rayleigh discrepancy. The main results are as follows. a) The Moho beneath the Tibetan Plateau undulates between 65 and 74 km depth, and a clear correlation between plateau elevation and Moho topography suggests that at least a great part of the highly elevated plateau is isostatically compensated. b) The lithospheric root reaches a depth of ~140 km (Qiangtang Block) and exceptionally ~180 km (Lhasa Block), and exhibits laterally varying fast velocities between 4.6 and 4.7 km/s, even ~4.8 km/s under the northern Lhasa Block and the Qiangtang Block; this may reflect a shield-like upper mantle beneath the Tibetan Plateau and can therefore be regarded as one of the geophysical tests confirming the underthrusting of India, whose leading edge may have passed the Bangong-Nujiang Suture and even the Jinsha Suture. c) The asthenosphere appears as a low-velocity channel at depths between 140 and 220 km, with a negative velocity gradient and velocities as low as 4.2 km/s. d) Areas in which the radial anisotropy exceeds ~4-6% on average, and up to 8% in places, are found in the crust and upper mantle underlying most of the Plateau; the strength, spatial configuration and sign of the radial anisotropy indicate a regime of horizontal compressive forces within the convergent orogen, together with laterally varying lithospheric rheology and differential movement with respect to the compressive driving forces. e) Slow-velocity anomalies of 12% or more in southern Tibet and at the eastern edge of the Plateau support the idea of a mechanically weak middle-to-lower crust and the existence of crustal flow in Tibet.
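The damped least-squares/SVD machinery named in the abstract can be summarized as below; this is a generic sketch of the inversion step, not the authors' parameterization of the dispersion data or model grid.

```python
import numpy as np

def damped_lsq_svd(G, d, eps):
    """Damped least-squares solution of G m ~= d via SVD.

    Minimizes ||G m - d||^2 + eps^2 ||m||^2; the damping eps stabilizes
    small singular values, as is typical in tomographic inversions.
    """
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    filt = s / (s ** 2 + eps ** 2)          # damped inverse singular values
    return Vt.T @ (filt * (U.T @ d))
```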
Abstract:
Extraction of reflectivity sequences is a key part of impedance inversion in seismic exploration. Although many valid inversion methods exist, with crosswell seismic data the frequency band of the data cannot be broadened enough to satisfy practical needs, which remains an urgent problem. Pre-stack depth migration, developed over recent years, has become an increasingly robust imaging technology for geological targets with complex structure, and its final result is a reflectivity image. Based on reflectivity imaging of crosswell seismic data and the wave equation, this work completes the following. (1) A blind-deconvolution workflow is completed in which the Cauchy criterion regularizes the inversion (sparse inversion), and a preconditioned conjugate gradient (PCG) solver based on Krylov subspaces is incorporated to reduce the computation and improve speed, so the transition matrix no longer needs to be positive definite and symmetric; applied to high-frequency recovery of crosswell seismic sections, the results are satisfactory. (2) The rotation transform and the Viterbi algorithm are applied in the preprocessing for wave-equation prestack depth migration. Wave-equation prestack depth migration requires the seismic dataset to lie on a regular grid, but complex terrain and folding sometimes make the acquisition geometry irregular, and interpolation between traces is needed to avoid the aliasing produced by sparse sampling along the inline direction. The rotation transform is used to make the inline direction parallel to the coordinate axis, and the Viterbi algorithm is used for automatic event picking, with satisfactory results. (3) Besides extrapolation, imaging is a key part of pre-stack depth migration: however accurate the extrapolation operator, the imaging condition strongly influences the final reflectivity image. The author migrates the Marmousi model under different imaging conditions and compares the results, which show that the imaging condition stabilizing the source wavefield and the least-squares-estimation imaging condition proposed here are better than the conventional correlation imaging condition. Finally, the traditional pattern of "distributed computing and mass decision" widely adopted in seismic data processing is becoming an obstacle to raising the level of enterprise management. A systematic solution adopting the mode of "distributed computing - centralized storage - instant release" is therefore put forward, based on a combination of C/S and B/S release models; the architecture of the solution, the corresponding web technology, and the client software are introduced. Application demonstrates the validity of this scheme.
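A rough sketch of Cauchy-regularized sparse inversion solved with conjugate gradients is given below, using an iteratively reweighted least-squares outer loop; the weighting, the damping constant `sigma`, and the explicit normal equations are illustrative assumptions, since the thesis's exact formulation is not reproduced in the abstract.

```python
import numpy as np
from scipy.sparse.linalg import cg

def cauchy_sparse_deconv(G, d, sigma=0.1, n_irls=10):
    """Sparse reflectivity estimate with a Cauchy regularizer (IRLS + CG).

    Approximately minimizes ||G r - d||^2 + sum_i log(1 + r_i^2 / sigma^2):
    each outer pass freezes the Cauchy term into a diagonal weight, and the
    resulting regularized normal equations are solved with conjugate gradients
    (a Jacobi preconditioner is used; a production code would avoid forming
    the normal matrix explicitly).
    """
    GtG = G.T @ G
    Gtd = G.T @ d
    r = np.zeros(G.shape[1])
    for _ in range(n_irls):
        w = 1.0 / (sigma ** 2 + r ** 2)          # Cauchy reweighting
        A = GtG + np.diag(w)
        M = np.diag(1.0 / np.diag(A))            # Jacobi preconditioner
        r, _ = cg(A, Gtd, x0=r, M=M, maxiter=200)
    return r
```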
Abstract:
This dissertation addresses signal reconstruction and data restoration in seismic data processing, taking signal-representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For representation on natural bases, I present the fundamentals and algorithms of ICA and its original applications to the separation of natural earthquake signals and of exploration seismic signals. For representation on deterministic bases, the dissertation proposes least-squares inversion regularization methods for seismic data reconstruction, sparseness constraints, preconditioned conjugate gradient methods, and their applications to seismic deconvolution, Radon transformation, and related problems. The core content is a de-aliasing reconstruction algorithm for unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two cases of signal representation, they can be integrated into one framework, because both deal with signal or information restoration: the former reconstructs original signals from mixed signals, the latter reconstructs complete data from sparse or irregular data, and both aim to provide pre- and post-processing methods for seismic pre-stack depth migration. ICA can separate original signals from their mixtures or extract the basic structure of the analyzed data. I survey the fundamentals, algorithms and applications of ICA and, by comparison with the KL transform, propose the concept of an independent-components transform (ICT). On the basis of the negentropy measure of independence, I implement FastICA and improve it using the covariance matrix. After analyzing the characteristics of seismic signals, I introduce ICA into seismic signal processing, to my knowledge for the first time in the geophysical community, and implement the separation of noise from seismic signals. Synthetic and real data examples show that ICA is usable for seismic signal processing, and initial results are achieved. ICA is also applied to separating earthquake converted waves from multiples in a sedimentary area, with good results, leading to a more reasonable interpretation of subsurface discontinuities; the results show the promise of applying ICA to geophysical signal processing. Using the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss the prospects of applying ICA to it, with two possible solutions. The relationship among PCA, ICA and the wavelet transform is stated, and it is shown that the reconstruction of wavelet prototype functions is a Lie-group representation. In addition, an over-sampled wavelet transform is proposed to enhance seismic data resolution and is validated with numerical examples. The key to pre-stack depth migration is the regularization of pre-stack seismic data, for which seismic interpolation and missing-data reconstruction are necessary procedures. I first review seismic imaging methods to argue the critical role of regularization and, by reviewing seismic interpolation algorithms, conclude that de-aliased reconstruction of unevenly sampled data remains a challenge. The fundamentals of seismic reconstruction are then discussed, and sparseness-constrained least-squares inversion with a preconditioned conjugate gradient solver is studied and implemented.
Choosing a constraint term with a Cauchy distribution, I program the PCG algorithm and implement sparse seismic deconvolution and high-resolution Radon transformation by PCG, in preparation for seismic data reconstruction. Regarding seismic interpolation, de-aliased interpolation of regularly sampled data and reconstruction of irregularly sampled data each work well on their own, but existing methods cannot combine the two. This dissertation proposes a novel Fourier-transform-based method and algorithm that can reconstruct seismic data that are both irregularly sampled and aliased. Band-limited data reconstruction is formulated as a minimum-norm least-squares inversion problem in which an adaptive, DFT-weighted norm regularization term is used. The inverse problem is solved by the preconditioned conjugate gradient method, which makes the solution stable and rapidly convergent. Based on the assumption that seismic data consist of a finite number of linear events, it follows from the sampling theorem that aliased events can be attenuated via least-squares weights predicted linearly from the low frequencies. Three application issues are discussed: interpolation across regular gaps, filling of irregular gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Synthetic and real data examples show that the proposed method is valid, efficient and applicable, and the research is valuable for seismic data regularization and crosswell seismics. To meet the data requirements of 3D shot-profile depth migration, schemes must be adopted to regularize the data and match the velocity dataset. The methods of this dissertation are used to interpolate and extrapolate the shot gathers instead of simply inserting zero traces, so the migration aperture is enlarged and the migration result is improved. The results show the effectiveness and practicability of the approach.
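The following sketch, under stated assumptions, illustrates a minimum-norm least-squares reconstruction with an adaptively weighted spectral regularizer of the kind described above. It uses dense linear algebra on a single short trace for clarity; the dissertation's method works on band-limited multidimensional data and uses a preconditioned conjugate gradient solver instead.

```python
import numpy as np

def weighted_fourier_reconstruct(d_obs, live, N, n_irls=5, eps=1e-2):
    """Minimum-weighted-norm reconstruction of an irregularly sampled trace.

    d_obs : observed samples (length M)
    live  : indices of the observed samples on the length-N output grid
    Solves  min_x ||S F x - d_obs||^2 + eps ||W x||^2, where F is the
    inverse-DFT synthesis matrix (spectrum -> trace), S picks the live
    samples, and the spectral weights W are updated adaptively from the
    current spectrum estimate (an IRLS stand-in for an adaptive
    DFT-weighted regularization).
    """
    F = np.fft.ifft(np.eye(N), axis=0)          # inverse-DFT synthesis matrix
    A = F[live, :]                               # sampling of the synthesized trace
    x = np.zeros(N, dtype=complex)
    for _ in range(n_irls):
        w = 1.0 / (np.abs(x) + 1.0)              # heavier penalty on weak frequencies
        lhs = A.conj().T @ A + eps * np.diag(w ** 2)
        rhs = A.conj().T @ d_obs
        x = np.linalg.solve(lhs, rhs)
    return np.real(F @ x)                        # reconstructed trace on the full grid
```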
Abstract:
The seismic survey is the most effective geophysical prospecting method in the exploration and development of oil and gas. As the structure and lithology of geological bodies become increasingly complex, the seismic section must have high resolution if targets are to be described accurately, and a high signal-to-noise ratio is the precondition for high resolution. As an important seismic data processing method, stacking is an effective means of suppressing noise in the records, and broadening the stacked surface area is even more important for enhancing genuine reflection signals and suppressing unwanted coherent and random ambient noise. The common reflection surface (CRS) stack is a macro-model-independent seismic imaging method: based on the similarity of CRP trace gathers within one coherence zone, it effectively improves the S/N ratio by stacking more CMP trace gathers, and it is regarded as an important method of seismic data processing. Performing the CRS stack depends on three attributes; however, the CRS equation becomes invalid at large offsets. In this thesis, a method based on a depth-domain velocity model is put forward: ray tracing is used to determine the CRP traveltimes within a common reflection surface, and the CRS equation is regressed by the least-squares method. We then stack the coherent seismic data according to these traveltimes and obtain the zero-offset section. At the end of the CRS-stack workflow, a method that uses the dip angle to further enhance the S/N ratio is applied. Applied to synthetic examples and field seismic records, the method shows excellent performance in both accuracy and efficiency.
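As an illustration of regressing a stacking surface from ray-traced CRP traveltimes by least squares, the sketch below fits a generic second-order traveltime surface in midpoint displacement and half-offset. The parameterization is a stand-in; the actual CRS equation and its three attributes are not reproduced here.

```python
import numpy as np

def fit_traveltime_surface(dx, h, t):
    """Least-squares fit of a second-order traveltime surface t^2 ~ f(dx, h).

    dx : midpoint displacement from the central point (array)
    h  : half-offset (array)
    t  : ray-traced CRP traveltimes (array)
    Fits t^2 = c0 + c1*dx + c2*dx^2 + c3*h^2 (a generic hyperbolic form;
    stacking attributes would be recovered from such coefficients).
    """
    A = np.column_stack([np.ones_like(dx), dx, dx ** 2, h ** 2])
    coeffs, *_ = np.linalg.lstsq(A, t ** 2, rcond=None)
    return coeffs

def predict_traveltime(coeffs, dx, h):
    """Evaluate the fitted surface; used to guide stacking along the surface."""
    c0, c1, c2, c3 = coeffs
    return np.sqrt(np.maximum(c0 + c1 * dx + c2 * dx ** 2 + c3 * h ** 2, 0.0))
```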
Abstract:
Formation resistivity is one of the most important parameters in reservoir evaluation. To acquire the true resistivity of the virgin formation, various types of resistivity logging tools have been developed. However, as proved reserves increase, the pay zones of interest are becoming thinner and thinner, especially in terrestrial-deposit oilfields, so that electrical logging tools, limited by the contradictory requirements of resolution and depth of investigation, cannot provide the true formation resistivity. Resistivity inversion techniques have therefore become popular for determining true formation resistivity from the improving logging data of new tools. In geophysical inverse problems, non-unique solutions are inevitable because of noisy data and deficient measurement information. I address this problem in this dissertation from three aspects: data acquisition, data processing/inversion, and application of the results with uncertainty evaluation of the non-unique solution. Other problems of the traditional inversion methods, such as slow convergence and the dependence of the results on the initial values, are also considered. Firstly, I deal with the uncertainties in the data to be processed. The combination of the micro-spherically focused log (MSFL) and the dual laterolog (DLL) is the standard program for determining formation resistivity. During the inversion, the corrected MSFL readings are taken as the resistivity of the invaded zone; however, the errors can be as large as 30 percent owing to mudcake influence, even when the effect of a rugose borehole on the MSFL readings can be ignored. Furthermore, because of their different measurement principles, it is still debated whether the two logs can be used quantitatively together to determine formation resistivities. Thus, a new type of laterolog tool is designed theoretically; the new tool provides three curves with different depths of investigation and nearly the same resolution, about 0.4 m. Secondly, because the popular iterative inversion method based on least-squares estimation cannot solve for more than two parameters simultaneously and the new laterolog tool has not yet been applied in practice, my work focuses on two-parameter inversion (invasion radius and resistivity of the virgin formation) of traditional dual laterolog data. An unequally weighted damping-factor revision method is developed to replace the parameter-revision technique used in the traditional inversion method. In this new method, each parameter update depends not only on the damping factor itself but also on the difference between the measured data and the fitted data in the different layers. At least two iterations are saved compared with the older method, reducing the computational cost of the inversion. The damped least-squares inversion method is a realization of Tikhonov's trade-off between solution smoothness and the stability of the inversion process; it is realized by linearizing the nonlinear inverse problem, which inevitably makes the solution depend on the initial parameter values. Thus, with the development of nonlinear processing methods, the efficiency of such methods has been seriously debated. An artificial neural network method is therefore proposed in this dissertation.
A database of the tool's response to formation parameters is built by modeling the laterolog tool and is then used to train the neural networks. A unit model is put forward to simplify the data space, and an additional physical constraint is applied to optimize the network after cross-validation. Results show that the neural network inversion method could replace the traditional inversion method in a single formation and can be used to determine the initial values for the traditional method. Whatever method is developed, non-uniqueness and uncertainty of the solution are inevitable, so it is wise to evaluate them when applying the inversion results; Bayes' theorem provides a way to do so, and this approach is illustrated for a single formation with plausible results. Finally, the traditional least-squares inversion method is applied to raw logging data: compared with core analysis, the calculated oil saturation is 20 percent higher than that obtained without this processing.
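A minimal sketch of one damped least-squares (Levenberg-Marquardt-style) update is shown below; the optional per-datum weight `w` only loosely mimics the unequally weighted damping-factor revision described above, whose exact rule is not given in the abstract, and the function names are mine.

```python
import numpy as np

def damped_gauss_newton_step(forward, jacobian, m, d_obs, damp, w=None):
    """One damped least-squares update of the model vector m.

    forward  : function m -> predicted tool responses
    jacobian : function m -> sensitivity matrix J (n_data x n_params)
    damp     : damping factor trading step size against stability
    w        : optional per-datum weights, e.g. emphasizing layers where the
               misfit between measured and fitted data is large (assumption)
    """
    J = jacobian(m)
    r = d_obs - forward(m)
    if w is not None:
        J = w[:, None] * J
        r = w * r
    lhs = J.T @ J + damp * np.eye(len(m))
    dm = np.linalg.solve(lhs, J.T @ r)
    return m + dm
```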
Abstract:
Template matching by means of cross-correlation is common practice in pattern recognition. However, its sensitivity to deformations of the pattern and the broad and unsharp peaks it produces are significant drawbacks. This paper reviews some results on how these shortcomings can be removed. Several techniques (Matched Spatial Filters, Synthetic Discriminant Functions, Principal Components Projections and Reconstruction Residuals) are reviewed and compared on a common task: locating eyes in a database of faces. New variants are also proposed and compared: least squares Discriminant Functions and the combined use of projections on eigenfunctions and the corresponding reconstruction residuals. Finally, approximation networks are introduced in an attempt to improve filter design by the introduction of nonlinearity.
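As a sketch of how a least-squares discriminant filter of the kind mentioned above might be designed, the code below solves a ridge-regularized linear least-squares problem over flattened training patches; the ridge term and the function name are assumptions, not the paper's exact formulation.

```python
import numpy as np

def least_squares_filter(patches, labels, ridge=1e-3):
    """Design a linear correlation filter by least squares.

    patches : (n_samples, n_pixels) flattened training patches
    labels  : desired filter outputs (e.g. 1 for eye patches, 0 for non-eyes)
    Returns weights w minimizing ||patches @ w - labels||^2 + ridge*||w||^2,
    which can then be correlated with an image to locate the target.
    """
    X = patches
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ labels)
```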
Abstract:
We describe a new method for motion estimation and 3D reconstruction from stereo image sequences obtained by a stereo rig moving through a rigid world. We show that given two stereo pairs one can compute the motion of the stereo rig directly from the image derivatives (spatial and temporal). Correspondences are not required. One can then use the images from both pairs combined to compute a dense depth map. The motion estimates between stereo pairs enable us to combine depth maps from all the pairs in the sequence to form an extended scene reconstruction and we show results from a real image sequence. The motion computation is a linear least squares computation using all the pixels in the image. Areas with little or no contrast are implicitly weighted less so one does not have to explicitly apply a confidence measure.
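The sketch below illustrates the direct, correspondence-free least-squares idea on a much simpler problem, estimating a single image-plane translation from brightness derivatives; the paper itself estimates the full motion of the stereo rig, so this is only a stand-in for the machinery, not the authors' formulation.

```python
import numpy as np

def global_translation_lsq(Ix, Iy, It):
    """Least-squares estimate of a single image-plane translation (u, v).

    Ix, Iy, It : spatial and temporal brightness derivatives, same shape.
    Minimizes sum_p (Ix*u + Iy*v + It)^2 over all pixels; low-contrast pixels
    contribute small derivatives and are thus implicitly down-weighted, as
    noted in the abstract.
    """
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```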
Abstract:
For applications involving the control of moving vehicles, recovering the relative motion between a camera and its environment is highly useful. This thesis describes the design and testing of a real-time analog VLSI chip that estimates the focus of expansion (FOE) from measured time-varying images. Our approach assumes a camera moving through a fixed world with translational velocity; the FOE is the projection of the translation vector onto the image plane. This location is the point towards which the camera is moving, and the point from which other image points appear to expand outward. Through the camera imaging parameters, the location of the FOE gives the direction of 3-D translation. The algorithm we use for estimating the FOE minimizes the sum of squared differences, at every pixel, between the observed time variation of brightness and the variation predicted for the assumed position of the FOE. This minimization is not straightforward, because the relationship between the brightness derivatives depends on the unknown distance to the surface being imaged. However, image points where brightness is instantaneously constant play a critical role: ideally, the FOE would lie at the intersection of the tangents to the iso-brightness contours at these "stationary" points. In practice, brightness derivatives are hard to estimate accurately because the image is quite noisy. Reliable results can nevertheless be obtained if the image contains many stationary points and one finds the point that minimizes the sum of squares of the perpendicular distances to the tangents at the stationary points. The FOE chip calculates the gradient of this least-squares minimization sum, and the estimation is performed by closing a feedback loop around it. The chip has been implemented using an embedded CCD imager for image acquisition and a row-parallel processing scheme. A 64 x 64 version was fabricated in a 2 um CCD/BiCMOS process through MOSIS, with design goals of 200 mW of on-chip power, a top frame rate of 1000 frames/second, and a basic accuracy of 5%. A complete experimental system that estimates the FOE in real time from real motion and image scenes is demonstrated.
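Given the stationary points and contour tangents described above, the FOE estimate reduces to a small linear least-squares problem; the sketch below shows a digital equivalent of what the chip computes in analog hardware (the function name and input conventions are mine).

```python
import numpy as np

def estimate_foe(points, tangents):
    """Least-squares focus-of-expansion estimate from stationary points.

    points   : (N, 2) image locations where brightness is momentarily constant
    tangents : (N, 2) tangent directions of the iso-brightness contours there
    The FOE is the point minimizing the sum of squared perpendicular distances
    to the tangent lines, which is a linear least-squares problem.
    """
    t = tangents / np.linalg.norm(tangents, axis=1, keepdims=True)
    n = np.column_stack([-t[:, 1], t[:, 0]])          # unit normals to each tangent
    b = np.sum(n * points, axis=1)                     # n_i . p_i for each line
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe
```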
Abstract:
P-glycoprotein (P-gp), an ATP-binding cassette (ABC) transporter, functions as a biological barrier by extruding cytotoxic agents out of cells, which is an obstacle to the chemotherapeutic treatment of cancer. To aid the development of potential P-gp inhibitors, we constructed a quantitative structure-activity relationship (QSAR) model of flavonoids as P-gp inhibitors based on a Bayesian-regularized neural network (BRNN). A dataset of 57 flavonoids binding to the C-terminal nucleotide-binding domain of mouse P-gp was compiled from the literature. The predictive ability of the model was assessed using a test set independent of the training set, which showed a standard error of prediction of 0.146 +/- 0.006 (data scaled from 0 to 1). Two other mathematical tools, back-propagation neural networks (BPNN) and partial least squares (PLS), were also used to build QSAR models. The BRNN provided slightly better results for the test set than the BPNN, but the difference was not significant according to an F-statistic at p = 0.05; the PLS failed to build a reliable model in the present study. Our study indicates that the BRNN-based in silico model has good potential for predicting P-gp flavonoid inhibitors and might be applied in further drug design.
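For comparison, a PLS regression model of the kind attempted in the paper can be sketched as follows; the descriptor matrix here is random placeholder data standing in for the 57-compound dataset, the number of latent variables is a modeling choice, and the printed error is purely illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Hypothetical descriptor matrix X (one row per flavonoid) and activity vector y,
# both scaled to [0, 1]; placeholders, not the paper's data.
rng = np.random.default_rng(0)
X = rng.random((57, 20))
y = rng.random(57)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=3)              # number of latent variables (assumed)
pls.fit(X_train, y_train)

y_pred = pls.predict(X_test).ravel()
sep = np.sqrt(np.mean((y_pred - y_test) ** 2))   # standard error of prediction on the test set
print(f"SEP = {sep:.3f}")
```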