905 results for LEAST-SQUARES METHODS
Abstract:
This paper presents a simplified method for the online identification of the state-model parameters of multivariable stochastic systems. Compared with the recursive least-squares adaptive algorithm, not only is the number of parameters to be identified reduced, but also, for a class of systems whose model parameters vary slowly, the magnitude of parameter variation can be controlled by choosing different forgetting-factor sequences, which solves the aging problem of seasonal models in power-system load forecasting. The method is based on the canonical form of a state model with stochastic noise; it greatly reduces the computational load and storage requirements and is well suited to online implementation on microprocessors.
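The forgetting-factor mechanism described in this abstract can be illustrated with a minimal recursive least-squares sketch (Python with numpy assumed). This is not the paper's reduced-parameter canonical-form algorithm; it is only the standard exponentially weighted RLS recursion that such schemes build on, and the function name and toy data are hypothetical.

```python
import numpy as np

def rls_forgetting(phi, y, lam=0.98, delta=1e3):
    """Recursive least squares with forgetting factor lam (0 < lam <= 1).

    phi : (N, p) regressor vectors, one row per time step
    y   : (N,)   observations
    Returns the trajectory of parameter estimates, shape (N, p)."""
    N, p = phi.shape
    theta = np.zeros(p)            # current parameter estimate
    P = delta * np.eye(p)          # "covariance" matrix, large initial value
    history = np.zeros((N, p))
    for k in range(N):
        x = phi[k]
        Px = P @ x
        g = Px / (lam + x @ Px)    # gain vector
        theta = theta + g * (y[k] - x @ theta)
        P = (P - np.outer(g, Px)) / lam   # forgetting-factor update
        history[k] = theta
    return history

# Toy check: a two-parameter model whose first parameter drifts slowly.
rng = np.random.default_rng(0)
N = 500
phi = rng.normal(size=(N, 2))
true = np.column_stack([1.0 + 0.001 * np.arange(N), -0.5 * np.ones(N)])
y = np.sum(phi * true, axis=1) + 0.05 * rng.normal(size=N)
print(rls_forgetting(phi, y, lam=0.97)[-1], true[-1])
```

A forgetting factor close to 1 averages over a long window, while smaller values track slow parameter drift more quickly at the cost of noisier estimates, which is the trade-off a forgetting-factor sequence controls.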
Abstract:
This paper discusses, for special input signals, the relationship between the pure time delay and the least-squares estimates of the parameters of models whose order and delay are misspecified, and proposes a new method for identifying the pure time delay: a reduced-order search algorithm.
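The reduced-order search algorithm itself is not reproduced here. The sketch below only illustrates the general idea of identifying a pure delay by least squares: scan candidate delays, fit a low-order ARX model by ordinary least squares for each, and keep the delay with the smallest residual (Python with numpy assumed; the function name and the toy system are hypothetical).

```python
import numpy as np

def estimate_delay(u, y, max_delay=20, order=2):
    """Scan candidate pure delays d, fit an ARX(order, order) model by
    ordinary least squares for each, and return the delay with the
    smallest residual sum of squares."""
    best_d, best_rss = 0, np.inf
    N = len(y)
    for d in range(max_delay + 1):
        rows, rhs = [], []
        for k in range(order + d, N):
            past_y = [-y[k - i] for i in range(1, order + 1)]
            past_u = [u[k - d - i] for i in range(order)]
            rows.append(past_y + past_u)
            rhs.append(y[k])
        A, b = np.array(rows), np.array(rhs)
        theta, *_ = np.linalg.lstsq(A, b, rcond=None)
        rss = np.sum((b - A @ theta) ** 2)
        if rss < best_rss:
            best_d, best_rss = d, rss
    return best_d

# Toy system with a pure delay of 5 samples: y(k) = 0.5 y(k-1) + u(k-5) + 0.3 u(k-6)
rng = np.random.default_rng(9)
u = rng.normal(size=400)
y = np.zeros(400)
for k in range(6, 400):
    y[k] = 0.5 * y[k - 1] + u[k - 5] + 0.3 * u[k - 6]
y += 0.01 * rng.normal(size=400)
print(estimate_delay(u, y, max_delay=10, order=2))   # expected: 5
```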
Abstract:
The Tibetan Plateau is the most spectacular and youngest case of continental collision on Earth; investigating its crust and mantle, and thereby revealing its structure and deformation, is essential for understanding its deformation mechanism and deep processes. A large set of surface-wave data was collected from events that occurred between 1980 and 2002, recorded by 13 broadband digital stations in Eurasia and India. Up to 1,525 source-station Rayleigh waveforms and 1,464 Love wave trains were analysed to obtain group-velocity dispersion curves, together with a detailed, quantitative assessment of the validity of classical ray theory and of the errors arising from source parameters and measurements. Covering the model region with a mesh of 2° x 2° grid cells, we used a damped least-squares approach and the SVD to carry out the tomographic inversion; SV- and SH-wave velocity images of the crust and upper mantle beneath the Tibetan Plateau and its surroundings were obtained, and the radial anisotropy was then computed from the Love-Rayleigh discrepancy. The main results are as follows. a) The Moho beneath the Tibetan Plateau undulates between 65 and 74 km depth, and a clear correlation between the elevation of the plateau and the Moho topography suggests that at least a large part of the highly elevated plateau is isostatically compensated. b) The lithospheric root can be traced to about 140 km (Qiangtang Block) and exceptionally to about 180 km (Lhasa Block), and exhibits laterally varying fast velocities between 4.6 and 4.7 km/s, reaching ~4.8 km/s under the northern Lhasa Block and the Qiangtang Block; this may be correlated with the presence of a shield-like upper mantle beneath the Tibetan Plateau and can therefore be regarded as one of the geophysical tests confirming the underthrusting of India, whose leading edge may have passed the Bangong-Nujiang Suture and even the Jinsha Suture. c) The asthenosphere appears as a low-velocity channel at depths between 140 and 220 km, with a negative velocity gradient and velocities as low as 4.2 km/s. d) Areas in which the radial anisotropy exceeds ~4% (about 6% on average) are found in the crust and upper mantle underlying most of the plateau, reaching up to 8% in some places. The strength, spatial configuration, and sign of the radial anisotropy suggest a regime of horizontal compressive forces within the convergent orogen, together with laterally varying lithospheric rheology and differential movement with respect to the compressive driving forces. e) Slow-velocity anomalies of 12% or more in southern Tibet and along the eastern edge of the plateau support the idea of a mechanically weak middle-to-lower crust and the existence of crustal flow in Tibet.
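The damped least-squares inversion named in this abstract amounts, in its simplest form, to filtering the singular values of the kernel matrix. The sketch below (Python with numpy assumed) shows that operation on a toy ill-conditioned system; the function name and matrices are illustrative, not the actual surface-wave kernels.

```python
import numpy as np

def damped_lstsq_svd(G, d, eps):
    """Solve min ||G m - d||^2 + eps^2 ||m||^2 via the SVD of G.

    Each singular value s is replaced by the filter factor s / (s^2 + eps^2),
    which stabilises the inversion when G is ill-conditioned."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return Vt.T @ ((s / (s ** 2 + eps ** 2)) * (U.T @ d))

# Toy example: a kernel matrix with a nearly duplicated column.
rng = np.random.default_rng(1)
G = rng.normal(size=(200, 50))
G[:, 1] = G[:, 0] + 1e-6 * rng.normal(size=200)
m_true = rng.normal(size=50)
d = G @ m_true + 0.01 * rng.normal(size=200)
m_est = damped_lstsq_svd(G, d, eps=0.1)
print(np.linalg.norm(m_est - m_true))
```

The damping parameter eps trades resolution against stability; in tomography it is usually chosen from a trade-off curve between data misfit and model norm.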
Abstract:
The seismic survey is the most effective geophysical prospecting method in oil and gas exploration and development. The structure and lithology of geological bodies are now increasingly complex, so the seismic section must offer high resolution if the targets are to be described accurately, and a high signal-to-noise (S/N) ratio is the precondition for high resolution. As an important seismic data processing step, stacking is an effective means of suppressing noise in the records; broadening the stacking surface is particularly important for enhancing genuine reflection signals and suppressing unwanted coherent and random ambient noise. The common reflection surface (CRS) stack is a macro-model-independent seismic imaging method. Based on the similarity of CRP trace gathers within a coherence zone, the CRS stack effectively improves the S/N ratio by stacking over more CMP trace gathers, and it is regarded as an important seismic data processing method. Performing the CRS stack depends on three attributes; however, the CRS equation becomes invalid at large offsets. In this thesis, a method based on a velocity model in the depth domain is put forward. Ray tracing is used to determine the traveltimes of the CRPs on a common reflection surface, and the least-squares method is used to regress the CRS equation. We then stack the coherent seismic data set along these traveltimes to obtain the zero-offset section. At the end of the CRS-stack workflow, a method that uses the dip angle to enhance the S/N ratio is applied. Applied to synthetic examples and field seismic records, the method shows excellent performance in both accuracy and efficiency.
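The full CRS operator depends on three kinematic attributes and is not reproduced here. The sketch below only illustrates the least-squares regression step on a simplified hyperbolic traveltime t^2 = t0^2 + (h/v)^2, fitted to noisy ray-traced traveltimes (Python with numpy assumed; names and values are hypothetical).

```python
import numpy as np

def fit_hyperbolic_moveout(offsets, times):
    """Fit t^2 = a + b * h^2 by ordinary least squares; return the
    zero-offset time t0 = sqrt(a) and stacking velocity v = 1/sqrt(b)."""
    A = np.column_stack([np.ones_like(offsets), offsets ** 2])
    (a, b), *_ = np.linalg.lstsq(A, times ** 2, rcond=None)
    return np.sqrt(a), 1.0 / np.sqrt(b)

# Synthetic "ray-traced" traveltimes for a flat reflector (t0 = 1 s, v = 2500 m/s).
h = np.linspace(0.0, 2000.0, 41)                       # offsets in metres
t = np.sqrt(1.0 + (h / 2500.0) ** 2)
t += 0.002 * np.random.default_rng(2).normal(size=h.size)
print(fit_hyperbolic_moveout(h, t))                    # roughly (1.0, 2500.0)
```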
Abstract:
Template matching by means of cross-correlation is common practice in pattern recognition. However, its sensitivity to deformations of the pattern and the broad, unsharp peaks it produces are significant drawbacks. This paper reviews some results on how these shortcomings can be removed. Several techniques (Matched Spatial Filters, Synthetic Discriminant Functions, Principal Components Projections and Reconstruction Residuals) are reviewed and compared on a common task: locating eyes in a database of faces. New variants are also proposed and compared: Least Squares Discriminant Functions and the combined use of projections on eigenfunctions and the corresponding reconstruction residuals. Finally, approximation networks are introduced in an attempt to improve filter design by the introduction of nonlinearity.
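One way to obtain a least-squares discriminant (correlation) filter of the kind proposed here is to solve a ridge-stabilised least-squares problem for the filter weights. The sketch below is a generic illustration on synthetic patches (Python with numpy assumed; the function name and data are not from the paper).

```python
import numpy as np

def least_squares_filter(X, y, ridge=1e-3):
    """Design a linear filter w minimising ||X w - y||^2 + ridge * ||w||^2,
    where rows of X are vectorised training patches and y holds the desired
    responses (e.g. 1 for target patches, 0 for background)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + ridge * np.eye(p), X.T @ y)

# Toy data: 16x16 "target" patches with a brightness offset vs. background.
rng = np.random.default_rng(3)
targets = rng.normal(loc=1.0, size=(40, 16 * 16))
background = rng.normal(loc=0.0, size=(60, 16 * 16))
X = np.vstack([targets, background])
y = np.concatenate([np.ones(40), np.zeros(60)])
w = least_squares_filter(X, y)
print((targets @ w).mean(), (background @ w).mean())   # well separated scores
```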
Abstract:
We describe a new method for motion estimation and 3D reconstruction from stereo image sequences obtained by a stereo rig moving through a rigid world. We show that, given two stereo pairs, one can compute the motion of the stereo rig directly from the image derivatives (spatial and temporal); correspondences are not required. One can then use the images from both pairs combined to compute a dense depth map. The motion estimates between stereo pairs enable us to combine the depth maps from all the pairs in the sequence into an extended scene reconstruction, and we show results from a real image sequence. The motion computation is a linear least-squares computation using all the pixels in the image. Areas with little or no contrast are implicitly weighted less, so one does not have to apply an explicit confidence measure.
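The paper's linear least-squares motion computation over all pixels is richer than what follows; the sketch below only conveys the flavour of such direct methods for the simplest case of a single global image translation, using the brightness-constancy constraint (Python with numpy assumed; names and data are hypothetical). Low-contrast pixels have small gradients, so they are implicitly down-weighted, as noted in the abstract.

```python
import numpy as np

def estimate_translation(Ix, Iy, It):
    """Least-squares estimate of a global translation (u, v) from the
    brightness-constancy constraint Ix*u + Iy*v + It = 0 at every pixel."""
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Two synthetic frames of a smooth pattern shifted by 0.3 pixels in x.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
frame = lambda dx: np.sin((xx - dx) / 5.0) + np.cos(yy / 7.0)
I0, I1 = frame(0.0), frame(0.3)
Ix, Iy, It = np.gradient(I0, axis=1), np.gradient(I0, axis=0), I1 - I0
print(estimate_translation(Ix, Iy, It))   # roughly (0.3, 0.0)
```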
Abstract:
For applications involving the control of moving vehicles, the recovery of relative motion between a camera and its environment is of high utility. This thesis describes the design and testing of a real-time analog VLSI chip that estimates the focus of expansion (FOE) from measured time-varying images. Our approach assumes a camera moving through a fixed world with translational velocity; the FOE is the projection of the translation vector onto the image plane. This location is the point towards which the camera is moving and from which all other image points appear to expand. Through the camera imaging parameters, the location of the FOE gives the direction of 3-D translation. The algorithm we use for estimating the FOE minimizes the sum of squares of the differences, at every pixel, between the observed time variation of brightness and the predicted variation given the assumed position of the FOE. This minimization is not straightforward, because the relationship between the brightness derivatives depends on the unknown distance to the surface being imaged. However, image points where brightness is instantaneously constant play a critical role. Ideally, the FOE would be at the intersection of the tangents to the iso-brightness contours at these "stationary" points. In practice, brightness derivatives are hard to estimate accurately because the image is quite noisy. Reliable results can nevertheless be obtained if the image contains many stationary points and one finds the point that minimizes the sum of squares of the perpendicular distances to the tangents at the stationary points. The FOE chip calculates the gradient of this least-squares minimization sum, and the estimation is performed by closing a feedback loop around it. The chip has been implemented using an embedded CCD imager for image acquisition and a row-parallel processing scheme. A 64 x 64 version was fabricated in a 2 um CCD/BiCMOS process through MOSIS, with a design goal of 200 mW of on-chip power, a top frame rate of 1000 frames/second, and a basic accuracy of 5%. A complete experimental system which estimates the FOE in real time using real motion and image scenes is demonstrated.
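The least-squares step described above, locating the point that minimizes the sum of squared perpendicular distances to the tangent lines at the stationary points, has a closed form via 2x2 normal equations. The sketch below demonstrates it on synthetic lines through a known FOE (Python with numpy assumed; no chip-level detail is modelled and all names are illustrative).

```python
import numpy as np

def least_squares_intersection(points, normals):
    """Return the point x minimising sum_i (n_i . (x - p_i))^2, i.e. the sum
    of squared perpendicular distances to lines given by a point p_i on each
    line and its unit normal n_i."""
    A, b = np.zeros((2, 2)), np.zeros(2)
    for p, n in zip(points, normals):
        n = n / np.linalg.norm(n)
        M = np.outer(n, n)
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Synthetic tangent lines through a "true" FOE at (3, -2), plus noise.
rng = np.random.default_rng(5)
foe = np.array([3.0, -2.0])
angles = rng.uniform(0, np.pi, size=50)
normals = np.column_stack([np.cos(angles), np.sin(angles)])
dirs = np.column_stack([-np.sin(angles), np.cos(angles)])
points = foe + dirs * rng.uniform(-5, 5, size=(50, 1))
points += 0.05 * rng.normal(size=points.shape)
print(least_squares_intersection(points, normals))   # close to (3, -2)
```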
Abstract:
P-glycoprotein (P-gp), an ATP-binding cassette (ABC) transporter, functions as a biological barrier by extruding cytotoxic agents out of cells, which is an obstacle to chemotherapeutic treatment of cancer. To aid in the development of potential P-gp inhibitors, we constructed a quantitative structure-activity relationship (QSAR) model of flavonoids as P-gp inhibitors based on a Bayesian-regularized neural network (BRNN). A dataset of 57 flavonoids binding to the C-terminal nucleotide-binding domain of mouse P-gp was compiled from the literature. The predictive ability of the model was assessed using a test set independent of the training set, which showed a standard error of prediction of 0.146 +/- 0.006 (data scaled from 0 to 1). Two other modelling tools, a back-propagation neural network (BPNN) and partial least squares (PLS), were also used to build QSAR models. The BRNN provided slightly better results for the test set than the BPNN, but the difference was not significant according to the F-statistic at p = 0.05. PLS failed to build a reliable model in the present study. Our study indicates that the BRNN-based in silico model has good potential for predicting P-gp flavonoid inhibitors and might be applied in further drug design.
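Of the three modelling tools compared, partial least squares is the easiest to illustrate without the flavonoid dataset. The sketch below is a minimal single-response PLS regression via the NIPALS recursion, run on purely synthetic descriptors (Python with numpy assumed; it is not the paper's BRNN model and all names are hypothetical).

```python
import numpy as np

def pls1_fit(X, y, n_components=3):
    """Minimal PLS1 (single-response) regression via NIPALS.
    Returns (coef, intercept) so that y_hat = X @ coef + intercept."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)           # weight vector
        t = Xc @ w                       # score vector
        tt = t @ t
        p = Xc.T @ t / tt                # X loading
        qk = (yc @ t) / tt               # y loading
        Xc, yc = Xc - np.outer(t, p), yc - qk * t   # deflation
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    coef = W @ np.linalg.solve(P.T @ W, q)
    return coef, y_mean - x_mean @ coef

# Synthetic stand-in for a descriptor matrix (57 compounds x 20 descriptors).
rng = np.random.default_rng(6)
X = rng.normal(size=(57, 20))
y = X[:, :3] @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=57)
coef, intercept = pls1_fit(X, y, n_components=3)
print(np.corrcoef(X @ coef + intercept, y)[0, 1])   # close to 1
```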
Abstract:
The heat capacities (Cp) of five types of gasohol (50 wt % ethanol and 50 wt % unleaded gasoline 93(#) (E50), 60 wt % ethanol and 40 wt % unleaded gasoline 93(#) (E60), 70 wt % ethanol and 30 wt % unleaded gasoline 93(#) (E70), 80 wt % ethanol and 20 wt % unleaded gasoline 93(#) (E80), and 90 wt % ethanol and 10 wt % unleaded gasoline 93(#) (E90), where "93" denotes the octane number) were measured by adiabatic calorimetry in the temperature range of 78-320 K. A glass transition was observed at 95.61, 96.14, 96.56, 96.84, and 97.08 K for samples from the E50, E60, E70, E80, and E90 systems, respectively. A liquid-solid phase transition and a solid-liquid phase transition were observed in the respective temperature ranges of 118-153 and 155-163 K for E50, 117-150 and 151-164 K for E60, 115-154 and 154-166 K for E70, 113-152 and 152-167 K for E80, and 112-151 and 1581-167 K for E90. Polynomial equations for Cp and for the excess heat capacities (CpE), as functions of the thermodynamic temperature, were established through least-squares fitting. Based on thermodynamic relationships and the equations obtained, the thermodynamic functions and the excess thermodynamic functions of the five gasohol samples were derived.
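The least-squares polynomial fitting step mentioned above is routine; a minimal sketch follows, using synthetic (not measured) heat-capacity values and the usual reduced-temperature variable for numerical conditioning (Python with numpy assumed).

```python
import numpy as np

# Synthetic heat-capacity data (arbitrary units) over 78-320 K.
T = np.linspace(78.0, 320.0, 120)
Cp = 80.0 + 0.45 * (T - 200.0) + 1.5e-3 * (T - 200.0) ** 2
Cp += 0.5 * np.random.default_rng(7).normal(size=T.size)

# Reduced temperature x in [-1, 1], the usual variable for reported Cp polynomials.
x = (T - (T.max() + T.min()) / 2.0) / ((T.max() - T.min()) / 2.0)
coeffs = np.polynomial.polynomial.polyfit(x, Cp, deg=4)   # least-squares fit
Cp_fit = np.polynomial.polynomial.polyval(x, coeffs)
print("max residual:", np.max(np.abs(Cp - Cp_fit)))
```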
Abstract:
Rowland, J.J. and Taylor, J. (2002). Adaptive denoising in spectral analysis by genetic programming. Proc. IEEE Congress on Evolutionary Computation (part of WCCI), May 2002. pp 133-138. ISBN 0-7803-7281-6
Abstract:
Liu, Yonghuai. Automatic 3D free form shape matching using the graduated assignment algorithm. Pattern Recognition, vol. 38, no. 10, pp. 1615-1631, 2005.
Abstract:
D.J. Currie, M.H. Lee and R.W. Todd, 'Prediction of Physical Properties of Yeast Cell Suspensions using Dielectric Spectroscopy', Conference on Electrical Insulation and Dielectric Phenomena (CEIDP 2006), Annual Report, pp. 672-675, October 15-18, 2006, Kansas City, Missouri, USA. Organised by the IEEE Dielectrics and Electrical Insulation Society.
Abstract:
This paper analyses the asymptotic properties of nonlinear least squares estimators of the long-run parameters in a bivariate unbalanced cointegration framework. Unbalanced cointegration refers to the situation where the integration orders of the observables are different, but their corresponding balanced versions (with equal integration orders after filtering) are cointegrated in the usual sense. Within this setting, the long-run linkage between the observables is driven by both the cointegrating parameter and the difference between the integration orders of the observables, which we consider to be unknown. Our results reveal three noticeable features. First, superconsistent (faster than √n-consistent) estimators of the difference between memory parameters are achievable. Next, the joint limiting distribution of the estimators of both parameters is singular, and, finally, a modified version of the "Type II" fractional Brownian motion arises in the limiting theory. A Monte Carlo experiment and the discussion of an economic example are included.
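The estimators and asymptotic theory analysed in the paper are not reproduced here. The sketch below only illustrates, on simulated data, a nonlinear least-squares fit that jointly recovers a long-run coefficient and an unknown fractional-differencing order (Python with numpy and scipy assumed; the data-generating process and all names are purely illustrative).

```python
import numpy as np
from scipy.optimize import least_squares

def fracdiff(x, d):
    """Apply the truncated fractional difference operator (1 - L)^d."""
    n = len(x)
    pi = np.empty(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    return np.array([np.dot(pi[: t + 1], x[t::-1]) for t in range(n)])

# Stylised unbalanced setup: x is I(1); y loads on the fractionally filtered x.
rng = np.random.default_rng(8)
n = 500
x = np.cumsum(rng.normal(size=n))                 # random walk
beta_true, delta_true = 2.0, 0.6
y = beta_true * fracdiff(x, delta_true) + 0.2 * rng.normal(size=n)

def residuals(theta):
    beta, delta = theta
    return y - beta * fracdiff(x, delta)

fit = least_squares(residuals, x0=[1.0, 0.3], bounds=([-10.0, 0.0], [10.0, 1.0]))
print(fit.x)   # close to (2.0, 0.6)
```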
Abstract:
Dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Communication Sciences, branch of Marketing and Advertising.
Abstract:
Accurate knowledge of traffic demands in a communication network enables or enhances a variety of traffic engineering and network management tasks of paramount importance for operational networks. Directly measuring a complete set of these demands is prohibitively expensive because of the huge amounts of data that must be collected and the performance impact that such measurements would impose on the regular behavior of the network. As a consequence, we must rely on statistical techniques to produce estimates of actual traffic demands from partial information. The performance of such techniques is limited, however, by their reliance on incomplete information and by the large amount of computation they require, which constrains their convergence behavior. In this paper we study strategies to improve the convergence of a powerful statistical technique based on an Expectation-Maximization (EM) iterative algorithm. First, we analyze modeling approaches to generating starting points. We call these starting points informed priors, since they are obtained using actual network information such as packet traces and SNMP link counts. Second, we provide a very fast variant of the EM algorithm that extends its computation range, increasing its accuracy and decreasing its dependence on the quality of the starting point. Finally, we study the convergence characteristics of our EM algorithm and compare it against a recently proposed Weighted Least Squares approach.
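The EM algorithm studied in the paper is not sketched here; the toy example below only illustrates the weighted/regularised least-squares style of traffic-matrix estimation it is compared against, using an informed prior as side information (Python with numpy assumed; the routing matrix, prior, and function name are hypothetical).

```python
import numpy as np

def wls_traffic_estimate(A, y, x_prior, lam=1.0):
    """Estimate origin-destination demands x from link counts y = A x by
    minimising ||A x - y||^2 + lam * ||x - x_prior||^2, where x_prior is an
    informed starting point (e.g. derived from SNMP history)."""
    n = A.shape[1]
    M = np.vstack([A, np.sqrt(lam) * np.eye(n)])       # stack data and prior
    b = np.concatenate([y, np.sqrt(lam) * x_prior])
    x_hat, *_ = np.linalg.lstsq(M, b, rcond=None)
    return x_hat

# Toy network: 4 OD demands observed through 3 link counts.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
x_true = np.array([10.0, 5.0, 8.0, 2.0])
y = A @ x_true
x_prior = np.array([9.0, 6.0, 7.0, 3.0])
print(wls_traffic_estimate(A, y, x_prior, lam=0.1))    # pulled toward x_true
```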