990 results for predictive algorithm


Relevance:

20.00%

Publisher:

Abstract:

Ocean wind speed and wind direction are estimated simultaneously using the normalized radar cross sections (σ⁰) of two neighboring (25-km) blocks, within a given synthetic aperture radar (SAR) image, that have slightly different incidence angles. The method is motivated by the methodology used for scatterometer data. The wind direction ambiguity is removed by choosing the direction closest to that given by a buoy or some other source of information. We demonstrate this method with 11 ENVISAT Advanced SAR images of the Gulf of Mexico and coastal waters of the North Atlantic. Estimated wind vectors are compared with wind measurements from buoys and with scatterometer data. We show that this method can surpass other methods for extracting wind vectors in some cases, even when the SAR images lack sufficiently visible wind-induced streaks.
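The ambiguity-removal step, picking whichever candidate direction lies closest to a buoy reading on the circle, is simple to state in code. A minimal sketch (the function names are ours, not the paper's):

```python
def resolve_ambiguity(candidates_deg, reference_deg):
    """Pick the candidate wind direction closest to a reference
    (e.g. buoy) direction, accounting for 360-degree wrap-around."""
    def angular_dist(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return min(candidates_deg, key=lambda c: angular_dist(c, reference_deg))
```

For two ambiguous solutions 180° apart, the wrap-around metric ensures the choice is correct even when the reference direction sits near 0°/360°.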

Relevance:

20.00%

Publisher:

Abstract:

The conditional nonlinear optimal perturbation (CNOP), a nonlinear generalization of the linear singular vector (LSV), is applied to important problems in the atmospheric and oceanic sciences, including ENSO predictability, targeted observations, and ensemble forecasting. In this study, we investigate the computational cost of obtaining the CNOP by several methods. Differences and similarities in computational error and cost are compared among the sequential quadratic programming (SQP) algorithm, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, and the spectral projected gradient (SPG2) algorithm, using a theoretical grassland ecosystem model and the classical Lorenz model as examples. Numerical results demonstrate that the computational error is acceptable with all three algorithms, and that the SQP algorithm reduces the cost of obtaining the CNOP. The experimental results also reveal that the L-BFGS algorithm is the most effective of the three optimization algorithms for obtaining the CNOP. The numerical results suggest a new approach and algorithm for obtaining the CNOP in large-scale optimization problems.
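As an illustration of the kind of constrained optimization involved, the following sketch computes a CNOP-like perturbation for the Lorenz-63 model by projected gradient ascent with finite-difference gradients. It is not the SQP, L-BFGS, or SPG2 implementation the study compares; the model, norm, step sizes, and constraint radius are all illustrative assumptions.

```python
import numpy as np

def lorenz_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 system.
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def propagate(x, steps=100):
    for _ in range(steps):
        x = lorenz_step(x)
    return x

def cnop(x0, delta, iters=200, lr=0.05):
    # Projected gradient ascent: find the initial perturbation of norm
    # at most delta whose nonlinear evolution departs farthest from the
    # unperturbed trajectory at the final time.
    base = propagate(x0)

    def growth(p):
        return np.linalg.norm(propagate(x0 + p) - base)

    p = np.full(3, 1e-3)
    for _ in range(iters):
        g = np.zeros(3)                      # finite-difference gradient
        for i in range(3):
            e = np.zeros(3)
            e[i] = 1e-6
            g[i] = (growth(p + e) - growth(p - e)) / 2e-6
        p = p + lr * g
        n = np.linalg.norm(p)
        if n > delta:                        # project onto the constraint ball
            p *= delta / n
    return p, growth(p)
```

The projection step is what distinguishes the CNOP setting from unconstrained optimization: the optimal perturbation typically lies on the boundary of the admissible ball.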

Relevance:

20.00%

Publisher:

Abstract:

Time-varying delays and packet loss over the Internet severely degrade the performance of teleoperated robot systems and can even destabilize them. To address this problem, a new Internet-based control structure for teleoperated robots is proposed. By attaching time stamps to the commands issued at the master side, the past round-trip delays of the system loop are obtained; a multiple linear regression algorithm then predicts the loop delay at the next instant, and a generalized predictive controller is designed at the slave side to control the remote robot, thereby mitigating the effect of time-varying delay on system performance. The redundant control information generated by the generalized predictive controller reduces the impact of network packet loss on the system. Finally, stability conditions for the system are derived from the stability theorem of predictive control. Simulation results show that the method effectively resolves the performance degradation caused by time-varying delays and network packet loss.
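The delay-prediction step, fitting a multiple linear regression that maps the last few measured round-trip delays to the next one, can be sketched as follows. The model order and interface are our assumptions, not details from the paper:

```python
import numpy as np

def fit_delay_predictor(delays, order=3):
    # Least-squares fit of a linear model that predicts the next
    # round-trip delay from the previous `order` delays (oldest first).
    d = np.asarray(delays, dtype=float)
    X = np.column_stack([d[i:len(d) - order + i] for i in range(order)])
    X = np.column_stack([np.ones(len(X)), X])      # intercept column
    y = d[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(coef, recent):
    # `recent`: the last `order` delays, oldest first.
    return coef[0] + float(np.dot(coef[1:], recent))
```

In the control loop, the predicted delay would decide how far ahead the generalized predictive controller must plan before the next measurement arrives.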

Relevance:

20.00%

Publisher:

Abstract:

A predictive control algorithm is used to construct a new control structure with time-delay compensators, in which compensators are designed in both the forward and feedback channels to compensate for the network delays. Experimental results show that the new control structure, with its predictor and compensators, improves the dynamic performance of the system and guarantees stability in the presence of delays and data loss.
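The compensator idea, using an internal model to remove the delayed measurement from the feedback path, is close in spirit to a Smith predictor. A minimal sketch on a first-order discrete plant with input delay; all plant and controller parameters are invented for illustration:

```python
def smith_sim(a=0.9, b=0.1, d=5, kp=2.0, r=1.0, steps=200):
    # Smith-predictor-style compensation: the feedback signal swaps the
    # (delayed) plant output for the undelayed output of an internal
    # model, so the controller effectively sees a delay-free loop.
    y, ym = 0.0, 0.0
    u_hist = [0.0] * d       # control inputs in flight (network delay)
    ym_hist = [0.0] * d      # model outputs, delayed by d steps
    out = []
    for _ in range(steps):
        y_fb = y - ym_hist[0] + ym     # compensated feedback signal
        u = kp * (r - y_fb)
        y = a * y + b * u_hist[0]      # plant sees the delayed input
        ym_hist.append(ym)
        ym_hist.pop(0)
        ym = a * ym + b * u            # delay-free internal model
        u_hist.append(u)
        u_hist.pop(0)
        out.append(y)
    return out
```

With a proportional controller the loop settles to kp·G(1)/(1 + kp·G(1))·r = 2/3 here, demonstrating that the compensation removes the delay from the loop dynamics even though the plant output itself is still delayed.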

Relevance:

20.00%

Publisher:

Abstract:

The complexity of the underwater environment and the uncertainty of the vehicle's own model make the control of underwater vehicles very difficult. Addressing the characteristics of underwater vehicles and the problems in their control, a neural-network adaptive inverse control structure and training algorithm based on a predictor-corrector control strategy is proposed. By identifying the system's forward model online, the system's Jacobian matrix is estimated, and the controller is then adapted using the prediction-error method. At the same time, to improve the system's robustness to external disturbances, a derivative term is introduced into the cost function on top of the pseudo-linear regression algorithm. Theoretical analysis and simulation results show that, compared with the original algorithm, the introduction of the derivative term improves the system's robustness to external disturbances and its dynamic performance.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, a new scheduling algorithm for the flexible manufacturing cell is presented: a discrete-time control method with a fixed-length control period combined with event interruption. At the flow control level we simultaneously determine the production mix and the proportion of parts to be processed through each route. Simulation results for a hypothetical manufacturing cell are presented.
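The flow-control level, choosing a production mix and routing proportions under machine-capacity constraints, is naturally a linear program. A hypothetical sketch with two part types, two routes for the first part, and invented processing times; none of the numbers come from the paper:

```python
from scipy.optimize import linprog

# Decision variables (parts per shift): x0 = part A via route 1,
# x1 = part A via route 2, x2 = part B via its single route.
c = [-1.0, -1.0, -1.0]              # maximize total throughput
A_ub = [[2.0, 1.0, 3.0],            # minutes per part on machine M1
        [1.0, 3.0, 1.0]]            # minutes per part on machine M2
b_ub = [480.0, 480.0]               # machine minutes available per shift
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
mix = res.x                          # optimal production mix and routing
```

The optimizer splits part A across both routes (192 and 96 parts) and drops part B entirely, since it consumes the most M1 time per unit of throughput; a real flow controller would add demand constraints to keep the mix balanced.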

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a generalized predictive pole-placement feedforward self-tuning control algorithm. Computer simulation results show that the algorithm achieves good control quality and can eliminate the effect of measurable disturbances on the system output.

Relevance:

20.00%

Publisher:

Abstract:

In exploration geophysics, velocity analysis and all migration methods except reverse time migration are based on ray theory or the one-way wave equation, so multiples are regarded as noise and must be attenuated. Attenuating multiples is very important for structural imaging and amplitude-preserving migration, so predicting and attenuating internal multiples effectively is an interesting research topic in both theory and application. There are two wave-equation-based methods for predicting internal multiples in pre-stack data: the common focus point method and the inverse scattering series method. After comparing the two, we found four problems with the common focus point method: (1) it depends on a velocity model; (2) only the internal multiples related to a single layer can be predicted at a time; (3) the computing procedure is complex; (4) it is difficult to apply in complex media. To overcome these problems, we adopt the inverse scattering series method. However, this method also has drawbacks: (1) its computing cost is high; (2) it struggles to predict internal multiples at far offsets; (3) it cannot predict internal multiples in complex media. Among these, high computing cost is the biggest barrier to field seismic processing, so I present improved 1D and 1.5D algorithms that reduce computing time. In addition, I propose a new algorithm to solve the problem that arises in subtraction, especially for surface-related multiples. The main contributions of this research are as follows: (1) an improved inverse scattering series prediction algorithm for 1D, with very high computing efficiency: about twelve times faster than the old algorithm in theory, and about eighty times faster in practice owing to its lower spatial complexity; (2) an improved inverse scattering series prediction algorithm for 1.5D, which moves the computation from the pseudo-depth wavenumber domain to the T-X domain for predicting multiples; the improved algorithm offers higher computing efficiency, applicability to many kinds of geometries, lower predictive noise, and independence from the wavelet; (3) a new subtraction algorithm, which does not try to overcome nonorthogonality but instead uses the distribution of the nonorthogonality in the T-X domain to estimate the true wavelet by filtering; the method is highly effective in model tests. The improved 1D and 1.5D inverse scattering series algorithms can predict internal multiples, and after filtering and subtraction among seismic traces within a time window, internal multiples can be attenuated to some degree. The proposed 1D and 1.5D algorithms are shown to be effective on both synthetic and field data, and the new subtraction algorithm is effective on complex theoretical models.

Relevance:

20.00%

Publisher:

Abstract:

In the principles-and-parameters model of language, the principle known as "free indexation'' plays an important part in determining the referential properties of elements such as anaphors and pronominals. This paper addresses two issues. (1) We investigate the combinatorics of free indexation. In particular, we show that free indexation must produce an exponential number of referentially distinct structures. (2) We introduce a compositional free indexation algorithm. We prove that the algorithm is "optimal.'' More precisely, by relating the compositional structure of the formulation to the combinatorial analysis, we show that the algorithm enumerates precisely all possible indexings, without duplicates.
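The compositional enumeration can be realized with restricted-growth indexing: each new element either reuses an index already in play or opens a fresh one, which yields every referentially distinct indexing exactly once, without duplicates. The counts are the Bell numbers, which grow at least exponentially, matching the combinatorial claim. A sketch of the idea (our formulation, not the paper's algorithm verbatim):

```python
def free_indexation(n):
    # Enumerate all indexings of n nominal elements, compositionally:
    # element k may take any index used so far, or the next unused one.
    # Restricted growth guarantees each indexing appears exactly once.
    def extend(partial, used):
        if len(partial) == n:
            yield tuple(partial)
            return
        for idx in range(used + 1):
            yield from extend(partial + [idx], max(used, idx + 1))
    return list(extend([], 0))
```

For three elements this produces the five indexings (0,0,0), (0,0,1), (0,1,0), (0,1,1), (0,1,2), i.e. Bell(3) = 5.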

Relevance:

20.00%

Publisher:

Abstract:

The problem of minimizing a multivariate function is recurrent in many disciplines, such as Physics, Mathematics, Engineering and, of course, Computer Science. In this paper we describe a simple nondeterministic algorithm based on the idea of adaptive noise, which proved particularly effective in the minimization of a class of multivariate, continuous-valued, smooth functions associated with a recent extension of regularization theory by Poggio and Girosi (1990). Results obtained with this method are also compared with those of a more traditional gradient descent technique.
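A minimal sketch of the adaptive-noise idea: the noise scale grows after accepted moves and shrinks after rejected ones. The acceptance rule, schedule constants, and interface here are our assumptions, not the paper's algorithm:

```python
import random

def adaptive_noise_minimize(f, x0, iters=2000, seed=0):
    # Nondeterministic descent: propose Gaussian perturbations of the
    # current point; the noise scale adapts, growing after an accepted
    # move and shrinking after a rejected one.
    rng = random.Random(seed)
    x, fx, scale = list(x0), f(x0), 1.0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, scale) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            scale = min(scale * 1.1, 10.0)   # widen the search
        else:
            scale = max(scale * 0.95, 1e-8)  # narrow the search
    return x, fx
```

Unlike fixed-step gradient descent, the scheme needs no gradient information, and the shrinking scale plays the role a decreasing learning rate plays in the deterministic setting.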

Relevance:

20.00%

Publisher:

Abstract:

A polynomial time algorithm (pruned correspondence search, PCS) with good average case performance for solving a wide class of geometric maximal matching problems, including the problem of recognizing 3D objects from a single 2D image, is presented. Efficient verification algorithms, based on a linear representation of location constraints, are given for the case of affine transformations among vector spaces and for the case of rigid 2D and 3D transformations with scale. Some preliminary experiments suggest that PCS is a practical algorithm. Its similarity to existing correspondence based algorithms means that a number of existing techniques for speedup can be incorporated into PCS to improve its performance.

Relevance:

20.00%

Publisher:

Abstract:

In this paper we present some extensions to the k-means algorithm for vector quantization that permit its efficient use in image segmentation and pattern classification tasks. It is shown that by introducing state variables that correspond to certain statistics of the dynamic behavior of the algorithm, it is possible to find the representative centers of the lower-dimensional manifolds that define the boundaries between classes for clouds of multi-dimensional, multi-class data; this permits one, for example, to find class boundaries directly from sparse data (e.g., in image segmentation tasks) or to efficiently place centers for pattern classification (e.g., with local Gaussian classifiers). The same state variables can be used to define algorithms for adaptively determining the optimal number of centers for clouds of data with space-varying density. Some examples of the application of these extensions are also given.
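A sketch of the underlying idea: ordinary k-means augmented with a per-center state variable (here, a running win count) of the kind of dynamic statistic that can drive boundary placement and adaptive center counts. The specific statistics the paper uses are not reproduced here:

```python
import numpy as np

def kmeans_with_state(X, k, iters=50, seed=0):
    # Plain Lloyd-style k-means, plus a per-center state variable: the
    # cumulative number of points each center wins across iterations.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    wins = np.zeros(k)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        wins += np.bincount(labels, minlength=k)   # update the state variable
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels, wins
```

A center whose win count stays near zero is a candidate for removal, and one that dominates a dense region is a candidate for splitting, which is the flavor of the adaptive-center-count extension.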

Relevance:

20.00%

Publisher:

Abstract:

Amorphous computing is the study of programming ultra-scale computing environments of smart sensors and actuators. The individual elements are identical, asynchronous, randomly placed, embedded, and communicate locally via wireless broadcast. Aggregating the processors into groups is a useful paradigm for programming an amorphous computer because groups can be used for specialization, increased robustness, and efficient resource allocation. This paper presents a new algorithm, called the clubs algorithm, for efficiently aggregating processors into groups in an amorphous computer, in time proportional to the local density of processors. The clubs algorithm is well suited to the unique characteristics of an amorphous computer. In addition, the algorithm derives two properties from the physical embedding of the amorphous computer: an upper bound on the number of groups formed and a constant upper bound on the density of groups. The clubs algorithm can also be extended to find the maximal independent set (MIS) and a (Δ + 1)-vertex coloring in an amorphous computer in O(log N) rounds, where N is the total number of elements and Δ is the maximum degree.
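The group-formation step can be sketched as a random wake-up process on a static undirected communication graph: a processor that wakes up unrecruited declares itself a leader and recruits its neighbors, so the leaders end up forming a maximal independent set. This serial simulation stands in for the asynchronous, local-broadcast execution of a real amorphous computer:

```python
import random

def clubs(neighbors, seed=0):
    # `neighbors`: undirected adjacency dict {node: [neighbor, ...]}.
    # Processors wake up in a random order; an unrecruited processor
    # founds a club and recruits its (still unrecruited) neighbors.
    rng = random.Random(seed)
    order = sorted(neighbors, key=lambda v: rng.random())
    leader_of = {}
    for v in order:
        if v not in leader_of:
            leader_of[v] = v                     # v founds a club
            for u in neighbors[v]:
                leader_of.setdefault(u, v)       # earlier recruits keep theirs
    return leader_of
```

Because a recruited processor can never found a club, no two adjacent processors are leaders, and every processor is either a leader or adjacent to its leader: exactly the maximal-independent-set property the paper exploits.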

Relevance:

20.00%

Publisher:

Abstract:

Chow and Liu introduced an algorithm for fitting a multivariate distribution with a tree, i.e. a density model that assumes only pairwise dependencies between variables, with the graph of those dependencies forming a spanning tree. The original algorithm is quadratic in the dimension of the domain and linear in the number of data points that define the target distribution P. This paper shows that for sparse, discrete data, fitting a tree distribution can be done in time and memory jointly subquadratic in the number of variables and the size of the data set. The new algorithm, called the acCL algorithm, takes advantage of the sparsity of the data to accelerate the computation of pairwise marginals and the sorting of the resulting mutual information values, achieving speedups of up to two to three orders of magnitude in the experiments.
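A compact, unaccelerated version of the Chow-Liu procedure, pairwise empirical mutual information followed by a maximum-weight spanning tree via Kruskal's algorithm, can be sketched as follows; the acCL sparsity tricks are deliberately not reproduced here:

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    # Empirical mutual information between two discrete columns.
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def chow_liu_tree(data):
    # Maximum-weight spanning tree over pairwise mutual information,
    # built greedily (Kruskal) with a union-find over the variables.
    d = data.shape[1]
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(d), 2)), reverse=True)
    parent = list(range(d))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

This naive version is exactly the quadratic-in-dimension baseline the paper accelerates: every one of the d(d-1)/2 pairs gets a full pass over the data before the tree is assembled.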