1000 results for Discretization Algorithm


Relevance: 20.00%

Abstract:

It is necessary to generate the automorphism group of a chemical graph in computer-aided structure elucidation. In this paper, an algorithm based on the all-path topological symmetry method is developed to build the automorphism group of a chemical graph. A comparison of several topological symmetry algorithms reveals that the all-path algorithm yields the correct equivalence classes of a chemical graph. This lays a foundation for the ESESOC system for computer-aided structure elucidation.

Relevance: 20.00%

Abstract:

It is important to identify rings in the process of structure elucidation. In this paper, all rings and the smallest set of smallest rings (SSSR) of a structure are obtained from its two-dimensional connection table. The results of using this algorithm as a constraint in the ESESOC expert system are satisfactory.
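The abstract does not give the algorithm itself, so the following is only a minimal sketch of ring perception from a connection table: enumerate all simple rings by depth-first search, then keep the E - V + 1 smallest as a greedy SSSR. All names and the adjacency-dict input format are illustrative assumptions.

```python
def all_rings(adj):
    """Enumerate every simple ring of an undirected graph given as an
    adjacency dict (a minimal stand-in for a 2-D connection table)."""
    rings = set()

    def dfs(start, node, path):
        for nxt in adj[node]:
            if nxt == start and len(path) >= 3:
                rings.add(frozenset(path))       # ring closed back to start
            elif nxt not in path and nxt > start:
                dfs(start, nxt, path + [nxt])    # canonical start = min node
    for v in adj:
        dfs(v, v, [v])
    return rings

def sssr(adj):
    """Greedy SSSR for a connected graph: keep the E - V + 1 smallest rings."""
    n = sum(len(nb) for nb in adj.values()) // 2 - len(adj) + 1
    return sorted(all_rings(adj), key=len)[:n]
```

For two fused triangles (nodes 0-1-2 and 1-2-3 sharing edge 1-2), `all_rings` finds three rings (the two triangles and the outer 4-ring), and `sssr` keeps the two triangles.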

Relevance: 20.00%

Abstract:

During the development of our ESESOC system (Expert System for the Elucidation of the Structures of Organic Compounds), computer perception of topological symmetry is essential in searching for the canonical description of a molecular structure, removing the redundant connections in the structure generation process, and specifying the number of peaks in C-13- and H-1-NMR spectra in the structure evaluation process. In the present paper, a new path identifier is introduced and an algorithm for detection of topological symmetry from a connection table is developed by the all-paths method. (C) 1999 Elsevier Science B.V. All rights reserved.
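As a greatly simplified sketch of the all-paths idea (the paper's path identifiers also encode bond orders and more; here a path is just the concatenation of atom labels), atoms whose sorted lists of path identifiers coincide are placed in one topological equivalence class. The function names and inputs are assumptions, not the paper's API.

```python
from collections import defaultdict

def atom_paths(adj, labels, start):
    """All simple paths from `start`, each encoded as a string of atom labels."""
    out = []

    def dfs(node, visited, code):
        out.append(code)
        for nxt in adj[node]:
            if nxt not in visited:
                dfs(nxt, visited | {nxt}, code + labels[nxt])
    dfs(start, {start}, labels[start])
    return sorted(out)

def symmetry_classes(adj, labels):
    """Group atoms whose sorted path-identifier lists coincide."""
    groups = defaultdict(list)
    for v in adj:
        groups[tuple(atom_paths(adj, labels, v))].append(v)
    return sorted(groups.values())
```

On an n-pentane carbon skeleton (a 5-atom chain) this yields the three expected classes: the two terminal carbons, the two inner carbons, and the central one.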

Relevance: 20.00%

Abstract:

For the exhaustive and irredundant generation of candidate structures in ESESOC (Expert System for the Elucidation of the Structures of Organic Compounds), a new algorithm for computer perception of topological equivalence classes of the nodes (non-hydrogen atoms) is presented.

Relevance: 20.00%

Abstract:

Ocean wind speed and wind direction are estimated simultaneously using the normalized radar cross sections (sigma-0) corresponding to two neighboring (25-km) blocks, within a given synthetic aperture radar (SAR) image, having slightly different incidence angles. This method is motivated by the methodology used for scatterometer data. The wind direction ambiguity is removed by using the direction closest to that given by a buoy or some other source of information. We demonstrate this method with 11 ENVISAT Advanced SAR sensor images of the Gulf of Mexico and coastal waters of the North Atlantic. Estimated wind vectors are compared with wind measurements from buoys and scatterometer data. We show that, in some cases, this method can surpass other methods in extracting wind vectors, even for images with insufficient visible wind-induced streaks.
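A scatterometer-style retrieval of this kind can be sketched as follows. The model function below is a toy stand-in, not CMOD or any operational geophysical model function, and every coefficient in it is an invented assumption; the point is only the structure: fit (speed, direction) to two sigma-0 measurements at slightly different incidence angles, then break the 180-degree direction ambiguity with a buoy direction.

```python
import math

def toy_gmf(u, wind_dir, look_dir, incidence):
    """Toy geophysical model function (NOT CMOD): sigma-0 from wind speed u
    (m/s), wind and radar look directions (deg), and incidence angle (deg)."""
    chi = math.radians(wind_dir - look_dir)
    return (0.1 - 0.001 * incidence) * u ** 0.8 * (1.0 + 0.3 * math.cos(2 * chi))

def retrieve_wind(s0_a, s0_b, inc_a, inc_b, look_dir, buoy_dir):
    """Brute-force inversion over (speed, direction) for two neighboring
    blocks; the 180-degree ambiguity is removed with the buoy direction."""
    best = (float("inf"), 0.0, 0)
    for step in range(4, 101):                 # speeds 1.0 .. 25.0 m/s
        u = 0.25 * step
        for d in range(0, 360, 2):
            err = ((toy_gmf(u, d, look_dir, inc_a) - s0_a) ** 2 +
                   (toy_gmf(u, d, look_dir, inc_b) - s0_b) ** 2)
            if err < best[0]:
                best = (err, u, d)
    _, u, d = best
    alt = (d + 180) % 360                      # the ambiguous partner direction

    def gap(a, b):
        return min(abs(a - b) % 360, 360 - abs(a - b) % 360)
    return u, (d if gap(d, buoy_dir) <= gap(alt, buoy_dir) else alt)
```

Feeding the toy model's own output back in recovers the wind vector, with the buoy choosing between the two ambiguous directions.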

Relevance: 20.00%

Abstract:

The conditional nonlinear optimal perturbation (CNOP), which is a nonlinear generalization of the linear singular vector (LSV), is applied to important problems of atmospheric and oceanic sciences, including ENSO predictability, targeted observations, and ensemble forecasting. In this study, we investigate the computational cost of obtaining the CNOP by several methods. Differences and similarities, in terms of the computational error and cost in obtaining the CNOP, are compared among the sequential quadratic programming (SQP) algorithm, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, and the spectral projected gradient (SPG2) algorithm. A theoretical grassland ecosystem model and the classical Lorenz model are used as examples. Numerical results demonstrate that the computational error is acceptable with all three algorithms. The computational cost of obtaining the CNOP is reduced by using the SQP algorithm. The experimental results also reveal that the L-BFGS algorithm is the most effective of the three optimization algorithms for obtaining the CNOP. The numerical results suggest a new approach and algorithm for obtaining the CNOP for large-scale optimization problems.
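The CNOP is the initial perturbation of bounded norm whose nonlinear evolution departs most from the unperturbed forecast. The sketch below solves this constrained maximization on the Lorenz-63 model with plain projected gradient ascent and finite-difference gradients; it is not any of the paper's three optimizers (SQP, L-BFGS, SPG2), and all step sizes and tolerances are illustrative assumptions.

```python
import math

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def rhs(s):
    x, y, z = s
    return (SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z)

def integrate(s, steps=200, dt=0.01):
    """RK4 integration of the Lorenz-63 model."""
    for _ in range(steps):
        k1 = rhs(s)
        k2 = rhs(tuple(s[i] + 0.5 * dt * k1[i] for i in range(3)))
        k3 = rhs(tuple(s[i] + 0.5 * dt * k2[i] for i in range(3)))
        k4 = rhs(tuple(s[i] + dt * k3[i] for i in range(3)))
        s = tuple(s[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return s

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def cnop(base, delta=0.1, iters=60, lr=0.02, eps=1e-5):
    """Projected gradient ascent for the CNOP: find the perturbation p with
    ||p|| <= delta that maximizes the nonlinear forecast departure J(p)."""
    ref = integrate(base)

    def J(p):
        evolved = integrate(tuple(base[i] + p[i] for i in range(3)))
        return norm(tuple(evolved[i] - ref[i] for i in range(3)))

    p = (0.5 * delta, 0.0, 0.0)
    j0 = J(p)
    for _ in range(iters):
        jp = J(p)
        grad = []
        for i in range(3):                     # finite-difference gradient
            q = list(p)
            q[i] += eps
            grad.append((J(tuple(q)) - jp) / eps)
        p = tuple(p[i] + lr * grad[i] for i in range(3))
        if norm(p) > delta:                    # project back onto the ball
            scale = delta / norm(p)
            p = tuple(c * scale for c in p)
    return p, j0, J(p)
```

The constraint ball is enforced by projection, which is essentially what SPG-type methods do on this feasible set; the paper's comparison concerns how efficiently more sophisticated optimizers reach the same maximizer.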

Relevance: 20.00%

Abstract:

In this paper, a new scheduling algorithm for a flexible manufacturing cell is presented: a discrete-time control method with a fixed-length control period combined with event interruption. At the flow-control level we determine simultaneously the production mix and the proportion of parts to be processed through each route. Simulation results for a hypothetical manufacturing cell are presented.
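The control structure described (periodic review plus event interruption) can be sketched as below. The mix rule, data format, and all numbers are hypothetical, chosen only to show fixed-period recomputation interleaved with event-triggered recomputation.

```python
def flow_control(horizon, period, routes, events):
    """Fixed-period review combined with event interruption: at each review
    instant, and immediately upon a machine event, the part mix over the
    routes is recomputed in proportion to current route capacities.
    `events` maps a time to (route, new_capacity)."""
    instants = sorted(set(range(0, horizon, period)) | set(events))
    plan = []
    for t in instants:
        if t in events:                        # event interruption, e.g. failure
            route, cap = events[t]
            routes = dict(routes, **{route: cap})
        total = sum(routes.values())
        plan.append((t, {r: c / total for r, c in routes.items()}))
    return plan
```

With two equal routes and a capacity-halving failure on route "A" at t = 7, the mix shifts from 50/50 to one third on "A" for all later instants.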

Relevance: 20.00%

Abstract:

This dissertation presents a series of irregular-grid-based numerical techniques for modeling seismic wave propagation in heterogeneous media. The study involves the generation of the irregular numerical mesh corresponding to the irregular-grid scheme, the discretized version of the equations of motion on the unstructured mesh, and irregular-grid absorbing boundary conditions. The resulting numerical technique has been used to generate synthetic data sets on realistic complex geologic models for examining migration schemes. The discretization of the equations of motion and the modeling are based on the Grid Method. The key idea is to use the integral equilibrium principle in place of the per-grid operator of finite-difference schemes and the variational formulation of the finite element method. The irregular grid of a complex geologic model is generated by the Paving Method, which allows the grid spacing to vary according to meshing constraints. The grids have high quality at domain boundaries and contain matching nodes at interfaces, which avoids interpolation of parameters and variables. The irregular-grid absorbing boundary condition is developed by extending the perfectly matched layer (PML) method to rotated local coordinates. The split PML equations of the first-order system are derived using the integral equilibrium principle. The proposed scheme can build a PML boundary of arbitrary geometry in the computational domain, avoiding the special treatment of corners in the standard PML method and saving considerable memory and computational cost. The numerical implementation demonstrates the desired qualities of the irregular-grid modeling technique.
In particular, (1) smaller memory requirements and computational times are achieved by varying the grid spacing according to the local velocity; (2) arbitrary surface and interface topographies are described accurately, removing the artificial reflections that result from the staircase approximation of curved or dipping interfaces; and (3) the computational domain is significantly reduced by flexibly building curved artificial boundaries with the irregular-grid absorbing boundary conditions. The proposed irregular-grid approach is applied to reverse-time migration as the extrapolation algorithm. It can discretize the smoothed velocity model with irregular grids of variable scale, which helps reduce the computational cost. It can also handle data sets of arbitrary topography, so that no field static correction is needed.
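The full irregular-grid Grid Method with PML is far beyond an abstract-sized sketch, but the role of an absorbing boundary can be illustrated on a regular 1-D grid with a simple damping sponge standing in for the PML. Every parameter below is an illustrative assumption, not a value from the dissertation.

```python
import math

def wave_1d(nx=200, nt=400, dx=5.0, dt=0.001, c=2000.0, sponge=30):
    """1-D acoustic finite differences with a damping sponge at both ends
    as a crude stand-in for an absorbing (PML-like) boundary."""
    damp = [1.0] * nx
    for i in range(sponge):                    # taper strengthens toward the edge
        w = math.exp(-(0.015 * (sponge - i)) ** 2)
        damp[i] = w
        damp[nx - 1 - i] = w
    prev, cur = [0.0] * nx, [0.0] * nx
    r2 = (c * dt / dx) ** 2                    # CFL number 0.4: stable
    for t in range(nt):
        nxt = [0.0] * nx
        for i in range(1, nx - 1):
            nxt[i] = 2 * cur[i] - prev[i] + r2 * (cur[i + 1] - 2 * cur[i] + cur[i - 1])
        nxt[nx // 2] += math.exp(-((t * dt - 0.05) / 0.01) ** 2)  # source pulse
        prev = [v * d for v, d in zip(cur, damp)]
        cur = [v * d for v, d in zip(nxt, damp)]
    return cur
```

Running long enough for the wavefront to reach the edges, the remaining energy with the sponge enabled is far below the reflecting (sponge-free) case, which is the effect the irregular-grid PML achieves on arbitrary boundary geometries.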

Relevance: 20.00%

Abstract:

In the principles-and-parameters model of language, the principle known as "free indexation'' plays an important part in determining the referential properties of elements such as anaphors and pronominals. This paper addresses two issues. (1) We investigate the combinatorics of free indexation. In particular, we show that free indexation must produce an exponential number of referentially distinct structures. (2) We introduce a compositional free indexation algorithm. We prove that the algorithm is "optimal.'' More precisely, by relating the compositional structure of the formulation to the combinatorial analysis, we show that the algorithm enumerates precisely all possible indexings, without duplicates.
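The paper's algorithm is compositional over syntactic structure; a flat, simplified analogue of "enumerate all indexings exactly once" is the enumeration of restricted-growth strings: each element either reuses an index already introduced or opens a fresh one. The count is the Bell number, which grows exponentially, matching the combinatorial claim. The function below is a hypothetical sketch, not the paper's algorithm.

```python
def indexings(n):
    """Enumerate every assignment of referential indices to n elements,
    without duplicates: element k either reuses one of the indices already
    introduced or opens a fresh one (restricted-growth strings)."""
    def extend(assigned, used):
        if len(assigned) == n:
            yield tuple(assigned)
            return
        for idx in range(used + 1):            # 0..used-1 reuse; `used` is new
            yield from extend(assigned + [idx],
                              used + (1 if idx == used else 0))
    return list(extend([], 0))
```

For n = 0..5 this yields 1, 1, 2, 5, 15, 52 indexings (the Bell numbers), and the compositional construction guarantees no duplicates.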

Relevance: 20.00%

Abstract:

The problem of minimizing a multivariate function is recurrent in many disciplines, such as Physics, Mathematics, Engineering and, of course, Computer Science. In this paper we describe a simple nondeterministic algorithm based on the idea of adaptive noise, which proved to be particularly effective in the minimization of a class of multivariate, continuous-valued, smooth functions associated with some recent extensions of regularization theory by Poggio and Girosi (1990). Results obtained with this method and with a more traditional gradient descent technique are also compared.
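The adaptive-noise idea can be sketched generically (this is not the paper's exact scheme; the update rule, rates, and test function are all assumptions): take gradient steps plus noise whose amplitude grows while the best value stagnates and shrinks after each improvement, so the search can climb out of shallow local minima.

```python
import random

def f(x):
    return x ** 4 - 3 * x ** 2 + x             # two minima; the deeper one is at x < 0

def df(x):
    return 4 * x ** 3 - 6 * x + 1

def gradient_descent(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * df(x)
    return x

def adaptive_noise_descent(x, lr=0.01, steps=4000, seed=0):
    """Gradient descent plus adaptive noise: the amplitude grows while the
    best value stagnates and halves after each improvement."""
    rng = random.Random(seed)
    amp, best_x, best_f = 0.0, x, f(x)
    for _ in range(steps):
        x = x - lr * df(x) + amp * rng.gauss(0.0, 1.0)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
            amp *= 0.5                         # progress: calm the noise down
        else:
            amp = min(amp + 0.01, 1.0)         # stuck: inject more noise
    return best_x
```

Started in the shallow basin at x = 1.2, plain gradient descent stays there, while the adaptive-noise run escapes to the deeper basin on the negative side.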

Relevance: 20.00%

Abstract:

A polynomial time algorithm (pruned correspondence search, PCS) with good average case performance for solving a wide class of geometric maximal matching problems, including the problem of recognizing 3D objects from a single 2D image, is presented. Efficient verification algorithms, based on a linear representation of location constraints, are given for the case of affine transformations among vector spaces and for the case of rigid 2D and 3D transformations with scale. Some preliminary experiments suggest that PCS is a practical algorithm. Its similarity to existing correspondence based algorithms means that a number of existing techniques for speedup can be incorporated into PCS to improve its performance.
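The "linear representation of location constraints" used for verification can be illustrated for the affine case: three model-image correspondences determine the six affine parameters through two linear 3x3 systems, and the remaining correspondences are then checked against the hypothesized transform. The code below is a hypothetical sketch of that verification step, not PCS itself.

```python
def solve3(A, b):
    """Cramer's rule for a 3x3 linear system."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    return [det([[A[i][k] if k != j else b[i] for k in range(3)]
                 for i in range(3)]) / d for j in range(3)]

def affine_from_3(model, image):
    """Exact 2-D affine transform (x' = a x + b y + c, y' = d x + e y + f)
    mapping three model points onto three image points."""
    A = [[x, y, 1.0] for x, y in model]
    return (solve3(A, [u for u, _ in image]),
            solve3(A, [v for _, v in image]))

def verify(model, image, T, tol=1e-6):
    """Check every correspondence against the hypothesized transform."""
    (a, b, c), (d, e, f) = T
    return all(abs(a * x + b * y + c - u) <= tol and
               abs(d * x + e * y + f - v) <= tol
               for (x, y), (u, v) in zip(model, image))
```

A candidate correspondence set is accepted only if all points land within tolerance; a single perturbed point makes verification fail.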

Relevance: 20.00%

Abstract:

In this paper we present some extensions to the k-means algorithm for vector quantization that permit its efficient use in image segmentation and pattern classification tasks. It is shown that by introducing state variables that correspond to certain statistics of the dynamic behavior of the algorithm, it is possible to find the representative centers of the lower-dimensional manifolds that define the boundaries between classes for clouds of multi-dimensional, multi-class data; this permits one, for example, to find class boundaries directly from sparse data (e.g., in image segmentation tasks) or to efficiently place centers for pattern classification (e.g., with local Gaussian classifiers). The same state variables can be used to define algorithms for determining adaptively the optimal number of centers for clouds of data with space-varying density. Some examples of the application of these extensions are also given.
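For reference, the baseline that the paper extends is plain k-means (Lloyd's iterations); the sketch below shows only that baseline, with the understanding that the paper's contribution is the state variables attached to the centers, which this sketch omits.

```python
import random

def kmeans(points, k, iters=50, seed=1):
    """Plain k-means (Lloyd's algorithm) on tuples of floats; the paper's
    extensions attach per-center state variables on top of these updates."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                       # assign to the nearest center
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl
                   else centers[j]            # keep an empty cluster's center
                   for j, cl in enumerate(clusters)]
    return centers
```

On two well-separated clouds the two centers converge to the cloud means, which is the regime in which the extensions then reposition centers toward class boundaries.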

Relevance: 20.00%

Abstract:

Amorphous computing is the study of programming ultra-scale computing environments of smart sensors and actuators. The individual elements are identical, asynchronous, randomly placed, embedded, and communicate locally via wireless broadcast. Aggregating the processors into groups is a useful paradigm for programming an amorphous computer because groups can be used for specialization, increased robustness, and efficient resource allocation. This paper presents a new algorithm, called the clubs algorithm, for efficiently aggregating processors into groups in an amorphous computer, in time proportional to the local density of processors. The clubs algorithm is well suited to the unique characteristics of an amorphous computer. In addition, the algorithm derives two properties from the physical embedding of the amorphous computer: an upper bound on the number of groups formed and a constant upper bound on the density of groups. The clubs algorithm can also be extended to find a maximal independent set (MIS) and a Δ + 1 vertex coloring of an amorphous computer in O(log N) rounds, where N is the total number of elements and Δ is the maximum degree.
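A centralized simulation of the clubs idea can be sketched as follows (the real algorithm runs asynchronously on the processors themselves; simulating wake-up order with sorted random delays is an assumption of this sketch): a node that wakes before any undecided neighbor declares itself a club head, and its undecided neighbors join that club.

```python
import random

def clubs(adj, seed=2):
    """Clubs sketch: each processor draws a random wake-up delay; a node
    that wakes while still undecided becomes a club head, and its
    undecided neighbors join its club."""
    rng = random.Random(seed)
    delay = {v: rng.random() for v in adj}
    leader = {}
    for v in sorted(adj, key=delay.get):       # simulate wake-up order
        if v not in leader:
            leader[v] = v                      # v heads a new club
            for u in adj[v]:
                leader.setdefault(u, v)        # undecided neighbors join
    return leader
```

By construction every node ends up in a club led by itself or a neighbor, and no two heads are adjacent, i.e. the heads form a maximal independent set, which is exactly the MIS extension the abstract mentions.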

Relevance: 20.00%

Abstract:

Chow and Liu introduced an algorithm for fitting a multivariate distribution with a tree, i.e. a density model that assumes that there are only pairwise dependencies between variables and that the graph of these dependencies is a spanning tree. The original algorithm is quadratic in the dimension of the domain and linear in the number of data points that define the target distribution P. This paper shows that for sparse, discrete data, fitting a tree distribution can be done in time and memory that are jointly subquadratic in the number of variables and the size of the data set. The new algorithm, called the acCL algorithm, takes advantage of the sparsity of the data to accelerate the computation of pairwise marginals and the sorting of the resulting mutual information values, achieving speedups of up to 2-3 orders of magnitude in the experiments.
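For orientation, here is the quadratic baseline that the acCL algorithm accelerates (the sparsity tricks themselves are not reproduced): compute pairwise empirical mutual information, then take a maximum-weight spanning tree over those weights.

```python
import math
from collections import Counter

def mutual_info(data, i, j):
    """Empirical mutual information (nats) between columns i and j."""
    n = len(data)
    ci, cj, cij = Counter(), Counter(), Counter()
    for row in data:
        ci[row[i]] += 1
        cj[row[j]] += 1
        cij[(row[i], row[j])] += 1
    return sum(c / n * math.log((c / n) / (ci[a] / n * cj[b] / n))
               for (a, b), c in cij.items())

def chow_liu(data):
    """Quadratic baseline Chow-Liu: maximum spanning tree over pairwise
    mutual information, via Kruskal with union-find."""
    d = len(data[0])
    edges = sorted(((mutual_info(data, i, j), i, j)
                    for i in range(d) for j in range(i + 1, d)), reverse=True)
    parent = list(range(d))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

On data where one variable copies another while a third is independent, the learned tree reliably includes the dependent pair as an edge.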