73 results for Postprocessing algorithms
Abstract:
A Latin square is pan-Hamiltonian if the permutation which defines row i relative to row j consists of a single cycle for every i ≠ j. A Latin square is atomic if all of its conjugates are pan-Hamiltonian. We give a complete enumeration of atomic squares for order 11, the smallest order for which there are examples distinct from the cyclic group. We find that there are seven main classes, including the three that were previously known. A perfect 1-factorization of a graph is a decomposition of that graph into matchings such that the union of any two matchings is a Hamiltonian cycle. Each pan-Hamiltonian Latin square of order n describes a perfect 1-factorization of K_{n,n}, and vice versa. Perfect 1-factorizations of K_{n,n} can be constructed from a perfect 1-factorization of K_{n+1}. Six of the seven main classes of atomic squares of order 11 can be obtained in this way. For each atomic square of order 11, we find the largest set of Mutually Orthogonal Latin Squares (MOLS) involving that square. We discuss algorithms for counting orthogonal mates, and discover the number of orthogonal mates possessed by the cyclic squares of orders up to 11 and by Parker's famous turn-square. We find that the number of atomic orthogonal mates possessed by a Latin square is not a main class invariant. We also define a new sort of Latin square, called a pairing square, which is mapped to its transpose by an involution acting on the symbols. We show that pairing squares are often orthogonal mates for symmetric Latin squares. Finally, we discover connections between our atomic squares and Franklin's diagonally cyclic self-orthogonal squares, and we correct a theorem of Longyear which uses tactical representations to identify self-orthogonal Latin squares in the same main class as a given Latin square.
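As a minimal illustration of the pan-Hamiltonian property described above, the following Python sketch checks, for a Latin square given as a list of rows over the symbols 0..n-1 (an assumed encoding), whether the permutation taking each row to every other row is a single n-cycle. It is a toy check, not the enumeration machinery used in the paper.

```python
def is_pan_hamiltonian(square):
    """Check whether a Latin square (a list of rows over symbols 0..n-1) is
    pan-Hamiltonian: for every pair of distinct rows i and j, the permutation
    sending row j to row i must be a single n-cycle."""
    n = len(square)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # sigma maps the symbol in row j to the symbol in row i, column by column
            sigma = {square[j][c]: square[i][c] for c in range(n)}
            # walk the cycle containing symbol 0 and measure its length
            length, s = 0, 0
            while True:
                s = sigma[s]
                length += 1
                if s == 0:
                    break
            if length != n:
                return False
    return True

# The cyclic square of order 5 (row r is r, r+1, ..., r+4 mod 5) is
# pan-Hamiltonian because 5 is prime.
cyclic5 = [[(r + c) % 5 for c in range(5)] for r in range(5)]
print(is_pan_hamiltonian(cyclic5))  # True
```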
Abstract:
Market-based transmission expansion planning gives investors information on where the most cost-efficient places to invest are and brings benefits to those who invest in the grid. However, both market issues and power system adequacy problems are system planners' concerns. In this paper, a hybrid probabilistic criterion, the Expected Economical Loss (EEL), is proposed as an index to evaluate a system's overall expected economical losses during operation in a competitive market. It reflects both the investors' and the planner's points of view and further improves on the traditional reliability cost. By applying EEL, system planners can obtain a clear idea of the transmission network's bottlenecks and of the losses arising from these weak points. Consequently, it enables planners to assess the worth of providing reliable services. The EEL also contains valuable information for investors deciding where to place their capital. This index can truly reflect the random behaviors of power systems and the uncertainties of the electricity market. The performance of the EEL index is enhanced by applying a Normalized Coefficient of Probability (NCP), so that it can be used in large real power systems. A numerical example on the IEEE Reliability Test System (RTS) shows how EEL can predict the current system bottleneck under future operational conditions and how EEL can be used as one of the planning objectives to determine future optimal plans. Monte Carlo simulation, a well-known simulation method, is employed to capture the probabilistic characteristics of the electricity market, and Genetic Algorithms (GAs) are used as a multi-objective optimization tool.
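The abstract does not give a formula for EEL, so the sketch below only illustrates the Monte Carlo step in the most generic way: random system states are sampled and the economic loss of each is averaged. The callbacks `sample_state` and `economic_loss` are hypothetical placeholders, not the paper's model.

```python
import random

def estimate_eel(sample_state, economic_loss, n_samples=10000, seed=0):
    """Monte Carlo estimate of an Expected-Economical-Loss style index:
    draw random system states (outages, loads, prices) and average the
    economic loss each state causes.  Both callbacks are hypothetical."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        state = sample_state(rng)      # e.g. sampled line outages and load level
        total += economic_loss(state)  # e.g. cost of curtailment or congestion
    return total / n_samples

# Toy usage: a line is out with probability 0.02 and then costs 1000 units.
eel = estimate_eel(
    sample_state=lambda rng: rng.random() < 0.02,
    economic_loss=lambda outage: 1000.0 if outage else 0.0,
)
print(round(eel, 1))  # close to 0.02 * 1000 = 20
```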
Abstract:
In this paper we follow the BOID (Belief, Obligation, Intention, Desire) architecture to describe agents and agent types in Defeasible Logic. We argue, in particular, that the introduction of obligations can provide a new reading of the concepts of intention and intentionality. Then we examine the notion of a social agent (i.e., an agent in which obligations prevail over intentions) and discuss some computational and philosophical issues related to it. We show that the notion of a social agent either requires more complex computations or has some philosophical drawbacks.
Abstract:
The reconstruction of a complex scene from multiple images is a fundamental problem in the field of computer vision. Volumetric methods have proven to be a strong alternative to traditional correspondence-based methods due to their flexible visibility models. In this paper we analyse existing methods for volumetric reconstruction and identify three key properties of voxel colouring algorithms: a water-tight surface model, a monotonic carving order, and causality. We present a new Voxel Colouring algorithm which embeds all reconstructions of a scene into a single output. While modelling exact visibility for arbitrary camera locations, Embedded Voxel Colouring removes the need for the a priori threshold selection present in previous work. An efficient implementation is given along with results demonstrating the advantages of a posteriori threshold selection.
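The paper's exact algorithm is not given in the abstract; the sketch below only illustrates the a posteriori thresholding idea: instead of carving each voxel against a fixed photo-consistency threshold during the sweep, a consistency score is stored per voxel so that any threshold can be applied afterwards. The scoring function, data layout, and the omission of visibility handling are all simplifying assumptions of ours.

```python
import numpy as np

def consistency_scores(voxels, observed_colours):
    """For each voxel, record a photo-consistency score (here: the colour
    variance across the cameras that see it) instead of a carve/keep decision.
    `observed_colours[v]` is assumed to be an (n_cameras, 3) array of RGB samples."""
    return {v: float(np.var(observed_colours[v], axis=0).sum()) for v in voxels}

def reconstruct(scores, threshold):
    """A posteriori thresholding: any threshold yields a reconstruction
    from the same stored scores, with no need to rerun the sweep."""
    return {v for v, s in scores.items() if s <= threshold}

# Toy usage with two voxels seen by three cameras each.
colours = {
    "v0": np.array([[200, 10, 10], [198, 12, 11], [201, 9, 10]]),   # consistent
    "v1": np.array([[200, 10, 10], [10, 200, 10], [10, 10, 200]]),  # inconsistent
}
scores = consistency_scores(colours.keys(), colours)
print(reconstruct(scores, threshold=50.0))  # {'v0'}
```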
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information in large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and the data can take textual, categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is used mainly for predictive modelling. A number of classification algorithms are now in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
Abstract:
There are many techniques for electricity market price forecasting. However, most of them are designed for expected price analysis rather than price spike forecasting. An effective method for predicting the occurrence of spikes has not yet appeared in the literature. In this paper, a data mining based approach is presented to give a reliable forecast of the occurrence of price spikes. Combined with the spike value prediction techniques developed by the same authors, the proposed approach aims at providing a comprehensive tool for price spike forecasting. In this paper, feature selection techniques are first described to identify the attributes relevant to the occurrence of spikes. A simple introduction to the classification techniques is given for completeness. Two algorithms, the support vector machine and a probability classifier, are chosen as the spike occurrence predictors and are discussed in detail. Realistic market data are used to test the proposed model, with promising results.
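As a hedged sketch of casting spike occurrence prediction as binary classification, the snippet below trains a support vector machine on synthetic stand-in data using scikit-learn (a library choice of ours, not named in the abstract); the features and labels are invented purely for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: rows are trading intervals, columns are hypothetical
# features (e.g. demand, reserve margin, temperature); label 1 marks a price spike.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=1000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# class_weight="balanced" compensates for spikes being comparatively rare events.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```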
Abstract:
We present a fast method for finding optimal parameters for a low-resolution (threading) force field intended to distinguish correct from incorrect folds for a given protein sequence. In contrast to other methods, the parameterization uses information from >10^7 misfolded structures as well as a set of native sequence-structure pairs. In addition to testing the resulting force field's performance on the protein sequence threading problem, results are shown that characterize the number of parameters necessary for effective structure recognition.
Abstract:
Evolution strategies are a class of general optimisation algorithms applicable to functions that are multimodal, nondifferentiable, or even discontinuous. Although recombination operators have been introduced into evolution strategies, the primary search operator is still mutation. Classical evolution strategies rely on Gaussian mutations. A new mutation operator based on the Cauchy distribution is proposed in this paper. It is shown empirically that the new evolution strategy based on Cauchy mutation outperforms the classical evolution strategy on most of the 23 benchmark problems tested in this paper. The paper also shows empirically that changing the order in which the objective variables and the strategy parameters are mutated does not alter the previous conclusion significantly, and that Cauchy mutations with different scaling parameters still outperform Gaussian mutation with self-adaptation. However, the advantage of Cauchy mutations disappears when recombination is used in evolution strategies. It is argued that the search step size plays an important role in determining the performance of evolution strategies: the large step size of recombination plays a role similar to that of Cauchy mutation.
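A minimal sketch of the two mutation operators being compared, using numpy and a single self-adapted step size; it is a toy (1+1)-style loop for illustration, not the paper's full evolution strategy or benchmark suite.

```python
import numpy as np

rng = np.random.default_rng(42)

def mutate_gaussian(x, sigma):
    """Classical ES mutation: add N(0, sigma^2) noise to every variable."""
    return x + sigma * rng.standard_normal(x.shape)

def mutate_cauchy(x, sigma):
    """Cauchy mutation: heavier tails give occasional long jumps, the property
    credited here with better escape from poor local optima."""
    return x + sigma * rng.standard_cauchy(x.shape)

def self_adapt(sigma, tau=0.2):
    """Log-normal self-adaptation of the strategy parameter (step size)."""
    return sigma * np.exp(tau * rng.standard_normal())

# Toy usage on the sphere function f(x) = sum(x**2).
x, sigma = np.full(10, 3.0), 1.0
for _ in range(100):
    sigma_child = self_adapt(sigma)
    child = mutate_cauchy(x, sigma_child)
    if np.sum(child**2) < np.sum(x**2):   # greedy (1+1)-style acceptance
        x, sigma = child, sigma_child
print(np.sum(x**2))  # typically much smaller than the initial 90.0
```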
Abstract:
The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data should be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases with the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcoming this problem is to preserve spatial locality in task decomposition. We show in this paper that a near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
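The sketch below illustrates, serially, the filter-and-refine structure combined with grid partitioning of bounding boxes that this line of work builds on; duplicate results caused by multi-assignment are removed by collecting pairs in a set. The helper names and grid scheme are our own simplifications, the refinement step is a stub, and the parallel task decomposition itself is not shown.

```python
from itertools import product

def cells(mbr, cell):
    """Grid cells overlapped by a bounding box (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = mbr
    return product(range(int(xmin // cell), int(xmax // cell) + 1),
                   range(int(ymin // cell), int(ymax // cell) + 1))

def overlaps(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def spatial_join(R, S, cell=10.0, refine=overlaps):
    """Filter-and-refine join of two collections of (id, mbr) pairs.
    Multi-assignment to grid cells can produce the same candidate pair in
    several cells, so results are collected in a set to drop duplicates."""
    buckets = {}
    for sid, mbr in S:
        for c in cells(mbr, cell):
            buckets.setdefault(c, []).append((sid, mbr))
    result = set()
    for rid, rmbr in R:
        for c in cells(rmbr, cell):
            for sid, smbr in buckets.get(c, []):
                if overlaps(rmbr, smbr) and refine(rmbr, smbr):
                    result.add((rid, sid))   # filter step + (stub) refine step
    return result

R = [("r1", (0, 0, 5, 5)), ("r2", (12, 12, 18, 18))]
S = [("s1", (4, 4, 9, 9)), ("s2", (30, 30, 35, 35))]
print(spatial_join(R, S))  # {('r1', 's1')}
```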
Abstract:
Subcycling algorithms, which employ multiple timesteps, have been previously proposed for explicit direct integration of first- and second-order systems of equations arising in finite element analysis, as well as for integration using explicit/implicit partitions of a model. The author has recently extended this work to implicit/implicit multi-timestep partitions of both first- and second-order systems. In this paper, improved algorithms for multi-timestep implicit integration are introduced that overcome some weaknesses of those proposed previously. In particular, in the second-order case, improved stability is obtained. Some of the energy conservation properties of the Newmark family of algorithms are shown to be preserved in the new multi-timestep extensions of the Newmark method. In the first-order case, the generalized trapezoidal rule is extended to multiple timesteps in a simple way that permits an implicit/implicit partition. Explicit special cases of the present algorithms exist, and these are compared to algorithms proposed previously. (C) 1998 John Wiley & Sons, Ltd.
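The multi-timestep partitioning itself is not spelled out in the abstract, so the sketch below shows only the single-rate generalized trapezoidal rule for a first-order system M du/dt + K u = f(t), i.e. the base method that the paper extends to multiple timesteps; the matrices and parameters are illustrative.

```python
import numpy as np

def generalized_trapezoidal(M, K, f, u0, dt, n_steps, theta=0.5):
    """Single-rate generalized trapezoidal rule for M du/dt + K u = f(t):
    theta = 0 is forward Euler, 1/2 the trapezoidal rule, 1 backward Euler."""
    u = u0.copy()
    A = M + theta * dt * K          # implicit part (could be factored once and reused)
    B = M - (1 - theta) * dt * K    # explicit part
    for n in range(n_steps):
        rhs = B @ u + dt * ((1 - theta) * f(n * dt) + theta * f((n + 1) * dt))
        u = np.linalg.solve(A, rhs)
    return u

# Toy usage: a stiff 2-dof diffusion-like system with no forcing.
M = np.eye(2)
K = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
u = generalized_trapezoidal(M, K, f=lambda t: np.zeros(2),
                            u0=np.array([1.0, 0.0]), dt=0.1, n_steps=50, theta=0.5)
print(u)  # decays towards zero
```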
Abstract:
Power system small signal stability analysis aims to explore different small signal stability conditions and controls, namely: (1) exploring the power system security domains and boundaries in the space of the power system parameters of interest, including the load flow feasibility, saddle-node and Hopf bifurcation boundaries; (2) finding the maximum and minimum damping conditions; and (3) determining control actions that provide and increase small signal stability. These problems are presented in this paper as different modifications of a general minimization/maximization problem whose outcome depends on the initial guesses of the variables and on the numerical methods used. In the problems considered, all the extreme points are of interest. Additionally, there are difficulties in finding the derivatives of the objective functions with respect to the parameters, and numerical computation of derivatives in traditional optimization procedures is time consuming. In this paper, we propose a new black-box genetic optimization technique for comprehensive small signal stability analysis, which can effectively cope with highly nonlinear objective functions that have multiple minima and maxima and whose derivatives cannot be expressed analytically. The optimization results can then be used to provide important information for system optimal control decision making, assessment of the network's maximum transmission capacity, and so on. (C) 1998 Elsevier Science S.A. All rights reserved.
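As a hedged illustration of black-box genetic optimization of a derivative-free objective, the sketch below implements a generic real-coded GA (tournament selection, arithmetic crossover, Gaussian mutation); the operators, parameters, and toy objective are generic choices of ours, not those of the proposed technique.

```python
import numpy as np

rng = np.random.default_rng(1)

def genetic_maximize(objective, bounds, pop_size=40, generations=200,
                     mutation_scale=0.1, elite=2):
    """Real-coded GA for a black-box objective: tournament selection,
    arithmetic crossover and Gaussian mutation.  No derivatives are needed."""
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([objective(ind) for ind in pop])
        order = np.argsort(fitness)[::-1]
        new_pop = [pop[i] for i in order[:elite]]                # keep the best as-is
        while len(new_pop) < pop_size:
            i, j = rng.integers(pop_size, size=2)
            k, l = rng.integers(pop_size, size=2)
            p1 = pop[i] if fitness[i] > fitness[j] else pop[j]   # tournament
            p2 = pop[k] if fitness[k] > fitness[l] else pop[l]
            a = rng.random()
            child = a * p1 + (1 - a) * p2                        # crossover
            child += mutation_scale * rng.standard_normal(dim)   # mutation
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
    fitness = np.array([objective(ind) for ind in pop])
    return pop[np.argmax(fitness)], fitness.max()

# Toy usage: maximize a multimodal function of two "parameters".
f = lambda x: -np.sum(x**2) + np.sum(np.cos(5 * x))
best, value = genetic_maximize(f, bounds=(np.array([-2.0, -2.0]), np.array([2.0, 2.0])))
print(best, value)
```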
Abstract:
An automated method for extracting brain volumes from three commonly acquired three-dimensional (3D) MR images (proton density, T1-weighted, and T2-weighted) of the human head is described. The procedure is divided into four levels: preprocessing, segmentation, scalp removal, and postprocessing. A user-provided reference point is the sole operator-dependent input required. The method's parameters were first optimized, then fixed and applied to 30 repeat data sets from 15 normal older adult subjects to investigate its reproducibility. Percent differences between total brain volumes (TBVs) for the subjects' repeated data sets ranged from 0.5% to 2.2%. We conclude that the method is both robust and reproducible and has the potential for wide application.
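The abstract names the four stages but not their operations, so the following is only a generic stand-in pipeline built with scipy.ndimage (smoothing, intensity thresholding, keeping the connected component containing the user-supplied reference point, morphological cleanup); it mirrors the staged structure, not the paper's actual method.

```python
import numpy as np
from scipy import ndimage

def extract_brain(volume, seed, threshold=0.5, smooth_sigma=1.0):
    """Generic four-stage stand-in for the kind of pipeline described above.
    `volume` is a 3D intensity array, `seed` the user-provided reference point."""
    # 1. Preprocessing: Gaussian smoothing to suppress noise.
    smoothed = ndimage.gaussian_filter(volume.astype(float), sigma=smooth_sigma)
    # 2. Segmentation: simple intensity threshold.
    mask = smoothed > threshold * smoothed.max()
    # 3. Scalp removal: keep only the connected component containing the seed.
    labels, _ = ndimage.label(mask)
    mask = labels == labels[seed]
    # 4. Postprocessing: morphological closing, then fill interior holes.
    mask = ndimage.binary_closing(mask, iterations=2)
    mask = ndimage.binary_fill_holes(mask)
    return mask

# Toy usage on a synthetic spherical "head".
z, y, x = np.mgrid[:40, :40, :40]
vol = ((x - 20)**2 + (y - 20)**2 + (z - 20)**2 < 12**2).astype(float)
brain = extract_brain(vol, seed=(20, 20, 20))
print("brain voxels:", int(brain.sum()))
```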