826 results for graph matching algorithms
Abstract:
We consider brightness/contrast-invariant and rotation-discriminating template matching that searches an image to analyze A for a query image Q. We propose to use the complex coefficients of the discrete Fourier transform of the radial projections to compute new rotation-invariant local features. These coefficients can be efficiently obtained via FFT. We classify templates as "stable" and "unstable" ones and argue that any local feature-based template matching may fail to find unstable templates. We extract several stable sub-templates of Q and find them in A by comparing the features. The matchings of the sub-templates are combined using the Hough transform. As the features of A are computed only once, the algorithm can quickly find many different sub-templates in A, and it is suitable for finding many query images in A, multi-scale searching and partial occlusion-robust template matching.
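As a rough illustration of the feature described above (not the authors' code), the sketch below computes radial projections around a point with numpy and takes FFT magnitudes. Rotating the image cyclically shifts the angular samples, so the magnitudes are rotation-invariant; all function names and parameter choices here are hypothetical.

```python
import numpy as np

def radial_projections(img, cx, cy, radius, n_angles=36):
    """Sum pixel values along n_angles rays from (cx, cy). Hypothetical sketch."""
    proj = np.zeros(n_angles)
    rs = np.arange(1, radius + 1)
    for k in range(n_angles):
        theta = 2 * np.pi * k / n_angles
        xs = np.clip((cx + rs * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip((cy + rs * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
        proj[k] = img[ys, xs].sum()
    return proj

def rotation_invariant_features(img, cx, cy, radius, n_angles=36):
    # A rotation of the image cyclically shifts the angular samples, and a
    # cyclic shift leaves DFT coefficient magnitudes unchanged.
    proj = radial_projections(img, cx, cy, radius, n_angles)
    return np.abs(np.fft.fft(proj))
```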
Abstract:
This letter addresses the optimization and complexity reduction of switch-reconfigured antennas. A new optimization technique based on graph models is investigated. This technique is used to minimize the redundancy in a reconfigurable antenna structure and reduce its complexity. A graph modeling rule for switch-reconfigured antennas is proposed, and examples are presented.
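The letter gives no formulas, but the idea of exposing redundancy with a graph model can be illustrated generically: treat antenna parts as nodes and switches as edges, and count how many of the 2^S switch states actually produce distinct feed connectivity. Everything below (topology, names) is invented for illustration and is not the paper's modeling rule.

```python
from itertools import product

# Hypothetical antenna: parts as nodes, switches as edges.
switches = [("feed", "p1"), ("feed", "p2"), ("p1", "p3"), ("p2", "p3")]

def reachable(on_edges, start="feed"):
    """Set of parts connected to the feed given the closed switches."""
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for a, b in on_edges:
            if a == x and b not in seen:
                seen.add(b); stack.append(b)
            elif b == x and a not in seen:
                seen.add(a); stack.append(a)
    return frozenset(seen)

# Group all switch states by the connectivity pattern they produce.
patterns = {}
for state in product([0, 1], repeat=len(switches)):
    on = [e for e, s in zip(switches, state) if s]
    patterns.setdefault(reachable(on), []).append(state)

# The gap between distinct patterns and raw switch states is the
# redundancy a graph model can expose.
print(len(patterns), "distinct patterns from", 2 ** len(switches), "switch states")
```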
Abstract:
The flowshop scheduling problem with blocking in-process is addressed in this paper. In this environment, there are no buffers between successive machines; therefore, intermediate queues of jobs waiting in the system for their next operations are not allowed. Heuristic approaches are proposed to minimize the total tardiness criterion. A constructive heuristic that explores specific characteristics of the problem is presented. Moreover, a GRASP-based heuristic is proposed and coupled with a path relinking strategy to search for better outcomes. Computational tests are presented and the comparisons made with an adaptation of the NEH algorithm and with a branch-and-bound algorithm indicate that the new approaches are promising.
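The following is a minimal Python sketch of two ingredients named above: the blocking-flowshop departure-time recurrence used to evaluate total tardiness, and a bare-bones GRASP (greedy randomized construction plus swap local search). It omits the paper's constructive heuristic and path relinking; names and parameter choices are illustrative.

```python
import random

def total_tardiness(seq, p, due):
    """Total tardiness of a job sequence in a blocking flowshop.
    p[j][m]: processing time of job j on machine m; due[j]: due date.
    With no buffers, a job stays on a machine until the next machine frees."""
    M = len(p[0])
    prev = [0.0] * (M + 1)      # departure times of the previous job (0 = entry)
    tard = 0.0
    for j in seq:
        cur = [0.0] * (M + 1)
        cur[0] = prev[1]        # may enter machine 1 once predecessor leaves it
        for m in range(1, M + 1):
            done = cur[m - 1] + p[j][m - 1]
            nxt = prev[m + 1] if m < M else 0.0   # blocked until next machine frees
            cur[m] = max(done, nxt)
        tard += max(0.0, cur[M] - due[j])
        prev = cur
    return tard

def grasp(p, due, iters=30, alpha=0.3, seed=0):
    """GRASP sketch: randomized EDD-like construction + swap local search."""
    rng = random.Random(seed)
    n = len(p)
    best_seq, best_cost = None, float("inf")
    for _ in range(iters):
        cand = sorted(range(n), key=lambda j: due[j])   # earliest-due-date order
        seq = []
        while cand:                                     # pick from the best fraction
            k = max(1, int(alpha * len(cand)))
            seq.append(cand.pop(rng.randrange(k)))
        cost, improved = total_tardiness(seq, p, due), True
        while improved:                                 # first-improvement swaps
            improved = False
            for a in range(n - 1):
                for b in range(a + 1, n):
                    seq[a], seq[b] = seq[b], seq[a]
                    c = total_tardiness(seq, p, due)
                    if c < cost:
                        cost, improved = c, True
                    else:
                        seq[a], seq[b] = seq[b], seq[a]
        if cost < best_cost:
            best_seq, best_cost = seq[:], cost
    return best_seq, best_cost
```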
Abstract:
When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F2 populations were randomly simulated with 100 and 400 individuals and with different combinations of dominant and co-dominant markers, as well as 10 and 20% of missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criteria may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria (except SALOD) investigated may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of linkage in repulsion between markers increases and, in this case, the algorithms TRY and SER, associated with RIPPLE under the LHMC criterion, would provide better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009
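For concreteness, two of the ordering criteria named above are simple to state in code. The sketch below (hypothetical helper names, not the study's software) evaluates SARF and PARF for a candidate marker order given a symmetric matrix of pairwise recombination fractions; lower values indicate a better order.

```python
def sarf(order, rf):
    """Sum of adjacent recombination fractions (SARF) for a marker order.
    rf[a][b] is the pairwise recombination fraction between markers a and b."""
    return sum(rf[a][b] for a, b in zip(order, order[1:]))

def parf(order, rf):
    """Product of adjacent recombination fractions (PARF); also minimized."""
    out = 1.0
    for a, b in zip(order, order[1:]):
        out *= rf[a][b]
    return out

# Usage: compare two candidate orders and keep the one with smaller SARF.
# best = min([order1, order2], key=lambda o: sarf(o, rf))
```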
Abstract:
Understanding resource capture can help design appropriate species combinations, planting designs and management. Leaf area index (LAI) and its longevity are the most important factors defining dry matter production and thus growth and productivity. The ecophysiological modifications and yield of rubber (Hevea spp.) in an agroforestry system (AFS) with beans (Phaseolus vulgaris L.) were studied. The experiment was established in Southeast Brazil, with three rubber cultivars: IAN 3087, RRIM 600 and RRIM 527. The AFS comprised double rows of rubber trees along with beans sown in the autumn and winter seasons of 1999. Rubber yield per tree was about 50% higher in the AFS than in the rubber monoculture. Trees within the AFS responded to higher solar radiation availability with higher LAI and total foliage area, allowing its greater interception. All three cultivars had higher LAI in the AFS than in monoculture, reaching maximum values in the AFS between April and May: 3.17 for RRIM 527, 2.83 for RRIM 600 and 2.28 for IAN 3087. The maximum LAI values for monocrop rubber trees were 2.65, 2.62 and 1.99, respectively. Rubber production and LAI were positively correlated in both the AFS and monoculture, but leaf fall of rubber trees in the AFS was delayed and total phytomass was larger. It is suggested that trees in the AFS were under-exploited and could yield more without compromising their life cycle if the tapping system were intensified. This shows how knowledge of LAI can be used to manage tapping intensity in the field, leading to higher rubber yield.
Abstract:
This paper presents a new relative measure of signal complexity, referred to here as relative structural complexity, which is based on the matching pursuit (MP) decomposition. By relative, we refer to the fact that this new measure is highly dependent on the decomposition dictionary used by MP. The structural part of the definition points to the fact that this new measure is related to the structure, or composition, of the signal under analysis. After a formal definition, the proposed relative structural complexity measure is used in the analysis of newborn EEG. To do this, firstly, a time-frequency (TF) decomposition dictionary is specifically designed to compactly represent the newborn EEG seizure state using MP. We then show, through the analysis of synthetic and real newborn EEG data, that the relative structural complexity measure can indicate changes in EEG structure as it transitions between the two EEG states; namely seizure and background (non-seizure).
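Below is a minimal sketch of plain matching pursuit, plus one plausible complexity proxy (the number of atoms needed to capture a given energy fraction). The paper's relative structural complexity measure is defined over its purpose-built TF dictionary and may differ from this stand-in; all names here are hypothetical.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy MP: at each step pick the dictionary atom with the largest
    inner product with the residual and subtract its projection.
    dictionary: rows are unit-norm atoms, shape (n_atoms_total, n_samples)."""
    residual = signal.astype(float).copy()
    picked, coeffs = [], []
    for _ in range(n_atoms):
        ips = dictionary @ residual
        k = int(np.argmax(np.abs(ips)))
        picked.append(k); coeffs.append(ips[k])
        residual -= ips[k] * dictionary[k]
    return picked, coeffs, residual

def atoms_for_energy(signal, dictionary, frac=0.95, max_atoms=100):
    """Hypothetical complexity proxy: MP atoms needed to capture a fraction
    of the signal energy; a structured signal (e.g. seizure EEG matched by
    the dictionary) needs fewer atoms than an unstructured one."""
    e0 = float(np.dot(signal, signal))
    residual = signal.astype(float).copy()
    for n in range(1, max_atoms + 1):
        ips = dictionary @ residual
        k = int(np.argmax(np.abs(ips)))
        residual -= ips[k] * dictionary[k]
        if np.dot(residual, residual) <= (1 - frac) * e0:
            return n
    return max_atoms
```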
Abstract:
A Latin square is pan-Hamiltonian if the permutation which defines row i relative to row j consists of a single cycle for every i ≠ j. A Latin square is atomic if all of its conjugates are pan-Hamiltonian. We give a complete enumeration of atomic squares for order 11, the smallest order for which there are examples distinct from the cyclic group. We find that there are seven main classes, including the three that were previously known. A perfect 1-factorization of a graph is a decomposition of that graph into matchings such that the union of any two matchings is a Hamiltonian cycle. Each pan-Hamiltonian Latin square of order n describes a perfect 1-factorization of Kn,n, and vice versa. Perfect 1-factorizations of Kn,n can be constructed from a perfect 1-factorization of Kn+1. Six of the seven main classes of atomic squares of order 11 can be obtained in this way. For each atomic square of order 11, we find the largest set of Mutually Orthogonal Latin Squares (MOLS) involving that square. We discuss algorithms for counting orthogonal mates, and discover the number of orthogonal mates possessed by the cyclic squares of orders up to 11 and by Parker's famous turn-square. We find that the number of atomic orthogonal mates possessed by a Latin square is not a main class invariant. We also define a new sort of Latin square, called a pairing square, which is mapped to its transpose by an involution acting on the symbols. We show that pairing squares are often orthogonal mates for symmetric Latin squares. Finally, we discover connections between our atomic squares and Franklin's diagonally cyclic self-orthogonal squares, and we correct a theorem of Longyear which uses tactical representations to identify self-orthogonal Latin squares in the same main class as a given Latin square.
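The pan-Hamiltonian property is easy to test directly. The sketch below (hypothetical code, not from the paper) builds, for each pair of rows i ≠ j, the permutation mapping row j to row i column by column, and checks that it is a single n-cycle.

```python
def is_pan_hamiltonian(square):
    """True iff, for every pair of rows i != j, the permutation sending
    square[j][c] -> square[i][c] for each column c is a single n-cycle."""
    n = len(square)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            perm = {square[j][c]: square[i][c] for c in range(n)}
            start = square[j][0]          # walk the cycle containing any symbol
            x, length = perm[start], 1
            while x != start:
                x = perm[x]; length += 1
            if length != n:               # more than one cycle
                return False
    return True

# The cyclic square of prime order is pan-Hamiltonian, e.g. order 5:
z5 = [[(i + j) % 5 for j in range(5)] for i in range(5)]
print(is_pan_hamiltonian(z5))             # True
z4 = [[(i + j) % 4 for j in range(4)] for i in range(4)]
print(is_pan_hamiltonian(z4))             # False: shifts by 2 split into 2-cycles
```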
Abstract:
Despite many successes of conventional DNA sequencing methods, some DNAs remain difficult or impossible to sequence. Unsequenceable regions occur in the genomes of many biologically important organisms, including the human genome. Such regions range in length from tens to millions of bases, and may contain valuable information such as the sequences of important genes. The authors have recently developed a technique that renders a wide range of problematic DNAs amenable to sequencing. The technique is known as sequence analysis via mutagenesis (SAM). This paper presents a number of algorithms for analysing and interpreting data generated by this technique.
Abstract:
The BR algorithm is a novel and efficient method to find all eigenvalues of upper Hessenberg matrices and has never been applied to eigenanalysis for power system small signal stability. This paper analyzes differences between the BR and the QR algorithms with performance comparison in terms of CPU time based on stopping criteria and storage requirement. The BR algorithm utilizes accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce the computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis of 39-, 68-, 115-, 300-, and 600-bus systems. Experimental results suggest that the BR algorithm is more efficient for large-scale power system small signal stability eigenanalysis.
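The BR algorithm itself is not available in standard numerical libraries, so the sketch below only shows the conventional baseline the paper compares against: reduce a (stand-in) state matrix to upper Hessenberg form and compute its eigenvalues with a QR-based solver, then test small-signal stability via the sign of the real parts. The matrix here is random, not an actual power-system model.

```python
import numpy as np
from scipy.linalg import hessenberg

# Small-signal stability analysis works on the linearized state matrix A;
# a random matrix stands in for it in this sketch.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))

H = hessenberg(A)                 # reduce to upper Hessenberg form
eigs = np.linalg.eigvals(H)       # QR-based eigensolver (LAPACK under the hood)

# The system is small-signal stable iff every eigenvalue has negative real part.
print(eigs)
print("stable:", bool(np.all(eigs.real < 0)))
```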
Abstract:
Algorithms for explicit integration of structural dynamics problems with multiple time steps (subcycling) are investigated. Only one such algorithm, due to Smolinski and Sleith, has proved to be stable in a classical sense. A simplified version of this algorithm that retains its stability is presented. However, as with the original version, it can be shown to sacrifice accuracy to achieve stability. Another algorithm in use is shown to be only statistically stable, in that a probability of stability can be assigned if appropriate time step limits are observed. This probability improves rapidly with the number of degrees of freedom in a finite element model. The stability problems are shown to be a property of the central difference method itself, which is modified to give the subcycling algorithm. A related problem is shown to arise when a constraint equation in time is introduced into a time-continuous space-time finite element model.
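Below is a toy illustration of subcycling with the central difference method (written in its velocity-then-position leapfrog form), assuming a two-DOF spring-mass chain where the stiff node takes m substeps per global step and the slow node's displacement is linearly interpolated during substeps. This is a generic subcycling scheme for illustration, not necessarily the Smolinski variant discussed above; all values are made up.

```python
import numpy as np

m_sub, dt, steps = 10, 1e-3, 1000   # substeps per global step, global step, count
k01, k1 = 1e4, 1.0                  # stiff coupling spring, soft ground spring
mass = np.array([1.0, 1.0])
u = np.array([0.0, 1.0])            # displacements: node 0 fast, node 1 slow
v = np.zeros(2)                     # velocities

for _ in range(steps):
    u1_old = u[1]
    # Slow node: one central-difference (leapfrog) step with current coupling force.
    f1 = -k1 * u[1] - k01 * (u[1] - u[0])
    v[1] += dt * f1 / mass[1]
    u1_new = u1_old + dt * v[1]
    # Fast node: m substeps, linearly interpolating the slow node's displacement.
    for s in range(m_sub):
        u1 = u1_old + (s / m_sub) * (u1_new - u1_old)
        f0 = -k01 * (u[0] - u1)
        v[0] += (dt / m_sub) * f0 / mass[0]
        u[0] += (dt / m_sub) * v[0]
    u[1] = u1_new

print(u, v)
```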
Abstract:
Extended gcd calculation has a long history and plays an important role in computational number theory and linear algebra. Recent results have shown that finding optimal multipliers in extended gcd calculations is difficult. We present an algorithm which uses lattice basis reduction to produce small integer multipliers $x_1, \ldots, x_m$ for the equation $s = \gcd(s_1, \ldots, s_m) = x_1 s_1 + \cdots + x_m s_m$, where $s_1, \ldots, s_m$ are given integers. The method generalises to produce small unimodular transformation matrices for computing the Hermite normal form of an integer matrix.
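For contrast with the paper's lattice-based method, the baseline below chains the ordinary two-term extended Euclidean algorithm to obtain multipliers for m integers. The multipliers are correct but can grow large, which is exactly the problem lattice basis reduction (e.g. LLL) addresses; this sketch assumes positive inputs.

```python
def xgcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) = a*x + b*y."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

def multi_xgcd(nums):
    """Multipliers x_i with gcd(nums) = sum x_i * nums[i], by chaining xgcd.
    Correct but possibly large multipliers -- the paper's point is that
    lattice basis reduction yields small ones instead."""
    g, xs = nums[0], [1]
    for n in nums[1:]:
        g, a, b = xgcd(g, n)
        xs = [a * x for x in xs] + [b]
    return g, xs

# e.g. multi_xgcd([12, 18, 30]) -> (6, [-1, 1, 0]): 12*(-1) + 18*1 + 30*0 == 6
```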
Abstract:
In the light of Project MATCH, is it reasonable to accept the null hypothesis that there are no clinically significant matching effects between patient characteristics and cognitive-behaviour therapy (CBT), motivational enhancement therapy (MET) and Twelve-Step facilitation therapy (TSF)? The Project MATCH investigators considered the null hypothesis but preferred the alternative hypothesis that further analysis may reveal combinations of patient and therapist characteristics that show more substantial matching effects than any of the variables that they have examined to date [1].
Abstract:
We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
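As a concrete example of the kind of heuristic compared above, the sketch below implements a greedy richness rule: repeatedly pick the site covering the most not-yet-represented features. It is a generic stand-in, not one of the study's specific algorithms, and the efficiency measure here is simply the number of selected sites.

```python
def greedy_reserve(sites, features):
    """Greedy richness heuristic for reserve selection.
    sites: dict mapping site id -> set of feature ids it contains.
    features: set of feature ids that must all be represented."""
    unmet = set(features)
    chosen = []
    while unmet:
        best = max(sites, key=lambda s: len(sites[s] & unmet))
        if not sites[best] & unmet:
            break                      # remaining features are uncoverable
        chosen.append(best)
        unmet -= sites[best]
    return chosen

# e.g. greedy_reserve({'A': {1, 2}, 'B': {2, 3}, 'C': {3}}, {1, 2, 3})
# -> ['A', 'B']; in general greedy can exceed the optimal number of sites,
# which is the suboptimality the study measures against an optimizing algorithm.
```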