963 results for random search algorithms
Abstract:
This thesis describes design methodologies for frequency selective surfaces (FSSs) composed of periodic arrays of pre-fractal metallic patches on single-layer dielectrics (FR4, RT/duroid). Shapes given by the Sierpinski island and T fractal geometries are exploited for the simple design of efficient band-stop spatial filters with applications in the microwave range. Initial results are discussed in terms of the electromagnetic effect resulting from the variation of parameters such as the fractal iteration number (or fractal level), the fractal iteration factor, and the FSS periodicity, depending on the pre-fractal element used (Sierpinski island or T fractal). The transmission properties of the proposed periodic arrays are investigated through simulations performed with the Ansoft Designer™ and Ansoft HFSS™ commercial packages, which run full-wave methods. To validate the employed methodology, FSS prototypes are selected for fabrication and measurement. The results obtained point to interesting features for FSS spatial filters: compactness, with high values of the frequency compression factor, as well as stable frequency responses under oblique incidence of plane waves. As its main focus, this thesis also addresses the application of an alternative electromagnetic (EM) optimization technique for the analysis and synthesis of FSSs with fractal motifs. In application examples of this technique, Vicsek and Sierpinski pre-fractal elements are used in the optimal design of FSS structures. Based on computational intelligence tools, the proposed technique overcomes the high computational cost associated with full-wave parametric analyses. To this end, fast and accurate multilayer perceptron (MLP) neural network models are developed using different parameters as design input variables. These neural network models are used to compute the cost function in the iterations of population-based search algorithms. A continuous genetic algorithm (GA), particle swarm optimization (PSO), and the bees algorithm (BA) are used to optimize FSSs for a specified resonant frequency and bandwidth. The performance of these algorithms is compared in terms of computational cost and numerical convergence. Consistent results are verified by the excellent agreement obtained between simulations and measurements for FSS prototypes built with a given fractal iteration.
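To make the surrogate-assisted loop concrete, the sketch below trains an MLP regressor as a stand-in for the full-wave-derived neural model and uses it as the cost function inside a minimal continuous GA. The design variables, their ranges, the analytic "training data", and the target frequency and bandwidth are illustrative assumptions, not the thesis's actual values.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in training data: (patch size, periodicity) -> (resonant freq, bandwidth).
# In the thesis these targets come from full-wave simulations; here a made-up
# analytic relation is used purely for illustration.
X = rng.uniform([5.0, 10.0], [15.0, 30.0], size=(400, 2))
y = np.c_[30.0 / X[:, 0] + 0.05 * X[:, 1], 0.8 + 0.02 * X[:, 0]]
mlp = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0).fit(X, y)

def cost(pop, f_target=3.5, bw_target=1.0):
    """Cost of each candidate = weighted miss on resonant frequency and bandwidth."""
    pred = mlp.predict(pop)
    return np.abs(pred[:, 0] - f_target) + 0.5 * np.abs(pred[:, 1] - bw_target)

# Minimal continuous GA: truncation selection, blend crossover, Gaussian mutation.
lo, hi = np.array([5.0, 10.0]), np.array([15.0, 30.0])
pop = rng.uniform(lo, hi, size=(40, 2))
for gen in range(100):
    order = np.argsort(cost(pop))
    parents = pop[order[:20]]
    a = parents[rng.integers(0, 20, 40)]
    b = parents[rng.integers(0, 20, 40)]
    w = rng.uniform(size=(40, 1))
    children = w * a + (1 - w) * b                            # blend crossover
    children += rng.normal(scale=0.1, size=children.shape)    # mutation
    pop = np.clip(children, lo, hi)
    pop[0] = parents[0]                                       # elitism
print("best design:", pop[0], "cost:", cost(pop[:1])[0])
```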
Abstract:
We have investigated and extensively tested three families of non-convex optimization approaches for solving the transmission network expansion planning problem: simulated annealing (SA), genetic algorithms (GA), and tabu search algorithms (TS). The paper compares the main features of the three approaches and presents an integrated view of these methodologies. A hybrid approach is then proposed whose performance is far better than that obtained with any of the individual approaches. Results obtained in tests performed on large-scale real-life networks are summarized.
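As a rough illustration of one of the metaheuristics compared above, the sketch below applies simulated annealing to a toy expansion problem in which candidate lines are switched on and off; the investment costs and the "load served" function are placeholders for the real network evaluation (typically a DC load flow), not data from the paper.

```python
import math, random

random.seed(1)

# Toy expansion problem: choose which candidate lines to build (0/1 vector).
# invest[i] is the cost of line i; served(x) is a fictitious stand-in for the
# network evaluation, with diminishing returns on added capacity.
invest = [10, 14, 7, 22, 9, 16]

def served(x):
    return 100 * (1 - math.exp(-0.4 * sum(x)))

def total_cost(x, penalty=5.0):
    return sum(c for c, built in zip(invest, x) if built) + penalty * (100 - served(x))

def neighbour(x):
    y = x[:]
    i = random.randrange(len(y))
    y[i] = 1 - y[i]          # build or remove one candidate line
    return y

x = [0] * len(invest)
best, T = x[:], 50.0
for it in range(2000):
    y = neighbour(x)
    d = total_cost(y) - total_cost(x)
    if d < 0 or random.random() < math.exp(-d / T):
        x = y                # accept improving or (probabilistically) worsening move
    if total_cost(x) < total_cost(best):
        best = x[:]
    T *= 0.995               # geometric cooling schedule
print("best plan:", best, "cost:", round(total_cost(best), 2))
```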
Abstract:
In this work, a heuristic model for the integrated planning of primary distribution networks and secondary distribution circuits is proposed. A Tabu Search (TS) algorithm is employed to solve the planning of the primary distribution networks, while Evolutionary Algorithms (EA) are used to solve the planning model of the secondary networks. The integration of the two planning problems is carried out by means of a constructive heuristic that considers a set of integration alternatives between these networks, treated in a hierarchical way. The planning of the primary networks and secondary distribution circuits is based on assessing the effect of the alternative solutions on the expansion costs of both networks simultaneously. In order to evaluate this methodology, tests were performed on a real-life distribution system taking into account both the primary and secondary networks.
Abstract:
This work proposes a methodology for the optimized allocation of switches for automatic load transfer in distribution systems, in order to improve reliability indices through the restoration of systems with voltage classes from 23 to 35 kV and radial topology. The automatic switches must be allocated in the system so that load can be transferred remotely among the substation sources. The switch allocation problem is formulated as a constrained nonlinear mixed-integer programming model subject to a set of economic and physical constraints, and a dedicated Tabu Search (TS) algorithm is proposed to solve it. The proposed methodology is tested on a large real-life distribution system. © 2011 IEEE.
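A minimal tabu search loop of the kind described, on a toy switch-placement instance, might look like the sketch below; the per-branch reliability gains, switch cost, neighbourhood and tabu tenure are invented for illustration and stand in for the full mixed-integer model and reliability evaluation.

```python
import random
from collections import deque

random.seed(2)

# Toy instance: place n_sw automatic switches on candidate branches to minimise
# a fictitious cost = switch investment + residual energy not supplied (ENS).
branches = list(range(12))
n_sw = 3
ens_gain = {b: random.uniform(5, 30) for b in branches}   # ENS reduction per branch
switch_cost = 8.0

def cost(placement):
    return switch_cost * len(placement) + 300 - sum(ens_gain[b] for b in placement)

current = set(random.sample(branches, n_sw))
best = set(current)
tabu = deque(maxlen=5)                 # branches whose switch was recently removed
for it in range(200):
    candidates = []
    for out in current:
        if out in tabu:
            continue                   # moves undoing a recent change are tabu
        for new in branches:
            if new in current:
                continue
            move = (current - {out}) | {new}
            candidates.append((cost(move), out, move))
    if not candidates:
        continue
    # Take the best non-tabu move (a fuller TS would add an aspiration criterion).
    candidates.sort(key=lambda t: t[0])
    _, out, current = candidates[0]
    tabu.append(out)
    if cost(current) < cost(best):
        best = set(current)
print("switch placement:", sorted(best), "cost:", round(cost(best), 2))
```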
Abstract:
This study estimates the thermal conductivity of the main rock-forming minerals, as well as the mean conductivity of the solid phase of five basic lithologies (sandstones, limestones, dolomites, anhydrites, and argillaceous lithologies). Several thermal models were compared with one another, making it possible to identify the most appropriate one for representing the aggregate of minerals and fluids that make up the rocks. The results obtained can be applied to a wide variety of thermal modelling problems. The methodology is based on a nonlinear regression algorithm known as Controlled Random Search. The behaviour of the algorithm is evaluated on synthetic data before being applied to real data. The model used in the regression to obtain the thermal conductivity of the minerals is the geometric mean model. The regression method, applied to each lithological subset, yielded the following values for the mean thermal conductivity of the solid phase: sandstones 5.9 ± 1.33 W/mK, limestones 3.1 ± 0.12 W/mK, dolomites 4.7 ± 0.56 W/mK, anhydrites 6.3 ± 0.27 W/mK, and argillaceous lithologies 3.4 ± 0.48 W/mK. The study then lays out the basis for studying heat diffusion in cylindrical coordinates, considering the effect of mud-filtrate invasion into the formation through an adaptation of well-injection simulation drawn from reservoir engineering theory. From this, the relative errors in apparent resistivity are estimated, taking the original formation temperature as the reference. At this stage, the finite difference method is used to evaluate the borehole-formation temperature distribution. The invasion simulation is carried out in cylindrical coordinates by adapting the Buckley-Leverett equation from Cartesian coordinates. Effects such as mudcake build-up on the borehole wall, gravity, and capillary pressure are not taken into account. From the saturation and temperature distributions, the radial resistivity distribution is obtained, which is convolved with the radial response of the induction tool (transmitter-receiver), yielding the apparent resistivity of the formation. Taking the original formation temperature as the reference, the relative errors in apparent resistivity are obtained. By varying some parameters, it is found that the porosity and the original saturation of the formation can be responsible for very large errors in the resistivity estimates, especially if such "readings" are taken immediately after drilling (MWD). The temperature difference between borehole and formation is the main cause of these errors, indicating that, in situations where this difference is large, induction logging should be carried out one to two days after the well is drilled.
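As an illustration of the regression machinery described above, the sketch below fits the solid-phase conductivity of the geometric mean model k_bulk = k_solid^(1-phi) * k_water^phi to synthetic porosity/conductivity data using a controlled random search step in the spirit of Price's CRS algorithm; the data, bounds and population size are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: bulk conductivity of a water-saturated rock following the
# geometric-mean mixing model, with 2% multiplicative noise. k_water and the
# "true" k_solid are illustrative values only.
k_water, k_solid_true = 0.6, 4.7
phi = rng.uniform(0.05, 0.30, 60)
k_obs = k_solid_true ** (1 - phi) * k_water ** phi * rng.normal(1.0, 0.02, 60)

def misfit(k_solid):
    pred = k_solid ** (1 - phi) * k_water ** phi
    return np.sum((pred - k_obs) ** 2)

# Controlled random search (in the spirit of Price's CRS): keep a population of
# trial values, pick a random simplex, reflect its last vertex through the
# centroid of the others, and replace the population's worst point on improvement.
lo, hi, n_pop = 1.0, 10.0, 25
pop = rng.uniform(lo, hi, n_pop)
f = np.array([misfit(k) for k in pop])
for it in range(2000):
    i, j = rng.choice(n_pop, size=2, replace=False)  # 1-D problem: 2-point simplex
    trial = 2 * pop[i] - pop[j]                      # reflection through the centroid
    if lo <= trial <= hi:
        ft = misfit(trial)
        worst = np.argmax(f)
        if ft < f[worst]:
            pop[worst], f[worst] = trial, ft
print("estimated k_solid:", round(float(pop[np.argmin(f)]), 3), "W/mK")
```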
Abstract:
Free-air gravity anomalies along profiles perpendicular to a passive continental margin show a characteristic pattern. This pattern is satisfactorily explained by a geophysical model consisting of a distribution of two-dimensional horizontal discontinuities. An automatic random search procedure is proposed for the quantitative interpretation of the data. Using the flexible polyhedron (Simplex) method, the main parameters of the model - the density contrast, depth, throw, and location of each discontinuity - could be found, provided a suitable ratio between the number of data points and the number of parameters to be determined. Over the slope region, the free-air anomalies of the continental margin can be explained by a single horizontal discontinuity (a simple step); and since the response of the gravity data in the wavenumber domain contains information about this anomaly, an iterative graphical procedure was proposed for the spectral analysis of this signal. By applying the Fourier transform it is possible to determine the depth and throw of the discontinuity, and once these parameters are known the density is uniquely calculated. The basic aim of these procedures is to combine the two interpretation methods, in the space and wavenumber domains, in order to obtain constrained solutions that are more plausible with respect to the geological context expected for the study area. Both interpretation procedures were applied to the free-air gravity anomalies of the northern Brazilian continental margin, northeastern sector, spanning the states from Maranhão to Rio Grande do Norte. The resolution capability of each procedure was then analysed. It was shown that inversion carried out directly in the space domain is more favourable for interpreting the free-air anomalies, although the spectral treatment is comparatively simpler.
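A space-domain inversion of the "simple step" kind described above can be sketched with the flexible polyhedron (Nelder-Mead simplex) method; the thin-sheet step-anomaly formula, the synthetic profile and the starting guess below are textbook-style assumptions, not the thesis's actual parameterization or data.

```python
import numpy as np
from scipy.optimize import minimize

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def step_anomaly(x, drho, t, z, x0):
    """Free-air anomaly (mGal) of a thin semi-infinite horizontal slab ("simple
    step"): density contrast drho (kg/m^3), throw t (m), depth z (m), edge at x0.
    This textbook thin-sheet approximation stands in for the thesis's model."""
    return 2 * G * drho * t * (np.pi / 2 + np.arctan((x - x0) / z)) * 1e5

# Synthetic noisy profile (illustrative values, not real margin data).
rng = np.random.default_rng(4)
x = np.linspace(-50e3, 50e3, 80)
true_params = (300.0, 2000.0, 8000.0, 5e3)
g_obs = step_anomaly(x, *true_params) + rng.normal(0, 0.5, x.size)

def misfit(p):
    return np.sum((step_anomaly(x, *p) - g_obs) ** 2)

# Flexible-polyhedron (Nelder-Mead simplex) search from a rough initial guess.
res = minimize(misfit, x0=[200.0, 1500.0, 5000.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-6, "maxiter": 20000})
print("recovered (drho, t, z, x0):", res.x)
```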
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
This paper presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the image foresting transform with a never-exploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation with other popular methods such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points), as compared to live wire for objects with complex shapes. This paper also includes a discussion about how to combine different methods in order to take advantage of their complementary strengths.
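The sketch below illustrates the general image-foresting-transform / live-wire family of boundary tracking that the paper builds on: a shortest-path search between two anchor pixels on a 4-connected image graph with an additive arc cost. The riverbed connectivity function itself is not reproduced here; the cost image and anchor points are toy assumptions.

```python
import heapq
import numpy as np

def boundary_track(cost_img, src, dst):
    """Generic IFT-style shortest-path tracking between two anchor pixels.
    Additive path cost on a 4-connected grid; the paper's riverbed connectivity
    function is different and not reproduced here."""
    h, w = cost_img.shape
    dist = np.full((h, w), np.inf)
    pred = {}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost_img[nr, nc]        # additive arc weight
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    pred[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Recover the optimum path (the tracked boundary segment) by backtracking.
    path, node = [dst], dst
    while node != src:
        node = pred[node]
        path.append(node)
    return path[::-1]

# Example: follow a low-cost "valley" across a toy cost image between two anchors.
img = np.ones((50, 50))
img[25, :] = 0.05
print("path length:", len(boundary_track(img, (25, 0), (25, 49))))
```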
Abstract:
Let G be a graph on n vertices with maximum degree Δ. We use the Lovász local lemma to show the following two results about colourings of the edges of the complete graph Kn. If for each vertex v of Kn the colouring assigns each colour to at most (n - 2)/(22.4Δ²) edges emanating from v, then there is a copy of G in Kn which is properly edge-coloured. This improves on a result of Alon, Jiang, Miller, and Pritikin [Random Struct. Algorithms 23(4), 409-433, 2003]. On the other hand, if the colouring assigns each colour to at most n/(51Δ²) edges of Kn, then there is a copy of G in Kn such that each edge of G receives a different colour. This proves a conjecture of Frieze and Krivelevich [Electron. J. Comb. 15(1), R59, 2008]. Our proofs rely on a framework developed by Lu and Székely [Electron. J. Comb. 14(1), R63, 2007] for applying the local lemma to random injections. In order to improve the constants in our results we use a version of the local lemma due to Bissacot, Fernández, Procacci, and Scoppola [preprint, arXiv:0910.1824]. © 2011 Wiley Periodicals, Inc. Random Struct. Alg., 40, 425-436, 2012
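For reference, the symmetric form of the Lovász local lemma underlying arguments of this type can be stated as follows (this is the standard textbook statement, not the refined version of Bissacot et al. used above to improve the constants):

```latex
% Symmetric Lovász local lemma: if events A_1,\dots,A_m each have probability
% at most p, and each is mutually independent of all but at most d of the
% others, then
\[
  e\,p\,(d+1) \le 1
  \quad\Longrightarrow\quad
  \Pr\Bigl(\,\bigcap_{i=1}^{m} \overline{A_i}\,\Bigr) > 0 .
\]
```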
Abstract:
Context. CoRoT is a pioneering space mission whose primary goals are stellar seismology and the search for extrasolar planets. Its surveys of large stellar fields generate numerous planetary candidates whose lightcurves have transit-like features. An extensive analytical and observational follow-up effort is undertaken to classify these candidates. Aims. We present the list of planetary transit candidates from the CoRoT LRa01 star field in the Monoceros constellation toward the Galactic anti-center direction. The CoRoT observations of LRa01 lasted from 24 October 2007 to 3 March 2008. Methods. We acquired and analyzed 7470 chromatic and 3938 monochromatic lightcurves. Instrumental noise and stellar variability were treated with several filtering tools by different teams from the CoRoT community. Different transit search algorithms were applied to the lightcurves. Results. Fifty-one stars were classified as planetary transit candidates in LRa01. Thirty-seven (i.e., 73% of all candidates) are "good" planetary candidates based on photometric analysis only. Thirty-two (i.e., 87% of the "good" candidates) have been followed up. At the time of writing, twenty-two cases had been solved and five planets discovered: three transiting hot Jupiters (CoRoT-5b, CoRoT-12b, and CoRoT-21b), the first terrestrial transiting planet (CoRoT-7b), and another planet in the same system (CoRoT-7c, detected by radial velocity survey only). Evidence of another non-transiting planet in the CoRoT-7 system, namely CoRoT-7d, was recently found as well.
Abstract:
We study quasi-random properties of k-uniform hypergraphs. Our central notion is uniform edge distribution with respect to large vertex sets. We will find several equivalent characterisations of this property and our work can be viewed as an extension of the well known Chung-Graham-Wilson theorem for quasi-random graphs. Moreover, let K(k) be the complete graph on k vertices and M(k) the line graph of the graph of the k-dimensional hypercube. We will show that the pair of graphs (K(k),M(k)) has the property that if the number of copies of both K(k) and M(k) in another graph G are as expected in the random graph of density d, then G is quasi-random (in the sense of the Chung-Graham-Wilson theorem) with density close to d. (C) 2011 Wiley Periodicals, Inc. Random Struct. Alg., 40, 1-38, 2012
Abstract:
While imperfect information games are an excellent model of real-world problems and tasks, they are often difficult for computer programs to play at a high level of proficiency, especially if they involve major uncertainty and a very large state space. Kriegspiel, a variant of chess that resembles a wargame, is a perfect example: while the game has been studied for decades from a game-theoretical viewpoint, it is only very recently that the first practical algorithms for playing it began to appear. This thesis presents, documents and tests a multi-sided effort towards building a strong Kriegspiel player, using heuristic search, retrograde analysis and Monte Carlo tree search algorithms to achieve increasingly higher levels of play. The resulting program is currently the strongest computer player in the world and plays at an above-average human level.
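A minimal Monte Carlo tree search (UCT) loop, shown here on a toy Nim game rather than Kriegspiel, illustrates the selection-expansion-simulation-backpropagation cycle the thesis relies on; the game, exploration constant and iteration count are illustrative choices only.

```python
import math, random

random.seed(5)

# Toy game: Nim with 10 stones, take 1-3 per turn, whoever takes the last stone
# wins. The Kriegspiel program's belief-state handling is far more involved.

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones, self.parent, self.move = stones, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

def moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def rollout(stones):
    """Random playout; returns 1 if the player to move at `stones` wins."""
    turn = 0
    while stones > 0:
        stones -= random.choice(moves(stones))
        turn ^= 1
    return 1 if turn == 1 else 0

def uct_search(root_stones, iters=3000, c=1.4):
    root = Node(root_stones)
    for _ in range(iters):
        node = root
        # Selection: descend through fully expanded nodes by the UCB1 rule.
        while node.children and len(node.children) == len(moves(node.stones)):
            node = max(node.children, key=lambda ch: ch.wins / ch.visits +
                       c * math.sqrt(math.log(node.visits) / ch.visits))
        # Expansion: add one untried child, if the node is not terminal.
        tried = [ch.move for ch in node.children]
        untried = [m for m in moves(node.stones) if m not in tried]
        if untried:
            m = random.choice(untried)
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # Simulation + backpropagation; the reward flips at each level going up.
        reward = 1 - rollout(node.stones)   # from the viewpoint of the mover into node
        while node:
            node.visits += 1
            node.wins += reward
            reward = 1 - reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print("suggested first move from 10 stones:", uct_search(10))
```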
Abstract:
In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously raises a multiple-testing problem and produces false-positive results. Although this problem can be dealt with effectively through approaches such as Bonferroni correction, permutation testing and false discovery rates, patterns of the joint effects of several genes, each with a weak effect, may still not be identifiable. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search over all SNP subsets is computationally infeasible for the millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset in large data sets where the number of feature SNPs far exceeds the number of observations.
In this study, we take two steps to achieve this goal. First we selected 1000 SNPs through an effective filter method, and then we performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis in terms of classification performance. Finally, we performed chi-square tests to examine the relationship between each SNP and disease from another point of view.
In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through linear discriminant analysis (LDA) is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that exhaustive search of small subsets with one SNP, two SNPs or three SNPs based on the best 100 composite 2-SNP sets can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of the SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter, owing to overfitting from exploring more complex subset states.
Our results also indicate that HMSS, as a criterion for evaluating the classification ability of a function, can be used on imbalanced data without modifying the original dataset, unlike classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and that its ability to detect the target status is superior to traditional LDA in this study.
From our results, the best test probability-HMSS for predicting CVD, stroke, CAD and psoriasis through sIB is 0.59406, 0.641815, 0.645315 and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls reaches 0.708999, 0.863216, 0.639918 and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be at least 0.4. On the other hand, the highest test accuracy of sIB for diagnosing disease among cases reaches 0.748644, 0.789916, 0.705701 and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.
A further genome-wide association study using the chi-square test shows that no significant SNPs are detected at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. Study results in WTCCC detect only two significant SNPs associated with CAD. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease through the chi-square test at the cut-off value 1.11E-07.
Although our classification methods can achieve high accuracy in the study, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or an efficient computing system, neither of which can be accomplished currently in our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability, and that SNPs with good discriminant power are not necessarily causal markers for the disease.
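The wrapper stage described above can be sketched as sequential forward selection around an LDA classifier scored by HMSS (the harmonic mean of sensitivity and specificity); the synthetic genotype matrix, subset size limit and cross-validation setup below are assumptions for illustration, not the study's data or exact protocol.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)

# Toy genotype matrix (0/1/2 minor-allele counts) and case/control labels; the
# first three SNPs carry a weak joint signal. Purely synthetic stand-in data.
n, p = 300, 50
X = rng.integers(0, 3, size=(n, p)).astype(float)
y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(0, 1.5, n) > 1.0).astype(int)

def hmss(y_true, y_pred):
    """Harmonic mean of sensitivity and specificity; used instead of accuracy
    so that imbalanced case/control groups do not dominate the score."""
    sens = np.mean(y_pred[y_true == 1] == 1)
    spec = np.mean(y_pred[y_true == 0] == 0)
    return 0.0 if sens + spec == 0 else 2 * sens * spec / (sens + spec)

def score(subset):
    pred = cross_val_predict(LinearDiscriminantAnalysis(), X[:, subset], y, cv=5)
    return hmss(y, pred)

# Sequential forward selection wrapped around the LDA classifier.
selected, remaining, best_score = [], list(range(p)), 0.0
for _ in range(5):                                   # grow the subset up to 5 SNPs
    gains = [(score(selected + [j]), j) for j in remaining]
    s, j = max(gains)
    if s <= best_score:                              # stop when no SNP improves HMSS
        break
    selected.append(j)
    remaining.remove(j)
    best_score = s
print("selected SNPs:", selected, "HMSS:", round(best_score, 3))
```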
Abstract:
To improve percolation modelling in soils, the geometrical properties of the pore space must be understood; these include porosity, particle and pore size distributions, and the connectivity of the pores. A study was conducted on a soil at different bulk densities based on 3D grey-scale images acquired by X-ray computed tomography. The objective was to analyse the effect on percolation of aspects of pore network geometry and to discuss the influence of the grey threshold applied to the images. A model based on random walk algorithms was applied to the images, combining five bulk densities with up to six threshold values per density. This allowed a dynamic perspective on soil structure in relation to water transport through the inclusion of percolation speed in the analyses. To evaluate connectivity separately and isolate the effect of the grey threshold, a critical value of 35% porosity was selected for every density. This value was the smallest at which fully percolating walks appeared for all images of the same porosity and may represent a percolation situation that is comparable among bulk densities. This criterion avoided an arbitrary choice of grey thresholds. In addition, a random matrix simulation at 35% porosity was compared with the real images to test whether pore connectivity is a consequence of a non-random soil structure.
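A rough sketch of a random-walk percolation probe on a 3D binary pore image is given below; the random "pore" matrix at 35% porosity, the walker count and the step budget are illustrative stand-ins for the CT-derived images and the model actually used.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy 3-D binary pore image (True = pore voxel). In the study such arrays come
# from grey-thresholded X-ray CT images; here a random matrix at roughly 35%
# porosity stands in for the real soil structure.
pores = rng.random((30, 30, 30)) < 0.35

def fraction_percolating(pores, n_walkers=100, max_steps=5000):
    """Release random walkers on pore voxels of the top face and report the
    fraction that reach the bottom face by nearest-neighbour steps restricted
    to pore voxels. A rough sketch of a random-walk percolation probe only."""
    nz = pores.shape[2] - 1
    starts = np.argwhere(pores[:, :, 0])
    if len(starts) == 0:
        return 0.0
    steps = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                      [0, -1, 0], [0, 0, 1], [0, 0, -1]])
    shape = np.array(pores.shape)
    reached = 0
    for _ in range(n_walkers):
        pos = np.append(starts[rng.integers(len(starts))], 0)
        for _ in range(max_steps):
            trial = pos + steps[rng.integers(6)]
            if np.all(trial >= 0) and np.all(trial < shape) and pores[tuple(trial)]:
                pos = trial                       # step only into pore voxels
            if pos[2] == nz:
                reached += 1
                break
    return reached / n_walkers

print("fraction of percolating walks:", fraction_percolating(pores))
```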