915 results for random search algorithms


Relevance: 90.00%

Abstract:

Generative algorithms for random graphs have yielded insights into the structure and evolution of real-world networks. Most networks exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Usually, random graph models consider only structural information, but many real-world networks also have labelled vertices and weighted edges. In this paper, we present a generative model for random graphs with discrete vertex labels and numeric edge weights. The weights are represented as a set of Beta Mixture Models (BMMs) with an arbitrary number of mixture components, which are learned from real-world networks. We propose a Bayesian Variational Inference (VI) approach, which yields accurate estimates while keeping computation times tractable. We compare our approach to state-of-the-art random labelled graph generators and to an earlier approach based on Gaussian Mixture Models (GMMs). Our results allow us to draw conclusions about the contribution of vertex labels and edge weights to graph structure.
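
A minimal sketch of the generative step for edge weights, assuming a BMM has already been learned: the mixture weights and (α, β) shape pairs below are invented for illustration, not taken from the paper, and the VI learning step is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical BMM parameters: component weights and Beta shape pairs.
# In the paper these are learned from real networks via variational
# inference; here they are made up for illustration.
mix_weights = np.array([0.7, 0.3])
alphas = np.array([2.0, 8.0])
betas = np.array([5.0, 2.0])

def sample_edge_weights(n_edges):
    """Pick a mixture component per edge, then draw the weight from the
    corresponding Beta distribution."""
    comps = rng.choice(len(mix_weights), size=n_edges, p=mix_weights)
    return rng.beta(alphas[comps], betas[comps])

weights = sample_edge_weights(1000)
print(f"mean={weights.mean():.3f}, min={weights.min():.3f}, max={weights.max():.3f}")
```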

Relevance: 90.00%

Abstract:

Doctoral thesis, Mathematics, Operational Research, Universidade do Algarve, 2009

Relevance: 90.00%

Abstract:

This thesis introduces the Salmon Algorithm, a search meta-heuristic that can be used for a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. It has a number of tunable parameters, so experiments were conducted to find the optimal parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to outperform an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm improved on the best known values for five of the six edit code test cases. It matched the best known results on four of the seven Hamming codes and on all three covering codes. The results suggest that the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
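
The abstract does not give the Salmon Algorithm's update rules, so the sketch below is not that algorithm; it is a minimal example of the guided random search family it is compared against (randomized 2-opt moves with an annealed acceptance rule) on a toy TSP instance.

```python
import math
import random

random.seed(1)

# Toy TSP instance: random cities in the unit square.
n = 30
pts = [(random.random(), random.random()) for _ in range(n)]

def tour_length(tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % n]]) for i in range(n))

tour = list(range(n))
best = tour_length(tour)
temp = 1.0
for step in range(20000):
    i, j = sorted(random.sample(range(n), 2))
    cand = tour[:i] + tour[i:j][::-1] + tour[j:]       # 2-opt: reverse a segment
    delta = tour_length(cand) - tour_length(tour)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        tour = cand                                    # accept improving or lucky moves
    temp *= 0.9995                                     # geometric cooling
    best = min(best, tour_length(tour))
print(f"best tour length: {best:.3f}")
```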

Relevance: 90.00%

Abstract:

In computational neuroscience, it has been hypothesized that the visual system, from the retina up to at least the primary visual cortex, continuously fits a probabilistic model with latent variables to its stream of percepts. Neither the exact model nor the exact fitting method is known, but existing algorithms for fitting such models require conditional estimates of the latent variables. This may help us understand why the visual system would fit such a model: if the model is appropriate, these conditional estimates can also form an excellent representation for analysing the semantic content of perceived images. The work presented here uses image classification performance (discriminating between common object categories) as a basis for comparing models of the visual system and algorithms for fitting those models (viewed as probability densities) to images. This thesis (a) shows that models based on the complex cells of visual area V1 generalize better from labelled training examples than conventional neural networks whose hidden units more closely resemble V1's simple cells; (b) presents a new interpretation of complex-cell-based models of the visual system as probability distributions, along with new algorithms for fitting them to data; and (c) shows that these models form representations that are better for image classification after being trained as probability models. Two additional technical innovations that made this work possible are also described: a random search algorithm for selecting hyper-parameters, and a compiler for matrix mathematical expressions that can optimize those expressions for central processing units (CPU) and graphics processing units (GPU).
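
A minimal sketch of random search over hyper-parameters, in the spirit of the hyper-parameter selection step mentioned above; the objective below is an invented stand-in (in practice it would be a model's validation score), and the parameter names and ranges are assumptions.

```python
import math
import random

random.seed(0)

# Stand-in for a model's validation score as a function of two
# hyper-parameters (invented for illustration).
def validation_score(log10_lr, n_hidden):
    return -((log10_lr + 2.5) ** 2) - (math.log2(n_hidden) - 7) ** 2 / 10

best_score, best_cfg = -math.inf, None
for trial in range(100):
    cfg = {
        "log10_lr": random.uniform(-5, 0),                 # learning rate on a log scale
        "n_hidden": random.choice([64, 128, 256, 512, 1024]),
    }
    score = validation_score(cfg["log10_lr"], cfg["n_hidden"])
    if score > best_score:                                 # keep the best random draw
        best_score, best_cfg = score, cfg
print(best_cfg, f"score={best_score:.3f}")
```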

Relevance: 90.00%

Abstract:

The study of variable stars is an important topic in modern astrophysics. Since the invention of powerful telescopes and CCDs with high resolving power, variable star data has been accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to the analysis of variable star astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics.

For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; these are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables, such as novae and supernovae, occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature.

Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data is folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star. One way to identify and classify a variable star is for an expert to visually inspect its phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers.

Research on variable stars can be divided into stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as some derived parameters. Of these, the period is the most important, since a wrong period leads to sparse phased light curves and misleading information.

Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and possibly large gaps. For ground-based observations this is due to daily varying daylight and weather conditions; observations from space may suffer from the impact of cosmic-ray particles.
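
A minimal sketch of the phase folding described above, on synthetic data and an assumed known period:

```python
import numpy as np

rng = np.random.default_rng(2)
period = 0.75                                  # trial period (days), assumed here
t = np.sort(rng.uniform(0, 30, 400))           # unevenly spaced observation times
mag = 12 + 0.4 * np.sin(2 * np.pi * t / period) + rng.normal(0, 0.02, t.size)

phase = (t % period) / period                  # fold: phase in [0, 1)
order = np.argsort(phase)
# Plotting phase[order] against mag[order] gives the phased light curve.
print(phase[order][:5], mag[order][:5])
```
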
Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis.

There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can have several causes: power leakage to other frequencies, due to the finite total interval, finite sampling interval and finite amount of data; aliasing, due to the influence of regular sampling; spurious periods, due to long gaps; and power flow to harmonic frequencies, an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton (AAVSO) states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".

It would benefit the variable star astronomical community if basic parameters, such as period, amplitude and phase, were obtained more accurately when huge time series databases are subjected to automation. In the present thesis, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
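
As a concrete instance of the parametric methods named above, a minimal Lomb-Scargle periodogram run via astropy (assuming astropy is installed; the thesis's own implementations are not shown here):

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(3)
true_period = 0.75
t = np.sort(rng.uniform(0, 30, 400))           # uneven sampling, as is typical
y = 12 + 0.4 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.02, t.size)
dy = np.full(t.size, 0.02)                     # per-point magnitude errors

frequency, power = LombScargle(t, y, dy).autopower()
best_period = 1 / frequency[np.argmax(power)]  # peak of the periodogram
print(f"recovered period: {best_period:.4f} (true: {true_period})")
```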

Relevance: 90.00%

Abstract:

In this paper, we present a P2P-based database sharing system that provides information sharing capabilities through keyword-based search techniques. Our system requires neither a global schema nor schema mappings between different databases, and our keyword-based search algorithms are robust in the presence of frequent changes in the content and membership of peers. To facilitate data integration, we introduce a keyword join operator that combines partial answers containing different keywords into complete answers. We also present an efficient algorithm that optimizes the keyword join operations for partial answer integration. Our experimental study on both real and synthetic datasets demonstrates the effectiveness of our algorithms and the efficiency of the proposed query processing strategies.
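
A minimal sketch of the keyword join idea: partial answers that each match some of the query keywords are combined on a shared join value until the keyword sets cover the whole query. The schema and data are invented for illustration; this is not the paper's operator implementation.

```python
query = {"smith", "databases", "2004"}

# Partial answers as (join_value, matched_keywords) pairs, e.g., tuples
# retrieved from different peers' tables (hypothetical data).
partials = [
    ("p17", {"smith"}),
    ("p17", {"databases", "2004"}),
    ("p99", {"databases"}),
]

def keyword_join(partials, query):
    complete = []
    for i in range(len(partials)):
        for j in range(i + 1, len(partials)):
            k1, kw1 = partials[i]
            k2, kw2 = partials[j]
            if k1 == k2 and kw1 | kw2 == query:    # same join value, full coverage
                complete.append((k1, kw1 | kw2))
    return complete

print(keyword_join(partials, query))   # -> one complete answer built from p17's partials
```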

Relevance: 90.00%

Abstract:

Search has become a hot topic in Internet computing, with rival search engines battling to become the de facto Web portal, harnessing search algorithms to wade through information on a scale undreamed of by early information retrieval (IR) pioneers. This article examines how search has matured from its roots in specialized IR systems to become a key foundation of the Web. The authors describe new challenges posed by the Web's scale, and show how search is changing the nature of the Web as much as the Web has changed the nature of search.

Relevance: 90.00%

Abstract:

Consider the following problem: for given graphs G and F_1, ..., F_k, find a coloring of the edges of G with k colors such that G does not contain F_i in color i. Rödl and Ruciński studied this problem for the random graph G(n,p) in the symmetric case when k is fixed and F_1 = ... = F_k = F. They proved that such a coloring exists asymptotically almost surely (a.a.s.) provided that p ≤ bn^(−β) for some constants b = b(F, k) and β = β(F). This result is essentially best possible because for p ≥ Bn^(−β), where B = B(F, k) is a large constant, such an edge-coloring does not exist. Kohayakawa and Kreuter conjectured a threshold function n^(−β(F_1, ..., F_k)) for arbitrary F_1, ..., F_k. In this article we address the case when F_1, ..., F_k are cliques of different sizes and propose an algorithm that a.a.s. finds a valid k-edge-coloring of G(n,p) with p ≤ bn^(−β) for some constant b = b(F_1, ..., F_k), where β = β(F_1, ..., F_k) is as conjectured. With a few exceptions, this algorithm also works in the general symmetric case. We also show that there exists a constant B = B(F_1, ..., F_k) such that for p ≥ Bn^(−β) the random graph G(n,p) a.a.s. does not have a valid k-edge-coloring, provided the so-called KŁR conjecture holds. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 34, 419-453, 2009
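
As background, the two-color case of the conjecture is commonly stated via the asymmetric 2-density (this formulation is standard in the literature, not quoted from the abstract): for graphs F_1, F_2 with m_2(F_1) ≥ m_2(F_2), the conjectured threshold is p = n^(−1/m_2(F_1, F_2)), where

\[
m_2(F) = \max_{J \subseteq F,\; v_J \ge 3} \frac{e_J - 1}{v_J - 2},
\qquad
m_2(F_1, F_2) = \max_{J \subseteq F_1,\; e_J \ge 1} \frac{e_J}{v_J - 2 + 1/m_2(F_2)},
\]

so that β(F_1, F_2) = 1/m_2(F_1, F_2) in the notation above.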

Relevance: 90.00%

Abstract:

Alternative sampling procedures are compared to the pure random search method. It is shown that the efficiency of the algorithm can be improved with respect to the expected number of steps to reach an epsilon-neighborhood of the optimal point.
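
A minimal sketch of the pure random search baseline: draw uniform samples and count the steps until one lands in an epsilon-neighborhood of a (here, known) optimum. The domain and epsilon are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
optimum = np.array([0.3, 0.7])
eps = 0.05

def steps_to_eps_neighborhood():
    steps = 0
    while True:
        steps += 1
        x = rng.uniform(0, 1, size=2)          # pure random search: i.i.d. uniform draws
        if np.linalg.norm(x - optimum) <= eps:
            return steps

trials = [steps_to_eps_neighborhood() for _ in range(200)]
# Expected steps ~ 1 / (pi * eps^2) here, since each draw hits the
# epsilon-disk independently with probability equal to its area.
print(f"mean steps: {np.mean(trials):.1f}")
```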

Relevance: 90.00%

Abstract:

Optimized allocation of Phasor Measurement Units (PMUs) allows control, monitoring and accurate operation of electric power distribution systems, improving reliability and service quality. Good-quality, substantial results have been obtained for transmission systems using fault location techniques based on voltage measurements. Based on these techniques, and by performing optimized PMU allocation, it is possible to develop a fault locator for electric power distribution systems that provides accurate results. The PMU allocation problem has combinatorial features related to the number of devices that can be allocated and the candidate places for allocation. A tabu search algorithm is the technique proposed to carry out the PMU allocation. Applied to a real-life 141-bus urban distribution feeder, this technique significantly improved the fault location results. © 2004 IEEE.

Relevance: 90.00%

Abstract:

This paper presents an efficient tabu search algorithm (TSA) to solve the problem of feeder reconfiguration in distribution systems. The main characteristics that make the proposed TSA particularly efficient are: a) the way in which the neighborhood of the current solution is defined; b) the way in which the objective function value is estimated; and c) the reduction of the neighborhood using heuristic criteria. Four electrical systems, described in detail in the specialized literature, were used to test the proposed TSA. The results demonstrate that it is computationally very fast and finds the best solutions known in the specialized literature. © 2012 IEEE.
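
A minimal generic tabu search skeleton in the spirit of the TSA above; the paper's neighborhood definition, objective estimation and heuristic neighborhood reduction are specific to feeder reconfiguration and are not reproduced here. Solutions are bitstrings, a move flips one bit, and the objective is an invented stand-in.

```python
def objective(x):
    # Hypothetical linear objective; stands in for the reconfiguration cost.
    return sum(b * w for b, w in zip(x, [3, -1, 4, -2, 5, -3, 1, 2]))

def tabu_search(x, iters=50, tenure=3):
    best, best_val = x[:], objective(x)
    tabu = {}                                   # bit index -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for i in range(len(x)):
            y = x[:]
            y[i] ^= 1                           # move: flip one bit
            val = objective(y)
            # aspiration: allow a tabu move if it beats the best found so far
            if tabu.get(i, -1) < it or val > best_val:
                candidates.append((val, i, y))
        val, i, x = max(candidates)             # best admissible neighbor, even if worse
        tabu[i] = it + tenure                   # forbid reversing the move for a while
        if val > best_val:
            best, best_val = x[:], val
    return best, best_val

print(tabu_search([0] * 8))                     # -> ([1, 0, 1, 0, 1, 0, 1, 1], 15)
```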

Relevance: 90.00%

Abstract:

Consider a one-dimensional environment with N randomly distributed sites. An agent explores this random medium, moving deterministically with a spatial memory μ. A crossover from local to global exploration occurs in one dimension at a well-defined memory value μ1 = log2 N. In the stochastic version, the dynamics is ruled by the memory and by a temperature T, which affects the hopping displacement. This dynamics also shows a crossover in one dimension, obtained computationally, between exploration schemes, characterized additionally by the trajectory size Np (an aging effect). In this paper we provide an analytical approach to a modified stochastic version in which the parameter T plays the role of a maximum hopping distance. This modification allows us to obtain a general analytical expression for the crossover as a function of the parameters μ, T, and Np. In contrast to what has been proposed by previous studies, we find that the crossover occurs in any dimension d. These results have been validated by numerical experiments and may be of great value for fixing optimal parameters in search algorithms. © 2013 American Physical Society.
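
A minimal sketch of the deterministic exploration rule as we read the abstract (the exact rule is our assumption, not spelled out there): a walker on N random sites in 1D always hops to the nearest site not visited within the last μ steps. The stochastic, temperature-limited variant is not shown.

```python
import numpy as np

rng = np.random.default_rng(6)
N, mu, n_steps = 200, 5, 100
sites = np.sort(rng.uniform(0, 1, N))           # random medium in 1D

pos = N // 2
recent = [pos]                                  # memory window: last mu visited sites
visited = {pos}
for _ in range(n_steps):
    d = np.abs(sites - sites[pos])
    d[recent] = np.inf                          # exclude sites still in memory
    pos = int(np.argmin(d))                     # deterministic hop: nearest allowed site
    visited.add(pos)
    recent = (recent + [pos])[-mu:]
print(f"distinct sites visited in {n_steps} steps: {len(visited)}")
```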

Relevance: 90.00%

Abstract:

The Common Reflection Surface (CRS) stacking method produces simulated zero-offset (ZO) sections by summing seismic events of multi-coverage data along stacking surfaces. The method does not depend on a velocity model of the medium; it only requires a priori knowledge of the near-surface velocity. The simulation of ZO sections by this stacking method uses a second-order hyperbolic approximation of paraxial-ray traveltimes to define the stacking surface, or CRS stacking operator. For 2D media this operator depends on three kinematic attributes of two hypothetical waves (the NIP and N waves), observed at the emergence point of the normal-incidence central ray: the emergence angle of the zero-offset central ray (β0), the radius of curvature of the normal-incidence-point wave (R_NIP) and the radius of curvature of the normal wave (R_N). The optimization problem in the CRS method therefore consists of determining, from the seismic data, the three optimal parameters (β0, R_NIP, R_N) associated with each sampling point of the ZO section to be simulated. These parameters can be determined simultaneously by multidimensional global search (global optimization), using some coherence criterion as the objective function. The optimization problem in the CRS method is very important for good performance with respect to the quality of the results and, above all, the computational cost, compared with the methods traditionally used in the seismic industry. There are several search strategies for determining these parameters, based on systematic searches or on optimization algorithms, estimating one parameter at a time, two, or all three parameters simultaneously. Within the global optimization strategy, the three parameters can be estimated by two procedures: in the first, all three parameters are estimated simultaneously; in the second, two parameters (β0, R_NIP) are first determined simultaneously and the third parameter (R_N) is then determined using the values of the two already known. This work presents the application and comparison of four global optimization algorithms for finding the optimal CRS parameters: Simulated Annealing (SA), Very Fast Simulated Annealing (VFSA), Differential Evolution (DE) and Controlled Random Search 2 (CRS2). As important results, the application of each optimization method is presented, together with a comparison of the methods' effectiveness, efficiency and reliability in determining the best CRS parameters. Subsequently, applying the global search strategies for these parameters with the VFSA optimization method, which showed the best performance, CRS stacking was performed on the Marmousi data: one CRS stack was produced using two parameters (β0, R_NIP) estimated by global search, and another using all three parameters (β0, R_NIP, R_N), also estimated by global search.
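
A minimal sketch of Very Fast Simulated Annealing (VFSA), the best performer reported above, using Ingber's temperature-dependent Cauchy-like move generation; the three-parameter objective below is an invented stand-in for the coherence (semblance) measure, which would require the seismic data, and the parameter bounds are assumptions.

```python
import math
import random

random.seed(7)

def neighbor(x, lo, hi, T):
    """VFSA move: Cauchy-like step whose width shrinks with temperature."""
    u = random.random()
    y = math.copysign(T * ((1 + 1 / T) ** abs(2 * u - 1) - 1), u - 0.5)
    return min(max(x + y * (hi - lo), lo), hi)

def stand_in_coherence(p):                       # invented; peak at (0.2, 1500, 3000)
    beta0, r_nip, r_n = p
    return -((beta0 - 0.2) ** 2 + ((r_nip - 1500) / 1000) ** 2
             + ((r_n - 3000) / 2000) ** 2)

bounds = [(-1.0, 1.0), (100.0, 5000.0), (100.0, 10000.0)]   # (beta0, R_NIP, R_N), assumed
x = [random.uniform(lo, hi) for lo, hi in bounds]
best, best_val = x[:], stand_in_coherence(x)
T0, c = 1.0, 5.0
for k in range(1, 5001):
    T = T0 * math.exp(-c * k ** (1 / 3))         # VFSA cooling schedule for D = 3
    cand = [neighbor(xi, lo, hi, T) for xi, (lo, hi) in zip(x, bounds)]
    dv = stand_in_coherence(cand) - stand_in_coherence(x)
    if dv > 0 or random.random() < math.exp(dv / max(T, 1e-12)):
        x = cand                                 # Metropolis acceptance
    if stand_in_coherence(x) > best_val:
        best, best_val = x[:], stand_in_coherence(x)
print([round(v, 1) for v in best], round(best_val, 4))
```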