963 results for random search algorithms
Abstract:
Species-specific Random Amplified Polymorphic DNA-Polymerase Chain Reaction (RAPD-PCR) markers were used to identify four species related to Anopheles (Nyssorhynchus) albitarsis Lynch Arribálzaga from 12 sites in Brazil and 4 in Venezuela. In a previous study (Wilkerson et al. 1995), which included sites in Paraguay and Argentina, these four species were designated "A", "B", "C" and "D". It was hypothesized that species A is An. (Nys.) albitarsis, species B is undescribed, species C is An. (Nys.) marajoara Galvão and Damasceno, and species D is An. (Nys.) deaneorum Rosa-Freitas. Species D, previously characterized by RAPD-PCR from a small sample from northern Argentina and southern Brazil, is reported here from the type locality of An. (Nys.) deaneorum, Guajará-Mirim, state of Rondônia, Brazil. Species C and D were found by RAPD-PCR to be sympatric at Costa Marques, state of Rondônia, Brazil. Species A and C have yet to be encountered at the same locality. The RAPD markers for species C were found to be conserved over 4,620 km, from Iguape, state of São Paulo, Brazil, to the rio Socuavo, state of Zulia, Venezuela. RAPD-PCR was determined to be an effective means of identifying unknown species within this species complex.
Abstract:
I study large random assignment economies with a continuum of agents and a finite number of object types. I consider the existence of weak priorities that discriminate among agents with respect to their rights concerning the final assignment. Respect for priorities ex ante (ex-ante stability) usually precludes ex-ante envy-freeness. I therefore define a new concept of fairness, called no unjustified lower chances: priorities with respect to one object type cannot justify different achievable chances regarding another object type. This concept, which applies to the assignment mechanism rather than to the assignment itself, implies ex-ante envy-freeness among agents of the same priority type. I propose a variation of Hylland and Zeckhauser's (1979) pseudomarket that meets ex-ante stability, no unjustified lower chances, and ex-ante efficiency among agents of the same priority type. Assuming enough richness in preferences and priorities, the converse is also true: any random assignment with these properties could be achieved through an equilibrium in a pseudomarket with priorities. If priorities are acyclical (the ordering of agents is the same for each object type), this pseudomarket achieves ex-ante efficient random assignments.
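Since the mechanism here is a pseudomarket, a small simulation may help fix ideas. The following is a minimal, hypothetical Python sketch of price tâtonnement in a Hylland-Zeckhauser-style pseudomarket without priorities; the function name, the simplistic corner-solution demand rule, and all parameter values are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def pseudomarket_prices(utilities, supply, budget=1.0, step=0.05, iters=5000):
    """Tatonnement sketch for a Hylland-Zeckhauser-style pseudomarket.

    utilities: (n_agents, n_objects) von Neumann-Morgenstern utilities.
    supply:    (n_objects,) number of copies of each object type.
    Agents spend a common budget on probability shares; prices of
    over-demanded object types rise until excess demand is roughly zero.
    """
    n, m = utilities.shape
    prices = np.ones(m)
    for _ in range(iters):
        demand = np.zeros((n, m))
        for i in range(n):
            # Corner solution of the agent's linear program: put the whole
            # budget on the best utility-per-price object, capped at
            # probability one (ties and spillover ignored for brevity).
            best = int(np.argmax(utilities[i] / prices))
            demand[i, best] = min(1.0, budget / prices[best])
        excess = demand.sum(axis=0) - supply
        prices = np.maximum(1e-6, prices + step * excess)
    return prices, demand
```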
Abstract:
Graph pebbling is a network model for studying whether or not a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding if the pebbling number is at most k is Π₂ᴾ-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds and sometimes exact values for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than was possible with previous algorithms, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than given in previously existing arguments (certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, Lemke squared, and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
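For readers new to pebbling, a brute-force reference implementation clarifies the definitions, although it is exponential and nothing like the paper's LP-based Weight Function Lemma. The example graph, function names, and exhaustive search strategy below are illustrative assumptions.

```python
from itertools import combinations_with_replacement

# A small example graph (the 4-cycle), given as an adjacency list.
GRAPH = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}

def solvable(dist, target):
    """Can pebbling moves put a pebble on `target`? A move removes two
    pebbles from a vertex and places one on a neighbour; `dist` is a
    sequence of pebble counts indexed by vertex."""
    seen, stack = set(), [tuple(dist)]
    while stack:
        d = stack.pop()
        if d[target] >= 1:
            return True
        if d in seen:
            continue
        seen.add(d)
        for v, k in enumerate(d):
            if k >= 2:
                for u in GRAPH[v]:
                    nd = list(d)
                    nd[v] -= 2
                    nd[u] += 1
                    stack.append(tuple(nd))
    return False

def pebbling_number():
    """Smallest t such that every distribution of t pebbles can reach
    every target vertex (exponential; tiny graphs only)."""
    t = 1
    while True:
        if all(solvable([c.count(v) for v in GRAPH], tgt)
               for c in combinations_with_replacement(GRAPH, t)
               for tgt in GRAPH):
            return t
        t += 1

print(pebbling_number())  # 4 for the 4-cycle
```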
Abstract:
This paper discusses the use of probabilistic or randomized algorithms for solving combinatorial optimization problems. Our approach employs non-uniform probability distributions to add a biased random behavior to classical heuristics, so that a large set of alternative good solutions can be quickly obtained in a natural way and without complex configuration processes. This procedure is especially useful in problems where properties such as non-smoothness or non-convexity lead to a highly irregular solution space, for which traditional optimization methods, both exact and approximate, may fail to reach their full potential. The results obtained are promising enough to suggest that randomizing classical heuristics is a powerful method that can be successfully applied in a variety of cases.
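As an illustration of the idea, here is a hypothetical Python sketch of one common instance of this approach: a nearest-neighbour TSP heuristic whose greedy choice is skewed by a geometric distribution, so that repeated runs quickly generate many distinct good tours. The names and the skew parameter are my assumptions, not specifics from the paper.

```python
import math
import random

def biased_random_tour(dist, beta=0.3, seed=None):
    """Nearest-neighbour tour with geometric biased randomization.

    dist: symmetric distance matrix (list of lists). Instead of always
    picking the closest unvisited city, pick the k-th closest with
    probability roughly proportional to beta * (1 - beta)**k.
    """
    rng = random.Random(seed)
    n = len(dist)
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        ranked = sorted(unvisited, key=lambda j: dist[tour[-1]][j])
        # Sample a geometric rank, clipped to the candidate list length.
        k = min(int(math.log(rng.random()) / math.log(1.0 - beta)),
                len(ranked) - 1)
        nxt = ranked[k]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Repeated calls with different seeds yield alternative good solutions.
```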
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
The work is divided into two clearly differentiated blocks, both related to microarray analysis. The first block consists of grouping the sample conditions of all the genes into groups or clusters. These groupings are obtained by applying the following clustering algorithms directly to the microarray: SOM, PAM, SOTA and HC, and by applying the following algorithms to the microarray scaled with PC and MDS: SOM, PAM, SOTA, HC and K-MEANS. The second block consists of performing a gene search based on the confidence intervals of each cluster of the active grouping. The search conditions set by the user are validated for each cluster against the baseline value 0 and against the remaining clusters; confidence intervals are used for these validations. These two blocks are integrated into an existing web application, the PCOPGene applet, hosted on the server http://revolutionresearch.uab.es.
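To make the pipeline concrete, here is a minimal Python sketch of two of its branches using scikit-learn. SOM and SOTA have no scikit-learn implementation, so only HC on the raw matrix and K-MEANS on principal-component-reduced data are shown; all function names and parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering

def cluster_conditions(expr, n_clusters=3, n_components=2, seed=0):
    """Cluster the sample conditions of a microarray.

    expr: (n_conditions, n_genes) expression matrix. Returns hierarchical
    (HC) labels on the raw matrix and k-means labels on the matrix scaled
    down to its first principal components.
    """
    hc = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(expr)
    reduced = PCA(n_components=n_components).fit_transform(expr)
    km = KMeans(n_clusters=n_clusters, n_init=10,
                random_state=seed).fit_predict(reduced)
    return hc, km

# Per-cluster confidence intervals (mean +/- 1.96 * standard error) can
# then drive a gene search of the kind described above.
def cluster_ci(values, z=1.96):
    values = np.asarray(values, dtype=float)
    half = z * values.std(ddof=1) / np.sqrt(len(values))
    return values.mean() - half, values.mean() + half
```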
Abstract:
In a seminal paper [10], Weitz gave a deterministic fully polynomial approximation scheme for counting exponentially weighted independent sets (which is the same as approximating the partition function of the hard-core model from statistical physics) in graphs of degree at most d, up to the critical activity for the uniqueness of the Gibbs measure on the infinite d-regular tree. More recently Sly [8] (see also [1]) showed that this is optimal in the sense that if there is an FPRAS for the hard-core partition function on graphs of maximum degree d for activities larger than the critical activity on the infinite d-regular tree then NP = RP. In this paper we extend Weitz's approach to derive a deterministic fully polynomial approximation scheme for the partition function of general two-state anti-ferromagnetic spin systems on graphs of maximum degree d, up to the corresponding critical point on the d-regular tree. The main ingredient of our result is a proof that for two-state anti-ferromagnetic spin systems on the d-regular tree, weak spatial mixing implies strong spatial mixing. This in turn uses a message-decay argument which extends a similar approach proposed recently for the hard-core model by Restrepo et al. [7] to the case of general two-state anti-ferromagnetic spin systems.
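The tree recursion at the heart of these correlation-decay arguments is short enough to state in code. Below is a minimal Python sketch for the hard-core model on the d-regular tree; the function name and the empty-boundary starting condition are illustrative assumptions.

```python
def hardcore_ratio(lam, d, depth):
    """Iterate the hard-core ratio recursion R -> lam / (1 + R)**(d - 1)
    on the d-regular tree, starting from an empty boundary.

    R is P(root occupied) / P(root unoccupied); the even and odd iterates
    converge together exactly when the Gibbs measure is unique.
    """
    r = 0.0
    for _ in range(depth):
        r = lam / (1.0 + r) ** (d - 1)
    return r

d = 3
lam_c = (d - 1) ** (d - 1) / (d - 2) ** d  # critical activity, = 4 for d = 3
for lam in (0.5 * lam_c, 2.0 * lam_c):
    # Below lam_c consecutive iterates agree; above it they oscillate.
    print(round(hardcore_ratio(lam, d, 60), 4),
          round(hardcore_ratio(lam, d, 61), 4))
```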
Abstract:
This article empirically analyzes the main existing theories of income and population city growth: increasing returns to scale, locational fundamentals, and random growth. To do this we implement a threshold nonlinearity test that extends standard linear growth regression models to a dataset of urban, climatological and macroeconomic variables on 1,175 U.S. cities. Our analysis reveals the existence of increasing returns when per-capita income levels are beyond $19,264. Despite this, income growth is mostly explained by social and locational fundamentals. Population growth also exhibits two distinct equilibria determined by a threshold value of 116,300 inhabitants, beyond which city population grows at a higher rate. Income and population growth do not go hand in hand, implying an optimal level of population beyond which income growth stagnates or deteriorates.
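A threshold regression of this kind can be sketched in a few lines: fit separate linear models on the two regimes and grid-search the split point. The Python sketch below is a generic illustration with assumed names and a simple sum-of-squared-errors criterion, not the paper's exact test.

```python
import numpy as np

def threshold_regression(x, y, q, grid=None):
    """Fit y = a + b*x separately on the {q <= tau} and {q > tau} regimes,
    grid-searching the threshold tau that minimizes total squared error.

    x, y, q are 1-D arrays; q is the threshold variable (e.g. per-capita
    income). Returns the best tau and the two OLS coefficient pairs.
    """
    if grid is None:
        grid = np.quantile(q, np.linspace(0.15, 0.85, 50))
    best = (np.inf, None, None, None)
    for tau in grid:
        lo, hi = q <= tau, q > tau
        if lo.sum() < 3 or hi.sum() < 3:  # keep both regimes estimable
            continue
        sse, coefs = 0.0, []
        for mask in (lo, hi):
            X = np.column_stack([np.ones(mask.sum()), x[mask]])
            beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
            sse += ((y[mask] - X @ beta) ** 2).sum()
            coefs.append(beta)
        if sse < best[0]:
            best = (sse, tau, coefs[0], coefs[1])
    return best[1], best[2], best[3]
```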
Abstract:
Random mating is the null model central to population genetics. One assumption behind random mating is that individuals mate an infinite number of times. This is obviously unrealistic. Here we show that when each female mates a finite number of times, the effective size of the population is substantially decreased.
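The effect is easy to demonstrate by simulation. The following Python sketch uses a toy diploid model of my own devising (not the paper's derivation): each female mates a fixed number of times, and the variance effective size is recovered from the variance of the per-generation allele-frequency change.

```python
import numpy as np

def effective_size(n_pairs, n_matings, p0=0.5, reps=2000, seed=0):
    """Monte Carlo sketch of the variance effective size N_e.

    n_pairs females and n_pairs males each carry two alleles at frequency
    p0; every female has two offspring whose fathers are drawn from her
    n_matings mates. N_e is estimated from Var(dp) = p(1 - p) / (2 N_e).
    """
    rng = np.random.default_rng(seed)
    dps = np.empty(reps)
    for r in range(reps):
        moms = rng.random((n_pairs, 2)) < p0
        dads = rng.random((n_pairs, 2)) < p0
        p_par = 0.5 * (moms.mean() + dads.mean())
        alleles = []
        for f in range(n_pairs):
            mates = rng.choice(n_pairs, size=n_matings, replace=False)
            for _ in range(2):  # two offspring per female
                sire = rng.choice(mates)
                alleles.append(moms[f, rng.integers(2)])
                alleles.append(dads[sire, rng.integers(2)])
        dps[r] = np.mean(alleles) - p_par
    return p0 * (1 - p0) / (2.0 * dps.var())

# Fewer matings per female -> larger Var(dp) -> smaller N_e.
print(effective_size(50, 1), effective_size(50, 10))
```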
Abstract:
The paper presents an approach for mapping of precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce simultaneously both the spatial patterns and the extreme values. This objective is best reached by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies have been considered: first, a 2-day accumulation of heavy precipitation and second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) of a neural network for the reproduction of extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of radar Doppler imagery when used as external drift or as input in the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors as well as analyzing data patterns.
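Of the methods mentioned, ordinary kriging is the most compact to sketch. Below is a minimal Python implementation with a fixed exponential covariance model; in practice the variogram is fitted to the data first, and all parameter values here are placeholder assumptions.

```python
import numpy as np

def ordinary_kriging(xy, z, xy_new, sill=1.0, length=50.0, nugget=1e-9):
    """Ordinary kriging with covariance C(h) = sill * exp(-h / length).

    xy: (n, 2) station coordinates, z: (n,) observed values (e.g. rain
    accumulations), xy_new: (m, 2) prediction locations.
    """
    def cov(a, b):
        h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-h / length)

    n = len(xy)
    # Kriging system: data covariances plus a Lagrange row/column that
    # forces the weights at each prediction point to sum to one.
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(xy, xy) + nugget * np.eye(n)
    K[n, n] = 0.0
    k = np.ones((n + 1, len(xy_new)))
    k[:n] = cov(xy, xy_new)
    w = np.linalg.solve(K, k)
    return w[:n].T @ z
```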
Abstract:
Forensic examinations of ink have been performed since the beginning of the 20th century. Since the 1960s, the International Ink Library, maintained by the United States Secret Service, has supported those analyses. Until 2009, the search and identification of inks were essentially performed manually. This paper describes the results of a project designed to improve the analytical and search processes for ink samples. The project focused on the development of improved standardization procedures to ensure the best possible reproducibility between analyses run on different HPTLC plates. The successful implementation of this new calibration method enabled the development of mathematical algorithms and of a software package to complement the existing ink library.
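The paper does not spell out its algorithms, but a library search of this kind typically reduces to ranking reference inks by profile similarity. Here is a generic, hypothetical Python sketch using Pearson correlation between calibrated HPTLC intensity profiles; every name in it is an assumption.

```python
import numpy as np

def search_ink_library(query, library, top=5):
    """Rank reference inks by Pearson correlation with a questioned ink.

    query:   (p,) calibrated densitometric profile of the questioned ink.
    library: dict mapping ink id -> (p,) reference profile, assumed to be
             measured on plates standardized by the same calibration.
    """
    def z(v):
        v = np.asarray(v, dtype=float)
        return (v - v.mean()) / v.std()

    q = z(query)
    scores = {name: float(q @ z(ref)) / len(q)
              for name, ref in library.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top]
```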