952 results for Evolutionary algorithms
Abstract:
We describe an investigation into how Massey University's Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide-based pollen samples. Given the laboriousness of purely manual image acquisition and identification, it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University's pollen reference collection (2890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. In addition to the Classifynder's native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree and random forest classifiers on these data, with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples. © 2013 AIP Publishing LLC.
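As a rough illustration of the classifier comparison described in this abstract, the sketch below cross-validates the four alternative models with scikit-learn. The feature matrix X and labels y are random placeholders for the Classifynder image features and species labels, and the hyperparameters are generic defaults rather than those used in the article.

```python
# Minimal sketch: compare alternative classifiers on pollen feature vectors.
# X (n_grains x n_features) and y (species labels) are random placeholders for
# the Classifynder feature set; they are NOT the data used in the article.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 43))          # e.g. 400 grains, 43 image features
y = rng.integers(0, 4, size=400)        # e.g. 4 species

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf", C=1.0),
    "Decision tree": DecisionTreeClassifier(max_depth=5),
    "Random forest": RandomForestClassifier(n_estimators=200),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy {scores.mean():.3f}")
```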
Abstract:
China has experienced considerable economic growth since 1978, accompanied by unprecedented urbanization and, more recently, by rising urban housing and land banking issues. One such issue is land hoarding - where real estate developers purchase land and hold it unused in a rising market for a lucrative future sale, often several years later. This practice is outlawed in China, where land use is controlled by increasingly strengthened government policies and inspectors. Despite this, land hoarding continues apace, with the main culprits being developers and inspectors who subvert the policy. This resembles a game between two players - the inspector and the developer - which provides the setting for this paper in developing an evolutionary game theory model to provide insights into the dilemmas faced by the players. The logic and dilemma of the land banking strategy and illegal land banking issues are analysed, along with the land inspector's role, from a game theory perspective by determining the replicator dynamics and evolutionarily stable strategies under the various conditions that the players face. The major factors influencing the actions of land inspectors are the costs of inspection (whether strict or indolent), conflict costs, and the income from and penalties for corruption. From this, it is shown that, when the net loss from corruption (income from corruption minus the penalties for corruption and the cost of strict inspections) is less than the cost of strict inspections, the final evolutionarily stable strategy of the inspectors is to carry out indolent inspections. Then, regardless of how severely developers are penalised for hoarding, the evolutionarily stable strategy for the developer is to hoard. The implications for land use control mechanisms and associated developer-inspector actions and counteractions are then examined in the light of the model's properties.
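The abstract does not give the payoff matrix, so the sketch below only illustrates the general replicator-dynamics machinery for an inspector-developer game; all payoff parameters are hypothetical placeholders, chosen so that the stated outcome (indolent inspection and hoarding) emerges.

```python
# Sketch of two-population replicator dynamics for the inspector-developer game.
# x = share of inspectors inspecting strictly, y = share of developers hoarding.
# Payoff parameters are hypothetical placeholders, not the paper's values.
c_strict, c_indolent = 4.0, 1.0   # inspection costs (strict vs indolent)
g_corrupt, penalty   = 3.0, 2.0   # inspector's corruption income and penalty
p_hoard, fine        = 5.0, 3.0   # developer's hoarding profit and fine if caught

def payoffs(x, y):
    # Expected payoff of each pure strategy against the current population mix.
    insp_strict   = -c_strict
    insp_indolent = -c_indolent + y * (g_corrupt - penalty)
    dev_hoard     = p_hoard - x * fine
    dev_honest    = 0.0
    return insp_strict, insp_indolent, dev_hoard, dev_honest

x, y, dt = 0.5, 0.5, 0.01
for _ in range(20000):
    s, i, h, n = payoffs(x, y)
    x += dt * x * (1 - x) * (s - i)   # replicator equation for inspectors
    y += dt * y * (1 - y) * (h - n)   # replicator equation for developers
print(f"long-run shares: strict inspection {x:.2f}, hoarding {y:.2f}")
```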
Abstract:
A coverage algorithm is an algorithm that deploys a strategy for covering all points of a given area using some set of sensors. In the past decades, a great deal of research has gone into the development of coverage algorithms. Initially, the focus was on coverage of structured and semi-structured indoor areas, but with time, the development of better sensors and the introduction of GPS, the focus has turned to outdoor coverage. Due to the unstructured nature of an outdoor environment, covering an outdoor area with all its obstacles while simultaneously performing reliable localization is a difficult task. In this paper, two path planning algorithms suitable for solving outdoor coverage tasks are introduced. The algorithms take into account the kinematic constraints of an under-actuated car-like vehicle, minimize trajectory curvatures, and dynamically avoid detected obstacles in the vicinity, all in real time. We demonstrate the performance of the coverage algorithm in the field by achieving 95% coverage using an autonomous tractor mower without the aid of any absolute localization system or constraints on the physical boundaries of the area.
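The two planners themselves are not detailed in the abstract; as a point of reference, the sketch below generates a plain boustrophedon (back-and-forth) sweep over a rectangular field, the usual baseline that such coverage planners refine. The field size and tool width are arbitrary, and obstacle avoidance and the vehicle's kinematic constraints are deliberately left out.

```python
# Baseline boustrophedon coverage sweep over a rectangular area.
# Field size and tool (cutting deck) width are arbitrary placeholders;
# obstacles and car-like kinematic constraints are not modelled here.
def boustrophedon_waypoints(width, height, tool_width):
    """Return a back-and-forth list of (x, y) waypoints covering the rectangle."""
    waypoints = []
    x, going_up = tool_width / 2, True
    while x <= width:
        y_start, y_end = (0.0, height) if going_up else (height, 0.0)
        waypoints.append((x, y_start))
        waypoints.append((x, y_end))
        x += tool_width          # shift one tool width sideways per pass
        going_up = not going_up  # alternate sweep direction
    return waypoints

if __name__ == "__main__":
    path = boustrophedon_waypoints(width=20.0, height=30.0, tool_width=1.0)
    print(f"{len(path)} waypoints, first passes: {path[:4]}")
```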
Abstract:
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance, population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management actions in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly, via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly, by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.
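A plausible formalisation of the mixed objective (the abstract does not give the exact expression) is a convex combination of a management return $J_{\text{manage}}$ and a learning measure $J_{\text{learn}}$:

$$ J_w = w\, J_{\text{manage}} + (1 - w)\, J_{\text{learn}}, \qquad 0 \le w \le 1, $$

so that $w = 1$ recovers the pure management objective and $w = 0$ the pure learning objective.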
Abstract:
The "Humies" awards are an annual competition held in conjunction with the Genetic and Evolutionary Computation Conference (GECCO), in which cash prizes totalling $10,000 are awarded to the most human-competitive results produced by any form of evolutionary computation published in the previous year. This article describes the gold medal-winning entry from the 2012 "Humies" competition, based on the LUDI system for playing, evaluating and creating new board games. LUDI was able to demonstrate human-competitive results in evolving novel board games that have gone on to be commercially published, one of which, Yavalath, has been ranked in the top 2.5% of abstract board games ever invented. Further evidence of human-competitiveness was demonstrated in the evolved games implicitly capturing several principles of good game design, outperforming human designers in at least one case, and going on to inspire a new sub-genre of games.
Abstract:
This article aims to fill the gap of second-order accurate, unconditionally stable schemes for the time-fractional subdiffusion equation. Two fully discrete schemes are first proposed for the time-fractional subdiffusion equation, with space discretized by finite elements and time discretized by fractional linear multistep methods. These two methods are unconditionally stable with a maximum global convergence order of $O(\tau+h^{r+1})$ in the $L^2$ norm, where $\tau$ and $h$ are the step sizes in time and space, respectively, and $r$ is the degree of the piecewise polynomial space. The average convergence rates of the two methods in time are also investigated and shown to be $O(\tau^{1.5}+h^{r+1})$. Furthermore, two improved algorithms are constructed; they are also unconditionally stable and convergent of order $O(\tau^2+h^{r+1})$. Numerical examples are provided to verify the theoretical analysis. Comparisons between the present algorithms and existing ones are included, showing that our numerical algorithms exhibit better performance than the known ones.
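For reference, a minimal statement of the model problem, assuming the standard Caputo form of the time-fractional subdiffusion equation (the abstract does not reproduce it), is

$$ {}^{C}\!D_t^{\alpha} u(x,t) = \Delta u(x,t) + f(x,t), \qquad 0 < \alpha < 1, $$

and, with $u_h^n$ denoting the fully discrete solution, the claimed accuracy of the improved schemes then reads $\| u(t_n) - u_h^n \|_{L^2} \le C\,(\tau^2 + h^{r+1})$.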
Abstract:
Most real-life data analysis problems are difficult to solve using exact methods, due to the size of the datasets and the nature of the underlying mechanisms of the system under investigation. As datasets grow even larger, finding the balance between the quality of the approximation and the computing time of the heuristic becomes non-trivial. One solution is to consider parallel methods, and to use the increased computational power to perform a deeper exploration of the solution space in a similar time. It is, however, difficult to estimate a priori whether parallelisation will provide the expected improvement. In this paper we consider a well-known method, genetic algorithms, and evaluate the behaviour of the classic and parallel implementations on two distinct problem types.
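A minimal sketch of the kind of parallel genetic algorithm discussed above is given below, with fitness evaluations farmed out to a process pool; the sphere-function objective and all parameter values are placeholders, not the problems or settings studied in the paper.

```python
# Sketch of a genetic algorithm whose fitness evaluations run in parallel.
# The fitness function (negated sphere function) and all parameters are
# placeholders for the data-analysis problems discussed in the paper.
import random
from multiprocessing import Pool

GENES, POP, GENS = 20, 64, 50

def fitness(ind):
    return -sum(g * g for g in ind)        # maximise => minimise sum of squares

def mutate(ind, rate=0.1):
    return [g + random.gauss(0, 0.3) if random.random() < rate else g for g in ind]

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def evolve():
    pop = [[random.uniform(-5, 5) for _ in range(GENES)] for _ in range(POP)]
    with Pool() as pool:
        for _ in range(GENS):
            scores = pool.map(fitness, pop)          # parallel evaluation step
            ranked = [ind for _, ind in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[: POP // 2]             # truncation selection
            pop = parents + [mutate(crossover(random.choice(parents),
                                              random.choice(parents)))
                             for _ in range(POP - len(parents))]
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best fitness found:", fitness(best))
```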
Abstract:
In providing simultaneous information on expression profiles for thousands of genes, microarray technologies have, in recent years, been widely used to investigate mechanisms of gene expression. Clustering and classification of such data can, indeed, highlight patterns and provide insight into biological processes. A common approach is to consider the genes and samples of microarray datasets as nodes in a bipartite graph, where edges are weighted, e.g., based on expression levels. In this paper, using a previously evaluated weighting scheme, we focus on search algorithms and evaluate, in the context of biclustering, several variations of Genetic Algorithms. We also introduce a new heuristic, "Propagate", which consists in recursively evaluating neighbour solutions with one more or one fewer active condition. The results obtained on three well-known datasets show that, for a given weighting scheme, optimal or near-optimal solutions can be identified.
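The description of "Propagate" suggests a local search that toggles one active condition at a time and recurses on improvements; the sketch below implements that reading, with a placeholder bicluster score standing in for the paper's weighting scheme.

```python
# Hedged sketch of a "Propagate"-style neighbourhood search: from a current
# selection of active conditions (columns), recursively try solutions with one
# more or one fewer active condition and keep improvements. The scoring
# function is a placeholder, not the paper's weighting scheme.
import numpy as np

def score(data, genes, conds):
    # Placeholder bicluster quality: mean expression of the selected submatrix.
    return data[np.ix_(genes, conds)].mean() if conds else -np.inf

def propagate(data, genes, conds, all_conds):
    best_conds, best = list(conds), score(data, genes, conds)
    for c in all_conds:
        # Toggle condition c: remove it if active, add it otherwise.
        neighbour = [x for x in conds if x != c] if c in conds else conds + [c]
        if not neighbour:
            continue
        s = score(data, genes, neighbour)
        if s > best:
            # Recurse from the improved neighbour (one condition toggled).
            best_conds, best = propagate(data, genes, neighbour, all_conds)
    return best_conds, best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    expr = rng.normal(size=(30, 12))            # toy gene x condition matrix
    genes = list(range(10))                     # a candidate gene set
    conds, quality = propagate(expr, genes, [0, 1], list(range(12)))
    print("active conditions:", sorted(conds), "score:", round(quality, 3))
```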
Abstract:
This thesis in software engineering presents a novel automated framework to identify similar operations utilized by multiple algorithms for solving related computing problems. It provides a new, effective solution for performing multi-application algorithm analysis, employing fundamentally lightweight static analysis techniques compared with state-of-the-art approaches. Significant performance improvements are achieved across the target algorithms by enhancing the efficiency of the identified similar operations in discrete application domains.
Abstract:
The efficient computation of matrix function vector products has become an important area of research in recent times, driven in particular by two important applications: the numerical solution of fractional partial differential equations and the integration of large systems of ordinary differential equations. In this work we consider a problem that combines these two applications, in the form of a numerical solution algorithm for fractional reaction-diffusion equations that, after spatial discretisation, is advanced in time using the exponential Euler method. We focus on the efficient implementation of the algorithm on Graphics Processing Units (GPUs), as we wish to make use of the increased computational power available with this hardware. We compute the matrix function vector products using the contour integration method in [N. Hale, N. Higham, and L. Trefethen. Computing $A^\alpha$, $\log(A)$, and related matrix functions by contour integrals. SIAM J. Numer. Anal., 46(5):2505–2523, 2008]. Multiple levels of preconditioning are applied to reduce the GPU memory footprint and to further accelerate convergence. We also derive an error bound for the convergence of the contour integral method that allows us to pre-determine the appropriate number of quadrature points. Results are presented that demonstrate the effectiveness of the method for large two-dimensional problems, showing a speedup of more than an order of magnitude compared to a CPU-only implementation.
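As a self-contained illustration of the contour-integral idea (not the paper's implementation), the sketch below approximates $f(A)b$ with the trapezoidal rule on a simple circular contour enclosing the spectrum, using a dense direct solver; the paper instead uses the optimised contours of Hale, Higham and Trefethen with preconditioned iterative solves on the GPU.

```python
# Sketch of the contour-integral idea for a matrix function times a vector:
#   f(A) b = (1 / (2*pi*i)) * integral of f(z) (zI - A)^{-1} b dz
# over a contour enclosing the spectrum of A, approximated by the trapezoidal
# rule on a circle. Plain circular contour and dense solves only; the paper
# uses optimised contours and GPU-accelerated, preconditioned solvers.
import numpy as np

def contour_fAb(A, b, f, centre, radius, n_quad=32):
    n = A.shape[0]
    acc = np.zeros(n, dtype=complex)
    for k in range(n_quad):
        theta = 2 * np.pi * k / n_quad
        z = centre + radius * np.exp(1j * theta)       # quadrature node on circle
        dz = 1j * radius * np.exp(1j * theta)          # dz/dtheta
        acc += f(z) * dz * np.linalg.solve(z * np.eye(n) - A, b.astype(complex))
    return (acc / (2j * np.pi) * (2 * np.pi / n_quad)).real

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = -np.diag(rng.uniform(0.5, 2.0, size=8))        # simple stable test matrix
    b = rng.normal(size=8)
    approx = contour_fAb(A, b, np.exp, centre=-1.25, radius=1.5)
    exact = np.diag(np.exp(np.diag(A))) @ b            # exp(A) b for diagonal A
    print("max error:", np.abs(approx - exact).max())
```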
Abstract:
Particle Swarm Optimization (PSO) is a biologically inspired computational search and optimization method based on the social behaviors of birds flocking or fish schooling. Although PSO has been successfully applied to many well-known numerical test problems, it suffers from premature convergence. A number of variations have been developed to address the premature convergence problem and improve the quality of the solutions found by PSO. This study presents a comprehensive survey of the various PSO-based algorithms. As part of this survey, the authors include a classification of the approaches and identify the main features of each proposal. In the last part of the study, some topics within this field that are considered promising areas of future research are listed.
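For readers unfamiliar with the baseline method being surveyed, a minimal PSO sketch is given below; the inertia and acceleration coefficients are commonly used defaults, not values taken from any particular variant in the study, and the sphere function is a toy objective.

```python
# Minimal particle swarm optimisation sketch on a toy objective (the sphere
# function); w, c1, c2 are common defaults, not tied to any surveyed variant.
import numpy as np

def pso(objective, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, size=(n_particles, dim))      # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia + cognitive (personal best) + social (global best).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

if __name__ == "__main__":
    best, best_val = pso(lambda p: float(np.sum(p ** 2)))
    print("best value found:", best_val)
```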
Abstract:
Technological system evolution is marked by the uneven evolution of constituent sub-systems. Consequently, system evolution is hampered by the resulting state of unevenness, or reverse salience, which results from the presence of the sub-system that delivers the lowest level of performance with respect to the other sub-systems, namely, the reverse salient. In this paper, we develop absolute and proportional performance gap measures of reverse salience and, in turn, derive a typology of reverse salients that distinguishes alternative dynamics of change in the evolving system. We subsequently demonstrate the applicability of the measures and the typology through an illustrative empirical study of the PC (personal computer) technological system that functions as a gaming platform. Our empirical analysis demonstrates that patterns of temporal dynamics can be distinguished with the measurement of reverse salience, and that distinct paths of technological system evolution can be identified as different types of reverse salients emerge over time.
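One plausible formalisation of the two measures (the abstract does not give the exact expressions): if $P_f(t)$ is the performance of the best-performing (frontier) sub-system and $P_r(t)$ that of the reverse salient, then

$$ G_{\text{abs}}(t) = P_f(t) - P_r(t), \qquad G_{\text{prop}}(t) = \frac{P_f(t) - P_r(t)}{P_f(t)}, $$

with the absolute gap tracking the raw performance shortfall and the proportional gap normalising it by the frontier's performance level.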
Abstract:
As connectivity analyses become more popular, claims are often made about how the brain's anatomical networks depend on age, sex, or disease. It is unclear how results depend on the tractography methods used to compute fiber networks. We applied 11 tractography methods to high angular resolution diffusion images of the brain (4-Tesla 105-gradient HARDI) from 536 healthy young adults. We parcellated 70 cortical regions, yielding 70×70 connectivity matrices encoding fiber density. We computed popular graph theory metrics, including network efficiency and characteristic path length. Both metrics were robust to the number of spherical harmonics used to model diffusion (4th-8th order). Age effects were detected only for networks computed with the probabilistic Hough transform method, which excludes smaller fibers. Sex and total brain volume affected networks measured with deterministic, tensor-based fiber tracking but not with the Hough method. Each tractography method includes different fibers, which affects inferences made about the reconstructed networks.
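As an illustration of how the two reported graph metrics can be computed from a region-by-region connectivity matrix, the sketch below uses NetworkX; the random symmetric matrix and the sparsification threshold are stand-ins for the fibre-density matrices produced by tractography.

```python
# Sketch: compute characteristic path length and network efficiency from a
# region-by-region connectivity matrix. The random symmetric matrix is a
# stand-in for the 70x70 fibre-density matrices produced by tractography.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
W = rng.random((70, 70))
W = (W + W.T) / 2                       # make the connectivity symmetric
np.fill_diagonal(W, 0)
W[W < 0.6] = 0                          # sparsify: keep stronger "connections"

G = nx.from_numpy_array(W)
# Characteristic path length: average shortest path, with distance = 1/weight
# so that denser fibre connections count as shorter paths.
for u, v, d in G.edges(data=True):
    d["distance"] = 1.0 / d["weight"]
cpl = nx.average_shortest_path_length(G, weight="distance")
eff = nx.global_efficiency(G)           # NetworkX's efficiency is unweighted
print(f"characteristic path length: {cpl:.3f}, global efficiency: {eff:.3f}")
```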
Abstract:
Termites have colonized many habitats and are among the most abundant animals in tropical ecosystems, which they modify considerably through their actions. The timing of their rise in abundance and of the dispersal events that gave rise to modern termite lineages is not well understood. To shed light on termite origins and diversification, we sequenced the mitochondrial genomes of 48 termite species and combined them with 18 previously sequenced termite mitochondrial genomes for phylogenetic and molecular clock analyses using multiple fossil calibrations. The 66 genomes represent most major clades of termites. Unlike previous phylogenetic studies based on fewer molecular data, our phylogenetic tree is fully resolved for the lower termites. The phylogenetic positions of Macrotermitinae and Apicotermitinae are also resolved as the basal groups in the higher termites, but in the crown termitid groups, including Termitinae + Syntermitinae + Nasutitermitinae + Cubitermitinae, the position of some nodes remains uncertain. Our molecular clock tree indicates that the lineages leading to termites and Cryptocercus roaches diverged 170 Ma (153-196 Ma 95% confidence interval [CI]), that modern Termitidae arose 54 Ma (46-66 Ma 95% CI), and that the crown termitid group arose 40 Ma (35-49 Ma 95% CI). This indicates that the distribution of basal termite clades was influenced by the final stages of the breakup of Pangaea. Our inference of ancestral geographic ranges shows that the Termitidae, which includes more than 75% of extant termite species, most likely originated in Africa or Asia, and acquired their pantropical distribution after a series of dispersal and subsequent diversification events.