8 results for Scientific Algorithms. Evolutionary Computation. Metaheuristics. Car Renter Salesman Problem
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
This work addresses the evolution of an artificial neural network (ANN) to assist in the problem of indoor robotic localization. We investigate the design and construction of an autonomous localization system based on information gathered from wireless networks (WN). The article focuses on the evolved ANN, which provides the position of a robot in space as Cartesian coordinates, contributing to the evolutionary robotics research area and demonstrating its practical viability. The proposed system was tested in several experiments, evaluating not only the impact of different evolutionary computation parameters but also the role of the transfer functions in the evolution of the ANN. Results show that slight variations in the parameters lead to significant differences in the evolution process and, therefore, in the accuracy of the robot's position.
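To illustrate the general approach, the following is a minimal neuroevolution sketch, assuming RSSI readings from wireless access points as ANN inputs and a plain generational genetic algorithm. The network shape, fitness function, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's system): evolving the weights of a small
# feed-forward ANN that maps Wi-Fi signal strengths (RSSI) to an (x, y) position.
import numpy as np

rng = np.random.default_rng(0)
N_AP, HIDDEN = 4, 8                        # hypothetical: 4 access points, 8 hidden units
GENOME = N_AP * HIDDEN + HIDDEN * 2        # number of weights (biases omitted for brevity)

def forward(w, rssi, transfer=np.tanh):
    """Feed-forward pass; `transfer` stands in for the transfer functions under study."""
    w1 = w[:N_AP * HIDDEN].reshape(N_AP, HIDDEN)
    w2 = w[N_AP * HIDDEN:].reshape(HIDDEN, 2)
    return transfer(rssi @ w1) @ w2        # -> estimated (x, y) position

def fitness(w, rssi_samples, positions):
    """Negative mean localization error: higher is better."""
    pred = forward(w, rssi_samples)
    return -np.mean(np.linalg.norm(pred - positions, axis=1))

# Toy calibration data: RSSI readings paired with known robot positions.
rssi = rng.normal(size=(100, N_AP))
pos = rng.uniform(-1.0, 1.0, size=(100, 2))

# Plain generational GA: truncation selection plus Gaussian mutation.
pop = rng.normal(size=(50, GENOME))
for _ in range(200):
    scores = np.array([fitness(w, rssi, pos) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]                # keep the 10 best genomes
    children = parents[rng.integers(0, 10, size=40)]
    children = children + rng.normal(scale=0.1, size=children.shape)
    pop = np.vstack([parents, children])

best = max(pop, key=lambda w: fitness(w, rssi, pos))
print("mean position error of best network:", -fitness(best, rssi, pos))
```

In this setup, the abstract's observation that slight parameter variations matter corresponds to, e.g., the mutation scale and selection pressure above, and the choice of `transfer` corresponds to the transfer functions the study evaluates.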
Abstract:
This talk combines concepts of the copying of manuscripts in the age before mechanical reproduction (as described in W. Benjamin's famous Artwork essay), of the emergence of non-vertical evolution (as found in textual ‘contamination’ and in horizontal gene transfer alike), and of the electronic reproduction of such phenomena (as presented in scholarly digital editions and phylogenetic trees). The guiding idea is that ‘copying’, ‘emergence’, and ‘(digital) reproduction’ enable variation depending on particular physical or biological forms, on contextual or environmental conditions, as well as on user habits or receptive fields. Is it possible to develop a theory of copying and reproduction on this basis? The material of the talk will be drawn from the Parzival-Project, a critical electronic edition of an Arthurian romance composed by the German poet Wolfram von Eschenbach shortly after 1200 and transmitted in over eighty witnesses.
Abstract:
Background: The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms, which allow one to obtain parameter posterior distributions from simulations without requiring likelihood computations. Results: Here we present ABCtoolbox, a series of open-source programs to perform Approximate Bayesian Computation (ABC). It implements various ABC algorithms, including rejection sampling, MCMC without likelihood, a particle-based sampler, and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population-genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can interact with most simulation and summary-statistics computation programs. The usability of ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates, and to find that males show smaller population sizes but much higher levels of migration than females. Conclusion: ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis, from sampling parameters from prior distributions and simulating data, through computing summary statistics, estimating posterior distributions, and choosing among models, to validating the estimation procedure and visualizing the results.
Abstract:
Oceanic islands have been a testing ground for evolutionary theory, but here we focus on the possibilities for evolutionary study created by offshore islands. These can be colonized through various means and by a wide range of species, including those with low dispersal capabilities. We use morphology, modern and ancient sequences of cytochrome b (cytb), and microsatellite genotypes to examine the colonization history and evolutionary change associated with occupation of the Orkney archipelago by the common vole (Microtus arvalis), a species found in continental Europe but not in Britain. Among possible colonization scenarios, our results are most consistent with human introduction at least 5100 years BP (confirmed by radiocarbon dating). We used approximate Bayesian computation of population history to infer the coast of Belgium as the possible source, and estimated the evolutionary timescale using a Bayesian coalescent approach. We showed substantial morphological divergence of the island populations, including a size increase presumably driven by selection, and reduced microsatellite variation likely reflecting founder events and genetic drift. More surprisingly, our results suggest that a recent and widespread cytb replacement event in the continental source area purged cytb variation there, whereas the ancestral diversity is largely retained on the colonized islands as a genetic ‘ark’. The replacement event in continental M. arvalis was probably triggered by anthropogenic causes (land-use change). Our studies illustrate that small offshore islands can act as field laboratories for studying various evolutionary processes over relatively short timescales, informing about the mainland source area as well as the islands themselves.
Abstract:
Well-known data-mining algorithms rely on inputs in the form of pairwise similarities between objects. For large datasets it is computationally infeasible to perform all pairwise comparisons. We therefore propose a novel approach that uses approximate Principal Component Analysis to efficiently identify groups of similar objects. The effectiveness of the approach is demonstrated in the context of binary classification using the supervised normalized cut as a classifier. For large datasets from the UCI repository, the approach significantly improves run times with minimal loss in accuracy.
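A minimal sketch of the general idea, not the paper's exact algorithm: use a randomized (approximate) PCA to project objects into a few dimensions and form candidate groups there, so that exact pairwise comparisons are only needed within a group. The dataset, sketch size, and bucketing rule below are illustrative assumptions.

```python
# Approximate PCA via a randomized range finder (Halko et al. style), then a
# cheap grouping step in the low-dimensional space. This avoids the full
# O(n^2) pairwise-similarity sweep over all objects.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(10_000, 200))             # toy dataset: 10k objects, 200 features
X -= X.mean(axis=0)                            # center the data for PCA

k = 5                                          # target number of components
Omega = rng.normal(size=(X.shape[1], k + 5))   # oversampled random test matrix
Q, _ = np.linalg.qr(X @ Omega)                 # orthonormal basis for the range of X
B = Q.T @ X                                    # small (k+5) x d matrix
_, _, Vt = np.linalg.svd(B, full_matrices=False)
Z = X @ Vt[:k].T                               # approximate principal-component scores

# Quantize the leading component into buckets; only objects sharing a bucket
# remain candidates for exact pairwise comparison.
buckets = np.floor(Z[:, 0] / 0.5).astype(int)
print("candidate groups:", len(np.unique(buckets)))
```

The payoff is that the expensive similarity computations are confined to objects that the cheap projection already marks as plausibly similar, which matches the run-time/accuracy trade-off the abstract reports.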
Abstract:
BACKGROUND Lung clearance index (LCI), a marker of ventilation inhomogeneity, is elevated early in children with cystic fibrosis (CF). However, in infants with CF, LCI values are found to be normal, although structural lung abnormalities are often detectable. We hypothesized that this discrepancy is due to inadequate algorithms in the available software package. AIM Our aim was to challenge the validity of these software algorithms. METHODS We compared multiple-breath washout (MBW) results of the current software algorithms (automatic modus) to refined algorithms (manual modus) in 17 asymptomatic infants with CF and 24 matched healthy term-born infants. The main difference between the two analysis methods lies in the calculation of the molar mass differences that the system uses to define the completion of the measurement. RESULTS In infants with CF, the refined manual modus revealed clearly elevated LCI above 9 in 8 out of 35 measurements (23%), all of which showed LCI values below 8.3 using the automatic modus (paired t-test comparing the means, P < 0.001). Healthy infants showed normal LCI values with both analysis methods (n = 47, paired t-test, P = 0.79). The most relevant cause of falsely normal LCI values in infants with CF using the automatic modus was that the end-of-test was incorrectly recognized too early during the washout. CONCLUSION We recommend the use of the manual modus for the analysis of MBW outcomes in infants in order to obtain more accurate results. This will allow appropriate use of infant lung function results for clinical and scientific purposes.
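For context, a hedged sketch of how LCI is conventionally derived from breath-by-breath washout data, and why a premature end-of-test cut-off yields falsely normal values. This illustrates the standard MBW convention (washout complete when the end-tidal tracer concentration falls below 2.5%, i.e. 1/40th, of its starting value; LCI = cumulative expired volume divided by FRC); it is not the vendor software analyzed in the study, and all names and numbers are assumptions.

```python
# Illustrative LCI calculation from per-breath washout data (not vendor code).
import numpy as np

def lci(end_tidal_conc, expired_volumes, frc, threshold_fraction=0.025):
    """Return LCI, or None if the washout never reaches the end-of-test threshold."""
    target = threshold_fraction * end_tidal_conc[0]
    for i, c in enumerate(end_tidal_conc):
        if c <= target:
            return np.sum(expired_volumes[: i + 1]) / frc
    return None

# Toy washout: tracer concentration decaying over 20 breaths, ~0.05 L expired per breath.
conc = 0.04 * np.exp(-0.3 * np.arange(20))
vols = np.full(20, 0.05)

print("LCI, conventional end-of-test (2.5% threshold):", lci(conc, vols, frc=0.08))
# Mimicking an algorithm that declares end-of-test too early (here: already at 10%
# of the starting concentration) yields a lower, falsely 'normal' LCI:
print("LCI, premature end-of-test:", lci(conc, vols, frc=0.08, threshold_fraction=0.10))
```

The same mechanism underlies the abstract's finding: cutting the washout short truncates the cumulative expired volume and systematically underestimates LCI.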
Abstract:
Sedimentary sequences in ancient or long-lived lakes can reach several thousand meters in thickness and often provide an unrivalled perspective of the lake's regional climatic, environmental, and biological history. Over the last few years, deep-drilling projects in ancient lakes have become increasingly multi- and interdisciplinary, as seismological, sedimentological, biogeochemical, climatic, environmental, paleontological, and evolutionary information, among others, can be obtained from sediment cores. However, these multi- and interdisciplinary projects pose several challenges. The scientists involved typically approach problems from different scientific perspectives and backgrounds, and setting up the program requires clear communication and the alignment of interests. One of the most challenging tasks, besides the actual drilling operation, is to link diverse datasets with varying resolution, data quality, and age uncertainties in order to answer interdisciplinary questions synthetically and coherently. These problems are especially relevant when secondary data, i.e., datasets obtained independently of the drilling operation, are incorporated in analyses. Nonetheless, the inclusion of secondary information, such as isotopic data from fossils found in outcrops or genetic data from extant species, may help to achieve synthetic answers. Recent technological and methodological advances in paleolimnology are likely to increase the possibilities of integrating secondary information. Some of the new approaches have started to revolutionize scientific drilling in ancient lakes, but at the same time they also add a new layer of complexity to the generation and analysis of sediment-core data. The enhanced opportunities presented by new scientific approaches to studying the paleolimnological history of these lakes therefore come at the expense of greater logistical, communication, and analytical effort. Here we review the types of data that can be obtained in ancient-lake drilling projects and the analytical approaches that can be applied to empirically and statistically link diverse datasets, creating an integrative perspective on geological and biological data. In doing so, we highlight the strengths and potential weaknesses of new methods and analyses, and provide recommendations for future interdisciplinary deep-drilling projects.