99 results for Evolutionary Computation
Abstract:
Little is known about the microevolutionary processes shaping within-river population genetic structure in aquatic organisms characterized by high levels of homing and spawning-site fidelity. Using a microsatellite panel, we observed complex and highly significant levels of intra-river population genetic substructure and isolation by distance in the Atlantic salmon stock of a large river system. Two evolutionary models, the member-vagrant and metapopulation models, have been proposed to explain the mechanisms promoting genetic substructuring in Atlantic salmon. We show that both models can be used simultaneously to explain the patterns and levels of population structuring within the Foyle system, and that anthropogenic factors have had a large influence on the contemporary population structure observed. In an analytical development, we found that the frequently used estimator of genetic differentiation, F-ST, routinely underestimated genetic differentiation by a factor of three to four compared with the equivalent statistic, Jost's D-est (Jost 2008), although the two statistics were almost perfectly correlated. Despite ongoing discussion regarding the usefulness of "adjusted" F-ST statistics, we argue that they could be useful for identifying and quantifying qualitative differences between populations, which matter from management and conservation perspectives as an indicator of the existence of biologically significant variation among tributary populations or as a warning of critical environmental damage.
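As a rough illustration of the F-ST versus Jost's D-est contrast described above, the following Python sketch computes Nei's G_ST (a common F-ST estimator) alongside D-est from a matrix of allele frequencies. The function name and the uncorrected (no sample-size adjustment) estimators are simplifying assumptions of this sketch, not the study's exact calculations.

```python
import numpy as np

def gst_and_jost_d(p):
    """Compare Nei's G_ST (a common F_ST estimator) with Jost's D_est,
    given a (subpopulations x alleles) matrix of allele frequencies.
    Toy, unadjusted versions of both statistics."""
    n = p.shape[0]                                 # number of subpopulations
    h_s = np.mean(1.0 - np.sum(p ** 2, axis=1))    # mean within-pop heterozygosity
    p_bar = p.mean(axis=0)                         # pooled allele frequencies
    h_t = 1.0 - np.sum(p_bar ** 2)                 # total heterozygosity
    g_st = (h_t - h_s) / h_t
    d_est = (h_t - h_s) / (1.0 - h_s) * n / (n - 1)
    return g_st, d_est

# Two subpopulations with strongly shifted frequencies at a diverse locus:
p = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.1, 0.1, 0.1, 0.7]])
print(gst_and_jost_d(p))   # G_ST ~ 0.27 vs D_est ~ 0.69
```

When within-population diversity is high, G_ST comes out well below D_est for the same data, mirroring the underestimation the abstract reports.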
Abstract:
Of the early modern writers on the division of labour, Bernard Mandeville alone extended it to all aspects of human activity and emphasised its role in a cumulative process of evolution in which each generation modified and built on what had been achieved by earlier generations. This required exploration of the mechanisms through which new knowledge was developed, as well as the means by which knowledge was transmitted between generations. The present article examines Mandeville's treatment of these mechanisms and explores their theoretical origins. It examines Mandeville's understanding of the role of the division of labour in facilitating discovery and learning, and the role of education and imitation in transmitting social knowledge. It shows that, for Mandeville, innovators were people of ordinary capacity who were alert to the opportunities and challenges of their environment. As a result of specialisation, they possessed tacit knowledge that was actualised in what they did rather than in theoretical propositions. Mandeville's evolutionary thought influenced subsequent writers on political economy and evolutionary social thinkers. It may also have had some influence on Charles Darwin, though it is not, in itself, Darwinian.
Abstract:
We introduce a family of Hamiltonian systems for measurement-based quantum computation with continuous variables. The Hamiltonians (i) are quadratic, and therefore two-body, (ii) are short-ranged, (iii) are frustration-free, and (iv) possess a constant energy gap proportional to the inverse square of the squeezing. Their ground states are the celebrated Gaussian graph states, which are universal resources for quantum computation in the limit of infinite squeezing. These Hamiltonians constitute the basic ingredient for the adiabatic preparation of graph states and thus open new avenues for the physical realization of continuous-variable quantum computing beyond the standard optical approaches. We characterize the correlations in these systems at thermal equilibrium. In particular, we prove that the correlations across any multipartition are contained exactly in its boundary, automatically yielding a correlation area law.
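One standard way to obtain such Hamiltonians is through the nullifiers of finitely squeezed Gaussian graph states. The sketch below, with assumed normalization and C_Z sign conventions that need not match the paper's, shows why the resulting Hamiltonian is quadratic, two-body and frustration-free; the gap scaling quoted above is proved in the paper rather than here.

```latex
% For a graph G with neighbourhoods N(i) and squeezing parameter s,
% define one annihilation-type operator per mode from the nullifiers:
\[
\hat{\delta}_i = \frac{1}{\sqrt{2}}\left[\frac{\hat{x}_i}{s}
  + i\,s\Big(\hat{p}_i-\sum_{j\in N(i)}\hat{x}_j\Big)\right],
\qquad [\hat{\delta}_i,\hat{\delta}_i^{\dagger}]=1 .
\]
% The parent Hamiltonian is the quadratic (hence two-body) sum
\[
H=\sum_i \hat{\delta}_i^{\dagger}\hat{\delta}_i \;\geq\; 0 ,
\]
% which is frustration-free: the finitely squeezed Gaussian graph state
% is annihilated by every \hat{\delta}_i separately, and as s \to \infty
% it approaches the ideal graph state with nullifiers
% \hat{p}_i-\sum_{j\in N(i)}\hat{x}_j.
```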
Abstract:
Particle-in-cell (PIC) simulations of relativistic shocks are in principle capable of predicting the spectra of photons that are radiated incoherently by the accelerated particles. The most direct method evaluates the spectrum using the fields given by the Lienard-Wiechart potentials; however, for relativistic particles this procedure is computationally expensive. Here we present an alternative method that uses the concept of the photon formation length. The algorithm is suitable for evaluating spectra either from particles moving in a specific realization of a turbulent electromagnetic field or from trajectories given as a finite, discrete time series by a PIC simulation. The main advantage of the method is that it identifies the intrinsic spectral features and filters out those that are artifacts of the limited time resolution and finite duration of the input trajectories.
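For context, here is a minimal Python sketch of the direct Lienard-Wiechart evaluation that the abstract identifies as expensive (not the formation-length algorithm itself); the far-field formula and all names are assumptions of this illustration.

```python
import numpy as np

def radiation_spectrum(t, r, beta, n_hat, omegas):
    """Brute-force spectrum from one discrete trajectory, using the
    far-field Lienard-Wiechart amplitude (units with c = 1):
        dI/(dw dOmega) ~ | int n x ((n - b) x db/dt) / (1 - n.b)^2
                           * exp(i w (t - n.r)) dt |^2
    t: (T,), r and beta: (T, 3), n_hat: (3,), omegas: (W,)."""
    beta_dot = np.gradient(beta, t, axis=0)
    phase = t - r @ n_hat                          # retarded-time phase
    denom = (1.0 - beta @ n_hat)[:, None] ** 2
    amp = np.cross(n_hat, np.cross(n_hat - beta, beta_dot)) / denom
    dt = np.gradient(t)
    spec = np.empty(len(omegas))
    for k, w in enumerate(omegas):
        a = np.sum(amp * (np.exp(1j * w * phase) * dt)[:, None], axis=0)
        spec[k] = np.real(a @ a.conj())            # |A(w)|^2, constants dropped
    return spec
```

The cost grows as O(T x W) per particle and viewing direction, which is what makes a formation-length approach attractive for large PIC datasets.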
Abstract:
This paper proposes a method to assess the small-signal stability of a power system network by selectively determining the modal eigenvalues. The method uses an accelerating polynomial transform designed from approximate eigenvalues obtained via a wavelet approximation. Application to the IEEE 14-bus network model produced computational savings of 20% over the QR algorithm.
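The following Python sketch illustrates the general idea of accelerating a selective eigenvalue computation with a polynomial spectral transform. It uses the simplest possible filter, a power of a shifted matrix, and invented names; the paper designs its polynomial from wavelet-derived eigenvalue estimates.

```python
import numpy as np

def poly_accelerated_modes(A, shift, k=4, degree=3, iters=100, seed=0):
    """Subspace iteration on p(A) = (A - shift*I)^degree. Placing the
    shift deep inside the well-damped part of the spectrum makes the
    least-damped (stability-critical) eigenvalues dominate |lam - shift|,
    so the iteration converges to them faster than plain power steps."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    B = A - shift * np.eye(n)
    V, _ = np.linalg.qr(rng.standard_normal((n, k)))
    for _ in range(iters):
        W = V
        for _ in range(degree):            # apply the polynomial implicitly
            W = B @ W
        V, _ = np.linalg.qr(W)
    # Rayleigh-Ritz step: recover eigenvalue estimates of A itself
    return np.linalg.eigvals(V.T @ A @ V)
```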
Abstract:
This paper introduces an algorithm that calculates the dominant eigenvalues (in terms of system stability) of a linear model while avoiding exact computation of the non-dominant eigenvalues. The method first estimates all of the eigenvalues using wavelet-based compression techniques. These estimates are then used to find a suitable invariant subspace such that projection onto this subspace yields a reduced problem containing only the eigenvalues of interest. The proposed algorithm is exemplified by application to a power system model.
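A compact way to sketch the second stage, assuming the wavelet step has already produced rough eigenvalue estimates, is shift-invert Arnoldi: the estimates pick the shift, and the returned Ritz vectors span an invariant subspace carrying only the eigenvalues of interest. The shift-invert choice and all names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.sparse.linalg import eigs

def dominant_invariant_subspace(A, approx_eigs, k=6):
    """Target the least-damped modes suggested by the rough estimates:
    shift-invert Arnoldi returns the k eigenvalues nearest sigma together
    with a basis for the corresponding invariant subspace."""
    sigma = float(np.max(np.real(approx_eigs)))   # least-damped estimate
    vals, vecs = eigs(A, k=k, sigma=sigma, which="LM")
    return vals, vecs
```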
Abstract:
The Bi-directional Evolutionary Structural Optimisation (BESO) method is a numerical topology optimisation method developed for use with finite element analysis. This paper presents a particular application of the BESO method to optimising the energy-absorbing capability of metallic structures. The optimisation objective is to evolve a structural geometry of minimum mass while ensuring that the kinetic energy of an impacting projectile is reduced to a level that prevents perforation. Individual elements in a finite element mesh are deleted when a prescribed damage criterion is exceeded. An energy-absorbing structure subjected to projectile impact will fail once the level of damage results in a critical perforation size, so the optimisation algorithm must be constrained from producing such candidate solutions. To this end, an algorithm to detect perforation was implemented within a BESO framework incorporating a ductile material damage model.
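As a conceptual illustration (with invented names, uniform element volumes, and none of the paper's impact analysis, damage model or perforation check), one hard-kill BESO update can be sketched as:

```python
import numpy as np

def beso_update(sensitivity, current_frac, target_frac, er=0.02):
    """One bi-directional hard-kill step: rank all elements (solid and
    void alike) by sensitivity, keep the top-ranked fraction, and let
    that fraction evolve toward the target mass at rate `er`. Because
    the ranking covers every element, removed material can return."""
    frac = max(current_frac * (1.0 - er), target_frac)
    thresh = np.quantile(sensitivity, 1.0 - frac)
    return (sensitivity >= thresh).astype(float)   # 1 = solid, 0 = void
```

In the application described above, each candidate design would additionally be re-analysed under projectile impact, with elements deleted by the damage criterion and the design rejected once the perforation detector fires.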
Abstract:
In this paper we present a design methodology for algorithm/architecture co-design of a voltage-scalable, process-variation-aware motion estimator based on significance-driven computation. The fundamental premise of our approach is that not all computations are equally significant in shaping the output response of video systems. We use a statistical technique to identify the significant and not-so-significant computations at the algorithmic level and subsequently change the underlying architecture such that the significant computations are performed in an error-free manner under voltage over-scaling. Furthermore, our design includes an adaptive quality compensation (AQC) block which "tunes" the algorithm and architecture depending on the magnitude of voltage over-scaling and the severity of process variations. Simulation results show average power savings of ~33% for the proposed architecture compared with a conventional implementation in 90 nm CMOS technology, with a maximum output quality loss of ~1 dB in Peak Signal-to-Noise Ratio (PSNR) and no throughput penalty.
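A software toy can convey the significance idea, though the paper realizes it in hardware under voltage over-scaling: below, the high-order (significant) bits of each absolute difference in a sum-of-absolute-differences (SAD) motion-estimation kernel are accumulated exactly while the low-order bits are truncated. The bit split and all names are assumptions of this sketch.

```python
def sad_exact_and_significant(block_a, block_b, keep_bits=4, word_bits=8):
    """Compare a full SAD with one that keeps only the `keep_bits` most
    significant bits of each |a - b| term, mimicking a datapath whose
    significant portion is protected from voltage-scaling errors."""
    mask = ((1 << keep_bits) - 1) << (word_bits - keep_bits)
    diffs = [abs(a - b) for a, b in zip(block_a, block_b)]
    return sum(diffs), sum(d & mask for d in diffs)

# The truncation error per term is bounded by 2**(word_bits - keep_bits) - 1,
# so the quality loss of the approximate SAD is predictable.
full, approx = sad_exact_and_significant([120, 64, 7, 255], [118, 80, 9, 240])
```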
Abstract:
In this paper, we propose a design paradigm for energy-efficient and variation-aware operation of next-generation multicore heterogeneous platforms. The main idea behind the proposed approach lies in the observation that not all operations are equally important in shaping the output quality of various applications and of the overall system. Based on this observation, we suggest that all levels of the software design stack, including the programming model, compiler, operating system (OS) and run-time system, should identify the critical tasks and ensure correct operation of such tasks by assigning them to dynamically adjusted reliable cores/units. Specifically, based on error rates and operating conditions identified by a sense-and-adapt (SeA) unit, the OS selects and sets the right mode of operation of the overall system. The run-time system identifies critical and less-critical tasks based on special directives and schedules them to the appropriate units, which are dynamically adjusted for highly accurate or approximate operation by tuning their voltage and frequency. Units that execute less significant operations can operate at voltages lower than what is required for fully correct operation and thereby consume less power, since such tasks, unlike the critical ones, need not always be exact. Such a scheme can lead to energy-efficient and reliable operation while reducing the design cost and overheads of conventional circuit- and microarchitecture-level techniques.
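A minimal sketch of the run-time dispatch policy described above (all names hypothetical): tasks carrying a criticality directive go to cores held at nominal voltage/frequency, while the rest go to over-scaled, lower-power cores.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    critical: bool   # set from the programming model's directive

def dispatch(tasks, reliable_cores, approximate_cores):
    """Round-robin within each pool; in the proposed system the pools
    themselves would be resized by the OS as the sense-and-adapt unit
    reports changing error rates and operating conditions."""
    schedule = {}
    counters = {True: 0, False: 0}
    for task in tasks:
        pool = reliable_cores if task.critical else approximate_cores
        schedule[task.name] = pool[counters[task.critical] % len(pool)]
        counters[task.critical] += 1
    return schedule
```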
Abstract:
In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of the observed data. Such distributions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be efficiently performed in an Empirical Bayes fashion by maximizing the posterior marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that the evaluation is usually computationally intensive and scales badly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We show that the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
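The core of the claimed speed-up can be sketched for the simplest hyperparameter pair, a signal variance and a noise variance (the paper's identities also cover the Jacobian and Hessian, which this sketch omits). After one O(N^3) eigendecomposition, every later evaluation of the negative log marginal likelihood costs O(N):

```python
import numpy as np

def gp_nll_factory(K, y):
    """One-time O(N^3) setup for y ~ N(0, s2f*K + s2n*I): eigendecompose
    K and project y once; afterwards each evaluation is O(N)."""
    lam, Q = np.linalg.eigh(K)      # kernel eigenvalues and eigenvectors
    a2 = (Q.T @ y) ** 2             # squared projections of the targets
    n = y.size

    def nll(s2f, s2n):
        d = s2f * lam + s2n         # eigenvalues of the full covariance
        # 0.5 * (y' C^{-1} y + log|C| + n log 2*pi), all in O(N)
        return 0.5 * (np.sum(a2 / d) + np.sum(np.log(d)) + n * np.log(2 * np.pi))

    return nll
```

A global optimizer can then call `nll` thousands of times at negligible cost compared with re-factorizing the covariance matrix at every iteration.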
Abstract:
Molecular logic-based computation is a broad umbrella covering molecular sensors at its simplest level and logic gate arrays involving steadily increasing levels of parallel and serial integration. The fluorescent PET (photoinduced electron transfer) switching principle remains a loyal servant of this entire field. Applications arise from the convenient operation of molecular information processors in very small spaces.
Abstract:
Background: Pedigree reconstruction using genetic analysis provides a useful means to estimate fundamental population biology parameters relating to population demography, trait heritability and individual fitness when combined with other sources of data. However, there remain limitations to pedigree reconstruction in wild populations, particularly in systems where parent-offspring relationships cannot be directly observed, there is incomplete sampling of individuals, or molecular parentage inference relies on low-quality DNA from archived material. While much can still be inferred from incomplete or sparse pedigrees, it is crucial to evaluate the quality and power of the available genetic information before testing specific biological hypotheses. Here, we used microsatellite markers to reconstruct a multi-generation pedigree of wild Atlantic salmon (Salmo salar L.) using archived scale samples collected with a total trapping system within a river over a 10-year period. Using a simulation-based approach, we determined the optimal microsatellite marker number for accurate parentage assignment, and evaluated the power of the resulting partial pedigree to investigate important evolutionary and quantitative genetic characteristics of salmon in the system.
Results: We show that at least 20 microsatellites (averaging 12 alleles per locus) are required to maximise parentage assignment and to improve the power to estimate reproductive success and heritability in this study system. We also show that 1.5-fold differences can be detected between groups simulated to have differing reproductive success, and that it is possible to detect moderate heritability values for continuous traits (h^2 ~ 0.40) with more than 80% power when using 28 moderately to highly polymorphic markers.
Conclusion: The methodologies and workflow described provide a robust approach for evaluating archived samples for pedigree-based research, even where only a proportion of the total population is sampled. The results demonstrate the feasibility of pedigree-based studies to address challenging ecological and evolutionary questions in free-living populations, where genealogies can be traced only using molecular tools, and show that significant increases in pedigree assignment power can be achieved by using higher numbers of markers.
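The simulation logic behind such power estimates can be sketched as a toy Monte Carlo (uniform allele frequencies, fully sampled candidate parents, no genotyping error; all far simpler than the study's simulations): an offspring is assigned when exactly one candidate parent is Mendelian-compatible at every locus.

```python
import numpy as np

rng = np.random.default_rng(42)

def assignment_rate(n_parents=100, n_loci=20, n_alleles=12, n_offspring=200):
    """Fraction of offspring whose true dam is the unique candidate
    sharing at least one allele at every locus (exclusion-based toy)."""
    parents = rng.integers(0, n_alleles, size=(n_parents, n_loci, 2))
    correct = 0
    for _ in range(n_offspring):
        dam = rng.integers(n_parents)
        # one allele inherited from the dam, one from an unsampled sire
        inherited = parents[dam, np.arange(n_loci), rng.integers(0, 2, n_loci)]
        other = rng.integers(0, n_alleles, n_loci)
        off = np.stack([inherited, other], axis=1)
        # candidate compatible if it shares >= 1 allele at each locus
        share = (parents[:, :, :, None] == off[None, :, None, :]).any(axis=(2, 3))
        compatible = share.all(axis=1)
        if compatible.sum() == 1 and compatible[dam]:
            correct += 1
    return correct / n_offspring

print([assignment_rate(n_loci=k) for k in (5, 10, 20)])
```

Running it for increasing marker numbers shows the assignment rate saturating, which is the behaviour used to choose how many loci are worth genotyping.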
Abstract:
Repeated recolonization of freshwater environments following Pleistocene glaciations has played a major role in the evolution and adaptation of anadromous taxa. Located at the western fringe of Europe, Ireland and Britain were likely recolonized rapidly by anadromous fishes from the North Atlantic following the last glacial maximum (LGM). While the presence of unique mitochondrial haplotypes in Ireland suggests that a cryptic northern refugium may have played a role in recolonization, no explicit test of this hypothesis has been conducted. The three-spined stickleback is native and ubiquitous to aquatic ecosystems throughout Ireland, making it an excellent model species with which to examine the biogeographical history of anadromous fishes in the region. We used mitochondrial and microsatellite markers to examine the presence of divergent evolutionary lineages and to assess broad-scale patterns of geographical clustering among postglacially isolated populations. Our results confirm that Ireland is a region of secondary contact for divergent mitochondrial lineages and that endemic haplotypes occur in populations in Central and Southern Ireland. To test whether a putative Irish lineage arose from a cryptic Irish refugium, we used approximate Bayesian computation (ABC); however, we found no support for this hypothesis. Instead, the Irish lineage likely diverged from the European lineage as a result of postglacial isolation of freshwater populations by rising sea levels. These findings emphasize the need to rigorously test biogeographical hypotheses and contribute further evidence that postglacial processes may have shaped genetic diversity in temperate fauna.
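For readers unfamiliar with ABC, a minimal rejection sampler looks like the sketch below. It is generic and hypothetical: the study performed model choice among refugium scenarios using coalescent simulations, which this toy does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_rejection(observed, simulate, sample_prior, n_sims=50_000, eps=0.1):
    """Keep parameter draws whose simulated summary statistics land
    within distance eps of the observed statistics; the kept draws
    approximate the posterior."""
    kept = []
    for _ in range(n_sims):
        theta = sample_prior()
        if np.linalg.norm(simulate(theta) - observed) < eps:
            kept.append(theta)
    return np.array(kept)

# Toy usage: infer a divergence-like parameter from one summary statistic.
obs = np.array([0.8])
post = abc_rejection(obs,
                     simulate=lambda th: np.array([rng.normal(th, 0.1)]),
                     sample_prior=lambda: rng.uniform(0.0, 2.0))
print(post.mean(), post.size)
```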