925 results for Evolutionary computation
Genetic Variation among Major Human Geographic Groups Supports a Peculiar Evolutionary Trend in PAX9
Abstract:
A total of 172 persons from nine South Amerindian, three African and one Eskimo populations were studied in relation to the Paired box gene 9 (PAX9) exon 3 (138 base pairs) as well as its 5' and 3' flanking intronic segments (232 bp and 220 bp, respectively), and the results were integrated with the information available for the same genetic region from individuals of different geographical origins. Nine mutations were scored in exon 3 and six in its flanking regions; four of them are new South American tribe-specific singletons. Exon 3 nucleotide diversity is several orders of magnitude higher than that of its flanking intronic regions. Additionally, a set of variants in PAX9 and 101 other dentition-related genes can define at least some of the dental morphological differences between Sub-Saharan Africans and non-Africans, probably associated with adaptations after the modern human exodus from Africa. Exon 3 of PAX9 could be a good molecular example of how evolvability works.
Abstract:
Background: Discussion surrounding the settlement of the New World has recently gained momentum with advances in molecular biology, archaeology and bioanthropology. Recent evidence from these diverse fields is found to support different colonization scenarios. The currently available genetic evidence suggests a "single migration" model, in which both early and later Native American groups derive from one expansion event into the continent. In contrast, the pronounced anatomical differences between early and late Native American populations have led others to propose more complex scenarios, involving separate colonization events of the New World and a distinct origin for these groups. Methodology/Principal Findings: Using large samples of early American crania, we: 1) calculated the rate of morphological differentiation between early and late American samples under three different time divergence assumptions, and compared our findings to the predicted morphological differentiation under neutral conditions in each case; and 2) further tested three dispersal scenarios for the colonization of the New World by comparing the morphological distances among early and late Amerindians, East Asians, Australo-Melanesians and early modern humans from Asia to the geographical distances associated with each dispersal model. Results indicate that the assumption of a last shared common ancestor outside the continent better explains the observed morphological differences between early and late American groups. This result is corroborated by our finding that a model comprising two Asian waves of migration coming through Bering into the Americas fits the cranial anatomical evidence best, especially when the effects of diversifying selection in response to climate are taken into account. Conclusions: We conclude that the morphological diversity documented through time in the New World is best accounted for by a model postulating two waves of human expansion into the continent originating in East Asia and entering through Beringia.
Abstract:
The parallel mutation-selection evolutionary dynamics, in which mutation and replication are independent events, is solved exactly in the case where the Malthusian fitnesses associated with the genomes are described by the random energy model (REM) and by a ferromagnetic version of the REM. The solution method uses the mapping of the evolutionary dynamics onto a quantum Ising chain in a transverse field and the Suzuki-Trotter formalism to calculate the transition probabilities between configurations at different times. We find that in the case of the REM landscape the dynamics can exhibit three distinct regimes: pure diffusion or stasis for short times, depending on the fitness of the initial configuration, and a spin-glass regime for long times. The transition between these dynamical regimes is marked by discontinuities in the mean fitness as well as in the overlap with the initial reference sequence. The relaxation to equilibrium is described by an inverse-time decay. In the ferromagnetic REM, we find, in addition to these three regimes, a ferromagnetic regime in which the overlap and the mean fitness are frozen. In this case, the system relaxes to equilibrium in a finite time. The relevance of our results to information-processing aspects of evolution is discussed.
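As background (not taken from the paper itself), the parallel mutation-selection dynamics and the standard spin mapping it admits can be sketched as follows; the symbols $x_i$, $f_i$, $\mu_{ij}$ and the uniform per-site rate $\mu$ are generic notation, and sign conventions differ between references. In the parallel (Crow-Kimura) formulation, the frequency $x_i$ of genotype $i$ with Malthusian fitness $f_i$ obeys
\[
\dot{x}_i \;=\; x_i\bigl(f_i - \bar{f}(t)\bigr) \;+\; \sum_{j\neq i}\bigl(\mu_{ji}\,x_j - \mu_{ij}\,x_i\bigr),
\qquad \bar{f}(t)=\sum_k f_k\,x_k ,
\]
so mutation and selection enter as independent terms. For binary sequences $\sigma=(\sigma_1,\dots,\sigma_L)$ with $\sigma_k=\pm 1$ and a uniform per-site mutation rate $\mu$, the linearized (unnormalized) dynamics is generated by
\[
\operatorname{diag}\bigl(f(\sigma)\bigr) \;+\; \mu\sum_{k=1}^{L}\bigl(\sigma_k^{x}-1\bigr),
\]
an Ising-type operator in a transverse field; choosing $f$ from the random energy model gives landscapes of the kind analyzed above.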
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. Both DS problems are computationally complex. For large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with the NDE (MEAN) results in the proposed approach for solving DS problems in large-scale networks. Simulation results show that MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, MEAN exhibits a sublinear running time as a function of the system size. Tests with networks ranging from 632 to 5166 switches indicate that MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively little running time.
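To illustrate the subpopulation-table mechanism mentioned above, here is a minimal, generic Python sketch: one table per objective, and a new individual enters a table only if it is among that table's best members. The toy objectives, the real-valued genotype and all parameter values are illustrative assumptions; the sketch does not use the paper's node-depth encoding or its network-specific operators.

```python
import random

# Toy multiobjective EA with subpopulation tables: one table per objective.
# The objectives and operators below are illustrative placeholders, not the
# paper's node-depth encoding (NDE) or its restoration operators.

TABLE_SIZE = 5
N_GENERATIONS = 200

def objectives(x):
    # Two hypothetical conflicting objectives, both to be minimized.
    return [sum(v * v for v in x), sum((v - 2.0) ** 2 for v in x)]

def mutate(x):
    return [v + random.gauss(0.0, 0.3) for v in x]

def try_insert(table, entry, obj_index):
    # Keep the table sorted by its own objective; the newcomer survives only
    # if it ranks among the best TABLE_SIZE entries for that objective.
    table.append(entry)
    table.sort(key=lambda ind: ind[1][obj_index])
    del table[TABLE_SIZE:]

def run():
    seed = [random.uniform(-3, 3) for _ in range(4)]
    first = (seed, objectives(seed))
    tables = [[first] for _ in range(2)]        # one table per objective
    for _ in range(N_GENERATIONS):
        source = random.choice(tables)          # pick a table, then a parent
        parent = random.choice(source)[0]
        child = mutate(parent)
        entry = (child, objectives(child))
        for k, table in enumerate(tables):      # offer the child to every table
            try_insert(table, entry, k)
    return tables

if __name__ == "__main__":
    for k, table in enumerate(run()):
        print(f"best of table {k}: objectives = {table[0][1]}")
```

In the approach described above, the genotypes would instead be NDE forests manipulated by operators that preserve network feasibility, which is what removes the constraint equations from the formulation.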
Abstract:
The purpose of this paper is to propose a multiobjective optimization approach for solving the manufacturing cell formation problem, explicitly considering the performance of the resulting manufacturing system. Cells are formed so as to simultaneously minimize three conflicting objectives, namely the level of work-in-process, the intercell moves and the total machinery investment. A genetic algorithm performs a search in the design space in order to approximate the Pareto-optimal set. The values of the objectives for each candidate solution in a population are assigned by running a discrete-event simulation, in which the model is automatically generated according to the number of machines and their distribution among cells implied by that particular solution. The potential of this approach is evaluated via its application to an illustrative example and to a case from the relevant literature. The results obtained are analyzed, and it is concluded that the approach is capable of generating a set of alternative manufacturing cell configurations that optimize multiple performance measures, greatly improving the decision-making process involved in planning and designing cellular systems.
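To make the simulation-in-the-loop evaluation concrete, below is a hedged Python sketch of a genetic algorithm whose objective values come from a stand-in simulate() call and are collected into a nondominated archive. The chromosome (machine-to-cell assignments), the surrogate numbers returned by simulate() and all parameters are illustrative assumptions, not the paper's model.

```python
import random

# Sketch: a GA whose fitness evaluation calls a discrete-event simulation.
# 'simulate' stands in for the automatically generated simulation model
# described above; here it just returns plausible surrogate numbers.

N_MACHINES, N_CELLS, POP, GENS = 12, 3, 30, 50

def simulate(assignment):
    # Placeholder for the simulation: returns
    # (work-in-process, intercell moves, machinery investment), all minimized.
    wip = sum(assignment) / len(assignment) + random.random()
    moves = len(set(assignment))
    invest = N_MACHINES + moves
    return (wip, moves, invest)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evolve():
    pop = [[random.randrange(N_CELLS) for _ in range(N_MACHINES)] for _ in range(POP)]
    archive = []                                   # nondominated (solution, objectives) pairs
    for _ in range(GENS):
        scored = [(ind, simulate(ind)) for ind in pop]
        for ind, obj in scored:                    # update the Pareto archive
            if not any(dominates(o, obj) for _, o in archive):
                archive = [(i, o) for i, o in archive if not dominates(obj, o)]
                archive.append((ind, obj))
        nxt = []                                   # binary tournament, crossover, mutation
        while len(nxt) < POP:
            a, b = random.sample(scored, 2)
            parent = a[0] if dominates(a[1], b[1]) else b[0]
            mate = random.choice(scored)[0]
            cut = random.randrange(1, N_MACHINES)
            child = parent[:cut] + mate[cut:]
            if random.random() < 0.2:
                child[random.randrange(N_MACHINES)] = random.randrange(N_CELLS)
            nxt.append(child)
        pop = nxt
    return archive

if __name__ == "__main__":
    for ind, obj in evolve():
        print(obj, ind)
```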
Abstract:
This paper presents a new methodology to estimate unbalanced harmonic distortions in a power system, based on measurements at a limited number of given sites. The algorithm utilizes evolutionary strategies (ES), a development branch of evolutionary algorithms. The problem-solving algorithm proposed herein makes use of data from various power quality meters, which can be synchronized either by high-technology GPS devices or by using information from a fundamental frequency load flow, which makes the overall power quality monitoring system much less costly. The ES-based harmonic estimation model is applied to a 14-bus network to compare its performance with a conventional Monte Carlo approach. It is also applied to a 50-bus subtransmission network in order to compare the three-phase and single-phase approaches as well as to assess the robustness of the proposed method.
Abstract:
This paper presents a new methodology to estimate harmonic distortions in a power system, based on measurements at a limited number of given sites. The algorithm utilizes evolutionary strategies (ES), a development branch of evolutionary algorithms. The main advantage of using such a technique lies in its modeling facilities as well as in its potential to solve fairly complex problems. The problem-solving algorithm proposed herein makes use of data from various power-quality (PQ) meters, which can be synchronized either by high-technology global positioning system devices or by using information from a fundamental frequency load flow. This second approach makes the overall PQ monitoring system much less costly. The algorithm is applied to an IEEE test network, for which a sensitivity analysis is performed to determine how the parameters of the ES can be selected so that the algorithm performs effectively. Case studies show fairly promising results and demonstrate the robustness of the proposed method.
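As a rough illustration of how an ES can be applied to this kind of estimation problem, the Python sketch below evolves candidate harmonic injection vectors so that the distortion predicted by a placeholder network model matches a handful of made-up meter readings. The model, the measurements and the (mu + lambda) parameters are all illustrative assumptions, not the paper's network or data.

```python
import random

# Sketch of a (mu + lambda) evolution strategy for harmonic estimation.
# 'predict_meter_readings' stands in for the harmonic network model;
# the measured values below are made-up placeholders.

MU, LAMBDA, GENS, N_SOURCES = 5, 20, 100, 6
MEASURED = [0.021, 0.034, 0.018, 0.027]      # hypothetical meter readings (p.u.)

def predict_meter_readings(injections):
    # Placeholder linear model mapping candidate harmonic injections to the
    # distortion seen at the monitored buses.
    return [sum(inj * ((i + j) % 3 + 1) * 0.01 for j, inj in enumerate(injections))
            for i in range(len(MEASURED))]

def error(injections):
    pred = predict_meter_readings(injections)
    return sum((p - m) ** 2 for p, m in zip(pred, MEASURED))

def run_es():
    parents = [([random.random() for _ in range(N_SOURCES)], 0.1) for _ in range(MU)]
    for _ in range(GENS):
        offspring = []
        for _ in range(LAMBDA):
            x, sigma = random.choice(parents)
            sigma *= 1.3 if random.random() < 0.5 else 1 / 1.3   # step-size mutation
            child = [max(0.0, xi + random.gauss(0.0, sigma)) for xi in x]
            offspring.append((child, sigma))
        pool = parents + offspring                # (mu + lambda) selection
        pool.sort(key=lambda ind: error(ind[0]))
        parents = pool[:MU]
    return parents[0]

if __name__ == "__main__":
    best, _ = run_es()
    print("estimated injections:", [round(v, 3) for v in best])
    print("residual error:", error(best))
```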
Abstract:
In mapping the evolutionary process of online news and the socio-cultural factors determining this development, this paper has a dual purpose. First, in reworking the definition of “online communication”, it argues that despite its seemingly sudden emergence in the 1990s, the history of online news started in the early days of the telegraph and continued through the development of the telephone and the fax machine before becoming computer-based in the 1980s and Web-based in the 1990s. Second, merging macro-perspectives on the dynamics of media evolution by DeFleur and Ball-Rokeach (1989) and Winston (1998), the paper consolidates a critical point for thinking about new media development: that something being technically feasible does not always mean that it will be socially accepted and/or demanded. From a producer-centric perspective, the birth and development of pre-Web online news forms were more or less driven by the traditional media’s sometimes excessive hype about the power of new technologies. However, placing such an emphasis on technological potential at the expense of its social conditions can be not only misleading but also detrimental to the development of new media, including the potential of today’s online news.
Abstract:
The one-way quantum computing model introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)] shows that it is possible to quantum compute using only a fixed entangled resource known as a cluster state, together with adaptive single-qubit measurements. This model is the basis for several practical proposals for quantum computation, including a promising proposal for optical quantum computation based on cluster states [M. A. Nielsen, Phys. Rev. Lett. (to be published), quant-ph/0402005]. A significant open question is whether such proposals are scalable in the presence of physically realistic noise. In this paper we prove two threshold theorems which show that scalable fault-tolerant quantum computation may be achieved in implementations based on cluster states, provided the noise in the implementations is below some constant threshold value. Our first threshold theorem applies to a class of implementations in which entangling gates are applied deterministically, but with a small amount of noise. We expect this threshold to be applicable in a wide variety of physical systems. Our second threshold theorem is specifically adapted to proposals such as the optical cluster-state proposal, in which nondeterministic entangling gates are used. A critical technical component of our proofs is a pair of powerful theorems which relate the properties of noisy unitary operations restricted to act on a subspace of state space to extensions of those operations acting on the entire state space. We expect these theorems to have a variety of applications in other areas of quantum-information science.
Abstract:
Quantum computers promise to greatly increase the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it has suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
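For context (a standard textbook sketch, not the paper's construction, and with phase conventions that vary between references): the linear elements named above act on the photon mode operators as
\[
\text{phase shifter: } \hat a \;\to\; e^{i\phi}\,\hat a,
\qquad
\text{beam splitter: }
\begin{pmatrix}\hat a\\ \hat b\end{pmatrix}
\;\to\;
\begin{pmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{pmatrix}
\begin{pmatrix}\hat a\\ \hat b\end{pmatrix},
\]
and with a dual-rail qubit encoded as one photon shared between two modes, $|0\rangle_L = |1\rangle_a|0\rangle_b$ and $|1\rangle_L = |0\rangle_a|1\rangle_b$, these elements already realize arbitrary single-qubit rotations; the difficulty addressed in the abstract is obtaining the entangling operations without optical nonlinearities.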
Abstract:
The Lake Eacham rainbowfish (Melanotaenia eachamensis) was declared extinct in the wild in the late 1980s after it disappeared from its only known locality, an isolated crater lake in northeast Queensland. Doubts have been raised about whether this taxon is distinct from surrounding populations of the eastern rainbowfish (Melanotaenia splendida splendida). We examined the evolutionary distinctiveness of M. eachamensis, obtained from captive stocks, relative to M. s. splendida through analysis of variation in mtDNA sequences, nuclear microsatellites, and morphometric characters. Captive M. eachamensis had mtDNAs that were highly divergent from those in most populations of M. s. splendida. A broader geographic survey using RFLPs revealed some populations initially identified as M. s. splendida that carried eachamensis mtDNA, whereas some others had mixtures of eachamensis and splendida mtDNA. The presence of eachamensis-like mtDNA in these populations could in principle be due to (1) sorting of ancestral polymorphisms, (2) introgression of M. eachamensis mtDNA into M. s. splendida, or (3) incorrect species boundaries, such that some populations currently assigned to M. s. splendida are M. eachamensis or are mixtures of the two species. These alternative hypotheses were evaluated through comparisons of four nuclear microsatellite loci and of morphometrics and meristics. In analyses of both data sets, populations of M. s. splendida with eachamensis mtDNA were more similar to captive M. eachamensis than to M. s. splendida with splendida mtDNA, supporting hypothesis 3. These results are significant for the management of M. eachamensis in several respects. First, the combined molecular and morphological evidence indicates that M. eachamensis is a distinct species and a discrete evolutionarily significant unit worthy of conservation effort. Second, it appears that the species boundary between M. eachamensis and M. s. splendida has been misdiagnosed, such that there are extant populations of M. eachamensis on the Atherton Tableland as well as areas where both forms coexist. Accordingly, we suggest that M. eachamensis be listed as vulnerable rather than critical (or extinct in the wild). Third, the discovery of extant but genetically divergent populations of M. eachamensis on the Atherton Tableland broadens the options for future reintroductions to Lake Eacham.
Abstract:
Motivation: Prediction methods for identifying binding peptides could minimize the number of peptides required to be synthesized and assayed, and thereby facilitate the identification of potential T-cell epitopes. We developed a bioinformatic method for the prediction of peptide binding to MHC class II molecules. Results: Experimental binding data and expert knowledge of anchor positions and binding motifs were combined with an evolutionary algorithm (EA) and an artificial neural network (ANN): binding data extraction --> peptide alignment --> ANN training and classification. This method, termed PERUN, was implemented for the prediction of peptides that bind to HLA-DR4(B1*0401). The respective positive predictive values of PERUN predictions of high-, moderate-, low- and zero-affinity binders were assessed as 0.8, 0.7, 0.5 and 0.8 by cross-validation, and 1.0, 0.8, 0.3 and 0.7 by experimental binding. This illustrates the synergy between experimentation and computer modeling, and its application to the identification of potential immunotherapeutic peptides.
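To show roughly how an evolutionary alignment step can feed a downstream classifier, here is a minimal Python sketch; for brevity the ANN is replaced by a simple linear position-weight score, and the peptides, labels, amino-acid encoding and parameters are all made-up assumptions rather than PERUN's actual data or architecture.

```python
import random

# Sketch of an EA-plus-classifier pipeline: an EA evolves a position-weight
# scheme used to pick each peptide's 9-mer binding core, and the same score
# acts as a trivial stand-in classifier on the aligned cores.

AMINO = "ACDEFGHIKLMNPQRSTVWY"
CORE = 9

def encode(aa):
    return AMINO.index(aa) / len(AMINO)

def random_peptide(n):
    return "".join(random.choice(AMINO) for _ in range(n))

# Hypothetical training data: (peptide, binds?) pairs with alternating labels.
DATA = [(random_peptide(random.randint(12, 18)), i % 2 == 0) for i in range(40)]

def core_score(core, weights):
    return sum(w * encode(a) for w, a in zip(weights, core))

def best_core(peptide, weights):
    # Alignment step: choose the highest-scoring 9-mer window of the peptide.
    cores = [peptide[i:i + CORE] for i in range(len(peptide) - CORE + 1)]
    return max(cores, key=lambda c: core_score(c, weights))

def fitness(weights):
    # Separation between mean scores of binders and non-binders after alignment.
    pos = [core_score(best_core(p, weights), weights) for p, y in DATA if y]
    neg = [core_score(best_core(p, weights), weights) for p, y in DATA if not y]
    return sum(pos) / len(pos) - sum(neg) / len(neg)

def evolve(pop_size=20, gens=60):
    pop = [[random.uniform(-1, 1) for _ in range(CORE)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

if __name__ == "__main__":
    w = evolve()
    print("aligned core of first peptide:", best_core(DATA[0][0], w))
```

In the method described above, the aligned cores would instead be passed to an ANN trained to classify binding affinity.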
Abstract:
1. Studies of evolutionary temperature adaptation of muscle and locomotor performance in fish are reviewed, with a focus on the Antarctic fauna living at subzero temperatures. 2. Only limited data are available to compare the sustained and burst swimming kinematics and performance of Antarctic, temperate and tropical species. Available data indicate that low temperatures limit maximum swimming performance, and this is especially evident in fish larvae. 3. In a recent study, muscle performance in the Antarctic rock cod Notothenia coriiceps at 0 degrees C was found to be sufficient to produce maximum velocities during burst swimming that were similar to those seen in the sculpin Myoxocephalus scorpius at 10 degrees C, indicating temperature compensation of muscle and locomotor performance in the Antarctic fish. However, at 15 degrees C, sculpin produce maximum swimming velocities greater than those of N. coriiceps at 0 degrees C. 4. It is recommended that strict hypothesis-driven investigations using ecologically relevant measures of performance be undertaken to study temperature adaptation in Antarctic fish. Recent detailed phylogenetic analyses of the Antarctic fish fauna and their temperate relatives will allow a stronger experimental approach by helping to separate what is due to adaptation to the cold and what is due to phylogeny alone.