990 results for evolution algorithm
Abstract:
Inbreeding avoidance is often invoked to explain observed patterns of dispersal, and theoretical models indeed point to a possibly important role. However, while inbreeding load is usually assumed constant in these models, it is actually bound to vary dynamically under the combined influences of mutation, drift, and selection and thus to evolve jointly with dispersal. Here we report the results of individual-based stochastic simulations allowing such a joint evolution. We show that strongly deleterious mutations should play no significant role, owing to the low genomic mutation rate for such mutations. Mildly deleterious mutations, by contrast, may create enough heterosis to affect the evolution of dispersal as an inbreeding-avoidance mechanism, but only provided that they are also strongly recessive. If slightly recessive, they will spread among demes and accumulate at the metapopulation level, thus contributing to mutational load, but not to heterosis. The resulting loss of viability may then combine with demographic stochasticity to promote population fluctuations, which foster indirect incentives for dispersal. Our simulations suggest that, under biologically realistic parameter values, deleterious mutations have a limited impact on the evolution of dispersal, which on average exceeds by only one-third the values expected from kin-competition avoidance.
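A minimal sketch of this kind of individual-based stochastic simulation is given below. All parameter values (number of demes, deme size, mutation rate, selection and dominance coefficients, dispersal cost) are illustrative assumptions, and the haploid-style bookkeeping of the mutation load omits the diploid genetics needed to generate heterosis; the sketch only illustrates the joint tracking of a dispersal trait and a deleterious load, not the authors' model.

```python
# Highly simplified, hypothetical individual-based sketch: a metapopulation of
# demes in which each individual carries a dispersal propensity and a count of
# mildly deleterious mutations; parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_DEMES, DEME_SIZE, N_GEN = 20, 20, 200
U_DEL = 0.1          # genomic rate of new mildly deleterious mutations (assumed)
S, H = 0.02, 0.05    # selection and dominance coefficients (assumed)
DISP_COST = 0.1      # probability of dying while dispersing (assumed)
MUT_SD = 0.02        # mutational step size on the dispersal trait (assumed)

# each individual: [dispersal propensity, number of deleterious mutations]
demes = [[[0.1, 0] for _ in range(DEME_SIZE)] for _ in range(N_DEMES)]

for gen in range(N_GEN):
    # mutation: new deleterious mutations plus small steps on the dispersal trait
    for deme in demes:
        for ind in deme:
            ind[0] = min(max(ind[0] + rng.normal(0.0, MUT_SD), 0.0), 1.0)
            ind[1] += rng.poisson(U_DEL)
    # dispersal: emigrate with probability equal to the trait, pay a survival
    # cost, then land in a randomly chosen deme
    migrants = []
    for i, deme in enumerate(demes):
        stay = []
        for ind in deme:
            if rng.random() < ind[0]:
                if rng.random() > DISP_COST:
                    migrants.append(ind)
            else:
                stay.append(ind)
        demes[i] = stay
    for ind in migrants:
        demes[rng.integers(N_DEMES)].append(ind)
    # viability selection against the load, then density regulation to DEME_SIZE
    for i, deme in enumerate(demes):
        if not deme:
            continue
        w = np.array([(1.0 - H * S) ** ind[1] for ind in deme])
        idx = rng.choice(len(deme), size=DEME_SIZE, replace=True, p=w / w.sum())
        demes[i] = [[deme[j][0], deme[j][1]] for j in idx]

mean_d = np.mean([ind[0] for deme in demes for ind in deme])
print(f"mean dispersal propensity after {N_GEN} generations: {mean_d:.3f}")
```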
Abstract:
Purpose: Spirituality and religiousness have been shown to be highly prevalent in patients with schizophrenia. Religion can help instil a positive sense of self, decrease the impact of symptoms and provide social contacts. Religion may also be a source of suffering. In this context, this research explores whether religion remains stable over time. Methods: From an initial cohort of 115 out-patients, 80% completed the 3-year follow-up assessment. In order to study the evolution over time, a hierarchical cluster analysis using average linkage was performed on factorial scores at baseline and follow-up and on their differences. A sensitivity analysis was performed secondarily, using mixed models, to check whether the outcome was influenced by other factors such as changes in mental state. Results: Religion was stable over time for 63% of patients; positive changes occurred for 20% (i.e., a significant increase of religion as a resource or a transformation of negative religion into a positive one) and negative changes for 17% (i.e., a decrease of religion as a resource or a transformation of positive religion into a negative one). Change in spirituality and/or religiousness was not associated with social or clinical status, but with reduced subjective quality of life and self-esteem, even after controlling for the influence of age, gender, quality of life and clinical factors at baseline. Conclusions: In this context of patients with chronic schizophrenia, religion appeared to be labile. Qualitative analyses showed that those changes expressed the struggles of patients and suggest that religious issues need to be discussed in clinical settings.
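The clustering step can be outlined with standard tools. The sketch below uses synthetic stand-ins for the factorial scores at baseline and follow-up (the data, the Euclidean metric and the three-cluster cut are assumptions for illustration, not details taken from the study) and applies average-linkage hierarchical clustering to the combined baseline, follow-up and difference scores.

```python
# A hedged sketch of average-linkage hierarchical clustering on factorial
# scores at baseline and follow-up; the data below are synthetic stand-ins.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_patients, n_factors = 92, 3                 # roughly 80% of 115 completed follow-up

baseline  = rng.normal(size=(n_patients, n_factors))   # hypothetical factor scores
follow_up = rng.normal(size=(n_patients, n_factors))
features = np.hstack([baseline, follow_up, follow_up - baseline])

# average linkage (UPGMA) on Euclidean distances between patients
Z = linkage(features, method="average", metric="euclidean")

# cut the tree into three groups (e.g. stable / positive change / negative change)
labels = fcluster(Z, t=3, criterion="maxclust")
for k in range(1, 4):
    print(f"cluster {k}: {np.sum(labels == k)} patients")
```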
Abstract:
The genomic era has revealed that the large repertoire of observed animal phenotypes is dependent on changes in the expression patterns of a finite number of genes, which are mediated by a plethora of transcription factors (TFs) with distinct specificities. The dimerization of TFs can also increase the complexity of a genetic regulatory network many-fold, by combining a small number of monomers into dimers with distinct functions. Therefore, studying the evolution of these dimerizing TFs is vital for understanding how complexity increased during animal evolution. We focus on the second-largest family of dimerizing TFs, the basic-region leucine zipper (bZIP), and infer when it expanded and how bZIP DNA-binding and dimerization functions evolved during the major phases of animal evolution. Specifically, we classify the metazoan bZIPs into 19 families and confirm the ancient nature of at least 13 of these families, which predate the split of the Cnidaria. We observe fixation of a core dimerization network in the last common ancestor of protostomes and deuterostomes. This was followed by an expansion of the number of proteins in the network, but no major changes in dimerization partners, during the emergence of vertebrates. In conclusion, the bZIPs are an excellent model with which to understand how DNA binding and protein interactions of TFs evolved during animal evolution.
Abstract:
With the advancement of high-throughput sequencing and the dramatic increase of available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. The mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to current mechanistic codon models are (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition-transversion rates at the nucleotide level. In this thesis, I pursue two main objectives. The first is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing these assumptions. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one, which holds them all, to the most general one, which relaxes them all. The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the model best aligned with the underlying characteristics of each data set. Our experiments show that holding all three assumptions is realistic in none of the real data sets; using simple models that hold these assumptions can therefore be misleading and result in inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes all three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. Our experiments show that, on a randomly chosen collection of data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Furthermore, I show through several experiments that the proposed general model is biologically plausible.
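As a hedged illustration of how a Kronecker-type construction assembles a codon-level rate matrix from nucleotide-level ingredients, the sketch below builds a 64 x 64 generator as the Kronecker sum of three position-specific 4 x 4 GTR-style matrices, thereby relaxing assumptions (b) and (c). It is not the KCM formulation itself, which additionally parameterizes double and triple substitutions; all numerical values are made up.

```python
# Sketch: 64x64 codon generator as a Kronecker sum of position-specific
# 4x4 nucleotide generators (GTR-style). Each codon position gets its own
# substitution process, but instantaneous double/triple changes are still
# forbidden; the KCM framework goes further and is not reproduced here.
import numpy as np

def gtr_matrix(exchangeabilities, freqs):
    """Build a 4x4 GTR generator from six exchangeabilities (AC, AG, AT, CG, CT, GT)
    and stationary frequencies (A, C, G, T)."""
    a, c, g, t = freqs
    ac, ag, at, cg, ct, gt = exchangeabilities
    Q = np.array([
        [0.0,    ac * c, ag * g, at * t],
        [ac * a, 0.0,    cg * g, ct * t],
        [ag * a, cg * c, 0.0,    gt * t],
        [at * a, ct * c, gt * g, 0.0   ],
    ])
    np.fill_diagonal(Q, -Q.sum(axis=1))   # rows of a generator sum to zero
    return Q

# hypothetical, position-specific parameters (illustrative values only)
freqs = np.array([0.3, 0.2, 0.2, 0.3])
Q1 = gtr_matrix([1.0, 4.0, 1.0, 1.0, 4.0, 1.0], freqs)   # codon position 1
Q2 = gtr_matrix([1.0, 2.0, 1.0, 1.0, 2.0, 1.0], freqs)   # codon position 2
Q3 = gtr_matrix([1.0, 8.0, 1.0, 1.0, 8.0, 1.0], freqs)   # codon position 3

I = np.eye(4)
# Kronecker sum: independent substitution processes at the three codon positions
Q_codon = (np.kron(np.kron(Q1, I), I)
           + np.kron(np.kron(I, Q2), I)
           + np.kron(np.kron(I, I), Q3))

assert Q_codon.shape == (64, 64)
assert np.allclose(Q_codon.sum(axis=1), 0.0)
```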
Abstract:
A haplotype is an m-long binary vector. The XOR-genotype of two haplotypes is the m-vector of their coordinate-wise XOR. We study the following problem: Given a set of XOR-genotypes, reconstruct their haplotypes so that the set of resulting haplotypes can be mapped onto a perfect phylogeny (PP) tree. The question is motivated by studying population evolution in human genetics, and is a variant of the perfect phylogeny haplotyping problem that has received intensive attention recently. Unlike the latter problem, in which the input is "full" genotypes, here we assume less informative input, which may be more economical to obtain experimentally. Building on ideas of Gusfield, we show how to solve the problem in polynomial time, by a reduction to the graph realization problem. The actual haplotypes are not uniquely determined by the tree they map onto, and the tree itself may or may not be unique. We show that tree uniqueness implies uniquely determined haplotypes, up to inherent degrees of freedom, and give a sufficient condition for uniqueness. To actually determine the haplotypes given the tree, additional information is necessary. We show that two or three full genotypes suffice to reconstruct all the haplotypes, and present a linear-time algorithm for identifying those genotypes.
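To make the objects concrete, the sketch below computes XOR-genotypes from made-up haplotypes and runs the classical four-gamete test that a binary haplotype matrix must pass to admit a perfect phylogeny; it does not reproduce the paper's graph-realization reduction.

```python
# Sketch: XOR-genotypes of haplotype pairs, plus the four-gamete test that a
# binary haplotype matrix must satisfy to admit a perfect phylogeny.
# The haplotypes here are made up for illustration.
from itertools import combinations

def xor_genotype(h1, h2):
    """Coordinate-wise XOR of two equal-length binary haplotypes."""
    return [a ^ b for a, b in zip(h1, h2)]

def admits_perfect_phylogeny(haplotypes):
    """Four-gamete test: no pair of sites may exhibit all of 00, 01, 10, 11."""
    m = len(haplotypes[0])
    for i, j in combinations(range(m), 2):
        gametes = {(h[i], h[j]) for h in haplotypes}
        if len(gametes) == 4:
            return False
    return True

haplotypes = [
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 0],
]
print(admits_perfect_phylogeny(haplotypes))          # True: fits a PP tree
print(xor_genotype(haplotypes[1], haplotypes[3]))    # [0, 1, 1, 0]
```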
Abstract:
We herein present a preliminary practical algorithm for evaluating complementary and alternative medicine (CAM) for children which relies on basic bioethical principles and considers the influence of CAM on global child healthcare. CAM is currently involved in almost all sectors of pediatric care and frequently represents a challenge to the pediatrician. The aim of this article is to provide a decision-making tool to assist the physician, especially as it remains difficult to keep up-to-date with the latest developments in the field. The reasonable application of our algorithm together with common sense should enable the pediatrician to decide whether pediatric (P)-CAM represents potential harm to the patient, and allow ethically sound counseling. In conclusion, we propose a pragmatic algorithm designed to evaluate P-CAM, briefly explain the underlying rationale and give a concrete clinical example.
Abstract:
Summary: Carbon dioxide movement in snow-covered and bare ground
Abstract:
Optimizing collective behavior in multiagent systems requires algorithms to find not only appropriate individual behaviors but also a suitable composition of agents within a team. Over the last two decades, evolutionary methods have emerged as a promising approach for the design of agents and their compositions into teams. The choice of a crossover operator that facilitates the evolution of optimal team composition is recognized to be crucial, but its effect has so far never been thoroughly quantified. Here, we highlight the limitations of two different crossover operators that exchange entire agents between teams: restricted agent swapping (RAS), which exchanges only corresponding agents between teams, and free agent swapping (FAS), which allows an arbitrary exchange of agents. Our results show that RAS suffers from premature convergence, whereas FAS entails insufficient convergence. Consequently, in both cases, the exploration and exploitation aspects of the evolutionary algorithm are not well balanced, resulting in the evolution of suboptimal team compositions. To overcome this problem, we propose combining the two methods. Our approach first applies FAS to explore the search space and then RAS to exploit it. This mixed approach is a much more efficient strategy for the evolution of team compositions compared to either strategy on its own. Our results suggest that such a mixed agent-swapping algorithm should always be preferred whenever the optimal composition of individuals in a multiagent system is unknown.
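A minimal sketch of the two crossover operators, under an assumed representation of a team as a list of agent genomes, is given below; the mixed strategy simply applies FAS in early generations and RAS afterwards. Parameter values and the switching generation are illustrative assumptions.

```python
# Sketch: restricted vs. free agent swapping between two parent teams.
# A team is a list of agent genomes (here, lists of floats); the
# representation and parameters are illustrative assumptions.
import random

def ras_crossover(team_a, team_b, p_swap=0.5):
    """Restricted agent swapping: only corresponding agents may be exchanged."""
    child_a, child_b = list(team_a), list(team_b)
    for i in range(len(team_a)):
        if random.random() < p_swap:
            child_a[i], child_b[i] = team_b[i], team_a[i]
    return child_a, child_b

def fas_crossover(team_a, team_b, p_swap=0.5):
    """Free agent swapping: an agent may be exchanged with any agent of the other team."""
    child_a, child_b = list(team_a), list(team_b)
    indices_b = random.sample(range(len(team_b)), len(team_b))  # arbitrary pairing
    for i, j in zip(range(len(team_a)), indices_b):
        if random.random() < p_swap:
            child_a[i], child_b[j] = team_b[j], team_a[i]
    return child_a, child_b

def mixed_crossover(team_a, team_b, generation, switch_at=100):
    """Explore with FAS early on, then exploit with RAS."""
    op = fas_crossover if generation < switch_at else ras_crossover
    return op(team_a, team_b)

team_a = [[random.gauss(0, 1) for _ in range(4)] for _ in range(5)]
team_b = [[random.gauss(0, 1) for _ in range(4)] for _ in range(5)]
child_a, child_b = mixed_crossover(team_a, team_b, generation=10)
```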
Abstract:
We present a numerical method for spectroscopic ellipsometry of thick transparent films. Assuming an analytical expression for the dispersion of the refractive index that contains several unknown coefficients, the procedure fits these coefficients at a fixed thickness and then repeats the fit as the thickness is varied over a range around its approximate value. The sample thickness is taken to be the one that yields the best fit, and the refractive index is given by the coefficients obtained for that thickness.
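The procedure can be sketched as a nested fit: for each candidate thickness, fit the dispersion coefficients by least squares and record the residual, then keep the thickness that gives the smallest residual. The Cauchy dispersion law, the single-film-on-substrate model and all numerical values in the sketch below are assumptions made for illustration only.

```python
# Sketch: grid over candidate thicknesses, least-squares fit of Cauchy
# dispersion coefficients at each thickness, best-fitting thickness wins.
# The film model (ambient / transparent film / known substrate) and all
# numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

def cauchy_n(wl_nm, A, B, C):
    """Cauchy dispersion n(lambda) = A + B/lambda^2 + C/lambda^4 (lambda in microns)."""
    wl_um = wl_nm / 1000.0
    return A + B / wl_um**2 + C / wl_um**4

def rho_model(wl_nm, d_nm, A, B, C, n_sub=3.85 - 0.02j, theta0=np.deg2rad(70.0)):
    """Ellipsometric ratio rho = r_p / r_s for a single transparent film on a substrate."""
    n0, n1, n2 = 1.0, cauchy_n(wl_nm, A, B, C), n_sub
    s0, c0 = np.sin(theta0), np.cos(theta0)
    c1 = np.sqrt(1 - (n0 * s0 / n1) ** 2 + 0j)      # cos(theta) in the film (Snell)
    c2 = np.sqrt(1 - (n0 * s0 / n2) ** 2 + 0j)      # cos(theta) in the substrate
    rs01 = (n0 * c0 - n1 * c1) / (n0 * c0 + n1 * c1)
    rs12 = (n1 * c1 - n2 * c2) / (n1 * c1 + n2 * c2)
    rp01 = (n1 * c0 - n0 * c1) / (n1 * c0 + n0 * c1)
    rp12 = (n2 * c1 - n1 * c2) / (n2 * c1 + n1 * c2)
    phase = np.exp(-2j * (2 * np.pi * d_nm * n1 * c1 / wl_nm))   # film phase thickness
    rs = (rs01 + rs12 * phase) / (1 + rs01 * rs12 * phase)
    rp = (rp01 + rp12 * phase) / (1 + rp01 * rp12 * phase)
    return rp / rs

def residual(coeffs, wl_nm, rho_meas, d_nm):
    rho = rho_model(wl_nm, d_nm, *coeffs)
    return np.concatenate([(rho - rho_meas).real, (rho - rho_meas).imag])

# synthetic "measurement" of a 2000 nm film, just to make the sketch runnable
wl = np.linspace(400.0, 800.0, 120)
rho_meas = rho_model(wl, 2000.0, 1.45, 0.004, 0.0)

best = None
for d in np.arange(1900.0, 2100.0, 5.0):             # grid around the approximate thickness
    fit = least_squares(residual, x0=[1.5, 0.0, 0.0], args=(wl, rho_meas, d))
    if best is None or fit.cost < best[0]:
        best = (fit.cost, d, fit.x)

print(f"best thickness: {best[1]:.0f} nm, Cauchy coefficients: {best[2]}")
```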
Abstract:
The Monte San Giorgio (Southern Alps, Ticino, Switzerland) is the most important locality in the world for vertebrates dating back to the Middle Triassic. For this reason, it was inscribed in 2003 as a UNESCO World Heritage Site. One of the objectives of this doctoral thesis was to fill some of the gaps in knowledge regarding the Ladinian succession, including in particular the San Giorgio Dolomite and the Meride Limestone. In order to achieve this, the entire succession, more than 600 metres thick, was measured and sampled. Biostratigraphic research based on new finds of fossil invertebrates and microfossils and on the palynological analysis of the entire section was integrated with single-zircon U-Pb dating of volcanic ash layers intercalated in the carbonate succession. This enabled a redefinition of the bio-chronostratigraphic and geochronologic framework of the succession, which encompasses a significantly shorter time interval than previously held. The Ladinian section extends from the E. curionii Ammonoid Zone (Early Fassanian) to the P. archelaus Ammonoid Zone (Early Longobardian). The age of the classic fossiliferous levels of the Meride Limestone, rich in organic matter and containing vertebrate fossils known all over the world, was defined in both biostratigraphic and geochronologic terms. The presumed stratigraphic significance of the pachypleurosaurid reptiles found in such levels is called into question by new finds. These fossiliferous horizons were found to correspond to the main volcanoclastic intervals of the Buchenstein Formation (Middle and Upper Pietra Verde). Thus, a correlation with the Bagolino Section (Italy), containing the GSSP for the base of the Ladinian, was proposed. Bulk sedimentation rates in the studied succession average 200 m/Myr and therefore prove to be 20 times higher than those of the South-Alpine pelagic basins. These values reflect high carbonate productivity from the surrounding platforms on the one hand and marked subsidence of the basin on the other. Only in the intervals consisting of laminated limestones did the sedimentation rates drop to average values of around 30 m/Myr. The distribution of organic and inorganic facies appears to be the consequence of relative variations in sea level. The laminated and organic-matter-rich intervals of the Meride Limestone are linked to a relative sea-level drop that favoured dysoxic to anoxic bottom-water conditions, coupled with an increase in runoff, perhaps due to recurrent explosive volcanic activity. The transient development under dysoxic conditions of monospecific benthic meio-/macrofaunas was documented. The organic matter appears to derive predominantly from benthic bacterial activity, as witnessed by alveolar structures typical of exopolymeric substances secreted by bacteria within microbial mats. A microbial contribution to the carbonate (peloidal) precipitation was documented. The protective effect exerted by these microbial mats is also indicated as the main taphonomic factor contributing to the excellent preservation of vertebrate fossils. A radiolarian assemblage discovered in the lower part of the section (earliest Ladinian, E. curionii Zone) suggests the transient existence of open-marine but not deep-water connections with the Tethyan pelagic basins. It shows marked similarities to the faunas typical of the late Anisian, suggesting that radiolarian biostratigraphy provides only low resolving power for recognizing the Anisian/Ladinian boundary.
The present thesis describes a new species of conifer (Elatocladus cassinae), a new species of insect (Dasyleptus triassicus) and seven new species of radiolarians (Eptingium danieli, Eptingium neriae, Parentactinosphaera eoladinica, Sepsagon ticinensis, Sepsagon? valporinae, Novamuria wirzi and Pessagnollum? hexaspinosum). In addition, following revision of the type material of already existent taxa, four new genera of radiolarians are introduced: Bernoulliella, Eohexastylus, Ticinosphaera and Lahmosphaera.