940 results for Evolutionary optimization methods
Abstract:
Background: The infraorder Anomura has long captivated the attention of evolutionary biologists due to its impressive morphological diversity and ecological adaptations. To date, 2500 extant species have been described, but phylogenetic relationships at high taxonomic levels remain unresolved. Here, we reconstruct the evolutionary history (phylogeny, divergence times, character evolution and diversification) of this speciose clade. For this purpose, we sequenced two mitochondrial (16S and 12S) and three nuclear (H3, 18S and 28S) markers for 19 of the 20 extant families, using traditional Sanger and next-generation 454 sequencing methods. Molecular data were combined with 156 morphological characters in order to estimate the largest anomuran phylogeny to date. The anomuran fossil record allowed us to incorporate 31 fossils for divergence time analyses.
Results: Our best phylogenetic hypothesis (morphological + molecular data) supports most anomuran superfamilies and families as monophyletic. However, three families and eleven genera are recovered as para- and polyphyletic. Divergence time analysis dates the origin of Anomura to the Late Permian, ~259 (224–296) MYA, with many of the present-day families radiating during the Jurassic and Early Cretaceous. Ancestral state reconstruction suggests that carcinization occurred independently 3 times within the group. The invasions of freshwater and terrestrial environments both occurred between the Late Cretaceous and Tertiary. Diversification analyses found the speciation rate to be low across Anomura, and we identify 2 major changes in the tempo of diversification; the most significant lies at the base of a clade that includes the squat-lobster family Chirostylidae.
Conclusions: Our findings are compared against current classifications and previous hypotheses of anomuran relationships. Many families and genera appear to be poly- or paraphyletic, suggesting a need for further taxonomic revisions at these levels. A divergence time analysis provides key insights into the origins of major lineages and events and the timing of morphological (body form) and ecological (habitat) transitions. Living anomuran biodiversity is the product of 2 major changes in the tempo of diversification; our initial insights suggest that the acquisition of a crab-like form did not act as a key innovation.
Abstract:
Toll plazas offer several toll payment types, such as manual, automatic coin machine, electronic and mixed lanes. In places with high traffic flow, a toll plaza causes considerable congestion and creates a bottleneck for the traffic flow unless the correct mix of payment types is in operation. The objective of this research is to determine the optimal lane configuration for the mix of payment methods so that the waiting time in the queue at the toll plaza is minimized. A queuing model representing the toll plaza system and a nonlinear integer program were developed to determine the optimal mix. The numerical results show that the waiting time at the toll plaza can be decreased by changing the lane configuration. For the case study developed, an improvement in waiting time as high as 96.37 percent was observed during the morning peak hour.
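The abstract does not reproduce the queuing model itself; as a rough, hypothetical illustration of why lane configuration matters, the mean queuing delay for a group of identical lanes can be approximated with the classic M/M/c (Erlang C) model. All rates below are invented for illustration, not taken from the case study.

```python
from math import factorial

def erlang_c(c: int, lam: float, mu: float) -> float:
    """Probability that an arriving vehicle must wait (Erlang C) in an
    M/M/c queue: c parallel lanes, arrival rate lam, service rate mu."""
    a = lam / mu                      # offered load in erlangs
    rho = a / c                       # per-lane utilisation
    if rho >= 1:
        raise ValueError("unstable queue: utilisation >= 1")
    blocked = a ** c / (factorial(c) * (1 - rho))
    return blocked / (sum(a ** k / factorial(k) for k in range(c)) + blocked)

def mean_wait(c: int, lam: float, mu: float) -> float:
    """Mean time spent queuing (Wq) for an M/M/c queue."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Illustrative numbers only: with demand near saturation, adding one lane
# (or swapping in a faster electronic lane) cuts queuing delay sharply.
wq_2_lanes = mean_wait(c=2, lam=0.9, mu=0.5)
wq_3_lanes = mean_wait(c=3, lam=0.9, mu=0.5)
```

A nonlinear integer program like the one described would then choose lane counts per payment type to minimize a demand-weighted sum of such waiting times.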
Abstract:
The profitability of momentum portfolios in equity markets derives from the continuation of stock returns over medium time horizons. The empirical evidence of momentum, however, differs significantly across markets around the world. The purpose of this dissertation is to: 1) help global investors determine the optimal selection and holding periods for momentum portfolios, 2) evaluate the profitability of the optimized momentum portfolios in different time periods and market states, 3) assess the investment strategy profits after considering transaction costs, and 4) interpret momentum returns within the framework of prior studies on investors’ behavior. Improving on the traditional practice of choosing arbitrary selection and holding periods, a genetic algorithm (GA) is employed. The GA performs a thorough and structured search to capture the return continuation and reversal patterns of momentum portfolios. Three portfolio formation methods are used (price momentum, earnings momentum, and combined earnings and price momentum), each paired with the non-linear optimization procedure (the GA). The focus is on common equity of the U.S. and a select number of countries, including Australia, France, Germany, Japan, the Netherlands, Sweden, Switzerland and the United Kingdom. The findings suggest that the evolutionary algorithm increases the annualized profits of the U.S. momentum portfolios. However, the difference in mean returns is statistically significant only in certain cases. In addition, after considering transaction costs, neither the price nor the combined earnings and price momentum portfolios appear to generate abnormal returns. Positive risk-adjusted returns net of trading costs are documented solely during “up” markets for a portfolio long in prior winners only. The results on the international momentum effects indicate that the GA improves the momentum returns by 2 to 5% on an annual basis.
In addition, the relation between momentum returns and exchange rate appreciation/depreciation is examined. The currency appreciation does not appear to influence significantly momentum profits. Further, the influence of the market state on momentum returns is not uniform across the countries considered. The implications of the above findings are discussed with a focus on the practical aspects of momentum investing, both in the U.S. and globally.
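The dissertation's GA searches over selection and holding periods; the idea can be sketched minimally as below, with a toy fitness surface standing in for the backtested annualized return. The fitness function, its peak at (9, 6), and all GA settings are illustrative assumptions, not the dissertation's values.

```python
import random

random.seed(0)

def fitness(j: int, k: int) -> float:
    """Hypothetical stand-in for the annualised return of a momentum
    portfolio with a j-month selection period and a k-month holding
    period; the dissertation evaluates this by backtesting instead."""
    return -((j - 9) ** 2 + (k - 6) ** 2)   # toy surface, peak at (9, 6)

def evolve(pop_size: int = 20, generations: int = 40, pmut: float = 0.3):
    pop = [(random.randint(1, 12), random.randint(1, 12))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(*g), reverse=True)
        parents = pop[: pop_size // 2]       # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            (j1, _), (_, k2) = random.sample(parents, 2)
            j, k = j1, k2                    # crossover: one gene per parent
            if random.random() < pmut:       # mutation: nudge each gene by 1
                j = min(12, max(1, j + random.choice((-1, 1))))
                k = min(12, max(1, k + random.choice((-1, 1))))
            children.append((j, k))
        pop = parents + children
    return max(pop, key=lambda g: fitness(*g))

best_j, best_k = evolve()
```

Because the parents are carried over unchanged, the best candidate never degrades, and the population hill-climbs toward the high-fitness region of the period grid.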
Abstract:
Acknowledgements This study was funded by a Natural Environment Research Council grant (NERC, project code: NBAF704). FML is funded by a NERC Doctoral Training Grant (Project Reference: NE/L50175X/1). RLS was an undergraduate student at the University of Aberdeen and benefitted from financial support from the School of Biological Sciences. DJM is indebted to Dr. Steven Weiss (University of Graz, Austria), Dr. Takashi Yada (National Research Institute of Fisheries Science, Japan), Dr. Robert Devlin (Fisheries and Oceans Canada, Canada), Prof. Samuel Martin (University of Aberdeen, UK), Mr. Neil Lincoln (Environment Agency, UK) and Prof. Colin Adams/Mr. Stuart Wilson (University of Glasgow, UK) for providing salmonid material or assisting with its sampling. We are grateful to staff at the Centre for Genomics Research (University of Liverpool, UK) (i.e. NERC Biomolecular Analysis Facility – Liverpool; NBAF-Liverpool) for performing sequence capture/Illumina sequencing and providing us with details on associated methods that were incorporated into the manuscript. Finally, we are grateful to the organizers of the Society of Experimental Biology Satellite meeting 'Genome-powered perspectives in integrative physiology and evolutionary biology' (held in Prague, July 2015) for inviting us to contribute to this special edition of Marine Genomics and hosting a really stimulating meeting.
Abstract:
Purpose: To investigate the effect of incorporating a beam spreading parameter in a beam angle optimization algorithm and to evaluate its efficacy for creating coplanar IMRT lung plans in conjunction with machine learning generated dose objectives.
Methods: Fifteen anonymized patient cases were each re-planned with ten values over the range of the beam spreading parameter, k, and analyzed with a Wilcoxon signed-rank test to determine whether any particular value resulted in significant improvement over the initially treated plan created by a trained dosimetrist. Dose constraints were generated by a machine learning algorithm and kept constant for each case across all k values. Parameters investigated for potential improvement included mean lung dose, V20 lung, V40 heart, 80% conformity index, and 90% conformity index.
Results: At a significance level of 5%, treatment plans created with this method resulted in significantly better conformity indices. Dose coverage to the PTV was improved by an average of 12% over the initial plans. At the same time, these treatment plans showed no significant difference in mean lung dose, V20 lung, or V40 heart when compared to the initial plans; however, it should be noted that these results could be influenced by the small sample size of patient cases.
Conclusions: The beam angle optimization algorithm, with the inclusion of the beam spreading parameter k, increases the dose conformity of the automatically generated treatment plans over that of the initial plans without adversely affecting the dose to organs at risk. This parameter can be varied according to physician preference in order to control the tradeoff between dose conformity and OAR sparing without compromising the integrity of the plan.
Abstract:
BACKGROUND: Perioperative fluid therapy remains a highly debated topic. Its purpose is to maintain or restore effective circulating blood volume during the immediate perioperative period. Maintaining effective circulating blood volume and pressure are key components of assuring adequate organ perfusion while avoiding the risks associated with either organ hypo- or hyperperfusion. Relative to perioperative fluid therapy, three inescapable conclusions exist: overhydration is bad, underhydration is bad, and what we assume about the fluid status of our patients may be incorrect. There is wide variability of practice, both between individuals and institutions. The aims of this paper are to clearly define the risks and benefits of fluid choices within the perioperative space, to describe current evidence-based methodologies for their administration, and ultimately to reduce the variability with which perioperative fluids are administered. METHODS: Based on the considerations above, a group of 72 researchers, well known within the field of fluid resuscitation, were invited, via email, to attend a meeting held in Chicago in 2011 to discuss perioperative fluid therapy. Of the 72 invitees, 14 researchers representing 7 countries attended, and thus the international Fluid Optimization Group (FOG) came into existence. These researchers, working collaboratively, reviewed the data from 162 different fluid resuscitation papers covering both operative and intensive care unit populations. This manuscript is the result of 3 years of evidence-based discussion, analysis, and synthesis of the currently known risks and benefits of individual fluids and the best methods for administering them. RESULTS: This review provides an overview of the components of an effective perioperative fluid administration plan and addresses both the physiologic principles and outcomes of fluid administration.
CONCLUSIONS: We recommend that both perioperative fluid choice and therapy be individualized. Patients should receive fluid therapy guided by predefined physiologic targets. Specifically, fluids should be administered when patients require augmentation of their perfusion and are also volume responsive. This paper provides a general approach to fluid therapy and practical recommendations.
Abstract:
Free energy calculations are a computational method for determining thermodynamic quantities, such as free energies of binding, via simulation.
Currently, due to computational and algorithmic limitations, free energy calculations are limited in scope.
In this work, we propose two methods for improving the efficiency of free energy calculations.
First, we expand the state space of alchemical intermediates, and show that this expansion enables us to calculate free energies along lower variance paths.
We use Q-learning, a reinforcement learning technique, to discover and optimize paths at low computational cost.
Second, we reduce the cost of sampling along a given path by using sequential Monte Carlo samplers.
We develop a new free energy estimator, pCrooks (pairwise Crooks), a variant on the Crooks fluctuation theorem (CFT), which enables decomposition of the variance of the free energy estimate for discrete paths, while retaining beneficial characteristics of CFT.
Combining these two advancements, we show that for some test models, optimal expanded-space paths have a nearly 80% reduction in variance relative to the standard path.
Additionally, our free energy estimator converges at a more consistent rate and on average 1.8 times faster when we enable path searching, even when the cost of path discovery and refinement is considered.
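As a toy illustration of the path-discovery idea only (not the authors' implementation), tabular Q-learning can find the minimum-variance route through a small graph of alchemical states, where each edge cost stands in for the variance of one pairwise free energy estimate; in practice the rewards would come from simulation, e.g. via the pCrooks decomposition.

```python
import random

random.seed(1)

# Hypothetical expanded state space: nodes are alchemical intermediates,
# edge cost stands in for the variance of each pairwise free energy
# estimate. The cheapest route from state 0 to state 4 is 0 -> 2 -> 3 -> 4.
cost = {(0, 1): 5.0, (0, 2): 1.0, (1, 3): 1.0,
        (2, 3): 1.0, (2, 4): 4.0, (3, 4): 1.0}
neighbors = {}
for a, b in cost:
    neighbors.setdefault(a, []).append(b)

GOAL, ALPHA, EPS = 4, 0.5, 0.2
Q = {}

for _ in range(500):                      # tabular Q-learning episodes
    s = 0
    while s != GOAL:
        acts = neighbors[s]
        if random.random() < EPS:         # epsilon-greedy exploration
            a = random.choice(acts)
        else:
            a = max(acts, key=lambda x: Q.get((s, x), 0.0))
        r = -cost[(s, a)]                 # reward = negative edge variance
        nxt = max((Q.get((a, x), 0.0) for x in neighbors.get(a, [])),
                  default=0.0)            # terminal state has value 0
        q = Q.get((s, a), 0.0)
        Q[(s, a)] = q + ALPHA * (r + nxt - q)
        s = a

# Read off the greedy (lowest-variance) path from the learned Q-table.
path, s = [0], 0
while s != GOAL:
    s = max(neighbors[s], key=lambda x: Q.get((s, x), 0.0))
    path.append(s)
```

The appeal of the reinforcement-learning framing is that the path is discovered from cheap, incremental variance estimates rather than by exhaustively evaluating every route through the expanded space.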
Abstract:
Valuable genetic variation for bean breeding programs is held within the common bean secondary gene pool, which consists of Phaseolus albescens, P. coccineus, P. costaricensis, and P. dumosus. However, the use of close relatives for bean improvement is limited due to the lack of knowledge about the genetic variation and genetic plasticity of many of these species. Characterisation and analysis of genetic diversity among the bean's wild relatives is necessary; in addition, conflicting phylogenies and relationships need to be understood, and a hypothesis of a hybrid origin of P. dumosus needs to be tested. This thesis research was oriented to generate information about the patterns of relationships among the common bean secondary gene pool, with particular focus on the species Phaseolus dumosus. This species displays a set of characteristics of agronomic interest, not only for the direct improvement of common bean but also as a source of valuable genes for adaptation to climate change. Here I undertake the first comprehensive study of the genetic diversity of P. dumosus as ascertained from both nuclear and chloroplast genome markers. A germplasm collection of the ancestral forms of P. dumosus, together with wild, landrace and cultivar representatives of all other species of the common bean secondary gene pool, was used to analyse the genetic diversity, phylogenetic relationships and structure of P. dumosus. Data on molecular variation were generated from sequences of the cpDNA loci accD-psaI spacer, trnT-trnL spacer, trnL intron and rps14-psaB spacer, and from nrDNA the ITS region. A whole-genome DArT array was developed and used for the genotyping of P. dumosus and its closest relatives. 4208 polymorphic markers were generated in the DArT array and, of those, 742 markers presented a call rate >95% and zero discordance. DArT markers revealed a moderate genetic polymorphism among P. dumosus samples (13% of polymorphic loci), while P. coccineus presented the highest level of polymorphism (88% of polymorphic loci). At the cpDNA level, one ancestral haplotype was detected among all samples of all species in the secondary gene pool. The ITS region of P. dumosus revealed high homogeneity and a polymorphism bias toward the P. coccineus genome. Phylogenetic reconstructions made with maximum likelihood and Bayesian methods confirmed previously reported discrepancies between the nuclear and chloroplast genomes of P. dumosus. The outline of relationships by hybridization networks displayed a considerable number of interactions within and between species. This research provides compelling evidence that P. dumosus arose from hybridisation between P. vulgaris and P. coccineus and confirms that P. costaricensis has likely been involved in the genesis or backcrossing events (or both) in the history of P. dumosus. The classification of the species P. persistentus was analysed based on cpDNA and ITS sequences; the results found this species to be closely related to P. vulgaris rather than to P. leptostachyus, as previously proposed. This research demonstrates that wild types of the secondary gene pool carry significant genetic variation, which makes them a valuable genetic resource for common bean improvement. The DArT array generated in this research is a valuable resource for breeding programs since it has the potential to be used in several approaches, including genotyping, discovery of novel traits, mapping and marker-trait associations. Efforts should be made to search for potential populations of P. persistentus and to increase the collection of new populations of P. dumosus, P. albescens and P. costaricensis that may provide valuable traits for introgression into common bean and other Phaseolus crops.
Abstract:
Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of the design variables [1]. The approach in this paper links the adjoint surface sensitivities (gradient of objective function with respect to the surface movement) with the parametric design velocities (movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to CAD variables.
For a successful implementation of shape optimization strategies in practical industrial cases, the choice of design variables or parameterisation scheme used for the model to be optimized plays a vital role. Where the goal is to base the optimization on a CAD model, the choices are either to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history to preserve the design intent [3]. The main advantage of using the feature-based model is that the optimized model produced can be used directly for downstream applications, including manufacturing and process planning.
This paper presents an approach for optimization based on the feature based CAD model, which uses CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to the change in design variable, the “Parametric Design Velocity” is calculated, which is defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advance in capability and robustness over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous (“real value”) parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as it has an API which provides access to the values of the parameters which control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation procedure includes calculating the geometrical movement along a normal direction between two discrete representations of the original and perturbed geometry respectively. Parametric design velocities can then be directly linked with adjoint surface sensitivities to extract the gradients used in a gradient-based optimization algorithm.
The optimisation of a flow optimisation problem is presented, in which the power dissipation of the flow in an automotive air duct is to be reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for a laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and proceed further with optimisation process.
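A minimal sketch of the finite-difference design velocity computation described above, under the simplifying assumption that the original and perturbed discrete surfaces share a point-wise correspondence (real CAD exports require projecting between the two meshes); the circle example is purely illustrative.

```python
import math

def design_velocity(points, perturbed, normals, dparam):
    """Parametric design velocity at each surface sample: the normal
    component of boundary movement per unit change of the CAD parameter.
    Assumes point-wise correspondence between the two surface samplings."""
    vel = []
    for p, q, n in zip(points, perturbed, normals):
        disp = tuple((qi - pi) / dparam for pi, qi in zip(p, q))
        vel.append(sum(d * ni for d, ni in zip(disp, n)))  # dot(disp, n)
    return vel

# Toy check: a circle of radius r; perturbing r moves every boundary point
# radially, so the design velocity with respect to r should be 1 everywhere.
r, dr = 1.0, 1e-3
thetas = [2 * math.pi * k / 8 for k in range(8)]
pts  = [(r * math.cos(t), r * math.sin(t)) for t in thetas]
pts2 = [((r + dr) * math.cos(t), (r + dr) * math.sin(t)) for t in thetas]
nrm  = [(math.cos(t), math.sin(t)) for t in thetas]
vel = design_velocity(pts, pts2, nrm, dr)
```

The cost function gradient with respect to the CAD parameter is then the surface integral of the adjoint sensitivity weighted by this velocity field.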
Abstract:
The lack of flexibility in logistic systems currently on the market leads to the development of new innovative transportation systems. In order to find the optimal configuration of such a system depending on the current goal functions, for example minimization of transport times and maximization of the throughput, various mathematical methods of multi-criteria optimization are applicable. In this work, the concept of a complex transportation system is presented. Furthermore, the question of finding the optimal configuration of such a system through mathematical methods of optimization is considered.
Abstract:
This keynote presentation will report some of our research work and experience on the development and applications of relevant methods, models, systems and simulation techniques in support of different types and various levels of decision making for business, management and engineering. In particular, the following topics will be covered:
- Modelling, multi-agent-based simulation and analysis of the allocation management of carbon dioxide emission permits in China (Nanfeng Liu & Shuliang Li)
- Agent-based simulation of the dynamic evolution of enterprise carbon assets (Yin Zeng & Shuliang Li)
- A framework & system for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps: a big data perspective (Jin Xu, Zheng Li, Shuliang Li & Yanyan Zhang)
- Open innovation: intelligent model, social media & complex adaptive system simulation (Shuliang Li & Jim Zheng Li)
- A framework, model and software prototype for modelling and simulation for deshopping behaviour and how companies respond (Shawkat Rahman & Shuliang Li)
- Integrating multiple agents, simulation, knowledge bases and fuzzy logic for international marketing decision making (Shuliang Li & Jim Zheng Li)
- A Web-based hybrid intelligent system for combined conventional, digital, mobile, social media and mobile marketing strategy formulation (Shuliang Li & Jim Zheng Li)
- A hybrid intelligent model for Web & social media dynamics, and evolutionary and adaptive branding (Shuliang Li)
- A hybrid paradigm for modelling, simulation and analysis of brand virality in social media (Shuliang Li & Jim Zheng Li)
- Network configuration management: attack paradigms and architectures for computer network survivability (Tero Karvinen & Shuliang Li)
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The overwhelming amount and unprecedented speed of publication in the biomedical domain make it difficult for life science researchers to acquire and maintain a broad view of the field and gather all information that would be relevant for their research. As a response to this problem, the BioNLP (Biomedical Natural Language Processing) community of researchers has emerged and strives to assist life science researchers by developing modern natural language processing (NLP), information extraction (IE) and information retrieval (IR) methods that can be applied at large scale, to scan the whole publicly available biomedical literature and extract and aggregate the information found within, while automatically normalizing the variability of natural language statements. Among different tasks, biomedical event extraction has received much attention within the BioNLP community recently. Biomedical event extraction constitutes the identification of biological processes and interactions described in biomedical literature, and their representation as a set of recursive event structures. The 2009–2013 series of BioNLP Shared Tasks on Event Extraction have given rise to a number of event extraction systems, several of which have been applied at a large scale (the full set of PubMed abstracts and PubMed Central Open Access full-text articles), leading to the creation of massive biomedical event databases, each containing millions of events. Since top-ranking event extraction systems are based on machine-learning approaches and are trained on the narrow-domain, carefully selected Shared Task training data, their performance drops when faced with the topically highly varied PubMed and PubMed Central documents. Specifically, false-positive predictions by these systems lead to the generation of incorrect biomolecular events, which are then noticed by end-users.
This thesis proposes a novel post-processing approach, utilizing a combination of supervised and unsupervised learning techniques, that can automatically identify and filter out a considerable proportion of incorrect events from large-scale event databases, thus increasing the general credibility of those databases. The second part of this thesis is dedicated to a system we developed for hypothesis generation from large-scale event databases, which is able to discover novel biomolecular interactions among genes/gene-products. We cast the hypothesis generation problem as supervised network topology prediction, i.e. predicting new edges in the network, as well as the types and directions of those edges, utilizing a set of features that can be extracted from large biomedical event networks. Routine machine learning evaluation results, as well as manual evaluation results, suggest that the problem is indeed learnable. This work won the Best Paper Award at The 5th International Symposium on Languages in Biology and Medicine (LBM 2013).
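As an illustrative sketch only (the thesis uses a supervised learner over a richer feature set), a single classic topology feature, the number of common neighbors, already ranks candidate edges for hypothesis generation; the network and gene IDs below are hypothetical.

```python
from itertools import combinations

# Toy undirected event network among genes/gene-products (hypothetical IDs).
edges = {("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"), ("D", "E")}

adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def common_neighbors(u, v):
    """A classic topology feature for link prediction: the number of
    shared interaction partners of u and v."""
    return len(adj.get(u, set()) & adj.get(v, set()))

# Rank all non-edges by the feature; high scorers are candidate hypotheses.
nodes = sorted(adj)
candidates = [(u, v) for u, v in combinations(nodes, 2)
              if (u, v) not in edges and (v, u) not in edges]
ranked = sorted(candidates, key=lambda p: common_neighbors(*p), reverse=True)
```

In a supervised setting such features become inputs to a classifier trained on edges held out from the observed network, which is what makes the topology prediction problem learnable.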
Abstract:
Cancer remains an unresolved challenge for modern medicine. Every year millions of people, from children to adults, die because current treatments are unable to meet the challenge. Research must continue in the area of new biomarkers for tumors. Molecular biology has evolved greatly in recent years; however, this knowledge has not yet been applied in medicine. Biological findings should be used to improve diagnostics and treatment modalities. In this thesis, human formalin-fixed paraffin-embedded colorectal and breast cancer samples were used to optimize a double immunofluorescence staining protocol. Immunohistochemistry was also performed in order to visualize the expression pattern of each biomarker. Concerning double immunofluorescence, the feasibility of using primary antibodies raised in different and in the same host species was also tested. Finally, the established methods for simultaneous multicolor immunofluorescence imaging of formalin-fixed paraffin-embedded specimens were applied to the detection of pairs of potential biomarkers of colorectal cancer (EGFR, pmTOR, pAKT, Vimentin, Cytokeratin Pan, Ezrin, E-cadherin) and breast cancer (Securin, PTTG1IP, Cleaved caspase 3, Ki67).
Abstract:
The synthesis and optimization of two Li-ion solid electrolytes were studied in this work. Different combinations of precursors were used to prepare La0.5Li0.5TiO3 via mechanosynthesis. Despite the ability to form a perovskite phase by the mechanochemical reaction, it was not possible to obtain a pure La0.5Li0.5TiO3 phase by this process. Of all seven combinations of precursors and conditions tested, the one where La2O3, Li2CO3 and TiO2 were milled for 480 min (LaOLiCO-480) showed the best results, with trace impurity phases still being observed. The main impurity phase was La2O3 after mechanosynthesis (22.84%) and Li2TiO3 after calcination (4.20%). Two different sol-gel methods were used to substitute boron on the Zr-site of Li1+xZr2-xBx(PO4)3 or the P-site of Li1+6xZr2(P1-xBxO4)3, with the doping being achieved on the Zr-site using a method adapted from Alamo et al. (1989). The results show that the Zr-site, and not the P-site, is the preferential mechanism for B doping of LiZr2(PO4)3. Rietveld refinement of the unit-cell parameters was performed, and consideration of Vegard’s law verified that it is possible to obtain phase purity up to x = 0.05. This corresponds with the phases present in the XRD data, which showed the additional presence of the low-temperature (monoclinic) phase for powder sintered at 1200 °C for 12 h for compositions with x ≥ 0.075. The compositions inside the solid solution undergo a phase transition from triclinic (PDF#01-074-2562) to rhombohedral (PDF#01-070-6734) on heating from 25 to 100 °C, as reported in the literature for the base composition. Despite several efforts, it was not possible to obtain dense pellets with physical integrity after sintering; further work is required to obtain dense pellets for the electrochemical characterisation of LiZr2(PO4)3 and Li1.05Zr1.95B0.05(PO4)3.