982 results for Key cutting algorithm
Abstract:
Nested clade phylogeographic analysis (NCPA) is a popular method for reconstructing the demographic history of spatially distributed populations from genetic data. Although some parts of the analysis are automated, there is no unique and widely followed algorithm for doing this in its entirety, beginning with the data, and ending with the inferences drawn from the data. This article describes a method that automates NCPA, thereby providing a framework for replicating analyses in an objective way. To do so, a number of decisions need to be made so that the automated implementation is representative of previous analyses. We review how the NCPA procedure has evolved since its inception and conclude that there is scope for some variability in the manual application of NCPA. We apply the automated software to three published datasets previously analyzed manually and replicate many details of the manual analyses, suggesting that the current algorithm is representative of how a typical user will perform NCPA. We simulate a large number of replicate datasets for geographically distributed, but entirely random-mating, populations. These are then analyzed using the automated NCPA algorithm. Results indicate that NCPA tends to give a high frequency of false positives. In our simulations we observe that 14% of the clades give a conclusive inference that a demographic event has occurred, and that 75% of the datasets have at least one clade that gives such an inference. This is mainly due to the generation of multiple statistics per clade, of which only one is required to be significant to apply the inference key. We survey the inferences that have been made in recent publications and show that the most commonly inferred processes (restricted gene flow with isolation by distance and contiguous range expansion) are those that are commonly inferred in our simulations. However, published datasets typically yield a richer set of inferences with NCPA than obtained in our random-mating simulations, and further testing of NCPA with models of structured populations is necessary to examine its accuracy.
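As a hedged illustration of the multiple-testing effect described above (an illustrative sketch, not the paper's coalescent simulations; the per-clade statistic count, clade count and significance level below are assumptions), the following Python snippet shows how requiring only one of several per-clade statistics to be significant inflates the rate of spurious inferences:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05               # significance level (assumed)
n_stats = 4                # distance statistics per clade (assumed)
n_clades = 20              # clades per dataset (assumed)
n_datasets = 10_000

# Under a null of pure random mating, each statistic's p-value is roughly uniform.
p = rng.uniform(size=(n_datasets, n_clades, n_stats))
clade_hits = (p < alpha).any(axis=2)            # clade yields an "inference"

print(f"analytic per-clade rate:  {1 - (1 - alpha) ** n_stats:.3f}")
print(f"simulated per-clade rate: {clade_hits.mean():.3f}")
print(f"datasets with >= 1 such clade: {clade_hits.any(axis=1).mean():.3f}")
```

With four statistics per clade at alpha = 0.05, the expected per-clade rate is roughly 19%, which is of the same order as the false-positive frequencies reported above.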
Abstract:
Despite advances in tissue culture techniques, propagation by leafy, softwood cuttings is the preferred, practical system for vegetative reproduction of many tree and shrub species. Species are frequently defined as 'difficult'- or 'easy-to-root' when propagated by conventional cuttings. Speed of rooting is often linked with ease of propagation, and slow-to-root species may be 'difficult' precisely because tissues deteriorate prior to the formation of adventitious roots. Even when roots form, limited development of these may impair the establishment of a cutting. In this study we used softwood cuttings of cashew (Anacardium occidentale), a species considered 'difficult-to-root'. We aimed to test the hypothesis that the speed and extent of early rooting are critical in determining success with this species, and that the potential to form adventitious roots will decrease with time in the propagation environment. Using two genotypes, initial rooting rates were examined in the presence or absence of exogenous auxin. In cuttings that formed adventitious roots, either entire roots or root tips were removed to determine if further root formation/development was feasible. To investigate if subsequent root responses were linked to phytohormone action, a number of cuttings were also treated with either exogenous auxin (indole-3-butyric acid, IBA) or cytokinin (zeatin). Despite the reputation of Anacardium as being 'difficult-to-root', we found high rooting rates in two genotypes (AC 10 and CCP 1001). Removing adventitious roots from cuttings and returning them to the propagation environment resulted in subsequent re-rooting. Indeed, individual cuttings could develop new adventitious roots on four to five separate occasions over a 9-week period. Data showed that rooting potential increased, not decreased, with time in the propagation environment, and that cutting viability was unaffected. Root expression was faster (8-15 days) after the removal of previous roots compared to when the cuttings were first stuck (21 days). Exposing cuttings to IBA at the time of preparation improved initial rooting in AC 10, but not in CCP 1001. Application of IBA once roots had formed had little effect on subsequent development, but zeatin reduced root length and promoted root number and dry matter accumulation. These results challenge our hypothesis, and indicate that rooting potential remains high in Anacardium. The precise mechanisms that regulate the number of adventitious roots expressed remain to be determined. Nevertheless, results indicate that rooting potential can be high in 'difficult-to-root' species, and suggest that providing supportive environments is the key to expressing this potential. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
Objectives and methods: An influenza B virus plasmid-based rescue system was used to introduce site-specific mutations, previously observed in neuraminidase (NA) inhibitor-resistant viruses, into the NA protein of six recombinant viruses. Three mutations observed only among in vitro selected zanamivir-resistant influenza A mutants were introduced into the B/Beijing/1/87 virus NA protein to change residue E116 to glycine, alanine or aspartic acid. Residue E116 was also mutated to valine, a mutation found in the clinic among oseltamivir-resistant viruses. An arginine to lysine change at position 291 (292 in N2 numbering) mimicked that seen frequently in influenza A N2 clinical isolates resistant to oseltamivir. Similarly, an arginine to lysine change at position 149 (152 in N2 numbering) was made to reproduce the change found in the only reported zanamivir-resistant clinical isolate of influenza B virus. In vitro selection and prolonged treatment in the clinic lead to resistance pathways that require compensatory mutations in the haemagglutinin gene, but these appear not to be important for mutants isolated from immunocompetent patients. The reverse genetics system was therefore used to generate mutants containing only the NA mutation. Results and conclusions: With the exception of a virus containing the E116G mutation, mutant viruses were attenuated to different levels in comparison with wild-type virus. This attenuation was a result of altered NA activity or stability, depending on the introduced mutation. Mutant viruses displayed increased resistance to zanamivir, oseltamivir and peramivir, with certain viruses displaying cross-resistance to all three drugs.
Abstract:
Myosotis cameroonensis Cheek & R.Becker (Boraginaceae) is described from Cameroon. Its conservation status and taxonomic affinities are assessed, and an updated key to the tropical African species of the genus is presented.
Abstract:
We have developed a novel Hill-climbing genetic algorithm (GA) for simulation of protein folding. The program (written in C) builds a set of Cartesian points to represent an unfolded polypeptide's backbone. The dihedral angles determining the chain's configuration are stored in an array of chromosome structures that is copied and then mutated. The fitness of the mutated chain's configuration is determined by its radius of gyration. A four-helix bundle was used to optimise simulation conditions, and the program was compared with other, larger, genetic algorithms on a variety of structures. The program ran 50% faster than other GA programs. Overall, tests on 100 non-redundant structures gave comparable results to other genetic algorithms, with the Hill-climbing program running from between 20 and 50% faster. Examples including crambin, cytochrome c, cytochrome B and hemerythrin gave good secondary structure fits with overall alpha carbon atom rms deviations of between 5 and 5.6 Angstrom with an optimised hydrophobic term in the fitness function. (C) 2003 Elsevier Ltd. All rights reserved.
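A minimal toy sketch of the hill-climbing mutation scheme with a radius-of-gyration fitness described above; it uses a simplified 2D unit-step chain rather than the authors' C implementation or full backbone dihedral geometry, and all parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def chain_coords(angles):
    """Build chain coordinates from successive turning angles (unit bond lengths)."""
    headings = np.cumsum(angles)
    steps = np.stack([np.cos(headings), np.sin(headings)], axis=1)
    return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

def radius_of_gyration(points):
    centred = points - points.mean(axis=0)
    return np.sqrt((centred ** 2).sum(axis=1).mean())

n_residues = 60
genome = rng.uniform(-np.pi, np.pi, size=n_residues)   # "chromosome" of angles
best = radius_of_gyration(chain_coords(genome))

for _ in range(20_000):
    child = genome.copy()
    i = rng.integers(n_residues)
    child[i] += rng.normal(scale=0.3)                    # point mutation
    fitness = radius_of_gyration(chain_coords(child))
    if fitness < best:                                   # hill-climb: keep only improvements
        genome, best = child, fitness

print(f"final radius of gyration: {best:.3f}")
```

The accept-only-if-better rule is the defining feature of the hill-climbing variant: a copied, mutated chromosome replaces its parent only when its radius-of-gyration fitness improves.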
Abstract:
Ferritins are nearly ubiquitous iron storage proteins playing a fundamental role in iron metabolism. They are composed of 24 subunits forming a spherical protein shell encompassing a central iron storage cavity. The iron storage mechanism involves the initial binding and subsequent O₂-dependent oxidation of two Fe²⁺ ions located at sites A and B within the highly conserved dinuclear "ferroxidase center" in individual subunits. Unlike animal ferritins and the heme-containing bacterioferritins, the Escherichia coli ferritin possesses an additional iron-binding site (site C) located on the inner surface of the protein shell close to the ferroxidase center. We report the structures of five E. coli ferritin variants and their Fe³⁺ and Zn²⁺ (a redox-stable alternative for Fe²⁺) derivatives. Single carboxyl ligand replacements in sites A, B, and C gave unique effects on metal binding, which explain the observed changes in Fe²⁺ oxidation rates. Binding of Fe²⁺ at both A and B sites is clearly essential for rapid Fe²⁺ oxidation, and the linking of Fe²⁺ at site B to Fe²⁺ at site C enables the oxidation of three Fe²⁺ ions. The transient binding of Fe²⁺ at one of three newly observed Zn²⁺ sites may allow the oxidation of four Fe²⁺ by one dioxygen molecule.
Abstract:
Liquid chromatography-mass spectrometry (LC-MS) datasets can be compared or combined following chromatographic alignment. Here we describe a simple solution to the specific problem of aligning one LC-MS dataset and one LC-MS/MS dataset, acquired on separate instruments from an enzymatic digest of a protein mixture, using feature extraction and a genetic algorithm. First, the LC-MS dataset is searched within a few ppm of the calculated theoretical masses of peptides confidently identified by LC-MS/MS. A piecewise linear function is then fitted to these matched peptides using a genetic algorithm with a fitness function that is insensitive to incorrect matches but sufficiently flexible to adapt to the discrete shifts common when comparing LC datasets. We demonstrate the utility of this method by aligning ion trap LC-MS/MS data with accurate LC-MS data from an FTICR mass spectrometer and show how hybrid datasets can improve peptide and protein identification by combining the speed of the ion trap with the mass accuracy of the FTICR, similar to using a hybrid ion trap-FTICR instrument. We also show that the high resolving power of FTICR can improve precision and linear dynamic range in quantitative proteomics. The alignment software, msalign, is freely available as open source.
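The following is a simplified sketch, not the msalign implementation, of the idea described above: fit a piecewise linear retention-time map with an evolutionary search whose robust, capped-loss fitness tolerates incorrect peptide matches. The synthetic data, breakpoint placement and mutation scheme are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic matched peptides: LC-MS/MS retention time -> LC-MS retention time,
# with a discrete shift and a fraction of incorrect matches (outliers).
x = np.sort(rng.uniform(0, 100, 300))
y = x + np.where(x < 50, 2.0, 6.0) + rng.normal(scale=0.3, size=x.size)
outliers = rng.random(x.size) < 0.15
y[outliers] = rng.uniform(0, 110, outliers.sum())        # wrong matches

knots_x = np.linspace(0, 100, 8)                          # fixed breakpoints

def robust_fitness(knots_y, cap=2.0):
    resid = np.abs(np.interp(x, knots_x, knots_y) - y)
    return np.minimum(resid, cap).mean()                  # capped loss shrugs off bad matches

# Simple mutation-only evolutionary search over the knot heights.
genome = knots_x.copy()                                   # start from the identity map
best = robust_fitness(genome)
for _ in range(50_000):
    child = genome + rng.normal(scale=0.2, size=genome.size) * (rng.random(genome.size) < 0.3)
    f = robust_fitness(child)
    if f <= best:
        genome, best = child, f

print("estimated retention-time shifts at knots:", np.round(genome - knots_x, 2))
```

Capping each residual means a mis-matched peptide contributes at most a fixed penalty, so the fitted map follows the correctly matched majority while the piecewise linear form can still track discrete shifts between runs.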
Abstract:
The main objectives of this paper are: firstly, to identify key issues related to sustainable intelligent buildings (environmental, social, economic and technological factors) and to develop a conceptual model for the selection of appropriate KPIs; secondly, to critically test stakeholders' perceptions and values of selected KPIs for intelligent buildings; and thirdly, to develop a new model for measuring the level of sustainability of sustainable intelligent buildings. This paper uses a consensus-based model (Sustainable Built Environment Tool, SuBETool), which is analysed using the analytical hierarchical process (AHP) for multi-criteria decision-making. The use of the multi-attribute model for priority setting in the sustainability assessment of intelligent buildings is introduced. The paper commences by reviewing the literature on sustainable intelligent buildings research and presents a pilot study investigating the problems of complexity and subjectivity. This study is based upon a survey of perceptions held by selected stakeholders and the value they attribute to selected KPIs. It is argued that the benefit of the proposed model (SuBETool) is as a 'tool' for 'comparative' rather than absolute measurement. It has the potential to provide useful lessons from current sustainability assessment methods for the strategic future of sustainable intelligent buildings, in order to improve a building's performance and to deliver objective outcomes. The findings of this survey enrich the field of intelligent buildings in two ways. Firstly, they give a detailed insight into the selection of sustainable building indicators, as well as their degree of importance. Secondly, they critically test stakeholders' perceptions and values of selected KPIs for intelligent buildings. It is concluded that the priority levels for the selected criteria are largely dependent on the integrated design team, which includes the client, architects, engineers and facilities managers.
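For readers unfamiliar with AHP, the sketch below shows the standard priority-weight and consistency-ratio calculation that such a multi-criteria model rests on; the criteria names and pairwise judgements are hypothetical and are not taken from the paper's survey:

```python
import numpy as np

criteria = ["environmental", "social", "economic", "technological"]

# Saaty-scale reciprocal pairwise comparison matrix (hypothetical judgements).
A = np.array([
    [1,   3,   2,   5],
    [1/3, 1,   1/2, 2],
    [1/2, 2,   1,   3],
    [1/5, 1/2, 1/3, 1],
])

# Priority weights from the principal eigenvector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (Saaty random index 0.90 for a 4x4 matrix).
lam_max = eigvals.real[k]
ci = (lam_max - len(A)) / (len(A) - 1)
cr = ci / 0.90

for name, weight in zip(criteria, w):
    print(f"{name:14s} {weight:.3f}")
print(f"consistency ratio: {cr:.3f}")
```

A consistency ratio below about 0.1 is the conventional threshold for accepting a stakeholder's judgements; above it, the pairwise comparisons are usually revisited.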
Abstract:
The assumption that negligible work is involved in the formation of new surfaces in the machining of ductile metals is re-examined in the light of both current Finite Element Method (FEM) simulations of cutting and modern ductile fracture mechanics. The work associated with separation criteria in FEM models is shown to be in the kJ/m² range rather than the few J/m² of the surface energy (surface tension) employed by Shaw in his pioneering study of 1954, following which consideration of surface work has been omitted from analyses of metal cutting. The much greater values of specific surface work are not surprising in terms of ductile fracture mechanics, where kJ/m² values of fracture toughness are typical of the ductile metals involved in machining studies. This paper shows that when even the simple Ernst–Merchant analysis is generalised to include significant surface work, many of the experimental observations for which traditional ‘plasticity and friction only’ analyses seem to have no quantitative explanation are now given meaning. In particular, the primary shear plane angle φ becomes material-dependent. The experimental increase of φ up to a saturated level, as the uncut chip thickness is increased, is predicted. The positive intercepts found in plots of cutting force vs. depth of cut, and in plots of force resolved along the primary shear plane vs. area of shear plane, are shown to be measures of the specific surface work. It is demonstrated that neglect of these intercepts in cutting analyses is the reason why anomalously high values of shear yield stress are derived at those very small uncut chip thicknesses at which the so-called size effect becomes evident. The material toughness/strength ratio, combined with the depth of cut to form a non-dimensional parameter, is shown to control ductile cutting mechanics. The toughness/strength ratio of a given material will change with rate, temperature and thermomechanical treatment, and the influence of such changes, together with changes in depth of cut, on the character of machining is discussed. Strength or hardness alone is insufficient to describe machining. The failure of the Ernst–Merchant theory seems less to do with problems of uniqueness and the validity of minimum work, and more to do with the problem not being properly posed. The new analysis compares favourably and consistently with the wide body of experimental results available in the literature. Why considerable progress in the understanding of metal cutting has been achieved without reference to significant surface work is also discussed.
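A back-of-envelope illustration of the non-dimensional group mentioned above, taken here in the assumed form Z = R / (tau_y * t0), i.e. the toughness/strength ratio divided by the depth of cut; the property values are illustrative assumptions, not data from the paper:

```python
R = 20e3          # fracture toughness / specific surface work, J/m^2 (assumed)
tau_y = 400e6     # shear yield stress, Pa (assumed)

for t0 in (10e-6, 100e-6, 1e-3):      # uncut chip thickness, m
    Z = R / (tau_y * t0)              # non-dimensional toughness/strength/depth group
    print(f"t0 = {t0 * 1e6:7.1f} um   Z = {Z:6.3f}")
```

The group grows as the uncut chip thickness shrinks, so the surface-work term becomes comparable with the plasticity term only at small depths of cut, which is consistent with where the size effect discussed above appears.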
Abstract:
An exploratory model for cutting is presented which incorporates fracture toughness as well as the commonly considered effects of plasticity and friction. The periodic load fluctuations seen in cutting force dynamometer tests are predicted, and considerations of chatter and surface finish follow. A non-dimensional group is put forward to classify different regimes of material response to machining. It leads to tentative explanations for the difficulties of cutting materials such as ceramics and brittle polymers, and also relates to the formation of discontinuous chips. Experiments on a range of solids with widely varying toughness/strength ratios generally agree with the analysis.
Abstract:
A review is given of the mechanics of cutting, ranging from the slicing of thin floppy offcuts (where there is negligible elasticity and no permanent deformation of the offcut) to the machining of ductile metals (where there is severe permanent distortion of the offcut/chip). Materials scientists employ the former conditions to determine the fracture toughness of ‘soft’ solids such as biological materials and foodstuffs. In contrast, traditional analyses of metalcutting are based on plasticity and friction only, and do not incorporate toughness. The machining theories are inadequate in a number of ways but a recent paper has shown that when ductile work of fracture is included many, if not all, of the shortcomings are removed. Support for the new analysis is given by examination of FEM simulations of metalcutting which reveal that a ‘separation criterion’ has to be employed at the tool tip. Some consideration shows that the separation criteria are versions of void-initiation-growth-and-coalescence models employed in ductile fracture mechanics. The new analysis shows that cutting forces for ductile materials depend upon the fracture toughness as well as plasticity and friction, and reveals a simple way of determining both toughness and flow stress from cutting experiments. Examples are given for a wide range of materials including metals, polymers and wood, and comparison is made with the same properties independently determined using conventional testpieces. Because cutting can be steady state, a new way is presented for simultaneously measuring toughness and flow stress at controlled speeds and strain rates.
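A schematic sketch of the "simple way of determining both toughness and flow stress from cutting experiments" mentioned above: regress cutting force per unit width on uncut chip thickness, read the specific surface work from the intercept and a lumped plasticity/friction term from the slope. The linear form and the numbers are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Synthetic steady-cutting data: cutting force per unit width vs uncut chip thickness (SI units).
t0 = np.array([50, 100, 150, 200, 250]) * 1e-6                      # m
fc_per_w = np.array([46.1e3, 74.3e3, 106.0e3, 134.8e3, 165.9e3])     # N/m (made up)

slope, intercept = np.polyfit(t0, fc_per_w, 1)
print(f"intercept ~ specific surface work R: {intercept / 1e3:.1f} kJ/m^2")
print(f"slope ~ plasticity/friction term:    {slope / 1e6:.0f} MPa")
```

Because cutting can be run as a steady-state test, such a regression can be repeated at controlled speeds and strain rates, which is the attraction noted in the abstract.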
Abstract:
Why it is easier to cut with even the sharpest knife when 'pressing down and sliding' than when merely 'pressing down alone' is explained. A variety of cases of cutting where the blade and workpiece have different relative motions is analysed, and it is shown that the greater the 'slice/push ratio' ξ, given by (blade speed parallel to the cutting edge / blade speed perpendicular to the cutting edge), the lower the cutting forces. However, friction limits the reductions attainable at the highest ξ. The analysis is applied to the geometry of a wheel cutting device (delicatessen slicer), and experiments with a cheddar cheese and a salami using such an instrumented device confirm the general predictions. (C) 2004 Kluwer Academic Publishers.
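A small sketch evaluating the slice/push ratio defined above for a simplified rotating-wheel (delicatessen slicer) geometry; the dimensions and feed rate are hypothetical, and the rim speed is taken as the blade speed parallel to the cutting edge:

```python
import math

wheel_radius = 0.10          # m (assumed)
rpm = 300                    # wheel speed (assumed)
feed_speed = 0.05            # m/s, motion perpendicular to the cutting edge (assumed)

rim_speed = 2 * math.pi * wheel_radius * rpm / 60   # blade speed along the edge, m/s
xi = rim_speed / feed_speed                         # slice/push ratio
print(f"rim speed = {rim_speed:.2f} m/s, slice/push ratio xi = {xi:.1f}")
```

A fast rim with a slow feed gives a large ξ, which is why such slicers cut with little pushing force, subject to the friction limit noted above.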