55 results for efficient algorithms
Abstract:
Peanut, one of the world's most important oilseed crops, has a narrow germplasm base and lacks sources of resistance to several major diseases. The species is considered recalcitrant to transformation, with few confirmed transgenic plants upon particle bombardment or Agrobacterium treatment. Reported transformation methods are limited by low efficiency, cultivar specificity, chimeric or infertile transformants, or availability of explants. Here we present a method to efficiently transform cultivars in both botanical types of peanut, by (1) particle bombardment into embryogenic callus derived from mature seeds, (2) escape-free (not stepwise) selection for hygromycin B resistance, (3) brief osmotic desiccation followed by sequential incubation on charcoal and cytokinin-containing media, resulting in efficient conversion of transformed somatic embryos into fertile, non-chimeric, transgenic plants. The method produces three to six independent transformants per bombardment of 10 cm² embryogenic callus. Potted transgenic plant lines can be regenerated within 9 months of callus initiation, or 6 months after bombardment. Transgene copy number ranged from one to 20 with multiple integration sites. There was ca. 50% coexpression of hph and luc or uidA genes coprecipitated on separate plasmids. Reporter gene (luc) expression was confirmed in T1 progeny from each of six tested independent transformants. Insufficient seeds were produced under containment conditions to determine segregation ratios. The practicality of the technique for efficient cotransformation with selected and unselected genes is demonstrated using major commercial peanut varieties in Australia (cv. NC-7, a virginia market type) and Indonesia (cv. Gajah, a spanish market type).
Abstract:
We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
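The following Python sketch (not from the study) illustrates the kind of greedy, benefit-per-area selection heuristic and the suboptimality measure, percentage increase over the optimal result, described above; the site names, areas, and land types are hypothetical.

# Illustrative sketch of a greedy reserve-selection heuristic and the
# suboptimality measure (percentage above the optimal total area).
# All site data below are hypothetical.
from itertools import combinations

sites = {                      # site -> (area, land types represented)
    "A": (10, {"heath", "swamp"}),
    "B": (4,  {"swamp"}),
    "C": (7,  {"forest", "dune"}),
    "D": (3,  {"heath"}),
    "E": (5,  {"dune"}),
}
targets = {"heath", "swamp", "forest", "dune"}   # each feature needed at least once

def greedy(sites, targets):
    # Pick the site covering the most unmet features per unit area.
    chosen, unmet = [], set(targets)
    while unmet:
        best = max(sites, key=lambda s: len(sites[s][1] & unmet) / sites[s][0]
                                        if s not in chosen else -1)
        chosen.append(best)
        unmet -= sites[best][1]
    return chosen

def optimal(sites, targets):
    # Exhaustive search for the minimum-area reserve system (small data only).
    best, best_area = None, float("inf")
    for r in range(1, len(sites) + 1):
        for combo in combinations(sites, r):
            if targets <= set().union(*(sites[s][1] for s in combo)):
                area = sum(sites[s][0] for s in combo)
                if area < best_area:
                    best, best_area = combo, area
    return list(best), best_area

heur = greedy(sites, targets)
opt, opt_area = optimal(sites, targets)
heur_area = sum(sites[s][0] for s in heur)
suboptimality = 100 * (heur_area - opt_area) / opt_area   # % above the optimum
print(heur, opt, f"{suboptimality:.1f}%")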
Abstract:
Giles and Goss (1980) have suggested that, if a futures market provides a forward pricing function, then it is an efficient market. In this article a simple test of whether the Australian Wool Futures market is efficient is proposed. The test is based on applying cointegration techniques to test the Law of One Price over three-, six-, nine-, and twelve-month spreads of futures prices. We found that the futures market is efficient for up to a six-month spread, but no further into the future. Because futures market prices can be used to predict spot prices up to six months in advance, woolgrowers can use the futures price to decide when to market their clip, but not for longer-term production planning decisions. (C) 1999 John Wiley & Sons, Inc.
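A hedged illustration of how such a cointegration test might be scripted with standard tools follows; the data file wool_prices.csv and its column names are hypothetical, and the paper's exact econometric specification (including the Law of One Price restriction) may differ.

# Sketch of an Engle-Granger style cointegration check between spot and
# futures prices at several spreads. File and column names are hypothetical.
import pandas as pd
from statsmodels.tsa.stattools import coint

prices = pd.read_csv("wool_prices.csv")          # hypothetical price series
spot = prices["spot"]
for months in (3, 6, 9, 12):
    fut = prices[f"futures_{months}m"]           # futures price at the given spread
    t_stat, p_value, _ = coint(spot, fut)        # Engle-Granger cointegration test
    verdict = "cointegrated (consistent with efficiency)" if p_value < 0.05 else "not cointegrated"
    print(f"{months}-month spread: p = {p_value:.3f} -> {verdict}")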
Abstract:
Overcoming the phenomenon known as difficult synthetic sequences has been a major goal in solid-phase peptide synthesis for over 30 years. In this work the advantages of amide backbone-substitution in the solid-phase synthesis of difficult peptides are augmented by developing an activated N-alpha-acyl transfer auxiliary. Apart from disrupting troublesome intermolecular hydrogen-bonding networks, the primary function of the activated N-alpha-auxiliary was to facilitate clean and efficient acyl capture of large or beta-branched amino acids and improve acyl transfer yields to the secondary N-alpha-amine. We found o-hydroxyl-substituted nitrobenzyl (Hnb) groups were suitable N-alpha-auxiliaries for this purpose. The relative acyl transfer efficiency of the Hnb auxiliary was superior to the 2-hydroxy-4-methoxybenzyl (Hmb) auxiliary with protected amino acids of varying size. Significantly, this difference in efficiency was more pronounced between more sterically demanding amino acids. The Hnb auxiliary is readily incorporated at the N-alpha-amine during SPPS by reductive alkylation of its corresponding benzaldehyde derivative and conveniently removed by mild photolysis at 366 nm. The usefulness of the Hnb auxiliary for the improvement of coupling efficiencies in the chain-assembly of difficult peptides was demonstrated by the efficient Hnb-assisted Fmoc solid-phase synthesis of a known hindered difficult peptide sequence, STAT-91. This work suggests the Hnb auxiliary will significantly enhance our ability to synthesize difficult polypeptides and increases the applicability of amide-backbone substitution.
Abstract:
Realistic time frames in which management decisions are made often preclude the completion of the detailed analyses necessary for conservation planning. Under these circumstances, efficient alternatives may assist in approximating the results of more thorough studies that require extensive resources and time. We outline a set of concepts and formulas that may be used in lieu of detailed population viability analyses and habitat modeling exercises to estimate the protected areas required to provide desirable conservation outcomes for a suite of threatened plant species. We used expert judgment of parameters and assessment of a population size that results in a specified quasiextinction risk based on simple dynamic models. The area required to support a population of this size is adjusted to take into account deterministic and stochastic human influences, including small-scale disturbance, deterministic trends such as habitat loss, and changes in population density through processes such as predation and competition. We set targets for different disturbance regimes and geographic regions. We applied our methods to Banksia cuneata, Boronia keysii, and Parsonsia dorrigoensis, resulting in target areas for conservation of 1102, 733, and 1084 ha, respectively. These results provide guidance on target areas and priorities for conservation strategies.
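The arithmetic below is purely illustrative of the adjustment logic described above; the population size, density, and multiplicative adjustment factors are hypothetical and are not the values used for the three study species.

# Hedged worked example: a population size chosen for an acceptable
# quasiextinction risk is converted to area via density, then inflated for
# disturbance, habitat loss, and density changes. All numbers are hypothetical.
n_required = 5000            # individuals giving the accepted quasiextinction risk
density = 8.0                # expert-judged plants per hectare
base_area = n_required / density              # 625 ha of occupied habitat

disturbance_factor = 1.3     # allowance for small-scale disturbance
habitat_loss_factor = 1.2    # allowance for deterministic habitat loss
density_change_factor = 1.1  # allowance for predation/competition effects

target_area = base_area * disturbance_factor * habitat_loss_factor * density_change_factor
print(f"Target conservation area: {target_area:.0f} ha")   # ~1073 ha in this example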
Abstract:
Peptides that induce and recall T-cell responses are called T-cell epitopes. T-cell epitopes may be useful in a subunit vaccine against malaria. Computer models that simulate peptide binding to MHC are useful for selecting candidate T-cell epitopes since they minimize the number of experiments required for their identification. We applied a combination of computational and immunological strategies to select candidate T-cell epitopes. A total of 86 experimental binding assays were performed in three rounds of identification of HLA-A11 binding peptides from the six pre-erythrocytic malaria antigens. Thirty-six peptides were experimentally confirmed as binders. We show that the cyclical refinement of the artificial neural network (ANN) models results in a significant improvement in the efficiency of identifying potential T-cell epitopes. (C) 2001 by Elsevier Science Inc.
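A hedged sketch of the cyclical-refinement loop described above follows: train a model on known binding data, rank untested peptides, assay only the top candidates, and fold the new measurements back into the training set. The one-hot encoding, the scikit-learn model, and the binding_assay stand-in are illustrative assumptions, not the study's actual ANN or assay.

# Illustrative cyclical refinement of a binding-prediction model.
import numpy as np
from sklearn.neural_network import MLPRegressor

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def encode(peptide):
    # One-hot encode a peptide (illustrative feature representation).
    vec = np.zeros(len(peptide) * 20)
    for i, aa in enumerate(peptide):
        vec[i * 20 + AMINO_ACIDS.index(aa)] = 1.0
    return vec

def binding_assay(peptide):
    # Placeholder for the wet-lab HLA binding assay (hypothetical stand-in).
    rng = np.random.default_rng(abs(hash(peptide)) % 2**32)
    return rng.random()

rng = np.random.default_rng(0)
candidates = ["".join(rng.choice(list(AMINO_ACIDS), 9)) for _ in range(200)]
train_peps = candidates[:20]                      # round 0: peptides already assayed
train_y = [binding_assay(p) for p in train_peps]

for round_no in range(3):                         # three rounds, as in the abstract
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    model.fit([encode(p) for p in train_peps], train_y)
    untested = [p for p in candidates if p not in train_peps]
    preds = model.predict([encode(p) for p in untested])
    top = [untested[i] for i in np.argsort(preds)[::-1][:10]]   # assay only the best predictions
    train_peps += top
    train_y += [binding_assay(p) for p in top]
    print(f"round {round_no + 1}: training set now {len(train_peps)} peptides")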
Abstract:
In this paper, a genetic algorithm (GA) is applied to the optimum design of reinforced concrete liquid-retaining structures, a problem involving three discrete design variables: slab thickness, reinforcement diameter and reinforcement spacing. GA, being a search technique based on the mechanics of natural genetics, couples a Darwinian survival-of-the-fittest principle with a random yet structured information exchange amongst a population of artificial chromosomes. As a first step, a penalty-based strategy is employed to transform the constrained design problem into an unconstrained problem, which is appropriate for GA application. A numerical example is then used to demonstrate the strength and capability of the GA in this problem domain. It is shown that near-optimal solutions are obtained with extremely fast convergence after exploration of only a minute portion of the search space. The method can be extended to even more complex optimization problems in other domains.
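The sketch below illustrates a penalty-based GA over three discrete design variables of the kind described above; the variable sets, the cost surrogate, and the capacity check are simplified stand-ins, not the paper's structural design formulation.

# Illustrative penalty-based GA over discrete design variables.
import random

THICKNESS = [150, 200, 250, 300, 350]        # mm, candidate slab thicknesses
BAR_DIAM  = [10, 12, 16, 20, 25]             # mm, candidate reinforcement diameters
SPACING   = [100, 125, 150, 200, 250]        # mm, candidate reinforcement spacings

def cost(t, d, s):
    # Toy material-cost surrogate: concrete volume plus steel per metre width.
    steel_area = 3.14159 * (d / 2) ** 2 * (1000 / s)   # mm^2 per metre width
    return 0.001 * t * 1000 + 0.05 * steel_area

def capacity(t, d, s):
    # Toy resistance measure; real design code checks would replace this.
    return 1e-4 * t * 3.14159 * (d / 2) ** 2 * (1000 / s)

def fitness(chrom, demand=60.0, penalty=50.0):
    t, d, s = THICKNESS[chrom[0]], BAR_DIAM[chrom[1]], SPACING[chrom[2]]
    violation = max(0.0, demand - capacity(t, d, s))   # constraint shortfall
    return -(cost(t, d, s) + penalty * violation)      # penalised cost, maximised

def evolve(pop_size=30, generations=60, p_mut=0.1):
    pop = [[random.randrange(5) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 3)
            child = a[:cut] + b[cut:]                  # one-point crossover
            if random.random() < p_mut:
                child[random.randrange(3)] = random.randrange(5)
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return THICKNESS[best[0]], BAR_DIAM[best[1]], SPACING[best[2]]

print(evolve())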
Abstract:
The efficient expression and purification of an interfacially active peptide (mLac21) was achieved by using bioprocess-centered molecular design (BMD), wherein key bioprocess considerations are addressed during the initial molecular biology work. The 21 amino acid mLac21 peptide sequence is derived from the lac repressor protein and is shown to have high affinity for the oil-water interface, causing a substantial reduction in interfacial tension following adsorption. The DNA coding for the peptide sequence was cloned into a modified pET-31(b) vector to permit the expression of mLac21 as a fusion to ketosteroid isomerase (KSI). Rational iterative molecular design, taking into account the need for a scaleable bioprocess flowsheet, led to a simple and efficient bioprocess yielding mLac21 at 86% purity following ion exchange chromatography (and >98% following chromatographic polishing). This case study demonstrates that it is possible to produce acceptably pure peptide for potential commodity applications using common scaleable bioprocess unit operations. Moreover, it is shown that BMD is a powerful strategy that can be deployed to reduce bioseparation complexity. (C) 2004 Wiley Periodicals, Inc.
Abstract:
Minimal perfect hash functions are used for memory efficient storage and fast retrieval of items from static sets. We present an infinite family of efficient and practical algorithms for generating order preserving minimal perfect hash functions. We show that almost all members of the family construct space and time optimal order preserving minimal perfect hash functions, and we identify the one with minimum constants. Members of the family generate a hash function in two steps. First a special kind of function into an r-graph is computed probabilistically. Then this function is refined deterministically to a minimal perfect hash function. We give strong theoretical evidence that the first step uses linear random time. The second step runs in linear deterministic time. The family not only has theoretical importance, but also offers the fastest known method for generating perfect hash functions.
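A simplified, hedged sketch of the two-step construction for the r = 2 case follows: keys are mapped probabilistically to edges of a random graph, and, if the graph is acyclic, vertex values are assigned deterministically so that each key hashes to its original position. The load factor and retry scheme are illustrative choices, not the family's optimal constants.

# Illustrative order-preserving minimal perfect hash construction (r = 2).
import random

def build_mphf(keys, ratio=2.5, max_tries=100):
    # Step 1 (probabilistic): map each key to an edge of a random graph on m vertices.
    # Step 2 (deterministic): if the graph is acyclic, assign vertex values g so that
    # (g[h1(k)] + g[h2(k)]) mod n equals the key's position in the input order.
    n, m = len(keys), int(len(keys) * ratio) + 1
    for _ in range(max_tries):                       # retry until the graph is acyclic
        s1, s2 = random.randrange(1 << 30), random.randrange(1 << 30)
        h1 = lambda k: hash((s1, k)) % m             # hash() is stable within one run
        h2 = lambda k: hash((s2, k)) % m
        adj = {v: [] for v in range(m)}              # vertex -> [(other vertex, key index)]
        for i, k in enumerate(keys):
            u, v = h1(k), h2(k)
            if u == v:                               # self-loop: pick new hash functions
                break
            adj[u].append((v, i))
            adj[v].append((u, i))
        else:
            g, visited, ok = [0] * m, [False] * m, True
            for root in range(m):                    # assign g over each component
                if visited[root] or not ok:
                    continue
                visited[root] = True
                stack, seen_edges = [root], set()
                while stack and ok:
                    u = stack.pop()
                    for v, i in adj[u]:
                        if i in seen_edges:
                            continue
                        seen_edges.add(i)
                        if visited[v]:               # cycle: pick new hash functions
                            ok = False
                            break
                        g[v] = (i - g[u]) % n        # edge i must hash to position i
                        visited[v] = True
                        stack.append(v)
            if ok:
                return lambda k: (g[h1(k)] + g[h2(k)]) % n
    raise RuntimeError("no acyclic graph found; increase ratio")

words = ["apple", "banana", "cherry", "damson", "elder"]
mphf = build_mphf(words)
print([mphf(w) for w in words])                      # [0, 1, 2, 3, 4], order preserving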
Abstract:
A robust semi-implicit central partial difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. This can be used for calculating correlation functions of systems of interacting stochastic fields. Such field equations can arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used for studying the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in chi(2) parametric waveguides. This example uses a non-diagonal coherent state representation, and correctly predicts the sub-shot-noise level spectral fluctuations observed in homodyne detection measurements. It is expected that the methods used will be applicable to higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. This involves solving nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed. (C) 1997 Academic Press.
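As a hedged illustration of a semi-implicit (iterated midpoint) step for a stochastic parabolic PDE with multiplicative noise, consider the sketch below; the model equation, parameters, and fixed-point iteration count are illustrative and do not reproduce the chi(2) waveguide example.

# Illustrative semi-implicit step for du = D u_xx dt + g u dW in 1-D,
# with periodic boundaries and a central finite-difference Laplacian.
import numpy as np

def semi_implicit_step(u, dt, dx, diffusion, noise_amp, rng, iterations=3):
    # One step using an iterated (fixed-point) midpoint rule.
    dW = rng.normal(0.0, np.sqrt(dt), size=u.shape)   # Wiener increments per cell
    u_mid = u.copy()
    for _ in range(iterations):                       # fixed-point solve for the midpoint
        lap = (np.roll(u_mid, 1) - 2 * u_mid + np.roll(u_mid, -1)) / dx**2
        drift = diffusion * lap
        u_mid = u + 0.5 * (drift * dt + noise_amp * u_mid * dW)
    return 2 * u_mid - u                              # full step from the midpoint

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 128, endpoint=False)
u = np.exp(-100 * (x - 0.5) ** 2)                     # initial pulse
for _ in range(200):
    u = semi_implicit_step(u, dt=1e-4, dx=x[1] - x[0], diffusion=0.1,
                           noise_amp=0.5, rng=rng)
print(u.mean(), u.std())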
Abstract:
The concept of parameter-space size adjustment is proposed in order to enable the successful application of genetic algorithms to continuous optimization problems. The performance of genetic algorithms with six different combinations of selection and reproduction mechanisms, with and without parameter-space size adjustment, was rigorously tested on eleven multi-minima test functions. The algorithm with the best performance was employed for the determination of the model parameters of the optical constants of Pt, Ni and Cr.
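The sketch below illustrates the size-adjustment idea in a hedged, simplified form: a basic real-coded GA is rerun while the search box is re-centred on the best solution found and shrunk. The Rastrigin test function and the shrink factor are illustrative choices, not those of the study.

# Illustrative GA with progressive parameter-space size adjustment.
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def ga(lo, hi, pop_size=40, generations=80, rng=None):
    # Minimal real-coded GA searching within the box [lo, hi].
    if rng is None:
        rng = np.random.default_rng()
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        fit = np.array([rastrigin(ind) for ind in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]          # truncation selection
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        alpha = rng.random((pop_size, dim))
        pop = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
        pop += rng.normal(0, 0.02 * (hi - lo), size=pop.shape)   # mutation
        pop = np.clip(pop, lo, hi)
    fit = np.array([rastrigin(ind) for ind in pop])
    return pop[np.argmin(fit)], fit.min()

rng = np.random.default_rng(0)
lo, hi = np.full(3, -5.12), np.full(3, 5.12)
for stage in range(4):                                  # size-adjustment stages
    best, best_f = ga(lo, hi, rng=rng)
    half_width = 0.25 * (hi - lo)                       # shrink the box by half
    lo, hi = best - half_width, best + half_width
    print(f"stage {stage}: f = {best_f:.4f}, box width = {2 * half_width[0]:.3f}")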
Abstract:
We suggest a new notion of behaviour-preserving refinement based on partial order semantics, called transition refinement. We introduced transition refinement for elementary (low-level) Petri nets earlier. For modelling and verifying complex distributed algorithms, high-level (algebraic) Petri nets are usually used. In this paper, we define transition refinement for algebraic Petri nets. This notion is more powerful than transition refinement for elementary Petri nets because it corresponds to the simultaneous refinement of several transitions in an elementary Petri net. Transition refinement is particularly suitable for refinement steps that increase the degree of distribution of an algorithm, e.g. when synchronous communication is replaced by asynchronous message passing. We study how to prove that the replacement of a transition is a transition refinement.
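As a purely structural illustration with hypothetical place and transition names, the sketch below shows only the replacement step itself: a synchronous-communication transition refined into send and receive transitions with a channel place. Proving that such a replacement is a behaviour-preserving transition refinement is what the paper's method addresses and is not shown here.

# Illustrative net representation: transition name -> input/output places.
net = {
    "produce":   {"in": ["idle_sender"],            "out": ["msg_ready"]},
    "sync_comm": {"in": ["msg_ready", "idle_recv"], "out": ["idle_sender", "processing"]},
    "consume":   {"in": ["processing"],             "out": ["idle_recv"]},
}

def refine_transition(net, name, subnet):
    # Replace one transition by a subnet (the refinement step to be verified).
    refined = {t: spec for t, spec in net.items() if t != name}
    refined.update(subnet)
    return refined

# Asynchronous message passing: the single synchronous transition becomes a
# send transition, a channel place, and a receive transition.
async_subnet = {
    "send":    {"in": ["msg_ready"],            "out": ["idle_sender", "channel"]},
    "receive": {"in": ["channel", "idle_recv"], "out": ["processing"]},
}

refined = refine_transition(net, "sync_comm", async_subnet)
for t, spec in refined.items():
    print(f"{t}: {spec['in']} -> {spec['out']}")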