174 results for selection methods
in University of Queensland eSpace - Australia
Abstract:
The H I Parkes All Sky Survey (HIPASS) is a blind extragalactic H I 21-cm emission-line survey covering the whole southern sky from declination -90° to +25°. The HIPASS catalogue (HICAT), containing 4315 H I-selected galaxies from the region south of declination +2°, is presented in Meyer et al. (Paper I). This paper describes in detail the completeness and reliability of HICAT, which are calculated from the recovery rate of synthetic sources and follow-up observations, respectively. HICAT is found to be 99 per cent complete at a peak flux of 84 mJy and an integrated flux of 9.4 Jy km s^-1. The overall reliability is 95 per cent, but rises to 99 per cent for sources with peak fluxes >58 mJy or integrated fluxes >8.2 Jy km s^-1. Expressions are derived for the uncertainties on the most important HICAT parameters: peak flux, integrated flux, velocity width and recessional velocity. The errors on HICAT parameters are dominated by the noise in the HIPASS data, rather than by the parametrization procedure.
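A completeness measurement of this general kind — the recovery fraction of injected synthetic sources as a function of flux — can be sketched as follows. Everything here (the flux range, the toy sigmoid detection model, the 99 per cent threshold search) is illustrative and is not the HIPASS pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic-source test: inject sources with known peak fluxes
# (mJy) and record whether the source-finder recovered each one.
true_flux = rng.uniform(20, 150, size=5000)
# Toy detection model: recovery probability rises with flux (illustrative only).
p_detect = 1 / (1 + np.exp(-(true_flux - 50) / 8))
recovered = rng.random(5000) < p_detect

# Completeness in flux bins = fraction of injected sources recovered.
bins = np.arange(20, 160, 10)
idx = np.digitize(true_flux, bins) - 1
completeness = np.array([recovered[idx == i].mean() for i in range(len(bins) - 1)])

# The flux above which completeness first exceeds 99 per cent.
above = np.where(completeness > 0.99)[0]
if above.size:
    print(f"99% complete above ~{bins[above[0]]} mJy")
```

In the real survey the injected sources are placed into the actual data cubes and run through the full source-finding pipeline, rather than drawn from a detection model.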
Abstract:
Although the aim of conservation planning is the persistence of biodiversity, current methods trade off ecological realism at the species level in favour of including multiple species and landscape features. For conservation planning to be relevant, the impact of landscape configuration on population processes and the viability of species needs to be considered. We present a novel method for selecting reserve systems that maximize persistence across multiple species, subject to a conservation budget. We use a spatially explicit metapopulation model to estimate extinction risk, a function of the ecology of the species and the amount, quality and configuration of habitat. We compare our new method with more traditional, area-based reserve selection methods using a ten-species case study, and find that the expected loss of species is reduced 20-fold. Unlike previous methods, we avoid designating arbitrary weightings between reserve size and configuration; rather, our method is based on population processes and is grounded in ecological theory.
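Budget-constrained reserve selection of this kind can be illustrated with a greedy heuristic that repeatedly adds the site giving the best reduction in expected species loss per unit cost. The site values and the product-form extinction risk below are placeholders; the paper's actual objective comes from a spatially explicit metapopulation model, not this toy function:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_species = 12, 10
cost = rng.uniform(1, 5, n_sites)            # cost of acquiring each site
budget = 10.0

# Hypothetical contribution of each site to each species' persistence.
value = rng.uniform(0, 1, (n_sites, n_species))

def expected_loss(selected):
    """Expected number of species lost: product-form extinction risk
    (an illustrative stand-in for a metapopulation model)."""
    if not selected:
        return float(n_species)
    surv = 1 - np.prod(1 - value[list(selected)], axis=0)  # per-species persistence
    return float(np.sum(1 - surv))

# Greedy selection: add the affordable site with the best loss
# reduction per unit cost, until the budget is exhausted.
selected, spent = set(), 0.0
while True:
    best, best_gain = None, 0.0
    for s in range(n_sites):
        if s in selected or spent + cost[s] > budget:
            continue
        gain = (expected_loss(selected) - expected_loss(selected | {s})) / cost[s]
        if gain > best_gain:
            best, best_gain = s, gain
    if best is None:
        break
    selected.add(best)
    spent += cost[best]
```

Greedy heuristics are a common baseline here; exact optimization (e.g. integer programming or simulated annealing over candidate reserve systems) is the more usual tool when the objective is expensive to evaluate.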
Abstract:
We present new measurements of the luminosity function (LF) of luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS) and the 2dF SDSS LRG and Quasar (2SLAQ) survey. We have carefully quantified, and corrected for, uncertainties in the K and evolutionary corrections, differences in the colour selection methods, and the effects of photometric errors, thus ensuring we are studying the same galaxy population in both surveys. Using a limited subset of 6326 SDSS LRGs (with 0.17 < z < 0.24) and 1725 2SLAQ LRGs (with 0.5 < z < 0.6), for which the matching colour selection is most reliable, we find no evidence for any additional evolution in the LRG LF, over this redshift range, beyond that expected from a simple passive evolution model. This lack of additional evolution is quantified using the comoving luminosity density of SDSS and 2SLAQ LRGs, brighter than M_0.2r - 5 log h_0.7 = -22.5, which are 2.51 ± 0.03 × 10^-7 L⊙ Mpc^-3 and 2.44 ± 0.15 × 10^-7 L⊙ Mpc^-3, respectively (<10 per cent uncertainty). We compare our LFs to the COMBO-17 data and find excellent agreement over the same redshift range. Together, these surveys show no evidence for additional evolution (beyond passive) in the LF of LRGs brighter than M_0.2r - 5 log h_0.7 = -21 (or brighter than ~L*). We test our SDSS and 2SLAQ LFs against a simple 'dry merger' model for the evolution of massive red galaxies and find that at least half of the LRGs at z ≈ 0.2 must already have been well assembled (with more than half their stellar mass) by z ≈ 0.6. This limit is barely consistent with recent results from semi-analytical models of galaxy evolution.
Abstract:
There are many techniques for electricity market price forecasting. However, most of them are designed for expected-price analysis rather than price spike forecasting; an effective method of predicting the occurrence of spikes has not yet appeared in the literature. In this paper, a data-mining-based approach is presented to give a reliable forecast of the occurrence of price spikes. Combined with the spike value prediction techniques developed by the same authors, the proposed approach aims to provide a comprehensive tool for price spike forecasting. Feature selection techniques are first described to identify the attributes relevant to the occurrence of spikes. A brief introduction to the classification techniques is given for completeness. Two algorithms, a support vector machine and a probability classifier, are chosen as the spike occurrence predictors and are discussed in detail. Realistic market data are used to test the proposed model, with promising results.
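As a rough illustration of the "probability classifier" branch of such an approach, the sketch below trains a Gaussian naive-Bayes classifier to flag spike occurrence from two hypothetical features (demand level and reserve margin). The features, the data, and the classifier choice are all assumptions for illustration, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training data: spikes tend to occur at high demand / low reserve.
n = 2000
demand = rng.normal(0, 1, n)
reserve = rng.normal(0, 1, n)
spike = (demand - reserve + rng.normal(0, 0.5, n)) > 1.5
X = np.column_stack([demand, reserve])

def fit(X, y):
    """Gaussian naive Bayes: per-class feature means, variances, priors."""
    params = {}
    for c in (False, True):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0) + 1e-9, len(Xc) / len(X))
    return params

def predict(params, X):
    """Classify by comparing per-class log posterior scores."""
    scores = {}
    for c, (mu, var, prior) in params.items():
        logp = -0.5 * np.sum((X - mu) ** 2 / var + np.log(2 * np.pi * var), axis=1)
        scores[c] = logp + np.log(prior)
    return scores[True] > scores[False]

params = fit(X, spike)
acc = (predict(params, X) == spike).mean()
```

Because spikes are rare, raw accuracy is a weak metric in practice; precision/recall on the spike class is the more informative check.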
Abstract:
We describe a strategy for the selection and amplification of foreign gene expression in Chinese hamster ovary (CHO) cells employing a metallothionein gene-containing expression vector. This report describes an amplification procedure that results in an enrichment of clones exhibiting high levels of recombinant protein production and reduces the labour required for screening recombinant cell lines.
Abstract:
Background: A variety of methods for prediction of peptide binding to major histocompatibility complex (MHC) have been proposed. These methods are based on binding motifs, binding matrices, hidden Markov models (HMM), or artificial neural networks (ANN). There has been little prior work on the comparative analysis of these methods. Materials and Methods: We performed a comparison of the performance of six methods applied to the prediction of two human MHC class I molecules, including binding matrices and motifs, ANNs, and HMMs. Results: The selection of the optimal prediction method depends on the amount of available data (the number of peptides of known binding affinity to the MHC molecule of interest), the biases in the data set and the intended purpose of the prediction (screening of a single protein versus mass screening). When little or no peptide data are available, binding motifs are the most useful alternative to random guessing or use of a complete overlapping set of peptides for selection of candidate binders. As the number of known peptide binders increases, binding matrices and HMM become more useful predictors. ANN and HMM are the predictive methods of choice for MHC alleles with more than 100 known binding peptides. Conclusion: The ability of bioinformatic methods to reliably predict MHC binding peptides, and thereby potential T-cell epitopes, has major implications for clinical immunology, particularly in the area of vaccine design.
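The binding-matrix approach compared above can be sketched as a position-specific scoring matrix: a 9-mer peptide scores the sum of per-position, per-residue weights, and candidate binders are ranked by sliding a window along the protein. The weights below are random placeholders standing in for a matrix trained on peptides of known affinity, and the protein sequence is illustrative:

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(3)

# Hypothetical 9-mer binding matrix: one weight per (position, amino acid).
# A real matrix is estimated from peptides of measured binding affinity.
matrix = rng.normal(0, 1, (9, len(AMINO)))

def score(peptide, matrix):
    """Binding-matrix score: sum of position-specific residue weights."""
    assert len(peptide) == matrix.shape[0]
    return sum(matrix[i, AMINO.index(aa)] for i, aa in enumerate(peptide))

# Rank candidate 9-mers from a protein by sliding a window along it.
protein = "MTEYKLVVVGAGGVGKSALTIQLIQNHF"   # illustrative sequence
peptides = [protein[i:i + 9] for i in range(len(protein) - 8)]
ranked = sorted(peptides, key=lambda p: score(p, matrix), reverse=True)
```

Binding motifs are the degenerate case of this scheme (a few anchor positions with all-or-nothing weights), which is why matrices overtake motifs once enough training peptides are available.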
Abstract:
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes.
Abstract:
Most sugarcane breeding programs in Australia use large unreplicated trials to evaluate clones in the early stages of selection. Replicated commercial varieties provide a method of local control for soil fertility. Although such methods may be useful in detecting broad trends in the field, variation often occurs on a much smaller scale. Methods such as spatial analysis adjust a plot for variability by using information from immediate neighbours. These techniques are routinely used to analyse cereal data in Australia and have resulted in increased accuracy and precision in the estimates of variety effects. In this paper, spatial analyses in which the variability is decomposed into local, natural, and extraneous components are applied to early selection trials in sugarcane. Interplot competition in cane yield and trend in sugar content were substantial in many of the trials, and there were often large differences between the selections made by the spatial method and by the current method used by the Bureau of Sugar Experiment Stations. A joint modelling approach for tonnes sugar per hectare in response to fertility trends and interplot competition is recommended.
Abstract:
Why does species richness vary so greatly across lineages? Traditionally, variation in species richness has been attributed to deterministic processes, although it is equally plausible that it may result from purely stochastic processes. We show that, based on the best available phylogenetic hypothesis, the pattern of cladogenesis among agamid lizards is not consistent with a random model, with some lineages having more species, and others fewer species, than expected by chance. We then use phylogenetic comparative methods to test six types of deterministic explanation for variation in species richness: body size, life history, sexual selection, ecological generalism, range size and latitude. Of eight variables we tested, only sexual size dimorphism and sexual dichromatism predicted species richness. Increases in species richness are associated with increases in sexual dichromatism but reductions in sexual size dimorphism. Consistent with recent comparative studies, we find no evidence that species richness is associated with small body size or high fecundity. Equally, we find no evidence that species richness covaries with ecological generalism, latitude or range size.
Abstract:
Plant breeders use many different breeding methods to develop superior cultivars. However, it is difficult, cumbersome, and expensive to evaluate the performance of a breeding method or to compare the efficiencies of different breeding methods within an ongoing breeding program. To facilitate comparisons, we developed a QU-GENE module called QuCim that can simulate a large number of breeding strategies for self-pollinated species. The wheat breeding strategy Selected Bulk used by CIMMYT's wheat breeding program was defined in QuCim as an example of how this is done. This selection method was simulated in QuCim to investigate the effects of deviations from the additive genetic model, in the form of dominance and epistasis, on selection outcomes. The simulation results indicate that the partial dominance model does not greatly influence genetic advance compared with the pure additive model. Genetic advance in genetic systems with overdominance and epistasis is slower than when gene effects are purely additive or partially dominant. The additive gene effect is an appropriate indicator of the change in gene frequency following selection when epistasis is absent. In the absence of epistasis, the additive variance decreases rapidly with selection; when epistasis is present, however, it remains relatively fixed after several cycles of selection. The variance from partial dominance is relatively small and therefore hard to detect from the covariances among half-sibs and among full-sibs. The dominance variance from the overdominance model can be identified successfully, but it does not change significantly, which confirms that overdominance cannot be exploited by an inbred breeding program. QuCim is an effective tool for comparing selection strategies and validating some theories in quantitative genetics.
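A much-reduced version of such a simulation — truncation selection on a purely additive multi-locus trait, far simpler than QuCim — can be sketched as follows. Locus count, effect sizes, population size, and selection intensity are all arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)

# Purely additive toy model: 20 biallelic loci of equal effect.
n_loci, n_pop, cycles = 20, 500, 5
freq = np.full(n_loci, 0.5)                 # starting allele frequencies
mean_trajectory = []

for _ in range(cycles):
    # Genotypes: allele counts 0/1/2 per locus; additive phenotype + noise.
    geno = rng.binomial(2, freq, size=(n_pop, n_loci))
    pheno = geno.sum(axis=1) + rng.normal(0, 2, n_pop)
    mean_trajectory.append(pheno.mean())
    # Truncation selection: keep the top 20%, recompute allele frequencies.
    top = np.argsort(pheno)[-n_pop // 5:]
    freq = geno[top].mean(axis=0) / 2
```

Under this additive model the population mean climbs each cycle while favourable-allele frequencies rise (and the additive variance, proportional to 2pq summed over loci, shrinks) — the behaviour the abstract describes for the epistasis-free case.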
Abstract:
The emergence of antibiotic resistance among pathogenic and commensal bacteria has become a serious problem worldwide. The use and overuse of antibiotics in a number of settings are contributing to the development of antibiotic-resistant microorganisms. The class 1 and 2 integrase genes (intI1 and intI2, respectively) were identified in mixed bacterial cultures enriched from bovine feces by growth in buffered peptone water (BPW) followed by integrase-specific PCR. Integrase-positive bacterial colonies from the enrichment cultures were then isolated by using hydrophobic grid membrane filters and integrase-specific gene probes. Bacterial clones isolated by this technique were then confirmed to carry integrons by further testing by PCR and DNA sequencing. Integron-associated antibiotic resistance genes were detected in bacteria such as Escherichia coli, Aeromonas spp., Proteus spp., Morganella morganii, Shewanella spp., and urea-positive Providencia stuartii isolates from bovine fecal samples without the use of selective enrichment media containing antibiotics. Streptomycin and trimethoprim resistance were commonly associated with integrons. The advantages conferred by this methodology are that a wide variety of integron-containing bacteria may be simultaneously cultured in BPW enrichments and culture biases due to antibiotic selection can be avoided. Rapid and efficient identification, isolation, and characterization of antibiotic resistance-associated integrons are possible by this protocol. These methods will facilitate greater understanding of the factors that contribute to the presence and transfer of integron-associated antibiotic resistance genes in bacterial isolates from red meat production animals.
Abstract:
Objective: This study examined a sample of patients in Victoria, Australia, to identify factors in selection for conditional release from an initial hospitalization that occurred within 30 days of entry into the mental health system. Methods: Data were from the Victorian Psychiatric Case Register. All patients first hospitalized and conditionally released between 1990 and 2000 were identified (N = 8,879), and three comparison groups were created. Two groups were hospitalized within 30 days of entering the system: those who were given conditional release and those who were not. A third group was conditionally released from a hospitalization that occurred after or extended beyond 30 days after system entry. Logistic regression identified characteristics that distinguished the first group. Ordinary least-squares regression was used to evaluate the contribution of conditional release early in treatment to reducing inpatient episodes, inpatient days, days per episode, and inpatient days per 30 days in the system. Results: Conditional release early in treatment was used for 11 percent of the sample, or more than a third of those who were eligible for this intervention. Factors significantly associated with selection for early conditional release were those related to a better prognosis (initial hospitalization at a later age and having greater than an 11th-grade education), a lower likelihood of a diagnosis of dementia or schizophrenia, involuntary status at first inpatient admission, and greater community involvement (being employed and being married). When the analyses controlled for these factors, use of conditional release early in treatment was significantly associated with a reduction in use of subsequent inpatient care.
Abstract:
We present a novel nonparametric density estimator and a new data-driven bandwidth selection method with excellent properties. The approach is inspired by the principles of the generalized cross entropy method. The proposed density estimation procedure has numerous advantages over the traditional kernel density estimator methods. Firstly, for the first time in the nonparametric literature, the proposed estimator allows for a genuine incorporation of prior information in the density estimation procedure. Secondly, the approach provides the first data-driven bandwidth selection method that is guaranteed to provide a unique bandwidth for any data. Lastly, simulation examples suggest the proposed approach outperforms the current state of the art in nonparametric density estimation in terms of accuracy and reliability.
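For context, the sketch below shows a conventional Gaussian kernel density estimator with one standard data-driven bandwidth criterion (leave-one-out log-likelihood). This is the baseline such methods improve upon, not the authors' generalized cross entropy estimator, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic bimodal sample.
x = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(2, 1.0, 150)])

def kde(grid, data, h):
    """Gaussian kernel density estimate with bandwidth h."""
    z = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def loo_log_likelihood(data, h):
    """Leave-one-out log-likelihood: a standard bandwidth selection criterion."""
    n = len(data)
    z = (data[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * z**2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)                      # leave each point out
    return np.sum(np.log(k.sum(axis=1) / (n - 1)))

# Pick the bandwidth maximizing the criterion over a candidate grid.
bandwidths = np.linspace(0.05, 1.5, 30)
scores = [loo_log_likelihood(x, h) for h in bandwidths]
h_star = bandwidths[int(np.argmax(scores))]
```

Note that criteria of this kind can have multiple local optima over the candidate grid; the uniqueness guarantee claimed in the abstract is precisely what distinguishes the proposed method from this baseline.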
Abstract:
This paper identifies research priorities in evaluating the ways in which "genomic medicine" (the use of genetic information to prevent and treat disease) may reduce tobacco-related harm by: (1) assisting more smokers to quit; (2) preventing non-smokers from beginning to smoke tobacco; and (3) reducing the harm caused by tobacco smoking. The method proposed to achieve the first aim is "pharmacogenetics", the use of genetic information to optimise the selection of smoking-cessation programmes by screening smokers for polymorphisms that predict responses to different methods of smoking cessation. This method competes with the development of more effective forms of smoking cessation that involve vaccinating smokers against the effects of nicotine and using new pharmaceuticals (such as cannabinoid antagonists and nicotine agonists). The second and third aims are more speculative. They include: screening the population for genetic susceptibility to nicotine dependence and intervening (eg, by vaccinating children and adolescents against the effects of nicotine) to prevent smoking uptake, and screening the population for genetic susceptibility to tobacco-related diseases. A framework is described for future research on these policy options. This includes: epidemiological modelling and economic evaluation to specify the conditions under which these strategies are cost-effective; and social psychological research into the effect of providing genetic information on smokers' preparedness to quit, and the general views of the public on tobacco smoking.