962 results for single-tree selection
Abstract:
The Marbled Murrelet (Brachyramphus marmoratus) is a threatened alcid that nests almost exclusively in old-growth forests along the Pacific coast of North America. Nesting habitat has significant economic importance. Murrelet nests are extremely difficult and costly to find, which adds uncertainty to management and conservation planning. Models based on air photo interpretation of forest cover maps or assessments by low-level helicopter flights are currently used to rank presumed Marbled Murrelet nesting habitat quality in British Columbia. These rankings are assumed to correlate with nest usage and murrelet breeding productivity. Our goal was to find the models that best predict Marbled Murrelet nesting habitat in the ground-accessible portion of the two regions studied. We generated Resource Selection Functions (RSF) using logistic regression models of ground-based forest stand variables gathered at plots around 64 nests, located using radio-telemetry, versus 82 random habitat plots. The RSF scores are proportional to the probability of nests occurring in a forest patch. The best models differed somewhat between the two regions, but all included both ground variables at the patch scale (0.2-2.0 ha), such as platform tree density, height and trunk diameter of canopy trees and canopy complexity, and landscape-scale variables such as elevation, aspect, and slope. Collecting ground-based habitat selection data would not be cost-effective for widespread use in forestry management; air photo interpretation and low-level aerial surveys are much more efficient methods for ranking habitat suitability on a landscape scale. This study provides one method for ground-truthing the remote methods, an essential step made possible using the numerical RSF scores generated herein.
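The used-versus-available design behind an RSF can be sketched as a logistic regression in which the exponentiated linear predictor gives the relative selection score. This is an illustrative sketch only: the covariate names, synthetic values, and sample sizes below are placeholders, not the study's actual data or fitted model.

```python
# Illustrative sketch: fitting a Resource Selection Function (RSF) as a
# used-vs-available logistic regression on synthetic stand variables.
# Covariates (platform-tree density, canopy height, elevation) are
# placeholders standing in for the paper's ground and landscape variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 64 "nest" plots (used = 1) and 82 random plots (available = 0)
n_used, n_avail = 64, 82
X_used = rng.normal(loc=[8.0, 45.0, 600.0], scale=[2.0, 8.0, 150.0], size=(n_used, 3))
X_avail = rng.normal(loc=[4.0, 35.0, 800.0], scale=[2.0, 8.0, 150.0], size=(n_avail, 3))
X = np.vstack([X_used, X_avail])
y = np.concatenate([np.ones(n_used), np.zeros(n_avail)])

model = LogisticRegression(max_iter=1000).fit(X, y)

# The exponentiated linear predictor w(x) = exp(b1*x1 + b2*x2 + ...) is the
# RSF; scores are proportional to the relative probability of nest occurrence.
linear_pred = X @ model.coef_.ravel()
rsf_scores = np.exp(linear_pred - linear_pred.max())  # rescaled for numerical stability
```

Because an RSF is only proportional to a probability of use, the scores are meaningful for ranking patches against one another, which is how the paper proposes using them to ground-truth remote habitat rankings.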
Abstract:
Given the non-monotonic form of the radiocarbon calibration curve, the precision of single C-14 dates on the calendar timescale will always be limited. One way around this limitation is through comparison of time-series, which should exhibit the same irregular patterning as the calibration curve. This approach can be employed most directly in the case of wood samples with many years' growth present (but not able to be dated by dendrochronology), where the tree-ring series of unknown date can be compared against the similarly constructed C-14 calibration curve built from known-age wood. This process of curve-fitting has come to be called "wiggle-matching." In this paper, we look at the requirements for getting good precision by this method: sequence length, sampling frequency, and measurement precision. We also look at 3 case studies: one a piece of wood which has been independently dendrochronologically dated, and two others of unknown age relating to archaeological activity at Silchester, UK (Roman) and Miletos, Anatolia (relating to the volcanic eruption at Thera).
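The core of wiggle-matching is sliding a floating tree-ring C-14 series along the calibration curve and scoring the fit at each calendar offset. A minimal sketch, using an entirely synthetic calibration curve and a simple chi-square fit rather than IntCal data or the Bayesian machinery used in practice:

```python
# Minimal wiggle-matching sketch: slide a floating 10-sample tree-ring 14C
# series along a synthetic calibration curve and pick the calendar offset
# minimising chi-square. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "calibration curve": 14C age as a wiggly function of calendar year
cal_years = np.arange(0, 500)
cal_c14 = 2000 - 0.8 * cal_years + 30 * np.sin(cal_years / 15.0)

# Samples from rings spaced 10 years apart; true start year is 200
ring_offsets = np.arange(0, 100, 10)
true_start = 200
sigma = 15.0  # measurement precision (1-sigma, 14C years)
measured = cal_c14[true_start + ring_offsets] + rng.normal(0, sigma, ring_offsets.size)

def chi2(start):
    """Misfit of the floating series if its first ring grew in `start`."""
    expected = cal_c14[start + ring_offsets]
    return np.sum(((measured - expected) / sigma) ** 2)

starts = np.arange(0, cal_years.size - ring_offsets.max())
best_start = min(starts, key=chi2)  # best-fitting calendar placement
```

The sketch makes the paper's three levers visible: a longer sequence (`ring_offsets`), denser sampling, or smaller `sigma` all sharpen the chi-square minimum and hence the calendar precision.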
Abstract:
Genetic parameters and breeding values for dairy cow fertility were estimated from 62 443 lactation records. Two-trait analysis of fertility and milk yield was investigated as a method to estimate fertility breeding values when culling or selection based on milk yield in early lactation determines presence or absence of fertility observations in later lactations. Fertility traits were calving interval, intervals from calving to first service, calving to conception and first to last service, conception success to first service and number of services per conception. Milk production traits were 305-day milk, fat and protein yield. For fertility traits, the range of estimates of heritability (h(2)) was 0.012 to 0.028 and of permanent environmental variance (c(2)) was 0.016 to 0.032. Genetic correlations (r(g)) among fertility traits were generally high ( > 0.70). Genetic correlations of fertility with milk production traits were unfavourable (range -0.11 to 0.46). Single- and two-trait analyses of fertility were compared using the same data set. The estimates of h(2) and c(2) were similar for the two types of analysis. However, there were differences between estimated breeding values and rankings for the same trait from single versus multi-trait analyses. The range for rank correlation was 0.69-0.83 for all animals in the pedigree and 0.89-0.96 for sires with more than 25 daughters. As the single-trait method is biased due to selection on milk yield, a multi-trait evaluation of fertility with milk yield is recommended. (C) 2002 Elsevier Science B.V. All rights reserved.
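The re-ranking the abstract reports is summarised by rank correlations between breeding values from the two analyses. A small sketch of that comparison, using synthetic correlated breeding values rather than the study's estimates:

```python
# Illustrative comparison of animal rankings from two evaluations: the
# Spearman rank correlation between synthetic "single-trait" and
# "two-trait" estimated breeding values (EBVs). Values are not the study's.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

n_animals = 500
ebv_single = rng.normal(size=n_animals)                          # single-trait EBVs
ebv_multi = 0.8 * ebv_single + 0.6 * rng.normal(size=n_animals)  # correlated re-evaluation

rho, _ = spearmanr(ebv_single, ebv_multi)
# A rank correlation well below 1 means animals re-rank between analyses,
# which is the behaviour that motivates the multi-trait recommendation.
```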
Abstract:
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling the determination of the sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.
Abstract:
This paper considers the problem of estimation when one of a number of populations, assumed normal with known common variance, is selected on the basis of it having the largest observed mean. Conditional on selection of the population, the observed mean is a biased estimate of the true mean. This problem arises in the analysis of clinical trials in which selection is made between a number of experimental treatments that are compared with each other either with or without an additional control treatment. Attempts to obtain approximately unbiased estimates in this setting have been proposed by Shen [2001. An improved method of evaluating drug effect in a multiple dose clinical trial. Statist. Medicine 20, 1913–1929] and Stallard and Todd [2005. Point estimates and confidence regions for sequential trials involving selection. J. Statist. Plann. Inference 135, 402–419]. This paper explores the problem in the simple setting in which two experimental treatments are compared in a single analysis. It is shown that in this case the estimate of Stallard and Todd is the maximum-likelihood estimate (m.l.e.), and this is compared with the estimate proposed by Shen. In particular, it is shown that the m.l.e. has infinite expectation whatever the true value of the mean being estimated. We show that there is no conditionally unbiased estimator, and propose a new family of approximately conditionally unbiased estimators, comparing these with the estimators suggested by Shen.
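The selection bias at the heart of this problem is easy to exhibit by simulation: when the arm with the largest observed mean is selected, its observed mean overestimates its true mean even when all arms are identical. A Monte Carlo sketch with illustrative parameters:

```python
# Monte Carlo sketch of conditional selection bias: select the arm with the
# largest observed mean and compare that observed mean with its true mean.
# All parameters (k arms, equal true means, unit standard error) are illustrative.
import numpy as np

rng = np.random.default_rng(3)

k = 4            # number of experimental treatments
true_mean = 0.0  # all arms equal, so any selected arm has true mean 0
se = 1.0         # known standard error of each observed mean
n_sim = 200_000

obs = rng.normal(true_mean, se, size=(n_sim, k))
selected = obs.max(axis=1)          # observed mean of the selected arm

bias = selected.mean() - true_mean  # positive: the selected mean overestimates
```

For four equal arms the expected maximum of the standardised means is roughly 1.03 standard errors, so the naive estimate is biased upward by about one standard error; this is the bias the conditionally unbiased estimators discussed above aim to remove.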
Abstract:
A prerequisite for the enrichment of antibodies screened from phage display libraries is their stable expression on a phage during multiple selection rounds. Thus, if stringent panning procedures are employed, selection is simultaneously driven by antigen affinity, stability and solubility. To take advantage of robust pre-selected scaffolds of such molecules, we grafted single-chain Fv (scFv) antibodies, previously isolated from a human phage display library after multiple rounds of in vitro panning on tumor cells, with the specificity of the clinically established murine monoclonal anti-CD22 antibody RFB4. We show that a panel of grafted scFvs retained the specificity of the murine monoclonal antibody, bound to the target antigen with high affinity (6.4-9.6 nM), and exhibited exceptional biophysical stability with retention of 89-93% of the initial binding activity after 6 days of incubation in human serum at 37 °C. Selection of stable human scaffolds with high sequence identity to both the human germline and the rodent frameworks required only a small number of murine residues to be retained within the human frameworks in order to maintain the structural integrity of the antigen binding site. We expect this approach may be applicable for the rapid generation of highly stable humanized antibodies with low immunogenic potential.
Abstract:
A hybridised and Knowledge-based Evolutionary Algorithm (KEA) is applied to the multi-criterion minimum spanning tree problems. Hybridisation is used across its three phases. In the first phase a deterministic single objective optimization algorithm finds the extreme points of the Pareto front. In the second phase a K-best approach finds the first neighbours of the extreme points, which serve as an elitist parent population to an evolutionary algorithm in the third phase. A knowledge-based mutation operator is applied in each generation to reproduce individuals that are at least as good as the unique parent. The advantages of KEA over previous algorithms include its speed (making it applicable to large real-world problems), its scalability to more than two criteria, and its ability to find both the supported and unsupported optimal solutions.
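The Pareto terminology used above (extreme points, supported and unsupported solutions) rests on a simple dominance test over cost vectors. A small self-contained sketch of that test and of filtering candidates down to the non-dominated front, with made-up two-criterion costs rather than actual spanning trees:

```python
# Sketch of the Pareto machinery underlying multi-criterion optimisation:
# a dominance test and a non-dominated-front filter for minimisation.
# The cost vectors are illustrative, not spanning trees from the paper.
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """a dominates b if it is no worse in every criterion and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Keep only points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

costs = [(4, 9), (5, 5), (7, 3), (6, 6), (9, 2), (10, 10)]
front = pareto_front(costs)  # (6, 6) and (10, 10) are dominated and dropped
```

The dominance test generalises unchanged to more than two criteria, which is one reason scalability beyond two objectives is highlighted as an advantage of KEA.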
Abstract:
Different types of mental activity are utilised as an input in Brain-Computer Interface (BCI) systems. One such activity type is based on Event-Related Potentials (ERPs). The characteristics of ERPs are not visible in single trials, thus averaging over a number of trials is necessary before the signals become usable. An improvement in ERP-based BCI operation and system usability could be obtained if the use of single-trial ERP data was possible. The method of Independent Component Analysis (ICA) can be utilised to separate single-trial recordings of ERP data into components that correspond to ERP characteristics, background electroencephalogram (EEG) activity and other components with non-cerebral origin. Choice of specific components and their use to reconstruct "denoised" single-trial data could improve the signal quality, thus allowing the successful use of single-trial data without the need for averaging. This paper assesses single-trial ERP signals reconstructed using a selection of estimated components from the application of ICA on the raw ERP data. Signal improvement is measured using Contrast-To-Noise measures. It was found that such analysis improves the signal quality in all single trials.
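The separate-select-reconstruct pipeline can be sketched on synthetic data: mix an ERP-like transient with background rhythm and noise, unmix with ICA, zero out the unwanted components, and invert. Everything below is illustrative; in particular, selecting the component by correlation with a known template is a shortcut that real single-trial analysis cannot assume.

```python
# Sketch of ICA-based single-trial "denoising": decompose multichannel data,
# keep only the ERP-like component, and reconstruct the channels.
# Sources, mixing matrix and the template-based selection are all synthetic.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)

t = np.linspace(0, 1, 500)
erp = np.exp(-((t - 0.3) ** 2) / 0.002)   # ERP-like transient
eeg_bg = np.sin(2 * np.pi * 10 * t)       # rhythmic background "EEG"
noise = 0.3 * rng.normal(size=t.size)     # non-cerebral noise

# Three "channels", each a different mixture of the three sources
A = np.array([[1.0, 0.8, 0.5],
              [0.6, 1.0, 0.7],
              [0.9, 0.4, 1.0]])
X = np.stack([erp, eeg_bg, noise]).T @ A.T   # shape (samples, channels)

ica = FastICA(n_components=3, random_state=0)
S = ica.fit_transform(X)                     # estimated independent components

# Keep only the component most correlated with the ERP template
keep = np.argmax([abs(np.corrcoef(S[:, i], erp)[0, 1]) for i in range(3)])
S_denoised = np.zeros_like(S)
S_denoised[:, keep] = S[:, keep]
X_denoised = ica.inverse_transform(S_denoised)  # "denoised" single trial
```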
Abstract:
Although tree nutrition has not been the primary focus of large climate change experiments on trees, we are beginning to understand its links to elevated atmospheric CO2 and temperature changes. This review focuses on the major nutrients, namely N and P, and deals with the effects of climate change on the processes that alter their cycling and availability. Current knowledge regarding biotic and abiotic agents of weathering, mobilization and immobilization of these elements will be discussed. To date, controlled environment studies have identified possible effects of climate change on tree nutrition. Only some of these findings, however, were verified in ecosystem scale experiments. Moreover, to be able to predict future effects of climate change on tree nutrition at this scale, we need to progress from studying effects of single factors to analysing interactions between factors such as elevated CO2, temperature or water availability.
Abstract:
One of the major aims of BCI research is devoted to achieving faster and more efficient control of external devices. The identification of individual tap events in a motor imagery BCI is therefore a desirable goal. EEG is recorded from subjects performing and imagining finger taps with their left and right hands. A Differential Evolution based feature selection wrapper is used in order to identify optimal features in the spatial and frequency domains for tap identification. Channel-frequency band combinations are found which allow differentiation of tap vs. no-tap control conditions for executed and imagined taps. Left vs. right hand taps may also be differentiated with features found in this manner. A sliding time window is then used to accurately identify individual taps in the executed tap and imagined tap conditions. Highly statistically significant classification accuracies are achieved with time windows of 0.5 s and more allowing taps to be identified on a single trial basis.
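A wrapper-style selector of the kind described above can be sketched by letting Differential Evolution evolve a continuous vector that is thresholded into a feature mask and scored by cross-validated classification accuracy. The data, classifier and DE settings below are illustrative stand-ins, not the paper's EEG channel-frequency features or its exact wrapper.

```python
# Sketch of a Differential Evolution feature-selection wrapper: a continuous
# genome in [0, 1]^d is thresholded into a feature mask, and the mask is
# scored by cross-validated accuracy. Synthetic data, illustrative settings.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)

# 80 "trials", 8 candidate features; only features 0 and 1 carry class information
n, d = 80, 8
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, d))
X[:, 0] += 1.5 * y
X[:, 1] -= 1.5 * y

def neg_cv_accuracy(v):
    mask = v > 0.5                  # threshold continuous genome into a mask
    if not mask.any():
        return 1.0                  # empty subsets score worst
    clf = KNeighborsClassifier(n_neighbors=5)
    acc = cross_val_score(clf, X[:, mask], y, cv=3).mean()
    return -acc                     # DE minimises, so negate accuracy

result = differential_evolution(neg_cv_accuracy, bounds=[(0, 1)] * d,
                                maxiter=10, popsize=8, seed=0)
best_mask = result.x > 0.5          # selected feature subset
best_acc = -result.fun              # its cross-validated accuracy
```

The same scheme extends directly to channel-frequency band combinations: each genome position then stands for one channel-band pair rather than one raw feature.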
Abstract:
This paper presents the results of the crowd image analysis challenge of the PETS2010 workshop. The evaluation was carried out using a selection of the metrics developed in the Video Analysis and Content Extraction (VACE) program and the CLassification of Events, Activities, and Relationships (CLEAR) consortium. The PETS 2010 evaluation was performed using new ground truth created from each independent two-dimensional view. In addition, the performance of the submissions to PETS 2009 and Winter-PETS 2009 was evaluated and included in the results. The evaluation highlights the detection and tracking performance of the authors' systems in areas such as precision, accuracy and robustness.
Abstract:
Accurate single-trial P300 classification lends itself to fast and accurate control of Brain Computer Interfaces (BCIs). Highly accurate classification of single-trial P300 ERPs is achieved by characterizing the EEG via corresponding stationary and time-varying Wackermann parameters. Subsets of maximally discriminating parameters are then selected using the Network Clustering feature selection algorithm and classified with Naive-Bayes and Linear Discriminant Analysis classifiers. The method is assessed on two different data-sets from BCI competitions and is shown to produce accuracies of between approximately 70% and 85%. This is promising for the use of Wackermann parameters as features in the classification of single-trial ERP responses.
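One of the Wackermann-style global descriptors can be sketched to show what such features capture. The sketch below computes Omega complexity, the entropy-based count of "effective" spatial principal components of a multichannel epoch; the data are synthetic, and the exact parameter definitions used in the paper may differ in detail.

```python
# Hedged sketch of Omega complexity, one of the Wackermann-style global EEG
# descriptors: an entropy of the normalised eigenvalue spectrum of the
# channel covariance. Synthetic data; definitions may differ from the paper's.
import numpy as np

rng = np.random.default_rng(6)

def omega_complexity(epoch):
    """epoch: array of shape (samples, channels). Returns Omega in [1, n_channels]."""
    centered = epoch - epoch.mean(axis=0)
    cov = centered.T @ centered / len(centered)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
    p = eigvals / eigvals.sum()                 # normalised eigenvalue spectrum
    return float(np.exp(-np.sum(p * np.log(p))))

# One dominant spatial pattern -> Omega near 1; independent channels -> near n
coherent = np.outer(np.sin(np.linspace(0, 10, 300)), np.ones(8))
coherent += 0.05 * rng.normal(size=coherent.shape)
independent = rng.normal(size=(300, 8))
```

Low Omega indicates spatially synchronised activity and high Omega near-independent channels, which is why such descriptors can discriminate ERP-bearing trials from background EEG.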
Abstract:
Recently major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single core CPUs, the trend clearly goes towards multi core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems.
This approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and the customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVM), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection. It describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey who helped with publicity for the workshop.
Abstract:
Motivation: Modelling the 3D structures of proteins can often be enhanced if more than one fold template is used during the modelling process. However, in many cases, this may also result in poorer model quality for a given target or alignment method. There is a need for modelling protocols that can both consistently and significantly improve 3D models and provide an indication of when models might not benefit from the use of multiple target-template alignments. Here, we investigate the use of both global and local model quality prediction scores produced by ModFOLDclust2, to improve the selection of target-template alignments for the construction of multiple-template models. Additionally, we evaluate clustering the resulting population of multi- and single-template models for the improvement of our IntFOLD-TS tertiary structure prediction method. Results: We find that using accurate local model quality scores to guide alignment selection is the most consistent way to significantly improve models for each of the sequence to structure alignment methods tested. In addition, using accurate global model quality for re-ranking alignments, prior to selection, further improves the majority of multi-template modelling methods tested. Furthermore, subsequent clustering of the resulting population of multiple-template models significantly improves the quality of selected models compared with the previous version of our tertiary structure prediction method, IntFOLD-TS.
Abstract:
Four commercially available biostimulants sold under the trade names 'Generate', 'Crop Set', 'Fulcrum' and 'Redicrop 2000' were applied either as a root drench or foliar spray to three transplant-sensitive tree species, red oak (Quercus rubra), birch (Betula pendula) and beech (Fagus sylvatica), post transplanting. The short- and long-term efficacy of the biostimulants on growth was quantified by recording root and shoot vigour at weeks 8 and 20. In addition, improvements in tree vitality were assessed by measurement of a chlorophyll a performance index based on leaf chlorophyll fluorescence emissions. Irrespective of species, no significant effect of mode of application (foliar spray versus root drench) was recorded on growth and vitality. The biostimulants Generate and Fulcrum increased growth of all three tree species. No significant effects on growth and chlorophyll fluorescence of birch and beech were recorded following applications of the biostimulants Crop Set and Redicrop 2000; however, a significant increase in growth of red oak was recorded. Only the biostimulant Generate increased chlorophyll fluorescence values of all test species. Results show use of biostimulants can improve root and shoot vigour following transplanting. However, selection of an appropriate biostimulant is critical as effects on growth and vitality can vary widely between tree species, possibly as a result of the differing active ingredients used in the formulation of the products.