838 results for Multiple methods framework
Abstract:
Relationships between organisms within an ecosystem are one of the main focuses of the study of ecology and evolution. For instance, host-parasite interactions have long been of close interest to ecology, evolutionary biology and conservation science, due to the great variety of strategies and interaction outcomes. Monogenean ecto-parasites constitute a significant portion of the flatworms. Gyrodactylus salaris is a monogenean freshwater ecto-parasite of Atlantic salmon (Salmo salar) whose damage can make fish prone to further bacterial and fungal infections. G. salaris is the only monogenean parasite whose genome has been studied so far. The RNA-seq data analyzed in this thesis had already been annotated using LAST. The data were obtained from Illumina sequencing, and the resulting reads were assembled into 15777 transcripts. LAST annotated 46% of the transcripts; the remainder were left unknown. This thesis work started from the whole dataset, and the annotation process was continued using PANNZER, CDD and InterProScan. This resulted in 56% of the sequences being successfully annotated, with parasite-specific proteins identified. This thesis represents the first monogenean transcriptomic resource, providing an important source for further research on this species. Additionally, a comparison of annotation methods revealed, interestingly, that description- and domain-based methods perform better than simple similarity search methods, so the use of these tools and databases is suggested for functional annotation. These results also emphasize the need to use multiple methods and databases, and highlight the need for more genomic information on G. salaris.
Abstract:
To stay competitive, many employers are looking for creative and innovative employees to add value to their organization. However, current models of job performance overlook creative performance as an important criterion to measure in the workplace. The purpose of this dissertation is to conduct two separate but related studies on creative performance that aim to provide support that creative performance should be included in models of job performance, and ultimately included in performance evaluations in organizations. Study 1 is a meta-analysis on the relationship between creative performance and task performance, and the relationship between creative performance and organizational citizenship behavior (OCB). Overall, I found support for a medium to large corrected correlation for both the creative performance-task performance (ρ = .51) and creative performance-OCB (ρ = .49) relationships. Further, I also found that both rating-source and study location were significant moderators. Study 2 is a process model that includes creative performance alongside task performance and OCB as the outcome variables. I test a model in which both individual differences (specifically: conscientiousness, extraversion, proactive personality, and self-efficacy) and job characteristics (autonomy, feedback, and supervisor support) predict creative performance, task performance, and OCB through engagement as a mediator. In a sample of 299 employed individuals, I found that all the individual differences and job characteristics were positively correlated with all three performance criteria. I also looked at these relationships in a multiple regression framework and most of the individual differences and job characteristics still predicted the performance criteria. In the mediation analyses, I found support for engagement as a significant mediator of the individual differences-performance and job characteristics-performance relationships. 
Taken together, Study 1 and Study 2 support the notion that creative performance should be included in models of job performance. Implications for both researchers and practitioners are discussed.
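The corrected correlations (ρ) reported in Study 1 come from standard psychometric meta-analysis: a sample-size-weighted mean of observed correlations, disattenuated for measurement unreliability. A minimal sketch of that idea, with invented sample sizes, correlations, and reliabilities purely for illustration (not taken from the dissertation):

```python
import math

def weighted_mean_r(studies):
    """Sample-size-weighted mean of observed correlations across studies."""
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

def correct_for_unreliability(r, rxx, ryy):
    """Disattenuate r for measurement error in both variables (rho estimate)."""
    return r / math.sqrt(rxx * ryy)

# Hypothetical studies: (sample size, observed correlation)
studies = [(120, 0.40), (300, 0.35), (80, 0.45)]
r_bar = weighted_mean_r(studies)                              # ~0.378
rho = correct_for_unreliability(r_bar, rxx=0.80, ryy=0.75)    # ~0.488
```

The correction always increases the magnitude of the estimate, which is why corrected ρ values such as .51 and .49 exceed typical observed correlations.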
Abstract:
When competing strategies for development programs, clinical trial designs, or data analysis methods exist, the alternatives need to be evaluated in a systematic way to facilitate informed decision making. Here we describe a refinement of the recently proposed clinical scenario evaluation framework for the assessment of competing strategies. The refinement is achieved by subdividing the previously proposed key elements into new categories, distinguishing between quantities that can be estimated from preexisting data and those that cannot, and between aspects under the control of the decision maker and those determined by external constraints. The refined framework is illustrated by an application to a design project for an adaptive seamless design for a clinical trial in progressive multiple sclerosis.
Abstract:
The naturally occurring clonal diversity among field isolates of the major human malaria parasite Plasmodium vivax remained unexplored until the early 1990s, when improved molecular methods allowed the use of blood samples obtained directly from patients, without prior in vitro culture, for genotyping purposes. Here we briefly review the molecular strategies currently used to detect genetically distinct clones in patient-derived P. vivax samples, present evidence that multiple-clone P. vivax infections are commonly detected in areas with different levels of malaria transmission and discuss possible evolutionary and epidemiological consequences of the competition between genetically distinct clones in natural human infections. We suggest that, when two or more genetically distinct clones are present in the same host, intra-host competition for limited resources may select for P. vivax traits that represent major public health challenges, such as increased virulence, increased transmissibility and antimalarial drug resistance.
Abstract:
There are many techniques for electricity market price forecasting. However, most of them are designed for expected-price analysis rather than price spike forecasting, and an effective method for predicting the occurrence of spikes has not yet been reported in the literature. In this paper, a data mining based approach is presented to give a reliable forecast of the occurrence of price spikes. Combined with the spike value prediction techniques developed by the same authors, the proposed approach aims to provide a comprehensive tool for price spike forecasting. Feature selection techniques are first described to identify the attributes relevant to the occurrence of spikes, and a brief introduction to the classification techniques is given for completeness. Two algorithms, the support vector machine and a probability classifier, are chosen as the spike occurrence predictors and are discussed in detail. Realistic market data are used to test the proposed model, with promising results.
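One plausible form of the "probability classifier" named above is Gaussian naive Bayes over market attributes. The sketch below is an illustration of that general idea, not the authors' model; the features (demand level, reserve margin) and data are invented:

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian naive Bayes classifier (spike = 1, no spike = 0)."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for xi, yi in zip(X, y):
            groups[yi].append(xi)
        self.stats = {}
        for label, rows in groups.items():
            prior = len(rows) / len(y)
            cols = list(zip(*rows))
            means = [sum(c) / len(c) for c in cols]
            # Variance floor avoids division by zero for constant features
            variances = [max(sum((v - m) ** 2 for v in c) / len(c), 1e-9)
                         for c, m in zip(cols, means)]
            self.stats[label] = (prior, means, variances)
        return self

    def predict(self, x):
        def log_posterior(label):
            prior, means, variances = self.stats[label]
            ll = math.log(prior)
            for v, m, s2 in zip(x, means, variances):
                ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
            return ll
        return max(self.stats, key=log_posterior)

# Hypothetical training data: [normalised demand, reserve margin]
X = [[0.90, 0.10], [0.80, 0.20], [0.85, 0.15],   # spike hours
     [0.30, 0.70], [0.40, 0.60], [0.35, 0.65]]   # normal hours
y = [1, 1, 1, 0, 0, 0]
clf = GaussianNB().fit(X, y)
```

Calling `clf.predict([0.88, 0.12])` then flags a high-demand, low-margin hour as a likely spike.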
Abstract:
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
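Of the prediction methods listed above, the quantitative matrix is the simplest to make concrete: each matrix cell gives a weight for one amino acid at one peptide position, and a peptide's score is the sum over positions. The toy 3-position matrix below is invented for illustration (real MHC matrices typically cover 9-mers over all 20 residues):

```python
# Hypothetical position-specific weight matrix for a 3-residue peptide
matrix = [
    {"A": 1.2, "L": 0.8, "G": -0.5},   # position 1
    {"A": 0.1, "L": 1.5, "G": -1.0},   # position 2
    {"A": -0.2, "L": 0.4, "G": 2.0},   # position 3
]

def score_peptide(peptide, matrix, default=-2.0):
    """Sum position-specific weights; residues absent from the matrix
    receive a fixed penalty."""
    assert len(peptide) == len(matrix)
    return sum(pos.get(res, default) for pos, res in zip(matrix, peptide))

# Rank candidate peptides from best to worst predicted binder
ranked = sorted(["ALG", "GGA", "LLL"],
                key=lambda p: score_peptide(p, matrix), reverse=True)
```

Binding motifs are the degenerate case of this scheme (weights of 1 for anchor residues, 0 elsewhere), which is why matrices usually outperform them.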
Abstract:
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing traditional idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, a relatively new idea that allows individual assessment of predictions. The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, where the use of non-parametric methods such as decision trees and generalized additive models is promoted to identify important variables and their modelling relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. This paper is motivated by a medical problem where interest focuses on developing a risk stratification system for morbidity of 1,710 cardiac patients given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied specifically to this case study, they can be applied across any field, irrespective of the type of response.
Abstract:
In the last decade, local image features have been widely used in robot visual localization. In order to assess image similarity, a strategy exploiting these features compares raw descriptors extracted from the current image with those in the models of places. This paper addresses the ensuing step in this process, where a combining function must be used to aggregate results and assign each place a score. Casting the problem in the multiple classifier systems framework, in this paper we compare several candidate combiners with respect to their performance in the visual localization task. For this evaluation, we selected the most popular methods in the class of non-trained combiners, namely the sum rule and product rule. A deeper insight into the potential of these combiners is provided through a discriminativity analysis involving the algebraic rules and two extensions of these methods: the threshold, as well as the weighted modifications. In addition, a voting method, previously used in robot visual localization, is assessed. Furthermore, we address the process of constructing a model of the environment by describing how the model granularity impacts upon performance. All combiners are tested on a visual localization task, carried out on a public dataset. It is experimentally demonstrated that the sum rule extensions globally achieve the best performance, confirming the general agreement on the robustness of this rule in other classification problems. The voting method, whilst competitive with the product rule in its standard form, is shown to be outperformed by its modified versions.
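The non-trained combiners compared above are simple to state. As a minimal sketch (the place names and scores are invented; each dict holds one classifier's per-place scores), the three basic rules differ only in how they aggregate:

```python
import math
from collections import Counter

def sum_rule(score_lists):
    """Add each place's scores across classifiers; return the top place."""
    totals = Counter()
    for scores in score_lists:
        totals.update(scores)   # Counter.update adds values per key
    return max(totals, key=totals.get)

def product_rule(score_lists):
    """Multiply each place's scores across classifiers."""
    places = score_lists[0].keys()
    products = {p: math.prod(s[p] for s in score_lists) for p in places}
    return max(products, key=products.get)

def majority_vote(score_lists):
    """Each classifier votes for its own top place; plurality wins."""
    votes = Counter(max(s, key=s.get) for s in score_lists)
    return votes.most_common(1)[0][0]

# Three hypothetical feature matchers scoring two candidate places
scores = [
    {"kitchen": 0.5, "lab": 0.4},
    {"kitchen": 0.2, "lab": 0.7},
    {"kitchen": 0.6, "lab": 0.3},
]
```

On this toy input the sum and product rules both pick "lab" while voting picks "kitchen": voting discards score magnitudes, which is one reason the (weighted) sum rule tends to be more robust, as the experiments above confirm.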
Abstract:
Work presented within the scope of the European Master in Computational Logics, as a partial requirement for obtaining the degree of Master in Computational Logics
Abstract:
CD6 has recently been identified and validated as a risk gene for multiple sclerosis (MS), based on the association of a single nucleotide polymorphism (SNP), rs17824933, located in intron 1. CD6 is a cell surface scavenger receptor involved in T-cell activation and proliferation, as well as in thymocyte differentiation. In this study, we performed a haptag SNP screen of the CD6 gene locus using a total of thirteen tagging SNPs, of which three were non-synonymous SNPs, and replicated the recently reported GWAS SNP rs650258 in a Spanish-Basque collection of 814 controls and 823 cases. Validation of the six most strongly associated SNPs was performed in an independent collection of 2265 MS patients and 2600 healthy controls. We identified association of haplotypes composed of two non-synonymous SNPs [rs11230563 (R225W) and rs2074225 (A257V)] in the second SRCR domain with susceptibility to MS (P = 1×10⁻⁴, max(T) permutation). The effect of these haplotypes on CD6 surface expression and cytokine secretion was also tested. The analysis showed significantly different CD6 expression patterns in the distinct cell subsets, i.e. CD4⁺ naïve cells, P = 0.0001; CD8⁺ naïve cells, P < 0.0001; CD4⁺ and CD8⁺ central memory cells, P = 0.01 and 0.05, respectively; and natural killer T (NKT) cells, P = 0.02; with the protective haplotype (RA) showing higher expression of CD6. However, no significant changes were observed in natural killer (NK) cells, effector memory and terminally differentiated effector memory T cells. Our findings reveal that this new MS-associated CD6 risk haplotype significantly modifies expression of CD6 on CD4⁺ and CD8⁺ T cells.
Abstract:
In medical imaging, merging automated segmentations obtained from multiple atlases has become a standard practice for improving accuracy. In this letter, we propose two new fusion methods: "Global Weighted Shape-Based Averaging" (GWSBA) and "Local Weighted Shape-Based Averaging" (LWSBA). These methods extend the well-known Shape-Based Averaging (SBA) by additionally incorporating the similarity between the reference (i.e., atlas) images and the target image to be segmented. We also propose a new spatially varying similarity-weighted neighborhood prior model, and an edge-preserving smoothness term that can be used with many of the existing fusion methods. We first present our new Markov Random Field (MRF) based fusion framework that models the above information. The proposed methods are evaluated in the context of segmentation of lymph nodes in 3D CT images of the head and neck, and they resulted in more accurate segmentations than the existing SBA.
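The core idea of similarity weighting can be illustrated with plain weighted label voting (a simplification: SBA itself averages signed distance maps rather than labels, and this sketch omits the MRF prior and smoothness term). Each atlas casts a vote at every voxel, weighted by its similarity to the target; the data below are invented:

```python
def weighted_label_fusion(atlas_labels, weights):
    """Per-voxel label fusion: each atlas votes with its similarity weight.

    atlas_labels: one flat label list per atlas (same length each).
    weights: one global similarity weight per atlas (the 'global' variant;
    a local variant would instead use a weight per atlas per voxel).
    """
    n_voxels = len(atlas_labels[0])
    fused = []
    for v in range(n_voxels):
        tally = {}
        for labels, w in zip(atlas_labels, weights):
            tally[labels[v]] = tally.get(labels[v], 0.0) + w
        fused.append(max(tally, key=tally.get))   # highest-weighted label wins
    return fused

# Three hypothetical atlas segmentations of a 4-voxel image (0 = background)
atlases = [[0, 1, 1, 0],
           [0, 1, 0, 0],
           [1, 1, 1, 0]]
similarity_weights = [0.5, 0.3, 0.2]   # atlas-to-target similarity (invented)
fused = weighted_label_fusion(atlases, similarity_weights)
```

Unweighted majority voting is the special case of equal weights, which makes the benefit of similarity weighting easy to see: a single well-matched atlas can outvote two poorly matched ones.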
Abstract:
Harmonisation of analytical results is investigated in this study as an alternative to the restrictive approach of harmonising analytical methods, which is currently recommended to enable the exchange of information in support of the fight against illicit drug trafficking. Indeed, the main goal of this study is to demonstrate that a common database can be fed by a range of different analytical methods, whatever the differences in analytical parameters between them. For this purpose, a methodology was developed for estimating, and even optimising, the similarity of results coming from different analytical methods. In particular, the possibility of introducing chemical profiles obtained with Fast GC-FID into a GC-MS database is studied in this paper. Using this methodology, the similarity of results from different analytical methods can be objectively assessed, and the practical utility of sharing a database across these methods can be evaluated according to the profiling purpose (evidential vs. operational tool). This methodology can be regarded as a relevant approach for feeding a database from different analytical methods, and it calls into question the necessity of analysing all illicit drug seizures in a single laboratory or of harmonising analytical methods in each participating laboratory.
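Cross-method similarity of chemical profiles is typically computed with a standard profile-comparison metric; cosine similarity over normalised peak areas is one common choice and is used here purely for illustration (the study does not specify its metric, and the peak areas below are invented):

```python
import math

def cosine_similarity(profile_a, profile_b):
    """Cosine similarity between two relative peak-area profiles (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(profile_a, profile_b))
    norm_a = math.sqrt(sum(a * a for a in profile_a))
    norm_b = math.sqrt(sum(b * b for b in profile_b))
    return dot / (norm_a * norm_b)

# Hypothetical relative peak areas of four target compounds in one seizure,
# measured on two different instruments
gcms_profile   = [0.40, 0.30, 0.20, 0.10]   # GC-MS
fastgc_profile = [0.38, 0.32, 0.21, 0.09]   # Fast GC-FID
similarity = cosine_similarity(gcms_profile, fastgc_profile)
```

A similarity close to 1 for the same seizure analysed on both instruments is the kind of evidence that supports feeding one shared database from both methods.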
Abstract:
Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement on critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
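Holm's (1979) stepdown procedure mentioned above fits in a few lines, and makes the monotonicity of critical values concrete: the thresholds α/(k−i) grow as smaller p-values are rejected, and the procedure stops at the first failure. This is a minimal illustration; the paper's resampling-based methods replace these fixed constants with estimated critical values.

```python
def holm_stepdown(pvalues, alpha=0.05):
    """Holm's stepdown procedure: walk through sorted p-values, rejecting
    while p_(i) <= alpha / (k - i); FWE is controlled under any dependence.
    Returns a rejection flag per hypothesis, in the original order."""
    k = len(pvalues)
    order = sorted(range(k), key=lambda i: pvalues[i])
    rejected = [False] * k
    for step, idx in enumerate(order):
        if pvalues[idx] <= alpha / (k - step):   # threshold grows each step
            rejected[idx] = True
        else:
            break   # stepdown stops at the first non-rejection
    return rejected

rejections = holm_stepdown([0.001, 0.015, 0.04, 0.3])
```

Here the first two hypotheses are rejected (0.001 ≤ 0.05/4 and 0.015 ≤ 0.05/3), while 0.04 > 0.05/2 stops the procedure, so the last two survive.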