977 results for Elicitation, Expert Opinion, Regression
Abstract:
The concept of antibody-mediated targeting of antigenic MHC/peptide complexes on tumor cells in order to sensitize them to T-lymphocyte cytotoxicity represents an attractive new immunotherapy strategy. In vitro experiments have shown that an antibody chemically conjugated or fused to monomeric MHC/peptide can be oligomerized on the surface of tumor cells, rendering them susceptible to efficient lysis by MHC-peptide restricted specific T-cell clones. However, this strategy has not yet been fully tested in vivo in immunocompetent animals. To this end, we took advantage of OT-1 mice, which have a transgenic T-cell receptor specific for the ovalbumin (ova) immunodominant peptide (257-264) expressed in the context of the MHC class I H-2K(b). We prepared and characterized conjugates between the Fab' fragment from a high-affinity monoclonal antibody to carcinoembryonic antigen (CEA) and the H-2K(b)/ova peptide complex. First, we showed in OT-1 mice that the grafting and growth of a syngeneic colon carcinoma line transfected with CEA could be specifically inhibited by systemic injections of the conjugate. Next, using CEA transgenic C57BL/6 mice adoptively transferred with OT-1 spleen cells and immunized with ovalbumin, we demonstrated that systemic injections of the anti-CEA-H-2K(b)/ova conjugate could induce specific growth inhibition and regression of well-established, palpable subcutaneous grafts from the syngeneic CEA-transfected colon carcinoma line. These results, obtained in a well-characterized syngeneic carcinoma model, demonstrate that the antibody-MHC/peptide strategy can function in vivo. Further preclinical experimental studies, using an anti-viral T-cell response, will be performed before this new form of immunotherapy can be considered for clinical use.
Abstract:
Aim This study used data from temperate forest communities to assess: (1) five different stepwise selection methods with generalized additive models, (2) the effect of weighting absences to ensure a prevalence of 0.5, (3) the effect of limiting absences beyond the environmental envelope defined by presences, (4) four different methods for incorporating spatial autocorrelation, and (5) the effect of integrating an interaction factor defined by a regression tree on the residuals of an initial environmental model. Location State of Vaud, western Switzerland. Methods Generalized additive models (GAMs) were fitted using the grasp package (generalized regression analysis and spatial predictions, http://www.cscf.ch/grasp). Results Model selection based on cross-validation appeared to be the best compromise between model stability and performance (parsimony) among the five methods tested. Weighting absences returned models that perform better than models fitted with the original sample prevalence. This appeared to be mainly due to the impact of very low prevalence values on evaluation statistics. Removing zeroes beyond the range of presences on main environmental gradients changed the set of selected predictors, and potentially their response curve shape. Moreover, removing zeroes slightly improved model performance and stability when compared with the baseline model on the same data set. Incorporating a spatial trend predictor improved model performance and stability significantly. Even better models were obtained when including local spatial autocorrelation. A novel approach to include interactions proved to be an efficient way to account for interactions between all predictors at once. 
Main conclusions Models and spatial predictions of 18 forest communities were significantly improved by using either: (1) cross-validation as a model selection method, (2) weighted absences, (3) limited absences, (4) predictors accounting for spatial autocorrelation, or (5) a factor variable accounting for interactions between all predictors. The final choice of model strategy should depend on the nature of the available data and the specific study aims. Statistical evaluation is useful in searching for the best modelling practice. However, one should not neglect to consider the shapes and interpretability of response curves, as well as the resulting spatial predictions in the final assessment.
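The absence-weighting idea in point (2) can be sketched in a few lines. The study fitted GAMs with the grasp package in R; the Python sketch below (synthetic data, hypothetical variable names) only illustrates the weighting itself: absences are down- or up-weighted so that each class carries half of the total weight, giving a weighted prevalence of 0.5 regardless of the sample prevalence.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic presence/absence data with low sample prevalence.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 1.3).astype(int)

# Weight observations so the weighted prevalence is 0.5:
# each class contributes exactly half of the total weight.
n_pres, n_abs = y.sum(), (y == 0).sum()
w = np.where(y == 1, 0.5 / n_pres, 0.5 / n_abs) * len(y)

model = LogisticRegression().fit(X, y, sample_weight=w)
```

A logistic regression stands in for the GAM here; the weighting scheme is the same in either model family.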
Abstract:
The paper develops a method to solve higher-dimensional stochastic control problems in continuous time. A finite difference type approximation scheme is used on a coarse grid of low discrepancy points, while the value function at intermediate points is obtained by regression. The stability properties of the method are discussed, and applications are given to test problems of up to 10 dimensions. Accurate solutions to these problems can be obtained on a personal computer.
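A minimal sketch of the regression step described above, assuming a scrambled Sobol sequence as the low-discrepancy grid and a quadratic polynomial basis for the regression. The grid values here come from a closed-form stand-in; in the paper they would be produced by the finite-difference scheme.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

d = 5  # dimension of the state space (illustrative)
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
grid = sobol.random(256)  # coarse low-discrepancy grid in [0, 1]^d

# Stand-in "value function" known only on the coarse grid.
v_grid = np.sum(grid**2, axis=1)

# Regression step: approximate the value function at intermediate points.
poly = PolynomialFeatures(degree=2, include_bias=True)
reg = LinearRegression().fit(poly.fit_transform(grid), v_grid)

x_new = np.full((1, d), 0.5)          # an intermediate point
v_hat = reg.predict(poly.transform(x_new))[0]
```

Because the stand-in value function is itself quadratic, the regression recovers it almost exactly; for a genuine control problem the basis and grid size would need to grow with the dimension.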
Abstract:
This study represents the most extensive analysis of batch-to-batch variations in spray paint samples to date. The survey was performed as a collaborative project of the ENFSI (European Network of Forensic Science Institutes) Paint and Glass Working Group (EPG) and involved 11 laboratories. Several studies have already shown that paint samples of similar color but from different manufacturers can usually be differentiated using an appropriate analytical sequence. The discrimination of paints from the same manufacturer and color (batch-to-batch variations) is of great interest and these data are seldom found in the literature. This survey concerns the analysis of batches from different color groups (white, papaya (special shade of orange), red and black) with a wide range of analytical techniques and leads to the following conclusions. Colored batch samples are more likely to be differentiated since their pigment composition is more complex (pigment mixtures, added pigments) and therefore subject to variations. These variations may occur during the paint production but may also occur when checking the paint shade in quality control processes. For these samples, techniques aimed at color/pigment(s) characterization (optical microscopy, microspectrophotometry (MSP), Raman spectroscopy) provide better discrimination than techniques aimed at the organic (binder) or inorganic composition (Fourier transform infrared spectroscopy (FTIR) or elemental analysis (SEM - scanning electron microscopy and XRF - X-ray fluorescence)). White samples contain mainly titanium dioxide as a pigment and the main differentiation is based on the binder composition (C-H stretches) detected either by FTIR or Raman. The inorganic composition (elemental analysis) also provides some discrimination. Black samples contain mainly carbon black as a pigment and are problematic with most of the spectroscopic techniques.
In this case, pyrolysis-GC/MS represents the best technique to detect differences. Globally, Py-GC/MS may show a high potential of discrimination on all samples but the results are highly dependent on the specific instrumental conditions used. Finally, visual interpretation of the data and statistical interpretation using principal component analysis (PCA) yielded very similar discrimination results. PCA increases sensitivity and could perform better on specific samples, but one first has to ensure that all non-informative variation (baseline deviation) is eliminated by applying correct pre-treatments. Statistical treatments can be used on a large data set and, when combined with an expert's opinion, will provide more objective criteria for decision making.
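The pre-treatment point above can be illustrated with a toy example: before PCA, a fitted linear baseline is removed from each spectrum so that baseline deviation does not dominate the components. The spectra, peak shapes, and baseline slopes below are entirely synthetic stand-ins for real paint spectra.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
wavenumbers = np.linspace(0, 1, 200)

# Synthetic spectra: same peak shape, different amplitudes and
# different random linear baselines (the non-informative variation).
peak = np.exp(-((wavenumbers - 0.5) ** 2) / 0.002)
spectra = np.array([
    amp * peak + slope * wavenumbers + rng.normal(scale=0.01, size=200)
    for amp, slope in [(1.0, 0.5), (1.0, -0.3), (1.2, 0.8), (1.2, 0.1)]
])

# Pre-treatment: subtract a least-squares linear baseline per spectrum.
def detrend(s):
    coef = np.polyfit(wavenumbers, s, 1)
    return s - np.polyval(coef, wavenumbers)

corrected = np.apply_along_axis(detrend, 1, spectra)
scores = PCA(n_components=2).fit_transform(corrected)
```

Real forensic workflows use more careful baseline models (polynomial, asymmetric least squares); the linear detrend here is only the simplest instance of the idea.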
Abstract:
In the fixed design regression model, additional weights are considered for the Nadaraya-Watson and Gasser-Müller kernel estimators. We study their asymptotic behavior and the relationships between new and classical estimators. For a simple family of weights, and considering the IMSE as global loss criterion, we show some possible theoretical advantages. An empirical study illustrates the performance of the weighted estimators in finite samples.
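A weighted Nadaraya-Watson estimator of the kind discussed above can be written in a few lines; the Gaussian kernel, bandwidth, and test function here are illustrative choices, not the paper's specification, and the `w` argument carries the additional weights.

```python
import numpy as np

def nadaraya_watson(x0, x, y, h, w=None):
    """Nadaraya-Watson estimate at x0, with optional extra weights w."""
    if w is None:
        w = np.ones_like(y)
    k = np.exp(-0.5 * ((x0 - x) / h) ** 2)  # Gaussian kernel
    return np.sum(w * k * y) / np.sum(w * k)

# Fixed design on [0, 1], noisy observations of a smooth trend.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=100)

est = nadaraya_watson(0.25, x, y, h=0.05)  # true value sin(pi/2) = 1
```

Setting all weights to one recovers the classical estimator, so the weighted and unweighted versions can be compared directly on the same data.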
Abstract:
In this paper we examine the determinants of wages and decompose the observed differences across genders into the "explained by different characteristics" and "explained by different returns" components using a sample of Spanish workers. Apart from the conditional expectation of wages, we estimate the conditional quantile functions for men and women and find that both the absolute wage gap and the part attributed to different returns at each of the quantiles, far from being well represented by their counterparts at the mean, are greater as we move up in the wage range.
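The two-part decomposition described above is easiest to see at the mean (the Oaxaca-Blinder form); the paper extends it to quantiles via quantile regression. The sketch below uses synthetic log-wage data with hypothetical coefficients, and splits the mean gap into a characteristics term and a returns term.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic log wages for two groups: intercept + one characteristic
# (say, years of education), with different returns by group.
x_m = np.column_stack([np.ones(n), rng.normal(12, 2, n)])
x_w = np.column_stack([np.ones(n), rng.normal(12, 2, n)])
y_m = x_m @ np.array([1.0, 0.10]) + rng.normal(0, 0.3, n)
y_w = x_w @ np.array([0.9, 0.08]) + rng.normal(0, 0.3, n)

b_m, *_ = np.linalg.lstsq(x_m, y_m, rcond=None)
b_w, *_ = np.linalg.lstsq(x_w, y_w, rcond=None)

gap = y_m.mean() - y_w.mean()
explained = (x_m.mean(0) - x_w.mean(0)) @ b_w  # different characteristics
returns = x_m.mean(0) @ (b_m - b_w)            # different returns
```

At the mean the two terms sum to the gap exactly (an OLS identity); at a given quantile the analogous decomposition uses quantile-regression coefficients and no longer closes exactly.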
Genetic Variations and Diseases in UniProtKB/Swiss-Prot: The Ins and Outs of Expert Manual Curation.
Abstract:
During the last few years, next-generation sequencing (NGS) technologies have accelerated the detection of genetic variants resulting in the rapid discovery of new disease-associated genes. However, the wealth of variation data made available by NGS alone is not sufficient to understand the mechanisms underlying disease pathogenesis and manifestation. Multidisciplinary approaches combining sequence and clinical data with prior biological knowledge are needed to unravel the role of genetic variants in human health and disease. In this context, it is crucial that these data are linked, organized, and made readily available through reliable online resources. The Swiss-Prot section of the Universal Protein Knowledgebase (UniProtKB/Swiss-Prot) provides the scientific community with a collection of information on protein functions, interactions, biological pathways, as well as human genetic diseases and variants, all manually reviewed by experts. In this article, we present an overview of the information content of UniProtKB/Swiss-Prot to show how this knowledgebase can support researchers in the elucidation of the mechanisms leading from a molecular defect to a disease phenotype.
Abstract:
The objective of this paper is to compare the performance of two predictive radiological models, logistic regression (LR) and neural network (NN), with five different resampling methods. One hundred and sixty-seven patients with proven calvarial lesions as the only known disease were enrolled. Clinical and CT data were used for LR and NN models. Both models were developed with cross-validation, leave-one-out and three different bootstrap algorithms. The final results of each model were compared with error rate and the area under receiver operating characteristic curves (Az). The neural network obtained statistically higher Az than LR with cross-validation. The remaining resampling validation methods did not reveal statistically significant differences between LR and NN rules. The neural network classifier performs better than the one based on logistic regression. This advantage is well detected by three-fold cross-validation, but remains unnoticed when leave-one-out or bootstrap algorithms are used.
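The core comparison above, two classifiers evaluated by cross-validated AUC, can be sketched as follows. The data are synthetic stand-ins for the clinical and CT features, and the model settings are illustrative, not those of the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the clinical/CT feature matrix and labels.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# Three-fold cross-validated AUC (the paper's Az) for each classifier.
lr_auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=3, scoring="roc_auc")
nn_auc = cross_val_score(MLPClassifier(max_iter=2000, random_state=0), X, y,
                         cv=3, scoring="roc_auc")
```

Swapping `cv=3` for `LeaveOneOut()` or a bootstrap loop reproduces the other resampling schemes the paper compares; with leave-one-out, per-fold AUC is undefined, which is one reason different resampling methods can tell different stories.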
Abstract:
This paper proposes a common and tractable framework for analyzing different definitions of fixed and random effects in a constant-slope variable-intercept model. It is shown that, regardless of whether effects (i) are treated as parameters or as an error term, (ii) are estimated in different stages of a hierarchical model, or whether (iii) correlation between effects and regressors is allowed, when the same information on effects is introduced into all estimation methods, the resulting slope estimator is also the same across methods. If different methods produce different results, it is ultimately because different information is being used for each method.
Abstract:
This paper shows how recently developed regression-based methods for the decomposition of health inequality can be extended to incorporate individual heterogeneity in the responses of health to the explanatory variables. We illustrate our method with an application to the Canadian NPHS of 1994. Our strategy for the estimation of heterogeneous responses is based on the quantile regression model. The results suggest that there is an important degree of heterogeneity in the association of health to explanatory variables which, in turn, accounts for a substantial percentage of inequality in observed health. A particularly interesting finding is that the marginal response of health to income is zero for healthy individuals but positive and significant for unhealthy individuals. The heterogeneity in the income response reduces both overall health inequality and income-related health inequality.
Abstract:
Summary points:
- The bias introduced by random measurement error will be different depending on whether the error is in an exposure variable (risk factor) or outcome variable (disease)
- Random measurement error in an exposure variable will bias the estimates of regression slope coefficients towards the null
- Random measurement error in an outcome variable will instead increase the standard error of the estimates and widen the corresponding confidence intervals, making results less likely to be statistically significant
- Increasing sample size will help minimise the impact of measurement error in an outcome variable but will only make estimates more precisely wrong when the error is in an exposure variable
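The asymmetry in these summary points is easy to verify by simulation. In the sketch below (synthetic data, true slope 2, equal error and signal variances), error in the exposure attenuates the slope toward the null by the factor var(x)/(var(x)+var(error)) = 1/2, while error in the outcome leaves the slope near 2 and only adds noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)             # true exposure
y = 2.0 * x + rng.normal(size=n)   # outcome; true slope = 2

def slope(a, b):
    """Simple-regression slope of b on a."""
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

x_err = x + rng.normal(size=n)     # random error in the exposure
y_err = y + rng.normal(size=n)     # random error in the outcome

b_exposure_err = slope(x_err, y)   # attenuated toward 0: about 2 * 1/2 = 1
b_outcome_err = slope(x, y_err)    # still about 2, just less precise
```

Increasing `n` shrinks the standard error of both estimates, but `b_exposure_err` converges to the attenuated value of 1, not to 2: more precisely wrong, exactly as the last summary point states.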
Abstract:
[Sale. Art. 1856-11-19. Paris]
Abstract:
[Sale. Art. 1858-03-18. Paris]