112 results for Reading and Interpretation of Statistical Graphs
at Université de Lausanne, Switzerland
Abstract:
Research in autophagy continues to accelerate,(1) and as a result many new scientists are entering the field. Accordingly, it is important to establish a standard set of criteria for monitoring macroautophagy in different organisms. Recent reviews have described the range of assays that have been used for this purpose.(2,3) There are many useful and convenient methods that can be used to monitor macroautophagy in yeast, but relatively few in other model systems, and there is much confusion regarding acceptable methods to measure macroautophagy in higher eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers of autophagosomes versus those that measure flux through the autophagy pathway; thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from fully functional autophagy that includes delivery to, and degradation within, lysosomes (in most higher eukaryotes) or the vacuole (in plants and fungi). Here, we present a set of guidelines for the selection and interpretation of the methods that can be used by investigators who are attempting to examine macroautophagy and related processes, as well as by reviewers who need to provide realistic and reasonable critiques of papers that investigate these processes. This set of guidelines is not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to verify an autophagic response.
Abstract:
In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
Abstract:
Swiss death certification data over the period 1951-1984 for total cancer mortality and 30 major cancer sites in the population aged 25 to 74 years were analysed using a log-linear Poisson model with arbitrary constraints on the parameters to isolate the effects of birth cohort, calendar period of death and age. The overall pattern of total cancer mortality in males was stable for period values and showed some moderate decreases in cohort values restricted to the generations born after 1930. Cancer mortality trends were more favourable in females, with steady, though moderate, declines in both cohort and period values. According to the estimates from the model, the worst-affected generation for male lung cancer was that born around 1910, and a flattening of trends or some moderate decline was observed for more recent cohorts, although this decline was considerably more limited than in other European countries. There were decreases in cohort and period values for stomach, intestine and oesophageal cancer in both sexes and cervix uteri in females. Increases were observed in both cohort and period trends for pancreas and liver in males and for several other neoplasms, including prostate, brain, leukaemias and lymphomas, restricted, however, for the latter sites, to the earlier cohorts and hence partly attributable to improved diagnosis and certification in the elderly. Although age values for lung cancer in females were around 10 times lower than in males, upward trends in female lung cancer cohort values were observed in successive cohorts and for period values from the late 1960s onwards. Therefore, future trends in female lung cancer mortality should continue to be monitored. The application of these age/period/cohort models thus provides a summary guide for the reading and interpretation of cancer mortality trends, although it cannot replace careful inspection of single age-specific rates.
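For readers unfamiliar with age-period-cohort decompositions of this kind, a minimal sketch of a log-linear Poisson model is given below in Python with statsmodels. The input file, column names, and the particular identifying constraint are illustrative assumptions, not the study's actual code.

```python
# Minimal sketch of a log-linear Poisson age-period-cohort model in the
# spirit of the analysis above (illustrative, not the original code).
# The input file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per age group x calendar period: death counts and person-years.
df = pd.read_csv("swiss_cancer_mortality.csv")  # columns: age, period, deaths, pyears
df["cohort"] = df["period"] - df["age"]         # birth cohort = period - age

# Because cohort = period - age, the three categorical effects share one
# exact linear dependency; mirroring the "arbitrary constraints" mentioned
# in the abstract, we identify the model by equating the two oldest
# cohort effects (one standard, but arbitrary, choice).
cohorts = sorted(df["cohort"].unique())
df["cohort_c"] = df["cohort"].replace({cohorts[0]: cohorts[1]})

# Poisson regression of death counts with log person-years as offset.
model = smf.glm(
    "deaths ~ C(age) + C(period) + C(cohort_c)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["pyears"]),
).fit()
print(model.summary())
```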
Abstract:
Cell death is essential for a plethora of physiological processes, and its deregulation characterizes numerous human diseases. Thus, the in-depth investigation of cell death and its mechanisms constitutes a formidable challenge for fundamental and applied biomedical research, and has tremendous implications for the development of novel therapeutic strategies. It is, therefore, of utmost importance to standardize the experimental procedures that identify dying and dead cells in cell cultures and/or in tissues, from model organisms and/or humans, in healthy and/or pathological scenarios. Thus far, dozens of methods have been proposed to quantify cell death-related parameters. However, no guidelines exist regarding their use and interpretation, and nobody has thoroughly annotated the experimental settings for which each of these techniques is most appropriate. Here, we provide a nonexhaustive comparison of methods to detect cell death with apoptotic or nonapoptotic morphologies, their advantages and pitfalls. These guidelines are intended for investigators who study cell death, as well as for reviewers who need to constructively critique scientific reports that deal with cellular demise. Given the difficulties in determining the exact number of cells that have passed the point-of-no-return of the signaling cascades leading to cell death, we emphasize the importance of performing multiple, methodologically unrelated assays to quantify dying and dead cells.
Abstract:
Next-generation sequencing offers an unprecedented opportunity to jointly analyze cellular and viral transcriptional activity without prerequisite knowledge of the nature of the transcripts. SupT1 cells were infected with a vesicular stomatitis virus G envelope protein (VSV-G)-pseudotyped HIV vector. At 24 h postinfection, both cellular and viral transcriptomes were analyzed by serial analysis of gene expression followed by high-throughput sequencing (SAGE-Seq). Read mapping resulted in 33 to 44 million tags aligning with the human transcriptome and 0.23 to 0.25 million tags aligning with the genome of the HIV-1 vector. Thus, at peak infection, 1 transcript in 143 is of viral origin (0.7%), including a small component of antisense viral transcription. Of the detected cellular transcripts, 826 (2.3%) were differentially expressed between mock- and HIV-infected samples. The approach also assessed whether HIV-1 infection modulates the expression of repetitive elements or endogenous retroviruses. We observed very active transcription of these elements, with 1 transcript in 237 being of such origin, corresponding on average to 123,123 reads in mock-infected samples (0.40%) and 129,149 reads in HIV-1-infected samples (0.45%) mapping to the genomic Repbase repository. This analysis highlights key details in the generation and interpretation of high-throughput data in the setting of HIV-1 cellular infection.
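As a quick sanity check of the proportions quoted above, a back-of-envelope calculation, with read counts approximated from the ranges given in the abstract:

```python
# Back-of-envelope check of the proportions quoted above, using read
# counts approximated from the ranges given in the abstract.
human_tags = 35.5e6   # ~33-44 million tags mapped to the human transcriptome
viral_tags = 0.25e6   # ~0.23-0.25 million tags mapped to the HIV-1 vector

frac_viral = viral_tags / (human_tags + viral_tags)
print(f"viral fraction: {frac_viral:.2%}")               # ~0.70%
print(f"about 1 transcript in {round(1 / frac_viral)}")  # ~1 in 143
```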
Abstract:
Results of plasma or urinary amino acid analyses are used for suspicion, confirmation or exclusion of diagnosis, monitoring of treatment, prevention and prognosis in inborn errors of amino acid metabolism. The concentrations in plasma or whole blood do not necessarily reflect the relevant metabolite concentrations in organs such as the brain or in cell compartments; this is especially the case in disorders that are not solely expressed in the liver and/or in those which also affect nonessential amino acids. Basic biochemical knowledge has added much to the understanding of zonation and compartmentation of expressed proteins and metabolites in organs, cells and cell organelles. In this paper, selected old and new biochemical findings in PKU, urea cycle disorders and nonketotic hyperglycinaemia are reviewed; the aim is to show that integrating the knowledge gained in the last decades on enzymes and transporters related to amino acid metabolism allows a more extensive interpretation of biochemical results obtained for diagnosis and follow-up of patients and may help to pose new questions and to avoid pitfalls. The analysis and interpretation of amino acid measurements in physiological fluids should not be restricted to a few amino acids but should encompass the whole quantitative profile and include other pathophysiological markers. This is important if the patient appears not to respond as expected to treatment and is needed when investigating new therapies. We suggest that amino acid imbalance in the relevant compartments, caused by over-zealous or protocol-driven treatment that is not adjusted to the individual patient's needs, may prolong catabolism and must be corrected.
Abstract:
BACKGROUND: The mutation status of the BRAF and KRAS genes has been proposed as a prognostic biomarker in colorectal cancer. Of these, only the BRAF V600E mutation has been validated independently as prognostic for overall survival and survival after relapse, while the prognostic value of KRAS mutation is still unclear. We investigated the prognostic value of BRAF and KRAS mutations in various contexts defined by stratifications of the patient population. METHODS: We retrospectively analyzed a cohort of patients with stage II and III colorectal cancer from the PETACC-3 clinical trial (N = 1,423), assessing the prognostic value of the BRAF and KRAS mutations in subpopulations defined by all possible combinations of the following clinico-pathological variables: T stage, N stage, tumor site, tumor grade and microsatellite instability status. In each such subpopulation, the prognostic value was assessed by log-rank test for three endpoints: overall survival, relapse-free survival, and survival after relapse. The significance level was set to 0.01 for Bonferroni-adjusted p-values, and a second threshold for a trend towards statistical significance was set at 0.05 for unadjusted p-values. The significance of the interactions was tested by Wald test, with a significance level of 0.05. RESULTS: In stage II-III colorectal cancer, BRAF mutation was confirmed as a marker of poor survival only in subpopulations involving microsatellite-stable and left-sided tumors, with higher effects than in the whole population. There was no evidence for prognostic value in microsatellite-instable or right-sided tumor groups. We found that BRAF was also prognostic for relapse-free survival in some subpopulations. We found no evidence that KRAS mutations had prognostic value, although a trend was observed in some stratifications. We also show evidence of heterogeneity in survival of patients with the BRAF V600E mutation. CONCLUSIONS: The BRAF mutation represents an additional risk factor only in some subpopulations of colorectal cancer, while in others it has limited prognostic value. However, in the subpopulations where it is prognostic, it represents a marker of much higher risk than previously considered. KRAS mutation status does not seem to represent a strong prognostic variable.
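A minimal sketch of this kind of stratified log-rank analysis with Bonferroni adjustment is shown below in Python with lifelines and statsmodels; the data file, column names, and the two stratification variables are hypothetical stand-ins, not the PETACC-3 analysis code.

```python
# Illustrative stratified survival analysis: log-rank tests by subgroup
# with Bonferroni-adjusted p-values (hypothetical data and columns).
import pandas as pd
from lifelines.statistics import logrank_test
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("crc_cohort.csv")  # columns: time, event, braf_mutant, msi, site

pvals, labels = [], []
for (msi, site), sub in df.groupby(["msi", "site"]):
    mut = sub[sub.braf_mutant == 1]
    wt = sub[sub.braf_mutant == 0]
    if len(mut) < 10 or len(wt) < 10:
        continue  # skip subgroups too small to test meaningfully
    res = logrank_test(mut["time"], wt["time"],
                       event_observed_A=mut["event"],
                       event_observed_B=wt["event"])
    pvals.append(res.p_value)
    labels.append(f"MSI={msi}, site={site}")

# Bonferroni adjustment over all tested subpopulations, with the 0.01
# significance threshold used in the study.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.01, method="bonferroni")
for lab, p, r in zip(labels, p_adj, reject):
    print(f"{lab}: adjusted p = {p:.3g}{' *' if r else ''}")
```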
Abstract:
The introduction of the WHO FRAX® algorithms has facilitated the assessment of fracture risk on the basis of fracture probability. Its use in fracture risk prediction has strengths, but also limitations of which the clinician should be aware; these are the focus of this review. INTRODUCTION: The International Osteoporosis Foundation (IOF) and the International Society for Clinical Densitometry (ISCD) appointed a joint Task Force to develop resource documents in order to make recommendations on how to improve FRAX and better inform clinicians who use FRAX. The Task Force met in November 2010 for 3 days to discuss these topics, which form the focus of this review. METHODS: This study reviews the resource documents and joint position statements of the ISCD and IOF. RESULTS: Details on the clinical risk factors currently used in FRAX are provided, and the reasons for the exclusion of others are explained. Recommendations are made for the development of surrogate models where country-specific FRAX models are not available. CONCLUSIONS: The wish list of clinicians for the modulation of FRAX is large, but in many instances these wishes cannot presently be fulfilled; however, an explanation and understanding of the reasons may be helpful in translating the information provided by FRAX into clinical practice.
Abstract:
A recurring task in the analysis of mass genome annotation data from high-throughput technologies is the identification of peaks or clusters in a noisy signal profile. Examples of such applications are the definition of promoters on the basis of transcription start site profiles, the mapping of transcription factor binding sites based on ChIP-chip data and the identification of quantitative trait loci (QTL) from whole-genome SNP profiles. Input to such an analysis is a set of genome coordinates associated with counts or intensities. The output consists of a discrete number of peaks with respective volumes, extensions and center positions. We have developed for this purpose a flexible one-dimensional clustering tool, called MADAP, which we make available as a web server and as a standalone program. A set of parameters enables the user to customize the procedure to a specific problem. The web server, which returns results in textual and graphical form, is useful for small- to medium-scale applications, as well as for evaluation and parameter tuning in view of large-scale applications, which require a local installation. The program, written in C++, can be freely downloaded from ftp://ftp.epd.unil.ch/pub/software/unix/madap. The MADAP web server can be accessed at http://www.isrec.isb-sib.ch/madap/.
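To illustrate the kind of input and output MADAP works with, the sketch below clusters coordinate-count pairs into peaks and reports each peak's volume, extension, and center. This is a simple gap-based illustration, not the MADAP algorithm itself, and the max_gap parameter is a hypothetical choice.

```python
# Illustrative one-dimensional peak clustering in the spirit of MADAP's
# input/output (coordinates with counts in; peak volume, extension and
# center out). NOT the MADAP algorithm; a minimal gap-based sketch.
def cluster_positions(positions_counts, max_gap=50):
    """positions_counts: list of (genome_coordinate, count), sorted by coordinate."""
    peaks, current = [], [positions_counts[0]]
    for pos, cnt in positions_counts[1:]:
        if pos - current[-1][0] <= max_gap:
            current.append((pos, cnt))   # extend the current cluster
        else:
            peaks.append(current)        # close it and start a new one
            current = [(pos, cnt)]
    peaks.append(current)

    summaries = []
    for peak in peaks:
        volume = sum(c for _, c in peak)               # total signal in the peak
        extension = peak[-1][0] - peak[0][0] + 1       # genomic span
        center = sum(p * c for p, c in peak) / volume  # count-weighted mean position
        summaries.append({"center": center, "volume": volume,
                          "extension": extension})
    return summaries

# Example: two TSS-like clusters of tag counts.
data = [(100, 5), (110, 20), (130, 8), (500, 3), (510, 12)]
print(cluster_positions(data))
```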
Abstract:
The International Society for Clinical Densitometry (ISCD) and the International Osteoporosis Foundation (IOF) convened the FRAX® Position Development Conference (PDC) in Bucharest, Romania, on November 14, 2010, following a two-day joint meeting of the ISCD and IOF on the "Interpretation and Use of FRAX® in Clinical Practice." These three days of critical discussion and debate, led by a panel of international experts from the ISCD, IOF and dedicated task forces, have clarified a number of important issues pertaining to the interpretation and implementation of FRAX® in clinical practice. The Official Positions resulting from the PDC are intended to enhance the quality and clinical utility of fracture risk assessment worldwide. Since the field of skeletal assessment is still evolving rapidly, some clinically important issues addressed at the PDCs are not associated with robust medical evidence. Accordingly, some Official Positions are based largely on expert opinion. Despite limitations inherent in such a process, the ISCD and IOF believe it is important to provide clinicians and technologists with the best distillation of current knowledge in the discipline of bone densitometry and provide an important focus for the scientific community to consider. This report describes the methodology and results of the ISCD-IOF PDC dedicated to FRAX®.
Abstract:
This paper presents a validation study on statistical nonsupervised brain tissue classification techniques in magnetic resonance (MR) images. Several image models assuming different hypotheses regarding the intensity distribution model, the spatial model and the number of classes are assessed. The methods are tested on simulated data for which the classification ground truth is known. Different noise and intensity nonuniformities are added to simulate real imaging conditions. No enhancement of the image quality is considered either before or during the classification process. This way, the accuracy of the methods and their robustness against image artifacts are tested. Classification is also performed on real data where a quantitative validation compares the methods' results with an estimated ground truth from manual segmentations by experts. Validity of the various classification methods in the labeling of the image as well as in the tissue volume is estimated with different local and global measures. Results demonstrate that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities. We also demonstrate that partial volume is not perfectly modeled, even though methods that account for mixture classes outperform methods that only consider pure Gaussian classes. Finally, we show that simulated data results can also be extended to real data.
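A minimal sketch of the simplest model family compared in such studies, an intensity-only Gaussian mixture with pure Gaussian classes and no spatial prior, is given below using synthetic stand-in intensities; it is not the study's data or code.

```python
# Minimal sketch of intensity-only Gaussian mixture tissue classification,
# one of the model families compared above (no spatial prior, no partial
# volume classes). Illustrative; synthetic stand-in data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for MR intensities of three tissue classes
# (e.g., CSF / grey matter / white matter) with additive noise.
intensities = np.concatenate([
    rng.normal(60, 8, 3000),    # class 1
    rng.normal(110, 10, 4000),  # class 2
    rng.normal(160, 9, 3000),   # class 3
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
labels = gmm.predict(intensities)

# As the study notes, purely intensity-based labeling like this is
# sensitive to noise and intensity non-uniformity; spatial models and
# mixture/partial-volume classes improve robustness.
print("estimated class means:", np.sort(gmm.means_.ravel()))
```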
Abstract:
In the field of fingerprints, the rise of computerized tools has made it possible to create powerful automated search algorithms. These algorithms allow, inter alia, the comparison of a fingermark to a fingerprint database and therefore the establishment of a link between the mark and a known source. With the growth of the capacities of these systems and of data storage, as well as increasing collaboration between police services at the international level, the size of these databases increases. The current challenge for the field of fingerprint identification is the growth of these databases, which makes it possible to find impressions that are very similar but that come from distinct fingers. At the same time, however, these data and these systems allow a description of the variability between different impressions from the same finger and between impressions from different fingers. This statistical description of the within- and between-finger variabilities, computed on the basis of minutiae and their relative positions, can then be utilized in a statistical approach to interpretation. The computation of a likelihood ratio, employing simultaneously the comparison between the mark and the print of the case, the within-variability of the suspect's finger and the between-variability of the mark with respect to a database, can then be based on representative data. Thus, these data allow an evaluation which may be more detailed than that obtained by the application of rules established long before the advent of these large databases, or by the specialist's experience alone. The goal of the present thesis is to evaluate likelihood ratios computed from the scores of an automated fingerprint identification system (AFIS) when the source of the tested and compared marks is known. These ratios must support the hypothesis that is known to be true. Moreover, they should support this hypothesis more and more strongly with the addition of information in the form of additional minutiae. For the modeling of within- and between-variability, the necessary data were defined and acquired for one finger of a first donor and two fingers of a second donor. The database used for between-variability includes approximately 600,000 inked prints. The minimal number of observations necessary for a robust estimation was determined for the two distributions used. Factors that influence these distributions were also analyzed: the number of minutiae included in the configuration and the configuration as such for both distributions, as well as the finger number and the general pattern for between-variability, and the orientation of the minutiae for within-variability. In the present study, the only factor for which no influence was shown is the orientation of minutiae. The results show that the likelihood ratios resulting from the use of the scores of an AFIS can be used for evaluation. Relatively low rates of likelihood ratios supporting the hypothesis known to be false were obtained. The maximum observed rate of likelihood ratios supporting the hypothesis that two impressions were left by the same finger when they in fact came from different fingers is 5.2%, for a configuration of 6 minutiae. When a 7th and then an 8th minutia are added, this rate drops to 3.2%, then to 0.8%. In parallel, for these same configurations, the likelihood ratios obtained are on average of the order of 100, 1,000, and 10,000 for 6, 7, and 8 minutiae when the two impressions come from the same finger.
These likelihood ratios can therefore be an important aid to decision making. Both favourable trends linked to the addition of minutiae (a drop in the rate of likelihood ratios that can lead to an erroneous decision, and an increase in the value of the likelihood ratio) were observed systematically within the framework of the study. Approximations based on 3 scores for within-variability and on 10 scores for between-variability were found, and showed satisfactory results.
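A minimal sketch of the score-based likelihood-ratio computation the thesis describes: the observed comparison score is evaluated under density estimates of the within-finger and between-finger score distributions. The score arrays below are synthetic stand-ins for AFIS comparison scores, not data from the study.

```python
# Illustrative score-based likelihood ratio:
# LR = f(score | same finger) / f(score | different fingers),
# with both densities estimated by KDE from comparison scores.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
within_scores = rng.normal(900, 120, 300)   # same-finger comparison scores (synthetic)
between_scores = rng.normal(300, 80, 5000)  # different-finger scores (synthetic)

# Kernel density estimates of the two score distributions.
f_within = gaussian_kde(within_scores)
f_between = gaussian_kde(between_scores)

def likelihood_ratio(score):
    """LR > 1 supports same-source; LR < 1 supports different sources."""
    return f_within(score)[0] / f_between(score)[0]

print(likelihood_ratio(850.0))  # high score: LR should be large
print(likelihood_ratio(250.0))  # low score: LR should be far below 1
```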
Abstract:
The advent and application of high-resolution array-based comparative genomic hybridization (array CGH) has led to the detection of large numbers of copy number variants (CNVs) in patients with developmental delay and/or multiple congenital anomalies, as well as in healthy individuals. The notion that CNVs are also abundantly present in the normal population complicates the interpretation of the clinical significance of CNVs detected in patients. In this review we illustrate a general clinical workflow, based on our own experience, that can be used in routine diagnostics for the interpretation of CNVs.