974 results for Data Interpretation, Statistical
Abstract:
BACKGROUND: Tuberculosis remains one of the world's deadliest transmissible diseases despite widespread use of the BCG vaccine. MTBVAC is a new live tuberculosis vaccine based on genetically attenuated Mycobacterium tuberculosis that expresses most antigens present in human isolates of M tuberculosis. We aimed to compare the safety of MTBVAC with BCG in healthy adult volunteers. METHODS: We did this single-centre, randomised, double-blind, controlled phase 1 study at the Centre Hospitalier Universitaire Vaudois (CHUV; Lausanne, Switzerland). Volunteers were eligible for inclusion if they were aged 18-45 years, clinically healthy, HIV-negative and tuberculosis-negative, and had no history of active tuberculosis, chemoprophylaxis for tuberculosis, or BCG vaccination. Volunteers fulfilling the inclusion criteria were randomly assigned to three cohorts in a dose-escalation manner. Randomisation was done centrally by the CHUV Pharmacy and treatments were masked from the study team and volunteers. As participants were recruited within each cohort, they were randomly assigned 3:1 to receive MTBVAC or BCG. Of the participants allocated MTBVAC, those in the first cohort received 5 × 10³ colony forming units (CFU) MTBVAC, those in the second cohort received 5 × 10⁴ CFU MTBVAC, and those in the third cohort received 5 × 10⁵ CFU MTBVAC. In all cohorts, participants assigned to receive BCG were given 5 × 10⁵ CFU BCG. Each participant received a single intradermal injection of their assigned vaccine in 0·1 mL sterile water in their non-dominant arm. The primary outcome was safety in all vaccinated participants. Secondary outcomes included whole blood cell-mediated immune response to live MTBVAC and BCG, and interferon γ release assays (IGRA) of peripheral blood mononuclear cells. This trial is registered with ClinicalTrials.gov, number NCT02013245. FINDINGS: Between Jan 23, 2013, and Nov 6, 2013, we enrolled 36 volunteers into three cohorts, each of which consisted of nine participants who received MTBVAC and three who received BCG. 34 volunteers completed the trial. The safety of vaccination with MTBVAC at all doses was similar to that of BCG, and vaccination did not induce any serious adverse events. All individuals were IGRA negative at the end of follow-up (day 210). After whole blood stimulation with live MTBVAC or BCG, MTBVAC was at least as immunogenic as BCG. At the same dose as BCG (5 × 10⁵ CFU), although no statistical significance could be achieved, there were more responders in the MTBVAC group than in the BCG group, with a greater frequency of polyfunctional CD4+ central memory T cells. INTERPRETATION: To our knowledge, MTBVAC is the first live-attenuated M tuberculosis vaccine to reach clinical assessment, showing similar safety to BCG. MTBVAC seemed to be at least as immunogenic as BCG, but the study was not powered to investigate this outcome. Further plans to use more immunogenicity endpoints in a larger number of volunteers (adults and adolescents) are underway, with the aim to thoroughly characterise and potentially distinguish immunogenicity between MTBVAC and BCG in tuberculosis-endemic countries. Combined with an excellent safety profile, these data support advanced clinical development in high-burden tuberculosis-endemic countries. FUNDING: Biofabri and Bill & Melinda Gates Foundation through the TuBerculosis Vaccine Initiative (TBVI).
Abstract:
Many European states apply score systems to evaluate the disability severity of non-fatal motor victims under the law of third-party liability. The score is a non-negative integer with an upper bound at 100 that increases with severity. It may be automatically converted into financial terms and thus also reflects the compensation cost for disability. In this paper, discrete regression models are applied to analyze the factors that influence the disability severity score of victims. Standard and zero-altered regression models are compared from two perspectives: an interpretation of the data generating process and the level of statistical fit. The results have implications for traffic safety policy decisions aimed at reducing accident severity. An application using data from Spain is provided.
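To make the modelling choice concrete, below is a minimal Python sketch of a zero-altered (hurdle) count model of the kind the paper compares with standard regressions: a logistic part for whether the severity score is zero and a zero-truncated Poisson part for positive scores. The covariate, coefficients and simulated scores are illustrative placeholders, not the Spanish data.

```python
# Minimal hurdle (zero-altered) count model sketch; all data are simulated.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                      # e.g. a hypothetical severity covariate
p_zero = expit(-0.5 - 0.8 * x)              # probability of a zero score
lam = np.exp(0.7 + 0.5 * x)                 # mean of the positive part
y = np.where(rng.uniform(size=n) < p_zero, 0,
             rng.poisson(lam) + 1)          # crude positive draw, for the sketch only

def neg_loglik(theta):
    a0, a1, b0, b1 = theta
    p0 = expit(a0 + a1 * x)                 # hurdle part: P(score == 0)
    mu = np.exp(b0 + b1 * x)                # zero-truncated Poisson mean
    ll_zero = np.log(p0[y == 0]).sum()
    yp, mup, p0p = y[y > 0], mu[y > 0], p0[y > 0]
    # zero-truncated Poisson log-density: log f(y) - log(1 - e^{-mu})
    ll_pos = (np.log1p(-p0p) + yp * np.log(mup) - mup
              - gammaln(yp + 1) - np.log1p(-np.exp(-mup))).sum()
    return -(ll_zero + ll_pos)

fit = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
print("estimated coefficients:", fit.x)
```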
Abstract:
Scholarship on the American Slave South generally agrees that John Elliott Cairnes's The Slave Power provided a highly biased interpretation of the functioning and long-term viability of the southern slave economy. Published shortly after the outbreak of the Civil War, its partisanship is partly attributed to its clearly stated goal of shifting British support from the secession states to the states of the Union. Thus, it is generally agreed, Cairnes sifted his sources to obtain the desired outcome; a more balanced use of the sources in his possession would have yielded a very different result. This paper challenges this general assessment of Cairnes's book by examining in some detail two of his most important sources: Frederick Law Olmsted's travelogues on the American Slave South and James D. B. De Bow's compilation of statistical data and essays in his Industrial Resources, etc., of the Southern and Western States (1852-53). By contrasting De Bow's use of statistical evidence with Olmsted's travelogues, my final purpose is to question the weight of evidence on the American Slave South. Cairnes aimed, I will argue, much more to balance the evidence than is generally acknowledged, but it is misleading to think that balancing a wide range of evidence washes out bias if the evidence itself is politically skewed, as is the rule rather than the exception.
Abstract:
Two enoxaparin dosage regimens are used as comparators to evaluate new anticoagulants for thromboprophylaxis in patients undergoing major orthopaedic surgery, but so far no satisfactory direct comparison between them has been published. Our objective was to compare the efficacy and safety of enoxaparin 3,000 anti-Xa IU twice daily and enoxaparin 4,000 anti-Xa IU once daily in this clinical setting by indirect comparison meta-analysis, using Bucher's method. We selected randomised controlled trials comparing another anticoagulant or placebo (or no treatment) with either enoxaparin regimen for venous thromboembolism prophylaxis after hip or knee replacement or hip fracture surgery, provided that the second regimen was assessed elsewhere versus the same comparator. Two authors independently evaluated study eligibility, extracted the data, and assessed the risk of bias. The primary efficacy outcome was the incidence of venous thromboembolism. The main safety outcome was the incidence of major bleeding. Overall, 44 randomised comparisons in 56,423 patients were selected, 35 of them double-blind (54,117 patients). Compared with enoxaparin 4,000 anti-Xa IU once daily, enoxaparin 3,000 anti-Xa IU twice daily was associated with a reduced risk of venous thromboembolism (relative risk [RR]: 0.53, 95% confidence interval [CI]: 0.40 to 0.69), but an increased risk of major bleeding (RR: 2.01, 95% CI: 1.23 to 3.29). In conclusion, when interpreting the benefit-risk ratio of new anticoagulant drugs versus enoxaparin for thromboprophylaxis after major orthopaedic surgery, the apparently greater efficacy but higher bleeding risk of the twice-daily 3,000 anti-Xa IU enoxaparin regimen compared with the once-daily 4,000 anti-Xa IU regimen should be taken into account.
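For reference, Bucher's adjusted indirect comparison works on the log relative-risk scale: the indirect log RR of A versus B via a common comparator C is log RR(A vs C) minus log RR(B vs C), with the variances of the two direct comparisons adding. A minimal Python sketch, using placeholder numbers rather than the pooled trial estimates:

```python
# Bucher's adjusted indirect comparison on the log relative-risk scale.
import numpy as np
from scipy.stats import norm

def bucher_indirect(rr_ac, ci_ac, rr_bc, ci_bc, alpha=0.05):
    """Indirect RR of A vs B from A-vs-common and B-vs-common comparisons."""
    log_rr = np.log(rr_ac) - np.log(rr_bc)
    # back out standard errors from the 95% CIs of each direct comparison
    se_ac = (np.log(ci_ac[1]) - np.log(ci_ac[0])) / (2 * norm.ppf(0.975))
    se_bc = (np.log(ci_bc[1]) - np.log(ci_bc[0])) / (2 * norm.ppf(0.975))
    se = np.sqrt(se_ac**2 + se_bc**2)       # variances add for independent comparisons
    z = norm.ppf(1 - alpha / 2)
    return np.exp(log_rr), np.exp(log_rr - z * se), np.exp(log_rr + z * se)

# e.g. each enoxaparin regimen compared with the same comparator (placeholder values)
rr, lo, hi = bucher_indirect(0.70, (0.55, 0.89), 1.10, (0.85, 1.42))
print(f"indirect RR: {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```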
Abstract:
Social insects are promising model systems for epigenetics due to their immense morphological and behavioral plasticity. Reports that DNA methylation differs between the queen and worker castes in social insects [1-4] have implied a role for DNA methylation in regulating division of labor. To better understand the function of DNA methylation in social insects, we performed whole-genome bisulfite sequencing on brains of the clonal raider ant Cerapachys biroi, whose colonies alternate between reproductive (queen-like) and brood care (worker-like) phases [5]. Many cytosines were methylated in all replicates (on average 29.5% of the methylated cytosines in a given replicate), indicating that a large proportion of the C. biroi brain methylome is robust. Robust DNA methylation occurred preferentially in exonic CpGs of highly and stably expressed genes involved in core functions. Our analyses did not detect any differences in DNA methylation between the queen-like and worker-like phases, suggesting that DNA methylation is not associated with changes in reproduction and behavior in C. biroi. Finally, many cytosines were methylated in one sample only, due to either biological or experimental variation. By applying the statistical methods used in previous studies [1-4, 6] to our data, we show that such sample-specific DNA methylation may underlie the previous findings of queen- and worker-specific methylation. We argue that there is currently no evidence that genome-wide variation in DNA methylation is associated with the queen and worker castes in social insects, and we call for a more careful interpretation of the available data.
Abstract:
Due to the rise of criminal, civil and administrative judicial situations involving people lacking valid identity documents, age estimation of living persons has become an important operational procedure for numerous forensic and medicolegal services worldwide. The chronological age of a given person is generally estimated from the observed degree of maturity of some selected physical attributes by means of statistical methods. However, their application in the forensic framework suffers from some conceptual and practical drawbacks, as recently claimed in the specialised literature. The aim of this paper is therefore to offer an alternative solution for overcoming these limits, by reiterating the utility of a probabilistic Bayesian approach for age estimation. This approach allows one to deal in a transparent way with the uncertainty surrounding the age estimation process and to produce all the relevant information in the form of posterior probability distribution about the chronological age of the person under investigation. Furthermore, this probability distribution can also be used for evaluating in a coherent way the possibility that the examined individual is younger or older than a given legal age threshold having a particular legal interest. The main novelty introduced by this work is the development of a probabilistic graphical model, i.e. a Bayesian network, for dealing with the problem at hand. The use of this kind of probabilistic tool can significantly facilitate the application of the proposed methodology: examples are presented based on data related to the ossification status of the medial clavicular epiphysis. The reliability and the advantages of this probabilistic tool are presented and discussed.
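The core of the approach is Bayes' theorem applied to a discretised age variable. A minimal Python sketch, with an invented prior and invented stage-given-age probabilities standing in for the reference ossification data used in the paper:

```python
# Posterior over chronological age given one observed ossification stage.
# Prior and likelihood values below are hypothetical, for illustration only.
import numpy as np

ages = np.arange(15, 26)                    # candidate ages in years
prior = np.full(ages.size, 1 / ages.size)   # flat prior over the age range

# hypothetical P(stage "fused" observed | age): rises with age
p_stage_given_age = 1 / (1 + np.exp(-(ages - 19)))

posterior = prior * p_stage_given_age
posterior /= posterior.sum()                # normalise: Bayes' theorem

p_adult = posterior[ages >= 18].sum()       # probability of exceeding a legal threshold
for a, p in zip(ages, posterior):
    print(f"age {a}: {p:.3f}")
print(f"P(age >= 18 | stage) = {p_adult:.3f}")
```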
Abstract:
In this paper we discuss the use of digital data by the Swiss Federal Criminal Court in a recent case of attempted homicide. We use this case to examine drawbacks for the defense when the presentation of scientific evidence is partial, especially when the only perspective mentioned is that of the prosecution. We tackle this discussion at two distinct levels. First, we pursue an essentially non-technical presentation of the topic by drawing parallels between the court's summing up of the case and flawed patterns of reasoning commonly seen in other forensic disciplines, such as DNA and particle traces (e.g., gunshot residues). Then, we propose a formal analysis of the case, using elements of probability and graphical probability models, to justify our main claim that the partial presentation of digital evidence poses a risk to the administration of justice in that it keeps vital information from the defense. We will argue that such practice constitutes a violation of general principles of forensic interpretation as established by forensic science literature and current recommendations by forensic science interest groups (e.g., the European Network of Forensic Science Institutes). Finally, we posit that argument construction and analysis using formal methods can help place digital evidence appropriately in context and thus support a sound evaluation of the evidence.
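A minimal numerical sketch of the balanced evaluation argued for above: the evidence is weighed by a likelihood ratio under both the prosecution and defence propositions and combined with prior odds. All probabilities below are hypothetical placeholders, not figures from the Swiss case.

```python
# Likelihood-ratio evaluation of a piece of evidence (hypothetical numbers).
prior_odds = 1 / 1000                       # prior odds on the prosecution proposition
p_e_given_hp = 0.80                         # P(digital trace | prosecution proposition)
p_e_given_hd = 0.05                         # P(digital trace | defence proposition)

likelihood_ratio = p_e_given_hp / p_e_given_hd
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"LR = {likelihood_ratio:.1f}")
print(f"posterior P(Hp | E) = {posterior_prob:.4f}")
# Reporting only P(E | Hp), as a one-sided summary would, omits the defence
# perspective entirely; the likelihood ratio makes both conditionals explicit.
```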
Abstract:
This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of space, time and space-time clusters of environmental point data. The developed clustering methods were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in the Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Typically, environmental phenomena can be modelled as stochastic point processes where each event, e.g. the forest fire ignition point, is characterised by its spatial location and occurrence in time. Additionally, information such as burned area, ignition causes, land use, topographic, climatic and meteorological features, etc., can also be used to characterise the studied phenomenon. The space-time pattern characterisation thereby represents a powerful tool to understand the distribution and behaviour of the events and their correlation with underlying processes, for instance socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point process measures for both global (e.g. the Morisita index, the box-counting fractal method, the multifractal formalism and Ripley's K-function) and local (e.g. scan statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are of global character and do not consider complex spatial constraints, the high variability and the multivariate nature of the events. We therefore propose a statistical framework that takes into account the complexities of the geographical space in which phenomena take place, by introducing the Validity Domain concept and carrying out clustering analyses on data with differently constrained geographical spaces, hence assessing the relative degree of clustering of the real distribution. Moreover, specifically for the forest fire case, this research proposes two new methodologies: one for defining and mapping the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructures, and one for predicting fire ignition susceptibility. In this regard, the main objective of this thesis was to carry out basic statistical/geospatial research with a strong applied component, to analyse and describe complex phenomena as well as to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular forest fire occurrences. This thesis thus provides a response to the increasing demand for environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in 5 scientific journals. National and international collaborations were also established and successfully accomplished.
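As an illustration of one of the global measures named above, here is a minimal Python sketch of the Morisita index computed on a simulated two-dimensional point pattern; values near 1 indicate spatial randomness and larger values indicate clustering. Grid size and data are illustrative only.

```python
# Morisita index of dispersion for a 2-D point pattern on the unit square.
import numpy as np

def morisita_index(xy, n_cells=10):
    """Morisita index on an n_cells x n_cells grid over [0,1]^2."""
    counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                                  bins=n_cells, range=[[0, 1], [0, 1]])
    n = counts.ravel()
    q, total = n.size, n.sum()
    return q * np.sum(n * (n - 1)) / (total * (total - 1))

rng = np.random.default_rng(1)
uniform = rng.uniform(size=(1000, 2))                       # complete spatial randomness
clustered = 0.5 + 0.05 * rng.normal(size=(1000, 2))         # one tight cluster
print("uniform  :", round(morisita_index(uniform), 2))      # close to 1
print("clustered:", round(morisita_index(clustered), 2))    # much larger than 1
```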
Abstract:
This thesis focuses on statistical analysis methods and proposes the use of Bayesian inference to extract the information contained in experimental data by estimating the parameters of an Ebola model. The model is a system of differential equations expressing the behavior and dynamics of Ebola. Two data sets (onset and death data) were both used to estimate the parameters, which had not been done in previous work (Chowell, 2004). To be able to use both data sets, a new version of the model was built. The model parameters were estimated and then used to calculate the basic reproduction number and to study the disease-free equilibrium. The parameter estimates were useful for determining how well the model fits the data and how good the estimates were, in terms of the information they provided about the possible relationship between variables. The solution showed that the Ebola model fits the observed onset data at 98.95% and the observed death data at 93.6%. Since Bayesian inference cannot be performed analytically, the Markov chain Monte Carlo approach was used to generate samples from the posterior distribution over the parameters. These samples were used to check the accuracy of the model and other characteristics of the target posteriors.
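A minimal sketch of the MCMC step described above, using a random-walk Metropolis sampler to fit one rate parameter of a toy exponential-growth model to simulated onset counts. The thesis fits a full system of differential equations; this placeholder deliberately simplifies the model so the sampler itself stays visible.

```python
# Random-walk Metropolis sampling of one epidemic growth-rate parameter.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(30)
true_r = 0.15
data = rng.poisson(5 * np.exp(true_r * t))   # simulated daily onset counts

def log_post(r):
    if r <= 0 or r > 1:
        return -np.inf                       # flat prior on (0, 1]
    mu = 5 * np.exp(r * t)
    return np.sum(data * np.log(mu) - mu)    # Poisson log-likelihood (up to a constant)

samples, r = [], 0.5
lp = log_post(r)
for _ in range(20000):
    prop = r + 0.01 * rng.normal()           # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp: # Metropolis accept/reject
        r, lp = prop, lp_prop
    samples.append(r)

post = np.array(samples[5000:])              # discard burn-in
print(f"posterior mean r = {post.mean():.3f}, 95% interval "
      f"({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
```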
Abstract:
When laboratory intercomparison exercises are conducted, there is no a priori dependence of the concentration of a certain compound determined in one laboratory on that determined by another. The same applies when comparing different methodologies. An existing data set of total mercury readings in fish muscle samples from a Brazilian intercomparison exercise was used to show that correlation analysis is the most effective statistical tool in this kind of experiment. Problems associated with alternative analytical tools, such as comparison of means, paired t-tests and regression analysis, are discussed.
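A minimal Python sketch of the recommended analysis, with simulated stand-ins for the paired total-mercury readings:

```python
# Pearson correlation between paired readings from two laboratories.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
true_hg = rng.uniform(0.1, 1.5, size=30)        # simulated "true" Hg levels, mg/kg
lab_a = true_hg + rng.normal(0, 0.05, size=30)  # lab A readings with noise
lab_b = true_hg + rng.normal(0, 0.08, size=30)  # lab B readings with noise

r, p = pearsonr(lab_a, lab_b)
print(f"Pearson r = {r:.3f} (p = {p:.2g})")
# Correlation matches the symmetric design of the exercise: neither
# laboratory's readings are treated as the dependent variable.
```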
Abstract:
Stochastic convergence amongst Mexican federal entities is analyzed in a panel data framework. The joint consideration of cross-section dependence and multiple structural breaks is required to ensure that the statistical inference is based on statistics with good statistical properties. Once these features are accounted for, evidence in favour of stochastic convergence is found. Since stochastic convergence is a necessary, yet insufficient, condition for convergence as predicted by economic growth models, the paper also investigates whether a β-convergence process has taken place. We found that the Mexican states have followed either heterogeneous convergence patterns or a divergence process throughout the analyzed period.
Abstract:
It is well known that regression analyses involving compositional data need special attention because the data are not of full rank. For a regression analysis where both the dependent and independent variable are components, we propose a transformation of the components emphasizing their role as dependent and independent variables. A simple linear regression can be performed on the transformed components. The regression line can be depicted in a ternary diagram, facilitating the interpretation of the analysis in terms of components. An example with time-budgets illustrates the method and the graphical features.
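A minimal Python sketch of the idea, assuming an additive log-ratio transformation as the role-emphasising transformation (the paper defines its own variant) and simulated three-part compositions in place of the time-budget data:

```python
# Simple linear regression between log-ratio-transformed components.
import numpy as np

rng = np.random.default_rng(4)
raw = rng.dirichlet([4, 3, 2], size=50)          # 50 compositions (rows sum to 1)
# additive log-ratio coordinates with the third part as reference
alr1 = np.log(raw[:, 0] / raw[:, 2])
alr2 = np.log(raw[:, 1] / raw[:, 2])

slope, intercept = np.polyfit(alr1, alr2, deg=1) # ordinary least squares, degree 1
print(f"alr2 ≈ {intercept:.2f} + {slope:.2f} * alr1")
# Back-transforming the fitted line gives a curve in the ternary diagram,
# which is how the fit can be read in terms of the original components.
```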
Abstract:
In any discipline where uncertainty and variability are present, it is important to have principles which are accepted as inviolate and which should therefore drive statistical modelling, statistical analysis of data and any inferences from such an analysis. Despite the fact that two such principles have existed over the last two decades, and from these a sensible, meaningful methodology has been developed for the statistical analysis of compositional data, the application of inappropriate and/or meaningless methods persists in many areas of application. This paper identifies at least ten common fallacies and confusions in compositional data analysis with illustrative examples, and provides readers with necessary, and hopefully sufficient, arguments to persuade the culprits why and how they should amend their ways.
Abstract:
Compositional data (concentrations) are common in the geosciences. Neglecting their character may lead to erroneous conclusions. Spurious correlation (K. Pearson, 1897) has disastrous consequences. On the basis of the pioneering work by J. Aitchison in the 1980s, a methodology free of these drawbacks is now available. The geometry of the simplex allows the representation of compositions using orthogonal co-ordinates, to which usual statistical methods can be applied, thus facilitating computation and analysis. The use of (log) ratios precludes interpreting single concentrations while disregarding their relative character. A hydro-chemical data set is used to illustrate the point.
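A minimal Python sketch of the two points made above: closure induces spurious correlation between parts, while orthogonal log-ratio co-ordinates (here the isometric log-ratio, ilr) can be analysed with the usual tools. Data are simulated, not the hydro-chemical set.

```python
# Spurious correlation from closure, and ilr co-ordinates that avoid it.
import numpy as np

rng = np.random.default_rng(5)
raw = rng.lognormal(size=(500, 3))               # independent positive parts
closed = raw / raw.sum(axis=1, keepdims=True)    # closure: rows sum to 1

print("correlation of closed parts 1,2:",
      round(np.corrcoef(closed[:, 0], closed[:, 1])[0, 1], 2))  # spuriously negative

# isometric log-ratio co-ordinates: orthogonal, unconstrained
ilr1 = np.log(closed[:, 0] / closed[:, 1]) / np.sqrt(2)
ilr2 = np.log(closed[:, 0] * closed[:, 1] / closed[:, 2]**2) / np.sqrt(6)
print("correlation of ilr co-ordinates:  ",
      round(np.corrcoef(ilr1, ilr2)[0, 1], 2))                  # near zero here
```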
Abstract:
The identifiability of the parameters of a heat exchanger model without phase change was studied in this Master’s thesis using synthetically generated data. A fast, two-step Markov chain Monte Carlo (MCMC) method was tested on a couple of case studies and on a heat exchanger model. The two-step MCMC method worked well and decreased the computation time compared to the traditional MCMC method. The effect of the measurement accuracy of certain control variables on the identifiability of the parameters was also studied. The accuracy used did not seem to have a notable effect on the identifiability of the parameters. The use of the posterior distribution of the parameters across different heat exchanger geometries was also studied. It would be computationally most efficient to use the same posterior distribution among different geometries in the optimisation of heat exchanger networks. According to the results, this was possible when the frontal surface areas were the same among the different geometries. In the other cases the same posterior distribution can still be used for optimisation, but it will give a wider predictive distribution as a result. For condensing surface heat exchangers, the numerical stability of the simulation model was studied. As a result, a stable algorithm was developed.