50 results for Peixoto’s theorem
Abstract:
The contribution of ink evidence to forensic science is described and supported by an abundant literature and by two standards from the American Society for Testing and Materials (ASTM). The vast majority of the available literature is concerned with the physical and chemical analysis of ink evidence. The relevant ASTM standards mention some principles regarding the comparison of pairs of ink samples and the evaluation of their evidential value. A review of this literature, and more specifically of the ASTM standards, in the light of recent developments in the interpretation of forensic evidence reveals potential improvements that would maximise the benefit of ink evidence to forensic science. This thesis proposes to interpret ink evidence within the widely accepted and recommended framework of Bayes' theorem. This proposal required the development of a new quality assurance process for the analysis and comparison of ink samples, as well as the definition of a theoretical framework for ink evidence. The proposed methodology has been extensively tested using a large dataset of ink samples and state-of-the-art tools commonly used in biometrics. Overall, this research answers a concrete problem generally encountered in forensic science, where scientists tend to limit the usefulness of the information present in various types of evidence by trying to answer the wrong questions.
The declaration of an explicit framework, which defines and formalises their goals and expected contributions to the criminal and civil justice system, enables the determination of their needs in terms of technology and data. The development of this technology and the collection of the data can then be justified economically, structured scientifically and carried out efficiently.
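To make the interpretive framework concrete: in the Bayesian (likelihood-ratio) approach described above, the evidence updates the prior odds on the proposition that two ink entries share a common source. The minimal sketch below illustrates only the odds form of Bayes' theorem; the function names and all numerical values are hypothetical and are not taken from the thesis.

```python
# Minimal sketch of Bayesian evidence evaluation in odds form.
# All numerical values are hypothetical illustrations, not results from the thesis.

def likelihood_ratio(p_evidence_given_same_source, p_evidence_given_different_source):
    """LR = P(E | H1) / P(E | H2)."""
    return p_evidence_given_same_source / p_evidence_given_different_source

def posterior_odds(prior_odds, lr):
    """Odds form of Bayes' theorem: posterior odds = LR * prior odds."""
    return lr * prior_odds

if __name__ == "__main__":
    lr = likelihood_ratio(0.8, 0.02)                # hypothetical comparison probabilities
    post = posterior_odds(prior_odds=0.1, lr=lr)    # hypothetical prior odds
    print(f"LR = {lr:.1f}, posterior odds = {post:.2f}")
```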
Abstract:
The identification of genetically homogeneous groups of individuals is a long-standing issue in population genetics. A recent Bayesian algorithm implemented in the software STRUCTURE allows the identification of such groups. However, the ability of this algorithm to detect the true number of clusters (K) in a sample of individuals when patterns of dispersal among populations are not homogeneous has not been tested. The goal of this study is to carry out such tests, using various dispersal scenarios from data generated with an individual-based model. We found that in most cases the estimated 'log probability of data' does not provide a correct estimation of the number of clusters, K. However, using an ad hoc statistic, ΔK, based on the rate of change in the log probability of data between successive K values, we found that STRUCTURE accurately detects the uppermost hierarchical level of structure for the scenarios we tested. As might be expected, the results are sensitive to the type of genetic marker used (AFLP vs. microsatellite), the number of loci scored, the number of populations sampled, and the number of individuals typed in each sample.
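The ΔK statistic mentioned above is, in the commonly cited Evanno et al. (2005) formulation, the mean absolute second-order rate of change of L(K) across replicate STRUCTURE runs, divided by the standard deviation of L(K). A minimal sketch under that assumption (the L(K) values below are invented) could look like:

```python
import numpy as np

# Sketch of the Evanno-style DeltaK statistic assumed above:
# DeltaK(K) = mean(|L(K+1) - 2*L(K) + L(K-1)|) / sd(L(K)),
# where L(K) is the "log probability of data" over replicate STRUCTURE runs.
# The values below are invented for illustration only.

def delta_k(logprob):
    """logprob: dict mapping K -> array of L(K) over replicate runs."""
    ks = sorted(logprob)
    out = {}
    for k in ks[1:-1]:
        second_diff = np.abs(logprob[k + 1] - 2 * logprob[k] + logprob[k - 1])
        out[k] = second_diff.mean() / logprob[k].std(ddof=1)
    return out

runs = {
    1: np.array([-5200.0, -5210.0, -5195.0]),
    2: np.array([-4300.0, -4310.0, -4295.0]),
    3: np.array([-4250.0, -4330.0, -4180.0]),
    4: np.array([-4240.0, -4400.0, -4100.0]),
}
# The K with the largest DeltaK is taken as the uppermost level of structure.
print(delta_k(runs))
```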
Abstract:
In occupational exposure assessment of airborne contaminants, exposure levels can either be estimated through repeated measurements of the pollutant concentration in air, expert judgment or through exposure models that use information on the conditions of exposure as input. In this report, we propose an empirical hierarchical Bayesian model to unify these approaches. Prior to any measurement, the hygienist conducts an assessment to generate prior distributions of exposure determinants. Monte-Carlo samples from these distributions feed two level-2 models: a physical, two-compartment model, and a non-parametric, neural network model trained with existing exposure data. The outputs of these two models are weighted according to the expert's assessment of their relevance to yield predictive distributions of the long-term geometric mean and geometric standard deviation of the worker's exposure profile (level-1 model). Bayesian inferences are then drawn iteratively from subsequent measurements of worker exposure. Any traditional decision strategy based on a comparison with occupational exposure limits (e.g. mean exposure, exceedance strategies) can then be applied. Data on 82 workers exposed to 18 contaminants in 14 companies were used to validate the model with cross-validation techniques. A user-friendly program running the model is available upon request.
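A minimal sketch of the Monte Carlo and weighting steps described above is given below. The two level-2 models are replaced by simple hypothetical placeholders (the actual two-compartment physical model and trained neural network are not reproduced here), and the expert relevance weights are applied as a simple convex combination, which is one possible reading of the abstract. The subsequent Bayesian updating from measured exposures is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two level-2 models described in the abstract:
# a physical model and a data-driven model, each mapping sampled exposure
# determinants to a predicted long-term geometric mean (GM) of exposure.
def physical_model(determinants):
    emission, ventilation = determinants
    return emission / ventilation                        # placeholder relationship

def data_driven_model(determinants):
    emission, ventilation = determinants
    return 0.8 * emission / ventilation + 0.05           # placeholder "trained" model

# Prior distributions of exposure determinants elicited from the hygienist (assumed lognormal).
n = 10_000
emission = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)      # e.g. mg/min
ventilation = rng.lognormal(mean=np.log(20.0), sigma=0.3, size=n)  # e.g. m3/min

# Expert-assessed relevance weights for the two models (sum to 1).
w_physical, w_data = 0.6, 0.4

gm_samples = (w_physical * physical_model((emission, ventilation))
              + w_data * data_driven_model((emission, ventilation)))

print("Predictive median GM:", np.median(gm_samples))
print("95% interval:", np.percentile(gm_samples, [2.5, 97.5]))
```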
Abstract:
OBJECTIVE: The reverse transcriptase inhibitor efavirenz is currently used at a fixed dose of 600 mg/d. However, dosage individualization based on plasma concentration monitoring might be indicated. This study aimed to assess the efavirenz pharmacokinetic profile and interpatient versus intrapatient variability in patients who are positive for human immunodeficiency virus, to explore the relationship between drug exposure, efficacy, and central nervous system toxicity, and to build a Bayesian approach to dosage adaptation. METHODS: The population pharmacokinetic analysis was performed with NONMEM, based on plasma samples from a cohort of unselected patients receiving efavirenz. With the use of a 1-compartment model with first-order absorption, the influence of demographic and clinical characteristics on oral clearance and oral volume of distribution was examined. The average drug exposure during 1 dosing interval was estimated for each patient and correlated with markers of efficacy and toxicity. The population kinetic parameters and the variabilities were integrated into a Bayesian equation for dosage adaptation based on a single plasma sample. RESULTS: Data from 235 patients with a total of 719 efavirenz concentrations were collected. Oral clearance was 9.4 L/h, oral volume of distribution was 252 L, and the absorption rate constant was 0.3 h⁻¹. Neither the demographic covariates evaluated nor the comedications showed a clinically significant influence on efavirenz pharmacokinetics. A large interpatient variability was found to affect efavirenz relative bioavailability (coefficient of variation, 54.6%), whereas the intrapatient variability was small (coefficient of variation, 26%). An inverse correlation between average drug exposure and viral load and a trend with central nervous system toxicity were detected. This enabled the derivation of a dosage adaptation strategy to bring the average concentration into a therapeutic target of 1000 to 4000 µg/L, optimizing viral load suppression and minimizing central nervous system toxicity. CONCLUSIONS: The high interpatient and low intrapatient variability values, as well as the potential relationship with markers of efficacy and toxicity, support the therapeutic drug monitoring of efavirenz. However, further evaluation is needed before individualization of the efavirenz dosage regimen based on routine drug level monitoring can be recommended for optimal patient management.
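A sketch of the one-compartment model with first-order absorption, using the population values reported above (CL/F = 9.4 L/h, V/F = 252 L, ka = 0.3 h⁻¹), is shown below. Bayesian individualization from a single sample is illustrated as a simple MAP grid search over relative bioavailability with an assumed residual error; this sketches the principle only, not the NONMEM model actually used, and the observed concentration is invented.

```python
import numpy as np

# Population values reported in the abstract (oral one-compartment, first-order absorption).
CL, V, KA = 9.4, 252.0, 0.3          # L/h, L, 1/h
KE = CL / V                          # elimination rate constant, 1/h

def conc_ss(dose_mg, tau_h, t_h, f_rel=1.0):
    """Steady-state concentration (mg/L) at time t after dosing, one-compartment oral model."""
    ke, ka = KE, KA
    return (f_rel * dose_mg * ka / (V * (ka - ke))) * (
        np.exp(-ke * t_h) / (1 - np.exp(-ke * tau_h))
        - np.exp(-ka * t_h) / (1 - np.exp(-ka * tau_h))
    )

# Hypothetical Bayesian (MAP) individualization from a single observed level:
# lognormal prior on relative bioavailability F (CV ~55% between patients, per abstract),
# lognormal residual error (assumed 20%). A grid search keeps the sketch dependency-free.
def map_f(observed_mg_per_l, dose_mg, tau_h, t_h, omega=0.55, sigma=0.2):
    f_grid = np.linspace(0.3, 3.0, 271)
    prior = -0.5 * (np.log(f_grid) / omega) ** 2
    lik = -0.5 * ((np.log(observed_mg_per_l)
                   - np.log(conc_ss(dose_mg, tau_h, t_h, f_grid))) / sigma) ** 2
    return f_grid[np.argmax(prior + lik)]

f_hat = map_f(observed_mg_per_l=4.5, dose_mg=600, tau_h=24, t_h=12)   # invented sample
c_avg = f_hat * 600 / (CL * 24) * 1000                                # average conc., µg/L
print(f"MAP relative bioavailability: {f_hat:.2f}, predicted average conc.: {c_avg:.0f} µg/L")
```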
Abstract:
In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) presents a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. Further complication may stem from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that output a single, fixed number of contributors can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem to provide a probability distribution over a set of numbers of contributors, based on the observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N (the number of contributors) and the actual value taken by N. Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that uses categorical assumptions about N.
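The contrast between the two procedures can be sketched for a single locus: the deterministic rule takes the minimum number of contributors compatible with the observed alleles, whereas the probabilistic rule returns a posterior over N from the allele set and allele frequencies. The sketch below assumes a simple independence model (unconditioned draws, no subpopulation correction) and invented allele frequencies; casework models are considerably richer.

```python
from itertools import combinations

# Sketch of a posterior over the number of contributors N to a mixture, given the set of
# observed alleles at one locus and their population frequencies. A simple independence
# ("unrelated, unconditioned draws") model is assumed; real casework models are richer.
def prob_allele_set(alleles, freqs, n_contributors):
    """P(exactly this allele set is observed | N), by inclusion-exclusion over subsets."""
    draws = 2 * n_contributors
    total = 0.0
    for k in range(len(alleles) + 1):
        for subset in combinations(alleles, k):
            sign = (-1) ** (len(alleles) - k)
            total += sign * sum(freqs[a] for a in subset) ** draws
    return max(total, 0.0)

def posterior_n(alleles, freqs, prior):
    """prior: dict N -> prior probability. Returns the normalized posterior over N."""
    post = {n: p * prob_allele_set(alleles, freqs, n) for n, p in prior.items()}
    z = sum(post.values())
    return {n: v / z for n, v in post.items()}

# Hypothetical locus: four observed alleles with invented frequencies.
freqs = {"12": 0.18, "14": 0.25, "15": 0.10, "17": 0.07}
alleles = list(freqs)

minimum_n = -(-len(alleles) // 2)          # deterministic rule: ceil(#alleles / 2)
print("Minimum-number rule:", minimum_n)
print("Posterior over N:", posterior_n(alleles, freqs, prior={2: 0.4, 3: 0.35, 4: 0.25}))
```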
Abstract:
The phylogeny and phylogeography of the Old World wood mice (subgenus Sylvaemus, genus Apodemus, Muridae) are well documented. Nevertheless, the distributions of species such as A. fulvipectus and A. ponticus remain uncertain, as do their phylogenetic relationships with A. sylvaticus. We analysed samples of Apodemus spp. across Europe using the mitochondrial cytochrome-b gene (cyt-b) and compared the DNA and amino-acid compositions of previously published sequences. The main result of this study is the presence of a well-differentiated lineage of Sylvaemus including samples of various species (A. sylvaticus, A. fulvipectus, A. ponticus) from distant locations, which were revealed to be nuclear copies of the mitochondrial cyt-b. The presence of this cryptic pseudogene in published sequences is supported by several independent lines of evidence. This has led to important errors in previously published molecular trees and hence to partial misinterpretations of the phylogeny of Apodemus.
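One generic way to screen cyt-b sequences for nuclear copies (numts), not necessarily the approach used in this study, is to translate them in the mitochondrial reading frame and flag premature stop codons or frameshifts. A minimal sketch of such a stop-codon check under the vertebrate mitochondrial code, with invented toy fragments:

```python
# Sketch of a simple numt (nuclear mitochondrial pseudogene) screen for a cyt-b fragment:
# scan codons in a given reading frame and flag premature stop codons of the vertebrate
# mitochondrial code. This is a generic check, not the specific analysis of the paper.
MITO_STOPS = {"TAA", "TAG", "AGA", "AGG"}   # vertebrate mitochondrial stop codons

def has_premature_stop(seq, frame=0):
    seq = seq.upper().replace("-", "")
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    return any(c in MITO_STOPS for c in codons[:-1])   # ignore the terminal codon

# Invented toy fragments for illustration.
functional = "ATGACCAACATCCGAAAAACCCACCCACTA"
pseudogene = "ATGACCTAGATCCGAAAAACCCACCCACTA"   # in-frame TAG suggests a nuclear copy
print(has_premature_stop(functional), has_premature_stop(pseudogene))
```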
Abstract:
We characterize divergence times, intraspecific diversity and distributions for recently recognized lineages within the Hyla arborea species group, based on mitochondrial and nuclear sequences from 160 localities spanning its whole distribution. The lineages of H. arborea, H. orientalis and H. molleri are at least of Pliocene age, supporting species-level divergence. The genetically uniform Iberian H. molleri, although largely isolated by the Pyrenees, is parapatric to H. arborea, with evidence for successful hybridization in a small Aquitanian corridor (southwestern France), where the distribution also overlaps with H. meridionalis. The genetically uniform H. arborea, spread from Crete to Brittany, exhibits molecular signatures of a postglacial range expansion. It meets different mtDNA clades of H. orientalis in NE Greece, along the Carpathians, and in Poland along the Vistula River (where hybridization also occurs). The Eastern European H. orientalis is strongly structured genetically. Five geographic mitochondrial clades are recognized, with a molecular signature of postglacial range expansions for the clade that reached the northernmost latitudes. Hybridization with H. savignyi is suggested in southwestern Turkey. Thus, cryptic diversity in these Pliocene Hyla lineages covers three extremes: a genetically poor, quasi-Iberian endemic (H. molleri), a more uniform species distributed from the Balkans to Western Europe (H. arborea), and a well-structured Asia Minor-Eastern European species (H. orientalis).
Abstract:
Gene duplication and neofunctionalization are known to be important processes in the evolution of phenotypic complexity. They account for important evolutionary novelties that confer ecological adaptation, such as the major histocompatibility complex (MHC), a multigene family crucial to the vertebrate immune system. In birds, two MHC class II β (MHCIIβ) exon 3 lineages have recently been characterized, and two hypotheses for the evolutionary history of MHCIIβ lineages were proposed. These lineages could have arisen either by 1) an ancient duplication and subsequent divergence of one paralog or by 2) recent parallel duplications followed by functional convergence. Here, we compiled a data set consisting of 63 MHCIIβ exon 3 sequences from six avian orders to distinguish between these hypotheses and to understand the role of selection in the divergent evolution of the two avian MHCIIβ lineages. Based on phylogenetic reconstructions and simulations, we show that a unique duplication event preceding the major avian radiations gave rise to two ancestral MHCIIβ lineages that were each likely lost once later during avian evolution. Maximum likelihood estimation shows that following the ancestral duplication, positive selection drove a radical shift from basic to acidic amino acid composition of a protein domain facing the α-chain in the MHCII αβ-heterodimer. Structural analyses of the MHCII αβ-heterodimer highlight that three of these residues are potentially involved in direct interactions with the α-chain, suggesting that the shift following duplication may have been accompanied by coevolution of the interacting α- and β-chains. These results provide new insights into the long-term evolutionary relationships among avian MHC genes and open interesting perspectives for comparative and population genomic studies of avian MHC evolution.
Abstract:
Individuals sampled in hybrid zones are usually analysed according to their sampling locality, morphology, behaviour or karyotype. However, the increasing availability of genetic information increasingly favours its use for individual sorting, and numerous assignment methods based on the genetic composition of individuals have been developed. The shrews of the Sorex araneus group offer a good opportunity to test genetic assignment against individuals identified by their karyotype. Here we explored the potential and efficiency of a Bayesian assignment method, with or without a reference dataset, to study admixture and individual assignment in the difficult context of two hybrid zones between karyotypic species of the Sorex araneus group. Overall, we assigned more than 80% of the individuals to their respective karyotypic categories (i.e. 'pure' species or hybrids). This assignment level is comparable to what was obtained for the same species away from hybrid zones. Additionally, we showed that the assignment result for several individuals was strongly affected by whether or not a reference dataset was included. This highlights the importance of such comparisons when analysing hybrid zones. Finally, differences between the admixture levels detected in both hybrid zones support the hypothesis of an impact of chromosomal rearrangements on gene flow.
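The principle behind such genetic assignment can be sketched as follows: given reference allele frequencies for each candidate taxon, the multilocus genotype likelihood (under Hardy-Weinberg and linkage equilibrium) is converted into posterior assignment probabilities. The sketch below uses invented frequencies and a simple two-taxon case; it illustrates the generic principle only, not the specific Bayesian admixture model applied in the paper.

```python
import math

# Sketch of likelihood-based assignment of a diploid multilocus genotype to one of two
# reference populations, assuming Hardy-Weinberg and linkage equilibrium.
def genotype_log_likelihood(genotype, allele_freqs):
    """genotype: {locus: (a1, a2)}; allele_freqs: {locus: {allele: freq}}."""
    ll = 0.0
    for locus, (a1, a2) in genotype.items():
        p = allele_freqs[locus].get(a1, 1e-3)      # small floor for unseen alleles
        q = allele_freqs[locus].get(a2, 1e-3)
        ll += math.log(p * q * (2.0 if a1 != a2 else 1.0))
    return ll

def assignment_posterior(genotype, refs, priors=None):
    pops = list(refs)
    priors = priors or {p: 1.0 / len(pops) for p in pops}
    logpost = {p: math.log(priors[p]) + genotype_log_likelihood(genotype, refs[p]) for p in pops}
    m = max(logpost.values())
    w = {p: math.exp(v - m) for p, v in logpost.items()}
    z = sum(w.values())
    return {p: w[p] / z for p in pops}

# Invented reference allele frequencies for two karyotypic taxa at two microsatellite loci.
refs = {
    "taxon_A": {"L1": {"150": 0.7, "152": 0.3}, "L2": {"201": 0.6, "205": 0.4}},
    "taxon_B": {"L1": {"150": 0.1, "152": 0.9}, "L2": {"201": 0.2, "205": 0.8}},
}
individual = {"L1": ("152", "152"), "L2": ("205", "205")}
print(assignment_posterior(individual, refs))
```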
Abstract:
CD4 expression in HIV replication is paradoxical: HIV entry requires high cell-surface CD4 densities, but replication requires CD4 down-modulation. However, is CD4 density in HIV+ patients affected over time? Do changes in CD4 density correlate with disease progression? Here, we examined the role of CD4 density for HIV disease progression by longitudinally quantifying CD4 densities on CD4+ T cells and monocytes of ART-naive HIV+ patients with different disease progression rates. This was a retrospective study. We defined three groups of HIV+ patients by their rate of CD4+ T cell loss, calculated from the time between infection and reaching a CD4 count of 200 cells/µl: fast (<7.5 years), intermediate (7.5-12 years), and slow progressors (>12 years). Mathematical modeling allowed us to determine the maximum CD4+ T cell count after HIV seroconversion (defined as the "postseroconversion CD4 count") and longitudinal profiles of CD4 count and density. CD4 densities were quantified on CD4+ T cells and monocytes from these patients and from healthy individuals by flow cytometry. Fast progressors had significantly lower postseroconversion CD4 counts than other progressors. CD4 density on T cells was lower in HIV+ patients than in healthy individuals and decreased more rapidly in fast than in slow progressors. Antiretroviral therapy (ART) did not normalize CD4 density. Thus, postseroconversion CD4 counts define individual HIV disease progression rates that may help to identify patients who might benefit most from early ART. Early discrimination of slow and fast progressors suggests that critical events during primary infection define long-term outcome. A more rapid CD4 density decrease in fast progressors might contribute to progressive functional impairments of the immune response in advanced HIV infection. The lack of an effect of ART on CD4 density implies a persistent dysfunctional immune response caused by uncontrolled HIV infection.
Abstract:
The utility of sequencing a second highly variable locus in addition to the spa gene (e.g., double-locus sequence typing [DLST]) was investigated to overcome limitations of a Staphylococcus aureus single-locus typing method. Although adding a second locus seemed to increase discriminatory power, it was not sufficient to definitively infer evolutionary relationships within a single multilocus sequence type (ST-5).
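The discriminatory power of typing schemes is conventionally summarized with Simpson's index of diversity; the abstract does not name the metric, but the following sketch (with invented isolates) shows how concatenating a second locus into a double-locus profile can raise the index by splitting otherwise identical single-locus types.

```python
from collections import Counter

# Simpson's index of diversity, a conventional summary of a typing method's
# discriminatory power (the choice of metric is assumed here, not stated in the abstract):
# D = 1 - sum(n_j * (n_j - 1)) / (N * (N - 1)), where n_j is the size of the j-th type.
def simpson_diversity(types):
    n = len(types)
    counts = Counter(types).values()
    return 1 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Invented single-locus (spa-like) types and a hypothetical second locus for the same isolates.
locus1 = ["t002", "t002", "t002", "t045", "t045", "t067"]
locus2 = ["a1",   "a2",   "a2",   "b1",   "b1",   "c1"]
dlst   = [f"{x}-{y}" for x, y in zip(locus1, locus2)]   # double-locus profile

print("single-locus D:", round(simpson_diversity(locus1), 3))
print("double-locus D:", round(simpson_diversity(dlst), 3))
```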
Abstract:
Testosterone abuse is conventionally assessed by the urinary testosterone/epitestosterone (T/E) ratio, levels above 4.0 being considered suspicious. A deletion polymorphism in the gene coding for UGT2B17 is strongly associated with reduced testosterone glucuronide (TG) levels in urine. Many of the individuals devoid of the gene would not reach a T/E ratio of 4.0 after testosterone intake. Future test programs will most likely shift from population-based to individual-based T/E cut-off ratios using Bayesian inference. A longitudinal analysis is dependent on an individual's true negative baseline T/E ratio. The aim was to investigate whether it is possible to increase the sensitivity and specificity of the T/E test by adding UGT2B17 genotype information in a Bayesian framework. A single intramuscular dose of 500 mg of testosterone enanthate was given to 55 healthy male volunteers with either two, one or no allele (ins/ins, ins/del or del/del) of the UGT2B17 gene. Urinary excretion of TG and the T/E ratio were measured over 15 days. The Bayesian analysis was conducted to calculate the individual T/E cut-off ratio. When adding the genotype information, the program returned lower individual cut-off ratios in all del/del subjects, increasing the sensitivity of the test considerably. It will be difficult, if not impossible, to discriminate between a true negative baseline T/E value and a false negative one without knowledge of the UGT2B17 genotype. UGT2B17 genotype information is crucial, both to decide which initial cut-off ratio to use for an individual, and for increasing the sensitivity of the Bayesian analysis.
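A minimal sketch of a genotype-informed Bayesian baseline for log(T/E) is given below: a normal prior whose location depends on UGT2B17 genotype is updated with an individual's negative baseline samples, and the individual cut-off is taken as an upper quantile of the posterior-predictive distribution. All prior and variance values are invented for illustration; the operational Bayesian model for longitudinal steroid profiling is more elaborate.

```python
import math

# Sketch of a genotype-informed Bayesian baseline for the urinary T/E ratio.
# Work on log(T/E); assume a normal prior on the individual's mean log(T/E) whose
# location depends on UGT2B17 genotype, and a known within-individual variance.
# All numerical values are invented for illustration.
GENOTYPE_PRIOR = {            # prior mean and sd of an individual's mean log(T/E)
    "ins/ins": (math.log(1.3), 0.6),
    "ins/del": (math.log(0.9), 0.6),
    "del/del": (math.log(0.15), 0.6),   # deletion homozygotes: much lower baseline
}
WITHIN_SD = 0.35              # assumed within-individual sd of log(T/E)

def individual_cutoff(genotype, observed_te, quantile_z=2.33):
    """Posterior-predictive upper cut-off (~99th percentile) for the next T/E value."""
    mu0, tau0 = GENOTYPE_PRIOR[genotype]
    n = len(observed_te)
    if n:
        xbar = sum(math.log(x) for x in observed_te) / n
        post_var = 1.0 / (1.0 / tau0**2 + n / WITHIN_SD**2)
        post_mean = post_var * (mu0 / tau0**2 + n * xbar / WITHIN_SD**2)
    else:
        post_mean, post_var = mu0, tau0**2
    pred_sd = math.sqrt(post_var + WITHIN_SD**2)
    return math.exp(post_mean + quantile_z * pred_sd)

# A del/del individual with three negative baseline samples gets a far lower cut-off
# than the population threshold of 4.0 applied without genotype information.
print(round(individual_cutoff("del/del", [0.12, 0.18, 0.10]), 2))
print(round(individual_cutoff("ins/ins", [1.1, 1.4, 1.2]), 2))
```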
Abstract:
Epidemiological processes leave a fingerprint in the pattern of genetic structure of virus populations. Here, we provide a new method to infer epidemiological parameters directly from viral sequence data. The method is based on phylogenetic analysis using a birth-death model (BDM) rather than the commonly used coalescent as the model for the epidemiological transmission of the pathogen. Using the BDM has the advantage that transmission and death rates are estimated independently and therefore enables for the first time the estimation of the basic reproductive number of the pathogen using only sequence data, without further assumptions like the average duration of infection. We apply the method to genetic data of the HIV-1 epidemic in Switzerland.
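In a birth-death phylodynamic model the basic reproductive number follows directly from the two independently estimated rates, R0 = λ/δ, where λ is the per-lineage transmission (birth) rate and δ the becoming-uninfectious (death) rate. The sketch below simply propagates this ratio through hypothetical posterior samples; the values are placeholders, not estimates for the Swiss HIV-1 epidemic.

```python
import numpy as np

rng = np.random.default_rng(1)

# In a birth-death phylodynamic model the basic reproductive number is
# R0 = lambda / delta, where lambda is the per-lineage transmission (birth) rate and
# delta the becoming-uninfectious (death) rate, both estimable from a dated phylogeny.
# The posterior samples below are invented placeholders, not estimates from the paper.
lam = rng.normal(loc=0.25, scale=0.03, size=5000)    # transmissions per lineage per year
delta = rng.normal(loc=0.15, scale=0.02, size=5000)  # becoming-uninfectious rate per year

r0 = lam / delta
print("posterior median R0:", round(float(np.median(r0)), 2))
print("95% credible interval:", np.round(np.percentile(r0, [2.5, 97.5]), 2))
```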
Abstract:
BACKGROUND: Mitochondrial DNA sequencing increasingly results in the recognition of genetically divergent but morphologically cryptic lineages. Species delimitation approaches that rely on multiple lines of evidence in areas of co-occurrence are particularly powerful for inferring their specific status. We investigated the species boundaries of two cryptic lineages of the land snail genus Trochulus in a contact zone, using mitochondrial and nuclear DNA markers as well as shell morphometrics. RESULTS: Both mitochondrial lineages have a distinct geographical distribution with a small zone of co-occurrence. In the same area, we detected two nuclear genotype clusters, each highly significantly associated with one mitochondrial lineage. This association, however, had exceptions: a small number of individuals in the contact zone showed intermediate genotypes (4%) or cytonuclear disequilibrium (12%). Both mitochondrial lineage and nuclear cluster were statistically significant predictors of shell shape, indicating morphological divergence. Nevertheless, the lineage morphospaces largely overlapped (low posterior classification success rates of 69% and 78%, respectively): the two lineages are truly cryptic. CONCLUSION: The integrative approach using multiple lines of evidence supported the hypothesis that the investigated Trochulus lineages are reproductively isolated species. In the small contact area, however, the lineages hybridise to a limited extent. This detection of a hybrid zone adds to the few reported cases of hybridisation in land snails.
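To illustrate how a modest posterior classification success rate arises from largely overlapping morphospaces, the sketch below classifies two invented, heavily overlapping one-dimensional shell-shape score distributions with a simple equal-variance Gaussian rule; it illustrates the idea only and is not the morphometric analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two invented "shell-shape score" distributions that overlap heavily, classified by a
# simple two-class equal-variance Gaussian rule (midpoint decision boundary).
shape_a = rng.normal(loc=0.0, scale=1.0, size=300)   # lineage A shell-shape scores
shape_b = rng.normal(loc=1.2, scale=1.0, size=300)   # lineage B shell-shape scores

midpoint = (shape_a.mean() + shape_b.mean()) / 2.0   # decision boundary for equal variances
success_a = np.mean(shape_a < midpoint)
success_b = np.mean(shape_b >= midpoint)

print(f"classification success: lineage A {success_a:.0%}, lineage B {success_b:.0%}")
```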
Abstract:
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, combined with a Runge-Kutta scheme for the time integration. A domain decomposition method is used to match the fluid-solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
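The two spectral building blocks named above can be sketched in isolation: a Chebyshev differentiation matrix for the non-periodic (radial) direction and Fourier wavenumber differentiation for the periodic (azimuthal) direction, each verified on a smooth test function. The full poro-elastic solver, the multi-domain mesh and the characteristics-based matching are not reproduced here.

```python
import numpy as np

# Sketch of the two spectral building blocks named in the abstract: a Chebyshev
# differentiation matrix (radial direction) and Fourier wavenumber differentiation
# (periodic azimuthal direction), each checked on a smooth test function.
def cheb(n):
    """Chebyshev differentiation matrix and Gauss-Lobatto points (Trefethen's construction)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dx = x[:, None] - x[None, :]
    d = np.outer(c, 1.0 / c) / (dx + np.eye(n + 1))
    d -= np.diag(d.sum(axis=1))
    return d, x

def fourier_derivative(f):
    """Spectral derivative of a periodic sample on [0, 2*pi)."""
    n = len(f)
    k = 1j * np.fft.fftfreq(n, d=1.0 / n)    # integer wavenumbers times i
    return np.real(np.fft.ifft(k * np.fft.fft(f)))

# Radial test: d/dx of x**3 on Chebyshev points.
D, x = cheb(16)
print("max Chebyshev error:", np.max(np.abs(D @ x**3 - 3 * x**2)))

# Azimuthal test: d/dtheta of sin(3*theta) on an equispaced periodic grid.
theta = 2 * np.pi * np.arange(64) / 64
print("max Fourier error:",
      np.max(np.abs(fourier_derivative(np.sin(3 * theta)) - 3 * np.cos(3 * theta))))
```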