32 results for grouping estimators


Relevance:

10.00%

Publisher:

Abstract:

The vast territories that were radioactively contaminated during the 1986 Chernobyl accident provide a substantial data set of radioactive monitoring data, which can be used for the verification and testing of the different spatial estimation (prediction) methods involved in risk assessment studies. Using the Chernobyl data set for such a purpose is motivated by its heterogeneous spatial structure (the data are characterized by large-scale correlations, short-scale variability, spotty features, etc.). The present work is concerned with the application of the Bayesian Maximum Entropy (BME) method to estimate the extent and the magnitude of the radioactive soil contamination by 137Cs due to the Chernobyl fallout. The powerful BME method allows rigorous incorporation of a wide variety of knowledge bases into the spatial estimation procedure, leading to informative contamination maps. Exact measurements ("hard" data) are combined with secondary information on local uncertainties (treated as "soft" data) to generate science-based uncertainty assessments of soil contamination estimates at unsampled locations. BME describes uncertainty in terms of the posterior probability distributions generated across space; no assumption about the underlying distribution is made, and non-linear estimators are automatically incorporated. Traditional estimation variances based on the assumption of an underlying Gaussian distribution (analogous, e.g., to the kriging variance) can be derived as a special case of the BME uncertainty analysis. The BME estimates obtained using hard and soft data are compared with the BME estimates obtained using only hard data. The comparison involves both the accuracy of the estimation maps using the exact data and the assessment of the associated uncertainty using repeated measurements. Furthermore, a comparison of the spatial estimation accuracy obtained by the two methods was carried out using a validation data set of hard data.
Finally, a separate uncertainty analysis was conducted that evaluated the ability of the posterior probabilities to reproduce the distribution of the raw repeated measurements available in certain populated sites. The analysis provides an illustration of the improvement in mapping accuracy obtained by adding soft data to the existing hard data and, in general, demonstrates that the BME method performs well in terms of both estimation accuracy and estimation error assessment, both of which are useful features for the Chernobyl fallout study.
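As the abstract notes, the kriging estimate and its variance arise as the Gaussian special case of the BME posterior. A minimal simple-kriging sketch in numpy illustrates that special case; the 1-D coordinates, the exponential covariance, and its parameters below are invented for illustration, not the study's:

```python
import numpy as np

def simple_kriging(x_obs, z_obs, x_new, mean=0.0, sill=1.0, rng=3.0):
    """Simple-kriging predictor and estimation variance with an (assumed)
    exponential covariance; the Gaussian special case of the BME posterior."""
    cov = lambda h: sill * np.exp(-np.abs(h) / rng)
    K = cov(x_obs[:, None] - x_obs[None, :])   # data-to-data covariances
    k = cov(x_obs - x_new)                     # data-to-target covariances
    w = np.linalg.solve(K, k)                  # kriging weights
    est = mean + w @ (z_obs - mean)            # estimate at the unsampled point
    var = sill - w @ k                         # kriging (estimation) variance
    return est, var

x = np.array([0.0, 1.0, 2.5])   # hypothetical monitoring locations
z = np.array([0.8, 1.2, 0.5])   # hypothetical "hard" measurements
est, var = simple_kriging(x, z, 1.2)
print(est, var)
```

Predicting at an observed location reproduces the measurement with zero variance, the exact-interpolation property shared by the hard-data-only BME estimate.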

Relevance:

10.00%

Publisher:

Abstract:

Phylogenetic reconstructions are a major component of many studies in evolutionary biology, but their accuracy can be reduced under certain conditions. Recent studies showed that the convergent evolution of some phenotypes resulted from recurrent amino acid substitutions in genes belonging to distant lineages. It has been suggested that these convergent substitutions could bias phylogenetic reconstruction toward grouping convergent phenotypes together, but such an effect has never been appropriately tested. We used computer simulations to determine the effect of convergent substitutions on the accuracy of phylogenetic inference. We show that, in some realistic conditions, even a relatively small proportion of convergent codons can strongly bias phylogenetic reconstruction, especially when amino acid sequences are used as characters. The strength of this bias does not depend on the reconstruction method but varies as a function of how much divergence had occurred among the lineages prior to any episodes of convergent substitutions. While the occurrence of this bias is difficult to predict, the risk of spurious groupings is strongly decreased by considering only 3rd codon positions, which are less subject to selection, as long as saturation problems are not present. Therefore, we recommend that, whenever possible, topologies obtained with amino acid sequences and 3rd codon positions be compared to identify potential phylogenetic biases and avoid evolutionarily misleading conclusions.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: To determine the outcome of patients with brain metastasis (BM) from lung cancer treated with an external beam radiotherapy boost (RTB) after whole brain radiotherapy (WBRT). METHODS: A total of 53 BM patients with lung cancer were treated sequentially with WBRT and RTB between 1996 and 2008 according to our institutional protocol. Mean age was 58.8 years. The median KPS was 90. Median recursive partitioning analysis (RPA) and graded prognostic assessment (GPA) groupings were 2 and 2.5, respectively. Surgery was performed on 38 (71%) patients. The median number of BM was 1 (range, 1-3). Median combined WBRT and RTB dose was 39 Gy (range, 37.5-54). Median follow-up was 12.0 months. RESULTS: During the period of follow-up, 37 (70%) patients died. The median overall survival (OS) was 14.5 months. Only 13 patients failed in the brain. The majority of patients (n = 29) failed distantly. The 1-year OS, local control, and extracranial failure rates were 61.2%, 75.2% and 60.8%, respectively. On univariate analysis, improved OS was found to be significantly associated with total dose (≤39 Gy vs. >39 Gy; p < 0.01), age <65 (p < 0.01), absence of extracranial metastasis (p < 0.01), GPA ≥2.5 (p = 0.01), KPS ≥90 (p = 0.01), and RPA <2 (p = 0.04). On multivariate analysis, total dose (p < 0.01) and the absence of extracranial metastasis (p = 0.03) retained statistical significance. CONCLUSIONS: The majority of lung cancer patients treated with WBRT and RTB progressed extracranially. There might be a subgroup of younger patients with good performance status and no extracranial disease who may benefit from dose escalation to the metastatic site after WBRT.
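Rates such as the 1-year OS reported above come from survival-curve estimation with censoring. A minimal Kaplan-Meier sketch shows the computation; the toy cohort below is invented, not the study's data:

```python
# Minimal Kaplan-Meier estimator (illustrative only; real analyses would use a
# dedicated survival library and handle ties/CIs properly).
def kaplan_meier(times, events, t):
    """Survival probability S(t); events[i] is 1 for death, 0 for censoring."""
    s = 1.0
    for u in sorted({ti for ti, e in zip(times, events) if e == 1 and ti <= t}):
        at_risk = sum(ti >= u for ti in times)                       # risk set at u
        deaths = sum(ti == u and e == 1 for ti, e in zip(times, events))
        s *= 1 - deaths / at_risk                                    # product-limit step
    return s

times = [3, 6, 9, 12, 14, 15, 20, 24]   # months of follow-up (made up)
events = [1, 0, 1, 1, 0, 1, 0, 0]       # 1 = death observed, 0 = censored
print(kaplan_meier(times, events, 12))   # "1-year OS" on this toy cohort
```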

Relevance:

10.00%

Publisher:

Abstract:

Arbuscular mycorrhizal fungi (AMF) are an ecologically important group of fungi. Previous studies showed the presence of divergent copies of beta-tubulin and V-type vacuolar H+-ATPase genes in AMF genomes and suggested horizontal gene transfer from host plants or mycoparasites to AMF. We sequenced these genes from DNA isolated from an in vitro cultured isolate of Glomus intraradices that was free of any obvious contaminants. We found two highly variable beta-tubulin sequences and variable H+-ATPase sequences. Despite this high variation, comparison of the sequences with those in gene banks supported a glomeromycotan origin of the G. intraradices beta-tubulin and H+-ATPase sequences. Thus, our results are in sharp contrast with the previously reported polyphyletic origin of those genes. We present evidence that some highly divergent sequences of beta-tubulin and H+-ATPase deposited in the databases are likely to be contaminants. We therefore reject the prediction of horizontal transfer to AMF genomes. Large differences in GC content between glomeromycotan sequences and sequences grouping in other lineages are shown, and we suggest that they can be used as an indicator to detect such contaminants. The H+-ATPase phylogeny gave unexpected results and failed to resolve the fungi as a natural group. The beta-tubulin phylogeny supported the Glomeromycota as sister group of the Chytridiomycota. Contrasts between our results and trees previously generated using rDNA sequences are discussed.
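The GC-content screen the authors propose is simple to sketch; the two sequences below are made up for illustration, not real glomeromycotan or contaminant data:

```python
def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

glomus_like = "ATGGCTATTACGGATCAA"   # hypothetical glomeromycotan fragment
suspect     = "ATGGCGCCGCTGGCCGCC"   # hypothetical high-GC contaminant
# A large GC gap between a sequence and the rest of its putative lineage is the
# kind of signal the abstract suggests using as a contamination flag.
print(gc_content(glomus_like), gc_content(suspect))
```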

Relevance:

10.00%

Publisher:

Abstract:

Abstract: Recent advances in non-invasive brain imaging allow the visualization of the different aspects of complex brain dynamics. Approaches based on a combination of imaging techniques facilitate the investigation and linking of multiple aspects of information processing, and they are becoming a leading tool for understanding the neural basis of various brain functions. Perception, motion, and cognition involve the formation of cooperative neuronal assemblies distributed over the cerebral cortex. In this research, we explore the characteristics of interhemispheric assemblies in the visual brain by taking advantage of the complementary strengths of EEG (electroencephalography) and fMRI (functional magnetic resonance imaging): the high temporal resolution of EEG and the high spatial resolution of fMRI. In the first part of this thesis we investigate the response of the visual areas to an interhemispheric perceptual grouping task, which requires grouping coherent elements from the two visual hemifields into a single image. We use EEG coherence as a measure of synchronization and the BOLD (Blood Oxygenation Level Dependent) response as a measure of the related brain activation. The increase of interhemispheric EEG coherence, restricted to the occipital electrodes and to the EEG beta band, and its linear relation to the BOLD responses in area VP/V4 point to a trans-hemispheric synchronous neuronal assembly involved in early perceptual grouping. This result encouraged us to explore, with the same multimodal approach, the formation of synchronous trans-hemispheric networks induced by stimuli of various spatial frequencies. We found that the involvement of the ventral and medio-dorsal visual networks is modulated by the spatial frequency content of the stimulus. Thus, based on the combination of EEG coherence and fMRI BOLD data, we identified visual networks with different sensitivity to integrating low vs. high spatial frequencies. 
In the second part of this work we test the hypothesis that the increase of brain activity during perceptual grouping depends on the activity of the callosal axons interconnecting the visual areas involved. As the corpus callosum matures progressively over the first two decades of life, we investigated, in children of 7-13 years, functional (activation with fMRI) and morphological (myelination of the corpus callosum with Magnetization Transfer Imaging, MTI) aspects of spatial integration. In children, the activation associated with spatial integration across visual fields was localized in the visual ventral stream, as in adults, but limited to a part of the area activated in adults. The strong correlation between individual BOLD responses in this area and the myelination of the splenial fiber system points to myelination as a significant factor in the development of the spatial integration ability.
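The interhemispheric synchronization measure used here is EEG coherence. A bare-bones numpy version (no windowing or detrending, and synthetic "electrode" signals that share a phase-locked 20 Hz component) illustrates the idea; it is a sketch, not the thesis's analysis pipeline:

```python
import numpy as np

def coherence(x_epochs, y_epochs):
    """Magnitude-squared coherence per frequency bin, averaged over epochs.
    Inputs are arrays of shape (n_epochs, n_samples)."""
    X = np.fft.rfft(x_epochs, axis=1)
    Y = np.fft.rfft(y_epochs, axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)        # averaged cross-spectrum
    Sxx = (np.abs(X) ** 2).mean(axis=0)        # averaged auto-spectra
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(0)
t = np.arange(256) / 256.0                     # 1 s at 256 Hz (so bin k = k Hz)
left_epochs, right_epochs = [], []
for _ in range(20):
    phase = rng.uniform(0, 2 * np.pi)          # shared phase => high coherence
    left_epochs.append(np.sin(2 * np.pi * 20 * t + phase) + 0.5 * rng.standard_normal(256))
    right_epochs.append(np.sin(2 * np.pi * 20 * t + phase) + 0.5 * rng.standard_normal(256))
coh = coherence(np.array(left_epochs), np.array(right_epochs))
print(float(coh[20]))                          # coherence at the shared 20 Hz (beta band)
```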

Relevance:

10.00%

Publisher:

Abstract:

Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question: how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is non-observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting. It proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process: the continuous empirical characteristic function (ECF) estimator, based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each is written as an independent, self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function for the stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of the affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason an exponential or double-exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason to do that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets.
The goal is to show that the continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter shows that our estimator indeed has this ability. Once it is clear that the continuous ECF estimator based on the unconditional characteristic function works, the next question arises immediately: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, due to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, two-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated. Thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with two- and three-dimensional unconditional characteristic functions on simulated data.
It shows that the theoretical efficiency of the continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to limitations on the computing power and optimization toolboxes available to the general public. Thus, the continuous ECF estimator based on the joint, two-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
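The core idea of an ECF estimator, matching the empirical characteristic function of a sample to the model's characteristic function, can be sketched on a far simpler model than the thesis's jump-diffusions. Below is a toy grid-search version for a Gaussian; the frequency grid, parameter grids, and data are all illustrative, not the thesis's estimator:

```python
import numpy as np
from itertools import product

def ecf(u, data):
    """Empirical characteristic function evaluated at frequencies u."""
    return np.exp(1j * np.outer(u, data)).mean(axis=1)

def ecf_fit(data, mus, sigmas, u=np.linspace(0.1, 2.0, 20)):
    """Minimize the summed squared distance between the empirical CF and the
    Gaussian model CF over a parameter grid (a toy stand-in for the continuous
    ECF estimator, which targets the joint CF of jump-diffusions)."""
    phi_hat = ecf(u, data)
    best, best_loss = None, np.inf
    for mu, s in product(mus, sigmas):
        phi = np.exp(1j * u * mu - 0.5 * (s * u) ** 2)   # Gaussian CF
        loss = np.sum(np.abs(phi_hat - phi) ** 2)
        if loss < best_loss:
            best, best_loss = (mu, s), loss
    return best

rng = np.random.default_rng(1)
data = rng.normal(0.5, 1.5, size=5000)                   # true (mu, sigma) = (0.5, 1.5)
mu_hat, s_hat = ecf_fit(data, np.linspace(0, 1, 21), np.linspace(0.5, 2.5, 21))
print(mu_hat, s_hat)
```

The thesis's estimator replaces the Gaussian CF with the closed-form joint unconditional CF and the grid search with proper optimization, but the matching principle is the same.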

Relevance:

10.00%

Publisher:

Abstract:

Metabolic syndrome represents a grouping of risk factors closely linked to cardiovascular diseases and diabetes. At first sight, nuclear medicine has no direct application in cardiology at the level of primary prevention, but positron emission tomography is a non-invasive imaging technique that can assess myocardial perfusion as well as endothelium-dependent coronary vasomotion (a surrogate marker of cardiovascular event rate), and thus finds an application in the study of coronary pathophysiology. As the prevalence of the metabolic syndrome is still unknown in Switzerland, we will estimate it from data available within the framework of a health promotion program. Based on the deleterious effect on the endothelium already observed with two of its components, we will estimate the number of persons at risk in Switzerland.

Relevance:

10.00%

Publisher:

Abstract:

Robust estimators for accelerated failure time models with asymmetric (or symmetric) error distributions and censored observations are proposed. It is assumed that the error model belongs to a log-location-scale family of distributions and that the mean response is the parameter of interest. Since scale is a main component of the mean, scale is not treated as a nuisance parameter. A three-step procedure is proposed. In the first step, an initial high-breakdown-point S-estimate is computed. In the second step, observations that are unlikely under the estimated model are rejected or downweighted. Finally, a weighted maximum likelihood estimate is computed. To define the estimates, functions of censored residuals are replaced by their estimated conditional expectation given that the response is larger than the observed censored value. The rejection rule in the second step is based on an adaptive cut-off that, asymptotically, does not reject any observation when the data are generated according to the model. Therefore, the final estimate attains full efficiency at the model, with respect to the maximum likelihood estimate, while maintaining the breakdown point of the initial estimator. Asymptotic results are provided. The new procedure is evaluated with the help of Monte Carlo simulations. Two examples with real data are discussed.
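The three-step scheme (robust initial fit, rejection of unlikely observations, ML refit) can be sketched on a plain location model, ignoring censoring. Everything below is a simplification: Gaussian errors, and a fixed 2.5-sigma cut-off in place of the paper's adaptive rule:

```python
import numpy as np

def three_step_location(x, cutoff=2.5):
    """Simplified three-step estimate of location:
    (1) high-breakdown initial estimate (median/MAD),
    (2) reject observations unlikely under it (fixed cut-off, not adaptive),
    (3) refit by maximum likelihood (Gaussian ML = sample mean)."""
    loc0 = np.median(x)                              # step 1: robust location
    scale0 = 1.4826 * np.median(np.abs(x - loc0))    # MAD, Gaussian-consistent
    keep = np.abs(x - loc0) <= cutoff * scale0       # step 2: outlier rejection
    return x[keep].mean()                            # step 3: ML refit

rng = np.random.default_rng(2)
clean = rng.normal(10.0, 1.0, size=200)
contaminated = np.concatenate([clean, np.full(20, 100.0)])   # ~10% gross outliers
print(float(contaminated.mean()), float(three_step_location(contaminated)))
```

The plain mean is dragged far from 10 by the contamination, while the three-step estimate stays close to it, inheriting the breakdown behaviour of the median/MAD start.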

Relevance:

10.00%

Publisher:

Abstract:

The so-called "enchondromatoses" are skeletal disorders defined by the presence of ectopic cartilaginous tissue within bone tissue. The clinical and radiographic features of the different enchondromatoses are distinct, and grouping them does not reflect a common pathogenesis but simply a similar radiographic appearance and thus the need for a differential diagnosis. Recent advances in the understanding of their molecular and cellular bases confirm the heterogeneous nature of the different enchondromatoses. Some, like Ollier disease, Maffucci disease, metaphyseal chondromatosis with hydroxyglutaric aciduria, and metachondromatosis are produced by a dysregulation of chondrocyte proliferation, while others (such as spondyloenchondrodysplasia or dysspondyloenchondromatosis) are caused by defects in structure or metabolism of cartilage or bone matrix. In other forms (e.g., the dominantly inherited genochondromatoses), the basic defect remains to be determined. The classification, proposed by Spranger and associates in 1978 and tentatively revised twice, was based on the radiographic appearance, the anatomic sites involved, and the mode of inheritance. The new classification proposed here integrates the molecular genetic advances and delineates phenotypic families based on the molecular defects. Reference radiographs are provided to help in the diagnosis of the well-defined forms. In spite of advances, many cases remain difficult to diagnose and classify, implying that more variants remain to be defined at both the clinical and molecular levels. © 2012 Wiley Periodicals, Inc.

Relevance:

10.00%

Publisher:

Abstract:

Atrial arrhythmias (AAs) are a common complication in adult patients with congenital heart disease. We sought to compare the lifetime prevalence of AAs in patients with right- versus left-sided congenital cardiac lesions and their effect on the prognosis. A congenital heart disease diagnosis was assigned using the International Disease Classification, Ninth Revision, diagnostic codes in the administrative databases of Quebec, from 1983 to 2005. Patients with AAs were those diagnosed with an International Disease Classification, Ninth Revision, code for atrial fibrillation or intra-atrial reentry tachycardia. To ensure that the diagnosis of AA was new, a washout period of 5 years after entry into the database was used, a period during which the patient could not have received an International Disease Classification, Ninth Revision, code for AA. The cumulative lifetime risk of AA was estimated using the Practical Incidence Estimators method. The hazard ratios (HRs) for mortality, morbidity, and cardiac interventions were compared between those with right- and left-sided lesions after adjustment for age, gender, disease severity, and cardiac risk factors. In a population of 71,467 patients, 7,756 adults developed AAs (isolated right-sided, 2,229; isolated left-sided, 1,725). The lifetime risk of developing AAs was significantly greater in patients with right-sided than in patients with left-sided lesions (61.0% vs 55.4%, p <0.001). The HR for mortality and the development of stroke or heart failure was similar in both groups (HR 0.96, 95% confidence interval [CI] 0.86 to 1.09; HR 0.94, 95% CI 0.80 to 1.09; and HR 1.10, 95% CI 0.98 to 1.23, respectively). However, the rates of cardiac catheterization (HR 0.63, 95% CI 0.55 to 0.72), cardiac surgery (HR 0.40, 95% CI 0.36 to 0.45), and arrhythmia surgery (HR 0.77, 95% CI 0.6 to 0.98) were significantly less for patients with right-sided lesions.
In conclusion, patients with right-sided lesions had a greater lifetime burden of AAs. However, their morbidity and mortality were no less than those with left-sided lesions, although the rate of intervention was substantially different.

Relevance:

10.00%

Publisher:

Abstract:

Spatial data analysis, mapping and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations; methods of geostatistics, i.e. the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANNs) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996); etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, often of unknown nature. This is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as a realization of some spatial random process. To obtain an estimate with kriging, one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if the number of measurements is insufficient, and the variogram is sensitive to outliers and extremes. ANNs are a powerful tool, but they also suffer from a number of drawbacks. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear and robust to noise in the measurements, that can deal with small empirical datasets, and that has a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression.
SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of the SVM for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR to spatial mapping of physical quantities were obtained by the authors for mapping of medium porosity (Kanevski et al., 1999) and for mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for the nonlinear modelling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping in a real case study: soil pollution by the 137Cs radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
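SVR-based spatial mapping in the spirit of the case study can be sketched with scikit-learn on a synthetic 2-D field. The field, the noise level, and the hyperparameters below are hand-picked for illustration, not the paper's tuned values:

```python
# Hedged sketch: epsilon-insensitive SVR with an RBF kernel mapping a noisy
# synthetic "contamination" field from scattered measurements.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(300, 2))              # sampled locations
z = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])     # synthetic true field
z_noisy = z + 0.1 * rng.standard_normal(300)      # noisy measurements

model = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=5.0).fit(X, z_noisy)
grid = np.array([[0.5, 0.5], [0.1, 0.9]])         # unsampled map locations
print(model.predict(grid))
```

The epsilon-insensitive loss is what gives SVR its robustness to measurement noise, which is the property the paper examines in Section 4.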

Relevance:

10.00%

Publisher:

Abstract:

Phenotypic convergence is a widespread and well-recognized evolutionary phenomenon. However, the responsible molecular mechanisms often remain unknown, mainly because the genes involved have not been identified. A well-known example of physiological convergence is the C4 photosynthetic pathway, which evolved independently more than 45 times [1]. Here, we address the question of the molecular bases of the C4 convergent phenotypes in grasses (Poaceae) by reconstructing the evolutionary history of the genes encoding a key C4 enzyme, phosphoenolpyruvate carboxylase (PEPC). PEPC genes belong to a multigene family encoding distinct isoforms, of which only one is involved in C4 photosynthesis [2]. Using phylogenetic analyses, we showed that grass C4 PEPCs appeared at least eight times independently from the same non-C4 PEPC. Twenty-one amino acids evolved under positive selection and converged to similar or identical amino acids in most of the grass C4 PEPC lineages. This is the first record of such a high level of molecular convergent evolution, illustrating the repeatability of evolution. These amino acids were responsible for a strong phylogenetic bias grouping all C4 PEPCs together. The C4-specific amino acids detected must be essential for the enzymatic characteristics of C4 PEPC, and their identification opens new avenues for the engineering of the C4 pathway in crops.

Relevance:

10.00%

Publisher:

Abstract:

The variability observed in drug exposure has a direct impact on the overall response to a drug. The largest part of the variability between dose and drug response resides in the pharmacokinetic phase, i.e. in the dose-concentration relationship. Among the possibilities offered to clinicians, Therapeutic Drug Monitoring (TDM; the monitoring of drug concentration measurements) is one of the useful tools to guide pharmacotherapy. TDM aims at optimizing treatments by individualizing dosage regimens based on blood drug concentration measurements. Bayesian calculations, relying on the population pharmacokinetic approach, currently represent the gold-standard TDM strategy. However, they require expertise and computational assistance, thus limiting their broad implementation in routine patient care. The overall objective of this thesis was to implement robust tools to provide Bayesian TDM to clinicians in modern routine patient care. To that end, the aims were (i) to elaborate an efficient and ergonomic computer tool for Bayesian TDM: EzeCHieL; (ii) to provide algorithms for Bayesian forecasting of drug concentrations and software validation, relying on population pharmacokinetics; (iii) to address some relevant issues encountered in clinical practice, with a focus on neonates and drug adherence. First, the current state of existing software was reviewed, which allowed specifications to be established for the development of EzeCHieL. Then, in close collaboration with software engineers, a fully integrated software tool, EzeCHieL, was elaborated. EzeCHieL provides population-based predictions and Bayesian forecasting through an easy-to-use interface. It makes it possible to assess how expected an observed concentration is in a patient compared to the whole population (via percentiles), to assess the suitability of the predicted concentration relative to the targeted concentration, and to provide dosing adjustment. It thus allows a priori and a posteriori Bayesian individualization of drug dosing.
Implementation of Bayesian methods requires characterising drug disposition and quantifying its variability through a population approach. Population pharmacokinetic analyses were performed and Bayesian estimators were provided for candidate drugs in populations of interest: anti-infectious drugs administered to neonates (gentamicin and imipenem). The developed models were implemented in EzeCHieL and also served as a validation tool, comparing EzeCHieL concentration predictions against predictions from the reference software (NONMEM®). The models used need to be adequate and reliable; for instance, extrapolation from adults or children to neonates is not possible. Therefore, this work proposes models for neonates based on the concept of developmental pharmacokinetics. Patients' adherence is also an important concern, both for the development of drug models and for a successful outcome of pharmacotherapy. A last study assesses the impact of routine patient adherence measurement on model definition and TDM interpretation. In conclusion, our results offer solutions to assist clinicians in interpreting blood drug concentrations and to improve the appropriateness of drug dosing in routine clinical practice.
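As an illustration of the Bayesian forecasting step that such tools automate, the sketch below performs a maximum a posteriori (MAP) estimation of individual pharmacokinetic parameters from observed concentrations, assuming a one-compartment IV bolus model with lognormal inter-individual variability and a proportional residual error. All population values, the dose and the measurements are purely illustrative placeholders, not taken from EzeCHieL or the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative population priors for a one-compartment IV bolus model
CL_POP, V_POP = 5.0, 30.0      # typical clearance (L/h) and volume (L)
OMEGA_CL, OMEGA_V = 0.3, 0.2   # SDs of lognormal inter-individual variability
SIGMA_PROP = 0.15              # proportional residual error

def conc(t, dose, cl, v):
    """Predicted concentration for an IV bolus, one-compartment model."""
    return dose / v * np.exp(-cl / v * t)

def neg_log_posterior(eta, times, obs, dose):
    """-log posterior of the individual random effects eta = (eta_CL, eta_V)."""
    cl = CL_POP * np.exp(eta[0])
    v = V_POP * np.exp(eta[1])
    pred = conc(times, dose, cl, v)
    sd = SIGMA_PROP * pred
    loglik = -0.5 * np.sum(((obs - pred) / sd) ** 2 + np.log(2 * np.pi * sd**2))
    logprior = -0.5 * (eta[0] / OMEGA_CL) ** 2 - 0.5 * (eta[1] / OMEGA_V) ** 2
    return -(loglik + logprior)

def map_estimate(times, obs, dose):
    """MAP estimate of the individual clearance and volume."""
    res = minimize(neg_log_posterior, x0=[0.0, 0.0], args=(times, obs, dose))
    return CL_POP * np.exp(res.x[0]), V_POP * np.exp(res.x[1])

# Two hypothetical concentration measurements after a 100 mg bolus
cl_i, v_i = map_estimate(np.array([2.0, 8.0]), np.array([2.6, 1.4]), dose=100.0)
```

Once the individual parameters are estimated, the same model can be run forward to predict concentrations under candidate dosage regimens, which is the basis of the dosing-adjustment step.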

Relevance:

10.00%

Publisher:

Abstract:

We consider robust parametric procedures for univariate discrete distributions, focusing on the negative binomial model. The procedures are based on three steps: first, a very robust, but possibly inefficient, estimate of the model parameters is computed; second, this initial model is used to identify outliers, which are then removed from the sample; third, a corrected maximum likelihood estimator is computed with the remaining observations. The final estimate inherits the breakdown point (bdp) of the initial one, and its efficiency can be significantly higher. Analogous procedures were proposed in [1], [2], [5] for the continuous case. A comparison of the asymptotic bias of various estimates under point contamination points to the minimum Neyman's chi-squared disparity estimate as a good choice for the initial step. Various minimum disparity estimators were explored by Lindsay [4], who showed that the minimum Neyman's chi-squared estimate has a 50% bdp under point contamination; in addition, it is asymptotically fully efficient at the model. However, the finite-sample efficiency of this estimate under the uncontaminated negative binomial model is usually much lower than 100% and its bias can be strong. We show that its performance can then be greatly improved using the three-step procedure outlined above. In addition, we compare the final estimate with the procedure described in
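The three steps above can be sketched as follows, under simplifying assumptions: the initial step minimizes Neyman's chi-squared disparity over the observed support only, outliers are flagged by their probability under the initial fit (the `cutoff` threshold is an illustrative choice, not from the paper), and the final step is a plain maximum likelihood fit on the cleaned sample.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def neyman_chi2(params, counts, values):
    """Neyman's chi-squared disparity sum((d - m)^2 / d) over observed cells."""
    r, p = params
    d = counts / counts.sum()              # empirical pmf on observed values
    m = stats.nbinom.pmf(values, r, p)     # model pmf at the same values
    return np.sum((d - m) ** 2 / d)

def three_step_estimate(x, cutoff=1e-3):
    values, counts = np.unique(x, return_counts=True)
    bounds = [(1e-3, None), (1e-6, 1 - 1e-6)]
    # Step 1: very robust (possibly inefficient) initial estimate.
    init = minimize(neyman_chi2, x0=[1.0, 0.5],
                    args=(counts, values), bounds=bounds).x
    # Step 2: flag outliers -- observations nearly impossible under the fit.
    keep = stats.nbinom.pmf(x, *init) > cutoff
    # Step 3: maximum likelihood on the cleaned sample.
    nll = lambda params: -np.sum(stats.nbinom.logpmf(x[keep], *params))
    return minimize(nll, x0=init, bounds=bounds).x

# Negative binomial sample with 5% point contamination at an outlying value
rng = np.random.default_rng(0)
sample = np.concatenate([rng.negative_binomial(5, 0.5, 95), np.full(5, 50)])
r_hat, p_hat = three_step_estimate(sample)
```

A full implementation would also need the bias correction of the final MLE mentioned in the text; the sketch keeps only the robust-initialization / trimming / refit skeleton.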

Relevance:

10.00%

Publisher:

Abstract:

"How old is this fingermark?" This question is relatively often raised in trials when a suspect admits having left fingermarks at a crime scene but alleges that the contact occurred at a time different from that of the crime and for legitimate reasons. However, no answer can currently be given to this question, because no fingermark dating methodology has yet been validated and accepted by the whole forensic community. Nevertheless, a review of past American cases highlighted that experts have nonetheless given testimony in court about the age of fingermarks, even though it was mostly based on subjective and poorly documented parameters. It was relatively easy to access fully documented American cases, which explains the origin of the given examples; however, fingermark dating issues are encountered worldwide, and the lack of consensus among the answers given highlights the necessity of research on the subject. The present work thus aims at studying the possibility of developing an objective fingermark dating method. As the questions surrounding the development of dating procedures are not new, different attempts have already been described in the literature. This research proposes a critical review of these attempts and highlights that most of the reported methodologies still suffer from limitations preventing their use in actual practice.
Nevertheless, some approaches based on the evolution over time of intrinsic compounds detected in fingermark residue appear promising. Thus, an exhaustive review of the literature was conducted in order to identify the compounds available in fingermark residue and the analytical techniques capable of analysing them. It was chosen to concentrate on sebaceous compounds analysed using gas chromatography coupled with mass spectrometry (GC/MS) or Fourier transform infrared spectroscopy (FTIR). GC/MS analyses were conducted in order to characterize the initial variability of target lipids among fresh fingermarks of the same donor (intra-variability) and between fingermarks of different donors (inter-variability). As a result, many molecules were identified and quantified for the first time in fingermark residue. Furthermore, it was determined that the intra-variability of the fingermark residue was significantly lower than the inter-variability, but that both kinds of variability could be reduced using different statistical pre-treatments inspired by the drug profiling field. It was also possible to propose an objective donor classification model allowing donors to be grouped in two main classes based on the initial lipid composition of their fingermarks. These classes correspond to what is rather subjectively called "good" or "bad" donors. The potential of such a model is high for the fingermark research field, as it allows the selection of representative donors based on compounds of interest. Using GC/MS and FTIR, an in-depth study of the effects of different influence factors on the initial composition and aging of target lipid molecules found in fingermark residue was conducted. It was determined that univariate and multivariate models could be built to describe the aging of target compounds (transformed into aging parameters through pre-processing techniques), but that some influence factors affected these models more than others.
In fact, the donor, the substrate and the application of enhancement techniques seemed to hinder the construction of reproducible models. The other factors tested (deposition moment, pressure, temperature and illumination) also affected the residues and their aging, but models combining different values of these factors still proved robust in well-defined situations. Furthermore, test-fingermarks were analysed with GC/MS in order to be dated using some of the generated models. It turned out that correct estimations were obtained for more than 60% of the dated test-fingermarks, and up to 100% when the storage conditions were known. These results are interesting, but further research should be conducted to evaluate whether these models could be used under uncontrolled casework conditions. From a more fundamental perspective, a pilot study was also conducted on the use of infrared spectroscopy combined with chemical imaging (FTIR-CI) in order to gain information about fingermark composition and aging. More precisely, its ability to highlight influence factors and aging effects over large areas of fingermarks was investigated. This information was then compared with that given by individual FTIR spectra. It was concluded that while FTIR-CI is a powerful tool, its use to study natural fingermark residue for forensic purposes has to be carefully considered. In fact, in this study, the technique did not yield more information on residue distribution than traditional FTIR spectra and also suffered from major drawbacks, such as long analysis and processing times, particularly when large fingermark areas need to be covered. Finally, the results obtained in this research allowed the proposition and discussion of a formal and pragmatic framework for approaching fingermark dating questions. It allows identifying which type of information the scientist is currently able to bring to investigators and/or the courts.
Furthermore, the proposed framework also describes the different iterative development steps that research should follow in order to achieve the validation of an objective fingermark dating methodology whose capacities and limits are well known and properly documented.
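Schematically, dating a questioned mark from a univariate aging model amounts to fitting a calibration curve on reference marks of known age and then inverting it. The decay model, the choice of aging parameter and all numbers below are hypothetical placeholders for illustration, not values from this research.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data: an aging parameter (e.g. a lipid ratio)
# measured on reference fingermarks of known age under fixed storage conditions.
ages_days = np.array([0.0, 3.0, 7.0, 14.0, 21.0, 28.0])
ratio = np.array([1.00, 0.78, 0.61, 0.38, 0.25, 0.16])

def decay(t, a, k):
    """First-order decay model for the aging parameter."""
    return a * np.exp(-k * t)

# Fit the calibration curve on the reference marks
(a_hat, k_hat), _ = curve_fit(decay, ages_days, ratio, p0=[1.0, 0.1])

def estimate_age(observed_ratio):
    """Invert the fitted model to date a questioned mark."""
    return -np.log(observed_ratio / a_hat) / k_hat

age = estimate_age(0.50)  # questioned mark with an observed ratio of 0.50
```

As the abstract stresses, such an inversion is only meaningful when the influence factors (substrate, storage conditions, donor class) of the questioned mark match those of the calibration set; a practical method would also attach an uncertainty interval to the estimated age rather than a point value.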