103 results for Model-based Categorical Sequence Clustering


Relevance: 100.00%

Publisher:

Abstract:

Computed tomography (CT) is an imaging technique in which interest has been quickly growing since it began to be used in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major risks remaining is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefits-risk ratio still remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular with follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation due to their faster metabolism.
In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in the quality assessment of the images produced with those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having at one's disposal a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was carried out in close collaboration with radiologists. The work began by characterising image quality in musculo-skeletal examinations. We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analyses of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we turned to mathematical model observers. Our experimental parameters determined the type of model to use.
Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with those of human observers, thus taking advantage of their incorporation of elements of the human visual system. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has clarified the role of medical physicists in CT imaging: standard metrics remain important when assessing a unit's compliance with legal requirements, but model observers are the tool of choice when optimising imaging protocols.
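The task-based approach described above can be illustrated with a minimal sketch of a linear, non-prewhitening (NPW) model observer: a template matched to the expected signal is applied to signal-present and signal-absent images, and the separation of the two response distributions gives a detectability index d'. All image dimensions, signal amplitudes, and noise levels below are hypothetical, chosen only to make the example self-contained.

```python
import math
import random

random.seed(0)

N = 64          # pixels per (flattened) image
TRIALS = 200    # number of signal-present and signal-absent images each
# Hypothetical low-contrast signal profile (a "disc" occupying pixels 24..39)
SIGNAL = [0.8 if 24 <= i < 40 else 0.0 for i in range(N)]
NOISE_SD = 1.0  # white-noise standard deviation (assumed)

def image(with_signal):
    """Generate one noisy image, optionally containing the signal."""
    return [(SIGNAL[i] if with_signal else 0.0) + random.gauss(0.0, NOISE_SD)
            for i in range(N)]

def response(img):
    """NPW observer: decision variable = inner product of template and image."""
    return sum(s * x for s, x in zip(SIGNAL, img))

t_sig = [response(image(True)) for _ in range(TRIALS)]
t_noi = [response(image(False)) for _ in range(TRIALS)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Detectability index: separation of the two response distributions
d_prime = (mean(t_sig) - mean(t_noi)) / math.sqrt(0.5 * (var(t_sig) + var(t_noi)))
print(round(d_prime, 2))
```

For white noise the NPW observer coincides with the ideal linear observer, so d' here should land near the theoretical value ||signal||/sigma (3.2 for these assumed numbers), up to sampling error.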

Relevance: 100.00%

Publisher:

Abstract:

1. Model-based approaches have been used increasingly in conservation biology over recent years. Species presence data used for predictive species distribution modelling are abundant in natural history collections, whereas reliable absence data are sparse, most notably for vagrant species such as butterflies and snakes. As predictive methods such as generalized linear models (GLM) require absence data, various strategies have been proposed to select pseudo-absence data. However, only a few studies exist that compare different approaches to generating these pseudo-absence data. 2. Natural history collection data are usually available for long periods of time (decades or even centuries), thus allowing historical considerations. However, this historical dimension has rarely been assessed in studies of species distribution, although there is great potential for understanding current patterns, i.e. the past is the key to the present. 3. We used GLM to model the distributions of three 'target' butterfly species, Melitaea didyma, Coenonympha tullia and Maculinea teleius, in Switzerland. We developed and compared four strategies for defining pools of pseudo-absence data and applied them to natural history collection data from the last 10, 30 and 100 years. Pools included: (i) sites without target species records; (ii) sites where butterfly species other than the target species were present; (iii) sites without butterfly species but with habitat characteristics similar to those required by the target species; and (iv) a combination of the second and third strategies. Models were evaluated and compared by the total deviance explained, the maximized Kappa and the area under the curve (AUC). 4. Among the four strategies, model performance was best for strategy 3. Contrary to expectations, strategy 2 resulted in even lower model performance compared with models with pseudo-absence data simulated totally at random (strategy 1). 5. 
Independent of the strategy, model performance was enhanced when sites with historical species presence data were not considered as pseudo-absence data. Therefore, the combination of strategy 3 with species records from the last 100 years achieved the highest model performance. 6. Synthesis and applications. The protection of suitable habitat for species survival or reintroduction in rapidly changing landscapes is a high priority among conservationists. Model-based approaches offer planning authorities the possibility of delimiting priority areas for species detection or habitat protection. The performance of these models can be enhanced by fitting them with pseudo-absence data relying on large archives of natural history collection species presence data rather than using randomly sampled pseudo-absence data.
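The four pseudo-absence strategies can be sketched as simple filters over a site database. The site records and species names below are hypothetical; only the selection logic follows the four strategies described in the abstract.

```python
# Hypothetical site records: recorded butterfly species and whether the site has
# habitat characteristics similar to those required by the target species.
sites = {
    "s1": {"species": {"Melitaea didyma"}, "suitable": True},
    "s2": {"species": {"Pieris rapae"}, "suitable": False},
    "s3": {"species": set(), "suitable": True},
    "s4": {"species": {"Lycaena phlaeas"}, "suitable": True},
    "s5": {"species": set(), "suitable": False},
}
TARGET = "Melitaea didyma"

# Candidate pseudo-absences must lack a target-species record
no_target = {s for s, r in sites.items() if TARGET not in r["species"]}

# Strategy 1: any site without a target record (random pool)
pool1 = no_target
# Strategy 2: sites where butterfly species other than the target were recorded
pool2 = {s for s in no_target if sites[s]["species"]}
# Strategy 3: sites without butterfly records but with suitable habitat
pool3 = {s for s in no_target if not sites[s]["species"] and sites[s]["suitable"]}
# Strategy 4: combination of strategies 2 and 3
pool4 = pool2 | pool3

print(sorted(pool1), sorted(pool2), sorted(pool3), sorted(pool4))
```

In the study, pseudo-absences would then be drawn from the chosen pool (strategy 3 performed best) and passed to a GLM together with the presence records.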

Relevance: 100.00%

Publisher:

Abstract:

This work focused on developing a methodology for exploiting the chemical characteristics of tire traces to help answer the following question: "Is the offending tire at the origin of the trace found at the crime scene?". The methodology spans from trace sampling on the road to the statistical analysis of the trace's chemical characteristics. Knowledge about the composition and manufacture of tire treads, together with a review of the instrumental techniques used for the analysis of polymeric materials, led to the selection of pyrolysis coupled to a gas chromatograph with a mass spectrometry detector (Py-GC/MS) as the analytical technique for this research. An analytical method was developed and optimized to obtain the lowest variability between replicates of the same sample. The variability of the tread was evaluated across its width and circumference using several samples taken from twelve tires of different brands and/or models. The variability within each tread (within-variability) and between treads (between-variability) could thus be quantified. Different statistical methods showed that within-variability is lower than between-variability, which made it possible to differentiate these tires. Ten tire traces were produced by braking tests with tires of different brands and/or models. These traces were adequately sampled using sheets of gelatine. Particles of each trace were analysed using the same methodology as for the tires at their origin. The general chemical profile of a trace or of a tire was characterized by eighty-six compounds. Based on a statistical comparison of the chemical profiles obtained, it was shown that a tire trace is not differentiable from the tire at its origin but is generally differentiable from tires that are not at its origin. Thereafter, a sample of sixty tires was analysed to assess the discrimination potential of the developed methodology.
The statistical results showed that most tires of different brands and models are differentiable; the developed methodology thus offers good discrimination potential. However, tires of the same brand and model with identical characteristics, such as country of manufacture, size and DOT number, are not differentiable. A model based on a likelihood ratio approach was chosen to evaluate the results of the comparisons between the chemical profiles of the traces and tires. The methodology was finally blind-tested using three simulated scenarios. Each scenario involved a trace of an unknown tire as well as two tires possibly at its origin. The correct results obtained for the three scenarios validated the developed methodology. The different steps of this work made it possible to collect the information required to test and validate the underlying assumption that a statistical comparison of chemical profiles can help determine whether an offending tire is or is not at the origin of a trace. This aid was formalized by a measure of the probative value of the evidence, represented here by the chemical profile of the tire trace.
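The likelihood ratio evaluation mentioned above can be illustrated in its simplest univariate form: the value observed in the trace is weighed under a within-source distribution (replicate analyses of the suspect tire) against a between-source distribution (the background population of tires). All numbers below are hypothetical; real casework would use the full multivariate chemical profile.

```python
import math

def normal_pdf(x, mu, sd):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

# Hypothetical relative abundance of one pyrolysis product measured in the trace
trace_value = 10.4

# Hypothetical within-source distribution (replicates of the suspect tire)
mu_tire, sd_within = 10.1, 0.4
# Hypothetical between-source distribution (background population of tires)
mu_pop, sd_between = 8.0, 2.0

# LR = p(evidence | trace comes from the suspect tire)
#    / p(evidence | trace comes from some other tire)
lr = normal_pdf(trace_value, mu_tire, sd_within) / normal_pdf(trace_value, mu_pop, sd_between)
print(round(lr, 1))
```

An LR above 1 supports the same-source proposition; below 1, the different-source proposition. Here the trace value sits close to the suspect tire's replicates and in the tail of the background distribution, so the LR comes out well above 1.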

Relevance: 100.00%

Publisher:

Abstract:

INTRODUCTION: Therapeutic hypothermia (TH) is often used to treat out-of-hospital cardiac arrest (OHCA) patients, who also often simultaneously receive insulin for stress-induced hyperglycaemia. However, the impact of TH on systemic metabolism and insulin resistance in critical illness is unknown. This study analyses the impact of TH on metabolism, including the evolution of insulin sensitivity (SI) and its variability, in patients with coma after OHCA. METHODS: This study uses a clinically validated, model-based measure of SI. Insulin sensitivity was identified hourly using retrospective data from 200 post-cardiac arrest patients (8,522 hours) treated with TH, shortly after admission to the intensive care unit (ICU). Blood glucose and body temperature readings were taken every one to two hours. Data were divided into three periods: 1) cool (T <35°C); 2) an idle period of two hours as normothermia was re-established; and 3) warm (T >37°C). A maximum of 24 hours each for the cool and warm periods was considered. The impact of each condition on SI was analysed per cohort and per patient for both level and hour-to-hour variability, between periods and in six-hour blocks. RESULTS: Cohort and per-patient median SI levels increased consistently by 35% to 70% and 26% to 59% (P <0.001), respectively, from cool to warm. Conversely, cohort and per-patient SI variability decreased by 11.1% to 33.6% (P <0.001) for the first 12 hours of treatment. However, SI variability increased between the 18th and 30th hours over the cool-to-warm transition, before continuing to decrease afterward. CONCLUSIONS: OHCA patients treated with TH have significantly lower and more variable SI during the cool period compared to the later warm period. As treatment continues, SI level rises and variability decreases consistently, except for a large, significant increase during the cool-to-warm transition. These results demonstrate increased resistance to insulin during mild induced hypothermia.
Our study might have important implications for glycaemic control during targeted temperature management.

Relevance: 100.00%

Publisher:

Abstract:

South Peak is a 7-Mm3 potentially unstable rock mass located adjacent to the 1903 Frank Slide on Turtle Mountain, Alberta. This paper presents three-dimensional numerical rock slope stability models and compares them with a previous conceptual slope instability model based on discontinuity surfaces identified using an airborne LiDAR digital elevation model (DEM). Rock mass conditions at South Peak are described using the Geological Strength Index and point load tests, whilst the mean discontinuity set orientations and characteristics are based on approximately 500 field measurements. A kinematic analysis was first conducted to evaluate probable simple discontinuity-controlled failure modes. The potential for wedge failure was further assessed by considering the orientation of wedge intersections over the airborne LiDAR DEM and through a limit equilibrium combination analysis. Block theory was used to evaluate the finiteness and removability of blocks in the rock mass. Finally, the complex interaction between discontinuity sets and the topography within South Peak was investigated through three-dimensional distinct element models using the code 3DEC. The influence of individual discontinuity sets, scale effects, friction angle and the persistence along the discontinuity surfaces on the slope stability conditions were all investigated using this code.
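A basic building block of the kinematic and wedge analyses mentioned above is the line of intersection of two discontinuity planes, obtained from the cross product of their unit normals; the plunge of that line relative to the slope face governs wedge sliding. A sketch with hypothetical dip/dip-direction pairs (not the South Peak measurements):

```python
import math

def normal_vector(dip_deg, dipdir_deg):
    """Unit normal of a plane from dip and dip direction (x = east, y = north, z = up)."""
    dip, dd = math.radians(dip_deg), math.radians(dipdir_deg)
    return (math.sin(dip) * math.sin(dd), math.sin(dip) * math.cos(dd), math.cos(dip))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def intersection_trend_plunge(plane_a, plane_b):
    """Trend/plunge of the line of intersection of two planes (dip, dip direction)."""
    ix, iy, iz = cross(normal_vector(*plane_a), normal_vector(*plane_b))
    if iz > 0:  # take the downward-directed end of the line
        ix, iy, iz = -ix, -iy, -iz
    trend = math.degrees(math.atan2(ix, iy)) % 360.0
    plunge = math.degrees(math.atan2(-iz, math.hypot(ix, iy)))
    return trend, plunge

# Two hypothetical discontinuity sets: (dip, dip direction) in degrees
trend, plunge = intersection_trend_plunge((60.0, 120.0), (50.0, 210.0))
print(round(trend, 1), round(plunge, 1))
```

Geometrically, the intersection line can never plunge more steeply than the shallower of the two planes, which the assertion below exploits as a sanity check.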

Relevance: 100.00%

Publisher:

Abstract:

An African oxalogenic tree, the iroko tree (Milicia excelsa), has the property of enhancing carbonate precipitation in tropical oxisols, where such accumulations are not expected given the acidic conditions of these soils. This uncommon process is linked to the oxalate-carbonate pathway, which increases soil pH through oxalate oxidation. In order to investigate the oxalate-carbonate pathway in the iroko system, fluxes of matter were identified, described, and evaluated from the field scale to the microscopic scale. In the first centimeters of the soil profile, the decay of organic matter releases whewellite crystals, mainly through the action of termites and saprophytic fungi. In addition, a concomitant flux of carbonate formed in wood tissues contributes to the carbonate flux and is identified as a direct consequence of wood feeding by termites. Nevertheless, calcite biomineralization of the tree is not a consequence of in situ oxalate consumption, but rather related to oxalate oxidation in the upper part of the soil. The consequence of this oxidation is the presence of carbonate ions in the soil solution pumped through the roots, leading to preferential mineralization of the roots and the trunk base. An ideal scenario for iroko biomineralization and soil carbonate accumulation starts with oxalatization: as the iroko tree grows, the organic matter flux to the soil constitutes the litter, and an oxalate pool is formed on the forest ground. Then, wood-rotting agents (mainly termites, saprophytic fungi, and bacteria) release significant amounts of oxalate crystals from decaying plant tissues. In addition, some of these agents are themselves producers of oxalate (e.g. fungi). Both processes contribute to a soil pool of "available" oxalate crystals. Oxalate consumption by oxalotrophic bacteria can then start. Carbonate and calcium ions present in the soil solution represent the end products of the oxalate-carbonate pathway.
The solution is pumped through the roots, leading to carbonate precipitation. The main pools of carbon are clearly identified as the organic matter (the tree and its organic products), the oxalate crystals, and the various carbonate features. A functional model based on field observations and diagenetic investigations with δ13C signatures of the various compartments involved in the local carbon cycle is proposed. It suggests that the iroko ecosystem can act as a long-term carbon sink, as long as the calcium source is related to non-carbonate rocks. Consequently, this carbon sink, driven by the oxalate carbonate pathway around an iroko tree, constitutes a true carbon trapping ecosystem as defined by ecological theory.

Relevance: 100.00%

Publisher:

Abstract:

Recently, Revil & Florsch proposed a novel mechanistic model, based on the polarization of the Stern layer, that relates the permeability of granular media to their spectral induced polarization (SIP) characteristics through the formation of polarized cells around individual grains. To explore the practical validity of this model, we compare it to pertinent laboratory measurements on samples of quartz sands with a wide range of granulometric characteristics. In particular, we measure the hydraulic and SIP characteristics of all samples in both their loose, non-compacted and compacted states, which might allow for the detection of polarization processes that are independent of the grain size. We first verify the underlying grain size/permeability relationship upon which the model of Revil & Florsch is based and then proceed to compare the observed and predicted permeability values for our samples by substituting the grain size characteristics with corresponding SIP parameters, notably the so-called Cole-Cole time constant. In doing so, we also assess the quantitative impact of an observed shift in the Cole-Cole time constant related to textural variations in the samples and observe that changes related to the compaction of the samples are not relevant for the corresponding permeability predictions. We find that the proposed model does indeed provide an adequate prediction of the overall trend of the observed permeability values, but underestimates their actual values by approximately one order of magnitude. This discrepancy in turn points to the potential importance of phenomena that are currently not accounted for in the model and that tend to reduce the characteristic size of the prevailing polarization cells, such as membrane polarization, contacts between the double layers of neighbouring grains, and incorrect estimation of the size of the polarized cells due to the irregularity of natural sand grains.
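The chain of reasoning behind such SIP-based permeability predictions can be sketched in two steps: in grain-polarization models of the Schwarz type, the relaxation time scales with the squared grain radius (tau ≈ a²/2D), so a measured Cole-Cole time constant yields an effective grain diameter, which a Kozeny-Carman-type relation then converts to permeability. The diffusion coefficient, time constant, and porosity below are illustrative assumptions, not the parameters or data of the study.

```python
import math

# Illustrative assumptions (not values from the paper):
D_ion = 1.3e-9   # m^2/s, ion diffusion coefficient in the Stern layer (assumed)
tau = 1.0        # s, Cole-Cole relaxation time constant (hypothetical measurement)
phi = 0.40       # porosity of a loose sand (assumed)

# Schwarz-type scaling: tau ~ a^2 / (2 D)  =>  grain diameter d = 2a
d_grain = 2.0 * math.sqrt(2.0 * D_ion * tau)

# Kozeny-Carman-type estimate relating grain diameter and porosity to permeability
k = (d_grain ** 2 / 180.0) * phi ** 3 / (1.0 - phi) ** 2

print(f"grain diameter = {d_grain * 1e6:.0f} um, permeability = {k:.2e} m^2")
```

With these assumed numbers, a one-second time constant maps to a fine-sand grain size of roughly 100 micrometres and a permeability on the order of 1e-11 m², illustrating why an error in the effective polarization-cell size propagates quadratically into the permeability prediction.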

Relevance: 100.00%

Publisher:

Abstract:

Objectives: Imatinib has been increasingly proposed for therapeutic drug monitoring (TDM), as trough concentrations (Cmin) correlate with response rates in CML patients. This analysis aimed to evaluate the impact of imatinib exposure on optimal molecular response rates in a large European cohort of patients followed by centralized TDM. Methods: Sequential PK/PD analysis was performed in NONMEM 7 on 2230 plasma (PK) samples obtained along with molecular response (PD) data from 1299 CML patients. Model-based individual Bayesian estimates of exposure, parameterized as dose-adjusted, log-normalized Cmin (log-Cmin) or as clearance (CL), were investigated as potential predictors of optimal molecular response, while accounting for time under treatment (stratified at 3 years), gender, CML phase, age, potentially interacting comedication, and TDM frequency. The PK/PD analysis used mixed-effect logistic regression (iterative two-stage method) to account for intra-patient correlation. Results: In univariate analyses, CL, log-Cmin, time under treatment, TDM frequency, gender (all p<0.01) and CML phase (p=0.02) were significant predictors of the outcome. In multivariate analyses, all but log-Cmin remained significant (p<0.05). Our model estimates a 54.1% probability of optimal molecular response in a female patient with a median CL of 14.4 L/h, increasing by 4.7% with a 35% decrease in CL (10th percentile of the CL distribution) and decreasing by 6% with a 45% increase in CL (90th percentile), respectively. Male patients were less likely than female patients to be in optimal response (odds ratio: 0.62, p<0.001), with an estimated probability of 42.3%. Conclusions: Beyond CML phase and time on treatment, expectedly correlated with the outcome, an effect of initial imatinib exposure on the probability of achieving optimal molecular response was confirmed under field conditions by this multivariate analysis.
Interestingly, male patients had a higher risk of suboptimal response, which might not exclusively derive from their 18.5% higher CL, but also from reported lower adherence to the treatment. A prospective longitudinal study would be desirable to confirm the clinical importance of identified covariates and to exclude biases possibly affecting this observational survey.
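The reported male response probability can be checked against the female probability and the odds ratio with a short back-of-the-envelope conversion (odds = p / (1 - p)):

```python
# Reported figures: 54.1% response probability for female patients and a
# male/female odds ratio of 0.62.
p_female = 0.541
or_male = 0.62

odds_female = p_female / (1.0 - p_female)   # convert probability to odds
odds_male = odds_female * or_male           # apply the odds ratio
p_male = odds_male / (1.0 + odds_male)      # convert back to a probability

print(f"{p_male:.1%}")  # close to the reported 42.3%
```

The agreement (to within rounding) confirms that the stated probability and odds ratio are internally consistent.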

Relevance: 100.00%

Publisher:

Abstract:

OBJECTIVES: Darunavir is a protease inhibitor that is administered with low-dose ritonavir to enhance its bioavailability. It is prescribed at standard dosage regimens of 600/100 mg twice daily in treatment-experienced patients and 800/100 mg once daily in naive patients. A population pharmacokinetic approach was used to characterize the pharmacokinetics of both drugs and their interaction in a cohort of unselected patients and to compare darunavir exposure expected under alternative dosage regimens. METHODS: The study population included 105 HIV-infected individuals who provided darunavir and ritonavir plasma concentrations. Firstly, a population pharmacokinetic analysis for darunavir and ritonavir was conducted, with inclusion of patients' demographic, clinical and genetic characteristics as potential covariates (NONMEM(®)). Then, the interaction between darunavir and ritonavir was studied while incorporating levels of both drugs into different inhibitory models. Finally, model-based simulations were performed to compare trough concentrations (Cmin) between the recommended dosage regimen and alternative combinations of darunavir and ritonavir. RESULTS: A one-compartment model with first-order absorption adequately characterized darunavir and ritonavir pharmacokinetics. The between-subject variability in both compounds was important [coefficient of variation (CV%) 34% and 47% for darunavir and ritonavir clearance, respectively]. Lopinavir and ritonavir exposure (AUC) affected darunavir clearance, while body weight and darunavir AUC influenced ritonavir elimination. None of the tested genetic variants showed any influence on darunavir or ritonavir pharmacokinetics. The simulations predicted darunavir Cmin much higher than the IC50 thresholds for wild-type and protease inhibitor-resistant HIV-1 strains (55 and 550 ng/mL, respectively) under standard dosing in >98% of experienced and naive patients. 
Alternative regimens of darunavir/ritonavir 1200/100 or 1200/200 mg once daily also had predicted adequate Cmin (>550 ng/mL) in 84% and 93% of patients, respectively. Reduction of darunavir/ritonavir dosage to 600/50 mg twice daily led to a 23% reduction in average Cmin, still with only 3.8% of patients having concentrations below the IC50 for resistant strains. CONCLUSIONS: The important variability in darunavir and ritonavir pharmacokinetics is poorly explained by clinical covariates and genetic influences. In experienced patients, treatment simplification strategies guided by drug level measurements and adherence monitoring could be proposed.
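Model-based simulations of Cmin like those above rest on the standard steady-state solution of a one-compartment model with first-order absorption, evaluated at the end of the dosing interval. A sketch with hypothetical parameters (bioavailability folded into the apparent dose and volume), not the study's population estimates:

```python
import math

# Hypothetical once-daily regimen parameters (illustrative only):
D = 800.0    # mg, oral dose
ka = 0.8     # 1/h, first-order absorption rate constant
V = 100.0    # L, apparent volume of distribution
CL = 10.0    # L/h, apparent clearance
tau = 24.0   # h, dosing interval
ke = CL / V  # 1/h, elimination rate constant

def conc_ss(t):
    """Steady-state concentration (mg/L) at time t after a dose,
    for a one-compartment model with first-order absorption."""
    a = math.exp(-ke * t) / (1.0 - math.exp(-ke * tau))
    b = math.exp(-ka * t) / (1.0 - math.exp(-ka * tau))
    return (D * ka) / (V * (ka - ke)) * (a - b)

# Trough concentration just before the next dose, converted to ng/mL
cmin_ng_ml = conc_ss(tau) * 1000.0
print(round(cmin_ng_ml))
```

Under these assumed parameters the predicted trough lies around 900 ng/mL, i.e. above the 550 ng/mL IC50 threshold quoted for protease inhibitor-resistant strains; the study's actual simulations used the individual Bayesian parameter estimates rather than a single parameter set.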


Recently, the introduction of second-generation sequencing and further advancements in confocal microscopy have enabled system-level studies for the functional characterization of genes. The degree of complexity intrinsic to these approaches demands the development of bioinformatics methodologies and computational models for extracting meaningful biological knowledge from the enormous amount of experimental data that is continuously generated. This PhD thesis presents several novel bioinformatics methods and computational models to address specific biological questions in Plant Biology by using the plant Arabidopsis thaliana as a model system. First, a spatio-temporal qualitative analysis of quantitative transcript and protein profiles is applied to show the role of the BREVIS RADIX (BRX) protein in the auxin-cytokinin crosstalk for root meristem growth. The core of this PhD work is the functional characterization of the interplay between the BRX protein and the plant hormone auxin in the root meristem by using a computational model based on experimental evidence. Hypotheses generated by the model led to the discovery of a differential endocytosis pattern in the root meristem that splits the auxin transcriptional response via the plasma membrane to nucleus partitioning of BRX. This positional information system creates an auxin transcriptional pattern that deviates from the canonical auxin response and is necessary to sustain the expression of a subset of BRX-dependent auxin-responsive genes to drive root meristem growth. In the second part of this PhD thesis, we characterized the genome-wide impact of large-scale deletions on four divergent Arabidopsis natural strains, through the integration of ultra-high-throughput sequencing data with data from genomic hybridizations on tiling arrays. Analysis of the identified deletions revealed that a considerable portion of protein-coding genes was affected and supported a history of genomic rearrangements shaped by evolution. 
In the last part of the thesis, we showed that the VIP3 gene in Arabidopsis has an evolutionarily conserved role in the 3' to 5' mRNA degradation machinery, by applying a novel approach for the analysis of mRNA-Seq data from random-primed mRNA. Altogether, this PhD research contains major advances in the study of natural genomic variation in plants and in the application of computational morphodynamics models for the functional characterization of biological pathways essential for the plant. - Recently, the introduction of second-generation sequencing and advances in confocal microscopy have enabled system-level studies for the functional characterization of genes. The degree of complexity intrinsic to these approaches has required the development of bioinformatics methodologies and mathematical models in order to extract meaningful biological information from the mass of experimental data generated. This doctorate presents both original bioinformatics methods and mathematical models to address specific questions in Plant Biology, using the plant Arabidopsis thaliana as a model. First, a qualitative spatio-temporal analysis of quantitative transcript and protein profiles is used to show the role of the BREVIS RADIX (BRX) protein in the crosstalk between auxin and cytokinins, two phytohormones, in root meristem growth. The core of this thesis work is the functional characterization of the interaction between the BRX protein and the phytohormone auxin in the root meristem, using computational models based on experimental evidence. The hypotheses produced by the model led to the discovery of a differential endocytosis pattern in the root meristem that splits the transcriptional response to auxin through the partitioning of BRX from the plasma membrane to the cell nucleus. 
This positional information creates a transcriptional response to auxin that deviates from the canonical auxin response and is necessary to sustain the expression of a subset of BRX-dependent, auxin-responsive genes to drive meristem growth. In the second part of this doctoral thesis, we characterized the genome-wide impact of large-scale deletions in four divergent natural strains of Arabidopsis, through the integration of ultra-high-throughput sequencing with genomic hybridization on DNA tiling arrays. Analysis of the identified deletions revealed that a considerable proportion of coding genes was affected, supporting the idea of a history of genomic rearrangements shaped over the course of evolution. In the last part of this thesis, we showed that the VIP3 gene in Arabidopsis has retained an evolutionarily conserved role in the 3' to 5' mRNA degradation machinery, by applying a new approach to the analysis of mRNA-Seq data from randomly primed transcripts. As a whole, this doctoral research contains major advances in the study of natural genomic variation in plants and in the application of computational morphodynamics models for the characterization of biological networks essential to the plant. - Plant development is written in the genetic code. To understand how plants are able to adapt to environmental changes, it is essential to study how their genes govern their formation. The more we try to understand how a plant works, the more we realize the complexity of the biological mechanisms involved, to the point that the use of mathematical tools and models becomes indispensable. 
In this work, using the model plant Arabidopsis thaliana, we solved specific biological problems through the development and application of concrete computational methods. First, we investigated how the BREVIS RADIX (BRX) gene regulates root development by controlling the response to two hormones: auxin and cytokinin. We employed a statistical analysis of quantitative measurements of transcripts and gene products to demonstrate that BRX plays an antagonizing role in the dialogue between these two hormones. When this molecular dialogue is disturbed, the length of the primary root is dramatically reduced. To understand how BRX responds to auxin, we developed a computational model based on experimental results. Successive simulations led to the discovery of a positional signal that controls the root's response to auxin through the regulation of the intracellular movement of BRX. In the second part of this thesis, we analysed the entire genome of four natural strains of Arabidopsis and found that a large portion of their genes was missing relative to the reference strain. This result indicates that the history of genomic modifications driven by evolution determines a differential availability of functional genes in these plants. In the last part of this work, we analysed transcriptome data from plants in which the VIP3 gene was non-functional. This allowed us to discover the dual role of VIP3 in the regulation of transcription initiation and in the degradation of transcripts, a dual role that had until then been demonstrated only in humans. This doctoral work supports the development and application of computational methodologies as invaluable tools for resolving the complexity of biological problems in plant research. 
The integration of plant biology and computer science is becoming ever more important for the advancement of our knowledge of how plants function and develop.


The unstable rock slope above the village of Flåm shows signs of active and postglacial gravitational deformation over an area of 11 km2. We performed detailed structural field mapping, annual differential Global Navigation Satellite System (GNSS) surveys, and generated a detailed topographic model based on airborne and terrestrial laser scanning. Kinematic analyses of the structural data indicate that deformation is complex and varies over the slope. Both sliding and toppling are locally feasible. Using differential GNSS, 18 points were measured annually over a period of up to 6 years. Two of these points show an average movement of around 10 mm/year. They are located at the frontal cliff on almost completely detached blocks. Large fractures indicate deep-seated gravitational deformation of volumes up to 80 million m3, but the movement rates in these areas are below 2 mm/year. Based upon these combined observations, we interpret that small collapses of blocks along the frontal cliff will be the more frequent events. Larger collapses of free-standing blocks along the cliff with volumes >100,000 m3, thus large enough to reach the fjord, cannot be ruled out. A large collapse involving more than 10 million m3, however, is of very low likelihood at present.


Objectives: Gentamicin is one of the most commonly prescribed antibiotics for suspected or proven infection in newborns. Because of age-associated (pre- and post-natal) changes in body composition and organ function, large interindividual variability in gentamicin drug levels exists, thus requiring close monitoring of this drug due to its narrow therapeutic index. We aimed to investigate clinical and demographic factors influencing gentamicin pharmacokinetics (PK) in a large cohort of unselected newborns and to explore optimal regimens based on simulations. Methods: All gentamicin concentration data from newborns treated at the University Hospital Center of Lausanne between December 2006 and October 2011 were retrieved. Gentamicin concentrations were measured within the frame of a routine therapeutic drug monitoring program, in which 2 concentrations (at 1 h and 12 h) are systematically collected after the first administered dose, and a few additional concentrations are sampled along the treatment course. A population PK analysis was performed by comparing various structural models, and the effect of clinical and demographic factors on gentamicin disposition was explored using NONMEM®. Results: A total of 3039 concentrations collected in 994 preterm (median gestational age 32.3 weeks, range 24.2-36.5 weeks) and 455 term newborns were used in the analysis. Most of the data (86%) were sampled after the first dose (C1h and C12h). A two-compartment model best characterized gentamicin PK. Average clearance (CL) was 0.044 L/h/kg (CV 25%), central volume of distribution (Vc) 0.442 L/kg (CV 18%), intercompartmental clearance (Q) 0.040 L/h/kg and peripheral volume of distribution (Vp) 0.122 L/kg. Body weight, gestational age and postnatal age positively influenced CL. The use of both gestational age and postnatal age predicted CL better than postmenstrual age alone. CL was affected by dopamine and furosemide administration, and non-significantly by indomethacin. 
Body weight, gestational age and dopamine coadministration significantly influenced Vc. Model-based simulations confirm that preterm infants need higher doses, above 4 mg/kg, at extended dosing intervals to achieve adequate concentrations. Conclusions: This study, performed on a very large cohort of neonates, identified important factors influencing gentamicin PK. The model will serve to elaborate a Bayesian tool for dosage individualization based on a single measurement.
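The two-compartment disposition described above can be reproduced numerically. The sketch below integrates the system with simple Euler steps using the per-kilogram estimates quoted in the abstract; the 4.5 mg/kg dose, the 36 h interval and the bolus-input simplification are illustrative assumptions:

```python
def conc_two_cpt(t_h, dose_mg_per_kg, cl=0.044, vc=0.442, q=0.040, vp=0.122,
                 dt=0.005):
    """Concentration (mg/L) t_h hours after an IV bolus for a two-compartment
    model with per-kg parameters (CL, Q in L/h/kg; Vc, Vp in L/kg)."""
    a1, a2 = dose_mg_per_kg, 0.0  # central / peripheral amounts (mg/kg)
    t = 0.0
    while t < t_h:
        out = (q / vc) * a1 * dt    # central -> peripheral transfer
        back = (q / vp) * a2 * dt   # peripheral -> central transfer
        elim = (cl / vc) * a1 * dt  # elimination from the central compartment
        a1 += back - out - elim
        a2 += out - back
        t += dt
    return a1 / vc

peak = conc_two_cpt(1.0, 4.5)     # level about 1 h after a 4.5 mg/kg dose
trough = conc_two_cpt(36.0, 4.5)  # trough of an extended 36 h interval
```

With these parameters the 1 h level sits in the usual aminoglycoside peak range while the 36 h trough falls well below 2 mg/L, which is the qualitative behaviour behind the extended-interval recommendation for preterm infants.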


In the context of the investigation of the use of automated fingerprint identification systems (AFIS) for the evaluation of fingerprint evidence, the current study presents investigations into the variability of scores from an AFIS system when fingermarks from a known donor are compared to fingerprints that are not from the same source. The ultimate goal is to propose a model, based on likelihood ratios, which allows the evaluation of mark-to-print comparisons. In particular, this model, through its use of AFIS technology, benefits from the possibility of using a large amount of data, as well as from an already built-in proximity measure, the AFIS score. More precisely, the numerator of the LR is obtained from scores issued from comparisons between impressions from the same source and showing the same minutia configuration. The denominator of the LR is obtained by extracting scores from comparisons of the questioned mark with a database of non-matching sources. This paper focuses solely on the assignment of the denominator of the LR, which we refer to by the generic term of between-finger variability. The issues addressed in this paper in relation to between-finger variability are the required sample size, the influence of the finger number and general pattern, as well as that of the number of minutiae included and their configuration on a given finger. Results show that reliable estimation of between-finger variability is feasible with 10,000 scores. These scores should come from the appropriate finger number/general pattern combination as defined by the mark. Furthermore, strategies for obtaining between-finger variability when these elements cannot be conclusively determined from the mark (or, for finger number, from its position with respect to other marks) have been presented. These results immediately allow case-by-case estimation of the between-finger variability in an operational setting.
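As a toy illustration of the denominator assignment, the sketch below scores a mark against synthetic non-matching sources and evaluates a kernel density estimate at the observed score. The Gaussian score distribution, the bandwidth and the score values are invented for illustration and are not the AFIS data or the model used in the study:

```python
import math
import random

def kde_density(x, scores, bandwidth):
    """Gaussian kernel density estimate of the score distribution at x."""
    n = len(scores)
    norm = n * bandwidth * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in scores) / norm

random.seed(42)
# 10,000 synthetic scores standing in for comparisons of the questioned mark
# against a database of non-matching sources (the sample size the study found
# sufficient); the distribution itself is invented.
between_finger = [random.gauss(100.0, 15.0) for _ in range(10_000)]

observed_score = 160.0
denominator = kde_density(observed_score, between_finger, bandwidth=5.0)
```

The likelihood ratio is then the within-source (numerator) density at the same score divided by `denominator`; the further the observed score lies in the tail of the between-finger distribution, the smaller the denominator and the larger the LR.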


AIM: This study aims to investigate the clinical and demographic factors influencing gentamicin pharmacokinetics in a large cohort of unselected premature and term newborns and to evaluate optimal regimens in this population. METHODS: All gentamicin concentration data, along with clinical and demographic characteristics, were retrieved from medical charts in a Neonatal Intensive Care Unit over 5 years within the frame of a routine therapeutic drug monitoring programme. Data were described using non-linear mixed-effects regression analysis (NONMEM®). RESULTS: A total of 3039 gentamicin concentrations collected in 994 preterm and 455 term newborns were included in the analysis. A two-compartment model best characterized gentamicin disposition. The average parameter estimates, for a median body weight of 2170 g, were clearance (CL) 0.089 L/h (CV 28%), central volume of distribution (Vc) 0.908 L (CV 18%), intercompartmental clearance (Q) 0.157 L/h and peripheral volume of distribution (Vp) 0.560 L. Body weight, gestational age and post-natal age positively influenced CL. Dopamine co-administration had a significant negative effect on CL, whereas the influence of indomethacin and furosemide was not significant. Both body weight and gestational age significantly influenced Vc. Model-based simulations confirmed that, compared with term neonates, preterm infants need higher doses, above 4 mg/kg, at extended intervals to achieve adequate concentrations. CONCLUSIONS: This observational study conducted in a large cohort of newborns confirms the importance of body weight and gestational age for dosage adjustment. The model will serve to set up dosing recommendations and elaborate a Bayesian tool for dosage individualization based on concentration monitoring.
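The Bayesian individualization mentioned in the conclusions can be sketched as a maximum a posteriori (MAP) fit of individual parameters to a single measured level. The sketch below collapses the model to one compartment with an IV-bolus input and uses assumed residual-error and grid settings, so it illustrates the principle rather than the actual tool:

```python
import math

# Population values for a median 2170 g newborn, taken from the abstract;
# the residual error and the one-compartment simplification are assumptions.
CL_POP, V_POP = 0.089, 0.908   # L/h, L
OM_CL, OM_V = 0.28, 0.18       # between-subject variability (log-normal CV)
SIGMA = 0.15                   # residual error on the log scale (assumed)

def c_pred(dose_mg, t_h, cl, v):
    """Concentration (mg/L) t_h hours after an IV bolus, one-compartment model."""
    return dose_mg / v * math.exp(-cl / v * t_h)

def map_estimate(dose_mg, t_obs, c_obs):
    """Grid-search MAP estimate of (CL, V) from a single measured level."""
    best, best_obj = (CL_POP, V_POP), float("inf")
    for i in range(-30, 31):
        for j in range(-30, 31):
            cl = CL_POP * math.exp(i * 0.02)
            v = V_POP * math.exp(j * 0.02)
            # -2 log posterior: data misfit plus deviation from the priors
            obj = ((math.log(c_obs) - math.log(c_pred(dose_mg, t_obs, cl, v)))
                   / SIGMA) ** 2
            obj += (math.log(cl / CL_POP) / OM_CL) ** 2
            obj += (math.log(v / V_POP) / OM_V) ** 2
            if obj < best_obj:
                best, best_obj = (cl, v), obj
    return best

# A 9 mg dose (about 4 mg/kg) with a 2.5 mg/L level measured 12 h later:
cl_i, v_i = map_estimate(dose_mg=9.0, t_obs=12.0, c_obs=2.5)
```

The individualized pair (cl_i, v_i) can then be used to simulate candidate regimens for that neonate; the priors shrink the estimates toward the population values, which is what makes a single sample informative.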


OBJECTIVES: Human papillomavirus (HPV) is a sexually transmitted infection of particular interest because of its high prevalence rate and strong causal association with cervical cancer. Two prophylactic vaccines have been developed, and different countries have made or will soon make recommendations for the vaccination of girls. Even if there is a consensus to recommend vaccination before the beginning of sexual activity, there are, however, large discrepancies between countries concerning the perceived usefulness of a catch-up procedure and of boosters. The main objective of this article is to simulate the impact of different vaccination policies on the mid- and long-term HPV 16/18 age-specific infection rates. METHODS: We developed an epidemiological model based on the susceptible-infective-recovered approach using Swiss data. The mid- and long-term impact of different vaccination scenarios was then compared. RESULTS: The generalization of a catch-up procedure is always beneficial, whatever its extent. Moreover, depending on the length of the protection offered by the vaccine, boosters may also be very useful. CONCLUSIONS: To be really effective, a vaccination campaign against HPV infection should at least include a catch-up procedure to rapidly achieve a drop in HPV 16/18 prevalence, and possibly boosters. Otherwise, the protection ensured for women in their 20s could be lower than expected, resulting in higher risks of later developing cervical cancer.
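A minimal susceptible-infective-recovered sketch with population turnover reproduces the qualitative argument: coverage achieved at entry into the susceptible pool (vaccination plus catch-up) lowers long-run infection prevalence. All parameter values below are illustrative, not the Swiss-calibrated ones used in the article:

```python
def endemic_prevalence(coverage, beta=1.2, gamma=0.55, mu=1.0 / 35.0,
                       years=100.0, dt=0.01):
    """Euler-integrated SIR model with turnover rate mu (per year); a fraction
    `coverage` of entrants is immunized. Returns infection prevalence at the
    end of the horizon. beta: transmission rate, gamma: clearance rate."""
    s, i = 0.5 * (1.0 - coverage), 0.02  # initial susceptible/infected shares
    t = 0.0
    while t < years:
        ds = (mu * (1.0 - coverage) - beta * s * i - mu * s) * dt
        di = (beta * s * i - (gamma + mu) * i) * dt
        s += ds
        i += di
        t += dt
    return i

prev_no_vacc = endemic_prevalence(0.0)
prev_high_cov = endemic_prevalence(0.7)
```

With these illustrative rates the basic reproduction number is about 2, so 70% effective coverage pushes the infection below its persistence threshold, while without vaccination prevalence settles at a few percent. Waning protection, the case motivating boosters, can be explored by adding a flow from the immune pool back into the susceptibles.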