64 results for interpreting
at Université de Lausanne, Switzerland
Abstract:
Objectives: The study objective was to derive reference pharmacokinetic (PK) curves of antiretroviral drugs (ART) based on available population pharmacokinetic (Pop-PK) studies, which can be used to optimize therapeutic drug monitoring (TDM)-guided dosage adjustment. Methods: A systematic search of Pop-PK studies of 8 ART in adults was performed in PubMed. To simulate reference PK curves, a summary of the PK parameters was obtained for each drug using a meta-analysis approach. Most models used a one-compartment model, which was therefore chosen as the reference model. Models with bi-exponential disposition were simplified to one compartment, since the first distribution phase was rapid and not determinant for the description of the terminal elimination phase, which is most relevant for this project. Different absorption models were standardized to first-order absorption processes. Apparent clearance (CL), apparent volume of distribution of the terminal phase (Vz), the absorption rate constant (ka), and inter-individual variability were pooled into summary mean values weighted by the number of plasma levels; intra-individual variability was weighted by the number of individuals in each study. Simulations based on the summary PK parameters served to construct concentration percentile curves (NONMEM®). Concordance between individual and summary parameters was assessed graphically using forest plots. To test robustness, the difference between simulated curves based on published and on summary parameters was calculated using efavirenz as a probe drug. Results: CL was readily accessible from all studies. For one-compartment studies, Vz was the central volume of distribution; for two-compartment studies, Vz was CL/λz. ka was used directly or, for more complex absorption models, derived from the mean absorption time (MAT), assuming MAT = 1/ka. The value of CL for each drug was in excellent agreement across all Pop-PK models, suggesting that the minimal concentration derived from the summary models was adequately characterized. The comparison of the concentration vs. time profiles for efavirenz between published and summary PK parameters revealed no more than a 20% difference. Although our approach appears adequate for estimating the elimination phase, the simplification of the absorption phase might lead to a small bias shortly after drug intake. Conclusions: Simulated reference percentile curves based on such an approach represent a useful tool for interpreting drug concentrations. This Pop-PK meta-analysis approach should be further validated and could be extended to elaborate a more sophisticated computerized tool for the Bayesian TDM of ART.
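As an illustration of the simulation step described above, the sketch below draws individual parameters around summary values for a one-compartment model with first-order absorption and builds concentration percentile bands. The drug, dose, and parameter values are hypothetical placeholders rather than the study's pooled estimates, and NumPy stands in for NONMEM®.

```python
import numpy as np

def simulate_reference_percentiles(dose, cl, vz, ka, omega_cl, omega_v, omega_ka,
                                   n_subjects=1000, t_end=24.0, n_times=97, seed=0):
    """Build concentration-time percentile bands for a one-compartment model
    with first-order absorption, drawing individual parameters around the
    summary (typical) values with log-normal inter-individual variability."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, t_end, n_times)

    # Individual parameters: typical value * exp(eta), with eta ~ N(0, omega^2)
    cl_i = cl * np.exp(rng.normal(0.0, omega_cl, n_subjects))
    v_i = vz * np.exp(rng.normal(0.0, omega_v, n_subjects))
    ka_i = ka * np.exp(rng.normal(0.0, omega_ka, n_subjects))
    ke_i = cl_i / v_i  # terminal elimination rate constant

    # C(t) = Dose*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), assuming F = 1
    conc = (dose * ka_i[:, None] / (v_i[:, None] * (ka_i[:, None] - ke_i[:, None]))
            * (np.exp(-ke_i[:, None] * t) - np.exp(-ka_i[:, None] * t)))

    return t, np.percentile(conc, [2.5, 50, 97.5], axis=0)

# Hypothetical efavirenz-like values for a 600 mg dose; NOT the pooled summary
# estimates of the study.
t, (p_low, p_median, p_high) = simulate_reference_percentiles(
    dose=600, cl=9.4, vz=250.0, ka=0.6, omega_cl=0.3, omega_v=0.3, omega_ka=0.5)
```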
Abstract:
Estimating the time since discharge of a spent cartridge or a firearm can be useful in criminal situations involving firearms. The analysis of volatile gunshot residue remaining after shooting using solid-phase microextraction (SPME) followed by gas chromatography (GC) was proposed to meet this objective. However, current interpretative models suffer from several conceptual drawbacks which render them inadequate to assess the evidential value of a given measurement. This paper aims to fill this gap by proposing a logical approach based on the assessment of likelihood ratios. A probabilistic model was thus developed and applied to a hypothetical scenario where alternative hypotheses about the discharge time of a spent cartridge found at a crime scene were put forward. In order to estimate the parameters required to implement this solution, a non-linear regression model was proposed and applied to real published data. The proposed approach proved to be a valuable method for interpreting aging-related data.
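A minimal sketch of how such a likelihood ratio could be evaluated is shown below, assuming an exponential decay model for the measured residue signal with Gaussian measurement error. The function name, decay parameters, and numerical values are hypothetical; in practice they would come from the fitted non-linear regression.

```python
import numpy as np
from scipy.stats import norm

def likelihood_ratio(y_obs, t_p, t_d, a, b, sigma):
    """Likelihood ratio for a measured residue signal y_obs under two competing
    propositions about the time since discharge, assuming the signal decays as
    y = a * exp(-b * t) with Gaussian measurement error of standard deviation sigma."""
    mu_p = a * np.exp(-b * t_p)  # expected signal if discharged t_p hours ago (prosecution)
    mu_d = a * np.exp(-b * t_d)  # expected signal if discharged t_d hours ago (defence)
    return norm.pdf(y_obs, loc=mu_p, scale=sigma) / norm.pdf(y_obs, loc=mu_d, scale=sigma)

# Hypothetical values: a, b and sigma would be estimated from published aging data.
lr = likelihood_ratio(y_obs=5.5, t_p=6.0, t_d=48.0, a=10.0, b=0.08, sigma=0.8)
print(f"LR = {lr:.1f}")  # LR >> 1 here supports the shorter-time proposition
```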
Abstract:
Except for the first 2 years since July 29, 1968, Arenal volcano has continuously erupted compositionally monotonous and phenocryst-rich (~35%) basaltic andesites composed of plagioclase (plag), orthopyroxene (opx), clinopyroxene (cpx), spinel, and olivine. Detailed textural and compositional analyses of phenocrysts, mineral inclusions, and microlites reveal comparable complexities in any given sample and identify mineral components that require a minimum of four crystallization environments. We suggest that three distinct crystallization environments crystallized low-Mg# (<78) silicate phases from andesitic magma but at different physical conditions, such as variable pressure of crystallization and water conditions. The dominant environment, i.e., the one which accounts for the majority of minerals and overprinted all other assemblages near the rims of phenocrysts, cocrystallized clinopyroxene (Mg# ~71-78), orthopyroxene (Mg# ~71-78), titanomagnetite, and plagioclase (An60 to An85). The second environment cocrystallized clinopyroxene (Mg# 71-78), olivine (<Fo78), titanomagnetite, and very high-An (~90) plagioclase, while the third cocrystallized clinopyroxene (Mg# 71-78) with high (>7) Al/Ti and high (>4 wt.%) Al2O3, titanomagnetite with considerable Al2O3 (10-18 wt.%), and possibly olivine, but appears to lack plagioclase. A fourth crystallization environment is characterized by clinopyroxene (e.g., Mg# ~78-85; Cr2O3 = 0.15-0.7 wt.%), Al-, Cr-rich spinel, olivine (~Fo80), and in some circumstances high-An (>80) plagioclase. This assemblage seems to record mafic inputs into the Arenal system and crystallization at high to low pressures. Single crystals cannot be completely classified as xenocrysts, antecrysts (cognate crystals), or phenocrysts, because they often contain different parts, each representing a different crystallization environment, and thus belong to different categories. Bulk compositions are mostly too mafic to have crystallized the bulk of ferromagnesian minerals and thus likely do not represent liquid compositions. Rather, they are the cumulative products of multiple mixing events assembling melts and minerals from a variety of sources. The driving force for this multistage mixing evolution to generate erupting basaltic andesites is thought to be the ascent of mafic magma from lower crustal levels to subvolcanic depths, during which the magma may also undergo compositional modification by fractionation and assimilation of country rocks. Thus, mafic magmas become basaltic andesite through mixing, fractionation, and assimilation by the time they arrive at subvolcanic depths. We infer that new increments of basaltic andesite are supplied nearly continuously to the subvolcanic reservoir concurrently with the current eruption and that these new increments are blended into the residing subvolcanic magma. Thus, the compositional monotony is mostly the product of the repetitious production of very similar basaltic andesite. Furthermore, we propose that this quasi-constant supply of small increments of magma is the fundamental cause of small-scale, decade-long continuous volcanic activity; that is, the current eruption of Arenal is flux-controlled by inputs of mantle magmas.
Abstract:
The Breast International Group (BIG) 1-98 study is a four-arm trial comparing 5 years of monotherapy with tamoxifen or with letrozole, or sequences of 2 years of one agent followed by 3 years of the other, for postmenopausal women with endocrine-responsive early invasive breast cancer. From 1998 to 2003, BIG 1-98 enrolled 8,010 women. The enhanced design of the trial enabled two complementary analyses of efficacy and safety. Collection of tumor specimens further enabled treatment comparisons based on tumor biology. Reports of BIG 1-98 should be interpreted in relation to each individual patient as she weighs the costs and benefits of available treatments. Clinicaltrials.gov ID: NCT00004205.
Abstract:
Background. Accurate quantification of the prevalence of human immunodeficiency virus type 1 (HIV-1) drug resistance in patients who are receiving antiretroviral therapy (ART) is difficult, and results from previous studies vary. We attempted to assess the prevalence and dynamics of resistance in a highly representative patient cohort from Switzerland. Methods. On the basis of genotypic resistance test results and clinical data, we grouped patients according to their risk of harboring resistant viruses. Estimates of resistance prevalence were calculated on the basis of either the proportion of individuals with a virologic failure or confirmed drug resistance (lower estimate) or the frequency-weighted average of risk group-specific probabilities for the presence of drug resistance mutations (upper estimate). Results. Lower and upper estimates of drug resistance prevalence in 8064 ART-exposed patients were 50% and 57% in 1999 and 37% and 45% in 2007, respectively. This decrease was driven by 2 mechanisms: loss to follow-up or death of high-risk patients exposed to mono- or dual-nucleoside reverse-transcriptase inhibitor therapy (lower estimates range from 72% to 75%) and continued enrollment of low-risk patients who were taking combination ART containing boosted protease inhibitors or nonnucleoside reverse-transcriptase inhibitors as first-line therapy (lower estimates range from 7% to 12%). A subset of 4184 participants (52%) had 1 study visit per year during 2002-2007. In this subset, lower and upper estimates increased from 45% to 49% and from 52% to 55%, respectively. Yearly increases in prevalence were becoming smaller in later years. Conclusions. Contrary to earlier predictions, in situations of free access to drugs, close monitoring, and rapid introduction of new potent therapies, the emergence of drug-resistant viruses can be minimized at the population level. Moreover, this study demonstrates the necessity of interpreting time trends in the context of evolving cohort populations.
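As a sketch of how the two bounds described in the Methods could be computed, the snippet below takes a set of risk groups and returns a lower estimate (confirmed resistance or virologic failure only) and an upper estimate (frequency-weighted average of risk-group-specific probabilities). Group sizes and probabilities are invented for illustration and are not Swiss cohort data.

```python
def resistance_prevalence(groups):
    """Lower and upper bounds on drug-resistance prevalence.

    Each group is a tuple (n_patients, n_confirmed, p_resistance):
    n_confirmed  -- patients with virologic failure or confirmed resistance
    p_resistance -- assumed probability that a patient in this risk group
                    harbours drug-resistance mutations
    """
    total = sum(n for n, _, _ in groups)
    lower = sum(k for _, k, _ in groups) / total        # confirmed cases only
    upper = sum(n * p for n, _, p in groups) / total    # frequency-weighted probabilities
    return lower, upper

# Illustrative numbers only.
groups = [
    (1500, 1100, 0.75),  # mono-/dual-NRTI-exposed, high risk
    (4000, 1600, 0.45),  # earlier combination ART
    (2500, 250, 0.10),   # first-line boosted PI or NNRTI, low risk
]
lower, upper = resistance_prevalence(groups)
print(f"lower = {lower:.0%}, upper = {upper:.0%}")
```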
Abstract:
The present thesis is a contribution to the debate on the applicability of mathematics; it examines the interplay between mathematics and the world, using historical case studies. The first part of the thesis consists of four small case studies. In chapter 1, I criticize "ante rem structuralism", proposed by Stewart Shapiro, by showing that his so-called "finite cardinal structures" are in conflict with mathematical practice. In chapter 2, I discuss Leonhard Euler's solution to the Königsberg bridges problem. I propose interpreting Euler's solution both as an explanation within mathematics and as a scientific explanation. I put the insights from the historical case to work against recent philosophical accounts of the Königsberg case. In chapter 3, I analyze the predator-prey model, proposed by Lotka and Volterra. I extract some interesting philosophical lessons from Volterra's original account of the model, such as: Volterra's remarks on mathematical methodology; the relation between mathematics and idealization in the construction of the model; some relevant details in the derivation of the Third Law; and notions of intervention that are motivated by one of Volterra's main mathematical tools, phase spaces. In chapter 4, I discuss scientific and mathematical attempts to explain the structure of the bee's honeycomb. In the first part, I discuss a candidate explanation, based on the mathematical Honeycomb Conjecture, presented in Lyon and Colyvan (2008). I argue that this explanation is not scientifically adequate. In the second part, I discuss other mathematical, physical and biological studies that could contribute to an explanation of the bee's honeycomb. The upshot is that most of the relevant mathematics is not yet sufficiently understood, and there is also an ongoing debate as to the biological details of the construction of the bee's honeycomb. The second part of the thesis is a bigger case study from physics: the genesis of general relativity (GR). Chapter 5 is a short introduction to the history, physics and mathematics that are relevant to the genesis of GR. Chapter 6 discusses the historical question as to what Marcel Grossmann contributed to the genesis of GR. I examine the so-called "Entwurf" paper, an important joint publication by Einstein and Grossmann, containing the first tensorial formulation of GR. By comparing Grossmann's part with the mathematical theories he used, we can gain a better understanding of what is involved in the first steps of assimilating a mathematical theory to a physical question. In chapter 7, I introduce and discuss a recent account of the applicability of mathematics to the world, the Inferential Conception (IC), proposed by Bueno and Colyvan (2011). I give a short exposition of the IC, offer some critical remarks on the account, discuss potential philosophical objections, and propose some extensions of the IC. In chapter 8, I put the IC to work in the historical case study: the genesis of GR. I analyze three historical episodes, using the conceptual apparatus provided by the IC. In episode one, I investigate how the starting point of the application process, the "assumed structure", is chosen. Then I analyze two small application cycles that led to revisions of the initial assumed structure. In episode two, I examine how the application of "new" mathematics - the application of the Absolute Differential Calculus (ADC) to gravitational theory - meshes with the IC.
In episode three, I take a closer look at two of Einstein's failed attempts to find a suitable differential operator for the field equations, and apply the conceptual tools provided by the IC so as to better understand why he erroneously rejected both the Ricci tensor and the November tensor in the Zurich Notebook.
Abstract:
The principal objective of the present work was to explore parent-child relationships and family learning processes associated with anxiety disorders. To this purpose, families with and without an anxious family member (mother or child) were compared. In a first study, observation of mother-child interaction during a standard play situation revealed that mothers with panic disorder were more likely to display verbal control and criticism, and less likely to display sensitivity toward their children, than mothers without panic disorder. A second study examined family members' perceptions of family relationships and indicated that, compared to non-anxious adolescents, anxious adolescents were more prone to experience a diminished sense of individual autonomy in relation to their parents. Finally, a third study aimed to determine the effect of less direct learning experiences in the aetiology of anxiety. Results indicated that mothers with panic disorder were more likely to engage in panic-maintaining behaviour, and to involve their children in this behaviour, than mothers without panic disorder.
Based on previous research showing a relationship between parental control, children's perception of control, and anxiety disorders, the present work not only adds further evidence to support this link but also proposes a model summarizing the current knowledge concerning family processes and the development of anxiety disorders. Two pathways have been suggested through which anxiety may be intergenerationally transmitted. Both pathways assign an important role to children's perception of control. The idea is that whenever children have a predisposition towards interpreting their parents' behaviour as beyond their control, they may be more prone to develop anxiety. As such, perceived control may represent a buffer between parental overcontrolling/overprotective behaviours and childhood anxiety disorder.
Abstract:
Background: There is little information regarding the impact of diet on disease incidence and mortality in Switzerland. We assessed ecologic correlations between food availability and disease. Methods: In this ecologic study for the period 1970-2009, food availability was measured using the food balance sheets of the Food and Agriculture Organization of the United Nations. Standardized mortality rates (SMRs) were obtained from the Swiss Federal Office of Statistics. Cancer incidence data were obtained from the World Health Organization Health For All database and the Vaud Cancer Registry. Associations between food availability and mortality/incidence were assessed at lags 0, 5, 10, and 15 years by multivariate regression adjusted for total caloric intake. Results: Alcoholic beverages and fruit availability were positively associated, and fish availability was inversely associated, with SMRs for cardiovascular diseases. Animal products, meat, and animal fats were positively associated with the SMR for ischemic heart disease only. For cancer, the results of analysis using SMRs and incidence rates were contradictory. Alcoholic beverages and fruits were positively associated with SMRs for all cancer but inversely associated with all-cancer incidence rates. Similar findings were obtained for all other foods except vegetables, which were weakly inversely associated with SMRs and incidence rates. Use of a 15-year lag reversed the associations with animal and vegetal products, weakened the association with alcohol and fruits, and strengthened the association with fish. Conclusions: Ecologic associations between food availability and disease vary considerably on the basis of whether mortality or incidence rates are used in the analysis. Great care is thus necessary when interpreting our results.
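As a rough sketch of the lagged ecologic analysis described in the Methods, the function below regresses a yearly rate on food availability shifted by a given lag, adjusting for total caloric intake. The data frame and column names are assumptions for illustration only, and an ordinary least squares fit from statsmodels stands in for the multivariate regression actually used.

```python
import statsmodels.api as sm

def lagged_association(df, food_col, outcome_col, lag_years, calorie_col="total_kcal"):
    """Regress a yearly mortality or incidence rate on food availability lagged
    by `lag_years`, adjusting for total caloric intake (ecologic time series)."""
    d = df.sort_values("year").copy()
    d["food_lagged"] = d[food_col].shift(lag_years)  # availability lag_years earlier
    d = d.dropna(subset=["food_lagged", outcome_col, calorie_col])
    X = sm.add_constant(d[["food_lagged", calorie_col]])
    fit = sm.OLS(d[outcome_col], X).fit()
    return fit.params["food_lagged"], fit.pvalues["food_lagged"]

# Hypothetical yearly pandas DataFrame covering 1970-2009; all column names are assumptions.
# coef, p = lagged_association(df, "fish_g_per_day", "cvd_smr", lag_years=15)
```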
Abstract:
The progressive development of Alzheimer's disease (AD)-related lesions such as neurofibrillary tangles, amyloid deposits, and synaptic loss within the cerebral cortex is a main event of brain aging. Recent neuropathologic studies strongly suggested that the clinical diagnosis of dementia depends more on the severity and topography of pathologic changes than on the presence of a qualitative marker. However, several methodological problems, such as selection biases, case-control design, density-based measures, and masking effects of concomitant pathologies, should be taken into account when interpreting these data. In recent years, the use of stereologic counting has made it possible to define reliably the cognitive impact of AD lesions in the human brain. Unlike fibrillar amyloid deposits, which are poorly or not related to dementia severity, this method documented that total neurofibrillary tangle and neuron numbers in the CA1 field are the best correlates of cognitive deterioration in brain aging. Loss of dendritic spines in neocortical but not hippocampal areas makes a modest but independent contribution to dementia. In contrast, the importance of early dendritic and axonal tau-related pathologic changes such as neuropil threads remains doubtful. Despite this progress, neuronal pathology and synaptic loss in cases with pure AD pathology cannot explain more than 50% of clinical severity. The present review discusses the complex structure/function relationships in brain aging and AD within the theoretical framework of the functional neuropathology of brain aging.
Abstract:
In vivo imaging of green fluorescent protein (GFP)-labeled neurons in the intact brain is being used increasingly to study neuronal plasticity. However, interpreting the observed changes as modifications in neuronal connectivity requires information about synapses. We show here that axons and dendrites of GFP-labeled neurons previously imaged in the live mouse or in slice preparations using 2-photon laser microscopy can be analyzed using light and electron microscopy, allowing morphological reconstruction of the synapses both on the imaged neurons and on those in the surrounding neuropil. We describe how, over a 2-day period, the imaged tissue is fixed, sliced, and immuno-labeled to localize the neurons of interest. Once embedded in epoxy resin, the entire neuron can then be drawn in three dimensions (3D) for detailed morphological analysis using light microscopy. Specific dendrites and axons can be further serially thin-sectioned, imaged in the electron microscope (EM), and the ultrastructure then analyzed on the serial images.
Abstract:
According to the most widely accepted Cattell-Horn-Carroll (CHC) model of intelligence measurement, each subtest score of the Wechsler Intelligence Scale for Adults (3rd ed.; WAIS-III) should reflect both 1st- and 2nd-order factors (i.e., 4 or 5 broad abilities and 1 general factor). To disentangle the contribution of each factor, we applied a Schmid-Leiman orthogonalization transformation (SLT) to the standardization data published in the French technical manual for the WAIS-III. Results showed that the general factor accounted for 63% of the common variance and that the specific contributions of the 1st-order factors were weak (4.7%-15.9%). We also addressed this issue by using confirmatory factor analysis. Results indicated that the bifactor model (with 1st-order group and general factors) better fit the data than did the traditional higher order structure. Models based on the CHC framework were also tested. Results indicated that a higher order CHC model showed a better fit than did the classical 4-factor model; however, the WAIS bifactor structure was the most adequate. We recommend that users do not discount the Full Scale IQ when interpreting the index scores of the WAIS-III because the general factor accounts for the bulk of the common variance in the French WAIS-III. The 4 index scores cannot be considered to reflect only broad ability because they include a strong contribution of the general factor.
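For readers unfamiliar with the Schmid-Leiman orthogonalization transformation (SLT) mentioned above, the sketch below shows the standard computation from first- and second-order loadings and how the share of common variance attributable to the general factor is obtained. The loading matrices are invented for illustration and are not the French WAIS-III standardization values.

```python
import numpy as np

def schmid_leiman(first_order, second_order):
    """Schmid-Leiman orthogonalization.

    first_order  -- (n_subtests, n_group_factors) loadings on the 1st-order factors
    second_order -- (n_group_factors,) loadings of the 1st-order factors on g
    Returns the subtests' general-factor loadings, their residualized group-factor
    loadings, and the percentage of common variance explained by g and by each
    group factor.
    """
    g2 = np.asarray(second_order, dtype=float)
    general = first_order @ g2                        # loadings on the general factor
    residual = first_order * np.sqrt(1.0 - g2**2)     # residualized group-factor loadings
    var_g = np.sum(general**2)
    var_groups = np.sum(residual**2, axis=0)
    common = var_g + var_groups.sum()
    return general, residual, 100 * var_g / common, 100 * var_groups / common

# Invented loadings for 8 subtests on 4 group factors (not the French WAIS-III values).
first_order = np.array([[0.80, 0.00, 0.00, 0.00], [0.75, 0.00, 0.00, 0.00],
                        [0.00, 0.70, 0.00, 0.00], [0.00, 0.65, 0.00, 0.00],
                        [0.00, 0.00, 0.72, 0.00], [0.00, 0.00, 0.68, 0.00],
                        [0.00, 0.00, 0.00, 0.60], [0.00, 0.00, 0.00, 0.55]])
second_order = np.array([0.85, 0.80, 0.75, 0.70])
g_load, group_load, pct_general, pct_groups = schmid_leiman(first_order, second_order)
print(f"general factor: {pct_general:.1f}% of common variance")
```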