222 results for Nanometric ranges
Abstract:
Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise, but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence the fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining detector noise components as a function of detector exposure.
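The polynomial decomposition described above can be sketched numerically: pixel variance is modelled as a second-order polynomial in DAK, whose constant, linear and quadratic coefficients correspond to electronic, quantum and fixed-pattern noise. The sketch below uses synthetic coefficients and a simple 1/sigma weighting; all values are illustrative, not data from the study.

```python
import numpy as np

# Synthetic example of the second-order polynomial noise decomposition:
# variance = k_e (electronic) + k_q * DAK (quantum) + k_f * DAK**2 (fixed pattern).
# Coefficients and DAK values are illustrative, not measured data.
dak = np.array([6.25, 12.5, 25, 50, 100, 200, 400, 800, 1600])
k_e, k_q, k_f = 2.0, 0.5, 1e-4
variance = k_e + k_q * dak + k_f * dak ** 2

# Weighted second-order fit; weighting by 1/sigma down-weights the
# high-exposure points and stabilises the fit over large DAK ranges.
coeffs = np.polyfit(dak, variance, deg=2, w=1.0 / np.sqrt(variance))
fp_hat, q_hat, e_hat = coeffs  # numpy returns the highest-order term first

# Fractional noise composition at a chosen exposure
d = 100.0
total = e_hat + q_hat * d + fp_hat * d ** 2
fractions = {
    "electronic": e_hat / total,
    "quantum": q_hat * d / total,
    "fixed_pattern": fp_hat * d ** 2 / total,
}
```

With noise-free synthetic data the fit recovers the three coefficients essentially exactly; with real ROI variances the weighting choice matters most at low detector exposures, as the study reports.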
Abstract:
Abstract: The existence of a causal relationship between the spatial distribution of living organisms and their environment, in particular climate, has long been recognized and is the central principle of biogeography. In turn, this recognition has led scientists to the idea of using the climatic, topographic, edaphic and biotic characteristics of the environment to predict its potential suitability for a given species or biological community. In this thesis, my objective is to contribute to the development of methodological improvements in the field of species distribution modeling. More precisely, the objectives are to propose solutions to overcome limitations of species distribution models when applied to conservation biology issues, or when used as an assessment tool for the potential impacts of global change. The first objective of my thesis is to demonstrate the potential of species distribution models for conservation-related applications. I present a methodology to generate pseudo-absences in order to overcome the frequent lack of reliable absence data. I also demonstrate, both theoretically (simulation-based) and practically (field-based), how species distribution models can be successfully used to model and sample rare species. Overall, the results of this first part of the thesis demonstrate the strong potential of species distribution models as a tool for practical applications in conservation biology. The second objective of this thesis is to contribute to improving projections of potential climate change impacts on species distributions, in particular for mountain flora. I develop a dynamic model, MigClim, that allows dispersal limitations to be incorporated into classic species distribution models, and present an application of this model to two virtual species. 
Given that accounting for dispersal limitations requires information on seed dispersal distances, a general methodology to classify species into broad dispersal types is also developed. Finally, the MigClim model is applied to a large number of species in a study area of the western Swiss Alps. Overall, the results indicate that while dispersal limitations can have an important impact on the outcome of future projections of species distributions under climate change scenarios, estimating species threat levels (e.g. species extinction rates) for mountainous areas of limited size (i.e. at the regional scale) can also be successfully achieved when considering dispersal as unlimited (i.e. ignoring dispersal limitations, which is easier from a practical point of view). Finally, I present the largest fine-scale assessment of potential climate change impacts on mountain vegetation carried out to date. This assessment involves vegetation from 12 study areas distributed across all major western and central European mountain ranges. The results highlight that some mountain ranges (the Pyrenees and the Austrian Alps) are expected to be more affected by climate change than others (Norway and the Scottish Highlands). The results I obtain in this study also indicate that the threat levels projected by fine-scale models are less severe than those derived from coarse-scale models. This result suggests that some species could persist in small refugia that are not detected by coarse-scale models. Résumé: The existence of a causal relationship between the distribution of animal and plant species and their environment, in particular climate, has long been recognized and is one of the central principles of biogeography. This link naturally led to the idea of using the climatic, topographic, edaphic and biotic characteristics of the environment to predict its suitability for a species or community. 
In this thesis, my objective is to contribute to the development of methodological improvements in the field of species distribution modeling. More precisely, the objectives are to propose solutions to overcome certain limitations of species distribution models in practical applications in conservation biology, or in their use for assessing the potential impact of climate change on the environment. The first major objective of my work is to help demonstrate the potential of species distribution models for practical applications in conservation biology. I propose a method for generating pseudo-absences that overcomes the recurrent problem of the lack of reliable absence data. I also demonstrate, both theoretically (through simulation) and practically (through field sampling), how species distribution models can be used to model and improve the sampling of rare species. These results demonstrate the potential of species distribution models as tools for applications in conservation biology. The second major objective of this work is to contribute to improving projections of the potential impacts of climate change on flora, particularly in mountain areas. I develop a dynamic distribution model called MigClim that accounts for dispersal limitations in future projections of potential species distributions, and test its application on two virtual species. Since accounting for dispersal limitations requires substantial additional data (e.g. 
seed dispersal distances), this work also proposes a simplified classification of plant species into broad "dispersal types", which yields good approximations of dispersal distances for a large number of species. Finally, I apply the MigClim model to a large number of plant species in a study area in the Vaud Prealps. The results show that dispersal limitations can have a considerable impact on the potential species distributions predicted under climate change scenarios. However, when the models are used to estimate species extinction rates in mountain areas of limited size (regional assessments), good approximations can also be obtained by treating species dispersal as unlimited, which is much simpler from a practical point of view. To conclude, I present the largest fine-scale assessment of the potential impact of climate change on mountain flora conducted to date. This assessment covers 12 study areas distributed across all the major mountain ranges of western and central Europe. The results show that some mountain ranges (the Pyrenees and the Austrian Alps) are projected to be more sensitive to climate change than others (the Scandinavian Alps and the Scottish Highlands). The results also show that fine-scale models project less severe climate change impacts (e.g. species extinction rates) than coarse-scale models. This suggests that fine-scale models are able to capture climatic micro-niches that coarse-scale models do not detect.
Abstract:
Understanding the factors that shape species ranges is central to evolutionary biology. Species distribution models have become important tools for testing biogeographical, ecological and evolutionary hypotheses. Moreover, from an ecological and evolutionary perspective, these models help to elucidate the spatial strategies of species at a regional scale. We modelled the distributions of two phylogenetically, geographically and ecologically close Tupinambis species (Teiidae) that occupy the southernmost area of the genus's distribution in South America. We hypothesized that similarities between these species might have induced spatial strategies at the species level, such as niche differentiation and divergence of distribution patterns at a regional scale. Using logistic regression and MaxEnt we obtained species distribution models that revealed interspecific differences in habitat requirements, such as environmental temperature, precipitation and altitude. Moreover, the models suggest that although the ecological niches of Tupinambis merianae and T. rufescens are different, these species might co-occur in a large contact zone. We propose that niche plasticity could be the mechanism enabling their co-occurrence. The approach used here therefore allowed us to understand the spatial strategies of two Tupinambis lizards at a regional scale.
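As a rough illustration of one of the modelling approaches named above (logistic regression on environmental predictors), the sketch below fits a logistic-regression distribution model to synthetic presence/absence data. The predictors, coefficients and data are invented for illustration and are unrelated to the Tupinambis dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic presence/absence data: occurrence probability driven by two
# standardised predictors, e.g. temperature and precipitation. The
# coefficients and data are invented, not taken from the study.
n = 500
X = rng.normal(size=(n, 2))                 # columns: [temperature, precipitation]
true_w, true_b = np.array([1.5, -1.0]), 0.2
p_true = 1.0 / (1.0 + np.exp(-(X @ true_w + true_b)))
y = (rng.random(n) < p_true).astype(float)  # 1 = presence, 0 = absence

# Fit a logistic-regression distribution model by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * np.mean(p - y)

# Predicted habitat suitability for a hypothetical warm, fairly dry site
site = np.array([1.0, -0.5])
suitability = 1.0 / (1.0 + np.exp(-(site @ w + b)))
```

The fitted signs of the coefficients (positive for temperature, negative for precipitation here) are what reveal interspecific differences in habitat requirements when such models are fitted per species.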
Abstract:
BACKGROUND: Knowledge of normal heart weight ranges is important information for pathologists. Comparing the measured heart weight to reference values is one of the key elements used to determine whether the heart is pathological, as heart weight increases in many cardiac pathologies. The current reference tables are old and in need of an update. AIMS: The purposes of this study are to establish new reference tables for normal heart weights in the local population and to determine the best predictive factor for normal heart weight. We also aim to provide technical support for calculating the predicted normal heart weight. METHODS: The reference values are based on a retrospective analysis of adult Caucasian autopsy cases without any obvious pathology that were collected at the University Centre of Legal Medicine in Lausanne from 2007 to 2011. We selected 288 cases. The mean age was 39.2 years. There were 118 men and 170 women. Regression analyses were performed to assess the relationship of heart weight to body weight, body height, body mass index (BMI) and body surface area (BSA). RESULTS: Heart weight increased along with an increase in all the parameters studied. The mean heart weight was greater in men than in women at a similar body weight. BSA was determined to be the best predictor of normal heart weight. New reference tables for predicted heart weights are presented as a web application that enables the comparison of heart weights observed at autopsy with the reference values. CONCLUSIONS: The reference tables for heart weight and other organs should be systematically updated and adapted to the local population. Web access and smartphone applications for the predicted heart weight represent important practical tools.
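To illustrate how such reference values can be applied, the sketch below computes BSA with the Du Bois formula and feeds it into a linear predictor of normal heart weight. The slopes and intercepts in the predictor are hypothetical placeholders, not the regression coefficients derived from the Lausanne cohort.

```python
import math

def bsa_du_bois(weight_kg, height_cm):
    """Body surface area in m^2 by the Du Bois formula."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def predicted_heart_weight(bsa_m2, male):
    """Illustrative linear predictor of normal heart weight (g) from BSA.
    The slope and intercept values are hypothetical placeholders, not the
    coefficients published for the Lausanne cohort."""
    if male:
        return 250.0 * bsa_m2 - 100.0
    return 220.0 * bsa_m2 - 90.0

bsa = bsa_du_bois(70.0, 175.0)               # about 1.85 m^2 for 70 kg, 175 cm
hw = predicted_heart_weight(bsa, male=True)  # predicted normal weight in grams
```

An autopsy heart weight well above the predicted value for the measured BSA would then prompt a closer look for cardiac pathology.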
Abstract:
Abstract: Protective immune responses against pathogen invasion and transformed cells require the coordinated action of distinct leukocyte subsets and soluble factors, collectively termed the immunological network. Among antigen-presenting cells (APC), a crucial role is played by dendritic cells (DC), which initiate, amplify and determine the outcome of the immune response. Micro-environmental conditions profoundly influence DC, such that the resulting immune response ranges from successful immune stimulation to abortive response or immune suppression. For instance, the presence in the milieu of the anti-inflammatory cytokine interleukin-10 (IL-10) reverts most of the effects mediated on DC even by strong pro-inflammatory agents such as bacterial lipopolysaccharide (LPS), in terms of differentiation, activation and function. In an environment containing both LPS and IL-10, uncoupling of receptors for inflammatory chemokines occurs on DC within a few hours and in a reversible manner, allowing scavenging of chemokines and, consequently, attenuation of an inflammatory process that could be deleterious to the organism. By studying the effects of concomitant stimulation by LPS and IL-10 on DC from the gene expression point of view, we were able to define four distinct transcriptional programs: A. the inhibition of inflammation and immunity; B. the regulation of tissue remodeling; C. the tuning of cytokine/growth factor receptors and G protein-coupled receptors; D. the stimulation of B cell function and lymphoid tissue neogenesis. Among the latter genes, we further demonstrated that IL-10 synergizes with Toll-like receptor ligands for the production of the functionally active B cell-attracting chemokine CXCL13. Our data provide evidence that the combined exposure of APC to LPS and IL-10, via the production of CXCL13, engages humoral immunity by attracting antibody-producing cells. 
It is well known that persistent release of CXCL13 leads to the development of ectopic lymphoid tissue aggregates and production of high levels of antibodies, thus favoring the induction of auto-immunity. Our findings suggest that the IL-10 produced in chronic inflammatory conditions may promote lymphoid tissue neogenesis through increased release of CXCL13. IL-10 is an anti-inflammatory cytokine that inhibits cell-mediated, Th1-polarized immune responses. In this study we demonstrate that IL-10 strongly supports the development of humoral immunity. IL-10 and CXCL13 can thus be targets for specific therapies in auto-immune diseases.
Abstract:
The occurrence of microvascular and small macrovascular lesions and Alzheimer's disease (AD)-related pathology in the aging human brain is a well-described phenomenon. Although there is a wide consensus about the relationship between macroscopic vascular lesions and incident dementia, the cognitive consequences of the progressive accumulation of these small vascular lesions in the human brain are still a matter of debate. Among the vast group of small vessel-related forms of ischemic brain injuries, the present review discusses the cognitive impact of cortical microinfarcts, subcortical gray matter and deep white matter lacunes, periventricular and diffuse white matter demyelinations, and focal or diffuse gliosis in old age. A special focus will be on the sub-types of microvascular lesions not detected by currently available neuroimaging studies in routine clinical settings. After providing a critical overview of in vivo data on white matter demyelinations and lacunes, we summarize the clinicopathological studies performed by our center in large cohorts of individuals with microvascular lesions and concomitant AD-related pathology across two age ranges (the younger old, 65-85 years old, versus the oldest old, nonagenarians and centenarians). In conjunction with other autopsy datasets, these observations fully support the idea that cortical microinfarcts are the only consistent determinant of cognitive decline across the entire spectrum from pure vascular cases to cases with combined vascular and AD lesion burden.
Abstract:
The eclogite facies assemblage K-feldspar-jadeite-quartz in metagranites and metapelites from the Sesia-Lanzo Zone (Western Alps, Italy) records the equilibration pressure by dilution of the reaction jadeite + quartz = albite. The metapelites show partial transformation from a pre-Alpine assemblage of garnet (Alm(63)Prp(26)Grs(10))-K-feldspar-plagioclase-biotite +/- sillimanite to the Eo-Alpine high-pressure assemblage garnet (Alm(50)Prp(14)Grs(35))-jadeite (Jd(80-97)Di(0-4)Hd(0-8)Acm(0-7))-zoisite-phengite. Plagioclase is replaced by jadeite-zoisite-kyanite-K-feldspar-quartz and biotite is replaced by garnet-phengite or omphacite-kyanite-phengite. Equilibrium was attained only in local domains in the metapelites and therefore the K-feldspar-jadeite-quartz (KJQ) barometer was applied only to the plagioclase pseudomorphs and K-feldspar domains. The albite content of K-feldspar ranges from 4 to 11 mol% in less equilibrated assemblages from Val Savenca and from 4 to 7 mol% in the partially equilibrated samples from Monte Mucrone and the equilibrated samples from Montestrutto and Tavagnasco. Thermodynamic calculations on the stability of the assemblage K-feldspar-jadeite-quartz using available mixing data for K-feldspar and pyroxene indicate pressures of 15-21 kbar (+/- 1.6-1.9 kbar) at 550 +/- 50 degrees C. This barometer yields direct pressure estimates in high-pressure rocks where pressures are seldom otherwise fixed, although it is sensitive to analytical precision and the choice of thermodynamic mixing model for K-feldspar. Moreover, the KJQ barometer is independent of the ratio P-H2O/P-T. The inferred limiting a(H2O) for the assemblage jadeite-kyanite in the metapelites from Val Savenca is low and varies from 0.2 to 0.6.
Abstract:
Résumé: The Roman theatre of Aventicum lies between the small modern town of Avenches and the village of Donatyre, at the foot of a gently sloping hill that bounds the Broye plain to the south-east. It is situated west of the ancient urban quarters, which were laid out on an orthogonal plan, and forms part of a zone containing various temples and public buildings. In the winter of 1889/1890, the newly founded Association Pro Aventico launched the first archaeological excavations. Until 1914, the original masonry was uncovered while the building was being restored. Further large-scale excavations, accompanied by conservation measures, took place in 1926/1927 and from 1939 to 1942. In 2001, the Fondation Pro Aventico launched a project to study the construction history and architecture of the monument, which until then were only partly known. On the basis of remains attesting to buildings predating the theatre, a terminus post quem of AD 100-120 can be established for its construction. As the study of the ground plan indicates, the project required considerable planning. The building itself consists of a semicircular area reserved for the audience, whose substructures indicate that it was partly isolated from the others. The cavea, subdivided into three concentric sectors, is terminated by the hall building and by the aditus; notably, the upper rows of seating probably extended beyond the halls as far as the façade. The aditus gave access to the orchestra and stage area, which was dominated by a rectangular platform and bordered by a proedria. There were two different access routes: one at the front, through the arcades of the halls, and one at the rear, through the semicircular wall; apparently only the central part of the latter could be entered. 
The circulation routes within the substructures of the cavea can only be partially reconstructed owing to their poor state of preservation. However, the deambulatorium on the periphery could be identified, as well as five vomitoria on the first praecinctio and six vomitoria on the second. It may be assumed, though without conclusive evidence, that the third, uppermost tier was reached by stairwells leading to the summa cavea. These hypotheses concerning the circulation routes, based essentially on the ground plan of the building, are corroborated by a reconstruction of the now-vanished upper tiers of seating. A few architectural elements provide decisive arguments for this reconstruction, for example a seating block that indicates a cavea rake of 26.5°. It can also be shown that the architectural module defined from the ground plan was likewise applied in planning the elevation. Thanks to cornice fragments, two pilaster capitals decorated with acanthus leaves, and a pilaster base preserved in situ in the restored masonry, and taking the architectural module into account, an approximate reconstruction of the façade of the semicircular enclosure can be proposed. Although the architectural structures show that the theatre was planned and built according to a single concept, several alterations and modifications over time can be observed. On the one hand, traces of repair and consolidation can be detected in various places, no doubt intended to stabilise a building that had visibly suffered damage. On the other hand, structural or functional modifications were also undertaken, such as the later construction of the postscaenium along the outer stage wall. 
In the same context, two walls flanking the basilicas should also be noted; these are presumed to relate to the enlargement of the architectural complex formed by the Cigognier temple and the theatre, extended by the two temples built in the mid-2nd century AD at the site known as Au Lavoëx. The excavation, during the last third of the 3rd century AD, of a ditch nearly 6 m wide and 1.5 m deep all around the building turned the theatre into a veritable fortified site. Above the ditch, a stratigraphic sequence attests to habitation near the theatre from the 4th to the 7th century AD. This is one of the rare cases in Avenches where the presence of an early medieval settlement can be evoked.
Abstract:
PURPOSE OF REVIEW: Many chemotherapeutic drugs, including fluoropyrimidines, platinums, CPT-11, taxanes and adriamycin, have single-agent activity in advanced gastric cancer. Although combination chemotherapy has been shown to be more effective than single agents, response rates between 30 and 50% have not fulfilled their promise, as progression-free survival with the best combinations ranges between 3 and 7 months and overall survival between 8 and 11 months. The development of targeted therapies in gastric cancer clearly lags behind the integration of these novel agents into new treatment concepts for patients with colorectal cancer. This review summarizes the experience and major recent advances in the development of targeted therapies in advanced gastric cancer. RECENT FINDINGS: Recent publications on targeted therapies in gastric cancer are limited to nonrandomized phase I or II trials. The majority of agents tested were angiogenesis inhibitors or agents targeting the epidermal growth factor receptors EGFR1 and HER2. SUMMARY: Adequately powered, randomized phase III trials are necessary to define the clinical role of targeted therapies in advanced gastric cancer. Biomarker studies correlating with treatment outcomes will be critical to identify the patients who benefit most from chemotherapy and targeted therapy.
Abstract:
The ability to adapt to marginal habitats, in which survival and reproduction are initially poor, plays a crucial role in the evolution of ecological niches and species ranges. Adaptation to marginal habitats may be limited by genetic, developmental, and functional constraints, but also by consequences of demographic characteristics of marginal populations. Marginal populations are often sparse, fragmented, prone to local extinctions, or are demographic sinks subject to high immigration from high-quality core habitats. This makes them demographically and genetically dependent on core habitats and prone to gene flow counteracting local selection. Theoretical and empirical research in the past decade has advanced our understanding of conditions that favor adaptation to marginal habitats despite those limitations. This review is an attempt at synthesis of those developments and of the emerging conceptual framework.
Abstract:
Background: Since the rate of histologically 'negative' appendices still ranges between 15 and 20%, appendicitis in 'borderline' cases remains a challenging disease. As previously described, cell adhesion molecule expression correlates with different stages of appendicitis. Therefore, it was of interest to determine whether the 'negative' appendix correlated with the absence of E-selectin or vascular cell adhesion molecule-1 (VCAM-1). Methods: Nineteen grossly normal appendices from a series of 120 appendectomy specimens from patients with suspected appendicitis were analysed in frozen sections for the expression of E-selectin and VCAM-1. As control, 5 normal appendices were stained. Results: This study showed a coexpression of E-selectin and VCAM-1 in endothelial cells in early and recurrent appendicitis. In patients with symptoms for less than 6 h, only E-selectin was detected. Cases with fibrosis and luminal obliteration were only positive for VCAM-1. In cases of early appendicitis with symptoms of less than 6 h duration, a discordance between histological and immunohistochemical results was found. Conclusions: This report indicates that E-selectin and VCAM-1 expression could be useful parameters in the diagnosis of appendicitis in borderline cases.
Abstract:
Deeply incised river networks are generally regarded as robust features that are not easily modified by erosion or tectonics. Although the reorganization of deeply incised drainage systems has been documented, its importance for the overall landscape evolution of mountain ranges and the factors that permit such reorganizations are poorly understood. To address this problem, we have explored the rapid drainage reorganization that affected the Cahabon River in Guatemala during the Quaternary. Sediment-provenance analysis, field mapping, and electrical resistivity tomography (ERT) imaging are used to reconstruct the geometry of the valley before the river was captured. Dating of the abandoned valley sediments by the Be-10/Al-26 burial method and geomagnetic polarity analysis allows us to determine the age of the capture events and then to quantify several processes, such as the rate of tectonic deformation of the paleovalley, the rate of propagation of post-capture drainage reversal, and the rate at which canyons that formed at the capture sites have propagated along the paleovalley. Transtensional faulting started 1 to 3 million years ago, produced ground tilting and ground faulting along the Cahabon River, and thus generated differential uplift rates of 0.3 ± 0.1 to 0.7 ± 0.4 mm/y along the river's course. The river responded to faulting by incising the areas of relative uplift and depositing a few tens of meters of sediment above the areas of relative subsidence. The river then experienced two captures and one avulsion between 700 ky and 100 ky ago. The captures breached high-standing ridges that separate the Cahabon River from its captors. Captures occurred at specific points where the ridges are made permeable by fault damage zones and/or soluble rocks. Groundwater flow from the Cahabon River down to its captors likely increased the erosive power of the captors, thus promoting focused erosion of the ridges. 
Valley-fill formation and capture occurred in close temporal succession, suggesting a genetic link between the two. We suggest that aquifers accumulated within the valley fills, increased the head along the subterranean system connecting the Cahabon River to its captors, and promoted its development. Upon capture, the breached valley experienced widespread drainage reversal toward the capture sites. We attribute this generalized reversal to the combined effects of groundwater sapping in the valley fill, axial drainage obstruction by lateral fans, and tectonic tilting. Drainage reversal increased the size of the captured areas by a factor of 4 to 6. At the capture sites, 500 m deep canyons have been incised into the bedrock and are propagating upstream at a rate of 3 to 11 mm/y while deepening at a rate of 0.7 to 1.5 mm/y. At this rate, 1 to 2 million years will be necessary for headward erosion to completely erase the topographic expression of the paleovalley. We conclude that the rapid reorganization of this drainage system was made possible by the way the river adjusted to the new tectonic strain field, which involved transient sedimentation along the river's course. If the river had escaped its early reorganization and had been given the time necessary to reach a new dynamic equilibrium, the transient conditions that promoted capture would have vanished and its vulnerability to capture would have been strongly reduced.
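The Be-10/Al-26 burial method mentioned above exploits the different half-lives of the two nuclides: once sediment is shielded from cosmic rays, the Al-26/Be-10 ratio decays from its surface production value, so a measured ratio yields a burial age. A minimal sketch, assuming complete post-burial shielding and a commonly used surface production ratio; the measured ratio below is hypothetical.

```python
import math

# Decay constants from widely used half-lives (Myr) of the two nuclides.
LAM_BE10 = math.log(2) / 1.387   # Be-10
LAM_AL26 = math.log(2) / 0.705   # Al-26
SURFACE_RATIO = 6.75             # commonly assumed Al-26/Be-10 production ratio

def simple_burial_age(measured_ratio):
    """Burial age in Myr from a measured Al-26/Be-10 ratio, assuming
    complete shielding after burial (no post-burial production) -- a
    simplification of the full burial-dating method."""
    return math.log(SURFACE_RATIO / measured_ratio) / (LAM_AL26 - LAM_BE10)

age = simple_burial_age(5.0)     # hypothetical measured ratio, roughly 0.6 Myr
```

Real applications correct for post-burial production and measurement uncertainty, which is why the study combines burial dating with geomagnetic polarity analysis.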
Abstract:
Some methadone maintenance treatment (MMT) programs prescribe inadequate daily methadone doses. Patients complain of withdrawal symptoms and continue illicit opioid use, yet practitioners are reluctant to increase doses above certain arbitrary thresholds. Serum methadone levels (SMLs) may guide practitioners' dosing decisions, especially for those patients who have low SMLs despite higher methadone doses. Such variation is due in part to the complexities of methadone metabolism. The medication itself is a racemic (50:50) mixture of two enantiomers: an active "R" form and an essentially inactive "S" form. Methadone is metabolized primarily in the liver, by up to five cytochrome P450 isoforms, and individual differences in enzyme activity help explain the wide range of active R-enantiomer concentrations in patients given identical doses of racemic methadone. Most clinical research studies have used methadone doses of less than 100 mg/day (mg/d) and have not reported the corresponding SMLs. New research suggests that doses ranging from 120 mg/d to more than 700 mg/d, with correspondingly higher SMLs, may be optimal for many patients. Each patient presents a unique clinical challenge, and there is no way of prescribing a single best methadone dose to achieve a specific blood level as a "gold standard" for all patients. Clinical signs and patient-reported symptoms of abstinence syndrome, and continuing illicit opioid use, are effective indicators of dose inadequacy. There does not appear to be a maximum daily dose limit when determining what is adequately "enough" methadone in MMT.
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion into coastal aquifers are only a few examples demonstrating our need for new technologies to monitor subsurface processes from the surface. A major challenge is to characterise and optimise the performance of these technologies at different spatial and temporal scales. Plane-wave electromagnetic (EM) methods are sensitive to the electrical conductivity of the subsurface and, consequently, to the electrical conductivity of the pore fluids, the presence of connected fractures, temperature, and the geological materials. These methods are governed by equations that are valid over wide frequency ranges, allowing processes from a few metres below the surface down to several kilometres depth to be studied in an analogous manner. Nevertheless, these methods suffer a loss of resolution with depth because of the diffusive nature of the electromagnetic field. For this reason, subsurface models estimated with these methods must incorporate prior information in order to constrain the models as much as possible and to allow an appropriate quantification of model uncertainty. In this thesis, I develop approaches for the static and dynamic characterisation of the subsurface using plane-wave EM methods. In the first part, I present a deterministic approach for two-dimensional time-lapse inversion of plane-wave EM data. This strategy is based on incorporating prior information into the algorithm according to the expected changes in the electrical conductivity model. 
Ceci est réalisé en intégrant une régularisation stochastique et des contraintes flexibles par rapport à la gamme des changements attendus en utilisant les multiplicateurs de Lagrange. J'utilise des normes différentes de la norme l2 pour contraindre la structure du modèle et obtenir des transitions abruptes entre les régions du modèle qui subissent des changements dans le temps et celles qui n'en subissent pas. J'incorpore aussi une stratégie afin d'éliminer les erreurs systématiques des données time-lapse. Ce travail a mis en évidence l'amélioration de la caractérisation des changements temporels par rapport aux approches classiques qui réalisent des inversions indépendantes à chaque pas de temps et comparent les modèles. Dans la seconde partie de cette thèse, j'adopte un formalisme bayésien et je teste la possibilité de quantifier les incertitudes sur les paramètres du modèle dans l'inversion d'ondes EM planes. Pour ce faire, je présente une stratégie d'inversion probabiliste basée sur des pixels à deux dimensions pour des inversions séparées et jointes de données d'ondes EM planes et de tomographie de résistivité électrique (ERT). Je compare les incertitudes des paramètres du modèle en considérant différents types d'information a priori sur la structure du modèle et différentes fonctions de vraisemblance pour décrire les erreurs sur les données. Les résultats indiquent que la régularisation du modèle est nécessaire lorsqu'on a affaire à un grand nombre de paramètres, car elle permet d'accélérer la convergence des chaînes et d'obtenir des modèles plus réalistes. Cependant, ces contraintes mènent à des incertitudes estimées plus faibles, ce qui implique des distributions a posteriori qui ne contiennent pas le vrai modèle dans les régions où la méthode présente une sensibilité limitée. Cette situation peut être améliorée en combinant les méthodes d'ondes EM planes avec d'autres méthodes complémentaires telles que l'ERT.
De plus, je montre que le poids de régularisation des paramètres et l'écart-type des erreurs sur les données peuvent être retrouvés par une inversion probabiliste. Finalement, j'évalue la possibilité de caractériser la distribution tridimensionnelle d'un panache de traceur salin injecté dans le sous-sol en réalisant une inversion probabiliste time-lapse tridimensionnelle d'ondes EM planes. Étant donné que les inversions probabilistes sont très coûteuses en temps de calcul lorsque l'espace des paramètres présente une grande dimension, je propose une stratégie de réduction du modèle où les coefficients de la décomposition en moments de Legendre du panache de traceur injecté ainsi que sa position sont estimés. Pour ce faire, un modèle de résistivité de base est nécessaire ; il peut être obtenu avant l'expérience time-lapse. Un test synthétique montre que la méthodologie fonctionne bien quand le modèle de résistivité de base est caractérisé correctement. Cette méthodologie est aussi appliquée à un test de traçage par injection d'une solution saline et d'acides réalisé dans un système géothermal en Australie, puis comparée à une inversion time-lapse tridimensionnelle réalisée selon une approche déterministe. L'inversion probabiliste permet de mieux contraindre le panache du traceur salin grâce à la grande quantité d'informations a priori incluse dans l'algorithme. Néanmoins, les changements de conductivité nécessaires pour expliquer les changements observés dans les données sont plus grands que ce que permet d'expliquer notre connaissance actuelle des phénomènes physiques. Ce problème peut être lié à la qualité limitée du modèle de résistivité de base utilisé, ce qui indique que des efforts plus importants devront être fournis à l'avenir pour obtenir des modèles de base de bonne qualité avant de réaliser des expériences dynamiques.
Les études décrites dans cette thèse montrent que les méthodes d'ondes EM planes sont très utiles pour caractériser et suivre les variations temporelles du sous-sol sur de larges échelles. Les présentes approches améliorent l'évaluation des modèles obtenus, autant en termes d'incorporation d'informations a priori qu'en termes de quantification des incertitudes a posteriori. De plus, les stratégies développées peuvent être appliquées à d'autres méthodes géophysiques et offrent une grande flexibilité pour l'incorporation d'informations additionnelles lorsqu'elles sont disponibles. -- The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to ensure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods are governed by equations that remain valid over a large range of frequencies, allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundreds of kilometers in depth. Unfortunately, they suffer from a significant loss of resolution with depth due to the diffusive nature of electromagnetic fields. Therefore, estimations of subsurface models using these methods should incorporate a priori information to better constrain the models and provide appropriate measures of model uncertainty. In this thesis, I develop approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
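The resolution loss with depth mentioned above follows from the electromagnetic skin depth. A minimal sketch using the standard skin-depth formula, δ = √(2ρ/(μ₀ω)) (a textbook relation, not code from the thesis), shows why lower frequencies penetrate deeper but average over ever-larger volumes:

```python
import numpy as np

# Standard EM skin depth: delta = sqrt(2 * rho / (mu0 * omega)),
# where rho is resistivity [ohm*m] and omega = 2*pi*f [rad/s].
MU0 = 4e-7 * np.pi  # vacuum magnetic permeability [H/m]

def skin_depth_m(resistivity_ohm_m, frequency_hz):
    """Depth at which a plane EM wave is attenuated by a factor 1/e."""
    omega = 2.0 * np.pi * frequency_hz
    return np.sqrt(2.0 * resistivity_ohm_m / (MU0 * omega))

# In a 100 ohm*m half-space, sweeping from 10 kHz down to 1 Hz moves the
# penetration depth from tens of meters to several kilometers.
for f in (1e4, 1e2, 1.0):
    print(f"{f:>8.0f} Hz -> {skin_depth_m(100.0, f):8.0f} m")
```

The ~1/e attenuation per skin depth is the diffusive behavior referred to in the abstract: depth coverage is traded directly against spatial resolution.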
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints on the expected ranges of the changes via Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions at each time step and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. However, these constraints also lead to smaller uncertainty estimates, implying posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity.
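The effect of the non-l2 penalty on the model update can be sketched with a toy linear example (hypothetical operator and weights, not the thesis implementation): an l1 penalty on the temporal model change scores a sparse, sharp update lower than a diffuse update of the same energy, which is why it promotes abrupt transitions between changing and unchanged regions.

```python
import numpy as np

# Toy linear time-lapse problem: G maps a conductivity *change* dm to data.
rng = np.random.default_rng(0)
n_data, n_model = 20, 10
G = rng.standard_normal((n_data, n_model))  # hypothetical forward operator

dm_true = np.zeros(n_model)
dm_true[3:5] = 1.0            # sharp, localized temporal change
d_obs = G @ dm_true           # noise-free synthetic time-lapse data

def timelapse_objective(dm, lam=0.5):
    """Data misfit plus an l1 (non-l2) penalty on the model update."""
    misfit = np.sum((d_obs - G @ dm) ** 2)
    penalty = lam * np.sum(np.abs(dm))
    return misfit + penalty

# A diffuse update with the same l2 energy as the true sparse change has a
# larger l1 norm (and a nonzero misfit), so the objective prefers sparsity.
dm_diffuse = np.full(n_model, np.sqrt(np.sum(dm_true ** 2) / n_model))
print(timelapse_objective(dm_true) < timelapse_objective(dm_diffuse))  # True
```

Minimizing such an objective (e.g. with iteratively reweighted least squares) yields blocky change models, in contrast to the smeared updates an l2 penalty would produce.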
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume, together with its location, are estimated. For this purpose, a base resistivity model obtained prior to the time-lapse experiment is needed. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to a saline and acid injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically plausible based on present-day understanding. This issue may be related to the limited quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification.
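The model-reduction idea behind the Legendre moment decomposition can be illustrated in one dimension (a hypothetical toy profile, not the thesis code): a smooth plume-like anomaly is summarized by a handful of Legendre expansion coefficients instead of a dense grid of cell values, shrinking the MCMC parameter space accordingly.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Toy 1-D "plume" profile on the Legendre reference interval [-1, 1];
# in the thesis the decomposition is three-dimensional.
x = np.linspace(-1.0, 1.0, 201)
plume_true = np.exp(-8.0 * (x - 0.2) ** 2)  # smooth, off-center anomaly

# Summarize the 201-point profile with only 10 Legendre coefficients:
# these few numbers would be the MCMC parameters instead of 201 cell values.
n_coef = 10
coef = L.legfit(x, plume_true, n_coef - 1)   # least-squares Legendre fit
plume_approx = L.legval(x, coef)             # reconstruction from moments

rms = float(np.sqrt(np.mean((plume_true - plume_approx) ** 2)))
print(f"{n_coef} coefficients, reconstruction RMS error = {rms:.4f}")
```

Because smooth plumes are well captured by low-order moments, the reconstruction error stays small while the sampler explores an order of magnitude fewer parameters.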
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.