117 results for Models and Principles
Abstract:
TERMINOLOGY AND PRINCIPLES OF COMBINING ANTIPSYCHOTICS WITH A SECOND MEDICATION: The term "combination" covers virtually all the ways in which one medication may be added to another. The other commonly used terms are "augmentation", which implies an additive effect from adding a second medicine to the effect obtained from prescribing the first, and "add-on", which implies adding on to an existing, possibly effective treatment that, for one reason or another, cannot or should not be stopped. The issues that arise in all potential indications are: a) how long it is reasonable to wait to establish an insufficient response to monotherapy; b) by what criteria that response should be defined; c) how optimal the dose of the first monotherapy is and, therefore, how confident one can be that its lack of effect reflects a truly inadequate response. Before combination treatment is considered, one or more of the following criteria should be met: a) monotherapy has been only partially effective on core symptoms; b) monotherapy has been effective on some concurrent symptoms but not others, for which a further medicine is believed to be required; c) a particular combination might be indicated de novo in some indications; d) the combination could improve tolerability, because two compounds may be employed below their individual dose thresholds for side effects. Regulators have been concerned primarily with a) and, in principle at least, c) above. In clinical practice, the use of combination treatment reflects the often unsatisfactory outcome of treatment with single agents. ANTIPSYCHOTICS IN MANIA: There is good evidence that most antipsychotics tested show efficacy in acute mania when added to lithium or valproate for patients showing no response, or only a partial response, to lithium or valproate alone. Conventional two-armed trial designs could benefit from a third, antipsychotic-monotherapy arm. In the long-term treatment of bipolar disorder, in patients responding acutely to the addition of quetiapine to lithium or valproate, this combination reduces the subsequent risk of relapse into depression, mania or mixed states compared with lithium or valproate monotherapy. Comparable data are not available for combinations with other antipsychotics. ANTIPSYCHOTICS IN MAJOR DEPRESSION: Some atypical antipsychotics have been shown to induce remission when added to an antidepressant (usually an SSRI or SNRI) in unipolar patients in a major depressive episode unresponsive to antidepressant monotherapy. Refractoriness is defined as at least 6 weeks without meeting an adequate pre-defined treatment response. Long-term data are not yet available to support continuing efficacy. SCHIZOPHRENIA: There is only limited evidence to support the combination of two or more antipsychotics in schizophrenia. Any monotherapy should be given at the maximal tolerated dose, and at least two antipsychotics of different action/tolerability profiles, as well as clozapine, should be tried as monotherapy before a combination is considered. The addition of a high-potency D2/3 antagonist to a low-potency antagonist such as clozapine or quetiapine is the logical combination for treating positive symptoms, although further evidence from well-conducted clinical trials is needed. Mechanisms of action other than D2/3 blockade, and hence other combinations, might be more relevant for negative, cognitive or affective symptoms. OBSESSIVE-COMPULSIVE DISORDER: SSRI monotherapy yields a moderate overall average benefit in OCD, and it can take as long as 3 months before that benefit can be judged.
Antipsychotic addition may be considered in OCD with tic disorder and in refractory OCD. For OCD with poor insight (OCD with "psychotic features"), the treatment of choice should be a medium-to-high dose of an SSRI, and only in refractory cases might augmentation with antipsychotics be considered. Augmentation with haloperidol and risperidone was found to be effective (symptom reduction of more than 35%) for patients with tics. For refractory OCD, there are data suggesting a specific role for haloperidol and risperidone as well, and some data regarding a potential therapeutic benefit of olanzapine and quetiapine. ANTIPSYCHOTICS AND ADVERSE EFFECTS IN SEVERE MENTAL ILLNESS: Cardio-metabolic risks in patients with severe mental illness, especially those treated with antipsychotic agents, are now much better recognized, and efforts to ensure improved physical health screening and prevention are becoming established.
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to ensure the characterization and optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
The governing equations of these methods are the same over a large range of frequencies, which allows processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundreds of kilometers in depth. Unfortunately, these methods suffer from a significant loss of resolution with depth because of the diffusive nature of the electromagnetic fields. Therefore, estimates of subsurface models obtained with these methods should incorporate a priori information to constrain the models as well as possible, and should provide appropriate measures of model uncertainty. In this thesis, I develop approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of the thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on incorporating prior information about the expected temporal changes in electrical conductivity into the inversion algorithm. This is done by combining a flexible stochastic regularization with constraints on the expected ranges of the changes, enforced using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors from the time-lapse inversion. This work demonstrates improved characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of the thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters, because it helps to accelerate the convergence of the chains and leads to more realistic models. However, these constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected saline tracer plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high-dimensional parameter spaces, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected plume, together with its location, are estimated. For this purpose, a base resistivity model, obtained prior to the time-lapse experiment, is needed. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to a saline and acid tracer injection test performed in a geothermal system in Australia, and the results are compared with those of a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the tracer plume because of the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than is physically plausible given our present-day understanding. This issue may be related to the limited quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface over a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods, and they offer great flexibility for incorporating additional information when available.
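As a schematic illustration of the deterministic time-lapse strategy summarized above, the inversion of the repeat data can be written as a constrained optimization problem. The notation below (baseline model m0, forward operator F, repeat data d2, weighting operators Wd and Wm, norm exponent p, and the bound vectors) is ours, reconstructed from the abstract's description rather than taken from the thesis:

\min_{\Delta \mathbf{m}} \; \left\| \mathbf{W}_d \left[ \mathbf{d}_2 - \mathbf{F}(\mathbf{m}_0 + \Delta \mathbf{m}) \right] \right\|_2^2 \; + \; \lambda \left\| \mathbf{W}_m \, \Delta \mathbf{m} \right\|_p^p \quad \text{subject to} \quad \Delta \mathbf{m}_{\min} \le \Delta \mathbf{m} \le \Delta \mathbf{m}_{\max}

Here Wm plays the role of the stochastic regularization, choosing p < 2 favors the sharp transitions between changed and unchanged regions mentioned above, and the box constraints on the conductivity changes are the ones enforced with Lagrange multipliers.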
Abstract:
Peroxisome proliferator-activated receptor gamma (PPARgamma) plays a key role in adipocyte differentiation and insulin sensitivity. Its synthetic ligands, the thiazolidinediones (TZD), are used as insulin sensitizers in the treatment of type 2 diabetes. These compounds induce adipocyte differentiation in cell culture models and promote weight gain in rodents and humans. Here, we report the identification of a new synthetic PPARgamma antagonist, the phosphonophosphate SR-202, which inhibits both TZD-stimulated recruitment of the coactivator steroid receptor coactivator-1 and TZD-induced transcriptional activity of the receptor. In cell culture, SR-202 efficiently antagonizes hormone- and TZD-induced adipocyte differentiation. In vivo, decreasing PPARgamma activity, either by treatment with SR-202 or by invalidation of one allele of the PPARgamma gene, leads to a reduction of both high-fat diet-induced adipocyte hypertrophy and insulin resistance. These effects are accompanied by a smaller adipocyte size and reduced secretion of TNFalpha and leptin. Treatment with SR-202 also dramatically improves insulin sensitivity in diabetic ob/ob mice. Thus, although we cannot exclude that its actions involve additional signaling mechanisms, SR-202 represents a new selective PPARgamma antagonist that is effective both in vitro and in vivo. Because it yields both antiobesity and antidiabetic effects, SR-202 may be a lead for new compounds to be used in the treatment of obesity and type 2 diabetes.
Abstract:
BACKGROUND: Postmenopausal women with hormone receptor-positive early breast cancer have a persistent, long-term risk of breast-cancer recurrence and death. Therefore, trials assessing endocrine therapies for this patient population need extended follow-up. We present an update of efficacy outcomes in the Breast International Group (BIG) 1-98 study at 8·1 years median follow-up. METHODS: BIG 1-98 is a randomised, phase 3, double-blind trial of postmenopausal women with hormone receptor-positive early breast cancer that compares 5 years of tamoxifen or letrozole monotherapy, or sequential treatment with 2 years of one of these drugs followed by 3 years of the other. Randomisation was done with permuted blocks, and was stratified according to the two-arm or four-arm randomisation option, participating institution, and chemotherapy use. Patients, investigators, data managers, and medical reviewers were masked. The primary efficacy endpoint was disease-free survival (events were invasive breast cancer relapse, second primaries [contralateral breast and non-breast], or death without a previous cancer event). Secondary endpoints were overall survival, distant recurrence-free interval (DRFI), and breast cancer-free interval (BCFI). The monotherapy comparison included patients randomly assigned to tamoxifen or letrozole for 5 years. In 2005, after a significant disease-free survival benefit was reported for letrozole as compared with tamoxifen, a protocol amendment facilitated the crossover to letrozole of patients who were still receiving tamoxifen alone; Cox models and Kaplan-Meier estimates with inverse probability of censoring weighting (IPCW) are used to account for the selective crossover to letrozole of patients (n=619) in the tamoxifen arm. The comparison of the sequential treatments with letrozole monotherapy included patients enrolled and randomly assigned to letrozole for 5 years, letrozole for 2 years followed by tamoxifen for 3 years, or tamoxifen for 2 years followed by letrozole for 3 years. Treatment has ended for all patients, and detailed safety results for adverse events that occurred during the 5 years of treatment have been reported elsewhere. Follow-up is continuing for those enrolled in the four-arm option. BIG 1-98 is registered at ClinicalTrials.gov, number NCT00004205. FINDINGS: 8010 patients were included in the trial, with a median follow-up of 8·1 years (range 0-12·4). 2459 were randomly assigned to monotherapy with tamoxifen for 5 years and 2463 to monotherapy with letrozole for 5 years. In the four-arm option of the trial, 1546 were randomly assigned to letrozole for 5 years, 1548 to tamoxifen for 5 years, 1540 to letrozole for 2 years followed by tamoxifen for 3 years, and 1548 to tamoxifen for 2 years followed by letrozole for 3 years. At a median follow-up of 8·7 years from randomisation (range 0-12·4), letrozole monotherapy was significantly better than tamoxifen, whether by IPCW or intention-to-treat analysis (IPCW disease-free survival HR 0·82 [95% CI 0·74-0·92], overall survival HR 0·79 [0·69-0·90], DRFI HR 0·79 [0·68-0·92], BCFI HR 0·80 [0·70-0·92]; intention-to-treat disease-free survival HR 0·86 [0·78-0·96], overall survival HR 0·87 [0·77-0·999], DRFI HR 0·86 [0·74-0·998], BCFI HR 0·86 [0·76-0·98]). At a median follow-up of 8·0 years from randomisation (range 0-11·2) for the comparison of the sequential groups with letrozole monotherapy, there were no statistically significant differences in any of the four endpoints for either sequence.
8-year intention-to-treat estimates (each with SE ≤1·1%) for letrozole monotherapy, letrozole followed by tamoxifen, and tamoxifen followed by letrozole were 78·6%, 77·8%, and 77·3% for disease-free survival; 87·5%, 87·7%, and 85·9% for overall survival; 89·9%, 88·7%, and 88·1% for DRFI; and 86·1%, 85·3%, and 84·3% for BCFI. INTERPRETATION: For postmenopausal women with endocrine-responsive early breast cancer, letrozole monotherapy reduces breast cancer recurrence and mortality compared with tamoxifen monotherapy. Sequential treatments involving tamoxifen and letrozole do not improve outcome compared with letrozole monotherapy, but might be useful strategies when considering an individual patient's risk of recurrence and treatment tolerability. FUNDING: Novartis, United States National Cancer Institute, International Breast Cancer Study Group.
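As a rough illustration of the IPCW device used above to correct for the selective crossover, the following sketch estimates censoring weights from first principles. The function names and data layout are ours, the tie handling is naive, and a full trial analysis would also weight risk-set contributions over time rather than only the observed events:

import numpy as np

def censoring_survival(times, events):
    """Kaplan-Meier curve of the censoring distribution G(t): censoring
    (events == 0) is treated as the event of interest."""
    order = np.argsort(times)
    t = times[order]
    d = 1 - events[order]                   # 1 where the subject was censored
    at_risk = len(t) - np.arange(len(t))    # risk-set size just before each t
    surv = np.cumprod(1.0 - d / at_risk)    # product-limit estimator
    return t, surv

def ipc_weights(times, events):
    """Weight each observed event by 1 / G(t-), so subjects who remain under
    observation stand in for comparable subjects who were censored (here:
    who selectively crossed over)."""
    t_sorted, surv = censoring_survival(times, events)
    w = np.ones(len(times))
    for i, (ti, ei) in enumerate(zip(times, events)):
        if ei == 1:
            g_before = surv[t_sorted < ti]  # G just before the event time
            w[i] = 1.0 / (g_before[-1] if g_before.size else 1.0)
    return w

The resulting weights could then be passed to a weighted Cox partial likelihood (for example via the weights_col argument of lifelines' CoxPHFitter.fit) to obtain IPCW hazard ratios of the kind reported above.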
Abstract:
The anaplastic lymphoma kinase (ALK) gene is overexpressed, mutated, or amplified in most cases of neuroblastoma (NB), a pediatric neural crest-derived embryonal tumor. The two most frequent mutations, ALK-F1174L and ALK-R1275Q, contribute to NB tumorigenesis in mouse models and cooperate with MYCN in the oncogenic process. However, the precise role of activating ALK mutations or ALK-wt overexpression in NB tumor initiation needs further clarification. Human ALK-wt, ALK-F1174L, or ALK-R1275Q was stably expressed in the murine neural crest progenitor cells (NCPC) MONC-1 or JoMa1, immortalized with v-Myc or tamoxifen-inducible Myc-ERT, respectively. While orthotopic implantation of MONC-1 parental cells in nude mice generated various tumor types, such as NB, osteo/chondrosarcoma, and undifferentiated tumors, owing to v-Myc oncogenic activity, MONC-1-ALK-F1174L cells produced only undifferentiated tumors. Furthermore, our data represent the first demonstration of the transforming capacity of ALK-wt: expression of ALK-wt in JoMa1 cells, like that of ALK-F1174L or ALK-R1275Q, in the absence of exogenous Myc-ERT activity, was sufficient to induce the formation of aggressive and undifferentiated neural crest cell-derived tumors, but not to drive NB development. Interestingly, JoMa1-ALK tumors and their derived cell lines upregulated endogenous Myc expression as a result of ALK activation, and both ALK and Myc activities were necessary to confer tumorigenic properties on tumor-derived JoMa1 cells in vitro.
Abstract:
Cleft palate is a common congenital disorder that affects up to 1 in 2,500 live human births and results in considerable morbidity to affected individuals and their families. The etiology of cleft palate is complex, with both genetic and environmental factors implicated. Mutations in the transcription factor-encoding genes p63 and interferon regulatory factor 6 (IRF6) have individually been identified as causes of cleft palate; however, a relationship between the key transcription factors p63 and IRF6 has not been determined. Here, we used both mouse models and human primary keratinocytes from patients with cleft palate to demonstrate that IRF6 and p63 interact epistatically during development of the secondary palate. Mice simultaneously carrying a heterozygous deletion of p63 and the Irf6 knockin mutation R84C, which causes cleft palate in humans, displayed ectodermal abnormalities that led to cleft palate. Furthermore, we showed that p63 transactivated IRF6 by binding to an upstream enhancer element; genetic variation within this enhancer element is associated with increased susceptibility to cleft lip. Our findings therefore identify p63 as a key regulatory molecule during palate development and provide a mechanism for the cooperative role of p63 and IRF6 in orofacial development in mice and humans.
Abstract:
The complexity of sleep-wake regulation, in addition to the many environmental influences, includes genetic predisposing factors, which are beginning to be discovered. Most of the current progress in the study of sleep genetics comes from animal models (dogs, mice, and drosophila). Multiple approaches using both animal models and different genetic techniques are needed to follow the segregation of, and ultimately to identify, 'sleep genes' and the molecular bases of sleep disorders. Recent progress in molecular genetics and the development of a detailed human genome map have already led to the identification of genetic factors in several complex disorders. Only a few genes are known for which a mutation causes a sleep disorder. However, single-gene disorders are rare, and most common disorders are complex in terms of their genetic susceptibility, environmental factors, gene-gene interactions, and gene-environment interactions. We review here the current progress in the genetics of normal and pathological sleep and suggest a few future perspectives.
Abstract:
The proportion of the population living in or around cities is higher than ever. Urban sprawl and car dependence have supplanted the pedestrian-friendly compact city. Environmental problems such as air pollution, land waste, and noise, as well as health problems, are the result of this still-continuing process. Urban planners, together with society as a whole, have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to gain a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated, and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic generated by commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming geographic space into a feature space, and the distance-circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed. A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scale laws are used to characterise urban clusters. In a last section, population evolution is modelled using a model close to the well-established gravity model. The work covers quite a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
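A minimal sketch of the self-organising-map idea applied above to high-dimensional socio-economic data. The grid size, learning-rate and neighbourhood schedules are illustrative assumptions rather than the thesis's settings, and X is assumed to be a standardised (observations x variables) array:

import numpy as np

def train_som(X, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
    """Self-organising map: grid nodes hold codebook vectors that are pulled
    toward the data while a shrinking neighbourhood preserves topology."""
    rng = np.random.default_rng(seed)
    h, w = grid
    nodes = np.indices((h, w)).reshape(2, -1).T       # (h*w, 2) grid coords
    W = rng.normal(size=(h * w, X.shape[1]))          # codebook vectors
    for it in range(iters):
        x = X[rng.integers(len(X))]                   # random observation
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
        frac = it / iters
        lr = lr0 * (1.0 - frac)                       # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.01          # shrinking neighbourhood
        d2 = ((nodes - nodes[bmu]) ** 2).sum(axis=1)  # squared grid distance
        theta = np.exp(-d2 / (2.0 * sigma ** 2))      # neighbourhood kernel
        W += lr * theta[:, None] * (x - W)            # pull codebooks toward x
    return W.reshape(h, w, -1)

Mapping each commune or district to its best-matching unit then yields the kind of two-dimensional ordering of high-dimensional socio-economic profiles described above.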
Abstract:
Executive Summary The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation undertaken to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than the realized returns of portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different; combined with a visual inspection, this allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate those resulting from optimization with respect to a single measure such as the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, that is, the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those obtained from virtually all of the individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results concerning the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
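The second-order dominance check described above (pointwise comparison of absolute Lorenz curves, i.e. cumulative expected shortfalls across quantiles) is mechanical enough to sketch. The function names and grid resolution below are ours:

import numpy as np

def absolute_lorenz(returns, n_grid=101):
    """Absolute Lorenz curve L(p) = integral of the quantile function up to p,
    equivalently a cumulative expected-shortfall profile."""
    q = np.sort(returns)                                   # empirical quantiles
    cum = np.concatenate(([0.0], np.cumsum(q))) / len(q)   # L(k/n)
    p = np.linspace(0.0, 1.0, len(cum))
    grid = np.linspace(0.0, 1.0, n_grid)
    return grid, np.interp(grid, p, cum)

def second_order_dominates(returns_a, returns_b):
    """True when A's absolute Lorenz curve lies weakly above B's at every
    quantile, i.e. A second-order stochastically dominates B."""
    grid, lorenz_a = absolute_lorenz(returns_a)
    _, lorenz_b = absolute_lorenz(returns_b)
    return bool(np.all(lorenz_a >= lorenz_b))

The preliminary distinctness test mentioned above can be run with scipy.stats.ks_2samp on the two realized-return samples before comparing the curves.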
Abstract:
Congenital sodium diarrhea is a very rare genetic disease. Children affected by this condition suffer from severe diarrhea characterized by watery stools with a high fecal loss of sodium and bicarbonate, resulting in hyponatremic dehydration and metabolic acidosis. Genetic analyses have identified mutations in the Spint2 gene as a cause of this disease. The Spint2 gene encodes a transmembrane serine protease inhibitor expressed in various epithelial tissues, including the gastro-intestinal tract and renal tubules. The physiological role of Spint2 is completely unknown. In addition, physiological partners of Spint2 remain to be identified, and its mechanism of inhibition remains elusive. The aim of this project was therefore to gain insight into the function and role of Spint2 in the context of congenital sodium diarrhea, in order to better understand the pathophysiology of diarrheas and perhaps identify new therapeutic targets.
A functional assay in Xenopus oocytes identified the membrane-bound serine proteases CAP1 and Tmprss13 as potential targets of Spint2, because both proteases were no longer inhibited by the Spint2 mutant Y163C, which has been associated with congenital sodium diarrhea. Further functional and biochemical experiments suggested that the inhibition of Tmprss13 by Spint2 occurs through a complex interaction between the two proteins. The effects of membrane-bound serine proteases on the Na+-H+ exchanger NHE3, which has been proposed to be involved in the pathogenesis of congenital sodium diarrhea, were also tested. A specific cleavage of NHE3 by the membrane-bound serine protease Tmprss3 was observed in biochemical experiments. Unfortunately, the physiological relevance of these results could not be assessed in vivo, since the conditional Spint2 knockout mouse model that we generated showed a reduction in Spint2 expression of only 50% and displayed no phenotype. In brief, this work provides two new potential partners of Spint2 and points to a putative regulation of NHE3 by membrane-bound serine proteases. Further work in animal models and cell lines is required to assess the physiological relevance of these results and to obtain additional data about Spint2 and congenital sodium diarrhea.
Abstract:
Aim: The insulin sensitizer rosiglitazone (RTZ) acts by activating peroxisome proliferator-activated receptor gamma (PPARgamma), an effect accompanied in vivo in humans by an increase in fat storage. We hypothesized that this effect concerns PPARgamma(1) and PPARgamma(2) differently and is dependent on the origin of the adipose cells (subcutaneous or visceral). To this aim, the effects of RTZ, the PPARgamma antagonist GW9662, and lentiviral vectors expressing interfering RNA were evaluated in human pre-adipocyte models. Methods: Two models were investigated: the human pre-adipose cell line Chub-S7 and primary pre-adipocytes derived from subcutaneous and visceral biopsies of adipose tissue (AT) obtained from obese patients. Cells were used to perform oil red O staining, gene expression measurements, and lentiviral infections. Results: In both models, RTZ stimulated the differentiation of pre-adipocytes into mature cells. This was accompanied by significant increases in both PPARgamma(1) and PPARgamma(2) gene expression, with a relatively stronger stimulation of PPARgamma(2). In contrast, RTZ failed to stimulate differentiation when cells were incubated in the presence of GW9662. This effect was similar to that observed using interfering RNA against PPARgamma(2), and was accompanied by an abrogation of the RTZ-induced PPARgamma(2) gene expression, whereas the level of PPARgamma(1) was not affected. Conclusions: Both GW9662 treatment and interfering RNA against PPARgamma(2) are able to abrogate RTZ-induced differentiation without a significant change in PPARgamma(1) gene expression. These results are consistent with previous results obtained in animal models and suggest that in humans PPARgamma(2) may also be the key isoform involved in fat storage.
Abstract:
BACKGROUND: Over 50% of patients with head and neck squamous cell carcinoma (HNSCC) present with locoregionally advanced disease. Those at intermediate-to-high risk of recurrence after definitive therapy have advanced disease based on tumour size or lymph node involvement, a non-oropharynx primary site, human papillomavirus (HPV)-negative oropharyngeal cancer, or HPV-positive oropharynx cancer with a smoking history (>10 pack-years). Non-surgical approaches include concurrent chemoradiotherapy, induction chemotherapy followed by definitive radiotherapy or chemoradiotherapy, or radiotherapy alone. Following locoregional therapies (including surgical salvage of residual cervical nodes), no standard intervention exists. Overexpression of epidermal growth factor receptor (EGFR), an ErbB family member, is associated with poor prognosis in HNSCC. EGFR-targeted cetuximab is the only targeted therapy that impacts overall survival and is approved for HNSCC in the USA and Europe. However, resistance often occurs, and new approaches, such as targeting multiple ErbB family members, may be required. Afatinib, an irreversible ErbB family blocker, demonstrated antiproliferative activity in preclinical models and clinical efficacy comparable with that of cetuximab in a randomized phase II trial in recurrent or metastatic HNSCC. LUX-Head & Neck 2, a phase III study, will assess adjuvant afatinib versus placebo following chemoradiotherapy in primary unresected, locoregionally advanced, intermediate-to-high-risk HNSCC. METHODS/DESIGN: Patients with primary unresected locoregionally advanced HNSCC, in good clinical condition, with an unfavourable risk of recurrence and no evidence of disease after chemoradiotherapy, will be randomized 2:1 to oral once-daily afatinib (40 mg starting dose) or placebo. As HPV status will not be determined for eligibility, unfavourable risk is defined as a non-oropharynx primary site or oropharynx cancer in patients with a smoking history (>10 pack-years). Treatment will continue for 18 months or until recurrence or unacceptable adverse events occur. The primary endpoint is duration of disease-free survival; secondary endpoints are the disease-free survival rate at 2 years, overall survival, health-related quality of life, and safety. DISCUSSION: Given the unmet need in the adjuvant treatment of intermediate-to-high-risk HNSCC, it is expected that LUX-Head & Neck 2 will provide new insights into treatment in this setting and might demonstrate the ability of afatinib to significantly improve disease-free survival compared with placebo. TRIAL REGISTRATION: ClinicalTrials.gov NCT01345669.
Abstract:
Peroxynitrite is a strong biological oxidant formed from the reaction between two free radicals, superoxide and nitric oxide. It inflicts serious damage on most biomolecules, including proteins, lipids and nucleic acids, either through direct oxidation or through the secondary generation of highly reactive free radicals. When such damage reaches a critical threshold, cells eventually die by necrosis or apoptosis. Excessive production of peroxynitrite is instrumental in the development of organ damage and dysfunction in conditions such as circulatory shock and ischemia-reperfusion. In such circumstances, various synthetic metalloporphyrins able to degrade peroxynitrite show important beneficial effects in animal models and might therefore represent novel pharmacological agents in the future.
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As the process is multivariate, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data, but geology did not prove to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate levels, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase was accompanied by the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top approach of increasing method complexity was adopted, and the results were analyzed together in order to find common definitions of the continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformations and simulations thus allowed the use of multi-Gaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted to modeling high-threshold categorization and to automation, whereas support vector machines (SVM) performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient decision making on indoor radon.
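Of the exploratory tools named above, the General Regression Neural Network reduces to a kernel-weighted average and is easy to sketch. The function signature below is ours, and in practice the bandwidth sigma would be tuned by cross-validation, in the spirit of the CVMF filter mentioned above:

import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma=1.0):
    """General Regression Neural Network (Nadaraya-Watson form): predict the
    radon level at new locations as a Gaussian-kernel-weighted average of the
    measured values, with bandwidth sigma controlling the neighborhood."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))    # (n_new, n_train) kernel weights
    return (K @ y_train) / K.sum(axis=1)    # weighted average per new point

KNN prediction follows the same pattern with the Gaussian kernel replaced by an indicator over the k nearest training points.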