953 results for Models and Principles


Relevance:

90.00%

Publisher:

Abstract:

The effect of environment on development and survival of pupae of the necrophagous fly Ophyra albuquerquei Lopes (Diptera, Muscidae). Species of Ophyra Robineau-Desvoidy, 1830 are found in decomposing bodies, usually in the fresh, bloated and decay stages. Ophyra albuquerquei Lopes, for example, can be found in animal carcasses. The influence of environmental factors on the puparia of O. albuquerquei had not previously been evaluated; this work was therefore motivated by the need for models that predict the development of a necrophagous insect as a function of abiotic factors. Colonies of O. albuquerquei were maintained in the laboratory to obtain pupae. On the tenth day of each month, 200 pupae, divided equally into 10 glass jars, were exposed to the environment and checked daily for adult emergence. The high survival rate observed suggests that the diets used for rearing the larvae and maintaining the adults were appropriate. The data were well fitted by robust generalized linear models, and, given the high survival observed, there were no interruptions of O. albuquerquei pupal development within the range of temperatures studied in southern Rio Grande do Sul.


Objectives In this study, we have investigated the effects of cannabidiol (CBD) on myocardial dysfunction, inflammation, oxidative/nitrative stress, cell death, and interrelated signaling pathways, using a mouse model of type I diabetic cardiomyopathy and primary human cardiomyocytes exposed to high glucose. Background Cannabidiol, the most abundant nonpsychoactive constituent of Cannabis sativa (marijuana) plant, exerts anti-inflammatory effects in various disease models and alleviates pain and spasticity associated with multiple sclerosis in humans. Methods Left ventricular function was measured by the pressure-volume system. Oxidative stress, cell death, and fibrosis markers were evaluated by molecular biology/biochemical techniques, electron spin resonance spectroscopy, and flow cytometry. Results Diabetic cardiomyopathy was characterized by declined diastolic and systolic myocardial performance associated with increased oxidative-nitrative stress, nuclear factor-kappa B and mitogen-activated protein kinase (c-Jun N-terminal kinase, p-38, p38 alpha) activation, enhanced expression of adhesion molecules (intercellular adhesion molecule-1, vascular cell adhesion molecule-1), tumor necrosis factor-alpha, markers of fibrosis (transforming growth factor-beta, connective tissue growth factor, fibronectin, collagen-1, matrix metalloproteinase-2 and -9), enhanced cell death (caspase 3/7 and poly[adenosine diphosphate-ribose] polymerase activity, chromatin fragmentation, and terminal deoxynucleotidyl transferase dUTP nick end labeling), and diminished Akt phosphorylation. Remarkably, CBD attenuated myocardial dysfunction, cardiac fibrosis, oxidative/nitrative stress, inflammation, cell death, and interrelated signaling pathways. Furthermore, CBD also attenuated the high glucose-induced increased reactive oxygen species generation, nuclear factor-kappa B activation, and cell death in primary human cardiomyocytes. 
Conclusions Collectively, these results coupled with the excellent safety and tolerability profile of CBD in humans, strongly suggest that it may have great therapeutic potential in the treatment of diabetic complications, and perhaps other cardiovascular disorders, by attenuating oxidative/nitrative stress, inflammation, cell death and fibrosis. (J Am Coll Cardiol 2010;56:2115-25) (C) 2010 by the American College of Cardiology Foundation.


The purpose of this paper is to study the diffusion and transformation of scientific information in everyday discussions. Based on rumour models and social representations theory, the impact of interpersonal communication and pre-existing beliefs on transmission of the content of a scientific discovery was analysed. In three experiments, a communication chain was simulated to investigate how laypeople make sense of a genetic discovery first published in a scientific outlet, then reported in a mainstream newspaper and finally discussed in groups. Study 1 (N=40) demonstrated a transformation of information when the scientific discovery moved along the communication chain. During successive narratives, scientific expert terminology disappeared while scientific information associated with lay terminology persisted. Moreover, the idea of a discovery of a faithfulness gene emerged. Study 2 (N=70) revealed that transmission of the scientific message varied as a function of attitudes towards genetic explanations of behaviour (pro-genetics vs. anti-genetics). Pro-genetics employed more scientific terminology than anti-genetics. Study 3 (N=75) showed that endorsement of genetic explanations was related to descriptive accounts of the scientific information, whereas rejection of genetic explanations was related to evaluative accounts of the information.


BACKGROUND: Estimates of the decrease in CD4(+) cell counts in untreated patients with human immunodeficiency virus (HIV) infection are important for patient care and public health. We analyzed CD4(+) cell count decreases in the Cape Town AIDS Cohort and the Swiss HIV Cohort Study. METHODS: We used mixed-effects models and joint models that allowed for the correlation between CD4(+) cell count decreases and survival, and we stratified analyses by the initial cell count (50-199, 200-349, 350-499, and 500-750 cells/microL). Results are presented as the mean decrease in CD4(+) cell count with 95% confidence intervals (CIs) during the first year after the initial CD4(+) cell count. RESULTS: A total of 784 South African (629 nonwhite) and 2030 Swiss (218 nonwhite) patients with HIV infection contributed 13,388 CD4(+) cell counts. Decreases in CD4(+) cell count were steeper in white patients, patients with higher initial CD4(+) cell counts, and older patients. Decreases ranged from a mean of 38 cells/microL (95% CI, 24-54 cells/microL) in nonwhite patients from the Swiss HIV Cohort Study 15-39 years of age with an initial CD4(+) cell count of 200-349 cells/microL to a mean of 210 cells/microL (95% CI, 143-268 cells/microL) in white patients in the Cape Town AIDS Cohort ≥40 years of age with an initial CD4(+) cell count of 500-750 cells/microL. CONCLUSIONS: Among both patients from Switzerland and patients from South Africa, CD4(+) cell count decreases were greater in white patients with HIV infection than they were in nonwhite patients with HIV infection.
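A minimal sketch of the mixed-effects idea: repeated CD4(+) counts per patient, modelled with a random intercept per patient and a fixed slope over time. The data are entirely synthetic (patient counts, decline range, and noise level are invented), and a simple random-intercept model stands in for the full joint models the study uses.

```python
# Hypothetical sketch: linear mixed-effects model of CD4 decline over the
# first year, with synthetic per-patient trajectories.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pid in range(80):
    base = rng.uniform(200, 750)          # initial CD4 count (cells/microL), assumed
    slope = -rng.uniform(40, 210)         # yearly decline, patient-specific, assumed
    for t in np.linspace(0.0, 1.0, 4):    # four counts in the first year
        rows.append({"patient": pid, "years": t,
                     "cd4": base + slope * t + rng.normal(0.0, 20.0)})
df = pd.DataFrame(rows)

# Random intercept per patient; fixed effect "years" is the mean decline.
fit = smf.mixedlm("cd4 ~ years", df, groups="patient").fit()
print(fit.params["years"])                # mean decline per year (negative)
```

The fixed-effect coefficient on `years` recovers the mean yearly decline across patients, which is the quantity reported per stratum in the abstract.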


TERMINOLOGY AND PRINCIPLES OF COMBINING ANTIPSYCHOTICS WITH A SECOND MEDICATION: The term "combination" includes virtually all the ways in which one medication may be added to another. The other commonly used terms are "augmentation", which implies an additive effect from adding a second medicine to that obtained from prescribing a first, and "add-on", which implies adding on to an existing, possibly effective treatment that, for one reason or another, cannot or should not be stopped. The issues that arise in all potential indications are: a) how long is it reasonable to wait to prove insufficiency of response to monotherapy; b) by what criteria should that response be defined; c) how optimal is the dose of the first monotherapy and, therefore, how confident can one be that its lack of effect is due to a truly inadequate response? Before one considers combination treatment, one or more of the following criteria should be met: a) monotherapy has been only partially effective on core symptoms; b) monotherapy has been effective on some concurrent symptoms but not others, for which a further medicine is believed to be required; c) a particular combination might be indicated de novo in some indications; d) the combination could improve tolerability, because two compounds may be employed below their individual dose thresholds for side effects. Regulators have been concerned primarily with a) and, in principle at least, c) above. In clinical practice, the use of combination treatment reflects the often unsatisfactory outcome of treatment with single agents. ANTIPSYCHOTICS IN MANIA: There is good evidence that most antipsychotics tested show efficacy in acute mania when added to lithium or valproate for patients showing no or only a partial response to lithium or valproate alone. Conventional 2-armed trial designs could benefit from a third antipsychotic monotherapy arm.
In the long-term treatment of bipolar disorder, in patients responding acutely to the addition of quetiapine to lithium or valproate, this combination reduces the subsequent risk of relapse into depression, mania or mixed states compared with monotherapy with lithium or valproate. Comparable data are not available for combinations with other antipsychotics. ANTIPSYCHOTICS IN MAJOR DEPRESSION: Some atypical antipsychotics have been shown to induce remission when added to an antidepressant (usually an SSRI or SNRI) in unipolar patients in a major depressive episode unresponsive to antidepressant monotherapy. Refractoriness is defined as at least 6 weeks without meeting an adequate pre-defined treatment response. Long-term data are not yet available to support continuing efficacy. SCHIZOPHRENIA: There is only limited evidence to support the combination of two or more antipsychotics in schizophrenia. Any monotherapy should be given at the maximal tolerated dose, and at least two antipsychotics with different action/tolerability profiles, as well as clozapine, should be tried as monotherapy before a combination is considered. The addition of a high-potency D2/3 antagonist to a low-potency antagonist like clozapine or quetiapine is the logical combination to treat positive symptoms, although further evidence from well-conducted clinical trials is needed. Mechanisms of action other than D2/3 blockade, and hence other combinations, might be more relevant for negative, cognitive or affective symptoms. OBSESSIVE-COMPULSIVE DISORDER: SSRI monotherapy has a moderate overall average benefit in OCD, and it can take as long as 3 months to judge that benefit. Antipsychotic addition may be considered in OCD with tic disorder and in refractory OCD. For OCD with poor insight (OCD with "psychotic features"), the treatment of choice should be a medium to high dose of an SSRI, and only in refractory cases might augmentation with antipsychotics be considered.
Augmentation with haloperidol and risperidone was found to be effective (symptom reduction of more than 35%) for patients with tics. For refractory OCD, there are data suggesting a specific role for haloperidol and risperidone as well, and some data with regard to potential therapeutic benefit with olanzapine and quetiapine. ANTIPSYCHOTICS AND ADVERSE EFFECTS IN SEVERE MENTAL ILLNESS: Cardio-metabolic risks in patients with severe mental illness, especially those treated with antipsychotic agents, are now much better recognized, and efforts to ensure improved physical health screening and prevention are becoming established.


This paper studies publication models and the means of accessing scientific literature in the current environment of digital communication and the web. The text introduces the concept of the journal article as a well-defined and stable unit within the publishing world, and as the nucleus on which professional and scholarly communication has been based since its beginnings in the 17th century. The transformation of scientific communication enabled by the digital world is analysed. Descriptions are provided of some of the practices undertaken by authors, research organisations, publishers and library-related institutions in response to the new possibilities opening up for articles, both as products and in their creation and distribution processes. These transformations affect the very nature of the article as a minimal unit (both unique and stable) of scientific communication. The article concludes by noting that under varying documentary forms of publisher aggregation and bibliographic control (sometimes simultaneous and even apparently contradictory) a more pluralistic type of scientific communication is flourishing. This pluralism offers: more possibilities for communication among authors; fewer levels of intermediaries, such as agents that intervene in and add value to the products; greater availability for users, both economically and from the point of view of access; and greater interaction and richness of content, thanks to the new hypertext and multimedia possibilities.


The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters close to the surface down to several hundreds of kilometers depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. 
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. The conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the base resistivity model used, therefore indicating that more effort should be given to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification. 
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
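The Legendre model-reduction idea can be sketched in one dimension: instead of one parameter per grid cell, the plume is described by a handful of Legendre coefficients. The 1-D profile, Gaussian plume shape, and degree-8 fit below are illustrative assumptions, not the thesis parameterization.

```python
# Sketch (assumed 1-D analogue): represent a tracer-plume profile by a few
# Legendre coefficients, shrinking the MCMC parameter space.
import numpy as np
from numpy.polynomial import legendre as L

x = np.linspace(-1, 1, 200)                  # normalized depth coordinate
plume = np.exp(-((x - 0.2) / 0.5) ** 2)      # assumed "true" conductivity anomaly

order = 8                                    # keep only 9 coefficients
coeffs = L.legfit(x, plume, order)           # least-squares Legendre fit
approx = L.legval(x, coeffs)

rms = np.sqrt(np.mean((plume - approx) ** 2))
print(len(coeffs), rms)                      # 9 parameters instead of 200 cells
```

Because the smooth plume is captured accurately by a few coefficients, an MCMC sampler only has to explore this reduced space, which is the motivation stated in the abstract.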


Peroxisome proliferator-activated receptor gamma (PPAR-gamma) plays a key role in adipocyte differentiation and insulin sensitivity. Its synthetic ligands, the thiazolidinediones (TZD), are used as insulin sensitizers in the treatment of type 2 diabetes. These compounds induce both adipocyte differentiation in cell culture models and promote weight gain in rodents and humans. Here, we report on the identification of a new synthetic PPARgamma antagonist, the phosphonophosphate SR-202, which inhibits both TZD-stimulated recruitment of the coactivator steroid receptor coactivator-1 and TZD-induced transcriptional activity of the receptor. In cell culture, SR-202 efficiently antagonizes hormone- and TZD-induced adipocyte differentiation. In vivo, decreasing PPARgamma activity, either by treatment with SR-202 or by invalidation of one allele of the PPARgamma gene, leads to a reduction of both high fat diet-induced adipocyte hypertrophy and insulin resistance. These effects are accompanied by a smaller size of the adipocytes and a reduction of TNFalpha and leptin secretion. Treatment with SR-202 also dramatically improves insulin sensitivity in the diabetic ob/ob mice. Thus, although we cannot exclude that its actions involve additional signaling mechanisms, SR-202 represents a new selective PPARgamma antagonist that is effective both in vitro and in vivo. Because it yields both antiobesity and antidiabetic effects, SR-202 may be a lead for new compounds to be used in the treatment of obesity and type 2 diabetes.


Most sedimentary modelling programs developed in recent years focus on either terrigenous or carbonate marine sedimentation. Nevertheless, only a few programs have attempted to consider mixed terrigenous-carbonate sedimentation, and most of these are two-dimensional, which is a major restriction since geological processes take place in 3D. This paper presents the basic concepts of a new 3D mathematical forward simulation model for clastic sediments, which was developed from SIMSAFADIM, a previous 3D carbonate sedimentation model. The new extended model, SIMSAFADIM-CLASTIC, simulates processes of autochthonous marine carbonate production and accumulation, together with clastic transport and sedimentation, in three dimensions for both carbonate and terrigenous sediments. Other models and modelling strategies may also provide realistic and efficient tools for prediction of the stratigraphic architecture and facies distribution of sedimentary deposits. However, SIMSAFADIM-CLASTIC is an innovative model that attempts to simulate different sediment types using a process-based approach, and is therefore a useful tool for 3D prediction of stratigraphic architecture and facies distribution in sedimentary basins. The model is applied to the Neogene Vallès-Penedès half-graben (western Mediterranean, NE Spain) to show the capacity of the program when applied to a realistic geological situation involving interactions between terrigenous clastics and carbonate sediments.
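As a hedged illustration of what "process-based" can mean here (this is not the SIMSAFADIM-CLASTIC algorithm, which the abstract does not specify), a toy one-dimensional diffusion step for clastic transport might look like the following; the diffusivity, grid, and point source are invented.

```python
# Toy process-based rule: topographic diffusion of a sediment pile,
# one explicit finite-difference step per call. Illustrative only.
import numpy as np

def diffuse(h, kappa=0.1, dt=1.0, dx=1.0):
    """One explicit step of dh/dt = kappa * d2h/dx2 (periodic boundaries)."""
    lap = (np.roll(h, 1) - 2 * h + np.roll(h, -1)) / dx**2
    return h + kappa * dt * lap

h = np.zeros(50)
h[25] = 1.0                    # point sediment source (assumed)
for _ in range(100):
    h = diffuse(h)
print(h.sum(), h.max())        # total mass is conserved; the pile spreads out
```

A full simulator couples many such transport and production rules in 3D; the point of the sketch is only that sediment moves by local physical rules rather than by prescribed geometry.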


BACKGROUND: Postmenopausal women with hormone receptor-positive early breast cancer have persistent, long-term risk of breast-cancer recurrence and death. Therefore, trials assessing endocrine therapies for this patient population need extended follow-up. We present an update of efficacy outcomes in the Breast International Group (BIG) 1-98 study at 8·1 years median follow-up. METHODS: BIG 1-98 is a randomised, phase 3, double-blind trial of postmenopausal women with hormone receptor-positive early breast cancer that compares 5 years of tamoxifen or letrozole monotherapy, or sequential treatment with 2 years of one of these drugs followed by 3 years of the other. Randomisation was done with permuted blocks, and stratified according to the two-arm or four-arm randomisation option, participating institution, and chemotherapy use. Patients, investigators, data managers, and medical reviewers were masked. The primary efficacy endpoint was disease-free survival (events were invasive breast cancer relapse, second primaries [contralateral breast and non-breast], or death without previous cancer event). Secondary endpoints were overall survival, distant recurrence-free interval (DRFI), and breast cancer-free interval (BCFI). The monotherapy comparison included patients randomly assigned to tamoxifen or letrozole for 5 years. In 2005, after a significant disease-free survival benefit was reported for letrozole as compared with tamoxifen, a protocol amendment facilitated the crossover to letrozole of patients who were still receiving tamoxifen alone; Cox models and Kaplan-Meier estimates with inverse probability of censoring weighting (IPCW) are used to account for selective crossover to letrozole of patients (n=619) in the tamoxifen arm. 
Comparison of sequential treatments to letrozole monotherapy included patients enrolled and randomly assigned to letrozole for 5 years, letrozole for 2 years followed by tamoxifen for 3 years, or tamoxifen for 2 years followed by letrozole for 3 years. Treatment has ended for all patients and detailed safety results for adverse events that occurred during the 5 years of treatment have been reported elsewhere. Follow-up is continuing for those enrolled in the four-arm option. BIG 1-98 is registered at ClinicalTrials.gov, number NCT00004205. FINDINGS: 8010 patients were included in the trial, with a median follow-up of 8·1 years (range 0-12·4). 2459 were randomly assigned to monotherapy with tamoxifen for 5 years and 2463 to monotherapy with letrozole for 5 years. In the four-arm option of the trial, 1546 were randomly assigned to letrozole for 5 years, 1548 to tamoxifen for 5 years, 1540 to letrozole for 2 years followed by tamoxifen for 3 years, and 1548 to tamoxifen for 2 years followed by letrozole for 3 years. At a median follow-up of 8·7 years from randomisation (range 0-12·4), letrozole monotherapy was significantly better than tamoxifen, whether by IPCW or intention-to-treat analysis (IPCW disease-free survival HR 0·82 [95% CI 0·74-0·92], overall survival HR 0·79 [0·69-0·90], DRFI HR 0·79 [0·68-0·92], BCFI HR 0·80 [0·70-0·92]; intention-to-treat disease-free survival HR 0·86 [0·78-0·96], overall survival HR 0·87 [0·77-0·999], DRFI HR 0·86 [0·74-0·998], BCFI HR 0·86 [0·76-0·98]). At a median follow-up of 8·0 years from randomisation (range 0-11·2) for the comparison of the sequential groups with letrozole monotherapy, there were no statistically significant differences in any of the four endpoints for either sequence. 
8-year intention-to-treat estimates (each with SE ≤1·1%) for letrozole monotherapy, letrozole followed by tamoxifen, and tamoxifen followed by letrozole were 78·6%, 77·8%, and 77·3% for disease-free survival; 87·5%, 87·7%, and 85·9% for overall survival; 89·9%, 88·7%, and 88·1% for DRFI; and 86·1%, 85·3%, and 84·3% for BCFI. INTERPRETATION: For postmenopausal women with endocrine-responsive early breast cancer, a reduction in breast cancer recurrence and mortality is obtained by letrozole monotherapy when compared with tamoxifen monotherapy. Sequential treatments involving tamoxifen and letrozole do not improve outcome compared with letrozole monotherapy, but might be useful strategies when considering an individual patient's risk of recurrence and treatment tolerability. FUNDING: Novartis, United States National Cancer Institute, International Breast Cancer Study Group.
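The weighted Kaplan-Meier machinery mentioned above (used with inverse-probability-of-censoring weights, IPCW, to correct for selective crossover) can be sketched with synthetic data. The survival and censoring distributions are invented, and the unit weights below stand where the estimated IPCW weights would go.

```python
# Hedged sketch: a weighted Kaplan-Meier estimator on synthetic data.
# With per-subject IPCW weights in `w`, this becomes the IPCW estimator.
import numpy as np

def weighted_km(time, event, weight):
    """Weighted Kaplan-Meier survival curve, evaluated at each event time."""
    order = np.argsort(time)
    time, event, weight = time[order], event[order], weight[order]
    curve, s = [], 1.0
    for t in np.unique(time[event == 1]):
        at_risk = weight[time >= t].sum()             # weighted risk set
        d = weight[(time == t) & (event == 1)].sum()  # weighted events at t
        s *= 1.0 - d / at_risk
        curve.append((t, s))
    return curve

rng = np.random.default_rng(1)
t = rng.exponential(8.0, 300)          # years to event (assumed distribution)
c = rng.uniform(0.0, 12.0, 300)        # censoring times (assumed distribution)
obs = np.minimum(t, c)
ev = (t <= c).astype(int)
w = np.ones_like(obs)                  # IPCW weights would replace these ones
curve = weighted_km(obs, ev, w)
print(curve[-1])                       # (last event time, survival estimate)
```

With all weights equal to one this reduces to the ordinary Kaplan-Meier estimate; upweighting subjects who resemble those censored by crossover is what removes the selection bias in the trial's analysis.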


The anaplastic lymphoma kinase (ALK) gene is overexpressed, mutated or amplified in most neuroblastoma (NB), a pediatric neural crest-derived embryonal tumor. The two most frequent mutations, ALK-F1174L and ALK-R1275Q, contribute to NB tumorigenesis in mouse models, and cooperate with MYCN in the oncogenic process. However, the precise role of activating ALK mutations or ALK-wt overexpression in NB tumor initiation needs further clarification. Human ALK-wt, ALK-F1174L, or ALK-R1275Q were stably expressed in murine neural crest progenitor cells (NCPC), MONC-1 or JoMa1, immortalized with v-Myc or tamoxifen-inducible Myc-ERT, respectively. While orthotopic implantations of MONC-1 parental cells in nude mice generated various tumor types, such as NB, osteo/chondrosarcoma, and undifferentiated tumors, owing to v-Myc oncogenic activity, MONC-1-ALK-F1174L cells only produced undifferentiated tumors. Furthermore, our data represent the first demonstration of ALK-wt transforming capacity, as ALK-wt expression in JoMa1 cells, like ALK-F1174L or ALK-R1275Q, in the absence of exogenous Myc-ERT activity, was sufficient to induce the formation of aggressive and undifferentiated neural crest cell-derived tumors, but not to drive NB development. Interestingly, JoMa1-ALK tumors and their derived cell lines upregulated endogenous Myc expression as a result of ALK activation, and both ALK and Myc activities were necessary to confer tumorigenic properties on tumor-derived JoMa1 cells in vitro.


Cleft palate is a common congenital disorder that affects up to 1 in 2,500 live human births and results in considerable morbidity to affected individuals and their families. The etiology of cleft palate is complex, with both genetic and environmental factors implicated. Mutations in the transcription factor-encoding genes p63 and interferon regulatory factor 6 (IRF6) have individually been identified as causes of cleft palate; however, a relationship between the key transcription factors p63 and IRF6 has not been determined. Here, we used both mouse models and human primary keratinocytes from patients with cleft palate to demonstrate that IRF6 and p63 interact epistatically during development of the secondary palate. Mice simultaneously carrying a heterozygous deletion of p63 and the Irf6 knockin mutation R84C, which causes cleft palate in humans, displayed ectodermal abnormalities that led to cleft palate. Furthermore, we showed that p63 transactivated IRF6 by binding to an upstream enhancer element; genetic variation within this enhancer element is associated with increased susceptibility to cleft lip. Our findings therefore identify p63 as a key regulatory molecule during palate development and provide a mechanism for the cooperative role of p63 and IRF6 in orofacial development in mice and humans.


The complexity of sleep-wake regulation, in addition to the many environmental influences, includes genetic predisposing factors, which are beginning to be discovered. Most of the current progress in the study of sleep genetics comes from animal models (dogs, mice, and Drosophila). Multiple approaches using both animal models and different genetic techniques are needed to follow the segregation and ultimately to identify 'sleep genes' and the molecular bases of sleep disorders. Recent progress in molecular genetics and the development of a detailed human genome map have already led to the identification of genetic factors in several complex disorders. Only a few genes are known for which a mutation causes a sleep disorder. However, single-gene disorders are rare, and most common disorders are complex in terms of their genetic susceptibility, environmental factors, gene-gene, and gene-environment interactions. We review here the current progress in the genetics of normal and pathological sleep and suggest a few future perspectives.


The proportion of the population living in or around cities is higher than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems like air pollution, land waste or noise, as well as health problems, are the result of this still ongoing process. Urban planners have to find solutions to these complex problems, while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an ever-increasing quantity of socio-economic and environmental data is being acquired. In order to better understand the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming the geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are also discussed.
A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scale laws are used for characterising urban clusters. In a final section, population evolution is modelled using a model close to the well-established gravity model. The work covers quite a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
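The thesis describes its population model only as "close to the well-established gravity model". The classic form of that spatial-interaction model, T_ij = k · P_i · P_j / d_ij^beta, can be sketched as follows; the function name, zones and parameter values below are hypothetical and are not the thesis's actual calibration:

```python
import numpy as np

def gravity_flows(pop, dist, k=1.0, beta=2.0):
    """Classic gravity-model interaction matrix.

    pop  : population of each zone (length-n array)
    dist : n x n matrix of inter-zone distances
    T_ij = k * P_i * P_j / d_ij**beta, with the diagonal set to 0.
    """
    pop = np.asarray(pop, dtype=float)
    dist = np.asarray(dist, dtype=float)
    with np.errstate(divide="ignore"):
        T = k * np.outer(pop, pop) / dist ** beta
    np.fill_diagonal(T, 0.0)  # no self-interaction
    return T

# three hypothetical zones with populations and pairwise distances
flows = gravity_flows([100_000, 50_000, 20_000],
                      [[0, 10, 20],
                       [10, 0, 15],
                       [20, 15, 0]])
```

With beta = 2 the interaction decays with the square of distance, mirroring Newtonian gravity; calibrating k and beta against observed flows (e.g. commuting counts) is what turns the sketch into an actual model.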


Executive Summary The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields of economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings.
The empirical investigation carried out to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization with respect to only, for example, the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, that is, the sequence of expected shortfalls for a range of quantiles.
Because the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio returns distribution that second-order stochastically dominates those obtained from virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
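The second-order stochastic dominance check described above, i.e. pointwise comparison of empirical absolute Lorenz curves (cumulative integrals of the quantile function), can be sketched as follows. This is a simplified empirical version with hypothetical function names, not the chapter's exact procedure:

```python
import numpy as np

def absolute_lorenz(returns, grid):
    """Empirical absolute Lorenz curve L(p): the integral of the
    quantile function from 0 to p, i.e. the cumulative sum of the
    sorted returns, evaluated (by linear interpolation) on 'grid'."""
    r = np.sort(np.asarray(returns, dtype=float))
    n = len(r)
    cum = np.concatenate([[0.0], np.cumsum(r)]) / n  # L at p = k/n
    p_points = np.arange(n + 1) / n
    return np.interp(grid, p_points, cum)

def ssd_dominates(x, y, n_grid=99):
    """True if x (empirically) second-order stochastically dominates y:
    the absolute Lorenz curve of x lies weakly above that of y at every
    interior quantile level of the grid."""
    grid = np.linspace(0.0, 1.0, n_grid + 2)[1:-1]   # interior levels only
    return bool(np.all(absolute_lorenz(x, grid) >= absolute_lorenz(y, grid)))

# sanity check: a return distribution shifted up by 1 dominates the original
rng = np.random.default_rng(0)
base = rng.normal(0.0, 1.0, 1000)
```

Note that L(p) here equals p times the lower-tail conditional mean, so comparing the two curves pointwise is equivalent to comparing the sequences of expected shortfalls across quantile levels, as the chapter describes.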