48 results for Three Generic Strategies


Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Major depression, although frequent in primary care, is commonly hidden behind multiple physical complaints that are often the first and only reason for patient consultation. Major depression can be screened by two validated questions that are easier to use in primary care than the full DSM-IV criteria. A third question, called the "help" question, improves the specificity without apparently decreasing the sensitivity of this screening procedure. We validated the abbreviated screening procedure for major depression with and without the "help" question in primary care patients managed for a physical complaint. METHODS: This diagnostic accuracy study used data from a cohort study called SODA (for SOmatisation, Depression, Anxiety) conducted by 24 general practitioners (GPs) in western Switzerland that included patients over 18 years of age with at least one physical complaint at the index consultation. Major depression was identified with the full Patient Health Questionnaire. GPs were asked to screen patients for major depression with the three screening questions one year after inclusion. RESULTS: Out of 937 patients with at least one physical complaint, 751 were eligible one year after the index consultation. Major depression was diagnosed in 69/724 (9.5%) patients. The sensitivity and specificity of the two-question method alone were 91.3% (95% confidence interval 81.4-96.4%) and 65.0% (95% confidence interval 61.2-68.6%), respectively. Adding the "help" question decreased the sensitivity (59.4%; 95% confidence interval 47.0-70.9%) but improved the specificity (88.2%; 95% confidence interval 85.4-90.5%) of the three-question method. CONCLUSIONS: The use of two screening questions for major depression was associated with high sensitivity and low specificity in primary care patients presenting with a physical complaint. Adding the "help" question improved the specificity but clearly decreased the sensitivity: when using the "help" question, four out of ten patients with depression will be missed, compared to only one out of ten with the two-question method. Therefore, the "help" question is not useful as a screening question, but may help in discussing management strategies.
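As a rough illustration of how accuracy figures like these can be derived, the following Python sketch computes sensitivity, specificity and Wilson 95% confidence intervals from a 2x2 screening table; the counts are back-calculated approximately from the abstract's percentages and are not the original SODA data.

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (centre - half, centre + half)

def screening_performance(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Sensitivity and specificity with Wilson 95% CIs from a 2x2 table."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
    }

# Approximate counts back-calculated from the reported percentages (illustrative only):
# 69 depressed and 655 non-depressed patients, 63 and 229 of whom screened positive.
print(screening_performance(tp=63, fn=6, tn=426, fp=229))
```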

Relevance:

30.00%

Publisher:

Abstract:

Glucocorticoids are widely used in medicine and are associated with numerous complications. Whenever possible, dosage reduction or treatment withdrawal should be considered as soon as possible, depending on the underlying disease being treated. Administration of glucocorticoids induces a physiological negative feedback on the hypothalamic-pituitary-adrenal (HPA) axis, and three clinical situations can be distinguished during treatment withdrawal: reactivation of the disease for which the glucocorticoids were prescribed, acute adrenal insufficiency, and steroid withdrawal syndrome. Acute adrenal insufficiency is a feared but probably rare complication. It is usually seen during stress situations and can be observed long after steroid withdrawal. There is no good predictive marker to anticipate acute adrenal insufficiency, and clinical evaluation of the patient remains a key element of its diagnosis. If adrenal insufficiency is suspected, HPA suppression can be assessed with dynamic tests. During stress situations, steroid administration is recommended depending on the severity of the stress.

Relevance:

30.00%

Publisher:

Abstract:

Background: Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences the models and their geographic predictions. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predicted geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors were predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches in which pseudo-absences are selected from low-suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results: Models built with true absences had the best predictive power, best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion: If ecologists wish to build parsimonious GLMs that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences and perform model selection based on an information-theoretic approach. However, the resulting models can be expected to have limited fit.
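A minimal sketch of strategy (b) as described above, random background pseudo-absences combined with logistic regression and AIC-based model selection, might look as follows; the simulated species, predictors and sample sizes are placeholders, not the study's actual virtual species.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulate a background of candidate cells with three environmental predictors.
n_cells = 5000
env = rng.normal(size=(n_cells, 3))  # e.g. temperature, rainfall, slope (hypothetical)
suitability = 1 / (1 + np.exp(-(1.5 * env[:, 0] - 1.0 * env[:, 1])))  # "true" species response
presence_cells = np.where(rng.random(n_cells) < suitability)[0]

# Strategy (b): presences plus pseudo-absences drawn at random from the background.
pseudo_absence_cells = rng.choice(n_cells, size=len(presence_cells), replace=False)
rows = np.concatenate([presence_cells, pseudo_absence_cells])
y = np.concatenate([np.ones(len(presence_cells)), np.zeros(len(pseudo_absence_cells))])

# Compare candidate models by AIC: the "true" predictors versus all three.
for cols, label in [([0, 1], "true model"), ([0, 1, 2], "full model")]:
    X = sm.add_constant(env[rows][:, cols])
    fit = sm.Logit(y, X).fit(disp=0)
    print(f"{label}: AIC = {fit.aic:.1f}")
```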

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: While there is interest in measuring the satisfaction of patients discharged from psychiatric hospitals, it might be important to determine whether surveys of psychiatric patients should employ generic or psychiatry-specific instruments. The aim of this study was to compare two psychiatry-specific questionnaires and one generic questionnaire assessing patients' satisfaction after hospitalisation in a psychiatric hospital. METHODS: We randomised adult patients discharged from two Swiss psychiatric university hospitals between April and September 2004 to receive one of three instruments: the Saphora-Psy questionnaire, the Perceptions of Care survey questionnaire or the Picker Institute questionnaire for acute care hospitals. In addition to comparing response rates, completion times, mean numbers of missing items and mean ceiling effects, we asked patients to answer ten evaluation questions about the questionnaire they had just completed. RESULTS: 728 out of 1550 eligible patients (47%) participated in the study. Across questionnaires, response rates were similar (Saphora-Psy: 48.5%, Perceptions of Care: 49.9%, Picker: 43.4%; P = 0.08), average completion time was lowest for the Perceptions of Care questionnaire (minutes: Saphora-Psy: 17.7, Perceptions of Care: 13.7, Picker: 17.5; P = 0.005), the Saphora-Psy questionnaire had the largest mean proportion of missing responses (Saphora-Psy: 7.1%, Perceptions of Care: 2.8%, Picker: 4.0%; P < 0.001) and the Perceptions of Care questionnaire showed the highest ceiling effect (Saphora-Psy: 17.1%, Perceptions of Care: 41.9%, Picker: 36.3%; P < 0.001). There were no differences in the patients' evaluation of the questionnaires. CONCLUSION: Despite differences in the intended target population, content, layout and length of the questionnaires, none appeared to be obviously better based on our comparison. All three presented advantages and drawbacks and could be used for the satisfaction evaluation of psychiatric inpatients. However, if comparison across medical services or hospitals is desired, using a generic questionnaire might be advantageous.
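For illustration, a comparison of response rates across the three questionnaire arms could be carried out with a chi-square test on a contingency table, as sketched below; the counts are hypothetical, chosen only to be broadly in line with the reported totals.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical responders / non-responders per questionnaire arm (illustrative only).
counts = np.array([
    [250, 267],  # Saphora-Psy: responded, did not respond
    [258, 259],  # Perceptions of Care
    [220, 296],  # Picker
])
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```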

Relevance:

30.00%

Publisher:

Abstract:

Climate change has created the need for new strategies in conservation planning that account for the dynamics of the factors threatening endangered species. Here we assessed the threat that climate change poses to the European otter, a flagship species for freshwater ecosystems, considering how current conservation areas will perform in preserving the species in a climatically changed future. We used an ensemble forecasting approach considering six modelling techniques applied to eleven subsets of otter occurrences across Europe. We performed a pseudo-independent and an internal evaluation of predictions. Future projections of species distribution were made considering the A2 and B2 scenarios for 2080 across three climate models: CCCMA-CGCM2, CSIRO-MK2 and HCCPR HAD-CM3. The current and the predicted otter distributions were used to identify priority areas for the conservation of the species, and were overlaid on the existing network of protected areas. Our projections show that climate change may profoundly reshuffle the otter's potential distribution in Europe, with important differences between the two scenarios we considered. Overall, the priority areas for conservation of the otter in Europe appear to be unevenly covered by the existing network of protected areas, with the current conservation efforts being insufficient in most cases. For better conservation, the existing protected areas should be integrated within a more general conservation and management strategy incorporating climate change projections. Given the important role that the otter plays in freshwater habitats, our study further highlights the potential sensitivity of freshwater habitats in Europe to climate change.
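The ensemble-forecasting logic can be sketched in a few lines: predictions from several modelling techniques are averaged into a consensus suitability map, thresholded, and compared against a protected-area mask. All arrays, grids and thresholds below are hypothetical, not the study's data.

```python
import numpy as np

def ensemble_consensus(predictions: list[np.ndarray]) -> np.ndarray:
    """Mean habitat suitability across modelling techniques (simple unweighted consensus)."""
    return np.mean(np.stack(predictions), axis=0)

def protected_coverage(suitability: np.ndarray, protected: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of predicted suitable cells that fall inside protected areas."""
    suitable = suitability >= threshold
    return float(protected[suitable].mean()) if suitable.any() else 0.0

# Hypothetical example: six model outputs over a 100x100 grid, 15% of cells protected.
rng = np.random.default_rng(1)
preds = [rng.random((100, 100)) for _ in range(6)]
protected = rng.random((100, 100)) < 0.15
consensus = ensemble_consensus(preds)
print(f"Share of suitable habitat under protection: {protected_coverage(consensus, protected):.1%}")
```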

Relevance:

30.00%

Publisher:

Abstract:

Quantitative bone ultrasound (QUS) measurements can assess fracture risk in the elderly. We compared three QUS devices and their association with nonvertebral fracture history in 7562 Swiss women 70-80 years of age. The association with nonvertebral fracture history was stronger for heel QUS than for phalangeal QUS. INTRODUCTION: Because of the high morbidity and mortality associated with osteoporotic fractures, it is essential to detect subjects at risk for such fractures with screening methods. Because quantitative bone ultrasound (QUS) discriminated subjects with osteoporotic fractures from controls in several cross-sectional studies and predicted fractures in prospective studies, QUS could be more practical than DXA for screening. MATERIAL AND METHODS: This cross-sectional and retrospective multicenter (10 centers) study was performed to compare three QUS devices (two heel devices, the Achilles+ [GE-Lunar] and the Sahara [Hologic], and one phalangeal device, the DBM Sonic 1200 [IGEA]) by determining, with logistic regression, the nonvertebral fracture odds ratio (OR) in a sample of 7562 Swiss women, 75.3 +/- 3.1 years of age. The two heel QUS devices measured broadband ultrasound attenuation (BUA) and speed of sound (SOS). In addition, the Achilles+ calculated the stiffness index (SI) and the Sahara calculated the quantitative ultrasound index (QUI) from BUA and SOS. The DBM Sonic 1200 measured the amplitude-dependent SOS (AD-SOS). RESULTS: Eighty-six women had a history of a traumatic hip fracture after the age of 50, 1594 had a history of forearm fracture, and 2016 had other nonvertebral fractures. No fracture history was reported by 3866 women. Discrimination for hip fracture was higher than for the other nonvertebral fractures. The two heel QUS devices had a significantly higher discriminative power than the phalangeal QUS, with standardized ORs, adjusted for age and body mass index, ranging from 2.1 to 2.7 (95% CI = 1.6, 3.5) compared with 1.4 (95% CI = 1.1, 1.7) for the AD-SOS of the DBM Sonic 1200. CONCLUSION: This study showed a strong association between heel QUS and hip fracture history in elderly Swiss women. This could justify the integration of QUS among screening strategies for identifying elderly women at risk for osteoporotic fractures.
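A hedged sketch of the kind of analysis reported: logistic regression of fracture history on a standardized QUS parameter, adjusted for age and body mass index, so that exp(-beta) gives the OR per standard deviation decrease. The simulated cohort and coefficients are illustrative only, not the study data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000

# Simulated cohort: QUS stiffness index, age and BMI, plus fracture history (illustrative only).
stiffness = rng.normal(75, 15, n)
age = rng.normal(75, 3, n)
bmi = rng.normal(26, 4, n)
logit = -3.0 - 0.04 * (stiffness - 75) + 0.05 * (age - 75)
fracture = rng.random(n) < 1 / (1 + np.exp(-logit))

# Standardize the QUS measure so exp(-beta) is the OR per SD *decrease*, adjusted for age and BMI.
z_stiff = (stiffness - stiffness.mean()) / stiffness.std()
X = sm.add_constant(np.column_stack([z_stiff, age, bmi]))
fit = sm.Logit(fracture.astype(float), X).fit(disp=0)
or_per_sd_decrease = np.exp(-fit.params[1])
ci = np.exp(-fit.conf_int()[1][::-1])  # flip bounds because of the sign change
print(f"Standardized OR per SD decrease: {or_per_sd_decrease:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```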

Relevance:

30.00%

Publisher:

Abstract:

Background: Several studies have shown that treatment with HMG-CoA reductase inhibitors (statins) can reduce coronary heart disease (CHD) rates. However, the cost effectiveness of statin treatment in the primary prevention of CHD has not been fully established. Objective: To estimate the costs of CHD prevention using statins in Switzerland according to different guidelines, over a 10-year period. Methods: The overall 10-year costs, the cost of one CHD death averted, and the cost of 1 year of life without CHD were computed for the European Society of Cardiology (ESC), the International Atherosclerosis Society (IAS), and the US Adult Treatment Panel III (ATP-III) guidelines. Sensitivity analysis was performed by varying the number of CHD events prevented and the costs of treatment. Results: Using an inflation rate of medical costs of 3%, a single yearly consultation, a single total cholesterol measurement per year, and a generic statin, the overall 10-year costs of the ESC, IAS, and ATP-III strategies were 2.2, 3.4, and 4.1 billion Swiss francs (SwF; SwF1 = $US0.97). In this scenario, the average cost for 1 year of life gained was SwF352, SwF421, and SwF485 thousand, respectively, and it was always higher in women than in men. In men, the average cost for 1 year of life without CHD was SwF30.7, SwF42.5, and SwF51.9 thousand for the ESC, IAS, and ATP-III strategies, respectively, and decreased with age. Statin drug costs represented between 45% and 68% of the overall preventive cost. Varying the cost of statins, inflation rates, or the number of fatal and non-fatal cases of CHD averted showed the ESC guidelines to be the most cost effective. Conclusion: The cost of CHD prevention using statins depends on the guidelines used. The ESC guidelines appear to yield the lowest costs per year of life gained free of CHD.
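A minimal sketch of the arithmetic behind such figures, assuming yearly prevention costs inflated at 3% and summed over 10 years, then divided by the health outcome of interest. The population size, per-person cost and life-years gained below are hypothetical, not the values used in the study.

```python
def ten_year_cost(yearly_cost_chf: float, inflation: float = 0.03, years: int = 10) -> float:
    """Total cost of a prevention strategy over `years`, with medical-cost inflation applied yearly."""
    return sum(yearly_cost_chf * (1 + inflation) ** t for t in range(years))

def cost_per_outcome(total_cost: float, outcome_units: float) -> float:
    """Average cost per unit of health outcome (e.g. per year of life gained)."""
    return total_cost / outcome_units

# Hypothetical strategy: 200,000 people treated at SwF 960 per person-year
# (consultation + cholesterol test + generic statin), yielding 6,000 life-years gained.
total = ten_year_cost(200_000 * 960)
print(f"10-year cost: SwF {total / 1e9:.2f} billion")
print(f"Cost per life-year gained: SwF {cost_per_outcome(total, 6_000) / 1e3:.0f} thousand")
```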

Relevance:

30.00%

Publisher:

Abstract:

Over thirty years ago, Leamer (1983) - among many others - expressed doubts about the quality and usefulness of empirical analyses for the economic profession by stating that "hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else's data analyses seriously" (p. 37). Improvements in data quality, more robust estimation methods and the evolution of better research designs seem to make that assertion no longer justifiable (see Angrist and Pischke (2010) for a recent response to Leamer's essay). The economic profession and policy makers alike often rely on empirical evidence as a means to investigate policy-relevant questions. The approach of using scientifically rigorous and systematic evidence to identify policies and programs that are capable of improving policy-relevant outcomes is known under the increasingly popular notion of evidence-based policy. Evidence-based economic policy often relies on randomized or quasi-natural experiments in order to identify causal effects of policies. These can require relatively strong assumptions or raise concerns of external validity. In the context of this thesis, potential concerns are, for example, endogeneity of policy reforms with respect to the business cycle in the first chapter, the trade-off between precision and bias in the regression-discontinuity setting in chapter 2, or non-representativeness of the sample due to self-selection in chapter 3. While the identification strategies are very useful to gain insights into the causal effects of specific policy questions, transforming the evidence into concrete policy conclusions can be challenging. Policy development should therefore rely on the systematic evidence of a whole body of research on a specific policy question rather than on a single analysis. In this sense, this thesis cannot and should not be viewed as a comprehensive analysis of specific policy issues but rather as a first step towards a better understanding of certain aspects of a policy question. The thesis applies new and innovative identification strategies to policy-relevant and topical questions in the fields of labor economics and behavioral environmental economics. Each chapter relies on a different identification strategy. In the first chapter, we employ a difference-in-differences approach to exploit the quasi-experimental change in the entitlement to the maximum unemployment benefit duration to identify the medium-run effects of reduced benefit durations on post-unemployment outcomes. Shortening benefit duration carries a double dividend: it generates fiscal benefits without deteriorating the quality of job matches. On the contrary, shortened benefit durations improve medium-run earnings and employment, possibly through containing the negative effects of skill depreciation or stigmatization. While the first chapter provides only indirect evidence on the underlying behavioral channels, in the second chapter I develop a novel approach that makes it possible to learn about the relative importance of the two key margins of job search - reservation wage choice and search effort. In the framework of a standard non-stationary job search model, I show how the exit rate from unemployment can be decomposed in a way that is informative on reservation wage movements over the unemployment spell.
The empirical analysis relies on a sharp discontinuity in unemployment benefit entitlement, which can be exploited in a regression-discontinuity approach to identify the effects of extended benefit durations on unemployment and survivor functions. I find evidence that points to an important role of reservation wage choices in job search behavior. This can have direct implications for the optimal design of unemployment insurance policies. The third chapter - while thematically detached from the other chapters - addresses one of the major policy challenges of the 21st century: climate change and resource consumption. Many governments have recently put energy efficiency at the top of their agendas. While pricing instruments aimed at regulating energy demand have often been found to be short-lived and difficult to enforce politically, the focus of energy conservation programs has shifted towards behavioral approaches - such as the provision of information or social norm feedback. The third chapter describes a randomized controlled field experiment in which we discuss the effectiveness of different types of feedback on residential electricity consumption. We find that detailed and real-time feedback caused persistent electricity reductions on the order of 3 to 5% of daily electricity consumption. Social norm information can also generate substantial electricity savings when designed appropriately. The findings suggest that behavioral approaches constitute an effective and relatively cheap way of improving residential energy efficiency.
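For the first chapter's identification strategy, a canonical two-period difference-in-differences regression could be set up as sketched below with the statsmodels formula API; the simulated data, variable names and effect size are placeholders rather than the thesis's administrative data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 4000

# Simulated worker-level data: half the sample is affected by the benefit-duration cut ("treated"),
# with outcomes observed before and after the reform (illustrative only).
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})
true_effect = 150.0  # hypothetical medium-run earnings gain from shorter benefit duration
df["earnings"] = (
    3000 + 200 * df["treated"] + 100 * df["post"]
    + true_effect * df["treated"] * df["post"]
    + rng.normal(0, 400, n)
)

# The coefficient on treated:post is the difference-in-differences estimate.
fit = smf.ols("earnings ~ treated * post", data=df).fit(cov_type="HC1")
print(fit.params["treated:post"], fit.bse["treated:post"])
```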

Relevance:

30.00%

Publisher:

Abstract:

Abstract: Monoclonal antibody (mAb) therapy has become an integral part of different treatments of lymphomas and leukaemias. In this study, we describe three murine mAbs directed against the CD5, CD71 and HLA-DR antigens expressed on chronic lymphocytic leukaemia cells (CLL).
In vitro, radiolabeled purified mAbs showed good specific binding on live target cells. Anti-CD71 mAb inhibited proliferation of most cell lines with an accumulation of responding cells in early S-phase of the cell cycle, but without induction of apoptosis. Anti-HLA-DR mAb showed proliferation inhibition of leukaemia JOK1-5.3 and lymphoid Daudi cells, associated with cell aggregation, but again no specific sign of apoptosis was observed. Anti-CD5 mAb did not show any growth inhibitory effect in vitro. In vivo, in a model of SCID mice grafted i.p. with JOK1-5.3 cells, injection of individual mAbs induced significant prolongation of median survival, up to complete inhibition of tumour growth in some mice. The combination of anti-CD5 with anti-HLA-DR or anti-CD71, evaluated in an early treatment setting, completely inhibited tumour growth in most mice, with a significant efficacy enhancement as compared to the mAbs used as single agents. Previous reports described the improved efficacy of mAb therapy when combined with cytokines such as IL-2. Relying further on the improved efficacy of IL-2 when administered as an immune complex with an anti-IL-2 mAb, we evaluated the anti-tumour effect of the IL-2/anti-IL-2 complex alone or combined with rituximab in subcutaneous (BL60.2, Daudi, Ramos) or i.p. (JOK1-5.3) tumour models in SCID mice. The IL-2/anti-IL-2 complex demonstrated an anti-tumour effect in BL60.2 and Daudi grafted SCID mice. Combination of the IL-2/anti-IL-2 treatment with rituximab showed increased efficacy as compared to rituximab alone in BL60.2 grafted mice. However, no difference was observed with the IL-2/anti-IL-2 complex alone in these experiments. Finally, we evaluated the feasibility of producing bispecific antibodies (bsAbs) using a trifunctional coupling agent called TMEA. In preliminary experiments coupling rituximab with Herceptin Fab' fragments, we obtained dimers (~100 kDa) and trimers (~150 kDa), as observed on an SDS-PAGE gel. This method allowed us to produce bsAbs with one Fab' fragment of one specificity and one or two Fab' fragments of the second specificity. An anti-CD5/anti-CD20 bsAb was shown to bind its target antigens either independently or simultaneously. In conclusion, these data show that the three mAbs were all able to induce significant growth inhibition of the JOK1-5.3 cell line in vivo, and that efficacy was enhanced when they were used in combination. The IL-2/anti-IL-2 complex displayed anti-tumour efficacy in vivo. Further evaluation is necessary to define the most favourable combination to improve mAb therapy. BsAbs were produced using the trifunctional agent, yielding antibody fragments with relatively good binding. The poor yield obtained with such chemical couplings limited the use of these constructs in preclinical experiments.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation focuses on the strategies consumers use when making purchase decisions. It is organized in two main parts, one centering on descriptive and the other on applied decision making research. In the first part, a new process tracing tool called InterActive Process Tracing (IAPT) is presented, which I developed to investigate the nature of consumers' decision strategies. This tool is a combination of several process tracing techniques, namely Active Information Search, Mouselab, and retrospective verbal protocol. To validate IAPT, two experiments on mobile phone purchase decisions were conducted where participants first repeatedly chose a mobile phone and then were asked to formalize their decision strategy so that it could be used to make choices for them. The choices made by the identified strategies correctly predicted the observed choices in 73% (Experiment 1) and 67% (Experiment 2) of the cases. Moreover, in Experiment 2, Mouselab and eye tracking were directly compared with respect to their impact on information search and strategy description. Only minor differences were found between these two methods. I conclude that IAPT is a useful research tool to identify choice strategies, and that using eye tracking technology did not increase its validity beyond that gained with Mouselab. In the second part, a prototype of a decision aid is introduced that was developed building in particular on the knowledge about consumers' decision strategies gained in Part I. This decision aid, which is called the InterActive Choice Aid (IACA), systematically assists consumers in their purchase decisions. To evaluate the prototype regarding its perceived utility, an experiment was conducted where IACA was compared to two other prototypes that were based on real-world consumer decision aids. All three prototypes differed in the number and type of tools they provided to facilitate the process of choosing, ranging from low (Amazon) to medium (Sunrise/dpreview) to high functionality (IACA). Overall, participants slightly preferred the prototype of medium functionality, and this prototype was also rated best on the dimensions of understandability and ease of use. IACA was rated best regarding the two dimensions of ease of elimination and ease of comparison of alternatives. Moreover, participants' choices were more in line with the normatively oriented weighted additive strategy when they used IACA than when they used the medium functionality prototype. The low functionality prototype was the least preferred overall. It is concluded that consumers can and will benefit from highly functional decision aids like IACA, but only when these systems are easy to understand and to use.
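The weighted additive strategy mentioned above can be sketched in a few lines: each alternative's score is the weighted sum of its attribute values, and the alternative with the highest score is chosen. The phone attributes and weights below are hypothetical.

```python
import numpy as np

def weighted_additive_choice(attribute_values: np.ndarray, weights: np.ndarray) -> int:
    """Return the index of the alternative with the highest weighted sum of attribute values.

    attribute_values: (n_alternatives, n_attributes) matrix, attributes scaled to a common range.
    weights: importance weights, one per attribute.
    """
    scores = attribute_values @ weights
    return int(np.argmax(scores))

# Hypothetical mobile-phone choice: 4 phones rated 0-10 on price, camera, battery, screen.
phones = np.array([
    [7, 5, 6, 8],
    [4, 9, 7, 7],
    [8, 6, 9, 5],
    [6, 7, 7, 7],
])
weights = np.array([0.4, 0.3, 0.2, 0.1])  # stated importance of each attribute
print("WADD choice: phone", weighted_additive_choice(phones, weights))
```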

Relevance:

30.00%

Publisher:

Abstract:

We present strategies for chemical shift assignments of large proteins by magic-angle spinning solid-state NMR, using the 21-kDa disulfide-bond-forming enzyme DsbA as prototype. Previous studies have demonstrated that complete de novo assignments are possible for proteins up to approximately 17 kDa, and partial assignments have been performed for several larger proteins. Here we show that combinations of isotopic labeling strategies, high field correlation spectroscopy, and three-dimensional (3D) and four-dimensional (4D) backbone correlation experiments yield highly confident assignments for more than 90% of backbone resonances in DsbA. Samples were prepared as nanocrystalline precipitates by a dialysis procedure, resulting in heterogeneous linewidths below 0.2 ppm. Thus, high magnetic fields, selective decoupling pulse sequences, and sparse isotopic labeling all improved spectral resolution. Assignments by amino acid type were facilitated by particular combinations of pulse sequences and isotopic labeling; for example, transferred echo double resonance experiments enhanced sensitivity for Pro and Gly residues; [2-(13)C]glycerol labeling clarified Val, Ile, and Leu assignments; in-phase anti-phase correlation spectra enabled interpretation of otherwise crowded Glx/Asx side-chain regions; and 3D NCACX experiments on [2-(13)C]glycerol samples provided unique sets of aromatic (Phe, Tyr, and Trp) correlations. Together with high-sensitivity CANCOCA 4D experiments and CANCOCX 3D experiments, unambiguous backbone walks could be performed throughout the majority of the sequence. At 189 residues, DsbA represents the largest monomeric unit for which essentially complete solid-state NMR assignments have so far been achieved. These results will facilitate studies of nanocrystalline DsbA structure and dynamics and will enable analysis of its 41-kDa covalent complex with the membrane protein DsbB, for which we demonstrate a high-resolution two-dimensional (13)C-(13)C spectrum.

Relevance:

30.00%

Publisher:

Abstract:

Preface: In this thesis we study several questions related to transaction data measured at an individual level. The questions are addressed in the three essays that constitute this thesis. In the first essay we use tick-by-tick data to estimate non-parametrically the jump process of 37 large stocks traded on the Paris Stock Exchange, and of the CAC 40 index. We separate total daily returns into three components (trading continuous, trading jump, and overnight), and we characterize each one of them. We estimate at the individual and index levels the contribution of each return component to total daily variability. For the index, the contribution of jumps is smaller and is compensated by the larger contribution of overnight returns. We test formally that individual stocks jump more frequently than the index, and that they do not respond independently to the arrival of news. Finally, we find that daily jumps are larger when their arrival rates are larger. At the contemporaneous level there is a strong negative correlation between jump frequency and the trading activity measures. The second essay studies the general properties of the trade- and volume-duration processes for two stocks traded on the Paris Stock Exchange, one very illiquid and one relatively liquid. We estimate an autoregressive gamma process, introduced by Gouriéroux and Jasiak, whose conditional distribution belongs to the family of non-central gamma distributions (up to a scale factor). We also evaluate the ability of the process to fit the data, using the Diebold, Gunther and Tay (1998) test, the capacity of the model to reproduce the moments of the observed data, and the empirical serial and partial serial correlation functions. We establish that the model describes correctly the trade-duration process of illiquid stocks, but has problems fitting the trade-duration process of liquid stocks, which presents long-memory characteristics. When the model is adjusted to volume durations, it successfully fits the data. In the third essay we study the economic relevance of optimal liquidation strategies by calibrating a recent and realistic microstructure model with data from the Paris Stock Exchange. We distinguish the case of parameters that are constant through the day from time-varying ones. An optimization problem incorporating this realistic microstructure model is presented and solved. Our model endogenizes the number of trades required before the position is liquidated. A comparative statics exercise demonstrates the realism of our model. We find that a sell decision taken in the morning will be liquidated by the early afternoon. If price impacts increase over the day, the liquidation will take place more rapidly.
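The abstract does not spell out the estimator, but a standard non-parametric way to separate the continuous and jump components of daily variability is to compare realized variance with bipower variation (Barndorff-Nielsen and Shephard). The sketch below illustrates that idea on simulated intraday returns and is not necessarily the thesis's exact procedure.

```python
import numpy as np

def realized_variance(intraday_returns: np.ndarray) -> float:
    """Sum of squared intraday returns (total variability of the trading session)."""
    return float(np.sum(intraday_returns ** 2))

def bipower_variation(intraday_returns: np.ndarray) -> float:
    """Barndorff-Nielsen/Shephard bipower variation, robust to jumps."""
    r = np.abs(intraday_returns)
    return float((np.pi / 2) * np.sum(r[1:] * r[:-1]))

def jump_component(intraday_returns: np.ndarray) -> float:
    """Non-negative estimate of the squared jump contribution to daily variance."""
    return max(realized_variance(intraday_returns) - bipower_variation(intraday_returns), 0.0)

# Hypothetical day of 5-minute log-returns containing one injected jump.
rng = np.random.default_rng(4)
r = rng.normal(0, 0.001, 96)
r[48] += 0.01  # injected jump
print(f"RV = {realized_variance(r):.2e}, BV = {bipower_variation(r):.2e}, jump = {jump_component(r):.2e}")
```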

Relevance:

30.00%

Publisher:

Abstract:

Blood pressure is poorly controlled in most European countries, and the control rate is even lower in high-risk patients such as those with chronic kidney disease, diabetes or previous coronary heart disease. Several factors have been associated with poor control. Some involve the characteristics of the patients themselves, such as socioeconomic factors or unsuitable lifestyles; others relate to hypertension itself or to associated comorbidity; but there are also factors directly associated with antihypertensive therapy, mainly involving adherence problems, therapeutic inertia and therapeutic strategies unsuited to difficult-to-control hypertensive patients. It is common knowledge that only 30% of hypertensive patients can be controlled using monotherapy; all the rest require a combination of two or more antihypertensive drugs, and this can be a barrier to good adherence and long-term persistence in patients who also often need to use other drugs, such as antidiabetic agents, statins or antiplatelet agents. The fixed combinations of three antihypertensive agents currently available can facilitate long-term control of these patients in clinical practice. If well tolerated, a long-term therapeutic regimen that includes a diuretic, an ACE inhibitor or an angiotensin receptor blocker, and a calcium channel blocker is the recommended optimal triple therapy.

Relevance:

30.00%

Publisher:

Abstract:

The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods have governing equations that are the same over a large range of frequencies, thus allowing one to study, in an analogous manner, processes on scales ranging from a few meters below the surface down to several hundreds of kilometers depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the quality of the base resistivity model used, therefore indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
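As a rough illustration of the probabilistic machinery described (not the thesis code), the sketch below runs a single-component Metropolis sampler over a pixel-based model with a Gaussian likelihood and a smoothness prior; the linear forward operator stands in for the plane-wave EM response, and all settings are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_likelihood(model, data, forward, sigma):
    """Gaussian data misfit for a candidate model."""
    resid = data - forward(model)
    return -0.5 * np.sum((resid / sigma) ** 2)

def log_prior(model, reg_weight):
    """Smoothness prior: penalize differences between neighbouring model cells."""
    return -reg_weight * np.sum(np.diff(model) ** 2)

def metropolis(data, forward, n_cells, sigma, reg_weight, n_iter=20000, step=0.05):
    model = np.zeros(n_cells)
    logp = log_likelihood(model, data, forward, sigma) + log_prior(model, reg_weight)
    samples = []
    for it in range(n_iter):
        proposal = model.copy()
        j = rng.integers(n_cells)                    # perturb one randomly chosen cell
        proposal[j] += rng.normal(0, step)
        logp_new = log_likelihood(proposal, data, forward, sigma) + log_prior(proposal, reg_weight)
        if np.log(rng.random()) < logp_new - logp:   # Metropolis acceptance rule
            model, logp = proposal, logp_new
        if it % 20 == 0:
            samples.append(model.copy())
    return np.array(samples)

# Placeholder linear forward operator standing in for the plane-wave EM response.
G = rng.normal(size=(30, 50))
true_model = np.sin(np.linspace(0, np.pi, 50))
data = G @ true_model + rng.normal(0, 0.1, 30)
posterior = metropolis(data, lambda m: G @ m, n_cells=50, sigma=0.1, reg_weight=5.0)
print("posterior mean of first cell:", posterior[:, 0].mean())
```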

Relevance:

30.00%

Publisher:

Abstract:

Executive Summary: The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics, first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broad scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measurement of financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify the risk-reward trade-off, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model, based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model to address some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests as a special case the well-known time-state separable utility, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics relative to those of realized returns from portfolio strategies optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization only with respect to, for example, the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, or the sequence of expected shortfalls for a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those obtained from virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration, based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
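The two checks described for Chapter 2 can be sketched as follows: a two-sample Kolmogorov-Smirnov test for equality of the realized-return distributions, and a pointwise comparison of absolute Lorenz curves (cumulative means of sorted returns across quantiles) for second-order stochastic dominance. The return series below are simulated and purely hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

def absolute_lorenz_curve(returns: np.ndarray) -> np.ndarray:
    """Cumulative mean of sorted returns at each quantile (the 'absolute Lorenz curve').

    Equivalently, minus the expected shortfall over a grid of confidence levels.
    """
    sorted_r = np.sort(returns)
    return np.cumsum(sorted_r) / len(sorted_r)

def second_order_dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """True if the absolute Lorenz curve of `a` lies (weakly) above that of `b` pointwise.

    Assumes both samples have the same length so the curves share a quantile grid.
    """
    return bool(np.all(absolute_lorenz_curve(a) >= absolute_lorenz_curve(b)))

# Hypothetical realized returns: aggregated-measure portfolio vs a single-measure portfolio.
rng = np.random.default_rng(6)
aggregated = rng.normal(0.006, 0.04, 2500)
single_measure = rng.normal(0.004, 0.05, 2500)

stat, p = ks_2samp(aggregated, single_measure)
print(f"KS statistic = {stat:.3f}, p-value = {p:.3g}")
print("Second-order dominance of aggregated over single:", second_order_dominates(aggregated, single_measure))
```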