80 results for Financial Integration
Abstract:
Executive Summary
The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields of economics and from broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds.

Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. Because the recursive utility we use nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation undertaken to support these theoretical results, however, showed that as long as one resorts to econometric methods that approximate conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one.

Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that the resulting realized returns have better distributional characteristics than the realized returns from portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the proposed way of aggregating performance measures leads to realized portfolio returns that first-order stochastically dominate those resulting from optimization with respect to a single measure such as, for example, the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e., the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were inclined to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those obtained from virtually all individual performance measures considered.

Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market: the bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests an inherent weakness in any attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results on the bounds of the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
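A minimal sketch of the two dominance checks described above, assuming hypothetical return samples (`agg` for the aggregated strategy, `single` for a single-measure strategy); this is an illustration, not the chapter's actual code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
agg = rng.normal(0.006, 0.04, 1000)     # hypothetical returns, aggregated strategy
single = rng.normal(0.004, 0.05, 1000)  # hypothetical returns, single-measure strategy

# Step 1: Kolmogorov-Smirnov test -- are the two distributions different at all?
ks_stat, p_value = stats.ks_2samp(agg, single)

# First-order stochastic dominance: agg dominates single if its empirical CDF
# lies at or below that of single at every return level.
grid = np.linspace(min(agg.min(), single.min()), max(agg.max(), single.max()), 500)
def ecdf(x, g):
    return np.searchsorted(np.sort(x), g, side="right") / x.size
fosd = np.all(ecdf(agg, grid) <= ecdf(single, grid))

# Second-order stochastic dominance via the absolute Lorenz curve, i.e. the
# cumulative average of the worst outcomes (a sequence of expected shortfalls
# over a range of quantiles): agg dominates if its curve lies at or above
# single's pointwise.
def absolute_lorenz(x, qs):
    xs = np.sort(x)
    n = xs.size
    return np.array([xs[: max(1, int(np.ceil(q * n)))].sum() / n for q in qs])

qs = np.linspace(0.01, 1.0, 100)
sosd = np.all(absolute_lorenz(agg, qs) >= absolute_lorenz(single, qs))
print(f"KS p-value: {p_value:.3f}, FOSD: {fosd}, SOSD: {sosd}")
```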
Abstract:
This paper shows that a stylized model with two countries characterized by different levels of financial development can replicate the following facts: (1) persistent current account surpluses and (2) high TFP growth in China. Under autarky, entrepreneurs in the emerging country overinvest in short-term projects and underinvest in long-term projects because, in the presence of credit constraints, short-term assets help them secure long-term investments. This creates an aggregate misallocation of capital. When financial markets integrate, entrepreneurs with long-term projects gain access to cheaper short-term assets abroad, which leaves them more resources to invest in their projects. This both reduces the misallocation of capital and generates capital outflows.
Abstract:
A number of OECD countries aim to encourage the work integration of disabled persons through quota policies. For instance, Austrian firms must provide at least one job to a disabled worker per 25 nondisabled workers and are subject to a tax if they do not. This "threshold design" yields causal estimates of the effect of the noncompliance tax on disabled employment if firms do not manipulate nondisabled employment; lower and upper bounds on the causal effect can be constructed if they do. Results indicate that firms with 25 nondisabled workers employ about 0.04 (or 12%) more disabled workers than they would without the tax; firms do manipulate employment of nondisabled workers, but the lower bound on the employment effect of the quota remains positive; employment effects are stronger in low-wage firms than in high-wage firms; and firms subject to a quota of two or more disabled workers hire 0.08 more disabled workers per additional quota job. Moreover, increasing the noncompliance tax increases excess disabled employment, whereas paying a bonus to overcomplying firms slightly dampens the employment effects of the tax.
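A minimal simulated sketch of the threshold comparison (not the paper's estimator): average disabled employment at firms just below versus just at the 25-worker threshold. All names and magnitudes below are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
firms = pd.DataFrame({"nondisabled": rng.integers(20, 31, size=5000)})
# Simulate a small jump in disabled employment once the quota starts to bind.
firms["disabled"] = rng.poisson(0.30 + 0.04 * (firms["nondisabled"] >= 25))

# Compare firms just below the threshold with firms just at it.
at_24 = firms.loc[firms["nondisabled"] == 24, "disabled"].mean()
at_25 = firms.loc[firms["nondisabled"] == 25, "disabled"].mean()
print(f"excess disabled employment at the threshold: {at_25 - at_24:.3f}")
```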
Abstract:
Preface
My thesis consists of three essays in which I consider equilibrium asset prices and investment strategies when the market is likely to experience crashes and possibly sharp windfalls. Although each part is written as an independent and self-contained article, the papers share a common behavioral approach to representing investors' preferences regarding extremal returns. Investors' utility is defined over their relative performance rather than over their final wealth position, an approach first proposed by Markowitz (1952b) and by Kahneman and Tversky (1979), which I extend to incorporate preferences over extremal outcomes. Given the failure of traditional expected utility models to reproduce the observed stylized features of financial markets, the prospect theory of Kahneman and Tversky (1979) offered the first significant alternative to the expected utility paradigm by positing that people focus on gains and losses rather than on final positions. In this setting, Barberis, Huang, and Santos (2000) and McQueen and Vorkink (2004) were able to build representative-agent optimization models whose solutions reproduce some of the observed risk premium and excess volatility. Research in behavioral finance is relatively new, and much of its potential remains to be explored. The three essays composing my thesis use and extend this setting to study investor behavior and investment strategies in a market where crashes and sharp windfalls are likely to occur.

In the first paper, the preferences of a representative agent relative to time-varying positive and negative extremal thresholds are modelled and estimated. A new utility function that reconciles expected utility maximization with tail-related performance measures is proposed. The model estimation shows that the representative agent's preferences reveal a significant level of crash aversion and lottery pursuit. Assuming a single-risky-asset economy, the proposed specification is able to reproduce some of the distributional features exhibited by financial return series.

The second part proposes and illustrates a preference-based asset allocation model that takes into account investors' crash aversion. Using the skewed t distribution, optimal allocations are characterized as a trade-off between the distribution's first four moments. The specification highlights the preference for odd moments and the aversion to even moments. Optimal portfolios are analyzed qualitatively in terms of firm characteristics, and in a setting that reflects real-time asset allocation a systematic outperformance of the aggregate stock market is obtained.

Finally, in my third article, dynamic option-based investment strategies are derived and illustrated for investors exhibiting downside loss aversion. The problem is solved in closed form when the stock market exhibits stochastic volatility and jumps. The specification of downside loss-averse utility functions allows the corresponding terminal wealth profiles to be expressed as options on the stochastic discount factor, contingent on the loss aversion level. Dynamic strategies therefore reduce to the portfolio replicating these profiles using well-selected exchange-traded options and the risky stock.
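The four-moment trade-off described in the second essay can be illustrated with a minimal sketch: an objective that rewards the odd moments (mean, third moment) and penalizes the even ones (variance, fourth moment). The coefficients, the return sample, and the weight constraints below are hypothetical assumptions, not the thesis's skewed-t specification:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
# Hypothetical fat-tailed returns for four assets.
R = rng.standard_t(df=5, size=(2000, 4)) * 0.02 + 0.004

def neg_objective(w, a=4.0, b=10.0, c=40.0):
    r = R @ w
    d = r - r.mean()
    m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    # Odd moments (mean, third moment) enter positively,
    # even moments (variance, fourth moment) negatively.
    return -(r.mean() - a / 2 * m2 + b / 6 * m3 - c / 24 * m4)

n = R.shape[1]
res = minimize(neg_objective, np.full(n, 1 / n),
               bounds=[(0, 1)] * n,
               constraints=({"type": "eq", "fun": lambda w: w.sum() - 1},))
print("optimal weights:", np.round(res.x, 3))
```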
Abstract:
Knowledge of the spatial distribution of hydraulic conductivity (K) within an aquifer is critical for reliable predictions of solute transport and for the development of effective groundwater management and/or remediation strategies. While core analyses and hydraulic logging can provide highly detailed information, such information is inherently localized around boreholes that tend to be sparsely distributed throughout the aquifer volume. Conversely, larger-scale hydraulic experiments such as pumping and tracer tests provide relatively low-resolution estimates of K in the investigated subsurface region. As a result, traditional hydrogeological measurement techniques leave a gap in terms of spatial resolution and coverage, and on their own they are often inadequate for characterizing heterogeneous aquifers. Geophysical methods have the potential to bridge this gap. The recent increase of interest in applying geophysical methods to hydrogeological problems is clearly evidenced by the formation and rapid growth of the field of hydrogeophysics over the past decade (e.g., Rubin and Hubbard, 2005).
Abstract:
The need for better gene transfer systems with an improved risk-benefit balance for patients remains a major challenge in the clinical translation of gene therapy (GT). We have investigated improving the safety of integrating vectors by combining (i) new short synthetic genetic insulator elements (GIEs) and (ii) the targeting of genetic integration to heterochromatin. We have designed SIN-insulated retrovectors with two candidate GIEs and could identify a specific combination of insulator 2 repeats that translates into the best functional activity, high titers, and a boundary effect in both gammaretroviral (p20) and lentiviral vectors (DCaro4) (see Duros et al., abstract ibid). Since GIEs are believed to shield the transgenic cassette from inhibitory effects and silencing, DCaro4 has been further tested with chimeric HIV-1-derived integrases that comprise C-terminal chromodomains targeting heterochromatin through either histone H3 (ML6 chimera) or methylated CpG islands (ML10). With DCaro4 alone and with both chimeras, homogeneous expression is evidenced in over 20% of the cells and is sustained over time. With control lentivectors, less than 2% of cells express GFP, comparable to the background obtained with a control double mutant in both the catalytic and LEDGF-binding sites; in addition, a twofold increase in expression can be induced with histone deacetylase inhibitors. Our approach could significantly reduce integration into open-chromatin sensitive sites in stem cells at the time of transduction, a feature that might significantly decrease subsequent genotoxicity, according to data from X-SCID patients. Work performed with the support of EC-DG Research within the FP6 Network of Excellence CLINIGENE (LSHB-CT-2006-018933).
Abstract:
Male and female Wistar rats were treated postnatally (PND 5-16) with BSO (L-buthionine-(S,R)-sulfoximine) to provide a rat model of schizophrenia based on a transient glutathione deficit. In the water maze, BSO-treated male rats perform very efficiently in conditions where a diversity of visual information is continuously available during orientation trajectories [1]. Our hypothesis is that the treatment impairs proactive strategies that anticipate future sensory information, while supporting a tight visual adjustment to memorized snapshots, i.e. compensatory reactive strategies. To test this hypothesis, BSO rats' performance was assessed in two conditions using an 8-arm radial maze task: a semi-transparent maze with no view of the environment available from the maze centre [2], and a modified 2-parallel maze known to induce a neglect of the parallel pair of arms in normal rats [3-5]. Male rats, but not females, were affected by the BSO treatment. In the semi-transparent maze, BSO males expressed a higher error rate, especially in completing the maze after an interruption. In the 2-parallel maze, BSO males, unlike controls, expressed no neglect of the parallel arms. This second result is consistent with a reactive strategy that uses accurate memory images of the contextual environment instead of a representation based on integrating relative directions. These results are coherent with a treatment-induced deficit in proactive decision strategies based on multimodal cognitive maps, compensated by accurate reactive adaptations based on the memory of local configurations. Control females did not express an efficient proactive capacity in the semi-transparent maze, nor did they show a significant neglect of the parallel arms, which might have masked a BSO-induced effect. Their reduced sensitivity to the BSO treatment is discussed with regard to a sex-biased basal cognitive style.
Abstract:
Multisensory interactions are a fundamental feature of brain organization. Principles governing multisensory processing have been established by varying stimulus location, timing, and efficacy independently. Determining whether and how such principles operate when stimuli vary dynamically in their perceived distance (as when looming/receding) provides an assay for synergy among the above principles and also a means of linking multisensory interactions between rudimentary stimuli with the higher-order signals used for communication and motor planning. Human participants indicated movement of looming or receding versus static stimuli that were visual, auditory, or multisensory combinations while 160-channel EEG was recorded. Multivariate EEG analyses and distributed source estimations were performed. Nonlinear interactions between looming signals were observed at early poststimulus latencies (∼75 ms) in analyses of voltage waveforms, global field power, and source estimations. These looming-specific interactions correlated positively with reaction-time facilitation, providing a direct link between neural and performance metrics of multisensory integration. Statistical analyses of source estimations identified looming-specific interactions within the right claustrum/insula, extending inferiorly into the amygdala, and within the bilateral cuneus, extending into the inferior and lateral occipital cortices. Multisensory effects common to all conditions, regardless of perceived distance and congruity, followed (∼115 ms) and manifested as a faster transition between temporally stable brain networks (vs. summed responses to unisensory conditions). We demonstrate an early-latency, synergistic interplay between existing principles of multisensory interactions. Such findings change the manner in which multisensory interactions should be modelled at the neural and behavioral/perceptual levels. We also provide neurophysiological backing for the notion that looming signals receive preferential treatment during perception.
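For reference, global field power (GFP), one of the dependent measures above, is conventionally computed as the standard deviation of the potential across all electrodes at each time point. A minimal sketch, with a hypothetical 160-channel array and sampling rate:

```python
import numpy as np

fs = 512                                  # assumed sampling rate, Hz
rng = np.random.default_rng(3)
eeg = rng.normal(size=(160, fs))          # 160 channels x 1 s of (fake) data

gfp = eeg.std(axis=0)                     # spatial standard deviation per sample

# e.g. read out the value near the ~75 ms post-stimulus latency noted above,
# assuming the stimulus occurs at sample 0
print(f"GFP at ~75 ms: {gfp[int(0.075 * fs)]:.3f} (arbitrary units)")
```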
Abstract:
Although combination chemotherapy has been shown to be more effective than single agents in advanced esophagogastric cancer, the better response rates have not fulfilled their promise, as overall survival times with the best combinations still range between 8 and 11 months. So far, the development of targeted therapies and their integration into treatment concepts lag somewhat behind those in other gastrointestinal diseases. This review therefore summarizes recent advances in the development of targeted therapies for advanced esophagogastric cancer. The majority of agents tested were angiogenesis inhibitors or agents targeting the epidermal growth factor receptors EGFR1 and HER2. For trastuzumab and bevacizumab, phase III trial results have been presented recently. While the addition of trastuzumab to cisplatin/5-fluoropyrimidine-based chemotherapy results in a clinically relevant and statistically significant survival benefit in HER2-positive patients, the benefit of adding bevacizumab to chemotherapy was not significant. Thus, all patients with metastatic disease should have their tumors tested for HER2 status. Trastuzumab in combination with cisplatin/5-fluoropyrimidine-based chemotherapy is the new standard of care for patients with HER2-positive advanced gastric cancer.
Abstract:
The objective of this work is to present a multitechnique approach to define the geometry, the kinematics, and the failure mechanism of a retrogressive large landslide (upper part of the La Valette landslide, South French Alps) by combining airborne and terrestrial laser scanning data with ground-based seismic tomography data. The advantage of combining different methods is to constrain the geometrical and failure mechanism models by integrating different sources of information. Because of a high point density at the ground surface (4.1 points m⁻²), a small laser footprint (0.09 m), and accurate three-dimensional positioning (0.07 m), airborne laser scanning data are well suited to analyzing morphological structures at the surface. Seismic tomography surveys (P-wave and S-wave velocities) may highlight low-seismic-velocity zones that characterize the presence of dense fracture networks in the subsurface. The surface displacements measured from the terrestrial laser scanning data over a period of 2 years (May 2008 to May 2010) allow one to quantify landslide activity in the direct vicinity of the identified discontinuities. Significant subsidence of the crown area, at an average rate of 3.07 m year⁻¹, is determined. The displacement directions indicate that the retrogression is controlled structurally by the preexisting discontinuities. A conceptual structural model is proposed to explain the failure mechanism and the retrogressive evolution of the main scarp. Uphill, the crown area is affected by planar sliding within a deeper wedge failure system bounded by two preexisting fractures. Downhill, the landslide body acts as a buttress for the upper part. Consequently, the downhill progression of the landslide body allows the development of dip-slope failures, and coherent blocks start sliding along planar discontinuities. The volume of the failed mass in the crown area is estimated at 500,000 m³ with the sloping local base level method.
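As an illustration of the kind of computation behind the subsidence-rate and volume figures, here is a minimal sketch of differencing two gridded elevation models; the arrays, cell size, and magnitudes are hypothetical, and the sloping local base level method used by the authors is more involved:

```python
import numpy as np

cell = 0.5                   # assumed grid cell size, metres
years = 2.0                  # May 2008 to May 2010
rng = np.random.default_rng(4)
dem_2008 = rng.normal(1900.0, 5.0, size=(200, 200))                  # fake DEM, epoch 1
dem_2010 = dem_2008 - np.abs(rng.normal(6.0, 1.0, size=(200, 200)))  # fake DEM, epoch 2

dz = dem_2010 - dem_2008                   # elevation change (negative = subsidence)
rate = dz.mean() / years                   # average subsidence rate, m/year
volume = -dz[dz < 0].sum() * cell**2       # displaced volume over subsiding cells, m^3
print(f"mean subsidence rate: {rate:.2f} m/yr, volume: {volume:.0f} m^3")
```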