17 results for "Variable pricing model"

at Université de Lausanne, Switzerland


Relevance:

90.00%

Publisher:

Abstract:

Executive Summary The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation intended to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the aggregation of performance measures we propose leads to realized portfolio returns that first-order stochastically dominate those resulting from optimization with respect to a single measure, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e., the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those of virtually all individual performance measures considered.
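
The two dominance checks described above lend themselves to a compact implementation. Below is a minimal sketch, with simulated return series standing in for the realized portfolio returns (the data and function names are illustrative, not the thesis code):

import numpy as np
from scipy import stats

def first_order_dominates(a, b, grid_size=200):
    # Empirical FSD check: the CDF of a must lie at or below the CDF of b everywhere.
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
    cdf = lambda x: np.searchsorted(np.sort(x), grid, side="right") / x.size
    return bool(np.all(cdf(a) <= cdf(b)))

def absolute_lorenz(x, quantiles):
    # L(p) = p * mean of the worst p-fraction of outcomes, i.e. the sequence of
    # expected shortfalls over a range of quantiles (up to sign convention).
    s = np.sort(x)
    return np.array([p * s[: max(1, int(np.ceil(p * x.size)))].mean() for p in quantiles])

rng = np.random.default_rng(0)
ret_agg = rng.normal(0.06, 0.10, 500)     # toy returns: aggregated measures
ret_single = rng.normal(0.04, 0.12, 500)  # toy returns: a single measure

print(stats.ks_2samp(ret_agg, ret_single))         # are the distributions different?
print(first_order_dominates(ret_agg, ret_single))  # FSD check
q = np.linspace(0.01, 0.99, 99)
print(bool(np.all(absolute_lorenz(ret_agg, q) >= absolute_lorenz(ret_single, q))))  # SSD check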

Relevance:

90.00%

Publisher:

Abstract:

Introduction This dissertation consists of three essays in equilibrium asset pricing. The first chapter studies the asset pricing implications of a general equilibrium model in which real investment is reversible at a cost. Firms face higher costs in contracting than in expanding their capital stock and decide to invest when their productive capital is scarce relative to the overall capital of the economy. Positive shocks to the capital of the firm increase the size of the firm and reduce the value of growth options. As a result, the firm is burdened with more unproductive capital and its value declines relative to the accumulated capital. The optimal consumption policy alters the optimal allocation of resources and affects the firm's value, generating mean-reverting dynamics for M/B ratios. The model (1) captures the convergence of price-to-book ratios (negative for growth stocks, positive for value stocks), i.e., firm migration, (2) generates deviations from the classic CAPM in line with the cross-sectional variation in expected stock returns, and (3) generates a non-monotone relationship between Tobin's q and conditional volatility consistent with the empirical evidence. The second chapter studies a standard portfolio-choice problem with transaction costs and mean reversion in expected returns. In the presence of transaction costs, no matter how small, arbitrage activity does not necessarily render all riskless rates of return equal. When two such rates follow stochastic processes, it is not optimal to immediately arbitrage out any discrepancy that arises between them. The reason is that immediate arbitrage would incur a definite expenditure of transaction costs, whereas, without arbitrage intervention, there exists some, perhaps sufficient, probability that the two interest rates will come back together without any costs having been incurred. Hence, one can surmise that in equilibrium the financial market will permit the coexistence of two riskless rates that are not equal to each other. For analogous reasons, randomly fluctuating expected rates of return on risky assets will be allowed to differ even after correction for risk, leading to important violations of the Capital Asset Pricing Model. The combination of randomness in expected rates of return and proportional transaction costs is a serious blow to existing frictionless pricing models. Finally, in the last chapter I propose a two-country, two-good general equilibrium economy with uncertainty about the fundamentals' growth rates, in order to study the joint behavior of equity volatilities and correlation at the business cycle frequency. I assume that dividend growth rates jump from one state to another, while the countries' switches are possibly correlated. The model is solved in closed form and analytical expressions for stock prices are reported. When calibrated to empirical data for the United States and the United Kingdom, the results show that, given the existing degree of synchronization across these business cycles, the model captures the historical patterns of stock return volatilities quite well. Moreover, I can explain the time behavior of the correlation, but only under the assumption of a global business cycle.

Relevance:

80.00%

Publisher:

Abstract:

Introduction. Adherence to medication for asymptomatic disease is often low. We assessed factors associated with good adherence to medication for high blood pressure (HBP) in a country of the African region. Methods. A population-based survey of adults aged 25-64 years (N=1240, participation rate 73%). Information was available on knowledge, attitudes and practices, socioeconomic status (SES), and other variables. One question assessed adherence. Good adherence to treatment was defined as answering "I forget very rarely" versus "I forget on 1-2 days in a week" or "I forget on 3 or more days in a week". Results. In a univariate model, adherence was strongly associated with the belief that hypertension is a long-term disease (OR 2.6, p<0.001) and negatively associated with concomitant use of traditional medicine (OR 0.36, p<0.005). The following variables tended to be associated with good adherence to HBP treatment: age, SES, BMI, belief that HBP is not symptomatic, attending government clinics, medium stress level, controlled hypertension, and taking statins. The following variables were not associated with good adherence to HBP treatment: education, higher BP, knowing people who had a stroke/MI, and suffering from another chronic condition. In a multivariate model, the pseudo-R2 was 0.14. Conclusion. We built a multidimensional model including a wide range of variables. This model predicted only 14% of the variability in adherence. Variables associated with good adherence were demographic or related to knowledge, attitudes and practices. The latter are modifiable by different types of interventions.
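
As an illustration of this kind of analysis, the following sketch fits a logistic model and reports odds ratios and McFadden's pseudo-R2 on simulated data (the variables and coefficients are hypothetical, not the survey's dataset):

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1240  # same sample size as the survey; the data below are simulated
df = pd.DataFrame({
    "chronic_belief": rng.integers(0, 2, n),   # believes HBP is a long-term disease
    "traditional_med": rng.integers(0, 2, n),  # concomitant traditional medicine use
    "age": rng.uniform(25, 64, n),
})
logit = -0.5 + 0.95 * df["chronic_belief"] - 1.0 * df["traditional_med"]
df["adherent"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["chronic_belief", "traditional_med", "age"]])
fit = sm.Logit(df["adherent"], X).fit(disp=0)
print(np.exp(fit.params))   # odds ratios per predictor
print(fit.prsquared)        # McFadden's pseudo-R2, cf. the 0.14 reported above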

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: The purpose of this study was to develop a mathematical model (sine model, SIN) to describe fat oxidation kinetics as a function of the relative exercise intensity [% of maximal oxygen uptake (%VO2max)] during graded exercise, and to determine the exercise intensity (Fatmax) that elicits maximal fat oxidation (MFO) and the intensity at which fat oxidation becomes negligible (Fatmin). This model includes three independent variables (dilatation, symmetry, and translation) that incorporate the primary expected modulations of the curve due to training level or body composition. METHODS: Thirty-two healthy volunteers (17 women and 15 men) performed a graded exercise test on a cycle ergometer, with 3-min stages and 20-W increments. Substrate oxidation rates were determined using indirect calorimetry. SIN was compared with measured values (MV) and with other methods currently in use [i.e., the RER method (MRER) and third-degree polynomial curves (P3)]. RESULTS: There was no significant difference in fitting accuracy between SIN and P3 (P = 0.157), whereas MRER was less precise than SIN (P < 0.001). Fatmax (44 ± 10 %VO2max) and MFO (0.37 ± 0.16 g·min(-1)) determined using SIN were significantly correlated with MV, P3, and MRER (P < 0.001). The dilatation variable was correlated with Fatmax, Fatmin, and MFO (r = 0.79, r = 0.67, and r = 0.60, respectively; P < 0.001). CONCLUSIONS: The SIN model offers the same precision as the other methods currently used to determine Fatmax and MFO but additionally allows calculation of Fatmin. Moreover, its three independent variables are directly related to the main expected modulations of the fat oxidation curve. SIN therefore seems to be an appropriate tool for analyzing fat oxidation kinetics obtained during graded exercise.
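
The exact parameterization of SIN is given in the paper; the sketch below assumes one plausible three-parameter sine form, in which dilatation stretches the arch, symmetry skews it, and translation shifts it along the intensity axis, and shows how MFO, Fatmax, and Fatmin can be read off a fitted curve (all data simulated):

import numpy as np
from scipy.optimize import curve_fit

def sin_model(x, mfo, dilatation, symmetry, translation):
    # One plausible sine arch: zero at rest, zero again at Fatmin, peak between.
    # x is exercise intensity in %VO2max; output is fat oxidation in g/min.
    u = np.clip((x - translation) / dilatation, 0.0, 1.0)
    return mfo * np.sin(np.pi * u ** symmetry)

# Simulated substrate-oxidation data from a graded test (toy values).
x_obs = np.arange(20, 80, 5, dtype=float)                 # %VO2max
rng = np.random.default_rng(2)
y_obs = sin_model(x_obs, 0.37, 75.0, 1.2, 5.0) + rng.normal(0, 0.02, x_obs.size)

params, _ = curve_fit(sin_model, x_obs, y_obs, p0=[0.4, 70.0, 1.0, 0.0])
grid = np.linspace(0.0, 100.0, 1001)
fat = sin_model(grid, *params)
fatmax = grid[np.argmax(fat)]                 # intensity of maximal fat oxidation
fatmin = grid[fat > 1e-3].max()               # intensity where oxidation vanishes
print(f"MFO={fat.max():.2f} g/min  Fatmax={fatmax:.0f}%  Fatmin={fatmin:.0f}%")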

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To determine the influence of nebulizer types and nebulization modes on bronchodilator delivery in a mechanically ventilated pediatric lung model. DESIGN: In vitro, laboratory study. SETTING: Research laboratory of a university hospital. INTERVENTIONS: Using albuterol as a marker, three nebulizer types (jet nebulizer, ultrasonic nebulizer, and vibrating-mesh nebulizer) were tested in three nebulization modes in a nonhumidified bench model mimicking the ventilatory pattern of a 10-kg infant. The amounts of albuterol deposited on the inspiratory filters (inhaled drug) at the end of the endotracheal tube, deposited on the expiratory filters, and remaining in the nebulizers or in the ventilator circuit were determined. The particle size distribution of the nebulizers was also measured. MEASUREMENTS AND MAIN RESULTS: The inhaled drug fraction was 2.8% ± 0.5% for the jet nebulizer, 10.5% ± 2.3% for the ultrasonic nebulizer, and 5.4% ± 2.7% for the vibrating-mesh nebulizer in intermittent nebulization during the inspiratory phase (p < 0.01). The most efficient configuration was the vibrating-mesh nebulizer in continuous nebulization (13.3% ± 4.6%, p < 0.01). Depending on the device, a variable but substantial fraction of the albuterol remained in the nebulizer (jet and ultrasonic nebulizers) or was expired or lost in the ventilator circuit (all nebulizers). Only small particles (range 2.39-2.70 µm) reached the end of the endotracheal tube. CONCLUSIONS: Important differences between nebulizer types and nebulization modes were seen for albuterol deposition at the end of the endotracheal tube in an in vitro pediatric ventilator-lung model. Newer aerosol devices, such as ultrasonic and vibrating-mesh nebulizers, were more efficient than the jet nebulizer.

Relevance:

30.00%

Publisher:

Abstract:

It is generally accepted that most plant populations are locally adapted. Yet, understanding how environmental forces give rise to adaptive genetic variation is a challenge in conservation genetics and crucial to the preservation of species under rapidly changing climatic conditions. Environmental variation, phylogeographic history, and population demographic processes all contribute to spatially structured genetic variation, yet few current models attempt to separate these confounding effects. To illustrate the benefits of using a spatially explicit model for identifying potentially adaptive loci, we compared outlier locus detection methods with a recently developed landscape genetic approach. We analyzed 157 loci from samples of the alpine herb Gentiana nivalis collected across the European Alps. Principal coordinates of neighbor matrices (PCNM), eigenvectors that quantify the multi-scale spatial variation present in a data set, were incorporated into a landscape genetic approach relating AFLP frequencies with 23 environmental variables. Four major findings emerged. 1) Fifteen loci were significantly correlated with at least one predictor variable (adjusted R² > 0.5). 2) Models including PCNM variables identified eight more potentially adaptive loci than models run without spatial variables. 3) When compared to outlier detection methods, the landscape genetic approach detected four of the same loci plus 11 additional loci. 4) Temperature, precipitation, and solar radiation were the three major environmental factors driving potentially adaptive genetic variation in G. nivalis. The techniques presented in this paper offer an efficient method for identifying potentially adaptive genetic variation and the associated environmental forces of selection, providing an important step forward for the conservation of non-model species under global change.
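
For readers unfamiliar with PCNM, the construction is a principal coordinate analysis of a truncated geographic distance matrix; the sketch below is a generic illustration (not the study's pipeline) with a simplified truncation rule, where the classic choice would be the longest edge of the minimum spanning tree:

import numpy as np

def pcnm(coords, truncation=None):
    # Eigenvectors of a double-centred, truncated distance matrix; those with
    # positive eigenvalues describe spatial structure at decreasing scales.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    if truncation is None:
        # Simplified stand-in for the MST heuristic: max nearest-neighbour distance.
        truncation = np.where(np.eye(len(d), dtype=bool), np.inf, d).min(axis=1).max()
    d = np.where(d > truncation, 4.0 * truncation, d)   # truncate large distances
    a = -0.5 * d ** 2
    n = len(a)
    j = np.eye(n) - np.ones((n, n)) / n                 # centring matrix
    g = j @ a @ j                                       # Gower double-centring
    vals, vecs = np.linalg.eigh(g)
    keep = vals > 1e-8
    order = np.argsort(vals[keep])[::-1]
    return vecs[:, keep][:, order] * np.sqrt(vals[keep][order])

coords = np.random.default_rng(3).uniform(0, 100, size=(40, 2))  # toy site coordinates
spatial_vars = pcnm(coords)   # columns can enter the regression on AFLP frequencies
print(spatial_vars.shape)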

Relevance:

30.00%

Publisher:

Abstract:

This thesis focuses on theoretical asset pricing models and their empirical applications. I aim to investigate the following noteworthy problems: i) whether the relationship between asset prices and investors' propensities to gamble and to fear disaster is time varying, ii) whether the conflicting evidence on firm- and market-level skewness can be explained by downside risk, iii) whether costly learning drives liquidity risk. Empirical tests support the above assumptions and provide novel findings in asset pricing, investment decisions, and firms' funding liquidity. The first chapter considers a partial equilibrium model where investors have heterogeneous propensities to gamble and to fear disaster. Skewness preference represents the desire to gamble, while kurtosis aversion represents fear of extreme returns. Using US data from 1988 to 2012, my model demonstrates that in bad times risk aversion is higher, more people fear disaster, and fewer people gamble, in contrast to good times. This leads to a new empirical finding: gambling preference has a greater impact on asset prices during market downturns than during booms. The second chapter consists of two essays. The first essay introduces a formula based on the conditional CAPM for decomposing market skewness. We find that major market upward and downward movements can be well predicted by the asymmetric comovement of betas, which is characterized by an indicator called "Systematic Downside Risk" (SDR). We find that SDR can effectively forecast future stock market movements, and we obtain out-of-sample R-squared values (compared with a strategy using the historical mean) of more than 2.27% with monthly data. The second essay reconciles a well-known empirical fact: aggregating positively skewed firm returns leads to a negatively skewed market return. We reconcile this fact through firms' greater response to negative market news than to positive market news. We also propose several market return predictors, such as downside idiosyncratic skewness. The third chapter studies funding liquidity risk based on a general equilibrium model featuring two agents: an entrepreneur and an external investor. Only the investor needs to acquire information to estimate the unobservable fundamentals driving economic output. The novelty is that information acquisition is more costly in bad times than in good times, i.e., a countercyclical information cost, as supported by previous empirical evidence. We show that liquidity risk is principally driven by costly learning. Résumé This thesis presents theoretical asset pricing models and their empirical applications. My objective is to study the following problems: whether the relationship between asset prices and investors' propensities to gamble and to fear disaster varies over time; whether the conflicting evidence on firm- and market-level skewness can be explained by downside risk; and whether costly learning increases liquidity risk. Empirical tests support these assumptions and provide new findings concerning asset pricing, investment decisions, and firms' funding liquidity. The first chapter examines an equilibrium model where investors have heterogeneous propensities to gamble and to fear disaster.
Skewness preference represents the desire to gamble, while kurtosis aversion represents fear of disaster. Using US data from 1988 to 2012, my model shows that in bad times risk aversion is higher, more people fear disaster, and fewer people gamble, in contrast to good times. This leads to a new empirical finding: gambling preference has a greater impact on asset prices during market downturns than during booms. Exploiting this relationship alone would generate an annual excess return of 7.74% that is not explained by popular factor models. The second chapter comprises two essays. The first essay introduces a formula based on the conditional CAPM for decomposing market skewness. We find that major upward and downward market movements can be predicted by the comovements of betas. An indicator called Systematic Downside Risk (SDR) is created to characterize this asymmetry in the comovements of betas. We find that SDR can effectively forecast future stock market movements, and we obtain out-of-sample R-squared values (compared with a strategy using the historical mean) of more than 2.27% with monthly data. An investor timing the market using SDR would have obtained a sizable Sharpe ratio increase of 0.206. The second essay reconciles a well-known empirical fact about firm- and market-level skewness: aggregating positively skewed firm returns leads to a negatively skewed market return. We decompose market skewness at the firm level and reconcile this fact through firms' greater reaction to negative market news than to positive market news. This decomposition reveals several effective market return predictors, such as volatility-weighted idiosyncratic skewness and downside idiosyncratic skewness. The third chapter provides a new theoretical foundation for time-varying liquidity problems in an incomplete market environment. We propose a general equilibrium model with two agents: an entrepreneur and an external investor. Only the investor needs to learn the true state of the firm, and payoff information is therefore costly. The novelty is that information acquisition is more costly in bad times than in good times, as confirmed by previous empirical evidence. When a recession begins, costly learning raises liquidity premia, causing a liquidity evaporation problem, as also confirmed by previous empirical evidence.
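
The out-of-sample R-squared quoted above measures forecast accuracy against the expanding historical mean; a minimal sketch of that statistic on simulated data follows (the series are illustrative, not the SDR forecasts themselves):

import numpy as np

def oos_r_squared(returns, predictions):
    # Out-of-sample R2 versus the expanding historical mean:
    # 1 - SSE(model forecasts) / SSE(historical-mean forecasts).
    bench = np.array([returns[:k].mean() for k in range(1, len(returns))])
    sse_model = np.sum((returns[1:] - predictions[1:]) ** 2)
    sse_bench = np.sum((returns[1:] - bench) ** 2)
    return 1.0 - sse_model / sse_bench

rng = np.random.default_rng(4)
r = rng.normal(0.005, 0.04, 300)         # toy monthly market returns
forecasts = r + rng.normal(0, 0.05, 300) # toy predictor-based forecasts
print(f"{oos_r_squared(r, forecasts):.4f}")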

Relevance:

30.00%

Publisher:

Abstract:

Analyzing the relationship between the baseline value and the subsequent change of a continuous variable is a frequent matter of inquiry in cohort studies. These analyses are surprisingly complex, particularly if only two waves of data are available. It is unclear to non-biostatisticians where the complexity of this analysis lies and which statistical method is adequate. With the help of simulated longitudinal data on body mass index in children, we review statistical methods for the analysis of the association between the baseline value and subsequent change, assuming linear growth with time. Key issues in such analyses are mathematical coupling, measurement error, variability of change between individuals, and regression to the mean. Ideally, one relies on multiple repeated measurements at different times, and a linear random effects model is a standard approach if more than two waves of data are available. If only two waves of data are available, our simulations show that Blomqvist's method, which consists in adjusting the estimated regression coefficient of observed change on baseline value for the measurement error variance, provides accurate estimates. The adequacy of the methods for assessing the relationship between the baseline value and subsequent change depends on the number of data waves, the availability of information on measurement error, and the variability of change between individuals.
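
Concretely, if b is the observed slope from regressing change on the observed baseline, and lambda is the fraction of the observed baseline variance attributable to measurement error, Blomqvist's corrected slope is (b + lambda) / (1 - lambda). A short simulation sketch (illustrative parameter values, measurement error variance assumed known) shows the bias and its removal:

import numpy as np

rng = np.random.default_rng(5)
n, beta_true = 10_000, -0.2        # true slope of change on true baseline
sd_true, sd_err = 2.0, 1.0         # SDs of true BMI and of measurement error

truth = 17 + sd_true * rng.normal(size=n)              # true baseline BMI
change = beta_true * (truth - truth.mean()) + 0.5 * rng.normal(size=n)
x1 = truth + sd_err * rng.normal(size=n)               # observed baseline
x2 = truth + change + sd_err * rng.normal(size=n)      # observed follow-up
d = x2 - x1                                            # observed change

b_naive = np.polyfit(x1, d, 1)[0]        # biased by coupling with the shared error
lam = sd_err ** 2 / np.var(x1)           # error share of observed baseline variance
b_corrected = (b_naive + lam) / (1 - lam)
print(f"naive={b_naive:.3f}  corrected={b_corrected:.3f}  true={beta_true}")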

Relevance:

30.00%

Publisher:

Abstract:

This thesis consists of four essays in equilibrium asset pricing. The main topic is investor heterogeneity: I investigate the equilibrium implications for financial markets when investors have different attitudes toward risk. The first chapter studies why expected risk and remuneration on the aggregate market are negatively related, even though intuition and standard theory suggest a positive relation. I show that the negative trade-off can obtain in equilibrium if investors' beliefs about economic fundamentals are procyclically biased and the market Sharpe ratio is countercyclical. I verify that such conditions hold in real markets, and I find empirical support for the risk-return dynamics predicted by the model. The second chapter consists of two essays. The first essay studies how heterogeneity in risk preferences interacts with other sources of heterogeneity and how this affects asset prices in equilibrium. Using perceived macroeconomic uncertainty as the source of heterogeneity, the model helps to explain some patterns of financial returns, even if heterogeneity is small, as suggested by survey data. The second essay determines conditions under which equilibrium prices have analytical solutions when investors have heterogeneous risk attitudes and macroeconomic fundamentals feature latent uncertainty. This approach provides additional insights relative to the previous literature, where models require numerical solutions. The third chapter studies why equity claims (i.e., assets paying a single future dividend) feature premia and risk decreasing with the horizon, even though standard models imply the opposite shape. I show that labor relations help to explain the puzzle. When workers have bargaining power to exploit partial income insurance within the firm, wages are smoother and dividends are riskier than in a standard economy. Distributional risk between workers and shareholders provides a rationale for short-term equity risk, which leads to downward-sloping term structures of premia and risk for equity claims. Résumé This thesis consists of four essays in equilibrium asset pricing. The main topic is investor heterogeneity: I study the equilibrium implications for financial markets where investors have different attitudes toward risk. The first chapter studies why expected risk and remuneration on the aggregate market are negatively related, even though intuition and standard theory suggest a positive relation. I show that the negative trade-off can obtain in equilibrium if investors' beliefs about economic fundamentals are procyclically biased and the market Sharpe ratio is countercyclical. I verify that these conditions hold in real markets, and I find empirical support for the risk-return dynamics predicted by the model. The second chapter consists of two essays. The first essay studies how heterogeneity in risk preferences interacts with other sources of heterogeneity and how this affects asset prices in equilibrium. Using perceived macroeconomic uncertainty as the source of heterogeneity, the model helps to explain certain patterns of financial returns, even if heterogeneity is small, as suggested by survey data.
The second essay determines conditions under which equilibrium prices admit analytical solutions when investors have heterogeneous risk attitudes and macroeconomic fundamentals feature latent uncertainty. This approach provides additional insight relative to the previous literature, where models require numerical solutions. The third chapter studies why equity claims (assets paying a single future dividend) have premia and risk decreasing with the horizon, even though standard models imply the opposite shape. I show that labor relations help to explain the puzzle. When workers have the bargaining power to exploit partial income insurance within the firm, wages are smoother and dividends are riskier than in a standard economy. Distributional risk between workers and shareholders provides a rationale for short-term risk, which leads to downward-sloping term structures of premia and risk for equity claims.

Relevance:

30.00%

Publisher:

Abstract:

Given the very large amount of data obtained every day through population surveys, much new research could use this information instead of collecting new samples. Unfortunately, relevant data are often scattered across different files obtained through different sampling designs. Data fusion is a set of methods used to combine information from different sources into a single dataset. In this article, we are interested in a specific problem: the fusion of two data files, one of which is quite small. We propose a model-based procedure combining a logistic regression with an Expectation-Maximization algorithm. Results show that despite the lack of data, this procedure can perform better than standard matching procedures.
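
The article's exact estimation procedure is not reproduced here, but the following generic sketch conveys the mechanics of an EM loop around a logistic regression, where a binary variable observed only in the small file is treated as missing in the large one (all names and data are hypothetical):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Small file: covariates X and the binary target Y are both observed.
x_small = rng.normal(size=(150, 3))
y_small = (x_small @ np.array([1.0, -0.5, 0.3]) + rng.normal(0, 1, 150)) > 0

# Large file: only X is observed; Y is missing and will be modelled.
x_large = rng.normal(size=(3000, 3))

model = LogisticRegression()
model.fit(x_small, y_small)                  # initialise on the small file alone

for _ in range(20):
    # E-step: posterior probability of Y=1 for the large file's records.
    p = model.predict_proba(x_large)[:, 1]
    # M-step: refit on the small file (hard labels) plus the large file's rows
    # duplicated with fractional weights p and 1 - p.
    X = np.vstack([x_small, x_large, x_large])
    y = np.concatenate([y_small, np.ones(len(x_large)), np.zeros(len(x_large))])
    w = np.concatenate([np.ones(len(x_small)), p, 1 - p])
    model.fit(X, y, sample_weight=w)

fused_y = model.predict_proba(x_large)[:, 1]   # imputed Y for the fused dataset
print(fused_y[:5].round(3))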

Relevance:

30.00%

Publisher:

Abstract:

The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter-dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at www.flow-r.org), and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches, and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and that avoids over-channelization, and so produces more realistic extents. The choice of datasets and algorithms is open to the user, which makes the model suitable for various applications and levels of dataset availability. Among the possible datasets, the DEM is the only one that is strictly needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained on the basis of lower-quality DEMs with 25 m resolution.
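
For context, Holmgren's original multiple flow direction algorithm distributes flow to every lower neighbour in proportion to a power of the slope tangent; the sketch below implements that base algorithm for a single cell (Flow-R's modified, less channelizing variant is not reproduced here, and the exponent value is illustrative):

import numpy as np

def holmgren_proportions(dem, row, col, cell_size=10.0, x=4.0):
    # Holmgren (1994): flow is split among all lower neighbours i in proportion
    # to tan(beta_i)**x; x=1 gives maximal spreading, large x converges to the
    # single steepest-descent direction.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    tangents = np.zeros(len(offsets))
    for k, (dr, dc) in enumerate(offsets):
        r, c = row + dr, col + dc
        if 0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]:
            dist = cell_size * (np.sqrt(2.0) if dr and dc else 1.0)
            drop = dem[row, col] - dem[r, c]
            if drop > 0:                  # only downslope neighbours receive flow
                tangents[k] = drop / dist
    weights = tangents ** x
    total = weights.sum()
    return weights / total if total > 0 else weights   # flow proportion per neighbour

dem = np.array([[12.0, 11.0, 10.0],
                [11.0,  9.5,  9.0],
                [10.0,  9.0,  8.0]])     # toy 3x3 DEM
print(holmgren_proportions(dem, 1, 1).round(3))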

Relevance:

30.00%

Publisher:

Abstract:

The complexity of the current business world is making corporate disclosure more and more important for information users. These users, including investors, financial analysts, and government authorities, rely on the disclosed information to make investment decisions, analyze and recommend shares, and draft regulation policies. Moreover, the globalization of capital markets has made it difficult for information users to understand the differences in corporate disclosure across countries and across firms. Using a sample of 797 firms from 34 countries, this thesis advances the literature on disclosure by comprehensively illustrating the determinants of disclosure originating in firm systems and national systems, based on a multilevel latent variable approach. Under this approach, the overall variation associated with the firm-specific variables is decomposed into two parts, within-country and between-country. Accordingly, the model estimates the latent association between corporate disclosure and information demand at two levels, within-country and between-country. The results indicate that the variables originating from corporate systems are hierarchically correlated with those from the country environment. The information demand factor, indicated by the number of exchange listings and the number of analyst recommendations, significantly explains the variation of corporate disclosure both within and between countries. The exogenous influences of firm fundamentals (firm size and performance) are exerted indirectly through the information demand factor. Specifically, once the between-country variation in firm variables is taken into account, only the variables of legal systems and economic growth remain significant in explaining disclosure differences across countries. These findings strongly support the hypothesis that disclosure is a response to both corporate systems and national systems, but that the influence of the latter is reflected significantly through that of the former. In addition, results based on ADR (American Depositary Receipt) firms suggest that the globalization of capital markets is harmonizing the disclosure behavior of cross-listed firms, but it cannot entirely eliminate national features in disclosure and other firm-specific characteristics.
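
The within/between decomposition at the core of the multilevel approach can be illustrated with a simple random-intercept model, in which the country-level variance component captures the between-country share of disclosure variation; the sketch below uses simulated data and is not the thesis's latent-variable specification:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
country = np.repeat(np.arange(34), 23)               # 34 countries, ~23 firms each
country_effect = rng.normal(0, 0.6, 34)[country]     # between-country variation
df = pd.DataFrame({
    "country": country,
    "analyst_recs": rng.poisson(8, country.size),    # information-demand proxy
})
df["disclosure"] = (5 + 0.15 * df["analyst_recs"]
                    + country_effect + rng.normal(0, 1.0, country.size))

fit = smf.mixedlm("disclosure ~ analyst_recs", df, groups=df["country"]).fit()
between = float(fit.cov_re.iloc[0, 0])               # between-country variance
within = fit.scale                                   # within-country residual variance
print(fit.params["analyst_recs"], between / (between + within))  # slope and ICC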

Relevance:

30.00%

Publisher:

Abstract:

The functional interaction of BAFF and APRIL with TNF receptor superfamily members BAFFR, TACI and BCMA is crucial for development and maintenance of humoral immunity in mice and humans. Using a candidate gene approach, we identified homozygous and heterozygous mutations in TNFRSF13B, encoding TACI, in 13 individuals with common variable immunodeficiency. Homozygosity with respect to mutations causing the amino acid substitutions S144X and C104R abrogated APRIL binding and resulted in loss of TACI function, as evidenced by impaired proliferative response to IgM-APRIL costimulation and defective class switch recombination induced by IL-10 and APRIL or BAFF. Family members heterozygous with respect to the C104R mutation and individuals with sporadic common variable immunodeficiency who were heterozygous with respect to the amino acid substitutions A181E, S194X and R202H had humoral immunodeficiency. Although signs of autoimmunity and lymphoproliferation are evident, the human phenotype differs from that of the Tnfrsf13b-/- mouse model.

Relevance:

30.00%

Publisher:

Abstract:

We present measurements of hydrogen and oxygen isotopes in MORB glasses from Macquarie Island (SW Pacific Ocean), coupled with determination of bulk H2O content by two independent techniques: total dehydration and FTIR. The incompatible trace elements in these glasses vary by a factor of 12 to 17, with K2O varying from 0.1 to 1.7 wt.%; these ranges reflect a variable degree of closed-system mantle melting, estimated at 1 to 15%. Water concentrations determined by the two techniques match well, yielding a range from 0.25 to 1.49 wt.% that correlates positively with all of the measured incompatible trace elements, suggesting that the water is undegassed and behaves conservatively during mantle melting. Also, the agreement between the FTIR-determined and extracted water contents gives us confidence that the measured hydrogen isotopic values reflect those of the mantle. Comparison of the range of water content with that of other incompatible trace elements allows estimation of the water partition coefficient in lherzolite, 0.0208 (ranging from 0.017 to 0.023), and of the water content in the source, 386 ppm (ranging from 370 to 440 ppm). We observe fairly narrow ranges of δD and δ18O values, -75.5 ± 4.5‰ and 5.50 ± 0.05‰ respectively, that can be explained by partial melting of normal lherzolitic mantle. The measured δD and δ18O values of Macquarie Island glasses, which range from nepheline- to hypersthene-normative and from MORB to EMORB in composition, are identical to those in average global MORB. The observed lack of variation of δD and δ18O with 1 to 15% degree of mantle melting is consistent with a bulk melting model of δD and δ18O fractionation, in which water is rapidly scavenged into the first partial melt. The narrow ranges of δD and δ18O in normal mantle are mostly due to the buffering effect of clino- and orthopyroxenes in the residual assemblage; additionally, fast "wet" diffusion of oxygen and hydrogen isotopes through the melting regions may further smooth isotopic differences.
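
The source estimates are consistent with the standard batch (equilibrium) melting equation; assuming that model, a quick check with the abstract's numbers reads:

\[ C_{\mathrm{melt}} = \frac{C_0}{D + F\,(1 - D)} \]

With C_0 = 386 ppm H2O and D = 0.0208, F = 0.15 gives C ≈ 386/0.168 ≈ 2300 ppm ≈ 0.23 wt.%, while F = 0.01 gives C ≈ 386/0.031 ≈ 12600 ppm ≈ 1.26 wt.%, bracketing the measured 0.25 to 1.49 wt.% range over the estimated 1 to 15% melting interval.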

Relevance:

30.00%

Publisher:

Abstract:

Ruin occurs the first time the surplus of a company or an institution becomes negative. In the Omega model, it is assumed that even with a negative surplus, the company can do business as usual until bankruptcy occurs. The probability of bankruptcy at a point in time depends only on the value of the negative surplus at that time. Under the assumption of Brownian motion for the surplus, the expected discounted value of a penalty at bankruptcy is determined, and hence the probability of bankruptcy. There is an intrinsic relation between the probability of no bankruptcy and an exposure random variable. In special cases, the distribution of the total time the Brownian motion spends below zero is found, and the Laplace transform of the integral of the negative part of the Brownian motion is expressed in terms of the Airy function of the first kind.
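
A sketch of the structure, assuming the usual formulation of such models rather than quoting the paper: the bankruptcy intensity is a function ω(x) ≥ 0 of the current negative surplus, and with the surplus following a Brownian motion with positive drift μ and variance σ², the probability of bankruptcy ψ(x) satisfies

\[ \frac{\sigma^2}{2}\,\psi''(x) + \mu\,\psi'(x) + \omega(x)\,\bigl[1 - \psi(x)\bigr] = 0, \qquad x < 0, \]

\[ \frac{\sigma^2}{2}\,\psi''(x) + \mu\,\psi'(x) = 0, \qquad x \geq 0, \]

with ψ(+∞) = 0 and value matching and smooth pasting at x = 0. For ω growing linearly in |x|, the equation on the negative half-line is of Airy type, which is how the Airy function of the first kind enters the analysis.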