61 results for BENCHMARK
Abstract:
The central message of this paper is that nobody should be using the sample covariance matrix for the purpose of portfolio optimization. It contains estimation error of the kind most likely to perturb a mean-variance optimizer. In its place, we suggest using the matrix obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients towards more central values, thereby systematically reducing estimation error where it matters most. Statistically, the challenge is to know the optimal shrinkage intensity, and we give the formula for that. Without changing any other step in the portfolio optimization process, we show on actual stock market data that shrinkage reduces tracking error relative to a benchmark index, and substantially increases the realized information ratio of the active portfolio manager.
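A minimal sketch of the shrinkage transformation described above, assuming a constant-correlation shrinkage target and a given intensity delta (the paper derives the optimal intensity; the function name and simulated data are illustrative):

```python
import numpy as np

def shrink_covariance(returns, delta):
    """Shrink the sample covariance matrix toward a structured target.

    returns : (T, N) array of asset returns
    delta   : shrinkage intensity in [0, 1]; the paper gives a formula
              for the optimal value, here it is taken as given
    """
    N = returns.shape[1]
    sample = np.cov(returns, rowvar=False)       # sample covariance S

    # Constant-correlation target F: keep the sample variances, replace
    # every pairwise correlation by the average correlation.
    std = np.sqrt(np.diag(sample))
    corr = sample / np.outer(std, std)
    avg_corr = (corr.sum() - N) / (N * (N - 1))  # mean off-diagonal entry
    target = avg_corr * np.outer(std, std)
    np.fill_diagonal(target, np.diag(sample))

    # The convex combination pulls extreme coefficients toward the target.
    return delta * target + (1.0 - delta) * sample

rng = np.random.default_rng(0)
R = rng.normal(size=(60, 10))                    # 60 periods, 10 assets
sigma_hat = shrink_covariance(R, delta=0.3)
```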
Abstract:
We analyze recent contributions to growth theory based on the model of expanding variety of Romer (1990). In the first part, we present different versions of the benchmark linear model with imperfect competition. These include the lab-equipment model, labor-for-intermediates, and directed technical change. We review applications of the expanding variety framework to the analysis of international technology diffusion, trade, cross-country productivity differences, financial development, and fluctuations. In many such applications, a key role is played by complementarities in the process of innovation.
Abstract:
We perform an experiment on a pure coordination game with uncertainty about the payoffs. Our game is closely related to models that have been used in many macroeconomic and financial applications to solve problems of equilibrium indeterminacy. In our experiment each subject receives a noisy signal about the true payoffs. This game has a unique strategy profile that survives the iterative deletion of strictly dominated strategies (thus a unique Nash equilibrium). The equilibrium outcome coincides, on average, with the risk-dominant equilibrium outcome of the underlying coordination game. The behavior of the subjects converges to the theoretical prediction after enough experience has been gained. The data (and the comments) suggest that subjects do not apply the iterated deletion of dominated strategies through "a priori" reasoning. Instead, they adapt to the responses of other players. Thus, the length of the learning phase clearly varies for the different signals. We also test behavior in a game without uncertainty as a benchmark case. The game with uncertainty is inspired by the "global" games of Carlsson and Van Damme (1993).
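For reference, the iterated deletion the abstract refers to can be sketched for a two-player game as follows; this simplification only checks domination by pure strategies, and the payoff matrices are illustrative:

```python
import numpy as np

def iterated_deletion(A, B):
    """Iteratively delete strictly dominated pure strategies.

    A, B : payoff matrices of the row and column player. Only domination
    by another pure strategy is checked (a simplification). Returns the
    indices of the surviving row and column strategies.
    """
    rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        for i in list(rows):   # rows dominated given surviving columns
            if any(all(A[k, j] > A[i, j] for j in cols)
                   for k in rows if k != i):
                rows.remove(i)
                changed = True
        for j in list(cols):   # columns dominated given surviving rows
            if any(all(B[i, k] > B[i, j] for i in rows)
                   for k in cols if k != j):
                cols.remove(j)
                changed = True
    return rows, cols

# Prisoner's-dilemma payoffs: the second action strictly dominates.
A = np.array([[3, 0], [5, 1]])
B = np.array([[3, 5], [0, 1]])
print(iterated_deletion(A, B))   # -> ([1], [1])
```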
Abstract:
In this paper we consider the equilibrium effects of an institutional investor whose performance is benchmarked to an index. In a partial equilibrium setting, the objective of the institutional investor is modeled as the maximization of expected utility (an increasing and concave function, in order to accommodate risk aversion) of final wealth minus a benchmark. In equilibrium this optimal strategy gives rise to the two-beta CAPM in Brennan (1993): together with the market beta, a new risk factor (which we call active management risk) is brought into the analysis. This new beta is defined as the normalized (to the benchmark's variance) covariance between the asset excess return and the excess return of the market over the benchmark index. In contrast to Brennan, the empirical test supports the model's predictions. The cross-section return on the active management risk is positive and significant, especially after 1990, when institutional investors became the representative agent of the market.
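Reading the abstract's definition literally, the active-management beta can be estimated from return series as sketched below; the normalization by the benchmark's variance follows the abstract's wording, and the simulated series are illustrative assumptions:

```python
import numpy as np

def active_management_beta(r_asset, r_market, r_benchmark, r_free):
    """Covariance of the asset's excess return with the market's excess
    return over the benchmark, normalized by the benchmark's variance
    (a literal reading of the abstract; the paper's normalization may
    differ in detail)."""
    cov = np.cov(r_asset - r_free, r_market - r_benchmark)[0, 1]
    return cov / np.var(r_benchmark, ddof=1)

# Simulated monthly series, for illustration only.
rng = np.random.default_rng(1)
rb = rng.normal(0.005, 0.04, 120)             # benchmark index return
rm = rb + rng.normal(0.001, 0.01, 120)        # market tracks the benchmark
ri = 0.9 * rm + rng.normal(0.0, 0.02, 120)    # one asset
print(active_management_beta(ri, rm, rb, 0.002))
```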
Abstract:
One plausible mechanism through which financial market shocks may propagate across countries is through the impact that past gains and losses may have on investors' risk aversion and behavior. This paper presents a stylized model illustrating how heterogeneous changes in investors' risk aversion affect portfolio allocation decisions and stock prices. Our empirical findings suggest that when funds' returns are below average, they adjust their holdings toward the average (or benchmark) portfolio. In so doing, funds tend to sell the assets of countries in which they were overweight, increasing their exposure to countries in which they were underweight. Based on this insight, the paper constructs an index of financial interdependence which reflects the extent to which countries share overexposed funds. The index helps explain the pattern of stock market comovement across countries. Moreover, a comparison of this interdependence measure to indices of trade or commercial bank linkages indicates that our index can improve predictions about which countries are more likely to be affected by contagion from crisis centers.
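The abstract does not give the formula for the interdependence index, so the sketch below is a hypothetical construction in its spirit: overexposures are the positive parts of fund weights minus benchmark weights, and a country pair scores high when many funds are overweight in both:

```python
import numpy as np

def interdependence_index(weights, benchmark):
    """Pairwise index of how strongly countries share overexposed funds.

    weights   : (F, C) fund portfolio weights over C countries
    benchmark : (C,) benchmark weights
    Overexposure is the positive part of weight minus benchmark; the
    index for a country pair sums, over funds, the product of the two
    overexposures. This construction is illustrative, not the paper's.
    """
    over = np.clip(weights - benchmark, 0.0, None)
    return over.T @ over                        # (C, C) symmetric index

weights = np.array([[0.5, 0.3, 0.2],            # three funds ...
                    [0.2, 0.5, 0.3],
                    [0.4, 0.4, 0.2]])           # ... over three countries
benchmark = np.array([0.3, 0.4, 0.3])
print(interdependence_index(weights, benchmark))
```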
Abstract:
We present a leverage theory of reputation building with co-branding. We show that under certain conditions, co-branding that links unknown firms in a new sector with established firms in a mature sector allows the unknown firms to signal a high product quality and establish their own reputation. We compare this situation with a benchmark in which both sectors are new and firms signal their quality only with prices. We investigate how this comparison is affected by the nature of the technology linking the two sectors and a cross-sector inference problem that consumers might face in identifying the true cause of product failure. We find that co-branding facilitates the process by which a firm in the new sector signals its product quality only if the co-branding sectors produce complementary inputs and consumers face a cross-sector inference problem. We apply our insight to the economics of superstars, multinational firms, and co-authorship.
Abstract:
The network choice revenue management problem models customers as choosing from an offer set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, under the choice-set paradigm, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints that project onto subsets of intersections. In addition we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement, and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation, and we believe it is a very promising approach for quickly approximating the CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.
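For concreteness, a toy CDLP can be written as a linear program over offer sets, as sketched below; the instance, the single resource, and the use of scipy are illustrative assumptions, not the paper's SDCP formulation or its product constraints:

```python
import numpy as np
from scipy.optimize import linprog

# Toy CDLP: 2 products, 1 resource, all 4 offer sets. Illustrative data.
T, capacity, lam = 100.0, 60.0, 1.0          # horizon, capacity, arrivals
r = np.array([100.0, 80.0])                  # product revenues
offer_sets = [(), (0,), (1,), (0, 1)]
P = {(): [0.0, 0.0], (0,): [0.5, 0.0],       # purchase probabilities
     (1,): [0.0, 0.6], (0, 1): [0.4, 0.3]}   # P_j(S), made up for the demo

# CDLP: max sum_S R(S) t_S  subject to  sum_S Q(S) t_S <= capacity and
# sum_S t_S <= T, where t_S is the time offer set S is shown, R(S) the
# revenue rate and Q(S) the resource-consumption rate under S.
R = [lam * sum(P[S][j] * r[j] for j in (0, 1)) for S in offer_sets]
Q = [lam * sum(P[S]) for S in offer_sets]

res = linprog(c=-np.array(R),                # maximize via minimizing -R
              A_ub=np.vstack([Q, np.ones(len(offer_sets))]),
              b_ub=[capacity, T],
              bounds=[(0, None)] * len(offer_sets))
print(dict(zip(offer_sets, np.round(res.x, 2))), "revenue:", -res.fun)
```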
Abstract:
We model firm-owned capital in a stochastic dynamic New Keynesian general equilibrium model à la Calvo. We find that this structure implies equilibrium dynamics which are quantitatively different from the ones associated with a benchmark case where households accumulate capital and rent it to firms. Our findings therefore stress the importance of modeling an investment decision at the firm level in addition to a meaningful price-setting decision. Along the way we argue that the problem of modeling firm-owned capital with Calvo price setting has not been solved correctly in the previous literature.
Abstract:
Does financial development result in capital being reallocated more rapidly to industries where it is most productive? We argue that if this were the case, financially developed countries should see faster growth in industries with investment opportunities due to global demand and productivity shifts. Testing this cross-industry cross-country growth implication requires proxies for (latent) global industry investment opportunities. We show that tests relying only on data from specific (benchmark) countries may yield spurious evidence for or against the hypothesis. We therefore develop an alternative approach that combines benchmark-country proxies with a proxy that does not reflect opportunities specific to a country or level of financial development. Our empirical results yield clear support for the capital reallocation hypothesis.
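A minimal sketch of the kind of interaction regression this test implies, in the Rajan-Zingales tradition, on simulated data with illustrative variable names and with industry and country fixed effects omitted for brevity:

```python
import numpy as np
import statsmodels.api as sm

# growth_{i,c} = b * (FinDev_c x Opportunity_i) + e_{i,c}, simulated.
rng = np.random.default_rng(2)
n_ind, n_cty = 20, 30
fin_dev = rng.uniform(0, 1, n_cty)           # country financial development
opportunity = rng.normal(0, 1, n_ind)        # global industry opportunity
ind, cty = np.meshgrid(np.arange(n_ind), np.arange(n_cty), indexing="ij")
interaction = (fin_dev[cty] * opportunity[ind]).ravel()
growth = 0.5 * interaction + rng.normal(0, 1, interaction.size)

fit = sm.OLS(growth, sm.add_constant(interaction)).fit()
print(fit.params)   # slope should recover the true 0.5
```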
Abstract:
In a closed economy context there is common agreement on price inflation stabilization being one of the objectives of monetary policy. Moving to an open economy context gives rise to the coexistence of two measures of inflation: domestic inflation (DI) and consumer price inflation (CPI). Which of the two measures should be the target variable? This is the question addressed in this paper. In particular, I use a small open economy model to show that once sticky wages indexed to past CPI inflation are introduced, a completely inward-looking monetary policy is no longer optimal. I first derive a loss function from a second-order approximation of the utility function, and then compute the fully optimal monetary policy under commitment. I then use the optimal monetary policy as a benchmark to compare the performance of different monetary policy rules. The main result is that once a positive degree of indexation is introduced in the model, the best-performing rule (among the Taylor-type rules considered) is the one targeting wage inflation and CPI inflation. Moreover, this rule delivers results very close to those obtained under the fully optimal monetary policy with commitment.
Abstract:
The set covering problem is an NP-hard combinatorial optimization problem that arises in applications ranging from crew scheduling in airlines to driver scheduling in public mass transport. In this paper we analyze search space characteristics of a widely used set of benchmark instances through an analysis of the fitness-distance correlation. This analysis shows that there exist several classes of set covering instances that have largely different behavior. For instances with high fitness-distance correlation, we propose new ways of generating core problems and analyze the performance of algorithms exploiting these core problems.
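The fitness-distance correlation itself is just the Pearson correlation between solution quality and distance to the nearest known optimum; a minimal sketch with made-up numbers:

```python
import numpy as np

def fitness_distance_correlation(fitness, distance):
    """Pearson correlation between solution fitness and distance to the
    nearest known optimum. For minimization, values near 1 mean better
    solutions lie closer to the optimum, the structure that core-problem
    approaches exploit."""
    return np.corrcoef(np.asarray(fitness, float),
                       np.asarray(distance, float))[0, 1]

# Cover costs vs. Hamming distance to the best known cover (made up).
cost = [412, 430, 455, 471, 502]
dist = [3, 5, 9, 12, 15]
print(fitness_distance_correlation(cost, dist))
```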
Abstract:
We conduct a large-scale comparative study on linearly combining superparent-one-dependence estimators (SPODEs), a popular family of seminaive Bayesian classifiers. Altogether, 16 model selection and weighing schemes, 58 benchmark data sets, and various statistical tests are employed. This paper's main contributions are threefold. First, it formally presents each scheme's definition, rationale, and time complexity and hence can serve as a comprehensive reference for researchers interested in ensemble learning. Second, it offers bias-variance analysis for each scheme's classification error performance. Third, it identifies effective schemes that meet various needs in practice. This leads to accurate and fast classification algorithms which have an immediate and significant impact on real-world applications. Another important feature of our study is using a variety of statistical tests to evaluate multiple learning methods across multiple data sets.
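As a rough illustration of what is being combined: each SPODE conditions every attribute on the class and on one shared superparent attribute, and the ensemble mixes the per-SPODE class scores with a weight vector (uniform weights recover AODE). The sketch below, including its smoothing choices, is an illustrative assumption rather than any of the paper's 16 schemes:

```python
import numpy as np

class SpodeEnsemble:
    """Linear combination of SPODEs over integer-coded categorical data.
    Uniform weights recover AODE. Laplace smoothing throughout; this is
    an illustration, not one of the paper's schemes."""

    def fit(self, X, y):
        self.X, self.y = X, y
        self.n, self.d = X.shape
        self.vals = [int(X[:, j].max()) + 1 for j in range(self.d)]
        self.classes = int(y.max()) + 1
        return self

    def _spode_log_score(self, p, x):
        """log P(y, x_p) + sum_{i != p} log P(x_i | y, x_p), per class."""
        scores = np.zeros(self.classes)
        for c in range(self.classes):
            mask = (self.y == c) & (self.X[:, p] == x[p])
            joint = (mask.sum() + 1) / (self.n + self.classes * self.vals[p])
            s = np.log(joint)
            for i in range(self.d):
                if i != p:
                    match = (mask & (self.X[:, i] == x[i])).sum()
                    s += np.log((match + 1) / (mask.sum() + self.vals[i]))
            scores[c] = s
        return scores

    def predict(self, x, weights=None):
        w = np.full(self.d, 1.0 / self.d) if weights is None else weights
        probs = np.stack([np.exp(self._spode_log_score(p, x))
                          for p in range(self.d)])
        return int(np.argmax(w @ probs))    # mix SPODEs, pick best class

X = np.array([[0, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 1]])
y = np.array([0, 0, 1, 1])
print(SpodeEnsemble().fit(X, y).predict(np.array([0, 1, 1])))
```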
Abstract:
The main goal of this article is to provide an answer to the question: "Does anything forecast exchange rates, and if so, which variables?" It is well known that exchange rate fluctuations are very difficult to predict using economic models, and that a random walk forecasts exchange rates better than any economic model (the Meese and Rogoff puzzle). However, the recent literature has identified a series of fundamentals/methodologies that claim to have resolved the puzzle. This article provides a critical review of the recent literature on exchange rate forecasting and illustrates the new methodologies and fundamentals that have been recently proposed in an up-to-date, thorough empirical analysis. Overall, our analysis of the literature and the data suggests that the answer to the question "Are exchange rates predictable?" is "It depends": on the choice of predictor, forecast horizon, sample period, model, and forecast evaluation method. Predictability is most apparent when one or more of the following hold: the predictors are Taylor rule fundamentals or net foreign assets, the model is linear, and a small number of parameters are estimated. The toughest benchmark is the random walk without drift.
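The exercise behind such comparisons is a rolling out-of-sample forecast evaluated against the driftless random walk; a minimal sketch, with a simulated fundamental standing in for, say, a Taylor rule term:

```python
import numpy as np

def oos_rmse_ratio(rate, predictor, window=60):
    """Rolling one-step forecasts from a linear model on a fundamental,
    evaluated against the driftless random walk (forecast: no change).
    A ratio below 1 means the predictor beats the benchmark."""
    errs_model, errs_rw = [], []
    for t in range(window, len(rate) - 1):
        dy = np.diff(rate[t - window:t + 1])     # past rate changes
        x = predictor[t - window:t]              # lagged fundamental
        beta = np.polyfit(x, dy, 1)              # rolling OLS
        forecast = rate[t] + np.polyval(beta, predictor[t])
        errs_model.append(rate[t + 1] - forecast)
        errs_rw.append(rate[t + 1] - rate[t])    # random walk forecast
    rmse = lambda e: np.sqrt(np.mean(np.square(e)))
    return rmse(errs_model) / rmse(errs_rw)

rng = np.random.default_rng(3)
fund = rng.normal(0, 1, 300)                     # stand-in fundamental
changes = 0.1 * fund + rng.normal(0, 1, 300)     # rate changes it drives
rate = np.concatenate([[0.0], np.cumsum(changes)])
print(oos_rmse_ratio(rate, fund))
```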
Abstract:
We introduce a width parameter that bounds the complexity of classical planning problems and domains, along with a simple but effective blind-search procedure that runs in time that is exponential in the problem width. We show that many benchmark domains have a bounded and small width provided that goals are restricted to single atoms, and hence that such problems are provably solvable in low polynomial time. We then focus on the practical value of these ideas over the existing benchmarks which feature conjunctive goals. We show that the blind-search procedure can be used for both serializing the goal into subgoals and for solving the resulting problems, resulting in a ‘blind’ planner that competes well with a best-first search planner guided by state-of-the-art heuristics. In addition, ideas like helpful actions and landmarks can be integrated as well, producing a planner with state-of-the-art performance.
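A minimal sketch of width-1 blind search, IW(1): breadth-first search that prunes every state failing to make at least one atom true for the first time. The toy domain, state encoding, and function names are illustrative assumptions:

```python
from collections import deque

def iw1(initial, goals, successors):
    """IW(1): breadth-first search pruning states with no novel atom.

    States are frozensets of atoms; `successors` maps a state to
    (action, next_state) pairs. Returns a plan or None."""
    seen_atoms = set(initial)
    queue = deque([(initial, [])])
    while queue:
        state, plan = queue.popleft()
        if goals <= state:                 # all goal atoms reached
            return plan
        for action, nxt in successors(state):
            if nxt - seen_atoms:           # keep only novel states
                seen_atoms |= nxt
                queue.append((nxt, plan + [action]))
    return None

# Toy single-atom-goal domain: move from a to c via b.
def successors(state):
    moves = {"at-a": [("a->b", "at-b")], "at-b": [("b->c", "at-c")]}
    for atom in state:
        for action, new_atom in moves.get(atom, []):
            yield action, frozenset({new_atom})

print(iw1(frozenset({"at-a"}), frozenset({"at-c"}), successors))
# -> ['a->b', 'b->c']
```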
Abstract:
The study presented below aims to understand the reality of leisure among people aged 50 to 70 in the municipalities of Malla (Catalonia, Spain) and San Juan la Laguna (Sololá, Guatemala) from a humanist perspective, in terms of both conception and practice, and to see what influence and force the characteristics of the surrounding society exert on it. To carry this out, we first conducted a process of approximation to the concept of leisure, together with specific research on humanist leisure. From there, the study was carried out with a sample of ten people from the municipality of Malla and ten members of San Juan la Laguna, all between 50 and 70 years old and with different economic conditions and lifestyles. To conduct the research and the analysis of humanist leisure in the contexts of Malla and San Juan la Laguna, a qualitative methodology was used, with the interview as its instrument. The interview was developed taking the Grounded Theory methodology (Glaser and Strauss, 1967) as its frame of reference. The project also has an ethnographic component. The results obtained show that there is a significant presence of humanist leisure in the contexts analyzed, but that in the case of San Juan la Laguna it remains an element under construction.