1000 results for Universitat Pompeu Fabra
Abstract:
Sequential randomized prediction of an arbitrary binary sequence is investigated. No assumption is made on the mechanism generating the bit sequence. The goal of the predictor is to minimize its relative loss, i.e., to make (almost) as few mistakes as the best "expert" in a fixed, possibly infinite, set of experts. We point out a surprising connection between this prediction problem and empirical process theory. First, in the special case of static (memoryless) experts, we completely characterize the minimax relative loss in terms of the maximum of an associated Rademacher process. Then we show general upper and lower bounds on the minimax relative loss in terms of the geometry of the class of experts. As main examples, we determine the exact order of magnitude of the minimax relative loss for the class of autoregressive linear predictors and for the class of Markov experts.
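As a rough reminder of the objects involved (our schematic notation; the exact normalization in the characterization is not reproduced here), the relative loss and the Rademacher quantity read:

```latex
% Schematic notation (ours). Relative loss of a predictor \hat{y}_t against
% an expert class \mathcal{F} on bits y_1,\dots,y_n:
\[
  R_n(\mathcal{F}) \;=\; \sum_{t=1}^{n} \ell(\hat{y}_t, y_t)
  \;-\; \inf_{f \in \mathcal{F}} \sum_{t=1}^{n} \ell(f_t, y_t).
\]
% For static (memoryless) experts the minimax value of R_n is characterized
% by a Rademacher maximum, up to normalization:
\[
  V_n(\mathcal{F}) \;\asymp\; \mathbb{E}\,\sup_{f \in \mathcal{F}}
  \sum_{t=1}^{n} \sigma_t f_t,
  \qquad \sigma_1,\dots,\sigma_n \ \text{i.i.d. Rademacher signs}.
\]
```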
Abstract:
We perform an experimental test of Maskin's canonical mechanism for Nash implementation, using 3 subjects in non-repeated groups, as well as 3 outcomes, states of nature, and integer choices. We find that this mechanism successfully implements the desired outcome a large majority of the time, and an embedded comprehension test indicates that subjects were generally able to comprehend their decision tasks. The performance can also be improved by imposing a fine on non-designated dissidents. We offer some explanations for the imperfect implementation, including risk preferences, the possibilities that agents have for collusion, and the mixed-strategy equilibria of the game.
Abstract:
Under plausible assumptions about preferences and technology, the model in this paper suggests that the entire volume of world trade matters for wage inequality. Therefore, trade integration, even among identical countries, is likely to increase the skill premium. Further, we argue that empirical evidence of a falling relative price of skill-intensive goods can be reconciled with the fast growth of world trade, and that the intersectoral mobility of capital exacerbates the effect of trade on inequality. We provide new empirical evidence in support of our results and a quantitative assessment of the skill bias of world trade.
Abstract:
I describe the customer valuations game, a simple, intuitive game that can serve as a foundation for teaching revenue management. The game requires little or no preparation, props, or software, takes around two hours (and hence can be finished in one session), and illustrates the formation of classical (airline and hotel) revenue management mechanisms such as advance purchase discounts, booking limits, and fixed multiple prices. I normally use the game as a base from which to introduce RM and to develop RM forecasting and optimization concepts. The game is particularly suited for non-technical audiences.
Abstract:
Monetary policy is conducted in an environment of uncertainty. This paper sets up a model where the central bank uses real-time data from the bond market together with standard macroeconomic indicators to estimate the current state of the economy more efficiently, while taking into account that its own actions influence what it observes. The timeliness of bond market data allows for quicker responses of monetary policy to disturbances compared to the case when the central bank has to rely solely on collected aggregate data. The information content of the term structure creates a link between the bond market and the macroeconomy that is novel to the literature. To quantify the importance of the bond market as a source of information, the model is estimated on data for the United States and Australia using Bayesian methods. The empirical exercise suggests that there is some information in the US term structure that helps the Federal Reserve to identify shocks to the economy on a timely basis. Australian bond prices seem to be less informative than their US counterparts, perhaps because Australia is a relatively small and open economy.
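The information channel, a timely bond-market observable sharpening the estimate of an unobserved state, can be illustrated with a toy filtering exercise. This is a minimal sketch with invented parameters and a scalar state, not the paper's estimated model:

```python
# Toy illustration (not the paper's model): adding a timely, less noisy
# bond-market observable to a noisier macro indicator tightens the filtered
# estimate of a persistent latent state. All parameter values are made up.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a persistent latent state (the "current state of the economy").
T, rho, q = 200, 0.9, 0.1
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + q * rng.standard_normal()

# Two noisy observables: a macro indicator and a timelier bond-market signal.
sig_macro, sig_bond = 0.5, 0.2
y_macro = x + sig_macro * rng.standard_normal(T)
y_bond = x + sig_bond * rng.standard_normal(T)


def kalman_filter(obs, obs_std, rho, q):
    """Scalar Kalman filter; obs may stack several observables of the state."""
    obs = np.atleast_2d(obs)                        # shape (m, n)
    r = np.asarray(obs_std, dtype=float) ** 2       # measurement variances
    m, n = obs.shape
    xhat, p = 0.0, 1.0
    out = np.empty(n)
    for t in range(n):
        xhat, p = rho * xhat, rho ** 2 * p + q ** 2  # predict
        for i in range(m):                           # update with each observable
            k = p / (p + r[i])                       # Kalman gain
            xhat += k * (obs[i, t] - xhat)
            p *= 1.0 - k
        out[t] = xhat
    return out


est_macro = kalman_filter(y_macro, [sig_macro], rho, q)
est_both = kalman_filter(np.vstack([y_macro, y_bond]), [sig_macro, sig_bond], rho, q)

print("RMSE, macro data only :", np.sqrt(np.mean((est_macro - x) ** 2)).round(3))
print("RMSE, macro + bonds   :", np.sqrt(np.mean((est_both - x) ** 2)).round(3))
```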
Abstract:
The view of an 1870-1913 expanding European economy providing increasing welfare to everybody has been challenged by many, then and now. We focus on the amazing growth that was experienced, its diffusion and its sources, in the context of the permanent competition among European nation states. During 1870-1913 the globalized European economy reached a silver age. GDP growth was quite rapid (2.15% per annum) and diffused all over Europe. Even discounting the high rate of population growth (1.06%), per capita growth was left at a respectable 1.08%. Income per capita was rising in every country, and the rates of improvement were quite similar. This was a major achievement after two generations of highly localized growth, both geographically and socially. Growth was based on the increased use of labour and capital, but a good part of growth (73 per cent for the weighted average of the best documented European countries) came out of total factor productivity, efficiency gains resulting from ultimate sources of growth that are not well specified. This proportion suggests that the European economy was growing at full capacity, at its production frontier. It would have been very difficult to improve its performance. Within Europe, convergence was limited, and it was only in motion after 1900. What happened was more the end of the era of big divergence than the beginning of an era of convergence.
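As a back-of-the-envelope check (ours, not the paper's), the per capita figure follows from the aggregate and population rates, and the TFP statement rests on the standard growth-accounting decomposition:

```latex
% Back-of-the-envelope check of the per capita growth rate quoted above:
\[
  \frac{1 + g_{\text{GDP}}}{1 + g_{\text{pop}}} - 1
  \;=\; \frac{1.0215}{1.0106} - 1 \;\approx\; 0.0108
  \;=\; 1.08\% \ \text{per annum}.
\]
% Standard growth-accounting decomposition behind the TFP share
% (factor shares \alpha and 1-\alpha are not reported here):
\[
  \Delta \ln Y \;=\; \alpha\, \Delta \ln K \;+\; (1-\alpha)\, \Delta \ln L
  \;+\; \Delta \ln \mathrm{TFP}.
\]
```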
Abstract:
We study the earnings structure and the equilibrium assignment of workers when workers exert intra-firm spillovers on each other. We allow for arbitrary spillovers provided output depends on some aggregate index of workers' skill. Despite the possibility of increasing returns to skills, equilibrium typically exists. We show that equilibrium will typically be segregated: the skill space can be partitioned into a set of segments such that any firm hires from only one segment. Next, we apply the model to analyze the effect of information technology on segmentation and the distribution of income. There are two types of human capital, productivity and creativity, i.e., the ability to produce ideas that may be duplicated over a network. Under plausible assumptions, inequality rises and then falls when network size increases, and the poorest workers cannot lose. We also analyze the impact of an improvement in worker quality and of an increased international mobility of ideas.
Abstract:
This paper presents an Optimised Search Heuristic that combines a tabu search method with the verification of violated valid inequalities. The solution delivered by the tabu search is partially destroyed by a randomised greedy procedure, and then the valid inequalities are used to guide the reconstruction of a complete solution. An application of the new method to the Job-Shop Scheduling problem is presented.
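The loop structure described, tabu search followed by randomised greedy destruction and a reconstruction steered by checks for violated valid inequalities, can be sketched generically. The following Python sketch uses a toy travelling-salesman instance and a deliberately simplistic "cut" check of our own invention; it only illustrates the control flow, not the authors' job-shop formulation or their valid inequalities:

```python
# Generic, self-contained sketch of an Optimised Search Heuristic of the kind
# described: tabu search, randomised greedy destruction, and reconstruction
# guided by violated "cuts". Toy TSP instance and toy cut check, ours only.
import math
import random

random.seed(1)
POINTS = [(random.random(), random.random()) for _ in range(12)]


def cost(tour):
    """Total length of a closed tour through POINTS."""
    return sum(math.dist(POINTS[tour[i]], POINTS[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))


def tabu_search(tour, iters=100, tenure=8):
    """Take the best non-tabu swap each iteration, keep the best tour seen."""
    best, cur, tabu = tour[:], tour[:], []
    for _ in range(iters):
        moves = [(i, j) for i in range(len(cur)) for j in range(i + 1, len(cur))
                 if (i, j) not in tabu]

        def swapped(move):
            i, j = move
            t = cur[:]
            t[i], t[j] = t[j], t[i]
            return t

        move = min(moves, key=lambda m: cost(swapped(m)))
        cur = swapped(move)
        tabu = (tabu + [move])[-tenure:]
        if cost(cur) < cost(best):
            best = cur[:]
    return best


def destroy(tour, fraction=0.3):
    """Randomised destruction: drop a random subset of cities."""
    removed = random.sample(tour, int(fraction * len(tour)))
    return [c for c in tour if c not in removed], removed


def violated_cuts(partial):
    """Toy stand-in for violated valid inequalities: flag unusually long edges."""
    avg = cost(partial) / max(len(partial), 1)
    return {(partial[i], partial[(i + 1) % len(partial)])
            for i in range(len(partial))
            if math.dist(POINTS[partial[i]],
                         POINTS[partial[(i + 1) % len(partial)]]) > 1.5 * avg}


def repair(partial, removed, cuts):
    """Greedy reconstruction, preferring insertions that break a flagged edge."""
    tour = partial[:]
    for city in removed:
        def score(p):
            edge = (tour[p], tour[(p + 1) % len(tour)])
            trial = tour[:p + 1] + [city] + tour[p + 1:]
            return cost(trial) - (0.1 if edge in cuts else 0.0)

        best_p = min(range(len(tour)), key=score)
        tour = tour[:best_p + 1] + [city] + tour[best_p + 1:]
    return tour


def optimised_search_heuristic(iterations=20):
    best = tabu_search(list(range(len(POINTS))))
    for _ in range(iterations):
        partial, removed = destroy(best)
        candidate = repair(partial, removed, violated_cuts(partial))
        candidate = tabu_search(candidate)
        if cost(candidate) < cost(best):
            best = candidate
    return best


print("tour cost:", round(cost(optimised_search_heuristic()), 3))
```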
Abstract:
The spectacular failure of top-rated structured finance products has brought renewed attention to the conflicts of interest of Credit Rating Agencies (CRAs). We model both the CRA conflict of understating credit risk to attract more business, and the issuer conflict of purchasing only the most favorable ratings (issuer shopping), and examine the effectiveness of a number of proposed regulatory solutions for CRAs. We find that CRAs are more prone to inflate ratings when there is a larger fraction of naive investors in the market who take ratings at face value, or when CRA expected reputation costs are lower. To the extent that in booms the fraction of naive investors is higher, and the reputation risk for CRAs of getting caught understating credit risk is lower, our model predicts that CRAs are more likely to understate credit risk in booms than in recessions. We also show that, due to issuer shopping, competition among CRAs in a duopoly is less efficient (conditional on the same equilibrium CRA rating policy) than having a monopoly CRA, in terms of both total ex-ante surplus and investor surplus. Allowing tranching decreases total surplus further. We argue that regulatory intervention requiring upfront payments for rating services (before CRAs propose a rating to the issuer), combined with mandatory disclosure of any rating produced by CRAs, can substantially mitigate the conflicts of interest of both CRAs and issuers.
Abstract:
We compare two methods for visualising contingency tables and develop a method called the ratio map which combines the good properties of both. The first is a biplot based on the logratio approach to compositional data analysis. This approach is founded on the principle of subcompositional coherence, which assures that results are invariant to considering subsets of the composition. The second approach, correspondence analysis, is based on the chi-square approach to contingency table analysis. A cornerstone of correspondence analysis is the principle of distributional equivalence, which assures invariance in the results when rows or columns with identical conditional proportions are merged. Both methods may be described as singular value decompositions of appropriately transformed matrices. Correspondence analysis includes a weighting of the rows and columns proportional to the margins of the table. If this idea of row and column weights is introduced into the logratio biplot, we obtain a method which obeys both principles of subcompositional coherence and distributional equivalence.
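A minimal numerical sketch of the weighted log-ratio ("ratio map") idea, take logs, double-centre with the margins as weights, and compute a weighted SVD, is given below on a made-up 3x3 table; the exact scaling conventions are ours and may differ from the paper's:

```python
# Sketch of a weighted log-ratio ("ratio map") decomposition of a contingency
# table: log-transform, double-centre with the row and column margins as
# weights, then take a weighted SVD. Toy data and scaling conventions are ours.
import numpy as np

N = np.array([[35.0, 10.0,  5.0],
              [20.0, 30.0, 10.0],
              [ 5.0, 15.0, 40.0]])         # toy table, all counts > 0

P = N / N.sum()                            # correspondence matrix
r = P.sum(axis=1)                          # row margins (weights)
c = P.sum(axis=0)                          # column margins (weights)

L = np.log(P)
# weighted double-centring of the log matrix
L_centred = (L
             - np.outer(L @ c, np.ones_like(c))      # subtract weighted row means
             - np.outer(np.ones_like(r), r @ L)      # subtract weighted column means
             + (r @ L @ c))                          # add back the weighted grand mean

S = np.sqrt(r)[:, None] * L_centred * np.sqrt(c)[None, :]
U, sing, Vt = np.linalg.svd(S, full_matrices=False)

# principal coordinates of rows and columns (first two dimensions)
rows = (U / np.sqrt(r)[:, None]) * sing
cols = (Vt.T / np.sqrt(c)[:, None]) * sing
print("singular values:", np.round(sing, 4))
print("row coordinates:\n", np.round(rows[:, :2], 3))
print("column coordinates:\n", np.round(cols[:, :2], 3))
```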
Abstract:
Before the Civil War (1936-1939), Spain had seen the emergence of firms with complex organizational forms. However, the conflict and the postwar years changed this pattern. The argument put forward in this paper is based on historical experience, and the effort is devoted to explaining the development of Spanish entrepreneurship during the second half of the twentieth century. To illustrate the change in entrepreneurship and organizational patterns among Spanish firms during the Francoist regime, we turn to the case of the motor vehicle industry.
Abstract:
We consider the application of normal theory methods to the estimation and testing of a general type of multivariate regression model with errors-in-variables, in the case where various data sets are merged into a single analysis and the observable variables possibly deviate from normality. The various samples to be merged can differ in the set of observable variables available. We show that there is a convenient way to parameterize the model so that, despite the possible non-normality of the data, normal-theory methods yield correct inferences for the parameters of interest and for the goodness-of-fit test. The theory described encompasses both the functional and structural model cases, and can be implemented using standard software for structural equation models, such as LISREL, EQS, and LISCOMP, among others. An illustration with Monte Carlo data is presented.
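For orientation, a generic errors-in-variables regression of the type referred to can be written as follows (our schematic notation, not the paper's exact parameterization):

```latex
% Schematic errors-in-variables regression (our notation):
% latent regressors x_i, observed proxies X_i, observed response y_i.
\[
  y_i \;=\; \alpha + \beta^{\top} x_i + \epsilon_i,
  \qquad
  X_i \;=\; x_i + u_i,
\]
% with \epsilon_i and u_i mean-zero equation and measurement errors.
% Different merged samples may observe different subsets of (y_i, X_i);
% the paper shows how to parameterize the joint model so that
% normal-theory fitting remains valid despite non-normal data.
```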
Abstract:
We develop a stylized model of economic growth with bubbles. In this model, changes in investor sentiment lead to the appearance and collapse of macroeconomic bubbles or pyramid schemes. We show how these bubbles mitigate the effects of financial frictions. During bubbly episodes, unproductive investors demand bubbles while productive investors supply them. These transfers of resources improve the efficiency at which the economy operates, expanding consumption, the capital stock and output. When bubbly episodes end, these transfers stop and consumption, the capital stock and output contract. We characterize the stochastic equilibria of the model and argue that they provide a natural way of introducing bubble shocks into business cycle models.
Abstract:
We construct an uncoupled randomized strategy of repeated play such that, if every player follows such a strategy, then the joint mixed strategy profiles converge, almost surely, to a Nash equilibrium of the one-shot game. The procedure requires very little in terms of players' information about the game. In fact, players' actions are based only on their own past payoffs and, in a variant of the strategy, players need not even know that their payoffs are determined through other players' actions. The procedure works for general finite games and is based on appropriate modifications of a simple stochastic learning rule introduced by Foster and Young.
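For intuition about what an uncoupled, payoff-based rule of this kind looks like, here is a minimal sketch in the spirit of Foster and Young's regret testing; the block length, experimentation rate, tolerance, and example game are our own choices, and the modified strategy constructed in the paper differs in its details:

```python
# Illustrative sketch only: payoff-based "regret testing". Each player plays a
# mixed strategy for a block, occasionally experiments with random actions, and
# redraws its mixture at random if some action did noticeably better.
import numpy as np

rng = np.random.default_rng(0)

# Example 2x2 game (matching pennies); payoffs[i][a0, a1] is player i's payoff.
payoffs = [np.array([[1.0, -1.0], [-1.0, 1.0]]),
           np.array([[-1.0, 1.0], [1.0, -1.0]])]
n_actions = 2
block, lam, tol = 500, 0.05, 0.05     # block length, experiment prob., tolerance


def random_mixed():
    p = rng.random(n_actions)
    return p / p.sum()


mixed = [random_mixed(), random_mixed()]   # current mixed strategies

for _ in range(200):                        # play 200 blocks
    base = [[], []]                         # payoffs when following own mixture
    exper = [[[] for _ in range(n_actions)] for _ in range(2)]
    for _ in range(block):
        acts, probe = [], []
        for i in range(2):
            if rng.random() < lam:          # occasionally experiment
                a = int(rng.integers(n_actions))
                probe.append(a)
            else:                           # otherwise follow the mixed strategy
                a = int(rng.choice(n_actions, p=mixed[i]))
                probe.append(None)
            acts.append(a)
        for i in range(2):
            u = payoffs[i][acts[0], acts[1]]
            (base[i] if probe[i] is None else exper[i][probe[i]]).append(u)
    # Regret test: if some experimented action did noticeably better than the
    # current mixture, discard the mixture and draw a fresh one at random.
    for i in range(2):
        avg = np.mean(base[i]) if base[i] else -np.inf
        best_alt = max((np.mean(v) for v in exper[i] if v), default=-np.inf)
        if best_alt - avg > tol:
            mixed[i] = random_mixed()

print("final mixed strategies:", [np.round(m, 2) for m in mixed])
```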
Abstract:
We estimate the aggregate long-run elasticity of substitution between more and less educated workers (the slope of the demand curve for more relative to less educated workers) at the US state level. Our data come from the (five) 1950-1990 decennial censuses. Our empirical approach allows for state and time fixed effects and relies on time- and state-dependent child labor and compulsory school attendance laws as instruments for (endogenous) changes in the relative supply of more educated workers. We find the aggregate long-run elasticity of substitution between more and less educated workers to be around 1.5.
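The estimated object can be read off the standard CES relative labour demand relation (textbook form in our notation; the paper's specification, fixed effects, and instruments are as described above):

```latex
% Standard CES relative labour demand (our schematic notation): with
% aggregate elasticity of substitution \sigma between more (H) and less (L)
% educated workers, relative wages and relative supplies in state s, year t
% satisfy, up to a demand-shift term D_{st},
\[
  \ln\!\left(\frac{w_{H,st}}{w_{L,st}}\right)
  \;=\; D_{st} \;-\; \frac{1}{\sigma}\,
        \ln\!\left(\frac{H_{st}}{L_{st}}\right),
\]
% so that a slope of about -1/1.5 on the instrumented relative supply
% corresponds to the reported \sigma \approx 1.5, with state and time fixed
% effects absorbing the demand shifts.
```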