Abstract:
Planar polynomial vector fields which admit invariant algebraic curves, Darboux integrating factors or Darboux first integrals are of special interest. In the present paper we solve the inverse problem for invariant algebraic curves with a given multiplicity and for integrating factors, under generic assumptions regarding the (multiple) invariant algebraic curves involved. In particular we prove, in this generic scenario, that the existence of a Darboux integrating factor implies Darboux integrability. Furthermore, we construct examples where the genericity assumption does not hold and show that the situation is different in these cases.
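For reference, the Darboux notions invoked above can be stated in their standard textbook form (a summary with symbols of our choosing, not notation taken from the paper itself):

```latex
% Planar polynomial vector field
X = P(x,y)\,\partial_x + Q(x,y)\,\partial_y .
% An algebraic curve f = 0 is invariant with polynomial cofactor K if
Xf \;=\; P\,\frac{\partial f}{\partial x} + Q\,\frac{\partial f}{\partial y} \;=\; K f .
% A Darboux function built from such curves,
R \;=\; \prod_{i} f_i^{\lambda_i}, \qquad \lambda_i \in \mathbb{C},
% is an integrating factor of X precisely when the cofactors satisfy
\sum_i \lambda_i K_i \;=\; -\operatorname{div} X
  \;=\; -\left(\frac{\partial P}{\partial x} + \frac{\partial Q}{\partial y}\right).
```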
Abstract:
We test hypotheses on the dual role of boards of directors for a sample of large international commercial banks. We find an inverted U-shaped relation between bank performance and board size that justifies a large board yet imposes an efficient limit on its size; a positive relation between the proportion of non-executive directors and performance; and a proactive role in board meetings. Our results show that bank boards’ composition and functioning are related to directors’ incentives to monitor and advise management. All these relations hold after we control for bank business, institutional differences, size, market power in the banking industry, bank ownership and investors’ legal protection.
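The inverted U-shaped relation described above implies an interior optimum for board size. A minimal sketch of how such a turning point is recovered from a quadratic fit, using entirely synthetic data (the peak at 12 directors and all parameter values are invented for illustration, not results from the paper's bank sample):

```python
import numpy as np

# Synthetic data: performance follows an inverted U in board size,
# peaking at 12 directors, plus a little noise (invented numbers).
rng = np.random.default_rng(0)
size = rng.uniform(5, 20, 200)                              # board sizes
perf = -0.02 * (size - 12.0) ** 2 + 1.0 + rng.normal(0, 0.01, 200)

# Fit perf = c2*size^2 + c1*size + c0; with c2 < 0 (inverted U), the
# efficient board size implied by the fit is the vertex -c1 / (2*c2).
c2, c1, c0 = np.polyfit(size, perf, 2)
optimal_size = -c1 / (2 * c2)
```

With these synthetic data the fitted vertex lands close to the true peak of 12, which is the sense in which a quadratic specification "imposes an efficient limit" on board size.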
Abstract:
In this paper, we investigate the Galois theory of CP-graded ring extensions. In particular, we generalize some Galois results given in [1, 2] and, without restricting to graded fields or to torsion-free grading groups, we show that some results on graded field extensions given in [3] hold.
Abstract:
This paper investigates the effects of monetary rewards on the pattern of research. We build a simple repeated model of a researcher capable of obtaining innovative ideas. We analyse how the legal environment affects the allocation of the researcher's time between research and development. Although technology transfer objectives reduce the time spent in research, they might also induce researchers to conduct research that is more basic in nature, contrary to what the skewing problem would presage. We also show that our results hold even if development delays publication.
Abstract:
Major outputs of the neocortex are conveyed by corticothalamic axons (CTAs), which form reciprocal connections with thalamocortical axons, and corticosubcerebral axons (CSAs) headed to more caudal parts of the nervous system. Previous findings establish that transcriptional programs define cortical neuron identity and suggest that CTAs and thalamic axons may guide each other, but the mechanisms governing CTA versus CSA pathfinding remain elusive. Here, we show that thalamocortical axons are required to guide pioneer CTAs away from a default CSA-like trajectory. This process relies on a hold in the progression of cortical axons, or waiting period, during which thalamic projections navigate toward cortical axons. At the molecular level, Sema3E/PlexinD1 signaling in pioneer cortical neurons mediates a "waiting signal" required to orchestrate the mandatory meeting with reciprocal thalamic axons. Our study reveals that temporal control of axonal progression contributes to spatial pathfinding of cortical projections and opens perspectives on brain wiring.
Abstract:
Existing empirical evidence suggests that the Uncovered Interest Rate Parity (UIRP) condition may not hold due to an exchange risk premium. For a panel data set of eleven emerging European economies we decompose this exchange risk premium into an idiosyncratic (country-specific) element and a common factor using a principal components approach. We present evidence of a stationary idiosyncratic component and a nonstationary common factor. This result leads to the conclusion of a nonstationary risk premium for these countries and a violation of the UIRP in the long run, which is in contrast to previous studies often documenting a stationary premium in developed countries. Furthermore, we report that the variation in the premium is largely attributable to a common factor influenced by economic developments in the United States.
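The common/idiosyncratic split described above can be sketched with a principal components decomposition. The following is an illustration on a synthetic panel (the data-generating process, loadings, and sample size are invented; the paper's actual dataset and estimation details may differ):

```python
import numpy as np

# Synthetic panel: premia for N = 11 countries generated as a loading on
# one persistent common factor plus idiosyncratic noise (invented data).
rng = np.random.default_rng(1)
T, N = 300, 11
common = np.cumsum(rng.normal(0.0, 1.0, T))      # persistent common factor
loadings = rng.uniform(0.5, 1.5, N)              # country exposures
panel = np.outer(common, loadings) + rng.normal(0.0, 1.0, (T, N))

# First principal component of the demeaned panel as the common component.
X = panel - panel.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
factor = U[:, 0] * S[0]                          # estimated common component
idiosyncratic = X - np.outer(factor, Vt[0])      # country-specific remainders

corr = abs(np.corrcoef(factor, common)[0, 1])    # how well the PC tracks it
```

Stationarity of each piece would then be assessed separately, e.g. with panel unit-root tests on `idiosyncratic` and a unit-root test on `factor`, mirroring the stationary-idiosyncratic / nonstationary-common finding reported above.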
Abstract:
Block factor methods offer an attractive approach to forecasting with many predictors. These methods extract the information in the predictors into factors reflecting different blocks of variables (e.g. a price block, a housing block, a financial block, etc.). However, a forecasting model which simply includes all blocks as predictors risks being over-parameterized. Thus, it is desirable to use a methodology which allows for different parsimonious forecasting models to hold at different points in time. In this paper, we use dynamic model averaging and dynamic model selection to achieve this goal. These methods automatically alter the weights attached to different forecasting models as evidence comes in about which has forecast well in the recent past. In an empirical study involving forecasting output growth and inflation using 139 UK monthly time series variables, we find that the set of predictors changes substantially over time. Furthermore, our results show that dynamic model averaging and model selection can greatly improve forecast performance relative to traditional forecasting methods.
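The weight-updating idea behind dynamic model averaging can be sketched in a few lines. This is a minimal illustration in the spirit of forgetting-factor DMA, with hypothetical predictive likelihoods rather than anything from the paper's empirical study:

```python
import numpy as np

def dma_weights(pred_like, alpha=0.99):
    """Path of model probabilities given one-step predictive likelihoods.

    pred_like: (T, K) array, pred_like[t, k] = likelihood of model k's
    forecast at time t; alpha is a forgetting factor in (0, 1]."""
    T, K = pred_like.shape
    w = np.full(K, 1.0 / K)              # start from equal weights
    path = np.empty((T, K))
    for t in range(T):
        w = w ** alpha                   # forgetting: flatten old evidence
        w /= w.sum()
        w = w * pred_like[t]             # reward recent forecast accuracy
        w /= w.sum()
        path[t] = w
    return path

# Model 1 forecasts better in the first half, model 0 in the second half;
# the averaging weights adapt accordingly.
like = np.ones((40, 2))
like[:20, 1] = 2.0
like[20:, 0] = 2.0
path = dma_weights(like)
```

Dynamic model selection would simply pick `path[t].argmax()` at each date instead of averaging, which is how "different parsimonious models hold at different points in time."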
Abstract:
This paper considers the instrumental variable regression model when there is uncertainty about the set of instruments, exogeneity restrictions, the validity of identifying restrictions and the set of exogenous regressors. This uncertainty can result in a huge number of models. To avoid statistical problems associated with standard model selection procedures, we develop a reversible jump Markov chain Monte Carlo algorithm that allows us to do Bayesian model averaging. The algorithm is very flexible and can be easily adapted to analyze any of the different priors that have been proposed in the Bayesian instrumental variables literature. We show how to calculate the probability of any relevant restriction (e.g. the posterior probability that over-identifying restrictions hold) and discuss diagnostic checking using the posterior distribution of discrepancy vectors. We illustrate our methods in a returns-to-schooling application.
Abstract:
A social choice function is group strategy-proof on a domain if no group of agents can manipulate its final outcome to their own benefit by declaring false preferences on that domain. Group strategy-proofness is a very attractive requirement of incentive compatibility. But in many cases it is hard or impossible to find nontrivial social choice functions satisfying even the weakest condition of individual strategy-proofness. However, there are a number of economically significant domains where interesting rules satisfying individual strategy-proofness can be defined, and for some of them, all these rules turn out to also satisfy the stronger requirement of group strategy-proofness. This is the case, for example, when preferences are single-peaked or single-dipped. In other cases, this equivalence does not hold. We provide sufficient conditions defining domains of preferences guaranteeing that individual and group strategy-proofness are equivalent for all rules defined on them.
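The two incentive notions compared above can be stated formally (standard definitions summarized for reference, in notation of our choosing rather than the paper's):

```latex
% A social choice function f is (individually) strategy-proof if, for every
% agent i, preference profile R, and misreport R_i',
f(R_i, R_{-i}) \;\mathrel{R_i}\; f(R_i', R_{-i}).
% It is group strategy-proof if there exist no coalition C and joint
% misreport R_C' such that
f(R_C', R_{-C}) \;\mathrel{P_i}\; f(R) \quad \text{for all } i \in C,
% where P_i denotes agent i's strict preference.
```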
Abstract:
Proponents of proportional electoral rules often argue that majority rule depresses turnout and may lower welfare due to the 'tyranny of the majority' problem. The present paper studies the impact of electoral rules on turnout and social welfare. We analyze a model of instrumental voting where citizens have private information over their individual cost of voting and over the alternative they prefer. The electoral rule used to select the winning alternative is a combination of majority rule and proportional rule. Results show that the above arguments against majority rule do not hold in this setup. Social welfare and turnout increase with the weight that the electoral rule gives to majority rule when the electorate is expected to be split, and they are independent of the electoral rule employed when the expected size of the minority group tends to zero. However, more proportional rules can increase turnout within the minority group. This effect is stronger the smaller the minority group. We then conclude that majority rule fosters overall turnout and increases social welfare, whereas proportional rule fosters the participation of minorities.
Abstract:
Incorporating adaptive learning into macroeconomics requires assumptions about how agents incorporate their forecasts into their decision-making. We develop a theory of bounded rationality that we call finite-horizon learning. This approach generalizes the two existing benchmarks in the literature: Euler-equation learning, which assumes that consumption decisions are made to satisfy the one-step-ahead perceived Euler equation; and infinite-horizon learning, in which consumption today is determined optimally from an infinite-horizon optimization problem with given beliefs. In our approach, agents hold a finite forecasting/planning horizon. We find for the Ramsey model that the unique rational expectations equilibrium is E-stable at all horizons. However, transitional dynamics can differ significantly depending upon the horizon.
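The benchmarks being generalized can be sketched schematically (a standard consumption-Euler rendering with perceived expectations, not the paper's exact equations):

```latex
% Euler-equation learning: consumption satisfies the one-step-ahead
% perceived Euler equation,
u'(c_t) \;=\; \beta\,\hat{E}_t\!\left[(1+r_{t+1})\,u'(c_{t+1})\right].
% A horizon-T learner instead iterates this condition T periods forward,
u'(c_t) \;=\; \beta^{T}\,\hat{E}_t\!\left[\Bigl(\prod_{j=1}^{T}(1+r_{t+j})\Bigr)\,u'(c_{t+T})\right],
% closing the plan with a perceived terminal condition at t+T. Setting
% T = 1 recovers Euler-equation learning, while T -> \infty approaches
% the infinite-horizon benchmark.
```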
Abstract:
In this paper we examine the importance of imperfect competition in product and labour markets in determining the long-run welfare effects of tax reforms assuming agent heterogeneity in capital holdings. Each of these market failures, independently, results in welfare losses for at least a segment of the population, after a capital tax cut and a concurrent labour tax increase. However, when combined in a realistic calibration to the UK economy, they imply that a capital tax cut will be Pareto improving in the long run. Consistent with the theory of second-best, the two distortions in this context work to correct the negative distributional effects of a capital tax cut that each one, on its own, creates.
Abstract:
During the past four decades both between- and within-group wage inequality increased significantly in the US. I provide a microfounded justification for this pattern by introducing private employer learning in a model of signaling with credit constraints. In particular, I show that when financial constraints relax, talented individuals can acquire education and leave the uneducated pool, which decreases unskilled-inexperienced wages and boosts wage inequality. This explanation is consistent with US data from 1970 to 1997, indicating that the rise of the skill and the experience premium coincides with a fall in unskilled-inexperienced wages, while at the same time skilled or experienced wages do not change much. The model accounts for: (i) the increase in the skill premium despite the growing supply of skills; (ii) the understudied aspect of rising inequality related to the increase in the experience premium; (iii) the sharp growth of the skill premium for inexperienced workers and its moderate expansion for the experienced ones; (iv) the puzzling coexistence of an increasing experience premium within the group of unskilled workers and its stable pattern among the skilled ones. The results hold under various robustness checks and provide some interesting policy implications about the potential conflict between inequality of opportunity and substantial economic inequality, as well as the role of minimum wage policy in determining the equilibrium wage inequality.
Abstract:
Developing a predictive understanding of subsurface flow and transport is complicated by the disparity of scales across which controlling hydrological properties and processes span. Conventional techniques for characterizing hydrogeological properties (such as pumping, slug, and flowmeter tests) typically rely on borehole access to the subsurface. Because their spatial extent is commonly limited to the vicinity near the wellbores, these methods often cannot provide sufficient information to describe key controls on subsurface flow and transport. The field of hydrogeophysics has evolved in recent years to explore the potential that geophysical methods hold for improving the quantification of subsurface properties and processes relevant for hydrological investigations. This chapter is intended to familiarize hydrogeologists and water-resource professionals with the state of the art as well as existing challenges associated with hydrogeophysics. We provide a review of the key components of hydrogeophysical studies, which include: geophysical methods commonly used for shallow subsurface characterization; petrophysical relationships used to link the geophysical properties to hydrological properties and state variables; and estimation or inversion methods used to integrate hydrological and geophysical measurements in a consistent manner. We demonstrate the use of these different geophysical methods, petrophysical relationships, and estimation approaches through several field-scale case studies. 
Among other applications, the case studies illustrate the use of hydrogeophysical approaches to quantify subsurface architecture that influences flow (such as hydrostratigraphy and preferential pathways); delineate anomalous subsurface fluid bodies (such as contaminant plumes); monitor hydrological processes (such as infiltration, freshwater-seawater interface dynamics, and flow through fractures); and estimate hydrological properties (such as hydraulic conductivity) and state variables (such as water content). The case studies have been chosen to illustrate how hydrogeophysical approaches can yield insights about complex subsurface hydrological processes, provide input that improves flow and transport predictions, and provide quantitative information over field-relevant spatial scales. The chapter concludes by describing existing hydrogeophysical challenges and associated research needs. In particular, we identify the area of quantitative watershed hydrogeophysics as a frontier area, where significant effort is required to advance the estimation of hydrological properties and processes (and their uncertainties) over spatial scales relevant to the management of water resources and contaminants.
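As a concrete illustration of the petrophysical link mentioned above, one widely used relationship between a geophysical property (bulk electrical resistivity) and a hydrological state variable (water saturation) is Archie's empirical law. This is a standard textbook form with invented parameter values, offered only as an example of the kind of relationship the chapter surveys:

```python
# Archie's law: rho_bulk = rho_w * phi**(-m) * S**(-n), relating bulk
# resistivity to pore-water resistivity rho_w, porosity phi, water
# saturation S, and empirical exponents m (cementation) and n (saturation).
def archie_saturation(rho_bulk, rho_w, phi, m=2.0, n=2.0):
    """Water saturation S inverted from a bulk resistivity measurement
    (all resistivities in ohm-m; phi and S are dimensionless fractions)."""
    return (rho_w / (rho_bulk * phi ** m)) ** (1.0 / n)

# A fully saturated clean sand: rho_bulk = rho_w * phi**(-m), so S = 1.
print(archie_saturation(rho_bulk=100.0, rho_w=25.0, phi=0.5))  # → 1.0
```

In practice the exponents m and n are calibrated per formation, and clay content complicates the picture, which is one reason site-specific petrophysical relationships and joint hydrological-geophysical estimation (as reviewed in this chapter) are needed.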