923 results for "Quantile regressions"


Relevance:

20.00%

Publisher:

Abstract:

In this paper we investigate fiscal sustainability by using a quantile autoregression (QAR) model. We propose a novel methodology to separate periods of nonstationarity from stationary ones, which allows us to identify various trajectories of public debt that are compatible with fiscal sustainability. We use such trajectories to construct a debt ceiling, that is, the largest value of public debt that does not jeopardize long-run fiscal sustainability. We make out-of-sample forecasts of such a ceiling and show how it could be used by policymakers interested in keeping the public debt on a sustainable path. We illustrate the applicability of our results using Brazilian data.
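
The QAR idea can be illustrated with a minimal sketch: regress the series on its own lag at several quantiles by minimizing the check (pinball) loss, so the persistence coefficient may differ across quantiles. This is a toy illustration on simulated data, not the paper's methodology; the stationarity separation and debt-ceiling construction are omitted.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated debt-like AR(1) series (hypothetical data, not Brazilian debt)
rng = np.random.default_rng(0)
n = 500
d = np.zeros(n)
for t in range(1, n):
    d[t] = 0.9 * d[t - 1] + rng.normal()
y, x = d[1:], d[:-1]

def check_loss(beta, tau):
    """Pinball loss for the quantile autoregression Q_tau(d_t | d_{t-1})."""
    r = y - (beta[0] + beta[1] * x)
    return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))

# Quantile-specific intercept and AR coefficient
coefs = {tau: minimize(check_loss, x0=[0.0, 0.5], args=(tau,),
                       method="Nelder-Mead").x
         for tau in (0.1, 0.5, 0.9)}
print({tau: np.round(b, 2) for tau, b in coefs.items()})
```

In the paper's setting, the interesting case is when the upper-quantile persistence coefficient exceeds one while lower quantiles remain stationary.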

Empirical evidence shows that larger firms pay higher wages than smaller ones. This wage premium is called the firm size wage effect. The firm size effect on wages may be attributed to many factors, such as differentials in productivity, efficiency wages, the prevention of union formation, or rent sharing. The present study uses quantile regression to investigate the firm size wage effect. By offering insight into who benefits from the wage premium, quantile regression helps eliminate and refine possible explanations. Estimated results are consistent with the hypothesis that the higher wages paid by large firms can be explained by the difference in monitoring costs that large firms face. Results also suggest that more highly skilled workers are more often found at larger firms.
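
The pattern the study examines can be sketched numerically: compare the log-wage gap between large and small firms at several quantiles. The data below are simulated (not the study's), with the premium built to widen at the top, purely to show how quantile comparisons localize who benefits from the premium.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical log-normal wage draws; large firms pay more and more unequally
small = rng.lognormal(mean=2.0, sigma=0.5, size=5000)
large = rng.lognormal(mean=2.2, sigma=0.6, size=5000)

# Firm-size premium at each quantile: gap in log wages
premium = {tau: np.log(np.quantile(large, tau)) - np.log(np.quantile(small, tau))
           for tau in (0.1, 0.5, 0.9)}
print({tau: round(p, 3) for tau, p in premium.items()})
```

A premium that grows with the quantile, as constructed here, is the kind of pattern a conditional-mean regression would average away.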

The estimation of labor supply elasticities has been an important issue in the economic literature. Yet previous works have estimated conditional mean labor supply functions only. The objective of this paper is to obtain more information on labor supply by estimating the conditional quantile labor supply function. We use a sample of prime-age urban male employees in Brazil. Two-stage estimators are used, as the net wage and virtual income are found to be endogenous to the model. Contrary to previous works using conditional mean estimators, it is found that labor supply elasticities vary significantly and asymmetrically across hours of work. While the income and wage elasticities at the standard work week are zero, for those working longer hours the elasticities are negative.

This paper presents calculations of semiparametric efficiency bounds for quantile treatment effects parameters when selection to treatment is based on observable characteristics. The paper also presents three estimation procedures for these parameters, all of which have two steps: a nonparametric estimation and a computation of the difference between the solutions of two distinct minimization problems. Root-N consistency, asymptotic normality, and achievement of the semiparametric efficiency bound are shown for one of the three estimators. In the final part of the paper, an empirical application to a job training program reveals the importance of heterogeneous treatment effects, showing that for this program the effects are concentrated in the upper quantiles of the earnings distribution.
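
One common way to estimate quantile treatment effects under selection on observables, not necessarily the paper's estimator, is to reweight each group by inverse propensity scores and difference the weighted quantiles. A sketch on simulated data with a constant unit effect, using the true propensity for simplicity (in practice it would be estimated nonparametrically):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x = rng.uniform(size=n)                      # observable characteristic
p = 0.3 + 0.4 * x                            # true propensity score
t = rng.uniform(size=n) < p                  # treatment assignment
y = x + t * 1.0 + rng.normal(0, 0.3, n)      # constant treatment effect of 1

def weighted_quantile(v, w, tau):
    """Quantile of v under weights w, via the weighted empirical CDF."""
    order = np.argsort(v)
    v, w = v[order], w[order]
    cw = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cw, tau)]

# Inverse-propensity weights balance the covariate across groups
qte = {tau: weighted_quantile(y[t], 1 / p[t], tau)
            - weighted_quantile(y[~t], 1 / (1 - p[~t]), tau)
       for tau in (0.25, 0.5, 0.75)}
```

With heterogeneous effects, as in the paper's job-training application, the entries of `qte` would differ across quantiles instead of all sitting near 1.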

This dissertation surveys the literature on economic growth. I review a substantial number of articles published by some of the most renowned researchers engaged in the study of economic growth. The literature is so vast that before undertaking new studies it is very important to know what has been done in the field. The dissertation has six chapters. In Chapter 1, I introduce the reader to the topic of economic growth. In Chapter 2, I present the Solow model and other contributions to exogenous growth theory proposed in the literature. I also briefly discuss the endogenous approach to growth. In Chapter 3, I summarize the variety of econometric problems that affect cross-country regressions. The factors that contribute to economic growth are highlighted and the validity of the empirical results is discussed. In Chapter 4, the existence of convergence, whether conditional or not, is analyzed. The literature using both cross-sectional and panel data is reviewed. An analysis of convergence using a quantile-regression framework is also provided. In Chapter 5, the controversial relationship between financial development and economic growth is analyzed. In particular, I discuss the arguments for and against the Schumpeterian view that considers financial development an important determinant of innovation and economic growth. Chapter 6 concludes the dissertation. Summing up, the literature appears not to be fully conclusive about the main determinants of economic growth, the existence of convergence, and the impact of finance on growth.

The ability to measure gene expression on a genome-wide scale is one of the most promising accomplishments in molecular biology. Microarrays, the technology that first permitted this, were riddled with problems due to unwanted sources of variability. Many of these problems are now mitigated, after a decade's worth of statistical methodology development. The recently developed RNA sequencing (RNA-seq) technology has generated much excitement, in part due to claims of reduced variability in comparison to microarrays. However, we show that RNA-seq data exhibit unwanted and obscuring variability similar to what was first observed in microarrays. In particular, we find that GC-content has a strong sample-specific effect on gene expression measurements that, if left uncorrected, leads to false positives in downstream results. We also report on commonly observed data distortions that demonstrate the need for data normalization. Here we describe statistical methodology that improves precision by 42% without loss of accuracy. Our resulting conditional quantile normalization (CQN) algorithm combines robust generalized regression, to remove systematic bias introduced by deterministic features such as GC-content, and quantile normalization, to correct for global distortions.
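
The quantile-normalization component of CQN can be sketched in a few lines; the full CQN algorithm also removes GC-content effects via robust regression, which is omitted here. Each sample's values are mapped onto a common reference distribution, the mean of the per-sample sorted values:

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize a genes-by-samples matrix so every column
    (sample) ends up with the same reference distribution."""
    order = np.argsort(X, axis=0)
    ranks = np.argsort(order, axis=0)            # rank of each value in its column
    reference = np.sort(X, axis=0).mean(axis=1)  # mean of the sorted columns
    return reference[ranks]

# Toy expression matrix: 5 genes x 3 samples on different scales
X = np.array([[5., 4., 3.],
              [2., 1., 4.],
              [3., 4., 6.],
              [4., 2., 8.],
              [1., 3., 5.]])
Xn = quantile_normalize(X)
```

After normalization each column is a permutation of the same reference values, which is exactly the "correct for global distortions" step described above.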

Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data is unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. It is generally accepted that hydrologic similarity results from similar physiographic characteristics, and thus these characteristics can be used to delineate regions and classify ungauged sites. However, as currently practiced, the delineation is highly subjective and dependent on the similarity measures and classification techniques employed. A standardized procedure for delineation of hydrologically homogeneous regions is presented herein. Key aspects are a new statistical metric to identify physically discordant sites, and the identification of an appropriate set of physically based measures of extreme hydrological similarity. A combination of multivariate statistical techniques applied to multiple flood statistics and basin characteristics for gauging stations in the Southeastern U.S. revealed that basin slope, elevation, and soil drainage largely determine the extreme hydrological behavior of a watershed. Use of these characteristics as similarity measures in the standardized approach for region delineation yields regions which are more homogeneous and more efficient for quantile estimation at ungauged sites than those delineated using alternative physically-based procedures typically employed in practice. The proposed methods and key physical characteristics are also shown to be efficient for region delineation and quantile development in alternative areas composed of watersheds with statistically different physical composition. 
In addition, the use of aggregated values of key watershed characteristics was found to be sufficient for the regionalization of flood data; the added time and computational effort required to derive spatially distributed watershed variables does not increase the accuracy of quantile estimators for ungauged sites. This dissertation also presents a methodology by which flood quantile estimates in Haiti can be derived using relationships developed for data-rich regions of the U.S. As currently practiced, regional flood frequency techniques can only be applied within the predefined area used for model development. However, results presented herein demonstrate that the regional flood distribution can successfully be extrapolated to areas of similar physical composition located beyond the extent of the area used for model development, provided differences in precipitation are accounted for and the site in question can be appropriately classified within a delineated region.
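
The idea of flagging physically discordant sites can be conveyed with a Mahalanobis-type distance on basin characteristics. The dissertation's actual statistic and its multivariate delineation procedure are more elaborate, so treat this as illustrative only, on hypothetical characteristics:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical standardized basin characteristics (slope, elevation,
# soil drainage) for 20 gauging sites
X = rng.normal(size=(20, 3))
X[5] += 6.0   # plant one physically discordant site

def discordancy(X):
    """Mahalanobis-type distance of each site from the group centroid."""
    u = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum('ij,jk,ik->i', u, S_inv, u)

scores = discordancy(X)
print(scores.argmax())  # index of the most discordant site
```

Sites with large scores would be candidates for reassignment to another region before homogeneity testing and quantile estimation.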

This article addresses the issue of kriging-based optimization of stochastic simulators. Many of these simulators depend on factors that tune the level of precision of the response, the gain in accuracy being at a price of computational time. The contribution of this work is two-fold: first, we propose a quantile-based criterion for the sequential design of experiments, in the fashion of the classical expected improvement criterion, which allows an elegant treatment of heterogeneous response precisions. Second, we present a procedure for the allocation of the computational time given to each measurement, allowing a better distribution of the computational effort and increased efficiency. Finally, the optimization method is applied to an original application in nuclear criticality safety. This article has supplementary material available online. The proposed criterion is available in the R package DiceOptim.
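
The flavor of the proposal can be conveyed through the classical expected-improvement formula it builds on; the article's actual quantile-based criterion, implemented in the R package DiceOptim, modifies this to compare predictive quantiles under heterogeneous noise, which is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mean, sd, best):
    """Classical EI for minimization, given a kriging model's predictive
    mean and standard deviation at candidate points."""
    sd = np.maximum(sd, 1e-12)          # guard against zero predictive sd
    z = (best - mean) / sd
    return (best - mean) * norm.cdf(z) + sd * norm.pdf(z)

# Two candidates with equal uncertainty: the lower predicted mean wins
ei = expected_improvement(np.array([0.0, 0.5]), np.array([1.0, 1.0]), best=1.0)
```

Replacing `mean` with a predictive quantile is what lets a criterion of this shape trade off response precision against computational time, as the article describes.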

Prediction at ungauged sites is essential for water resources planning and management. Ungauged sites have no observations of the magnitude of floods, but some site and basin characteristics are known. Regression models relate physiographic and climatic basin characteristics to flood quantiles, which can be estimated from observed data at gauged sites. However, these models assume linear relationships between variables, and prediction intervals are estimated from the variance of the residuals of the estimated model. Furthermore, the effect of the uncertainties in the explanatory variables on the dependent variable cannot be assessed. This paper presents a methodology to propagate the uncertainties that arise in the process of predicting flood quantiles at ungauged basins by a regression model. In addition, Bayesian networks were explored as a feasible tool for predicting flood quantiles at ungauged sites. Bayesian networks benefit from taking uncertainties into account thanks to their probabilistic nature. They are able to capture non-linear relationships between variables and they yield a probability distribution of discharges as a result. The methodology was applied to a case study in the Tagus basin in Spain.
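
The regression approach being critiqued can be sketched as a log-linear model relating a flood quantile to basin descriptors, with an interval built only from residual variance. The variable names and coefficients below are hypothetical, and the Bayesian-network alternative is not shown:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60
area = rng.lognormal(3.0, 1.0, n)    # hypothetical basin areas
rain = rng.lognormal(0.5, 0.3, n)    # hypothetical precipitation index
log_q100 = 1.0 + 0.8 * np.log(area) + 1.2 * np.log(rain) + rng.normal(0, 0.2, n)

# Ordinary least squares in log space
X = np.column_stack([np.ones(n), np.log(area), np.log(rain)])
beta, *_ = np.linalg.lstsq(X, log_q100, rcond=None)
resid = log_q100 - X @ beta
sigma = resid.std(ddof=X.shape[1])

# Approximate 95% prediction interval (log space) for an ungauged basin;
# note it ignores uncertainty in beta and in the explanatory variables,
# which is precisely the limitation the paper addresses
x_new = np.array([1.0, np.log(150.0), np.log(1.8)])
pred = x_new @ beta
interval = (pred - 1.96 * sigma, pred + 1.96 * sigma)
```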

This research proposes a methodology to improve the individual prediction values provided by an existing regression model without having to change either its parameters or its architecture. In other words, we are interested in achieving more accurate results by adjusting the calculated regression prediction values, without modifying or rebuilding the original regression model. Our proposition is to adjust the regression prediction values using individual reliability estimates that indicate whether a single regression prediction is likely to produce an error considered critical by the user of the regression. The proposed method was tested in three sets of experiments using three different types of data. The first set of experiments worked with synthetically produced data, the second with cross-sectional data from the public data source UCI Machine Learning Repository, and the third with time series data from ISO-NE (the Independent System Operator in New England). The experiments with synthetic data were performed to verify how the method behaves in controlled situations. In this case, the experiments produced clear improvements on cleaner, artificially generated datasets, with performance degrading progressively as random elements were added. The experiments with real data extracted from UCI and ISO-NE were done to investigate the applicability of the methodology in the real world. The proposed method was able to improve regression prediction values in about 95% of the experiments with real data.
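
The abstract leaves the reliability estimator unspecified. One simple way to realize the same idea, offered purely as a hypothetical sketch rather than the research's method, is to correct each prediction by the mean residual of its nearest neighbors in a held-out calibration set, leaving the original model untouched:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(-3, 3, size=400)
y = np.sin(x) + rng.normal(0, 0.1, 400)          # nonlinear ground truth
xtr, xcal, xte = x[:200], x[200:300], x[300:]
ytr, ycal, yte = y[:200], y[200:300], y[300:]

# Deliberately misspecified base model: ordinary linear regression
A = np.column_stack([np.ones(xtr.size), xtr])
beta, *_ = np.linalg.lstsq(A, ytr, rcond=None)

def predict(xs):
    """The existing model; it is never modified or refit."""
    return beta[0] + beta[1] * xs

cal_resid = ycal - predict(xcal)                 # calibration-set errors

def adjusted(xs, k=15):
    """Shift each prediction by the mean residual of its k nearest
    calibration points (a stand-in for a reliability estimate)."""
    idx = np.argsort(np.abs(xs[:, None] - xcal[None, :]), axis=1)[:, :k]
    return predict(xs) + cal_resid[idx].mean(axis=1)

mae_base = np.abs(yte - predict(xte)).mean()
mae_adj = np.abs(yte - adjusted(xte)).mean()
```

Because the adjustment is purely post hoc, it matches the constraint above: predictions improve while the regression model's parameters and architecture stay fixed.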

For non-negative random variables with finite means we introduce an analogue of the equilibrium residual-lifetime distribution based on the quantile function. This allows us to construct new distributions with support (0, 1), and to obtain a new quantile-based version of the probabilistic generalization of Taylor's theorem. Similarly, for pairs of stochastically ordered random variables we arrive at a new quantile-based form of the probabilistic mean value theorem. The latter involves a distribution that generalizes the Lorenz curve. We investigate the special case of proportional quantile functions and apply the given results to various models based on classes of distributions and measures of risk theory. Motivated by some stochastic comparisons, we also introduce the "expected reversed proportional shortfall order" and a new characterization of random lifetimes involving the reversed hazard rate function.