978 results for Panel model estimation


Relevance: 30.00%

Abstract:

In this paper, we investigate the channel estimation problem for multiple-input multiple-output (MIMO) relay communication systems with time-varying channels. The time-varying characteristic of the channels is described by the complex-exponential basis expansion model (CE-BEM). We propose a superimposed channel training algorithm to estimate the individual first-hop and second-hop time-varying channel matrices for MIMO relay systems. In particular, the estimation of the second-hop time-varying channel matrix is performed by exploiting the superimposed training sequence at the relay node, while the first-hop time-varying channel matrix is estimated through the source node training sequence and the estimated second-hop channel. To improve the performance of channel estimation, we derive the optimal structure of the source and relay training sequences that minimize the mean-squared error (MSE) of channel estimation. We also optimize the relay amplification factor that governs the power allocation between the source and relay training sequences. Numerical simulations demonstrate that the proposed superimposed channel training algorithm for MIMO relay systems with time-varying channels outperforms the conventional two-stage channel estimation scheme.
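
As a concrete illustration of the basis-expansion idea, here is a minimal sketch (not the paper's algorithm) of least-squares estimation of CE-BEM coefficients for a single time-varying channel tap from a known training sequence; the block length, number of basis functions, and BPSK training are illustrative assumptions.

```python
# Minimal sketch: LS estimation of CE-BEM coefficients for one time-varying
# channel tap, assuming a known training sequence. Q, N, and the basis
# frequencies follow the usual CE-BEM convention, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
N, Q = 64, 4                     # block length, number of basis functions
q = np.arange(Q) - Q // 2        # symmetric basis indices
B = np.exp(2j * np.pi * np.outer(np.arange(N), q) / N)  # N x Q BEM basis

c_true = (rng.standard_normal(Q) + 1j * rng.standard_normal(Q)) / np.sqrt(Q)
h = B @ c_true                   # time-varying channel over the block

s = np.sign(rng.standard_normal(N)) + 0j   # known BPSK training sequence
y = h * s + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# y(n) = s(n) * sum_q B[n,q] c_q + w(n)  =>  y = (diag(s) B) c + w
A = s[:, None] * B
c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
h_hat = B @ c_hat
print("channel NMSE:", np.linalg.norm(h - h_hat)**2 / np.linalg.norm(h)**2)
```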

Relevance: 30.00%

Abstract:

Finding practical ways to robustly estimate abundance or density trends in threatened species is a key facet of effective conservation management. Identifying less expensive monitoring methods that still provide adequate data for robust population density estimates can also free up investment for other conservation initiatives needed for species recovery. Here we evaluated and compared inference and cost-effectiveness criteria for three field monitoring and density estimation protocols to improve conservation activities for the threatened Komodo dragon (Varanus komodoensis). We undertook line-transect counts, cage trapping and camera monitoring surveys for Komodo dragons at 11 sites within protected areas in Eastern Indonesia, and used the resulting data to estimate density with distance sampling methods or the Royle-Nichols abundance-induced heterogeneity model. Distance sampling estimates were considered poor due to large confidence intervals, a high coefficient of variation, and false absences in 45% of sites where other monitoring methods detected lizards. The Royle-Nichols model using presence/absence data from cage trapping and camera monitoring produced highly correlated density estimates with similar measures of precision, and recorded no false absences. However, because costs associated with camera monitoring were considerably lower than cage trapping, albeit marginally higher than distance sampling, we advocate this method for ongoing population monitoring of Komodo dragons. Further, the cost savings achieved by adopting this field monitoring method could allow increased expenditure on alternative management strategies to help address current declines in two Komodo dragon populations.
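
For readers unfamiliar with the Royle-Nichols model, the sketch below shows a minimal maximum-likelihood fit of its abundance-induced detection heterogeneity to site-level detection counts; the site data, number of visits, and abundance truncation bound are illustrative, not the study's.

```python
# Minimal sketch of the Royle-Nichols abundance-induced-heterogeneity model,
# fitted by maximum likelihood. Site counts, J, and K are placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

y = np.array([0, 1, 3, 2, 0, 4, 1, 2, 3, 0, 1])  # detections per site
J, K = 5, 50                                      # visits per site, N truncation

def neg_loglik(params):
    lam = np.exp(params[0])                       # site abundance rate, > 0
    r = 1 / (1 + np.exp(-params[1]))              # per-animal detection, in (0,1)
    N = np.arange(K + 1)
    p_visit = 1 - (1 - r) ** N                    # detection prob given N animals
    w = poisson.pmf(N, lam)                       # Poisson abundance weights
    # marginal P(y_i) = sum_N Binom(y_i; J, p_visit(N)) * Poisson(N; lambda)
    lik = (binom.pmf(y[:, None], J, p_visit[None, :]) * w[None, :]).sum(axis=1)
    return -np.log(lik).sum()

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("lambda_hat:", np.exp(fit.x[0]), "r_hat:", 1 / (1 + np.exp(-fit.x[1])))
```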

Relevance: 30.00%

Abstract:

Learning from a small number of examples is a challenging problem in machine learning. An effective way to improve performance is to exploit knowledge from other related tasks. Multi-task learning (MTL) is one such paradigm, aiming to improve performance by jointly modeling multiple related tasks. Although numerous classification and regression models exist in the machine learning literature, most MTL models are built around ridge or logistic regression. A few works propose multi-task extensions of techniques such as support vector machines and Gaussian processes. However, all these MTL models are tied to specific classification or regression algorithms, and there is no single MTL algorithm that can be used at a meta level with any given learning algorithm. Addressing this problem, we propose a generic, model-agnostic joint modeling framework that can take any classification or regression algorithm of a practitioner's choice (standard or custom-built) and build its MTL variant. The key observation driving our framework is that, due to the small number of examples, the estimates of task parameters are usually poor, and we show that this leads to an under-estimation of the task relatedness between any two tasks with high probability. We derive an algorithm that brings tasks closer to their true relatedness by improving the estimates of task parameters, achieved through appropriate sharing of data across tasks. We provide detailed theoretical underpinnings for the algorithm. Through experiments with both synthetic and real datasets, we demonstrate that the multi-task variants of several classifiers/regressors (logistic regression, support vector machine, K-nearest neighbor, random forest, ridge regression, support vector regression) convincingly outperform their single-task counterparts. We also show that the proposed model performs comparably to, or better than, many state-of-the-art MTL and transfer learning baselines.
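
The data-sharing idea can be sketched generically. The snippet below is a minimal illustration (not the paper's algorithm): per-task models are fitted, task relatedness is scored by the cosine similarity of their parameter estimates, and each task borrows examples from sufficiently related tasks before refitting. The similarity rule and threshold are assumptions.

```python
# Minimal sketch of model-agnostic MTL via data sharing across related tasks.
# The relatedness score and threshold are illustrative choices.
import numpy as np
from sklearn.linear_model import LogisticRegression

def mtl_by_data_sharing(tasks, threshold=0.5):
    """tasks: list of (X, y) pairs sharing one feature space."""
    # Step 1: fit a single-task model per task.
    base = [LogisticRegression(max_iter=1000).fit(X, y) for X, y in tasks]
    # Step 2: score relatedness by cosine similarity of parameter vectors.
    W = np.vstack([m.coef_.ravel() for m in base])
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    relatedness = W @ W.T
    # Step 3: augment each task with data from related tasks, then refit.
    models = []
    for t, (X, y) in enumerate(tasks):
        donors = [s for s in range(len(tasks))
                  if s != t and relatedness[t, s] > threshold]
        Xa = np.vstack([X] + [tasks[s][0] for s in donors]) if donors else X
        ya = np.concatenate([y] + [tasks[s][1] for s in donors]) if donors else y
        models.append(LogisticRegression(max_iter=1000).fit(Xa, ya))
    return models
```

The base learner here is logistic regression only for concreteness; the same wrapper applies unchanged to any estimator with `fit` and a parameter vector to compare.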

Relevance: 30.00%

Abstract:

Drinking water distribution networks are exposed to the risk of malicious or accidental contamination. Several levels of response are conceivable. One of them consists of installing a sensor network to monitor the system in real time. Once a contamination has been detected, it is also important to take appropriate counter-measures. In the SMaRT-OnlineWDN project, this relies on modeling to predict both hydraulics and water quality. Using the model online makes it possible to identify the contaminant source and simulate the contaminated area. The objective of this paper is to present the SMaRT-OnlineWDN experience and research results for hydraulic state estimation with a sampling frequency of a few minutes. A least squares problem with bound constraints is formulated to adjust demand class coefficients to best fit the observed values at a given time. The criterion is a Huber function to limit the influence of outliers. A Tikhonov regularization is introduced to incorporate prior information on the parameter vector. The Levenberg-Marquardt algorithm, which uses derivative information to limit the number of iterations, is then applied. Confidence intervals for the state prediction are also given. The results are presented and discussed for real networks in France and Germany.
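
A minimal sketch of this estimation step follows, with a toy stand-in for the hydraulic model: bounded least squares with a Huber loss and a Tikhonov term pulling toward a prior parameter vector. Note that SciPy's Levenberg-Marquardt solver supports neither bounds nor robust losses, so the sketch uses the trust-region 'trf' method instead.

```python
# Minimal sketch: bounded robust least squares with Tikhonov regularization.
# "model" is a toy stand-in for the hydraulic simulator mapping demand class
# coefficients to sensor predictions.
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, model, observed, theta_prior, alpha):
    fit = model(theta) - observed                 # data misfit at the sensors
    reg = np.sqrt(alpha) * (theta - theta_prior)  # Tikhonov prior term
    return np.concatenate([fit, reg])

model = lambda th: np.array([th[0] + th[1], 2 * th[0], th[1] ** 2])  # toy model
observed = np.array([1.1, 0.9, 0.30])
theta_prior = np.array([0.5, 0.5])

sol = least_squares(residuals, x0=theta_prior, loss="huber", f_scale=1.0,
                    bounds=(0.0, 2.0), method="trf",
                    args=(model, observed, theta_prior, 0.1))
print("theta_hat:", sol.x)
```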

Relevance: 30.00%

Abstract:

Canada releases over 150 billion litres of untreated and undertreated wastewater into the water environment every year. To clean up urban wastewater, new federal Wastewater Systems Effluent Regulations (WSER), establishing national baseline effluent quality standards achievable through secondary wastewater treatment, were enacted on July 18, 2012. With respect to wastewater from combined sewer overflows (CSOs), the Regulations require municipalities to report the annual quantity and frequency of effluent discharges. The City of Toronto currently has about 300 CSO locations within an area of approximately 16,550 hectares. The total sewer length of the CSO area is about 3,450 km, and the number of sewer manholes is about 51,100. System-wide monitoring of all CSO locations has never been undertaken due to cost and practicality. Instead, the City has relied on estimation methods and modelling approaches, allowing funds that would otherwise be used for monitoring to be applied to reducing the impacts of the CSOs. To fulfill the WSER requirements, the City is now undertaking a study based on GIS-based hydrologic and hydraulic modelling. Results show the usefulness of this approach for 1) determining the flows contributing to the combined sewer system in the local and trunk sewers under dry weather flow, wet weather flow, and snowmelt conditions; 2) assessing the hydraulic grade line and surface water depth in all local and trunk sewers under heavy rain events; 3) analyzing local and trunk sewer capacities for future growth; and 4) reporting the annual quantity and frequency of CSOs as required by the new Regulations. This modelling approach has also allowed funds to be applied toward reducing and ultimately eliminating the adverse impacts of CSOs rather than expending resources on unnecessary and costly monitoring.

Relevance: 30.00%

Abstract:

Recent efforts toward a world with freer trade, like the WTO/GATT or regional Preferential Trade Agreements (PTAs), were put in doubt after McCallum's (1995) finding of a large border effect between US states and Canadian provinces. Since then, a great amount of research on this topic has employed the gravity equation. This dissertation has two goals. The first is to comprehensively review the recent literature on the gravity equation, including its usages, econometric specifications, and the efforts to provide it with microeconomic foundations. The second is to estimate the Brazilian border effect (or 'home-bias trade puzzle') using inter-state and international trade flow data, with a pooled cross-section Tobit model. The lowest border effect estimated was 15, which implies that Brazilian states trade among themselves 15 times more than they trade with foreign countries. Further research using industry-disaggregated data is needed to determine which part of the estimated border effect can be attributed to actual trade costs and which part is the outcome of the endogenous location problem of the firm.
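
A pooled Tobit of this kind can be fitted by maximum likelihood; below is a minimal sketch for a left-censored-at-zero outcome on simulated data, with illustrative variable names rather than the dissertation's trade-flow data.

```python
# Minimal sketch of a pooled cross-section Tobit (left-censored at zero),
# fitted by maximum likelihood on simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negll(params, X, y):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    cens = y <= 0                                 # zero (censored) flows
    ll = np.where(cens,
                  norm.logcdf(-xb / sigma),       # P(y* <= 0)
                  norm.logpdf((y - xb) / sigma) - log_sigma)
    return -ll.sum()

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.standard_normal((500, 2))])
y = np.maximum(X @ np.array([0.2, 1.0, -0.5]) + rng.standard_normal(500), 0.0)
fit = minimize(tobit_negll, x0=np.zeros(X.shape[1] + 1), args=(X, y),
               method="BFGS")
print("beta_hat:", fit.x[:-1], "sigma_hat:", np.exp(fit.x[-1]))
```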

Relevance: 30.00%

Abstract:

Using vector autoregressive (VAR) models and Monte Carlo simulation methods, we investigate the potential gains for forecasting accuracy and estimation uncertainty of two commonly used restrictions arising from economic relationships. The first reduces the parameter space by imposing long-term restrictions on the behavior of economic variables, as discussed by the literature on cointegration; the second reduces the parameter space by imposing short-term restrictions, as discussed by the literature on serial-correlation common features (SCCF). Our simulations cover three important issues in model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing the lag length for cointegrated VARs with SCCF restrictions. Second, we compare the forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm in which short- and long-term restrictions interact to estimate the cointegrating and cofeature spaces, respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria too frequently choose inconsistent models with too small a lag length; criteria selecting lag and rank simultaneously have superior performance in this case. Second, this translates into superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy, reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even for long-term parameters, opening up the discussion of joint estimation of short- and long-term parameters in VAR models.
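
The cointegration-restricted part of this comparison can be sketched with off-the-shelf tools; the snippet below contrasts forecasts from an unrestricted VAR in levels with a rank-restricted VECM on toy cointegrated data. The SCCF restrictions and the paper's joint estimation algorithm have no off-the-shelf implementation and are not reproduced here.

```python
# Minimal sketch: forecasts from an unrestricted VAR in levels vs. a
# rank-restricted VECM, using statsmodels on a toy cointegrated pair.
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(2)
trend = np.cumsum(rng.standard_normal(300))       # shared stochastic trend
data = np.column_stack([trend + rng.standard_normal(300),
                        trend + rng.standard_normal(300)])

rank = select_coint_rank(data, det_order=0, k_ar_diff=1).rank
vecm_fc = VECM(data, k_ar_diff=1, coint_rank=rank).fit().predict(steps=8)
var_fc = VAR(data).fit(maxlags=2).forecast(data[-2:], steps=8)
print("VECM forecast:\n", vecm_fc, "\nVAR forecast:\n", var_fc)
```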

Relevance: 30.00%

Abstract:

We estimate and test two alternative functional forms, which have been used in the growth literature, representing the aggregate production function for a panel of countries: the model of Mankiw, Romer and Weil (Quarterly Journal of Economics, 1992), and a mincerian formulation of schooling-returns to skills. Estimation is performed using instrumental-variable techniques, and both functional forms are confronted using a Box-Cox test, since human capital inputs enter in levels in the mincerian specification and in logs in the extended neoclassical growth model.
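
As an illustration of the instrumental-variable step, here is a minimal two-stage least squares sketch using the linearmodels package; the data, variable names, and instrument are placeholders, not the paper's.

```python
# Minimal sketch of 2SLS estimation with one endogenous regressor and one
# instrument, on simulated data.
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(3)
n = 200
z = rng.standard_normal(n)                        # instrument (illustrative)
x = 0.8 * z + rng.standard_normal(n)              # endogenous regressor
y = 1.0 + 0.5 * x + rng.standard_normal(n)
df = pd.DataFrame({"y": y, "x": x, "z": z})

# [x ~ z] marks x as endogenous, instrumented by z.
res = IV2SLS.from_formula("y ~ 1 + [x ~ z]", data=df).fit(cov_type="robust")
print(res.summary)
```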

Relevance: 30.00%

Abstract:

Excessive labor turnover may be considered, to a great extent, an undesirable feature of a given economy. This follows from considerations such as underinvestment in human capital by firms. Understanding the determinants and the evolution of turnover in a particular labor market is therefore of paramount importance, including for policy considerations. The present paper proposes an econometric analysis of turnover in the Brazilian labor market, based on a partial observability bivariate probit model. This model considers the interdependence of decisions taken by workers and firms, helping to elucidate the causes that lead each of them to end an employment relationship. The Employment and Unemployment Survey (PED) conducted by the State System of Data Analysis (SEADE) and the Inter-Union Department of Statistics and Socioeconomic Studies (DIEESE) provides data at the individual worker level, allowing for the estimation of the joint probabilities of decisions to quit or stay on the job on the worker's side, and to retain or fire the employee on the firm's side, during a given time period. The estimated parameters relate these probabilities to the characteristics of workers and job contracts, and to potential macroeconomic determinants in different time periods. The results confirm the theoretical prediction that the probability of termination of an employment relationship tends to be smaller as the worker acquires specific skills. The results also show that a formal employment relationship reduces the probability of a quit decision by the worker, and of the firm's firing decision in non-industrial sectors. With regard to the evolution of quit probability over time, the results show that an increase in the unemployment rate inhibits quitting, although this effect tends to wane as the unemployment rate rises.
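
A minimal sketch of a partial-observability bivariate probit (in the spirit of Poirier, 1980), where only the product of the worker's and the firm's continuation decisions is observed, is given below; the covariates, sample size, and starting values are illustrative.

```python
# Minimal sketch of a partial-observability bivariate probit: we observe only
# z = 1 if BOTH the worker stays and the firm retains; P(z=1) = Phi2(., .; rho).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def negll(params, X1, X2, z):
    k1 = X1.shape[1]
    b1, b2, rho = params[:k1], params[k1:-1], np.tanh(params[-1])  # |rho| < 1
    u, v = X1 @ b1, X2 @ b2
    cov = [[1.0, rho], [rho, 1.0]]
    p = np.array([multivariate_normal.cdf([ui, vi], mean=[0, 0], cov=cov)
                  for ui, vi in zip(u, v)])
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -(z * np.log(p) + (1 - z) * np.log(1 - p)).sum()

rng = np.random.default_rng(4)
n = 300
X1 = np.column_stack([np.ones(n), rng.standard_normal(n)])  # worker covariates
X2 = np.column_stack([np.ones(n), rng.standard_normal(n)])  # firm covariates
e = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=n)
z = ((X1 @ [0.3, 0.8] + e[:, 0] > 0) &
     (X2 @ [0.2, -0.6] + e[:, 1] > 0)).astype(float)
fit = minimize(negll, x0=np.zeros(5), args=(X1, X2, z), method="Nelder-Mead",
               options={"maxiter": 5000})
print("estimates:", fit.x)
```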

Relevance: 30.00%

Abstract:

We estimate and test two alternative functional forms representing the aggregate production function for a panel of countries: the extended neoclassical growth model, and a mincerian formulation of schooling-returns to skills. Estimation is performed using instrumental-variable techniques, and both functional forms are confronted using a Box-Cox test, since human capital inputs enter in levels in the mincerian specification and in logs in the extended neoclassical growth model. Our evidence rejects the extended neoclassical growth model in favor of the mincerian specification, with an estimated capital share of about 42%, a marginal return to education of about 7.5% per year, and an estimated productivity growth of about 1.4% per year. Differences in productivity cannot be disregarded as an explanation of why output per worker varies so much across countries: a variance decomposition exercise shows that productivity alone explains 54% of the variation in output per worker across countries.
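
The variance decomposition in the last sentence is typically a covariance-share calculation; the sketch below illustrates it on simulated data, under the assumption that the decomposition used is cov(log y, component) / var(log y).

```python
# Minimal sketch of a covariance-share variance decomposition: with
# log y = log A + log X, each component's share is cov(log y, .) / var(log y),
# and the shares sum to one. Data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(5)
log_A = rng.normal(0.0, 0.8, size=100)            # productivity across countries
log_X = 0.4 * log_A + rng.normal(0.0, 0.5, 100)   # combined input term
log_y = log_A + log_X                             # log output per worker

share_A = np.cov(log_y, log_A)[0, 1] / np.var(log_y, ddof=1)
share_X = np.cov(log_y, log_X)[0, 1] / np.var(log_y, ddof=1)
print(f"productivity share: {share_A:.2f}, inputs share: {share_X:.2f}")
```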

Relevance: 30.00%

Abstract:

Most studies that try to verify the existence of regulatory risk look mainly at developed countries. Examining regulatory risk in regulated sectors of emerging markets is no less important for improving and increasing investment in those markets. This thesis comprises three papers on regulatory risk issues. In the first paper, I check whether CAPM betas capture information on regulatory risk using a two-step procedure: I first obtain Kalman filter estimates of the betas and then use them as inputs in a random-effects panel data model. I find evidence of regulatory risk in electricity, telecommunications and all regulated sectors in Brazil. I find further evidence that regulatory changes in the country either do not reduce or even increase the betas of the regulated sectors, going in the opposite direction to the buffering hypothesis proposed by Peltzman (1976). In the second paper, I check whether CAPM alphas say something about regulatory risk. I investigate a methodology similar to those used by some regulatory agencies around the world, like the Brazilian Electricity Regulatory Agency (ANEEL), which incorporates a specific component of regulatory risk when setting tariffs for regulated sectors. Using SUR estimates, I find negative and significant alphas for all regulated sectors, especially electricity and telecommunications. This flies in the face of theory, which predicts alphas that are not statistically different from zero. I suspect that the significant alphas are related to misspecifications in the traditional CAPM that fail to capture true regulatory risk factors. One of the reasons is that the CAPM does not consider factors that are proven to have significant effects on asset pricing, such as the Fama and French size (ME) and price-to-book (ME/BE) factors. In the third paper, I use these two additional factors as controls in the estimation of alphas, and the results are similar. Nevertheless, I find evidence that the negative alphas may be the result of the regulated sectors' premiums associated with the three Fama and French factors, particularly the market risk premium. Taken together, ME and ME/BE diminish the statistical significance of the market factor premiums for regulated sectors, especially the electricity sector. This shows the importance of including these factors, which unfortunately is rarely done in emerging markets like Brazil.
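
The first step of the first paper can be sketched as a scalar Kalman filter with a random-walk beta; in the sketch below the noise variances are fixed rather than estimated, and the return series are simulated.

```python
# Minimal sketch: time-varying CAPM beta via a scalar Kalman filter.
# State: beta_t = beta_{t-1} + w_t; observation: r_t = alpha + beta_t*rm_t + e_t.
import numpy as np

def kalman_beta(r, rm, alpha=0.0, q=1e-4, h=1e-2, beta0=1.0, p0=1.0):
    beta, P = beta0, p0
    betas = np.empty(len(r))
    for t in range(len(r)):
        P = P + q                                  # predict: random-walk state
        y_err = r[t] - alpha - beta * rm[t]        # observation innovation
        S = rm[t] * P * rm[t] + h                  # innovation variance
        K = P * rm[t] / S                          # Kalman gain
        beta = beta + K * y_err                    # update state estimate
        P = (1 - K * rm[t]) * P
        betas[t] = beta
    return betas

rng = np.random.default_rng(6)
rm = rng.normal(0, 0.02, 500)                      # market excess returns
true_beta = 1.0 + 0.5 * np.sin(np.linspace(0, 4, 500))
r = true_beta * rm + rng.normal(0, 0.01, 500)
print("last filtered beta:", kalman_beta(r, rm)[-1], "true:", true_beta[-1])
```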

Relevance: 30.00%

Abstract:

The aim of this article is to assess the role of real effective exchange rate (RER) volatility in long-run economic growth for a set of 82 advanced and emerging economies, using a panel data set ranging from 1970 to 2009. With an accurate measure of exchange rate volatility, the results of the two-step system GMM panel growth models show that a more (less) volatile RER has a significant negative (positive) impact on economic growth, and the results are robust across different model specifications. In addition, exchange rate stability seems to be more important for fostering long-run economic growth than exchange rate misalignment.
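
The abstract emphasizes an accurate volatility measure without specifying it; one common choice, shown below purely as an assumption, is the rolling standard deviation of log RER changes, averaged up to the panel frequency.

```python
# Minimal sketch of one common RER volatility measure (an assumption, not
# necessarily the paper's): rolling std of monthly log changes, then averaged
# by year for use in an annual panel.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
rer = pd.Series(np.exp(np.cumsum(rng.normal(0, 0.03, 480))))  # monthly RER
log_changes = np.log(rer).diff()
volatility = log_changes.rolling(window=12).std()     # 12-month rolling std
annual_vol = volatility.groupby(np.arange(len(rer)) // 12).mean()
print(annual_vol.head())
```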

Relevance: 30.00%

Abstract:

This paper investigates the pattern of variation in economic growth across and within countries using a time-varying transition matrix Markov-switching approach. The model developed follows the approach of Pritchett (2003) and explains the dynamics of growth based on a collection of different states, each with a sub-model and a growth pattern, among which countries oscillate over time. The transition matrix among the different states varies over time, depending on the conditioning variables of each country, with a linear dynamic for each state. We develop a generalization of Diebold's EM algorithm and estimate an example model on a panel, with a transition matrix conditioned on the quality of institutions and the level of investment. We find three states of growth: stable growth, miraculous growth, and stagnation. The results show that the quality of institutions is an important determinant of long-term growth, whereas the level of investment has varying roles: it contributes positively in countries with high-quality institutions but is of little relevance in countries with medium- or poor-quality institutions.
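
A time-varying transition matrix is commonly parameterized with a multinomial logit in the conditioning variables; the sketch below illustrates that construction for three states, with placeholder coefficients rather than estimates.

```python
# Minimal sketch of a time-varying transition matrix: each row of the 3-state
# matrix is a multinomial logit in the conditioning variables. Coefficients
# are illustrative placeholders, not estimates.
import numpy as np

def transition_matrix(x, G):
    """x: conditioning variables (k,); G: logit coefficients (S, S, k).
    Returns the S x S matrix P with P[i, j] = P(s_t = j | s_{t-1} = i, x_t)."""
    scores = G @ x                     # (S, S) linear indices, row i = origin
    exps = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    return exps / exps.sum(axis=1, keepdims=True)

S, k = 3, 2                            # states; (institutions, investment)
rng = np.random.default_rng(8)
G = rng.normal(0, 0.5, size=(S, S, k))
x_t = np.array([1.2, 0.4])             # high institutions, moderate investment
P_t = transition_matrix(x_t, G)
print(np.round(P_t, 3), "row sums:", P_t.sum(axis=1))
```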

Relevance: 30.00%

Abstract:

The thesis at hand adds to the existing literature by investigating the relationship between economic growth and outward foreign direct investment (OFDI) for a set of 16 emerging countries. Two different econometric techniques are employed: a panel data regression analysis and a time-series causality analysis. Results from the regression analysis indicate a positive and significant correlation between OFDI and economic growth. Additionally, the coefficient of the OFDI variable is robust in the sense specified by the Extreme Bounds Analysis (EBA). The findings of the causality analysis, on the other hand, are markedly heterogeneous. The vector autoregression (VAR) and vector error correction model (VECM) approaches identify unidirectional Granger causality running either from OFDI to GDP or from GDP to OFDI in six countries. In four economies, causality between the two variables is bidirectional, whereas in five countries no causal relationship between OFDI and GDP seems to be present.
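
The pairwise causality step can be illustrated with statsmodels' Granger test; the snippet below simulates one country in which OFDI Granger-causes GDP by construction. In practice, the lag length would be chosen by information criteria rather than fixed.

```python
# Minimal sketch of a Granger causality test for one country on simulated
# (GDP growth, OFDI growth) series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(9)
T = 120
ofdi = rng.normal(0, 1, T)
gdp = np.zeros(T)
for t in range(1, T):                  # OFDI Granger-causes GDP by construction
    gdp[t] = 0.4 * gdp[t - 1] + 0.3 * ofdi[t - 1] + rng.normal(0, 1)

# Column order matters: the test asks whether the SECOND column
# Granger-causes the FIRST.
res = grangercausalitytests(np.column_stack([gdp, ofdi]), maxlag=2)
```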

Relevance: 30.00%

Abstract:

Housing is an important component of wealth for the typical household in many countries. The objective of this paper is to investigate the effect of real-estate price variation on welfare, trying to close a gap between the welfare literature in Brazil and that in the U.S., the U.K., and other developed countries. Our first motivation is that real estate is probably a more important share of wealth in Brazil than elsewhere, which potentially makes the impact of a price change bigger here. Our second motivation is that real-estate prices boomed in Brazil in the last five years: prime real estate in Rio de Janeiro and São Paulo tripled in value in that period, and a smaller but generalized increase has been observed throughout the country. Third, Brazil has also seen a consumption boom in the last five years; indeed, the recent rise of some of the poor to middle-income status is well documented, not only for Brazil but for other emerging countries as well. Regarding consumption and real-estate prices in Brazil, one cannot infer causality from correlation, but one can do causal inference with an appropriate structural model and proper inference, or with proper inference in a reduced-form setup. Our last motivation is the complete absence of studies of this kind for Brazil, which makes ours a pioneering study. We assemble a panel data set on the determinants of non-durable consumption growth for Brazilian states, merging the techniques and ideas in Campbell and Cocco (2007) and in Case, Quigley and Shiller (2005). With appropriate controls and panel-data methods, we investigate whether house-price variation has a positive effect on non-durable consumption. The results show a non-negligible, significant impact of changes in real-estate prices on welfare (consumption), although smaller than what Campbell and Cocco found. Our findings support the view that the channel through which house prices affect consumption is a financial one.
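
The reduced-form panel setup can be sketched as a fixed-effects regression of consumption growth on house-price growth; the snippet below uses linearmodels with simulated state-year data and illustrative variable names and controls.

```python
# Minimal sketch of a state-level panel regression of non-durable consumption
# growth on house-price growth, with state and time fixed effects.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(10)
states = [f"state_{i}" for i in range(27)]
years = range(2000, 2010)
idx = pd.MultiIndex.from_product([states, years], names=["state", "year"])
df = pd.DataFrame(index=idx)
df["house_price_growth"] = rng.normal(0.05, 0.10, len(df))
df["income_growth"] = rng.normal(0.03, 0.05, len(df))
df["consumption_growth"] = (0.2 * df["house_price_growth"]
                            + 0.5 * df["income_growth"]
                            + rng.normal(0, 0.02, len(df)))

mod = PanelOLS.from_formula(
    "consumption_growth ~ house_price_growth + income_growth"
    " + EntityEffects + TimeEffects", data=df)
print(mod.fit(cov_type="clustered", cluster_entity=True).summary)
```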