853 results for heterogeneous regressions algorithms


Relevance: 20.00%

Abstract:

The motivation for this work comes from the main results of Carvalho and Schwartzman (2008), where heterogeneity arises from different price-adjustment rules across sectors. The sectoral moments of the duration of nominal rigidity are sufficient to explain certain monetary effects. Once we agree that heterogeneity is relevant for the study of price rigidity, how could we write a model with the smallest possible number of sectors that still has enough heterogeneity to produce any desired monetary impact, or indeed to match any three moments of the duration distribution? To answer this question, this paper restricts attention to constant-hazard models and takes the cumulative effect and the short-run dynamics of monetary policy as good ways of summarizing large heterogeneous economies. We show that two sectors are sufficient to summarize the cumulative effects of monetary shocks, and that economies with 3 sectors are good approximations to the dynamics of these effects. Numerical exercises for the short-run dynamics of an economy with information rigidity show that approximating 500 sectors using only 3 produces errors below 3%. That is, if a monetary shock reduces output by 5%, the approximating economy will produce an impact between 4.85% and 5.15%. The same holds for the dynamics produced by money-level shocks in an economy with price rigidity. For shocks to the money growth rate, the maximum approximation error is 2.4%.
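A rough illustration of the moment-matching flavor of this summary, in Python. This is not the paper's procedure (which matches the cumulative effects and dynamics of monetary shocks in constant-hazard economies); it is only a generic sketch of collapsing 500 heterogeneous sectoral price durations into a 2-sector economy that reproduces the first three cross-sectional moments. The gamma-distributed durations and Dirichlet weights are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
durations = rng.gamma(shape=2.0, scale=3.0, size=500)  # hypothetical sectoral mean price durations
weights = rng.dirichlet(np.ones(500))                   # hypothetical sectoral weights

# First three cross-sectional moments of the duration distribution.
m1, m2, m3 = (np.sum(weights * durations**k) for k in (1, 2, 3))

# Two-sector economy: durations a, b with weight w on a. The durations are the roots of
# x^2 - p x + q = 0, where p, q solve the moment recurrences
#   p*m1 - q    = m2
#   p*m2 - q*m1 = m3
# (the same construction as a 2-point Gauss quadrature of the duration distribution).
p, q = np.linalg.solve([[m1, -1.0], [m2, -m1]], [m2, m3])
a, b = np.roots([1.0, -p, q]).real
w = (m1 - b) / (a - b)                                   # weight on duration a

approx = [w * a**k + (1 - w) * b**k for k in (1, 2, 3)]
print("target moments :", np.round([m1, m2, m3], 3))
print("2-sector match :", np.round(approx, 3))
print(f"sector durations {a:.3f} and {b:.3f}, weights {w:.3f} and {1 - w:.3f}")
```

Two durations and one weight give exactly the three free parameters needed to match three moments, which is the sense in which a two- or three-sector economy can stand in for a much larger one.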

Relevance: 20.00%

Abstract:

This paper is concerned with evaluating value-at-risk estimates. It is well known that using only binary variables to do this sacrifices too much information. However, most of the specification tests (also called backtests) available in the literature, such as Christoffersen (1998) and Engle and Manganelli (2004), are based on such variables. In this paper we propose a new backtest that does not rely solely on binary variables. It is shown that the new backtest provides a sufficient condition to assess the performance of a quantile model, whereas the existing ones do not. The proposed methodology allows us to identify periods of increased risk exposure based on a quantile regression model (Koenker & Xiao, 2002). Our theoretical findings are corroborated through a Monte Carlo simulation and an empirical exercise with the daily S&P 500 time series.
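To make the "binary variables discard information" point concrete, here is a minimal Python sketch. It is not the backtest proposed in the paper, nor Christoffersen's or Engle and Manganelli's tests: a rolling historical 5% VaR is evaluated both by its binary hit rate and by the quantile (pinball) loss, which does use the magnitude of exceedances. The window length, quantile level and synthetic returns are arbitrary choices.

```python
import numpy as np

def rolling_var(returns, window=250, q=0.05):
    """One-step-ahead VaR forecast: empirical q-quantile of the last `window` returns."""
    var = np.full(returns.shape, np.nan)
    for t in range(window, len(returns)):
        var[t] = np.quantile(returns[t - window:t], q)
    return var

def hit_rate(returns, var):
    """Binary backtest ingredient: fraction of days the return falls below the VaR."""
    mask = ~np.isnan(var)
    return (returns[mask] < var[mask]).mean()

def pinball_loss(returns, var, q=0.05):
    """Quantile (pinball) loss: unlike the hit rate, it uses the size of exceedances."""
    mask = ~np.isnan(var)
    u = returns[mask] - var[mask]
    return np.mean(np.maximum(q * u, (q - 1) * u))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    r = rng.standard_t(df=5, size=2000) * 0.01   # synthetic heavy-tailed daily returns
    v = rolling_var(r)
    print(f"empirical hit rate: {hit_rate(r, v):.3f} (nominal 0.05)")
    print(f"pinball loss:       {pinball_loss(r, v):.6f}")
```

Day by day, the hit series only records whether the VaR was crossed, while the pinball loss also penalizes how far below the VaR the return fell; that extra information is what the abstract refers to.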

Relevance: 20.00%

Abstract:

This thesis comprises three chapters. The first article studies the determinants of the labor force participation of elderly American males and investigates the factors that may account for the changes in retirement between 1950 and 2000. We develop a life-cycle general equilibrium model with endogenous retirement that embeds Social Security legislation and Medicare. Individuals are ex ante heterogeneous with respect to their preferences for leisure and face uncertainty about labor productivity, health status and out-of-pocket medical expenses. The model is calibrated to the U.S. economy in 2000 and is able to reproduce very closely the retirement behavior of the American population. It reproduces the peaks in the distribution of Social Security applications at ages 62 and 65 and the observed facts that low earners and unhealthy individuals retire earlier. It also matches very closely the increase in retirement from 1950 to 2000. Changes in Social Security policy, which became much more generous, and the introduction of Medicare account for most of the expansion of retirement. In contrast, the isolated impact of the increase in longevity was a delay in retirement. In the second article, I develop an overlapping generations model of criminal behavior which extends prior research on crime by taking into account individuals' labor supply decisions and the stigma effect that affects convicted offenders, lowering their likelihood of employment. I use the model to guide a quantitative assessment of the determinants of crime and a counterfactual experiment in which an income redistribution policy is considered as an alternative to greater law enforcement. The model economy considered in this paper is populated by heterogeneous agents who live for a realistic number of periods, have preferences over consumption and leisure, and differ in terms of their age, their skills and their employment shocks. In addition, savings may be precautionary and allow partial insurance against the labor income shocks. Because of the lack of full insurance, this model generates an endogenous distribution of wealth across consumers, enabling us to assess the welfare implications of the redistribution policy experiment. I calibrate the model using US data for 1980 and then use it to investigate the changes in criminality between 1980 and 1996. The main results of this study are: 1) law enforcement policy was the most important factor behind the fall in criminality in the period, while the increase in inequality was the most important single factor promoting crime; 2) stigmatization is not a cost-free crime control policy; 3) income redistribution can be a powerful alternative policy to fight crime. Finally, the third article studies the impact of HIV/AIDS on per capita income and education. It explores two channels from HIV/AIDS to income that have not been sufficiently stressed by the literature: the reduction of the incentives to study due to shorter expected longevity and the reduction of productivity of experienced workers. In the model, individuals live for three periods, may get infected in the second period and, with some probability, die of AIDS before reaching the third period of their life. Parents care about the welfare of future generations, so they maximize the lifetime utility of their dynasty.
The simulations predict that the most affected countries in Sub-Saharan Africa will in the future be, on average, thirty percent poorer than they would be without AIDS. Schooling will decline in some cases by forty percent. These figures are dramatically reduced with widespread medical treatment, as it increases the survival probability and productivity of infected individuals.

Relevance: 20.00%

Abstract:

We develop a model of comparative advantage with monopolistic competition that incorporates heterogeneous firms and endogenous mark-ups. We analyse how these features vary across countries with different factor endowments and across markets of different sizes. In this model, trade gains arise via two channels. First, when the economy opens, the most productive firms start to export; they demand more production factors, wages rise, and the least productive firms are forced to stop producing. The second channel works through endogenous mark-ups: when the economy opens, competition gets "tougher", mark-ups fall, and the least productive firms stop producing. We also show that comparative advantage works as a "third channel" of trade gains, because all trade gains are magnified in the comparative-advantage industry of each country. We also perform a numerical exercise to see how the endogenous variables of the model vary when trade costs fall.

Relevance: 20.00%

Abstract:

This paper analyzes the impact of profit sharing on the incentives that individuals face to set up their own business. It presents a model of capital accumulation in which individuals are equally skilled as workers but differ in their ability to manage a firm. It is shown that profit sharing can inhibit entrepreneurial initiatives, reducing the number of firms in operation, aggregate output and the economy's long-run capital stock.

Relevance: 20.00%

Abstract:

We study the optimal “inflation tax” in an environment with heterogeneous agents and non-linear income taxes. We first derive the general conditions needed for the optimality of the Friedman rule in this setup. These general conditions are distinct in nature and more easily interpretable than those obtained in the literature with a representative agent and linear taxation. We then study two standard monetary specifications and derive their implications for the optimality of the Friedman rule. For the shopping-time model the Friedman rule is optimal with essentially no restrictions on preferences or transaction technologies. For the cash-credit model the Friedman rule is optimal if preferences are separable between the consumption goods and leisure, or if leisure shifts consumption towards the credit good. We also study a generalized model which nests both models as special cases.

Relevance: 20.00%

Abstract:

This thesis consists of three articles: "Tax Filing Choices for the Household", "Optimal Tax for the Household: Collective and Unitary Approaches" and "Vertical Differentiation and Heterogeneous Firms".

Relevance: 20.00%

Abstract:

We study a dynamic model of coordination with timing frictions and payoff heterogeneity. There is a unique equilibrium, characterized by thresholds that determine the choices of each type of agent. We characterize the equilibrium for the limiting cases of vanishing timing frictions and vanishing shocks to fundamentals. A lot of conformity emerges: despite payoff heterogeneity, agents' equilibrium thresholds partially coincide as long as there exists a set of beliefs that would make this coincidence possible, though they never fully coincide. In the case of vanishing frictions, the economy behaves almost as if all agents were equal to an average type. Conformity is not inefficient: the efficient solution would have agents following others even more often and giving less importance to the fundamental.

Relevance: 20.00%

Abstract:

This paper examines the current global scene of within-nation distributional disparities. There are six main conclusions. First, about 80 per cent of the world’s population now live in regions whose median country has a Gini not far from 40. Second, as outliers are now only located among middle-income and rich countries, the ‘upwards’ side of the ‘Inverted-U’ between inequality and income per capita has evaporated (and with it the statistical support there was for the hypothesis that posits that, for whatever reason, ‘things have to get worse before they can get better’). Third, among middle-income countries Latin America and mineral-rich Southern Africa are uniquely unequal, while Eastern Europe follows a distributional path similar to the Nordic countries. Fourth, among rich countries there is a large (and growing) distributional diversity. Fifth, within a global trend of rising inequality, there are two opposite forces at work. One is ‘centrifugal’, and leads to an increased diversity in the shares appropriated by the top 10 and bottom 40 per cent. The other is ‘centripetal’, and leads to a growing uniformity in the income-share appropriated by deciles 5 to 9. Therefore, half of the world’s population (the middle and upper-middle classes) have acquired strong ‘property rights’ over half of their respective national incomes; the other half, however, is increasingly up for grabs between the very rich and the poor. And sixth, globalisation is thus creating a distributional scenario in which what really matters is the income-share of the rich, because the rest ‘follows’ (middle classes able to defend their shares, and workers with ever more precarious jobs in ever more ‘flexible’ labour markets). Therefore, anybody attempting to understand the within-nation disparity of inequality should always be reminded of this basic distributional fact, following the example of Clinton’s campaign strategist, by sticking a note on their notice-boards saying “It’s the share of the rich, stupid”.

Relevance: 20.00%

Abstract:

SOUZA, Rodrigo B.; MEDEIROS, Adelardo A. D.; NASCIMENTO, João Maria A.; GOMES, Heitor P.; MAITELLI, André L. A Proposal to the Supervision of Processes in an Industrial Environment with Heterogeneous Systems. In: INTERNATIONAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 32., 2006, Paris. Proceedings... Paris: IECON, 2006.

Relevance: 20.00%

Abstract:

Fourteen random regression models were used to fit 86,598 test-day milk yield records from 2,155 first lactations of Caracu cows, truncated at 305 days. The models included the fixed effects of contemporary group and the covariate age of cow at calving. A cubic orthogonal regression was used to model the mean trajectory of the population. Additive genetic and permanent environmental effects were modeled through random regressions using cubic Legendre orthogonal polynomials. Different residual variance structures were tested, using classes containing 1, 10, 15 and 43 residual variances and variance functions (VF) based on ordinary and orthogonal polynomials of orders ranging from quadratic to sextic. The models were compared using the likelihood ratio test, Akaike's Information Criterion and Schwarz's Bayesian Information Criterion. The tests indicated that the higher the order of the variance function, the better the fit. Among the ordinary polynomials, the sixth-order function was superior. Models with residual variance classes were apparently superior to those with variance functions. The model assuming homogeneous variances was inadequate. The model with 15 heterogeneous classes fitted the residual variances best; however, the estimated genetic parameters were very similar for the models with 10, 15 or 43 variance classes or with a sixth-order VF.
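As a minimal sketch of the kind of design matrix such random regression models use (this is not the authors' code; the standardization range, synthetic test days and toy lactation curve are assumptions), the Python snippet below maps days in milk to [-1, 1], builds cubic Legendre covariates, and fits only the fixed mean trajectory by least squares:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def legendre_covariates(days, t_min=5, t_max=305, order=3):
    """Standardize days in milk to [-1, 1] and return Legendre covariates
    of degree 0..order (cubic by default, as in the abstract)."""
    x = 2.0 * (days - t_min) / (t_max - t_min) - 1.0
    return legvander(x, order)          # shape: (n_records, order + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dim = rng.integers(5, 306, size=1000)                   # synthetic test days
    Z = legendre_covariates(dim)                            # covariates for the regressions
    # toy lactation curve plus noise, then a least-squares fit of the mean trajectory
    y = 20 + 5 * np.exp(-0.02 * dim) - 0.03 * dim + rng.normal(0, 2, size=dim.size)
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    print("fitted cubic Legendre coefficients:", np.round(beta, 3))
```

In the actual models the same covariates also multiply the animal-specific random regression coefficients; that part requires a mixed-model solver and is omitted here.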

Relevance: 20.00%

Abstract:

A total of 17,767 weight records from 4,210 Santa Inês lambs were used to compare random regression models with different structures for modeling the residual variance in genetic studies of the growth curve. The fixed effects included in the analysis were contemporary group and age of ewe at lambing. The fixed and random regressions were fitted using Legendre polynomials of orders 4 and 3, respectively. The residual variance was fitted through heterogeneous classes and through variance functions using ordinary and Legendre polynomials of orders 2 to 8. The model assuming homogeneous residual variances proved inadequate. According to the criteria used, a residual variance structure with seven heterogeneous classes provided the best fit, although a more parsimonious model, with five classes, could be used without loss of fit quality. Fitting variance functions of any order was better than fitting classes. The sixth-order ordinary polynomial provided the best fit among the structures tested. The modeling of the residual affected the estimates of variances and genetic parameters. Besides changing the ranking of sires, the magnitude of the predicted breeding values varies significantly according to the residual variance structure adopted.
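A minimal Python sketch of the contrast between homogeneous and class-wise heterogeneous residual variances (not the authors' analysis; the plain Gaussian likelihood, the five quantile-based age classes and the synthetic residuals are assumptions made only to illustrate why ignoring heterogeneity is penalized by information criteria):

```python
import numpy as np

def gaussian_loglik(resid, sigma2):
    """Log-likelihood of residuals under N(0, sigma2), with sigma2 given per observation."""
    return -0.5 * np.sum(np.log(2 * np.pi * sigma2) + resid**2 / sigma2)

def aic_homogeneous(resid):
    """One residual variance for all ages (1 parameter)."""
    s2 = np.full(resid.shape, resid.var())
    return 2 * 1 - 2 * gaussian_loglik(resid, s2)

def aic_classes(resid, age, n_classes=5):
    """Heterogeneous residual variances: one variance per age class (n_classes parameters)."""
    edges = np.quantile(age, np.linspace(0, 1, n_classes + 1))
    cls = np.clip(np.searchsorted(edges, age, side="right") - 1, 0, n_classes - 1)
    s2 = np.empty(resid.shape)
    for c in range(n_classes):
        s2[cls == c] = resid[cls == c].var()
    return 2 * n_classes - 2 * gaussian_loglik(resid, s2)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    age = rng.uniform(0, 300, size=5000)                 # synthetic ages at weighing (days)
    resid = rng.normal(0, 1 + age / 150, size=age.size)  # residual spread grows with age
    print("AIC, homogeneous variance :", round(aic_homogeneous(resid), 1))
    print("AIC, 5 variance classes   :", round(aic_classes(resid, age), 1))
```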

Relevance: 20.00%

Abstract:

Since equipment maintenance is the major cost factor in industrial plants, the development of fault prediction techniques is very important. Three-phase induction motors are key electrical equipment in industrial applications, mainly because they combine low cost with great robustness; nevertheless, they are not immune to faults such as shorted windings and broken bars. Several acquisition, processing and signal analysis approaches are applied to improve their diagnosis. The most efficient techniques use current sensors and analyse the current signature. In this dissertation, starting from these sensors, the signal analysis is carried out through Park's vector, which provides good visualization capability. Since fault data acquisition is an arduous task, a methodology for building the database is developed. Park's transform in the stationary reference frame is applied for machine modeling and for solving the machine's differential equations. Fault detection requires a detailed analysis of the variables and of their influences, which makes the diagnosis more complex. Pattern recognition allows systems to be generated automatically, based on patterns and data concepts that in most cases are undetectable by specialists, supporting decision-making. Classification algorithms with diverse learning paradigms (k-Nearest Neighbors, Neural Networks, Decision Trees and Naïve Bayes) are used for the pattern recognition of machine faults. Multi-classifier systems are used to reduce classification errors; both homogeneous algorithms (Bagging and Boosting) and heterogeneous ones (Vote, Stacking and Stacking C) are inspected. The results show the effectiveness of the constructed model for fault modeling, as well as the feasibility of using multi-classifier algorithms for fault classification.
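A minimal Python sketch of such a pipeline, under heavy assumptions: the Park's vector formula is the standard Concordia transform, the "fault" is a synthetic phase unbalance, the features are arbitrary, and scikit-learn's Bagging, Voting and Stacking ensembles stand in for the homogeneous and heterogeneous multi-classifiers mentioned in the abstract (the dissertation also uses Boosting and Stacking C, and a real current database rather than simulated signals).

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def park_vector(ia, ib, ic):
    """Park's (Concordia) vector of the three stator currents."""
    i_d = np.sqrt(2 / 3) * ia - ib / np.sqrt(6) - ic / np.sqrt(6)
    i_q = ib / np.sqrt(2) - ic / np.sqrt(2)
    return i_d, i_q

def features(ia, ib, ic):
    """Simple features from the Park's vector pattern (a healthy motor traces a circle)."""
    i_d, i_q = park_vector(ia, ib, ic)
    r = np.hypot(i_d, i_q)
    return np.array([r.mean(), r.std(), r.max() - r.min()])

# Synthetic example: healthy (label 0) vs. unbalanced, i.e. faulty (label 1) currents.
rng = np.random.default_rng(3)
t = np.linspace(0, 0.2, 2000)
X, y = [], []
for label in (0, 1):
    for _ in range(60):
        amp_a = 10 * (1 + 0.15 * label * rng.random())   # fault -> phase-a unbalance
        ia = amp_a * np.sin(2 * np.pi * 60 * t) + rng.normal(0, 0.1, t.size)
        ib = 10 * np.sin(2 * np.pi * 60 * t - 2 * np.pi / 3) + rng.normal(0, 0.1, t.size)
        ic = 10 * np.sin(2 * np.pi * 60 * t + 2 * np.pi / 3) + rng.normal(0, 0.1, t.size)
        X.append(features(ia, ib, ic))
        y.append(label)
X, y = np.array(X), np.array(y)

base = [("knn", KNeighborsClassifier()), ("tree", DecisionTreeClassifier()), ("nb", GaussianNB())]
models = {
    "bagging (homogeneous)":   BaggingClassifier(DecisionTreeClassifier(), n_estimators=25),
    "vote (heterogeneous)":    VotingClassifier(base),
    "stacking (heterogeneous)": StackingClassifier(base, final_estimator=LogisticRegression()),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```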

Relevance: 20.00%

Abstract:

Markovian algorithms for estimating the global maximum or minimum of real-valued functions defined on a domain Ω ⊂ R^d are presented. Conditions on the search schemes that preserve the asymptotic distribution are derived. Global and local search schemes satisfying these conditions are analysed and shown to yield sharper confidence intervals than in the i.i.d. case.
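The paper's algorithms are not reproduced here, but the generic contrast between i.i.d. sampling and a Markovian search scheme can be sketched in Python as follows; the random-walk-with-restarts scheme, its step size and the Rastrigin test function are assumptions made only for illustration.

```python
import numpy as np

def iid_search(f, bounds, n, rng):
    """i.i.d. uniform sampling of the domain; keep the best value seen."""
    lo, hi = bounds
    pts = rng.uniform(lo, hi, size=(n, len(lo)))
    return min(f(p) for p in pts)

def markov_search(f, bounds, n, rng, step=0.2):
    """Markovian search: a random-walk proposal around the current point,
    accepted when it improves, with a fresh uniform draw otherwise."""
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    x = rng.uniform(lo, hi)
    best = f(x)
    for _ in range(n - 1):
        prop = np.clip(x + rng.normal(0, step, size=x.size), lo, hi)
        if f(prop) < f(x):
            x = prop
        else:
            x = rng.uniform(lo, hi)      # global restart keeps the chain exploring
        best = min(best, f(x))
    return best

if __name__ == "__main__":
    rastrigin = lambda p: 10 * p.size + np.sum(p**2 - 10 * np.cos(2 * np.pi * p))
    rng = np.random.default_rng(4)
    bounds = (np.full(2, -5.12), np.full(2, 5.12))
    print("i.i.d. search best    :", round(iid_search(rastrigin, bounds, 5000, rng), 4))
    print("Markovian search best :", round(markov_search(rastrigin, bounds, 5000, rng), 4))
```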