866 results for Linear Assets


Relevance:

20.00%

Publisher:

Abstract:

We study the optimal “inflation tax” in an environment with heterogeneous agents and non-linear income taxes. We first derive the general conditions needed for the optimality of the Friedman rule in this setup. These general conditions are distinct in nature and more easily interpretable than those obtained in the literature with a representative agent and linear taxation. We then study two standard monetary specifications and derive their implications for the optimality of the Friedman rule. For the shopping-time model the Friedman rule is optimal with essentially no restrictions on preferences or transaction technologies. For the cash-credit model the Friedman rule is optimal if preferences are separable between the consumption goods and leisure, or if leisure shifts consumption towards the credit good. We also study a generalized model which nests both models as special cases.

Relevance:

20.00%

Publisher:

Abstract:

We evaluate the forecasting performance of a number of systems models of US short- and long-term interest rates. Non-linearities, including asymmetries in the adjustment to equilibrium, are shown to result in more accurate short-horizon forecasts. We find that both long and short rates respond to disequilibria in the spread in certain circumstances, which would not be evident from linear representations or from single-equation analyses of the short-term interest rate.
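The asymmetric adjustment the abstract refers to can be sketched as a threshold error-correction step, in which the speed of adjustment toward the equilibrium spread depends on the sign of the disequilibrium. This is a minimal illustrative sketch; the function and parameter names are hypothetical, not the paper's estimated model.

```python
def asymmetric_adjustment(short, long, eq_spread, alpha_pos, alpha_neg):
    """One threshold error-correction step for the short rate: the
    adjustment speed depends on the sign of the spread disequilibrium."""
    disequilibrium = (long - short) - eq_spread
    # Different adjustment speeds above and below equilibrium
    alpha = alpha_pos if disequilibrium > 0 else alpha_neg
    return short + alpha * disequilibrium
```

With `alpha_pos` different from `alpha_neg`, positive and negative gaps close at different speeds, which a linear error-correction model cannot capture.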

Relevance:

20.00%

Publisher:

Abstract:

The increase in the importance of intangibles in business competitiveness has made investment selection more challenging for investors who, under high information asymmetry, tend to charge higher premiums to provide capital or simply deny it. Private Equity and Venture Capital (PE/VC) organizations developed contemporaneously with the increase in the relevance of intangible assets in the economy. They form a specialized breed of financial intermediaries that are better prepared to deal with information asymmetry. This paper is the result of ten interviews with PE/VC organizations in Brazil. Its objective is to describe the selection process, criteria and indicators used by these organizations to identify and measure intangible assets, as well as the methods used to value prospective investments. Results show that PE/VC organizations rely on sophisticated methods to assess investment proposals, with specific criteria and indicators for the main classes of intangible assets. However, no value is assigned to these assets individually. The information gathered is used to understand the sources of cash flows and risks, which are then combined by discounted cash flow methods to estimate firm value. Given PE/VC organizations' extensive experience with innovative Small and Medium-sized Enterprises (SMEs), we believe that shedding light on how they deal with intangible assets brings important insights to the intangible assets debate.
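The final valuation step the interviewees describe, combining cash-flow and risk information through discounted cash flow, can be illustrated with a minimal sketch. The function and its inputs are hypothetical, not a description of any interviewed organization's actual model.

```python
def dcf_value(cash_flows, discount_rate, terminal_growth):
    """Firm value as the present value of forecast cash flows plus a
    Gordon-growth terminal value."""
    # Discount each explicit-period cash flow back to today
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    # Terminal value at the end of the explicit period, then discounted
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return pv + terminal / (1 + discount_rate) ** len(cash_flows)
```

As a sanity check, a flat cash flow of 100 at a 10% rate with no growth reproduces the perpetuity value 100 / 0.10 = 1000.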

Relevance:

20.00%

Publisher:

Abstract:

Life cycle general equilibrium models with heterogeneous agents have a very hard time reproducing the American wealth distribution. A common assumption in this literature is that all young adults enter the economy with no initial assets. In this article, we relax this assumption, which is not supported by the data, and evaluate the ability of an otherwise standard life cycle model to account for U.S. wealth inequality. The new feature of the model is that agents enter the economy with assets drawn from an initial distribution of assets, which is estimated using a non-parametric method applied to data from the Survey of Consumer Finances. We find that heterogeneity with respect to initial wealth is key for this class of models to replicate the data. According to our results, American inequality can be explained almost entirely by the fact that some individuals are lucky enough to be born into wealth, while others are born with few or no assets.
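One common way to draw simulated agents from a non-parametrically estimated initial distribution, in the spirit of the paper's approach, is a smoothed bootstrap from a Gaussian kernel density estimate. The sketch below is a generic illustration with a textbook bandwidth rule; the data and bandwidth choice are placeholders, not the authors' estimator.

```python
import random
import statistics

def sample_initial_assets(observed, n, bandwidth=None, rng=None):
    """Draw n values from a Gaussian KDE fitted to `observed`
    (smoothed bootstrap: resample a data point, add kernel noise)."""
    rng = rng or random.Random(0)
    if bandwidth is None:
        # Silverman's rule-of-thumb bandwidth
        bandwidth = 1.06 * statistics.stdev(observed) * len(observed) ** -0.2
    return [rng.choice(observed) + rng.gauss(0.0, bandwidth) for _ in range(n)]
```

Each simulated agent's initial wealth is then one draw from this distribution rather than zero.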

Relevance:

20.00%

Publisher:

Abstract:

We consider multistage stochastic linear optimization problems combining joint dynamic probabilistic constraints with hard constraints. We develop a method for projecting decision rules onto hard constraints of wait-and-see type. We establish the relation between the original (infinite-dimensional) problem and approximating problems working with projections from different subclasses of decision policies. Considering the subclass of linear decision rules and a generalized linear model for the underlying stochastic process with noises that are Gaussian or truncated Gaussian, we show that the value and gradient of the objective and constraint functions of the approximating problems can be computed analytically.
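Two of the abstract's ingredients can be made concrete in a few lines: a linear decision rule makes each stage's decision an affine function of the noises observed so far, and a hard wait-and-see constraint can be enforced by projecting the decision onto the feasible set once it is known. The interval projection below is a toy stand-in for the paper's general projection method; names and dimensions are illustrative.

```python
def linear_rule(intercept, coeffs, noises):
    """Stage decision as an affine function of the observed noises."""
    return intercept + sum(c * z for c, z in zip(coeffs, noises))

def project_wait_and_see(decision, lo, hi):
    """Project a decision onto a hard interval constraint that becomes
    known only once the stage is reached."""
    return max(lo, min(decision, hi))
```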

Relevance:

20.00%

Publisher:

Abstract:

This video lesson covers linear dependence, a type of relation among the vectors of a set. A set of vectors is linearly dependent if and only if one of the vectors in the set is a linear combination of the remaining vectors. Equivalently, the set is linearly dependent when the zero vector can be written as a linear combination of its vectors with coefficients that are not all zero. When no such linear relation exists among the vectors, the set is said to be linearly independent.
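The criterion in the lesson can be checked numerically: stack the vectors as rows of a matrix; the set is linearly dependent exactly when the rank of that matrix is smaller than the number of vectors. A small Gaussian-elimination sketch:

```python
def matrix_rank(rows, eps=1e-9):
    """Rank via Gaussian elimination on a copy of the matrix."""
    m = [list(map(float, r)) for r in rows]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        # Find a pivot for this column among the remaining rows
        pivot = next((r for r in range(rank, len(m)) if abs(m[r][col]) > eps), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        pv = m[rank][col]
        m[rank] = [x / pv for x in m[rank]]
        # Eliminate the column from all other rows
        for r in range(len(m)):
            if r != rank and abs(m[r][col]) > eps:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def is_linearly_dependent(vectors):
    """Dependent iff some vector is a combination of the others,
    i.e. the rank falls short of the number of vectors."""
    return matrix_rank(vectors) < len(vectors)
```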

Relevance:

20.00%

Publisher:

Abstract:

When the matrix of a linear system is in reduced row echelon form, it is easy to write out the solution of the system. This video lesson explains how to obtain the general solution, presenting the methods to be used in each case and the different ways of finding the general solution of a system.
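Reading the solution off an RREF matrix can be mechanized: pivot columns give the basic variables, the remaining columns are free variables, and one particular solution sets every free variable to zero. A sketch for an augmented matrix already in reduced row echelon form:

```python
def general_solution(rref, eps=1e-9):
    """For an augmented matrix in RREF, return (particular_solution,
    free_variable_columns), or None if the system is inconsistent."""
    n = len(rref[0]) - 1           # number of unknowns
    pivots = {}                    # pivot column -> row index
    for i, row in enumerate(rref):
        for j in range(n):
            if abs(row[j]) > eps:
                pivots[j] = i
                break
        else:
            if abs(row[n]) > eps:  # row reads 0 = nonzero: no solution
                return None
    free = [j for j in range(n) if j not in pivots]
    particular = [rref[pivots[j]][n] if j in pivots else 0.0 for j in range(n)]
    return particular, free
```

For the system x1 + 2·x3 = 5, x2 − x3 = 3, the pivot variables are x1 and x2, x3 is free, and one particular solution is (5, 3, 0).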

Relevance:

20.00%

Publisher:

Abstract:

This work analyzes non-linear control solutions based on Neural Networks and presents their application to a practical case, from the training algorithm through to the physical implementation in hardware. An initial survey of the state of the art in neural-network control leads to the proposal of iterative solutions for defining the network architecture and for studying Regularization and Early Stopping techniques by means of Genetic Algorithms, as well as to the proposal of a validation procedure for the resulting models. Throughout the thesis, four model-based control loops are used, one of which is an original contribution, and an on-line identification process is implemented, based on the Levenberg-Marquardt training algorithm and the Early Stopping technique, allowing a system to be controlled without prior knowledge of its characteristics. The work concludes with a survey of the commercial hardware available for implementing Neural Networks and with the development of a hardware solution using an FPGA. The practical testing of the proposed solutions is carried out with real data from a small-scale electric furnace.
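Early stopping has a simple generic skeleton, independent of the Levenberg-Marquardt update itself: keep training while the validation loss improves and stop after a fixed number of epochs without improvement. The callbacks and parameter names below are hypothetical, a sketch rather than the thesis's implementation.

```python
def train_with_early_stopping(step, val_loss, max_epochs=500, patience=10):
    """Run `step()` once per epoch; stop when `val_loss()` has not
    improved for `patience` consecutive epochs."""
    best, best_epoch = float("inf"), 0
    for epoch in range(1, max_epochs + 1):
        step()                       # one training update (e.g. one LM step)
        loss = val_loss()
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break                    # validation loss stalled: stop early
    return best_epoch, best
```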

Relevance:

20.00%

Publisher:

Abstract:

The main objective of this dissertation is to compare the methods proposed by Eurocode 2, through a parametric analysis of second-order effects in reinforced concrete columns. The methods studied were the simplified method based on the Nominal Curvature and the General Method. By comparing these two methods, it was verified whether the design produced by the first method is correct and on the safe side. To this end, ninety-six frames were analyzed, of which seventy-two are simple frames and twenty-four are wall frames. All frames have a simple structure, consisting of two vertical elements, column or wall, and one horizontal element, a beam. The ninety-six frames are divided into two groups: one in which the column-beam connection is pinned and another in which the connection is rigid. Within each group, eight sets of frames are defined, differing in the percentage of load carried by each vertical element; these characteristics are the same for both groups. Each set consists of six frames in which the slenderness is varied. The same material properties and actions were kept in all the models analyzed. A first design was carried out with the nominal-curvature method, yielding the internal forces and reinforcement that were then used in the modeling for the general method. The general method consists of a non-linear analysis and was carried out in a non-linear finite element analysis program, Atena. The modeling proceeded as follows: first, the full vertical load on the structure was applied incrementally and, once it reached the design load, the horizontal load was also applied incrementally until the collapse of the structure.
According to the results obtained, it can be concluded that the design produced by the simplified method is on the safe side, and in some cases with slender columns it is overly conservative, with over 200% more horizontal load capacity obtained.

Relevance:

20.00%

Publisher:

Abstract:

When a company wants to invest in a project, it must obtain the resources needed to make the investment. The alternatives are using the firm's internal resources or obtaining external resources through debt contracts and the issuance of shares. Decisions involving the mix of internal resources, debt and shares in the total resources used to finance a company's activities relate to the choice of its capital structure. Although there are finance studies on the determinants of firms' debt, the issue of capital structure is still controversial. This work sought to identify the predominant factors that determine the capital structure of Brazilian publicly traded, non-financial firms. A quantitative approach was used, applying the statistical technique of multiple linear regression to panel data. Estimates were made by ordinary least squares with a fixed-effects model. About 116 companies were selected to participate in this research, over the period from 2003 to 2007. The variables and hypotheses tested in this study were built on theories of capital structure and on empirical research. Results indicate that variables such as risk, size, asset composition and firm growth influence indebtedness. The profitability variable was not relevant to the composition of indebtedness of the companies analyzed. However, analyzing only long-term debt, the conclusion is that the relevant variables are firm size and, especially, asset composition (tangibility). In this sense, the smaller the firm or the greater the share of fixed assets in total assets, the greater its propensity to long-term debt. Furthermore, this research could not identify a predominant theory to explain the capital structure of Brazilian firms.
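The estimation strategy described (OLS on panel data with a fixed-effects model) amounts to the "within" transformation: demean each variable by firm, then run OLS on the demeaned data so that firm-specific intercepts drop out. A single-regressor sketch with hypothetical variable names, not the study's actual estimation code:

```python
from collections import defaultdict

def within_estimator(y, x, firm):
    """Fixed-effects (within) slope for one regressor: demean y and x
    by firm, then compute the OLS slope on the demeaned data."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])   # firm -> [sum_y, sum_x, count]
    for yi, xi, f in zip(y, x, firm):
        s = sums[f]
        s[0] += yi; s[1] += xi; s[2] += 1
    ydm, xdm = [], []
    for yi, xi, f in zip(y, x, firm):
        sy, sx, n = sums[f]
        ydm.append(yi - sy / n)
        xdm.append(xi - sx / n)
    return sum(u * v for u, v in zip(xdm, ydm)) / sum(v * v for v in xdm)
```

Because the firm means are subtracted, a common slope is recovered even when firms have very different average debt levels.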

Relevance:

20.00%

Publisher:

Abstract:

Forecasting is the basis for strategic, tactical and operational business decisions. In financial economics, several techniques have been used over the past decades to predict the behavior of assets. There are thus several methods to assist in the task of time series forecasting; however, conventional modeling techniques, such as statistical models and those based on theoretical mathematical models, have produced unsatisfactory predictions, increasing the number of studies of more advanced prediction methods. Among these, Artificial Neural Networks (ANN) are a relatively new and promising method for business prediction, a technique that has attracted much interest in the financial environment and has been used successfully in a wide variety of financial modeling applications, in many cases proving superior to statistical ARIMA-GARCH models. In this context, this study examined whether ANNs are a more appropriate method for predicting the behavior of capital market indices than traditional methods of time series analysis. For this purpose we developed a quantitative study, based on financial and economic indices, and built two supervised-learning feedforward ANN models whose structures consisted of 20 inputs, 90 neurons in one hidden layer and one output (the Ibovespa). These models used backpropagation, a tangent sigmoid activation function in the hidden layer and a linear output function.
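The architecture described (inputs feeding a tanh hidden layer and a single linear output neuron) corresponds to a forward pass like the following sketch; the weights here are placeholders, not the trained model:

```python
import math

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a feedforward net with one tanh hidden layer
    and a single linear output neuron."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out
```

In the study's setup, `x` would hold the 20 inputs, `w_hidden` would have 90 rows, and the scalar output would be the Ibovespa forecast.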
To analyze how well the Artificial Neural Network method predicts the Ibovespa, we chose to compare its results with a GARCH(1,1) time series predictive model. Once both methods (ANN and GARCH) were applied, we analyzed the results by comparing the forecasts with the historical data and by studying the forecast errors through the MSE, RMSE, MAE, the standard deviation, Theil's U and forecast encompassing tests. It was found that the models developed by means of ANNs had lower MSE, RMSE and MAE than the GARCH(1,1) model, and the Theil's U test indicated that the three models have smaller errors than a naïve forecast. Although the ANN based on returns has lower values of the precision indicators than the ANN based on prices, the forecast encompassing test rejected the hypothesis that one model is better than the other, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models provide a more appropriate Ibovespa forecast than traditional time series models, represented by the GARCH model.
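The precision indicators used in the comparison are straightforward to compute. Definitions of Theil's U vary, so the version below (model RMSE relative to a naive no-change forecast, with values below 1 beating the benchmark) is one common convention, not necessarily the exact statistic used in the study.

```python
import math

def forecast_errors(actual, pred):
    """MSE, RMSE and MAE of a forecast series."""
    errors = [a - p for a, p in zip(actual, pred)]
    mse = sum(e * e for e in errors) / len(errors)
    return mse, math.sqrt(mse), sum(abs(e) for e in errors) / len(errors)

def theil_u(actual, pred):
    """Theil's U against a naive no-change forecast (U < 1 means the
    model beats the benchmark)."""
    model = sum((a - p) ** 2 for a, p in zip(actual[1:], pred[1:]))
    naive = sum((a - b) ** 2 for a, b in zip(actual[1:], actual[:-1]))
    return math.sqrt(model / naive)
```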

Relevance:

20.00%

Publisher:

Abstract:

The objective is to analyze the relationship between risk and the number of stocks in a portfolio for an individual investor when stocks are chosen by the "naive strategy". For this, we carried out an experiment in which individuals select stocks to reproduce this relationship. 126 participants were informed that the risk of the first choice would be the average of the standard deviations of all single-asset portfolios, and that the same procedure would be used for portfolios composed of two, three, and so on, up to 30 stocks. They selected the assets they wanted in their portfolios without the support of financial analysis. For comparison, we also ran a hypothetical simulation of 126 investors who selected shares from the same universe through a random number generator, so that each real participant is matched with a hypothetical investor facing the same opportunity. Patterns were observed in the portfolios of individual participants, characterizing the curves for the components of the samples. Because these groupings are somewhat arbitrary, a more objective measure of behavior was used: a simple linear regression for each participant, predicting portfolio variance as a function of the number of assets. In addition, we ran a pooled cross-section regression on all observations. The expected pattern holds on average but not for most individuals, many of whom effectively "de-diversify" when adding seemingly random stocks. Furthermore, the results are slightly worse when a random number generator is used. This finding challenges the belief that only a small number of stocks is necessary for diversification and shows that it applies only to a large sample. The implications are important, since many individual investors hold few stocks in their portfolios.
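The benchmark behavior of the hypothetical random investors can be reproduced with a short simulation: draw equally weighted k-stock portfolios at random and average their standard deviations, tracing the classic risk-versus-number-of-stocks curve. The data and names below are synthetic placeholders, not the experiment's universe of stocks.

```python
import random
import statistics

def avg_portfolio_sd(returns_by_stock, k, trials=200, rng=None):
    """Average standard deviation of equally weighted k-stock portfolios
    chosen at random (the 'naive strategy')."""
    rng = rng or random.Random(0)
    names = list(returns_by_stock)
    periods = len(returns_by_stock[names[0]])
    sds = []
    for _ in range(trials):
        picks = rng.sample(names, k)
        # Equally weighted portfolio return in each period
        portfolio = [sum(returns_by_stock[s][t] for s in picks) / k
                     for t in range(periods)]
        sds.append(statistics.pstdev(portfolio))
    return sum(sds) / len(sds)
```

With roughly uncorrelated stocks the average standard deviation falls as k grows, which is the relationship the participants were asked to reproduce.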

Relevance:

20.00%

Publisher:

Abstract:

One of the main activities in petroleum engineering is to estimate the oil production of existing oil reserves. The calculation of these reserves is crucial to determine the economic feasibility of their exploitation. Currently, the petroleum industry faces problems in analyzing production due to the exponentially increasing amount of data provided by the production facilities. Conventional reservoir modeling techniques, such as numerical reservoir simulation and visualization, are well developed and available. This work proposes intelligent methods, such as artificial neural networks, to predict oil production and compares the results with those obtained by numerical simulation, a method widely used in practice for forecasting oil production behavior. Artificial neural networks are used because of their learning, adaptation and interpolation capabilities.