989 results for stochastic load factor


Relevance:

100.00%

Publisher:

Abstract:

We propose new spanning tests that assess whether the initial and additional assets share the economically meaningful cost and mean representing portfolios. We prove their asymptotic equivalence to existing tests under local alternatives. We also show that, unlike two-step or iterated procedures, single-step methods such as continuously updated GMM yield numerically identical overidentifying restrictions tests, so there is arguably a single spanning test. To prove these results, we extend optimal GMM inference to deal with singularities in the long-run second moment matrix of the influence functions. Finally, we test for spanning using size and book-to-market sorted US stock portfolios.
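For reference, the textbook regression formulation of mean-variance spanning (Huberman and Kandel, 1987), which tests of this kind build on, is the following; this is the standard parametrization, not necessarily the exact one used in the paper:

\[
R_{2t} = \alpha + \beta R_{1t} + \varepsilon_t, \qquad H_0:\; \alpha = 0 \ \text{ and } \ \beta\,\iota_{K_1} = \iota_{K_2},
\]

where R_{1t} collects the K_1 initial assets, R_{2t} the K_2 additional assets, and \iota denotes a vector of ones; under H_0 the additional assets add nothing to the mean-variance frontier spanned by the initial assets.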

Relevance:

100.00%

Publisher:

Abstract:

Using the Pricing Equation in a panel-data framework, we construct a novel consistent estimator of the stochastic discount factor (SDF) which relies on the fact that its logarithm is the serial-correlation "common feature" in every asset return of the economy. Our estimator is a simple function of asset returns, does not depend on any parametric function representing preferences, is suitable for testing different preference specifications or investigating intertemporal substitution puzzles, and can serve as a basis for constructing an estimator of the risk-free rate. For post-war data, our estimator is close to unity most of the time, yielding an average annual real discount rate of 2.46%. In formal testing, we cannot reject standard preference specifications used in the literature; estimates of the relative risk-aversion coefficient are between 1 and 2, and statistically equal to unity. Using our SDF estimator, we find little sign of the equity-premium puzzle for the U.S.
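As background, the Pricing Equation referred to above is the standard no-arbitrage restriction; a minimal statement of it (the paper's specific estimator built on it is not reproduced here) is

\[
\mathbb{E}_t\!\left[ M_{t+1}\, R_{i,t+1} \right] = 1, \qquad i = 1, \dots, N,
\]

where M_{t+1} is the stochastic discount factor and R_{i,t+1} the gross return on asset i. Because the same M_{t+1} prices every asset in the economy, its logarithm enters every asset's pricing condition identically, which is the sense in which it can be recovered as a "common feature" of a large panel of returns.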

Relevance:

100.00%

Publisher:

Abstract:

Using the Pricing Equation, in a panel-data framework, we construct a novel consistent estimator of the stochastic discount factor (SDF) mimicking portfolio which relies on the fact that its logarithm is the "common feature" in every asset return of the economy. Our estimator is a simple function of asset returns and does not depend on any parametric function representing preferences, making it suitable for testing different preference specifications or investigating intertemporal substitution puzzles.

Relevance:

100.00%

Publisher:

Abstract:

Using the Pricing Equation in a panel-data framework, we construct a novel consistent estimator of the stochastic discount factor (SDF) which relies on the fact that its logarithm is the "common feature" in every asset return of the economy. Our estimator is a simple function of asset returns and does not depend on any parametric function representing preferences. The techniques discussed in this paper are applied to two relevant issues in macroeconomics and finance: the first asks what type of parametric preference representation could be validated by asset-return data, and the second asks whether our SDF estimator can price returns in an out-of-sample forecasting exercise. In formal testing, we cannot reject standard preference specifications used in the macro/finance literature. Estimates of the relative risk-aversion coefficient are between 1 and 2, and statistically equal to unity. We also show that our SDF proxy prices the returns of higher-capitalization stocks reasonably well, whereas it has some difficulty pricing stocks with a lower level of capitalization.

Relevance:

100.00%

Publisher:

Abstract:

We provide a review of the stochastic discount factor bounds usually applied to diagnose asset pricing models. In particular, we focus on the bounds used to analyze the disaster model of Barro (2006), since the stochastic discount factor bounds applied to study the performance of disaster models usually follow that approach. We first present the entropy bounds that provide a diagnosis of the disaster model, namely the methods of Almeida and Garcia (2012, 2016) and Ghosh et al. (2016). We then discuss how their results for the disaster model relate to each other, and we also present the findings of other, similar methodologies that nevertheless provide different evidence about the performance of the framework developed by Barro (2006).
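The prototype of such SDF bounds is the Hansen and Jagannathan (1991) volatility bound, which the entropy bounds discussed above generalize to higher moments; for orientation it reads

\[
\frac{\sigma(M)}{\mathbb{E}[M]} \;\ge\; \frac{\bigl|\mathbb{E}[R^{e}]\bigr|}{\sigma(R^{e})},
\]

for any excess return R^{e} satisfying \mathbb{E}[M R^{e}] = 0; a candidate SDF whose implied volatility falls below this bound for observed excess returns is inconsistent with the data, which is how disaster models such as Barro (2006) are diagnosed.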

Relevance:

90.00%

Publisher:

Abstract:

This paper empirically analyzes the volatility of consumption-based stochastic discount factors as a measure of implicit economic fears by studying its relationship with future economic and stock market cycles. Time-varying economic fears seem to be well captured by the volatility of stochastic discount factors. In particular, the volatility of the recursive-utility-based stochastic discount factor with contemporaneous consumption growth explains between 9 and 34 percent of future changes in industrial production at short and long horizons, respectively. It also explains ex-ante uncertainty and risk aversion. However, future stock market cycles are better explained by a similar stochastic discount factor with long-run consumption growth. This specification of the stochastic discount factor presents higher volatility and lower pricing errors than the specification with contemporaneous consumption growth.
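For concreteness, the simplest consumption-based SDF is the power-utility case, which the recursive-utility specifications studied above nest as a special case; the object whose volatility is tracked is the (conditional) standard deviation of M_{t+1}:

\[
M_{t+1} = \delta \left( \frac{C_{t+1}}{C_{t}} \right)^{-\gamma},
\]

where \delta is the subjective discount factor, \gamma the coefficient of relative risk aversion, and C_{t+1}/C_{t} consumption growth; under Epstein-Zin recursive utility the SDF additionally involves the return on the wealth portfolio, which is what allows contemporaneous-growth and long-run-growth versions to differ.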

Relevance:

90.00%

Publisher:

Abstract:

In this report, sixteen secondary and primary bridge standards for two types of bridges are rated for the AASHTO HS20-44 vehicle configuration using the Load Factor methodology. The ratings apply only to bridges that: (1) are built according to the applicable bridge standard plans, (2) have no structural deterioration or damage, and (3) have no added wearing surface in excess of the one-half-inch integral wearing surface.
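For context, Load Factor ratings of this kind are customarily expressed through the rating-factor equation below; the coefficients shown are the usual AASHTO values and should be confirmed against the governing rating manual rather than taken from this summary:

\[
RF = \frac{C - A_{1} D}{A_{2}\, L\, (1 + I)},
\]

where C is the member capacity, D the dead-load effect, L the live-load effect produced by the HS20-44 configuration, I the live-load impact factor, A_{1} = 1.3, and A_{2} = 2.17 for inventory ratings or 1.3 for operating ratings.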

Relevance:

90.00%

Publisher:

Abstract:

Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structures, which are both useful to reduce the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only factor risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard factor analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
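The link between an SDF that is linear in the factors and a beta pricing relation, which the paper uses as its unifying principle, takes the following unconditional textbook form (the paper works with conditional versions in which the coefficients depend on state variables):

\[
\mathbb{E}[M R_{i}] = 1, \quad M = a + b^{\top} F \;\;\Longleftrightarrow\;\; \mathbb{E}[R_{i}] - R_{f} = \beta_{i}^{\top}\lambda,
\]

with \beta_{i} = \mathrm{Var}(F)^{-1}\mathrm{Cov}(F, R_{i}) and \lambda = -R_{f}\,\mathrm{Var}(F)\, b, so the factors act as a basis for the SDF and only exposure to them is priced.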

Relevance:

90.00%

Publisher:

Abstract:

Short-term load forecasting is one of the key inputs for optimizing the management of a power system. Almost 60-65% of the revenue expenditure of a distribution company goes toward power purchase, and the cost of power depends on its source. Hence any optimization strategy involves optimizing the scheduling of power from the various sources. As the scheduling involves many technical and commercial considerations and constraints, the efficiency of scheduling depends on the accuracy of the load forecast. Load forecasting is a much-visited topic in the research world, and a number of papers using different techniques have already been presented. The accuracy of a forecast for the purpose of merit-order dispatch decisions depends on the extent of the permissible variation in generation limits. For a system with a low load factor, the peak and the off-peak trough are prominent, and the forecast should identify these points more accurately rather than merely minimizing the error in the energy content. In this paper an attempt is made to apply an Artificial Neural Network (ANN) with a supervised-learning-based approach to short-term load forecasting for a power system with a comparatively low load factor. Such power systems are usual in tropical areas with a concentrated rainy season for a considerable part of the year.
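A minimal sketch of supervised ANN-based short-term load forecasting follows, assuming scikit-learn is available; the lag/hour features, network size and synthetic load curve are illustrative assumptions, not the setup used in the paper.

# Minimal sketch of supervised ANN-based short-term load forecasting.
# The features, architecture and synthetic data are illustrative only,
# not those used in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic hourly load with a pronounced evening peak (low load factor).
hours = np.arange(24 * 365)
load = 400 + 250 * np.exp(-((hours % 24 - 19) ** 2) / 8.0)
load = load + 30 * rng.standard_normal(hours.size)

# Supervised samples: predict next-hour load from the previous 24 hours
# plus the hour of day of the target.
lags = 24
X = np.array([load[i:i + lags] for i in range(load.size - lags)])
X = np.hstack([X, (hours[lags:] % 24).reshape(-1, 1)])
y = load[lags:]

split = int(0.8 * len(y))
scaler = StandardScaler().fit(X[:split])
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(scaler.transform(X[:split]), y[:split])

pred = model.predict(scaler.transform(X[split:]))
mape = np.mean(np.abs(pred - y[split:]) / y[split:]) * 100
peak_err = abs(pred.max() - y[split:].max()) / y[split:].max() * 100
print(f"test MAPE: {mape:.2f}%  |  peak error: {peak_err:.2f}%")

Reporting the peak error separately reflects the point made above: for a low-load-factor system, getting the peak and the off-peak trough right matters more than minimizing the average energy error.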

Relevance:

90.00%

Publisher:

Abstract:

The measures to promote wind energy in Germany have provided important impulses for further technological development and laid the foundations for the enormous growth in installed capacity. The installed wind power capacity has now reached a considerable magnitude, and further growth of similar dimensions is to be expected in the coming years. In some grid areas, the electric power generated from wind already covers the grid load during low-load periods. This shows that wind energy has become a factor in electric power supply that can no longer be neglected. Within power plant scheduling, the magnitude and profile of the following day's wind power have become important and at the same time difficult-to-determine variables. Strong fluctuations and incorrect forecasts of wind power feed-in create additional demand for control and balancing power on the part of the system operator. The forecasting model developed in this work provides the expected wind power output at 16 representative wind farms, or groups of wind farms, for up to 48 hours in advance. Based on forecast weather data from the German Weather Service (DWD), the output of the individual wind farms is computed using artificial neural networks (ANN). Compared with physical methods, this approach has the advantage that the complex relationship between weather conditions and wind farm output does not have to be laboriously analyzed and described in mathematical detail; instead, it is learned by the ANN from historical data. The forecasting model consists of two modules. The first produces a day-ahead forecast based on the meteorological predictions of the DWD. The second module incorporates the online-measured power data of the representative wind farms in order to improve the original day-ahead forecast and to compute a very accurate short-term forecast for the next three to six hours. The results of the forecasting modules for the representative sites are then used, via a transformation model known as the Online-Modell, to compute the total feed-in for a larger area. The particular strengths of the forecasting method are its accuracy, its short computation times and its low operating costs, since the use of the already implemented Online-Modell means that only a small number of forecast and measurement sites is required. The forecasting model presented here was originally developed and optimized for E.ON-Netz GmbH and has been in operation there since July 2001. It can, however, easily be adapted to other regions; all that is needed are the measured power data of selected representative wind farms and the corresponding weather forecasts with which to train the ANN.
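To make the two-module idea concrete, here is a deliberately simplified sketch of how online power measurements can be blended with a day-ahead forecast to obtain a short-term forecast for the next few hours; the decaying-weight correction and all numbers are generic illustrations, not the actual second module of the model described above.

# Simplified illustration of correcting a day-ahead wind power forecast
# with the latest online measurement for the next few hours. The
# exponentially decaying correction weight is a generic choice, not the
# actual algorithm of the second module described above.
import numpy as np

def short_term_correction(day_ahead, measured_now, forecast_now, horizons_h, tau_h=3.0):
    """Blend the current forecast error into the next hours' forecasts.

    day_ahead    : day-ahead forecasts for the coming hours (MW)
    measured_now : latest online power measurement (MW)
    forecast_now : day-ahead forecast for the current hour (MW)
    horizons_h   : lead times in hours for each entry of day_ahead
    tau_h        : decay constant of the correction weight (hours)
    """
    error_now = measured_now - forecast_now
    weights = np.exp(-np.asarray(horizons_h) / tau_h)  # trust the error less far ahead
    return np.asarray(day_ahead) + weights * error_now

# Example: the farm currently produces 38 MW although 30 MW was forecast.
day_ahead = np.array([31.0, 33.0, 36.0, 35.0, 32.0, 30.0])  # next 1..6 hours
corrected = short_term_correction(day_ahead, measured_now=38.0,
                                  forecast_now=30.0,
                                  horizons_h=[1, 2, 3, 4, 5, 6])
print(np.round(corrected, 1))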

Relevance:

90.00%

Publisher:

Abstract:

In this paper we construct common-factor portfolios using a novel linear transformation of standard factor models extracted from large data sets of asset returns. The simple transformation proposed here keeps the basic properties of the usual factor transformations while attaching some new and interesting properties to them. Some theoretical advantages are shown to be present, and their practical importance is confirmed in two applications: the performance of common-factor portfolios is shown to be superior to that of the asset returns and factors commonly employed in the finance literature.

Relevance:

90.00%

Publisher:

Abstract:

With the considerable increase of losses in the electric utilities of developing countries, such as Brazil, there has been an investigation of loss calculation methodologies, considering both technical losses (inherent to the system) and non-technical losses (usually associated with electricity theft). In general, every distribution network knows its load factor, obtained by measuring parameters directly from the network. The loss factor, however, which is important for calculating the cost of energy losses, can only be obtained in a laborious way. Consequently, several formulas have been developed for obtaining the loss factor. Generally, the expression that relates the two factors through a coefficient k is used. Recent reviews report a range for the coefficient k of 0.04-0.30. In this work, an analysis with real-life load curves is presented, determining new values of the coefficient k for a Brazilian electric utility. © 2006 IEEE.
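The expression relating the two factors through the coefficient k is, in the form most commonly cited in the distribution-loss literature (the paper may use a variant),

\[
F_{\text{loss}} = k\, F_{\text{load}} + (1 - k)\, F_{\text{load}}^{2},
\]

where the load factor F_{\text{load}} is the average demand divided by the peak demand and the loss factor F_{\text{loss}} is the average power loss divided by the peak power loss; the coefficient k interpolates between the theoretical limits F_{\text{load}}^{2} \le F_{\text{loss}} \le F_{\text{load}}.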

Relevance:

90.00%

Publisher:

Abstract:

Given that the total amount of losses in a distribution system is known, and with a reliable methodology for calculating technical losses, the non-technical losses can be obtained by subtraction. A usual method for calculating technical losses in electric utilities uses two important factors: the load factor and the loss factor. The load factor is usually obtained from energy and demand measurements, whereas computing the loss factor requires knowledge of the demand and energy losses, which in general are not amenable to direct measurement. In this work, a statistical analysis of this relationship is presented, using the load curves of a sample of consumers of a specific company. These curves are summarized into different bands of the coefficient k. It is then possible to determine where each group of consumers has its main concentration of points. ©2008 IEEE.
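A minimal sketch of how the coefficient k can be extracted from a single load curve follows, under the usual assumption that losses vary with the square of the demand (I^2 R); the load curve is synthetic, not the utility data analyzed in the paper.

# Minimal sketch: compute load factor, loss factor and the implied
# coefficient k for one load curve, assuming losses proportional to the
# square of the demand. The load curve below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24)
demand = 300 + 180 * np.exp(-((hours - 19) ** 2) / 6.0)  # kW, evening peak
demand = demand + 10 * rng.standard_normal(24)

load_factor = demand.mean() / demand.max()

losses = demand ** 2                                      # proportional to P^2
loss_factor = losses.mean() / losses.max()

# Invert  loss_factor = k*LF + (1 - k)*LF^2  for k.
k = (loss_factor - load_factor**2) / (load_factor - load_factor**2)
print(f"load factor = {load_factor:.3f}, loss factor = {loss_factor:.3f}, k = {k:.3f}")

Repeating this calculation over the load curves of many consumers and grouping the resulting k values into bands is what allows the concentration of each consumer group to be located, as described above.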