980 results for stochastic load factor


Relevance:

80.00%

Publisher:

Abstract:

The objective of this work is to determine the membership functions for the construction of a fuzzy controller that evaluates the energy situation of a company with respect to its load and power factors. The energy assessment of a company is performed by technicians and experts based on the load and power factor indices and on an analysis of the machines used in the production processes. This assessment is conducted periodically to detect whether the procedures followed by employees in their use of electric energy are correct. With a fuzzy controller, this assessment can be performed by machines. The construction of a fuzzy controller begins with the definition of the input and output variables and their associated membership functions. It is also necessary to define an inference method and an output processor (defuzzifier). Finally, technicians and experts are needed to build a rule base, consisting of the answers these professionals give as a function of the characteristics of the input variables. The controller proposed in this paper has the load and power factors as input variables and the company's energy situation as output. Their membership functions represent fuzzy sets labeled with linguistic qualities such as "VERY BAD" and "GOOD". With the Mamdani inference method and the center-of-area output processor chosen, the structure of the fuzzy controller is established; technicians and experts in the energy field need only determine a set of rules appropriate for the chosen company. Thus, the software interpretation of the load and power factors meets the need for a single index that indicates, on an overall basis, how rationally and efficiently the energy is being used.
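As an illustration of the kind of controller described above, the sketch below builds a minimal Mamdani system with the two inputs (load factor and power factor), one output (company situation), and centroid (center-of-area) defuzzification, using the scikit-fuzzy package. The variable ranges, membership functions and the two rules are assumptions for demonstration only, not the rule base elicited from the experts.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Input variables: load factor and power factor (both in [0, 1]).
load_factor = ctrl.Antecedent(np.arange(0, 1.01, 0.01), 'load_factor')
power_factor = ctrl.Antecedent(np.arange(0, 1.01, 0.01), 'power_factor')
# Output variable: company energy situation on an illustrative 0-10 scale.
situation = ctrl.Consequent(np.arange(0, 10.01, 0.1), 'situation')

# Hypothetical triangular membership functions (the real ones come from experts).
load_factor['low'] = fuzz.trimf(load_factor.universe, [0.0, 0.0, 0.6])
load_factor['high'] = fuzz.trimf(load_factor.universe, [0.4, 1.0, 1.0])
power_factor['low'] = fuzz.trimf(power_factor.universe, [0.0, 0.0, 0.92])
power_factor['high'] = fuzz.trimf(power_factor.universe, [0.85, 1.0, 1.0])
situation['very_bad'] = fuzz.trimf(situation.universe, [0, 0, 5])
situation['good'] = fuzz.trimf(situation.universe, [5, 10, 10])

# Two illustrative rules standing in for the expert rule base.
rules = [
    ctrl.Rule(load_factor['low'] | power_factor['low'], situation['very_bad']),
    ctrl.Rule(load_factor['high'] & power_factor['high'], situation['good']),
]

# Mamdani inference with centroid (center-of-area) defuzzification is the
# default behaviour of skfuzzy's control system.
system = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
system.input['load_factor'] = 0.55
system.input['power_factor'] = 0.93
system.compute()
print(system.output['situation'])  # crisp index describing the company situation
```

In a real application the expert rule base would replace the two placeholder rules, while the inference and defuzzification machinery stays the same.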

Relevance:

80.00%

Publisher:

Abstract:

This paper presents the application of fuzzy theory to support the decision to implement an energy efficiency program in sawmills processing Pinus taeda and Pinus elliottii. The use of a system based on fuzzy theory to analyze consumption and the specific factors involved is justified by the diversity of the indices and factors. With fuzzy theory, we can build a reliable system for verifying actual energy efficiency. The indices and factors characteristic of the industrial activity were measured and used as the basis for the fuzzy system. We developed a management system and a technological system. The management system involves management practices in energy efficiency, the maintenance of plant and equipment, and the presence of qualified staff. The technological system involves the power factor, the load factor, the demand factor and the specific consumption. The first provides as its response the possibility of increased energy efficiency, and the second the level of energy efficiency in the industry studied. With this tool, programs for energy conservation and energy efficiency can be developed for the timber industry, with wide application in an area whose production processes are highly diverse. The same systems can be used in other industrial activities, provided that the indices and characteristic features of the sectors involved are used.
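For reference, the technological indices named above are simple ratios of measured electrical quantities. The sketch below computes them from hypothetical metered data; the variable names and sample figures are illustrative assumptions, not measurements from the sawmills studied.

```python
# Illustrative computation of the technological indices fed to the fuzzy system.
# All input figures below are made-up sample values, not measured data.

real_power_kw = 180.0          # average active power drawn by the sawmill
apparent_power_kva = 225.0     # average apparent power
avg_demand_kw = 180.0          # average demand over the billing period
peak_demand_kw = 310.0         # maximum (peak) demand in the period
installed_capacity_kw = 500.0  # total installed load
energy_kwh = 129_600.0         # energy consumed in the period (e.g. 30 days)
production_m3 = 2_400.0        # lumber produced in the same period

power_factor = real_power_kw / apparent_power_kva        # dimensionless
load_factor = avg_demand_kw / peak_demand_kw             # dimensionless
demand_factor = peak_demand_kw / installed_capacity_kw   # dimensionless
specific_consumption = energy_kwh / production_m3        # kWh per m^3 of lumber

print(f"power factor         = {power_factor:.2f}")
print(f"load factor          = {load_factor:.2f}")
print(f"demand factor        = {demand_factor:.2f}")
print(f"specific consumption = {specific_consumption:.1f} kWh/m3")
```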

Relevance:

80.00%

Publisher:

Abstract:

Some factors, including the deregulation of the airline industry in the U.S. and its liberalization in Europe, are essential to understanding why the number of partnership agreements between airlines has increased over the last 25 years. These events, coupled with the continuous economic downturn and the 9/11 catastrophe, seem to form the perfect framework for the tendency to develop airline strategic alliances. However, this trend was not followed during the period 2005-2008. The purpose of this paper is to analyze whether the major airlines that became members of the current three big alliances benefited compared with the major airlines that decided not to join, or were not admitted, during 2005-2008. The methodology of this report includes an analysis of several airlines' performance figures: revenue passenger kilometers (RPK), passenger load factor (PLF) and market share (MS). The figures are compared between the aligned airlines and others with similar business models. The value of this paper is to reveal whether being aligned provides advantages to major airlines in a bearish airline market in a globalized environment.
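For reference, the passenger load factor relates traffic carried to capacity offered; using the standard airline-industry definition (the paper itself does not restate it),

\[ \mathrm{PLF} = \frac{\mathrm{RPK}}{\mathrm{ASK}} \]

where RPK is revenue passenger kilometres and ASK is available seat kilometres, so a higher PLF means a larger share of the offered seat-kilometres was actually sold.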

Relevance:

80.00%

Publisher:

Abstract:

The safety assessment of historic masonry structures is an open problem. The material is heterogeneous and anisotropic, the pre-existing state of stress is hard to know and the boundary conditions are uncertain. In the early 1950s it was proven that limit analysis is applicable to this kind of structure, and it has been considered a suitable tool ever since. In cases where no sliding occurs, the application of the standard limit analysis theorems constitutes an excellent tool because of its simplicity and robustness. It is not necessary to know the actual state of stress: it is enough to find any equilibrium solution that satisfies the limit conditions of the material, with the certainty that its load will be equal to or less than the actual load at the onset of collapse. Furthermore, this load at the onset of collapse is unique (uniqueness theorem), and it can be obtained as the optimum of either of a pair of dual convex mathematical programs. However, when the mechanisms at the onset of collapse may involve sliding, any solution must satisfy both the static and the kinematic constraints, as well as a special kind of disjunctive constraints that link the previous ones and can be formulated as complementarity constraints. In the latter case the existence of a unique solution is not guaranteed, so other methods are needed to treat the uncertainty associated with its multiplicity. In recent years, research has focused on finding an absolute minimum below which collapse is impossible. This method is easy to state from a mathematical point of view but computationally intractable, because the complementarity constraints, y·z = 0 with y ≥ 0 and z ≥ 0, are neither convex nor smooth. The resulting decision problem is NP-complete (non-deterministic polynomial complete) and the corresponding global optimization problem is NP-hard. Even so, obtaining a solution (with no guarantee of success) is an affordable problem. This thesis proposes to solve that problem by Sequential Linear Programming, taking advantage of the special characteristics of the complementarity constraints, which written in bilinear form are y·z = 0, y ≥ 0, z ≥ 0, and of the fact that the complementarity error (in bilinear form) is an exact penalty function. But when it comes to finding the worst solution, the equivalent global optimization problem is intractable (NP-hard). Moreover, as long as no maximum or minimum principle has been demonstrated, it is doubtful that the effort spent on approximating this minimum is justified.
In Chapter 5, the frequency distribution of the load factor over all possible onset-of-collapse solutions is obtained for a simple example. For this purpose, solutions are sampled by the Monte Carlo method, using an exact polytope computation method as a check. The ultimate goal is to establish to what extent the search for the absolute minimum is justified, and to propose an alternative, probability-based approach to safety assessment. The frequency distributions of the load factors obtained for the case studied show that both the maximum and the minimum load factors are very infrequent, and the more so the more perfect and continuous the contact is. The results confirm the interest of developing new probabilistic methods.
In Chapter 6, such a method is proposed, based on obtaining multiple solutions from random starting points and qualifying the results through order statistics. The purpose is to determine, for each solution, the probability of onset of collapse. The method is applied (following the reduction of expectations proposed by Ordinal Optimization) to obtain a solution that lies within a given percentage of the worst ones. Finally, in Chapter 7, hybrid methods incorporating metaheuristics are proposed for the cases in which the search for the global minimum is justified.
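To make the ordinal-optimization argument of Chapter 6 concrete, the sketch below uses a standard order-statistics fact: if n starting points are sampled independently, the probability that at least one of the resulting load factors falls within the worst fraction g of all solutions is 1 - (1 - g)^n. The toy load-factor distribution and the helper names are assumptions for illustration; this is not the thesis' actual solver.

```python
import math
import random

def restarts_needed(worst_fraction: float, confidence: float) -> int:
    """Number of independent random restarts so that, with the given confidence,
    the best solution found lies in the worst `worst_fraction` of all
    onset-of-collapse solutions: 1 - (1 - g)^n >= p."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - worst_fraction))

def sample_load_factor() -> float:
    """Stand-in for one solver run from a random starting point; here it simply
    draws from a made-up distribution of load factors."""
    return random.triangular(1.0, 3.0, 2.2)

if __name__ == "__main__":
    g, p = 0.05, 0.95                 # target the worst 5% with 95% confidence
    n = restarts_needed(g, p)         # about 59 restarts
    print(f"restarts needed: {n}")

    candidates = [sample_load_factor() for _ in range(n)]
    print(f"estimated near-worst load factor: {min(candidates):.3f}")
```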

Relevance:

80.00%

Publisher:

Abstract:

The efficiency and energy rationality of public lighting are highly relevant to the electrical system, because they help reduce both the need for investment in the construction of new sources of electric power generation and energy waste. The objective of this research work is the development and application of the IDE (energy performance index), based on a fuzzy inference system and on indicators of efficiency and rationality in the use of electric energy. Fuzzy inference was chosen because of its ability to reproduce part of human reasoning and to establish relations among the diverse indicators involved. To build the fuzzy inference system, the efficiency and rationality indicators were defined as input variables; the inference method was based on rules produced by a public lighting expert; and the output is a real number that characterizes the IDE. The efficiency and rationality indicators are divided into two classes: global and specific. The global indicators are the power factor (FP), the load factor (FC) and the demand factor (FD). The specific indicators are the utilization factor (FU), the energy consumption per illuminated area (ICA), the energy intensity (IE) and the natural lighting intensity (IL). For the application of this work, the public lighting of the Cidade Universitária "Armando de Salles Oliveira" of the Universidade de São Paulo was selected and characterized. Thus, using the index developed in this work, the manager of the lighting system is able to evaluate the use of electric energy and, in this way, to elaborate and simulate strategies aimed at saving it.

Relevance:

80.00%

Publisher:

Abstract:

The main purpose of this paper is to propose a methodology for obtaining a hedge fund tail risk measure. Our measure builds on the methodologies proposed by Almeida and Garcia (2015) and Almeida, Ardison, Garcia, and Vicente (2016), which rely on solving dual minimization problems of Cressie-Read discrepancy functions in spaces of probability measures. Owing to the recently documented robustness of the Hellinger estimator (Kitamura et al., 2013), we adopt this specific discrepancy within the Cressie-Read family as the loss function. From this choice, we derive a minimum-Hellinger risk-neutral measure that correctly prices an observed panel of hedge fund returns. The estimated risk-neutral measure is used to construct our tail risk measure by pricing synthetic out-of-the-money put options on hedge fund returns for ten specific categories. We provide a detailed description of our methodology, extract the aggregate tail risk hedge fund factor for Brazilian funds and, as a by-product, a set of individual tail risk factors for each specific hedge fund category.
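A minimal numerical sketch of the pricing step, under simplifying assumptions: given a panel of gross hedge fund returns (states taken as sample dates with equal physical weights), it searches for the risk-neutral weights closest to the sample weights in Hellinger distance, subject to every fund being priced at an assumed gross risk-free rate, and then prices a synthetic out-of-the-money put on one fund. The data, the risk-free rate, the moneyness and the use of a generic numerical optimizer are all assumptions; the papers cited in the abstract work through a dual formulation rather than this direct primal.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, N = 250, 10                       # sample dates (states) and hedge fund categories
R = 1.0 + 0.0004 + 0.01 * rng.standard_normal((T, N))   # made-up gross returns panel
rf = 1.0 + 0.0003                    # assumed gross risk-free rate per period
p = np.full(T, 1.0 / T)              # physical (sample) probabilities

def hellinger(q):
    # Squared Hellinger distance between candidate weights q and the sample weights p.
    return np.sum((np.sqrt(q) - np.sqrt(p)) ** 2)

constraints = (
    {"type": "eq", "fun": lambda q: q.sum() - 1.0},   # probabilities sum to one
    {"type": "eq", "fun": lambda q: R.T @ q - rf},    # every fund priced at the risk-free rate
)
res = minimize(hellinger, p, bounds=[(1e-10, 1.0)] * T,
               constraints=constraints, method="SLSQP")
q = res.x                                             # minimum-Hellinger risk-neutral weights

# Synthetic out-of-the-money put on fund 0, struck 1% below the risk-free gross return.
strike = rf * 0.99
put_price = float(np.maximum(strike - R[:, 0], 0.0) @ q) / rf   # discounted expected payoff
print(f"tail-risk proxy (OTM put price on fund 0): {put_price:.6f}")
```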

Relevance:

80.00%

Publisher:

Abstract:

This paper analyzes the impact of load factor, facility and generator types on the productivity of Korean electric power plants. In order to capture important differences in the effect of load policy on power output, we use a semiparametric smooth coefficient (SPSC) model that allows us to model heterogeneous performance across power plants and over time by allowing the underlying technologies to be heterogeneous. The SPSC model accommodates both continuous and discrete covariates. Various specification tests are conducted to assess the performance of the SPSC model. Using a unique generator-level panel dataset spanning the period 1995-2006, we find that the impact of load factor, generator and facility types on power generation varies substantially in terms of magnitude and significance across different plant characteristics. The results have strong implications for generation policy in Korea, as outlined in this study.
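As a rough illustration of what a smooth-coefficient specification does, the sketch below estimates y_i = x_i' beta(z_i) + e_i by kernel-weighted least squares, so the coefficients on the covariates vary smoothly with an environment variable such as the load factor. The simulated data, the Gaussian kernel and the bandwidth are illustrative assumptions, not the estimator or dataset used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = rng.uniform(0.2, 0.9, n)                               # smoothing variable, e.g. load factor
X = np.column_stack([np.ones(n), rng.normal(size=n)])      # intercept + one covariate
beta_true = np.column_stack([1.0 + z, 2.0 * np.sin(np.pi * z)])   # smooth coefficient functions
y = np.sum(X * beta_true, axis=1) + 0.2 * rng.normal(size=n)

def spsc_fit(z0, h=0.05):
    """Local-constant (kernel-weighted least squares) estimate of beta(z0)."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)                 # Gaussian kernel weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

for z0 in (0.3, 0.5, 0.8):
    b = spsc_fit(z0)
    truth = np.array([1 + z0, 2 * np.sin(np.pi * z0)])
    print(f"z = {z0:.1f}: beta_hat = {b.round(2)}, beta_true = {truth.round(2)}")
```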

Relevance:

80.00%

Publisher:

Abstract:

Most research on stock prices is based on the present value model or the more general consumption-based model. When applied to real economic data, both are found unable to account for either the level of stock prices or their volatility. The three essays here attempt both to build a more realistic model and to check whether there is still room for bubbles in explaining fluctuations in stock prices. In the second chapter, several innovations are simultaneously incorporated into the traditional present value model in order to produce more accurate model-based fundamental prices. These innovations comprise replacing the narrower, more commonly used traditional dividends with broad dividends, a nonlinear artificial neural network (ANN) forecasting procedure for these broad dividends instead of the more common linear forecasting models, and a stochastic discount rate in place of a constant discount rate. Empirical results show that this model predicts fundamental prices better than alternative models using a linear forecasting process, narrow dividends, or a constant discount factor. Nonetheless, actual prices remain largely detached from fundamental prices, and the bubble-like deviations are found to coincide with business cycles. The third chapter examines the possible cointegration of stock prices with fundamentals and non-fundamentals. The output gap is introduced to form the non-fundamental part of stock prices. I use a trivariate vector autoregression (TVAR) model and a single-equation model to run cointegration tests among these three variables. Neither cointegration test shows strong evidence of explosive behavior in the DJIA and S&P 500 data. I then apply a sup augmented Dickey-Fuller test to check for the existence of periodically collapsing bubbles in stock prices; such bubbles are found in the S&P data during the late 1990s. Employing the econometric tests from the third chapter, I continue in the fourth chapter to examine whether bubbles exist in the stock prices of conventional economic sectors on the New York Stock Exchange. The 'old economy' as a whole is not found to have bubbles, but periodically collapsing bubbles are found in the Material and Telecommunication Services sectors and in the Real Estate industry group.
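A rough sketch of the sup augmented Dickey-Fuller (SADF) idea referred to above: run ADF regressions over forward-expanding windows and take the supremum of the statistics, treating large positive values as evidence of explosive (bubble-like) behavior. The simulated price series, the minimum window length and the use of statsmodels' adfuller are assumptions for illustration, and the critical values for the right-tailed test must come from simulation rather than the usual ADF tables.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
T = 300
# Made-up log-price series: a random walk with a mildly explosive middle segment.
eps = 0.02 * rng.standard_normal(T)
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):
    rho = 1.02 if 150 <= t < 200 else 1.0      # explosive episode between t = 150 and 200
    y[t] = rho * y[t - 1] + eps[t]

def sadf(series, min_window=40):
    """Supremum of forward-expanding-window ADF statistics (right-tailed test)."""
    stats = []
    for end in range(min_window, len(series) + 1):
        adf_stat = adfuller(series[:end], regression="c", autolag=None, maxlag=1)[0]
        stats.append(adf_stat)
    return max(stats)

print(f"SADF statistic: {sadf(y):.2f}")
# Compare against right-tail critical values obtained by simulation;
# a large positive statistic suggests a period of explosive behavior.
```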

Relevance:

80.00%

Publisher:

Abstract:

I study the link between capital markets and sources of macroeconomic risk. In Chapter 1 I show that expected inflation risk is priced in the cross-section of stock returns even after controlling for cash flow growth and volatility risks. Motivated by this evidence, I study a long-run risk model with a built-in inflation non-neutrality channel that allows me to decompose the real stochastic discount factor into news about current and expected cash flow growth, news about expected inflation and news about volatility. The model can successfully price a broad menu of assets and provides a setting for analyzing cross-sectional variation in the expected inflation risk premium. For industries like retail and durable goods, inflation risk can account for nearly a third of the overall risk premium, while the energy industry and a broad commodity index act like inflation hedges. Nominal bonds are exposed to expected inflation risk and have inflation premiums that increase with bond maturity. The price of expected inflation risk was very high during the 1970s and 1980s, but has since come down substantially, staying very close to zero over the past decade. On average, the expected inflation price of risk is negative, consistent with the view that periods of high inflation represent a "bad" state of the world and are associated with low economic growth and poor stock market performance. In Chapter 2 I look at the way capital markets react to predetermined macroeconomic announcements. I document significantly higher excess returns on the US stock market on macro release dates than on days when no macroeconomic news hits the market. Almost the entire equity premium since 1997 has been realized on days when macroeconomic news is released. At high frequency, there is a pattern of returns increasing in the hours prior to the predetermined announcement time, peaking around the time of the announcement and dropping thereafter.
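A minimal sketch of the announcement-day comparison described for Chapter 2: split daily excess returns by whether a scheduled macro release occurred and compare average excess returns, along with the share of the cumulative premium earned on release days. The simulated returns and the announcement calendar below are made-up inputs standing in for market data and an actual macro release schedule.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days = 2520                                      # roughly ten trading years
is_announcement = rng.random(n_days) < 0.15        # hypothetical macro release calendar
# Made-up daily excess returns with a higher mean on announcement days.
excess_ret = np.where(is_announcement,
                      rng.normal(0.0009, 0.01, n_days),
                      rng.normal(0.0001, 0.01, n_days))

ann_mean = excess_ret[is_announcement].mean()
other_mean = excess_ret[~is_announcement].mean()
ann_share = excess_ret[is_announcement].sum() / excess_ret.sum()

print(f"mean excess return, announcement days:     {ann_mean:.4%}")
print(f"mean excess return, non-announcement days: {other_mean:.4%}")
print(f"share of cumulative premium on announcement days: {ann_share:.0%}")
```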

Relevance:

40.00%

Publisher:

Abstract:

A load and resistance factor design (LRFD) approach for the design of reinforced soil walls is presented to produce designs with consistent and uniform levels of risk over the whole range of design applications. The evaluation of load and resistance factors for reinforced soil walls based on reliability theory is presented. A first-order reliability method (FORM) is used to determine appropriate ranges for the values of the load and resistance factors. Using the pseudo-static limit equilibrium method, an analysis is conducted to evaluate the external stability of reinforced soil walls subjected to earthquake loading. The potential failure mechanisms considered in the analysis are sliding failure, eccentricity failure of the resultant force (overturning failure) and bearing capacity failure. The proposed procedure includes the variability associated with the reinforced backfill, the retained backfill, the foundation soil, the horizontal seismic acceleration and the surcharge load acting on the wall. Partial factors needed to maintain stability against the three modes of failure, targeting a component reliability index of 3.0, are obtained for various values of the coefficients of variation (COV) of the friction angle of the backfill and foundation soil, the distributed dead load surcharge, the cohesion of the foundation soil and the horizontal seismic acceleration. A comparative study between LRFD and allowable stress design (ASD) is also presented with a design example. (C) 2014 Elsevier Ltd. All rights reserved.
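To illustrate the reliability-based calibration idea, the sketch below estimates the probability of failure and the reliability index beta for a generic sliding limit state g = R - S by crude Monte Carlo, with lognormal resistance and load effect standing in for the wall-specific quantities. The distributions, their parameters and the limit-state form are illustrative assumptions; the paper itself uses FORM with the wall's actual limit states.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 1_000_000

def lognormal(mean, cov, size):
    """Lognormal samples parameterised by mean and coefficient of variation (COV)."""
    sigma = np.sqrt(np.log(1.0 + cov ** 2))
    mu = np.log(mean) - 0.5 * sigma ** 2
    return rng.lognormal(mu, sigma, size)

# Hypothetical sliding limit state g = R - S (resistance minus load effect).
R = lognormal(mean=600.0, cov=0.15, size=n)   # sliding resistance (kN/m), assumed
S = lognormal(mean=350.0, cov=0.20, size=n)   # seismic sliding force (kN/m), assumed

pf = np.mean(R - S < 0.0)                     # probability of failure
beta = -norm.ppf(pf)                          # corresponding reliability index
print(f"pf = {pf:.2e}, beta = {beta:.2f}")    # LRFD calibration targets beta of about 3.0
```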

Relevance:

40.00%

Publisher:

Abstract:

The deregulation of electricity markets has diversified the range of financial transaction modes between the independent system operator (ISO), generation companies (GENCOs) and load-serving entities (LSEs) as the main interacting players in a day-ahead market (DAM). LSEs sell electricity to end users and retail customers. An LSE that owns distributed generation (DG) or energy storage units can supply part of the load it serves when the nodal price of electricity rises. This opportunity stimulates LSEs to place storage or generation facilities at the buses with higher locational marginal prices (LMPs). The short-term advantage of this model is a reduced risk of financial losses for LSEs in DAMs, and its long-term benefit for the LSEs and the whole system is market power mitigation, because it virtually increases the price elasticity of demand. The model also enables the LSEs to manage their financial risks within a stochastic programming framework.
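A toy scenario-based illustration of the idea, under assumptions not taken from the paper: over a set of LMP scenarios with given probabilities, the LSE serves a fixed load and, as a recourse decision, dispatches its DG unit in any scenario where the realized LMP exceeds the DG marginal cost, lowering the expected procurement cost.

```python
# Toy scenario analysis for an LSE with a DG unit (all figures are assumptions).
scenarios = [                     # (probability, LMP in $/MWh)
    (0.50, 35.0),
    (0.30, 60.0),
    (0.20, 120.0),                # price-spike scenario
]
load_mw = 100.0                   # load served by the LSE in the hour
dg_capacity_mw = 30.0             # distributed generation capacity
dg_cost = 55.0                    # DG marginal cost in $/MWh

expected_cost_no_dg = sum(p * lmp * load_mw for p, lmp in scenarios)

expected_cost_with_dg = 0.0
for p, lmp in scenarios:
    # Recourse decision: run the DG unit only when the LMP exceeds its marginal cost.
    dg_output = dg_capacity_mw if lmp > dg_cost else 0.0
    purchase = load_mw - dg_output
    expected_cost_with_dg += p * (lmp * purchase + dg_cost * dg_output)

print(f"expected cost without DG: ${expected_cost_no_dg:,.0f}")
print(f"expected cost with DG:    ${expected_cost_with_dg:,.0f}")
```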

Relevance:

40.00%

Publisher:

Abstract:

Coxsackievirus B3 (CVB3) infection can result in myocarditis, which in turn may lead to a protracted immune response and subsequent dilated cardiomyopathy. Human decay-accelerating factor (DAF), a binding receptor for CVB3, was synthesized as a soluble IgG1-Fc fusion protein (DAF-Fc). In vitro, DAF-Fc was able to inhibit complement activity and block infection by CVB3, although blockade of infection varied widely among strains of CVB3. To determine the effects of DAF-Fc in vivo, 40 adolescent A/J mice were infected with a myopathic strain of CVB3 and given DAF-Fc treatment 3 days before infection, during infection, or 3 days after infection; the mice were compared with virus alone and sham-infected animals. Sections of heart, spleen, kidney, pancreas, and liver were stained with hematoxylin and eosin and submitted to in situ hybridization for both positive-strand and negative-strand viral RNA to determine the extent of myocarditis and viral infection, respectively. Salient histopathologic features, including myocardial lesion area, cell death, calcification and inflammatory cell infiltration, pancreatitis, and hepatitis were scored without knowledge of the experimental groups. DAF-Fc treatment of mice either preceding or concurrent with CVB3 infection resulted in a significant decrease in myocardial lesion area and cell death and a reduction in the presence of viral RNA. All DAF-Fc treatment groups had reduced infectious CVB3 recoverable from the heart after infection. DAF-Fc may be a novel therapeutic agent for active myocarditis and acute dilated cardiomyopathy if given early in the infectious period, although more studies are needed to determine its mechanism and efficacy.

Relevance:

40.00%

Publisher:

Abstract:

In this article we use factor models to describe a certain class of covariance structure for financial time series models. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. We build on previous work by allowing the factor loadings, in the factor model structure, to have a time-varying structure and to capture changes in asset weights over time, motivated by applications with multiple time series of daily exchange rates. We explore and discuss potential extensions to the models exposed here in the prediction area. This discussion leads to open issues on real-time implementation and natural model comparisons.
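The sketch below simulates the kind of structure described here: a single common factor whose log-variance follows an AR(1) stochastic volatility process, with factor loadings that drift slowly over time, generating a time-varying covariance between two return series. All parameter values are illustrative assumptions, not estimates from the article.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 1000

# Stochastic volatility for the common factor: log-variance follows an AR(1).
h = np.zeros(T)
mu_h, phi, sigma_h = -9.0, 0.97, 0.15
for t in range(1, T):
    h[t] = mu_h + phi * (h[t - 1] - mu_h) + sigma_h * rng.standard_normal()
factor = np.exp(h / 2.0) * rng.standard_normal(T)

# Time-varying loadings: slow random walks, one per observed series.
lam = np.zeros((T, 2))
lam[0] = [1.0, 0.6]
for t in range(1, T):
    lam[t] = lam[t - 1] + 0.01 * rng.standard_normal(2)

# Observed returns: loading * common factor + idiosyncratic noise.
idio_sd = np.array([0.004, 0.006])
returns = lam * factor[:, None] + idio_sd * rng.standard_normal((T, 2))

# Implied time-varying covariance between the two series (common-factor part only).
implied_cov = lam[:, 0] * lam[:, 1] * np.exp(h)
print(f"implied covariance, start vs end: {implied_cov[0]:.2e} vs {implied_cov[-1]:.2e}")
```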

Relevance:

40.00%

Publisher:

Abstract:

The past decade has witnessed a series of (well accepted and defined) financial crisis periods in the world economy. Most of these events are country specific and eventually spread across neighboring countries, with the concept of vicinity extending from geographic maps to contagion maps. Unfortunately, what contagion represents and how to measure it are still unanswered questions. In this article we measure the transmission of shocks by cross-market correlation coefficients, following Forbes and Rigobon's (2000) notion of shift-contagion. Our main contribution relies upon the use of traditional factor model techniques combined with stochastic volatility models to study the dependence among Latin American stock price indexes and the North American index. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. From a theoretical perspective, we improve currently available methodology by allowing the factor loadings, in the factor model structure, to have a time-varying structure and to capture changes in the series' weights over time. By doing this, we believe that the changes and interventions experienced by those five countries are well accommodated by our models, which learn and adapt reasonably fast to those economic and idiosyncratic shocks. We empirically show that the time-varying covariance structure can be modeled by one or two common factors and that some sort of contagion is present in most of the series' covariances during periods of economic instability, or crisis. Open issues on real-time implementation and natural model comparisons are thoroughly discussed.
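For reference, the shift-contagion approach of Forbes and Rigobon corrects the crisis-period correlation for the mechanical increase caused by higher source-market volatility before comparing it with the tranquil-period correlation. A minimal sketch of that correction on made-up return series follows; the data and the crisis window are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate(n, rho, scale):
    # Bivariate normal returns (source market, recipient market) with given correlation.
    cov = scale ** 2 * np.array([[1.0, rho], [rho, 1.0]])
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

tranquil = simulate(500, rho=0.30, scale=0.01)   # calm period, made-up data
crisis = simulate(100, rho=0.45, scale=0.03)     # turmoil: higher volatility and correlation

rho_t = np.corrcoef(tranquil.T)[0, 1]
rho_c = np.corrcoef(crisis.T)[0, 1]

# Forbes-Rigobon adjustment: remove the part of the correlation increase that is
# mechanically induced by the higher variance of the source market during the crisis.
delta = crisis[:, 0].var(ddof=1) / tranquil[:, 0].var(ddof=1) - 1.0
rho_c_adj = rho_c / np.sqrt(1.0 + delta * (1.0 - rho_c ** 2))

print(f"tranquil correlation:          {rho_t:.2f}")
print(f"crisis correlation (raw):      {rho_c:.2f}")
print(f"crisis correlation (adjusted): {rho_c_adj:.2f}")
# Shift-contagion is declared only if the adjusted crisis correlation is
# significantly larger than the tranquil-period correlation.
```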