46 results for Forward premium
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
The electricity industry throughout the world, long dominated by vertically integrated utilities, has experienced major changes. Deregulation, unbundling, wholesale and retail wheeling, and real-time pricing were abstract concepts a few years ago. Today market forces drive the price of electricity and reduce the net cost through increased competition. As power markets continue to evolve, there is a growing need for advanced modeling approaches. This article addresses the challenge of maximizing the profit (or return) of power producers through the optimization of their share of customers. Power producers have fixed production marginal costs and decide the quantity of energy to sell both in day-ahead markets and to a set of target clients, by negotiating bilateral contracts involving a three-rate tariff. Producers sell energy by considering the prices of a reference week and five different types of clients with specific load profiles. They analyze several tariffs and determine the best share of customers, i.e., the share that maximizes profit. © 2014 IEEE.
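As a rough sketch of the decision problem just described (our own notation and simplifications, not the authors' formulation): with marginal cost c, hourly loads L_{k,h} for client type k = 1,...,5, three-rate tariff prices p_{k,h}, day-ahead prices \lambda_h, day-ahead sales q_h and an assumed hourly production capacity Q_h, the producer chooses the customer shares s_k to solve

\max_{0 \le s_k \le 1,\; q_h \ge 0} \; \sum_h \Big[ \lambda_h q_h + \sum_{k=1}^{5} s_k L_{k,h} p_{k,h} \Big] \;-\; c \sum_h \Big[ q_h + \sum_{k=1}^{5} s_k L_{k,h} \Big] \quad \text{s.t.} \quad q_h + \sum_{k=1}^{5} s_k L_{k,h} \le Q_h \;\; \forall h.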
Abstract:
We investigate a mechanism that generates exact solutions of scalar field cosmologies in a unified way. The procedure investigated here allows almost all known solutions to be recovered, and also makes it possible to derive new ones. In particular, we derive and discuss one novel solution defined in terms of the Lambert function. The solutions are organised in a classification that depends on the choice of a generating function, denoted x(phi), which reflects the underlying thermodynamics of the model. We also analyse and discuss the existence of form-invariance dualities between solutions. A general way of defining the latter in an appropriate fashion for scalar fields is put forward.
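For context, the solutions in question solve the scalar-field cosmology equations, written here for a spatially flat FRW background in units 8\pi G = 1 (assumptions on our part; the abstract does not state the authors' conventions or the precise definition of the generating function x(phi)):

3H^2 = \tfrac{1}{2}\dot\phi^2 + V(\phi), \qquad \ddot\phi + 3H\dot\phi + V'(\phi) = 0, \qquad H = \dot a / a.

A suitable choice of generating function fixes the relation between \phi, H and V so that both equations can be integrated in closed form.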
Abstract:
Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
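To make the dictionary-search idea concrete, here is a minimal sketch (illustrative only, not the paper's algorithm: it uses a naive O(n^2 log n) suffix array construction and ignores the sliding window and the LZ output format). The longest match for the lookahead buffer is found among the lexicographic neighbours of its insertion point in the suffix array:

    def build_suffix_array(text):
        # Naive construction, adequate for a sketch; real encoders use O(n) or O(n log n) algorithms.
        return sorted(range(len(text)), key=lambda i: text[i:])

    def longest_match(text, sa, pattern):
        """Return (position, length) of the longest prefix of `pattern` found in `text`."""
        lo, hi = 0, len(sa)
        while lo < hi:                       # binary search for the insertion point of `pattern`
            mid = (lo + hi) // 2
            if text[sa[mid]:] < pattern:
                lo = mid + 1
            else:
                hi = mid
        best_pos, best_len = -1, 0
        for idx in (lo - 1, lo):             # the best match is a lexicographic neighbour
            if 0 <= idx < len(sa):
                start, length = sa[idx], 0
                while (length < len(pattern) and start + length < len(text)
                       and text[start + length] == pattern[length]):
                    length += 1
                if length > best_len:
                    best_pos, best_len = start, length
        return best_pos, best_len

    text = "abracadabra"
    sa = build_suffix_array(text)
    print(longest_match(text, sa, "abrac"))  # -> (0, 5)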
Abstract:
Aiming to revolutionize the mobile communications sector, largely on the strength of the high data rates it promises, LTE relies on a technique expected to be widely used in future mobile networks: Relaying. Alongside this technique, LTE uses MIMO to improve transmission quality in hostile environments and to offer high transmission rates. Relaying is frequently considered in the planning of upcoming LTE networks; its purpose is to increase network coverage and/or capacity and to improve performance at the cell edge. The performance of an RS depends on its location, on the radio-channel propagation conditions to which both the RS and the user equipment (UE) are subject, and on the RS's capacity to receive, process and forward information. The objective of this thesis is to study the relationship between the positioning of an RS and its performance, in order to determine the ideal position of an RS (of both the AF and SDF types). In addition, a performance comparison of the MIMO TD and OL-SM modes is presented, establishing under which conditions each should be used in an LTE network equipped with FRSs.
Abstract:
Nowadays there is an ever-growing amount of audiovisual information, and multimedia streams or files can be shared easily and efficiently. However, tampering with video content such as financial information, news, or videoconference sessions used in court can have serious consequences, given the importance of this kind of information. Hence the need to guarantee the authenticity and integrity of audiovisual information. This dissertation proposes an authentication system for H.264/Advanced Video Coding (AVC) video, named Autenticação de Fluxos utilizando Projecções Aleatórias (AFPA, stream authentication using random projections), whose authentication procedures operate at the level of each video frame. This scheme allows a more flexible kind of authentication, since it makes it possible to define a maximum amount of modification allowed between two frames. Authentication relies on a new image authentication technique that combines random projections with an error-correction mechanism applied to the data, so that each video frame can be authenticated with a reduced set of parity bits of its random projection. Since video is typically carried over unreliable protocols, it may suffer packet losses; to reduce the effect of packet losses on video quality and on the authentication rate, Unequal Error Protection (UEP) is used. For validation and comparison of the results, a classical system was implemented that authenticates video streams in the usual way, i.e., using digital signatures and hash codes. Both schemes were evaluated with respect to the overhead introduced and the authentication rate. The results show that, for high-quality video, the AFPA system reduces the authentication overhead by a factor of four compared with the scheme based on digital signatures and hash codes.
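A much-simplified sketch of the underlying random-projection check (hypothetical illustration only: it omits the parity-bit/error-correction mechanism, the UEP scheme and the H.264/AVC integration described above, and all names and parameters are our own):

    import numpy as np

    def project(frame, proj_matrix):
        # Project the flattened frame onto a small number of random directions.
        return proj_matrix @ frame.ravel().astype(np.float64)

    def authenticate(frame, reference_projection, proj_matrix, tolerance):
        # Accept the frame if its projection stays within `tolerance` of the sender's reference.
        return np.linalg.norm(project(frame, proj_matrix) - reference_projection) <= tolerance

    rng = np.random.default_rng(0)
    proj = rng.standard_normal((64, 144 * 176))              # 64 random directions for a QCIF-sized frame
    original = rng.integers(0, 256, (144, 176))
    tampered = original.copy()
    tampered[:8, :8] = 255                                   # small local modification
    ref = project(original, proj)
    print(authenticate(original, ref, proj, tolerance=1e3))  # True
    print(authenticate(tampered, ref, proj, tolerance=1e3))  # False: the change shows up in the projection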
Abstract:
An adequate prediction of the time-dependent behaviour of concrete, notably shrinkage, is essential in the design of a large structure, making it possible to conceive, dimension, and adopt the construction provisions for a structural behaviour that meets safety, serviceability, and durability requirements. The present moment is marked by a transition in structural design regulation, with the imminent replacement of national codes by European codes. For concrete structures, the Regulamento de Estruturas de Betão Armado e Pré-Esforçado (REBAP), in force since 1983, will be replaced by Eurocode 2. In parallel, the Fédération Internationale du Béton published the Model Code 2010 (MC2010), a document that will certainly have a strong influence on the evolution of concrete-structure regulation. In this context, the present work aims to compare the shrinkage prediction models included in these normative documents, identifying their main differences and similarities and quantifying the influence of the different factors considered in their formulation, in order to assess the impact that the introduction of these prediction models will have on the design of concrete structures. To gauge how well these models reflect the actual phenomenon, they were applied to the concrete of two structures whose structural behaviour is monitored by LNEC, namely the Miguel Torga bridge over the Douro river at Régua and the bridge over the Angueira river in the Bragança district. In both structures, in-situ shrinkage characterization had been carried out, and the experimental values thus obtained were compared with the values produced by the prediction models considered in this work. Finally, some conclusions drawn from the work developed in this dissertation are presented, along with suggestions for future developments.
Abstract:
An optically addressed, large-area colour imager is presented. It consists of a thin wide-band-gap p-i-n a-SiC:H filtering element deposited on top of a thick large-area a-SiC:H(-p)/a-Si:H(-i)/a-SiC:H(-n) image sensor, which itself acts as an intrinsic colour filter. To tune the externally applied voltage for full colour discrimination, the photocurrent generated by a modulated red light is measured under different optical and electrical bias conditions. The results show that the integrated device behaves both as an imager and as a filter, providing information not only on the position where the optical image is absorbed but also on its wavelength and intensity. The amplitude and sign of the image signals are electrically tuneable. Over a wide range of incident fluxes and under reverse bias, the red and blue image signals are opposite in sign and the green signal is suppressed, allowing blue and red colour recognition. The green information is obtained under forward bias, where the blue signal drops to zero and the red and green signals remain constant. By combining the information obtained at these two applied voltages, an RGB colour image can be acquired without the usual colour filters or pixel architecture. A numerical simulation supports the colour filter analysis.
Abstract:
An optimized ZnO:Al/a-pin SixC1-x:H/Al configuration for the laser scanned photodiode (LSP) imaging detector is proposed and its read-out parameters are improved. The effects of the sensing element structure, cell configuration and light source flux are investigated and correlated with the sensor output characteristics. The data reveal that, for sensors with wide-band-gap doped layers, an increase in the image signal optimized for the blue is achieved, with a dynamic range of two orders of magnitude, a responsivity of 6 mA W^-1 and a sensitivity of 17 µW cm^-2 at 530 nm. The main output characteristics, such as image responsivity, resolution, linearity and dynamic range, were analyzed under reverse, forward and short-circuit modes. The results show that the sensor performance can be optimized in short-circuit mode. A trade-off between scan time and the required resolution is needed, since the spot size limits the resolution due to cross-talk between dark and illuminated regions, which leads to blurring effects.
Abstract:
In this paper, we present results on the use of multilayered a-SiC:H heterostructures as a device for wavelength-division demultiplexing of optical signals. These devices are useful in optical communications applications that use the wavelength-division multiplexing technique to encode multiple signals into the same transmission medium. The device is composed of two stacked p-i-n photodiodes, both optimized for the selective collection of photogenerated carriers. Band gap engineering was used to adjust the photogeneration and recombination rate profiles of the intrinsic absorber regions of each photodiode to short- and long-wavelength absorption in the visible spectrum. The photocurrent signal using different input optical channels was analyzed at reverse and forward bias and under steady-state illumination. A demux algorithm based on the voltage-controlled sensitivity of the device was proposed and tested. An electrical model of the WDM device is presented and supported by the solution of the respective circuit equations.
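Schematically, a voltage-controlled demux of this kind can be posed as a small linear inversion (a generic formulation under our own assumptions, not necessarily the authors' algorithm): if s_c(V) denotes the device sensitivity to channel c at applied bias V, the photocurrents measured at two biases V_1 (reverse) and V_2 (forward) satisfy

i(V_1) = s_1(V_1)\,\Phi_1 + s_2(V_1)\,\Phi_2, \qquad i(V_2) = s_1(V_2)\,\Phi_1 + s_2(V_2)\,\Phi_2,

which can be solved for the channel intensities \Phi_1 and \Phi_2 whenever the 2x2 sensitivity matrix is non-singular.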
Abstract:
The purpose of this paper is to analyze whether companies with a greater commitment to corporate social responsibility (SRI companies) perform differently on the stock market from companies that disregard SRI. Over recent years, this relationship has been addressed at both a theoretical and a practical level, and has led to extensive empirical research on the relationships between a company's financial performance and its social, environmental and corporate governance performance, and between SRI and investment decisions in the financial market. More specifically, this work provides empirical evidence for the Spanish market on whether belonging to a group of companies the market classes as sustainable results in return premiums that set them apart from companies classed as conventional; it finds no differences in the stock market performance of companies considered to be SRI or conventional.
Abstract:
This paper analyzes the risk-return trade-off in European equities, considering both temporal and cross-sectional dimensions. In our analysis, we introduce not only the market portfolio but also 15 industry portfolios comprising the entire market. Several bivariate GARCH models are estimated to obtain the covariance matrix between excess market returns and the industry portfolios, and the existence of a risk-return trade-off is analyzed through a cross-sectional approach using the information in all portfolios. We obtain evidence of a positive and significant risk-return trade-off in the European market. This conclusion is robust across different GARCH specifications and is even more evident after controlling for the main financial crisis during the sample period.
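In schematic form, the cross-sectional relation being tested is of the conditional ICAPM type (our generic formulation; the abstract does not give the exact specification):

E_{t-1}[r_{i,t}] = \lambda\, \mathrm{Cov}_{t-1}(r_{i,t}, r_{m,t}),

where r_{i,t} is the excess return on industry portfolio i, r_{m,t} is the excess market return, the conditional covariances come from the bivariate GARCH estimates, and a positive and significant \lambda across portfolios indicates a risk-return trade-off.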
Abstract:
We study the design of optimal insurance contracts when the insurer can default on its obligations. In our model, default arises endogenously from the interaction of the insurance premium, the indemnity schedule and the insurer's assets. This allows us to understand the joint effect of insolvency risk and background risk on efficient contracts. The results may shed light on the aggregate risk retention schedules observed in catastrophe reinsurance markets, and can assist in the design of (re)insurance programs and guarantee funds.
Abstract:
This paper studies the evolution of default risk premia for European firms during the years surrounding the recent credit crisis. We employ the information embedded in Credit Default Swaps (CDS) and Moody's KMV EDF default probabilities to analyze the common factors driving these risk premia. The risk premium is characterized in several directions: first, we perform a panel data analysis to capture the relationship between CDS spreads and actual default probabilities; second, we employ the intensity framework of Jarrow et al. (2005) to measure the theoretical effect of the risk premium on expected bond returns; third, we carry out a dynamic panel data analysis to identify the macroeconomic sources of the risk premium; finally, a vector autoregressive model analyzes what proportion of the co-movement is attributable to financial or macro variables. Our estimates yield risk premium coefficients substantially higher than those previously reported for US firms, as well as time-varying behavior. A dominant factor explains around 60% of the common movements in risk premia. Additionally, empirical evidence suggests a public-to-private risk transfer between sovereign CDS spreads and corporate risk premia.
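In intensity-based (reduced-form) credit frameworks of the kind cited above, the default risk premium is commonly summarised as the ratio of the risk-neutral to the actual default intensity (a generic formulation; the paper's exact specification is not given in the abstract):

\mu_t = \lambda^{\mathbb{Q}}_t / \lambda^{\mathbb{P}}_t, \qquad \text{CDS spread} \approx \lambda^{\mathbb{Q}}_t (1 - R),

where R is the recovery rate and the EDF measure proxies the actual intensity \lambda^{\mathbb{P}}_t; a ratio above one means investors demand compensation beyond expected default losses.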
Abstract:
The financial literature and the financial industry often use zero coupon yield curves as input for testing hypotheses, pricing assets or managing risk, and assume the provided data to be accurate. We analyse the implications of the methodology and of the sample selection criteria used to estimate the zero coupon bond yield term structure for the resulting volatility of spot rates with different maturities. We obtain the volatility term structure using historical volatilities and EGARCH volatilities. As input for these volatilities we consider our own spot rate estimation from GovPX bond data and three popular interest rate data sets: from the Federal Reserve Board, from the US Department of the Treasury (H15), and from Bloomberg. We find strong evidence that the resulting zero coupon bond yield volatility estimates, as well as the correlation coefficients among spot and forward rates, depend significantly on the data set. We observe relevant differences in economic terms when the volatilities are used to price derivatives.
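For reference, a standard EGARCH(1,1) specification for the spot-rate innovations \varepsilon_t is (Nelson's form; the abstract does not state the exact parameterisation used):

\ln\sigma_t^2 = \omega + \beta\,\ln\sigma_{t-1}^2 + \alpha\big(|z_{t-1}| - \mathrm{E}|z_{t-1}|\big) + \gamma\, z_{t-1}, \qquad z_t = \varepsilon_t/\sigma_t,

where the \gamma term lets volatility respond asymmetrically to positive and negative rate changes.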