940 results for perfect hedging
Abstract:
We propose a new selective multi-carrier index keying scheme for orthogonal frequency division multiplexing (OFDM) systems that opportunistically modulates both a small subset of sub-carriers and their indices. In particular, we investigate the performance enhancement in two cases: error-propagation-sensitive and compromised device-to-device (D2D) communications. For the performance evaluation, we focus on analyzing the error propagation probability (EPP), introducing exact and upper-bound expressions for the detection error probability in the presence of both imperfect and perfect detection of active multi-carrier indices. The average EPP results in closed form are generalized to various fading distributions using the moment generating function, and our numerical results clearly show that the proposed approach is desirable for reliable and energy-efficient D2D applications.
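As a rough illustration of how index keying carries information, the minimal sketch below (not the authors' scheme; the subblock size n, number of active sub-carriers k and constellation order M are hypothetical parameters) counts the bits conveyed per OFDM subblock when only k of n sub-carriers are activated: floor(log2 C(n,k)) index bits plus k·log2(M) symbol bits.

```python
from math import comb, log2, floor

def index_keying_bits(n, k, M):
    """Bits per OFDM subblock when k of n sub-carriers are active,
    each active sub-carrier carrying an M-ary symbol (illustrative only)."""
    index_bits = floor(log2(comb(n, k)))   # bits encoded in the choice of active indices
    symbol_bits = k * int(log2(M))         # bits carried by the active sub-carriers
    return index_bits + symbol_bits

# Example: 4-sub-carrier subblocks, 2 active, QPSK on each active sub-carrier
print(index_keying_bits(4, 2, 4))  # 2 index bits + 4 symbol bits = 6 bits per subblock
```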
Abstract:
The problem of determining a maximum matching, or whether there exists a perfect matching, is very common in a large variety of applications and has been extensively studied in graph theory. In this paper we introduce a characterisation of a family of graphs whose stability number is determined by convex quadratic programming. The main results connected with the recognition of this family of graphs are also introduced. A necessary and sufficient condition characterising graphs with a perfect matching follows, together with an algorithmic strategy, based on determining the stability number of line graphs by convex quadratic programming, applied to the determination of a perfect matching. A numerical example for the recognition of graphs with a perfect matching is described. Finally, the above algorithmic strategy is extended to the determination of a maximum matching of an arbitrary graph, and some related results are presented.
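A minimal numerical sketch of the kind of computation described is given below, under assumptions of my own: it builds the line graph L(G), evaluates the well-known convex quadratic upper bound on the stability number, υ(H) = max over x ≥ 0 of 2·1ᵀx − xᵀ(A/(−λ_min(A)) + I)x, and compares it with |V(G)|/2. Since the matching number of G equals α(L(G)) ≤ υ(L(G)), a value strictly below |V(G)|/2 certifies that no perfect matching exists; the exact formulation and recognition conditions used in the paper may differ.

```python
import networkx as nx
import numpy as np
from scipy.optimize import minimize

def qp_stability_upper_bound(H):
    """Convex QP upper bound on the stability number of graph H:
    max_{x >= 0} 2*sum(x) - x^T (A/(-lambda_min) + I) x."""
    A = nx.to_numpy_array(H)
    lam_min = np.linalg.eigvalsh(A).min()
    Q = A / (-lam_min) + np.eye(len(A))           # positive semidefinite by construction
    fun = lambda x: -(2.0 * x.sum() - x @ Q @ x)  # negate: scipy minimizes
    jac = lambda x: -(2.0 * np.ones_like(x) - 2.0 * Q @ x)
    x0 = np.full(len(A), 0.5)
    res = minimize(fun, x0, jac=jac, bounds=[(0, None)] * len(A), method="L-BFGS-B")
    return -res.fun

G = nx.cycle_graph(6)                 # C6 has a perfect matching
L = nx.line_graph(G)
ub = qp_stability_upper_bound(L)
print(ub, G.number_of_nodes() / 2)    # a bound below n/2 would certify no perfect matching
```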
Abstract:
This thesis consists of an introductory chapter (essay I) and five further empirical essays on electricity markets and CO2 spot price behaviour, derivatives pricing analysis and hedging. Essay I presents the structure of the thesis, the functioning and characteristics of electricity markets, and the types of products traded, to be analyzed in the following essays. In the second essay we conduct an empirical study of co-movements in electricity markets using wavelet analysis, discussing long-term dynamics and market integration. Essay three addresses hedging performance and multiscale relationships in the German electricity spot and futures markets, also using wavelet analysis. We concentrate the investigation on the relationship, in a time-frequency (scale-by-scale) approach, between coherence evolution and the spot-futures hedge ratio, which conditions the effectiveness of the hedging strategy. Essays four, five and six are interrelated with one another and with the two previous essays, given the nature of the commodity analyzed, CO2 emission allowances, traded in electricity markets. Relationships between electricity prices, primary energy fuel prices and carbon dioxide permits are analyzed in essay four, where the efficiency of the European market for allowances is examined taking market heterogeneity into account. Essay five analyzes stylized statistical properties of the recently traded asset, CO2 emission allowances, for spot and futures returns, also examining the relation linking convenience yield and risk premium, for the German European Energy Exchange (EEX) between October 2005 and October 2009. The study was conducted through empirical estimations of the CO2 allowances risk premium, the convenience yield, and their relation. Futures prices are examined from an ex-post perspective, showing evidence of a significantly negative risk premium, or, equivalently, a positive forward premium. Finally, essay six analyzes the hedging effectiveness of emission allowances futures, providing evidence that utility gains increase with the investor's risk preference. Deregulation of electricity markets has led to higher uncertainty in electricity prices, and with these essays we try to shed new light on structuring, pricing and hedging in this type of market.
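As a point of reference for the hedge-ratio analysis mentioned in essay three, the snippet below computes the classical minimum-variance hedge ratio h* = Cov(spot, futures)/Var(futures) and the corresponding variance-reduction measure of hedging effectiveness. The thesis works with wavelet-based, scale-dependent versions of these quantities; this sketch only shows the single-scale baseline, on made-up return series.

```python
import numpy as np

def min_variance_hedge(spot_returns, futures_returns):
    """Classical (single-scale) minimum-variance hedge ratio and hedging effectiveness."""
    cov = np.cov(spot_returns, futures_returns)
    h = cov[0, 1] / cov[1, 1]                       # h* = Cov(s, f) / Var(f)
    hedged = spot_returns - h * futures_returns     # return of the hedged position
    effectiveness = 1.0 - hedged.var() / spot_returns.var()  # fraction of variance removed
    return h, effectiveness

rng = np.random.default_rng(0)
f = rng.normal(0.0, 0.02, 500)                      # synthetic futures returns
s = 0.9 * f + rng.normal(0.0, 0.01, 500)            # synthetic correlated spot returns
print(min_variance_hedge(s, f))
```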
Abstract:
The first century, which blossomed into a Golden Age, would not end under the sign of the good Fortune inaugurated by the first Princeps. The century of Augustus would come to its end! Literature could not escape the fatum of an entire Empire and, after 69, together with the Magna Vrbs, awaited a time that would finally be capable of renewal. For the eighties of the first century, the Flavians and their achievements promised a new Aurea Aetas… Yet it proved impossible to recover the past: then, as never before, the wealthy sought the purple and the populace clamoured for panem et circenses. And the definitive change of the times found one of its greatest proofs in artistic production: patronage had condemned authors to abandonment! Gone were the circles of Maecenas, supporting Horaces and Virgils who could devote themselves exclusively to their art… Marcus Valerius Martialis was not only an author whose existence would suffer from the constraints that this age reserved for poets, but also the one who would make his work the most faithful mirror of his time. Indeed, were it not for his work, one could not fully understand how a writer managed to survive those times and bring his work to light, and to a very special light indeed: Hic est quem legis ille, quem requiris, / toto notus in orbe Martialis (1.1.1-2)! To sing the new Empire and its daily life, where grandeur and baseness lived side by side, nothing was better suited than a rough auena, playful and biting... The epigram, not the epic, was the new voice of Rome! And Martial, raising his auena, applied all his mastery to the celebration of his Rome and of his fellow Romans: hominem pagina nostra sapit (10.4.10). Have we lost a talented epic poet who devoted himself and his art to a minor genre, or have we gained a peerless singer who lived in perfect harmony with his time? Attaining the immortality once reserved for epic poets, Martial achieved his goal: si […] / [...] fas est cineri me superesse meo (7.44.7-8). And yet Martial's singular feat was to fulfil his own words, angusta cantare licet uidearis auena, / dum tua multorum uincat auena tubas (8.3.21-22), writing, in the form of epigrams, the first and perhaps the only epic of everyday life!
Abstract:
The continuous demand for highly efficient wireless transmitter systems has triggered an increased interest in switching-mode techniques to handle the required power amplification. The RF carrier amplitude-burst transmitter, i.e. a wireless transmitter chain in which a phase-modulated carrier is modulated in amplitude in an on-off mode according to some prescribed envelope-to-time conversion, such as pulse-width or sigma-delta modulation, constitutes a promising architecture capable of efficiently transmitting signals with highly demanding complex modulation schemes. However, the tested practical implementations present results that fall well short of the theoretically advanced promises (perfect linearity and efficiency). My original contribution to knowledge presented in this thesis is the first thorough study and model of the power efficiency and linearity characteristics that can actually be achieved with this architecture. The analysis starts with a brief review of the theoretical, idealized behavior of these switched-mode amplifier systems, followed by a study of the many sources of impairment that appear when the real system is implemented. In particular, special attention is paid to the dynamic load modulation caused by the often ignored interaction between the narrowband signal reconstruction filter and the usual single-ended switched-mode power amplifier, which, among many other performance impairments, forces a two-transistor implementation. The performance of this architecture is clearly explained based on the presented theory, which is supported by simulations and corresponding measured results of a fully working implementation. The conclusions drawn allow the development of a set of design rules for future improvements, one of which is proposed and verified in this thesis. It suggests a significant modification to the traditional architecture, in which the phase-modulated carrier is now always on, thus allowing a single-transistor implementation, and the amplitude is impressed onto the carrier phase according to a bi-phase code.
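To make the envelope-to-time conversion step concrete, the sketch below implements a generic first-order sigma-delta modulator that turns a slowly varying envelope (normalized to [0, 1]) into the on-off burst sequence that gates the phase-modulated carrier. It is a textbook modulator assumed for illustration; the thesis' actual pulse encoder and oversampling choices may differ.

```python
import numpy as np

def sigma_delta_onoff(envelope):
    """First-order sigma-delta: 1-bit on/off stream whose local average tracks the envelope."""
    acc = 0.0
    bits = np.empty(len(envelope), dtype=np.uint8)
    for i, e in enumerate(envelope):
        acc += e - (bits[i - 1] if i else 0)   # integrate the quantization error
        bits[i] = 1 if acc >= 0.5 else 0       # 1-bit quantizer drives the RF burst on/off
    return bits

t = np.linspace(0, 1, 2000)
env = 0.5 + 0.4 * np.sin(2 * np.pi * 5 * t)    # toy envelope in [0.1, 0.9]
bits = sigma_delta_onoff(env)
print(bits[:40], bits.mean())                  # duty cycle approximates the mean envelope
```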
Abstract:
Data compression is the computing technique that aims to reduce the size of information in order to minimize the required storage space and speed up data transmission over bandwidth-limited networks. Several compression techniques, such as LZ77 and its variants, suffer from a problem we call the redundancy caused by the multiplicity of encodings. The multiplicity of encodings (ME) means that the source data can be encoded in different ways. In its simplest case, ME occurs when a compression technique has the possibility, during the encoding process, of coding a symbol in different ways. The bit-recycling compression technique was introduced by D. Dubé and V. Beaudoin to minimize the redundancy caused by ME. Variants of bit recycling have been applied to LZ77, and the experimental results obtained lead to better compression (a reduction of about 9% in the size of files compressed by Gzip, by exploiting ME). Dubé and Beaudoin pointed out that their technique might not perfectly minimize the redundancy caused by ME, because it is built on Huffman coding, which cannot handle codewords of fractional lengths, i.e. it only generates codewords of integral lengths. Moreover, Huffman-based bit recycling (HuBR) imposes additional constraints to avoid certain situations that degrade its performance. Unlike Huffman codes, arithmetic coding (AC) can handle codewords of fractional lengths. Furthermore, over the last decades arithmetic codes have attracted many researchers, since they are more powerful and more flexible than Huffman codes. Consequently, this work aims to adapt bit recycling to arithmetic codes in order to improve coding efficiency and flexibility. We have addressed this problem through our four (published) contributions, which are presented in this thesis and can be summarized as follows. First, we propose a new technique used to adapt Huffman-based bit recycling (HuBR) to arithmetic coding, called arithmetic-coding-based bit recycling (ACBR); it describes the framework and the principles of the adaptation of HuBR to ACBR. We also present the theoretical analysis needed to estimate the redundancy that can be reduced by HuBR and ACBR for applications that suffer from ME. This analysis shows that ACBR achieves perfect recycling in all cases, whereas HuBR does so only in very specific cases. Second, the problem with the aforementioned ACBR technique is that it requires arbitrary-precision arithmetic, which demands unlimited (or infinite) resources. In order to make it usable, we propose a new finite-precision version; the technique thus becomes efficient and applicable on computers with conventional fixed-size registers and can easily be interfaced with applications that suffer from ME. Third, we propose the use of HuBR and ACBR as a means of reducing redundancy in order to obtain a variable-to-fixed binary code. We have shown, theoretically and experimentally, that both techniques achieve a significant improvement (less redundancy). In this respect, ACBR outperforms HuBR and covers a broader class of binary sources that can benefit from a plurally parsable dictionary. We also show that ACBR is more flexible than HuBR in practice. Fourth, we use HuBR to reduce the redundancy of the balanced codes generated by Knuth's algorithm. In order to compare the performance of HuBR and ACBR, the corresponding theoretical results for both are presented. The results show that the two techniques achieve almost the same redundancy reduction on the balanced codes generated by Knuth's algorithm.
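To illustrate the multiplicity of encodings that bit recycling exploits, the toy decoder below accepts LZ77-style token streams made of literals and (distance, length) back-references; the two streams shown are different yet decode to the same string. This is a generic illustration assumed for clarity, not the encoder of Dubé and Beaudoin.

```python
def lz77_decode(tokens):
    """Decode a stream of literals (single characters) and (distance, length) back-references."""
    out = []
    for tok in tokens:
        if isinstance(tok, str):                 # literal symbol
            out.append(tok)
        else:                                    # copy `length` characters from `distance` back
            distance, length = tok
            for _ in range(length):
                out.append(out[-distance])
    return "".join(out)

# Two distinct encodings of the same source: this is the multiplicity of encodings (ME).
enc_a = ["a", "b", (2, 4)]                       # "ab" + overlapping copy of length 4
enc_b = ["a", "b", (2, 2), (2, 2)]               # "ab" + two copies of length 2
assert lz77_decode(enc_a) == lz77_decode(enc_b) == "ababab"
print(lz77_decode(enc_a))
```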
Abstract:
Report on Supervised Teaching Practice, Master's degree in Teaching of Economics and Accounting, Universidade de Lisboa, 2014
Abstract:
Master's thesis in Technological Chemistry, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2016
Abstract:
Arthur Schopenhauer proposed a theory of colour as a consequence of his first-hand knowledge of J.W. Goethe's experiments with colour phenomena. This colour theory can be used to explore an interesting proposition Schopenhauer made about architecture. For Schopenhauer, architecture is about feelings, not about functions or forms; its purpose as an art is to reveal the principles of primitive forces, specifically gravity and rigidity. For Schopenhauer, architecture expresses these forces in the poised equilibrium of massive structures built out of stone. Schopenhauer was inclined to believe that architecture had already achieved its most perfect expression in Greek temple architecture. However, he did offer one possibility for architectural research: the suggestion that architecture was also concerned with the expression of light. It seems never to have occurred to Schopenhauer to use his colour theory to speculate about light in architecture. This paper explores some of the implications of Schopenhauer's theory of colour for his aesthetics of architecture.
Abstract:
This paper addresses the optimal involvement in derivatives electricity markets of a power producer hedging against pool price volatility. To achieve this aim, a swarm intelligence meta-heuristic optimization technique for a long-term risk management tool is proposed. This tool investigates the long-term risk-hedging opportunities available to electric power producers through the use of contracts with physical (spot and forward contracts) and financial (options contracts) settlement. The producer's risk preference is formulated as a utility function (U) expressing the trade-off between the expectation and the variance of the return. The expectation and variance of the return are based on a forecasted scenario interval determined by a long-term price range forecasting model. This model also makes use of particle swarm optimization (PSO) to find the best parameters, allowing better forecasting results. Since the price estimation depends on load forecasting, this work also presents a regressive long-term load forecast model that uses PSO to find the best parameters, as in the price estimation. The performance of the PSO technique has been evaluated by comparison with a Genetic Algorithm (GA) based approach. A case study is presented and the results are discussed, taking into account real price and load historical data from the mainland Spanish electricity market and demonstrating the effectiveness of the methodology in handling this type of problem. Finally, conclusions are duly drawn.
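A compact sketch of the kind of optimization described is given below: a plain global-best particle swarm searches contract weights that maximize the mean-variance utility U(w) = E[return] − λ·Var[return] over a set of price scenarios. The scenario matrix, the risk-aversion parameter λ (here `lam`) and the PSO hyper-parameters are all made-up illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-scenario returns of three hedging instruments (spot, forward, option)
scenarios = rng.normal([0.05, 0.03, 0.02], [0.10, 0.03, 0.05], size=(200, 3))
lam = 2.0                                       # illustrative risk-aversion weight

def utility(w):
    """U(w) = E[portfolio return] - lam * Var[portfolio return] over the scenarios."""
    r = scenarios @ w
    return r.mean() - lam * r.var()

def project(x):
    """Clip to non-negative weights and renormalize each row to sum to one."""
    x = np.clip(x, 0.0, None)
    s = x.sum(axis=-1, keepdims=True)
    return np.where(s > 0, x / np.where(s > 0, s, 1.0), 1.0 / x.shape[-1])

def pso(n_particles=30, n_iter=200, dim=3, w_inertia=0.7, c1=1.5, c2=1.5):
    """Plain global-best particle swarm maximizing the mean-variance utility."""
    pos = project(rng.random((n_particles, dim)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([utility(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = project(pos + vel)
        vals = np.array([utility(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, utility(gbest)

print(pso())   # best contract weights and the corresponding utility value
```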
Abstract:
This paper presents the characterization of an indoor WiMAX radio channel using the Finite-Difference Time-Domain (FDTD) method [1] complemented with the Convolutional Perfectly Matched Layer (CPML) technique [2]. An indoor 2D scenario is simulated in the 3.5 GHz band (IEEE 802.16d-2004 and IEEE 802.16e-2005 [3]). In this study, two complementary techniques were used in each analysis: techniques A and B for fading based on delay spread, and techniques C and D for fading based on Doppler spread. In each analysis, the two techniques converge to the same result. The simulated results define the channel as flat, slow and without inter-symbol interference (ISI), making spatial diversity the most appropriate scheme.
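For readers unfamiliar with the method, the fragment below shows the core of a 1D FDTD update loop (leapfrog Yee update of Ez and Hy in free space, normalized units, with a simple additive source and no absorbing boundary). It is only a didactic reduction; the paper's simulator is 2D and terminated with CPML absorbing layers [2].

```python
import numpy as np

# Minimal 1D free-space FDTD (Yee scheme), normalized units, Courant number S = 1
nz, nt = 400, 600
ez = np.zeros(nz)          # electric field samples
hy = np.zeros(nz - 1)      # magnetic field samples, staggered between ez nodes

for n in range(nt):
    hy += ez[1:] - ez[:-1]                        # H update (half time step)
    ez[1:-1] += hy[1:] - hy[:-1]                  # E update (half time step)
    ez[50] += np.exp(-((n - 60) / 15.0) ** 2)     # additive Gaussian-pulse source
print(float(ez.max()))
```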
Abstract:
In this paper, the design of low-profile antennas using Electromagnetic Band Gap (EBG) structures is introduced. Taking advantage of the fact that they can behave as a Perfect Magnetic Conductor (PMC), it is shown that these structures exhibit dual-band in-phase reflection at the WLAN (Wireless Local Area Network) bands of 2.4 GHz and 5.2 GHz. These structures are applied to a PIFA (Planar Inverted-F Antenna), and the results show that it is possible to obtain low-profile PIFAs.
Abstract:
The foreseeable, exponential growth of mobile communication networks, driven by user mobility, flexibility and convenience, makes them the most important segment of today's telecommunications world. It is therefore important to study and characterize radio channels for the various frequency bands used by the different technologies. The main objective of this Master's dissertation is to characterize a radio channel for the Worldwide Inter-operability for Microwave Access (WiMAX) wireless technology, at the frequencies of 3.5 GHz and 5 GHz, currently seen by the scientific community as the wireless technology with the best prospects of success. To this end, the Power Delay Profile (PAP) and the power as a function of distance (PFD) were determined using the Finite-Difference Time-Domain (FDTD) computational simulation method. To study and characterize the radio channel in terms of delay-spread fading, two alternative methods taking the PAP as input were used. To characterize the channel in terms of Doppler-spread fading, two alternative techniques taking the PFD as input were also used. In both situations the two alternative methods converged to the same results. The characterization is carried out in two different scenarios: one in which most obstacles are considered perfect electric conductors, hereafter called the PEC Scenario, and a second in which the obstacles have different electromagnetic properties, hereafter called the MIX Scenario. In both analysis scenarios the channel was found to be flat, slow and without ISI.
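The delay-spread side of the characterization can be made concrete with the short routine below, which computes the mean excess delay and RMS delay spread from a power delay profile and applies the usual rule-of-thumb coherence bandwidth Bc ≈ 1/(50·τ_rms); the channel is treated as flat when the signal bandwidth is well below Bc. The example profile values are invented and are not the simulated PAP of the dissertation.

```python
import numpy as np

def delay_spread_stats(delays_s, powers_linear):
    """Mean excess delay, RMS delay spread and rule-of-thumb coherence bandwidth from a PDP."""
    p = np.asarray(powers_linear, dtype=float)
    tau = np.asarray(delays_s, dtype=float)
    mean_delay = np.sum(p * tau) / p.sum()
    rms = np.sqrt(np.sum(p * tau**2) / p.sum() - mean_delay**2)
    coherence_bw = 1.0 / (50.0 * rms)        # common approximation for 0.9 frequency correlation
    return mean_delay, rms, coherence_bw

# Invented 4-tap power delay profile (delays in seconds, powers in linear scale)
taus = [0.0, 30e-9, 70e-9, 120e-9]
pows = [1.0, 0.5, 0.2, 0.05]
print(delay_spread_stats(taus, pows))
```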
Abstract:
Final Master's project for obtaining the degree of Master in Electronics and Telecommunications Engineering
Abstract:
We study the effects of product differentiation in a Stackelberg model with demand uncertainty for the first mover. We perform an ex-ante and an ex-post analysis of the profits of the leader and the follower firms in terms of product differentiation and demand uncertainty. We show that, even with small uncertainty about the demand, the follower firm can achieve greater profits than the leader if their products are sufficiently differentiated. We also compute the probability of the second firm having a higher profit than the leading firm, subsequently showing the advantages and disadvantages of being either the leader or the follower firm.
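A small Monte Carlo sketch of the mechanism is given below, under an assumed specification that may differ from the paper's: linear inverse demand p_i = a − q_i − γ q_j with substitutability γ (low γ meaning strongly differentiated products), zero costs, an intercept shock observed by the follower but not by the leader, and the leader committing to the quantity that maximizes its expected profit. It estimates the probability that the follower's realized profit exceeds the leader's.

```python
import numpy as np

def follower_advantage_prob(gamma, a_mean=10.0, a_std=1.0, n_draws=100_000, seed=0):
    """P(follower profit > leader profit) in a linear Stackelberg duopoly with
    differentiated products; the leader moves before the demand intercept is realized."""
    rng = np.random.default_rng(seed)
    # Leader's ex-ante optimal quantity (zero costs): q1 = a_mean*(2 - gamma) / (2*(2 - gamma^2))
    q1 = a_mean * (2 - gamma) / (2 * (2 - gamma**2))
    a = rng.normal(a_mean, a_std, n_draws)            # realized demand intercepts
    q2 = np.clip((a - gamma * q1) / 2, 0, None)       # follower best response to q1 and realized a
    pi1 = (a - q1 - gamma * q2) * q1                  # leader's realized profit
    pi2 = (a - q2 - gamma * q1) * q2                  # follower's realized profit
    return (pi2 > pi1).mean()

for g in (0.2, 0.5, 0.8):
    print(g, follower_advantage_prob(g))
```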