920 resultados para Output gap


Relevância:

60.00%

Publicador:

Resumo:

Can one speak, in light of the empirical evidence accumulated up to 2011, of a single crisis trajectory that is broadly characteristic of the advanced industrial countries as a whole and identifiable in the major economies as well? Can universal changes be identified in output, labor markets, consumption and investment that fit well with earlier experience, no less than with the predictions of the familiar macro models? The answer, at least at the time of writing, is no: neither in the characteristics of the crisis trajectory and the pace of deterioration in macroeconomic performance, nor in the depth and duration of the downturn, are there well-identifiable common features that fit readily into the existing theoretical frameworks. The study reviews the works of the empirical literature on crises and macroeconomic shocks held to be relevant in light of interpretations of financial globalization. It then assesses, in an investigation spanning 60 years, the performance of the US economy in recessionary periods, with the aim of allowing a sufficiently objective judgment of the severity of the recent crisis, at least as regards the order of magnitude of movements in the main macro variables. / === / Based on the empirical evidence accumulated up to 2011, using official statistics from the OECD data bank and the US Commerce Department, the article addresses the question of whether one can speak of generally observable recession/crisis patterns that would be universally recognizable in all major industrial countries (the G7). The answer to this question is a firm no. Changes and volatility in most major macroeconomic indicators, such as the output gap, labor market distortions, and large deviations from trend in consumption and investment, all exhibited wide differences in depth and breadth across the G7 countries.
The large deviations in output gaps, and especially the strong distortions in labor-market inputs and hours worked per capita over the crisis months, can hardly be explained by the existing classes of DSGE and real-business-cycle models. Especially troubling are the difficulties in fitting the data into any established model, whether a business-cycle model or one of the other types in which financial distress reduces economic activity. It is argued that standard business-cycle models with financial market imperfections have no mechanism for generating deviations from standard theory, and thus shed no light on the key factors underlying the 2007-2009 recession. That does not imply that the financial crisis is unimportant in understanding the recession, but it does indicate that we do not fully understand the channels through which financial distress reduced labor input. Long historical trends in the privately held portion of the federal debt in the US economy indicate that the standard macroeconomic proposition that public debt crowds out private investment and thereby inhibits growth can be strongly challenged, insofar as this ratio is a direct indicator neither of slowing growth nor of recession.
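The output-gap measure discussed above is typically computed as the percentage deviation of log GDP from a smooth trend. A minimal sketch, assuming the common Hodrick-Prescott detrending implemented directly with numpy (the series and the smoothing parameter below are illustrative, not taken from the study):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott trend: minimize sum((y - tau)^2) + lam * sum((d2 tau)^2)."""
    n = len(y)
    # Second-difference operator K, shape (n-2) x n
    K = np.zeros((n - 2, n))
    for i in range(n - 2):
        K[i, i], K[i, i + 1], K[i, i + 2] = 1.0, -2.0, 1.0
    # First-order condition of the penalized least-squares problem
    return np.linalg.solve(np.eye(n) + lam * K.T @ K, y)

def output_gap(log_gdp, lam=1600.0):
    """Output gap in percent: deviation of log GDP from its HP trend."""
    y = np.asarray(log_gdp, dtype=float)
    return 100.0 * (y - hp_filter(y, lam))

# Illustrative quarterly series: steady trend growth with a recession dip
t = np.arange(40)
log_gdp = 0.005 * t + np.where((t > 20) & (t < 25), -0.03, 0.0)
gap = output_gap(log_gdp)
print(gap[22])  # negative during the simulated recession
```

The choice of detrending method matters for the cross-country comparisons the article makes; HP filtering is only one of several conventions.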

Relevância:

60.00%

Publicador:

Resumo:

The first part seeks to interpret the cyclical fluctuations of the US and of the most advanced industrial countries, the G7, in a financially far more globalized world economy, over a longer period (1970-2010) and a shorter one (2001-2010). It asks, above all, to what extent the coming of the severe financial crisis and output loss could have been foreseen. Further, in light of the empirical evidence accumulated up to 2011, can one speak of a single crisis trajectory, broadly characteristic of the advanced industrial countries (the G7) as a whole and identifiable in the major economies as well? Can universally appearing changes be identified in output, labor markets, consumption and investment that fit well with earlier experience, no less than with the predictions of the familiar macro models? The answer is no. Neither in the characteristics of the crisis trajectory and the pace of deterioration in macroeconomic performance, nor in the depth and duration of the downturn, did the advanced countries examined show well-identifiable common features that fit readily into the existing theoretical frameworks. The course and depth of the crisis varied widely across the G7 group. Earlier interpretations of crises proved inadequate, especially as regards the role of financial channels, the significance and mechanisms of international cyclical interdependence, and the global transmission of the crisis. The study reviews the frequently cited works of the empirical literature on crises and macroeconomic shocks held to be relevant in light of interpretations of financial globalization. It then assesses the performance of the US economy in recessionary periods in a long historical investigation spanning the 60 years since World War II, so that the severity of the recent crisis can be placed in perspective, at least as regards the order of magnitude of changes in the main macro variables.
The explanation of the persistent output gap and of the labor-market deviations involved different elements in the US, Japan and Germany. The distorting, shock-generating mechanisms arising in the financial channels, which materially affect growth and the business cycle, are not entirely new in the US. Contrary to the received macroeconomic view, the indebtedness indicators of the private sector, in carrying the federal government's debt burden, have shown no close, unidirectional (negative) relationship with growth and recessions over the past 30 years. The second part examines the possibilities of national adjustment after financial globalization, with particular regard to small open economies, and thus to Hungary. The study discusses two important questions of global financial processes: the economic essence of the gains to be had from the increased liberalization of international capital flows, and the question of the "adequate" exchange-rate regime best suited to increased international capital mobility. The conclusions are partly theoretical, partly practice-oriented. The study confirms the claim that the questions of capital-account liberalization and of the exchange-rate policy to be pursued remain highly problematic to this day. Neither on capital-account liberalization nor on the adequate exchange-rate regime to be adopted can one speak of a unified position that is theoretically well founded in every respect. The so-called "impossible trinity", the particular difficulty of pursuing external and domestic goals simultaneously, applies with even greater force in small open economies, and thus in the Hungarian economy. Because of the high degree of international financial integration, it is not possible, with the traditional instruments of interest-rate and fiscal stimulus measures, to regulate domestic and foreign credit demand, or the business cycle, with simultaneous steps that point in the same direction or at least do not weaken one another. The difficulties of interest-rate policy and of forint and foreign-currency lending illustrate this with particular force in Hungary as well.
At the same time, economic policy at any given moment cannot escape the compulsion to find sustainable proportions between domestic and external goals even in a changing global financial environment. There is, however, no "royal road" for economic policy. This also holds for the role of the central bank in managing the system-level banking risks caused by Hungary's swollen foreign-currency debt.

Relevância:

60.00%

Publicador:

Resumo:

This article examines Hungarian monetary policy from the standpoint of whether, and if so how, it took country risk into account in its interest-rate decisions. To answer the question we use the most common tool for analyzing monetary policy: we estimate Taylor rules describing the country's monetary policy. The estimation was carried out with several risk measures, using several different Taylor rules. In the sensitivity analysis we also applied measures of inflation and the output gap other than those in the baseline specification. Our results show that the interest-rate decisions of the Magyar Nemzeti Bank are well described by a flexible inflation-targeting regime: in the Taylor rule, the deviation of inflation from its target plays a significant role, as does, in some of the rules, the output gap. In addition, the decision-makers took country risk into account, responding to its increase by raising the interest rate. Inserting country risk into the Taylor rule can substantially improve the rule's fit if an appropriate risk measure is chosen. _____ The paper investigates whether, and if so how, Hungarian monetary policy has considered country risk in its decisions. The answer was sought through the commonest method of analysing a country's monetary policy: estimating Taylor rules that describe it. The estimation was performed using several risk indicators and various types of Taylor rules. As a sensitivity analysis, indicators of inflation and the output gap other than those in the base rule were also employed. This showed that the interest-rate decisions of the National Bank of Hungary can be well described by a flexible inflation-targeting regime: in the Taylor rules, the deviation of inflation from its target has a significant role, and in some of the rules the output gap is also significant. The decision-makers also considered country risk and responded to an increase in it by raising interest rates. 
Inserting country risk into the Taylor rule can improve the model's fit substantially when an appropriate risk measure is chosen.
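The estimation described above can be sketched as an ordinary least squares fit of a Taylor rule augmented with a risk term. The coefficients and the synthetic data below are illustrative assumptions; the paper's actual specifications and risk measures differ:

```python
import numpy as np

def estimate_taylor_rule(rate, infl_gap, output_gap, risk):
    """OLS estimate of i_t = c + a*(pi_t - pi*) + b*ygap_t + g*risk_t.
    'risk' stands in for a country-risk indicator (e.g. a sovereign spread);
    which indicator to use is a modeling choice, as in the paper."""
    X = np.column_stack([np.ones_like(rate), infl_gap, output_gap, risk])
    beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
    return beta  # [constant, inflation response, gap response, risk response]

# Synthetic data in which the bank raises rates with inflation and with risk
rng = np.random.default_rng(0)
n = 200
infl_gap = rng.normal(0, 1, n)
ygap = rng.normal(0, 1, n)
risk = rng.normal(0, 1, n)
rate = 6.0 + 1.5 * infl_gap + 0.5 * ygap + 0.8 * risk + rng.normal(0, 0.1, n)

const, a, b, g = estimate_taylor_rule(rate, infl_gap, ygap, risk)
print(round(g, 2))  # recovers a positive response to country risk
```

Comparing the fit of this regression with and without the `risk` column is the essence of the improvement-in-fit claim in the abstract.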

Relevância:

60.00%

Publicador:

Resumo:

At the lower bound on interest rates, forward guidance is a substitute for interest-rate policy: an unconventional monetary policy tool for influencing expectations and money-market yields. The central bank may signal to market participants that the low interest-rate environment will persist (Delphic type), or it may commit itself to this (Odyssean type). In the latter case the central bank's reaction function changes: the prominent role in interest-rate decisions is played not by the deviation of the inflation outlook from the inflation target, or by the output gap, but by the evolution of some state variable, or by the time factor. If the change in the reaction function is credible, yields can be expected to fall. If the central bank sets, as the condition for maintaining the interest-rate level, values of the state variables at which it would not have raised rates anyway under the rules of inflation targeting, then forward guidance is "cheap talk" and the effect on yields may fail to materialize. It cannot yet be decided whether forward guidance will evolve from a temporary substitute for interest-rate policy into a lasting complement to it. _____ Forward guidance is a substitute for interest-rate policy at the zero lower bound: an unconventional monetary policy instrument intended to influence market yields and expectations. The central bank may give signals (forecasts) to the market that a low interest-rate environment will be maintained for a long time (Delphic type), or it may commit itself to doing so (Odyssean type). In the latter case the reaction function changes: instead of the inflation outlook and the output gap, the main role in central bank rate decisions is played by the evolution of given macroeconomic state variables, or by the time factor. If the change in the reaction function is credible, a drop in yields is to be expected. 
Forward guidance is just cheap talk if the state-variable values set as conditions for keeping interest rates unchanged are ones that, under the rules of inflation targeting, would not have triggered an interest-rate increase anyway; in that case, no impact on yields may occur. For the time being it cannot be decided whether forward guidance is a transitory or a lasting instrument of monetary policy.
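The contrast between the two reaction functions described above can be sketched as follows. The 1.5 and 0.5 response coefficients, the neutral rate, and the unemployment threshold are illustrative assumptions, not values from the article:

```python
# A stylized contrast between a standard Taylor-type reaction function and
# Odyssean, state-contingent forward guidance: under guidance the policy rate
# is pinned at the lower bound until a state variable (here, unemployment)
# crosses a threshold, regardless of what the Taylor rule prescribes.
def taylor_rate(infl_gap, output_gap, neutral=2.0):
    """Standard rule: respond to the inflation gap and the output gap."""
    return max(0.0, neutral + 1.5 * infl_gap + 0.5 * output_gap)

def guidance_rate(infl_gap, output_gap, unemployment, threshold=6.5):
    """Odyssean guidance: the commitment binds while unemployment > threshold."""
    if unemployment > threshold:
        return 0.0  # stay at the lower bound, ignoring the Taylor prescription
    return taylor_rate(infl_gap, output_gap)

# Recovery scenario: the Taylor rule would already hike; guidance keeps rates low
print(taylor_rate(0.5, -1.0))          # 2.25: the rule prescribes a positive rate
print(guidance_rate(0.5, -1.0, 7.8))   # 0.0: the commitment overrides it
```

The "cheap talk" case in the abstract corresponds to setting `threshold` so loosely that `guidance_rate` never differs from `taylor_rate`.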

Relevância:

60.00%

Publicador:

Resumo:

Most research on stock prices is based on the present value model or the more general consumption-based model. When applied to real economic data, both are found unable to account for both the stock price level and its volatility. The three essays here attempt both to build a more realistic model and to check whether there is still room for bubbles in explaining fluctuations in stock prices. In the second chapter, several innovations are simultaneously incorporated into the traditional present value model in order to produce more accurate model-based fundamental prices. These innovations comprise replacing the narrower traditional dividends that are more commonly used with broad dividends, a nonlinear artificial neural network (ANN) forecasting procedure for these broad dividends instead of the more common linear forecasting models, and a stochastic discount rate in place of a constant discount rate. Empirical results show that this model predicts fundamental prices better than alternative models using a linear forecasting process, narrow dividends, or a constant discount factor. Nonetheless, actual prices remain largely detached from fundamental prices, and the bubble-like deviations are found to coincide with business cycles. The third chapter examines possible cointegration of stock prices with fundamentals and non-fundamentals. The output gap is introduced to form the non-fundamental part of stock prices. I use a trivariate vector autoregression (TVAR) model and a single-equation model to run cointegration tests between these three variables. Neither of the cointegration tests shows strong evidence of explosive behavior in the DJIA and S&P 500 data. I then apply a sup augmented Dickey-Fuller test to check for the existence of periodically collapsing bubbles in stock prices. Such bubbles are found in the S&P data during the late 1990s. 
Employing the econometric tests from the third chapter, I continue in the fourth chapter to examine whether bubbles exist in the stock prices of conventional economic sectors on the New York Stock Exchange. The 'old economy' as a whole is not found to have bubbles, but periodically collapsing bubbles are found in the Materials and Telecommunication Services sectors and the Real Estate industry group.
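The sup augmented Dickey-Fuller idea used in the third chapter can be illustrated with a simplified version (no lag augmentation, fixed sample start): the statistic is the maximum of ADF t-statistics over forward-expanding windows, and an explosive segment pushes it upward. The series, growth rate, and window settings below are illustrative, not the DJIA or S&P data:

```python
import numpy as np

def adf_tstat(y):
    """t-statistic on rho in: diff(y)_t = alpha + rho * y_{t-1} + e_t (no lags)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def sadf(y, min_window=20):
    """Sup-ADF statistic: max of ADF t-stats over forward-expanding windows.
    Explosive (bubble-like) behavior pushes this above its right-tailed
    critical value, which is the basis of the bubble test."""
    return max(adf_tstat(y[:w]) for w in range(min_window, len(y) + 1))

rng = np.random.default_rng(1)
random_walk = 50 + np.cumsum(rng.normal(0, 1, 120))          # no bubble
bubble = np.concatenate([random_walk[:80],                    # same start ...
                         random_walk[79] * 1.05 ** np.arange(1, 41)])  # ... then explosive
print(sadf(random_walk) < sadf(bubble))  # True: the explosive tail raises the stat
```

The published test uses lag augmentation and simulated critical values; this sketch only shows the expanding-window mechanics.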

Relevância:

30.00%

Publicador:

Resumo:

Introduction: Due to their high spatial resolution, diodes are often used for small-field relative output factor measurements. However, a field-size-specific correction factor [1] is required to correct for diode detector over-response at small field sizes. A recent Monte Carlo based study has shown that it is possible to design a diode detector that produces measured relative output factors equivalent to those in water. This is accomplished by introducing an air gap at the upstream end of the diode [2]. The aim of this study was to physically construct this diode by placing an ‘air cap’ on the end of a commercially available diode (the PTW 60016 electron diode). The output factors subsequently measured with the new diode design were compared to current benchmark small-field output factor measurements. Methods: A water-tight ‘cap’ was constructed so that it could be placed over the upstream end of the diode. The cap could be offset from the end of the diode, thus creating an air gap. The air gap width was the same as the diode width (7 mm), and the thickness of the air gap could be varied. Output factor measurements were made using square field sizes of side length from 5 to 50 mm, using a 6 MV photon beam. The set of output factor measurements was repeated with the air gap thickness set to 0, 0.5, 1.0 and 1.5 mm. The optimal air gap thickness was found in a manner similar to that proposed by Charles et al. [2]. An IBA stereotactic field diode, corrected using Monte Carlo calculated k_{Qclin,Qmsr} values [3], was used as the gold standard. Results: The optimal air thickness required for the PTW 60016 electron diode was 1.0 mm. This was close to the Monte Carlo predicted value of 1.15 mm [2]. The sensitivity of the new diode design was independent of field size (k_{Qclin,Qmsr} = 1.000 at all field sizes) to within 1%. Discussion and conclusions: The work of Charles et al. [2] has been proven experimentally. 
An existing commercial diode has been converted into a correction-less small field diode by the simple addition of an ‘air cap’. The method of applying a cap to create the new diode leads to the diode being dual purpose, as without the cap it is still an unmodified electron diode.
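The correction step referred to above multiplies each measured reading ratio by the field-size-specific factor k_{Qclin,Qmsr}; a diode is "correction-less" when that factor equals 1.000 at every field size, so its raw ratios already match output factors in water. A sketch with hypothetical numbers (not the measured values from this study):

```python
# Hypothetical readings and correction factors for illustration only.
# readings: detector reading relative to the 50 mm field; k_factors: the
# field-size-specific correction k_{Qclin,Qmsr} applied to each field size.
readings = {5: 0.672, 10: 0.790, 30: 0.900, 50: 1.000}   # field side (mm): M / M(50 mm)
k_factors = {5: 0.962, 10: 0.985, 30: 0.998, 50: 1.000}  # hypothetical k values

# Corrected relative output factor = reading ratio * correction factor
output_factors = {s: readings[s] * k_factors[s] for s in readings}

# A "correction-less" diode is one for which k = 1.000 at every field size,
# so its raw reading ratio already equals the output factor in water.
print(output_factors[5])
```

The 1 mm air cap in the study is what drives each k value toward 1.000, removing the need for this multiplication.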

Relevância:

30.00%

Publicador:

Resumo:

Aim: A recent Monte Carlo based study has shown that it is possible to design a diode that measures small-field output factors equivalent to those in water. This is accomplished by placing an appropriately sized air gap above the silicon chip (1), with experimental results subsequently confirming that a particular Monte Carlo design was accurate (2). The aim of this work was to test whether a new correction-less diode could be designed using an entirely experimental methodology. Methods: All measurements were performed on a Varian iX at a depth of 5 cm, an SSD of 95 cm and field sizes of 5, 6, 8, 10, 20 and 30 mm. Firstly, the experimental transfer of k_{Qclin,Qmsr} from a commonly used diode detector (IBA stereotactic field diode (SFD)) to another diode detector (Sun Nuclear unshielded diode (EDGEe)) was tested. These results were compared to Monte Carlo calculated values for the EDGEe. Secondly, the air gap above the EDGEe silicon chip was optimised empirically. Nine different air gap “tops” were placed above the EDGEe (air depth = 0.3, 0.6, 0.9 mm; air width = 3.06, 4.59, 6.13 mm). The sensitivity of the EDGEe was plotted as a function of air gap thickness for the field sizes measured. Results: The transfer of k_{Qclin,Qmsr} from the SFD to the EDGEe was correct to within the simulation and measurement uncertainties. The EDGEe detector can be made “correction-less” for field sizes of 5 and 6 mm, but was ∼2% from being “correction-less” at field sizes of 8 and 10 mm. Conclusion: Different materials perturb small fields in different ways. A detector is only “correction-less” if all these perturbations happen to cancel out. Designing a “correction-less” diode is a complicated process, so it is reasonable to expect that Monte Carlo simulations should play an important role.

Relevância:

30.00%

Publicador:

Resumo:

The stabilization of dynamic switched control systems is addressed through an operator-based formulation. It is assumed that the controlled object and the controller are described by sequences of closed operator pairs (L, C) on a Hilbert space H of the input and output spaces, and stabilization is related to the existence of an admissible, bounded inverse of the resulting input-output operator. The technical mechanism used to obtain the results is the appropriate use of the fact that closed operators that are sufficiently close to bounded operators, in terms of the gap metric, are also bounded. That approach is applied to the operators describing the input-output relations in switched feedback control systems so as to guarantee closed-loop stabilization.

Relevância:

30.00%

Publicador:

Resumo:

In 1966, Roy Geary, Director of the ESRI, noted that “the absence of any kind of import and export statistics for regions is a grave lacuna” and further noted that if regional analyses were to be developed, then regional Input-Output Tables must be put on the “regular statistical assembly line”. Forty-five years later, the lacuna lamented by Geary still exists and remains the most significant challenge to the construction of regional Input-Output Tables in Ireland. The continued paucity of sufficient regional data to compile effective regional Supply and Use and Input-Output Tables has retarded the capacity to construct sound regional economic models and to provide a robust evidence base with which to formulate and assess regional policy. This study makes a first step towards addressing this gap by presenting the first set of fully integrated, symmetric Supply and Use and domestic Input-Output Tables compiled for the NUTS 2 regions in Ireland: the Border, Midland and Western region and the Southern and Eastern region. These tables are general-purpose in nature and are fully consistent with the official national Supply and Use and Input-Output Tables and with the regional accounts. The tables are constructed using a survey-based, bottom-up approach rather than modelling techniques, yielding more robust and credible tables. They are used to present a descriptive statistical analysis of the two administrative NUTS 2 regions in Ireland, drawing particular attention to the underlying structural differences in regional trade balances and in the composition of Gross Value Added in those regions. By deriving regional employment multipliers, Domestic Demand Employment matrices are constructed to quantify and illustrate the supply-chain impact on employment. In the final part of the study, the predictive capability of the Input-Output framework is tested over two time periods. 
For both periods, the static Leontief production function assumptions are relaxed to allow for labour productivity. Comparative results from this experiment are presented.
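The multiplier derivation mentioned above follows the standard Leontief framework: gross output x solves x = Ax + f, so x = (I - A)^-1 f, and employment effects per unit of final demand come from combining employment coefficients with the Leontief inverse. A sketch with a hypothetical three-sector example (the coefficients are illustrative, not the Irish tables):

```python
import numpy as np

# Hypothetical 3-sector economy: A is the technical-coefficients matrix
# (intermediate input per unit of gross output), f is final demand,
# e is direct employment (jobs) per unit of gross output.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
f = np.array([100.0, 150.0, 80.0])
e = np.array([0.008, 0.012, 0.020])

L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^-1
x = L @ f                          # gross output required to meet final demand
emp_per_demand = e @ L             # direct + indirect jobs per unit of final demand
print(np.round(emp_per_demand, 4))
```

Relaxing the static Leontief assumptions for labour productivity, as the study does, amounts to letting `e` vary between the two periods.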

Relevância:

30.00%

Publicador:

Resumo:

Over the last 15 years, the acceleration in media consolidation has presented a series of policy challenges around diversity of editorial output. While policy debates on national ownership limits and other regulatory interventions are important, developments at the local level are often marginalised. And yet, the direction of travel—towards more consolidation and more deregulation—has arguably been more debilitating for democracy at the local level, where the vast majority of citizens interact with hospitals, schools, transport systems and local councils. The decline of local media—including, in some towns, the wholesale disappearance of local newspapers—leaves citizens starved of information and local institutions less accountable. This article uses an existing conceptual framework for assessing whether and how journalism makes a real-life contribution to democratic life at the local level. Against this normative framework, it then assesses the contribution of hyperlocal media sites to local democracy. We present findings from the most extensive survey of the hyperlocal sector to date, a collaboration with research partners at Cardiff and Birmingham City Universities and Talk About Local, which analysed online questionnaires from over 180 local online media initiatives. Our research offers a unique insight into the funding, operational problems and sustainability of community media sites, and suggests they have the potential to fulfil a vital democratic and civic role. These data inform our conclusions and recommendations for policy initiatives that would invigorate hyperlocal sites and therefore provide a real alternative for otherwise democratically impoverished local communities.

Relevância:

30.00%

Publicador:

Resumo:

A low-inductance, triggered spark gap switch suitable for a high-current fast discharge system has been developed. The details of the design and fabrication of this pressurized spark gap, which uses only commonly available materials, are described. A transverse-discharge Blumlein-driven N2 laser incorporating this device gives a peak output power of 700 kW with a FWHM of 3 ns and an efficiency of 0.51%, which is remarkably high for a pulsed nitrogen laser system.

Relevância:

30.00%

Publicador:

Resumo:

Gap junctions between neurons form the structural substrate for electrical synapses. Connexin 36 (Cx36, and its non-mammalian ortholog connexin 35) is the major neuronal gap junction protein in the central nervous system (CNS), and contributes to several important neuronal functions including neuronal synchronization, signal averaging, network oscillations, and motor learning. Connexin 36 is strongly expressed in the retina, where it is an obligatory component of the high-sensitivity rod photoreceptor pathway. A fundamental requirement of the retina is to adapt to broadly varying inputs in order to maintain a dynamic range of signaling output. Modulation of the strength of electrical coupling between networks of retinal neurons, including the Cx36-coupled AII amacrine cell in the primary rod circuit, is a hallmark of retinal luminance adaptation. However, very little is known about the mechanisms regulating dynamic modulation of Cx36-mediated coupling. The primary goal of this work was to understand how cellular signaling mechanisms regulate coupling through Cx36 gap junctions. We began by developing and characterizing phospho-specific antibodies against key regulatory phosphorylation sites on Cx36. Using these tools we showed that phosphorylation of Cx35 in fish models varies with light adaptation state, and is modulated by acute changes in background illumination. We next turned our focus to the well-studied and readily identifiable AII amacrine cell in mammalian retina. Using this model we showed that increased phosphorylation of Cx36 is directly related to increased coupling through these gap junctions, and that the dopamine-stimulated uncoupling of the AII network is mediated by dephosphorylation of Cx36 via protein kinase A-stimulated protein phosphatase 2A activity. We then showed that increased phosphorylation of Cx36 on the AII amacrine network is driven by depolarization of presynaptic ON-type bipolar cells as well as background light increments. 
This increase in phosphorylation is mediated by activation of extrasynaptic NMDA receptors associated with Cx36 gap junctions on AII amacrine cells and by Ca2+-calmodulin-dependent protein kinase II activation. Finally, these studies indicated that coupling is regulated locally at individual gap junction plaques. This work provides a framework for future study of regulation of Cx36-mediated coupling, in which increased phosphorylation of Cx36 indicates increased neuronal coupling.

Relevância:

30.00%

Publicador:

Resumo:

Las fuentes de alimentación de modo conmutado (SMPS en sus siglas en inglés) se utilizan ampliamente en una gran variedad de aplicaciones. La tarea más difícil para los diseñadores de SMPS consiste en lograr simultáneamente la operación del convertidor con alto rendimiento y alta densidad de energía. El tamaño y el peso de un convertidor de potencia está dominado por los componentes pasivos, ya que estos elementos son normalmente más grandes y más pesados que otros elementos en el circuito. Para una potencia de salida dada, la cantidad de energía almacenada en el convertidor que ha de ser entregada a la carga en cada ciclo de conmutación, es inversamente proporcional a la frecuencia de conmutación del convertidor. Por lo tanto, el aumento de la frecuencia de conmutación se considera un medio para lograr soluciones más compactas con los niveles de densidad de potencia más altos. La importancia de investigar en el rango de alta frecuencia de conmutación radica en todos los beneficios que se pueden lograr: además de la reducción en el tamaño de los componentes pasivos, el aumento de la frecuencia de conmutación puede mejorar significativamente prestaciones dinámicas de convertidores de potencia. Almacenamiento de energía pequeña y el período de conmutación corto conducen a una respuesta transitoria del convertidor más rápida en presencia de las variaciones de la tensión de entrada o de la carga. Las limitaciones más importantes del incremento de la frecuencia de conmutación se relacionan con mayores pérdidas del núcleo magnético convencional, así como las pérdidas de los devanados debido a los efectos pelicular y proximidad. También, un problema potencial es el aumento de los efectos de los elementos parásitos de los componentes magnéticos - inductancia de dispersión y la capacidad entre los devanados - que causan pérdidas adicionales debido a las corrientes no deseadas. 
Otro factor limitante supone el incremento de las pérdidas de conmutación y el aumento de la influencia de los elementos parásitos (pistas de circuitos impresos, interconexiones y empaquetado) en el comportamiento del circuito. El uso de topologías resonantes puede abordar estos problemas mediante el uso de las técnicas de conmutaciones suaves para reducir las pérdidas de conmutación incorporando los parásitos en los elementos del circuito. Sin embargo, las mejoras de rendimiento se reducen significativamente debido a las corrientes circulantes cuando el convertidor opera fuera de las condiciones de funcionamiento nominales. A medida que la tensión de entrada o la carga cambian las corrientes circulantes incrementan en comparación con aquellos en condiciones de funcionamiento nominales. Se pueden obtener muchos beneficios potenciales de la operación de convertidores resonantes a más alta frecuencia si se emplean en aplicaciones con condiciones de tensión de entrada favorables como las que se encuentran en las arquitecturas de potencia distribuidas. La regulación de la carga y en particular la regulación de la tensión de entrada reducen tanto la densidad de potencia del convertidor como el rendimiento. Debido a la relativamente constante tensión de bus que se encuentra en arquitecturas de potencia distribuidas los convertidores resonantes son adecuados para el uso en convertidores de tipo bus (transformadores cc/cc de estado sólido). En el mercado ya están disponibles productos comerciales de transformadores cc/cc de dos puertos que tienen muy alta densidad de potencia y alto rendimiento se basan en convertidor resonante serie que opera justo en la frecuencia de resonancia y en el orden de los megahercios. Sin embargo, las mejoras futuras en el rendimiento de las arquitecturas de potencia se esperan que vengan del uso de dos o más buses de distribución de baja tensión en vez de una sola. 
With this in mind, the main objective of this thesis is to apply the concept of the series resonant converter operating at its optimum point to a new bidirectional multiple-port dc/dc transformer that addresses the future needs of power architectures. The new bidirectional multiple-port dc/dc transformer is based on the series-resonant-converter topology and reduces the number of magnetic components to just one. Soft switching of the semiconductor switches makes operation at high switching frequencies possible, allowing high power densities to be reached. Potential problems concerning parasitic inductances are eliminated, since they are absorbed into the circuit elements. The converter features very good inherent load and cross regulation thanks to its small intrinsic output impedances. The multiple-port dc/dc transformer operates at a fixed switching frequency and without input-voltage regulation. This thesis analyzes the operation and design of the topology and of the transformer theoretically and in depth, modeling them in detail in order to optimize the design. The experimental results match those provided by the models very accurately. The effects of the parasitic elements are critical and influence different aspects of the converter: output-voltage regulation, conduction losses, cross regulation, etc. Design criteria are also derived for selecting the values of the resonant capacitors to achieve different design goals, such as minimum conduction losses, elimination of cross regulation, or zero-current turn-off at full load of all the secondary-side bridges.
Zero-voltage turn-on of all the switches is achieved by adjusting the air gap to obtain a finite magnetizing inductance in the transformer. In addition, a change in the driving signals is proposed so that zero-current turn-off operation of all the secondary-side bridges becomes independent of load variations and of the tolerances of the resonant capacitors. The feasibility of the proposed topology is verified through extensive simulation and experimental work. The optimization of the high-frequency transformer design is also addressed, since it is the bulkiest component in the converter. The impact of the dead-time duration and the air-gap size on converter efficiency is analyzed in a design example of a three-port dc/dc transformer of several hundred watts. The final part of this research considers the implementation and performance analysis of a four-port dc/dc transformer for a very-low-voltage application of tens of watts, without isolation requirements.

Abstract

Switch-mode power supplies (SMPS) are widely used in a great variety of applications. The most challenging issue for SMPS designers is to simultaneously achieve high-efficiency operation at high power density. The size and weight of a power converter are dominated by the passive components, since these elements are normally larger and heavier than the other elements in the circuit. For a given output power, the amount of energy stored in the converter that must be delivered to the load in each switching cycle is inversely proportional to the converter's switching frequency. Therefore, increasing the switching frequency is considered a means to achieve more compact solutions at higher power-density levels.
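The inverse relation between per-cycle stored energy and switching frequency can be illustrated with a short calculation (a minimal sketch; the power and frequency values are hypothetical examples, not taken from this thesis):

```python
def energy_per_cycle(p_out_w: float, f_sw_hz: float) -> float:
    """Energy (J) that must be transferred to the load each switching cycle.

    For a given output power P, the converter must hand over E = P / f_sw
    per cycle, so raising the switching frequency tenfold cuts the energy
    the passive components have to store by the same factor.
    """
    return p_out_w / f_sw_hz

# Hypothetical 100 W converter at two switching frequencies.
e_100khz = energy_per_cycle(100.0, 100e3)  # 1.0 mJ per cycle
e_1mhz = energy_per_cycle(100.0, 1e6)      # 0.1 mJ per cycle
print(e_100khz, e_1mhz)
```

Smaller per-cycle energy is what permits smaller magnetic and capacitive components, which is the compactness argument made above.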
The importance of investigating the high-switching-frequency range comes from all the benefits that can be achieved. Besides the reduction in the size of the passive components, increasing the switching frequency can significantly improve the dynamic performance of power converters. Small energy storage and a short switching period lead to a faster transient response of the converter to input-voltage and load variations. The most important limitations on pushing up the switching frequency are the increased loss of conventional magnetic cores as well as the winding loss due to the skin and proximity effects. A further potential problem is the increased magnetic parasitics (leakage inductance and inter-winding capacitance), which cause additional loss due to unwanted currents. Higher switching loss and the increased influence of printed circuit boards, interconnections and packaging on circuit behavior are another limiting factor. Resonant power conversion can address these problems by using soft-switching techniques to reduce switching loss while incorporating the parasitics into the circuit elements. However, the performance gains are significantly reduced by circulating currents when the converter operates outside its nominal operating conditions: as the input voltage or the load changes, the circulating currents increase compared with those at nominal conditions. Many of the potential gains from operating resonant converters at higher switching frequency can be obtained if they are employed in applications with favorable input-voltage conditions, such as those found in distributed power architectures. Load regulation, and particularly input-voltage regulation, reduces a converter's power density and efficiency. Owing to the relatively constant bus voltage in distributed power architectures, resonant converters are well suited for bus-voltage conversion (solid-state dc/dc transformation).
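The winding-loss limitation mentioned above stems from the skin effect confining high-frequency current to a thin surface layer of the conductor. A hedged illustration using the standard skin-depth formula delta = sqrt(rho / (pi * f * mu)) (the copper resistivity and the frequencies are textbook example values, not design data from this thesis):

```python
import math

def skin_depth_m(f_hz: float, rho_ohm_m: float = 1.68e-8,
                 mu_r: float = 1.0) -> float:
    """Skin depth (m): depth at which current density falls to 1/e.

    delta = sqrt(rho / (pi * f * mu_r * mu0)). Conductors much thicker
    than delta waste copper at high frequency, which is why HF windings
    use litz wire or thin foil.
    """
    mu0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)
    return math.sqrt(rho_ohm_m / (math.pi * f_hz * mu_r * mu0))

# Copper skin depth shrinks with the square root of frequency.
print(skin_depth_m(100e3))  # ~0.21 mm at 100 kHz
print(skin_depth_m(1e6))    # ~65 um at 1 MHz
```

This square-root shrinkage is why moving from hundreds of kilohertz into the megahertz range makes the winding-design problem markedly harder.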
Unregulated two-port dc/dc transformer products that achieve very high power-density and efficiency figures, based on a series resonant converter operating exactly at the resonant frequency and in the megahertz range, are already available on the market. However, further efficiency improvements in power architectures are expected to come from using two or more separate low-voltage distribution buses instead of a single one. The principal objective of this dissertation is to apply the concept of the series resonant converter operating at its optimum point to a novel bidirectional multiple-port dc/dc transformer that addresses the future needs of power architectures. The new multiple-port dc/dc transformer is based on a series-resonant-converter topology and reduces the number of magnetic components to only one. Soft-switching commutations allow high switching frequencies to be adopted and high power densities to be achieved. Potential problems regarding stray inductances are eliminated, since they are absorbed into the circuit elements. The converter features very good inherent load and cross regulation owing to its small output impedances. The proposed multiple-port dc/dc transformer operates at a fixed switching frequency without line regulation. An extensive theoretical analysis of the topology and detailed modeling are provided and compared with the experimental results. The relationships showing how the output-voltage regulation and the conduction losses are affected by the circuit parasitics are derived. Methods to select the resonant capacitor values so as to achieve different design goals, such as minimum conduction losses, elimination of cross regulation, or ZCS operation at full load of all the secondary-side bridges, are discussed. ZVS turn-on of all the switches is achieved by relying on the finite magnetizing inductance of the transformer.
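Operating a series resonant converter exactly at its resonant frequency implies tuning the tank so that f_r = 1 / (2*pi*sqrt(L_r * C_r)). A minimal sketch of that sizing step (the leakage inductance and target frequency below are hypothetical example values, not the thesis design):

```python
import math

def resonant_cap_f(f_r_hz: float, l_r_h: float) -> float:
    """Resonant capacitance (F) for a series L-C tank tuned to f_r.

    From f_r = 1 / (2*pi*sqrt(L_r*C_r)):  C_r = 1 / ((2*pi*f_r)**2 * L_r).
    In a resonant dc/dc transformer the tank inductance can be the
    transformer's leakage inductance, so that parasitic is absorbed
    into the circuit rather than causing extra loss.
    """
    return 1.0 / ((2 * math.pi * f_r_hz) ** 2 * l_r_h)

# Hypothetical example: 1 MHz tank built around 100 nH of leakage inductance.
c_r = resonant_cap_f(1e6, 100e-9)
# Round-trip check: the chosen capacitor reproduces the target frequency.
f_check = 1.0 / (2 * math.pi * math.sqrt(100e-9 * c_r))
print(c_r, f_check)  # ~253 nF, ~1.0 MHz
```

The same relation underlies the capacitor-selection criteria discussed in the thesis, where the capacitor values are additionally chosen to trade off conduction loss, cross regulation and ZCS behavior.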
A change in the driving pattern is proposed to make ZCS operation of all the secondary-side bridges independent of load variations and resonant-capacitor tolerances. The feasibility of the proposed topology is verified through extensive simulation and experimental work. The optimization of the high-frequency transformer design is also addressed in this work, since it is the bulkiest component in the converter. The impact of the dead-time interval and the gap size on overall converter efficiency is analyzed on the design example of a three-port dc/dc transformer of several hundred watts of output power for high-voltage applications. The final part of this research considers the implementation and performance analysis of a four-port dc/dc transformer in a low-voltage application of tens of watts of output power, without isolation requirements.