884 results for Performance analysis


Relevance:

60.00%

Publisher:

Abstract:

The development of the digital electronics market is founded on the continuous reduction of transistor size, to reduce area, power and cost and to increase the computational performance of integrated circuits. This trend, known as technology scaling, is approaching the nanometer scale. The lithographic process in the manufacturing stage is becoming more uncertain as transistor size scales down, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between the leakage current and the threshold voltage is limiting the scaling of the threshold and supply voltages, increasing the power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects are no longer addressable at the process level alone. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches called Design for Manufacturability (DFM) and Design for Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and systems able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) new analysis algorithms able to predict the thermal behavior of the system and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system;
iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with future challenges, for instance: i) thermal management strategies that increase the reliability and lifetime of devices by acting on tunable parameters such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as adaptive supply voltage (ASV), adaptive body bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library, which has been integrated into a cycle-accurate NoC simulator and an FPGA-based NoC simulator. The results have shown that a careful layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates the integration of thermal analysis into the early stages of embedded NoC design. Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor. The results confirmed the capability of self-timed logic to increase manufacturability and reliability.
Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. We found that low-swing links exhibit superior robustness to systematic process variation and respond well to compensation techniques such as ASV and ABB. Hence low-swing signaling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work demonstrates the advantage of integrating a statistical process variation analysis tool into the early stages of the design flow.
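As a rough illustration of what such a statistical analysis does, the sketch below runs a Monte Carlo timing-yield estimate for a logic path under combined random and systematic variation. The delay model, sigma values and clock period are invented for illustration; this is not the thesis's actual tool.

```python
import random

def path_delay(nominal_delays, sigma_random, systematic_shift):
    """One Monte Carlo sample of a path delay: every gate gets its own
    random variation plus a shared systematic (die-to-die) shift."""
    return sum(d * (1.0 + random.gauss(0.0, sigma_random) + systematic_shift)
               for d in nominal_delays)

def timing_yield(nominal_delays, clock_period_ps, sigma_random=0.05,
                 sigma_systematic=0.03, samples=20000, seed=1):
    """Fraction of sampled dies whose critical path meets the clock."""
    random.seed(seed)
    passing = 0
    for _ in range(samples):
        shift = random.gauss(0.0, sigma_systematic)
        if path_delay(nominal_delays, sigma_random, shift) <= clock_period_ps:
            passing += 1
    return passing / samples

# 10 gates of 100 ps nominal each; a 10% clock margin gives high yield
print(timing_yield([100.0] * 10, 1100.0))
```

Shrinking the clock margin (or raising the systematic sigma) immediately lowers the estimated yield, which is the kind of trade-off such a tool exposes early in the flow.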


Within scientific research in sport, Performance Analysis is attracting growing interest. Performance Analysis means the analysis of competitive performance from both the biomechanical and the notational-analysis point of view. This thesis analyzes competitive performance in table tennis through notational analysis, starting from the study of the most important performance indicators from a technical-tactical point of view and from their selection through a reliability study on data collection. Attention was then focused on an original technical aspect, the link between footwork and strokes, recalling that good footwork technique allows the player to move quickly in the direction of the ball to play the best stroke. Finally, the main objective of the thesis was to compare three selected categories of athletes: world-class male (M), elite European junior (J) and world-class female (F) players. Most rallies begin with a short serve to the middle of the table and continue with a push return (M) or a backhand flick (J). The following stroke is mainly a forehand topspin after a pivot step, or a backhand topspin without footwork. M and J athletes counter-attack mostly with forehand topspin against topspin, while F athletes prefer less aggressive strokes, blocking with the backhand and continuing with backhand drives. By studying the performance of athletes of different categories and genders it is possible to improve strategic choices before and during matches. Multivariate statistical analyses (log-linear models) made it possible to scientifically validate both the procedures already used in the literature and the innovative ones developed for the first time for this study.


In a world focused on the need to produce energy for a growing population while reducing atmospheric emissions of carbon dioxide, organic Rankine cycles represent a solution to fulfil this goal. This study focuses on the design and optimization of axial-flow turbines for organic Rankine cycles. From the turbine designer's point of view, most of these fluids exhibit peculiar characteristics, such as a small enthalpy drop, a low speed of sound and a large expansion ratio. A computational model for the prediction of axial-flow turbine performance is developed and validated against experimental data. The model predicts turbine performance within an accuracy of ±3%. The design procedure is coupled with an optimization process performed using a genetic algorithm, where the turbine total-to-static efficiency represents the objective function. The computational model is integrated into a wider analysis of thermodynamic cycle units by providing the optimal turbine design. First, the calculation routine is applied in the context of the Draugen offshore platform, where three heat recovery systems are compared. The turbine performance is investigated for three competing bottoming cycles: an organic Rankine cycle (operating with cyclopentane), a steam Rankine cycle and an air bottoming cycle. Findings indicate the air turbine as the most efficient solution (total-to-static efficiency = 0.89), while the cyclopentane turbine proves to be the most flexible and compact technology (2.45 ton/MW and 0.63 m3/MW). Furthermore, the study shows that, for the organic and steam Rankine cycles, the optimal design configurations for the expanders do not coincide with those of the thermodynamic cycles. This suggests the possibility of obtaining a more accurate analysis by including the computational model in the simulations of the thermodynamic cycles. Afterwards, the performance analysis is carried out by comparing three organic fluids: cyclopentane, MDM and R245fa. Results suggest MDM as the most effective fluid from the turbine performance viewpoint (total-to-total efficiency = 0.89). On the other hand, cyclopentane guarantees a greater net power output of the organic Rankine cycle (P = 5.35 MW), while R245fa represents the most compact solution (1.63 ton/MW and 0.20 m3/MW). Finally, the influence of the composition of an isopentane/isobutane mixture on both the thermodynamic cycle performance and the expander isentropic efficiency is investigated. Findings show how the mixture composition affects the turbine efficiency and hence the cycle performance. Moreover, the analysis demonstrates that the use of binary mixtures leads to an enhancement of the thermodynamic cycle performance.
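The optimization step described above couples a turbine model with a genetic algorithm whose objective is the total-to-static efficiency. A minimal sketch of that loop follows, with a toy quadratic surrogate standing in for the real turbine model; the objective shape, parameter bounds and GA settings are assumptions, not the study's actual setup.

```python
import random

def efficiency(params):
    """Toy stand-in for a turbine total-to-static efficiency model:
    peaks at 0.89 for a flow coefficient of 0.6 and a loading of 1.0."""
    flow, loading = params
    return 0.89 - 2.0 * (flow - 0.6) ** 2 - 0.5 * (loading - 1.0) ** 2

def genetic_optimize(fitness, bounds, pop_size=40, generations=60,
                     mutation=0.1, seed=7):
    """Elitist genetic algorithm: truncation selection, midpoint
    crossover, Gaussian mutation clipped to the bounds."""
    random.seed(seed)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]          # crossover
            child = [min(max(g + random.gauss(0, mutation), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]      # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = genetic_optimize(efficiency, bounds=[(0.2, 1.0), (0.5, 2.0)])
print([round(g, 2) for g in best])
```

In the real design procedure the fitness evaluation is a full mean-line turbine calculation rather than a closed-form surrogate, but the selection/crossover/mutation skeleton is the same.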


In the present thesis we address the problem of detecting and localizing, by microwave imaging (MWI), a small spherical target with characteristic electrical properties inside a volume of cylindrical shape representing the female breast. One of the main contributions of this project is the extension of the existing linear inversion algorithm from planar-slice to volume reconstruction; results obtained under the same conditions and experimental setup are reported for the two approaches. A preliminary comparison and performance analysis of the reconstruction algorithms is performed via numerical simulations in a software-created environment: a single dipole antenna illuminates the virtual breast phantom from different positions and, for each position, the corresponding scattered field value is recorded. The collected data are then exploited to reconstruct the investigation domain, along with the scatterer position, in the form of an image called a pseudospectrum. During this process the tumor is modeled as a dielectric sphere of small radius and, for electromagnetic scattering purposes, it is treated as a point-like source. To improve the performance of the reconstruction technique, we repeat the acquisition for a number of frequencies in a given range: the different pseudospectra, reconstructed from single-frequency data, are incoherently combined with the MUltiple SIgnal Classification (MUSIC) method, which returns an overall enhanced image. We exploit this multi-frequency approach to test the performance of the 3D linear inversion reconstruction algorithm while varying the source position inside the phantom and the height of the antenna plane. Analysis results and reconstructed images are then reported. Finally, we perform 3D reconstruction from experimental data gathered with the acquisition system in the microwave laboratory at DIFA, University of Bologna, for a recently developed breast-phantom prototype; the obtained pseudospectrum and performance analysis for the real model are reported.
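The incoherent multi-frequency combination step can be illustrated with a one-dimensional toy model: single-frequency pseudospectra share a peak at the true target position but have frequency-dependent sidelobes, so multiplying the normalized spectra suppresses the sidelobes while the target peak survives. All signal models and parameters below are invented for illustration and are not the thesis's simulation setup.

```python
import math
import random

def pseudospectrum(grid, target, wavelength, noise, rng):
    """Toy single-frequency pseudospectrum: a narrow peak at the target
    plus frequency-dependent sidelobes and additive noise."""
    spec = []
    for x in grid:
        d = abs(x - target)
        lobe = math.cos(2 * math.pi * d / wavelength) ** 2   # sidelobe pattern
        peak = math.exp(-(d / 0.05) ** 2)                    # target response
        spec.append(peak + 0.3 * lobe + noise * rng.random())
    return spec

def combine_incoherently(spectra):
    """Multiply normalized spectra: sidelobes at different frequencies do
    not align, so only the true-target peak survives in the product."""
    combined = [1.0] * len(spectra[0])
    for spec in spectra:
        m = max(spec)
        combined = [c * s / m for c, s in zip(combined, spec)]
    return combined

rng = random.Random(3)
grid = [i / 200 for i in range(201)]            # positions 0..1
target = 0.42
spectra = [pseudospectrum(grid, target, wl, 0.2, rng)
           for wl in (0.11, 0.17, 0.23, 0.31)]  # four acquisition frequencies
combined = combine_incoherently(spectra)
estimate = grid[combined.index(max(combined))]
print(round(estimate, 2))
```

MUSIC builds each single-frequency pseudospectrum from the noise subspace of the measured data rather than from a closed-form peak model; only the incoherent combination step is sketched here.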


This thesis presents two frameworks, a software framework and a hardware core manager framework, which together can be used to develop a processing platform using a distributed system of field-programmable gate array (FPGA) boards. The software framework provides users with the ability to easily develop applications that exploit the processing power of FPGAs, while the hardware core manager framework gives users the ability to configure and interact with multiple FPGA boards and/or hardware cores. This thesis describes the design and development of these frameworks and analyzes the performance of a system that was constructed using them. The performance analysis included measuring the effect of incorporating additional hardware components into the system and comparing the system to a software-only implementation. This work draws conclusions based on the results of the performance analysis and offers suggestions for future work.


Hepatitis C virus (HCV) vaccine efficacy may crucially depend on immunogen length and coverage of viral sequence diversity. However, covering a considerable proportion of the circulating viral sequence variants would likely require long immunogens, which for the conserved portions of the viral genome would contain unnecessarily redundant sequence information. In this study, we present the design and in vitro performance analysis of a novel "epitome" approach that compresses frequent immune targets of the cellular immune response against HCV into a shorter immunogen sequence. Compression of immunological information is achieved by partially overlapping shared sequence motifs between individual epitopes. At the same time, sequence diversity coverage is provided by taking advantage of emerging cross-reactivity patterns among epitope variants, so that epitope variants associated with the broadest variant cross-recognition are preferentially included. The processing and presentation analysis of specific epitopes included in such a compressed, in vitro-expressed HCV epitome indicated effective processing of a majority of tested epitopes, although re-presentation of some epitopes may require refined sequence design. Together, the present study establishes the epitome approach as a potentially powerful tool for vaccine immunogen design, especially suitable for the induction of cellular immune responses against highly variable pathogens.
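One way to realize the overlap-based compression described above is a greedy shortest-common-superstring merge, sketched below. The 9-mer sequences are illustrative, and the real epitome design also weighs variant cross-reactivity, which this sketch ignores.

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def epitome(epitopes):
    """Greedy superstring: repeatedly merge the pair of sequences with
    the largest overlap until one compressed sequence remains."""
    seqs = list(dict.fromkeys(epitopes))        # drop exact duplicates
    while len(seqs) > 1:
        best = (0, 0, 1)
        for i, a in enumerate(seqs):
            for j, b in enumerate(seqs):
                if i != j and overlap(a, b) > best[0]:
                    best = (overlap(a, b), i, j)
        k, i, j = best
        merged = seqs[i] + seqs[j][k:]          # fuse on the shared motif
        seqs = [s for n, s in enumerate(seqs) if n not in (i, j)] + [merged]
    return seqs[0]

# Illustrative overlapping 9-mer epitope variants
eps = ["CINGVCWTV", "NGVCWTVYH", "VCWTVYHGA"]
e = epitome(eps)
print(e, len(e), "vs", sum(len(x) for x in eps))
```

Here three 9-mers (27 residues in total) compress into a 13-residue stretch in which every input epitope still occurs intact, which is exactly the redundancy saving the epitome approach exploits.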


Financial, economic, and biological data collected from cow-calf producers who participated in the Illinois and Iowa Standardized Performance Analysis (SPA) programs were used in this study. Data were collected for the 1996 through 1999 calendar years, with each herd within year representing one observation. This resulted in a final database of 225 observations (117 from Iowa and 108 from Illinois) from commercial herds ranging in size from 20 to 373 cows. Two analyses were conducted, one utilizing financial cost of production data, the other economic cost of production data. Each observation was analyzed as the difference from the mean for that given year. The variable utilized in both the financial and economic models as an indicator of profit was return to unpaid labor and management per cow (RLM). Used as dependent variables were the five factors that make up total annual cow cost: feed cost, operating cost, depreciation cost, capital charge, and hired labor, all on an annual cost-per-cow basis. In the economic analysis, family labor was also included. Production factors evaluated as dependent variables in both models were calf weight, calf price, cull weight, cull price, weaning percentage, and calving distribution. Herd size and investment were also analyzed. All financial factors analyzed were significantly correlated with RLM (P < .10) except cull weight and cull price. All economic factors analyzed were significantly correlated with RLM (P < .10) except calf weight, cull weight and cull price. Results of the financial prediction equation indicate that there are eight measurements capable of explaining over 82 percent of the farm-to-farm variation in RLM. Feed cost is the overriding factor driving RLM in both the financial and economic stepwise regression analyses. In both analyses over 50 percent of the herd-to-herd variation in RLM could be explained by feed cost.
Financial feed cost is correlated (P < .001) to operating cost, depreciation cost, and investment. Economic feed cost is correlated (P < .001) with investment and operating cost, as well as capital charge. Operating cost, depreciation, and capital charge were all negatively correlated (P < .10) to herd size, and positively correlated (P < .01) to feed cost in both analyses. Operating costs were positively correlated with capital charge and investment (P < .01) in both analyses. In the financial regression model, depreciation cost was the second critical factor explaining almost 9 percent of the herd-to-herd variation in RLM followed by operating cost (5 percent). Calf weight had a greater impact than calf price on RLM in both the financial and economic regression models. Calf weight was the fourth indicator of RLM in the financial model and was similar in magnitude to operating cost. Investment was not a significant variable in either regression model; however, it was highly correlated to a number of the significant cost variables including feed cost, depreciation cost, and operating cost (P < .001, financial; P < .10, economic). Cost factors were far more influential in driving RLM than production, reproduction, or producer controlled marketing factors. Of these cost factors, feed cost had by far the largest impact. As producers focus attention on factors that affect the profitability of the operation, feed cost is the most critical control point because it was responsible for over 50 percent of the herd-to-herd variation in profit.


Fifteen beef cow-calf producers in southern Iowa were selected based on locality, management level, historical date of grazing initiation and desire to participate in the project. In 1997 and 1998, all producers kept records of production and economic data using the Integrated Resource Management-Standardized Performance Analysis (IRM-SPA) records program. At the initiation of grazing on each farm in 1997 and 1998, Julian date, degree-days, cumulative precipitation, and soil moisture, phosphorus, and potassium concentrations were determined. Also determined were pH, temperature, and load-bearing capacity; and forage mass, sward height, morphology and dry matter concentration. Over the grazing season, forage production, measured both by cumulative mass and sward height, forage in vitro digestible dry matter concentration, and crude protein concentration were determined monthly. In the fall of 1996 the primary species in pastures on farms used in this project were cool-season grasses, which composed 76% of the live forage whereas legumes and weeds composed 8.3 and 15.3%, respectively. The average number of paddocks was 4.1, reflecting a low intensity rotational stocking system on most farms. The average dates of grazing initiation were May 5 and April 29 in 1997 and 1998, respectively, with standard deviations of 14.8 and 14.1 days. Because the average soil moisture of 23% was dry and did not differ between years, it seems that most producers delayed the initiation of grazing to avoid muddy conditions by initiating grazing at a nearly equal soil moisture. However, Julian date, degree-days, soil temperature and morphology index at grazing initiation were negatively related to seasonal forage production, measured as mass or sward height, in 1998. And forage mass and height at grazing initiation were negatively related to seasonal forage production, measured as sward height, in 1997. 
Moreover, the concentrations of digestible dry matter at the initiation of and during the grazing season and the concentrations of crude protein during the grazing season were lower than desired for optimal animal performance. Because the mean seasonal digestible dry matter concentration was negatively related to initial forage mass in 1997 and mean seasonal crude protein concentrations were negatively related to the Julian date, degree-days, and morphology indices in both years, it seems that delaying the initiation of grazing until pasture soils are not muddy limits the quality as well as the quantity of pasture forage. In 1997, forage production and digestibility were positively related to the soil phosphorus concentration. Soil potassium concentration was positively related to forage digestibility in 1997 and to forage production and crude protein concentration in 1998. Increasing the number of paddocks increased forage production, measured as sward height, in 1997, and forage digestible dry matter concentration in 1998. Increasing the yields or the concentrations of digestible dry matter or crude protein of pasture forage reduced the costs of purchased feed per cow.


This paper deals with scheduling batch (i.e., discontinuous), continuous, and semicontinuous production in process industries (e.g., chemical, pharmaceutical, or metal casting industries) where intermediate storage facilities and renewable resources (processing units and manpower) of limited capacity have to be observed. First, different storage configurations typical of process industries are discussed. Second, a basic scheduling problem covering the three above production modes is presented. Third, (exact and truncated) branch-and-bound methods for the basic scheduling problem and the special case of batch scheduling are proposed and subjected to an experimental performance analysis. The solution approach presented is flexible and in principle simple, and it can (approximately) solve relatively large problem instances with sufficient accuracy.
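The branch-and-bound idea can be sketched on a toy two-unit makespan problem. This is a minimal illustration of branching plus lower-bound pruning only, not the paper's full model with storage configurations and renewable resources.

```python
def branch_and_bound(jobs):
    """Minimize the makespan of jobs on two identical units by branching
    on each job's unit assignment and pruning with a simple lower bound."""
    jobs = sorted(jobs, reverse=True)        # big jobs first tightens bounds
    best = [sum(jobs)]                       # trivial upper bound

    def bound(i, loads):
        # Lower bound: current max load, or perfect split of all work
        remaining = sum(jobs[i:])
        return max(max(loads), (sum(loads) + remaining) / 2.0)

    def branch(i, loads):
        if i == len(jobs):
            best[0] = min(best[0], max(loads))
            return
        if bound(i, loads) >= best[0]:
            return                           # prune: cannot beat incumbent
        for u in (0, 1):
            loads[u] += jobs[i]
            branch(i + 1, loads)
            loads[u] -= jobs[i]

    branch(0, [0, 0])
    return best[0]

print(branch_and_bound([8, 7, 6, 5, 4]))     # optimum splits 30 into 15/15
```

Truncated variants, as mentioned in the paper, simply stop the search early (e.g. after a node budget) and return the incumbent, trading optimality for runtime.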


This paper introduces an area- and power-efficient approach for compressive recording of cortical signals used in an implantable system prior to transmission. Recent research on compressive sensing has shown promising results for sub-Nyquist sampling of sparse biological signals. Still, any large-scale implementation of this technique faces critical issues caused by the increased hardware intensity. The cost of implementing compressive sensing in a multichannel system in terms of area usage can be significantly higher than a conventional data acquisition system without compression. To tackle this issue, a new multichannel compressive sensing scheme which exploits the spatial sparsity of the signals recorded from the electrodes of the sensor array is proposed. The analysis shows that using this method, the power efficiency is preserved to a great extent while the area overhead is significantly reduced resulting in an improved power-area product. The proposed circuit architecture is implemented in a UMC 0.18 µm CMOS technology. Extensive performance analysis and design optimization have been carried out, resulting in a low-noise, compact and power-efficient implementation. The results of simulations and subsequent reconstructions show the possibility of recovering fourfold-compressed intracranial EEG signals with an SNR as high as 21.8 dB, while consuming 10.5 µW of power within an effective area of 250 µm × 250 µm per channel.
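The sub-Nyquist recovery idea can be sketched with a toy greedy reconstruction of a sparse signal from fewer random measurements than samples. The matching-pursuit solver, dimensions and signal below are illustrative assumptions, not the paper's actual reconstruction pipeline.

```python
import random

def measure(phi, x):
    """y = Phi x: m sub-Nyquist measurements of the n-sample signal."""
    return [sum(row[i] * x[i] for i in range(len(x))) for row in phi]

def matching_pursuit(phi, y, iterations):
    """Greedy sparse recovery: repeatedly pick the sensing-matrix column
    most correlated with the residual and peel off its contribution."""
    m, n = len(phi), len(phi[0])
    cols = [[row[i] for row in phi] for i in range(n)]
    x_hat = [0.0] * n
    residual = list(y)
    for _ in range(iterations):
        scores = [sum(c[k] * residual[k] for k in range(m)) /
                  sum(ck * ck for ck in c) for c in cols]
        i = max(range(n), key=lambda j: abs(scores[j]))
        x_hat[i] += scores[i]
        residual = [residual[k] - scores[i] * cols[i][k] for k in range(m)]
    return x_hat

random.seed(5)
n, m = 40, 20                          # 2x compression: 20 measurements
phi = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
x = [0.0] * n
x[7], x[23] = 3.0, -2.0                # sparse signal: two active samples
x_hat = matching_pursuit(phi, measure(phi, x), iterations=6)
print(sorted(i for i, v in enumerate(x_hat) if abs(v) > 1.0))
```

In the implant only the cheap measurement step (a few random projections per frame) runs on-chip; the iterative reconstruction happens off-body after transmission, which is what makes the compression worthwhile.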


The article investigates the intriguing interplay of digital comics and live-action elements in a detailed performance analysis of TeZukA (2011) by choreographer Sidi Larbi Cherkaoui. This dance theatre production enacts the life story of Osamu Tezuka and some of his famous manga characters, interweaving performers and musicians with large-scale projections of the mangaka’s digitised comics. During the show, the dancers perform different ‘readings’ of the projected manga imagery: e.g. they swipe panels as if using portable touchscreen displays, move synchronously to animated speed lines, and create the illusion of being drawn into the stories depicted on the screen. The main argument is that TeZukA makes visible, demonstrates and reflects upon different ways of delivering, reading and interacting with digital comics. In order to verify this argument, the paper uses ideas developed in comics and theatre studies to draw more specifically on the use of digital comics in this particular performance.


Multiuser multiple-input multiple-output (MIMO) downlink (DL) transmission schemes experience both multiuser interference and inter-antenna interference. The singular value decomposition provides an appropriate means to process channel information and allows us to take the individual user's channel characteristics into account rather than treating all users' channels jointly, as in zero-forcing (ZF) multiuser transmission techniques. However, research on uncorrelated MIMO channels has attracted a lot of attention and reached a state of maturity. By contrast, the performance analysis in the presence of antenna fading correlation, which decreases the channel capacity, requires substantial further research. The joint optimization of the number of activated MIMO layers and the number of bits per symbol, along with the appropriate allocation of the transmit power, shows that not necessarily all user-specific MIMO layers have to be activated in order to minimize the overall BER under the constraint of a given fixed data throughput.
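The SVD-based layer view described above can be sketched for a real-valued 2×2 channel: the singular values give the per-layer gains, and under antenna correlation one singular value collapses, so the weak layer need not be activated. The closed form below and the activation threshold are illustrative assumptions.

```python
import math

def singular_values_2x2(h):
    """Closed-form singular values of a real 2x2 channel matrix H:
    the square roots of the eigenvalues of H^T H."""
    (a, b), (c, d) = h
    t = a * a + b * b + c * c + d * d      # trace(H^T H)
    det = a * d - b * c                    # det(H), so det(H^T H) = det**2
    disc = math.sqrt(max(t * t - 4 * det * det, 0.0))
    return math.sqrt((t + disc) / 2), math.sqrt((t - disc) / 2)

def active_layers(h, snr_db, min_layer_snr_db=7.0):
    """After the SVD, layer i sees the gain s_i**2; a layer is activated
    only if its post-SVD SNR clears an (assumed) modulation threshold."""
    snr = 10 ** (snr_db / 10)
    flags = []
    for s in singular_values_2x2(h):
        layer_snr_db = 10 * math.log10(snr * s * s) if s > 0 else float("-inf")
        flags.append(layer_snr_db >= min_layer_snr_db)
    return flags

# Strongly correlated antennas: one dominant and one weak singular value
h_corr = [[1.0, 0.9], [0.9, 1.0]]
print(singular_values_2x2(h_corr), active_layers(h_corr, snr_db=10))
```

For this correlated channel the singular values are 1.9 and 0.1, so the second layer's SNR is far below threshold and the transmit power is better reassigned to the strong layer, which is the effect the joint layer/bit/power optimization exploits.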


The integration of powerful partial evaluation methods into practical compilers for logic programs is still far from reality. This is related both to 1) efficiency issues and to 2) the complications of dealing with practical programs. Regarding efficiency, the most successful unfolding rules used nowadays are based on structural orders applied over (covering) ancestors, i.e., a subsequence of the atoms selected during a derivation. Unfortunately, maintaining the structure of the ancestor relation during unfolding introduces significant overhead. We propose an efficient, practical local unfolding rule based on the notion of covering ancestors which can be used in combination with any structural order and allows a stack-based implementation without losing any opportunities for specialization. Regarding the second issue, we propose assertion-based techniques which allow our approach to deal with real programs that include (Prolog) built-ins and external predicates in a very extensible manner. Finally, we report on our implementation of these techniques in a practical partial evaluator, embedded in a state-of-the-art compiler which uses global analysis extensively (the Ciao compiler and, specifically, its preprocessor CiaoPP). The performance analysis of the resulting system shows that our techniques, in addition to dealing with practical programs, are also significantly more efficient in time and somewhat more efficient in memory than traditional tree-based implementations.
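The ancestor check that underlies such unfolding rules can be sketched with a toy homeomorphic-embedding "whistle" over tuple-encoded terms. This is an illustration of the idea only, in Python rather than Prolog, and not CiaoPP's stack-based implementation.

```python
def embeds(s, t):
    """Homeomorphic embedding s <| t on terms written as ('f', arg, ...)
    tuples with string leaves: True if t structurally 'contains' s."""
    if s == t:
        return True
    if isinstance(t, tuple):
        if (isinstance(s, tuple) and s[0] == t[0] and len(s) == len(t)
                and all(embeds(a, b) for a, b in zip(s[1:], t[1:]))):
            return True                                  # coupling
        return any(embeds(s, sub) for sub in t[1:])      # diving
    return False

def unfold(call, step, stack=()):
    """Local unfolding with an ancestor whistle: stop as soon as the next
    call embeds any ancestor, which guarantees termination."""
    trace = [call]
    while True:
        nxt = step(call)
        if nxt is None or any(embeds(anc, nxt) for anc in stack + tuple(trace)):
            return trace
        trace.append(nxt)
        call = nxt

def rev_step(call):
    """rev([X|Xs], A) -> rev(Xs, [X|A]): the list argument shrinks."""
    _, xs, acc = call
    if not isinstance(xs, tuple):
        return None                                      # rev([], A): done
    return ('rev', xs[2], ('cons', xs[1], acc))

# A growing call f(X) -> f(s(X)) is stopped immediately by the whistle,
# while the shrinking rev/2 derivation unfolds to completion.
print(len(unfold(('f', 'x'), lambda c: ('f', ('s', c[1])))))
print(len(unfold(('rev', ('cons', 'a', ('cons', 'b', 'nil')), 'nil'), rev_step)))
```

The paper's contribution is precisely that the relevant (covering) ancestors can be kept on a stack synchronized with execution instead of in an explicit proof-tree structure, avoiding the overhead this naive trace-keeping version pays.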


In this paper, the authors provide a methodology to design nonparametric permutation tests and, in particular, nonparametric rank tests for applications in detection. In the first part of the paper, the authors develop the optimization theory of both permutation and rank tests in the Neyman-Pearson sense; in the second part, they carry out a comparative performance analysis of the permutation and rank tests (detectors) against the parametric ones in radar applications. First, a brief review of some contributions on nonparametric tests is given. Then, the optimum permutation and rank tests are derived. Finally, a performance analysis is carried out by Monte Carlo simulations for the corresponding detectors, and the results are shown as curves of detection probability versus signal-to-noise ratio.
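A minimal sketch of a rank detector evaluated by Monte Carlo simulation, in the spirit of the analysis above: the detector layout, threshold and noise model below are invented for illustration, not the paper's optimum tests.

```python
import random

def rank_detector(test_cells, reference, threshold):
    """Rank-sum detector: count how many reference (noise-only) samples
    each test sample exceeds; declare a detection if the total rank beats
    the threshold. Distribution-free under the noise-only hypothesis."""
    rank = sum(sum(1 for r in reference if t > r) for t in test_cells)
    return rank > threshold

def detection_probability(snr_db, trials=2000, n_test=8, n_ref=16,
                          threshold=100, seed=11):
    """Monte Carlo estimate of Pd at one SNR (max rank here is 8*16=128)."""
    rng = random.Random(seed)
    amp = 10 ** (snr_db / 20)          # signal amplitude for this SNR
    hits = 0
    for _ in range(trials):
        reference = [abs(rng.gauss(0, 1)) for _ in range(n_ref)]
        test = [abs(rng.gauss(amp, 1)) for _ in range(n_test)]
        hits += rank_detector(test, reference, threshold)
    return hits / trials

for snr in (-5, 0, 5, 10):             # one point per SNR on the Pd curve
    print(snr, detection_probability(snr))
```

Sweeping the SNR as in the loop above yields exactly the detection-probability-versus-SNR curves the paper reports; since only ranks enter the statistic, the threshold (and hence the false-alarm rate) is independent of the noise distribution.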


Las fuentes de alimentación de modo conmutado (SMPS en sus siglas en inglés) se utilizan ampliamente en una gran variedad de aplicaciones. La tarea más difícil para los diseñadores de SMPS consiste en lograr simultáneamente la operación del convertidor con alto rendimiento y alta densidad de energía. El tamaño y el peso de un convertidor de potencia está dominado por los componentes pasivos, ya que estos elementos son normalmente más grandes y más pesados que otros elementos en el circuito. Para una potencia de salida dada, la cantidad de energía almacenada en el convertidor que ha de ser entregada a la carga en cada ciclo de conmutación, es inversamente proporcional a la frecuencia de conmutación del convertidor. Por lo tanto, el aumento de la frecuencia de conmutación se considera un medio para lograr soluciones más compactas con los niveles de densidad de potencia más altos. La importancia de investigar en el rango de alta frecuencia de conmutación radica en todos los beneficios que se pueden lograr: además de la reducción en el tamaño de los componentes pasivos, el aumento de la frecuencia de conmutación puede mejorar significativamente prestaciones dinámicas de convertidores de potencia. Almacenamiento de energía pequeña y el período de conmutación corto conducen a una respuesta transitoria del convertidor más rápida en presencia de las variaciones de la tensión de entrada o de la carga. Las limitaciones más importantes del incremento de la frecuencia de conmutación se relacionan con mayores pérdidas del núcleo magnético convencional, así como las pérdidas de los devanados debido a los efectos pelicular y proximidad. También, un problema potencial es el aumento de los efectos de los elementos parásitos de los componentes magnéticos - inductancia de dispersión y la capacidad entre los devanados - que causan pérdidas adicionales debido a las corrientes no deseadas. 
Otro factor limitante supone el incremento de las pérdidas de conmutación y el aumento de la influencia de los elementos parásitos (pistas de circuitos impresos, interconexiones y empaquetado) en el comportamiento del circuito. El uso de topologías resonantes puede abordar estos problemas mediante el uso de las técnicas de conmutaciones suaves para reducir las pérdidas de conmutación incorporando los parásitos en los elementos del circuito. Sin embargo, las mejoras de rendimiento se reducen significativamente debido a las corrientes circulantes cuando el convertidor opera fuera de las condiciones de funcionamiento nominales. A medida que la tensión de entrada o la carga cambian las corrientes circulantes incrementan en comparación con aquellos en condiciones de funcionamiento nominales. Se pueden obtener muchos beneficios potenciales de la operación de convertidores resonantes a más alta frecuencia si se emplean en aplicaciones con condiciones de tensión de entrada favorables como las que se encuentran en las arquitecturas de potencia distribuidas. La regulación de la carga y en particular la regulación de la tensión de entrada reducen tanto la densidad de potencia del convertidor como el rendimiento. Debido a la relativamente constante tensión de bus que se encuentra en arquitecturas de potencia distribuidas los convertidores resonantes son adecuados para el uso en convertidores de tipo bus (transformadores cc/cc de estado sólido). En el mercado ya están disponibles productos comerciales de transformadores cc/cc de dos puertos que tienen muy alta densidad de potencia y alto rendimiento se basan en convertidor resonante serie que opera justo en la frecuencia de resonancia y en el orden de los megahercios. Sin embargo, las mejoras futuras en el rendimiento de las arquitecturas de potencia se esperan que vengan del uso de dos o más buses de distribución de baja tensión en vez de una sola. 
Teniendo eso en cuenta, el objetivo principal de esta tesis es aplicar el concepto del convertidor resonante serie que funciona en su punto óptimo en un nuevo transformador cc/cc bidireccional de puertos múltiples para atender las necesidades futuras de las arquitecturas de potencia. El nuevo transformador cc/cc bidireccional de puertos múltiples se basa en la topología de convertidor resonante serie y reduce a sólo uno el número de componentes magnéticos. Conmutaciones suaves de los interruptores hacen que sea posible la operación en las altas frecuencias de conmutación para alcanzar altas densidades de potencia. Los problemas posibles con respecto a inductancias parásitas se eliminan, ya que se absorben en los Resumen elementos del circuito. El convertidor se caracteriza con una muy buena regulación de la carga propia y cruzada debido a sus pequeñas impedancias de salida intrínsecas. El transformador cc/cc de puertos múltiples opera a una frecuencia de conmutación fija y sin regulación de la tensión de entrada. En esta tesis se analiza de forma teórica y en profundidad el funcionamiento y el diseño de la topología y del transformador, modelándolos en detalle para poder optimizar su diseño. Los resultados experimentales obtenidos se corresponden con gran exactitud a aquellos proporcionados por los modelos. El efecto de los elementos parásitos son críticos y afectan a diferentes aspectos del convertidor, regulación de la tensión de salida, pérdidas de conducción, regulación cruzada, etc. También se obtienen los criterios de diseño para seleccionar los valores de los condensadores de resonancia para lograr diferentes objetivos de diseño, tales como pérdidas de conducción mínimas, la eliminación de la regulación cruzada o conmutación en apagado con corriente cero en plena carga de todos los puentes secundarios. 
Zero-voltage turn-on of all the switches is achieved by adjusting the air gap to obtain a finite magnetizing inductance in the transformer. A change in the driving pattern is also proposed so that zero-current turn-off operation of all the secondary-side bridges becomes independent of load variations and resonant capacitor tolerances. The feasibility of the proposed topology is verified through extensive simulation and experimental work. The optimization of the high-frequency transformer design is also addressed, since it is the bulkiest component in the converter. The impact of the dead-time duration and the air-gap size on converter efficiency is analyzed in a design example of a three-port dc/dc transformer rated at several hundred watts. The final part of this research considers the implementation and performance analysis of a four-port dc/dc transformer for a very low voltage application of tens of watts, without isolation requirements.

Abstract

Recently, switch-mode power supplies (SMPS) have been used in a great variety of applications. The most challenging issue for SMPS designers is to achieve high-efficiency operation at high power density simultaneously. The size and weight of a power converter are dominated by the passive components, since these elements are normally larger and heavier than the other elements in the circuit. If the output power is constant, the amount of energy stored in the converter and delivered to the load in each switching cycle is inversely proportional to the converter's switching frequency. Therefore, increasing the switching frequency is considered a means to achieve more compact solutions at higher power density levels.
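The inverse relationship between the energy processed per cycle and the switching frequency can be sketched as follows; the power level and frequencies are illustrative assumptions, not figures from the thesis:

```python
# Sketch of E = P_out / f_sw: at constant output power, the energy the
# converter must store and deliver each switching cycle (and hence the
# size of its passive components) shrinks as switching frequency rises.
# The 100 W power level and the frequency sweep are illustrative only.

def energy_per_cycle(p_out_w: float, f_sw_hz: float) -> float:
    """Energy delivered to the load in one switching period, in joules."""
    return p_out_w / f_sw_hz

if __name__ == "__main__":
    p_out = 100.0  # W, illustrative
    for f_sw in (100e3, 1e6, 10e6):
        e_uj = energy_per_cycle(p_out, f_sw) * 1e6
        print(f"f_sw = {f_sw / 1e6:g} MHz -> E per cycle = {e_uj:.1f} uJ")
```

A tenfold increase in switching frequency cuts the per-cycle energy, and with it the required energy storage, by the same factor.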
The importance of investigating the high switching frequency range comes from all the benefits that can be achieved. Besides reducing the size of the passive components, increasing the switching frequency can significantly improve the dynamic performance of power converters: small energy storage and a short switching period lead to a faster transient response to input voltage and load variations. The most important limitations on pushing up the switching frequency are the increased magnetic core loss and the winding loss due to skin and proximity effects. A further problem is the increase in magnetic parasitics (leakage inductance and inter-winding capacitance), which cause additional loss through unwanted currents. Higher switching loss and the increased influence of printed circuit boards, interconnections and packaging on circuit behavior are another limiting factor. Resonant power conversion can address these problems by using soft-switching techniques to reduce switching loss while incorporating the parasitics into the circuit elements. However, the performance gains are significantly reduced by circulating currents when the converter operates outside its nominal operating conditions: as the input voltage or the load changes, the circulating currents become higher than those at nominal conditions. Many of the potential gains from operating resonant converters at higher switching frequencies can be obtained if they are employed in applications with favorable input voltage conditions, such as those found in distributed power architectures. Load and, particularly, input voltage regulation reduce a converter's power density and efficiency. Due to the relatively constant bus voltage in distributed power architectures, resonant converters are well suited for bus voltage conversion (dc/dc or solid-state transformation).
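The skin-effect limit on winding loss mentioned above can be illustrated with the standard skin-depth formula for copper; the frequencies swept are illustrative, not the thesis operating points:

```python
import math

# Skin depth delta = sqrt(rho / (pi * f * mu)): at high frequency, current
# crowds into a thin surface layer of the winding, raising AC resistance.
# Material constants are textbook values for copper at room temperature.

RHO_CU = 1.68e-8          # ohm*m, copper resistivity (~20 C)
MU_0 = 4 * math.pi * 1e-7  # H/m; copper is non-magnetic (mu_r ~ 1)

def skin_depth(f_hz: float, rho: float = RHO_CU, mu: float = MU_0) -> float:
    """Skin depth in meters at frequency f_hz."""
    return math.sqrt(rho / (math.pi * f_hz * mu))

if __name__ == "__main__":
    for f in (100e3, 1e6, 10e6):
        print(f"f = {f / 1e6:g} MHz -> skin depth = {skin_depth(f) * 1e6:.1f} um")
```

At 1 MHz the skin depth in copper is about 65 µm, which is why megahertz-range magnetics resort to litz wire or thin foil windings.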
Unregulated two-port dc/dc transformer products achieving very high power density and efficiency are already available on the market; they are based on the series resonant converter operating exactly at the resonant frequency, in the megahertz range. However, further efficiency improvements in power architectures are expected to come from using two or more separate low-voltage distribution buses instead of a single one. The principal objective of this dissertation is to implement the concept of the series resonant converter operating at its optimum point in a novel bidirectional multiple-port dc/dc transformer that addresses the future needs of power architectures. The new multiple-port dc/dc transformer is based on a series resonant converter topology and reduces the number of magnetic components to just one. Soft-switching commutations make it possible to adopt high switching frequencies and achieve high power densities. Potential problems with stray inductances are eliminated, since they are absorbed into the circuit elements. The converter features very good inherent load and cross regulation due to its small output impedances. The proposed multiple-port dc/dc transformer operates at a fixed switching frequency without line regulation. An extensive theoretical analysis of the topology and detailed modeling are provided for comparison with the experimental results. Relationships showing how the output voltage regulation and conduction losses are affected by the circuit parasitics are derived. Methods for selecting the resonant capacitor values to achieve different design goals, such as minimum conduction losses, elimination of cross regulation, or ZCS operation at full load of all the secondary-side bridges, are discussed. ZVS turn-on of all the switches is achieved by relying on the finite magnetizing inductance of the transformer.
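The "dc/dc transformer" operating mode above relies on driving the bridge exactly at the resonant frequency of the series L-C tank. A minimal sketch of the tank design equations, with placeholder component values rather than the thesis prototype's:

```python
import math

# Series resonant tank: f_r = 1 / (2*pi*sqrt(Lr*Cr)) sets the fixed
# switching frequency, and Z0 = sqrt(Lr/Cr) sets the characteristic
# impedance of the tank. Component values below are illustrative.

def resonant_frequency(l_r: float, c_r: float) -> float:
    """Resonant frequency of the series Lr-Cr tank, in hertz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_r * c_r))

def characteristic_impedance(l_r: float, c_r: float) -> float:
    """Characteristic impedance of the tank, in ohms."""
    return math.sqrt(l_r / c_r)

if __name__ == "__main__":
    L_R, C_R = 1e-6, 25e-9  # 1 uH, 25 nF (illustrative)
    print(f"f_r = {resonant_frequency(L_R, C_R) / 1e6:.2f} MHz")
    print(f"Z0  = {characteristic_impedance(L_R, C_R):.2f} ohm")
```

Because the converter never moves off f_r, the tank impedance stays minimal and the circulating-current penalty described earlier is avoided.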
A change of the driving pattern is proposed to achieve ZCS operation of all the secondary-side bridges independent of load variations and resonant capacitor tolerances. The feasibility of the proposed topology is verified through extensive simulation and experimental work. The optimization of the high-frequency transformer design is also addressed in this work, since it is the bulkiest component in the converter. The impact of the dead-time interval and the gap size on the overall converter efficiency is analyzed in the design example of a three-port dc/dc transformer with an output power of several hundred watts for high-voltage applications. The final part of this research considers the implementation and performance analysis of a four-port dc/dc transformer in a low-voltage application with an output power of tens of watts and no isolation requirements.
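The interaction between the dead-time interval and the gap-tuned magnetizing inductance can be sketched with a first-order ZVS check: during the dead time, the magnetizing current must displace the charge on the switch-node capacitance. This is a simplified constant-current model with illustrative values, not the analysis or the figures from the thesis:

```python
# Simplified ZVS dead-time check:
#   i_m_pk * t_dead >= 2 * C_oss * V_in
# where i_m_pk = V_in / (4 * L_m * f_sw) is the peak magnetizing current
# of a square-wave-driven transformer. The constant-current approximation,
# the lumped C_oss model, and all numeric values are assumptions made for
# illustration only.

def peak_magnetizing_current(v_in: float, l_m: float, f_sw: float) -> float:
    """Peak magnetizing current, i_m_pk = V_in / (4 * L_m * f_sw), in amperes."""
    return v_in / (4.0 * l_m * f_sw)

def zvs_achieved(v_in: float, l_m: float, f_sw: float,
                 c_oss: float, t_dead: float) -> bool:
    """True if the magnetizing current can fully swing the switch node
    (charge 2 * C_oss * V_in) within the dead time."""
    i_m = peak_magnetizing_current(v_in, l_m, f_sw)
    return i_m * t_dead >= 2.0 * c_oss * v_in

if __name__ == "__main__":
    # Illustrative operating point: 48 V bus, 20 uH magnetizing inductance,
    # 1 MHz switching, 200 pF lumped switch-node capacitance.
    for t_dead in (30e-9, 40e-9):
        ok = zvs_achieved(48.0, 20e-6, 1e6, 200e-12, t_dead)
        print(f"t_dead = {t_dead * 1e9:.0f} ns -> ZVS achieved: {ok}")
```

The model captures the trade-off the thesis analyzes: a larger air gap lowers L_m, raising the magnetizing current and easing ZVS, but at the cost of extra conduction loss, so dead time and gap size must be chosen together.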