9 results for Programmable current source
at Universidad Politécnica de Madrid
Abstract:
This work concerns improving the dynamic performance of the Buck converter by introducing an additional power path that virtually increases the output capacitance during transients, thus improving the output impedance of the converter. It is well known that in VRM applications with wide load steps, voltage overshoots and undershoots may lead to undesired behavior of the load. To solve this problem, a high-bandwidth, high-switching-frequency power converter can be applied to reduce the transient time, or a large output capacitor can be used to reduce the output impedance. The first solution degrades efficiency by increasing the switching losses of the MOSFETs, while the second penalizes the cost and size of the output filter. The additional energy path presented here is introduced with the Output Impedance Correction Circuit (OICC), based on a Controlled Current Source (CCS). The OICC uses the CCS to inject or extract a current n-1 times larger than the output capacitor current, thus virtually increasing the value of the output capacitance n times during transients. This feature allows the use of a low-frequency Buck converter with a smaller capacitor that still satisfies the dynamic requirements.
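The virtual-capacitance effect described above can be sketched numerically. The following toy simulation is not from the paper: the component values, the open-loop inductor-slew model, and the load-step size are all assumptions for illustration. It compares the output-voltage dip of an ideal buck output node with and without an OICC-style injection of n-1 times the capacitor current:

```python
# Illustrative sketch (assumed values, not the paper's): during a load
# step, an OICC injecting (n-1)x the output-capacitor current makes the
# capacitor behave as if it were n times larger: dv/dt = i_net / (n*C).

def undershoot(C_out, n=1, L=1e-6, Vout=1.0, dI=10.0, dt=1e-8, steps=2000):
    """Peak voltage dip after a +dI load step (simple Euler integration)."""
    iL, iload, v = 0.0, dI, 0.0   # inductor current starts at the old load (0 A)
    vmin = 0.0
    for _ in range(steps):
        iL = min(iL + (Vout / L) * dt, iload)  # inductor slews up to the new load
        v += (iL - iload) * dt / (n * C_out)   # effective capacitance is n*C
        vmin = min(vmin, v)
    return -vmin  # undershoot magnitude in volts

plain = undershoot(C_out=100e-6, n=1)   # plain buck, capacitor C alone
oicc  = undershoot(C_out=100e-6, n=4)   # same C with an OICC, n = 4
print(round(plain / oicc, 2))  # → 4.0
```

The dip shrinks by exactly the factor n in this idealized model, which is the "virtually n times larger capacitor" claim of the abstract.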
Abstract:
This work concerns the output impedance improvement of a Multiphase Buck converter with Peak Current Mode Control (PCMC) by introducing an additional power path that virtually increases the output capacitance during transients. Various solutions exist to improve the dynamic behavior of the converter system, but nearly all of them are developed for a Single-Phase Buck converter with Voltage Mode Control (VMC), whereas in VRM applications, due to the high currents, the system is usually implemented as a Multiphase Buck converter with Current Mode Control. The additional energy path presented here is introduced with the Output Impedance Correction Circuit (OICC), based on the Controlled Current Source (CCS). The OICC is used to inject or extract a current n-1 times larger than the output capacitor current, thus virtually increasing the value of the output capacitance n times during transients. Furthermore, this work extends the OICC concept to a Multiphase Buck converter system, comparing the proposed solution with a system whose output capacitor is n times larger. In addition, the OICC is implemented as a Synchronous Buck converter with PCMC, thus reducing its influence on the system efficiency.
Abstract:
An equivalent circuit model is applied to describe the operating characteristics of quantum dot intermediate band solar cells (QD-IBSCs); it accounts for the recombination paths between the conduction band (CB) and the intermediate band (IB), between the IB and the valence band (VB), and the direct VB-CB transition. In this work, the measured dark J-V curves of QD-IBSCs (with the QD region either non-doped or directly Si-doped to n-type) and of a reference GaAs p-i-n solar cell (no QDs) were fitted using this model in order to extract the diode parameters. A simulation was then performed with the extracted diode parameters to evaluate the solar cell characteristics under concentration. In the case of the QDSC with Si-doped (hence partially filled) QDs, a fast recovery of the open-circuit voltage (Voc) was observed in the low-concentration range due to the IB effect. Furthermore, at around 100X concentration, the Si-doped QDSC could outperform the reference GaAs p-i-n solar cell if the IB current source were increased roughly sixteen-fold, to about 10 mA/cm2, compared with our present cell.
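The parameter-extraction step described above amounts to fitting exponential diode laws to the measured dark J-V data. A minimal sketch, assuming a single-diode law and synthetic data (the paper's equivalent circuit adds IB-related recombination terms, and the numbers below are invented for illustration), recovers the ideality factor from a linear fit of ln J versus V:

```python
# Hedged sketch: extract diode parameters (J0, ideality n) from a dark
# J-V curve obeying J = J0*(exp(qV/(n*k*T)) - 1). Values are synthetic.
import math

kT_q = 0.02585  # thermal voltage kT/q at 300 K, in volts

# synthetic "measured" data generated with J0 = 1e-9 A/cm^2, n = 1.8
Vs = [0.3 + 0.7 * i / 49 for i in range(50)]
Js = [1e-9 * (math.exp(V / (1.8 * kT_q)) - 1.0) for V in Vs]

# for V >> kT/q, ln J ≈ ln J0 + V/(n*kT/q): a straight line in V,
# so an ordinary least-squares line through (V, ln J) gives n and J0
N = len(Vs)
sx, sy = sum(Vs), sum(map(math.log, Js))
sxx = sum(v * v for v in Vs)
sxy = sum(v * math.log(j) for v, j in zip(Vs, Js))
slope = (N * sxy - sx * sy) / (N * sxx - sx * sx)
intercept = (sy - slope * sx) / N

n_fit = 1.0 / (slope * kT_q)
J0_fit = math.exp(intercept)
print(round(n_fit, 2))  # → 1.8
```

Fitting in log space weights every decade of current evenly, which matters for dark J-V data spanning many orders of magnitude.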
Abstract:
A novel formulation for surface impedance characterization is introduced for the canonical problem of surface fields on a perfect electric conductor (PEC) circular cylinder with a dielectric coating, excited by an electric current source, using the Uniform Theory of Diffraction (UTD) with an Impedance Boundary Condition (IBC). The approach is based on a TE/TM assumption for the surface fields of the original problem. Where this surface impedance fails, an optimization is performed to minimize the error in the spectral domain (SD) Green's function between the original problem and the equivalent one with the IBC. This new approach requires only small changes in the available UTD-based solution with IBC to include the geodesic ray angle and length dependence in the surface impedance formulas. This asymptotic method, accurate for large separations between source and observer points, combined with SD Green's functions for multidielectric coatings, leads to a new hybrid SD-UTD with IBC to calculate mutual coupling among microstrip patches on a multilayer dielectric-coated PEC circular cylinder. Results are compared with the eigenfunction solution in the SD, showing very good agreement.
Abstract:
A novel formulation for surface impedance characterization is introduced for the canonical problem of surface fields on a perfect electric conductor (PEC) circular cylinder with a dielectric coating, excited by an electric current source, using the Uniform Theory of Diffraction (UTD) with an Impedance Boundary Condition (IBC). The approach is based on a TE/TM assumption for the surface fields of the original problem. Where this surface impedance fails, an optimization is performed to minimize the error in the spectral domain (SD) Green's function between the original problem and the equivalent one with the IBC. This asymptotic method, accurate for large separations between source and observer points, combined with SD Green's functions for multidielectric coatings, leads to a new hybrid SD-UTD with IBC to calculate mutual coupling among microstrip patches on a multilayer dielectric-coated PEC circular cylinder. Results are compared with the eigenfunction solution in the SD, showing very good agreement.
Abstract:
A novel formulation for surface impedance characterization is introduced for the canonical problem of surface fields on a perfect electric conductor (PEC) circular cylinder with a dielectric coating, excited by an electric current source, using the Uniform Theory of Diffraction (UTD) with an Impedance Boundary Condition (IBC). The approach is based on a TE/TM assumption for the surface fields of the original problem. Where this surface impedance fails, an optimization is performed to minimize the error in the spectral domain (SD) Green's function between the original problem and the equivalent one with the IBC. This new approach requires only small changes in the available UTD-based solution with IBC to include the geodesic ray angle and length dependence in the surface impedance formulas. This asymptotic method, accurate for large separations between source and observer points, combined with SD Green's functions for multidielectric coatings, leads to a new hybrid SD-UTD with IBC to calculate mutual coupling among microstrip patches on a multilayer dielectric-coated PEC circular cylinder. Results are compared with the eigenfunction solution in the SD, showing very good agreement.
Abstract:
BETs is a three-year project financed by the Space Program of the European Commission, aimed at developing an efficient deorbit system that could be carried on board any future satellite launched into Low Earth Orbit (LEO). The operational system involves a conductive tape tether left bare to establish anodic contact with the ambient plasma, acting as a giant Langmuir probe. As part of this project, we are carrying out both numerical and experimental approaches to estimate the current collected by the positive part of the tether. This paper deals with experimental measurements performed in the IONospheric Atmosphere Simulator (JONAS) plasma chamber of the Onera Space Environment Department. The JONAS facility is a 9 m3 vacuum chamber equipped with a plasma source providing drifting plasma that simulates LEO conditions in terms of density and temperature. A thin metallic cylinder simulating the tether is set inside the chamber and polarized up to 1000 V. The Earth's magnetic field is neutralized inside the chamber. First, the tether-collected current is measured versus tether polarization for different plasma source energies and densities. In addition, several types of Langmuir probes are used at the same location to allow the extraction of both ion densities and electron parameters by computer modeling (classical Langmuir probe characteristics are not accurate enough in the present situation). These two measurements permit estimation of the discrepancies between the theoretical collection laws, the orbital-motion-limited law in particular, and the experimental data in LEO-like conditions without magnetic fields. Second, the spatial variations and time evolutions of the plasma properties around the tether are investigated. Spherical and emissive Langmuir probes are also used for a more extensive characterization of the plasma in space- and time-dependent analyses.
Results show ion depletion because of the wake effect and an accumulation of ions upstream of the tether. In some regimes (at large positive potential), oscillations are observed on the tether-collected current and on the Langmuir probe collected current at specific sites.
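The orbital-motion-limited (OML) law mentioned above has a standard closed form for a cylindrical collector at attractive bias. A small sketch follows; the formula is the textbook OML expression for a cylinder, and the plasma parameters and tether dimensions are assumptions chosen only for illustration, not the chamber's measured values:

```python
# Hedged sketch of the OML electron-collection law for a cylinder:
#   I = I_th * (2/sqrt(pi)) * sqrt(1 + eV/kTe),
# with thermal current I_th = e*n*A*sqrt(e*Te/(2*pi*m_e)) for Te in eV.
import math

E = 1.602e-19   # elementary charge [C]
ME = 9.109e-31  # electron mass [kg]

def oml_current(n_e, Te_eV, radius, length, V_bias):
    """Electron current collected by a cylindrical probe/tether in the
    orbital-motion-limited regime (attractive bias, collisionless)."""
    area = 2 * math.pi * radius * length                     # lateral surface
    i_th = E * n_e * area * math.sqrt(E * Te_eV / (2 * math.pi * ME))
    return i_th * (2 / math.sqrt(math.pi)) * math.sqrt(1 + V_bias / Te_eV)

# LEO-like numbers (assumed): n = 1e11 m^-3, Te = 0.1 eV, a 1 mm radius,
# 1 m long cylinder biased to +100 V -- on the order of a few tenths of a mA
print(f"{oml_current(1e11, 0.1, 1e-3, 1.0, 100.0):.2e}")
```

Comparing this law against the chamber data at different biases is exactly the discrepancy estimation the abstract describes.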
Abstract:
The topic, chosen from the list of Professional Skills and Issues, is Freedom of Information and its best-known variant, Open Source. The aim of this project is to present this idea and cover in detail the many areas it spans. It is addressed to all users who want to learn first-hand how the idea of technological freedom began and how it is applied today: not only those who want to adopt it, but also those who already use it and need resources for new ideas. In this way, it also approaches the debate on freedom in technology that is currently taking place. The content is structured along the following branches: History, from its origins to the present; Economics, the advantages and disadvantages of this freedom; Law, legal problems at different levels; News, the latest updates on its applications; Society, its acceptance or rejection by users, together with its influence on ethics, education, and innovation; and Applications, covering the best-known applications in each branch of Open Source.
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient hardware utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with such control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise sources for each group independently, and then combines the results. In this way, the number of noise sources in the system at any given time is controlled and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that reduce the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although a given confidence level must be guaranteed for the final results of the search, more relaxed levels, which imply considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to x240 for small/medium-sized problems. Finally, this work introduces HOPLITE, an automated, flexible, and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
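The classical greedy word-length search that the thesis accelerates can be illustrated with a toy model. In this sketch the error metric is an analytic uniform-quantization noise formula and the per-signal output gains are invented for illustration; the real flow would evaluate the metric by Monte-Carlo simulation of the fixed-point system:

```python
# Hedged illustration of greedy word-length optimization: shrink one
# signal's word-length at a time, keeping each move only while the
# output noise power stays within the error budget.

def noise_power(wl):
    # each b-bit signal contributes 2^(-2b)/12 of quantization noise,
    # scaled by an (invented) gain from that signal to the output
    gains = [1.0, 0.5, 2.0]
    return sum(g * 2.0 ** (-2 * b) / 12 for g, b in zip(gains, wl))

def greedy_wordlengths(budget, start=16, floor=2):
    wl = [start] * 3
    improved = True
    while improved:
        improved = False
        for i in range(len(wl)):          # try single-bit reductions
            if wl[i] > floor:
                wl[i] -= 1
                if noise_power(wl) <= budget:
                    improved = True       # keep the cheaper assignment
                else:
                    wl[i] += 1            # undo: error budget exceeded
    return wl

wl = greedy_wordlengths(budget=1e-6)
print(wl, sum(wl))  # → [9, 9, 10] 28
```

Each iteration of the inner loop is one candidate evaluation; in the Monte-Carlo setting every such evaluation is a full simulation, which is why the interpolative and incremental speed-ups described above matter.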