969 results for Integrated circuit


Relevance:

60.00%

Publisher:

Abstract:

This work concerns the combination of functional ferroelectric oxides with multiwall carbon nanotubes for microelectronic applications, for example potential three-dimensional (3D) non-volatile ferroelectric random access memories (NVFeRAM). Miniaturized electronics are now ubiquitous. The drive to downsize electronics has been spurred by the need to pack more performance into smaller packages at lower cost, but the trend of electronics miniaturization challenges board assembly materials, processes, and reliability. Semiconductor device and integrated circuit technology, coupled with its associated electronic packaging, forms the backbone of high-performance miniaturized electronic systems. However, as size decreases and functionality increases in modern electronics, further size reduction is becoming difficult; below a certain size, signal reliability and device performance deteriorate. Miniaturization of silicon-based electronics therefore has limitations. Against this background, the International Technology Roadmap for Semiconductors (ITRS) has, since 2011, suggested alternative technologies, designated More than Moore, one of them based on carbon (carbon nanotubes (CNTs) and graphene) [1]. CNTs, with their unique performance and three-dimensionality at the nanoscale, have been regarded as promising elements for miniaturized electronics [2]. CNTs are tubular in geometry and possess a unique set of properties, including ballistic electron transport and a huge current-carrying capacity, which make them of great interest for future microelectronics [2]. Indeed, CNTs might have a key role in the miniaturization of NVFeRAM. Moving from a traditional two-dimensional (2D) design (as is the case of thin films) to a 3D structure (based on a three-dimensional arrangement of one-dimensional structures) will result in higher reliability and better signal sensing, owing to the large contribution from the bottom electrode. One way to achieve this 3D design is by using CNTs. Ferroelectrics (FE) are spontaneously polarized and can have high dielectric constants and interesting pyroelectric, piezoelectric, and electro-optic properties, electronic memories being a key application of FE. However, combining CNTs with FE functional oxides is challenging. The first issue is materials compatibility, since the crystallization temperature of the FE and the oxidation temperature of the CNTs may overlap; in this case, low-temperature processing of the FE is fundamental. Within this context, a systematic study on the fabrication of CNT-FE structures using low-cost, low-temperature methods was carried out in this work. The FE under study comprise lead zirconate titanate (Pb(ZrxTi1-x)O3, PZT), barium titanate (BaTiO3, BT) and bismuth ferrite (BiFeO3, BFO). The various aspects related to fabrication, such as the effect on the thermal stability of the MWCNTs, FE phase formation in the presence of MWCNTs, and the CNT/FE interfaces, are addressed in this work. The ferroelectric response, locally measured by Piezoresponse Force Microscopy (PFM), clearly evidenced that even at low processing temperatures FE on CNTs retains its ferroelectric nature. The work started by verifying the thermal decomposition behaviour of the multiwall CNTs (MWCNTs) used in this work under different conditions.
It was verified that purified MWCNTs are stable up to 420 ºC in air, as no weight loss occurs under non-isothermal conditions, although morphology changes were observed under isothermal conditions at 400 ºC by Raman spectroscopy and Transmission Electron Microscopy (TEM). In an oxygen-rich atmosphere MWCNTs start to oxidize at 200 ºC, whereas in an argon-rich one, under a high heating rate, they remain stable up to 1300 ºC with minimal sublimation. The activation energy for the decomposition of MWCNTs in air was calculated to lie between 80 and 108 kJ/mol. These results are relevant for the fabrication of MWCNT-FE structures. Indeed, we demonstrate that PZT can be deposited by sol-gel at low temperatures on MWCNTs. Particularly interestingly, we prove that MWCNTs decrease the temperature and time for the formation of PZT by ~100 ºC, commensurate with a decrease in activation energy from 68±15 kJ/mol to 27±2 kJ/mol. As a consequence, monophasic PZT was obtained at 575 ºC for MWCNTs-PZT, whereas for pure PZT, where the phase formed by homogeneous nucleation, traces of pyrochlore were still present at 650 ºC. The piezoelectric nature of MWCNTs-PZT synthesised at 500 ºC for 1 h was proved by PFM. Continuing this work, we developed a low-cost methodology for coating MWCNTs using a hybrid sol-gel/hydrothermal method. In this case the FE used as a proof of concept was BT, a well-known lead-free perovskite used in many microelectronic applications whose synthesis by solid-state reaction is typically performed at around 1100 to 1300 ºC, which jeopardizes the combination with MWCNTs. We also illustrate the ineffectiveness of conventional hydrothermal synthesis in this process due to the formation of carbonates, namely BaCO3. The grown MWCNT-BT structures are ferroelectric and exhibit an electromechanical response (15 pm/V). These results have broad implications, since this strategy can also be extended to other materials with high crystallization temperatures. In addition, the coverage of MWCNTs with FE can be optimized, in this case by non-covalent functionalization of the tubes, namely with sodium dodecyl sulfate (SDS). MWCNTs were also used as templates to grow single-phase multiferroic BFO nanorods. This work shows that the use of a nitric solvent results in severe damage to the MWCNT layers, leading to early oxidation of the tubes during the annealing treatment. It was also observed that the nitric solvent results in partial filling of the MWCNTs with BFO due to the low surface tension (<119 mN/m) of the nitric solution; the opening of the caps and the filling of the tubes occur simultaneously during the refluxing step. Furthermore, we verified that MWCNTs have a critical role in the fabrication of monophasic BFO: the oxidation of the CNTs during the annealing process creates an oxygen-deficient atmosphere that restrains the formation of Bi2O3, so monophasic BFO can be obtained. The morphology of the obtained BFO nanostructures indicates that the MWCNTs act as templates for growing 1D BFO structures. Magnetic measurements on these BFO nanostructures revealed a weak ferromagnetic hysteresis loop with a coercive field of 956 Oe at 5 K. We also explored the possible use of vertically aligned multiwall carbon nanotubes (VA-MWCNTs) as bottom electrodes for microelectronics, for example for memory applications. As a proof of concept, BiFeO3 (BFO) films were deposited in situ on the surface of VA-MWCNTs by RF (Radio Frequency) magnetron sputtering.
For an in-situ deposition temperature of 400 ºC and deposition times up to 2 h, the BFO films cover the VA-MWCNTs and no damage occurs either in the film or in the MWCNTs. In spite of the macroscopically lossy polarization behaviour, the ferroelectric nature, domain structure and switching of these conformal BFO films were verified by PFM. A weak ferromagnetic ordering was also proved for BFO films on VA-MWCNTs, with a coercive field of 700 Oe. Our systematic work is a significant step forward in the development of 3D memory cells; it clearly demonstrates that CNTs can be combined with FE oxides and can be used, for example, in the next 3D generation of FeRAMs, without excluding other applications in microelectronics.
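As a concrete illustration of the kinetics analysis mentioned above, the following is a minimal sketch of an Arrhenius-type fit (ln k = ln A − Ea/RT), the usual route to a decomposition activation energy; the temperatures and rate constants below are purely illustrative, not the thesis data.

```python
# Hedged sketch: extracting an activation energy from rate constants at
# several temperatures via a linear Arrhenius fit, ln k = ln A - Ea/(R*T).
import math

R = 8.314                          # gas constant, J/(mol*K)
T = [600.0, 650.0, 700.0]          # temperatures, K (hypothetical)
k = [1.2e-4, 5.6e-4, 2.1e-3]       # rate constants, 1/s (hypothetical)

# Least-squares fit of ln k against 1/T; the slope equals -Ea/R.
x = [1.0 / t for t in T]
y = [math.log(v) for v in k]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
Ea = -slope * R
print(f"Ea ≈ {Ea / 1000:.0f} kJ/mol")  # ~100 kJ/mol for these sample values
```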

Relevance:

60.00%

Publisher:

Abstract:

Reconfigurable computing is becoming an important new alternative for implementing computations. Field programmable gate arrays (FPGAs) are the ideal integrated circuit technology to experiment with the potential benefits of using different strategies of circuit specialization by reconfiguration. The final form of the reconfiguration strategy is often non-trivial to determine. Consequently, in this paper, we examine strategies for reconfiguration and, based on our experience, propose general guidelines for the tradeoffs using an area-time metric called functional density. Three experiments are set up to explore different reconfiguration strategies for FPGAs applied to a systolic implementation of a scalar quantizer used as a case study. Quantitative results for each experiment are given. The regular nature of the example means that the results can be generalized to a wide class of industry-relevant problems based on arrays.
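For reference, a common area-time formulation of functional density (in the spirit of run-time reconfiguration studies such as Wirthlin and Hutchings'; the paper's exact definition may differ) is

\[
D = \frac{1}{A \cdot T},
\]

where A is the silicon area of an implementation and T its execution time per unit of work. Under this metric, a reconfiguration strategy pays off when the area it saves outweighs the reconfiguration overhead it adds to T.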

Relevance:

60.00%

Publisher:

Abstract:

Final Master's Project for obtaining the degree of Master in Electronic and Telecommunications Engineering

Relevance:

60.00%

Publisher:

Abstract:

Debugging electronic circuits is traditionally done with bench equipment directly connected to the circuit under debug. In the digital domain, the difficulties associated with direct physical access to circuit nodes led to the inclusion of resources supporting that activity, first at the printed circuit board level and then at the integrated circuit level. The experience acquired with those solutions led to the emergence of dedicated infrastructures for debugging cores at the system-on-chip level. However, all these developments had little impact in the analog and mixed-signal domain, where debugging still depends, to a large extent, on direct physical access to circuit nodes. As a consequence, when analog and mixed-signal circuits are integrated as cores inside a system-on-chip, the difficulties associated with debugging increase, causing time-to-market and prototype verification costs to increase as well. The present work considers the IEEE 1149.4 infrastructure as a means to support the debugging of mixed-signal circuits, namely to access the circuit nodes, and also an embedded debug mechanism named the mixed-signal condition detector, necessary for watchpoint/breakpoint and real-time analysis operations. One of the main advantages of the proposed solution is the seamless migration to the system-on-chip level, as the access is done through electronic means, thus easing debugging operations at different hierarchical levels.
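To illustrate the idea behind the mixed-signal condition detector, here is a deliberately simplified toy model (hypothetical code, not the actual IEEE 1149.4 hardware): a watchpoint fires whenever the monitored node leaves a programmed voltage window.

```python
# Toy model of a mixed-signal condition detector: report the samples at which
# the monitored analog node crosses out of a programmed [low, high] window.
def condition_detector(samples, low, high):
    """Yield sample indices where the monitored voltage leaves [low, high]."""
    for i, v in enumerate(samples):
        if v < low or v > high:
            yield i  # would raise a breakpoint/trigger in the debug infrastructure

trace = [1.1, 1.2, 1.9, 1.0, 0.4]                 # sampled node voltages (illustrative)
print(list(condition_detector(trace, 0.8, 1.6)))  # -> [2, 4]
```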

Relevance:

60.00%

Publisher:

Abstract:

Dissertation for obtaining the degree of Master in Electronic and Computer Engineering

Relevance:

60.00%

Publisher:

Abstract:

Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider metrics like performance and area efficiency, where the designer aims for the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space based on the experimental execution results of a particular benchmark of algorithms, our approach is to analyse the algorithms formally, considering the main architectural aspects, and to determine how each particular architectural aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation relating the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
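A minimal sketch of the kind of analysis described, assuming a roofline-style model of blocked dense matrix multiplication; the block sizes, peak and bandwidth figures are illustrative, not the paper's actual equation or numbers.

```python
# Roofline-style estimate: attainable throughput of blocked dense matmul
# as a function of external memory bandwidth and block size (illustrative).
def attainable_gflops(peak_gflops, bw_gb_s, block):
    # A b*b*b block multiply performs 2*b^3 flops while moving roughly
    # 3*b^2 doubles (load two blocks, store one), i.e. 24*b^2 bytes.
    intensity = (2.0 * block**3) / (24.0 * block**2)  # flops/byte, ~ b/12
    return min(peak_gflops, intensity * bw_gb_s)

peak = 650.0  # hypothetical peak, GFLOPs (double precision)
bw = 16.0     # external memory bandwidth, GB/s
for b in (32, 128, 512):  # block size bounded by each core's local memory
    print(f"b={b:4d}: {attainable_gflops(peak, bw, b):6.1f} GFLOPs")
# Larger blocks (more local memory) raise arithmetic intensity until the
# chip becomes compute-bound rather than bandwidth-bound.
```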

Relevance:

60.00%

Publisher:

Abstract:

Power-quality monitoring has become increasingly important for managing and characterizing the electrical grid. Studies indicate that the direct costs of poor power quality can amount to about 1.5% of national GDP. On top of these come indirect costs, making this a problem in need of minimization. To minimize the damage caused by degraded power, equipment capable of characterizing electrical energy through monitoring is used. The use of such equipment is governed by power-quality standards, which impose minimum requirements for framing and classifying events on the electrical grid; in this way, consistent data are obtained from different instruments. Monitoring of power parameters is frequently carried out by temporarily installing the equipment on the grid, which means disturbances are only observed after their occurrence. This methodology cannot capture the original electrical event but, at best, later ones that are expected to resemble it. Note, however, that a broad class of events is not repetitive, which limits that methodology. This work describes an alternative to the traditional way of using such equipment. The solution is a power analyzer that is an integral part of the installation and allows continuous monitoring of the electrical grid. This equipment must have a cost low enough to justify this alternative use. The power-quality analyzer to be developed is based on the ADE7880 integrated circuit, which provides a set of power-quality parameters in accordance with the IEC 61000-4-30 and IEC 61000-4-7 standards. The analyzer continuously collects grid-specific data, which are then stored and made available to the user. The collected data are presented to the user for consultation, making it possible to check continuously for anomalies on the grid. The acquired values can also be reused to advantage for many other purposes, such as studies on energy optimization. The present work stems from an alternative use of the WeSense Energy1 device developed by the Evoleo Technologies team. This variant retrieves parameters computed by the ADE7880, such as harmonics, voltage and current transients, and the phase shift between phases, resulting in a new version of the device, WeSense Energy2. Additionally, this work includes remote visualization of the data through a web page.
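As one example of the power-quality quantities involved, here is a minimal sketch of a total-harmonic-distortion calculation in the spirit of IEC 61000-4-7 (the standard's actual harmonic-grouping rules are more elaborate; the amplitudes below are hypothetical).

```python
# Hedged sketch: voltage THD from RMS harmonic amplitudes (illustrative values).
import math

fundamental = 230.0                    # V RMS, hypothetical fundamental
harmonics = {3: 4.6, 5: 11.5, 7: 3.2}  # harmonic order -> V RMS, hypothetical

# THD = sqrt(sum of squared harmonic amplitudes) / fundamental amplitude
thd = math.sqrt(sum(v * v for v in harmonics.values())) / fundamental
print(f"THD ≈ {100 * thd:.1f} %")  # ~5.6 % for these sample values
```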

Relevance:

60.00%

Publisher:

Abstract:

The main goal of this thesis is to investigate FPAA/dpASP devices and technologies applied to control systems. These devices provide an easy way to emulate analog circuits that can be reconfigured with programming tools from the manufacturers and, in the case of dpASPs, can be dynamically reconfigured on the fly. Different kinds of commercially available technologies are described, as well as academic projects from research groups. These technologies are very recent and are developing rapidly to reach the level of flexibility and integration needed to penetrate the market more easily. As with CPLDs/FPGAs, FPAA/dpASP technologies aim to increase productivity, reduce development time, and make future hardware reconfigurations easier and cheaper. FPAAs/dpASPs still have some limitations compared with classic analog circuits, owing to lower working frequencies and to the emulation of complex circuits that require more components inside the integrated circuit. However, they have great advantages in sensor signal conditioning, filter circuits and control systems. This thesis focuses on practical implementations of these technologies for PID control systems. The results of the experiments confirm the efficacy of FPAAs/dpASPs in signal conditioning and control systems.
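As a reference for the control side, here is a minimal discrete PID sketch of the kind one might emulate on an FPAA/dpASP; the gains and time step are hypothetical, and a production design would add anti-windup and output limiting.

```python
# Minimal discrete PID controller sketch (illustrative gains, no anti-windup).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt              # rectangular integration
        deriv = (err - self.prev_err) / self.dt     # backward difference
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)          # hypothetical tuning
print(pid.update(setpoint=1.0, measurement=0.2))    # first output of a step response
```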

Relevance:

60.00%

Publisher:

Abstract:

Position-sensitive particle detectors are needed in high-energy physics research. This thesis describes the development of fabrication processes and characterization techniques for the silicon microstrip detectors used in the search for elementary particles at the European centre for nuclear research, CERN. The detectors give an electrical signal along a particle's trajectory after a collision in the particle accelerator. The trajectories give information about the nature of the particle in the struggle to reveal the structure of matter and the universe. Detectors made of semiconductors have better position resolution than conventional wire chamber detectors. Silicon is overwhelmingly used as the detector material because of its low cost and its standard use in the integrated circuit industry. After a short spreadsheet analysis of the basic building block of radiation detectors, the pn junction, the operation of a silicon radiation detector is discussed in general. The microstrip detector is then introduced and the detailed structure of a double-sided ac-coupled strip detector presented. The fabrication aspects of strip detectors are discussed, starting from process development and general principles and ending with a description of the double-sided ac-coupled strip detector process. Recombination and generation lifetime measurements in radiation detectors are discussed briefly. The results of electrical tests, i.e. measurements of the leakage currents and bias resistors, are presented. The beam test setups and the results, the signal-to-noise ratio and the position accuracy, are then described. Earlier research found that heavy irradiation changes the properties of radiation detectors dramatically. A scanning electron microscope method was developed to measure the electric potential and field inside irradiated detectors, to see how a high radiation fluence changes them. The method and the most important results are discussed briefly.
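A back-of-the-envelope sketch of the pn-junction analysis mentioned above: the depletion width of a one-sided abrupt junction, W = sqrt(2εV/(qNeff)); the parameter values are typical of silicon strip detectors, not the thesis measurements.

```python
# Hedged sketch: depletion width of a one-sided abrupt pn junction in silicon.
import math

q = 1.602e-19           # elementary charge, C
eps = 11.9 * 8.854e-12  # silicon permittivity, F/m
Neff = 1e18             # effective doping, m^-3 (= 1e12 cm^-3), hypothetical
V = 80.0                # reverse bias voltage, V, hypothetical

W = math.sqrt(2 * eps * V / (q * Neff))
print(f"depletion width ≈ {W * 1e6:.0f} µm")  # ~320 µm, i.e. a typical wafer thickness
```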

Relevance:

60.00%

Publisher:

Abstract:

On-chip multiprocessor (OCM) systems are considered the best structures for occupying the space available on current integrated circuits. In our work, we focus on an architectural model, called the isometric on-chip multiprocessor architecture, which makes it possible to evaluate, predict and optimize OCM systems through an efficient organization of the nodes (processors and memories), and on methodologies for using these architectures efficiently. In the first part of the thesis, we address the topology of the model and propose an architecture that makes efficient and massive use of on-chip memories. Processors and memories are organized according to an isometric approach, which consists in bringing data closer to the processes rather than optimizing transfers between conventionally placed processors and memories. The architecture is a three-dimensional mesh model. The placement of the units is inspired by the crystal structure of sodium chloride (NaCl), in which each processor can access six memories at once and each memory can communicate with as many processors at once. In the second part of our work, we address a decomposition methodology in which the number of nodes of the model is ideal and can be determined from a matrix specification of the application to be processed by the proposed model. Since the performance of a model depends on the amount of data flow exchanged between its units, in this case their number, and since our goal is to guarantee good computational performance for the application at hand, we propose to find the ideal number of processors and memories for the system to be built. We also consider decomposing the specification of the model to be built, or of the application to be processed, according to the load balance of the units. We thus propose a three-part decomposition approach: transforming the specification or the application into an incidence matrix whose elements are the data flows between processes and data; a new methodology based on the Cell Formation Problem (CFP); and load balancing of processes across processors and of data across memories. In the third part, still aiming for an efficient, high-performance system, we address the assignment of processors and memories through a two-step methodology. First, we assign units to the nodes of the system, considered here as an undirected graph; second, we assign values to the edges of that graph. For the assignment, we propose to model the decomposed applications with a matrix approach and to use the Quadratic Assignment Problem (QAP). For assigning values to the edges, we propose a gradual perturbation approach to search for the best combination of assignment costs, while respecting parameters such as temperature, heat dissipation, energy consumption and the chip area occupied.
The ultimate goal of this work is to offer architects of on-chip multiprocessor systems a non-traditional methodology and a systematic, efficient design-support tool usable from the functional specification phase of the system onward.
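A toy sketch of the NaCl-like placement described above, assuming a simple cubic grid where node parity decides between processor and memory; the grid size and naming are illustrative.

```python
# NaCl-style placement: on a 3D integer grid, coordinate parity alternates
# processors ('P') and memories ('M'), so every interior processor has
# exactly six memory neighbours, and vice versa.
def node_kind(x, y, z):
    return "P" if (x + y + z) % 2 == 0 else "M"

def neighbours(x, y, z, n):
    for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < n and 0 <= ny < n and 0 <= nz < n:
            yield node_kind(nx, ny, nz)

# An interior processor sees exactly six memories:
print(node_kind(2, 2, 2), list(neighbours(2, 2, 2, 5)))  # P ['M','M','M','M','M','M']
```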

Relevance:

60.00%

Publisher:

Abstract:

While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds are placing a large burden on the energy efficiency of high-speed links and make the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides a deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
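A toy statistical-simulation sketch in the spirit of the analysis above, assuming binary antipodal signalling, Gaussian noise, and a handful of hypothetical residual-ISI taps; it enumerates the interfering-symbol patterns exactly rather than sampling them as a Monte Carlo method would.

```python
# Hedged sketch: exact bit-error probability under residual ISI by enumerating
# all interfering-symbol patterns (binary antipodal, unit main cursor).
import itertools, math

def q(x):  # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

taps = [0.1, -0.05, 0.03]  # residual post-equalization ISI taps, hypothetical
sigma = 0.12               # noise standard deviation, same units, hypothetical

# Received sample for a transmitted '+1' is 1 + isi + noise; an error occurs
# when it falls below the zero threshold, with probability Q((1 + isi)/sigma).
ber = 0.0
for pattern in itertools.product([-1, 1], repeat=len(taps)):
    isi = sum(s * t for s, t in zip(pattern, taps))
    ber += q((1.0 + isi) / sigma)
ber /= 2 ** len(taps)  # average over equiprobable interferer patterns
print(f"BER ≈ {ber:.3e}")
```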

Relevance:

60.00%

Publisher:

Abstract:

Thermally stable materials with a low dielectric constant (k < 3.9) are being hotly pursued. They are essential as interlayer/intermetal dielectrics in integrated circuit technology, where they reduce parasitic capacitance and decrease the RC time constant. Most of the currently employed materials are based on silicon. Low-k films based on organic polymers are thought to be a viable alternative, as they are easily processable and can be synthesized with simpler techniques. It is known that ac/rf plasma polymerization yields good-quality organic thin films, which are homogeneous, pinhole-free and thermally stable. Such polymer thin films are potential candidates for fabricating Schottky devices, storage batteries, LEDs, sensors and supercapacitors, and for EMI shielding. Recently, great efforts have been made to find alternatives to silicon-based materials for preparing low-dielectric-constant thin films. Polyaniline thin films were prepared by an rf plasma polymerization technique. Capacitance, dielectric loss, dielectric constant and ac conductivity were evaluated in the frequency range 100 Hz to 1 MHz. Capacitance and dielectric loss decrease with increasing frequency and increase with increasing temperature; this behaviour was found to be in good agreement with an existing model. The ac conductivity was calculated from the observed dielectric constant and is explained based on the Austin–Mott model for hopping conduction. These films exhibit low dielectric constant values that are stable over a wide range of frequencies, making them probable candidates for low-k applications.
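For reference, the standard relation commonly used to obtain ac conductivity from dielectric data is σ_ac = ωε₀ε_r·tanδ; a minimal sketch with illustrative values follows (the permittivity and loss tangent are hypothetical, not the paper's measurements).

```python
# Hedged sketch: ac conductivity from relative permittivity and loss tangent,
# sigma_ac = omega * eps0 * eps_r * tan(delta), at a few spot frequencies.
import math

eps0 = 8.854e-12        # vacuum permittivity, F/m
eps_r = 3.5             # relative permittivity, hypothetical
tan_d = 0.02            # dielectric loss tangent, hypothetical

for f in (1e2, 1e4, 1e6):  # Hz, spanning the measured 100 Hz - 1 MHz range
    sigma = 2 * math.pi * f * eps0 * eps_r * tan_d
    print(f"{f:.0e} Hz: sigma_ac ≈ {sigma:.2e} S/m")
```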

Relevance:

60.00%

Publisher:

Abstract:

One of the most prominent industrial applications of heat transfer science and engineering has been electronics thermal control. Driven by the relentless increase in spatial density of microelectronic devices, integrated circuit chip powers have risen by a factor of 100 over the past twenty years, with a somewhat smaller increase in heat flux. The traditional approaches using natural convection and forced-air cooling are becoming less viable as power levels increase. This paper provides a high-level overview of the thermal management problem from the perspective of a practitioner, as well as speculation on the prospects for electronics thermal engineering in years to come.
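A back-of-the-envelope illustration of why the power trend strains air cooling, assuming a simple junction-to-ambient thermal-resistance model; the resistance and ambient values are hypothetical.

```python
# Hedged sketch: junction temperature Tj = Ta + P * theta_ja for rising chip power.
theta_ja = 0.5    # K/W, junction-to-ambient resistance of a good forced-air sink (assumed)
t_ambient = 35.0  # °C, assumed enclosure ambient

for power in (1.0, 10.0, 100.0):  # W, tracking the ~100x rise mentioned above
    tj = t_ambient + power * theta_ja
    print(f"{power:5.0f} W -> Tj ≈ {tj:.0f} °C")
# Even with a good heat sink, a 100x power increase pushes Tj toward typical limits.
```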

Relevance:

60.00%

Publisher:

Abstract:

Tests on printed circuit boards and integrated circuits are widely used in industry, resulting in reduced design time and project cost. Functional and connectivity tests in this type of circuit soon became a concern for manufacturers, prompting the search for a reliable, quick, cheap and universal solution. Initially, test schemes were based on a set of needles connected to the inputs and outputs of the circuit board (bed-of-nails), to which signals were applied in order to verify whether the circuit met the specifications and could be assembled on the production line. With the development of projects, circuit miniaturization, improvements in production processes and materials, and the growing number of circuits, another solution had to be found. Thus Boundary-Scan Testing was developed; it operates at the boundary of integrated circuits and allows testing the connectivity of a circuit's input and output ports. The Boundary-Scan Testing method was standardized by the IEEE in 1990 as the IEEE 1149.1 Standard, and since then a large number of manufacturers have adopted it in their products. The main objective of this master's thesis is the design of Boundary-Scan Testing in a CMOS image sensor: analysing the standard's requirements and the process used in prototype production, developing the Boundary-Scan design and layout, and analysing the results obtained after production.
Chapter 1 briefly presents the evolution of testing procedures used in industry, developments and applications of image sensors, and the motivation for using the Boundary-Scan Testing architecture. Chapter 2 explores the fundamentals of Boundary-Scan Testing and image sensors, starting with the Boundary-Scan architecture defined in the Standard, whose functional blocks are analysed; this understanding is necessary to implement the design on an image sensor. It also explains the architecture of image sensors currently in use, focusing on sensors with a large number of inputs and outputs. Chapter 3 describes the implemented Boundary-Scan design, analysing the design and functions of the prototype, the software used, and the designs and simulations of the implemented Boundary-Scan functional blocks. Chapter 4 presents the layout process based on the design developed in Chapter 3, describing the software used, the planning of the layout location (floorplan) and its dimensions, the layout of the individual blocks, the layout-rule checks, the comparison with the final design and, finally, the simulation. Chapter 5 describes how the functional tests were performed to verify the design's compliance with the specifications of the IEEE 1149.1 Standard; these tests focused on the application of signals to the input and output ports of the produced prototype. Chapter 6 presents the conclusions drawn throughout the work.
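As an illustration of the boundary-scan principle at the heart of the thesis, here is a toy sketch of bits shifting through a chain of boundary-scan cells; this is hypothetical code, and a real IEEE 1149.1 implementation also involves the TAP controller state machine and the instruction register.

```python
# Toy model of the Shift-DR operation: serial data enters at TDI, every TCK
# each boundary cell takes its neighbour's value, and the last cell drives TDO.
def shift_bsr(chain, tdi_bits):
    """Shift bits into the boundary-scan register; return bits seen on TDO."""
    tdo = []
    for bit in tdi_bits:
        tdo.append(chain[-1])          # last cell drives TDO
        chain[:] = [bit] + chain[:-1]  # one TCK edge shifts the whole chain
    return tdo

bsr = [0, 0, 0, 0]                     # four hypothetical boundary cells
captured = shift_bsr(bsr, [1, 0, 1, 1])
print(bsr, captured)                   # chain now holds the shifted-in pattern
```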