885 results for Integrated Circuit (IC)
Abstract:
Chemorheology (and thus process modeling) of the highly filled thermosets used in integrated circuit (IC) packaging is complicated by their high filler loading, fast cure kinetics, and viscoelastic nature. This article summarizes a more thorough chemorheological analysis of a typical IC packaging thermoset, including novel isothermal and nonisothermal multiwave parallel-plate chemorheology. This new chemorheological analysis may be used to optimize existing IC packaging processes and to design new ones. (C) 1997 John Wiley & Sons, Inc.
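As a hedged illustration of the kind of cure-kinetics and chemoviscosity modeling such an analysis feeds, the sketch below combines a Kamal autocatalytic rate law with a Castro-Macosko viscosity model, both standard in this literature; all parameter values are assumptions for illustration, not the article's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed, not from the article).
A1, E1 = 1.0e5, 60e3      # pre-exponential [1/s], activation energy [J/mol]
A2, E2 = 5.0e5, 55e3
m, n = 0.5, 1.5           # Kamal reaction orders
R = 8.314                 # gas constant [J/(mol K)]

def kamal_rate(t, alpha, T):
    """Kamal autocatalytic cure kinetics: da/dt = (k1 + k2*a^m)*(1-a)^n."""
    k1 = A1 * np.exp(-E1 / (R * T))
    k2 = A2 * np.exp(-E2 / (R * T))
    return (k1 + k2 * alpha[0]**m) * (1.0 - alpha[0])**n

def castro_macosko(alpha, T, eta0=10.0, alpha_gel=0.7, c1=1.5, c2=1.0):
    """Castro-Macosko chemoviscosity: eta diverges as alpha -> alpha_gel."""
    return eta0 * (alpha_gel / (alpha_gel - alpha))**(c1 + c2 * alpha)

T = 448.15  # isothermal cure at 175 degrees C (illustrative)
sol = solve_ivp(kamal_rate, (0, 120), [0.0], args=(T,), dense_output=True)
for t in (10, 30, 60):
    a = sol.sol(t)[0]
    # clamp just below the gel point to avoid the viscosity divergence
    print(f"t={t:4.0f} s  alpha={a:.3f}  eta~{castro_macosko(min(a, 0.69), T):.1f} Pa.s")
```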
Abstract:
The construction of a flow-through cell incorporating an array of gold microelectrodes is described and its application to flow injection analysis with amperometric detection is presented. Simple modification of almost any conventional integrated circuit chip, used as an inexpensive source of pre-assembled gold micro-wires, leads to the rapid and successful preparation of arrays of 8-48 elements. The polymeric encapsulation material on the top face of the chip is removed by abrasion until the gold micro-wires (used to interconnect the silicon circuit to the external contact pins of the chip) are disrupted and their transversal (elliptical) sections become exposed. Once polished, the flat and smooth top surface of the gold microelectrode-array chip (MEAC) is provided with a spacer and fitted under pressure against an acrylic block holding the reference and auxiliary electrodes to form the electrochemical (thin-layer) flow cell, while the contact pins are plugged into a standard IC socket. This design ensures autonomous electric contact with each electrode and allows fast dismantling for polishing or substitution. The performance of flow cells with MEACs was investigated using the technique of reverse pulse amperometry without oxygen removal. A method was established for the determination of the copper concentration in sugar cane spirit, which is regulated by law for beverages. Samples from industrial producers and small-scale (alembic) brewers were compared. With a 24-element MEAC, a detection limit of 30 μg l^(-1) of copper (4.7 x 10^(-7) mol l^(-1) of Cu(II) for 100 μl injections) was calculated. Routine operation was established at a frequency of 60-90 determinations per hour. Intercomparison with atomic absorption spectrometric determinations resulted in excellent agreement.
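The detection-limit conversion quoted above can be checked with a one-line unit conversion (molar mass of Cu is about 63.55 g/mol):

```python
# Verify the conversion stated in the abstract:
# 30 micrograms/L of copper expressed as a molar concentration.
mass_conc = 30e-6            # g per litre
M_Cu = 63.55                 # g/mol, molar mass of copper
molar_conc = mass_conc / M_Cu
print(f"{molar_conc:.2e} mol/L")   # ~4.7e-07 mol/L, matching the abstract
```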
Abstract:
A simple and inexpensive way to fabricate arrays of gold microelectrodes is proposed. Integrated circuit chips are sawed through their middle, normal to the longest axis, destroying the silicon circuit and rupturing the gold wires that interconnect it with the external terminals. Polishing the resulting rough surface converts the tips of the wires embedded in the chip halves into arrays of gold microdisks of about 25 μm diameter. The number of active microelectrodes (MEs) of an array depends on the number of pins in the chip, n, being typically (n/2)-4. These MEs can be used individually or externally interconnected in any combination. X-ray images of the chips and micrographs of the polished array surfaces have revealed variable distances between neighboring MEs, which are, however, larger than 10 times the radius of the disks. This feature prevents diffusional cross-talk between electrodes. When used for analytical purposes, these microdisk electrodes exhibit sigmoidal voltammograms, and chronoamperometric experiments confirm the nonlinear i vs. t^(-1/2) plots typical of processes where radial diffusion prevails. Satisfactory uniformity was observed in the response of each electrode of an array, indicating similarity of geometry and disk areas. The potential of these MEs was demonstrated by the determination of cadmium at ppb levels using square wave voltammetry with preconcentration. Given the relative ease with which these MEs can be manufactured and their good performance in chemical analysis, wide application in electrochemistry and electroanalysis is envisioned.
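To see why radial diffusion gives the time-independent, sigmoidal response described above, the sketch below compares the standard planar (Cottrell) current with the standard steady-state microdisk current i_ss = 4nFDCr. The 12.5 μm radius follows the 25 μm disk diameter in the abstract; the diffusion coefficient, concentration and electron count are illustrative assumptions.

```python
import numpy as np

# Steady-state current at a microdisk where radial diffusion prevails:
#   i_ss = 4 n F D C r                  (standard microelectrode result)
# versus the planar Cottrell current:
#   i(t) = n F A C sqrt(D / (pi t))
n_e = 2                    # electrons transferred (illustrative)
F = 96485.0                # Faraday constant [C/mol]
D = 7e-10                  # m^2/s, assumed diffusion coefficient
C = 1e-3                   # mol/m^3 (1 umol/L, illustrative)
r = 12.5e-6                # m, disk radius (25 um diameter, as above)

i_ss = 4 * n_e * F * D * C * r
A = np.pi * r**2
for t in (0.01, 0.1, 1.0, 10.0):
    i_cottrell = n_e * F * A * C * np.sqrt(D / (np.pi * t))
    print(f"t={t:5.2f} s  planar={i_cottrell:.2e} A  steady-state={i_ss:.2e} A")
# At longer times the planar term decays below i_ss, so radial diffusion
# dominates and Cottrell-type plots become nonlinear for microdisks.
```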
Abstract:
"Supported by the Defense Advanced Research Projects Agency ... and the National Bureau of Standards."
Abstract:
Plasma or "dry" etching is an essential process for the production of modern microelectronic circuits. However, despite intensive research, many aspects of the etch process are not fully understood. This thesis presents the results of studies of the plasma etching of Si and SiO2 in fluorine-containing discharges, together with the complementary technique of plasma polymerisation. Optical emission spectroscopy with argon actinometry was used as the principal plasma diagnostic. Statistical experimental design was used to model and compare Si and SiO2 etch rates in CF4 and SF6 discharges as a function of flow, pressure and power. Etch mechanisms in both systems, including the potential reduction of Si etch rates in CF4 due to fluorocarbon polymer formation, are discussed. Si etch rates in CF4/SF6 mixtures were successfully accounted for by the models produced. Si etch rates in CF4/C2F6 and CHF3 as a function of the addition of oxygen-containing additives (O2, N2O and CO2) are shown to be consistent with a simple competition between F, O and CFx species for Si surface sites. For the range of conditions studied, SiO2 etch rates were not dependent on F-atom concentration, but the presence of fluorine was essential in order to achieve significant etch rates. The influence of a wide range of electrode materials on the etch rate of Si and SiO2 in CF4 and CF4/O2 plasmas was studied. It was found that the Si etch rate in a CF4 plasma was considerably enhanced, relative to an anodised aluminium electrode, in the presence of soda glass or sodium- or potassium-"doped" quartz. The effect was even more pronounced in a CF4/O2 discharge, in which lead and copper electrodes also enhanced the Si etch rate. These results could not be accounted for by a corresponding rise in atomic fluorine concentration. Three possible etch enhancement mechanisms are discussed. Fluorocarbon polymer deposition was studied, both because of its relevance to etch mechanisms and for its intrinsic interest, as a function of fluorocarbon source gas (CF4, C2F6, C3F8 and CHF3), process time, RF power and percentage hydrogen addition. Gas-phase concentrations of F, H and CF2 were measured by optical emission spectroscopy, and the resultant polymer structure was determined by X-ray photoelectron spectroscopy and infrared spectroscopy. Thermal and electrical properties were also measured. Hydrogen additions are shown to have a dominant role in determining deposition rate and polymer composition. A qualitative description of the polymer growth mechanism is presented which accounts for both the changes in growth rate and structure, and leads to an empirical deposition rate model.
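A minimal sketch of the statistical-experimental-design step described above (fitting etch rate as a function of flow, pressure and power, with two-factor interactions) might look as follows; the data are synthetic placeholders, not the thesis's measurements.

```python
import numpy as np

# Synthetic coded factors in [-1, 1], as in a typical factorial design.
rng = np.random.default_rng(0)
flow, pressure, power = (rng.uniform(-1, 1, 20) for _ in range(3))

# Placeholder response: an assumed linear-plus-interaction etch-rate model.
etch = (100 + 30*power + 15*flow - 10*pressure + 5*power*flow
        + rng.normal(0, 2, 20))

# Least-squares fit of main effects and two-factor interactions.
X = np.column_stack([np.ones(20), flow, pressure, power,
                     flow*pressure, flow*power, pressure*power])
coef, *_ = np.linalg.lstsq(X, etch, rcond=None)
for name, c in zip(["intercept", "flow", "pressure", "power",
                    "flow*pressure", "flow*power", "pressure*power"], coef):
    print(f"{name:15s} {c:+7.2f}")
```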
Abstract:
To tackle challenges in circuit-level and system-level VLSI and embedded system design, this dissertation proposes several novel algorithms to explore efficient solutions. At the circuit level, a new reliability-driven minimum-cost Steiner routing and layer assignment scheme is proposed, along with the first transceiver insertion algorithmic framework for optical interconnect. At the system level, a reliability-driven task scheduling scheme for multiprocessor real-time embedded systems is proposed, which optimizes system energy consumption under stochastic fault occurrences. Embedded system design is also widely used in the smart home area for improving health, wellbeing and quality of life. The proposed scheduling scheme for multiprocessor embedded systems is therefore extended to handle energy consumption scheduling for smart homes: the extended scheme arranges the operation of household appliances to minimize a customer's monetary expense under a time-varying pricing model, as sketched below.
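A minimal sketch of that appliance-scheduling idea, assuming a simple greedy placement of each appliance's contiguous run into the cheapest window of a time-varying tariff; prices, durations and loads are invented for illustration, and a full scheme would also handle capacity limits and appliance interactions.

```python
# Hourly tariff and appliances as (duration in hours, load in kW); all
# values are illustrative assumptions, not the dissertation's data.
prices = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25]        # $/kWh per slot
appliances = {"washer": (2, 1.0), "dryer": (1, 2.5)}

def cheapest_window(prices, duration):
    """Start index of the contiguous window with the lowest total price."""
    costs = [sum(prices[s:s+duration])
             for s in range(len(prices) - duration + 1)]
    return costs.index(min(costs))

total = 0.0
for name, (dur, kw) in appliances.items():
    s = cheapest_window(prices, dur)
    cost = kw * sum(prices[s:s+dur])
    total += cost
    print(f"{name}: slots {s}..{s+dur-1}, cost ${cost:.2f}")
print(f"total ${total:.2f}")
```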
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug, but, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although the two act on different facets of the monitored system and are not exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. To this purpose, a tool to automatically generate PD-based stimuli sources was developed, along with a second tool to generate functional coverage models that fit exactly to the PD-based input space. Together, the input stimuli and coverage model enhancements resulted in a notable increase in testbench efficiency compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
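A hedged sketch of the Parameter-Domain idea, with invented parameter names and a single invented constraint: prune invalid combinations from the cross product of the domains up front, then draw constrained-random stimuli and count coverage bins only over what remains.

```python
import itertools, random

# Hypothetical parameter domains and validity constraint (illustration only).
domains = {
    "burst_len": [1, 2, 4, 8],
    "addr_mode": ["incr", "wrap", "fixed"],
    "width":     [8, 16, 32],
}

def valid(p):
    # Example constraint: wrapping bursts must have burst_len > 1.
    return not (p["addr_mode"] == "wrap" and p["burst_len"] == 1)

# Remove invalid scenarios from the input space before any simulation.
space = [dict(zip(domains, vals))
         for vals in itertools.product(*domains.values())]
pruned = [p for p in space if valid(p)]
print(f"{len(space)} raw cases -> {len(pruned)} after pruning")

# Coverage model fitted exactly to the pruned space: one bin per remaining
# combination; random stimulation stops once every bin has been hit.
covered, rng = set(), random.Random(0)
while len(covered) < len(pruned):
    covered.add(tuple(rng.choice(pruned).items()))
print(f"all {len(covered)} bins covered")
```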
Abstract:
Highly filled thermosets are used in applications such as integrated circuit (IC) packaging. However, a detailed understanding of the effects of the fillers on the macroscopic cure properties is limited by the complex cure of such systems. This work systematically quantifies the effects of filler content on the kinetics, gelation and vitrification of a model silica-filled epoxy/amine system in order to begin to understand the role of the filler in IC packaging cure. At high cure temperatures (100 °C and above) there appears to be no effect of fillers on cure kinetics or on gelation and vitrification times. However, at low cure temperatures (60-90 °C), a decrease in the gelation and vitrification times and an increase in the reaction rate are seen with increasing filler content. These results are explained in terms of catalysis of the epoxy-amine reaction by hydrogen-donor species present on the silica surface, together with interfacial effects.
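As a hedged illustration of how such gelation-time data are commonly analyzed, the sketch below fits an Arrhenius form, t_gel = A·exp(Ea/RT), to two assumed data points and shows how a low-temperature catalytic rate increase maps to a proportionally shorter gel time; none of the numbers are from the paper.

```python
import numpy as np

R = 8.314                              # J/(mol K)
T = np.array([333.15, 373.15])         # 60 and 100 degrees C
t_gel = np.array([5400.0, 600.0])      # s, assumed unfilled gel times

# Two-point Arrhenius fit: ln(t1/t2) = (Ea/R) * (1/T1 - 1/T2).
E_a = R * np.log(t_gel[0] / t_gel[1]) / (1/T[0] - 1/T[1])
print(f"E_a ~ {E_a/1e3:.1f} kJ/mol")

# An assumed 20% surface-catalysed rate increase at 60 degrees C shortens
# the gel time there, while high-T behaviour is left essentially unchanged.
print(f"filled t_gel(60 C) ~ {t_gel[0]/1.2:.0f} s vs {t_gel[0]:.0f} s unfilled")
```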
Abstract:
Floating-point computing with more than one TFLOP of peak performance is already a reality in recent Field-Programmable Gate Arrays (FPGA). General-Purpose Graphics Processing Units (GPGPU) and recent many-core CPUs have also taken advantage of recent technological innovations in integrated circuit (IC) design and have dramatically improved their peak performances. In this paper, we compare the trends of these computing architectures for high-performance computing and survey these platforms in the execution of algorithms belonging to different scientific application domains. Trends in peak performance, power consumption and sustained performance for particular applications show that the gap between FPGAs and GPUs or many-core CPUs is widening, moving FPGAs away from high-performance computing with intensive floating-point calculations. FPGAs remain competitive for custom floating-point or fixed-point representations, for smaller input sizes of certain algorithms, for combinational logic problems and for parallel map-reduce problems. © 2014 Technical University of Munich (TUM).
Abstract:
The study of the thermal behavior of complex packages such as multichip modules (MCMs) is usually carried out by measuring the so-called thermal impedance response, that is, the transient temperature after a power step. From the analysis of this signal, the thermal frequency response can be estimated and, consequently, compact thermal models may be extracted. We present a method to obtain an estimate of the time constant distribution underlying the observed transient. The method is based on an iterative deconvolution that produces an approximation to the time constant spectrum while preserving a convenient convolution form. The method is applied both to the thermal response of a microstructure analyzed by the finite element method and to the measured thermal response of a transistor array integrated circuit (IC) in an SMD package.
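A minimal sketch of the underlying idea, assuming the standard logarithmic-time formulation in which the derivative of the heating curve is the convolution of the time-constant spectrum R(z) with the fixed kernel w(z) = exp(z - e^z). The Van Cittert iteration used here is one simple deconvolution that preserves that convolution form; the two-peak spectrum is synthetic, not the paper's data.

```python
import numpy as np
from scipy.signal import find_peaks

z = np.linspace(-6, 6, 600)            # z = ln(t)
dz = z[1] - z[0]
w = np.exp(z - np.exp(z))              # fixed convolution kernel

# Synthetic spectrum: two lumped time constants (Foster-style model).
R_true = 1.0*np.exp(-((z + 2)**2)/0.05) + 0.5*np.exp(-((z - 1)**2)/0.05)
d = np.convolve(R_true, w, mode="same") * dz   # "measured" da/dz

# Van Cittert iterative deconvolution: R += step * (d - w (*) R),
# clipping to keep the estimated spectrum non-negative.
R_est = np.zeros_like(z)
for _ in range(100):
    resid = d - np.convolve(R_est, w, mode="same") * dz
    R_est = np.clip(R_est + 0.5 * resid, 0.0, None)

idx, _ = find_peaks(R_est, height=0.1, distance=40)
print("recovered peaks at z = ln(tau):", np.round(z[idx], 2))
```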
Abstract:
On-chip multiprocessor (OCM) systems are considered the best structures for occupying the space available on today's integrated circuits. In this work, we focus on an architectural model, called the isometric on-chip multiprocessor architecture, which makes it possible to evaluate, predict and optimize OCM systems through an efficient organization of the nodes (processors and memories), and on methodologies for using these architectures effectively. In the first part of the thesis, we address the topology of the model and propose an architecture that allows efficient and massive use of on-chip memories. The processors and memories are organized according to an isometric approach that brings data close to the processes, rather than optimizing transfers between conventionally arranged processors and memories. The architecture is a three-dimensional mesh model. The placement of units on this model is inspired by the crystal structure of sodium chloride (NaCl), where each processor can access six memories at once and each memory can communicate with as many processors at once (a layout sketched below). In the second part of our work, we address a decomposition methodology in which the ideal number of nodes of the model can be determined from a matrix specification of the application to be handled by the proposed model. Since the performance of a model depends on the amount of data flow exchanged between its units, and hence on their number, and since our goal is to guarantee good computational performance for the application at hand, we propose to find the ideal number of processors and memories for the system to be built. We also consider decomposing the specification of the model to be built, or of the application to be handled, according to the load balance of the units. We thus propose a three-point decomposition approach: transforming the specification or the application into an incidence matrix whose elements are the data flows between processes and data; a new methodology based on the Cell Formation Problem (CFP); and load balancing of processes across processors and of data across memories. In the third part, still aiming for an efficient and high-performing system, we address the assignment of processors and memories through a two-step methodology. First, we assign units to the nodes of the system, considered here as an undirected graph; second, we assign values to the edges of this graph. For the assignment, we propose modeling the decomposed applications using a matrix approach and the Quadratic Assignment Problem (QAP). For assigning values to the edges, we propose a gradual perturbation approach to search for the best combination of assignment costs, while respecting parameters such as temperature, heat dissipation, energy consumption and the chip area occupied.
The ultimate goal of this work is to offer architects of on-chip multiprocessor systems a non-traditional methodology and a systematic, efficient design-aid tool usable from the functional specification phase of the system onward.
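A small sketch of the NaCl-inspired placement from the first part, assuming a cubic mesh in which the unit type alternates with coordinate parity (as in a rock-salt lattice), so that every unit's six axial neighbours are of the opposite kind; the mesh size is an arbitrary example.

```python
import itertools

N = 4  # illustrative mesh size

def kind(x, y, z):
    """Processor on even-parity sites, memory on odd-parity sites."""
    return "P" if (x + y + z) % 2 == 0 else "M"

def neighbours(x, y, z):
    """Six axial neighbours inside the N x N x N mesh."""
    for dx, dy, dz in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < N and 0 <= ny < N and 0 <= nz < N:
            yield nx, ny, nz

# Check the defining property: every neighbour is of the other type, so
# each interior processor sees six memories and vice versa.
for p in itertools.product(range(N), repeat=3):
    assert all(kind(*q) != kind(*p) for q in neighbours(*p))
interior = (N - 2)**3
print(f"OK: {N**3} units, {interior} interior units with 6 opposite-type neighbours")
```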
Abstract:
The work of this thesis focuses on the implementation of microelectronic voltage-sensing devices for transmitting and extracting analog information between devices of a different nature at short distances or upon contact. Initially, chip-to-chip communication was studied, and circuitry for 3D capacitive coupling was implemented. Such circuits allow communication between dies fabricated in different technologies. Because of their novelty, they are not standardized and are currently not supported by standard CAD tools. To overcome this burden, a novel approach for the characterization of such communication links is proposed, resulting in shorter design times and increased accuracy. Communication between an integrated circuit (IC) and a probe card has been extensively studied as well. Today, wafer probing is a costly test procedure with many drawbacks, which could be overcome by a different communication approach such as capacitive coupling. For this reason, wireless wafer probing has been investigated as an alternative to standard on-contact wafer probing. Interfaces between integrated circuits and biological systems have also been investigated. Active electrodes for simultaneous electroencephalography (EEG) and electrical impedance tomography (EIT) have been implemented for the first time in a 0.35 um process. The number of wires has been minimized by sharing the analog outputs and the supply on a single wire, yielding electrodes that require only 4 wires for their operation. Minimizing the wiring reduces cable weight and thus limits the patient's discomfort. The physical channel for communication between an IC and a biological medium is the electrode itself. As this is a crucial point for biopotential acquisition, a large effort was made to investigate different electrode technologies and geometries, and an electromagnetic model is presented to characterize the properties of the electrode-to-skin interface.
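As a hedged illustration of the final point, the sketch below evaluates a common electrode-skin equivalent circuit (a series resistance feeding a parallel RC interface); the component values are assumptions for illustration, not those identified in the thesis.

```python
import numpy as np

# Common electrode-skin equivalent circuit: series resistance R_s in front
# of the interface, modeled as charge-transfer resistance R_ct in parallel
# with double-layer capacitance C_dl. Values below are assumed.
R_s, R_ct, C_dl = 1e3, 100e3, 50e-9   # ohm, ohm, farad

def impedance(f):
    jw = 2j * np.pi * f
    z_interface = R_ct / (1 + jw * R_ct * C_dl)   # R_ct parallel to C_dl
    return R_s + z_interface

for f in (1, 10, 100, 1000):
    z = impedance(f)
    print(f"{f:5d} Hz  |Z| = {abs(z)/1e3:8.1f} kOhm"
          f"  phase = {np.degrees(np.angle(z)):6.1f} deg")
```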
Abstract:
Today, modern System-on-a-Chip (SoC) systems have grown rapidly due to increased processing power, while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints of energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can bring significant performance improvements for highly mathematical calculations or repeated functions, so the performance of SoC systems can be improved when hardware acceleration is applied to the element that incurs the performance overheads. The concepts in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications such as performance, energy consumption and resource costs are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
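The profile-then-accelerate flow above is bounded by Amdahl's law; the short sketch below computes the overall speedup for a few assumed hotspot fractions and accelerator speedups (illustrative numbers only, not the thesis's measurements).

```python
# Amdahl's law: with a hotspot covering fraction f of runtime and an
# accelerator giving speedup s on that hotspot,
#   overall speedup = 1 / ((1 - f) + f / s).
def overall_speedup(f, s):
    return 1.0 / ((1.0 - f) + f / s)

# Illustrative hotspot fractions and accelerator speedups (assumptions).
for f, s in [(0.7, 10), (0.9, 20), (0.95, 50)]:
    print(f"hotspot {f:.0%}, accel {s}x -> overall {overall_speedup(f, s):.1f}x")
```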
Abstract:
Contactless integrated circuit cards are one application of radio frequency identification. They are used in applications such as access control, identification, and payment in public transport. Contactless IC cards are passive, which means that both the data and the energy are transferred to the card without contact, using inductive coupling. Antenna design, and the optimization of that design, for contactless IC cards defined by ISO/IEC 14443 is studied. The basic operating principles of a contactless system are presented and the structure of a contactless IC card is illustrated. The structure is divided between the contactless chip and the antenna. The operation of the antenna is covered in depth and the parameters affecting its performance are presented, together with the different antenna technologies and connection technologies. The antenna design process, with its parameters and design tools, is illustrated and the optimization of the design is studied. To streamline the design process, a development target was identified: the implementation of a test application. The optimization of the antenna design is presented based on the optimization criteria defined in this study; a solution for implementing these criteria, and the effect of each criterion, was found. To enhance the performance of the antenna, a focus for future study is proposed.
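As a hedged illustration of one core tuning step in such a design, the sketch below computes the capacitance that resonates an assumed antenna inductance at the 13.56 MHz carrier defined by ISO/IEC 14443; the inductance value is an illustrative assumption.

```python
import math

# The card antenna coil L and the tuning/chip capacitance C form a
# resonant tank: f0 = 1 / (2*pi*sqrt(L*C))  =>  C = 1 / ((2*pi*f0)^2 * L).
f0 = 13.56e6                 # Hz, carrier frequency defined by ISO/IEC 14443
L = 2.5e-6                   # H, assumed antenna coil inductance

C = 1.0 / ((2 * math.pi * f0)**2 * L)
print(f"required tuning capacitance ~ {C*1e12:.0f} pF")
```

In practice many designs tune the card slightly above the carrier so that loading by the reader field pulls the resonance onto 13.56 MHz; that margin is a common design choice, not a requirement stated in the abstract.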