948 results for Integrated circuit testing
Abstract:
Dissertation submitted to obtain the degree of Master in Electronics and Computer Engineering
Abstract:
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space becomes even more difficult if, beyond performance and area, we also consider metrics such as performance efficiency and area efficiency, where the designer seeks the best performance per unit of chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to the design of a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to perform a formal analysis of the algorithms, considering the main architectural aspects, and to determine how each particular architectural aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and derived an equation relating the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture was designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating-point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
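An analytical model of this kind can be sketched compactly. The Python fragment below is a hypothetical roofline-style approximation, not the equation derived in the paper: it assumes a blocked dense matrix multiplication in which each core keeps a b × b tile in local memory, so off-chip traffic per floating-point operation shrinks as the tile grows.

```python
# Hedged sketch: a roofline-style throughput model for blocked dense
# matrix multiplication on a many-core chip. The formula is illustrative
# only; the paper derives its own equation for its specific architecture.

def attainable_gflops(num_cores, flops_per_core_per_cycle, clock_ghz,
                      block_size, bandwidth_gbs, word_bytes=8):
    """Min of compute peak and bandwidth bound for C = A @ B."""
    peak = num_cores * flops_per_core_per_cycle * clock_ghz   # GFLOP/s
    # A b x b tile gives ~2*b^3 flops per ~2*b^2 words fetched, so the
    # off-chip traffic is ~word_bytes/block_size bytes per flop.
    bytes_per_flop = word_bytes / block_size
    bandwidth_bound = bandwidth_gbs / bytes_per_flop          # GFLOP/s
    return min(peak, bandwidth_bound)

# Assumed configuration: 256 cores, 2 flops/cycle, 1 GHz, 64x64 tiles, 16 GB/s
print(f"{attainable_gflops(256, 2, 1.0, 64, 16):.0f} GFLOP/s attainable")
```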
Abstract:
Power quality monitoring has become increasingly important in the management and characterization of the electrical grid. Studies show that the direct costs related to loss of power quality can represent about 1.5% of national GDP. Beyond these there are also indirect costs, which makes this a problem in need of minimization. In the context of minimizing the damage caused by power degradation, equipment capable of characterizing electrical energy through monitoring is used. The use of this equipment is governed by power quality standards, which impose minimum requirements in order to frame and classify events occurring in the electrical grid. In this way, coherent data are obtained from different pieces of equipment. The monitoring of parameters associated with electrical energy is frequently carried out through the temporary installation of equipment in the grid, which means disturbances are only observed after they have occurred. This methodology cannot detect the original electrical event but, at best, later events that are expected to resemble it. Note, however, that a broad set of events is not repetitive, which is a limitation of that methodology. This work describes an alternative to the traditional way of using such equipment. The solution consists of building a power analyser that becomes an integral part of the installation and allows continuous monitoring of the electrical grid. This equipment must have a cost low enough to justify this alternative use. The power quality analyser to be developed is based on the ADE7880 integrated circuit, which provides a set of power quality parameters in accordance with the IEC 61000-4-30 and IEC 61000-4-7 standards. This analyser allows the continuous collection of specific data from the electrical grid, which are subsequently stored and made available to the user. The collected data are presented to the user for consultation, so that the possible occurrence of grid anomalies can be checked continuously. The acquired values can also be advantageously reused for many other purposes, such as studies on energy optimization. The present work stems from an alternative use of the WeSense Energy1 device developed by the Evoleo Technologies team. This variant makes it possible to obtain parameters determined by the ADE7880, such as harmonics, voltage and current transient events, and the phase shift between phases, resulting in a new version of the device, the WeSense Energy2. Additionally, this work includes remote visualization of the data through a web page.
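As an illustration of the continuous-monitoring idea, the sketch below shows what such an acquisition loop could look like. The function read_rms_voltage() is a hypothetical stand-in for the ADE7880 driver layer (the real part is read over SPI/I²C registers), and the 90% threshold mirrors how IEC 61000-4-30 classifies voltage dips; none of this code is from the WeSense Energy2 firmware.

```python
# Hedged sketch of a continuous acquisition loop for the analyser.
# read_rms_voltage() is a hypothetical stand-in for the ADE7880 driver.
import random
import sqlite3
import time

NOMINAL_V = 230.0
DIP_THRESHOLD = 0.90 * NOMINAL_V   # dip classification threshold, per IEC 61000-4-30

def read_rms_voltage(phase: int) -> float:
    """Simulated reading; replace with the actual ADE7880 register access."""
    return NOMINAL_V + random.uniform(-5.0, 5.0)

def acquisition_loop(db_path="pq_log.db", period_s=0.2, cycles=10):
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS samples (t REAL, phase INT, vrms REAL, dip INT)")
    for _ in range(cycles):
        t = time.time()
        for phase in (1, 2, 3):
            v = read_rms_voltage(phase)
            db.execute("INSERT INTO samples VALUES (?,?,?,?)",
                       (t, phase, v, int(v < DIP_THRESHOLD)))
        db.commit()              # stored data stays available for later consultation
        time.sleep(period_s)

acquisition_loop()
```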
Abstract:
The main goal of this thesis is to research FPAA/dpASP devices and technologies applied to control systems. These devices provide an easy way to emulate analog circuits, which can be reconfigured with the manufacturers' programming tools and, in the case of dpASPs, can be dynamically reconfigured on the fly. Different kinds of commercially available technologies are described, as well as academic projects from research groups. These technologies are very recent and are developing rapidly toward the level of flexibility and integration needed to penetrate the market more easily. As with CPLDs/FPGAs, FPAA/dpASP technologies aim to increase productivity, reduce development time, and make future hardware reconfiguration easier and cheaper. FPAAs/dpASPs still have some limitations compared with classic analog circuits, owing to lower operating frequencies and to the emulation of complex circuits requiring more components inside the integrated circuit. However, they have great advantages in sensor signal conditioning, filter circuits, and control systems. This thesis focuses on practical implementations of these technologies in PID control systems. The experimental results confirm the efficacy of FPAAs/dpASPs in signal conditioning and control systems.
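The PID loops emulated on an FPAA follow the familiar parallel form u = Kp·e + Ki·∫e dt + Kd·de/dt. Purely as an illustration, a discrete-time software equivalent with assumed gains and a first-order test plant is sketched below; it is not the configuration used in the thesis experiments.

```python
# Minimal discrete PID sketch (parallel form). Gains, sample time, and the
# test plant are illustrative assumptions, not values from the thesis.
class PID:
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.ts                  # integral term
        derivative = (error - self.prev_error) / self.ts  # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# First-order test plant y' = (-y + u) / tau, stepped with Euler integration
pid, tau, ts, y = PID(kp=2.0, ki=1.0, kd=0.05, ts=0.01), 0.5, 0.01, 0.0
for _ in range(500):
    u = pid.update(1.0, y)
    y += ts * (-y + u) / tau
print(f"output after 5 s: {y:.3f}")   # settles near the setpoint of 1.0
```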
Abstract:
Position-sensitive particle detectors are needed in high-energy physics research. This thesis describes the development of fabrication processes and characterization techniques for the silicon microstrip detectors used in the search for elementary particles at the European Organization for Nuclear Research, CERN. The detectors give an electrical signal along a particle's trajectory after a collision in the particle accelerator. The trajectories give information about the nature of the particle in the effort to reveal the structure of matter and the universe. Detectors made of semiconductors have better position resolution than conventional wire chamber detectors. Silicon is overwhelmingly used as the detector material because of its low cost and its standard use in the integrated circuit industry. After a short spreadsheet analysis of the basic building block of radiation detectors, the pn junction, the operation of a silicon radiation detector is discussed in general. The microstrip detector is then introduced and the detailed structure of a double-sided ac-coupled strip detector revealed. The fabrication aspects of strip detectors are discussed, starting from process development and general principles and ending with a description of the double-sided ac-coupled strip detector process. Recombination and generation lifetime measurements in radiation detectors are discussed briefly. The results of electrical tests, i.e. measurements of the leakage currents and bias resistors, are presented. The beam test setups and their results, the signal-to-noise ratio and the position accuracy, are then described. Earlier research found that heavy irradiation changes the properties of radiation detectors dramatically. A scanning electron microscope method was developed to measure the electric potential and field inside irradiated detectors, to see how a high radiation fluence changes them. The method and the most important results are discussed briefly.
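The spreadsheet-style pn-junction analysis mentioned above boils down to two textbook expressions: the built-in potential V_bi = (kT/q) ln(N_A N_D / n_i²) and the abrupt-junction depletion width W = sqrt(2ε(V_bi + V_R)(N_A + N_D)/(q N_A N_D)). A minimal sketch with assumed doping levels typical of a strip detector:

```python
# Sketch of a spreadsheet-style pn-junction analysis: built-in potential
# and depletion width of an abrupt silicon junction at 300 K. Doping
# levels and bias are illustrative assumptions.
import math

Q = 1.602e-19               # elementary charge (C)
KT = 0.0259                 # thermal voltage at 300 K (V)
EPS_SI = 11.7 * 8.854e-12   # permittivity of silicon (F/m)
NI = 1.0e16                 # intrinsic carrier density of Si (m^-3)

def junction(Na, Nd, Vr=0.0):
    """Return built-in potential (V) and depletion width (m)."""
    Vbi = KT * math.log(Na * Nd / NI**2)
    W = math.sqrt(2 * EPS_SI * (Vbi + Vr) * (Na + Nd) / (Q * Na * Nd))
    return Vbi, W

# assumed case: heavily doped p+ strip on lightly doped n bulk, 60 V reverse bias
Vbi, W = junction(Na=1e24, Nd=1e18, Vr=60.0)
print(f"Vbi = {Vbi:.2f} V, depletion width = {W * 1e6:.0f} um")
```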
Abstract:
On-chip multiprocessor (OCM) systems are considered the best structures for occupying the space available on today's integrated circuits. In our work, we focus on an architectural model, called the isometric architecture of on-chip multiprocessor systems, which makes it possible to evaluate, predict, and optimize OCM systems by relying on an efficient organization of the nodes (processors and memories), and on methodologies that allow these architectures to be used efficiently. In the first part of the thesis, we address the topology of the model and propose an architecture that allows the on-chip memories to be used efficiently and massively. The processors and memories are organized according to an isometric approach, which consists of bringing the data closer to the processes rather than optimizing the transfers between processors and memories arranged conventionally. The architecture is a three-dimensional mesh model. The placement of the units on this model is inspired by the crystal structure of sodium chloride (NaCl), where each processor can access six memories at once and each memory can communicate with as many processors at once. In the second part of our work, we address a decomposition methodology in which the number of nodes in the model is ideal and can be determined from a matrix specification of the application processed by the proposed model. Since the performance of a model depends on the amount of data flow exchanged between its units, that is, on their number, and since our goal is to guarantee good computational performance for the application being processed, we propose to find the ideal number of processors and memories for the system to be built. We also consider the decomposition of the specification of the model to be built, or of the application to be processed, according to the load balance of the units. We thus propose a decomposition approach based on three points: the transformation of the specification or application into an incidence matrix whose elements are the data flows between processes and data; a new methodology based on the Cell Formation Problem (CFP); and load balancing of processes across processors and of data across memories. In the third part, still with the aim of designing an efficient and high-performing system, we address the assignment of processors and memories through a two-step methodology. First, we assign units to the nodes of the system, considered here as an undirected graph, and second, we assign values to the edges of this graph. For the assignment, we propose modeling the decomposed applications using a matrix approach and the Quadratic Assignment Problem (QAP). For assigning values to the edges, we propose a gradual perturbation approach, in order to find the best combination of assignment costs while respecting parameters such as temperature, heat dissipation, energy consumption, and the area occupied by the chip.
The ultimate goal of this work is to offer architects of on-chip multiprocessor systems a non-traditional methodology and a systematic, efficient design-aid tool usable from the functional specification phase of the system onward.
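The QAP step of the assignment methodology can be made concrete with a toy solver. The sketch below does exhaustive search over placements, which is only feasible for tiny instances (QAP is NP-hard, which is why real sizes need a dedicated methodology); the flow and distance matrices are made-up examples.

```python
# Toy Quadratic Assignment Problem solver by exhaustive search.
# cost(p) = sum_ij flow[i][j] * dist[p[i]][p[j]]; practical only for tiny n.
from itertools import permutations

flow = [[0, 3, 1],    # data flow between units i and j (made-up)
        [3, 0, 2],
        [1, 2, 0]]
dist = [[0, 1, 2],    # hop distance between mesh nodes (made-up)
        [1, 0, 1],
        [2, 1, 0]]

def qap_cost(p):
    n = len(p)
    return sum(flow[i][j] * dist[p[i]][p[j]] for i in range(n) for j in range(n))

best = min(permutations(range(3)), key=qap_cost)
print(f"best placement {best} with cost {qap_cost(best)}")
```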
Abstract:
While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds place a large burden on the energy efficiency of high-speed links and make the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
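The notion of error correlation can be made concrete with a small simulation. The sketch below uses illustrative parameters rather than the paper's link model: random bits pass through a channel with residual ISI plus Gaussian noise, and the joint error rate at lag k is compared with the product of the marginal error rates; a ratio above 1 indicates correlated errors.

```python
# Hedged sketch: joint error statistics in a link with residual ISI.
# Channel taps and noise level are assumptions for illustration.
import random

def simulate(n=200_000, taps=(1.0, 0.25, -0.1), sigma=0.35, seed=1):
    rng = random.Random(seed)
    bits = [rng.choice((-1, 1)) for _ in range(n)]
    errors = []
    for k in range(len(taps) - 1, n):
        y = sum(t * bits[k - i] for i, t in enumerate(taps))  # main cursor + ISI
        y += rng.gauss(0.0, sigma)                            # additive noise
        errors.append((y > 0) != (bits[k] > 0))               # decision error?
    return errors

err = simulate()
p = sum(err) / len(err)
for lag in (1, 2, 5):
    joint = sum(a and b for a, b in zip(err, err[lag:])) / (len(err) - lag)
    print(f"lag {lag}: joint/p^2 = {joint / p**2:.2f}")   # >1 means correlated
```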
Abstract:
Thermally stable materials with a low dielectric constant (k < 3.9) are being hotly pursued. They are essential as interlayer/intermetal dielectrics in integrated circuit technology, where they reduce parasitic capacitance and decrease the RC time constant. Most of the currently employed materials are based on silicon. Low-k films based on organic polymers are regarded as a viable alternative, as they are easily processable and can be synthesized with simpler techniques. It is known that ac/rf plasma polymerization yields good-quality organic thin films, which are homogeneous, pinhole-free, and thermally stable. These polymer thin films are potential candidates for fabricating Schottky devices, storage batteries, LEDs, sensors, and supercapacitors, and for EMI shielding. Recently, great efforts have been made to find alternative methods of preparing low-dielectric-constant thin films in place of silicon-based materials. Polyaniline thin films were prepared by an rf plasma polymerization technique. Capacitance, dielectric loss, dielectric constant, and ac conductivity were evaluated in the frequency range 100 Hz–1 MHz. Capacitance and dielectric loss decrease with increasing frequency and increase with increasing temperature. This behaviour was found to be in good agreement with an existing model. The ac conductivity was calculated from the observed dielectric constant and is explained with the Austin–Mott model for hopping conduction. These films exhibit low dielectric constant values, which are stable over a wide range of frequencies, making them probable candidates for low-k applications.
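The ac conductivity mentioned above follows from the measured dielectric data through the standard relation σ_ac = ω ε₀ ε_r tan δ. A minimal calculation with assumed sample values:

```python
# sigma_ac = omega * eps0 * eps_r * tan_delta, the standard route from
# measured dielectric constant and loss to ac conductivity.
# The eps_r and tan_delta values below are assumed, not measured data.
import math

EPS0 = 8.854e-12  # permittivity of free space (F/m)

def ac_conductivity(freq_hz, eps_r, tan_delta):
    return 2 * math.pi * freq_hz * EPS0 * eps_r * tan_delta  # S/m

for f in (1e2, 1e4, 1e6):   # spans the 100 Hz - 1 MHz measurement range
    print(f"{f:>9.0f} Hz -> {ac_conductivity(f, eps_r=3.5, tan_delta=0.02):.3e} S/m")
```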
Abstract:
One of the most prominent industrial applications of heat transfer science and engineering has been electronics thermal control. Driven by the relentless increase in spatial density of microelectronic devices, integrated circuit chip powers have risen by a factor of 100 over the past twenty years, with a somewhat smaller increase in heat flux. The traditional approaches using natural convection and forced-air cooling are becoming less viable as power levels increase. This paper provides a high-level overview of the thermal management problem from the perspective of a practitioner, as well as speculation on the prospects for electronics thermal engineering in years to come.
Abstract:
Reconfigurable computing is becoming an important new alternative for implementing computations. Field programmable gate arrays (FPGAs) are the ideal integrated circuit technology to experiment with the potential benefits of using different strategies of circuit specialization by reconfiguration. The final form of the reconfiguration strategy is often non-trivial to determine. Consequently, in this paper, we examine strategies for reconfiguration and, based on our experience, propose general guidelines for the tradeoffs using an area-time metric called functional density. Three experiments are set up to explore different reconfiguration strategies for FPGAs applied to a systolic implementation of a scalar quantizer used as a case study. Quantitative results for each experiment are given. The regular nature of the example means that the results can be generalized to a wide class of industry-relevant problems based on arrays.
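Functional density is the area-time metric D = operations / (area × time): a specialized, reconfigured circuit is smaller but pays a reconfiguration overhead, and it wins whenever the area saving outweighs that overhead. A toy comparison under assumed numbers:

```python
# Functional density D = ops / (area * time). A specialized, reconfigured
# circuit is smaller but pays reconfiguration time; the static design is
# larger but always ready. All numbers are illustrative assumptions.
def functional_density(ops, area_luts, compute_time_s, reconfig_time_s=0.0):
    return ops / (area_luts * (compute_time_s + reconfig_time_s))

OPS = 1e9
static   = functional_density(OPS, area_luts=4000, compute_time_s=1.0)
reconfig = functional_density(OPS, area_luts=2500, compute_time_s=0.9,
                              reconfig_time_s=0.05)
print(f"static:   {static:.0f} ops/(LUT*s)")
print(f"reconfig: {reconfig:.0f} ops/(LUT*s)")  # wins despite the overhead
```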
Abstract:
Hypertension is a dangerous condition that can cause serious harm to a patient's health. In some situations, such as surgical procedures and post-surgical care, the need to control this pressure is even greater. To decrease the chances of a complication, it is necessary to reduce blood pressure as soon as possible. Continuous infusion of vasodilator drugs, such as sodium nitroprusside (SNP), rapidly decreases blood pressure in most patients, avoiding major problems. Maintaining the desired blood pressure requires constant monitoring of arterial blood pressure and frequent adjustment of the drug infusion rate. Manual control of arterial blood pressure by clinical personnel is very demanding, time consuming and, as a result, sometimes of poor quality. Thus, the aim of this work is the design and implementation of a database of controllers tuned to patient models, in order to find a suitable PID controller to be embedded in a Programmable Integrated Circuit (PIC), which has lower cost, smaller size, and lower power consumption. For the best results in controlling blood pressure and choosing the adequate controller, tuning algorithms, system identification techniques, and a Smith predictor are used. This work also introduces a monitoring system to assist in detecting anomalies and optimize the process of patient care.
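The drug-response dynamics that motivate the Smith predictor are commonly modeled as a first-order lag plus a transport delay. The sketch below wires such an assumed patient model to a PI controller through a Smith predictor; all gains, time constants, and the 30 s dead time are illustrative and not taken from the work's controller database.

```python
# Hedged sketch: PI control of a delayed first-order "patient" response
# through a Smith predictor. All parameters are illustrative assumptions.
import math
from collections import deque

TS = 1.0      # sample time (s)
TAU = 50.0    # patient time constant (s), assumed
K = -1.0      # mmHg per unit infusion rate, assumed (SNP lowers pressure)
D = 30        # transport delay in samples (30 s), assumed
A = math.exp(-TS / TAU)

def simulate(steps=600, kp=-1.5, ki=-0.05, setpoint=-30.0):
    """Drive the mean arterial pressure change toward setpoint (mmHg)."""
    y = ym = integ = 0.0
    u_pipe = deque([0.0] * D)    # dead time in the drug's effect
    ym_pipe = deque([0.0] * D)   # matching delay inside the predictor
    for _ in range(steps):
        y_fb = ym + (y - ym_pipe[0])        # Smith-predictor corrected feedback
        e = setpoint - y_fb
        integ += e * TS
        u = max(0.0, kp * e + ki * integ)   # infusion rate cannot be negative
        u_pipe.append(u)
        y = A * y + (1 - A) * K * u_pipe.popleft()   # delayed plant update
        ym = A * ym + (1 - A) * K * u                # undelayed internal model
        ym_pipe.append(ym)
        ym_pipe.popleft()
    return y

print(f"pressure change after 10 min: {simulate():.1f} mmHg")
```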
Abstract:
In this work, the transmission line method is explored in the study of propagation phenomena in nonhomogeneous walls of finite thickness. The efficiency and applicability of the method are evaluated, considering materials such as gypsum, wood, and brick, found in the composition of the wall structures in question. The results obtained in this work are compared with those available in the literature for several particular cases. Good agreement is observed, showing that the analysis is accurate and efficient in modeling, for instance, wave propagation through building walls and integrated circuit layers in mobile communication and radar system applications. Subsequently, simulations are performed of resistive-sheet devices, such as Salisbury screens and Jaumann absorbers, and of metal-insulator-semiconductor (MIS) transmission lines. Thereafter, a study of frequency selective surface (FSS) structures is described. The development of devices and microwave integrated circuits (MICs) based on such structures is proposed for experimental validation. Finally, future work is suggested, for instance, on the development of reflectarrays, frequency selective surfaces with dissimilar elements, and coupled frequency selective surfaces with elements located on different layers.
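In the transmission-line picture, each wall layer at normal incidence maps to a line section with its own wave impedance and electrical length, and the layers cascade as ABCD matrices. The sketch below computes the transmission coefficient of an assumed gypsum/brick/gypsum stack; the permittivities are illustrative and losses are ignored.

```python
# Hedged sketch: transmission through a layered wall via cascaded ABCD
# (transmission-line) matrices at normal incidence. Layer parameters are
# illustrative assumptions; losses are ignored for simplicity.
import cmath

C0, ETA0 = 3e8, 376.73   # speed of light (m/s), free-space impedance (ohm)

def layer_abcd(eps_r, thickness_m, freq_hz):
    eta = ETA0 / cmath.sqrt(eps_r)                       # wave impedance in layer
    gamma = 1j * 2 * cmath.pi * freq_hz * cmath.sqrt(eps_r) / C0
    gl = gamma * thickness_m
    return [[cmath.cosh(gl), eta * cmath.sinh(gl)],
            [cmath.sinh(gl) / eta, cmath.cosh(gl)]]

def cascade(m1, m2):
    return [[m1[0][0]*m2[0][0] + m1[0][1]*m2[1][0], m1[0][0]*m2[0][1] + m1[0][1]*m2[1][1]],
            [m1[1][0]*m2[0][0] + m1[1][1]*m2[1][0], m1[1][0]*m2[0][1] + m1[1][1]*m2[1][1]]]

# assumed stack: gypsum / brick / gypsum, (eps_r, thickness in m), at 1.8 GHz
wall = [(2.5, 0.012), (4.4, 0.10), (2.5, 0.012)]
m = [[1, 0], [0, 1]]
for eps_r, d in wall:
    m = cascade(m, layer_abcd(eps_r, d, 1.8e9))
A, B, C, Dm = m[0][0], m[0][1], m[1][0], m[1][1]
t = 2 / (A + B / ETA0 + C * ETA0 + Dm)   # free space on both sides
print(f"|T| = {abs(t):.3f} at 1.8 GHz")
```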
Abstract:
In the current socio-environmental scenario, concern for natural resources has grown, along with interest in the reuse of products and by-products. Recycling is the approach by which material or energy is reintroduced into the production system. This method reduces the volume of garbage dumped into the environment, saves energy, and decreases the demand for natural resources. In general, end-of-life expanded polystyrene is deposited in sanitary landfills or uncontrolled dumps, where it occupies a large volume and spreads easily under wind action, with consequent environmental pollution; recycling avoids this waste and reduces the amount of raw material obtained from petroleum. In this work, expanded polystyrene was recycled via melting and/or solvent dissolution for the production of integrated circuit boards. The obtained material was characterized in flexural mode according to ASTM D790, and the results were compared with phenolite, the traditionally used material. Specimen fractures were observed by scanning electron microscopy in order to establish failure patterns. Both the recycled expanded polystyrene and the phenolite were also thermally analyzed by TGA and DSC. The dissolution method produced very brittle materials. The melting method showed neither void formation nor increased brittleness of the material. The recycled polystyrene presented a strength value significantly lower than that of the phenolite.
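The ASTM D790 three-point bend test reduces to the flexural stress formula σ = 3FL/(2bd²), with load F, support span L, specimen width b, and depth d. A quick check with assumed specimen dimensions:

```python
# Flexural stress in an ASTM D790 three-point bend test:
# sigma = 3*F*L / (2*b*d^2). Specimen numbers are assumed for illustration.
def flexural_stress_mpa(force_n, span_mm, width_mm, depth_mm):
    return 3 * force_n * span_mm / (2 * width_mm * depth_mm**2)  # N/mm^2 == MPa

# e.g. a 100 N peak load on a 50 mm span, 12.7 mm wide, 3.2 mm thick bar
print(f"{flexural_stress_mpa(100, 50, 12.7, 3.2):.1f} MPa")
```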
Abstract:
The increasing demand for processing power in recent years has pushed the integrated circuit industry to look for ways of providing ever more processing power with less heat dissipation, power consumption, and chip area. This goal has been achieved by increasing the circuit clock, but since this approach has physical limits, a new solution has emerged: the multiprocessor system-on-chip (MPSoC). This approach demands new tools and basic software infrastructure to take advantage of the inherent parallelism of these architectures. One of the first activities of the oil exploration industry is the decision on whether to explore an oil field; such decisions are aided by reservoir simulations that demand high processing power, and the MPSoC may offer greater performance if its parallelism can be well exploited. This work proposes a micro-kernel operating system and auxiliary libraries aimed at the STORM MPSoC platform, analyzing their influence on the reservoir simulation problem.
Abstract:
The design of a Gilbert cell mixer and a low noise amplifier (LNA) using GaAs pHEMT technology is presented. Compatibility is shown for the co-integration of both blocks on the same chip, to form a high-performance 1.9 GHz receiver front-end. The designed LNA shows 9.23 dB gain and a 2.01 dB noise figure (NF). The mixer is designed to operate at RF = 1.9 GHz, LO = 2.0 GHz and IF = 100 MHz, with a gain of 14.3 dB and a single-sideband noise figure (SSB NF) of 9.6 dB. The mixer presents a bandwidth of 8 GHz.
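From the figures quoted, the noise figure of the cascaded front-end (LNA followed by mixer) can be estimated with the Friis formula F_total = F1 + (F2 − 1)/G1, using linear quantities:

```python
# Friis cascade estimate: F_total = F1 + (F2 - 1) / G1 (linear terms),
# using the quoted LNA gain (9.23 dB), LNA NF (2.01 dB), and mixer
# SSB NF (9.6 dB). This is a back-of-envelope check, not a quoted result.
from math import log10

def db_to_lin(db: float) -> float:
    return 10 ** (db / 10)

g1 = db_to_lin(9.23)   # LNA gain (linear)
f1 = db_to_lin(2.01)   # LNA noise factor
f2 = db_to_lin(9.6)    # mixer SSB noise factor
nf_total_db = 10 * log10(f1 + (f2 - 1) / g1)
print(f"estimated front-end cascade NF ~ {nf_total_db:.2f} dB")
```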