885 results for Integrated Circuit (IC)
Abstract:
Process variations are a major bottleneck for the manufacturability and yield of digital CMOS integrated circuits. That is why regular layout techniques, with different degrees of regularity, are emerging as possible solutions. Our proposal is a new regular layout design technique called Via-Configurable Transistors Array (VCTA) that pushes circuit layout regularity for devices and interconnects to the limit in order to maximize the benefits of regularity. VCTA is predicted to perform worse than Standard Cell designs for a given technology node, but it will allow a future technology to be adopted at an earlier time. Our objective is to optimize VCTA so that it becomes comparable to a Standard Cell design in an older technology. Simulations of delay and energy consumption for a Full Adder circuit in the 90 nm technology node, using the first unoptimized version of our VCTA, are presented, together with the extrapolation to Carry-Ripple Adders from 4 bits to 64 bits.
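As a rough illustration of the kind of extrapolation involved (a first-order model introduced here for context, not the authors' exact one), the worst-case delay and energy of an n-bit carry-ripple adder built from identical full-adder cells grow approximately linearly with the number of stages:

    T_{RCA}(n) \approx n \cdot t_{FA}, \qquad E_{RCA}(n) \approx n \cdot E_{FA}

where t_{FA} and E_{FA} denote the simulated single-cell delay and energy, so results for the 4-bit adder scale directly to the 64-bit case.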
Abstract:
Position sensitive particle detectors are needed in high energy physics research. This thesis describes the development of fabrication processes and characterization techniques for silicon microstrip detectors used in the search for elementary particles at the European center for nuclear research, CERN. The detectors give an electrical signal along a particle's trajectory after a collision in the particle accelerator. The trajectories give information about the nature of the particle in the struggle to reveal the structure of matter and the universe. Detectors made of semiconductors have a better position resolution than conventional wire chamber detectors. Silicon is overwhelmingly used as the detector material because of its low cost and its standard use in the integrated circuit industry. After a short spreadsheet analysis of the basic building block of radiation detectors, the pn junction, the operation of a silicon radiation detector is discussed in general. The microstrip detector is then introduced and the detailed structure of a double-sided ac-coupled strip detector revealed. The fabrication aspects of strip detectors are discussed, starting from the process development and general principles and ending with the description of the double-sided ac-coupled strip detector process. Recombination and generation lifetime measurements in radiation detectors are discussed briefly. The results of electrical tests, i.e. measurements of the leakage currents and bias resistors, are presented. The beam test setups and the results, the signal-to-noise ratio and the position accuracy, are then described. It was found in earlier research that heavy irradiation changes the properties of radiation detectors dramatically. A scanning electron microscope method was developed to measure the electric potential and field inside irradiated detectors to see how a high radiation fluence changes them. The method and the most important results are discussed briefly.
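For context, a standard first-order relation for silicon detectors (not specific to this thesis) links the full depletion voltage of a strip detector to its effective doping concentration N_{eff} and thickness d:

    V_{fd} \approx \frac{q \, N_{eff} \, d^2}{2 \, \varepsilon_{Si} \, \varepsilon_0}

where q is the elementary charge and \varepsilon_{Si} \approx 11.9 is the relative permittivity of silicon; biasing above V_{fd} depletes the full bulk so that charge is collected from the entire detector thickness.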
Abstract:
As the development of integrated circuit technology continues to follow Moore’s law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on, e.g., size, power consumption and price with embedded systems, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation and an extendable library of automatically configured reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. A simulation model of a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
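A minimal sketch of the SystemC 2.x modeling style such an environment builds on (a generic illustration rather than an actual TACO functional unit; the SimpleReg module and its ports are invented here):

    #include <systemc.h>

    // Hypothetical register-like block: a clocked SC_MODULE with typed ports
    // and a process triggered on the positive clock edge.
    SC_MODULE(SimpleReg) {
        sc_in<bool>           clk;      // clock input
        sc_in< sc_uint<32> >  data_in;  // 32-bit input bus
        sc_out< sc_uint<32> > data_out;

        void update() {                 // copy input to output on each rising edge
            data_out.write(data_in.read());
        }

        SC_CTOR(SimpleReg) {
            SC_METHOD(update);
            sensitive << clk.pos();
        }
    };

    int sc_main(int, char*[]) {
        sc_clock clk("clk", 10, SC_NS);          // 10 ns clock
        sc_signal< sc_uint<32> > in_sig, out_sig;
        SimpleReg r("r");
        r.clk(clk);
        r.data_in(in_sig);
        r.data_out(out_sig);
        sc_start(100, SC_NS);                    // run the simulation for 100 ns
        return 0;
    }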
Abstract:
This technical note describes a new and simple electronic circuit for driving solenoid valves. The circuit is based on a single integrated circuit, the DRV103, which is able to drive resistive or inductive loads up to 1.5 A. Switching of 12-V loads can be controlled by TTL-level signals in two distinct steps. Initially, 12 V is applied during 110 ms, followed by 4.2 V RMS until the end of the activation TTL pulse. This mode of operation is particularly suitable for driving solenoids, because they require a higher voltage to start and a lower maintenance voltage. By using this circuit, power consumption and heating are reduced and the solenoid lifetime is enhanced. Moreover, this circuit is especially appropriate for building computer-controlled solenoid valve systems.
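For orientation, a back-of-the-envelope estimate rather than a figure from the note: if the 4.2 V RMS hold level is obtained by PWM switching of the 12 V supply (the mechanism the DRV103 uses for its hold phase), the implied duty cycle follows from

    V_{rms} = V_s \sqrt{D} \;\Rightarrow\; D \approx (4.2 / 12)^2 \approx 0.12

so the hold phase dissipates only about 12% of the power of a continuous 12 V drive into the same coil resistance.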
Abstract:
Nowadays, the demands and competitiveness of the market force industries to modernize and to automate all their production processes. In these processes, the control data and parameters are fundamental data to be verified. This final degree project aims to build a digital-input module in order to manage the data received from an automated process. The objective of this TFC has been to design a digital-input module capable of managing data from any type of automated process and transmitting them to a master over a Modbus communication bus. The project, however, has focused on the specific case of an automated wood-treatment process. The development of this system comprises the circuit design, the fabrication of the board, the data-reading software and the implementation of the Modbus protocol. The whole input module is controlled by a PIC 18F4520 microcontroller. The design is a multi-platform system intended to adapt to any automatic process, and some of its most relevant features are: multi-voltage isolated inputs, leakage monitoring, relay outputs and external data memory, among others. In conclusion, the proposed objectives have been achieved successfully. A robust, reliable, versatile and highly market-competitive design has been obtained. At the academic level, knowledge in the fields of design and programming has been broadened.
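As an illustration of one piece of such a Modbus implementation (a generic sketch rather than the project's actual PIC firmware), the CRC-16 appended to every Modbus RTU frame can be computed as follows:

    #include <cstdint>
    #include <cstddef>

    // Modbus RTU CRC-16: polynomial 0xA001 (reflected 0x8005), initial value 0xFFFF.
    // The two CRC bytes are appended to each frame, low byte first.
    uint16_t modbus_crc16(const uint8_t* data, size_t len) {
        uint16_t crc = 0xFFFF;
        for (size_t i = 0; i < len; ++i) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; ++bit) {
                if (crc & 0x0001)
                    crc = (crc >> 1) ^ 0xA001;
                else
                    crc >>= 1;
            }
        }
        return crc;  // transmit low byte first, then high byte
    }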
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The decrease in reliability is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for the network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults; therefore, other solutions against them are presented. The introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved with the design of fault tolerant network topologies and routing algorithms. Both of these approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
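To make the error control coding idea concrete, the following is a minimal single-error-correcting Hamming(7,4) encoder/decoder (a textbook code chosen purely for illustration; the thesis evaluates codes suited to on-chip links, which need not be this one):

    #include <cstdint>

    // Hamming(7,4), bit 1 = LSB of the codeword. Parity bits sit at positions
    // 1, 2 and 4; data bits at positions 3, 5, 6 and 7.

    // Encode 4 data bits (low nibble) into a 7-bit codeword.
    uint8_t hamming74_encode(uint8_t data) {
        int d0 = (data >> 0) & 1, d1 = (data >> 1) & 1;
        int d2 = (data >> 2) & 1, d3 = (data >> 3) & 1;
        int p1 = d0 ^ d1 ^ d3;   // covers positions 3, 5, 7
        int p2 = d0 ^ d2 ^ d3;   // covers positions 3, 6, 7
        int p4 = d1 ^ d2 ^ d3;   // covers positions 5, 6, 7
        return (p1 << 0) | (p2 << 1) | (d0 << 2) | (p4 << 3) |
               (d1 << 4) | (d2 << 5) | (d3 << 6);
    }

    // Decode a 7-bit codeword, correcting at most one flipped bit.
    uint8_t hamming74_decode(uint8_t cw) {
        auto bit = [&](int pos) { return (cw >> (pos - 1)) & 1; };
        int s1 = bit(1) ^ bit(3) ^ bit(5) ^ bit(7);
        int s2 = bit(2) ^ bit(3) ^ bit(6) ^ bit(7);
        int s4 = bit(4) ^ bit(5) ^ bit(6) ^ bit(7);
        int syndrome = s1 + 2 * s2 + 4 * s4;   // erroneous bit position, 0 if none
        if (syndrome != 0)
            cw ^= 1 << (syndrome - 1);         // correct the single-bit error
        return ((cw >> 2) & 1) | (((cw >> 4) & 1) << 1) |
               (((cw >> 5) & 1) << 2) | (((cw >> 6) & 1) << 3);
    }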
Abstract:
While channel coding is a standard method of improving a system’s energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds are placing a large burden on the energy efficiency of high-speed links and render the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides a deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
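To see why the low error rates of interest impede brute-force simulation (a standard rule of thumb, not a result of this paper): obtaining on the order of 100 error events for a statistically meaningful estimate at a target bit error rate requires roughly

    N \approx 100 / \mathrm{BER}

transmitted bits, i.e. about 10^{17} bits at BER = 10^{-15}, far beyond what direct Monte Carlo simulation of a link model can cover.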
Abstract:
Thermally stable materials with a low dielectric constant (k < 3.9) are being hotly pursued. They are essential as interlayer dielectrics/intermetal dielectrics in integrated circuit technology, where they reduce parasitic capacitance and decrease the RC time constant. Most of the currently employed materials are based on silicon. Low-k films based on organic polymers are expected to be a viable alternative, as they are easily processable and can be synthesized with simpler techniques. It is known that ac/rf plasma polymerization yields good quality organic thin films, which are homogeneous, pinhole free and thermally stable. These polymer thin films are potential candidates for fabricating Schottky devices, storage batteries, LEDs, sensors and supercapacitors, and for EMI shielding. Recently, great efforts have been made in finding alternative methods to prepare low dielectric constant thin films in place of silicon-based materials. Polyaniline thin films were prepared by an rf plasma polymerization technique. Capacitance, dielectric loss, dielectric constant and ac conductivity were evaluated in the frequency range 100 Hz to 1 MHz. Capacitance and dielectric loss decrease with increasing frequency and increase with increasing temperature. This type of behaviour was found to be in good agreement with an existing model. The ac conductivity was calculated from the observed dielectric constant and is explained on the basis of the Austin–Mott model for hopping conduction. These films exhibit low dielectric constant values, which are stable over a wide range of frequencies, and are probable candidates for low-k applications.
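For context, the relation commonly used to obtain the ac conductivity from measured dielectric data (the abstract does not state the exact expression employed) is

    \sigma_{ac}(\omega) = \omega \, \varepsilon_0 \, \varepsilon_r \tan\delta

where \omega = 2\pi f is the angular frequency, \varepsilon_0 the permittivity of free space, \varepsilon_r the measured relative dielectric constant and \tan\delta the dielectric loss.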
Abstract:
One of the most prominent industrial applications of heat transfer science and engineering has been electronics thermal control. Driven by the relentless increase in spatial density of microelectronic devices, integrated circuit chip powers have risen by a factor of 100 over the past twenty years, with a somewhat smaller increase in heat flux. The traditional approaches using natural convection and forced-air cooling are becoming less viable as power levels increase. This paper provides a high-level overview of the thermal management problem from the perspective of a practitioner, as well as speculation on the prospects for electronics thermal engineering in years to come.
Abstract:
Reconfigurable computing is becoming an important new alternative for implementing computations. Field programmable gate arrays (FPGAs) are the ideal integrated circuit technology to experiment with the potential benefits of using different strategies of circuit specialization by reconfiguration. The final form of the reconfiguration strategy is often non-trivial to determine. Consequently, in this paper, we examine strategies for reconfiguration and, based on our experience, propose general guidelines for the tradeoffs using an area-time metric called functional density. Three experiments are set up to explore different reconfiguration strategies for FPGAs applied to a systolic implementation of a scalar quantizer used as a case study. Quantitative results for each experiment are given. The regular nature of the example means that the results can be generalized to a wide class of industry-relevant problems based on arrays.
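Functional density, the area-time metric referred to here, is usually defined (the customary definition; the paper may normalize it differently) as the number of operations completed per unit of silicon area and time:

    D = N_{ops} / (A \cdot T)

so a reconfiguration strategy pays off only when the speedup or area saving it brings outweighs the reconfiguration time and configuration-storage overhead that also enter A and T.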
Abstract:
Tests on printed circuit boards and integrated circuits are widely used in industry, resulting in reduced design time and cost of a project. The functional and connectivity tests in this type of circuit soon became a concern for manufacturers, leading to research into solutions that would be reliable, quick, cheap and universal. Initially, test schemes were based on a set of needles connected to the inputs and outputs of the integrated circuit board (bed-of-nails), to which signals were applied in order to verify whether the circuit met the specifications and could be assembled in the production line. With the development of projects, circuit miniaturization, improvement of the production processes, improvement of the materials used, as well as the increase in the number of circuits, it became necessary to search for another solution. Thus Boundary-Scan Testing was developed, which operates on the border of integrated circuits and allows testing the connectivity of the input and output ports of a circuit. The Boundary-Scan Testing method was turned into a standard in 1990 by the IEEE organization, becoming known as the IEEE 1149.1 Standard. Since then a large number of manufacturers have adopted this standard in their products. This master thesis has as its main objective the design of Boundary-Scan Testing in an image sensor in CMOS technology, analyzing the standard requirements and the process used in the prototype production, developing the design and layout of the Boundary-Scan, and analyzing the results obtained after production. Chapter 1 briefly presents the evolution of testing procedures used in industry, developments and applications of image sensors, and the motivation for the use of the Boundary-Scan Testing architecture. Chapter 2 explores the fundamentals of Boundary-Scan Testing and image sensors, starting with the Boundary-Scan architecture defined in the Standard, where the functional blocks are analyzed. This understanding is necessary to implement the design on an image sensor. It also explains the architecture of the image sensors currently used, focusing on sensors with a large number of inputs and outputs. Chapter 3 describes the design of the implemented Boundary-Scan, analysing the design and functions of the prototype, the software used, and the designs and simulations of the functional blocks of the implemented Boundary-Scan. Chapter 4 presents the layout process based on the design developed in chapter 3, describing the software used for this purpose, the planning of the layout location (floorplan) and its dimensions, the layout of individual blocks, the checks in terms of layout rules, the comparison with the final design and finally the simulation. Chapter 5 describes how the functional tests were performed to verify the design's compliance with the specifications of the IEEE 1149.1 Standard. These tests focused on the application of signals to the input and output ports of the produced prototype. Chapter 6 presents the conclusions drawn throughout the execution of the work.
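A minimal behavioural sketch of a single boundary-scan cell, the basic building block that the IEEE 1149.1 architecture chains around the I/O pads (an illustrative model only, not the thesis design; the type and member names are invented here):

    // Behavioural model of one boundary-scan cell: a capture/shift flip-flop in
    // the scan chain plus an update latch that can drive the pin in EXTEST mode.
    struct BoundaryScanCell {
        bool shift_ff  = false;  // capture/shift stage (part of the scan chain)
        bool update_ff = false;  // update stage (holds the value driven in EXTEST)

        // Capture-DR: load the value seen at the pin (or from the core logic).
        void capture(bool pin_value) { shift_ff = pin_value; }

        // Shift-DR: shift one bit through the cell; returns the bit passed on (TDO side).
        bool shift(bool serial_in) {
            bool serial_out = shift_ff;
            shift_ff = serial_in;
            return serial_out;
        }

        // Update-DR: transfer the shifted value to the update latch.
        void update() { update_ff = shift_ff; }

        // Value seen at the pin: the latched test value in EXTEST mode,
        // otherwise the normal functional value from the core.
        bool output(bool functional_value, bool extest_mode) const {
            return extest_mode ? update_ff : functional_value;
        }
    };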
Abstract:
Hypertension is a dangerous disease that can cause serious harm to a patient's health. In some situations the need to control this pressure is even greater, as in surgical procedures and in post-surgical patients. To decrease the chances of a complication, it is necessary to reduce blood pressure as soon as possible. Continuous infusion of vasodilator drugs, such as sodium nitroprusside (SNP), rapidly decreases blood pressure in most patients, avoiding major problems. Maintaining the desired blood pressure requires constant monitoring of arterial blood pressure and frequent adjustment of the drug infusion rate. Manual control of arterial blood pressure by clinical personnel is very demanding, time consuming and, as a result, sometimes of poor quality. Thus, the aim of this work is the design and implementation of a database of controllers tuned on patient models, in order to find a suitable PID controller to be embedded in a Programmable Integrated Circuit (PIC), which has a smaller cost, smaller size and lower power consumption. For the best results in controlling the blood pressure and choosing the adequate controller, tuning algorithms, system identification techniques and a Smith predictor are used. This work also introduces a monitoring system to assist in detecting anomalies and optimize the process of patient care.
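A minimal discrete-time PID controller of the kind that could be embedded on such a device (a generic positional-form sketch with assumed gains, sampling period and actuator limits; it is not the controller database or the Smith predictor developed in this work):

    // Positional discrete PID: u[k] = Kp*e[k] + Ki*integral(e) + Kd*(e[k]-e[k-1])/Ts,
    // with the command clamped to the allowed infusion-rate range.
    struct PidController {
        double kp, ki, kd;       // proportional, integral and derivative gains (assumed)
        double ts;               // sampling period in seconds (assumed)
        double integral = 0.0;
        double prev_error = 0.0;

        double step(double setpoint, double measurement,
                    double u_min, double u_max) {
            double error = setpoint - measurement;
            integral += error * ts;
            double derivative = (error - prev_error) / ts;
            prev_error = error;

            double u = kp * error + ki * integral + kd * derivative;
            // Clamp the command (e.g. SNP infusion rate) to the actuator limits.
            if (u < u_min) u = u_min;
            if (u > u_max) u = u_max;
            return u;
        }
    };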
Abstract:
In this work, the transmission line method is explored in the study of the propagation phenomenon in nonhomogeneous walls of finite thickness. The efficiency and applicability of the method are evaluated, considering materials like gypsum, wood and brick, found in the composition of the wall structures in question. The results obtained in this work are compared with those available in the literature for several particular cases. A good agreement is observed, showing that the performed analysis is accurate and efficient in modeling, for instance, wave propagation through building walls and integrated circuit layers in mobile communication and radar system applications. Later, simulations are made of resistive sheet devices, such as Salisbury screens and Jaumann absorbers, and of transmission lines made of metal-insulator-semiconductor (MIS) structures. Thereafter, a study on frequency selective surface (FSS) structures is described. The development of devices and microwave integrated circuits (MIC) based on such structures is proposed, for the accomplishment of experiments. Finally, future works are suggested, for instance, on the development of reflectarrays, frequency selective surfaces with dissimilar elements, and coupled frequency selective surfaces with elements located on different layers.
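The core relation behind the transmission line method, quoted here for context (a textbook result that the method applies layer by layer to the wall materials), is the input impedance seen through a homogeneous layer of thickness l, characteristic impedance Z_0 and propagation constant \gamma terminated by a load impedance Z_L:

    Z_{in} = Z_0 \, \frac{Z_L + Z_0 \tanh(\gamma l)}{Z_0 + Z_L \tanh(\gamma l)}

Cascading this transformation across the gypsum, wood and brick layers yields the reflection and transmission coefficients of the composite wall.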
Abstract:
In the socio-environmental scenario, concern for natural resources has increased, along with the reuse of products and by-products. Recycling is the approach of reintroducing a material or energy into the productive system. This method allows the reduction of the volume of garbage dumped in the environment, saving energy and decreasing the need to use natural resources. In general, end-of-life expanded polystyrene is deposited in sanitary landfills or uncontrolled garbage dumps, where it takes up a large volume and spreads easily by aeolian action, with consequent environmental pollution; recycling, however, avoids this misuse and reduces the amount obtained from petroleum. In this work, expanded polystyrene was recycled via melting and/or dissolution in solvents for the production of integrated circuit boards. The obtained material was characterized in flexural mode according to ASTM D790 and the results were compared with phenolite, the traditionally used material. Specimen fractures were observed by scanning electron microscopy in order to establish patterns. The recycled expanded polystyrene, as well as the phenolite, was also thermally analyzed by TGA and DSC. The method using dissolution produced very brittle materials. The method using melting showed neither void formation nor increased brittleness of the material. The recycled polystyrene presented a strength value significantly lower than that of the phenolite. (C) 2011 Published by Elsevier Ltd. Selection and peer-review under responsibility of ICM11
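For reference, the flexural stress in the three-point bending configuration of ASTM D790 (the standard expression, not a value reported in the abstract) is

    \sigma_f = \frac{3 P L}{2 b d^2}

where P is the applied load, L the support span, b the specimen width and d its thickness.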
Abstract:
The increasing demand for processing power in recent years has pushed the integrated circuit industry to look for ways of providing even more processing power with less heat dissipation, power consumption, and chip area. This goal had been pursued by increasing the circuit clock, but since there are physical limits to this approach, a new solution emerged in the form of the multiprocessor system on chip (MPSoC). This approach demands new tools and basic software infrastructure to take advantage of the inherent parallelism of these architectures. The oil exploration industry has, as one of its first activities, the decision on which oil fields to explore; those decisions are aided by reservoir simulations demanding high processing power, and the MPSoC may offer greater performance if its parallelism can be used well. This work presents a proposal for a micro-kernel operating system and auxiliary libraries aimed at the STORM MPSoC platform, analyzing their influence on the reservoir simulation problem.