957 results for On-Chip Balun
Abstract:
Due to the increasing demand for high-power, reliable, miniaturized energy storage devices, the development of micro-supercapacitors, or electrochemical micro-capacitors, has attracted much attention in recent years. This dissertation investigates several strategies to develop on-chip micro-supercapacitors with high power and energy density. Micro-supercapacitors based on interdigitated carbon micro-electrode arrays are fabricated through the carbon microelectromechanical systems (C-MEMS) technique, which is based on the carbonization of patterned photoresist. To improve the capacitive behavior, electrochemical activation is performed on the carbon micro-electrode arrays. The developed micro-supercapacitors show specific capacitances as high as 75 mF cm-2 at a scan rate of 5 mV s-1 after electrochemical activation for 30 minutes. The capacitance loss is less than 13% after 1000 cyclic voltammetry (CV) cycles. These results indicate that electrochemically activated C-MEMS micro-electrode arrays are promising candidates for on-chip electrochemical micro-capacitor applications. The energy density of the micro-supercapacitors was further improved by conformal coating of polypyrrole (PPy) on the C-MEMS structures. In these micro-devices the three-dimensional (3D) carbon microstructures serve as current collectors for high-energy-density PPy electrodes. Electrochemical characterization of these micro-supercapacitors shows that they can deliver a specific capacitance of about 162.07 mF cm-2 and a specific power of 1.62 mW cm-2 at a 20 mV s-1 scan rate. Addressing the need for high-power micro-supercapacitors, the application of graphene as an electrode material for micro-supercapacitors was also investigated. The present study suggests a novel method to fabricate graphene-based micro-supercapacitors with thin-film or in-plane interdigital electrodes. The fabricated micro-supercapacitors show exceptional frequency response and power handling performance and can effectively charge and discharge at rates as high as 50 V s-1. CV measurements show that the specific capacitance of the micro-supercapacitor based on reduced graphene oxide and carbon nanotube composites is 6.1 mF cm-2 at a scan rate of 0.01 V s-1. At a very high scan rate of 50 V s-1, a specific capacitance of 2.8 mF cm-2 (stack capacitance of 3.1 F cm-3) is recorded. This unprecedented performance can potentially broaden the future applications of micro-supercapacitors.
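As a quick consistency check on the reported numbers (an inference from the figures above, not a calculation given in the abstract), the stack capacitance is the areal capacitance divided by the total device thickness, so the quoted pair of values implies

$$ t \approx \frac{C_{\text{areal}}}{C_{\text{stack}}} = \frac{2.8\times10^{-3}\ \mathrm{F\,cm^{-2}}}{3.1\ \mathrm{F\,cm^{-3}}} \approx 9\times10^{-4}\ \mathrm{cm} \approx 9\ \mu\mathrm{m}, $$

i.e. a device only a few micrometres thick, consistent with the thin-film, in-plane interdigital electrode geometry described above.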
Abstract:
This paper details methodologies that have been explored for the fast proofing of on-chip architectures for Circular Dichroism techniques. Flow-cell devices fabricated from UV-transparent quartz are used for these experiments. The complexity of flow-cell production typically results in lead times of six months from order to delivery. Only at that point can the on-chip architecture be tested empirically and any required modifications determined, ready for the next six-month iteration phase. By using the proposed 3D printing and PDMS moulding techniques for fast proofing of on-chip architectures, the optimum design can be determined within a matter of hours prior to commitment to quartz chip production.
Abstract:
Doctoral Thesis in Veterinary Sciences, specialty of Biological and Biomedical Sciences
Abstract:
In recent years, the field of tissue engineering has seen a rapid increase in the generation of miniaturized cardiac tissues for the study of cardiac physiology and disease. This thesis analyzes the fabrication process of a heart-on-a-chip device recently published by Jayne et al. A combination of soft lithography and Direct Laser Writing (DLW) was used to fabricate the devices. DLW, in particular, provided two important features for the cell-seeding structures: a curved profile along the vertical axis and 3D structures of different heights on the same plane. DLW was also used to create precise adhesion sites for the induced pluripotent stem cells, which made it possible to control the geometry of the engineered tissues. In addition to the fabrication process, this work also describes the procedures required to calibrate the microsensors used to monitor the constructs. The first calibration phase determines the mechanical responsivity of the displacement sensors, while the second evaluates that of the electrical sensors, which convert displacements into changes in electrical resistance.
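A minimal sketch of the second calibration step described above (fitting the electrical responsivity that maps displacement to relative resistance change); the numerical values and the linear model are illustrative assumptions, not data from Jayne et al.:

import numpy as np

# Illustrative calibration data: applied displacement vs. measured relative
# resistance change (hypothetical values, not from the cited work).
displacement_um = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
delta_r_over_r  = np.array([0.0, 1.1e-3, 2.0e-3, 3.2e-3, 4.1e-3])

# Least-squares fit of a linear responsivity: dR/R = k * displacement + offset.
k, offset = np.polyfit(displacement_um, delta_r_over_r, 1)
print(f"responsivity k = {k:.2e} (dR/R per um), offset = {offset:.1e}")

# Inverting the fit converts a measured resistance change back to a displacement.
measured = 2.5e-3
print(f"estimated displacement: {(measured - offset) / k:.1f} um")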
Abstract:
A 4-10 GHz, on-chip-balun-based current-commutating mixer is proposed. Tunable resistive feedback is used at the transconductance stage for a wideband response, and an interlaced stacked transformer is adopted for good balance of the balun. Measurement results show that a conversion gain of 13.5 dB, an IIP3 of 4 dBm and a noise figure of 14 dB are achieved with 5.6 mW power consumption under a 1.2 V supply. The simulated amplitude and phase imbalance are within 0.9 dB and ±2° over the band.
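For context, a standard back-of-the-envelope conversion (not a figure reported in the paper) turns the quoted balun imbalance into differential-to-common-mode leakage. Modelling the two outputs as $V_1 = 1$ and $V_2 = \alpha\,e^{j(\pi+\Delta\phi)}$ with $\alpha = 10^{0.9/20} \approx 1.11$ and $\Delta\phi = 2^\circ$,

$$ \frac{|V_1 + V_2|}{|V_1 - V_2|} = \frac{|1 - \alpha e^{j\Delta\phi}|}{|1 + \alpha e^{j\Delta\phi}|} \approx \frac{0.12}{2.11} \approx 0.055 \approx -25\ \mathrm{dB}, $$

so the residual common-mode component stays roughly 25 dB below the differential signal over the band.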
Abstract:
The lanthanide binuclear helicate [Eu2(L^C2(CO2H))3] is coupled to avidin to yield a luminescent bioconjugate EuB1 (Q = 9.3%, τ(5D0) = 2.17 ms). MALDI-TOF mass spectrometry confirms the covalent binding of the Eu chelate, and UV-visible spectroscopy gives a luminophore/protein ratio of 3.2. Bio-affinity assays involving the recognition of a mucin-like protein expressed on human breast cancer MCF-7 cells by a biotinylated monoclonal antibody 5D10, to which EuB1 is attached via avidin-biotin coupling, demonstrate that (i) avidin activity is little affected by the coupling reaction and (ii) detection limits obtained by time-resolved (TR) luminescence with EuB1 and a commercial Eu-avidin conjugate are one order of magnitude lower than those of an organic conjugate (FITC-streptavidin). In the second part of the paper, conditions are established for growing MCF-7 cells in 100-200 µm wide microchannels engraved in PDMS; we demonstrate that EuB1 can be applied as effectively on this lab-on-a-chip device for the detection of tumour-associated antigens as on MCF-7 cells grown in normal culture vials. In order to exploit the versatility of the ligand used for self-assembling the [Ln2(L^C2(CO2H))3] helicates, which sensitizes the luminescence of both Eu(III) and Tb(III) ions, a dual on-chip assay is proposed in which estrogen receptors (ERs) and human epidermal growth factor receptors (Her2/neu) can be simultaneously detected on human breast cancer tissue sections. The Ln helicates are coupled to two secondary antibodies: ERs are visualized by the red-emitting EuB4 using goat anti-mouse IgG and Her2/neu receptors by the green-emitting TbB5 using goat anti-rabbit IgG. The assay is more than 6 times faster and requires 5 times less reagent than conventional immunohistochemical assays, which provides essential advantages over conventional immunohistochemistry for future clinical biomarker detection.
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under energy consumption and on-chip resource constraints. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is used to accelerate the element that incurs performance overheads. The concepts presented in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 coder-decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, a central bus design and a co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications such as performance, energy consumption, and resource costs are measured and analyzed; the trade-offs among these three factors are compared and balanced, and different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
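As a rough sanity check on such speedup figures (a standard Amdahl's-law estimate, not an analysis from the thesis itself), if the accelerated hotspot accounts for a fraction $p$ of the original runtime and is sped up by a factor $s$, the overall speedup is

$$ S = \frac{1}{(1-p) + p/s}, $$

so even with an arbitrarily fast accelerator ($s \to \infty$), an overall 7.9X gain requires the hotspot to cover at least $p = 1 - 1/7.9 \approx 87\%$ of the original execution time, which is why profiling to find the dominant hotspot is the first step of the workflow.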
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are smaller and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values. This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, in modern many-core systems the cores support dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis a general-purpose, software-based auto-calibration approach is also proposed to calibrate thermal sensors across the different voltage levels.
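A minimal sketch of the pull-based dynamic load balancing idea (workers fetch the next fault from a shared queue, so faster cores simply process more faults instead of idling behind a static split); the function names and task granularity are illustrative assumptions, not the thesis implementation on the SCC:

from multiprocessing import JoinableQueue, Process, Queue

def simulate_fault(fault_id):
    # Placeholder for the per-fault simulation kernel (hypothetical workload).
    return fault_id, sum(i * fault_id for i in range(10_000)) % 97

def worker(tasks, results):
    while True:
        fault_id = tasks.get()
        if fault_id is None:          # stop marker: no more work
            tasks.task_done()
            break
        results.put(simulate_fault(fault_id))
        tasks.task_done()

if __name__ == "__main__":
    n_workers, n_faults = 8, 1000
    tasks, results = JoinableQueue(), Queue()
    procs = [Process(target=worker, args=(tasks, results)) for _ in range(n_workers)]
    for p in procs:
        p.start()
    for fault_id in range(n_faults):  # dynamic dispatch: no static partitioning
        tasks.put(fault_id)
    for _ in procs:                   # one stop marker per worker
        tasks.put(None)
    tasks.join()
    done = [results.get() for _ in range(n_faults)]
    print(f"simulated {len(done)} faults")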
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SoC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug; however, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random, simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. For this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, a second tool was developed to generate functional coverage models that fit exactly the PD-based input space. Both the input stimuli and coverage model enhancements resulted in a notable testbench efficiency increase compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with the PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining the stimuli sources with their corresponding, automatically generated, coverage models.
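A minimal sketch of the underlying idea (constrained-random stimuli drawn only from explicitly enumerated valid parameter combinations, with a coverage model sized to the same reduced space); the parameter names, domains and constraint are illustrative assumptions, not the PD formalism or the tools developed in this work:

import itertools
import random

# Each stimulus parameter gets an explicit domain of legal values.
domains = {
    "burst_len":  [1, 2, 4, 8],
    "addr_mode":  ["aligned", "unaligned"],
    "data_width": [8, 16, 32],
}

def is_valid(stim):
    # Example constraint: unaligned accesses are only allowed for 8-bit data.
    return not (stim["addr_mode"] == "unaligned" and stim["data_width"] != 8)

# Enumerate the cross product once, drop invalid/irrelevant scenarios,
# then draw random stimuli only from the reduced input space.
keys = list(domains)
valid_space = [dict(zip(keys, combo))
               for combo in itertools.product(*(domains[k] for k in keys))
               if is_valid(dict(zip(keys, combo)))]

coverage = set()  # functional coverage model fitted exactly to the valid space
for _ in range(50):
    stim = random.choice(valid_space)
    coverage.add(tuple(sorted(stim.items())))
print(f"covered {len(coverage)} of {len(valid_space)} valid scenarios")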
Abstract:
As electronic devices get smaller and more complex, dependability assurance is becoming fundamental for many mission-critical, computer-based systems. This paper presents a case study on the possibility of using the on-chip debug infrastructures present in most current microprocessors to execute real-time fault injection campaigns. The proposed methodology is based on a debugger customized for fault injection and designed for maximum flexibility, and consists of injecting bit-flip faults into memory elements without modifying or halting the target application. The debugger design is easily portable and applicable to different architectures, providing a flexible and efficient mechanism for verifying and validating fault-tolerant components.
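A minimal sketch of the bit-flip fault model mentioned above, applied to a memory image; the memory representation and the injection interface are hypothetical stand-ins for the real on-chip debug access:

import random

def inject_bit_flip(memory, address):
    # Flip one randomly chosen bit of the byte at `address`; return the bit index.
    bit = random.randrange(8)
    memory[address] ^= (1 << bit)
    return bit

# Example campaign over a small memory image.
mem = bytearray(range(256))
for _ in range(10):
    addr = random.randrange(len(mem))
    bit = inject_bit_flip(mem, addr)
    print(f"flipped bit {bit} at address 0x{addr:02X}")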
Abstract:
This thesis is one of the first reports of digital microfluidics on paper and the first in which the chip's circuit was screen printed onto the paper. The use of the screen printing technique, a low-cost and fast method for electrode deposition, makes the whole chip processing much more aligned with the low-cost choice of paper as a substrate. Functioning chips were developed that were capable of working at voltages as low as 50 V, performing all the digital microfluidic operations: movement, dispensing, merging and splitting of droplets. Silver ink electrodes were screen printed onto paper substrates and covered by Parylene-C (through vapor deposition) as the dielectric and Teflon AF 1600 (through spin coating) as the hydrophobic layer. The morphology of different paper substrates, silver inks (with different annealing conditions) and Parylene deposition conditions was studied by optical microscopy, AFM, SEM and 3D profilometry. Resolution tests for the printing process and electrical characterization of the silver electrodes were also performed. As a showcase of the potential of these chips as biosensing devices, a colorimetric peroxidase detection test was successfully performed on chip, using 200 nL to 350 nL droplets dispensed from 1 μL drops.
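A minimal sketch of the electrode-actuation sequence behind the basic droplet-movement operation listed above (energize the next pad, release the current one); the voltage, timing and driver interface are illustrative assumptions, not the actual control setup of the thesis:

import time

ACTUATION_VOLTAGE = 50  # V, the minimum working voltage reported for these chips

def move_droplet(path, set_electrode, dwell_s=0.2):
    # Walk a droplet along `path` (a list of electrode ids) one pad at a time.
    for prev, nxt in zip(path, path[1:]):
        set_electrode(nxt, ACTUATION_VOLTAGE)  # attract the droplet to the next pad
        set_electrode(prev, 0)                 # release the current pad
        time.sleep(dwell_s)                    # allow the droplet to advance

# Example with a stub driver that just logs the switching pattern.
if __name__ == "__main__":
    move_droplet([0, 1, 2, 3], lambda pad, v: print(f"pad {pad} -> {v} V"))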
Abstract:
The motivation for this research originated from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications due to their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This has led to the creation of an independent industry, namely industrial automation, used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries of information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communication would merge into one information and communication industry (ICT). The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC started to evolve in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike with CISC processors, the RISC processor architecture business is a separate industry from the RISC chip manufacturing industry. It also has several hardware-independent software platforms consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice thanks to hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet market was formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers and is now being further extended by tablets. An underlying additional element of this transition is the increasing role of open source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture and processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed - all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply systems based on vertically integrated architectures consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability levels on a very narrow customer base due to strong technology-enabled customer lock-in and customers' high risk leverage, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation, subject to the competition between the incumbents, firstly through research on cost competitiveness efforts in captive outsourcing of engineering, research and development, and secondly by researching process re-engineering in the case of complex system global software support. Thirdly, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes industrial automation could take, considering the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focusing on maintaining their own proprietary solutions. The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control point and business model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.