930 results for Lab-On-A-Chip Devices
Abstract:
Air conditioning and lighting costs can be reduced substantially by changing the optical properties of "intelligent windows." The electrochromic devices studied to date have used copper as an additive. Copper, used here as an electrochromic material, was dissolved in an aqueous animal-protein-derived gel electrolyte. This combination constitutes the electrochromic system for reversible electrodeposition. Cyclic voltammetry, chronoamperometric and chromogenic analyses indicated good transparency (initial transmittance of 70%), optical reversibility, a small potential window (2.1 V), and transmittance variation in the visible (63.6%) and near-infrared (20%) spectral regions. Permanence in the darkened state was achieved by maintaining a lower pulse potential (-0.16 V) than the deposition potential (-1.0 V). Increasing the number of deposition and dissolution cycles favored the transmittance and photoelectrochemical reversibility of the device. The conductivity of the electrolyte (10^-3 S/cm) at several concentrations of CuCl2 was determined by electrochemical impedance spectroscopy. A thermogravimetric analysis confirmed the good thermal stability of the electrolyte, since the mass loss detected up to 100 degrees C corresponded to water evaporation, and decomposition of the gel started only at 200 degrees C. Micrographic and small-angle X-ray scattering analyses indicated the formation of a persistent deposit of copper particles on the ITO. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Ionic conducting membranes of gelatin plasticized with glycerol and containing LiI/I2 have been obtained and characterized by X-ray diffraction measurements, UV-Vis-NIR spectroscopy, thermal analysis and impedance spectroscopy. The transparent (80-90% in the visible range) membranes showed an ionic conductivity of 5 x 10^-5 S/cm at room temperature, which increased to 3 x 10^-3 S/cm at 80 degrees C. All the ionic conductivity measurements as a function of temperature showed VTF dependence, with an activation energy of 8 kJ/mol. These samples also showed a low glass transition temperature of -76 degrees C. Moreover, the samples were predominantly amorphous. The membranes, applied to small electrochromic devices, showed a 20% color change between colored and bleached states over more than 70 chronoamperometric cycles.
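The VTF (Vogel-Tamman-Fulcher) dependence mentioned above is conventionally written as follows; this is the standard textbook form, not a formula taken from the abstract, with the pre-exponential factor and ideal glass transition temperature as fitting parameters:

```latex
\sigma(T) = \sigma_0 \, T^{-1/2} \exp\!\left( \frac{-E_a}{k_B \,( T - T_0 )} \right)
```

Here \(\sigma_0\) is a pre-exponential factor, \(E_a\) the pseudo-activation energy (8 kJ/mol in the work above) and \(T_0\) the ideal glass transition temperature, typically some tens of kelvin below the measured \(T_g\) (-76 degrees C above).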
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called system-on-chip (SoC) or multi-processor system-on-chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-chips (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve also help greatly in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features, and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
Abstract:
The progress of electron device integration has proceeded for more than 40 years following the well-known Moore's law, which states that the transistor density on chip doubles every 24 months. This trend has been possible due to the downsizing of the MOSFET dimensions (scaling); however, new issues and new challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. In order to overcome the limitations of conventional structures, the research community is preparing different solutions that need to be assessed. Possible solutions currently under scrutiny are:
• devices incorporating materials with properties different from those of silicon, for the channel and the source/drain regions;
• new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of Ultra-Thin-Body SOI devices is a new design parameter, and it permits keeping Short-Channel-Effects under control without adopting high doping levels in the channel.
Among the solutions proposed to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edge, obtained by adopting, for the source/drain regions, materials with a band-gap different from that of the channel material. This solution makes it possible to increase the injection velocity of the carriers travelling from the source into the channel, and therefore to increase the performance of the transistor in terms of delivered drain current. The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; it also describes the modifications introduced in the Monte Carlo code to simulate conduction band discontinuities, as well as the simulations performed on one-dimensional simplified structures to validate them.
Chapter 4 presents the results obtained from Monte Carlo simulations of double-gate SOI transistors featuring conduction band offsets between the source and drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered. The scaling of device dimensions and the adoption of innovative architectures have consequences on power dissipation as well. In SOI technologies the channel is thermally insulated from the underlying substrate by a SiO2 buried-oxide layer; this SiO2 layer has a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects, which detrimentally impact the carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices, and provides a brief overview of the methods that have been proposed to model these phenomena. To understand how this problem impacts the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors as well as FinFETs featuring the same isothermal electrical characteristics.
In chapter 6 the same simulation approach is extensively employed to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperatures reached inside the device and the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, have been analyzed. Furthermore, the consequences for self-heating of technological solutions such as raised S/D extension regions or reduced fin height are explored as well. Finally, conclusions are drawn in chapter 7.
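To first order, the device thermal resistance discussed above relates dissipated power to channel temperature rise. A minimal sketch of this relation, with purely illustrative numbers (neither the thermal resistance nor the power value below come from the thesis):

```python
def self_heating_delta_t(r_th_k_per_w: float, power_w: float) -> float:
    """First-order self-heating estimate: the channel temperature rise
    equals the device thermal resistance times the dissipated power."""
    return r_th_k_per_w * power_w

# Hypothetical example values: a device with R_th = 1e5 K/W
# dissipating 0.1 mW would heat up by about 10 K.
print(self_heating_delta_t(1e5, 1e-4))
```

This linear model is only a lumped approximation; the electro-thermal simulations described in the abstract resolve the full three-dimensional temperature field instead.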
Abstract:
The document mainly covers the principal flow-control mechanisms for NoCs. Various switching schemes are discussed, along with the same schemes combined with the introduction of Virtual Channels, some low-level flow controls, and two solutions for end-to-end flow control: Credit Based and CTC (STMicroelectronics). The discussion presents some possible modifications to CTC to increase its performance while preserving the scalability that distinguishes it: these are the "back-to-back request" and "multiple incoming connections". Finally, some solutions for implementing quality of service in networks-on-chip are introduced. Precisely for QoS support, CTTC is introduced: a version of CTC with support for Time Division Multiplexing on a Spidergon network.
Abstract:
Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows sampling multiple channels of the PMT simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has also been integrated on the board, and specific firmware has been realized to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature would probably be integrated in a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA configuration memory implied the integration of a flash ISP (In System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory the behavior of the LIRA chip has been investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
The PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and were subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from the Rome University and INFN, a full readout chain equivalent to that present in NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which was able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic behaved well too and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end but inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. The chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line facility at CERN in Geneva (CH).
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed storing about 90 million events in 7 equivalent days of beam live-time. My activities basically concerned the realization of a firmware interface towards and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 um presented an overall efficiency of about 90% with a threshold of 450 electrons. The test beam also allowed estimating the resolution of the pixel sensor, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking into account the multiple scattering effect.
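The pitch/sqrt(12) figure quoted above is the expected RMS resolution of a binary-readout pixel detector: a hit is reconstructed at the pixel center, so the residual is uniformly distributed over one pitch, whose standard deviation is p/sqrt(12). A minimal sketch (the 50 um pitch used below is purely illustrative; the abstract does not state the APSEL-4D pitch):

```python
import math

def binary_resolution(pitch_um: float) -> float:
    """RMS resolution of a binary pixel detector: the variance of a
    uniform distribution over one pitch p is p**2 / 12, so the RMS
    spread is p / sqrt(12)."""
    return pitch_um / math.sqrt(12)

# Illustrative (hypothetical) pitch of 50 um:
print(round(binary_resolution(50.0), 2))  # RMS resolution in um
```

A measured residual width close to this value, once multiple scattering is unfolded, is what "consistent with the pitch/sqrt(12) formula" means in the abstract.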
Abstract:
A novel nanosized and addressable sensing platform based on membrane-coated plasmonic particles for the detection of protein adsorption using dark-field scattering spectroscopy of single particles has been established. To this end, a detailed analysis of the deposition of gold nanorods on differently functionalized substrates is performed in relation to various factors (such as pH, ionic strength, concentration of the colloidal suspension, and incubation time), in order to find the optimal conditions for obtaining a homogeneous distribution of particles at the desired surface number density. The possibility of successfully draping lipid bilayers over the gold particles immobilized on glass substrates depends on the careful adjustment of parameters such as membrane curvature and adhesion properties, and is demonstrated with complementary techniques such as phase-imaging AFM, fluorescence microscopy (including FRAP) and single-particle spectroscopy. The functionality and sensitivity of the proposed sensing platform are unequivocally certified by the resonance shifts of the plasmonic particles, individually interrogated with single-particle spectroscopy, upon the adsorption of streptavidin to biotinylated lipid membranes. This new detection approach, which employs particles as nanoscopic reporters for biomolecular interactions, ensures a highly localized sensitivity that offers the possibility to screen lateral inhomogeneities of native membranes. As an alternative to the 2D array of gold nanorods, short-range-ordered arrays of nanoholes in optically transparent gold films, or regular arrays of truncated-tetrahedron-shaped particles, are built by means of colloidal nanolithography on transparent substrates. Technical issues, mainly related to the optimization of the mask deposition conditions, are successfully addressed such that extended areas of homogeneously nanostructured gold surfaces are achieved.
Adsorption of the proteins annexin A1 and prothrombin on multicomponent lipid membranes, as well as the hydrolytic activity of the phospholipase PLA2, were investigated with classical techniques such as AFM, ellipsometry and fluorescence microscopy. At first, the issues of lateral phase separation in membranes of various lipid compositions and the dependence of the domain configuration (sizes and shapes) on the membrane content are addressed. It is shown that the tendency for phase segregation of gel and fluid phase lipid mixtures is accentuated in the presence of divalent calcium ions for membranes containing anionic lipids, as compared to neutral bilayers. Annexin A1 adsorbs preferentially and irreversibly on preformed phosphatidylserine (PS) enriched lipid domains but, depending on the PS content of the bilayer, the protein itself may induce clustering of the anionic lipids into areas with high binding affinity. Corroborated evidence from AFM and fluorescence experiments confirms the hypothesis of a specifically increased hydrolytic activity of PLA2 on the highly curved regions of membranes, due to a facilitated access of the lipase to the cleavage sites of the lipids. The influence of the nanoscale gold surface topography on the adhesion of lipid vesicles is unambiguously demonstrated, and this reveals, at least in part, an answer to the controversial question in the literature about the behavior of lipid vesicles interacting with bare gold substrates. The possibility of forming monolayers of lipid vesicles on chemically untreated gold substrates decorated with gold nanorods opens new perspectives for biosensing applications that involve the radiative decay engineering of the plasmonic particles.
Ultrasensitive chemiluminescence bioassays based on microfluidics in miniaturized analytical devices
Abstract:
The activity carried out during my PhD was principally addressed to the development of portable microfluidic analytical devices based on biospecific molecular recognition reactions and CL detection. In particular, the development of biosensors required the study of different materials and procedures for their construction, with particular attention to the development of suitable immobilization procedures and fluidic systems and to the selection of suitable detectors. Different methods were exploited, such as gene-probe hybridization assays or immunoassays, based on different platforms (functionalized glass slides or nitrocellulose membranes), trying to improve the simplicity of the assay procedure. Different CL detectors were also employed and compared with each other in the search for the best compromise between portability and sensitivity. The work was therefore aimed at the miniaturization and simplification of analytical devices, and the study involved all aspects of the system, from the analytical methodology to the type of detector, in order to combine high sensitivity with ease of use and rapidity. The latest development, involving the use of a smartphone as chemiluminescence detector, paves the way for a new generation of analytical devices in the clinical diagnostic field, thanks to the ideal combination of the sensitivity and simplicity of CL with the day-by-day increase in the performance of new-generation smartphone cameras. Moreover, the connectivity and data processing offered by smartphones can be exploited to perform analyses directly at home with simple procedures. The system could eventually be used to monitor patient health and directly notify the physician of the analysis results, allowing a decrease in costs and an increase in healthcare availability and accessibility.
Abstract:
Nowadays the rise of non-recurring engineering (NRE) costs associated with complexity is becoming a major factor in SoC design, limiting both scaling opportunities and the flexibility advantages offered by the integration of complex computational units. The introduction of embedded programmable elements can represent an appealing solution, able both to guarantee the desired flexibility and upgradability and to widen the SoC market. In particular, embedded FPGA (eFPGA) cores can provide bit-level optimization for those applications which benefit from synthesis, paying on the other hand in terms of performance penalties and area overhead with respect to standard-cell ASIC implementations. In this scenario, this thesis proposes a design methodology for a synthesizable programmable device designed to be embedded in a SoC. A soft-core embedded FPGA (eFPGA) is hence presented and analyzed in terms of the opportunities given by a fully synthesizable approach, following an implementation flow based on standard-cell methodology. A key point of the proposed eFPGA template is that it adopts a Multi-Stage Switching Network (MSSN) as the foundation of the programmable interconnects, since it can be efficiently synthesized and optimized through a standard-cell-based implementation flow, ensuring at the same time an intrinsically congestion-free network topology. The evaluation of the flexibility potential of the eFPGA has been performed using different technology libraries (STMicroelectronics CMOS 65nm and BCD9s 0.11μm) through a design space exploration in terms of area-speed-leakage tradeoffs, enabled by the full synthesizability of the template. Since the most relevant disadvantage of the adopted soft approach, compared to a hard core, is an increased performance overhead, the eFPGA analysis has been made targeting small area budgets.
The generation of the configuration bitstream has been achieved thanks to the implementation of a custom CAD flow environment, and has allowed functional verification and performance evaluation through an application-aware analysis.
Abstract:
This thesis work, carried out at the laboratories of the X-ray Imaging Group of the Department of Physics and Astronomy of the University of Bologna and within the COSA (Computing on SoC Architectures) project of the INFN Fifth National Scientific Committee, aims at porting and analyzing a tomographic reconstruction code on GPU architectures mounted on low-power System-on-Chips, in order to develop a portable, inexpensive and relatively fast method. Starting from the computational analysis, three different versions of the CUDA C porting were developed: in the first, only the most computationally expensive part of the calculation was moved to the graphics card; the second exploits the coprocessor's native speed in matrix computation (mapping each pixel to a single parallel computing unit); the third is a further-optimized improvement of the previous version. The third version was chosen as the final one because it performs best both in single-slice reconstruction time and in energy savings. The developed porting was compared with two other parallelizations, in OpenMP and MPI. The efficiency of each paradigm, as a function of computing speed and energy consumed, was then studied both on an HPC cluster and on a low-power SoC cluster (using, in particular, the quad-core Tegra K1 board). The solution we propose combines the OpenMP and CUDA C portings: three CPU cores are reserved for executing the OpenMP code, while the fourth manages the GPU through the CUDA C porting. This double parallelization has the highest efficiency in terms of power and energy, whereas the HPC cluster has the highest efficiency in computing speed. The proposed method would therefore make it possible to exploit almost all the potential of the CPU and GPU at a very low cost.
A possible future optimization could reconstruct two slices simultaneously on the GPU, roughly doubling the total speed and making better use of the hardware. This study gave very satisfactory results; indeed, with only three TK1 boards it is possible to equal, and perhaps later surpass, the computing power of a traditional server, with the added advantage of a portable, low-power, low-cost system. In the computing field, this research stands as one of the first effective studies on low-power SoC architectures and their use in a scientific context, with very promising results.
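The "one pixel per parallel computing unit" mapping described above works because, in backprojection, every output pixel can be computed independently of the others. A minimal NumPy sketch of that idea follows; it is an illustrative unfiltered backprojection, not the thesis's actual CUDA C code, and the geometry (parallel beam, nearest-neighbor detector sampling) is an assumption:

```python
import numpy as np

def backproject(sinogram, angles, size):
    """Naive parallel-beam backprojection. Each output pixel only reads
    from the sinogram and writes its own accumulator, so the whole image
    can be computed with one independent thread per pixel, which is what
    makes the GPU mapping natural."""
    recon = np.zeros((size, size))
    center = size // 2
    ys, xs = np.mgrid[0:size, 0:size]
    xs = xs - center
    ys = ys - center
    n_det = sinogram.shape[1]
    for sino_row, theta in zip(sinogram, angles):
        # Detector coordinate seen by each pixel at this projection angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + n_det // 2
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += sino_row[idx]
    return recon * np.pi / (2 * len(angles))
```

In the CUDA version described in the abstract, the per-pixel accumulation in the loop body would become the body of a kernel launched over a 2D grid of threads.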
Abstract:
Modifications and upgrades to the hydraulic flume facility in the Environmental Fluid Mechanics and Hydraulics Laboratory (EFM&H) at Bucknell University are described. These changes enable small-scale testing of model marine hydrokinetic (MHK) devices. The design of the experimental platform provides a controlled environment for testing model MHK devices to determine their effect on the local substrate. Specifically, the effects being studied are scour and erosion around a cylindrical support structure and deposition of sediment downstream from the device.
Abstract:
Microfluidic systems have become competitive tools for the in vitro modelling of diseases and promising alternatives to animal studies. They make it possible to obtain more in vivo-like conditions for cellular assays. Research in idiopathic pulmonary fibrosis could benefit from this novel methodological approach to understand the pathophysiology of the disease and develop efficient therapies. The use of hepatocyte growth factor (HGF) for alveolar re-epithelialisation is a promising approach. In this study, we show a new microfluidic system to analyse the effects of HGF on injured alveolar epithelial cells. Microfluidic systems in polydimethylsiloxane were fabricated by soft lithography. Alveolar A549 epithelial cells (10,000 cells) were seeded and studied in these microfluidic systems with media perfusion (1 μl/30 min). Injury tests were performed on the cells by perfusion with media containing H2O2 or bleomycin. The degree of injury was then assessed by metabolic and apoptotic assays. Wound assays were also performed with a central laminar flow of trypsin. Wound closure with HGF versus control media was monitored. The alveolar A549 epithelial cells grew and proliferated in the microfluidic system. In the wound closure assay, the degree of wound closure after 5 hours was 53.3±1.3% with HGF compared to 9.8±2.4% without HGF (P < 0.001). We present a novel microfluidic model that allows culture, injury and wounding of A549 epithelial cells, and represents the first step towards the development of an in vitro reconstitution of the alveolar-capillary interface. We were also able to confirm that HGF increased alveolar epithelial repair in this system.