905 results for Lab-on-a-chip
Abstract:
Final Master's Project submitted for the degree of Master in Electronics and Telecommunications Engineering
Abstract:
The rapid evolution of devices containing integrated circuits, in particular FPGAs (Field-Programmable Gate Arrays) and, more recently, FPGA-based Systems on a Chip (SoCs), together with the evolution of the associated design tools, has left a gap between the release of these technologies and the production of teaching materials that help engineers carry out hardware/software co-design with them. To help close this gap, this work presents the development of documents (tutorials) aimed at two recent technologies: the VIVADO hardware/software development tool and the Zynq-7000 (Z-7010) SoC, both developed by Xilinx. The documents produced are based on a basic project implemented entirely in programmable logic and on the same project implemented through the embedded programmable processor, so that the tool's design flow can be evaluated both for a design implemented fully in hardware and for the same design implemented as a hardware/software system.
Abstract:
The miniaturization of the microelectronics industry is an unquestionable fact, and CMOS technology is no exception. As a consequence, the scientific community has set itself two major challenges: first, to push CMOS technology as far as possible ('Beyond CMOS') by developing high-performance systems such as microprocessors, micro- and nanosystems, or pixel systems; and second, to start a new generation of electronics based on completely different technologies within the field of nanotechnology. All these advances demand constant research and innovation in the complementary areas, such as packaging. Packaging must fulfil three basic functions: providing the electrical interface between the system and the outside world, providing mechanical support for the system, and providing a heat-dissipation path. Therefore, considering that most of these high-performance devices demand a high number of inputs and outputs, multichip modules (MCMs) and flip-chip technology are a very attractive solution for this type of device. The objective of this thesis is to develop a multichip-module technology based on flip-chip interconnections for the integration of hybrid pixel detectors, which includes: 1) the development of a bumping technology based on electroplated eutectic Sn/Ag solder bumps with a 50 µm pitch, and 2) the development of a gold via-in-silicon technology that allows chips to be interconnected and stacked vertically (3D packaging) with a 100 µm pitch. Finally, the high interconnection capability of flip-chip packages has allowed traditionally monolithic pixel systems to evolve towards more compact and complex hybrid systems, which in this thesis is reflected in the transfer of the developed technology to the field of high-energy physics, specifically in the implementation of the bump-bonding system of a digital mammography unit. Additionally, a modular hybrid detector device for real-time 3D image reconstruction was also implemented, which has resulted in a patent.
Abstract:
Hybrid multiprocessor architectures that combine re-configurable computing and multiprocessors on a chip are being proposed to transcend the performance of standard multi-core parallel systems. Both fine-grained and coarse-grained parallel algorithm implementations are feasible in such hybrid frameworks. A compositional strategy for designing fine-grained multi-phase regular processor arrays to target hybrid architectures is presented in this paper. The method is based on deriving component designs using classical regular array techniques and composing the components into a unified global design. Run-time phase changes and data routing are characteristic of the resulting designs. In order to describe the data transfer between phases, the concept of communication domain is introduced so that the producer–consumer relationship arising from multi-phase computation can be treated in a unified way as a data routing phase. This technique is applied to derive new designs of multi-phase regular arrays with different dataflow between phases of computation.
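As a rough illustration of the communication-domain idea described above, the following Python sketch treats the transfer between two computation phases as an explicit routing step between producer and consumer cells; the toy computation, the routing map, and all names are illustrative and are not taken from the paper's formal derivation.
```python
# Minimal sketch (not the paper's formal method): the data transfer between two
# computation phases is modelled as an explicit routing phase over a 2D array.
import numpy as np

def phase_matmul(A, B):
    # Phase 1: a fine-grained regular-array computation, here a matrix product.
    return A @ B

def routing_phase(C, mapping):
    # Communication domain: each producer cell (i, j) forwards its value to the
    # consumer cell mapping(i, j); here routing is simply a data rearrangement.
    out = np.empty_like(C)
    for i in range(C.shape[0]):
        for j in range(C.shape[1]):
            out[mapping(i, j)] = C[i, j]
    return out

def phase_rowsum(C):
    # Phase 2: consumes the routed data.
    return C.sum(axis=1)

A = np.random.rand(4, 4)
B = np.random.rand(4, 4)
C = phase_matmul(A, B)
C_routed = routing_phase(C, lambda i, j: (j, i))   # e.g. a transpose-style routing
result = phase_rowsum(C_routed)
```
In an actual regular-array design the routing phase would be realized as near-neighbour data movement on the array rather than a global permutation; the sketch only conveys the producer-consumer structure.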
Abstract:
This paper proposes a parallel hardware architecture for image feature detection based on the Scale Invariant Feature Transform algorithm and applied to the Simultaneous Localization And Mapping problem. The work also proposes specific hardware optimizations considered fundamental to embed such a robotic control system on a chip. The proposed architecture is completely stand-alone; it reads the input data directly from a CMOS image sensor and provides the results via a field-programmable gate array coupled to an embedded processor. The results may either be used directly in an on-chip application or accessed through an Ethernet connection. The system is able to detect features at up to 30 frames per second (320 x 240 pixels) and has accuracy similar to a PC-based implementation. The achieved system performance is at least one order of magnitude better than a PC-based solution, a result achieved by investigating the impact of several hardware-oriented optimizations on performance, area and accuracy.
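For context, a software SIFT baseline of the kind the hardware is compared against can be sketched as follows; this assumes OpenCV 4.4 or later (cv2.SIFT_create) and a generic camera source, neither of which is specified in the paper.
```python
# Hedged sketch of a PC-based SIFT feature-detection reference (not the paper's code).
import cv2
import time

sift = cv2.SIFT_create()
cap = cv2.VideoCapture(0)                       # any camera; device index is illustrative

t0 = time.time()
frames = 0
while frames < 100:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (320, 240))         # match the embedded system's resolution
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    frames += 1

fps = frames / (time.time() - t0)
print(f"software SIFT: {fps:.1f} frames per second, {len(keypoints)} features in last frame")
cap.release()
```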
Abstract:
The increasing demand for processing power in recent years has pushed the integrated-circuit industry to look for ways of providing even more processing power with less heat dissipation, power consumption, and chip area. This goal had been achieved by raising the circuit clock frequency, but since that approach has physical limits, a new solution has emerged: the multiprocessor system on chip (MPSoC). This approach demands new tools and basic software infrastructure to take advantage of the inherent parallelism of these architectures. One of the first activities in the oil-exploration industry is the decision on whether to explore an oil field; such decisions are aided by reservoir simulations that demand high processing power, and the MPSoC may offer greater performance if its parallelism can be exploited well. This work presents a proposal for a micro-kernel operating system and auxiliary libraries aimed at the STORM MPSoC platform, and analyzes their influence on the reservoir-simulation problem.
Abstract:
Background: The sequencing and publication of the cattle genome and the identification of single nucleotide polymorphism (SNP) molecular markers have provided new tools for animal genetic evaluation and genomic-enhanced selection. These new tools aim to increase the accuracy and scope of selection while decreasing the generation interval. The objective of this study was to evaluate the enhancement of accuracy caused by the use of genomic information (Clarifide® - Pfizer) in the genetic evaluation of Brazilian Nellore cattle.
Review: The application of genome-wide association studies (GWAS) is recognized as one of the most practical approaches to modern genetic improvement. Genomic selection is perhaps most suited to the improvement of traits with low heritability in zebu cattle. The primary interest in livestock genomics has been to estimate the effects of all the markers on the chip, conduct cross-validation to determine accuracy, and apply the resulting information in GWAS either alone [9] or in combination with bull test and pedigree-based genetic evaluation data. The cost of SNP50K genotyping, however, limits the commercial application of GWAS based on all the SNPs on the chip. Reasonable predictability and accuracy can nevertheless be achieved in GWAS by using an assay that contains an optimally selected predictive subset of markers, as opposed to all the SNPs on the chip. The best way to integrate genomic information into genetic improvement programs is to include it in traditional genetic evaluations. This approach combines traditional expected progeny differences based on phenotype and pedigree with the genomic breeding values based on the markers. Including the different sources of information in a multiple-trait genetic evaluation model for within-breed dairy cattle selection is working with excellent results. However, given the wide genetic diversity of zebu breeds, the high-density panel used for genomic selection in dairy cattle (Illumina BovineSNP50 array) appears insufficient for across-breed genomic predictions and selection in beef cattle. Today there is only one breed-specific targeted SNP panel with genomic predictions developed using animals across the entire population of the Nellore breed (www.pfizersaudeanimal.com), which enables genomically enhanced selection. Genomic profiles are a way to enhance our current selection tools to achieve more accurate predictions for younger animals.
Material and Methods: We analyzed age at first calving (AFC), accumulated productivity (ACP), stayability (STAY) and heifer pregnancy at 30 months (HP30) in Nellore cattle, fitting two different animal models: 1) a traditional single-trait model, and 2) a two-trait model where the genomic breeding value or molecular value prediction (MVP) was included as a correlated trait. All mixed-model analyses were performed using the statistical software ASREML 3.0.
Results: Genetic correlation estimates between AFC, ACP, STAY, HP30 and their respective MVPs ranged from 0.29 to 0.46. Results also showed an increase of 56%, 36%, 62% and 19% in the estimated accuracy of AFC, ACP, STAY and HP30 when MVP information was included in the animal model.
Conclusion: Depending upon the trait, integration of MVP information into the genetic evaluation resulted in an increase in accuracy of 19% to 62% compared to the accuracy from traditional genetic evaluation. GE-EPD will be an effective tool to enable faster genetic improvement through more dependable selection of young animals.
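For readers unfamiliar with the two-trait approach, a generic form of an animal model in which the MVP enters as a correlated trait is sketched below; the fixed effects and design matrices actually fitted in the study are not detailed in the abstract, so this is only schematic.
```latex
% Schematic two-trait animal model with the molecular value prediction (MVP)
% treated as a correlated trait (generic form; study-specific effects not shown).
\begin{aligned}
\mathbf{y}_1 &= \mathbf{X}_1\boldsymbol{\beta}_1 + \mathbf{Z}_1\mathbf{a}_1 + \mathbf{e}_1
  && \text{(phenotypic trait, e.g. STAY)} \\
\mathbf{y}_2 &= \mathbf{X}_2\boldsymbol{\beta}_2 + \mathbf{Z}_2\mathbf{a}_2 + \mathbf{e}_2
  && \text{(MVP as correlated trait)} \\
\operatorname{var}\!\begin{bmatrix}\mathbf{a}_1\\ \mathbf{a}_2\end{bmatrix}
  &= \mathbf{G}_0 \otimes \mathbf{A}, \qquad
\mathbf{G}_0 = \begin{bmatrix}\sigma^2_{a_1} & r_g\,\sigma_{a_1}\sigma_{a_2}\\
                              r_g\,\sigma_{a_1}\sigma_{a_2} & \sigma^2_{a_2}\end{bmatrix}
\end{aligned}
```
Here A is the pedigree-based relationship matrix and r_g is the genetic correlation between the phenotypic trait and its MVP (estimated at 0.29 to 0.46 above); the accuracy gain comes from the MVP contributing information to a_1 through this correlation.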
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This paper describes a long-range remotely controlled CE system built on an all-terrain vehicle. A four-stroke engine and a set of 12-V batteries were used to provide power to a series of subsystems that include drivers, communication, computers, and a capillary electrophoresis module. This dedicated instrument allows air sampling using a porous polypropylene tube coupled to a flow system that transports the sample to the inlet of a fused-silica capillary. A hybrid approach was used for the construction of the analytical subsystem, combining a conventional fused-silica capillary (used for separation) and a laser-machined microfluidic block made of PMMA. A solid-state cooling approach was also integrated into the CE module to enable controlling the temperature and therefore increase the useful range of the robot. Although ultimately intended for the detection of chemical warfare agents, the proposed system was used to analyze a series of volatile organic acids. As such, the system allowed the separation and detection of formic, acetic, and propionic acids with signal-to-noise ratios of 414, 150, and 115, respectively, after sampling for only 30 s and performing an electrokinetic injection for 2.0 s at 1.0 kV.
Abstract:
Microchip electrophoresis has become a powerful tool for DNA separation, offering all of the advantages typically associated with miniaturized techniques: high speed, high resolution, ease of automation, and great versatility for both routine and research applications. Various substrate materials have been used to produce microchips for DNA separations, including conventional (glass, silicon, and quartz) and alternative (polymers) platforms. In this study, we perform DNA separation in a simple and low-cost polyester-toner (PeT)-based electrophoresis microchip. PeT devices were fabricated by a direct-printing process using a 600 dpi-resolution laser printer. DNA separations were performed on the PeT chip with channels filled with polymer solutions (0.5% m/v hydroxyethylcellulose or hydroxypropylcellulose) at electric fields ranging from 100 to 300 V/cm. Separation of DNA fragments between 100 and 1000 bp, with good correlation between the size of the DNA fragments and their mobility, was achieved in this system. Although the mobility increased with increasing electric field, separations showed the same profile regardless of the electric field. The system provided good separation efficiency (215 000 plates per m for the 500 bp fragment) and the separation was completed in 4 min for the 1000 bp fragment ladder. The cost of a given chip is approximately $0.15 and it takes less than 10 minutes to prepare a single device.
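The efficiency and mobility figures quoted above are conventionally obtained from the electropherogram with the standard relations below (textbook definitions, not calculations reported in the study):
```latex
% Plate number from a Gaussian peak, efficiency per metre, and apparent mobility.
N = 5.54\left(\frac{t_m}{w_{1/2}}\right)^{2}, \qquad
\text{plates per metre} = \frac{N}{L_{\mathrm{eff}}}, \qquad
\mu_{\mathrm{app}} = \frac{L_d\, L_t}{V\, t_m}
```
where t_m is the migration time, w_{1/2} the peak width at half height, L_eff (= L_d) the channel length to the detector, L_t the total channel length, and V the applied voltage.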
Abstract:
In the framework of developing defect-based life models, in which breakdown is explicitly associated with partial discharge (PD)-induced damage growth from a defect, aging tests and PD measurements were carried out in the lab on polyethylene (PE) layered specimens containing artificial cavities. PD activity was monitored continuously during aging. A quasi-deterministic series of stages can be observed in the behavior of the main PD parameters (i.e. discharge repetition rate and amplitude). Phase-resolved PD patterns at various aging stages were reproduced by numerical simulation based on a physical discharge model devoid of adaptive parameters. The evolution of the simulation parameters provides insight into the physical-chemical changes taking place at the dielectric/cavity interface during the aging process. PD activity shows similar time behavior under constant cavity gas volume and constant cavity gas pressure conditions, suggesting that the variation of PD parameters may not be attributed to the variation of the gas pressure. Brownish PD byproducts, consisting of oxygen-containing moieties, and degradation pits were found at the dielectric/cavity interface. It is speculated that the change of PD activity is related to the composition of the cavity gas, as well as to the properties of the dielectric/cavity interface.
Abstract:
Many physiological and pathological processes are mediated by the activity of proteins assembled in homo- and/or hetero-oligomers. The correct recognition and association of these proteins into a functional complex is a key step determining the fate of the whole pathway. This has led to an increasing interest in selecting molecules able to modulate/inhibit these protein-protein interactions. In particular, our research was focused on Heat Shock Protein 90 (Hsp90), responsible for the activation, maturation and disposition of many client proteins [1], [2], [3]. Circular Dichroism (CD) spectroscopy, Surface Plasmon Resonance (SPR) and Affinity Capillary Electrophoresis (ACE) were used to characterize the Hsp90 target and, furthermore, its inhibition process via the C-terminal domain driven by the small molecule Coumermycin A1. Circular Dichroism was used as a powerful technique to characterize Hsp90 and its co-chaperone Hop in solution in terms of secondary structure content and stability at different pHs, temperatures and solvents. Furthermore, CD was used to characterize ATP but, unfortunately, we were not able to monitor an interaction between ATP and Hsp90. The utility of SPR technology, on the other hand, arises from the possibility of immobilizing the protein on a chip through its N-terminal domain to later study the interaction with small molecules able to disrupt the Hsp90 dimerization at the C-terminal domain. The protein was attached to the SPR chip using the “amine coupling” chemistry so that the C-terminal domain was free to interact with Coumermycin A1. The goal of the experiment was achieved by testing a range of concentrations of the small molecule Coumermycin A1. Despite the large difference between the molecular weight of the protein (90 kDa) and the drug (1110.08 Da), we were able to calculate the affinity constant of the interaction, which was found to be 11.2 µM. In order to confirm the binding constant calculated for Hsp90 on the chip, we decided to use Capillary Electrophoresis to test Coumermycin binding to Hsp90. First, this technique was conveniently used to characterize the Hsp90 sample in terms of composition and purity. The experimental conditions were established on two different systems, the bare fused-silica and the PVA-coated capillary. We were able to characterize the Hsp90 sample in both systems. Furthermore, we employed an application of capillary electrophoresis, Affinity Capillary Electrophoresis (ACE), to measure and confirm the binding constant calculated for Coumermycin on the optical biosensor. We found a KD = 19.45 µM. This result compares favorably with the KD previously obtained on the biosensor. This is a promising result for the use of our novel approach to screen new potential inhibitors of the Hsp90 C-terminal domain.
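A minimal sketch of how an equilibrium KD of this order can be extracted from steady-state SPR responses with a 1:1 binding isotherm is shown below in Python; the concentrations and response values are invented for illustration and are not data from this work.
```python
# Hedged sketch: estimating an equilibrium dissociation constant (KD) from
# steady-state SPR responses with a 1:1 binding isotherm, R = Rmax*C/(KD + C).
import numpy as np
from scipy.optimize import curve_fit

def isotherm(conc, rmax, kd):
    # Langmuir 1:1 steady-state binding model.
    return rmax * conc / (kd + conc)

conc = np.array([1.0, 2.5, 5.0, 10.0, 25.0, 50.0, 100.0])   # µM, analyte concentrations (illustrative)
resp = np.array([6.0, 13.0, 22.0, 35.0, 52.0, 63.0, 71.0])  # RU, equilibrium responses (illustrative)

popt, pcov = curve_fit(isotherm, conc, resp, p0=[80.0, 10.0])
rmax, kd = popt
print(f"Rmax = {rmax:.1f} RU, KD = {kd:.1f} µM")
```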
Abstract:
During the last few decades an unprecedented technological growth has been at the center of the embedded-systems design landscape, with Moore's Law being the leading factor of this trend. Today, in fact, an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space exploration needed to find the best design has exploded, and hardware designers are in fact facing the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they have to cope with the huge complexity of both hardware and software systems. In this thesis two different research works on Virtual Platforms are presented: the first one is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs. The second work exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of an increased simulation speed. The term Virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware, and 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques with the goal of mitigating, and where possible overcoming, some of the challenges introduced by the many-core design paradigm.
Abstract:
This thesis presents the design, implementation, and application of a miniaturized experimental setup for image reconstruction using Electrical Impedance Tomography (EIT). The work described here constitutes a preliminary feasibility study for reconstructing the position of small portions of tissue (on the order of a few millimetres) or cell aggregates inside a scaffold in 3D tissue or cell cultures. The setup designed incorporates 8 vertical electrodes arranged around the periphery of a circular measurement chamber 10 mm in diameter. The EIT analysis was carried out using i) electrodes conductive along the whole height of the chamber (used in the two-dimensional and quasi-two-dimensional EIT models) and ii) deep-brain-stimulation electrodes (conductive only over a small volume at the tip and placed at three different heights: top, centre, and bottom), used in the three-dimensional EIT model. The finite element method (FEM) was used to solve both the forward and the inverse problem, reconstructing the conductivity distribution map within the measurement chamber. The experiments carried out made it possible to reconstruct the conductivity distribution map for samples on the order of one millimetre in diameter. These dimensions are compatible with those of the samples studied in tissue engineering and also with those typical of organ-on-a-chip systems. The EIT method developed, the prototype setup, and the statistical treatment of the data are currently being implemented further in collaboration with the group of Professor David Holder, Dept. Medical Physics and Bioengineering, University College London (UCL), United Kingdom.
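As a generic illustration of the inverse step behind such a reconstruction (independent of the specific FEM models and electrode configurations used in the thesis), a one-step Tikhonov-regularized linearized EIT update can be sketched in Python as follows; the Jacobian and voltage vectors are random placeholders standing in for quantities a forward solver would provide.
```python
# Hedged sketch of the linearized EIT inverse problem: one-step Tikhonov-regularized
# update, with placeholder data in place of a FEM forward solver's output.
import numpy as np

n_meas, n_elem = 40, 256                   # e.g. 8 electrodes -> 8*(8-3) = 40 adjacent-pattern measurements; 256 mesh elements
rng = np.random.default_rng(0)
J = rng.normal(size=(n_meas, n_elem))      # Jacobian of boundary voltages w.r.t. element conductivities (placeholder)
v_ref = rng.normal(size=n_meas)            # simulated voltages for the homogeneous (reference) chamber (placeholder)

dsigma_true = np.zeros(n_elem)
dsigma_true[100:110] = 0.2                 # a small conductive inclusion (illustrative)
v_meas = v_ref + J @ dsigma_true           # "measured" voltages with the inclusion present

lam = 1e-2                                 # Tikhonov regularization parameter
dv = v_meas - v_ref
# One-step solution of (J^T J + lam^2 I) dsigma = J^T dv
dsigma = np.linalg.solve(J.T @ J + lam**2 * np.eye(n_elem), J.T @ dv)
# dsigma is the reconstructed change in conductivity per mesh element, which is then
# mapped back onto the FEM mesh to form the image.
```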