930 results for Lab-On-A-Chip Devices
Abstract:
This work presents microfabrication processes for structures containing microchannels, together with hydrodynamic and electroosmotic fluid-handling systems. Microfabrication processes were developed using toner on polyester, toner on glass, and toner as resist, as well as alternative methods for drilling glass slides and sealing glass microstructures, and microstructures for capillary electrophoresis and electrospray ionization mass spectrometry. Characterization of the materials and processes gave a broad view of the potential and the alternatives of the microfabrication processes, and it was shown that toner-polyester devices are chemically resistant to the substances typically used in capillary electrophoresis. In this work, a contactless conductivity detector was implemented in toner-polyester microstructures and the electrophoretic separation of some alkali metals is demonstrated. The microstructure was designed in the standard cross format, with a separation channel 22 mm long, 12 µm deep, and of typical width. The conductivity cell was built over the separation channel using copper adhesive tape (1 mm wide) as electrodes. The signal applied to the cell was 530 kHz at 10 Vpp. The separation of K+, Na+, and Li+ at a concentration of 100 µmol L-1 was carried out in about 0.8 min using a separation potential of 1 kV. Microchips for mass spectrometric analysis with electrospray sample introduction were developed, and chloride ion clusters were determined at a concentration of 1 mmol L-1. A 1 mmol L-1 solution of glucosamine in 1:1 (v/v) water/methanol, under a current of 100 nA, also produced a stable signal free of corona discharge.
Using amperometric detection, electropherograms were obtained showing the separation of iodide (10 mmol L-1) and ascorbate (40 mmol L-1) at a separation potential of 4.0 kV (800 V cm-1), a detection potential of 0.9 V (vs. Ag/AgCl), injection at 1.0 kV for 1 s, and a 10 mmol L-1 sodium borate buffer with 0.2 mmol L-1 CTAH, pH 9.2. An efficiency of 1.6 x 10^4 plates/m was obtained, and detection limits of 500 nmol L-1 (135 amol) and 1.8 µmol L-1 (486 amol) were reached for iodide and ascorbate, respectively. The fabrication process using toner as a structural material for glass microchips was well established, and both photometric and contactless conductivity detection modes were demonstrated. Electropherograms were obtained with contactless conductivity detection for a 200 µmol L-1 solution of K+, Na+, and Li+ in a 30 mmol L-1 histidine/lactic acid buffer in 9:1 (v/v) water:methanol, with electrokinetic injection at 2.0 kV for 5.0 s, a separation potential of 1 kV, a frequency of 530 kHz, and a voltage of 2.0 Vpp. A photometric detection system operating at 660 nm was also implemented for the microchip and used to detect 1.0 mmol L-1 methylene blue in a 20 mmol L-1 sodium borate running buffer (pH 9.2), with the detector positioned 40 mm from the injection point and electrokinetic injection at 2.0 kV for 12 s, yielding well-resolved peaks in less than 1 min.
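The reported limits of detection pair a concentration with an absolute injected amount; for a fixed injection-plug volume the two are related by n = C x V. A minimal sketch of that arithmetic (the ~270 pL plug volume is back-calculated from the reported figures, not stated in the abstract):

```python
def injected_amount_mol(conc_mol_per_L, plug_volume_L):
    """Moles loaded onto the chip: n = C * V."""
    return conc_mol_per_L * plug_volume_L

# Plug volume back-calculated from the reported LOD pairs (an assumption).
plug = 270e-12  # L (270 pL)

iodide = injected_amount_mol(500e-9, plug)     # ~1.35e-16 mol = 135 amol
ascorbate = injected_amount_mol(1.8e-6, plug)  # ~4.86e-16 mol = 486 amol
```

Both LOD pairs are consistent with the same plug volume, which is why a single value is used above.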
Abstract:
We could probably get sharper focus on another scanner with manual focus adjustment
Abstract:
Cover title.
Abstract:
This thesis documents the design, manufacture and testing of a passive and non-invasive micro-scale planar particle-from-fluid filter for segregating cell types from a homogeneous suspension. The microfluidics system can be used to separate spermatogenic cells from testis biopsy samples, providing a mechanism for filtrate retrieval for assisted reproduction therapy. The system can also be used for point-of-service diagnostics applications for hospitals, lab-on-a-chip pre-processing and field applications such as clinical testing in the third world. Various design concepts are developed and manufactured, and are assessed based on etched structure morphology, robustness to variations in the manufacturing process, and design impacts on fluid flow and particle separation characteristics. Segregation was measured using image processing algorithms that demonstrate an efficiency of more than 55% for 1 µl volumes at populations exceeding 1 x 10^7. The technique supports a significant reduction in time over conventional processing in the separation and identification of particle groups, offering a potential reduction in the associated cost of the targeted procedure. The thesis has developed a model of quasi-steady wetting flow within the micro channel and identifies the forces across the system during post-wetting equalisation. The model and its underlying assumptions are validated empirically in microfabricated test structures through a novel Micro-Particle Image Velocimetry technique. The prototype devices do not require ancillary equipment or additional filtration media, and therefore offer fewer opportunities for sample contamination than conventional processing methods. The devices are disposable, with minimal reagent volumes and process waste. Optimal processing parameters and production methods are identified, along with improvements that could enhance performance in a number of identified potential applications.
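The image-based efficiency figure reduces to a recovered-fraction computation over particle counts in before and after frames; the counts below are hypothetical, chosen only to land near the reported >55%:

```python
def segregation_efficiency(target_in_filtrate, target_total):
    """Fraction of target particles recovered in the filtrate."""
    return target_in_filtrate / target_total

# Hypothetical counts from the image-processing step.
eff = segregation_efficiency(6.2e6, 1.1e7)
print(f"{eff:.0%}")  # 56%
```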
Abstract:
Ultrasonics offers the possibility of developing sophisticated fluid manipulation tools in lab-on-a-chip technologies. Here we demonstrate the ability to shape ultrasonic fields by using phononic lattices, patterned on a disposable chip, to carry out the complex sequence of fluidic manipulations required to detect the rodent malaria parasite Plasmodium berghei in blood. To illustrate the different tools that are available to us, we used acoustic fields to produce the required rotational vortices that mechanically lyse both the red blood cells and the parasitic cells present in a drop of blood. This procedure was followed by the amplification of parasitic genomic sequences using different acoustic fields and frequencies to heat the sample and perform a real-time PCR amplification. The system does not require lytic reagents or enrichment steps, making it suitable for further integration into lab-on-a-chip point-of-care devices. This acoustic sample preparation and PCR enables us to detect ca. 30 parasites in a microliter-sized blood sample, which is the same order of magnitude in sensitivity as lab-based PCR tests. Unlike other lab-on-a-chip methods, where the sample moves through channels, here we use our ability to shape the acoustic fields in a frequency-dependent manner to provide different analytical functions. The methods also provide a clear route toward the integration of PCR to detect pathogens in a single handheld system.
Abstract:
The purpose of this research is to develop design considerations for environmental monitoring platforms for the detection of hazardous materials using System-on-a-Chip (SoC) design. Design considerations focus on improving key areas such as: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These design considerations for environmental monitoring platforms using wireless sensor networks (WSN) are applied to the detection of methylmercury (MeHg) and the environmental parameters affecting its formation (methylation) and deformation (demethylation). The sampling methodology investigates a proof-of-concept for the monitoring of MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using Quartz Crystal Microbalance (QCM) sensors. This study focuses on the measurement of inorganic mercury (Hg) (e.g., Hg2+) and applies lessons learned to organic Hg (e.g., MeHg) detection. Context awareness of a WSN and its sampling strategies is enhanced by using spatial analysis techniques, namely geostatistical analysis (i.e., classical variography and ordinary point kriging), to help predict the phenomena of interest in unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, sampling rate and strategy, etc.). This methodology improves the precision of controllability by adding potentially significant information from unmonitored locations. Two types of sensors are investigated in this study for near-optimal placement in a WSN: (1) environmental (e.g., humidity, moisture, temperature, etc.) and (2) visual (e.g., camera) sensors. The near-optimal placement of environmental sensors is found using a strategy that minimizes the variance of the spatial analysis based on randomly chosen points representing the sensor locations.
Spatial analysis is performed using geostatistical analysis, and optimization is carried out with Monte Carlo analysis. Visual sensor placement is accomplished for omnidirectional cameras operating in a WSN using an optimal placement metric (OPM), which is calculated for each grid point based on line-of-sight (LOS) in a defined number of directions, taking known obstacles into consideration. Optimal areas for camera placement are determined as the areas generating the largest OPMs. Statistical behavior is examined using Monte Carlo analysis with a varying number of obstacles and cameras in a defined space.
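The OPM calculation can be sketched as a per-grid-point count of unobstructed line-of-sight rays; the grid representation, step-wise ray marching, and parameter values below are illustrative assumptions rather than the thesis's implementation:

```python
import math

def opm(point, obstacles, n_dirs=8, max_range=10):
    """Count directions with clear line-of-sight (LOS) from `point`,
    marching cell by cell and stopping at known obstacle cells."""
    x0, y0 = point
    clear = 0
    for k in range(n_dirs):
        ang = 2 * math.pi * k / n_dirs
        if all((round(x0 + r * math.cos(ang)), round(y0 + r * math.sin(ang)))
               not in obstacles for r in range(1, max_range + 1)):
            clear += 1
    return clear

# Two hypothetical obstacle cells east and north of the origin.
print(opm((0, 0), {(3, 0), (0, 3)}))  # prints 6 (two of eight rays blocked)
```

Grid points with the highest counts would then be the candidate camera locations; in the thesis this is combined with Monte Carlo analysis over obstacle and camera configurations.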
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if hardware acceleration is used on the elements that incur performance overheads. The concepts presented in this study can be applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is then converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture.
(3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed. The trade-offs among these three factors are compared and balanced, and different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow. Hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique. The system reaches a 2.8X performance improvement and saves 31.84% in energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% in energy consumption.
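The profile-then-accelerate workflow implies an Amdahl-style bound: the overall gain depends on how much of the runtime the hotspot occupies and how much faster the accelerator runs it. A sketch with hypothetical numbers (the 70% hotspot share and 10x accelerator speedup are assumptions, not the thesis's measurements):

```python
def overall_speedup(hotspot_fraction, accel_speedup):
    """Amdahl's law for offloading one profiled hotspot to hardware."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_speedup)

# Hypothetical profile: hotspot takes 70% of runtime, runs 10x faster on FPGA.
print(round(overall_speedup(0.70, 10.0), 2))  # 2.7
```

This is why profiling comes first in the workflow: accelerating a function that occupies little of the runtime cannot move the overall figure much.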
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still revolve around many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to a 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values.
This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is also proposed to calibrate thermal sensors across the different voltage levels.
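One simple realization of per-voltage-level calibration is a two-point linear correction stored per DVFS level; the raw readings and reference temperatures below are invented for illustration and are not the thesis's data:

```python
def two_point_calibration(raw_lo, raw_hi, ref_lo, ref_hi):
    """Gain/offset such that corrected = gain * raw + offset matches
    two reference temperatures."""
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    return gain, ref_lo - gain * raw_lo

# Hypothetical calibration table keyed by core voltage level (V).
cal = {
    0.9: two_point_calibration(48.0, 88.0, 45.0, 85.0),
    1.1: two_point_calibration(50.0, 92.0, 45.0, 85.0),
}

def corrected(raw, vlevel):
    """Apply the correction for the core's current voltage level."""
    gain, offset = cal[vlevel]
    return gain * raw + offset

print(corrected(88.0, 0.9))  # 85.0
```

Keying the table by voltage level captures the sensitivity to the core's operating point described above; a drifted sensor gets a different gain/offset pair at each level.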
Abstract:
Manipulation of single cells and particles is important to biology and nanotechnology. Our electrokinetic (EK) tweezers manipulate objects in simple microfluidic devices using gentle fluid and electric forces under vision-based feedback control. In this dissertation, I detail a user-friendly implementation of EK tweezers that allows users to select, position, and assemble cells and nanoparticles. This EK system was used to measure attachment forces between living breast cancer cells, trap single quantum dots with 45 nm accuracy, build nanophotonic circuits, and scan optical properties of nanowires. With a novel multi-layer microfluidic device, EK was also used to guide single microspheres along complex 3D trajectories. The schemes, software, and methods developed here can be used in many settings to precisely manipulate most visible objects, assemble objects into useful structures, and improve the function of lab-on-a-chip microfluidic systems.
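Vision-based feedback control of this kind is, at its core, a loop: track the object in the camera image, compute the position error, and map it to actuation (here, electrode drive). A minimal proportional-control sketch; the gain and the per-axis error-to-voltage mapping are assumptions, not the dissertation's controller:

```python
def feedback_step(target, measured, gain=0.5):
    """One vision-feedback iteration: proportional drive toward the target.
    Returns per-axis actuation (e.g., electrode voltage commands)."""
    return [gain * (t - m) for t, m in zip(target, measured)]

# Object tracked at (8, 9), target at (10, 5): push +x, pull -y.
print(feedback_step([10.0, 5.0], [8.0, 9.0]))  # [1.0, -2.0]
```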