970 results for microfluidic chip system


Relevance:

40.00%

Publisher:

Abstract:

Ultrasonics offers the possibility of developing sophisticated fluid manipulation tools in lab-on-a-chip technologies. Here we demonstrate the ability to shape ultrasonic fields, using phononic lattices patterned on a disposable chip, to carry out the complex sequence of fluidic manipulations required to detect the rodent malaria parasite Plasmodium berghei in blood. To illustrate the different tools available to us, we used acoustic fields to produce the rotational vortices required to mechanically lyse both the red blood cells and the parasitic cells present in a drop of blood. This was followed by amplification of parasitic genomic sequences, using different acoustic fields and frequencies to heat the sample and perform real-time PCR amplification. The system requires neither lytic reagents nor enrichment steps, making it suitable for further integration into lab-on-a-chip point-of-care devices. This acoustic sample preparation and PCR enables us to detect ca. 30 parasites in a microliter-sized blood sample, which is the same order of magnitude in sensitivity as lab-based PCR tests. Unlike other lab-on-a-chip methods, where the sample moves through channels, here we use our ability to shape the acoustic fields in a frequency-dependent manner to provide the different analytical functions. The method also provides a clear route toward integrating PCR-based pathogen detection into a single handheld system.
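As a hedged illustration of the thermal-cycling step, the sketch below drives generic PCR setpoints with a simple bang-bang controller. The functions read_temp() and set_power() are hypothetical stand-ins for the acoustic heating hardware, and the setpoints and hold times are textbook PCR values, not the paper's parameters.

import time

# (setpoint degC, hold seconds): denaturation, annealing, extension (generic values)
CYCLE = [(95.0, 15), (58.0, 30), (72.0, 30)]

def hold(setpoint, seconds, read_temp, set_power, tol=0.5):
    # Bang-bang hold: apply full heater power below the band, none above it,
    # and start the hold timer once the temperature first enters the band.
    end = None
    while end is None or time.time() < end:
        t = read_temp()
        set_power(1.0 if t < setpoint - tol else 0.0)
        if end is None and abs(t - setpoint) <= tol:
            end = time.time() + seconds
        time.sleep(0.1)

def run_pcr(read_temp, set_power, cycles=35):
    # One full PCR run: repeat the three-step temperature schedule.
    for _ in range(cycles):
        for setpoint, seconds in CYCLE:
            hold(setpoint, seconds, read_temp, set_power)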

Relevance:

40.00%

Publisher:

Abstract:

Today, most conventional surveillance networks are analog systems, which impose constraints such as heavy manpower and high-bandwidth requirements, and these constraints have become a barrier to the development of modern surveillance networks. This dissertation describes a digital surveillance network architecture based on the H.264 coding/decoding (CODEC) System-on-a-Chip (SoC) platform. The proposed architecture comprises three major layers: the software layer, the hardware layer, and the network layer. The contributions to the proposed architecture are as follows. (1) We implement an object recognition system and an object categorization system on the software layer by applying several Digital Image Processing (DIP) algorithms. (2) For a better compression ratio and higher-quality video transfer, we implement two new modules on the hardware layer of the H.264 CODEC core: a background elimination module and a Directional Discrete Cosine Transform (DDCT) module. (3) Furthermore, we introduce a Digital Signal Processor (DSP) subsystem on the main bus of the H.264 SoC platform as the major hardware support for our software architecture. We thus combine the software and hardware platforms into an intelligent surveillance node. Lab results show that the proposed surveillance node can dramatically save network resources such as bandwidth and storage capacity.
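To make the background-elimination idea concrete, here is a minimal software sketch (not the dissertation's hardware module): a running-average background model and a thresholded difference that zeroes static pixels, so the encoder spends bits only on moving objects. The function names and the alpha and threshold values are illustrative assumptions.

import numpy as np

def update_background(bg, frame, alpha=0.05):
    # Exponential moving average keeps a slowly adapting background model.
    return (1.0 - alpha) * bg + alpha * frame.astype(np.float32)

def eliminate_background(bg, frame, threshold=25.0):
    # Zero out background pixels so the encoder only sees moving objects.
    mask = np.abs(frame.astype(np.float32) - bg) > threshold
    return np.where(mask, frame, 0).astype(frame.dtype)

# Usage sketch: bg = first_frame.astype(np.float32); then, per frame:
#   out = eliminate_background(bg, frame); bg = update_background(bg, frame)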

Relevance:

40.00%

Publisher:

Abstract:

This research develops design considerations for environmental monitoring platforms for the detection of hazardous materials using System-on-a-Chip (SoC) design. The design considerations focus on improving three key areas: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These considerations for environmental monitoring platforms using wireless sensor networks (WSNs) are applied to the detection of methylmercury (MeHg) and of the environmental parameters affecting its formation (methylation) and degradation (demethylation).

The sampling methodology investigates a proof of concept for monitoring MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using Quartz Crystal Microbalance (QCM) sensors. This study focuses on the measurement of inorganic mercury (e.g., Hg2+) and applies the lessons learned to organic mercury (e.g., MeHg) detection.

Context awareness of a WSN and its sampling strategies is enhanced by using spatial analysis techniques, namely geostatistical analysis (i.e., classical variography and ordinary point kriging), to predict the phenomenon of interest at unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, sampling rate and strategy, etc.) and improves the precision of control by adding potentially significant information about unmonitored locations.

Two types of sensors are investigated in this study for near-optimal placement in a WSN: (1) environmental (e.g., humidity, moisture, temperature) and (2) visual (e.g., camera) sensors. Near-optimal placement of environmental sensors is found using a strategy that minimizes the variance of the spatial analysis over randomly chosen candidate sensor locations; the spatial analysis again employs geostatistics, and the optimization is carried out with Monte Carlo analysis. Visual sensor placement for omnidirectional cameras in a WSN uses an optimal placement metric (OPM), calculated for each grid point from line-of-sight (LOS) visibility in a defined number of directions, taking known obstacles into consideration. Optimal camera placement areas are those generating the largest OPMs, and the statistics are examined by Monte Carlo analysis with varying numbers of obstacles and cameras in a defined space.
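A hedged sketch of the line-of-sight placement metric described above: for each free grid cell, rays are cast in a fixed number of directions and visible cells are counted until an obstacle or the grid boundary is hit. The grid representation, ray stepping, and direction count are illustrative assumptions, not the study's parameters.

import numpy as np

def opm(grid, x, y, n_dirs=16, max_range=50):
    # grid: 2D boolean array, True where an obstacle blocks line-of-sight.
    # Score an omnidirectional camera at (x, y) by counting visible cells
    # along n_dirs equally spaced rays.
    score = 0
    for k in range(n_dirs):
        angle = 2 * np.pi * k / n_dirs
        dx, dy = np.cos(angle), np.sin(angle)
        for r in range(1, max_range):
            i, j = int(round(x + r * dx)), int(round(y + r * dy))
            if not (0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]) or grid[i, j]:
                break
            score += 1  # one more visible cell along this ray
    return score

def best_cells(grid, top=5):
    # Evaluate every free cell; cameras go where the metric is largest.
    scores = [(opm(grid, i, j), (i, j))
              for i in range(grid.shape[0])
              for j in range(grid.shape[1]) if not grid[i, j]]
    return sorted(scores, reverse=True)[:top]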

Relevance:

40.00%

Publisher:

Abstract:

Modern System-on-a-Chip (SoC) designs have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications are identified using profiling tools; hardware acceleration yields significant performance improvements for highly mathematical calculations and repeated functions. The performance of an SoC system can then be improved by accelerating the elements that incur performance overheads, and the concepts in this study can be applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform are the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform, and two types of hardware acceleration methods, a central bus design and a co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications such as performance, energy consumption, and resource costs are measured and analyzed; the trade-off among these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated against system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
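As a generic illustration of the profiling step, the sketch below ranks functions by cumulative time using Python's standard cProfile module; workload() is a hypothetical stand-in for the application under study (the dissertation itself profiles an H.264 CODEC core with hardware-oriented attributes such as cycles per loop).

import cProfile
import io
import pstats

def profile_hotspots(workload, top=10):
    # Run the workload under the profiler and report the functions with the
    # largest cumulative time; these are the candidates for a hardware accelerator.
    prof = cProfile.Profile()
    prof.enable()
    workload()
    prof.disable()
    out = io.StringIO()
    pstats.Stats(prof, stream=out).sort_stats("cumulative").print_stats(top)
    return out.getvalue()

As a sanity check on expected gains: if the hotspot consumes a fraction f of total runtime and the accelerator speeds it up locally by a factor s, Amdahl's law bounds the overall speedup at 1 / ((1 - f) + f / s), which is why accurate profiling matters before committing FPGA resources.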

Relevance:

40.00%

Publisher:

Abstract:

This paper details methodologies explored for the fast proofing of on-chip architectures for Circular Dichroism techniques. Flow-cell devices fabricated from UV-transparent quartz are used for these experiments. The complexity of flow-cell production typically results in lead times of six months from order to delivery; only at that point can the on-chip architecture be tested empirically and any required modifications determined, ready for the next six-month iteration. By using the proposed 3D printing and PDMS moulding techniques for fast proofing of on-chip architectures, the optimum design can be determined within a matter of hours before committing to quartz chip production.

Relevance:

40.00%

Publisher:

Abstract:

Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems: they need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than those used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might become congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system, so to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach.

Many-core systems can also draw enormous amounts of power, and if this power is not controlled properly, the system might be damaged. One way to manage power is to set a power budget for the system; but if this power is drawn by just a few of the many cores, those few cores become extremely hot and might be damaged. Due to the increase in power density, multiple thermal sensors are deployed across the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena, and these factors lead to a situation where thermal sensor values drift from their nominal values. This necessitates efficient calibration before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling, and thermal sensors located on the cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. This thesis therefore also proposes a general-purpose, software-based auto-calibration approach for thermal sensors across a range of voltage levels.
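In the spirit of the dynamic load balancing described above, the sketch below lets idle workers pull the next fault from a shared pool as soon as they finish, so faster cores naturally take on more work. It is illustrative host-side Python, not the Single-Chip Cloud Computer implementation, and simulate_fault() is a placeholder for injecting one fault and running the test set.

from multiprocessing import Pool

def simulate_fault(fault):
    # Placeholder for one fault-simulation job; returns the fault id and a result.
    return fault, sum(i * i for i in range(1000))

if __name__ == "__main__":
    faults = list(range(10_000))
    with Pool() as pool:
        # chunksize=1 means each idle worker grabs the next fault as soon as it
        # is free, which is the dynamic-balancing behaviour; large static chunks
        # would not adapt to cores running at different speeds.
        for fault, _ in pool.imap_unordered(simulate_fault, faults, chunksize=1):
            pass  # accumulate fault coverage here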

Relevance:

40.00%

Publisher:

Abstract:

This thesis describes two separate projects; a separate abstract is given for each here.

Optimization of acoustic streaming in microfluidic channels by SAWs

Surface acoustic waves (SAWs) actuated on flat piezoelectric substrates constitute a convenient and versatile tool for microfluidic manipulation due to the easy and versatile interfacing with microfluidic droplets and channels. The acoustic streaming effect can be exploited to drive fast streaming and pumping of fluids in microchannels and droplets (Shilton et al. 2014; Schmid et al. 2011), as well as size-dependent sorting of particles in centrifugal flows and vortices (Franke et al. 2009; Rogers et al. 2010). Although the theory describing acoustic streaming by SAWs is well understood, very little attention has been paid to optimizing SAW streaming through the correct selection of frequency. In this thesis, a finite element simulation of the fluid streaming driven in a microfluidic chamber by a SAW beam was constructed and verified against micro-PIV measurements of the flow in a fabricated device. It was found that there is an optimum frequency, dependent on the height and width of the chamber, that generates the fastest streaming. It is hoped this will serve as a design tool for those who want to optimally match SAW frequency to a particular microfluidic design.

An acoustic glucose sensor

Diabetes mellitus is a disease characterised by an inability to properly regulate blood glucose levels. In order to keep glucose levels under control, some diabetics require regular injections of insulin. Continuous monitoring of glucose has been demonstrated to improve the management of diabetes (Zick et al. 2007; Heinemann & DeVries 2014); however, patient uptake of continuous glucose monitoring systems is low due to the invasive nature of the current technology (Ramchandani et al. 2011). In this thesis, a novel way of monitoring glucose levels is proposed in which ultrasonic waves 'read' a subcutaneous glucose-sensitive implant that is only minimally invasive. The implant is an acoustic analogue of a Bragg stack with a 'defect' layer that acts as the sensing layer. A numerical study was performed on how physical changes in the sensing layer can be deduced by monitoring the reflection amplitude spectrum of ultrasonic waves reflected from the implant. Coupled modes between the skin and the sensing layer were found to be a potential source of error and drift in the measurement; it was found that increasing the number of layers in the stack minimizes this effect. A laboratory proof-of-concept system was developed using a glucose-sensitive hydrogel as the sensing layer, and it was possible to monitor the changing thickness and speed of sound of the hydrogel due to physiologically relevant changes in glucose concentration.
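A hedged numerical sketch of how such a reflection amplitude spectrum can be computed: the standard 1-D acoustic transfer-matrix method at normal incidence, with each layer described by its density, sound speed, and thickness. The stack values below are placeholders, not the thesis's implant design.

import numpy as np

def reflectance(freqs, layers, z_in, z_out):
    # layers: list of (density kg/m^3, speed m/s, thickness m).
    # z_in / z_out: acoustic impedances of the media before and after the stack.
    R = np.empty(len(freqs), dtype=float)
    for n, f in enumerate(freqs):
        M = np.eye(2, dtype=complex)
        for rho, c, d in layers:
            Z, phase = rho * c, 2 * np.pi * f / c * d
            # Transfer matrix of one layer for the (pressure, velocity) state vector.
            M = M @ np.array([[np.cos(phase), 1j * Z * np.sin(phase)],
                              [1j * np.sin(phase) / Z, np.cos(phase)]])
        A = M[0, 0] + M[0, 1] / z_out
        B = z_in * (M[1, 0] + M[1, 1] / z_out)
        R[n] = abs((A - B) / (A + B)) ** 2  # power reflection coefficient
    return R

# Example: water-like surroundings (~1.5 MRayl), alternating hard/soft layers
# with a thicker 'defect' sensing layer in the middle (all values assumed).
freqs = np.linspace(1e6, 10e6, 500)
period = [(1200, 2000, 100e-6), (1000, 1500, 150e-6)]
stack = period * 3 + [(1000, 1500, 300e-6)] + period * 3
R = reflectance(freqs, stack, z_in=1.5e6, z_out=1.5e6)

Shifts of the dips in R as the sensing layer swells or its sound speed changes are what such a read-out would track.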

Relevance:

40.00%

Publisher:

Abstract:

The present dissertation aimed to develop a new microfluidic system for a point-of-care hematocrit device. Stabilization of the microfluidic system via surfactant additives and via integration of semipermeable SnakeSkin® membranes was investigated; both methods stabilized the system by controlling electrolysis bubbles. The surfactant additives Triton X-100 and SDS promoted faster bubble detachment from the electrode surfaces by lowering surface tension and decreased gas bubble formation by increasing gas solubility. The SnakeSkin® membranes blocked bubbles from entering the microchannel, so the bubbles caused less disturbance to the electric field in the microchannel. Platinum electrode performance was improved by carbonizing the electrode surface using red blood cells: irreversibly adsorbed red blood cells lysed on the platinum surfaces and formed porous carbon layers during current response measurements. These carbon layers increased the platinum electrode surface area and thus improved electrode performance by 140%. The microfluidic system was simplified by employing a DC field so that it could serve as a platform for a point-of-care hematocrit device. The feasibility of the system for hematocrit determination was shown via current response measurements of red blood cell suspensions in phosphate buffered saline and in plasma media; a linear trendline of current response versus red blood cell concentration was obtained in both media. This research suggests that this new, simple microfluidic system is a promising route to an inexpensive and reliable point-of-care hematocrit device.
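To illustrate the calibration implied by that linear trendline, the sketch below fits current response against hematocrit and inverts the fit to read hematocrit from a measured current; all data values are made up for illustration, not taken from the dissertation.

import numpy as np

hct = np.array([20.0, 30.0, 40.0, 50.0, 60.0])     # hematocrit, % (assumed points)
current = np.array([8.1, 7.2, 6.4, 5.5, 4.7])       # current response, uA (hypothetical)

# Least-squares linear trendline: I = slope * Hct + intercept.
slope, intercept = np.polyfit(hct, current, 1)

def hematocrit_from_current(i_measured):
    # Invert the fitted line to estimate hematocrit from a measured current.
    return (i_measured - intercept) / slope

print(round(hematocrit_from_current(6.0), 1))  # estimated hematocrit in %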

Relevance:

40.00%

Publisher:

Abstract:

Combinatorial optimization is a complex engineering subject. Although each formulation depends on the nature of the problem, which differs in setup, design, constraints, and implications, establishing a unifying framework is essential. This dissertation investigates the unique features of three important optimization problems that span from small-scale design automation to large-scale power system planning: (1) feeder remote terminal unit (FRTU) planning that considers the cybersecurity of the secondary distribution network in the electrical distribution grid; (2) physical-level synthesis for microfluidic lab-on-a-chip; and (3) discrete gate sizing in very-large-scale integration (VLSI) circuits. First, an optimization technique based on cross entropy is proposed to handle FRTU deployment in the primary network while considering the cybersecurity of the secondary distribution network. Constrained by a monetary budget on the number of deployed FRTUs, the proposed algorithm identifies pivotal locations on a distribution feeder at which to install the FRTUs over different time horizons. Then, multi-scale optimization techniques are proposed for digital microfluidic lab-on-a-chip physical-level synthesis. The proposed techniques handle variation-aware lab-on-a-chip placement and routing co-design while satisfying all constraints and considering contamination and defects. Last, the first fully polynomial time approximation scheme (FPTAS) is proposed for the delay-driven discrete gate sizing problem, providing a theoretical perspective where existing works are heuristics with no performance guarantee. The intellectual contribution of the proposed methods establishes a novel paradigm bridging the gaps between professional communities.
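As a generic sketch of the cross-entropy technique applied to a budgeted placement problem such as FRTU deployment: keep a selection probability per candidate site, sample placements, and shift probability mass toward the elite samples. The objective score() and all parameters here are illustrative assumptions, not the dissertation's cybersecurity-aware model.

import numpy as np

rng = np.random.default_rng(0)
n_sites, budget = 40, 6
value = rng.random(n_sites)  # hypothetical per-site benefit for the toy objective

def score(placement):
    # Stand-in objective; higher is better. Replace with the real model.
    return value[placement].sum()

p = np.full(n_sites, 0.5)  # selection probability per candidate site
for _ in range(50):
    # Sample budget-sized placements weighted by the current probabilities.
    samples = [rng.choice(n_sites, size=budget, replace=False, p=p / p.sum())
               for _ in range(200)]
    elite = sorted(samples, key=score, reverse=True)[:20]  # top 10% of samples
    # Frequency of each site among the elite drives the smoothed CE update.
    freq = np.bincount(np.concatenate(elite), minlength=n_sites) / len(elite)
    p = 0.7 * p + 0.3 * freq
best = np.argsort(p)[-budget:]  # sites with the highest final selection probability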
