937 results for lab-on-a-chip systems
Abstract:
We present a new concept for rapid and fully portable Prostate Specific Antigen (PSA) measurement, termed "Lab-in-a-Briefcase", which integrates an affordable microfluidic ELISA platform utilising a melt-extruded fluoropolymer Micro Capillary Film (MCF) containing ten 200 μm internal diameter capillaries, a disposable multi-syringe aspirator (MSA), a sample tray pre-loaded with all required immunoassay reagents, and a portable film scanner for digital quantitation of the colorimetric signal. Each MSA can perform 10 replicate microfluidic immunoassays on 8 samples, allowing 80 measurements to be made in less than 15 minutes with semi-automated operation and no requirement for additional fluid-handling equipment. An assay was optimised for measurement of a clinically relevant range of PSA, from 0.9 to 60.0 ng/ml, in 15 minutes, with intra-assay CVs in the order of 5% when read using a consumer flatbed film scanner. The PSA assay performance in the MSA remained robust in the presence of undiluted or 1:2 diluted human serum or whole blood, and the matrix effect could be overcome simply by extending sample incubation times. The PSA "Lab-in-a-Briefcase" is particularly suited to low-resource health settings where diagnostic laboratories and automated immunoassay systems are not accessible, as it allows PSA measurement outside the laboratory using affordable equipment.
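As a hedged illustration of the quantitation step, the sketch below fits a four-parameter logistic (4PL) standard curve, a common choice for ELISA readouts, to scanner-derived signals and computes an intra-assay CV from replicate capillaries. All data, names and the 4PL choice are assumptions for illustration; the abstract does not specify the fitting model used.

```python
# Hypothetical sketch: fitting a 4PL standard curve to scanner-derived
# absorbance values and interpolating unknown PSA samples.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, a, b, ec50, d):
    """4PL dose-response: signal as a function of concentration c (ng/ml)."""
    return d + (a - d) / (1.0 + (c / ec50) ** b)

# Example standard curve spanning the clinically relevant 0.9-60 ng/ml range
# (signal values are illustrative).
stds = np.array([0.9, 1.9, 3.8, 7.5, 15.0, 30.0, 60.0])       # ng/ml
signal = np.array([0.08, 0.15, 0.27, 0.45, 0.68, 0.88, 1.02])  # scanner readout

params, _ = curve_fit(four_pl, stds, signal, p0=[0.05, 1.0, 10.0, 1.1], maxfev=10000)

def concentration(sig, a, b, ec50, d):
    """Invert the 4PL to recover concentration from a measured signal."""
    return ec50 * ((a - d) / (sig - d) - 1.0) ** (1.0 / b)

replicates = np.array([0.44, 0.46, 0.45])  # replicate capillary readouts, one sample
conc = concentration(replicates, *params)
cv = 100.0 * conc.std(ddof=1) / conc.mean()  # intra-assay CV (%)
print(f"PSA ≈ {conc.mean():.1f} ng/ml, CV = {cv:.1f}%")
```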
Abstract:
The main aim of this project is to develop an ESES lab on a full-scale system. The solar combisystem used is available most of the time, as it is only used twice a year for technical courses; at the moment, there are no other laboratories dedicated to combisystems. The experiments were designed to use the system as fully as possible, helping students apply the theoretical knowledge from the solar thermal course and become more familiar with solar system components. The method adopted to reach this aim was to carry out several test sequences on the system and, from their results, formulate a set of educational experiments. A few tests were carried out at the beginning of the project to understand the system and determine whether any additional measuring equipment was required. These test sequences ranged from simple energy draw-offs or collector loop controller response tests to more complicated ones, such as using the 'collector' heater to simulate the effect of the solar collector on the system. The test results were compared with and verified against theoretical data wherever relevant. The results of the experiment using the 'collector' heater instead of the collector were acceptable. Finally, the lab guide was developed based on the results of these experiments and on the experience gained while conducting them. The lab work covers the theory of solar systems in general and combisystems in particular.
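As a minimal sketch of the theory behind an energy draw-off test, the snippet below computes the delivered thermal energy from flow rate and temperature difference (Q = ṁ·cp·ΔT). The numbers are illustrative, not measurements from the ESES rig.

```python
# Energy bookkeeping for a draw-off or collector-loop test:
# Q = m_dot * cp * (T_out - T_in), integrated over the test duration.

CP_WATER = 4186.0  # J/(kg*K), specific heat of water

def draw_off_energy(flow_kg_s, t_out_c, t_in_c, duration_s):
    """Thermal energy delivered during a draw-off, in kWh."""
    power_w = flow_kg_s * CP_WATER * (t_out_c - t_in_c)
    return power_w * duration_s / 3.6e6

# Example: 0.1 kg/s draw-off, 45 C out / 10 C in, for 10 minutes (illustrative).
print(f"{draw_off_energy(0.1, 45.0, 10.0, 600):.2f} kWh")
```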
Abstract:
This paper describes a 3D virtual lab environment developed using OpenSim software integrated into Moodle. The Virtuald software tool was used to provide pedagogical support for the lab, enabling the creation of online texts and their delivery to students. The courses taught in this virtual lab follow a methodology based on the theory of multiple intelligences. Some results are presented.
Abstract:
Fully controlled liquid injection and flow in hydrophobic polydimethylsiloxane (PDMS) two-dimensional microchannel arrays, based on on-chip integrated, low-voltage-driven micropumps, are demonstrated. Our architecture exploits the surface-acoustic-wave (SAW) induced counterflow mechanism and the effect of nebulization anisotropies at crossing areas owing to laterally propagating SAWs. We show that by selectively exciting single or multiple SAWs, fluids can be drawn from their reservoirs and moved towards selected positions of a microchannel grid. Splitting of the main liquid flow is also demonstrated by exploiting multiple SAW beams. As a demonstrator, we show the simultaneous filling of two orthogonal microchannels. The present results show that SAW micropumps are good candidates for truly integrated on-chip fluidic networks, allowing liquid control in arbitrarily shaped two-dimensional microchannel arrays.
Abstract:
Background: Microfluidic systems are novel tools for studying cell-cell interactions in vitro. This project focuses on the development of a new microfluidic device to co-culture alveolar epithelial cells and mesenchymal stem cells in order to study the cellular interactions involved in healing the injured alveolar epithelium. Methods: Microfluidic systems in polydimethylsiloxane were fabricated by soft lithography. Alveolar A549 epithelial cells were seeded, and injury was induced by perfusion with media containing H2O2 or bleomycin for 6 or 18 hrs. Rat bone-marrow-derived stromal cells (BMSC) were then introduced into the system, and cell-cell interaction was studied over 24 hrs. Results: A successful co-culture of A549 alveolar epithelial cells and BMSC was achieved in the microfluidic system. The seeded alveolar epithelial cells and BMSC adhered to the bottom surface of the microfluidic device and proliferated under constant perfusion. Epithelial injury mimicking mechanisms seen in idiopathic pulmonary fibrosis was induced in the microchannels by perfusing with H2O2 or bleomycin. Migration of BMSC towards the injured epithelium was observed, as was cell-cell interaction between the two cell types. Conclusion: We demonstrate a novel microfluidic device aimed at showing interactions between different cell types in response to a changing microenvironment. We were also able to confirm interaction between the injured alveolar epithelium and BMSC, and showed that BMSC attempt to heal the injured epithelium.
Abstract:
We report on a lung-on-chip array that mimics the pulmonary parenchymal environment, including the thin alveolar barrier and the three-dimensional cyclic strain induced by breathing movements. A micro-diaphragm used to stretch the alveolar barrier is inspired by the in-vivo diaphragm, the main muscle responsible for inspiration. The design of this device aims not only to reproduce the in-vivo conditions found in the lung parenchyma as closely as possible, but also to make its handling easy and robust. An innovative concept based on the reversible bonding of the device is presented, enabling accurate control of the concentration of cells cultured on the membrane by providing easy access to both sides of the membrane. The functionality of the alveolar barrier could be restored by co-culturing epithelial and endothelial cells, which formed tight monolayers on each side of a thin, porous and stretchable membrane. We showed that cyclic stretch significantly affects the permeability properties of epithelial cell layers. Furthermore, we demonstrated that the strain influences the metabolic activity and the cytokine secretion of primary human pulmonary alveolar epithelial cells obtained from patients. These results demonstrate the potential of this device and confirm the importance of the mechanical strain induced by breathing in pulmonary research.
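For context on the permeability result, barrier permeability in such assays is commonly quantified by the apparent permeability coefficient, Papp = (dQ/dt)/(A·C0), for a tracer crossing the membrane; the sketch below applies it to invented numbers. This is a standard assay convention, not necessarily the exact readout used in this work.

```python
# Hedged sketch of apparent-permeability quantification for an epithelial
# barrier; all values are illustrative.

def apparent_permeability(dq_dt, area_cm2, c0):
    """Papp = (dQ/dt) / (A * C0), in cm/s.

    dq_dt    : tracer flux into the receiver compartment (ug/s)
    area_cm2 : membrane area (cm^2)
    c0       : initial donor concentration (ug/cm^3)
    """
    return dq_dt / (area_cm2 * c0)

# Illustrative comparison: static vs. cyclically stretched monolayer.
print(apparent_permeability(2.0e-4, 0.3, 100.0))  # static
print(apparent_permeability(6.5e-4, 0.3, 100.0))  # stretched (higher Papp)
```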
Abstract:
The development of PCB-integratable microsensors for monitoring chemical species is a goal in areas such as lab-on-a-chip analytical devices, diagnostic medicine and electronics for hand-held instruments, where device size is a major issue. Cellular phones have pervaded the world, and their usefulness has dramatically increased with the introduction of smartphones, thanks to a combination of amazing processing power in a confined space, geolocalization and manifold telecommunication features. Therefore, a number of physical and chemical sensors that add value to the terminal for health monitoring, personal safety (at home, at work) and, eventually, national security have started to be developed, capitalizing also on the huge number of circulating cell phones. The chemical-sensor-enabled "super" smartphone provides a unique (bio)sensing platform for monitoring airborne or waterborne hazardous chemicals or microorganisms, for both single-user and crowdsourced security applications; some of the latest developments are illustrated with a few examples. Moreover, we have recently achieved for the first time (covalent) functionalization of p- and n-GaN semiconductor surfaces with tuneable luminescent indicator dyes of the Ru-polypyridyl family, as a key step in the development of innovative microsensors for smartphone applications. Chemical "sensoring" of GaN-based blue LED chips with those indicators has also been achieved by plasma treatment of their surface, and the micrometer-sized devices have been tested for monitoring O2 in the gas phase to show their full functionality. Novel strategies to enhance sensor sensitivity, such as changing the length and nature of the siloxane buffer layer, are discussed in this paper.
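Luminescent O2 sensing with Ru-polypyridyl dyes is conventionally described by the Stern-Volmer relation, I0/I = 1 + KSV·pO2; the sketch below simply inverts it to recover pO2 from a measured intensity ratio. The KSV value is illustrative, not a calibration constant from this work.

```python
# Stern-Volmer readout for a luminescence-quenching O2 sensor.
KSV = 0.025  # 1/hPa, illustrative Stern-Volmer quenching constant

def o2_partial_pressure(i0, i, ksv=KSV):
    """Recover O2 partial pressure (hPa) from unquenched (i0) and measured (i) intensity."""
    return (i0 / i - 1.0) / ksv

# Example: luminescence halved relative to the O2-free signal.
print(f"pO2 ≈ {o2_partial_pressure(1000.0, 500.0):.0f} hPa")  # ~40 hPa
```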
Abstract:
Immunoprecipitation (IP) is one of the most widely used and selective techniques for protein purification. Here, a miniaturised, polymer-supported immunoprecipitation (µIP) method for the on-chip purification of proteins from complex mixtures is described. A 4 µl PDMS column functionalised with covalently bound antibodies was created and all critical aspects of the µIP protocol (antibody immobilisation, blocking of potential non-specific adsorption sites, sample incubation and washing conditions) were assessed and optimised. The optimised µIP method was used to obtain purified fractions of affinity-tagged protein from a bacterial lysate.
Abstract:
Nanoparticles offer an ideal platform for the delivery of small-molecule drugs, subunit vaccines and genetic constructs. Besides the necessity of a homogeneous size distribution, defined loading efficiencies and reasonable production and development costs, one of the major bottlenecks in translating nanoparticles into clinical application is the need for rapid, robust and reproducible development techniques. Within this thesis, microfluidic methods were investigated for the manufacturing, drug or protein loading and purification of pharmaceutically relevant nanoparticles. Initially, methods to prepare small liposomes were evaluated and compared to a microfluidics-directed nanoprecipitation method. To support the implementation of statistical process control, design-of-experiments models aided process robustness and validation for the methods investigated, gave an initial overview of the size ranges obtainable with each method, and allowed the advantages and disadvantages of each method to be evaluated. The lab-on-a-chip system enabled high-throughput vesicle manufacturing, a rapid process and a high degree of process control. To investigate this method further, cationic low-transition-temperature lipids, cationic bola-amphiphiles with delocalized charge centers, neutral lipids and polymers were used in the microfluidics-directed nanoprecipitation method to formulate vesicles. Whereas both the total flow rate (TFR) and the ratio of solvent to aqueous stream (flow rate ratio, FRR) were shown to influence vesicle size for high-transition-temperature lipids, FRR was found to be the most influential factor controlling the size of vesicles made from low-transition-temperature lipids and of polymer-based nanoparticles. The biological activity of the resulting constructs was confirmed by in vitro transfection of pDNA constructs using cationic nanoprecipitated vesicles. Design of experiments and multivariate data analysis revealed the mathematical relationship and significance of the factors TFR and FRR in the microfluidics process with respect to liposome size, polydispersity and transfection efficiency. Multivariate tools were used to cluster and predict specific in-vivo immune responses dependent on key liposome adjuvant characteristics upon delivery of a tuberculosis antigen in a vaccine candidate. The addition of a low-solubility model drug (propofol) in the nanoprecipitation method resulted in significantly higher solubilisation of the drug within the liposomal bilayer compared to the control method. The microfluidics method underwent scale-up by increasing the channel diameter and parallelising the mixers in a planar way, resulting in an overall 40-fold increase in throughput. Furthermore, microfluidic tools were developed based on microfluidics-directed tangential flow filtration, which allowed continuous manufacturing, purification and concentration of liposomal drug products.
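To make the design-of-experiments aspect concrete, the sketch below fits a two-factor regression of vesicle size against TFR and FRR with an interaction term, the pattern such models follow. The data and the resulting coefficients are invented for illustration and are not from the thesis.

```python
# Illustrative DoE-style regression: liposome size vs. total flow rate (TFR)
# and flow rate ratio (FRR), with interaction. All values are made up.
import numpy as np

tfr = np.array([1.0, 1.0, 3.0, 3.0, 2.0])            # ml/min
frr = np.array([1.0, 5.0, 1.0, 5.0, 3.0])            # aqueous:solvent ratio
size = np.array([160.0, 95.0, 150.0, 80.0, 110.0])   # z-average diameter, nm

# Model: size = b0 + b1*TFR + b2*FRR + b3*TFR*FRR
X = np.column_stack([np.ones_like(tfr), tfr, frr, tfr * frr])
coef, *_ = np.linalg.lstsq(X, size, rcond=None)
print(dict(zip(["b0", "bTFR", "bFRR", "bTFRxFRR"], coef.round(2))))
# A larger |bFRR| than |bTFR| would mirror the finding that FRR dominates
# vesicle size for low-transition-temperature lipids and polymer nanoparticles.
```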
Abstract:
Today, most conventional surveillance networks are based on analog systems, which have many constraints, such as manpower and high-bandwidth requirements; these have become a barrier to the development of today's surveillance networks. This dissertation describes a digital surveillance network architecture based on the H.264 coding/decoding (CODEC) System-on-a-Chip (SoC) platform. The proposed digital surveillance network architecture includes three major layers: the software layer, the hardware layer, and the network layer. The following outlines the contributions to the proposed digital surveillance network architecture. (1) We implement an object recognition system and an object categorization system on the software layer by applying several Digital Image Processing (DIP) algorithms. (2) For a better compression ratio and higher-quality video transfer, we implement two new modules on the hardware layer of the H.264 CODEC core: the background elimination module and the Directional Discrete Cosine Transform (DDCT) module. (3) Furthermore, we introduce a Digital Signal Processor (DSP) sub-system on the main bus of the H.264 SoC platform as the major hardware support for our software architecture. We thus combine the software and hardware platforms into an intelligent surveillance node. Lab results show that the proposed surveillance node can dramatically save network resources such as bandwidth and storage capacity.
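The background elimination module itself is hardware, but the underlying idea can be illustrated in software: suppress static scene content so only moving foreground needs full-quality encoding and transmission. The OpenCV sketch below is an assumed software analogue (the input path is a placeholder), not the dissertation's H.264 module.

```python
# Software analogue of background elimination for surveillance video.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")  # placeholder input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                      # foreground mask (moving objects)
    foreground = cv2.bitwise_and(frame, frame, mask=mask)
    # Only `foreground` would need full-quality compression/transmission.

cap.release()
```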
Abstract:
Today, modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions, so the performance of an SoC system can be improved if hardware acceleration is applied to the elements that incur performance overheads. The concepts discussed in this study can easily be applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following. (1) Software profiling methods are applied to the H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform, and two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications such as performance, energy consumption, and resource costs are measured and analyzed; the trade-off between these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
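The system-level gains reported here are bounded by how much of the runtime the profiled hotspot accounts for, i.e. Amdahl's law; the sketch below shows that arithmetic. The runtime fraction and per-hotspot speedup are illustrative, not figures from the thesis.

```python
# Amdahl's law: overall speedup when only part of the runtime is accelerated.

def amdahl(accelerated_fraction, hotspot_speedup):
    """Overall speedup when `accelerated_fraction` of runtime is sped up by `hotspot_speedup`."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / hotspot_speedup)

# E.g. if the profiled hotspot is 90% of runtime and the FPGA accelerator
# runs it 20x faster, the whole system speeds up by roughly:
print(f"{amdahl(0.90, 20.0):.1f}X")  # ~6.9X
```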
Abstract:
Many-core systems are emerging from the need for more computational power and better power efficiency. However, many issues still surround many-core systems: they need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors, the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system; but if this power is drawn by just a few of the many cores, those few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed across the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from their nominal values, which necessitates efficient calibration techniques before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling, and thermal sensors located on the cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is also proposed to calibrate thermal sensors over different voltage ranges.
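The load-balancing idea can be sketched in a few lines: rather than statically partitioning faults across cores, idle workers pull the next fault from a shared queue at runtime, so faster cores are never left waiting. The snippet below is an assumed illustration of that pattern (names and workload invented), not the SCC implementation.

```python
# Dynamic load balancing via a shared work queue for fault simulation.
from multiprocessing import Pool

def simulate_fault(fault_id):
    """Stand-in for simulating one fault; cost varies per fault."""
    return fault_id, sum(i * i for i in range(1000 + 97 * (fault_id % 50)))

if __name__ == "__main__":
    faults = range(10_000)
    with Pool(processes=48) as pool:
        # chunksize=1 makes scheduling fully dynamic: each worker grabs a new
        # fault as soon as it finishes the previous one.
        results = pool.imap_unordered(simulate_fault, faults, chunksize=1)
        simulated = sum(1 for _ in results)
    print(f"simulated {simulated} faults")
```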
Abstract:
Surface heat treatment of glasses and ceramics using CO2 lasers has attracted the attention of several researchers around the world due to its impact on technological applications such as lab-on-a-chip devices, diffraction gratings and microlenses. Microlens fabrication on a glass surface has been studied mainly due to its importance in optical devices (fiber coupling, CCD signal enhancement, etc.). The goal of this work is to present a systematic study of the conditions for microlens fabrication, along with the viability of using microlens arrays recorded on the glass surface as bidimensional codes for product identification. This would allow the production of codes without any residues (such as the fine powder generated by laser ablation) and with resistance to aggressive environments, such as sterilization processes. The microlens arrays were fabricated using a continuous-wave CO2 laser focused on the surface of flat commercial soda-lime silicate glass substrates. The fabrication conditions were studied as a function of laser power, heating time and microlens profile. A He-Ne laser was used as the light source in a qualitative experiment testing the viability of using the microlenses as bidimensional codes.
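As a hedged sketch of how a microlens array could be read as a bidimensional code, the snippet below samples the mean spot intensity in each grid cell of an image and thresholds it to a bit. Grid geometry, threshold and data are assumptions for illustration; the paper only reports a qualitative He-Ne readout.

```python
# Decoding a microlens array as a 2D binary code from a grayscale image.
import numpy as np

def decode_grid(image, rows, cols, threshold):
    """Return a rows x cols bit matrix from mean intensity per grid cell."""
    h, w = image.shape
    bits = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            cell = image[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            bits[r, c] = int(cell.mean() > threshold)  # lens present -> bright focused spot
    return bits

# Example on a synthetic 8x8 code image with one bright "lens" cell.
rng = np.random.default_rng(0)
img = rng.integers(0, 60, size=(80, 80)).astype(float)
img[10:20, 10:20] += 200.0
print(decode_grid(img, 8, 8, threshold=100.0))
```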
Abstract:
While multimedia data, image data in particular, is an integral part of most websites and web documents, our quest for information is so far still restricted to text-based search. To explore the World Wide Web more effectively, especially its rich repository of truly multimedia information, we face a number of challenging problems. Firstly, there is the ambiguous and highly subjective nature of defining image semantics and similarity. Secondly, multimedia data can come from highly diversified sources, as a result of automatic image capturing and generation processes. Finally, multimedia information exists in decentralised sources over the Web, making it difficult to use conventional content-based image retrieval (CBIR) techniques for effective and efficient search. In this special issue, we present a collection of five papers on visual and multimedia information management and retrieval, addressing some aspects of these challenges. These papers have been selected from the conference proceedings (Kluwer Academic Publishers, ISBN 1-4020-7060-8) of the Sixth IFIP 2.6 Working Conference on Visual Database Systems (VDB6), held in Brisbane, Australia, on 29-31 May 2002.