16 results for lab on a chip


Relevance:

100.00%

Publisher:

Abstract:

Increasing use of nanomaterials in consumer products and biomedical applications creates the possibility of intentional or unintentional exposure of humans and the environment. Beyond the physiological limit, nanomaterial exposure can induce toxicity in humans. It is difficult to define the toxicity of nanoparticles to humans, as it varies with nanomaterial composition, size, surface properties, and the target organ or cell line. Traditional tests for nanomaterial toxicity assessment are mostly based on bulk colorimetric assays. In many studies, nanomaterials have been found to interfere with the assay dye and produce false results, and these assays usually require several hours or days to yield results. Therefore, there is a clear need for alternative tools that can provide accurate, rapid, and sensitive measurements for initial nanomaterial screening. Recent advances in single-cell studies have revealed cell properties not observed in traditional bulk assays. A complex phenomenon like nanotoxicity may become clearer when studied at the single-cell level, including with small colonies of cells. Advances in lab-on-a-chip techniques have played a significant role in drug discovery and biosensor applications; however, they have rarely been explored for nanomaterial toxicity assessment. We presented such a cell-integrated, chip-based approach that provided a quantitative and rapid readout of cell health through electrochemical measurements. Moreover, the novel design of the device presented in this study was capable of capturing and analyzing cells at the single-cell and small cell-population level. We examined the change in exocytosis (i.e., neurotransmitter release) properties of a single PC12 cell when exposed to CuO and TiO2 nanoparticles. We found both nanomaterials to interfere with the cells' exocytosis function. We also studied, for the first time, the whole-cell response of a single cell and a small cell population simultaneously in real time. The presented study can serve as a reference for future research on nanotoxicity assessment toward developing miniature, simple, and cost-effective tools for fast, quantitative measurements at high throughput. The lab-on-a-chip device and measurement techniques utilized in the present work can also be applied to assess the toxicity of other nanoparticles.
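As a hedged illustration of the kind of quantitative readout such electrochemical exocytosis measurements provide, the sketch below flags spike-like events in a synthetic amperometric current trace and reports an event rate; the threshold rule, noise level, and spike shape are invented assumptions, not the authors' actual analysis pipeline.

```python
# Minimal sketch (not the authors' pipeline): flag exocytosis-like spikes in an
# amperometric current trace by thresholding above baseline noise, then report
# an event rate. The trace, noise level, and threshold are all invented.
import numpy as np

def detect_spikes(current_pA, threshold_sd=4.0):
    """Return sample indices where the current rises above baseline + k*SD."""
    baseline = np.median(current_pA)
    noise_sd = np.std(current_pA[current_pA < np.percentile(current_pA, 90)])
    above = current_pA > baseline + threshold_sd * noise_sd
    # Count only rising edges so a multi-sample spike is counted once.
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

rng = np.random.default_rng(0)
fs, duration_s = 10_000, 60                     # assumed 10 kHz sampling, 60 s
trace = rng.normal(0.0, 1.0, fs * duration_s)   # baseline noise, in pA
for start in rng.choice(trace.size - 50, 120, replace=False):
    trace[start:start + 50] += 15.0             # crude rectangular "spike" events

events = detect_spikes(trace)
print(f"{events.size} events detected, rate = {events.size / duration_s:.2f} /s")
```

Comparing the event rate before and after a hypothetical nanoparticle exposure would be one simple way to quantify interference with exocytosis.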

Relevance:

100.00%

Publisher:

Abstract:

Today, most conventional surveillance networks are based on analog systems, which impose constraints such as heavy manpower and high-bandwidth requirements. These constraints have become a barrier to the development of today's surveillance networks. This dissertation describes a digital surveillance network architecture based on an H.264 coding/decoding (CODEC) System-on-a-Chip (SoC) platform. The proposed digital surveillance network architecture includes three major layers: the software layer, the hardware layer, and the network layer. The following outlines the contributions to the proposed digital surveillance network architecture. (1) We implement an object recognition system and an object categorization system on the software layer by applying several Digital Image Processing (DIP) algorithms. (2) For a better compression ratio and higher-quality video transfer, we implement two new modules on the hardware layer of the H.264 CODEC core: a background elimination module and a Directional Discrete Cosine Transform (DDCT) module. (3) Furthermore, we introduce a Digital Signal Processor (DSP) sub-system on the main bus of the H.264 SoC platform as the major hardware support for our software architecture. We thus combine the software and hardware platforms into an intelligent surveillance node. Lab results show that the proposed surveillance node can dramatically save network resources such as bandwidth and storage capacity.
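The dissertation's background elimination is a hardware module inside the H.264 CODEC; as a purely conceptual, software-level stand-in, the sketch below shows the general idea with a running-average background model and a difference threshold, with all parameters assumed.

```python
# Illustrative software sketch of background elimination via a running-average
# background model; the thesis implements this idea as a hardware module inside
# the H.264 CODEC, so this is only a conceptual stand-in with assumed parameters.
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential moving average of past frames."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25.0):
    """Pixels differing from the background by more than `threshold` are foreground."""
    return np.abs(frame - background) > threshold

# Toy 240x320 grayscale frames standing in for a camera feed.
rng = np.random.default_rng(1)
bg = np.full((240, 320), 120.0)
frames = [np.clip(bg + rng.normal(0, 2, bg.shape), 0, 255) for _ in range(10)]
frames[-1][100:140, 150:200] += 80.0  # a bright "object" enters the last frame

model = frames[0].astype(float)
for f in frames[:-1]:
    model = update_background(model, f)
mask = foreground_mask(model, frames[-1])
print("foreground pixels:", int(mask.sum()))  # only the object region should fire
```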

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this research is to develop design considerations for environmental monitoring platforms for the detection of hazardous materials using System-on-a-Chip (SoC) design. The design considerations focus on improving key areas such as: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These design considerations for environmental monitoring platforms using wireless sensor networks (WSN) are applied to the detection of methylmercury (MeHg) and the environmental parameters affecting its formation (methylation) and deformation (demethylation). The sampling methodology investigates a proof-of-concept for the monitoring of MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using Quartz Crystal Microbalance (QCM) sensors. This study focuses on the measurement of inorganic mercury (Hg) (e.g., Hg2+) and applies lessons learned to organic Hg (e.g., MeHg) detection. Context awareness of the WSN and its sampling strategies is enhanced by using spatial analysis techniques, namely geostatistical analysis (i.e., classical variography and ordinary point kriging), to help predict the phenomena of interest at unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, sampling rate and strategy, etc.). This methodology improves the precision of controllability by adding potentially significant information about unmonitored locations. Two types of sensors are investigated in this study for near-optimal placement in a WSN: (1) environmental (e.g., humidity, moisture, temperature, etc.) and (2) visual (e.g., camera) sensors. The near-optimal placement of environmental sensors is found using a strategy that minimizes the variance of the spatial analysis based on randomly chosen points representing the sensor locations; spatial analysis is performed using geostatistics, and optimization is carried out with Monte Carlo analysis. Visual sensor placement is accomplished for omnidirectional cameras operating in a WSN using an optimal placement metric (OPM), which is calculated for each grid point based on line-of-sight (LOS) in a defined number of directions, taking known obstacles into consideration. Optimal areas for camera placement are determined as the areas generating the largest OPMs. Statistical analysis is performed using Monte Carlo analysis with a varying number of obstacles and cameras in a defined space.
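A hedged sketch of the Monte Carlo placement idea described above: random sensor layouts are proposed and the one with the lowest average spatial-prediction "variance" is kept. The variance here is only proxied by a spherical semivariogram evaluated at the distance to the nearest sensor, which is a simplification of the full ordinary kriging system, and all variogram parameters, field size, and sensor counts are assumptions.

```python
# Hedged sketch: Monte Carlo search over random sensor layouts, scored by a
# simplified kriging-variance proxy (spherical semivariogram at the distance to
# the nearest sensor). Variogram parameters and field geometry are assumed.
import numpy as np

def spherical_semivariance(h, nugget=0.1, sill=1.0, rng_a=30.0):
    """Classical spherical variogram model; parameters are assumed, not fitted."""
    h = np.asarray(h, dtype=float)
    gamma = nugget + (sill - nugget) * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3)
    return np.where(h >= rng_a, sill, gamma)

def layout_score(sensors, grid):
    """Mean semivariance at each grid point with respect to its nearest sensor."""
    d = np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=2)
    return spherical_semivariance(d.min(axis=1)).mean()

rng = np.random.default_rng(42)
xs, ys = np.meshgrid(np.arange(0, 100, 5), np.arange(0, 100, 5))
grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

best_layout, best_score = None, np.inf
for _ in range(2000):                               # Monte Carlo trials
    candidate = rng.uniform(0, 100, size=(8, 2))    # 8 sensors in a 100x100 field
    score = layout_score(candidate, grid)
    if score < best_score:
        best_layout, best_score = candidate, score
print("best mean semivariance:", round(best_score, 3))
```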

Relevance:

100.00%

Publisher:

Abstract:

Today, modern System-on-a-Chip (SoC) systems have grown rapidly due to increased processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can provide significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if hardware acceleration is used to accelerate the elements that incur performance overheads. The concepts presented in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the targeted application is identified using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, a central bus design and a co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
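To illustrate why profiling hotspots before acceleration matters, the short sketch below does an Amdahl's-law style estimate of overall speedup when a profiled fraction of runtime is offloaded to a hardware accelerator. The hotspot fraction, accelerator speedup, and bus overhead are assumed numbers, not figures from the dissertation.

```python
# Back-of-the-envelope sketch (assumed numbers) of why accelerating a profiled
# hotspot pays off: Amdahl's-law style estimate of system speedup when a
# fraction of execution time is offloaded to a hardware accelerator.
def system_speedup(hotspot_fraction, accel_speedup, offload_overhead=0.0):
    """Overall speedup when `hotspot_fraction` of runtime runs `accel_speedup`x
    faster, plus a fixed offload overhead expressed as a fraction of runtime."""
    accelerated = hotspot_fraction / accel_speedup
    return 1.0 / ((1.0 - hotspot_fraction) + accelerated + offload_overhead)

# e.g., a hotspot that is 70% of runtime, accelerated 10x, with 2% bus overhead:
print(round(system_speedup(0.70, 10.0, 0.02), 2))   # about 2.56x overall
```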

Relevance:

100.00%

Publisher:

Abstract:

Integrated on-chip optical platforms enable high performance in applications such as high-speed all-optical or electro-optical switching, wide-range multi-wavelength on-chip lasing for communication, and lab-on-chip optical sensing. Integrated optical resonators with high quality factors are a fundamental component in these applications. Periodic photonic structures (photonic crystals) exhibit a photonic band gap, which can be used to manipulate photons in a way similar to the control of electrons in semiconductor circuits. This makes it possible to create structures with radically improved optical properties. Compared to silicon, polymers offer a potentially inexpensive material platform with ease of fabrication at low temperatures and a wide range of material properties when doped with nanocrystals and other molecules. In this research work, several polymer periodic photonic structures are proposed and investigated to improve optical confinement and optical sensing. We developed a fast numerical method for calculating the quality factor of a photonic crystal slab (PhCS) cavity. The calculation is implemented via a 2D-FDTD method followed by a post-process for the cavity's surface radiation energy loss; computational time is saved and good accuracy is demonstrated compared to other published methods. We also proposed a novel slot-PhCS concept that enhanced the energy density 20-fold compared to a traditional PhCS. It combines the advantages of the slot waveguide and the photonic crystal to localize a high energy density in the low-index material. This property could increase the interaction between light and materials embedded with nanoparticles, such as quantum dots, for active device development. We also demonstrated a wide bandgap based on a one-dimensional waveguide distributed Bragg reflector with high coupling to optical waveguides, enabling easy integration with other optical components on the chip. A flexible polymer (SU8) grating waveguide is proposed as a force sensor; the proposed sensor can monitor nN-range forces through its spectral shift. Finally, quantum-dot-doped SU8 polymer structures are demonstrated by optimizing spin coating and UV exposure. Clear patterns with strong emission spectra proved the compatibility of the fabrication process for applications in optical amplification and lasing.
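One common way to extract a cavity quality factor from a time-domain simulation is a ring-down fit of the stored energy, U(t) = U0 exp(-ω0 t / Q). The sketch below shows only that generic idea on synthetic data; it is not the thesis's 2D-FDTD post-processing of surface radiation loss, and the wavelength and Q value are assumed.

```python
# Hedged sketch of a generic ring-down Q extraction from a time-domain (e.g.,
# FDTD) simulation: fit the exponential decay of stored energy,
# U(t) = U0 * exp(-omega0 * t / Q). Not the thesis's own post-processing method.
import numpy as np

def q_from_energy_decay(t, energy, wavelength_nm):
    """Estimate Q from a linear fit of ln(U) vs t; assumes single-mode decay."""
    omega0 = 2 * np.pi * 2.998e8 / (wavelength_nm * 1e-9)   # rad/s
    slope, _ = np.polyfit(t, np.log(energy), 1)             # slope = -omega0/Q
    return -omega0 / slope

# Synthetic ring-down: lambda = 1550 nm, Q = 5000 (recovered by the fit below).
t = np.linspace(0, 20e-12, 500)                      # 20 ps of simulated decay
omega0 = 2 * np.pi * 2.998e8 / 1550e-9
energy = np.exp(-omega0 * t / 5000)
print(round(q_from_energy_decay(t, energy, 1550)))   # -> 5000
```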

Relevance:

80.00%

Publisher:

Abstract:

Two men in lab. On verso: Lasansky

Relevance:

40.00%

Publisher:

Abstract:

Due to the increasing demand for high-power, reliable, miniaturized energy storage devices, the development of micro-supercapacitors, or electrochemical micro-capacitors, has attracted much attention in recent years. This dissertation investigates several strategies to develop on-chip micro-supercapacitors with high power and energy density. Micro-supercapacitors based on interdigitated carbon micro-electrode arrays are fabricated through the carbon microelectromechanical systems (C-MEMS) technique, which is based on the carbonization of patterned photoresist. To improve the capacitive behavior, electrochemical activation is performed on the carbon micro-electrode arrays. The developed micro-supercapacitors show specific capacitances as high as 75 mF cm-2 at a scan rate of 5 mV s-1 after electrochemical activation for 30 minutes, and the capacitance loss is less than 13% after 1000 cyclic voltammetry (CV) cycles. These results indicate that electrochemically activated C-MEMS micro-electrode arrays are promising candidates for on-chip electrochemical micro-capacitor applications. The energy density of the micro-supercapacitors was further improved by conformal coating of polypyrrole (PPy) on the C-MEMS structures. In these micro-devices the three-dimensional (3D) carbon microstructures serve as current collectors for the high-energy-density PPy electrodes. Electrochemical characterization of these micro-supercapacitors shows that they can deliver a specific capacitance of about 162.07 mF cm-2 and a specific power of 1.62 mW cm-2 at a 20 mV s-1 scan rate. Addressing the need for high-power micro-supercapacitors, the application of graphene as an electrode material for micro-supercapacitors was also investigated. The present study suggests a novel method to fabricate graphene-based micro-supercapacitors with thin-film or in-plane interdigital electrodes. The fabricated micro-supercapacitors show exceptional frequency response and power handling performance and can effectively charge and discharge at rates as high as 50 V s-1. CV measurements show that the specific capacitance of the micro-supercapacitor based on reduced graphene oxide and carbon nanotube composites is 6.1 mF cm-2 at a scan rate of 0.01 V s-1. At a very high scan rate of 50 V s-1, a specific capacitance of 2.8 mF cm-2 (stack capacitance of 3.1 F cm-3) is recorded. This unprecedented performance can potentially broaden the future applications of micro-supercapacitors.
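For context on how an areal specific capacitance is commonly computed from a cyclic voltammogram, the sketch below integrates |I| over one full cycle and normalizes by twice the voltage window and the electrode footprint. The triangular sweep, current, and area are synthetic values, not data from the dissertation.

```python
# Hedged sketch: areal specific capacitance from a CV trace,
# C = (integral of |I| dt over a full cycle) / (2 * voltage window * area).
# The sweep and electrode area below are synthetic, not the dissertation's data.
import numpy as np

def areal_capacitance_mF_cm2(time_s, voltage_V, current_A, area_cm2):
    """Integrate |I| over a full CV cycle and normalize to mF/cm^2."""
    charge_C = np.trapz(np.abs(current_A), time_s)
    window_V = voltage_V.max() - voltage_V.min()
    return 1e3 * charge_C / (2 * window_V) / area_cm2

# Ideal 10 mF/cm^2 capacitor (0.01 cm^2 footprint) swept at 50 mV/s over 0.8 V.
t = np.linspace(0.0, 32.0, 400)                 # one full cycle takes 32 s
v = 0.8 - np.abs(0.8 - 0.05 * t)                # triangular sweep 0 -> 0.8 -> 0 V
i = np.where(t < 16.0, 1.0, -1.0) * 5e-6        # ideal +/- 5 uA capacitive current
print(round(areal_capacitance_mF_cm2(t, v, i, 0.01), 2), "mF/cm^2")   # about 10.0
```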

Relevance:

30.00%

Publisher:

Abstract:

Since the mid-1990s, the United States has experienced a shortage of scientists and engineers, declining numbers of students choosing these fields as majors, and low student success and retention rates in these disciplines. Learning theorists, educational researchers, and practitioners believe that learning environments can be created so that an improvement in the number of students who complete courses successfully could be attained (Astin, 1993; Magolda & Terenzini, n.d.; O'Banion, 1997). Learning communities do this by providing high expectations, academic and social support, feedback during the entire educational process, and involvement with faculty, other students, and the institution (Ketcheson & Levine, 1999). A program evaluation of an existing learning community of science, mathematics, and engineering majors was conducted to determine the extent to which the program met its goals and was effective from faculty and student perspectives. The program provided laptop computers, peer tutors, supplemental instruction with and without computer software, small class size, opportunities for contact with specialists in selected career fields, a resource library, and Peer-Led Team Learning. During the two years the project has existed, success, retention, and next-course continuation rates were higher than in traditional courses. Faculty and student interviews indicated there were many affective accomplishments as well. Success and retention rates for one learning community class (n = 27) and one traditional class (n = 61) in chemistry were collected and compared using Pearson chi-square procedures (p = .05). No statistically significant difference was found between the two groups. Data from an open-ended student survey about how specific elements of their course experiences contributed to success and persistence were analyzed by coding the responses and comparing the learning community and traditional classes. Substantial differences were found in their perceptions about the lecture, the lab, other supports used for the course, contact with other students, helping them reach their potential, and their recommendation about the course to others. Because of the limitation of small sample size, these differences are reported in descriptive terms.
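For readers who want to reproduce this style of comparison, the snippet below runs a Pearson chi-square test on a hypothetical 2x2 pass/fail table with SciPy; only the class sizes (n = 27 and n = 61) come from the abstract, and the pass counts are invented for illustration.

```python
# Hedged sketch of a Pearson chi-square comparison of success rates between a
# learning-community class and a traditional class. Pass counts are made up.
from scipy.stats import chi2_contingency

#                 passed  did_not_pass
table = [[20, 7],          # learning-community chemistry class (n = 27)
         [40, 21]]         # traditional chemistry class        (n = 61)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
# As in the study, p > .05 would indicate no statistically significant difference.
```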

Relevance:

30.00%

Publisher:

Abstract:

The kaon electroproduction reaction H(e, e′K+)Λ was studied as a function of the four-momentum transfer, Q2, for different values of the virtual photon polarization parameter. Electrons and kaons were detected in coincidence in the two High Resolution Spectrometers (HRS) at Jefferson Lab. Data were taken at electron beam energies ranging from 3.4006 to 5.7544 GeV. The kaons were identified using combined time-of-flight information and two aerogel Čerenkov detectors for particle identification. For values of Q2 ranging from 1.90 to 2.35 GeV2/c2, the center-of-mass cross sections for the Λ hyperon were determined for 20 kinematics, and the longitudinal (σL) and transverse (σT) terms were separated using the Rosenbluth separation technique. Comparisons between the available models and the data have been studied; the comparison supports the t-channel dominance behavior of kaon electroproduction. All models seem to underpredict the transverse cross section. An estimate of the kaon form factor has been explored by determining the sensitivity of the separated cross sections to variations of the kaon electromagnetic (EM) form factor. From the comparison between models and data we can conclude that interpreting the data using the Regge model is quite sensitive to the particular choice of EM form factors. The data from the E98-108 experiment extend the range of the available kaon electroproduction cross section data to an unexplored region of Q2 where no separations have ever been performed.
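The Rosenbluth separation rests on the fact that, at fixed kinematics, the measured cross section can be written as σ = σT + ε·σL, so a straight-line fit versus the virtual-photon polarization ε yields σL as the slope and σT as the intercept. The sketch below illustrates only this generic fit on invented numbers; the ε settings, cross-section values, and uncertainties are not from the E98-108 data.

```python
# Hedged sketch of a Rosenbluth (L/T) separation: sigma = sigma_T + eps*sigma_L,
# so a weighted straight-line fit in eps gives sigma_L (slope) and sigma_T
# (intercept). All numbers below are purely illustrative.
import numpy as np

def rosenbluth_separate(eps, sigma, sigma_err):
    """Weighted linear fit of sigma vs eps; returns (sigma_T, sigma_L)."""
    weights = 1.0 / np.asarray(sigma_err)        # np.polyfit expects 1/sigma
    slope, intercept = np.polyfit(eps, sigma, 1, w=weights)
    return intercept, slope                      # sigma_T, sigma_L

eps = np.array([0.35, 0.55, 0.75])               # assumed epsilon settings
true_T, true_L = 150.0, 80.0                     # invented values, nb/sr
sigma = true_T + eps * true_L + np.array([1.0, -0.5, 0.8])   # small "noise"
sigma_T, sigma_L = rosenbluth_separate(eps, sigma, [2.0, 2.0, 2.0])
print(f"sigma_T = {sigma_T:.1f} nb/sr, sigma_L = {sigma_L:.1f} nb/sr")
```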

Relevance:

30.00%

Publisher:

Abstract:

This study compares the effects of cooperative delivery (CD) and individual delivery (ID) of integrated learning system (ILS) instruction in mathematics on the achievement, attitudes, and behaviors of adult (16-21 years) high school students (grades 9-13). The study was conducted in an urban adult high school in Miami-Dade County Public Schools using a pre-test/post-test design. Achievement was measured using the Test of Adult Basic Education (TABE) by CTB/McGraw-Hill and Compass Learning. An attitudinal survey measured attitudes toward mathematics, the computer-related lessons, and group activities. Behavior was assessed using computer lab observations. Two-way analyses of variance (ANOVA) were conducted on achievement (TABE and Compass) by group and time (pre and post). A one-way ANOVA was conducted on the overall attitude by group for the five components (i.e., mathematics content, delivery/computers, cooperation, partners, and self-efficacy), and a one-way ANOVA was conducted on on-task behavior by group. The results of the study revealed that CD and ID students working on mathematics activities delivered by the ILS performed similarly on the TABE achievement tests. The CD-ILS students had significantly better overall mathematics attitudes than the ID-ILS students, and the ID-ILS group was on task significantly more than the CD-ILS group. This study concludes that the regularity and period of time over which the ILS is used may prove to be important variables, although there were insufficient data to fully investigate the impact of models of use. Additionally, a minimum amount of time-on-system is necessary before gains in numeracy can become apparent, and increasing exposure to the system may have beneficial effects on learning.
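A hedged sketch of the kind of two-way ANOVA described above (achievement by group and time) using statsmodels; the scores below are randomly generated stand-ins, not the study's TABE data, and the group sizes are assumed.

```python
# Hedged sketch of a two-way ANOVA on achievement by group (CD vs ID) and time
# (pre vs post). Scores are random stand-ins, not the study's data.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "group": np.repeat(["CD", "ID"], 40),
    "time": np.tile(np.repeat(["pre", "post"], 20), 2),
    "score": rng.normal(500, 40, 80),
})
df.loc[df["time"] == "post", "score"] += 15     # a small invented post-test gain

model = ols("score ~ C(group) * C(time)", data=df).fit()
print(anova_lm(model, typ=2))                   # main effects and interaction
```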

Relevance:

30.00%

Publisher:

Abstract:

Silicon photonics is a very promising technology for future low-cost, high-bandwidth optical telecommunication applications down to the chip level. This is due to the high degree of integration, high optical bandwidth, and large speed, coupled with the development of a wide range of integrated optical functions. Silicon-based microring resonators are a key building block that can be used to realize many optical functions such as switching, multiplexing, demultiplexing, and detection of optical waves. The ability to tune the resonances of microring resonators is highly desirable in many of their applications. In this work, the study and application of a thermally wavelength-tunable photonic switch based on a silicon microring resonator is presented. Devices with 10 μm diameter were systematically studied and used in the design. The resonance wavelength was tuned by a thermally induced refractive index change using a specially designed local micro-heater. While thermo-optic tuning has moderate speed compared with electro-optic and all-optical tuning, silicon's high thermo-optic coefficient allows a much wider wavelength tuning range to be realized. The device design was verified and optimized by optical and thermal simulations, and the fabrication and characterization of the device were also carried out. The microring resonator has a measured FSR of ∼18 nm, a FWHM in the range 0.1-0.2 nm, and a Q of around 10,000. A wide tuning range (>6.4 nm) was achieved with the switch, which enables dense wavelength division multiplexing (DWDM) with a channel spacing of 0.2 nm. The time response of the switch was measured to be on the order of 10 μs, with a low power consumption of ∼11.9 mW/nm. The measured results are in agreement with the simulations. Important applications of the tunable photonic switch were demonstrated in this work: 1×4 and 4×4 reconfigurable photonic switches were implemented by using multiple switches with a common bus waveguide. The results suggest the feasibility of on-chip DWDM for the development of large-scale integrated photonics. Using the tunable switch for output wavelength control, a fiber laser was demonstrated with an Erbium-doped fiber amplifier as the gain medium; for the first time, this approach integrated on-chip silicon photonic wavelength control.
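As a quick consistency sketch of the figures quoted above, the loaded quality factor follows from Q = λ0/FWHM, and a rough thermo-optic resonance shift can be estimated as Δλ ≈ λ0·(dn/dT)·ΔT/ng. The thermo-optic coefficient, group index, and temperature rise below are assumed textbook values, not measurements from the thesis.

```python
# Order-of-magnitude check of the quoted resonator figures, using assumed
# (textbook) silicon thermo-optic coefficient, group index, and heater dT.
lambda0_nm = 1550.0
fwhm_nm = 0.155                      # middle of the reported 0.1-0.2 nm range
print("Q =", round(lambda0_nm / fwhm_nm))            # about 10,000, as reported

dn_dT = 1.8e-4                       # /K, typical silicon thermo-optic coefficient
n_g = 4.2                            # assumed group index of the SOI waveguide
dT = 100.0                           # K of assumed local heating
print("tuning =", round(lambda0_nm * dn_dT * dT / n_g, 2), "nm")   # about 6.6 nm
```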

Relevance:

30.00%

Publisher:

Abstract:

Fueled by the increasing human appetite for high computing performance, semiconductor technology has now marched into the deep sub-micron era. As transistor size keeps shrinking, more and more transistors are integrated into a single chip. This has tremendously increased the power consumption and heat generation of IC chips. The rapidly growing heat dissipation greatly increases packaging and cooling costs and adversely affects the performance and reliability of a computing system. In addition, it reduces the processor's life span and may even crash the entire computing system. Therefore, dynamic thermal management (DTM) is becoming a critical problem in modern computer system design. Extensive theoretical research has been conducted to study the DTM problem; however, most of it is based on theoretically idealized assumptions or simplified models. While these models and assumptions help to greatly simplify a complex problem and make it theoretically manageable, practical computer systems and applications must deal with many practical factors and details beyond these models or assumptions. The goal of our research was to develop a test platform that can be used to validate theoretical results on DTM under well-controlled conditions, to identify the limitations of existing theoretical results, and to develop new and practical DTM techniques. This dissertation details the background and our research efforts in this endeavor. Specifically, we first developed a customized test platform based on an Intel desktop. We then tested a number of related theoretical works and examined their limitations in a practical hardware environment. With these limitations in mind, we developed a new reactive thermal management algorithm for single-core computing systems to optimize throughput under a peak temperature constraint. We further extended our research to a multicore platform and developed an effective proactive DTM technique for throughput maximization on multicore processors based on task migration and dynamic voltage and frequency scaling. The significance of our research lies in the fact that it complements the current extensive theoretical research in dealing with increasingly critical thermal problems and enables the continuous evolution of high-performance computing systems.
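As a purely conceptual illustration of a reactive DTM loop (not the dissertation's algorithm), the sketch below samples a stubbed core temperature each control period and steps a DVFS level down when a peak-temperature threshold is exceeded, stepping back up once the core has cooled; the frequency levels, thresholds, and sensor model are assumed.

```python
# Conceptual sketch of a reactive DTM loop with DVFS throttling. The thermal
# sensor and operating points are stubs with assumed values, not real hardware.
import random

FREQ_LEVELS_GHZ = [1.2, 1.8, 2.4, 3.0]      # assumed DVFS operating points
T_PEAK, T_SAFE = 85.0, 75.0                 # degrees C, assumed thresholds

def read_core_temp(freq_ghz):
    """Stub for a thermal sensor: hotter at higher frequency, plus noise."""
    return 55.0 + 11.0 * freq_ghz + random.uniform(-2.0, 2.0)

def reactive_dtm(steps=20):
    level = len(FREQ_LEVELS_GHZ) - 1        # start at the highest frequency
    for step in range(steps):
        temp = read_core_temp(FREQ_LEVELS_GHZ[level])
        if temp > T_PEAK and level > 0:
            level -= 1                      # too hot: one DVFS step down
        elif temp < T_SAFE and level < len(FREQ_LEVELS_GHZ) - 1:
            level += 1                      # cooled down: one DVFS step up
        print(f"step {step:2d}: {temp:5.1f} C -> {FREQ_LEVELS_GHZ[level]:.1f} GHz")

random.seed(3)
reactive_dtm()
```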
