957 results for On-Chip Balun
Abstract:
Due to copyright restrictions, this item is only available for consultation at Aston University Library and Information Services with prior arrangement.
Abstract:
Today, most conventional surveillance networks are based on analog systems, which impose constraints such as heavy manpower and high-bandwidth requirements; these constraints have become a barrier to the development of today's surveillance networks. This dissertation describes a digital surveillance network architecture based on an H.264 coding/decoding (CODEC) System-on-a-Chip (SoC) platform. The proposed architecture comprises three major layers: the software layer, the hardware layer, and the network layer. The contributions to the proposed digital surveillance network architecture are as follows. (1) We implement an object recognition system and an object categorization system on the software layer by applying several Digital Image Processing (DIP) algorithms. (2) To achieve a better compression ratio and higher-quality video transfer, we implement two new modules on the hardware layer of the H.264 CODEC core: a background elimination module and a Directional Discrete Cosine Transform (DDCT) module. (3) Furthermore, we introduce a Digital Signal Processor (DSP) sub-system on the main bus of the H.264 SoC platform as the major hardware support for our software architecture. We thus combine the software and hardware platforms into an intelligent surveillance node. Lab results show that the proposed surveillance node can dramatically reduce the use of network resources such as bandwidth and storage capacity.
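The abstract does not detail the background elimination algorithm of the hardware module; the following is only a minimal frame-differencing sketch (Python with NumPy, with an assumed threshold value) of one common way static background pixels can be suppressed before encoding, not the authors' implementation.

import numpy as np

def eliminate_background(frame, background, threshold=15):
    """Zero out pixels that differ little from a reference background frame.

    frame, background: 2-D uint8 grayscale arrays of equal shape.
    threshold: absolute-difference level (hypothetical value) below which a
    pixel is treated as static background and removed before encoding.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff >= threshold          # True where motion (foreground) is present
    return np.where(mask, frame, 0).astype(np.uint8)

# Usage: only foreground regions are passed on to the H.264 encoder,
# which reduces the bits spent on static scene content.
background = np.zeros((480, 640), dtype=np.uint8)   # e.g. a long-term average frame
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
foreground_only = eliminate_background(frame, background)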
Abstract:
The purpose of this research is to establish design considerations for environmental monitoring platforms for the detection of hazardous materials using System-on-a-Chip (SoC) design. The design considerations focus on improving three key areas: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These design considerations for environmental monitoring platforms using wireless sensor networks (WSN) are applied to the detection of methylmercury (MeHg) and the environmental parameters affecting its formation (methylation) and breakdown (demethylation).

The sampling methodology investigates a proof of concept for the monitoring of MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using Quartz Crystal Microbalance (QCM) sensors. This study focuses on the measurement of inorganic mercury (Hg) (e.g., Hg2+) and applies the lessons learned to organic Hg (e.g., MeHg) detection.

Context awareness of a WSN and its sampling strategies is enhanced by using spatial analysis techniques, namely geostatistical analysis (i.e., classical variography and ordinary point kriging), to help predict the phenomena of interest at unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, sampling rate and strategy, etc.). This methodology improves the precision of control by adding potentially significant information about unmonitored locations.

Two types of sensors are investigated in this study for near-optimal placement in a WSN: (1) environmental (e.g., humidity, moisture, temperature) and (2) visual (e.g., camera) sensors. The near-optimal placement of environmental sensors is found using a strategy that minimizes the variance of the spatial analysis over randomly chosen points representing the sensor locations; the spatial analysis uses geostatistics and the optimization uses Monte Carlo analysis. Visual sensor placement for omnidirectional cameras operating in a WSN uses an optimal placement metric (OPM), calculated for each grid point from line-of-sight (LOS) in a defined number of directions while taking known obstacles into consideration; optimal camera locations are the areas generating the largest OPMs (see the sketch below). The statistics of this placement are examined using Monte Carlo analysis with a varying number of obstacles and cameras in a defined space.
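The abstract names an optimal placement metric based on line-of-sight in a defined number of directions with known obstacles, but not its exact form; the sketch below (Python, with assumed names, grid representation and a simple ray-march for LOS) shows one plausible way such a metric could be computed per grid point. The thesis' actual OPM may differ.

import math

def opm(grid, x, y, n_directions=8, max_range=20):
    """Count how many of n_directions from (x, y) have an unobstructed line of sight.

    grid: 2-D list of 0 (free) / 1 (obstacle); a higher OPM means a better camera spot.
    The exact metric form is an assumption made for illustration.
    """
    if grid[y][x] == 1:
        return 0
    rows, cols = len(grid), len(grid[0])
    score = 0
    for k in range(n_directions):
        angle = 2 * math.pi * k / n_directions
        dx, dy = math.cos(angle), math.sin(angle)
        blocked = False
        for step in range(1, max_range + 1):
            cx = int(round(x + dx * step))
            cy = int(round(y + dy * step))
            if not (0 <= cx < cols and 0 <= cy < rows):
                break                      # ray left the area: treated as clear
            if grid[cy][cx] == 1:
                blocked = True             # a known obstacle blocks this direction
                break
        if not blocked:
            score += 1
    return score

# Usage: evaluate every free grid point and place cameras where the OPM is largest.
grid = [[0] * 30 for _ in range(30)]
grid[10][5:25] = [1] * 20                  # a wall of obstacles
best = max((opm(grid, x, y), x, y) for y in range(30) for x in range(30))
print("best OPM and location:", best)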
Abstract:
Increasing use of nanomaterials in consumer products and biomedical applications creates the possibility of intentional or unintentional exposure of humans and the environment. Beyond the physiological limit, nanomaterial exposure can induce toxicity in humans. It is difficult to define the toxicity of nanoparticles to humans because it varies with nanomaterial composition, size, surface properties, and the target organ or cell line. Traditional tests for nanomaterial toxicity assessment are mostly based on bulk colorimetric assays. In many studies, nanomaterials have been found to interfere with the assay dye and produce false results, and these assays usually require several hours or days to yield results. There is therefore a clear need for alternative tools that can provide an accurate, rapid, and sensitive measure for initial nanomaterial screening. Recent advances in single-cell studies have revealed cell properties not observed in traditional bulk assays. A complex phenomenon like nanotoxicity may become clearer when studied at the single-cell level, including in small colonies of cells. Advances in lab-on-a-chip techniques have played a significant role in drug discovery and biosensor applications, but they have rarely been explored for nanomaterial toxicity assessment. We presented such a cell-integrated, chip-based approach that provides a quantitative and rapid readout of cell health through electrochemical measurements. Moreover, the novel design of the device presented in this study was capable of capturing and analyzing cells at the single-cell and small-cell-population level. We examined the change in exocytosis (i.e., neurotransmitter release) properties of a single PC12 cell when exposed to CuO and TiO2 nanoparticles, and found both nanomaterials to interfere with the cell's exocytosis function. We also studied the whole-cell response of a single cell and a small cell population simultaneously in real time for the first time. The presented study can serve as a reference for future nanotoxicity research aimed at developing miniature, simple, and cost-effective tools for fast, quantitative, high-throughput measurements. The lab-on-a-chip device and measurement techniques used in the present work can be applied to the toxicity assessment of other nanoparticles as well.
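Exocytosis is conventionally quantified from amperometric current traces as discrete release spikes; the abstract does not describe the analysis used, so the following is only an assumed, minimal threshold-based spike-counting sketch (Python/NumPy, hypothetical parameter values), not the study's actual processing.

import numpy as np

def count_exocytosis_spikes(current, fs, n_sigma=5.0):
    """Count spikes in an amperometric trace by thresholding above baseline noise.

    current: 1-D array of measured current (e.g. pA) sampled at fs (Hz).
    n_sigma: threshold in units of the baseline standard deviation (assumed value).
    Returns the number of threshold crossings (rising edges).
    """
    baseline = np.median(current)
    noise = np.std(current[: int(0.5 * fs)])       # assume the first 0.5 s is quiet
    above = current > baseline + n_sigma * noise
    rising_edges = np.flatnonzero(np.diff(above.astype(int)) == 1)
    return len(rising_edges)

# Usage: fewer or smaller spikes after nanoparticle exposure would indicate
# interference with the cell's exocytosis (neurotransmitter release) function.
fs = 10_000.0
t = np.arange(0, 2.0, 1 / fs)
trace = 0.5 * np.random.randn(t.size)              # synthetic noisy baseline
trace[5000:5050] += 30.0                           # one synthetic release spike
print(count_exocytosis_spikes(trace, fs))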
Abstract:
Technological developments in biomedical microsystems are opening up new opportunities to improve healthcare procedures. Swallowable diagnostic sensing capsules are an example of these. In none of the existing diagnostic sensing capsules is the sensor's first-level packaging achieved via the Flip Chip Over Hole (FCOH) method using Anisotropic Conductive Adhesive (ACA). In a capsule application with a direct access sensor (DAS), ACA not only provides the electrical interconnection but simultaneously seals the interconnect area and the underlying electronics. The development showed that ACA FCOH was a viable option for the DAS interconnection. Adequate adhesive formed a strong joint that withstood a shear stress of 120 N/mm2 and the compressive force of 6 N required to secure the final sensor assembly in place before encapsulation. Electrical characterization of the ACA joint in a fluid environment showed that the ACA became saturated with moisture and that ions in the solution actively contributed to the leakage current, characterized by a varying rate of change of conductance. Long-term hygrothermal aging of the ACA joint showed that a thermal strain of 0.004 and a hygroscopic strain of 0.0052 were present and resulted in a fatigue-like process. In-vitro tests showed that high temperature and acidity had a deleterious effect on the ACA and its joint. They also showed that ACA contact joints positioned at around or over 1 mm would survive the gastrointestinal (GI) fluids and would be able to provide a reliable contact during the entire 72 h of GI transit time. A final capsule demonstrator was achieved by successfully integrating the DAS, the battery, and the final foldable circuitry into a glycerine capsule. Final capsule soak tests suggested that the silicone-encapsulated system could survive the 72 h gut transit.
Abstract:
A new approach for the integration of dual contactless conductivity and amperometric detection with an electrophoresis microchip system is presented. The PDMS layer with the embedded channels was reversibly sealed to a thin glass substrate (400 µm), on top of which a palladium electrode had been previously fabricated, enabling end-channel amperometric detection. The thin glass substrate also served as a physical wall between the separation channel and the sensing copper electrodes for contactless conductivity detection. The latter were not integrated in the microfluidic device but fabricated on an independent plastic substrate, allowing a simpler and more cost-effective fabrication of the chip. PDMS/glass chips with only contactless conductivity detection were first characterized in terms of sensitivity, efficiency and reproducibility. The separation efficiency of this system was found to be similar or slightly superior to other systems reported in the literature. The simultaneous determination of ionic and electroactive species was illustrated by the separation of peroxynitrite degradation products, i.e. NO3- (non-electroactive) and NO2- (electroactive), using hybrid PDMS/glass chips with dual contactless conductivity and amperometric detection. While both ions were detected by contactless conductivity detection with good efficiency, NO2- was also simultaneously detected amperometrically with a significant enhancement in sensitivity compared to contactless conductivity detection.
Abstract:
Surface heat treatment of glasses and ceramics using CO2 lasers has attracted the attention of several researchers around the world due to its impact on technological applications such as lab-on-a-chip devices, diffraction gratings and microlenses. Microlens fabrication on a glass surface has been studied mainly due to its importance in optical devices (fiber coupling, CCD signal enhancement, etc.). The goal of this work is to present a systematic study of the conditions for microlens fabrication, along with the viability of using microlens arrays recorded on the glass surface as two-dimensional codes for product identification. This would allow the production of codes without any residue (like the fine powder generated by laser ablation) and with resistance to aggressive environments, such as sterilization processes. The microlens arrays were fabricated using a continuous-wave CO2 laser focused on the surface of flat commercial soda-lime silicate glass substrates. The fabrication conditions were studied in terms of laser power, heating time and microlens profiles. A He-Ne laser was used as a light source in a qualitative experiment to test the viability of using the microlenses as two-dimensional codes.
Abstract:
This paper reports a method for the analysis of secondary metabolites stored in glandular trichomes, employing negative-ion 'chip-based' nanospray tandem mass spectrometry. The analysis of glandular trichomes from Lychnophora ericoides, a plant endemic to the Brazilian 'cerrado' and used in traditional medicine as an anti-inflammatory and analgesic agent, led to the identification of five flavonoids (chrysin, pinocembrin, pinostrobin, pinobanksin and 3-O-acetylpinobanksin) by direct infusion of the glandular trichome extracts into the nanospray ionisation source. None of these flavonoids is oxidised at ring B, which results in fragmentation pathways that differ from those of the oxidised 3,4-dihydroflavonoids already described in the literature. The absence from the glandular trichomes of the anti-inflammatory and antioxidant di-C-glucosylflavone vicenin-2, or of any other flavonoid glycosides, was also demonstrated. The 'chip-based' nanospray QqTOF apparatus is a new, fast and useful tool for the identification of secondary metabolites stored in glandular trichomes, which can be useful for chemotaxonomic studies based on glandular trichome metabolites. Copyright (C) 2008 John Wiley & Sons, Ltd.
Abstract:
We describe a novel method of fabricating atom chips that are well suited to the production and manipulation of atomic Bose–Einstein condensates. Our chip was created using a silver foil and simple micro-cutting techniques without the need for photolithography. It can sustain larger currents than conventional chips, and is compatible with the patterning of complex trapping potentials. A near-pure Bose–Einstein condensate of 4 × 10⁴ ⁸⁷Rb atoms has been created in a magnetic microtrap formed by currents through wires on the chip. We have observed the fragmentation of atom clouds in close proximity to the silver conductors. The fragmentation has different characteristic features to those seen with copper conductors.
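The abstract does not give the trap geometry, but a wire-based microtrap of this kind is conventionally described by balancing the field of a current-carrying wire against a uniform transverse bias field; the standard textbook relations are stated here only as background, not as this chip's specific design:

\[
  B_{\mathrm{wire}}(r) = \frac{\mu_0 I}{2\pi r},
  \qquad
  r_0 = \frac{\mu_0 I}{2\pi B_{\mathrm{bias}}},
\]

where I is the wire current and r_0 is the distance from the wire at which the two fields cancel, setting the height of the trap above the chip surface.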
Abstract:
We investigate the design of free-space optical interconnects (FSOIs) based on arrays of vertical-cavity surface-emitting lasers (VCSELs), microlenses, and photodetectors. We explain the effect of the modal structure of a multimode VCSEL beam on the performance of an FSOI with microchannel architecture. A Gaussian-beam diffraction model is used in combination with experimentally obtained, spectrally resolved VCSEL beam profiles to determine the optical channel crosstalk and the signal-to-noise ratio (SNR) of the system. The dependence of the SNR on the design parameters of an FSOI is investigated. We found that the presence of higher-order modes reduces the SNR and the maximum feasible interconnect distance. We also found that the positioning of the VCSEL array relative to the transmitter microlens has a significant impact on the SNR and the maximum feasible interconnect distance. Our analysis shows that departing from the traditional confocal system yields several advantages, including extended interconnect distance and/or improved SNR. The results show that FSOIs based on multimode VCSELs can be efficiently utilized in both chip-level and board-level interconnects. (C) 2002 Optical Society of America.
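The Gaussian-beam diffraction model referred to above is presumably built on the standard free-space beam-expansion relation, given here only in its textbook form with fundamental-mode waist w_0 and wavelength λ:

\[
  w(z) = w_0 \sqrt{1 + \left(\frac{z}{z_R}\right)^{2}},
  \qquad
  z_R = \frac{\pi w_0^{2}}{\lambda}.
\]

Crosstalk grows once the beam radius w(z) at the detector plane becomes comparable to the microchannel pitch, which is why higher-order modes, with their faster effective divergence, reduce the feasible interconnect distance.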
Abstract:
Location-aware mobile applications are increasingly common nowadays. However, these applications typically retrieve location only from a mobile device's GPS chip, which means they do not work properly indoors or in denser environments. To provide location information everywhere, a pedestrian Inertial Navigation System (INS) is typically used, but such systems can have a large estimation error because, to keep the system wearable, they use low-cost, low-power sensors. In this work a pedestrian INS is proposed in which force sensors are combined with accelerometer data to better detect the stance phase of the human gait cycle, which leads to improvements in location estimation (a sketch of such a stance detector is given below). Besides sensor fusion, an information fusion architecture is proposed, based on information from GPS and several inertial units placed on the pedestrian's body, that is used to learn the pedestrian's gait behaviour and correct the inertial sensor errors in real time, thus improving location estimation.
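The abstract does not give the detection rule; the following is only a minimal sketch (Python, with hypothetical thresholds) of stance-phase detection that fuses accelerometer magnitude with a foot-mounted force sensor, the point at which a zero-velocity update is typically applied in pedestrian INS.

import numpy as np

G = 9.81  # m/s^2

def stance_mask(accel, force, acc_tol=0.4, force_min=50.0):
    """Flag samples belonging to the stance phase of the gait cycle.

    accel: (N, 3) accelerometer samples in m/s^2.
    force: (N,) foot force-sensor readings in N.
    A sample is 'stance' when the acceleration magnitude is close to gravity
    (little foot motion) AND the force sensor confirms ground contact.
    Thresholds are illustrative, not the thesis' calibrated values.
    """
    acc_mag = np.linalg.norm(accel, axis=1)
    near_gravity = np.abs(acc_mag - G) < acc_tol
    on_ground = force > force_min
    return near_gravity & on_ground

# Usage: during stance samples a zero-velocity update (ZUPT) resets the
# velocity estimate, bounding the drift of the low-cost inertial sensors.
accel = np.tile([0.0, 0.0, G], (100, 1)) + 0.05 * np.random.randn(100, 3)
force = np.full(100, 300.0)                 # foot flat on the ground
print(stance_mask(accel, force).mean())     # fraction of samples flagged as stance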
Abstract:
Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences through steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by high-throughput DNA sequencing. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unrecognized artifacts of the method. The sequence-tag distribution in the genome is not uniform, and we have found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence-tag accumulations create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some important biological properties of Nuclear Factor I (NFI) DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with DNA wrapped around the nucleosome. We also found multiple loci that indicate possible chromatin-barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
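The filtering and sampling steps are described only at a high level; the sketch below (Python, with assumed data structures and an arbitrary cut-off, not the thesis' algorithm) shows one way to drop tags mapping to implausibly deep hot-spot positions and then draw an unbiased random subsample of the remaining tags.

import random
from collections import Counter

def filter_and_sample(tag_positions, max_per_position=10, sample_size=100_000, seed=0):
    """Drop tags mapping to suspicious hot-spot positions, then subsample uniformly.

    tag_positions: list of (chromosome, position) tuples, one per sequence tag.
    max_per_position: cut-off above which a position is treated as an artifactual
    hot-spot (an assumed threshold, not a calibrated value).
    """
    counts = Counter(tag_positions)
    kept = [t for t in tag_positions if counts[t] <= max_per_position]
    rng = random.Random(seed)
    if len(kept) <= sample_size:
        return kept
    return rng.sample(kept, sample_size)

# Usage: repeated subsamples of equal size allow peak-calling results to be
# compared across datasets sequenced to different depths.
tags = [("chr1", 1_000)] * 500 + [("chr1", p) for p in range(2_000, 3_000)]
print(len(filter_and_sample(tags, max_per_position=10, sample_size=200)))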
Abstract:
Technological limitations and power constraints are resulting in high-performance parallel computing architectures based on large numbers of high-core-count processors. Commercially available processors now have 8 and 16 cores, and experimental platforms, such as the many-core Intel Single-chip Cloud Computer (SCC), provide much higher core counts. These trends present new challenges to HPC applications, including programming complexity and the need for extreme energy efficiency. In this work, we first investigate the power behavior of scientific PGAS application kernels on the SCC platform and explore opportunities and challenges for power management within the PGAS framework. Results obtained via empirical evaluation of Unified Parallel C (UPC) applications on the SCC platform under different constraints show that, for specific operations, the potential for energy savings in PGAS is large, and that power/performance trade-offs can be effectively managed using a cross-layer approach. We investigate cross-layer power management using PGAS language extensions and runtime mechanisms that manipulate power/performance trade-offs. Specifically, we present the design, implementation and evaluation of such a middleware for application-aware cross-layer power management of UPC applications on the SCC platform. Finally, based on our observations, we provide a set of recommendations and insights that can be used to support similar power management for PGAS applications on other many-core platforms.
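The UPC language extensions and the SCC's frequency/voltage island controls are not reproduced in the abstract; purely as an illustration of the cross-layer idea (application-level hints driving runtime frequency selection), here is a small Python sketch with invented hint names and frequency levels, not the authors' middleware interface.

# Illustrative only: invented hint names and frequency levels; the real system
# uses UPC language extensions and the SCC's voltage/frequency island controls.
FREQ_LEVELS_MHZ = {"compute_bound": 800, "memory_bound": 533, "communication_bound": 320}

class PowerRuntime:
    def __init__(self):
        self.current_mhz = max(FREQ_LEVELS_MHZ.values())

    def phase_hint(self, kind):
        """Application-level hint: lower the island frequency for phases whose
        performance is not limited by the cores (e.g. waiting on remote data)."""
        target = FREQ_LEVELS_MHZ.get(kind, self.current_mhz)
        if target != self.current_mhz:
            self.current_mhz = target
            print(f"set island frequency to {target} MHz for {kind} phase")

runtime = PowerRuntime()
runtime.phase_hint("communication_bound")   # e.g. before a bulk remote-transfer phase
runtime.phase_hint("compute_bound")         # restore full frequency for the kernel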
Abstract:
Interorganizational cooperation, through joint efforts among various actors, allows high-tech companies to complement their resources, especially in R&D projects. Collaborative projects have been identified in many studies as an important strategy for producing complex products and services in uncertain and competitive environments. Thus, this research aims to deepen the understanding of how a collaborative R&D project develops in a high-technology industry. To achieve this objective, the R&D project of the first microcontroller in the Brazilian semiconductor industry was defined as the object of analysis. This empirical choice is justified by the uniqueness of the case, which also brought a diversity of actors and a level of resource complementarity that were significant to the project's success. To identify the actors involved and the main forms of interorganizational coordination used in this project, interviews were conducted, a questionnaire was administered, and other documents related to the project were analyzed. The results show a network of nine actors and their roles in the interorganizational collaboration process, as well as the forms of social and temporal overlapping used in the coordination of collective efforts. Focusing on the mechanisms of temporal and social integration highlighted throughout the study, this paper proposes including R&D projects in the typology for interorganizational projects proposed by Jones and Lichtenstein (2008).