Abstract:
This dissertation deals with the design, fabrication, and applications of microscale electrospray ionization chips for mass spectrometry. The microchip consists of a microchannel that leads to a sharp electrospray tip. The microchannel contains micropillars that produce a strong capillary action, which delivers the liquid sample to the electrospray tip; the tip converts the liquid sample into gas-phase ions that can be analyzed by mass spectrometry. The microchip operates at a high voltage, which can also serve as a valve between the microchip and the mass spectrometer. The microchips can be used in various applications, such as the analysis of drugs, proteins, peptides, or metabolites. The microchip works without pumps for liquid transfer, is usable for rapid analyses, and is sensitive. The performance characteristics of single microchips are studied, and a rotating multitip version of the microchip is designed and fabricated. The microchip can also be used as a microreactor, with reaction products detected online by mass spectrometry. This property can be exploited, for example, for protein identification: proteins are digested enzymatically on-chip and the resulting peptides are detected by mass spectrometry. Because reactions occur faster at the microscale due to shorter diffusion lengths, very small amounts of protein suffice, which is a benefit of the method. The microchip is well suited to surface-activated reactions because the dense micropillar array gives it a high surface-to-volume ratio. For example, a titanium dioxide nanolayer on the micropillar array, combined with UV radiation, produces photocatalytic reactions that can mimic the biotransformation reactions of drug metabolism. Rapid mimicking with the microchip eases the detection of potentially toxic compounds in preclinical research and could therefore speed up the development of new drugs. A micropillar array chip can also be used to fabricate liquid chromatography columns. Precisely ordered micropillar arrays offer a very homogeneous column, in which the separation of compounds has been demonstrated using both laser-induced fluorescence and mass spectrometry. Because of its small dimensions, the integrated microchip-based liquid chromatography-electrospray chip is especially well suited to low sample concentrations. Overall, this work demonstrates that the designed and fabricated silicon/glass three-dimensionally sharp electrospray tip is unique and provides a stable ion spray for mass spectrometry.
Abstract:
A new Schmitt trigger circuit based on the lambda bipolar transistor is presented. This circuit, which exhibits hysteresis in its transfer characteristic, appears to use a smaller chip area than many of the circuits proposed so far.
Abstract:
The increasing variability in device leakage has made the design of keepers for wide OR structures a challenging task. Conventional feedback keepers (CONV) can no longer improve the performance of wide dynamic gates for future technologies. In this paper, we propose an adaptive keeper technique called the rate sensing keeper (RSK) that enables faster switching and tracks variation across different process corners. It can switch up to 1.9x faster (for 20 legs) than CONV and can scale up to 32 legs, as against 20 legs for CONV, in a 130-nm 1.2-V process. The delay tracking is within 8% across the different process corners. We demonstrate the circuit operation of RSK using a 32 x 8 register file implemented in an industrial 130-nm 1.2-V CMOS process. The performance of individual dynamic logic gates is also evaluated on-chip for various keeper techniques. We show that the RSK technique gives superior performance compared to alternatives such as the conditional keeper (CKP) and the current-mirror-based keeper (LCR).
Abstract:
A comparative study has been carried out of R-12, 22, 125, 134a, 152a, 218, 245, 500, 502, 507 and 717 as working fluids in a vapour-compression refrigeration system. Two performance parameters were defined, which are expressed in reduced quantities for a corresponding-states comparison of these refrigerants in the temperature range −20 to 50 °C. One is based on the product of the temperature-drop-to-pressure-penalty ratio and the available volumetric heat of vaporisation at the evaporator; the other considers the effect of isentropic compression in the ideal-gas state. It was shown that R-125, 507 and 218 could be better alternatives to R-12 than R-134a. Among these, R-218 has a lower maximum cycle pressure.
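For concreteness, the first of these parameters can be written out symbolically. This is only a plausible reading of the verbal definition above; the symbols (Π₁, the reduced-quantity subscript r, latent heat Δh_fg, vapour specific volume v_g) are assumed notation, not the paper's own.

```latex
% A hedged rendering of the first performance parameter: the ratio of
% temperature drop to pressure penalty, multiplied by the volumetric heat
% of vaporisation available at the evaporator, with all quantities in
% reduced form (each property divided by its critical-point value).
\[
  \Pi_1 \;=\; \frac{\Delta T_r}{\Delta p_r}\,
              \frac{\Delta h_{fg,r}}{v_{g,r}}
\]
```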
Abstract:
The system gain of two CCD systems in regular use at the Vainu Bappu Observatory, Kavalur, is determined at a few gain settings. The procedure used for the determination of system gain and base-level noise is described in detail. The Photometrics CCD system at the 1-m reflector uses a Thomson-CSF TH 7882 CDA chip coated for increased ultraviolet sensitivity. The gain is programme-selected through the parameter 'cgain', which varies between 0 and 4095 in steps of 1. The inverse system gain for this system varies almost linearly from 27.7 electrons DN⁻¹ at cgain = 0 to 1.5 electrons DN⁻¹ at cgain = 500. The readout noise is ≲ 11 electrons at cgain = 66. The Astromed CCD system at the 2.3-m Vainu Bappu Telescope uses a GEC P8603 chip which is also coated for enhanced ultraviolet sensitivity. The amplifier gain is selected in discrete steps using switches in the controller. The inverse system gain is 4.15 electrons DN⁻¹ at the gain setting of 9.2, and the readout noise is approximately 8 electrons.
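The estimation procedure itself is not reproduced in this abstract. The standard way to determine inverse system gain (electrons DN⁻¹) and read noise is the mean-variance (photon-transfer) method, sketched minimally below in Python; it assumes two flat-field frames and two bias frames as NumPy arrays, and it illustrates the general technique rather than the exact procedure described in the paper.

```python
import numpy as np

def system_gain(flat1, flat2, bias1, bias2):
    """Photon-transfer estimate of inverse system gain (electrons/DN)
    and read noise (electrons) from a pair of flat fields and a pair
    of bias frames (all 2-D NumPy arrays in DN)."""
    # Differencing two frames of the same exposure cancels fixed-pattern
    # noise; the difference variance is twice the per-frame variance.
    var_flat = (flat1.astype(float) - flat2.astype(float)).var() / 2.0
    var_bias = (bias1.astype(float) - bias2.astype(float)).var() / 2.0
    # Bias-subtracted mean signal of the flats, in DN.
    signal = 0.5 * (flat1.mean() + flat2.mean() - bias1.mean() - bias2.mean())
    gain = signal / (var_flat - var_bias)   # electrons per DN
    read_noise = gain * np.sqrt(var_bias)   # electrons, rms
    return gain, read_noise
```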
Abstract:
Experiments have been conducted to obtain the optimum conditions of hydrogen ion concentration of the feed and strip phases and concentration of the carrier ALAMINE 336 in the extraction of Cr(VI) and Hg(II) using two different types of liquid membrane: bulk liquid membrane and emulsion liquid membrane. Experiments have also been carried out to find the effect of metal loading and the effect of extraction time on metal flux.
Abstract:
Abrasive Jet Machining (AJM), or Micro Blast Machining, is a non-traditional machining process wherein material removal is effected by the erosive action of a high-velocity gas jet carrying fine-grained abrasive particles that impact the work surface. The AJM process differs from conventional sand blasting in that the abrasive is much finer and the process parameters and cutting action are carefully controlled. The process is particularly suitable for cutting intricate shapes in hard and brittle materials which are sensitive to heat and have a tendency to chip easily. In other words, AJM can handle virtually any hard or brittle material. The process has already found its way into dozens of applications, sometimes replacing conventional alternatives and often doing jobs that could not be done in any other way. This paper reviews the current status of this non-conventional machining process and discusses its unique advantages and possible applications.
Abstract:
We use Monte Carlo simulations to obtain thermodynamic functions and correlation functions in a lattice model we propose for sponge phases. We demonstrate that the surface-density correlation function dominates the scattering along the symmetric-sponge (SS) to asymmetric-sponge (AS) phase boundary, but not along the boundary between the sponge-with-free-edges (SFE) and symmetric-sponge phases. At this second thermodynamic transition the scattering is dominated instead by an edge-density (or seam-density) correlation function. This prediction provides an unambiguous diagnostic for experiments in search of the SS-SFE transition.
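The lattice Hamiltonian of the sponge model is not given in this abstract, so as a generic illustration of the method only, the sketch below runs Metropolis Monte Carlo on an Ising-like lattice and measures a two-point correlation function. The model, coupling, and parameters are stand-ins, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta):
    """One Metropolis sweep over an L x L Ising-like lattice with
    nearest-neighbour coupling J = 1 and periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn          # energy cost of flipping (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def two_point(spins, r):
    """Two-point correlation <s(x) s(x + r)> along one lattice axis."""
    return float(np.mean(spins * np.roll(spins, r, axis=0)))

L, beta = 32, 0.5
spins = rng.choice([-1.0, 1.0], size=(L, L))
for _ in range(200):                          # equilibration sweeps
    metropolis_sweep(spins, beta)
corr = [two_point(spins, r) for r in range(L // 2)]
```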
Abstract:
We consider a system comprising a finite number of nodes, with infinite packet buffers, that use unslotted ALOHA with Code Division Multiple Access (CDMA) to share a channel for transmitting packetised data. We propose a simple model for packet transmission and retransmission at each node, and show that the saturation throughput in this model yields a sufficient condition for the stability of the packet buffers; we interpret this as the capacity of the access method. We calculate and compare the capacities of CDMA-ALOHA (with and without code sharing) and TDMA-ALOHA; we also consider carrier-sensing and collision-detection versions of these protocols. In each case, the saturation throughput can be obtained via analysis of a continuous-time Markov chain. Our results show how the saturation throughput degrades with code sharing. Finally, we also present some simulation results for mean packet delay. Our work is motivated by optical CDMA, in which "chips" can be optically generated, and hence the achievable chip rate can exceed the achievable TDMA bit rate, which is limited by electronics. Code sharing may be useful in the optical CDMA context as it reduces the number of optical correlators at the receivers. Our throughput results help to quantify by how much the CDMA chip rate should exceed the TDMA bit rate so that CDMA-ALOHA yields better capacity than TDMA-ALOHA.
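As a rough illustration of what "saturation throughput" means here, the toy discrete-event simulation below keeps every node saturated and lets a packet succeed only if the number of concurrent transmissions never exceeds an assumed code capacity K. This is a sketch of the general idea under made-up assumptions, not the paper's continuous-time Markov chain model.

```python
import heapq, random

def saturation_throughput(n_nodes=10, K=4, T=1.0, rate=0.5,
                          horizon=10_000.0, seed=1):
    """Toy model of saturated unslotted CDMA-ALOHA: an idle node starts a
    packet after an Exp(rate) backoff; a packet of duration T is lost if
    more than K transmissions ever overlap (assumed code capacity)."""
    random.seed(seed)
    events = [(random.expovariate(rate), "start", v) for v in range(n_nodes)]
    heapq.heapify(events)
    active = {}          # node -> True if this packet is already doomed
    successes = 0
    while events:
        t, kind, node = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "start":
            active[node] = False
            if len(active) > K:          # capacity exceeded: all collide
                for v in active:
                    active[v] = True
            heapq.heappush(events, (t + T, "end", node))
        else:
            if not active.pop(node):
                successes += 1
            # Saturated node: immediately back off for the next packet.
            heapq.heappush(events, (t + random.expovariate(rate), "start", node))
    return successes * T / horizon       # packets delivered per packet-time

print(saturation_throughput())
```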
Abstract:
This work studies the effect on macroscopic compressive failure features of introducing two flexible foam layers into a glass-epoxy (G-E) system, either together at the mid-region or separately at two locations away from the mid-region. In this experimental approach, the possible influence of the foam/G-E interface region on the compressive response of the material is examined through an analysis of macrofractographic features. While foam-free samples fail by extensive ear formation and separation near the mid-region, the foam-bearing ones display pronounced interface separation. The positioning of the foam sheet(s) has a bearing on the failure features.
Abstract:
An in-situ power monitoring technique for Dynamic Voltage and Threshold Scaling (DVTS) systems is proposed, which measures the total power consumed by the load circuit using a sleep transistor acting as a power sensor. Design details of the power monitor are examined in a simulation framework in a UMC 90 nm CMOS process. Experimental results of a test chip fabricated in an AMS 0.35 µm CMOS process are presented. The test chip has a variable activity factor between 0.05 and 0.5 and provides PMOS VTH control through the n-well contact. The maximum resolution obtained from the power monitor is 0.25 mV. The overhead of the power monitor in terms of its own power consumption is 0.244 mW (2.2% of the total power of the load circuit). Lastly, the power monitor is used to demonstrate a closed-loop DVTS system. The DVTS algorithm shows 46.3% power savings using the in-situ power monitor.
Abstract:
In this paper, a method of tracking the peak power in a wind energy conversion system (WECS) is proposed which is independent of the turbine parameters and air density. The algorithm searches for the peak power by varying the speed in the desired direction. The generator is operated in the speed control mode, with the speed reference being dynamically modified in accordance with the magnitude and direction of the change in active power. The peak power points on the P-ω curve correspond to dP/dω = 0; this fact is exploited in the optimum-point search algorithm. The generator considered is a wound-rotor induction machine whose stator is connected directly to the grid and whose rotor is fed through back-to-back pulse-width-modulation (PWM) converters. Stator-flux-oriented vector control is applied to control the active and reactive current loops independently. The turbine characteristics are generated by a dc motor fed from a commercial dc drive. All of the control loops are executed by a single-chip digital signal processor (DSP) controller, the TMS320F240. Experimental results show that the performance of the control algorithm compares well with the conventional torque control method.
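The search the abstract describes is a hill-climb on the P-ω curve toward dP/dω = 0. A minimal perturb-and-observe sketch of that idea is given below; `set_speed` and `measure_power` are hypothetical interfaces to the drive and are not taken from the paper.

```python
def mppt_search(measure_power, set_speed, omega0, step=0.5, iters=100):
    """Perturb-and-observe hill-climb toward the peak of the P-omega
    curve: keep stepping the speed reference in the direction that last
    increased active power; reverse direction when power falls."""
    omega, direction = omega0, +1
    set_speed(omega)
    p_prev = measure_power()
    for _ in range(iters):
        omega += direction * step
        set_speed(omega)
        p = measure_power()
        if p < p_prev:       # stepped past the peak: reverse direction
            direction = -direction
        p_prev = p
    return omega
```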
Abstract:
In this work, we evaluate the performance of a real-world image processing application that uses a cross-correlation algorithm to compare a given image with a reference one. The algorithm processes individual images represented as 2-dimensional matrices of single-precision floating-point values using O(n⁴) operations involving dot-products and additions. We implement this algorithm on an nVidia GTX 285 GPU using CUDA, and also parallelize it for the Intel Xeon (Nehalem) and IBM Power7 processors, using both manual and automatic techniques. Pthreads and OpenMP with SSE and VSX vector intrinsics are used for the manually parallelized version, while a state-of-the-art optimization framework based on the polyhedral model is used for automatic compiler parallelization and optimization. The performance of this algorithm on the nVidia GPU suffers from: (1) a smaller shared memory, (2) unaligned device memory access patterns, (3) expensive atomic operations, and (4) weaker single-thread performance. On commodity multi-core processors, the application dataset is small enough to fit in caches, and when parallelized using a combination of task and short-vector data parallelism (via SSE/VSX) or through fully automatic optimization from the compiler, the application matches or beats the performance of the GPU version. The primary reasons for better multi-core performance include larger and faster caches, higher clock frequency, higher on-chip memory bandwidth, and better compiler optimization and support for parallelization. The best performing versions on the Power7, Nehalem, and GTX 285 run in 1.02 s, 1.82 s, and 1.75 s, respectively. These results conclusively demonstrate that, under certain conditions, it is possible for a FLOP-intensive structured application running on a multi-core processor to match or even beat the performance of an equivalent GPU version.
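For reference, the direct form of the kernel being benchmarked looks roughly like the following: an O(n⁴) sweep of dot-products of a reference template against every window of the image. This is a naive single-threaded Python/NumPy sketch of the computation's shape, not the optimized CUDA, SSE/VSX, or polyhedral-compiled versions evaluated in the paper.

```python
import numpy as np

def cross_correlate(image, reference):
    """Direct 2-D cross-correlation: for each offset (i, j), take the
    dot-product of the reference template with the corresponding image
    window. For n x n inputs this costs O(n^4) multiply-adds."""
    H, W = image.shape
    h, w = reference.shape
    out = np.zeros((H - h + 1, W - w + 1), dtype=np.float32)
    ref = reference.ravel()
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.dot(image[i:i + h, j:j + w].ravel(), ref)
    return out
```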
Abstract:
Building flexible constraint length Viterbi decoders requires us to be able to realize de Bruijn networks of various sizes on the physically provided interconnection network. This paper considers the case when the physical network is itself a de Bruijn network and presents a scalable technique for realizing any n-node de Bruijn network on an N-node de Bruijn network, where n < N. The technique ensures that the length of the longest path realized on the network is minimized and that each physical connection is utilized to send only one data item, both of which are desirable in order to reduce the hardware complexity of the network and to obtain the best possible performance.
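The de Bruijn structure referred to here is the shift-register graph underlying Viterbi state transitions. The snippet below shows only that adjacency rule for a binary N-node de Bruijn network (N a power of two); the paper's actual embedding technique, which minimizes the longest realized path, is not reproduced.

```python
def de_bruijn_successors(node, N):
    """Successors of `node` in a binary de Bruijn network with N nodes
    (N a power of two): shift the state left and shift in a 0 or a 1."""
    return ((2 * node) % N, (2 * node + 1) % N)

# Example: in a 16-node network, node 5 (0101) connects to 10 (1010) and 11 (1011).
assert de_bruijn_successors(5, 16) == (10, 11)
```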
Abstract:
Today's feature-rich multimedia products require embedded system solutions with complex Systems-on-Chip (SoC) to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of an embedded system strongly influences critical system design objectives like area, power, and performance. Hence the embedded system designer performs a complete memory architecture exploration to custom-design a memory architecture for a given set of applications. Further, the designer is interested in multiple optimal design points to address various market segments. However, tight time-to-market constraints force a short design cycle. In this paper we address the multi-level, multi-objective memory architecture exploration problem through a combination of exhaustive-search-based memory exploration at the outer level and a two-step integrated data layout for SPRAM-Cache based architectures at the inner level. In this two-step integrated approach, the first step is data partitioning, which divides data between the SPRAM and the cache, and the second step is cache-conscious data layout. We formulate the cache-conscious data layout as a graph partitioning problem and show that our approach gives up to 34% improvement over an existing approach and also optimizes the off-chip memory address space. We evaluated our approach on 3 embedded multimedia applications; it explores several hundred memory configurations for each application, yielding several optimal design points in a few hours of computation on a standard desktop.
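As a generic sketch of what "cache-conscious data layout as graph partitioning" can look like, the snippet below builds a weighted conflict graph from an access trace and greedily partitions data objects so that frequently co-accessed objects land in different groups. Every detail here (trace format, window, greedy heuristic) is an assumption for illustration; the paper's exact formulation is not given in the abstract.

```python
from collections import defaultdict
from itertools import combinations

def conflict_graph(trace, window=2):
    """Weight an edge (a, b) by how often objects a and b are accessed
    within `window` consecutive references (a proxy for cache conflicts).
    The trace format is a made-up illustration."""
    g = defaultdict(int)
    for t in range(len(trace) - window + 1):
        for a, b in combinations(sorted(set(trace[t:t + window])), 2):
            g[(a, b)] += 1
    return g

def greedy_partition(g, objects, k):
    """Greedily place each object into the one of k groups against which
    it has the least accumulated conflict weight."""
    groups = [set() for _ in range(k)]
    def cost(obj, grp):
        return sum(g.get(tuple(sorted((obj, o))), 0) for o in grp)
    for obj in objects:
        groups[min(range(k), key=lambda i: cost(obj, groups[i]))].add(obj)
    return groups

trace = ["A", "B", "A", "C", "B", "A", "D"]
print(greedy_partition(conflict_graph(trace), ["A", "B", "C", "D"], 2))
```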