999 results for Array techniques
Abstract:
We report on two patients with de novo subtelomeric terminal deletions of chromosome 6p. Patient 1 is an 8-month-old female born with normal growth parameters, typical facial features of 6pter deletion, bilateral corectopia, and a protruding tongue. She has severe developmental delay, profound bilateral neurosensory deafness, poor visual contact, and hypsarrhythmia since the age of 6 months. Patient 2 is a 5-year-old male born with normal growth parameters and unilateral hip dysplasia; he has a characteristic facial phenotype, bilateral embryotoxon, and moderate mental retardation. Further characterization of the deletions, using high-resolution array comparative genomic hybridization (array-CGH; Agilent Human Genome kit 244 K), revealed that Patient 1 has an 8.1 Mb 6pter-6p24.3 deletion associated with a contiguous 5.8 Mb 6p24.3-6p24.1 duplication, and that Patient 2 has a 5.7 Mb 6pter-6p25.1 deletion partially overlapping that of Patient 1. Complementary FISH and array analyses showed that the inv del dup(6) in Patient 1 originated de novo. Our results demonstrate that apparently simple rearrangements are often more complex than defined by standard techniques. We also discuss genotype-phenotype correlations, including previously reported cases of deletion 6p.
Abstract:
Process variations are a major bottleneck for the manufacturability and yield of digital CMOS integrated circuits. That is why regular layout techniques, with different degrees of regularity, are emerging as possible solutions. Our proposal is a new regular layout design technique called Via-Configurable Transistors Array (VCTA) that pushes circuit layout regularity for devices and interconnects to the limit in order to maximize the benefits of regularity. VCTA is predicted to perform worse than Standard Cell designs at a given technology node, but it will allow a future technology node to be used at an earlier time. Our objective is to optimize VCTA so that it is comparable to Standard Cell design in an older technology. Delay and energy-consumption simulations of a Full Adder circuit in the 90 nm technology node are presented for the first, unoptimized version of VCTA, along with extrapolations to Carry-Ripple Adders from 4 to 64 bits.
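The extrapolation from a single Full Adder to wider Carry-Ripple Adders rests on the fact that the worst-case delay of a ripple-carry adder grows linearly with the length of the carry chain, and its total energy with the cell count. A minimal sketch of that scaling (the per-cell figures below are hypothetical placeholders, not values from the paper):

```python
# Extrapolate ripple-carry adder delay and energy from single-full-adder
# figures. The per-cell numbers are hypothetical placeholders, not
# measurements from the paper.
FA_DELAY_PS = 50.0    # assumed carry-propagation delay of one full adder
FA_ENERGY_FJ = 2.0    # assumed switching energy of one full adder

def ripple_adder_estimate(bits):
    """Worst-case delay scales with the carry-chain length (linear in
    the bit width); total energy scales with the number of cells."""
    delay_ps = bits * FA_DELAY_PS
    energy_fj = bits * FA_ENERGY_FJ
    return delay_ps, energy_fj

for n in (4, 8, 16, 32, 64):
    d, e = ripple_adder_estimate(n)
    print(f"{n:2d}-bit adder: {d:6.0f} ps, {e:6.1f} fJ")
```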
Abstract:
The main goal of the present Master's Thesis project was to create a field-programmable gate array (FPGA) based system for the control of single-electron transistors and other cryoelectronic devices. The FPGA and similar technologies are studied in the present work, and fixed and programmable logic are compared with each other. The main features and limitations of the hardware used in the project are investigated. The hardware and software connections of the device to the computer are shown in detail. The software development techniques for FPGA-based design are described, and the steps of design for programmable logic are considered. Furthermore, the results of filters implemented in software are illustrated.
Abstract:
Pan-viral DNA array (PVDA) and high-throughput sequencing (HTS) are useful tools to identify novel viruses of emerging diseases. However, both techniques have difficulty identifying viruses in clinical samples because of the host genomic nucleic acid content (hg/cont). Both propidium monoazide (PMA) and ethidium bromide monoazide (EMA) have the capacity to bind free DNA/RNA but are cell membrane-impermeable; thus, they are unable to bind protected nucleic acids such as viral genomes within intact virions. EMA/PMA-modified genetic material, however, cannot be amplified by enzymes. In order to assess the potential of EMA/PMA to lower the amount of amplifiable hg/cont in samples and improve virus detection, serum and lung tissue homogenates were spiked with porcine reproductive and respiratory syndrome virus (PRRSV) and processed with EMA/PMA. In addition, PRRSV RT-qPCR-positive clinical samples were also tested. EMA/PMA treatments significantly decreased amplifiable hg/cont and significantly increased the number of PVDA-positive probes and their signal intensity compared to untreated spiked lung samples. EMA/PMA treatments also increased the sensitivity of HTS by increasing the number of specific PRRSV reads and the percentage of PRRSV coverage. Interestingly, EMA/PMA treatments significantly increased the sensitivity of PVDA and HTS in two out of three clinical tissue samples. Thus, EMA/PMA treatments offer a new approach to lower amplifiable hg/cont in clinical samples and increase the success of PVDA and HTS in identifying viruses.
Abstract:
The main objective of this investigation is to develop suitable transducer array systems so that underwater pipeline inspection can be carried out far more effectively; a focused beam and electronic steering can also reduce inspection time. Better results are obtained by optimizing the array parameters. The spacing between the elements is assumed to be half the wavelength so that inter-element interaction is minimal. For NDT applications these arrays are operated in the MHz range, where the wavelengths become very small. The array elements then become very small as well, requiring hybrid construction techniques for their fabrication. Transducer elements have been fabricated using PVDF as the active material, mild steel as the backing, and a conducting silver preparation as the bonding material. The transducer is operated in the (3,3) mode. The construction of a high-frequency array is comparatively complicated: the inter-element spacing becomes considerably small at high frequencies, making manual construction very difficult, and the electrode connections to the elements can produce a significant loading effect. The array therefore has to be fabricated using hybrid construction techniques: the active material has to be deposited on a proper substrate, and etching techniques are required to fabricate the array. Annular-ring, annular-cylindrical, or similar structural forms of arrays may also find applications in the near future in treatments where curved contours of the human body are involved.
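Half-wavelength element spacing and electronic beam steering can be illustrated with the standard array factor of a uniform linear array. The sketch below is a generic textbook model, not code from the investigation itself:

```python
import math

def array_factor(n_elements, steer_deg, theta_deg, spacing_wavelengths=0.5):
    """Normalized magnitude of the array factor of a uniform linear
    array. Half-wavelength spacing is the default; a progressive phase
    shift across the elements steers the main beam toward steer_deg."""
    k_d = 2 * math.pi * spacing_wavelengths        # k*d in radians
    psi = k_d * (math.sin(math.radians(theta_deg))
                 - math.sin(math.radians(steer_deg)))
    re = sum(math.cos(i * psi) for i in range(n_elements))
    im = sum(math.sin(i * psi) for i in range(n_elements))
    return math.hypot(re, im) / n_elements

# Peak response occurs at the steering angle:
print(array_factor(16, steer_deg=30, theta_deg=30))  # 1.0
```

Sweeping theta_deg over a range of angles traces the beam pattern and shows the main lobe following the steering angle, which is what makes electronic steering attractive for cutting inspection time.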
Abstract:
In this paper, a comparison study among three neural-network algorithms for the synthesis of array patterns is presented. The neural networks are used to estimate the array elements' excitations for an arbitrary pattern. The architecture of the neural networks is discussed and simulation results are presented. Two new neural networks, based on radial basis functions (RBFs) and wavelet neural networks (WNNs), are introduced. The proposed networks offer a more efficient synthesis procedure compared to other available techniques.
Abstract:
A data centre is a centralized repository, either physical or virtual, for the storage, management and dissemination of data and information organized around a particular body, and it is the nerve centre of the present IT revolution. Data centres are expected to serve uninterruptedly round the year, and in the present scenario they consume enormous energy. Tremendous growth in demand from the IT industry has made it customary to develop newer technologies for better data centre operation. Energy conservation activities in data centres concentrate mainly on the air conditioning system, since it is the major mechanical sub-system and consumes a considerable share of the total power consumption of the data centre. The data centre energy matrix is best represented by power utilization efficiency (PUE), defined as the ratio of the total facility power to the IT equipment power. Its value is always greater than one, and a large PUE indicates that the sub-systems draw more power from the facility, so the performance of the data centre is poor from the standpoint of energy conservation. PUE values of 1.4 to 1.6 are achievable by proper design and management techniques. Optimizing the air conditioning system offers an enormous opportunity for bringing down the PUE value. The air conditioning system can be optimized by two approaches, namely thermal management and air flow management. Thermal management systems are now being introduced by some companies, but they are highly sophisticated and costly and have so far attracted little attention in rule-of-thumb practice.
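The PUE definition above reduces to a one-line calculation; a quick sketch with illustrative power figures (not taken from the text):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power utilization efficiency: total facility power divided by
    IT equipment power. Always >= 1 in practice; lower is better."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative figures: a 1500 kW facility delivering 1000 kW to IT
# equipment sits inside the achievable 1.4-1.6 band quoted above.
print(pue(1500, 1000))  # 1.5
```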
Abstract:
In this thesis, different techniques for image analysis of high-density microarrays have been investigated. Most existing image analysis techniques require prior knowledge of image-specific parameters and direct user intervention for microarray image quantification. The objective of this research work was to develop a fully automated image analysis method capable of accurately quantifying the intensity information from high-density microarray images. The method should be robust against the noise and contaminations that commonly occur at different stages of microarray development.
Abstract:
The authors present a systolic design for a simple GA mechanism which provides high throughput and unidirectional pipelining by exploiting the inherent parallelism in the genetic operators. The design computes in O(N+G) time steps using O(N²) cells, where N is the population size and G is the chromosome length. The area of the device is independent of the chromosome length and so can easily be scaled by replicating the arrays or by employing fine-grain migration. The array is generic in the sense that it does not rely on the fitness function and can be used as an accelerator for any GA application using uniform crossover between pairs of chromosomes. The design can also be used in hybrid systems as an add-on to complement existing designs and methods for fitness function acceleration and island-style population management.
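The uniform crossover operator that the array accelerates is simple to state in software. The function below is only a behavioural reference, not the systolic implementation, which pipelines this work per gene across the cells:

```python
import random

def uniform_crossover(parent_a, parent_b, rng=None):
    """Uniform crossover: each gene position independently inherits
    from one parent or the other with equal probability."""
    assert len(parent_a) == len(parent_b)
    rng = rng or random.Random()
    return [a if rng.random() < 0.5 else b
            for a, b in zip(parent_a, parent_b)]

# Every gene of the child comes from one of the two parents:
child = uniform_crossover([0, 0, 0, 0, 0], [1, 1, 1, 1, 1])
print(child)
```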
Abstract:
The paper presents a design for a hardware genetic algorithm which uses a pipeline of systolic arrays. These arrays have been designed using systolic synthesis techniques, which involve expressing the algorithm as a set of uniform recurrence relations. The final design divorces the fitness function evaluation from the hardware and can process chromosomes of different lengths, giving the design a generic quality. The paper demonstrates the design methodology by progressively rewriting a simple genetic algorithm, expressed in C code, into a form from which systolic structures can be deduced. This paper extends previous work by introducing a simplification to a previous systolic design for the genetic algorithm. The simplification results in the removal of 2N² + 4N cells and reduces the time complexity by 3N + 1 cycles.
Abstract:
We advocate the use of systolic design techniques to create custom hardware for Custom Computing Machines. We have developed a hardware genetic algorithm based on systolic arrays to illustrate the feasibility of the approach. The architecture is independent of the lengths of chromosomes used and can be scaled in size to accommodate different population sizes. An FPGA prototype design can process 16 million genes per second.
Abstract:
We have developed a highly parallel design for a simple genetic algorithm using a pipeline of systolic arrays. The systolic design provides high throughput and unidirectional pipelining by exploiting the implicit parallelism in the genetic operators. The design is significant because, unlike other hardware genetic algorithms, it is independent of both the fitness function and the particular chromosome length used in a problem. We have designed and simulated a version of the mutation array using Xilinx FPGA tools to investigate the feasibility of hardware implementation. A simple 5-chromosome mutation array occupies 195 CLBs and is capable of performing more than one million mutations per second. Genetic algorithms (GAs) are established search and optimization techniques which have been applied to a range of engineering and applied problems with considerable success [1]. They operate by maintaining a population of trial solutions encoded using a suitable encoding scheme.
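The per-gene bit flip performed by a cell of the mutation array can be mirrored in software; a minimal sketch, with the mutation rate as an assumed parameter rather than a figure from the paper:

```python
import random

def mutate(chromosome, rate=0.01, rng=None):
    """Bit-flip mutation: each bit of the chromosome is inverted
    independently with probability `rate`. This mirrors in software
    what one cell of the mutation array does per gene."""
    rng = rng or random.Random()
    return [bit ^ 1 if rng.random() < rate else bit for bit in chromosome]

rng = random.Random(42)
print(mutate([0, 1, 0, 1, 0], rate=0.5, rng=rng))
```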
Abstract:
This paper makes a contribution to bridging the theory and practice of the polyhedral model for designing parallel algorithms. Although the theory of the polyhedral model is well developed, designers of massively parallel algorithms have been unable to benefit from it due to the lack of software tools that incorporate the wide range of transformations possible in the model. The Uniformization tool that we developed was the first to integrate a number of these techniques and to completely automate the transformation step, allowing designers to explore a wide range of feasible designs from high-level specifications.
Abstract:
The High Resolution Dynamics Limb Sounder is described, with particular reference to the atmospheric measurements to be made and the rationale behind the measurement strategy. The demands this strategy places on the filters to be used in the instrument, and the designs to which this leads, are described. A second set of filters at an intermediate image plane to reduce "ghost imaging" is discussed, together with their required spectral properties. A method is described for combining the spectral characteristics of the primary and secondary filters in each channel with the spectral response of the detectors and other optical elements to obtain the system spectral response, weighted appropriately for the Planck function and atmospheric limb absorption. This method is used to verify whether the out-of-band spectral blocking requirement for a channel is being met, and an example calculation shows how the blocking is built up for a representative channel. Finally, the techniques used to produce filters of the necessary sub-millimetre sizes are discussed, together with the testing methods and procedures used to assess environmental durability and establish space-flight quality.
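Combining primary and secondary filter transmissions with the detector response, weighted by the Planck function, amounts to a per-wavelength product. The sketch below is an illustrative reconstruction of that idea, not the instrument team's method; all transmission tables would be hypothetical inputs:

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Planck spectral radiance B(lambda, T)."""
    a = 2 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1)

def weighted_response(wavelengths_m, primary_t, secondary_t, detector_r,
                      temp_k=250.0):
    """Per-wavelength system response: the product of the primary
    filter, secondary (ghost-suppression) filter and detector
    responses, weighted by the Planck function at an assumed scene
    temperature."""
    return [p * s * d * planck(w, temp_k)
            for w, p, s, d in zip(wavelengths_m, primary_t,
                                  secondary_t, detector_r)]
```

Summing such a response outside the passband and dividing by the in-band total gives one plausible way to check an out-of-band blocking requirement.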