930 results for TPM chip
Abstract:
The end of Dennard scaling has pushed power consumption into a first-order concern for current systems, on par with performance. As a result, near-threshold voltage computing (NTVC) has been proposed as a potential means to tackle the limited cooling capacity of CMOS technology. Hardware operating at near-threshold voltage consumes significantly less power, at the cost of lower frequency and thus reduced performance, as well as increased error rates. In this paper, we investigate whether a low-power system-on-chip built on ARM's asymmetric big.LITTLE technology can be an alternative to conventional high-performance multicore processors in terms of power/energy in an unreliable scenario. For our study, we use the Conjugate Gradient (CG) solver, an algorithm representative of the computations performed by a large range of scientific and engineering codes.
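For context, the CG solver named above admits a compact formulation; the following is a generic dense-matrix sketch in NumPy, illustrative only and not the paper's tuned implementation:

```python
# Generic Conjugate Gradient sketch (not the paper's implementation):
# solves A x = b for a symmetric positive-definite matrix A.
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # residual small enough: converged
            break
        p = r + (rs_new / rs_old) * p  # next A-conjugate direction
        rs_old = rs_new
    return x
```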
Abstract:
The design of a high-performance IIR (infinite impulse response) digital filter is described. The chip architecture operates on 11-b parallel, two's complement input data with a 12-b parallel two's complement coefficient to produce a 14-b two's complement output. The chip is implemented in 1.5-µm, double-layer-metal CMOS technology, consumes 0.5 W, and can operate up to 15 Msample/s. The main component of the system is a fine-grained systolic array that internally is based on a signed binary number representation (SBNR). Issues addressed include testing, clock distribution, and circuitry for conversion between two's complement and SBNR.
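The abstract specifies the word lengths but not the filter topology. As a rough behavioural illustration of such fixed-point two's-complement arithmetic, the sketch below assumes a direct-form-I biquad with saturating quantizers; the topology and fractional-bit split are assumptions, not the chip's actual fine-grained systolic design:

```python
# Behavioural sketch of fixed-point IIR filtering with the word lengths
# from the abstract (11-bit input, 12-bit coefficients, 14-bit output,
# two's complement). Direct-form-I biquad topology is an assumption.

def quantize(x, bits, frac_bits):
    """Round x onto a two's-complement grid and saturate to its range."""
    step = 2.0 ** -frac_bits
    lo = -(2 ** (bits - 1)) * step
    hi = (2 ** (bits - 1) - 1) * step
    return min(max(round(x / step) * step, lo), hi)

def biquad(samples, b, a):
    """y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]"""
    b = [quantize(c, 12, 10) for c in b]   # 12-bit coefficients
    a = [quantize(c, 12, 10) for c in a]
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        x = quantize(x, 11, 9)             # 11-bit input
        y = b[0]*x + b[1]*x1 + b[2]*x2 - a[1]*y1 - a[2]*y2
        y = quantize(y, 14, 9)             # 14-bit output
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```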
Abstract:
Features of chip formation can inform the mechanism of a machining process. In this paper, a series of orthogonal cutting experiments was carried out on unidirectional carbon fiber reinforced polymer (UD-CFRP) at a cutting speed of 0.5 m/min, using specially designed orthogonal cutting tools and a high-speed camera. Two main factors are found to influence the chip morphology, namely the depth of cut (DOC) and the fiber orientation (angle θ), with the latter playing the more dominant role. Based on the investigation of chip formation, a new approach is proposed for predicting the fracture toughness of the newly machined surface, and the total energy consumption during CFRP orthogonal cutting is expressed as a function of the surface energy of the machined surface, the energy consumed to overcome friction, and the energy for chip fracture. The results show that the proportion of energy spent on tool-chip friction is the greatest, and the proportion of energy spent on creating new surfaces decreases as the fiber angle increases.
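The energy decomposition described above can be summarised as a simple balance; the symbols are illustrative rather than the paper's notation:

\[
E_{\text{total}} = E_{\text{surface}} + E_{\text{friction}} + E_{\text{fracture}},
\]

where \(E_{\text{surface}}\) is the surface energy of the newly machined surface, \(E_{\text{friction}}\) the energy dissipated against tool-chip friction (the largest share, per the abstract), and \(E_{\text{fracture}}\) the energy consumed in chip fracture.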
Abstract:
A solvent-vapour thermoplastic bonding process is reported which provides high-strength bonding of PMMA over a large area for multi-channel and multi-layer microfluidic devices with shallow, high-resolution channel features. The bond process utilises a low-temperature vacuum thermal fusion step with prior exposure of the substrate to chloroform (CHCl3) vapour to reduce the bond temperature to below the PMMA glass transition temperature. Peak tensile and shear bond strengths greater than 3 MPa were achieved for a typical channel depth reduction of 25 µm. The device-equivalent bond performance was evaluated for multiple layers and high-resolution channel features using double-side and single-side exposure of the bonding pieces. A single-sided exposure process was achieved which is suited to multi-layer bonding with channel alignment, at the expense of greater depth loss and a reduction in peak bond strength. However, leak and burst tests demonstrate bond integrity up to at least 10 bar channel pressure over the full substrate area of 100 mm × 100 mm. The inclusion of metal tracks within the bond resulted in no loss of performance. The vertical wall integrity between channels was found to be compromised by solvent permeation for wall thicknesses of 100 µm, which has implications for high-resolution serpentine structures. Bond strength is reduced considerably for multi-layer patterned substrates where features on each layer are not aligned, despite the presence of an intermediate blank substrate. Overall, a high-performance bond process has been developed that has the potential to meet the stringent specifications for lab-on-chip deployment in harsh environmental conditions for applications such as deep ocean profiling.
Abstract:
The end of Dennard scaling has promoted low power consumption into a first-order concern for computing systems. However, conventional power-conservation schemes such as voltage and frequency scaling are reaching their limits when used in performance-constrained environments. New technologies are required to break the power wall while sustaining performance on future processors. Low-power embedded processors and near-threshold voltage computing (NTVC) have been proposed as viable solutions to tackle the power wall in future computing systems. Unfortunately, these technologies may also compromise per-core performance and, in the case of NTVC, reliability. These limitations would make them unsuitable for HPC systems and datacenters. In order to demonstrate that emerging low-power processing technologies can effectively replace conventional technologies, this study relies on ARM's big.LITTLE processors as both an actual and an emulation platform, together with state-of-the-art implementations of the Conjugate Gradient (CG) solver. For NTVC in particular, the paper describes how efficient algorithm-based fault tolerance schemes preserve the power and energy benefits of very low voltage operation.
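As background on the algorithm-based fault tolerance (ABFT) idea mentioned above, the matrix-vector product at the heart of CG can be protected with the classic column-checksum test; this is a generic sketch, not necessarily the exact scheme evaluated in the paper:

```python
# Generic ABFT checksum for y = A @ x: in an error-free run,
# sum(y) must equal colsum @ x, since both equal e^T A x.
import numpy as np

def checked_matvec(A, x, colsum=None, tol=1e-6):
    if colsum is None:
        colsum = A.sum(axis=0)          # precompute once per matrix
    y = A @ x
    if abs(y.sum() - colsum @ x) > tol * (abs(y).sum() + 1e-30):
        # A transient (e.g. low-voltage) fault corrupted the product.
        raise RuntimeError("checksum mismatch: soft error detected")
    return y
```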
Abstract:
Chromatin immunoprecipitation (ChIP) allows enrichment of genomic regions which are associated with specific transcription factors, histone modifications, and indeed any other epitopes which are present on chromatin. The original ChIP methods used site-specific PCR and Southern blotting to confirm which regions of the genome were enriched, on a candidate basis. The combination of ChIP with genomic tiling arrays (ChIP-chip) allowed a more unbiased approach to map ChIP-enriched sites. However, limitations of microarray probe design and probe number have a detrimental impact on the coverage, resolution, sensitivity, and cost of whole-genome tiling microarray sets for higher eukaryotes with large genomes. The combination of ChIP with high-throughput sequencing technology has allowed more comprehensive surveys of genome occupancy, greater resolution, and lower cost for whole genome coverage. Herein, we provide a comparison of high-throughput sequencing platforms and a survey of ChIP-seq analysis tools, discuss experimental design, and describe a detailed ChIP-seq method.
Abstract:
The identification of direct nuclear hormone receptor gene targets provides clues to their contribution to both development and cancer progression. Until recently, the identification of such direct target genes has relied on a combination of expression analysis and in silico searches for consensus binding motifs in gene promoters. Consensus binding motifs for transcription factors are often defined using in vitro DNA binding strategies. Such in vitro strategies fail to account for the many factors, beyond the recognition of these short consensus DNA sequences, that contribute significantly to target selection by transcription factors in cells. These factors include DNA methylation, chromatin structure, post-translational modifications of transcription factors, and the cooperative recruitment of transcription factor complexes. Chromatin immunoprecipitation (ChIP) provides a means of isolating transcription factor complexes in the context of endogenous chromatin, allowing the identification of direct transcription factor targets. ChIP can be combined with site-specific PCR for candidate binding sites or, alternatively, with cloning, genomic microarrays, or, more recently, direct high-throughput sequencing to identify novel genomic targets. The application of ChIP-based approaches has redefined consensus binding motifs for transcription factors and provided important insights into transcription factor biology.
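As a toy illustration of the in silico consensus-motif searches that the abstract contrasts with ChIP, a degenerate-consensus scan over a promoter sequence might look as follows; the estrogen-response-element-like motif and the sequence are textbook examples, not data from the paper:

```python
# Scan a DNA sequence for matches to an IUPAC degenerate consensus motif.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "W": "AT", "S": "CG",
         "K": "GT", "M": "AC", "N": "ACGT"}

def scan(sequence, consensus):
    """Yield 0-based positions where `sequence` matches the consensus."""
    seq = sequence.upper()
    k = len(consensus)
    for i in range(len(seq) - k + 1):
        if all(seq[i + j] in IUPAC[c] for j, c in enumerate(consensus)):
            yield i

# Example: the classical estrogen response element consensus GGTCAnnnTGACC.
hits = list(scan("TTAGGTCACGGTGACCTAAT", "GGTCANNNTGACC"))  # -> [3]
```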
Abstract:
The paper details on-chip inductor optimization for a reconfigurable continuous-time delta-sigma (Δ-Σ) modulator-based radio-frequency analog-to-digital converter. Inductor optimization enables the Δ-Σ modulator, built around Q-enhanced LC tank circuits, to employ a single high-Q-factor on-chip inductor and fewer quantizer levels, thereby reducing the circuit complexity associated with excess loop delay, power dissipation, and dynamic element matching. System-level simulations indicate that, at a Q-factor of 75, the Δ-Σ modulator with a 3-level quantizer achieves dynamic ranges of 106 dB, 82 dB, and 84 dB for RFID, TETRA, and Galileo over bandwidths of 200 kHz, 10 MHz, and 40 MHz, respectively.
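For background, the textbook ideal estimate (not the paper's system-level analysis) relates the dynamic range of an \(L\)-th-order Δ-Σ modulator with a \(B\)-bit quantizer to the oversampling ratio (OSR):

\[
\mathrm{DR} \approx 6.02\,B + 1.76 + 10\log_{10}\!\frac{2L+1}{\pi^{2L}} + (2L+1)\cdot 10\log_{10}(\mathrm{OSR})\ \text{dB},
\]

so, at a fixed sampling rate, widening the signal bandwidth lowers the OSR and hence the achievable dynamic range.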
Abstract:
This paper describes in detail the design of a custom CMOS fast Fourier transform (FFT) processor for computing a 256-point complex FFT. The FFT is well suited to real-time spectrum analysis in instrumentation and measurement applications. The FFT butterfly processor reported here consists of one parallel-parallel multiplier and two adders. It is capable of computing one butterfly every 100 ns; thus it can compute a 256-point complex FFT in 102.4 µs, excluding data input and output processes.
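The quoted timing follows directly from the radix-2 butterfly count (radix-2 is assumed here; the abstract does not name the radix):

\[
\frac{N}{2}\log_2 N = \frac{256}{2}\times 8 = 1024\ \text{butterflies},\qquad 1024 \times 100\ \text{ns} = 102.4\ \mu\text{s}.
\]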
Abstract:
On-chip debug (OCD) features are frequently available in modern microprocessors. Their contribution to shortening time-to-market justifies the industry investment in this area, where a number of competing or complementary proposals are available or under development, e.g. NEXUS, CJTAG, IJTAG. The controllability and observability features provided by OCD infrastructures form a valuable toolbox that can be used well beyond the debugging arena, improving the return on investment by diluting its cost across a wider spectrum of application areas. This paper discusses the use of OCD features for validating fault-tolerant architectures, and in particular the efficiency of various fault injection methods provided by enhanced OCD infrastructures. The reference data for our comparative study was captured on a workbench comprising the 32-bit Freescale MPC-565 microprocessor, an iSYSTEM IC3000 debugger (iTracePro version) and the Winidea 2005 debugging package. All enhanced OCD infrastructures were implemented in VHDL and the results were obtained by simulation within the same fault injection environment. The focus of this paper is on the comparative analysis of the experimental results obtained for various OCD configurations and debugging scenarios.
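To make the injection mechanism concrete, the sketch below shows the halt/modify/resume pattern that debugger-based fault injection relies on; the FakeTarget interface is hypothetical, standing in for the halt, read, write and run primitives that real OCD tools such as the iSYSTEM debugger expose:

```python
# Schematic OCD-style fault injection: halt the target, flip one bit in a
# memory word (single-event-upset model), resume, and observe the effects.
import random

class FakeTarget:
    """Hypothetical stand-in for a real OCD debugger connection."""
    def __init__(self, memory):
        self.memory = memory                 # dict: address -> 32-bit word
    def halt(self): pass                     # real OCD: stop the core
    def resume(self): pass                   # real OCD: restart execution
    def read_word(self, addr): return self.memory[addr]
    def write_word(self, addr, val): self.memory[addr] = val & 0xFFFFFFFF

def inject_bit_flip(dbg, address, word_bits=32):
    """Flip one random bit of the word at `address` on a halted target."""
    dbg.halt()
    word = dbg.read_word(address)
    bit = random.randrange(word_bits)
    dbg.write_word(address, word ^ (1 << bit))
    dbg.resume()
    return bit

# Example: corrupt one bit of a memory word.
target = FakeTarget({0x1000: 0x000000FF})
flipped_bit = inject_bit_flip(target, 0x1000)
```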
Abstract:
The rapid increase in the use of microprocessor-based systems in critical areas, where failures imply risks to human lives, to the environment, or to expensive equipment, has significantly increased the need for dependable systems, able to detect, tolerate and eventually correct faults. The verification and validation of such systems is frequently performed via fault injection, using various forms and techniques. However, as electronic devices get smaller and more complex, controllability and observability issues, and sometimes real-time constraints, make it harder to apply most conventional fault injection techniques. This paper proposes a fault injection environment and a scalable methodology to assist the execution of real-time fault injection campaigns, providing enhanced performance and capabilities. Our proposed solutions are based on the use of common and customized on-chip debug (OCD) mechanisms, present in many modern electronic devices, with the main objective of enabling the insertion of faults into microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented, starting from basic commercial off-the-shelf (COTS) microprocessors equipped with real-time OCD infrastructures, to improved solutions based on modified interfaces and dedicated OCD circuitry that enhance fault injection capabilities and performance. All methodologies and configurations were evaluated and compared in terms of performance gain and silicon overhead.
Abstract:
Hyperspectral instruments have been incorporated in satellite missions, providing large amounts of high-spectral-resolution data of the Earth's surface. This data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, the latter usually performed at a ground station, onboard systems have emerged to process data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on vertex component analysis (VCA) and works without a dimensionality-reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip, whose FPGA programmable logic is based on the Artix-7 fabric, and tested using real hyperspectral data. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low-cost embedded systems, opening perspectives for onboard hyperspectral image processing.
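The endmember-search loop at the core of VCA is compact enough to sketch; the NumPy version below is a didactic simplification (random directions projected orthogonally to the endmembers found so far, each selecting the most extreme pixel), not the paper's FPGA architecture, though like it, it applies no dimensionality-reduction front end:

```python
# Simplified VCA-style endmember extraction from a hyperspectral cube.
import numpy as np

def vca(Y, p, seed=0):
    """Y: (bands x pixels) data matrix; p: number of endmembers."""
    rng = np.random.default_rng(seed)
    bands, _ = Y.shape
    E = np.zeros((bands, p))                 # extracted endmember signatures
    idx = np.zeros(p, dtype=int)
    for k in range(p):
        w = rng.standard_normal(bands)       # random direction
        if k > 0:
            # Project onto the orthogonal complement of the endmembers
            # found so far, so each pass looks toward a new simplex vertex.
            A = E[:, :k]
            w -= A @ np.linalg.lstsq(A, w, rcond=None)[0]
        f = w / np.linalg.norm(w)
        idx[k] = np.argmax(np.abs(f @ Y))    # most extreme pixel along f
        E[:, k] = Y[:, idx[k]]
    return E, idx
```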