937 results for Chip


Relevance: 20.00%
Publisher:
Abstract:

Many-core systems are emerging from the need for more computational power and better power efficiency, yet many issues still surround them. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computing systems. To gain efficiency from a many-core system, programs must be parallelized: the cores are smaller and less powerful than those used in traditional computing, so running a conventional program is not an efficient option. Moreover, in Network-on-Chip-based processors the network may become congested and the cores may run at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system, so in order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach that dynamically balances the load during the fault simulation process is used. The proposed dynamic fault simulation approach shows up to a 45X speedup compared to a serial fault simulation approach. Many-core systems can also draw enormous amounts of power, and if this power is not controlled properly the system may be damaged. One way to manage power is to set a power budget for the system; but if the budgeted power is drawn by just a few of the many cores, those cores become extremely hot and may be damaged. Due to the increase in power density, multiple thermal sensors are deployed across the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is highly susceptible to intra-die process variation and aging, which cause sensor values to drift from their nominal values, so efficient calibration techniques must be applied before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling, and thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. This thesis therefore also proposes a general-purpose, software-based auto-calibration approach for thermal sensors across a range of voltage levels.
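
The runtime approach hinges on dynamic load balancing: rather than statically partitioning the fault list, idle cores are handed the next chunk of faults on demand, so fast cores never wait on slow ones. A minimal Python sketch of that idea, with a shared work pool standing in for the SCC's on-chip message passing; the chunk size, worker count, and simulate_chunk body are all illustrative, not the thesis's implementation:

```python
# Minimal sketch of dynamic load balancing for parallel fault simulation.
# A shared work pool stands in for the SCC's message passing; all names
# and parameters are illustrative.
from multiprocessing import Pool

CHUNK = 64  # faults handed out per request; a tuning knob

def simulate_chunk(fault_ids):
    """Placeholder for fault simulation of one chunk of the fault list."""
    return [f % 7 == 0 for f in fault_ids]  # dummy "detected" flags

def run(fault_list, n_workers=4):
    chunks = [fault_list[i:i + CHUNK] for i in range(0, len(fault_list), CHUNK)]
    results = []
    with Pool(n_workers) as pool:
        # imap_unordered hands a new chunk to whichever worker is free,
        # so the load balances dynamically across unevenly fast workers.
        for detected in pool.imap_unordered(simulate_chunk, chunks):
            results.extend(detected)
    return results

if __name__ == "__main__":
    print(sum(run(list(range(10_000)))), "faults detected")
```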

Relevance: 20.00%
Publisher:
Abstract:

Doctoral thesis in Veterinary Sciences, specialty in Biological and Biomedical Sciences

Relevance: 20.00%
Publisher:
Abstract:

Purpose: To identify markers for gynecological tumor diagnosis using antibody chip capture. Methods: Marker proteins, including cancer antigen 153 (CA153), CA125, and carcinoembryonic antigen (CEA), were analyzed by antibody chip capture of serum samples. Fifteen agglutinin types that specifically recognize five common glycans (fucose, sialic acid, mannose, N-acetylgalactosamine, and N-acetylglucosamine) were used to detect the glycan levels of the marker proteins. Serum samples from 49 healthy controls, 31 breast cancer patients, 24 cervical cancer patients, and 19 ovarian cancer patients were used to measure the glycan levels of CA153, CA125, and CEA. Results: In breast cancer samples, CA153 and CA125 were down-regulated (p < 0.01), while differences in ovarian cancer samples were not statistically significant (p > 0.01). The total accuracy was 85.1%, with 96.8% accuracy for breast cancer, 75.0% for cervical cancer, and 78.9% for ovarian cancer. Cross-validation analyses gave 93.5% accuracy for breast cancer, 66.7% for cervical cancer, and 68.4% for ovarian cancer, for a total accuracy of 78.4% (58/74). Conclusions: The results indicate that better clinical diagnosis of gynecological tumors can be obtained by monitoring changes in the glycan levels of serum proteins and in the types of proteoglycan changes.
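
As a quick arithmetic check, the per-class cross-validation accuracies and the sample counts stated above combine into the reported 78.4% (58/74) total:

```python
# Reproducing the reported cross-validation arithmetic from the abstract
# (class sizes: 31 breast, 24 cervical, 19 ovarian cancer samples).
correct = {"breast": round(0.935 * 31),    # 29 correct
           "cervical": round(0.667 * 24),  # 16 correct
           "ovarian": round(0.684 * 19)}   # 13 correct
total_correct = sum(correct.values())      # 29 + 16 + 13 = 58
print(total_correct / (31 + 24 + 19))      # 58/74 = 0.784
```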

Relevance: 20.00%
Publisher:
Abstract:

Combinatorial optimization is a complex engineering subject. Although the formulation often depends on the nature of the problem, which differs in setup, design, constraints, and implications, establishing a unifying framework is essential. This dissertation investigates the unique features of three important optimization problems that span from small-scale design automation to large-scale power system planning: (1) feeder remote terminal unit (FRTU) planning that considers the cybersecurity of the secondary distribution network in the electrical distribution grid, (2) physical-level synthesis for microfluidic lab-on-a-chip, and (3) discrete gate sizing in very-large-scale integration (VLSI) circuits. First, a cross-entropy optimization technique is proposed to handle FRTU deployment in the primary network while considering the cybersecurity of the secondary distribution network. Constrained by a monetary budget on the number of deployed FRTUs, the proposed algorithm identifies pivotal locations on a distribution feeder at which to install FRTUs over different time horizons. Then, multi-scale optimization techniques are proposed for digital microfluidic lab-on-a-chip physical-level synthesis. The proposed techniques handle variation-aware lab-on-a-chip placement and routing co-design while satisfying all constraints and accounting for contamination and defects. Last, the first fully polynomial-time approximation scheme (FPTAS) is proposed for the delay-driven discrete gate sizing problem, providing a theoretical perspective where existing works are heuristics with no performance guarantee. The intellectual contribution of the proposed methods establishes a novel paradigm bridging the gaps between professional communities.
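
For the first contribution, the cross-entropy method iteratively samples candidate FRTU placements, scores them, and re-fits the sampling distribution to the elite samples. Below is a hedged sketch under assumed simplifications (independent Bernoulli site probabilities, a made-up linear benefit score, and an illustrative budget), not the dissertation's exact formulation:

```python
# Hedged sketch of the cross-entropy method for budget-constrained FRTU
# placement. The score function and budget handling are illustrative.
import numpy as np

def cross_entropy_placement(score, n_sites, budget, iters=50,
                            n_samples=200, elite_frac=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    p = np.full(n_sites, budget / n_sites)        # initial selection probabilities
    for _ in range(iters):
        x = rng.random((n_samples, n_sites)) < p  # sample candidate placements
        x = x[x.sum(axis=1) <= budget]            # enforce the FRTU budget
        if len(x) == 0:
            continue
        scores = np.array([score(xi) for xi in x])
        n_elite = max(1, int(elite_frac * len(x)))
        elite = x[np.argsort(scores)[-n_elite:]]  # keep the best placements
        p = 0.7 * p + 0.3 * elite.mean(axis=0)    # smoothed distribution update
    return p

# Example: prefer sites with high (made-up) cybersecurity benefit weights.
w = np.random.default_rng(1).random(30)
print(cross_entropy_placement(lambda x: w @ x, n_sites=30, budget=5).round(2))
```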

Relevance: 20.00%
Publisher:
Abstract:

The increasing use of nanomaterials in consumer products and biomedical applications creates the possibility of intentional or unintentional exposure to humans and the environment. Beyond a physiological limit, nanomaterial exposure can induce toxicity in humans. Nanoparticle toxicity is difficult to define because it varies with nanomaterial composition, size, surface properties, and the target organ or cell line. Traditional tests for nanomaterial toxicity assessment are mostly based on bulk colorimetric assays; in many studies, nanomaterials have been found to interfere with the assay dye and produce false results, and these tests usually require several hours or days to yield results. There is therefore a clear need for alternative tools that can provide accurate, rapid, and sensitive initial nanomaterial screening. Recent advances in single-cell studies have revealed cell properties not observed in traditional bulk assays, and a complex phenomenon like nanotoxicity may become clearer when studied at the single-cell level, including in small colonies of cells. Advances in lab-on-a-chip techniques have played a significant role in drug discovery and biosensor applications, but they have rarely been explored for nanomaterial toxicity assessment. We present such a cell-integrated, chip-based approach that provides a quantitative and rapid readout of cell health through electrochemical measurements. Moreover, the novel design of the device presented in this study is capable of capturing and analyzing cells at the single-cell and small-cell-population levels. We examined the change in the exocytosis (i.e., neurotransmitter release) properties of a single PC12 cell when exposed to CuO and TiO2 nanoparticles, and found that both nanomaterials interfere with cell exocytosis. We also studied the whole-cell response of a single cell and a small cell population simultaneously in real time for the first time. This study can serve as a reference for future research toward miniature, simple, and cost-effective tools for fast, quantitative nanotoxicity measurements at high throughput. The lab-on-a-chip device and measurement techniques presented here can also be applied to assess the toxicity of other nanoparticles.
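
The abstract does not detail how release events are extracted from the electrochemical recordings; a common amperometry approach, shown here purely as an illustrative stand-in, is to flag current spikes that rise above the baseline noise:

```python
# Illustrative sketch: detecting exocytosis events in an amperometric current
# trace by thresholding above robust baseline noise. The synthetic trace is a
# stand-in for the chip's electrochemical measurements.
import numpy as np

def detect_spikes(current, k=5.0):
    """Flag samples more than k robust standard deviations above baseline."""
    baseline = np.median(current)
    noise = 1.4826 * np.median(np.abs(current - baseline))  # MAD-based sigma
    return np.flatnonzero(current > baseline + k * noise)

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 5000)        # baseline noise (arbitrary pA scale)
trace[[1200, 2500, 4100]] += 30.0         # three injected release events
print(detect_spikes(trace))               # -> indices at the injected events
```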

Relevance: 20.00%
Publisher:
Abstract:

Today, modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining hardware circuit size. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially as energy consumption and chip area become two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of a software application can be identified using profiling tools, and hardware acceleration can deliver significant performance improvement for mathematically intensive or frequently repeated functions. The performance of an SoC system can then be improved by applying hardware acceleration to the elements that incur performance overheads. The concepts in this study can be readily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform, and two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed; the trade-off among these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
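
The profile-then-accelerate workflow is ultimately bounded by Amdahl's law: overall speedup depends on how much of the runtime the accelerated hotspot accounts for. A one-function sketch with illustrative numbers, not the measurements reported above:

```python
# Sanity-check sketch: Amdahl's law relating the profiled hotspot fraction
# to overall system speedup. The fractions below are illustrative.
def overall_speedup(hotspot_frac, accel_speedup):
    """Speedup of the whole application when only the hotspot is accelerated."""
    return 1.0 / ((1.0 - hotspot_frac) + hotspot_frac / accel_speedup)

# e.g. a hotspot taking 90% of runtime, accelerated 20x in hardware:
print(round(overall_speedup(0.90, 20.0), 2))   # ~6.9x overall
```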

Relevance: 10.00%
Publisher:
Abstract:

A surface plasmon resonance-based solution affinity assay is described for measuring the Kd of binding of heparin/heparan sulfate-binding proteins with a variety of ligands. The assay involves the passage of a pre-equilibrated solution of protein and ligand over a sensor chip onto which heparin has been immobilised. Heparin sensor chips prepared by four different methods, including biotin–streptavidin affinity capture and direct covalent attachment to the chip surface, were successfully used in the assay and gave similar Kd values. The assay is applicable to a wide variety of heparin/HS-binding proteins of diverse structure and function (e.g., FGF-1, FGF-2, VEGF, IL-8, MCP-2, ATIII, PF4) and to ligands of varying molecular weight and degree of sulfation (e.g., heparin, PI-88, sucrose octasulfate, naphthalene trisulfonate) and is thus well suited for the rapid screening of ligands in drug discovery applications.
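
In this solution competition format the chip response tracks the free protein remaining in the pre-equilibrated mixture, so the Kd can be recovered by fitting the 1:1 equilibrium binding quadratic across the ligand titration. A sketch on synthetic data; the fixed protein concentration, response scaling, and "true" Kd are assumptions, not values from the paper:

```python
# Hedged sketch of extracting Kd from a solution-competition SPR assay:
# response ~ free protein, and free protein follows the 1:1 binding quadratic.
import numpy as np
from scipy.optimize import curve_fit

P0 = 50e-9  # total protein concentration, M (assumed fixed in the titration)

def free_protein(L0, Kd, rmax):
    s = P0 + L0 + Kd
    bound = (s - np.sqrt(s * s - 4.0 * P0 * L0)) / 2.0  # [PL] from 1:1 model
    return rmax * (P0 - bound) / P0                     # response ~ free P

L0 = np.logspace(-9, -5, 12)                            # ligand titration, M
resp = free_protein(L0, 80e-9, 100.0)                   # synthetic data, Kd = 80 nM
(Kd_fit, rmax_fit), _ = curve_fit(free_protein, L0, resp, p0=(1e-7, 90.0))
print(f"fitted Kd = {Kd_fit * 1e9:.1f} nM")             # recovers ~80 nM
```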

Relevance: 10.00%
Publisher:
Abstract:

A major focus of research in nanotechnology is the development of novel, high-throughput techniques for the fabrication of arbitrarily shaped surface nanostructures at sub-100 nm to atomic scale. A related pursuit is the development of simple and efficient means for the parallel manipulation and redistribution of adsorbed atoms, molecules, and nanoparticles on surfaces: adparticle manipulation. These techniques will be used for the manufacture of nanoscale surface-supported functional devices in nanotechnologies such as quantum computing, molecular electronics, and lab-on-a-chip, as well as for modifying surfaces to obtain novel optical, electronic, chemical, or mechanical properties.

A favourable approach to the formation of surface nanostructures is self-assembly, in which nanostructures grow by the aggregation of individual adparticles that diffuse by thermally activated processes on the surface. The passive nature of this process means it is generally not suited to the formation of arbitrarily shaped structures. The self-assembly of nanostructures at arbitrary positions has been demonstrated, though it has typically required pre-patterning of the surface using sophisticated techniques such as electron beam lithography. A parallel adparticle manipulation technique, on the other hand, would be suited to directing the self-assembly process to arbitrary positions without the need for pre-patterning. There is at present a lack of techniques for the parallel manipulation and redistribution of adparticles to arbitrary positions on a surface; this is an issue that needs to be addressed, since such techniques can play an important role in nanotechnology.

In this thesis, we propose such a technique: thermal tweezers. In thermal tweezers, adparticles are redistributed by localised heating of the surface, which locally enhances surface diffusion so that adparticles rapidly diffuse away from the heated regions. Using this technique, the redistribution of adparticles into a desired pattern is achieved by heating the surface at specific regions. In this project, we have focussed on the holographic implementation of this approach, in which the surface is heated by holographic patterns of interfering pulsed laser beams. This implementation is suitable for the formation of arbitrarily shaped structures; the only condition is that the shape can be produced by holographic means. In the simplest case, the laser pulses are linearly polarised and intersect to form an interference pattern that modulates intensity along a single direction. Strong optical absorption at the intensity maxima of the interference pattern results in an approximately sinusoidal variation of the surface temperature along that direction.

The main aim of this research project is to investigate the feasibility of the holographic implementation of thermal tweezers as an adparticle manipulation technique. Firstly, we investigate theoretically the surface diffusion of adparticles in the presence of a sinusoidal modulation of the surface temperature. Very strong redistribution of adparticles is predicted when there is strong interaction between the adparticle and the surface and the amplitude of the temperature modulation is ~100 K. We propose a thin metallic film deposited on a glass substrate and heated by interfering laser beams (at optical wavelengths) as a means of generating a very large amplitude of surface temperature modulation. Indeed, we predict theoretically, by numerical solution of the thermal conduction equation, that the amplitude of the temperature modulation on the metallic film can be much greater than 100 K when heated by nanosecond pulses with an energy of ~1 mJ. The formation of surface nanostructures less than 100 nm in width is predicted at optical wavelengths in this implementation of thermal tweezers. Furthermore, we propose a simple extension to the technique in which a spatial phase shift of the temperature modulation effectively doubles or triples the resolution; increased resolution is also predicted on reducing the wavelength of the laser pulses.

In addition, we present two distinctly different, computationally efficient numerical approaches for the theoretical investigation of the surface diffusion of interacting adparticles: the Monte Carlo Interaction Method (MCIM) and the Random Potential Well Method (RPWM). Using each of these approaches we have investigated thermal tweezers for the redistribution of both strongly and weakly interacting adparticles. We predict that strong interactions between adparticles can increase the effectiveness of thermal tweezers, demonstrating practically complete adparticle redistribution into the low-temperature regions of the surface. This is promising from the point of view of thermal tweezers applied to the directed self-assembly of nanostructures.

Finally, we present a new and more efficient numerical approach to the theoretical investigation of thermal tweezers for non-interacting adparticles. In this approach, the local diffusion coefficient is determined from the solution of the Fokker-Planck equation; the diffusion equation is then solved numerically using the finite volume method (FVM) to obtain the probability density of adparticle position directly. We compare the predictions of this approach with those of the Ermak algorithm solution of the Langevin equation, and relatively good agreement is shown at intermediate and high friction. In the low-friction regime, we predict and investigate the phenomenon of 'optimal' friction and attribute its occurrence to very long jumps of adparticles as they diffuse from the hot regions of the surface. Future research directions, both theoretical and experimental, are also discussed.
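
The central mechanism, thermally activated hopping that is exponentially faster in the laser-heated regions, can be illustrated with a toy 1D lattice Monte Carlo under a sinusoidal temperature profile. This is a simplified stand-in for the MCIM/RPWM and Fokker-Planck machinery described above; the barrier and temperature values are assumptions:

```python
# Toy sketch of thermal-tweezers redistribution: non-interacting adparticles
# hop on a 1D lattice with Arrhenius rates set by a sinusoidal surface
# temperature, so they drain out of the hot regions. Parameters illustrative.
import numpy as np

kB = 8.617e-5          # Boltzmann constant, eV/K
Ea = 0.8               # diffusion barrier, eV (assumed)
T0, dT = 400.0, 100.0  # mean temperature and modulation amplitude, K
N_SITES, N_PART, STEPS = 200, 5000, 2000

rng = np.random.default_rng(0)
T = T0 + dT * np.sin(2 * np.pi * np.arange(N_SITES) / N_SITES)
hop = np.exp(-Ea / (kB * T))   # Arrhenius hop-out rate per site
hop /= hop.max()               # normalise: hottest site hops once per step

pos = rng.integers(0, N_SITES, N_PART)
for _ in range(STEPS):
    moving = rng.random(N_PART) < hop[pos]     # hot sites hop far more often
    step = rng.choice([-1, 1], N_PART)
    pos = np.where(moving, (pos + step) % N_SITES, pos)

# Compare the occupancy peak with the coldest site: particles accumulate
# in (and near) the low-temperature region.
print(np.bincount(pos, minlength=N_SITES).argmax(), T.argmin())
```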

Relevance: 10.00%
Publisher:
Abstract:

We describe the design and implementation of a public-key platform, secFleck, based on a commodity Trusted Platform Module (TPM) chip that extends the capability of a standard node. Unlike previous software public-key implementations, this approach provides e-commerce-grade security; is computationally fast and energy efficient; and has low financial cost, all essential attributes for secure large-scale sensor networks. We describe the secFleck message security services, such as confidentiality, authenticity, and integrity, and present performance results including computation time, energy consumption, and cost. This is followed by examples, built on secFleck, of symmetric key management, secure RPC, and secure software update.
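
The confidentiality, authenticity, and integrity services map onto standard RSA operations; on secFleck the private-key operations execute inside the TPM, but the message flow can be sketched with a software key. This uses the Python 'cryptography' package purely as a stand-in and is not the secFleck API:

```python
# Hedged sketch of the public-key message services secFleck provides.
# On the real node the RSA private-key operations run inside the TPM;
# here a software key makes the flow runnable on a PC.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

node_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
peer_pub = node_key.public_key()   # in practice, the other node's public key

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

msg = b"sensor reading: 23.4 C"
ciphertext = peer_pub.encrypt(msg, oaep)               # confidentiality
signature = node_key.sign(msg, pss, hashes.SHA256())   # authenticity, integrity

assert node_key.decrypt(ciphertext, oaep) == msg
peer_pub.verify(signature, msg, pss, hashes.SHA256())  # raises if tampered
print("message protected and verified")
```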

Relevance: 10.00%
Publisher:
Abstract:

This article presents the design and implementation of a trusted sensor node that provides Internet-grade security at low system cost. We describe trustedFleck, which uses a commodity Trusted Platform Module (TPM) chip to extend the capabilities of a standard wireless sensor node with security services such as message integrity, confidentiality, authenticity, and system integrity, based on RSA public-key and XTEA-based symmetric-key cryptography. In addition, trustedFleck provides secure storage of private keys and provides platform configuration registers (PCRs) to store system configurations and detect code tampering. We analyze system performance using metrics that are important for WSN applications, such as computation time, memory size, energy consumption, and cost. Our results show that trustedFleck significantly outperforms previous approaches (e.g., TinyECC) on these metrics while providing stronger security levels. Finally, we describe a number of examples, built on trustedFleck, of symmetric key management, secure RPC, secure software update, and remote attestation.
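
XTEA, the cipher behind trustedFleck's symmetric-key services, is compact enough to show in full. Below is the published reference algorithm (64-bit block, 128-bit key, 32 rounds) in plain Python for clarity; it is not the node's own implementation:

```python
# Reference sketch of XTEA block encryption/decryption.
MASK = 0xFFFFFFFF          # keep arithmetic in uint32
DELTA = 0x9E3779B9         # XTEA key-schedule constant

def xtea_encrypt(v0, v1, key, rounds=32):
    """Encrypt one 64-bit block (two uint32 halves) with a 4-word key."""
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(v0, v1, key, rounds=32):
    """Invert xtea_encrypt by running the rounds backwards."""
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
ct = xtea_encrypt(0xDEADBEEF, 0xCAFEBABE, key)
assert xtea_decrypt(*ct, key) == (0xDEADBEEF, 0xCAFEBABE)  # round-trip check
print(tuple(hex(w) for w in ct))
```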