930 results for TPM chip
Abstract:
Today, modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially as energy consumption and chip area become two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under the constraints of energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can deliver significant performance improvements for highly mathematical calculations or frequently repeated functions. The performance of SoC systems can then be improved if hardware acceleration is applied to the element that incurs the performance overhead. The concepts presented in this study can be readily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, a central bus (Bus-IP) design and a co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-offs among these three factors are compared and balanced. Different hardware accelerators are implemented and evaluated against system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, with hardware optimization techniques used to obtain higher performance at lower resource cost. Experimental results show that the proposed hardware acceleration workflow is an efficient technique: the Bus-IP design reaches a 2.8X performance improvement while saving 31.84% of energy consumption, and the Co-processor design reaches 7.9X while saving 75.85%.
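The reasoning above, profile to find the hotspot, offload it to a hardware accelerator, then weigh speedup against energy, is captured by Amdahl's law. A minimal sketch, with illustrative numbers rather than the thesis's measurements for hotspot fraction, accelerator speedup and power draw:

```python
# Hypothetical sketch: estimating system-level speedup and energy saving
# when a profiled hotspot is offloaded to a hardware accelerator.
# All figures below are illustrative assumptions, not thesis results.

def system_speedup(hotspot_fraction: float, accel_speedup: float) -> float:
    """Amdahl's law: only the hotspot portion of runtime is accelerated."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_speedup)

def energy_saving(hotspot_fraction: float, accel_speedup: float,
                  cpu_power: float, accel_power: float) -> float:
    """Fractional energy saved, comparing power x time before and after."""
    e_before = cpu_power * 1.0                      # normalised runtime of 1
    t_cpu = 1.0 - hotspot_fraction                  # unaccelerated portion
    t_accel = hotspot_fraction / accel_speedup      # accelerated portion
    e_after = cpu_power * t_cpu + accel_power * t_accel
    return 1.0 - e_after / e_before

# e.g. profiling shows a hotspot at 90% of runtime and a 20x accelerator
print(f"speedup: {system_speedup(0.9, 20.0):.2f}x")              # ~6.90x
print(f"energy saved: {energy_saving(0.9, 20.0, 2.0, 0.5):.1%}")
```

A 90% hotspot with a 20x accelerator bounds the overall speedup near 6.9x, the same order as the co-processor result quoted above.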
Abstract:
A surface plasmon resonance-based solution affinity assay is described for measuring the Kd of binding of heparin/heparan sulfate-binding proteins with a variety of ligands. The assay involves the passage of a pre-equilibrated solution of protein and ligand over a sensor chip onto which heparin has been immobilised. Heparin sensor chips prepared by four different methods, including biotin–streptavidin affinity capture and direct covalent attachment to the chip surface, were successfully used in the assay and gave similar Kd values. The assay is applicable to a wide variety of heparin/HS-binding proteins of diverse structure and function (e.g., FGF-1, FGF-2, VEGF, IL-8, MCP-2, ATIII, PF4) and to ligands of varying molecular weight and degree of sulfation (e.g., heparin, PI-88, sucrose octasulfate, naphthalene trisulfonate) and is thus well suited for the rapid screening of ligands in drug discovery applications.
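For a 1:1 binding model, the free-protein fraction that remains available to bind the heparin surface in such a pre-equilibrated mixture follows from the equilibrium quadratic. A minimal sketch, with hypothetical concentrations and Kd rather than values from the paper:

```python
import math

def free_protein(p_total: float, l_total: float, kd: float) -> float:
    """Free protein at equilibrium for 1:1 binding (all units match Kd).

    Kd = [P][L]/[PL] gives [PL]^2 - (P0 + L0 + Kd)[PL] + P0*L0 = 0;
    the smaller root is the physical one.
    """
    b = p_total + l_total + kd
    pl = (b - math.sqrt(b * b - 4.0 * p_total * l_total)) / 2.0
    return p_total - pl

# Titrating a ligand against 10 nM protein with an assumed Kd of 50 nM;
# the SPR response from the heparin chip tracks the free protein.
for l0 in (0.0, 25.0, 50.0, 100.0, 400.0):
    print(f"[L]0 = {l0:5.0f} nM -> free P = {free_protein(10.0, l0, 50.0):.2f} nM")
```

Fitting such a curve of free protein against ligand concentration is what yields a solution-phase Kd.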
Abstract:
A major focus of research in nanotechnology is the development of novel, high-throughput techniques for the fabrication of arbitrarily shaped surface nanostructures at sub-100 nm to atomic scales. A related pursuit is the development of simple and efficient means for the parallel manipulation and redistribution of adsorbed atoms, molecules and nanoparticles on surfaces, termed adparticle manipulation. These techniques will be used for the manufacture of nanoscale surface-supported functional devices in nanotechnologies such as quantum computing, molecular electronics and lab-on-a-chip, as well as for modifying surfaces to obtain novel optical, electronic, chemical, or mechanical properties. A favourable approach to the formation of surface nanostructures is self-assembly. In self-assembly, nanostructures are grown by aggregation of individual adparticles that diffuse by thermally activated processes on the surface. The passive nature of this process means it is generally not suited to the formation of arbitrarily shaped structures. The self-assembly of nanostructures at arbitrary positions has been demonstrated, though this has typically required a pre-patterning treatment of the surface using sophisticated techniques such as electron beam lithography. On the other hand, a parallel adparticle manipulation technique would be suited to directing the self-assembly process to occur at arbitrary positions, without the need for pre-patterning the surface. There is at present a lack of techniques for the parallel manipulation and redistribution of adparticles to arbitrary positions on the surface. This is an issue that needs to be addressed, since such techniques can play an important role in nanotechnology. In this thesis, we propose such a technique: thermal tweezers. In thermal tweezers, adparticles are redistributed by localised heating of the surface. This locally enhances the surface diffusion of adparticles so that they rapidly diffuse away from the heated regions. Using this technique, the redistribution of adparticles to form a desired pattern is achieved by heating the surface at specific regions. In this project, we have focussed on the holographic implementation of this approach, where the surface is heated by holographic patterns of interfering pulsed laser beams. This implementation is suitable for the formation of arbitrarily shaped structures; the only condition is that the shape can be produced by holographic means. In the simplest case, the laser pulses are linearly polarised and intersect to form an interference pattern that is a modulation of intensity along a single direction. Strong optical absorption at the intensity maxima of the interference pattern results in an approximately sinusoidal variation of the surface temperature along one direction. The main aim of this research project is to investigate the feasibility of the holographic implementation of thermal tweezers as an adparticle manipulation technique. Firstly, we investigate theoretically the surface diffusion of adparticles in the presence of a sinusoidal modulation of the surface temperature. Very strong redistribution of adparticles is predicted when there is strong interaction between the adparticle and the surface and the amplitude of the temperature modulation is ~100 K. We propose a thin metallic film deposited on a glass substrate and heated by interfering laser beams (at optical wavelengths) as a means of generating a very large amplitude of surface temperature modulation.
Indeed, we predict theoretically, by numerical solution of the thermal conduction equation, that the amplitude of the temperature modulation on the metallic film can be much greater than 100 K when heated by nanosecond pulses with an energy of ~1 mJ. The formation of surface nanostructures of less than 100 nm in width is predicted at optical wavelengths in this implementation of thermal tweezers. Furthermore, we propose a simple extension to this technique in which a spatial phase shift of the temperature modulation effectively doubles or triples the resolution. Increased resolution is likewise predicted on reducing the wavelength of the laser pulses. In addition, we present two distinctly different, computationally efficient numerical approaches for the theoretical investigation of the surface diffusion of interacting adparticles: the Monte Carlo Interaction Method (MCIM) and the random potential well method (RPWM). Using each of these approaches, we have investigated thermal tweezers for the redistribution of both strongly and weakly interacting adparticles. We have predicted that strong interactions between adparticles can increase the effectiveness of thermal tweezers, demonstrating practically complete adparticle redistribution into the low-temperature regions of the surface. This is promising from the point of view of thermal tweezers applied to the directed self-assembly of nanostructures. Finally, we present a new and more efficient numerical approach to the theoretical investigation of thermal tweezers for non-interacting adparticles. In this approach, the local diffusion coefficient is determined from the solution of the Fokker-Planck equation. The diffusion equation is then solved numerically using the finite volume method (FVM) to directly obtain the probability density of adparticle position. We compare the predictions of this approach to those of the Ermak algorithm solution of the Langevin equation, and relatively good agreement is shown at intermediate and high friction. In the low-friction regime, we predict and investigate the phenomenon of 'optimal' friction and attribute its occurrence to the very long jumps of adparticles as they diffuse from the hot regions of the surface. Future research directions, both theoretical and experimental, are also discussed.
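As a rough illustration of this kind of prediction, the sketch below evolves a one-dimensional density of non-interacting adparticles over a sinusoidal temperature profile with an Arrhenius hopping rate, using an explicit finite-volume-style update. It idealises thermally activated hopping as dn/dt = d²(D(x)n)/dx², whose steady state concentrates particles in the cold regions; the parameter values are assumptions, and this is not the thesis's FVM or Fokker-Planck code:

```python
import numpy as np

kB = 8.617e-5              # Boltzmann constant, eV/K
Ea = 0.1                   # hopping activation energy, eV (assumed, weak binding)
T0, dT = 300.0, 100.0      # mean surface temperature and modulation amplitude, K
L, N = 1.0, 100            # period of the heating pattern (arb. units), grid cells

x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
T = T0 + dT * np.sin(2.0 * np.pi * x / L)
D = np.exp(-Ea / (kB * T))             # Arrhenius rate, prefactor normalised to 1

n = np.ones(N)                         # initially uniform adparticle coverage
dt = 0.4 * dx**2 / D.max()             # explicit stability limit
for _ in range(200_000):
    f = D * n                          # local hop flux density
    n += dt / dx**2 * (np.roll(f, 1) - 2.0 * f + np.roll(f, -1))  # periodic BC

# Steady state approaches n ~ 1/D(x): adparticles pile up where it is cold.
print("density contrast n_max/n_min:", round(n.max() / n.min(), 1))
```

With a larger activation energy (stronger adparticle-surface interaction) the predicted contrast grows enormously, consistent with the very strong redistribution noted above.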
Abstract:
This research paper aims to develop a method for exploring the differences in travel behaviour between disadvantaged and non-disadvantaged populations. It also aims to develop a modelling approach, or framework, for integrating disadvantage analysis into transportation planning models (TPMs). The methodology identifies significantly disadvantaged groups through a cluster analysis, and the paper presents a disadvantage-integrated TPM. This model could be useful for identifying areas with concentrations of disadvantaged populations and for developing and formulating relevant disadvantage-sensitive policies. (a) For the covering entry of this conference, please see ITRD abstract no. E214666.
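A purely illustrative sketch of the clustering step described above, grouping synthetic travel-behaviour indicators and profiling each cluster to flag candidate transport-disadvantaged groups; the feature names and data are hypothetical, not the paper's:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical columns: household income ($), cars per household,
# trips per day, average trip time (min).
X = rng.normal(loc=[55_000, 1.5, 3.2, 28.0],
               scale=[20_000, 0.8, 1.1, 12.0], size=(500, 4))

Xs = StandardScaler().fit_transform(X)     # put features on comparable scales
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xs)

# Profile each cluster; low-income, low-mobility clusters would be the
# candidate disadvantaged groups fed into a disadvantage-integrated TPM.
for k in range(4):
    print(k, X[labels == k].mean(axis=0).round(1))
```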
Abstract:
Studying the rate of cell migration provides insight into fundamental cell biology as well as a tool to assess the functionality of synthetic surfaces and soluble environments used in tissue engineering. The traditional tools used to study cell migration include the fence and wound healing assays. In this paper we describe the development of a microchannel-based device for the study of cell migration on defined surfaces. We demonstrate that this device provides a superior tool, relative to the previously mentioned assays, for assessing the propagation rate of cell wave fronts. The significant advantage provided by this technology is the ability to maintain a virgin surface prior to the commencement of the cell migration assay. Here, the device is used to assess the migration rates of mouse fibroblast (NIH 3T3) and human osteosarcoma (SaOS2) cells on surfaces functionalized with various extracellular matrix proteins, demonstrating that confining cell migration within a microchannel produces consistent and robust data. The device design enables rapid and simple assessment of multiple repeats on a single chip, where the surfaces have not previously been exposed to cells or cellular secretions.
Abstract:
A series of polymers with a comb architecture were prepared in which the poly(olefin sulfone) backbone was designed to be highly sensitive to extreme ultraviolet (EUV) radiation, while the well-defined poly(methyl methacrylate) (PMMA) arms were incorporated with the aim of increasing structural stability. It is hypothesised that upon EUV irradiation rapid degradation of the polysulfone backbone occurs, leaving behind the well-defined PMMA arms. The synthesised polymers were characterised and their performance as chain-scission EUV photoresists was evaluated. It was found that all materials possess high sensitivity towards degradation by EUV radiation (E0 in the range 4–6 mJ cm−2). Selective degradation of the poly(1-pentene sulfone) backbone relative to the PMMA arms was demonstrated by mass spectrometry headspace analysis during EUV irradiation and by grazing-angle ATR-FTIR. EUV interference patterning has shown that the materials are capable of resolving 30 nm 1:1 line:space features. The incorporation of PMMA was found to increase the structural integrity of the patterned features. Thus, it has been shown that terpolymer materials possessing a highly sensitive poly(olefin sulfone) backbone and PMMA arms provide a tuneable materials platform for chain-scission EUV resists. These materials have the potential to benefit applications that require nanopatterning, such as computer chip manufacture and nano-MEMS.
Abstract:
This paper investigates a field programmable gate array (FPGA) approach to multi-objective and multi-disciplinary design optimisation (MDO) problems. One class of optimisation methods that is well studied and established for large and complex problems, such as those inherent in MDO, is multi-objective evolutionary algorithms (MOEAs). The MOEA nondominated sorting genetic algorithm II (NSGA-II) is implemented in hardware on an FPGA chip. Applying the FPGA-based NSGA-II to multi-objective test problem suites verified the effectiveness of the implementation. Results show that NSGA-II on the FPGA outperforms its PC-based counterpart by three orders of magnitude.
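The kernel that dominates NSGA-II's runtime, and hence the natural target for hardware acceleration, is non-dominated sorting. A pure-software reference sketch of that step (minimisation of all objectives assumed; this is not the paper's hardware description):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(pop):
    """Return Pareto fronts as lists of indices into pop (NSGA-II style)."""
    S = [[] for _ in pop]              # indices each solution dominates
    n = [0] * len(pop)                 # how many solutions dominate i
    fronts = [[]]
    for i, p in enumerate(pop):
        for j, q in enumerate(pop):
            if dominates(p, q):
                S[i].append(j)
            elif dominates(q, p):
                n[i] += 1
        if n[i] == 0:
            fronts[0].append(i)        # never dominated: first front
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:          # all of j's dominators already ranked
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

print(non_dominated_sort([(1, 5), (2, 2), (3, 1), (4, 4)]))
# -> [[0, 1, 2], [3]]: (4, 4) is dominated by (2, 2) and (3, 1)
```

The O(MN²) pairwise comparisons in this step are a regular, parallelisable workload of the kind that maps well onto FPGA logic.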
Abstract:
In this paper, a hardware-based path planning architecture for unmanned aerial vehicle (UAV) adaptation is proposed. The architecture aims to provide UAVs with higher autonomy using an application-specific evolutionary algorithm (EA) implemented entirely on a field programmable gate array (FPGA) chip. The physical attributes of an FPGA chip, compact size and low power consumption, make it an ideal platform for UAV applications. The design, implemented entirely in hardware, consists of EA modules, population storage resources, and the three-dimensional terrain information necessary to the path planning process, subject to constraints accounted for separately via UAV, environment and mission profiles. The architecture has been successfully synthesised for a target Xilinx Virtex-4 FPGA platform with 32% logic slice utilisation. Results obtained from case studies for a small UAV helicopter, with the environment derived from LIDAR (Light Detection and Ranging) data, verify the effectiveness of the proposed FPGA-based path planner and demonstrate convergence at rates above the typical 10 Hz update frequency of an autopilot system.
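A toy, software-only sketch of the sort of evolutionary path planner such an architecture implements in hardware: candidate waypoint paths are scored against terrain and iteratively mutated. The terrain grid, fitness terms and EA parameters are illustrative assumptions, not the paper's design:

```python
import random

random.seed(1)
TERRAIN = [[random.uniform(0.0, 50.0) for _ in range(20)] for _ in range(20)]
START, GOAL = (0.0, 0.0), (19.0, 19.0)
N_WAY, POP, GENS = 6, 40, 200          # waypoints, population, generations

def fitness(path):
    """Penalise total path length plus a terrain-clearance term."""
    pts = [START] + path + [GOAL]
    cost = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        cost += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        cost += 0.2 * TERRAIN[int(x1)][int(y1)]
    return cost

def random_path():
    return [(random.uniform(0, 19), random.uniform(0, 19)) for _ in range(N_WAY)]

def mutate(path):
    """Perturb one waypoint, clamped to the terrain bounds."""
    path = list(path)
    i = random.randrange(N_WAY)
    x, y = path[i]
    path[i] = (min(19.0, max(0.0, x + random.gauss(0, 2))),
               min(19.0, max(0.0, y + random.gauss(0, 2))))
    return path

pop = [random_path() for _ in range(POP)]
for _ in range(GENS):                   # elitist (mu + lambda)-style loop
    pop.sort(key=fitness)
    elite = pop[:POP // 2]
    pop = elite + [mutate(random.choice(elite)) for _ in range(POP - len(elite))]
print("best path cost:", round(min(fitness(p) for p in pop), 1))
```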
Abstract:
The current investigation reports on diesel particulate matter emissions, with special interest in fine particles from the combustion of two base fuels. The base fuels selected were diesel fuel and marine gas oil (MGO). The experiments were conducted with a four-stroke, six-cylinder, direct injection diesel engine. The results showed that the fine particle number emissions measured by both SMPS and ELPI were higher with MGO compared to diesel fuel. It was observed that the fine particle number emissions with the two base fuels were quantitatively different but qualitatively similar. The gravimetric (mass basis) measurement also showed higher total particulate matter (TPM) emissions with the MGO. The smoke emissions, which were part of TPM, were also higher for the MGO. No significant changes in the mass flow rate of fuel and the brake-specific fuel consumption (BSFC) were observed between the two base fuels.
Abstract:
During a recent study of how parents source information about children's early learning, one of us made our first serious foray into a local store licensed to the global chain Toys 'R' Us. While walking the aisles, closely observing layout, signage and stock, several things became obvious. Firstly, large numbers of toys were labeled 'educational'. Secondly, many toys in that category were intended for children under the age of two years. These were further differentiated as intended for 'babies' or 'infants', and sub-categorized on packaging or shelving using even smaller age increments (e.g. 0-3 months, 12-18 months, and so on). Thirdly, many products were labeled as 'interactive' and 'learning' toys that promised to assist children's early learning and development. The activation of some of these toys relied on embedded computer chip technology and promised to 'connect' children with the home television, computer and the Internet. These products were hybrids between a toy and a platform for digital media interaction. Closer inspection of toy packaging and other promotional material suggested that industry had begun to invest heavily in developing highly differentiated children's markets for products that yoked together concepts of learning and development, the 'fun toy' that incorporates digital technology, and offline and online participation. In this chapter we explore the growth of this contemporary cultural phenomenon that now connects books, toys and mobile digital media with children's play and learning.
Abstract:
In Strong v Woolworths Ltd (t/as Big W) (2012) 285 ALR 420 the appellant was injured when she fell at a shopping centre outside the respondent's premises. The appellant was disabled, her right leg having been amputated above the knee, and therefore walked with crutches. One of the crutches came into contact with a hot potato chip which was on the floor, causing the crutch to slip and the appellant to fall. The appellant sued in negligence, alleging that the respondent was in breach of its duty of care by failing to institute and maintain a cleaning system to detect spillages and foreign objects within its sidewalk sales area. The issue before the High Court was whether it could be established, on the balance of probabilities, when the hot chip had fallen onto the ground, so as to prove causation in fact...
Abstract:
Many computationally intensive scientific applications involve repetitive floating point operations other than addition and multiplication which may present a significant performance bottleneck due to the relatively large latency or low throughput involved in executing such arithmetic primitives on commodity processors. A promising alternative is to execute such primitives on Field Programmable Gate Array (FPGA) hardware acting as an application-specific custom co-processor in a high performance reconfigurable computing platform. The use of FPGAs can provide advantages such as fine-grain parallelism, but issues relating to code development in a hardware description language and efficient data transfer to and from the FPGA chip can present significant application development challenges. In this paper, we discuss our practical experiences in developing a selection of floating point hardware designs to be implemented using FPGAs. Our designs include some basic mathematical library functions which can be implemented for user-defined precisions, suitable for novel applications requiring non-standard floating point representation. We discuss the details of our designs along with results from performance and accuracy analysis tests.
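One way to see what a user-defined precision means in practice is to emulate it in software by rounding values to a chosen mantissa width. A hedged sketch (round-to-nearest-even on a reduced mantissa, exponent-range effects ignored; an illustration, not the paper's HDL designs):

```python
import math

def quantise(x: float, mant_bits: int) -> float:
    """Round x to a float carrying mant_bits fractional mantissa bits."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    scale = 1 << (mant_bits + 1)      # +1 accounts for the leading bit
    return math.ldexp(round(m * scale) / scale, e)  # round() is half-to-even

# Error of pi at a few custom precisions (23 bits ~ IEEE single precision)
for bits in (4, 10, 23):
    q = quantise(math.pi, bits)
    print(f"{bits:2d} bits: {q!r}  abs error {abs(q - math.pi):.2e}")
```

Narrower mantissas trade accuracy for smaller, faster arithmetic units, the trade-off that performance and accuracy tests of such designs explore.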
Abstract:
The use of the Trusted Platform Module (TPM) is becoming increasingly popular in many security systems. To access objects protected by the TPM (such as cryptographic keys), several cryptographic protocols, such as the Object Specific Authorization Protocol (OSAP), can be used. Given the sensitivity and the importance of the objects protected by the TPM, the security of this protocol is vital. Formal methods allow a precise and complete analysis of cryptographic protocols, such that their security properties can be asserted with high assurance. Unfortunately, formal verification of these protocols is limited, despite the abundance of formal tools that one can use. In this paper, we demonstrate the use of Coloured Petri Nets (CPN), a type of formal technique, to formally model the OSAP. Using this model, we then verify the authentication property of this protocol using the state space analysis technique. The results of the analysis demonstrate that, as reported by Chen and Ryan, the authentication property of OSAP can be violated.
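For orientation, a simplified sketch of the OSAP secret derivation that such a model captures, based on the TPM 1.2 authorisation model; field layouts are abridged and the byte-level details here are assumptions rather than the paper's CPN model:

```python
import hashlib
import hmac
import os

def hmac_sha1(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha1).digest()

# Session setup: caller and TPM exchange OSAP nonces, and both derive the
# shared secret from the protected object's 20-byte authorisation data.
usage_auth = hashlib.sha1(b"object password").digest()
nonce_even_osap = os.urandom(20)        # generated by the TPM
nonce_odd_osap = os.urandom(20)         # generated by the caller
shared_secret = hmac_sha1(usage_auth, nonce_even_osap + nonce_odd_osap)

# Authorising a command: HMAC over the command parameter digest and the
# rolling session nonces, keyed with the shared secret.
param_digest = hashlib.sha1(b"ordinal || params").digest()
nonce_even, nonce_odd = os.urandom(20), os.urandom(20)
continue_session = b"\x01"
auth = hmac_sha1(shared_secret,
                 param_digest + nonce_even + nonce_odd + continue_session)
print(auth.hex())
```

Because the shared secret depends only on the object's authdata and two nonces exchanged in the clear, a party who knows or can guess weak authdata can compute it as well, a concern in the spirit of the violation reported by Chen and Ryan.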