941 results for Field-Programmable Gate Array (FPGA)


Relevance:

30.00%

Publisher:

Abstract:

Organic printed electronics has attracted ever-growing interest over the last decades because of impressive breakthroughs in the chemical design of π-conjugated materials and their processing. This has enabled novel applications such as flexible large-area displays, low-cost printable circuits, plastic solar cells, and lab-on-a-chip devices. The organic field-effect transistor (OFET) relies on a thin film of organic semiconductor that bridges the source and drain electrodes. Since its first demonstration in the 1980s, intensive research has been devoted to controlling the physico-chemical properties of these devices and, consequently, their charge transport. Self-assembled monolayers (SAMs) are a versatile tool for tuning the properties of metallic, semiconducting, and insulating surfaces. Within this context, OFETs are reliable instruments for measuring the electrical properties of SAMs in a metal/SAM/organic-semiconductor junction. Our experimental approach, named Charge Injection Organic-Gauge (CIOG), uses an organic thin-film transistor (OTFT) in a charge-injection-controlled regime. The sensitivity of the CIOG has been extensively demonstrated on homologous self-assembling molecules that differ either in chain length or in anchor/terminal group. One of the latest applications of organic electronics is so-called "bioelectronics", which uses electronic devices to address needs of medical science, such as biosensors and biotransducers. Accordingly, the second part of this thesis deals with the realization of an electronic transducer based on an organic field-effect transistor operating in aqueous media. Here, the conventional bottom-gate/bottom-contact configuration is replaced by a top-gate architecture in which the electrolyte ensures electrical contact between the top gold electrode and the semiconductor layer. This configuration is named the electrolyte-gated organic field-effect transistor (EGOFET). The functionalization of the top electrode is the sensing core of the device, allowing the detection of dopamine as well as of protein biomarkers at ultra-low concentrations.

Relevance:

30.00%

Publisher:

Abstract:

Nowadays the rise of non-recurring engineering (NRE) costs associated with complexity is becoming a major factor in SoC design, limiting both scaling opportunities and the flexibility advantages offered by the integration of complex computational units. The introduction of embedded programmable elements can be an appealing solution, able both to guarantee the desired flexibility and upgradability and to widen the SoC market. In particular, embedded FPGA (eFPGA) cores can provide bit-level optimization for those applications which benefit from synthesis, paying on the other hand in terms of performance penalties and area overhead with respect to standard-cell ASIC implementations. In this scenario, this thesis proposes a design methodology for a synthesizable programmable device designed to be embedded in a SoC. A soft-core embedded FPGA (eFPGA) is presented and analyzed in terms of the opportunities offered by a fully synthesizable approach, following an implementation flow based on a standard-cell methodology. A key point of the proposed eFPGA template is that it adopts a Multi-Stage Switching Network (MSSN) as the foundation of the programmable interconnect, since it can be efficiently synthesized and optimized through a standard-cell-based implementation flow while ensuring an intrinsically congestion-free network topology. The flexibility potential of the eFPGA has been evaluated using different technology libraries (STMicroelectronics CMOS 65nm and BCD9s 0.11μm) through a design-space exploration in terms of area-speed-leakage trade-offs, enabled by the full synthesizability of the template. Since the most relevant disadvantage of the adopted soft approach, compared to a hard core, is the increased performance overhead, the eFPGA analysis targets small area budgets. The configuration bitstream is generated by a custom CAD flow, which has allowed functional verification and performance evaluation through an application-aware analysis.
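
The abstract does not detail the MSSN topology, but the appeal of a multi-stage network as a synthesizable interconnect can be illustrated with a back-of-the-envelope switch count. The sketch below is a hedged illustration only, assuming a rearrangeably non-blocking Beneš network built from 2×2 switches as a representative MSSN, and compares it with a flat N×N crossbar.

```python
import math

def crossbar_crosspoints(n: int) -> int:
    """A full N x N crossbar needs one crosspoint per input/output pair."""
    return n * n

def benes_switches(n: int) -> int:
    """A Benes network on N = 2^k terminals has (2*log2(N) - 1) stages,
    each made of N/2 two-by-two switches, and is rearrangeably non-blocking."""
    k = int(math.log2(n))
    assert 2 ** k == n, "this sketch assumes N is a power of two"
    return (2 * k - 1) * (n // 2)

for n in (16, 64, 256, 1024):
    print(f"N={n:4d}  crossbar={crossbar_crosspoints(n):7d}  benes={benes_switches(n):5d}")
```

The roughly N·log N growth in switch count, together with the regular stage structure, is one reason such networks are attractive for a standard-cell synthesis flow.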

Relevance:

30.00%

Publisher:

Abstract:

In the past few decades, integrated circuits have become a major part of everyday life. Every circuit that is created needs to be tested for faults so that faulty circuits are not sent to end-users. The creation of these tests is time consuming, costly, and difficult to perform on larger circuits. This research presents a novel method for fault detection and test pattern reduction in integrated circuits under test. By leveraging the FPGA's reconfigurability and parallel processing capabilities, a speed-up in fault detection can be achieved over previous computer simulation techniques. This work makes the following contributions to the field of stuck-at fault detection. First, we present a new method for inserting faults into a circuit netlist: given any netlist, our tool inserts multiplexers at the appropriate internal nodes to aid fault emulation on reconfigurable hardware. Second, we present a parallel method of fault emulation: the benefit of the FPGA is not only its ability to implement any circuit, but also its ability to process data in parallel, and this research exploits that capability to create a more efficient emulation method that implements numerous copies of the same circuit in the FPGA. Third, we present a new method for selecting the most efficient test inputs: most methods for determining the minimum number of inputs that cover the most faults require sophisticated software programs that use heuristics, whereas by utilizing hardware this research is able to process data faster and use a simpler method for minimizing the number of inputs.
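
To make the multiplexer-insertion idea concrete, here is a hedged, simplified sketch (not the thesis tool, and in Python rather than HDL): each internal net of a small gate-level netlist is driven through a 2:1 mux whose select input, when asserted, forces the net to a constant value, emulating a stuck-at fault.

```python
# Hedged sketch: emulate stuck-at faults by driving each internal net
# through a 2:1 mux. When no fault is injected the circuit behaves normally;
# when a net is selected, it is forced to `stuck_value`.
from typing import Callable, Dict

Net = str
Gate = Callable[[Dict[Net, int]], int]

def and2(a: Net, b: Net) -> Gate:
    return lambda v: v[a] & v[b]

def or2(a: Net, b: Net) -> Gate:
    return lambda v: v[a] | v[b]

def mux(normal: int, stuck_value: int, inject: bool) -> int:
    """2:1 multiplexer used as the fault-injection point."""
    return stuck_value if inject else normal

def evaluate(netlist: Dict[Net, Gate], inputs: Dict[Net, int],
             fault_net=None, stuck_value=0) -> Dict[Net, int]:
    values = dict(inputs)
    for net, gate in netlist.items():          # assumes topological order
        values[net] = mux(gate(values), stuck_value, inject=(net == fault_net))
    return values

# Toy circuit: y = (a AND b) OR c
netlist = {"n1": and2("a", "b"), "y": or2("n1", "c")}
good = evaluate(netlist, {"a": 1, "b": 1, "c": 0})
faulty = evaluate(netlist, {"a": 1, "b": 1, "c": 0}, fault_net="n1", stuck_value=0)
print(good["y"], faulty["y"])   # 1 0 -> the pattern (1,1,0) detects n1 stuck-at-0
```

On an FPGA, many such instrumented copies of the circuit can be evaluated in parallel, each with a different fault selected, which is the source of the speed-up described above.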

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: The objective of this study was to evaluate the feasibility and reproducibility of high-resolution magnetic resonance imaging (MRI) and quantitative T2 mapping of the talocrural cartilage within a clinically applicable scan time using a new dedicated ankle coil and high-field MRI. MATERIALS AND METHODS: Ten healthy volunteers (mean age 32.4 years) underwent MRI of the ankle. For morphological imaging, a proton-density fat-suppressed turbo spin-echo (PD-FS-TSE) sequence, serving as the reference, was compared with 3D true fast imaging with steady-state precession (TrueFISP). Furthermore, biochemical quantitative T2 imaging was performed using a multi-echo spin-echo T2 approach. Data analysis was performed three times each by three different observers on sagittal slices planned on the isotropic 3D-TrueFISP data; cartilage thickness was assessed as the morphological parameter, and region-of-interest (ROI) evaluation was used for the T2 relaxation times. Reproducibility was determined as the coefficient of variation (CV) for each volunteer, averaged as a root-mean-square average (RMSA) given as a percentage; statistical evaluation was performed using analysis of variance. RESULTS: Cartilage thickness of the talocrural joint showed significantly higher values for the 3D-TrueFISP (ranging from 1.07 to 1.14 mm) than for the PD-FS-TSE (ranging from 0.74 to 0.99 mm); however, both morphological sequences showed comparably good reproducibility, with RMSA values of 7.1% to 8.5%. Regarding quantitative T2 mapping, measurements showed T2 relaxation times of about 54 ms with excellent reproducibility (RMSA ranging from 3.2% to 4.7%). CONCLUSION: In our study the assessment of cartilage thickness and T2 relaxation times could be performed with high reproducibility in a clinically realizable scan time, demonstrating new possibilities for further investigations in patient groups.
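
For readers unfamiliar with the reproducibility metrics, the following sketch shows how a per-volunteer coefficient of variation and its root-mean-square average are typically computed. It is a generic, hedged illustration of the CV/RMSA definitions with made-up numbers, not the authors' analysis code.

```python
import numpy as np

def coefficient_of_variation(repeats: np.ndarray) -> float:
    """CV (%) of repeated measurements of the same quantity."""
    return 100.0 * repeats.std(ddof=1) / repeats.mean()

def rms_average(cvs: np.ndarray) -> float:
    """Pool per-subject CVs as a root-mean-square average (RMSA, %)."""
    return float(np.sqrt(np.mean(np.square(cvs))))

# Hypothetical example: 3 repeated T2 readings (ms) for each of 4 volunteers
t2 = np.array([[53.8, 54.6, 55.1],
               [52.9, 54.0, 53.5],
               [55.2, 54.1, 54.8],
               [53.0, 52.4, 53.9]])
cvs = np.array([coefficient_of_variation(row) for row in t2])
print(f"per-volunteer CV (%): {np.round(cvs, 2)}")
print(f"RMSA (%): {rms_average(cvs):.2f}")
```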

Relevance:

30.00%

Publisher:

Abstract:

With the development and capabilities of Smart Home systems, people today are entering an era in which household appliances are no longer just controlled by people but are also operated by a smart system. This results in a more efficient, convenient, comfortable, and environmentally friendly living environment. A critical part of the Smart Home system is home automation, which means that a micro-controller unit (MCU) controls all the household appliances and schedules their operating times. This reduces electricity bills by shifting power consumption from on-peak hours to off-peak hours according to the different hourly prices. In this paper, we propose an algorithm for scheduling multi-user power consumption and implement it on an FPGA board, which serves as the MCU. The algorithm schedules tasks with discrete power levels and is based on dynamic programming, which finds a scheduling solution close to the optimal one. We chose an FPGA as the system's controller because the FPGA has low complexity, parallel processing capability, and a large number of I/O interfaces for further development, and it is programmable in both software and hardware. In conclusion, the algorithm runs in little time on the FPGA board and the solution obtained is good enough for consumers.
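
The paper's scheduling algorithm is not reproduced here, but the flavour of dynamic programming over discrete power levels and hourly prices can be illustrated with the hedged sketch below. It assumes a single simplified load whose energy must be split into discrete units across hours, each with a price and a per-hour cap; the real multi-user algorithm handles richer constraints.

```python
from math import inf

def schedule_energy(total_units, hour_prices, cap_per_hour):
    """DP sketch: allocate `total_units` discrete power units across hours,
    at most `cap_per_hour` per hour, minimising total cost.
    Returns (min_cost, units_per_hour)."""
    H = len(hour_prices)
    # best[h][e] = minimum cost to place e units in hours h..H-1
    best = [[inf] * (total_units + 1) for _ in range(H + 1)]
    best[H][0] = 0.0
    choice = [[0] * (total_units + 1) for _ in range(H)]
    for h in range(H - 1, -1, -1):
        for e in range(total_units + 1):
            for u in range(min(cap_per_hour, e) + 1):
                cost = u * hour_prices[h] + best[h + 1][e - u]
                if cost < best[h][e]:
                    best[h][e] = cost
                    choice[h][e] = u
    # reconstruct the per-hour allocation
    plan, e = [], total_units
    for h in range(H):
        u = choice[h][e]
        plan.append(u)
        e -= u
    return best[0][total_units], plan

# 6 hours with on-peak/off-peak prices, 8 power units to place, max 3 per hour
cost, plan = schedule_energy(8, [0.30, 0.30, 0.12, 0.12, 0.12, 0.30], 3)
print(cost, plan)   # cost 0.96; all 8 units land in the off-peak hours
```

The table-based recursion maps naturally onto hardware, since the inner loops are fixed-size and independent of the input data values.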

Relevance:

30.00%

Publisher:

Abstract:

The present chapter gives a comprehensive introduction to the display and quantitative characterization of scalp field data. After introducing the construction of scalp field maps, different interpolation methods, the effect of the recording reference, and the computation of spatial derivatives are discussed. The arguments raised in this first part have important implications for resolving a potential ambiguity in the interpretation of differences of scalp field data. In the second part of the chapter, different approaches for comparing scalp field data are described. All of these comparisons can be interpreted unambiguously in terms of differences of intracerebral sources, either in strength or in location and orientation. In the present chapter we refer only to scalp field potentials, but mapping can also be used to display other features, such as power or statistical values; however, the rules for comparing and interpreting scalp field potentials might not apply to such data. Regarding the generic form of scalp field data, electroencephalogram (EEG) and event-related potential (ERP) recordings consist of one value for each sample in time and for each electrode. The recorded EEG and ERP data thus represent a two-dimensional array, with one dimension corresponding to the variable "time" and the other dimension corresponding to the variable "space", i.e., the electrode. Table 2.1 shows ERP measurements over a brief time period. The ERP data (averaged over a group of healthy subjects) were recorded with 19 electrodes during a visual paradigm. The parietal midline Pz electrode was used as the reference electrode.
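
Because the recordings form a channels-by-time array referenced to a single electrode, changing the reference is a simple linear operation. The numpy sketch below is a hedged illustration (hypothetical channel names and microvolt values, not data from the chapter) that converts Pz-referenced data to an average-reference montage.

```python
import numpy as np

# Hypothetical ERP data: rows = electrodes, columns = time samples,
# values in microvolts, referenced to Pz (so the Pz row is all zeros).
channels = ["Fz", "Cz", "Pz", "Oz"]
data_pz_ref = np.array([
    [1.2, 2.0, 3.1],
    [0.8, 1.5, 2.4],
    [0.0, 0.0, 0.0],
    [-0.5, -0.9, -1.2],
])

def to_average_reference(data: np.ndarray) -> np.ndarray:
    """Re-reference by subtracting the instantaneous mean over all electrodes.
    Any constant added by the original reference cancels out, which is why
    differences between maps do not depend on the recording reference."""
    return data - data.mean(axis=0, keepdims=True)

data_avg_ref = to_average_reference(data_pz_ref)
print(data_avg_ref.round(3))
print(data_avg_ref.sum(axis=0).round(6))  # sums to ~0 at every time point
```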

Relevance:

30.00%

Publisher:

Abstract:

Since approximately two thirds of epileptic patients are not eligible for surgery, local axonal fiber transections might be of particular interest for them. Micrometer- to millimeter-wide synchrotron-generated X-ray beamlets, produced by spatial fractionation of the main beam, could generate such fiber disruptions non-invasively. The aim of this work was to optimize the irradiation parameters for inducing fiber transections in the rat brain white matter by exposure to such beamlets. For this purpose, we irradiated the cortex and external capsule of normal rats in the antero-posterior direction with a 4 mm × 4 mm array of 25 to 1000 µm wide beamlets and entrance doses of 150 Gy to 500 Gy. Axonal fiber responses were assessed with diffusion tensor imaging and fiber tractography; myelin fibers were examined histopathologically. Our study suggests that high radiation doses (500 Gy) are required to interrupt axons and myelin sheaths. However, a radiation dose of 500 Gy delivered by wide minibeams (1000 µm) induced macroscopic brain damage, depicted by a massive loss of matter in fiber tractography maps. With the same radiation dose, the damage induced by thinner microbeams (50 to 100 µm) was limited to their paths. No macroscopic necrosis was observed in the irradiated target, while overt transections of myelin were detected histopathologically and diffusivity values were significantly reduced. A radiation dose ≤ 500 Gy combined with a beamlet size < 50 µm did not cause visible transections, either on diffusion maps or on sections stained for myelin. We conclude that a peak dose of 500 Gy combined with a microbeam width of 100 µm optimally induced axonal transections in the white matter of the brain.

Relevance:

30.00%

Publisher:

Abstract:

We present a conceptual prototype model of a focal plane array unit for the STEAMR instrument, highlighting the challenges posed by the instrument's required high relative beam proximity and focusing on how edge-diffraction effects contribute to the array's performance. The analysis was carried out as a comparative process using both PO & PTD (physical optics with the physical theory of diffraction) and MoM (method of moments) techniques. We first highlight general differences between these computational techniques, with the discussion focusing on diffractive edge effects for near-field imaging reflectors with high truncation. We then present the results of in-depth modeling analyses of the STEAMR focal plane array, followed by near-field antenna measurements of a breadboard model of the array. The results of these near-field measurements agree well with both simulation techniques, although MoM shows slightly higher complex beam coupling to the measurements than PO & PTD.

Relevance:

30.00%

Publisher:

Abstract:

High-brightness electron sources are of great importance for the operation of hard X-ray free-electron lasers. Field emission cathodes based on double-gate metallic field emitter arrays (FEAs) can potentially offer higher brightness than the currently used sources. We report on the successful application of electron-beam lithography to the fabrication of large-scale single-gate as well as double-gate FEAs. We demonstrate operational high-density single-gate FEAs with sub-micron pitch and a total number of tips up to 10⁶, as well as large-scale double-gate FEAs with large collimation gate apertures. The details of the design, the fabrication procedure, and successful measurements of the emission current from the single- and double-gate cathodes are presented.

Relevance:

30.00%

Publisher:

Abstract:

In the course of this study, the stiffness of a fibril array of mineralized collagen fibrils modeled with a mean-field method was validated experimentally at two site-matched levels of tissue hierarchy using mineralized turkey leg tendons (MTLT). The applied modeling approaches made it possible to model the properties of this unidirectional tissue from the nanoscale (mineralized collagen fibrils) to the macroscale (mineralized tendon). At the microlevel, the indentation moduli obtained with a mean-field homogenization scheme were compared to the experimental ones obtained with microindentation. At the macrolevel, the macroscopic stiffness predicted with micro finite element (μFE) models was compared to the experimental stiffness measured with uniaxial tensile tests. The elastic properties of the elements in the μFE models were taken from the mean-field model or from two-directional microindentations. Quantitatively, the indentation moduli can be properly predicted with the mean-field models. Local stiffness trends within specific tissue morphologies are very weak, suggesting that additional factors are responsible for the stiffness variations. At the macrolevel, the μFE models underestimate the macroscopic stiffness compared to the tensile tests, but the correlations are strong.
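
As a minimal illustration of what a mean-field stiffness estimate involves (not the homogenization scheme of this study, which models mineralized collagen fibrils with more elaborate inclusion geometries), the sketch below computes the classical Voigt and Reuss bounds for a hypothetical two-phase mineral/collagen mixture; any admissible mean-field estimate must lie between these bounds.

```python
def voigt_reuss_bounds(E_mineral, E_collagen, mineral_volume_fraction):
    """Upper (Voigt, iso-strain) and lower (Reuss, iso-stress) bounds
    on the Young's modulus of a two-phase composite."""
    f = mineral_volume_fraction
    E_voigt = f * E_mineral + (1.0 - f) * E_collagen
    E_reuss = 1.0 / (f / E_mineral + (1.0 - f) / E_collagen)
    return E_voigt, E_reuss

# Hypothetical stiffness values (GPa) and a 40% mineral volume fraction
upper, lower = voigt_reuss_bounds(E_mineral=100.0, E_collagen=2.5,
                                  mineral_volume_fraction=0.4)
print(f"Voigt (upper) bound: {upper:.1f} GPa, Reuss (lower) bound: {lower:.1f} GPa")
```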

Relevance:

30.00%

Publisher:

Abstract:

We report the fabrication and field emission properties of high-density nano-emitter arrays with on-chip electron extraction gate electrodes and up to 10⁶ metallic nanotips that have an apex curvature radius of a few nanometers and a tip density exceeding 10⁸ cm⁻². The gate electrode was fabricated on top of the nano-emitter arrays using a self-aligned polymer mask method. By applying a hot-press step for the polymer planarization, a gate–nanotip alignment precision below 10 nm was achieved. The fabricated devices exhibited stable field electron emission with a current density of 0.1 A cm⁻², indicating that they are promising for applications that require a miniature high-brightness electron source.

Relevance:

30.00%

Publisher:

Abstract:

Magnetic resonance imaging (MRI) is a non-invasive technique that offers excellent soft tissue contrast for characterizing soft tissue pathologies. Diffusion tensor imaging (DTI) is an MRI technique that has been shown to have the sensitivity to detect subtle pathology that is not evident on conventional MRI. Rats are commonly used as animal models for characterizing spinal cord pathologies, including spinal cord injury (SCI), cancer, and multiple sclerosis. These pathologies can affect both the thoracic and cervical regions, and their complete characterization using MRI requires DTI in both regions. Prior to applying DTI to investigate pathologic changes in the spinal cord, it is essential to establish DTI metrics in normal animals. To date, in-vivo DTI studies of the rat spinal cord have used implantable coils for high signal-to-noise ratio (SNR) and spin-echo pulse sequences for reduced geometric distortions. Implantable coils have several disadvantages, including (1) the invasive nature of implantation, (2) loss of SNR due to frequency shift with time in longitudinal studies, and (3) difficulty in imaging the cervical region. While echo planar imaging (EPI) offers much shorter acquisition times than spin-echo imaging, EPI is very sensitive to static magnetic field inhomogeneities, and the existing shimming techniques implemented on the MRI scanner do not perform well on the spinal cord because of its geometry. In this work, an integrated approach has been implemented for in-vivo DTI characterization of the rat spinal cord in the thoracic and cervical regions. A three-element phased-array coil was developed for improved SNR and extended spatial coverage, and a field-map shimming technique was developed to minimize geometric distortions in EPI images. Using these techniques, EPI-based DWI images were acquired with an optimized diffusion encoding scheme from six normal rats, and the DTI-derived metrics were quantified. Phantom studies indicated higher SNR and smaller bias in the estimated DTI metrics than previous studies in the cervical region. In-vivo results indicated no statistical difference in the DTI characteristics of either gray matter or white matter between the thoracic and cervical regions.
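
For context on what "DTI-derived metrics" typically include, the hedged sketch below computes the two most common ones, mean diffusivity (MD) and fractional anisotropy (FA), from the eigenvalues of a diffusion tensor. The tensor values are hypothetical and this is not the study's processing pipeline.

```python
import numpy as np

def dti_metrics(tensor: np.ndarray):
    """Mean diffusivity (MD) and fractional anisotropy (FA) from a 3x3
    symmetric diffusion tensor (units: mm^2/s)."""
    evals = np.linalg.eigvalsh(tensor)
    md = evals.mean()
    fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))
    return md, fa

# Hypothetical white-matter-like tensor: strongly anisotropic
D = np.diag([1.6e-3, 0.35e-3, 0.3e-3])
md, fa = dti_metrics(D)
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")
```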

Relevance:

30.00%

Publisher:

Abstract:

Recordings from the PerenniAL Acoustic Observatory in the Antarctic ocean (PALAOA) show seasonal acoustic presence of four Antarctic ice-breeding seal species (Ross seal, Ommatophoca rossii; Weddell seal, Leptonychotes weddellii; crabeater seal, Lobodon carcinophaga; and leopard seal, Hydrurga leptonyx). Apart from Weddell seals, which inhabit the fast ice in Atka Bay, the other three (pack-ice) species have to date never (Ross and leopard seals) or only very rarely (crabeater seals) been sighted in the Atka Bay region. The aim of the PASATA project is twofold. The large passive acoustic hydrophone array (hereafter referred to as the large array) aims to localize calling pack-ice pinniped species to obtain information on their location and hence the ice habitat they occupy. This large array consists of four autonomous passive acoustic recorders, each with a hydrophone sensor deployed through a hole drilled in the sea ice. The PASATA recordings are time-stamped and can therefore be coupled to the PALAOA recordings, so that the hydrophone array spans the bay almost entirely from east to west. The second, smaller hydrophone array (hereafter referred to as the small array) also consists of four autonomous passive acoustic recorders with hydrophone sensors deployed through holes drilled in the sea ice. The smaller array was deployed within a Weddell seal breeding colony located further south in the bay, just off the ice shelf. Male Weddell seals are thought to defend underwater territories around or near tide cracks and breathing holes used by females. Vocal activity increases strongly during the breeding season, and vocalizations are thought to be used underwater by males for territorial defense and advertisement. With the smaller hydrophone array we aim to investigate the underwater behaviour of vocalizing male and female Weddell seals to provide further information on underwater movement patterns in relation to the location of tide cracks and breathing holes. As a pilot project, one on-ice and three underwater camera systems have been deployed near breathing holes to obtain additional visual information on Weddell seal behavioural activity. Upon each visit to the breeding colony, a census of colony composition on the ice (number of animals, sex, presence of dependent pups, and presence and severity of injuries, indicative of competition intensity) is taken, as well as GPS readings of breathing holes and positions of hauled-out Weddell seals.
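
The localization task of the large array can be illustrated with a basic time-difference-of-arrival (TDOA) calculation. The sketch below is a hedged, generic illustration (made-up recorder coordinates, an idealized constant sound speed, and a brute-force grid search) rather than the PASATA processing chain.

```python
import numpy as np

SOUND_SPEED = 1450.0  # m/s in sea water, idealized as constant

# Hypothetical hydrophone positions (x, y) in metres
hydrophones = np.array([[0.0, 0.0], [1500.0, 0.0], [1500.0, 1200.0], [0.0, 1200.0]])

def tdoas_for(source: np.ndarray) -> np.ndarray:
    """Arrival-time differences relative to the first hydrophone."""
    t = np.linalg.norm(hydrophones - source, axis=1) / SOUND_SPEED
    return t[1:] - t[0]

def locate(measured_tdoas: np.ndarray, grid_step: float = 10.0) -> np.ndarray:
    """Brute-force grid search minimising the TDOA residual."""
    xs = np.arange(-500.0, 2000.0, grid_step)
    ys = np.arange(-500.0, 1700.0, grid_step)
    best, best_err = None, np.inf
    for x in xs:
        for y in ys:
            err = np.sum((tdoas_for(np.array([x, y])) - measured_tdoas) ** 2)
            if err < best_err:
                best, best_err = np.array([x, y]), err
    return best

true_source = np.array([600.0, 900.0])
print(locate(tdoas_for(true_source)))   # recovers approximately [600, 900]
```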

Relevance:

30.00%

Publisher:

Abstract:

Resource analysis (a.k.a. cost analysis) tries to approximate the cost of executing a program as a function of its input data sizes, without actually having to execute the program. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects that improve the efficiency, the accuracy, and the reliability of the results of the analysis still need to be investigated further. This thesis tackles this need from the following four perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses which keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. In this thesis we present two extensions to this approach: the first is to consider array accesses (in addition to object fields), while the second focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results, and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to re-analyze fragments of code that are not affected by the changes. During software development, programs are permanently modified, but most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change in the program is made, is able to reconstruct the upper bounds of all affected methods incrementally. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm which can be used by all the global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of the cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on developing a formal framework for the verification of the resource guarantees obtained by the analyzers, instead of verifying the tools. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code: COSTA derives upper bounds for Java programs, while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that the proposed tool cooperation can be used to automatically produce verified resource guarantees. (4) Distribution and concurrency are today mainstream. Concurrent objects form a well-established model for distributed concurrent systems. In this model, objects are the concurrency units and communicate via asynchronous method calls. Distribution suggests that the analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, keeps the cost of the diverse distributed components separate.
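
To make the incremental idea in point (2) concrete, here is a hedged, heavily simplified sketch, not the COSTA implementation: cost summaries are cached per method, a change invalidates only the changed method and its transitive callers in the call graph, and only those entries are recomputed on the next query.

```python
# Hedged sketch of incremental recomputation of per-method cost summaries.
# Costs are modelled naively as integers (own cost plus callees' costs); the
# real analysis propagates symbolic cost functions over several domains.
from collections import defaultdict

call_graph = {            # method -> methods it calls (assumed acyclic here)
    "main": ["parse", "solve"],
    "parse": ["read"],
    "solve": ["read"],
    "read": [],
}
own_cost = {"main": 1, "parse": 5, "read": 2, "solve": 10}

callers = defaultdict(set)
for m, callees in call_graph.items():
    for c in callees:
        callers[c].add(m)

summary = {}              # cached cost summaries

def cost_of(method):
    if method not in summary:
        summary[method] = own_cost[method] + sum(cost_of(c) for c in call_graph[method])
    return summary[method]

def on_change(method, new_own_cost):
    """Invalidate only the changed method and its transitive callers."""
    own_cost[method] = new_own_cost
    stale, stack = set(), [method]
    while stack:
        m = stack.pop()
        if m not in stale:
            stale.add(m)
            stack.extend(callers[m])
    for m in stale:
        summary.pop(m, None)   # everything else stays cached

print(cost_of("main"))         # 20  (1 + (5+2) + (10+2))
on_change("read", 4)           # only read, parse, solve and main become stale
print(cost_of("main"))         # 24
```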

Relevance:

30.00%

Publisher:

Abstract:

The early propagation effect (EPE) is a critical problem for conventional dual-rail logic implementations facing side-channel attacks (SCAs). Among previous EPE-resistant architectures, PA-DPL logic offers EPE-free operation at relatively low cost. However, its separate dual-core structure is a weakness when facing concentrated EM attacks, where a tiny EM probe can be precisely positioned closer to one of the two cores. In this paper, we present a PA-DPL dual-core interleaved structure to strengthen the resistance against sophisticated EM attacks on Xilinx FPGA implementations. The main merit of the proposed structure is that the two routings in each signal pair are kept identical even though the dual cores are interleaved. By minimizing the distance between the complementary routings and instances of both cores, even a concentrated EM measurement cannot easily distinguish the minor EM-field imbalance. In PA-DPL, EPE is avoided by compressing the evaluation phase into a small portion of the clock period, so the speed is inevitably limited. To address this, we extend the duty cycle of the evaluation phase to more than 40 percent, yielding a higher maximum working frequency. The detailed design flow is also presented. We validate the security improvement against EM attacks by implementing a simplified AES co-processor in a Virtex-5 FPGA.
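
For readers unfamiliar with dual-rail precharge logic, the sketch below illustrates the general principle that PA-DPL builds on; it is a hedged, generic model rather than the PA-DPL circuit. Each logical bit is carried by a complementary wire pair that is precharged to (0, 0) and then evaluates to (1, 0) or (0, 1), so the number of wire toggles per cycle is independent of the processed value.

```python
# Generic dual-rail precharge model (illustration only, not PA-DPL itself).
# Each bit b is encoded as the pair (b, not b) during evaluation and as
# (0, 0) during precharge, so every cycle toggles exactly one wire per rail pair.

def precharge():
    return (0, 0)

def encode(bit: int):
    return (bit, 1 - bit)

def transitions(prev, curr):
    """Number of wires that toggled between two phases."""
    return sum(p != c for p, c in zip(prev, curr))

for bit in (0, 1):
    pre = precharge()
    ev = encode(bit)
    # precharge -> evaluate -> precharge: the toggle count is 2 for either value,
    # which is what removes the data dependence a power/EM attacker exploits
    total = transitions(pre, ev) + transitions(ev, precharge())
    print(f"bit={bit}: evaluation pair={ev}, toggles per cycle={total}")
```

EPE arises when the evaluation of the two rails does not start at the same moment for all input arrivals; balanced and identical routing of the complementary pair, as targeted by the interleaved structure above, is one way to keep those timing and field differences small.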