883 results for network-on-chip, deadlock, message-dependent-deadlock, NoC
Abstract:
We report the fabrication and field emission properties of high-density nano-emitter arrays with on-chip electron extraction gate electrodes and up to 10⁶ metallic nanotips that have an apex curvature radius of a few nanometers and a tip density exceeding 10⁸ cm⁻². The gate electrode was fabricated on top of the nano-emitter arrays using a self-aligned polymer mask method. By applying a hot-press step for the polymer planarization, gate–nanotip alignment precision below 10 nm was achieved. Fabricated devices exhibited stable field electron emission with a current density of 0.1 A cm⁻², indicating that they are promising for applications that require a miniature high-brightness electron source.
Abstract:
CMOS sensors, or more generally Active Pixel Sensors (APS), are rapidly replacing CCDs in the consumer camera market. Due to significant technological advances during the past years, these devices have started to compete with CCDs for demanding scientific imaging applications as well, in particular in the astronomy community. CMOS detectors offer a series of inherent advantages over CCDs due to the structure of their basic pixel cells, each of which contains its own amplifier and readout electronics. The most prominent advantages for space object observations are the extremely fast and flexible readout capabilities, the feasibility of electronic shuttering and precise epoch registration, and the potential to perform image processing operations on-chip and in real time. Here, the major challenges and design drivers for ground-based and space-based optical observation strategies for objects in Earth orbit have been analyzed. CMOS detector characteristics were critically evaluated and compared with the established CCD technology, especially with respect to the above-mentioned observations. Finally, we simulated several observation scenarios for ground- and space-based sensors, assuming different observation and sensor properties. We introduce end-to-end simulations of the ground- and space-based strategies in order to investigate the orbit determination accuracy and its sensitivity to the assumed frame rate, pixel scale, and astrometric and epoch registration accuracies. Two cases were simulated: a survey assuming a ground-based sensor observing objects in LEO for surveillance applications, and a statistical survey with a space-based sensor orbiting in LEO observing small-size debris in LEO. The ground-based LEO survey uses a dynamical fence close to the Earth shadow a few hours after sunset. For the space-based scenario, a sensor in a sun-synchronous LEO orbit, always pointing in the anti-sun direction to achieve optimum illumination conditions for small LEO debris, was simulated.
Abstract:
High-resolution chemical depth profiling measurements of copper films are presented. The 10 μm thick copper test samples were electrodeposited on a Si-supported Cu seed under galvanostatic conditions in the presence of particular plating additives (SPS, Imep, PEI, and PAG) used in the semiconductor industry for the on-chip metallization of interconnects. To probe the tendency of these plating additives toward inclusion into the deposit during growth, quantitative elemental mass spectrometric measurements at trace-level concentration were conducted using a sensitive miniature laser ablation ionization mass spectrometer (LIMS), originally designed and developed for in situ space exploration. An ultrashort pulsed laser system (τ ∼ 190 fs, λ = 775 nm) was used for ablation and ionization of sample material. We show that with our LIMS system, quantitative chemical mass spectrometric analysis can be conducted with an ablation rate at the subnanometer level per single laser shot. The measurement capabilities of our instrument, including high vertical depth resolution coupled with a high detection sensitivity of ∼10 ppb, a high dynamic range (≥10⁸), and good measurement accuracy and precision, are of considerable interest in various fields of application where investigations of the chemical composition of solid materials with high lateral and vertical resolution are required; these include, e.g., wafers from the semiconductor industry or studies of space-weathered samples in space research.
Abstract:
The formation of blood vessels is a complex tissue-specific process that plays a pivotal role during developmental processes, in wound healing, cancer progression, fibrosis and other pathologies. To study vasculogenesis and vascular remodeling in the context of the lung, we developed an in vitro microvascular model that closely mimics the human lung microvasculature in terms of 3D architecture, accessibility, functionality and cell types. Human pericytes from the distal airway were isolated and characterized using flow cytometry. To assess their role in the generation of normal microvessels, lung pericytes were mixed in fibrin gel and seeded into well-defined microcompartments together with primary endothelial cells (HUVEC). Patent microvessels covering an area of 3.1 mm² formed within 3-5 days and were stable for up to 14 days. Soluble signals from the lung pericytes were necessary to establish perfusability, and pericytes migrated towards endothelial microvessels. Cell-cell communication in the form of adherens and tight junctions, as well as secretion of basement membrane, was confirmed using transmission electron microscopy and immunocytochemistry on chip. Direct co-culture of pericytes with endothelial cells decreased the microvascular permeability by one order of magnitude, from 17.8·10⁻⁶ cm/s to 2.0·10⁻⁶ cm/s, and led to vessels with significantly smaller and less variable diameter. Upon phenylephrine administration, vasoconstriction was observed in microvessels lined with pericytes but not in endothelial-only microvessels. Perfusable microvessels were also generated with human lung microvascular endothelial cells and lung pericytes. Human lung pericytes were thus shown to have a prominent influence on microvascular morphology, permeability, vasoconstriction and long-term stability in an in vitro microvascular system. This biomimetic platform opens new possibilities to test functions and interactions of patient-derived cells in a physiologically relevant microvascular setting.
Abstract:
Background: The aim of this study was to evaluate the validity and the inter- and intra-examiner reliability of panoramic-radiograph-driven findings of different maxillary sinus anatomic variations and pathologies, which had initially been prediagnosed by cone beam computed tomography (CBCT). Methods: After pairs of two-dimensional (2D) panoramic and three-dimensional (3D) CBCT images of patients who had received treatment at the outpatient department had been screened, the predefinition of 54 selected maxillary sinus conditions was initially performed on CBCT images by two blinded consultants individually, using a questionnaire that defined ten different clinically relevant findings. Using the identical questionnaire, these consultants evaluated the panoramic radiographs at a later time point. The results were analyzed for inter-imaging differences in the evaluation of the maxillary sinus between 2D and 3D imaging methods. Additionally, two resident groups (first year and last year of training) performed two diagnostic runs on the panoramic radiographs, and the results were analyzed for inter- and intra-observer reliability. Results: There is a moderate risk of false diagnosis of findings of the maxillary sinus if only panoramic radiography is used. Of the ten predefined conditions, only maxillary bone cysts penetrating into the sinus were frequently assessed differently when comparing 2D with 3D diagnostics. Additionally, on panoramic radiographs, the inter-observer comparison demonstrated that basal septa were rated differently significantly more often, and the intra-observer comparison showed a significant lack of reliability in detecting maxillary bone cysts penetrating into the sinus. Conclusions: Panoramic radiography provides most of the relevant information on the maxillary sinus and may be an adequate imaging method. However, particular findings of the maxillary sinus in panoramic imaging may rest on a rather examiner-dependent assessment. Therefore, a consistent and precise evaluation of specific conditions of the maxillary sinus may only be possible using CBCT, because it provides additional information compared to panoramic radiography. This might be relevant for consecutive surgical procedures; consequently, we recommend CBCT if a precise preoperative evaluation is mandatory. However, the higher radiation dose and costs of 3D imaging need to be considered. Keywords: Panoramic radiography; Cone beam computed tomography; Maxillary sinus; Inter-imaging method differences; Inter-examiner reliability; Intra-examiner reliability
Abstract:
This paper examines the relationship between house price levels, school performance, and the racial and ethnic composition of Connecticut school districts between 1995 and 2000. A panel of Connecticut school districts over both time and labor market areas is used to estimate a simultaneous equations model describing the determinants of these variables. Specifically, school district changes in price level, school performance, and racial and ethnic composition depend upon each other, labor-market-wide changes in these variables, and the deviation of each school district from the overall metropolitan area. The specification is based on differencing the dependent variables, as opposed to using level or fixed-effects models, and on lagging level variables beyond the period over which change is considered; as a result the model is robust to persistence in the sample. Identification of the simultaneous system arises from the presence of multiple labor market areas in the sample and the assumption that labor market changes in a variable do not directly influence the allocation of households across towns within a labor market area. We find that towns in labor markets that experience an inflow of minority households have greater increases in percent minority if those towns already have a substantial minority population. We find evidence that this sorting process is reflected in housing price changes in the low-priced segment of the housing market, not in the middle and upper segments.
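The differenced specification is only described verbally above; as a loudly hedged illustration, one equation of such a system might take the stylized form below, where p, s and r denote town-level price, school performance and minority share, capital P the labor-market aggregate, and all symbols are our own notation rather than the paper's:

```latex
% Stylized single equation of the differenced simultaneous system
% (illustrative notation, not the authors'): the change for town d in
% labor market L depends on the other town-level changes, the
% market-wide change, and the town's prior deviation from the market.
\Delta p_{dt} = \alpha_1 \Delta s_{dt} + \alpha_2 \Delta r_{dt}
              + \alpha_3 \Delta P_{Lt}
              + \alpha_4 \left( p_{d,t-1} - P_{L,t-1} \right)
              + \varepsilon_{dt}
```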
Abstract:
Formation of a triple helix resulting from oligonucleotide binding to the DNA double helix offers new possibilities to control gene expression at the transcriptional level. Purine-motif triplexes can be formed at physiological pH. Nevertheless, their formation was inhibited by certain monovalent cations during association but not during dissociation. Since triplexes are very stable, it was possible to assemble them in the absence of KCl and have them survive throughout the course of an in vitro transcription reaction. Regarding the design of a better triplex-forming oligonucleotide, a length of 12 nucleotides afforded the highest binding affinity. G/T-rich oligonucleotides can be very polymorphic in solution. The conditions for forming purine-motif triplexes, duplexes or G-quartets were determined. Understanding these parameters will be important for the practical use of G-rich oligonucleotides in the development of DNA aptamers, where the structure of the oligonucleotide is paramount in dictating its function. Finally, purine-motif triplexes were demonstrated to significantly inhibit gene transcription in vitro. The optimal effect on this process depended on the location of the triplexes within the promoter, i.e., whether upstream or proximally downstream of the transcription start site. The mechanism of transcription inhibition appeared to be interference with initiation by preventing engagement of RNA polymerase. This finding contrasts with the conventional model, in which triplexes inhibit transcription only by occluding binding of trans-acting proteins. Our findings broaden the utility of triplexes and support a strategy for antigene therapy based on triplexes.
Abstract:
The focus of this study is the analysis of a local hydrogeological system in the subhumid outer tropics of the West African country of Benin. The aim was to characterize, qualify and quantify the hydrogeological and hydrological properties of the approximately 30 km² study area and to develop a conceptual hydrogeological model. This model should provide the basis for further studies on a regional scale. The main goal was to understand the processes of the hydrogeological system and to determine the mechanism and quantity of groundwater recharge in the working area. In line with these objectives, a broad hydrogeological approach was chosen. TDR probes, suction cups and groundwater observation bores were installed in an extensive network at the local scale. In addition, in multidisciplinary cooperation with hydrology, geography, soil science, biology, meteorology and plant nutrition sciences, instruments such as discharge gauging stations, tensiometers, lysimeters, climate stations, runoff plots and erosion pins were installed in the test site to investigate the relevant parameters of the hydrological cycle.
Abstract:
Distributed parallel execution systems speed up applications by splitting tasks into processes whose execution is assigned to different receiving nodes in a high-bandwidth network. On the distributing side, a fundamental problem is grouping and scheduling such tasks so that each one involves sufficient computational cost compared to the task creation and communication costs and other such practical overheads. On the receiving side, an important issue is to have some assurance of the correctness and characteristics of the code received, and also of the kind of load the particular task is going to pose, which can be specified by means of certificates. In this paper we present, in a tutorial way, a number of general solutions to these problems, and illustrate them through their implementation in the Ciao multi-paradigm language and program development environment. This system includes facilities for parallel and distributed execution, an assertion language for specifying complex program properties (including safety and resource-related properties), and compile-time and run-time tools for performing automated parallelization and resource control, as well as certification of programs with resource consumption assurances and efficient checking of such certificates.
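The granularity problem described above invites a small illustration. The following Python sketch is our own, not Ciao's annotation-based mechanism; the threshold, cost estimates and names are assumptions. It spawns a task to a worker pool only when its estimated cost exceeds a fixed creation-and-communication overhead, and runs it locally otherwise.

```python
# Hedged sketch of granularity control: execute a task remotely only when
# its estimated cost exceeds the fixed overhead of creating and shipping
# it. In a real system the estimates come from static cost analysis; here
# they are just caller-supplied numbers.
from concurrent.futures import Future, ProcessPoolExecutor

SPAWN_OVERHEAD = 1_000_000  # assumed cost units for creation + messaging

def run(task, estimated_cost, pool):
    if estimated_cost > SPAWN_OVERHEAD:
        return pool.submit(task)      # coarse enough: worth parallelizing
    f = Future()                      # too fine-grained: run locally and
    f.set_result(task())              # wrap the result in a done future
    return f

def work():
    return sum(i * i for i in range(2_000_000))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        futures = [run(work, 5_000_000, pool) for _ in range(4)]
        results = [f.result() for f in futures]
```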
Abstract:
Dynamic thermal management techniques require a collection of on-chip thermal sensors that imply a significant area and power overhead. Finding the optimum number of temperature monitors and their locations on the chip surface to optimize accuracy is an NP-hard problem. In this work we improve the modeling of the problem by including area, power and networking constraints, along with the consideration of three inaccuracy terms: spatial errors, sampling-rate errors and monitor-inherent errors. The problem is solved with a simulated annealing algorithm. We apply the algorithm to a test case employing three different types of monitors to highlight the importance of the different metrics. Finally, we present a case study of the Alpha 21364 processor under two different constraint scenarios.
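As a flavor of the optimization, here is a minimal Python sketch of simulated annealing for monitor placement under a deliberately simplified objective: it minimizes only a spatial-error proxy (worst-case distance from any hotspot to its nearest monitor), whereas the paper's formulation also weighs area, power and networking constraints plus sampling-rate and monitor-inherent errors. All parameters and the hotspot map are invented.

```python
# Hedged sketch: simulated annealing for placing k thermal monitors on an
# n x n chip grid, minimizing a spatial-error proxy only.
import math
import random

def cost(monitors, hotspots):
    # Worst-case distance from any hotspot to its nearest monitor.
    return max(min(math.dist(h, m) for m in monitors) for h in hotspots)

def anneal(hotspots, n=64, k=8, t0=10.0, cooling=0.995, steps=20_000):
    monitors = [(random.randrange(n), random.randrange(n)) for _ in range(k)]
    best, best_c = list(monitors), cost(monitors, hotspots)
    t = t0
    for _ in range(steps):
        cand = list(monitors)
        cand[random.randrange(k)] = (random.randrange(n), random.randrange(n))
        delta = cost(cand, hotspots) - cost(monitors, hotspots)
        # Accept improvements always, worsenings with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            monitors = cand
            c = cost(monitors, hotspots)
            if c < best_c:
                best, best_c = list(monitors), c
        t *= cooling                              # geometric cooling schedule
    return best, best_c

hotspots = [(5, 5), (20, 40), (50, 12), (33, 60)]  # invented hotspot map
placement, err = anneal(hotspots)
```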
Abstract:
In this work, the power management techniques implemented in a high-performance node for Wireless Sensor Networks (WSN), based on a RAM-based FPGA, are presented. This new custom node architecture is intended for high-end WSN applications that include complex sensor management such as video cameras, highly compute-demanding tasks such as image encoding or robust encryption, and/or higher data bandwidth needs. For these complex processing tasks, while still maintaining low-power design requirements, it can be shown that the combination of different techniques (extensive HW algorithm mapping, smart management of power islands to selectively switch components on and off, smart and low-energy partial reconfiguration, and an adequate set of energy-saving modes and wake-up options) may yield energy results that compete with and improve on the energy usage of the typical low-power microcontrollers used in many WSN node architectures. Indeed, results show that higher-complexity tasks favor HW-based platforms, while the flexibility achieved by dynamic and partial reconfiguration techniques can be comparable to that of SW-based solutions.
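The HW-versus-SW energy claim can be made concrete with a back-of-the-envelope duty-cycle model; all power and timing figures in this Python sketch are invented for illustration and are not measurements from the paper.

```python
# Hedged duty-cycle energy model: energy per sensing period is
#   E = P_active * t_active + P_sleep * (T - t_active).
# A faster HW-mapped implementation may draw more active power yet win
# overall because it spends far longer in the low-power sleep state.
def energy_mj(p_active_mw, t_active_s, p_sleep_mw, period_s):
    return p_active_mw * t_active_s + p_sleep_mw * (period_s - t_active_s)

PERIOD = 10.0  # seconds between sensing events (assumed)

# Invented numbers: the FPGA finishes an image-encoding task in 0.05 s at
# 120 mW; a microcontroller needs 4 s at 30 mW; both sleep at 0.5 mW.
e_fpga = energy_mj(120.0, 0.05, 0.5, PERIOD)  # ~11.0 mJ per period
e_mcu = energy_mj(30.0, 4.0, 0.5, PERIOD)     # ~123.0 mJ per period
```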
Abstract:
The purpose of this document is to provide a modest integration guide for embedding a Linux operating system on the ZedBoard development platform, based on Xilinx's Zynq-7000 All Programmable System on Chip, which contains a dual-core ARM Cortex-A9 and a 7-Series Artix-7 FPGA. The integration process has been structured in four chapters according to the logical order of generation of the different parts that compose the embedded system. To automate the generation of a complete Linux distribution specific to the ZedBoard platform, the Buildroot development platform is used. Once the embedding process was finished, the system was given the functionality required to support IEEE 1588, the Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems, through a user-space Linux program that implements the protocol. That PTP user-space implementation was cross-compiled, executed on the target and tested to evaluate the added functionality.
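For context on what such a PTP user-space program computes, the sketch below shows the standard IEEE 1588 delay request-response arithmetic in Python; this is the well-known offset/delay formula of the standard, not code from the project itself, and the timestamps are invented.

```python
# IEEE 1588 delay request-response mechanism (standard formulas):
#   t1: master sends Sync        t2: slave receives Sync
#   t3: slave sends Delay_Req    t4: master receives Delay_Req
# Assuming a symmetric path, the slave's clock offset and the one-way
# path delay follow directly from the four timestamps.
def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # mean one-way path delay
    return offset, delay

# Invented nanosecond timestamps: slave runs 500 ns ahead of the master,
# true one-way delay is 1000 ns.
off, dly = ptp_offset_and_delay(0, 1500, 3000, 3500)
assert off == 500.0 and dly == 1000.0
```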
Abstract:
This project consists of the design and construction of a synthesizer based on the 6581 Sound Interface Device (SID) chip. This chip was responsible for sound generation in the Commodore 64, a home computer launched in 1982, and was the first complex synthesizer built for a computer. The chip is a three-voice synthesizer, each voice capable of generating four different waveforms. Each voice has independent control of several parameters, allowing a relatively wide variety of sounds and effects, very useful for videogames. It also includes a programmable filter, allowing further timbre control via subtractive synthesis. The synthesizer has been built on Arduino, an open-source electronics prototyping platform consisting of a printed circuit board with a microcontroller, programmable from a PC to perform many functions (lighting LEDs, controlling servomechanisms in robotics, data processing and transmission, etc.). The synthesizer is controllable via MIDI, for example from a piano-type keyboard. Through MIDI it receives information such as which notes to play and the values of the SID parameters that modify the properties of the sound. It can also receive that information from a PC over a USB connection. Two versions of the synthesizer have been built: a hardware version, which uses the SID for sound generation, and a software version, which replaces the SID with an emulator, that is, a program that behaves (as far as possible) in the same way as the SID. The emulator has been implemented on an Atmel ATmega168 microcontroller, the same one used by Arduino.
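As a flavor of what the firmware has to do (shown here in Python purely for illustration; the actual project runs compiled code on the microcontroller), the sketch converts a MIDI note number into the 16-bit value for the SID's frequency-register pair, using the documented SID relation Fout = Fn · Fclk / 2²⁴ with the PAL system clock. The MIDI-to-hertz mapping and the clamping are our assumptions.

```python
# Hedged sketch: MIDI note -> SID 16-bit frequency register value.
# The 6581 datasheet gives Fout = Fn * Fclk / 2**24, with Fclk = 985248 Hz
# on PAL Commodore 64s, hence Fn = Fout * 2**24 / Fclk.
PAL_CLOCK_HZ = 985_248

def midi_to_hz(note):
    # Equal temperament, A4 (MIDI note 69) = 440 Hz.
    return 440.0 * 2 ** ((note - 69) / 12)

def sid_freq_registers(note):
    fn = round(midi_to_hz(note) * 2 ** 24 / PAL_CLOCK_HZ)
    fn = min(fn, 0xFFFF)                  # clamp to the 16-bit register pair
    return fn & 0xFF, (fn >> 8) & 0xFF    # (FREQ LO, FREQ HI) byte values

lo, hi = sid_freq_registers(69)           # A4 -> Fn ≈ 7493
```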
Abstract:
Modern Field Programmable Gate Arrays (FPGAs) are packed with features that assist designers. The availability of features like large block memories (BRAM), Digital Signal Processing (DSP) cores and embedded CPUs makes the design strategy for FPGAs quite different from that for ASICs. FPGAs are also widely used in security-critical applications where protection against known attacks is of prime importance. We focus on physical attacks, which target physical implementations. To design countermeasures against such attacks, the strategy for FPGA designers should also differ from that in ASICs: the available features should be exploited to design compact and strong countermeasures. In this paper, we propose methods to exploit the BRAMs in FPGAs for designing compact countermeasures. BRAM can be used to optimize intrinsic countermeasures like masking and dual-rail logic, which otherwise have significant overhead (at least 2X). The optimizations are applied to a real AES-128 co-processor and tested for area overhead and resistance on Xilinx Virtex-5 chips. The presented masking countermeasure has an overhead of only 16% when applied to AES. Moreover, the Dual-rail Precharge Logic (DPL) countermeasure has been optimized to pack the whole sequential part into the BRAM, thereby enhancing security. Proper robustness evaluations are conducted to analyze the optimizations for area and security.
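To illustrate the table-based masking that such a BRAM optimization can support, here is a hedged Python sketch of first-order masked S-box recomputation; a 4-bit S-box (PRESENT's) stands in for the 256-entry AES S-box to keep the example short, and the scheme shown is the generic masked-table idea, not the paper's exact circuit.

```python
# Hedged sketch of first-order Boolean masking with a recomputed S-box
# table: T[x ^ m_in] = S[x] ^ m_out, so lookups never expose unmasked
# values. In a BRAM-based design this table would live in block RAM.
import secrets

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT 4-bit S-box

def masked_table(m_in, m_out):
    t = [0] * 16
    for x in range(16):
        t[x ^ m_in] = SBOX[x] ^ m_out
    return t

m_in, m_out = secrets.randbelow(16), secrets.randbelow(16)
T = masked_table(m_in, m_out)

x = 0x7                        # sensitive value, never processed directly
xm = x ^ m_in                  # masked input
ym = T[xm]                     # masked S-box output
assert ym ^ m_out == SBOX[x]   # unmasking recovers the true output
```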