989 results for FPGA (Field Programmable Gate Arrays)


Relevance:

100.00%

Publisher:

Abstract:

This Master's thesis studied the operation and control of frequency converters. In addition, the work examined the motor overvoltage caused by the fast switching transients of the inverter. Reflections in the motor cable were analysed by treating the cable as a transmission line, and the causes of the overvoltage were verified. Several filtering methods have been developed to reduce the overvoltage; these methods were compared and commercial alternatives were surveyed. To date, frequency converter control has usually been implemented with a microprocessor together with a logic circuit. In the future, control will probably be implemented with reprogrammable FPGAs (Field Programmable Gate Arrays). The advantages of an FPGA include reprogrammability and the possibility of concentrating the control on a single chip.
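
As a reminder of the standard transmission-line relations that underlie this kind of reflection analysis (textbook results, not quoted from the thesis), with Z_m the motor surge impedance, Z_c the cable characteristic impedance, v the propagation velocity in the cable and t_r the inverter output rise time:

```latex
\[
\Gamma = \frac{Z_m - Z_c}{Z_m + Z_c}, \qquad
V_{\mathrm{motor,peak}} \approx (1 + \Gamma)\, V_{\mathrm{DC}}, \qquad
l_{\mathrm{critical}} \approx \frac{v\, t_r}{2}
\]
```

Because the motor surge impedance is typically much larger than the cable impedance, the reflection coefficient approaches 1 and the motor terminal voltage can approach twice the DC-link voltage once the cable exceeds the critical length.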

Relevance:

100.00%

Publisher:

Abstract:

How can a bridge be built between autonomic computing approaches and parallel computing systems? The work reported in this paper aims to bridge this gap by proposing a swarm-array computing approach based on 'Intelligent Agents' to achieve autonomy for distributed parallel computing systems. In the proposed approach, a task to be executed on parallel computing cores is carried onto a computing core by carrier agents that can seamlessly transfer between processing cores in the event of a predicted failure. The cognitive capabilities of the carrier agents on a parallel processing core serve to achieve the self-ware objectives of autonomic computing, thereby applying autonomic computing concepts for the benefit of parallel computing systems. The feasibility of the proposed approach is validated by simulation studies using a multi-agent simulator on an FPGA (Field-Programmable Gate Array) and experimental studies using MPI (Message Passing Interface) on a computer cluster. Preliminary results confirm that applying autonomic computing principles to parallel computing systems is beneficial.
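
A toy sketch of the carrier-agent idea described above, assuming a simple per-core failure-prediction score; the class names and prediction model are invented for illustration and are not taken from the paper's FPGA or MPI implementations.

```python
from dataclasses import dataclass

# Toy sketch of "carrier agents" moving a task away from a core whose predicted
# failure probability crosses a threshold. All names and the prediction model
# are illustrative; the paper validates the idea on an FPGA multi-agent
# simulator and with MPI on a cluster.
@dataclass
class Core:
    core_id: int
    predicted_failure: float = 0.0   # 0.0 (healthy) .. 1.0 (imminent failure)

@dataclass
class CarrierAgent:
    task: str
    core: Core

    def monitor_and_migrate(self, cores, threshold: float = 0.5) -> None:
        """Move the carried task to the healthiest core if failure is predicted."""
        if self.core.predicted_failure >= threshold:
            target = min(cores, key=lambda c: c.predicted_failure)
            print(f"migrating '{self.task}' from core {self.core.core_id} "
                  f"to core {target.core_id}")
            self.core = target

cores = [Core(0, 0.7), Core(1, 0.1), Core(2, 0.3)]
agent = CarrierAgent(task="matrix-block-3", core=cores[0])
agent.monitor_and_migrate(cores)
```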

Relevance:

100.00%

Publisher:

Abstract:

Reconfigurable computing is becoming an important new alternative for implementing computations. Field programmable gate arrays (FPGAs) are the ideal integrated circuit technology to experiment with the potential benefits of using different strategies of circuit specialization by reconfiguration. The final form of the reconfiguration strategy is often non-trivial to determine. Consequently, in this paper, we examine strategies for reconfiguration and, based on our experience, propose general guidelines for the tradeoffs using an area-time metric called functional density. Three experiments are set up to explore different reconfiguration strategies for FPGAs applied to a systolic implementation of a scalar quantizer used as a case study. Quantitative results for each experiment are given. The regular nature of the example means that the results can be generalized to a wide class of industry-relevant problems based on arrays.
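
Functional density is commonly defined as computational throughput per unit of area-time; one common formulation, which may differ in detail from the exact definition used in this paper, also charges the reconfiguration time against the useful execution time when run-time reconfiguration (RTR) is used:

```latex
\[
D = \frac{1}{A \cdot T}, \qquad
D_{\mathrm{RTR}} = \frac{1}{A\,\bigl(T_{\mathrm{exec}} + T_{\mathrm{config}}\bigr)}
\]
```

Here A is the circuit area, T the execution time of the operation, and T_config the time spent reconfiguring the device; a specialised circuit only pays off when its area-time savings outweigh the configuration overhead.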

Relevance:

100.00%

Publisher:

Abstract:

RFID (Radio Frequency Identification) is a non-contact automatic identification technique that identifies objects using radio frequency. This technology has shown its practical value and potential in manufacturing, retailing, logistics and hospital automation. Unfortunately, a key problem that limits the adoption of RFID systems is information security. Recently, researchers have proposed solutions to security threats in RFID technology, among them several key management protocols. This master's dissertation presents a performance evaluation of the Neural Cryptography and Diffie-Hellman protocols in RFID systems. To that end, we measure the processing time inherent in these protocols. The tests were carried out on an FPGA (Field-Programmable Gate Array) platform with a Nios II embedded processor. The research methodology is based on aggregating knowledge for the development of new RFID systems through a comparative analysis of these two protocols. The main contributions of this work are a performance evaluation of the protocols (Diffie-Hellman and Neural Cryptography) on an embedded platform and a survey of RFID security threats. According to the results, the Diffie-Hellman key agreement protocol is more suitable for RFID systems.
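
A minimal sketch of the kind of timing measurement described above, for the Diffie-Hellman side only: the modulus below is a small toy prime, not a secure group, and the reader/tag role names are illustrative rather than taken from the dissertation.

```python
import secrets
import time

# Toy Diffie-Hellman agreement with a timing measurement. P is a small
# Mersenne prime used purely for illustration; a real deployment would use a
# standardised group of adequate size.
P = 2**61 - 1
G = 3

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

start = time.perf_counter()
reader_priv, reader_pub = dh_keypair()
tag_priv, tag_pub = dh_keypair()
shared_reader = pow(tag_pub, reader_priv, P)   # reader's view of the secret
shared_tag = pow(reader_pub, tag_priv, P)      # tag's view of the secret
elapsed_ms = (time.perf_counter() - start) * 1e3

assert shared_reader == shared_tag
print(f"shared secret agreed in {elapsed_ms:.3f} ms")
```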

Relevance:

100.00%

Publisher:

Abstract:

In this work, a Variable Structure Adaptive Backstepping Controller (VS-ABC) is presented for single-input single-output, linear, time-invariant plants with unit relative degree. Instead of the traditional integral laws for estimating the plant parameters, switched laws are used in order to increase robustness to parametric uncertainties and external disturbances, as well as to improve the transient performance of the system. Additionally, the design of the new controller is more intuitive than that of the original backstepping controller, since the amplitudes of the introduced relays are directly related to the nominal plant parameters. This new variable-structure approach also reduces the complexity of practical implementations, motivating the use of industrial components such as FPGAs (Field Programmable Gate Arrays), MCUs (Microcontrollers) and DSPs (Digital Signal Processors). Preliminary simulations for unstable first- and second-order systems are presented to corroborate the study. One of Rohrs' examples is also addressed through simulations for the two adaptive scenarios: the original adaptive backstepping controller and the VS-ABC.
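
To make the idea of replacing integral adaptation laws with switched (relay) laws concrete, the sketch below simulates a first-order variable-structure model-reference scheme. This is not the VS-ABC of the dissertation; the plant, reference model and relay amplitudes are illustrative assumptions.

```python
import numpy as np

# Relay-based (variable-structure) adaptation for an unstable first-order plant:
# the adaptive gains are switched by the sign of the tracking error instead of
# being integrated. Values are illustrative, not taken from the dissertation.
dt, T = 1e-4, 5.0
a, k = 1.0, 1.0                      # plant: dy/dt = a*y + k*u (unstable)
am, km = 2.0, 2.0                    # model: dym/dt = -am*ym + km*r
theta1_bar, theta2_bar = 4.0, 3.0    # relay amplitudes >= |ideal gains| (3 and 2)

y = ym = 0.0
for i in range(int(T / dt)):
    t = i * dt
    r = 1.0 if (t % 2.0) < 1.0 else -1.0     # square-wave reference
    e0 = y - ym                              # tracking error
    theta1 = -theta1_bar * np.sign(e0 * y)   # switched law replacing an integrator
    theta2 = -theta2_bar * np.sign(e0 * r)
    u = theta1 * y + theta2 * r
    y += dt * (a * y + k * u)
    ym += dt * (-am * ym + km * r)

print(f"final tracking error |y - ym| = {abs(y - ym):.4f}")
```

The relay amplitudes only need to upper-bound the magnitudes of the ideal matching gains, which is what makes the design intuitive when nominal plant parameters are known.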

Relevance:

100.00%

Publisher:

Abstract:

We have recently proposed an extension to Petri nets in order to be able to deal directly with all aspects of embedded digital systems. This extension is meant to be used as the internal model of our co-design environment. After analysing relevant related work and presenting a short introduction to our extension as background material, we describe the details of the timing model used in our approach, which is mainly based on Merlin's time model. We conclude the paper by discussing an example of its usage. © 2004 IEEE.
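
As a minimal illustration of the kind of timing annotation Merlin's model attaches to transitions (a generic sketch, not the co-design extension described in the paper), each transition carries a static [min, max] firing interval measured from the moment it becomes enabled:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class TimedTransition:
    name: str
    pre: Dict[str, int]              # place -> tokens required to enable
    post: Dict[str, int]             # place -> tokens produced on firing
    min_delay: float                 # earliest firing time after enabling
    max_delay: float                 # latest firing time after enabling
    enabled_since: Optional[float] = None

@dataclass
class TimePetriNet:
    marking: Dict[str, int]
    transitions: List[TimedTransition]

    def update_enabling(self, now: float) -> None:
        for t in self.transitions:
            enabled = all(self.marking.get(p, 0) >= n for p, n in t.pre.items())
            if enabled and t.enabled_since is None:
                t.enabled_since = now          # start the transition's local clock
            elif not enabled:
                t.enabled_since = None

    def can_fire(self, t: TimedTransition, now: float) -> bool:
        if t.enabled_since is None:
            return False
        return t.min_delay <= now - t.enabled_since <= t.max_delay
```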

Relevance:

100.00%

Publisher:

Abstract:

Invention patent for a method for a reconfigurable computer architecture subject to constant optimisation, comprising a computer architecture implemented on an FPGA (Field Programmable Gate Array).

Relevance:

100.00%

Publisher:

Abstract:

Several countries have invested in technologies for Smart Grids. Among the protocols designed to cover this area, the DNP3 (Distributed Network Protocol version 3) stands out. Although DNP3 was developed for operation over a serial interface, there is a trend in the literature towards the use of other interfaces, and the ZigBee wireless interface has become popular in industrial applications. In order to study the challenges of integrating these two protocols, this article presents an analysis of the DNP3 protocol stack through state machines. The encapsulation of DNP3 messages in a P2P (point-to-point) ZigBee network may assist in discovering and solving availability and security failures of this integration. The ultimate goal is to merge the features of the DNP3 and ZigBee stacks and present a solution that provides the benefits of a wireless environment without impairing the security required for Smart Grid applications.
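
A schematic sketch of the encapsulation idea, assuming a much-simplified frame layout: the DNP3 start octets (0x05 0x64) and the meaning of the length field are real, but the per-block CRC-16/DNP of real frames is replaced by a zero placeholder, and the ZigBee "APS frame" below is just a dictionary standing in for the stack's own data structures.

```python
import struct

def dnp3_link_frame(dest: int, src: int, control: int, user_data: bytes) -> bytes:
    """Build a simplified DNP3 link-layer frame (CRC replaced by a placeholder)."""
    length = 5 + len(user_data)               # control(1) + dest(2) + src(2) + data
    header = struct.pack('<BBBBHH', 0x05, 0x64, length, control, dest, src)
    crc_placeholder = b'\x00\x00'             # real DNP3 appends CRC-16/DNP per block
    return header + crc_placeholder + user_data

def zigbee_aps_encapsulate(dnp3_frame: bytes, cluster_id: int = 0x0F00) -> dict:
    """Carry the DNP3 frame as an opaque payload of a ZigBee application message."""
    return {
        'profile_id': 0xC000,                 # illustrative private profile id
        'cluster_id': cluster_id,
        'payload': dnp3_frame,
    }

frame = dnp3_link_frame(dest=1, src=10, control=0xC4, user_data=b'\x01\x02')
print(zigbee_aps_encapsulate(frame))
```

In practice the small ZigBee payload size also forces fragmentation of longer DNP3 messages, which is one of the integration challenges the article's state-machine analysis targets.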

Relevance:

100.00%

Publisher:

Abstract:

This final-year project involved the design and implementation of a multichannel temperature measurement system with thermocouples and digital processing. A four-channel prototype with thermocouple connections was built, the thermocouple being the type of sensor used for these measurements. The voltage generated by the thermocouple is processed by a thermocouple-to-digital converter with a Serial Peripheral Interface (SPI) output. This communication is controlled by a Field Programmable Gate Array (FPGA); specifically, a Xilinx Virtex-5 development board was used. The FPGA was also programmed for the software processing of the measurements and the subsequent serial communication with a PC, whose user interface displays the measurement results in real time. The project was developed in collaboration with a private company dedicated mainly to electronic design. The purpose of the prototype is to study an upgrade, from analogue to digital, of the measurement block that controls the temperature curves of a piece of aeronautical repair equipment. This report describes the process followed to develop the prototype, including the studies carried out and the information needed for the design, manufacture and programming of the blocks that make up the system.
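
The abstract does not name the thermocouple-to-digital converter, so the sketch below assumes a MAX31855-style 32-bit SPI frame purely to illustrate the kind of decoding the FPGA performs after each SPI transfer; the function name is illustrative.

```python
def decode_thermocouple_frame(raw32: int) -> float:
    """Decode one 32-bit SPI frame into degrees Celsius.

    Assumes a MAX31855-style layout: bits 31..18 hold a signed 14-bit
    thermocouple temperature with 0.25 degC per LSB.
    """
    tc = (raw32 >> 18) & 0x3FFF          # extract the 14-bit thermocouple field
    if tc & 0x2000:                      # sign-extend negative temperatures
        tc -= 1 << 14
    return tc * 0.25                     # 0.25 degC per LSB

# Example: a frame whose temperature field encodes +100.0 degC
example = (400 & 0x3FFF) << 18           # 400 counts * 0.25 degC = 100 degC
print(decode_thermocouple_frame(example))  # -> 100.0
```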

Relevance:

100.00%

Publisher:

Abstract:

This final-degree project studies how to generate VHDL (Versatile Hardware Description Language) models from RVC-CAL (Reconfigurable Video Coding – CAL Actor Language) dataflow models using Vivado HLS (Vivado High Level Synthesis), one of the tools available in the Xilinx Vivado suite. Once the resulting VHDL model has been obtained, the intention is to use the Xilinx tools to program it onto an FPGA (Field Programmable Gate Array) or onto the Zynq device, also developed by Xilinx. RVC-CAL is a dataflow language that describes the functionality of functional blocks called actors. The functionality of an actor is defined as actions, which may differ within the same actor. Actors can communicate with each other and form a network of actors. With Vivado HLS, a VHDL design can be obtained from a model written in C. Generating VHDL models from RVC-CAL models therefore requires a preliminary phase in which the RVC-CAL models are compiled into their C equivalents. The ORCC compiler (Open RVC-CAL Compiler) is the tool that produces C designs from RVC-CAL models. ORCC does not directly create executable code; it generates source code that is then compiled by another tool, in this project the Linux GCC compiler (GNU C Compiler). In summary, the project comprises three distinct areas of study: 1. Starting from RVC-CAL dataflow models, which are compiled by ORCC to obtain their C translation. 2. Synthesising the equivalent C designs in Vivado HLS to obtain the VHDL models. 3. Handling the resulting VHDL models with the Xilinx tools to produce the bitstream programmed onto an FPGA or the Zynq device.
In the study of the second step, a number of problematic constructs were found that affect the Vivado HLS synthesis of the C designs generated by ORCC. These constructs are related to the way the C specification generated by ORCC is structured, which Vivado HLS cannot handle at certain points of the synthesis. A "manual" transformation of the ORCC-generated designs was therefore proposed, changing the original models as little as possible, so that the synthesis can be carried out in Vivado HLS and a correct VHDL file produced.
This document is structured as a research report. First, the motivations and objectives of the work are presented. Then a state-of-the-art analysis of the elements needed for its development is given, providing the basic concepts required to understand the document: a description of the RVC-CAL and VHDL languages, together with an introduction to the ORCC and Vivado tools, analysing the strengths and main features of both. Once the behaviour of both tools is known, the solutions developed in our study of the synthesis of RVC-CAL models are described, highlighting the problematic points mentioned above that Vivado HLS cannot handle when synthesising the C designs generated by the ORCC compiler. The solutions proposed for these synthesis errors are then presented, with the aim of obtaining a C specification better suited to Vivado HLS synthesis and thus the appropriate VHDL models. Finally, a set of conclusions is drawn from the analyses and developments carried out, and a series of future lines of work is proposed with which the study could be continued and the research in this document completed.

Relevance:

100.00%

Publisher:

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested using three fs laser systems with different operating properties on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties of an OCT system (resolution, distortion, sensitivity decay, scan linearity) was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. The characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive. Standard central processing unit (CPU) based processing can take several minutes to a few hours for acquired data, so data processing is a significant bottleneck. An alternative is expensive hardware-based processing such as field programmable gate arrays (FPGAs). More recently, however, graphics processing unit (GPU) based data processing methods have been developed to minimise this processing and rendering time. These techniques include standard processing methods, a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier-domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time, with throughput limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels, and the acceleration of data processing with GPUs is also useful in other fields.
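
To make the "raw interference data to A-scans" step concrete, the sketch below shows a generic spectral-domain OCT processing chain (background subtraction, windowing, Fourier transform, log magnitude) on the CPU with NumPy; the thesis accelerates this kind of pipeline on a GPU, and array shapes and names here are illustrative.

```python
import numpy as np

def spectra_to_ascans(raw: np.ndarray) -> np.ndarray:
    """raw: (n_alines, n_samples) interference spectra -> log-magnitude A-scans."""
    background = raw.mean(axis=0, keepdims=True)   # fixed-pattern / DC background
    fringes = raw - background                     # background subtraction
    window = np.hanning(raw.shape[1])              # spectral apodisation
    depth = np.fft.fft(fringes * window, axis=1)   # Fourier transform to depth
    half = depth[:, : raw.shape[1] // 2]           # keep positive depths only
    return 20 * np.log10(np.abs(half) + 1e-12)     # log-magnitude A-scans

ascans = spectra_to_ascans(np.random.rand(8, 2048))
print(ascans.shape)   # (8, 1024)
```

A real pipeline would also include wavenumber resampling and dispersion compensation before the transform; these are the stages whose cost makes GPU acceleration worthwhile.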

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this investigation was to develop and implement a general-purpose VLSI (Very Large Scale Integration) test module based on an FPGA (Field Programmable Gate Array) system to verify the mechanical behavior and performance of MEMS sensors, with associated corrective capabilities, and to make use of the evolving System-C, a new open-source HDL (Hardware Description Language), for the design of the FPGA functional units. System-C is becoming widely accepted as a platform for modeling, simulating and implementing systems consisting of both hardware and software components. In this investigation, a dual-axis accelerometer (ADXL202E) and a temperature sensor (TMP03) were used to verify the test module. Results of the test module measurements were analyzed for repeatability and reliability and then compared to the sensor datasheets. Ideas for further study were identified based on the results analysis. ASIC (Application Specific Integrated Circuit) design concepts were also pursued.
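
Both sensors named above encode their readings in the duty cycle of a digital output, which is what an FPGA test module would measure and convert. The sketch below uses the nominal datasheet scalings as commonly quoted (treat them as assumptions here): roughly 50% duty cycle at 0 g with 12.5% per g for the ADXL202E, and 235 - 400*(T1/T2) degC for the TMP03. Function names are illustrative, not taken from the test module.

```python
def adxl202_acceleration_g(t1: float, t2: float) -> float:
    """ADXL202E PWM output: t1 = high time, t2 = full period."""
    duty = t1 / t2
    return (duty - 0.5) / 0.125          # nominal 12.5% duty-cycle change per g

def tmp03_temperature_c(t1: float, t2: float) -> float:
    """TMP03 output: t1 = high time, t2 = low time of the waveform."""
    return 235.0 - 400.0 * (t1 / t2)

print(adxl202_acceleration_g(t1=0.5625, t2=1.0))   # ~ +0.5 g
print(tmp03_temperature_c(t1=10.0, t2=19.05))      # ~ 25 degC
```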