718 results for FPGA


Relevance:

10.00%

Publisher:

Abstract:

The goal of this thesis was to implement an optical radio link using a software-defined radio. The first part of the work reviews software-defined radio at a general level as well as the optical data transmission methods in common use today. The middle part covers the hardware and software used in the work and the design and implementation of the optical radio front end. The final part analyses the operation of the implemented front end. A software radio, more generally a software-defined radio, is a radio device whose functionality, such as modulation, filtering, and the frequency band used for communication, can be changed in software without hardware modifications. Most often the functionality of a software-defined radio is defined by programming the field-programmable gate arrays (FPGAs) of the SDR peripheral. The design of the optical radio front end was based on an infrared transmitter and receiver intended for audio use, which were modified to operate at visible-light wavelengths. The SDR peripheral was an Ettus USRP1 equipped with low-frequency transmitter and receiver daughterboards. The programming environment was Linux Ubuntu, and the software was GNURadio together with its graphical programming interface, Gnu Radio Companion. The end result of the thesis was a prototype of the optical radio front end built on a printed circuit board, which was able to transfer digital audio at a data rate of 300 kbps over a distance of a few centimetres in a dark room.
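
As a rough illustration of the kind of baseband processing such a link involves (not the actual GNU Radio Companion flowgraph used in the thesis), the following NumPy sketch generates and detects an on-off-keyed stream at the reported 300 kbps. The sample rate, pulse shape, and noise level are assumptions made for this example.

```python
# Illustrative sketch (not the thesis flowgraph): on-off keying of a bit
# stream at 300 kbps, as a visible-light front end might be driven.
# Sample rate, pulse shaping and noise level are assumed values.
import numpy as np

BIT_RATE = 300_000          # 300 kbps, as reported in the thesis
SAMP_RATE = 2_400_000       # assumed sample rate (8 samples per bit)
SAMPLES_PER_BIT = SAMP_RATE // BIT_RATE

def ook_modulate(bits: np.ndarray) -> np.ndarray:
    """Map bits {0,1} to LED drive levels by rectangular pulse shaping."""
    return np.repeat(bits.astype(float), SAMPLES_PER_BIT)

def ook_demodulate(samples: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Integrate-and-dump detection: average each bit period and threshold."""
    bit_periods = samples.reshape(-1, SAMPLES_PER_BIT)
    return (bit_periods.mean(axis=1) > threshold).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tx_bits = rng.integers(0, 2, 3000)
    waveform = ook_modulate(tx_bits)
    noisy = waveform + rng.normal(0, 0.2, waveform.size)  # receiver noise
    rx_bits = ook_demodulate(noisy)
    print("bit errors:", int(np.sum(tx_bits != rx_bits)))
```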

Relevance:

10.00%

Publisher:

Abstract:

The SD card (Secure Digital Memory Card) is a widely used portable storage medium. Recent research on SD cards has focused mainly on SD card controllers based on FPGAs (Field Programmable Gate Arrays). Most of these designs rely on an API (Application Programming Interface), the AHB bus (Advanced High-performance Bus), etc., and are dedicated to achieving very high-speed communication between the SD card and the host system. Research on SD card controllers plays a vital role in high-speed cameras and other specialised fields. This design of an FPGA-based file system and SD2.0 IP (Intellectual Property core) not only achieves a good transmission rate but also provides systematic file management, while retaining strong portability and practicality. The design and implementation of the file system on an SD card covers three main points of IP innovation. First, the combination and integration of the file system and the SD card controller makes the overall system highly integrated and practical. The popular SD2.0 protocol is implemented for the communication channel, and a pure digital logic design in VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) integrates the SD card controller in the hardware layer with the FAT32 file system for the entire system. Second, the file management mechanism makes file processing more convenient, especially for batches of small files; it eases the pressure on the host system to access and process them frequently, thereby enhancing overall system efficiency. Third, the digital design ensures good performance. For transmission security, a CRC (Cyclic Redundancy Check) algorithm protects data transfers. Each module is designed to be independent of platform-specific macro cells, which keeps the design portable, and custom instructions and interfaces make it easy to integrate. The design was tested on multiple platforms, namely Xilinx and Altera FPGA development boards, and the timing simulation and debugging of each module were covered. Test results show that the designed FPGA-based file system IP supports SD, TF, and Micro SD cards with the 2.0 protocol in SD bus mode, and successfully implements systematic management of stored files. Read and write rates on a Kingston Class 10 card are approximately 24.27 MB/s and 16.94 MB/s, respectively.
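
As a small behavioural illustration of one of the protocol details mentioned above, the sketch below computes the CRC-7 that protects SD command frames (polynomial x^7 + x^3 + 1). It is only a Python reference model, not the VHDL of the described IP core, but it can be checked against the well-known CMD0 frame, whose final byte is 0x95.

```python
# Sketch of the CRC-7 used on SD command frames (polynomial x^7 + x^3 + 1).
# Behavioural reference only; the IP core described above implements the
# check in VHDL.

def crc7(data: bytes) -> int:
    """Bitwise CRC-7 over the 40-bit command (start/transmission bits,
    command index, 32-bit argument)."""
    crc = 0
    for byte in data:
        for bit in range(7, -1, -1):
            crc <<= 1
            if ((byte >> bit) & 1) ^ ((crc >> 7) & 1):
                crc ^= 0x09          # feedback taps x^3 + 1 (x^7 implicit)
            crc &= 0x7F
    return crc

if __name__ == "__main__":
    cmd0 = bytes([0x40, 0x00, 0x00, 0x00, 0x00])   # CMD0, argument 0
    crc = crc7(cmd0)
    last_byte = (crc << 1) | 1                      # CRC7 plus end bit
    print(hex(crc), hex(last_byte))                 # expected: 0x4a 0x95
```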

Relevance:

10.00%

Publisher:

Abstract:

The invention consists of a system, implemented on a microcontroller or FPGA, that encrypts and decrypts information using a symmetric cipher based on a key table that is traversed by a nonlinear filter generator. This produces a keystream that is combined with the input through a bitwise XOR operation, yielding words of ciphertext or plaintext depending on whether the input is the plaintext or the ciphertext, respectively. As a result, the same message can be encrypted in very different ways depending on the moment at which it is encrypted.
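
The patent summary above does not disclose the concrete key-table size, register widths, or filter function, so the following Python sketch only illustrates the general construction: a shift-register state passed through a nonlinear filter selects entries from a key table, and the resulting keystream is XORed with the data, making encryption and decryption the same operation. Every parameter here is an illustrative assumption.

```python
# Generic sketch of the idea (not the patented design): an LFSR state is run
# through a nonlinear filter to pick entries from a key table, producing a
# keystream that is XORed with the data. Widths, taps, key table and the
# filter function are all illustrative assumptions.

KEY_TABLE = bytes((i * 37 + 11) % 256 for i in range(256))  # placeholder

def keystream(seed: int, n: int):
    state = seed & 0xFFFF
    for _ in range(n):
        # 16-bit LFSR step (tap choice is arbitrary for the sketch)
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = ((state >> 1) | (bit << 15)) & 0xFFFF
        # Nonlinear filter of the state selects a key-table index
        index = ((state & 0xFF) * ((state >> 8) | 1)) & 0xFF
        yield KEY_TABLE[index]

def xor_crypt(data: bytes, seed: int) -> bytes:
    """Encryption and decryption are the same XOR operation."""
    return bytes(d ^ k for d, k in zip(data, keystream(seed, len(data))))

if __name__ == "__main__":
    msg = b"sensor frame 0001"
    ct = xor_crypt(msg, seed=0xACE1)
    assert xor_crypt(ct, seed=0xACE1) == msg
    print(ct.hex())
```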

Relevance:

10.00%

Publisher:

Abstract:

Over the past few years, the number of wireless network users has been increasing. Until now, Radio Frequency (RF) has been the dominant technology. However, the electromagnetic spectrum in this region is becoming saturated, calling for alternative wireless technologies. Recently, with the growing market for LED lighting, Visible Light Communications (VLC) has been drawing attention from the research community. First, the LED is an efficient device for illumination. Second, it is easy to modulate and offers high bandwidth. Finally, it can combine illumination and communication in the same device; in other words, it allows highly efficient wireless communication systems to be implemented. One of the most important aspects of a communication system is its reliability over noisy channels, where the received data can be affected by errors. To ensure proper system operation, a channel encoder is usually employed. Its function is to encode the data to be transmitted so as to increase system performance, typically using error-correcting codes (ECC) that append redundant information to the original data. At the receiver side, this redundant information is used to recover the erroneous data. This dissertation presents the implementation of a channel encoder for VLC. Several techniques were considered, such as Reed-Solomon and convolutional codes, block and convolutional interleaving, CRC, and puncturing. A detailed analysis of the characteristics of each technique was made in order to choose the most appropriate ones. Simulink models were created to simulate how different codes behave in different scenarios. The models were then implemented on an FPGA and simulated; hardware co-simulations were also used to speed up the simulations. In the end, different techniques were combined to create a complete channel encoder capable of detecting and correcting both random and burst errors, thanks to the use of an RS(255,213) code with a block interleaver. Furthermore, after the decoding process, the proposed system can identify uncorrectable errors in the decoded data by means of the CRC-32 algorithm.
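
Two of the building blocks named above, block interleaving and the CRC-32 integrity check, are simple enough to sketch directly with the Python standard library; the RS(255,213) encoder itself is omitted, and the block dimensions below are assumptions rather than the ones used in the dissertation.

```python
# Sketch of a block interleaver (write row-wise, read column-wise) and a
# CRC-32 integrity check. Depths and sizes are assumed for the example.
import zlib

def block_interleave(data: bytes, rows: int, cols: int) -> bytes:
    """Write the block row by row, read it out column by column, so a burst
    of channel errors is spread across several codewords."""
    assert len(data) == rows * cols
    return bytes(data[r * cols + c] for c in range(cols) for r in range(rows))

def block_deinterleave(data: bytes, rows: int, cols: int) -> bytes:
    return bytes(data[c * rows + r] for r in range(rows) for c in range(cols))

def append_crc32(payload: bytes) -> bytes:
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_crc32(frame: bytes) -> bool:
    payload, rx_crc = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == rx_crc

if __name__ == "__main__":
    frame = append_crc32(bytes(range(4 * 63 - 4)) )     # toy 4x63 block
    tx = block_interleave(frame, rows=4, cols=63)
    rx = block_deinterleave(tx, rows=4, cols=63)
    print(rx == frame, check_crc32(rx))                 # True True
```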

Relevance:

10.00%

Publisher:

Abstract:

Many applications, including communications, test and measurement, and radar, require the generation of signals with a high degree of spectral purity. One method for producing tunable, low-noise source signals is to combine the outputs of multiple direct digital synthesizers (DDSs) arranged in a parallel configuration. In such an approach, if all noise is uncorrelated across channels, the noise will decrease relative to the combined signal power, resulting in a reduction of sideband noise and an increase in SNR. However, in any real array, the broadband noise and spurious components will be correlated to some degree, limiting the gains achieved by parallelization. This thesis examines the potential performance benefits that may arise from using an array of DDSs, with a focus on several types of common DDS errors, including phase noise, phase truncation spurs, quantization noise spurs, and quantizer nonlinearity spurs. Measurements to determine the level of correlation among DDS channels were made on a custom 14-channel DDS testbed. The investigation of the phase noise of a DDS array indicates that the contribution of the DACs to the phase noise can be decreased to a desired level by using a large enough number of channels. In such a system, the phase noise qualities of the source clock and the system cost and complexity will be the main limitations on the phase noise of the DDS array. The study of phase truncation spurs suggests that, at least in our system, the phase truncation spurs are uncorrelated, contrary to the theoretical prediction. We believe this decorrelation is due to an unidentified mechanism in our DDS array that is unaccounted for in our current operational DDS model. This mechanism, likely due to some timing element in the FPGA, introduces randomness into the relative phases of the truncation spurs from channel to channel each time the DDS array is powered up. This randomness decorrelates the phase truncation spurs, opening the potential for SFDR gain from using a DDS array. The analysis of the correlation of quantization noise spurs in an array of DDSs shows that the total quantization noise power of each DDS channel is uncorrelated across channels for nearly all DAC output bit widths. This suggests that a nearly N-fold gain in SQNR is possible for an N-channel array of DDSs. This gain will be most apparent for low-bit DACs, in which quantization noise is notably higher than the thermal noise contribution. Lastly, the measurements of the correlation of quantizer nonlinearity spurs demonstrate that the second and third harmonics are highly correlated across channels at all frequencies tested. This means that there is no benefit to using an array of DDSs for mitigating in-band quantizer nonlinearities; alternate methods of harmonic spur management must be employed.
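
The uncorrelated-noise argument can be checked with a toy simulation: summing N channels that carry the same tone but independent noise should raise the SNR by roughly 10·log10(N) dB. The sketch below uses 14 channels to mirror the testbed, but the tone frequency and noise level are arbitrary assumptions.

```python
# Toy check of the uncorrelated-noise argument: N channels carrying the same
# tone but independent noise are summed, and the SNR gain is compared with
# the ideal 10*log10(N) dB. Parameters are assumptions.
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

rng = np.random.default_rng(1)
n_channels, n_samples = 14, 1 << 16           # 14 channels, as on the testbed
t = np.arange(n_samples)
tone = np.sin(2 * np.pi * 0.01 * t)           # common (correlated) signal

channels = [tone + rng.normal(0, 0.1, n_samples) for _ in range(n_channels)]
combined = np.sum(channels, axis=0)

single_snr = snr_db(tone, channels[0] - tone)
array_snr = snr_db(n_channels * tone, combined - n_channels * tone)
print(f"single channel: {single_snr:.1f} dB, "
      f"{n_channels}-channel array: {array_snr:.1f} dB "
      f"(ideal gain {10 * np.log10(n_channels):.1f} dB)")
```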

Relevance:

10.00%

Publisher:

Abstract:

International audience

Relevance:

10.00%

Publisher:

Abstract:

The philosophy of minimalism in robotics promotes gaining an understanding of the sensing and computational requirements for solving a task. This minimalist approach stands in contrast to the common practice of first taking an existing sensorimotor system and only afterwards determining how to apply the robotic system to the task. While it may seem convenient to simply apply existing hardware to the task at hand, this design philosophy often proves wasteful in terms of energy consumption and cost, and brings unnecessary complexity and decreased reliability. While impressive in their versatility, complex robots such as the PR2 (which costs hundreds of thousands of dollars) are impractical for many common applications. Instead, if a specific task is required, the sensing and computational requirements can be determined for that task, and a clever hardware implementation can be built to accomplish it. Since this minimalist hardware is designed around accomplishing the specified task, significant reductions in hardware complexity can be obtained, which can lead to large advantages in battery life, cost, and reliability. Even when cost is of no concern, battery life is often a limiting factor, so a minimalist hardware system is critical to meeting the system requirements. In this thesis, we discuss an implementation of a counting, tracking, and actuation system as it relates to ergodic bodies to illustrate a minimalist design methodology.

Relevance:

10.00%

Publisher:

Abstract:

Reconfigurable hardware can be used to build a multitasking system in which tasks are assigned to HW resources at run time according to the requirements of the running applications. These tasks are frequently represented as directed acyclic graphs, and their execution is typically controlled by an embedded processor that schedules the graph execution. To improve the efficiency of the system, the scheduler can apply prefetch and reuse techniques that greatly reduce the reconfiguration latencies. For an embedded processor, all these computations represent a heavy computational load that can significantly reduce system performance. To overcome this problem we have implemented a HW scheduler using reconfigurable resources. In addition, we have implemented both prefetch and replacement techniques that achieve results as good as previous, more complex SW approaches, while demanding just a few clock cycles to carry out the computations. We consider the HW cost of the system (in our experiments, 3% of a Virtex-II PRO xc2vp30 FPGA) affordable given the great efficiency of the techniques applied to hide the reconfiguration latency and the negligible run-time penalty introduced by the scheduler computations.
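
A software sketch of the reuse and replacement decision handled by such a scheduler is shown below: tasks arriving in topological order trigger a reconfiguration only when their configuration is not already loaded in one of the reconfigurable regions, with an LRU replacement policy. The latencies, region count, and policy are illustrative assumptions, not the implemented hardware, and prefetching is not modelled.

```python
# Sketch of the reuse/replacement decision: a reconfiguration is only paid
# when the required configuration is not already loaded in a region.
# Latencies, region count and the LRU policy are illustrative assumptions.
from collections import OrderedDict

RECONFIG_LATENCY = 4_000_000   # clock cycles, assumed
TASK_LATENCY = 1_000           # clock cycles per task, assumed
N_REGIONS = 3                  # reconfigurable regions, assumed

def run_graph(topo_order, config_of):
    """topo_order: tasks in a valid topological order of the DAG.
    config_of: maps each task to the configuration (bitstream) it needs."""
    loaded = OrderedDict()     # region contents, kept in LRU order
    cycles = 0
    for task in topo_order:
        cfg = config_of[task]
        if cfg in loaded:
            loaded.move_to_end(cfg)           # reuse: no reconfiguration
        else:
            if len(loaded) >= N_REGIONS:
                loaded.popitem(last=False)    # evict least recently used
            loaded[cfg] = True
            cycles += RECONFIG_LATENCY        # pay the reconfiguration cost
        cycles += TASK_LATENCY
    return cycles

if __name__ == "__main__":
    tasks = ["t0", "t1", "t2", "t3", "t4"]
    cfgs = {"t0": "A", "t1": "B", "t2": "A", "t3": "C", "t4": "B"}
    print("total cycles:", run_graph(tasks, cfgs))
```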

Relevance:

10.00%

Publisher:

Abstract:

The electricity sector is undergoing important changes both in how it is managed and in how its markets operate. One of the keys accelerating this change is the ever-greater penetration of Distributed Energy Resources (DER), which give the user a more prominent role in the management of the electrical system. The complexity of the scenario foreseen for the near future demands that grid equipment be able to interact in a much more dynamic system than today's, where the connection interface must be endowed with the necessary intelligence and communication capability so that the whole system can be managed effectively as a whole. We are currently witnessing the transition from the traditional electrical system model to a new, active and intelligent system known as the Smart Grid. This thesis presents the study of an Intelligent Electronic Device (IED) aimed at providing solutions for the needs arising from the evolution of the electrical system, capable of being integrated into current and future grid equipment and adding functionality, and therefore value, to these systems. To frame the requirements of such IEDs, an extensive background study was carried out, starting with the historical evolution of these systems, the characteristics of the electrical interconnection they must control, and the various functions and solutions they must provide, and ending with a review of the current state of the art. This background also includes a review of international and national standards, necessary to understand the different requirements these devices must meet. Next, the specifications and considerations needed for the design are presented, together with its multifunctional architecture. At this point, some original design approaches are proposed, related to the architecture of the IED and how data should be synchronised depending on the nature of the events and the different functionalities. The development of the system continues with the design of its subsystems, where some novel algorithms are presented, such as an anti-islanding approach based on weighted multiple detection. Once the architecture and functions of the IED were designed, a prototype was developed on a hardware platform. The necessary requirements are analysed, and the choice of a high-performance embedded platform that includes a processor and an FPGA is justified. The prototype was subjected to a Class A test protocol, according to IEC 61000-4-30 and IEC 62586-2, to verify parameter monitoring. Several tests are also presented in which the delays involved in the protection-related algorithms were estimated. Finally, a real test scenario is described, within the context of a Spanish National Research Plan project, in which the prototype was integrated into an inverter, providing it with the intelligence required for a future Smart Grid context.
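
The abstract names a weighted multiple-detection anti-islanding algorithm without detailing it; purely as a hedged illustration of that style of scheme, the sketch below normalises several passive criteria against their thresholds, weights them, and compares the sum with a trip level. The criteria, weights, and thresholds are invented for the example and are not the thesis algorithm.

```python
# Hedged sketch of a "weighted multiple detection" style anti-islanding
# check. The criteria, weights and thresholds below are illustrative
# assumptions, not the thesis algorithm.

THRESHOLDS = {"freq_dev_hz": 0.5, "volt_dev_pu": 0.1, "rocof_hz_s": 1.0}
WEIGHTS    = {"freq_dev_hz": 0.4, "volt_dev_pu": 0.3, "rocof_hz_s": 0.3}
TRIP_LEVEL = 1.0

def islanding_score(freq_hz, volt_pu, rocof_hz_s, f_nom=50.0, v_nom=1.0):
    """Combine normalised deviations of frequency, voltage and ROCOF."""
    measures = {
        "freq_dev_hz": abs(freq_hz - f_nom),
        "volt_dev_pu": abs(volt_pu - v_nom),
        "rocof_hz_s": abs(rocof_hz_s),
    }
    return sum(WEIGHTS[k] * measures[k] / THRESHOLDS[k] for k in measures)

def should_trip(freq_hz, volt_pu, rocof_hz_s) -> bool:
    return islanding_score(freq_hz, volt_pu, rocof_hz_s) >= TRIP_LEVEL

if __name__ == "__main__":
    print(should_trip(50.02, 0.99, 0.1))   # normal grid conditions -> False
    print(should_trip(51.2, 0.85, 2.5))    # island-like excursion  -> True
```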

Relevance:

10.00%

Publisher:

Abstract:

At the HL-LHC, proton bunches will cross each other every 25 ns, producing an average of 140 pp collisions per bunch crossing. To operate in such an environment, the CMS experiment will need an L1 hardware trigger able to identify interesting events within a latency of 12.5 μs. The future L1 trigger will also make use of data coming from the silicon tracker to control the trigger rate. The architecture that will be used to process tracker data in the future is still under discussion. One interesting proposal makes use of the Time Multiplexed Trigger concept, already implemented in the CMS calorimeter trigger for the Phase I trigger upgrade. The proposed track finding algorithm is based on the Hough Transform method. The algorithm has been tested using simulated pp-collision data, and the results show a very good tracking efficiency. The algorithm will be demonstrated in hardware in the coming months using the MP7, a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s.
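
A minimal software sketch of Hough-transform track finding in the r-phi plane is given below (it is not the MP7 firmware). Each stub (r, phi) votes, for every candidate slope bin m, which plays the role of q/pT, for the intercept phi0 = phi - m*r; peaks in the (m, phi0) accumulator are track candidates. The binning, units, and toy stub data are assumptions.

```python
# Minimal Hough-transform track-finding sketch in the r-phi plane, in the
# spirit of the approach above (not the MP7 firmware). Binning, units and
# the toy stub data are assumptions.
import numpy as np

N_M_BINS, N_PHI0_BINS = 32, 64
M_RANGE, PHI0_RANGE = (-0.01, 0.01), (-np.pi, np.pi)

def hough_accumulate(stubs):
    acc = np.zeros((N_M_BINS, N_PHI0_BINS), dtype=int)
    m_bins = np.linspace(*M_RANGE, N_M_BINS)
    for r, phi in stubs:
        phi0 = phi - m_bins * r                  # intercept for each slope bin
        cols = np.floor((phi0 - PHI0_RANGE[0]) /
                        (PHI0_RANGE[1] - PHI0_RANGE[0]) * N_PHI0_BINS).astype(int)
        ok = (cols >= 0) & (cols < N_PHI0_BINS)
        acc[np.arange(N_M_BINS)[ok], cols[ok]] += 1
    return acc

if __name__ == "__main__":
    # Toy track: phi = 0.3 + 0.004*r at six assumed layer radii (cm)
    radii = np.array([25.0, 35.0, 50.0, 68.0, 88.0, 108.0])
    stubs = [(r, 0.3 + 0.004 * r) for r in radii]
    acc = hough_accumulate(stubs)
    m_idx, phi0_idx = np.unravel_index(acc.argmax(), acc.shape)
    print("peak votes:", acc.max(), "at bin", (m_idx, phi0_idx))
```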

Relevance:

10.00%

Publisher:

Abstract:

This doctoral thesis presents different solutions for acquiring data from resistive sensor arrays, and in particular from piezoresistive tactile sensors. The proposed circuits reduce the classical conditioning and acquisition hardware by implementing a direct connection between the sensor and the digital device (FPGA) that receives the data. The goal is the parallel, low-cost, low-area acquisition of large amounts of data from array sensors, exploiting the ability of FPGAs to measure several sensors simultaneously. Depending on the type of addressing that can be used, two solutions are proposed. When the number of sensing elements in the array is not excessively high and the addressing can be done without shared wiring, the resistance of each element of the array is obtained from the discharge time of an RC network, or passive integrator, that includes the sensor. On the other hand, for arrays with a large number of elements, or where addressing relies on shared connections, the use of an active integrator circuit reduces crosstalk between the elements measured simultaneously. The analysis and characterisation of the proposed circuits over the resistance range of a piezoresistive tactile sensor yields an effective analog-to-digital resolution of 10 bits and 8 bits for the direct-connection circuits based on the passive and active integrator, respectively. Regarding the accuracy of the resistance measurement, relative errors of 0.066% (passive integrator) and 0.77% (active integrator) are achieved using a novel calibration technique that relies on a single reference element. Finally, an architecture for a tactile system based on the above circuits is proposed. Two implementations have been developed: a prototype for characterisation and laboratory tests, and a demonstrator on a commercial robotic hand (the Barrett Hand). These implementations show that the tactile system can refresh the whole set of sensors at a rate high enough for applications that require a fast dynamic response (for example, slip detection in manipulation tasks with robotic hands). In addition, the parallelism of FPGAs is exploited not only for data acquisition; the pre-processing that can be carried out in the resulting smart sensor has great potential. As an example, in this work the geometric moments and the associated ellipse are extracted from the tactile images acquired by each of the sensors that make up the system.
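
The passive-integrator interface lends itself to a short worked example: the FPGA times the discharge of the RC node from V0 down to its input threshold Vth, so t = R·C·ln(V0/Vth) and the ratio of two discharge times cancels C, V0, and Vth, which is one way a single reference element can calibrate the measurement. The component values below are assumptions, and the exact calibration procedure of the thesis may differ.

```python
# Sketch of the passive-integrator direct interface: the FPGA counts the time
# for the RC node to discharge from V0 to the input threshold Vth,
# t = R*C*ln(V0/Vth), and a single known reference resistor cancels the
# unknowns: R_x = R_ref * t_x / t_ref. Component values are assumptions.
import math

C = 10e-9               # integrator capacitance (F), assumed
V0, VTH = 3.3, 1.65     # supply and FPGA input threshold (V), assumed
CLK_HZ = 100e6          # FPGA timer clock, assumed

def discharge_ticks(r_ohm: float) -> int:
    """Clock ticks counted until the node crosses the threshold."""
    t = r_ohm * C * math.log(V0 / VTH)
    return round(t * CLK_HZ)

def estimate_resistance(ticks_x: int, ticks_ref: int, r_ref: float) -> float:
    """Single-reference calibration: ratio of discharge times."""
    return r_ref * ticks_x / ticks_ref

if __name__ == "__main__":
    r_ref = 10_000.0                    # known reference element (ohms)
    r_true = 4_700.0                    # unknown piezoresistive element
    t_ref = discharge_ticks(r_ref)
    t_x = discharge_ticks(r_true)
    print(f"estimated R = {estimate_resistance(t_x, t_ref, r_ref):.1f} ohm")
```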

Relevance:

10.00%

Publisher:

Abstract:

Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource cost. The characteristics of software applications can be identified using profiling tools, and hardware acceleration yields significant performance improvements for highly mathematical calculations or frequently repeated functions. The performance of SoC systems can then be improved by accelerating in hardware the elements that incur performance overheads. The concepts presented in this study can be applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified using attributes such as cycles per loop and loop counts. (2) FPGA-based (Field-Programmable Gate Array) hardware acceleration is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted into a hardware accelerator and mapped onto the hardware platform, and two types of hardware acceleration, a central bus design and a co-processor design, are implemented for comparison in the proposed architecture. (3) System characteristics such as performance, energy consumption, and resource cost are measured and analysed; the trade-off between these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated against the system requirements. (4) A system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used to achieve higher performance at lower resource cost. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
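
The link between the profiled hotspot and the achievable system speedup can be made explicit with Amdahl's law, S = 1 / ((1 - f) + f/s), where f is the fraction of run time spent in the hotspot and s is the accelerator speedup on it. The helper below uses assumed example numbers; it does not reproduce the measured 2.8X and 7.9X figures.

```python
# Back-of-the-envelope helper for the profiling step: given the hotspot
# fraction f (from the profiler) and the accelerator speedup s on that
# hotspot, Amdahl's law gives the overall system speedup. Example numbers
# are assumptions, not the thesis's measurements.

def overall_speedup(hotspot_fraction: float, accel_speedup: float) -> float:
    """Amdahl's law: S = 1 / ((1 - f) + f / s)."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_speedup)

if __name__ == "__main__":
    for f in (0.5, 0.7, 0.9):
        for s in (10, 50):
            print(f"hotspot {f:.0%}, accelerator {s}x "
                  f"-> system {overall_speedup(f, s):.2f}x")
```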

Relevance:

10.00%

Publisher:

Abstract:

Bilinear pairings can be used to construct cryptographic systems with very desirable properties. A pairing maps members of groups on elliptic and genus 2 hyperelliptic curves to an extension of the finite field over which the curves are defined. The finite fields must, however, be large to ensure adequate security. The complicated group structure of the curves and the expensive field operations result in time-consuming computations that are an impediment to the practicality of pairing-based systems. The Tate pairing can be computed efficiently using the η_T method, and hardware architectures can accelerate the required operations by exploiting the parallelism inherent in the algorithmic and finite field calculations. The Tate pairing can be performed on elliptic curves of characteristic 2 and 3 and on genus 2 hyperelliptic curves of characteristic 2. Curve selection depends on several factors, including the desired computational speed, the area constraints of the target device, and the required security level. In this thesis, custom hardware processors for the acceleration of the Tate pairing are presented and implemented on an FPGA. The underlying hardware architectures are designed with care to exploit the available parallelism while ensuring resource efficiency. The characteristic 2 elliptic curve processor contains novel units that return a pairing result in a very low number of clock cycles; despite the more complicated computational algorithm, the speed of the genus 2 processor is comparable. Pairing computation on each of these curves can be appealing in applications with different requirements. A flexible processor that can perform pairing computation on elliptic curves of characteristic 2 and 3 has also been designed. Finally, an integrated hardware/software design and verification environment has been developed; it automates the procedures required for robust processor creation and enables the rapid provision of solutions for a wide range of cryptographic applications.
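
The field operations such processors parallelise are characteristic-2 polynomial-basis multiplications: a carry-less (XOR-based) multiply followed by reduction modulo an irreducible polynomial. The sketch below shows the idea on GF(2^8) with the AES polynomial, only because that result can be checked against a published value (0x57·0x83 = 0xC1 in FIPS-197); the fields used for pairings have far larger degrees.

```python
# Sketch of characteristic-2 finite-field multiplication: carry-less
# polynomial multiply followed by reduction modulo an irreducible polynomial.
# GF(2^8) with the AES polynomial is used here only so the result can be
# checked against a published value; pairing-friendly fields are far larger.

def gf2_mul(a: int, b: int, mod_poly: int, degree: int) -> int:
    """Multiply a and b as polynomials over GF(2), reduce modulo mod_poly."""
    # Carry-less multiplication: XOR shifted copies of a for each set bit of b
    product = 0
    while b:
        if b & 1:
            product ^= a
        a <<= 1
        b >>= 1
    # Reduction: cancel bits above the field degree, highest first
    for bit in range(product.bit_length() - 1, degree - 1, -1):
        if (product >> bit) & 1:
            product ^= mod_poly << (bit - degree)
    return product

if __name__ == "__main__":
    AES_POLY = 0x11B          # x^8 + x^4 + x^3 + x + 1
    result = gf2_mul(0x57, 0x83, AES_POLY, degree=8)
    print(hex(result))        # 0xc1, matching the FIPS-197 worked example
```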