63 results for RFID System, Authentication, Indistinguishability, Traceability, Distributed DB
Abstract:
Intraoral devices for bite-force sensing have several applications in odontology and maxillofacial surgery, as bite-force measurements provide additional information to help understand the characteristics of bruxism disorders and can also help in the evaluation of post-surgical evolution and in the comparison of alternative treatments. A new system for measuring human bite forces is proposed in this work. This system has future applications in the monitoring of bruxism events and as a complement to its conventional diagnosis. Bruxism is a pathology consisting of grinding or tightly clenching the upper and lower teeth, which leads to several problems such as lesions to the teeth, headaches, orofacial pain and significant disorders of the temporomandibular joint. The prototype uses a magnetic-field communication scheme similar to low-frequency radio frequency identification (RFID) and near-field communication (NFC) technology. The reader generates a low-frequency magnetic field that is used as the information carrier and powers the sensor. The system is notable because it uses an intra-mouth passive sensor and an external interrogator, which remotely records and processes information regarding a patient's dental activity. This permits a quantitative assessment of bite force without requiring intra-mouth batteries, and can provide information supplementary to polysomnographic recordings, currently the most adequate early diagnostic method, so that corrective actions can be initiated before irreversible dental wear appears. In addition to describing the system's operational principles and the manufacture of personalized prototypes, this report also demonstrates the feasibility of the system and presents results from the first in vitro and in vivo trials.
Abstract:
Many applications in several domains, such as telecommunications, network security and large-scale sensor networks, require online processing of continuous data flows. They produce very high loads that require aggregating the processing capacity of many nodes. Current Stream Processing Engines do not scale with the input load due to single-node bottlenecks. Additionally, they are based on static configurations that lead to either under- or over-provisioning. In this paper, we present StreamCloud, a scalable and elastic stream processing engine for processing large data stream volumes. StreamCloud uses a novel parallelization technique that splits queries into subqueries that are allocated to independent sets of nodes in a way that minimizes the distribution overhead. Its elastic protocols exhibit low intrusiveness, enabling effective adjustment of resources to the incoming load. Elasticity is combined with dynamic load balancing to minimize the computational resources used. The paper presents the system design, implementation and a thorough evaluation of the scalability and elasticity of the fully implemented system.
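To make the parallelization idea concrete, below is a minimal sketch of key-based stream partitioning of the kind such query splitting relies on; the node names, the hashing scheme and the call-record tuples are illustrative assumptions, not StreamCloud's actual implementation. Tuples are routed to one node of a subcluster by hashing their partitioning key, so each subquery instance processes a disjoint share of the keys.

    import hashlib

    def route(key, nodes):
        # Hash the tuple's partitioning key and map it onto one node of
        # the subcluster that hosts this subquery (illustrative scheme).
        h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
        return nodes[h % len(nodes)]

    # Example: an aggregation subquery deployed on a three-node subcluster.
    subcluster = ["node-a", "node-b", "node-c"]
    for caller, duration in [("alice", 120), ("bob", 35), ("alice", 42)]:
        print(caller, duration, "->", route(caller, subcluster))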
Abstract:
This paper is on homonymous distributed systems where processes are prone to crash failures and have no initial knowledge of the system membership ("homonymous" means that several processes may have the same identifier). New classes of failure detectors suited to these systems are first defined. Among them, the classes HΩ and HΣ are introduced that are the homonymous counterparts of the classes Ω and Σ, respectively. (Recall that the pair ⟨Ω, Σ⟩ defines the weakest failure detector to solve consensus.) Then, the paper shows how HΩ and HΣ can be implemented in homonymous systems without membership knowledge (under different synchrony requirements). Finally, two algorithms are presented that use these failure detectors to solve consensus in homonymous asynchronous systems where there is no initial knowledge of the membership. One algorithm solves consensus with ⟨HΩ, HΣ⟩, while the other uses only HΩ, but needs a majority of correct processes. Observe that systems with unique identifiers and anonymous systems are extreme cases of homonymous systems, from which it follows that all these results also apply to those systems. Interestingly, the new failure detector class HΩ can be implemented with partial synchrony, while the analogous class AΩ defined for anonymous systems cannot be implemented (even in synchronous systems). Hence, the paper provides us with the first proof showing that consensus can be solved in anonymous systems with only partial synchrony (and a majority of correct processes).
Abstract:
We present a new method to accurately locate persons indoors by fusing inertial navigation system (INS) techniques with active RFID technology. A foot-mounted inertial measurement unit (IMU)-based position estimation method is aided by the received signal strengths (RSSs) obtained from several active RFID tags placed at known locations in a building. In contrast to other authors, who integrate IMUs and RSS with a loose Kalman filter (KF)-based coupling (using the residuals of inertial- and RSS-calculated positions), we present a tight KF-based INS/RFID integration, using the residuals between the INS-predicted reader-to-tag ranges and the ranges derived from a generic RSS path-loss model. Our approach also includes other drift reduction methods, such as zero velocity updates (ZUPTs) at foot stance detections, zero angular-rate updates (ZARUs) when the user is motionless, and heading corrections using magnetometers. A complementary extended Kalman filter (EKF), through its 15-element error state vector, compensates the position, velocity and attitude errors of the INS solution, as well as the IMU biases. This methodology is valid for any kind of motion (forward, lateral or backward walk, at different speeds) and does not require offline calibration for the user's gait. The integrated INS+RFID methodology eliminates the typical drift of IMU-alone solutions (approximately 1% of the total traveled distance), resulting in typical positioning errors along the walking path (no matter its length) of approximately 1.5 m.
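As a sketch of the tight-coupling residual described above, the snippet below inverts a generic log-distance path-loss model to obtain an RSS-derived range and compares it with the INS-predicted reader-to-tag range; the path-loss parameters and positions are illustrative assumptions, not the paper's calibrated values.

    import math

    def rss_to_range(rss_dbm, p0_dbm=-45.0, n=2.2, d0=1.0):
        # Invert a generic log-distance path-loss model,
        # RSS = P0 - 10*n*log10(d/d0); parameter values are assumed.
        return d0 * 10 ** ((p0_dbm - rss_dbm) / (10.0 * n))

    def range_residual(ins_pos, tag_pos, rss_dbm):
        # Residual between the INS-predicted reader-to-tag range and the
        # RSS-derived range; this is the innovation the tight EKF consumes.
        predicted = math.dist(ins_pos, tag_pos)
        return predicted - rss_to_range(rss_dbm)

    print(range_residual((0.0, 0.0, 1.0), (3.0, 4.0, 1.0), -59.0))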
Abstract:
Networks are the substance of human communities and societies; they constitute the structural framework on which we relate to each other, and they determine the way we do it, the way information is disseminated or even the way people get things done. But network prominence goes beyond the importance it acquires in social networks. Networks are found within numerous known structures, from protein interactions inside a cell to router connections on the internet. Social networks have been present on the internet since its beginnings, in email for example: inside every email client there are contact lists that, added together, constitute a social network. However, it has been with the emergence of social network sites (SNS) that these kinds of web applications have reached general awareness. SNS are now among the most popular sites on the web and those with the highest traffic. Sites such as Facebook and Twitter hold astonishing figures of active users, traffic and time invested in the sites. Nevertheless, SNS functionalities are not restricted to contact-oriented social networks, those that are focused on building your own list of contacts and interacting with them. There are other examples of sites that leverage social networking to foster user activity and engagement around other types of content. Examples range from early SNS such as Flickr, the photography-oriented networking site, to Github, the most popular social code repository nowadays. It is not an accident that the popularity of these websites comes hand in hand with their social network capabilities. The scenario is even richer, since SNS interact with each other, sharing and exporting contact lists and authentication as well as providing a valuable channel to publicize user activity on other sites. These interactions are very recent, and they are still finding their way to the point where SNS overcome their condition of data silos and reach a stage of full interoperability between sites, in the same way email and instant messaging networks work today. This work introduces a technology for rapidly building any kind of distributed social network website. It first introduces a new technique to create middleware that can provide any kind of content management feature to a popular model-view-controller (MVC) web development framework, Ruby on Rails. This technique provides developers with tools that allow them to abstract from the complexities related to content management and focus on the development of the specific content. The same technique is also used to provide the framework with social network features. Additionally, a new metric of code reuse is described to assert the validity of the kind of middleware that is emerging in MVC frameworks. Secondly, the characteristics of the most popular SNS are analysed in order to find the common patterns that appear in them. This analysis is the ground for defining the requirements of a framework for building social network websites. Next, a reference architecture supporting the features found in the analysis is proposed. This architecture has been implemented in a software component, called Social Stream, and tested in several social networks, both contact- and content-oriented, in a local neighbourhood association and in EU-funded research projects. It has also been the ground for several Master's theses. It has been released as free and open source software, has obtained a growing community and is now being used beyond the scope of this work. The social architecture has enabled the definition of a new social-based access control model that overcomes some of the limitations currently present in access control models for social networks. Furthermore, paradigms and case studies in distributed SNS have been analysed, gathering the set of features a framework for building distributed social networks must fulfil. Finally, the architecture of the framework has been extended to support distributed SNS capabilities. Its implementation has also been validated in EU-funded research projects.
Abstract:
In this paper we propose a flexible multi-agent architecture, together with a methodology for indoor location, which allows us to locate any mobile station (MS), such as a laptop, smartphone, tablet or robotic system, in an indoor environment using wireless technology. Our technology is complementary to GPS location finding, as it allows us to locate a mobile system in a specific room on a specific floor using Wi-Fi networks. The idea is that any MS will have at its disposal an agent known as a Fuzzy Location Software Agent (FLSA), with a minimum of processing capacity, which collects the power received at different access points distributed around the floor and establishes its location on a plan of the floor of the building. In order to do so, it communicates with the Fuzzy Location Manager Software Agent (FLMSA). The FLMSAs are local agents that form part of the management infrastructure of the organization's Wi-Fi network. The FLMSA implements a location estimation methodology divided into three phases (measurement, calibration and estimation) for locating mobile stations. Our solution is a fingerprint-based positioning system that overcomes the problem of the relative effect of doors and walls on signal strength and is independent of the network device manufacturer. In the measurement phase, our system collects received signal strength indicator (RSSI) measurements from multiple access points. In the calibration phase, our system uses these measurements in a normalization process to create a radio map, a database of RSS patterns. Unlike traditional radio-map-based methods, our methodology normalizes RSS measurements collected at different locations on a floor. In the third phase, we use fuzzy controllers to locate an MS on the plan of the floor of a building. Experimental results demonstrate the accuracy of the proposed method. From these results it is clear that the system is highly likely to be able to locate an MS in a room or an adjacent room.
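The sketch below illustrates the three-phase fingerprinting flow under stated assumptions: the normalization scheme, the RSSI values and the nearest-pattern matching are invented for illustration, and the real estimation phase uses fuzzy controllers rather than a distance rule.

    def normalize(rssi):
        # Rescale an RSSI vector to [0, 1] so patterns collected at
        # different locations become comparable (an assumed scheme).
        lo, hi = min(rssi), max(rssi)
        return [(r - lo) / ((hi - lo) or 1) for r in rssi]

    # Calibration phase: a radio map of normalized RSS patterns.
    radio_map = {
        "room-101": normalize([-40, -62, -75]),
        "room-102": normalize([-58, -44, -70]),
        "corridor": normalize([-55, -57, -52]),
    }

    def locate(measurement):
        # Estimation phase (simplified): return the location whose stored
        # pattern is closest to the normalized measurement.
        m = normalize(measurement)
        return min(radio_map,
                   key=lambda loc: sum((a - b) ** 2
                                       for a, b in zip(radio_map[loc], m)))

    print(locate([-42, -60, -77]))  # expected: room-101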
Abstract:
With the advent of cloud computing, many applications have embraced the ensuing paradigm shift towards modern distributed key-value data stores, like HBase, in order to benefit from the elastic scalability on offer. However, many applications still hesitate to make the leap from the traditional relational database model simply because they cannot compromise on the standard transactional guarantees of atomicity, isolation, and durability. To get the best of both worlds, one option is to integrate an independent transaction management component with a distributed key-value store. In this paper, we discuss the implications of this approach for durability. In particular, if the transaction manager provides durability (e.g., through logging), then we can relax durability constraints in the key-value store. However, if a component fails (e.g., a client or a key-value server), then we need a coordinated recovery procedure to ensure that commits are persisted correctly. In our research, we integrate an independent transaction manager with HBase. Our main contribution is a failure recovery middleware for the integrated system, which tracks the progress of each commit as it is flushed down by the client and persisted within HBase, so that we can recover reliably from failures. During recovery, commits that were interrupted by the failure are replayed from the transaction management log. Importantly, the recovery process does not interrupt transaction processing on the available servers. Using a benchmark, we evaluate the impact of component failure, and subsequent recovery, on application performance.
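A minimal sketch of the replay idea, under loudly stated assumptions: the log iterator and the has_marker/apply/put_marker store interface are hypothetical stand-ins, not the middleware's actual API, and re-applying a write is assumed idempotent. On recovery, any commit present in the transaction-management log but not marked as persisted in the key-value store is replayed.

    def recover(txn_log, store):
        # txn_log yields (txn_id, writes) in commit order; store is a
        # hypothetical key-value client (e.g., fronting HBase). Replay
        # commits whose writes never became durable in the store.
        replayed = 0
        for txn_id, writes in txn_log:
            if store.has_marker(txn_id):   # commit already fully persisted
                continue
            for key, value in writes:      # re-apply is assumed idempotent
                store.apply(key, value)
            store.put_marker(txn_id)       # record that the commit is durable
            replayed += 1
        return replayed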
Abstract:
A distributed power architecture for an aerospace application with very restrictive specifications is analyzed. Parameters such as volume, weight and losses are analyzed for the considered power architectures. In order to protect the three-phase generator against high load steps, an intermediate bus (based on a high capacitance) that provides energy to the loads during the high load steps is included. Prototypes of the selected architecture for the rectifier and EMI filter are built, and the energy control is validated.
Abstract:
In this paper, an architecture based on a scalable and flexible set of evolvable processing arrays is presented. FPGA-native Dynamic Partial Reconfiguration (DPR) is used for evolution, which is done intrinsically, letting the system adapt autonomously to variable run-time conditions, including the presence of transient and permanent faults. The architecture supports different modes of operation, namely independent, parallel, cascaded and bypass modes. These modes of operation can be used during evolution time or during normal operation. The evolvability of the architecture is combined with fault-tolerance techniques to enhance the platform with self-healing features, making it suitable for applications that require both high adaptability and reliability. Experimental results show that such a system may benefit from accelerated evolution times, increased performance and improved dependability, mainly by increasing fault tolerance for transient and permanent faults, as well as by providing some fault identification possibilities. The evolvable HW array shown is tailored for window-based image processing applications.
Abstract:
Mediterranean dehesas are one of the European natural habitat types of Community interest (Directive 92/43/EEC), associated with high diversity levels and the production of important goods and services. In this work, the tree contribution and the grazing influence on pasture alpha diversity in a dehesa in Central Spain were studied. We analyzed Richness and Shannon-Wiener (SW) indexes on the herbaceous layer under 16 holm oak trees (64 sampling units distributed in two directions and at two distances from the trunk) distributed across four different grazing management zones (depending on grazing species and stocking rate). Floristic composition by species or morphospecies and species abundance were analyzed for each sampling unit. Linear mixed models (LMMs) and generalized linear mixed models (GLMMs) were used to study the relationships between the alpha diversity measures and the independent factors. The crown edge zone showed the highest values of Richness and the SW index. No significant differences were found between orientations under tree crown influence. Grazing management had a significant effect on the Richness and SW measures, especially the grazing species (cattle or sheep). We preliminarily quantify and analyze the interaction of the tree stratum and grazing management on herbaceous diversity in a year of extreme climatic conditions.
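For reference, the two diversity measures named above can be computed from per-unit species abundances as sketched below; the abundance figures are made up.

    import math

    def shannon_wiener(abundances):
        # H' = -sum(p_i * ln p_i) over the species proportions p_i;
        # Richness is simply the number of species present.
        total = sum(abundances)
        return -sum((n / total) * math.log(n / total)
                    for n in abundances if n)

    sample = [12, 5, 3, 1]   # individuals per species in one sampling unit
    print("Richness:", len(sample))
    print("Shannon-Wiener:", round(shannon_wiener(sample), 3))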
Abstract:
In coffee processing, the fermentation stage is considered one of the critical operations because of its impact on the final quality of the product. However, the level of control of the fermentation process on each farm is often inadequate, and the use of sensors for controlling coffee fermentation is not common. The objective of this work is to characterize the fermentation temperature in a fermentation tank by applying spatial interpolation and a new data analysis methodology based on phase-space diagrams of temperature data, collected by means of multi-distributed, low-cost and autonomous wireless sensors. A real coffee fermentation was supervised in the Cauca region (Colombia) with a network of 24 semi-passive TurboTag RFID temperature loggers with vacuum plastic covers, submerged directly in the fermenting mass. The temporal evolution and spatial distribution of temperature are described in terms of the phase-diagram areas, which characterize the cyclic behaviour of temperature and highlight the significant heterogeneity of thermal conditions at different locations in the tank: the average temperature of the fermentation was 21.2 °C, although there were temperature ranges of 4.6 °C and an average spatial standard deviation of ±1.21 °C. In the upper part of the tank we found high heterogeneity of temperatures, the highest temperatures and therefore the highest fermentation rates, while at the bottom the computed phase-diagram area was practically half of that occupied by the sensors of the upper tank, so this location showed higher temperature homogeneity.
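A minimal sketch of the phase-diagram area measure, assuming a delay embedding and a shoelace polygon area (both assumptions about the method; the temperature series are made up): the series is plotted against a delayed copy of itself, and the area enclosed by the resulting cycle grows with the amplitude of the temperature oscillation at that sensor.

    def phase_area(temps, lag=1):
        # Area enclosed by the phase-space cycle (T(t), T(t+lag)),
        # computed with the shoelace formula on the closed polygon.
        pts = list(zip(temps[:-lag], temps[lag:]))
        area = 0.0
        for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    upper = [20.5, 22.8, 24.1, 22.0, 19.9, 21.3]   # heterogeneous zone
    lower = [20.9, 21.2, 21.5, 21.3, 21.0, 21.1]   # homogeneous zone
    print(phase_area(upper), ">", phase_area(lower))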
Abstract:
In many engineering fields, the integrity and reliability of structures are extremely important aspects. They are controlled through adequate knowledge of existing damage. Typically, achieving the level of knowledge necessary to characterize the structural integrity involves the use of non-destructive testing techniques, which are often expensive and time consuming. Nowadays, many industries seek to increase the reliability of the structures they use. By using leading-edge techniques it is possible to monitor these structures and, in some cases, detect incipient damage that could trigger catastrophic failures. Unfortunately, as the complexity of structures, components and systems increases, the risk of damage and failure also increases and, at the same time, the detection of such failures and defects becomes more difficult. In recent years, the aerospace industry has made great efforts to integrate sensors within structures and to develop algorithms for determining structural integrity in real time. This philosophy has been called "Structural Health Monitoring", and such structures have been called "smart structures". These new types of structures integrate materials, sensors, actuators and algorithms to detect, quantify and locate damage within themselves. A novel methodology for damage detection in structures is proposed in this work. The methodology is based on strain measurements and consists in the development of strain-field pattern recognition techniques based on PCA (Principal Component Analysis) and other dimensionality reduction techniques. The use of fiber Bragg gratings and distributed sensing as strain sensors is proposed. The methodology has been validated with laboratory-scale tests and real-scale tests on complex structures. The effects of variable load conditions were studied, and several experiments were performed for static and dynamic load conditions, demonstrating that the methodology is robust under unknown load conditions.
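A minimal sketch of PCA-based strain-field anomaly detection, with made-up data and an assumed Q-statistic threshold (the thesis's actual detection statistics may differ): a PCA model is fitted to healthy strain fields, and a new measurement is flagged when its reconstruction residual exceeds a baseline-derived threshold.

    import numpy as np

    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, (200, 12))   # healthy strain fields
    baseline[:, 1] = 0.8 * baseline[:, 0]        # two correlated sensors

    # Fit PCA on healthy data and keep the first k principal directions.
    mean = baseline.mean(axis=0)
    _, _, vt = np.linalg.svd(baseline - mean, full_matrices=False)
    P = vt[:3].T                                 # retained subspace

    def q_statistic(x):
        # Squared reconstruction residual (Q/SPE) of a strain vector
        # after projection onto the retained PCA subspace.
        c = x - mean
        r = c - P @ (P.T @ c)
        return float(r @ r)

    threshold = np.quantile([q_statistic(x) for x in baseline], 0.99)
    damaged = baseline[0].copy()
    damaged[5] += 6.0                            # simulated local anomaly
    print(q_statistic(damaged) > threshold)      # expected: True (flagged)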
Abstract:
This paper describes ExperNet, an intelligent multi-agent system that was developed under an EU-funded project to assist in the management of a large-scale data network. ExperNet assists network operators at various nodes of a WAN to detect and diagnose hardware failures and network traffic problems, and suggests the most feasible solution through a web-based interface. ExperNet is composed of intelligent agents capable of both local problem solving and social interaction among themselves for coordinating problem diagnosis and repair. The current network state is captured and maintained by conventional network management and monitoring software components, which have been smoothly integrated into the system through sophisticated information exchange interfaces. For the implementation of the agents, a distributed Prolog system enhanced with networking facilities was developed. The agents' knowledge base is developed in an extensible and reactive knowledge base system capable of handling multiple types of knowledge representation. ExperNet has been developed, installed and tested successfully in an experimental network zone of Ukraine.
Abstract:
The efficiency of a power plant is affected by the distribution of the pulverized coal within the furnace. The coal, which is pulverized in the mills, is transported and distributed by the primary gas through the mill-ducts to the interior of the furnace. This has a double function: to dry the coal and to feed it in at different levels, optimizing the combustion in the sense that complete combustion occurs with homogeneous heat fluxes to the walls. The mill-duct systems of a real power plant are very complex and not yet well understood. In particular, experimental data concerning the mass flows of coal to the different levels are very difficult to measure. CFD modeling can help to determine them. An Eulerian/Lagrangian approach is used due to the low solid-gas volume ratio.
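A minimal sketch of the Lagrangian side of such an approach, under a simple Stokes-drag assumption with illustrative values: each coal particle is advanced through the Eulerian gas field by integrating dv/dt = (u_gas - v)/tau.

    def track_particle(v0, u_gas, tau, dt, steps):
        # Advance a particle velocity under Stokes drag,
        # dv/dt = (u_gas - v) / tau, with explicit Euler steps; tau is
        # the particle relaxation time (value below is illustrative).
        v = v0
        for _ in range(steps):
            v += dt * (u_gas - v) / tau
        return v

    # A pulverized-coal particle entrained by primary gas at 20 m/s
    # approaches the gas velocity after a few relaxation times.
    print(track_particle(v0=0.0, u_gas=20.0, tau=0.05, dt=0.001, steps=500))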
Abstract:
EPICS (Experimental Physics and Industrial Control System) is a set of software tools and applications that provide a software infrastructure for building distributed data acquisition and control systems. There is currently an increase in the use of such systems in large physics experiments like ITER, ESS and FREIA. In these experiments, advanced data acquisition systems using FPGA-based technology like FlexRIO are being used more frequently. In the particular case of ITER (International Thermonuclear Experimental Reactor), the instrumentation and control system is supported by CCS (CODAC Core System), based on the RHEL (Red Hat Enterprise Linux) operating system, and by the plant design specifications, in which every CCS element is defined, whether hardware, firmware or software. This final degree project applies the methodology proposed in "Implementation of Intelligent Data Acquisition Systems for Fusion Experiments using EPICS and FlexRIO Technology", Sanz et al. [1]. The final objective is to provide a document describing the process carried out and the source code of the resulting data acquisition system, together with a set of examples covering the complete design cycle that can be proposed as use cases of these technologies. The proposed methodology comprises two distinct stages. The first consists of modelling the hardware with graphic design tools like LabVIEW FPGA, which is later implemented in the FlexRIO device. In the next stage, the design cycle is completed by creating an EPICS controller that manages the device using a generic device support layer named NDS (Nominal Device Support). This layer integrates the developed data acquisition system into CCS as an EPICS interface to the system. The use of FlexRIO technology entails the use of LabVIEW as the programming language and LabVIEW FPGA as the hardware description language, respectively.
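For orientation, a minimal sketch of how a client could read channels exposed by such an EPICS IOC using the pyepics library; the PV names are hypothetical placeholders, not actual ITER or NDS channel names.

    # Requires `pip install pyepics` and a reachable IOC serving these PVs.
    import epics

    # Hypothetical PV names standing in for the DAQ system's channels.
    rate = epics.caget("DAQ:CH0:SamplingRate")   # read a scalar PV
    wave = epics.caget("DAQ:CH0:Waveform")       # read a waveform PV
    epics.caput("DAQ:CH0:Enable", 1)             # start the acquisition
    print(rate, None if wave is None else len(wave))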