923 results for automated full waveform logging system


Relevância:

30.00%

Publicador:

Resumo:

Description and evaluation of a cognitive stimulation system delivered through digital terrestrial television (TDT), aimed at people with Parkinson's disease and remotely supervised by their therapists. Abstract: This paper details the full design, implementation, and validation of an e-health service intended to improve community health care services for patients with cognitive disorders. Specifically, the new service allows Parkinson's disease patients to benefit from the possibility of doing cognitive stimulation therapy (CST) at home by using a familiar device such as a TV set. Its use instead of a PC could be a major advantage for some patients whose lack of familiarity with a PC means that they can do therapy only in the presence of a therapist. For these patients this solution could bring about a great improvement in their autonomy. At the same time, this service provides therapists with the ability to follow up therapy sessions via the web, benefiting from greater and easier control of the therapy exercises performed by patients and allowing them to customize new exercises in accordance with the particular needs of each patient. As a result, this kind of CST is considered a complement to other therapies oriented to Parkinson's patients. Furthermore, with small changes, the system could be useful for patients with a different cognitive disease, such as Alzheimer's or mild cognitive impairment.

Relevância:

30.00%

Publicador:

Resumo:

One of the advantages of social networks is the possibility to socialize and personalize the content created or shared by the users. In mobile social networks, where the devices have limited capabilities in terms of screen size and computing power, Multimedia Recommender Systems help to present the most relevant content to the users, depending on their tastes, relationships, and profile. Previous recommender systems are not able to cope with the uncertainty of automated tagging and are knowledge-domain dependent. In addition, the instantiation of a recommender in this domain should cope with problems arising from the inherent nature of collaborative filtering (cold start, the banana problem, the large number of users required, etc.). The solution presented in this paper addresses the above-mentioned problems by proposing a hybrid image recommender system, which combines collaborative filtering (social techniques) with content-based techniques, leaving the user free to give each of these processes a personal weight. It takes into account aesthetics and the formal characteristics of the images to overcome the problems of current techniques, improving the performance of existing systems to create a mobile social network recommender with a high degree of adaptation to any kind of user.
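A minimal sketch of the kind of user-weighted hybrid scoring the abstract describes (a convex blend of a collaborative-filtering score and a content-based score). The function names, normalized scores, and the 0-to-1 weight are illustrative assumptions, not the paper's actual implementation.

```python
def hybrid_score(cf_score: float, cb_score: float, user_weight: float) -> float:
    """Blend a collaborative-filtering score with a content-based score.

    user_weight in [0, 1] is the personal weight the user gives to the social
    (collaborative) side; the remainder goes to the content-based side.
    Both scores are assumed to be normalized to [0, 1].
    """
    w = min(max(user_weight, 0.0), 1.0)
    return w * cf_score + (1.0 - w) * cb_score

# Toy candidate images with pre-computed (illustrative) scores.
candidates = {
    "img_sunset": {"cf": 0.80, "cb": 0.40},   # popular among the user's contacts
    "img_macro":  {"cf": 0.30, "cb": 0.90},   # matches the user's aesthetic profile
}

# A user who trusts social signals more (weight 0.7) vs. one who does not (0.2).
for weight in (0.7, 0.2):
    ranking = sorted(candidates,
                     key=lambda k: hybrid_score(candidates[k]["cf"],
                                                candidates[k]["cb"], weight),
                     reverse=True)
    print(weight, ranking)
```

Changing the weight re-orders the ranking, which is the personalization lever the abstract leaves in the user's hands.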

Relevância:

30.00%

Publicador:

Resumo:

The conceptual design phase is only partially supported by product lifecycle management/computer-aided design (PLM/CAD) systems, causing discontinuity of the design information flow: customer needs — functional requirements — key characteristics — design parameters (DPs) — geometric DPs. To address this issue, a knowledge-based approach is proposed to integrate quality function deployment, failure mode and effects analysis, and axiomatic design into a commercial PLM/CAD system. A case study, the main subject of this article, was carried out to validate the proposed process; to evaluate, through a pilot development, how the commercial PLM/CAD modules and application programming interface could support the information flow; and, based on the pilot results, to propose a full development framework.
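To make the design information flow concrete, here is a minimal traceability sketch linking the entities the abstract names, from customer needs down to geometric design parameters. The class names and example strings are illustrative assumptions for this listing, not part of the article.

```python
from dataclasses import dataclass, field

@dataclass
class DesignParameter:
    name: str
    geometric_dps: list[str] = field(default_factory=list)   # e.g. CAD dimensions

@dataclass
class FunctionalRequirement:
    statement: str
    key_characteristics: list[str] = field(default_factory=list)
    design_parameters: list[DesignParameter] = field(default_factory=list)

@dataclass
class CustomerNeed:
    statement: str
    functional_requirements: list[FunctionalRequirement] = field(default_factory=list)

# Illustrative chain: customer need -> FR -> key characteristic -> DP -> geometric DP.
need = CustomerNeed(
    "Easy one-handed opening",
    [FunctionalRequirement(
        "Opening force below a given threshold",
        key_characteristics=["opening force"],
        design_parameters=[DesignParameter("hinge stiffness",
                                           ["hinge_arm_thickness"])],
    )],
)
print(need)
```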

Relevância:

30.00%

Publicador:

Resumo:

This paper describes an automatic dependent surveillance-broadcast (ADS-B) implementation for air-to-air and ground-based experimental surveillance within a prototype of a fully automated air traffic management (ATM) system, under a trajectory-based operations paradigm. The system is built using an air-inclusive implementation of system wide information management (SWIM). This work describes the relations between airborne and ground surveillance (SURGND), the prototype surveillance systems, and their algorithms. The system's performance is analyzed with simulated and real data. Results show that the proposed ADS-B implementation can fulfill the most demanding surveillance accuracy requirements.
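A minimal sketch of the kind of surveillance-accuracy check implied above: comparing a reported ADS-B fix against a reference track point and a required horizontal bound. The 10 m bound, coordinates, and field layout are assumptions for illustration only, not figures from the paper.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def horizontal_error_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance between a reported and a reference fix."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Illustrative check: does a reported fix meet an assumed 10 m horizontal requirement?
reported  = (40.49190, -3.56905)   # hypothetical ADS-B position report
reference = (40.49185, -3.56900)   # hypothetical ground-truth track point
err = horizontal_error_m(*reported, *reference)
print(f"error = {err:.1f} m, within bound: {err <= 10.0}")
```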

Relevância:

30.00%

Publicador:

Resumo:

The objective of this paper is to design a path-following control system for a car-like mobile robot using classical linear control techniques, so that it adapts on-line to varying conditions during the trajectory-following task. The main advantage of the proposed control structure is that well-known linear control theory can be applied to calculate PID controllers that fulfil the control requirements, while at the same time it remains flexible enough to be applied under the non-linear, changing conditions of the path-following task. For this purpose the Frenet-frame kinematic model of the robot is linearised at a varying working point that is calculated as a function of the actual velocity, the path curvature, and the kinematic parameters of the robot, yielding a transfer function that varies along the trajectory. The proposed controller is formed by a combination of an adaptive PID and a feed-forward controller, which vary according to the working conditions and compensate for the non-linearity of the system. The good features and flexibility of the proposed control structure have been demonstrated through realistic simulations that include both the kinematics and dynamics of the car-like robot.
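A minimal sketch, under assumed names and an assumed gain schedule, of the adaptive PID plus feed-forward structure described above. The gain functions of speed and curvature and the feed-forward term are illustrative placeholders, not the paper's actual tuning.

```python
class AdaptivePID:
    """PID whose gains are rescheduled at every step from the current working point."""

    def __init__(self) -> None:
        self.integral = 0.0
        self.prev_error = 0.0

    def gains(self, speed: float, curvature: float):
        # Illustrative schedule: stiffer tracking at low speed and high curvature.
        kp = 1.5 + 0.5 * abs(curvature)
        ki = 0.1 / max(speed, 0.1)
        kd = 0.2 * max(speed, 0.1)
        return kp, ki, kd

    def step(self, error: float, speed: float, curvature: float, dt: float) -> float:
        kp, ki, kd = self.gains(speed, curvature)
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        feedback = kp * error + ki * self.integral + kd * derivative
        feedforward = speed * curvature   # nominal turn rate needed to follow the arc
        return feedback + feedforward


pid = AdaptivePID()
print(pid.step(error=0.05, speed=2.0, curvature=0.1, dt=0.02))
```

The feed-forward term handles the nominal steering demanded by the path, so the PID only has to correct deviations, which is the division of labour the abstract describes.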

Relevância:

30.00%

Publicador:

Resumo:

In just a few years cloud computing has become a very popular paradigm and a business success story, with storage being one of its key features. To achieve high data availability, cloud storage services rely on replication. In this context, one major challenge is data consistency. In contrast to traditional approaches that are mostly based on strong consistency, many cloud storage services opt for weaker consistency models in order to achieve better availability and performance. This comes at the cost of a high probability of stale data being read, as the replicas involved in the reads may not always have the most recent write. In this paper, we propose a novel approach, named Harmony, which adaptively tunes the consistency level at run-time according to the application requirements. The key idea behind Harmony is an intelligent estimation model of stale reads, which allows the number of replicas involved in read operations to be elastically scaled up or down so as to maintain a low (possibly zero) tolerable fraction of stale reads. As a result, Harmony can meet the desired consistency of the applications while achieving good performance. We have implemented Harmony and performed extensive evaluations with the Cassandra cloud storage system on the Grid'5000 testbed and on Amazon EC2. The results show that Harmony can achieve good performance without exceeding the tolerated number of stale reads. For instance, in contrast to the static eventual consistency used in Cassandra, Harmony reduces the stale data being read by almost 80% while adding only minimal latency. Meanwhile, it improves the throughput of the system by 45% while maintaining the desired consistency requirements of the applications when compared to the strong consistency model in Cassandra.
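A minimal sketch of the idea behind an adaptive consistency tuner of this kind: estimate the probability of a stale read and raise or lower the number of read replicas until it falls below the application's tolerance. The independence assumption used here is a deliberately simplified placeholder, not Harmony's actual estimation model.

```python
def choose_read_replicas(stale_prob_one_replica: float,
                         tolerated_stale_fraction: float,
                         total_replicas: int) -> int:
    """Pick the smallest read quorum whose estimated stale-read probability
    stays below the tolerated fraction.

    Simplified placeholder model: a read over r replicas misses the latest
    write only if all r replicas independently hold a stale copy, so the
    stale probability is stale_prob_one_replica ** r.
    """
    for r in range(1, total_replicas + 1):
        if stale_prob_one_replica ** r <= tolerated_stale_fraction:
            return r
    return total_replicas  # fall back to the strongest read level available

# Example: moderately stale replicas, application tolerates 1% stale reads.
print(choose_read_replicas(stale_prob_one_replica=0.2,
                           tolerated_stale_fraction=0.01,
                           total_replicas=3))   # -> 3
```

When the estimated staleness drops (e.g. low write rate), the same rule shrinks the quorum back down, trading consistency cost for latency exactly as the abstract describes.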

Relevância:

30.00%

Publicador:

Resumo:

Structural Health Monitoring (SHM) requires integrated "all in one" electronic devices capable of performing analysis of structural integrity and on-board damage detection in aircraft structures. The PAMELA III (Phased Array Monitoring for Enhanced Life Assessment, version III) SHM embedded system is an example of this type of device. This equipment is capable of generating excitation signals to be applied to an array of integrated piezoelectric Phased Array (PhA) transducers bonded to the aircraft structure, acquiring the response signals, and carrying out the advanced signal processing needed to obtain SHM maps. PAMELA III is connected to a host computer in order to receive configuration parameters and to send the obtained SHM maps, alarms, and so on. This host can communicate with PAMELA III through an Ethernet interface. To avoid the use of wires where necessary, it is possible to add Wi-Fi capabilities to PAMELA III by connecting a Wi-Fi node working as a bridge and establishing wireless communication between PAMELA III and the host. However, in a real aircraft scenario, several PAMELA III devices must work together inside closed structures. In this situation, it is not possible for all PAMELA III devices to establish wireless communication directly with the host, due to the signal attenuation caused by the different obstacles of the aircraft structure. To provide communication among all PAMELA III devices and the host, a wireless mesh network (WMN) system has been implemented inside a closed aluminum wingbox. In a WMN, as long as a node is connected to at least one other node, it has full connectivity to the entire network, because each mesh node forwards packets to other nodes in the network as required. Mesh protocols automatically determine the best route through the network and can dynamically reconfigure the network if a link drops out. The advantages and disadvantages of using a wireless mesh network system inside closed aerospace structures are discussed.
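A minimal sketch of the mesh behaviour described above: every node forwards traffic, and routes are recomputed when a link drops. The topology, node names, and simple hop-count metric are illustrative assumptions; real mesh protocols use richer link-quality metrics.

```python
from collections import deque

def shortest_route(links: dict[str, set[str]], src: str, dst: str):
    """Breadth-first search: hop-count shortest path through the mesh, or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], set()) - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

# The host reaches the SHM nodes through whichever neighbours are still in range.
links = {"host": {"p1"}, "p1": {"host", "p2", "p3"},
         "p2": {"p1", "p3"}, "p3": {"p1", "p2"}}
print(shortest_route(links, "host", "p3"))        # ['host', 'p1', 'p3']

# A link drops (e.g. attenuation inside the wingbox); the mesh reroutes around it.
links["p1"].discard("p3"); links["p3"].discard("p1")
print(shortest_route(links, "host", "p3"))        # ['host', 'p1', 'p2', 'p3']
```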

Relevância:

30.00%

Publicador:

Resumo:

Nowadays one of the challenges of materials science is to find new technologies that will be able to make the most of renewable energies. An example of new proposals in this field is the intermediate-band (IB) materials, which promise higher efficiencies in photovoltaic applications (through intermediate-band solar cells) or in heterogeneous photocatalysis (using nanoparticles of them for the light-induced degradation of pollutants or for the efficient photoevolution of hydrogen from water). An IB material consists of a semiconductor into whose gap a new level is introduced [1], the intermediate band (IB), which should be partially filled with electrons and completely separated from the valence band (VB) and from the conduction band (CB). This scheme (figure 1) allows an electron from the VB to be promoted to the IB, and from the latter to the CB, upon absorption of photons with energy below the band gap Eg, so that energy can be absorbed in a wider range of the solar spectrum and a higher current can be obtained without sacrificing the photovoltage (or the chemical driving force) corresponding to the full bandgap Eg, thus increasing the overall efficiency. This concept, applied to photocatalysis, would allow using photons of a wider visible range while keeping the same redox capacity. It is important to note that this concept differs from the classic photocatalyst doping principle, which essentially just tries to decrease the bandgap. This new type of material would keep the full bandgap potential but would also use lower-energy photons. In our group several IB materials have been proposed, mainly for the photovoltaic application, based on extensively doping known semiconductors with transition metals [2] and examining their electronic structures with DFT calculations. Here we refer to In2S3 and SnS2, which contain octahedral cations; when doped with Ti or V, an IB is formed according to quantum calculations (see e.g. figure 2). We have used a solvothermal synthesis method to prepare in nanocrystalline form the In2S3 thiospinel and the layered compound SnS2 (which, when undoped, have bandgaps of 2.0 and 2.2 eV respectively), where the cation is substituted by vanadium at a ~10% level. This substitution has been studied by characterizing the materials with different physical and chemical techniques (TXRF, XRD, HR-TEM/EDS) (see e.g. figure 3) and by verifying with UV spectrometry that it introduces in the spectrum the sub-bandgap features predicted by the calculations (figure 4). For both sulphide-type nanoparticles (doped and undoped) the photocatalytic activity was studied by following at room temperature the oxidation of formic acid in aqueous suspension, a simple reaction which is easily monitored by UV-Vis spectroscopy. The spectral response of the process is measured using a collection of band-pass filters that allow only some wavelengths into the reaction system. Thanks to this method, the spectral range in which the materials are active in the photodecomposition (which coincides with the band gap for the undoped samples) can be checked, proving that for the vanadium-substituted samples this range is increased, making it possible to cover the whole visible-light range. Furthermore, it is verified that these new materials are more photocorrosion-resistant than toxic CdS, which is a well-known compound frequently used in tests of visible-light photocatalysis.
These materials are thus promising not only for degradation of pollutants (or for photovoltaic cells) but also for efficient photoevolution of hydrogen from water; work in this direction is now being pursued.
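In equation form, the two sub-bandgap transitions described above add up to the full gap, so each absorbed photon may carry less energy than Eg while the extracted photovoltage (or chemical driving force) still corresponds to Eg (a schematic restatement of the text, not a quotation from the abstract):

$$
E_{VB \to IB} + E_{IB \to CB} = E_g, \qquad E_{VB \to IB},\; E_{IB \to CB} < E_g .
$$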

Relevância:

30.00%

Publicador:

Resumo:

We propose the use of the "infotaxis" search strategy as the navigation system of a robotic platform able to search for and localize infectious foci by detecting changes in the profile of volatile organic compounds emitted by an infected plant. We built a simple and cost-effective robot platform that substitutes light sensors for odour sensors, and we study its robustness and performance under non-ideal conditions, such as the existence of obstacles due to land topology or weeds.
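A minimal sketch of the infotaxis idea: keep a belief over where the source is and move to the neighbouring cell that most reduces the expected entropy of that belief after the next observation. The grid, detection model, and probabilities below are toy assumptions, not the platform's actual implementation.

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def detect_prob(pos, src):
    """Toy detection model: probability of sensing a cue decays with distance."""
    d = abs(pos[0] - src[0]) + abs(pos[1] - src[1])
    return 0.8 * math.exp(-d)

def expected_entropy(pos, cells, belief):
    """Expected entropy of the source belief after one binary observation at pos."""
    p_hit = sum(b * detect_prob(pos, s) for s, b in zip(cells, belief))
    result = 0.0
    for outcome, p_out in (("hit", p_hit), ("miss", 1.0 - p_hit)):
        if p_out <= 0:
            continue
        post = [b * (detect_prob(pos, s) if outcome == "hit"
                     else 1.0 - detect_prob(pos, s))
                for s, b in zip(cells, belief)]
        z = sum(post)
        result += p_out * entropy([p / z for p in post])
    return result

# Toy 5x5 field, uniform belief, robot at the centre: pick the most informative move.
cells = [(i, j) for i in range(5) for j in range(5)]
belief = [1.0 / len(cells)] * len(cells)
robot = (2, 2)
moves = [(robot[0] + dx, robot[1] + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
best = min(moves, key=lambda m: expected_entropy(m, cells, belief))
print("move to", best)
```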

Relevância:

30.00%

Publicador:

Resumo:

Territory or zone design processes entail partitioning a geographic space, organized as a set of areal units, into different regions or zones according to a specific set of criteria that depend on the application context. In most cases, the aim is to create zones of approximately equal size (zones with equal numbers of inhabitants, the same average sales, etc.). However, some of the new applications that have emerged, particularly in the context of sustainable development policies, are aimed at defining zones of a predetermined, though not necessarily similar, size. In addition, the zones should be built around a given set of seeds. This type of partitioning has not been sufficiently researched; therefore, there are no known approaches for automated zone delimitation. This study proposes a new method based on a discrete version of the adaptive additively weighted Voronoi diagram that makes it possible to partition a two-dimensional space into zones of specific sizes, taking both the position and the weight of each seed into account. The method consists of repeatedly solving a traditional additively weighted Voronoi diagram, so that each seed's weight is updated at every iteration. The zones are kept geographically connected using a metric based on the shortest path. Tests conducted on the extensive farming system of three municipalities in Castile-La Mancha (Spain) have established that the proposed heuristic procedure is valid for solving this type of partitioning problem. Nevertheless, these tests confirmed that the given seed positions determine the spatial configuration the method must solve, and this may have a great impact on the resulting partition.
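A minimal sketch, on a toy grid, of the iterative scheme described above: assign every cell to the seed with the smallest additively weighted distance, then update each seed's weight according to how far its zone is from the target size. The grid, Euclidean metric, and update step are illustrative assumptions; the connectivity constraint based on shortest-path distances is omitted here.

```python
import math

def assign(cells, seeds, weights):
    """Discrete additively weighted Voronoi: each cell goes to the seed
    minimizing (Euclidean distance - additive weight)."""
    zones = {i: [] for i in range(len(seeds))}
    for c in cells:
        i = min(range(len(seeds)),
                key=lambda k: math.dist(c, seeds[k]) - weights[k])
        zones[i].append(c)
    return zones

def partition(cells, seeds, target_sizes, iters=50, step=0.05):
    """Repeatedly solve the weighted diagram, growing under-sized zones
    (raise weight) and shrinking over-sized ones (lower weight)."""
    weights = [0.0] * len(seeds)
    for _ in range(iters):
        zones = assign(cells, seeds, weights)
        for i, target in enumerate(target_sizes):
            weights[i] += step * (target - len(zones[i]))
    return zones

grid = [(x, y) for x in range(20) for y in range(20)]
seeds = [(3, 3), (16, 4), (10, 15)]
zones = partition(grid, seeds, target_sizes=[240, 100, 60])
print([len(z) for z in zones.values()])   # zone sizes should approach the targets
```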

Relevância:

30.00%

Publicador:

Resumo:

SSR is the acronym for SoundScape Renderer (a tool for real-time spatial audio reproduction providing a variety of rendering algorithms); it is a program written mostly in C++. The program lets the user listen both to previously recorded sounds and to live sounds. From the listener's point of view, the sound or sounds are heard as if they were produced at the point the program decides; the interesting part of this project is that the sound can change place, move, and so on, all in real time. This is achieved without modifying the sound when it is recorded, but rather when it is played back: the program computes the variations needed so that the emitted sound reaches the listener as if it were really generated at a given point in space, or as close to that as possible. The sensation of movement is simply this same effect with the position changing over time. The idea was to create a web application based on HTML5 Canvas that would communicate with this remote user interface. This would solve all the compatibility problems, since any device able to display web pages could run an application based on web standards, for example a Windows system or a mobile phone with a browser. The protocol had to be WebSocket, because it is an HTML5 protocol and offers the latency "guarantees" that an application with real-time information needs requires. It provides asynchronous full-duplex communication with little payload, which is exactly what was sought by avoiding ordinary HTML polling. The problem that arose was that the program's existing network user interface was not compatible with WebSocket, due to an initial and mandatory handshake performed by the protocol, so another network interface was needed. It was then decided to switch to JSON as the message-interchange format. In the end, the project comprises not only the Canvas-based web application but also a working server and the definition of a new network user interface with its accompanying protocol. ABSTRACT. This project aims to become a part of the SSR tool to extend its capabilities in the field of access. SSR is an acronym for SoundScape Renderer, a program mostly written in C++ that allows you to hear already recorded or live sound, with a variety of sound equipment, as if the sound came from a desired place in space. As the SSR web page puts it: "The SoundScape Renderer (SSR) is a tool for real-time spatial audio reproduction providing a variety of rendering algorithms." The application can be used with a graphical interface written in Qt, but it also has a network interface for external applications to use. This network interface communicates using XML messages. A good example of this is the Android client, which is already working. To use it, the application should be run by loading an audio source and the desired environment so that the renderer knows what to do. At that moment the server binds, and anyone can use the network interface. Since the network interface is documented, anyone can build an application that interacts with it, so the application can have as many user interfaces as desired. The part developed in this project has nothing to do with audio rendering or even with the reproduction of spatial audio. The part developed here concerns the interface used in the SSR application.
As can be deduced from the title, "Distributed Web Interface for Real-Time Spatial Audio Reproduction System", this work aims only to offer a web interface for the SSR (the "Real-Time Spatial Audio Reproduction System"). The idea is not to make a new graphical interface for SSR, but to allow more types of interfaces and communication. To accomplish the objective of allowing more graphical interfaces, this project uses a new network interface. At present the SSR application uses only XML for data interchange, but this new network interface supports JSON. This project comprises the server that launches the application, the user interface, and the new network interface. It is organized into these modules in order to allow creating new user interfaces that can communicate with the server, or new servers that can communicate with the user interface, by defining a complete network interface for data interchange.
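A minimal sketch of the kind of JSON-over-WebSocket exchange such a network interface enables. The endpoint URL, port, and message fields are hypothetical illustrations (the actual SSR message schema is defined by the project itself), and the third-party `websockets` package is assumed to be installed.

```python
import asyncio
import json

import websockets  # third-party package, assumed installed: pip install websockets

async def move_source() -> None:
    # Hypothetical endpoint and message schema, for illustration only.
    async with websockets.connect("ws://localhost:4711") as ws:
        message = {"type": "move-source", "id": 1, "position": {"x": 1.5, "y": -0.5}}
        await ws.send(json.dumps(message))   # full-duplex: send on the socket...
        reply = await ws.recv()              # ...and receive on the same socket
        print("server said:", reply)

if __name__ == "__main__":
    asyncio.run(move_source())
```

Because the socket stays open in both directions, the server can also push scene updates to the browser without polling, which is the latency advantage cited above.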

Relevância:

30.00%

Publicador:

Resumo:

La optimización de parámetros tales como el consumo de potencia, la cantidad de recursos lógicos empleados o la ocupación de memoria ha sido siempre una de las preocupaciones principales a la hora de diseñar sistemas embebidos. Esto es debido a que se trata de sistemas dotados de una cantidad de recursos limitados, y que han sido tradicionalmente empleados para un propósito específico, que permanece invariable a lo largo de toda la vida útil del sistema. Sin embargo, el uso de sistemas embebidos se ha extendido a áreas de aplicación fuera de su ámbito tradicional, caracterizadas por una mayor demanda computacional. Así, por ejemplo, algunos de estos sistemas deben llevar a cabo un intenso procesado de señales multimedia o la transmisión de datos mediante sistemas de comunicaciones de alta capacidad. Por otra parte, las condiciones de operación del sistema pueden variar en tiempo real. Esto sucede, por ejemplo, si su funcionamiento depende de datos medidos por el propio sistema o recibidos a través de la red, de las demandas del usuario en cada momento, o de condiciones internas del propio dispositivo, tales como la duración de la batería. Como consecuencia de la existencia de requisitos de operación dinámicos es necesario ir hacia una gestión dinámica de los recursos del sistema. Si bien el software es inherentemente flexible, no ofrece una potencia computacional tan alta como el hardware. Por lo tanto, el hardware reconfigurable aparece como una solución adecuada para tratar con mayor flexibilidad los requisitos variables dinámicamente en sistemas con alta demanda computacional. La flexibilidad y adaptabilidad del hardware requieren de dispositivos reconfigurables que permitan la modificación de su funcionalidad bajo demanda. En esta tesis se han seleccionado las FPGAs (Field Programmable Gate Arrays) como los dispositivos más apropiados, hoy en día, para implementar sistemas basados en hardware reconfigurable De entre todas las posibilidades existentes para explotar la capacidad de reconfiguración de las FPGAs comerciales, se ha seleccionado la reconfiguración dinámica y parcial. Esta técnica consiste en substituir una parte de la lógica del dispositivo, mientras el resto continúa en funcionamiento. La capacidad de reconfiguración dinámica y parcial de las FPGAs es empleada en esta tesis para tratar con los requisitos de flexibilidad y de capacidad computacional que demandan los dispositivos embebidos. La propuesta principal de esta tesis doctoral es el uso de arquitecturas de procesamiento escalables espacialmente, que son capaces de adaptar su funcionalidad y rendimiento en tiempo real, estableciendo un compromiso entre dichos parámetros y la cantidad de lógica que ocupan en el dispositivo. A esto nos referimos con arquitecturas con huellas escalables. En particular, se propone el uso de arquitecturas altamente paralelas, modulares, regulares y con una alta localidad en sus comunicaciones, para este propósito. El tamaño de dichas arquitecturas puede ser modificado mediante la adición o eliminación de algunos de los módulos que las componen, tanto en una dimensión como en dos. Esta estrategia permite implementar soluciones escalables, sin tener que contar con una versión de las mismas para cada uno de los tamaños posibles de la arquitectura. De esta manera se reduce significativamente el tiempo necesario para modificar su tamaño, así como la cantidad de memoria necesaria para almacenar todos los archivos de configuración. 
En lugar de proponer arquitecturas para aplicaciones específicas, se ha optado por patrones de procesamiento genéricos, que pueden ser ajustados para solucionar distintos problemas en el estado del arte. A este respecto, se proponen patrones basados en esquemas sistólicos, así como de tipo wavefront. Con el objeto de poder ofrecer una solución integral, se han tratado otros aspectos relacionados con el diseño y el funcionamiento de las arquitecturas, tales como el control del proceso de reconfiguración de la FPGA, la integración de las arquitecturas en el resto del sistema, así como las técnicas necesarias para su implementación. Por lo que respecta a la implementación, se han tratado distintos aspectos de bajo nivel dependientes del dispositivo. Algunas de las propuestas realizadas a este respecto en la presente tesis doctoral son un router que es capaz de garantizar el correcto rutado de los módulos reconfigurables dentro del área destinada para ellos, así como una estrategia para la comunicación entre módulos que no introduce ningún retardo ni necesita emplear recursos configurables del dispositivo. El flujo de diseño propuesto se ha automatizado mediante una herramienta denominada DREAMS. La herramienta se encarga de la modificación de las netlists correspondientes a cada uno de los módulos reconfigurables del sistema, y que han sido generadas previamente mediante herramientas comerciales. Por lo tanto, el flujo propuesto se entiende como una etapa de post-procesamiento, que adapta esas netlists a los requisitos de la reconfiguración dinámica y parcial. Dicha modificación la lleva a cabo la herramienta de una forma completamente automática, por lo que la productividad del proceso de diseño aumenta de forma evidente. Para facilitar dicho proceso, se ha dotado a la herramienta de una interfaz gráfica. El flujo de diseño propuesto, y la herramienta que lo soporta, tienen características específicas para abordar el diseño de las arquitecturas dinámicamente escalables propuestas en esta tesis. Entre ellas está el soporte para el realojamiento de módulos reconfigurables en posiciones del dispositivo distintas a donde el módulo es originalmente implementado, así como la generación de estructuras de comunicación compatibles con la simetría de la arquitectura. El router has sido empleado también en esta tesis para obtener un rutado simétrico entre nets equivalentes. Dicha posibilidad ha sido explotada para aumentar la protección de circuitos con altos requisitos de seguridad, frente a ataques de canal lateral, mediante la implantación de lógica complementaria con rutado idéntico. Para controlar el proceso de reconfiguración de la FPGA, se propone en esta tesis un motor de reconfiguración especialmente adaptado a los requisitos de las arquitecturas dinámicamente escalables. Además de controlar el puerto de reconfiguración, el motor de reconfiguración ha sido dotado de la capacidad de realojar módulos reconfigurables en posiciones arbitrarias del dispositivo, en tiempo real. De esta forma, basta con generar un único bitstream por cada módulo reconfigurable del sistema, independientemente de la posición donde va a ser finalmente reconfigurado. La estrategia seguida para implementar el proceso de realojamiento de módulos es diferente de las propuestas existentes en el estado del arte, pues consiste en la composición de los archivos de configuración en tiempo real. 
De esta forma se consigue aumentar la velocidad del proceso, mientras que se reduce la longitud de los archivos de configuración parciales a almacenar en el sistema. El motor de reconfiguración soporta módulos reconfigurables con una altura menor que la altura de una región de reloj del dispositivo. Internamente, el motor se encarga de la combinación de los frames que describen el nuevo módulo, con la configuración existente en el dispositivo previamente. El escalado de las arquitecturas de procesamiento propuestas en esta tesis también se puede beneficiar de este mecanismo. Se ha incorporado también un acceso directo a una memoria externa donde se pueden almacenar bitstreams parciales. Para acelerar el proceso de reconfiguración se ha hecho funcionar el ICAP por encima de la máxima frecuencia de reloj aconsejada por el fabricante. Así, en el caso de Virtex-5, aunque la máxima frecuencia del reloj deberían ser 100 MHz, se ha conseguido hacer funcionar el puerto de reconfiguración a frecuencias de operación de hasta 250 MHz, incluyendo el proceso de realojamiento en tiempo real. Se ha previsto la posibilidad de portar el motor de reconfiguración a futuras familias de FPGAs. Por otro lado, el motor de reconfiguración se puede emplear para inyectar fallos en el propio dispositivo hardware, y así ser capaces de evaluar la tolerancia ante los mismos que ofrecen las arquitecturas reconfigurables. Los fallos son emulados mediante la generación de archivos de configuración a los que intencionadamente se les ha introducido un error, de forma que se modifica su funcionalidad. Con el objetivo de comprobar la validez y los beneficios de las arquitecturas propuestas en esta tesis, se han seguido dos líneas principales de aplicación. En primer lugar, se propone su uso como parte de una plataforma adaptativa basada en hardware evolutivo, con capacidad de escalabilidad, adaptabilidad y recuperación ante fallos. En segundo lugar, se ha desarrollado un deblocking filter escalable, adaptado a la codificación de vídeo escalable, como ejemplo de aplicación de las arquitecturas de tipo wavefront propuestas. El hardware evolutivo consiste en el uso de algoritmos evolutivos para diseñar hardware de forma autónoma, explotando la flexibilidad que ofrecen los dispositivos reconfigurables. En este caso, los elementos de procesamiento que componen la arquitectura son seleccionados de una biblioteca de elementos presintetizados, de acuerdo con las decisiones tomadas por el algoritmo evolutivo, en lugar de definir la configuración de las mismas en tiempo de diseño. De esta manera, la configuración del core puede cambiar cuando lo hacen las condiciones del entorno, en tiempo real, por lo que se consigue un control autónomo del proceso de reconfiguración dinámico. Así, el sistema es capaz de optimizar, de forma autónoma, su propia configuración. El hardware evolutivo tiene una capacidad inherente de auto-reparación. Se ha probado que las arquitecturas evolutivas propuestas en esta tesis son tolerantes ante fallos, tanto transitorios, como permanentes y acumulativos. La plataforma evolutiva se ha empleado para implementar filtros de eliminación de ruido. La escalabilidad también ha sido aprovechada en esta aplicación. Las arquitecturas evolutivas escalables permiten la adaptación autónoma de los cores de procesamiento ante fluctuaciones en la cantidad de recursos disponibles en el sistema. 
Por lo tanto, constituyen un ejemplo de escalabilidad dinámica para conseguir un determinado nivel de calidad, que puede variar en tiempo real. Se han propuesto dos variantes de sistemas escalables evolutivos. El primero consiste en un único core de procesamiento evolutivo, mientras que el segundo está formado por un número variable de arrays de procesamiento. La codificación de vídeo escalable, a diferencia de los codecs no escalables, permite la decodificación de secuencias de vídeo con diferentes niveles de calidad, de resolución temporal o de resolución espacial, descartando la información no deseada. Existen distintos algoritmos que soportan esta característica. En particular, se va a emplear el estándar Scalable Video Coding (SVC), que ha sido propuesto como una extensión de H.264/AVC, ya que este último es ampliamente utilizado tanto en la industria, como a nivel de investigación. Para poder explotar toda la flexibilidad que ofrece el estándar, hay que permitir la adaptación de las características del decodificador en tiempo real. El uso de las arquitecturas dinámicamente escalables es propuesto en esta tesis con este objetivo. El deblocking filter es un algoritmo que tiene como objetivo la mejora de la percepción visual de la imagen reconstruida, mediante el suavizado de los "artefactos" de bloque generados en el lazo del codificador. Se trata de una de las tareas más intensivas en procesamiento de datos de H.264/AVC y de SVC, y además, su carga computacional es altamente dependiente del nivel de escalabilidad seleccionado en el decodificador. Por lo tanto, el deblocking filter ha sido seleccionado como prueba de concepto de la aplicación de las arquitecturas dinámicamente escalables para la compresión de video. La arquitectura propuesta permite añadir o eliminar unidades de computación, siguiendo un esquema de tipo wavefront. La arquitectura ha sido propuesta conjuntamente con un esquema de procesamiento en paralelo del deblocking filter a nivel de macrobloque, de tal forma que cuando se varía del tamaño de la arquitectura, el orden de filtrado de los macrobloques varia de la misma manera. El patrón propuesto se basa en la división del procesamiento de cada macrobloque en dos etapas independientes, que se corresponden con el filtrado horizontal y vertical de los bloques dentro del macrobloque. Las principales contribuciones originales de esta tesis son las siguientes: - El uso de arquitecturas altamente regulares, modulares, paralelas y con una intensa localidad en sus comunicaciones, para implementar cores de procesamiento dinámicamente reconfigurables. - El uso de arquitecturas bidimensionales, en forma de malla, para construir arquitecturas dinámicamente escalables, con una huella escalable. De esta forma, las arquitecturas permiten establecer un compromiso entre el área que ocupan en el dispositivo, y las prestaciones que ofrecen en cada momento. Se proponen plantillas de procesamiento genéricas, de tipo sistólico o wavefront, que pueden ser adaptadas a distintos problemas de procesamiento. - Un flujo de diseño y una herramienta que lo soporta, para el diseño de sistemas reconfigurables dinámicamente, centradas en el diseño de las arquitecturas altamente paralelas, modulares y regulares propuestas en esta tesis. - Un esquema de comunicaciones entre módulos reconfigurables que no introduce ningún retardo ni requiere el uso de recursos lógicos propios. 
- Un router flexible, capaz de resolver los conflictos de rutado asociados con el diseño de sistemas reconfigurables dinámicamente. - Un algoritmo de optimización para sistemas formados por múltiples cores escalables que optimice, mediante un algoritmo genético, los parámetros de dicho sistema. Se basa en un modelo conocido como el problema de la mochila. - Un motor de reconfiguración adaptado a los requisitos de las arquitecturas altamente regulares y modulares. Combina una alta velocidad de reconfiguración, con la capacidad de realojar módulos en tiempo real, incluyendo el soporte para la reconfiguración de regiones que ocupan menos que una región de reloj, así como la réplica de un módulo reconfigurable en múltiples posiciones del dispositivo. - Un mecanismo de inyección de fallos que, empleando el motor de reconfiguración del sistema, permite evaluar los efectos de fallos permanentes y transitorios en arquitecturas reconfigurables. - La demostración de las posibilidades de las arquitecturas propuestas en esta tesis para la implementación de sistemas de hardware evolutivos, con una alta capacidad de procesamiento de datos. - La implementación de sistemas de hardware evolutivo escalables, que son capaces de tratar con la fluctuación de la cantidad de recursos disponibles en el sistema, de una forma autónoma. - Una estrategia de procesamiento en paralelo para el deblocking filter compatible con los estándares H.264/AVC y SVC que reduce el número de ciclos de macrobloque necesarios para procesar un frame de video. - Una arquitectura dinámicamente escalable que permite la implementación de un nuevo deblocking filter, totalmente compatible con los estándares H.264/AVC y SVC, que explota el paralelismo a nivel de macrobloque. El presente documento se organiza en siete capítulos. En el primero se ofrece una introducción al marco tecnológico de esta tesis, especialmente centrado en la reconfiguración dinámica y parcial de FPGAs. También se motiva la necesidad de las arquitecturas dinámicamente escalables propuestas en esta tesis. En el capítulo 2 se describen las arquitecturas dinámicamente escalables. Dicha descripción incluye la mayor parte de las aportaciones a nivel arquitectural realizadas en esta tesis. Por su parte, el flujo de diseño adaptado a dichas arquitecturas se propone en el capítulo 3. El motor de reconfiguración se propone en el 4, mientras que el uso de dichas arquitecturas para implementar sistemas de hardware evolutivo se aborda en el 5. El deblocking filter escalable se describe en el 6, mientras que las conclusiones finales de esta tesis, así como la descripción del trabajo futuro, son abordadas en el capítulo 7. ABSTRACT The optimization of system parameters, such as power dissipation, the amount of hardware resources and the memory footprint, has been always a main concern when dealing with the design of resource-constrained embedded systems. This situation is even more demanding nowadays. Embedded systems cannot anymore be considered only as specific-purpose computers, designed for a particular functionality that remains unchanged during their lifetime. Differently, embedded systems are now required to deal with more demanding and complex functions, such as multimedia data processing and high-throughput connectivity. In addition, system operation may depend on external data, the user requirements or internal variables of the system, such as the battery life-time. All these conditions may vary at run-time, leading to adaptive scenarios. 
As a consequence of both the growing computational complexity and the existence of dynamic requirements, dynamic resource management techniques for embedded systems are needed. Software is inherently flexible, but it cannot meet the computing power offered by hardware solutions. Therefore, reconfigurable hardware emerges as a suitable technology to deal with the run-time variable requirements of complex embedded systems. Adaptive hardware requires the use of reconfigurable devices, where its functionality can be modified on demand. In this thesis, Field Programmable Gate Arrays (FPGAs) have been selected as the most appropriate commercial technology existing nowadays to implement adaptive hardware systems. There are different ways of exploiting reconfigurability in reconfigurable devices. Among them is dynamic and partial reconfiguration. This is a technique which consists in substituting part of the FPGA logic on demand, while the rest of the device continues working. The strategy followed in this thesis is to exploit the dynamic and partial reconfiguration of commercial FPGAs to deal with the flexibility and complexity demands of state-of-the-art embedded systems. The proposal of this thesis to deal with run-time variable system conditions is the use of spatially scalable processing hardware IP cores, which are able to adapt their functionality or performance at run-time, trading them off with the amount of logic resources they occupy in the device. This is referred to as a scalable footprint in the context of this thesis. The distinguishing characteristic of the proposed cores is that they rely on highly parallel, modular and regular architectures, arranged in one or two dimensions. These architectures can be scaled by means of the addition or removal of the composing blocks. This strategy avoids implementing a full version of the core for each possible size, with the corresponding benefits in terms of scaling and adaptation time, as well as bitstream storage memory requirements. Instead of providing specific-purpose architectures, generic architectural templates, which can be tuned to solve different problems, are proposed in this thesis. Architectures following both systolic and wavefront templates have been selected. Together with the proposed scalable architectural templates, other issues needed to ensure the proper design and operation of the scalable cores, such as the device reconfiguration control, the run-time management of the architecture and the implementation techniques have been also addressed in this thesis. With regard to the implementation of dynamically reconfigurable architectures, device dependent low-level details are addressed. Some of the aspects covered in this thesis are the area constrained routing for reconfigurable modules, or an inter-module communication strategy which does not introduce either extra delay or logic overhead. The system implementation, from the hardware description to the device configuration bitstream, has been fully automated by modifying the netlists corresponding to each of the system modules, which are previously generated using the vendor tools. This modification is therefore envisaged as a post-processing step. Based on these implementation proposals, a design tool called DREAMS (Dynamically Reconfigurable Embedded and Modular Systems) has been created, including a graphic user interface. 
The tool has specific features to cope with modular and regular architectures, including the support for module relocation and the inter-module communications scheme based on the symmetry of the architecture. The core of the tool is a custom router, which has been also exploited in this thesis to obtain symmetric routed nets, with the aim of enhancing the protection of critical reconfigurable circuits against side channel attacks. This is achieved by duplicating the logic with an exactly equal routing. In order to control the reconfiguration process of the FPGA, a Reconfiguration Engine suited to the specific requirements set by the proposed architectures was also proposed. Therefore, in addition to controlling the reconfiguration port, the Reconfiguration Engine has been enhanced with the online relocation ability, which allows employing a unique configuration bitstream for all the positions where the module may be placed in the device. Differently to the existing relocating solutions, which are based on bitstream parsers, the proposed approach is based on the online composition of bitstreams. This strategy allows increasing the speed of the process, while the length of partial bitstreams is also reduced. The height of the reconfigurable modules can be lower than the height of a clock region. The Reconfiguration Engine manages the merging process of the new and the existing configuration frames within each clock region. The process of scaling up and down the hardware cores also benefits from this technique. A direct link to an external memory where partial bitstreams can be stored has been also implemented. In order to accelerate the reconfiguration process, the ICAP has been overclocked over the speed reported by the manufacturer. In the case of Virtex-5, even though the maximum frequency of the ICAP is reported to be 100 MHz, valid operations at 250 MHz have been achieved, including the online relocation process. Portability of the reconfiguration solution to today's and probably, future FPGAs, has been also considered. The reconfiguration engine can be also used to inject faults in real hardware devices, and this way being able to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by introducing partial bitstreams intentionally modified to provide erroneous functionality. To prove the validity and the benefits offered by the proposed architectures, two demonstration application lines have been envisaged. First, scalable architectures have been employed to develop an evolvable hardware platform with adaptability, fault tolerance and scalability properties. Second, they have been used to implement a scalable deblocking filter suited to scalable video coding. Evolvable Hardware is the use of evolutionary algorithms to design hardware in an autonomous way, exploiting the flexibility offered by reconfigurable devices. In this case, processing elements composing the architecture are selected from a presynthesized library of processing elements, according to the decisions taken by the algorithm, instead of being decided at design time. This way, the configuration of the array may change as run-time environmental conditions do, achieving autonomous control of the dynamic reconfiguration process. Thus, the self-optimization property is added to the native self-configurability of the dynamically scalable architectures. In addition, evolvable hardware adaptability inherently offers self-healing features. 
The proposal has proved to be self-tolerant, since it is able to self-recover from both transient and cumulative permanent faults. The proposed evolvable architecture has been used to implement noise removal image filters. Scalability has been also exploited in this application. Scalable evolvable hardware architectures allow the autonomous adaptation of the processing cores to a fluctuating amount of resources available in the system. Thus, it constitutes an example of the dynamic quality scalability tackled in this thesis. Two variants have been proposed. The first one consists in a single dynamically scalable evolvable core, and the second one contains a variable number of processing cores. Scalable video is a flexible approach for video compression, which offers scalability at different levels. Differently to non-scalable codecs, a scalable video bitstream can be decoded with different levels of quality, spatial or temporal resolutions, by discarding the undesired information. The interest in this technology has been fostered by the development of the Scalable Video Coding (SVC) standard, as an extension of H.264/AVC. In order to exploit all the flexibility offered by the standard, it is necessary to adapt the characteristics of the decoder to the requirements of each client during run-time. The use of dynamically scalable architectures is proposed in this thesis with this aim. The deblocking filter algorithm is the responsible of improving the visual perception of a reconstructed image, by smoothing blocking artifacts generated in the encoding loop. This is one of the most computationally intensive tasks of the standard, and furthermore, it is highly dependent on the selected scalability level in the decoder. Therefore, the deblocking filter has been selected as a proof of concept of the implementation of dynamically scalable architectures for video compression. The proposed architecture allows the run-time addition or removal of computational units working in parallel to change its level of parallelism, following a wavefront computational pattern. Scalable architecture is offered together with a scalable parallelization strategy at the macroblock level, such that when the size of the architecture changes, the macroblock filtering order is modified accordingly. The proposed pattern is based on the division of the macroblock processing into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock. The main contributions of this thesis are: - The use of highly parallel, modular, regular and local architectures to implement dynamically reconfigurable processing IP cores, for data intensive applications with flexibility requirements. - The use of two-dimensional mesh-type arrays as architectural templates to build dynamically reconfigurable IP cores, with a scalable footprint. The proposal consists in generic architectural templates, which can be tuned to solve different computational problems. •A design flow and a tool targeting the design of DPR systems, focused on highly parallel, modular and local architectures. - An inter-module communication strategy, which does not introduce delay or area overhead, named Virtual Borders. - A custom and flexible router to solve the routing conflicts as well as the inter-module communication problems, appearing during the design of DPR systems. 
- An algorithm addressing the optimization of systems composed of multiple scalable cores, whose sizes can be decided individually, to optimize the system parameters. It is based on a model known as the multi-dimensional multi-choice Knapsack problem. - A reconfiguration engine tailored to the requirements of highly regular and modular architectures. It combines a high reconfiguration throughput with run-time module relocation capabilities, including support for reconfigurable regions smaller than a clock region and for the replication of a module in multiple positions. - A fault injection mechanism which takes advantage of the system reconfiguration engine, as well as the modularity of the proposed reconfigurable architectures, to evaluate the effects of transient and permanent faults in these architectures. - The demonstration of the possibilities of the architectures proposed in this thesis to implement evolvable hardware systems, while keeping a high processing throughput. - The implementation of scalable evolvable hardware systems, which are able to adapt to the fluctuation of the amount of resources available in the system in an autonomous way. - A parallelization strategy for the H.264/AVC and SVC deblocking filter, which reduces the number of macroblock cycles needed to process the whole frame. - A dynamically scalable architecture that permits the implementation of a novel deblocking filter module, fully compliant with the H.264/AVC and SVC standards, which exploits the macroblock-level parallelism of the algorithm. This document is organized in seven chapters. In the first one, an introduction to the technology framework of this thesis, especially focused on dynamic and partial reconfiguration, is provided. The need for the dynamically scalable processing architectures proposed in this work is also motivated in this chapter. In chapter 2, the dynamically scalable architectures are described. The description includes most of the architectural contributions of this work. The design flow tailored to the scalable architectures, together with the DREAMS tool provided to implement them, are described in chapter 3. The reconfiguration engine is described in chapter 4. The use of the proposed scalable architectures to implement evolvable hardware systems is described in chapter 5, while the scalable deblocking filter is described in chapter 6. The final conclusions of this thesis, and the description of future work, are addressed in chapter 7.
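A minimal sketch of the wavefront-style macroblock schedule that a scalable deblocking architecture of this kind exploits: assuming each macroblock depends only on its left and top neighbours, all macroblocks on the same anti-diagonal are mutually independent, so up to `units` of them can be filtered in parallel, and changing the number of units changes the filtering order accordingly. The dependency model and grouping below are illustrative assumptions, not the thesis's exact schedule.

```python
def wavefront_schedule(mb_cols: int, mb_rows: int, units: int):
    """Group macroblocks into parallel batches following a wavefront pattern.

    Each macroblock (x, y) is assumed to depend on (x-1, y) and (x, y-1), so the
    macroblocks sharing the same anti-diagonal d = x + y are independent of each
    other and can be processed together, bounded by the number of processing units.
    """
    schedule = []
    for d in range(mb_cols + mb_rows - 1):
        diagonal = [(x, d - x) for x in range(mb_cols) if 0 <= d - x < mb_rows]
        for i in range(0, len(diagonal), units):   # split by available units
            schedule.append(diagonal[i:i + units])
        # Batches run sequentially; macroblocks inside a batch run in parallel.
    return schedule

# A 4x3 macroblock frame on a 2-unit architecture versus a 3-unit one.
for n in (2, 3):
    print(n, wavefront_schedule(4, 3, n))
```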

Relevância:

30.00%

Publicador:

Resumo:

La seguridad verificada es una metodología para demostrar propiedades de seguridad de los sistemas informáticos que se destaca por las altas garantías de corrección que provee. Los sistemas informáticos se modelan como programas probabilísticos y para probar que verifican una determinada propiedad de seguridad se utilizan técnicas rigurosas basadas en modelos matemáticos de los programas. En particular, la seguridad verificada promueve el uso de demostradores de teoremas interactivos o automáticos para construir demostraciones completamente formales cuya corrección es certificada mecánicamente (por ordenador). La seguridad verificada demostró ser una técnica muy efectiva para razonar sobre diversas nociones de seguridad en el área de criptografía. Sin embargo, no ha podido cubrir un importante conjunto de nociones de seguridad “aproximada”. La característica distintiva de estas nociones de seguridad es que se expresan como una condición de “similitud” entre las distribuciones de salida de dos programas probabilísticos y esta similitud se cuantifica usando alguna noción de distancia entre distribuciones de probabilidad. Este conjunto incluye destacadas nociones de seguridad de diversas áreas como la minería de datos privados, el análisis de flujo de información y la criptografía. Ejemplos representativos de estas nociones de seguridad son la indiferenciabilidad, que permite reemplazar un componente idealizado de un sistema por una implementación concreta (sin alterar significativamente sus propiedades de seguridad), o la privacidad diferencial, una noción de privacidad que ha recibido mucha atención en los últimos años y tiene como objetivo evitar la publicación datos confidenciales en la minería de datos. La falta de técnicas rigurosas que permitan verificar formalmente este tipo de propiedades constituye un notable problema abierto que tiene que ser abordado. En esta tesis introducimos varias lógicas de programa quantitativas para razonar sobre esta clase de propiedades de seguridad. Nuestra principal contribución teórica es una versión quantitativa de una lógica de Hoare relacional para programas probabilísticos. Las pruebas de correción de estas lógicas son completamente formalizadas en el asistente de pruebas Coq. Desarrollamos, además, una herramienta para razonar sobre propiedades de programas a través de estas lógicas extendiendo CertiCrypt, un framework para verificar pruebas de criptografía en Coq. Confirmamos la efectividad y aplicabilidad de nuestra metodología construyendo pruebas certificadas por ordendor de varios sistemas cuyo análisis estaba fuera del alcance de la seguridad verificada. Esto incluye, entre otros, una meta-construcción para diseñar funciones de hash “seguras” sobre curvas elípticas y algoritmos diferencialmente privados para varios problemas de optimización combinatoria de la literatura reciente. ABSTRACT The verified security methodology is an emerging approach to build high assurance proofs about security properties of computer systems. Computer systems are modeled as probabilistic programs and one relies on rigorous program semantics techniques to prove that they comply with a given security goal. In particular, it advocates the use of interactive theorem provers or automated provers to build fully formal machine-checked versions of these security proofs. The verified security methodology has proved successful in modeling and reasoning about several standard security notions in the area of cryptography. 
However, it has fallen short of covering an important class of approximate, quantitative security notions. The distinguishing characteristic of this class of security notions is that they are stated as a “similarity” condition between the output distributions of two probabilistic programs, and this similarity is quantified using some notion of distance between probability distributions. This class comprises prominent security notions from multiple areas such as private data analysis, information flow analysis and cryptography. These include, for instance, indifferentiability, which enables securely replacing an idealized component of system with a concrete implementation, and differential privacy, a notion of privacy-preserving data mining that has received a great deal of attention in the last few years. The lack of rigorous techniques for verifying these properties is thus an important problem that needs to be addressed. In this dissertation we introduce several quantitative program logics to reason about this class of security notions. Our main theoretical contribution is, in particular, a quantitative variant of a full-fledged relational Hoare logic for probabilistic programs. The soundness of these logics is fully formalized in the Coq proof-assistant and tool support is also available through an extension of CertiCrypt, a framework to verify cryptographic proofs in Coq. We validate the applicability of our approach by building fully machine-checked proofs for several systems that were out of the reach of the verified security methodology. These comprise, among others, a construction to build “safe” hash functions into elliptic curves and differentially private algorithms for several combinatorial optimization problems from the recent literature.
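For concreteness, one of the approximate notions mentioned above, (ε, δ)-differential privacy, is stated as exactly this kind of quantitative closeness condition between the output distributions of a randomized mechanism M on adjacent databases D and D'. This standard definition is added here for reference and is not quoted from the dissertation:

$$
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta \qquad \text{for every set of outcomes } S .
$$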

Relevância:

30.00%

Publicador:

Resumo:

El objeto de esta Tesis doctoral es el desarrollo de una metodologia para la deteccion automatica de anomalias a partir de datos hiperespectrales o espectrometria de imagen, y su cartografiado bajo diferentes condiciones tipologicas de superficie y terreno. La tecnologia hiperespectral o espectrometria de imagen ofrece la posibilidad potencial de caracterizar con precision el estado de los materiales que conforman las diversas superficies en base a su respuesta espectral. Este estado suele ser variable, mientras que las observaciones se producen en un numero limitado y para determinadas condiciones de iluminacion. Al aumentar el numero de bandas espectrales aumenta tambien el numero de muestras necesarias para definir espectralmente las clases en lo que se conoce como Maldicion de la Dimensionalidad o Efecto Hughes (Bellman, 1957), muestras habitualmente no disponibles y costosas de obtener, no hay mas que pensar en lo que ello implica en la Exploracion Planetaria. Bajo la definicion de anomalia en su sentido espectral como la respuesta significativamente diferente de un pixel de imagen respecto de su entorno, el objeto central abordado en la Tesis estriba primero en como reducir la dimensionalidad de la informacion en los datos hiperespectrales, discriminando la mas significativa para la deteccion de respuestas anomalas, y segundo, en establecer la relacion entre anomalias espectrales detectadas y lo que hemos denominado anomalias informacionales, es decir, anomalias que aportan algun tipo de informacion real de las superficies o materiales que las producen. En la deteccion de respuestas anomalas se asume un no conocimiento previo de los objetivos, de tal manera que los pixeles se separan automaticamente en funcion de su informacion espectral significativamente diferenciada respecto de un fondo que se estima, bien de manera global para toda la escena, bien localmente por segmentacion de la imagen. La metodologia desarrollada se ha centrado en la implicacion de la definicion estadistica del fondo espectral, proponiendo un nuevo enfoque que permite discriminar anomalias respecto fondos segmentados en diferentes grupos de longitudes de onda del espectro, explotando la potencialidad de separacion entre el espectro electromagnetico reflectivo y emisivo. Se ha estudiado la eficiencia de los principales algoritmos de deteccion de anomalias, contrastando los resultados del algoritmo RX (Reed and Xiaoli, 1990) adoptado como estandar por la comunidad cientifica, con el metodo UTD (Uniform Targets Detector), su variante RXD-UTD, metodos basados en subespacios SSRX (Subspace RX) y metodo basados en proyecciones de subespacios de imagen, como OSPRX (Orthogonal Subspace Projection RX) y PP (Projection Pursuit). Se ha desarrollado un nuevo metodo, evaluado y contrastado por los anteriores, que supone una variacion de PP y describe el fondo espectral mediante el analisis discriminante de bandas del espectro electromagnetico, separando las anomalias con el algortimo denominado Detector de Anomalias de Fondo Termico o DAFT aplicable a sensores que registran datos en el espectro emisivo. Se han evaluado los diferentes metodos de deteccion de anomalias en rangos del espectro electromagnetico del visible e infrarrojo cercano (Visible and Near Infrared-VNIR), infrarrojo de onda corta (Short Wavelenght Infrared-SWIR), infrarrojo medio (Meadle Infrared-MIR) e infrarrojo termico (Thermal Infrared-TIR). 
The response of the surfaces at the different wavelengths of the electromagnetic spectrum, together with their surroundings, influences the type and frequency of the spectral anomalies they may produce. For this reason, the research has used hyperspectral data cubes from airborne sensors whose strategies and designs for the spectrometric construction of the image differ. Test data sets from the sensors AHS (Airborne Hyperspectral System), HyMAP Imaging Spectrometer, CASI (Compact Airborne Spectrographic Imager), AVIRIS (Airborne Visible Infrared Imaging Spectrometer), HYDICE (Hyperspectral Digital Imagery Collection Experiment) and MASTER (MODIS/ASTER Simulator) have been evaluated. Experiments have been designed over natural, urban and semi-urban areas of different complexity. The behavior of the different anomaly detectors has been evaluated through 23 tests corresponding to 15 study areas grouped into 6 spaces or scenarios: Urban (E1), Semi-urban/Industrial/Urban periphery (E2), Forest (E3), Agricultural (E4), Geological/Volcanic (E5) and Other spaces: water, clouds and shadows (E6). The sensors evaluated are characterized by recording images in a wide range of narrow, contiguous bands of the electromagnetic spectrum. The Thesis has focused on developing techniques that automatically separate and extract pixels, or groups of pixels, whose spectral signature differs in a discriminant way from those around them, taking as the sample space part or all of the spectral bands in which the hyperspectral sensor has recorded radiance. One factor taken into account in the research has been the measuring instrument itself, that is, the characterization of the different subsystems, imaging and auxiliary sensors, involved in the process. To use the measured data quantitatively it has been necessary to define the spatial and spectral relationships between the sensor, the observed surface, and the potential anomalies and target detection patterns. The influence of the sensor type on anomaly detection has been analyzed, both in its spectral configuration and in the design strategies used to record the radiation coming from the surfaces, the two main types of sensors studied being rotating-mirror (whiskbroom) scanners and pushbroom scanners. Different scenarios have been defined in the research, covering a wide variability of geomorphological environments and cover types, in Mediterranean, mid-latitude and tropical environments. In summary, this Thesis presents an anomaly detection technique for hyperspectral data called DAFT, a variant of PP, based on reducing the dimensionality by projecting the background onto a range of thermal wavelengths distinct from the projection of the anomalies or targets with no known spectral signature. The proposed methodology has been tested with real hyperspectral images from different sensors and in different scenarios or spaces, and therefore also with different spectral backgrounds, and the results show the benefits of the approach in detecting a wide variety of objects whose spectral signatures deviate sufficiently from the background.
The technique turns out to be automatic in the sense that no parameter tuning is needed, giving significant results in all cases. Even subpixel-sized objects, which cannot be distinguished by the human eye in the original image, can be detected as anomalies. In addition, a comparison is made between the proposed approach, the popular RX technique and other detectors, in both their global and local modes. The proposed method outperforms the others in certain scenarios, demonstrating its ability to reduce the false-alarm rate. The results of the automatic DAFT algorithm developed have demonstrated an improvement in the qualitative definition of the spectral anomalies that identify distinct entities on or below the surface, replacing the classical normal-distribution model with a robust method that considers different alternatives from the very moment the hyperspectral data are acquired. To achieve this it has been necessary to analyze the relationship between biophysical parameters, such as the reflectance and emissivity of the materials, and the spatial distribution of the detected entities with respect to their surroundings. Finally, the DAFT algorithm has been chosen as the most suitable for sensors that acquire data in the TIR, since it shows the best agreement with the reference data, demonstrating a high computational efficiency that facilitates its implementation in a mapping system that automatically projects the detected anomalies onto a geographic reference frame, which confirms a significant step towards what is known as real-time mapping. The aim of this Thesis is to develop a specific methodology to be applied to automatic anomaly detection processes using hyperspectral data, also called hyperspectral scenes, and to improve classification processes. Several scenarios and areas, and their relationship with surfaces and objects, have been tested. The spectral characteristics of the reflectance and emissivity parameters in the pattern recognition of urban materials in several hyperspectral scenes have also been tested. Spectral ranges of the visible-near infrared (VNIR), shortwave infrared (SWIR) and thermal infrared (TIR) from hyperspectral data cubes of AHS (Airborne Hyperspectral System), HyMAP Imaging Spectrometer, CASI (Compact Airborne Spectrographic Imager), AVIRIS (Airborne Visible Infrared Imaging Spectrometer), HYDICE (Hyperspectral Digital Imagery Collection Experiment) and MASTER (MODIS/ASTER Simulator) have been used in this research. It is assumed that there is no prior knowledge of the targets in anomaly detection. Thus, the pixels are automatically separated according to their spectral information, significantly differentiated with respect to a background estimated either globally for the full scene or locally by image segmentation. Several experiments on different scenarios have been designed, analyzing the behavior of the standard RX anomaly detector and of different subspace-based, image-projection-based and segmentation-based anomaly detection methods. The results and their consequences for unsupervised classification processes are discussed. Detection of spectral anomalies aims at automatically extracting pixels that show responses differing significantly from their surroundings. This Thesis deals with the unsupervised technique of target detection, also called anomaly detection.
Since this technique assumes no prior knowledge about the target or the statistical characteristics of the data, the only available option is to look for objects that stand out from the background. Several methods have been developed in recent decades, allowing a better understanding of the relationships between image dimensionality and the optimization of search procedures, as well as of the subpixel differentiation of the spectral mixture and its implications for anomalous responses. In another sense, imaging spectrometry has proven to be efficient in the characterization of materials, based on statistical methods using specific reflection and absorption bands. Spectral configurations in the VNIR, SWIR and TIR have been successfully used for mapping materials in different urban scenarios. There has been increasing interest in the use of high-resolution data (both spatial and spectral) to detect small objects and to discriminate surfaces in areas of urban complexity. This has come to be known as target detection, which can be either supervised or unsupervised. In supervised target detection, algorithms rely on prior knowledge, such as the spectral signature. The detection process for matching signatures is not straightforward, owing to the difficulty of relating airborne sensor data to material spectra measured on the ground. This can be further complicated by the large number of possible objects of interest, as well as by uncertainty about the reflectance or emissivity of these objects and surfaces. An important objective in this research is to establish relationships that allow spectral anomalies to be linked with what can be called informational anomalies, and therefore to identify information related to anomalous responses in certain places rather than simply spotting differences from the background. The development in recent years of new hyperspectral sensors and techniques widens the possibilities for applications in remote sensing of the Earth. Remote sensing systems measure and record the electromagnetic disturbances that the surveyed objects induce in their surroundings, by means of different sensors mounted on airborne or space platforms. Map updating is important for managers and decision makers because of the rapid changes that typically occur in natural, urban and semi-urban areas. It is necessary to optimize the methodology in order to get the best out of remote sensing techniques applied to hyperspectral data. The first problem with hyperspectral data is to reduce the dimensionality while keeping the maximum amount of information. Hyperspectral sensors considerably increase the amount of information; this makes it possible to separate materials with greater precision, but at the same time a larger number of parameters must be estimated, and precision decreases as the number of bands increases. This is known as the Hughes effect (Bellman, 1957). Hyperspectral imagery allows us to discriminate between a huge number of different materials; however, some land and urban covers are made up of similar materials and respond similarly, which produces confusion in the classification. The training and the algorithm used for mapping are also important for the final result, and some properties of the thermal spectrum for detecting land cover are also studied.
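As a generic illustration of the dimensionality-reduction step discussed above (not the method developed in the Thesis), a principal-component projection can retain most of the spectral variance of a scene in a handful of components. The sketch below assumes scikit-learn is available and uses hypothetical names.

    import numpy as np
    from sklearn.decomposition import PCA

    def reduce_bands(cube, n_components=10):
        """Project a (rows, cols, bands) hyperspectral cube onto its first
        principal components, keeping most of the spectral variance."""
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands)
        pca = PCA(n_components=n_components)
        reduced = pca.fit_transform(pixels)       # shape (rows*cols, n_components)
        retained = pca.explained_variance_ratio_.sum()
        return reduced.reshape(rows, cols, n_components), retained

Running an anomaly detector on the reduced cube mitigates the Hughes effect when only a limited number of background samples is available for estimating the statistics.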
In summary, this Thesis presents a new technique for anomaly detection in hyperspectral data called DAFT, a variant of PP, based on dimensionality reduction by projecting the background onto a range of thermal-spectrum wavelengths distinct from the projection of the anomalies or targets with unknown spectral signature. The proposed methodology has been tested with hyperspectral images from different imaging spectrometers corresponding to several places or scenarios, and therefore with different spectral backgrounds. The results show the benefits of the approach for detecting a variety of targets whose spectral signatures deviate sufficiently from the background. DAFT is an automated technique in the sense that it is not necessary to adjust parameters, and it provides significant results in all cases. Subpixel anomalies, which cannot be distinguished by the human eye in the original image, can nevertheless be detected as outliers thanks to the projection of the VNIR endmembers with a very strong thermal contrast. Furthermore, a comparison between the proposed approach and the well-known RX detector is performed in both modes, global and local. The proposed method outperforms the existing ones in particular scenarios, demonstrating its ability to reduce the probability of false alarms. The results of the automatic DAFT algorithm have demonstrated an improvement in the qualitative definition of the spectral anomalies, replacing the classical normal-distribution model with a robust method. To achieve this it has been necessary to analyze the relationship between biophysical parameters, such as reflectance and emissivity, and the spatial distribution of the detected entities with respect to their environment, for example buried or semi-buried materials, or building covers made of asbestos, cellular polycarbonate-PVC or metal composites. Finally, the DAFT method has been chosen as the most suitable for anomaly detection with imaging spectrometers that acquire data in the thermal infrared spectrum, since it presents the best results in comparison with the reference data, demonstrating a high computational efficiency that facilitates its implementation in a mapping system, moving towards what is called Real-Time Mapping.
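Since the comparison between detectors hinges on detection versus false-alarm rates, a small helper of the following kind (purely illustrative, with assumed names) can tabulate both quantities from a detector score map and a ground-truth anomaly mask.

    import numpy as np

    def detection_vs_false_alarm(scores, truth_mask, threshold):
        """Detection and false-alarm rates of a score map for one threshold."""
        detected = scores >= threshold
        tp = np.logical_and(detected, truth_mask).sum()
        fp = np.logical_and(detected, ~truth_mask).sum()
        pd = tp / max(truth_mask.sum(), 1)        # probability of detection
        pfa = fp / max((~truth_mask).sum(), 1)    # probability of false alarm
        return pd, pfa

Sweeping the threshold over the range of scores yields the ROC-style curves commonly used to compare detectors such as RX and its variants.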

Relevância:

30.00% 30.00%

Publicador:

Resumo:

An ED-tether mission to Jupiter is presented. A bare tether carrying cathodic devices at both ends but no power supply, and using no propellant, could move 'freely' among Jupiter's four great moons. In the tour scheme the current would be driven naturally throughout by the motional electric field, the Lorentz force switching direction with the current around a 'drag' radius of about 160,000 km, where the speed of the Jovian ionosphere equals the speed of a spacecraft in circular orbit. With plasma density and magnetic field decreasing rapidly with distance from Jupiter, drag/thrust would only be operated in the inner plasmasphere, the current being nearly shut off where convenient in orbit by disconnecting the cathodes or plugging in a very large resistance; the tether could serve as its own power supply by plugging in an electric load where convenient, with just some reduction in thrust or drag. The periapsis of the spacecraft in a heliocentric transfer orbit from Earth would lie inside the drag sphere; with the tether deployed and the current on around periapsis, magnetic drag allows Jupiter to capture the spacecraft into an elliptic orbit of high eccentricity. The current would be on at successive perijove passes and off elsewhere, reducing the eccentricity by lowering the apoapsis progressively to allow visits to the giant moons. In a second phase, the current is on around apoapsis outside the drag sphere, raising the periapsis until the full orbit lies outside that sphere. In a third phase, the current is on at periapsis, increasing the eccentricity until a last push makes the orbit hyperbolic to escape Jupiter. Dynamical issues such as the low gravity gradient at Jupiter and tether orientation in elliptic orbits of high eccentricity are discussed.
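The quoted 'drag' radius can be checked with a simple balance (a back-of-the-envelope calculation using standard values for Jupiter, not figures taken from the paper): the plasmasphere co-rotates with the planet's spin rate ω_J, so drag turns into thrust where the co-rotation speed equals the circular orbital speed:

    \omega_J\, r = \sqrt{\frac{\mu_J}{r}}
    \quad\Longrightarrow\quad
    r_{\mathrm{drag}} = \left(\frac{\mu_J}{\omega_J^{2}}\right)^{1/3}
    \approx \left(\frac{1.27\times 10^{17}\ \mathrm{m^3/s^2}}{\left(1.76\times 10^{-4}\ \mathrm{s^{-1}}\right)^{2}}\right)^{1/3}
    \approx 1.6\times 10^{5}\ \mathrm{km}

Inside this radius the spacecraft outruns the co-rotating plasma and the Lorentz force on the tether current acts as drag; outside it the plasma overtakes the spacecraft and the same mechanism yields thrust.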