845 results for mining data streams
Abstract:
One way to organize knowledge and make its search and retrieval easier is to create a structural representation divided into hierarchically related topics. Once this structure is built, it is necessary to find labels for each of the obtained clusters. In many cases the labels must be built using all the terms in the documents of the collection. This paper presents the SeCLAR method, which explores the use of association rules in the selection of good candidates for labels of hierarchical document clusters. The purpose of this method is to select a subset of terms by exploring the relationships among the terms of each document. These candidates can then be processed by a classical method to generate the labels. An experimental study demonstrates the potential of the proposed approach to improve the precision and recall of labels obtained by classical methods, by considering only the terms that are potentially more discriminative. © 2012 - IOS Press and the authors. All rights reserved.
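The abstract sketches the idea of filtering label candidates with association rules mined from term co-occurrences. As a rough illustration of that idea (not the actual SeCLAR algorithm; thresholds, names, and data below are invented), a pairwise rule miner over per-document term sets might look like this:

```python
from itertools import combinations

def label_candidates(cluster_docs, min_support=0.3, min_confidence=0.6):
    """Select label-candidate terms for one cluster by mining pairwise
    association rules (t1 -> t2) over document term sets.

    cluster_docs: list of sets of terms, one set per document.
    Returns the set of terms appearing in at least one strong rule.
    """
    n = len(cluster_docs)
    term_count, pair_count = {}, {}
    for terms in cluster_docs:
        for t in terms:
            term_count[t] = term_count.get(t, 0) + 1
        for a, b in combinations(sorted(terms), 2):
            pair_count[(a, b)] = pair_count.get((a, b), 0) + 1

    candidates = set()
    for (a, b), c in pair_count.items():
        if c / n < min_support:
            continue
        # Keep both terms if either direction of the rule is confident.
        if c / term_count[a] >= min_confidence or c / term_count[b] >= min_confidence:
            candidates.update((a, b))
    return candidates

docs = [{"stream", "mining", "window"},
        {"stream", "mining", "drift"},
        {"stream", "window", "drift"}]
print(label_candidates(docs))
```

The surviving terms would then be handed to a classical labeling method, as the abstract describes.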
Abstract:
Graduate Program in Mechanical Engineering - FEG
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated into an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly analyzed in a timely manner, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality-of-service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, through a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
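The partial fault-tolerance idea above (replicate only where quality requirements warrant the cost) can be caricatured with a toy planner. This is a hedged sketch of the general concept only; the cost model, the greedy rule, and all names are assumptions, not the actual LAAR technique:

```python
def plan_replication(operators, budget):
    """Toy sketch of partial fault tolerance: actively replicate only the
    operators whose loss would hurt application-level quality the most,
    staying within a resource budget.

    operators: list of (name, replication_cost, quality_loss_if_failed)
    Returns the set of operator names chosen for active replication.
    """
    # Greedily protect operators with the highest quality loss per unit cost.
    ranked = sorted(operators, key=lambda op: op[2] / op[1], reverse=True)
    replicated, spent = set(), 0.0
    for name, cost, loss in ranked:
        if spent + cost <= budget:
            replicated.add(name)
            spent += cost
    return replicated

ops = [("decode", 4.0, 0.9), ("filter", 1.0, 0.2), ("aggregate", 2.0, 0.7)]
print(plan_replication(ops, budget=5.0))  # protects 'aggregate' and 'filter'
```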
Abstract:
Time series are ubiquitous. The acquisition and processing of continuously measured data pervades all areas of the natural sciences, medicine, and finance. The enormous growth in recorded data volumes, whether from automated monitoring systems or integrated sensors, calls for exceptionally fast algorithms in theory and practice. This thesis is therefore concerned with the efficient computation of subsequence alignments. Complex algorithms such as anomaly detection, motif queries, or the unsupervised extraction of prototypical building blocks in time series make heavy use of these alignments, which motivates the need for fast implementations. The thesis comprises three approaches to this challenge: four alignment algorithms and their parallelization on CUDA-capable hardware, an algorithm for the segmentation of data streams, and a unified treatment of Lie-group-valued time series.
The first contribution is a complete CUDA port of the UCR-Suite, the world-leading implementation of subsequence alignment. It includes a new computation scheme for determining local alignment scores under the z-normalized Euclidean distance, deployable on any parallel hardware that supports fast Fourier transforms. Furthermore, we give a SIMT-compatible implementation of the UCR-Suite's lower-bound cascade for the efficient computation of local alignment scores under Dynamic Time Warping. Both CUDA implementations compute results one to two orders of magnitude faster than established methods.
Second, we investigate two linear-time approximations for the elastic alignment of subsequences. On the one hand, we treat a SIMT-compatible relaxation scheme for greedy DTW and its efficient CUDA parallelization. On the other hand, we introduce a new local distance measure, the Gliding Elastic Match (GEM), which can be computed with the same asymptotic time complexity as greedy DTW yet offers a complete relaxation of the penalty matrix. Further improvements include invariance to trends on the measurement axis and to uniform scaling on the time axis. An extension of GEM to multi-shape segmentation is also discussed and evaluated on motion data. Both CUDA parallelizations achieve runtime improvements of up to two orders of magnitude.
In the literature, the treatment of time series is usually restricted to real-valued measurements. The third contribution is a unified method for handling Lie-group-valued time series. Building on it, distance measures on the rotation group SO(3) and on the Euclidean group SE(3) are treated, and memory-efficient representations as well as group-compatible extensions of elastic measures are discussed.
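A standard way to realize an FFT-based scheme for local alignment scores under the z-normalized Euclidean distance, as named in the first contribution, is the sliding-dot-product trick. A minimal NumPy sketch (not the thesis implementation; a CUDA port would parallelize the same arithmetic):

```python
import numpy as np

def znorm_distance_profile(query, series):
    """Distance profile: z-normalized Euclidean distance between `query`
    and every length-m subsequence of `series`, with the sliding dot
    products computed via FFT (O(n log n) instead of O(n*m))."""
    q = np.asarray(query, dtype=float)
    t = np.asarray(series, dtype=float)
    m, n = len(q), len(t)
    q = (q - q.mean()) / q.std()  # z-normalize the query once
    # FFT-based sliding dot products of q against all subsequences of t.
    L = n + m
    qt = np.fft.irfft(np.fft.rfft(t, L) * np.fft.rfft(q[::-1], L), L)[m - 1:n]
    # Rolling mean and std of the length-m subsequences of t.
    csum = np.cumsum(np.insert(t, 0, 0.0))
    csq = np.cumsum(np.insert(t * t, 0, 0.0))
    mu = (csum[m:] - csum[:-m]) / m
    sigma = np.sqrt(np.maximum((csq[m:] - csq[:-m]) / m - mu * mu, 1e-12))
    # Since q has zero mean, dot(q, znorm(subseq)) = qt / sigma, and
    # d^2 = 2m - 2 * dot(q, znorm(subseq)).
    return np.sqrt(np.maximum(2.0 * m - 2.0 * qt / sigma, 0.0))

t = np.sin(np.linspace(0, 10 * np.pi, 500)) + 0.05 * np.random.randn(500)
print(znorm_distance_profile(t[40:80], t).argmin())  # -> 40: query matches itself
```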
Abstract:
The ability of cryogenic photonic crystals to carry out high performance microwave signal processing operations has been developed into systems that can: rapidly record broadband microwave spectra with fine resolution and high dynamic range; search for patterns in data streams at 40 gigabits per second; and communicate via spread-spectrum signals that are well below the noise floor. The basic concepts of the technology and its many applications, along with an overview of university-industry partnerships and the growing photonics industry in Bozeman, will be presented.
Abstract:
In this article we describe a method for automatically generating text summaries of data corresponding to traces of spatial movement in geographical areas. The method can help humans to understand large data streams, such as the amounts of GPS data recorded by a variety of sensors in mobile phones, cars, etc. We describe the knowledge representations we designed for our method and the main components of our method for generating the summaries: a discourse planner, an abstraction module and a text generator. We also present evaluation results that show the ability of our method to generate certain types of geospatial and temporal descriptions.
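As a toy illustration of the three-stage pipeline the abstract names (discourse planner, abstraction module, text generator), the following sketch segments a (t, x, y) GPS trace into stop/move episodes and realizes them with templates; the segmentation rule and phrasing are invented for illustration, not the authors' method:

```python
from math import hypot

def abstract_trace(points, stop_speed=0.5):
    """Abstraction module (toy): label each step of a (t, x, y) trace as
    'stopped' or 'moving' and merge consecutive same-label steps into episodes."""
    episodes = []
    for (t0, x0, y0), (t1, x1, y1) in zip(points, points[1:]):
        speed = hypot(x1 - x0, y1 - y0) / (t1 - t0)
        label = "stopped" if speed < stop_speed else "moving"
        if episodes and episodes[-1][0] == label:
            episodes[-1][2] = t1  # extend the current episode
        else:
            episodes.append([label, t0, t1])
    return episodes

def generate_summary(episodes):
    """Discourse planner + text generator (toy): order episodes
    chronologically and realize each with a template."""
    clauses = [f"was {label} from t={start} to t={end}"
               for label, start, end in episodes]
    return "The subject " + ", then ".join(clauses) + "."

trace = [(0, 0, 0), (10, 0, 1), (20, 5, 9), (30, 14, 17), (40, 14, 17.5)]
print(generate_summary(abstract_trace(trace)))
```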
Abstract:
In recent years, applications in domains such as telecommunications, network security or large-scale sensor networks have shown the limits of the traditional store-then-process paradigm. In this context, Stream Processing Engines emerged as a candidate solution for all these applications demanding high processing capacity with low processing latency guarantees. With Stream Processing Engines, data streams are not persisted but rather processed on the fly, producing results continuously. Current Stream Processing Engines, whether centralized or distributed, do not scale with the input load due to single-node bottlenecks. Moreover, they are based on static configurations that lead to either under- or over-provisioning. This Ph.D. thesis discusses StreamCloud, an elastic parallel-distributed stream processing engine that enables the processing of large data stream volumes. StreamCloud minimizes the distribution and parallelization overhead by introducing novel techniques that split queries into parallel subqueries and allocate them to independent sets of nodes. Moreover, StreamCloud's elastic and dynamic load balancing protocols enable effective adjustment of resources depending on the incoming load. Together with the parallelization and elasticity techniques, StreamCloud defines a novel fault tolerance protocol that introduces minimal overhead while providing fast recovery. StreamCloud has been fully implemented and evaluated using several real-world applications such as fraud detection and network analysis. The evaluation, conducted on a cluster with more than 300 cores, demonstrates the large scalability of StreamCloud and the effectiveness of its elasticity and fault tolerance mechanisms.
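The core parallelization idea (splitting a query into parallel subqueries whose input is hash-partitioned across independent node sets, with elastic resizing under load) can be caricatured in a few lines. This is an illustrative sketch only; class and method names are invented, not StreamCloud's API, and state migration on scale-out is ignored:

```python
import hashlib

class ElasticPartitioner:
    """Toy sketch: tuples are hash-partitioned on a key across the nodes
    running one parallel subquery, and the node set can grow when load rises."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.load = {n: 0 for n in self.nodes}

    def route(self, key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        node = self.nodes[h % len(self.nodes)]
        self.load[node] += 1
        return node

    def scale_out(self, new_node, max_avg_load):
        """Elasticity rule: add a node when average load per node
        exceeds the configured threshold."""
        if sum(self.load.values()) / len(self.nodes) > max_avg_load:
            self.nodes.append(new_node)
            self.load[new_node] = 0

p = ElasticPartitioner(["node-1", "node-2"])
for key in ["card-7431", "card-2296", "card-9918", "card-0452"]:
    print(key, "->", p.route(key))
p.scale_out("node-3", max_avg_load=1.5)
print(p.nodes)  # ['node-1', 'node-2', 'node-3']
```

A real system would also re-route and migrate keyed state when the node set changes, which is where most of the engineering effort lies.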
Abstract:
Multiuser videoconferencing and real-time collaboration systems allow users to communicate using video, audio and data streams. These systems have been historically expensive to obtain and maintain. Over the last few decades, technological breakthroughs have mitigated those costs and popularized real-time video communication, allowing its use in environments such as education or health care. The last big evolutionary leap forward has been the transition of these types of applications towards the Web.
Several technologies have allowed this journey to the Web browser. Firstly, Rich Internet Applications (RIAs) enable the creation of dynamic Web pages that defy the classical request-response interaction and provide an experience similar to their native counterparts. On the other hand, Cloud Computing brings the leasing of virtualized hardware resources in a pay-per-use model and, with it, better scalability in resource-demanding services. However, as with every change, this evolution imposes a set of challenges on existing videoconferencing solutions. This dissertation proposes a set of architectures, mechanisms and algorithms that aim to adapt multi-conferencing systems to the Web platform, taking into account the variety of devices and access networks that come with it. To this end, this thesis starts with a study concerning the requirements that must be met by new Web videoconferencing systems. The result of this study is the design, development and implementation of a new videoconferencing service that provides advanced collaboration to its users through video and audio communication as well as desktop sharing. After this, a new communication system between Web and native applications is presented. This system proposes adaptation mechanisms to bridge the two worlds, providing a seamless integration that is transparent to users, who can now access the powerful native application via an easy Web interface. The next step is to identify the main challenges posed by multi-conferencing on small devices (smartphones) with heterogeneous access networks. This dissertation proposes a mechanism that combines transcoding and adaptive quality algorithms to overcome those limitations. A second iteration in this dissertation is motivated by WebRTC, which appears as a disruptive technology by enabling new real-time communication possibilities in browsers. A new mechanism for flexible videoconferencing server scalability is presented. This mechanism aims to address the strong scalability requirements of the Web environment by taking advantage of Cloud Computing. Finally, the dissertation discusses the results obtained throughout the study, capturing the evolution of Web videoconferencing systems.
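The transcoding-plus-adaptive-quality mechanism mentioned above ultimately reduces to selecting, per participant, the highest stream quality the measured link can sustain. A deliberately simplified sketch (the layer table and the 80% headroom rule are assumptions, not the dissertation's algorithm):

```python
def pick_quality(bandwidth_kbps,
                 layers=((1080, 2500), (720, 1200), (480, 600), (240, 250))):
    """Toy adaptive-quality rule: pick the highest resolution whose
    bitrate fits within ~80% of the measured downlink bandwidth."""
    for height, bitrate in layers:  # layers ordered best to worst
        if bitrate <= 0.8 * bandwidth_kbps:
            return height
    return layers[-1][0]  # floor quality for very constrained links

for bw in (3500, 1000, 300):
    print(bw, "kbps ->", pick_quality(bw), "p")
```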
Abstract:
In this project, we propose the implementation of a 3D object recognition system which will be optimized to operate under demanding time constraints. The system must be robust so that objects can be recognized properly in poor light conditions and in cluttered scenes with significant levels of occlusion. An important requirement must be met: the system must exhibit reasonable performance running on a low-power-consumption mobile GPU computing platform (NVIDIA Jetson TK1) so that it can be integrated in mobile robotics systems, ambient intelligence or ambient assisted living applications. The acquisition system is based on the use of color and depth (RGB-D) data streams provided by low-cost 3D sensors like Microsoft Kinect or PrimeSense Carmine. The range of algorithms and applications to be implemented and integrated will be quite broad, ranging from the acquisition, outlier removal or filtering of the input data and the segmentation or characterization of regions of interest in the scene to the object recognition and pose estimation itself. Furthermore, in order to validate the proposed system, we will create a 3D object dataset. It will be composed of a set of 3D models, reconstructed from common household objects, as well as a handful of test scenes in which those objects appear. The scenes will be characterized by different levels of occlusion, diverse distances from the elements to the sensor and variations in the pose of the target objects. The creation of this dataset implies the additional development of 3D data acquisition and 3D object reconstruction applications. The resulting system has many possible applications, ranging from mobile robot navigation and semantic scene labeling to human-computer interaction (HCI) systems based on visual information.
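The early pipeline stages the project enumerates (filtering the input cloud, then segmenting a region of interest) could be prototyped along these lines. This is a naive O(n^2) NumPy sketch with invented thresholds, not the project's implementation:

```python
import numpy as np

def remove_outliers(cloud, radius=0.05, min_neighbors=5):
    """Radius outlier removal (toy O(n^2) version): keep points that
    have at least `min_neighbors` other points within `radius`."""
    d = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=2)
    keep = (d < radius).sum(axis=1) - 1 >= min_neighbors
    return cloud[keep]

def segment_by_depth(cloud, z_near=0.3, z_far=1.5):
    """Pass-through filter: keep the working volume in front of the sensor."""
    z = cloud[:, 2]
    return cloud[(z > z_near) & (z < z_far)]

# A fake Kinect-style cloud: a dense object cluster plus scattered outliers.
rng = np.random.default_rng(0)
scene = np.vstack([rng.normal([0, 0, 1.0], 0.02, (200, 3)),
                   rng.uniform(-1, 2, (20, 3))])
roi = segment_by_depth(scene)
obj = remove_outliers(roi)
print(len(scene), "->", len(roi), "->", len(obj))
```

On the Jetson-class hardware the project targets, the pairwise distance step would be replaced by a spatial index or a GPU kernel.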
Abstract:
E-Learning platforms are increasingly used in distance education, a fact directly related to their ability to let students attend courses from anywhere. Within the scope of e-Learning platforms there is an especially interesting group: adaptive platforms, which tend to substitute for the (face-to-face) teacher through interactivity, variability of content, automation, and the capacity to solve problems and simulate educational behaviors. The ADAPT project (an adaptive e-Learning platform) consists of the creation of one such platform, implementing intelligent tutoring, problem solving based on past experience, genetic algorithms and link-mining. This dissertation arises in the link-mining area and documents the development of four distinct modules: the first module is a search engine for suggesting alternative content; the second module identifies changes in learning style; the third module is a data analysis platform that implements several data mining and statistical techniques to provide teachers/tutors with important information that would not be visible without this kind of technique; finally, the last module is a recommender system that suggests to students the most suitable articles based on the queries of students with similar profiles. This thesis documents the development of prototypes for each of these modules. The tests performed for each module show that the methodologies used are valid and viable.
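The fourth module's strategy (suggest articles consulted by students with similar profiles) is classic user-based collaborative filtering. A minimal sketch under that assumption, with invented data and names:

```python
import numpy as np

def recommend(ratings, student, k=2, n_items=3):
    """Toy user-based collaborative filter: suggest articles consulted
    by the k students most similar to `student`.

    ratings: matrix [students x articles], 1 if consulted else 0.
    """
    target = ratings[student]
    # Cosine similarity between the target student and everyone else.
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(target) + 1e-12
    sims = ratings @ target / norms
    sims[student] = -1.0                 # exclude the student themself
    neighbors = np.argsort(sims)[-k:]    # k most similar profiles
    # Score unseen articles by how often the neighbors consulted them.
    scores = ratings[neighbors].sum(axis=0) * (target == 0)
    return np.argsort(scores)[::-1][:n_items]

R = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 1, 1, 0],
              [1, 1, 0, 1, 1]])
print(recommend(R, student=0))  # articles ranked for student 0
```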
Abstract:
The apposition compound eyes of gonodactyloid stomatopods are divided into a ventral and a dorsal hemisphere by six equatorial rows of enlarged ommatidia, the mid-band (MB). Whereas the hemispheres are specialized for spatial vision, the MB consists of four dorsal rows of ommatidia specialized for colour vision and two ventral rows specialized for polarization vision. The eight retinula cell axons (RCAs) from each ommatidium project retinotopically onto one corresponding lamina cartridge, so that the three retinal data streams (spatial, colour and polarization) remain anatomically separated. This study investigates whether the retinal specializations are reflected in differences in the RCA arrangement within the corresponding lamina cartridges. We have found that, in all three eye regions, the seven short visual fibres (svfs) formed by retinula cells 1-7 (R1-R7) terminate at two distinct lamina levels, geometrically separating the terminals of photoreceptors sensitive to either orthogonal e-vector directions or different wavelengths of light. This arrangement is required for the establishment of spectral and polarization opponency mechanisms. The long visual fibres (lvfs) of the eighth retinula cells (R8) pass through the lamina and project retinotopically to the distal medulla externa. Differences between the three eye regions exist in the packing of svf terminals and in the branching patterns of the lvfs within the lamina. We hypothesize that the R8 cells of MB rows 1-4 are incorporated into the colour vision system formed by R1-R7, whereas the R8 cells of MB rows 5 and 6 form a neural channel, separate from R1-R7, for polarization processing.
Abstract:
Sharing data among organizations often leads to mutual benefit. Recent technology in data mining has enabled efficient extraction of knowledge from large databases. This, however, increases the risk of disclosing sensitive knowledge when the database is released to other parties. To address this privacy issue, one may sanitize the original database so that the sensitive knowledge is hidden. The challenge is to minimize the side effect on the quality of the sanitized database so that non-sensitive knowledge can still be mined. In this paper, we study such a problem in the context of hiding sensitive frequent itemsets by judiciously modifying the transactions in the database. To preserve the non-sensitive frequent itemsets, we propose a border-based approach to efficiently evaluate the impact of any modification to the database during the hiding process. The quality of the database can be well maintained by greedily selecting the modifications with minimal side effect. Experimental results are also reported to show the effectiveness of the proposed approach. © 2005 IEEE
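The paper's border-based evaluation is not reproduced here, but the underlying sanitization loop (greedily pick the transaction/item edit with the smallest side effect on protected itemsets until the sensitive itemset falls below the support threshold) can be sketched as follows; thresholds and data are invented:

```python
def support(db, itemset):
    return sum(itemset <= t for t in db)

def hide_itemset(db, sensitive, min_sup, protected):
    """Greedy sanitization sketch (not the paper's exact border-based
    algorithm): lower the support of `sensitive` below `min_sup` by
    deleting one of its items from supporting transactions, each time
    picking the (transaction, item) edit that damages the fewest
    `protected` (non-sensitive frequent) itemsets."""
    db = [set(t) for t in db]
    while support(db, sensitive) >= min_sup:
        best = None
        for i, t in enumerate(db):
            if not sensitive <= t:
                continue
            for item in sensitive:
                # Side effect: protected itemsets this edit stops supporting.
                damage = sum(item in p and p <= t for p in protected)
                if best is None or damage < best[0]:
                    best = (damage, i, item)
        _, i, item = best
        db[i].discard(item)
    return db

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "c"}, {"b", "c"}]
protected = [frozenset({"a", "b"}), frozenset({"b", "c"})]
print(hide_itemset(db, frozenset({"a", "b", "c"}), min_sup=2,
                   protected=protected))
```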
Abstract:
A network concept is introduced that exploits transparent optical grooming of traffic between an access network and a metro core ring network. This network is enabled by an optical router that allows bufferless aggregation of metro network traffic into higher-capacity data streams for core network transmission. A key functionality of the router is WDM to time-division multiplexing (TDM) transmultiplexing.
Abstract:
A novel simple all-optical nonlinear pulse processing technique using loop mirror intensity filtering and nonlinear broadening in normal dispersion fiber is described. The pulse processor offers reamplification and cleaning up of the optical signals and phase margin improvement. The efficiency of the technique is demonstrated by application to 40-Gb/s return-to-zero optical data streams.
Abstract:
A novel all-optical regeneration technique using loop-mirror intensity-filtering and nonlinear broadening in normal-dispersion fibre is described. The device offers 2R-regeneration function and phase margin improvement. The technique is applied to 40Gbit/s return-to-zero optical data streams.