970 results for Semantics - Data processing
Abstract:
The project comprises simulation, data processing, mapping and localization modules, developed in C++ using ROS (Robot Operating System) and PCL (Point Cloud Library). It was developed within the AVORA underwater robotics project. The vehicle and the sensor have been characterized, and different sensor and mapping technologies have been analyzed. The data pass through three stages: conversion to a point cloud, threshold filtering, removal of spurious points and, optionally, shape detection. These data are used to build a multi-level surface map. The other tool developed is a modified Iterative Closest Point (ICP) algorithm that takes into account the operating mode of the imaging sonar used.
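To make the three-stage pipeline concrete, here is a minimal Python/numpy sketch of the threshold-filtering and spurious-point-removal stages (the actual AVORA modules are C++ on ROS/PCL; array sizes, thresholds and function names here are illustrative assumptions):

    import numpy as np

    def threshold_filter(cloud, intensity, min_intensity):
        # Stage 2: keep only sonar returns whose intensity exceeds a threshold.
        return cloud[intensity >= min_intensity]

    def remove_spurious(cloud, k=8, max_mean_dist=0.5):
        # Stage 3: statistical outlier removal -- drop points whose mean
        # distance to their k nearest neighbours is too large (a simplified,
        # O(n^2) analogue of PCL's StatisticalOutlierRemoval filter).
        d = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=2)
        d.sort(axis=1)
        mean_knn = d[:, 1:k + 1].mean(axis=1)  # column 0 is the self-distance
        return cloud[mean_knn <= max_mean_dist]

    cloud = np.random.default_rng(0).random((200, 3)) * 10.0  # hypothetical scan
    inten = np.random.default_rng(1).random(200)
    clean = remove_spurious(threshold_filter(cloud, inten, 0.2))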
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society where smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require close cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is deployed into the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes, deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenging modules). WSNs promise to revolutionize the interaction between the physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems, with very low power consumption and small size requirements that enable them to reduce the interference on the sensed physical phenomena and to allow easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must however be carefully minimized. Imaging can thus play an important role in sensing devices for ambient intelligence.
Computer vision can for instance be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. In real time, these approaches form a recently opened field of research. In this thesis we pay attention to the realities of the hardware/software technologies and of the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and to extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, on which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance.
Low Power Video Sensor Nodes and Video Processing Algorithms: In comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and on flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes are presented which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally.
Featuring such intelligence, these nodes are able to cope with tasks that normally require a human operator, such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data.
Multimodal Surveillance: In several setups the use of wired video cameras may not be possible, so building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric InfraRed (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low cost, passive (thus low power) and of limited form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve system performance, outperforming naive power management policies.
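As a hint of how the SVM-based human detection step fits together, here is a hedged scikit-learn sketch (feature extraction is assumed to have already produced one descriptor row per image window; all data below are synthetic placeholders, not the thesis datasets):

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))        # e.g. HOG-like window descriptors
    y = rng.integers(0, 2, size=200)      # 1 = human present, 0 = background

    clf = LinearSVC(C=1.0).fit(X, y)      # train the linear SVM classifier
    print(clf.predict(X[:5]))             # label the first few windows

On the node itself only the learned weight vector needs to be stored, so classification reduces to one dot product per window -- cheap enough for the FPGA- or ARM9-based designs described above.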
Abstract:
Over time, networks of permanent GNSS (Global Navigation Satellite System) stations have become an increasingly valuable support for satellite surveying techniques. They are at once an effective materialization of the reference frame and a useful aid for topographic surveying and deformation-monitoring applications. Alongside the now classic static post-processing applications, real-time measurements are increasingly used and requested by professional users. In all cases, determining precise coordinates for the permanent stations is very important, to the point that it was decided to do so with different processing environments: Bernese and Gamit (which share the differenced approach) were compared with Gipsy (which uses the undifferenced approach). The use of three software packages made it essential to identify a common processing strategy guaranteeing that the ancillary data and the adopted physical parameters would not be a source of divergence between the solutions obtained. Analyzing national-scale networks, or local networks over long time spans, entails processing thousands if not tens of thousands of files; moreover, whether because of trivial errors or in order to run scientific tests, it is often necessary to repeat the processing. Many resources were therefore invested in developing automatic procedures aimed, on the one hand, at preparing the data archives and, on the other, at analyzing the results and comparing them whenever several solutions are available. These procedures were developed on the most significant datasets made available to DISTART (Dipartimento di Ingegneria delle Strutture, dei Trasporti, delle Acque, del Rilevamento del Territorio - Università di Bologna). It was thus possible, at the same time, to compute the positions of the permanent stations of some important local and national networks and to compare some of the most important scientific codes that perform this task. As regards the comparison between the different software packages, it was found that:
• the solutions obtained from Bernese and Gamit (the two differenced packages) are always in perfect agreement;
• the Gipsy solutions (undifferenced method) are almost always slightly more dispersed than those of the other packages and sometimes show appreciable numerical differences from the other solutions, especially for the East coordinate; the differences, however, stay within a few millimeters, and the lines describing the trends remain practically parallel to those of the other two codes;
• the aforementioned East bias between Gipsy and the differenced solutions is more evident for certain Antenna/Radome combinations and seems related to the way the different packages use absolute calibrations.
It must also be considered that Gipsy is considerably faster than the differenced codes and, above all, that with the undifferenced procedure each station's daily file is processed independently of the others, with evidently greater management flexibility: if an instrumental error is found at a single station, or if a station is added to or removed from the network, the whole network does not need to be recomputed.
Together with the other networks, it was possible to analyze the Rete Dinamica Nazionale (RDN), not only over the 28 days that produced its first definition, but also over four further 28-day intervals, spaced six months apart and therefore covering an overall time span of two years. It was thus possible to verify that the RDN can be used to frame any Italian regional network into ITRF05 (International Terrestrial Reference Frame), despite the still limited time span. On the one hand, (purely indicative, unofficial) ITRF velocities of the RDN stations were estimated; on the other, a test was carried out framing a regional network into ITRF through the RDN, and it was verified that there are no appreciable differences with respect to framing it into ITRF through a suitable number of IGS/EUREF stations (International GNSS Service / European REference Frame, Sub-Commission for Europe of the International Association of Geodesy).
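As an illustration of the trend comparison mentioned above, a station velocity can be estimated from a daily coordinate time series with a simple linear fit; the numbers below are synthetic, not actual RDN solutions:

    import numpy as np

    t = np.arange(0.0, 2.0, 1.0 / 365.25)               # two years of daily epochs
    rng = np.random.default_rng(1)
    east = 0.021 * t + rng.normal(0.0, 0.002, t.size)   # East coordinate, metres

    vel, offset = np.polyfit(t, east, 1)                # slope = velocity trend
    print(f"East velocity: {vel * 1000:.1f} mm/yr")

Running the same fit on the Bernese, Gamit and Gipsy series for one station makes the "practically parallel trend lines" observation directly checkable.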
Abstract:
Hybrid technologies, thanks to the convergence of integrated microelectronic devices and a new class of microfluidic structures, could open new perspectives on the way nanoscale events are discovered, monitored and controlled. The key point of this thesis is to evaluate the impact of such an approach on applications of ion-channel High Throughput Screening (HTS) platforms. This approach offers promising opportunities for the development of new classes of sensitive, reliable and cheap sensors. There are numerous advantages to embedding microelectronic readout structures strictly coupled to the sensing elements. On the one hand, the signal-to-noise ratio is increased as a result of scaling. On the other, readout miniaturization allows the organization of sensors into arrays, increasing the capability of the platform in terms of the amount of acquired data, as required by the HTS approach, to improve sensing accuracy and reliability. However, accurate interface design is required to establish efficient communication between ionic-based and electronic-based signals. The work presented in this thesis shows a first example of a complete parallel readout system with single ion channel resolution, using a compact and scalable hybrid architecture suitable for interfacing to large arrays of sensors, ensuring simultaneous signal recording and smart control of the trade-off between signal-to-noise ratio and bandwidth. More specifically, an array of microfluidic polymer structures, hosting artificial lipid bilayer blocks in which single ion channel pores are embedded, is coupled with an array of ultra-low-noise current amplifiers for signal amplification and data processing. As a working demonstration, the platform was used to acquire the ultra-small currents arising from single non-covalent molecular binding events between alpha-hemolysin pores and beta-cyclodextrin molecules in artificial lipid membranes.
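The kind of single-channel analysis such a platform enables can be sketched as threshold-based event detection on the recorded current trace (the data and parameter values below are illustrative assumptions, not measurements from the thesis):

    import numpy as np

    def detect_blockades(current, open_level, blocked_frac=0.7):
        # A binding event (e.g. beta-cyclodextrin occupying the pore) shows
        # up as the current dropping below a fraction of the open-pore level;
        # return (start, end) sample indices of each such dwell.
        blocked = np.r_[False, current < blocked_frac * open_level, False]
        edges = np.diff(blocked.astype(int))
        starts = np.flatnonzero(edges == 1)
        ends = np.flatnonzero(edges == -1)
        return list(zip(starts, ends))

    rng = np.random.default_rng(0)
    trace = 100.0 + rng.normal(0.0, 2.0, 5000)       # open-pore current, pA
    trace[1200:1400] = 30.0                           # one simulated binding event
    print(detect_blockades(trace, open_level=100.0))  # -> [(1200, 1400)]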
Abstract:
The term "Brain Imaging" identi�es a set of techniques to analyze the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are largely used in the study of brain activity. In addition to clinical usage, analysis of brain activity is gaining popularity in others recent �fields, i.e. Brain Computer Interfaces (BCI) and the study of cognitive processes. In this context, usage of classical solutions (e.g. f MRI, PET-CT) could be unfeasible, due to their low temporal resolution, high cost and limited portability. For these reasons alternative low cost techniques are object of research, typically based on simple recording hardware and on intensive data elaboration process. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where electric potential at the patient's scalp is recorded by high impedance electrodes. In EEG potentials are directly generated from neuronal activity, while in EIT by the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from measurements, EIT and EEG relies on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of the electric �field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeo�ff between physical accuracy and technical feasibility, which currently severely limits the capabilities of these techniques. Moreover elaboration of data recorded requires usage of regularization techniques computationally intensive, which influences the application with heavy temporal constraints (such as BCI). This work focuses on the parallel implementation of a work-flow for EEG and EIT data processing. The resulting software is accelerated using multi-core GPUs, in order to provide solution in reasonable times and address requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
Abstract:
The present PhD thesis focused on the development and application of a chemical methodology (Py-GC-MS) and of data-processing methods based on multivariate data analysis (chemometrics). The chromatographic and mass spectrometric data obtained with this technique are particularly suitable for interpretation by chemometric methods such as PCA (Principal Component Analysis) for data exploration and SIMCA (Soft Independent Modelling of Class Analogy) for classification. As a first approach, some issues related to the field of cultural heritage were addressed, with particular attention to the differentiation of binders used in painting. A marker of egg tempera, esterified phosphoric acid, a pyrolysis product of lecithin, was determined using HMDS (hexamethyldisilazane) rather than TMAH (tetramethylammonium hydroxide) as the derivatizing reagent. The validity of analytical pyrolysis as a tool to characterize and classify different types of bacteria was also verified. The FAME chromatographic profiles represent an important tool for bacterial identification. Because of the complexity of the chromatograms, it was possible to characterize the bacteria only at the genus level, while differentiation at the species level was achieved by means of chemometric analysis. To perform this study, the normalized peak areas of the fatty acids were taken into account, and chemometric methods were applied to the experimental datasets. The obtained results demonstrate the effectiveness of analytical pyrolysis and chemometric analysis for the rapid characterization of bacterial species. Application to samples of bacterial (Pseudomonas mendocina), fungal (Pleurotus ostreatus) and mixed biofilms was also performed. A comparison of the chromatographic profiles established the possibility to:
• differentiate the bacterial and fungal biofilms according to their FAME profiles;
• characterize the fungal biofilm by means of the typical pattern of pyrolytic fragments derived from the saccharides present in the cell wall;
• identify the markers of bacterial and fungal biofilms in the same mixed-biofilm sample.
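The PCA-based exploration step can be sketched in a few lines of Python (scikit-learn assumed; the peak-area matrix below is a random placeholder for real pyrograms):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    areas = rng.random((30, 12))   # rows: samples; cols: normalized FAME peak areas

    scores = PCA(n_components=2).fit_transform(
        StandardScaler().fit_transform(areas))
    print(scores[:3])              # each sample becomes a point in the PC1/PC2 plane

Clusters in the score plot correspond to candidate species groupings, which a class-modelling method such as SIMCA can then confirm.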
Abstract:
In the last few years, the vision of our connected and intelligent information society has evolved to embrace novel technological and research trends. The diffusion of ubiquitous mobile connectivity and advanced handheld portable devices has amplified the importance of the Internet as the communication backbone for accessing services and data. The diffusion of mobile and pervasive computing devices, featuring advanced sensing technologies and processing capabilities, has triggered the adoption of innovative interaction paradigms: touch-responsive surfaces, tangible interfaces and gesture or voice recognition are finally entering our homes and workplaces. We are experiencing the proliferation of smart objects and sensor networks, embedded in our daily life and interconnected through the Internet. This ubiquitous network of always-available interconnected devices is enabling new applications and services, ranging from enhancements to home and office environments, to remote healthcare assistance and the birth of the smart environment. This work will present some evolutions in the hardware and software development of embedded systems and sensor networks. Different hardware solutions will be introduced, ranging from smart objects for interaction to advanced inertial sensor nodes for motion tracking, focusing on system-level design. They will be accompanied by the study of innovative data processing algorithms, developed and optimized to run on board the embedded devices. Gesture recognition, orientation estimation and data reconstruction techniques for sensor networks will be introduced and implemented, with the goal of optimizing the trade-off between performance and energy efficiency. Experimental results will provide an evaluation of the accuracy of the presented methods and validate the efficiency of the proposed embedded systems.
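One of the on-board algorithms mentioned above, orientation estimation, is often realized as a complementary filter cheap enough for a sensor-node MCU; the single-axis Python sketch below is illustrative, not the thesis implementation:

    import numpy as np

    def complementary_filter(gyro_rate, accel_angle, dt=0.01, alpha=0.98):
        # Fuse the integrated gyro rate (smooth but drifting) with the
        # accelerometer tilt angle (noisy but drift-free).
        angle = 0.0                      # initial orientation estimate
        out = []
        for w, a in zip(gyro_rate, accel_angle):
            angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
            out.append(angle)
        return np.array(out)

    n = 500
    est = complementary_filter(np.zeros(n), np.full(n, 10.0))  # converges to 10 deg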
Abstract:
We have realized a data acquisition chain for the use and characterization of APSEL4D, a 32 x 128 Monolithic Active Pixel Sensor developed as a prototype for frontier experiments in high-energy particle physics. In particular, a transition board was realized for the conversion between the chip and FPGA voltage levels and for enhancing signal quality. A Xilinx Spartan-3 FPGA was used for real-time data processing, for chip control and for communication with a personal computer through a USB 2.0 port; for this purpose, firmware was written in VHDL. Finally, a graphical user interface for online system monitoring, hit display and chip control, based on windows and widgets, was realized in C++ using the dedicated Qt and Qwt libraries. APSEL4D and the full acquisition chain were characterized for the first time with the electron beam of a transmission electron microscope and with 55Fe and 90Sr radioactive sources. In addition, a beam test was performed at the T9 station of the CERN PS, where hadrons with a momentum of 12 GeV/c are available. The very high time resolution of APSEL4D (up to 2.5 Mfps, though used at 6 kfps) was fundamental in realizing a single-electron Young experiment using nanometric double slits obtained by an FIB technique. On high-statistics samples, it was possible to observe the interference and diffraction of single, isolated electrons traveling inside a transmission electron microscope. For the first time, information on the distribution of the arrival times of the single electrons has been extracted.
Ultrasensitive chemiluminescence bioassays based on microfluidics in miniaturized analytical devices
Abstract:
The activity carried out during my PhD was principally addressed to the development of portable microfluidic analytical devices based on biospecific molecular recognition reactions and chemiluminescence (CL) detection. In particular, the development of biosensors required the study of different materials and procedures for their construction, with particular attention to the development of suitable immobilization procedures and fluidic systems and to the selection of suitable detectors. Different methods were exploited, such as gene-probe hybridization assays and immunoassays, based on different platforms (functionalized glass slides or nitrocellulose membranes), trying to improve the simplicity of the assay procedure. Different CL detectors were also employed and compared with each other in the search for the best compromise between portability and sensitivity. The work was therefore aimed at the miniaturization and simplification of analytical devices, and the study involved all aspects of the system, from the analytical methodology to the type of detector, in order to combine high sensitivity with ease of use and rapidity. The latest development, involving the use of a smartphone as the chemiluminescence detector, paves the way for a new generation of analytical devices in the clinical diagnostic field, thanks to the ideal combination of the sensitivity and simplicity of CL with the steadily increasing performance of new-generation smartphone cameras. Moreover, the connectivity and data processing offered by smartphones can be exploited to perform analyses directly at home with simple procedures. The system could eventually be used to monitor patient health and directly notify the physician of the analysis results, allowing a decrease in costs and an increase in healthcare availability and accessibility.
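The smartphone readout ultimately reduces to integrating pixel intensity over the reaction spot; a hedged numpy sketch of that quantification step (frame and regions of interest are hypothetical):

    import numpy as np

    def cl_signal(image, spot, background):
        # Background-corrected mean intensity of the chemiluminescent spot;
        # proportional to analyte concentration once a calibration curve
        # has been acquired.
        return image[spot].mean() - image[background].mean()

    frame = np.random.default_rng(0).random((480, 640))   # stand-in camera frame
    spot = (slice(200, 240), slice(300, 340))
    dark = (slice(0, 40), slice(0, 40))
    print(cl_signal(frame, spot, dark))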
Abstract:
Data deduplication describes a class of approaches that reduce the storage capacity needed to store data or the amount of data that has to be transferred over a network. These approaches detect coarse-grained redundancies within a data set, e.g. a file system, and remove them.

One of the most important applications of data deduplication are backup storage systems, where these approaches are able to reduce the storage requirements to a small fraction of the logical backup data size. This thesis introduces multiple new extensions of so-called fingerprinting-based data deduplication. It starts with the presentation of a novel system design, which allows using a cluster of servers to perform exact data deduplication with small chunks in a scalable way.

Afterwards, a combination of compression approaches for an important, but often overlooked, data structure in data deduplication systems, so-called block and file recipes, is introduced. Using these compression approaches, which exploit unique properties of data deduplication systems, the size of these recipes can be reduced by more than 92% in all investigated data sets. As file recipes can occupy a significant fraction of the overall storage capacity of data deduplication systems, the compression enables significant savings.

A technique to increase the write throughput of data deduplication systems, based on the aforementioned block and file recipes, is introduced next. The novel Block Locality Caching (BLC) uses properties of block and file recipes to overcome the chunk lookup disk bottleneck of data deduplication systems, which limits either their scalability or their throughput. The presented BLC overcomes the disk bottleneck more efficiently than existing approaches. Furthermore, it is shown to be less prone to aging effects.

Finally, it is investigated whether large HPC storage systems exhibit redundancies that can be found by fingerprinting-based data deduplication. Over 3 PB of HPC storage data from different data sets have been analyzed. In most data sets, between 20 and 30% of the data can be classified as redundant. According to these results, future work should investigate how data deduplication can be integrated into future HPC storage systems.

This thesis presents important novel work in different areas of data deduplication research.
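The fingerprinting idea underlying all of these extensions can be sketched in a few lines of Python (fixed-size chunks here for brevity; production systems typically use content-defined chunking):

    import hashlib

    def dedup_store(data, chunk_size=4096, index=None):
        # Split the stream into chunks, fingerprint each with SHA-1, and
        # store a chunk payload only when its fingerprint is new; the
        # returned recipe (fingerprint list) suffices to rebuild the file.
        index = {} if index is None else index
        recipe = []
        for off in range(0, len(data), chunk_size):
            chunk = data[off:off + chunk_size]
            fp = hashlib.sha1(chunk).hexdigest()
            index.setdefault(fp, chunk)   # one payload per unique fingerprint
            recipe.append(fp)
        return recipe, index

    recipe, index = dedup_store(b"abc" * 100000)
    print(len(recipe), "chunk references,", len(index), "unique chunks stored")

The block and file recipes discussed above are exactly these fingerprint lists, which is why compressing them and exploiting their locality (as BLC does) pays off.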
Abstract:
Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data have increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and the application of practical software tools and efficient algorithms from the field of computer science, aiming at the goal of enabling atmospheric scientists to analyse and to gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation, implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining at runtime data from different sources, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of image sequences for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and of cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and handle complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
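The core detection step of such a segmentation can be sketched with connected-component labelling on a thresholded 3D field (scipy assumed; the random array stands in for real model data, and the thesis algorithm adds tracking and event detection on top):

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    field = rng.random((40, 60, 80)) * 60.0     # e.g. wind speed, m/s
    labels, n = ndimage.label(field > 50.0)     # 3D connected components
    sizes = np.bincount(labels.ravel())[1:]     # cells per feature (skip background)
    print(n, "features; largest spans", int(sizes.max()), "grid cells")

Tracking then amounts to matching labelled features between consecutive time steps (e.g. by overlap), with genesis, lysis, merging and splitting read off from the match topology.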
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One application is the verification of safety clearances between individual components, the so-called clearance analysis. Engineers determine, for specific components, whether they maintain a given safety clearance to the surrounding components, both in their rest position and during a motion. If components fall below the safety clearance, their shape or position must be changed. For this, it is important to know exactly which regions of the components violate the safety clearance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below a safety clearance. Each object is given as a set of primitives (e.g. triangles). For every point in time at which a transformation is applied to one of the objects, we compute the set of all primitives falling below the safety clearance, and call this the set of tolerance-violating primitives. We present a holistic solution, which can be divided into the following three major topics.

In the first part of this work we investigate algorithms that check, for two triangles, whether they are tolerance-violating. We present different approaches for triangle-triangle tolerance tests and show that specialized tolerance tests are considerably more performant than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach always proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient runtimes, it is above all important to account for the required safety clearance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. Furthermore, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the previously used uniform grids, which we call Shrubs. Previous approaches to the memory optimization of uniform grids mainly rely on hashing methods; these, however, do not reduce the memory consumption of the cell contents. In our application, neighboring cells often have similar contents. Our approach is able to losslessly compress the memory footprint of the cell contents of a uniform grid, exploiting these redundant cell contents, to one fifth of its original size, and to decompress it at runtime.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we present applications to various path-planning problems.
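To illustrate the role of the uniform grids, here is a simplified Python broad-phase sketch: with a cell size equal to the safety clearance, a primitive can only violate the tolerance against primitives hashed into its 3x3x3 cell neighbourhood (centroids stand in for full triangles here, so this is a candidate filter, not the exact dual-space test of the thesis):

    import numpy as np
    from collections import defaultdict
    from itertools import product

    def grid_candidates(points_a, points_b, clearance):
        cell = lambda p: tuple(int(c // clearance) for c in p)
        grid = defaultdict(list)
        for i, p in enumerate(points_a):
            grid[cell(p)].append(i)          # hash object A into the grid
        pairs = []
        for j, q in enumerate(points_b):
            cx, cy, cz = cell(q)
            for dx, dy, dz in product((-1, 0, 1), repeat=3):
                for i in grid.get((cx + dx, cy + dy, cz + dz), []):
                    pairs.append((i, j))     # candidate for the exact tolerance test
        return pairs

    rng = np.random.default_rng(0)
    a, b = rng.random((1000, 3)), rng.random((1000, 3))
    print(len(grid_candidates(a, b, 0.05)), "candidate pairs instead of 1,000,000")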
Abstract:
In recent years, data, their management and the tools for their analysis have undergone a transformation. There has been a considerable increase in the data collected from users, estimated at between 40 and 60 percent per year, driven by web applications, sensors, etc. This has given rise to the term Big Data, which refers to datasets so large that they cannot be handled by traditional systems, such as relational DBMSs running on a single machine. Indeed, when the size of a dataset exceeds a few terabytes, one is forced to use a distributed system, in which the data are partitioned across several machines. To manage Big Data, technologies have been created that can exploit the computational power and storage capacity of a cluster, with a performance gain proportional to the number of machines in it. The most widely used of these systems is Hadoop, which provides a framework for distributed data storage and analysis. Thanks to data redundancy and sophisticated algorithms, Hadoop keeps working even when one or more machines in the cluster fail, transparently to the user. Several applications can run on Hadoop, including MapReduce, Hive and Apache Spark. This thesis project focuses mainly on the latter, which was created for data processing. A Spark module, called Spark SQL, is compared with Hive in terms of speed and flexibility in running queries on databases stored on Hadoop's distributed filesystem.
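For reference, the Spark SQL side of such a comparison looks like this in PySpark (table name, path and query are hypothetical, not those of the thesis benchmark):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sparksql-vs-hive").getOrCreate()

    # Load a table from Hadoop's distributed filesystem and register it
    # so it can be queried with SQL.
    df = spark.read.parquet("hdfs:///data/events.parquet")
    df.createOrReplaceTempView("events")

    # The same HiveQL-style query can be timed here and in Hive.
    spark.sql("""
        SELECT user_id, COUNT(*) AS n
        FROM events
        GROUP BY user_id
        ORDER BY n DESC
        LIMIT 10
    """).show()

Because Spark keeps intermediate results in memory rather than materializing them to disk between stages, it typically answers such queries faster than classic MapReduce-backed Hive.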
Abstract:
In recent years, radar sensor networks for localization and tracking in indoor environments have generated more and more interest, especially for anti-intrusion security systems. These networks often use Ultra Wide Band (UWB) technology, which consists of transmitting very short (a few nanoseconds) impulse signals. This approach guarantees high resolution and accuracy, as well as other advantages such as low price, low power consumption and robustness to narrow-band interference (jamming). In this thesis the overall data processing chain (implemented in the MATLAB environment) is discussed, starting from the experimental measurements acquired by the sensor devices and ending with the 2D visualization of target movements over time, focusing mainly on the detection and localization algorithms. Moreover, two different scenarios and both single and multiple target tracking are analyzed.
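Two of the central processing steps can be sketched as follows: background subtraction plus thresholding to detect a target in a range profile, and linearized multilateration to localize it in 2D from ranges to known sensor positions (all values synthetic; the thesis pipeline is MATLAB, this sketch is Python):

    import numpy as np

    def detect(profile, background, threshold):
        # Subtract the empty-room response; the strongest residual bin
        # above threshold is taken as the target's range bin.
        d = np.abs(profile - background)
        return int(d.argmax()) if d.max() > threshold else None

    def localize(sensors, ranges):
        # Subtracting the first range equation from the others linearizes
        # |p - s_i|^2 = r_i^2 into A p = b, solved by least squares.
        s0, r0 = sensors[0], ranges[0]
        A = 2.0 * (sensors[1:] - s0)
        b = (r0**2 - ranges[1:]**2
             + np.sum(sensors[1:]**2, axis=1) - np.sum(s0**2))
        return np.linalg.lstsq(A, b, rcond=None)[0]

    sensors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
    target = np.array([2.0, 3.0])
    ranges = np.linalg.norm(sensors - target, axis=1)
    print(localize(sensors, ranges))    # ~ [2. 3.]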
Abstract:
Successful software systems cope with complexity by organizing classes into packages. However, a particular organization may be neither straightforward nor obvious for a given developer. As a consequence, classes can be misplaced, leading to duplicated code and to ripple effects where minor changes affect multiple packages. We claim that contextual information is the key to re-architecting a system. Exploiting contextual information, we propose a technique to detect misplaced classes by analyzing how client packages access the classes of a given provider package. We define locality as a measure of the degree to which classes reused by common clients appear in the same package. We then use locality to guide a simulated annealing algorithm to obtain optimal placements of classes in packages. The result is the identification of classes that are candidates for relocation. We apply the technique to three applications and validate the usefulness of our approach via developer interviews.
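A minimal sketch of the two ingredients, a locality score and the annealing loop over class-to-package placements (names and data structures below are illustrative assumptions, not the paper's implementation):

    import math
    import random

    def locality(placement, usage):
        # usage: {client: set of provider classes it accesses};
        # placement: {class: package}. The score grows when classes
        # reused by a common client live in the same package.
        score = 0.0
        for classes in usage.values():
            pkgs = [placement[c] for c in classes]
            score += max(pkgs.count(p) for p in set(pkgs)) / len(pkgs)
        return score

    def anneal(placement, usage, packages, steps=5000, t0=1.0):
        current, cur = dict(placement), locality(placement, usage)
        for step in range(steps):
            t = t0 * (1.0 - step / steps) + 1e-9       # cooling schedule
            cand = dict(current)
            cand[random.choice(list(cand))] = random.choice(packages)
            s = locality(cand, usage)
            if s > cur or random.random() < math.exp((s - cur) / t):
                current, cur = cand, s                 # accept the move
        return current   # classes whose package changed are relocation candidates

    usage = {"clientA": {"Parser", "Lexer"}, "clientB": {"Lexer", "Token"}}
    placement = {"Parser": "p1", "Lexer": "p2", "Token": "p1"}
    print(anneal(placement, usage, ["p1", "p2"]))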