980 results for ArcGIS Runtime SDK for Android
Abstract:
Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data have increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and application of practical software tools and efficient algorithms from the field of computer science, aiming to enable atmospheric scientists to analyse and gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. Three practical tools are presented in this thesis. Two of them are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third is an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining data from different sources at runtime, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of image sequences for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and operate complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
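To make the event classification concrete, here is a minimal Python sketch (not the thesis's algorithm): features are labelled per timestep and matched by voxel overlap, from which genesis, lysis, merging, and splitting events fall out of the overlap graph. All names are illustrative.

```python
import numpy as np
from scipy import ndimage

def track(mask_t0, mask_t1):
    """Label features in two binary 3D fields and classify events from
    the overlap graph: genesis, lysis, merging, splitting."""
    lab0, n0 = ndimage.label(mask_t0)
    lab1, n1 = ndimage.label(mask_t1)
    succ = {i: set() for i in range(1, n0 + 1)}   # feature -> successors
    pred = {j: set() for j in range(1, n1 + 1)}   # feature -> predecessors
    overlap = (lab0 > 0) & (lab1 > 0)
    for i, j in zip(lab0[overlap], lab1[overlap]):
        succ[i].add(j)
        pred[j].add(i)
    events = []
    events += [("lysis", i) for i, s in succ.items() if not s]
    events += [("genesis", j) for j, p in pred.items() if not p]
    events += [("split", i) for i, s in succ.items() if len(s) > 1]
    events += [("merge", j) for j, p in pred.items() if len(p) > 1]
    return events
```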
Abstract:
This thesis makes two important contributions to the field of embedded many-core accelerators. We implemented an OpenMP runtime optimized for managing the tasking model on systems whose tightly coupled processor clusters are interconnected through a network-on-chip. We focused on scalability and on support for fine-grained tasks, as is typical of embedded applications. The second contribution of this thesis is a proposed extension of the OpenMP runtime that tries to anticipate the manifestation of errors caused by variability phenomena through efficient scheduling of the workload.
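As a rough conceptual illustration of the tasking model such a runtime must support, the Python sketch below mimics per-cluster task queues with work stealing; the names and the scheduling policy are assumptions for illustration only, not the thesis's C runtime.

```python
import random
from collections import deque

class Cluster:
    """One processor cluster with its own local task queue."""
    def __init__(self, name):
        self.name, self.queue = name, deque()

    def push(self, task):
        self.queue.append(task)

def run(clusters):
    """Round-robin execution; an idle cluster steals from a busy one."""
    done = []
    while any(c.queue for c in clusters):
        for c in clusters:
            if c.queue:
                done.append((c.name, c.queue.popleft()()))
            else:
                victims = [v for v in clusters if v.queue]
                if victims:  # steal from the tail of another queue
                    done.append((c.name, random.choice(victims).queue.pop()()))
    return done

a, b = Cluster("c0"), Cluster("c1")
for i in range(8):                 # many fine-grained tasks on one cluster
    a.push(lambda i=i: i * i)
print(run([a, b]))                 # c1 steals part of c0's work
```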
Abstract:
The goal of this thesis is to illustrate the world of augmented reality (AR) and, in particular, the software technologies available for developing applications on Android devices. It starts by defining AR and summarizing the main historical milestones, then illustrates the various hardware available on the market and the software technologies for developing projects. Uses and research areas are also covered, and the Android operating system is then presented. After a look at its architecture and characteristics, as well as at the Java programming language, the cornerstone of development on this system, some APIs of the native SDK that prove useful for developing augmented reality applications are presented. Finally, an in-depth look at the Metaio SDK is given.
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and function of a product on a virtual prototype. One use case is checking the safety clearances of individual components, the so-called clearance analysis. Engineers determine for specific components whether they maintain a prescribed safety distance to the surrounding components, both at rest and during a motion. If components fall below the safety distance, their shape or position must be changed. For this, it is important to know exactly which regions of the components violate the safety distance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g., triangles). For every point in time at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call these the tolerance-violating primitives. We present a holistic solution that can be divided into the following three major topics.

In the first part of this work we investigate algorithms that check, for two triangles, whether they are tolerance-violating. We present various approaches for triangle-triangle tolerance tests and show that specialized tolerance tests perform considerably better than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure consisting of a flat hierarchical data structure and several uniform grids. To guarantee efficient runtimes, it is particularly important to account for the required safety distance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. Furthermore, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the previously used uniform grids, which we call Shrubs. Previous approaches to the memory optimization of uniform grids mainly rely on hashing methods, which, however, do not reduce the memory consumption of the cell contents. In our use case, neighboring cells often have similar contents. Our approach is able to losslessly compress the memory footprint of the cell contents of a uniform grid to one fifth of its previous size, exploiting the redundant cell contents, and to decompress it at runtime.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we show applications to various path-planning problems.
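As a minimal illustration of the kind of conservative pre-test such query algorithms rely on, the Python sketch below flags candidate triangle pairs whose bounding boxes, dilated by the safety distance, overlap; the thesis's dual-space tolerance test is not reproduced here, and all names are illustrative.

```python
import numpy as np

def aabb(tri):
    """Axis-aligned bounding box of a triangle (3x3 array of vertices)."""
    return tri.min(axis=0), tri.max(axis=0)

def aabbs_within_tolerance(tri_a, tri_b, d):
    """Conservative pre-test: True if the AABBs, dilated by the safety
    distance d, overlap. Only then can the pair be tolerance-violating."""
    lo_a, hi_a = aabb(tri_a)
    lo_b, hi_b = aabb(tri_b)
    return bool(np.all(lo_a - d <= hi_b) and np.all(lo_b - d <= hi_a))

def candidate_pairs(tris_a, tris_b, d):
    """Brute-force broad phase: collect triangle pairs whose dilated AABBs
    overlap; an exact triangle-triangle tolerance test would follow."""
    return [(i, j)
            for i, ta in enumerate(tris_a)
            for j, tb in enumerate(tris_b)
            if aabbs_within_tolerance(ta, tb, d)]
```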
Abstract:
Ubuntu Touch is a new operating system for desktops and mobile phones, born from the need to unify heterogeneous systems under a single platform. Touch's infrastructure, which guarantees convergence between different devices, is based on the innovative Mir display server and on the Unity graphical interface. The application model is considerably improved: applications are confined through AppArmor and exchange content with each other via the Content-Hub service. The supported development tools are web technologies (HTML5 and JavaScript) and C++ on the Qt framework (with the option of using QML). System (core) updates are either partial, via "delta" archives that introduce only the necessary changes, or full, overwriting the entire device image. Development with the Ubuntu SDK is fast and agile; emulator management is notable, although some features are still missing. Scopes are independent of application content, a true innovation in Ubuntu. To experiment with this technology, a scope is developed for searching books in the Gian Paolo Dore Library of the Faculty of Engineering in Bologna.
Abstract:
The use of the multibeam echosounder (MBES) in shallow transitional environments with complex conditions, such as the Venice lagoon, is still under study, and the biological and sedimentological data on the channels of the Venice lagoon currently available in the literature are scarce and dated. This study aims to map the habitats and anthropogenic objects of a channel of the Venice lagoon at depths between 0.3 and 20 m (San Felice channel) by analyzing the bathymetric and backscatter data acquired by ISMAR-Venezia within the RITMARE project. To this end, the seabed of the San Felice channel (Venice) was characterized from a geomorphological, sedimentological, and biological point of view, also describing any anthropogenic objects present. The echosounder used was the Kongsberg EM2040 Dual-Compact Multibeam, capable of emitting 800 beams (400 per transducer) at a maximum frequency of 400 kHz; it yielded excellent results despite the particular characteristics of lagoon environments. The acquired data were processed with the CARIS Hydrographic Information Processing System (HIPS) & SIPS software, which applies tide and sound-velocity corrections and improves the quality of the raw MBES data. The data were then converted into ESRI Grid, a format compatible with the ArcGIS 10.2.1 (2013) software used for interpretation and map production. Ground-truthing techniques, based on video footage and sediment sampling (7 l Van Veen grab), were used to validate the backscatter, proving very effective and satisfactory for describing the seabed in terms of biology and substrate, and hence the habitats of the lagoon channel. All the information gathered during this study was organized in a geodatabase built for the Venice lagoon data.
Abstract:
The work carried out for this thesis consists in the development of an Android application that lets the user take a personal photo, or load one from the gallery, and pick photos of garments to try on from a ListView, dragging them onto the user's photo. The work comprised four main phases:
- Research on the state of the art of Virtual Dressing Room technology (history, list and description of the methods used by existing platforms, real-world examples of these methodologies)
- Design, with identification of the application's goals and features
- Implementation of the application (creation of the layouts and of the Java code for the activities: entering sizes and choosing man/woman, taking/loading photos, creating the database and using it through a ListView, viewing and managing the cart)
- Writing of the thesis volume (introduction and description of the technology, design, and implementation, with a description of the Android SDK, Android Studio, and of the layouts and classes)
Abstract:
The first part of this thesis introduces the concept of the Internet of Things. It discusses the fundamental building blocks of this technology, the different architectures proposed over the years, and the challenges that still have to be addressed before the IoT is fully realized. This first part closes with two application examples, Smart City and Smart Healthcare, meant to highlight the advantages and services that the IoT can offer to the end user. The second chapter presents the features of the ThingWorx IoT platform, which provides a development environment for IoT applications with the goal of reducing their development time and therefore their cost. This platform tries to minimize the need to write code by using a "drag and drop" development system. ThingWorx also provides SDKs to ease device programming, handling above all the node-platform communication. This topic is treated extensively in the final part of that chapter, after a review of the fundamental data modeling and representation concepts on which the platform is based. The third and final chapter first presents the ThingWorx Android tutorial. Carrying out and then extending the tutorial exposed some limitations of the initial model, which led us to design and develop the Aggregated & Complex Event Manager, a component for handling complex events that partially relieves the platform of that task. The thesis concludes by highlighting, through tests, some differences between the initial situation, in which the component is not used, and the final one, in which it is.
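To illustrate the idea behind such an event manager, here is a minimal, purely hypothetical Python sketch that aggregates raw readings on the device and emits a single complex event when a windowed condition fires; it does not use the actual ThingWorx SDK API, and all names and thresholds are assumptions.

```python
import time
from collections import deque
from dataclasses import dataclass, field

@dataclass
class ComplexEventManager:
    """Buffers raw readings on the device and emits one aggregated event
    when a rule fires, instead of forwarding every sample upstream."""
    window: int = 10                      # samples per aggregation window
    threshold: float = 30.0               # hypothetical alarm threshold
    buffer: deque = field(default_factory=deque)

    def push(self, value: float):
        self.buffer.append(value)
        if len(self.buffer) >= self.window:
            samples = [self.buffer.popleft() for _ in range(self.window)]
            mean = sum(samples) / len(samples)
            if mean > self.threshold:      # "complex" condition on the window
                return {"event": "HighMeanTemperature",
                        "mean": mean, "ts": time.time()}
        return None                        # nothing to send to the platform

mgr = ComplexEventManager()
for reading in [29, 31, 33, 35, 32, 30, 34, 36, 31, 33]:
    event = mgr.push(reading)
if event:
    print("push only this to the platform:", event)
```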
Abstract:
To support development tools like debuggers, runtime systems need to provide a meta-programming interface to alter their semantics and access internal data. Reflective capabilities are typically fixed by the Virtual Machine (VM). Unanticipated reflective features must either be simulated by complex program transformations, or they require the development of a specially tailored VM. We propose a novel approach to behavioral reflection that eliminates the barrier between applications and the VM by manipulating an explicit tower of first-class interpreters. Pinocchio is a proof-of-concept implementation of our approach which enables radical changes to the interpretation of programs by explicitly instantiating subclasses of the base interpreter. We illustrate the design of Pinocchio through non-trivial examples that extend runtime semantics to support debugging, parallel debugging, and back-in-time object-flow debugging. Although performance is not yet addressed, we also discuss numerous opportunities for optimization, which we believe will lead to a practical approach to behavioral reflection.
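To make the idea concrete, here is a conceptual Python analogue rather than the paper's own implementation: runtime semantics are extended by subclassing a tiny base interpreter, in the spirit of instantiating subclasses of the base interpreter to change how programs are interpreted. The expression language and class names are illustrative assumptions.

```python
class Interpreter:
    """Base interpreter for a tiny expression language."""
    def eval(self, expr, env):
        op, *args = expr
        if op == "lit":
            return args[0]
        if op == "var":
            return env[args[0]]
        if op == "add":
            return self.eval(args[0], env) + self.eval(args[1], env)
        if op == "let":
            name, value, body = args
            return self.eval(body, {**env, name: self.eval(value, env)})
        raise ValueError(f"unknown op {op!r}")

class TracingInterpreter(Interpreter):
    """Overrides eval to log every evaluation step -- a debugger-style
    extension of the runtime semantics, obtained purely by subclassing."""
    def eval(self, expr, env):
        result = super().eval(expr, env)
        print(f"eval {expr} -> {result}")
        return result

program = ("let", "x", ("lit", 2), ("add", ("var", "x"), ("lit", 3)))
assert TracingInterpreter().eval(program, {}) == 5
```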
Abstract:
Features encapsulate the domain knowledge of a software system and thus are valuable sources of information for a reverse engineer. When analyzing the evolution of a system, we need to know how and which features were modified to recover both the change intention and its extent, namely which source artifacts are affected. Typically, the implementation of a feature crosscuts a number of source artifacts. To obtain a mapping from features to the source artifacts, we exercise the features and capture their execution traces. However, this results in large traces that are difficult to interpret. To tackle this issue we compact the traces into simple sets of source artifacts that participate in a feature's runtime behavior. We refer to these compacted traces as feature views. Within a feature view, we partition the source artifacts into disjoint sets of characterized software entities. The characterization defines the level of participation of a source entity in the features. We then analyze the features over several versions of a system and plot their evolution to reveal how and which features were affected by changes in the code. We show the usefulness of our approach by applying it to a case study where we address the problem of merging parallel development tracks of the same system.
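A toy Python sketch of the compaction step, with hypothetical trace data: execution traces are reduced to feature views (sets of participating source entities), which are then partitioned by their level of participation in the features.

```python
# Hypothetical traces: each feature maps to the ordered list of source
# entities (e.g., methods) hit while exercising it.
traces = {
    "save":  ["Editor.save", "File.write", "Log.info", "File.write"],
    "print": ["Editor.print", "Log.info", "Spooler.send"],
}

# Feature views: compact each trace into the set of participating entities.
views = {feature: set(trace) for feature, trace in traces.items()}

# Characterization by participation: entities used by exactly one feature
# vs. entities shared across several features (disjoint sets).
all_entities = set().union(*views.values())
specific = {e for e in all_entities
            if sum(e in v for v in views.values()) == 1}
shared = all_entities - specific
print("feature-specific:", sorted(specific))
print("shared:", sorted(shared))
```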
Abstract:
An important problem in computational biology is finding the longest common subsequence (LCS) of two nucleotide sequences. This paper examines the correctness and performance of a recently proposed parallel LCS algorithm that uses successor tables and pruning rules to construct a list of sets from which an LCS can be easily reconstructed. Counterexamples are given for two pruning rules that were given with the original algorithm. Because of these errors, performance measurements originally reported cannot be validated. The work presented here shows that speedup can be reliably achieved by an implementation in Unified Parallel C that runs on an Infiniband cluster. This performance is partly facilitated by exploiting the software cache of the MuPC runtime system. In addition, this implementation achieved speedup without bulk memory copy operations and the associated programming complexity of message passing.
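For reference, the classic serial dynamic program for LCS length is sketched below in Python; the paper's parallel algorithm instead builds successor tables and prunes candidate sets, which is not reproduced here.

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(len(a) * len(b)) dynamic program, kept to two rows of
    memory; the parallel successor-table approach scales this out."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            curr.append(prev[j - 1] + 1 if ca == cb
                        else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

# "ATTAC" occurs as a subsequence of "GATTACA", so the LCS has length 5.
assert lcs_length("GATTACA", "ATTAC") == 5
```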
Abstract:
Compiler optimizations help to make code run faster at runtime. When the compilation is done before the program is run, compilation time is less of an issue, but how do on-the-fly compilation and optimization impact the overall runtime? If the compiler must compete with the running application for resources, the running application will take more time to complete. This paper investigates the impact of specific compiler optimizations on the overall runtime of an application. A foldover Plackett and Burman design is used to choose compiler optimizations that appear to contribute to shorter overall runtimes. These selected optimizations are compared with the default optimization levels in the Jikes RVM. This method selects optimizations that result in a shorter overall runtime than the default O0, O1, and O2 levels. This shows that careful selection of compiler optimizations can have a significant, positive impact on overall runtime.
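A hedged sketch of the screening idea, using scipy's Hadamard construction as a stand-in for an 8-run two-level Plackett-Burman design with a foldover; the flags, runtimes, and selection rule are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from scipy.linalg import hadamard

# Hypothetical screening: 7 on/off compiler flags, runtime measured per run.
H = hadamard(8)                     # 8-run two-level design matrix
design = H[:, 1:]                   # drop the all-ones column; +1 = flag on
foldover = np.vstack([design, -design])   # foldover doubles the runs and
                                          # de-aliases the main effects

def main_effects(runtimes):
    """Effect of each flag = mean runtime with flag on minus flag off."""
    runtimes = np.asarray(runtimes, dtype=float)
    return np.array([runtimes[col == 1].mean() - runtimes[col == -1].mean()
                     for col in foldover.T])

# rng stands in for actual benchmark measurements of the 16 configurations.
rng = np.random.default_rng(0)
effects = main_effects(rng.normal(100, 5, size=16))
print("flags with the most negative (runtime-reducing) effects:",
      np.argsort(effects)[:3])
```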
Abstract:
Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, an understanding of the technology's capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument, and as such, care must be taken when establishing scan locations and resolution to allow the capture of data at an adequate resolution for defining features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. The LiDAR point clouds contain information that can provide quantitative surface condition information, resulting in more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each of which displayed a varying degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects such as the location, volume, and area of spalls. The results were displayed visually and numerically in a user-friendly web-based decision support tool that integrates prior bridge condition metrics for comparison. LiDAR data processing procedures, along with strengths and limitations of point clouds for defining features useful for assessing bridge deck condition, are discussed. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the data collected are of high quality and useful for bridge condition evaluation. When collected properly to ensure effective evaluation of bridge surface condition, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
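A simplified sketch of spall detection from a deck point cloud, assuming plain numpy rather than the thesis's ArcPy and commercial tools: fit a reference plane, rasterize the residuals, and report the area and volume of cells below a depth threshold. All parameters are illustrative.

```python
import numpy as np

def spall_metrics(points, cell=0.05, depth_thresh=0.01):
    """Grid bridge-deck LiDAR points (N x 3, metres), fit a deck plane,
    and flag cells lying more than depth_thresh below it as spalls.
    Returns spalled area (m^2) and volume (m^3)."""
    x, y, z = points.T
    # Least-squares plane z = ax + by + c for the intact deck surface.
    A = np.c_[x, y, np.ones_like(x)]
    a, b, c = np.linalg.lstsq(A, z, rcond=None)[0]
    residual = z - (a * x + b * y + c)    # negative = below the deck plane
    # Rasterize the minimum residual per grid cell.
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    grid = np.full((ix.max() + 1, iy.max() + 1), np.nan)
    for i, j, r in zip(ix, iy, residual):
        if np.isnan(grid[i, j]) or r < grid[i, j]:
            grid[i, j] = r
    spalled = grid < -depth_thresh        # NaN cells compare False
    area = spalled.sum() * cell * cell
    volume = -np.sum(np.where(spalled, grid, 0.0)) * cell * cell
    return area, volume
```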
Abstract:
The bridge inspection industry has yet to utilize a rapidly growing technology that shows promise to help improve the inspection process. This thesis investigates the capabilities that 3D photogrammetry can provide to the bridge inspector for a number of deterioration mechanisms. The technology can provide information about the surface condition of some bridge components, primarily focusing on the surface defects of a concrete bridge, which include cracking, spalling, and scaling. Testing was completed using a Canon EOS 7D camera; the photos were then processed in AgiSoft PhotoScan to align them and develop models. Further processing of the models was done using ArcMap in the ArcGIS 10 program to view digital elevation models of the concrete surface. Several experiments were completed to determine the ability of the technique to detect the different defects. The smallest crack resolved in this study was 1/8 inch wide, captured at a distance of two feet above the surface. 3D photogrammetry was able to detect a depression 1 inch wide with a 3/16 inch depth, which would be sufficient to measure any scaling or spalling that the inspector would be required to assess. The percentage of scaled or spalled area could also be calculated from the digital elevation models in ArcMap. Different camera factors, including the distance from the defects, the number of photos, and the angle, were also investigated to see how each factor affected the capabilities. 3D photogrammetry showed great promise for the detection of scaling or spalling of the concrete bridge surface.
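A minimal sketch of the percent-spalled computation, assuming a numpy DEM grid rather than the ArcMap workflow; the threshold and dimensions are illustrative assumptions.

```python
import numpy as np

def percent_defective(dem, depth_thresh):
    """Share of DEM cells lying more than depth_thresh below the median
    surface level -- a stand-in for the percent-spalled figure derived
    from the digital elevation models."""
    reference = np.nanmedian(dem)
    defective = (reference - dem) > depth_thresh
    return 100.0 * defective.sum() / np.isfinite(dem).sum()

# Hypothetical 1 m x 1 m patch at 1 cm resolution with one 3/16 in deep pit.
dem = np.zeros((100, 100))
dem[40:50, 40:50] = -0.1875 * 0.0254   # 3/16 inch in metres
print(f"{percent_defective(dem, depth_thresh=0.002):.1f}% defective")
```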
Abstract:
High-resolution digital elevation models (DEMs) of Santiaguito and Pacaya volcanoes, Guatemala, were used to estimate volume changes and eruption rates between 1954 and 2001. The DEMs were generated from contour maps and aerial photography, which were analyzed in ArcGIS 9.0®. Because both volcanoes were growing substantially over the five-decade period, they provide a good data set for exploring effective methodology for estimating volume changes. The analysis shows that the Santiaguito dome complex grew by 0.78 ± 0.07 km3 (0.52 ± 0.05 m3 s-1) over the 1954-2001 period, with nearly all the growth occurring on the El Brujo (1958-75) and Caliente (1971-2001) domes. Adding information from field data prior to 1954, the total volume extruded from Santiaguito since 1922 is estimated at 1.48 ± 0.19 km3. Santiaguito's growth rate is lower than that of most other volcanic domes, but it has been sustained over a much longer period and has undergone a change toward more exogenous and progressively slower extrusion with time. At Santiaguito some of the material added at the dome is subsequently transported downstream by block-and-ash flows, mudflows and floods, creating channel shifting and areas of aggradation and erosion. At Pacaya volcano a total volume of 0.21 ± 0.05 km3 was erupted between 1961 and 2001, for an average extrusion rate of 0.17 ± 0.04 m3 s-1. Both the Santiaguito and Pacaya eruption rate estimates reported here are minima, because they do not include materials transported downslope after eruption or data on ashfall, which may represent significant volumes of material spread over broad areas. Regular analysis of high-resolution DEMs using the methods outlined here would help quantify the effects of fluvial changes on downstream populated areas, as well as assist in tracking hazards related to dome collapse and eruption.
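The core volume-change calculation reduces to differencing co-registered DEMs and multiplying by the cell footprint; a minimal numpy sketch with hypothetical grids follows.

```python
import numpy as np

SECONDS = (2001 - 1954) * 365.25 * 24 * 3600   # length of the interval

def volume_change(dem_new, dem_old, cell_size):
    """Net volume gained between two co-registered DEMs (m^3):
    sum of elevation differences times the cell footprint."""
    dz = dem_new - dem_old
    return np.nansum(dz) * cell_size ** 2

# Hypothetical 30 m DEMs standing in for the 1954 and 2001 surfaces.
old = np.zeros((200, 200))
new = old.copy()
new[80:120, 80:120] += 50.0            # 50 m of dome growth over a patch
dv = volume_change(new, old, cell_size=30.0)
print(f"dV = {dv / 1e9:.3f} km^3, rate = {dv / SECONDS:.3f} m^3/s")
```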