882 results for Visualisation Tools


Relevance: 70.00%

Abstract:

Running hydrodynamic models interactively allows both visual exploration and changes to the model state during simulation. One of the main characteristics of an interactive model is that it should provide immediate feedback to the user, for example by responding to changes in model state or view settings. For this reason, such features are usually only available for models with a relatively small number of computational cells, used mainly for demonstration and educational purposes. It would be useful if interactive modelling also worked for the models typically used in consultancy projects involving large-scale simulations. This raises a number of technical challenges related to the combination of the model itself and the visualisation tools (scalability, implementation of an appropriate API for control of and access to the internal state). While model parallelisation is increasingly addressed by the environmental modelling community, little effort has been spent on developing a high-performance interactive environment. What can we learn from other high-end visualisation domains such as 3D animation, gaming and virtual globes (Autodesk 3ds Max, Second Life, Google Earth), which also focus on efficient interaction with 3D environments? In these domains, high efficiency is usually achieved with computer graphics algorithms such as surface simplification depending on the current view and distance to objects, and efficient caching of aggregated representations of object meshes. We investigate how these algorithms can be re-used in the context of interactive hydrodynamic modelling without significant changes to the model code, while allowing model operation on both multi-core CPU personal computers and high-performance computing clusters.
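As an illustration of the view-dependent techniques mentioned above, the sketch below selects a coarsened level of detail for a model output grid based on viewing distance and caches the aggregated representation. The grid, distance thresholds and coarsening factors are illustrative assumptions, not part of any particular model code.

import numpy as np
from functools import lru_cache

# Stand-in for a large model output field (e.g. water levels on a structured grid).
FULL_RES = np.random.rand(1024, 1024)

def lod_for_distance(distance_m):
    # Pick a coarsening factor from the camera distance (assumed thresholds).
    if distance_m < 1_000:
        return 1       # full resolution close to the viewer
    if distance_m < 10_000:
        return 4
    return 16          # heavily aggregated far from the viewer

@lru_cache(maxsize=8)
def coarsened(factor):
    # Block-average the field; cached so repeated views reuse the aggregate.
    if factor == 1:
        return FULL_RES
    h, w = FULL_RES.shape
    trimmed = FULL_RES[:h - h % factor, :w - w % factor]
    return trimmed.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

view = coarsened(lod_for_distance(5_000.0))  # 256 x 256 aggregate sent to the renderer
print(view.shape)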

Relevance: 70.00%

Abstract:

This introduction gives a general perspective of the debugging methodology and the tools developed in the ESPRIT IV project DiSCiPl (Debugging Systems for Constraint Programming). It has been prepared by the editors of this volume by substantially rewriting the DiSCiPl deliverable CP Debugging Tools [1]. This introduction is organised as follows. Section 1 outlines the DiSCiPl view of debugging and its associated debugging methodology, and motivates the kinds of tools proposed: the assertion-based tools, the declarative diagnoser and the visualisation tools. Sections 2 through 4 provide a short presentation of the tools of each kind. Finally, Section 5 presents a summary of the tools developed in the project. This introduction gives only a general view of the DiSCiPl debugging methodology and tools; for details and for specific bibliographic references the reader is referred to the subsequent chapters.

Relevance: 70.00%

Abstract:

When analysing software metrics, users find that visualisation tools lack support for (1) the detection of patterns within metrics and (2) the analysis of software corpora. In this paper we present Explora, a visualisation tool designed for the simultaneous analysis of multiple metrics of systems in software corpora. Explora incorporates a novel lightweight visualisation technique called PolyGrid that promotes the detection of graphical patterns. We present an example in which we analyse the relation of subtype polymorphism to inheritance and invocation in corpora of Smalltalk and Java systems and find that (1) subtype polymorphism is more likely to be found in large hierarchies; (2) as class hierarchies grow horizontally, they also do so vertically; and (3) in polymorphic hierarchies the length of class names is orthogonal to the cardinality of the call sites.

Relevance: 60.00%

Abstract:

EXECUTIVE SUMMARY. This PhD research, funded by the Swiss Sciences Foundation, is principally devoted to enhancing the recognition, visualisation and characterisation of geobodies through innovative 3D seismic approaches. A series of case studies from the Australian North West Shelf supports the development of reproducible integrated 3D workflows and gives new insight into local and regional stratigraphic as well as structural issues. The project was initiated in 2000 at the Geology and Palaeontology Institute of the University of Lausanne (Switzerland). Several collaborations contributed to the improvement of the technical approaches as well as the assessment of the geological models:
- Investigations into the Timor Sea structural style were carried out at the Tectonics Special Research Centre of the University of Western Australia and in collaboration with Woodside Energy in Perth.
- The seismic analysis and attribute classification approach was initiated with Schlumberger Oilfield Australia in Perth; assessments and enhancements of the integrated seismic approaches benefited from collaborations with scientists from Schlumberger Stavanger Research (Norway).
Adapting and refining "linear" exploration techniques, a conceptual "helical" 3D seismic approach has been developed. To investigate specific geological issues, this approach, integrating seismic attributes and visualisation tools, has been refined and adjusted, leading to the development of two specific workflows:
- A stratigraphic workflow focused on the recognition of geobodies and the characterisation of depositional systems. It can additionally support the modelling of subsidence and, incidentally, the constraint of hydrocarbon maturity for a given area.
- A structural workflow used to quickly and accurately define major and secondary fault systems. The integration of the 3D structural interpretation results enables analysis of the fault network kinematics, which can affect hydrocarbon trapping mechanisms.
The application of these integrated workflows brings new insight into two complex settings on the Australian North West Shelf and yields striking stratigraphic and structural outcomes. The stratigraphic workflow provides a 3D characterisation of the Late Palaeozoic glacial depositional system on the Mermaid Nose (Dampier Subbasin, Northern Carnarvon Basin), which presents similarities with the glacial facies along the Neotethys margin up to Oman (chapter 3.1). A subsidence model reveals the Phanerozoic geodynamic evolution of this area (chapter 3.2) and highlights two distinct modes of regional extension for the Palaeozoic (Neotethys opening) and the Mesozoic (abyssal plain opening). The structural workflow is used to define the structural evolution of the Laminaria High area (Bonaparte Basin). Following a regional structural characterisation of the Timor Sea (chapter 4.1), a thorough analysis of the Mesozoic fault architecture reveals a local rotation of the stress field and the development of reverse structures (flower structures) in an extensional setting, which form potential hydrocarbon traps (chapter 4.2). The definition of the complex Neogene structural architecture, combined with the fault kinematic analysis and a plate flexure model (chapter 4.3), suggests that the Miocene to Pleistocene reactivation phases recorded at the Laminaria High most probably result from the oblique normal reactivation of the underlying Mesozoic fault planes.
This episode is associated with the deformation of the subducting Australian plate. Based on these results, three papers were published in international journals and two additional publications will be submitted; the research also led to several presentations at international conferences. Although the workflows presented in this research have been developed and used primarily for the analysis of specific stratigraphic and structural geobodies on the Australian North West Shelf, similar integrated 3D seismic approaches will have applications to hydrocarbon exploration and production, for instance in improving the recognition of potential source rocks, secondary migration pathways, additional traps or reservoir breaching mechanisms. The new elements brought by this research further highlight that 3D seismic data contain a tremendous amount of hidden geological information waiting to be revealed, which will undoubtedly bring new insight into the depositional systems, structural evolution and geohistory both of areas reputed to be well explored and constrained and of areas yet to be constrained. The further development of 3D texture attributes highlighting specific features of the seismic signal, the integration of quantitative analysis of stratigraphic and structural processes, the automation of the interpretation workflow, and the formal definition of the "seismo-morphologic" characteristics of a wide range of geobodies from various environments would represent challenging continuations of this research. The 21st century will most probably represent a transition period between fossil and alternative energies. The next generation of seismic interpreters prospecting for hydrocarbons will undoubtedly face new challenges, mostly due to the shortage of obvious and easy targets. They will probably have to keep integrating techniques and geological processes in order to further capitalise on the seismic data and define new potential. Imagination and creativity will most certainly be among the most important qualities required of such geoscientists.

Relevance: 60.00%

Abstract:

A multi-scale framework for decision support is presented that uses a combination of experiments, models, communication, education and decision support tools to arrive at a realistic strategy to minimise diffuse pollution. Effective partnerships between researchers and stakeholders play a key part in the successful implementation of this strategy. The Decision Support Matrix (DSM) is introduced as a set of visualisations that can be used at all scales, both to inform decision making and as a communication tool in stakeholder workshops. A demonstration farm is presented and one of its fields is taken as a case study. Hydrological and nutrient flow path models are used for event-based simulation (TOPCAT), catchment-scale modelling (INCA) and field-scale flow visualisation (TopManage). One of the DSMs, the Phosphorus Export Risk Matrix (PERM), is discussed in detail. The PERM was developed iteratively, as a point of discussion in stakeholder workshops and as a decision support and education tool. The resulting interactive PERM contains a set of questions and proposed remediation measures that reflect both expert and local knowledge. Education and visualisation tools such as GIS, risk indicators, TopManage and the PERM are found to be invaluable in communicating improved farming practice to stakeholders. (C) 2008 Elsevier Ltd. All rights reserved.
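To make the idea of a risk-matrix style decision support tool concrete, here is a minimal, purely illustrative sketch of a PERM-like lookup; the scores, risk classes and remediation measures are invented for demonstration and are not the published PERM content.

# Hypothetical PERM-style lookup: combine a phosphorus source score and a
# transport score (each 0-3) into a risk class and a suggested measure.
RISK_CLASSES = ["low", "moderate", "high", "very high"]

MEASURES = {
    "low": "maintain current practice",
    "moderate": "review fertiliser timing and application rates",
    "high": "introduce buffer strips along dominant flow pathways",
    "very high": "revise the nutrient management plan and address soil compaction",
}

def perm_lookup(source_score, transport_score):
    # Average the two scores and clamp to the 0-3 range of risk classes.
    combined = max(0, min(3, round((source_score + transport_score) / 2)))
    risk = RISK_CLASSES[combined]
    return risk, MEASURES[risk]

print(perm_lookup(source_score=3, transport_score=2))  # ('high', 'introduce buffer strips ...')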

Relevance: 60.00%

Abstract:

Cultural heritage sites all over the world are at risk due to aggressive urban expansion, development, wars and general obsolescence. Not all objects are recorded in detail, although they may have social and historical significance. For example, more emphasis is placed on the recording of castles and palaces than on crofters’ cottages or tenement blocks, although their history can be just as rich. This paper will investigate the historic fabric of Aberdeen through the use of digital scanning, supported by a range of media including old photographs and paintings. The dissemination of social heritage through visualisations will be explored, and how this can aid the understanding of space within the city or a specific area. Focus will be given to the major statues and monuments within the context of the city centre, exploring their importance in their environment. The paper will also examine why many have been relocated away from their original sites and how, in the process, some of the social and historical importance of why a monument was first located there has perhaps been lost. It will be argued that digital media could be utilised for much more than the re-creation and re-presentation of physical entities. Digital scanning, in association with visualisation tools, is used to capture the essence of both the cultural heritage and the society that created or used the sites, and in some way to re-enact the original importance placed upon a monument in its original location, through the adoption of BIM Heritage.

Relevance: 60.00%

Abstract:

The Continuous Plankton Recorder (CPR) survey, operated by the Sir Alister Hardy Foundation for Ocean Science (SAHFOS), is the largest plankton monitoring programme in the world and has spanned > 70 yr. The dataset contains information from ~200 000 samples, with over 2.3 million records of individual taxa. Here we outline the evolution of the CPR database through changes in technology, and how this has increased data access. Recent high-impact publications and the expanded role of CPR data in marine management demonstrate the usefulness of the dataset. We argue that solely supplying data to the research community is not sufficient in the current research climate; to promote wider use, additional tools need to be developed to provide visual representation and summary statistics. We outline 2 software visualisation tools, SAHFOS WinCPR and the digital CPR Atlas, which provide access to CPR data for both researchers and non-plankton specialists. We also describe future directions of the database, data policy and the development of visualisation tools. We believe that the approach at SAHFOS to increase data accessibility and provide new visualisation tools has enhanced awareness of the data and led to the financial security of the organisation; it also provides a good model of how long-term monitoring programmes can evolve to help secure their future.

Relevance: 60.00%

Abstract:

This paper surveys the context of feature extraction by neural network approaches, and compares and contrasts their behaviour as prospective data visualisation tools in a real-world problem. We also introduce and discuss a hybrid approach which allows us to control the degree of discriminatory and topographic information in the extracted feature space.
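As a rough illustration of what a hybrid discriminatory/topographic feature extractor can look like, the sketch below fits a linear 2-D projection whose objective mixes a Sammon-style distance-preservation term with a class-separation term. The mixing weight, the linear map and the Iris data are assumptions made for illustration, whereas the approach in the paper is neural-network based.

import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)
d_high = pdist(X)                                  # pairwise distances in input space

def loss(w_flat, alpha=0.5):
    W = w_flat.reshape(X.shape[1], 2)
    Z = X @ W                                      # candidate 2-D feature space
    topo = np.mean((pdist(Z) - d_high) ** 2)       # topographic term: preserve distances
    means = np.array([Z[y == c].mean(axis=0) for c in np.unique(y)])
    discr = -np.mean(pdist(means))                 # discriminatory term: spread class means
    return (1 - alpha) * topo + alpha * discr      # alpha controls the trade-off

w0 = np.random.default_rng(0).normal(size=X.shape[1] * 2)
W = minimize(loss, w0, method="L-BFGS-B").x.reshape(X.shape[1], 2)
Z = X @ W                                          # extracted features, ready for plotting
print(Z.shape)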

Relevance: 60.00%

Abstract:

The INTAMAP FP6 project has developed an interoperable framework for real-time automatic mapping of critical environmental variables by extending spatial statistical methods and employing open, web-based data exchange protocols and visualisation tools. This paper gives an overview of the underlying problem and of the project, and discusses which problems it has solved and which open problems seem most relevant to address next. The interpolation problem that INTAMAP solves is the generic problem of spatial interpolation of environmental variables without user interaction, based on measurements of e.g. PM10, rainfall or gamma dose rate, at arbitrary locations or over a regular grid covering the area of interest. It deals with problems of varying spatial resolution of measurements, the interpolation of averages over larger areas, and with providing information on the interpolation error to the end-user. In addition, monitoring network optimisation is addressed in a non-automatic context.
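As a small, hedged illustration of this kind of automatic interpolation, the sketch below predicts a variable on a regular grid from scattered measurements and returns a prediction error alongside the estimate; a Gaussian process stands in here for INTAMAP's own interpolation methods, and the coordinates and values are synthetic.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(42)
obs_xy = rng.uniform(0, 100, size=(50, 2))                        # measurement locations
obs_val = np.sin(obs_xy[:, 0] / 20) + 0.1 * rng.normal(size=50)   # e.g. gamma dose rate

# Automatic model fit: no user interaction beyond supplying the observations.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(0.01),
                              normalize_y=True)
gp.fit(obs_xy, obs_val)

# Regular grid covering the area of interest.
gx, gy = np.meshgrid(np.linspace(0, 100, 25), np.linspace(0, 100, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])
pred, std = gp.predict(grid, return_std=True)                     # estimate + interpolation error
print(pred.shape, float(std.mean()))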

Relevance: 40.00%

Abstract:

Visualisation of program executions has been used in applications which include education and debugging. However, traditional visualisation techniques often fall short of expectations or are altogether inadequate for new programming paradigms, such as Constraint Logic Programming (CLP), whose declarative and operational semantics differ in some crucial ways from those of other paradigms. In particular, traditional ideas regarding the behaviour of data often cannot be lifted in a straightforward way to (C)LP from other families of programming languages. In this chapter we discuss techniques for visualising data evolution in CLP. We briefly review some previously proposed visualisation paradigms, and also propose a number of (to our knowledge) novel ones. The graphical representations have been chosen based on the perceived needs of a programmer trying to analyse the behaviour and characteristics of an execution. In particular, we concentrate on the representation of the run-time values of the variables, and the constraints among them. Given our interest in visualising large executions, we also pay attention to abstraction techniques, i.e., techniques which are intended to help in reducing the complexity of the visual information.
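As a toy illustration of one of these ideas, the sketch below records the run-time domains of finite-domain variables after each propagation step and renders an abstracted textual view (domain size only), which is one way to keep large executions readable. The trace is hand-written for demonstration, since a real tool would obtain it from the solver's instrumentation rather than from Python.

# Hand-made trace: (step label, {variable: current finite domain}).
trace = [
    ("post X+Y=10", {"X": set(range(1, 10)), "Y": set(range(1, 10))}),
    ("post X<Y",    {"X": set(range(1, 6)),  "Y": set(range(5, 10))}),
    ("label X=3",   {"X": {3},               "Y": {7}}),
]

def render(trace, width=20, max_size=9):
    # Abstraction: show only the size of each domain as a bar, not its contents.
    for step, domains in trace:
        print(step)
        for var, dom in sorted(domains.items()):
            bar = "#" * max(1, round(width * len(dom) / max_size))
            print(f"  {var} |{bar:<{width}}| {len(dom)} value(s)")

render(trace)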

Relevance: 40.00%

Abstract:

The evaluation of geospatial data quality and trustworthiness presents a major challenge to geospatial data users when making a dataset selection decision. The research presented here therefore focused on defining and developing a GEO label – a decision support mechanism to assist data users in efficient and effective geospatial dataset selection on the basis of quality, trustworthiness and fitness for use. This thesis presents six phases of research and development conducted to: (1) identify the informational aspects upon which users rely when assessing geospatial dataset quality and trustworthiness; (2) elicit initial user views on the role of the GEO label in supporting dataset comparison and selection; (3) evaluate prototype label visualisations; (4) develop a Web service to support GEO label generation; (5) develop a prototype GEO label-based dataset discovery and intercomparison decision support tool; and (6) evaluate the prototype tool in a controlled human-subject study. The results of the studies revealed, and subsequently confirmed, eight geospatial data informational aspects that were considered important by users when evaluating geospatial dataset quality and trustworthiness, namely: producer information, producer comments, lineage information, compliance with standards, quantitative quality information, user feedback, expert reviews, and citations information. Following an iterative user-centred design (UCD) approach, it was established that the GEO label should visually summarise the availability of, and allow interrogation of, these key informational aspects. A Web service was developed to support the generation of dynamic GEO label representations and was integrated into a number of real-world GIS applications. The service was also utilised in the development of the GEO LINC tool – a GEO label-based dataset discovery and intercomparison decision support tool. The results of the final evaluation study indicated that (a) the GEO label effectively communicates the availability of dataset quality and trustworthiness information and (b) GEO LINC successfully facilitates ‘at a glance’ dataset intercomparison and fitness-for-purpose-based dataset selection.
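A minimal sketch of the kind of record such a label summarises is given below: the eight informational aspects listed above, each with an availability flag, reduced to a one-line summary. The record structure and the example dataset are assumptions made for illustration; the actual GEO label Web service defines its own inputs and graphical output.

ASPECTS = [
    "producer information", "producer comments", "lineage information",
    "compliance with standards", "quantitative quality information",
    "user feedback", "expert reviews", "citations information",
]

def summarise(availability):
    # One-line availability summary over the eight GEO label facets.
    present = sum(bool(availability.get(a)) for a in ASPECTS)
    missing = [a for a in ASPECTS if not availability.get(a)]
    return f"{present}/{len(ASPECTS)} aspects available; missing: {', '.join(missing) or 'none'}"

example_dataset = {a: True for a in ASPECTS[:5]}   # hypothetical metadata record
print(summarise(example_dataset))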

Relevance: 30.00%

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modelling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems, for purposes of environmental data mining including pattern recognition, modelling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces.

The most important and popular machine learning algorithms and models of interest to the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation.

Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both a traditional geostatistical approach, experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to understand the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualisation properties.

An important part of the thesis deals with a highly topical subject: the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions.

The thesis consists of four chapters with the following structure: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with care taken to provide a user-friendly and easy-to-use interface.
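Since the GRNN plays a central role in the automatic mapping results above, a compact sketch may help: a GRNN is equivalent to Nadaraya-Watson kernel regression with a Gaussian kernel, so a few lines of NumPy capture the idea. The bandwidth value and the synthetic data are assumptions for illustration; in practice the bandwidth would be tuned, e.g. by cross-validation.

import numpy as np

def grnn_predict(train_xy, train_val, query_xy, sigma=5.0):
    # Kernel-weighted average of training values (Nadaraya-Watson / GRNN estimate).
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian kernel weights
    return (w @ train_val) / w.sum(axis=1)

rng = np.random.default_rng(1)
train_xy = rng.uniform(0, 100, size=(200, 2))      # e.g. monitoring locations
train_val = np.cos(train_xy[:, 0] / 15) + 0.05 * rng.normal(size=200)

gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
mapped = grnn_predict(train_xy, train_val, grid)   # automatic map on a regular grid
print(mapped.shape)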