995 results for Science Visualisation


Relevance:

30.00%

Publisher:

Abstract:

Reconstructions based directly upon forensic evidence alone are called primary information. Historically, this has consisted of documentation of findings by verbal protocols, photographs and other visual means. Currently, modern imaging techniques such as 3D surface scanning and radiological methods (Computed Tomography, Magnetic Resonance Imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expert reports are given in written form, often enhanced by sketches. However, narrative interpretations, especially of complex courses of action, can be difficult to present and easy to misunderstand. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya'), and present methods of clearly distinguishing between factual documentation and the examiner's interpretation, based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages: the images enable an immediate overview, improve clarity, and compel the examiner to consider every detail in order to create a complete image.

Relevance:

30.00%

Publisher:

Abstract:

Software developers often ask questions about software systems and software ecosystems that entail exploration and navigation, such as "Who uses this component?" and "Where is this feature implemented?". Software visualisation can be a great aid to understanding and exploring the answers to such questions, but visualisations require expertise to implement effectively, and they do not always scale well to large systems. We propose to automatically generate software visualisations based on software models derived from open source software corpora and from an analysis of the properties of typical developer queries and commonly used visualisations. The key challenges we see are (1) understanding how to match queries to suitable visualisations, and (2) scaling visualisations effectively to very large software systems and corpora. In this paper we motivate the idea of automatic software visualisation, enumerate the challenges and our proposals for addressing them, and describe some initial results from our attempts to develop scalable visualisations of open source software corpora.
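
As a minimal illustration of the kind of developer query mentioned above, the sketch below answers "Who uses this component?" over a toy dependency model; the `uses` map and component names are invented for illustration and are not from the paper:

    # A toy software model mapping each component to the components it uses.
    uses = {
        'ui':    {'core', 'net'},
        'net':   {'core'},
        'tests': {'ui', 'core'},
    }

    def who_uses(component):
        # Answer 'Who uses this component?' by inverting the uses relation.
        return {c for c, deps in uses.items() if component in deps}

    print(who_uses('core'))  # -> {'ui', 'net', 'tests'} (set order may vary)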

Relevance:

30.00%

Publisher:

Abstract:

When analysing software metrics, users find that visualisation tools lack support for (1) detecting patterns within metrics and (2) analysing software corpora. In this paper we present Explora, a visualisation tool designed for the simultaneous analysis of multiple metrics of systems in software corpora. Explora incorporates a novel lightweight visualisation technique called PolyGrid that promotes the detection of graphical patterns. We present an example in which we analyse the relation of subtype polymorphism to inheritance and invocation in corpora of Smalltalk and Java systems, and find that (1) subtype polymorphism is more likely to be found in large hierarchies; (2) as class hierarchies grow horizontally, they also grow vertically; and (3) in polymorphic hierarchies the length of class names is orthogonal to the cardinality of the call sites.

Relevance:

30.00%

Publisher:

Abstract:

Solving many scientific problems requires effective regression and/or classification models for large high-dimensional datasets. Experts from these problem domains (e.g. biologists, chemists, financial analysts) have insights which can be helpful in developing powerful models, but they need a modelling framework that helps them to use those insights. Data visualisation is an effective technique for presenting data and gathering feedback from the experts. A single global regression model can rarely capture the full behavioural variability of a huge multi-dimensional dataset. Instead, local regression models, each focused on a separate area of the input space, often work better, since the behaviour of different areas may vary. Classical local models such as Mixture of Experts segment the input space automatically, which is not always effective and does not let domain experts guide a meaningful segmentation of the input space. In this paper we address this issue by allowing domain experts to interactively segment the input space using data visualisation. The resulting segmentation is then used to develop effective local regression models.
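
A minimal sketch of the local-modelling approach described above, assuming the domain expert has already assigned each training point to a segment; the names `fit_local_models` and `predict_local` and the choice of linear models are illustrative assumptions, not the paper's method:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def fit_local_models(X, y, segment_ids):
        # One regression model per expert-defined segment of the input space.
        return {seg: LinearRegression().fit(X[segment_ids == seg],
                                            y[segment_ids == seg])
                for seg in np.unique(segment_ids)}

    def predict_local(models, X, segment_ids):
        # Route each sample to the model of its own segment.
        y_pred = np.empty(len(X))
        for seg, model in models.items():
            mask = segment_ids == seg
            y_pred[mask] = model.predict(X[mask])
        return y_pred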

Relevance:

30.00%

Publisher:

Abstract:

Visualising data for exploratory analysis is a major challenge in many applications. Visualisation allows scientists to gain insight into the structure and distribution of the data, for example by finding common patterns and relationships between samples as well as variables. Typically, visualisation methods like principal component analysis and multi-dimensional scaling are employed. These methods are favoured because of their simplicity, but they cannot cope with missing data, and it is difficult to incorporate prior knowledge about properties of the variable space into the analysis; this is particularly important for the high-dimensional, sparse datasets typical in geochemistry. In this paper we show how to utilise a block-structured correlation matrix using a modification of a well-known non-linear probabilistic visualisation model, the Generative Topographic Mapping (GTM), which can cope with missing data. The block structure supports direct modelling of strongly correlated variables. We show that by including prior structural information it is possible to improve both the data visualisation and the model fit. These benefits are demonstrated on artificial data as well as on a real geochemical dataset used for oil exploration, where the proposed modifications improved the missing-data imputation results by 3 to 13%.
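
A minimal sketch of the block structure described above: variables that the domain expert groups together share a common correlation, while variables in different groups are treated as uncorrelated. The grouping and the within-block correlation value are illustrative assumptions, not the paper's actual parameterisation:

    import numpy as np

    def block_correlation(groups, rho=0.8):
        # Block-structured correlation matrix: unit diagonal, correlation
        # rho within a group, zero across groups.
        d = len(groups)
        C = np.eye(d)
        for i in range(d):
            for j in range(i + 1, d):
                if groups[i] == groups[j]:
                    C[i, j] = C[j, i] = rho
        return C

    # Example: six variables forming two strongly correlated blocks.
    print(block_correlation([0, 0, 0, 1, 1, 1]))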

Relevance:

30.00%

Publisher:

Abstract:

Population measures for genetic programs are defined and analysed in an attempt to better understand the behaviour of genetic programming. Some measures are simple but do not provide sufficient insight. The more meaningful ones are complex and take extra computation time. Here we present a unified view of the computation of population measures through an information hypertree (iTree), which allows population measures to be calculated uniformly and efficiently via a basic tree traversal. © Springer-Verlag 2004.
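
A minimal sketch of the single-traversal idea: several measures computed together in one post-order walk of a program tree, rather than one walk per measure. The node representation and the two measures shown (size and depth) are illustrative stand-ins for the paper's iTree and population measures:

    class Node:
        def __init__(self, label, children=()):
            self.label = label
            self.children = list(children)

    def measures(node):
        # One post-order traversal returns (node count, depth) together.
        if not node.children:
            return 1, 1
        counts, depths = zip(*(measures(c) for c in node.children))
        return 1 + sum(counts), 1 + max(depths)

    tree = Node('+', [Node('x'), Node('*', [Node('x'), Node('y')])])
    print(measures(tree))  # -> (5, 3)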

Relevance:

30.00%

Publisher:

Abstract:

Most current 3D landscape visualisation systems either use bespoke hardware solutions or offer a limited amount of interaction and detail when used in real-time mode. We are developing a modular, data-driven 3D visualisation system that can be readily customised to specific requirements. By utilising the latest software engineering methods and bringing a dynamic, data-driven approach to geo-spatial data visualisation, we will deliver an unparalleled level of customisation in near-photorealistic, real-time 3D landscape visualisation. In this paper we present the system framework and describe how it employs data-driven techniques. In particular, we discuss how data-driven approaches are applied to the spatiotemporal management aspect of the application framework, and describe the advantages these convey. © Springer-Verlag Berlin Heidelberg 2006.

Relevance:

30.00%

Publisher:

Abstract:

Background: Digital forensics is a rapidly expanding field, due to the continuing advances in computer technology and increases in the data storage capabilities of devices. However, the tools supporting digital forensics investigations have not kept pace with this evolution, often leaving the investigator to analyse large volumes of textual data and rely heavily on their own intuition and experience.

Aim: This research proposes that, given the ability of information visualisation to provide an end user with an intuitive way to rapidly analyse large volumes of complex data, such approaches could be applied to digital forensics datasets. Such methods are investigated, supported by a review of the literature on the use of these techniques in other fields. The hypothesis of this research is that by utilising exploratory information visualisation techniques in the form of a tool to support digital forensic investigations, gains in investigative effectiveness can be realised.

Method: To test the hypothesis, this research examines three case studies which look at different forms of information visualisation and their application to a digital forensic dataset. Two of these case studies take the form of prototype tools developed by the researcher; the third utilises a tool created by a third-party research group. A pilot study by the researcher was conducted on these cases, with the strengths and weaknesses of each feeding into the next case study. The culmination of these case studies is a prototype tool, named Insight, which presents a timeline visualisation of user behaviour on a device. This tool was subjected to an experiment involving a class of university digital forensics students, who were given a number of questions about a synthetic digital forensic dataset. Approximately half were given the Insight prototype to use; the others were given a common open-source tool. The assessed metrics included how long the participants took to complete all tasks, how accurate their answers were, and how easy the participants found the tasks to complete. They were also asked for feedback at multiple points throughout the task.

Results: The results showed a statistically significant increase in accuracy for one of the six tasks for the participants using the Insight prototype. Participants also found completing two of the six tasks significantly easier when using the prototype. There was no statistically significant difference between the completion times of the two participant groups, and no statistically significant difference in the accuracy of participants' answers for five of the six tasks.

Conclusions: The results of this research suggest that there is potential for gains in investigative effectiveness when information visualisation techniques are applied to a digital forensic dataset. Specifically, in some scenarios, the investigator can draw conclusions which are more accurate than those drawn using primarily textual tools. There is also evidence to suggest that investigators reached these conclusions significantly more easily when using a tool with a visual format. None of the scenarios put the investigators at a significant disadvantage in terms of accuracy or usability when using the prototype visual tool rather than the textual tool. It is noted that this research did not show that the use of information visualisation techniques leads to any statistically significant difference in the time taken to complete a digital forensics investigation.
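
A minimal sketch of a timeline visualisation in the spirit of the Insight prototype described above; the events and their values are invented for illustration, and this is not the tool's actual code:

    import matplotlib.pyplot as plt
    from datetime import datetime

    # Hypothetical forensic events: (timestamp, artefact description).
    events = [
        (datetime(2016, 3, 1, 9, 15), 'USB device connected'),
        (datetime(2016, 3, 1, 9, 42), 'File deleted'),
        (datetime(2016, 3, 1, 10, 5), 'Browser search'),
    ]

    fig, ax = plt.subplots(figsize=(8, 2))
    for t, label in events:
        ax.plot(t, 0, 'o')                       # one marker per event
        ax.annotate(label, (t, 0), xytext=(0, 10),
                    textcoords='offset points', rotation=45)
    ax.get_yaxis().set_visible(False)            # only the time axis matters
    ax.set_title('User activity timeline')
    plt.tight_layout()
    plt.show()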

Relevance:

30.00%

Publisher:

Abstract:

Abstract not available

Relevance:

30.00%

Publisher:

Abstract:

C3S2E '16 Proceedings of the Ninth International C* Conference on Computer Science & Software Engineering

Relevance:

30.00%

Publisher:

Abstract:

Visualisation provides good support for software analysis. It copes with the intangible nature of software by providing concrete representations of it. By reducing the complexity of software, visualisations are especially useful when dealing with large amounts of code. One domain that usually deals with large amounts of source code data is empirical analysis. Although there are many tools for analysis and visualisation, they do not cope well with software corpora. In this paper we present Explora, an infrastructure specifically targeted at visualising corpora. We report early results from a sample analysis of Smalltalk and Java corpora.

Relevance:

20.00%

Publisher:

Abstract:

To report on the use of chronic myeloid leukemia as a theme for basic-clinical integration for first year medical students, to motivate them and enable in-depth understanding of the basic sciences needed by the future physician. During the past thirteen years we have reviewed and updated the curriculum of the medical school of the Universidade Estadual de Campinas. The main objective of the new curriculum is to teach the students how to learn to learn. Since then, a case of chronic myeloid leukemia has been introduced to first year medical students and discussed in horizontal integration with all themes taught during a molecular and cell biology course. Cell structure and components, proteins, chromosomes, gene organization, proliferation, the cell cycle, apoptosis, signaling and so on are all themes approached during this course. At the end of every topic, the students prepare in advance the corresponding aspect of clinical cases chosen randomly during the class, which they then present. During the final class, a paper regarding mutations in the abl gene that cause resistance to tyrosine kinase inhibitors is discussed. After each class, three tests are solved in an interactive evaluation. The course has been successful since its beginning 13 years ago. Those who participated showed great motivation, and absences amounted to less than 20% of classes. Every year, at least three (and as many as nine) students were interested in starting research training in the field of hematology. At the end of each class, an interactive evaluation was performed, and more than 70% of the answers were correct in each evaluation. Moreover, for the final evaluation, the students summarized, in a written report, the molecular and therapeutic basis of chronic myeloid leukemia, with scores ranging from 0 to 10. Considering all 13 years, a median of 78% of the class scored above 5 (minimum 74%, maximum 85%), and a median of 67% scored above 7. Chronic myeloid leukemia is an excellent example of a disease that can be used for basic-clinical integration, as this disorder involves well-known protein, cytogenetic and cell-function abnormalities, has well-defined diagnostic strategies, and has a target-oriented therapy.

Relevance:

20.00%

Publisher:

Abstract:

The present review addresses certain important aspects regarding nanoparticles and the environment, with an emphasis on plant science. The review focuses on the production and characterization of nanoparticles (NPs), giving an idea of the range and consolidation of these topics in the literature, including modifications to the routes of synthesis and the analytical techniques applied to NP characterization. Additionally, aspects related to the interaction between NPs and plants, their toxicities, and the phytoremediation process, among others, are discussed. Future trends are also presented, pointing to possibilities for new research involving nanoparticles and plants.