957 results for Tags visualization
Abstract:
Data sets describing the state of the Earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data have increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and application of practical software tools and efficient algorithms from the field of computer science, with the goal of enabling atmospheric scientists to analyse and gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight.

Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining data from different sources at runtime, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support have led to additional applications of the software, of which two examples are presented: the use of Insight as a WMS (web map service) server, and the automatic production of image sequences for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects.

Finally, IWAL is presented: a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and operate complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
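As a brief illustration of the kind of workflow described in this abstract, the following is a minimal, hypothetical Python sketch of one common approach to segmenting and tracking 3D features: thresholding a field, labelling connected regions, and matching features across time steps by spatial overlap. It is not the thesis' actual algorithm; the synthetic field, the threshold, and the overlap rule are illustrative assumptions.

```python
# Minimal sketch of 3D feature segmentation and overlap-based tracking.
# Assumptions: thresholding + connected-component labelling with SciPy; the
# synthetic field and the mean + std threshold are purely illustrative.
import numpy as np
from scipy import ndimage

def segment(field, threshold):
    """Label connected 3D regions where the field exceeds the threshold."""
    labels, n_features = ndimage.label(field > threshold)
    return labels, n_features

def track(labels_t0, labels_t1):
    """Match features between two time steps by spatial overlap.
    No successor suggests lysis; several successors suggest splitting."""
    matches = {}
    for feature_id in range(1, int(labels_t0.max()) + 1):
        successors = labels_t1[labels_t0 == feature_id]
        matches[feature_id] = sorted(set(successors[successors > 0].tolist()))
    return matches

rng = np.random.default_rng(0)
field_t0 = ndimage.gaussian_filter(rng.random((20, 40, 40)), sigma=3)
field_t1 = np.roll(field_t0, shift=2, axis=2)  # crude stand-in for advection
labels_t0, _ = segment(field_t0, field_t0.mean() + field_t0.std())
labels_t1, _ = segment(field_t1, field_t1.mean() + field_t1.std())
print(track(labels_t0, labels_t1))
```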
Abstract:
Analyzing and modeling relationships between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects in chemical datasets is a challenging task for scientific researchers in the field of cheminformatics. (Q)SAR model validation is therefore essential to ensure future model predictivity on unseen compounds. Proper validation is also one of the requirements of regulatory authorities for approving a model's use in real-world scenarios as an alternative testing method. At the same time, however, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The introduced workflow makes it possible to apply the built and validated models to large amounts of unseen data and to compare the performance of the different validation approaches. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results.

Statistical validation is important for evaluating the performance of (Q)SAR models, but it does not support the user in better understanding the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionalities, including clustering, alignment of compounds according to their 3D structure, and feature highlighting, aid the chemist in better understanding patterns and regularities and in relating the observations to established scientific knowledge.

Even though visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionalities in CheS-Mapper 2.0 facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
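As a brief illustration of the two validation schemes compared in this abstract, here is a minimal scikit-learn sketch of k-fold cross-validation versus a single external (held-out) test set; the synthetic dataset and the random-forest model are placeholder assumptions, not the (Q)SAR data or learners used in the work.

```python
# Minimal sketch: k-fold cross-validation vs. external test set validation.
# The synthetic data and model below are illustrative stand-ins only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0)

# k-fold cross-validation: every compound serves as test data exactly once.
cv_scores = cross_val_score(model, X, y, cv=10)
print(f"10-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

# External test set validation: one random split is held out from model building.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
print(f"External test accuracy: {model.fit(X_train, y_train).score(X_test, y_test):.3f}")
```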
Abstract:
The thesis describes the GARTP system, which visualizes the analysis of early and delayed running in public transport on a cartographic map.
Abstract:
BACKGROUND: Local anaesthetic blocks of the greater occipital nerve (GON) are frequently performed in different types of headache, but no selective approaches exist. Our cadaver study compares the sonographic visibility of the nerve and the accuracy and specificity of ultrasound-guided injections at two different sites. METHODS: After sonographic measurements in 10 embalmed cadavers, 20 ultrasound-guided injections of the GON were performed with 0.1 ml of dye at the classical site (superior nuchal line), followed by 20 at a newly described, more proximal site (C2, superficial to the obliquus capitis inferior muscle). The spread of dye and coloration of the nerve were evaluated by dissection. RESULTS: The median sonographic diameter of the GON was 4.2 x 1.4 mm at the classical site and 4.0 x 1.8 mm at the new site. The nerves were found at a median depth of 8 and 17.5 mm, respectively. The nerve was successfully coloured with dye in 16 of 20 injections with the classical approach and in 20 of 20 with the new approach. This corresponds to a block success rate of 80% (95% confidence interval: 58-93%) vs 100% (95% confidence interval: 86-100%), which is statistically significant (McNemar's test, P=0.002). CONCLUSIONS: Our findings confirm that the GON can be visualized using ultrasound both at the level of the superior nuchal line and at C2. The newly described approach superficial to the obliquus capitis inferior muscle has a higher success rate and should allow a more precise blockade of the nerve.
Abstract:
It is one of the most important tasks of the forensic pathologist to explain the forensically relevant medical findings to medical non-professionals. However, it is often difficult to comment on the nature and potential consequences of organ injuries in a comprehensive way to individuals with limited knowledge of anatomy and physiology. This rare case of survived pancreatic transection after kicks to the abdomen illustrates how the application of dedicated software programs for three-dimensional reconstruction can overcome these difficulties, allowing for clear and concise visualization of complex findings.
Abstract:
To determine whether intravenous morphine co-medication improves bile duct visualization, diameter, and/or volume in intravenous CT cholangiography in a porcine liver model.
Abstract:
Background: Parasitic wasps constitute one of the largest groups of venomous animals. Although some physiological effects of their venoms are well documented, relatively little is known at the molecular level about the protein composition of these secretions. To identify the majority of the venom proteins of the endoparasitoid wasp Chelonus inanitus (Hymenoptera: Braconidae), we randomly sequenced 2111 expressed sequence tags (ESTs) from a venom gland cDNA library. In parallel, proteins from pure venom were separated by gel electrophoresis and individually submitted to nano-LC-MS/MS analysis, allowing comparison of peptide and EST sequences.

Results: About 60% of the sequenced ESTs encoded proteins whose presence in venom was attested by mass spectrometry. Most of the remaining ESTs corresponded to gene products likely involved in the transcriptional and translational machinery of venom gland cells. In addition, a small number of transcripts were found to encode proteins that share sequence similarity with well-known venom constituents of social hymenopteran species, such as hyaluronidase-like proteins and an Allergen-5 protein. An overall number of 29 venom proteins could be identified through the combination of EST sequencing and proteomic analyses. The most highly redundant set of ESTs encoded a protein that shared sequence similarity with a venom protein of unknown function, potentially specific to the Chelonus lineage. Venom components specific to C. inanitus included a C-type lectin domain containing protein, a chemosensory protein-like protein, a protein related to yellow-e3, and ten new proteins which shared no significant sequence similarity with known sequences. In addition, several venom proteins potentially able to interact with chitin were also identified, including a chitinase, an imaginal disc growth factor-like protein and two putative mucin-like peritrophins.

Conclusions: The combined approaches made it possible to discriminate between cellular and true venom proteins. The venom of C. inanitus appears to be a mixture of conserved venom components and potentially lineage-specific proteins. These new molecular data enrich our knowledge of parasitoid venoms and, more generally, might contribute to a better understanding of the evolution and functional diversity of venom proteins within Hymenoptera.
Abstract:
Software visualizations can provide a concise overview of a complex software system. Unfortunately, as software has no physical shape, there is no 'natural' mapping of software to a two-dimensional space. As a consequence, most visualizations tend to use a layout in which position and distance have no meaning, and the layout typically diverges from one visualization to another. We propose an approach to consistent layout for software visualization, called Software Cartography, in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, as the vocabulary of software artifacts tends to be stable over time. We present a prototype implementation of Software Cartography and illustrate its use with practical examples from numerous open-source case studies.
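The LSI-plus-MDS pipeline described in this abstract can be sketched in a few lines. The following Python example uses scikit-learn (TF-IDF followed by truncated SVD as a stand-in for LSI, then metric MDS down to 2D) on toy "artifacts"; the file names and parameters are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the layout pipeline: vocabulary -> LSI space -> 2D map.
# TF-IDF + TruncatedSVD approximate LSI; MDS projects the result to two dimensions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import MDS

# Toy "software artifacts": each string stands for the vocabulary of one file.
artifacts = {
    "Parser.java":   "parser token grammar syntax tree node",
    "Lexer.java":    "lexer token scanner character stream",
    "Renderer.java": "renderer pixel shader texture frame buffer",
    "Widget.java":   "widget button layout event click handler",
}

tfidf = TfidfVectorizer().fit_transform(artifacts.values())
lsi = TruncatedSVD(n_components=3, random_state=0).fit_transform(tfidf)
coords = MDS(n_components=2, random_state=0).fit_transform(lsi)

# Artifacts with similar vocabulary end up close together on the 2D map.
for name, (x, y) in zip(artifacts, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```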
Abstract:
Software visualization can be of great use for understanding and exploring a software system in an intuitive manner. Spatial representation of software is a promising approach of increasing interest. However, little is known about how developers interact with spatial visualizations that are embedded in the IDE. In this paper, we present a pilot study that explores the use of Software Cartography for program comprehension of an unknown system. We investigated whether developers establish a spatial memory of the system, whether clustering by topic offers a sound base layout, and how developers interact with maps. We report our results in the form of observations, hypotheses, and implications. Key findings are (a) that developers made good use of the map to inspect search results and call graphs, and (b) that developers found the base layout surprising and often confusing. We conclude with concrete advice for the design of embedded software maps.
Abstract:
Library of Congress Subject Headings (LCSH), the standard subject language used in library catalogues, are often criticized for their lack of currency, biased language, and atypical syndetic structure. Conversely, folksonomies (or tags), which rely on the natural language of their users, offer a flexibility often lacking in controlled vocabularies and may offer a means of augmenting more rigid controlled vocabularies such as LCSH. Content analysis studies have demonstrated the potential for folksonomies to be used as a means of enhancing subject access to materials, and libraries are beginning to integrate tagging systems into their catalogues. This study examines the utility of tags as a means of enhancing subject access to materials in library online public access catalogues (OPACs) through usability testing with the LibraryThing for Libraries catalogue enhancements. Findings indicate that while they cannot replace LCSH, tags do show promise for aiding information seeking in OPACs. In the context of information systems design, the study revealed that while folksonomies have the potential to enhance subject access to materials, that potential is severely limited by the current inability of catalogue interfaces to support tag-based searches alongside standard catalogue searches.
Abstract:
Load flow visualization, an important step in structural and machine assembly design, may aid in the analysis and eventual synthesis of compliant mechanisms. In this paper, we present a kineto-static formulation to visualize load flow in compliant mechanisms. This formulation uses the concept of transferred forces to quantify load flow from the input to the output of a compliant mechanism. The magnitude and direction of load flow in the constituent members enable a functional decomposition of the compliant mechanism into (i) constraints (C), members that are constrained to deform in a particular direction, and (ii) transmitters (T), members that transmit load to the output. Furthermore, it is shown that a constraint member and an adjacent transmitter member can be grouped together to constitute a fundamental building block, known as a CT set, whose load flow behavior is maximally decoupled from the rest of the mechanism. We can thereby explain the deformation behavior of a number of compliant mechanisms from the literature by visualizing load flow and identifying the building blocks.