132 results for VISUALIZATIONS
Abstract:
Software visualizations can provide a concise overview of a complex software system. Unfortunately, as software has no physical shape, there is no "natural" mapping of software to a two-dimensional space. As a consequence, most visualizations use a layout in which position and distance have no meaning, and the layout therefore typically diverges from one visualization to another. We propose an approach to consistent layout for software visualization, called Software Cartography, in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, as the vocabulary of software artifacts tends to be stable over time. We present a prototype implementation of Software Cartography and illustrate its use with practical examples from numerous open-source case studies.
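A minimal sketch of the LSI-then-MDS pipeline described above, assuming scikit-learn is available; TruncatedSVD over a TF-IDF matrix is the standard LSI approximation, and the artifact names and vocabularies are illustrative placeholders, not data from the paper.

```python
# Sketch: vocabulary-based map layout via LSI followed by MDS.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import MDS
from sklearn.metrics.pairwise import cosine_distances

# Each "document" is the vocabulary extracted from one software artifact.
artifacts = [
    "parse token stream grammar syntax",   # Parser
    "render widget layout paint canvas",   # Renderer
    "token lexer scanner syntax error",    # Lexer
]

# 1. Weight terms with TF-IDF.
tfidf = TfidfVectorizer().fit_transform(artifacts)

# 2. LSI: project into a latent semantic space (a real corpus would use
#    far more than 2 latent dimensions).
lsi = TruncatedSVD(n_components=2).fit_transform(tfidf)

# 3. MDS: embed pairwise vocabulary dissimilarities into 2-D map positions.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(cosine_distances(lsi))

for name, (x, y) in zip(["Parser", "Renderer", "Lexer"], coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```

Because positions derive only from vocabulary, the layout stays stable across releases as long as identifiers do, which is what makes such maps comparable over time.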
Abstract:
Software visualization can be of great use for understanding and exploring a software system in an intuitive manner. Spatial representation of software is a promising approach that is attracting increasing interest. However, little is known about how developers interact with spatial visualizations that are embedded in the IDE. In this paper, we present a pilot study that explores the use of Software Cartography for program comprehension of an unknown system. We investigated whether developers establish a spatial memory of the system, whether clustering by topic offers a sound base layout, and how developers interact with maps. We report our results in the form of observations, hypotheses, and implications. Key findings are a) that developers made good use of the map to inspect search results and call graphs, and b) that developers found the base layout surprising and often confusing. We conclude with concrete advice for the design of embedded software maps.
Abstract:
Java Enterprise Applications (JEAs) are large systems that integrate multiple technologies and programming languages. Transactions in JEAs simplify the development of code that deals with failure recovery and multi-user coordination by guaranteeing atomicity of sets of operations. The heterogeneous nature of JEAs, however, can obfuscate conceptual errors in the application code, and in particular can hide incorrect declarations of transaction scope. In this paper we present a technique to expose and analyze the application transaction scope in JEAs by merging and analyzing information from multiple sources. We also present several novel visualizations that aid in the analysis of transaction scope by highlighting anomalies in the specification of transactions and violations of architectural constraints. We have validated our approach on two versions of a large commercial case study.
Abstract:
Data visualization is the process of representing data as pictures to support reasoning about the underlying data. For the interpretation to be as easy as possible, we need to stay as close as possible to the original data. As most visualization tools have an internal meta-model that differs from that of the data to be presented, they usually need to duplicate the original data to conform to their meta-model. This leads to an increase in the resources needed, an increase that is not always justified. In this work we argue for the need for an engine that stays as close as possible to the data, and we present our solution: moving the visualization tool to the data instead of moving the data to the visualization tool. Our solution also emphasizes the necessity of reusing basic building blocks to express complex visualizations, and of allowing programmers to script visualizations using their preferred tools rather than a third-party format. To validate the expressiveness of our framework, we show how to express several previously published visualizations and describe the pros and cons of the approach.
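A hedged sketch of the "move the tool to the data" idea: reusable visualization blocks are scripted directly over the program's own objects, computing visual properties lazily instead of copying the data into a viewer-specific meta-model. All names below are hypothetical illustrations, not the paper's API.

```python
# Hypothetical sketch: composable blocks rendering live domain objects
# in place, with no duplication into a tool-specific meta-model.
from dataclasses import dataclass

@dataclass
class ClassInfo:          # the original data, left untouched
    name: str
    methods: int
    lines: int

class Box:
    """Reusable building block mapping object properties to visual metrics."""
    def __init__(self, width, height):
        self.width, self.height = width, height   # metric functions

    def render(self, obj):
        # Metrics are evaluated lazily against the live object itself.
        return f"rect {self.width(obj)}x{self.height(obj)} '{obj.name}'"

model = [ClassInfo("Parser", 12, 340), ClassInfo("Lexer", 7, 120)]

# The programmer scripts the view in the host language, not a data-exchange
# format: plain functions select which properties drive which visual cue.
box = Box(width=lambda c: c.methods, height=lambda c: c.lines // 10)
for c in model:
    print(box.render(c))
```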
Abstract:
Background: The recent development of semi-automated techniques for staining and analyzing flow cytometry samples has presented new challenges. Quality control and quality assessment are critical when developing new high throughput technologies and their associated information services. Our experience suggests that significant bottlenecks remain in the development of high throughput flow cytometry methods for data analysis and display. In particular, data quality control and quality assessment are crucial steps in processing and analyzing high throughput flow cytometry data. Methods: We propose a variety of graphical exploratory data analysis tools for exploring ungated flow cytometry data. We have implemented a number of specialized functions and methods in the Bioconductor package rflowcyt. We demonstrate the use of these approaches by investigating two independent sets of high throughput flow cytometry data. Results: We found that graphical representations can reveal substantial non-biological differences in samples. Empirical cumulative distribution function (ECDF) plots and summary scatterplots were especially useful for rapidly identifying problems missed by manual review. Conclusions: Graphical exploratory data analysis tools are a quick and useful means of assessing data quality. We propose that the described visualizations be used as quality assessment tools and, where possible, for quality control.
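The paper's implementation lives in the R/Bioconductor package rflowcyt; as a language-neutral illustration of the ECDF-based check it describes, the following sketch compares per-sample empirical CDFs on synthetic data, where a systematic shift between curves flags a non-biological difference.

```python
# Illustrative ECDF comparison for sample-level quality assessment.
# Synthetic data; the paper's own tooling is the R package rflowcyt.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
samples = {
    "sample A": rng.normal(100, 15, 5000),        # typical sample
    "sample B": rng.normal(100, 15, 5000) + 25,   # shifted: possible artifact
}

def ecdf(values):
    """Return sorted values and their empirical cumulative probabilities."""
    x = np.sort(values)
    return x, np.arange(1, len(x) + 1) / len(x)

for label, v in samples.items():
    plt.step(*ecdf(v), where="post", label=label)

plt.xlabel("fluorescence intensity")
plt.ylabel("empirical CDF")
plt.legend()
plt.show()
```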
Abstract:
According to the current view, the formation of new alveolar septa from preexisting ones ceases with the reduction of the double-layered capillary network inside the alveolar septa to a single-layered one (microvascular maturation, postnatal days 14-21 in rats). We challenged this view by measuring stereologically the appearance of new alveolar septa and by studying the alveolar capillary network in three-dimensional (3D) visualizations obtained by high-resolution synchrotron radiation X-ray tomographic microscopy. We observed that new septa are formed at least until young adulthood (rats, days 4-60) and that roughly half of the new septa are lifted off of mature septa containing single-layered capillary networks. At the base of newly forming septa we detected a local duplication of the capillary network. We conclude that new alveoli may in principle be formed at any time and at any location inside the lung parenchyma, and that lung development continues into young adulthood. We define two phases of developmental alveolarization: phase one (days 4-21), the lifting off of new septa from immature preexisting septa, and phase two (day 14 through young adulthood), the formation of septa from mature preexisting septa. Clinically, our results call for caution in the use of drugs that influence structural lung development during both phases of alveolarization.
Abstract:
Many reverse engineering approaches have been developed to analyze software systems written in languages such as C/C++ or Java. These approaches typically rely on a meta-model that is either specific to the language at hand or language independent (e.g., UML). However, one language that has hardly been addressed is Lisp. While at first sight Lisp can be accommodated by current language-independent meta-models, it has some unique features (e.g., macros, CLOS entities) that are crucial for reverse engineering Lisp systems. In this paper we propose a suite of new visualizations that reveal the special traits of the Lisp language and thus help in understanding complex Lisp systems. To validate our approach we apply these visualizations to several large Lisp case studies and summarize our experience as a series of recurring visual patterns that we have detected.
Abstract:
Software visualizations can provide a concise overview of a complex software system. Unfortunately, since software has no physical shape, there is no "natural" mapping of software to a two-dimensional space. As a consequence, most visualizations use a layout in which position and distance have no meaning, and the layout therefore typically diverges from one visualization to another. We propose a consistent layout for software maps in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, since the vocabulary of software artifacts tends to be stable over time.
Abstract:
Understanding the functioning of brains is an extremely challenging endeavour, both for researchers and for students. Interactive media and tools, such as simulations, databases, visualizations, and virtual laboratories, have proved indispensable not only in research but also in education, where they help in understanding brain function. Accordingly, a wide range of such media and tools are now available, and it is becoming increasingly difficult to keep an overall picture in view. Written by researchers, tool developers, and experienced academic teachers, this special issue of Brains, Minds & Media covers a broad range of interactive research media and tools, with a strong emphasis on their use in neural and cognitive science education. The focus lies not only on the tools themselves but also on the question of how research tools can significantly enhance learning and teaching and how their curricular integration can be achieved. The collection gives a comprehensive overview of existing tools and their usage, as well as the underlying educational ideas, and thus provides an orientation guide not only for teaching researchers but also for interested teachers and students.
Abstract:
In recent years, interactive media and tools, such as scientific simulations, simulation environments, and dynamic data visualizations, have become established methods in the neural and cognitive sciences. Hence, university teachers of these sciences face the challenge of integrating such media into the neuroscience curriculum. Simulations and dynamic visualizations in particular offer great opportunities for teachers and learners, since they are both illustrative and explorable. However, simulations also pose instructional problems: they are abstract and demand both computer skills and conceptual knowledge of what the simulations are intended to explain. Guided by two central questions, this article provides an overview of approaches applicable in neuroscience education and opens perspectives for their curricular integration: (i) how can complex scientific media be transformed for educational use in a manner that is efficient and comprehensible for students at all levels, and (ii) what technical infrastructure can support this transformation? Using educational simulations for the neurosciences and their application in courses as examples, we propose answers to these questions a) by introducing a specific educational simulation approach for the neurosciences, b) by introducing an e-learning environment for simulations, and c) by providing examples of curricular integration at different levels, which may help academic teachers integrate newly created or existing interactive educational resources into their courses.
Abstract:
In most rodents and some other mammals, the removal of one lung results in compensatory growth associated with dramatic angiogenesis and complete restoration of lung capacity. One pivotal mechanism in neoalveolarization is neovascularization, because without angiogenesis new alveoli cannot be formed. The aim of this study is to image and analyze three-dimensionally, at sub-micron scale, the different patterns of neovascularization seen following pneumonectomy in mice. C57/BL6 mice underwent a left-sided pneumonectomy, and lungs were harvested at various time points after pneumonectomy. Volume analysis by microCT revealed a striking increase of 143 percent in the cardiac lobe 14 days after pneumonectomy. Analysis of microvascular corrosion casts demonstrated spatially heterogeneous vascular densities, in line with the perivascular and subpleural compensatory growth pattern observed in anti-PCNA-stained lung sections. Within these regions we observed an expansion of the vascular plexus, with increased pillar formation and sprouting angiogenesis originating from both pre-existing bronchial and pulmonary vessels. Type II pneumocytes and alveolar macrophages were also seen to participate actively in alveolar neo-angiogenesis after pneumonectomy. 3D visualizations obtained by high-resolution synchrotron radiation X-ray tomographic microscopy showed the appearance of double-layered vessels and bud-like alveolar baskets, as previously described in normal lung development. Scanning electron microscopy of the microvascular architecture also revealed a replication of perialveolar vessel networks through septum formation, as seen in developmental alveolarization. In addition, the appearance of pillar formations and duplications on alveolar entrance ring vessels in mature alveoli is indicative of vascular remodeling. These findings indicate that sprouting and intussusceptive angiogenesis are pivotal mechanisms in adult lung alveolarization after pneumonectomy. Various forms of developmental neoalveolarization may also contribute to compensatory lung regeneration.
Abstract:
Web-scale knowledge retrieval can be enabled by distributed information retrieval, clustering Web clients into a large-scale computing infrastructure for knowledge discovery from Web documents. Based on this infrastructure, we propose to apply semiotic (i.e., sub-syntactical) and inductive (i.e., probabilistic) methods for inferring concept associations in human knowledge. These associations can be combined to form a fuzzy (i.e., gradual) semantic net representing a map of the knowledge in the Web. We thus propose to provide interactive visualizations of these cognitive concept maps to end users, who can browse and search the Web through a human-oriented, visual, and associative interface.
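A minimal sketch of the inductive step under stated assumptions: concept associations are inferred from document co-occurrence, and the graded edge weight P(b | a) used below is just one plausible fuzzy measure, not the paper's actual semiotic machinery; all names are illustrative.

```python
# Sketch: inferring graded concept associations from co-occurrence counts.
from collections import Counter
from itertools import combinations

documents = [                      # toy stand-in for crawled Web documents
    {"neuron", "brain", "memory"},
    {"brain", "memory", "learning"},
    {"neuron", "brain"},
]

freq, cooc = Counter(), Counter()
for doc in documents:
    freq.update(doc)
    cooc.update(combinations(sorted(doc), 2))

# Fuzzy semantic net: directed edge (a, b) with membership degree
# P(b | a) = cooc(a, b) / freq(a), a value in [0, 1].
edges = {}
for (a, b), n in cooc.items():
    edges[(a, b)] = n / freq[a]
    edges[(b, a)] = n / freq[b]

for (a, b), w in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} -> {b}: {w:.2f}")
```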
Abstract:
Pulmonary airways are subdivided into conducting and gas-exchanging airways. An acinus is defined as the small tree of gas-exchanging airways fed by the most distal purely conducting airway. Until now, a disector of five consecutive sections or airway casts was used to count acini. We developed a faster method to estimate the number of acini in young adult rats. Right middle lung lobes were critical-point dried or paraffin embedded after heavy metal staining, and imaged by X-ray micro-CT or synchrotron radiation-based X-ray tomographic microscopy. The entrances of the acini were counted by scrolling through three-dimensional (3D) image stacks, using morphological criteria (airway wall thickness and the appearance of alveoli). Segmentation stoppers were placed at the acinar entrances for 3D visualizations of the conducting airways. We observed that acinar airways start at various generations and that one transitional bronchiole may serve more than one acinus. We estimated a mean of 5612 (±547) acini per lung and a mean airspace volume of 0.907 (±0.108) μL per acinus. In 60-day-old rats, neither the number of acini nor the mean acinar volume correlated with body weight or lung volume.
Abstract:
Much work has been done in the areas of and-parallelism and data parallelism in logic programs. Such work has proceeded to a certain extent in an independent fashion. Both types of parallelism offer advantages and disadvantages. Traditional (and-)parallel models offer generality, being able to exploit parallelism in a large class of programs (including that exploited by data parallelism techniques). Data parallelism techniques, on the other hand, offer increased performance for a restricted class of programs. The thesis of this paper is that these two forms of parallelism are not fundamentally different, and that relating them opens the possibility of obtaining the advantages of both within the same system. Some relevant issues are discussed and solutions proposed. The discussion is illustrated through visualizations of actual parallel executions implementing the ideas proposed.
Abstract:
We address the design and implementation of visual paradigms for observing the execution of constraint logic programs, aiming at debugging, tuning and optimization, and teaching. We focus on the display of data in CLP executions, where representations are sought both for constrained variables and for the constraints themselves. Two tools, VIFID and TRIFID, exemplifying the devised depictions, have been implemented and are used to showcase the usefulness of the visualizations developed.