32 results for Source code visualization


Relevance:

80.00%

Publisher:

Abstract:

In order to analyze software systems, it is necessary to model them. Static software models are commonly imported by parsing source code and related data. Unfortunately, building custom parsers for most programming languages is a non-trivial endeavour. This poses a major bottleneck for analyzing software systems programmed in languages for which importers do not already exist. Luckily, initial software models do not require detailed parsers, so it is possible to start analysis with a coarse-grained importer, which is then gradually refined. In this paper we propose an approach to "agile modeling" that exploits island grammars to extract initial coarse-grained models, parser combinators to enable gradual refinement of model importers, and various heuristics to recognize language structure, keywords and other language artifacts.
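The abstract mentions heuristics for recognizing language structure and keywords when no full parser exists. The sketch below is a hypothetical illustration of one such heuristic (the rule and threshold are assumptions, not the paper's concrete technique): tokens that occur in most files of a corpus are likely keywords, since real identifiers tend to be local to a few files.

```python
# Minimal keyword-recognition heuristic for an unknown language (illustrative).
import re
from collections import Counter

def candidate_keywords(files, min_fraction=0.8):
    """Return lowercase tokens that appear in at least `min_fraction` of the files."""
    document_frequency = Counter()
    for source in files:
        tokens = set(re.findall(r'[a-z]+', source))
        document_frequency.update(tokens)
    cutoff = min_fraction * len(files)
    return sorted(t for t, n in document_frequency.items() if n >= cutoff)

corpus = [
    "if x > 0 then return x else return 0 end",
    "while not done do step() end",
    "if ready then start() else wait() end",
]
print(candidate_keywords(corpus))   # ['end'] with the default threshold
```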

Relevance:

80.00%

Publisher:

Abstract:

Software dependencies play a vital role in programme comprehension, change impact analysis and other software maintenance activities. Traditionally, these activities are supported by source code analysis; however, the source code is sometimes inaccessible or difficult to analyse, as in hybrid systems composed of source code in multiple languages using various paradigms (e.g. object-oriented programming and relational databases). Moreover, not all stakeholders have adequate knowledge to perform such analyses. For example, non-technical domain experts and consultants raise most maintenance requests; however, they cannot predict the cost and impact of the requested changes without the support of the developers. We propose a novel approach to predicting software dependencies by exploiting the coupling present in domain-level information. Our approach is independent of the software implementation; hence, it can be used to approximate architectural dependencies without access to the source code or the database. As such, it can be applied to hybrid systems with heterogeneous source code or legacy systems with missing source code. In addition, this approach is based solely on information visible and understandable to domain users; therefore, it can be efficiently used by domain experts without the support of software developers. We evaluate our approach with a case study on a large-scale enterprise system, in which we demonstrate how up to 65% of the source code dependencies and 77% of the database dependencies are predicted solely based on domain information.
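A minimal sketch of the general idea, under assumptions of my own (the screens, entities and threshold are invented, not the paper's data or model): dependencies between implementation artifacts are approximated from how often their domain entities co-occur in user-visible domain information, without touching source code or the database.

```python
# Predict dependencies from domain-level co-occurrence (illustrative toy).
from itertools import combinations
from collections import Counter

# Domain information visible to non-technical users: which entities appear
# together on which screens (hypothetical example).
screens = {
    "OrderEntry":    {"Customer", "Order", "Product"},
    "Invoicing":     {"Order", "Invoice", "Customer"},
    "StockOverview": {"Product", "Warehouse"},
}

cooccurrence = Counter()
for entities in screens.values():
    for a, b in combinations(sorted(entities), 2):
        cooccurrence[(a, b)] += 1

# Predict a dependency between two entities' implementation artifacts when
# they co-occur on at least `threshold` screens.
threshold = 2
predicted = [pair for pair, n in cooccurrence.items() if n >= threshold]
print(predicted)   # [('Customer', 'Order')]
```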

Relevance:

80.00%

Publisher:

Abstract:

Software corpora facilitate reproducibility of analyses; however, static analysis for an entire corpus still requires considerable effort, often duplicated unnecessarily by multiple users. Moreover, most corpora are designed for a single language, increasing the effort required for cross-language analysis. To address these aspects, we propose Pangea, an infrastructure allowing fast development of static analyses on multi-language corpora. Pangea uses language-independent meta-models stored as object model snapshots that can be directly loaded into memory and queried without any parsing overhead. To reduce the effort of performing static analyses, Pangea provides out-of-the-box support for: creating and refining analyses in a dedicated environment, deploying an analysis on an entire corpus, using a runner that supports parallel execution, and exporting results in various formats. In this tool demonstration we introduce Pangea and provide several usage scenarios that illustrate how it reduces the cost of analysis.
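Pangea itself is not reproduced here; the snippet below is only a rough Python analogue of the workflow the abstract describes, with invented file names and metric: pre-built model snapshots are loaded directly (no parsing), a simple analysis runs over every project in parallel, and the results are exported in a common format.

```python
# Hypothetical corpus analysis over pre-serialized model snapshots.
import glob
import json
import pickle
from multiprocessing import Pool

def analyze(snapshot_path: str) -> dict:
    """Load one serialized model snapshot and compute a simple size metric."""
    with open(snapshot_path, "rb") as f:
        model = pickle.load(f)   # e.g. {"classes": [{"name": ..., "methods": [...]}]}
    n_methods = sum(len(c["methods"]) for c in model["classes"])
    return {"project": snapshot_path,
            "classes": len(model["classes"]),
            "methods": n_methods}

if __name__ == "__main__":
    snapshots = glob.glob("corpus/*.snapshot")   # hypothetical corpus layout
    with Pool() as pool:
        results = pool.map(analyze, snapshots)   # parallel runner
    with open("results.json", "w") as f:
        json.dump(results, f, indent=2)          # export results
```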

Relevance:

80.00%

Publisher:

Abstract:

Imprecise manipulation of source code (semi-parsing) is useful for tasks such as robust parsing, error recovery, lexical analysis, and rapid development of parsers for data extraction. An island grammar precisely defines only a subset of a language syntax (islands), while the rest of the syntax (water) is defined imprecisely. Usually, water is defined as the negation of islands. Albeit simple, such a definition of water is naive and impedes composition of islands. When developing an island grammar, sooner or later a programmer has to create water tailored to each individual island. Such an approach is fragile, however, because water can change with any change of a grammar. It is time-consuming, because water is defined manually by a programmer and not automatically. Finally, an island surrounded by water cannot be reused because water has to be defined for every grammar individually. In this paper we propose a new technique of island parsing - bounded seas. Bounded seas are composable, robust, reusable and easy to use because island-specific water is created automatically. We integrated bounded seas into a parser combinator framework as a demonstration of their composability and reusability.
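The paper's bounded seas live in a Smalltalk parser combinator framework; the toy below is only an assumption-laden Python illustration of the central point, simplified so that water simply extends to the next point where the island parses (real bounded seas also bound the water by the surrounding context). The combinators and the regex islands are my own placeholders.

```python
# Toy island parsing where the water around an island is created automatically.
import re
from typing import Callable, Optional, Tuple

Parser = Callable[[str, int], Optional[Tuple[object, int]]]

def token(pattern: str) -> Parser:
    """An island: a regex-based token parser anchored at the current position."""
    regex = re.compile(pattern)
    def parse(text, pos):
        m = regex.match(text, pos)
        return (m.group(0), m.end()) if m else None
    return parse

def sea(island: Parser) -> Parser:
    """Water is derived automatically: skip characters until the island parses."""
    def parse(text, pos):
        for start in range(pos, len(text) + 1):
            result = island(text, start)
            if result is not None:
                return result
        return None
    return parse

def seq(*parsers: Parser) -> Parser:
    """Sequence combinator, so seas stay composable with other parsers."""
    def parse(text, pos):
        values = []
        for p in parsers:
            r = p(text, pos)
            if r is None:
                return None
            value, pos = r
            values.append(value)
        return values, pos
    return parse

# Two composable islands inside seas: extract a class name and the first
# method signature from Java-like input, ignoring everything in between.
grammar = seq(sea(token(r'class\s+\w+')), sea(token(r'\w+\s*\([^)]*\)')))
text = "package p; /* noise */ class Greeter { int x; String greet(String s){} }"
print(grammar(text, 0)[0])   # ['class Greeter', 'greet(String s)']
```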

Relevance:

80.00%

Publisher:

Abstract:

Polymorphism, along with inheritance, is one of the most important features in object-oriented languages, but it is also one of the biggest obstacles to source code comprehension. Depending on the run-time type of the receiver of a message, any one of a number of possible methods may be invoked. Several algorithms for creating accurate call graphs using static analysis already exist; however, they consume significant time and memory resources. We propose an approach that will combine static and dynamic analysis and yield the best possible precision with a minimal trade-off between resource usage and accuracy.
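A small, purely illustrative sketch of how such a combination could look (the class hierarchy, call site and trace are invented): the static analysis conservatively lists every override a polymorphic send could reach, and the dynamic trace marks which of those edges were actually exercised.

```python
# Refine a conservative static call graph with dynamically observed edges.
static_targets = {
    # call site -> methods a static analysis must consider (all overrides)
    "Canvas.paint": {"Circle.draw", "Square.draw", "Triangle.draw"},
}

dynamic_trace = {
    # call site -> methods observed while executing the test suite
    "Canvas.paint": {"Circle.draw", "Square.draw"},
}

def refined_call_graph(static, dynamic):
    """Keep statically possible edges, separating run-time-confirmed ones."""
    graph = {}
    for site, candidates in static.items():
        observed = dynamic.get(site, set())
        graph[site] = {
            "confirmed": candidates & observed,   # seen at run time
            "possible": candidates - observed,    # static-only edges
        }
    return graph

print(refined_call_graph(static_targets, dynamic_trace))
```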

Relevance:

80.00%

Publisher:

Abstract:

Software developers are often unsure of the exact name of the API method they need to use to invoke the desired behavior. Most state-of-the-art documentation browsers present API artefacts in alphabetical order. Albeit easy to implement, alphabetical order does not help much: if the developer knew the name of the required method, he could have just searched for it in the first place. In a context where multiple projects use the same API, and their source code is available, we can improve the API presentation by organizing the elements in the order in which they are most likely to be used by the developer. Usage frequency data for methods is gathered by analyzing other projects from the same ecosystem, and this data is then used to improve tools. We present a preliminary study on the potential of this approach to improve the API presentation by reducing the time it takes to find the method that implements a given feature. We also briefly present our experience with two proof-of-concept tools implemented for Smalltalk and Java.
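The core ordering step is easy to picture; the toy below (with invented usage counts, not data from the study) ranks an API listing by how often each method is called elsewhere in the ecosystem instead of alphabetically.

```python
# Rank API methods by ecosystem usage frequency (illustrative data).
from collections import Counter

api_methods = ["addAll", "add", "clear", "contains", "remove", "retainAll"]

# Call sites harvested from other projects in the same ecosystem (hypothetical).
ecosystem_calls = ["add", "add", "contains", "add", "remove", "contains", "add"]

usage = Counter(ecosystem_calls)
ranked = sorted(api_methods, key=lambda m: usage[m], reverse=True)
print(ranked)   # ['add', 'contains', 'remove', 'addAll', 'clear', 'retainAll']
```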

Relevance:

80.00%

Publisher:

Abstract:

Dynamically typed languages lack information about the types of variables in the source code. Developers care about this information as it supports program comprehension. Basic type inference techniques are helpful, but may yield many false positives or negatives. We propose to mine information from the software ecosystem on how frequently given types are inferred unambiguously to improve the quality of type inference for a single system. This paper presents an approach to augment existing type inference techniques by supplementing the information available in the source code of a project with data from other projects written in the same language. For all available projects, we track how often messages are sent to instance variables throughout the source code. Predictions for the type of a variable are made based on the messages sent to it. The evaluation of a proof-of-concept prototype shows that this approach works well for types that are sufficiently popular, like those from the standard library, and tends to create false positives for unpopular or domain-specific types. The false positives are, in most cases, fairly easily identifiable. Also, the evaluation data shows a substantial increase in the number of correctly inferred types when compared to the non-augmented type inference.
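A simplified sketch of the augmentation step, with an invented frequency table (the messages, types and counts are placeholders, not the prototype's data): when local inference is ambiguous, candidate types are scored by how often each type received the variable's messages elsewhere in the ecosystem.

```python
# Ecosystem-augmented type prediction from messages sent to a variable.
from collections import defaultdict

# ecosystem_counts[message][type] = how often `message` was sent to a variable
# unambiguously inferred to have `type` in other projects (hypothetical data).
ecosystem_counts = {
    "add:":      {"OrderedCollection": 950, "Set": 400},
    "at:put:":   {"Dictionary": 1200, "Array": 300},
    "includes:": {"Set": 500, "OrderedCollection": 450},
}

def predict_type(messages_sent):
    """Score candidate types by summed ecosystem frequency of the messages."""
    scores = defaultdict(int)
    for message in messages_sent:
        for typ, count in ecosystem_counts.get(message, {}).items():
            scores[typ] += count
    return max(scores, key=scores.get) if scores else None

print(predict_type(["add:", "includes:"]))   # OrderedCollection (950 + 450)
```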

Relevance:

80.00%

Publisher:

Abstract:

Open innovation is increasingly being adopted in business and describes a situation in which firms exchange ideas and knowledge with external participants, such as customers, suppliers, partner firms, and universities. This article extends the concept of open innovation with a push model of open innovation: knowledge is voluntarily created outside a firm by individuals and organisations who proceed to push knowledge into a firm’s open innovation project. For empirical analysis, we examine source code and newsgroup data on the Eclipse Development Platform. We find that outsiders invest as much in the firm’s project as the founding firm itself. Based on the insights from Eclipse, we develop four propositions: ‘preemptive generosity’ of a firm, ‘continuous commitment’, ‘adaptive governance structure’, and ‘low entry barrier’ are contexts that enable the push model of open innovation.

Relevance:

80.00%

Publisher:

Abstract:

Imprecise manipulation of source code (semi-parsing) is useful for tasks such as robust parsing, error recovery, lexical analysis, and rapid development of parsers for data extraction. An island grammar precisely defines only a subset of a language syntax (islands), while the rest of the syntax (water) is defined imprecisely. Usually water is defined as the negation of islands. Albeit simple, such a definition of water is naive and impedes composition of islands. When developing an island grammar, sooner or later a language engineer has to create water tailored to each individual island. Such an approach is fragile, because water can change with any change of a grammar. It is time-consuming, because water is defined manually by an engineer and not automatically. Finally, an island surrounded by water cannot be reused because water has to be defined for every grammar individually. In this paper we propose a new technique of island parsing: bounded seas. Bounded seas are composable, robust, reusable and easy to use because island-specific water is created automatically. Our work focuses on applications of island parsing to data extraction from source code. We have integrated bounded seas into a parser combinator framework as a demonstration of their composability and reusability.
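Since this version of the work stresses data extraction from source code, here is a small, assumption-laden illustration of that use case (the regex island and the input are mine, not the paper's framework): a single island pattern is applied repeatedly and everything between matches is treated as water, yielding structured records from otherwise unparsed input.

```python
# Extract method signatures as data, skipping all surrounding water.
import re

METHOD_ISLAND = re.compile(r'(\w+)\s+(\w+)\s*\(([^)]*)\)\s*\{')

def extract_signatures(source: str):
    """Collect (return type, name, parameters) for every island occurrence."""
    return [(m.group(1), m.group(2), m.group(3))
            for m in METHOD_ISLAND.finditer(source)]

java = """
class Cart {
    // unparsed water: fields, comments, annotations ...
    void add(Item item) { items.add(item); }
    double total() { return items.stream().mapToDouble(Item::price).sum(); }
}
"""
print(extract_signatures(java))
# [('void', 'add', 'Item item'), ('double', 'total', '')]
```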

Relevance:

80.00%

Publisher:

Abstract:

Visualisation provides good support for software analysis. It copes with the intangible nature of software by providing concrete representations of it. By reducing the complexity of software, visualisations are especially useful when dealing with large amounts of code. One domain that usually deals with large amounts of source code data is empirical analysis. Although there are many tools for analysis and visualisation, they do not cope well with software corpora. In this paper we present Explora, an infrastructure that is specifically targeted at visualising corpora. We report on early results from conducting a sample analysis on Smalltalk and Java corpora.
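Explora itself is not reproduced here; the snippet below is only a rough Python sketch of the kind of corpus-level overview such a tool produces: one metric per project, drawn side by side so Smalltalk and Java corpora can be compared at a glance. The project names and numbers are invented for illustration.

```python
# Corpus-level overview chart (illustrative data).
import matplotlib.pyplot as plt

corpus_metrics = {
    # project -> average methods per class (hypothetical numbers)
    "Pharo-Collections": 11.2,
    "Pharo-Kernel": 9.8,
    "JUnit": 6.4,
    "Apache-Commons-IO": 7.1,
}

projects = list(corpus_metrics)
values = [corpus_metrics[p] for p in projects]

plt.barh(projects, values)
plt.xlabel("Average methods per class")
plt.title("Corpus overview (illustrative data)")
plt.tight_layout()
plt.show()
```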

Relevance:

40.00%

Publisher:

Abstract:

Dental identification is the most valuable method to identify human remains in single cases with major postmortem alterations as well as in mass casualties because of its practicability and demanding reliability. Computed tomography (CT) has been investigated as a supportive tool for forensic identification and has proven to be valuable. It can also scan the dentition of a deceased within minutes. In the present study, we investigated currently used restorative materials using ultra-high-resolution dual-source CT and the extended CT scale for the purpose of a color-encoded, in-scale, and artifact-free visualization in 3D volume rendering. In 122 human molars, 220 cavities with 2-, 3-, 4- and 5-mm diameter were prepared. With presently used filling materials (different composites, temporary filling materials, ceramic, and liner), these cavities were restored in six teeth for each material and cavity size (exception: amalgam, n = 1). The teeth were CT scanned and images reconstructed using an extended CT scale. Filling materials were analyzed in terms of resulting Hounsfield units (HU) and filling size representation within the images. Varying restorative materials showed distinctly differing radiopacities, allowing for CT-data-based discrimination. In particular, ceramic and composite fillings could be differentiated. The HU values were used to generate an updated volume-rendering preset for postmortem extended-CT-scale data of the dentition to easily visualize the position of restorations, the shape (in scale), and the material used, which is color-encoded in 3D. The results provide the scientific background for the application of 3D volume rendering to visualize the human dentition for forensic identification purposes.
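A schematic sketch of the idea behind such a color-encoded rendering preset: each restorative material occupies a distinct band of (extended-scale) Hounsfield units, so a voxel can be assigned a material and a color by a simple lookup. The HU ranges and colors below are placeholders, not the values measured in the study.

```python
# Hypothetical HU-band lookup for material-based color encoding.
material_bands = [
    # (lower HU, upper HU, material, RGB color)
    (-1000,   300, "soft tissue / background", (0.0, 0.0, 0.0)),
    (  300,  3000, "enamel and dentin",        (0.9, 0.9, 0.8)),
    ( 3000,  8000, "composite (hypothetical)", (0.2, 0.6, 1.0)),
    ( 8000, 20000, "ceramic (hypothetical)",   (1.0, 0.5, 0.1)),
    (20000, 60000, "amalgam (hypothetical)",   (1.0, 0.1, 0.1)),
]

def classify(hu: float):
    """Return (material, color) for one voxel's Hounsfield value."""
    for lo, hi, material, color in material_bands:
        if lo <= hu < hi:
            return material, color
    return "unclassified", (0.5, 0.5, 0.5)

print(classify(5200))   # ('composite (hypothetical)', (0.2, 0.6, 1.0))
```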

Relevance:

30.00%

Publisher:

Abstract:

Software visualizations can provide a concise overview of a complex software system. Unfortunately, as software has no physical shape, there is no 'natural' mapping of software to a two-dimensional space. As a consequence, most visualizations tend to use a layout in which position and distance have no meaning, and consequently the layout typically diverges from one visualization to another. We propose an approach to consistent layout for software visualization, called Software Cartography, in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, as the vocabulary of software artifacts tends to be stable over time. We present a prototype implementation of Software Cartography, and illustrate its use with practical examples from numerous open-source case studies.
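A condensed sketch of the pipeline the abstract describes (LSI followed by MDS), using scikit-learn as a stand-in for the original implementation; the artifact "vocabularies" are tiny invented examples, so the resulting coordinates are only illustrative.

```python
# Vocabulary -> LSI space -> 2D map positions (illustrative pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import MDS
from sklearn.metrics.pairwise import cosine_distances

artifacts = {
    "ParserCombinator": "parse grammar token island sea input",
    "IslandGrammar":    "island water grammar parse token",
    "MapRenderer":      "draw map layout color hill shading",
    "CityLayout":       "layout map position distance city",
}

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(artifacts.values())

# LSI: project the term vectors into a low-dimensional semantic space.
lsi = TruncatedSVD(n_components=3, random_state=0).fit_transform(tfidf)

# MDS: embed pairwise (cosine) dissimilarities into the 2D map plane.
positions = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(cosine_distances(lsi))

for name, (x, y) in zip(artifacts, positions):
    print(f"{name:16s} {x:+.2f} {y:+.2f}")
```

Artifacts with similar vocabularies (the two parsing classes, the two map classes) end up near each other, which is what makes the layout stable and comparable across views.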

Relevance:

30.00%

Publisher:

Abstract:

We recently reported on the Multi Wave Animator (MWA), a novel open-source tool with capability of recreating continuous physiologic signals from archived numerical data and presenting them as they appeared on the patient monitor. In this report, we demonstrate for the first time the power of this technology in a real clinical case, an intraoperative cardiopulmonary arrest following reperfusion of a liver transplant graft. Using the MWA, we animated hemodynamic and ventilator data acquired before, during, and after cardiac arrest and resuscitation. This report is accompanied by an online video that shows the most critical phases of the cardiac arrest and resuscitation and provides a basis for analysis and discussion. This video is extracted from a 33-min, uninterrupted video of cardiac arrest and resuscitation, which is available online. The unique strength of MWA, its capability to accurately present discrete and continuous data in a format familiar to clinicians, allowed us this rare glimpse into events leading to an intraoperative cardiac arrest. Because of the ability to recreate and replay clinical events, this tool should be of great interest to medical educators, researchers, and clinicians involved in quality assurance and patient safety.
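The Multi Wave Animator is a separate open-source tool; the snippet below is only a bare-bones Python illustration of the underlying idea, with a synthetic signal standing in for archived monitor data: stored numerical samples are replayed as a scrolling waveform so they appear as they did on the patient monitor.

```python
# Replay archived samples as a scrolling waveform (synthetic demo data).
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fs = 100                                          # samples per second (assumed)
t = np.arange(0, 60, 1 / fs)                      # one minute of archived data
signal = 90 + 25 * np.sin(2 * np.pi * 1.2 * t)    # synthetic pressure trace

window = 5 * fs                                   # show a 5-second sweep
fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set_xlim(0, window / fs)
ax.set_ylim(signal.min() - 5, signal.max() + 5)
ax.set_xlabel("seconds")
ax.set_ylabel("mmHg (synthetic)")

def update(frame):
    start = frame * fs // 10                      # advance a tenth of a second
    chunk = signal[start:start + window]
    line.set_data(np.arange(len(chunk)) / fs, chunk)
    return line,

anim = FuncAnimation(fig, update, frames=550, interval=100, blit=True)
plt.show()
```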

Relevance:

30.00%

Publisher:

Abstract:

Detailed knowledge of the characteristics of the radiation field shaped by a multileaf collimator (MLC) is essential in intensity modulated radiotherapy (IMRT). A previously developed multiple source model (MSM) for a 6 MV beam was extended to a 15 MV beam and supplemented with an accurate model of an 80-leaf dynamic MLC. Using the supplemented MSM and the MC code GEANT, lateral dose distributions were calculated in a water phantom and a portal water phantom. A field which is normally used for the validation of the step-and-shoot technique and a field from a realistic IMRT treatment plan delivered with dynamic MLC are investigated. To assess possible spectral changes caused by the modulation of beam intensity by an MLC, the energy spectra in five portal planes were calculated for moving slits of different widths. The extension of the MSM to 15 MV was validated by analysing energy fluences, depth doses and dose profiles. In addition, the MC-calculated primary energy spectrum was verified against an energy spectrum reconstructed from transmission measurements. MC-calculated dose profiles using the MSM for the step-and-shoot case and for the dynamic MLC case are in very good agreement with the measured data from film dosimetry. The investigation of a 13 cm wide field shows an increase in mean photon energy, for the 0.25 cm slit compared to the open beam, of up to 16% for 6 MV and up to 6% for 15 MV. In conclusion, the MSM supplemented with the dynamic MLC model has proven to be a powerful tool for investigational and benchmarking purposes or even for dose calculations in IMRT.
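A small numerical illustration of the spectral-hardening effect reported above, with invented spectra rather than the paper's data: the mean energy of a photon fluence spectrum rises when low-energy photons are preferentially removed, as happens for a narrow dynamic-MLC slit compared to the open beam.

```python
# Toy comparison of mean photon energy: open beam vs. narrow slit.
import numpy as np

energy = np.linspace(0.25, 6.0, 24)                  # MeV bins (6 MV beam)
open_beam = np.exp(-energy / 1.5)                    # toy open-beam spectrum
slit_beam = open_beam * (1 - 0.5 * np.exp(-energy))  # low energies attenuated

def mean_energy(spectrum):
    return np.sum(energy * spectrum) / np.sum(spectrum)

e_open, e_slit = mean_energy(open_beam), mean_energy(slit_beam)
print(f"open beam: {e_open:.2f} MeV, narrow slit: {e_slit:.2f} MeV, "
      f"increase: {100 * (e_slit / e_open - 1):.0f}%")
```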

Relevance:

30.00%

Publisher:

Abstract:

A multiple source model (MSM) for the 6 MV beam of a Varian Clinac 2300 C/D was developed by simulating radiation transport through the accelerator head for a set of square fields using the GEANT Monte Carlo (MC) code. The corresponding phase space (PS) data enabled the characterization of 12 sources representing the main components of the beam defining system. By parametrizing the source characteristics and by evaluating the dependence of the parameters on field size, it was possible to extend the validity of the model to arbitrary rectangular fields which include the central 3 x 3 cm2 field without additional precalculated PS data. Finally, a sampling procedure was developed in order to reproduce the PS data. To validate the MSM, the fluence, energy fluence and mean energy distributions determined from the original and the reproduced PS data were compared and showed very good agreement. In addition, the MC calculated primary energy spectrum was verified by an energy spectrum derived from transmission measurements. Comparisons of MC calculated depth dose curves and profiles, using original and PS data reproduced by the MSM, agree within 1% and 1 mm. Deviations from measured dose distributions are within 1.5% and 1 mm. However, the real beam leads to some larger deviations outside the geometrical beam area for large fields. Calculated output factors in 10 cm water depth agree within 1.5% with experimentally determined data. In conclusion, the MSM produces accurate PS data for MC photon dose calculations for the rectangular fields specified.
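A toy sketch (all parameters invented) of the kind of sampling procedure the abstract mentions: instead of replaying stored phase-space files, particles are reproduced by drawing the energy from a parametrized spectrum via inverse-CDF sampling and the origin from a Gaussian focal spot. The spectrum shape, focal-spot width and seed are placeholders, not the model's fitted parameters.

```python
# Reproduce phase-space particles from a parametrized source (illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Parametrized primary-source spectrum for a 6 MV beam (toy shape).
energy_grid = np.linspace(0.25, 6.0, 200)            # MeV
spectrum = energy_grid * np.exp(-energy_grid / 1.2)  # toy fluence per bin

cdf = np.cumsum(spectrum)
cdf /= cdf[-1]

def sample_particles(n):
    """Draw n particles: energy by inverse-CDF lookup, origin from a focal spot."""
    energies = np.interp(rng.random(n), cdf, energy_grid)
    x, y = rng.normal(0.0, 0.1, size=(2, n))          # ~1 mm focal spot (cm units)
    return energies, x, y

energies, x, y = sample_particles(100000)
print(f"mean sampled energy: {energies.mean():.2f} MeV")
```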