985 results for Source code visualization
Abstract:
The applications of the Finite Element Method (FEM) to three-dimensional domains are already well documented in the framework of Computational Electromagnetics. However, despite the power and reliability of this technique for solving partial differential equations, only a few open source codes are available that are dedicated to solid modeling and automatic constrained tetrahedralization, which are the most time-consuming steps in a typical three-dimensional FEM simulation. Moreover, these open source codes are usually developed separately by distinct software teams, and even under conflicting specifications. In this paper, we describe an experiment in open source code integration for solid modeling and automatic mesh generation. The integration strategy and techniques are discussed, and examples and performance results are given, especially for complicated, irregular volumes that are not simply connected. © 2011 IEEE.
Abstract:
In this work, a graphical user interface (GUI) is implemented using Nokia's Qt toolkit (version 3.0). The interface aims to simplify the creation of scenarios for parallel simulations based on the Local Nonorthogonal Finite-Difference Time-Domain (LN-FDTD) numerical technique, applied to solve Maxwell's equations. The simulator was developed in the C programming language and parallelized using threads, employing the pthread library. The 3D visualization of the scenario to be simulated (and of the mesh) is performed by a purpose-built program that uses the OpenGL library. To improve development and meet the goals of the computational project, Software Engineering concepts were applied, such as the prototyping software process model. By keeping the user from interacting directly with the simulation source code, the probability of human error during scenario construction is minimized. To demonstrate the developed tool, a study was carried out on the effect of conductor sag in low-voltage power lines on the transient voltages induced in those lines by lightning discharges. The voltages induced at the building's power outlets are also studied.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
KIVA is an open source Computational Fluid Dynamics (CFD) code capable of computing transient two- and three-dimensional chemically reactive fluid flows with sprays. The latest version in the family of KIVA codes is KIVA-4, which is capable of handling unstructured meshes. This project focuses on the implementation of a Conjugate Heat Transfer (CHT) code in KIVA-4. A previous version of KIVA with a conjugate heat transfer code was developed at Michigan Technological University by Egel Urip and is used in this project. The first phase of the project studied the differences in code structure between the previous version of KIVA and KIVA-4, which was the most challenging part of the project. The second phase involved reverse engineering, in which the CHT code in the previous version was extracted and implemented in KIVA-4 according to the new code structure. The implemented code was validated using a 4-valve pentroof engine case. A solid cylinder wall surrounding three quarters of the engine cylinder was built using GRIDGEN, and the heat transfer to the solid wall during one engine cycle (0-720 crank angle degrees) was compared with the reference result. The reference results are simply the same engine case run in the previous version with the original code developed by Egel. The results of the current code closely match the reference results, which verifies the successful implementation of the CHT code in KIVA-4.
Abstract:
Mainstream IDEs such as Eclipse support developers in managing software projects mainly by offering static views of the source code. Such a static perspective neglects any information about runtime behavior. However, object-oriented programs heavily rely on polymorphism and late binding, which makes them difficult to understand based on their static structure alone. Developers thus resort to debuggers or profilers to study the system's dynamics. However, the information provided by these tools is volatile and hence cannot be exploited to ease the navigation of the source space. In this paper we present an approach to augment the static source perspective with dynamic metrics such as precise runtime type information, or memory and object allocation statistics. Dynamic metrics can improve the understanding of a system's behavior and structure. We rely on dynamic data gathering based on aspects to analyze running Java systems. By solving concrete use cases we illustrate how dynamic metrics directly available in the IDE are useful. We also comprehensively report on the efficiency of our approach to gathering dynamic metrics.
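To make the aspect-based gathering concrete, here is a minimal AspectJ-style sketch; the aspect name, pointcut, and reporting hook are hypothetical illustrations of the approach, not the authors' actual implementation.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: intercept object allocations in a running Java
    // system and record precise runtime types and per-site allocation counts,
    // which an IDE view could then display next to the static source code.
    public aspect DynamicMetricsCollector {
        private final Map<String, Integer> allocations = new HashMap<String, Integer>();

        // Matches every constructor call outside the collector itself.
        pointcut anyAllocation(): call(*.new(..)) && !within(DynamicMetricsCollector);

        after() returning(Object created): anyAllocation() {
            // The enclosing method is the allocation site to annotate in the IDE.
            String site = thisEnclosingJoinPointStaticPart.getSignature().toString();
            Integer count = allocations.get(site);
            allocations.put(site, count == null ? 1 : count + 1);
            // created.getClass() yields the precise runtime type, which may be
            // a subtype hidden behind a polymorphic declaration.
            System.out.println(site + " allocated " + created.getClass().getName());
        }
    }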
Resumo:
Maintaining object-oriented systems that use inheritance and polymorphism is difficult, since runtime information, such as which methods are actually invoked at a call site, is not visible in the static source code. We have implemented Senseo, an Eclipse plugin enhancing Eclipse's static source views with various dynamic metrics, such as runtime types, the number of objects created, or the amount of memory allocated in particular methods.
Abstract:
Code clone detection helps connect developers across projects, if we do it on a large scale. The cornerstones that allow clone detection to work on a large scale are: (1) bad hashing, (2) lightweight parsing using regular expressions, and (3) MapReduce pipelines. Bad hashing means determining whether two artifacts are similar by checking whether their hashes are identical. We show a bad hashing scheme that works well on source code. Lightweight parsing using regular expressions is our technique for obtaining entire parse trees from regular expressions, robustly and efficiently. We detail the algorithm and implementation of one such regular expression engine. MapReduce pipelines are a way of expressing a computation so that it can be parallelized automatically and simply. We detail the design and implementation of one such MapReduce pipeline that is efficient and debuggable. We show a clone detector that combines these cornerstones to detect code clones across all projects and across all versions of each project.
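As a concrete illustration of the bad-hashing idea, the following Java sketch normalizes two code fragments before hashing, so that superficial differences (comments, whitespace, literal values) no longer affect the hash; the normalization rules here are illustrative assumptions, not the scheme from the work itself.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // Minimal "bad hashing" sketch: two fragments count as clones when their
    // hashes collide after aggressive normalization.
    public class BadHash {
        static String normalize(String code) {
            return code
                .replaceAll("//.*|/\\*(?s:.*?)\\*/", "")   // drop comments
                .replaceAll("\"(?:\\\\.|[^\"])*\"", "S")   // collapse string literals
                .replaceAll("\\b\\d+\\b", "N")             // collapse numeric literals
                .replaceAll("\\s+", " ")                   // collapse whitespace
                .trim();
        }

        static String hash(String code) throws Exception {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(normalize(code).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            String a = "int sum = 0; // total\nfor (int i = 0; i < 10; i++) sum += i;";
            String b = "int sum = 0;  for (int i = 0; i < 99; i++) sum += i;";
            // Identical hashes: the fragments are reported as clones despite
            // differing comments, spacing, and literals.
            System.out.println(hash(a).equals(hash(b)));
        }
    }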
Abstract:
Background: Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is taken care of automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive, and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only a few tools exist that provide this kind of processing interface; they are usually quite task-specific, and they do not provide a clear path for shaping a new command line tool out of a prototype shell script. Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the need to touch or recompile existing code. Conclusion: In this article, we describe the general design of MIA, a general-purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
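A hedged sketch of this workflow is shown below, written with Java's ProcessBuilder to stay self-contained; the tool names and the "plugin:parameter" strings are illustrative assumptions modeled on the description above, not MIA's actual interface.

    import java.io.IOException;

    // Hedged sketch of prototyping with single-task command line tools:
    // each step reads its input from disk and writes its result back to disk,
    // so working memory stays small even for large data sets.
    public class PipelineSketch {
        static void run(String... command) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(command).inheritIO().start();
            if (p.waitFor() != 0)
                throw new IOException("step failed: " + String.join(" ", command));
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical single-task tools chained via intermediate files,
            // with filters selected by string-based descriptions.
            run("mia-imagefilter", "-i", "input.png", "-o", "smooth.png", "gauss:w=2");
            run("mia-imagefilter", "-i", "smooth.png", "-o", "mask.png", "binarize:min=128");
        }
    }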
Abstract:
This work concerns the development of a computational system for data generation and presentation of results, specific to building structures. The developed routines are meant to work together with a computational system for structural analysis based on the Finite Element Method, covering both floor structures, using bar, plate/shell, and spring elements, and bracing structures, using three-dimensional bar elements and special features such as master nodes and rigid segments. The programming language adopted for these routines is DELPHI's Object Pascal, a visual programming environment built on Object Pascal's object-oriented programming. This choice aims at a computational system in which changes and additions of functions can be made easily, without the whole set of programs having to be analyzed and modified. Finally, the program should serve as a true environment for the analysis of building structures, controlling, through a user-friendly interface, a series of other programs already developed in FORTRAN, such as those for the design of beams, columns, etc.
Abstract:
Extensible Business Reporting Language (XBRL) is being adopted by European regulators as a data standard for the exchange of business information. This paper examines the approach of XBRL International (XII) to the meta-data standard's development and diffusion. We theorise the development of XBRL using concepts drawn from a model of successful open source projects. Comparing the open source model to XBRL enables us to identify a number of interesting similarities and differences. In common with open source projects, the benefits and progress of XBRL have been overstated and 'hyped' by enthusiastic participants. While XBRL is an open data standard in terms of access to the equivalent of its 'source code', we find that the governance structure of the XBRL consortium differs significantly from a model open source approach. The barriers to participation created by requiring paid membership and by a focus on transacting business at physical conferences and meetings are identified as particularly critical. Decisions about the technical structure of XBRL, the regulator-led pattern of adoption, and the organisation of XII are discussed. Finally, areas for future research are identified.
Abstract:
This paper was presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June 2006.
Abstract:
In recent years, a surprising new phenomenon has emerged in which globally-distributed online communities collaborate to create useful and sophisticated computer software. These open source software groups are comprised of generally unaffiliated individuals and organizations who work in a seemingly chaotic fashion and who participate on a voluntary basis without direct financial incentive. The purpose of this research is to investigate the relationship between the social network structure of these intriguing groups and their level of output and activity, where social network structure is defined as (1) closure or connectedness within the group, (2) bridging ties which extend outside of the group, and (3) leader centrality within the group. Based on well-tested theories of social capital and centrality in teams, propositions were formulated which suggest that social network structures associated with successful open source software project communities will exhibit high levels of bridging and moderate levels of closure and leader centrality. The research setting was the SourceForge hosting organization, and a study population of 143 project communities was identified. Independent variables included measures of closure and leader centrality defined over conversational ties, along with measures of bridging defined over membership ties. Dependent variables included source code commits and software releases for community output, and software downloads and project site page views for community activity. A cross-sectional study design was used and archival data were extracted and aggregated for the two-year period following the first release of project software. The resulting compiled variables were analyzed using multiple linear and quadratic regressions, controlling for group size and conversational volume. Contrary to theory-based expectations, the surprising results showed that successful project groups exhibited low levels of closure and that the levels of bridging and leader centrality were not important factors of success. These findings suggest that the creation and use of open source software may represent a fundamentally new socio-technical development process which disrupts the team paradigm and which triggers the need for building new theories of collaborative development. These new theories could point towards the broader application of open source methods for the creation of knowledge-based products other than software.
Abstract:
Context: Obfuscation is a common technique used to protect software against malicious reverse engineering. Obfuscators manipulate the source code to make it harder to analyze and more difficult for an attacker to understand. Although different obfuscation algorithms and implementations are available, they have never been directly compared in a large-scale study. Aim: This paper aims at evaluating and quantifying the effect of several different obfuscation implementations (both open source and commercial), to help developers and project managers decide which one to adopt. Method: In this study we applied 44 obfuscations to 18 subject applications covering a total of 4 million lines of code. The effectiveness of these source code obfuscations has been measured using 10 code metrics, considering modularity, size, and complexity of code. Results: Results show that some of the considered obfuscations are effective in making code metrics change substantially from the original to the obfuscated code, although this change (called the potency of the obfuscation) differs across metrics. In the paper we recommend which obfuscations to select, given the security requirements of the software to be protected.
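For illustration, the following assumed Java example (not drawn from the study's subject applications) shows how identifier renaming plus an inserted opaque predicate preserves behavior while inflating size and complexity metrics, the kind of change the paper measures as potency.

    // Assumed illustration: the same computation before and after obfuscation.
    public class ObfuscationPotency {
        // Original: descriptive names and a single straightforward loop.
        static int sumOfSquares(int limit) {
            int total = 0;
            for (int i = 1; i <= limit; i++) total += i * i;
            return total;
        }

        // Obfuscated equivalent: renamed identifiers plus an opaque predicate.
        // d * d - d = d * (d - 1) is always even, so the branch is always taken
        // and behavior is unchanged, but size and cyclomatic complexity grow.
        static int a(int b) {
            int c = 0, d = 1;
            while (d <= b) {
                if ((d * d - d) % 2 == 0) c += d * d;
                d++;
            }
            return c;
        }

        public static void main(String[] args) {
            System.out.println(sumOfSquares(10) == a(10)); // true: same behavior
        }
    }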