945 results for Source code visualization


Relevance:

80.00%

Publisher:

Abstract:

The design and implementation of an ERP system involves capturing the information necessary to implement the structure and behavior that support enterprise management. This process should start at the enterprise modeling level and finish at the coding level, going down through the different abstraction layers. In the case of Free/Open Source ERP, the lack of proper modeling methods and tools jeopardizes the advantages of source code availability. Moreover, the culture of open source communities, with its distributed, decentralized decision making and source-code-driven development, generally does not rely on methods for modeling the higher abstraction levels necessary for an ERP solution. The aim of this paper is to present a model-driven development process for the open source ERP ERP5. The proposed process covers the different abstraction levels involved, taking into account well-established standards and common practices as well as new approaches, by supplying Enterprise, Requirements, Analysis, Design, and Implementation workflows. Copyright 2008 ACM.

Relevance:

80.00%

Publisher:

Abstract:

The purpose of this work is to predict the minimum fluidization velocity Umf in a gas-solid fluidized bed. The study was carried out with an experimental apparatus for sand particles with diameters between 310 μm and 590 μm and a density of 2,590 kg/m³. The experimental results were compared with numerical simulations developed with the MFIX (Multiphase Flow with Interphase eXchange) open source code [1] for three different particle sizes: 310 μm, 450 μm, and 590 μm. A homogeneous mixture of the three kinds of particles was also studied. The influence of the particle diameter was presented and discussed. The Ergun equation was also used to describe the minimum fluidization velocity. The experimental data showed good agreement with the Ergun equation and the numerical simulations. Copyright © 2011 by ASME.
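
Since the abstract leans on the Ergun equation for Umf, the following is a minimal sketch of that estimate, solving the Ergun pressure-drop balance at minimum fluidization. The sand density and diameters come from the abstract; the gas properties, voidage at minimum fluidization, and sphericity below are assumed values, not taken from the paper.

```python
import math

# Ergun-based estimate of the minimum fluidization velocity U_mf.
RHO_S = 2590.0   # particle density [kg/m^3] (from the abstract)
RHO_G = 1.2      # air density [kg/m^3] (assumed, ambient conditions)
MU_G = 1.8e-5    # air viscosity [Pa.s] (assumed)
EPS_MF = 0.45    # bed voidage at minimum fluidization (assumed)
PHI = 0.86       # particle sphericity (assumed)
G = 9.81         # gravitational acceleration [m/s^2]

def u_mf(dp):
    """Solve the Ergun pressure-drop balance at minimum fluidization for U_mf."""
    a = 1.75 * RHO_G / (PHI * EPS_MF**3 * dp)                          # inertial term
    b = 150.0 * MU_G * (1.0 - EPS_MF) / (PHI**2 * EPS_MF**3 * dp**2)   # viscous term
    c = (RHO_S - RHO_G) * G                                            # buoyant bed weight per unit volume
    return (-b + math.sqrt(b * b + 4.0 * a * c)) / (2.0 * a)

for dp in (310e-6, 450e-6, 590e-6):
    print(f"d_p = {dp * 1e6:.0f} um  ->  U_mf ~ {u_mf(dp):.3f} m/s")
```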

Relevance:

80.00%

Publisher:

Abstract:

A routine was developed in C++ for the processing of social and environmental census data acquired by the Brazilian Institute of Geography and Statistics (IBGE). The routine employs a simple graphical environment. The data generated are presented in a tabular format, which facilitates a broad and objective view of the values, and provides a convenient means of querying the database. The source code used to develop the routine permits updates and changes, as required by the user. Statistical and mathematical analysis enables the generation of social and environmental indicators, together with quantitative and qualitative classification of the socio-environmental quality of the region analyzed. As an example, the routine was applied using census data for the city of Sorocaba (São Paulo State, Brazil), including conditions of household occupation, water supply, sanitation, level of education, income, and other factors. It is envisaged that the proposed analytical model will assist professionals from different fields of research and teaching to develop urban planning and management strategies.

Relevance:

80.00%

Publisher:

Abstract:

This paper describes strategies and techniques to model and automatically generate meshes of the aorta artery and its tunics (the adventitia, media, and intima walls) using open source codes. The models were constructed in the Blender package, and Python scripts were used to export the data necessary for mesh generation in TetGen. The proposed strategies can produce meshes of complicated, irregular volumes with a large number of elements (approximately 12,000,000 tetrahedra). These meshes can be used to perform computational simulations with the Finite Element Method (FEM). © Published under licence by IOP Publishing Ltd.
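
As a rough illustration of the Blender-to-TetGen hand-off described above, the sketch below dumps a triangulated Blender object to a TetGen .smesh surface file. The object name "Aorta", the output path, and the omission of boundary markers are assumptions for illustration; this is not the authors' script.

```python
# Run inside Blender's Python console or with:
#   blender --background your_scene.blend --python export_smesh.py
import bpy

obj = bpy.data.objects["Aorta"]      # hypothetical object name
mesh = obj.data
mesh.calc_loop_triangles()           # triangulate quads/ngons on the fly

with open("/tmp/aorta.smesh", "w") as f:
    # node section: <#nodes> <dim> <#attributes> <#boundary markers>
    f.write(f"{len(mesh.vertices)} 3 0 0\n")
    for i, v in enumerate(mesh.vertices):
        x, y, z = obj.matrix_world @ v.co            # world-space coordinates
        f.write(f"{i} {x:.6f} {y:.6f} {z:.6f}\n")
    # facet section: <#facets> <#boundary markers>, one triangle per line
    f.write(f"{len(mesh.loop_triangles)} 0\n")
    for tri in mesh.loop_triangles:
        a, b, c = tri.vertices
        f.write(f"3 {a} {b} {c}\n")
    f.write("0\n0\n")                                # no holes, no region attributes

# TetGen can then tetrahedralize the surface, e.g.: tetgen -pq /tmp/aorta.smesh
```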

Relevance:

80.00%

Publisher:

Abstract:

Identifying opportunities for parallelism in software is a task that takes a lot of human time, but once certain code patterns associated with parallelism are identified, software can accomplish this task quickly. Automating this process therefore brings many benefits, such as saving time and reducing errors introduced by the programmer [1]. This work aims to develop a software environment that identifies opportunities for parallelism in source code written in the C language and generates a program with the same behavior but a higher degree of parallelism, targeting graphics processors based on the CUDA architecture.
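
To make the idea of recognizing "code patterns for parallelism" concrete, here is a toy sketch, not the environment described in the abstract, that flags simple C for loops whose iterations appear independent and are therefore candidates for a one-thread-per-element CUDA kernel.

```python
import re

# Naive, regex-based heuristic: accept a loop only if every array write in its
# body is indexed by the loop variable itself (no obvious cross-iteration dependency).
LOOP_RE = re.compile(
    r"for\s*\(\s*int\s+(\w+)\s*=\s*0\s*;\s*\1\s*<\s*(\w+)\s*;\s*\1\+\+\s*\)\s*"
    r"\{(?P<body>[^{}]*)\}",
    re.S,
)

def parallel_loop_candidates(c_source: str):
    """Yield (loop_var, bound, body) for loops that look trivially parallelizable."""
    for m in LOOP_RE.finditer(c_source):
        var, bound, body = m.group(1), m.group(2), m.group("body")
        writes = re.findall(r"(\w+)\s*\[([^\]]+)\]\s*=", body)
        if writes and all(idx.strip() == var for _, idx in writes):
            yield var, bound, body.strip()

example = "for (int i = 0; i < n; i++) { c[i] = a[i] + b[i]; }"
for var, bound, body in parallel_loop_candidates(example):
    print(f"candidate loop over {var} < {bound}: {body}")
```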

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a proposal for automating the camera calibration process by locating and measuring image points in coded targets with sub-pixel precision. This automatic technique helps minimize localization errors regardless of camera orientation and image scale. To develop the technique, several types of coded targets were analyzed, and the ARUCO type was chosen for its simplicity, its ability to represent up to 1024 different targets, and the availability of source code implemented with the OpenCV library. ARUCO targets were generated and two calibration sheets were assembled for the acquisition of images for camera calibration. The developed software locates the targets in the acquired images and automatically extracts the coordinates of the four corners with sub-pixel accuracy. Experiments with real data showed that the targets are correctly identified unless excessive noise or fragmentation occurs, mainly in the outer target square. The results of calibrating a low-cost camera showed that the process works and that the measurement accuracy of the corners achieves sub-pixel precision.
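
A minimal sketch of the detection-plus-refinement step described above, using OpenCV's aruco module. The dictionary choice and window sizes are assumptions, and the aruco API names differ slightly across OpenCV versions.

```python
import cv2
import numpy as np

# The original ArUco dictionary can encode up to 1024 distinct targets.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_ARUCO_ORIGINAL)

def detect_subpixel_corners(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # OpenCV >= 4.7 moved this call to cv2.aruco.ArucoDetector(...).detectMarkers(gray)
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    refined = []
    for quad in corners:                          # one (1, 4, 2) float32 array per marker
        pts = quad.reshape(-1, 1, 2).astype(np.float32)
        cv2.cornerSubPix(gray, pts, (5, 5), (-1, -1), criteria)   # sub-pixel refinement
        refined.append(pts.reshape(-1, 2))
    return ids, refined

# The refined 2D corners, paired with the known 3D coordinates of the corners on
# the calibration sheet, would then feed cv2.calibrateCamera(...).
```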

Relevance:

80.00%

Publisher:

Abstract:

Background: The search for enriched (also known as over-represented or enhanced) ontology terms in a list of genes obtained from microarray experiments is becoming a standard procedure for system-level analysis. This procedure tries to summarize the information by focusing on classification schemes such as Gene Ontology and KEGG pathways instead of on individual genes. Although it is well known in statistics that significance and association are distinct concepts, only the former has been used to deal with the ontology term enrichment problem. Results: BayGO implements a Bayesian approach to search for enriched terms from microarray data. The R source code is freely available at http://blasto.iq.usp.br/~tkoide/BayGO in three versions: Linux, which can be easily incorporated into pre-existing pipelines; Windows, to be controlled interactively; and a web tool. The software was validated using a bacterial heat shock response dataset, since this stress triggers known system-level responses. Conclusion: The Bayesian model accounts for the fact that not all genes from a given category may be observable in microarray data, due to low signal intensity, quality filters, genes that were not spotted, and so on. Moreover, BayGO allows one to measure the statistical association between generic ontology terms and differential expression instead of working only with the common significance analysis.
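
For contrast with BayGO's Bayesian association measure, the following is a minimal sketch of the conventional significance analysis mentioned in the conclusion: a one-sided hypergeometric (Fisher-type) test for over-representation of an ontology term in a gene list. The counts are made up for illustration.

```python
from scipy.stats import hypergeom

def enrichment_p_value(n_genome, n_term, n_selected, n_term_selected):
    """P(X >= n_term_selected) when drawing n_selected genes from a genome of
    n_genome genes, of which n_term are annotated with the term."""
    return hypergeom.sf(n_term_selected - 1, n_genome, n_term, n_selected)

# e.g. 4000 genes on the array, 120 annotated with the term,
# 200 differentially expressed genes, 15 of them carrying the term
print(enrichment_p_value(4000, 120, 200, 15))
```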

Relevance:

80.00%

Publisher:

Abstract:

Background: A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. The most common way to decide whether a given probability is sufficient is Bayesian binary classification, in which the probability under the model characterizing the sequence family of interest is compared to that under an alternative probability model; a null model can serve as this alternative. This is the scoring technique used by sequence analysis tools such as HMMER, SAM, and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution, and the target sequence distribution. This paper presents a study evaluating the impact of the choice of null model on the final classification result. In particular, we are interested in minimizing the number of false predictions in a classification, a crucial issue for reducing the cost of biological validation. Results: In all tests with random sequences, the target null model produced the lowest number of false positives. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also be valid for protein sequences. To broaden the applicability of the results, the study used randomly generated sequences; previous studies were performed on amino acid sequences, used only one probabilistic model (HMM) and a specific benchmark, and lack more general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results. Conclusions: Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the uniform model exhibits a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies. In these cases the target model is more dependable for biological validation due to its higher specificity.
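
A minimal sketch of the scoring scheme the paper evaluates: a per-residue log-odds (bit) score of a DNA sequence under a toy position-independent family model, compared against two of the null models discussed, the uniform null and the target-composition null. The family composition below is an arbitrary assumption.

```python
import math
from collections import Counter

FAMILY = {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15}   # assumed GC-rich family model

def log_odds(seq, null):
    """Sum of per-residue log2( P(x|family) / P(x|null) ), i.e. a bit score."""
    return sum(math.log2(FAMILY[x] / null[x]) for x in seq)

def uniform_null(_seq):
    return {b: 0.25 for b in "ACGT"}

def target_null(seq):
    counts = Counter(seq)
    return {b: (counts[b] + 1) / (len(seq) + 4) for b in "ACGT"}   # add-one smoothing

seq = "GCGCGGCCATGCGCGC"
print("uniform null :", round(log_odds(seq, uniform_null(seq)), 2))
print("target null  :", round(log_odds(seq, target_null(seq)), 2))
```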

Relevance:

80.00%

Publisher:

Abstract:

IPOL is a scientific journal on digital image processing and various image analysis methods. Each publication includes a demo where anyone can test, via the web, how the method described in that publication works. In this way, the method can be used without programming knowledge and without having to install anything on one's own computer. The goal of this final-year project is to develop an application that allows these demos to be run from a mobile device, making the execution of image processing algorithms more accessible and increasing their scientific dissemination.

Relevance:

80.00%

Publisher:

Abstract:

Agent Communication Languages (ACLs) have been developed to provide a way for agents to communicate with each other, supporting cooperation in Multi-Agent Systems. In the past few years many ACLs have been proposed for Multi-Agent Systems, such as KQML and FIPA-ACL. The goal of these languages is to support high-level, human-like communication among agents, exploiting Knowledge Level features rather than symbol-level ones. Many agent platforms and prototypes have been developed that adopt these ACLs, mainly the FIPA-ACL specifications. Despite these efforts, an important issue in ACL research is still open: how these languages should deal, at the Knowledge Level, with possible failures of agents. Indeed, the notion of Knowledge Level cannot be straightforwardly extended to a distributed framework such as MASs, because problems concerning communication and concurrency (for example, deadlock or starvation) may arise when several Knowledge Level agents interact. The main contribution of this thesis is the design and implementation of NOWHERE, a platform to support Knowledge Level agents on the Web. NOWHERE exploits an advanced Agent Communication Language, FT-ACL, which provides high-level fault-tolerant communication primitives and satisfies a set of well-defined Knowledge Level programming requirements. NOWHERE is well integrated with current technologies, for example providing full integration with Web services. Because it supports different message-transport middleware, it can be adapted to various scenarios. In this thesis we present the design and implementation of the architecture, together with a discussion of the most interesting details and a comparison with other emerging agent platforms. We also present several case studies in which we discuss the benefits of programming agents using the NOWHERE architecture, comparing the results with other solutions. Finally, the complete source code of the basic examples can be found in the appendix.

Relevance:

80.00%

Publisher:

Abstract:

Atmospheric aerosol particles affect people and the environment in many ways. Accurate characterization of the particles helps to understand their effects and to assess the consequences. Particles can be characterized by their size, their shape, and their chemical composition. Laser ablation mass spectrometry makes it possible to determine the size and chemical composition of individual aerosol particles. In this work, the SPLAT (Single Particle Laser Ablation Time-of-flight mass spectrometer) was further developed for improved analysis of atmospheric aerosol particles in particular. The aerosol inlet was optimized to transfer as wide a particle size range as possible (80 nm - 3 µm) into the SPLAT and to focus the particles into a narrow beam. A new description of the relationship between particle size and particle velocity in vacuum was found. The alignment of the inlet was automated using stepper motors. The optical detection of the particles was improved so that particles smaller than 100 nm can be detected. Building on the optical detection and the automatic tilting of the inlet, a new method for characterizing the particle beam was developed. The control electronics of the SPLAT were improved so that the maximum analysis frequency is limited only by the ablation laser, which can ablate at a rate of at most about 10 Hz. By optimizing the vacuum system, the ion loss in the mass spectrometer was reduced by a factor of 4.

In addition to these hardware developments of the SPLAT, a large part of this work consisted of designing and implementing a software solution for analyzing the raw data acquired with the SPLAT. CRISP (Concise Retrieval of Information from Single Particles) is a software package built on IGOR PRO (Wavemetrics, USA) that allows efficient evaluation of the single-particle raw data. CRISP contains a newly developed algorithm for the automatic mass calibration of each individual mass spectrum, including the suppression of noise and of problems with signals that exhibit intense tailing. CRISP provides methods for the automatic classification of particles; implemented are k-means, fuzzy c-means, and a form of hierarchical partitioning based on a minimum spanning tree. CRISP offers options to pre-process the data so that the automatic classification of the particles runs faster and the results are of higher quality. In addition, CRISP can easily sort particles according to given criteria. The data structures and infrastructure underlying CRISP were designed with maintenance and extensibility in mind.

In the course of this work the SPLAT was successfully deployed in several campaigns, and the capabilities of CRISP were demonstrated on the resulting data sets.

The SPLAT is now capable of efficient field operation for the characterization of atmospheric aerosol, while CRISP enables fast and targeted evaluation of the data.
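
As a toy illustration of the kind of automatic particle classification CRISP offers (k-means), the sketch below clusters synthetic single-particle mass spectra with scikit-learn; it is unrelated to the IGOR PRO implementation described above.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_particles, n_mz = 500, 300

# Two synthetic particle classes with different dominant m/z peaks.
spectra = rng.poisson(1.0, size=(n_particles, n_mz)).astype(float)
spectra[:250, 23] += rng.normal(50, 5, 250)    # class 1: strong peak at m/z 23
spectra[250:, 39] += rng.normal(50, 5, 250)    # class 2: strong peak at m/z 39

# Normalize each spectrum to unit total signal before clustering.
spectra /= spectra.sum(axis=1, keepdims=True)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spectra)
print(np.bincount(labels))
```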

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Physiologic data display is essential to decision making in critical care. Current displays echo first-generation hemodynamic monitors dating to the 1970s and have not kept pace with new insights into physiology or the needs of clinicians who must make progressively more complex decisions about their patients. The effectiveness of any redesign must be tested before deployment. Tools that compare current displays with novel presentations of processed physiologic data are required. Regenerating conventional physiologic displays from archived physiologic data is an essential first step. OBJECTIVES: The purposes of the study were to (1) describe the SSSI (single sensor single indicator) paradigm that is currently used for physiologic signal displays, (2) identify and discuss possible extensions and enhancements of the SSSI paradigm, and (3) develop a general approach and a software prototype to construct such "extended SSSI displays" from raw data. RESULTS: We present the Multi Wave Animator (MWA) framework, a set of open source MATLAB (MathWorks, Inc., Natick, MA, USA) scripts aimed at creating dynamic visualizations (e.g., video files in AVI format) of patient vital signs recorded from bedside (intensive care unit or operating room) monitors. Multi Wave Animator creates animations in which vital signs are displayed to mimic their appearance on current bedside monitors. The source code of MWA is freely available online together with a detailed tutorial and sample data sets.
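
MWA itself is a set of MATLAB scripts; the following is an independent, minimal Python sketch of the same idea, re-animating an archived waveform so it sweeps across the screen like a bedside monitor. The data are synthetic, the sampling rate is assumed, and saving the video requires ffmpeg on the PATH.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fs, seconds = 250, 10                          # assumed sampling rate and record length
t = np.arange(fs * seconds) / fs
ecg_like = np.sin(2 * np.pi * 1.2 * t) ** 15   # crude periodic stand-in for an ECG trace

fig, ax = plt.subplots()
line, = ax.plot([], [], lw=1)
ax.set_xlim(0, 5)                              # 5-second sweep window
ax.set_ylim(-1.2, 1.2)
ax.set_xlabel("time [s]")

def update(frame):
    # Redraw the most recent 5 s of signal, mimicking a monitor sweep.
    end = frame * 10 + fs                      # advance 10 samples per frame
    start = max(0, end - 5 * fs)
    line.set_data(t[start:end] - t[start], ecg_like[start:end])
    return (line,)

anim = FuncAnimation(fig, update, frames=200, interval=40, blit=True)
anim.save("monitor_sweep.mp4", fps=25)         # requires ffmpeg; use a GIF writer otherwise
```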

Relevance:

80.00%

Publisher:

Abstract:

"What was I working on before the weekend?" and "What were the members of my team working on during the last week?" are questions developers ask frequently. They can be answered if one keeps track of who changes what in the source code. In this work, we present Replay, a tool that allows one to replay past changes as they happened at a fine-grained level, so that a developer can watch what she has done or understand what her colleagues have done in past development sessions. With this tool, developers are able not only to understand what sequence of changes brought the system to a certain state (e.g., the introduction of a defect), but also to deduce why their colleagues performed those changes. One application of such a tool is discovering the changes that broke a developer's code.
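
A tiny sketch of the underlying idea of replaying recorded changes: if each change is stored as an event, past sessions can be replayed in order to reconstruct any intermediate state. The event shape below is an assumption for illustration, not Replay's actual change model.

```python
from dataclasses import dataclass

@dataclass
class Change:
    author: str
    timestamp: float
    entity: str        # e.g. a method's fully qualified name
    new_source: str    # source of the entity after the change

def replay(changes, up_to=None):
    """Apply changes in chronological order; return entity -> latest source."""
    state = {}
    for ch in sorted(changes, key=lambda c: c.timestamp):
        if up_to is not None and ch.timestamp > up_to:
            break
        state[ch.entity] = ch.new_source
    return state

log = [
    Change("alice", 1.0, "Cart.total", "def total(self): return 0"),
    Change("bob",   2.0, "Cart.total", "def total(self): return sum(self.items)"),
]
print(replay(log, up_to=1.5))   # state just before bob's change landed
```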

Relevance:

80.00%

Publisher:

Abstract:

A feature represents a functional requirement fulfilled by a system. Since many maintenance tasks are expressed in terms of features, it is important to establish the correspondence between a feature and its implementation in source code. Traditional approaches to establishing this correspondence exercise features to generate a trace of runtime events, which is then processed by post-mortem analysis. These approaches typically generate large amounts of data to analyze and, because of their static nature, do not support incremental and interactive analysis of features. We propose a radically different approach called live feature analysis, which provides a model of features at runtime. Our approach analyzes features on a running system, makes it possible to grow feature representations by exercising different scenarios of the same feature, and identifies execution elements down to the sub-method level. We describe how live feature analysis is implemented effectively by annotating structural representations of code based on abstract syntax trees. We illustrate our live analysis with a case study in which we achieve a more complete feature representation by exercising and merging variants of feature behavior, and we demonstrate the efficiency of our technique with benchmarks.
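
As a toy illustration of mapping an exercised feature to the code it touches, the sketch below records executed functions with a simple dynamic trace and merges two scenarios of the same feature. This is only meant to convey the feature-to-code mapping idea; the paper's approach instead annotates AST-based structural representations on a running system.

```python
import sys

def trace_feature(scenario, *args):
    """Run one feature scenario and collect the functions it executes."""
    touched = set()

    def tracer(frame, event, arg):
        if event == "call":
            code = frame.f_code
            touched.add(f"{code.co_filename}:{code.co_name}")
        return tracer

    sys.settrace(tracer)
    try:
        scenario(*args)
    finally:
        sys.settrace(None)
    return touched

def checkout(cart):            # example code under analysis
    return apply_discount(sum(cart))

def apply_discount(total):
    return total * 0.9

# Exercise the "checkout" feature with two scenarios and merge the traces.
feature_map = trace_feature(checkout, [10, 20]) | trace_feature(checkout, [5])
print(sorted(feature_map))
```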

Relevance:

80.00%

Publisher:

Abstract:

To derive tests for randomness, nonlinear-independence, and stationarity, we combine surrogates with a nonlinear prediction error, a nonlinear interdependence measure, and linear variability measures, respectively. We apply these tests to intracranial electroencephalographic (EEG) recordings from patients suffering from pharmacoresistant focal-onset epilepsy. These recordings had been performed prior to and independently of our study as part of the epilepsy diagnostics. The clinical purpose of these recordings was to delineate the brain areas to be surgically removed in each individual patient in order to achieve seizure control. This allowed us to define two distinct sets of signals: one set of signals recorded from brain areas where the first ictal EEG signal changes were detected as judged by expert visual inspection ("focal signals"), and one set of signals recorded from brain areas that were not involved at seizure onset ("nonfocal signals"). We find more rejections of both the randomness and the nonlinear-independence test for focal versus nonfocal signals. In contrast, more rejections of the stationarity test are found for nonfocal signals. Furthermore, while for nonfocal signals the rejection of the stationarity test substantially increases the rejection probability of the randomness and nonlinear-independence tests, we find a much weaker influence for the focal signals. In consequence, the contrast between the focal and nonfocal signals obtained from the randomness and nonlinear-independence tests is further enhanced when we exclude signals for which the stationarity test is rejected. To study the dependence between the randomness and nonlinear-independence tests we include only focal signals for which the stationarity test is not rejected. We show that the rejections of these two tests correlate across signals; the rejection of either test is, however, neither necessary nor sufficient for the rejection of the other test. Thus, our results suggest that EEG signals from epileptogenic brain areas are less random, more nonlinearly dependent, and more stationary compared to signals recorded from nonepileptogenic brain areas. We provide the data, source code, and detailed results in the public domain.
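
A hedged sketch of one surrogate test of the kind described above: a nearest-neighbor nonlinear prediction error on the original signal is compared with its values on phase-randomized surrogates, which preserve the linear autocorrelation structure. The predictor, the surrogate scheme, and the example signal (a logistic map) are illustrative choices, not necessarily those of the study.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Randomize Fourier phases while keeping the amplitude spectrum."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0                 # keep the mean
    phases[-1] = 0.0                # keep the Nyquist bin real for even-length signals
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

def prediction_error(x, dim=3, horizon=1):
    """Nearest-neighbor prediction error on delay vectors of length `dim`."""
    emb = np.lib.stride_tricks.sliding_window_view(x, dim)[:-horizon]
    targets = x[dim - 1 + horizon:]
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)     # exclude self-matches
    nn = d.argmin(axis=1)
    return np.mean((targets - targets[nn]) ** 2)

rng = np.random.default_rng(1)
x = np.empty(1000)
x[0] = 0.3
for i in range(1, len(x)):          # deterministic nonlinear test signal (logistic map)
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

err_orig = prediction_error(x)
err_surr = [prediction_error(phase_randomized_surrogate(x, rng)) for _ in range(19)]
# One-sided rank test: reject the linear-stochastic null (p <= 1/20) if the
# original prediction error is smaller than that of every surrogate.
print(err_orig, min(err_surr), err_orig < min(err_surr))
```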