Abstract:
Recovering the architecture is the first step towards reengineering a software system. Many reverse engineering tools use top-down exploration as a way of providing a visual and interactive process for architecture recovery. During the exploration process, the user navigates through various views on the system by choosing from several exploration operations. Although some sequences of these operations lead to views which, from the architectural point of view, are more relevant than others, current tools do not provide a way of predicting which exploration paths are worth taking and which are not. In this article we propose a set of package patterns which are used for augmenting the exploration process with information about the worthiness of the various exploration paths. The patterns are defined based on the internal package structure and on the relationships between the package and the other packages in the system. To validate our approach, we verify the relevance of the proposed patterns for real-world systems by analyzing their frequency of occurrence in six open-source software projects.
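A minimal sketch of how such package patterns might be detected from structural metrics follows; the pattern names, metrics, and thresholds are illustrative assumptions, not the definitions proposed in the article:

```python
# Illustrative sketch: classifying packages into exploration-worthiness
# patterns from simple structural metrics. The pattern names and
# thresholds below are hypothetical, not the ones defined in the paper.

from dataclasses import dataclass

@dataclass
class Package:
    name: str
    num_classes: int     # size of the internal package structure
    incoming_deps: int   # packages that depend on this one
    outgoing_deps: int   # packages this one depends on

def classify(pkg: Package) -> str:
    """Map a package to a coarse pattern that hints at how
    architecturally relevant exploring it is likely to be."""
    if pkg.num_classes == 0:
        return "empty"                    # nothing to explore
    if pkg.incoming_deps == 0 and pkg.outgoing_deps == 0:
        return "isolated"                 # probably a dead end
    if pkg.incoming_deps > 5 and pkg.outgoing_deps <= 1:
        return "widely used service"      # likely worth exploring
    if pkg.outgoing_deps > 5 and pkg.incoming_deps <= 1:
        return "consumer / facade"
    return "regular"

print(classify(Package("core", num_classes=42, incoming_deps=9, outgoing_deps=1)))
```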
Abstract:
Data visualization is the process of representing data as pictures to support reasoning about the underlying data. For the interpretation to be as easy as possible, we need to be as close as possible to the original data. As most visualization tools have an internal meta-model that differs from the one of the presented data, they usually need to duplicate the original data to conform to their meta-model. This leads to an increase in the resources needed, an increase that is not always justified. In this work we argue for the need for an engine that is as close as possible to the data, and we present our solution of moving the visualization tool to the data, instead of moving the data to the visualization tool. Our solution also emphasizes the necessity of reusing basic blocks to express complex visualizations and of allowing the programmer to script the visualization using their preferred tools, rather than a third-party format. As a validation of the expressiveness of our framework, we show how we express several already published visualizations and describe the pros and cons of the approach.
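To illustrate the idea of moving the tool to the data, the hypothetical sketch below maps domain objects to shapes through accessor functions instead of copying them into a visualization-specific meta-model; the API is an assumption for illustration, not the framework's actual interface:

```python
# Hypothetical sketch of a visualization engine that keeps references to
# the original objects and derives visual properties lazily via accessor
# functions, instead of duplicating the data into its own meta-model.

class ShapeView:
    def __init__(self, entities, width=None, height=None, color=None):
        self.entities = entities   # the original data, not a copy
        self.width = width or (lambda e: 10)
        self.height = height or (lambda e: 10)
        self.color = color or (lambda e: "gray")

    def render(self):
        # A real engine would draw; here we just describe each shape.
        for e in self.entities:
            print(f"{e['name']}: {self.width(e)}x{self.height(e)} {self.color(e)}")

# Usage: visualize classes as boxes sized by their metrics.
classes = [{"name": "Parser", "methods": 30, "lines": 800},
           {"name": "Token", "methods": 4, "lines": 60}]
view = ShapeView(classes,
                 width=lambda c: c["methods"],
                 height=lambda c: c["lines"] // 10,
                 color=lambda c: "red" if c["lines"] > 500 else "gray")
view.render()
```

Because the accessors close over the original objects, no parallel copy of the data has to be built or kept in sync.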
Abstract:
BACKGROUND: Cystic fibrosis (CF) is associated with at least 1 pathogenic point sequence variant on each CFTR allele. Some symptomatic patients, however, have only 1 detectable pathogenic sequence variant and carry, on the other allele, a large deletion that is not detected by conventional screening methods. METHODS: For relative quantitative real-time PCR detection of large deletions in the CFTR gene, we designed DNA-specific primers for each exon of the gene and primers for a reference gene (beta2-microglobulin). For PCR we used a LightCycler system (Roche) and calculated the gene-dosage ratio of CFTR to beta2-microglobulin. We tested the method by screening all 27 exons in 3 healthy individuals and 2 patients with only 1 pathogenic sequence variant. We then performed specific deletion screenings in 10 CF patients with known large deletions and a blinded analysis in which we screened 24 individuals for large deletions by testing 8 of 27 exons. RESULTS: None of the ratios for control samples were false positive (for deletions or duplications); moreover, for all samples from patients with known large deletions, the calculated ratios for deleted exons were close to 0.5. In addition, the results from the blinded analysis demonstrated that our method can also be used for the screening of single individuals. CONCLUSIONS: The LightCycler assay allows reliable and rapid screening for large deletions in the CFTR gene and detects the copy number of all 27 exons.
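As a rough illustration of such a gene-dosage computation, the sketch below uses the common 2^-ΔΔCt relative-quantification formula with beta2-microglobulin as the reference gene; the exact calibration used in the published assay is an assumption here:

```python
# Sketch of a gene-dosage computation in the style of the 2^-ddCt method.
# Assumes 100% PCR efficiency; the paper's actual calibration may differ.

def dosage_ratio(ct_cftr_sample, ct_b2m_sample, ct_cftr_control, ct_b2m_control):
    """Relative copy number of a CFTR exon versus beta2-microglobulin,
    normalized to a healthy two-copy control. ~1.0 means two copies,
    ~0.5 a heterozygous deletion, ~1.5 a duplication."""
    d_ct_sample = ct_cftr_sample - ct_b2m_sample
    d_ct_control = ct_cftr_control - ct_b2m_control
    return 2 ** -(d_ct_sample - d_ct_control)

# Example: the exon crosses threshold one cycle later in the patient
# relative to the reference gene -> half the template, a deleted exon.
print(round(dosage_ratio(26.0, 20.0, 25.0, 20.0), 2))  # 0.5
```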
Abstract:
BACKGROUND: For cystic fibrosis (CF), as for many other hereditary diseases, there is still a lack of understanding of the relationship between genetic (e.g., allelic) and phenotypic diversity. Therefore, methods which allow fine quantification of allelic proportions of mRNA transcripts are of high importance. METHODS: We used either genomic DNA (gDNA) or total RNA extracted from nasal cells as the starting nucleic acid template for our assay. The subjects included in this study were 9 CF patients compound heterozygous for the F508del mutation, one F508del homozygous patient, and one wild-type homozygous individual. We established a novel ligation-based quantification method which allows fine quantification of the allelic proportions of single-stranded (ss) and double-stranded (ds) CFTR cDNA. To verify the reliability and accuracy of this novel assay we compared it with semiquantitative fluorescent PCR (SQF-PCR). RESULTS: We established a novel assay for ligation-based allele-specific quantification (LASQ) of gene expression which combines the benefits of the specificity of the ligation reaction and the accuracy of quantitative real-time PCR. The comparison with SQF-PCR clearly demonstrates that LASQ allows fine quantification of allelic proportions. CONCLUSION: This assay represents an alternative to other fine quantitative methods such as ARMS PCR and Pyrosequencing.
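A toy sketch of the final step, turning allele-specific quantification values into allelic proportions; the function and input values are hypothetical illustrations, not part of the published assay:

```python
# Hypothetical sketch: converting allele-specific quantification values
# (e.g., from allele-specific ligation products measured by qPCR)
# into allelic proportions. The numbers below are made up.

def allelic_proportions(q_wild_type: float, q_f508del: float):
    """Return the fraction of transcripts attributed to each allele."""
    total = q_wild_type + q_f508del
    return q_wild_type / total, q_f508del / total

# A compound-heterozygous patient whose F508del allele accounts for
# roughly 40% of total CFTR mRNA:
wt, mut = allelic_proportions(6.0e4, 4.0e4)
print(f"wild type: {wt:.0%}, F508del: {mut:.0%}")  # wild type: 60%, F508del: 40%
```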
Abstract:
This paper presents a case study of analyzing a legacy PL/1 ecosystem that has grown for 40 years to support the business needs of a large banking company. To support the stakeholders in analyzing it, we developed St1-PL/1, a tool that parses the code for association data and computes structural metrics, which it then visualizes using top-down interactive exploration. Before building the tool and after demonstrating it to stakeholders, we conducted several interviews to learn about the requirements for legacy ecosystem analysis. We briefly introduce the tool and then present the results of analyzing the case study. We show that although the vision for the future is an ecosystem architecture in which systems are as decoupled as possible, the current state of the ecosystem is still far removed from this vision. We also present some of the lessons learned during our discussions with stakeholders, which include their interest in automatically assessing the quality of the legacy code.
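As an illustration of the kind of structural metric such a tool can derive from association data, the sketch below computes per-system fan-in and fan-out from an edge list; St1-PL/1's actual metrics and data format are not described here, so everything in the sketch is an assumption:

```python
# Illustrative sketch of a coupling metric computed from association
# data: fan-in/fan-out per system, a simple indicator of how far an
# ecosystem is from being decoupled. The edge list is hypothetical.

from collections import defaultdict

# (caller system, callee system) associations extracted from the code
associations = [("Payments", "CoreLedger"), ("Payments", "Customers"),
                ("Reports", "CoreLedger"), ("CoreLedger", "Customers")]

fan_out = defaultdict(set)   # systems a given system depends on
fan_in = defaultdict(set)    # systems that depend on a given system
for caller, callee in associations:
    fan_out[caller].add(callee)
    fan_in[callee].add(caller)

for system in sorted(set(fan_out) | set(fan_in)):
    print(f"{system}: fan-in={len(fan_in[system])}, fan-out={len(fan_out[system])}")
```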
Abstract:
We present the results of an investigation into the nature of the information needs of software developers who work in projects that are part of larger ecosystems. In an open-question survey we asked framework and library developers about their information needs with respect to both their upstream and downstream projects. We investigated what kind of information is required, why it is necessary, and how the developers obtain this information. The results show that the downstream needs are grouped into three categories roughly corresponding to the different stages in their relation with an upstream: selection, adoption, and co-evolution. The less numerous upstream needs are grouped into two categories: project statistics and code usage. The current-practices part of the study shows that to satisfy many of these needs developers use non-specific tools and ad hoc methods. We believe that this is a largely unexplored area of research.
Abstract:
By analyzing the transactions in Stack Overflow we can get a glimpse of the way in which the different geographical regions in the world contribute to the knowledge market represented by the website. In this paper we aggregate the knowledge transfer from the level of the users to the level of geographical regions and learn that Europe and North America are the principal and virtually equal contributors; Asia comes as a distant third, mainly represented by India; and Oceania contributes less than Asia but more than South America and Africa together.
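A minimal sketch of the aggregation step, lifting user-to-user transactions to the region level; the user-to-region mapping and the transactions are made-up examples, not data from the study:

```python
# Sketch of aggregating user-to-user knowledge transfer (e.g., accepted
# answers on Stack Overflow) up to the level of geographical regions.

from collections import Counter

user_region = {"alice": "Europe", "bob": "North America",
               "chen": "Asia", "dana": "Europe"}

# (answerer, asker) pairs: knowledge flows from answerer to asker
transactions = [("alice", "bob"), ("alice", "chen"),
                ("bob", "dana"), ("chen", "dana")]

contributed = Counter(user_region[answerer] for answerer, _ in transactions)
for region, count in contributed.most_common():
    print(f"{region}: {count} answers contributed")
```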
Abstract:
Highly available software systems occasionally need to be updated while avoiding downtime. Dynamic software updates reduce downtime, but still require the system to reach a quiescent state in which a global update can be performed. This can be difficult for multi-threaded systems. We present a novel approach to dynamic updates using first-class contexts, called Theseus. First-class contexts make global updates unnecessary: existing threads run to termination in an old context, while new threads start in a new, updated context; consistency between contexts is ensured with the help of bidirectional transformations. We show that for multi-threaded systems with coherent memory, first-class contexts offer a practical and flexible approach to dynamic updates, with acceptable overhead.
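The core idea can be sketched in a few lines: each thread captures the context that is current when it starts, so existing threads run to termination with the old behavior while new threads use the updated one. Theseus is not implemented in Python, and this sketch omits the bidirectional transformations that keep state consistent between contexts; it only illustrates the concept:

```python
# Hypothetical Python illustration of the first-class-context idea.

import threading

class Context:
    def __init__(self, version, greet):
        self.version = version
        self.greet = greet   # behavior bound to this context

current_context = Context(1, lambda name: f"Hello {name}")

def worker(name):
    ctx = current_context   # captured once, at thread start
    print(f"[v{ctx.version}] {ctx.greet(name)}")

old = threading.Thread(target=worker, args=("old thread",))
old.start()
old.join()

# Dynamic update: no global quiescent state is needed. New threads see
# the new context; a thread already running would keep its captured
# (old) context until it terminates.
current_context = Context(2, lambda name: f"Hi, {name}!")
new = threading.Thread(target=worker, args=("new thread",))
new.start()
new.join()
```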
Abstract:
A close-to-native structure of bulk biological specimens can be imaged by cryo-electron microscopy of vitreous sections (CEMOVIS). In some cases structural information can be combined with X-ray data, leading to atomic resolution in situ. However, CEMOVIS is not routinely used. The two critical steps consist of producing a frozen section ribbon of a few millimeters in length and transferring the ribbon onto an electron microscopy grid. During these steps, the first sections of the ribbon are wrapped around an eyelash (unwrapping is frequent). When a ribbon is sufficiently attached to the eyelash, the operator must guide the nascent ribbon. Steady hands are required: shaking or overstretching may break the ribbon, in which case the ribbon immediately wraps around itself or flies away and thereby becomes unusable. Micromanipulators for eyelashes and grids, as well as ionizers to attach section ribbons to grids, have been proposed. The rate of successful ribbon collection, however, has remained low for most operators. Here we present a setup composed of two micromanipulators. One of the micromanipulators guides an electrically conductive fiber to which the ribbon sticks with unprecedented efficiency compared to a non-conductive eyelash. The second micromanipulator positions the grid beneath the newly formed section ribbon, and with the help of an ionizer the ribbon is attached to the grid. Although manipulations are greatly facilitated, sectioning artifacts remain; nevertheless, the likelihood of obtaining high-quality sections is significantly increased due to the large number of sections that can be produced with the reported tool.