139 results for Mircea Eliade
Abstract:
BACKGROUND: For cystic fibrosis (CF) and many other hereditary diseases, our understanding of the relationship between genetic (e.g. allelic) and phenotypic diversity is still limited. Methods that allow fine quantification of the allelic proportions of mRNA transcripts are therefore of high importance. METHODS: We used either genomic DNA (gDNA) or total RNA extracted from nasal cells as the starting nucleic acid template for our assay. The subjects included in this study were nine CF patients compound heterozygous for the F508del mutation, as well as one F508del homozygote and one wild-type homozygote. We established a novel ligation-based quantification method which allows fine quantification of the allelic proportions of single-stranded (ss) and double-stranded (ds) CFTR cDNA. To verify the reliability and accuracy of this novel assay, we compared it with semiquantitative fluorescent PCR (SQF-PCR). RESULTS: We established a novel assay for allele-specific quantification of gene expression which combines the specificity of the ligation reaction with the accuracy of quantitative real-time PCR. The comparison with SQF-PCR clearly demonstrates that this assay (LASQ) allows fine quantification of allelic proportions. CONCLUSION: This assay represents an alternative to other fine-quantification methods such as ARMS PCR and Pyrosequencing.
Abstract:
This paper presents a case study of analyzing a legacy PL/1 ecosystem that has grown over 40 years to support the business needs of a large banking company. To support the stakeholders in analyzing it, we developed St1-PL/1, a tool that parses the code for association data and computes structural metrics, which it then visualizes through top-down interactive exploration. Before building the tool, and again after demonstrating it to stakeholders, we conducted several interviews to learn about legacy ecosystem analysis requirements. We briefly introduce the tool and then present the results of analyzing the case study. We show that although the vision for the future is an ecosystem architecture in which systems are as decoupled as possible, the current state of the ecosystem is still far removed from this. We also present some of the lessons learned from our discussions with stakeholders, which include their interest in automatically assessing the quality of the legacy code.
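The abstract does not describe St1-PL/1's implementation; as a purely illustrative aid, the following minimal Python sketch shows the kind of structural-metric computation it alludes to, assuming association data has already been extracted as caller/callee pairs. The program names and the coupling metric are assumptions made for the example, not the tool's actual design.

from collections import defaultdict

# Hypothetical association data extracted from a PL/1 codebase:
# each pair (caller, callee) records that one program references another.
associations = [
    ("ACCT01", "UTIL05"),
    ("ACCT01", "DB_IO"),
    ("LOAN02", "UTIL05"),
    ("LOAN02", "DB_IO"),
    ("UTIL05", "DB_IO"),
]

fan_out = defaultdict(set)  # programs a module depends on
fan_in = defaultdict(set)   # programs that depend on a module

for caller, callee in associations:
    fan_out[caller].add(callee)
    fan_in[callee].add(caller)

# Simple structural metrics per program: fan-in, fan-out, and total coupling.
for program in sorted(set(fan_in) | set(fan_out)):
    coupling = len(fan_in[program]) + len(fan_out[program])
    print(f"{program}: fan-in={len(fan_in[program])}, "
          f"fan-out={len(fan_out[program])}, coupling={coupling}")

In the tool itself, metrics like these feed a top-down interactive visualization rather than a textual report.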
Abstract:
We present the results of an investigation into the nature of the information needs of software developers who work in projects that are part of larger ecosystems. In an open-question survey we asked framework and library developers about their information needs with respect to both their upstream and downstream projects. We investigated what kind of information is required, why it is necessary, and how developers obtain this information. The results show that the downstream needs fall into three categories roughly corresponding to the different stages in their relation with an upstream: selection, adoption, and co-evolution. The less numerous upstream needs fall into two categories: project statistics and code usage. The current-practices part of the study shows that, to satisfy many of these needs, developers use non-specific tools and ad hoc methods. We believe that this is a largely unexplored area of research.
Abstract:
By analyzing the transactions on Stack Overflow we can get a glimpse of how the different geographical regions of the world contribute to the knowledge market represented by the website. In this paper we aggregate the knowledge transfer from the level of individual users to the level of geographical regions and learn that Europe and North America are the principal and virtually equal contributors; Asia comes a distant third, mainly represented by India; and Oceania contributes less than Asia but more than South America and Africa together.
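The paper's exact data model is not given in the abstract; the Python sketch below only illustrates the aggregation step it describes, lifting hypothetical user-level knowledge-transfer records to region-level totals. The records, regions, and scoring are made up for the example.

from collections import Counter

# Hypothetical knowledge-transfer records derived from Stack Overflow:
# (answerer_region, asker_region, score). The mapping of users to regions
# and the scoring are illustrative, not the paper's actual data.
transfers = [
    ("Europe", "North America", 3),
    ("North America", "Asia", 5),
    ("Asia", "Europe", 2),
    ("Oceania", "North America", 1),
]

contributed = Counter()
for source_region, _target_region, score in transfers:
    contributed[source_region] += score   # aggregate users' contributions per region

for region, total in contributed.most_common():
    print(f"{region}: {total}")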
Abstract:
Highly available software systems occasionally need to be updated while avoiding downtime. Dynamic software updates reduce downtime, but still require the system to reach a quiescent state in which a global update can be performed. This can be difficult for multi-threaded systems. We present a novel approach to dynamic updates using first-class contexts, called Theseus. First-class contexts make global updates unnecessary: existing threads run to termination in an old context, while new threads start in a new, updated context; consistency between contexts is ensured with the help of bidirectional transformations. We show that for multi-threaded systems with coherent memory, first-class contexts offer a practical and flexible approach to dynamic updates, with acceptable overhead.
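Theseus relies on first-class contexts and bidirectional transformations whose details are not given here; the following minimal Python sketch only illustrates the core idea that existing threads finish in the context they started with, while newly spawned threads bind to the updated one, so no global quiescent state is needed. All names are hypothetical and the consistency machinery is omitted.

import threading

class Context:
    # A first-class context: bundles the state and behaviour a thread runs against.
    def __init__(self, version, greeting):
        self.version = version
        self.greeting = greeting

    def handle(self, name):
        return f"[v{self.version}] {self.greeting}, {name}"

current_context = Context(1, "Hello")   # the context newly spawned threads pick up

def spawn_worker(name):
    # Each thread captures the context that is current when it is spawned
    # and keeps using it until it terminates.
    ctx = current_context
    t = threading.Thread(target=lambda: print(ctx.handle(name)))
    t.start()
    return t

t_old = spawn_worker("old-request")
current_context = Context(2, "Hi there")   # dynamic update while the system runs
t_new = spawn_worker("new-request")
t_old.join(); t_new.join()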
Abstract:
A close-to-native structure of bulk biological specimens can be imaged by cryo-electron microscopy of vitreous sections (CEMOVIS). In some cases the structural information can be combined with X-ray data, leading to atomic resolution in situ. However, CEMOVIS is not routinely used. The two critical steps consist of producing a frozen section ribbon a few millimeters in length and transferring the ribbon onto an electron microscopy grid. During these steps, the first sections of the ribbon are wrapped around an eyelash (unwrapping is frequent). When a ribbon is sufficiently attached to the eyelash, the operator must guide the nascent ribbon. Steady hands are required: shaking or overstretching may break the ribbon, which then immediately wraps around itself or flies away and thereby becomes unusable. Micromanipulators for eyelashes and grids, as well as ionizers to attach section ribbons to grids, have been proposed. The rate of successful ribbon collection, however, has remained low for most operators. Here we present a setup composed of two micromanipulators. One micromanipulator guides an electrically conductive fiber to which the ribbon sticks with unprecedented efficiency compared to a non-conductive eyelash. The second micromanipulator positions the grid beneath the newly formed section ribbon, and with the help of an ionizer the ribbon is attached to the grid. Although manipulations are greatly facilitated, sectioning artifacts remain; the likelihood of obtaining high-quality sections, however, is significantly increased thanks to the large number of sections that can be produced with the reported tool.
Abstract:
What the sacred is and how one can speak about it is an open question in religious studies and theological research. Beyond the classical approaches of Durkheim, Otto, or Eliade, the sacred can today only be adequately examined from multiple perspectives. The contributions to this volume analyze discourses on the sacred in late antique religious cultures: Greco-Roman religion, Judaism, and Christianity. Terminologies, practices, and reflections relating to the sacred are discussed within their respective religious frames of reference, but are also brought into conversation with one another. Categories such as time, place, individual, and group serve to organize the findings. Particular attention is also paid to source-language and scholarly terminologies of the sacred, as well as to the historical dynamics of conceptions of holiness. This interdisciplinary approach makes the discontinuities and continuities of the discourse on "the sacred", in the diversity of its manifestations, identifiable more precisely than before.
Abstract:
Software dependencies play a vital role in program comprehension, change impact analysis and other software maintenance activities. Traditionally, these activities are supported by source code analysis; however, the source code is sometimes inaccessible or difficult to analyse, as in hybrid systems composed of source code in multiple languages using various paradigms (e.g. object-oriented programming and relational databases). Moreover, not all stakeholders have adequate knowledge to perform such analyses. For example, non-technical domain experts and consultants raise most maintenance requests, yet they cannot predict the cost and impact of the requested changes without the support of the developers. We propose a novel approach to predicting software dependencies by exploiting the coupling present in domain-level information. Our approach is independent of the software implementation; hence, it can be used to approximate architectural dependencies without access to the source code or the database. As such, it can be applied to hybrid systems with heterogeneous source code or legacy systems with missing source code. In addition, the approach is based solely on information visible and understandable to domain users; therefore, it can be used efficiently by domain experts without the support of software developers. We evaluate our approach with a case study on a large-scale enterprise system, in which we demonstrate that up to 65% of the source code dependencies and 77% of the database dependencies are predicted solely based on domain information.
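The paper's actual prediction model is not reproduced here; as a toy illustration of the general idea, the Python sketch below predicts a dependency between two artifacts whenever they share domain-level concepts, using made-up artifact names and a naive overlap threshold.

# Hypothetical domain-level annotations: which business concepts each
# artifact (screen, batch job, table) is about, as a domain expert sees it.
domain_concepts = {
    "CustomerScreen":   {"customer", "address"},
    "BillingBatch":     {"customer", "invoice"},
    "InvoiceTable":     {"invoice"},
    "AddressTable":     {"address"},
}

def predicted_dependencies(concepts, threshold=1):
    # Predict a dependency wherever two artifacts share enough domain concepts.
    names = sorted(concepts)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = concepts[a] & concepts[b]
            if len(shared) >= threshold:
                pairs.append((a, b, shared))
    return pairs

for a, b, shared in predicted_dependencies(domain_concepts):
    print(f"{a} <-> {b}  (shared concepts: {', '.join(sorted(shared))})")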
Abstract:
Software architecture is the result of a design effort aimed at ensuring a certain set of quality attributes. As we show, quality requirements are commonly specified in practice but are rarely validated using automated techniques. In this paper we analyze and classify commonly specified quality requirements after interviewing professionals and running a survey. We report on the tools used to validate those requirements and comment on the obstacles practitioners encounter when performing this activity (e.g., insufficient tool support; poor understanding of users' needs). Finally, we discuss opportunities for increasing the adoption of automated tools based on the information we collected during our study (e.g., using a business-readable notation for expressing quality requirements; increasing awareness by monitoring non-functional aspects of a system).
Abstract:
Software architecture consists of a set of design choices that can be partially expressed in the form of rules that the implementation must conform to. Architectural rules are intended to ensure properties that fulfill fundamental non-functional requirements. Verifying architectural rules is often a non-trivial activity: available tools are often not very usable and support only a narrow subset of the rules commonly specified by practitioners. In this paper we present a new, highly readable declarative language for specifying architectural rules. With our approach, users can specify a wide variety of rules using a single uniform notation. Rules can be tested by third-party tools by conforming to pre-defined specification templates. Practitioners can thus take advantage of the capabilities of a growing number of testing tools without dealing with them directly.
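The declarative notation itself is not shown in this abstract; the following Python sketch is only an illustration of the kind of check an architectural rule (here, "only the persistence layer may depend on the database driver") could compile down to in a testing tool. The module names and the rule are hypothetical.

# Hypothetical extracted dependencies: module -> modules it imports.
dependencies = {
    "ui.orders":          {"domain.orders"},
    "domain.orders":      {"persistence.orders"},
    "persistence.orders": {"db.driver"},
    "ui.reports":         {"db.driver"},   # violates the rule below
}

def check_only_can_depend(deps, allowed_prefix, target):
    # Return the modules that depend on `target` without being in the allowed layer.
    return [module for module, targets in deps.items()
            if target in targets and not module.startswith(allowed_prefix)]

for offender in check_only_can_depend(dependencies, "persistence.", "db.driver"):
    print(f"rule violated: {offender} must not depend on db.driver")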
Abstract:
Software corpora facilitate the reproducibility of analyses; however, static analysis of an entire corpus still requires considerable effort, often duplicated unnecessarily by multiple users. Moreover, most corpora are designed for a single language, which increases the effort of cross-language analysis. To address these aspects we propose Pangea, an infrastructure allowing fast development of static analyses on multi-language corpora. Pangea uses language-independent meta-models stored as object model snapshots that can be directly loaded into memory and queried without any parsing overhead. To reduce the effort of performing static analyses, Pangea provides out-of-the-box support for: creating and refining analyses in a dedicated environment, deploying an analysis on an entire corpus, using a runner that supports parallel execution, and exporting results in various formats. In this tool demonstration we introduce Pangea and provide several usage scenarios that illustrate how it reduces the cost of analysis.
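Pangea's actual meta-models and snapshot format are not detailed in the abstract; the Python sketch below merely illustrates the idea of querying a pre-built, serialized model without any parsing, using a toy meta-model and pickle as a stand-in for the snapshot mechanism.

import pickle
from dataclasses import dataclass, field

# A toy language-independent meta-model; the real meta-models are richer
# and the entity names here are illustrative.
@dataclass
class ClassEntity:
    name: str
    methods: list = field(default_factory=list)

@dataclass
class CorpusSnapshot:
    project: str
    classes: list = field(default_factory=list)

# Building and serializing the snapshot happens once, ahead of time...
snapshot = CorpusSnapshot("demo-project", [
    ClassEntity("Parser", ["parse", "reset"]),
    ClassEntity("GodClass", [f"m{i}" for i in range(25)]),
])
blob = pickle.dumps(snapshot)

# ...so an analysis simply loads the snapshot and queries it, with no parsing overhead.
model = pickle.loads(blob)
for cls in model.classes:
    if len(cls.methods) > 20:
        print(f"{model.project}: {cls.name} has {len(cls.methods)} methods")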
Abstract:
Dicto is a declarative language for specifying architectural rules using a single uniform notation. Once defined, rules can automatically be validated using adapted off-the-shelf tools.
Abstract:
We present the results of an investigation into the nature of the information needs of software developers who work in projects that are part of larger ecosystems. This work is based on a quantitative survey of 75 professional software developers. We corroborate the results identified in the survey with needs and motivations proposed in a previous survey and discover that tool support for developers working in an ecosystem context is even more meager than we thought: mailing lists and internet search are the most popular tools developers use to satisfy their ecosystem-related information needs.
Abstract:
The domain of context-free languages has been extensively explored and there exist numerous techniques for parsing (all or a subset of) context-free languages. Unfortunately, some programming languages are not context-free. Using standard context-free parsing techniques to parse a context-sensitive programming language poses a considerable challenge. Implementors of programming language parsers have adopted various techniques, such as hand-written parsers, special lexers, or post-processing of an ambiguous parser output, to deal with this challenge. In this paper we suggest a simple extension of a top-down parser with contextual information. Contrary to the traditional approach that uses only the input stream as the input to a parsing function, we use a parsing context that provides access to the stream and possibly to other context-sensitive information. At the same time we keep the context-free formalism, so a grammar definition stays simple and free of convoluted context-sensitive rules. We show that our approach can be used for various purposes such as indent-sensitive parsing, high-precision island parsing, or XML parsing (with arbitrary element names). We demonstrate our solution with PetitParser, a parsing expression grammar-based, top-down parser combinator framework written in Smalltalk.
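PetitParser is a Smalltalk parser combinator framework, and its parsing contexts are richer than what fits here; the following minimal Python sketch only illustrates the idea of threading a context (stream position plus context-sensitive state, here a stack of indentation levels) through a top-down parser to obtain indent-sensitive parsing. All names are hypothetical.

class Context:
    # The parsing context: the input stream position plus context-sensitive
    # state (here, a stack of indentation levels for indent-sensitive parsing).
    def __init__(self, text):
        self.lines = text.splitlines()
        self.pos = 0
        self.indents = [0]

def indent_of(line):
    return len(line) - len(line.lstrip(" "))

def parse_block(ctx):
    # Parse lines at the current indentation level; a deeper indent opens a
    # nested block, a shallower one closes this block.
    items = []
    while ctx.pos < len(ctx.lines):
        line = ctx.lines[ctx.pos]
        ind = indent_of(line)
        if ind < ctx.indents[-1]:
            break                                   # dedent: block finished
        ctx.pos += 1
        children = []
        if ctx.pos < len(ctx.lines) and indent_of(ctx.lines[ctx.pos]) > ind:
            ctx.indents.append(indent_of(ctx.lines[ctx.pos]))
            children = parse_block(ctx)
            ctx.indents.pop()
        items.append((line.strip(), children))
    return items

source = "root\n  child a\n    grandchild\n  child b\n"
print(parse_block(Context(source)))
# [('root', [('child a', [('grandchild', [])]), ('child b', [])])]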