13 results for REUSE
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Immediate Search in the IDE as an Example of Socio-Technical Congruence in Search-Driven Development
Abstract:
Search-driven development is mainly concerned with code reuse but also with code navigation and debugging. In this essay we look at search-driven navigation in the IDE. We consider Smalltalk-80 as an example of a programming system with search-driven navigation capabilities and explore its human factors. We show how immediate search results lead to a user experience of code browsing rather than one of waiting for and clicking through search results. We explore the socio-technical congruence of immediate search, i.e., the unification of tasks and breakpoints with method calls, which leads to simpler and more extensible development tools. Finally, we conclude with remarks on the socio-technical congruence of search-driven development.
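To make the idea of immediate search concrete, here is a minimal sketch, not taken from the paper and far simpler than the Smalltalk-80 tooling it discusses: an in-memory index re-runs a substring query on every keystroke, so results update while the user types instead of after a submitted search. All names below are hypothetical.

```python
# Minimal sketch of immediate (incremental) search over method names.
# Hypothetical names; the paper's Smalltalk tools work differently.

class MethodIndex:
    def __init__(self, selectors):
        # Keep selectors sorted so result lists are stable across queries.
        self.selectors = sorted(selectors)

    def matches(self, query):
        # Return every selector containing the query; cheap enough to run
        # on each keystroke, which is what makes the search "immediate".
        q = query.lower()
        return [s for s in self.selectors if q in s.lower()]

index = MethodIndex(["printOn:", "printString", "displayString", "asString"])
for keystrokes_so_far in ["p", "pr", "print"]:
    print(keystrokes_so_far, "->", index.matches(keystrokes_so_far))
```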
Abstract:
Grammars for programming languages are traditionally specified statically. They are hard to compose and reuse due to ambiguities that inevitably arise. PetitParser combines ideas from scannerless parsing, parser combinators, parsing expression grammars and packrat parsers to model grammars and parsers as objects that can be reconfigured dynamically. Through examples and benchmarks we demonstrate that dynamic grammars are not only flexible but highly practical.
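As a rough illustration of grammars-as-objects, the following sketch mimics the parser-combinator style in Python; it is not PetitParser's actual (Smalltalk) API. Because the grammar is an ordinary object graph, an alternative can be added at runtime, which is the kind of dynamic reconfiguration the abstract refers to.

```python
# Hedged sketch of parser combinators with ordered choice (as in parsing
# expression grammars); invented API, not PetitParser's.

class Parser:
    def parse(self, s, i=0):
        raise NotImplementedError

class Literal(Parser):
    def __init__(self, text):
        self.text = text
    def parse(self, s, i=0):
        n = len(self.text)
        return (self.text, i + n) if s[i:i + n] == self.text else None

class Choice(Parser):
    # Ordered choice: the first matching alternative wins.
    def __init__(self, *alternatives):
        self.alternatives = list(alternatives)
    def parse(self, s, i=0):
        for alt in self.alternatives:
            result = alt.parse(s, i)
            if result is not None:
                return result
        return None

keyword = Choice(Literal("if"), Literal("while"))
print(keyword.parse("while x"))   # ('while', 5)

# The grammar is a plain object graph, so it can be reconfigured
# dynamically, e.g. by appending a new alternative at runtime:
keyword.alternatives.append(Literal("until"))
print(keyword.parse("until x"))   # ('until', 5)
```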
Abstract:
The increasing amount of data available about software systems poses new challenges for re- and reverse engineering research, as the proposed approaches need to scale. In this context, concerns about meta-modeling and analysis techniques need to be augmented by technical concerns about how to reuse and build upon the efforts of previous research. Moose is an extensive reverse engineering infrastructure that has evolved over more than 10 years and promotes the reuse of engineering efforts in research. Moose accommodates various types of data modeled in the FAMIX family of meta-models. The goal of this half-day workshop is to strengthen the community of researchers and practitioners who are working in re- and reverse engineering, by providing a forum for building future research starting from Moose and FAMIX as shared infrastructure.
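For readers unfamiliar with FAMIX, the toy sketch below shows what meta-model entities of this kind might look like. The real FAMIX meta-models are defined inside Moose and are considerably richer; the class and field names here are purely illustrative.

```python
# Toy sketch of a FAMIX-style meta-model; illustrative names only.
from dataclasses import dataclass, field

@dataclass
class FamixClass:
    name: str
    methods: list = field(default_factory=list)

@dataclass
class FamixMethod:
    name: str
    parent: FamixClass
    invocations: list = field(default_factory=list)  # outgoing calls

# A model is then a queryable collection of such entities:
cls = FamixClass("OrderedCollection")
cls.methods.append(FamixMethod("add:", cls))
print([m.name for m in cls.methods])
```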
Abstract:
The increasing amount of data available about software systems poses new challenges for re- and reverse engineering research, as the proposed approaches need to scale. In this context, concerns about meta-modeling and analysis techniques need to be augmented by technical concerns about how to reuse and build upon the efforts of previous research. MOOSE is an extensive reverse engineering infrastructure that has evolved over more than 10 years and promotes the reuse of engineering efforts in research. MOOSE accommodates various types of data modeled in the FAMIX family of meta-models. The goal of this half-day workshop is to strengthen the community of researchers and practitioners who are working in re- and reverse engineering, by providing a forum for building future research starting from MOOSE and FAMIX as shared infrastructure.
Abstract:
In rapidly evolving domains such as Computer Assisted Orthopaedic Surgery (CAOS), emphasis is often put first on innovation and new functionality, rather than on developing the common infrastructure needed to support integration and reuse of these innovations. In fact, developing such an infrastructure is often considered a high-risk venture given the volatility of the domain. We present CompAS, a method that exploits the very evolution of innovations in the domain to carry out the necessary quantitative and qualitative commonality and variability analysis, especially in the case of scarce system documentation. We show how our technique applies to the CAOS domain by using conference proceedings as a key source of information about the evolution of features in CAOS systems over a period of several years. We detect and classify evolution patterns to determine functional commonality and variability. We also identify non-functional requirements to help capture domain variability. We have validated our approach by evaluating the degree to which representative test systems can be covered by the common and variable features produced by our analysis.
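A hedged sketch of the underlying commonality/variability idea: given feature mentions mined per year from proceedings, a feature that appears consistently across the observed period is classified as common, the rest as variable. The features, years, and threshold below are invented; CompAS itself performs a more elaborate quantitative and qualitative analysis.

```python
# Invented example data: years in which each feature was observed
# in the mined proceedings.
feature_years = {
    "intra-op imaging": {2000, 2001, 2002, 2003},
    "robotic guidance": {2002, 2003},
    "2D fluoroscopy":   {2000, 2001},
}

all_years = set().union(*feature_years.values())

def classify(years, coverage_threshold=0.75):
    # A feature present in most observed years counts as common;
    # the threshold is an illustrative assumption.
    coverage = len(years) / len(all_years)
    return "common" if coverage >= coverage_threshold else "variable"

for feature, years in feature_years.items():
    print(feature, "->", classify(years))
```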
Abstract:
Integrating multiple languages with each other and with an existing development environment is a difficult task. As a consequence, developers often end up using only internal DSLs that strictly rely on the constraints imposed by the host language. Infrastructures do exist to mix languages, but they often do so at the price of losing the development tools of the host language. Instead of inventing a completely new infrastructure, our solution is to integrate new languages deeply into the existing host environment and reuse the infrastructure it offers. In this paper we show why Smalltalk is the best practical choice for such a host language.
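The constraint on internal DSLs can be made concrete with a small sketch: an embedded query language can only use constructs the host syntax permits (here, Python method chaining), which is exactly the limitation the paper argues deep host integration removes. The DSL below is invented for illustration.

```python
# Invented internal DSL: its entire "syntax" is host-language method
# chaining, so it cannot stray outside what Python allows.

class Query:
    def __init__(self):
        self.clauses = []
    def where(self, condition):
        self.clauses.append(("where", condition))
        return self
    def order_by(self, key):
        self.clauses.append(("order_by", key))
        return self

q = Query().where("age > 30").order_by("name")
print(q.clauses)
```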
Abstract:
Digital technologies, and the Internet in particular, have transformed the ways we create, distribute, use, reuse and consume cultural content; they have affected the workings of the cultural industries and, more generally, the processes of making, experiencing and remembering culture in local and global spaces. Yet few of these often profound transformations have found reflection in law and institutional design. Cultural policy toolkits, in particular at the international level, are still very much offline/analogue and conceive of culture as static property linked to national sovereignty and state boundaries. The article describes this state of affairs and asks the key question of whether there is a need to reform global cultural law and policy and, if so, what the essential elements of such a reform should be.
Abstract:
These data result from an investigation examining the interplay between dyadic rapport and consequent behavior-mirroring. Participants responded to a variety of interpersonally focused pretest measures prior to engaging in videotaped interdependent tasks (coded for interactional synchrony using Motion Energy Analysis [17,18]). A post-task evaluation of rapport and other related constructs followed each exchange. Four studies shared these same dependent measures but asked distinct questions: Study 1 (Ndyad = 38) explored the influence of perceived responsibility and the gender-specificity of the task; Study 2 (Ndyad = 51) focused on dyad sex-makeup; Studies 3 (Ndyad = 41) and 4 (Ndyad = 63) examined the impact of cognitive load on the interactions. Versions of the data are structured with both the individual and the dyad as the unit of analysis. Our data have strong reuse potential for theorists interested in dyadic processes and are especially pertinent to questions about dyad agreement and the association between interpersonal perception and behavior.
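As a purely illustrative sketch, not the Motion Energy Analysis pipeline used in these studies, behavior-mirroring between two motion-energy time series can be approximated by a lagged cross-correlation. The signal lengths, lag range, and noise levels below are invented.

```python
# Crude proxy for interactional synchrony: peak lagged correlation
# between two simulated motion-energy series. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=300)                              # participant A
b = np.roll(a, 5) + rng.normal(scale=0.5, size=300)   # B mirrors A with lag

def max_lagged_corr(x, y, max_lag=10):
    # Best absolute Pearson correlation over lags -max_lag..max_lag
    # (np.roll wraps at the edges; a real pipeline would truncate).
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        r = np.corrcoef(x, np.roll(y, lag))[0, 1]
        best = max(best, abs(r))
    return best

print(f"peak lagged correlation: {max_lagged_corr(a, b):.2f}")
```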
Abstract:
BACKGROUND: Record linkage of existing individual health care data is an efficient way to answer important epidemiological research questions. Reuse of individual health-related data faces several problems: either a unique personal identifier, such as a social security number, is not available, or non-unique person-identifiable information, such as names, is privacy protected and cannot be accessed. One way to protect privacy in probabilistic record linkage is to encrypt this sensitive information. Unfortunately, encrypted hash codes of two names differ completely even if the plain names differ only by a single character, so standard encryption methods cannot be applied. To overcome these challenges, we developed the Privacy Preserving Probabilistic Record Linkage (P3RL) method.

METHODS: In the P3RL method we apply a three-party protocol, with two sites collecting individual data and an independent trusted linkage center as the third partner. Our method consists of three main steps: pre-processing, encryption and probabilistic record linkage. Data pre-processing and encryption are done at the sites by local personnel. To guarantee similar quality and format of variables and an identical encryption procedure at each site, the linkage center generates semi-automated pre-processing and encryption templates. To retrieve the information (i.e. data structure) needed to create these templates without ever accessing plain person-identifiable information, we introduce a novel method of data masking. Sensitive string variables are encrypted using Bloom filters, which enables the calculation of similarity coefficients. For date variables, we developed special encryption procedures to handle the most common date errors. The linkage center performs probabilistic record linkage with encrypted person-identifiable information and plain non-sensitive variables.

RESULTS: We describe step by step how to link existing health-related data using encryption methods to preserve the privacy of persons in the study.

CONCLUSION: Privacy Preserving Probabilistic Record Linkage expands record linkage facilities in settings where a unique identifier is unavailable and/or regulations restrict access to the non-unique person-identifiable information needed to link existing health-related data sets. Automated pre-processing and encryption fully protect sensitive information, ensuring participant confidentiality. The method is suitable not just for epidemiological research but for any setting with similar challenges.
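The Bloom-filter idea can be sketched briefly: a name's bigrams are hashed with keyed hash functions into positions of a bit filter, and two encodings are compared with the Dice coefficient, so similar names yield similar bit patterns without revealing the plaintext. The parameters below (filter length, hash count, key) are illustrative and not the P3RL defaults.

```python
# Hedged sketch of Bloom-filter encoding for privacy-preserving
# name similarity; parameters are illustrative assumptions.
import hashlib

FILTER_LEN = 1000
NUM_HASHES = 2
SECRET_KEY = b"shared-site-key"   # hypothetical shared secret

def bigrams(name):
    padded = f"_{name.lower()}_"
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def bloom_encode(name):
    # Represent the filter as the set of bit positions that are set.
    bits = set()
    for gram in bigrams(name):
        for k in range(NUM_HASHES):
            h = hashlib.sha256(SECRET_KEY + gram.encode() + bytes([k]))
            bits.add(int.from_bytes(h.digest(), "big") % FILTER_LEN)
    return bits

def dice(x, y):
    # Similar plaintexts share most bigrams, hence most set bits.
    return 2 * len(x & y) / (len(x) + len(y))

print(f"{dice(bloom_encode('meier'), bloom_encode('meyer')):.2f}")   # high
print(f"{dice(bloom_encode('meier'), bloom_encode('garcia')):.2f}")  # low
```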
Abstract:
Research on open source software (OSS) projects often focuses on the SourceForge collaboration platform. We argue that a GNU/Linux distribution such as Debian is better suited for the sampling of projects because it avoids biases and contains unique information only available in an integrated environment. In particular, research on the reuse of components can build on the dependency information inherent in the Debian GNU/Linux packaging system. This paper therefore contributes to the practice of sampling methods in OSS research and provides empirical data on reuse dependencies in Debian.
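To illustrate how packaging metadata encodes reuse, the sketch below parses a Depends field from a Debian control stanza into dependency edges. Real Packages files carry richer syntax (version constraints, alternatives, architecture qualifiers) that this toy parser only crudely strips; the sample stanza is invented.

```python
# Hedged sketch: turn a Debian Depends field into reuse-dependency edges.
sample_stanza = {
    "Package": "gnuplot",
    "Depends": "libc6 (>= 2.14), libgd3, liblua5.3-0 | lua-any",
}

def reuse_edges(stanza):
    edges = []
    for clause in stanza["Depends"].split(","):
        # Take the first alternative and strip any version constraint.
        dep = clause.split("|")[0].split("(")[0].strip()
        edges.append((stanza["Package"], dep))
    return edges

print(reuse_edges(sample_stanza))
# [('gnuplot', 'libc6'), ('gnuplot', 'libgd3'), ('gnuplot', 'liblua5.3-0')]
```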