911 results for replication slippage
Abstract:
Retroviruses uniquely co-package two copies of their genomic RNA within each virion. During reverse transcription, the two copies serve as templates for synthesis of the proviral DNA. Two template switches are required for the retroviral enzyme, reverse transcriptase, to complete retroviral DNA synthesis. With two RNA genomes present in the virion, reverse transcriptase can make these template switches utilizing only one of the RNA templates (intramolecular) or utilizing both RNA templates (intermolecular). The results presented in this study show that during a single cycle of Moloney murine leukemia virus (MLV) replication, both nonrecombinant and recombinant proviruses predominantly underwent intramolecular minus- and plus-strand transfers. This is the first study to examine the nature of the required template switches occurring during MLV replication, and the results support the previous findings for SNV and the hypothesis that the required template switches are ordered events. This study also determined rates of deletion and a rate of recombination for a single cycle of MLV replication. The rates reported here are comparable to those previously reported for both SNV and MLV.
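The distinction between intramolecular and intermolecular transfers can be made concrete with a toy simulation. The sketch below is purely illustrative and is not the study's assay: the jump probability and copy labels are hypothetical. It simply tags each of the two required switches with the RNA copy it lands on and classifies the resulting provirus.

```python
import random

def reverse_transcribe(p_intermolecular=0.1):
    """Simulate the two required template switches of reverse transcription.

    Each virion packages two RNA copies (0 and 1). RT starts on copy 0;
    each required switch either stays on the current template
    (intramolecular) or jumps to the co-packaged copy (intermolecular).
    Returns 'intramolecular' only if both switches used a single template.
    """
    template = 0
    used = {template}
    for _ in range(2):  # the minus-strand and plus-strand transfers
        if random.random() < p_intermolecular:
            template = 1 - template  # jump to the other RNA copy
        used.add(template)
    return "intramolecular" if len(used) == 1 else "intermolecular"

counts = {"intramolecular": 0, "intermolecular": 0}
for _ in range(10_000):
    counts[reverse_transcribe()] += 1
print(counts)  # with a low jump probability, intramolecular transfers dominate
```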
Abstract:
ATP-dependent chromatin remodeling has been shown to be critical for transcription and DNA repair. However, the involvement of ATP-dependent chromatin remodeling in DNA replication remains poorly defined. Interestingly, we found that the INO80 chromatin-remodeling complex is directly involved in the DNA damage tolerance pathways activated during DNA replication. DNA damage tolerance is important for genomic stability and is controlled by the formation of either mono-ubiquitinated or multi-ubiquitinated PCNA, which respectively induce error-prone or error-free replication bypass of lesions. In addition, homologous recombination (HR) mediated by the Rad51 pathway is also involved in the DNA damage tolerance pathways. We found that INO80 is specifically recruited to replication origins during S phase in a genome-wide fashion. In addition, DNA combing analysis shows that INO80 is required for the resumption of replication at stalled forks induced by methyl methanesulfonate (MMS). Mechanistically, we find that INO80 is required for PCNA ubiquitination as well as for Rad51-mediated processing of replication forks after MMS treatment. Furthermore, chromatin immunoprecipitation at specific ARSs indicates that INO80 is necessary for Rad18 and Rad51 recruitment to replication forks after MMS treatment. Moreover, 2D gel analysis shows that INO80 is necessary to process Rad51-mediated intermediates at impeded replication forks. In conclusion, our findings establish a novel role for a chromatin-remodeling complex in DNA damage tolerance pathways and suggest that chromatin remodeling is fundamentally important to ensure faithful DNA replication and genome stability in eukaryotes.
Abstract:
p53 plays a role in cell cycle arrest and apoptosis, and has also been shown to be involved in DNA replication. To study the effect of p53 on DNA replication, we utilized an SV40-based shuttle vector system. The pZ402 shuttle vector was constructed with a mutated T-antigen that is unable to interact with p53 but able to support replication of the shuttle vector. When a transcriptional activation domain p53 mutant was tested for its ability to inhibit DNA replication, no inhibition was observed. Competition assays with the DNA binding domain of p53 were also able to block the inhibition of DNA replication by p53, suggesting that p53 can inhibit DNA replication through the transcriptional activation of a target gene. One likely target gene, p21^cip/waf, was tested to determine whether p53 inhibited DNA replication by transcriptionally activating p21^cip/waf. Two independent approaches, utilizing p21^cip/waf null cells or the expression of an antisense p21^cip/waf expression vector, were employed. p53 was able to inhibit pZ402 replication independently of p21^cip/waf. p53 was also able to inhibit DNA replication independently of the p53 target genes Gadd45 and the replication processivity factor PCNA. The inhibition of DNA replication by p53 was also independent of direct DNA binding to a consensus site on the replicating plasmid. p53 mutants can be classified into two categories: conformational and DNA contact mutants. Both types of p53 mutants were tested for their effects on DNA replication. While all conformational mutants were unable to inhibit DNA replication, three out of three DNA contact mutants tested were able to do so. This work examines the effects of wild-type and mutant p53 on DNA replication and demonstrates a possible mechanism by which wild-type p53 could inhibit DNA replication through the transcriptional activation of a target gene.
Abstract:
Replication of software engineering experiments is crucial for dealing with validity threats to experiments in this area. Even though the empirical software engineering community is aware of the importance of replication, the replication rate is still very low. The RESER'11 Joint Replication Project aims to tackle this problem by simultaneously running a series of replications of the same experiment. In this article, we report the results of the replication run at the Universidad Politécnica de Madrid. Our results are inconsistent with those of the original experiment; however, we have identified possible causes for these inconsistencies. We also discuss our experiences (in terms of pros and cons) during the replication.
Abstract:
This research is concerned with the experimental software engineering area, specifically experiment replication. Replication has traditionally been viewed as a complex task in software engineering, possibly due to the present immaturity of the experimental paradigm applied to software development. Researchers usually use replication packages to replicate an experiment. However, replication packages are not the solution to all the information management problems that crop up when successive replications of an experiment accumulate. This research borrows ideas from the software configuration management and software product line paradigms to support the replication process. We believe that configuration management can help to manage and administer information from one replication to another: hypotheses, designs, data analysis, etc. The software product line paradigm can help to organize and manage any changes introduced into the experiment by each replication. We expect the union of the two paradigms in replication to improve the planning, design and execution of further replications and their alignment with existing replications. Additionally, this research will contribute a web support environment for archiving information related to different experiment replications. The environment will provide information management support flexible enough for running replications with different numbers and types of changes, and will afford massive storage of data from different replications. Experimenters working collaboratively on the same experiment must all have access to the different replications.
Abstract:
The verification and validation activity plays a fundamental role in improving software quality. Determining which techniques are most effective for carrying out this activity has been an aspiration of experimental software engineering researchers for years. This paper reports a controlled experiment evaluating the effectiveness of two unit testing techniques: the functional testing technique known as equivalence partitioning (EP) and the control-flow structural testing technique known as branch testing (BT). This experiment is a literal replication of Juristo et al. (2013). Both experiments serve the purpose of determining whether the effectiveness of BT and EP varies depending on whether or not the faults are visible to the technique (InScope or OutScope, respectively). We have used the materials, design and procedures of the original experiment, but in order to adapt the experiment to the context we have: (1) reduced the number of studied techniques from 3 to 2; (2) assigned subjects to experimental groups by means of stratified randomization to balance the influence of programming experience; (3) localized the experimental materials; and (4) adapted the training duration. We ran the replication at the Escuela Politécnica del Ejército Sede Latacunga (ESPEL) as part of a software verification & validation course. The experimental subjects were 23 master's degree students. EP is more effective than BT at detecting InScope faults; the session/program and group variables are found to have significant effects. BT is more effective than EP at detecting OutScope faults; the session/program and group variables have no effect in this case. The results of the replication and the original experiment are similar with respect to testing techniques. There are some inconsistencies with respect to the group factor, which can be explained by small sample effects. The results for the session/program factor are inconsistent for InScope faults. We believe that these differences are due to a combination of the fatigue effect and a technique × program interaction. Although we were able to reproduce the main effects, the changes to the design of the original experiment make it impossible to identify the causes of the discrepancies for sure. We believe that further replications closely resembling the original experiment should be conducted to improve our understanding of the phenomena under study.
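As an aside on point (2): stratified randomization of the kind described can be sketched in a few lines. The following Python fragment is illustrative only, not the replication's actual assignment procedure; the subject ids, experience levels and group names are made up.

```python
import random

def stratified_assignment(subjects, groups=("EP-first", "BT-first")):
    """Assign subjects to groups so each experience stratum is balanced.

    `subjects` maps subject id -> experience level (e.g. 'low'/'high').
    Within each stratum, subjects are shuffled and dealt round-robin,
    spreading programming experience evenly across the groups.
    """
    strata = {}
    for sid, level in subjects.items():
        strata.setdefault(level, []).append(sid)
    assignment = {}
    for members in strata.values():
        random.shuffle(members)
        for i, sid in enumerate(members):
            assignment[sid] = groups[i % len(groups)]
    return assignment

# 23 hypothetical subjects, as in the replication's sample size
subjects = {f"s{i}": ("high" if i % 3 == 0 else "low") for i in range(23)}
print(stratified_assignment(subjects))
```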
Abstract:
[Context] Replication Data Management (RDM) aims at enabling the use of data collections from several iterations of an experiment. However, RDM faces several major challenges in integrating data models and data from empirical study infrastructures that were not designed to cooperate, e.g., variation among the data models of local data sources. [Objective] In this paper we analyze RDM needs and evaluate conceptual RDM approaches to support replication researchers. [Method] We adapted the ATAM evaluation process to (a) analyze the RDM use cases and needs of empirical replication study research groups and (b) compare three conceptual approaches to addressing these RDM needs: central data repositories with a fixed data model, heterogeneous local repositories, and an empirical ecosystem. [Results] While the central and local approaches have major issues that are hard to resolve in practice, the empirical ecosystem allows bridging current gaps in RDM from heterogeneous data sources. [Conclusions] The empirical ecosystem approach should be explored further in diverse empirical environments.
Abstract:
One of the most demanding needs in cloud computing is that of having scalable and highly available databases. One of the ways to address these needs is to leverage the scalable replication techniques developed in the last decade, which allow increasing both the availability and scalability of databases. Many replication protocols have been proposed during this period; the main research challenge was how to scale under the eager replication model, the one that provides consistency across replicas. In this paper, we examine three eager database replication systems available today, Middle-R, C-JDBC and MySQL Cluster, using the TPC-W benchmark. We analyze their architectures and replication protocols, and compare their performance both in the absence of failures and when failures occur.
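To make the eager model concrete: under eager (synchronous) replication, a transaction's writes are applied at every replica before the client sees a commit, which is what keeps the copies consistent at the cost of coordination. A minimal sketch, assuming an idealized in-memory key-value replica and no failure handling (real systems such as Middle-R or MySQL Cluster add group communication, certification and recovery):

```python
class Replica:
    """A single in-memory copy of the database."""
    def __init__(self):
        self.data = {}

    def apply(self, writes):
        self.data.update(writes)

class EagerReplicatedDB:
    """Eager replication: commit only after every replica applied the writes.

    This sketch keeps only the core invariant: all replicas see a write
    before the client does, so any replica can serve a consistent read.
    """
    def __init__(self, n_replicas=3):
        self.replicas = [Replica() for _ in range(n_replicas)]

    def commit(self, writes):
        for replica in self.replicas:  # synchronous, all-or-nothing in this sketch
            replica.apply(writes)
        return "committed"

    def read(self, key, replica_index=0):
        # Reads can be served by any replica without losing consistency.
        return self.replicas[replica_index].data.get(key)

db = EagerReplicatedDB()
db.commit({"x": 1})
assert all(r.data["x"] == 1 for r in db.replicas)
```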
Abstract:
Context: The software engineering community is becoming more aware of the need for experimental replications. In spite of the importance of this topic, there is still much inconsistency in the terminology used to describe replications. Objective: Understand the perspectives of empirical researchers about the various terms used to characterize replications and propose a consistent taxonomy of terms. Method: A survey followed by a plenary discussion during the 2013 International Software Engineering Research Network meeting. Results: We propose a taxonomy which consolidates the disparate terminology. This taxonomy had a high level of agreement among workshop attendees. Conclusion: Consistent terminology is important for any field to progress. This work is the first step in that direction; additional study and discussion are still necessary.
Abstract:
Context: Replication plays an important role in experimental disciplines. There are still many uncertainties about how to proceed with replications of SE experiments. Should replicators reuse the baseline experiment materials? How much liaison should there be between the original and replicating experimenters, if any? What elements of the experimental configuration can be changed for the experiment to be considered a replication rather than a new experiment? Objective: To improve our understanding of SE experiment replication, in this work we propose a classification intended to provide experimenters with guidance about what types of replication they can perform. Method: The research approach is structured according to the following activities: (1) a literature review of experiment replication in SE and in other disciplines, (2) identification of typical elements that compose an experimental configuration, (3) identification of different replication purposes and (4) development of a classification of experiment replications for SE. Results: We propose a classification of replications which provides experimenters in SE with guidance about what changes they can make in a replication and, based on these, what verification purposes such a replication can serve. The proposed classification helped to accommodate opposing views within a broader framework, and it can account for replications ranging from less similar to more similar to the baseline experiment. Conclusion: The aim of replication is to verify results, but different types of replication serve specific verification purposes and afford different degrees of change. Each replication type helps to discover particular experimental conditions that might influence the results. The proposed classification can be used to identify changes in a replication and, based on these, understand the level of verification.
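One way to read such a classification operationally: record which elements of the experimental configuration a replication changes, and derive the replication type from that set. The sketch below is a hypothetical rendering of this idea, not the paper's actual classification; the element names, type labels and mapping rules are illustrative.

```python
# Hypothetical elements of an experimental configuration.
CONFIG_ELEMENTS = {"protocol", "operationalizations", "populations", "experimenters"}

def classify(changed_elements):
    """Map the set of changed configuration elements to a replication type.

    The labels and rules are illustrative, not the paper's classification.
    """
    changed = set(changed_elements)
    unknown = changed - CONFIG_ELEMENTS
    if unknown:
        raise ValueError(f"not part of the experimental configuration: {unknown}")
    if not changed:
        return "literal replication: rerun as-is to verify the results"
    if changed <= {"experimenters", "populations"}:
        return "operational replication: verify how far the results generalize"
    return "conceptual replication: verify the hypothesis under broader changes"

print(classify({"experimenters", "populations"}))  # operational replication
```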
Abstract:
One of the most demanding needs in cloud computing and big data is that of having scalable and highly available databases. One of the ways to address these needs is to leverage the scalable replication techniques developed in the last decade, which allow increasing both the availability and scalability of databases. Many replication protocols have been proposed during this period; the main research challenge was how to scale under the eager replication model, the one that provides consistency across replicas. This thesis provides an in-depth study of three eager database replication systems based on relational databases: Middle-R, C-JDBC and MySQL Cluster, and three systems based on In-Memory Data Grids: JBoss Data Grid, Oracle Coherence and Terracotta Ehcache. The thesis examines these systems in terms of their architecture, replication protocols, fault tolerance and various other functionalities. It also provides an experimental analysis of these systems using state-of-the-art benchmarks: TPC-C and TPC-W (for the relational systems) and the Yahoo! Cloud Serving Benchmark (for the In-Memory Data Grids). The thesis also discusses three graph databases, Neo4j, Titan and Sparksee, in terms of their architecture and transactional capabilities, and highlights the weaker transactional consistency provided by these systems. It describes an implementation of snapshot isolation in the Neo4j graph database to provide stronger isolation guarantees for transactions.
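Snapshot isolation of the kind mentioned in the last sentence is typically built on multiversioning: each transaction reads the versions committed before its start timestamp, and a write-write conflict on the same key aborts the later committer (first-committer-wins). A minimal MVCC sketch of that scheme, unrelated to Neo4j's actual internals:

```python
import itertools

class MVCCStore:
    """Toy snapshot isolation via multiversion concurrency control."""
    def __init__(self):
        self.versions = {}          # key -> list of (commit_ts, value)
        self.clock = itertools.count(1)

    def begin(self):
        return {"start_ts": next(self.clock), "writes": {}}

    def read(self, txn, key):
        # A transaction sees its own writes, else the newest version
        # committed before its snapshot was taken.
        if key in txn["writes"]:
            return txn["writes"][key]
        visible = [v for ts, v in self.versions.get(key, []) if ts <= txn["start_ts"]]
        return visible[-1] if visible else None

    def write(self, txn, key, value):
        txn["writes"][key] = value

    def commit(self, txn):
        # First-committer-wins: abort if any written key gained a newer
        # committed version after this transaction began.
        for key in txn["writes"]:
            for ts, _ in self.versions.get(key, []):
                if ts > txn["start_ts"]:
                    raise RuntimeError("write-write conflict: abort")
        commit_ts = next(self.clock)
        for key, value in txn["writes"].items():
            self.versions.setdefault(key, []).append((commit_ts, value))

store = MVCCStore()
t1, t2 = store.begin(), store.begin()
store.write(t1, "node:1", "A")
store.commit(t1)
print(store.read(t2, "node:1"))  # None: t2's snapshot predates t1's commit
```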