984 results for Machinery.


Relevance: 10.00%

Abstract:

Saccharomyces cerevisiae RAD50, MRE11, and XRS2 genes are essential for telomere length maintenance, cell cycle checkpoint signaling, meiotic recombination, and DNA double-stranded break (DSB) repair via nonhomologous end joining and homologous recombination. The DSB repair pathways that draw upon Mre11-Rad50-Xrs2 subunits are complex, and their mechanistic features remain poorly understood. Moreover, the molecular basis of DSB end resection in yeast mre11 nuclease-deficient mutants and of Mre11 nuclease-independent activation of ATM in mammals remains unknown and adds a new dimension to many unanswered questions about the mechanism of DSB repair. Here, we demonstrate that S. cerevisiae Mre11 (ScMre11) exhibits higher binding affinity for single- over double-stranded DNA and for intermediates of recombination and repair, and catalyzes robust unwinding of substrates possessing a 3' single-stranded DNA overhang but not of 5' overhangs or blunt-ended DNA fragments. Additional evidence disclosed that ScMre11 nuclease activity is dispensable for its DNA binding and unwinding activity, thus uncovering the molecular basis underlying DSB end processing in mre11 nuclease-deficient mutants. Significantly, Rad50, Xrs2, and Sae2 potentiate the DNA unwinding activity of Mre11, thus underscoring functional interaction among the components of the DSB end repair machinery. Our results also show that ScMre11 by itself binds to DSB ends, then promotes end bridging of duplex DNA, and directly interacts with Sae2. We discuss the implications of these results in the context of an alternative mechanism for DSB end processing and the generation of single-stranded DNA for DNA repair and homologous recombination.

Relevance: 10.00%

Abstract:

The success of AAV2-mediated hepatic gene transfer in human trials for diseases such as hemophilia has been hampered by a combination of low transduction efficiency and a robust immune response directed against these vectors. We have previously shown that AAV2 is targeted for destruction in the cytoplasm by the host-cellular kinase/ubiquitination/proteasomal degradation machinery, and that modification of the serine (S)/threonine (T) kinase and lysine (K) targets on the AAV capsid is beneficial. Thus, targeted single mutations of S/T>A (S489A, S498A, T251A) and K>R (K532R) improved the efficiency of gene transfer in vivo compared to wild-type (WT) AAV2 vectors (∼6-14 fold). In the present study, we evaluated whether combined alteration of the phosphodegrons (PD), which are the phosphorylation sites recognized as degradation signals by ubiquitin ligases, further improves gene transfer efficiency. We generated four multiple-mutant vectors: (i) PD1+3 (S489A+K532R); (ii) PD1+3 together with T251A (S489A+K532R+T251A), a residue that does not lie in any of the phosphodegrons but had shown increased transduction efficiency compared to the WT-AAV2 vector (∼6 fold) and is conserved in 9 of 10 AAV serotypes (AAV1 to 10); (iii) PD1+3 (S489A+K532R+S498A); and (iv) PD3 (K532R+T251A). We then evaluated them in vitro and in vivo and compared their gene transfer efficiency with either the WT-AAV2 or the best single-mutant S489A-AAV2 vector. The novel multiple mutations on the AAV2 capsid did not affect the overall vector packaging efficiency. All the multiple AAV2 mutants showed superior transduction efficiency in HeLa cells in vitro when compared to either the WT (62-72% vs 21%) or the single-mutant S489A (62-72% vs 50%) AAV2 vectors, as demonstrated by FACS analysis (Fig. 1A). On hepatic gene transfer with 5x10^10 vgs per animal in C57BL/6 mice, all the multiple mutants showed increased transgene expression compared to either the WT-AAV2 (∼15-23 fold) or the S489A single-mutant vector (∼2-3 fold) (Fig. 1B and C). These novel multiple-mutant AAV2 vectors also showed higher vector copy numbers in murine hepatocytes 4 weeks post transduction, compared to the WT-AAV2 (∼5-6 vs 1.4 vector copies/diploid genome) and, further, to the single mutant S489A (∼5-6 fold vs 3.8 fold) (Fig. 1D). Ongoing studies will evaluate the therapeutic benefit of one or more of these multiple-mutant vectors in preclinical models of hemophilia.

Relevance: 10.00%

Abstract:

Recombinant AAV8 vectors have shown significant promise for hepatic gene therapy of hemophilia B. However, the theme of AAV vector dose-dependent immunotoxicity seen earlier with AAV2 vectors seems to re-emerge with AAV8 vectors as well. It is therefore important to develop novel AAV8 vectors that provide enhanced gene expression at significantly lower vector doses. We hypothesized that AAV8, during its intracellular trafficking, is targeted for destruction in the cytoplasm by the host-cellular kinase/ubiquitination/proteasomal degradation machinery, and that modification of specific serine/threonine kinase or ubiquitination targets on the AAV8 capsid (Fig. 1A) may improve its transduction efficiency. To test this, point mutations of specific serine (S)/threonine (T) > alanine (A) or lysine (K) > arginine (R) residues were generated on the AAV8 capsid. scAAV8-EGFP vectors containing the wild-type (WT) and each of the 5 S/T/K mutant (S276A, S501A, S671A, T251A, and K137R) capsids were evaluated for their liver transduction efficiency at a dose of 5x10^10 vgs/animal in C57BL/6 mice in vivo. The best-performing mutant was the K137R vector, in terms of both gene expression (46-fold) and vector copy numbers in the hepatocytes (22-fold), compared to WT-AAV8 (Fig. 1B). The K137R-AAV8 vector, which showed significantly decreased ubiquitination of the viral capsid, had reduced activation of markers of the innate immune response [IL-6, IL-12, tumor necrosis factor α, Kupffer cells and TLR-9]. In addition, animals injected with the K137R mutant also demonstrated decreased (2-fold) levels of cross-neutralizing antibodies when compared to animals that received the WT-AAV8 vector. To further study the utility of the novel AAV8-K137R mutant in a therapeutic setting, we delivered human coagulation factor IX (h.FIX) under the control of liver-specific promoters (LP1 or hAAT) at two different doses (2.5x10^10 and 1x10^11 vgs per mouse) in 8-12 week old male C57BL/6 mice. As can be seen in Fig. 1C/D, the circulating levels of h.FIX were higher in all the K137R-AAV8 treated groups as compared to the WT-AAV8 treated groups, either at 2 weeks (62% vs 37% for hAAT constructs and 47% vs 21% for LP1 constructs) or at 4 weeks (78% vs 56% for hAAT constructs and 64% vs 30% for LP1 constructs) post hepatic gene transfer. These studies demonstrate the feasibility of the use of this novel vector for potential gene therapy of hemophilia B.

Relevance: 10.00%

Abstract:

Each new generation of GPUs vastly increases the resources available to GPGPU programs. GPU programming models (like CUDA) were designed to scale to use these resources. However, we find that CUDA programs actually do not scale to utilize all available resources, with over 30% of resources going unused on average for the Parboil2 programs used in our work. Current GPUs therefore allow concurrent execution of kernels to improve utilization. In this work, we study concurrent execution of GPU kernels using multiprogram workloads on current NVIDIA Fermi GPUs. On two-program workloads from the Parboil2 benchmark suite, we find that concurrent execution is often no better than serialized execution. We identify the lack of control over resource allocation to kernels as a major serialization bottleneck. We propose transformations that convert CUDA kernels into elastic kernels, which permit fine-grained control over their resource usage. We then propose several elastic-kernel-aware concurrency policies that offer significantly better performance and concurrency compared to the current CUDA policy. We evaluate our proposals on real hardware using multiprogrammed workloads constructed from benchmarks in the Parboil2 suite. On average, our proposals increase system throughput (STP) by 1.21x and improve the average normalized turnaround time (ANTT) by 3.73x for two-program workloads when compared to the current CUDA concurrency implementation.
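
For reference, STP and ANTT are the standard multiprogram metrics; the definitions below are the commonly used ones (with conventional per-program IPC notation), quoted for context rather than taken from the abstract:

    \mathrm{STP} = \sum_{i=1}^{n} \frac{\mathrm{IPC}^{\mathrm{MP}}_i}{\mathrm{IPC}^{\mathrm{SP}}_i},
    \qquad
    \mathrm{ANTT} = \frac{1}{n} \sum_{i=1}^{n} \frac{\mathrm{IPC}^{\mathrm{SP}}_i}{\mathrm{IPC}^{\mathrm{MP}}_i}

where IPC_i^SP and IPC_i^MP are program i's instructions per cycle when run alone and in the multiprogrammed mix, respectively. Higher STP and lower ANTT are better, so a 3.73x ANTT improvement means the average normalized turnaround time falls by that factor.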

Relevance: 10.00%

Abstract:

Estimating program worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that carries no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and that also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built on these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, and 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, and 13%}, respectively, on Arch1. On Arch2, the average accuracy of the WCET estimate improves to 159% when CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
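
As a worked illustration of the bounding step described above (generic notation, not the analyzer's own): for a phase whose CPI samples have mean \mu and standard deviation \sigma, Chebyshev's inequality P(|X - \mu| \ge k\sigma) \le 1/k^2 holds for any distribution, so choosing k = 1/\sqrt{1-p} gives a bound that holds with probability at least p:

    \mathrm{CPI}_p \;=\; \mu + \frac{\sigma}{\sqrt{1-p}},
    \qquad
    \mathrm{WCET}_{\text{phase}} \;\lesssim\; \mathrm{CPI}_p \times \text{(instruction count of the phase)} \times \text{(cycle time)}.

For p = 0.99 this means k = 10, which is why phases with large \sigma yield pessimistic estimates and why refining them into sub-phases with controlled CPI variance tightens the bound.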

Relevance: 10.00%

Abstract:

The presence of software bloat in large flexible software systems can hurt energy efficiency. However, identifying and mitigating bloat is fairly effort-intensive. To enable such efforts to be directed where there is a substantial potential for energy savings, we investigate the impact of bloat on power consumption under different situations. We conduct the first systematic experimental study of the joint power-performance implications of bloat across a range of hardware and software configurations on modern server platforms. The study employs controlled experiments to expose different effects of a common type of Java runtime bloat, excess temporary objects, in the context of the SPECpower_ssj2008 workload. We introduce the notion of equi-performance power reduction to characterize the impact, in addition to peak-power comparisons. The results show a wide variation in energy savings from bloat reduction across these configurations. Energy-efficiency benefits at peak performance tend to be most pronounced when bloat affects a performance bottleneck and non-bloated resources have low energy proportionality. Equi-performance power savings are highest when bloated resources have a high degree of energy proportionality. We develop an analytical model that establishes a general relation between the resource pressure caused by bloat and its energy-efficiency impact under different conditions of resource bottlenecks and energy proportionality. Applying the model to different "what-if" scenarios, we predict the impact of bloat reduction and corroborate these predictions with empirical observations. Our work shows that the prevalent software-only view of bloat is inadequate for assessing its power-performance impact, and instead provides a full-systems approach for reasoning about its implications.
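
One way to read the equi-performance metric (this formulation is our gloss on the term, not the paper's exact definition): fix a performance level \rho achievable by both configurations and compare power at that level,

    \Delta P_{\mathrm{equi}}(\rho) \;=\; P_{\mathrm{bloated}}(\rho) \;-\; P_{\mathrm{debloated}}(\rho),

i.e., the power saved by bloat reduction when the de-bloated system is operated at the same performance as the bloated one, as opposed to comparing the two systems at their respective peak-performance operating points.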

Relevance: 10.00%

Abstract:

We consider the problem of devising incentive strategies for viral marketing of a product. In particular, we assume that the seller can influence penetration of the product by offering two incentive programs: a) direct incentives to potential buyers (influence) and b) referral rewards for customers who influence potential buyers to make the purchase (exploit connections). The problem is to determine the optimal timing of these programs over a finite time horizon. In contrast to the algorithmic perspective popular in the literature, we take a mean-field approach and formulate the problem as a continuous-time deterministic optimal control problem. We show that the optimal strategy for the seller has a simple structure and can take both forms, namely, influence-and-exploit and exploit-and-influence. We also show that in some cases it may be optimal for the seller to deploy incentive programs mostly for low-degree nodes. We support our theoretical results through numerical studies and provide practical insights by analyzing various scenarios.
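
The abstract does not spell out the model, but a finite-horizon, mean-field control problem of the kind described can be sketched as follows, with x(t) the adopter fraction, u_1(t) the direct-incentive effort, u_2(t) the referral-reward effort, and all functional forms and constants (r, c_1, c_2, \alpha, \beta) purely illustrative placeholders:

    \max_{u_1(\cdot),\,u_2(\cdot)} \int_0^T \Big( r\,\dot{x}(t) - c_1\,u_1(t) - c_2\,u_2(t)\,x(t) \Big)\,dt
    \quad \text{subject to} \quad
    \dot{x}(t) = \big(\alpha\,u_1(t) + \beta\,u_2(t)\,x(t)\big)\,\big(1 - x(t)\big), \qquad x(0) = x_0.

Direct incentives act on the remaining non-adopters directly, while referral rewards act through existing adopters; the influence-and-exploit and exploit-and-influence structures mentioned above correspond to which of the two controls dominates early versus late in [0, T].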

Relevance: 10.00%

Abstract:

Transaction processing is a key constituent of the IT workload of commercial enterprises (e.g., banks, insurance companies). Even today, in many large enterprises, transaction processing is done by legacy "batch" applications, which run offline and process accumulated transactions. Developers acknowledge the presence of multiple loosely coupled pieces of functionality within individual applications. Identifying such pieces of functionality (which we call "services") is desirable for the maintenance and evolution of these legacy applications. This is a hard problem, which enterprises grapple with, and one without satisfactory automated solutions. In this paper, we propose a novel static-analysis-based solution to the problem of identifying services within transaction-processing programs. We provide a formal characterization of services in terms of control-flow and data-flow properties, which is well-suited to the idioms commonly exhibited by business applications. Our technique combines program slicing with the detection of conditional code regions to identify services in accordance with our characterization. A preliminary evaluation, based on a manual analysis of three real business programs, indicates that our approach can be effective in identifying useful services from batch applications.

Relevance: 10.00%

Abstract:

Static analysis (aka offline analysis) of a model of an IP network is useful for understanding, debugging, and verifying packet flow properties of the network. Data-flow analysis is a method that has typically been applied to static analysis of programs. We propose a new, data-flow based approach for static analysis of packet flows in networks. We also investigate an application of our analysis to the problem of inferring a high-level policy from the network, which has been addressed in the past only for a single router.
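
For context, the classical forward data-flow framework that such an analysis instantiates (textbook GEN/KILL notation; the mapping onto networks sketched here is only suggestive, not the paper's exact formulation):

    \mathrm{IN}[n] \;=\; \bigcup_{p \in \mathrm{pred}(n)} \mathrm{OUT}[p],
    \qquad
    \mathrm{OUT}[n] \;=\; \mathrm{GEN}[n] \;\cup\; \big(\mathrm{IN}[n] \setminus \mathrm{KILL}[n]\big).

In the network setting the nodes n would be routers or interfaces rather than program points, GEN and KILL would model the rewriting and filtering each device applies to packets, and the fixpoint of the equations describes which packet classes can reach each point in the network, analogous to reaching definitions in program analysis.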

Relevance: 10.00%

Abstract:

The success of Mycobacterium tuberculosis as a deadly pathogen lies in its ability to survive under adverse conditions during pre- and post-infectious stages. The transcription process and the regulation of gene expression are central to the survival of the pathogen through these harsh conditions. Multiple sigma factors, transcription regulators, and diverse two-component systems contribute to tailoring the events to meet the challenges faced by the pathogen. Although the machinery is conserved, many aspects of transcription and its regulation seem to be different in mycobacteria when compared to other well-studied organisms. Here, we discuss salient aspects of transcription and its regulation in the context of the distinct physiology of mycobacteria.

Relevance: 10.00%

Abstract:

Exploiting the performance potential of GPUs requires managing the data transfers to and from them efficiently, which is an error-prone and tedious task. In this paper, we develop a software coherence mechanism to fully automate all data transfers between the CPU and GPU without any assistance from the programmer. Our mechanism uses compiler analysis to identify potential stale accesses and uses a runtime to initiate transfers as necessary. This allows us to avoid redundant transfers that are exhibited by all other existing automatic memory management proposals. We integrate our automatic memory manager into the X10 compiler and runtime, and find that it not only results in smaller and simpler programs, but also eliminates redundant memory transfers. Tested on eight programs ported from the Rodinia benchmark suite, it achieves (i) a 1.06x speedup over hand-tuned manual memory management, and (ii) a 1.29x speedup over another recently proposed compiler-runtime automatic memory management system. Compared to other existing runtime-only and compiler-only proposals, it also transfers 2.2x to 13.3x less data on average.
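
The abstract does not show the mechanism itself, but the runtime half of a software coherence scheme like the one described can be sketched as a pair of validity flags per managed array, consulted at every access site the compiler marks as potentially stale. Everything below (class and method names, the simulated copies) is illustrative Python, not the X10 implementation:

    class ManagedArray:
        """Tracks where the current copy of an array lives (CPU, GPU, or both)."""

        def __init__(self, data):
            self.cpu_data = list(data)   # host copy
            self.gpu_data = None         # device copy (simulated here as a list)
            self.valid_cpu = True        # host copy is up to date
            self.valid_gpu = False       # device copy is up to date

        def _copy_to_gpu(self):
            # A real runtime would issue a host-to-device transfer here.
            self.gpu_data = list(self.cpu_data)
            self.valid_gpu = True

        def _copy_to_cpu(self):
            # A real runtime would issue a device-to-host transfer here.
            self.cpu_data = list(self.gpu_data)
            self.valid_cpu = True

        def acquire_gpu(self, will_write):
            """Called before a GPU kernel that the compiler marked as touching this array."""
            if not self.valid_gpu:          # stale on the device: transfer only then
                self._copy_to_gpu()
            if will_write:
                self.valid_cpu = False      # kernel writes invalidate the host copy
            return self.gpu_data

        def acquire_cpu(self, will_write):
            """Called before host code that may read a value last written on the GPU."""
            if not self.valid_cpu:
                self._copy_to_cpu()
            if will_write:
                self.valid_gpu = False
            return self.cpu_data


    # Usage: two back-to-back "kernels" reuse the device copy, so only one
    # host-to-device transfer happens, and the device-to-host transfer is
    # deferred until the host actually reads the result.
    a = ManagedArray(range(8))
    buf = a.acquire_gpu(will_write=True)
    buf[:] = [x * 2 for x in buf]           # kernel 1 (simulated)
    buf = a.acquire_gpu(will_write=True)    # no transfer: device copy still valid
    buf[:] = [x + 1 for x in buf]           # kernel 2 (simulated)
    print(a.acquire_cpu(will_write=False))  # single device-to-host transfer here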

Relevance: 10.00%

Abstract:

Software transactional memory (STM) is a promising programming paradigm for shared-memory multithreaded programs. While STM offers the promise of being less error-prone and more programmer-friendly than traditional lock-based synchronization, it also needs to be competitive in performance in order to be adopted in mainstream software. A major source of performance overhead in STM is transactional aborts. Conflict resolution and aborting a transaction typically happen at the transaction level, which has the advantage of being automatic and application-agnostic. However, it has a substantial disadvantage: the STM declares the entire transaction as conflicting and hence aborts and re-executes it fully, instead of partially re-executing only those part(s) of the transaction affected by the conflict. This "Re-execute Everything" approach has a significant adverse impact on STM performance. In order to mitigate the abort overheads, we propose a compiler-aided Selective Reconciliation STM (SR-STM) scheme, wherein certain transactional conflicts can be reconciled by performing partial re-execution of the transaction. Ours is a selective hybrid approach which uses compiler analysis to identify those data accesses which are legal and profitable candidates for reconciliation, and applies partial re-execution only to these candidates selectively, while other conflicting data accesses are handled by the default STM approach of abort and full re-execution. We describe the compiler analysis and code transformations required for supporting selective reconciliation. We find that SR-STM is effective in reducing the transactional abort overheads, improving performance for a set of five STAMP benchmarks by 12.58% on average and by up to 22.34%.

Relevance: 10.00%

Abstract:

Multicore architectures place twin demands of energy efficiency and higher performance on DRAM. A variety of schemes have been proposed to address either the latency or the energy consumption of DRAMs. These schemes typically require non-trivial hardware changes and end up improving latency at the cost of energy, or vice versa. One specific DRAM performance problem in multicores is that interleaved accesses from different cores can potentially degrade row-buffer locality. In this paper, based on the temporal and spatial locality characteristics of memory accesses, we propose a reorganization of the existing single large row-buffer in a DRAM bank into multiple sub-row buffers (MSRB). This reorganization not only improves row hit rates, and hence the average memory latency, but also brings down the energy consumed by the DRAM. The first major contribution of this work is proposing such a reorganization without requiring any significant changes to the existing, widely accepted DRAM specifications. Our proposed reorganization improves weighted speedup by 35.8%, 14.5% and 21.6% in quad-, eight- and sixteen-core workloads, along with a 42%, 28% and 31% reduction in DRAM energy. The proposed MSRB organization enables opportunities for the management of multiple row-buffers at the memory controller level. As the memory controller is aware of the behaviour of individual cores, it allows us to implement coordinated buffer allocation schemes for different cores that take program behaviour into account. We demonstrate two such schemes, namely Fairness Oriented Allocation and Performance Oriented Allocation, which show the flexibility that memory controllers can now exploit in our MSRB organization to improve overall performance and/or fairness. Further, the MSRB organization enables additional opportunities for DRAM intra-bank parallelism and selective early precharging of the LRU row-buffer to further improve memory access latencies. These two optimizations together provide an additional 5.9% performance improvement.
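
A toy model of the row-buffer effect described above (not the paper's simulator; the request streams, buffer counts, and LRU policy are assumptions for illustration) shows how interleaved accesses from two cores destroy locality with a single open row but retain it with multiple sub-row buffers:

    from collections import OrderedDict
    import itertools

    def row_hits(requests, num_buffers):
        """Count row-buffer hits for a stream of (core, row) requests to one DRAM bank,
        modelling `num_buffers` sub-row buffers managed with LRU replacement.
        num_buffers == 1 is the conventional single open row."""
        open_rows = OrderedDict()           # row -> None, kept in LRU order
        hits = 0
        for _core, row in requests:
            if row in open_rows:
                hits += 1
                open_rows.move_to_end(row)  # refresh LRU position
            else:
                if len(open_rows) == num_buffers:
                    open_rows.popitem(last=False)   # evict least recently used row
                open_rows[row] = None
        return hits

    # Two cores with good per-core locality (each streams through its own row),
    # but their requests interleave at the shared bank.
    core0 = [(0, 100)] * 8
    core1 = [(1, 200)] * 8
    interleaved = list(itertools.chain.from_iterable(zip(core0, core1)))

    print(row_hits(interleaved, num_buffers=1))   # 0 hits: each access closes the other core's row
    print(row_hits(interleaved, num_buffers=2))   # 14 hits: each core keeps its row open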

Relevance: 10.00%

Abstract:

We analytically study the role played by the network topology in sustaining cooperation in a society of myopic agents in an evolutionary setting. In our model, each agent plays the Prisoner's Dilemma (PD) game with its neighbors, as specified by a network. Cooperation is the incumbent strategy, whereas defectors are the mutants. Starting with a population of cooperators, some agents are switched to defection. The agents then play the PD game with their neighbors and compute their fitness. After this, an evolutionary rule, or imitation dynamic, is used to update the agent strategy. A defector switches back to cooperation if it has a cooperator neighbor with higher fitness. The network is said to sustain cooperation if almost all defectors switch to cooperation. Earlier work on the sustenance of cooperation has largely consisted of simulation studies, and we seek to complement this body of work by providing analytical insight. We find that in order to sustain cooperation, a network should satisfy some properties such as small average diameter, densification, and irregularity. Real-world networks have been empirically shown to exhibit these properties, and are thus candidates for the sustenance of cooperation. We also analyze some specific graphs to determine whether or not they sustain cooperation. In particular, we find that scale-free graphs belonging to a certain family sustain cooperation, whereas Erdos-Renyi random graphs do not. To the best of our knowledge, ours is the first analytical attempt to determine which networks sustain cooperation in a population of myopic agents in an evolutionary setting.
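
A minimal sketch of the evolutionary rule described above, in Python; the payoff values and the example graph are illustrative assumptions, while the update rule (a defector reverts to cooperation if some cooperating neighbour is strictly fitter) follows the abstract:

    # Payoffs for the row player in a one-shot Prisoner's Dilemma (T > R > P > S).
    R, S, T, P = 3.0, 0.0, 5.0, 1.0

    def payoff(me, other):
        if me == 'C':
            return R if other == 'C' else S
        return T if other == 'C' else P

    def fitness(node, strategy, graph):
        return sum(payoff(strategy[node], strategy[nbr]) for nbr in graph[node])

    def step(strategy, graph):
        """One round of the imitation dynamic: a defector switches back to
        cooperation if some cooperating neighbour has strictly higher fitness."""
        fit = {v: fitness(v, strategy, graph) for v in graph}
        new = dict(strategy)
        for v in graph:
            if strategy[v] == 'D' and any(
                    strategy[u] == 'C' and fit[u] > fit[v] for u in graph[v]):
                new[v] = 'C'
        return new

    # A small star graph: hub 0 connected to leaves 1..6; leaf 1 starts as a defector.
    graph = {0: [1, 2, 3, 4, 5, 6], **{i: [0] for i in range(1, 7)}}
    strategy = {v: 'C' for v in graph}
    strategy[1] = 'D'

    for _ in range(5):
        strategy = step(strategy, graph)
    print(strategy)   # the lone defector imitates the fitter cooperating hub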

Relevance: 10.00%

Abstract:

We consider the secrecy obtained when one transmits on a Gaussian wiretap channel above the secrecy capacity. Instead of equivocation, we consider probability of error as the criterion of secrecy. The usual channel codes are considered for transmission. The rates obtained can reach the channel capacity. We show that the “confusion” caused to Eve when the rate of transmission is above the capacity of Eve's channel is similar to the confusion caused by using the wiretap channel codes used below the secrecy capacity.
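
For reference, the secrecy capacity of the degraded Gaussian wiretap channel (a standard result quoted for context, not a claim from the abstract) is

    C_s \;=\; \Big[\tfrac{1}{2}\log_2\!\big(1+\mathrm{SNR}_B\big) \;-\; \tfrac{1}{2}\log_2\!\big(1+\mathrm{SNR}_E\big)\Big]^{+},

where SNR_B and SNR_E are the signal-to-noise ratios at the legitimate receiver and at Eve. The regime of interest above is transmission at rates between C_s and the main-channel capacity \tfrac{1}{2}\log_2(1+\mathrm{SNR}_B), with Eve's probability of decoding error, rather than equivocation, taken as the secrecy criterion.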