Abstract:
Saccharomyces cerevisiae RAD50, MRE11, and XRS2 genes are essential for telomere length maintenance, cell cycle checkpoint signaling, meiotic recombination, and DNA double-strand break (DSB) repair via nonhomologous end joining and homologous recombination. The DSB repair pathways that draw upon Mre11-Rad50-Xrs2 subunits are complex, so their mechanistic features remain poorly understood. Moreover, the molecular basis of DSB end resection in yeast mre11 nuclease-deficient mutants, and of Mre11 nuclease-independent activation of ATM in mammals, remains unknown and adds a new dimension to many unanswered questions about the mechanism of DSB repair. Here, we demonstrate that S. cerevisiae Mre11 (ScMre11) exhibits higher binding affinity for single- than for double-stranded DNA and for intermediates of recombination and repair, and catalyzes robust unwinding of substrates possessing a 3' single-stranded DNA overhang, but not of 5' overhangs or blunt-ended DNA fragments. Additional evidence disclosed that ScMre11 nuclease activity is dispensable for its DNA binding and unwinding activities, thus uncovering the molecular basis underlying DSB end processing in mre11 nuclease-deficient mutants. Significantly, Rad50, Xrs2, and Sae2 potentiate the DNA unwinding activity of Mre11, underscoring functional interaction among the components of the DSB end repair machinery. Our results also show that ScMre11 by itself binds to DSB ends, promotes end bridging of duplex DNA, and directly interacts with Sae2. We discuss the implications of these results in the context of an alternative mechanism for DSB end processing and the generation of single-stranded DNA for DNA repair and homologous recombination.
Abstract:
Identifying the damage mechanisms involved in the wear process demands finer-scale characterization of the surface as well as the subsurface region of the wear scar. To this end, this article discusses results obtained with Cu-10 wt% Pb-based metallic nanocomposites using a host of characterization techniques, including ion milling and transmission electron microscopy. Apart from finer-scale characterization to understand deformation and cracking during the wear process, X-ray photoelectron spectroscopy analysis of wear debris confirms the oxidation of the Pb phase to Pb3O4. To understand the role of oxides in friction and wear, sliding wear tests were also carried out in argon; such tests did not result in the formation of any tribo-oxides, as confirmed by electron probe microanalysis. Oxidative wear is therefore identified as the dominant wear mechanism for the Cu-10 wt% Pb composite under ambient conditions.
Abstract:
The success of AAV2-mediated hepatic gene transfer in human trials for diseases such as hemophilia has been hampered by a combination of low transduction efficiency and a robust immune response directed against these vectors. We have previously shown that AAV2 is targeted for destruction in the cytoplasm by the host-cellular kinase/ubiquitination/proteasomal degradation machinery and that modification of the serine (S)/threonine (T) kinase and lysine (K) targets on the AAV capsid is beneficial. Targeted single mutations of S/T>A (S489A, S498A, T251A) and K>R (K532R) improved the efficiency of gene transfer in vivo compared to wild-type (WT) AAV2 vectors (∼6-14 fold). In the present study, we evaluated whether combined alteration of the phosphodegrons (PD), the phosphorylation sites recognized as degradation signals by ubiquitin ligases, further improves gene transfer efficiency. We generated four multiple-mutant vectors: (i) PD1+3 (S489A+K532R); (ii) PD1+3 together with T251A (S489A+K532R+T251A), T251 being a residue that does not lie in any of the phosphodegrons but had shown increased transduction efficiency compared to the WT-AAV2 vector (∼6 fold) and is conserved in 9 of 10 AAV serotypes (AAV1 to 10); (iii) PD1+3 (S489A+K532R+S498A); and (iv) PD3 (K532R+T251A). We then evaluated these vectors in vitro and in vivo and compared their gene transfer efficiency with either the WT-AAV2 or the best single-mutant (S489A) AAV2 vector. The novel multiple mutations on the AAV2 capsid did not affect overall vector packaging efficiency. All the multiple AAV2 mutants showed superior transduction efficiency in HeLa cells in vitro compared to either the WT (62-72% vs 21%) or the single-mutant S489A (62-72% vs 50%) AAV2 vectors, as demonstrated by FACS analysis (Fig. 1A). On hepatic gene transfer with 5x10^10 vgs per animal in C57BL/6 mice, all the multiple mutants showed increased transgene expression compared to either the WT-AAV2 (∼15-23 fold) or the S489A single-mutant vector (∼2-3 fold) (Fig. 1B and C). These novel multiple-mutant AAV2 vectors also showed higher vector copy numbers in murine hepatocytes 4 weeks post transduction compared to the WT-AAV2 (∼5-6 vs 1.4 vector copies/diploid genome) and to the single mutant S489A (∼5-6 fold vs 3.8 fold) (Fig. 1D). Ongoing studies will assess the therapeutic benefit of one or more of these multiple-mutant vectors in preclinical models of hemophilia.
Abstract:
Recombinant AAV8 vectors have shown significant promise for hepatic gene therapy of hemophilia B. However, the dose-dependent immunotoxicity seen earlier with AAV2 vectors appears to re-emerge with AAV8 vectors as well. It is therefore important to develop novel AAV8 vectors that provide enhanced gene expression at significantly lower vector doses. We hypothesized that AAV8, during its intracellular trafficking, is targeted for destruction in the cytoplasm by the host-cellular kinase/ubiquitination/proteasomal degradation machinery, and that modification of specific serine/threonine kinase or ubiquitination targets on the AAV8 capsid (Fig. 1A) may improve its transduction efficiency. To test this, point mutations at specific serine (S)/threonine (T) > alanine (A) or lysine (K) > arginine (R) residues were generated on the AAV8 capsid. scAAV8-EGFP vectors containing the wild-type (WT) and each of the five S/T/K-mutant (S276A, S501A, S671A, T251A, and K137R) capsids were evaluated for liver transduction efficiency at a dose of 5x10^10 vgs/animal in C57BL/6 mice in vivo. The best-performing mutant was the K137R vector, in terms of both gene expression (46-fold) and vector copy number in hepatocytes (22-fold) compared to WT-AAV8 (Fig. 1B). The K137R-AAV8 vector, which showed significantly decreased ubiquitination of the viral capsid, had reduced activation of markers of the innate immune response [IL-6, IL-12, tumor necrosis factor α, Kupffer cells, and TLR-9]. In addition, animals injected with the K137R mutant demonstrated decreased (2-fold) levels of cross-neutralizing antibodies compared to animals that received the WT-AAV8 vector. To study further the utility of the novel AAV8-K137R mutant in a therapeutic setting, we delivered human coagulation factor IX (h.FIX) under the control of liver-specific promoters (LP1 or hAAT) at two different doses (2.5x10^10 and 1x10^11 vgs per mouse) in 8- to 12-week-old male C57BL/6 mice. As seen in Fig. 1C/D, the circulating levels of h.FIX were higher in all the K137R-AAV8-treated groups than in the WT-AAV8-treated groups at both 2 weeks (62% vs 37% for hAAT constructs and 47% vs 21% for LP1 constructs) and 4 weeks (78% vs 56% for hAAT constructs and 64% vs 30% for LP1 constructs) post hepatic gene transfer. These studies demonstrate the feasibility of using this novel vector for potential gene therapy of hemophilia B.
Abstract:
CrSi and Cr1-xFexSi particles embedded in a CrSi2 matrix have been prepared by hot pressing from CrSi1.9, CrSi2, and CrSi2.1 powders produced by ball milling using either WC or stainless steel (SS) milling media. The samples were characterized by powder X-ray diffraction (XRD), scanning and transmission electron microscopy, and electron microprobe analysis. The final crystallite size of CrSi2 obtained from the XRD patterns is about 40 and 80 nm for SS- and WC-milled powders, respectively, whereas the size of the second-phase inclusions in the hot-pressed samples is about 1-5 μm. The temperature dependence of the electrical resistivity, Seebeck coefficient, thermal conductivity, and figure of merit (ZT) was analyzed in the temperature range from 300 to 800 K. While the ball-milling process results in lower electrical resistivity and thermal conductivity, due to the presence of the inclusions and the refinement of the matrix microstructure, respectively, the Seebeck coefficient is negatively affected by the formation of the inclusions, leading to only a modest improvement in ZT.
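For reference, the standard dimensionless figure of merit ties together exactly the three transport quantities measured here (Seebeck coefficient S, electrical resistivity ρ, thermal conductivity κ, at absolute temperature T):

```latex
ZT = \frac{S^{2}\,T}{\rho\,\kappa}
```

Lower ρ and κ from milling both raise ZT, but S enters squared, so the inclusion-induced degradation of the Seebeck coefficient largely offsets those gains, consistent with the modest net improvement reported.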
Abstract:
Each new generation of GPUs vastly increases the resources available to GPGPU programs. GPU programming models (like CUDA) were designed to scale to use these resources. However, we find that CUDA programs do not actually scale to utilize all available resources, with over 30% of resources going unused on average for the Parboil2 programs used in our work. Current GPUs therefore allow concurrent execution of kernels to improve utilization. In this work, we study concurrent execution of GPU kernels using multiprogrammed workloads on current NVIDIA Fermi GPUs. On two-program workloads from the Parboil2 benchmark suite we find that concurrent execution is often no better than serialized execution. We identify the lack of control over resource allocation to kernels as a major serialization bottleneck. We propose transformations that convert CUDA kernels into elastic kernels, which permit fine-grained control over their resource usage. We then propose several elastic-kernel-aware concurrency policies that offer significantly better performance and concurrency than the current CUDA policy. We evaluate our proposals on real hardware using multiprogrammed workloads constructed from benchmarks in the Parboil2 suite. On average, our proposals increase system throughput (STP) by 1.21x and improve average normalized turnaround time (ANTT) by 3.73x for two-program workloads compared to the current CUDA concurrency implementation.
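The abstract does not detail the elastic-kernel transformation; the sketch below is a minimal illustration, under our own assumptions, of the general idea: decouple the logical grid a kernel was written for from the physical grid it is launched with, so a concurrency policy can dial the kernel's resource usage at launch time. All names (elastic_vec_add, logicalBlocks, physicalBlocks) are hypothetical.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical "elastic" kernel: the logical grid size is a parameter,
// and each physical block loops over logical block indices, so the
// launch can use any physical block count without changing the result.
__global__ void elastic_vec_add(const float *a, const float *b, float *c,
                                int n, int logicalBlocks) {
    for (int lb = blockIdx.x; lb < logicalBlocks; lb += gridDim.x) {
        int i = lb * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20, threads = 256;
    const int logicalBlocks = (n + threads - 1) / threads;
    const size_t bytes = n * sizeof(float);

    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // A concurrency policy could shrink this count to leave SM resources
    // for a co-scheduled kernel, or grow it when the GPU is idle.
    const int physicalBlocks = 32;
    elastic_vec_add<<<physicalBlocks, threads>>>(da, db, dc, n, logicalBlocks);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %.1f (expect 3.0)\n", hc[0]);
    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```

The kernel computes the same answer for any physicalBlocks value, which is what lets a scheduler trade the kernel's occupancy against that of a concurrently running program.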
Abstract:
Free nanoparticles of iron (Fe) and their colloids with high saturation magnetization are in demand for medical and microfluidic applications. However, the oxide layer that forms during processing has made such synthesis a formidable challenge. Lowering the synthesis temperature decreases the rate of oxidation and hence provides a new way of producing, in bulk quantities, pure metallic nanoparticles that are otherwise prone to oxidation. In this paper we propose a methodology, designed around the thermodynamic imperatives of oxidation, to obtain almost oxygen-free iron nanoparticles, with or without organic capping, by controlled milling at low temperatures in a specially designed high-energy ball mill capable of bulk production. The particles can be ultrasonicated to produce colloids and can be bio-capped to produce a transparent solution. The magnetic properties of these nanoparticles confirm their suitability for possible biomedical and other applications.
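The kinetic side of the low-temperature argument can be made explicit with the standard Arrhenius form of a thermally activated oxidation rate (a textbook relation, not one stated in the abstract):

```latex
k(T) = A\,\exp\!\left(-\frac{E_a}{RT}\right)
```

where k is the oxidation rate constant, A the pre-exponential factor, E_a the activation energy, and R the gas constant. The exponential dependence means even a moderate drop in milling temperature T suppresses the oxidation rate sharply, which is what allows nearly oxygen-free Fe particles to survive processing in bulk.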
Abstract:
Estimating program worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that carries no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and that also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built along these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, average accuracy improves to 159% when CPI variance is limited to 50% of its original value, and improvement is marginal beyond that point.
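For reference, Chebyshev's inequality needs only the mean μ and standard deviation σ of the CPI samples; one standard way to instantiate the bound (the paper's exact formulation may differ) is:

```latex
\Pr\big(|\mathrm{CPI}-\mu| \ge k\sigma\big) \le \frac{1}{k^{2}}
\;\Longrightarrow\;
\mathrm{CPI}_{\mathrm{ub}}(p) = \mu + \frac{\sigma}{\sqrt{1-p}},
\qquad \Pr\big(\mathrm{CPI} \le \mathrm{CPI}_{\mathrm{ub}}(p)\big) \ge p
```

For p = 0.99 this places the bound 10σ above the mean CPI, which is why high-variance phases produce pessimistic estimates and motivate refinement into lower-variance sub-phases.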
Abstract:
The presence of software bloat in large, flexible software systems can hurt energy efficiency. However, identifying and mitigating bloat is fairly effort-intensive. To enable such efforts to be directed where there is substantial potential for energy savings, we investigate the impact of bloat on power consumption under different situations. We conduct the first systematic experimental study of the joint power-performance implications of bloat across a range of hardware and software configurations on modern server platforms. The study employs controlled experiments to expose different effects of a common type of Java runtime bloat, excess temporary objects, in the context of the SPECpower_ssj2008 workload. We introduce the notion of equi-performance power reduction to characterize the impact, in addition to peak power comparisons. The results show a wide variation in energy savings from bloat reduction across these configurations. Energy efficiency benefits at peak performance tend to be most pronounced when bloat affects a performance bottleneck and non-bloated resources have low energy proportionality. Equi-performance power savings are highest when bloated resources have a high degree of energy proportionality. We develop an analytical model that establishes a general relation between the resource pressure caused by bloat and its energy efficiency impact under different conditions of resource bottlenecks and energy proportionality. Applying the model to different "what-if" scenarios, we predict the impact of bloat reduction and corroborate these predictions with empirical observations. Our work shows that the prevalent software-only view of bloat is inadequate for assessing its power-performance impact, and instead provides a full-systems approach for reasoning about its implications.
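The abstract does not define equi-performance power reduction formally; a natural reading, in our own notation rather than the paper's, is the power saved at a matched performance level:

```latex
\Delta P_{\mathrm{equi}}(\theta) = P_{\mathrm{bloated}}(\theta) - P_{\mathrm{debloated}}(\theta)
```

where θ is a fixed throughput target and P(θ) the power drawn while sustaining it. Under this reading, highly energy-proportional resources shed the most power when bloat reduction lowers their utilization at fixed θ, consistent with the trend reported above.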
Abstract:
We consider the problem of devising incentive strategies for viral marketing of a product. In particular, we assume that the seller can influence penetration of the product through two incentive programs: (a) direct incentives to potential buyers (influence) and (b) referral rewards for customers who influence potential buyers to make the purchase (exploit connections). The problem is to determine the optimal timing of these programs over a finite time horizon. In contrast to the algorithmic perspective popular in the literature, we take a mean-field approach and formulate the problem as a continuous-time deterministic optimal control problem. We show that the optimal strategy for the seller has a simple structure and can take both forms, namely influence-and-exploit and exploit-and-influence. We also show that in some cases it may be optimal for the seller to deploy incentive programs mostly for low-degree nodes. We support our theoretical results through numerical studies and provide practical insights by analyzing various scenarios.
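The abstract states no equations; purely as an illustration of the kind of formulation described (the dynamics, cost terms, and symbols below are our assumptions, not the paper's), a mean-field optimal control problem of this type might read:

```latex
\dot{x}(t) = \big(\alpha\,u_d(t) + \beta\,u_r(t)\,x(t)\big)\big(1 - x(t)\big), \quad x(0) = x_0,
\qquad
\max_{u_d,\,u_r}\ \int_0^T \Big(\pi\,\dot{x}(t) - c_d\,u_d(t) - c_r\,u_r(t)\,x(t)\Big)\,dt
```

Here x(t) is the fraction of potential buyers reached, u_d the direct-incentive effort, and u_r the referral-reward effort. The direct term acts on non-buyers regardless of adoption, while the referral term scales with the current customer base x(t); that asymmetry is what makes the optimal switching order (influence-and-exploit versus exploit-and-influence) depend on the problem parameters.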
Abstract:
Transaction processing is a key constituent of the IT workload of commercial enterprises (e.g., banks, insurance companies). Even today, in many large enterprises, transaction processing is done by legacy "batch" applications, which run offline and process accumulated transactions. Developers acknowledge the presence of multiple loosely coupled pieces of functionality within individual applications. Identifying such pieces of functionality (which we call "services") is desirable for the maintenance and evolution of these legacy applications. This is a hard problem, which enterprises grapple with, and one without satisfactory automated solutions. In this paper, we propose a novel static-analysis-based solution to the problem of identifying services within transaction-processing programs. We provide a formal characterization of services in terms of control-flow and data-flow properties, which is well-suited to the idioms commonly exhibited by business applications. Our technique combines program slicing with the detection of conditional code regions to identify services in accordance with our characterization. A preliminary evaluation, based on a manual analysis of three real business programs, indicates that our approach can be effective in identifying useful services from batch applications.
Abstract:
Static analysis (aka offline analysis) of a model of an IP network is useful for understanding, debugging, and verifying packet flow properties of the network. Data-flow analysis is a method that has typically been applied to static analysis of programs. We propose a new, data-flow-based approach for static analysis of packet flows in networks. We also investigate an application of our analysis to the problem of inferring a high-level policy from the network, which has been addressed in the past only for a single router.
Abstract:
The success of Mycobacterium tuberculosis as a deadly pathogen lies in its ability to survive under adverse conditions during pre- and post-infectious stages. The transcription process and the regulation of gene expression are central to the pathogen's survival through these harsh conditions. Multiple sigma factors, transcription regulators, and diverse two-component systems contribute to tailoring events to meet the challenges faced by the pathogen. Although the machinery is conserved, many aspects of transcription and its regulation appear to differ in mycobacteria compared to other well-studied organisms. Here, we discuss salient aspects of transcription and its regulation in the context of the distinct physiology of mycobacteria.
Abstract:
Exploiting the performance potential of GPUs requires managing the data transfers to and from them efficiently, which is an error-prone and tedious task. In this paper, we develop a software coherence mechanism to fully automate all data transfers between the CPU and GPU without any assistance from the programmer. Our mechanism uses compiler analysis to identify potentially stale accesses and uses a runtime to initiate transfers as necessary. This allows us to avoid the redundant transfers exhibited by all other existing automatic memory management proposals. We integrate our automatic memory manager into the X10 compiler and runtime, and find that it not only results in smaller and simpler programs but also eliminates redundant memory transfers. Tested on eight programs ported from the Rodinia benchmark suite, it achieves (i) a 1.06x speedup over hand-tuned manual memory management, and (ii) a 1.29x speedup over another recently proposed compiler-runtime automatic memory management system. Compared to other existing runtime-only and compiler-only proposals, it also transfers 2.2x to 13.3x less data on average.
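The abstract describes the mechanism only at a high level; the sketch below (hypothetical names, standard CUDA runtime calls only) illustrates what a software coherence layer of this general shape could look like: track where the valid copy of each array lives, and let compiler-inserted calls trigger a transfer only when an access would otherwise read stale data.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical coherence record for one array: tracks where the
// up-to-date copy currently lives (names are ours, not the paper's).
enum Where { HOST_VALID, DEVICE_VALID, BOTH_VALID };

struct Coherent {
    float *host;
    float *dev;
    size_t bytes;
    Where  state;
};

// Inserted by the compiler before a host read it cannot prove fresh:
// copy back only if the device holds the only valid copy.
void acquire_host(Coherent &c) {
    if (c.state == DEVICE_VALID) {
        cudaMemcpy(c.host, c.dev, c.bytes, cudaMemcpyDeviceToHost);
        c.state = BOTH_VALID;
    }
}

// Inserted before a kernel launch that reads the array.
void acquire_device(Coherent &c) {
    if (c.state == HOST_VALID) {
        cudaMemcpy(c.dev, c.host, c.bytes, cudaMemcpyHostToDevice);
        c.state = BOTH_VALID;
    }
}

// Inserted after writes: the writer's copy becomes the only valid one.
void host_wrote(Coherent &c)   { c.state = HOST_VALID; }
void device_wrote(Coherent &c) { c.state = DEVICE_VALID; }

__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main() {
    const int n = 1024;
    Coherent c { new float[n], nullptr, n * sizeof(float), HOST_VALID };
    cudaMalloc(&c.dev, c.bytes);
    for (int i = 0; i < n; ++i) c.host[i] = 1.0f;

    // Two back-to-back launches: only the first triggers a transfer.
    acquire_device(c); scale<<<4, 256>>>(c.dev, n); device_wrote(c);
    acquire_device(c); scale<<<4, 256>>>(c.dev, n); device_wrote(c);

    acquire_host(c);  // single copy back before the host read
    printf("x[0] = %.1f (expect 4.0)\n", c.host[0]);
    cudaFree(c.dev); delete[] c.host;
    return 0;
}
```

Because transfers fire only on valid-state transitions, the second kernel launch above reuses device-resident data with no copy; eliminating redundant transfers of this kind is what the reported 2.2x to 13.3x reduction in data moved points to.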
Abstract:
Software transactional memory (STM) is a promising programming paradigm for shared-memory multithreaded programs. While STM offers the promise of being less error-prone and more programmer-friendly than traditional lock-based synchronization, it also needs to be competitive in performance for it to be adopted in mainstream software. A major source of performance overhead in STM is transactional aborts. Conflict resolution and aborting a transaction typically happen at the transaction level, which has the advantage of being automatic and application-agnostic. However, it has a substantial disadvantage: the STM declares the entire transaction as conflicting, aborts it, and re-executes it fully, instead of partially re-executing only those parts of the transaction that were affected by the conflict. This "re-execute everything" approach has a significant adverse impact on STM performance. To mitigate the abort overheads, we propose a compiler-aided Selective Reconciliation STM (SR-STM) scheme, wherein certain transactional conflicts can be reconciled by performing partial re-execution of the transaction. Ours is a selective hybrid approach that uses compiler analysis to identify those data accesses which are legal and profitable candidates for reconciliation and applies partial re-execution only to these candidates, while other conflicting data accesses are handled by the default STM approach of abort and full re-execution. We describe the compiler analysis and code transformations required for supporting selective reconciliation. We find that SR-STM is effective in reducing transactional abort overheads, improving performance for a set of five STAMP benchmarks by 12.58% on average and by up to 22.34%.
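To make the contrast with "re-execute everything" concrete, here is a deliberately tiny single-file toy (host-side C++, kept in the same language as the other sketches; our simplification, not the SR-STM implementation): a transaction computes z = f(x) + g(y), and a conflict confined to y re-runs only the g(y) slice.

```cuda
#include <atomic>
#include <cstdio>

// Toy versioned shared variable; commit-time validation compares the
// version seen at read time against the current version.
struct VersionedInt {
    std::atomic<int> value{0};
    std::atomic<int> version{0};
};

int f(int x) { return 2 * x; }   // depends only on x
int g(int y) { return y + 1; }   // depends only on y

int run_txn(VersionedInt &x, VersionedInt &y) {
    int vx = x.version.load(); int rx = x.value.load();
    int vy = y.version.load(); int ry = y.value.load();
    int fx = f(rx), gy = g(ry);
    for (;;) {
        bool x_ok = (x.version.load() == vx);
        bool y_ok = (y.version.load() == vy);
        if (x_ok && y_ok) return fx + gy;      // validation passed: commit
        if (x_ok) {                            // conflict confined to y:
            vy = y.version.load();             // reconcile by partially
            ry = y.value.load();               // re-executing the g(y)
            gy = g(ry);                        // slice only
        } else {                               // x conflicted: fall back to
            vx = x.version.load();             // the default full abort and
            rx = x.value.load(); fx = f(rx);   // re-execution path
            vy = y.version.load();
            ry = y.value.load(); gy = g(ry);
        }
    }
}

int main() {
    VersionedInt x, y;
    x.value = 3; y.value = 4;
    printf("z = %d (expect 11)\n", run_txn(x, y));  // 2*3 + (4+1)
    return 0;
}
```

A real STM must also keep the version/value reads consistent and handle transactional writes; the point here is only the control structure, in which a validation failure on a reconcilable access repairs that slice instead of aborting the whole transaction.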