55 results for Refrigeration and refrigerating machinery


Relevance:

30.00%

Publisher:

Abstract:

An earlier version of an indigenously developed Pressure Wave Generator (PWG) could not develop the pressure ratio necessary to satisfactorily operate a pulse tube cooler, largely due to high blow-by losses in the piston-cylinder seal gap and to a few design deficiencies. The effect of parameters such as seal gap, piston diameter, piston stroke, moving mass, and piston back volume on performance is studied analytically. The PWG was modified based on this analysis, and its performance was measured experimentally. The modifications yield a significant improvement in PWG performance. The improved PWG is tested with the same pulse tube cooler but with different inertance tube configurations. A no-load temperature of 130 K is achieved with an inertance tube configuration designed using the Sage software. The delivered PV power is estimated to be 28.4 W, which can produce a refrigeration of about 1 W at 80 K.

Relevance:

30.00%

Publisher:

Abstract:

Multi-GPU machines are being increasingly used in high-performance computing. Each GPU in such a machine has its own memory and does not share the address space either with the host CPU or with other GPUs. Hence, applications utilizing multiple GPUs have to manually allocate and manage data on each GPU. Existing works that propose to automate data allocation for GPUs have limitations and inefficiencies in terms of allocation sizes, exploiting reuse, transfer costs, and scalability. We propose a scalable and fully automatic data allocation and buffer management scheme for affine loop nests on multi-GPU machines, which we call the Bounding-Box-based Memory Manager (BBMM). At runtime, BBMM can perform standard set operations such as union, intersection, and difference, as well as find subset and superset relations, on hyperrectangular regions of array data (bounding boxes). It uses these operations, along with some compiler assistance, to identify, allocate, and manage the data required by applications in terms of disjoint bounding boxes. This allows it to (1) allocate exactly or nearly as much data as is required by the computation running on each GPU, (2) efficiently track buffer allocations and hence maximize data reuse across tiles and minimize data transfer overhead, and (3) as a result, maximize utilization of the combined memory on multi-GPU machines. BBMM can work with any choice of parallelizing transformations, computation placement, and scheduling schemes, whether static or dynamic. Experiments run on a four-GPU machine with various scientific programs showed that BBMM reduces data allocations on each GPU by up to 75% compared to current allocation schemes, achieves at least 88% of the performance of manually written code, and allows excellent weak scaling.
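The set operations on hyperrectangles that the abstract describes can be sketched concretely. The following is a minimal illustration, not BBMM's actual implementation: each bounding box is a tuple of per-dimension (lo, hi) intervals, and the example data is made up.

```python
# Hypothetical sketch of the bounding-box set operations BBMM relies on:
# a box is a tuple of per-dimension (lo, hi) intervals (inclusive).

def intersect(a, b):
    """Intersection of two bounding boxes; None if they are disjoint."""
    box = tuple((max(al, bl), min(ah, bh)) for (al, ah), (bl, bh) in zip(a, b))
    return box if all(lo <= hi for lo, hi in box) else None

def contains(a, b):
    """True if box a is a superset of box b."""
    return all(al <= bl and bh <= ah for (al, ah), (bl, bh) in zip(a, b))

def bounding_union(a, b):
    """Smallest single box covering both (may over-approximate the true union)."""
    return tuple((min(al, bl), max(ah, bh)) for (al, ah), (bl, bh) in zip(a, b))

# A 2-D tile accessed by one GPU vs. data already resident on that GPU:
tile     = ((0, 127), (64, 191))
resident = ((0, 255), (0, 127))
print(intersect(tile, resident))  # overlapping region that need not be re-transferred
print(contains(resident, tile))   # False: part of the tile must still be allocated
```

Subset/superset queries like `contains` are what let a runtime such as BBMM decide whether a tile's data is already on-device, so only the non-overlapping remainder needs allocation and transfer.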

Relevance:

30.00%

Publisher:

Abstract:

Cancer has always been a dreadful disease and continues to attract extensive research investigation. Various targets have been identified to restrain cancer; among these, DNA happens to be the most explored one. A wide variety of small molecules, often referred to as `ligands', have been synthesized to target numerous structural features of DNA. The sole purpose of such molecular design has been to interfere with the transcriptional machinery in order to drive the cancer cell toward apoptosis. The mode of action of DNA-targeting ligands focuses either on sequence specificity, through groove binding and strand cleavage, or on recognizing morphologically distinct higher-order structures such as G-quadruplex DNA. However, in spite of the extensive research, only a tiny fraction of these molecules have reached clinical trials, and only a handful are used in chemotherapy. This review attempts to record the journey of DNA-binding small molecules from their inception to cancer therapy via various modifications at the molecular level. Nevertheless, factors such as limited bioavailability, severe toxicities, and unfavorable pharmacokinetics still prove to be the major impediments in the field, which warrants considerable scope for further research investigation. (C) 2014 Published by Elsevier Ltd.

Relevance:

30.00%

Publisher:

Abstract:

The high temperature, high pressure transcritical condensing CO2 (TC-CO2) cycle is compared with the transcritical steam (TC-steam) cycle. Performance indicators such as thermal efficiency, volumetric flow rate, and entropy generation are used to analyze the power cycles, wherein irreversibilities in the turbomachinery and heat exchangers are taken into account. Although both cycles yield comparable thermal efficiencies under identical operating conditions, a TC-CO2 plant is significantly more compact than a TC-steam plant; the large specific volume of steam is responsible for the bulky system. It is also found that the performance of a TC-CO2 cycle is less sensitive to source temperature variations, which is an important requirement for a solar thermal system. In addition, issues such as wet expansion in the turbine and vacuum in the condenser are absent in a TC-CO2 cycle. External heat addition to the working fluid is assumed to take place through a heat transfer fluid (HTF), which receives heat from a solar receiver. A TC-CO2 system receives heat through a single HTF loop, whereas for the TC-steam cycle two HTF loops in series are proposed to avoid a high temperature differential between the steam and the HTF. (C) 2013 P. Garg. Published by Elsevier Ltd.
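The performance indicators mentioned above follow from first-law bookkeeping. The sketch below illustrates only the definitions; all numerical values are made-up placeholders, not results from the paper.

```python
# Illustrative first-law definitions behind the performance indicators.
# All state-point numbers below are invented for illustration only.

def thermal_efficiency(w_turbine, w_pump, q_in):
    """eta_th = net work out / heat added (all in consistent units, e.g. kJ/kg)."""
    return (w_turbine - w_pump) / q_in

def volumetric_flow(mass_flow, specific_volume):
    """Volumetric flow rate (m^3/s); a large specific volume at turbine exit,
    as with low-pressure steam, implies bulky flow passages and equipment."""
    return mass_flow * specific_volume

print(thermal_efficiency(500.0, 80.0, 1400.0))  # -> 0.3
# Made-up comparison of turbine-exit specific volumes: dense supercritical
# CO2 vs. low-pressure steam, showing why the CO2 plant is far more compact.
print(volumetric_flow(1.0, 0.0062))  # CO2-like
print(volumetric_flow(1.0, 10.0))    # steam-like, orders of magnitude larger
```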

Relevance:

30.00%

Publisher:

Abstract:

Redox imbalance generates multiple cellular damages, leading to oxidative stress-mediated pathological conditions such as neurodegenerative diseases and cancer progression. Therefore, maintenance of reactive oxygen species (ROS) homeostasis is of utmost importance and involves well-defined antioxidant machinery. In the present study, we have identified for the first time that Magmas, a component of the mammalian protein translocation machinery, performs a critical ROS-regulatory function. Magmas overexpression has been reported in highly metabolically active tissues and cancer cells that are prone to oxidative damage. We found that Magmas regulates cellular ROS levels by controlling both its production and its scavenging. Magmas promotes cellular tolerance toward oxidative stress by enhancing antioxidant enzyme activity, thus preventing induction of apoptosis and damage to cellular components. Magmas enhances the activity of electron transport chain (ETC) complexes, causing reduced ROS production. Our results suggest that the J-like domain of Magmas is essential for maintenance of redox balance. The function of Magmas as a ROS sensor was found to be independent of its role in protein import. The unique ROS-modulatory role of Magmas is highlighted by its ability to increase cell tolerance to oxidative stress even in the yeast model organism. The cytoprotective capability of Magmas against oxidative damage makes it an important candidate for future investigation in the therapeutics of oxidative stress-related diseases.

Relevance:

30.00%

Publisher:

Abstract:

Dynamic analysis techniques have been proposed to detect potential deadlocks. Analyzing and comprehending each potential deadlock to determine whether it is feasible in a real execution requires significant programmer effort. Moreover, empirical evidence shows that existing analyses are quite imprecise. This imprecision further voids the manual effort invested in reasoning about non-existent defects. In this paper, we address the problems of the imprecision of existing analyses and the subsequent manual effort necessary to reason about deadlocks. We propose a novel approach for deadlock detection by designing a dynamic analysis that intelligently leverages execution traces. To reduce the manual effort, we replay the program by making the execution follow a schedule derived from the observed trace. For a real deadlock, its feasibility is automatically verified if the replay causes the execution to deadlock. We have implemented our approach as part of WOLF and have analyzed many large (up to 160 KLoC) Java programs. Our experimental results show that we are able to identify 74% of the reported defects as true (or false) positives automatically, leaving very few defects for manual analysis. The overhead of our approach is negligible, making it a compelling tool for practical adoption.
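The detection stage that such dynamic analyses build on can be sketched in a few lines: record which locks each thread holds when it acquires a new one, build a lock-order graph from those observations, and report cycles as potential deadlocks. This is a generic illustration of the detection idea, not WOLF's algorithm, and the replay/verification stage is not modeled.

```python
# Generic sketch of trace-based potential-deadlock detection: cycles in the
# lock-order graph built from observed (held lock -> acquired lock) edges.
from collections import defaultdict

def lock_order_graph(trace):
    """trace: list of (thread, locks_held, lock_acquired) events."""
    g = defaultdict(set)
    for _, held, acq in trace:
        for h in held:
            g[h].add(acq)   # edge h -> acq: acq was taken while h was held
    return g

def has_cycle(g):
    """DFS three-color cycle check over the lock-order graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def dfs(u):
        color[u] = GRAY
        for v in g[u]:
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False
    return any(color[u] == WHITE and dfs(u) for u in list(g))

# Thread T1 takes A then B; thread T2 takes B then A: a potential deadlock.
trace = [("T1", {"A"}, "B"), ("T2", {"B"}, "A")]
print(has_cycle(lock_order_graph(trace)))  # True
```

A cycle found this way is only a *potential* deadlock, which is precisely why the abstract's replay step matters: the schedule derived from the trace either drives the execution into the deadlock (confirming it) or fails to, flagging a likely false positive.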

Relevance:

30.00%

Publisher:

Abstract:

Nonhomologous DNA end joining (NHEJ) is one of the major double-strand break (DSB) repair pathways in higher eukaryotes. Recently, it has been shown that alternative NHEJ (A-NHEJ) occurs in the absence of classical NHEJ and is implicated in chromosomal translocations leading to cancer. In the present study, we have developed a novel biochemical assay system utilizing DSBs flanked by varying lengths of microhomology to study microhomology-mediated alternative end joining (MMEJ). We show that MMEJ can operate in normal cells when microhomology is present, irrespective of the occurrence of robust classical NHEJ. The length of the microhomology determines the efficiency of MMEJ, with 5 nt being obligatory. Using this biochemical approach, we show that the products obtained are due to MMEJ, which is dependent on MRE11, NBS1, LIGASE III, XRCC1, FEN1 and PARP1. Thus, we define the enzymatic machinery and microhomology requirements of alternative NHEJ using a well-defined biochemical system.

Relevance:

30.00%

Publisher:

Abstract:

We hypothesized that the AAV2 vector is targeted for destruction in the cytoplasm by the host cellular kinase/ubiquitination/proteasomal machinery and that modification of their targets on the AAV2 capsid may improve its transduction efficiency. In vitro analysis with pharmacological inhibitors of cellular serine/threonine kinases (protein kinase A, protein kinase C, casein kinase II) showed an increase (20-90%) in AAV2-mediated gene expression. The three-dimensional structure of the AAV2 capsid was then analyzed to predict the sites of ubiquitination and phosphorylation. Three phosphodegrons, which are the phosphorylation sites recognized as degradation signals by ubiquitin ligases, were identified. Mutation targets comprising eight serine (S), seven threonine (T), or nine lysine (K) residues were selected in and around the phosphodegrons on the basis of their solvent accessibility, overlap with the receptor binding regions, overlap with interaction interfaces of capsid proteins, and their evolutionary conservation across AAV serotypes. AAV2-EGFP vectors with the wild-type (WT) capsid or mutant capsids (15 S/T → alanine [A] or 9 K → arginine [R] single mutants, or 2 double K → R mutants) were then evaluated in vitro. The transduction efficiencies of 11 S/T → A and 7 K → R vectors were significantly higher (~63-90%) than those of the AAV2-WT vectors (~30-40%). Further, hepatic gene transfer of these mutant vectors in vivo resulted in higher vector copy numbers (up to 4.9-fold) and transgene expression (up to 14-fold) than observed with the AAV2-WT vector. One of the mutant vectors, S489A, generated ~8-fold fewer antibodies that could be cross-neutralized by AAV2-WT. This study thus demonstrates the feasibility of the use of these novel AAV2 capsid mutant vectors in hepatic gene therapy.

Relevance:

30.00%

Publisher:

Abstract:

Affine transformations have proven to be very powerful for loop restructuring due to their ability to model a very wide range of transformations. A single multi-dimensional affine function can represent a long and complex sequence of simpler transformations. Existing affine transformation frameworks, such as the Pluto algorithm, that include a cost function for modern multicore architectures where coarse-grained parallelism and locality are crucial, consider only a subspace of transformations to avoid a combinatorial explosion in finding the transformations. The ensuing practical tradeoffs lead to the exclusion of certain useful transformations, in particular, transformation compositions involving loop reversals and loop skewing by negative factors. In this paper, we propose an approach to address this limitation by modeling a much larger space of affine transformations in conjunction with the Pluto algorithm's cost function. We perform an experimental evaluation of both the effect on compilation time and the performance of the generated code. The evaluation shows that our new framework, Pluto+, causes no performance degradation on any of the Polybench benchmarks. For Lattice Boltzmann Method (LBM) codes with periodic boundary conditions, it provides a mean speedup of 1.33x over Pluto. We also show that Pluto+ does not increase compile times significantly. Experimental results on Polybench show that Pluto+ increases overall polyhedral source-to-source optimization time by only 15%. In cases where it improves execution time significantly, it increased polyhedral optimization time by only 2.04x.
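The kind of composition Pluto+ admits can be made concrete with a toy example. Below, a linear map with a negative coefficient combines skewing with a reversal of the inner loop, exactly the class of transformations the abstract says is excluded from the default search space. The map and iteration domain are hypothetical, chosen only for illustration.

```python
# Toy illustration: applying an integer affine (here purely linear) map to
# iteration vectors of a 2-D loop nest. The map T(i, j) = (i, i - j)
# composes skewing of j by i with a reversal of j (negative coefficient),
# a composition excluded by Pluto's default transformation subspace.

def transform(point, matrix):
    """Apply a linear map (list of rows) to an iteration vector."""
    return tuple(sum(c * x for c, x in zip(row, point)) for row in matrix)

T = [(1, 0),    # new outer coordinate: i
     (1, -1)]   # new inner coordinate: i - j  (skew + reversal)

domain = [(i, j) for i in range(3) for j in range(3)]
image = [transform(p, T) for p in domain]
print(image[:3])  # iteration points (0,0), (0,1), (0,2) map to (0,0), (0,-1), (0,-2)
```

Because the matrix is unimodular, the map is a bijection on integer points: the transformed nest executes the same iterations in a new order, which is what lets a cost model trade off locality and parallelism over this larger space.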

Relevance:

30.00%

Publisher:

Abstract:

The polyhedral model provides an expressive intermediate representation that is convenient for the analysis and subsequent transformation of affine loop nests. Several heuristics exist for achieving complex program transformations in this model. However, there is also considerable scope to utilize this model to tackle the problem of automatic memory footprint optimization. In this paper, we present a new automatic storage optimization technique which can be used to achieve both intra-array and inter-array storage reuse with a predetermined schedule for the computation. Our approach works by finding statement-wise storage partitioning hyperplanes that partition a unified global array space so that values with overlapping live ranges are not mapped to the same partition. Our heuristic is driven by a fourfold objective function which not only minimizes the dimensionality and storage requirements of the arrays required for each high-level statement, but also maximizes inter-statement storage reuse. The storage mappings obtained using our heuristic can be asymptotically better than those obtained by any existing technique. We implement our technique and demonstrate its practical impact by evaluating its effectiveness on several benchmarks chosen from the domains of image processing, stencil computations, and high-performance computing.
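The core idea, mapping values with non-overlapping live ranges to the same storage, can be illustrated with the classic stencil contraction that such techniques generalize. In a 1-D stencil, `a[t][i]` depends only on values from time step `t-1`, so live ranges span two steps and the time dimension contracts to size 2 via `t mod 2`. This is a well-known special case used here only to illustrate the principle; the paper's hyperplane-finding heuristic itself is not shown.

```python
# Classic intra-array contraction for a 1-D Jacobi-style stencil: since each
# step reads only the previous step, storage of T x N values contracts to
# 2 x N by mapping the time index t to t mod 2 (no two simultaneously live
# values share a storage slot).

N, T = 8, 4
buf = [[0.0] * N, [1.0] * N]   # contracted storage: 2 rows instead of T rows

for t in range(1, T):
    cur, prev = t % 2, (t - 1) % 2
    for i in range(1, N - 1):
        buf[cur][i] = 0.5 * (buf[prev][i - 1] + buf[prev][i + 1])

print(len(buf))  # storage stays at 2 rows regardless of T
```

A storage-partitioning heuristic of the kind described above derives such modulo mappings automatically, and in higher dimensions and across multiple statements, rather than relying on this hand-derived pattern.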