Abstract:
Multicore architectures place twin demands of energy efficiency and higher performance on DRAM. A variety of schemes have been proposed to address either the latency or the energy consumption of DRAMs. These schemes typically require non-trivial hardware changes and end up improving latency at the cost of energy, or vice versa. One specific DRAM performance problem in multicores is that interleaved accesses from different cores can degrade row-buffer locality. In this paper, based on the temporal and spatial locality characteristics of memory accesses, we propose reorganizing the existing single large row-buffer in a DRAM bank into multiple sub-row buffers (MSRB). This reorganization not only improves row hit rates, and hence the average memory latency, but also brings down the energy consumed by the DRAM. The first major contribution of this work is proposing such a reorganization without requiring any significant changes to the existing, widely accepted DRAM specifications. Our proposed reorganization improves weighted speedup by 35.8%, 14.5% and 21.6% in quad-, eight- and sixteen-core workloads, along with a 42%, 28% and 31% reduction in DRAM energy. The proposed MSRB organization enables management of the multiple row-buffers at the memory controller level. As the memory controller is aware of the behaviour of individual cores, it can implement coordinated buffer allocation schemes for different cores that take program behaviour into account. We demonstrate two such schemes, namely Fairness Oriented Allocation and Performance Oriented Allocation, which show the flexibility that memory controllers can now exploit in our MSRB organization to improve overall performance and/or fairness. Further, the MSRB organization enables additional opportunities for DRAM intra-bank parallelism and selective early precharging of the LRU row-buffer to further improve memory access latencies. These two optimizations together provide an additional 5.9% performance improvement.
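As a rough illustration of the idea (a toy model, not the paper's actual microarchitecture or allocation policy), the following sketch treats a bank's sub-row buffers as a small LRU-managed cache of open rows. Interleaved rows from different cores that would thrash a single row-buffer can now co-reside and hit:

```python
from collections import OrderedDict

class SubRowBufferBank:
    """Toy model of a DRAM bank with multiple LRU-managed sub-row buffers.
    Buffer count and replacement policy here are illustrative assumptions."""

    def __init__(self, num_buffers=4):
        self.buffers = OrderedDict()   # open rows, ordered LRU-first
        self.num_buffers = num_buffers
        self.hits = self.misses = 0

    def access(self, row_id):
        if row_id in self.buffers:
            self.hits += 1
            self.buffers.move_to_end(row_id)   # mark most recently used
            return "hit"
        self.misses += 1
        if len(self.buffers) >= self.num_buffers:
            self.buffers.popitem(last=False)   # precharge/evict the LRU sub-row buffer
        self.buffers[row_id] = True            # activate the new row
        return "miss"

# Two cores interleaving accesses to rows 7 and 3: with a single row-buffer
# every access would miss; with two sub-row buffers the re-reference hits.
bank = SubRowBufferBank(num_buffers=2)
results = [bank.access(r) for r in [7, 3, 7, 9, 3]]
```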
Abstract:
Abrin, from the Abrus precatorius plant, is a potent protein synthesis inhibitor and induces apoptosis in cells. However, the relationship between inhibition of protein synthesis and apoptosis is not well understood. Inhibition of protein synthesis by abrin can lead to accumulation of unfolded protein in the endoplasmic reticulum (ER), causing ER stress. The observed phosphorylation of eukaryotic initiation factor 2 alpha and upregulation of CHOP (CAAT/enhancer binding protein (C/EBP) homologous protein) upon abrin treatment, both important players in ER stress signaling, suggested activation of ER stress in the cells. ER stress is also known to induce apoptosis via stress kinases such as p38 MAPK and JNK. Activation of both pathways was observed upon abrin treatment and found to lie upstream of caspase activation. Moreover, abrin-induced apoptosis was found to be dependent on p38 MAPK but not JNK. We also observed that abrin induced the activation of caspase-2 and caspase-8 and triggered Bid cleavage, leading to loss of mitochondrial membrane potential and thus connecting the signaling events from ER stress to the mitochondrial death machinery.
Abstract:
Multi-GPU machines are being increasingly used in high-performance computing. Each GPU in such a machine has its own memory and does not share the address space with either the host CPU or other GPUs. Hence, applications utilizing multiple GPUs have to manually allocate and manage data on each GPU. Existing works that propose to automate data allocation for GPUs have limitations and inefficiencies in terms of allocation sizes, exploiting reuse, transfer costs, and scalability. We propose a scalable and fully automatic data allocation and buffer management scheme for affine loop nests on multi-GPU machines. We call it the Bounding-Box-based Memory Manager (BBMM). At runtime, BBMM can perform standard set operations like union, intersection, and difference, and find subset and superset relations, on hyperrectangular regions of array data (bounding boxes). It uses these operations, along with some compiler assistance, to identify, allocate, and manage the data required by applications in terms of disjoint bounding boxes. This allows it to (1) allocate exactly or nearly as much data as is required by the computations running on each GPU, (2) efficiently track buffer allocations and hence maximize data reuse across tiles and minimize data transfer overhead, and (3) as a result, maximize utilization of the combined memory on multi-GPU machines. BBMM can work with any choice of parallelizing transformations, computation placement, and scheduling schemes, whether static or dynamic. Experiments run on a four-GPU machine with various scientific programs showed that BBMM reduces data allocations on each GPU by up to 75% compared to current allocation schemes, yields performance of at least 88% of manually written code, and allows excellent weak scaling.
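The core of such a scheme is cheap geometric bookkeeping on bounding boxes. A minimal sketch of two of the operations mentioned, intersection and subset testing, on n-dimensional boxes represented as per-dimension inclusive (lo, hi) ranges (illustrative only, not BBMM's implementation):

```python
def bbox_intersect(a, b):
    """Intersection of two n-D bounding boxes given as per-dimension
    inclusive (lo, hi) ranges; returns None when the boxes are disjoint."""
    out = []
    for (alo, ahi), (blo, bhi) in zip(a, b):
        lo, hi = max(alo, blo), min(ahi, bhi)
        if lo > hi:        # empty in this dimension => boxes are disjoint
            return None
        out.append((lo, hi))
    return out

def bbox_subset(a, b):
    """True when box a is entirely contained in box b."""
    return all(blo <= alo and ahi <= bhi
               for (alo, ahi), (blo, bhi) in zip(a, b))

# Two overlapping 2-D array tiles: the overlap is the region whose
# transfer a buffer manager can avoid repeating.
tile_a = [(0, 9), (0, 9)]
tile_b = [(5, 14), (0, 9)]
overlap = bbox_intersect(tile_a, tile_b)   # [(5, 9), (0, 9)]
```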
Abstract:
A controllable synthesis of phase-pure wurtzite (WZ) ZnS nanostructures at a low temperature of ~220 °C is reported in this work, using ethylenediamine as the soft template, varying the molar concentration of zinc to sulphur precursors, and using different precursors. A significant reduction in the formation temperature required for the synthesis of phase-pure WZ ZnS has been observed. A strong correlation has been observed between the morphology of the synthesized ZnS nanostructures and the precursors used during synthesis. Scanning Electron Microscope (SEM) and Transmission Electron Microscope (TEM) image analyses show that the morphology of the ZnS nanocrystals changes from a block-like to a belt-like structure, with an average length of ~450 nm, when the molar ratio of the zinc to sulphur source is increased from 1:1 to 1:3. An oriented attachment (OA) growth mechanism has been used to explain the observed shape evolution of the synthesized nanostructures. The synthesized nanostructures have been characterized by X-ray diffraction as well as by UV-Vis absorption and photoluminescence (PL) emission spectroscopy. The as-synthesized nanobelts exhibit defect-related visible PL emission. On isochronal annealing of the nanobelts in air in the temperature range of 100-600 °C, white light emission with a Commission Internationale de l'Eclairage 1931 (CIE) chromaticity coordinate of (0.30, 0.34), close to that of white light (0.33, 0.33), was obtained from the ZnO nanostructures formed at an annealing temperature of 600 °C. UV-light-driven degradation of aqueous methylene blue (MB) dye solution has also been demonstrated using the as-synthesized nanobelts, with ~98% dye degradation observed within only 40 min of light irradiation. The synthesized nanobelts, with visible light emission and dye degradation activity, can be used effectively in future optoelectronic devices and in water purification for the removal of dyes.
Abstract:
Cancer has always been a dreadful disease and continues to attract extensive research investigations. Various targets have been identified to restrain cancer; among these, DNA happens to be the most explored one. A wide variety of small molecules, often referred to as `ligands', have been synthesized to target numerous structural features of DNA. The sole purpose of such molecular design has been to interfere with the transcriptional machinery in order to drive the cancer cell toward apoptosis. DNA-targeting ligands act either with sequence specificity, through groove binding and strand cleavage, or by recognizing morphologically distinct higher-order structures such as G-quadruplex DNA. However, in spite of the extensive research, only a tiny fraction of these molecules have reached clinical trials and only a handful are used in chemotherapy. This review attempts to record the journey of DNA-binding small molecules from their inception to cancer therapy via various modifications at the molecular level. Nevertheless, factors like limited bioavailability, severe toxicities and unfavorable pharmacokinetics still prove to be major impediments in the field, which warrants considerable scope for further research investigations. (C) 2014 Published by Elsevier Ltd.
Abstract:
Redox imbalance generates multiple cellular damages, leading to oxidative stress-mediated pathological conditions such as neurodegenerative diseases and cancer progression. Maintenance of reactive oxygen species (ROS) homeostasis, which involves well-defined antioxidant machinery, is therefore critical. In the present study, we have identified for the first time a critical ROS-regulatory function for Magmas, a component of the mammalian protein translocation machinery. Magmas overexpression has been reported in highly metabolically active tissues and cancer cells that are prone to oxidative damage. We found that Magmas regulates cellular ROS levels by controlling both its production and its scavenging. Magmas promotes cellular tolerance toward oxidative stress by enhancing antioxidant enzyme activity, thus preventing induction of apoptosis and damage to cellular components. Magmas enhances the activity of electron transport chain (ETC) complexes, causing reduced ROS production. Our results suggest that the J-like domain of Magmas is essential for maintenance of redox balance. The function of Magmas as a ROS sensor was found to be independent of its role in protein import. The unique ROS-modulatory role of Magmas is highlighted by its ability to increase cell tolerance to oxidative stress even in yeast as a model organism. The cytoprotective capability of Magmas against oxidative damage makes it an important candidate for future investigation in therapeutics of oxidative stress-related diseases.
Abstract:
Dynamic analysis techniques have been proposed to detect potential deadlocks. Analyzing and comprehending each potential deadlock to determine whether it is feasible in a real execution requires significant programmer effort. Moreover, empirical evidence shows that existing analyses are quite imprecise. This imprecision further voids the manual effort invested in reasoning about non-existent defects. In this paper, we address the problems of the imprecision of existing analyses and the subsequent manual effort necessary to reason about deadlocks. We propose a novel approach for deadlock detection by designing a dynamic analysis that intelligently leverages execution traces. To reduce the manual effort, we replay the program by making the execution follow a schedule derived from the observed trace. For a real deadlock, its feasibility is automatically verified if the replay causes the execution to deadlock. We have implemented our approach as part of WOLF and have analyzed many large (up to 160 KLoC) Java programs. Our experimental results show that we are able to identify 74% of the reported defects as true (or false) positives automatically, leaving very few defects for manual analysis. The overhead of our approach is negligible, making it a compelling tool for practical adoption.
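For intuition about what a "potential deadlock" in a trace looks like: detectors of this general kind record which locks were held when each lock was acquired, and flag cycles in the resulting lock-order graph. The toy sketch below does exactly that; it is only an illustration of the lock-graph idea, not WOLF's actual analysis or its replay machinery:

```python
def lock_order_edges(trace):
    """Build held->acquired lock-order edges from a trace of
    (thread, "acquire"/"release", lock) events."""
    held = {}       # thread -> list of locks currently held
    edges = set()
    for thread, op, lock in trace:
        if op == "acquire":
            for h in held.get(thread, []):
                edges.add((h, lock))    # 'lock' taken while holding 'h'
            held.setdefault(thread, []).append(lock)
        else:
            held[thread].remove(lock)
    return edges

def has_cycle(edges):
    """DFS cycle detection on the lock-order graph; a cycle signals a
    potential deadlock."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    done = set()
    def dfs(node, path):
        if node in path:
            return True
        if node in done:
            return False
        done.add(node)
        path.add(node)
        if any(dfs(n, path) for n in graph.get(node, ())):
            return True
        path.discard(node)
        return False
    return any(dfs(n, set()) for n in list(graph))

# T1 takes A before B while T2 takes B before A: a potential deadlock.
trace = [("T1", "acquire", "A"), ("T1", "acquire", "B"),
         ("T1", "release", "B"), ("T1", "release", "A"),
         ("T2", "acquire", "B"), ("T2", "acquire", "A")]
```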
Abstract:
A technology transfer office (TTO) based in a university system has multiple goals. Whilst commercialization is a critical goal, maintenance and cleaning of the TTO's database also need attention. Literature in the area is scarce, and only some researchers make reference to TTO data cleaning. During an attempt to understand the commercial strategy of a university TTO in Bangalore, the challenge of data cleaning was encountered. This paper describes a case study of data cleaning at an Indian university-based TTO, in which 382 patent records were analyzed. The case study first describes the background of the university system. Second, the method used to clean the data and the experiences encountered are highlighted. Insights drawn indicate that patent data cleaning in a TTO is a specialized area which needs attention. Overlooking this activity can have legal implications and may result in an inability to commercialize a patent. Two levels of patent data cleaning are discussed in this case study, and best practices of data cleaning in academic TTOs are presented.
Abstract:
Experiments were conducted at the laboratory level to treat the oxides of nitrogen (NOx) present in raw and dry biodiesel exhaust using a combination of electric discharge plasma and bauxite residue (red mud), an industrial waste byproduct of the aluminum industry. In this paper, the adsorption and possible catalytic properties of bauxite residue are discussed. Nonthermal plasma was generated using dielectric barrier discharges initiated by AC/repetitive-pulse energization. The effect of the corona electrodes on plasma generation was qualitatively studied through NOx cleaning. The plasma reactor and adsorbent reactors were connected in cascade while treating the exhaust. The diesel generator, running on biodiesel fuel, was electrically loaded to study the effectiveness of the cascade system in cleaning the exhaust. Interestingly, under the laboratory conditions studied, the plasma-bauxite residue combination showed good synergy and enhanced NOx removal up to about 90%. With proper scaling up, the suggested cascade system may become an economically feasible option for treating the exhaust of larger installations. The results are discussed with emphasis on the role of bauxite residue as an adsorbent and as a room-temperature catalyst.
Abstract:
Studies were carried out to estimate the power input to Dielectric Barrier Discharge (DBD) reactors powered by AC high voltage, in the context of their application in non-thermal plasma cleaning of exhaust gases. Power input to the reactors was determined both theoretically and experimentally. Four different reactor geometries, energized with 50 Hz and 1.5 kHz AC excitation, were considered for the study. The power estimated theoretically using Manley's equation was found to agree with the experimental results. Results show that the analytically computed capacitance, even without including electrode edge effects, gives sufficiently accurate values that match the measured ones. For complex geometries, where analytical calculation of capacitance is often difficult, a novel method of estimating the reactor capacitance, and hence the power input to the reactor, is introduced in this paper. The predicted results were validated with experiments.
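To make the analytical capacitance route concrete: a coaxial DBD cell can be modeled as the series combination of the gas-gap and dielectric-barrier capacitances, each treated as an ideal coaxial capacitor with edge effects neglected. The sketch below uses illustrative dimensions and permittivity, not those of the reactors studied in the paper:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def coax_capacitance(r_in, r_out, length, eps_r=1.0):
    """Ideal coaxial-cylinder capacitance,
    C = 2*pi*eps0*eps_r*L / ln(r_out/r_in), neglecting edge effects."""
    return 2 * math.pi * EPS0 * eps_r * length / math.log(r_out / r_in)

# Illustrative coaxial DBD: 5 mm inner electrode radius, 2 mm gas gap,
# 2 mm glass barrier (eps_r ~ 4.7), 200 mm active length.
c_gap = coax_capacitance(5e-3, 7e-3, 0.2)              # gas gap
c_diel = coax_capacitance(7e-3, 9e-3, 0.2, eps_r=4.7)  # dielectric barrier
c_cell = c_gap * c_diel / (c_gap + c_diel)             # series combination
```

Below gas breakdown the measured cell capacitance should approach c_cell; once the gap ignites, the effective capacitance moves toward c_diel, which is the distinction the charge-voltage (Lissajous) analysis behind Manley's equation relies on.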
Abstract:
Various cellular processes, including pathogen-specific immune responses, host-pathogen interactions and the related evasion mechanisms, rely on the ability of the immune cells to be reprogrammed accurately and, in many cases, instantaneously. In this context, the exact functions of epigenetic and miRNA-mediated regulation of genes, coupled with recent advances in techniques that aid such studies, make this an attractive field for research. Here, we review examples that involve the epigenetic and miRNA control of the host immune system during infection with bacteria. Interestingly, many pathogens utilize the epigenetic and miRNA machinery to modify and evade the host immune responses. Thus, we believe that global epigenetic and miRNA mapping of such host-pathogen interactions would provide key insights into their cellular functions and help to identify various determinants of therapeutic value.
Abstract:
Affine transformations have proven to be very powerful for loop restructuring due to their ability to model a very wide range of transformations: a single multi-dimensional affine function can represent a long and complex sequence of simpler transformations. Existing affine transformation frameworks like the Pluto algorithm, which include a cost function for modern multicore architectures where coarse-grained parallelism and locality are crucial, consider only a sub-space of transformations to avoid a combinatorial explosion in finding the transformations. The ensuing practical tradeoffs lead to the exclusion of certain useful transformations, in particular transformation compositions involving loop reversals and loop skewing by negative factors. In this paper, we propose an approach to address this limitation by modeling a much larger space of affine transformations in conjunction with the Pluto algorithm's cost function. We experimentally evaluate both the effect on compilation time and the performance of the generated code. The evaluation shows that our new framework, Pluto+, causes no performance degradation on any of the Polybench benchmarks. For Lattice Boltzmann Method (LBM) codes with periodic boundary conditions, it provides a mean speedup of 1.33x over Pluto. We also show that Pluto+ does not increase compile times significantly: on Polybench, it increases overall polyhedral source-to-source optimization time by only 15%, and in the cases where it improves execution time significantly, it increases polyhedral optimization time by only 2.04x.
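As a small, hand-constructed illustration of why skewing by negative factors matters (not an example from the paper): for a dependence distance of (1, 1) on a two-dimensional loop nest, skewing the inner dimension by -1 maps the dependence to (1, 0), leaving the inner dimension dependence-free and hence parallel, while keeping the schedule legal:

```python
# Affine transformation (t, i) -> (t, i - t): a skew of the inner
# dimension by the negative factor -1, the kind of composition excluded
# from the original Pluto search space.
T = [[1, 0],
     [-1, 1]]

def apply_transform(T, v):
    """Apply a linear transformation matrix to an iteration or
    dependence distance vector."""
    return [sum(T[r][c] * v[c] for c in range(len(v))) for r in range(len(T))]

def lex_positive(v):
    """A transformed dependence distance must be lexicographically
    positive for the schedule to preserve the dependence."""
    for x in v:
        if x != 0:
            return x > 0
    return False

dep = [1, 1]                        # e.g. S(t, i) reads the value from S(t-1, i-1)
new_dep = apply_transform(T, dep)   # [1, 0]: inner dimension carries no dependence
```

Since T is unimodular (determinant 1), it is a bijection on the integer iteration space, so every original iteration is still executed exactly once under the new schedule.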
Abstract:
The polyhedral model provides an expressive intermediate representation that is convenient for the analysis and subsequent transformation of affine loop nests. Several heuristics exist for achieving complex program transformations in this model; however, there is also considerable scope to utilize it to tackle the problem of automatic memory footprint optimization. In this paper, we present a new automatic storage optimization technique which can be used to achieve both intra-array and inter-array storage reuse with a pre-determined schedule for the computation. Our approach works by finding statement-wise storage-partitioning hyperplanes that partition a unified global array space so that values with overlapping live ranges are not mapped to the same partition. Our heuristic is driven by a fourfold objective function which not only minimizes the dimensionality and storage requirements of the arrays required for each high-level statement, but also maximizes inter-statement storage reuse. The storage mappings obtained using our heuristic can be asymptotically better than those obtained by any existing technique. We implement our technique and demonstrate its practical impact by evaluating its effectiveness on several benchmarks chosen from the domains of image processing, stencil computations, and high-performance computing.
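To make the intra-array case concrete, consider a Jacobi-style stencil in which each value a(t, i) is dead once time step t+1 completes; the classic modulo mapping (t, i) -> (t mod 2, i) then shrinks storage from (T+1) x N to 2 x N. This simple special case is only for intuition; storage-partitioning hyperplanes generalize such mappings rather than being limited to them:

```python
# 1-D three-point Jacobi smoothing with modulo storage reuse: live ranges
# span one time step, so two rows of length N suffice for T steps.
N, T = 8, 5
a = [[0.0] * N for _ in range(2)]     # 2 x N buffer instead of (T+1) x N
a[0] = [float(i) for i in range(N)]   # initial condition at t = 0

for t in range(1, T + 1):
    cur, prev = t % 2, (t - 1) % 2
    a[cur][0], a[cur][-1] = a[prev][0], a[prev][-1]   # carry boundary values
    for i in range(1, N - 1):
        a[cur][i] = (a[prev][i - 1] + a[prev][i] + a[prev][i + 1]) / 3.0

result = a[T % 2]   # the live time step after T iterations
```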