901 results for Machinery, Kinematics of.
Abstract:
Regulation of the transcription machinery is one of the many ways to achieve control of gene expression. This can be done either at the transcription initiation stage or at the elongation stage. Several methodologies are known to inhibit transcription initiation via targeting of double-stranded (ds) DNA by: (i) synthetic oligonucleotides; (ii) ds-DNA-specific, sequence-selective minor-groove binders (distamycin A), intercalators (daunomycin) and combilexins; and (iii) small molecule (peptide or intercalator)-oligonucleotide conjugates. In some cases, instead of ds-DNA, higher-order G-quadruplex structures are formed at the transcription start site. In this regard, G-quadruplex DNA-specific small molecules play a significant role in inhibition of the transcription machinery. Different types of designer DNA-binding agents act as powerful sequence-specific gene modulators, exerting their effects from transcription regulation to gene modification. However, most of these chemotherapeutic agents have serious side effects. Accordingly, the challenge remains to design DNA-binding molecules that not only achieve maximum specific DNA-binding affinity and efficient cellular and nuclear transport, but also do not interfere with the functions of normal cells.
Abstract:
Parkinson's disease (PD) is the second most prevalent progressive neurological disorder and is commonly associated with impaired mitochondrial function in dopaminergic neurons. Although familial PD is multifactorial in nature, a recent genetic screen involving PD patients identified two mitochondrial Hsp70 variants (P509S and R126W) that are implicated in PD pathogenesis. However, the molecular mechanisms by which mtHsp70 PD variants contribute to PD progression remain elusive. In this article, we provide mechanistic insights into the mitochondrial dysfunction associated with human mtHsp70 PD variants. Biochemically, the R126W variant showed severely compromised protein stability and was found to be highly susceptible to aggregation under physiological conditions. Strikingly, on the other hand, the P509S variant exhibits significantly enhanced interaction with J-protein cochaperones involved in the folding and import machinery, thus altering the overall regulation of the chaperone-mediated folding cycle and protein homeostasis. To assess the impact of mtHsp70 PD mutations at the cellular level, we developed yeast as a model system by making analogous mutations in the Ssc1 ortholog. Interestingly, the PD mutations in yeast (R103W and P486S) exhibit multiple in vivo phenotypes associated with mitochondrial dysfunction, including compromised growth, impairment in protein translocation, reduced functional mitochondrial mass, mitochondrial DNA loss, respiratory incompetency and increased susceptibility to oxidative stress. In addition, the R103W protein is prone to aggregation in vivo owing to reduced stability, whereas P486S showed enhanced interaction with J-proteins, thus remarkably recapitulating the cellular defects observed in the human PD variants. Taken together, our findings provide evidence for the direct involvement of mtHsp70 as a susceptibility factor in PD.
Abstract:
Lack of supervision in clustering algorithms often leads to clusters that are not useful or interesting to human reviewers. We investigate whether supervision can be automatically transferred for clustering a target task, by providing a relevant supervised partitioning of a dataset from a different source task. The target clustering is made more meaningful for the human user by trading off intrinsic clustering goodness on the target task against alignment with relevant supervised partitions in the source task, wherever possible. We propose a cross-guided clustering algorithm that builds on traditional k-means by aligning the target clusters with source partitions. The alignment process makes use of a cross-task similarity measure that discovers hidden relationships across tasks. When the source and target tasks correspond to different domains with potentially different vocabularies, we propose a projection approach using pivot vocabularies for the cross-domain similarity measure. Using multiple real-world and synthetic datasets, we show that our approach improves clustering accuracy significantly over traditional k-means and state-of-the-art semi-supervised clustering baselines, over a wide range of data characteristics and parameter settings.
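The paper's cross-guided algorithm is not reproduced here, but the trade-off it describes can be sketched: a k-means variant whose centroid update is blended toward matched source partitions. The function name, the weight `alpha`, and the use of nearest source centroids as the cross-task matching are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def assign(X, centroids):
    # Nearest-centroid assignment under squared Euclidean distance.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def cross_guided_kmeans(X, source_centroids, k, alpha=0.5, iters=50, seed=0):
    # Illustrative sketch: standard k-means whose centroid update is
    # pulled toward the nearest source-partition centroid, trading
    # intrinsic clustering goodness against cross-task alignment.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = assign(X, centroids)
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:
                continue  # keep the old centroid for an empty cluster
            target_mean = members.mean(axis=0)
            # Hypothetical cross-task step: blend in the closest source
            # centroid with weight alpha.
            s = source_centroids[
                ((source_centroids - target_mean) ** 2).sum(axis=1).argmin()]
            centroids[j] = (1 - alpha) * target_mean + alpha * s
    return assign(X, centroids), centroids
```

With `alpha = 0` this degenerates to plain k-means; larger values of `alpha` trade intrinsic fit for alignment with the source partitions.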
Abstract:
Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology, a key aspect of which is self-localization. Once a mesh topology has been obtained in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes as proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes, and that some nodes are designated as anchors with known locations. First, we obtain high-probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
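As a minimal illustration of the approximation being analyzed, the sketch below deploys nodes uniformly in a unit square, builds the random geometric graph for a communication radius r (the node count and radius are arbitrary choices, not the paper's), and reports the spread of Euclidean distances among all nodes at each hop count from an anchor:

```python
import numpy as np
from collections import deque

def hop_counts(points, r, anchor=0):
    # BFS hop distance from the anchor in the random geometric graph
    # that connects every pair of nodes within range r.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    adj = d2 <= r * r
    hops = np.full(len(points), -1)
    hops[anchor] = 0
    queue = deque([anchor])
    while queue:
        u = queue.popleft()
        for v in np.flatnonzero(adj[u]):
            if hops[v] < 0:
                hops[v] = hops[u] + 1
                queue.append(v)
    return hops

rng = np.random.default_rng(1)
pts = rng.random((2000, 2))          # dense uniform deployment
h = hop_counts(pts, r=0.05)
dist = np.linalg.norm(pts - pts[0], axis=1)
for k in range(1, 6):                # Euclidean spread at each hop count
    mask = h == k
    if mask.any():
        print(k, dist[mask].min().round(3), dist[mask].max().round(3))
```

In dense deployments the printed ranges concentrate, which is the regime in which the hop-distance-to-Euclidean-distance proportionality is a reasonable approximation.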
Abstract:
Solid lubricant nanoparticles suspended in oil are good lubricating options for practical machinery. In this article, we select a range of dispersants, based on their polar moieties, to suspend 50-nm molybdenum disulfide particles in an industrial base oil. The suspension is used to lubricate a steel-on-steel sliding contact. A nitrogen-based polymeric dispersant (aminopropyl trimethoxy silane) with a free amine group and an oxygen-based polymeric dispersant (sorbitan monooleate), when grafted on the particle, charge the particle negatively and yield an agglomerate size that is almost the same as that of the original particle. Lubrication of the contact by these suspensions gives a coefficient of friction in the ~0.03 range. The grafting of these surfactants on the particle is shown here to be chemical in nature and strong, as the grafts survive the mechanical shear stress encountered in tribology. Such grafts are superior to those of other silane-based test surfactants, which have weak functional groups. In the latter case, the particles, bereft of strong grafts, agglomerate easily in the lubricant and give a coefficient of friction in the 0.08-0.12 range. This article investigates the mechanism of frictional energy dissipation as influenced by the chemistry of the surfactant molecule.
Abstract:
Points-to analysis is a key compiler analysis. Several memory-related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it to propagate more and more points-to information across its subset edges. So far, the structure of the constraint graph has been exploited only trivially for efficient propagation of information, e.g., in identifying cyclic components or propagating information in topological order. We perform a careful study of its structure and propose a new inclusion-based, flow-insensitive, context-sensitive points-to analysis algorithm based on the notion of dominant pointers. We also propose a new kind of pointer equivalence based on dominant pointers, which provides significantly more opportunities for reducing the number of pointers tracked during the analysis. Building on this hitherto unexplored form of pointer equivalence, our algorithm uses incremental dominator updates to compute points-to information efficiently. Using a large suite of programs consisting of SPEC 2000 benchmarks and five large open-source programs, we show that our points-to analysis is 88% faster than BDD-based Lazy Cycle Detection and 2x faster than Deep Propagation. We argue that detecting dominator-based pointer equivalence is key to improving points-to analysis efficiency.
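The dominant-pointer machinery itself is not spelled out in the abstract, so the sketch below shows only the baseline it accelerates: inclusion-based, flow-insensitive propagation of points-to sets over the subset edges of a constraint graph, driven by a worklist. Load/store constraints, which add edges dynamically during the analysis, are omitted for brevity:

```python
from collections import defaultdict, deque

def inclusion_based_points_to(addr_of, copies):
    # addr_of maps p -> objects from `p = &o` constraints;
    # copies is a list of (p, q) pairs from `p = q`, i.e. pts(q) <= pts(p).
    # The paper's contribution prunes this propagation using dominant
    # pointers; this sketch is only the baseline being improved upon.
    pts = defaultdict(set)
    succ = defaultdict(set)              # subset edges q -> {p}
    for p, objs in addr_of.items():
        pts[p] |= set(objs)
    for p, q in copies:
        succ[q].add(p)
    work = deque(pts.keys())
    while work:                          # propagate until a fixed point
        q = work.popleft()
        for p in succ[q]:
            before = len(pts[p])
            pts[p] |= pts[q]
            if len(pts[p]) != before:    # grew: reschedule p
                work.append(p)
    return dict(pts)

# p = &a; q = &b; r = p; r = q   =>   pts(r) = {a, b}
print(inclusion_based_points_to({"p": {"a"}, "q": {"b"}},
                                [("r", "p"), ("r", "q")]))
```

Pointer-equivalence techniques such as the one the paper proposes shrink this propagation by tracking one representative for sets of pointers guaranteed to have identical points-to sets.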
Abstract:
In large flexible software systems, bloat occurs in many forms, causing excess resource utilization and resource bottlenecks. This results in lost throughput and wasted joules. However, mitigating bloat is not easy; efforts are best applied where the savings would be substantial. To aid this, we develop an analytical model establishing the relation between resource bottlenecks, bloat, performance and power. Analyses with the model place into perspective results from the first experimental study of the power-performance implications of bloat. In the experiments we find that while bloat reduction can provide as much as 40% energy savings, the degree of impact depends on hardware and software characteristics. We confirm predictions from our model with selected results from our experimental study. Our findings show that a software-only view is inadequate when assessing the effects of bloat. The impact of bloat on physical resource usage and power should be understood from a full-system perspective in order to properly deploy bloat reduction solutions and reap their power-performance benefits.
Abstract:
Comments constitute an important part of Web 2.0. In this paper, we consider comments on news articles. To simplify the task of relating the comment content to the article content it is about, we propose showing comments alongside article segments and explore the automatic mapping of comments to article segments. This task is challenging because of the vocabulary mismatch between the articles and the comments. We present supervised and unsupervised techniques for aligning comments to the segments of the article they are about. More specifically, we provide a novel formulation of the supervised alignment problem using the framework of structured classification. Our experimental results show that the structured classification model performs better than unsupervised matching and a binary classification model.
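For concreteness, here is a sketch of an unsupervised matching baseline of the kind the paper compares against: map each comment to the article segment with the highest TF-IDF cosine similarity. The structured classification model itself is not reproduced; the function name and toy data are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def align_comments(segments, comments):
    # Unsupervised matching: each comment goes to the segment with the
    # highest TF-IDF cosine similarity. Vocabulary mismatch between
    # articles and comments is exactly what makes this baseline weak.
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(segments + comments)
    seg_v, com_v = tfidf[: len(segments)], tfidf[len(segments):]
    sims = cosine_similarity(com_v, seg_v)
    return sims.argmax(axis=1)           # best segment index per comment

segments = ["The mayor announced a new transit budget.",
            "Critics say the plan ignores cycling infrastructure."]
comments = ["Where do cyclists fit in this plan?"]
print(align_comments(segments, comments))   # -> [1]
```

A structured model can improve on this by scoring the alignment of all comments jointly rather than matching each comment independently.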
Abstract:
This paper identifies and characterizes a special multi-degree-of-freedom toggle behavior, called double toggle, observed in a typical MCCB switching mechanism. For an idealized system, the condition for the toggle sequence is derived geometrically. Existing tools available in a multi-body dynamics package are used to explore the dynamic behavior of such systems parametrically. The double toggle mechanism is found to make the system insensitive to the operator's behavior; however, the system is vulnerable under extreme usage. The linkage kinematics and stopper locations are found to have a dominant role in the behavior of the system. It is revealed that the operating time is immune to the inertial properties of the input link and sensitive to those of the output link. Novel designs exploiting this observation, in terms of spring and toggle placements, to enhance switching performance are also reported. A detailed study revealed that strategic placement of the spring helps in selective alteration of system performance. Thus, the study establishes the critical importance of the kinematic design of the MCCB over its dynamic parameters. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
There have been several studies on the performance of TCP-controlled transfers over an infrastructure IEEE 802.11 WLAN, assuming perfect channel conditions. In this paper, we develop an analytical model for the throughput of TCP-controlled file transfers over the IEEE 802.11 DCF with different packet error probabilities for the stations, accounting for the effect of packet drops on the TCP window. Our analysis proceeds by combining two models: the first is an extension of the usual TCP-over-DCF model for an infrastructure WLAN, where the throughput of a station depends on the probability that the head-of-the-line packet at the Access Point belongs to that station; the second is a model for the TCP window process for connections with different drop probabilities. Iterating between these models yields the head-of-the-line probabilities, from which performance measures such as throughputs and packet failure probabilities can be derived. We find that, due to MAC-layer retransmissions, packet losses are rare even with high channel error probabilities, and the stations obtain fair throughputs even when some of them have packet error probabilities as high as 0.1 or 0.2. For some restricted settings, we are also able to model tail-drop loss at the AP. Although it involves many approximations, the model captures the system behavior quite accurately, as compared with simulations.
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input to identify data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map the identified kernels to either the CPU or the GPU, so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data-flow analysis and edge-splitting strategy. Thus our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X over native MATLAB execution for data-parallel benchmarks.
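MEGHA's constrained graph clustering formulation is not reproduced here; the sketch below illustrates only the simpler idea it generalizes, greedily composing runs of consecutive data-parallel statements into kernels and leaving control-flow statements on the CPU. Representing statements as (name, is_data_parallel) pairs is an assumption made purely for illustration:

```python
def compose_kernels(statements):
    # Illustrative sketch, not MEGHA's actual algorithm: group maximal
    # runs of consecutive data-parallel statements into kernels (to be
    # mapped to the GPU) and keep control-flow/scalar statements on the
    # CPU. Each statement is a (name, is_data_parallel) pair.
    kernels, current, cpu = [], [], []
    for name, parallel in statements:
        if parallel:
            current.append(name)
        else:
            if current:                  # a kernel ends at a scalar stmt
                kernels.append(current)
                current = []
            cpu.append(name)
    if current:
        kernels.append(current)
    return kernels, cpu

stmts = [("A = B .* C", True), ("D = A + 1", True),
         ("if cond", False), ("E = sum(D)", True)]
print(compose_kernels(stmts))
# -> ([['A = B .* C', 'D = A + 1'], ['E = sum(D)']], ['if cond'])
```

The paper's clustering formulation additionally weighs data-transfer costs and dependence constraints when deciding which statements may share a kernel.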
Abstract:
Saccharomyces cerevisiae RAD50, MRE11, and XRS2 genes are essential for telomere length maintenance, cell cycle checkpoint signaling, meiotic recombination, and DNA double-stranded break (DSB) repair via nonhomologous end joining and homologous recombination. The DSB repair pathways that draw upon the Mre11-Rad50-Xrs2 subunits are complex, so their mechanistic features remain poorly understood. Moreover, the molecular basis of DSB end resection in yeast mre11 nuclease-deficient mutants and of Mre11 nuclease-independent activation of ATM in mammals remains unknown, adding a new dimension to many unanswered questions about the mechanism of DSB repair. Here, we demonstrate that S. cerevisiae Mre11 (ScMre11) exhibits higher binding affinity for single- over double-stranded DNA and for intermediates of recombination and repair, and catalyzes robust unwinding of substrates possessing a 3' single-stranded DNA overhang but not of 5' overhangs or blunt-ended DNA fragments. Additional evidence disclosed that ScMre11 nuclease activity is dispensable for its DNA binding and unwinding activity, thus uncovering the molecular basis underlying DSB end processing in mre11 nuclease-deficient mutants. Significantly, Rad50, Xrs2, and Sae2 potentiate the DNA unwinding activity of Mre11, underscoring functional interaction among the components of the DSB end repair machinery. Our results also show that ScMre11 by itself binds to DSB ends, promotes end bridging of duplex DNA, and directly interacts with Sae2. We discuss the implications of these results in the context of an alternative mechanism for DSB end processing and the generation of single-stranded DNA for DNA repair and homologous recombination.
Abstract:
This report summarizes the presentations and discussions at the symposium, which was held under the aegis of the International Union of Theoretical and Applied Mechanics from 23 to 27 January 2012 in Bangalore, India. (C) 2013 AIP Publishing LLC.
Abstract:
We experimentally study the effect of having hinged leaflets at the jet exit on the formation of a two-dimensional counter-rotating vortex pair. A piston-cylinder mechanism is used to generate a starting jet from a high-aspect-ratio channel into a quiescent medium. For a rigid exit, with no leaflets at the channel exit, the measurements at a central plane show that the trailing jet in the present case is never detached from the vortex pair, and keeps feeding into the latter, unlike in the axisymmetric case. Passive flexibility is introduced in the form of rigid leaflets or flaps that are hinged at the exit of the channel, with the flaps initially parallel to the channel walls. The experimental arrangement closely approximates the limiting case of a free-to-rotate rigid flap with negligible structural stiffness, damping and flap inertia, as these limiting structural properties permit the largest flap openings. Using this arrangement, we start the flow and measure the flap kinematics and the vorticity fields for different flap lengths and piston velocity programs. The typical motion of the flaps involves a rapid opening and a subsequent more gradual return to their initial position, both of which occur while the piston is still moving. The initial opening of the flaps can be attributed to an excess pressure that develops in the channel when the flow starts, due to the acceleration that has to be imparted to the fluid slug between the flaps. In the case with flaps, two additional pairs of vortices are formed because of the motion of the flaps, leading to the ejection of a total of up to three vortex pairs from the hinged exit. The flap length (L_f) is found to significantly affect the flap motions when they are plotted using the conventional time scale L/d, where L is the piston stroke and d is the channel width. However, with a newly defined time scale based on the flap length (L/L_f), we find a good collapse of all the measured flap motions, irrespective of flap length and piston velocity, for an impulsively started piston motion. The maximum opening angle in all these impulsive velocity program cases, irrespective of the flap length, is found to be close to 15 degrees. Even though the flap kinematics collapses well with L/L_f, there are differences in the distribution of the ejected vorticity even for the same L/L_f. Such a redistribution of vorticity can lead to important changes in the overall properties of the flow, and it gives us a better understanding of the importance of exit flexibility in such flows.
Abstract:
Estimating program worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that has no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and that also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built on these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy of WCET improves to 159% when CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
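To make the probabilistic bound concrete: for any distribution of CPI samples with mean mu and standard deviation sigma, Chebyshev's inequality P(|X - mu| >= k*sigma) <= 1/k^2 implies that mu + sigma/sqrt(1 - p) upper-bounds the CPI of a phase with probability at least p. A minimal sketch of this calculation (the CPI samples are invented for illustration; the p values are the paper's):

```python
import math

def chebyshev_cpi_bound(cpi_samples, p):
    # Distribution-free CPI upper bound for a phase: choosing
    # k = 1/sqrt(1 - p) in Chebyshev's inequality makes mu + k*sigma
    # an upper bound that holds with probability at least p.
    n = len(cpi_samples)
    mu = sum(cpi_samples) / n
    var = sum((x - mu) ** 2 for x in cpi_samples) / n
    k = 1.0 / math.sqrt(1.0 - p)
    return mu + k * math.sqrt(var)

samples = [1.2, 1.3, 1.1, 1.25, 1.4, 1.15]   # hypothetical CPI samples
for p in (0.9, 0.95, 0.99):
    print(p, round(chebyshev_cpi_bound(samples, p), 3))
```

Because k grows as 1/sqrt(1 - p), high-variance phases inflate the bound rapidly at p = 0.99, which is why the paper refines such phases into lower-variance sub-phases before applying the inequality.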