4 results for DIFFERENT GENETIC MODELS

in DRUM (Digital Repository at the University of Maryland)


Relevance:

90.00%

Publisher:

Abstract:

Duchenne muscular dystrophy (DMD) is a neuromuscular disease caused by mutations in the dystrophin gene. DMD is clinically characterized by severe, progressive and irreversible loss of muscle function, in which most patients lose the ability to walk by their early teens and die in their early 20s. Impaired intracellular calcium (Ca2+) regulation and activation of cell degradation pathways have been proposed as key contributors to DMD disease progression. This dissertation research consists of three studies investigating the role of intracellular Ca2+ in skeletal muscle dysfunction in different mouse models of DMD. Study one evaluated the role of Ca2+-activated enzymes (proteases) that drive protein degradation in excitation-contraction (E-C) coupling failure following repeated contractions in mdx and dystrophin-utrophin null (mdx/utr-/-) mice. Single muscle fibers from mdx/utr-/- mice had greater E-C coupling failure following repeated contractions compared to fibers from mdx mice. Moreover, protease inhibition during these contractions was sufficient to attenuate E-C coupling failure in muscle fibers from both mdx and mdx/utr-/- mice. Study two evaluated the effects of overexpressing the Ca2+ buffering protein sarcoplasmic/endoplasmic reticulum Ca2+-ATPase 1 (SERCA1) in skeletal muscles from mdx and mdx/utr-/- mice. Overall, SERCA1 overexpression decreased muscle damage and protected the muscle from contraction-induced injury in mdx and mdx/utr-/- mice. In study three, the cellular mechanisms underlying the beneficial effects of SERCA1 overexpression in mdx and mdx/utr-/- mice were investigated. SERCA1 overexpression attenuated calpain activation in mdx muscle only, while partially attenuating the degradation of the calpain target desmin in mdx/utr-/- mice. Additionally, SERCA1 overexpression decreased the SERCA-inhibitory protein sarcolipin in mdx muscle but did not alter levels of Ca2+ regulatory proteins (parvalbumin and calsequestrin) in either dystrophic model. Lastly, SERCA1 overexpression blunted the increase in the endoplasmic reticulum stress markers Grp78/BiP in mdx mice and C/EBP homologous protein (CHOP) in mdx and mdx/utr-/- mice. Overall, findings from the studies presented in this dissertation provide new insight into the role of Ca2+ in muscle dysfunction and damage in different dystrophic mouse models. Further, these findings support improving intracellular Ca2+ control as an overall strategy for developing novel therapies for DMD.

Relevance:

80.00%

Publisher:

Abstract:

Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a single huge graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and the graph queries of other graph DBMSs can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models cover a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the matched vertices' properties in each answer in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what patterns and approximations they consider important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used to answer SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that they are far more efficient than popular triple stores.
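For a concrete sense of the ranked-answer setting this abstract describes, the sketch below runs a subgraph-matching query as SPARQL with ORDER BY and LIMIT over a toy edge-labeled graph, using the rdflib Python library. The graph, the ex:knows and ex:followers predicates, and the follower-count score are hypothetical stand-ins for illustration only, not the thesis's data, models, or algorithms.

```python
# A minimal sketch: top-k subgraph matching expressed as SPARQL with
# ORDER BY and LIMIT. All predicates and data below are hypothetical.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# Tiny edge-labeled graph: "knows" edges plus a numeric vertex property.
g.add((EX.alice, EX.knows, EX.bob))
g.add((EX.bob, EX.knows, EX.carol))
g.add((EX.alice, EX.knows, EX.carol))
g.add((EX.alice, EX.followers, Literal(120)))
g.add((EX.bob, EX.followers, Literal(45)))
g.add((EX.carol, EX.followers, Literal(300)))

# Rank matches of the pattern (?a knows ?b) by ?b's follower count,
# roughly in the spirit of scoring answers by matched vertices' properties.
query = """
PREFIX ex: <http://example.org/>
SELECT ?a ?b ?score WHERE {
    ?a ex:knows ?b .
    ?b ex:followers ?score .
}
ORDER BY DESC(?score)
LIMIT 2
"""

for row in g.query(query):
    print(row.a, row.b, row.score)
```

A naive engine evaluates the full pattern and sorts afterward; the pruning techniques proposed in the thesis aim to avoid materializing low-scoring matches in the first place.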

Relevance:

80.00%

Publisher:

Abstract:

Heterogeneous computing systems have become common in modern processor architectures. These systems, such as those released by AMD, Intel, and Nvidia, include both CPU and GPU cores on a single die, with reduced communication overhead compared to their discrete predecessors. Currently, discrete CPU/GPU systems are limited to larger, regular, highly parallel workloads that can overcome the communication costs of the system. Without the traditional communication delay assumed between GPUs and CPUs, we believe non-traditional workloads could be targeted for GPU execution. Specifically, this thesis focuses on the execution model of nested parallel workloads on heterogeneous systems. We have designed a simulation flow that utilizes widely used CPU and GPU simulators to model heterogeneous computing architectures. We then applied this simulator to non-traditional GPU workloads using different execution models. We have also proposed a new execution model for nested parallelism, allowing users to exploit these heterogeneous systems to reduce execution time.
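For readers unfamiliar with the workload pattern, the sketch below shows the shape of nested parallelism: an outer parallel loop whose iterations each launch their own inner parallel computation. It is a host-thread illustration using Python's concurrent.futures with made-up task bodies and pool sizes; the thesis's execution models target fused CPU/GPU hardware, not this.

```python
# A minimal sketch of a nested parallel workload, assuming illustrative
# task bodies: each outer iteration spawns its own batch of inner tasks.
from concurrent.futures import ThreadPoolExecutor

def inner_task(i: int, j: int) -> int:
    # Fine-grained work; on a fused CPU/GPU die, small kernels like this
    # become worth offloading once transfer costs shrink.
    return i * j

def outer_task(i: int) -> int:
    # Each outer iteration opens its own inner pool, so the nesting
    # cannot deadlock on a shared worker pool.
    with ThreadPoolExecutor(max_workers=4) as inner_pool:
        return sum(inner_pool.map(lambda j: inner_task(i, j), range(4)))

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=4) as outer_pool:
        totals = list(outer_pool.map(outer_task, range(8)))
    print(totals)  # one partial sum per outer iteration
```

The irregularity is the point: the amount of inner work can vary per outer iteration, which is what makes such workloads "non-traditional" for GPUs that favor large, regular kernels.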

Relevance:

80.00%

Publisher:

Abstract:

Datacenters have emerged as the dominant form of computing infrastructure over the last two decades. The tremendous increase in the requirements of data analysis has led to a proportional increase in power consumption, and datacenters are now one of the fastest growing electricity consumers in the United States. Another rising concern is the loss of throughput due to network congestion. Scheduling models that do not explicitly account for data placement may lead to a transfer of large amounts of data over the network, causing unacceptable delays. In this dissertation, we study different scheduling models that are inspired by the dual objectives of minimizing energy costs and network congestion in a datacenter. As datacenters are equipped to handle peak workloads, the average server utilization in most datacenters is very low. As a result, one can achieve huge energy savings by selectively shutting down machines when demand is low. In this dissertation, we introduce the network-aware machine activation problem to find a schedule that simultaneously minimizes the number of machines necessary and the congestion incurred in the network. Our model significantly generalizes well-studied combinatorial optimization problems such as hard-capacitated hypergraph covering and is thus strongly NP-hard. As a result, we focus on finding good approximation algorithms. Data-parallel computation frameworks such as MapReduce have popularized the design of applications that require a large amount of communication between different machines. Efficient scheduling of these communication demands is essential to guarantee efficient execution of the different applications. In the second part of the thesis, we study the approximability of the co-flow scheduling problem that has been recently introduced to capture these application-level demands. Finally, we also study the question, "In what order should one process jobs?" Often, precedence constraints specify a partial order over the set of jobs, and the objective is to find suitable schedules that satisfy the partial order. However, in the presence of hard deadline constraints, it may be impossible to find a schedule that satisfies all precedence constraints. In this thesis, we formalize different variants of job scheduling with soft precedence constraints and conduct the first systematic study of these problems.
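To make the last question concrete, the sketch below orders jobs under hard precedence constraints with a topological sort, the classical baseline that the soft-precedence variants relax. The job names and dependency graph are hypothetical, and this is not the dissertation's algorithm: the thesis studies what to do when deadlines make it impossible to respect every constraint.

```python
# A minimal sketch, assuming a hypothetical dependency graph: order jobs
# so that every precedence constraint "u before v" is respected. The
# soft-precedence problems in the thesis arise when no such order meets
# all deadlines and some constraints must be violated at a cost.
from graphlib import TopologicalSorter

# predecessors[v] = set of jobs that must finish before v starts
predecessors = {
    "build":   set(),
    "test":    {"build"},
    "package": {"build"},
    "deploy":  {"test", "package"},
}

order = list(TopologicalSorter(predecessors).static_order())
print(order)  # e.g. ['build', 'test', 'package', 'deploy']
```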