655 results for McManus
Abstract:
Abstract not available
Abstract:
Paget's disease of bone (PDB) is a skeletal disorder characterised by a focal and disorganised increase in bone remodelling. PDB osteoclasts (OCs) are larger, more active and more numerous, and are also resistant to apoptosis. Although the precise cause of PDB remains unknown, mutations in the SQSTM1 gene, which encodes the p62 protein, have been described in a substantial proportion of PDB patients. Among these mutations, the P392L substitution is the most frequent, and overexpression of p62P392L in OCs generates a partial pagetic phenotype. The p62 protein is involved in multiple processes, ranging from the control of NF-κB signalling to autophagy. In human OCs, a multiprotein complex composed of p62 and the kinases PKCζ and PDK1 is formed in response to stimulation by Receptor Activator of Nuclear factor Kappa-B Ligand (RANKL), the main cytokine involved in OC formation and activation. We demonstrated that PKCζ is involved in RANKL-induced NF-κB activation in OCs, and in its constitutive activation in the presence of p62P392L. We also observed an increase in PKCζ-mediated phosphorylation of Ser536 of p65, which is IκB-independent and could represent an alternative pathway of NF-κB activation in the presence of the p62 mutation. We demonstrated that the phosphorylation levels of the survival regulators ERK and Akt are increased in PDB OCs, and are reduced upon PDK1 inhibition. Phosphorylation of the mTOR substrate 4EBP1 and of the regulatory protein Raptor was assessed; both were increased in pagetic OCs, and this increase is regulated by PDK1 inhibition. In addition, the elevated basal levels of LC3II (associated with autophagic structures) observed in pagetic OCs were associated with a defect in autophagosome degradation that is independent of the p62P392L mutation.
There is also a reduced sensitivity to PDK1-dependent induction of autophagy. Moreover, PDK1 inhibition induces apoptosis in both control and pagetic OCs, and leads to a significant reduction in bone resorption. PDK1/Akt signalling could therefore represent an important control point in the activation of pagetic OCs. These results demonstrate the importance of several p62-associated kinases in the over-activation of pagetic OCs, whose signalling converges towards increased survival and resorptive function, and also affects the autophagic process.
Abstract:
Abstract not available
Abstract:
A large class of computational problems is characterised by frequent synchronisation and computational requirements which change as a function of time. When such a problem is solved on a message passing multiprocessor machine [5], the combination of these characteristics leads to system performance which deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved with periodic redistribution of computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications. The strategy is intended to handle varying levels of load changes throughout the run. The major issues involved in a generic dynamic load balancing scheme will be investigated together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, which is a semi-automatic tool for parallelisation of mesh based FORTRAN codes.
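The remap decision described above can be sketched as a simple cost/benefit test. This is an illustrative model only, not the CAPTools policy: rebalance when the time lost to load imbalance since the last remap exceeds the estimated remapping cost. All names and the cost model are assumptions.

```python
# Hypothetical sketch of a remap decision policy: invoke global load
# re-balancing only when the accumulated imbalance cost exceeds the
# (estimated) cost of remapping. Illustrative, not the CAPTools policy.

def should_remap(step_times, remap_cost):
    """step_times: per-processor times for each step since the last remap."""
    lost = 0.0
    for times in step_times:
        # Imbalance cost of one step: slowest processor minus the average,
        # i.e. time the average processor spends waiting at synchronisation.
        lost += max(times) - sum(times) / len(times)
    return lost > remap_cost

# Balanced steps accumulate no loss, so no remap is triggered:
print(should_remap([[1.0, 1.0, 1.0]], remap_cost=0.5))      # False
# Persistent imbalance eventually outweighs the remap cost:
print(should_remap([[2.0, 1.0, 1.0]] * 3, remap_cost=1.5))  # True
```

A real policy would also estimate the post-remap benefit, since remapping only pays off if the imbalance persists for enough future steps.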
Abstract:
Abstract not available
Abstract:
Abstract not available
Abstract:
The use of unstructured mesh codes on parallel machines is one of the most effective ways to solve large computational mechanics problems. Completely general geometries and complex behaviour can be modelled and, in principle, the inherent sparsity of many such problems can be exploited to obtain excellent parallel efficiencies. However, unlike their structured counterparts, the problem of distributing the mesh across the memory of the machine, whilst minimising the amount of interprocessor communication, must be carefully addressed. This process is an overhead that is not incurred by a serial code, but is shown to be rapidly computable at run time and can be tailored to the machine being used.
Abstract:
Unstructured mesh based codes for the modelling of continuum physics phenomena have evolved to provide the facility to model complex interacting systems. Such codes have the potential to provide high performance on parallel platforms for a small investment in programming effort. The critical parameters for success are to minimise changes to the code to allow for maintenance while providing high parallel efficiency, scalability to large numbers of processors and portability to a wide range of platforms. The paradigm of domain decomposition with message passing has for some time been demonstrated to provide a high level of efficiency, scalability and portability across shared and distributed memory systems without the need to re-author the code into a new language. This paper addresses these issues in the parallelisation of a complex three dimensional unstructured mesh Finite Volume multiphysics code and discusses the implications of automating the parallelisation process.
Abstract:
Abstract not available
Abstract:
The difficulties encountered in implementing large scale CM codes on multiprocessor systems are now fairly well understood. Despite the claims of shared memory architecture manufacturers to provide effective parallelizing compilers, these have not proved to be adequate for large or complex programs. Significant programmer effort is usually required to achieve reasonable parallel efficiencies on significant numbers of processors. The paradigm of Single Program Multi Data (SPMD) domain decomposition with message passing, where each processor runs the same code on a subdomain of the problem, communicating through exchange of messages, has for some time been demonstrated to provide the required level of efficiency, scalability, and portability across both shared and distributed memory systems, without the need to re-author the code into a new language or even to support differing message passing implementations. Extension of the methods into three dimensions has been enabled through the engineering of PHYSICA, a framework for supporting 3D, unstructured mesh and continuum mechanics modeling. In PHYSICA, six inspectors are used. Part of the challenge for automation of parallelization is being able to prove the equivalence of inspectors so that they can be merged into as few as possible.
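The SPMD domain decomposition paradigm described above can be illustrated with a toy 1D relaxation sweep: the global array is split across "processors", each of which updates only its own subdomain and exchanges single-cell halos with its neighbours. This is a sketch under stated assumptions; message passing is mimicked by direct copies, where a real code would use MPI sends and receives, and the endpoints act as fixed boundary values.

```python
# Minimal 1D illustration of SPMD domain decomposition with halo exchange.
# Hypothetical sketch: each "processor" owns a contiguous subdomain; the
# halo values it needs from neighbours are copied directly rather than sent
# as messages. Endpoints are treated as fixed boundaries.

def jacobi_spmd(u, nproc, steps):
    chunk = len(u) // nproc            # assume len(u) divisible by nproc
    parts = [u[i * chunk:(i + 1) * chunk] for i in range(nproc)]
    for _ in range(steps):
        new_parts = []
        for p, part in enumerate(parts):
            # "Halo exchange": fetch the boundary cell of each neighbour.
            left = parts[p - 1][-1] if p > 0 else None
            right = parts[p + 1][0] if p < nproc - 1 else None
            new = part[:]
            for i in range(len(part)):
                l = part[i - 1] if i > 0 else left
                r = part[i + 1] if i < len(part) - 1 else right
                if l is not None and r is not None:
                    new[i] = 0.5 * (l + r)   # Jacobi average of neighbours
            new_parts.append(new)
        parts = new_parts
    return [x for part in parts for x in part]

# The partitioned sweep reproduces the single-"processor" (serial) result:
print(jacobi_spmd([0, 0, 0, 4], nproc=2, steps=1))
print(jacobi_spmd([0, 0, 0, 4], nproc=1, steps=1))
```

The point of the paradigm is visible even in this toy: each subdomain runs the same code, and only the one-cell halo per neighbour must cross processor boundaries.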
Abstract:
One of the core tasks of the virtual-manufacturing environment is to characterise the transformation of the state of material during each of the unit processes. This transformation in shape, material properties, etc. can only be reliably achieved through the use of models in a simulation context. Unfortunately, many manufacturing processes involve the material being treated in both the liquid and solid state, the transformation of which may be achieved by heat transfer and/or electro-magnetic fields. The computational modelling of such processes, involving the interactions amongst various phenomena, is a considerable challenge. However, it must be addressed effectively if Virtual Manufacturing Environments are to become a reality! This contribution focuses upon one attempt to develop such a multi-physics computational toolkit. The approach uses a single discretisation procedure and provides for direct interaction amongst the component phenomena. The need to exploit parallel high performance hardware is addressed so that simulation elapsed times can be brought within the realms of practicality. Examples of multiphysics modelling in relation to shape casting and solder joint formation reinforce the motivation for this work.
Abstract:
Abstract not available
Abstract:
The availability of CFD software that can easily be used and produce high efficiency on a wide range of parallel computers is extremely limited. The investment and expertise required to parallelise a code can be enormous. In addition, the cost of supercomputers forces high utilisation to justify their purchase, requiring a wide range of software. To break this impasse, tools are urgently required to assist in the parallelisation process that dramatically reduce the parallelisation time but do not degrade the performance of the resulting parallel software. In this paper we discuss enhancements to the Computer Aided Parallelisation Tools (CAPTools) to assist in the parallelisation of complex unstructured mesh-based computational mechanics codes.
Abstract:
Unstructured mesh codes for modelling continuum physics phenomena have evolved to provide the facility to model complex interacting systems. Parallelisation of such codes using Single Program Multi Data (SPMD) domain decomposition techniques implemented with message passing has been demonstrated to provide high parallel efficiency, scalability to large numbers of processors P, and portability across a wide range of parallel platforms. High efficiency, especially for large P, requires that load balance is achieved in each parallel loop. For a code in which loops span a variety of mesh entity types, for example, elements, faces and vertices, some compromise is required between load balance for each entity type and the quantity of inter-processor communication required to satisfy data dependence between processors.
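The compromise described above can be made concrete with a small, purely illustrative measurement: since loops span elements, faces and vertices, a single partition generally cannot balance all three entity types at once, and one can quantify how far each type is from balance.

```python
# Illustrative only: measure the load imbalance of one hypothetical
# two-processor partition for each mesh entity type. A partition tuned
# for element balance may still be imbalanced in faces or vertices.

def imbalance(counts):
    """Imbalance factor: max per-processor load over mean load (1.0 = perfect)."""
    mean = sum(counts) / len(counts)
    return max(counts) / mean

# Hypothetical per-processor entity counts for one partition:
partition = {
    "elements": [100, 100],   # perfectly balanced
    "faces":    [210, 190],   # mildly imbalanced
    "vertices": [70, 50],     # worst balanced
}
for entity, counts in partition.items():
    print(entity, round(imbalance(counts), 3))
```

In a real code the parallel loop over each entity type runs at the speed of its most loaded processor, so the effective efficiency is limited by the worst of these factors, weighted by how much time is spent in each loop type.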
Abstract:
As the efficiency of parallel software increases it is becoming common to measure near linear speedup for many applications. For a problem of size N on P processors, with software running at O(N/P), the performance restrictions due to file i/o systems and mesh decomposition running at O(N) become increasingly apparent, especially for large P. For distributed memory parallel systems an additional limit to scalability results from the finite memory size available for i/o scatter/gather operations. Simple strategies developed to address the scalability of scatter/gather operations for unstructured mesh based applications have been extended to provide scalable mesh decomposition through the development of a parallel graph partitioning code, JOSTLE [8]. The focus of this work is directed towards the development of generic strategies that can be incorporated into the Computer Aided Parallelisation Tools (CAPTools) project.
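The scaling argument above can be checked with a toy cost model: if computation runs at O(N/P) but i/o and mesh decomposition remain serial at O(N), the achievable speedup saturates as P grows. The constants below are arbitrary illustrations, not measurements from the paper.

```python
# Toy cost model for the O(N/P) vs O(N) argument. With a serial O(N)
# component whose per-item cost is 1% of the compute cost, speedup is
# capped regardless of processor count. Constants are illustrative.

def speedup(n, p, compute=1.0, serial=0.01):
    t1 = compute * n + serial * n        # time on one processor
    tp = compute * n / p + serial * n    # time on p processors
    return t1 / tp

for p in (1, 10, 100, 1000):
    print(p, round(speedup(10**6, p), 1))
```

With these constants the speedup can never exceed 1.01/0.01 = 101, however large P becomes, which is exactly why the paper pushes the O(N) stages (i/o, decomposition via JOSTLE) onto the parallel machine as well.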