951 results for Mechanics, analytic
Abstract:
A number of research groups are now developing and using finite volume (FV) methods for computational solid mechanics (CSM). These methods are proving to be equivalent, and in some cases superior, to their finite element (FE) counterparts. In this paper we describe a vertex-based FV method on arbitrarily structured meshes for modelling the elasto-plastic deformation of solid materials undergoing small strains in complex geometries. Comparisons with rational FE methods are given.
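As a purely illustrative aside (not the paper's 3D elasto-plastic scheme), the sketch below shows the vertex-centred control-volume idea for a 1D linear-elastic bar; the parameters E, L, f, n and the uniform mesh are hypothetical.

    import numpy as np

    # Minimal, illustrative vertex-centred finite-volume sketch for a 1D elastic bar
    # (hypothetical parameters; not the 3D elasto-plastic scheme described above).
    E = 200e9        # Young's modulus [Pa]
    L = 1.0          # bar length [m]
    f = 1.0e6        # uniform body force per unit length (hypothetical)
    n = 11           # number of vertices
    dx = L / (n - 1)

    A = np.zeros((n, n))
    b = np.full(n, -f * dx)          # source integrated over each control volume

    for i in range(1, n - 1):
        # Flux balance over the control volume around vertex i:
        # E*(u[i+1]-u[i])/dx - E*(u[i]-u[i-1])/dx + f*dx = 0
        A[i, i - 1] = E / dx
        A[i, i]     = -2.0 * E / dx
        A[i, i + 1] = E / dx

    # Dirichlet boundary conditions: fixed at x=0, prescribed displacement at x=L
    A[0, 0] = 1.0;   b[0] = 0.0
    A[-1, -1] = 1.0; b[-1] = 1.0e-4

    u = np.linalg.solve(A, b)
    print("vertex displacements:", u)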
Abstract:
A large class of computational problems is characterised by frequent synchronisation and computational requirements that change as a function of time. When such a problem is solved on a message passing multiprocessor machine [5], the combination of these characteristics leads to system performance which deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved by periodic redistribution of computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications. The strategy is intended to handle varying levels of load change throughout the run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, a semi-automatic tool for the parallelisation of mesh-based FORTRAN codes.
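The decision policy described above weighs remapping cost against the benefit of redistribution. The sketch below illustrates one simple such policy under assumed timings (the function should_remap and all numbers are hypothetical, not the paper's strategy): remap once the idle time accumulated since the last redistribution exceeds the estimated remapping cost.

    # Illustrative "when to re-balance" policy: remap once the time lost to load
    # imbalance since the last remap exceeds the estimated cost of remapping.
    def should_remap(step_times, remap_cost, lost_since_remap):
        """step_times: per-processor compute times for the latest step."""
        t_max = max(step_times)                  # slowest processor sets the step time
        t_avg = sum(step_times) / len(step_times)
        lost_since_remap += t_max - t_avg        # idle time accumulated this step
        return lost_since_remap > remap_cost, lost_since_remap

    # Example: 4 processors, a remap estimated to cost 0.5 s
    lost = 0.0
    for times in [[1.0, 1.1, 0.9, 1.0], [1.0, 1.6, 0.9, 1.0], [1.0, 1.8, 0.9, 1.0]]:
        do_remap, lost = should_remap(times, remap_cost=0.5, lost_since_remap=lost)
        print(do_remap, round(lost, 2))
        if do_remap:
            lost = 0.0   # reset the accumulator after redistribution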
Abstract:
The difficulties encountered in implementing large-scale computational mechanics (CM) codes on multiprocessor systems are now fairly well understood. Despite the claims of shared memory architecture manufacturers to provide effective parallelising compilers, these have not proved to be adequate for large or complex programs. Significant programmer effort is usually required to achieve reasonable parallel efficiencies on significant numbers of processors. The paradigm of Single Program Multiple Data (SPMD) domain decomposition with message passing, where each processor runs the same code on a subdomain of the problem and communicates through the exchange of messages, has for some time been demonstrated to provide the required level of efficiency, scalability, and portability across both shared and distributed memory systems, without the need to re-author the code in a new language or even to support differing message passing implementations. Extension of the methods into three dimensions has been enabled through the engineering of PHYSICA, a framework for supporting 3D, unstructured mesh, continuum mechanics modelling. In PHYSICA, six inspectors are used. Part of the challenge for the automation of parallelisation is being able to prove the equivalence of inspectors so that they can be merged into as few as possible.
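To illustrate the SPMD message-passing paradigm described above (not the PHYSICA or CAPTools implementation), the sketch below assumes mpi4py is available and shows a one-dimensional halo exchange between neighbouring subdomains; the variable names are made up.

    # Illustrative SPMD halo-exchange sketch using mpi4py: each rank owns a strip
    # of a 1D array and swaps boundary values with its neighbours.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    local = np.full(10, float(rank))          # each rank's subdomain data
    left  = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Exchange halo values: send my first/last cells, receive the neighbours' boundary cells.
    halo_left  = comm.sendrecv(local[0],  dest=left,  source=left)
    halo_right = comm.sendrecv(local[-1], dest=right, source=right)

    print(f"rank {rank}: left halo={halo_left}, right halo={halo_right}")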
Abstract:
It is now clear that the concept of an HPC compiler which automatically produces highly efficient parallel implementations is a pipe-dream. Another route is to recognise from the outset that user information is required, to develop tools that embed user interaction in the transformation of code from scalar to parallel form, and then to use conventional compilers with a set of communication calls. This is the key idea underlying the development of the CAPTools software environment. The initial version of CAPTools is focused upon single-block structured mesh computational mechanics codes. The capability for unstructured mesh codes is currently under test, and multi-block structured meshes will be supported next. The parallelisation process can be completed rapidly for modest codes, and the parallel performance approaches that delivered by hand parallelisation.
Abstract:
As the complexity of parallel applications increases, the performance limitations resulting from computational load imbalance become dominant. Mapping the problem space to the processors of a parallel machine in a manner that balances the workload of each processor will typically reduce the run-time. In many cases the computation time required for a given calculation cannot be predetermined, even at run-time, and so a static partition of the problem yields poor performance. For problems in which the computational load across the discretisation is dynamic and inhomogeneous, for example multi-physics problems involving fluid and solid mechanics with phase changes, the workload for a static subdomain will change over the course of the computation and cannot be estimated beforehand. For such applications the mapping of load to processors must change dynamically at run-time in order to maintain reasonable efficiency. The issues of dynamic load balancing are examined in the context of PHYSICA, a three-dimensional unstructured mesh multi-physics continuum mechanics computational modelling code.
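As a hedged illustration of dynamic re-mapping (PHYSICA's actual DLB scheme is not reproduced here), the sketch below shows a simple diffusion-style balance in which neighbouring subdomains repeatedly shift a fraction of their workload difference; the function diffuse_load and its parameters are hypothetical.

    # Diffusion-style load-balancing sketch: neighbouring subdomains repeatedly
    # shift a fraction of their workload difference until the loads even out.
    def diffuse_load(loads, alpha=0.5, sweeps=20):
        loads = list(loads)
        for _ in range(sweeps):
            transfers = [alpha * (loads[i] - loads[i + 1]) / 2.0
                         for i in range(len(loads) - 1)]
            for i, t in enumerate(transfers):
                loads[i]     -= t
                loads[i + 1] += t
        return loads

    print(diffuse_load([10.0, 2.0, 7.0, 1.0]))   # loads drift towards the mean (5.0)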
Abstract:
This paper presents a new dynamic load balancing technique for structured mesh computational mechanics codes in which the processor partition range limits of just one of the partitioned dimensions are non-coincident, as opposed to using coincident limits in all of the partitioned dimensions. The partition range limits are 'staggered', allowing greater flexibility in obtaining a balanced load distribution than when the limits are changed 'globally', as the load increase/decrease on one processor no longer restricts the load decrease/increase on a neighbouring processor. The automatic implementation of this 'staggered' load balancing strategy within an existing parallel code is presented in this paper, along with some preliminary results.
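The sketch below illustrates the 'staggered' idea under stated assumptions: row limits are kept coincident while each row strip chooses its own column split points from the workload it actually contains. The helper staggered_limits and the random workload are hypothetical, not the paper's implementation.

    import numpy as np

    # 'Staggered' partition range limits on a structured mesh: shared row-strip
    # limits, but per-strip column split points chosen from the strip's workload.
    def staggered_limits(weights, p_rows, p_cols):
        n_rows, _ = weights.shape
        row_edges = np.linspace(0, n_rows, p_rows + 1, dtype=int)  # coincident row limits
        col_edges = []
        for r in range(p_rows):
            strip = weights[row_edges[r]:row_edges[r + 1], :].sum(axis=0)
            cum = np.cumsum(strip)
            targets = cum[-1] * np.arange(1, p_cols) / p_cols
            # Column limits chosen independently per strip -> 'staggered' limits
            col_edges.append([0] + list(np.searchsorted(cum, targets) + 1) + [strip.size])
        return row_edges, col_edges

    w = np.random.rand(8, 8) * np.arange(1, 9)   # workload heavier towards the right
    print(staggered_limits(w, p_rows=2, p_cols=2))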
Abstract:
SILVA, Flávio César Bezerra da; COSTA, Francisca Marta de Lima; ANDRADE, Hamilton Leandro Pinto de; FREIRE, Lúcia de Fátima; MACIEL, Patrícia Suerda de Oliveira; ENDERS, Bertha Cruz; MENEZES, Rejane Maria Paiva de. Paradigms that guide the models of attention to the health in Brazil: an analytic essay. Revista de Enfermagem UFPE On Line, Recife, v. 3, n. 4, p. 460-65, Oct/Dec 2009. Available at < http://www.ufpe.br/revistaenfermagem/index.php/revista/search/results >.
Abstract:
We present solutions of the Yang–Mills equation on cylinders R × G/H over coset spaces of odd dimension 2m+1 with Sasakian structure. The gauge potential is assumed to be SU(m)-equivariant, parameterized by two real, scalar-valued functions. Yang–Mills theory with torsion in this setup reduces to the Newtonian mechanics of a point particle moving in R^2 under the influence of an inverted potential. We analyze the critical points of this potential and present an analytic as well as several numerical finite-action solutions. Apart from the Yang–Mills solutions that constitute SU(m)-equivariant instanton configurations, we construct periodic sphaleron solutions on S^1 × G/H and dyon solutions on iR × G/H.
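As a purely schematic illustration of the reduction described above, the two real functions parameterizing the gauge potential (called psi_1, psi_2 here; the labels are ours and the concrete potential is not reproduced) obey point-particle dynamics in the inverted potential -V:

    % Schematic reduced action and equation of motion (illustrative notation only;
    % the actual potential V depends on the chosen Sasakian coset space G/H):
    S_{\mathrm{red}} \;\propto\; \int \mathrm{d}\tau
        \left[ \tfrac{1}{2}\left(\dot{\psi}_1^{\,2} + \dot{\psi}_2^{\,2}\right) + V(\psi_1,\psi_2) \right],
    \qquad
    \ddot{\psi}_a \;=\; +\,\frac{\partial V}{\partial \psi_a}, \quad a = 1,2.

In reductions of this kind, finite-action trajectories typically interpolate between critical points of the potential, which is why the critical-point analysis mentioned in the abstract is central.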
Abstract:
In a microscopic setting, humans behave in rich and unexpected ways. In a macroscopic setting, however, distinctive patterns of group behavior emerge, leading statistical physicists to search for an underlying mechanism. The aim of this dissertation is to analyze the macroscopic patterns of competing ideas in order to discern the mechanics of how group opinions form at the microscopic level. First, we explore the competition of answers in online Q&A (question and answer) boards. We find that a simple individual-level model can capture important features of user behavior, especially as the number of answers to a question grows. Our model further suggests that the wisdom of crowds may be constrained by information overload, in which users are unable to thoroughly evaluate each answer and therefore tend to use heuristics to pick what they believe is the best answer. Next, we explore models of opinion spread among voters to explain observed universal statistical patterns such as rescaled vote distributions and logarithmic vote correlations. We introduce a simple model that can explain both properties, as well as why it takes so long for large groups to reach consensus. An important feature of the model that facilitates agreement with data is that individuals become more stubborn (unwilling to change their opinion) over time. Finally, we explore potential underlying mechanisms for opinion formation in juries, by comparing data to various types of models. We find that different null hypotheses in which jurors do not interact when reaching a decision are in strong disagreement with data compared to a simple interaction model. These findings provide conceptual and mechanistic support for previous work that has found mutual influence can play a large role in group decisions. In addition, by matching our models to data, we are able to infer the time scales over which individuals change their opinions for different jury contexts. We find that these values increase as a function of the trial time, suggesting that jurors and judicial panels exhibit a kind of stubbornness similar to what we include in our model of voting behavior.
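As a schematic illustration of the 'increasing stubbornness' ingredient described above (not the dissertation's calibrated model), the sketch below makes an agent's probability of adopting a peer's opinion decay with the time it has held its current opinion; the function simulate and its parameters are made up.

    import random

    # Voter-style opinion model in which agents grow more "stubborn" the longer
    # they have held their current opinion (schematic only).
    def simulate(n_agents=100, steps=20000, seed=1):
        rng = random.Random(seed)
        opinion = [rng.choice([0, 1]) for _ in range(n_agents)]
        held_for = [0] * n_agents                  # time since last opinion change
        for _ in range(steps):
            i, j = rng.randrange(n_agents), rng.randrange(n_agents)
            if opinion[i] != opinion[j]:
                # Adoption probability decays with how long i has held its opinion
                if rng.random() < 1.0 / (1.0 + held_for[i]):
                    opinion[i] = opinion[j]
                    held_for[i] = 0
            held_for[i] += 1
        return sum(opinion) / n_agents

    print("final fraction holding opinion 1:", simulate())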
Abstract:
BACKGROUND: Decision-analytic modelling (DAM) has become a widespread method in health technology assessments (HTA), but the extent to which modelling is used differs among international HTA institutions. In Germany, the use of DAM is optional within HTAs of the German Institute of Medical Documentation and Information (DIMDI). Our study examines the use of DAM in DIMDI HTA reports and its effect on the quality of information provided for health policies. METHODS: A review of all DIMDI HTA reports (from 1998 to September 2012) incorporating an economic assessment was performed. All included reports were divided into two groups: HTAs with DAM and HTAs without DAM. In both groups, reports were categorized according to the quality of information provided for healthcare decision making. RESULTS: Of the sample of 107 DIMDI HTA reports, 17 (15.9%) used DAM for the economic assessment. In the group without DAM, conclusions were limited by the quality of economic information in 51.1% of the reports, whereas we did not find limited conclusions in the group with DAM. Furthermore, 24 reports without DAM (26.7%) stated that using DAM would likely improve the quality of information of the economic assessment. CONCLUSION: The use of DAM techniques can improve the quality of HTAs in Germany. When, after a systematic review of the existing literature within an HTA, it is clear that DAM is likely to positively affect the quality of the economic assessment, DAM should be used.
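As a minimal, hypothetical illustration of what a decision-analytic model adds to an economic assessment (not a DIMDI model), the sketch below evaluates a two-strategy decision tree and reports the incremental cost-effectiveness ratio; all probabilities, costs and QALY values are invented.

    # Two-strategy decision tree with expected costs and QALYs, and the resulting
    # incremental cost-effectiveness ratio (ICER). Numbers are for illustration only.
    def expected(branches):
        """branches: list of (probability, cost, qalys) tuples."""
        cost = sum(p * c for p, c, _ in branches)
        qalys = sum(p * q for p, _, q in branches)
        return cost, qalys

    standard_care = [(0.70, 1000.0, 0.80), (0.30, 5000.0, 0.60)]   # (p, cost EUR, QALYs)
    new_treatment = [(0.85, 2500.0, 0.85), (0.15, 6000.0, 0.65)]

    c0, q0 = expected(standard_care)
    c1, q1 = expected(new_treatment)
    icer = (c1 - c0) / (q1 - q0)      # additional cost per additional QALY
    print(f"ICER: {icer:.0f} EUR per QALY")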
Abstract:
This document provides supporting materials for a paper submitted for review to the Physics Education Research Conference proceedings in July 2016, "Sense-making with Inscriptions in Quantum Mechanics."