5 results for Multi-run welding
in CentAUR: Central Archive University of Reading - UK
Abstract:
The simulated annealing approach to structure solution from powder diffraction data, as implemented in the DASH program, is easily amenable to parallelization at the individual-run level. Modest speed-ups can therefore be achieved by executing individual DASH runs concurrently on the individual cores of multi-core CPUs.
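To make the run-level parallelism concrete, here is a minimal Python sketch, not DASH's actual code: independent simulated-annealing runs, each with its own seed, are mapped onto CPU cores with multiprocessing. The objective function `cost` is a hypothetical stand-in for a powder-diffraction figure of merit.

```python
# Minimal sketch of run-level parallelism for simulated annealing.
# `cost` is a hypothetical stand-in; DASH's internals are not shown.
import math
import random
from multiprocessing import Pool

def cost(x):
    # Placeholder objective: minimum at x = 3.0.
    return (x - 3.0) ** 2

def sa_run(seed, steps=10000, t0=1.0, alpha=0.999):
    """One independent simulated-annealing run with its own random seed."""
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    best_x, best_c, t = x, cost(x), t0
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.5)
        dc = cost(cand) - cost(x)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if dc < 0 or rng.random() < math.exp(-dc / t):
            x = cand
            if cost(x) < best_c:
                best_x, best_c = x, cost(x)
        t *= alpha  # geometric cooling schedule
    return best_c, best_x

if __name__ == "__main__":
    # Runs are fully independent, so they map one-to-one onto CPU cores.
    with Pool() as pool:
        results = pool.map(sa_run, range(8))  # 8 runs, e.g. one per core
    print(min(results))  # best (cost, solution) across all runs
```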
Abstract:
Historical analysis of the inflation-hedging properties of stocks has produced anomalous results, with equities often appearing to offer a perverse hedge against inflation. This has been attributed to the impact of real and monetary shocks to the economy, which influence both inflation and asset returns. It has been argued that real estate should provide a better hedge; however, empirical results have been mixed. This paper explores the relationship between commercial real estate returns (from both private and public markets) and economic, fiscal and monetary factors and inflation for the US and UK markets. Comparative analysis of general equity and small-capitalisation stock returns in both markets is carried out. Inflation is subdivided into expected and unexpected components using different estimation techniques. The analyses are undertaken using long-run error-correction techniques. In the long run, once real and monetary variables are included, asset returns are positively linked to anticipated inflation but not to inflation shocks. Adjustment processes are, however, gradual rather than within-period. Real estate returns, particularly direct market returns, exhibit characteristics that differ from equities.
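As a rough illustration of the long-run error-correction methodology mentioned above, here is a two-step Engle-Granger-style sketch in Python on synthetic data; the series names and the specification are illustrative assumptions, not the paper's actual model or data.

```python
# Two-step error-correction sketch (Engle-Granger style) on synthetic data.
# Variable names (returns, exp_inf, unexp_inf) are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
exp_inf = np.cumsum(rng.normal(0.02, 0.01, n))      # anticipated inflation, I(1)
returns = 1.2 * exp_inf + rng.normal(0, 0.05, n)    # returns cointegrated with it
unexp_inf = rng.normal(0, 0.01, n)                  # stationary inflation shocks

# Step 1: long-run (cointegrating) regression of returns on expected inflation.
longrun = sm.OLS(returns, sm.add_constant(exp_inf)).fit()
ecm_term = longrun.resid[:-1]                       # lagged disequilibrium

# Step 2: short-run dynamics including the error-correction term.
d_ret = np.diff(returns)
X = sm.add_constant(np.column_stack([np.diff(exp_inf), unexp_inf[1:], ecm_term]))
shortrun = sm.OLS(d_ret, X).fit()
print(shortrun.params)  # a negative ECM coefficient implies gradual adjustment
```

A significant, negative coefficient on the error-correction term is what "gradual rather than within-period adjustment" looks like in this framework.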
Abstract:
Hybrid multiprocessor architectures, which combine re-configurable computing and multiprocessors on a chip, are being proposed to transcend the performance of standard multi-core parallel systems. Both fine-grained and coarse-grained parallel algorithm implementations are feasible in such hybrid frameworks. This paper presents a compositional strategy for designing fine-grained multi-phase regular processor arrays to target hybrid architectures. The method is based on deriving component designs using classical regular array techniques and composing the components into a unified global design. Run-time phase changes and data routing are characteristic of the resulting designs. In order to describe the data transfer between phases, the concept of a communication domain is introduced, so that the producer-consumer relationship arising from multi-phase computation can be treated in a unified way as a data-routing phase. This technique is applied to derive new designs of multi-phase regular arrays with different dataflow between phases of computation.
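As a loose, conceptual Python sketch of the composition idea, and not the paper's formal derivation: two regular computation phases are linked by an explicit routing phase that stands in for the communication domain, so the producer-consumer transfer is itself expressed as a phase.

```python
# Conceptual sketch: two regular array phases composed via an explicit
# data-routing phase. Function names and operations are illustrative only.
import numpy as np

def phase_producer(a):
    """Phase 1: a regular array computation (here, row-wise prefix sums)."""
    return np.cumsum(a, axis=1)

def routing_phase(b):
    """The producer-consumer transfer treated as a phase in its own right:
    here a transpose re-aligns the data for a column-oriented consumer."""
    return b.T.copy()

def phase_consumer(c):
    """Phase 2: another regular computation on the re-routed data."""
    return np.cumsum(c, axis=1)

a = np.arange(12, dtype=float).reshape(3, 4)
result = phase_consumer(routing_phase(phase_producer(a)))
print(result)
```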
Abstract:
Freshwater hosing simulations, in which a freshwater flux is imposed in the North Atlantic to force fluctuations of the Atlantic Meridional Overturning Circulation (AMOC), have been routinely performed, first to study the climatic signature of different states of this circulation, and then, under present or future conditions, to investigate the potential impact of a partial melting of the Greenland ice sheet. The most compelling examples of climatic changes potentially related to abrupt AMOC variations, however, are found in high-resolution palaeo-records from around the globe for the last glacial period. To study these more specifically, an increasing number of freshwater hosing experiments have been performed under glacial conditions in recent years. Here we compare an ensemble of 11 such simulations run with 6 different climate models. The simulations follow slightly different designs, but are sufficiently close to be compared: they all study the impact of a freshwater hosing imposed in the extra-tropical North Atlantic. Common features in the model responses to hosing are a cooling over the North Atlantic, extending along the sub-tropical gyre into the tropical North Atlantic, a southward shift of the Atlantic ITCZ, and a weakening of the African and Indian monsoons. On the other hand, the expression of the bipolar see-saw, i.e., warming in the Southern Hemisphere, differs from model to model, with some models restricting it to the South Atlantic and specific regions of the Southern Ocean while others simulate a widespread Southern Ocean warming. The relationships between the features common to most models, i.e., climate changes over the North and tropical Atlantic and the African and Asian monsoon regions, are further quantified. These suggest a tight correlation between the temperature and precipitation changes over the extra-tropical North Atlantic, but different pathways for the teleconnections between the AMOC/North Atlantic region and the African and Indian monsoon regions.
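As a minimal sketch of the kind of cross-model quantification described above, one might correlate the North Atlantic temperature response with the monsoon precipitation response across the ensemble; the numbers below are invented placeholders for an 11-member ensemble, not the study's data.

```python
# Sketch of ensemble quantification: correlate North Atlantic cooling with
# monsoon precipitation change across 11 simulations. Values are placeholders.
import numpy as np

# One (delta_T, delta_P) pair per simulation; invented for illustration.
delta_t_na = np.array([-3.1, -2.4, -4.0, -1.8, -2.9, -3.5,
                       -2.2, -3.8, -2.6, -3.0, -1.9])   # K
delta_p_monsoon = np.array([-0.8, -0.5, -1.1, -0.4, -0.7, -0.9,
                            -0.5, -1.0, -0.6, -0.8, -0.3])  # mm/day

r = np.corrcoef(delta_t_na, delta_p_monsoon)[0, 1]
print(f"ensemble correlation: r = {r:.2f}")
```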
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures gives users options for how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access times depend on the mapping of application tasks and on a core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
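As a minimal sketch of the benchmark-driven idea, assuming hypothetical benchmark numbers and linear interpolation as a simple choice: per-step compute and halo-exchange times measured at a few local problem sizes are interpolated to predict the cost of an untested decomposition.

```python
# Sketch of a benchmark-driven performance model: interpolate between measured
# timings to predict an untested deployment. All numbers are placeholders.
import numpy as np

# Benchmarked per-step times (seconds) at sampled local domain sizes (cells).
local_sizes   = np.array([64**2, 128**2, 256**2, 512**2], dtype=float)
compute_times = np.array([1.2e-4, 4.9e-4, 2.0e-3, 8.1e-3])  # array updates
halo_times    = np.array([3.0e-5, 6.1e-5, 1.2e-4, 2.4e-4])  # halo exchanges

def predict_step_time(global_size, n_tasks):
    """Predict per-step time for a decomposition by interpolating the
    benchmarked results at the implied local problem size."""
    local = global_size / n_tasks
    compute = np.interp(local, local_sizes, compute_times)
    halo = np.interp(local, local_sizes, halo_times)
    return compute + halo

# e.g. a 1024x1024 global domain decomposed over 32 tasks
print(predict_step_time(1024**2, 32))
```

In this framing, the two separate models (array updates and halo exchanges) mirror the two distinct types of work the abstract identifies, and comparing predictions across candidate decompositions replaces trial-and-error deployment.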