996 results for Deasley, Bryan


Relevance: 10.00%

Abstract:

Performance modelling is a useful tool in the lifecycle of high-performance scientific software, such as weather and climate models, especially as a means of ensuring efficient use of available computing resources. In particular, sufficiently accurate performance prediction could reduce the effort and experimental computer time required when porting and optimising a climate model to a new machine. In this paper, traditional techniques are used to predict the computation time of a simple shallow water model which is illustrative of the computation (and communication) involved in climate models. These predictions are compared with real execution data gathered on AMD Opteron-based systems, including several phases of the U.K. academic community HPC resource, HECToR. The method relates source code to achieved performance with some success for the K10 series of Opterons, but is found to be inadequate for the next-generation Interlagos processor. This experience motivates the investigation of a data-driven application benchmarking approach to performance modelling. Results for an early version of the approach are presented using the shallow water model as an example.
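
As a rough illustration of the kind of traditional analytic model the abstract refers to, the sketch below predicts a timestep's compute time from per-point operation counts and sustained hardware rates. The function name, counts and rates are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of a traditional analytic performance model: predicted
# compute time from operation counts and sustained hardware rates. All
# numbers below are invented for illustration.

def predict_step_time(nx, ny, flops_per_point, bytes_per_point,
                      flop_rate, bandwidth):
    """Predict one timestep's compute time for an nx-by-ny grid.

    The kernel is assumed to be limited by whichever of floating-point
    throughput or memory bandwidth is slower (a simple roofline-style bound).
    """
    points = nx * ny
    t_flops = points * flops_per_point / flop_rate   # seconds, compute-bound
    t_bytes = points * bytes_per_point / bandwidth   # seconds, memory-bound
    return max(t_flops, t_bytes)

# Hypothetical Opteron-like parameters: 8 GFLOP/s and 4 GB/s sustained per core.
t = predict_step_time(nx=1024, ny=1024,
                      flops_per_point=24, bytes_per_point=72,
                      flop_rate=8e9, bandwidth=4e9)
print(f"predicted time per step: {t * 1e3:.2f} ms")
```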

Relevance: 10.00%

Abstract:

There are three key components for developing a metadata system: a container structure laying out the key semantic issues of interest and their relationships; an extensible controlled vocabulary providing possible content; and tools to create and manipulate that content. While metadata systems must allow users to enter their own information, the use of a controlled vocabulary both imposes consistency of definition and ensures comparability of the objects described. Here we describe the controlled vocabulary (CV) and metadata creation tool built by the METAFOR project for use in the context of describing the climate models, simulations and experiments of the fifth Coupled Model Intercomparison Project (CMIP5). The CV and resulting tool chain introduced here are designed for extensibility and reuse and should find applicability in many more projects.
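
To illustrate how a controlled vocabulary imposes consistency of definition, here is a minimal validation sketch. The field names and vocabulary terms are invented for illustration and are far simpler than the METAFOR CV.

```python
# A minimal sketch of controlled-vocabulary validation: free entry is allowed
# only where the CV does not constrain it. Terms below are hypothetical.

CONTROLLED_VOCAB = {
    "model_component": {"atmosphere", "ocean", "land_surface", "sea_ice"},
    "experiment": {"historical", "piControl", "rcp45", "rcp85"},
}

def validate_metadata(record: dict) -> list[str]:
    """Return a list of violations; an empty list means the record conforms."""
    errors = []
    for field, allowed in CONTROLLED_VOCAB.items():
        value = record.get(field)
        if value is None:
            errors.append(f"missing required field: {field}")
        elif value not in allowed:
            errors.append(f"{field}={value!r} is not in the controlled vocabulary")
    return errors

record = {"model_component": "atmosphere", "experiment": "rcp45"}
assert validate_metadata(record) == []
print(validate_metadata({"model_component": "biosphere"}))
```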

Relevance: 10.00%

Abstract:

The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on how application tasks are mapped to cores and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
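
A minimal sketch of the benchmark-driven idea described above: timings measured for a few problem sizes are interpolated to predict the cost at unmeasured sizes. The timings and the log-log interpolation choice are illustrative assumptions, not results from the paper.

```python
# A minimal sketch of benchmark-driven prediction: measured timings for a few
# local problem sizes (under one decomposition/affinity scenario) are
# interpolated to estimate cost at unmeasured sizes. Numbers are invented.

import numpy as np

# Hypothetical benchmark results: grid points per core -> seconds per step.
points_per_core = np.array([64**2, 128**2, 256**2, 512**2])
seconds_per_step = np.array([0.0011, 0.0042, 0.0171, 0.0705])

def predict(points: float) -> float:
    """Interpolate linearly in log-log space between benchmarked sizes."""
    return float(np.exp(np.interp(np.log(points),
                                  np.log(points_per_core),
                                  np.log(seconds_per_step))))

# Predict the per-step cost for a 384x384 local domain.
print(f"predicted: {predict(384**2) * 1e3:.2f} ms per step")
```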

Relevance: 10.00%

Abstract:

There is compelling evidence that more diverse ecosystems deliver greater benefits to people, and these ecosystem services have become a key argument for biodiversity conservation. However, it is unclear how much biodiversity is needed to deliver ecosystem services in a cost-effective way. Here we show that, while the contribution of wild bees to crop production is significant, service delivery is restricted to a limited subset of all known bee species. Across crops, years and biogeographical regions, crop-visiting wild bee communities are dominated by a small number of common species, and threatened species are rarely observed on crops. Dominant crop pollinators persist under agricultural expansion and many are easily enhanced by simple conservation measures, suggesting that cost-effective management strategies to promote crop pollination should target a different set of species than management strategies to promote threatened bees. Conserving the biological diversity of bees therefore requires more than just ecosystem-service-based arguments.

Relevance: 10.00%

Abstract:

Virus capsids are primed for disassembly, yet capsid integrity is key to generating a protective immune response. Foot-and-mouth disease virus (FMDV) capsids comprise identical pentameric protein subunits held together by tenuous noncovalent interactions and are often unstable. Chemically inactivated or recombinant empty capsids, which could form the basis of future vaccines, are even less stable than live virus. Here we devised a computational method to assess the relative stability of protein-protein interfaces and used it to design improved candidate vaccines for two poorly stable, but globally important, serotypes of FMDV: O and SAT2. We used a restrained molecular dynamics strategy to rank mutations predicted to strengthen the pentamer interfaces and applied the results to produce stabilized capsids. Structural analyses and stability assays confirmed the predictions, and vaccinated animals generated improved neutralizing-antibody responses to stabilized particles compared to parental viruses and wild-type capsids.
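
In spirit, the ranking step described above might look like the following sketch, where a placeholder stability score stands in for the restrained-MD estimate of interface strengthening; the mutation names and scores are hypothetical.

```python
# A minimal sketch of ranking candidate interface mutations by a predicted
# stabilization score (a placeholder for the restrained-MD estimate).
# Mutations and scores below are hypothetical, not the paper's results.

candidate_mutations = {
    # mutation -> predicted interface stabilization (arbitrary units;
    # more positive = stronger pentamer-pentamer interface)
    "VP2 S93Y": 2.4,
    "VP2 S93F": 1.9,
    "VP3 A101V": 0.3,
    "VP1 T48I": -0.7,  # predicted destabilizing; discarded below
}

ranked = sorted(candidate_mutations.items(), key=lambda kv: kv[1], reverse=True)
shortlist = [(m, s) for m, s in ranked if s > 0]
for mutation, score in shortlist:
    print(f"{mutation}: predicted stabilization {score:+.1f}")
```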

Relevance: 10.00%

Abstract:

The paper examines the process of bank internationalisation, exploring how banks become international organisations and what this involves. It also assesses the significance of their international operations and considers whether banks are truly global organisations. The empirical data are based on the 60 largest banks in the world, and content analysis is used to categorise the information into the eight international strategies of Atamer, Calori, Gustavsson, and Menguzzato-Boulard [Internationalisation strategies. In R. Calori, T. Atamer, & P. Nunes (Eds.), The dynamics of international competition – from practice to theory, strategy series (pp. 162–206). London: Sage (2000)] and Bryan, Fraser, Oppenheim, and Rall [Race for the World: Strategies to Build a Great Global Firm. Boston, MA: Harvard Business School Press (1999)]. The findings suggest that the majority of banks focus on countries or geographic regions with which they have some sort of cultural or economic affinity. Moreover, apart from a relatively small number of very large banks, they are international rather than truly global organisations.

Relevance: 10.00%

Abstract:

ISO19156 Observations and Measurements (O&M) provides a standardised framework for organising information about the collection of observations of the environment. Here we describe the implementation of a specialisation of O&M for environmental data, the Metadata Objects for Linking Environmental Sciences (MOLES3). MOLES3 provides support for organising information about data and for user navigation around data holdings. The implementation described here, “CEDA-MOLES”, also supports data management functions for the Centre for Environmental Data Archival (CEDA). The previous iteration of MOLES (MOLES2) saw active use over five years before being replaced by CEDA-MOLES in late 2014. During that period important lessons were learnt, both about the information needed and about how to design and maintain the necessary information systems. In this paper we review the problems encountered in MOLES2; how and why CEDA-MOLES was developed and engineered; the migration of information holdings from MOLES2 to CEDA-MOLES; and, finally, provide an early assessment of MOLES3 (as implemented in CEDA-MOLES) and its limitations. Key drivers for the MOLES3 development included the necessity for improved data provenance, for further structured information to support ISO19115 discovery metadata export (for EU INSPIRE compliance), and for appropriate fixed landing pages for Digital Object Identifiers (DOIs) in the presence of evolving datasets. Key lessons learned included the importance of minimising information structure in free-text fields, and the necessity of supporting as much agility in the information infrastructure as possible without compromising maintainability, both for those using the systems internally and externally (e.g. citing into the information infrastructure) and for those responsible for the systems themselves. The migration itself needed to ensure continuity of service and traceability of archived assets.
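
As a rough sketch of the O&M pattern that MOLES3 specialises, the record below links a procedure, an observed property and a feature of interest to a result. The class and the field values are illustrative of the pattern only, not the CEDA-MOLES schema.

```python
# A minimal sketch of an ISO19156 O&M-style observation record. Field names
# follow the O&M pattern; the values and the "archive://" locator are
# hypothetical, not CEDA-MOLES content.

from dataclasses import dataclass

@dataclass
class Observation:
    procedure: str            # instrument or simulation that produced the data
    observed_property: str    # what was measured or simulated
    feature_of_interest: str  # where, or on what, it was observed
    result: str               # reference to the data itself

obs = Observation(
    procedure="radiosonde ascent",
    observed_property="air_temperature",
    feature_of_interest="atmosphere above station X",
    result="archive://ceda/path/to/dataset",  # hypothetical locator
)
print(obs)
```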