10 results for Multiprocessor scheduling with resource sharing
in CentAUR: Central Archive University of Reading - UK
Sustained monitoring of the Southern Ocean at Drake Passage: past achievements and future priorities
Abstract:
Drake Passage is the narrowest constriction of the Antarctic Circumpolar Current (ACC) in the Southern Ocean, with implications for global ocean circulation and climate. We review the long-term sustained monitoring programmes that have been conducted at Drake Passage, dating back to the early part of the twentieth century. Attention is drawn to numerous breakthroughs that have been made from these programmes, including (a) the first determinations of the complex ACC structure and early quantifications of its transport; (b) realization that the ACC transport is remarkably steady over interannual and longer periods, and a growing understanding of the processes responsible for this; (c) recognition of the role of coupled climate modes in dictating the horizontal transport, and the role of anthropogenic processes in this; (d) understanding of mechanisms driving changes in both the upper and lower limbs of the Southern Ocean overturning circulation, and their impacts. It is argued that monitoring of this passage remains a high priority for oceanographic and climate research, but that strategic improvements could be made concerning how this is conducted. In particular, long-term programmes should concentrate on delivering quantifications of key variables of direct relevance to large-scale environmental issues: in this context, the time-varying overturning circulation is, if anything, even more compelling a target than the ACC flow. Further, there is a need for better international resource-sharing, and improved spatio-temporal coordination of the measurements. If achieved, the improvements in understanding of important climatic issues deriving from Drake Passage monitoring can be sustained into the future.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
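To make the modelling step concrete, the sketch below shows one plausible realisation of such a benchmark-driven model: per-task compute and halo-exchange costs are interpolated from benchmark tables and summed for a candidate decomposition. All benchmark figures, function names and the halo-size formula are illustrative assumptions, not values taken from the work itself.

```python
# Minimal sketch of a benchmark-driven performance model in the spirit
# described above: separate models for loop-based array updates and
# halo exchanges, with interpolation between measured benchmark points.
# All benchmark numbers here are invented for illustration.
import numpy as np

# Hypothetical benchmarks: local subdomain size (cells) -> seconds/step,
# measured once per execution scenario (e.g. fully vs. under-populated nodes).
compute_sizes = np.array([1e4, 1e5, 1e6, 1e7])
compute_times = np.array([2e-4, 2.1e-3, 2.4e-2, 2.6e-1])

# Hypothetical halo-exchange benchmarks: message size (bytes) -> seconds.
halo_bytes = np.array([1e3, 1e4, 1e5, 1e6])
halo_times = np.array([5e-6, 1.2e-5, 9e-5, 8e-4])

def predict_timestep(nx, ny, px, py, word_size=8):
    """Predict one timestep for an nx-by-ny grid on a px-by-py task
    decomposition: interpolated compute cost plus the cost of
    exchanging one-cell-deep halos with nearest neighbours."""
    cells = (nx // px) * (ny // py)                    # cells per task
    halo = 2 * ((nx // px) + (ny // py)) * word_size   # halo bytes per task
    t_compute = np.interp(cells, compute_sizes, compute_times)
    t_halo = np.interp(halo, halo_bytes, halo_times)
    return t_compute + t_halo

# Compare two deployment choices for the same global problem.
print(predict_timestep(4096, 4096, 16, 4))   # strip-like decomposition
print(predict_timestep(4096, 4096, 8, 8))    # square decomposition
```

A model of this shape lets the trial-and-error step described above be replaced by cheap evaluation of candidate decompositions before a full run.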
Abstract:
Distributed computing paradigms for sharing resources, such as Clouds, Grids, Peer-to-Peer systems, and voluntary computing, are becoming increasingly popular. While there are some success stories such as PlanetLab, OneLab, BOINC, BitTorrent, and SETI@home, widespread use of these technologies for business applications has not yet been achieved. In a business environment, mechanisms are needed to provide potential users with incentives to participate in such networks. These mechanisms may range from simple non-monetary access rights and monetary payments to specific policies for sharing. Although a few models for a framework have been discussed (in the general area of a "Grid Economy"), none of these models has yet been realised in practice. This book attempts to fill this gap by discussing the reasons for such limited take-up and exploring incentive mechanisms for resource sharing in distributed systems. The purpose of this book is to identify research challenges in successfully using and deploying resource-sharing strategies in open-source and commercial distributed systems.
Abstract:
The controls on aboveground community composition and diversity have been extensively studied, but our understanding of the drivers of belowground microbial communities is comparatively lacking, despite their importance for ecosystem functioning. In this study, we fitted statistical models to explain landscape-scale variation in soil microbial community composition using data from 180 sites covering a broad range of grassland types, soil and climatic conditions in England. We found that variation in soil microbial communities was explained by abiotic factors such as climate, pH and soil properties. Biotic factors, namely community-weighted means (CWM) of plant functional traits, also explained variation in soil microbial communities. In particular, bacterial-dominated microbial communities were associated with exploitative plant traits, whereas fungal-dominated communities were associated with resource-conservative traits, showing that plant functional traits and soil microbial communities are closely related at the landscape scale.
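For reference, the community-weighted mean (CWM) of a trait is simply the abundance-weighted average of species trait values at a site. The short sketch below shows the calculation; the species abundances, trait values and variable names are invented for illustration and are not from the study.

```python
# Sketch of the community-weighted mean (CWM) trait calculation used as a
# biotic predictor above; all numbers here are invented.
import numpy as np

# Relative abundances of three species at one site (summing to 1).
abundance = np.array([0.5, 0.3, 0.2])
# One trait value per species, e.g. specific leaf area (mm^2 mg^-1).
sla = np.array([22.0, 15.0, 9.0])

# CWM = sum over species of (relative abundance * trait value).
cwm_sla = np.sum(abundance * sla)
print(cwm_sla)  # 17.3 -> one biotic predictor for this site
```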
Abstract:
Objective: To clarify how infection control requirements are represented, communicated, and understood in work interactions through the medical facility construction project life cycle. To assist project participants with effective infection control management by highlighting the nature of such requirements and presenting recommendations to aid practice. Background: A 4-year study regarding client requirement representation and use on National Health Service construction projects in the United Kingdom provided empirical evidence of infection control requirement communication and understanding through design and construction work interactions. Methods: An analysis of construction project resources (e.g., infection control regulations and room data sheets) was combined with semi-structured interviews with hospital client employees and design and construction professionals to provide valuable insights into the management of infection control issues. Results: Infection control requirements are representationally indistinct but also omnipresent through all phases of the construction project life cycle: Failure to recognize their nature, relevance, and significance can result in delays, stoppages, and redesign work. Construction project resources (e.g., regulatory guidance and room data sheets) can mask or obscure the meaning of infection control issues. Conclusions: A preemptive identification of issues combined with knowledge sharing activities among project stakeholders can enable infection control requirements to be properly understood and addressed. Such initiatives should also reference existing infection control regulatory guidance and advice.
Abstract:
In this paper, we develop an energy-efficient resource-allocation scheme with proportional fairness for downlink multiuser orthogonal frequency-division multiplexing (OFDM) systems with distributed antennas. Our aim is to maximize energy efficiency (EE) under constraints on the overall transmit power of each remote access unit (RAU), proportional data-rate fairness, and bit error rates (BERs). Because of the nonconvex nature of the optimization problem, obtaining the optimal solution is extremely computationally complex. Therefore, we develop a low-complexity suboptimal algorithm that separates subcarrier allocation from power allocation. For the low-complexity algorithm, we first allocate subcarriers by assuming equal power distribution. Then, by exploiting the properties of fractional programming, we transform the nonconvex optimization problem in fractional form into an equivalent optimization problem in subtractive form, which admits a tractable solution. Next, an optimal energy-efficient power-allocation algorithm is developed to maximize EE while maintaining proportional fairness. Through computer simulation, we demonstrate the effectiveness of the proposed low-complexity algorithm and illustrate the fundamental trade-off between energy-efficient and spectral-efficient transmission designs.
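The fractional-to-subtractive transformation referred to above is typically realised with a Dinkelbach-style iteration, in which the fractional objective R(p)/P(p) is replaced by R(p) - q*P(p) and q is updated until the subtractive optimum reaches zero. The sketch below illustrates the idea on a toy single-link rate and power model; it is an assumption-laden illustration of the general technique, not the paper's multiuser OFDM algorithm.

```python
# Minimal sketch of the fractional-programming step described above:
# a Dinkelbach-style iteration turning the fractional objective
# R(p)/P(p) (rate over total power) into the subtractive form
# R(p) - q*P(p). The single-link rate/power model is illustrative only.
import numpy as np

def rate(p, gain=4.0):
    return np.log2(1.0 + gain * p)        # achievable rate at power p

def total_power(p, circuit=0.1):
    return p + circuit                    # transmit plus circuit power

def dinkelbach(p_max=2.0, tol=1e-6, iters=50):
    q = 0.0                               # current EE estimate (bits/J)
    grid = np.linspace(1e-6, p_max, 10_000)
    for _ in range(iters):
        # Maximize the subtractive objective R(p) - q*P(p) over p.
        obj = rate(grid) - q * total_power(grid)
        p_star = grid[np.argmax(obj)]
        f = rate(p_star) - q * total_power(p_star)
        q = rate(p_star) / total_power(p_star)  # update EE estimate
        if abs(f) < tol:                  # converged: F(q) ~ 0
            break
    return p_star, q

p_opt, ee = dinkelbach()
print(f"power {p_opt:.3f}, energy efficiency {ee:.3f}")
```

At convergence the subtractive optimum F(q) = max_p [R(p) - q*P(p)] is zero, at which point q equals the maximum achievable energy efficiency, which is what makes the subtractive form a tractable surrogate for the fractional problem.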