54 results for Contracts of execution

in the QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

90.00%

Publisher:

Abstract:

Unregulated apoptosis can be due to a disruption in the balance and control of both intra- and inter-cellular proteolytic activities, leading to various disease states. Many proteases involved in apoptotic processes are yet to be identified; however, several are already well characterized. Caspases have traditionally held the predominant role as prime mediators of execution. Latterly, however, evidence has accumulated that non-caspases, including calpains, cathepsins, granzymes and the proteasome, have roles in mediating and promoting cell death. Increasingly, research is implicating serine proteases in apoptotic processing, particularly in the generation of nuclear events such as condensation, fragmentation and DNA degradation observed in late-stage apoptosis. Serine proteases are therefore emerging as additional or alternative therapeutic targets.

Relevance:

80.00%

Publisher:

Abstract:

To enable the building of new low-carbon housing, recent years have seen the widespread demolition of Victorian housing in UK cities. In this regard, Belfast is no different from its counterparts on the British mainland, where Compulsory Purchase Orders force people to sell and vacate their terraced homes to make way for newly constructed 'sustainable' housing. The global economic downturn has temporarily slowed this process, leaving many Belfast terraces blocked up awaiting future demolition. This stay of execution is an unlikely but welcome opportunity to review and assess the true value, to owner, streetscape and city, of this important and common house type. Important questions need to be asked. Should sound Victorian terraces be demolished? What is the genuine cost of demolition and replacement in terms of community and environment? With reference to case studies in a Belfast context, the argument is made that new is not necessarily better, and that the existing Victorian terrace is an important and valuable resource, one that, with intelligent intervention, offers a genuinely sustainable alternative to new-build housing.

Relevance:

80.00%

Publisher:

Abstract:

Task-based dataflow programming models and runtimes emerge as promising candidates for programming multicore and manycore architectures. These programming models dynamically analyze task dependencies at runtime and schedule independent tasks concurrently onto the processing elements. In such models, cache locality, which is critical for performance, becomes more challenging in the presence of fine-grain tasks and in architectures with many simple cores.

This paper presents a combined hardware-software approach to improve cache locality and offer better performance in terms of execution time and energy in the memory system. We propose the explicit bulk prefetcher (EBP) and epoch-based cache management (ECM) to help runtimes prefetch task data and guide the replacement decisions in caches. The runtime software can use this hardware support to expose its internal knowledge about the tasks to the architecture and achieve more efficient task-based execution. Our combined scheme outperforms HW-only prefetchers and state-of-the-art replacement policies: it improves performance by an average of 17%, generates on average 26% fewer L2 misses, and consumes on average 28% less energy in the components of the memory system.
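To make the division of labour concrete, the Python sketch below simulates the general idea: the runtime, which knows each task's data footprint, issues one bulk prefetch per task and tags cache lines with the epoch (task number) that will use them, so replacement can prefer victims from already-finished epochs. All names (BulkPrefetcher, EpochCache) and parameters are invented for illustration and do not reproduce the paper's actual EBP/ECM hardware interface.

```python
# Illustrative simulation of runtime-guided prefetching and epoch-based
# replacement; not the paper's actual hardware design.

from collections import OrderedDict

LINE = 64  # cache line size in bytes (assumption)

class EpochCache:
    """Toy cache that prefers evicting lines tagged with past epochs."""
    def __init__(self, capacity_lines):
        self.capacity = capacity_lines
        self.lines = OrderedDict()          # line index -> epoch tag

    def touch(self, addr, epoch):
        line = addr // LINE
        if line in self.lines:
            self.lines.move_to_end(line)    # refresh LRU position
        elif len(self.lines) >= self.capacity:
            self._evict(epoch)
        self.lines[line] = epoch

    def _evict(self, current_epoch):
        # Prefer a victim from an epoch that has already run.
        victim = next((l for l, t in self.lines.items()
                       if t < current_epoch), None)
        if victim is None:
            self.lines.popitem(last=False)  # fall back to plain LRU
        else:
            del self.lines[victim]

class BulkPrefetcher:
    """Simulated explicit bulk prefetcher: the runtime hands over a
    task's whole footprint instead of relying on HW heuristics."""
    def __init__(self, cache):
        self.cache = cache

    def prefetch(self, base, size, epoch):
        for addr in range(base, base + size, LINE):
            self.cache.touch(addr, epoch)

cache = EpochCache(capacity_lines=512)
ebp = BulkPrefetcher(cache)
# Before dispatching each task, the runtime prefetches its footprint.
for epoch, (base, size) in enumerate([(0x1000, 4096), (0x9000, 8192)]):
    ebp.prefetch(base, size, epoch)
print(len(cache.lines), "lines resident")   # 192 lines resident
```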

Relevance:

40.00%

Publisher:

Abstract:

This article examines operational Private Finance Initiative (PFI) school projects and reports the experiences of UK headteachers. It considers the impact of project size on value for money (VFM). Headteachers involved in small projects are more satisfied with costs than those involved in large projects, but headteachers involved in larger projects are more satisfied with affordability. Generally, heads are more satisfied with the buildings than with the services. The authors question the government’s recent policy changes to increase the size of PFI projects.

Relevance:

40.00%

Publisher:

Abstract:

Purpose - The UK government argues that the benefits of public private partnership (PPP) in delivering public infrastructure stem from: transferring risks to the private sector within a structure in which financiers put their own capital at risk; and the performance-based payment mechanism, reinforced by the due diligence requirements imposed by the lenders financing the projects (HM Treasury, 2010). Prior studies of risk in PPPs have investigated 'what' risks are allocated and to 'whom', that is, to the public or the private sector. The purpose of this study is to examine 'how' and 'why' PPP risks are diffused by their financiers.

Design/methodology/approach - This study focuses on the financial structure of PPPs and on their financiers. Empirical evidence comes from interviews conducted with equity and debt financiers.

Findings - The findings show that the financial structure of the deals generates risk aversion in both debt and equity financiers, and that the need to attract affordable finance leads to risk diffusion through a network of companies using various means, including contractual mitigation through insurance, performance support guarantees, interest rate swaps and inflation hedges. Because of the complexity this process generates, both procurers and suppliers need expensive expert advice. The risk aversion and diffusion, and the consequent need for advice, add cost to the projects, weakening the government's economic argument for risk transfer.

Limitations and implications - The empirical work covers the private finance initiative (PFI) type of PPP arrangement, so the risk diffusion mechanisms may not be generalisable to other forms of PPP, especially those that do not involve high leverage or private finance. Moreover, the scope of this research is limited to exploring the diffusion of risk in the private sector. Further research is needed on how risk is diffused in other settings and on the value-for-money implications of risk diffusion in PPP contracts.

Originality/value - The expectation inherent in PPP is that the private sector will better manage those risks allocated to it and that, because private capital is at risk, financiers will perform due diligence, with the ultimate outcome that only viable projects proceed. This paper presents empirical evidence that raises questions about these expectations.

Keywords: public private partnership, risk management, diffusion, private finance initiative, financiers

Relevance:

40.00%

Publisher:

Abstract:

Following major reforms of the British National Health Service (NHS) in 1990, the roles of purchasing and providing health services were separated, with the relationship between purchasers and providers governed by contracts. Using a mixed multinomial logit analysis, we show how this policy shift led to a selection of contracts that is consistent with the predictions of a simple model, based on contract theory, in which the characteristics of the health services being purchased and of the contracting parties influence the choice of contract form. The paper thus provides evidence in support of the practical relevance of theory in understanding health care market reform. © 2008 Elsevier B.V. All rights reserved.
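For readers unfamiliar with the method, the Python sketch below shows the core of a multinomial logit choice model: the probability of selecting contract form j is a softmax over linear utilities in the characteristics of the services and contracting parties. The characteristics, coefficients and contract labels here are hypothetical placeholders, not the paper's estimated model.

```python
# Generic multinomial logit choice probabilities (not the paper's
# estimated model): P(form j | x) = exp(x . b_j) / sum_k exp(x . b_k).

import math

def choice_probabilities(x, betas):
    """x: characteristics of the service/contracting parties;
    betas: one coefficient vector per contract form."""
    utilities = [sum(xi * bi for xi, bi in zip(x, b)) for b in betas]
    m = max(utilities)                       # stabilise the softmax
    expu = [math.exp(u - m) for u in utilities]
    total = sum(expu)
    return [e / total for e in expu]

# Hypothetical example: two characteristics (say, measurability of the
# service and purchaser experience) and three contract forms (e.g.
# block, cost-and-volume, cost-per-case) with made-up coefficients.
x = [0.7, 0.3]
betas = [[1.2, 0.4], [0.5, 1.1], [-0.3, 0.8]]
print(choice_probabilities(x, betas))
```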

Relevance:

40.00%

Publisher:

Abstract:

Processor architectures have taken a turn towards many-core processors, which integrate multiple processing cores on a single chip to increase overall performance, and there are no signs that this trend will stop in the near future. Many-core processors are harder to program than multi-core and single-core processors because of the need to write parallel or concurrent programs with high degrees of parallelism. Moreover, many-cores have to operate in a strong-scaling regime because of memory bandwidth constraints. In strong scaling, increasingly fine-grained parallelism must be extracted to keep all processing cores busy.

Task dataflow programming models have a high potential to simplify parallel programming because they relieve the programmer of precisely identifying all inter-task dependences when writing programs. Instead, the task dataflow runtime system detects and enforces inter-task dependences during execution, based on a description of the memory each task accesses. The runtime constructs a task dataflow graph that captures all tasks and their dependences. Tasks are scheduled to execute in parallel, taking into account the dependences specified in the task graph.
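A minimal Python sketch of this dependence detection, assuming a generic in/out/inout annotation scheme (the class and method names are illustrative, not any specific runtime's API):

```python
# Derive inter-task dependences from per-argument access annotations.

from collections import defaultdict

class TaskGraph:
    def __init__(self):
        self.last_writer = {}             # object -> last task writing it
        self.readers = defaultdict(list)  # object -> readers since then
        self.edges = defaultdict(set)     # task -> tasks it waits for

    def add_task(self, task, args):
        """args: iterable of (obj, mode), mode in {'in','out','inout'}."""
        for obj, mode in args:
            if mode in ('in', 'inout'):
                # true (read-after-write) dependence on the last writer
                if obj in self.last_writer:
                    self.edges[task].add(self.last_writer[obj])
            if mode in ('out', 'inout'):
                # anti/output dependences on prior readers and writer
                self.edges[task].update(self.readers[obj])
                if obj in self.last_writer:
                    self.edges[task].add(self.last_writer[obj])
                self.last_writer[obj] = task
                self.readers[obj] = []
            if mode == 'in':
                self.readers[obj].append(task)

g = TaskGraph()
g.add_task('T1', [('a', 'out')])
g.add_task('T2', [('a', 'in')])
g.add_task('T3', [('a', 'inout')])
print(g.edges['T3'])   # {'T1', 'T2'} (order may vary)
```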

Several papers report significant overheads in task dataflow systems, which severely limit the scalability and usability of such systems. In this paper we study efficient schemes to manage task graphs and analyze their scalability. We assume a programming model that supports input, output and in/out annotations on task arguments, as well as commutative in/out annotations and reductions. We analyze the structure of task graphs and identify versions and generations as key concepts for their efficient management. We then present three schemes to manage task graphs, building on graph representations, hypergraphs and lists. We also consider a fourth, edge-less scheme that synchronizes tasks using integers. Analysis using micro-benchmarks shows that the graph representation is not always scalable and that the edge-less scheme introduces the least overhead in nearly all situations.
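The abstract does not spell the edge-less scheme out, but one plausible reading of "synchronizing tasks using integers", sketched below in Python, is a per-object ticket counter: a task stores (object, ticket) pairs instead of graph edges and becomes ready once each object's retired counter reaches its ticket. This design and all names are assumptions for illustration, not necessarily the paper's scheme.

```python
# Edge-less synchronization via per-object integer tickets (assumed design).

class Obj:
    def __init__(self):
        self.issued = 0    # writer tickets handed out so far
        self.retired = 0   # writer tickets completed so far

class Task:
    def __init__(self, name):
        self.name = name
        self.waits = []    # (obj, ticket) pairs replace explicit edges

def submit(task, args):
    """args: iterable of (obj, mode), mode in {'in','out','inout'}."""
    for obj, mode in args:
        if mode in ('in', 'inout'):
            task.waits.append((obj, obj.issued))  # wait for pending writers
        if mode in ('out', 'inout'):
            obj.issued += 1                       # take a writer ticket
    # A full scheme would also count readers so that a later writer
    # waits for them (anti-dependences); omitted for brevity.

def ready(task):
    return all(obj.retired >= ticket for obj, ticket in task.waits)

def complete(task, args):
    for obj, mode in args:
        if mode in ('out', 'inout'):
            obj.retired += 1

a = Obj()
t1, t2 = Task('T1'), Task('T2')
submit(t1, [(a, 'out')])
submit(t2, [(a, 'in')])
print(ready(t2))            # False: T2 waits on T1's ticket
complete(t1, [(a, 'out')])
print(ready(t2))            # True
```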

Relevance:

40.00%

Publisher:

Abstract:

Dynamic Voltage and Frequency Scaling (DVFS) exhibits fundamental limitations as a method to reduce energy consumption in computing systems. In the HPC domain, where performance is of highest priority and codes are heavily optimized to minimize idle time, DVFS has limited opportunity to achieve substantial energy savings. This paper explores whether operating processors near the transistor threshold voltage (NTV) is a better alternative to DVFS for breaking the power wall in HPC. NTV presents challenges, since it compromises both performance and reliability to reduce power consumption. We present a first-of-its-kind study of a significance-driven execution paradigm that selectively uses NTV and algorithmic error tolerance to reduce energy consumption in performance-constrained HPC environments. Using an iterative algorithm as a use case, we present an adaptive execution scheme that switches between near-threshold execution on many cores and above-threshold execution on one core, as the computational significance of iterations in the algorithm evolves over time. Using this scheme on state-of-the-art hardware, we demonstrate energy savings ranging from 35% to 67%, while compromising neither correctness nor performance.
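To make the idea concrete, here is a hedged Python sketch of a significance-driven mode switch for an iterative solver: high-significance iterations run in reliable above-threshold ('nominal') mode, while low-significance refinement iterations are handed to simulated error-tolerant near-threshold cores. The significance metric, the 1e-3 threshold and the fault model are all illustrative assumptions, not the paper's.

```python
# Significance-driven switching between nominal and NTV execution modes.

import random

def significance(prev, curr):
    """Relative change of the iterate: a cheap proxy for how much an
    iteration still contributes to the final answer."""
    return abs(curr - prev) / (abs(prev) + 1e-12)

def run_iteration(x, ntv):
    x_new = 0.5 * (x + 2.0 / x)         # Newton step converging to sqrt(2)
    if ntv and random.random() < 0.05:  # simulate rare NTV timing faults
        x_new += 1e-6 * random.uniform(-1, 1)   # small, tolerable error
    return x_new

x, ntv = 1.0, False
for it in range(15):
    x, prev = run_iteration(x, ntv), x
    s = significance(prev, x)
    ntv = s < 1e-3   # low significance -> switch to near-threshold mode
    print(f"iter {it}: x={x:.9f} sig={s:.2e} "
          f"mode={'NTV' if ntv else 'nominal'}")
```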

Relevance:

40.00%

Publisher:

Abstract:

In recent years, concerns over litigation and the trend towards close monitoring of academic activity have seen the effective hijacking of research ethics by university managers and bureaucrats. This can effectively curtail cutting-edge research, as perceived 'safe' research strategies are encouraged. However, ethics is about more than research governance; ultimately, it seeks to avoid harm and to increase benefits to society. Rural development debate is fairly quiet on the question of ethics, leaving guidance to professional bodies. This study draws on empirical research that examined the lives of migrant communities in Northern Ireland. This context of increasingly diverse rural development actors provides a backdrop for the way in which the researcher navigates ethical issues as they unfold in the field. The analysis seeks to relocate ethics from being an annoying bureaucratic requirement to being inherent to rigorous and professional research and practice. It reveals how attention to professional ethics can contribute to effective, situated and reflexive practice, thus transforming ethics into an asset for professional researchers.

Relevance:

40.00%

Publisher:

Abstract:

Scheduling jobs with deadlines, each of which defines the latest time by which a job must be completed, can be challenging on the cloud due to incurred costs and unpredictable performance. The problem is further complicated when there is not enough information to schedule a job so that its deadline is satisfied and its cost is minimised. In this paper, we present an approach to scheduling jobs whose performance is unknown before execution, with deadlines, on the cloud. By performing a sampling phase to collect the necessary information about these jobs, our approach delivers scheduling decisions within 10% of the cost and 16% of the violation rate of an ideal setting that has complete knowledge of each job from the beginning. Our proposed algorithm also outperforms existing approaches that use a fixed amount of resources, reducing the violation cost by at least a factor of two.
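The paper's exact algorithm is not reproduced here; the Python sketch below only illustrates the sample-then-schedule idea: estimate per-task runtime from a small sample, then rent the cheapest number of instances expected to meet the deadline. The 10% sample size, the price and the cost model are assumptions.

```python
# Sample-then-schedule sketch for deadline-constrained cloud jobs.

import math
import random
import statistics

def run_task(task):
    # Stand-in for real execution: each task carries its true runtime.
    return task

def sample_runtime(tasks, fraction=0.1):
    """Run a small random sample of tasks; return the mean runtime."""
    sample = random.sample(tasks, max(1, int(len(tasks) * fraction)))
    return statistics.mean(run_task(t) for t in sample)

def plan(tasks, deadline_s, price_per_instance_hour=0.10):
    mean_rt = sample_runtime(tasks)
    total_work = mean_rt * len(tasks)            # estimated serial time
    instances = max(1, math.ceil(total_work / deadline_s))
    hours = math.ceil(total_work / instances / 3600)
    return instances, instances * hours * price_per_instance_hour

random.seed(1)
tasks = [random.uniform(2.0, 6.0) for _ in range(500)]  # true runtimes (s)
n, cost = plan(tasks, deadline_s=300)
print(f"rent {n} instances, estimated cost ${cost:.2f}")
```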