2 results for test case optimization
in Glasgow Theses Service
Abstract:
The aim of this thesis was to investigate, using the real-time test case of the 2014 Commonwealth Games, whether the realist synthesis methodology could contribute to the making of health policy in a meaningful way. This was done by examining two distinct research questions: first, whether realist synthesis could contribute new insights to the health policymaking process, and second, whether the 2014 Commonwealth Games volunteer programme was likely to have any significant, measurable impact on the health inequalities experienced by large sections of the host population. The 2014 Commonwealth Games legacy plans were ambitious, anticipating that the event would provide explicit opportunities to impact positively on health inequalities. By using realist synthesis to unpick the theories underpinning the volunteer programme, the review identified the population subgroups for whom the programme was likely to be successful, how this could be achieved, and in what contexts. In answer to the first research question, the review found that while realist methods provided a more nuanced exposition of the impacts of the Games volunteer programme on health inequalities than previous traditional reviews had been able to offer, the method had several drawbacks: it was resource-intensive and complex, and it encouraged the exploration of a much wider set of literatures at the expense of an in-depth grasp of the complexities of those literatures. In answer to the second research question, the review found that the Games were, if anything, likely to exacerbate health inequalities, because the programme was designed in such a way that the individuals recruited to it were most likely to be those in least need of the additional mental and physical health benefits that Games volunteering was designed to provide. The thesis details the approach taken to investigate both the realist approach to evidence synthesis and the likelihood that the 2014 Games volunteer programme would yield the expected results.
Abstract:
Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access latency overhead and potential bandwidth saturation of the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a stop-the-world garbage collector when tracing connected objects in NUMA heaps. First, it identifies the rich locality that exists naturally in connected objects comprising a root object and its reachable set, termed 'rooted sub-graphs'. Second, it leverages this locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to contain a large number of objects on the same NUMA node. Third, a garbage collector thread steals references from sibling threads running on the same NUMA node to improve data locality. The new NUMA-aware garbage collector is evaluated using seven benchmarks from the established real-world DaCapo benchmark suite, together with the widely used SPECjbb benchmark, a Neo4j graph database Java benchmark, and an artificial benchmark. On a multi-hop NUMA architecture the NUMA-aware garbage collector shows an average performance improvement of 15%, and this gain is shown to result from improved NUMA memory access in a ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: the policy relies on outdated assumptions and generates a constant thread count, yet it is still used in the production version of the Hotspot JVM. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring it yields better collection throughput than the default policy. Fifth, the dissertation designs and implements a runtime technique that uses heuristics drawn from dynamic collection behaviour to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average improvement of 21% in garbage collection performance for the DaCapo benchmarks.
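To make the NUMA-aware tracing order concrete, the following is a minimal, illustrative Java sketch, not the dissertation's Hotspot implementation (which lives inside the JVM's collector). The types RootedSubGraph and GcWorker and the method names are hypothetical, and the sketch runs single-threaded, so the work deques are not synchronised as a real stealing collector would require. It only shows the scheduling idea described in the abstract: a worker drains rooted sub-graphs assigned to its own NUMA node, then steals from siblings on the same node before reaching across nodes.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Minimal sketch of a NUMA-aware tracing order; all types and names are hypothetical.
public class NumaAwareTraceSketch {

    // A root object together with its reachable set ("rooted sub-graph"),
    // tagged with the NUMA node where most of its objects are expected to live.
    static final class RootedSubGraph {
        final int homeNode;
        RootedSubGraph(int homeNode) { this.homeNode = homeNode; }
    }

    // A garbage collector worker conceptually pinned to one NUMA node,
    // holding the rooted sub-graphs assigned to that node.
    static final class GcWorker {
        final int node;
        final Deque<RootedSubGraph> localWork = new ArrayDeque<>();
        GcWorker(int node) { this.node = node; }
    }

    // Assign each rooted sub-graph to a worker on the sub-graph's home node.
    static void distributeRoots(List<RootedSubGraph> roots, List<GcWorker> workers) {
        for (RootedSubGraph r : roots) {
            for (GcWorker w : workers) {
                if (w.node == r.homeNode) { w.localWork.add(r); break; }
            }
        }
    }

    // One worker's trace loop: drain local work first, then steal from
    // same-node siblings, and only then fall back to remote-node stealing.
    static void trace(GcWorker self, List<GcWorker> all) {
        RootedSubGraph task;
        while ((task = self.localWork.poll()) != null) process(task);
        for (GcWorker other : all) {
            if (other != self && other.node == self.node) {
                while ((task = other.localWork.poll()) != null) process(task);
            }
        }
        for (GcWorker other : all) {
            if (other.node != self.node) {
                while ((task = other.localWork.poll()) != null) process(task);
            }
        }
    }

    // Stand-in for marking all objects reachable from the sub-graph's root.
    static void process(RootedSubGraph g) { }

    public static void main(String[] args) {
        List<GcWorker> workers = new ArrayList<>();
        workers.add(new GcWorker(0));
        workers.add(new GcWorker(1));
        List<RootedSubGraph> roots = List.of(
                new RootedSubGraph(0), new RootedSubGraph(1), new RootedSubGraph(0));
        distributeRoots(roots, workers);
        for (GcWorker w : workers) trace(w, workers);
        System.out.println("Traced " + roots.size() + " rooted sub-graphs");
    }
}

Preferring same-node steals keeps the collector's traversal of the object graph on local memory banks, which is the mechanism the abstract credits for the improved NUMA memory access behind the reported performance gain.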