44 results for Benchmarks


Relevance:

20.00%

Publisher:

Abstract:

The satisfiability problem is known to be NP-Complete; therefore, there should be relatively small problem instances that take a very long time to solve. However, most of the smaller benchmarks that were once thought challenging, especially the satisfiable ones, can be processed quickly by modern SAT-solvers. We describe and make available a generator that produces both unsatisfiable and, more significantly, satisfiable formulae that take longer to solve than any others known. At the two most recent international SAT Competitions, the smallest unsolved benchmarks were created by this generator. We analyze the results of all solvers in the most recent competition when applied to these benchmarks and also present our own more focused experiments.
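The sgen generator itself is not reproduced here, but as a rough illustration of how small-yet-hard instances can be produced, the sketch below generates random 3-SAT formulae in DIMACS CNF near the clause-to-variable ratio of about 4.26, where random instances are empirically hardest. This is a generic construction for flavour, not the competition generator described in the abstract.

```python
import random

def random_3sat(n_vars, ratio=4.26, seed=0):
    """Generate a random 3-SAT instance in DIMACS CNF format.

    Random instances tend to be hardest near the clause/variable
    ratio of ~4.26 (the satisfiability phase transition).
    """
    rng = random.Random(seed)
    n_clauses = int(round(ratio * n_vars))
    clauses = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(1, n_vars + 1), 3)   # 3 distinct variables
        clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
    lines = [f"p cnf {n_vars} {n_clauses}"]
    lines += [" ".join(map(str, c)) + " 0" for c in clauses]
    return "\n".join(lines)

cnf = random_3sat(20)
print(cnf.splitlines()[0])   # header: p cnf 20 85
```

Note that hardness here is only statistical; the generator in the abstract crafts structured instances that are far harder for their size than random formulae.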

Relevance:

20.00%

Publisher:

Abstract:

For some time, the satisfiability formulae that have been the most difficult to solve for their size have been crafted to be unsatisfiable by the use of cardinality constraints. Recent solvers have introduced explicit checking of such constraints, rendering previously difficult formulae trivial to solve. A family of unsatisfiable formulae is described that is derived from the sgen4 family but cannot be solved using cardinality constraint detection and reasoning alone. These formulae were found to be the most difficult in the SAT2014 competition by a significant margin and include the shortest unsolved benchmark in the competition, sgen6-1200-5-1.cnf.

Relevance:

10.00%

Publisher:

Abstract:

The Ternary Tree Solver (tts) is a complete solver for propositional satisfiability, designed to perform well on the most difficult small instances. It uses a static ternary tree data structure to represent the simplified proposition under all permissible partial assignments and maintains a database of derived propositions known to be unsatisfiable. In the SAT2007 competition, version 4.0 won the silver medal in the handmade, speciality UNSAT solvers category and was the top qualifier for the second stage on handmade benchmarks, solving 11 benchmarks that were not solved by any other entrant. We describe the methods used by the solver and analyse the competition Phase 1 results on small benchmarks. We propose a first version of a comprehensive suite of small difficult satisfiability benchmarks (sdsb) and compare the worst-case performance of the competition medallists on these benchmarks.

Relevance:

10.00%

Publisher:

Abstract:

The first part of the paper explores some of the reasons why contemporary border studies understate the full significance of state borders and their global primacy. It is argued that this failing is rooted in a much wider lack of historical reflexivity: a reluctance to acknowledge the historical positioning (the 'where and when') of contemporary border studies themselves. This reluctance encourages a form of pseudohistory, or 'epochal thinking', which disfigures perspective on the present. Among the consequences are (a) exaggerated claims of the novelty of contemporary border change, propped up by poorly substantiated benchmarks in the past; (b) an incapacity to recognise the 'past in the present' as in the various historical deposits of state formation processes; and (c) a failure to recognise the distinctiveness of contemporary state borders and how they differ from other borders in their complexity and globality. The second part of the paper argues for a recalibration of border studies aimed at balancing their spatial emphases with a much greater, and more critical, historical sensitivity. It insists that 'boundedness', and state boundedness in particular, is a variable that must be understood historically. This is illustrated by arguing that a better analysis of contemporary border change means rethinking the crude periodisation which distinguishes the age of empire from the age of nation-states and, in turn, from some putative contemporary era 'beyond nation-states' and their borders.

Relevance:

10.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to shed light on the under-theorised relationship between old age and victimisation. In classical criminological studies, the relationship between “age”, victimisation and crime has been dominated by analysis of younger people's experiences. This paper aims to address this knowledge deficit by exploring older people's experiences and linking them to the social construction of vulnerability.

Design/methodology/approach – The paper explores both historical and contemporary narratives relating to the diverse experiences of older people as victims in the UK. In particular, statistical context and theoretical advancement from 1945 to the present illuminate that older people as a social group have a deep “fear of crime” relative to their actual victimisation.

Findings – A careful survey of the criminological literature highlights a paucity of research relating to older people's views and experiences of crime and victimisation. The conceptual issue of vulnerability in different contexts is important in understanding ageing and victimisation in the UK. The paper's findings illustrate that older people's experiences have remained marginalised in debates around social policy, and how the criminal justice system responds to these changes remains to be seen.

Research limitations/implications – Any research attempt at theorising “age” should take into consideration not just younger people, but also the diverse experiences of older people. Policy makers may care to ponder whether benchmarks should be written that take full account of older people's experiences of vulnerability.

Practical implications – For criminal justice scholars and practitioners, there is a need to listen to the narratives of older people that should help shape and frame debate about their lived experiences. There should be an examination of existing formal and informal practices regarding elders, as the first step in developing an explicit and integrated set of policies and programmes to address the special needs of this group.

Originality/value – This is an original paper in highlighting how important old age is in the construction of “victims” in modern society. By theorising age, victimisation and crime, it hopes to dispel and challenge some of the myths surrounding later life, crime and the older victim.

Relevance:

10.00%

Publisher:

Abstract:

This paper describes the design, application, and evaluation of a user-friendly, flexible, scalable and inexpensive Advanced Educational Parallel (AdEPar) digital signal processing (DSP) system based on TMS320C25 digital processors to implement DSP algorithms. This system will be used in the DSP laboratory by graduate students working on advanced topics such as developing parallel DSP algorithms. Graduating senior students who have gained some experience in DSP can also use the system. The DSP laboratory has proved to be a useful tool in the hands of the instructor for teaching the mathematically oriented topics of DSP that are often difficult for students to grasp. The DSP laboratory with assigned projects has greatly improved the ability of students to understand such complex topics as the fast Fourier transform algorithm, linear and circular convolution, and the theory and design of infinite impulse response (IIR) and finite impulse response (FIR) filters. The user-friendly PC software support of the AdEPar system makes it easy for students to develop DSP programs. This paper gives the architecture of the AdEPar DSP system. Communication between processors and between the PC and the DSP processors is explained. The parallel debugger kernels and the restrictions of the system are described. Programming in AdEPar is explained, and two benchmarks (parallel FFT and DES) are presented to show the system performance.
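As a flavour of the lab topics listed above, the sketch below (a generic illustration, not AdEPar code) verifies the circular convolution theorem with a naive DFT: circular convolution in the time domain equals pointwise multiplication in the frequency domain.

```python
import cmath

def dft(x):
    # Naive O(n^2) discrete Fourier transform
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    # Inverse DFT, scaled by 1/n
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def circular_convolution(x, h):
    # y[k] = sum_m x[m] * h[(k - m) mod n]
    n = len(x)
    return [sum(x[m] * h[(k - m) % n] for m in range(n)) for k in range(n)]

x, h = [1, 2, 3, 4], [1, 0, 0, 1]
direct = circular_convolution(x, h)
via_dft = [round(c.real) for c in idft([a * b for a, b in zip(dft(x), dft(h))])]
print(direct)    # [3, 5, 7, 5]
print(via_dft)   # identical: convolution theorem
```

A production implementation would use the FFT (O(n log n)) rather than the naive transform, which is exactly the efficiency gap the FFT lab assignment is meant to demonstrate.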

Relevance:

10.00%

Publisher:

Abstract:

Speeding up sequential programs on multicores is a challenging problem that is in urgent need of a solution. Automatic parallelization of irregular pointer-intensive codes, exemplified by the SPECint codes, is a very hard problem. This paper shows that, with a helping hand, such auto-parallelization is possible and fruitful. This paper makes the following contributions: (i) A compiler framework for extracting pipeline-like parallelism from outer program loops is presented. (ii) Using a light-weight programming model based on annotations, the programmer helps the compiler to find thread-level parallelism. Each of the annotations specifies only a small piece of semantic information that compiler analysis misses, e.g. stating that a variable is dead at a certain program point. The annotations are designed such that correctness is easily verified. Furthermore, we present a tool for suggesting annotations to the programmer. (iii) The methodology is applied to autoparallelize several SPECint benchmarks. For the benchmark with most parallelism (hmmer), we obtain a scalable 7-fold speedup on an AMD quad-core dual processor. The annotations constitute a parallel programming model that relies extensively on a sequential program representation. Hereby, the complexity of debugging is not increased and it does not obscure the source code. These properties could prove valuable to increase the efficiency of parallel programming.
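The paper's compiler framework and annotation language are not shown here, but the pipeline-like parallelism it extracts from outer loops can be sketched generically: two stages of a loop body run on separate threads connected by a bounded queue, so iteration i's second stage overlaps with iteration i+1's first stage. The stage functions below are hypothetical placeholders.

```python
import threading, queue

def pipeline(items, stage1, stage2, capacity=4):
    """Run two stages of a loop body as a two-stage pipeline.

    A bounded FIFO queue decouples the stages; order is preserved
    because there is a single producer and a single consumer.
    """
    q = queue.Queue(maxsize=capacity)
    results = []

    def producer():
        for item in items:
            q.put(stage1(item))
        q.put(None)          # sentinel: no more work

    def consumer():
        while True:
            value = q.get()
            if value is None:
                break
            results.append(stage2(value))

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start(); t1.join(); t2.join()
    return results

out = pipeline(range(5), stage1=lambda x: x * x, stage2=lambda y: y + 1)
print(out)   # [1, 2, 5, 10, 17]
```

The hard part, which the paper's annotations address, is proving that the two stages really are independent (no hidden pointer aliasing between them); the sketch simply assumes it.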

Relevance:

10.00%

Publisher:

Abstract:

Caches hide the growing latency of accesses to the main memory from the processor by storing the most recently used data on-chip. To limit the search time through the caches, they are organized in a direct mapped or set-associative way. Such an organization introduces many conflict misses that hamper performance. This paper studies randomizing set index functions, a technique to place the data in the cache in such a way that conflict misses are avoided. The performance of such a randomized cache strongly depends on the randomization function. This paper discusses a methodology to generate randomization functions that perform well over a broad range of benchmarks. The methodology uses profiling information to predict the conflict miss rate of randomization functions. Then, using this information, a search algorithm finds the best randomization function. Due to implementation issues, it is preferable to use a randomization function that is extremely simple and can be evaluated in little time. For these reasons, we use randomization functions where each randomized address bit is computed as the XOR of a subset of the original address bits. These functions are chosen such that they operate on as few address bits as possible and have few inputs to each XOR. This paper shows that to index a 2^m-set cache, it suffices to randomize m+2 or m+3 address bits and to limit the number of inputs to each XOR to 2 bits to obtain the full potential of randomization. Furthermore, it is shown that the randomization function that we generate for one set of benchmarks also works well for an entirely different set of benchmarks. Using the described methodology, it is possible to reduce the implementation cost of randomization functions with only an insignificant loss in conflict reduction.
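A minimal sketch of the XOR-based indexing described above, with hypothetical bit masks (each index bit is the parity of two address bits, matching the 2-input-XOR limit the paper recommends). The masks here are illustrative; the paper's methodology searches for good masks using profiling information.

```python
def xor_index(address, masks):
    """Compute a randomized set index: bit i of the index is the XOR
    (parity) of the address bits selected by masks[i]."""
    index = 0
    for i, mask in enumerate(masks):
        parity = bin(address & mask).count("1") & 1
        index |= parity << i
    return index

# Hypothetical masks for an 8-set cache (m = 3 index bits), each XORing
# two address bits: bit0 = a0 ^ a3, bit1 = a1 ^ a4, bit2 = a2 ^ a5.
masks = [0b001001, 0b010010, 0b100100]
print(xor_index(0b000101, masks))   # 5: bit0 = 1^0, bit1 = 0^0, bit2 = 1^0
```

Because each index bit needs only a single 2-input XOR gate, the extra latency on the cache access path is tiny, which is why the paper restricts the search to such simple functions.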

Relevance:

10.00%

Publisher:

Abstract:

Randomising set index functions can reduce the number of conflict misses in data caches by spreading the cache blocks uniformly over all sets. Typically, the randomisation functions compute the exclusive ORs of several address bits. Not all randomising set index functions perform equally well, which calls for the evaluation of many set index functions. This paper discusses and improves a technique that tackles this problem by predicting the miss rate incurred by a randomisation function, based on profiling information. A new way of looking at randomisation functions is used, namely the null space of the randomisation function. The members of the null space describe pairs of cache blocks that are mapped to the same set. This paper presents an analytical model of the error made by the technique and uses this to propose several optimisations to the technique. The technique is then applied to generate a conflict-free randomisation function for the SPEC benchmarks. (C) 2003 Elsevier Science B.V. All rights reserved.
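The null-space view can be illustrated with a small sketch (hypothetical XOR masks, not the paper's generated functions). An XOR-based index function is linear over GF(2), so any address v that maps to set 0 is a null-space member, and it pairs every address a with a XOR v in the same set.

```python
def xor_index(address, masks):
    # parity of the address bits selected by each mask -> one index bit
    return sum((bin(address & m).count("1") & 1) << i for i, m in enumerate(masks))

masks = [0b001001, 0b010010, 0b100100]   # hypothetical 3-bit XOR index
n_bits = 6

# Null space: all addresses mapping to set 0 (the function is linear over GF(2))
null_space = [v for v in range(1 << n_bits) if xor_index(v, masks) == 0]

# Every null-space member v pairs address a with a ^ v in the same set
a = 0b010110
assert all(xor_index(a, masks) == xor_index(a ^ v, masks) for v in null_space)
print(len(null_space))   # 8 = 2^(6-3): the map has rank 3 on 6 address bits
```

Enumerating the null space like this shows exactly which block pairs can conflict, which is the information the paper's prediction technique extracts from profiling data.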