995 results for SAT-solvers, Small Hard Benchmarks


Relevance: 100.00%

Abstract:

For some time, the satisfiability formulae that have been the most difficult to solve for their size have been crafted to be unsatisfiable through the use of cardinality constraints. Recent solvers have introduced explicit checking of such constraints, rendering previously difficult formulae trivial to solve. A family of unsatisfiable formulae is described that is derived from the sgen4 family but cannot be solved by cardinality constraint detection and reasoning alone. These formulae were found to be the most difficult in the SAT 2014 competition by a significant margin, and they include the shortest unsolved benchmark in the competition, sgen6-1200-5-1.cnf.
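
For context on what such cardinality constraints look like at the CNF level, here is a minimal illustrative sketch in Python showing the naive pairwise encoding of at-most-one/at-least-one constraints; it is not the sgen4/sgen6 construction, and the helper names are made up for the example.

```python
from itertools import combinations

def at_most_one(lits):
    """Naive pairwise encoding: for every pair of literals, at least one is false."""
    return [[-a, -b] for a, b in combinations(lits, 2)]

def at_least_one(lits):
    """At-least-one is a single clause."""
    return [list(lits)]

# "Exactly one of x1..x4 is true" as a list of CNF clauses (DIMACS-style integers).
clauses = at_least_one([1, 2, 3, 4]) + at_most_one([1, 2, 3, 4])
print(clauses)
```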

Relevance: 100.00%

Abstract:

Some basics of combinatorial block design are combined with certain constraint satisfaction problems of interest to the satisfiability community. The paper shows how such combinations lead to satisfiability problems and demonstrates empirically that these are some of the smallest very hard satisfiability problems ever constructed. Partially balanced (0,1) designs (PB01Ds) are introduced as an extension of balanced incomplete block designs (BIBDs) and (0,1) designs. In addition, (0,1) difference sets are introduced as an extension of certain cyclic difference sets. Constructions based on (0,1) difference sets enable the generation of PB01Ds over a much wider range of parameters than is possible for BIBDs. Building upon previous work of Spence, it is shown how PB01Ds lead to small, very hard, unsatisfiable formulas. A new three-dimensional form of combinatorial block design is introduced, leading to small, very hard, satisfiable formulas. The methods are validated on solvers that performed well in the SAT 2009 and earlier competitions.

Relevance: 100.00%

Abstract:

The satisfiability problem is known to be NP-complete; therefore, there should be relatively small problem instances that take a very long time to solve. However, most of the smaller benchmarks that were once thought challenging, especially the satisfiable ones, can be processed quickly by modern SAT-solvers. We describe and make available a generator that produces both unsatisfiable and, more significantly, satisfiable formulae that take longer to solve than any others known. At the two most recent international SAT Competitions, the smallest unsolved benchmarks were created by this generator. We analyze the results of all solvers in the most recent competition when applied to these benchmarks and also present our own more focused experiments.

Relevance: 100.00%

Abstract:

A distributed system is a collection of networked autonomous processing units which must work in a cooperative manner. Currently, large-scale distributed systems, such as various telecommunication and computer networks, are abundant and used in a multitude of tasks. The field of distributed computing studies what can be computed efficiently in such systems. Distributed systems are usually modelled as graphs where nodes represent the processors and edges denote communication links between processors.

This thesis concentrates on the computational complexity of the distributed graph colouring problem. The objective of the graph colouring problem is to assign a colour to each node in such a way that no two nodes connected by an edge share the same colour. In particular, it is often desirable to use only a small number of colours. This task is a fundamental symmetry-breaking primitive in various distributed algorithms. A graph that has been coloured in this manner using at most k different colours is said to be k-coloured.

This work examines the synchronous message-passing model of distributed computation: every node runs the same algorithm, and the system operates in discrete synchronous communication rounds. During each round, a node can communicate with its neighbours and perform local computation. In this model, the time complexity of a problem is the number of synchronous communication rounds required to solve the problem.

It is known that 3-colouring any k-coloured directed cycle requires at least ½(log* k - 3) communication rounds and is possible in ½(log* k + 7) communication rounds for all k ≥ 3. This work shows that for any k ≥ 3, colouring a k-coloured directed cycle with at most three colours is possible in ½(log* k + 3) rounds. In contrast, it is also shown that for some values of k, colouring a directed cycle with at most three colours requires at least ½(log* k + 1) communication rounds. Furthermore, in the case of directed rooted trees, reducing a k-colouring to a 3-colouring requires at least log* k + 1 rounds for some values of k and is possible in log* k + 3 rounds for all k ≥ 3.

The new positive and negative results are derived using computational methods, as the existence of distributed colouring algorithms corresponds to the colourability of so-called neighbourhood graphs. The colourability of these graphs is analysed using Boolean satisfiability (SAT) solvers. Finally, this thesis shows that similar methods are applicable in capturing the existence of distributed algorithms for other graph problems, such as the maximal matching problem.
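
As a generic illustration of checking graph colourability with a SAT solver, the following Python sketch emits a standard k-colouring encoding in DIMACS CNF form. This is not the thesis's neighbourhood-graph construction, and the helper names are invented for the example; the printed formula can be piped into any DIMACS-compatible SAT solver, and it is satisfiable exactly when the graph is k-colourable.

```python
def colouring_cnf(n_vertices, edges, k):
    """Encode k-colourability of a graph as CNF clauses over DIMACS variables.
    Variable var(v, c) is true iff vertex v receives colour c."""
    var = lambda v, c: v * k + c + 1  # 1-based DIMACS variable numbering
    clauses = []
    for v in range(n_vertices):
        # Each vertex gets at least one colour...
        clauses.append([var(v, c) for c in range(k)])
        # ...and at most one colour (pairwise encoding).
        for c1 in range(k):
            for c2 in range(c1 + 1, k):
                clauses.append([-var(v, c1), -var(v, c2)])
    # Adjacent vertices must receive different colours.
    for (u, v) in edges:
        for c in range(k):
            clauses.append([-var(u, c), -var(v, c)])
    return n_vertices * k, clauses

def to_dimacs(n_vars, clauses):
    """Render the clause list in DIMACS CNF format."""
    lines = [f"p cnf {n_vars} {len(clauses)}"]
    lines += [" ".join(map(str, cl)) + " 0" for cl in clauses]
    return "\n".join(lines)

# 3-colourability of a 4-cycle, ready for any DIMACS SAT solver.
n_vars, clauses = colouring_cnf(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 3)
print(to_dimacs(n_vars, clauses))
```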

Relevance: 100.00%

Abstract:

Contemporary integrated circuits are designed and manufactured in a globalized environment, leading to concerns of piracy, overproduction and counterfeiting. One class of techniques to combat these threats is circuit obfuscation, which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality in order to increase the complexity and cost of reverse engineering. Most existing circuit obfuscation methods are based on the insertion of additional logic (called "key gates") or on camouflaging existing gates in order to make it difficult for a malicious user to obtain the complete layout information without extensive computation to determine key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, he or she can use advanced logic analysis and circuit simulation tools and Boolean SAT solvers to reveal the unknown gate-level information without exhaustively trying all input vectors, thus bringing down the complexity of reverse engineering. To counter this problem, some 'provably secure' logic encryption algorithms that emphasize methodical selection of camouflaged gates have been proposed in the literature [1,2,3].

The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don't-care conditions. We also present a proof of concept of a new functional or logic obfuscation technique that not only conceals but modifies the circuit functionality in addition to the gate-level description, and that can be implemented automatically during the design process. Our layout obfuscation technique utilizes don't-care conditions (namely, observability and satisfiability don't cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification. Here, camouflaging or obfuscating a gate means replacing the candidate gate with a 4-to-1 multiplexer, which can be configured to perform all possible 2-input/1-output functions, as proposed by Bao et al. [4] (a small illustrative sketch of this primitive follows the references below). It is important to emphasize that our approach not only obfuscates but alters sub-circuit-level functionality in an attempt to make IP piracy difficult.

The choice of gates to obfuscate determines the effort required to reverse engineer or brute-force the design. As such, we propose a method of camouflaged-gate selection based on the intersection of output logic cones. By choosing these candidate gates methodically, the complexity of reverse engineering can be made exponential, making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms to maximize the reverse-engineering complexity based on don't-care-based obfuscation and methodical gate selection. Thus, the goal of protecting the design IP from malicious end users is achieved. It also becomes significantly harder for rogue elements in the supply chain to use, copy or replicate the same design with a different logic.

We analyze the reverse-engineering complexity by applying our obfuscation algorithm to the ISCAS-85 benchmarks. Our experimental results indicate that significant reverse-engineering complexity can be achieved at minimal design overhead (average area overhead for the proposed layout obfuscation methods is 5.51% and average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future.

References:
[1] R. Chakraborty and S. Bhunia, "HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493-1502, 2009.
[2] J. A. Roy, F. Koushanfar, and I. L. Markov, "EPIC: Ending Piracy of Integrated Circuits," in Design, Automation and Test in Europe (DATE), 2008, pp. 1069-1074.
[3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, "Security Analysis of Integrated Circuit Camouflaging," in ACM Conference on Computer and Communications Security, 2013.
[4] B. Liu and B. Wang, "Embedded Reconfigurable Logic for ASIC Design Obfuscation against Supply Chain Attacks," in Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1-6.
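
To make the camouflaging primitive concrete: a 4-to-1 multiplexer whose select lines are the gate's two original inputs and whose data inputs are configuration bits acts as a 2-input lookup table, so some configuration exists for every 2-input/1-output function. A minimal illustrative sketch (not tied to the authors' tool flow; names are invented):

```python
from itertools import product

def mux4x1(config, a, b):
    """4-to-1 MUX used as a camouflaged 2-input gate: the original inputs
    (a, b) drive the select lines and the 4 config bits choose the output."""
    return config[(a << 1) | b]

# Each 2-input Boolean function corresponds to exactly one 4-bit configuration.
nand_cfg = (1, 1, 1, 0)   # outputs for (a, b) = 00, 01, 10, 11
xor_cfg = (0, 1, 1, 0)

for a, b in product((0, 1), repeat=2):
    assert mux4x1(nand_cfg, a, b) == 1 - (a & b)  # behaves as NAND
    assert mux4x1(xor_cfg, a, b) == a ^ b         # behaves as XOR
print("distinct configurations (= 2-input functions):",
      len(set(product((0, 1), repeat=4))))
```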

Relevance: 100.00%

Abstract:

A model has been developed to predict the erosive wear behaviour of elastomers under conditions of glancing impact by small hard particles. Previous work has shown the erosive wear mechanism of elastomers under these conditions to be similar in nature to that of abrasive wear by a sharp blade. The model presented here was developed from the model of Southern and Thomas for sliding abrasion, by combining their treatment of the growth of surface cracks with a model for particle impact in which the force-displacement relationship for an idealized flat-ended punch on a semi-infinite elastic solid was assumed. In this way an expression for the erosive wear rate was developed and compared with experimental measurements of wear rate for natural rubber, styrene-butadiene rubber and a highly crosslinked polybutadiene rubber. Good qualitative agreement was found between the predictions of the model and the experimental measurements. The variation of erosion rate with impact velocity, impact angle, particle size, elastic modulus of the material, coefficient of friction and fatigue properties was well accounted for. Quantitative agreement was less good, and the effects of erosive particle shape could not be accounted for. The reasons for these discrepancies are discussed. © 1992 IOP Publishing Ltd.
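
For reference, the classical contact-mechanics relation for a rigid flat-ended cylindrical punch of radius a pressed to a depth δ into an elastic half-space is quoted below; this is the standard result the abstract appears to allude to, not necessarily the exact form used in the paper.

```latex
F = 2 a E^{*} \delta , \qquad E^{*} = \frac{E}{1 - \nu^{2}}
```

Here E is the Young's modulus and ν the Poisson's ratio of the half-space.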

Relevance: 100.00%

Abstract:

Cryptography has always played a primary role in the history of humankind, from its earliest days to the present, and the period we live in is no exception. Today, many actions performed even out of habit (banking operations, remotely unlocking a car, logging into Facebook, etc.) conceal the constant presence of sophisticated cryptographic systems. Precisely because of this, it is important that the algorithms in use be certified in some way as reasonably secure, and that research in this field proceed constantly, both in terms of possible new exploits against the algorithms in use and by introducing new and increasingly complex security systems. This thesis proposes a possible implementation of a particular type of cryptanalytic attack, introduced in 2000 by two researchers at the "La Sapienza" University of Rome and known as "Logical Cryptanalysis". The algorithm the work focuses on is the Data Encryption Standard (DES), a tough cryptographic standard that fell out of use in 1999 because of its small key size, although it remains algebraically unbroken to this day. The text is structured as follows: the first chapter gives a brief description of DES and its history, introducing the fundamental concepts used throughout the dissertation. The second chapter introduces Logical Cryptanalysis and provides a definition of it, touching on the mathematical concepts needed to understand the following chapters. Chapter 3 presents the first of the two pieces of software developed to make this cryptanalytic attack feasible: a library, written in Java, for representing and manipulating logical formulas. The fourth and final chapter describes the program that, using the library described in Chapter 3, automatically produces a set of logical propositions semantically equivalent to DES, whose satisfiability check, carried out with dedicated tools (SAT solvers), amounts to a known-plaintext attack on that algorithm.
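
As a toy illustration of the idea behind logical cryptanalysis (a bitwise-XOR "cipher" rather than anything resembling the full DES encoding; all function names are made up for the example), the following Python sketch emits a DIMACS CNF whose satisfying assignments reveal the key bits given a known plaintext/ciphertext pair:

```python
def xor_clauses(a, b, y):
    """Tseitin-style CNF clauses enforcing y <-> (a XOR b)."""
    return [[-a, -b, -y], [a, b, -y], [a, -b, y], [-a, b, y]]

def known_plaintext_cnf(plain_bits, cipher_bits):
    """Toy cipher c_i = p_i XOR k_i encoded as CNF, with the known plaintext
    and ciphertext fixed by unit clauses. Variables (1-based DIMACS):
    p_1..p_n, then k_1..k_n, then c_1..c_n."""
    n = len(plain_bits)
    p = lambda i: i + 1
    k = lambda i: n + i + 1
    c = lambda i: 2 * n + i + 1
    clauses = []
    for i in range(n):
        clauses += xor_clauses(p(i), k(i), c(i))
        clauses.append([p(i)] if plain_bits[i] else [-p(i)])   # known plaintext
        clauses.append([c(i)] if cipher_bits[i] else [-c(i)])  # known ciphertext
    return 3 * n, clauses

# Any model a SAT solver returns fixes k_i = p_i XOR c_i, i.e. recovers the key.
n_vars, clauses = known_plaintext_cnf([1, 0, 1, 1], [0, 0, 1, 0])
print(f"p cnf {n_vars} {len(clauses)}")
for cl in clauses:
    print(" ".join(map(str, cl)), 0)
```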

Relevance: 100.00%

Abstract:

Attempts to strengthen a chromium-modified titanium trialuminide by a combination of grain-size refinement and dispersoid strengthening led to a new means to synthesize such materials. This Reactive Mechanical Alloying/Milling process uses in situ reactions between the metallic powders and elements from a process control agent and/or a gaseous environment to assemble a dispersed small hard particle phase within the matrix by a bottom-up approach. In the current research, milled powders of the trialuminide alloy along with titanium carbide were produced. The amount of the carbide can be varied widely with simple processing changes, and in this case the milling process created trialuminide grain sizes and carbide particles that are the smallest known from such a process. Characterization of these materials required the development of X-ray diffraction methods to determine particle sizes, by deconvoluting and synthesizing components of the complex multiphase diffraction patterns, and to carry out whole-pattern analysis of the diffuse scattering that developed from larger-than-usual, highly defective grain boundary regions. These regions provide an important mass-transport capability during processing and not only facilitate the alloy development but also add to the understanding of the mechanical alloying process. Consolidation of the milled powder, which consisted of small crystallites of the alloy and dispersed carbide particles two nanometers in size, formed a unique, somewhat coarsened microstructure, producing an ultra-high-strength solid material composed of the chromium-modified titanium trialuminide alloy matrix with small platelets of the complex carbides Ti2AlC and Ti3AlC2. This synthesis process provides the unique ability to nano-engineer a wide variety of composite materials, or special alloys, and has been shown to extend to a wide variety of metallic materials.

Relevance: 100.00%

Abstract:

This paper describes a new technique, referred to as watched subgraphs, which improves the performance of BBMC, a leading state-of-the-art exact solver for the maximum clique problem (MCP). It is based on the watched literals employed by modern SAT solvers for Boolean constraint propagation. In efficient SAT algorithms, a list of clauses is kept for each literal (the clauses are said to watch the literal) so that only the clauses in that list are checked for constraint propagation when a (watched) literal is assigned during search. BBMC encodes vertex sets as bit strings, with each bit block, the size of a CPU register word, representing a subset of vertices (and the corresponding induced subgraph). The paper proposes watching two subgraphs of critical sets during MCP search in order to compute a number of basic operations efficiently. Reported results validate the approach as the size and density of problem instances rise, while comparable performance is achieved in the general case.
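
To illustrate the kind of bit-string encoding of vertex sets that BBMC-style solvers build on, here is a minimal Python sketch using integers as bitsets. This is an analogy only: the actual solver works over fixed-size register-width bit blocks, and the class and method names here are invented.

```python
class BitGraph:
    """Vertex sets and neighbourhoods as bitsets (Python ints), mirroring the
    bit-parallel encoding used by BBMC-style maximum clique solvers."""

    def __init__(self, n):
        self.n = n
        self.adj = [0] * n  # adj[v] has bit u set iff {u, v} is an edge

    def add_edge(self, u, v):
        self.adj[u] |= 1 << v
        self.adj[v] |= 1 << u

    def candidates(self, current, v):
        """Restrict the candidate set to neighbours of v with a single AND,
        the core bit-parallel step of the clique search."""
        return current & self.adj[v]

g = BitGraph(5)
for u, v in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    g.add_edge(u, v)

all_vertices = (1 << g.n) - 1
cand = g.candidates(all_vertices, 2)                 # neighbours of vertex 2
print([u for u in range(g.n) if cand >> u & 1])      # -> [0, 1, 3]
```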

Relevance: 100.00%

Abstract:

The wild mungbean, Vigna radiata ssp. sublobata, is an 'old world' tropical species indigenous throughout the better-watered areas of northern Australia. Variation among 115 accessions, mainly from Australia, West Timor, and Papua New Guinea, was evaluated for several diverse traits. The plants were cultivated in the field at two sowing dates, at both a tropical and a subtropical location, with six accessions from India and a mungbean cultivar for comparison. Substantial variation was identified for traits of potential agronomic, adaptive, or taxonomic interest. For some traits, such as phenology, the variation appeared to be systematic, with a plausible underlying physiological and/or adaptive explanation. Among accessions, wild-type traits such as prostrate habit, more gracile morphology, twining form, and small hard seeds tended to be associated. There was a general geographic trend for lines collected from locations more remote from where mungbean has historically been cultivated to show greater expression of wild-type traits, with few 'traits of domestication' evident in the Australian accessions. Some of the identified variation, e.g. higher seed protein content, hardseededness, and putative disease resistance, may be of value in mungbean variety improvement. A more targeted evaluation of the collection would likely reveal other adaptations, especially tolerance to environmental stresses. As such, the wild accessions are a potentially valuable, if under-utilised, germplasm resource.

Relevance: 60.00%

Abstract:

The Ternary Tree Solver (tts) is a complete solver for propositional satisfiability which was designed to have good performance on the most difficult small instances. It uses a static ternary tree data structure to represent the simplified proposition under all permissible partial assignments, and it maintains a database of derived propositions known to be unsatisfiable. In the SAT 2007 competition, version 4.0 won the silver medal in the handmade, speciality UNSAT solvers category and was the top qualifier for the second stage on handmade benchmarks, solving 11 benchmarks that were not solved by any other entrant. We describe the methods used by the solver and analyse the competition Phase 1 results on small benchmarks. We propose a first version of a comprehensive suite of small difficult satisfiability benchmarks (sdsb) and compare the worst-case performance of the competition medallists on these benchmarks.

Relevance: 30.00%

Abstract:

In this chapter, we draw out the relevant themes from a range of critical scholarship in the small body of digital media and software studies work that has focused on the politics of Twitter data and the sociotechnical means by which access is regulated. We highlight in particular the contested relationships between social media research (in both academic and non-academic contexts) and the data wholesale, retail, and analytics industries that feed on them. In the second major section of the chapter, we discuss in detail the pragmatic edge of these politics in terms of what kinds of scientific research are and are not possible in the current political economy of Twitter data access. Finally, at the end of the chapter, we return to the much broader implications of these issues for the politics of knowledge, demonstrating how the apparently microscopic level at which the Twitter API mediates access to Twitter data actually inscribes and influences the macro level of the global political economy of science itself, by re-inscribing institutional and traditional disciplinary privilege. We conclude with some speculations about future developments in data rights and data philanthropy that may at least mitigate some of these negative impacts.

Relevance: 30.00%

Abstract:

The standard method for deciding bit-vector constraints is via eager reduction to propositional logic. This is usually done after first applying powerful rewrite techniques. While often efficient in practice, this method does not scale on problems for which top-level rewrites cannot reduce the problem size sufficiently. A lazy solver can target such problems by performing many satisfiability checks, each of which reasons about only a small subset of the problem. In addition, the lazy approach enables a wide range of optimization techniques that are not available to the eager approach. In this paper we describe the architecture and features of our lazy solver (LBV). We provide a comparative analysis of the eager and lazy approaches and show how they are complementary in terms of the types of problems they can efficiently solve. For this reason, we propose a portfolio approach that runs a lazy and an eager solver in parallel. Our empirical evaluation shows that the lazy solver can solve problems none of the eager solvers can, and that the portfolio solver outperforms the other solvers both in the total number of problems solved and in the time taken to solve them.
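
The portfolio idea itself is straightforward to sketch: run both back-ends on the same query in parallel and take whichever answer arrives first. A minimal illustrative sketch in Python, where the two solver functions are stand-ins with artificial delays rather than the actual eager and lazy implementations:

```python
import time
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def eager_solve(problem):
    """Stand-in for an eager bit-blasting solver (placeholder, not a real tool)."""
    time.sleep(0.2)
    return "eager", "sat"

def lazy_solve(problem):
    """Stand-in for a lazy solver that reasons over small subsets of the problem."""
    time.sleep(0.1)
    return "lazy", "sat"

def portfolio(problem, solvers):
    """Run all solvers in parallel and return the first answer to arrive."""
    with ThreadPoolExecutor(max_workers=len(solvers)) as pool:
        futures = [pool.submit(s, problem) for s in solvers]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()  # best effort; threads already running are not interrupted
        return done.pop().result()

print(portfolio("query.smt2", [eager_solve, lazy_solve]))  # -> ('lazy', 'sat')
```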

Relevance: 30.00%

Abstract:

This series of research vignettes is aimed at sharing current and interesting research findings from our team of international Entrepreneurship researchers. This vignette, written by Professor Beth Webster at Swinburne University of Technology, examines how innovation in small and medium-sized businesses affects their productivity.