196 results for Polynomially solvable
Abstract:
LEX is a stream cipher that progressed to Phase 3 of the eSTREAM stream cipher project. In this paper, we show that the security of LEX against algebraic attacks relies on a small equation system not being solvable faster than exhaustive search. We use the byte leakage in LEX to construct a system of 21 equations in 17 variables. This is very close to the requirement for an efficient attack, i.e. a system containing 16 variables. The system requires only 36 bytes of keystream, which is very low.
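One way to read the 16-variable threshold, assuming byte-valued unknowns and the 128-bit LEX key (an assumption; the abstract does not state the variable size): naively guessing every variable of the 17-variable system already costs more than exhaustive key search, while 16 byte variables match the key size exactly.

```latex
% Guess-all-variables cost vs. exhaustive 128-bit key search
% (assuming byte-valued variables):
(2^{8})^{17} = 2^{136} \;>\; 2^{128} = (2^{8})^{16}
```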
Abstract:
This paper examines the algebraic cryptanalysis of small scale variants of LEX-BES. LEX-BES is a stream cipher based on the Advanced Encryption Standard (AES) block cipher. LEX is a generic method for constructing a stream cipher from a block cipher, initially introduced by Biryukov at eSTREAM, the ECRYPT Stream Cipher project, in 2005. The Big Encryption System (BES) is a block cipher introduced at CRYPTO 2002 which facilitates the algebraic analysis of the AES block cipher. In this paper, experiments were conducted to find solutions of the equation system describing small scale LEX-BES using Gröbner Basis computations. This follows an approach similar to the work by Cid, Murphy and Robshaw at FSE 2005, which investigated algebraic cryptanalysis of small scale variants of the BES. The difference between LEX-BES and BES is that, owing to the way the keystream is extracted, the number of unknowns in the LEX-BES equations is smaller than in BES. As far as the author knows, this is the first attempt at creating solvable equation systems for stream ciphers based on the LEX method using Gröbner Basis computations.
Abstract:
This work examines the algebraic cryptanalysis of small scale variants of LEX-BES. LEX-BES is a stream cipher based on the Advanced Encryption Standard (AES) block cipher. LEX is a generic method for constructing a stream cipher from a block cipher, initially introduced by Biryukov at eSTREAM, the ECRYPT Stream Cipher project, in 2005. The Big Encryption System (BES) is a block cipher introduced at CRYPTO 2002 which facilitates the algebraic analysis of the AES block cipher. In this article, experiments were conducted to find solutions of equation systems describing small scale LEX-BES using Gröbner Basis computations. This follows an approach similar to the work by Cid, Murphy and Robshaw at FSE 2005, which investigated algebraic cryptanalysis of small scale variants of the BES. The difference between LEX-BES and BES is that, owing to the way the keystream is extracted, the number of unknowns in the LEX-BES equations is smaller than in BES. As far as the authors know, this is the first attempt at creating solvable equation systems for stream ciphers based on the LEX method using Gröbner Basis computations.
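As an illustration of the general technique (a toy system over the rationals, not the paper's LEX-BES equations, which live over GF(2^8)), a lexicographic-order Gröbner basis triangularizes a polynomial system so it can be solved by back-substitution; a minimal SymPy sketch:

```python
# Minimal illustration of solving a polynomial system via a
# lexicographic Groebner basis (toy system, not the LEX-BES equations).
from sympy import symbols, groebner, solve

x, y = symbols('x y')
system = [x**2 + y**2 - 1, x - y]      # toy equations

# A lex-order Groebner basis is "triangular": the last polynomial
# involves only y, so the system can be solved variable by variable.
G = groebner(system, x, y, order='lex')
print(G)                 # GroebnerBasis([x - y, 2*y**2 - 1], ...)
print(solve(list(G), [x, y]))
```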
Abstract:
It is a basis of Darwinian evolution that the microevolutionary mechanisms that can be studied in the present are sufficient to account for macroevolution. However, this idea needs to be tested explicitly, as highlighted here by the example of the superseding of dinosaurs and pterosaurs by birds and placental mammals near the Cretaceous/Tertiary boundary approximately 65 million years ago. A major problem for testing the sufficiency of microevolutionary processes is that independent ideas (such as the existence of an extraterrestrial impact, and the extinction of dinosaurs) were linked without the evidence for each idea being evaluated separately. Here, we suggest and discuss five testable models for the times and divergences of modern mammals and birds. Determining which model best represents these events will enable the role of microevolutionary mechanisms to be evaluated. The question of the sufficiency of microevolutionary processes for macroevolution is solvable, and available evidence supports an important role for biological processes in the initial decline of dinosaurs and pterosaurs.
Abstract:
When a puzzle game is created, its design parameters must be chosen to allow solvable and interesting challenges to be created for the player. We investigate the use of random sampling as a computationally inexpensive means of automated game analysis, to evaluate the BoxOff family of puzzle games. This analysis reveals useful insights into the game, such as the surprising fact that almost 100% of randomly generated challenges have a solution, but less than 10% will be solved using strictly random play, validating the inventor’s design choices. We show the 1D game to be trivial and the 3D game to be viable.
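A minimal sketch of the random-sampling methodology on a 1D BoxOff-like puzzle. The rule used here, that two equal-colored cells may be cleared together when only empty cells separate them, is an assumption for illustration, not the inventor's exact ruleset; the sampling loop (estimate the fraction of solvable challenges vs. the fraction cleared by strictly random play) mirrors the analysis described above.

```python
# Random-sampling analysis of a 1D BoxOff-like puzzle (assumed rule:
# two equal-colored cells may be removed together when only empty
# cells lie between them; a challenge is solved if the board empties).
import random
from functools import lru_cache

def random_challenge(pairs_per_color=2, colors=3):
    cells = [c for c in range(colors) for _ in range(2 * pairs_per_color)]
    random.shuffle(cells)
    return tuple(cells)

def legal_moves(state):
    moves, prev = [], None           # prev = index of last occupied cell
    for i, c in enumerate(state):
        if c is None:
            continue
        if prev is not None and state[prev] == c:
            moves.append((prev, i))  # same color, only gaps in between
        prev = i
    return moves

def play(state, move):
    s = list(state)
    s[move[0]] = s[move[1]] = None
    return tuple(s)

@lru_cache(maxsize=None)
def solvable(state):                 # exhaustive search with memoization
    if all(c is None for c in state):
        return True
    return any(solvable(play(state, m)) for m in legal_moves(state))

def random_playout(state):           # strictly random play
    while (moves := legal_moves(state)):
        state = play(state, random.choice(moves))
    return all(c is None for c in state)

if __name__ == "__main__":
    random.seed(0)
    challenges = [random_challenge() for _ in range(1000)]
    print("solvable fraction:   ", sum(map(solvable, challenges)) / 1000)
    print("random-play fraction:", sum(map(random_playout, challenges)) / 1000)
```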
Abstract:
We propose an exactly solvable model for the two-state curve-crossing problem. Our model assumes the coupling to be a delta function. It is used to calculate the effect of curve crossing on the electronic absorption spectrum and the resonance Raman excitation profile.
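Schematically, a delta-function coupling replaces the off-diagonal element of the two-state Hamiltonian by a point interaction. The notation below is generic and assumed for illustration (K is the coupling strength, x_c the crossing point, V_1 and V_2 the two diabatic potentials), not the paper's own symbols:

```latex
% Two-state curve crossing with a delta-function coupling (schematic):
\begin{pmatrix}
  -\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}} + V_{1}(x) & K\,\delta(x - x_{c})\\[4pt]
  K\,\delta(x - x_{c}) & -\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}} + V_{2}(x)
\end{pmatrix}
\begin{pmatrix}\psi_{1}\\ \psi_{2}\end{pmatrix}
= E \begin{pmatrix}\psi_{1}\\ \psi_{2}\end{pmatrix}
```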
Abstract:
A decision-theoretic framework is proposed for designing sequential dose-finding trials with multiple outcomes. The optimal strategy is solvable theoretically via backward induction. However, for dose-finding studies involving k doses, the computational complexity is the same as that of a bandit problem with k dependent arms, which is computationally prohibitive. We therefore provide two computationally compromised strategies, which are of practical interest because the computational complexity is greatly reduced: one is closely related to the continual reassessment method (CRM), and the other improves on CRM and approximates the optimal strategy more closely. In particular, we present the framework for phase I/II trials with multiple outcomes. Applications to a pediatric HIV trial and a cancer chemotherapy trial are given to illustrate the proposed approach. Simulation results for the two trials show that the computationally compromised strategy can perform well and appears to be ethical for allocating patients. The proposed framework can provide a better approximation to the optimal strategy if more extensive computing is available.
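For reference, backward induction on a finite-horizon decision problem computes value functions from the final stage backwards. This is a generic sketch of the technique only; the states, transition model p, and utility u below are hypothetical placeholders, not the paper's trial model:

```python
# Generic finite-horizon backward induction (dynamic programming).
# The toy model below is hypothetical, not the paper's dose-finding model.

def backward_induction(states, actions, horizon, p, u):
    """V_T(s) = 0;  V_t(s) = max_a sum_s' p(s'|s,a) * (u(s,a,s') + V_{t+1}(s'))."""
    V = {s: 0.0 for s in states}
    policy = []
    for t in reversed(range(horizon)):
        Q = {s: {a: sum(p(s, a, s2) * (u(s, a, s2) + V[s2]) for s2 in states)
                 for a in actions}
             for s in states}
        policy.insert(0, {s: max(Q[s], key=Q[s].get) for s in states})
        V = {s: max(Q[s].values()) for s in states}
    return V, policy

# Toy example: two health states, two dose levels (hypothetical numbers).
states, actions = ["stable", "toxic"], ["low", "high"]
p = lambda s, a, s2: {"low": 0.9, "high": 0.6}[a] if s2 == "stable" else \
                     1 - {"low": 0.9, "high": 0.6}[a]
u = lambda s, a, s2: {"low": 0.3, "high": 1.0}[a] if s2 == "stable" else -1.0
V, policy = backward_induction(states, actions, horizon=3, p=p, u=u)
print(V, policy[0])
```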
Abstract:
The Finite Element Method (FEM) has made a number of otherwise intractable problems solvable. An important aspect of achieving an economical and accurate solution through FEM is matching the formulation and the computational organisation to the problem. This was realised forcefully in the present case of the solution of a class of moving contact boundary value problems of fastener joints. This paper deals with the problem of changing contact at the pin-hole interface of a fastener joint. Due to moving contact, the stresses and displacements are nonlinear with load. This would, in general, need an iterative-incremental approach for solution. However, by posing the problem in an inverse way, a solution is sought for obtaining the loads that suit a given contact configuration. Numerical results are given for typical isotropic and composite plates with rigid pins. Two cases of loading are considered: (i) load applied only at the edges of the plate and (ii) load applied at the pin and reacted at a part of the edge of the plate. Load-contact relationships, compliance and stress patterns are investigated. This paper clearly demonstrates the simplification achieved by a suitable formulation of the problem. The results are of significance to the design and analysis of fastener joints.
Abstract:
An axis-parallel k-dimensional box is a Cartesian product R_1 × R_2 × ... × R_k, where each R_i (1 <= i <= k) is a closed interval [a_i, b_i] on the real line. For a graph G, its boxicity box(G) is the minimum dimension k such that G is representable as the intersection graph of (axis-parallel) boxes in k-dimensional space. The concept of boxicity finds applications in areas such as ecology and operations research. A number of NP-hard problems are either polynomial time solvable or have much better approximation ratios on low boxicity graphs. For example, the max-clique problem is polynomial time solvable on bounded boxicity graphs, and the maximum independent set problem for boxicity d graphs, given a box representation, has a ⌈1 + (1/c) log n⌉^(d-1) approximation ratio for any constant c >= 1 when d >= 2. In most cases, the first step is computing a low dimensional box representation of the given graph. Deciding whether the boxicity of a graph is at most 2 is itself NP-hard. We give an efficient randomized algorithm to construct a box representation of any graph G on n vertices in ⌈(Δ + 2) ln n⌉ dimensions, where Δ is the maximum degree of G. This algorithm implies that box(G) <= ⌈(Δ + 2) ln n⌉ for any graph G. Our bound is tight up to a factor of ln n. We also show that our randomized algorithm can be derandomized to obtain a polynomial time deterministic algorithm. Though our general upper bound is in terms of the maximum degree Δ, we show that for almost all graphs on n vertices the boxicity is O(d_av ln n), where d_av is the average degree.
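The underlying definition is easy to make concrete: a box representation realizes a graph when vertices are boxes and edges are exactly the pairs of boxes whose intervals overlap in every dimension. A minimal sketch with hypothetical example boxes:

```python
# Intersection graph of axis-parallel boxes: each vertex is a box given
# as a list of per-dimension closed intervals; u ~ v iff the boxes
# intersect, i.e. their intervals overlap in every dimension.
from itertools import combinations

def intersect(box_u, box_v):
    return all(max(a1, a2) <= min(b1, b2)
               for (a1, b1), (a2, b2) in zip(box_u, box_v))

def intersection_graph(boxes):
    return {(u, v) for u, v in combinations(range(len(boxes)), 2)
            if intersect(boxes[u], boxes[v])}

# Three boxes in 2 dimensions (hypothetical data): boxes 0 and 1 overlap,
# box 2 is disjoint from both, so the graph has the single edge (0, 1).
boxes = [[(0, 2), (0, 2)], [(1, 3), (1, 3)], [(5, 6), (5, 6)]]
print(intersection_graph(boxes))   # {(0, 1)}
```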
Abstract:
The study examines the personnel training and research activities carried out by the Organization and Methods Division of the Ministry of Finance and how they became part and parcel of the state administration in 1943-1971. The study is a combination of institutional and ideological historical research in the recent history of adult education, using a constructionist approach. Material salient to the study comes from the files of the Organization and Methods Division in the National Archives, parliamentary documents, committee reports, and magazines. The concentrated training and research activities arranged by the Organization and Methods Division became part and parcel of the state administration in the midst of controversial challenges and opportunities. They served to solve social problems which beset the state administration, contextual challenges besetting rationalization measures, and organizational challenges. The activities were also affected by a dependence on decision-makers, administrative units, and civil servants' organizations; by different views on rationalization and the holistic nature of reforms; and by the formal theories that served as resources. The Division chose long-term projects which extended onto the turf of the political decision-makers and administrative units, and which were intended to reform the structures of the state administration and to rationalize the practices of the administrative units. The crucial questions emerged as opposite pairs (a constitutional state vs. the ideology of an administratively governed state, a system of national boards vs. a system of government through ministries, efficiency of work vs. pleasantness of work, centralized vs. decentralized rationalization activities) which were not solvable problems but impossible questions with no ultimate answers.

The aim and intent of the rationalization of the state administration (the reform of the central, provincial, and local governments) was to facilitate integrated management and to achieve a greater amount of work by approaching management procedures scientifically and by clarifying administrative instances and their responsibilities in relation to each other. The means resorted to were organizational studies and committee work. In the rationalization of office work and finance control, the idea was to effect savings in administrative costs and to pare down those costs, as well as to rationalize and heighten those functions, by developing the institution of work study practitioners in order to coordinate employer and employee relationships and benefits (the training of work study practitioners, work study, and a two-tier work study practitioner organization). A major part of the training meant teaching and implementing leadership skills in practice, which, in turn, meant that the learning environment was the genuine work community and efforts to change it. In office rationalization, the solution to regulating the relations between the employer and the employees was the co-existence of technical and biological rationalization, human resource administration, and the accounting and planning systems at the turn of the 1960s and 1970s. The former were based on the schools of scientific management and human relations, the latter on system thinking, which was a combination of the former two.
In the rationalization of the state administration, efforts were made to find solutions that would stabilize management ideologies and arrange the relationships of administrative systems within administrative science, drawing, among other things, on the Hoover Committee and Simon's decision-making theory and, in the 1960s, on system thinking. Despite the development-related vocabulary, the practical work was advanced rationalization. It was said that the practical activities of both the state administration and the administrative units depended on professional managers who saw to production results and human relations. The pedagogic experts hired to develop training came up with a training system based on the training-technological model, in which training was made a function of its own. The State Training Center was established, and the training office of the Organization and Methods Division became the leader and coordinator of personnel training.
Abstract:
The domination and Hamilton circuit problems are of interest both in algorithm design and complexity theory. The domination problem has applications in facility location, and the Hamilton circuit problem has applications in routing problems in communications and operations research. The problem of deciding whether a graph G has a dominating set of cardinality at most k, and the problem of determining whether G has a Hamilton circuit, are NP-complete. Polynomial time algorithms are, however, available for a large number of restricted classes. A motivation for the study of these algorithms is that they not only give insight into the characterization of these classes but also require a variety of algorithmic techniques and data structures; so the search for efficient algorithms for these problems on many classes still continues. A class of perfect graphs which is practically important and mathematically interesting is the class of permutation graphs. The domination problem is polynomial time solvable on permutation graphs. The algorithms already available have time complexity O(n^2) or more, and space complexity O(n^2), on these graphs. The Hamilton circuit problem is open for this class. We present a simple O(n) time and O(n) space algorithm for the domination problem on permutation graphs. Unlike the existing algorithms, we use the concept of the geometric representation of permutation graphs. Further, exploiting this geometric notion, we develop an O(n^2) time and O(n) space algorithm for the Hamilton circuit problem.
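For concreteness, a permutation graph on vertices 0..n-1 has an edge between two vertices exactly when the permutation reverses their order. A minimal sketch that builds the graph and brute-forces a minimum dominating set; the exponential check is for illustration only, not the O(n) algorithm of the abstract, and the example permutation is hypothetical:

```python
# Build a permutation graph and brute-force a minimum dominating set.
# The brute-force search is exponential in n and serves only to
# illustrate the problem, unlike the linear-time algorithm above.
from itertools import combinations

def permutation_graph(pi):
    """Vertices 0..n-1; u ~ v iff pi lists them in reversed order."""
    pos = {v: k for k, v in enumerate(pi)}     # pos[v] = index of v in pi
    n = len(pi)
    return {v: {u for u in range(n) if u != v and
                (u - v) * (pos[u] - pos[v]) < 0} for v in range(n)}

def min_dominating_set(adj):
    n = len(adj)
    for size in range(1, n + 1):
        for D in combinations(range(n), size):
            # D dominates G iff the union of closed neighborhoods is V.
            if set().union(*({v} | adj[v] for v in D)) == set(range(n)):
                return set(D)

adj = permutation_graph([2, 0, 4, 1, 3])       # hypothetical permutation
print(min_dominating_set(adj))                 # e.g. {2, 4}
```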
Abstract:
A spanning tree T of a graph G is said to be a tree t-spanner if the distance between any two vertices in T is at most t times their distance in G. A graph that has a tree t-spanner is called a tree t-spanner admissible graph. The problem of deciding whether a graph is tree t-spanner admissible is NP-complete for any fixed t >= 4 and is linearly solvable for t <= 2; the case t = 3 still remains open. A chordal graph is called a 2-sep chordal graph if all of its minimal a-b vertex separators, for every pair of non-adjacent vertices a and b, are of size two. It is known that not all 2-sep chordal graphs admit tree 3-spanners. This paper presents a structural characterization and a linear time recognition algorithm for tree 3-spanner admissible 2-sep chordal graphs. Finally, a linear time algorithm to construct a tree 3-spanner of a tree 3-spanner admissible 2-sep chordal graph is proposed.
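The defining condition is directly checkable: compute all-pairs distances in G and in T and compare. A minimal BFS-based sketch with a hypothetical example graph:

```python
# Check the tree t-spanner condition: dist_T(u, v) <= t * dist_G(u, v)
# for all vertex pairs.  Distances via BFS (unweighted graphs).
from collections import deque

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_tree_t_spanner(g_adj, t_adj, t):
    verts = list(g_adj)
    return all(bfs_dist(t_adj, u)[v] <= t * bfs_dist(g_adj, u)[v]
               for u in verts for v in verts if u != v)

# Hypothetical example: the 4-cycle a-b-c-d-a with spanning path a-b-c-d.
g = {'a': 'bd', 'b': 'ac', 'c': 'bd', 'd': 'ca'}
tree = {'a': 'b', 'b': 'ac', 'c': 'bd', 'd': 'c'}
print(is_tree_t_spanner(g, tree, 3))   # True: dist_T(a, d) = 3 <= 3 * 1
```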
Abstract:
We consider the following question: let S_1 and S_2 be two smooth, totally real surfaces in C^2 that contain the origin. If the union of their tangent planes is locally polynomially convex at the origin, then is S_1 ∪ S_2 locally polynomially convex at the origin? If T_0 S_1 ∩ T_0 S_2 = {0}, then it is a folk result that the answer is yes. We discuss an obstruction to the presumed proof, and provide a different approach. When dim_R(T_0 S_1 ∩ T_0 S_2) = 1, we present a geometric condition under which no consistent answer to the above question exists. We then discuss conditions under which we can expect local polynomial convexity.
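For reference, the standard definitions behind the question, stated for a compact set K in C^n:

```latex
% Polynomial hull and (local) polynomial convexity:
\widehat{K} \;=\; \{\, z \in \mathbb{C}^{n} : |p(z)| \le \sup_{w \in K} |p(w)|
  \ \text{for every holomorphic polynomial } p \,\}.
% K is polynomially convex if \widehat{K} = K; a closed set S is locally
% polynomially convex at 0 if S \cap \overline{B(0, r)} is polynomially
% convex for all sufficiently small r > 0.
```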
Abstract:
We discuss a many-body Hamiltonian with two- and three-body interactions in two dimensions introduced recently by Murthy, Bhaduri and Sen. Apart from an analysis of some exact solutions in the many-body system, we analyse in detail the two-body problem, which is completely solvable. We show that the solution of the two-body problem reduces to solving a known differential equation due to Heun. We show that the two-body spectrum becomes remarkably simple for large interaction strengths, and the level structure resembles that of the Landau levels. We also clarify the 'ultraviolet' regularization which is needed to define an inverse-square potential properly and discuss its implications for our model.
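For reference, Heun's general differential equation in its standard form; how its parameters map onto the model's constants is not specified in the abstract:

```latex
% Heun's general equation: regular singular points at z = 0, 1, a, \infty,
% with the Fuchsian constraint \alpha + \beta + 1 = \gamma + \delta + \epsilon.
\frac{d^{2}w}{dz^{2}}
  + \left( \frac{\gamma}{z} + \frac{\delta}{z-1} + \frac{\epsilon}{z-a} \right)
    \frac{dw}{dz}
  + \frac{\alpha\beta\, z - q}{z(z-1)(z-a)}\, w \;=\; 0
```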