981 results for Polynomial time hierarchy
Abstract:
Combinatorial optimization problems have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that these problems cannot be solved in polynomial time. Initially, such solutions were based on heuristics; currently, metaheuristics are preferred for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of a heuristic called "Operon" for constructing the information chains needed to implement transgenetic (evolutionary) algorithms, relying mainly on statistical methodology, namely Cluster Analysis and Principal Component Analysis; and the use of statistical analyses suited to evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to build good-quality dynamic information chains that promote an "intelligent" search of the solution space. The approach is applied to the Traveling Salesman Problem (TSP) through a transgenetic algorithm known as ProtoG. A strategy is also proposed for renewing part of the chromosome population, triggered by a minimum limit on the coefficient of variation of the fitness (adequation) function of the individuals, computed over the population. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms, and a Simulated Annealing algorithm. Three performance analyses of these algorithms are proposed. The first uses Logistic Regression, based on the probability that the algorithm under test finds an optimal solution for a TSP instance. The second uses Survival Analysis, based on the probability distribution of the execution time observed until an optimal solution is reached. The third uses a non-parametric Analysis of Variance, considering the Percent Error of the Solution (PES), i.e., the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted on sixty-one instances of the Euclidean TSP with up to 1,655 cities. The first two experiments deal with tuning four parameters of the ProtoG algorithm in an attempt to improve its performance. The last four evaluate the performance of ProtoG in comparison with the three other algorithms. For these sixty-one instances, statistical tests provide evidence that ProtoG outperforms the three algorithms in fifty instances. In addition, for the thirty-six instances considered in the last three experiments, in which performance was evaluated through the PES, the average PES obtained with ProtoG was below 1% in almost half of the instances, the largest average, 3.52%, being reached for an instance with 1,173 cities. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since it is not rare to find average PES values greater than 10% reported in the literature for instances of this size.
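As a minimal, hedged illustration of the PES metric defined above (the function name and example numbers are assumptions for illustration, not taken from the thesis), the percentage by which a found tour cost exceeds the best known cost can be computed as follows:

```python
def percent_error_of_solution(found_cost: float, best_known_cost: float) -> float:
    """Percent Error of the Solution (PES): percentage by which the solution
    found exceeds the best solution available in the literature."""
    return 100.0 * (found_cost - best_known_cost) / best_known_cost

# Toy usage: a tour of length 10350 against a best known length of 10000
# gives a PES of 3.5%.
print(percent_error_of_solution(10350.0, 10000.0))  # 3.5
```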
Abstract:
The capacitor placement (and replacement) problem for radial distribution networks determines capacitor types, sizes, locations and control schemes. Optimal capacitor placement is a hard combinatorial problem that can be formulated as a mixed integer nonlinear program. Since this is an NP-complete problem (NP: nondeterministic polynomial time), the solution approach uses a combinatorial search algorithm. The paper proposes a hybrid method that draws upon the Tabu Search approach, extended with features taken from other combinatorial approaches such as genetic algorithms and simulated annealing, and from practical heuristic approaches. The proposed method has been tested on a range of networks available in the literature, with superior results regarding both quality and cost of solutions.
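The abstract gives no implementation details; the following is only a minimal, generic sketch of the kind of Tabu Search loop the hybrid method builds on (all names, the neighborhood function and the cost function are hypothetical placeholders, not taken from the paper):

```python
def tabu_search(initial, neighbors, cost, tabu_tenure=10, max_iters=500):
    """Generic Tabu Search: move to the best non-tabu neighbor each iteration,
    keeping recently used moves forbidden for `tabu_tenure` iterations."""
    current = best = initial
    tabu = {}  # move label -> iteration index at which the move becomes allowed again
    for it in range(max_iters):
        candidates = [
            (cost(nxt), move, nxt)
            for move, nxt in neighbors(current)
            if tabu.get(move, 0) <= it or cost(nxt) < cost(best)  # aspiration criterion
        ]
        if not candidates:
            break
        c, move, nxt = min(candidates, key=lambda item: item[0])
        tabu[move] = it + tabu_tenure
        current = nxt
        if c < cost(best):
            best = nxt
    return best

# Toy usage: minimize (x - 7)^2 over the integers, with +/-1 moves.
print(tabu_search(
    initial=0,
    neighbors=lambda x: [("+1", x + 1), ("-1", x - 1)],
    cost=lambda x: (x - 7) ** 2,
))  # 7
```

In an actual capacitor placement setting the solution would encode capacitor sizes and locations, and the cost would combine losses and equipment cost; the sketch only shows the search skeleton.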
Abstract:
We analyze the average performance of a general class of learning algorithms for the nondeterministic polynomial time (NP) complete problem of rule extraction by a binary perceptron. The examples are generated by a rule implemented by a teacher network of similar architecture. A variational approach is used in trying to identify the potential energy that leads to the largest generalization in the thermodynamic limit. We restrict our search to algorithms that always satisfy the binary constraints. A replica symmetric ansatz leads to a learning algorithm which presents a phase transition in violation of an information theoretical bound. Stability analysis shows that this is due to a failure of the replica symmetric ansatz, and the first step of replica symmetry breaking (RSB) is studied. The variational method does not determine a unique potential, but it allows construction of a class with a unique minimum within each first order valley. Members of this class improve on the performance of the Gibbs algorithm but fail to reach the Bayesian limit in the low generalization phase. They even fail to reach the performance of the best binary vector, an optimal clipping of the barycenter of version space. We find a trade-off between good performance in the low generalization phase and an early onset of perfect generalization. Although the RSB may be locally stable, we discuss the possibility that it fails to be the correct saddle point globally. ©2000 The American Physical Society.
Abstract:
Multicommodity flow (MF) problems have a wide variety of applications in areas such as VLSI circuit design and network design, and are therefore very well studied. The fractional MF problems are polynomial time solvable, while the integer versions are NP-complete. However, exact algorithms for the fractional MF problems have high computational complexity, so approximation algorithms have been explored in the literature to reduce it. Combining these approximation algorithms with the randomized rounding technique, polynomial time approximation algorithms have also been developed in the literature. In the design of high-speed networks, such as optical wavelength division multiplexing (WDM) networks, providing survivability carries great significance. Survivability is the ability of the network to recover from failures; it further increases the complexity of network design and presents network designers with more formidable challenges. In this work we formulate the survivable versions of the MF problems. We build approximation algorithms for the survivable multicommodity flow (SMF) problems based on the framework of the approximation algorithms for the MF problems presented in [1] and [2]. We discuss applications of the SMF problems to solving survivable routing in capacitated networks.
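As a rough, hedged illustration of the randomized rounding technique mentioned above (a sketch under assumed inputs, not the algorithms of [1] or [2]): given a fractional routing that splits each commodity's demand over several paths, an integral routing can be obtained by picking one path per commodity with probability proportional to its fractional flow.

```python
import random

def randomized_rounding(fractional_paths):
    """fractional_paths: {commodity: [(path, flow_fraction), ...]}, where each
    commodity's fractions sum to 1. Returns one path per commodity, chosen
    with probability equal to its fractional flow."""
    integral_routing = {}
    for commodity, paths in fractional_paths.items():
        routes, weights = zip(*paths)
        integral_routing[commodity] = random.choices(routes, weights=weights, k=1)[0]
    return integral_routing

# Toy usage: commodity "A" is split 70/30 over two paths in the fractional solution.
print(randomized_rounding({"A": [(("s", "u", "t"), 0.7), (("s", "v", "t"), 0.3)]}))
```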
Abstract:
In this paper, we consider the problem of topology design for optical networks. We investigate the problem of selecting switching sites to minimize the total cost of the optical network. The cost of an optical network can be expressed as a sum of three main factors: the site cost, the link cost, and the switch cost. To the best of our knowledge, this problem has not been studied in the general form investigated in this paper. We present a mixed integer quadratic programming (MIQP) formulation of the problem to find the optimal value of the total network cost. We also present an efficient heuristic to approximate the solution in polynomial time. The experimental results show good performance of the heuristic. In the experiments with 10 nodes, the total network cost computed by the heuristic is within 2% to 21% of the optimal value, and for 51% of the experiments with 10-node network topologies it is within 8% of the optimal value. We also discuss the insight gained from our experiments.
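In assumed notation (the symbols below are illustrative and not taken from the paper), the three-term cost decomposition mentioned above can be written as:

```latex
C_{\mathrm{total}}
  = \sum_{i} c^{\mathrm{site}}_{i}\, y_{i}
  + \sum_{(i,j)} c^{\mathrm{link}}_{ij}\, x_{ij}
  + \sum_{i} c^{\mathrm{switch}}_{i}\, y_{i},
```

where y_i indicates whether a switching site is opened at node i and x_ij whether a link is installed between nodes i and j.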
Abstract:
We deal with the optimization of the production of branched sheet metal products. New forming techniques for sheet metal give rise to a wide variety of possible profiles and possible ways of production. In particular, we show how the problem of producing a given profile geometry can be modeled as a discrete optimization problem. We provide a theoretical analysis of the model in order to improve its solution time. In this context we give the complete convex hull description of some substructures of the underlying polyhedron. Moreover, we introduce a new class of facet-defining inequalities that represent connectivity constraints for the profile and show how these inequalities can be separated in polynomial time. Finally, we present numerical results for various test instances, both real-world and academic examples.
Abstract:
This paper presents preliminary results on determining small displacements of a global positioning system (GPS) antenna fastened to a structure using only one L1 GPS receiver. Vibrations, periodic or not, are common in large structures, such as bridges, footbridges, tall buildings, and towers under dynamic loads, and their behavior in time and frequency is the subject of structural analysis studies. The hypothesis of this article is that any large structure presenting vibrations in the centimeter-to-millimeter range can be monitored by phase measurements of a single L1 receiver with a high data rate, as long as the direction of the displacement points towards a particular satellite. In this scenario, the carrier phase is modulated by the antenna displacement. Over a period of a few dozen seconds, the relative displacement towards the satellite, the satellite clock, and the atmospheric phase delays can be modeled as a polynomial function of time. The residuals from a polynomial adjustment contain the phase modulation owing to small displacements, random noise, receiver clock short-term instabilities, and multipath. The results showed that it is possible to detect displacements of centimeters in the phase data of a single satellite and of millimeters in the difference between the phases of two satellites. After applying a periodic nonsinusoidal displacement of 10 m to the antenna, it is clearly recovered in the difference of the residuals. The frequency spectrum obtained by applying the fast Fourier transform (FFT) to the residuals exhibited a well-defined peak at the third harmonic, well above the random noise, using the proposed third-degree polynomial model. DOI: 10.1061/(ASCE)SU.1943-5428.0000070. (C) 2012 American Society of Civil Engineers.
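A minimal sketch of the processing chain described above (synthetic data, assumed data rate and variable names; not the authors' implementation): fit a third-degree polynomial to the carrier-phase series over a short window, and inspect the spectrum of the residuals.

```python
import numpy as np

rate = 10.0                            # assumed receiver data rate [Hz]
t = np.arange(0.0, 60.0, 1.0 / rate)   # a window of a few dozen seconds

# Synthetic carrier-phase series: slow geometric/clock/atmospheric trend plus a
# small periodic (non-sinusoidal) displacement projected onto the satellite direction.
trend = 0.5 * t**2 - 3.0 * t + 100.0
displacement = 0.01 * np.sign(np.sin(2 * np.pi * 0.5 * t))  # square-wave-like, 0.5 Hz
phase = trend + displacement + np.random.normal(0.0, 0.002, t.size)

# Third-degree polynomial adjustment absorbs the slow trend; the residuals retain
# the displacement, random noise, clock instabilities and multipath.
coeffs = np.polyfit(t, phase, deg=3)
residuals = phase - np.polyval(coeffs, t)

# Frequency spectrum of the residuals: a square-wave-like motion shows up as
# peaks at its fundamental and odd harmonics.
spectrum = np.abs(np.fft.rfft(residuals))
freqs = np.fft.rfftfreq(residuals.size, d=1.0 / rate)
print(freqs[np.argmax(spectrum[1:]) + 1])  # dominant frequency, ~0.5 Hz
```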
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources, low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes as compared to desktop and laptop systems.
• On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency in this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while System-on-Chips are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently relying only on low-level firmware and hardware drivers; at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of a middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and the video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.
Enhancing energy efficiency and programmability of modern Multi-Processor System-on-Chips (MPSoCs)
Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time, and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor System-on-Chips (MPSoCs) are increasingly popular platforms for high performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. This is a well-known problem in the literature: this kind of optimization problem is very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve it in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, the optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristics or, more generally, with incomplete search is that they introduce an optimality gap of unknown size: they provide very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps, by formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.
Energy Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor
Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, its power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, gaming and navigation devices. There is a clear trend towards larger LCD sizes to exploit the multimedia capabilities of portable devices that can receive and render high definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and on the pixel matrix driving circuits, and is typically proportional to the panel area. As a result, this contribution is also likely to be considerable in future mobile appliances. To address this issue, companies are proposing low power technologies suitable for mobile applications, supporting low power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with the crystal polarization; others aim at decreasing the backlight level while counteracting the resulting luminance reduction and perceived quality degradation with pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement a hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS.
The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.
Thesis Overview
The remainder of the thesis is organized as follows. The first part focuses on enhancing the energy efficiency and programmability of modern Multi-Processor System-on-Chips (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined stream-oriented applications on top of distributed memory architectures with messaging support; we tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 surveys several energy efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions that have been discussed throughout this dissertation.
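As a rough, hedged illustration of the backlight-scaling-with-image-compensation idea discussed in this abstract (a generic sketch, not the dissertation's hardware-assisted implementation; the function name, the gamma value and the compensation model are assumptions): the backlight is dimmed by a factor and the pixel values are boosted by the inverse factor, so that perceived luminance is approximately preserved at the cost of clipping the brightest pixels.

```python
import numpy as np

def compensate_frame(frame: np.ndarray, backlight_scale: float, gamma: float = 2.2) -> np.ndarray:
    """Scale pixel values up to compensate a dimmed backlight.

    frame: uint8 image array (H, W[, C]); backlight_scale: new backlight level in (0, 1].
    Assuming perceived luminance ~ backlight * (pixel/255)**gamma, pixels are boosted
    by backlight_scale**(-1/gamma); values that would exceed 255 are clipped.
    """
    boosted = (frame.astype(np.float32) / 255.0) * backlight_scale ** (-1.0 / gamma)
    return (np.clip(boosted, 0.0, 1.0) * 255.0).astype(np.uint8)

# Toy usage: dim the backlight to 60% and compensate a mid-grey frame.
frame = np.full((4, 4), 128, dtype=np.uint8)
print(compensate_frame(frame, backlight_scale=0.6)[0, 0])  # brighter than 128
```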
Abstract:
The thesis applies ICC techniques to the probabilistic polynomial complexity classes in order to obtain an implicit characterization of them. The main contribution lies in the implicit characterization of the class PP (Probabilistic Polynomial Time), giving a syntactical characterisation of PP and a static complexity analyser able to recognise whether an imperative program computes in Probabilistic Polynomial Time. The thesis is divided in two parts. The first part addresses the problem by creating a prototype functional language (a probabilistic variation of lambda calculus with bounded recursion) that is sound and complete with respect to Probabilistic Polynomial Time. The second part reverses the problem and develops a feasible way to verify whether a program, written in a prototype imperative programming language, runs in Probabilistic Polynomial Time or not. This thesis can be seen as one of the first steps towards Implicit Computational Complexity over probabilistic classes. There are still hard open problems to investigate, and many theoretical aspects strongly connected with these topics; I expect that in the future there will be wide attention to ICC and probabilistic classes.
Abstract:
In distributed systems like clouds or service-oriented frameworks, applications are typically assembled by deploying and connecting a large number of heterogeneous software components, spanning from fine-grained packages to coarse-grained complex services. The complexity of such systems requires a rich set of techniques and tools to support the automation of their deployment process. By relying on a formal model of components, a technique is devised for computing the sequence of actions that allows the deployment of a desired configuration. An efficient algorithm, working in polynomial time, is described and proven to be sound and complete. Finally, a prototype tool implementing the proposed algorithm has been developed. Experimental results support the adoption of this novel approach in real-life scenarios.
Abstract:
In this thesis we provide a characterization of probabilistic computation in itself, from a recursion-theoretical perspective, without reducing it to deterministic computation. More specifically, we show that probabilistic computable functions, i.e., those functions which are computed by Probabilistic Turing Machines (PTMs), can be characterized by a natural generalization of Kleene's partial recursive functions which includes, among the initial functions, one that returns the identity or the successor with probability 1/2 each. We then prove the equi-expressivity of the obtained algebra and the class of functions computed by PTMs. In the second part of the thesis we investigate the relations existing between our recursion-theoretical framework and sub-recursive classes, in the spirit of Implicit Computational Complexity. More precisely, endowing predicative recurrence with a random base function is proved to lead to a characterization of the polynomial-time computable probabilistic functions.
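A minimal illustration of the kind of random initial function mentioned above (an assumed rendering for intuition, not the thesis' formal definition): a function that returns its argument unchanged or its successor, each with probability 1/2.

```python
import random

def query(n: int) -> int:
    """Random base function: returns the identity or the successor of n,
    each with probability 1/2."""
    return n + (1 if random.random() < 0.5 else 0)

# Composing it with ordinary recursive constructions yields functions whose
# output follows a probability distribution, e.g.:
print([query(3) for _ in range(10)])  # a mix of 3s and 4s
```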
Abstract:
Proof nets provide abstract counterparts to sequent proofs modulo rule permutations; the idea being that if two proofs have the same underlying proof-net, they are in essence the same proof. Providing a convincing proof-net counterpart to proofs in the classical sequent calculus is thus an important step in understanding classical sequent calculus proofs. By convincing, we mean that (a) there should be a canonical function from sequent proofs to proof nets, (b) it should be possible to check the correctness of a net in polynomial time, (c) every correct net should be obtainable from a sequent calculus proof, and (d) there should be a cut-elimination procedure which preserves correctness. Previous attempts to give proof-net-like objects for propositional classical logic have failed at least one of the above conditions. In Richard McKinley (2010) [22], the author presented a calculus of proof nets (expansion nets) satisfying (a) and (b); the paper defined a sequent calculus corresponding to expansion nets but gave no explicit demonstration of (c). That sequent calculus, called LK∗ in this paper, is a novel one-sided sequent calculus with both additively and multiplicatively formulated disjunction rules. In this paper (a self-contained extended version of Richard McKinley (2010) [22]), we give a full proof of (c) for expansion nets with respect to LK∗, and in addition give a cut-elimination procedure internal to expansion nets – this makes expansion nets the first notion of proof-net for classical logic satisfying all four criteria.
Abstract:
We define an applicative theory of truth TPT which proves totality exactly for the polynomial time computable functions. TPT has natural and simple axioms, since nearly all its truth axioms are standard for truth theories over an applicative framework. The only exception is the axiom dealing with the word predicate: the truth predicate can only reflect elementhood in the words for terms that have smaller length than a given word. This restriction is what makes it possible to achieve the very low proof-theoretic strength. Truth induction can be allowed without any constraints. For these reasons, the system TPT has the high expressive power one expects from truth theories; it allows embeddings of feasible systems of explicit mathematics and bounded arithmetic. The proof that the theory TPT is feasible is not easy, since it is not possible to apply a standard realisation approach. For this reason we develop a new realisation approach whose realisation functions work on directed acyclic graphs. In this way, we can express and manipulate realisation information more efficiently.
Abstract:
In this paper we continue Feferman’s unfolding program initiated in (Feferman, vol. 6 of Lecture Notes in Logic, 1996) which uses the concept of the unfolding U(S) of a schematic system S in order to describe those operations, predicates and principles concerning them, which are implicit in the acceptance of S. The program has been carried through for a schematic system of non-finitist arithmetic NFA in Feferman and Strahm (Ann Pure Appl Log, 104(1–3):75–96, 2000) and for a system FA (with and without Bar rule) in Feferman and Strahm (Rev Symb Log, 3(4):665–689, 2010). The present contribution elucidates the concept of unfolding for a basic schematic system FEA of feasible arithmetic. Apart from the operational unfolding U0(FEA) of FEA, we study two full unfolding notions, namely the predicate unfolding U(FEA) and a more general truth unfolding UT(FEA) of FEA, the latter making use of a truth predicate added to the language of the operational unfolding. The main results obtained are that the provably convergent functions on binary words for all three unfolding systems are precisely those being computable in polynomial time. The upper bound computations make essential use of a specific theory of truth TPT over combinatory logic, which has recently been introduced in Eberhard and Strahm (Bull Symb Log, 18(3):474–475, 2012) and Eberhard (A feasible theory of truth over combinatory logic, 2014) and whose involved proof-theoretic analysis is due to Eberhard (A feasible theory of truth over combinatory logic, 2014). The results of this paper were first announced in (Eberhard and Strahm, Bull Symb Log 18(3):474–475, 2012).
Abstract:
A single-issue spatial election is a voter preference profile derived from an arrangement of candidates and voters on a line, with each voter preferring the nearer of each pair of candidates. We provide a polynomial-time algorithm that determines whether a given preference profile is a single-issue spatial election and, if so, constructs such an election. This result also has preference representation and mechanism design applications.
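As a small illustration of the definition above (a hedged sketch with invented positions and names; this is not the paper's recognition algorithm, which goes in the opposite direction), a single-issue spatial election can be generated from points on a line by having each voter rank candidates by distance:

```python
def spatial_election(voter_positions, candidate_positions):
    """Build the preference profile induced by positions on a line:
    each voter ranks the candidates from nearest to farthest."""
    profile = {}
    for voter, v in voter_positions.items():
        profile[voter] = sorted(candidate_positions,
                                key=lambda c: abs(candidate_positions[c] - v))
    return profile

# Toy usage: three voters and three candidates on the real line.
voters = {"v1": 0.1, "v2": 0.45, "v3": 0.9}
candidates = {"A": 0.0, "B": 0.4, "C": 1.0}
print(spatial_election(voters, candidates))
# v1 prefers A > B > C, v2 prefers B > A > C, v3 prefers C > B > A
```

The paper's algorithm solves the harder converse problem: given only such a preference profile, decide in polynomial time whether some placement of voters and candidates on a line induces it.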