970 results for Exact constraint
Abstract:
Optimization seeks the best possible value of an objective function. Continuous optimization is optimization over real intervals. There are many global and local search techniques. Global search techniques try to find the global optimum of the optimization problem, whereas local search techniques, which are more widely used, try to find a locally optimal solution within a region of the search space. In Continuous Constraint Satisfaction Problems (CCSPs), constraints are viewed as relations between variables, and the computations are supported by interval analysis. The continuous constraint programming framework provides branch-and-prune algorithms for covering the solution sets of the constraints with sets of interval boxes (Cartesian products of intervals). These algorithms begin with an initial crude cover of the feasible space (the Cartesian product of the initial variable domains), which is recursively refined by interleaving pruning and branching steps until a stopping criterion is satisfied. In this work, we look for a convenient way to combine the advantages of CCSP branch-and-prune with the local search techniques of global optimization, applied locally over each pruned branch of the CCSP. We apply local search techniques of continuous optimization over the pruned boxes produced by the CCSP techniques, mainly the steepest-descent technique with different characteristics such as penalty calculation and step length. We implement two main local search algorithms. We use “Procure”, a constraint reasoning and global optimization framework, to implement our techniques, and we present our results over a set of benchmarks.
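A minimal sketch of the kind of local step described above, assuming a box produced by branch-and-prune and a differentiable objective; the function names, the fixed step length and the simple projection onto the box bounds are illustrative choices, not the algorithms implemented in Procure:

```python
import numpy as np

def steepest_descent_in_box(f, grad, box, x0=None, step=0.1, tol=1e-8, max_iter=1000):
    """Steepest descent restricted to one interval box (list of (lo, hi) pairs)."""
    lo = np.array([b[0] for b in box], dtype=float)
    hi = np.array([b[1] for b in box], dtype=float)
    x = (lo + hi) / 2.0 if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        x_new = np.clip(x - step * g, lo, hi)   # project the step back into the box
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x, f(x)

# Example: minimize a quadratic inside the pruned box [0, 2] x [1, 3]
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])
print(steepest_descent_in_box(f, grad, [(0.0, 2.0), (1.0, 3.0)]))
```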
Abstract:
This work studies the combination of safe and probabilistic reasoning through the hybridization of Monte Carlo integration techniques with continuous constraint programming. In continuous constraint programming there are variables ranging over continuous domains (represented as intervals) together with constraints over them (relations between variables), and the goal is to find values for those variables that satisfy all the constraints (consistent scenarios). Constraint programming “branch-and-prune” algorithms produce safe enclosures of all consistent scenarios. Specially designed algorithms for probabilistic constraint reasoning compute the probability of sets of consistent scenarios, which implies the calculation of an integral over these sets (quadrature). In this work we propose to extend the “branch-and-prune” algorithms with Monte Carlo integration techniques to compute such probabilities. This approach can be useful in robotics for localization problems. Traditional approaches are based on probabilistic techniques that search for the most likely scenario, which may not satisfy the model constraints. We show how to apply our approach to cope with this problem and provide functionality in real time.
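As an illustration of the Monte Carlo integration step, the sketch below estimates the measure of the consistent scenarios inside a single enclosing box by hit-or-miss sampling; the predicate-based interface and all names are assumptions, not the paper's API:

```python
import numpy as np

def consistent_mass_in_box(box, constraints, n_samples=100_000, rng=None):
    """Hit-or-miss Monte Carlo: estimate the volume of the consistent scenarios
    inside one enclosing box. Dividing by the volume of the initial domains gives
    the probability under a uniform prior."""
    rng = np.random.default_rng() if rng is None else rng
    lo = np.array([b[0] for b in box], dtype=float)
    hi = np.array([b[1] for b in box], dtype=float)
    samples = rng.uniform(lo, hi, size=(n_samples, len(box)))
    inside = np.ones(n_samples, dtype=bool)
    for c in constraints:
        inside &= np.apply_along_axis(c, 1, samples)   # keep samples satisfying every constraint
    return inside.mean() * np.prod(hi - lo)

# Example: the unit disc inside the box [-1, 1] x [-1, 1]; the estimate is ~pi.
disc = lambda x: x[0] ** 2 + x[1] ** 2 <= 1.0
print(consistent_mass_in_box([(-1.0, 1.0), (-1.0, 1.0)], [disc]))
```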
Abstract:
Shifting from chemical to biotechnological processes is one of the cornerstones of 21st century industry. The production of a great range of chemicals via biotechnological means is a key challenge on the way toward a bio-based economy. However, this shift is occurring at a pace slower than initially expected. The development of efficient cell factories that allow for competitive production yields is of paramount importance for this leap to happen. Constraint-based models of metabolism, together with in silico strain design algorithms, promise to reveal insights into the best genetic design strategies, a step further toward achieving that goal. In this work, a thorough analysis of the main in silico constraint-based strain design strategies and algorithms is presented, their application in real-world case studies is analyzed, and a path for the future is discussed.
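As a toy illustration of what a constraint-based metabolic model looks like computationally, the sketch below solves a flux balance analysis problem as a linear program; the three-reaction network, bounds and names are invented for illustration only and assume scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Reactions: v0 (uptake -> A), v1 (A -> B), v2 (B -> biomass)
S = np.array([[1, -1,  0],    # mass balance for metabolite A
              [0,  1, -1]])   # mass balance for metabolite B
bounds = [(0, 10), (0, 8), (0, None)]   # uptake capped at 10, enzyme cap of 8 on v1
c = np.zeros(3); c[2] = -1.0            # linprog minimizes, so negate the biomass flux

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x, -res.fun)   # optimal flux distribution and biomass production (8.0)
```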
Abstract:
This chapter presents a general methodology for the formulation of the kinematic constraint equations at the position, velocity and acceleration levels. A brief characterization of the different types of constraints, namely holonomic and nonholonomic constraints, is also offered. The kinematic constraints described here are formulated using generalized coordinates. The chapter ends with a general approach to the kinematic analysis of multibody systems.
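A small symbolic sketch of the idea, assuming a point constrained to a circle and driven at a constant angular rate; differentiating the position-level constraints once and twice yields the velocity- and acceleration-level constraint equations (the mechanism and notation are illustrative, not the chapter's):

```python
import sympy as sp

t, L, omega = sp.symbols('t L omega', positive=True)
x = sp.Function('x')(t)
y = sp.Function('y')(t)
q = sp.Matrix([x, y])                            # generalized coordinates

Phi = sp.Matrix([x**2 + y**2 - L**2,             # holonomic, scleronomic constraint
                 sp.atan2(y, x) - omega * t])    # holonomic, rheonomic (driving) constraint

Phi_q = Phi.jacobian(q)                          # constraint Jacobian
vel_eqs = Phi.diff(t)                            # velocity level: Phi_q*qdot + Phi_t = 0
acc_eqs = Phi.diff(t, 2)                         # acceleration level: second time derivative

sp.pprint(Phi_q)
sp.pprint(vel_eqs.applyfunc(sp.simplify))
sp.pprint(acc_eqs.applyfunc(sp.simplify))
```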
Abstract:
The Achala batholith is one of the largest granitic massifs of the Sierras Pampeanas, located in the Sierras Grandes de Córdoba. Although the Achala batholith has been the subject of several geological studies, mainly because of its uranium deposits, it still lacks an unequivocal petrogenetic model. Nor is there, at present, an unequivocal model explaining the preconcentration of uranium in the granitic rocks that host this element. The general objective of this project is to carry out petrological and geochemical studies in the region known as Cañada del Puerto, a strategically chosen site owing to the abundance of fine- and/or medium-grained equigranular biotite granites, emplaced during the development of late magmatic shear zones, which would constitute the uranium source rocks. The specific objective requires detailed studies of the different facies of the Achala batholith in the selected area, including petrological investigations, whole-rock geochemistry, radiogenic isotope geochemistry and mineral chemistry, in order to define a petrogenetic model that explains: (a) the origin of the parent magma and the subsequent crystallization process of the different granitic facies outcropping in the study area, and (b) the main process that led to the uranium preconcentration of the granitic magmas channelled into the late magmatic shear zones. The two objectives are complementary rather than isolated compartments, since achieving them together will allow a better understanding of the geochemical process that governed the distribution and concentration of U. In this way, we aim to define a model of uranium preconcentration that can be extrapolated to other uranium-enriched granitic areas, constituting a powerful research tool applied to uranium exploration. In particular, knowledge of uranium resources is part of a national strategy aimed at tripling the current energy availability before 2025, in which case uranium is the raw material for the nuclear power plants being planned and under construction. Moreover, Argentina adhered to the Kyoto Protocol and, together with the other signatory countries, must progressively reduce the use of fossil fuels (which produce greenhouse gases), replacing them with other energy sources, among them nuclear energy. Although this project is NOT a mining exploration and/or prospecting project, it is fully consistent with the national energy policy promoted by the Ministerio de Planificación Federal, Inversión Pública y Servicios (see the CNEA web site), which since 2006 has invested substantial sums within the framework of the Programa de Reactivación de la Actividad Nuclear. The studies will be led by Drs. Dahlquist (CONICET-UNC) and Zarco (CNEA), who will combine their experience in the basic sciences with that gained in the applied sciences, respectively.
The intention is therefore to apply academic and scientific knowledge to a geological problem of potential economic and energy significance, linking the aforementioned institutions, CONICET-UNC and CNEA, in order to contribute to the socioeconomic activity of the province of Córdoba in particular and of Argentina in general. Finally, convinced that progress in science and technological development is closely tied to the solid training of human resources, this project is intended to contribute significantly to the doctoral research to be undertaken by the geologist Carina Bello, currently a CNEA fellow.
Abstract:
Magdeburg, Univ., Faculty of Mathematics, Diss., 2013
Abstract:
n.s. no.43(1988)
Abstract:
I consider the problem of assigning agents to objects where each agent must pay the price of the object he gets and prices must sum to a given number. The objective is to select an assignment-price pair that is envy-free with respect to the true preferences. I prove that the proposed mechanism implements the set of envy-free allocations in both Nash and strong Nash equilibria. The distinguishing feature of the mechanism is that it treats the announced preferences as the true ones and selects an envy-free allocation with respect to the announced preferences.
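A minimal sketch of the envy-freeness check under quasi-linear utilities, where an agent envies another object if that object at its price would give a strictly higher payoff; the names and numbers are illustrative, not the paper's mechanism:

```python
def is_envy_free(values, assignment, prices):
    """values[i][j]: agent i's value for object j; assignment[i]: object given to agent i."""
    for i, j_own in enumerate(assignment):
        own_utility = values[i][j_own] - prices[j_own]
        for j in range(len(prices)):
            if values[i][j] - prices[j] > own_utility + 1e-12:
                return False   # agent i envies the holder of object j
    return True

# Two agents, two objects, prices summing to the fixed total of 10.
values = [[8.0, 3.0], [4.0, 6.0]]
print(is_envy_free(values, [0, 1], [7.0, 3.0]))  # True: nobody envies
print(is_envy_free(values, [1, 0], [7.0, 3.0]))  # False: agent 0 envies
```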
Abstract:
This position paper considers the devolution of further fiscal powers to the Scottish Parliament in the context of the objectives and remit of the Smith Commission. The argument builds on our discussion of fiscal decentralization in our previously published work on this topic. We ask what sort of budget constraint the Scottish Parliament should operate with. A soft budget constraint (SBC) allows the Scottish Parliament to spend without having to consider all of the tax, and therefore political, consequences of that spending, which is effectively the position at the moment. The incentives to promote economic growth through fiscal policy, on both the tax and spending sides, are weak to non-existent. This is what the Scotland Act 1998, and the continuing use of the Barnett block grant, gave Scotland. Other budget constraints are now being discussed: those of the Calman Commission (2009) and the Scotland Act 2012, as well as the ones offered in 2014 by the various political parties (the Scottish Conservatives, Scottish Greens, Scottish Labour, the Scottish Liberal Democrats and the Scottish Government). There is also the budget constraint designed by the Holtham Commission (2010) for Wales, which could just as well be used in Scotland. We examine to what extent these offer the hard budget constraint (HBC) that would bring tax policy firmly into the realm of Scottish politics, asking the Scottish electorate and Parliament to weigh the costs of increased spending in terms of higher taxes against the benefits of using public spending to grow the tax base and own-sourced taxes. The hardest budget constraint of all is offered by independence but, as is now known, a clear majority of those who voted in the referendum did not vote for this form of budget constraint. Rather, they voted for a significant further devolution of fiscal powers while remaining within a political and monetary union with the rest of the UK, with the risk pooling and revenue sharing that this implies. It is not surprising, therefore, that none of the budget constraints on offer, apart from the SNP's, comes close to the HBC of independence. However, the almost 25% fall in the price of oil since the referendum, a resource stream so central to the SNP's economic policy making, underscores why a trade-off between an HBC and risk pooling and revenue sharing is needed. Ranked according to the desirable characteristic of offering something approaching an HBC, the least desirable are those of the Calman Commission, the Scotland Act 2012, and Scottish Labour. In all of these, the 'elasticity' of the block grant in the face of failure to grow the Scottish tax base is either not defined or is very elastic, meaning that the risk of failure is shuffled off to taxpayers outside of Scotland. The degree of HBC in the Scottish Conservative, Scottish Greens and Scottish Liberal Democrat proposals is much more desirable from an economic growth point of view, the latter even embracing the HBC proposed by the Holtham Commission, which combines serious tax policy with welfare support in the long run. We judge that the budget constraint in the SNP's proposals is too hard, as it does not allow for continuation of the 'welfare union' in the UK. We also propose that, in the case of a generalized UK economic slowdown requiring a fiscal stimulus, the Scottish Parliament be allowed increased borrowing, to be repaid in the next economic upturn.
Abstract:
The use of Geographic Information Systems has revolutionized the handling and visualization of geo-referenced data and has underlined the critical role of spatial analysis. The usual tools for this purpose are geostatistical methods, which are widely used in the Earth sciences. Geostatistics is based on several hypotheses that are not always verified in practice. Artificial Neural Networks (ANNs), on the other hand, can a priori be used without special assumptions and are known to be flexible. This paper discusses the application of ANNs to the interpolation of a geo-referenced variable.
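A minimal sketch of ANN-based interpolation, assuming scikit-learn is available and using planar coordinates as network inputs; the synthetic field and the network size are illustrative, not the paper's setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(300, 2))          # sampled locations (x, y)
z = np.sin(coords[:, 0] / 15) + 0.05 * coords[:, 1]  # synthetic geo-referenced variable
z += rng.normal(scale=0.05, size=z.shape)            # measurement noise

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(coords, z)

# Predict the variable at unsampled locations (the interpolation step).
grid = np.array([[10.0, 20.0], [50.0, 50.0], [90.0, 80.0]])
print(model.predict(grid))
```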
Abstract:
Hypergraph width measures are a class of hypergraph invariants important in studying the complexity of constraint satisfaction problems (CSPs). We present a general exact exponential algorithm for a large variety of these measures. A connection between these and tree decompositions is established. This enables us to almost seamlessly adapt the combinatorial and algorithmic results known for tree decompositions of graphs to the case of hypergraphs and obtain fast exact algorithms. As a consequence, we provide algorithms which, given a hypergraph H on n vertices and m hyperedges, compute the generalized hypertree-width of H in time O*(2^n) and compute the fractional hypertree-width of H in time O(1.734601^n · m).
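For flavour, the sketch below shows the classic O*(2^n) dynamic program over vertex subsets for ordinary treewidth via elimination orderings; it illustrates the exact exponential, subset-based style of algorithm, but it is not the paper's algorithm for generalized or fractional hypertree-width:

```python
from functools import lru_cache

def treewidth(n, edges):
    """Exact O*(2^n) dynamic program over subsets of prefix-eliminated vertices."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def q(prefix, v):
        # Vertices outside `prefix` (and != v) reachable from v through `prefix`.
        seen = {v}
        stack = [v]
        reached = set()
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w in seen:
                    continue
                seen.add(w)
                if (prefix >> w) & 1:
                    stack.append(w)      # internal vertex of the path
                else:
                    reached.add(w)       # neighbour that survives elimination
        return len(reached)

    @lru_cache(maxsize=None)
    def tw(subset):
        if subset == 0:
            return -1
        best = n
        for v in range(n):
            if (subset >> v) & 1:
                rest = subset & ~(1 << v)
                best = min(best, max(tw(rest), q(rest, v)))
        return best

    return tw((1 << n) - 1)

# Example: a 4-cycle has treewidth 2.
print(treewidth(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```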
Abstract:
The Hardy-Weinberg law, formulated about 100 years ago, states that under certain assumptions the three genotypes AA, AB and BB at a bi-allelic locus are expected to occur in the proportions p², 2pq and q² respectively, where p is the allele frequency of A and q = 1 - p. Many statistical tests are used to check whether empirical marker data obey the Hardy-Weinberg principle. Among these are the classical chi-square test (with or without continuity correction), the likelihood ratio test, Fisher's exact test, and exact tests in combination with Monte Carlo and Markov chain algorithms. Tests for Hardy-Weinberg equilibrium (HWE) are numerical in nature, requiring the computation of a test statistic and a p-value. There is, however, ample room for the use of graphics in HWE testing, in particular the ternary plot. Nowadays, many genetic studies use genetic markers known as Single Nucleotide Polymorphisms (SNPs). SNP data come in the form of counts, but from the counts one typically computes genotype frequencies and allele frequencies. These frequencies satisfy the unit-sum constraint, and their analysis therefore falls within the realm of compositional data analysis (Aitchison, 1986). SNPs are usually bi-allelic, which implies that the genotype frequencies can be adequately represented in a ternary plot. Compositions that are in exact HWE describe a parabola in the ternary plot. Compositions for which HWE cannot be rejected in a statistical test are typically “close” to the parabola, whereas compositions that differ significantly from HWE are “far”. By rewriting the statistics used to test for HWE in terms of heterozygote frequencies, acceptance regions for HWE can be obtained that can be depicted in the ternary plot. This way, compositions can be tested for HWE purely on the basis of their position in the ternary plot (Graffelman & Morales, 2008). This leads to nice graphical representations in which large numbers of SNPs can be tested for HWE in a single graph. Several examples of graphical tests for HWE (implemented in R software) will be shown, using SNP data from different human populations.
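A small sketch of the classical chi-square HWE test on genotype counts (the paper's graphical tests are implemented in R; this Python version is illustrative only and assumes scipy is available):

```python
from scipy.stats import chi2

def hwe_chisq(n_AA, n_AB, n_BB):
    n = n_AA + n_AB + n_BB
    p = (2 * n_AA + n_AB) / (2 * n)          # allele frequency of A
    q = 1 - p
    expected = [n * p**2, n * 2 * p * q, n * q**2]
    observed = [n_AA, n_AB, n_BB]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = chi2.sf(stat, df=1)            # 3 categories - 1 - 1 estimated parameter
    return stat, p_value

print(hwe_chisq(298, 489, 213))  # counts close to HWE give a large p-value
```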
Abstract:
The multiscale finite volume (MsFV) method has been developed to efficiently solve large heterogeneous problems (elliptic or parabolic); it is usually employed for pressure equations and delivers conservative flux fields to be used in transport problems. The method essentially relies on the hypothesis that the (fine-scale) problem can be reasonably described by a set of local solutions coupled by a conservative global (coarse-scale) problem. In most cases, the boundary conditions assigned to the local problems are satisfactory and the approximate conservative fluxes provided by the method are accurate. In numerically challenging cases, however, a more accurate localization is required to obtain a good approximation of the fine-scale solution. In this paper we develop a procedure to iteratively improve the boundary conditions of the local problems. The algorithm relies on the data structure of the MsFV method and employs a Krylov-subspace projection method to obtain an unconditionally stable scheme and accelerate convergence. Two variants are considered: in the first, only the MsFV operator is used; in the second, the MsFV operator is combined in a two-step method with an operator derived from the problem solved to construct the conservative flux field. The resulting iterative MsFV algorithms allow arbitrary reduction of the solution error without compromising the construction of a conservative flux field, which is guaranteed at any iteration. Since it converges to the exact solution, the method can be regarded as a linear solver. In this context, the schemes proposed here can be viewed as preconditioned versions of the Generalized Minimal Residual method (GMRES), with the peculiar characteristic that the residual on the coarse grid is zero at every iteration (thus conservative fluxes can be obtained).
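As a minimal illustration of a preconditioned GMRES iteration (using a simple Jacobi preconditioner on a model elliptic problem rather than the MsFV operator, and assuming scipy is available):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

n = 200
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')   # 1D Laplacian as a model pressure problem
b = np.ones(n)

# Preconditioner applied matrix-free (Jacobi here; an MsFV-style preconditioner
# would instead apply the multiscale solve to the residual).
M = LinearOperator((n, n), matvec=lambda r: r / A.diagonal())

x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # 0 means converged; residual norm
```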
Abstract:
The estimation of camera egomotion is a well-established problem in computer vision. Many approaches have been proposed based on both the discrete and the differential epipolar constraint. The discrete case is mainly used in self-calibrated stereoscopic systems, whereas the differential case deals with a single moving camera. The article surveys several methods for mobile robot egomotion estimation, covering more than 0.5 million samples of synthetic data. Results from real data are also given.
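A small numeric sketch of the discrete epipolar constraint x2^T E x1 = 0 with E = [t]_x R, using an invented camera motion and 3D point (illustrative only, not the article's evaluation):

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Relative motion between the two views: X2 = R @ X1 + t
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.5, 0.1, 0.0])
E = skew(t) @ R                      # essential matrix

X1 = np.array([1.0, 0.5, 4.0])       # 3D point in the first camera frame
X2 = R @ X1 + t                      # same point in the second camera frame
x1 = X1 / X1[2]                      # normalized image coordinates, view 1
x2 = X2 / X2[2]                      # normalized image coordinates, view 2

print(x2 @ E @ x1)                   # ~0: the correspondence satisfies the constraint
```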