940 results for Convex infinite programming
Abstract:
Polymers that respond to externally applied stimuli have found widespread application in the biomedical field owing to their (reversible) coil–globule transitions. Polymers displaying a lower critical solution temperature (LCST) are the most commonly used, but for blood-borne (i.e., soluble) biomedical applications the application of heat is not always possible or practical. Here we report the design and synthesis of poly(oligoethylene glycol methacrylate)-based polymers whose cloud points are easily varied by alkaline phosphatase-mediated dephosphorylation. By fine-tuning the density of phosphate groups on the backbone, it was possible to induce an isothermal transition: a change in solubility triggered by the removal of a small number of phosphate esters from the side chains, which activates the LCST-type response. As no temperature change is involved, this serves as a model of a cell-instructed polymer response. Finally, both polymers were found to be non-cytotoxic against MCF-7 cells (at 1 mg·mL⁻¹), confirming their promise for biomedical applications.
Abstract:
Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, which is a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that the no-filter recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable to compute. Exploiting the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed that uses a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in branch-and-bound (B&B) solver, and it is the approach we recommend for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improve the accuracy of compressed data.
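To make the two-filter query concrete, here is a minimal sketch of the yes-no structure in Python; the hashing scheme, sizes, and class names are illustrative assumptions, not the construction from the paper.

```python
import hashlib

class BloomFilter:
    """A standard Bloom filter backed by an integer used as a bit array."""

    def __init__(self, n_bits, n_hashes):
        self.n_bits, self.n_hashes, self.bits = n_bits, n_hashes, 0

    def _positions(self, item):
        # Derive n_hashes bit positions from salted digests (illustrative).
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

class YesNoBloomFilter:
    """Yes-filter stores the set; no-filter stores selected false positives."""

    def __init__(self, yes_filter, no_filter):
        self.yes, self.no = yes_filter, no_filter

    def __contains__(self, item):
        # Accept only items the yes-filter recognises and the no-filter
        # does not. This is why the no-filter must recognise no true
        # positives: otherwise genuine members would be rejected.
        return item in self.yes and item not in self.no

# Usage sketch: load the true keys, then add observed yes-filter false
# positives (selected, e.g., by the ILP/ADP methods above) to the no-filter.
yes, no = BloomFilter(1 << 16, 5), BloomFilter(1 << 12, 5)
for key in ("alpha", "beta", "gamma"):
    yes.add(key)
print("alpha" in YesNoBloomFilter(yes, no))   # True
```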
Abstract:
To accelerate computing the convex hull of a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, that contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of the data is required; and third, the reduced set of s points forms a simple polygonal chain and can thus be pipelined directly into an O(n)-time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points with a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found in experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n in the dataset, the greater the speedup factor achieved.
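One preconditioning idea consistent with the properties claimed above (no explicit sorting, O(n) time when min(p, q) ≤ n, output a simple polygonal chain) is to keep only the extreme points of each integer coordinate column. The sketch below illustrates that idea in Python; it is not necessarily the authors' exact procedure.

```python
def precondition(points, p):
    """Reduce points with integer x in [0, p] before a convex hull run.

    A point that is neither the minimum nor the maximum y in its
    x-column lies on the segment between those two extremes, so it can
    never be a hull vertex. Bucketing by x avoids sorting and costs
    O(n + p), i.e., O(n) when p <= n (in general, bucket along the
    smaller of the two box dimensions).
    """
    lo = [None] * (p + 1)          # per-column lowest point
    hi = [None] * (p + 1)          # per-column highest point
    for x, y in points:
        if lo[x] is None or y < lo[x][1]:
            lo[x] = (x, y)
        if hi[x] is None or y > hi[x][1]:
            hi[x] = (x, y)
    # Lower extremes left-to-right, then upper extremes right-to-left:
    # two x-monotone chains that together contain every hull vertex.
    # (Columns holding a single point appear in both chains; harmless,
    # or easy to deduplicate.)
    lower = [pt for pt in lo if pt is not None]
    upper = [pt for pt in reversed(hi) if pt is not None]
    return lower + upper
```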
Abstract:
Low birth weight has been associated with increased obesity in adulthood. It has been shown that dietary salt restriction during intrauterine life induces low birth weight and insulin resistance in adult Wistar rats. The present study had a two-fold objective: to evaluate the effects that low salt intake during pregnancy and lactation has on the amount and distribution of adipose tissue, and to determine whether the phenotypic changes in fat mass in this model are associated with alterations in the activity of the renin-angiotensin system. Maternal salt restriction was found to reduce birth weight in male and female offspring. In adulthood, the female offspring of dams fed the low-salt diet presented higher adiposity indices than the offspring of dams fed a normal-salt diet. This was attributed to the fact that adipose tissue mass (retroperitoneal, but not gonadal, mesenteric or inguinal) was greater in those rats than in the offspring of dams fed a normal-salt diet. The adult offspring of dams fed the low-salt diet, compared with those of dams fed a normal-salt diet, presented the following: plasma leptin levels higher in males and lower in females; plasma renin activity higher in males but not in females; and no differences in body weight, mean arterial blood pressure or serum angiotensin-converting enzyme activity. Therefore, low salt intake during pregnancy might lead to the programming of obesity in adult female offspring.
Abstract:
This paper deals with semi-global $C^k$-solvability of complex vector fields of the form $L = \partial/\partial t + x^r(a(x) + ib(x))\,\partial/\partial x$, $r \ge 1$, defined on $\Omega_\epsilon = (-\epsilon, \epsilon) \times S^1$, $\epsilon > 0$, where $a$ and $b$ are $C^\infty$ real-valued functions on $(-\epsilon, \epsilon)$. It is shown that the interplay between the orders of vanishing of the functions $a$ and $b$ at $x = 0$ influences the $C^k$-solvability at $\Sigma = \{0\} \times S^1$. When $r = 1$, the functions $a$ and $b$ of $L$ are permitted to depend on both the $x$ and $t$ variables, that is, $L = \partial/\partial t + x(a(x, t) + ib(x, t))\,\partial/\partial x$, where $(x, t) \in \Omega_\epsilon$.
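For orientation, the "order of vanishing" invoked above is the standard notion, stated here in general (not specific to this paper):

```latex
% A smooth function a vanishes to order m at x = 0 when
a(x) = x^{m}\,\tilde{a}(x), \qquad \tilde{a} \in C^{\infty}, \quad \tilde{a}(0) \neq 0,
% and the results above relate the C^k-solvability of L near
% \Sigma = \{0\} \times S^1 to how the orders of vanishing of a and b compare.
```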
Abstract:
We study the Gevrey solvability of a class of complex vector fields, defined on $\Omega_\epsilon = (-\epsilon, \epsilon) \times S^1$, given by $L = \partial/\partial t + (a(x) + ib(x))\,\partial/\partial x$, $b \not\equiv 0$, near the characteristic set $\Sigma = \{0\} \times S^1$. We show that the interplay between the orders of vanishing of the functions $a$ and $b$ at $x = 0$ plays a role in the Gevrey solvability.
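The Gevrey scale referenced here is the standard one interpolating between analytic and merely smooth functions (a general definition, for orientation):

```latex
% f belongs to the Gevrey class G^s (s >= 1) on an interval I when there
% exist constants C, h > 0 such that
|f^{(n)}(x)| \ \le\ C\, h^{n}\, (n!)^{s}, \qquad x \in I, \quad n = 0, 1, 2, \ldots
% s = 1 recovers the real-analytic functions; s > 1 admits flat functions.
```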
Abstract:
We introduce a problem called maximum common characters in blocks (MCCB), which arises in applications of approximate string comparison, particularly in the unification of possibly erroneous textual data coming from different sources. We show that this problem is NP-complete, but that it can nevertheless be solved satisfactorily using integer linear programming for instances of practical interest. Two integer linear programming formulations are proposed and compared in terms of their linear relaxations. We also compare the results of the approximate matching with other known measures such as the Levenshtein (edit) distance.
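For reference, the Levenshtein (edit) distance used in the comparison admits the classic dynamic program; a minimal, self-contained sketch:

```python
def levenshtein(s, t):
    """Classic O(|s|*|t|) dynamic program for the edit distance:
    the minimum number of insertions, deletions, and substitutions
    turning s into t."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, start=1):
        curr = [i]
        for j, ct in enumerate(t, start=1):
            curr.append(min(
                prev[j] + 1,               # delete cs
                curr[j - 1] + 1,           # insert ct
                prev[j - 1] + (cs != ct),  # substitute (free on a match)
            ))
        prev = curr
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3
```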
Abstract:
We consider random generalizations of a quantum model of infinite range introduced by Emch and Radin. The generalizations allow a neat extension from the class $\ell^1$ of absolutely summable lattice potentials to the optimal class $\ell^2$ of square-summable potentials first considered by Khanin and Sinai and generalised by van Enter and van Hemmen. The approach to equilibrium in the case of a Gaussian distribution is proved to be faster than for a Bernoulli distribution for both short-range and long-range lattice potentials. While exponential decay to equilibrium is excluded in the nonrandom $\ell^1$ case, it is proved to occur for both short- and long-range potentials for Gaussian distributions, and for potentials of class $\ell^2$ in the Bernoulli case. Open problems are discussed.
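Written out for a family of lattice couplings $\{J_n\}$ (notation assumed here purely for illustration), the two classes are:

```latex
\ell^{1}:\ \sum_{n} |J_n| \ <\ \infty
\qquad\text{versus}\qquad
\ell^{2}:\ \sum_{n} |J_n|^{2} \ <\ \infty .
% Every absolutely summable family is square summable, but not conversely,
% so \ell^2 is the strictly larger (optimal) class.
```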
Abstract:
The focus of study in this paper is the class of packing problems. More specifically, it deals with the placement of a set of N circular items of unit radius inside an object with the aim of minimizing its dimensions. Differently shaped containers are considered, namely circles, squares, rectangles, strips and triangles. By means of the resolution of non-linear equation systems through the Newton-Raphson method, the algorithm presented herein succeeds in improving the accuracy of previous results attained by continuous optimization approaches, up to numerical machine precision. The computer implementation and the data sets are available at http://www.ime.usp.br/~egbirgin/packing/.
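The core numerical tool named above, the Newton-Raphson method for a system of non-linear equations, iterates as in the generic sketch below (an illustration, not the paper's specific residual system):

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-14, max_iter=100):
    """Solve the system f(x) = 0 by Newton-Raphson iteration.

    Each step solves the linearized system J(x) dx = -f(x); near a
    regular root convergence is quadratic, which is what allows
    refining a solution down to numerical machine precision.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x + np.linalg.solve(jac(x), -fx)
    return x

# Toy system: intersection of the circle x^2 + y^2 = 4 with the line x = y.
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]])
jac = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
print(newton_raphson(f, jac, [1.0, 0.5]))   # -> approx [1.4142, 1.4142]
```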
Abstract:
A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples are exhibited showing that the new method converges to second-order stationary points in situations in which first-order methods fail.
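For reference, the PHR augmented Lagrangian for a problem min f(x) subject to $g_i(x) \le 0$ takes the following standard form (notation assumed here; see the paper for the precise setting):

```latex
L_{\rho}(x,\lambda) \;=\; f(x) \;+\; \frac{1}{2\rho}\sum_{i=1}^{m}
  \Big[ \max\!\big(0,\ \lambda_i + \rho\, g_i(x)\big)^{2} - \lambda_i^{2} \Big].
% The max(0, .)^2 term is continuously differentiable but lacks continuous
% second derivatives on the surface \lambda_i + \rho g_i(x) = 0, which is
% why the box-constrained subproblem solver above must handle functions
% without continuous second derivatives.
```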
Abstract:
A bipartite graph $G = (V, W, E)$ is convex if there exists an ordering of the vertices of $W$ such that, for each $v \in V$, the neighbors of $v$ are consecutive in $W$. We describe both a sequential and a BSP/CGM algorithm to find a maximum independent set in a convex bipartite graph. The sequential algorithm improves on the running time of the previously known algorithm, and the BSP/CGM algorithm is a parallel version of the sequential one. The complexity of the algorithms does not depend on $|W|$.
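The interval structure in the definition is easy to verify for a given ordering; a minimal sketch (illustrative, not the paper's algorithm):

```python
def is_convex(neighbors, order):
    """Check whether, under the given ordering of W, every vertex of V
    has its neighborhood as one consecutive run (an interval) in W.

    neighbors: dict mapping each v in V to a set of vertices of W.
    order:     list giving the candidate ordering of W.
    """
    pos = {w: i for i, w in enumerate(order)}
    for nbrs in neighbors.values():
        if not nbrs:
            continue
        idx = sorted(pos[w] for w in nbrs)
        if idx[-1] - idx[0] + 1 != len(idx):   # a gap breaks the interval
            return False
    return True

# Example: v1 sees {w1, w2}, v2 sees {w2, w3} -- convex under (w1, w2, w3).
print(is_convex({"v1": {"w1", "w2"}, "v2": {"w2", "w3"}}, ["w1", "w2", "w3"]))
```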
Abstract:
We investigate several two-dimensional guillotine cutting stock problems and their variants in which orthogonal rotations are allowed. We first present two dynamic programming based algorithms for the Rectangular Knapsack (RK) problem and its variants in which the patterns must be staged. The first algorithm solves the recurrence formula proposed by Beasley; the second algorithm, for staged patterns, also uses a recurrence formula. We show that if the items are not too small compared to the dimensions of the bin, then these algorithms require polynomial time. Using these algorithms we solved all instances of the RK problem found in the OR-LIBRARY, including one for which no optimal solution was previously known. We also consider the Two-dimensional Cutting Stock problem. We present a column generation based algorithm for this problem that uses the first algorithm mentioned above to generate the columns. We propose two strategies to tackle the residual instances. We also investigate a variant of this problem in which the bins have different sizes. Lastly, we study the Two-dimensional Strip Packing problem. We present a column generation based algorithm for this problem as well, using the second algorithm mentioned above, in which staged patterns are imposed. In this case we solve instances for two-, three- and four-staged patterns. We report on computational experiments with the various algorithms we propose in this paper. The results indicate that these algorithms are suitable for solving real-world instances. We give a detailed description (a pseudo-code) of all the algorithms presented here, so that the reader may easily implement them.
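For orientation, the kind of recurrence attributed to Beasley combines the best single item with the best vertical or horizontal guillotine split; a compact sketch for the unconstrained, unstaged case (details in the paper may differ):

```python
from functools import lru_cache

def guillotine_value(L, W, items):
    """Best value packable into an L x W bin using guillotine cuts.

    items: list of (length, width, value) rectangles, unlimited copies,
    no rotation. A bin's value is the best single item that fits, or the
    best split into two sub-bins by one vertical or horizontal cut.
    """
    @lru_cache(maxsize=None)
    def V(l, w):
        best = max((v for il, iw, v in items if il <= l and iw <= w),
                   default=0)
        for x in range(1, l // 2 + 1):      # vertical cuts (symmetric)
            best = max(best, V(x, w) + V(l - x, w))
        for y in range(1, w // 2 + 1):      # horizontal cuts (symmetric)
            best = max(best, V(l, y) + V(l, w - y))
        return best

    return V(L, W)

# A 4x3 bin with items (2,3,value 5) and (4,1,value 4): one vertical cut
# at x = 2 yields two 2x3 sub-bins, for a total value of 10.
print(guillotine_value(4, 3, [(2, 3, 5), (4, 1, 4)]))   # -> 10
```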
Abstract:
Given a fixed set of identical or different-sized circular items, the problem we deal with consists in finding the smallest object within which the items can be packed. Circular, triangular, square, rectangular and strip objects are considered. Moreover, both 2D and 3D problems are treated. Twice-differentiable models for all these problems are presented. A strategy to reduce the complexity of evaluating the models is employed and, as a consequence, instances with a large number of items can be considered. Numerical experiments show the flexibility and reliability of the new unified approach.
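A representative instance of such a twice-differentiable model, written here for packing circles of radii $r_i$ into a circle of minimal radius $R$ (an illustration, not necessarily the paper's exact formulation):

```latex
\min_{R,\;c_1,\dots,c_N}\ R
\quad\text{s.t.}\quad
\|c_i - c_j\|^{2} \ \ge\ (r_i + r_j)^{2}, \quad 1 \le i < j \le N,
\qquad
\|c_i\|^{2} \ \le\ (R - r_i)^{2}, \quad R \ \ge\ \max_i r_i .
% Working with squared distances keeps every constraint polynomial in the
% variables, hence twice continuously differentiable.
```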
Abstract:
Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set box-constraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of Augmented Lagrangians that come from the popular Powell-Hestenes-Rockafellar scheme. A combined algorithm that uses the quasi-Newton formula or a truncated-Newton procedure, depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the CUTE collection are presented.
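The secant condition underlying the preconditioners mentioned above is the standard quasi-Newton requirement (notation assumed for illustration):

```latex
B_{k+1}\, s_k = y_k, \qquad s_k = x_{k+1} - x_k, \qquad
y_k = \nabla \Phi(x_{k+1}) - \nabla \Phi(x_k),
% where \Phi denotes the penalty-Lagrangian minimized inside a face of the
% box; a minimal-memory method would retain only one or a few such pairs
% (s_k, y_k) rather than a full approximation of the Hessian.
```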