943 results for Branch and bounds
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Dynamics of biomolecules over various spatial and time scales are essential for biological functions such as molecular recognition, catalysis and signaling. However, reconstruction of biomolecular dynamics from experimental observables requires the determination of a conformational probability distribution. Unfortunately, these distributions cannot be fully constrained by the limited information from experiments, making the problem an ill-posed one in the terminology of Hadamard. The ill-posed nature of the problem comes from the fact that it has no unique solution. Multiple or even an infinite number of solutions may exist. To avoid the ill-posed nature, the problem needs to be regularized by making assumptions, which inevitably introduce biases into the result.
Here, I present two continuous probability density function approaches to solve an important inverse problem called the RDC trigonometric moment problem. By focusing on interdomain orientations, we reduced the problem to the determination of a distribution on the 3D rotational space from residual dipolar couplings (RDCs). We derived an analytical equation that relates the alignment tensors of adjacent domains, which serves as the foundation of the two methods. In the first approach, the ill-posed nature of the problem was avoided by introducing a continuous distribution model, which enjoys a smoothness assumption. To find the optimal solution for the distribution, we also designed an efficient branch-and-bound algorithm that exploits the mathematical structure of the analytical solutions. The algorithm is guaranteed to find the distribution that best satisfies the analytical relationship. We observed good performance of the method when tested under various levels of experimental noise and when applied to two protein systems. The second approach avoids the use of any model by employing the maximum entropy principle. This 'model-free' approach delivers the least biased result consistent with our state of knowledge. In this approach, the solution is an exponential function of Lagrange multipliers. To determine the multipliers, a convex objective function is constructed; consequently, the maximum entropy solution can be found easily by gradient descent methods. Both algorithms can be applied to biomolecular RDC data in general, including data from RNA and DNA molecules.
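To make the maximum-entropy step concrete, here is a minimal sketch on a discretized space: the solution is exponential in the Lagrange multipliers, and the multipliers minimize a convex dual by gradient descent. The grid, the single moment function, and all names and step sizes are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

# Sketch (assumed data): maximum-entropy density on a discrete grid.
# p(x) ∝ exp(sum_k lam_k f_k(x)); lam minimizes the convex dual
# log Z(lam) - lam·c, where c are the measured moments.

def max_ent_multipliers(F, c, lr=0.1, steps=5000):
    """F: (K, N) moment functions on N grid points; c: (K,) target moments."""
    lam = np.zeros(len(c))
    for _ in range(steps):
        w = np.exp(lam @ F)      # unnormalized density on the grid
        p = w / w.sum()          # normalized distribution
        grad = F @ p - c         # gradient of the dual: E_p[f] - c
        lam -= lr * grad         # gradient descent on a convex objective
    return lam

# Toy 1D example standing in for an RDC-like observable.
x = np.linspace(0.0, np.pi, 200)
F = np.vstack([np.cos(x)])       # one moment function
c = np.array([0.3])              # one "measured" moment (made up)
lam = max_ent_multipliers(F, c)
p = np.exp(lam @ F); p /= p.sum()
print(lam, F @ p)                # recovered moment ≈ 0.3
```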
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Abstract not available
Abstract:
Cops and robbers games have been studied for some thirty years in computer science and mathematics. As in pursuit games generally, pursuers (the cops) seek to capture evaders (the robbers); here, however, the players move in turns and are constrained to a discrete structure. The players are always assumed to know the exact positions of their opponents; in other words, the game is played with perfect information. The first definition of a cops-and-robbers game goes back to Nowakowski and Winkler [39] and, independently, Quilliot [46]. This first definition describes a game between a single cop and a single robber, with constraints on their movement speeds. Extensions were gradually proposed, such as adding cops and increasing movement speeds. In 2014, Bonato and MacGillivray [6] proposed a generalization of cops-and-robbers games that allows them to be studied in full generality. Their model, however, does not cover games with stochastic components, such as those in which the robbers may move randomly. This thesis therefore presents a new model that includes stochastic aspects. Second, this thesis presents a concrete application of these games in the form of a method for solving a problem from search theory. While cops-and-robbers games rely on the perfect-information assumption, search problems cannot make that assumption. It turns out, however, that the cops-and-robbers game can be analyzed as a constraint relaxation of a search problem. This new point of view is exploited to design an upper bound on the objective function of a search problem, which can be put to use in a so-called branch-and-bound method.
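The closing idea, a relaxation-derived upper bound driving a branch-and-bound search, follows a generic pattern. Below is a minimal sketch of that pattern for a maximization problem; all callbacks and the toy knapsack-style usage are illustrative assumptions, not the thesis' search problem.

```python
import heapq

def branch_and_bound(root, upper_bound, value, children, is_leaf):
    """Best-first branch and bound for maximization: `upper_bound` plays
    the role of the relaxation; subtrees whose bound cannot beat the
    incumbent are pruned."""
    best_val, best = float("-inf"), None
    heap, tie = [(-upper_bound(root), 0, root)], 1   # max-heap via negation
    while heap:
        neg_ub, _, node = heapq.heappop(heap)
        if -neg_ub <= best_val:
            continue                                 # prune: bound too weak
        if is_leaf(node):
            if value(node) > best_val:
                best_val, best = value(node), node   # new incumbent
            continue
        for child in children(node):
            ub = upper_bound(child)
            if ub > best_val:
                heapq.heappush(heap, (-ub, tie, child))
                tie += 1
    return best_val, best

# Toy usage: items (value, weight) under a weight cap; the "relaxation"
# simply adds all remaining item values, ignoring the capacity.
items, cap = [(6, 4), (5, 3), (4, 2)], 5

def children(n):
    i, w, v = n
    out = [(i + 1, w, v)]                            # skip item i
    if w + items[i][1] <= cap:
        out.append((i + 1, w + items[i][1], v + items[i][0]))
    return out

print(branch_and_bound(
    root=(0, 0, 0),
    upper_bound=lambda n: n[2] + sum(v for v, _ in items[n[0]:]),
    value=lambda n: n[2],
    children=children,
    is_leaf=lambda n: n[0] == len(items),
))                                                   # -> (9, (3, 5, 9))
```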
Abstract:
The outlook is a difficult one to face in a world moving by giant strides, where technological progress is visible every day. The problems that documentary information units face, such as budget, staffing, and physical space, together with users who are skilled in the use of the Internet and the sheer amount of information available to everyone on the worldwide network, form a challenge that today's library professional must take on, in a world of constant change that is difficult to keep pace with but that must be the goal of every professional in the field. As a professional distinguished in information management, the library professional must keep up with the latest technological advances in order to handle the immense quantity of documentation held in centers in printed form, as well as that which travels the world on electronic highways, sometimes without control. Printing on paper has not yet been forgotten, but a profound change is under way in how the products and services that an information unit must provide are delivered, whether the unit is public or private, specialized or general. The world demands of us a change, and an accelerated process in our minds, to assimilate part of the globalized development that society is living through at a giant pace.
Abstract:
Despite all intentions in the course of the Bologna Process and decades of investment into improving the social dimension, results in many national and international studies show that inequity remains stubbornly persistent, and that inequity based on socio-economic status, parental education, gender, country of origin, rural background and more continues to prevail in our Higher Education systems and in the labour market. While improvement has been shown, extrapolation of the gains of the last 40 years in the field shows that it could take over 100 years for disadvantaged groups to catch up with their more advantaged peers, should the current rate of improvement be maintained. Many of the traditional approaches to improving equity have also necessitated large-scale public investments, in the form of direct support to underrepresented groups. In an age of austerity, many countries in Europe are finding it necessary to revisit and scale down these policies so as to accommodate other priorities, such as balanced budgets or dealing with an ageing population. An analysis of the current situation indicates that the time is ripe for disruptive innovations to move the cause forward by leaps and bounds, instead of through incrementalist approaches. Despite the list of programmes in this analysis, there is very little evidence of a causal link between programmes, the methodologies for their use, and increases or improvements in equity in institutions. This creates a significant information gap for institutions and public authorities seeking indicators by which to allocate limited resources to equity-improving initiatives without adequate evidence of effectiveness. The IDEAS project and this publication aim at addressing and improving this information gap. (DIPF/Orig.)
Abstract:
It has been reported that cuttings of Camellia sinensis have a low capacity to form roots, motivating basic studies to optimize propagation by cuttings. The present work therefore aimed to quantify the rooting potential of different genotypes and the effect of the position of the cutting on the branch, incision at the base, substrate, container size, and indolebutyric acid (IBA) on the rooting of semi-hardwood cuttings of this species. To that end, branches of the genotypes IAC 259, F15, and Comum were collected in Pariquera-Açu-SP in the winter of 2010. The cuttings, each bearing one bud and one leaf, were then prepared and kept in a nursery under 70% shading. Cuttings from the basal and middle positions of the branches are the most suitable for propagation, owing to lower mortality and greater rooting. Wounding the base of the cutting does not affect mortality or rooting, but it does induce callus formation. There were also no differences in mortality or rooting when the cuttings were kept in containers of 50, 90, or 120 cm³. Compared with vermiculite, sand, and carbonized rice husk, soil was the best substrate for the cuttings; combined with wounding and treatment of the cuttings with 10 g L-1 of IBA, it produced the highest rooting percentage. Even under these conditions, however, average cutting mortality was 42%. The rooting potential of the genotype Comum was higher than that of IAC 259 and F15.
Abstract:
The BBMCSFilter method was developed to solve mixed integer nonlinear programming problems. Problems of this kind have both integer and continuous variables and appear very frequently in process engineering. The objective of this work is to analyze the performance of the method when the coordinate searches are interrupted in the context of the multistart strategy. From the numerical experiments, we observed a reduction in the number of function evaluations and in the CPU time.
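To illustrate the kind of interruption being analyzed, here is a generic multistart coordinate search in which each local search stops once a per-start evaluation budget runs out. This is a sketch under assumed names and parameters, not the BBMCSFilter code.

```python
import random

def coordinate_search(f, x, step, budget):
    """Compass-style coordinate search; interrupted when the evaluation
    budget is exhausted (a simple stand-in for the interruption studied)."""
    fx, evals = f(x), 1
    while step > 1e-6 and evals < budget:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                if evals >= budget:          # interruption point
                    return x, fx
                y = x[:]
                y[i] += d
                fy, evals = f(y), evals + 1
                if fy < fx:                  # accept improving move
                    x, fx, improved = y, fy, True
                    break
        if not improved:
            step /= 2                        # contract the step size
    return x, fx

def multistart(f, dim, starts=10, budget=200):
    best_x, best_f = None, float("inf")
    for _ in range(starts):
        x0 = [random.uniform(-5.0, 5.0) for _ in range(dim)]
        x, fx = coordinate_search(f, x0, step=1.0, budget=budget)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

print(multistart(lambda z: (z[0] - 1) ** 2 + (z[1] + 2) ** 2, dim=2))
# -> approximately ([1.0, -2.0], 0.0)
```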
Abstract:
In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work to date involving influence maximization problems has focused on heuristics and approximation.
We start with the following viral marketing problem, which has attracted a significant amount of interest in the computer science literature. Given a social network, find a target set of customers to seed with a product. A cascade is then triggered by these initial adopters, and other people start to adopt the product due to the influence they receive from earlier adopters. The idea is to find the minimum cost that results in the entire network adopting the product. We first study a problem called the Weighted Target Set Selection (WTSS) Problem. In the WTSS problem, the diffusion can take place over as many time periods as needed and a free product is given out to the individuals in the target set. Restricting the number of time periods that the diffusion takes place over to one, we obtain a problem called the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider a problem called the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that we restrict the number of time periods that the diffusion takes place over to one.
We apply a common research paradigm to each of these four problems. First, we work on special graphs: trees and cycles. Based on the insights we obtain from special graphs, we develop efficient methods for general graphs. On trees, we first propose a polynomial-time algorithm. More importantly, we present a tight and compact extended formulation. We also project the extended formulation onto the space of the natural variables, which gives the polytope on trees. Next, building upon the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial-time algorithm on cycles. This leads to our contribution on general graphs. For the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we are able to obtain high-quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
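The cascade process shared by all four problems is simple to state; the following toy sketch simulates it under standard threshold dynamics of the kind the abstract describes (the graph, thresholds, and seed set are illustrative stand-ins, not instances from the computational study).

```python
# Minimal simulation of the diffusion described above: seeded nodes adopt
# first, and a node adopts once the number of its adopting neighbors
# reaches its threshold.

def diffuse(neighbors, threshold, seeds):
    active = set(seeds)
    changed = True
    while changed:                      # repeat until no new adoptions
        changed = False
        for v in neighbors:
            if v not in active and \
               sum(u in active for u in neighbors[v]) >= threshold[v]:
                active.add(v)
                changed = True
    return active

# Toy instance: a path a-b-c-d; every node needs one active neighbor.
neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
threshold = {"a": 1, "b": 1, "c": 1, "d": 1}
print(diffuse(neighbors, threshold, seeds={"a"}))  # cascade covers all nodes
```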
Abstract:
The Consumer Finance Division of the South Carolina State Board of Financial Institutions is responsible for the supervision, licensing and examination of all consumer finance companies, deferred presentment companies, check cashing companies, and non-depository mortgage lenders and their loan originators. This project focuses specifically on the licensing of Mortgage Lender/Servicer (company), Mortgage Lender/Servicer Branch (branch) and Mortgage Loan Originator (loan originator) licenses. The problem statement is how the Division can handle an increasing number of mortgage loan originators in the state without lengthening the time needed to process applications. The goal of this project is to make the current licensing process more efficient so that the Division can handle the increased workload without having to hire additional personnel.
Error, Bias, and Long-Branch Attraction in Data for Two Chloroplast Photosystem Genes in Seed Plants
Abstract:
Sequences of two chloroplast photosystem genes, psaA and psbB, together comprising about 3,500 bp, were obtained for all five major groups of extant seed plants and several outgroups among other vascular plants. Strongly supported, but significantly conflicting, phylogenetic signals were obtained in parsimony analyses from partitions of the data into first and second codon positions versus third positions. In the former, both genes agreed on monophyletic gymnosperms, with Gnetales closely related to certain conifers. In the latter, Gnetales are inferred to be the sister group of all other seed plants, with gymnosperms paraphyletic. None of the data supported the modern "anthophyte hypothesis," which places Gnetales as the sister group of flowering plants. A series of simulation studies were undertaken to examine the error rate for parsimony inference. Three kinds of errors were examined: random error, systematic bias (both properties of finite data sets), and statistical inconsistency owing to long-branch attraction (an asymptotic property). Parsimony reconstructions were extremely biased for third-position data for psbB. Regardless of the true underlying tree, a tree in which Gnetales are sister to all other seed plants was likely to be reconstructed for these data. None of the combinations of genes or partitions permits the anthophyte tree to be reconstructed with high probability. Simulations of progressively larger data sets indicate the existence of long-branch attraction (statistical inconsistency) for third-position psbB data if either the anthophyte tree or the gymnosperm tree is correct. This is also true for the anthophyte tree using either psaA third positions or psbB first and second positions. A factor contributing to bias and inconsistency is extremely short branches at the base of the seed plant radiation, coupled with extremely high rates in Gnetales and non-seed-plant outgroups.
M. J. Sanderson, M. F. Wojciechowski, J.-M. Hu, T. Sher Khan, and S. G. Brady
Abstract:
Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
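For reference, the quantitative relationship the abstract refers to is usually stated via a transform of the surrogate loss; the display below is a paraphrase of that statement, with notation assumed rather than quoted from the paper.

```latex
% Paraphrase of the 0-1 / surrogate risk relationship; psi denotes the
% variational transform of the surrogate loss phi (notation assumed).
\[
  \psi\bigl(R(f) - R^{*}\bigr) \le R_{\phi}(f) - R_{\phi}^{*},
\]
where $R$ is the 0--1 risk, $R_{\phi}$ the surrogate risk, and $R^{*}$,
$R_{\phi}^{*}$ the corresponding minimal risks; for example, the hinge
loss yields $\psi(\theta) = |\theta|$ and the exponential loss
$\psi(\theta) = 1 - \sqrt{1 - \theta^{2}}$.
```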
Abstract:
We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and Gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes. We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines.
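For orientation, here is one standard form of the definition and of the generic risk bound such complexities yield; normalizations and constants vary across papers, so treat these as indicative rather than the paper's exact statements.

```latex
% Empirical Rademacher complexity and a generic risk bound it yields;
% the normalization follows one common convention and may differ from
% the paper's.
\[
  \hat{\mathcal{R}}_n(\mathcal{F}) =
  \mathbb{E}_{\sigma}\,\sup_{f \in \mathcal{F}}
  \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i),
  \qquad \sigma_1, \dots, \sigma_n \ \text{i.i.d. uniform on } \{\pm 1\},
\]
and, for functions taking values in $[0,1]$, with probability at least
$1 - \delta$ over an i.i.d. sample, for all $f \in \mathcal{F}$,
\[
  \mathbb{E} f \le \frac{1}{n} \sum_{i=1}^{n} f(x_i)
  + 2\, \mathcal{R}_n(\mathcal{F})
  + \sqrt{\frac{\ln(1/\delta)}{2n}}.
\]
```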
Abstract:
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of the concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of $n\binom{n-1}{\le d-1}\big/\binom{n}{\le d} < d$, where $\binom{m}{\le k} = \sum_{i=0}^{k}\binom{m}{i}$, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic-topological property of maximum classes of VC-dimension d: they are d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum-degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling): that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.