981 results for Set functions.
Abstract:
The connections between convexity and submodularity are explored for the purposes of minimizing and learning submodular set functions.
First, we develop a novel method for minimizing a particular class of submodular functions, namely those that can be expressed as a sum of concave functions composed with modular functions. The basic algorithm applies an accelerated first-order method to a smoothed version of the function's convex extension. The smoothing technique is the key contribution, as it allows us to treat general concave potentials without constructing a piecewise-linear approximation, as graph-based techniques require.
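The convex extension referenced here is the Lovász extension. As a minimal, hypothetical sketch (not the thesis's actual algorithm), the following evaluates the Lovász extension of a set function via the standard greedy/sorting formula, using a concave-of-modular term `F(S) = sqrt(|S|)` as an example from the stated function class:

```python
import numpy as np

def lovasz_extension(F, x):
    """Evaluate the Lovasz extension of set function F at a point x.

    F maps a Python set of indices to a real value with F(set()) = 0 or not;
    the formula f(x) = sum_i x[sigma(i)] * (F(S_i) - F(S_{i-1})) is used,
    where sigma sorts x in decreasing order and S_i holds its top-i indices.
    """
    order = np.argsort(-x)          # indices of x, largest value first
    total, prev, S = 0.0, F(set()), set()
    for i in order:
        S.add(int(i))
        cur = F(S)
        total += x[i] * (cur - prev)
        prev = cur
    return total

# Example term from the class in the abstract: a concave function (sqrt)
# of a modular weight (here, unit weights, so w(S) = |S|).
F = lambda S: np.sqrt(len(S))
```

On indicator vectors of sets the extension agrees with `F` itself, which is the property that lets a minimizer of the convex extension recover a minimizer of the set function.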
Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.
Lastly, we approach the problem of learning set functions from an unorthodox perspective---sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of recovering sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine several function classes for which uniform reconstruction is possible.
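The Fourier transform for set functions mentioned here is computable with the fast Walsh-Hadamard transform. As an illustrative sketch (the thesis's own conventions and normalization may differ), the following computes the unnormalized parity-basis coefficients of a set function given its full table of values; note how a modular function turns out to be sparse in this basis:

```python
def fourier_coefficients(values):
    """Fast Walsh-Hadamard transform of a set function.

    values[i] is F(S), where bit j of index i indicates whether element j
    belongs to S. Returns the 2^n unnormalized parity-basis coefficients.
    """
    a = list(values)
    h = 1
    while h < len(a):                      # len(a) must be a power of two
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):      # butterfly: sum and difference
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# The modular function F(S) = |S| on a 2-element ground set:
# values for {}, {1}, {2}, {1,2} are 0, 1, 1, 2.
coeffs = fourier_coefficients([0, 1, 1, 2])
```

Here only the degree-0 and degree-1 coefficients are nonzero, the kind of sparsity that makes reconstruction from few random evaluations plausible.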
Abstract:
In this paper we deal with the problem of obtaining the set of k-additive measures dominating a fuzzy measure. This problem extends the problem of deriving the set of probabilities dominating a fuzzy measure, an important problem appearing in Decision Making and Game Theory. The solution proposed in the paper follows the line developed by Chateauneuf and Jaffray for dominating probabilities and continued by Miranda et al. for dominating k-additive belief functions. Here, we address the general case, transforming the problem into a similar one in which the involved set functions have a non-negative Möbius transform; this simplifies the problem and allows a result similar to the one developed for belief functions. Although the set obtained is very large, we show that the conditions cannot be sharpened. On the other hand, we also show that it is possible to define a more restrictive subset, providing a more natural extension of the result for probabilities, from which any k-additive dominating measure can be derived.
Abstract:
A pivotal problem in Bayesian nonparametrics is the construction of prior distributions on the space M(V) of probability measures on a given domain V. In principle, such distributions on the infinite-dimensional space M(V) can be constructed from their finite-dimensional marginals---the most prominent example being the construction of the Dirichlet process from finite-dimensional Dirichlet distributions. This approach is both intuitive and applicable to the construction of arbitrary distributions on M(V), but also hamstrung by a number of technical difficulties. We show how these difficulties can be resolved if the domain V is a Polish topological space, and give a representation theorem directly applicable to the construction of any probability distribution on M(V) whose first moment measure is well-defined. The proof draws on a projective limit theorem of Bochner, and on properties of set functions on Polish spaces to establish countable additivity of the resulting random probabilities.
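The projective-limit construction alluded to above requires the finite-dimensional marginals to be consistent under coarsening of the partition. For the Dirichlet process this is the well-known aggregation property: merging cells and summing their parameters yields the marginal on the coarser partition. A toy sketch (illustrative only, not the paper's construction) checking the implied consistency of first moments:

```python
import numpy as np

def dirichlet_mean(alpha):
    """First moment of a Dirichlet(alpha) random probability vector."""
    alpha = np.asarray(alpha, dtype=float)
    return alpha / alpha.sum()

# Dirichlet parameters on a 4-cell partition of the domain V.
fine = [2.0, 3.0, 1.0, 4.0]
# Coarsen by merging cells {1,2} and {3,4}; aggregation says the
# parameters of the coarser marginal are the sums of the merged ones.
coarse = [fine[0] + fine[1], fine[2] + fine[3]]

m_fine = dirichlet_mean(fine)
m_coarse = dirichlet_mean(coarse)
# Consistency: summing fine-cell means reproduces the coarse-cell means,
# as projectivity of the marginals demands.
assert np.allclose([m_fine[0] + m_fine[1], m_fine[2] + m_fine[3]], m_coarse)
```

This consistency across all finite partitions is exactly what lets Bochner's projective limit theorem stitch the marginals into one distribution on M(V).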
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
(Summary of: Varbanova-Dencheva, K. Intellectual communications and contemporary technologies. Alternatives of the science libraries. Sofia, Marin Drinov academic publishing house, 2003, 114 p.) New technologies and globalization are the factors that have brought essential changes to human society and its environment. Unceasing dynamic change has imposed new strategies for the survival and prosperity of institutions and people under the new conditions. The spheres with the greatest potential for competitive advantage are those in which research results are implemented most quickly. Expanded knowledge requires both narrower specialization and interdisciplinarity to solve the problems that arise. The new research fields and trends are a synthesis of science and the high technologies determined by new discoveries. The present study aims to answer questions about the place of the science library in the dynamic restructuring of the research environment. The transformation of the scientific library's genetically set functions, from guardian of accumulated knowledge to active participant in the creation of new knowledge, is a natural consequence of the processes and tendencies of the social medium. The priorities of Europe and the USA place the intensive creation of a knowledge economy first, and this requires intensifying the research of which the new communications, realized at a new technological level, are an integral part.
Abstract:
Caches hide the growing latency of accesses to the main memory from the processor by storing the most recently used data on-chip. To limit the search time through the caches, they are organized in a direct mapped or set-associative way. Such an organization introduces many conflict misses that hamper performance. This paper studies randomizing set index functions, a technique to place the data in the cache in such a way that conflict misses are avoided. The performance of such a randomized cache strongly depends on the randomization function. This paper discusses a methodology to generate randomization functions that perform well over a broad range of benchmarks. The methodology uses profiling information to predict the conflict miss rate of randomization functions. Then, using this information, a search algorithm finds the best randomization function. Due to implementation issues, it is preferable to use a randomization function that is extremely simple and can be evaluated in little time. For these reasons, we use randomization functions where each randomized address bit is computed as the XOR of a subset of the original address bits. These functions are chosen such that they operate on as few address bits as possible and have few inputs to each XOR. This paper shows that to index a 2^m-set cache, it suffices to randomize m+2 or m+3 address bits and to limit the number of inputs to each XOR to 2 bits to obtain the full potential of randomization. Furthermore, it is shown that the randomization function that we generate for one set of benchmarks also works well for an entirely different set of benchmarks. Using the described methodology, it is possible to reduce the implementation cost of randomization functions with only an insignificant loss in conflict reduction.
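The XOR-of-address-bits scheme described above can be sketched in a few lines. The helper name `make_xor_index` and the particular bit subsets below are hypothetical, chosen only to illustrate the shape of such a function for a small 8-set (m = 3) cache:

```python
def make_xor_index(bit_subsets):
    """Build a randomized set index function.

    bit_subsets[k] lists the block-address bit positions whose XOR forms
    set-index bit k; the paper limits each subset to about 2 bits and
    draws them from only m+2 or m+3 address bits for a 2^m-set cache.
    """
    def index(block_addr):
        idx = 0
        for k, bits in enumerate(bit_subsets):
            b = 0
            for pos in bits:
                b ^= (block_addr >> pos) & 1   # XOR the selected bits
            idx |= b << k
        return idx
    return index

# Hypothetical function for an 8-set cache, using address bits 0..5
# with exactly two inputs per XOR:
f = make_xor_index([[0, 3], [1, 4], [2, 5]])
```

With a conventional modulo index, addresses 0 and 8 would conflict in an 8-set cache; under `f` they map to sets 0 and 1, spreading the blocks apart.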
Abstract:
Randomising set index functions can reduce the number of conflict misses in data caches by spreading the cache blocks uniformly over all sets. Typically, the randomisation functions compute the exclusive-ORs of several address bits. Not all randomising set index functions perform equally well, which calls for evaluating many set index functions. This paper discusses and improves a technique that tackles this problem by predicting the miss rate incurred by a randomisation function, based on profiling information. A new way of looking at randomisation functions is used, namely the null space of the randomisation function. The members of the null space describe pairs of cache blocks that are mapped to the same set. This paper presents an analytical model of the error made by the technique and uses this to propose several optimisations. The technique is then applied to generate a conflict-free randomisation function for the SPEC benchmarks.
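The null-space view works because an XOR-based index function is linear over GF(2): two block addresses a and b land in the same set exactly when their bitwise difference a XOR b is mapped to index 0. A small illustrative sketch (the bit subsets are hypothetical, not from the paper):

```python
def xor_index(addr, bit_subsets):
    """Set index: bit k is the XOR of the address bits in bit_subsets[k]."""
    idx = 0
    for k, bits in enumerate(bit_subsets):
        b = 0
        for pos in bits:
            b ^= (addr >> pos) & 1
        idx |= b << k
    return idx

def in_null_space(d, bit_subsets):
    """By linearity over GF(2), addresses a and b conflict (map to the
    same set) exactly when d = a ^ b lies in the function's null space."""
    return xor_index(d, bit_subsets) == 0

subsets = [[0, 2], [1, 3]]   # hypothetical 4-set example
a, b = 0b0101, 0b0000
same_set = xor_index(a, subsets) == xor_index(b, subsets)
assert in_null_space(a ^ b, subsets) == same_set
```

Enumerating or profiling the null-space members thus directly counts the address pairs that can conflict, which is what the miss-rate prediction exploits.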
Abstract:
procera (pro) is a tall tomato (Solanum lycopersicum) mutant carrying a point mutation in the GRAS region of the gene encoding SlDELLA, a repressor in the gibberellin (GA) signaling pathway. Consistent with the SlDELLA loss of function, pro plants display a GA-constitutive response phenotype, mimicking wild-type plants treated with GA3. The ovaries from both nonemasculated and emasculated pro flowers had very strong parthenocarpic capacity, associated with enhanced growth of preanthesis ovaries due to more and larger cells. pro parthenocarpy is facultative because seeded fruits were obtained by manual pollination. Most pro pistils had exserted stigmas, thus preventing self-pollination, similar to wild-type pistils treated with GA3 or auxins. However, Style2.1, a gene responsible for long styles in noncultivated tomato, may not control the enhanced style elongation of pro pistils, because its expression was not higher in pro styles and did not increase upon GA3 application. Interestingly, a high percentage of pro flowers had meristic alterations, with one additional petal, sepal, stamen, and carpel at each of the four whorls, respectively, thus unveiling a role of SlDELLA in flower organ development. Microarray analysis showed significant changes in the transcriptome of preanthesis pro ovaries compared with the wild type, indicating that the molecular mechanism underlying the parthenocarpic capacity of pro is complex and that it is mainly associated with changes in the expression of genes involved in GA and auxin pathways. Interestingly, it was found that GA activity modulates the expression of cell division and expansion genes and an auxin signaling gene (tomato AUXIN RESPONSE FACTOR7) during fruit-set.
Abstract:
Moment invariants have been thoroughly studied and repeatedly proposed as one of the most powerful tools for 2D shape identification. In this paper a set of such descriptors is proposed, whose basis functions are discontinuous at a finite number of points. The goal of using discontinuous functions is to avoid the Gibbs phenomenon, and therefore to yield a better approximation capability for discontinuous signals, such as images. Moreover, the proposed set of moments allows the definition of rotation invariants, this being the other main design concern. Translation and scale invariance are achieved by means of standard image normalization. Tests are conducted to evaluate the behavior of these descriptors in noisy environments, where images are corrupted with Gaussian noise at different SNR values. Results are compared to those obtained using Zernike moments, showing that the proposed descriptor has the same performance in image retrieval tasks in noisy environments, while demanding much less computational power for every stage of the query chain.