939 results for graph anonymization
Abstract:
This thesis focuses mainly on linear algebraic aspects of combinatorics. Let N_t(H) be the incidence matrix of edges versus all subhypergraphs of a complete hypergraph that are isomorphic to H. Richard M. Wilson and the author find a general formula for the Smith normal form or diagonal form of N_t(H) for all simple graphs H and for a very general class of t-uniform hypergraphs H.
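As a hedged illustration of the kind of computation involved (a toy instance of our own choosing, not the thesis's general formula), one can compute the Smith normal form of the inclusion matrix of edges versus triangles of K_4 with SymPy:

    from itertools import combinations

    from sympy import Matrix, ZZ
    from sympy.matrices.normalforms import smith_normal_form

    # Toy instance: rows are the 6 edges of K_4, columns its 4 triangles
    # (the copies of H = K_3); the entry is 1 when the edge lies in the triangle.
    vertices = range(4)
    edges = list(combinations(vertices, 2))
    triangles = list(combinations(vertices, 3))
    N = Matrix([[int(set(e) <= set(t)) for t in triangles] for e in edges])

    # The diagonal entries of the Smith normal form are the invariant factors of N.
    print(smith_normal_form(N, domain=ZZ))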
As a continuation, the author determines the formula for diagonal forms of integer matrices obtained from other combinatorial structures, including incidence matrices for subgraphs of a complete bipartite graph and inclusion matrices for multisets.
One major application of diagonal forms is in zero-sum Ramsey theory. For instance, Caro's results on zero-sum Ramsey numbers for graphs and Caro and Yuster's results on zero-sum bipartite Ramsey numbers can be reproduced. These results are further generalized to t-uniform hypergraphs. Other applications include signed bipartite graph designs.
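For orientation, one common formulation of the central definition (hedged: notation and side conditions vary across the literature; the number is usually defined when k divides the number of edges of G):

    % Zero-sum Ramsey number of a graph G over Z_k (one standard formulation):
    R(G,\mathbb{Z}_k) \;=\; \min\Bigl\{\, n \;:\; \text{every } c\colon E(K_n)\to\mathbb{Z}_k
    \text{ yields a copy of } G \text{ with } \sum_{e\in E(G)} c(e) \equiv 0 \pmod{k} \,\Bigr\}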
Research results on some other problems are also included in this thesis, such as a Ramsey-type problem on equipartitions, Hartman's conjecture on large sets of designs and a matroid theory problem proposed by Welsh.
Abstract:
A classical question in combinatorics is the following: given a partial Latin square $P$, when can we complete $P$ to a Latin square $L$? In this paper, we investigate the class of \textbf{$\epsilon$-dense partial Latin squares}: partial Latin squares in which each symbol, row, and column contains no more than $\epsilon n$-many nonblank cells. Based on a conjecture of Nash-Williams, Daykin and Häggkvist conjectured that all $\frac{1}{4}$-dense partial Latin squares are completable. In this paper, we will discuss the proof methods and results used in previous attempts to resolve this conjecture, introduce a novel technique derived from a paper by Jacobson and Matthews on generating random Latin squares, and use this novel technique to study $\epsilon$-dense partial Latin squares that contain no more than $\delta n^2$ filled cells in total.
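As a small sketch (function and variable names are ours, not the paper's), the $\epsilon$-density of a partial Latin square can be checked directly from the definition:

    def epsilon_density(square):
        """Smallest eps such that `square` is eps-dense: every row, column, and
        symbol is used at most eps*n times. Blanks are None; symbols are 1..n."""
        n = len(square)
        rows = [sum(v is not None for v in row) for row in square]
        cols = [sum(row[j] is not None for row in square) for j in range(n)]
        syms = {}
        for row in square:
            for v in row:
                if v is not None:
                    syms[v] = syms.get(v, 0) + 1
        return max(rows + cols + list(syms.values())) / n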
In Chapter 2, we construct completions for all $\epsilon$-dense partial Latin squares containing no more than $\delta n^2$ filled cells in total, given that $\epsilon < \frac{1}{12}$ and $\delta < \frac{(1-12\epsilon)^{2}}{10409}$. In particular, we show that all $9.8 \cdot 10^{-5}$-dense partial Latin squares are completable. In Chapter 4, we improve these results by roughly a factor of two using probabilistic techniques. These results improve prior work by Gustavsson, which required $\epsilon = \delta \leq 10^{-7}$, as well as Chetwynd and Häggkvist, which required $\epsilon = \delta = 10^{-5}$ and $n$ even and greater than $10^7$.
If we omit the probabilistic techniques noted above, we further show that such completions can always be found in polynomial time. This contrasts with a result of Colbourn, which states that completing arbitrary partial Latin squares is an NP-complete task. In Chapter 3, we strengthen Colbourn's result to the claim that completing an arbitrary $\left(\frac{1}{2} + \epsilon\right)$-dense partial Latin square is NP-complete, for any $\epsilon > 0$.
Colbourn's result hinges heavily on a connection between triangulations of tripartite graphs and Latin squares. Motivated by this, we use our results on Latin squares to prove that any tripartite graph $G = (V_1, V_2, V_3)$ such that
• $|V_1| = |V_2| = |V_3| = n$,
• for every vertex $v \in V_i$, $\deg_+(v) = \deg_-(v) \geq (1-\epsilon)n$, and
• $|E(G)| > (1 - \delta) \cdot 3n^2$
admits a triangulation, provided $\epsilon < \frac{1}{132}$ and $\delta < \frac{(1 - 132\epsilon)^2}{83272}$. In particular, this holds when $\epsilon = \delta = 1.197 \cdot 10^{-5}$. This strengthens a result of Gustavsson, which required $\epsilon = \delta = 10^{-7}$.
In an unrelated vein, Chapter 6 explores the class of \textbf{quasirandom graphs}, a notion first introduced by Chung, Graham and Wilson \cite{chung1989quasi} in 1989. Roughly speaking, a sequence of graphs is called "quasirandom" if it has a number of properties possessed by the random graph, all of which turn out to be equivalent. In this chapter, we study possible extensions of these results to random $k$-edge-colorings, and create an analogue of Chung, Graham and Wilson's result for such colorings.
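One of the equivalent properties, the four-cycle count, suggests a simple numerical check (a sketch under our naming; trace(A^4) counts closed 4-walks, which are dominated by labeled 4-cycles for large n):

    import numpy as np

    def c4_ratio(A):
        """Chung-Graham-Wilson style C4 check: for a quasirandom 0/1 adjacency
        matrix with edge density p, trace(A^4) / (p^4 * n^4) tends to 1."""
        n = A.shape[0]
        p = A.sum() / (n * (n - 1))
        return np.trace(np.linalg.matrix_power(A, 4)) / (p**4 * n**4)

    # e.g. for G(n, 1/2) samples the ratio approaches 1 as n grows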
Abstract:
This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for symmetric positive-semidefinite (SPSD) matrices.
Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.
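A minimal sketch of one common nonuniform scheme (squared-magnitude sampling with unbiased rescaling; the thesis's schemes and error norms may differ):

    import numpy as np

    def sparsify(A, s, seed=0):
        """Keep entry (i, j) with probability p_ij ~ s * A_ij**2 / ||A||_F**2,
        rescaling kept entries by 1/p_ij so that E[sparsified] = A."""
        rng = np.random.default_rng(seed)
        p = np.minimum(1.0, s * A**2 / (A**2).sum())
        keep = rng.random(A.shape) < p
        S = np.zeros_like(A)
        S[keep] = A[keep] / p[keep]
        return S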
Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided.
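A hedged sketch of this idea using a subsampled randomized Fourier transform, one standard instance of fast randomized unitary mixing (parameter names are ours; not necessarily the variants analyzed in the thesis):

    import numpy as np

    def srft_lowrank(A, k, ell=None, seed=1):
        """Rank-k approximation factors: mix columns with random signs + FFT,
        subsample ell columns, orthogonalize, then SVD the small matrix."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        ell = ell or k + 10
        signs = rng.choice([-1.0, 1.0], size=n)
        Y = np.fft.fft(A * signs, axis=1)              # fast unitary-like mixing
        cols = rng.choice(n, size=ell, replace=False)  # uniform column subsample
        Q, _ = np.linalg.qr(Y[:, cols])                # orthonormal range basis
        U, s, Vt = np.linalg.svd(Q.conj().T @ A, full_matrices=False)
        return (Q @ U)[:, :k], s[:k], Vt[:k]           # A ~ (QU) diag(s) Vt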
The last class of algorithms considered consists of SPSD "sketching" algorithms. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated using a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds.
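A minimal sketch of one such scheme, a column-sampling Nyström-style SPSD sketch (uniform sampling for simplicity; the schemes evaluated in the thesis may differ):

    import numpy as np

    def nystrom(A, ell, seed=2):
        """SPSD sketch: A ~ C @ pinv(W) @ C.T, with C = A[:, idx] and
        W = A[idx, idx] built from ell uniformly sampled columns."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(A.shape[0], size=ell, replace=False)
        C = A[:, idx]
        W = A[np.ix_(idx, idx)]
        return C @ np.linalg.pinv(W) @ C.T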
In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.
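For reference, one standard matrix Chernoff bound of this type (Tropp's formulation for sums of independent PSD matrices; the thesis's all-eigenvalue versions refine bounds of this shape):

    % Y = sum_k X_k, with X_k independent, PSD, d x d, lambda_max(X_k) <= R:
    \Pr\bigl\{\lambda_{\max}(Y) \ge (1+\delta)\,\mu_{\max}\bigr\}
    \;\le\; d\left[\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right]^{\mu_{\max}/R},
    \qquad \mu_{\max} = \lambda_{\max}\!\Bigl(\textstyle\sum_k \mathbb{E}\,X_k\Bigr)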
Abstract:
There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.
In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:
- For a given number of measurements, can we reliably estimate the true signal?
- If so, how good is the reconstruction as a function of the model parameters?
More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
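To make (i) concrete, a minimal proximal-gradient (ISTA) sketch for the lasso (our implementation, not the dissertation's):

    import numpy as np

    def ista_lasso(A, y, lam, steps=500):
        """Minimize 0.5*||Ax - y||_2^2 + lam*||x||_1 by proximal gradient."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            g = x - A.T @ (A @ x - y) / L      # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        return x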
Abstract:
Flash memory is a leading storage medium with excellent features such as random access and high storage density. However, it also faces significant reliability and endurance challenges. In flash memory, the charge level in the cells can be easily increased, but removing charge requires an expensive erasure operation. In this thesis we study rewriting schemes that enable the data stored in a set of cells to be rewritten by only increasing the charge levels in the cells. We consider two types of modulation schemes: a conventional modulation based on the absolute levels of the cells, and a recently proposed scheme based on the relative cell levels, called rank modulation. The contributions of this thesis to the study of rewriting schemes for rank modulation include the following: we
• propose a new method of rewriting in rank modulation, beyond the previously proposed method of “push-to-the-top” (a toy sketch of that operation follows this list);
• study the limits of rewriting with the newly proposed method, and derive a tight upper bound of 1 bit per cell;
• extend the rank-modulation scheme to support rankings with repetitions, in order to improve the storage density;
• derive a tight upper bound of 2 bits per cell for rewriting in rank modulation with repetitions;
• construct an efficient rewriting scheme that asymptotically approaches the upper bound of 2 bits per cell.
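For intuition, a minimal sketch of the basic “push-to-the-top” operation on a relative ranking (our toy model, with the highest-charged cell listed first):

    def push_to_top(ranking, cell):
        """Raise `cell`'s charge above all other cells, moving it to the
        front of the relative-ranking permutation."""
        return [cell] + [c for c in ranking if c != cell]

    # Example: push_to_top([2, 0, 1], 1) -> [1, 2, 0]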
The next part of this thesis studies rewriting schemes for a conventional absolute-levels modulation. The considered model is called “write-once memory” (WOM). We focus on WOM schemes that achieve the capacity of the model. In recent years several capacity-achieving WOM schemes were proposed, based on polar codes and randomness extractors. The contributions of this thesis to the study of WOM schemes include the following (a toy WOM code is sketched after the list): we
• propose a new capacity-achieving WOM scheme based on sparse-graph codes, and show its attractive properties for practical implementation;
• improve the design of polar WOM schemes to remove the reliance on shared randomness and include an error-correction capability.
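To illustrate the WOM model itself (a classic textbook example, not the thesis's capacity-achieving constructions), the Rivest-Shamir code writes 2 bits twice into 3 write-once binary cells:

    # First- and second-write codebooks for messages 0..3.
    FIRST  = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 1, 0), 3: (0, 0, 1)}
    SECOND = {0: (1, 1, 1), 1: (0, 1, 1), 2: (1, 0, 1), 3: (1, 1, 0)}

    def read(state):
        table = FIRST if sum(state) <= 1 else SECOND
        return next(m for m, c in table.items() if c == state)

    def write(state, msg):
        if sum(state) > 0 and read(state) == msg:
            return state                # message unchanged: no writing needed
        target = FIRST[msg] if sum(state) == 0 else SECOND[msg]
        assert all(t >= s for s, t in zip(state, target)), "charge only increases"
        return target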
The last part of the thesis studies the local rank-modulation (LRM) scheme, in which a sliding window going over a sequence of real-valued variables induces a sequence of permutations. The LRM scheme is used to simulate a single conventional multi-level flash cell. The simulated cell is realized by a Gray code traversing all the relative-value states where, physically, the transition between two adjacent states in the Gray code is achieved by using a single “push-to-the-top” operation. The main results of the last part of the thesis are two constructions of Gray codes with asymptotically optimal rate.
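As a toy illustration of such codes (a brute-force search under our naming, feasible only for tiny n, and not the thesis's constructions), one can look for a cyclic Gray code over all rankings of n cells in which every transition is a single “push-to-the-top”:

    from math import factorial

    def find_cyclic_code(n):
        """Depth-first search for a cycle through all n! rankings where each
        step pushes one non-top cell to the top."""
        def moves(p):
            return [(p[i],) + p[:i] + p[i + 1:] for i in range(1, n)]
        start = tuple(range(n))
        def dfs(path, seen):
            if len(path) == factorial(n):
                return path if start in moves(path[-1]) else None
            for q in moves(path[-1]):
                if q not in seen:
                    found = dfs(path + [q], seen | {q})
                    if found:
                        return found
            return None
        return dfs([start], {start})

    print(find_cyclic_code(3))   # a cycle through all 6 rankings of 3 cells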
Abstract:
Multi-finger caging offers a rigorous and robust approach to robot grasping. This thesis provides several novel algorithms for caging polygons and polyhedra in two and three dimensions. Caging refers to a robotic grasp that does not necessarily immobilize an object, but prevents it from escaping to infinity. The first algorithm considers caging a polygon in two dimensions using two point fingers. The second algorithm extends the first to three dimensions. The third algorithm considers caging a convex polygon in two dimensions using three point fingers, and considers robustness of this cage to variations in the relative positions of the fingers.
This thesis describes an algorithm for finding all two-finger cage formations of planar polygonal objects based on a contact-space formulation. It shows that two-finger cages have several useful properties in contact space. First, the critical points of the cage representation in the hand’s configuration space appear as critical points of the inter-finger distance function in contact space. Second, these critical points can be graphically characterized directly on the object’s boundary. Third, contact space admits a natural rectangular decomposition such that all critical points lie on the rectangle boundaries, and the sublevel sets of contact space and free space are topologically equivalent. These properties lead to a caging graph that can be readily constructed in contact space. Starting from a desired immobilizing grasp of a polygonal object, the caging graph is searched for the minimal, intermediate, and maximal caging regions surrounding the immobilizing grasp. An example constructed from real-world data illustrates and validates the method.
A second algorithm is developed for finding caging formations of a 3D polyhedron for two point fingers using a lower-dimensional contact-space formulation. Results from the two-dimensional algorithm are extended to three dimensions. Critical points of the inter-finger distance function are shown to be identical to the critical points of the cage. A decomposition of contact space into 4D regions having useful properties is demonstrated. A geometric analysis of the critical points of the inter-finger distance function results in a catalog of grasps in which the cages change topology, leading to a simple test to classify critical points. With these properties established, the search algorithm from the two-dimensional case may be applied to the three-dimensional problem. An implemented example demonstrates the method.
This thesis also presents a study of cages of convex polygonal objects using three point fingers. It considers a three-parameter model of the relative position of the fingers, which gives complete generality for three point fingers in the plane. It analyzes robustness of caging grasps to variations in the relative position of the fingers without breaking the cage. Using a simple decomposition of free space around the polygon, we present an algorithm which gives all caging placements of the fingers and a characterization of the robustness of these cages.
Abstract:
In today's economy, innovation is considered to be one of the main driving forces behind business competitiveness, if not the most relevant one. Traditionally, the study of innovation has been addressed from different perspectives. Recently, the literature on knowledge management and intellectual capital has provided new insights. Considering this, the aim of this paper is to analyze the impact of different organizational conditions, i.e. structural capital, on innovation capability and innovation performance, from an intellectual capital (IC) perspective. As regards innovation capability, two dimensions are considered: new idea generation and innovation project management. The population under study is made up of technology-based Colombian firms. In order to gather information about the relevant variables involved in the research, a questionnaire was designed and addressed to the CEOs of the companies making up the target population. The sample analyzed consists of 69 companies and is large enough to carry out a statistical study based on structural equation modelling (partial least squares approach) using the PLS-Graph software (Chin and Frye, 2003). The results show that structural capital explains to a great extent both the effectiveness of the new idea generation process and that of innovation project management. However, the influence of each specific organizational component making up structural capital (organizational design, organizational culture, hiring and professional development policies, innovation strategy, technological capital, and external structure) varies. Moreover, successful innovation project management is the only innovation capability dimension that exerts a significant impact on company performance.
Abstract:
This report is an introduction to the concept of treewidth, a property of graphs that has important implications in algorithms. Some basic concepts of graph theory are presented in the first chapter for those readers who are not familiar with the notation. In Chapter 2, the definition of treewidth and some different ways of characterizing it are explained. The last two chapters focus on the algorithmic implications of treewidth, which are very relevant in computer science. An algorithm to compute the treewidth of a graph is presented, and its result can later be applied to many other problems in graph theory, like those introduced in the last chapter.
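As a hedged pointer, heuristic tree decompositions of the kind discussed here can be computed with NetworkX (the report itself may use different algorithms):

    import networkx as nx
    from networkx.algorithms.approximation import treewidth_min_degree

    G = nx.cycle_graph(6)                    # a cycle on 6 vertices has treewidth 2
    width, decomp = treewidth_min_degree(G)  # heuristic upper bound + decomposition
    print(width)                             # 2
    print(list(decomp.nodes))                # the bags of the tree decomposition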
Abstract:
Spectral methods are useful tools in data analysis, capable of providing information about the organizational structure of data. Data clustering using spectral methods is commonly based on similarity relations defined between the data points. The goal of this work is to study the clustering capability of spectral methods and their behavior in limiting cases. We consider a set of points in the plane and take the similarity between nodes to be the inverse of the Euclidean distance. We analyze the minimum distance between two central points at which spectral clustering is still able to regroup the data into two distinct clusters. In addition, we study the regrouping capability when the dispersion of the data is increased. Initially, experiments were carried out considering a fixed distance between two points, around which the data are generated, and the distance between these points was then reduced until the method became unable to separate the points into two distinct clusters. Next, restoring the initial distance, the data were generated with an added normal perturbation of increasing variance, and we observed up to which variance value the method correctly separated the data into two distinct clusters. Finally, using a set of points obtained from running a differential evolution algorithm to solve a multimodal problem, we test the method's ability to separate the individuals into different clusters.
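A minimal sketch of the setup described above (our code): similarity as inverse Euclidean distance, bipartition by the sign of the Fiedler vector of the graph Laplacian:

    import numpy as np

    def spectral_bipartition(X, eps=1e-12):
        """Split the points in X (rows) into two groups by the sign of the
        second-smallest eigenvector of the inverse-distance graph Laplacian."""
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        W = 1.0 / np.maximum(D, eps)           # similarity = inverse distance
        np.fill_diagonal(W, 0.0)
        L = np.diag(W.sum(axis=1)) - W         # unnormalized graph Laplacian
        _, vecs = np.linalg.eigh(L)
        return vecs[:, 1] > 0                  # Fiedler vector sign -> two groups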
Abstract:
Using an event study, this work analyzes the impact of share repurchase announcements on the prices of the shares themselves, taking as its sample the companies that announced the acquisition of shares of their own issue through the publication of a material fact with the Comissão de Valores Mobiliários (CVM) between 2003 and 2009. The study assumes semi-strong market efficiency and identifies a statistically significant abnormal return on day one of the event, that is, one day after the announcement. The three-day cumulative returns are regressed against repurchase data, and the results reinforce the signaling hypothesis, already suggested by the plot of the cumulative abnormal return, with the returns indicating a rise driven by price pressure. The regression models used, which include accounting variables associated with other explanatory hypotheses, find no significant results supporting the other possible motivations proposed in the academic literature.
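To make the methodology concrete, a hedged market-model sketch (our variable names; the study's exact specification may differ):

    import numpy as np

    def abnormal_returns(stock, market, est, event):
        """Fit stock = a + b*market over the estimation index array `est`, then
        return actual minus predicted returns over the event index array `event`."""
        b, a = np.polyfit(market[est], stock[est], 1)
        return stock[event] - (a + b * market[event])

    # The cumulative abnormal return (CAR) over a 3-day event window is then
    # abnormal_returns(stock, market, est, event).sum()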
Abstract:
In this study, we investigated the qualitative and quantitative organization of splenic tissue in the acute phase (ninth week) of Schistosoma mansoni infection in Swiss mice fed either a hypercholesterolemic diet (29% lipids) or a standard chow (12% lipids). Spleen volume was measured by Scherle's method. Histological sections (5 µm) were stained with hematoxylin-eosin, Lennert's Giemsa, and Picrosirius. The tissue was evaluated by histopathology, morphometry, and stereology. Statistical analysis was performed using the GraphPad InStat software. Our results showed that both diet and infection cause disturbances in splenic compartments. Infected mice, regardless of diet, had enlarged spleens (P=0.004) and disorganized splenic architecture compared with uninfected mice. The white pulp compartment was reduced, while the red pulp and germinal centers were enlarged (P<0.01). Infected mice fed the high-fat diet had a larger white pulp diameter (P=0.013) than uninfected ones. Microscopic examination of the spleens of uninfected mice fed the hypercholesterolemic chow showed that the white pulp was reduced and indistinct. In addition, infected mice showed cellular infiltrates characterized by polymorphonuclear cells, with intense lymphocytic mitosis and Mott cells. Hemosiderin tended to be present to a lesser degree in infected mice than in uninfected controls. The red pulp showed a significant increase (P=0.008) in the mean number of megakaryocytes compared with uninfected mice. Mice fed the standard diet showed exudative-productive granulomas around S. mansoni eggs, only sparsely distributed in the red pulp, whereas a tissue response characterized by cellular infiltration was found in mice fed the high-fat diet. The results of this study suggest that both the intake of a high-fat diet and the infection contributed to splenic disorganization.
Abstract:
Starting from an understanding of the new social possibilities afforded by the Internet, this work aims to investigate sociability in virtual social networks through the development of social capital among the members of these networks. We seek to understand the motivations that allow social relationships to be built and maintained in, and from, the virtual space, identifying the factors that cause such relationships to materialize in the offline space. To this end, we carried out a case study of a social network of motorcyclists, the Tornadeiros website. We were able to apprehend the context of interaction among the members of this network, and how the strengthening of social capital drives the displacement of relationships from the virtual environment to urban space, leading to the consolidation of affective bonds between individuals that were initially expected to be trivial and ephemeral, given the spatio-temporal gap between these actors. To explore these dimensions, we began the ethnographic work in cyberspace and later continued it in urban space. The ethnography in cyberspace consisted of administering an online questionnaire to determine the profile of the social network's members and compiling all the post content available in the site's collective memory. The compiled data were then processed to determine the topology of the interaction network among the members. From this material, we selected 17 discourses for study, combining discourse analysis with observations drawn from the graph of the site's interaction network. Finally, in the second ethnographic stage, we compared the results with face-to-face interviews, making it possible to perceive the establishment and maintenance of social relationships based on the social capital developed in this network.
Abstract:
Graphs of variations of zooplankton biomass, expressed as ash-free dry weight (i.e. organic matter), are presented for the 1969-1979 period. The graph of the average year shows an enrichment season from mid-July till mid-November, in which the biomass is 2.3 times higher than during the rest of the year and which is characterized by a slight decrease of the biomass in late August or early September. The warm season is divided into a period of moderate biomass from November till February and a period of steady decline of the biomass till the start of the upwelling at the end of June.
Abstract:
The quantum aspects of field theories formulated on noncommutative spacetime have been widely studied over the years. One of the main aspects is what became known in the literature as UV/IR mixing. This is a mixing of divergences, first observed in the work of Minwalla et al. [28], where, in a study of the noncommutative scalar field with quartic interaction, one already sees at one loop that the tadpole has a UV divergence associated with its planar part and, along with it, an IR divergence associated with a nonplanar graph. This mixing renders the theory nonrenormalizable. Given this problem, a search followed for mechanisms that would separate these divergences in order to obtain renormalizable theories. One proposed mechanism was the addition of a nonlocal term to the U*(1) action so that it becomes stable. In this work, we study the stability of this model through algebraic renormalization. To do so, we need to localize the nonlocal operator through auxiliary fields and their respective ghosts (Zwanziger's method), with the aim of removing the unwanted degrees of freedom that arise. We use the soft BRST breaking approach to analyze the BRST-breaking term, which consists in rewriting that term with the help of external sources that, in a certain physical limit, return to the original term. As a result, we found that the theory with this term added to the action is renormalizable only if new terms are introduced, some of them quartic. However, these terms change the form of the propagator, which then does not decouple the divergences. Another aspect worth highlighting is that, depending on the choice of some parameters, the propagator gives indications of a confining photon, following Wilson's criterion and the criterion of loss of positivity of the propagator.