43 results for MATCHINGS
Abstract:
We consider two-sided many-to-many matching markets in which each worker may work for multiple firms and each firm may hire multiple workers. We study individual and group manipulations in centralized markets that employ (pairwise) stable mechanisms and that require participants to submit rank order lists of agents on the other side of the market. We are interested in simple preference manipulations that have been reported and studied in empirical and theoretical work: truncation strategies, which are the lists obtained by removing a tail of least preferred partners from a preference list, and the more general dropping strategies, which are the lists obtained by only removing partners from a preference list (i.e., no reshuffling). We study when truncation/dropping strategies are exhaustive for a group of agents on the same side of the market, i.e., when each match resulting from preference manipulations can be replicated or improved upon by some truncation/dropping strategies. We prove that for each stable mechanism, truncation strategies are exhaustive for each agent with quota 1 (Theorem 1). We show that this result can be extended neither to group manipulations (even when all quotas equal 1; Example 1), nor to individual manipulations when the agent's quota is larger than 1 (even when all other agents' quotas equal 1; Example 2). Finally, we prove that for each stable mechanism, dropping strategies are exhaustive for each group of agents on the same side of the market (Theorem 2), i.e., independently of the quotas.
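As a concrete illustration of the two manipulation classes defined above, the following hedged sketch enumerates, for a single hypothetical preference list, every truncation strategy (remove a tail of least preferred partners) and every dropping strategy (remove any partners while keeping the original order); every truncation is in particular a dropping.

```python
from itertools import combinations

def truncations(pref):
    """All truncation strategies of a preference list (longest first, empty list last)."""
    return [pref[:k] for k in range(len(pref), -1, -1)]

def droppings(pref):
    """All dropping strategies: order-preserving sublists of pref (no reshuffling)."""
    out = []
    for k in range(len(pref) + 1):
        for kept in combinations(range(len(pref)), k):
            out.append([pref[i] for i in kept])
    return out

if __name__ == "__main__":
    pref = ["f1", "f2", "f3"]        # hypothetical preference list over firms
    print(truncations(pref))         # [['f1', 'f2', 'f3'], ['f1', 'f2'], ['f1'], []]
    print(droppings(pref))           # all 8 order-preserving sublists, e.g. ['f1', 'f3']
```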
Abstract:
This thesis proposes a solution to the problem of estimating the motion of an Unmanned Underwater Vehicle (UUV). Our approach is based on the integration of the incremental measurements provided by a vision system. When the vehicle is close to the underwater terrain, it constructs a visual map (a so-called "mosaic") of the area where the mission takes place while, at the same time, it localizes itself on this map, following the Concurrent Mapping and Localization strategy. The proposed methodology to achieve this goal is based on a feature-based mosaicking algorithm. A down-looking camera is attached to the underwater vehicle. As the vehicle moves, a sequence of images of the sea-floor is acquired by the camera. For every image of the sequence, a set of characteristic features is detected by means of a corner detector. Then, their correspondences are found in the next image of the sequence. Solving the correspondence problem in an accurate and reliable way is a difficult task in computer vision. We consider different alternatives to solve this problem by introducing a detailed analysis of the textural characteristics of the image. This is done in two phases: first comparing different texture operators individually, and next selecting those that best characterize the point/matching pair and using them together to obtain a more robust characterization. Various alternatives are also studied to merge the information provided by the individual texture operators. Finally, the best approach in terms of robustness and efficiency is proposed. After the correspondences have been solved, for every pair of consecutive images we obtain a list of image features in the first image and their matchings in the next frame. Our aim is now to recover the apparent motion of the camera from these features. Although the matching procedure relies on an accurate texture analysis, some false matches (known as outliers) may still appear among the correct correspondences. For this reason, a robust estimation technique is used to estimate the planar transformation (homography) which explains the dominant motion of the image. Next, this homography is used to warp the processed image to the common mosaic frame, constructing a composite image formed by every frame of the sequence. With the aim of estimating the position of the vehicle as the mosaic is being constructed, the 3D motion of the vehicle can be computed from the measurements obtained by a sonar altimeter and the incremental motion computed from the homography. Unfortunately, as the mosaic increases in size, local image alignment errors increase the inaccuracies associated with the position of the vehicle. Occasionally, the trajectory described by the vehicle may cross over itself. In this situation new information is available, and the system can readjust the position estimates. Our proposal consists not only of localizing the vehicle, but also of readjusting the trajectory described by the vehicle when crossover information is obtained. This is achieved by implementing an Augmented State Kalman Filter (ASKF). Kalman filtering provides an adequate framework to deal with position estimates and their associated covariances. Finally, some experimental results are shown. A laboratory setup has been used to analyze and evaluate the accuracy of the mosaicking system. This setup enables a quantitative measurement of the accumulated errors of the mosaics created in the lab.
Then, the results obtained from real sea trials using the URIS underwater vehicle are shown.
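The robust homography step described above can be sketched with standard off-the-shelf tools; the following is a hedged illustration (using OpenCV's ORB features and RANSAC rather than the thesis' texture-based matcher) of estimating the dominant planar motion between two consecutive sea-floor frames.

```python
import cv2
import numpy as np

def pairwise_homography(img1, img2):
    """Detect features in two consecutive frames, match them, and fit a homography."""
    orb = cv2.ORB_create(2000)                       # stand-in for the thesis' corner detector
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Robust estimation: RANSAC rejects outliers while fitting the dominant image motion.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inlier_mask

# H can then be used to warp a frame into the common mosaic frame, e.g.:
# warped = cv2.warpPerspective(img1, H, mosaic_size)
```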
Abstract:
We consider the raise and peel model of a one-dimensional fluctuating interface in the presence of an attractive wall. The model can also describe a pair annihilation process in disordered unquenched media with a source at one end of the system. For the stationary states, several density profiles are studied using Monte Carlo simulations. We point out a deep connection between some profiles seen in the presence of the wall and in its absence. Our results are discussed in the context of conformal invariance (c = 0 theory). We discover some unexpected values for the critical exponents, which are obtained using combinatorial methods. We have solved known (Pascal's hexagon) and new (split-hexagon) bilinear recurrence relations. The solutions of these equations are interesting in their own right since they give information on certain classes of alternating sign matrices.
Abstract:
A bipartite graph G = (V, W, E) is convex if there exists an ordering of the vertices of W such that, for each v ∈ V, the neighbors of v are consecutive in W. We describe both a sequential and a BSP/CGM algorithm to find a maximum independent set in a convex bipartite graph. The sequential algorithm improves over the running time of the previously known algorithm, and the BSP/CGM algorithm is a parallel version of the sequential one. The complexity of the algorithms does not depend on |W|.
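To make the convexity property concrete: each v ∈ V can be summarized by the interval of consecutive positions of W it is adjacent to, so simple interval sweeps become possible. The sketch below is not the paper's algorithm; it is a hedged illustration that computes a maximum matching of a convex bipartite instance with a classical greedy sweep (Glover-style) and then reports the maximum-independent-set size via König's theorem (|V| + |W| minus the maximum matching).

```python
import heapq

def convex_max_matching(intervals, w_size):
    """intervals: list of (lo, hi) neighbourhood intervals in W (0-based, inclusive)."""
    by_lo = sorted(range(len(intervals)), key=lambda v: intervals[v][0])
    heap, i, matching = [], 0, {}
    for w in range(w_size):
        while i < len(by_lo) and intervals[by_lo[i]][0] <= w:
            v = by_lo[i]
            heapq.heappush(heap, (intervals[v][1], v))   # keyed by interval end
            i += 1
        while heap and heap[0][0] < w:                   # interval already closed: never matchable
            heapq.heappop(heap)
        if heap:                                         # match w to the most "urgent" vertex of V
            _, v = heapq.heappop(heap)
            matching[v] = w
    return matching

if __name__ == "__main__":
    intervals = [(0, 1), (0, 0), (1, 3), (2, 2)]         # hypothetical convex instance, |W| = 4
    m = convex_max_matching(intervals, 4)
    print(len(m), "edges matched;", len(intervals) + 4 - len(m), "= maximum independent set size")
```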
Abstract:
Access to capital by entrepreneurs in start-ups has traditionally been grounded in the Venture Capital (VC) industry. In emerging countries, including Brazil, it was only in the last decade that this industry gained greater importance relative to the other available sources of capital. This happened as the industry began to attract traditional foreign funds drawn by the growth potential, the new business opportunities, and the government incentives in these countries. The development of the VC industry in Brazil can therefore be considered relatively embryonic. The main goal of this study is to identify the determining factors that led investors (Venture Capital) and investees (start-ups) to decide to form a partnership in a given deal, within the Brazilian institutional context. The qualitative research followed an exploratory approach, based on in-depth interviews (fourteen in total) with Brazilian investors and investees who had formed a partnership up to two years earlier. The interviews covered nine matchings, i.e., pairs of investors and investees in a deal. The results of these interviews revealed patterns and processes very similar to those studied in countries where this industry is considered mature. The data show, however, that in Brazil, as in other emerging countries, the relationship between investor and investee is the main factor in closing a deal between the parties. Moreover, they indicate that the interpersonal relationship carries greater weight than the other factors identified in the research.
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among all the learning techniques for dealing with structured data, kernel methods are recognized to have a strong theoretical background and to be effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information for making correct predictions on unseen data. In fact, it tends to produce a discriminating function behaving like the nearest neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the kernel's application in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower dimensional representation, we are able to perform inexact matchings between different inputs in the original space. A second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
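For reference, the subset tree kernel named above can be sketched compactly; the following is a hedged, naive implementation in the spirit of the classical Collins-Duffy recursion (trees as nested tuples, decay parameter lam), not the thesis' improved kernels.

```python
from math import prod

def nodes(t):
    """All internal nodes (tuples) of a nested-tuple tree; leaves are plain strings."""
    if isinstance(t, str):
        return []
    out = [t]
    for child in t[1:]:
        out.extend(nodes(child))
    return out

def production(t):
    """The node label together with the ordered labels of its children."""
    return (t[0], tuple(c if isinstance(c, str) else c[0] for c in t[1:]))

def delta(n1, n2, lam):
    """Weighted count of common subset trees rooted at n1 and n2."""
    if isinstance(n1, str) or isinstance(n2, str):
        return 0.0
    if production(n1) != production(n2):
        return 0.0
    if all(isinstance(c, str) for c in n1[1:]):          # pre-terminal with identical production
        return lam
    return lam * prod(1.0 + delta(c1, c2, lam) for c1, c2 in zip(n1[1:], n2[1:]))

def sst_kernel(t1, t2, lam=0.4):
    """Sum delta over all node pairs, i.e. the (weighted) number of common subset trees."""
    return sum(delta(n1, n2, lam) for n1 in nodes(t1) for n2 in nodes(t2))

if __name__ == "__main__":
    t1 = ("S", ("NP", "dog"), ("VP", ("V", "runs")))     # hypothetical toy parse trees
    t2 = ("S", ("NP", "cat"), ("VP", ("V", "runs")))
    print(sst_kernel(t1, t2))                            # roughly 1.584 with lam = 0.4
```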
Abstract:
This dissertation mimics the Turkish college admission procedure. It started with the purpose of reducing the inefficiencies in the Turkish market. For this purpose, we propose a mechanism under a new market structure, which we prefer to call semi-centralization. In chapter 1, we give a brief summary of Matching Theory. We present the first examples in matching history together with the most general papers and mechanisms. In chapter 2, we propose our mechanism. In its real-life application, that is, in Turkish university placements, the mechanism reduces the inefficiencies of the current system. The success of the mechanism depends on the preference profile. It is easy to show that under complete information the mechanism implements the full set of stable matchings for a given profile. In chapter 3, we refine our basic mechanism. The modification of the mechanism has a crucial effect on the results. The new mechanism is what we call a middle mechanism. On one of the subdomains, this mechanism coincides with the original basic mechanism. On the other, it gives the same results as Gale and Shapley's algorithm. In chapter 4, we apply our basic mechanism to the well-known Roommate Problem. Since the roommate problem is a one-sided game, we first propose an auxiliary function to convert the game into a semi-centralized two-sided game, because our basic mechanism is designed for this framework. We show that this process is successful in finding a stable matching whenever one exists. We also show that our mechanism easily and simply tells us whether a profile lacks stability by using purified orderings. Finally, we show a method to find all the stable matchings when multiple stable matchings exist. The method is simply to run the mechanism for all of the top agents in the social preference.
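Since the abstract repeatedly compares its mechanism against Gale and Shapley's algorithm, a minimal sketch of proposer-optimal deferred acceptance for a one-to-one market is given below; the toy preference profile and names are hypothetical.

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """proposer_prefs / receiver_prefs: dicts mapping an agent to an ordered preference list."""
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)                     # proposers who still need to propose
    next_choice = {p: 0 for p in proposer_prefs}    # index of the next receiver each will try
    match = {}                                      # receiver -> currently held proposer
    while free:
        p = free.pop()
        if next_choice[p] >= len(proposer_prefs[p]):
            continue                                # list exhausted: p stays unmatched
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p                            # r tentatively accepts
        elif rank[r].get(p, float("inf")) < rank[r][match[r]]:
            free.append(match[r])                   # r rejects its current partner for p
            match[r] = p
        else:
            free.append(p)                          # r rejects p, who proposes again later
    return match

if __name__ == "__main__":
    students = {"s1": ["u1", "u2"], "s2": ["u1", "u2"], "s3": ["u2", "u1"]}
    universities = {"u1": ["s2", "s1", "s3"], "u2": ["s1", "s3", "s2"]}
    print(deferred_acceptance(students, universities))   # a stable matching of the toy profile
```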
Abstract:
Chapter 1 introduces the basic tools and mechanics used within this thesis. Some historical uses and background are touched upon as well. The majority of the definitions are also contained within this chapter. In Chapter 2 we consider the question of whether one can decompose λ copies of a monochromatic Kv into copies of Kk such that each copy of Kk contains at most one edge from each Kv. This is called a proper edge coloring (Hurd, Sarvate, [29]). The majority of the content in this section is a wide variety of examples to explain the constructions used in Chapters 3 and 4. In Chapters 3 and 4 we investigate how to properly color BIBD(v, k, λ) for k = 4 and 5. Not only will there be direct constructions of relatively small BIBDs; we also prove some generalized constructions used within. In Chapter 5 we discuss an alternative solution to Chapters 3 and 4. A purely graph-theoretical approach, using matchings, augmenting paths, and theorems about the edge-chromatic number, is used to develop a theorem that then covers all possible cases. We also discuss how this method performed compared to the methods in Chapters 3 and 4. In Chapter 6, we switch topics to Latin rectangles that have the same number of symbols as, and a matrix of equivalent size to, Latin squares. Suppose ab = n². We define an equitable Latin rectangle as an a × b matrix on a set of n symbols where each symbol appears either ⌈b/n⌉ or ⌊b/n⌋ times in each row of the matrix and either ⌈a/n⌉ or ⌊a/n⌋ times in each column of the matrix. Two equitable Latin rectangles are orthogonal in the usual way. Denote a set of k mutually orthogonal equitable a × b Latin rectangles as a k-MOELR(a, b; n). We show that there exists a k-MOELR(a, b; n) for all (a, b, n) where k is at least 3, with some exceptions.
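The equitable condition just quoted is easy to check mechanically; below is a small hedged sketch (the 2 × 8 example over n = 4 symbols is hypothetical) that verifies the row and column occurrence counts.

```python
from collections import Counter
from math import floor, ceil

def is_equitable(M, n):
    """Check the equitable-Latin-rectangle condition for an a x b matrix M over n symbols."""
    a, b = len(M), len(M[0])
    assert a * b == n * n
    row_ok = all(Counter(row)[s] in (floor(b / n), ceil(b / n))
                 for row in M for s in range(n))
    cols = list(zip(*M))
    col_ok = all(Counter(col)[s] in (floor(a / n), ceil(a / n))
                 for col in cols for s in range(n))
    return row_ok and col_ok

if __name__ == "__main__":
    M = [[0, 1, 2, 3, 0, 1, 2, 3],
         [2, 3, 0, 1, 3, 2, 1, 0]]
    # Each symbol appears exactly twice per row and at most once per column.
    print(is_equitable(M, 4))   # True
```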
Abstract:
The main problem of pedestrian dead-reckoning (PDR) using only a body-attached inertial measurement unit is the accumulation of heading errors. The heading provided by magnetometers in indoor buildings is in general not reliable and therefore it is commonly not used. Recently, a new method called heuristic drift elimination (HDE) was proposed that minimises the heading error when navigating in buildings. It assumes that the majority of buildings have their corridors parallel to each other, or intersecting at right angles, and that consequently most of the time the person walks along a straight path with a heading constrained to one of four possible directions. In this article we study the performance of HDE-based methods in complex buildings, i.e. with pathways also oriented at 45°, long curved corridors, and wide areas where non-oriented motion is possible. We explain how the performance of the original HDE method can deteriorate in complex buildings, and also how severe errors can appear in the case of false matches with the building's dominant directions. Although magnetic compassing indoors shows chaotic behaviour, in this article we analyse large data sets in order to study the potential of magnetic compassing to estimate the absolute yaw angle of a walking person. Apart from these analyses, this article also proposes an improved HDE method called Magnetically-aided Improved Heuristic Drift Elimination (MiHDE), which is implemented over a PDR framework that uses foot-mounted inertial navigation with an extended Kalman filter (EKF). The EKF is fed with the MiHDE-estimated orientation error, gyro bias corrections, and the confidence in those corrections. We experimentally evaluated the performance of the proposed MiHDE-based PDR method, comparing it with the original HDE implementation. Results show that both methods perform very well in ideal orthogonal narrow-corridor buildings, and that MiHDE outperforms HDE for non-ideal trajectories (e.g. curved paths) and is also robust against potential false dominant-direction matchings.
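As a rough illustration of the heuristic under discussion (and not of the MiHDE implementation itself), the sketch below nudges the estimated heading towards the nearest dominant building direction only while the walker moves straight and the residual is small; the direction set, gain, and threshold are hypothetical, and adding the 45° directions is what a complex building would require.

```python
import math

DOMINANT = [math.radians(d) for d in range(0, 360, 45)]   # 0, 45, ..., 315 degrees

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def hde_correction(heading, walking_straight, gain=0.05, max_err=math.radians(10)):
    """Return a small heading correction (rad) towards the closest dominant direction."""
    if not walking_straight:
        return 0.0
    err = min((wrap(heading - d) for d in DOMINANT), key=abs)
    if abs(err) > max_err:        # too far from any corridor direction: likely a false match
        return 0.0
    return -gain * err            # feedback that slowly removes the accumulated drift

# Typical use inside a PDR loop (hypothetical variable names):
# heading += gyro_rate * dt + hde_correction(heading, walking_straight)
```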
Abstract:
This work addresses the problem of matching two images. Image matching can be of the template-matching or keypoint-matching type. These algorithms locate a region of the first image within a second image. Our group developed two rotation-, scale-, and translation-invariant template-matching algorithms named Ciratefi (circular, radial and template-matching filter) and Forapro (Fourier coefficients of radial and circular projection). The strengths of these algorithms are invariance to brightness/contrast changes and robustness to repetitive patterns. In the first part of this thesis, we make Ciratefi invariant to affine transformations, obtaining Aciratefi (Affine-Ciratefi). We built an image database to compare this algorithm with Asift (Affine scale-invariant feature transform) and Aforapro (Affine-Forapro). Asift is currently considered the best affine-invariant image-matching algorithm, and Aforapro was proposed in our master's dissertation. Our results suggest that Aciratefi outperforms Asift in the combined presence of repetitive patterns, brightness/contrast changes, and viewpoint changes. In the second part of this thesis, we build an algorithm for filtering keypoint matches based on a concept we call geometric coherence. We apply this filtering to the well-known Sift (scale-invariant feature transform) algorithm, on which Asift is based. We evaluate our proposal on the Mikolajczyk image database. The error rates obtained are significantly lower than those of the original Sift.
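The keypoint matches that the proposed geometric-coherence filter prunes are typically produced by a pipeline like the following hedged OpenCV sketch (SIFT keypoints plus Lowe's ratio test); the coherence filter itself, being the thesis' contribution, is not reproduced here.

```python
import cv2

def sift_matches(img1, img2, ratio=0.75):
    """Return tentative keypoint correspondences between two images."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(d1, d2, k=2):            # two nearest neighbours per keypoint
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:   # Lowe's ratio test
            good.append((k1[pair[0].queryIdx].pt, k2[pair[0].trainIdx].pt))
    return good
```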
Abstract:
In this paper, we present a novel coarse-to-fine visual localization approach: contextual visual localization. This approach relies on three elements: (i) a minimal-complexity classifier for performing fast coarse localization (submap classification); (ii) an optimized saliency detector which exploits the visual statistics of the submap; and (iii) a fast view-matching algorithm which filters initial matchings with a structural criterion. The latter algorithm yields fine localization. Our experiments show that these elements have been successfully integrated to solve the global localization problem. Context, that is, the awareness of being in a particular submap, is defined by a supervised classifier tuned for a minimal set of features. Visual context is exploited both for tuning (optimizing) the saliency detection process and for selecting potential matching views in the visual database that are close enough to the query view.
Abstract:
Let G be a graph that admits a perfect matching. A forcing set for a perfect matching M of G is a subset S of M such that S is contained in no other perfect matching of G. This notion has arisen in the study of finding resonance structures of a given molecule in chemistry. Similar concepts have been studied for block designs and graph colorings under the name defining set, and for Latin squares under the name critical set. There is some study of forcing sets of hexagonal systems in the context of chemistry, but only a few other classes of graphs have been considered. For the hypercubes Q(n), it turns out to be a very interesting notion which includes many challenging problems. In this paper we study the computational complexity of finding the forcing number of graphs, and we give some results on the possible values of the forcing number for different matchings of the hypercube Q(n). We also show an application to critical sets in back circulant Latin rectangles.
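A brute-force sketch of the forcing-number definition quoted above: enumerate the perfect matchings of a small graph, then for a given perfect matching M find the smallest subset S of M contained in no other perfect matching. This is exponential, so it only illustrates the notion on toy instances such as the small cycle below.

```python
from itertools import combinations

def perfect_matchings(vertices, edges):
    """All perfect matchings of a small graph, each as a frozenset of edges."""
    if not vertices:
        return [frozenset()]
    v = vertices[0]
    out = []
    for e in edges:
        if v in e:
            rest_v = [w for w in vertices if w not in e]
            rest_e = [f for f in edges if e[0] not in f and e[1] not in f]
            out.extend(m | {e} for m in perfect_matchings(rest_v, rest_e))
    return out

def forcing_number(vertices, edges, M):
    """Smallest size of a subset of M that no other perfect matching contains."""
    all_pm = perfect_matchings(vertices, edges)
    for k in range(len(M) + 1):
        for S in combinations(M, k):
            if sum(1 for pm in all_pm if set(S) <= pm) == 1:
                return k
    return None

if __name__ == "__main__":
    # C6, the 6-cycle: each of its two perfect matchings is forced by a single edge.
    verts = list(range(6))
    edges = [(i, (i + 1) % 6) for i in range(6)]
    M = frozenset({(0, 1), (2, 3), (4, 5)})
    print(forcing_number(verts, edges, M))   # 1
```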
Abstract:
Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and graph queries of other graph DBMSs can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models consist of a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from matched vertices' properties in each answer in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the properties of the vertices in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
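The core operation these models rank, matching a small labeled pattern against an edge-labeled graph and ordering the answers by a user-supplied score, can be sketched with off-the-shelf tools; the following hedged Python example uses the networkx library (induced subgraph matching, which differs in detail from SPARQL semantics), and the toy graph, labels, and scoring function are hypothetical.

```python
import heapq
import networkx as nx
from networkx.algorithms import isomorphism

def top_k_matches(G, pattern, score, k=3):
    """Rank every subgraph match of `pattern` in G by a user-supplied score and keep the top k."""
    gm = isomorphism.DiGraphMatcher(
        G, pattern, edge_match=isomorphism.categorical_edge_match("label", None))
    answers = (dict(m) for m in gm.subgraph_isomorphisms_iter())   # maps G-nodes -> pattern-nodes
    return heapq.nlargest(k, answers, key=score)

if __name__ == "__main__":
    G = nx.DiGraph()
    G.add_edge("alice", "bob", label="knows")
    G.add_edge("bob", "carol", label="knows")
    G.add_edge("alice", "acme", label="worksFor")
    G.add_edge("bob", "acme", label="worksFor")

    P = nx.DiGraph()
    P.add_edge("x", "y", label="knows")                 # pattern: ?x knows ?y

    # Score an answer by the out-degree of the graph vertex bound to pattern node "y".
    score = lambda a: G.out_degree(next(g for g, p in a.items() if p == "y"))
    print(top_k_matches(G, P, score, k=2))
```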