159 results for Stereo matching
Abstract:
We consider a dynamic model where traders in each period are matched randomly into pairs who then bargain about the division of a fixed surplus. When agreement is reached, the traders leave the market. Traders who do not reach an agreement return in the next period, in which they are matched again, as long as their deadline has not yet expired. New traders enter exogenously in each period. We assume that traders within a pair know each other's deadlines. We define and characterize the stationary equilibrium configurations. Traders with longer deadlines fare better than traders with shorter deadlines. It is shown that the heterogeneity of deadlines may cause delay. It is then shown that a centralized mechanism that controls the matching protocol, but does not interfere with the bargaining, eliminates all delay. Even though this efficient centralized mechanism is not as good for traders with long deadlines, it is shown that in a model where all traders can choose which mechanism to
Abstract:
We give a simple and concise proof that so-called generalized median stable matchings are well-defined stable matchings for college admissions problems. Furthermore, we discuss the fairness properties of median stable matchings and conclude with two illustrative examples of college admissions markets, the lattices of stable matchings, and the corresponding generalized median stable matchings.
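The median construction discussed in this abstract can be sketched in a few lines: each agent on one side receives the median, by his own ranking, of the partners he obtains across all stable matchings. The 3x3 market below and all names are hypothetical illustrations, not taken from the paper.

```python
def median_stable_matching(stable_matchings, men_prefs):
    """Generalized median matching: give each man the median, by his own
    ranking, of the partners he has across the stable matchings."""
    n = len(stable_matchings)
    result = {}
    for m, prefs in men_prefs.items():
        partners = sorted((mu[m] for mu in stable_matchings), key=prefs.index)
        result[m] = partners[(n - 1) // 2]  # lower median if n is even
    return result

# Hypothetical 3x3 marriage market with three stable matchings.
men_prefs = {"m1": ["w1", "w2", "w3"],
             "m2": ["w2", "w3", "w1"],
             "m3": ["w3", "w1", "w2"]}
stable = [{"m1": "w1", "m2": "w2", "m3": "w3"},   # men-optimal
          {"m1": "w2", "m2": "w3", "m3": "w1"},   # "middle" matching
          {"m1": "w3", "m2": "w1", "m3": "w2"}]   # women-optimal
median = median_stable_matching(stable, men_prefs)
```

In this toy market the median coincides with the middle stable matching, the kind of compromise between the two sides that the fairness discussion points to.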
Abstract:
Ma (1996) studied the random order mechanism, a matching mechanism suggested by Roth and Vande Vate (1990) for marriage markets. By means of an example he showed that the random order mechanism does not always reach all stable matchings. Although Ma's (1996) result is true, we show that the probability distribution he presented (and therefore the proof of his Claim 2) is not correct. The mistake in the calculations by Ma (1996) is due to the fact that even though the example looks very symmetric, some of the calculations are not as "symmetric."
Abstract:
We study two-sided matching markets with couples and show that for a natural preference domain for couples, the domain of weakly responsive preferences, stable outcomes can always be reached by means of decentralized decision making. Starting from an arbitrary matching, we construct a path of matchings obtained from `satisfying' blocking coalitions that yields a stable matching. Hence, we establish a generalization of Roth and Vande Vate's (1990) result on path convergence to stability for decentralized singles markets. Furthermore, we show that when stable matchings exist, but preferences are not weakly responsive, for some initial matchings there may not exist any path obtained from `satisfying' blocking coalitions that yields a stable matching.
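For the decentralized singles (marriage) market case generalized here, the Roth and Vande Vate (1990) idea of reaching stability by repeatedly "satisfying" blocking pairs can be sketched as follows. Complete preference lists and the tiny 2x2 example market are illustrative assumptions, not the paper's couples model.

```python
import random

def blocking_pairs(matching, men_prefs, women_prefs):
    """All pairs (m, w) who both prefer each other to their current
    partners (complete preference lists assumed; None means single)."""
    rank_m = {m: {w: i for i, w in enumerate(p)} for m, p in men_prefs.items()}
    rank_w = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    wife = dict(matching)
    husband = {w: m for m, w in wife.items() if w is not None}
    pairs = []
    for m in men_prefs:
        for w in women_prefs:
            m_better = wife[m] is None or rank_m[m][w] < rank_m[m][wife[m]]
            w_better = w not in husband or rank_w[w][m] < rank_w[w][husband[w]]
            if m_better and w_better:
                pairs.append((m, w))
    return pairs

def satisfy(matching, m, w):
    """Match m and w with each other; their former partners become single."""
    new = dict(matching)
    for m2 in new:
        if new[m2] == w:
            new[m2] = None
    new[m] = w
    return new

def path_to_stability(matching, men_prefs, women_prefs, rng=random.Random(0)):
    """Repeatedly satisfy a randomly chosen blocking pair; for marriage
    markets this random process reaches a stable matching."""
    while True:
        pairs = blocking_pairs(matching, men_prefs, women_prefs)
        if not pairs:
            return matching
        matching = satisfy(matching, *rng.choice(pairs))

# From an arbitrary (unstable) matching in a 2x2 market to a stable one.
men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m1", "m2"], "w2": ["m1", "m2"]}
stable = path_to_stability({"m1": "w2", "m2": "w1"}, men, women)
```

The abstract's point is that with couples such a path exists on the weakly responsive domain but can fail to exist outside it; the sketch only shows the singles-market process being generalized.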
Abstract:
We motivate procedural fairness for matching mechanisms and study two procedurally fair and stable mechanisms: employment by lotto (Aldershof et al., 1999) and the random order mechanism (Roth and Vande Vate, 1990; Ma, 1996). For both mechanisms we give various examples of the probability distributions they induce on the set of stable matchings and discuss properties that differentiate employment by lotto from the random order mechanism. Finally, we consider an adjustment of the random order mechanism, the equitable random order mechanism, that combines aspects of procedural and "end-state" fairness.
Abstract:
For the many-to-one matching model in which firms have substitutable and quota q-separable preferences over subsets of workers we show that the workers-optimal stable mechanism is group strategy-proof for the workers. In order to prove this result, we also show that under this domain of preferences (which contains the domain of responsive preferences of the college admissions problem) the workers-optimal stable matching is weakly Pareto optimal for the workers and the Blocking Lemma holds as well. We exhibit an example showing that none of these three results remain true if the preferences of firms are substitutable but not quota q-separable.
Abstract:
This paper aims at assessing the importance of the initial technological endowments when firms decide to establish a technological agreement. We propose a Bertrand duopoly model where firms evaluate the advantages they can get from the agreement according to its length. Allowing them to exploit a learning process, we depict a strict connection between the starting point and the final result. Moreover, as far as learning is evaluated as an iterative process, the set of initial conditions that lead to successful ventures switches from a continuum of values to a Cantor set.
Abstract:
We propose a model based on competitive markets in order to analyze an economy with several principals and agents. We model the principal-agent economy as a two-sided matching game and characterize the set of stable outcomes of this principal-agent matching market. A simple mechanism to implement the set of stable outcomes is proposed. Finally, we present examples of principal-agent economies to which the results apply.
Abstract:
We study comparative statics of manipulations by women in the men-proposing deferred acceptance mechanism in the two-sided one-to-one marriage market. We prove that if a group of women employs truncation strategies or weakly successfully manipulates, then all other women weakly benefit and all men are weakly harmed. We show that our results do not appropriately generalize to the many-to-one college admissions model.
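The men-proposing deferred acceptance mechanism analyzed in this abstract can be sketched as follows; the 3x3 example market is hypothetical, and the sketch covers only the mechanism itself, not the manipulation results.

```python
def deferred_acceptance(men_prefs, women_prefs):
    """Men-proposing deferred acceptance (Gale-Shapley). Returns the
    men-optimal stable matching as a dict {man: woman}; complete
    preference lists and equal numbers on both sides are assumed."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    nxt = {m: 0 for m in men_prefs}  # index of the next woman m proposes to
    engaged = {}                     # woman -> man (tentative)
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][nxt[m]]
        nxt[m] += 1
        if w not in engaged:
            engaged[w] = m                       # w tentatively accepts
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])              # w trades up; old partner is free
            engaged[w] = m
        else:
            free.append(m)                       # w rejects m; he proposes on
    return {m: w for w, m in engaged.items()}

# Hypothetical 3x3 marriage market.
men = {"m1": ["w1", "w2", "w3"],
       "m2": ["w2", "w1", "w3"],
       "m3": ["w1", "w2", "w3"]}
women = {"w1": ["m3", "m1", "m2"],
         "w2": ["m1", "m2", "m3"],
         "w3": ["m1", "m2", "m3"]}
matching = deferred_acceptance(men, women)
```

A truncation strategy in this setting is a reported list that cuts a woman's true list after some rank; rerunning the mechanism with such reports is how the comparative statics above would be explored.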
Abstract:
A parts-based model is a parametrization of an object class using a collection of landmarks following the object structure. The matching of parts-based models is one of the problems where pairwise Conditional Random Fields have been successfully applied. The main reason for their effectiveness is tractable inference and learning due to the simplicity of the involved graphs, usually trees. However, these models do not consider possible patterns of statistics among sets of landmarks, and thus they suffer from using overly myopic information. To overcome this limitation, we propose a novel structure based on hierarchical Conditional Random Fields, which we explain in the first part of this thesis. We build a hierarchy of combinations of landmarks, where matching is performed taking into account the whole hierarchy. To preserve tractable inference we effectively sample the label set. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this thesis, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
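The stacked combination described in the second part can be sketched with plain ridge-regularized linear scorers standing in for the per-kernel classifiers; the regularization constant and the toy two-view data are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

def train_linear(X, y):
    """Ridge-regularized least-squares linear scorer (labels in {-1, +1})."""
    Xb = np.column_stack([X, np.ones(len(X))])
    return np.linalg.solve(Xb.T @ Xb + 1e-3 * np.eye(Xb.shape[1]), Xb.T @ y)

def score(w, X):
    return np.column_stack([X, np.ones(len(X))]) @ w

def stacked_combination(train_sets, y, test_sets):
    """Train one base scorer per feature set (one per kernel), then feed
    their responses into a second-level scorer instead of combining the
    kernel responses linearly with fixed weights."""
    base = [train_linear(X, y) for X in train_sets]
    meta_in = np.column_stack([score(w, X) for w, X in zip(base, train_sets)])
    meta = train_linear(meta_in, y)
    meta_test = np.column_stack([score(w, X) for w, X in zip(base, test_sets)])
    return np.sign(score(meta, meta_test))

# Toy data: two "views" of the same four training points, two test points.
y = np.array([1.0, 1.0, -1.0, -1.0])
X1 = np.array([[1.0], [1.0], [-1.0], [-1.0]])
X2 = np.array([[2.0], [2.0], [-2.0], [-2.0]])
pred = stacked_combination([X1, X2], y,
                           [np.array([[1.0], [-1.0]]), np.array([[2.0], [-2.0]])])
```

The design point is that the second-level classifier sees base responses as features, so it can weight them nonuniformly per example rather than with one fixed linear combination.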
Abstract:
The objective of this TFC (final-year project) is to develop an application that allows, on the one hand, the definition of an offer of resources; on the other hand, that user-consumers can sign up for those offers; and, finally,
Abstract:
This paper proposes MSISpIC, a probabilistic sonar scan matching algorithm for the localization of an autonomous underwater vehicle (AUV). The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS), together with the robot displacement estimated through dead reckoning using a Doppler velocity log (DVL) and a motion reference unit (MRU). The proposed method is an extension of the pIC algorithm. An extended Kalman filter (EKF) is used to estimate the robot path during the scan in order to reference all the range and bearing measurements, as well as their uncertainty, to a scan-fixed frame before registering. The major contribution consists of experimentally proving that probabilistic sonar scan matching techniques have the potential to improve DVL-based navigation. The algorithm has been tested on an AUV guided along a 600 m path within an abandoned marina underwater environment with satisfactory results.
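A highly simplified sketch, not the paper's implementation, of the two ingredients described above: an EKF dead-reckoning prediction for a planar pose driven by body-frame velocities (as a DVL/MRU would provide), and the projection of one range/bearing sonar beam into the scan-fixed frame. The state layout and noise values are assumptions.

```python
import numpy as np

def ekf_predict(x, P, v, dt, Q):
    """EKF prediction for a planar pose x = [px, py, yaw] propagated with
    a body-frame velocity v = [vx, vy, yaw_rate]."""
    px, py, yaw = x
    c, s = np.cos(yaw), np.sin(yaw)
    x_new = np.array([px + dt * (c * v[0] - s * v[1]),
                      py + dt * (s * v[0] + c * v[1]),
                      yaw + dt * v[2]])
    # Jacobian of the motion model w.r.t. the state, for the covariance.
    F = np.array([[1.0, 0.0, dt * (-s * v[0] - c * v[1])],
                  [0.0, 1.0, dt * ( c * v[0] - s * v[1])],
                  [0.0, 0.0, 1.0]])
    return x_new, F @ P @ F.T + Q

def beam_to_scan_frame(x, rng_m, bearing):
    """Project one range/bearing sonar beam, taken at pose x, into the
    scan-fixed frame."""
    px, py, yaw = x
    a = yaw + bearing
    return np.array([px + rng_m * np.cos(a), py + rng_m * np.sin(a)])

# One-second prediction while surging forward at 1 m/s, then one beam.
x0, P0 = np.zeros(3), np.eye(3) * 0.01
Q = np.eye(3) * 1e-4
x1, P1 = ekf_predict(x0, P0, np.array([1.0, 0.0, 0.0]), 1.0, Q)
point = beam_to_scan_frame(x1, 2.0, np.pi / 2)
```

Referencing every beam to the estimated pose at its own acquisition time is what compensates for the slow MSIS scan rate before the scans are registered.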
Abstract:
In this paper we present a novel structure from motion (SfM) approach able to infer 3D deformable models from uncalibrated stereo images. Using a stereo setup dramatically improves the 3D model estimation when the observed 3D shape is mostly deforming without undergoing strong rigid motion. Our approach first calibrates the stereo system automatically and then computes a single metric rigid structure for each frame. Afterwards, these 3D shapes are aligned to a reference view using a RANSAC method in order to compute the mean shape of the object and to select the subset of points on the object which have remained rigid throughout the sequence without deforming. The selected rigid points are then used to compute frame-wise shape registration and to extract the motion parameters robustly from frame to frame. Finally, all this information is used in a global optimization stage with bundle adjustment, which allows us to refine the frame-wise initial solution and also to recover the non-rigid 3D model. We show results on synthetic and real data that prove the performance of the proposed method even when there is no rigid motion in the original sequence.
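The RANSAC selection of points that remained rigid can be sketched in 2D as follows; the threshold, iteration count, and the Kabsch-style least-squares fit are illustrative assumptions (the paper works with 3D shapes), but the structure of the step is the same: fit a rigid transform to a minimal sample, then keep the largest consensus set so that deforming points fall out as outliers.

```python
import numpy as np

def rigid_fit(A, B):
    """Least-squares 2D rigid transform (R, t) with B ~= A @ R.T + t (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cb - R @ ca

def ransac_rigid(A, B, iters=200, thresh=0.05, seed=0):
    """Boolean mask of the correspondences that move rigidly from A to B;
    deforming points come out as outliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(A), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(A), size=2, replace=False)  # minimal 2D sample
        R, t = rigid_fit(A[idx], B[idx])
        inliers = np.linalg.norm(A @ R.T + t - B, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Ten points, nine moved rigidly (30-degree rotation + translation), one deformed.
rng = np.random.default_rng(1)
A = rng.uniform(-1.0, 1.0, size=(10, 2))
th = np.pi / 6
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
B = A @ R.T + np.array([1.0, 2.0])
B[0] += 0.5          # one deforming point
mask = ransac_rigid(A, B)
```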
Abstract:
Catadioptric sensors are combinations of mirrors and lenses designed to obtain a wide field of view. In this paper we propose a new sensor that has omnidirectional viewing ability and also provides depth information about its nearby surroundings. The sensor is based on a conventional camera coupled with a laser emitter and two hyperbolic mirrors. The mathematical formulation and precise specifications of the intrinsic and extrinsic parameters of the sensor are discussed. Our approach overcomes limitations of the existing omnidirectional sensors and eventually leads to reduced production costs.
Abstract:
In this work we propose a new automatic methodology for computing accurate digital elevation models (DEMs) in urban environments from low-baseline stereo pairs that shall be available in the future from a new kind of earth observation satellite. This setting makes both views of the scene similar, thus avoiding occlusions and illumination changes, which are the main disadvantages of the commonly accepted large-baseline configuration. There still remain two crucial technological challenges: (i) precisely estimating DEMs with strong discontinuities and (ii) providing a statistically proven result, automatically. The first one is solved here by a piecewise affine representation that is well adapted to man-made landscapes, whereas the application of computational Gestalt theory introduces reliability and automation. In fact, this theory allows us to reduce the number of parameters to be adjusted and to control the number of false detections. This leads to the selection of a suitable segmentation into affine regions (whenever possible) by a novel and completely automatic perceptual grouping method. It also allows us to discriminate, e.g., vegetation-dominated regions, where such an affine model does not apply and a more classical correlation technique should be preferred. In addition, we propose here an extension of the classical "quantized" Gestalt theory to continuous measurements, thus combining its reliability with the precision of variational robust estimation and fine interpolation methods that are necessary in the low-baseline case. Such an extension is very general and will be useful for many other applications as well.
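At its core, the piecewise affine representation amounts to fitting, within each segmented region, an affine elevation model z = ax + by + c. A minimal least-squares sketch for one region follows; the grid and the plane coefficients are hypothetical data, not from the paper.

```python
import numpy as np

def fit_affine_region(xy, z):
    """Least-squares affine elevation model z ~= a*x + b*y + c for the
    pixels of one segmented region of a piecewise affine DEM."""
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef  # (a, b, c)

# Hypothetical region whose elevations lie exactly on the plane z = 2x - y + 3.
xs, ys = np.meshgrid(np.arange(5.0), np.arange(4.0))
xy = np.column_stack([xs.ravel(), ys.ravel()])
coef = fit_affine_region(xy, 2.0 * xy[:, 0] - xy[:, 1] + 3.0)
```

Sharp DEM discontinuities then fall on the boundaries between regions, each of which carries its own plane, instead of being smoothed away by a global model.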