942 results for Stable Matchings
Abstract:
We show that the ratio of matched individuals to blocking pairs grows linearly with the number of propose–accept rounds executed by the Gale–Shapley algorithm for the stable marriage problem. Consequently, the participants can arrive at an almost stable matching even without full information about the problem instance; for each participant, knowing only its local neighbourhood is enough. In distributed-systems parlance, this means that if each person has only a constant number of acceptable partners, an almost stable matching emerges after a constant number of synchronous communication rounds. We apply our results to give a distributed (2 + ε)-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.
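As a rough illustration of the truncated propose–accept dynamic described above, here is a hedged Python sketch (not the paper's algorithm; the data structures and the round budget `rounds` are invented for illustration) that runs synchronous Gale–Shapley proposal rounds for a fixed number of rounds and then counts the blocking pairs that remain.

```python
# Minimal sketch: run synchronous Gale-Shapley propose-accept rounds for a fixed
# budget, then count the blocking pairs left in the (possibly unstable) matching.

def truncated_gale_shapley(men_prefs, women_prefs, rounds):
    """men_prefs / women_prefs: dict agent -> preference list over the other side."""
    rank = {w: {m: i for i, m in enumerate(plist)} for w, plist in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}   # index of the next woman m will propose to
    match = {}                                # woman -> man she currently holds
    for _ in range(rounds):
        # every currently unmatched man proposes to his next acceptable woman
        proposals = {}
        for m in men_prefs:
            if m not in match.values() and next_choice[m] < len(men_prefs[m]):
                w = men_prefs[m][next_choice[m]]
                next_choice[m] += 1
                proposals.setdefault(w, []).append(m)
        # each woman keeps the best acceptable man among new suitors and her current partner
        for w, suitors in proposals.items():
            candidates = [m for m in suitors if m in rank[w]]
            if w in match:
                candidates.append(match[w])
            if candidates:
                match[w] = min(candidates, key=lambda s: rank[w][s])
    return {m: w for w, m in match.items()}

def blocking_pairs(matching, men_prefs, women_prefs):
    """Pairs (m, w) who both prefer each other to their current situation."""
    man_of = {w: m for m, w in matching.items()}
    blocks = []
    for m, plist in men_prefs.items():
        for w in plist:
            if m not in women_prefs[w]:
                continue
            prefers_m = w not in man_of or \
                women_prefs[w].index(m) < women_prefs[w].index(man_of[w])
            prefers_w = m not in matching or \
                plist.index(w) < plist.index(matching[m])
            if prefers_m and prefers_w:
                blocks.append((m, w))
    return blocks
```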
Abstract:
The aim of this paper is to propose a new solution for the roommate problem with strict preferences. We introduce the solution of maximum irreversibility and consider almost stable matchings (Abraham et al. [2]) and maximum stable matchings (Ta [30], [32]). We find that almost stable matchings are incompatible with the other two solutions. Hence, to solve the roommate problem we propose matchings that lie at the intersection of the maximum irreversible matchings and maximum stable matchings, which are called Q-stable matchings. These matchings are core consistent and we offer an efficient algorithm for computing one of them. The outcome of the algorithm belongs to an absorbing set.
Abstract:
We prove with the help of a counterexample that Lemma 6 and Corollary 7 from Eeckhout [1] are incorrect.
Abstract:
We consider two-sided many-to-many matching markets in which each worker may work for multiple firms and each firm may hire multiple workers. We study individual and group manipulations in centralized markets that employ (pairwise) stable mechanisms and that require participants to submit rank order lists of agents on the other side of the market. We are interested in simple preference manipulations that have been reported and studied in empirical and theoretical work: truncation strategies, which are the lists obtained by removing a tail of least preferred partners from a preference list, and the more general dropping strategies, which are the lists obtained by only removing partners from a preference list (i.e., no reshuffling). We study when truncation / dropping strategies are exhaustive for a group of agents on the same side of the market, i.e., when each match resulting from preference manipulations can be replicated or improved upon by some truncation / dropping strategies. We prove that for each stable mechanism, truncation strategies are exhaustive for each agent with quota 1 (Theorem 1). We show that this result can be extended neither to group manipulations (even when all quotas equal 1 – Example 1), nor to individual manipulations when the agent's quota is larger than 1 (even when all other agents' quotas equal 1 – Example 2). Finally, we prove that for each stable mechanism, dropping strategies are exhaustive for each group of agents on the same side of the market (Theorem 2), i.e., independently of the quotas.
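To make the difference between the two manipulation classes concrete, here is a small hedged Python sketch of the list operations themselves; the preference list and the firm names f1–f5 are invented for illustration and are not taken from the paper. A truncation strategy keeps a prefix of the true list, while a dropping strategy may remove arbitrary partners but never reorders the remaining ones.

```python
# Illustrative sketch of the two manipulation classes on a single preference list.
true_list = ["f1", "f2", "f3", "f4", "f5"]   # most preferred first (hypothetical firms)

def truncations(prefs):
    """All truncation strategies: keep a prefix, drop a tail of least preferred partners."""
    return [prefs[:k] for k in range(len(prefs) + 1)]

def is_dropping(reported, prefs):
    """A dropping strategy only removes partners; the remaining order is unchanged."""
    it = iter(prefs)
    return all(any(p == r for p in it) for r in reported)   # subsequence check

print(truncations(true_list))                        # e.g. ["f1", "f2"] drops the tail f3..f5
print(is_dropping(["f1", "f3", "f5"], true_list))    # True: a dropping, but not a truncation
print(is_dropping(["f2", "f1"], true_list))          # False: reshuffling is not allowed
```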
Abstract:
This dissertation mimics the Turkish college admission procedure. It began with the purpose of reducing the inefficiencies in the Turkish market. For this purpose, we propose a mechanism under a new market structure, which we prefer to call semi-centralization. In chapter 1, we give a brief summary of Matching Theory. We present the first examples in Matching history together with the most general papers and mechanisms. In chapter 2, we propose our mechanism. In its real-life application, that is, Turkish university placements, the mechanism reduces the inefficiencies of the current system. The success of the mechanism depends on the preference profile. It is easy to show that under complete information the mechanism implements the full set of stable matchings for a given profile. In chapter 3, we refine our basic mechanism. The modification has a crucial effect on the results. The new mechanism is what we call a middle mechanism. On one subdomain, this mechanism coincides with the original basic mechanism; on the other, it gives the same results as Gale and Shapley's algorithm. In chapter 4, we apply our basic mechanism to the well-known Roommate Problem. Since the roommate problem is a one-sided game, we first propose an auxiliary function to convert it into a semi-centralized two-sided game, because our basic mechanism is designed for that framework. We show that this process succeeds in finding a stable matching whenever one exists. We also show that our mechanism easily and simply detects whether a profile lacks stability by using purified orderings. Finally, we show a method to find all the stable matchings when multiple stable matchings exist. The method is simply to run the mechanism for each of the top agents in the social preference.
Abstract:
This version: August 15, 2017 (original version: December 7, 2016)
Abstract:
The member states of the European Union received 1.2 million first-time asylum applications in 2015 (a doubling compared to 2014). Even if asylum will be granted to many of the refugees who made the journey to Europe, several obstacles to successful integration remain. This paper focuses on one of these obstacles, namely the problem of finding housing for refugees once they have been granted asylum. In particular, the focus is restricted to the situation in Sweden during 2015–2016, and it is demonstrated that market design can play an important role in a partial solution to the problem. More specifically, because almost all accommodation options are exhausted in Sweden, the paper investigates a matching system, closely related to the system adopted by the European NGO “Refugees Welcome”, and proposes an easy-to-implement algorithm that finds a stable maximum matching. Such a matching guarantees that housing is provided to a maximum number of refugees and that no refugee prefers some landlord to their current match while, at the same time, that specific landlord prefers that refugee to his current match.
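The stability condition stated at the end of the abstract can be checked directly. The following hedged Python sketch is not the paper's algorithm; the function and argument names are invented for illustration, and it only verifies that a given refugee-to-landlord assignment admits no pair who both prefer each other to their current situation.

```python
# Hedged sketch of the stability condition described above: no refugee and landlord
# prefer each other to their current assignments (names are illustrative).

def is_stable(matching, refugee_prefs, landlord_prefs):
    """matching: dict refugee -> landlord; prefs: dict agent -> ranked list of acceptable partners."""
    refugee_of = {l: r for r, l in matching.items()}

    def prefers(prefs, agent, new, current):
        ranked = prefs[agent]
        if new not in ranked:
            return False                       # unacceptable partners cannot block
        if current is None:
            return True                        # any acceptable partner beats being unmatched
        return ranked.index(new) < ranked.index(current)

    for r, acceptable in refugee_prefs.items():
        for l in acceptable:
            if prefers(refugee_prefs, r, l, matching.get(r)) and \
               prefers(landlord_prefs, l, r, refugee_of.get(l)):
                return False                   # (r, l) would rather match with each other
    return True
```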
Abstract:
The following properties of the core of a one-to-one matching problem are well-known: (i) the core is non-empty; (ii) the core is a lattice; and (iii) the set of unmatched agents is identical for any two matchings belonging to the core. The literature on two-sided matching focuses almost exclusively on the core and studies its properties extensively. Our main result is the following characterization of (von Neumann-Morgenstern) stable sets in one-to-one matching problems: a set of matchings is a stable set only if it is a maximal set satisfying the following properties: (a) the core is a subset of the set; (b) the set is a lattice; (c) the set of unmatched agents is identical for any two matchings belonging to the set. Furthermore, a set is a stable set if it is the unique maximal set satisfying properties (a), (b) and (c). We also show that our main result does not extend from one-to-one matching problems to many-to-one matching problems.
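Property (c) has a simple operational reading. The hedged Python sketch below (agents and matchings are invented for illustration, not from the paper) checks whether every matching in a candidate set leaves exactly the same agents unmatched.

```python
# Small sketch of property (c): all matchings in a candidate set leave the same
# agents unmatched. Matchings are represented as sets of frozenset pairs.

def unmatched(agents, matching):
    """Return the agents left single by a matching."""
    paired = set().union(*matching) if matching else set()
    return set(agents) - paired

def same_unmatched_agents(agents, matchings):
    """Property (c): the set of unmatched agents coincides across the whole set."""
    seen = {frozenset(unmatched(agents, mu)) for mu in matchings}
    return len(seen) <= 1

agents = {"m1", "m2", "w1", "w2"}
mu1 = {frozenset({"m1", "w1"})}                      # m2 and w2 stay single
mu2 = {frozenset({"m1", "w2"})}                      # m2 and w1 stay single
print(same_unmatched_agents(agents, [mu1, mu2]))     # False: property (c) fails
```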
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean-square algorithm. Some of these advantages are a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. Then the optimal lattice filter is derived for frequency modulated signals. This is performed by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals. This is carried out by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property of adaptive lattice filters: the polynomial-order-reducing property. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also shown empirically that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which produces desirable results for finite-variance input signals (such as frequency modulated signals in noise), does not achieve fast convergence for infinite-variance stable processes (because it relies on the minimum mean-square error criterion). To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that with the proposed algorithms, faster convergence is achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness, in comparison to many other algorithms. We also discuss the effect of the impulsiveness of stable processes in generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
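The least-mean p-norm idea can be sketched compactly for a simple transversal (non-lattice) filter. The hedged Python example below is not the thesis's lattice algorithm; it only illustrates replacing the mean-square-error criterion with a fractional lower-order criterion E|e|^p (with p below the characteristic exponent alpha), and the filter order, step size mu, and p are invented for illustration.

```python
import numpy as np

# Minimal sketch: a transversal least-mean p-norm (LMP) update, which adapts the
# weights by the stochastic gradient of |e|^p instead of |e|^2, so that updates
# remain well-behaved for heavy-tailed (alpha-stable) inputs.

def lmp_filter(x, d, order=4, mu=0.005, p=1.2):
    """Adapt FIR weights w to track the desired signal d from input x by minimising E|e|^p."""
    w = np.zeros(order)
    y = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]          # most recent samples first
        y[n] = w @ u
        e = d[n] - y[n]
        # gradient of |e|^p with respect to w is -p * |e|^(p-1) * sign(e) * u
        w = w + mu * p * np.abs(e) ** (p - 1) * np.sign(e) * u
    return w, y
```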
Abstract:
The use of stable isotope ratios δ18O and δ2H is well established in the assessment of groundwater systems and their hydrology. The conventional approach is based on x/y plots and relation to various MWLs (meteoric water lines), and plots of either ratio against parameters such as Cl or EC. An extension of interpretation is the use of 2D maps and contour plots, and 2D hydrogeological vertical sections. An enhancement of presentation and interpretation is the production of “isoscapes”, usually as 2.5D surface projections. We have applied groundwater isotopic data to a 3D visualisation, using the alluvial aquifer system of the Lockyer Valley. The 3D framework is produced in GVS (Groundwater Visualisation System). This format enables enhanced presentation by displaying the spatial relationships and allowing interpolation between “data points”, i.e., borehole screened zones where groundwater enters. The relative variations in the δ18O and δ2H values are similar in these ambient temperature systems. However, δ2H better reflects hydrological processes, whereas δ18O also reflects aquifer/groundwater exchange reactions. The 3D model has the advantage that it displays borehole relations to spatial features, enabling isotopic ratios and their values to be associated with, for example, bedrock groundwater mixing, interaction between aquifers, relation to stream recharge, and near-surface and return irrigation water evaporation. Some specific features are also shown, such as zones of leakage of deeper groundwater (in this case with a GAB signature). Variations in the source of recharging water at catchment scale can be displayed. Interpolation between bores is not always possible, depending on bore numbers and spacing and on the elongate configuration of the alluvium. In these cases, the visualisation uses discs around the screens that can be manually expanded to test extent or intersections. Separate displays are used for each of δ18O and δ2H, with colour coding for isotope values.
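As a small illustration of the conventional x/y approach mentioned above, the hedged Python sketch below plots invented sample values against the Global Meteoric Water Line (δ2H ≈ 8·δ18O + 10); it shows only that first interpretive step, not the GVS 3D workflow described in the abstract.

```python
import numpy as np
import matplotlib.pyplot as plt

# Conventional cross-plot of delta-2H against delta-18O, compared with the Global
# Meteoric Water Line (GMWL: d2H = 8 * d18O + 10). Sample values are illustrative only.
d18O = np.array([-5.8, -5.2, -4.9, -4.1, -3.6])       # per mil, VSMOW (hypothetical)
d2H = np.array([-36.0, -33.5, -31.0, -27.0, -22.0])   # per mil, VSMOW (hypothetical)

x = np.linspace(-7.0, -3.0, 50)
plt.plot(x, 8 * x + 10, label="GMWL")
plt.scatter(d18O, d2H, color="tab:red", label="groundwater samples")
plt.xlabel(r"$\delta^{18}$O (‰ VSMOW)")
plt.ylabel(r"$\delta^{2}$H (‰ VSMOW)")
plt.legend()
plt.show()
```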