883 results for Fair Set
Abstract:
Given a nonempty set S of vertices of a graph, the partiality of a vertex with respect to S is the difference between the maximum and minimum of the distances from the vertex to the vertices of S. The vertices with minimum partiality constitute the fair center of the set. Any vertex set which is the fair center of some set of vertices is called a fair set. In this paper we prove that the induced subgraph of any fair set is connected in the case of trees, and characterise block graphs as the class of chordal graphs for which the induced subgraphs of all fair sets are connected. The fair sets of Kn, Km,n, Kn − e, wheel graphs, odd cycles and symmetric even graphs are identified. The fair sets of Cartesian product graphs are also discussed.
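The definition is directly computable. Below is a minimal Python sketch (the function names and the BFS helper are ours, not the paper's) that finds the fair center of a vertex set S in an unweighted, connected graph:

```python
from collections import deque

def bfs_distances(adj, source):
    """Distances from source to every vertex via BFS (unweighted, connected graph)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def fair_center(adj, S):
    """Vertices minimizing partiality w.r.t. S: max distance to S minus min distance to S."""
    dists = [bfs_distances(adj, s) for s in S]   # d(v, s) = d(s, v) in undirected graphs
    partiality = {v: max(d[v] for d in dists) - min(d[v] for d in dists) for v in adj}
    best = min(partiality.values())
    return {v for v, p in partiality.items() if p == best}

# Path graph 1-2-3-4-5: the fair center of S = {1, 5} is {3}, the midpoint,
# whose distances to the two vertices of S coincide (partiality 0).
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(fair_center(adj, {1, 5}))  # {3}
```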
Abstract:
We motivate procedural fairness for matching mechanisms and study two procedurally fair and stable mechanisms: employment by lotto (Aldershof et al., 1999) and the random order mechanism (Roth and Vande Vate, 1990; Ma, 1996). For both mechanisms we give various examples of probability distributions on the set of stable matchings and discuss properties that differentiate employment by lotto and the random order mechanism, addressing questions of Aldershof et al. (1999) and Ma (1996) on the probability distributions induced by both mechanisms. Finally, we consider an adjustment of the random order mechanism, the equitable random order mechanism, that combines aspects of procedural and "end-state" fairness.
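For intuition, here is a hedged Python sketch of the random order mechanism in a one-to-one marriage market: agents enter one at a time in random order, and each arrival is followed by a chain of blocking-pair satisfactions in the spirit of Roth and Vande Vate (1990). All names are ours, preference lists are assumed complete, and this is an illustration of the mechanism's shape, not the papers' exact procedure:

```python
import random

def prefers(pref, agent, new, cur):
    """True if `agent` ranks `new` above current partner `cur` (None = single)."""
    if cur is None:
        return new in pref[agent]
    return pref[agent].index(new) < pref[agent].index(cur)

def best_blocking_partner(agent, present, match, pref):
    """The agent's most preferred present partner forming a blocking pair, if any."""
    for cand in pref[agent]:
        if cand in present and prefers(pref, cand, agent, match.get(cand)) \
                and prefers(pref, agent, cand, match.get(agent)):
            return cand
    return None

def random_order_mechanism(men, women, pref, rng=random):
    """Agents enter in random order; each arrival triggers a chain of
    blocking-pair satisfactions that restores stability among those present."""
    order = men + women
    rng.shuffle(order)
    present, match = set(), {}
    for newcomer in order:
        present.add(newcomer)
        active = newcomer                       # the currently single, searching agent
        while active is not None:
            partner = best_blocking_partner(active, present, match, pref)
            if partner is None:
                break                           # no blocking pair: stable again
            jilted = match.pop(partner, None)   # partner abandons their current match
            if jilted is not None:
                del match[jilted]
            match[active], match[partner] = partner, active
            active = jilted                     # the displaced agent continues the chain
    return {m: match.get(m) for m in men}

# A 2x2 market with opposed preferences has two stable matchings; which one
# arises depends on the random entry order, which is exactly the kind of
# induced probability distribution the paper studies.
pref = {"m1": ["w1", "w2"], "m2": ["w2", "w1"],
        "w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(random_order_mechanism(["m1", "m2"], ["w1", "w2"], pref))
```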
Abstract:
The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange. This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, we propose a comparison in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
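The well-known trusted-third-party solution recalled above is simple to sketch. In the toy Python version below (ours, and deliberately "inline": the TTP holds both items and releases them all-or-nothing), each party deposits its item together with the digest it expects of the other's item:

```python
import hashlib

def digest(item: bytes) -> str:
    """Public description of an item; here simply its SHA-256 digest."""
    return hashlib.sha256(item).hexdigest()

def ttp_exchange(item_a, expect_b, item_b, expect_a):
    """Trusted third party for two-party fair exchange.
    Items are released only if both deposits match the expected digests;
    otherwise the TTP aborts and neither party learns anything (all-or-nothing)."""
    if digest(item_a) == expect_a and digest(item_b) == expect_b:
        return item_b, item_a        # A receives B's item and vice versa
    return None, None                # abort

# A and B agree beforehand on the digests of the items they expect to receive.
a_item, b_item = b"contract signed by A", b"contract signed by B"
got_a, got_b = ttp_exchange(a_item, digest(b_item), b_item, digest(a_item))
assert (got_a, got_b) == (b_item, a_item)
```

The work's contribution is precisely to weaken this single universally trusted process into a set of trusted processes under the reachable majority condition, and further into tamperproof modules used only at key steps.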
Abstract:
A recent analysis of more than 100 countries found that the extent to which their languages grammatically allowed for an asymmetric treatment of men and women correlated with socio-economic indices of gender inequality (Prewitt-Freilino, Caswell, & Laakso, 2012). In a set of four studies we examine whether the availability of feminine forms, as indicated by the most recent dictionaries, (1) predicts the actual percentage of women and the gender wage gap for all professions registered in Poland; (2) predicts the longitudinal pattern of use of occupational job titles; and (3) relates to the social perception of a sample of 150 professions.
Abstract:
The Culture Fair Test (CFT) is a psychometric test of fluid intelligence consisting of four subtests: Series, Classification, Matrices, and Topographies. The four subtests are only moderately intercorrelated, casting doubt on the notion that they assess the same construct (i.e., fluid intelligence). As an explanation of these low correlations, we investigated the position effect, which is assumed to reflect implicit learning during testing. By applying fixed-links modeling to the CFT data of 206 participants, we identified position effects as latent variables in the Classification, Matrices, and Topographies subtests. These position effects were disentangled from a second set of latent variables representing the fluid intelligence inherent in the four subtests. After this separation of position effect and basic fluid intelligence, the latent variables representing basic fluid intelligence in the Series, Matrices, and Topographies subtests could be combined into one common latent variable, which was highly correlated with the fluid intelligence derived from the Classification subtest (r = .72). Correlations between the three latent variables representing the position effects in the Classification, Matrices, and Topographies subtests ranged from r = .38 to r = .59. The results indicate that all four CFT subtests measure the same construct (i.e., fluid intelligence) but that the position effect confounds the factorial structure.
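To illustrate the fixed-links idea, here is a toy numpy simulation (all names and parameter values are ours, a caricature of the actual structural equation models rather than the paper's analysis): item scores mix an ability factor whose loadings are fixed to a constant with a position-effect factor whose loadings are fixed to increase linearly with item position, and the two person-level factors are then recovered against those fixed loading vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 500, 20
positions = np.arange(1, n_items + 1) / n_items   # fixed, linearly increasing loadings

g = rng.normal(0, 1, n_persons)                   # fluid-intelligence factor
p = rng.normal(0, 1, n_persons)                   # position-effect factor (implicit learning)
noise = rng.normal(0, 0.5, (n_persons, n_items))

# Fixed-links structure: ability loads with constant 1, position effect with j/k.
scores = g[:, None] * 1.0 + p[:, None] * positions[None, :] + noise

# Because the loading vectors are fixed a priori, both person factors can be
# recovered by ordinary least squares against them.
X = np.column_stack([np.ones(n_items), positions])        # (n_items, 2) design
coef, *_ = np.linalg.lstsq(X, scores.T, rcond=None)       # (2, n_persons)
g_hat, p_hat = coef

print(np.corrcoef(g, g_hat)[0, 1], np.corrcoef(p, p_hat)[0, 1])
```

The point of the exercise is the same as the paper's: once the position component is absorbed by its own (fixed-loading) latent variable, the remaining variance reflects the common ability construct.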
Abstract:
The problem of fairly distributing the capacity of a network among a set of sessions has been widely studied. In this problem, each session connects a source and a destination via a single path, and its goal is to maximize its assigned transmission rate (i.e., its throughput). Since the links of the network have limited bandwidths, some criterion has to be defined to fairly distribute their capacity among the sessions. A popular criterion is max-min fairness, which, in short, guarantees that each session i gets a rate λi such that no session s can increase λs without causing another session s′ to end up with a rate λs′ < λs. Many max-min fair algorithms have been proposed, both centralized and distributed. However, to our knowledge, all proposed distributed algorithms require control data to be continuously transmitted in order to recompute the max-min fair rates when needed (because none of them has mechanisms to detect convergence to the max-min fair rates). In this paper we propose B-Neck, a distributed max-min fair algorithm that is also quiescent. This means that, in the absence of changes (i.e., session arrivals or departures), once the max-min rates have been computed, B-Neck stops generating network traffic. Quiescence is a key design concept of B-Neck, because B-Neck routers are capable of detecting and notifying changes in the convergence conditions of the max-min fair rates. As far as we know, B-Neck is the first distributed max-min fair algorithm that does not require a continuous injection of control traffic to compute the rates. The correctness of B-Neck is formally proved, and extensive simulations are conducted, showing that B-Neck converges relatively fast and behaves nicely in the presence of sessions arriving and departing.
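For reference, the max-min fair rates themselves can be computed centrally by classic progressive filling (water-filling): repeatedly find the bottleneck link, freeze the sessions crossing it at that link's fair share, and continue with the residual capacities. The Python sketch below is this textbook baseline, not B-Neck:

```python
def max_min_fair_rates(links, sessions):
    """Centralized progressive filling (water-filling) baseline.
    links: {link: capacity}; sessions: {session: set of links on its path}.
    Returns the max-min fair rate of each session."""
    remaining = dict(links)
    active = {s: set(path) for s, path in sessions.items()}
    rates = {}
    while active:
        # Fair share each still-used link could give its unfrozen sessions.
        share = {l: remaining[l] / sum(1 for p in active.values() if l in p)
                 for l in remaining if any(l in p for p in active.values())}
        bottleneck = min(share, key=share.get)
        rate = share[bottleneck]
        # Freeze every session crossing the bottleneck at the bottleneck rate.
        for s in [s for s, p in active.items() if bottleneck in p]:
            rates[s] = rate
            for l in active[s]:
                remaining[l] -= rate
            del active[s]
    return rates

# Three sessions over two unit-capacity links: A uses l1, B uses l2, C uses both.
links = {"l1": 1.0, "l2": 1.0}
sessions = {"A": {"l1"}, "B": {"l2"}, "C": {"l1", "l2"}}
print(max_min_fair_rates(links, sessions))  # A = B = C = 0.5
```

B-Neck's contribution is to reach the same fixed point in a distributed way while going quiescent once the rates have converged.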
Abstract:
This layer is a georeferenced raster image of the United States Geological Survey 7.5 minute topographic sheet map entitled: New York and vicinity : Paterson, N.J.-N.Y., 1955. It is part of an 8 sheet map set covering the metropolitan New York City area. It was published in 1961. Scale 1:24,000. The source map was prepared by the Geological Survey from 1:24,000-scale maps of Hackensack, Paterson, Orange, and Weehawken 1955 7.5 minute quadrangles. The Orange quadrangle was previously compiled by the Army Map Service. Culture revised by the Geological Survey. Hydrography compiled from USC&GS charts 287 (1954), 745 (1956), and 746 (1956). The image inside the map neatline is georeferenced to the surface of the earth and fit to the Universal Transverse Mercator (UTM) Zone 18N NAD27 projection. All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, index maps, legends, or other information associated with the principal map. USGS maps are typical topographic maps portraying both natural and manmade features. They show and name works of nature, such as mountains, valleys, lakes, rivers, vegetation, etc. They also identify the principal works of humans, such as roads, railroads, boundaries, transmission lines, major buildings, etc. Relief is shown with standard contour intervals of 10 and 20 feet; depths are shown with contours and soundings. Please pay close attention to map collar information on projections, spheroid, sources, dates, and keys to grid numbering and other numbers which appear inside the neatline. This layer is part of a selection of digitally scanned and georeferenced historic maps from The Harvard Map Collection as part of the Imaging the Urban Environment project. Maps selected for this project represent major urban areas and cities of the world, at various time periods. These maps typically portray both natural and manmade features at a large scale. The selection represents a range of regions, originators, ground condition dates, scales, and purposes.
Abstract:
This layer is a georeferenced raster image of the United States Geological Survey 7.5 minute topographic sheet map entitled: New York and vicinity : Sandy Hook, N.J.-N.Y., 1954. It is part of an 8 sheet map set covering the metropolitan New York City area. It was published in 1961. Scale 1:24,000. The source map was prepared by the Geological Survey from 1:24,000-scale maps of Sandy Hook, Keyport, Marlboro, and Long Branch 1954 7.5 minute quadrangles compiled by the Army Map Service. Culture revised by the Geological Survey. Hydrography compiled from USC&GS charts 286, 369, and 824. The image inside the map neatline is georeferenced to the surface of the earth and fit to the Universal Transverse Mercator (UTM) Zone 18N NAD27 projection. All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, index maps, legends, or other information associated with the principal map. USGS maps are typical topographic maps portraying both natural and manmade features. They show and name works of nature, such as mountains, valleys, lakes, rivers, vegetation, etc. They also identify the principal works of humans, such as roads, railroads, boundaries, transmission lines, major buildings, etc. Relief is shown with standard contour intervals of 10 and 20 feet; depths are shown with contours and soundings. Please pay close attention to map collar information on projections, spheroid, sources, dates, and keys to grid numbering and other numbers which appear inside the neatline. This layer is part of a selection of digitally scanned and georeferenced historic maps from The Harvard Map Collection as part of the Imaging the Urban Environment project. Maps selected for this project represent major urban areas and cities of the world, at various time periods. These maps typically portray both natural and manmade features at a large scale. The selection represents a range of regions, originators, ground condition dates, scales, and purposes.
Abstract:
Labour mobility creates economic benefits for the EU at large and for the mobile workforce. The same can be said for the special case of posted workers, a form of labour mobility that is crucial to the functioning of the internal market for services. Moreover, the number of posted workers is set to grow if the single market is further deepened. However, regulating the cross-border posting of workers, and ensuring a notion of 'fair mobility', also epitomises the inherent difficulties in squaring the differences of 28 sets of labour market regimes and regulations with the freedom to provide services in situ. In addition, the regulation has to work effectively in countries with large differences in income levels and social policies. This paper reviews the state of play with regard to posted workers and spells out the trade-offs to be kept in mind when considering the targeted revision of the Posted Workers Directive.
Abstract:
Objective: Secondary analyses of a previously conducted one-year randomized controlled trial were performed to assess the application of responder criteria in patients with knee osteoarthritis (OA), using different sets of responder criteria developed by the Osteoarthritis Research Society International (OARSI) for intra-articular drugs (Propositions A and B) and by Outcome Measures in Arthritis Clinical Trials (OMERACT)-OARSI (Proposition D). Methods: Two hundred fifty-five patients with knee OA were randomized to appropriate care with hylan G-F 20 (AC + H) or appropriate care without hylan G-F 20 (AC). A patient was defined as a responder at month 12 based on changes in Western Ontario and McMaster Universities Osteoarthritis Index pain and function (0-100 normalized scale) and patient global assessment of OA in the study knee (at least one-category improvement on the scale very poor, poor, fair, good, very good). All propositions incorporate both minimum relative and minimum absolute changes. Results: Statistically significant differences in responders between treatment groups, in favor of hylan G-F 20, were detected for Proposition A (AC + H = 53.5%, AC = 25.2%), Proposition B (AC + H = 56.7%, AC = 32.3%) and Proposition D (AC + H = 66.9%, AC = 42.5%). The highest effectiveness in both treatment groups was observed with Proposition D, whereas Proposition A resulted in the lowest. The treatment group differences always exceeded the 20% minimum clinically important difference between groups established a priori, and were 28.3%, 24.4% and 24.4% for Propositions A, B and D, respectively. Conclusion: This analysis provides evidence for the capacity of the OARSI and OMERACT-OARSI responder criteria to detect clinically important, statistically detectable differences between treatment groups.
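As a rough illustration of how such responder criteria combine relative and absolute changes, here is a hedged Python sketch of a Proposition D-shaped rule. The cut-offs shown are the ones commonly cited for the OMERACT-OARSI criteria, but they are our assumption here and should be checked against the trial protocol before any real use:

```python
def omeract_oarsi_style_responder(pain_change_pct, pain_change_abs,
                                  func_change_pct, func_change_abs,
                                  global_improved):
    """Sketch of an OMERACT-OARSI-style responder rule (Proposition D shape).
    Changes are improvements on 0-100 normalized scales; cut-offs are the
    commonly cited ones, used here for illustration only."""
    # High improvement in pain OR function: responder outright.
    high = (pain_change_pct >= 50 and pain_change_abs >= 20) or \
           (func_change_pct >= 50 and func_change_abs >= 20)
    # Otherwise, moderate improvement in at least 2 of 3 domains.
    moderate = sum([
        pain_change_pct >= 20 and pain_change_abs >= 10,
        func_change_pct >= 20 and func_change_abs >= 10,
        global_improved,            # e.g., at least one category better
    ]) >= 2
    return high or moderate

# A patient with 40% (12-point) pain improvement, 25% (11-point) function
# improvement and a one-category better global assessment qualifies via
# the moderate, two-of-three branch.
print(omeract_oarsi_style_responder(40, 12, 25, 11, True))  # True
```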
Abstract:
One of the aims of Hungary's new electoral law is to apportion voters to voting districts more fairly than before. This is ensured by rules similar to, though somewhat more permissive than, the recommendations in the Code of Good Practice in Electoral Matters issued by the Venice Commission. The rules fix the number of districts and require that districts neither split smaller towns and villages nor cross county borders. We show that an apportionment satisfying these rules is mathematically impossible. We propose a principled optimal resolution of the problem, study the properties of the method, and, using an efficient algorithm we formulate, determine the best allocation of districts among the counties on the data of the 2010 parliamentary elections. Finally, we examine the expected effects of demographic change and make several recommendations for keeping within the limits over the long term: increasing the number of voting districts to about 130, allowing the number of districts to change at each revision, and organizing districts by regions rather than counties.
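The source of the impossibility is easy to demonstrate: with the number of districts fixed, county borders inviolable, and every district required to stay within a tolerance of the national average size, some county populations admit no valid integer number of districts. The Python sketch below is our illustration (the tolerance value is made up, not the statutory one, and within-county indivisibility of towns is ignored); it checks feasibility with a small subset-sum dynamic program:

```python
import math

def feasible_counts(pop, avg, tol):
    """Integers k such that a county of population `pop` can host k districts
    whose average size lies within tol of the national average `avg`."""
    lo, hi = avg * (1 - tol), avg * (1 + tol)
    return [k for k in range(math.ceil(pop / hi), math.floor(pop / lo) + 1) if k >= 1]

def apportionment_exists(county_pops, n_districts, tol=0.15):
    """Is there an assignment of district counts to counties that meets the
    per-district size band and sums to n_districts? (subset-sum DP)"""
    avg = sum(county_pops) / n_districts
    reachable = {0}
    for pop in county_pops:
        options = feasible_counts(pop, avg, tol)
        if not options:
            return False             # this county alone breaks the rules
        reachable = {r + k for r in reachable for k in options if r + k <= n_districts}
    return n_districts in reachable

# Toy example: the small county cannot form one district inside the band,
# nor split into two, so no apportionment exists at all.
print(apportionment_exists([700_000, 650_000, 90_000], 20, tol=0.15))  # False
```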
Abstract:
An interpretation of fairness based on the equal sacrifice principle is not clear-cut: three different taxation rules can be derived from it. Instead of searching for a fair tax system, ethical behavior of the taxpayer should be expected and set as a target. Ethical taxation can be encouraged, and the propensity to pay taxes reinforced, by abolishing the secrecy of individual and family tax returns, restricting the cash operations that are associated with corruption, and gradually eliminating tax havens and offshore areas.
Abstract:
Evolving interfaces were initially applied to scientific problems in Fluid Dynamics. With the advent of the more robust modeling provided by the Level Set method, their original boundaries of applicability were extended. In the Geometric Modeling area specifically, the work published so far relating Level Set to three-dimensional surface reconstruction has centered on reconstruction from a point cloud dispersed in space; the approach based on parallel planar slices transversal to the object to be reconstructed is still incipient. Given this, the present work analyses the feasibility of Level Set for three-dimensional reconstruction, offering a methodology that integrates ideas already proven efficient in the literature with proposals for handling limitations of the method not yet satisfactorily treated, in particular the excessive smoothing of fine features of contours evolving under Level Set. As a solution, the application of the Particle Level Set variant is suggested, for its proven intrinsic capability to preserve the mass of dynamic fronts. Finally, synthetic and real data sets are used to qualitatively evaluate the presented three-dimensional surface reconstruction methodology.
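The excessive-smoothing limitation mentioned above comes from the curvature term of the level set evolution. As a generic illustration (ours, not the thesis's code), the numpy sketch below runs explicit mean-curvature motion on the level set function of a square; the corners, i.e., the fine features, round off within a few steps, which is precisely the loss that Particle Level Set is brought in to counteract:

```python
import numpy as np

def curvature_flow_step(phi, dt=0.1, eps=1e-8):
    """One explicit step of motion by mean curvature for the zero level set of phi:
    phi_t = kappa * |grad phi|, with kappa the curvature of the level curves.
    This is the smoothing term that erodes fine contour features."""
    phi_y, phi_x = np.gradient(phi)
    phi_yy, phi_yx = np.gradient(phi_y)
    phi_xy, phi_xx = np.gradient(phi_x)
    grad2 = phi_x**2 + phi_y**2
    kappa = (phi_xx * phi_y**2 - 2 * phi_x * phi_y * phi_xy + phi_yy * phi_x**2) \
            / (grad2**1.5 + eps)
    return phi + dt * kappa * np.sqrt(grad2)

# Signed-distance-like function of a square: its corners (fine features)
# round off after a handful of curvature-flow steps.
y, x = np.mgrid[-1:1:128j, -1:1:128j]
phi = np.maximum(np.abs(x), np.abs(y)) - 0.5       # zero level set = a square
for _ in range(50):
    phi = curvature_flow_step(phi)
```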
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física.