833 results for Many-To-One Matching Market
Abstract:
Being able to determine where sounds come from is fundamental to interacting effectively with our environment. Auditory localization is an important and complex faculty of the human auditory system. The brain must decode the acoustic signal to extract the cues that allow it to localize a sound source. These localization cues depend in part on morphological and environmental properties that cannot be anticipated by genetic encoding. The processing of these cues must therefore be tuned by experience during development. In adulthood, plasticity in auditory localization still exists. This plasticity has been studied at the behavioral level, but very little is known about its neural correlates and mechanisms. The present research aimed to examine this plasticity, as well as the mechanisms by which auditory localization cues are encoded, both behaviorally and through the neural correlates of the observed behavior. In the first two studies, we imposed a perceptual shift of horizontal auditory space using digital earplugs. We showed that young adults can rapidly adapt to a large perceptual shift. Using high-resolution functional MRI, we observed changes in auditory cortical activity accompanying this adaptation, in terms of hemispheric lateralization. We were also able to confirm the hemifield-code hypothesis as the representation of horizontal auditory space. In a third study, we altered the most important auditory cue for the perception of vertical space using silicone earmolds. We showed that adaptation to this alteration was followed by no aftereffect upon removal of the molds, even on the very first presentation of a sound stimulus. This result is consistent with the hypothesis of a many-to-one mapping mechanism, whereby several spectral profiles can be associated with the same spatial position. In a fourth study, using functional MRI and taking advantage of the adaptation to the silicone molds, we revealed the encoding of sound elevation in the human auditory cortex.
Abstract:
Matching theory and matching markets are a core component of modern economic theory and market design. This dissertation presents three original contributions to this area. The first essay constructs a matching mechanism in an incomplete-information matching market in which the positive assortative match is the unique efficient and unique stable match. The mechanism asks each agent in the matching market to reveal her privately known type. Through its novel payment rule, truthful revelation forms an ex post Nash equilibrium in this setting. This mechanism works in one-, two- and many-sided matching markets, thus offering the first mechanism to unify these matching markets under a single mechanism design framework. The second essay confronts a problem of matching in an environment in which no efficient and incentive-compatible matching mechanism exists because of matching externalities. I develop a two-stage matching game in which a contracting stage facilitates a subsequent, conditionally efficient and incentive-compatible Vickrey auction stage. Infinite repetition of this two-stage matching game enforces the contract in every period. This mechanism produces inequitably distributed social improvement: parties to the contract receive all of the gains and then some. The final essay demonstrates the existence of prices which stably and efficiently partition a single set of agents into firms and workers, and match those two sets to each other. This pricing system extends Kelso and Crawford's general equilibrium results in a labor market matching model and links one- and two-sided matching markets as well.
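A minimal sketch of the positive assortative match around which the first essay is built, assuming scalar types and two equal-sized sides; the function name, the data, and the omission of the essay's payment rule are illustrative assumptions rather than the dissertation's mechanism.

```python
# Sketch only: positive assortative matching by reported type.
# The essay's payment rule, which makes truthful revelation an
# ex post Nash equilibrium, is not modeled here.

def assortative_match(side_a, side_b):
    """Pair the k-th highest type on one side with the k-th highest on the other."""
    ranked_a = sorted(side_a, key=lambda agent: agent[1], reverse=True)
    ranked_b = sorted(side_b, key=lambda agent: agent[1], reverse=True)
    return [(a_name, b_name) for (a_name, _), (b_name, _) in zip(ranked_a, ranked_b)]

workers = [("w1", 0.9), ("w2", 0.4), ("w3", 0.7)]
firms = [("f1", 0.2), ("f2", 0.8), ("f3", 0.5)]
print(assortative_match(workers, firms))  # [('w1', 'f2'), ('w3', 'f3'), ('w2', 'f1')]
```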
Abstract:
The rank transform is a non-parametric technique which has recently been proposed for the stereo matching problem. The motivation behind its application to the matching problem is its invariance to certain types of image distortion and noise, as well as its amenability to real-time implementation. This paper derives an analytic expression for the process of matching using the rank transform, and then goes on to derive one constraint which must be satisfied for a correct match. This has been dubbed the rank order constraint, or simply the rank constraint. Experimental work has shown that this constraint is capable of resolving ambiguous matches, thereby improving matching reliability. This constraint was incorporated into a new algorithm for matching using the rank transform. The modified algorithm resulted in an increased proportion of correct matches for all test imagery used.
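For readers unfamiliar with the technique, here is a minimal sketch of the rank transform and of a rank-based matching cost, assuming grayscale images and a square comparison window; the window size, function names, and the sum-of-absolute-differences cost are illustrative choices, not the paper's exact algorithm or its rank constraint.

```python
import numpy as np

def rank_transform(img, win=5):
    """Rank transform: each pixel becomes the count of neighbours in a
    win x win window whose intensity is strictly below the centre pixel."""
    h, w = img.shape
    r = win // 2
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.count_nonzero(patch < img[y, x])
    return out

def rank_matching_cost(rank_left, rank_right, x, y, d, win=5):
    """Sum of absolute differences between rank-transformed windows,
    comparing left pixel (x, y) against the right image shifted by disparity d."""
    r = win // 2
    left_patch = rank_left[y - r:y + r + 1, x - r:x + r + 1]
    right_patch = rank_right[y - r:y + r + 1, x - d - r:x - d + r + 1]
    return int(np.abs(left_patch - right_patch).sum())
```

In a full matcher, the disparity for each pixel would be chosen to minimize this cost over a search range; the paper's rank constraint is then used to reject ambiguous candidates, but its exact form is not reproduced here.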
Abstract:
This paper examines whether innovation in market design can address persistent problems of housing choice and affordability in the ageing inner and middle suburbs of Australian cities. Despite policy consensus that urban intensification of these low density, ‘greyfield’ areas should be able to deliver positive social, economic and environmental outcomes, existing models of development have not increased housing stock or delivered adequate gains in sustainability, affordability or diversity of dwellings in greyfield localities. We argue that application of smart market and matching market principles to the supply of multi-unit housing can unlock land, reduce development costs and improve design.
Abstract:
The continuous growth of XML data poses a great concern in the area of XML data management. The need for processing large amounts of XML data brings complications to many applications, such as information retrieval, data integration and many others. One way of simplifying this problem is to break the massive amount of data into smaller groups by applying clustering techniques. However, XML clustering is an intricate task that may involve processing both the structure and the content of XML data in order to identify similar XML data. This research presents four clustering methods: two methods utilizing the structure of XML documents and the other two utilizing both the structure and the content. The two structural clustering methods have different data models: one is based on a path model and the other is based on a tree model. These methods employ rigid similarity measures which aim to identify corresponding elements between documents with different or similar underlying structure. The two clustering methods that utilize both the structural and content information vary in terms of how the structure and content similarity are combined. One clustering method calculates the document similarity by using a linear weighting combination strategy of structure and content similarities. The content similarity in this clustering method is based on a semantic kernel. The other method calculates the distance between documents by a non-linear combination of the structure and content of XML documents using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the structure-only clustering method based on the path model, as the tree similarity measure for the tree model does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of the content information on most test document collections. To further the research, the structural clustering method based on the tree model is extended and employed in XML transformation. The results from the experiments show that the proposed transformation process is faster than the traditional transformation system that translates and converts the source XML documents sequentially. Also, the schema matching process of XML transformation produces a better matching result in a shorter time.
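As an illustration of the linear weighting combination strategy mentioned above, here is a minimal sketch assuming structure and content similarity scores already normalized to [0, 1]; the weight alpha, the Jaccard-style path similarity, and the function names are illustrative assumptions, and the thesis's semantic kernel for content similarity is not reproduced.

```python
def path_similarity(paths_a, paths_b):
    """Illustrative structural similarity: Jaccard overlap of root-to-leaf path sets."""
    a, b = set(paths_a), set(paths_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def combined_similarity(struct_sim, content_sim, alpha=0.5):
    """Linear weighting combination: alpha weights structure, (1 - alpha) weights content."""
    return alpha * struct_sim + (1.0 - alpha) * content_sim

doc1_paths = ["book/title", "book/author/name", "book/year"]
doc2_paths = ["book/title", "book/author/name", "book/publisher"]
s = path_similarity(doc1_paths, doc2_paths)                # 2 shared paths of 4 -> 0.5
print(combined_similarity(s, content_sim=0.8, alpha=0.6))  # 0.6*0.5 + 0.4*0.8 = 0.62
```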
Abstract:
Non-use values (i.e. economic values assigned by individuals to ecosystem goods and services unrelated to current or future uses) provide one of the most compelling incentives for the preservation of ecosystems and biodiversity. Assessing the non-use values of non-users is relatively straightforward using stated preference methods, but the standard approaches for estimating the non-use values of users (stated decomposition) have substantial shortcomings which undermine the robustness of their results. In this paper, we propose a pragmatic interpretation of non-use values to derive estimates that capture their main dimensions, based on the identification of a willingness to pay for ecosystem protection beyond one's expected life. We empirically test our approach using a choice experiment conducted on coral reef ecosystem protection in two coastal areas of New Caledonia with different institutional, cultural, environmental and socio-economic contexts. We compute individual willingness-to-pay estimates, and derive individual non-use value estimates using our interpretation. We find that, at a minimum, estimates of non-use values may comprise between 25 and 40% of the mean willingness to pay for ecosystem preservation, less than has been found in most studies.
Abstract:
In its October 2003 report on the definition of disability used by the Social Security Administration’s (SSA’s) disability programs [i.e., Social Security Disability Insurance (SSDI) and Supplemental Security Income (SSI) for people with disabilities], the Social Security Advisory Board raises the issue of whether this definition is at odds with the concept of disability embodied in the Americans with Disabilities Act (ADA) and, more importantly, with the aspirations of people with disabilities to be full participants in mainstream social activities and lead fulfilling, productive lives. The Board declares that “the Nation must face up to the contradictions created by the existing definition of disability.” I wholeheartedly agree. Further, I have concluded that we have to make fundamental, conceptual changes to both how we define eligibility for economic security benefits, and how we provide those benefits, if we are ever to fulfill the promise of the ADA. To convince you of that proposition, I will begin by relating a number of facts that paint a very bleak picture – a picture of deterioration in the economic security of the population that the disability programs are intended to serve; a picture of programs that purport to provide economic security, but are themselves financially insecure and subject to cycles of expansion and cuts that undermine their purpose; a picture of programs that are facing their biggest expenditure crisis ever; and a picture of an eligibility determination process that is inefficient and inequitable -- one that rations benefits by imposing high application costs on applicants in an arbitrary fashion. I will then argue that the fundamental reason for this bleak picture is the conceptual definition of eligibility that these programs use – one rooted in a disability paradigm that social scientists, people with disabilities, and, to a substantial extent, the public have rejected as being flawed, most emphatically through the passage of the ADA. Current law requires eligibility rules to be based on the premise that disability is medically determinable. That’s wrong because, as the ADA recognizes, a person’s environment matters. I will further argue that programs relying on this eligibility definition must inevitably: reward people if they do not try to help themselves, but not if they do; push the people they serve out of society’s mainstream, fostering a culture of isolation and dependency; relegate many to a lifetime of poverty; and undermine their promise of economic security because of the periodic “reforms” that are necessary to maintain taxpayer support. I conclude by pointing out that to change the conceptual definition for program eligibility, we also must change our whole approach to providing for the economic security of people with disabilities. We need to replace our current “caretaker” approach with one that emphasizes helping people with disabilities help themselves. I will briefly describe features that such a program might require, and point out the most significant challenges we would face in making the transition.
Abstract:
Entrepreneurship, understood as the autonomous, effective pursuit of opportunities regardless of resources, is currently subject to a multitude of interests, expectations, and facilitation efforts. On the one hand, such entrepreneurial agency has broad appeal to individuals in Western market democracies and resonates with their longing for an autonomous, personally tailored, meaningful, and materially rewarding way of life. On the other hand, entrepreneurship represents a tempting and increasingly popular means of governance and policy making, and thus a model for the re-organization of a variety of societal sectors. This study focuses on the diffusion and reception of entrepreneurship discourse in the context of farming and agriculture, where pressures to adopt entrepreneurial orientations have been increasingly pronounced even though the context of farming has historically enjoyed state protection and adhered to principles that seem at odds with aspects of individualistic entrepreneurship discourse. The study presents an interpretation of the psychologically and politically appealing uses of the notion of entrepreneurial agency, reviews the historical and political background of the current situation of farming and agriculture with regard to entrepreneurship, and examines their relationships in four empirical studies. The study follows and develops a social psychological, situated relational approach that guides the qualitative analyses and interpretations of the empirical studies. Interviews with agents from the farm sector aim to stimulate evaluative responses and comments on the idea of entrepreneurship on farms. Analysis of the interview talk, in turn, detects the variety of evaluative responses and argumentative contexts with which the interviewees relate themselves to the entrepreneurship discourse and adopt, use, resist, or reject it. The study shows that despite the pressures towards entrepreneurialism, the diffusion of entrepreneurship discourse and the construction of entrepreneurial agency in the farm context encounter many obstacles. These obstacles can be variably related to aspects dealing with the individual agent, the action situation, the characteristics of the action itself, or to the broader social, institutional and cultural context. Many aspects of entrepreneurial agency, such as autonomy, personal initiative and achievement orientation, are nevertheless familiar to farmers and are eagerly related to one's own farming activities. The idea of entrepreneurship is thus rarely rejected outright. The findings highlight the relational and situational preconditions for the construction of entrepreneurial agency in the farm context: when agents demonstrate entrepreneurial agency, they do so by drawing on available and accessed relational resources characteristic of their action context. Likewise, when agents fail or are reluctant to demonstrate entrepreneurial agency, they nevertheless actively account for their situation and demonstrate personal agency by drawing on the relational resources available to them.
Abstract:
This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problem of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of making use of a network representation to describe the market of interest.
In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model, and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact so that eigenvector centrality emerges as the limiting case of our market equilibrium.
We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the "global" structure of the network, while paying less attention to patterns that are more "local". Mathematically, eigenvector centrality captures the relevance of players in the bargaining process using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
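A minimal numerical sketch of the relationship described above, assuming an undirected network given by a symmetric adjacency matrix: Bonacich centrality is computed with the standard formula c(beta) = (I - beta*A)^(-1) A 1, and as the decay parameter beta approaches 1/lambda_max the normalized centrality vector approaches the leading eigenvector. The mapping from the bargaining model's discount factor and matching technology to beta is the chapter's result and is not reproduced here; the example network and parameter values are illustrative assumptions.

```python
import numpy as np

def bonacich_centrality(A, beta):
    """Bonacich centrality c(beta) = (I - beta*A)^(-1) A 1; requires beta < 1/lambda_max."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - beta * A, A @ np.ones(n))

def eigenvector_centrality(A):
    """Unit-norm leading eigenvector of a symmetric adjacency matrix."""
    vals, vecs = np.linalg.eigh(A)
    v = np.abs(vecs[:, np.argmax(vals)])  # Perron vector can be taken nonnegative
    return v / np.linalg.norm(v)

# Illustrative line network 1 - 2 - 3 - 4: the two middle players are more central.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
lam_max = np.max(np.linalg.eigvalsh(A))
for beta in (0.1, 0.5 / lam_max, 0.99 / lam_max):
    c = bonacich_centrality(A, beta)
    print(round(beta, 3), np.round(c / np.linalg.norm(c), 3))
print("eigenvector:", np.round(eigenvector_centrality(A), 3))
```

As beta approaches 1/lambda_max, the normalized Bonacich vector lines up with the eigenvector output, which is the standard linear-algebra counterpart of the limiting result described in the chapter.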
As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers' and buyers' network positions.
Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure-strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions to guarantee existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.
Abstract:
Matching a new technology to an appropriate market is a major challenge for new technology-based firms (NTBF). Such firms are often advised to target niche-markets where the firms and their technologies can establish themselves relatively free of incumbent competition. However, technologies are diverse in nature and do not benefit from identical strategies. In contrast to many Information and Communication Technology (ICT) innovations which build on an established knowledge base for fairly specific applications, technologies based on emerging science are often generic and so have a number of markets and applications open to them, each carrying considerable technological and market uncertainty. Each of these potential markets is part of a complex and evolving ecosystem from which the venture may have to access significant complementary assets in order to create and sustain commercial value. Based on dataset and case study research on UK advanced material university spin-outs (USO), we find that, contrary to conventional wisdom, the more commercially successful ventures were targeting mainstream markets by working closely with large, established competitors during early development. While niche markets promise protection from incumbent firms, science-based innovations, such as new materials, often require the presence, and participation, of established companies in order to create value. © 2012 IEEE.
Abstract:
I have previously described psychophysical experiments that involved the perception of many transparent layers, corresponding to multiple matching, in doubly ambiguous random dot stereograms. Additional experiments are described in the first part of this paper. In one experiment, subjects were required to report the density of dots on each transparent layer. In another experiment, the minimal density of dots on each layer, which is required for the subjects to perceive it as a distinct transparent layer, was measured. The difficulties encountered by stereo matching algorithms, when applied to doubly ambiguous stereograms, are described in the second part of this paper. Algorithms that can be modified to perform consistently with human perception, and the constraints imposed on their parameters by human perception, are discussed.
Abstract:
Christoph Franz of Lufthansa recently identified Ryanair, easyJet, Air Berlin and Emirates as the company’s main competitors – gone are the days when it could benchmark itself against BA or Air France-KLM! This paper probes behind the headlines to assess the extent to which different airlines are in competition, using evidence from the UK and mainland European markets. The issue of route versus network competition is addressed. Many regulators have put an emphasis on the former whereas the latter, although less obvious, can be more relevant. For example, BA and American will cease to compete between London and Dallas Fort Worth if their alliance obtains anti-trust immunity but 80% of the passengers on this route are connecting at one or both ends and hence arguably belong to different markets (e.g. London-San Francisco, Zurich-Dallas, Edinburgh-New Orleans) which may be highly contested. The remaining 20% of local traffic is actually insufficient to support a single point to point service in its own right. Estimates are made of the seat capacity major airlines are offering to the local market as distinct from feeding other routes. On a sector such as Manchester–Amsterdam, 60% of KLM’s passengers are transferring at Schiphol as against only 1% of bmibaby’s. Thus although KLM operates 5 flights and 630 seats per day against bmibaby’s 2 flights and 298 seats, in the point to point market bmibaby offers more seats than KLM. The growth of the Low Cost Carriers (LCCs) means that competition increasingly needs to be viewed on city pair markets (e.g. London-Rome) rather than airport pair markets (e.g. Heathrow-Fiumicino). As the stronger LCCs drive out weaker rivals and mainline carriers retrench to their major hubs, some markets now have fewer direct options than existed prior to the low cost boom. Timings and frequencies are considered, in particular the extent to which services are a true alternative especially for business travellers. LCCs typically offer lower frequencies and more unsociable timings (e.g. late evening arrivals at remote airports) as they are more focused on providing the cheapest service rather than the most convenient schedule. Interesting findings on ‘monopoly’ services are presented (including alliances) - certain airlines have many more of these than others. Lufthansa has a significant number of sectors to itself whereas at the other extreme British Airways has direct competition on almost every route in its network. Ryanair and flybe have a higher proportion of monopoly routes than easyJet or Air Berlin. In the domestic US market it has become apparent since deregulation that better financial returns can come from dominating a large number of smaller markets rather than being heavily exposed in the major markets - which are hotly fought over. Regional niches that appear too thin for Ryanair to serve (with its all 189 seat 737-800 fleet) are identified. Fare comparisons in contrasting markets provide some insights to marketing and pricing strategies. Data sources used include OAG (schedules and capacity), AEA (traditional European airlines traffic by region), the UK CAA (airport, airline and route traffic plus survey information of passenger types) and ICAO (international route traffic and capacity by carrier). It is concluded that airlines often have different competitors depending on the context but in surprisingly many cases there are actually few or no direct substitutes. 
The competitive process set in train by deregulation of European air services in the 1990s is leading back to a market of natural monopolies and oblique alternatives. It is the names of the main participants that have changed, however!
Abstract:
According to UN Women, to build stronger economies it is essential to empower women to participate fully in economic life across all sectors. Increasing women's and girls' education enhances their chances of participating in the labor market. In certain cultures, such as Saudi Arabia, women's contribution to public economic growth is very limited. According to the World Bank, less than 20 percent of the female population participates in the labor force. This low participation rate has many causes. One of them is the level and quality of education available to females. Although Saudi Arabia has about thirty-three universities, opportunities are still limited for women because of the restrictions of access put upon them. A mixture of local norms, traditions, social beliefs, and principles prevents women from receiving full benefit from the educational system. Gender segregation is one of the challenges that limits women's access to education. It causes a problem due to the shortage of female faculty throughout the country. To overcome this problem, male faculty are allowed to teach female students under certain regulations and following a certain method of education delivery and interaction. However, most of these methods lack face-to-face communication between the teacher and students, which lowers the level of interactivity and, accordingly, the students' engagement, and increases the need for other alternatives. The e-learning model is of high benefit for female students in such societies. Recognizing students' engagement is not straightforward in the e-learning model. To measure the level of engagement, the learner's mood or emotions should be taken into consideration to help understand and judge the level of engagement. This paper investigates the relationship between emotions and engagement in the e-learning environment, and how recognizing the learner's emotions and changing the content delivery accordingly can affect the efficiency of the e-learning process. The proposed experiment alluded to herein should help to find ways to increase learners' engagement and, hence, enhance the efficiency of the learning process and the quality of learning, which will increase the chances and opportunities for women in such societies to participate more effectively in the labor market.
Abstract:
Violence has always been a part of the human experience, and therefore, a popular topic for research. It is a controversial issue, mostly because the possible sources of violent behaviour are so varied, encompassing both biological and environmental factors. However, very little disagreement is found regarding the severity of this societal problem. Most researchers agree that the number and intensity of aggressive acts among adults and children is growing. Not surprisingly, many educational policies, programs, and curricula have been developed to address this concern. The research favours programs which address the root causes of violence and seek to prevent rather than provide consequences for the undesirable behaviour. But what makes a violence prevention program effective? How should educators choose among the many curricula on the market? After reviewing the literature surrounding violence prevention programs and their effectiveness, the Second Step Violence Prevention Curriculum surfaced as unique in many ways. It was designed to address the root causes of violence in an active, student-centred way. Empathy training, anger management, interpersonal cognitive problem solving, and behavioural social skills form the basis of this program. Published in 1992, the program has been the topic of limited research, almost entirely carried out using quantitative methodologies. The purpose of this study was to understand what happens when the Second Step Violence Prevention Curriculum is implemented with a group of students and teachers. I was not seeking a statistical correlation between the frequency of violence and program delivery, as in most prior research. Rather, I wished to gain a deeper understanding of the impact of the program through the eyes of the participants. The Second Step Program was taught to a small, primary-level, general learning disabilities class by a teacher and student teacher. Data were gathered using interviews with the teachers, personal observations, staff reports, and my own journal. Common themes across the four types of data collection emerged during the study, and these themes were isolated and explored for meaning. Findings indicate that the program does not offer a "quick fix" to this serious problem. However, several important discoveries were made. The teachers felt that the program was effective despite a lack of concrete evidence to support this claim. They used the Second Step strategies outside their actual instructional time and felt it made them better educators and disciplinarians. The students did not display a marked change in their behaviour during or after the program implementation, but they were better able to speak about their actions, the source of their aggression, and the alternatives which were available. Although they were not yet transferring their knowledge into positive action, a heightened awareness was evident. Finally, staff reports and my own journal led me to a deeper understanding of how perception frames reality. The perception that the program was working led everyone to feel more empowered when a violent incident occurred, and efforts were made to address the cause rather than merely to offer consequences. A general feeling that we were addressing the problem in a productive way was prevalent among the staff and students involved. The findings from this investigation have many implications for research and practice.
Further study into the realm of violence prevention is greatly needed, using a balance of quantitative and qualitative methodologies. Such a serious problem can only be effectively addressed with a greater understanding of its complexities. This study also demonstrates the overall positive impact of the Second Step Violence Prevention Curriculum and, therefore, supports its continued use in our schools.