51 results for "gain with selection"


Relevance: 30.00%

Abstract:

Interviewing in professional labor markets is a costly process for firms. Moreover, poor screening can have a persistent negative impact on firms' bottom lines and candidates' careers. In a simple dynamic model where firms can pay a cost to interview applicants who have private information about their own ability, potentially large inefficiencies arise from information-based unemployment, in which able workers are rejected by firms because of their lack of offers in previous interviews. This effect may make the market less efficient than random matching. We show that the first best can be achieved using either a mechanism with transfers or one without transfers.

Relevance: 30.00%

Abstract:

For many goods (such as experience goods or addictive goods), consumers' preferences may change over time. In this paper, we examine a monopolist's optimal pricing schedule when current consumption can affect a consumer's valuation in the future and valuations are unobservable. We assume that consumers are anonymous, i.e., the monopolist cannot observe a consumer's past consumption history. For myopic consumers, the optimal consumption schedule is distorted upwards, involving substantial discounts for low-valuation types. This pushes low types into higher valuations, from which rents can be extracted. For forward-looking consumers, there may be a further upward distortion of consumption due to a reversal of the adverse selection effect; low-valuation consumers now have a strong interest in consumption in order to increase their valuations. Firms will find it profitable to educate consumers and encourage forward-looking behavior.

Relevance: 30.00%

Abstract:

We present a new general concentration-of-measure inequality and illustrate its power with applications in random combinatorics. The results have direct applications to several problems in learning theory.
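To fix ideas on what a concentration-of-measure inequality asserts, here is a quick empirical check of the classical Hoeffding bound for Bernoulli averages (a textbook inequality, not the new one presented in this paper):

```python
import numpy as np

# Hoeffding: P(|S_n/n - p| >= t) <= 2*exp(-2*n*t**2) for n i.i.d. Bernoulli(p)
# draws. We verify the bound empirically on many simulated sample means.
rng = np.random.default_rng(0)
n, p, t, trials = 200, 0.5, 0.1, 10_000
means = rng.binomial(n, p, size=trials) / n           # 10,000 sample means
empirical = np.mean(np.abs(means - p) >= t)           # observed tail frequency
bound = 2 * np.exp(-2 * n * t**2)                     # Hoeffding upper bound
assert empirical <= bound                             # bound holds (loosely)
```

The observed tail frequency is far below the bound here, which is typical: Hoeffding is distribution-free, and sharper inequalities of the kind studied in this literature aim to close that gap.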

Relevance: 30.00%

Abstract:

In this paper, we present a matching model with adverse selection that explains why flows into and out of unemployment are much lower in Europe compared to North America, while employment-to-employment flows are similar in the two continents. In the model, firms use discretion in terms of whom to fire and, thus, low-quality workers are more likely to be dismissed than high-quality workers. Moreover, as hiring and firing costs increase, firms find it more costly to hire a bad worker and, thus, they prefer to hire out of the pool of employed job seekers rather than out of the pool of the unemployed, who are more likely to turn out to be 'lemons'. We use microdata for Spain and the U.S. and find that the ratio of the job finding probability of the unemployed to the job finding probability of employed job seekers was smaller in Spain than in the U.S. Furthermore, using U.S. data, we find that discrimination against the unemployed increased over the 1980s in those states that raised firing costs by introducing exceptions to the employment-at-will doctrine.

Relevance: 30.00%

Abstract:

Kahneman and Tversky asserted a fundamental asymmetry between gains and losses, namely a reflection effect, which occurs when an individual prefers a sure gain of $pz to an uncertain gain of $z with probability p, while preferring an uncertain loss of $z with probability p to a certain loss of $pz. We focus on this class of choices (actuarially fair) and explore the extent to which the reflection effect, understood as occurring at a range of wealth levels, is compatible with single-self preferences. We decompose the reflection effect into two components: a probability switch effect, which is compatible with single-self preferences, and a translation effect, which is not. To argue the first point, we analyze two classes of single-self, nonexpected utility preferences, which we label homothetic and weakly homothetic. In both cases, we characterize the switch effect as well as the dependence of risk attitudes on wealth. We also discuss two types of utility functions of a form reminiscent of expected utility but with distorted probabilities. Type I always distorts the probability of the worst outcome downwards, yielding attraction to small risks for all probabilities. Type II distorts low probabilities upwards and high probabilities downwards, implying risk aversion when the probability of the worst outcome is low. By combining homothetic or weakly homothetic preferences with Type I or Type II distortion functions, we present four explicit examples: all four display a switch effect and, hence, a form of reflection effect consistent with single-self preferences.
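The reflection effect itself is easy to reproduce numerically. The sketch below uses a Kahneman-Tversky style power value function (an illustrative choice, not the preference classes analyzed in this paper): concave over gains, convex over losses, with parameters a, p, z picked for illustration.

```python
# Value function v(x) = x**a for gains, -(-x)**a for losses (a < 1 makes it
# concave over gains and convex over losses), as in prospect theory.
def v(x, a=0.88):
    return x**a if x >= 0 else -((-x) ** a)

p, z = 0.5, 100.0
sure_gain   = v(p * z)      # sure gain of $pz
gamble_gain = p * v(z)      # gain of $z with probability p
sure_loss   = v(-p * z)     # sure loss of $pz
gamble_loss = p * v(-z)     # loss of $z with probability p

# Reflection: the sure gain is preferred to the gamble over gains, while the
# gamble is preferred to the sure loss over losses.
assert sure_gain > gamble_gain
assert gamble_loss > sure_loss
```

Both choices are actuarially fair (expected value $pz either way), so the preference reversal comes entirely from the curvature of v.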

Relevance: 30.00%

Abstract:

We analyse credit market equilibrium when banks screen loan applicants. When banks have a convex cost function of screening, a pure strategy equilibrium exists where banks optimally set interest rates at the same level as their competitors. This result complements Broecker's (1990) analysis, where he demonstrates that no pure strategy equilibrium exists when banks have zero screening costs. In our setup we show that interest rates on loans are largely independent of marginal costs, a feature consistent with the extant empirical evidence. In equilibrium, banks make positive profits in our model in spite of the threat of entry by inactive banks. Moreover, an increase in the number of active banks increases credit risk and so does not improve credit market efficiency: this point has important regulatory implications. Finally, we extend our analysis to the case where banks have differing screening abilities.

Relevance: 30.00%

Abstract:

A method is offered that makes it possible to apply generalized canonical correlation analysis (CANCOR) to two or more matrices of different row and column order. The new method optimizes the generalized canonical correlation analysis objective by considering only the observed values. This is achieved by employing selection matrices. We present and discuss fit measures to assess the quality of the solutions. In a simulation study we assess the performance of our new method and compare it to an existing procedure called GENCOM, proposed by Green and Carroll. We find that our new method outperforms the GENCOM algorithm both with respect to model fit and recovery of the true structure. Moreover, as our new method does not require any type of iteration, it is easier to implement and requires less computation. We illustrate the method by means of an example concerning the relative positions of the political parties in the Netherlands based on provincial data.
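For readers unfamiliar with the objective being generalized, here is a minimal sketch of classical two-block canonical correlation analysis via QR and SVD (the standard textbook method, not this paper's generalized procedure with selection matrices):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between the column spaces of X and Y:
    center each block, orthonormalize with a reduced QR, and take the
    singular values of the cross-product of the orthonormal bases."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
Y = X @ rng.normal(size=(3, 2))      # Y is an exact linear transform of X
print(canonical_correlations(X, Y))  # → both correlations ≈ 1
```

Since Y lies exactly in the column space of X here, both canonical correlations equal 1 up to rounding; with noisy or partially observed blocks (the setting of this paper) they fall below 1.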

Relevance: 30.00%

Abstract:

That individuals contribute in social dilemma interactions even when contributing is costly is a well-established observation in the experimental literature. Since a contributor is always strictly worse off than a non-contributor, the question arises whether an intrinsic motivation to contribute can survive in an evolutionary setting. Using recent results on deterministic approximation of stochastic evolutionary dynamics, we give conditions for equilibria with a positive number of contributors to be selected in the long run.

Relevance: 30.00%

Abstract:

We perform an experiment on a pure coordination game with uncertainty about the payoffs. Our game is closely related to models that have been used in many macroeconomic and financial applications to solve problems of equilibrium indeterminacy. In our experiment each subject receives a noisy signal about the true payoffs. This game has a unique strategy profile that survives the iterative deletion of strictly dominated strategies (thus a unique Nash equilibrium). The equilibrium outcome coincides, on average, with the risk-dominant equilibrium outcome of the underlying coordination game. The behavior of the subjects converges to the theoretical prediction after enough experience has been gained. The data (and the comments) suggest that subjects do not apply the iterated deletion of dominated strategies through "a priori" reasoning. Instead, they adapt to the responses of other players. Thus, the length of the learning phase clearly varies for the different signals. We also test behavior in a game without uncertainty as a benchmark case. The game with uncertainty is inspired by the "global" games of Carlsson and Van Damme (1993).
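The solution concept used above, iterated deletion of strictly dominated strategies, can be sketched for a plain bimatrix game (an illustrative toy, not the experiment's global game with noisy signals):

```python
import numpy as np

def iesds(A, B):
    """Iterated deletion of strictly dominated pure strategies.
    A[i, j]: row player's payoff; B[i, j]: column player's payoff."""
    rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        for i in list(rows):   # drop rows strictly dominated by another row
            if any(all(A[k, j] > A[i, j] for j in cols) for k in rows if k != i):
                rows.remove(i)
                changed = True
        for j in list(cols):   # drop columns strictly dominated likewise
            if any(all(B[i, k] > B[i, j] for i in rows) for k in cols if k != j):
                cols.remove(j)
                changed = True
    return rows, cols

# Prisoner's dilemma: Defect (index 1) strictly dominates Cooperate (index 0),
# so a single strategy profile survives the deletion process.
A = np.array([[3, 0], [5, 1]])   # row player's payoffs
B = A.T                          # symmetric game
print(iesds(A, B))               # → ([1], [1])
```

When the surviving sets are singletons, as here, the game is dominance solvable and the survivor is the unique Nash equilibrium, which is the property the experimental game exploits.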

Relevance: 30.00%

Abstract:

It has long been standard in agency theory to search for incentive-compatible mechanisms on the assumption that people care only about their own material wealth. However, this assumption is clearly refuted by numerous experiments, and we feel that it may be useful to consider nonpecuniary utility in mechanism design and contract theory. Accordingly, we devise an experiment to explore optimal contracts in an adverse-selection context. A principal proposes one of three contract menus, each of which offers a choice of two incentive-compatible contracts, to two agents whose types are unknown to the principal. The agents know the set of possible menus, and choose either to accept one of the two contracts offered in the proposed menu or to reject the menu altogether; a rejection by either agent leads to lower (and equal) reservation payoffs for all parties. While all three possible menus favor the principal, they do so to varying degrees. We observe numerous rejections of the more lopsided menus, and approach an equilibrium where one of the more equitable contract menus (which one depends on the reservation payoffs) is proposed and agents accept a contract, selecting actions according to their types. Behavior is largely consistent with all recent models of social preferences, strongly suggesting there is value in considering nonpecuniary utility in agency theory.

Relevance: 30.00%

Abstract:

Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity-penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite-sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near-optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
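The overall shape of complexity-penalized model selection can be sketched with a toy sequence of nested polynomial classes and an ad hoc penalty (both illustrative; the paper's procedure builds empirical covers and a data-driven penalty instead):

```python
import numpy as np

# Noisy regression data: the classes F_1, F_2, ... are polynomials of
# increasing degree, so training risk alone always favors the largest class.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + rng.normal(0, 0.3, 200)

def empirical_risk(degree):
    coeffs = np.polyfit(x, y, degree)          # least-squares fit in F_degree
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

def penalty(degree, n=len(x)):
    return degree / n                          # toy complexity penalty

# Select the class minimizing empirical risk plus the complexity penalty.
best = min(range(1, 10), key=lambda d: empirical_risk(d) + penalty(d))
print(best)
```

Since the classes are nested, empirical risk is non-increasing in the degree; the penalty is what stops the selection from always picking the richest class. The paper's contribution is to estimate that penalty from the data itself rather than fixing it a priori.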

Relevance: 30.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical {\sc vc} dimension, empirical {\sc vc} entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
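The label-flipping equivalence mentioned above can be checked directly for a toy class of 1-D threshold classifiers (an illustrative class; the equivalence itself is general):

```python
import numpy as np

def err(theta, x, y):
    """0-1 error of the classifier sign(x - theta), labels in {-1, +1}."""
    pred = np.where(x > theta, 1, -1)
    return np.mean(pred != y)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 40)
y = np.where(rng.random(40) < 0.5, 1, -1)
n = len(x) // 2
thetas = np.concatenate(([x.min() - 1], np.unique(x)))  # candidate thresholds

# Direct maximization of the discrepancy between the two halves ...
disc_direct = max(err(t, x[:n], y[:n]) - err(t, x[n:], y[n:]) for t in thetas)

# ... equals 1 - 2 * (minimal empirical risk after flipping the first half's
# labels), because err(flipped) = (1 - err_half1 + err_half2) / 2 when the
# halves have equal size.
y_flip = np.concatenate([-y[:n], y[n:]])
disc_flip = 1 - 2 * min(err(t, x, y_flip) for t in thetas)

assert np.isclose(disc_direct, disc_flip)
```

This is why the maximal discrepancy penalty is computationally attractive: any empirical risk minimization routine for the class doubles as a routine for computing the penalty.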

Relevance: 30.00%

Abstract:

A stratified study of microhabitat use by grey mullet on the island of Minorca (Balearic archipelago, western Mediterranean) showed that the distribution of all the species was dramatically affected by salinity. Sites with a salinity level under 15 were positively selected in spring and summer by those species whose growth performance was best in oligomesohaline water (Liza ramado and Mugil cephalus), but also by a species whose growth was not affected by salinity (Chelon labrosus). Liza aurata concentrated in polyhaline and euhaline sites, where growth was improved, a pattern also exhibited by Liza saliens. Both species avoided freshwater sites all year round. As a consequence, community structure was correlated with salinity. The electivity patterns reported above often disappeared in autumn, when most grey mullets migrate offshore.

Relevance: 30.00%

Abstract:

Hepatitis A virus (HAV), the prototype of genus Hepatovirus, has several unique biological characteristics that distinguish it from other members of the family Picornaviridae. Among these, the need for an intact eIF4G factor for the initiation of translation results in an inability to shut down host protein synthesis by a mechanism similar to that of other picornaviruses. Consequently, HAV must compete inefficiently for the cellular translational machinery, which may explain its poor growth in cell culture. In this context of virus/cell competition, HAV has strategically adopted a naturally highly deoptimized codon usage with respect to that of its cellular host. With the aim of optimizing its codon usage, the virus was adapted to propagate in cells with impaired protein synthesis, in order to make tRNA pools more available to the virus. The immediate response to the adaptation process was a significant loss of fitness, which was, however, later recovered and was associated with a re-deoptimization rather than an optimization of codon usage, specifically in the capsid coding region. These results exclude translation selection and instead suggest fine-tuning of translation kinetics selection as the underlying mechanism of the codon usage bias in this genome region. Additionally, the results provide clear evidence of Red Queen dynamics of evolution, since the virus evolved to re-adapt its codon usage to the changing cellular conditions in order to recover its original fitness.

Relevance: 30.00%

Abstract:

Background: Drug dosing errors are common in renally impaired patients. Appropriate dosing adjustment and drug selection are important to ensure patients' safety and to avoid adverse drug effects and poor outcomes. There are few studies on this issue in community pharmacies. The aims of this study were, firstly, to determine the prevalence of dosing inadequacy as a consequence of renal impairment in patients over 65 taking 3 or more drug products who were being attended in community pharmacies and, secondly, to evaluate the effectiveness of the community pharmacist's intervention in improving dosing inadequacy in these patients when compared with usual care. Methods: The study was carried out in 40 Spanish community pharmacies. The study had two phases: the first, with an observational, multicentre, cross-sectional design, served to determine the dosing inadequacy and the drug-related problems per patient, and to obtain the control group. The second phase, a controlled study with a historical control group, was the intervention phase. When dosing adjustments were needed, the pharmacists made recommendations to the physicians. A comparison was made between the control and the intervention group regarding the prevalence of drug dosing inadequacy and the mean number of drug-related problems per patient. Results: The mean prevalence of drug dosing inadequacy was 17.5% [95% CI 14.6-21.5] in phase 1 and 15.5% [95% CI 14.5-16.6] in phase 2. The mean number of drug-related problems per patient was 0.7 [95% CI 0.5-0.8] in phase 1 and 0.5 [95% CI 0.4-0.6] in phase 2.
The difference in the prevalence of dosing inadequacy between the control and intervention groups before the pharmacists' intervention was 0.73% [95% CI (−6.0)-7.5], and after the pharmacists' intervention it was 13.5% [95% CI 8.0-19.5] (p < 0.001), while the difference in the mean number of drug-related problems per patient before the pharmacists' intervention was 0.05 [95% CI (−0.2)-0.3], and following the intervention it was 0.5 [95% CI 0.3-0.7] (p < 0.001). Conclusion: A drug dosing adjustment service for elderly patients with renal impairment in community pharmacies can increase the proportion of adequate drug dosing and reduce the number of drug-related problems per patient. Collaborative practice with physicians can improve these results.