922 results for selection model


Relevance:

30.00%

Publisher:

Abstract:

The maintenance of genetic variation in a spatially heterogeneous environment has been one of the main research themes in theoretical population genetics. Despite considerable progress in understanding the consequences of spatially structured environments on genetic variation, many problems remain unsolved. One of them concerns the relationship between the number of demes, the degree of dominance, and the maximum number of alleles that can be maintained by selection in a subdivided population. In this work, we study the potential of maintaining genetic variation in a two-deme model with a deme-independent degree of intermediate dominance, which includes absence of G x E interaction as a special case. We present a thorough numerical analysis of a two-deme three-allele model, which allows us to identify dominance and selection patterns that harbor the potential for stable triallelic equilibria. The information gained by this approach is then used to construct an example in which existence and asymptotic stability of a fully polymorphic equilibrium can be proved analytically. Notably, in this example the parameter range in which three alleles can coexist is maximized for intermediate migration rates. Our results can be interpreted in a specialist-generalist context and show (among other things) when two specialists can coexist with a generalist in two demes if the degree of dominance is deme independent and intermediate. The dominance relation between the generalist allele and the specialist alleles plays a decisive role. We also discuss linear selection on a quantitative trait and show that G x E interaction is not necessary for the maintenance of more than two alleles in two demes.
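As a rough illustration of the kind of dynamics analyzed here, the following sketch iterates viability selection and symmetric migration for three alleles in two demes with a deme-independent intermediate dominance coefficient. It is a toy recursion under assumed parameter values (allelic values `alpha`, dominance `h`, migration rate `m`), not the paper's analytical example.

```python
import numpy as np

# Toy recursion for three alleles (A1, A2, A3) in two demes: viability
# selection within each deme followed by symmetric migration. Allelic values,
# the deme-independent dominance coefficient h, and the migration rate m are
# assumed for illustration; whether all three alleles persist depends on them.

alpha = np.array([[1.00, 0.95, 0.90],   # allelic values in deme 1
                  [0.90, 0.95, 1.00]])  # reversed ranking in deme 2 (two specialists, one generalist)
h = 0.5                                  # intermediate, deme-independent dominance
m = 0.05                                 # migration rate between the demes

def genotype_fitness(a):
    """Fitness matrix w[i, j] of genotype Ai Aj in one deme."""
    w = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            lo, hi = sorted((a[i], a[j]))
            w[i, j] = lo + h * (hi - lo)   # heterozygote lies between the homozygotes
    return w

W = [genotype_fitness(alpha[d]) for d in range(2)]
p = np.array([[0.4, 0.3, 0.3],           # allele frequencies p[deme, allele]
              [0.3, 0.3, 0.4]])

for _ in range(5000):
    p_sel = np.empty_like(p)
    for d in range(2):                    # selection under random mating
        marginal = W[d] @ p[d]            # marginal fitness of each allele
        p_sel[d] = p[d] * marginal / (p[d] @ marginal)
    p = (1 - m) * p_sel + m * p_sel[::-1] # symmetric migration between the demes

print("allele frequencies after 5000 generations:\n", p.round(4))
```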

Relevance:

30.00%

Publisher:

Abstract:

Competitive Market Segmentation

Abstract: In a two-firm model where each firm sells a high-quality and a low-quality version of a product, customers differ with respect to their brand preferences and their attitudes towards quality. We show that the standard result of quality-independent markups crucially depends on the assumption that the customers' valuation of quality is identical across firms. Once we relax this assumption, competition across qualities leads to second-degree price discrimination. We find that markups on low-quality products are higher if consuming a low-quality product involves a firm-specific disutility. Likewise, markups on high-quality products are higher if consuming a high-quality product creates a firm-specific surplus.

Selection upon Wage Posting

Abstract: We discuss a model of a job market where firms announce salaries and then decide, based on the evaluation of a productivity test, whether to hire applicants. Candidates for a job are locked in once they have applied to a given employer. Hence, such a market exhibits a specific form of the bargain-then-ripoff principle. With a single firm, the outcome is efficient. Under competition, what might be called "positive selection" leads to market failure. Our model thus provides a rationale for very small employment probabilities in some sectors.

Exclusivity Clauses: Enhancing Competition, Raising Prices

Abstract: In a setting where retailers and suppliers compete for each other by offering binding contracts, exclusivity clauses serve as a competitive device. As a result of these clauses, firms addressed by contracts only accept the most favorable deal. Thus the contract-issuing parties have to squeeze their final customers and transfer the surplus within the vertical supply chain. We elaborate to what extent the resulting allocation depends on the sequence of play and discuss the implications of a ban on exclusivity clauses.

Relevance:

30.00%

Publisher:

Abstract:

It is not sufficiently understood why some lineages of cichlid fishes have proliferated in the Great Lakes of East Africa much more than anywhere else in the world, and much faster than other cichlid lineages or any other group of freshwater fish. Recent field and experimental work on Lake Victoria haplochromines suggests that mate choice-mediated disruptive sexual selection on coloration, which can cause speciation even in the absence of geographical isolation, may explain it. We summarize the evidence and propose a hypothesis for the genetics of coloration that may help to understand the phenomenon. By defining colour patterns by hue and arrangement of hues on the body, we could assign almost all observed phenotypes of Lake Victoria cichlids to one of three female («plain», «orange blotched», «black and white») and three male («blue», «red-ventrum», «red-dorsum») colour patterns. These patterns diagnose species but frequently also co-occur as morphs within the same population, where they are associated with variation in mate preferences and appear to be transient stages in speciation. The male patterns in particular occur in almost every genus of the species flock. We propose that the patterns and their association into polymorphisms express an ancestral trait that is retained across speciation. Our model for male colour pattern assumes two structural loci. When both are switched off, the body is blue. When switched on by a cascade of polymorphic regulatory genes, one expresses a yellow to red ventrum, the other a yellow to red dorsum. The expression of colour variation initiates speciation. The blue daughter species will inherit the variation at the regulatory genes, which can, without new mutational events, purely by recombination, again expose the colour polymorphism, starting the process anew. Very similar colour patterns also dominate among the Mbuna of Lake Malawi. In contrast, similar colour polymorphisms do not exist in the lineages that have not proliferated in the Great Lakes. The colour pattern polymorphism may be an ancient trait in the lineage (or lineages) that gave rise to the two large haplochromine radiations. We propose two tests of our hypothesis.
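The proposed two-locus switch can be summarized as a simple mapping from locus states to colour patterns. The sketch below is a toy encoding for illustration only; the function name and the on/off genotype representation are assumptions, not taken from the paper.

```python
# Toy encoding of the proposed two-locus switch for male colour patterns.
# Both loci off -> blue; each locus, when switched on by the regulatory
# cascade, expresses a yellow-to-red ventrum or dorsum, respectively.

def male_colour_pattern(ventrum_locus_on: bool, dorsum_locus_on: bool) -> str:
    """Map the on/off state of the two structural loci to a colour pattern."""
    if not ventrum_locus_on and not dorsum_locus_on:
        return "blue"                       # both structural loci switched off
    parts = []
    if ventrum_locus_on:
        parts.append("red-ventrum")         # yellow-to-red ventrum expressed
    if dorsum_locus_on:
        parts.append("red-dorsum")          # yellow-to-red dorsum expressed
    return " + ".join(parts)

for v in (False, True):
    for d in (False, True):
        print(f"ventrum on={v}, dorsum on={d} -> {male_colour_pattern(v, d)}")
```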

Relevance:

30.00%

Publisher:

Abstract:

Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker states to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good.
Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is known only to the seller. Indeed, without the possibility to make repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, which reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information. In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to crucially depend on (i) the degree to which players can renegotiate and gradually build up agreements and (ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
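The role of per-round time costs in Chapter 1 can be illustrated with a back-of-the-envelope calculation: if a low-quality seller is only deterred from holding out when the price increment between consecutive offers does not exceed the delay cost, the buyer must climb to the high-quality price in many small steps. The numbers and the "price ladder" logic below are assumptions for illustration, not the chapter's equilibrium construction.

```python
import math

# Back-of-the-envelope sketch of why repeated offers with per-round time costs
# can screen seller types. All numbers are illustrative assumptions.

c_low, c_high = 2.0, 8.0   # reservation values of low- and high-quality sellers
delta = 0.5                # time cost each party bears per rejected offer

# A low-quality seller accepts the current offer rather than holding out for
# the next, higher one only if the price increment does not exceed the delay
# cost. The buyer therefore has to climb from c_low to c_high in small steps.
rounds_needed = math.ceil((c_high - c_low) / delta)
print(f"offer rounds needed to reach the high-quality price: {rounds_needed}")
print(f"delay cost accumulated along the way: {rounds_needed * delta:.1f}")
```

A smaller per-round cost `delta` lengthens the price ladder, which is one intuition for the over-delay observed in the experiment.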

Relevance:

30.00%

Publisher:

Abstract:

Theoretical and empirical studies were conducted on the pattern of nucleotide and amino acid substitution in evolution, taking into account the effects of mutation at the nucleotide level and purifying selection at the amino acid level. A theoretical model for predicting the evolutionary change in electrophoretic mobility of a protein was also developed by using information on the pattern of amino acid substitution. The specific problems studied and the main results obtained are as follows: (1) Estimation of the pattern of nucleotide substitution in nuclear DNA genomes. The patterns of point mutations and of nucleotide substitutions among the four different nucleotides are inferred from the evolutionary changes of pseudogenes and functional genes, respectively. Both patterns are non-random, the rate of change varying considerably with nucleotide pair, and in both cases transitions occur somewhat more frequently than transversions. In protein evolution, substitution occurs more often between amino acids with similar physico-chemical properties than between dissimilar amino acids. (2) Estimation of the pattern of nucleotide substitution in RNA genomes. The majority of mutations in retroviruses accumulate at the reverse transcription stage. Selection at the amino acid level is very weak, and almost non-existent between synonymous codons. The pattern of mutation is very different from that in DNA genomes. Nevertheless, the pattern of purifying selection at the amino acid level is similar to that in DNA genomes, although selection intensity is much weaker. (3) Evaluation of the determinants of molecular evolutionary rates in protein-coding genes. Based on rates of nucleotide substitution for mammalian genes, the rate of amino acid substitution of a protein is determined by its amino acid composition. The content of glycine is shown to correlate strongly and negatively with the rate of substitution. Empirical formulae, called indices of mutability, are developed in order to predict the rate of molecular evolution of a protein from data on its amino acid sequence. (4) Studies on the evolutionary patterns of electrophoretic mobility of proteins. A theoretical model was constructed that predicts the electric charge of a protein at any given pH, and its isoelectric point, from data on its primary and quaternary structures. Using this model, the evolutionary change in electrophoretic mobilities of different proteins and the expected amount of electrophoretically hidden genetic variation were studied. In the absence of selection for the pI value, proteins will on average evolve toward a mildly basic pI. (Abstract shortened with permission of author.)
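The charge-prediction component described in (4) rests on standard titration arithmetic: the net charge of a protein at a given pH follows from the Henderson-Hasselbalch relation applied to each ionizable group. The sketch below is a generic version of such a calculation; the pKa table, the toy sequence, and the bisection search for the isoelectric point are assumptions of this sketch, not the dissertation's model.

```python
# Generic net-charge and isoelectric-point calculation from amino acid
# composition (single chain). pKa values and the toy sequence are assumptions.

PKA = {"D": 3.65, "E": 4.25, "C": 8.3, "Y": 10.1,   # acidic side chains
       "H": 6.0, "K": 10.5, "R": 12.5,              # basic side chains
       "Nterm": 9.0, "Cterm": 2.0}                  # chain termini

def net_charge(sequence: str, ph: float) -> float:
    """Net charge of the protein at the given pH (Henderson-Hasselbalch)."""
    charge = 1.0 / (1.0 + 10 ** (ph - PKA["Nterm"]))      # N-terminus (+)
    charge -= 1.0 / (1.0 + 10 ** (PKA["Cterm"] - ph))     # C-terminus (-)
    for aa in sequence:
        if aa in ("K", "R", "H"):                          # basic residues (+)
            charge += 1.0 / (1.0 + 10 ** (ph - PKA[aa]))
        elif aa in ("D", "E", "C", "Y"):                   # acidic residues (-)
            charge -= 1.0 / (1.0 + 10 ** (PKA[aa] - ph))
    return charge

def isoelectric_point(sequence: str) -> float:
    """pI found by bisection on the monotonically decreasing charge curve."""
    lo, hi = 0.0, 14.0
    while hi - lo > 1e-4:
        mid = 0.5 * (lo + hi)
        if net_charge(sequence, mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

seq = "MKKDEYARRH"   # toy sequence
print(f"charge at pH 7: {net_charge(seq, 7.0):.3f}, pI: {isoelectric_point(seq):.2f}")
```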

Relevance:

30.00%

Publisher:

Abstract:

Natural selection is one of the major factors in the evolution of all organisms. Detecting the signature of natural selection has been a central theme in evolutionary genetics. With the availability of microsatellite data, it is of interest to study how natural selection can be detected with microsatellites.

The overall aim of this research is to detect signatures of natural selection with data on genetic variation at microsatellite loci. The null hypothesis to be tested is the neutral mutation theory of molecular evolution, which states that different alleles at a locus have equivalent effects on fitness. Currently used tests of this hypothesis based on data on genetic polymorphism in natural populations presume that mutations at the loci follow the infinite allele/site models (IAM, ISM), in the sense that at each site at most one mutation event is recorded, and each mutation leads to an allele not seen before in the population. Microsatellite loci, which are abundant in the genome, do not obey these mutation models, since new alleles at such loci can be created either by contraction or by expansion of the tandem repeat size of core motifs. Since the current genome map is mainly composed of microsatellite loci and this class of loci is still the most commonly studied in the context of human genome diversity, this research explores how the current test procedures for testing the neutral mutation hypothesis should be modified to take into account a generalized model of forward-backward stepwise mutations. In addition, recent literature also suggests that the past demographic history of populations, the presence of population substructure, and varying rates of mutation across loci all have confounding effects on detecting signatures of natural selection.

The effects of the stepwise mutation model and these other confounding factors on detecting signatures of natural selection are the main results of the research.
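A minimal simulation can illustrate why the mutation model matters for such tests. The sketch below (illustrative parameter values only, not part of the dissertation) contrasts allelic diversity under a single-step stepwise mutation model with the infinite allele model in a Wright-Fisher population; under the SMM, mutation can recreate existing allele sizes, so fewer distinct alleles segregate at the same mutation rate.

```python
import random
from collections import Counter

# Toy Wright-Fisher simulation contrasting the stepwise mutation model (SMM),
# appropriate for microsatellites, with the infinite allele model (IAM).
# Population size, mutation rate, and run length are illustrative assumptions.

N, MU, GENERATIONS = 200, 1e-3, 2000
random.seed(1)

def simulate(model: str) -> list:
    pop = [10] * (2 * N)          # allele states (repeat counts or labels)
    next_label = 11               # next brand-new allele label under the IAM
    for _ in range(GENERATIONS):
        pop = [random.choice(pop) for _ in range(2 * N)]      # genetic drift
        for i in range(2 * N):
            if random.random() < MU:
                if model == "SMM":
                    pop[i] += random.choice((-1, 1))          # +/- one repeat unit
                else:
                    pop[i] = next_label                       # allele never seen before
                    next_label += 1
    return pop

for model in ("SMM", "IAM"):
    pop = simulate(model)
    freqs = [c / len(pop) for c in Counter(pop).values()]
    het = 1 - sum(f * f for f in freqs)                       # expected heterozygosity
    print(f"{model}: {len(freqs)} alleles, heterozygosity = {het:.3f}")
```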

Relevance:

30.00%

Publisher:

Abstract:

With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies. I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r² measure of linkage disequilibrium is preferred over the common D′ measure for use in genotype-phenotype association studies. With step-wise linear regression, the best predictor of the quantitative trait was usually not the single functional mutation; rather, it was a SNP in high linkage disequilibrium with the functional mutation. Next, I compared three strategies for selecting SNPs for application to phenotype association studies: based on measures of linkage disequilibrium, based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected based on maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected based on low pair-wise LD. The data also indicate that for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to continuously increase the number of SNPs in order to improve statistical power. When typing large numbers of SNPs, researchers are faced with the challenge of utilizing an appropriate statistical method that controls the type I error rate while maintaining adequate power. We show that an empirical genotype-based multi-locus global test that uses permutation testing to investigate the null distribution of the maximum test statistic maintains the desired overall type I error rate while not overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than the haplotype analysis. However, for more complex models, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
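For reference, the two pairwise LD measures compared in the dissertation can be computed directly from two-locus haplotype frequencies; the example frequencies below are assumptions for illustration.

```python
# Pairwise linkage disequilibrium measures D, D', and r^2 for two biallelic
# loci, computed from haplotype frequencies (example frequencies are assumed).

def ld_measures(p_ab, p_aB, p_Ab, p_AB):
    """Return (D, D_prime, r_squared) for haplotypes ab, aB, Ab, AB."""
    p_a = p_ab + p_aB              # frequency of allele a at locus 1
    p_b = p_ab + p_Ab              # frequency of allele b at locus 2
    D = p_ab - p_a * p_b           # raw disequilibrium coefficient
    if D >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(D) / d_max if d_max > 0 else 0.0
    r2 = D * D / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return D, d_prime, r2

D, d_prime, r2 = ld_measures(0.45, 0.05, 0.05, 0.45)
print(f"D = {D:.3f}, D' = {d_prime:.3f}, r^2 = {r2:.3f}")
```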

Relevance:

30.00%

Publisher:

Abstract:

A credit-rationing model similar to that of Stiglitz and Weiss [1981] is combined with the information externality model of Lang and Nakamura [1993] to examine the properties of mortgage markets characterized by both adverse selection and information externalities. In a credit-rationing model, additional information increases lenders' ability to distinguish risks, which leads to an increased supply of credit. According to Lang and Nakamura, a larger supply of credit leads to additional market activity and therefore greater information. The combination of these two propositions leads to a general equilibrium model, whose properties this paper describes. The paper provides another sufficient condition under which credit rationing falls with information: external information improves the accuracy of equity-risk assessments of properties, which reduces credit rationing. Contrary to intuition, this increased accuracy raises the mortgage interest rate. This clarifies the trade-offs associated with reduced credit rationing and the quality of the applicant pool.

Relevance:

30.00%

Publisher:

Abstract:

In the Practice Change Model, physicians act as key stakeholders: people who have both an investment in the practice and the capacity to influence how the practice performs. This leadership role is critical to the development and change of the practice. Leadership roles and effectiveness are an important factor in quality improvement in primary care practices.

The study involved a comparative case study analysis to identify leadership roles and the relationship between leadership roles and the number and type of quality improvement strategies adopted during a Practice Change Model-based intervention study. The research utilized secondary data from four primary care practices with various leadership styles. The practices are located in the San Antonio region and serve a large Hispanic population. The data were collected by two ABC Project Facilitators from each practice during a 12-month period and include Key Informant Interviews (all staff members), MAP (Multi-method Assessment Process), and Practice Facilitation field notes. These data were used to evaluate leadership styles, management within the practice, and the intervention tools that were implemented. The chief steps were (1) to analyze whether leader-member relations contribute to the type of quality improvement strategy or strategies selected, (2) to investigate whether leader-position power contributes to the number and type of strategies selected, and (3) to explore whether the task structure varies across the four primary care practices.

The research found that involving more members of the clinic staff in decision-making, building bridges between organizational staff and clinical staff, and task structure all directly influence the number and type of quality improvement strategies implemented in primary care practice.

Although this research investigated the leadership styles of only four practices, it offers future guidance on how to establish the priorities and implementation of quality improvement strategies that will have the greatest impact on patient care improvement.

Relevance:

30.00%

Publisher:

Abstract:

Strategies are compared for the development of a linear regression model with stochastic (multivariate normal) regressor variables and the subsequent assessment of its predictive ability. Bias and mean squared error of four estimators of predictive performance are evaluated in simulated samples of 32 population correlation matrices. Models including all of the available predictors are compared with those obtained using selected subsets. The subset selection procedures investigated include two stopping rules, C_p and S_p, each combined with an 'all possible subsets' or 'forward selection' of variables. The estimators of performance utilized include parametric (MSEP_m) and non-parametric (PRESS) assessments in the entire sample, and two data-splitting estimates restricted to a random or balanced (Snee's DUPLEX) 'validation' half sample. The simulations were performed as a designed experiment, with population correlation matrices representing a broad range of data structures.

The techniques examined for subset selection do not generally result in improved predictions relative to the full model. Approaches using 'forward selection' result in slightly smaller prediction errors and less biased estimators of predictive accuracy than 'all possible subsets' approaches, but no differences are detected between the performances of C_p and S_p. In every case, prediction errors of models obtained by subset selection in either of the half splits exceed those obtained using all predictors and the entire sample.

Only the random split estimator is conditionally (on β) unbiased; however, MSEP_m is unbiased on average and PRESS is nearly so in unselected (fixed form) models. When subset selection techniques are used, MSEP_m and PRESS always underestimate prediction errors, by as much as 27 percent (on average) in small samples. Despite their bias, the mean squared errors (MSE) of these estimators are at least 30 percent less than that of the unbiased random split estimator. The DUPLEX split estimator suffers from large MSE as well as bias, and seems of little value within the context of stochastic regressor variables.

To maximize predictive accuracy while retaining a reliable estimate of that accuracy, it is recommended that the entire sample be used for model development, and a leave-one-out statistic (e.g. PRESS) be used for assessment.
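Two of the quantities compared in this kind of study, Mallows' C_p for a candidate subset and the leave-one-out PRESS statistic, can be computed as in the sketch below. The simulated data-generating model and sample size are assumptions of this illustration, not the study's design.

```python
import numpy as np

# Mallows' Cp and the PRESS statistic for candidate regression subsets,
# evaluated on simulated data (the data-generating model is an assumption).

rng = np.random.default_rng(0)
n, k = 60, 5
X = rng.standard_normal((n, k))
y = X[:, :2] @ np.array([1.5, -1.0]) + rng.standard_normal(n)   # two true predictors

def fit(Xs):
    """OLS fit with intercept; returns the hat matrix and residuals."""
    Xd = np.column_stack([np.ones(n), Xs])
    H = Xd @ np.linalg.solve(Xd.T @ Xd, Xd.T)
    return H, y - H @ y

H_full, r_full = fit(X)
s2_full = r_full @ r_full / (n - (k + 1))        # error variance from the full model

def mallows_cp(cols):
    _, r = fit(X[:, cols])
    p = len(cols) + 1                            # parameters including intercept
    return (r @ r) / s2_full - n + 2 * p

def press(cols):
    H, r = fit(X[:, cols])
    return np.sum((r / (1 - np.diag(H))) ** 2)   # leave-one-out residuals

for cols in ([0, 1], [0, 1, 2], list(range(k))):
    print(cols, f"Cp = {mallows_cp(cols):.2f}", f"PRESS = {press(cols):.1f}")
```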

Relevance:

30.00%

Publisher:

Abstract:

Standardization is a common method for adjusting for confounding factors when comparing two or more exposure categories to assess excess risk. An arbitrary choice of standard population in standardization introduces selection bias due to the healthy worker effect. Small samples in specific groups also pose problems in estimating relative risk and assessing statistical significance. As an alternative, statistical models have been proposed to overcome such limitations and find adjusted rates. In this dissertation, a multiplicative model is considered to address the issues related to standardized indices, namely the Standardized Mortality Ratio (SMR) and the Comparative Mortality Factor (CMF). The model provides an alternative to conventional standardization techniques. Maximum likelihood estimates of the parameters of the model are used to construct an index, similar to the SMR, for estimating the relative risk of the exposure groups under comparison. A parametric bootstrap resampling method is used to evaluate the goodness of fit of the model, the behavior of the estimated parameters, and the variability in relative risk on generated samples. The model provides an alternative to both direct and indirect standardization methods.
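For context, the standardized mortality ratio that the multiplicative model is built around is the ratio of observed deaths to the deaths expected if reference-population rates applied to the study group's person-years; the stratum counts and rates below are illustrative assumptions.

```python
# Indirect standardization: standardized mortality ratio (SMR) from
# stratum-specific counts and reference rates (all numbers are assumed).

# (observed deaths, person-years) in the study group, by age stratum
study = {"40-49": (12, 10_000), "50-59": (30, 8_000), "60-69": (55, 5_000)}
# reference (standard) population death rates per person-year, same strata
reference_rates = {"40-49": 0.0010, "50-59": 0.0035, "60-69": 0.0100}

observed = sum(d for d, _ in study.values())
expected = sum(py * reference_rates[s] for s, (_, py) in study.items())
smr = observed / expected
print(f"observed = {observed}, expected = {expected:.1f}, SMR = {smr:.2f}")
```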

Relevance:

30.00%

Publisher:

Abstract:

When choosing among models to describe categorical data, the necessity to consider interactions makes selection more difficult. With just four variables, considering all interactions, there are 166 different hierarchical models and many more non-hierarchical models. Two procedures have been developed for categorical data that will produce the "best" subset or subsets of each model size, where size refers to the number of effects in the model. Both procedures are patterned after the Leaps and Bounds approach used by Furnival and Wilson for continuous data and do not generally require fitting all models. For hierarchical models, likelihood ratio statistics (G²) are computed using iterative proportional fitting, and "best" is determined by comparing, among models with the same number of effects, Pr(χ²_k ≥ G²_ij), where k is the degrees of freedom for the ith model of size j. To fit non-hierarchical as well as hierarchical models, a weighted least squares procedure has been developed.

The procedures are applied to published occupational data relating to the occurrence of byssinosis, and the results are compared to previously published analyses of the same data. The procedures are also applied to published data on symptoms in psychiatric patients and again compared to previously published analyses.

These procedures will make categorical data analysis more accessible to researchers who are not statisticians. They should also encourage more complex exploratory analyses of epidemiologic data and contribute to the development of new hypotheses for study.
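The likelihood ratio statistic used to rank models can be illustrated for the simplest hierarchical model, independence in a two-way table, where the expected counts are available in closed form (no iterative proportional fitting is needed); the counts below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

# Likelihood ratio statistic G^2 for the independence model in a two-way
# contingency table, compared against a chi-square reference distribution.
# The observed counts are illustrative assumptions.

observed = np.array([[20, 35, 45],
                     [30, 30, 40]], dtype=float)

expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
g2 = 2 * np.sum(observed * np.log(observed / expected))
df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
print(f"G^2 = {g2:.3f}, df = {df}, Pr(chi2_df >= G^2) = {chi2.sf(g2, df):.3f}")
```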

Relevance:

30.00%

Publisher:

Abstract:

The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans of natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power in discovering genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapid growth of gene network and protein-protein interaction databases accompanying the genomic era open avenues for enhancing the power of discovering genes under natural selection. The aim of this thesis is to explore and develop normal-mixture-model-based methods for leveraging gene network information to enhance the power of discovering the targets of natural selection. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model and the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover genes under moderate or weak selection that bridge the genes under strong selection, and it helps our understanding of the biology underlying complex diseases and related natural selection phenotypes.
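A minimal sketch of this kind of naïve Bayes combination adds, on the log scale, the posterior log odds from a two-component normal mixture of a per-gene selection statistic and a log likelihood ratio derived from a network Guilt-By-Association score. All distributions and parameter values below are assumptions for illustration, not the fitted values from the thesis.

```python
import numpy as np
from scipy.stats import norm

# Naive Bayes style combination of two evidence sources per gene:
#   (1) posterior log odds from a two-component normal mixture of statistic z,
#   (2) log likelihood ratio from a network Guilt-By-Association (GBA) score.
# All mixture and GBA distributions below are illustrative assumptions.

def mixture_log_odds(z, pi1=0.1, mu0=0.0, sd0=1.0, mu1=2.5, sd1=1.0):
    """Posterior log odds that a gene is under selection, given its statistic z."""
    like1 = pi1 * norm.pdf(z, mu1, sd1)          # "under selection" component
    like0 = (1 - pi1) * norm.pdf(z, mu0, sd0)    # neutral (null) component
    return np.log(like1 / like0)

def gba_log_lr(score, mu1=0.6, sd1=0.2, mu0=0.3, sd0=0.2):
    """Log likelihood ratio contributed by the network score."""
    return np.log(norm.pdf(score, mu1, sd1) / norm.pdf(score, mu0, sd0))

def combined_log_odds(z, score):
    # naive Bayes: conditionally independent evidence adds on the log-odds scale
    return mixture_log_odds(z) + gba_log_lr(score)

for z, score in [(1.2, 0.7), (1.2, 0.2), (3.0, 0.2)]:
    print(f"z = {z}, GBA = {score}: combined log odds = {combined_log_odds(z, score):.2f}")
```

The first two test cases show how a strong network score can lift a gene with only a moderate selection statistic above one with the same statistic but no network support.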

Relevance:

30.00%

Publisher:

Abstract:

Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, the difference between the mean sea surface and the geoid. Therefore, independently observed mean dynamic topography data are valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only the mean dynamic topography on the particular ocean model grid is required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach to combining these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). Within this combination method, the full spectral range of the observations is considered. Further, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through selection and improved processing of altimetric data sets. We focus on the preprocessing steps of along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available. The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography with the full error structure in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
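The opening relation, geostrophic surface velocities from the gradients of the mean dynamic topography, is u = -(g/f) ∂MDT/∂y and v = (g/f) ∂MDT/∂x, with f the Coriolis parameter. The sketch below evaluates it on a synthetic MDT field; the grid spacing and the field itself are assumptions for illustration, not the paper's data.

```python
import numpy as np

# Geostrophic surface velocities from mean dynamic topography (MDT) gradients
# on a regular lat/lon grid. The synthetic MDT field is an assumption.

g = 9.81                                   # gravitational acceleration [m/s^2]
omega = 7.2921e-5                          # Earth's rotation rate [rad/s]
R = 6_371_000.0                            # Earth radius [m]

lat = np.linspace(30, 60, 61)              # North Atlantic latitudes [deg]
lon = np.linspace(-60, 0, 121)             # longitudes [deg]
LON, LAT = np.meshgrid(lon, lat)
mdt = 0.5 * np.exp(-((LAT - 40) ** 2 + (LON + 40) ** 2) / 200.0)   # synthetic MDT [m]

f = 2 * omega * np.sin(np.radians(LAT))    # Coriolis parameter
dy = np.radians(lat[1] - lat[0]) * R                              # meridional spacing [m]
dx = np.radians(lon[1] - lon[0]) * R * np.cos(np.radians(LAT))    # zonal spacing [m]

dmdt_dy = np.gradient(mdt, axis=0) / dy
dmdt_dx = np.gradient(mdt, axis=1) / dx
u = -g / f * dmdt_dy                       # zonal geostrophic velocity [m/s]
v = g / f * dmdt_dx                        # meridional geostrophic velocity [m/s]
print(f"max |u| = {np.abs(u).max():.3f} m/s, max |v| = {np.abs(v).max():.3f} m/s")
```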