921 results for Heckman selection model
Abstract:
Theoretical and empirical studies were conducted on the pattern of nucleotide and amino acid substitution in evolution, taking into account the effects of mutation at the nucleotide level and purifying selection at the amino acid level. A theoretical model for predicting the evolutionary change in electrophoretic mobility of a protein was also developed using information on the pattern of amino acid substitution. The specific problems studied and the main results obtained are as follows. (1) Estimation of the pattern of nucleotide substitution in nuclear DNA genomes. The patterns of point mutation and nucleotide substitution among the four nucleotides are inferred from the evolutionary changes of pseudogenes and functional genes, respectively. Both patterns are non-random, the rate of change varying considerably with nucleotide pair, and in both cases transitions occur somewhat more frequently than transversions. In protein evolution, substitution occurs more often between amino acids with similar physico-chemical properties than between dissimilar amino acids. (2) Estimation of the pattern of nucleotide substitution in RNA genomes. The majority of mutations in retroviruses accumulate at the reverse transcription stage. Selection at the amino acid level is very weak and almost non-existent between synonymous codons. The pattern of mutation is very different from that in DNA genomes. Nevertheless, the pattern of purifying selection at the amino acid level is similar to that in DNA genomes, although the selection intensity is much weaker. (3) Evaluation of the determinants of molecular evolutionary rates in protein-coding genes. Rates of nucleotide substitution for mammalian genes indicate that the rate of amino acid substitution of a protein is determined largely by its amino acid composition; the glycine content, in particular, correlates strongly and negatively with the substitution rate. Empirical formulae, called indices of mutability, are developed to predict the rate of molecular evolution of a protein from its amino acid sequence. (4) Studies on the evolutionary patterns of electrophoretic mobility of proteins. A theoretical model was constructed that predicts the electric charge of a protein at any given pH, and its isoelectric point, from data on its primary and quaternary structures. Using this model, the evolutionary change in the electrophoretic mobilities of different proteins and the expected amount of electrophoretically hidden genetic variation were studied. In the absence of selection on the pI value, proteins will on average evolve toward a mildly basic pI. (Abstract shortened with permission of author.)
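A minimal sketch of the kind of charge model described in (4): net charge at a given pH from amino acid composition via Henderson-Hasselbalch terms, with the isoelectric point found by bisection. The pKa values below are textbook assumptions, not the dissertation's own parameterization.

```python
# Net protein charge at a given pH from amino acid composition, using
# textbook side-chain pKa values (an assumption; the dissertation's own
# model and parameters are not reproduced here).

PKA_POS = {"K": 10.5, "R": 12.5, "H": 6.0, "Nterm": 9.0}          # protonated -> +1
PKA_NEG = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1, "Cterm": 3.1}  # deprotonated -> -1

def net_charge(sequence: str, ph: float) -> float:
    """Henderson-Hasselbalch net charge of a monomeric protein."""
    counts = {aa: sequence.count(aa) for aa in set(sequence)}
    counts["Nterm"] = counts["Cterm"] = 1                  # one pair of termini
    pos = sum(n / (1 + 10 ** (ph - PKA_POS[aa]))
              for aa, n in counts.items() if aa in PKA_POS)
    neg = sum(n / (1 + 10 ** (PKA_NEG[aa] - ph))
              for aa, n in counts.items() if aa in PKA_NEG)
    return pos - neg

def isoelectric_point(sequence: str) -> float:
    """pH at which the net charge crosses zero, found by bisection."""
    lo, hi = 0.0, 14.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if net_charge(sequence, mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(round(isoelectric_point("MKKDEEHHRY"), 2))  # pI of a toy sequence
```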
Abstract:
Natural selection is one of the major factors in the evolution of all organisms, and detecting its signature has been a central theme in evolutionary genetics. With the availability of microsatellite data, it is of interest to study how natural selection can be detected with microsatellites. The overall aim of this research is to detect signatures of natural selection from data on genetic variation at microsatellite loci. The null hypothesis to be tested is the neutral mutation theory of molecular evolution, which states that different alleles at a locus have equivalent effects on fitness. Currently used tests of this hypothesis based on data on genetic polymorphism in natural populations presume that mutations at the loci follow the infinite allele/site models (IAM, ISM), in the sense that at most one mutation event is recorded at each site and each mutation leads to an allele not seen before in the population. Microsatellite loci, which are abundant in the genome, do not obey these mutation models, since new alleles at such loci can be created by contraction or expansion of the number of tandem repeats of the core motif. Since the current genome map is mainly composed of microsatellite loci and this class of loci is still the most commonly studied in the context of human genome diversity, this research explores how current procedures for testing the neutral mutation hypothesis should be modified to take into account a generalized model of forward-backward stepwise mutations. In addition, recent literature suggests that the past demographic history of populations, the presence of population substructure, and varying mutation rates across loci all have confounding effects on detecting signatures of natural selection. The effects of the stepwise mutation model and of these other confounding factors on detecting signatures of natural selection are the main results of the research.
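To illustrate why IAM-based expectations misfit microsatellites, here is a sketch (not the dissertation's code) of a Wright-Fisher simulation under the symmetric single-step stepwise mutation model; the observed homozygosity is compared with the Ohta-Kimura SMM expectation 1/sqrt(1+2θ) and the IAM expectation 1/(1+θ), θ = 4Nμ. Population size, mutation rate, and generation count are arbitrary illustrative choices.

```python
# Wright-Fisher simulation of one microsatellite locus under the symmetric
# single-step stepwise mutation model (SMM). A single replicate, so the
# observed homozygosity is only a noisy illustration of the expectations.
import random
from collections import Counter

random.seed(1)
N, MU, GENS = 200, 2.5e-3, 4_000       # diploid size, mutation rate, generations

pop = [20] * (2 * N)                    # repeat counts of the 2N gene copies
for _ in range(GENS):
    pop = [random.choice(pop) for _ in range(2 * N)]            # genetic drift
    pop = [a + random.choice((-1, 1)) if random.random() < MU else a
           for a in pop]                                        # +/- one repeat

freqs = [c / (2 * N) for c in Counter(pop).values()]
theta = 4 * N * MU
print("observed homozygosity:", round(sum(f * f for f in freqs), 3))
print("SMM expectation 1/sqrt(1+2*theta):", round((1 + 2 * theta) ** -0.5, 3))
print("IAM expectation 1/(1+theta):     ", round(1 / (1 + theta), 3))
```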
Abstract:
With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies. I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r² measure of linkage disequilibrium is preferred over the common D′ measure for use in genotype-phenotype association studies. Using step-wise linear regression, the best predictor of the quantitative trait was usually not the single functional mutation but rather a SNP in high linkage disequilibrium with it. Next, I compared three strategies for selecting SNPs for phenotype association studies: selection based on measures of linkage disequilibrium, selection based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected for maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected for low pair-wise LD. The data also indicate that for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to keep increasing the number of SNPs in order to improve statistical power. When typing large numbers of SNPs, researchers are faced with the challenge of using a statistical method that controls the type I error rate while maintaining adequate power. We show that an empirical genotype-based multi-locus global test, which uses permutation testing to investigate the null distribution of the maximum test statistic, maintains the desired overall type I error rate without overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than the haplotype analysis; for more complex models, however, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
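The two LD measures compared above can be computed directly from phased two-locus haplotype counts. The following sketch uses hypothetical counts and the standard definitions; it shows why D′ = 1 can coexist with a modest r², one reason r² tracks association power more closely.

```python
# D' and r^2 for a pair of biallelic SNPs from phased haplotype counts.
# Counts below are hypothetical; definitions are the standard ones.

def ld_measures(n_AB, n_Ab, n_aB, n_ab):
    n = n_AB + n_Ab + n_aB + n_ab
    pA = (n_AB + n_Ab) / n                    # allele A frequency at locus 1
    pB = (n_AB + n_aB) / n                    # allele B frequency at locus 2
    D = n_AB / n - pA * pB                    # raw disequilibrium coefficient
    d_max = (min(pA * (1 - pB), (1 - pA) * pB) if D >= 0
             else min(pA * pB, (1 - pA) * (1 - pB)))
    d_prime = abs(D) / d_max
    r2 = D * D / (pA * (1 - pA) * pB * (1 - pB))
    return d_prime, r2

# D' = 1 ("complete" LD) can coexist with modest r^2 when allele
# frequencies differ -- one reason r^2 tracks association power better.
print(ld_measures(90, 0, 5, 5))               # -> (1.0, ~0.47)
```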
Abstract:
A credit-rationing model similar to that of Stiglitz and Weiss [1981] is combined with the information externality model of Lang and Nakamura [1993] to examine the properties of mortgage markets characterized by both adverse selection and information externalities. In a credit-rationing model, additional information increases lenders' ability to distinguish risks, which leads to an increased supply of credit. According to Lang and Nakamura, a larger supply of credit leads to additional market activity and, therefore, greater information. The combination of these two propositions leads to a general equilibrium model, whose properties this paper describes. The paper provides another sufficient condition under which credit rationing falls with information: external information improves the accuracy of equity-risk assessments of properties, which reduces credit rationing. Contrary to intuition, this increased accuracy raises the mortgage interest rate. This clarifies the trade-offs between reduced credit rationing and the quality of the applicant pool.
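A toy numerical sketch (made-up numbers, not the paper's model) of the Stiglitz-Weiss adverse-selection mechanism underlying the credit-rationing component: with limited liability, raising the loan rate drives safe borrowers out of the applicant pool, so the lender's expected return can fall as the rate rises.

```python
# Toy Stiglitz-Weiss adverse-selection sketch with made-up numbers (not
# the paper's model): two borrower types repay only on project success.

LOAN = 100.0
TYPES = [(0.95, 125.0), (0.80, 170.0)]   # (success prob., payoff): safe, risky

def expected_return(rate):
    repay = LOAN * (1 + rate)
    # With limited liability, a type applies only if its expected profit > 0.
    active = [p for p, payoff in TYPES if p * (payoff - repay) > 0]
    return (sum(active) / len(active)) * repay / LOAN - 1 if active else 0.0

for rate in (0.20, 0.24, 0.26, 0.40):
    print(f"loan rate {rate:.0%}: expected lender return {expected_return(rate):+.1%}")
# Return rises until ~25%, where safe borrowers exit and it collapses:
# the lender may prefer to ration credit rather than raise the rate.
```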
Abstract:
In the Practice Change Model, physicians act as key stakeholders: people who have both an investment in the practice and the capacity to influence how the practice performs. This leadership role is critical to the development and change of the practice, and leadership roles and effectiveness are an important factor in quality improvement in primary care practices. The study involved a comparative case study analysis to identify leadership roles and the relationship between leadership roles and the number and type of quality improvement strategies adopted during a Practice Change Model-based intervention study. The research utilized secondary data from four primary care practices with various leadership styles. The practices are located in the San Antonio region and serve a large Hispanic population. The data were collected by two ABC Project Facilitators from each practice during a 12-month period and include Key Informant Interviews (all staff members), MAP (Multi-method Assessment Process), and Practice Facilitation field notes. These data were used to evaluate leadership styles, management within the practice, and the intervention tools that were implemented. The chief steps were (1) to analyze whether leader-member relations contribute to the type of quality improvement strategy or strategies selected, (2) to investigate whether leader-position power contributes to the number and type of strategies selected, and (3) to explore whether the task structure varies across the four primary care practices. The research found that involving more members of the clinic staff in decision-making, building bridges between organizational staff and clinical staff, and task structure are all directly associated with the number and type of quality improvement strategies implemented in primary care practice. Although this research investigated the leadership styles of only four practices, it offers guidance on how to establish the priorities and implementation of quality improvement strategies that will have the greatest impact on patient care improvement.
Abstract:
Strategies are compared for the development of a linear regression model with stochastic (multivariate normal) regressor variables and the subsequent assessment of its predictive ability. Bias and mean squared error of four estimators of predictive performance are evaluated in simulated samples from 32 population correlation matrices. Models including all of the available predictors are compared with those obtained using selected subsets. The subset selection procedures investigated include two stopping rules, Cp and Sp, each combined with an 'all possible subsets' or 'forward selection' search over variables. The estimators of performance include parametric (MSEPm) and non-parametric (PRESS) assessments in the entire sample, and two data-splitting estimates restricted to a random or balanced (Snee's DUPLEX) 'validation' half sample. The simulations were performed as a designed experiment, with population correlation matrices representing a broad range of data structures. The techniques examined for subset selection do not generally result in improved predictions relative to the full model. Approaches using 'forward selection' result in slightly smaller prediction errors and less biased estimators of predictive accuracy than 'all possible subsets' approaches, but no differences are detected between the performances of Cp and Sp. In every case, prediction errors of models obtained by subset selection in either of the half splits exceed those obtained using all predictors and the entire sample. Only the random split estimator is conditionally (on β) unbiased; however, MSEPm is unbiased on average, and PRESS is nearly so in unselected (fixed form) models. When subset selection techniques are used, MSEPm and PRESS always underestimate prediction errors, by as much as 27 percent (on average) in small samples. Despite their bias, the mean squared errors (MSE) of these estimators are at least 30 percent less than that of the unbiased random split estimator. The DUPLEX split estimator suffers from large MSE as well as bias, and seems of little value in the context of stochastic regressor variables. To maximize predictive accuracy while retaining a reliable estimate of that accuracy, it is recommended that the entire sample be used for model development and that a leave-one-out statistic (e.g. PRESS) be used for assessment.
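The recommended leave-one-out statistic can be computed without refitting the model n times. A minimal sketch on simulated data, using the standard hat-matrix identity (not the dissertation's code):

```python
# PRESS for a linear model via the hat-matrix identity
# e_(-i) = e_i / (1 - h_ii), so no refitting is needed. Simulated data.
import numpy as np

rng = np.random.default_rng(0)
n, p = 32, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta = np.array([1.0, 2.0, 0.0, -1.0, 0.5])
y = X @ beta + rng.normal(size=n)

H = X @ np.linalg.solve(X.T @ X, X.T)       # hat matrix
resid = y - H @ y                           # ordinary residuals
press = np.sum((resid / (1 - np.diag(H))) ** 2)
print(f"PRESS = {press:.2f}  vs  residual SS = {np.sum(resid**2):.2f}")
# PRESS > residual SS, reflecting honest out-of-sample error.
```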
Abstract:
Standardization is a common method for adjusting for confounding factors when comparing two or more exposure categories to assess excess risk. An arbitrary choice of standard population in standardization introduces selection bias due to the healthy worker effect. Small samples in specific groups also pose problems in estimating relative risk and assessing statistical significance. As an alternative, statistical models have been proposed to overcome such limitations and obtain adjusted rates. In this dissertation, a multiplicative model is considered to address the issues related to standardized indices, namely the Standardized Mortality Ratio (SMR) and the Comparative Mortality Factor (CMF). The model provides an alternative to the conventional standardization technique. Maximum likelihood estimates of the model parameters are used to construct an index similar to the SMR for estimating the relative risk of the exposure groups under comparison. A parametric bootstrap resampling method is used to evaluate the goodness of fit of the model, the behavior of the estimated parameters, and the variability in relative risk on generated samples. The model provides an alternative to both the direct and the indirect standardization methods.
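A brief sketch of the SMR with a parametric bootstrap along the lines described; stratum counts and reference rates below are hypothetical, not the dissertation's data or exact model.

```python
# SMR = observed/expected deaths, with a parametric bootstrap interval
# from Poisson resampling of the observed counts. All numbers hypothetical.
import numpy as np

rng = np.random.default_rng(42)
observed = np.array([12, 30, 45])                  # deaths per age stratum
person_years = np.array([4000, 6000, 5000])
reference_rate = np.array([0.002, 0.004, 0.007])   # standard population rates

expected = person_years * reference_rate
smr = observed.sum() / expected.sum()

boot = [rng.poisson(observed).sum() / expected.sum() for _ in range(10_000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"SMR = {smr:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```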
Abstract:
When choosing among models to describe categorical data, the necessity of considering interactions makes selection more difficult. With just four variables, considering all interactions, there are 166 different hierarchical models and many more non-hierarchical models. Two procedures have been developed for categorical data which will produce the "best" subset or subsets of each model size, where size refers to the number of effects in the model. Both procedures are patterned after the Leaps and Bounds approach used by Furnival and Wilson for continuous data and do not generally require fitting all models. For hierarchical models, likelihood ratio statistics (G²) are computed using iterative proportional fitting, and "best" is determined by comparing, among models with the same number of effects, Pr(χ²_k ≥ G²_ij), where k is the degrees of freedom for the ith model of size j. To fit non-hierarchical as well as hierarchical models, a weighted least squares procedure has been developed. The procedures are applied to published occupational data relating to the occurrence of byssinosis, and the results are compared to previously published analyses of the same data. The procedures are also applied to published data on symptoms in psychiatric patients and again compared to previously published analyses. These procedures will make categorical data analysis more accessible to researchers who are not statisticians. They should also encourage more complex exploratory analyses of epidemiologic data and contribute to the development of new hypotheses for study.
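For concreteness, a small sketch (not from the dissertation) of computing a G² likelihood ratio statistic via iterative proportional fitting, shown for the independence model on a 2x3 table with illustrative counts:

```python
# G^2 for the independence log-linear model on a 2x3 table, with fitted
# values obtained by iterative proportional fitting (illustrative data).
import numpy as np
from scipy.stats import chi2

obs = np.array([[20.0, 30.0, 25.0],
                [10.0, 15.0, 40.0]])

fit = np.full_like(obs, obs.sum() / obs.size)        # start from a flat table
for _ in range(50):                                  # IPF: match row, then column margins
    fit *= (obs.sum(axis=1) / fit.sum(axis=1))[:, None]
    fit *= obs.sum(axis=0) / fit.sum(axis=0)

g2 = 2.0 * np.sum(obs * np.log(obs / fit))
df = (obs.shape[0] - 1) * (obs.shape[1] - 1)
print(f"G^2 = {g2:.2f}, df = {df}, p = {chi2.sf(g2, df):.4f}")
```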
Abstract:
The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans for natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power to discover genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapidly growing completeness of gene network and protein-protein interaction databases open avenues for enhancing the power of discovering genes under natural selection. The aim of this thesis is to explore and develop normal mixture model based methods for leveraging gene network information to enhance the power of natural selection target gene discovery. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model with the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover moderately/weakly selected genes that bridge the genes under strong selection, and it aids our understanding of the biology underlying complex diseases and related natural selection phenotypes.
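A hedged sketch of the naive Bayes combination idea: under conditional independence, per-gene evidence from the mixture model and the network Guilt-By-Association score simply add on the log odds scale. The function names, prior, and likelihood ratios below are illustrative assumptions, not the thesis's exact formulation.

```python
# Naive Bayes combination: with conditional independence, log likelihood
# ratios from the mixture model and from the network GBA score add to the
# prior log odds. All names and numbers below are illustrative assumptions.
import math

def posterior_log_odds(prior_log_odds, log_lr_selection, log_lr_network):
    """log-odds(gene under selection | all evidence)."""
    return prior_log_odds + log_lr_selection + log_lr_network

def to_probability(log_odds):
    return 1.0 / (1.0 + math.exp(-log_odds))

prior = math.log(0.05 / 0.95)        # assume 5% of genes are selection targets
# Weak direct evidence (log LR 0.4) plus strongly selected neighbours
# in the network (log LR 1.8) can still lift a gene above threshold.
lo = posterior_log_odds(prior, log_lr_selection=0.4, log_lr_network=1.8)
print(f"P(under selection | evidence) = {to_probability(lo):.2f}")   # ~0.32 vs 0.05 prior
```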
Abstract:
Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, the difference between the mean sea surface and the geoid. Therefore, independently observed mean dynamic topography data are valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only the mean dynamic topography on the particular ocean model grid is required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach to combining these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). Within this combination method, the full spectral range of the observations is considered. Further, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through the selection and improved processing of altimetric data sets. We focus on the preprocessing steps for along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available. The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography with its full error structure in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
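The opening relationship, geostrophic surface velocity from MDT gradients (u = -(g/f) ∂η/∂y, v = (g/f) ∂η/∂x, with f the Coriolis parameter), can be sketched as follows; the grid and MDT field here are synthetic placeholders, not the paper's North Atlantic model.

```python
# Geostrophic surface velocity from MDT gradients:
#   u = -(g/f) * dEta/dy,  v = (g/f) * dEta/dx,  f = Coriolis parameter.
# The grid and MDT field are synthetic placeholders, not the paper's data.
import numpy as np

g = 9.81
f = 2 * 7.2921e-5 * np.sin(np.radians(45.0))          # f at 45 deg N

dx = dy = 25_000.0                                    # grid spacing (m)
eta = np.fromfunction(lambda j, i: 0.1 * np.sin(i / 10) * np.cos(j / 10),
                      (50, 50))                       # synthetic MDT (m)

deta_dy, deta_dx = np.gradient(eta, dy, dx)           # axis 0 ~ y, axis 1 ~ x
u = -(g / f) * deta_dy
v = (g / f) * deta_dx
print(f"max geostrophic speed: {np.max(np.hypot(u, v)):.2f} m/s")
```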
Abstract:
Gynuity
Abstract:
These data are provided to allow users to reproduce the results of an open-source tool entitled 'automated Accumulation Threshold computation and RIparian Corridor delineation (ATRIC)'.
Abstract:
In this paper, a new high-resolution elevation model of Greenland, covering the ice sheet as well as the ice-free regions, is presented. It is the first published full-coverage model, computed with an average resolution of 2 km and providing an unprecedented degree of detail. The topography is modeled from a wide selection of data sources, including satellite radar altimetry from Geosat and ERS-1, airborne radar altimetry and airborne laser altimetry over the ice sheet, and photogrammetric and manual map scannings in the ice-free region. The ice sheet model accuracy is evaluated by omitting the airborne laser data from the analysis and treating them as ground truth observations. The mean accuracy of the ice sheet elevations is estimated to be 12-13 m, and it is found that on surfaces with a slope between 0.2° and 0.8°, corresponding to approximately 50% of the ice sheet, the model represents a 40% improvement over models based on satellite altimetry alone. On coastal bedrock, the model is compared with stereo-triangulated reference points, and it is found that the model accuracy is of the order of 25-35 m in areas covered by stereo photogrammetry scannings and between 200 and 250 m elsewhere.
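A sketch of the hold-out evaluation described above: withhold the laser altimetry, treat it as ground truth, and bin the elevation errors by surface slope. All numbers are synthetic, not the paper's data.

```python
# Hold-out accuracy sketch: treat withheld laser points as ground truth
# and bin the elevation errors by surface slope. Synthetic numbers only.
import numpy as np

rng = np.random.default_rng(3)
slope = rng.uniform(0.0, 1.5, 10_000)          # surface slope (degrees)
error = rng.normal(0.0, 5.0 + 20.0 * slope)    # DEM minus laser height (m)

edges = [0.0, 0.2, 0.8, 1.5]
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (slope >= lo) & (slope < hi)
    rmse = np.sqrt(np.mean(error[sel] ** 2))
    print(f"slope {lo:.1f}-{hi:.1f} deg: RMSE = {rmse:5.1f} m  (n = {sel.sum()})")
```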
Abstract:
In 2003 the Restriction of Hazardous Substances (RoHS) directive was established in the EU, limiting trade in machinery and electrical and electronic equipment that contain at least one of the substances considered hazardous under the directive. Since countries trading with the EU must comply with this new regulation, a decrease in the value of imports to the EU is expected. This paper follows the procedure used in Heckman (1979), as well as the extended procedure suggested by Helpman, Melitz, and Rubinstein (2008), to ascertain the effects on the persistence and value of trade.
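For reference, a minimal sketch of the Heckman (1979) two-step procedure on simulated data: a probit selection equation, the inverse Mills ratio, then OLS on the selected sample with the ratio as an extra regressor. statsmodels and scipy are assumed available, and all variable names are illustrative, not the paper's specification.

```python
# Heckman (1979) two-step sketch on simulated data: probit selection
# equation, inverse Mills ratio, then OLS on the selected subsample.
# statsmodels/scipy are assumed available; all names are illustrative.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5_000
z = rng.normal(size=n)                                  # selection instrument
x = rng.normal(size=n)                                  # outcome regressor
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T

selected = (0.5 + 1.0 * z + u) > 0                      # e.g. trade flow observed
y = 1.0 + 2.0 * x + e                                   # outcome equation

# Step 1: probit for selection, then the inverse Mills ratio.
Z = sm.add_constant(z)
probit = sm.Probit(selected.astype(float), Z).fit(disp=0)
xb = Z @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected sample, augmented with the Mills ratio.
X2 = sm.add_constant(np.column_stack([x[selected], mills[selected]]))
print(sm.OLS(y[selected], X2).fit().params)   # ~[1, 2, rho*sigma]; plain OLS is biased
```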
Abstract:
Species selection for forest restoration is often supported by expert knowledge of local distribution patterns of native tree species. This approach is not applicable to largely deforested regions unless enough data on pre-human tree species distribution are available. In such regions, ecological niche models may provide essential information to support species selection in the framework of forest restoration planning. In this study we used ecological niche models to predict habitat suitability for native tree species in the "Tierra de Campos" region, an almost totally deforested area of the Duero Basin (Spain). Previously available models provide habitat suitability predictions for the dominant native tree species, but including non-dominant tree species in forest restoration planning may be desirable to promote biodiversity, especially in largely deforested areas where nearby seed sources are not expected. We used the Forest Map of Spain as the species occurrence data source to maximize the number of modeled tree species. Penalized logistic regression was used to train models using climate and lithological predictors. Using the model predictions, a set of tools was developed to support species selection in forest restoration planning. Model predictions were used to build ordered lists of suitable species for each cell of the study area. The suitable species lists were summarized by drawing maps that show the two most suitable species for each cell. Additionally, potential distribution maps of the suitable species for the study area were drawn. For a scenario with two dominant species, the models predicted a mixed forest (Quercus ilex and a coniferous tree species) for almost one half of the study area. According to the models, 22 non-dominant native tree species are suitable for the study area, with up to six suitable species per cell. The model predictions pointed to Crataegus monogyna, Juniperus communis, J. oxycedrus and J. phoenicea as the most suitable non-dominant native tree species in the study area. Our results encourage further use of ecological niche models for forest restoration planning in largely deforested regions.
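A compact sketch of penalized logistic regression for habitat suitability, on simulated presence/absence data; scikit-learn's L2-penalized implementation is assumed here, and the study's actual predictors and penalty are not reproduced.

```python
# L2-penalized logistic regression as a habitat suitability sketch on
# simulated presence/absence data; scikit-learn is assumed available and
# the "true" coefficients below are arbitrary illustrative choices.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2_000
temp = rng.normal(12.0, 4.0, n)              # mean annual temperature (degC)
precip = rng.normal(500.0, 150.0, n)         # annual precipitation (mm)
X = np.column_stack([temp, precip])

logit = -8.0 + 0.4 * temp + 0.006 * precip   # assumed suitability surface
presence = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(penalty="l2", C=1.0).fit(X, presence)
suitability = model.predict_proba(X)[:, 1]   # per-cell habitat suitability
print("mean predicted suitability:", round(float(suitability.mean()), 3))
```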