921 results for Generalized Variational Inequality
Abstract:
The study compares a measure of income inequality with the polarization scores of U.S. Representatives from the 104th through the 109th Congresses and attempts to explain, at an abstract level, the link between high inequality and high polarization. The findings indicate that inequality increases a Representative's likelihood of acting liberally.
Abstract:
Determining the profit-maximizing input-output bundle of a firm requires data on prices. This paper shows how endogenously determined shadow prices can be used in place of actual prices to obtain the optimal input-output bundle at which the firm's shadow profit is maximized. This approach amounts to an application of the Weak Axiom of Profit Maximization (WAPM) formulated by Varian (1984), based on shadow prices rather than actual prices. At these prices the shadow profit of the firm is zero. Thus, the maximum profit that could have been attained at some other input-output bundle is a measure of the inefficiency of the firm. Because the benchmark input-output bundle is always an observed bundle from the data, it can be determined without having to solve any elaborate programming problem. An empirical application to U.S. airlines data illustrates the proposed methodology.
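To make the benchmark computation concrete, here is a minimal sketch assuming the shadow prices have already been estimated (numpy-based; the function name and array layout are illustrative, not the paper's notation):

```python
import numpy as np

def shadow_profit_inefficiency(Y, X, p, w):
    """Inefficiency of each firm under its own shadow prices.

    Y : (n, s) observed outputs      X : (n, m) observed inputs
    p : (n, s) shadow output prices  w : (n, m) shadow input prices

    By construction a firm's own shadow profit is zero, so the maximum
    profit attainable over all observed bundles at its shadow prices
    measures its inefficiency. No programming problem has to be solved:
    the benchmark is always one of the observed bundles.
    """
    # profit[k, j] = profit of firm j's bundle at firm k's shadow prices
    profit = p @ Y.T - w @ X.T
    return profit.max(axis=1)
```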
Abstract:
The variational calculation of the energy of the hydrogen molecular cation (H2+) LCAO-MO for the sigma and sigma* states, as functions of the AO screening constant and the internuclear distance, is carried out explicitly and in great detail.
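For reference, the standard textbook form of this variational problem (a sketch consistent with, but not taken from, the paper): with screened 1s AOs the trial MOs and their energies are

```latex
% Screened 1s AOs and normalized LCAO trial functions
\phi(r) = \sqrt{\zeta^{3}/\pi}\, e^{-\zeta r}, \qquad
\psi_{\pm} = \frac{\phi_A \pm \phi_B}{\sqrt{2\,(1 \pm S)}}, \qquad
S = \langle \phi_A \mid \phi_B \rangle .

% Variational energies of the sigma (+) and sigma* (-) states;
% E is minimized over the screening constant \zeta at each R:
E_{\pm}(\zeta, R) = \frac{H_{AA} \pm H_{AB}}{1 \pm S}, \qquad
H_{\mu\nu} = \langle \phi_\mu \mid \hat{H} \mid \phi_\nu \rangle .
```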
Abstract:
The pi and pi* orbitals of the hydrogen molecular cation are obtained using Maple, in the same manner as the sigma and sigma* orbitals were obtained in paper-36.
Abstract:
Using data from the March Current Population Surveys, we find that gains from economic growth over the 1990s business cycle (1989-2000) were more equitably distributed than over the 1980s business cycle (1979-1989), based on summary inequality measures as well as kernel density estimates. The entire distribution of household size-adjusted income moved upward in the 1990s, with profound improvements for African Americans, single mothers, and those living in households receiving welfare. Most gains occurred over the growth period 1993-2000. Improvements in average income and income inequality over the latter period are reminiscent of the gains seen in the first three decades after World War II.
Abstract:
We propose a nonparametric model of global cost minimization as a framework for optimally allocating a firm's output target across multiple locations, taking account of differences in input prices and technologies across locations. This should be useful for firms planning production sites within a country and for foreign direct investment decisions by multinational firms. Two illustrative examples are included. The first considers the production-location decision of a manufacturing firm across a number of adjacent US states; the second considers the optimal allocation of US and Canadian automobile manufacturers' production across the two countries.
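A minimal sketch of the location-level building block, assuming a DEA-style technology with variable returns to scale (scipy-based; the function name, the VRS constraint, and the array layout are illustrative assumptions, not the paper's formulation). The global problem would then allocate the firm's output target across locations so that the sum of these location-specific minimum costs is smallest:

```python
import numpy as np
from scipy.optimize import linprog

def min_cost(y_target, w, X, Y):
    """Nonparametric minimum cost of producing y_target at one location.

    w : (m,) local input prices
    X : (n, m) observed inputs, Y : (n, s) observed outputs at the location

    Solves  min_{x, lam}  w.x   s.t.  X'lam <= x,  Y'lam >= y_target,
            sum(lam) = 1 (variable returns to scale),  lam >= 0, x >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([w, np.zeros(n)])          # cost depends on x only
    A_ub = np.block([[-np.eye(m), X.T],           # X'lam - x <= 0
                     [np.zeros((s, m)), -Y.T]])   # -Y'lam <= -y_target
    b_ub = np.concatenate([np.zeros(m), -np.asarray(y_target, float)])
    A_eq = np.concatenate([np.zeros(m), np.ones(n)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
    return res.fun
```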
Abstract:
With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods that provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variances of sensitivity and specificity and for the correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model in which the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference using Gibbs sampling' (BUGS) implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent among Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from Bayesian bivariate models were not as good as those obtained from frequentist estimation, regardless of which prior distribution was used for the covariance matrix. The Bayesian multinomial model consistently underestimated the sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously, as well as the intercorrelation between the two; and (3) it can be directly applied to sparse data without ad hoc correction.
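The bivariate binomial structure described above can be sketched in a modern probabilistic-programming framework. The sketch below uses PyMC rather than the BUGS implementation the authors used, and substitutes an LKJ prior for their uniform/inverse-Wishart priors on the covariance; the data arrays are made-up placeholders:

```python
import numpy as np
import pymc as pm

# Hypothetical per-study 2x2 counts: tp of n_dis diseased test positive,
# tn of n_well non-diseased test negative.
tp = np.array([42, 111, 33]);  n_dis  = np.array([50, 130, 38])
tn = np.array([90, 200, 70]);  n_well = np.array([100, 220, 80])

with pm.Model() as bivariate_binomial:
    mu = pm.Normal("mu", 0.0, 2.0, shape=2)   # pooled logit(sens), logit(spec)
    chol, corr, sd = pm.LKJCholeskyCov(
        "cov", n=2, eta=1.0, sd_dist=pm.Exponential.dist(1.0),
        compute_corr=True)                    # models the intercorrelation
    theta = pm.MvNormal("theta", mu=mu, chol=chol, shape=(len(tp), 2))
    sens = pm.Deterministic("sens", pm.math.invlogit(theta[:, 0]))
    spec = pm.Deterministic("spec", pm.math.invlogit(theta[:, 1]))
    # Binomial likelihoods apply directly, even to sparse 2x2 tables
    pm.Binomial("tp_obs", n=n_dis, p=sens, observed=tp)
    pm.Binomial("tn_obs", n=n_well, p=spec, observed=tn)
    idata = pm.sample()
```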
Abstract:
Complex diseases, such as cancer, are caused by various genetic and environmental factors and their interactions. Joint analysis of these factors and their interactions would increase the power to detect risk factors but is statistically challenging. Bayesian generalized linear modeling with Student-t prior distributions on the coefficients is a novel method for simultaneously analyzing genetic factors, environmental factors, and interactions. I performed simulation studies using three different disease models and demonstrated that the variable selection performance of Bayesian generalized linear models is comparable to that of Bayesian stochastic search variable selection, itself an improvement over standard variable selection methods. I further evaluated the variable selection performance of Bayesian generalized linear models using different numbers of candidate covariates and different sample sizes, and provide a guideline for the sample size required to achieve high variable selection power with Bayesian generalized linear models across different numbers of candidate covariates.

Polymorphisms in folate metabolism genes and nutritional factors have previously been associated with lung cancer risk. In this study, I simultaneously analyzed 115 tag SNPs in folate metabolism genes, 14 nutritional factors, and all possible genetic-nutritional interactions from 1239 lung cancer cases and 1692 controls using Bayesian generalized linear models stratified by never, former, and current smoking status. SNPs in MTRR were significantly associated with lung cancer risk across never, former, and current smokers. In never smokers, three SNPs in TYMS and three gene-nutrient interactions, including an interaction between SHMT1 and vitamin B12, an interaction between MTRR and total fat intake, and an interaction between MTR and alcohol use, were also identified as associated with lung cancer risk. These lung cancer risk factors are worthy of further investigation.
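A minimal sketch of such a model, assuming a case-control logistic likelihood (PyMC-based; the data, dimensions, and hyperparameters are illustrative placeholders, not those of the study):

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(500, 10)).astype(float)  # SNP dosages (placeholder)
E = rng.normal(size=(500, 4))                         # nutritional factors (placeholder)
GxE = np.hstack([G * E[:, [j]] for j in range(E.shape[1])])  # all G-by-E products
Xmat = np.hstack([G, E, GxE])
y = rng.integers(0, 2, size=500)                      # case/control labels (placeholder)

with pm.Model() as t_prior_glm:
    alpha = pm.Normal("alpha", 0.0, 5.0)
    # Heavy-tailed Student-t priors shrink noise coefficients toward zero
    # while letting genuine main effects and interactions stay large.
    beta = pm.StudentT("beta", nu=3, mu=0.0, sigma=0.5, shape=Xmat.shape[1])
    p = pm.math.invlogit(alpha + pm.math.dot(Xmat, beta))
    pm.Bernoulli("y_obs", p=p, observed=y)
    idata = pm.sample()
```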
Abstract:
The CMCC Global Ocean Physical Reanalysis System (C-GLORS) is used to simulate the state of the ocean over the last decades. It consists of a variational data assimilation system (OceanVar), capable of assimilating all in-situ observations along with altimetry data, and a forecast step performed by the ocean model NEMO coupled with the LIM2 sea-ice model.
KEY STRENGTHS:
- Data are available for a large number of ocean parameters
- An extensive validation has been conducted and is freely available
- The reanalysis is performed at high resolution (1/4 degree) and spans the last 30 years
KEY LIMITATIONS:
- Quality may be discontinuous and depends on observation coverage
- Uncertainty estimates are derived simply through verification skill scores
Abstract:
At least since Thomas Piketty's best-selling "Capital in the Twenty-First Century" (2014, Cambridge, MA: The Belknap Press), percentile shares have become a popular approach for analyzing distributional inequalities. In their work on the development of top incomes, Piketty and collaborators typically report top-percentage shares, using varying percentages as thresholds (top 10%, top 1%, top 0.1%, etc.). However, analysis of percentile shares at other positions in the distribution may also be of interest. In this paper I present a new Stata command called -pshare- that estimates percentile shares from individual-level data and displays the results using histograms or stacked bar charts.
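Percentile shares of this kind are easy to compute from individual-level data. A minimal sketch, in Python rather than Stata and independent of the -pshare- implementation (the function name and cut points are illustrative):

```python
import numpy as np

def percentile_shares(income, cuts=(0.5, 0.9, 0.99)):
    """Share of total income going to each percentile group.

    cuts=(0.5, 0.9, 0.99) yields the bottom 50%, middle 40%,
    next 9%, and top 1% shares, which sum to 1.
    """
    x = np.sort(np.asarray(income, dtype=float))
    edges = [0] + [int(round(c * len(x))) for c in cuts] + [len(x)]
    total = x.sum()
    return [x[a:b].sum() / total for a, b in zip(edges[:-1], edges[1:])]

# Example with a skewed placeholder distribution
incomes = np.random.default_rng(1).lognormal(mean=10, sigma=1, size=10_000)
print(percentile_shares(incomes))
```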