33 results for inequality constraint
in CentAUR: Central Archive University of Reading - UK
Abstract:
The SCoTLASS problem, in which principal component analysis is modified so that the components satisfy the Least Absolute Shrinkage and Selection Operator (LASSO) constraint, is reformulated as a dynamical system on the unit sphere. The LASSO inequality constraint is tackled by an exterior penalty function. A globally convergent algorithm is developed based on the projected gradient approach. The algorithm is illustrated numerically and discussed on a well-known data set. (c) 2004 Elsevier B.V. All rights reserved.
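The projected-gradient idea can be sketched in a few lines, assuming a sample covariance matrix C, a LASSO bound t, and a fixed exterior penalty weight mu; all names and settings below are illustrative rather than the paper's own dynamical-system formulation.

import numpy as np

def scotlass_penalized(C, t, mu=10.0, step=0.01, iters=5000):
    # Maximise w'Cw on the unit sphere while penalising violation of ||w||_1 <= t
    # with an exterior quadratic penalty (illustrative sketch only).
    n = C.shape[0]
    w = np.ones(n) / np.sqrt(n)                  # start on the unit sphere
    for _ in range(iters):
        violation = max(np.sum(np.abs(w)) - t, 0.0)
        grad = 2.0 * C @ w - 2.0 * mu * violation * np.sign(w)
        w = w + step * grad                      # gradient ascent step
        w = w / np.linalg.norm(w)                # project back onto the sphere
    return w

# Example on a random covariance matrix
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
w = scotlass_penalized(np.cov(X, rowvar=False), t=1.5)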
Abstract:
Development geography has long sought to understand why inequalities exist and the best ways to address them. Dependency theory sets out an historical rationale for underdevelopment based on colonialism and a legacy of a developed core and an under-developed periphery. Race is relevant in this theory only insofar as Europeans are white and the places they colonised were occupied by people with darker skin colour; there are no innate biological reasons why it happened that way around. However, a new theory of national inequalities proposed by Lynn and Vanhanen in a series of publications makes the case that poorer countries have that status because of poorer genetic stock rather than an accident of history. They argue that IQ has a genetic basis and that IQ is linked to ability; races with a poorer IQ therefore have less ability, and national IQ can be positively correlated with performance as measured by an indicator such as GDP per capita. Their thesis is one of despair, as little can be done to improve genetic stock significantly other than a programme of eugenics. This paper summarises and critiques the Lynn and Vanhanen hypothesis and the assumptions upon which it is based, and uses this analysis to show how the human desire to simplify in order to manage can be dangerous in development geography. While attention may naturally focus on the 'national IQ' variables as a proxy measure of 'innate ability', the assumption of GDP per capita as an indicator of 'success' and 'achievement' is far more readily accepted without criticism. The paper makes the case that the current vogue for indicators, indices and cause-effect reasoning can be tyrannical.
Abstract:
We consider the application of the conjugate gradient method to the solution of large, symmetric indefinite linear systems. Special emphasis is put on the use of constraint preconditioners and a new factorization that can reduce the number of flops required by the preconditioning step. Results concerning the eigenvalues of the preconditioned matrix and its minimum polynomial are given. Numerical experiments validate these conclusions.
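A minimal sketch of conjugate gradients with a constraint preconditioner on a small saddle-point system, assuming G = diag(A) as the cheap approximation in the (1,1) block; the particular factorization studied in the paper is not reproduced here.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Symmetric indefinite saddle-point system K z = rhs, K = [[A, B'], [B, 0]]
rng = np.random.default_rng(1)
n, m = 8, 3
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD (1,1) block
B = rng.standard_normal((m, n))                                  # full-rank constraint block
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
rhs = rng.standard_normal(n + m)

# Constraint preconditioner: same constraint blocks, A replaced by G = diag(A)
P = np.block([[np.diag(np.diag(A)), B.T], [B, np.zeros((m, m))]])
P_lu = lu_factor(P)

def pcg(K, b, P_lu, tol=1e-10, maxit=200):
    # Preconditioned CG; with a constraint preconditioner the recurrence remains
    # well defined on the constrained subspace even though K and P are indefinite.
    x = np.zeros_like(b)
    r = b - K @ x
    z = lu_solve(P_lu, r)
    p = z.copy()
    for _ in range(maxit):
        Kp = K @ p
        alpha = (r @ z) / (p @ Kp)
        x += alpha * p
        r_new = r - alpha * Kp
        if np.linalg.norm(r_new) < tol:
            break
        z_new = lu_solve(P_lu, r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

print(np.linalg.norm(K @ pcg(K, rhs, P_lu) - rhs))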
Abstract:
Critics of genetically modified (GM) crops often contend that their introduction widens the gap between rich and poor farmers, as the former group is in the best position to afford the expensive seed as well as provide other inputs such as fertilizer and irrigation. The research reported in this paper explores this issue with regard to Bt cotton (cotton with the endotoxin gene from Bacillus thuringiensis conferring resistance to some insect pests) in Jalgaon, Maharashtra State, India, spanning the 2002 and 2003 seasons. Questionnaire-based survey results from 63 non-adopting and 94 adopting households of Bt cotton were analyzed, covering 137 Bt cotton plots and 95 non-Bt cotton plots of both Bt adopters and non-adopters. For these households, cotton income accounted for 85 to 88% of total household income, and is thus of vital importance. Results suggest that in 2003 Bt-adopting households had significantly more income from cotton than non-adopting households (Rs 66,872 versus Rs 46,351), but inequality in cotton income, measured with the Gini coefficient (G), was greater amongst non-adopters than adopters. While Bt adopters had greater acreage of cotton in 2003 (9.92 acres versus 7.42 for non-adopters), the respective values of G were comparable. The main reason for the lessening of inequality amongst adopters would appear to be the consistency in the performance of Bt cotton along with Bunny, the preferred non-Bt cultivar of Bt adopters. Taking gross margin as the basis for comparison, Bt plots had 2.5 times the gross margin of non-Bt plots of non-adopters, while the advantage of Bt plots over non-Bt plots of adopters was 1.6 times. Measured in terms of the Gini coefficient of gross margin per acre, inequality was lessened with the adoption of Bunny (G = 0.47) and Bt (G = 0.30) relative to all other non-Bt plots (G = 0.63). Hence the issue of equality needs to be seen both in terms of differences between adopters and non-adopters and in terms of differences within each group.
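The inequality measure quoted above can be reproduced from per-plot figures in a few lines; the gross-margin values below are invented purely to show the computation.

import numpy as np

def gini(values):
    # Gini coefficient from the sorted-values formulation:
    # G = 2 * sum_i(i * x_i) / (n * sum(x)) - (n + 1) / n, with x sorted ascending
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

bt_plots = [5200, 4800, 5100, 4950, 5300]          # hypothetical gross margin per acre
non_bt_plots = [1500, 4200, 800, 3900, 2600]
print(gini(bt_plots), gini(non_bt_plots))           # the lower value is the more equal group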
Abstract:
The recently formulated metabolic theory of ecology has profound implications for the evolution of life histories. Metabolic rate constrains the scaling of production with body mass, so that larger organisms have lower rates of production on a mass-specific basis than smaller ones. Here, we explore the implications of this constraint for life-history evolution. We show that for a range of very simple life histories, Darwinian fitness is equal to birth rate minus death rate. So, natural selection maximizes birth and production rates and minimizes death rates. This implies that decreased body size will generally be favored because it increases production, so long as mortality is unaffected. Alternatively, increased body size will be favored only if it decreases mortality or enhances reproductive success sufficiently to override the preexisting production constraint. Adaptations that may favor evolution of larger size include niche shifts that decrease mortality by escaping predation or that increase fecundity by exploiting new abundant food sources. These principles can be generalized to better understand the intimate relationship between the genetic currency of evolution and the metabolic currency of ecology.
Abstract:
Typically, the relationship between insect development and temperature is described by two characteristics: the minimum temperature needed for development to occur (T_min) and the number of day degrees required (DDR) for the completion of development. We investigated these characteristics in three English populations of Thrips major and T. tabaci [Cawood, Yorkshire (53°49'N, 1°7'W); Boxworth, Cambridgeshire (52°15'N, 0°1'W); Silwood Park, Berkshire (51°24'N, 0°38'W)], and two populations of Frankliniella occidentalis (Cawood; Silwood Park). While there were no significant differences among populations in either T_min (mean for T. major = 7.0 °C; T. tabaci = 5.9 °C; F. occidentalis = 6.7 °C) or DDR (mean for T. major = 229.9; T. tabaci = 260.8; F. occidentalis = 233.4), there were significant differences in the relationship between temperature and body size, suggesting the presence of geographic variation in this trait. Using published data, in addition to those newly collected, we found a negative relationship between T_min and DDR for F. occidentalis and T. tabaci, supporting the hypothesis that a trade-off between T_min and DDR may constrain adaptation to local climatic conditions.
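A worked sketch of the day-degree model these two characteristics define: development completes once the accumulated daily excess of temperature over T_min reaches the DDR. The temperature series below is invented for illustration.

def days_to_develop(daily_mean_temps, t_min, ddr):
    # Accumulate (T - T_min) for each day the mean exceeds the threshold;
    # development is complete when the total reaches the day-degree requirement.
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        accumulated += max(temp - t_min, 0.0)
        if accumulated >= ddr:
            return day
    return None                                   # requirement not met in this record

# Thrips major values from the abstract: T_min = 7.0 C, DDR = 229.9
print(days_to_develop([15.0] * 60, 7.0, 229.9))    # 8 degree-days per day -> day 29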
Abstract:
A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used to achieve mathematical tractability and algorithmic efficiency. Under the mild condition of a positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality based selection algorithm as a preprocessing step to select a small but significant subset design matrix, the proposed zero-norm based approach offers an effective means of constructing very sparse kernel density estimates with excellent generalisation performance.
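A simplified multiplicative nonnegative update of the kind referred to above, assuming Gaussian kernels centred on the data and the Parzen window estimate as the target; the zero-norm penalty term that the paper adds to the quadratic cost is omitted here for brevity.

import numpy as np

def gaussian_kernels(x, centres, h):
    # N x M matrix of normalised Gaussian kernels evaluated at the data points
    d = x[:, None] - centres[None, :]
    return np.exp(-0.5 * (d / h) ** 2) / (np.sqrt(2.0 * np.pi) * h)

rng = np.random.default_rng(2)
x = rng.standard_normal(200)                  # training sample
Phi = gaussian_kernels(x, x, h=0.3)           # candidate kernels (width is illustrative)
y = Phi.mean(axis=1)                          # Parzen window estimate at the data points

# Multiplicative nonnegative update for min ||y - Phi w||^2 subject to w >= 0;
# both Phi'Phi and Phi'y are nonnegative, so the iteration preserves nonnegativity.
A, b = Phi.T @ Phi, Phi.T @ y
w = np.full(x.size, 1.0 / x.size)
for _ in range(200):
    w *= b / (A @ w + 1e-12)
w /= w.sum()                                  # renormalise so the density integrates to one
print((w > 1e-4).sum(), "kernels retained out of", x.size)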
Abstract:
We present extensive molecular dynamics simulations of the dynamics of dilute long probe chains entangled with a matrix of shorter chains. The chain lengths of both components are above the entanglement strand length, and the ratio of their lengths is varied over a wide range to cover the crossover from the chain reptation regime to the tube Rouse motion regime of the long probe chains. Reducing the matrix chain length results in a faster decay of the dynamic structure factor of the probe chains, in good agreement with recent neutron spin echo experiments. The diffusion of the long chains, measured by the mean square displacements of the monomers and of the chain centers of mass, shows a systematic speed-up relative to the pure reptation behavior expected for monodisperse melts of sufficiently long polymers. On the other hand, the diffusion of the matrix chains is only weakly perturbed by the dilute long probe chains. The simulation results are qualitatively consistent with the theoretical predictions of the constraint release Rouse model, but a detailed comparison reveals a broad distribution of disentanglement rates, which is partly confirmed by an analysis of the packing and diffusion of the matrix chains in the tube region of the probe chains. A coarse-grained simulation model based on the tube Rouse model, incorporating the probability distribution of tube segment jump rates, is developed and gives results qualitatively consistent with the fine-scale molecular dynamics simulations. However, we observe a breakdown of the tube Rouse model when the short chain length is decreased to around N_S = 80, which is roughly 3.5 times the entanglement spacing N_e(P) = 23. The location of this transition may be sensitive to the chain bending potential used in our simulations.
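A minimal sketch of the mean-square-displacement analysis mentioned above, assuming an unwrapped trajectory array of shape (frames, particles, 3); the random-walk data stand in for a real molecular dynamics trajectory.

import numpy as np

def mean_square_displacement(traj, lags):
    # g(t) = <|r(t0 + t) - r(t0)|^2>, averaged over particles and time origins
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=-1))
                     for lag in lags])

rng = np.random.default_rng(3)
traj = np.cumsum(rng.standard_normal((1000, 50, 3)) * 0.1, axis=0)
print(mean_square_displacement(traj, [1, 10, 100, 500]))   # grows ~linearly for free diffusion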
Abstract:
The problem of the appropriate distribution of forces among the fingers of a four-fingered robot hand is addressed. The finger-object interactions are modelled as point frictional contacts, so the system is indeterminate and an optimal solution is required for controlling the forces acting on the object. A fast and efficient method for computing the grasping and manipulation forces is presented, in which the computation is based on the true model of the nonlinear friction cone of contact. Results are compared with previously employed methods that linearize the cone constraints and minimize the internal forces.
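A minimal sketch of a force-distribution problem with the true (nonlinear) friction-cone constraint handled directly by a general-purpose solver; the contact geometry, friction coefficient, and objective are invented for illustration, torque balance is omitted for brevity, and the paper's own fast dedicated method is not reproduced.

import numpy as np
from scipy.optimize import minimize

# Four point contacts on the side faces of an object; outward surface normals:
normals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]], dtype=float)
mu = 0.5                                   # assumed friction coefficient
w_ext = np.array([0.0, 0.0, -9.81])        # external force to resist (gravity on ~1 kg)

def world_force(f_local, n):
    # Local contact frame: z along the inward normal, x chosen vertically (illustrative)
    z = -n / np.linalg.norm(n)
    x = np.array([0.0, 0.0, 1.0])
    x = x - (x @ z) * z
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return f_local[0] * x + f_local[1] * y + f_local[2] * z

def objective(f):                           # minimise total squared contact force
    return np.sum(f ** 2)

def force_balance(f):                       # net contact force must cancel w_ext
    F = f.reshape(4, 3)
    return sum(world_force(F[i], normals[i]) for i in range(4)) + w_ext

def friction_cones(f):                      # mu * f_normal - ||f_tangential|| >= 0 at each contact
    F = f.reshape(4, 3)
    return np.array([mu * F[i, 2] - np.hypot(F[i, 0], F[i, 1]) for i in range(4)])

res = minimize(objective, np.tile([0.0, 0.0, 5.0], 4), method="SLSQP",
               constraints=[{"type": "eq", "fun": force_balance},
                            {"type": "ineq", "fun": friction_cones}])
print(res.success, res.x.reshape(4, 3).round(2))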
Abstract:
A new sparse kernel probability density function (pdf) estimator based on a zero-norm constraint is constructed using the classical Parzen window (PW) estimate as the target function. The so-called zero-norm of the parameters is used in order to achieve enhanced model sparsity, and minimization of an approximate function of the zero-norm is proposed. It is shown that, under a certain condition, the kernel weights of the proposed pdf estimator based on the zero-norm approximation can be updated using the multiplicative nonnegative quadratic programming algorithm. Numerical examples are employed to demonstrate the efficacy of the proposed approach.
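The "approximate function of the zero-norm" can be illustrated with a common smooth surrogate, sum_i (1 - exp(-alpha * |w_i|)), which counts a weight as present once it is appreciably larger than about 1/alpha; the exact approximation adopted in the paper may differ.

import numpy as np

def zero_norm_approx(w, alpha=30.0):
    # Smooth surrogate for ||w||_0: each term rises from 0 towards 1 as |w_i| grows
    return np.sum(1.0 - np.exp(-alpha * np.abs(w)))

w_dense = np.array([0.30, 0.25, 0.20, 0.15, 0.10])     # five active weights
w_sparse = np.array([0.60, 0.40, 0.00, 0.00, 0.00])    # two active weights
print(zero_norm_approx(w_dense), zero_norm_approx(w_sparse))   # about 4.9 vs about 2.0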