38 results for Fisher Hypothesis

in CentAUR: Central Archive University of Reading - UK


Relevance:

20.00%

Publisher:

Abstract:

This paper discusses the dangers inherent in attempting to simplify something as complex as development. It does this by exploring the Lynn and Vanhanen theory of deterministic development, which asserts that the varying levels of economic development seen between countries can be explained by differences in 'national intelligence' (national IQ). Assuming that intelligence is genetically determined, and as different races have been shown to have different IQs, they argue that economic development (measured as GDP/capita) is largely a function of race and that interventions to address imbalances can only have a limited impact. The paper presents the Lynn and Vanhanen case and critically discusses the data and analyses (linear regression) upon which it is based. It also extends the cause-effect basis of Lynn and Vanhanen's theory for economic development into human development by using the Human Development Index (HDI). It is argued that while there is nothing mathematically incorrect in their calculations, there are concerns over the data they employ. Even more fundamentally, it is argued that statistically significant correlations between the various components of the HDI and national IQ can occur via a host of cause-effect pathways, and hence the genetic determinism theory is far from proven. The paper ends by discussing the dangers involved in the use of over-simplistic measures of development as a means of exploring cause-effect relationships. While the creators of development indices such as the HDI have good intentions, simplistic indices can encourage simplistic explanations of under-development. (c) 2005 Elsevier B.V. All rights reserved.
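The paper's central statistical point, that a significant correlation between national IQ and development indicators can arise via many different cause-effect pathways, can be illustrated with a toy simulation. This is a minimal sketch assuming nothing from the paper's actual data: the variables, coefficients and sample size are all invented, and the confounder is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical confounder, e.g. some shared historical or institutional factor
z = rng.normal(size=n)

# Two variables with NO direct causal link between them: both are driven by z
x = 0.8 * z + rng.normal(scale=0.5, size=n)  # stand-in for a test-score measure
y = 0.8 * z + rng.normal(scale=0.5, size=n)  # stand-in for GDP/capita (standardised)

r = np.corrcoef(x, y)[0, 1]
print(f"correlation between x and y: {r:.2f}")  # strongly positive despite no direct link
```

A regression of y on x here would be statistically significant, yet intervening on x would change nothing, which is exactly the inferential trap described in the abstract.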

Relevance:

20.00%

Publisher:

Abstract:

Development geography has long sought to understand why inequalities exist and the best ways to address them. Dependency theory sets out an historical rationale for under-development based on colonialism and a legacy of a developed core and an under-developed periphery. Race is relevant in this theory only insofar as Europeans are white and the places they colonised were occupied by people with darker skin colour; there are no innate biological reasons why it happened in that order. However, a new theory for national inequalities proposed by Lynn and Vanhanen in a series of publications makes the case that poorer countries have that status because of a poorer genetic stock rather than an accident of history. They argue that IQ has a genetic basis and that IQ is linked to ability; thus races with a poorer IQ have less ability, and national IQ can be positively correlated with performance as measured by an indicator such as GDP/capita. Their thesis is one of despair, as little can be done to improve genetic stock significantly other than a programme of eugenics. This paper summarises and critiques the Lynn and Vanhanen hypothesis and the assumptions upon which it is based, and uses this analysis to show how the human desire to simplify in order to manage can be dangerous in development geography. While attention may naturally focus on the 'national IQ' variables as a proxy measure of 'innate ability', the assumption of GDP per capita as an indicator of 'success' and 'achievement' is far more readily accepted without criticism. The paper makes the case that the current vogue for indicators, indices and cause-effect can be tyrannical.

Relevance:

20.00%

Publisher:

Abstract:

Many families of interspersed repetitive DNA elements, including human Alu and LINE (Long Interspersed Element) elements, have been proposed to have accumulated through repeated copying from a single source locus: the "master gene." The extent to which a master gene model is applicable has implications for the origin, evolution, and function of such sequences. One repetitive element family for which a convincing case for a master gene has been made is the rodent ID (identifier) elements. Here we devise a new test of the master gene model and use it to show that mouse ID element sequences are not compatible with a strict master gene model. We suggest that a single master gene is rarely, if ever, likely to be responsible for the accumulation of any repeat family.
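The paper's actual test is not reproduced here, but the two copying regimes it contrasts can be sketched in a toy simulation; the sequence length, copy number and mutation rate below are arbitrary illustrative values, not estimates for ID elements.

```python
import numpy as np

rng = np.random.default_rng(0)
SEQ_LEN, N_COPIES, MU = 200, 40, 0.01

def mutate(seq):
    """Return a copy of seq with per-site substitution probability MU."""
    out = seq.copy()
    hits = rng.random(SEQ_LEN) < MU
    out[hits] = rng.integers(0, 4, hits.sum())
    return out

# Strict master-gene model: every repeat is copied from one drifting source locus
master = rng.integers(0, 4, SEQ_LEN)
master_copies = []
for _ in range(N_COPIES):
    master = mutate(master)              # the master itself evolves between copy events
    master_copies.append(mutate(master))

# Multiple-source model: any previously made repeat can seed the next copy
pool = [rng.integers(0, 4, SEQ_LEN)]
for _ in range(N_COPIES):
    pool.append(mutate(pool[rng.integers(len(pool))]))
multi_copies = pool[1:]

def mean_pairwise_distance(copies):
    """Average Hamming distance over all pairs of copies."""
    arr = np.array(copies)
    dists = [(arr[i] != arr[j]).sum()
             for i in range(len(arr)) for j in range(i + 1, len(arr))]
    return float(np.mean(dists))

print("master-gene model:    ", mean_pairwise_distance(master_copies))
print("multiple-source model:", mean_pairwise_distance(multi_copies))
```

From such simulated copies one can then compute diagnostic statistics, for example how variant-sharing patterns nest across copies, which is the kind of signal a test of the master gene model exploits.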

Relevance:

20.00%

Publisher:

Abstract:

Ulcerative colitis (UC) is characterized by impairment of the epithelial barrier and the formation of ulcer-type lesions, which result in local leaks and generalized alterations of mucosal tight junctions. Ultimately, this results in increased basal permeability. Although disruption of the epithelial barrier in the gut is a hallmark of inflammatory bowel disease and intestinal infections, it remains unclear whether barrier breakdown is an initiating event of UC or rather a consequence of an underlying inflammation, evidenced by increased production of proinflammatory cytokines. UC is less common in smokers, suggesting that the nicotine in cigarettes may ameliorate disease severity. The mechanism behind this therapeutic effect is still not fully understood, and indeed it remains unclear whether nicotine is the true protective agent in cigarettes. Nicotine is metabolized in the body into a variety of metabolites and can also be degraded to form various breakdown products. It is possible that these metabolites or degradation products are the true protective or curative agents. A greater understanding of the pharmacodynamics and kinetics of nicotine in relation to the immune system, and enhanced knowledge of gut permeability defects in UC, are required to establish the exact protective nature of nicotine and its metabolites in UC. This review suggests possible hypotheses for the protective mechanism of nicotine in UC, highlighting the relationship between gut permeability and inflammation, and indicates where in the pathogenesis of the disease nicotine may mediate its effect.

Relevance:

20.00%

Publisher:

Abstract:

The potential of clarification questions (CQs) to act as a form of corrective input for young children's grammatical errors was examined. Corrective responses were operationalized as those occasions when child speech shifted from erroneous to correct (E -> C) contingent on a clarification question. It was predicted that E -> C sequences would prevail over shifts in the opposite direction (C -> E), as can occur in the case of non-error-contingent CQs. This prediction was tested via a standard intervention paradigm, whereby every 60 s a sequence of two clarification requests (either specific or general) was introduced into conversation with 45 two- and four-year-old children. For 10 categories of grammatical structure, E -> C sequences predominated over their C -> E counterparts, with levels of E -> C shifts increasing after two clarification questions. Children were also more reluctant to repeat erroneous forms than their correct counterparts following the intervention of CQs. The findings provide support for Saxton's prompt hypothesis, which predicts that error-contingent CQs bear the potential to cue recall of previously acquired grammatical forms.
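The E -> C versus C -> E comparison at the heart of this design can be sketched as a simple sign test. The coded counts below are invented for illustration and are not the study's data.

```python
from math import comb

# Hypothetical coded data: each pair is (form before the CQ, form after the CQ),
# where 'E' = erroneous and 'C' = correct use of a grammatical structure
pairs = [('E', 'C')] * 18 + [('C', 'E')] * 5 + [('E', 'E')] * 7 + [('C', 'C')] * 30

e_to_c = sum(1 for a, b in pairs if a == 'E' and b == 'C')
c_to_e = sum(1 for a, b in pairs if a == 'C' and b == 'E')

# Sign test: under the null, a shift is equally likely in either direction,
# so e_to_c ~ Binomial(n_shifts, 0.5); compute one-sided P(X >= e_to_c)
n = e_to_c + c_to_e
p = sum(comb(n, k) for k in range(e_to_c, n + 1)) / 2 ** n
print(f"E->C: {e_to_c}, C->E: {c_to_e}, one-sided sign-test p = {p:.4f}")
```

Only the direction-of-shift pairs enter the test; stable E -> E and C -> C pairs are uninformative about the corrective effect.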

Relevance:

20.00%

Publisher:

Abstract:

Problematic trace-antecedent relations between deep and surface structure have been a dominant theme in accounts of sentence comprehension in agrammatism. We challenge this view and propose that the comprehension deficit in agrammatism for declarative sentences and wh-questions stems from impaired processing in logical form. We present new data from wh-questions and declarative sentences and advance a new hypothesis, which we call the set partition hypothesis. We argue that elements that signal set partition operations influence sentence comprehension, while trace-antecedent relations remain intact. (C) 2007 Elsevier Ltd. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

A greedy technique is proposed to construct parsimonious kernel classifiers using the orthogonal forward selection method and boosting, based on the Fisher ratio as a class separability measure. Unlike most kernel classification methods, which restrict kernel means to the training input data and use a fixed common variance for all the kernel terms, the proposed technique can tune both the mean vector and the diagonal covariance matrix of each individual kernel by incrementally maximizing the Fisher ratio for class separability. An efficient weighted optimization method based on boosting is developed to append kernels one by one in an orthogonal forward selection procedure. Experimental results obtained using this construction technique demonstrate that it offers a viable alternative to existing state-of-the-art kernel modeling methods for constructing sparse Gaussian radial basis function network classifiers that generalize well.
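A minimal sketch of forward selection by Fisher ratio follows. It simplifies the paper's method considerably: kernel centres are restricted to training points with one shared width, a regularised Fisher discriminant stands in for the boosting-based weight optimisation, and no orthogonalisation is performed. All data and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data (all values invented for illustration)
X = np.vstack([rng.normal([-1.0, 0.0], 0.6, (40, 2)),
               rng.normal([+1.0, 0.0], 0.6, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

def gaussian_kernel(X, centre, width=0.8):
    """Response of one spherical Gaussian kernel (one shared width for brevity)."""
    return np.exp(-((X - centre) ** 2).sum(axis=1) / (2 * width ** 2))

def fisher_ratio(scores, y):
    """Fisher ratio class-separability measure: (m0 - m1)^2 / (v0 + v1)."""
    s0, s1 = scores[y == 0], scores[y == 1]
    return (s0.mean() - s1.mean()) ** 2 / (s0.var() + s1.var() + 1e-12)

def separability(feats, y):
    """Fisher ratio of the best linear combination of the selected kernels,
    using a regularised Fisher discriminant in place of the boosting step."""
    F = np.column_stack(feats)
    m0, m1 = F[y == 0].mean(axis=0), F[y == 1].mean(axis=0)
    Sw = np.atleast_2d(np.cov(F[y == 0], rowvar=False) + np.cov(F[y == 1], rowvar=False))
    w = np.linalg.solve(Sw + 1e-6 * np.eye(F.shape[1]), m1 - m0)
    return fisher_ratio(F @ w, y)

# Greedy forward selection: at each step keep the candidate centre that most
# increases the class separability of the growing kernel model
selected, ratios = [], []
for step in range(4):
    base = [gaussian_kernel(X, np.array(c)) for c in selected]
    best = max((tuple(c) for c in X if tuple(c) not in selected),
               key=lambda c: separability(base + [gaussian_kernel(X, np.array(c))], y))
    selected.append(best)
    ratios.append(separability(base + [gaussian_kernel(X, np.array(best))], y))
    print(f"step {step + 1}: Fisher ratio = {ratios[-1]:.3f}")
```

In the paper's full machinery, each appended kernel would additionally have its mean vector and diagonal covariance tuned, and its weight set by the boosting-based optimisation rather than a closed-form discriminant.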

Relevance:

20.00%

Publisher:

Abstract:

Life-history theories of the early programming of human reproductive strategy stipulate that early rearing experience, including that reflected in infant-parent attachment security, regulates psychological, behavioral, and reproductive development. We tested the hypothesis that infant attachment insecurity, compared with infant attachment security, at the age of 15 months predicts earlier pubertal maturation. Focusing on 373 White females enrolled in the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development, we gathered data from annual physical exams from the ages of 9½ years to 15½ years and from self-reported age of menarche. Results revealed that individuals who had been insecure infants initiated and completed pubertal development earlier and had an earlier age of menarche compared with individuals who had been secure infants, even after accounting for age of menarche in the infants’ mothers. These results support a conditional-adaptational view of individual differences in attachment security and raise questions about the biological mechanisms responsible for the attachment effects we discerned.

Relevance:

20.00%

Publisher:

Abstract:

In this paper we discuss current work concerning appearance-based and CAD-based vision, two opposing vision strategies. CAD-based vision is geometry based, reliant on having complete object-centred models. Appearance-based vision builds view-dependent models from training images. Existing CAD-based vision systems that work with intensity images have all used one- and zero-dimensional features, for example lines, arcs, points and corners. We describe a system we have developed for combining these two strategies. Geometric models are extracted from a commercial CAD library of industry-standard parts. Surface appearance characteristics are then learnt automatically by observing actual object instances. This information is combined with the geometric information and used in hypothesis evaluation. The augmented description improves the system's robustness to texture, specularities and other artifacts which are hard to model with geometry alone, whilst maintaining the advantages of a geometric description.
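The idea of blending geometric and appearance evidence in hypothesis evaluation can be sketched as follows. This is a toy illustration, not the paper's system: the function names, the edge-point tolerance, the histogram representation and the equal weighting are all hypothetical choices.

```python
import numpy as np

def geometric_score(predicted_edges, observed_edges, tol=2.0):
    """Fraction of model-predicted edge points with an observed edge point
    within tol pixels (a toy stand-in for geometric hypothesis verification)."""
    hits = sum(np.linalg.norm(observed_edges - p, axis=1).min() < tol
               for p in predicted_edges)
    return hits / len(predicted_edges)

def appearance_score(model_hist, observed_hist):
    """Histogram intersection between the learnt surface appearance of the
    hypothesised part and the corresponding image region."""
    m = model_hist / model_hist.sum()
    o = observed_hist / observed_hist.sum()
    return float(np.minimum(m, o).sum())

def evaluate_hypothesis(pred_edges, obs_edges, model_hist, obs_hist, w=0.5):
    """Blend geometric fit with learnt appearance: geometry alone is easily
    fooled by texture and specularities, so appearance evidence is added."""
    return (w * geometric_score(pred_edges, obs_edges)
            + (1 - w) * appearance_score(model_hist, obs_hist))

# Toy usage: a square of model edge points against well- and badly-matching evidence
pred = np.array([[0., 0.], [10., 0.], [10., 10.], [0., 10.]])
good = evaluate_hypothesis(pred, pred + 0.5,
                           np.array([8., 1., 1.]), np.array([8., 1., 1.]))
bad = evaluate_hypothesis(pred, pred + 50.0,
                          np.array([8., 1., 1.]), np.array([1., 1., 8.]))
print(f"good hypothesis: {good:.2f}, bad hypothesis: {bad:.2f}")
```

The combined score lets a hypothesis that fits the geometry but contradicts the learnt appearance (or vice versa) be ranked below one consistent with both kinds of evidence.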