977 results for Classification theory
Abstract:
We survey the population genetic basis of social evolution, using a logically consistent set of arguments to cover a wide range of biological scenarios. We start by reconsidering Hamilton's (Hamilton 1964 J. Theoret. Biol. 7, 1-16 (doi:10.1016/0022-5193(64)90038-4)) results for selection on a social trait under the assumptions of additive gene action, weak selection and constant environment and demography. This yields a prediction for the direction of allele frequency change in terms of phenotypic costs and benefits and genealogical concepts of relatedness, which holds for any frequency of the trait in the population, and provides the foundation for further developments and extensions. We then allow for any type of gene interaction within and between individuals, strong selection and fluctuating environments and demography, which may depend on the evolving trait itself. We reach three conclusions pertaining to selection on social behaviours under broad conditions. (i) Selection can be understood by focusing on a one-generation change in mean allele frequency, a computation which underpins the utility of reproductive value weights; (ii) in large populations under the assumptions of additive gene action and weak selection, this change is of constant sign for any allele frequency and is predicted by a phenotypic selection gradient; (iii) under the assumptions of trait substitution sequences, such phenotypic selection gradients suffice to characterize long-term multi-dimensional stochastic evolution, with almost no knowledge about the genetic details underlying the coevolving traits. Having such simple results about the effect of selection regardless of population structure and type of social interactions can help to delineate the common features of distinct biological processes. Finally, we clarify some persistent divergences within social evolution theory, with respect to exactness, synergies, maximization, dynamic sufficiency and the role of genetic arguments.
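The condition for the direction of selection referred to here is Hamilton's rule; as a minimal worked form in standard textbook notation (not the survey's own symbols):

```latex
% Hamilton's rule: a costly helping allele increases in frequency when
% relatedness-weighted benefits exceed the direct cost.
% c: fecundity cost to the actor, b: fecundity benefit to the recipient,
% r: genealogical relatedness between actor and recipient.
\[
  -c + r\,b \;>\; 0
\]
```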
Abstract:
In recent years, interest in the potential of type-2 fuzzy sets for managing high levels of uncertainty in experts' subjective knowledge or in numerical information has concentrated on control and pattern classification systems. One of the main challenges in designing a type-2 fuzzy logic system is how to estimate the parameters of the type-2 fuzzy membership functions (T2MFs) and their footprint of uncertainty (FOU) from imperfect and noisy datasets. This paper presents an automatic approach for learning and tuning Gaussian interval type-2 membership functions (IT2MFs) with application to multi-dimensional pattern classification problems. T2MFs and their FOUs are tuned according to the uncertainties in the training dataset by a combination of genetic algorithm (GA) and cross-validation techniques. In our GA-based approach, the chromosome has fewer genes than in other GA methods and chromosome initialization is more precise. The proposed approach is applied to an interval type-2 fuzzy logic system (IT2FLS) for the problem of nodule classification in a lung Computer Aided Detection (CAD) system. The designed IT2FLS is compared with its type-1 fuzzy logic system (T1FLS) counterpart. The results demonstrate that the IT2FLS outperforms the T1FLS by more than 30% in terms of classification accuracy.
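As a rough sketch of the objects being tuned (one common parameterization, a Gaussian primary membership function with an uncertain mean; an illustration, not the authors' implementation), an interval type-2 membership function is described by lower and upper membership bounds whose gap forms the FOU:

```python
import numpy as np

def gaussian(x, m, sigma):
    """Primary Gaussian membership function."""
    return np.exp(-0.5 * ((x - m) / sigma) ** 2)

def it2_gaussian_mf(x, m1, m2, sigma):
    """Interval type-2 Gaussian MF with uncertain mean m in [m1, m2].

    Returns (lower, upper) membership bounds; the band between them
    is the footprint of uncertainty (FOU).
    """
    x = np.asarray(x, dtype=float)
    # Upper MF: 1 between the two means, Gaussian tails outside.
    upper = np.where(x < m1, gaussian(x, m1, sigma),
             np.where(x > m2, gaussian(x, m2, sigma), 1.0))
    # Lower MF: the smaller of the two boundary Gaussians.
    lower = np.minimum(gaussian(x, m1, sigma), gaussian(x, m2, sigma))
    return lower, upper

# Example: membership bounds for one input value.
lo, up = it2_gaussian_mf(np.array([0.3]), m1=0.4, m2=0.6, sigma=0.15)
```

A GA of the kind described would then search over the (m1, m2, sigma) genes for each input dimension, using cross-validated classification accuracy as the fitness.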
Abstract:
Cannabis use among adolescents and young adults has become a major public health challenge. Several European countries are currently developing short screening instruments to identify 'problematic' forms of cannabis use in general population surveys. One such instrument is the Cannabis Use Disorders Identification Test (CUDIT), a 10-item questionnaire based on the Alcohol Use Disorders Identification Test. Previous research found that some CUDIT items did not perform well psychometrically. In the interests of improving the psychometric properties of the CUDIT, this study replaces the poorly performing items with new items that specifically address cannabis use. Analyses are based on a sub-sample of 558 recent cannabis users from a representative population sample of 5722 individuals (aged 13-32) who were surveyed in the 2007 Swiss Cannabis Monitoring Study. Four new items were added to the original CUDIT. Psychometric properties of all 14 items, as well as the dimensionality of the supplemented CUDIT were then examined using Item Response Theory. Results indicate the unidimensionality of CUDIT and an improvement in its psychometric performance when three original items (usual hours being stoned; injuries; guilt) are replaced by new ones (motives for using cannabis; missing out leisure time activities; difficulties at work/school). However, improvements were limited to cannabis users with a high problem score. For epidemiological purposes, any further revision of CUDIT should therefore include a greater number of 'easier' items.
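The abstract does not state which IRT model was fitted; as a generic illustration of where an item's "easiness" enters, the two-parameter logistic model gives the probability of endorsing item i at latent problem severity theta as:

```latex
\[
  P(X_i = 1 \mid \theta) \;=\; \frac{1}{1 + e^{-a_i(\theta - b_i)}}
\]
% a_i: item discrimination; b_i: item location (difficulty), so
% "easier" items are those with lower b_i, endorsed at lower severity.
```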
Abstract:
We estimate a forward-looking monetary policy reaction function for the postwar United States economy, before and after Volcker's appointment as Fed Chairman in 1979. Our results point to substantial differences in the estimated rule across periods. In particular, interest rate policy in the Volcker-Greenspan period appears to have been much more sensitive to changes in expected inflation than in the pre-Volcker period. We then compare some of the implications of the estimated rules for the equilibrium properties of inflation and output, using a simple macroeconomic model, and show that the Volcker-Greenspan rule is stabilizing.
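The baseline reaction function in this literature, shown here in its generic forward-looking form as an illustration (the estimated specification may add interest rate smoothing and other terms), sets a target rate responding to expected inflation and the output gap:

```latex
\[
  r_t^{*} \;=\; \bar{r} \;+\; \beta\,\bigl(E[\pi_{t+1}\mid\Omega_t] - \pi^{*}\bigr)
           \;+\; \gamma\,E[x_t\mid\Omega_t]
\]
% \bar{r}: long-run nominal rate, \pi^*: inflation target, x_t: output gap,
% \Omega_t: the central bank's information set. A response \beta > 1 to
% expected inflation is what is usually meant by a "stabilizing" rule.
```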
Abstract:
Standard economic analysis holds that labor market rigidities are harmful for job creation and typically increase unemployment. But many orthodox reforms of the labor market have proved difficult to implement because of political opposition. For these reasons it is important to explain why we observe such regulations. In this paper I outline a theory of how they may arise and why they fit together. This theory is fully developed in a forthcoming book (Saint-Paul (2000)), to which the reader is referred for further details.
Abstract:
Returns to scale to capital and the strength of capital externalities play a key role for the empirical predictions and policy implications of different growth theories. We show that both can be identified with individual wage data and implement our approach at the city-level using US Census data on individuals in 173 cities for 1970, 1980, and 1990. Estimation takes into account fixed effects, endogeneity of capital accumulation, and measurement error. We find no evidence for human or physical capital externalities and decreasing aggregate returns to capital. Returns to scale to physical and human capital are around 80 percent. We also find strong complementarities between human capital and labor and substantial total employment externalities.
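A stylized version of the identification idea, with synthetic data and hypothetical variable names (the paper's actual estimator additionally handles city fixed effects, endogenous capital accumulation, and measurement error, which this sketch omits): regress individual log wages on own characteristics and on city-level average schooling, whose coefficient would pick up human capital externalities.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for an individual-level Census extract
# (hypothetical columns: log_wage, schooling, experience, city, year).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "schooling": rng.integers(8, 21, n),
    "experience": rng.integers(0, 40, n),
    "city": rng.integers(0, 173, n),
    "year": rng.choice([1970, 1980, 1990], n),
})
# City-year average schooling proxies the local human capital stock.
df["city_avg_schooling"] = df.groupby(["city", "year"])["schooling"].transform("mean")
df["log_wage"] = (0.08 * df["schooling"] + 0.03 * df["experience"]
                  - 0.0005 * df["experience"] ** 2
                  + 0.0 * df["city_avg_schooling"]   # no externality built into the fake data
                  + rng.normal(0, 0.4, n))

# OLS with year dummies and city-clustered errors; fixed effects and
# instruments would be needed to match the paper's identification strategy.
ols = smf.ols("log_wage ~ schooling + experience + I(experience**2)"
              " + city_avg_schooling + C(year)", data=df).fit(
                  cov_type="cluster", cov_kwds={"groups": df["city"]})
print(ols.params["city_avg_schooling"])
```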
Abstract:
BACKGROUND AND PURPOSE: MCI was recently subdivided into sd-aMCI, sd-fMCI, and md-aMCI. The current investigation aimed to discriminate between MCI subtypes by using DTI. MATERIALS AND METHODS: Sixty-six prospective participants were included: 18 with sd-aMCI, 13 with sd-fMCI, and 35 with md-aMCI. Statistics included group comparisons using TBSS and individual classification using SVMs. RESULTS: The group-level analysis revealed a decrease in FA in md-aMCI versus sd-aMCI in an extensive bilateral, right-dominant network, and a more pronounced reduction of FA in md-aMCI compared with sd-fMCI in right inferior fronto-occipital fasciculus and inferior longitudinal fasciculus. The comparison between sd-fMCI and sd-aMCI, as well as the analysis of the other diffusion parameters, yielded no significant group differences. The individual-level SVM analysis provided discrimination between the MCI subtypes with accuracies around 97%. The major limitation is the relatively small number of cases of MCI. CONCLUSIONS: Our data show that, at the group level, the md-aMCI subgroup has the most pronounced damage in white matter integrity. Individually, SVM analysis of white matter FA provided highly accurate classification of MCI subtypes.
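A minimal sketch of the individual-level step (synthetic features and generic hyperparameters; the study's actual TBSS-based FA feature extraction and validation scheme are not reproduced here): train an SVM on white matter FA features and estimate accuracy by leave-one-out cross-validation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, LeaveOneOut

# Synthetic stand-in for skeleton FA features (66 subjects, 3 MCI subtypes).
rng = np.random.default_rng(0)
X = rng.normal(size=(66, 500))            # FA values per subject
y = np.repeat([0, 1, 2], [18, 13, 35])    # sd-aMCI, sd-fMCI, md-aMCI labels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("leave-one-out accuracy:", acc.mean())
```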
Abstract:
Alan S. Milward was an economic historian who developed an implicit theory of historical change. His interpretation, which was neither liberal nor Marxist, posited that social, political, and economic change, for it to be sustainable, had to be a gradual process rather than one resulting from a sudden, cataclysmic revolutionary event occurring in one sector of the economy or society. Benign change depended much less on natural resource endowment or technological developments than on the ability of state institutions to respond to changing political demands from within each society. State bureaucracies were fundamental to formulating those political demands and advising politicians of ways to meet them. Since each society was different, there was no single model of development to be adopted or which could be imposed successfully by one nation-state on others, either through force or through foreign aid programs. Nor could development be promoted simply by copying the model of a more successful economy. Each nation-state had to find its own response to the political demands arising from within its society. Integration occurred when a number of nation-states shared similar political objectives which they could not meet individually but could meet collectively. It was not simply the result of their increasing interdependence. It was how and whether nation-states responded to these domestic demands which determined the nature of historical change.
Abstract:
Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. The paper describes the use of kernel methods for processing large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (pollution of soil by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are discussed as well.
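As one concrete illustration of the listed tasks (spatial mapping of a continuous pollution variable; synthetic coordinates and values, not the actual monitoring data), an RBF-kernel support vector regression can interpolate point measurements onto a prediction grid:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic monitoring network: (x, y) coordinates and a measured value.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(200, 2))
values = np.sin(coords[:, 0] / 15) + 0.1 * rng.normal(size=200)

# RBF-kernel SVR used as a kernel-based spatial interpolator.
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
svr.fit(coords, values)

# Predict on a regular grid to produce the map.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
pollution_map = svr.predict(grid).reshape(gx.shape)
```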
Abstract:
The mechanisms in the Nash program for cooperative games are made compatible with the framework of the theory of implementation. This is done through a reinterpretation of the characteristic function that avoids feasibility problems, thereby allowing an analysis that focuses exclusively on the payoff space. In this framework, we show that the core is the only major cooperative solution that is Maskin monotonic. Thus, implementation of most cooperative solutions must rely on refinements of the Nash equilibrium concept (like most papers in the Nash program do). Finally, the mechanisms in the Nash program are adapted into the model.
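For reference, the monotonicity property invoked here is the standard Maskin condition, stated in general social-choice notation (the paper works in payoff space, so its formal statement differs):

```latex
% A solution F is Maskin monotonic if a selected outcome remains selected
% whenever it does not fall in any agent's ranking.
\[
  a \in F(R) \;\text{ and }\;
  \bigl[\,\forall i \;\forall b:\; a \,R_i\, b \;\Rightarrow\; a \,R_i'\, b\,\bigr]
  \;\Longrightarrow\; a \in F(R')
\]
```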
Abstract:
This paper presents 3-D brain tissue classification schemes using three recent promising energy minimization methods for Markov random fields: graph cuts, loopy belief propagation and tree-reweighted message passing. The classification is performed using the well-known finite Gaussian mixture Markov random field model. Results from the above methods are compared with the widely used iterated conditional modes algorithm. The evaluation is performed on a dataset containing simulated T1-weighted MR brain volumes with varying noise and intensity non-uniformities. The comparisons are performed in terms of energies as well as against ground truth segmentations, using various quantitative metrics.
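As a small sketch of the baseline being compared against (a synchronous-update variant of iterated conditional modes on a 2-D slice, with made-up class parameters; not the paper's implementation or the graph-cut, belief-propagation, or message-passing solvers), minimizing a Gaussian mixture data term plus a Potts smoothness term looks like this:

```python
import numpy as np

def icm_segmentation(img, means, sigmas, beta=1.0, n_iter=10):
    """Synchronous-update iterated conditional modes for a finite Gaussian
    mixture MRF labelling of a 2-D slice (a 3-D volume only needs the
    neighbour list extended to 6 neighbours)."""
    K = len(means)
    # Unary (data) term: negative Gaussian log-likelihood per class.
    unary = np.stack([0.5 * ((img - m) / s) ** 2 + np.log(s)
                      for m, s in zip(means, sigmas)], axis=-1)
    labels = unary.argmin(axis=-1)                # maximum-likelihood start
    for _ in range(n_iter):
        padded = np.pad(labels, 1, mode="edge")
        neighbours = [padded[:-2, 1:-1], padded[2:, 1:-1],
                      padded[1:-1, :-2], padded[1:-1, 2:]]
        # Potts pairwise term: beta * (number of disagreeing 4-neighbours).
        pairwise = np.zeros_like(unary)
        for k in range(K):
            for nb in neighbours:
                pairwise[..., k] += beta * (nb != k)
        labels = (unary + pairwise).argmin(axis=-1)
    return labels

# Toy example: segment a noisy two-class image.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64), dtype=int)
truth[:, 32:] = 1
noisy = truth + rng.normal(0.0, 0.6, truth.shape)
seg = icm_segmentation(noisy, means=[0.0, 1.0], sigmas=[0.6, 0.6], beta=1.5)
```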
Abstract:
A method to evaluate cyclical models that requires knowledge of neither the DGP nor the exact specification of the aggregate decision rules is proposed. We derive robust restrictions in a class of models; some are used to identify structural shocks in the data and others to evaluate the class or to contrast sub-models. The approach has good properties, even in small samples and when the class of models is misspecified. The method is used to sort out the relevance of a particular friction (the presence of rule-of-thumb consumers) in a standard class of models.
Abstract:
Introduction: Quantitative measures of the degree of lumbar spinal stenosis (LSS), such as the antero-posterior diameter of the canal or the cross-sectional area of the dural sac, vary widely and do not correlate with clinical symptoms or with the results of surgical decompression. In an effort to improve the quantification of stenosis, we have developed a grading system based on the morphology of the dural sac and its contents as seen on T2 axial images. The grading comprises seven categories ranging from normal to the most severe stenosis and takes into account the ratio of rootlet to CSF content. Material and methods: Fifty T2 axial MRI images taken at disc level from twenty-seven symptomatic lumbar spinal stenosis patients who underwent decompressive surgery were classified into the seven categories by five observers and reclassified 2 weeks later by the same investigators. Intra- and inter-observer reliability of the classification were assessed using Cohen's and Fleiss' kappa statistics, respectively. Results: Overall, the morphology grading system itself was well adopted by the observers. Its successful application is strongly influenced by the identification of the dural sac. The average intra-observer Cohen's kappa was 0.53 ± 0.2. The inter-observer Fleiss' kappa was 0.38 ± 0.02 in the first rating and 0.3 ± 0.03 in the second rating repeated after two weeks. Discussion: In this attempt, the training of the observers was limited to an introduction to the general idea of the morphology grading system and one example MRI image per category. Identifying the extent of the dural sac can be difficult in the absence of a complete T1/T2 MRI series, as was the case here. The similarity of CSF to fat possibly present on T2 images was the main reason for mismatches in assigning cases to a category. The Fleiss' kappa values of the five observers indicate fair agreement, and the proposed morphology grading system is promising.
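The agreement statistics used here are standard and straightforward to reproduce; a minimal sketch with made-up gradings (not the study data), using scikit-learn for Cohen's kappa and statsmodels for Fleiss' kappa:

```python
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import fleiss_kappa, aggregate_raters

# Hypothetical gradings of 10 axial images into the seven morphology
# categories (encoded 0-6) by the same observer two weeks apart.
rating_1 = [0, 2, 3, 3, 5, 6, 1, 2, 4, 4]
rating_2 = [0, 2, 3, 4, 5, 6, 1, 1, 4, 4]
print("Cohen's kappa:", cohen_kappa_score(rating_1, rating_2))

# Fleiss' kappa across several observers: rows are cases, columns are raters.
ratings = [[0, 0, 1], [2, 2, 2], [3, 4, 3], [5, 5, 6], [1, 1, 1]]
table, _ = aggregate_raters(ratings)
print("Fleiss' kappa:", fleiss_kappa(table))
```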